Google’s AI-driven Search Generative Experience has been generating results that are downright weird and evil, e.g., extolling slavery’s positives.

  • HughJanus@lemmy.ml · +67/−1 · 11 months ago

    People think of AI as some sort of omniscient being. It’s just software spitting back the data that it’s been fed. It has no way to tell true information from false information because it doesn’t actually know anything.

    • baatliwala@lemmy.world · +10/−2 · 11 months ago

      And then when you do ask humans to help AI parse true information, people cry about censorship.

      • HughJanus@lemmy.ml · +1 · 11 months ago

        Well, it’s less difficult for humans to parse the truth, but it’s still difficult.

      • Chailles@lemmy.world · +2/−1 · 11 months ago

        Being what is essentially the Arbiter of what counts as Truth or Morally Acceptable is always going to be highly controversial.

    • Hamartiogonic@sopuli.xyz · +2 · edited · 11 months ago

      Even though our current models can be really complex, they are still very, very far from being the elusive general-purpose AI that sci-fi authors have been writing about for decades (if not centuries). GPT and others like it are merely large language models, so don’t expect them to handle anything other than language.

      Humans think of the world through language, so it’s very easy to be deceived by an LLM into thinking that you’re actually talking to a GPAI. That misconception is an inherent flaw of the human mind. Language comes so naturally to us, and we often use it as a shortcut to assess the intelligence of other people. Generally speaking that works reasonably well, but an LLM can exploit that feature of human behavior to appear smarter than it really is.

    • hornedfiend@sopuli.xyz · +3/−2 · 11 months ago

      What’s more worrisome are the sources it used to feed itself. Dangerous times for the younger generations, as they are more accustomed to using such tech.

      • HughJanus@lemmy.ml · +5/−1 · 11 months ago

        What’s more worrisome are the sources it used to feed itself.

        It’s usually just the entirety of the internet in general.

          • HughJanus@lemmy.ml · +8 · edited · 11 months ago

            The internet is full of both the best and the worst of humanity. Much like humanity itself.