I’m sure this is a common topic, but things are moving pretty fast these days.

With bots looking more human than ever, I’m wondering what’s going to happen once everyone starts using them to spam the platform. Lemmy, with its simple username/text layout, seems to offer the perfect ground for bots: verifying that someone is real would take scrolling through all their comments and reading them carefully, one by one.

  • Lmaydev@programming.dev · 1 year ago

    It’s because it isn’t really fed facts. Words are converted into numbers, and it learns the relationships between them.

    It has absolutely no understanding of facts, just how words are used with other words.

    It’s not like it’s looking up things in a database. It’s taking the provided words and applying a mathematical formula to create new words.
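    The idea in the comment above can be sketched with a toy example. This is not any real model’s code; the tiny hand-made vectors and the word choices are purely illustrative assumptions. Real models learn embeddings with thousands of dimensions from training text, but the principle is the same: the model only “knows” how word vectors relate to each other, not facts looked up in a database.

    ```python
    import math

    # Hypothetical hand-made word embeddings (assumption for illustration);
    # real models learn these numbers during training.
    embeddings = {
        "king":  [0.9, 0.8, 0.1],
        "queen": [0.9, 0.7, 0.9],
        "apple": [0.1, 0.9, 0.4],
    }

    def cosine_similarity(a, b):
        # Measures how "related" two word vectors are
        # (1.0 = pointing in exactly the same direction).
        dot = sum(x * y for x, y in zip(a, b))
        norm_a = math.sqrt(sum(x * x for x in a))
        norm_b = math.sqrt(sum(x * x for x in b))
        return dot / (norm_a * norm_b)

    # "king" is closer to "queen" than to "apple" purely because of
    # the numbers -- no database of facts is consulted anywhere.
    print(cosine_similarity(embeddings["king"], embeddings["queen"]))
    print(cosine_similarity(embeddings["king"], embeddings["apple"]))
    ```

    The point is that similarity here is just arithmetic over vectors: the system can tell you which words tend to appear in related contexts without ever representing whether a statement about them is true.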

    • RoundSparrow@lemmy.ml · 1 year ago

      > It’s because it isn’t fed facts really.

      That’s an interesting theory of why it works that way. Personally, I think rights usage, as in copyright, is a huge problem for OpenAI and Microsoft (Bing)… they are trying to avoid paying for the training material they use, and if they accurately quoted source material, they would run into the expensive licensing costs they are trying to avoid.

      !aicopyright@lemm.ee