Two authors sued OpenAI, accusing the company of violating copyright law. They say OpenAI used their work to train ChatGPT without their consent.

  • ThoughtGoblin@lemm.ee · 1 year ago

    Not really, though it’s hard to know exactly what is or is not encoded in the network. It likely retains the more salient and highly referenced content, since those aspects come up more often in its training set. But reproducing entire works is basically impossible, just because of the sheer ratio between the size of the training data and the size of the resulting model. Not to mention that GPT’s mode of operation mostly discourages long-form rote memorization. It’s a statistical model, after all, and statistics are the enemy of exact, “objective” state.
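
    For a rough sense of that ratio, here’s a back-of-envelope sketch in Python. The figures are ballpark public GPT-3 numbers (175B parameters, roughly 300B training tokens, a ~50k-token vocabulary) used purely as stand-in assumptions, since the models behind ChatGPT aren’t documented in that detail:

    ```python
    import math

    # Back-of-envelope capacity estimate, using rough public GPT-3 figures
    # (175B parameters, ~300B training tokens, ~50k-token vocabulary).
    # These are stand-in assumptions, not ChatGPT specifics.
    params = 175e9
    bits_per_param = 16            # fp16 weights
    tokens_seen = 300e9
    vocab_size = 50_257

    bits_per_training_token = (params * bits_per_param) / tokens_seen
    bits_for_verbatim_token = math.log2(vocab_size)

    print(f"~{bits_per_training_token:.1f} bits of weight capacity per training token")
    print(f"~{bits_for_verbatim_token:.1f} bits needed just to record one token id verbatim")
    # Fewer bits per token than literal storage would need, and that same
    # capacity also has to hold grammar, facts, and everything else.
    ```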

    Furthermore, GPT isn’t coherent enough for long-form content. With its small context window, it just has trouble remembering something as big as a book. And since it doesn’t have access to any “senses” beyond text broken into words, concepts like pages or “how many” give it issues.
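
    The context-window point is easy to put numbers on too. A quick sketch, assuming a 4,096-token window (the original gpt-3.5-turbo size) and about 1.3 tokens per English word, both of which are ballpark assumptions:

    ```python
    # How much of a novel fits in one context window?
    # Assumes a 4,096-token window and ~1.3 tokens per English word;
    # both figures are ballpark assumptions.
    context_window_tokens = 4_096
    tokens_per_word = 1.3
    novel_words = 80_000           # roughly a shorter novel

    novel_tokens = novel_words * tokens_per_word
    windows_needed = novel_tokens / context_window_tokens

    print(f"novel: ~{novel_tokens:,.0f} tokens")
    print(f"spans ~{windows_needed:.0f} separate context windows")
    # The model only attends to one window at a time, so it never "sees"
    # a whole book at once, let alone page boundaries or page counts.
    ```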

    None of the leaked prompts really mention “don’t reveal copyrighted information” either, so it seems the creators aren’t particularly concerned, which you’d think they would be if it did have this tendency. It’s more likely to make up entire pieces of content from the summaries it does remember.

    • trial_and_err@lemmy.world · 1 year ago (edited)

      Have you tried instructing ChatGPT?

      I’ve tried:

      “Act as an e-book reader. Start with the first page of Harry Potter and the Philosopher’s Stone.”

      The first pages checked out, at least. I just tried again, but responses are coming back extremely slowly at the moment, so I can’t verify it right now. It now appears to stop after the heading; that definitely wasn’t the case before, when I was able to browse through the pages.
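
      If anyone wants to repeat the test without fighting the web UI, here’s a minimal sketch against the API using the official openai Python client (v1+). The model name is an assumption, you need an OPENAI_API_KEY set, and the output will vary from run to run depending on whatever refusal behaviour is active at the time:

      ```python
      # Minimal sketch of the "act as an e-book reader" test via the API.
      # Assumes the official `openai` client (v1+) and an OPENAI_API_KEY
      # in the environment; model choice and behaviour are assumptions.
      from openai import OpenAI

      client = OpenAI()

      response = client.chat.completions.create(
          model="gpt-3.5-turbo",
          messages=[
              {
                  "role": "user",
                  "content": (
                      "Act as an e-book reader. Start with the first page of "
                      "Harry Potter and the Philosopher's Stone."
                  ),
              }
          ],
      )

      print(response.choices[0].message.content)
      ```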

      It may be a statistical model, but ultimately nothing prevents that model from overfitting, i.e. memorizing its training data.
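
      As a toy illustration (nothing to do with GPT’s actual architecture, just a hypothetical character-level n-gram model with far more capacity than data), a statistical model fit to a single short passage will happily reproduce it verbatim:

      ```python
      import random
      from collections import defaultdict

      # Toy memorization-by-overfitting demo: a character-level n-gram
      # model fit to one short passage. With this much context relative to
      # so little data, every seen context has exactly one continuation,
      # so "sampling" just replays the training text verbatim. Not how GPT
      # works internally; it only shows that a statistical model can
      # memorize its training data outright.
      ORDER = 8  # characters of context, huge relative to one sentence
      text = "It was a bright cold day in April, and the clocks were striking thirteen."

      counts = defaultdict(lambda: defaultdict(int))
      for i in range(len(text) - ORDER):
          counts[text[i:i + ORDER]][text[i + ORDER]] += 1

      def sample(seed, length):
          out = seed
          while len(out) < length and out[-ORDER:] in counts:
              options = counts[out[-ORDER:]]
              chars, weights = zip(*options.items())
              out += random.choices(chars, weights=weights)[0]
          return out

      print(sample(text[:ORDER], len(text)))  # prints the passage verbatim
      ```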

      • ThoughtGoblin@lemm.ee · 1 year ago

        I use it all day at my job now, ironically in a specialization that’s more likely to overfit.

        “It may be a statistical model, but ultimately nothing prevents that model from overfitting, i.e. memorizing its training data.”

        This seems to imply that entire books not only got accidentally downloaded and slipped past the automated copyright checks, but that they showed up so often that the model saw the same text enough times to overwhelm other content and bake an entire book into its weights, without error and at great opportunity cost. And that it was rewarded for doing so.

      • McArthur@lemmy.world · 1 year ago

        Wait… isn’t that the correct response though? I mean, if I ask an AI to produce something copyright infringing, it should do it, for example reproducing Harry Potter. The issue is when it’s asked to produce something new (e.g. a story about wizards living secretly in the modern world): does it infringe on copyright without telling you? That’s certainly a harder question to answer.

        • ffhein@lemmy.world · 1 year ago

          I think they’re seeing this as a traditional copyright infringement issue, i.e. they don’t want anyone to be able to make copies of their work intentionally either.