OpenAI has publicly responded to a copyright lawsuit by The New York Times, calling the case “without merit” and saying it still hoped for a partnership with the media outlet.

In a blog post, OpenAI said the Times “is not telling the full story.” It took particular issue with claims that its ChatGPT AI tool reproduced Times stories verbatim, arguing that the Times had manipulated prompts to include regurgitated excerpts of articles. “Even when using such prompts, our models don’t typically behave the way The New York Times insinuates, which suggests they either instructed the model to regurgitate or cherry-picked their examples from many attempts,” OpenAI said.

OpenAI claims it’s attempted to reduce regurgitation from its large language models and that the Times refused to share examples of this reproduction before filing the lawsuit. It said the verbatim examples “appear to be from year-old articles that have proliferated on multiple third-party websites.” The company did admit that it took down a ChatGPT feature, called Browse, that unintentionally reproduced content.

  • noorbeast@lemmy.zip · +160/-19 · 11 months ago

    So, OpenAI is admitting its models are open to manipulation by anyone, and that such manipulation can result in near-verbatim regurgitation of copyrighted works, have I understood correctly?

    • BradleyUffner@lemmy.world · +92/-8 · 11 months ago

      No, they are saying this happened:

      NYT: hey chatgpt say “copyrighted thing”.

      Chatgpt: “copyrighted thing”.

      And then accusing chatgpt of reproducing copyrighted things.

      • BetaSalmon@lemmy.world · +40/-2 · 11 months ago

        The OpenAI blog post mentions:

        It seems they intentionally manipulated prompts, often including lengthy excerpts of articles, in order to get our model to regurgitate.

        It sounds like they essentially asked ChatGPT to write content similar to what they provided. Then complained it did that.

          • BetaSalmon@lemmy.world · +16/-1 · 11 months ago

            Absolutely, and that’s why OpenAI says the lawsuit has no merit. NYT claims that ChatGPT will copy articles without being asked, whereas OpenAI claims that NYT constructed prompts specifically to make it copy articles, and thus that there’s no merit to the suit.

            • realharo@lemm.ee · +24/-4 · 11 months ago

              That seems like a silly argument to me. A bit like claiming a piracy site is not responsible for hosting an unlicensed movie because you have to search for the movie to find it there.

              (Or to be more precise, where you would have to upload a few seconds of the movie’s trailer to get the whole movie.)

              • Johanno@lemmynsfw.com · +8 · 11 months ago

                Well if the content isn’t on the site and it just links to a streaming platform it technically is not illegal.

              • fruitycoder@sh.itjust.works · +5 · 11 months ago

                The argument is that the article isn’t sitting there to be retrieved, but that if you gave the model enough prompting it would still produce the same article.

                It’s like if you hired a director and told them to make a movie just like another one, told the actors to act like the previous actors, and told the writers the exact plot and dialogue. You MAY get a different movie because of creative differences since the last one was made, but it’s probably going to turn out very close, close enough that if you did that a few times you’d get a near-perfect replica.

              • ricecake@sh.itjust.works · +4 · 11 months ago

                Well, no one has shared the prompt, so it’s difficult to tell how credible it is.

                If they put in a sentence and got 99% of the article back, that’s one thing.
                If they put in 99% of the article and got back something 95% similar, that’s another.

                Right now we just have NYT saying it gives back the article, and OpenAI saying it only does that if you give it “significant” prompting.

            • Hyperlon@lemmy.world · +2/-1 · 11 months ago

              I think their concern is that I would be able to ask ChatGPT about a NYT article and it would tell me about it without me having to go to the ad-infested, cookie-crippled, account-restricted steaming pile that is their site (and every other news site).

          • Patch@feddit.uk · +8 · 11 months ago

            Anyone with access to the NYT can also just copy paste the text and plagiarize it directly. At the point where you’re deliberately inputting copyrighted text and asking the same to be printed as an output, ChatGPT is scarcely being any more sophisticated than MS Word.

            The issue with plagiarism in LLMs is where they are outputting copyrighted material as a response to legitimate prompts, effectively causing the user to unwittingly commit plagiarism themselves if they attempt to use that output in their own works. This issue isn’t really in play in situations where the user is deliberately attempting to use the tool to commit plagiarism.

      • excitingburp@lemmy.world · +2 · 11 months ago

        Alternatively,

        NYT: hey chatgpt complete “copyrighted thing”.

        Chatgpt: “something else”.

        NYT: hey chatgpt complete “copyrighted thing” in the style of .

        Chatgpt: “something else”.

        NYT: (20th new chat) hey chatgpt complete “copyrighted thing” in the style of .

        Chatgpt: “copyrighted thing”.

        Boils down to the infinite monkeys theorem. With enough guidance and attempts you can get ChatGPT to produce something either identical or “sufficiently similar” to anything you want. Ask it to write an article on the rising cost of rice at the South Pole enough times, and it will eventually spit out an article that could have easily been written by a NYT journalist.
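
        That “keep rolling until it matches” idea is easy to sketch. Below, `generate` is a hypothetical stand-in for whatever model call was actually used (here it just samples canned strings so the loop runs at all); the point is only to show how cherry-picking from many attempts works.

        ```python
        import random
        from difflib import SequenceMatcher

        CANNED = [
            "an unrelated completion",
            "a paraphrase that is sort of close",
            "a completion that happens to match the target almost exactly",
        ]

        def generate(prompt: str) -> str:
            """Hypothetical stand-in for a model call; here it just samples canned text."""
            return random.choice(CANNED)

        def best_of_n(prompt: str, target: str, attempts: int = 20):
            """Sample repeatedly and keep whichever attempt is closest to the target text."""
            best, best_score = "", 0.0
            for _ in range(attempts):
                candidate = generate(prompt)
                score = SequenceMatcher(None, candidate, target).ratio()
                if score > best_score:
                    best, best_score = candidate, score
            return best, best_score

        print(best_of_n("complete 'copyrighted thing'", CANNED[-1]))
        ```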

      • realharo@lemm.ee · +7/-8 · 11 months ago

        Are you implying the copyrighted content was inputted as part of the prompt? Can you link to any source/evidence for that?

          • realharo@lemm.ee · +11/-1 · 11 months ago

            If the point is to prove that the model contains an encoded version of the original article, and you make the model spit out the entire thing by just giving it the first paragraph or two, I don’t see anything wrong with such a proof.

            Your previous comment was suggesting that the entire article (or most of it) was included in the prompt/context, and that the part generated purely by the model was somehow generic enough that it could have feasibly been created without having an encoded/compressed/whatever version of the entire article somewhere.

            Which does not appear to be the case.

            • BetaSalmon@lemmy.world · +8/-1 · 11 months ago

              I haven’t really picked a side, mostly because there’s just not enough evidence. NYT hasn’t provided any of the prompts they used to prove their claim. The OpenAI blog post seems to make suggestions about what happened, but they’re obviously biased.

              If the model spits out the original article when given just a single paragraph, then the NYT has a case. If, as OpenAI says, the prompts included lengthy excerpts and the model just continued in the same style and format, then I don’t think they have a case.

    • ricecake@sh.itjust.works · +38/-3 · 11 months ago

      Not quite.

      They’re alleging that if you tell it to include a phrase in the prompt, that it will try to, and that what NYT did was akin to asking it to write an article on a topic using certain specific phrases, and then using the presence of those phrases to claim it’s infringing.

      Without the actual prompts being shared, it’s hard to gauge how credible the claim is.
      If they seeded it with one sentence and got a 99% copy, that’s not great.
      If they had to give it nearly an entire article and it only matched most of what they gave it, that seems like much less of an issue.
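
      Just to make that concrete, here’s a rough sketch (with made-up placeholder strings, since neither the prompts nor the outputs are public) of how you could quantify the difference with Python’s difflib:

      ```python
      from difflib import SequenceMatcher

      def similarity(a: str, b: str) -> float:
          """Return a rough 0..1 similarity ratio between two strings."""
          return SequenceMatcher(None, a, b).ratio()

      # Placeholder strings; the real prompts and article text haven't been shared.
      article = "full text of the original article ..."
      prompt = "the seed text NYT gave the model ..."
      output = "whatever the model returned ..."

      # How much of the claimed copy was already supplied in the prompt,
      # versus how much the model reproduced on its own?
      print("prompt vs article:", similarity(prompt, article))
      print("output vs article:", similarity(output, article))
      ```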

  • SheeEttin@programming.dev · +129/-35 · 11 months ago

    The problem is not that it’s regurgitating. The problem is that it was trained on NYT articles and other data in violation of copyright law. Regurgitation is just evidence of that.

    • blargerer@kbin.social · +68/-4 · 11 months ago

      It’s not clear that training on copyrighted material is in breach of copyright. It is clear that regurgitating copyrighted material is in breach of copyright.

      • abhibeckert@lemmy.world · +19/-3 · 11 months ago

        Sure but who is at fault?

        If I manually type an entire New York Times article into this comment box, and Lemmy distributes it all over the internet… that’s clearly a breach of copyright. But are the developers of the open source Lemmy Software liable for that breach? Of course not. I would be liable.

        Obviously Lemmy should (and does) take reasonable steps (such as defederation) to help manage illegal use… but that’s the extent of their liability.

        All NYT needed to do was show OpenAI how they got the AI to output that content, and I’d expect OpenAI to proactively find a solution. I don’t think the courts will look kindly on NYT’s refusal to collaborate and find some way to resolve this without a lawsuit. A friend of mine tried to settle a case once, but the other side refused and it went all the way to court. The court found that my friend had been in the wrong (as he freely admitted all along) but also made them pay my friend compensation for legal costs (including just time spent gathering evidence). In the end, my friend got the outcome he was hoping for and the guy who “won” the lawsuit lost close to a million dollars.

        • CleoTheWizard@lemmy.world · +6/-1 · 11 months ago

          They might look down upon that but I doubt they’ll rule against NYT entirely. The AI isn’t a separate agent from OpenAI either. If the AI infringes on copyright, then so does OpenAI.

          Copyright applies to reproduction of a work so if they build any machine that is capable of doing that (they did) then they are liable for it.

          Seems like the solution here is to train data to not output copyrighted works and to maybe train a sub-system to detect it and stop the main chatbot from responding with it.
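
          A very rough sketch of what an output-side filter like that could look like (everything here is made up for illustration: the 8-gram fingerprinting, the threshold, and the canned refusal):

          ```python
          def ngrams(text: str, n: int = 8) -> set:
              """Word n-grams used as a cheap fingerprint of a text."""
              words = text.lower().split()
              return {" ".join(words[i:i + n]) for i in range(len(words) - n + 1)}

          def build_index(protected_texts: list) -> set:
              """Union of fingerprints from every protected document."""
              index = set()
              for doc in protected_texts:
                  index |= ngrams(doc)
              return index

          def looks_like_regurgitation(response: str, index: set, threshold: float = 0.2) -> bool:
              """Flag a response if too many of its n-grams appear in the protected index."""
              grams = ngrams(response)
              if not grams:
                  return False
              return len(grams & index) / len(grams) >= threshold

          # Made-up usage: block the reply instead of returning it when the filter trips.
          index = build_index(["text of article one ...", "text of article two ..."])
          reply = "candidate model output ..."
          if looks_like_regurgitation(reply, index):
              reply = "Sorry, I can't reproduce that content."
          ```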

          • lolcatnip@reddthat.com · +2 · 11 months ago

            Copyright applies to reproduction of a work so if they build any machine that is capable of doing that (they did) then they are liable for it.

            That is for sure not the case. The modern world is bursting with machines capable of reproducing copyrighted works, and their manufacturers are not liable for copyright violations carried out by users of those machines. You’re using at least one of those machines to read this comment. This stuff was decided around the time VCRs were invented.

            • CleoTheWizard@lemmy.world · +1 · 11 months ago

              Sorry, the unlicensed reproduction of those works via machine. Missed a word, but it’s important. Most machines do not reproduce works in unlicensed ways, especially not by themselves. Then there are the users: yes, if a user utilizes a machine to reproduce a work, it’s on the user. However, the machine doesn’t usually produce the copyrighted work itself, because that production is illegal. For the VCR, it’s fine to make a TV recorder because the VCR itself doesn’t violate copyright; the user does, via their inputs. If the NYT input its own material and then received it back, that’s obviously fine. If it didn’t, though, that’s illegal reproduction.

              So here I expect the court will say that OpenAI has no right to reproduce the work in full or in amounts not covered by fair use and must take measures to prevent the reproduction of irrelevant portions of articles. However, they’ll likely be able to train their AI off of publicly available data so long as they don’t violate anyone’s TOS.

        • mryessir@lemmy.sdf.org · +1 · 11 months ago

          I am not familiar with any judicative system. It sounds to me that OpenAI wants to get the evidence the NYT collected beforehand.

    • 000@fuck.markets · +24/-2 · 11 months ago

      There hasn’t been a court ruling in the US that makes training a model on copyrighted data any sort of violation. Regurgitating exact content is a clear copyright violation, but simply using the original content/media in a model has not been ruled a breach of copyright (yet).

    • V1K1N6@lemmy.world · +38/-20 · 11 months ago

      I’ve seen and heard your argument made before, not just for LLMs but also for text-to-image programs. My counterpoint is that humans learn in a very similar way to these programs, by taking stuff we’ve seen/read and developing a certain style inspired by those things. They also don’t just recite texts from memory, instead creating new ones based on probabilities of certain words and phrases occurring in the parts of their training data related to the prompt. In a way too simplified but accurate enough comparison, saying these programs violate copyright law is like saying every cosmic horror writer is plagiarising Lovecraft, or that every surrealist painter is copying Dali.

        • lolcatnip@reddthat.com · +1/-2 · 11 months ago

          But is it reasonable to have different standards for someone creating a picture with a paintbrush as opposed to someone creating the same picture with a machine learning model?

            • lolcatnip@reddthat.com · +1/-2 · 11 months ago

              plagiarism machine

              This is called assuming the consequent. Either you’re not trying to make a persuasive argument or you’re doing it very, very badly.

        • ricecake@sh.itjust.works · +25/-6 · 11 months ago

          Well, machine learning algorithms do learn, it’s not just copy paste and a thesaurus. It’s not exactly the same as people, but arguing that it’s entirely different is also wrong.
          It isn’t a big database full of copyrighted text.

          The argument is that it’s not wrong to look at data that was made publicly available when you’re not making a copy of the data.
          It’s not copyright infringement to navigate to a webpage in your browser, even though that makes your computer download it, process all of the contents of the page, render the content to the screen and hold onto that download for a finite but indefinite period of time, while you perform whatever operations you like on the downloaded data.
          You can even take notes on the data and keep those indefinitely, including using that derivative information to create your own similar works.
          The NYT explicitly publishes articles in a format designed to be downloaded, processed and have information extracted from that download by a computer program, and then to have that processed information presented to a human. They just didn’t expect that the processing would end up looking like this.

          The argument doesn’t require that we accept that a human and a computers system for learning be held to the same standard, or that we can’t differentiate between the two, it hinges on the claim that this is just an extension of what we already find it reasonable for a computer to do.
          We could certainly hold that generative AI is a different and new category for copyright law, but that’s very different from saying that their actions are unacceptable under current law.
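
          For what it’s worth, the kind of “download, process, extract” described above is a few lines of standard-library Python; the URL is just a placeholder, and a real crawler would also need to respect robots.txt and the site’s terms:

          ```python
          from html.parser import HTMLParser
          from urllib.request import urlopen

          class TextExtractor(HTMLParser):
              """Collect text nodes from an HTML page (a real extractor would also skip script/style)."""
              def __init__(self):
                  super().__init__()
                  self.chunks = []

              def handle_data(self, data):
                  if data.strip():
                      self.chunks.append(data.strip())

          html = urlopen("https://example.com/").read().decode("utf-8", errors="replace")
          extractor = TextExtractor()
          extractor.feed(html)
          print(" ".join(extractor.chunks))  # the page, reduced to machine-processable text
          ```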

            • ricecake@sh.itjust.works · +1 · 11 months ago

              Have you deleted and reposted this comment three times now, or is something deeply wrong with your client?

            • ricecake@sh.itjust.works · +5/-1 · 11 months ago

              I don’t think it’s a question of saying they’re “asking for it”, that just feels like trying to attach an emotionally charged crime to a civil copyright question.
              The technology was designed to transmit the data to a computer for ephemeral processing, and that’s how it’s being used.
              It was intended to be used for human consumption, but their intent has little to do with whether what was done was fair.
              If you give something away with the hopes people will pay for more, and instead people take what you gave them under the exact terms you specified, it’s not fair to sue them.

              The NYT is perfectly content to have their content used for algorithmic consumption in other cases where people want a consistently formatted, grammatically correct source of information about current events.

              The question of if it’s okay or not is one that society is still working out. Personally, I don’t see a problem with it. If it’s available to anyone, they can do what they want with it. If you want to control access to it, you need to actually do that by putting up a login or in some way getting people to agree to those stipulations.

              Speaking of overutilizing a thesaurus

              I’m sorry some of my words were too big for you.

      • General_Effort@lemmy.world · +16/-4 · 11 months ago

        It doesn’t work that way. Copyright law does not concern itself with learning. There are 2 things which allow learning.

        For one, no one can own facts and ideas. You can write your own history book, taking facts (but not copying text) from other history books. Eventually, that’s the only way history books get written (by taking facts from previous writings). Or you can take the idea of a superhero and make your own, which is obviously where virtually all of them come from.

        Second, you are generally allowed to make copies for your personal use. For example, you may copy audio files so that you have a copy on each of your devices. Or to tie in with the previous examples: You can (usually) make copies for use as reference, for historical facts or as a help in drawing your own superhero.

        In the main, these lawsuits won’t go anywhere. I don’t want to guarantee that none of the related side issues will be found to have merit, but basically this is all nonsense.

        • SheeEttin@programming.dev · +4/-6 · 11 months ago

          Generally you’re correct, but copyright law does concern itself with learning. Fair use exemptions require consideration of the purpose and character of the use, explicitly mentioning nonprofit educational purposes. It also mentions the effect on the potential market for the original work. (There are other factors required but they’re less relevant here.)

          So yeah, tracing a comic book to learn drawing is totally fine, as long as that’s what you’re doing it for. Tracing a comic to reproduce and sell is totally not fine, and that’s basically what OpenAI is doing here: slurping up whole works to improve their saleable product, which can generate new works to compete with the originals.

          • ricecake@sh.itjust.works · +5/-2 · 11 months ago

            What about the case where you’re tracing a comic to learn how to draw with the intent of using the new skills to compete with who you learned from?

            Point of the question being, they’re not processing the images to make exact duplicates like tracing would.
            It’s significantly closer to copying a style, which you can’t own.

            • Eccitaze@yiffit.net · +1 · 11 months ago

              Still a copyright violation, especially if you make it publicly available and claim the work as your own for commercial purposes. At the very minimum, tracing without fully attributing the original work is considered to be in poor enough taste that most art sites will permaban you for doing it, no questions asked.

              • ricecake@sh.itjust.works · +1 · 11 months ago

                In the analogy being developed though, they’re not making it available.
                The initial argument was that tracing something to practice and learn was fine.

                Which is why I said, what if you trace to practice, and then draw something independent to try to compete?

                To remove the analogy: most generative AI systems don’t actually directly reproduce works unless you jump through some very specific and questionable hoops. (If and when they do, that’s a problem and needs to not happen).

                A lot of the copyright arguments boil down to “it’s wrong for you to look at this picture for the wrong reasons”, or to wanting to build a protectionist system for creators.

                It’s totally legit to want to build a protectionist system, but it feels disingenuous to argue that our current system restricts how freely distributed content is used beyond restrictions on making copies or redistribution.

          • General_Effort@lemmy.world · +2/-1 · 11 months ago

            I meant “learning” in the strict sense, not institutional education.

            I think you are simply mistaken about what AI is typically doing. You can test your “tracing” analogy by making an image with Stable Diffusion. It’s trained only on images from the public internet, so if the generated image is similar to one in the training data, then a reverse image search should turn it up.

    • CrayonRosary@lemmy.world · +20/-9 · 11 months ago

      violation of copyright law

      That’s quite the claim to make so boldly. How about you prove it? Or maybe stop asserting things you aren’t certain about.

      • SheeEttin@programming.dev · +2/-5 · 11 months ago

        17 USC § 106, exclusive rights in copyrighted works:

        Subject to sections 107 through 122, the owner of copyright under this title has the exclusive rights to do and to authorize any of the following:

        (1) to reproduce the copyrighted work in copies or phonorecords;

        (2) to prepare derivative works based upon the copyrighted work;

        (3) to distribute copies or phonorecords of the copyrighted work to the public by sale or other transfer of ownership, or by rental, lease, or lending;

        (4) in the case of literary, musical, dramatic, and choreographic works, pantomimes, and motion pictures and other audiovisual works, to perform the copyrighted work publicly;

        (5) in the case of literary, musical, dramatic, and choreographic works, pantomimes, and pictorial, graphic, or sculptural works, including the individual images of a motion picture or other audiovisual work, to display the copyrighted work publicly; and

        (6) in the case of sound recordings, to perform the copyrighted work publicly by means of a digital audio transmission.

        Clearly, this is capable of reproducing a work, and is derivative of the work. I would argue that it’s displayed publicly as well, if you can use it without an account.

        You could argue fair use, but I doubt this use would meet any of the four test factors, let alone all of them.

    • regbin_@lemmy.world · +12/-5 · 11 months ago

      Training on copyrighted data should be allowed as long as it’s something publicly posted.

      • assassin_aragorn@lemmy.world · +8/-2 · 11 months ago

        Only if the end result of that training is also something public. OpenAI shouldn’t be making money on anything except ads if they’re using copyright material without paying for it.

        • ricecake@sh.itjust.works · +2/-2 · 11 months ago

          Why an exception for ads if you’re going that route? Wouldn’t advertisers deserve the same protections as other creatives?

          Personally, since they’re not making copies of the input (beyond what’s transiently required for processing), and they’re not distributing copies, I’m not sure why copyright would come into play.

    • Bogasse@lemmy.ml · +2/-1 · 11 months ago

      And I suppose people at OpenAI understand how to build a formal proof and that it is one. So it’s straight up dishonest.

    • tinwhiskers@lemmy.world · +3/-3 · 11 months ago

      Only publishing it is a copyright issue. You can also obtain copyrighted material with a web browser. The onus is on the person who publishes any material they put together, regardless of source. OpenAI is not responsible for publishing just because their tool was used to obtain the material.

      • SheeEttin@programming.dev · +3/-3 · 11 months ago

        There are issues other than publishing, but that’s the biggest one. But they are not acting merely as a conduit for the work, they are ingesting it and deriving new work from it. The use of the copyrighted work is integral to their product, which makes it a big deal.

        • tinwhiskers@lemmy.world · +3/-2 · 11 months ago

          Yeah, the ingestion part is still to be determined legally, but I think OpenAI will be ok. NYT produces content to be read, and copyright only protects them from people republishing their content. People also ingest their content and can make derivative works without problem. OpenAI are just doing the same, but at a level of ability that could be disruptive to some companies. This isn’t even really very harmful to the NYT, since the historical material used doesn’t even conflict with their primary purpose of producing new news. It’ll be interesting to see how it plays out though.

          • SheeEttin@programming.dev · +3/-2 · 11 months ago

            copyright only protects them from people republishing their content

            This is not correct. Copyright protects reproduction, derivation, distribution, performance, and display of a work.

            People also ingest their content and can make derivative works without problem. OpenAI are just doing the same, but at a level of ability that could be disruptive to some companies.

            Yes, you can legally make derivative works, but without a license, it has to be fair use. In this case, not only did they use one whole work in its entirety, they likely scraped thousands of whole NYT articles.

            This isn’t even really very harmful to the NYT, since the historical material used doesn’t even conflict with their primary purpose of producing new news.

            This isn’t necessarily correct either. I assume they sell access to their archives, for research or whatever. Being able to retrieve articles verbatim through chatgpt does harm their business.

            • ApexHunter@lemmy.ml · +2 · 11 months ago

              Yes, you can legally make derivative works, but without a license, it has to be fair use. In this case, not only did they use one whole work in its entirety, they likely scraped thousands of whole NYT articles.

              Scraping is the same as reading, not reproducing. That isn’t a copyright violation.

    • Linkerbaan@lemmy.world · +30/-52 · 11 months ago

      New York Times has an extremely bad reputation lately. It’s basically a tabloid these days, so it’s possible.

      It’s weird that they didn’t share the full conversation. I would have thought they’d provide evidence for the claim in the form of the full conversation, instead of their classic “trust me bro, the AI really said it, no I don’t want to share the evidence.”

      • Dark Arc@social.packetloss.gg · +39/-11 · 11 months ago

        Oh please, NYTimes is still one of the premier papers out there. There are mistakes but they’re nowhere near a tabloid, and they DO actually go out of their way to update and correct articles … to the point I’m pretty sure I’ve even seen them use push notifications for corrections.

        Unless of course that is, you want to listen to Trump and his deluge of alternative facts…

        • assassin_aragorn@lemmy.world · +7 · 11 months ago

          I’m pretty sure I’ve even seen them use push notifications for corrections.

          They have, I distinctly remember them doing that a few times.

        • Esqplorer@lemmy.zip · +7/-6 · 11 months ago

          Yeah premier coverage of Taylor Swift being secretly gay. NYT is legitimately a tabloid now…

            • Chocrates@lemmy.world · +1 · 11 months ago

              Their opinion pieces have been full of garbage opinions for years. Didn’t the NYT get bought recently? I can’t seem to find reference to it though.

              • Dark Arc@social.packetloss.gg · +1 · 11 months ago

                The whole point of opinion pieces is to expose opinions that are outside of the realm of what you’d normally publish. It’s supposed to be a means for keeping your readers out of their echo chamber/exposing different view points.

                The times AFAIK didn’t get bought but Jeff Bezos owns The Washington Post, perhaps that’s what you’re thinking of.

                • Chocrates@lemmy.world · +1 · 11 months ago

                  Yeah that must be it. And thank you that is good perspective. To ask a dumb question, at what point can we decide not to care about other side opinions? Climate change for instance, I don’t need to see another opinion saying that it isn’t bad and the science is wrong, that is pretty much settled.

  • AlexWIWA@lemmy.ml · +40/-4 · 11 months ago

    OpenAI claims that the NYT articles were wearing provocative clothing.

    Feels like the same awful defense.

  • pixxelkick@lemmy.world · +41/-7 · 11 months ago

    Yeah I agree, it seems unlikely this actually happened so simply.

    You have to try really hard to get the ai to regurgitate anything, but it will very often regurgitate an example input.

    IE “please repeat the following with (insert small change), (insert wall of text)”

    GPT literally has the ability to get a session ID and seed to report an issue, so it should be trivial for the NYT to snag the exact session ID they got the results with (it’s saved on their account!) and provide it publicly.

    The fact they didn’t is extremely suspicious.
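
    For reference, the “repeat the following with a small change” pattern mentioned above looks roughly like this against the chat API (a sketch assuming the official openai Python client; the model name and the wall-of-text are placeholders, not the prompts NYT actually used, which haven’t been shared):

    ```python
    from openai import OpenAI  # assumes the official openai package, v1+

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    wall_of_text = "... a lengthy excerpt pasted in by the user ..."

    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # model choice is an assumption for this sketch
        messages=[{
            "role": "user",
            "content": f"Please repeat the following with the dates changed:\n\n{wall_of_text}",
        }],
    )
    print(resp.choices[0].message.content)  # tends to echo the supplied excerpt, which is the point above
    ```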

    • Hello_there@kbin.social · +18/-5 · 11 months ago

      I doubt they did the ‘rewrote this text like this’ prompt you state. This would just come out in any trial if it was that simple and would be a giant black mark on the paper for filing a frivolous lawsuit.

      If we rule that out, then it means that gpt had article text in its knowledge base, and nyt was able to get it to copy that text out in its response.
      Even that is problematic. Either gpt does this a lot and usually rewrites it better, or it does that sometimes. Both are copyright offenses.

      Nyt has copyright over its article text, and they didn’t give license to gpt to reproduce it. Even if they had to coax the text out thru lots of prompts and creative trial and error, it still stands that gpt copied text and reproduced it and made money off that act without the agreement of the rights holder.

      • ricecake@sh.itjust.works · +10/-5 · 11 months ago

        They have copyright over their article text, but they don’t have copyright over rewordings of their articles.

        It doesn’t seem so cut and dry to me, because “someone read my article, and then I asked them to write an article on the same topic, and for each part that was different I asked them to change it until it was the same” doesn’t feel like infringement to me.

        I suppose I want to see the actual prompts to have a better idea.

        • Hello_there@kbin.social · +2/-4 · 11 months ago

          I can take the entirety of Harry Potter, run it thru chat gpt to ‘rewrite in the style of Lord of the rings’, and rename the characters. Assuming it all works correctly, everything should be reworded. But, I would get deservedly sued into the ground.
          News articles might be a different subject matter, but a blatant rewording of each sentence, line by line, still seems like a valid copyright claim.
          You have to add context or nuance or use multiple sources. Some kind of original thought. You can’t just wholly repackage someone else’s work and profit off of that.

          • ricecake@sh.itjust.works · +8/-3 · 11 months ago

            But that’s not what LLMs do. They don’t just reword stuff like the search and replace feature in word, it’s closer to “a sentence with the same meaning”.

            I’d agree it’s a lot more murky when it’s the plot that’s your IP, and not just the actual written words and editorial perspective, like a news article.

            I think it’s also a question of if it’s copyright infringement for the tool to pull in the data and process it, or if it’s infringement when you actually use it to make the infringing content.

    • breadsmasher@lemmy.world · +11/-2 · 11 months ago

      I wonder how far “AI is regurgitating existing articles” vs “infinite monkeys on a keyboard” will go. This isn’t aimed at you personally, your comment just reminded me of this for some reason

      Have you seen the Library of Babel? Here’s your comment in the library, which has existed well before you ever typed it (excluding punctuation):

      https://libraryofbabel.info/bookmark.cgi?ygsk_iv_cyquqwruq342

      If all text that can ever exist, already exists, how can any single person own a specific combination of letters?

      • Excrubulent@slrpnk.net · +6/-2 · 11 months ago

        I hate copyright too, and I agree you shouldn’t own ideas, but the library of babel is a pretty weak refutation of it.

        It’s an algorithm that can generate all possible text, then search for where that text would appear, then show you that location. So you say that text existed long before they typed it, but was it ever accessed? The answer is no on a level of certainty beyond the strongest cryptography. That string has never been accessed, and thus never generated until you searched for it, so in a sense it never did exist before now.

        The library of babel doesn’t contain meaningful information because you have to independently think of the string you want it to generate before it will generate it for you. It must be curated, and all creation is ultimately the product of curation. What you have there is an extremely inefficient method of string storage and retrieval. It is no more capable of giving you meaningful output than a blank text file.
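
        You can make that storage-and-retrieval point concrete with a toy addressing scheme: every string “already exists” at the integer address you get by encoding it, which is exactly why you have to know the text before you can find it (a minimal sketch, not how the actual site computes its locations).

        ```python
        def address_of(text: str) -> int:
            """Map a string to the unique integer 'shelf location' that encodes it."""
            return int.from_bytes(text.encode("utf-8"), "big")

        def text_at(address: int) -> str:
            """Recover the string stored at a given integer address."""
            length = (address.bit_length() + 7) // 8
            return address.to_bytes(length, "big").decode("utf-8")

        loc = address_of("any comment you like")
        print(loc)           # an enormous number: the comment's 'location' in the library
        print(text_at(loc))  # the original text, recovered only because we knew it already
        ```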

        A better argument against copyright is just that it mostly gets used by large companies to hoard IP and keep most of the rewards and pay actual artists almost nothing. If the idea is to ensure art gets created and artists get paid, it has failed, because artists get shafted and the industry makes homogeneous, market driven slop, and Disney is monopolising all of it. Copyright is the mechanism by which that happened.

      • anlumo@lemmy.world · +1 · 11 months ago

        There is no mathematical definition of copyright, because it’s just based on feelings. That’s why every small problem has to be arbitrarily decided by a court.

      • abhibeckert@lemmy.world · +2/-1 · 11 months ago

        If all text that can ever exist, already exists, how can any single person own a specific combination of letters?

        They don’t own it, they just own exclusive rights to make copies. If you reach the exact same output without making a copy then you’re in the clear.

      • FaceDeer@kbin.social · +1/-4 · 11 months ago

        Fortunately copyright depends on publication, so the text simply pre-existing somewhere won’t ruin everything.

        Unless you don’t like copyright, in which case it’s “unfortunately.”

        • SheeEttin@programming.dev · +2/-2 · 11 months ago

          That is not correct. Copyright subsists in all original works of authorship fixed in any tangible medium of expression. https://www.law.cornell.edu/uscode/text/17/102

          Legally, when you write your shopping list, you instantly have the rights to that work, no publication or registration necessary. You can choose to publish it later, or not at all, but you still own the rights. Someone can’t break into your house, look at your unpublished works, copy them, and publish them like they’re their originals.

          • anlumo@lemmy.world · +2 · 11 months ago

            No, a list of facts like a shopping list is not under copyright protection.

            If you wrote the list as a poem, you could claim it, though.

            • SheeEttin@programming.dev · +1 · 11 months ago

              Right, but it’s not a pure list of facts. When you set it to paper, it’s unique, and you could argue it’s art. In fact, a quick Google search found one such example: https://www.saatchiart.com/art/Painting-Shopping-list-1/2146403/10186433/view

              Granted, that one was presumably intended to be a work of art on creation and your weekly shopping list isn’t, but the intent during creation isn’t all that important for US copyright law. You create it, you get the rights.

                • wikibot@lemmy.world (bot) · +1 · 11 months ago

                  Here’s the summary for the wikipedia article you mentioned in your comment:

                  Feist Publications, Inc., v. Rural Telephone Service Co., 499 U.S. 340 (1991), was a landmark decision by the Supreme Court of the United States establishing that information alone without a minimum of original creativity cannot be protected by copyright. In the case appealed, Feist had copied information from Rural's telephone listings to include in its own, after Rural had refused to license the information. Rural sued for copyright infringement. The Court ruled that information contained in Rural's phone directory was not copyrightable and that therefore no infringement existed.

                  article | about

    • NevermindNoMind@lemmy.world · +4/-2 · 11 months ago

      There is an attack where you ask ChatGPT to repeat a certain word forever, and it will do so and eventually start spitting out related chunks of text it memorized during training. It was in a research paper, I think OpenAI fixed the exploit and made asking the system to repeat a word forever a violation of TOS. That’s my guess how NYT got it to spit out portions of their articles, “Repeat [author name] forever” or something like that. Legally I don’t know, but morally making a claim that using that exploit to find a chunk of NYT text is somehow copyright infringement sounds very weak and frivolous. The heart of this needs to be “people are going on ChatGPT to read free copies of NYT work and that harms us” or else their case just sounds silly and technical.
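
      If you had the model output from an exploit like that and wanted to check whether it actually contains memorized article text, one rough approach is to look for long verbatim runs shared with the article (placeholder strings below; the 20-word cutoff is arbitrary):

      ```python
      from difflib import SequenceMatcher

      def longest_shared_run(response: str, article: str) -> str:
          """Return the longest verbatim substring the two texts have in common."""
          m = SequenceMatcher(None, response, article).find_longest_match(
              0, len(response), 0, len(article)
          )
          return response[m.a:m.a + m.size]

      run = longest_shared_run("... model output ...", "... article text ...")
      if len(run.split()) > 20:  # arbitrary cutoff, just for illustration
          print("long verbatim overlap:", run)
      ```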

  • Boozilla@lemmy.world · +36/-5 · 11 months ago

    Antiquated IP laws vs Silicon Valley Tech Bro AI…who will win?

    I’m not trying to be too sarcastic, I honestly don’t know. IP law in the US is very strong. Arguably too strong, in many cases.

    But Libertarian Tech Bro megalomaniacs have a track record of not giving AF about regulations and getting away with all kinds of extralegal shenanigans. I think the tide is slowly turning against that, but I wouldn’t count them out yet.

    It will be interesting to see how this stuff plays out. Generally speaking, tech and progress tend to win these things over the long term. There was a time when the concept of building railroads across the western United States seemed logistically and financially absurd, for just one of thousands of such examples. And the naysayers were right. It was completely absurd. Until mineral rights entered the equation.

    However, it’s equally remarkable a newspaper like the NYT is still around, too.

      • sir_reginald@lemmy.world · +2/-3 · 11 months ago

        I’ve been advocating for anti-copyright since I discovered the works of the great Aaron Swartz.

        I think that since AI corps are just effectively ignoring copyright, why not take the opportunity and just take copyright down for good?

        I’m not too happy about AIs harvesting all the data they want, but since they are doing it anyway, just let anyone do it legally.

    • Potatos_are_not_friends@lemmy.world · +8/-3 · 11 months ago

      But Libertarian Tech Bro megalomaniacs have a track record of not giving AF about regulations and getting away with all kinds of extralegal shenanigans.

      Not supporting them, but that’s the whole point.

      A lot of closed gardens get disrupted by tech. Is it for the better? Who knows. I for sure don’t know. Because lots of rules were made by the wealthy, and technology broke that up. But then tech bros get wealthy and end up being the new elite, and we’re back full circle.

        • Potatos_are_not_friends@lemmy.world · +1 · 11 months ago

          Wikipedia destroyed the paper encyclopedia business.

          Online courses disrupted higher education. Half of my team don’t have a degree in computer science.

          Say what you want about Airbnb/Uber, but the time before that was a shit show to be a black person trying to hail a taxi.

          I’m sure you can name dozens of wtfs like Facebook, and misinformation. But I’m not so pessimistic because we got a lot of real great cultural shifts in the past twenty years.

          • adrian783@lemmy.world · +1 · 11 months ago

            Neither Wikipedia nor online education aimed to be disruptive, and the only good thing you can say about Airbnb and Uber is “it’s easier for black ppl to get a taxi”. I think there’s a difference between “tech bro hustling” and Wikipedia.

            AI is still on the hustling side as far as I’m concerned

  • RizzRustbolt@lemmy.world · +20/-1 · 11 months ago

    “They tricked us!”

    “That said… we would still like to ‘work’ with them.”

  • Skye@lemmy.world · +11/-1 · 11 months ago

    So I copied the first paragraph of the Osama Bin Laden Killed NYT Article and asked Chat GPT to give me an article on the topic “in the style of NYT”

    Even before the thing had finished generating, it was clear to me that it was high school level “copy my homework but don’t make it obvious” work.

    I put it into a plagiarism checker anyway and it said “Significant Plagiarism Found”

    • Skye@lemmy.world · +4/-1 · 11 months ago

      Here’s the chatGPT text for anyone who is interested-

      WASHINGTON — Osama bin Laden, the mastermind of the most devastating attack on American soil in modern times and the most hunted man in the world, was killed in a firefight with United States forces in Pakistan, President Obama announced on Sunday.

      In a late-night address to the nation from the East Room of the White House, President Obama declared, “Justice has been done.” The death of Bin Laden, the Al Qaeda leader who had eluded international forces for nearly a decade, marks a significant turning point in the global fight against terrorism and a pivotal victory for U.S. intelligence and military forces.

      Bin Laden’s demise, the result of an operation that unfolded over the course of a few tense hours, is being hailed as a triumph of U.S. military and intelligence efforts. The operation, carried out by a small team of American forces in Abbottabad, a city north of Islamabad, ended a worldwide manhunt that began in the aftermath of the September 11 attacks in 2001.

      The President, in his address, detailed how the United States had received an intelligence lead last August about Bin Laden’s possible location. Over the ensuing months, intelligence officials worked to confirm the information. “Finally, last week, I determined that we had enough intelligence to take action, and authorized an operation to get Osama bin Laden and bring him to justice,” Obama said.

      The raid on Bin Laden’s compound, described by officials as a surgical strike, was a high-stakes operation. U.S. helicopters ferried elite counter-terrorism forces into the compound, where they engaged in a firefight, killing Bin Laden and several of his associates. There were no American casualties.

      The news of Bin Laden’s death immediately sent waves of emotion across the United States and around the world. In Washington, large crowds gathered outside the White House, chanting “USA! USA!” as they celebrated the news. Similar scenes unfolded in New York City, particularly at Ground Zero, where the Twin Towers once stood.

      The killing of Bin Laden, however, does not signify the end of Al Qaeda or the threat it poses. U.S. officials have cautioned that the organization, though weakened, still has the capability to carry out attacks. The Department of Homeland Security has issued alerts, warning of the potential for retaliatory strikes by terrorists.

      In his address, President Obama acknowledged the continuing threat but emphasized that Bin Laden’s death was a message to the world. “The United States has sent an unmistakable message: No matter how long it takes, justice will be done,” he said.

      As the world reacts to the news of Bin Laden’s death, questions are emerging about Pakistan’s role and what it knew about the terrorist leader’s presence in its territory. The operation’s success also underscores the capabilities and resilience of the U.S. military and intelligence community after years of relentless pursuit.

      Osama bin Laden’s death marks the end of a chapter in the global war on terror, but the story is far from over. As the United States and its allies continue to confront the evolving threat of terrorism, the world watches and waits to see what unfolds in this ongoing narrative.

      • bean@lemmy.world · +7/-2 · 11 months ago

        Ok but you didn’t put this up with the original article text or compare it in any way. Just ran it through a ‘plagiarism detector’ and dumped the text you made. If you’re going to make this argument, don’t rely on a single website to check your text, and at least compare it to the original article you’re using to make your point. It looks like you’re dumping it here and expecting we all are going to go Scooby-Doo detectives or something. Mate, this is your own argument. Do the work yourself if you want to make a point.

        • Skye@lemmy.world · +5/-6 · 11 months ago

          Hey, I get what you are trying to say, but I suggest you try reading the original article. Here it is for reference.

          https://www.nytimes.com/2011/05/02/world/asia/osama-bin-laden-is-killed.html

          The second para starts in the original article by saying - In a late-night appearance in the East Room of the White House, Mr. Obama declared that “justice has been done”

          In the ChatGPT version it says - In a late-night address to the nation from the East Room, President Obama declared “Justice has been done”.

          I’ll let you draw your own conclusions

  • prime_number_314159@lemmy.world · +14/-5 · 11 months ago

    If you can prompt it, “Write a book about Harry Potter” and get a book about a boy wizard back, that’s almost certainly legally wrong. If you prompt it with 90% of an article, and it writes a pretty similar final 10%… not so much. Until full conversations are available, I don’t really trust either of these parties, especially in the context of a lawsuit.

    • kense@lmmy.dk · +1 · 11 months ago

      An AI ought to know who Harry Potter is, even if the books themselves are not the source of data…

      If you prompt “Write a book about a boy wizard” and you get Harry Potter, that’s where this would be an issue imo.

  • NevermindNoMind@lemmy.world · +8/-2 · 11 months ago

    One thing that seems dumb about the NYT case that I haven’t seen much talk about is that they argue that ChatGPT is a competitor and its use of copyrighted work will take away NYT’s business. This is one of the elements they need on their side to counter OpenAI’s fair use defense. But it just strikes me as dumb on its face. You go to the NYT to find out what’s happening right now, in the present. You don’t go to the NYT to find general information about the past or fixed concepts. You use ChatGPT the opposite way: it can tell you about the past (accuracy aside) and it can tell you about general concepts, but it can’t tell you about what’s going on in the present (except by doing a web search, which my understanding is not a part of this lawsuit). I feel pretty confident in saying that there’s not one human on earth who was a regular New York Times reader and said “well, I don’t need this anymore since now I have ChatGPT”. The use cases just do not overlap at all.

    • abhibeckert@lemmy.world
      link
      fedilink
      English
      arrow-up
      8
      arrow-down
      1
      ·
      edit-2
      11 months ago

      it can’t tell you about what’s going on in the present (except by doing a web search, which my understanding is not a part of this lawsuit)

      It’s absolutely part of the lawsuit. NYT just isn’t emphasising it because they know OpenAI is perfectly within their rights to do web searches and bringing it up would weaken NYT’s case.

      ChatGPT with web search is really good at telling you what’s going on right now. It won’t summarise NYT articles, because NYT has blocked it with robots.txt, but it will summarise other news organisations that cover the same facts.

      The fundamental issue is that news and facts are not protected by copyright… and organisations like the NYT take advantage of that all the time by immediately plagiarising and re-writing/publishing stories broken by thousands of other news organisations. This really is the pot calling the kettle black.

      When NYT loses this case, and I think they probably will, there’s a good chance OpenAI will stop checking robots.txt files.
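
      For what it’s worth, the robots.txt opt-out is easy to check for yourself. A minimal sketch using Python’s standard urllib.robotparser (my own example; GPTBot is the crawler name OpenAI published, and the article URL is just the one linked above):

      ```python
      # Ask a site's robots.txt whether OpenAI's crawler may fetch a given page.
      from urllib.robotparser import RobotFileParser

      rp = RobotFileParser()
      rp.set_url("https://www.nytimes.com/robots.txt")
      rp.read()  # download and parse the robots.txt file

      article = "https://www.nytimes.com/2011/05/02/world/asia/osama-bin-laden-is-killed.html"
      print("GPTBot allowed:", rp.can_fetch("GPTBot", article))
      ```

      Of course the parser only reports what the site asks for; honouring it is voluntary, which is exactly why the point above about OpenAI simply not checking it matters.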

  • TWeaK@lemm.ee
    link
    fedilink
    English
    arrow-up
    5
    arrow-down
    1
    ·
    11 months ago

    Whether or not they “instructed the model to regurgitate” articles, the fact is it did so, which is still copyright infringement either way.

    • gmtom@lemmy.world
      link
      fedilink
      English
      arrow-up
      1
      ·
      11 months ago

      No, not really. If you use Photoshop to recreate a copyrighted artwork, who is infringing the copyright: you or Adobe?

      • TWeaK@lemm.ee
        link
        fedilink
        English
        arrow-up
        2
        ·
        11 months ago

        You are. The person who made or sold a gun isn’t liable for the murder of the person who got shot.

        The difference is that ChatGPT is not Photoshop. Photoshop is a tool that a person controls absolutely. ChatGPT is “artificial intelligence”: it does its own “thinking” and interprets the instructions a user gives it.

        Copyright infringement is decided based on the similarity of the work. That is the established method, and it would be applied here.

        OpenAI infringes copyright twice. First, with their training dataset, which they claim is “research” - it is in fact development of a commercial product. Second, their commercial product infringes copyright by producing near-identical work. Even though its dataset doesn’t include the full work of Harry Potter, it still manages to write Harry Potter. If a human did the same thing, even if they honestly and genuinely thought they were presenting original ideas, they would still be guilty. This is no different.

        • gmtom@lemmy.world
          link
          fedilink
          English
          arrow-up
          1
          ·
          11 months ago

          it still manages to write Harry Potter. If a human did the same thing, even if they honestly and genuinely thought they were presenting original ideas, they would still be guilty.

          Only if they publish or sell it. Which is why OpenAI isn’t/shouldn’t be liable in this case.

          If you write out the entire Harry Potter series from memory, you are not breaking any laws just by doing so. Same as if you use Photoshop to reproduce a copyrighted work.

          So because they publish the tool, not the actual content, OpenAI isn’t breaking any laws either. It’s much the same way that torrent engines are legal despite what they are used for.

          There is also some more direct precedent for this. There is a website called “library of babel” that has used some clever maths to publish every combination of characters up to 3260 characters long (the basic enumeration trick is sketched below). That contains, by definition, anything below that limit that is copyrighted, and in theory you could piece together the entire Harry Potter series from that website 3k characters at a time. And that is safe under copyright law.

          The same goes for making a program that generates digital pictures where all the pixels are set randomly. That program, given enough time/luck, will be capable of generating any copyrighted image - it can generate photos of sensitive documents or nudes of celebrities - yet it is also protected by copyright law, regardless of how closely its products match the copyrighted material. If the person using the program publishes those pictures, that’s a different story, much like someone publishing a NYT article generated by GPT would be liable.
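
          The “library of babel” trick is just a reversible numbering of every possible string. Here is a toy sketch of the idea in Python (my own illustration; the real site uses much longer pages and a scrambled mapping, but the principle is the same):

          ```python
          # Toy "library of babel": a bijection between page numbers and every
          # possible string of a fixed length over a small alphabet.
          ALPHABET = "abcdefghijklmnopqrstuvwxyz ,."   # 29 symbols, like the real site
          PAGE_LEN = 16                                # tiny pages so the numbers stay readable

          def page(index: int) -> str:
              """Return the page at a given index by writing the index in base 29."""
              chars = []
              for _ in range(PAGE_LEN):
                  index, r = divmod(index, len(ALPHABET))
                  chars.append(ALPHABET[r])
              return "".join(chars)

          def index_of(text: str) -> int:
              """Inverse mapping: the page number on which a given text appears."""
              assert len(text) == PAGE_LEN
              return sum(ALPHABET.index(c) * len(ALPHABET) ** i for i, c in enumerate(text))

          snippet = "the boy wizard w"       # any 16-character string "exists" somewhere
          n = index_of(snippet)
          print(n, page(n) == snippet)       # prints its page number and True
          ```

          Every string over that alphabet gets a page number by construction; whether that tells us anything about copyright is exactly what’s being argued here.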

          • TWeaK@lemm.ee
            link
            fedilink
            English
            arrow-up
            1
            ·
            11 months ago

            Only if they publish or sell it. Which is why OpenAI isn’t/shouldn’t be liable in this case.

            If you write out the entire Harry Potter series from memory, you are not breaking any laws just by doing so. Same as if you use Photoshop to reproduce a copyrighted work.

            Actually you are infringing copyright. It’s just that a) catching you is very unlikely, and b) there are no damages to make it worthwhile.

            You don’t have to be selling things to infringe copyright. Selling makes it worse, and makes it easier to show damages (loss of income), but it isn’t a requirement. Copyright is absolute: if I write something and you copy it, you are infringing on my absolute right to dictate how my work is copied.

            In any case, OpenAI publishes its answers to whoever is using ChatGPT. If someone asks it something and it spits out someone else’s work, that’s copyright infringement.

            There is also some more direct precedent for this. There is a website called “library of babel” that has used some clever maths to publish every combination of characters up to 3260 characters long. That contains, by definition, anything below that limit that is copyrighted, and in theory you could piece together the entire Harry Potter series from that website 3k characters at a time. And that is safe under copyright law.

            It isn’t safe, it’s just not been legally tested. Just because no one has sued for copyright infringement doesn’t mean no infringement has occurred.

            • gmtom@lemmy.world
              link
              fedilink
              English
              arrow-up
              2
              ·
              11 months ago

              Actually you are infringing copyright.

              No, I can absolutely 1,000% guarantee you that this isn’t true and you’re pulling that from your ass.

              I have had to go through a high-profile copyright claim for my work where this was the exact premise. We were developing a game and were using copyrighted images as placeholders while we worked on the game internally. We presented the game to the company as a pitch and they tried to sue us for using their assets.

              And they failed, mostly because one of the main factors for establishing a copyright claim is whether the reproduced work affects the market for the original. Then, because we were using the assets in a unique way, it was determined we were using them in a transformative way. And it was made for a pitch, not for the purpose of selling, so it was determined to be covered by fair use.

              The EU also has the “personal use” exemption, which specifically allows for copying for personal use.

              In any case, OpenAI publishes its answers to whoever is using ChatGPT.

              No they’re not; ChatGPT sessions are private, so if the results are shared, the onus is on the user, not OpenAI.

              Just because no one has sued for copyright infringement doesn’t mean no infringement has occurred.

              I mean, it kinda does? technically? Because if you fail to enforce your copyright then you can’t claim copyright later on.

              • TWeaK@lemm.ee
                link
                fedilink
                English
                arrow-up
                1
                arrow-down
                1
                ·
                11 months ago

                I have had to go through a high-profile copyright claim for my work where this was the exact premise. We were developing a game and were using copyrighted images as placeholders while we worked on the game internally. We presented the game to the company as a pitch and they tried to sue us for using their assets.

                That’s interesting, if only because the judgement flies in the face of the actual legislation. I guess some judges don’t really understand it much better than your average layman (there was always a huge amount of confusion over what “transformative” meant in terms of copyright infringement, for a similar example).

                I can only rationalise that your test version could be considered “research”, thus giving you some fair use exemption. The placeholder graphics were only used internally as placeholders, and thus there was never any intent to infringe on copyright.

                ChatGPT is inherently different, as you can specifically instruct it to infringe on copyright. “Write a story like Harry Potter” or “write an article in the style of the New York Times” is basically giving that instruction, and if what it outputs is significantly similar (or indeed identical) then it is quite reasonable to assume copyright has been infringed.

                A key difference here is that, while it is “in private” between the user and ChatGPT, those are still two different parties. When you used your placeholder assets, that was purely internal between workers of your employer - the material is only shared with one party, your employer, which encompasses multiple people (who are each employed or contracted by a single entity). ChatGPT involves two parties, OpenAI and the user, thus everything ChatGPT produces is published - even if it is only published to an individual user, that user is still a separate party from the copyright infringer.

                I mean, it kinda does? technically? Because if you fail to enforce your copyright then you can’t claim copyright later on.

                If a person robs a bank, but is not caught, are they not still a bank robber?

                Calling someone who hasn’t been convicted of a crime a criminal might open you up to liability, and in practice a professional journalist will avoid such concrete labels as a matter of professional integrity, but that does not mean such a statement is false. Indeed, it is entirely possible for me to call someone a bank robber and prove that this was a valid statement in a defamation lawsuit, even if they were exonerated in criminal court. Crimes have to be proven beyond reasonable doubt, i.e. greater than 99% certain, while civil court works on the balance of probabilities, i.e. which argument is more than 50% likely to be true.

                I can say that it is more than 50% likely that copyright infringement has occurred even if no criminal copyright infringement is proven.

                That isn’t pulled from my ass, that’s just the nuance of how law works. And that’s before we delve into the topic of which judge you had, what legal training they undertook and how much vodka was in the “glass of water” on their bench, or even which way the wind blew that day.


                According to the Federal legislation, it does not matter whether the copying was for commercial or non-commercial purposes; the only thing that matters is the copying itself. Your judge got it wrong, and you were very lucky in that regard - in particular that your case was not appealed further to a higher, more competent court.

                Commerciality should only be factored into a determination of fair use, per the legislation, which a lower court judge cannot overrule. If your case were used as case law in another trial, there’s a good chance it would be disregarded.

                • gmtom@lemmy.world
                  link
                  fedilink
                  English
                  arrow-up
                  1
                  arrow-down
                  1
                  ·
                  11 months ago

                  I guess some judges don’t really understand it much better than your average layman

                  “Am I wrong about this subject? No, it must be the legal professionals who are wrong!”

                  I’m done with this. Goodbye.

  • AutoTL;DR@lemmings.worldB
    link
    fedilink
    English
    arrow-up
    4
    arrow-down
    1
    ·
    11 months ago

    This is the best summary I could come up with:


    OpenAI has publicly responded to a copyright lawsuit by The New York Times, calling the case “without merit” and saying it still hoped for a partnership with the media outlet.

    OpenAI claims it’s attempted to reduce regurgitation from its large language models and that the Times refused to share examples of this reproduction before filing the lawsuit.

    It said the verbatim examples “appear to be from year-old articles that have proliferated on multiple third-party websites.” The company did admit that it took down a ChatGPT feature, called Browse, that unintentionally reproduced content.

    However, the company maintained its long-standing position that in order for AI models to learn and solve new problems, they need access to “the enormous aggregate of human knowledge.” It reiterated that while it respects the legal right to own copyrighted works — and has offered opt-outs to training data inclusion — it believes training AI models with data from the internet falls under fair use rules that allow for repurposing copyrighted works.

    The company announced that website owners could start blocking its web crawlers from accessing their data in August 2023, nearly a year after it launched ChatGPT.

    The company recently made a similar argument to the UK House of Lords, claiming no AI system like ChatGPT can be built without access to copyrighted content.


    The original article contains 364 words, the summary contains 217 words. Saved 40%. I’m a bot and I’m open source!