Sure, but you can get that with something more long-form, too; it’s not exclusive to Twitter/microblogging.
Sorry about that.
I would argue that the format incentivizes short quips and discussions lacking nuance in favor of brevity, and yes, therefore it’s “bad” (to use their term) to use Twitter even if Musk weren’t turning it into Truth Social.
Well, arguably the microblogging format does have some intrinsic disadvantages.
Well, maybe free time doesn’t happen in the first year, but I was a nuke; quals weren’t all that bad from what I remember.
What’s wrong with writing poetry on an aircraft carrier? I can’t speak to being on an aircraft carrier, but on a submarine you are not in war mode 24/7; there’s time to do ordinary things (usually).
Let me guess: Tommy here hasn’t ever served in the military, right? All he knows about it is from movies?
Are you speaking legally or morally when you say someone “ought” to do something?
You most certainly can. The discussion about whether copyright applies to the output is nuanced but certainly valid, and notably separate from whether copyright allows copyright holders to restrict who or what gets trained on their work after it’s released for general consumption.
The article is literally about someone suing to prevent their art from being used for training. That’s the topic at hand.
Are you confused, or are you trying to shoehorn a different but related discussion into this one?
I was under the impression we were talking about using copyright to prevent a work from being used to train a generative model. There’s nothing in copyright that says anything about training anything. I’m not even convinced there should be.
There’s nothing in copyright law that covers this scenario, so anyone that says it’s “absolutely” one way or the other is telling you an opinion, not a fact.
I subscribed to releases! Good work so far!
Hey, I was up front about my data (or lack thereof), and we’re not talking about climate change or string theory; we’re talking about fast food delivery drivers’ onboarding.
“The Internet” would just state it like a fact.
Are you saying that traditional food delivery drivers get trained specifically not to hit on people when they deliver food? I don’t have any data but I feel like that’s not really a thing. Maybe my concept of the training a food delivery driver gets is way off the mark?
I’m also pretty sure that it’s easier to give a bad review that others will see via one of these food delivery apps than it is if you go directly to the business.
I think we all agree that this is inappropriate and should not be happening, I just don’t see how it doesn’t apply at least equally to traditional delivery drivers.
Yeah, I read that, but I don’t have the knowledge to say “what a rookie mistake” or “in hindsight that was a bad idea”. I take it it’s the former?
I’m not a cybersecurity expert. Did they make a foolish decision that would warrant a lack of trust, or were they just unlucky?
It’s not a bad heuristic to predict Trump. From staring directly at a solar eclipse to continuing to defame a person immediately after losing a defamation case about that person, Trump will always seemingly take the worst possible action in any given scenario.
I think (and am deeply saddened by it) that many people would go just for the proximity to Trump, not because they care one way or the other about Giuliani.
I can’t say I fully understand how LLMs work (can anyone??) but I know a little, and your comment doesn’t seem to reflect how they actually use training data. They don’t use their training data to “memorize” sentences; they use it as an example (among billions) of how language works. It’s still just an analogy, but it really is pretty close to LLMs “learning” a language by seeing it used over and over (there’s a toy sketch further down). Keeping in mind that we’re still in an analogy, it isn’t considered “derivative” when someone learns a language from examples of that language and then goes on to write a poem in that language.
Copyright doesn’t even apply, except perhaps in extremely fringe cases. If a journalist puts their article up online for general consumption, then it doesn’t violate copyright to use that work as a way to train an LLM on what the language looks like when used properly. There is no aspect of copyright law that covers this, but I don’t see why it would be any different from the human equivalent. Would you really back up the NYT if they claimed that using their articles to learn English was in violation of their copyright? Do people need to attribute where they learned a new word or strengthened their understanding of a language if they answer a question using that word? Does that even make sense?
Here is a link to a high level primer to help understand how LLMs work: https://www.understandingai.org/p/large-language-models-explained-with
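To make the “learning patterns vs. memorizing” point a bit more concrete, here’s a rough, very scaled-down toy in Python. It’s a character-level bigram counter, nowhere near a real transformer, and the corpus and names are made up purely for illustration; the point is just that what gets stored are statistics about how text tends to continue, not copies of the training sentences.

```python
# Toy illustration (NOT a real LLM): a character-level bigram model.
# "Training" here just counts how often each character follows another;
# the original sentences are never stored or reproduced verbatim.
from collections import Counter, defaultdict
import random

corpus = "the cat sat on the mat. the dog sat on the rug."

# "Training": learn which characters tend to follow which.
counts = defaultdict(Counter)
for a, b in zip(corpus, corpus[1:]):
    counts[a][b] += 1

# "Generation": sample the next character from those learned statistics.
def generate(start="t", length=40):
    out = start
    for _ in range(length):
        nxt = counts[out[-1]]
        if not nxt:
            break
        chars, weights = zip(*nxt.items())
        out += random.choices(chars, weights=weights)[0]
    return out

print(generate())  # new text shaped by the learned patterns, not retrieved sentences
```

Real LLMs do the same kind of thing at vastly larger scale over tokens with learned weights instead of simple counts, but the analogy holds: the model is a compressed statistical picture of the language, not a library of the training documents.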
While it doesn’t automatically mean that Giuliani sees the money, Trump is apparently having a fundraiser dinner for him. So maybe he does have something on Trump, still.
Well, that’s a good point, but I still think there are better services than Twitter/microblogging for that. Like our old friend RSS.