They’re both BS machines and fact generators. It produced bullshit when asked about him because, as far as I can tell, he’s kind of a nobody, not because it’s just a stylistic generator. If he had asked about a more prominent person, one better represented in the training corpus, the answer would likely be largely accurate. The hallucination problem stems from the system needing to produce a result regardless of whether it has a well-trained semantic model for the question.
LLMs encode both the style of language and semantic relationships. For “who is Einstein”, both paths are well developed and the result is a reasonable response. For “who is Ryan McGreal”, the semantic relationships are weak or non-existent, but the stylistic path is undeterred, leading to the confidently plausible bullshit.
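To make that concrete, you can watch the stylistic path stay perfectly confident whether or not there’s any semantic backing behind it. A rough sketch using GPT-2 as a stand-in (a small public model, not whatever the article’s chatbot actually runs on):

    # Rough sketch: inspect the next-token distribution of a small public model.
    # GPT-2 stands in here for the much larger chatbot discussed in the article.
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tok = AutoTokenizer.from_pretrained("gpt2")
    model = AutoModelForCausalLM.from_pretrained("gpt2")

    for prompt in ["Albert Einstein was a", "Ryan McGreal is a"]:
        ids = tok(prompt, return_tensors="pt")
        with torch.no_grad():
            logits = model(**ids).logits[0, -1]       # scores for the next token
        probs = torch.softmax(logits, dim=-1)
        top = torch.topk(probs, 5)
        print(prompt)
        for p, i in zip(top.values, top.indices):
            print(f"  {tok.decode(i)!r}  p={p.item():.3f}")

Both prompts produce a perfectly usable distribution; nothing in the mechanism flags that one subject barely appears in the training data.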
They don’t generate facts, as the article says. They choose the next most likely word. Everything is confidently plausible bullshit. That some of it is also true is just luck.
It’s obviously not “just” luck. We know LLMs learn a variety of semantic models of varying degrees of correctness. It’s just that no individual (inner) model is really that great, and most of them are bad. LLMs aren’t reliable or predictable (enough) to constitute a human-trustable source of information, but they’re not pure gibberish generators.
No, you’re right, “luck” might be overstating it. There’s a good chance most of what it says is as accurate as the corpus it was trained on. Personally, that doesn’t make me very confident, but ymmv.
That’s just not true. Semantic encodings work. It’s not like neural networks are some new, untested concept; LLMs have some new tricks under the hood and a much more extensive training goal, but they’re fundamentally the same thing. All neural networks are mimicry machines enabled and limited by their data, but mimicking largely correct data produces largely correct results when the answer, or an interpolatable answer, exists in the training data. The problem arises when they’re asked to go further and further afield from their inputs. Some interpolation and substitution works, but it gets increasingly unreliable the more niche the answer is.
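The interpolation point is easy to see at toy scale. A minimal sketch (my own example, nothing LLM-specific): fit a small network on sin(x) over [-3, 3], then query it inside and outside that range.

    # Toy sketch of interpolation vs. going far afield from the training data.
    import numpy as np
    from sklearn.neural_network import MLPRegressor

    rng = np.random.default_rng(0)
    X_train = rng.uniform(-3.0, 3.0, size=(2000, 1))    # inputs confined to [-3, 3]
    y_train = np.sin(X_train).ravel()

    net = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=3000, random_state=0)
    net.fit(X_train, y_train)

    for x in [0.5, 2.0, 5.0, 8.0]:   # first two in-range, last two well outside
        pred = net.predict([[x]])[0]
        print(f"x={x:4.1f}  predicted={pred:+.3f}  actual={np.sin(x):+.3f}")

In-range queries come out close to the truth; out-of-range ones are confidently wrong. Same failure mode, scaled way down.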
While the LLM hype has very seriously oversold their abilities, the instinctive backlash to say they’re useless is similarly way off-base.
No one is saying “they’re useless.” But they are indeed bullshit machines, for the reasons the author (and you yourself) acknowledged. Their purpose is to choose likely words. That likely and correct are frequently the same shouldn’t blind us to the fact that correctness is a coincidence.
That’s an absurd statement. Do you have any experience with machine learning?
It isn’t; I do; do you?
Yes, it’s been my career for the last two decades, and before that it was the focus of my education. The idea that “correctness is a coincidence” is absurd and either fails to understand how training works or rejects the entire premise of large data revealing functional relationships in the underlying processes.
Or you’ve simply misunderstood what I’ve said despite your two decades of experience and education.
If you train a model on a bad dataset, will it give you correct data?
If you ask a model a question it doesn’t have enough data to be confident about, will it still confidently give you a correct answer?
And, more importantly, is it trained to offer CORRECT data, or is it trained to return words regardless of whether or not that data is correct?
I mean, it’s like you haven’t even thought about this.
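On the last question: the standard pre-training objective really is just next-token prediction. A toy sketch of the loss being minimised (random tensors standing in for a real model and corpus, not anyone’s actual training loop):

    # Toy sketch: the language-modelling loss scores "did you predict the next
    # token of the training text" and nothing else; truth never enters into it.
    import torch
    import torch.nn.functional as F

    vocab_size = 50257                                  # GPT-2-sized vocabulary
    logits = torch.randn(6, vocab_size)                 # model scores at 6 positions
    next_tokens = torch.randint(0, vocab_size, (6,))    # the tokens that actually follow
    loss = F.cross_entropy(logits, next_tokens)         # this is what training minimises
    print(loss.item())

Any correctness comes in indirectly, through the corpus the loss is computed over.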