Top physicist says chatbots are just ‘glorified tape recorders’::Leading theoretical physicist Michio Kaku predicts quantum computers are far more important for solving mankind’s problems.
That’s an incredibly ignorant take.
LLMs are the first glimmer of artificial GENERAL intelligence (AGI). They are perhaps the most important invention of all time.
They will fundamentally change our society and our economy, and they pose an existential threat to mankind, not to mention raising existential questions about our purpose in a world where, very shortly, computers will be able to out-think and out-create us.
Quantum computers are … neat. They will allow us to solve problems conventional computers can’t. They may make current encryption models obsolete … but I haven’t heard any proposed usage of them that would be even a fraction as profound as AGI.
They don’t really demonstrate general intelligence. They’re very powerful tools, but LLMs are still a form of specialized intelligence; they’re just specialized at language instead of some other task. I do agree that they’re closer than what we’ve seen in the past, but the fact that they don’t actually understand our world and can only mimic the way we talk about it still occasionally shines through.
You wouldn’t consider Midjourney or Stable Diffusion to have general intelligence because they can generate accurate pictures of a wide variety of things, and in my opinion, LLMs aren’t much different.
I’ve been working extensively with GPT-4 since it came out, and it ABSOLUTELY is the engine that can power rudimentary AGI. You can supplement it with other tools and give it a memory… ZERO doubt in my mind that GPT-4-powered systems are AGI.
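For readers unfamiliar with the "supplement it with tools and memory" pattern being described, here is a minimal sketch of the general idea: wrap the model behind a buffer that replays prior turns on every call. This is not the commenter's actual setup; `fake_llm` is a hypothetical stand-in for a real model API, and all names here are illustrative.

```python
# Sketch of an LLM wrapped with conversational memory. A real system
# would replace fake_llm with a call to an actual model endpoint.

def fake_llm(messages):
    # Placeholder: echoes the latest user message instead of calling a model.
    return f"(reply to: {messages[-1]['content']})"

class MemoryAgent:
    def __init__(self, llm, max_turns=10):
        self.llm = llm
        self.max_turns = max_turns
        self.memory = []  # list of {"role": ..., "content": ...} dicts

    def ask(self, user_text):
        # Record the user turn, call the model with the full history,
        # then record the model's reply.
        self.memory.append({"role": "user", "content": user_text})
        reply = self.llm(self.memory)
        self.memory.append({"role": "assistant", "content": reply})
        # Trim to the most recent turns so a finite context window fits.
        self.memory = self.memory[-2 * self.max_turns:]
        return reply

agent = MemoryAgent(fake_llm)
agent.ask("Hello")
agent.ask("What did I just say?")
```

The key design point under debate in this thread is that the "memory" lives entirely outside the model: the model itself is stateless, and the wrapper re-feeds history on each call.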
I disagree, but I guess we’ll have to wait and see! I do hope you’re wrong, as my experience with ChatGPT has shown me how incredibly biased it is and I would rather hope that once we do achieve AGI, it doesn’t have a political agenda in mind.
Biased in what way?
They seem to be patching it whenever something comes up, which is still not an acceptable solution because things keep coming up. One great example that I witnessed myself (but has since been patched) was that if you asked it for a joke about men, it would come up with a joke that degraded men, but if you asked it for a joke about women, it would chastise you for being insensitive to protected groups.
Now, it just comes up with a random joke and assigns the genders of the characters in the joke accordingly, but there are certainly still numerous other biases that either haven’t been patched or won’t be patched because they fit OpenAI’s worldview. I know it’s impossible to create a fully unbiased… anything (highly recommend There is No Algorithm for Truth by Tom Scott if you have the interest and free time), but LLMs trained on our speech have learned our biases and can behave in appalling ways at times.
Worse, the majority of the data used to train LLMs comes from the internet, a place that often brings out the worst and most polarized sides of us.
That’s also very true. It’s a big problem.
Flexing that sci-fi knowledge real hard, my dude.
The AI that you’re describing probably won’t be achievable (if it’s even possible at all; we don’t fully understand human intelligence or the brain yet) until quantum computing is ubiquitous, so your whole argument is illogical.
For my own silly sci-fi take, I believe our brains are probably closer to quantum computing than traditional computing.
…What? No, why would that be a requirement? Unless you just mean that QC is easier so it will come first by chance?
I have worked extensively to build out GPT-4 and give it memory and other attributes.
I have no doubt at all that with supplemental modules to expand its context, it’s absolutely an AGI.