People think they are actually intelligent and perform reasoning. This article discusses how and why that is not true.
They do both. The article fails to successfully argue that point and just turns the AI's failure to answer an irrelevant trivia question into a gotcha moment.
I would encourage you to ask ChatGPT itself if it is intelligent or performs reasoning.
ChatGPT: I can perform certain types of reasoning and exhibit intelligent behavior to some extent, but it’s important to clarify the limitations of my capabilities. […] In summary, while I can perform certain forms of reasoning and exhibit intelligent behavior within the constraints of my training data, I do not possess general intelligence or the ability to think independently and creatively. My responses are based on patterns in the data I was trained on, and I cannot provide novel insights or adapt to new, unanticipated situations.
That said, this is one area where I wouldn't trust ChatGPT one bit. It has no introspection (outside of the prompt), since it has no long-term memory. So everything it says about itself is based on whatever marketing material OpenAI trained it with.
Either way, any reasonable conversation with the bot will show that it can reason and is intelligent. The fact that it sometimes gets things wrong is absolutely irrelevant, since every human does too.
I think it's hilarious you aren't listening to anyone telling you you're wrong, even the bot itself. Must be nice to be so confident.
You've got to provide actual arguments, examples, failure cases, etc. Instead, all I see is repetition of the same tired talking points from nine months ago, when the thing launched. It's boring and makes me seriously doubt whether humans are capable of original thought.
I think their creators have deliberately disconnected the runtime AI model from re-reading its own training material, because it's a copyright and licensing nightmare.
What else should they be?? They reflect human language.