• 0 Posts
  • 24 Comments
Joined 4 years ago
Cake day: July 8th, 2020

  • I certainly am not surprised that OpenAI, Google and so on are overstating the capabilities of the products they are developing and currently selling. Obviously it’s important for the public at large to be aware that you can’t trust a company to accurately describe products it’s trying to sell you, regardless of what the product is.

    I am more interested in what academics have to say, though. I expect them to be more objective and to have more altruistic motivations than your typical marketeer. The reason I asked how you would define intelligence was really just that I find it a fascinating area of thought, and have done long before this new wave of LLMs hit the scene. It’s also one which does not have clear answers, and different people will have different insights and perspectives.

    There are several concepts which are often blurred together: intelligence, being clever, being well educated, and consciousness. I personally consider these to be separate concepts; while they may have some overlap, they are nevertheless all very different things. I have met many people who have very little formal education but are nonetheless very intelligent.

    As for AI and LLMs, I believe that an LLM does encapsulate some degree of genuine intelligence - it appears to somehow encode a model of the universe in its billions of parameters, and it is able to respond meaningfully to natural-language questions on almost any subject - but an LLM is unquestionably not a conscious being.


  • You’re right that we need a clear definition of intelligence if we are to make any predictions about achieving AGI. The researchers behind this article appear to mean “human-level cognition”, which doesn’t seem to be a particularly objective or useful yardstick. To begin with, which human are we talking about? If they mean an idealised, maximally intelligent human, then I don’t think we should be surprised that we aren’t about to achieve that. The goal is not to recreate human cognition as if that were some kind of holy grail. The goal is to make intelligent systems which can give results at least as good as what a skilled, well-trained human would produce working on the same problem.

    Can I ask you how you would define intelligence? And in particular, how would you - if you would at all - differentiate intelligence from being clever, or from being well educated?


  • It models only use of language

    This phrase, so casually deployed, is doing some seriously heavy lifting. Language is by no means a trivial thing for a computer to meaningfully interpret, and the fact that LLMs do it so well is way more impressive than a casual observer might think.

    If you look at earlier procedural attempts to interpret language programmatically, you will see that time and again the developers get stopped in their tracks, because in order to understand a sentence you need to understand the universe - or at least a particular corner of it. For example, given the sentence “The stolen painting was found by a tree”, you need to know what a tree is in order to interpret it correctly: a tree is not the kind of thing that can find a painting, so “by” must be marking a location, not an agent.

    You can’t really use language *unless* you have a model of the universe.
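    To make the point concrete, here is a toy sketch (purely illustrative - not any real NLP library, and the lookup table is invented) of why disambiguating “found by a tree” needs world knowledge: the hypothetical `CAN_FIND` table below encodes the one relevant fact, namely which nouns can plausibly act as the agent of “find”.

    ```python
    # Hypothetical world-knowledge table: can this noun act as the
    # agent of the verb "find"? (Illustrative entries only.)
    CAN_FIND = {"detective": True, "dog": True, "tree": False, "river": False}

    def interpret_by_phrase(noun: str) -> str:
        """Choose a reading for 'The painting was found by a <noun>'."""
        if CAN_FIND.get(noun, False):
            return "agent"     # the noun did the finding
        return "location"      # the noun marks where it was found

    print(interpret_by_phrase("detective"))  # agent
    print(interpret_by_phrase("tree"))       # location
    ```

    Without the facts in `CAN_FIND` - a stand-in for a model of the universe - both readings are equally available to the program, which is exactly where the old procedural parsers got stuck.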


  • Heroic works really well. I installed it myself recently, motivated mostly by a desire to finally play the free games I got from Epic. I’ve only installed two EGS games so far - Civ 6 and Guardians of the Galaxy - but they’re working perfectly, running via Proton.

    The experience is so good that I was actually inspired to buy my first game outside of Steam in years, namely Wartales, which I bought yesterday on GOG. Installation is a breeze, it runs under Proton, and as far as I can tell it runs perfectly.

    In fact I sort of prefer Heroic to Steam, because it starts almost immediately - no waiting around for 30 seconds while Steam tries to connect to its network, etc.


  • What do you think evolved first - verbal communication, or thoughts? Presumably we were able to think before we could speak, no? The words we have in our language are like pointers to internal concepts, and it seems to me that those internal concepts would have existed before language was a thing. The mouth-sounds, as you put it, are not the thoughts themselves, just labels for specific concepts. It might be possible, and even convenient, to think in mouth-sounds, but it’s not necessary for logical thought.


  • I cannot wait until architecture-agnostic ML libraries are dominant and I can kiss CUDA goodbye for good

    I really hope this happens. After being on Nvidia for over a decade (a 960 for 5 years, and similar midrange cards before that), I finally went AMD at the end of last year. Then of course AI burst onto the scene this year, and I’ve not yet managed to get Stable Diffusion running - to the point that it’s made me wonder whether I made a bad choice.


  • Same. I had an Nvidia 960 for about 5 years on Arch with very few problems. Maybe twice over that time I had to roll back to an older version temporarily due to some incompatibility with Wine or the like.

    Towards the end of last year I finally decided to upgrade (mostly to play RDR2) and I went with AMD. I love the feel of using a pure open source gfx stack, but there is no real functional advantage to it.


  • I find your comment interesting, because it implies that some people believe being stupid or clever is a permanent, unchangeable state. Presumably one is born as either one or the other?

    I would say that some ways of thinking are stupid - in particular, never challenging one’s own assumptions. It’s possible to build a whole world of stupid on top of bad assumptions. If someone’s entire worldview is built this way - a whole load of bad assumptions held together with poor logic and wishful thinking - then I don’t think they’re even living in the real world any more; they’re living in a fantasy land.