Welcome to today’s daily kōrero!
Anyone can make the thread, first in first served. If you are here on a day and there’s no daily thread, feel free to create it!
Anyway, it’s just a chance to talk about your day, what you have planned, what you have done, etc.
So, how’s it going?
Thought for the day
Do you run thought experiments?
I was discussing with a friend the other day the bullshit around OpenAI. The fun kernel of the conversation was “what if the board was correct?”, i.e. if we assume that they have an AI that can reason. So we ran a thought experiment: if an AI can reason, how long before AI can do every job? … and we kept going.
I like running thought experiments; they can be a lot of fun.
I do this. It’s good to keep my brain occupied while I do mundane stuff. I have to keep it occupied or it wanders off into other stuff.
Definitely. I can relate so hard to the “middle out” scene in Silicon Valley.
As far as I can see it’s a good thing and part of intellectual curiosity. But some people seem to find thought experiments annoying, and it would be interesting to find out why.
Well now you mention it, how come AI is so crap in Star Trek and most other TV set in the future? Other than Data and the occasional other AI cameo, everything is surprisingly manual.
If it’s possible to create an AI better than a person, and that AI creates an AI even better, and this repeats, are there still going to be jobs that only humans can do?
What you are referring to is an AI superintelligence; the exponential growth is part of it.
As for science fiction, AI superintelligence makes humans irrelevant.
Haha yeah I guess in a world with AI superintelligence, you don’t need any humans, then the show isn’t very interesting.
Which raises the next question. In a galaxy with 100 billion stars, why hasn’t life on some planet that evolved a billion or half a billion years before us managed to make a self-replicating AI explorer that explores the galaxy? Maybe a superintelligent AI has no need to explore the galaxy?
That is an interesting question in its own right.
There are lots of theories on this:
Even if no “far future tech” is available, using just fusion-based propulsion (near-future tech), the galaxy could be colonized in a few million years, which, considering the age of the universe/galaxy, is an extremely short time.
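To see why “a few million years” comes out, here’s a toy back-of-envelope calculation. All the numbers are my own assumptions (galaxy diameter, fusion cruise speed, hop distance, replication time), not anything established, so treat it as a sketch:

```python
# Back-of-envelope estimate of galactic colonisation time via
# self-replicating probes. Every constant below is an assumption:
GALAXY_DIAMETER_LY = 100_000   # assumed diameter of the Milky Way disc, in light years
CRUISE_SPEED_C = 0.05          # assumed fusion-drive cruise speed, as a fraction of c
HOP_LENGTH_LY = 10             # assumed distance between useful star systems
PAUSE_PER_HOP_YEARS = 500      # assumed time at each stop to build the next probes

# Travelling d light years at a fraction f of light speed takes d / f years.
travel_years = GALAXY_DIAMETER_LY / CRUISE_SPEED_C

# Crossing the disc one hop at a time, each hop adding a replication pause.
hops = GALAXY_DIAMETER_LY / HOP_LENGTH_LY
pause_years = hops * PAUSE_PER_HOP_YEARS

total_years = travel_years + pause_years
print(f"travel: {travel_years:,.0f} y, pauses: {pause_years:,.0f} y, "
      f"total: {total_years:,.0f} y")
```

With these numbers the total lands around seven million years, and even making each assumption several times more pessimistic keeps it in the tens of millions, i.e. still a tiny fraction of the galaxy’s ~13-billion-year age.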
Yeah, I love the Fermi paradox. What if AI is the great filter? Civilizations eventually build AI that can build better versions of itself, and the result is always that the AI kills the civilization (or some equivalent - say, people stop knowing how things work, the AI eventually breaks in some way, and then people can’t survive in the world built for AI).
I also like the Dark Forest idea too. From the book The Dark Forest which is the sequel to The Three Body Problem. But knowing about it might spoil the book so I don’t want to explain it here.
I know about the dark forest idea, but it is a little flawed (along with most of the proposed solutions): it is predicated on the assumption that ALL civilizations follow the script.
No it doesn’t require that, because as soon as one doesn’t follow the script, they poke their metaphorical head up and BAM, get them before they get you. No one will have their head up for long because they get wiped out.
This is the case, but waging interstellar war will necessarily reveal your position to the greater galaxy, in which case you have “poked your head up”. If you assume that there are more than two such civilizations, then the war continues until there is only one.
At some point there are only two, and it is unlikely that both will be eliminated to the point that neither can ever rise again.
Run the thought experiment: assume you are civilization C. You detect that somewhere “nearby” a civilization (Civ A) sends out a signal. A short time later a great war breaks out. Civ A is utterly eliminated; you detect slagged planets and exploded moons. Whilst you could not directly detect Civ B, you know that Civ B is out there somewhere; you know the time delay and thus can estimate the probable radius within which Civ B could exist. You step up your passive detection efforts and eventually (hundreds of years later) you find Civ B. Now you know that they are doomed, but you need to ensure you don’t meet the same fate. You also know that another civilization may have done exactly the same thing, and is watching Civ B for any sudden change. But you can’t let a known civilization exist when you know that they may find you at any time, and that they will eliminate you as soon as they do.
This is part of The Fermi Paradox.
TL;DR: If the universe is ancient and infinite, where is everybody?
Super AI is one possible answer: Everyone creates it, is subsumed by it.
If you’ve got effectively unlimited computational power, you just simulate everything rather than having to explore it.
Whoops, I sorta responded to the wrong post with my thoughts, but yes, maybe life loses all meaning if you have super AI. Or maybe they eventually put you in a version of the Matrix after reasoning that happiness is the only meaning of life, so they can optimise our happiness that way.
Maybe it has already happened.
It’s simulations all the way down.