Welcome to today’s daily kōrero!

Anyone can make the thread, first in first served. If you are here on a day and there’s no daily thread, feel free to create it!

Anyway, it’s just a chance to talk about your day, what you have planned, what you have done, etc.

So, how’s it going?

  • absGeekNZ@lemmy.nz · 11 months ago

    Thought for the day

    Do you run thought experiments?

    I was discussing the bullshit around OpenAI with a friend the other day. The fun kernel of the conversation was “what if the board was correct?”: suppose they really do have an AI that can reason. So we ran a thought experiment: if an AI can reason, how long before AI can do every job? … and we kept going.

    I like running thought experiments; they can be a lot of fun.

    • Mojojojo1993@lemmy.world · 11 months ago

      I do this. It’s good to keep my brain occupied while I do mundane stuff. I have to keep it occupied or it runs into other stuff.

    • liv@lemmy.nz · 11 months ago

      Definitely. I can relate so hard to the “middle out” scene in Silicon Valley.

      As far as I can see it’s a good thing and part of intellectual curiosity. But some people seem to find thought experiments annoying, and it would be interesting to find out why.

    • Dave@lemmy.nzOPM · 11 months ago

      Well now you mention it, how come AI is so crap in Star Trek and most other TV set in the future? Other than Data and the occasional other AI cameo, everything is surprisingly manual.

      If it’s possible to create an AI better than a person, and that AI creates an AI even better, and this repeats, are there still going to be jobs that only humans can do?

      • absGeekNZ@lemmy.nz · 11 months ago

        What you’re describing is an AI superintelligence; that exponential self-improvement loop is part of the definition.

        As for science fiction: an AI superintelligence would make humans irrelevant.

        • Dave@lemmy.nzOPM · 11 months ago

          Haha yeah, I guess in a world with AI superintelligence you don’t need any humans, and then the show isn’t very interesting.

          Which raises the next question: in a galaxy with 100 billion stars, why hasn’t life on some planet that evolved half a billion or a billion years before us managed to build a self-replicating AI explorer that spreads through the galaxy? Maybe a superintelligent AI has no need to explore the galaxy?

          • absGeekNZ@lemmy.nz · 11 months ago

            That is an interesting question in its own right.

            There are lots of theories on this:

            • From the mundane: maybe we are simply the first.
            • To the exotic: the “zoo” hypothesis, which holds that at least one group is keeping us “blind” to the real universe by manipulating our measurements, or our ability to measure at all.

            Even if no “far future tech” is available, using just fusion-based propulsion (near-future tech), the galaxy could be colonized in a few million years, which, considering the age of the universe/galaxy, is an extremely short time.
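
            A quick back-of-envelope sketch of that timescale (every number below is an illustrative assumption, not an established figure):

            ```python
            # Back-of-envelope: how fast a colonisation wavefront crosses the galaxy.
            # All inputs are illustrative assumptions, not measurements.
            GALAXY_DIAMETER_LY = 100_000  # rough diameter of the Milky Way disc
            SHIP_SPEED_C = 0.05           # assumed fusion-ship cruise speed (fraction of c)
            HOP_LY = 10                   # assumed distance between settled systems
            PAUSE_YEARS = 1_000           # assumed pause to industrialise before launching again

            years_per_hop = HOP_LY / SHIP_SPEED_C + PAUSE_YEARS  # transit + settling
            hops = GALAXY_DIAMETER_LY / HOP_LY
            total_years = hops * years_per_hop

            print(f"~{total_years / 1e6:.0f} million years to cross the galaxy")
            # With these numbers: 10,000 hops x 1,200 years = 12 million years.
            # Faster ships or shorter pauses bring it down to a few million,
            # still nothing against the galaxy's ~13-billion-year age.
            ```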

            • Dave@lemmy.nzOPM · 11 months ago

              Yeah, I love the Fermi paradox. What if AI is the great filter? Civilizations eventually build AI that can build better versions of itself, and the result is always that the AI kills the civilization (or some equivalent: say, people stop knowing how things work, the AI eventually breaks in some way, and then people can’t survive in a world built for AI).

              I like the Dark Forest idea too. It comes from the book The Dark Forest, the sequel to The Three-Body Problem, but knowing about it might spoil the book, so I don’t want to explain it here.

              • absGeekNZ@lemmy.nz · 11 months ago

                I know about the dark forest idea, but it is a little flawed (along with most of the proposed solutions): it is predicated on the assumption that ALL civilizations follow the script.

                • Dave@lemmy.nzOPM · 11 months ago

                  No, it doesn’t require that, because as soon as one doesn’t follow the script, they poke their metaphorical head up and BAM: get them before they get you. No one will have their head up for long, because they get wiped out.

                  • absGeekNZ@lemmy.nz · 11 months ago (edited)

                    That is the case, but waging interstellar war will necessarily reveal your position to the greater galaxy. In which case, you have “poked your head up”. If you assume that there are more than two such civilizations, then the war continues until there is only one.

                    At some point there are only two left, and it is unlikely that both will be eliminated so thoroughly that neither can ever rise again.

                    Run the thought experiment: assume you are civilization C, and you detect that somewhere “nearby” a civilization (Civ A) sends out a signal. A short time later a great war breaks out and Civ A is utterly eliminated; you detect slagged planets and exploded moons. You could not detect Civ B directly, but you know it is out there somewhere, and from the time delay between the signal and the attack you can estimate the probable radius within which Civ B could exist.

                    You step up your passive detection efforts, and eventually (hundreds of years later) you find Civ B. Now you know they are doomed, but you need to ensure you don’t meet the same fate. You also know that another civilization may have done exactly the same thing and is watching Civ B for any sudden change. But you can’t let a known civilization exist when you know they may find you at any time, and will eliminate you as soon as they do.
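
                    A sketch of that bounding-radius estimate (speeds and delays are made-up numbers, purely for illustration):

                    ```python
                    # Civ C bounding where Civ B can hide. If Civ A's signal left at
                    # t = 0 and the killing blow landed dt years later, Civ B's
                    # distance d must satisfy:
                    #     d / signal_speed + d / weapon_speed <= dt
                    # (the signal reaches B, then B's strike travels back to A).
                    # All numbers are illustrative assumptions.
                    SIGNAL_SPEED_C = 1.0   # the signal travels at light speed
                    WEAPON_SPEED_C = 0.5   # assumed speed of Civ B's strike (fraction of c)
                    DELAY_YEARS = 300      # observed gap between signal and destruction

                    max_radius_ly = DELAY_YEARS / (1 / SIGNAL_SPEED_C + 1 / WEAPON_SPEED_C)
                    print(f"Civ B must lie within ~{max_radius_ly:.0f} light-years of Civ A")
                    # With these numbers: 300 / (1 + 2) = 100 light-years of search volume.
                    ```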

          • deadbeef79000@lemmy.nz · 11 months ago

            This is part of the Fermi paradox.

            TL;DR: If the universe is ancient and infinite, where is everybody?

            Super AI is one possible answer: everyone creates it and is subsumed by it.

            If you’ve got effectively unlimited computational power, you just simulate everything rather than having to explore it.

            • Dave@lemmy.nzOPM · 11 months ago

              Whoops, I sorta responded to the wrong post with my thoughts, but yes, maybe life loses all meaning if you have super AI. Or maybe they eventually put you in a version of the Matrix after reasoning that happiness is the only meaning of life, so they can optimise our happiness that way.

              Maybe it has already happened.