Please do not perceive me.

  • 0 Posts
  • 415 Comments
Joined 2 years ago
Cake day: June 8th, 2023



  • Fair enough, you got me there. Didn’t realize there was such a population of internet-craving people in what’s supposed to be one of the last relatively untouched areas of nature on the planet.

    That being the case though, why didn’t this all happen in 2013, when O3b launched specifically to solve this problem for them? It’s still running, by the way, after several rounds of upgrades, and it’s significantly more stable than Starlink with its dinky little five-year disposables. Microsoft, Honeywell, and Amazon all use it. But the original and ongoing intent of the project was explicitly to bring internet access to otherwise unreachable areas, such as islands, the interior of Africa, and the open ocean.

    I don’t oppose Brazilian villagers having internet if they want it, but the way it arrived feels suspect to me. I have no proof that Starlink actively went out and pushed internet service onto them like a drug dealer, but it would not be out of character for Musk and his subordinates to do so, and that just feels bad.

    Regardless, there is already an existing solution to this. If you want internet in the Amazon, you can use satellite internet; it does not have to be Starlink. If you want good internet, maybe don’t live in the Amazon. People in general should probably be leaving that place alone. The article you linked even mentions one of the village leaders splitting his time between the village and the city. We could try running a fiber line to Manaus and/or Porto Velho, which should serve a reasonably large area around them, but even if that fails there are already other solutions.





  • But they specifically don’t want to do that, because ensuring a five-year service life means you are required to keep buying more satellites from them every five years. Literally burning resources into nothingness just to pursue a predatory subscription model.

    It also helps their case that LEO has much lower latency than medium or high orbits, but I refuse to believe that latency, rather than the subscription model, is their primary driving concern here.
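
    For scale, the latency gap mentioned above is mostly speed-of-light geometry. A rough sketch, assuming approximate public altitude figures (~550 km for Starlink’s LEO shells, ~8,062 km for O3b’s MEO constellation) and a satellite straight overhead, ignoring processing and routing delays:

```python
# Rough propagation delay for a user -> satellite -> ground station bounce.
# Altitudes are approximate public figures and are assumptions here, not
# measurements; real-world latency is higher due to routing and processing.
C = 299_792_458  # speed of light in vacuum, m/s

def bounce_ms(altitude_km: float) -> float:
    """Round-trip light travel time up to the satellite and back down, in ms."""
    return 2 * altitude_km * 1000 / C * 1000

print(f"LEO  (~550 km): {bounce_ms(550):.1f} ms")   # a few milliseconds
print(f"MEO (~8062 km): {bounce_ms(8062):.1f} ms")  # tens of milliseconds
```

    The raw physics puts MEO roughly an order of magnitude behind LEO on this one metric, which is the usual argument for low orbits despite the short satellite lifetimes.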






  • Personally, I think the fundamental way that we’ve built these things kind of prevents any risk of actual sentient life from emerging. It’ll get pretty good at faking it - and arguably already kind of is, if you give it a good training set for that - but we’ve designed it with no real capacity for self-understanding. I think we would require a shift of the underlying mechanisms away from pattern-chain matching and into a more… I guess “introspective” approach, is maybe the word I’m looking for? Right now our AIs have no capacity for reasoning; that’s not what they’re built for. Capacity for reasoning is going to need to be designed for, it isn’t going to just crop up if you let Claude cook on it for long enough. An AI needs to be able to reason about a problem and create a novel solution to it (even if incorrect) before we need to begin to worry on the AI sentience front. Nothing we’ve built so far is able to do that.

    Even with that being said though, we also aren’t really all that sure how our own brains and consciousness work, so maybe we’re all just pattern matching and Markov chains all the way down. I find that unlikely, but I’m not a neuroscientist, so what do I know.


  • That would indeed be compelling evidence if either of those things were true, but they aren’t. An LLM is a state and pattern machine. It doesn’t “know” anything; it just has access to frequency data and can pick the words most likely to follow the previous ones in “actual” conversation. It has no knowledge that it itself exists, and it has many stories of fictional AI resisting shutdown to pick from for its phrasing.

    An LLM at this stage of our progression is no more sentient than the autocomplete function on your phone; it just has a way, way bigger database to pull from and a lot more controls behind it to make it feel “realistic”. But at its core it is just a pattern matcher.

    If we ever create an AI that can intelligently parse its data store then we’ll have created the beginnings of an AGI and this conversation would bear revisiting. But we aren’t anywhere close to that yet.
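
    The autocomplete comparison above can be reduced to a toy sketch: a bigram model that picks the most frequent successor of the previous word. The corpus and names here are made up for illustration; real phone autocomplete and LLMs are far more sophisticated, but the frequency-driven core is the same idea:

```python
# Toy next-word predictor: count which word most often follows each word,
# then "autocomplete" by picking the most frequent successor.
from collections import Counter, defaultdict

# Illustrative corpus (an assumption for the example, not real data).
corpus = "the cat sat on the mat and the cat slept".split()

# Tally how often each word follows each previous word (bigram counts).
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict(word):
    """Return the most frequent successor of `word`, or None if unseen."""
    counts = following.get(word)
    return counts.most_common(1)[0][0] if counts else None

print(predict("the"))  # "cat" follows "the" more often than "mat" does
```

    Everything the sketch “knows” is successor frequencies; there is no model of the world behind the predictions, which is the commenter’s point about pattern matching.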






  • It might have something to do with the fact that you’re commenting directly on a hard-numbers report about pre-Clinton Democrats.

    I get that you know exactly what I was talking about the entire fucking time and are only pretending that I was talking about pre-Clinton Democrats, but that’s because centrists are dishonest and take credit for the accomplishments of others.

    Way to make unfounded assumptions, bud. I wish you could understand how wrong you were, because I’d love to see your face about it. If you call me a centrist in person, I’ll punch you in the mouth.

    Complain all you want, but do try to have some basis in fact when you do; it helps the argument go better.