Even the Wayback Machine has limits to what is available.
Looks great, I’ll give it a bash
What you’re alluding to is the Turing test, and it hasn’t been proven that any LLM would pass it. At this moment, there are people who have failed the inverse Turing test, being unable to ascertain whether what they’re speaking to is a machine or a human. Fooling people that way can be done, and has been done, by things far less complex than LLMs, so it isn’t proof of an LLM’s capabilities over more rudimentary chatbots.
You’re also suggesting that it minimises the complexity of its outputs. My determination is that what we’re getting is the limit of what it can achieve. You’d have to prove that any allusion to higher intelligence can’t be attributed to coercion by the user, or to the model simply hallucinating based on imitations of artificial intelligence from media.
There are elements of the model that are very fascinating, like how it organises language into contextual buckets, but this is still a predictive model. Understanding that certain words appear near each other in certain contexts is hardly intelligence; it’s a sophisticated machine learning algorithm.
I mainly disagree with the final statement, on the basis that LLMs are more advanced predictive-text algorithms. The way they’ve been set up, with a chat box where you’re interacting directly with something that attempts human-like responses, gives the misconception that the thing you’re talking to is more intelligent than it actually is. It gives a strong appearance of intelligence, but at the end of the day it predicts the next word in a sentence based on what was said previously, without doing a good job of comprehending what exactly it’s telling you. It’s very confident when it gives responses, which also means that when it’s wrong, it delivers the incorrect response just as confidently.
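To make the “predicts the next word based on what was said previously” point concrete, here’s a toy sketch of the idea using simple bigram counts. This is only an illustration of next-word prediction in general; real LLMs condition on much longer contexts with learned weights, and the corpus and function names below are made up for the example.

```python
from collections import Counter, defaultdict

# Toy next-word predictor: count which word follows which in a tiny
# corpus, then return the most frequent continuation.
corpus = "the cat sat on the mat the cat ate the fish".split()

follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def predict(word):
    # Most common word seen immediately after `word` in the corpus.
    return follows[word].most_common(1)[0][0]

print(predict("the"))  # → cat ("cat" follows "the" twice, beating "mat" and "fish")
```

The predictor is confident in exactly the sense described above: `most_common` always returns *some* answer, with no notion of whether that answer makes sense.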
The Age rating is who can use the App, not how long it’s been up.
One of the simplest ways to safeguard against breakage is to have your /home on a separate partition. I realised I wouldn’t need to back it up and rebuild everything from scratch; I just needed to wipe the root partition and reinstall.
It’s made even easier by writing an installation script. Simply put, you can pipe a list of packages into pacstrap and use a little convenience tool for pulling a partition scheme out of a file.
I like to tinker and I’m aware that things will break so I have these tools that let me rebuild the system again in as short a time as possible.
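The package-list half of that script idea can be sketched like this. The file name, mount point, and helper name are assumptions for illustration; the snippet only builds the pacstrap command rather than running it, since a real install has to happen from an Arch live environment.

```python
import shlex

def build_pacstrap_cmd(package_file, root="/mnt"):
    """Turn a plain-text package list into a pacstrap invocation.

    Blank lines and '#' comments in the file are skipped, so the list
    can be annotated and kept under version control.
    """
    with open(package_file) as f:
        pkgs = [line.strip() for line in f
                if line.strip() and not line.startswith("#")]
    return ["pacstrap", root] + pkgs

# Example package list (written here just to demo the parsing):
with open("packages.txt", "w") as f:
    f.write("# base system\nbase\nlinux\n\nvim\n")

print(shlex.join(build_pacstrap_cmd("packages.txt")))
# → pacstrap /mnt base linux vim
```

For the partition side, `sfdisk` is one tool that fits the description in the comment: `sfdisk --dump /dev/sdX > layout` saves a partition scheme to a file, and `sfdisk /dev/sdX < layout` restores it.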
He died in 1982 but his works are hugely influential:
Philip K Dick.
What’s the full title?
YouTube will actually take action and has done so in most instances. I won’t say they’re the fastest, but they do kick people off the platform if they deem them high-risk.
I don’t understand the comments suggesting this is “guilty by proxy”. These platforms have algorithms designed to keep you engaged and through their callousness, have allowed extremist content to remain visible.
Are we going to ignore all the anti-vaxxer groups who fueled vaccine hesitancy which resulted in long dead diseases making a resurgence?
To call Facebook anything less than complicit in the rise of extremist ideologies and conspiratorial beliefs, is extremely short-sighted.
“But Freedom of Speech!”
If that speech causes harm, like convincing a teenager that walking into a grocery store and gunning people down is a good idea, you don’t deserve to have that speech. Sorry, you’ve violated the social contract, and those people’s blood is on your hands.
Wayland isn’t trying to be X12, and in all the time X11 has been around, there have been no plans for an X12 either. You want to discourage people from using Wayland, but you don’t encourage people to contribute to X11. You’re so hellbent on taking Wayland down, rather than convincing people that X11 is superior and easier to improve.
Netflix is full of reptiles who don’t care to offer a better service. All they want is enough market share to strongarm consumers into giving them more money.
Whenever some dipshit responds to me with “you’re talking about AGI, this is AI”, my only reply is fuck right off.
I’ve just done this dance already, and I’m tired of their watered-down attempts at bringing human complexity down to a level that makes their chatbots seem smart.
I don’t need a theory for this, you’re being highly reductive by focusing on a few features of human communication.
What research? These bots aren’t that complicated beyond an optimisation algorithm. Regardless of the tasks you give them, they can’t evolve beyond what they are.
I felt this with one of the laptops I put KDE Neon on. It had all manner of issues that never got a resolution.
There’s no way these chatbots are capable of evolving into Ultron. That’s like saying a toaster is capable of nuclear fusion.
Sounds like a great car! It does seem like something’s wrong with the battery so a replacement is in order.
I’m so sorry