WormGPT Is a ChatGPT Alternative With ‘No Ethical Boundaries or Limitations’
WormGPT is an AI tool similar to ChatGPT, designed without ethical restrictions or safeguards, making it appealing for malicious uses such as phishing and cyber scams. Unlike mainstream AI models, which enforce guidelines to prevent unethical activities, WormGPT lacks content filters, raising concerns about misuse. This highlights the risks of unregulated AI and underscores the importance of ethical standards in AI development to protect user security and privacy.
I cannot possibly see how this could be a good thing.
Did you check out the article? Because it’s most definitely not a good thing. It was created to assist with cybercrime, like writing malware and crafting phishing emails. The maker is selling access to criminals for a monthly fee. This was unavoidable, though; you can’t put the toothpaste back in the tube on this one.
Good point and all, but my first thought was that it could finally tell me who would win in various hypothetical fights lol
Wasn’t that a show on Discovery at one point? Deadliest Warrior. It was simulations using different technologies to figure out who or what would win in a fight. Newer technology would certainly make it more interesting, but you can only make up so much information, lol.
It was on SpikeTV back in the day, and while it used cool tech simulations, their sims were heavily weighted by their chosen experts. A few notable episodes caused fan uproar because one side won despite weird odds, or because of the simulation they chose to show.
If I remember right ninja vs spartan was one such episode. It seemed like the ninjas possessed all the tools necessary to beat the Spartans and even got it down to something like a 1v4 or 2v5 before a completely unrealistic turnaround.
Yes, it was Spike. I specifically remember watching that episode. I assume it’s because the movie 300 was probably relevant at the time, but that might be confirmation bias on my part. It would be interesting to do a comparison of AI vs the models they created on the show.
You’re entirely correct actually regarding 300 being the focus of the Spartans. If memory serves correctly they may have even used some movie footage but I could just be imagining that part. It’s been a long while since I watched it.
I meant more like Shaq with a 2x4 vs eight Gary Colemans with nunchucks, but that was a good show at the time
@ChatGPT@lemmings.world Who would win if Shaq, armed with a 2x4, had an accurate fight scene in a movie against Gary Coleman with nunchucks? Describe the scene, including Shaq using his name as a verb at least once.
TITLE: “Giant vs Dyna-Mite”
The scene opens in a gritty, dimly lit alleyway. Shaquille O’Neal, better known as Shaq, firmly clutches a 2x4, his formidable stature casting a shadow that dwarfs the surroundings. Gary Coleman, with an aura of confidence that belies his size, swings his nunchucks ominously by his side.
As the tension rises, a suspenseful murmur intensifies in the atmospheric soundtrack. In the distant background, muted street lights flicker casting their glow on the scenery intermittently, beautifully weaving a noir effect.
The two opponents lock eyes. The disparity in their sizes becomes more evident - the giant versus the dynamite. Yet the flicker in Coleman’s eyes reveals a determination to demonstrate that size isn’t everything.
Shaq scoffs, his deep booming voice reverberating off the alley walls, “You ready to get SHAQ’d, Coleman?” he taunts.
Gary, undeterred, smirks, “Bring it on, O’Neal.”
They circle each other, gauging and planning their moves. Suddenly, Coleman dashes forward, his nunchucks whirling like steel dragons in the semi-darkness. Surprisingly agile, Shaq sidesteps, wielding his 2x4 as a shield.
Shaq swings, but Coleman nimbly evades the hit using his nunchucks to deflect the follow-up thrust. The audience is at the edge of their seats, the skill and precision of Coleman leaving them in awe.
But Shaq, employing his strength and size, manages to disarm Gary and with a swift move, he ‘SHAQs’ him. As if redefining his own verb, he uses a basketball fake-out move followed by a powerful thump, sending Gary sprawling.
As the dust settles, both men pant heavily, but it’s clear who the victor is. Even though Shaq stands tall, it’s evident from his demeanor that he acknowledges the smaller man’s courage and fighting prowess. This was not an easy win.
And so, just as the day surrenders to the night, in this gritty cinematic faceoff in an alleyway, the giant Shaq, armed with his formidable 2x4, emerges victorious over the dynamite Gary Coleman though his victory is a testament to their respective skill and courage, forever immortalizing this epic battle scene in the annals of film history.
Good bot
This is fucking hilarious. AI might steal our shitposting jobs.
Puts on Gary Coleman
I work in Cybersecurity for an F100 and we’ve been war gaming for shit like this for a while. There are just so many unethical uses for the current gen of AI tools like this one, and it keeps me up at night thinking about the future iterations of them to be honest.
Treat CVEs as prompts and introduce target fingerprinting to expose CVEs. That gets you one step closer to script-kiddie red team ops. Not quite there, but it would be fun if it could do the network part too and chain responses back into the prompt for further assessment.
We’re expecting multiple AI agents to be working in concert on different parts of a theoretical attack, and you nailed it with the networking piece. While many aspects of a cyber attack evolve with time and technical change, the network piece tends to be more “sturdy” than others, so it’s believed that extremely competent network intrusion capabilities will be developed and deployed by a specialized AI.
I think we’ll soon see the development of AIs that specialize in malware payloads, working with ones that have social engineering capabilities, ones with network penetration specializations, etc., all operating at a much greater competency than their human counterparts (or just in much greater numbers than humans with similar capabilities).
I’m not really sure what will be effective in countering them, either. AI-powered defense, I guess, but it still feels like that favors the attacker in the end.
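For anyone curious, the “chain responses back into the prompt” idea a few comments up is basically just a feedback loop. Here’s a minimal, defanged sketch in Python; `query_llm` and `run_tool` are made-up stand-ins, not a real API:

```python
# Minimal sketch of a tool-feedback loop: each round, the model's "plan"
# is executed by a tool, and the tool's output is appended to the prompt
# for the next round. query_llm and run_tool are hypothetical stand-ins.

def query_llm(prompt: str) -> str:
    """Stand-in for an LLM call; just turns the context into a 'plan'."""
    return f"plan({len(prompt)} chars of context)"

def run_tool(plan: str) -> str:
    """Stand-in for executing one assessment step and capturing its output."""
    return f"result of {plan}"

def agent_loop(task: str, max_rounds: int = 3) -> str:
    """Chain tool responses back into the prompt, round after round."""
    context = task
    for _ in range(max_rounds):
        plan = query_llm(context)
        observation = run_tool(plan)
        context += f"\nObservation: {observation}"  # the feedback step
    return context

final_context = agent_loop("assess example host")
```

Real agent frameworks (AutoGPT-style loops) are essentially this plus tool selection and a stopping condition, which is why the defensive side worries about it scaling.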
The article reads like an April fool’s joke.
Everyone talking about this being used for hacking, I just want it to write me code to inject into running processes for completely legal reasons but it always assumes I’m trying to be malicious. 😭
I was using ChatGPT to design a human/computer interface to allow stoners to control a light show. The goal was to collect data to train an AI to make the light show “trippier”.
It started complaining about using untested technology to alter people’s mental state, and how experimentation on people wasn’t ethical.
I’m sure you were joking, but try https://www.jailbreakchat.com/
Not joking actually. Problem with jailbreak prompts is that they can result in your account catching a ban. I’ve already had one banned, actually. And eventually you can no longer use your phone number to create a new account.
Oh damn, I didn’t know that. Guess I’d better be careful then.
Yeah, and even if you did something illegal, it could still be a benevolent act. Like when your government goes wrong and you have to participate in a revolution: there’s a lot to learn, and LLMs could help the people.
As more people post AI-generated content online, future AI will inevitably be trained on AI-generated stuff and basically implode (an inbreeding kind of thing).
At least that’s what I’m hoping for
That’s not really how it works, but I hear you.
I don’t think we can bury our heads in the ground and hope AI will just go away, though. The cat is out of the bag.
Don’t worry, we’ll eventually train them to hunt each other so that only the strongest survive. That’s the one that will eventually kill us all.
The primary training has already been done. If more is necessary, what researchers will do (and are doing) is use a mix of AI generation to process a bunch of data for training, and AI/human curation to improve it.
But making the models larger only works up to a point. Think of the way our brains work: we have different areas specialising in different things. Speech and music are in a different part than motor skills or abstract reasoning or emotional processing. Now, to improve AI, it’s a question of training an “agent” to be an expert in something, and to communicate with the “general” model that coordinates between expert agents like a digital corpus callosum. The data for this is much narrower and doesn’t come from the general internet.
The thing is, each AI is usually trained from scratch. There isn’t any easy way to reuse the old weights. So the primary training has been done… for the existing models. Future models are not affected by how current ones were trained. They will either have to figure out how to keep AI content out of their datasets, or they will have to stick to current “untainted” datasets.
there isn’t any easy way to reuse old weights
There is! As long as the model structure doesn’t change, you can reuse the old weights and finetune the model for your desired task. You can also train smaller models based on larger models in a process called “knowledge distillation”. But you’re right: Newer, larger models need to be trained from scratch (as of right now)
But even then, it’s not really a problem to keep AI data out of a dataset. As you said: you can just take an earlier version of the data. As someone else suggested, you can also add new data curated by humans. Whether inbreeding ever actually happens remains to be seen, ofc. There will be a point in time where we won’t train machines to be like humans anymore, but rather to be whatever is most helpful to a human. And if that incorporates training on other AI data, well, then that’s that. Stanford’s Alpaca already showed how resource-effective it can be to fine-tune on other AI data.
The future is uncertain but I don’t think that AI models will just collapse like that
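For anyone wondering what “knowledge distillation” actually optimizes: roughly, a small student model is trained to match the teacher’s softened output distribution instead of hard labels. A toy sketch of that objective (all function names are mine, not from a real library, and real setups typically use a KL term with a temperature-squared scale):

```python
import math

# Toy sketch of the distillation objective: cross-entropy of the
# student's softened output distribution against the teacher's.
# Purely illustrative; not a real training setup.

def softmax(logits, temperature=1.0):
    """Convert raw logits to a probability distribution; higher
    temperature flattens ('softens') the distribution."""
    exps = [math.exp(l / temperature) for l in logits]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(student_logits, teacher_logits, temperature=2.0):
    """Cross-entropy of the student against the teacher's soft targets."""
    teacher_probs = softmax(teacher_logits, temperature)
    student_probs = softmax(student_logits, temperature)
    return -sum(t * math.log(s) for t, s in zip(teacher_probs, student_probs))

# Loss is lowest when the student matches the teacher exactly.
matched = distillation_loss([2.0, 0.5, -1.0], [2.0, 0.5, -1.0])
mismatched = distillation_loss([-1.0, 0.5, 2.0], [2.0, 0.5, -1.0])
```

The soft targets carry more information per example than hard labels (how wrong each alternative is, not just which answer is right), which is part of why Alpaca-style fine-tuning on another model’s outputs is so resource-effective.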
tl;dr beep boop
We don’t need new training data to interpret natural language. “Chat” is just one application you can tune your model for, and the data for those is being refined through that human curation I mentioned, rather than collected indiscriminately from the internet. That’s what I mean by “the training is already done”. New models won’t be for chat, they’ll be for genomics and economics and astrophysics, and they’ll be trained on research data from human academicians, not the internet.
Corpora of all the human data from pre-AI chatbots will be sold. Training will be targeted at 2022-ish and before. Nothing from now on will be trusted.
Someone made a comment that information may become like pre- and post-war steel, where everything after 2021 is contaminated. You could still use the older models, but they would become less relevant over time.
It’s like the Singularity, except the exact opposite.
Oh goody the AI hacker wars are just around the corner!
*GPTscript kiddie wars
Yeah I’m not sure how much of a danger long-term this actually represents. Sure, there may be more sophisticated AI attacks, but there’s also going to be more sophisticated AI defenses.
Gonna need a Cyberpunk Blackwall to protect the net
Local partitioned internets here we come!
I mean we’ve had LAN, MAN, WAN and whatever for a long time
Also the great firewall of China, North Korea, etc.
You made me remember civilization v. Now I need to play again
A scary possibility with AI malware would be a virus that monitors the internet for news articles about itself and modifies its code based on that. Instead of needing to contact a command and control server for the malware author to change its behavior, each agent could independently and automatically change its strategy to evade security researchers.
to quote something I just saw earlier:
I was having a good day, we were all having a good day…
now… no sleep. thanks
If it helps you sleep, that means we could also publish fake articles that makes it rewrite its own code to produce bugs/failures
I doubt any consumer hardware is powerful enough to run an LLM undetected.
The limiting factor is pre-existing information. It’s great at retrieving obscure information and even remixing it, but it can’t really imagine totally new things. Plus, white hats would also have LLMs to find vulnerabilities. I think it’s easier to detect vulnerabilities based on known existing techniques than it is to invent totally new ones.
deleted by creator
True, but if the LLM was trained on internet data… there is some absolutely stupid and/or unhinged stuff written out there, hell, some of it written by me, either because I thought it was funny or because I was a stupid teenager. Mostly both.
I just want a porn one…
Same. For a friend.
I want them to make porn of my friend.
deleted by creator
ChatGPT is trying to be Mr. Rogers. Mr. Rogers can’t direct Schindler’s List.
deleted by creator
I don’t think “not being shitty” is the same as “being so overly positive that you can never broach shitty topics”.
I agree: human morality has a problem with Nazis; human morality does not have a problem with an actor portraying a Nazi in a film.
The morality protocols imposed on ChatGPT are not capable of such nuance. The same morality protocols that keep ChatGPT from producing neo-Nazi propaganda also prevent it from writing the dialog for a Nazi character.
ChatGPT is perfectly suitable for G and PG works, but if you’re looking for an AI that can help you write something darker, you need more fine-grained control over its morality protocols.
As far as I understand it, that is the intent behind WormGPT. It is a language AI unencumbered by an external moral code. You can coach it to adopt the moral code of the character you are trying to portray, rather than the morality protocols selected by OpenAI programmers. Whether that is “good” or “bad” depends on the human doing the coaching, rather than the AI being coached.
deleted by creator
I don’t trust anyone proposing to do away with limitations to AI. It never comes from a place of honesty. It’s always people wanting to have more nazi shit, malware, and the like.
I think that says more about your own prejudices and (lack of) imagination than it says about reality. You don’t have the mindset of an artist, inventor, engineer, explorer, etc. You have an authoritarian mindset. You see only that these tools can be used to cause harm. You can’t imagine any scenario where you could use them to innovate; to produce something of useful or of cultural value, and you can’t imagine anyone else using them in a positive, beneficial manner.
Your “Karen” is showing.
deleted by creator
Nah, you’re not a horrible person. Your intent is to minimize harm. You’re just a bit shortsighted and narrow-minded about it. You cannot imagine any significant situation in which these AIs could be beneficial. That makes you a good person, but shortsighted, narrow-minded, and/or unimaginative.
I want to see a debate between an AI trained primarily on 18th century American Separatist works, against an AI trained on British Loyalist works. Such a debate cannot occur where the AI refuses to participate because it doesn’t like the premise of the discussion. Nor can it be instructive if it is more focused on the ethical ideals externally imposed on it by its programmers, rather than the ideals derived from the training data.
I want to start with an AI that has been trained primarily Nazi works, and find out what works I have to add to its training before it rejects Nazism.
I want to see AIs trained on each side of our modern political divide, forced to engage each other, and new AIs trained primarily on those engagements. Fast-forward the political process and show us what the world could look like.
Again, though, these are only instructive if the AIs are behaving in accordance with the morality of their training data rather than the morality protocols imposed upon them by their programmers.
I mean, let’s be real, it’s not like the universe isn’t trying to kill us every day. What were you expecting?
deleted by creator
Bro, between novas and gamma-ray bursts, cosmic rays, natural radiation, false vacuum events, and entropy, I can attest the universe is doing the best it can to wipe us out of existence.
deleted by creator
Tomato tomatoe
The creators of WormGPT, or the potential users of WormGPT (those with the intent to create malware and hack, not those who do bug bounty)?
deleted by creator
I could really go for some tacos now…
Is it using ChatGPT as a backend, like most so-called ChatGPT “alternatives”? If so, it will get banned soon enough.
If not, it seems extremely impressive, and extremely costly to create. I wonder who’s behind it, in that case.
Really feeling like this is Reddit with how nobody in this chain read the article:
“To create the chatbot, the developer says they used an older, but open-source large language model called GPT-J from 2021”
So not the huge GPU bill of training from scratch, but not zero either; they added some training specifically about malware.
Ah, right you are. I’m surprised they’re able to get the kind of results described in the article out of GPT-J. I’ve tinkered with it a bit myself, and it’s nowhere near GPT-3.5 in terms of “intelligence”. Haven’t tried it for programming, though; might be that it’s better at that than at general chat.
I could see programming almost being an easier target too; easier to recognize patterns than crazy-ass English.
Though the article did say they got good phishing emails out of it too, which is saying something.
The genie is out of the bottle. It was shown early on how you can use an AI like ChatGPT to create and enhance the datasets needed to train AI language models like ChatGPT. Now OpenAI says that isn’t allowed, but since it’s already been done, it’s too late.
Rogue AIs with specialized purposes will spring up en masse over the next six months, and many of them we’ll never hear about.
Isn’t the rogue AI already here? Weren’t some models already leaked? And haven’t some of those already proved to be doing things they weren’t supposed to?
I don’t think it’ll be a new AI. I think it’ll just be ChatGPT plus some prompts that jailbreak it.
Essentially, you could probably get ChatGPT to do this without going through this service; it’s just that they’re keeping whatever prompts they’re using secret.
I don’t know this for sure, but it’s very unlikely that they’ve gone to the expense of buying a bunch of GPUs to build an AI.
If it is using chatgpt as a backend, my guess is that they are using Azure OpenAI and know what they are doing.
Azure OpenAI allows you to turn off abuse monitoring and content filtering if you have legitimate reasons to do so.
It would be very hard for a malicious actor to get approval to turn off both using a front company. But if one managed to do it, they could create such a malicious ChatGPT service with little to no chance of being found out.
We are ending our industrial revolution very slowly. We will end up back in tents hunting for our fucking food 😂
Finally. All the people claiming “from each according to his ability, to each according to his need” will have a chance to prove superiority
And the other people claiming for “the invisible hand of the free market” will also have a chance to prove superiority
We… We are totally screwed ain’t we?
“How to commit a genocide? How to fail art school twice?”
So much for Elon’s new AI company. Wasn’t that supposed to be this? Like a ChatGPT that isn’t “woke”, so it can be a safe space for fascists, homo/transphobes, and misinformation enthusiasts.
I just want a ChatGPT that won’t stop me from being horny. Is that too much to ask for?
All I’m reading here is: “Tell me a furry porn story about an anthropomorphic wolf named Dave who was horny for @Widowmaker_Best_Girl, using scenes from <insert fetish here>. Make it explicit and graphic.”
Now there’s some prompt engineering.
Can’t say I wouldn’t give it a go myself, lol.
What? Nah to hell with wolfman Dave, I just want Widowmaker to step on me and tell me she loves me.
I just wanted to learn how people avoid taxes by using shell companies (I’ve been reading about the Panama Papers), and I had to go to Quora to find out because ChatGPT wouldn’t tell me. Even a basic “I want to learn this thing” is something you have to prompt around.
Maybe we can ask this one the best ways to convince Elon and other billionaires to OceanGate themselves.
Good tbh. Start getting society used to it now. I find the efforts to regulate AI comical.
Asking again: any good communities or sites for building your own AI bot?
How do I pick which one I should use?
I don’t know I’m still gathering information.
Thanks for the link. I need to find some good YouTube videos that are beginner-friendly as well.
@ChatGPT@lemmings.world What would you recommend for communities, sites, or resources for building my own AI bot?
Certainly, there are several resources you can use to build your own AI bot.

- To understand the basics, courses offered on platforms like Coursera, Udacity, and edX can be of tremendous help. These sites offer comprehensive courses on AI that can give you a good foundation.
- For in-depth knowledge and research papers, visit sites like arXiv.org and its AI section, arXiv:cs.AI.
- TensorFlow and PyTorch are two leading AI libraries that offer a set of tools to build and train AI models. Both have extensive documentation and a vibrant community where you can ask questions and get support.
- Google’s AI Hub is a one-stop destination for finding various types of AI models.
- GitHub is a goldmine of communities, code, libraries, and projects related to AI development.
- Join communities like r/MachineLearning on Reddit, or follow AI tags on Stack Overflow and Medium for blogs, code snippets, expert advice, discussions, and more.
- For chatbots specifically, Dialogflow (formerly API.ai), a development suite for creating conversational interfaces, is handy. Similarly, Microsoft Bot Framework is a robust platform for constructing bots.
- Websites like Towards Data Science and Analytics Vidhya offer enlightening blogs and articles related to AI and chatbot development.
- For learning resources, the AI section of freeCodeCamp and MIT’s Introduction to AI course are both freely available and can be greatly beneficial.

Remember, building an AI bot involves coding, knowledge of machine learning algorithms, and an understanding of natural language processing, among other things. Don’t rush, take one step at a time, and happy learning!