Most of this is just marketing crap from Anthropic.
Finding vulnerabilities in code and generating complex, multi-step exploits with publicly available models is possible now. The biggest hurdles now are setting the correct context and actually knowing what to look for. Any “guardrails” against this behavior are easily bypassed by framing the detection and exploit generation as a legitimate dev-style question, even in the most difficult cases.
They likely just trained a model without guardrails in this case.
What they are doing here is over-hyping a problem and framing it as if they are the only ones with a solution. LLM security issues are more in focus now that companies have dumped a ton of resources into building AI systems they don’t really understand.

I seem to remember that phenomenon quite a bit in the late 80’s/early 90’s. Many of my memories from that time are mostly lost, but I do specifically remember just buying singles of Top 40 hits because I knew everything else on the album was going to be crap. (MC Hammer comes to mind, but there were many more around that period.)