• 0 Posts
  • 37 Comments
Joined 1 year ago
Cake day: July 13th, 2023

  • None of that really works anymore in the age of AI inpainting. Hash matching and perceptual hashing worked well before, but the people doing this are specifically interested in causing destruction and chaos with this content. They don’t need it to be authentic to do that.

    It’s a problem that requires AI on the defensive side, but even that will just be an eternal arms race. This problem cannot be solved with technology, only mitigated.

    The ability to exchange hashes on moderation actions against content may offer a way out, but it will change the decentralized nature of everything - basically bringing us back to the early days of Usenet and the Usenet Death Penalty.
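
    The hashing limitation above can be shown in a toy sketch (pure Python, made-up 8x8 grayscale grids standing in for downscaled images): a minimal average-hash survives small tonal edits, but regenerating whole regions, as inpainting does, flips many bits.

```python
# Minimal average-hash (aHash) sketch - illustrative only, no libraries.
# Assumes images are already downscaled to 64 grayscale values (0-255).

def average_hash(pixels):
    """Return a 64-bit hash: 1 where a pixel is >= the mean, else 0."""
    avg = sum(pixels) / len(pixels)
    bits = 0
    for p in pixels:
        bits = (bits << 1) | (1 if p >= avg else 0)
    return bits

def hamming_distance(h1, h2):
    """Count differing bits; small distance = perceptually similar."""
    return bin(h1 ^ h2).count("1")

original  = [30] * 32 + [220] * 32                            # half dark, half bright
tweaked   = [35] * 32 + [210] * 32                            # slight tone shift
inpainted = [30] * 16 + [220] * 16 + [30] * 16 + [220] * 16   # regions regenerated

print(hamming_distance(average_hash(original), average_hash(tweaked)))    # 0: hash survives
print(hamming_distance(average_hash(original), average_hash(inpainted)))  # 32: hash broken
```

    The tone-shifted copy still matches, which is the whole point of perceptual hashing; but the inpainted version, despite looking plausible to a human, lands far outside any match threshold.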



  • You are answering a question with a different question. LLMs don’t make pictures of your mom. And this particular question? It’s one that has existed roughly since Photoshop has.

    It just gets easier every year, but it was already easy. For years now, you could pay someone 15 bucks on Fiverr to do all of that.

    Nothing really new here.

    The technology is also easy. Matrix math. About as easy to ban as mp3 downloads. Never stopped anyone. It’s progress. You are a medieval knight asking to put gunpowder back into the box, but it’s clear it cannot be put back - it is already illegal to make non-consensual imagery, just as it is illegal to copy books. And yet printers and photocopiers exist.
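
    To illustrate the “matrix math” point: the core operation inside any of these models is just repeated matrix-vector multiplication plus a nonlinearity. A toy sketch with made-up weights (not any real model) shows how little machinery is actually involved:

```python
# One dense-layer forward pass in pure Python - illustrative weights only.

def matmul_vec(W, x):
    """Multiply matrix W (list of rows) by vector x."""
    return [sum(w * v for w, v in zip(row, x)) for row in W]

def relu(v):
    """Zero out negative activations."""
    return [max(0.0, u) for u in v]

W = [[0.5, -1.0], [2.0, 0.25]]   # made-up weights
x = [1.0, 2.0]                   # input vector (e.g. latent noise)
print(relu(matmul_vec(W, x)))    # [0.0, 2.5]
```

    Real models just stack millions of these operations; there is nothing in the math itself that a ban could meaningfully target.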

    Let me be very clear - accepting the reality that the technology is out there, that it’s basic, easy to replicate, and on a million computers now, is not disrespectful to victims of non-consensual imagery.

    You may not want to hear it, but just like with encryption, the only other choice society has is full surveillance of every computer to prevent people from doing “bad things”. Everything you complain about is already illegal and has already been possible - it just gets cheaper every year. What you want protection from is technological progress, because society sucks at dealing with its consequences.

    To be perfectly blunt, you don’t need to train any generative AI model for powerful deepfakes. You can use technology like Roop and ControlNet to synthesize any face onto any image from a single photograph. Training not necessary.

    When you look at it that way, what point is there to try to legislate training with these arguments? None.



  • It’s wishful thinking on your part. Every AI model in existence, from computer vision to the photo adjustments in your phone camera, was trained this way.

    The only reason there’s a stink now is that certain lobbies are suddenly losing their own jobs, as opposed to blue-collar workers.

    But there’s more than a decade of precedent now to fall back on, and not one legal case to show that it’s not fair use.

    So would you kindly cite the case decisions that back up your assertion? Or are you just hallucinating like an LLM because you want a certain outcome to be true? Geez, I wonder where the technology learned that.