
  • Yes, it would be much better at mitigating it and would beat all humans at truth accuracy in general. And truths which can easily be individually proven, or which remain unchanged forever, can basically be right 100% of the time. But not all truths are that straightforward.

    What I mentioned can’t really be unlinked from the issue if you want to solve it completely. Have you ever learned later on that something you had told someone else as fact wasn’t actually true? Essentially, you ‘hallucinated’ a truth that never existed, but you were confident enough in it to share and spread it. That’s how we get myths, popular belief, and folklore.

    For those other truths, we simply take as true whatever has reached a likelihood we consider certain. But the ideas and concepts in our minds constantly float around on that scale. And since we cannot really avoid talking to other people (or intelligent agents) to ascertain certain truths, misinterpretations and lies can sneak in and cause us to treat as truth that which is not. Avoiding that would mean having to be pretty much everywhere at once to personally interpret information straight from the source. But then things like how fast it can process all of that come into play. Without making guesses about what’s going to happen, you basically can’t function in reality.


  • Yes, a theoretical future AI able to self-correct would eventually become more powerful than humans, especially if you could give it ways to run orders of magnitude more self-correcting mechanisms at the same time. But it would still be making ever so small assumptions wherever there is a gap in the information it has.

    It could be humble enough to admit it doesn’t know, but it can still be mistaken and think it has the right answer when it doesn’t. It would feel nigh omniscient, but it would never truly be.

    A roundtrip around the globe on glass fibre takes hundreds of milliseconds, so even if it holds the truth on some matter, there’s no guarantee that truth didn’t change in the milliseconds it took to become aware of it. True omniscience simply cannot exist, since information (and in turn the truth encoded by that information) propagates at most at the speed of light.
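    A quick back-of-the-envelope sketch of that latency floor (the refractive index here is a typical textbook value for glass fibre, not a measurement of any real network):

    ```python
    # Lower bound on round-the-world latency through optical fibre.
    C_VACUUM_KM_S = 299_792            # speed of light in vacuum, km/s
    FIBRE_INDEX = 1.47                 # typical refractive index of glass fibre (assumption)
    EARTH_CIRCUMFERENCE_KM = 40_075    # equatorial circumference, km

    fibre_speed = C_VACUUM_KM_S / FIBRE_INDEX        # ~204,000 km/s in glass
    trip_s = EARTH_CIRCUMFERENCE_KM / fibre_speed    # to the far side of the planet and back
    print(f"~{trip_s * 1000:.0f} ms")                # ~197 ms, before any routing overhead
    ```

    Real networks add switching and indirect routing on top of that, so “hundreds of milliseconds” is, if anything, optimistic.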

    > A big mistake you are making here is stating that it must be fed information that it knows to be true, this is not inherently true. You can train a model on all of the wrong things to do, as long as it has the capability to understand this, it shouldn’t be a problem.

    The dataset that encodes all wrong things would be infinite in size and constantly changing. It can theoretically exist, but realistically it never will. And if it were incomplete, the model would have to make assumptions at some point based on the incomplete data it has, which would open it up to being wrong, which we would call a hallucination.


  • I’m not sure where you think I’m giving it too much credit, because as far as I read it we already totally agree lol. You’re right, methods exist to diminish the effect of hallucinations. That’s what the scientific method is. Current AI has no physical body and can’t run experiments to verify objective reality. It can’t fact-check itself other than by being told by the humans training it what is correct (and humans are fallible), and even then, if it has gaps in what it knows it will fill them with something probable - which is likely going to be bullshit.

    My point was just that truly fixing it would basically mean creating an omniscient being, which cannot exist in our physical world. It will always have to make some assumptions - just like we do.


  • Hallucinations in AI are fairly well understood, as far as I’m aware. They’re explained at a high level on the Wikipedia page for the topic. And I’m honestly not making any objective assessment of the technology itself. I’m making a deduction based on the laws of nature and biological facts about real-life neural networks. (I do say AI is driven by the data it’s given, but that’s something even a layman might know.)

    How to mitigate hallucinations is definitely something the experts are actively discussing, with limited success so far (and I certainly don’t have an answer there either), but a true fix should be impossible.

    I can’t exactly say why I’m passionate about it. In part I want people to be informed about what AI is and is not, because knowledge about the technology allows us to make more informed decisions about the place AI takes in our society. But I’m also passionate about human psychology and creativity, and what we can learn about ourselves from the quirks we see in these technologies.



  • It will never be solved. Even the greatest hypothetical superintelligence is limited by what it can observe and process. Omniscience doesn’t exist in the physical world. Humans hallucinate too - all the time. It’s just that our approximations are usually correct, so we don’t call them hallucinations anymore. For example, the signals coming from our feet take longer to reach the brain than those from our eyes, so our brain has to predict information to stitch together a coherent experience. It’s also why we don’t notice our blinks, or why we don’t see the blind spot our eyes have.

    AI, being a more primitive version of our brains, will hallucinate far more, especially because it cannot verify anything in the real world and is limited by the data it has been given, which it has to treat as ultimate truth. The mistake was trying to turn AI into a source of truth.

    Hallucinations shouldn’t be treated like a bug. They are a feature - just not one the big tech companies wanted.

    When humans hallucinate on purpose (and not due to illness), we get imagination and dreams; fuel for fiction, but not for reality.


  • It’s funny how something like this gets posted every few days and people keep falling for it like it’s somehow going to end AI. The people that make these models are acutely aware of how to avoid model collapse.

    It’s totally fine for AI models to train on AI-generated content, as long as it’s of high enough quality. Part of the research behind training models is building datasets with a text description matching the content, and filtering out content that is not organic enough (or even specifically including it as a ‘bad’ example for the AI to avoid). AI can produce material indistinguishable from human work, and it produces material that wasn’t originally in the training data. There’s no reason that can’t be good training data itself. A minimal sketch of that filtering idea follows below.
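    To make the filtering step concrete, here’s a minimal sketch of the idea (the scorer and the 0.8 threshold are hypothetical stand-ins, not taken from any published pipeline):

    ```python
    # Minimal sketch: curating a mixed human/AI corpus before training.
    # quality_score() stands in for whatever learned classifier or heuristic
    # a lab might actually use; the 0.8 threshold is an arbitrary assumption.

    def quality_score(text: str) -> float:
        """Hypothetical scorer: higher means more coherent, 'organic' text."""
        words = text.split()
        unique_ratio = len(set(words)) / max(len(words), 1)  # crude repetition check
        return unique_ratio

    def curate(samples: list[str], threshold: float = 0.8) -> list[str]:
        """Keep samples above the quality bar, regardless of who authored them."""
        return [s for s in samples if quality_score(s) >= threshold]

    corpus = [
        "The cat sat on the mat and watched the rain fall.",  # kept: varied wording
        "the the the the the the the the",                    # dropped: degenerate output
    ]
    print(curate(corpus))  # ['The cat sat on the mat and watched the rain fall.']
    ```

    In practice the scoring model is the hard part; the point is just that synthetic data isn’t poison by definition - unfiltered data is.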




  • That’s a pretty sloppy reason. A nuanced topic can’t really be explained in anything but descriptive language. Especially if you care about people’s livelihoods and passion. I care about my artist friends, colleagues, and acquaintances, so I will support them in securing their livelihoods in this changing landscape.

    > Artists are largely not computer experts and artists using AI are buying Microsoft or Adobe or using freebies and pondering paid upgrades. They are also renting rather than buying because everything’s a subscription service now.

    I really don’t like this characterization of artists. They are not dumb, nor incapable of learning. Technical artists exist too. Installing open source AI is relatively easy - pretty much down to pressing a button. And because it’s open source, it’s free. Using it to its fullest effect is where the skill comes in, and the artists I know are more than happy to develop their skills.

    > A far bigger market for AI is for non-artists and scammers to fill up Amazon’s bookstore and the broader Internet full of more trash than it already was.

    The existence of bad usage of AI does not invalidate good usage of AI. The internet was already full of bad content before AI. The good stuff is what floats to the top. No sane person is going to pay to read some no-name AI-generated trash. But people will read a highly regarded book that just happened to be AI-assisted.

    But the whole premise is silly. Did we demonize cars because bank robbers started using them to escape the police? Did we demonize cameras because people could take exact photographic copies of someone else’s work? No. We demonized those who misused the tool. AI is no different.

    A scammer can generate thousands of worthless images and texts before an artist assisted by AI can finish a single work. Just like a burglar can make money more easily by breaking into someone’s house and stealing everything than by working a day job for a month. There’s a reason these things are illegal and/or unethical. But those are reflections of the people doing them, not of the tools they use.


  • I mean, you ignored the entire rest of my comment to respond only to a hyperbole meant to illustrate that something is a bad argument. I’m sure they are making money off it, but small creators and artists can make relatively more money off it. And you claim that is not ‘actually happening’. But that is your opinion, how you view things. I talk with artists daily, and they use AI when it’s convenient for them, when it saves them work or lets them focus on the work they actually like. Just like they use any other tool at their disposal.

    I know there are some very big-name artists on social media who are making a fuss about this stuff, but with my point of view in mind, I highly question their motives. Of course it makes sense for someone with a big social media following to rally up their supporters so they can get a payday. I regularly see them tell complete lies to their followers, and of course it works. When you actually talk to artists in real life, you’ll get a far more nuanced response.





  • > There’s another thing here which is that you seem to believe this was actually made in large part by an AI while simultaneously stating the motivations of humans. So which is it?

    AI-assisted works are, funnily enough, mostly a human production at this point. If you asked AI to make another George Carlin special for you, it would suck extremely hard. AI requires humans to succeed; it does not succeed at being human. And as such, it’s a human work at the end of the day. My opinion is that if we were being truthful, this comedy special would be considered AI-assisted rather than fully AI-generated.

    You seem really sure that I think this is fully (or largely) AI-generated, but that’s not something I’ve ever claimed or alluded to believing. I don’t believe that. I don’t even consider fully AI-generated works worthy of being called true art. AI-assisted works, on the other hand, I do believe to be art. AI is a tool, and for it to be used for art it requires humans to provide input and make decisions before it becomes something people will actually enjoy. And that is clearly what was done here.

    > The primary beneficiary of all of the AI hype is Microsoft. Secondary beneficiary is Nvidia. These aren’t tiny companies.

    “The primary beneficiaries of art hype are pencil makers, brush makers, canvas makers, and of course, Adobe for making photoshop, Samsung and Wacom for making drawing tablets. Not to mention the art investors selling art from museums and art galleries all over the world for millions. These aren’t tiny entities.”

    See how ridiculous it is to make that argument? If something is popular, people and companies in a prime position to make money off it will try to do so; that is to be expected in our capitalist society. But small artists and creators gain the most from the advance of open source AI. Big companies can already pour enough money into any work they create to bring it to the highest standards. A small creator cannot, but they can get far more, and far better, results by using AI in their workflow. And because small creators often put far more heart and soul into their works, it allows them to compete with giants more easily. A clear win for small creators and artists.

    Just to be extra clear: I don’t like OpenAI. I don’t like Microsoft. I don’t like Nvidia, to a certain degree. Open source AI is not their cup of tea. They like proprietary, closed source AI - the kind where only they and the people that pay them get to use the advancements AI has made. That disgusts me. Open source AI is the tool of choice for ethical AI.




  • > Healthy or not, my lived experience is that assuming people are motivated by the things people are typically motivated by (e.g. greed, the desire for fame) is more often correct than assuming people have pure motives.

    Everyone likes praise to a certain extent, and desiring recognition for what you’ve made is independent of your intentions otherwise. My personal experience working with talented creative people is that the two are often intertwined. If you can make something that’s both fulfilling and economically sustainable, that’s what you’ll do. You can make something that’s extremely fulfilling, but if it doesn’t appeal to anyone but yourself, it doesn’t pay the bills. I’m not saying it’s impossible for them to lack that motivation, but in my opinion anyone accused of being malicious must at some point be proven to be so. I have seen no such proof.

    I really understand your second point, but… as with many things, some things require consent and some things don’t. Making a parody or an homage doesn’t (typically) require that consent. It would be nice to get it, but the man is dead, and even his children cannot speak for him other than as legal owners of his estate. I personally would like to believe he wouldn’t care one bit, and I would have the same basis as anyone else to defend that, because nobody can ask a dead man for his opinions. It’s clear his children do not like it, but unless they have a legal basis for that, their objection can be freely dismissed, as it isn’t necessarily something George himself would have stood behind.

    I’ve watched pretty much every one of his shows, but I haven’t seen that documentary. I’ll see if I can watch it. But knowing George, he would have many words to exchange on both sides of the debate. The man was very much an advocate for freedom of creativity, but also very much in favor of artist protection. Open source AI has leveled the playing field for people that aren’t mega corporations to compete, but has also brought along insecurity and anxiety to creative fields. It’s not black and white.

    In fact, there is a quote attributed to him which sort of speaks to this topic. (Although I must admit, the original source is a now-defunct newspaper and the Wayback Machine didn’t crawl the article.)

    > [On his work appearing on the Internet] It’s a conflicted feeling. I’m really a populist, down in the very center of me. I like the power people can accrue for themselves, and I like the idea of user-generated content and taking power from the corporations. The other half of the conflict, though, is that, traditionally speaking, artists are protected from copyright infringement. Fortunately, I don’t have to worry about solving this issue. It’s someone else’s job.

    That was published on August 9, 2007 in Las Vegas CityLife - just a little less than a year before his death.

    EDIT: Minor clarification


  • Completely true. But we cannot reasonably push the responsibility of the entire internet onto someone when they did their due diligence.

    Like, some people post CoD footage to YouTube because it looks cool, and someone else either mistakenly or maliciously takes that and recontextualizes it as combat footage from an active warzone to shock people. Then people start reposting that footage with a fake explanation text on top of it, furthering the misinformation cycle. Do we now blame the people sharing their CoD footage for what others did with it? Misinformation and propaganda are things society must work together to combat.

    If it really matters, people will be out there warning others that the pictures being posted are fake. In fact, that’s what happened after tragedies even before AI: people would post images claiming to show what happened, only for them to later be confirmed as being from some other tragedy. Or how some video games get fake leaks because someone rebranded fanmade content as a leak.

    Eventually it becomes common knowledge, or easy to prove fake. Take this picture for instance:

    It’s been well documented that the bottom image is fake, and as such anyone can now find out what was covered up. It’s up to society to speak up when the damage is too great.