A survey of more than 2,000 smartphone users by second-hand smartphone marketplace SellCell found that 73% of iPhone users and a whopping 87% of Samsung Galaxy users felt that AI adds little to no value to their smartphone experience.

SellCell only surveyed users with an AI-enabled phone – that’s an iPhone 15 Pro or newer, or a Galaxy S22 or newer. The survey doesn’t give an exact sample size, but more than 1,000 iPhone users and more than 1,000 Galaxy users were involved.

Further findings show that most users of either platform would not pay for an AI subscription: 86.5% of iPhone users and 94.5% of Galaxy users would refuse to pay for continued access to AI features.

From the data listed so far, it seems that people just aren’t using AI. Among both iPhone and Galaxy users, only about two-fifths of those surveyed have even tried AI features – 41.6% of iPhone users and 46.9% of Galaxy users.

So, that’s a majority of users not even bothering with AI in the first place and a general disinterest in AI features from the user base overall, despite both Apple and Samsung making such a big deal out of AI.

  • OfficerBribe@lemm.ee · ↑8 · 44 minutes ago

    Not sure if Google Lens counts as AI, but Circle to Search is a cool feature. And on Samsung specifically there is Smart Select that I occasionally use for text extraction, but I suppose it is just OCR.

    Of the Galaxy AI-branded features, I have only tested Drawing Assist, which is an image generator. I fooled around with it for 5 minutes and haven’t touched it again. I use the Samsung keyboard and know it has some kind of text-generator thing, but I haven’t even bothered to try it.

    • Bluefruit@lemmy.world · ↑2 · 24 minutes ago

      It certainly counts. Samsung has a few features, like grabbing text from images, that I’ve found useful.

      My problem with them is that it’s all online stuff; I’d like that sort of thing to be processed on-device, but that’s just me.

      I think folks often assume AI is only the crappy image generation or chatbots that get shoved at them. AI is used in a lot of different things; the only difference is that implementations like Drawing Assist or that text-grabbing feature are actually useful and well done.

  • shortwavesurfer@lemmy.zip · ↑2 · 12 minutes ago

    I’m shocked, I tell you. Absolutely shocked. And if you believe that, I’ve got some oceanfront property in Arizona I’ll sell you, too.

  • thingAmaBob@lemmy.world · ↑17 · 2 hours ago

    Unless it can be a legit personal assistant, I’m not actually interested. Companies hyped AI way too much.

  • Zak@lemmy.world · ↑8 ↓1 · 1 hour ago

    The AI thing I’d really like is an on-device classifier that decides with reasonably high reliability whether I would want my phone to interrupt me with a given notification or not. I already don’t allow useless notifications, but a message from a friend might be a question about something urgent, or a cat picture.
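    A rough sketch of the kind of heuristic I mean, in Python. Everything here – the features, weights, and threshold – is invented for illustration; a real version would be a small trained on-device model rather than hand-tuned rules:

```python
# Hypothetical on-device notification triage: score an incoming message
# and decide whether it is worth interrupting the user. All features and
# weights are made up for illustration; this is not any real phone API.

URGENT_HINTS = {"urgent", "asap", "emergency", "call me", "?"}

def should_interrupt(sender_is_contact: bool, text: str) -> bool:
    text_lower = text.lower()
    score = 0.0
    if sender_is_contact:
        score += 0.4          # messages from known contacts rank higher
    if any(hint in text_lower for hint in URGENT_HINTS):
        score += 0.4          # urgency cues push toward interrupting
    return score >= 0.6       # threshold would be tuned per user

print(should_interrupt(True, "Can you call me ASAP?"))  # True: urgent question
print(should_interrupt(True, "look at this cat pic"))   # False: can wait
```

    The key property is that it runs entirely locally – nothing about the message ever needs to leave the device.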

    What I don’t want is:

    • Ways to make fake photographs
    • Summaries of messages I could just skim the old fashioned way
    • Easier access to LLM chatbots

    It seems like those are the main AI features bundled on phones now, and I have no use for any of them.

  • Arkouda@lemmy.ca · ↑17 ↓1 · 2 hours ago

    AI was never meant for the average person, but the average person had to be convinced it was, for the sake of funding.

  • werefreeatlast@lemmy.world · ↑3 · 1 hour ago

    Surprise surprise!

    At work we deal with valuable information, so we’ve gotta be careful what we ask. We’ll probably end up with a total ban on these things at work.

    At home we don’t give a fuck what your AI does. I just wanna relax and do nothing for as long as I can. So offload your AI onto a local system that doesn’t talk to your server, and then we’ll talk.

  • ZeroGravitas@lemm.ee · ↑108 ↓5 · 4 hours ago

    A 100% accurate AI would be useful. A 99.999% accurate AI is in fact useless, because of the damage that one miss might do.

    It’s like the French say: Add one drop of wine in a barrel of sewage and you get sewage. Add one drop of sewage in a barrel of wine and you get sewage.

    • Imacat@lemmy.dbzer0.com · ↑2 · 42 minutes ago

      99.999% accurate would be pretty useful. There’s plenty of misinformation without AI. Nothing and nobody will be perfect.

      The trouble is they range from 0–95% accuracy depending on the topic and the given context, while being very confident when they’re wrong.

    • Dojan@lemmy.world · ↑26 ↓1 · 4 hours ago

      I think it largely depends on what kind of AI we’re talking about. iOS has had models that let you extract subjects from images for a while now, and that’s pretty nifty. Affinity Photo recently got the same feature. Noise cancellation can also be quite useful.

      As for LLMs? Fuck off, honestly. My company apparently pays for MS CoPilot, something I only discovered when the garbage popped up the other day. I wrote a few random sentences for it to fix, and the only thing it managed to consistently do was screw the entire text up. Maybe it doesn’t handle Swedish? I don’t know.

      One of the examples I sent to a friend is as follows (originally in Swedish):

      Microsoft CoPilot is an incredibly poor product. It has a tendency to make up entirely new, nonsensical words, as well as completely mangle the grammar. I really don’t understand why we pay for this. It’s very disappointing.

      And CoPilot was like “yeah, let me fix this for you!”

      Microsoft CoPilot is a comedy show without a manuscript. It makes up new nonsense words as though were a word-juggler on circus, and the grammar becomes mang like a bulldzer over a lawn. Why do we pay for this? It is buy a ticket to a show where actosorgets their lines. Entredibly disappointing.

    • Kaja • she/her@lemmy.blahaj.zone · ↑10 ↓3 · edited · 4 hours ago

      We’re not talking about an AI running a nuclear reactor, this article is about AI assistants on a personal phone. 0.001% failure rates for apps on your phone isn’t that insane, and generally the only consequence of those failures would be you need to try a slightly different query. Tools like Alexa or Siri mishear user commands probably more than 0.001% of the time, and yet those tools have absolutely caught on for a significant amount of people.

      The issue is that the failure rate of AI is high enough that you have to vet the outputs, which typically requires about as much work as doing the task yourself. And using AI for creative things like art or videos is a fun novelty, but not something you do regularly, so your phone pushing apps you only want to use once in a blue moon is annoying. If AI were actually so useful that you could query it with anything and get back exactly what you wanted 99.999% of the time, it would absolutely become much more useful.

    • NuXCOM_90Percent@lemmy.zip · ↑13 ↓11 · 4 hours ago

      People love to make these claims.

      Nothing is “100% accurate” to begin with. Humans spew constant FUD and outright malicious misinformation. Just do some googling for anything medical, for example.

      So either we acknowledge that everything is already “sewage” and this changes nothing, or we acknowledge that people can already find value in searching for answers to questions – they just need to apply critical thought to whether I_Fucked_your_mom_416 on GameFAQs is a valid source or not.

      Which gets to my big issue with most of the “AI Assistant” features. They don’t source their information. I am all for not needing to remember the magic incantations to restrict my searches to a single site or use boolean operators when I can instead “ask jeeves” as it were. But I still want the citation of where information was pulled from so I can at least skim it.

      • tauren@lemm.ee · ↑4 ↓1 · 1 hour ago

        For real. If a human performs task X with 80% accuracy, an AI needs to perform the same task with 80.1% accuracy to be a better choice - not 100%. Furthermore, we should consider how much time it would take for a human to perform the task versus an AI. That difference can justify the loss of accuracy. It all depends on the problem you’re trying to solve. With that said, it feels like AI in mobile devices hardly solves any problems.

      • ZeroGravitas@lemm.ee · ↑2 · 2 hours ago

        I think you nailed it. In the grand scheme of things, critical thinking is always required.

        The problem is that, when it comes to LLMs, people seem to use magical thinking instead. I’m not an artist, so I oohed and aahed at some of the AI art I got to see, especially in the early days, before we were flooded with all this AI slop. But when I saw the coding shit it spewed? Thanks, I’ll pass.

        The only legit use of AI in my field that I know of is a unit test generator, where tests were measured for stability and code-coverage increase before being submitted for dev approval. But actual non-trivial, production-grade code? Hell no.

        • NuXCOM_90Percent@lemmy.zip · ↑1 ↓1 · edited · 2 hours ago

          Even those examples are the kinds of things that “fall apart” if you actually think things through.

          Art? Actual human artists tend to use a ridiculous amount of “AI” these days and have been for well over a decade (probably closer to two, depending on how you define “AI”). Stuff like magic erasers/brushes inherently looks at the picture around the selection (training data) and then extrapolates/magics up what it would look like if you didn’t have that logo on your shirt. Same with a lot of weathering techniques/algorithms and so forth.

          Same with coding. People more or less understand that anyone working on something more complex than a coding exercise is going to be googling a lot (even if it is just that you will never remember how to do file I/O in Python off the top of your head). So a tool that does exactly that is… bad?

          Which gets back to the reality of things. Much like with writing a business email or organizing a calendar: if a computer program can do your entire job for you… maybe shut the fuck up about that program? ChatGPT et al. aren’t meant to replace the senior or principal software engineer who is in lots of design meetings or optimizing the critical path of your corporate secret sauce.

          It is replacing junior engineers and interns (which is gonna REALLY hurt in ten years, but…). ChatGPT hallucinated a nonsense function? That is what CI testing and code review are for. Same as if that intern forgot to commit a file or that rockstar from Facebook never ran the test suite.

          Of course, the problem there is that the internet is chock full of “rock star coders” who just insist the world would be a better place if they never had to talk to anyone and were always given perfectly formed tickets so they could just put their headphones on and work and ignore Sophie’s birthday and never be bothered by someone asking them for help (because, trust me, you ALWAYS want to talk to That Guy about… anything). And they don’t realize that they were never actually hot shit and were mostly always doing entry level work.

          Personally? I only trust AI to directly write my code for me if it is in an airgapped environment because I will never trust black box code I pulled off the internet to touch corporate data. But I will 100% use it in place of google to get an example of how to do something that I can use for a utility function or adapt to solving my real problem. And, regardless, I will review and test that just as thoroughly as the code Fred in accounting’s son wrote because I am the one staying late if we break production.

          And just to add on, here is what I told a friend’s kid who is an undergrad comp sci:

          LLMs are awesome tools. But if the only thing you bring to the table is that you can translate the tickets I assigned to you to a query to chatgpt? Why am I paying you? Why am I not expensing a prompt engineering course on udemy and doing it myself?

          Right now? Finding a job is hard but there are a lot of people like me who understand we still need to hire entry level coders to make sure we have staff ready to replace attrition over the next decade (or even five years). But I can only hire so many people and we aren’t a charity: If you can’t do your job we will drop you the moment we get told to trim our budget.

          So use LLMs because they are an incredibly useful tool. But also get involved in design and planning as quickly as possible. You don’t want to be the person writing the prompts. You want to be the person figuring out what prompts we need to write.

      • AnAmericanPotato@programming.dev · ↑8 ↓1 · 3 hours ago

        99.999% would be fantastic.

        90% is not good enough to be a primary feature that discourages inspection (like a naive chatbot).

        What we have now is like…I dunno, anywhere from <1% to maybe 80% depending on your use case and definition of accuracy, I guess?

        I haven’t used Samsung’s stuff specifically. Some web search engines do cite their sources, and I find that to be a nice little time-saver. With the prevalence of SEO spam, most results have like one meaningful sentence buried in 10 paragraphs of nonsense. When the AI can effectively extract that tiny morsel of information, it’s great.

        Ideally, I don’t ever want to hear an AI’s opinion, and I don’t ever want information that’s baked into the model from training. I want it to process text with an awareness of complex grammar, syntax, and vocabulary. That’s what LLMs are actually good at.

        • NuXCOM_90Percent@lemmy.zip · ↑3 ↓3 · 3 hours ago

          Again: What is the percent “accurate” of an SEO infested blog about why ivermectin will cure all your problems? What is the percent “accurate” of some kid on gamefaqs insisting that you totally can see Lara’s tatas if you do this 90 button command? Or even the people who insist that Jimi was talking about wanting to kiss some dude in Purple Haze.

          Everyone is hellbent on insisting that AI hallucinates and… it does. You know who else hallucinates? Dumbfucks. And the internet is chock full of them. And guess what LLMs are training on? It’s the same reason I always laugh when people talk about how AI can’t do feet or hands and ignore the existence of Rob Liefeld, or WHY so many cartoon characters only have four fingers.

          Like I said: I don’t like the AI Assistants that won’t tell me where they got information from and it is why I pay for Kagi (they are also AI infested but they put that at higher tiers so I get a better search experience at the tier I pay for). But I 100% use stuff like chatgpt to sift through the ninety bazillion blogs to find me a snippet of a helm chart that I can then deep dive on whether a given function even exists.

          But the reality is that people are still benchmarking LLMs against a reality that has never existed. The question shouldn’t be “we need this to be 100% accurate and never hallucinate” but rather “what web pages or resources were used to create this answer”, followed by doing what we should always be doing: checking the sources to see if they at least seem trustworthy.

          • AnAmericanPotato@programming.dev · ↑4 · 2 hours ago

            Again: What is the percent “accurate” of an SEO infested blog

            I don’t think that’s a good comparison in context. If Forbes replaced all their bloggers with ChatGPT, that might very well be a net gain. But that’s not the use case we’re talking about. Nobody goes to Forbes as their first step for information anyway (I mean…I sure hope not…).

            The question shouldn’t be “we need this to be 100% accurate and never hallucinate” and instead be “What web pages or resources were used to create this answer” and then doing what we should always be doing: Checking the sources to see if they at least seem trustworthy.

            Correct.

            If we’re talking about an AI search summarizer, then the accuracy lies not in how correct the information is in regard to my query, but in how closely the AI summary matches the cited source material. Kagi does this pretty well. Last I checked, Bing and Google did it very badly. Not sure about Samsung.

            On top of that, the UX is critically important. In a traditional search engine, the source comes before the content. I can implicitly ignore any results from Forbes blogs. Even Kagi shunts the sources into footnotes. That’s not a great UX because it elevates unvetted information above its source. In this context, I think it’s fair to consider the quality of the source material as part of the “accuracy”, the same way I would when reading Wikipedia. If Wikipedia replaced their editors with ChatGPT, it would most certainly NOT be a net gain.

          • ZeroGravitas@lemm.ee · ↑3 ↓1 · 2 hours ago

            You know, I was happy to dig through 9yo StackOverflow posts and adapt answers to my needs, because at least those examples did work for somebody. LLMs for me are just glorified autocorrect functions, and I treat them as such.

            A colleague of mine had a recent experience with Copilot hallucinating a few Python functions that looked legit, ran without issue, and did fuck all. We figured it out in testing, but boy was that a wake-up call (the colleague in question has what you might call an early-adopter mindset).
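            To illustrate that failure mode (a made-up example, not the colleague’s actual code): a function that parses fine and runs without error, yet accomplishes nothing, which only a test asserting on actual behavior will catch:

```python
# Hypothetical example of plausible-looking code that runs cleanly
# but does nothing useful.

def normalize_scores(scores):
    """Looks like it rescales scores to [0, 1], but the result is discarded."""
    for s in scores:
        s = s / max(scores)   # rebinds the loop variable; 'scores' is untouched
    return scores             # returns the input unchanged

result = normalize_scores([2, 4, 8])
print(result)  # [2, 4, 8] – no exception raised, but no normalization either

# "It ran without issue" proves nothing; a behavioral assertion catches it:
# assert result == [0.25, 0.5, 1.0]  # this is the test that would fail
```

            Running it is not the same as testing it, which is exactly the trap.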

      • tetris11@lemmy.ml · ↑3 · edited · 4 hours ago

        Perplexity is kinda half-decent with showing its sources, and I do rely on it a lot to get me 50% of the way there, at which point I jump into the suggested sources, do some of my own thinking, and do the other 50% myself.

        It’s been pretty useful to me so far.

        I’ve realised I don’t want complete answers to anything really. Give me a roundabout gist or template, and then tell me where to look for more if I’m interested.

  • SocialMediaRefugee@lemmy.world · ↑1 · 58 minutes ago

    I use ChatGPT for things like debugging error codes, but I have to be explicit with as much detail as possible or it will give me all sorts of inapplicable crap.

  • atomicbocks@sh.itjust.works · ↑2 · 2 hours ago

    Personally, I am just not going to use the smallest screen I own to do most of the tasks they are pushing AI for. They can keep making them bigger and it’s still just going to be a phone first. If this is what they want then why can’t I just have the Watch and an iPad?

  • Optional@lemmy.world · ↑3 · 2 hours ago

    Anyone who has been paying attention has been waiting for this enormous bag of shit to explode already.

  • Captain Janeway@lemmy.world · ↑1 · 1 hour ago

    The only thing I want AI (on my phone) to do is limit my notifications and make calendar events for me. I don’t want to ask questions. I don’t want to start conversations.

    I want to open my phone and have 1 summary notification of things I received and things to do. I want the spammy ones to just be auto filtered because I never click on them.

    I’d also love if I could choose when to manage all of these notifications with my AI assistant. The only back and forth I’d like is around scheduling if I need to make changes.

  • mesamune@lemmy.world · ↑10 · edited · 3 hours ago

    Sometimes I wonder what is going to happen to all this tech in four or so years, when it’s less profitable to keep the AI centers on.

    Right now they are “free” because of all the investment that is going on. But they have a huge maintenance/energy cost.

    • cabbage@piefed.social · ↑1 · edited · 50 minutes ago

      They just need to capitalize on the surveillance capabilities: find a way to convince users they need access to everything on their phones in order to sell them first-class convenience. Once you’ve done that, there’s plenty of money to be made.

  • haych@lemmy.one · ↑6 · 3 hours ago

    On Samsung they got rid of a perfectly good screenshot tool and replaced it with one that has AI. It’s slower, clunkier, and not as good; I just want them to revert it. If I wanted AI, I’d download an app.

    • OfficerBribe@lemm.ee · ↑1 · 39 minutes ago

      Are you thinking of Smart Select? I just take a fullscreen screenshot and then crop it if I need part of it. I did that even when I had the previous Smart Select version. Overall I think the new version, with all four previous select options bundled into one, is better.