• Tgo_up@lemm.ee · 1 day ago

    This is a bad example… If I ask a friend, “Is strawberry spelled with one or two r’s?”, they would think I’m asking about the last part of the word.

    The question seems to be specifically made to trip up LLMs. I’ve never heard anyone ask how many of a certain letter are in a word. I have heard people ask how you spell a word, and whether it’s with one or two of a specific letter, though.

    If you think of LLMs as something with actual intelligence you’re going to be very unimpressed… It’s just a model to predict the next word.
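
    To make that concrete, here’s a minimal sketch of what “predict the next word” means, using a made-up probability table (the words and numbers are purely illustrative, not taken from any real model):

    ```python
    # Toy next-word predictor: given a context, pick the most probable
    # continuation from a hand-written (hypothetical) probability table.
    toy_model = {
        "is strawberry spelled with one or two": {"r's": 0.6, "b's": 0.1, "l's": 0.05},
        "how do you spell": {"it": 0.3, "strawberry": 0.05, "necessary": 0.04},
    }

    def predict_next(context: str) -> str:
        """Return the highest-probability next word for a known context."""
        candidates = toy_model.get(context, {"<unknown>": 1.0})
        return max(candidates, key=candidates.get)

    print(predict_next("is strawberry spelled with one or two"))  # -> r's
    ```

    A real LLM ranks tens of thousands of possible tokens this way at every step, which is why it can sound fluent without actually counting or reasoning about letters.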

    • renegadespork@lemmy.jelliefrontier.net · 1 day ago

      If you think of LLMs as something with actual intelligence you’re going to be very unimpressed… It’s just a model to predict the next word.

      This is exactly the problem, though. They don’t have “intelligence” or any actual reasoning, yet they are constantly being used in situations that require reasoning.

      • Tgo_up@lemm.ee · 13 hours ago

        What situations are you thinking of that require reasoning?

        I’ve used LLMs to create software I needed but couldn’t find online.

        • renegadespork@lemmy.jelliefrontier.net · 7 hours ago

          Creating software is a great example, actually. Coding absolutely requires reasoning. I’ve tried using code-focused LLMs to write blocks of code, or even some basic YAML files, but the output is often unusable.

          It rarely makes syntax errors, but it will do things like reference libraries that haven’t been imported or hallucinate functions that don’t exist. It also constantly misunderstands the assignment and creates something that technically works but doesn’t accomplish the intended task.
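
          A hypothetical illustration of what I mean (the commented-out line is the kind of output I keep getting: valid syntax, but an unimported module and an invented function; the working version assumes PyYAML is installed):

          ```python
          # The kind of thing an LLM will confidently emit: the syntax is fine,
          # but "yaml" was never imported and "safe_dump_all_pretty" does not exist.
          #   text = yaml.safe_dump_all_pretty(config)

          # A working equivalent has to stick to the real API:
          import yaml  # PyYAML

          config = {"service": "demo", "replicas": 2}
          text = yaml.safe_dump(config, sort_keys=False)  # real PyYAML function
          print(text)
          ```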

      • sugar_in_your_tea@sh.itjust.works · 1 day ago

        Maybe if you focus on pro- or anti-AI sources, but if you talk to actual professionals or hobbyists solving actual problems, you’ll see very different applications. If you go into it looking for problems, you’ll find them; likewise, if you go into it looking for use cases, you’ll find them.

        • renegadespork@lemmy.jelliefrontier.net · 19 hours ago

          Personally I have yet to find a use case. Every single time I try to use an LLM for a task (even ones they are supposedly good at), I find the results so lacking that I spend more time fixing its mistakes than I would have just doing it myself.

          • Scubus@sh.itjust.works · 16 hours ago

            So you’ve never used it as a starting point to learn about a new topic? You’ve never used it to look up a song when you can only remember a small section of lyrics? What about when you want a block of code that is simple but monotonous to write yourself? Or to suggest plans for how to create simple structures/inventions?
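
            For example, the kind of simple-but-monotonous block I mean (a sketch with made-up names, not from any particular project):

            ```python
            # Boilerplate that's easy to describe and tedious to type out:
            # turn a list of flat dicts into CSV text with a fixed column order.
            import csv
            import io

            def rows_to_csv(rows: list[dict], columns: list[str]) -> str:
                """Serialise dicts to CSV, keeping only the requested columns."""
                out = io.StringIO()
                writer = csv.DictWriter(out, fieldnames=columns, extrasaction="ignore")
                writer.writeheader()
                writer.writerows(rows)
                return out.getvalue()

            print(rows_to_csv([{"name": "a", "n": 1}, {"name": "b", "n": 2}], ["name", "n"]))
            ```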

            Anything with a verifiable answer that you’d ask on a forum can generally be answered by an LLM, because they’re largely trained on forums, and there’s a decent chance the training data included someone asking the question you’re currently asking.

            Hell, ask ChatGPT what use cases it would recommend for itself; I’m sure it’ll have something interesting.

            • renegadespork@lemmy.jelliefrontier.net · 6 hours ago

              as a starting point to learn about a new topic

              No. I’ve used several models to “teach” me about subjects I already know a lot about, and they all frequently get many facts wrong. Why would I then trust it to teach me about something I don’t know about?

              to look up a song when you can only remember a small section of lyrics

              No, because traditional search engines do that just fine.

              when you want a block of code that is simple but monotonous to write yourself

              See this comment.

              suggest plans for how to create simple structures/inventions

              I guess I’ve never tried this.

              Anything with a verifiable answer that you’d ask on a forum can generally be answered by an LLM, because they’re largely trained on forums, and there’s a decent chance the training data included someone asking the question you’re currently asking.

              Kind of, but here’s the thing: it’s rarely faster than just using a good traditional search, especially if you know where to look and how to use advanced filtering features. Also (and this is key), verifying the accuracy of an LLM’s answer requires about the same amount of work as just not using an LLM in the first place, so I default to skipping the middleman.

              Lastly, I haven’t even touched on the privacy nightmare that these systems pose if you’re not running local models.

    • Grandwolf319@sh.itjust.works · 1 day ago

      If you think of LLMs as something with actual intelligence you’re going to be very unimpressed

      Artificial sugar is still sugar.

      Artificial intelligence implies there is intelligence in some shape or form.

      • Tgo_up@lemm.ee · 13 hours ago

        Exactly. The naming of the technology would make you assume it’s intelligent. It’s not.

      • corsicanguppy@lemmy.ca · 1 day ago

        Artificial sugar is still sugar.

        Because it contains sucrose, fructose, or glucose? Because it metabolises the same way and matches the glycemic index of sugar?

        Because those are all wrong. What are your criteria?

        • Grandwolf319@sh.itjust.works · 22 hours ago

          In this example, a sugar is something that is sweet.

          Another example is artificial flavours still being flavours.

          Or like artificial light being in fact light.

      • Scubus@sh.itjust.works · 15 hours ago

        That’s because it wasn’t originally called AI. It was called an LLM. Techbros trying to sell it and articles wanting to fan the flames started calling it AI, and eventually it became common parlance. No one in the field seriously calls it AI; they generally save that term for general AI, or at least narrow AI, of which an LLM is neither.

      • JohnEdwa@sopuli.xyz · edited · 1 day ago

        Something that pretends to be or looks like intelligence, but actually isn’t at all, is a perfectly valid interpretation of the word “artificial”: fake intelligence.