• jet@hackertalks.com · 19 days ago

    These LLMs are going to keep hallucinating, i.e. acting like chatbots, until everyone understands not to trust them. Like Uncle Jimmy, who makes shit up all the time.

      • jet@hackertalks.com · 18 days ago

        No issue with the model; the problem is that people attribute intelligence to these things when they're just chatbots. And then they run into these fun situations.

  • Lvxferre@mander.xyz · 19 days ago

    50 emails/day × 5 days × $40 a month = $10,000 a month in lost sales, and that was only from people who cared enough to complain.

    Multiply that by 20: roughly, for each complainer, you'll get 19 people simply thinking "you know what, screw it" and never voicing their discontent. That's $200k a month in lost sales.
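    For the sake of the arithmetic, a quick sketch. The inputs are the rough figures above; the 1-in-20 complaint rate is an assumption, not a measured number:

    ```python
    # Back-of-the-envelope estimate of lost recurring revenue.
    # All inputs are the rough assumptions from the comment above.
    complaints_per_day = 50      # cancellation emails per day
    days = 5                     # days the bot misbehavior lasted
    price_per_month = 40         # subscription price, $/month
    silent_per_complainer = 20   # assume ~1 in 20 unhappy users complains

    complained = complaints_per_day * days                      # 250 users
    lost_from_complainers = complained * price_per_month        # $10,000/month
    lost_total = lost_from_complainers * silent_per_complainer  # $200,000/month

    print(f"Lost from complainers: ${lost_from_complainers:,}/month")
    print(f"Estimated total:       ${lost_total:,}/month")
    ```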

    And… frankly? They deserve the losses.

    Pro-tip: you should "trust" the output of a large language model less than you'd trust the village idiot. Even when the latter is drunk.