• Aceticon@lemmy.world
    7 months ago

    Don’t take this badly, but you’re both overcomplicating things (by unnecessarily “decorating” your post with wholly irrelevant details about how specific forms of human communication are transmitted and received) and oversimplifying them (by focusing on minor details and getting some of them wrong).

    Also, there’s just one language model. The means by which the language was transmitted and turned into data (sound, images, direct ASCII text, whatever) are entirely outside the scope of the language model.

    You have a really, really confused idea of how all of this works, and not just the computing parts.

    Worse, even putting aside all the “wtf” stuff about language transmission processes in your post, getting an LLM to do maths from language might not be a genuine breakthrough. They might’ve implemented this “maths support” by cheating: for example, by having the NN recognise maths-related language and transform maths-related language tokens into standard maths tokens, which a perfectly normal algorithmic engine (i.e. one hand-coded by humans) then uses to calculate things, with the results translated back into human-language tokens. In that case the “AI” part wouldn’t be doing the maths or understanding the concept of Maths in any way whatsoever; it would just be translating tokens between formats, while a piece of software designed by a person does the actual maths using hardcoded algorithms. Somebody integrating a maths-calculating program into an LLM isn’t AI, it’s just normal coding.
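    To make the “cheating” idea concrete, here’s a minimal sketch of what such a pipeline could look like. Everything here is hypothetical: the regex stands in for the NN’s “recognise maths-related language” step, and the `evaluate` function is the ordinary hand-coded algorithmic engine that does the actual calculating.

    ```python
    import ast
    import operator
    import re

    # Hand-coded arithmetic engine: walks an arithmetic AST. No AI involved.
    OPS = {
        ast.Add: operator.add,
        ast.Sub: operator.sub,
        ast.Mult: operator.mul,
        ast.Div: operator.truediv,
    }

    def evaluate(node):
        """Plain algorithmic maths, written by a person."""
        if isinstance(node, ast.Expression):
            return evaluate(node.body)
        if isinstance(node, ast.Constant):
            return node.value
        if isinstance(node, ast.BinOp):
            return OPS[type(node.op)](evaluate(node.left), evaluate(node.right))
        raise ValueError("unsupported expression")

    def answer(prompt: str) -> str:
        # Stand-in for the NN part: spot maths-like text in the prompt and
        # extract a standard maths expression from it.
        match = re.search(r"\d[\d\s\+\-\*/\.\(\)]*", prompt)
        if match:
            result = evaluate(ast.parse(match.group().strip(), mode="eval"))
            # "Translate back" into human language.
            return f"The answer is {result}."
        return "I can only chat about maths here."

    print(answer("What is 12 * (3 + 4)?"))  # → The answer is 84.
    ```

    The point being: the part that gets the maths right is `evaluate`, a completely ordinary program, not anything the model learned.
    
    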

    Also, the actual implementation of an LLM is built on basic maths, and it’s stupidly simple to get, for example, a neuron in a neural network to add 2 numbers.