• 0 Posts
  • 616 Comments
Joined 3 years ago
Cake day: July 2nd, 2023

  • I think I understand your point now.

    I would still want to apply pressure to it, because I disagree with the spirit of your assessment.

    Once a model is trained, it becomes functionally opaque. Weights shift… but WHY? What does that vector MEAN?
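
    To make "opaque" concrete, here's a minimal sketch (a made-up two-layer network, assuming PyTorch is installed; not any real model): every trained parameter is a plain number you can print, but nothing in the numbers tells you what any of them means.

    ```python
    # Minimal sketch: inspecting a toy network's parameters.
    # Every value is fully visible, yet no individual entry carries
    # human-readable meaning on its own.
    import torch

    net = torch.nn.Sequential(
        torch.nn.Linear(8, 16),
        torch.nn.ReLU(),
        torch.nn.Linear(16, 2),
    )
    for name, param in net.named_parameters():
        print(name, tuple(param.shape))
        # e.g. "0.weight (16, 8)" -- a grid of floats with no
        # per-entry semantics attached.
    ```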

    I think wrenches are good. Will a 12mm wrench fit a 12mm bolt? Yes.

    In the bizarre world of LLMs, the answer to everything is not “yes” or “no”. It’s “maybe, maybe not, within statistical bounds… try it… maybe it will, maybe it won’t… and by the way, just because it fit yesterday is no guarantee it will fit again tomorrow… and I actually can’t definitively tell you why that is for this particular wrench.”
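
    That “maybe, maybe not, within statistical bounds” is baked into how decoding usually works. A toy sketch with made-up next-token scores (not any particular model’s API): under temperature sampling, the same prompt against the same weights can come back different every run.

    ```python
    import math
    import random

    # Hypothetical next-token scores for "will the wrench fit?" --
    # invented for illustration, not pulled from any real model.
    logits = {"yes": 2.0, "no": 1.2, "maybe": 0.8}

    def sample(logits, temperature=1.0):
        # Softmax over the scores, then draw one token proportionally.
        weights = {tok: math.exp(s / temperature) for tok, s in logits.items()}
        total = sum(weights.values())
        r = random.uniform(0, total)
        for tok, w in weights.items():
            r -= w
            if r <= 0:
                return tok
        return tok  # guard against floating-point leftovers

    print([sample(logits) for _ in range(5)])
    # e.g. ['yes', 'yes', 'no', 'yes', 'maybe'] -- same inputs, different answers.
    ```

    Greedy decoding (temperature near zero) would pin a single answer, but that only hides the distribution; it doesn’t explain it.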

    LLMs do something, and I agree they do that something well. I further agree with the spirit of most of the rest of your analysis: abstraction layers are doing a lot of heavy lifting.

    I think where I fundamentally disagree is with the claim that “they do what they say they do”, under any definition beyond the simple tautology that everything is what it is.




  • I love that humans are inclined to anthropomorphize things. A door can’t be sad. A street can’t be lonely. The moon can’t be wistful. The ocean can’t be angry.

    But they can… in our heads. And that’s real for us.

    I think that, at least at a societal level, this part of the human condition has been mostly benign. Just a little bit of spice.

    LLMs seem to have short-circuited that part of our brains. We can’t even describe the errata of a system without anthropomorphizing it.