  • Meta’s in a strange business for that philosophy because, well… 99% of their income is ads. They model and engage users to sell ads.

    It’s not great dogfood for employees to try.


    And their “AI” situation is murky. They’ve actually used machine learning internally for a long, long time, but the recent rush to productize AI more directly is… mixed.

    They had a really good open weights LLM division, and built an interesting ecosystem around those “Llama” models. Small/medium businesses helped expand them. Meta employees interacted with other open source projects, too, and posted their own experiments. It was great! And a prime example of “eating your own dog food.”

    …But that lab had one failed experiment, so Zuckerberg killed the whole thing. As Zuck tends to do.

    And now they have some new division which, from my perspective in the tinkerer community, I would bluntly describe as “a clash of Tech Bro egos.” It’s generous to call experiments like an “AI CEO” an attempt to test their own product; it more closely resembles Zuckerberg’s pattern of frantically, nervously chasing something in the nebulous hope it goes viral like Facebook did.


  • This is proof of why OpenAI’s… opaqueness is so dangerous.

    Chat LLMs tend to treat everything like an exam question or essay prompt, as a direct consequence of how the base models are finetuned. The hand is like a pivot point in a physics problem. But more importantly:

    • The chat context is sort of their whole world, again due to the training format. So they tend to stubbornly adhere to what has already been said, and have no real means of self-correction (see the chat-template sketch after this list).

    • While we have no idea what OpenAI actually does, in basically every other open model the vision component is trained separately from the pure-text stack. Point being, these models are alright at the very specific set of vision tasks they’re trained for, but the “coupling” of image input to the bulk of the LLM is very weak (see the second sketch below). The reasoning they can do over text does not carry over well.
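
    To make the first bullet concrete, here’s a minimal sketch, using Hugging Face transformers, of how a chat gets flattened into one prompt string. The model ID is just an example; the point is that everything the model “knows” about the conversation, including its own earlier mistakes, is just text it tends to stay consistent with:

    ```python
    # Minimal sketch of why "the chat context is their whole world":
    # every turn is re-serialized into one token sequence and fed back in.
    # The model ID is only an example.
    from transformers import AutoTokenizer

    tok = AutoTokenizer.from_pretrained("Qwen/Qwen2.5-0.5B-Instruct")

    conversation = [
        {"role": "user", "content": "What is 2 + 2?"},
        {"role": "assistant", "content": "5."},  # a mistake, now frozen into context
        {"role": "user", "content": "Are you sure?"},
    ]

    # The model has no memory outside this one string; its earlier wrong
    # answer is now part of the very prompt it's asked to continue.
    prompt = tok.apply_chat_template(
        conversation, tokenize=False, add_generation_prompt=True
    )
    print(prompt)
    ```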
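
    And for the second bullet, a toy sketch of the usual open-model wiring (LLaVA-style; every name and dimension below is invented for illustration): a separately trained vision encoder gets bolted onto the language model through one small projection layer, and that thin bridge is the entire “coupling”:

    ```python
    # Toy LLaVA-style wiring in PyTorch. All modules and sizes are
    # illustrative; the point is how thin the image-to-LLM bridge is.
    import torch
    import torch.nn as nn

    class ToyVLM(nn.Module):
        def __init__(self, vision_dim=1024, llm_dim=4096):
            super().__init__()
            self.vision_encoder = nn.Identity()  # stand-in for a frozen, pretrained ViT
            # The entire "coupling": one small MLP mapping image features
            # into the LLM's token-embedding space.
            self.projector = nn.Sequential(
                nn.Linear(vision_dim, llm_dim), nn.GELU(), nn.Linear(llm_dim, llm_dim)
            )
            self.llm = nn.Identity()  # stand-in for the pretrained text LLM

        def forward(self, image_feats, text_embeds):
            # Project image patches into pseudo-"tokens" and splice them in
            # front of the text. The LLM itself was never pretrained on
            # images; it only ever sees these projected vectors.
            img_tokens = self.projector(self.vision_encoder(image_feats))
            return self.llm(torch.cat([img_tokens, text_embeds], dim=1))

    vlm = ToyVLM()
    out = vlm(torch.randn(1, 576, 1024), torch.randn(1, 32, 4096))
    print(out.shape)  # torch.Size([1, 608, 4096])
    ```

    Everything upstream of that projector was trained on its own narrow menu of vision tasks, which is why the text-side reasoning doesn’t transfer well.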


    The point I’m trying to make is that Altman’s biggest lie is pitching ChatGPT as a general intelligence… It’s not. It’s a dumb, narrow tool, like a drill with interchangeable bits. But they package and market it like it’s “smart”, which is a big fat lie.

    Go to any of the smaller AI vendors/models (like Minimax, with a new model today) and they do the opposite of this. They show specific uses in specific harnesses, and hyper-optimize for that.



  • Corporate, for now.

    Thing is, once they’re out there, they’re free utilities, and they can’t be taken back.

    Also, they don’t really need to aggressively scrape the internet. There are many good public datasets now, and the Chinese labs are already making excellent use of synthetic dataset generation on (relative) shoestring budgets; a sketch of that below. And several nations and other large organizations are already funding open model efforts; they just haven’t had the chance to catch up yet.
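
    To gesture at what that synthetic pipeline looks like, here’s a minimal sketch against an OpenAI-compatible endpoint. The endpoint, model name, and prompt are placeholders, not anyone’s actual recipe:

    ```python
    # Minimal synthetic-data sketch: use a strong "teacher" model to
    # generate instruction/answer pairs, then dump them as JSONL for
    # finetuning. Endpoint and model name are placeholders.
    import json
    from openai import OpenAI

    client = OpenAI(base_url="https://example.com/v1", api_key="...")

    topics = ["unit conversion", "SQL basics", "regex"]
    with open("synthetic.jsonl", "w") as f:
        for topic in topics:
            resp = client.chat.completions.create(
                model="teacher-model",
                messages=[{
                    "role": "user",
                    "content": f"Write one question about {topic}, then answer it. "
                               "Format: Q: ... A: ...",
                }],
            )
            text = resp.choices[0].message.content
            f.write(json.dumps({"topic": topic, "raw": text}) + "\n")
    ```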


  • That’s pretty much what local ML is.

    If open weights LLMs take off, and business users realize they can just finetune tiny specialized models for their own stuff (a sketch of that below), OpenAI is toast. All of Big Tech’s bets are. It’s why they keep fanning the “AGI” lie, why they’re pushing so hard for regulation, and why they’re shoving LLMs where they just don’t fit while harping on safety.
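
    To show how low the bar for that kind of specialization is, here’s a minimal LoRA finetuning sketch with Hugging Face transformers and peft. The model ID is a placeholder; any small open-weights model works the same way:

    ```python
    # Minimal LoRA sketch: adapt a small open-weights model to one narrow
    # task by training a few million adapter parameters on consumer
    # hardware. Model ID and data are placeholders.
    from transformers import AutoModelForCausalLM, AutoTokenizer
    from peft import LoraConfig, get_peft_model

    base = "Qwen/Qwen2.5-0.5B-Instruct"  # any small open model
    tok = AutoTokenizer.from_pretrained(base)
    model = AutoModelForCausalLM.from_pretrained(base)

    # Freeze the base model; train only low-rank adapters on the
    # attention projections.
    lora = LoraConfig(r=16, lora_alpha=32,
                      target_modules=["q_proj", "v_proj"],
                      task_type="CAUSAL_LM")
    model = get_peft_model(model, lora)
    model.print_trainable_parameters()  # typically well under 1% of the weights

    # From here it's an ordinary training loop (or transformers' Trainer)
    # over a few thousand task-specific examples.
    ```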


  • To illustrate what I mean, look at the top comments/replies on the NASA Artemis posts.

    …It’s basically all conspiracy theorists, and government skeptics.

    Twitter’s funneling the Artemis posts to those users because it’s what they want to see, and it’s what’s most engaging for them.

    In the EFF’s case, I’m not just talking about Musk’s influence. The algorithm will only show the EFF to users who would be highly engaged by it. E.g., angry skeptics who wouldn’t be swayed by the EFF anyway, or fans who already agree with the EFF. It’s literally not going to show the EFF to people who need to see it, as Twitter’s metrics would show it as unengaging.
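
    The mechanism is easy to caricature in a few lines. In a purely engagement-ranked feed (everything below is invented for illustration), a post never surfaces to the users it might actually persuade:

    ```python
    # Toy engagement-ranked feed. Users and scores are invented; the point
    # is that ranking purely on predicted engagement hides a post from
    # exactly the people who don't already have strong feelings about it.
    FEED_SIZE = 2

    # Predicted engagement with an EFF post: fans and angry skeptics both
    # score high (likes, quote-dunks); persuadable users score low.
    predicted_engagement = {
        "eff_fan": 0.9,        # already agrees
        "angry_skeptic": 0.8,  # engages by dunking
        "persuadable": 0.2,    # the person the EFF actually needs to reach
    }

    candidates = ["eff_post", "meme", "news"]

    def feed_for(user):
        # Rank candidates by this user's predicted engagement; only the
        # EFF post's score varies per user in this toy.
        scores = {"eff_post": predicted_engagement[user], "meme": 0.5, "news": 0.4}
        return sorted(candidates, key=lambda p: scores[p], reverse=True)[:FEED_SIZE]

    for user in predicted_engagement:
        print(user, "->", feed_for(user))
    # persuadable -> ['meme', 'news']  (never sees the EFF post)
    ```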


    This is the “false image” I keep trying to dispel. Twitter is less and less the “even spread” of exposure people think it is (and that it sort of used to be), and more and more a hyper-focused bubble of what you want to hear, and only what you want to hear. All the changes Musk is making amplify that. Maybe that’s fine for some orgs, but there’s no point in the EFF staying in that kind of environment, regardless of ethics.