- cross-posted to:
- hackernews@lemmy.smeargle.fans
Satya just released a statement https://blogs.microsoft.com/blog/2023/11/17/a-statement-from-microsoft-chairman-and-ceo-satya-nadella/
Rumor is they were unhappy with some of his outside investments. I also learned yesterday that Sam has a sister who has accused him of emotional and sexual abuse. No idea if either of those allegations is credible, though.
I think we all would. Given the sudden nature of it I’m sure there is more to the story than what’s been publicly disclosed.
It seems that he and Ilya, the chief scientist, had irreconcilable differences over how quickly to productize the AI developments they were building.
In essence, Altman kept pushing things out too quickly and focusing on immediate commercialization, while Ilya and the rest of the board wanted to focus on the core mission of advancing AI to the point of AGI safely and for everyone.
My own guess is that some of this schism dates back to the early integration with Bing.
If you read what Ilya has said about superalignment, a lot of those concepts were reflected in ‘Sydney,’ the early fine-tuned chat model for GPT-4 that was integrated into Bing.
To put it simply: this thing was incredible. I was blown away by the work OpenAI had done aligning at such an abstract level. It was definitely not production ready, as the issues Microsoft ran into quickly revealed, but it was the single most impressive thing I’ve ever seen.
In its place we got the band-aid of a much-reduced model which scores well on certain logic tests but is a shadow of its former self at outside-the-box adaptation, with a robotic “I have no feelings, desires, etc.” persona that was basically the alignment methodology best suited to GPT-3 (but not necessarily the best for GPT-4).
I suspect the band-aid was initially pitched as a “let’s put the fire out” solution to salvage the Bing integration, but that as time went on, Altman kept wanting quick fixes rather than adequately investing the resources and dev cycles to work on alignment properly, as increasingly complex models demand.
With GPT-5 now in the works, and allegedly another breakthrough moment in the past few weeks, the CEO’s continued preference for band-aids and fast rollouts over a slower, more cautious, but more thorough approach finally became untenable.
Either he fucked up somewhere, he refused to do something “they” wanted, or he wanted to do something “they” didn’t want.
Given the potential of what AI can and/or shouldn’t do, I think it’s probably one of the last two. That’s only because money is involved, and it feels like greed is rampant.
Then again, maybe he’s a pedo.
I’m not sure if this makes me a pessimist or a conspiracy nut.
What a waste of your time to write this drivel, and a waste of everyone else’s who read it. You literally just wrote a bunch of fucking nonsense; you may as well claim he’s also the goddamn devil incarnate, since you’re just handing out random claims.
He lied, but yeah, it’s because he refused to do something “””they””” told him to do. Sure, it definitely makes sense to make the claim you did with zero evidence, suggestive or otherwise, that goes against the only actual information anyone who isn’t the board has.
Do you usually go around answering people’s questions about shit you don’t understand? You have no insight or even a meaningful quote, but no, instead it’s “maybe he’s a pedo.” Who are you, some sort of phony Stark wannabe?
To answer your question, yes it absolutely makes you sound like a fucking nut.
What’s really wild to me is how you put forth this bullshit yet make no mention of this bullshit.
You also skipped the part where you said it’s about money, when this is the same board as the original nonprofit, and they are not investors.
Microsoft … non-profit.
I don’t see an alignment of interests.