

Aren’t fighters dead?
Look, I like cool planes, but military scenarios where 5-500 drones are worse than a single mega-expensive jet, and that aren’t already covered by existing planes/missiles, seem… very rare.
Look at Ukraine’s drone ops. I mean, hell, imagine if the DoD put their budget into that.
Well, exactly. Trump apparently has a line to Apple and could probably get Tim to take it down.
Yep.
It’s not the best upscale TBH.
Hence I brought up redoing it with some of the same techniques (oldschool vapoursynth processing + manual pixel peeping) mixed with more modern deinterlacing and better models than Waifu2X. Maybe even a finetune? That got me banned.
How does this make any sense?
Shouldn’t they be suing Apple to take it down if they don’t like it? I know they just want to weaken press, but it feels like an especially weak excuse.
Pro is 120hz.
But they are expensive as heck. I only got the 16 Plus because it’s a carrier loss leader, heh.
And wouldn’t fix some of my other quibbles with iOS’s inflexibility. My ancient jailbroken iPhone 4 was more customizable than now, and Apple is still slowly, poorly implementing features I had a decade ago. It’s mind boggling, and jailbreaking isn’t a good option anymore.
I got banned from a fandom subreddit for pointing out that a certain fan remaster was (partially, with tons of manual work) made with ML models. Specifically with oldschool GANs, and some smaller, older models as part of a deinterlacing pipeline, from before ‘generative AI’ was even a term.
My last Android phone was a Razer Phone 2, SD845 circa 2018. Basically stock Android 9.
And it was smooth as butter. It had a 120hz screen while my iPhone 16 is stuck at 60, and I can feel it. And it flew through some heavy web apps I use while the iPhone chugs and jumps around, even though the new SoC should objectively blow away even modern Android devices.
It wasn’t always this way; iOS used to be (subjectively) so much faster that it’s not even funny, at least back when I had an iPhone 6S(?). Maybe there was an inflection point? Or maybe it’s only the case with “close to stock” Android stuff that isn’t loaded with bloat.
Random aside, I switched from Android to iOS a year ago. I miss Android already.
The UI is more convoluted and clunky than iOS from years ago, just as uncustomizable, and performs shockingly badly on heavy webpages on a brand new 16+. It’s got no freaking RAM, no SD card slot. Some free FOSS apps are nonexistent or paid-only.
Security and OOTB privacy is better and app support is generally better, but that’s about it? I’d probably keep an iPhone around to bank on when I eventually switch…
And I shit you not, Latinos will still vote MAGA in droves in 2026, and once again analysts and Democrats will be left scratching their heads wondering why, while literally everyone they pass on the street is glued to their phone.
Maybe if they keep campaigning like it’s 1950, it’ll eventually work?
One thing about Anthropic/OpenAI models is that they go off the rails with lots of conversation turns or long contexts. Like when they need to remember a lot of vending-machine conversation, I guess.
A more objective look: https://arxiv.org/abs/2505.06120v1
https://github.com/NVIDIA/RULER
Gemini is much better. TBH the only models I’ve seen that are half decent at this are:
- “Alternate attention” models like Gemini, Jamba Large, or Falcon H1, depending on the iteration. Some recent versions of Gemini kinda lose this, then get it back.
- Models finetuned specifically for this, like roleplay models or the Samantha model trained on therapy-style chat.
But most models are overtuned for one-shot tasks like “fix this table” or “write me a function,” and don’t invest much in long-context performance because it’s not very flashy.
What @mierdabird@lemmy.dbzer0.com said, but the adapters aren’t cheap. You’re going to end up spending more than the 1060 is worth.
A used desktop to slap it in, that you turn on as needed, might make sense? Doubly so if you can find one with an RTX 3060, which would open up 32B models with TabbyAPI instead of ollama. Some people configure them for wake-on-LAN and boot an LLM server on demand; there’s a rough sketch of the wake packet below.
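Something like this, in Python (the MAC and broadcast address are placeholders, obviously; swap in your own box’s NIC and make sure WoL is enabled in the BIOS/OS):

```python
import socket

def send_wol(mac: str, broadcast: str = "255.255.255.255", port: int = 9) -> None:
    """Send a Wake-on-LAN magic packet: 6 bytes of 0xFF followed by
    the target MAC repeated 16 times, sent as a UDP broadcast."""
    mac_bytes = bytes.fromhex(mac.replace(":", "").replace("-", ""))
    if len(mac_bytes) != 6:
        raise ValueError("expected a 6-byte MAC address")
    packet = b"\xff" * 6 + mac_bytes * 16
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
        sock.sendto(packet, (broadcast, port))

# Placeholder MAC; point this at the desktop's NIC.
send_wol("aa:bb:cc:dd:ee:ff")
```

From there a systemd unit or a cron @reboot job can start the LLM server once the box is up.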
Yes! Fission power is objectively great, with the biggest caveats being the huge upfront investment, slow construction, and (depending on the specific technology) proliferation concerns.
I honestly thought Trump would consider it ‘woke,’ as opposed to ‘clean coal,’ a term he’s used.
…Well, the pro nuclear angle is a tiny silver lining?
ChatGPT (last time I tried it) is extremely sycophantic though. Its high default sampling temperature also leads to totally unexpected/random turns.
Google Gemini is now too.
And they log and use your dark thoughts.
I find that less sycophantic LLMs are way more helpful, hence I bounce between Nemotron 49B and a few 24B-32B finetunes (or task vectors for Gemma).
…I guess what I’m saying is people should turn towards more specialized and “openly thinking” free tools, not something generic, corporate, and purposely overpleasing like ChatGPT or most default instruct tunes.
TBH this is a huge factor.
I don’t use ChatGPT, much less use it like it’s a person, but I’m socially isolated at the moment. So I bounce dark internal thoughts off of locally run LLMs.
It’s kinda like looking into a mirror. As long as I know I’m talking to a tool, it’s helpful, sometimes insightful. It’s private. And I sure as shit can’t afford to pay a therapist out the wazoo for that.
It was one of my previous problems with therapy: payment depending on someone else, at preset times (not when I need it). Many sessions feel like they end when I’m barely scratching the surface. Yes, therapy is great in general and for deeper feedback/guidance, but still.
To be clear, I don’t think this is a good solution in general. Tinkering with LLMs is part of my living; I understand the gist of how they work, and I tend to use raw completion syntax or even base pretrains.
But most people anthropomorphize them because that’s how chat apps are presented. That’s problematic.
You can still use the IGP, which might be faster in some cases.
Oh actually that’s a great card for LLM serving!
Use the llama.cpp server from source, it has better support for Pascal cards than anything else:
https://github.com/ggml-org/llama.cpp/blob/master/docs/multimodal.md
Gemma 3 is a hair too big (like 17-18GB), so I’d start with InternVL 14B Q5K XL: https://huggingface.co/unsloth/InternVL3-14B-Instruct-GGUF
Or Mistral Small 3.2 24B IQ4_XS for more ‘text’ intelligence than vision: https://huggingface.co/unsloth/Mistral-Small-3.2-24B-Instruct-2506-GGUF
I’m a bit ‘behind’ on the vision model scene, so I can look around more if those don’t feel sufficient, or walk you through setting up the llama.cpp server. Basically it provides an endpoint you can hit with the same API as ChatGPT; there’s a quick sketch of that below.
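For example, assuming llama-server is already running locally with a vision model and its mmproj loaded (the model filename, port, and image path here are just placeholders):

```python
import base64
import requests

# Assumes something like this is already running (filenames are illustrative):
#   llama-server -m InternVL3-14B-Q5_K_XL.gguf --mmproj mmproj.gguf --port 8080
SERVER = "http://localhost:8080/v1/chat/completions"

def describe_image(path: str, prompt: str = "Describe this image.") -> str:
    # Images go in as base64 data URLs, same shape as OpenAI's chat API.
    with open(path, "rb") as f:
        b64 = base64.b64encode(f.read()).decode()
    payload = {
        "messages": [{
            "role": "user",
            "content": [
                {"type": "text", "text": prompt},
                {"type": "image_url",
                 "image_url": {"url": f"data:image/jpeg;base64,{b64}"}},
            ],
        }],
        "max_tokens": 256,
    }
    resp = requests.post(SERVER, json=payload, timeout=300)
    resp.raise_for_status()
    return resp.json()["choices"][0]["message"]["content"]

print(describe_image("photo.jpg"))
```

Any OpenAI-compatible client works against the same endpoint too, so existing tooling can just be pointed at the new base URL.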
Can’t speak to the rest of the claims, but Android practically does too. If users have to sideload an app, you’ve lost 99% of them, if not more.
It makes me suspect they’re not talking about the stock systems OEMs ship.
Relevant XKCD: https://xkcd.com/2501/