I’m beautiful and tough like a diamond…or beef jerky in a ball gown.

  • 59 Posts
  • 317 Comments
Joined 8 months ago
Cake day: July 15th, 2025



  • I get what you’re saying, and the “individual carbon footprint” framing is often used to shift blame onto regular people just living their lives, but we do still have a carbon footprint. It may be a tiny, rodent-sized footprint compared to the Kaiju-sized ones of big industries, but our actions and choices do have an effect (especially collectively).

    I just don’t like dismissing the individual carbon footprint as total propaganda because it’s not wrong (though I acknowledge it is abused). Dismissing it like that just puts out a defeatist “nothing I do matters” message when our individual choices do matter and add up.

    Can you live a totally carbon-neutral life in the modern age? No, probably not. But we also shouldn’t throw the baby out with the bathwater and do nothing.

  • Audio transcribing should be the little “waveform” icon at the right of the text input.

    Image generation, I’m not sure, as that’s not a use case I have, and I don’t think the small-ish models I run are even capable of that.

    I’m not sure how audio transcribing works in OpenWebUI (I think it has built-in models for that?), but image generation is a “capability” that needs to be both part of the model and enabled in the model’s settings (Admin => Settings => Models).


  • Disclaimer: All of my LLM experience is with local models in Ollama on extremely modest hardware (an old laptop with NVidia graphics), so I can’t speak to the technical reasons the context window isn’t infinite, or at least larger, on the big players’ models. My understanding is that the context window is basically the model’s short-term memory. In humans, short-term memory is also fairly limited in capacity. But unlike humans, the LLM can’t really see (or hold) the big picture in its mind.

    But yeah, everything you said is correct. Expanding on that, if you try to get it to generate something long-form, such as a novel, it’s basically just generating infinite chapters, using the previous chapter (or as much of the history as fits into its context window) as reference for the next. This means, at minimum, the result is going to be full of plot holes and will never reach a conclusion unless the model is explicitly directed to wrap things up. And, again, given the limited context window, the ending will be full of plot holes and based almost entirely on the previous chapter or two.
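    To make the “only sees the last chapter or two” point concrete, here’s a minimal, self-contained Python sketch of the effect. It’s not how Ollama or any real model manages its window (those work on tokens via a tokenizer, not whole chapters); the word-count budget and whole-chapter granularity are simplifying assumptions just to illustrate why earlier setup silently falls out of view.

```python
# Minimal sketch of the sliding context window described above.
# Assumption: "tokens" are approximated by word counts, and history is
# kept/dropped at whole-chapter granularity. Real LLMs use tokenizers.

CONTEXT_BUDGET = 50  # pretend the model can only "see" 50 tokens of history


def visible_history(chapters, budget=CONTEXT_BUDGET):
    """Return the most recent chapters that fit within the context budget.

    Older chapters are silently dropped, which is why long-form output
    drifts and accumulates plot holes: the model literally can't see them.
    """
    kept, used = [], 0
    for chapter in reversed(chapters):  # walk backward from the newest
        cost = len(chapter.split())
        if used + cost > budget:
            break  # everything older than this point is invisible
        kept.append(chapter)
        used += cost
    return list(reversed(kept))  # restore chronological order


# Five "chapters" of ~22 words each; only the newest two fit in the budget.
chapters = [f"Chapter {i}: " + "word " * 20 for i in range(1, 6)]
context = visible_history(chapters)
```

    With a 50-token budget, `context` ends up holding only Chapters 4 and 5, so anything established in Chapter 1 is simply gone when the next chapter is generated.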

    It’s funny because I recently found an old backup drive from high school with some half-written Jurassic Park fan fiction on it, so I tasked an LLM with fleshing it out, mostly for shits and giggles. The result is pure slop that seems like it’s building to something but ultimately goes nowhere. The other funny thing is that it reads almost exactly like a season of Camp Cretaceous / Chaos Theory (the animated kids’ JP series), and I now fully believe those are also LLM-generated.



  • I used to buy their stuff and use tuya-convert to flash Tasmota onto them. But they kept updating the firmware to lock that out, and I eventually hit a batch of 15 smart plugs where none of them would flash. They were too much of a PITA to crack open and flash the ESP8266 manually, so I returned the whole batch as defective, left a scathing review, and blackballed the whole brand.

  • It’s so common for “anti-censorship” to be code for “Nazi-friendly” that I’m immediately suspicious of any platform that uses that as a selling point.

    I’m similarly suspicious, but it’s not just code for “Nazi-friendly”; it also attracts crackpots, maladaptives, etc. Rational people who read “anti-censorship” in this context know it means the platform isn’t beholden to corporate or government interests. But everyone else seems to want to interpret it as “I can say whatever I want! How dare you mod anything I say?! Freeze-peach, y’all!”

    I wish they’d pick a different term for these non-corporate alternatives, but I don’t have a better suggestion to offer right now.