• 17 Posts
  • 1.23K Comments
Joined 1 year ago
Cake day: June 15th, 2023



  • 3d not being required makes a hell of a lot of sense, and of course it wasn’t: people have been drafting on paper for ages. They might’ve ended up on Mac or maybe Amiga, but an SGI workstation is quite an investment when you don’t even need to spin polygons. IRIS GL dates back to the early 80s, so it doesn’t seem so much to be a timeline thing as a price and need thing. And it’s not like you can’t have a 3d view without acceleration, it would just take a while to render, and a frame every five seconds might still be usable.

    There apparently was an IRIX version at one time, but the user base had no preference for it; more likely they were thinking “where’s my C: drive”, so once 3d acceleration hit the mainstream everyone happily switched back to Microsoft. Meanwhile you have 3d artists complaining that they can’t move windows with meta+lmb on Windows.




  • OpenEXR. Though it probably could use a spec upgrade, in particular adding JPEG-XL to the list of compression algorithms. It’s not like OpenEXR’s choices are bad, the lossy ones are just geared more towards fidelity than space savings, kind of the opposite of what you want for the web, where saving space is often paramount and fidelity a bonus.

    Bonus: it supports multi-channel, so not just RGBA. Not terribly useful for your run-of-the-mill camera, very useful in production where you might want to attach the depth buffer, cryptomattes, etc., and I guess you could also use it for the output of light field cameras. Oh, there’s also multi-view, so you can store not just stereo images but also whole all-around captures and stuff. There’s practically nothing pixel-related you can’t do with it, though it might require custom tooling.
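
    To make the multi-channel point concrete, here’s a minimal sketch using the Python OpenEXR/Imath bindings; the resolution, the channel layout (RGBA plus a “Z” depth channel), the file name, and the random placeholder pixels are all just assumptions for the example:

    ```python
    # Minimal sketch: write an RGBA + depth EXR with the Python OpenEXR/Imath bindings.
    # Resolution, channel names and pixel data are placeholders.
    import OpenEXR
    import Imath
    import numpy as np

    width, height = 640, 480
    half = Imath.Channel(Imath.PixelType(Imath.PixelType.HALF))
    flt = Imath.Channel(Imath.PixelType(Imath.PixelType.FLOAT))

    header = OpenEXR.Header(width, height)
    # RGBA in half precision, depth as full float; any channel set goes,
    # as long as every channel is declared in the header.
    header['channels'] = {'R': half, 'G': half, 'B': half, 'A': half, 'Z': flt}

    rgba = np.random.rand(height, width, 4).astype(np.float16)   # placeholder colour
    depth = np.random.rand(height, width).astype(np.float32)     # placeholder depth buffer

    out = OpenEXR.OutputFile('multichannel.exr', header)
    out.writePixels({
        'R': rgba[..., 0].tobytes(),
        'G': rgba[..., 1].tobytes(),
        'B': rgba[..., 2].tobytes(),
        'A': rgba[..., 3].tobytes(),
        'Z': depth.tobytes(),
    })
    out.close()
    ```

    Cryptomatte layers or any other extra channels work the same way, they just need to be declared in the header alongside their pixel type.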


  • EU fines are working. Not in the sense that they would prevent companies from trying to do shit, but in the sense that they shape up once one has been levied: understand that those 800m are a shot across the bow. If the behaviour continues, there are going to be daily punitive fines that very quickly become very unaffordable.

    I mean, what is the money being used for?

    Goes towards the EU budget, reducing the amount the member states have to pay in. In other words, Berlaymont doesn’t gain anything from levying fines: their budget stays the same.





  • The vast majority of sales are made to US based firms so they likely have a lot of sway.

    The sway is that TSMC uses ASML EUV lithography machines, and the US holds patents on those because it did foundational research regarding EUV lithography. Also, the EU hasn’t put China on the “it is illegal for EU companies to kowtow to US sanctions” list; ironically, ASML could sell to Cuba and Iran. If the EU were to tell ASML to sell to China, the US would be free to not buy ASML machines any more and, by doing that, kill off Intel’s fabs.

    None of this stuff has military relevance: you don’t need, or even want, small nodes (which require EUV) in military applications, you want hardened chips instead. Run-of-the-mill consumer chips go all frizzy if an EMP looks at them sideways. This is about the US protecting US fabs, foremost Intel. Not the chip design part but the manufacturing one.

    Europe hasn’t played the high-end end-consumer chip market for ages and I doubt we’ll do it any time soon. Having ASML, Zeiss etc. means that whoever actually produces that stuff wants to be friendly with us, and strategically, both militarily and economically, our own production facilities are perfectly sufficient. Hence also why ESMC will only go as small as 12nm: it’s the most cost-effective node size, and the performance is perfectly adequate for a missile, a CNC mill, or a car infotainment system. Or the gyroscope chip in your phone (it’s almost certainly a Bosch); EUV doesn’t make a lick of sense when you’re doing MEMS. Where we have to catch up is chip design; let’s see how that RISC-V supercomputer chip turns out.



  • …that meme makes is that it’s clear the gal doesn’t want to participate in the conversation due to body language.

    Not trying to argue against the meme, how it’s used and understood etc, but: You can’t interpret body language from a still image, you need at least like two or three movements, you need to see how someone reacts to their own movements so to speak. She might just as well be going “woah, cool”, slight backward surprise movement, and the two are the most wholesome couple you’ve ever met. Or she actually really wants to get out of there. That’s the point: The still image itself is too little information to make the distinction.


  • The problem is: data is code, and code is data. An algorithm to compute prime numbers is equivalent to a list of prime numbers (also, though not relevant to this discussion: homoiconicity and interpretation). Yet we still want to make a distinction.
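
    As a toy illustration of that equivalence, here’s the prime example spelled out both ways (pure illustration, nothing here is specific to the rest of the argument):

    ```python
    # The same information twice: once as code (a rule), once as data (a list).
    def primes_up_to(n):
        """Return all primes <= n by trial division against earlier primes."""
        found = []
        for candidate in range(2, n + 1):
            if all(candidate % p != 0 for p in found):
                found.append(candidate)
        return found

    # The "data" form of the exact same thing, for n = 30:
    PRIMES_UP_TO_30 = [2, 3, 5, 7, 11, 13, 17, 19, 23, 29]

    assert primes_up_to(30) == PRIMES_UP_TO_30
    ```

    For any fixed range the two carry the same information; the function is just the compact, rule-shaped representation of the list.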

    Is a PAQ-compressed copy of the Hitchhiker’s Guide code? Technically yes, practically no, because the code is just a fancy representation of data (PAQ is basically an exercise in finding algorithms that produce particular data in order to save space). Is a sorting algorithm code? Most definitely; it can’t even spit out data without being given an equally-sized amount of data. On that scale, from code to code representing data, AI models are at least three quarters of the way towards code representing data.

    As such I’d say that AI models are data in the same sense that holograms (these ones) are photographs. Do they represent a particular image? No, but they represent a related, indexable set of images. What they definitely aren’t is rendering pipelines. Or, and that’s a whole other possible line of argument: requiring Turing-complete interpretation.





  • Not to mention ARM chips, which by and large were/are more efficient on the same node than x86 because of their design: ARM chip designers have been doing that efficiency thing since forever, owing to the mobile platform, while desktop designers only got into the game quite late. There are also some wibbles like ARM instruction decoding being inherently simpler, but big picture that’s negligible.

    Intel just really, really has a talent for not seeing the writing on the wall, while AMD made a habit of spotting it out of sheer necessity to even survive. Bulldozer nearly killed them (and the idea itself wasn’t even bad, it just didn’t work out), while Intel is tanking hit after hit after hit.


  • See, there’s the stuff that happened, there’s the version that tankies want to believe (complete denial), which is actually different from the official CCP stance (“necessary and proportionate police action to ensure stability”, with the implication “enough questions, comrade, nothing more to see”), which is different from the western public… myth, I have to say. Back when the stuff went down, western journalists didn’t know what was happening, there were confusing reports, there were reports of violence, and then there was the tank man – taken the day after (IIRC, but definitely later, and no, he didn’t get run over). The collective imagination somehow constructed an image of the Chinese army rolling over students. Which is… metaphorically true, but not literally. And then the CCP uses that western imagination to spin its own tale of how the evil west is slandering it.


  • Lore books, eh? You’re giving me ideas. Hard to justify spending budget on that kind of stuff even if you have money to work with… how would one even get one’s hands on a woodprint artist? You know, the chisel and printing press kind? Imitating it is going to be hard indeed, and figuring out how to do it isn’t worth it for a couple of one-off images you could just as well do without, so either generating from a prompt or telling the model to re-paint an input image in that style seems like the obvious solution.

    I think a similar rule applies as when it comes to code and NIH syndrome: whatever it is that is your primary focus you should write yourself, use libraries for the rest. If you write a shooter, you’re going to write the gunplay, but can take the renderer off the shelf. If you’re writing a walking simulator that happens to have a gun somewhere but is generally focussed on graphical atmosphere, go grab the gunplay off the shelf but write the renderer yourself.

    So unless the focus of your game is rummaging through books in an ancient library, go use that model.