• 13 Posts
  • 982 Comments
Joined 1 year ago
Cake day: June 15th, 2023


  • For a lorry, no. For a private vehicle, yes. Standard driving licenses only allow up to 3.5t combined permissible weight (that is, vehicle and trailer plus maximum load), 750kg of which for trailer and load. If you want to drive a combination of vehicle and trailer each individually up to 3.5t (so 7t total) you need a trailer license; anything above that needs a lorry license, with all the bells and whistles such as regular medical checkups.

    Or, put differently: a standard VW Golf can pull almost three times as much as most drivers are allowed to pull.

    A small load for a private vehicle would be a small empty caravan, or a light trailer with some bikes. A Smart Fortwo can pull 550kg, which will definitely look silly but is otherwise perfectly reasonable – that’s enough for both applications.
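    The license-class arithmetic above can be sketched as a quick check. A minimal sketch in Python – the rules are the simplified ones described here, and the weight figures in the usage lines are ballpark assumptions, not official ratings:

```python
# Simplified license-class towing check, per the rules described above:
#   class B:    vehicle + trailer combined permissible weight <= 3500 kg,
#               of which the trailer (plus load) may be at most 750 kg
#   class B+E:  vehicle and trailer each up to 3500 kg (7000 kg combined)
# Anything heavier needs a lorry license.

def license_needed(vehicle_kg: int, trailer_kg: int) -> str:
    """Return the (simplified) license class needed for a combination.
    Weights are permissible total weights, i.e. including maximum load."""
    if trailer_kg <= 750 and vehicle_kg + trailer_kg <= 3500:
        return "B"
    if vehicle_kg <= 3500 and trailer_kg <= 3500:
        return "B+E"
    return "lorry"

# A Smart pulling a 550 kg trailer fits the standard license...
print(license_needed(1100, 550))   # B
# ...while a Golf towing a mid-size caravan already needs the trailer license.
print(license_needed(1900, 1600))  # B+E
```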


  • My kitchen scale has a USB-C port. While I would certainly like it to be able to stream GB/s worth of measurement data over it, the fact of the matter is I paid like ten bucks for it; all it knows is how to charge the CR2032 cell inside. I also don’t expect it to support DisplayPort alt mode: it has a seven-segment display, I don’t really think it’s suitable as a computer monitor.

    What’s true, though, is that it’d be nice to have proper labelling standards for cables. It should stand to reason that the cable that came with the scale doesn’t support high-performance modes – heck, it doesn’t even have data lines; literally the only thing it’s capable of is low-power charging. Nothing wrong with that, but it’d be nice to be able to tell that it can only do that at a semi-quick glance when fishing for a cable in the spaghetti bin.


  • A and B are the originals, used for the host and device sides, respectively. C is the same on both ends of the cable because, figures, there are device classes which can sensibly act as both, in particular phones. It’s also the most modern of the bunch, supporting higher data transfer and power delivery rates, because back in the days when A and B were designed people were thinking about connecting mice and keyboards, not 8k monitors or kWhs worth of lithium batteries.

    The whole mini/micro shenanigans are alternative B types, and quite deeply flawed, mechanically speaking.



  • barsoap@lemm.ee to Fediverse@lemmy.world – The Fediverse

    anyone can host an email service

    Eh, no. You could in the 2000s; nowadays spam protection is so tight, and necessarily that tight, that you need at least a full-time position actively managing the server or you’re getting blacklisted for one reason or another. Other servers will simply not accept emails sent by you if you don’t look legit and professional.

    Definitely possible for a company with an IT department; as a small company you want to outsource it (emails being on your domain doesn’t mean you’re managing the server); as a hobbyist, well, you might be really into it, but generally also no. Send Protonmail or Posteo or whoever a buck or so a month.
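    For context, “looking legit” starts with DNS records like the following, which receiving servers check before accepting your mail. This is only a sketch: example.com, the selector name, and the key are placeholders.

```
; SPF: which hosts may send mail for this domain
example.com.                  IN TXT "v=spf1 mx -all"

; DKIM: public key receivers use to verify message signatures
sel1._domainkey.example.com.  IN TXT "v=DKIM1; k=rsa; p=<base64 public key>"

; DMARC: what receivers should do when SPF/DKIM checks fail
_dmarc.example.com.           IN TXT "v=DMARC1; p=reject; rua=mailto:postmaster@example.com"
```

    And that’s just table stakes – on top you still need things like matching reverse DNS and a clean IP reputation, which is where the ongoing babysitting comes in.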


  • That’s the sort of thing that should just be an extension

    It most likely is on the technical level, just shipped by default and integrated into the standard settings instead of the add-on ones. And it’s going to be opt-in, so you won’t have to go into about:config to disable it. Speaking of which: you’re looking for extensions.pocket.enabled, it should be false. And before you say “muh disk space”: it’s probably like 5k of JS and CSS or such.
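    If you’d rather pin it down declaratively than click through about:config, the same pref (that part is the real pref name) can go into a user.js in your Firefox profile directory:

```js
// user.js – read on every Firefox start, overrides about:config values
user_pref("extensions.pocket.enabled", false);
```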


  • Should all be in place, even nvidia driver support. It’s one of the rare cases where I actually side with nvidia on a technical level: having explicit sync is good. I can also understand that they didn’t feel like implementing proper implicit sync (hence all the tearing etc.) when it’s a technically inferior solution.

    OTOH, they shouldn’t have bloody waited until now to get this through. Had they not ignored Wayland for a literal decade, this all could’ve been resolved before it became an issue for end users.





  • I argue that X11 would have hyperactive development, if we did not have Wayland

    Wayland was started by the X developers because they were sick and tired of hysterical raisins. No one else volunteered to take over X, either; the Wayland devs are thus still stuck with maintaining XWayland themselves. I’m sure that at least a portion of the people shouting “but X just needs some work” had a look at the codebase, but then noped out of it – and subsequently stopped whining about the switch to Wayland.

    What’s been a bit disappointing is the DEs getting on the Wayland train so late. A lot of the kinks could have been worked out way earlier if they had given their 2ct of feedback right from the start, instead of waiting ten years to even start thinking about migrating.



  • That does not seem to be a stray, and yes, there are definitely reasons to take potshots at Gnome. They still don’t support server-side decorations. Everyone is absolutely fine with them not wanting to use them in their own apps, having those draw window decorations themselves, and every other DE lets Gnome apps do exactly that – but Gnome steadfastly and pointlessly refuses to draw decorations for apps which don’t want to draw their own. It’d be like a hundred straightforward lines of code for them.

    And that’s just the tip of the iceberg when it comes to breakage you have to expect when running Gnome.


  • Wayland kinda is an x.org project in the first place. AFAIK it’s officially organised under freedesktop, but the core devs are x.org people.

    x.org, as in the organisation and/or domain, might not be needed any more, but the codebase is still maintained by exactly those Wayland devs for the sake of XWayland. Support for X11 clients isn’t going to go away any time soon. XWayland is also capable of running in rootful mode and using X window managers; if there’s enough interest in continuing the X.org distribution, I would expect them to completely rip out the driver stack at some point and switch it over to an off-the-shelf minimal Wayland compositor plus XWayland. There are people willing to maintain XWayland for compatibility’s sake, but all that old driver cruft? No way.


  • They have to be hotter than the temperature of the Sun

    Well, they don’t strictly speaking have to, but to get fusion you need a combination of pressure and temperature, and increasing temperature is way easier than increasing pressure if you don’t happen to have the gravity of the sun to help you out. Compressing things with magnetic fields isn’t exactly easy.

    Efficiency in a fusion reactor would be how much of the fusion energy is captured, and then how much of that you need to keep the fusion going – everything from plasma heating to cooling down the coils. Fuel costs are very small in comparison to everything else, so being a bit wasteful isn’t actually that bad if it doesn’t make the reactor otherwise more expensive.

    What’s much more important is to be economical: all the currently existing reactors are research reactors, and they don’t care about operating costs. What the Max Planck people are currently figuring out is exactly that kind of stuff – “do we use a cheap material for the divertors and exchange them regularly, or do we use something fancy and service the reactor less often?” That’s an economic question, one that makes the reactor cheaper to operate so the overall price per kWh is lower. They’re planning on having the first commercial prototype up and running in the early 2030s. If they can achieve per-kWh fuel and operating costs lower than gas, they’ve won, even though levelised costs (that is, including construction of the plant, amortised over time) will definitely still need lowering. Can’t exactly buy superconducting coils off the shelf right now, least of all in those odd shapes that stellarators use.
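    To make the efficiency-vs-economics point concrete, here’s the rough shape of a levelised-cost calculation in Python. Every number below is an illustrative placeholder, not a real reactor figure:

```python
# Levelised cost of electricity, simplest possible form:
#   (construction + lifetime operating costs + lifetime fuel costs) / lifetime output
# All inputs are made-up placeholders, NOT real reactor data.

def lcoe_eur_per_kwh(capex_eur, years, kwh_per_year,
                     opex_eur_per_year, fuel_eur_per_year):
    lifetime_kwh = years * kwh_per_year
    total_eur = capex_eur + years * (opex_eur_per_year + fuel_eur_per_year)
    return total_eur / lifetime_kwh

# Placeholders: 5bn construction, 40 years, 8 TWh/year output,
# 200m/year operations, 10m/year fuel.
base = lcoe_eur_per_kwh(5e9, 40, 8e9, 2e8, 1e7)

# Doubling fuel costs barely moves the result -- operating and capital
# costs dominate, which is why "economical" beats "fuel-efficient" here.
double_fuel = lcoe_eur_per_kwh(5e9, 40, 8e9, 2e8, 2e7)
print(base, double_fuel)
```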


  • The ISA does include SSE2, though, which is 128 bit – already more than the pointer width. They also doubled the number of xmm registers compared to 32-bit SSE2.

    Back in the day, using those instructions often gained you nothing, as the CPUs didn’t come with enough ALUs to actually do operations on the whole vector in parallel.
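    The widths line up like this: an xmm register is 16 bytes, twice a 64-bit pointer, and SSE2 slices it into lanes. A small Python sketch, using struct purely to make the byte counts visible:

```python
import struct

# A 64-bit pointer-sized value is 8 bytes...
assert struct.calcsize("Q") == 8

# ...while a 128-bit xmm register holds 16 bytes, which SSE2 treats as e.g.:
assert struct.calcsize("2d") == 16    # 2 x f64 lanes
assert struct.calcsize("4i") == 16    # 4 x i32 lanes
assert struct.calcsize("16b") == 16   # 16 x i8 lanes

# x86-64 also doubles the register count: xmm0-xmm15 vs xmm0-xmm7 in 32-bit mode.
print("one xmm register = two pointer widths")
```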


  • graphics, video, neural-net acceleration.

    All three are kinda at least half-covered by the vector instructions, which absolutely and utterly kill any BLAS workload dead. 3d workloads use fancy indexing schemes for texture mapping that aren’t included; for video I guess you’d want some special ALU sauce for wavelets or whatever (I don’t know the first thing about codecs); neural nets should run fine as they are, provided you have a GPU-like memory architecture – the vector extension certainly has gather/scatter opcodes. Oh, and you’d want reduced precision, but that’s in the pipeline.
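    Gather/scatter is just lane-wise indexed loads and stores; in plain scalar Python terms (nothing RISC-V-specific here, list indices stand in for memory addresses):

```python
# Gather: load from computed indices into a dense "vector register".
table = [10.0, 20.0, 30.0, 40.0]
idx = [3, 0, 2]
gathered = [table[i] for i in idx]

# Scatter: write a dense vector back out to computed indices.
dest = [0.0] * len(table)
for i, v in zip(idx, gathered):
    dest[i] = v

print(gathered)  # [40.0, 10.0, 30.0]
print(dest)      # [10.0, 0.0, 30.0, 40.0]
```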

    Especially with stuff like NNs, though, the microarchitecture is going to matter a lot. Even if, say, a convolution kernel from one manufacturer uses instructions a chip from another manufacturer understands, it’s probably not going to perform at an optimal level.

    VPUs, AFAIU, are usually architected like DSPs: a bunch of ALUs stitched together with a VLIW instruction encoder, very much not intended to run code that is in any way general-purpose, because the only thing it’ll ever run is hand-written assembly anyway. Can’t find the numbers right now, but IIRC my rk3399 comes with a VPU that out-flops the six arm cores and the Mali GPU combined – but it’s also hopeless to use for anything that can’t be streamed linearly from and to memory.

    Graphics is by far the most interesting one in my view. That is, it’s a lot of general-purpose stuff (for GPGPU values of “general purpose”) with only a couple of bits and pieces being domain-specific.