(inb4 ethernet over HDMI: There is no implementation of the spec in the wild).
How about Thunderbolt? This looks like macOS, and while I’m not 100% sure if they utilize HDMI ports anymore, they certainly use Thunderbolt.
I know the feeling, I just had a week off and returned to work on Friday a couple of days ago.
That’s quite unfortunate to hear. I use Bitwarden along with Gboard and very rarely run into issues - I believe most password managers offer a quick settings toggle that you can add to your notification shade, which might help you work around this. From what I know, though, these toggles generally rely on the Accessibility framework to function, so results will heavily depend on your password manager - and it grants those apps far more access than the built-in autofill framework does.
Conversely, I remember Bitwarden’s autofill support on iOS being quirky when I last used it (which, to be fair, was a while ago - I’m sure it’s improved since then). IIRC it pretty much always worked in Safari (and Safari web views within apps), but the actual applications themselves wouldn’t always give me the autofill prompt.
For me, though, regardless of the platform it’s still far more worthwhile to use a password manager with unique per-site passwords than to reuse a single password (or even a handful) across sites. I hope autofill support improves for those it doesn’t work well for.
Oh wow, I didn’t expect another release so quickly! Props to the COSMIC team! I can’t recall where the roadmap for the features and their targeted releases went, but I hope we can get Night Light/Blue Light filtering soon.
I also did not know they had a Mastodon account, thanks for the shout so that I could give 'em a follow.
It depends on who you’re referring to as a casual user. My mother, for example, would certainly have a hard time with it: first figuring out the key to bring up the boot menu (and being faced with a scary dialog she’s never seen), then selecting the right device, then likely being dropped into GRUB, which would also look scary to her - by then she’d be overwhelmed before even getting to the install portion.
I’d recommend using ROCm through a Distrobox container; personally I use this Distrobox container file and it has suited all of my needs with Stable Diffusion so far.
That is, if you’re still interested in it - I could totally understand writing it off after what happened 😅
This is fantastic, congratulations!
I usually just get by with Alacritty and Zellij; they pair pretty well together.
I personally use Sleep as Android, which comes with a bunch of options to help ensure you’ve actually woken up. I utilize the “captcha” option: when I go to turn off the alarm, it displays a screen full of sheep, all of them but one asleep - you have to tap the one that is “awake” in order to dismiss the alarm. I guess the process wakes up my brain just enough so that I don’t go back to sleep, whereas with a regular alarm that has just a simple dismiss button I’ll absolutely either hit dismiss or one of the volume buttons to turn off the alarm before I’ve fully woken up.
I also have it set to buzz on my watch for 90 seconds before playing a sound on my phone (which escalates in volume) - I’ve not had a problem waking up with this in the years that I’ve been using it.
There are other options too, such as answering math questions, scanning a QR code, pressing your phone to an NFC tag, heavily shaking the phone, one called “Say cheese!” that makes you smile as hard as you can and uses the camera to detect it, and one where you have to “laugh out loud”.
Hmm, gotcha. I just tried out a fresh copy of text-gen-webui and it seems like the latest version is borked with ROCm (I get a “CUDA error: invalid device function” error).
My next recommendation then would be LM Studio, which to my knowledge can still expose an OpenAI-compatible API endpoint to be used in SillyTavern. I’ve used it in the past and didn’t even need to run it within Distrobox (I have all of the ROCm stuff installed locally, but I generally run most of the AI stuff in Distrobox since it tends to require an older version of Python than Arch is currently using). It seems they’ve recently started supporting running GGUF models via Vulkan, which I assume probably doesn’t require the ROCm stack to be installed at all?
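If you go that route, here’s a minimal sketch of talking to an OpenAI-compatible endpoint from Python - the host, port, and model name are assumptions (LM Studio’s local server has defaulted to port 1234, but check what your instance actually reports):

```python
import json
import urllib.request

# Assumed local endpoint - adjust host/port to match your LM Studio server.
API_URL = "http://localhost:1234/v1/chat/completions"

def build_chat_request(prompt, model="local-model"):
    """Build an OpenAI-style chat completion payload."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "temperature": 0.7,
    }

def ask(prompt):
    """Send the prompt to the local server (requires it to be running)."""
    req = urllib.request.Request(
        API_URL,
        data=json.dumps(build_chat_request(prompt)).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["choices"][0]["message"]["content"]
```

SillyTavern would just point at the same URL as a “custom OpenAI-compatible” backend, so you don’t need any of this yourself - it’s only to show what’s happening under the hood.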
Might be worth a shot, I just downloaded the latest version (the UI has definitely changed a bit since I last used it) and just grabbed a copy of the Gemma model and ran it, and it seemed to work without an issue for me directly on the host.
The advanced configuration settings no longer seem to directly mention GPU acceleration like they used to; however, I can currently see it utilizing GPU resources in nvtop, and the speed it was generating at (83 tokens a second in my screenshot) couldn’t possibly have been done on the CPU, so it seems to be fine on my side.
Yeah, I’m definitely not a fan of how AMD handles ROCm - there are so many weird cases of “well, this card should work with ROCm, but… [insert some weird quirk you have to do, like the one I mentioned, or what you’ve run into]”.
Userspace/consumer-side I enjoy AMD, but I fully understand why a lot of devs don’t make use of ROCm and why Nvidia has such a tight hold on the GPU-compute world with CUDA.
Ah, strange. I don’t suppose you specifically need a Fedora container? If not, I’ve been using this Ubuntu-based Distrobox container recipe for anything that requires ROCm and it has worked flawlessly for me.
If that still doesn’t work (I haven’t actually tried out KoboldCpp yet), and you’re willing to try something other than KoboldCpp, then I’d recommend the text-generation-webui project, which supports a wide array of model types, including the GGUF format that KoboldCpp utilizes. Then if you really want to get deep into it, you can even pair it with SillyTavern (it’s purely a frontend for a bunch of different LLM backends; text-generation-webui is one of the supported ones)!
What card do you use? I have a 6700 XT, and getting anything with ROCm running requires that I pass the HSA_OVERRIDE_GFX_VERSION=10.3.0 environment variable to the related process; otherwise it just refuses to run properly. I wonder if it might be something similar for you too?
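In a shell that’s just prefixing the command (HSA_OVERRIDE_GFX_VERSION=10.3.0 before whatever you launch), but if you’re starting the process from a script, a minimal sketch looks like this - “server.py” is only a placeholder for whatever ROCm-backed process you actually run:

```python
import os

# Copy the current environment and add the override.
# 10.3.0 makes ROCm treat the card as gfx1030; the 6700 XT actually
# reports gfx1031, which ROCm doesn't officially support.
env = os.environ.copy()
env["HSA_OVERRIDE_GFX_VERSION"] = "10.3.0"

# Then launch the process with the modified environment, e.g.:
# import subprocess
# subprocess.run(["python", "server.py"], env=env)  # "server.py" is hypothetical
```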
I did the same move for similar reasons! Although I still keep Windows around on another SSD - and even the Windows Nvidia drivers were being funky for me.
Nvidia shares a lot of logic between their Windows and Linux driver as far as I’m aware, so I suppose it makes sense.
IIRC this was in regards to Microsoft wanting to close access to the kernel, while also still wanting to use kernel-level APIs for their security suite - which does come down to anticompetitive practices.
However, if Microsoft did not offer separate products that used kernel-level APIs, then in theory it would not have this same issue, which I assume is how Apple gets away with it. But I am not a lawyer, so it’s just speculation on my part.
Good god, I was finally prescribed Ambien for the first time recently, and I definitely now realize why it has the reputation that it does.
If they’re using Fedora, then it is highly likely they’re using GRUB - last time I checked, you have to go very much out of your way to use systemd-boot on Fedora.
Your theme looks great! I’d definitely love to hear when the new one comes out!
I absolutely love Moshidon!
Primarily I use Arch on my desktop (and, by proxy, my Steam Deck, which runs SteamOS), which is what I’ve landed on after a ton of distro hopping. The idea of Atomic distros catches my eye, but in their present state there are too many steps needed to make deeper changes (for example, installing a kernel module). That said, I quite like SteamOS on my Deck since I know it will always be in a “consistent” state.
On servers I run a mix of Rocky Linux and Debian.