🅸 🅰🅼 🆃🅷🅴 🅻🅰🆆.
𝕽𝖚𝖆𝖎𝖉𝖍𝖗𝖎𝖌𝖍 𝖋𝖊𝖆𝖙𝖍𝖊𝖗𝖘𝖙𝖔𝖓𝖊𝖍𝖆𝖚𝖌𝖍
How could you tell?
But, seriously, those blue tongues are so cool!
Plus, they’re just darned contrarian by nature.
Already done.
I mean, you have to use it to get software; you need it if you’re submitting patches to other people’s software; and I have inherited maintenance of a popular project that would just confuse a ton of people, including several distros, if I moved it. But I never create projects on github anymore. Sourcehut has been great.
All of the silos are in rural areas; those are mostly known and definitely first-strike targets. Cities need very few nukes to take out individually. Nowhere will anyone be rebuilding from the ashes. If the war is limited and nuclear winter doesn’t make the entire planet uninhabitable, the only places with a chance of surviving are the undeveloped countries. No developed country will be habitable.
Nuclear fallout is a bitch.
I haven’t tried it yet, and I haven’t had a reason to look into it. My experience with Fi was that you pay $10 per GB - it didn’t come out of your normal bank - plus per-minute charges. When I was traveling, I used my company phone, or, if on vacation, data only, with as much up-front caching at the hotel as I could manage. I really don’t like surprise bill sizes.
But to be honest, I haven’t tried Mint internationally, so I can’t say.
Not so bad. I use gmail as a backup for some accounts in case something happens to my VPS or domain, and my Amazon account is still linked to it out of laziness, but otherwise I never use it.
Oh. Except that I have an Android phone, and that’s linked to my gmail, although I don’t use any Google apps or services beyond Play. So I suppose my phone would stop working. Everything’s backed up, though, so maybe it’d be a good thing; maybe it’d motivate me to pull the trigger on a Light Phone. I kinda want a Minimal Phone because my F&F uses Jami, but that’d still be an Android phone, so it wouldn’t work either.
Fi isn’t that great. We were on Fi for years; I switched to Mint, my wife stayed on Fi until I was sure it was going to work. So far, I pay less for more, no gotchas.
It was amazing when it first came out; now it has a lot of competition that beats it.
Man, I wish. I love my Fujifilm, but I hate the damned batteries.
Yah, you’re right. Like I said, when you say “Metaverse,” most people (on the street) are going to think of Meta’s. I doubt most people, even in developed countries, remember Sony’s failed VR world.
Is there another networked VR world that is anywhere near as big as Meta’s today? With nearly as many users (even with as much of a ghost town as it purportedly is)?
I think you were talking about a hypothetical metaverse, whereas I was thinking about the only one that I know that has any traction - tenuous though it may be - at all, which is Meta’s.
In the books, yes. It didn’t exist IRL, and as poorly as it was done, FB’s metaverse was (is?) a real product.
Facebook/Meta has never had an original idea; I’m not trying to give them credit for anything. There were other VR “worlds” before FB’s (Sony’s, for example, which was also a failure).
I just found out that Steve Jackson Games actually owns the trademark to the name “Metaverse.” I’ll bet that drove Zuck nuts.
Yeah, I use systemd for the self-host stuff, but you should be able to use docker-compose files with podman-compose with no, or only minor, changes. Theoretically. If you’re comfortable with compose, you may have more luck. I didn’t have a lot of experience with docker-compose, so when there are hiccups I tend to just give up and do it manually, because it works just fine that way, too, and it’s easier (for me).
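Just as a sketch (the compose file and services here are hypothetical, and podman-compose’s coverage of the compose spec varies by version), the flow is basically the same as with docker:

```sh
# reuse the existing docker-compose.yml unchanged, then:
podman-compose up -d    # roughly equivalent to `docker compose up -d`
podman-compose ps       # list the services it started
podman-compose down     # tear everything down
```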
Well, yes. Of course you’re right that “metaverse” predates Facebook. They’ve successfully co-opted it by now, though; Meta is what the average person thinks of when you say “metaverse.” Stephenson’s was also fictional, unless you’re really generous and use “metaverse” as a synonym for “the internet.”
Only, NFTs and Crypto are relatively accessible; anyone can get in on the game. The Metaverse is a monopoly.
The bubbles are still going, BTW. Bitcoin prices are currently higher than they have ever been, thanks to America re-electing the Fascist Orangutan.
This is great additional information, much of which I didn’t know!
I’m doing the backing-up-twice thing; it’d probably be better if I backed up once and rsync’d - it’d be less computationally intensive and save disk space used by multiple restic caches. OTOH, it’d also have more moving parts and be harder to manage, and IME things that I touch rarely need to be as simple as possible because I forget how to use them in between uses.
Anyway, great response!
I started with rootless podman when I set up All My Things, and I have never had an issue with either maintaining or running it. Most Docker instructions are transposable, except that podman doesn’t assume everything lives on dockerhub, so you always have to specify the registry host. I’ve run into a couple of edge cases where arguments are not 1:1 and I’ve had to dig to figure out what the argument is on podman. I don’t know if I’m actually more secure, but I feel more secure, and I really like not having the docker service running as root in the background. All in all, I think my experience with rootless podman has been better than my experience with docker, but at this point, I’ve had far more experience with podman.
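For example (the images are just illustrative, and you can also configure default search registries in registries.conf instead):

```sh
# docker quietly expands this to docker.io/library/nginx
docker pull nginx

# podman wants the registry spelled out
podman pull docker.io/library/nginx
podman pull quay.io/prometheus/node-exporter
```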
Podman-compose gives me indigestion, but docker-compose didn’t exist or wasn’t yet common back when I used docker; and by the time I was setting up a homelab, I’d already settled on podman. So I just don’t use it most of the time, and wire things up by hand when necessary. Again, I don’t know whether that’s just me, or if podman-compose is more flaky than docker-compose. Podman-compose is certainly much younger and less battle-tested. So is podman but, as I said, I’ve been happy with it.
I really like running containers as separate users without that daemon - I can’t even remember what about the daemon was causing me grief; I think it may have been the fact that it was always running and consuming resources, even when I wasn’t running a container, which isn’t a consideration for a homelab. However, I’d rather deeply know one tool than kind of know two that do the same thing, and since I run containers in several different situations, using podman everywhere allows me to exploit the intimacy I wouldn’t have if I were using docker in some places and podman in others.
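If it helps, this is roughly what I mean by “separate users without the daemon” - the user and service names are made up, and the details differ between distros:

```sh
# one unprivileged user per service
sudo useradd --create-home svc-web
sudo loginctl enable-linger svc-web   # let its user services run without an active login

# as that user: run the container rootless, then wrap it in a user-level systemd unit
podman run -d --name web -p 8080:80 docker.io/library/nginx:alpine
mkdir -p ~/.config/systemd/user
podman generate systemd --new --name web > ~/.config/systemd/user/web.service
systemctl --user daemon-reload
systemctl --user enable --now web.service
```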
2¢
I have no opinion about rsync.net. I’d check which services restic supports; there are several, and if it supports rsync.net and that’s what you want to use, you’re golden. Or, use another backup tool that has encryption-by-default and does support rsync.net - there are a couple of options.
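FWIW, I believe restic can talk to anything it can reach over SFTP, which should cover rsync.net, but double-check their docs; the host and path here are placeholders:

```sh
# restic's sftp backend - the repository lives on the remote host
restic -r sftp:user@your-storage-host.example.net:backups/home init
restic -r sftp:user@your-storage-host.example.net:backups/home backup ~/documents
```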
I would just never store any data that wasn’t meant for public consumption unencrypted on someone else’s servers. I make an exception for my VPS, but that’s only because I’m more paranoid about exposing my LAN than putting my email on a VPS.
restic, and other backup tools, are generally not always on. You run them; they back up. If you run them only once a month, that’s how often they run. The remote mounting is just a nice feature when you want to grab a single file from one of the backups.
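For instance (paths and the mount point are illustrative; the mount feature needs FUSE, so Linux and, I think, macOS):

```sh
# run a backup on demand - nothing stays resident afterwards
restic -r /srv/backups/repo backup ~/documents

# later, browse old snapshots like a filesystem to grab one file
mkdir -p /mnt/restic
restic -r /srv/backups/repo mount /mnt/restic
# (from another terminal, while the mount is running)
cp /mnt/restic/snapshots/latest/home/me/documents/notes.txt ~/notes.txt
```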
What you’re describing is a classic backup use-case. I’m recommending the easiest, cheapest, most reliable offsite solution I’ve used. restic has been around for years, has a lot of users and a lot of eyeballs on it, and it’s OSS. There are even GUIs for it, if you’re not comfortable with the CLI. B2 is generally well-regarded, is fairly easy to figure out, and has also been around for ages. Together, they make a solid combo. I also back up with restic to a local disk and use that for accessing history - B2 is just, as you say, in case of a fire, or theft, I suppose.
I wouldn’t.
Use a proper backup tool for this, like restic. BackBlaze has reasonable rates, especially if you’re mostly write-only, and restic has built-in support for B2 and encrypts everything by default. It also supports compression, but you won’t get much out of that on media files. restic is also cross-platform and a single executable, so you can throw binaries for OSX, Linux, and Windows on a USB stick and know you can get to your backups from anywhere. It also allows you to mount a remote repository like a filesystem (on Linux, at least), and browse a backup and get at individual files without having to restore everything. It’s super handy if you screw up a single file or directory.
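A rough sketch of the B2 route (the bucket name, paths, and credentials are placeholders - check restic’s docs for the current backend syntax):

```sh
# credentials for the B2 backend, from your Backblaze account
export B2_ACCOUNT_ID='...keyID...'
export B2_ACCOUNT_KEY='...applicationKey...'
export RESTIC_PASSWORD_FILE=~/.config/restic/password   # everything in the repo is encrypted with this

# one-time repository setup, then periodic backups
restic -r b2:my-backup-bucket:media init
restic -r b2:my-backup-bucket:media backup /srv/media --compression off   # media barely compresses anyway
restic -r b2:my-backup-bucket:media snapshots
```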
Location services in Android are in-phone, and they’re definitely accurate and reporting to Google. I only clarified that your cell provider probably can’t locate you using triangulation via your cell signal. Turn data off, and you’re fine; otherwise, Google is tracking you - and from what I’ve read, even if you have location services turned off.
They can’t, tho. There are two reasons for this.
Geolocating with cell towers requires trilateration, and needs special hardware on the cell towers. Companies used to install this hardware for emergency services, but stopped doing so as soon as they legally could, because it’s very expensive. Cell towers can’t do triangulation by themselves as it requires even more expensive hardware to measure angles; trilateration doesn’t work without special equipment because wave propagation delays between the cellular antenna and the computers recording the signal are big enough to utterly throw off any estimate.
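To put back-of-the-envelope numbers on that (mine, not from any equipment spec): radio propagates at roughly 3×10⁸ m/s, or about 300 m per microsecond, so every microsecond of unaccounted delay between the antenna and whatever timestamps the signal shifts the range estimate by about 300 m. Getting a fix down to tens of meters means accounting for delays at the tens-of-nanoseconds level, which is exactly what that special hardware is for.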
An additional factor working against trilateration (or even triangulation, in the rural cases where they did sometimes install triangulation antenna arrays on the towers) is that, since the UMTS standard, cell chips work really hard to minimize their radio signal strength. They find the closest antenna and then reduce their power until they can just barely talk to the tower; and except in certain cases they only talk to one tower at a time. This means that, at any given point, only one tower is responsible for handling traffic for the phone, and for triangulation you need three. In addition to saving battery power, it saves the cell companies money, because of traffic congestion: a single tower can only handle so much traffic, and they have to put in more antennas and computers if the mobile density gets too high.
The reason phones can use cellular signal to improve accuracy is that each phone can do its own triangulation, although it’s still not great and can be impossible because of power attenuation (being able to see only one tower - or maybe two - at a time); this is why Google and Apple use WiFi signals to improve accuracy, and why in-phone triangulation isn’t good enough: in any sufficiently dense urban or suburban environment, the combined information from all the WiFi routers the phone can see, and the cell towers it can hear, can be enough to give a good, accurate position without having to turn on the GPS chip, obtain a satellite fix (which may be impossible indoors), and suck down power. But this is all done inside and from the phone - this isn’t something cell carriers can do themselves most of the time. Your phone has to send its location out somewhere.
TL;DR: Cell carriers usually can’t locate you with any real accuracy without the help of your phone actively reporting its calculated location. This is largely because it’s very expensive for carriers to install the hardware needed to get accuracy better than hundreds of meters; they are loath to spend that money, and the legislation requiring them to do so no longer exists, or is no longer enforced.
Source: me. I worked for several years in a company that made all of the expensive equipment - hardware and software - and sold it to The Big Three carriers in the US. We also paid lobbyists to ensure that there were laws requiring cell providers to be able to locate phones for emergency services. We sent a bunch of our people and equipment to NYC on 9/11 and helped locate phones. I have no doubt law enforcement also used the capability, but that was between the cops and the cell providers. I know companies stopped doing this because we owned all of the patents on the technology and ruthlessly and successfully sued the only one or two competitors in the market, and yet we were still going out of business at the end as, one by one, cell companies found ways to argue their way out of buying, installing, and maintaining all of this equipment. In the end, the competitors we couldn’t beat were Google and Apple, and the cell phones themselves.
Until January. Then that will all stop.