I have a 7950X, a pile of RAM, and an unfairly expensive RTX 4000-series GPU. The cursor occasionally hitches for ~400ms when doing things like opening Task Manager or resuming from the lock screen, so that checks out, unfortunately.
With regard to my examples, WEI provides full confidence and stability in identifying the browser.
Relying on detecting browsers by differentiating between their features and quirks requires maintaining a large suite of checks, some of which may become incorrect as browsers change over time. It’s a maintenance burden, to say the least.
In other posts, I’ve tried to point out how some of the articles and comments around WEI are more speculative than factual and received downvotes and accusations of boot-licking for it. Welcome to the club, I guess.
The speculation isn’t baseless, but I’m concerned about the lack of accurate information about WEI in its current form. If the majority of people believe WEI is immediately capable of enforcing web page integrity, share that incorrect fact around, and incite others, it’s going to create a very good excuse for dismissing all dissenting feedback on WEI as FUD. The first post linking to the GitHub repository brought in so many pissed-off or uninformed people that the authors of the proposal locked the repo issues, preventing anyone else from voicing their concerns or providing examples of how implementing the specification could have unintended or negative consequences.
Furthermore, by highlighting only the DRM and anti-adblock aspects of WEI, we fail to give proper attention to many of the other valid concerns, like:
I very well could be wrong, but I think our (the public) opinions would have held more weight if they were presented in a rational, informed, and objective manner. Talking to software engineers as people generally goes down better than treating them like emotionless cogs in the corporate machine, you know?
Firefox will probably survive if they bow and add WEI support.
I can’t imagine Google, Microsoft, and Apple opening themselves up to further monopolization scrutiny by trying to keep attestation restricted to their own browsers on their own operating systems.
Self-built or community forks are probably screwed, though.
And here’s a concern about the decentralized-but-still-centralized nature of attesters:
From my understanding, attesting is conceptually similar to how the SSL/TLS infrastructure currently works:
Each ultimately-trusted attester has their own key pair (e.g. root certificate) for signing.
Some non-profit group or corporation collects all the public keys of these attesters and bundles them together.
The requesting party (web browser for TLS, web server for WEI) checks the signature sent by the other party against the public keys in the requesting party’s bundle. If it matches one of them, the other party is trusted. If it doesn’t, they are not trusted.
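To make that last step concrete, here’s a minimal sketch of bundle-based verification in TypeScript using Node’s crypto module; the bundle contents and function names are hypothetical:

```typescript
import { verify } from "node:crypto";

// Hypothetical bundle: the public keys of every attester we trust,
// as collected and distributed by some bundle maintainer.
const trustedAttesterKeys: string[] = [/* PEM-encoded public keys */];

// The requesting party checks the signature against each key in its
// bundle; a single match means the other party is trusted.
function isTrusted(payload: Buffer, signature: Buffer): boolean {
  return trustedAttesterKeys.some((publicKeyPem) =>
    verify("sha256", payload, publicKeyPem, signature),
  );
}
```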
This works for TLS because we have a ton of root certificates, intermediate certificates, and signing authorities. If CA Foo is prejudiced against you or your domain name, you can always go to another of the hundreds of CAs.
For WEI, there isn’t such an infrastructure in place. It’s likely that we’ll have these attesters to start with:
But hey, maybe we’ll have some intermediate attesters as well:
Even with that list, though, it doesn’t bode well for FOSS. Who’s going to attest to the various browser forks, or to browsers running on operating systems that aren’t backed by corporations?
Furthermore, if this is meant to verify the integrity of browser environments, what is that going to mean for devices that don’t support Secure Boot? Will they be considered unverified because the OS can’t ensure it wasn’t tampered with by the bootloader?
Adding another issue to the pile:
Even if it isn’t the intent of the spec, it’s dangerous to allow websites to differentiate between unverified browsers, browsers attested to by party A, and browsers attested to by party B. Providing a mechanism for cryptographic verification opens the door for specific browsers to be enforced for websites.
For a corporate example:
Suppose we have ExampleTechFirm, a huge investor in a private AI company, ShutAI. ExampleTechFirm happens to also make a web browser, Sledge. ExampleTechFirm could exert influence on ShutAI so that ShutAI adds rate limiting to all browsers that aren’t verified with ShutAI as the attester. Now, anyone who isn’t using Sledge is being given a degraded experience. Because attesting uses cryptographic signatures, you can’t bypass this user-hostile quality of service mechanism; you have to install Sledge.
For a political example:
Consider that I’m General Aladeen, the leader of the country Wadiya. I want to spy on my citizens and know what all of them are doing on their computers. I don’t want to start a revolt by making it illegal to own a computer without my spyware EyeOfAladeen, nor do I have the resources to do that.
Instead, I enact a law that makes it illegal for companies to operate in Wadiya unless their web services refuse access to Wadiyan citizens who aren’t using a browser attested to by the “free, non-profit” Wadiyan Web Agency. Next, I have my scientists create and release renamed versions of Chromium and Firefox with EyeOfAladeen bundled into them. Those are the only two browsers attested by the Wadiyan Web Agency.
Now, all my citizens are being encouraged to unknowingly install spyware. Goal achieved!
I hope you were being sarcastic, because, ideally, nobody implements this.
Good article. Not clickbait/ragebait, and it explains the specification simply and succinctly, while also demonstrating why it’s dangerous for the open web.
But the real question is, can we change them?
Imagine this:
I’m not a lawyer, nor do I have the full context of the legislation you’re quoting, but my interpretation of that paragraph is that it only applies to aircraft that are carrying passengers.
. . . in the air space in possession of another, by a person who is traveling in an aircraft, is privileged . . .
You’re the one who does this for a hobby, though. I’m sure that you know the laws more than I do :)
Circular dependencies can be removed in almost every case by splitting out a large module into smaller ones and adding an interface or two.
In your bot example, you have a circular dependency where (for example) the bot needs to read messages, then run a command from a module, which then needs to send messages back.
```
 v-----------\
bot           command_foo
 \-----------^
```
This can be solved by making a command conform to an interface, and shifting the responsibility of registering commands to the code that creates the bot instance.
```
main <-------\
 ^            \
 |             \
bot ---> command_foo
```
The `bot` module would expose the `Bot` class and a `Command` interface. The `command_foo` module would import `Bot` and export a class implementing `Command`. The `main` function would import `Bot` and `CommandFoo`, and create an instance of the bot with `CommandFoo` registered:
```typescript
// bot module
export interface Command {
  onRegister(bot: Bot, command: string): void;
  onCommand(user: User, message: string): void;
}

// command_foo module
import {Bot, Command, User} from "bot";

export class CommandFoo implements Command {
  private bot!: Bot;

  onRegister(bot: Bot, command: string): void {
    this.bot = bot;
  }

  onCommand(user: User, message: string): void {
    this.bot.replyTo(user, "Bar.");
  }
}

// main
import {Bot} from "bot";
import {CommandFoo} from "command_foo";

let bot = new Bot();
bot.registerCommand("/foo", new CommandFoo());
bot.start();
```
It’s a few more lines of code, but it has no circular dependencies, reduced coupling, and more flexibility. It’s easier to write unit tests for, and users are free to extend it with whatever commands they want, without needing to modify the `bot` module to add them.
A couple years back, I had some fun proof-of-concepting the terrible UX of preventing password managers or pasting passwords.
It can get so much worse than just an `alert()` when right-clicking.
A small note: It doesn’t work with mobile virtual keyboards, since they don’t send keystrokes. Maybe that’s a bug, or maybe it’s a security feature ;)
But yeah, best tried with a laptop or desktop computer.
How it detects password managers:

- Unexpected CSS or DOM changes to the `input` element, such as an icon overlay for LastPass.
- Paste event listening.
- Right clicking.
- Detecting if more than one character is inserted or deleted at a time (a sketch of this one follows below).
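To give a rough idea of that last heuristic, here’s a minimal browser-side sketch; the selector and log messages are made up for illustration:

```typescript
// Hypothetical detection sketch: typing changes the value one character
// at a time, so a larger jump suggests autofill or a paste.
const input = document.querySelector<HTMLInputElement>("#password")!;
let lastLength = 0;

input.addEventListener("input", () => {
  const delta = Math.abs(input.value.length - lastLength);
  lastLength = input.value.length;
  if (delta > 1) {
    console.warn("More than one character changed; probably not typed.");
  }
});

// Paste events can be detected (or blocked) directly:
input.addEventListener("paste", () => {
  console.warn("Paste detected.");
});
```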
In hindsight, it could be even worse by using `Object.defineProperty` to check if the `value` property is manipulated or if `setAttribute` is called with the `value` attribute.
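A minimal sketch of that idea; it assumes the password manager writes through the prototype’s `value` setter (real keystrokes don’t invoke it):

```typescript
// Hypothetical sketch: wrap the native `value` setter so that
// programmatic writes (e.g. an extension filling the field) are
// observable. Real keystrokes update internal state and don't call it.
const proto = HTMLInputElement.prototype;
const descriptor = Object.getOwnPropertyDescriptor(proto, "value")!;

Object.defineProperty(proto, "value", {
  get: descriptor.get,
  set(this: HTMLInputElement, newValue: string) {
    console.warn("value set programmatically:", newValue);
    descriptor.set!.call(this, newValue);
  },
});
```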
This may be an unpopular opinion, but I like some of the ideas behind functional programming.
An excellent example would be where you have a stream of data that you need to process. With streams, filters, maps, and (to a lesser extent) reduction functions, you’re encouraged to write maintainable code. As long as everything isn’t horribly coupled and lambdas are replaced with named functions, you end up with a nicely readable pipeline that describes what happens at each stage. Having a bunch of smaller functions is great for unit testing, too!
But in Java… yeah, no. Java, the JVM, and Java bytecode are not optimized for that style of programming.
As far as the language itself goes, the lack of suffix functions hurts readability. If we have code to do some specific, common operation over streams, we’re stuck with nesting. For instance,
```java
var result = sortAndSumEveryNthValue(2,
        data.stream()
            .map(parseData)
            .filter(ParsedData::isValid)
            .map(ParsedData::getValue)
    )
    .map(value -> value / 2)
    ...
```
That would be much easier to read at a glance if we had a pipeline operator or something like Kotlin extension functions.
```java
var result = data.stream()
    .map(parseData)
    .filter(ParsedData::isValid)
    .map(ParsedData::getValue)
    .sortAndSumEveryNthValue(2) // suffix form
    .map(value -> value / 2)
    ...
```
Even JavaScript has a pipeline operator proposal to solve this kind of nesting problem.
And then we have the issues caused by the implementation of the language. Everything except primitives is an object, and only objects can be used as generic type arguments.
Lambda functions? Short-lived objects implementing some functional interface, generated at runtime.
Generics over a primitive type (e.g. `HashMap<Integer, String>`)? Short-lived boxed primitives that are automatically converted to and from the primitive type.
If I wanted my functional code to be as fast as writing everything in an imperative style, I would have to trust that the JIT performs appropriate optimizations. Unfortunately, I don’t. There’s a lot that needs to be optimized:
I’m sure some of those are implemented, but as far as benchmarks have shown, Streams are still slower in Java 17. That’s not to say that Java’s functional programming APIs should be avoided at all costs—that’s premature optimization. But in hot loops or places where performance is critical, they are not the optimal choice.
Outside of Java but still within the JVM ecosystem, Kotlin actually has the capability to inline functions passed to higher-order functions at compile time.
/rant
From what I can tell, that’s basically what this is trying to do. Some company can sign a source image, then other companies can sign the changes made to the image. You can see that the image was created by so-and-so and then manipulated by so-and-other-so, and if you trust them both, you can trust the authenticity of the image.
It’s basically `git` commit signing for images, but with the exclusionary characteristics of certificate signing (for their proposed trust model, at least; it could be used more like PGP, too).
I skimmed some of the specifications, and it appears to be voluntary. In a way, it’s similar to signing git commits: you create an image and choose to give provenance to (sign) it. If someone else edits the image, they can choose to keep the record going by signing the change with their identity. Different images can also be combined, and that would be noted down and signed as well.
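As a loose analogy only (this is not the real C2PA manifest format), each link in such a chain could look something like this, with every edit signed and pointing at a hash of the previous record:

```typescript
import { createHash, sign } from "node:crypto";
import type { KeyObject } from "node:crypto";

// Loose analogy only; field names are made up for illustration.
interface ProvenanceRecord {
  action: string;               // e.g. "created", "cropped", "composited"
  author: string;               // identity of the signer
  previousHash: string | null;  // links this record to the one before it
  signature: Buffer;
}

function signRecord(
  action: string,
  author: string,
  previous: ProvenanceRecord | null,
  privateKey: KeyObject,
): ProvenanceRecord {
  // Chain the records together by hashing the previous signature,
  // so no earlier step can be altered without breaking the chain.
  const previousHash = previous
    ? createHash("sha256").update(previous.signature).digest("hex")
    : null;
  const payload = Buffer.from(JSON.stringify({ action, author, previousHash }));
  return { action, author, previousHash, signature: sign("sha256", payload, privateKey) };
}
```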
So, suppose I see some image that claims to be an advertisement for “the world’s cheapest car”, a literal rectangle of sheet metal and wooden wheels. I could then inspect the image to try and figure out if that’s a legitimate product by BestCars Ltd, or if someone was trolling/memeing. It turns out that the image was signed by LegitimateAdCompany, Inc and combined signed assets from BestCars, Ltd and StockPhotos, LLC. Seeing that all of those are legitimate businesses, that the chain of provenance isn’t broken, and that BestCars is known to work with LegitimateAdCompany, I can be fairly confident that it’s not a meme photo.
Now, with that being said…
It doesn’t preclude scummy camera or phone manufacturers from generating identities unique to their customers and/or hardware and signing photos without the user’s consent. Thankfully, at least, it seems like you can just strip away all the provenance data by copy-pasting the raw pixel data into a new image using a program that doesn’t support it (Paint?).
All bets are off if you publish or upload the photo first, though: a perceptual hash lookup could just link the image back to the original one that does contain provenance data.
Yep! I ended up doing my entire co-op with them, and it meshed really well with my interest in creating developer-focused tooling and automation.
Unfortunately I didn’t have the time to make the necessary changes and get approval from legal to open-source it, but I spent a good few months creating a tool for validating constraints for deployments on a Kubernetes cluster. It basically lets the operations team specify rules to check deployments for footguns that affect the cluster health, and then can be run by the dev-ops teams locally or as a Kubernetes operator (a daemon service running on the cluster) that will spam a Slack channel if a team deploys something super dangerous.
The neat part was that the constraint-checking logic was extremely powerful, completely customizable, versioned, and used a declarative policy language instead of a scripting language. None of the rules were hard-coded into the binary, and teams could even write their own rules to help them avoid past deployment issues. It handled iterating over arbitrarily-sized lists, and could even access values across different files in the deployment to check complex constraints, like ensuring a value in one manifest didn’t exceed a value declared in some other manifest.
I’m not sure if a new tool has come along to fill the niche that mine did, but at the time, the others all had their own issues that failed to meet the needs I was trying to satisfy (e.g. hard-coded, used JavaScript, couldn’t handle loops, couldn’t check across file boundaries, etc.).
It’s probably one of the tools I’m most proud of, honestly. I just wish I had written the code better. I didn’t have much experience with Go at the time, and I really could have done a better job structuring the packages to have fewer layers of nested dependencies.
Back when I was in school, we had typing classes. I’m not sure if that’s because I’m younger than you and they assumed we had basic computer literacy, or older than you and they assumed we couldn’t type at all. In either case, we used Macs.
It wasn’t until university that we even had an option to use Linux on school computers, and that’s only because they have a big CS program. They’re also heavily locked-down Ubuntu instances that re-image the drive on boot, so it’s not like we could tinker much or learn how to install anything.
Unfortunately—at least in North America—you really have to go out of your way to learn how to do things in Linux. That’s just something most people don’t have the time for, and there’s not much incentive driving people to switch.
A small side note: I’m pretty thankful for Valve and the Steam Deck. I feel like it’s been doing a pretty good job teaching people how to approach Linux.
By going for a polished, console-like experience with game mode by default, it shows people that Linux isn’t a big, scary mish-mash of terminal windows and obscure FOSS programs without a consistent design language. And by also making it possible to enter a desktop environment and plug in a keyboard and mouse, people can explore a more conventional Linux graphical environment if they’re comfortable trying that.
Ah, that’s fair.
I’m having the opposite experience, unfortunately. I loved working at {co-op company} where I had a choice of developer environment (OS, IDE, and the permissions to freely install whatever software was needed without asking IT) and used Golang for most tasks.
The formal education has been nothing but stress and anxiety, though. Especially exams.
Did the formal education before the job ruin it for you, or did the job itself ruin it?
If that were the case, wouldn’t the mouse jump when the latest frame is presented? For me, it’s more that it just stays still until after Windows stops having a fuss.