Personal review:
A good recap of his previous writings and talks on the subject for the first third, but a bit long. Having followed them for the past year or two, I found my attention drifting a few times. I ended up more impressed by how much he has managed to condense his explanation of “enshittification”, from 45+ minutes down to around 15.
As soon as he started building off of that towards the core of his message for this talk, I was more or less glued to the screen. At first because it’s not exactly clear where he’s going, and there are (what felt like) many specific court rulings to keep up with. Thankfully, once he has laid enough groundwork, he gets straight to his point. I don’t want to spoil or otherwise lessen the performance he gives, so I won’t directly comment on what his point is in the body of this post - I think the comments are better suited for that anyway.
I found the rest pretty compelling. He walks the fine line between directionless discontent and overenthusiastic activist-with-a-plan as he doubles down on his narrative, calling back to the various bits of groundwork he laid before. Now that we’re “in” on the idea, what felt like stumbling around in the dark turns into an illuminating path through some of the specifics of the last twenty to forty years of power dynamics between tech bosses and their employees. The rousing call to action was also a great way to end and wrap it all up.
I’ve become very biased towards Cory Doctorow’s ideas, in part because they line up with a lot of the impressions I have from my few years working as a dev in a big-ish multinational tech company. This talk has done nothing to diminish that bias - on the contrary.
It’s like people who are stuck in traffic. They are frustrated, and so they wish for change. They wish for more lanes and more roads (and bigger cars, faster cars, more cars). The natural human reaction when something doesn’t work is: try the same thing, harder! It’s not to try something else.
I think we have all been in situations where we failed to push a door open, and so we angrily pushed again, harder, before easily pulling it open.
I see lots of people agreeing that there is a problem, as evidenced by the popularity of the term “enshittification”. But the reaction is to double down on the policies that got us here.
You can see that in AI threads here. People call for more intellectual property, more siloing of data. Of course, that won’t work, and Doctorow has explained why on several occasions. https://pluralistic.net/2023/02/09/ai-monkeys-paw/#bullied-schoolkids https://doctorow.medium.com/https-pluralistic-net-2024-05-13-spooky-action-at-a-close-up-invisible-hand-5c873636eb47
Other institutions that are apparently considered trustworthy also “side with AI companies”, in that they understand that fair use is in the interest of society. Libraries, including the Internet Archive, are one example. https://www.librarycopyrightalliance.org/wp-content/uploads/2023/06/AI-principles.pdf https://blog.archive.org/2023/11/02/internet-archive-submits-comments-on-copyright-and-artificial-intelligence/