I write ̶b̶u̶g̶s̶ features, show off my adorable standard-issue cat, and give a shit about people and stuff. I’m also @CoderKat.

  • 0 Posts
  • 38 Comments
Joined 1 year ago
Cake day: June 11th, 2023


  • I don’t follow what you’re saying. I love the shit out of many of my smaller communities. Reading TV subs after a new episode dropped was my favourite (and required a lot of active people). I wanted to discuss the Horizon DLC when I beat it the other day, but the Horizon sub here is super tiny. I tried to post on a generic gaming sub instead and did not get the discussion I wanted.

    Similarly, the Pokemon Go subs on Reddit were super detailed places to discuss the game, with in-depth analysis of every change, data mining of upcoming content, etc. Here, there are two subs that consist of just the sub creator trying to populate them. No actual discussion.

    It sucks and I miss those kinda communities.



  • The whole CSAM issue is why I’d never personally run an instance, nor any other kind of server that lets users upload content. I have no desire to deal with moderating that stuff, nor with the legal risk of it even existing on a server I control.

    While I’d like to hope that law enforcement would be reasonable and understand “oh, you’re just some small-time host, just delete that stuff and you’re good”, my opinion of law enforcement is in the gutter. I wouldn’t trust them not to throw the book at me if someone did upload illegal content (or if I didn’t handle it correctly). Safest to let someone else deal with that risk.

    And even if you’d ultimately win in court, just having to go to court can be ludicrously expensive and risks high-impact negative press.



  • Scammers have long used bots for text-based scams (though dumb ones). Phone calls are a lot harder, though. And there are also “pig butchering” scams, which are the long cons, most commonly fake relationships. I suspect bots would have a hard time stringing someone along for months the way human scammers manage to.

    I suspect scammers will have a harder time utilizing AI, though. For one thing, scammers are often not that technologically advanced. They can put together some basic scripts, but wiring up AI is harder. They could use an established AI service, but scamming would almost surely be against the provider’s ToS, so the model will likely try to filter scam attempts out.

    That said, it might just be a matter of time. Today, developing your own AI has a high barrier to entry, but in the future it’s likely to get a lot easier. And with enough advancements, we could see AI good enough that fooling someone for months becomes possible. Especially once AI gets good at generating video (long-con scams usually involve scammers video chatting with their victims).

    And honestly, most scams have a hundred red flags anyway. As long as the AI doesn’t outright say something like “as a large language model…”, you could probably convince a non-zero number of victims (and maybe even if the AI fucks up like that – I mean, somehow people get convinced the IRS takes app store gift cards, so clearly you don’t have to be that convincing).


  • TikTok is the absolute worst at irrational censorship. It’s a shame, because the site is immensely popular, which means it’s full of very interesting content. Yet this is far from the first unreasonable thing they’ve removed. It’s well known how TikTok users came up with alternative words to dodge terms likely to get their content taken down (e.g., “unalived” instead of “killed”).


  • Strongly agreed. I think a lot of commenters in this thread are getting derailed by their feelings towards Meta. This is truly a dumb, dumb law and it’s extremely embarrassing that it even passed.

    It’s not just Meta. No company wants to comply with this poorly thought out law, written by people who apparently have no idea how the internet works.

    I think most of the people in the comments cheering this on haven’t read the bill. It requires platforms to pay news sites just for linking to them, which is utterly insane. Linking to news sites is a win-win: Facebook or Google gets to show relevant content, and the news site gets readers. This bill is going to hurt Canadian news sites, because sites like Google and Facebook will simply avoid linking to them.



  • I also like to draw analogies to other age restrictions. If they’re allowed to drive a car, when car crashes are among the leading causes of death for teenagers, then how can they not be responsible enough to vote for their leaders?

    We also have no qualms about sentencing 16-year-olds as adults if they commit a bad enough crime. That strikes me as society admitting that 16-year-olds are perfectly capable of being responsible; we just normally give them a bit more leeway.

    And personally, I’ve met plenty of 16-year-olds who are better informed about politics than a number of adults I know.


  • Barriers are relative. Anything that makes signing up slightly harder will stop a large chunk of bots, since bots can’t adapt as easily as humans can. Plenty of very basic bots are in fact stopped by nothing more than requiring an email.

    But yeah, email verification is more about confirming that the user actually controls the address, so it’s safe to use for things like password resets (a minimal sketch of that flow is below). Without it, webmasters can get swamped with complaints from people locked out of their accounts because they signed up with the wrong email.

    In theory, you can go further and only allow email providers that have their own anti-bot mechanisms, but that list is difficult to maintain and it will always exclude some legitimate users.
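    To make the earlier point concrete, here’s a rough sketch in Python of what a signed, expiring verification token can look like. This is a generic illustration, not how Lemmy or any particular instance implements it; SECRET_KEY, issue_token, and verify_token are hypothetical names.

    ```python
    import hashlib
    import hmac
    import secrets
    import time

    # Sketch only: all names here are assumptions, not any real instance's code.
    SECRET_KEY = secrets.token_bytes(32)   # in practice, loaded from config
    TOKEN_TTL = 60 * 60 * 24               # verification links expire after a day

    def issue_token(email: str) -> str:
        """Create a signed, timestamped token tied to one address."""
        ts = str(int(time.time()))
        sig = hmac.new(SECRET_KEY, f"{email}:{ts}".encode(), hashlib.sha256).hexdigest()
        return f"{ts}:{sig}"

    def verify_token(email: str, token: str) -> bool:
        """Accept only an unexpired token whose signature matches this address."""
        try:
            ts, sig = token.split(":")
            expired = time.time() - int(ts) > TOKEN_TTL
        except ValueError:
            return False
        expected = hmac.new(SECRET_KEY, f"{email}:{ts}".encode(), hashlib.sha256).hexdigest()
        return not expired and hmac.compare_digest(sig, expected)
    ```

    The token goes out in the signup email, and only once verify_token succeeds should the address be trusted for password resets, because at that point the user has proven they control the inbox.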


  • I’m very skeptical that mCaptcha would actually work, beyond perhaps slowing bots down temporarily while it’s still niche. How expensive can you make the proof of work without hurting legitimate users? And how expensive does it need to be to discourage bots? Especially when purpose-built bots can do the kind of math we’re talking about (sketched below) in optimized software and hardware, while legitimate users can’t.
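    To illustrate the asymmetry, here’s a toy hash-based proof of work in Python. This is a generic sketch of the technique, not mCaptcha’s actual scheme: the client grinds for a nonce, while the server verifies with a single hash.

    ```python
    import hashlib
    import secrets
    import time

    def solve(challenge: str, difficulty: int) -> int:
        """Client side: brute-force a nonce so the hash starts with
        `difficulty` zero hex digits. Average cost grows ~16x per digit."""
        nonce = 0
        target = "0" * difficulty
        while not hashlib.sha256(f"{challenge}:{nonce}".encode()).hexdigest().startswith(target):
            nonce += 1
        return nonce

    def verify(challenge: str, nonce: int, difficulty: int) -> bool:
        """Server side: a single hash, no matter the difficulty."""
        digest = hashlib.sha256(f"{challenge}:{nonce}".encode()).hexdigest()
        return digest.startswith("0" * difficulty)

    challenge = secrets.token_hex(16)
    start = time.time()
    nonce = solve(challenge, 4)    # ~65k hashes on average
    print(f"found nonce {nonce} in {time.time() - start:.2f}s")
    assert verify(challenge, nonce, 4)
    ```

    The problem is that the same search runs orders of magnitude faster in optimized native code (or on a GPU) than in a phone’s browser, so any difficulty that bots actually feel is painful for legitimate users.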



  • Sometimes reporting technically covers the last one, but usually not. Not all subs have rules against bigotry, trolling, dog whistles, general assholery, etc. I strongly hold that downvoting needs to remain an option for dealing with these kinda things. It’s a way to show everyone that the comment isn’t acceptable.

    Plus, even when reporting is an option, it may not be fast enough. You can’t really automate removals, either, as people will abuse that.

    Arguably, for “disagree but acceptable”, the answer is to just not upvote. In a certain sense, that’s already a middle option.