This is every school surveillance software
Yeah, exactly
I don’t think the site admins would be appreciative of my suggestion :^)
In reality, forcing higher standards across the board raises the floor and can prevent enough bad behavior to be useful. Education and experience also encourage good behaviors. Basically, make it illegal to be a bigot, and let people learn why they don’t want to.
No need to manufacture that, we have a surplus inventory already
It’s finally reaching such widespread acceptance that 1. Actual bigots are getting concerned they can’t be bad people anymore and 2. Assorted people are getting tired of the discourse.
“It’s impossible. Let me list 4 exceptions though”
WOW WHO COULD HAVE GUESSED
Yeah, it’s much better at “creative” tasks (generation) than it is at providing accurate data. In general it will always be better at tasks that are “fuzzy”, that is, they don’t have a strict scale of success/failure, but are up to interpretation. They will also be better at tasks where the overall output matters more than the precise details. Generating images, text, etc. is a good fit.
I would expect “faster” to be a way
I find that a lot of discourse around AI is… “off”. Sensationalized, or simplified, or emotionally charged, or illogical, or simply based on a misunderstanding of how it actually works. I wish I had a rule of thumb to give you about what you can and can’t trust, but honestly I don’t have a good one; the best thing you can do is learn about how the technology actually works, and what it can and can’t do.
I feel like the real answer is, and has been for a long time, some sort of distributed moderation system. Any individual user can take moderation actions. These actions produce visible effects for themselves and for anyone who subscribes to their actions. Create bot users that auto-detect certain types of behavior (horrible stuff like CSAM or gore) and take actions against it. Auto-subscribe users to the moderation actions of the global bots and community leaders (mods/admins), and allow them to unsubscribe.
We’d probably still need some moderation actions to be absolute and global, though, like banning illegal content.
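To make the subscription idea above concrete, here's a minimal sketch in Python. All names (`Moderator`, `User`, `gore_bot`, the post IDs) are hypothetical illustrations, not an actual implementation: each moderator (human or bot) maintains a set of hidden posts, and a user's feed filters out anything hidden by a moderator they subscribe to.

```python
from dataclasses import dataclass, field

@dataclass
class Moderator:
    """A user or bot whose moderation actions others can subscribe to."""
    name: str
    hidden_posts: set = field(default_factory=set)

    def hide(self, post_id: str):
        self.hidden_posts.add(post_id)

@dataclass
class User:
    name: str
    # Moderators this user follows; subscribing/unsubscribing is just
    # adding/removing entries in this list.
    subscriptions: list = field(default_factory=list)

    def visible(self, posts):
        # A post is hidden for this user if ANY subscribed moderator hid it.
        hidden = set()
        for mod in self.subscriptions:
            hidden |= mod.hidden_posts
        return [p for p in posts if p not in hidden]

# A global bot that users are auto-subscribed to by default (hypothetical).
gore_bot = Moderator("gore_bot")
gore_bot.hide("post-123")

alice = User("alice", subscriptions=[gore_bot])
print(alice.visible(["post-123", "post-456"]))  # → ['post-456']
```

Unsubscribing is then just removing `gore_bot` from `alice.subscriptions`, after which `post-123` reappears for her; the truly global bans mentioned above would sit outside this opt-out layer.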
That’s how all distros work. They exist so that you don’t have to make changes yourself.
Some sort of “report as bot” --> required captcha pipeline would be useful
Ooh an article, thank you
Look it up
I know what model collapse is, it’s a fairly well-documented problem that we’re starting to run into. You’re not wrong, it’s just that the person you replied to was agreeing about this.
Someone said to try the creative side and so far, so good.
Nice! I’m glad you were able to find something useful to use it for.
“Artificial” doesn’t mean “fake”, it usually means “human made”
I’m generally familiar with “artificial” to mean “human-created”
Yeah, it’s pretty good at generating common documents like that
This is why there are none, but I still think it’s dumb. Parsers can’t see comments anyways.