We are now over 200 users and may see significantly more in the coming days and weeks.
Our instance is new, Lemmy is new to most of us, and there just isn’t the history needed to approach appropriate people and ask them to be moderators. So I’m taking applications instead.
It’s worth noting that moderation has not been an issue for us so far, but I’d like to be prepared.
There are two types of moderators I’m looking for. It will be easier to find the first than the second, so for the first type I’m keen to find people not in an NZ time zone, to cover times when NZ is normally asleep. The types I’m after are:
- Moderators who remove obviously disallowed content:
  - Trolling or posts obviously intended to cause harm
  - Animal abuse
  - CSAM
  - Doxing and violations of court-ordered name suppression
  - Any other content illegal in NZ
- Grey area moderators
Rather than just banning people who disagree, I want to foster a community where everyone who wants to participate in good faith can, regardless of their views. I want this to be a community where everyone feels safe to be themselves. These goals can sound contradictory, but as long as we attack ideas and not people, I believe we can find our spot somewhere along this path.
This means that I don’t want to ban users who use derogatory terms; I want that to be the start of a discussion about why those terms are hurtful. I don’t want to remove racist comments; I want them to be the start of a discussion about what is and isn’t acceptable in our community. If everyone acts in good faith, we should be able to have these discussions respectfully. If people start commenting in bad faith, then more traditional moderator interventions may be required (e.g. temporary or permanent bans, and potentially removal of content). Note that moderator actions are transparent and listed in the “Modlog” linked at the bottom of every page (this includes, to some extent, moderator actions on other instances).
So I am taking applications from people willing to facilitate the discussions for this second type of content. I recognise this is a much bigger burden than the first type, but I hope we will have some volunteers willing to give it a go.
If you want to volunteer, note that a requirement will be that you’re willing and able to join a private Matrix chat room for moderators and admins where discussion can happen. This will be helpful for type 1 moderators too, in case there is confusion over what is type 1 content.
Note that moderators can be assigned per community. If you’re keen to moderate just !wellington@lemmy.nz, for example, then please include this in your “application”.
No need for life stories, just comment on your interest and, if relevant, some history or background of your moderation activities. You can reply here or DM me if you don’t want to post publicly, either is fine.
We cannot have a good-faith discussion with nazis (or tankies); that’s a losing proposition from the outset. We do not need to have a discussion about why the N word is beyond the pale. We do not need to have a discussion about why genocide denial is wrong. Doing so just gives those ideas a platform.
The only response to those people that saves moderation energy for more productive activities is the ban hammer. Cut it off at the source.
I do not disagree. But I also think that many commonly used terms can stop people feeling safe and respected, and these should prompt a discussion.
I also don’t think we can know where the line is just yet, but I agree that discussions of extreme left or right viewpoints typically can’t be had in good faith, and we don’t want to give them a platform. We do have to be careful, because traditionally non-mainstream platforms attracted people who were kicked off reddit, not people who wanted to leave. However, overt nazism is not the only way to be racist, and often it can happen without thought.
I’m keen to find people willing to help me get this balance right - I am against banning anyone who disagrees with the opinions of the general consensus.
I’d question the extreme left vs extreme right approach - extreme left views tend to be “what if we had UBI” or “can communism work?”
The extreme right is arguing about people’s right to exist. They’re not equivalent.
I would not consider “what if we had UBI” to be extreme left.
Extreme left to me is authoritarian, Marxism-Leninism-style oppression. But I’ll admit the term is used to mean different things.
No problem with banning authoritarianism. Say no to the Holodomor.
Agreed, there’s plenty of grey area about what might be appropriate online. But there’s plenty that’s not appropriate anywhere. I feel a list of what’s not appropriate is the right way to go. Any time we find things in the grey area, that’s an opportunity for a discussion.
I’m against a list. I want a community where it’s obvious what is and isn’t allowed, without a list we have to point to. I just want to be open about what we are doing in terms of moderation, and let the community guide this.
Here’s an argument against listing rules: https://eev.ee/blog/2016/07/22/on-a-technicality/
I also suggest hopping over to https://beehaw.org and reading some pinned posts and sidebar content, they are taking a similar approach that has influenced me, and are growing much faster so we should be able to see there how well it scales.
I’ve read through some of those. Places like beehaw have much stronger entry criteria to vet things like that.
Look, ultimately it’s your instance, so you do you. But I don’t think allowing hate speech or Nazis is a viable place to start a discussion.
We will have stricter entry criteria over time. That will start when we start to have a problem.
I also expect to need to adjust the approach over time. But surely an instance of a few hundred users can act like adults (famous last words).
She’ll be right, mate.
One of my favourite cases from the site we’re all fleeing is a number of mechanical subreddits having to auto-remove a word that is a shortened version of “transmission”, to avoid triggering a ban on the user, and possibly the sub itself.
You can also use something similar to Cockney rhyming slang to get around these bans, such as calling transgender individuals “trains”.
I just don’t get why anyone would say “trains” when “trans” is a letter shorter and not considered offensive in the community, unless by context, in which case either “trains” or “trans” (or “transgender”, for that matter) would be used offensively.
I mean, unless you’re punching down on trans people, why would you not just go with “trans”? I can understand the frustration of automod bans for a common shortening of “transmission”, or a very British shortening of “cigarette” - but that’s a solid argument against trying to get automod to police “part-time slurs”, not an argument for tolerating their use as slurs to somehow “further discussion” - if it’s clear in context that it’s being used as a slur, then drop the banhammer. If it’s a matter of people feeling safe to contribute, those who aren’t trying to make other users feel unsafe should be prioritised.
Can’t keep everyone happy, and you have to draw the line somewhere, otherwise there will never be an end to the “they’re always changing the rules” argument. If people are mouthing off with slurs, they’re not here for debate, they’re here to shut down debate. They’re not interested in good faith, they’re interested in seeing their transgressive words on screen, they’re interested in upsetting people, laughing at their expense, and moving on if they’re pushed. They’re bullies, and until someone pushes them off it, they will claim this soapbox when they find it.
The key point is that banning words is a blunt instrument that can be easily subverted, which is why I hope we never go down that route.
You need to look at the actual meaning behind what was said.
The counterpoint to this is the number of people who get banned, usually by bots, for talking about cigarettes or vehicle transmissions. Reddit definitely went too far; we need to find a middle ground.
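The false positives described in this thread come from filters matching banned terms as substrings rather than whole words. Here is a minimal sketch of the difference, with the neutral stand-in term “cat” (a hypothetical example, not any real automod’s word list): a substring check flags “catalytic”, while a word-boundary regex does not. Neither approach, of course, catches coded substitutes like “trains”, which is exactly why word bans are a blunt instrument.

```python
import re

# "cat" stands in for a banned term; any substring filter behaves the same.
BANNED = {"cat"}

def naive_filter(text: str) -> bool:
    """Flags a post if any banned term appears anywhere as a substring."""
    lower = text.lower()
    return any(term in lower for term in BANNED)

def boundary_filter(text: str) -> bool:
    """Flags a post only when a banned term appears as a whole word."""
    return any(
        re.search(rf"\b{re.escape(term)}\b", text, re.IGNORECASE)
        for term in BANNED
    )

post = "My catalytic converter failed again."
print(naive_filter(post))     # True: false positive on "catalytic"
print(boundary_filter(post))  # False: no standalone "cat"
```

Word-boundary matching fixes the “transmission” and “cigarette” class of false positives, but it still can’t judge intent, which is why looking at the actual meaning behind what was said matters more than any filter.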