2020/10/14/The Curse of Monster Island/woozle

From Issuepedia
Revision as of 16:28, 8 January 2021 by Woozle (talk | contribs)

<blockquote>I wanted to study if it was possible for people from radically different worldviews to debate in an environment that had informal guidelines but no officially enforced rules.</blockquote>

He had me up until "no officially enforced rules". My long experience has been that you need guidelines by which valid arguments can be separated from bogus ones; otherwise the venue will be dominated by emotional appeals, polarizing the readership into those who can spot a bogus argument even when it's shrouded in an appeal and those who are swayed by the appeal -- and right-wing argumentation is designed to ruthlessly exploit that divide.

So... the motive seems good, but why would anyone believe "no rules" could possibly work out well? Suspish.

<blockquote>It was already clear in 2016 that the regulation of online speech was going to be a rolling disaster, producing an endless stream of rage against the moderators, who were invariably cast as biased against the right.</blockquote>

It wasn't clear to me -- and in fact Mastodon, however egregious its failures at protecting the vulnerable in certain contexts, has at least shown that successful online moderation is possible when it is devolved down to a low enough user-to-mod ratio: low enough that trust can exist and be sustained between users and moderators, and with no corporate upper layer that can remove mods for profit-driven reasons.

Worse, though, is the idea that toddlers raging against rule-enforcement somehow justifies the claim that rule-enforcement is impractical. It's when the toddlers squeal loudest that you know it's finally being done right.

<blockquote>Respectable experts have raised concerns about the lack of oversight for algorithmic moderation.</blockquote>

This is an entirely different problem, and it follows (and reinforces) the dominant thinking about online moderation -- that it's somehow impractical to have humans do it, so we have to have algorithms. Has it somehow not occurred to him that those algorithms have so far been designed entirely by corporations whose primary goal is profit -- leading to "engagement" as the primary metric of success, and thus to the deprioritization of accuracy and safety?

To be fair, he's expressing (in a rather roundabout way) a feeling of concern that algorithmic moderation won't work... but in the context of having just said that enforcing rules also won't work, it's unclear what point he's trying to make.

<blockquote>Our failures to address this problem proactively seem to promote a desire in some to return to a “golden age” of the internet, before things got so big that moderation of speech on an epic scale became necessary.</blockquote>

I wrote about this in 2017, reposted here. TLDR: no, going back to The Old Ways of the Net is not a solution; the ecosystem has changed, it's a larger environment with larger predators.

<blockquote>Almost immediately we had to add the guideline “no deleting” as individuals started to delete sections of posts to mess with arguments or cover spots where they’d messed up.</blockquote>

I heavily facepalm at this... this is why I started with wikis as a discussion platform, because that leaves you free to edit while also providing a record of edits. I don't understand why social media doesn't universally adopt this model (...aside from social media ultimately being driven by corporate profit-think, including Mastodon).
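The wiki model described here -- edits are free, but every prior version stays on record -- amounts to an append-only revision history. A minimal sketch of the idea (purely illustrative; the class and method names are my own, not any real platform's API):

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class Revision:
    """One saved version of a post's text, with attribution."""
    author: str
    text: str
    timestamp: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc)
    )

class Post:
    """A post whose history is append-only: edits never destroy the record."""

    def __init__(self, author: str, text: str):
        self.revisions = [Revision(author, text)]

    def edit(self, author: str, new_text: str) -> None:
        # "Deleting" is just another revision -- the old text remains
        # in the history, so readers can see what was changed.
        self.revisions.append(Revision(author, new_text))

    @property
    def current(self) -> str:
        return self.revisions[-1].text

    def history(self) -> list[tuple[str, str]]:
        return [(r.author, r.text) for r in self.revisions]

post = Post("alice", "Original argument.")
post.edit("alice", "")  # attempted "deletion"
# post.current is now "", but post.history() still shows the original text.
```

Under this model, the "no deleting" guideline becomes unnecessary: blanking a post is just one more recorded revision, visible to everyone.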

As I read the history of the next 3 years of Monster Island, what I find myself thinking is this: he set up a petri dish, and then was shocked and dismayed at the nasty stuff which grew there. He created an environment in which the worst behavior would not be punished, and then was shocked and dismayed at how bad behavior seemed to dominate.

...and then he not only didn't shut it down, he passed over management to someone else -- someone right-leaning, even! -- so the abuse could continue.


...and then in his conclusion he cites the "tragedy of the commons", which... is a bigoted myth, but maybe he can be forgiven for not knowing that? I was always suspicious of it but only found out about the history of it last year.

That said, his conclusion about unmoderated discussion is correct... but I'm not sure what the point is. I could have told him that in 2005.

And then there's this further conclusion:

<blockquote>More generally, I learned that discourse really only works if it’s properly moderated and everyone is committed to the system. That means that someone is going to have to be empowered to make decisions and enforce rules, and we’re going to have to find a way to invest enough trust to keep the discourse from collapsing.</blockquote>

...seems rather wishy-washy and liberal-centrist. How is this useful? It also subtly hints at an eternal need for centrally-controlled hierarchy, which is obviously problematic. The fediverse in which Mastodon participates -- which I'd say is far more successful at moderation than any of the mainstream social media venues -- is pointedly not centrally controlled.

Also: If I had run that experiment, I think I'd be apologizing for my role in the violence and abuse he mentions.