Discussion about this post

Kyle van Oosterum

Great post, I really enjoyed it! It’s also worth pointing out that having these laws on the books does not guarantee that they will always be used for good reasons or that they won’t be abused.

That being said, I wonder what you think about a different way to treat the Yarwood case. It strikes me that potential harm is too broad a category for punishing speech, but the practical stakes of the speech being acted upon do seem relevant. With “we should go flick their ear” there’s a potential harm, but the stakes if people act on it are not high, compared with saying “we should slaughter people,” where even a low risk of death merits some response or consequence. Whether that’s jailable, I’m unsure.

My own take is that platforms should respond to (non-inciting) hate speech and misinformation by amplifying counterspeech. For inciting rhetoric like Yarwood’s, it strikes me that platforms should demote or even remove such speech in proportion to how likely it is to cause harm. But that’s a separate point from whether it should be criminalized, I suppose.

I talk about this more here if you’re curious:

https://kylevo.substack.com/p/beyond-the-gates-making-speech-free

Synthetic Civilization

This isn’t just illiberalism.

It’s what happens when institutions shift from punishing actions to pre-empting potential narratives under uncertainty.

Once speech is treated as upstream risk, enforcement inevitably expands, because “possible harm” has no natural stopping point.