Censorship Vs. Moderation: How Does Subsocial Handle These?
Subsocial, and anything built on it, is censorship resistant, but there are still multiple avenues for moderation, which we are about to explore. Because many people feel strongly about Trump, in many different ways, we will use his tweets as an example, as they make a good one.
Every post on Subsocial must be made in a space. Think of spaces like a subreddit, a Facebook group, a Twitter profile, a blog, etc. Every space has one or more owners, who are able to moderate content in the space, so they can delete Trump's tweets from that space if they want to.
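The ownership model above can be sketched in a few lines of Python. This is purely illustrative, not Subsocial's actual runtime logic; the `Space`, `Post`, and `hide_post` names are hypothetical stand-ins for the on-chain permission check that lets only a space's owners moderate its content.

```python
from dataclasses import dataclass, field

@dataclass
class Post:
    author: str
    text: str

@dataclass
class Space:
    # A space has one or more owners who can moderate its contents
    owners: set
    posts: list = field(default_factory=list)
    hidden: set = field(default_factory=set)  # indexes of moderated posts

    def add_post(self, post: Post) -> None:
        self.posts.append(post)

    def hide_post(self, caller: str, index: int) -> None:
        # Only a space owner may remove content from the space's view
        if caller not in self.owners:
            raise PermissionError("only space owners can moderate")
        self.hidden.add(index)

    def visible_posts(self) -> list:
        return [p for i, p in enumerate(self.posts) if i not in self.hidden]
```

Note that hiding a post only affects this space's view: in the real system the content still exists on IPFS and on-chain, which is what distinguishes moderation from censorship.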
All content on Subsocial is hosted on IPFS. The actual text, images, videos, etc. are uploaded to an IPFS node when the content is posted to Subsocial, and only the CID (content identifier) of that content is stored on the blockchain, to save space. The CID simply points to the actual content on IPFS. Every IPFS node is hosted by someone, or some group of people, and node operators control what they host on their servers. So they can decide not to host Trump's tweets if they don't want to.
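The pointer pattern described above can be sketched as follows. This is a simplified model, not real IPFS: `off_chain_store` is a hypothetical stand-in for an IPFS node, and plain SHA-256 hex digests stand in for real CIDs (which are multihash-encoded). The principle is the same: the chain holds only a small content-derived identifier, and whoever runs the node decides whether the content itself stays available.

```python
import hashlib
import json

off_chain_store = {}   # stand-in for an IPFS node: cid -> full content
chain = []             # the blockchain stores only the CIDs, not the content

def publish(content: dict) -> str:
    # Hash the canonical serialization of the content to derive its identifier
    blob = json.dumps(content, sort_keys=True).encode()
    cid = hashlib.sha256(blob).hexdigest()
    off_chain_store[cid] = content   # the node operator chooses to host this
    chain.append(cid)                # the chain keeps just the pointer
    return cid

def resolve(cid: str):
    # If no node hosts the content anymore, the on-chain pointer dangles
    return off_chain_store.get(cid)
```

This also shows the moderation lever: a node operator can stop hosting a piece of content, after which the on-chain CID still exists but resolves to nothing on that node.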
Web platforms consist of a frontend (website, mobile app) plus a backend (servers, algorithms, etc.). In most cases, users aren't really aware of the backend. Twitter has a backend that we can't see into, and one set of frontends that we are forced to use if we want to use Twitter. They also have complete control over all content on the platform. Subsocial is just the backend, and it is totally open, transparent, and permissionless. Anyone can build a frontend, connect it to the Subsocial blockchain, and read all of the content from the chain. However, operators of these frontends control what they show. For example, you could build a social network on Subsocial that is just about cats, and never show anything about dogs, even though there could be lots of content about dogs on the Subsocial blockchain. Likewise, someone could set up a frontend that only shows content trash-talking Trump, and hides anything pro-Trump, or written by Trump.
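The cats-only frontend idea above reduces to a simple filter. This is a toy sketch, not the Subsocial SDK: `all_posts` stands in for the full, open set of posts any frontend can read from the chain, and each frontend simply chooses which subset to display.

```python
# Every frontend can read the same full set of posts from the open chain
all_posts = [
    {"author": "alice", "text": "I love cats"},
    {"author": "bob", "text": "dogs are the best"},
    {"author": "carol", "text": "my cat sleeps all day"},
]

def cats_only_frontend(posts: list) -> list:
    # This frontend displays only cat-related content; the dog posts
    # still exist on-chain and remain visible through other frontends.
    return [p for p in posts if "cat" in p["text"]]
```

The key point is that filtering happens at display time: nothing is deleted from the chain, so a different frontend reading the same data can show the dog posts, or everything.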
Those are the three avenues of moderation. They usually come up when people ask about illegal content being uploaded, but they are relevant here too. Obviously, there is no way to prevent misinformation short of silencing the people doing the misinforming, and there can also be genuine debate about whether something is true or not. So at the end of the day, we are not here to be arbiters of truth, just defenders of free speech. People need to think for themselves and decide whether they believe something is true, just as humans have done for millennia.
If someone wants to put themselves in a completely anti-Trump echo chamber, we can't stop them. If someone wants a completely pro-Trump echo chamber, we can't stop them either. And if someone creates their own space on Subsocial, hosts their own IPFS node, and runs their own frontend, we can't do anything to stop them, aside from an on-chain governance vote to ban them, in which case they can simply make another account, as you always can in crypto.
Content Lead & Community Manager at Subsocial, Content Lead at The Polkadot Experience & The Canary Network Experience, Advisor at SkyLabsCorp
Subsocial is a web3 social networking platform built to support the social apps of the future. These apps will feature built-in monetization methods and censorship resistance, where users own their content and social graphs.
Subsocial is one of a kind in the Polkadot ecosystem, designed specifically for social interactions. These interactions do not have to be social networking per se, as Subsocial can support apps like YouTube, Shopify, or even Airbnb.
To learn more about Subsocial and the future of social networking, check out our links:
Website | Twitter | Discord | Telegram | GitHub | Documentation