The platform defended the new approach to content policing and “mission to promote open conversation” it has implemented since Musk’s 2022 takeover, which some longtime users say has turned the platform into a breeding ground for fake news and hate speech.
X said it had moved from a “binary, absolutist take down/leave up moderation framework … to a more reasonable, proportionate and effective moderation process.” In practical terms, this means the platform would rather demote posts and accounts than ban them altogether.
The platform still carries a high “residual risk” of terrorist content, as “extremists” learn to bypass moderation efforts, and of disinformation, especially around elections, where “tactics evolve continuously and rapidly,” the company wrote, citing generative AI.
Ellen Judson, an investigator at the nonprofit Global Witness, found it “extraordinary to see X itself admit that its services pose a high risk to democratic processes, even with its safety measures in place.”
“They were aware of this risk ahead of a slew of elections across the EU. These revelations must be at the forefront of [the] European Commission’s ongoing investigation into X,” she said.
The social media giant has already landed in the Commission’s crosshairs. The EU executive opened a first-of-its-kind probe in late 2023 over X’s suspected failure to crack down on toxic content and disinformation, and in July it formally charged the platform with breaching several key provisions of the Digital Services Act (DSA).