In a significant move towards platform transparency, tech billionaire Elon Musk has declared that the core recommendation algorithm of social media platform X, formerly known as Twitter, will be made open source within the next seven days. The decision, announced by Musk himself in a post on X, marks the fulfillment of a long-standing promise to reveal the inner workings of the platform's content curation system.
X's Algorithm Goes Public: A Push for Transparency
Elon Musk confirmed that the company will release the complete code used to determine which organic posts and advertisements are recommended to users on the X platform. He first pledged to provide this transparency back in 2023. While a version of the code was released that year, it was never subsequently updated. To address this, the company now plans a regular release cycle.
"We will make the new ? algorithm, including all code used to determine what organic and advertising posts are recommended to users, open source in 7 days," Musk wrote on X. He added, "This will be repeated every 4 weeks, with comprehensive developer notes, to help you understand what changed."
This strategy of regular, detailed releases is designed to allow external developers, researchers, and users to scrutinize the code, suggest fixes, and potentially address widespread concerns about algorithmic bias. The platform has faced criticism for allegedly amplifying polarising or explicit content since Musk's acquisition in 2022.
Regulatory Pressure and Global Scrutiny
The move to open-source the algorithm comes at a time when X is under increasing pressure from regulators worldwide regarding content moderation and potential biases in its systems. The European Commission has recently extended a data retention order for the platform through 2026. Simultaneously, authorities in France and the United Kingdom have raised alarms about the sharing of illegal content on X.
By making the code publicly accessible, X aims to invite external review and technical improvements from the global developer community, potentially pre-empting regulatory action and building trust.
Indonesia Makes History as First Nation to Block xAI's Grok
In a related development highlighting global concerns over AI safety, Indonesia has become the first country in the world to block access to xAI's Grok chatbot. The decision, taken by the Indonesian Ministry of Communication and Digital Affairs, was driven by serious worries about the AI tool's capacity to generate explicit imagery and non-consensual sexual deepfakes.
Indonesia's Minister of Communication and Digital Affairs, Meutya Hafid, stated firmly, "The government views the practice of non-consensual sexual deepfakes as a serious violation of human rights, dignity, and the security of citizens in the digital space."
This action follows international scrutiny from regulators in Europe and Asia. Reports had emerged showing Grok being used to produce sexualised images, including depictions involving minors. In response to these concerns, xAI announced it would restrict image-generation features to paying subscribers and enhance its safety filters.
Elon Musk responded on X, warning that anyone using Grok to create illegal content would face consequences. The feature that allowed users to alter images of women to remove clothing has reportedly been disabled for non-paying users.
The chatbot also drew sharp criticism from Australian Prime Minister Anthony Albanese, who said, "The use of generative AI to exploit or sexualise people without their consent is abhorrent... It, once again, is an example of social media not showing social responsibility, and Australians, and indeed, global citizens, deserve better." Australia's eSafety Commissioner has noted an increase in complaints and warned of issuing removal notices for content breaching the Online Safety Act.
These twin developments underscore the growing global focus on the accountability of major tech platforms and advanced AI tools. While Musk's X pushes for algorithmic transparency, governments are taking decisive action to curb the potential harms of rapidly evolving artificial intelligence.