The social media platform X has officially acknowledged shortcomings in its handling of inappropriate content and has assured the Indian government of its commitment to comply fully with Indian law, according to officials. The move follows persistent pressure from the Ministry of Electronics and Information Technology (MeitY) over the spread of objectionable and sexually explicit material generated through the platform’s AI tool, Grok.

According to government sources, X has blocked approximately 3,500 pieces of reported content and deleted more than 600 user accounts found to be in violation of Indian law. The company has also pledged to strengthen its safeguards to prevent the creation and distribution of obscene material going forward.

The issue arose when regulators raised concerns that Grok — X’s AI chatbot — was being exploited to generate and disseminate sexually explicit and degrading images, particularly ones targeting women, through manipulated user prompts and synthetic outputs. Government officials characterized this as a significant breakdown in platform-level moderation and a breach of provisions under the Information Technology Act, 2000, and the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021. MeitY gave X 72 hours to submit an Action Taken Report detailing corrective measures.

Confronted with possible legal repercussions, including the risk of forfeiting the “safe harbour” protections that shield intermediaries from liability for user-generated content, X removed the flagged material and committed to future compliance with national law. Officials noted that X has expressed its intention to prohibit obscene imagery on its platform and to implement stricter content moderation practices.

The platform’s acknowledgement of these issues marks a rare instance of public accountability in its ongoing conflict with Indian regulators over online safety and the use of AI-generated content. Indian authorities have underscored that platforms cannot sidestep responsibility by citing technical features or third-party content, and must safeguard user dignity and privacy.

The government’s firm stance reflects wider global scrutiny of AI tools and content moderation, with regulators in Europe and other regions also urging X to adopt robust safeguards and accountability measures for generative AI.