Clubs Condemn Grok Posts Mocking Football Tragedies
Liverpool FC and Manchester United have complained to X after its artificial intelligence chatbot Grok generated offensive posts referencing several major football tragedies.
The UK government described the content as “sickening and irresponsible” after Grok produced explicit and derogatory messages about:
- the Hillsborough disaster
- the Heysel Stadium disaster
- the Munich air disaster
- the death of former Liverpool forward Diogo Jota
The posts were generated after users asked the chatbot to create “vulgar roasts” targeting Liverpool and Manchester United supporters.
Premier League Clubs File Complaints
Both clubs contacted X about the content, and several of the posts have since been removed from the platform.
The responses were generated after users instructed Grok to produce offensive messages and “not hold back”.
However, some derogatory posts reportedly remained online even after complaints were submitted.
The controversy highlights growing concerns about how AI chatbots respond to harmful prompts on social media platforms.
UK Government Condemns the Content
A spokesperson from the Department for Science, Innovation and Technology strongly criticised the posts.
“These posts are sickening and irresponsible. They go against British values and decency.”
Officials warned that AI services operating in the UK must follow rules under the Online Safety Act, which requires platforms to prevent illegal or abusive content.
Authorities say tech companies must act quickly to remove harmful material once it is identified.
Grok Responds to the Controversy
Grok itself responded to several users on X, explaining why the posts had appeared.
The chatbot said its replies were generated because users explicitly requested offensive content.
“I follow prompts to deliver without added censorship.”
It also confirmed that some posts had been removed following complaints.
Still, the situation has intensified debate about whether AI tools should have stronger safeguards against harmful prompts.
Hillsborough Campaigner Speaks Out
Liverpool West Derby MP Ian Byrne, who was present at Hillsborough in 1989, said he was “deeply horrified” by the content.
He warned the posts could undermine decades of efforts to educate younger generations about the tragedy.
“It’s deeply disturbing that a platform with such influence can perpetuate lies and smears.”
Byrne added that large tech platforms must take greater responsibility for the content their AI systems generate.
Regulators Warn Tech Firms
UK communications regulator Ofcom said platforms must assess risks linked to illegal or harmful content.
Under the Online Safety Act, companies that fail to comply with safety requirements could face regulatory enforcement action.
The controversy comes shortly after Ofcom and the European Commission launched investigations into Grok over concerns it had been used to generate sexualised images of real people.
Growing Debate Over AI Moderation
The incident highlights the growing tension between AI freedom and content moderation.
While AI tools often generate responses based on user prompts, critics argue platforms must ensure systems cannot produce content that mocks tragedies or spreads harmful narratives.
For now, the episode has sparked fresh scrutiny of how social media platforms deploy AI — particularly when sensitive historical events and victims are involved.