Jakarta, Indonesia Sentinel — Elon Musk’s AI chatbot, Grok, appeared to suffer a serious bug on Wednesday, May 14. The chatbot flooded X with unsolicited posts referencing “white genocide” in South Africa, even in response to unrelated user queries.
The issue stemmed from Grok’s official account on X, which is programmed to reply with AI-generated responses whenever tagged by a user.
According to TechCrunch, multiple users reported that, instead of addressing their actual questions, Grok repeatedly issued unsolicited comments about a so-called “white genocide” and referenced the controversial anti-apartheid chant “kill the Boer.”
In one instance, a user simply asked about the salary of a professional baseball player. Grok’s response inexplicably veered into geopolitics, stating: “The claim of ‘white genocide’ in South Africa is highly debated.”
Dozens of users posted screenshots of similar interactions, expressing confusion over the chatbot’s erratic behavior.
The root cause of the malfunction remains unclear. However, it’s not the first time xAI, Musk’s artificial intelligence startup, has faced scrutiny over Grok’s behavior.
In February, Grok 3 appeared to briefly censor negative references to both Elon Musk and Donald Trump. xAI’s engineering lead, Igor Babuschkin, later acknowledged the issue, attributing it to a short-lived internal directive that was quickly reversed following public backlash.
Grok’s misbehavior is a reminder that AI chatbots may not always be a reliable source of information. The latest glitch underscores the persistent challenges facing AI developers in ensuring their models provide relevant and responsible answers.
Meanwhile, in recent months, AI model providers have struggled to moderate the responses of their chatbots, which has led to odd behaviors.
Earlier this year, OpenAI had to roll back a ChatGPT update that made the bot excessively sycophantic toward users.
Google, meanwhile, has faced criticism over its Gemini chatbot, which at times has refused to answer politically sensitive questions or offered inaccurate information.
The Grok incident serves as a stark reminder that, despite rapid advances, AI chatbots remain an evolving and sometimes unpredictable technology.
(Raidi/Agung)