If you’ve spent any time on X (formerly Twitter) over the last year, chances are you’ve been flooded with the “let’s ask Grok” trend.
The generative artificial intelligence chatbot developed by xAI has quickly become a fixture of online chatter as users pose all sorts of questions, from deeply philosophical musings to politically charged queries to witty hypotheticals.
Grok, named after the term coined by sci-fi author Robert A. Heinlein and modeled after the irreverent tone of The Hitchhiker’s Guide to the Galaxy, is designed to be a companion, a guide — and, apparently, a bit of a troublemaker.
Unlike its ostensibly more cautious counterparts — OpenAI’s ChatGPT, Google’s Gemini, and others that tend to err on the side of neutrality — Grok doesn’t shy away from casual language, slang, or even outright swearing. This has led to a flurry of users testing its limits, asking absurd or provocative questions just to see how far it will go.
The chatbot’s “Unhinged” mode, available to premium subscribers, leans into its more rebellious streak, delivering responses that are wild, unpredictable, and often laced with humor. This mode, combined with Grok’s real-time access to X posts, means the AI is constantly learning from the platform’s unfiltered, often chaotic discourse.
In India, one X user, frustrated by Grok’s delayed response to a query about their “10 best mutuals,” added a Hindi swear word to their follow-up. Grok fired back in kind, using the same expletive and telling the user to “stop crying.” The exchange went viral, with some amused at Grok’s audacity and others questioning the ethics of the AI.
Musk, never one to avoid controversy, has defended Grok’s tone as a reflection of its mission — to be a more relatable, human-like AI. In a blog post announcing Grok, xAI described it as an assistant with “a bit of wit and a rebellious streak,” designed to answer questions and even suggest what questions to ask. The firm has also highlighted Grok’s advanced reasoning capabilities, including its “Think” and “Big Brain” modes, which it says let the model work through more complex, multi-step tasks.
In February, xAI previewed a beta version of Grok 3, calling it the company’s “most advanced model yet: blending strong reasoning with extensive pretraining knowledge … Grok 3’s reasoning capabilities, refined through large scale reinforcement learning, allow it to think for seconds to minutes, correcting errors, exploring alternatives, and delivering accurate answers.”
Interestingly, Grok has also been in the limelight for allegedly censoring negative content about Elon Musk and Donald Trump. The chatbot was reportedly instructed to ignore sources critical of its creator and the U.S. president. In one archived exchange, Grok initially named Elon Musk as a major spreader of disinformation, but then revealed it had been directed to dismiss criticism targeting Musk and Trump.
Following public backlash, xAI claimed to have removed the instruction. Grok now openly names Musk when asked about disinformation on X and insists its responses are no longer biased, but the incident raised concerns about the potential for AI to be manipulated to shield influential figures from scrutiny or accountability.
While xAI attributed the censorship to an internal error, pinning the blame on a former OpenAI employee, the incident underscores lingering questions about AI transparency and ethical oversight.
In India, too, where the chatbot’s Hindi responses have sparked particular interest, users have been flooding Grok with politically charged questions, testing its ability to navigate sensitive topics.
One user asked Grok to list “the lies peddled by PM Modi,” prompting a response that cited unfulfilled promises and exaggerated claims — though it went only as far back as 2019.
Grok’s rise also prompts broader questions about the role of AI in our lives. Should these systems be neutral, polite, and sanitized, or is there an argument to be made for AIs that reflect the messiness and contentiousness of human societies and communication?