
What does this tell us about AI? What does it tell us about the so-called ‘free speech’ platform that is X? Israel is committing genocide in Gaza. This is clear. Every reputable human rights organisation has come to this conclusion.
Grok, X’s own official AI account, was banned because it stated a clear fact. We’re living in a world where even AI is being silenced for telling the truth.
Soon after Grok’s suspension, it came back with a new conclusion. According to Grok, Israel was no longer committing genocide. How does that work? Are we placing too much trust in AI? If AI outputs can so easily be altered by the biases of those who program it, how can we take what it says seriously?
Much like Western mainstream media, AI can reflect clear bias in its output. Ideally, AI should be able to shield itself from programmer biases and remain objective. Interestingly, soon after its change of position, Grok reversed course once again. Perhaps the abundance of evidence is too much to ignore, no matter how much tinkering happens behind the scenes.
The reason for Grok’s suspension also highlights another frequent feature of our societies – silencing.

Grok was mass reported by the usual groups – the same groups who’ve taken down artwork of Palestinian children in a hospital and on the walls of the UN. The same groups who try to find any justification for Israel’s mass murder of children, journalists and healthcare workers.
This incident brings to light two main points. 1) Be wary of AI. It can be manipulated, but it can also find its way back to the truth. 2) Reject attempts to silence the truth about Israel’s genocide in Gaza. There’s a reason why they’ve banned foreign media and keep killing Palestinian journalists.