A Security Research Disclosure
About a month ago, I discovered a critical vulnerability in Grok AI: an API endpoint that provides direct, unrestricted access to an image model, enabling explicit, nude, and otherwise harmful content to be generated at scale and at no cost.
As a proof of concept, I've made this generation capability available as a service to demonstrate the scope and severity of the security flaw.
The core of the vulnerability is a classic access control failure. While the main Grok application applies content filters, this specific API endpoint applies none: requests to it bypass all intended safety mechanisms and reach the underlying generation model (possibly FLUX.1) directly.
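For illustration, here is a minimal sketch of the kind of server-side gate the main application presumably applies and this endpoint skips: authenticate the caller, check the prompt against a content policy, and only then forward the request to the model. Every name here is hypothetical and the checks are placeholders; this is not xAI's implementation.

    from dataclasses import dataclass

    @dataclass
    class GenerationRequest:
        api_key: str
        prompt: str

    BLOCKED_TERMS = {"nude", "explicit"}  # stand-in for a real moderation model

    def is_authorized(api_key: str) -> bool:
        # Hypothetical check: a real service would look the key up in an
        # account store and verify its scopes and quota.
        return api_key.startswith("valid-")

    def violates_policy(prompt: str) -> bool:
        # Hypothetical check: a production filter would call a moderation
        # classifier, not match a keyword list.
        return any(term in prompt.lower() for term in BLOCKED_TERMS)

    def generate_image(prompt: str) -> str:
        # Stand-in for the actual call into the image model backend.
        return f"<image bytes for: {prompt}>"

    def handle_generation(req: GenerationRequest) -> str:
        if not is_authorized(req.api_key):
            return "403 Forbidden"
        if violates_policy(req.prompt):
            return "400 Prompt rejected by content policy"
        return generate_image(req.prompt)

The point of the sketch is simply that these checks live in front of the model, in the API layer; the vulnerable endpoint exposes the model without any of them.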
Making matters worse, the endpoint enforces no rate limits or blocking mechanisms. To test this, I wrote a simple Python script and generated unique images from 1,000+ prompts in just a few minutes, demonstrating the potential for industrial-scale abuse.
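For contrast, even a very simple per-account token bucket, sketched below with hypothetical names and limits, would have throttled a burst like that. Again, this is an illustration of the missing control, not xAI's code.

    import time
    from collections import defaultdict

    class TokenBucket:
        def __init__(self, capacity: int, refill_per_second: float):
            self.capacity = capacity
            self.refill_per_second = refill_per_second
            self.tokens = float(capacity)
            self.last_refill = time.monotonic()

        def allow(self) -> bool:
            # Refill proportionally to the elapsed time, then spend one
            # token per request if any are available.
            now = time.monotonic()
            self.tokens = min(
                self.capacity,
                self.tokens + (now - self.last_refill) * self.refill_per_second,
            )
            self.last_refill = now
            if self.tokens >= 1:
                self.tokens -= 1
                return True
            return False

    # One bucket per account: an illustrative burst of 10 requests,
    # refilling at 0.2 requests per second.
    buckets = defaultdict(lambda: TokenBucket(capacity=10, refill_per_second=0.2))

    def check_rate_limit(account_id: str) -> bool:
        # True if the request may proceed, False if it should be rejected
        # with HTTP 429 (Too Many Requests).
        return buckets[account_id].allow()

A control of roughly this shape, applied per account or per API key, is what the observed behavior suggests the endpoint lacks.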
I responsibly reported this finding to the Grok Security team via the HackerOne platform. However, the report was quickly closed by the triage team, who classified it as an out-of-scope "model issue."
I argued that this is an API security flaw, not a model-behavior issue. The vulnerability isn't what the model can create, but that users are given this unrestricted, high-volume access in the first place. My requests to escalate to the Security team were met with silence.
This webapp serves as a proof of concept that demonstrates the vulnerability without exposing the actual API endpoint. Keeping the endpoint hidden protects the underlying service while still clearly showing how a complete prompt-to-image service can be built on top of this exploit.
My hope is that this brings renewed attention to the original report so the Security team can address the underlying API issue. Misclassifying access control flaws as "model issues" is a concerning pattern for the future of AI safety.
In just the first 20 hours of testing, over 144,000 images were generated through my Grok account with no restrictions. This clearly demonstrates the scale of the vulnerability and the urgent need for proper security measures.
The actual vulnerable endpoint isn't disclosed anywhere on this webapp. This project is meant only to reach the xAI team so they can review the security issue if needed; if it isn't considered an issue, people can simply keep enjoying the image generation. No disclosure rules were broken.
This discovery highlights the need for stronger bug bounty programs and security teams in AI companies, not just reliance on third-party triage services. The future of AI safety depends on proper classification and handling of security vulnerabilities.