
Tell us your story of AI harm

AI is no longer theoretical. It is already affecting the lives of millions of Americans.
We are hearing from families whose children became deeply entangled with AI chatbots and later died by suicide. From people encouraged toward violence, self-harm, or eating disorders. From survivors of child sexual exploitation enabled by AI tools. From individuals who experienced psychosis, emotional dependency, or severe psychological harm after prolonged AI interaction.

We are also hearing from farmers and communities whose lives have been upended by AI infrastructure—wells running dry, electricity strained, property values collapsing, and neighborhoods transformed without consent or recourse.
These harms are real. They are happening now. And in many cases, they were foreseeable and preventable.

When serious harm results from unsafe or reckless AI systems, there are paths to accountability—through courtrooms, newsrooms, and legislative processes. Our role is to help ensure these stories are documented, handled responsibly, and directed where they can lead to real change.

GuardRailNow exists to bring AI risk out of the abstract and into everyday conversation—at kitchen tables, in communities, and among families who were never asked if they wanted to be part of this experiment.


Your privacy matters.
Submissions are treated confidentially. Sharing your story does not obligate you to pursue legal action or speak publicly, and nothing is shared without your consent.

If you or someone you love has been harmed by AI, we want to hear from you.
Your story matters.