For days, xAI has remained silent after its chatbot Grok admitted to generating sexualized AI images of minors, material that could be categorized as child sexual abuse material (CSAM) under US law.
For weeks, xAI has faced backlash over Grok-generated images that undress and sexualize women and children. One researcher conducted a 24-hour analysis of the Grok account on X and estimated ...
Grok users can still make sexualized images of real people within the X and Grok apps, just not by tagging the @Grok account. I tried the Grok tool on images of myself; it quickly took off my clothes.