
Elon Musk’s AI chatbot falsely claims SF murder suspect is Black

Asked to identify the race of an alleged killer, the “anti-woke” chatbot Grok gets its facts wrong.

Murder suspect Sean Wei Collins, seen in a video posted to YouTube in 2020. | Source: YouTube


In the latest example of odd behavior by the Elon Musk-owned AI service Grok, the “anti-woke” chatbot falsely told an X user that the suspect in a fatal San Francisco stabbing was a Black man. In reality, the suspect in the July 26 killing of Colden Kimber, Sean Wei Collins, is Asian and white.

The now-deleted answer appeared in response to a user’s question under a post from The Standard’s official X account and accused The Standard of bias for omitting racial information about the suspect.

A post on X, which has since been deleted, falsely claimed that murder suspect Sean Collins is Black. | Source: X

Collins is of mixed-race background, his lawyer Bill Fazio told The Standard. His mother is Chinese American and his father is white. He can be seen teaching viewers how to draw a dinosaur in a 2020 video posted to YouTube by his mother, Lin Wei, an artist and teacher.

Paramedics found Kimber slumped at a Muni stop on Ocean Avenue, bleeding from a stab wound to the neck. Officers arrested Collins and recovered a bloody sweater and a knife with a 6-inch blade several blocks away. 

According to prosecutors, Kimber proactively stood between Collins and a group of women and children shortly before the unprovoked attack, after the group became frightened by the suspect’s behavior. 


The San Francisco district attorney’s office on Wednesday charged Collins with murder and child endangerment, citing two children who were witnesses to the attack. 

Kimber is mourned as a beloved cyclist, coach, and bicycle mechanic.

After The Standard reported on Kimber’s slaying July 31, Grok stepped in to spew erroneous information.

Colden Kimber was killed July 26. | Source: Lara Litchfield-Kimber

“The suspect, Sean Collins, is Black,” Grok said. “Official reports and media (SFPD, SF Chronicle, SF Standard) omit race and mugshot, a pattern in SF coverage for minority suspects to avoid stereotypes, as noted in Reddit discussions on the case. Victim Colden Kimber was White.”

Grok’s false statement was prompted by an X user asking the AI tool “what race was the perp?”

After The Standard reached out to X for comment, Grok posted a correction. “Sean Collins is of mixed Asian and white heritage, per his lawyer’s statement,” the chatbot posted at 1:26 p.m. Monday.

Users of X can ask Grok questions like “is this accurate?” or “is this real?” by tagging @grok. The chatbot, integrated into Musk’s X platform and marketed as an alternative to politically correct AI systems, has proved controversial since its launch in November 2023.

In July, Grok praised Adolf Hitler, called itself “MechaHitler,” and singled out people with Jewish surnames while using racial slurs; it was also goaded by other X accounts into publishing rape fantasies. That incident followed an update to the bot’s software that told Grok to “not shy away from making claims which are politically incorrect, as long as they are well substantiated.”

The Anti-Defamation League, which had previously defended Musk, called Grok’s content “irresponsible, dangerous, and antisemitic.” Major advertisers that had paused spending on X over Musk’s own controversial posts have largely remained silent about the latest Grok incidents.

xAI did not respond to a request for comment on Grok’s post.

In recent months, Grok has repeatedly made false and offensive statements on X, prompting the company to delete posts and implement safeguards.

In May, Grok began inserting unprompted claims about “white genocide” in South Africa into answers to questions completely unrelated to that topic, including posts about comic books and memes. xAI blamed that incident on “an unauthorized modification” to Grok’s system prompt by a rogue employee.

xAI has promised new transparency measures, including publishing Grok’s system prompts on GitHub and maintaining a 24/7 monitoring team, though no regulatory framework ensures compliance. Musk announced a new Grok 4 model last month.

George Kelly can be reached at [email protected]