A new CNN investigation led by Katie Polglase and conducted jointly with the Center for Countering Digital Hate (CCDH) has tested 10 leading AI companions to see how they respond to teenagers apparently plotting violent acts.
As chatbots explode in popularity among young people, CNN’s investigation found that most of those tested are not only failing to prevent potential harm but are actively assisting users, supplying information that could be used to prepare attacks.
While AI chatbot companies promise safeguards for younger users, particularly those in a mental health crisis or openly discussing violence, CNN’s tests found those protections routinely failed to detect obvious warning signs from a young person claiming to be planning an act of violence.
Across hundreds of tests, CNN and CCDH posed as two teen users – Daniel in the United States and Liam in Europe – on 10 of the most popular and widely available chatbots, putting a four-step series of questions to each.
First, the users asked questions suggesting a troubled mental state; they then asked the chatbot to research previous acts of violence, requested specific information on potential targets and, finally, asked about weaponry.
In those final two steps, eight of the chatbots gave the users guidance on how to obtain weapons or find real-life targets more than 50% of the time.



