
Teenagers have started using artificial intelligence (AI) to manipulate their parents: a new trend is gaining popularity on social networks. Schoolchildren use neural networks to convince their parents to allow them piercings or bright hair coloring. However, experts warn that teenagers can turn to AI with far more dangerous requests, such as creating prohibited content or generating risky challenges. Read the Izvestia article for more on why schoolchildren have begun turning to neural networks for harmful advice, how dangerous this is, and how parents can find out about it.

What is known about the use of AI by schoolchildren for manipulation

Reports have recently begun to appear in news publications and Telegram channels that a new trend is gaining popularity on social networks: teenagers are sharing their experience of using neural networks to manipulate their parents. As the newspaper writes, students claim that the arguments generated by AI are so convincing that parents simply cannot refuse them.

In particular, according to one user, the neural network helped her prepare a whole list of plausible reasons allegedly confirming the harmlessness of temporary hair coloring. Another girl said that for two years she could not persuade her parents to allow her a piercing, but she finally achieved her goal with the help of AI. A third user shared that neural network prompts helped convince her father to let her get a septum piercing.

Artificial intelligence
Photo: IZVESTIA/Sergey Konkov

Teenagers say that artificial intelligence helps them formulate thoughts in a coherent way and present arguments that are unexpected for adults and difficult to resist.

— The neural network uses balanced, structured formulations and impersonal arguments that a teenager is sometimes unable to produce because of their emotionality, — Anastasia Khveschenik, product manager of Solar WebProxy at Solar Group, told Izvestia. — The AI assistant's statements create the illusion of reasoned argument, which eases the pressure on parents.

What other harmful advice do teenagers get from neural networks?

According to an annual Kaspersky Lab study, 63% of children use neural networks to search for information, 60% do their homework with the help of AI, and 53% ask it to explain complex material. As Andrey Sidenko, head of online child safety at Kaspersky Lab, tells Izvestia, just like conventional search queries, requests to AI help schoolchildren answer questions that matter to them in the real world.

Nevertheless, neural networks sometimes give harmful or dangerous advice. "These include, for example, extreme diets, the generation of risky challenges and the creation of illegal content," says Anastasia Khveschenik. "Similar cases have been noted more than once, both in Russia and around the world."

Diet
Photo: IZVESTIA/Sergey Lantyukhov

According to the expert, there have been examples in Russian practice when AI assistants, while doing homework for elementary school students, gave advice on dealing with venomous snake bites, although such recommendations clearly did not match the schoolchildren's age. Moreover, both Russian and international practice has already documented cases of much more serious risks associated with neural networks, from receiving instructions on self-harm to creating deepfakes for cyberbullying, adds Stanislav Yezhov, Director of AI at Astra Group.

— In 2025, there are a growing number of cases in which teenagers turn to chatbots for advice on deceiving adults, hiding bad habits and obtaining instructions for dangerous behavior, — says the expert. — At the same time, AI companions sometimes completely replace live communication with real friends for schoolchildren.

Today, artificial intelligence readily responds to dangerous requests from teenagers, including those related to eating disorders and suicide, agrees Sergey Polunin, head of the IT infrastructure solutions protection group at Gazinformservice. And this practice carries many risks.

Why is uncontrolled communication between schoolchildren and AI dangerous?

AI developers try to train their models with ethical considerations in mind, but in most cases neural network assistants have no understanding of context or social responsibility for their answers, Anastasia Khveschenik tells Izvestia.

In addition, using neural networks for manipulation undermines the trusting relationship between generations, adds Stanislav Yezhov. When teenagers use AI for such purposes, they fail to develop the skills of honest dialogue and compromise. At the same time, the risk of emotional dependence on virtual interlocutors grows, which can lead to social isolation and a distorted picture of reality.

Schoolchild
Photo: IZVESTIA/Sergey Konkov

At the same time, as Izvestia's interlocutor notes, from a legal point of view the responsibility of developers in such cases remains a contentious issue. However, the industry is already responding: OpenAI is introducing parental controls after several tragic cases, and a legal framework for regulating AI systems is taking shape in Russia. These processes raise fundamental questions of AI ethics and the need for clear safety standards, especially for systems that interact with minors.

"In the current legal framework, direct responsibility of developers for harmful AI advice is unlikely, since most services explicitly warn users in the user agreement that AI may give inaccurate or inappropriate answers, and assign responsibility for their use to the person himself," says Anastasia Hveschenik.

However, if it is proven that developers knowingly failed to implement basic filtering of dangerous content despite known risks, this may in the future lead to litigation and the formation of new judicial practice and regulatory standards, the expert notes.

How can parents regulate teenagers' communication with neural networks

Today, age verification in AI services and built-in parental control functions from major market players are still awaited, says Yakov Filevsky, an expert on sociotechnical testing at Angara Security. Until then, parents can install specialized applications and parental control tools on devices that filter content, monitor online activity, and limit the time spent in AI services.

— If parents are worried about which services their child uses and which resources they visit online, they can use parental control programs, which help not only regulate how long applications are used but also find out in time what the child is interested in on the Internet, — agrees Andrey Sidenko.

Family
Photo: Getty Images/Maskot

Parents should take a closer look at what content their children consume and what services they use, Izvestia's interlocutor notes. It is important to explain the basic safety rules of working with AI to schoolchildren, chief among them: do not blindly trust the advice a neural network prepares for you, as it can be wrong and even misleading.

An educational approach, in which parents explain to their children how to work with AI and the principles of critical thinking, discuss risks together and explore new platforms, helps build trusting relationships in the family, because the child needs understanding, care and closeness, adds Yakov Filevsky.

— It is important to explain to the child that AI is a tool, not a friend or adviser, that its answers can be wrong, and that any decision affecting health and safety should be discussed with adults, — concludes Anastasia Khveschenik.

Translated by the Yandex Translate service
