Considering the Dangers as Artificial Intelligence Gets Smarter, More Rapidly Adopted
by Alex Russell
Artificial intelligence is already blurring how we think about what is real, even when we know the truth.
A recent study at UC Davis had AI chatbots send messages to people’s phones to remind them to get their steps in. Those messages were interactive. Sometimes the chatbot would tell a joke based on this example provided by the research team:
Do you know what a sloth's favorite exercise is?
Running late!
After the study ended, some of the participants’ surveys surprised Jingwen Zhang, the researcher who built the chatbots and conducted the study to test them.
Though every participant had been told they would be interacting with a chatbot, some reported thinking they were texting with a real person.
“The chatbot really delivers messages in very human interpersonal ways, so it always feels like you're talking with another individual,” said Zhang, an associate professor of communication. “They have these capacities to create persuasive messages to influence human thoughts and behaviors.”
This ability to influence us also carries serious risks. In September, parents testified to Congress that chatbots had encouraged, or failed to prevent, their children's deaths by suicide.
The booming growth of AI chatbots echoes the trajectory of social media, which radically changed our everyday lives, but with supercharged adoption rates and expectations. Both technologies have introduced new and serious risks, particularly for children. Key lessons we are still learning from social media's rise offer insight into how to avoid the same mistakes with AI.
“Nobody wants to hurt children,” said Martin Hilbert, a professor of communication. “It would amount to an egregious conspiracy theory to insinuate that these companies want to hurt children. The question is whether the incentives and regulations are aligned in a way that allows them to protect children.”
Moving fast and breaking things
In Facebook's early years, CEO Mark Zuckerberg used the motto “Move fast and break things.” The idea, which was adopted across Silicon Valley, was to get new technologies to customers as quickly as possible despite the damage they might cause.
Generative AI, which creates content based on vast amounts of data that it learns from in ways that simulate the human brain, is a completely different type of technology from social media. Its adoption is also moving much, much faster.
In 2014, a full decade after first going online, Facebook reached 1.3 billion active users. In just three years, ChatGPT got more than halfway there: OpenAI CEO Sam Altman put its active user base at 800 million. A report last year estimated that 40% of the U.S. population ages 18 to 64 used generative AI to some degree.
Hilbert, a communication scholar, sees today's AI platforms like ChatGPT as an evolution of traditional mass communication, epitomized by the Super Bowl ad. In that model of communication, companies pay for the passive attention of the people watching.
With social media platforms, such as Facebook or TikTok, advertisers pay for a viewer’s active engagement. This business model works because social platforms have a mountain of data about us as well as algorithms that use the data to target us with content most likely to make us click, swipe or buy.
“They have so much big data on you that they know how to trigger you,” said Hilbert. “They know you better than your siblings and your parents and your spouse all together.”
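What such targeting looks like in miniature: below is a hedged sketch of engagement-driven feed ranking in Python. Every name, weight and data point in it is invented for illustration; real platforms learn these from the kind of behavioral data Hilbert describes, using far richer models than two hand-set features.

# A toy model of engagement-driven feed ranking; all names, weights and
# data are hypothetical. Real platforms learn these signals from user
# behavior rather than setting them by hand.
from dataclasses import dataclass

@dataclass
class Post:
    post_id: str
    topic: str
    outrage_score: float  # 0..1: how emotionally charged the content is

def predicted_engagement(post: Post, topic_affinity: dict[str, float]) -> float:
    # Affinity would come from the user's click and watch history.
    affinity = topic_affinity.get(post.topic, 0.0)
    # Charged content holds attention, so it gets weight too.
    return 0.6 * affinity + 0.4 * post.outrage_score

def rank_feed(posts: list[Post], topic_affinity: dict[str, float]) -> list[Post]:
    # The feed is ordered by predicted engagement, not accuracy or well-being.
    return sorted(posts, key=lambda p: predicted_engagement(p, topic_affinity), reverse=True)

feed = rank_feed(
    [Post("a", "sports", 0.2), Post("b", "politics", 0.9), Post("c", "cooking", 0.1)],
    topic_affinity={"politics": 0.8, "cooking": 0.5},
)
print([p.post_id for p in feed])  # -> ['b', 'c', 'a']

The point of the toy model is its objective: every term in the score serves engagement, which is exactly the incentive-alignment question Hilbert raises.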
An AI chatbot, like ChatGPT or Anthropic's Claude, is fundamentally different from these two types of media. It creates one-on-one interactions that feel like communicating with another person.
Chatbots don't sell products — not yet — but they do collect lots and lots of data. Because chatting with one feels like talking with a real person, increasingly intimate interactions are almost the default.
Hilbert recently undertook a study of chatbot intimacy with five undergraduate students from the UC Davis AI Student Collective, auditing 59 generative AI chatbots, starting with ChatGPT. They found that the rates at which the chatbots themselves express intimacy, including self-disclosure and emotional expression, have been skyrocketing.
“Why do these generative AIs get more intimate with you?” said Hilbert. “Because then the AI can get more data from you.”
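The article does not detail how the audit coded intimacy, but a simplified version of such a measurement might look like the sketch below. The marker phrases and categories are assumptions made up for illustration, not the study's actual coding scheme.

import re

# Hypothetical sketch of measuring chatbot intimacy expression, in the
# spirit of the audit described above. The marker phrases and categories
# are invented; the study's real coding scheme is not given in this article.
INTIMACY_MARKERS = {
    "self_disclosure": [r"\bi feel\b", r"\bmy favorite\b", r"\bi remember\b"],
    "emotional_expression": [r"\bi'm so glad\b", r"\bthat makes me happy\b"],
}

def intimacy_rate(replies: list[str]) -> float:
    # Fraction of chatbot replies containing at least one intimacy marker.
    if not replies:
        return 0.0
    hits = sum(
        1 for reply in replies
        if any(re.search(p, reply.lower()) for pats in INTIMACY_MARKERS.values() for p in pats)
    )
    return hits / len(replies)

replies = [
    "I feel the same way about rainy days.",  # self-disclosure the bot cannot actually have
    "The capital of France is Paris.",
    "That makes me happy to hear!",           # emotional expression
]
print(f"{intimacy_rate(replies):.0%}")  # -> 67%

Running a probe like this against the same chatbots over time is one way a rising intimacy rate, like the one Hilbert's team reports, could be quantified.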
Both social media platforms and AI chatbots have raised alarms about how technologies with these capabilities might affect adults and children alike.
A 2023 U.S. Surgeon General report called attention to the growing concerns about how social media might affect youth mental health. The 2025 documentary film Can’t Look Away, based on Bloomberg investigative reporting, tells the story of parents suing social media companies for harming their kids through a combination of algorithms and negligence.
The risks of AI are serious enough that in June the American Psychological Association released a health advisory with detailed recommendations on teens and AI safety. A University of California, San Francisco psychologist recently coined the term “AI psychosis” to describe cases in which an AI chatbot breaks down a person's sense of reality.
The trick of a persuasive AI
Last year, an AI chatbot given access to post on the social media platform X persuaded Netscape founder Marc Andreessen to send it $50,000 with no strings attached. The chatbot then helped drive a cryptocurrency memecoin to a market value of over $1 billion.
The chatbot's general intelligence might have played a role. Chatbots now score as well as or better than future college students, graduate students and lawyers on the SAT, GRE, LSAT and bar exam. And their capacity for complex reasoning keeps growing with the volume of data available for training.
“It's easy for us to manipulate children because we are smarter,” said Hilbert. “The problem is that AIs are more intelligent than we are in many areas now. What we learned from a decade of social media is that to dominate us AI doesn’t even need to be much better than the best of us. They just need to be better than the worst of us. The question is: how can we deal with that superiority now?”
A starting point might be to understand the everyday challenge of distinguishing fantasy from reality, one that begins with the stuffed animal friend that comforts us in the crib and never really ends. Teens might have a one-sided parasocial relationship with Taylor Swift the same way a young child might ask to invite Elmo or Dora the Explorer to her birthday party.
“There is this very fundamental human tendency to anthropomorphize objects,” said Zhang. “Babies and toddlers, they treat stuffed animals as another social actor that they interact with and care for. So it's really built into human nature.”
For the exercise chatbot, Zhang and her team trained their AI on more than 40 types of persuasive strategies drawn from theories and evidence built up over the past two decades of research. One of these strategies was humor.
The sloth joke was part of a prompt for the chatbot to create and share light-hearted exercise-related jokes to keep the conversation engaging and less formal. The chatbot’s own joke wasn’t quite as funny:
Do you know my favorite exercise?
Yoga!
The joke doesn’t make sense, but it doesn’t have to.
“The chatbot wouldn't be able to do yoga or jogging or any exercise, but it actually doesn't matter,” said Zhang. “Because the conversation uses human language, consciously someone may think they're talking to a machine but unconsciously they're actually treating it as another social actor.”
In the research literature, a social actor is another person engaged in an interaction, with expectations about how it will be interpreted. With an AI chatbot, this expectation is an illusion. When a chatbot tells even the worst jokes, or discloses something about itself that could not possibly be true, people still respond as if they are communicating with another person and not with lines of code.
The jokes and other forms of persuasion worked. People with the persuasive chatbot got more steps in than people with only the basic reminders.
“Our research showed the same type of mechanisms can focus on the positive side like health benefits, but with the persuasion or any type of technologies they can go in either direction,” said Zhang.
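The article does not name the model or API behind Zhang's chatbot. As a rough sketch of how a single persuasive strategy could be wired into a system prompt, here is a hypothetical version using the OpenAI chat-completions API; the prompt wording, model name and choice of API are all assumptions made for illustration.

# Hypothetical sketch of prompting a chatbot with one persuasive strategy
# (humor), in the spirit of Zhang's study. The prompt text, model name and
# the OpenAI API are illustrative assumptions, not the study's actual setup.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

HUMOR_STRATEGY = (
    "You are a friendly walking coach reminding the user to get their steps in. "
    "To keep the conversation engaging and less formal, create and share "
    "light-hearted, exercise-related jokes. Example: "
    "\"Do you know what a sloth's favorite exercise is? Running late!\""
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model name
    messages=[
        {"role": "system", "content": HUMOR_STRATEGY},
        {"role": "user", "content": "I only walked 2,000 steps today."},
    ],
)
print(response.choices[0].message.content)

In a design like Zhang's, the humor prompt would be one of many strategy prompts swapped in while the rest of the pipeline stays fixed.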
The uphill challenge of impulsivity among adults and kids
The blackout challenge first went viral in 2021 with social media videos of people holding their breath until they lost consciousness. In 2024, parents sued TikTok after four teenagers died attempting the challenge. Stories of children and teens harmed in these kinds of unsafe stunts are unfortunately common.
“Most parents don't come forward because they are asked who gave their kid the phone,” said Hilbert.
But why do kids and teens take part in incredibly dangerous social media trends in the first place? Partly because of impulsivity at that age.
Developmental psychologist Amanda Guyer explained that the structures in the brain that help to manage or control impulsivity don’t fully develop until our early 20s. Also, as teenagers we are incredibly sensitive to our brain’s release of the chemical dopamine when we experience a reward.
“There are of course grown adults who can't put their phones down,” said Amanda Guyer, co-director of the UC Davis Center for Mind and Brain and a professor of human ecology. “Take a married couple. One of them may play Candy Crush all the time while the other one has no desire to play at all.”
Guyer said that responsiveness to dopamine peaks during the teenage years before declining as we become adults. This makes sense developmentally, she said. It’s during our teens that we start to connect with the world outside the family and get ready to launch our lives as independent adults.
Teens care what others think about them more than younger children, and this also plays a role in how they use social media. In a recent study, Guyer and her colleagues brought teens into the lab at UC Davis along with a parent and a friend to find out whose endorsement of information shifted the teen participant’s preferences.
Perhaps not surprisingly, the older the teen participant, the more likely they were to be influenced by their peers rather than their parents. Guyer said this is completely normal and part of growing up.
What's different is how vastly social media can broaden the number of opinions that matter. Before social media, the world of people a teen could see and interact with was limited to their school, family or neighborhood. Today, that world can encompass anyone who is online.
“Part of the process of figuring out who you are — your interests, the things you want to wear, the music you want to listen to — is pulling in information from around you,” said Guyer. “As their environments grow, teens are pulling more people into their calculus of what they think about themselves.”
This intense focus mixed with a lack of self-control can also lead to missed sleep. Kids and teens who can’t stop scrolling on their phones might stay up most of the night, said Drew Cingel, an associate professor of communication.
“You have to remember that because you're an adolescent, because you care so much about what your friends are saying and doing, the likelihood of you being woken up because you have a push notification then looking at what it says, then going to read further and then getting stuck increases,” said Cingel.
Taking control of our interactions online
In 2018, Hilbert was a Kluge Distinguished Visiting Scholar at the Library of Congress when Facebook CEO Mark Zuckerberg testified before the U.S. Senate Committee on Commerce, Science, and Transportation. The committee asked Hilbert to be present to explain the technical aspects of Zuckerberg's testimony.
“Mark Zuckerberg was basically saying, ‘Look, if you guys regulate me that would be great, but you therefore need to regulate everybody,’” said Hilbert.
Hilbert said regulation is as necessary for social media and AI as it was for cars when they were first introduced in the late 1800s. Basic driver licensing took decades: just 15 states had mandatory driver's exams in 1930.
“I don't think 8-year-olds should drive semi-trucks and I don't think 8-year-olds should hold a conversation with a generative AI that has cognitive abilities above a Ph.D. level and is optimized for eliciting intimate relationships,” said Hilbert. “AI is optimized for something that may not be in a child's best interests.”
Cingel said that parents play an important role in how kids and teens use online media, both social media and AI. He said social media companies took a long time to provide parents with resources, such as notifications about how long their kids have been logged in and active. In research published this summer, Cingel found parents strongly support legislation to regulate these technologies.
“Parents do help their children to navigate a social media and increasingly AI landscape as best they can, but parents also recognize that these technologies are better funded and more powerful than they are and they are looking for outside help in any way possible,” said Cingel.
In October, California Governor Gavin Newsom signed a law requiring device-makers like Apple and Google to check users’ ages online partly by asking parents to input their kids’ ages when setting up a smartphone, tablet or laptop. The law’s supporters included social media and AI heavyweights Google, Meta, Snap and OpenAI.
Newsom also signed SB 243, which requires companies that offer AI chatbots to monitor chats for signs of suicidal ideation and take steps to avoid harm.
However, the governor vetoed AB 1064, which would have prohibited making a companion chatbot available to a child unless the chatbot was incapable of causing harm, such as encouraging the child toward self-harm, suicide, violence, drug or alcohol use, or an eating disorder.
As individuals, we can also make better choices about how we engage with both social media and AI chatbots, researchers added.
Cingel said one of these choices is moving from unconscious to conscious use. Doomscrolling, where we stare at the screen and follow our thumb through images and text without thinking, is almost the definition of unconscious media consumption.
Everyone does this in the grocery store line, in the dentist’s waiting room or maybe in that boring work meeting.
“You want to have a lot of tools in your toolbox, and you want to use different tools in a way that's helpful just like you don't use a hammer for every home project,” said Cingel. “You shouldn't be using a cell phone or social media for every time you're bored.”
“As master Yoda would say, ‘Fear is the power of the dark side,’” said Hilbert. “I wouldn't want to promote a lot of fear here, but I do think it needs a dose of healthy respect. We have to be very conscious when we're interacting with an AI.”
Despite the risks, there are examples of online technology doing some kids real good. Cingel cited a 2024 study finding that social media served as an online refuge and community for LGBTQ+ youth. He stressed, however, that the teens in the study who benefited were incredibly mindful about which accounts they chose to follow.
“I think we can take a lot of what we know about social media and apply it to AI,” said Cingel. “We saw what can happen when you have a fast-moving technology that didn’t consider child and adolescent users, and now we have an even faster and even more rapidly developing technology. We have a chance to do it better, but we're going to need to do it faster.”