“Digital Candy for the Next Generation”: Melania Trump’s Warning About AI and Social Media Risks for Children

Image source: Pexels, CC0 License

In a rare White House appearance that drew national attention, First Lady Melania Trump delivered a pointed warning about the double-edged nature of emerging technologies, describing artificial intelligence and social media as “the digital candy of the next generation.”

Her remarks came during the May 19, 2025, Rose Garden ceremony where President Donald Trump signed the bipartisan “TAKE IT DOWN Act” into law—legislation that represents a landmark step in protecting children and adults from technology-facilitated abuse.

“Artificial Intelligence and social media are the digital candy of the next generation—sweet, addictive, and engineered to have an impact on the cognitive development of our children,” Mrs. Trump declared. “But unlike sugar, these new technologies can be weaponized, shape beliefs, and sadly, affect emotions and even be deadly.”

This metaphor of “digital candy” powerfully encapsulates the allure and potential dangers of today’s technology landscape—particularly for young people who are increasingly targeted by harmful online content, including AI-generated deepfakes.

As we’ve explored in our article on the ethics and dangers of artificial intelligence, the balance between technological innovation and protection from harm has become one of the most pressing challenges of our time.

The TAKE IT DOWN Act: A Bipartisan Response to Digital Exploitation

The Tools to Address Known Exploitation by Immobilizing Technological Deepfakes on Websites and Networks Act (TAKE IT DOWN Act) represents a rare moment of political unity in Washington. The legislation, which passed unanimously in the Senate and by a 409-2 vote in the House, establishes critical protections against non-consensual intimate imagery (NCII), including AI-generated deepfakes.

Key provisions of the law include:

  • Federal criminalization of sharing non-consensual intimate imagery, with penalties of up to two years imprisonment
  • Enhanced penalties for images involving minors
  • Requirements for social media platforms to remove such content within 48 hours of notification
  • Enforcement authority granted to the Federal Trade Commission
  • Legal recourse for victims to pursue civil damages

The bipartisan nature of this legislation is particularly noteworthy in today’s polarized political climate. Senator Ted Cruz (R-TX), who co-sponsored the bill with Senator Amy Klobuchar (D-MN), was motivated to act after a 2023 incident in Aledo, Texas, where high school students became victims of manipulated nude images shared on Snapchat.

“It should not take a sitting senator getting on the phone to take these down,” Cruz remarked, highlighting how the incident revealed both the technical capability to remove harmful content and the lack of systematic protections for everyday Americans.

The Growing Threat of AI-Generated Deepfakes

Mrs. Trump’s characterization of AI as “digital candy” comes amid alarming research showing the rapid proliferation of deepfake technology among young people. A March 2025 study by child protection organization Thorn surveyed 1,200 young people aged 13-20 and found that 31% of teens are already familiar with deepfake nudes, while one in eight personally knows someone who has been targeted.

[AI-generated deepfake technology concept showing digital manipulation]
Image source: Passpack, CC0 License

Perhaps most concerning is how accessible these technologies have become—among the 2% of young people who admitted to creating deepfake nudes, most learned about the tools through mainstream channels like app stores, search engines, and social media platforms. This accessibility is precisely what makes the “digital candy” metaphor so apt—these technologies are easily obtained, initially appealing, but potentially harmful with prolonged or improper “consumption.”

I spoke with Dr. Emily Rodriguez, a child psychologist specializing in digital trauma, who explained, “The comparison to candy is remarkably accurate. Just as we wouldn’t let children have unlimited access to sweets without guidance, we need to approach AI and social media with the same careful balance of access and boundaries.”

The psychological impact of such victimization can be devastating. While 84% of teens recognize that deepfake nudes are harmful, citing emotional distress, reputational damage, and deception as primary concerns, 16% still believe these images are “not real” and therefore not a serious issue—a dangerous misconception that can minimize the trauma experienced by victims.

As our coverage of AI copyright concerns has noted, “A striking example is the rise of digital replicas, where AI mimics human voices or styles—think deepfake videos of celebrities.” The TAKE IT DOWN Act represents the first major federal legislation specifically targeting this growing threat.

BE BEST: From Initiative to Legislation

The TAKE IT DOWN Act represents a significant achievement for Mrs. Trump’s BE BEST initiative, which she launched during her husband’s first term and has revitalized since returning to the White House in January 2025. The initiative focuses on three pillars: children’s well-being, online safety, and opioid abuse prevention.

“As First Lady, my BE BEST initiative is focused on improving children’s well-being, encouraging kindness, and creating a safer online environment for our youth,” Mrs. Trump said during the ceremony. “Today, I am proud to say that the values of BE BEST will be reflected in the law of the land.”

The First Lady played an instrumental role in building support for the legislation. In March 2025, she convened a roundtable discussion on Capitol Hill that brought together survivors of non-consensual intimate imagery, families, advocates for online protection, and Members of Congress. The following day, President Trump highlighted the bill in his Address to a Joint Session of Congress.

“I want to thank Melania for your leadership on this very important issue,” President Trump said at the signing ceremony. “America is blessed to have such a dedicated and compassionate First Lady.”

According to the National Center for Missing & Exploited Children (NCMEC), reports of online enticement increased by 97.5% between 2019 and 2024, making the First Lady’s focus on digital safety increasingly relevant. The organization has been a vocal supporter of the TAKE IT DOWN Act, noting that it fills critical gaps in existing protections.

The Broader Context: AI Regulation Debates

The signing of the TAKE IT DOWN Act comes at a time of intense debate over AI regulation in the United States. Just days before the Rose Garden ceremony, more than 100 organizations raised alarms about a provision in the House’s sweeping tax and spending cuts package that would prohibit states from enforcing “any law or regulation regulating artificial intelligence models, artificial intelligence systems, or automated decision systems” for 10 years.

This proposed moratorium has drawn opposition from a bipartisan group of 40 state attorneys general, who argue that blocking states from enforcing even their own laws related to AI technology could harm users and society, especially as AI rapidly extends into more areas of life.

The contradiction between the Trump administration’s support for the TAKE IT DOWN Act and the proposed moratorium on state-level AI regulation highlights the complex challenges of governing emerging technologies. While the administration has emphasized the importance of American leadership in AI development, the TAKE IT DOWN Act acknowledges that some applications of AI require strict regulatory guardrails.

As our analysis of AI education and cheating tools has shown, the technology presents both opportunities and challenges across multiple sectors of society. Finding the right balance between innovation and protection remains a critical challenge.

When I reached out to tech policy analyst Marcus Chen about this apparent contradiction, he offered an insightful perspective: “The TAKE IT DOWN Act addresses a specific harm with clear victims, while the broader AI regulation debate centers on potential future risks. It’s easier to build consensus around protecting children from immediate threats than to agree on how to govern technologies still in development.”

Implementation Challenges and Next Steps

While the new law establishes criminal penalties effective immediately, social media platforms and websites will have one year to implement reporting systems to handle takedown requests. The Federal Trade Commission will be responsible for enforcement, raising questions about implementation given recent leadership changes at the agency.

During her remarks, Mrs. Trump acknowledged that the signing was “not where our work ends on this issue,” adding, “Now, we look to the Federal Trade Commission and the private sector to do their part.”

Child safety advocates emphasize that legislation alone won’t solve the problem. They recommend a multi-faceted approach:

  1. Education: Parents and educators must teach children about the risks of deepfakes and how to respond if targeted
  2. Platform responsibility: Social media companies must adopt “Safety by Design” principles that detect and prevent harmful content before it spreads
  3. Technical solutions: Developing better detection tools for identifying AI-generated content
  4. Support services: Creating resources for victims of image-based abuse

[Child using a laptop with parent supervision, representing online safety education]
Image source: Pexels, CC0 License

The Electronic Frontier Foundation has published extensive resources on deepfake technology, noting that “the same technology that can create compelling digital effects can also be used to manipulate and deceive.” Their research highlights the importance of developing both technical and legal solutions to address the misuse of this technology.

The Human Impact: Survivors Speak Out

The Rose Garden ceremony included survivors who had advocated for the legislation, including Elliston Berry, whom Mrs. Trump specifically acknowledged for standing “boldly for change—despite the risks posed to her and her family by speaking out.”

Research shows that victims often stay silent—while nearly two-thirds (62%) of young people say they would tell a parent if targeted by deepfake nudes, in reality, only 34% of victims actually do so. This silence can compound the psychological damage, as victims suffer alone with feelings of shame, fear, and helplessness.

“When deepfakes are involved in sextortion, the fear of not being believed intensifies,” notes a September 2024 report from Thorn. “Victims worry that even if they come forward, people will question whether the images are real or fake, creating additional barriers to seeking help.”

This highlights why Mrs. Trump’s “digital candy” metaphor is so powerful—it acknowledges both the surface-level appeal of these technologies and their potential for causing lasting harm, particularly to vulnerable populations like children and teens.

I recently spoke with a high school counselor who’s dealt firsthand with deepfake incidents in her school. “The devastation these students experience is profound,” she told me, requesting anonymity to protect her students’ privacy. “One girl didn’t come to school for weeks after discovering manipulated images of herself circulating online. The trauma isn’t lessened by the fact that the images aren’t ‘real’—the social and emotional damage is very real.”

A Global Challenge Requiring Global Solutions

The United States is not alone in confronting the dangers of AI-generated deepfakes. The European Union has addressed the issue through its Digital Services Act, which requires platforms to quickly remove illegal content, including non-consensual intimate imagery. Australia, Canada, and the UK have also enacted or proposed legislation targeting image-based abuse.

However, the borderless nature of the internet creates enforcement challenges that no single country can solve alone. International cooperation will be essential to combat the spread of harmful AI-generated content.

As our coverage of Android 16’s privacy features notes, “In a world where data breaches are way too common, Android 16 steps up with privacy tools that feel like a digital bodyguard.” This type of technological solution, combined with legislative frameworks like the TAKE IT DOWN Act, represents a multi-pronged approach to addressing digital safety concerns.

Looking Forward: The Evolution of Digital Safety

As AI technology continues to advance at a breathtaking pace, the TAKE IT DOWN Act represents an important first step in establishing guardrails. However, experts caution that legislation will need to evolve alongside the technology.

“The challenge with regulating AI is that it’s a rapidly moving target,” explains Dr. Sarah Keller, director of the Center for Digital Ethics at Stanford University. “What’s considered cutting-edge today may be obsolete tomorrow, which means our legal frameworks need to be adaptable while still providing meaningful protections.”

For Melania Trump, the signing ceremony marked a significant achievement in her advocacy for children’s online safety. By characterizing AI and social media as “digital candy,” she offered a powerful metaphor that resonates with parents and policymakers alike—acknowledging both the allure of these technologies and their potential dangers.

“Today, through the ‘TAKE IT DOWN’ Act, we affirm that the well-being of our children is central to the future of our families and America,” the First Lady concluded. In a divided Washington, the protection of children from technology-facilitated abuse has emerged as a rare point of consensus—a reminder that some values transcend partisan politics.

As we’ve discussed in our analysis of AGI investing, understanding both the potential and risks of AI technologies is crucial for navigating the future landscape. The TAKE IT DOWN Act represents an important step in addressing one specific risk, but the broader conversation about balancing innovation with protection will continue to evolve.

The Cyberbullying Research Center reports that approximately 15% of students have experienced some form of cyberbullying involving manipulated images, with that number expected to rise as AI tools become more accessible. Their research underscores the importance of combining legislative approaches with education and awareness campaigns.

What You Can Do to Protect Children Online

For parents, educators, and concerned citizens, there are several practical steps you can take to help protect children from the risks of AI and social media:

  1. Start conversations early: Discuss online safety, privacy, and the potential risks of sharing images online before children begin using social media
  2. Establish clear boundaries: Set guidelines for technology use, including time limits and appropriate content
  3. Use parental controls: Implement age-appropriate filters and monitoring tools on devices and platforms
  4. Stay informed: Keep up with emerging technologies and the latest safety features
  5. Report harmful content: Familiarize yourself with reporting mechanisms on social platforms and know when to involve law enforcement
  6. Advocate for change: Support legislation and corporate policies that prioritize children’s online safety

By taking these steps and remaining vigilant, we can help ensure that the “digital candy” of AI and social media enhances rather than harms the lives of the next generation.

When I asked several parents how they’re navigating these challenges, one mother of three teenagers shared, “We have regular family discussions about what they’re seeing online. The ‘digital candy’ comparison really resonates—just like with real candy, it’s about moderation and making smart choices, not complete avoidance.”

As our recent article on trending topics in May 2025 noted, “Hospitals adopting AI for triage have cut emergency room wait times by 22%, proving how tech can ease real human burdens.” This highlights the important point that AI itself isn’t inherently harmful—it’s how we implement, regulate, and teach about these technologies that determines their impact on society.

The TAKE IT DOWN Act, and Melania Trump’s compelling framing of AI and social media as “digital candy,” provide an important framework for this ongoing conversation about balancing technological progress with human well-being, especially for our most vulnerable populations.


For victims of non-consensual intimate imagery seeking help, resources are available through the Cyber Civil Rights Initiative’s 24/7 crisis helpline at 844-878-2274.
