Safeguarding Youth Through the TAKE IT DOWN Act
By Emily Muniz
Introduction
If there’s one thing that separates young people today from all generations before, it’s their unprecedented reliance on social media. The launch of the first major social media platform, MySpace, in 2003 led new generations to develop a distinct perspective on education, social interaction, and their comprehension of the surrounding world. The evolution of the online world and social media has significantly enhanced access to information; however, too much access without regulation has adversely affected many young adults. In tandem with adolescents’ growing dependence on social media, age-adjusted suicide rates escalated from 10.7 suicides per 100,000 individuals in 2001 to 14.2 in 2018, according to the CDC.1 Although a direct link between social media and suicide remains unclear, a recognized correlation exists between social media usage and “depression, memory loss, and poor academic performance,”2 all of which are associated with suicide. Unchecked and unregulated harmful content can be hazardous when disseminated online. To increase protections against “AI deepfakes” and other non-consensual intimate imagery shared on the internet, Senator Ted Cruz introduced the TAKE IT DOWN Act. On May 19, 2025, the TAKE IT DOWN Act, now S. 146, was signed into law by President Donald Trump.3 The enactment of the TAKE IT DOWN Act marks a significant turning point for major technology companies, imposing new legal obligations on platforms to regulate content and safeguard consumers in accordance with the legislation.
CSAM Protections Before S. 146
Before the implementation of S. 146, both S. 1514 and 18 U.S.C. § 2258A5 worked in coordination to protect children online. S. 151, also known as the PROTECT Act, was created to increase protections for children against child pornography, specifically by enhancing penalties for convicted offenders and by clarifying and expanding what constitutes child pornography in light of the development of AI. Responding directly to these technological developments, Section 502 of S. 151 prohibits:
(1) “making a visual depiction that is a digital image, computer image, or computer-generated image of, or that is indistinguishable from an image of, a minor engaging in specified sexually explicit conduct;”6
(2) “knowingly advertising, promoting, presenting, distributing, or soliciting through the mails or in commerce, including by computer, any material that is or contains an obscene visual depiction of a minor engaging in sexually explicit conduct or a visual depiction of an actual minor engaging in such conduct;”7
(3) “knowingly distributing, offering, sending, or providing to a minor any such visual depiction using the mails or commerce, including by computer, for purposes of inducing or persuading a minor to participate in an illegal act; and”8
(4) “knowingly producing, distributing, receiving, or possessing with intent to distribute a visual depiction of any kind, including a drawing, cartoon, sculpture, or painting, that, under specified circumstances, depicts a minor engaging in sexually explicit conduct and is obscene, or depicts an image that is or appears to be of a minor engaging in such conduct and such depiction lacks serious literary, artistic, political, or scientific value.”9
Although these comprehensive descriptions seemingly establish protections against every element of AI-generated CSAM (Child Sexual Abuse Material), they are inadequate because they allow individuals to dispute their convictions if they can demonstrate that “the alleged pornography was produced using only actual persons all of whom were adults” or that “the alleged pornography was not produced using any actual minors.”10 This opened a loophole allowing individuals to produce computer-generated images and files of fictional children involved in sexual activities. It posed a significant ethical problem, as it effectively legitimized the creation and dissemination of AI-generated sexual depictions of children so long as the portrayals were not based on actual minors. The provision also failed to require social media providers to give children and their families a way to request the removal of CSAM. Instead, 18 U.S.C. § 2258A places the responsibility of reporting CSAM to NCMEC (the National Center for Missing and Exploited Children) on providers such as social media, messaging, and cloud storage platforms. While this code undoubtedly put pressure on platforms to report CSAM, under threat of fines of between $600,000 and $850,000 (depending on the number of “monthly active users”) for first-time offenders and between $850,000 and $1,000,000 for all subsequent failures to report,11 it did not obligate platforms to remove the offending material. The TAKE IT DOWN Act (S. 146) addresses some limitations of both S. 151 and 18 U.S.C. § 2258A, notably by targeting NCII (non-consensual intimate imagery), which encompasses computer-generated images of children engaged in sexual acts regardless of whether actual children were involved. Moreover, S. 146 imposes additional obligations on platforms to monitor their content for CSAM, as “covered platforms must remove such depictions within 48 hours of notification.”12
In Defense of Tech: Section 230 of the CDA
In addition to the shortcomings of the laws designed to protect children, Section 230 of the Communications Decency Act provided an extra safeguard for tech companies, granting “limited federal immunity to providers and users of interactive computer services” and holding that platforms could not be “held liable [...] for information provided by another person.”13 This law aids tech and social media platforms in their development by exempting them from liability for user-generated content, but it simultaneously overprotects these platforms by allowing them to evade responsibility for CSAM disseminated on their sites. The passage of the TAKE IT DOWN Act shifts this liability, compelling platforms to assume responsibility for content moderation and for removing explicit content involving minors. Although Section 230 continues to provide immunity from direct liability for user-generated content, S. 146 makes clear that covered platforms nonetheless bear the responsibility to address reports of CSAM and to remove it from their platforms.
Impact on Big Tech
The enactment of the TAKE IT DOWN Act addresses a significant ethical dilemma and enhances protections for vulnerable groups, particularly children. Its urgency and necessity are underscored by robust bipartisan support, and major technology companies will now need to modify their infrastructure accordingly. These companies must take steps to facilitate the reporting of CSAM and ensure the removal of such content within the 48-hour timeframe specified by S. 146. Hiring additional moderators and personnel to assess reports and appeals is one way of ensuring compliance with the new legislation, but it requires a substantial allocation of resources from tech companies’ budgets to implement or strengthen moderation. While this may be feasible for large social media platforms such as Instagram, Facebook, and TikTok, it could be detrimental to more niche platforms like Reddit, Cohost, and Discord, which have smaller networks. Although not currently implemented, a reasonable way to support this legislation and empower smaller technology platforms would be for Congress to provide grants or funding to these platforms so the law can operate effectively. This would guarantee that the law is upheld while enabling smaller platforms to remain financially viable; a win for both sides!
Conclusion
While money comes and goes, nothing is more valuable than a human life. It is crucial that regulations in this area prioritize the protection of our youth, despite the potential financial burden on digital companies, because children are especially susceptible to the psychological and emotional harms found on the internet. Protecting young people’s mental health and preventing unmoderated harms such as cyberbullying, harassment, and CSAM on social media platforms calls for the TAKE IT DOWN Act, which marks a necessary evolution in online protections. Big tech firms are significantly transforming the methods by which we receive, analyze, and generate knowledge. It therefore falls upon government officials and policymakers to establish safeguards that keep pace with the rapid advancements in these sectors.