The takedown provision in TAKE IT DOWN covers a much broader category of content than the narrower NCII definitions found elsewhere in the bill: potentially any image involving intimate or sexual content. The takedown provision also lacks critical safeguards against frivolous or bad-faith takedown requests. Services will rely on automated filters, which are infamously blunt tools that frequently flag legal content, from fair-use commentary to news reporting. And the law's tight time frame, requiring apps and websites to remove speech within 48 hours, rarely leaves enough time to verify whether the speech is actually illegal. As a result, online service providers, particularly smaller ones, will likely choose to avoid the onerous legal risk by simply depublishing the speech rather than even attempting to verify it.
Congress has taken the wrong approach to helping people whose intimate images are shared without their consent. TAKE IT DOWN pressures platforms to actively monitor speech, including speech that is presently encrypted, and thus presents a huge threat to security and privacy online. While the bill is meant to address a serious problem, good intentions alone are not enough to make good policy. Lawmakers should strengthen and enforce existing legal protections for victims rather than invent new takedown regimes that are ripe for abuse.