"Simply put," said one critic, "the U.S. nuclear industry will fail if safety is not made a priority."
U.S. President Donald Trump on Friday signed a series of executive orders that will overhaul the independent federal agency that regulates the nation's nuclear power plants in order to speed the construction of new nuclear reactors—a move that experts warned will increase safety risks.
According to a White House statement, Trump's directives "will usher in a nuclear energy renaissance," in part by allowing Department of Energy laboratories to conduct nuclear reactor design testing, green-lighting reactor construction on federal lands, and lifting regulatory barriers "by requiring the Nuclear Regulatory Commission (NRC) to issue timely licensing decisions."
The Trump administration is seeking to shorten the yearslong NRC process of approving new licenses for nuclear power plants and reactors to within 18 months.
"If you aren't independent of political and industry influence, then you are at risk of an accident."
White House Office of Science and Technology Policy Director Michael Kratsios said Friday that "over the last 30 years, we stopped building nuclear reactors in America—that ends now."
"We are restoring a strong American nuclear industrial base, rebuilding a secure and sovereign domestic nuclear fuel supply chain, and leading the world towards a future fueled by American nuclear energy," he added.
However, the Union of Concerned Scientists (UCS) warned that the executive orders will result in "all but nullifying" the NRC's regulatory process, "undermining the independent federal agency's ability to develop and enforce safety and security requirements for commercial nuclear facilities."
"This push by the Trump administration to usurp much of the agency's autonomy as they seek to fast-ttrack the construction of nuclear plants will weaken critical, independent oversight of the U.S. nuclear industry and poses significant safety and security risks to the public," UCS added.
Edwin Lyman, director of nuclear power safety at the UCS, said, "Simply put, the U.S. nuclear industry will fail if safety is not made a priority."
"By fatally compromising the independence and integrity of the NRC, and by encouraging pathways for nuclear deployment that bypass the regulator entirely, the Trump administration is virtually guaranteeing that this country will see a serious accident or other radiological release that will affect the health, safety, and livelihoods of millions," Lyman added. "Such a disaster will destroy public trust in nuclear power and cause other nations to reject U.S. nuclear technology for decades to come."
Friday's executive orders follow reporting earlier this month by NPR that revealed the Trump administration has tightened control over the NRC, in part by compelling the agency to send proposed reactor safety rules to the White House for review and possible editing.
Allison Macfarlane, who was nominated to head the NRC during the Obama administration, called the move "the end of independence of the agency."
"If you aren't independent of political and industry influence, then you are at risk of an accident," Macfarlane warned.
On the first day of his second term, Trump also signed executive orders declaring a dubious "national energy emergency" and directing federal agencies to find ways to reduce regulatory roadblocks to "unleashing American energy," including by boosting fossil fuels and nuclear power.
The rapid advancement and adoption of artificial intelligence systems are creating a tremendous need for energy that proponents say can be met by nuclear power. The Three Mile Island nuclear plant—the site of the worst nuclear accident in U.S. history—is being revived with funding from Microsoft, while Google parent company Alphabet, online retail giant Amazon, and Facebook owner Meta are among the competitors also investing in nuclear energy.
"Do we really want to create more radioactive waste to power the often dubious and questionable uses of AI?" Johanna Neumann, Environment America Research & Policy Center's senior director of the Campaign for 100% Renewable Energy, asked in December.
"Big Tech should recommit to solutions that not only work but pose less risk to our environment and health," Neumann added.
"This is the facial recognition technology nightmare scenario that we have been worried about," said one civil liberties campaigner.
Amid a Washington Post investigation and pushback from civil liberties defenders, New Orleans police recently paused their sweeping and apparently unlawful use of a private network of more than 200 surveillance cameras and facial recognition technology to track and arrest criminal suspects, a program run without public oversight.
On Monday, the Post published an exposé detailing how the New Orleans Police Department (NOPD) relied on real-time facial recognition technology provided by Project NOLA, a nonprofit organization operating out of the University of New Orleans, to locate and apprehend suspects.
"Facial recognition technology poses a direct threat to the fundamental rights of every individual and has no place in our cities."
Project NOLA's website says the group "operates the largest, most cost-efficient, and successful networked [high definition] crime camera program in America, which was created in 2009 by criminologist Bryan Lagarde to help reduce crime by dramatically increasing police efficiency and citizen awareness."
The Post's Douglas MacMillan and Aaron Schaffer described Project NOLA as "a surveillance method without a known precedent in any major American city that may violate municipal guardrails around use of the technology."
As MacMillan and Schaffer reported:
Police increasingly use facial recognition software to identify unknown culprits from still images, usually taken by surveillance cameras at or near the scene of a crime. New Orleans police took this technology a step further, utilizing a private network of more than 200 facial recognition cameras to watch over the streets, constantly monitoring for wanted suspects and automatically pinging officers' mobile phones through an app to convey the names and current locations of possible matches.
This, despite a 2022 municipal law limiting police use of facial recognition. That ordinance reversed the city's earlier outright ban on the technology and was criticized by civil liberties advocates for dropping a provision that required permission from a judge or magistrate commissioner prior to use.
"This is the facial recognition technology nightmare scenario that we have been worried about," Nathan Freed Wessler, deputy director with the ACLU's Speech, Privacy, and Technology Project, told the Post. "This is the government giving itself the power to track anyone—for that matter, everyone—as we go about our lives walking around in public."
Since 2023, Project NOLA's real-time alerts, which were paused last month amid the Post's investigation, have contributed to dozens of arrests. Proponents, including the NOPD and city officials, credit the collaboration for a decrease in crime in a city that had the nation's highest homicide rate as recently as 2022. Project NOLA has even been featured in the true crime series "Real Time Crime."
New Orleans Police Commissioner Anne Kirkpatrick told Project NOLA last month that its automated alerts must be shut off until she is "sure that the use of the app meets all the requirements of the law and policies."
Critics point to racial bias in facial recognition algorithms, which disproportionately misidentify racial minorities, as a particular cause for concern. According to one landmark federal study published in 2019, Black, Asian, and Native American people were up to 100 times likelier to be misidentified by facial recognition algorithms than white people.
The ACLU said in a statement that Project NOLA "supercharges the risks":
Consider Randal Reid, for example. He was wrongfully arrested based on faulty Louisiana facial recognition technology, despite never having set foot in the state. The false match cost him his freedom, his dignity, and thousands of dollars in legal fees. That misidentification happened based on a still image run through a facial recognition search in an investigation.
"We cannot ignore the real possibility of this tool being weaponized against marginalized communities, especially immigrants, activists, and others whose only crime is speaking out or challenging government policies," ACLU of Louisiana executive director Alanah Odoms said. "These individuals could be added to Project NOLA's watchlist without the public's knowledge, and with no accountability or transparency on the part of the police departments."
"Facial recognition technology poses a direct threat to the fundamental rights of every individual and has no place in our cities," Odoms asserted. "We call on the New Orleans Police Department and the city of New Orleans to halt this program indefinitely and terminate all use of live-feed facial recognition technology."
"Americans deserve both meaningful federal protections and the ability of their states to lead in advancing safety, fairness, and accountability when AI systems cause harm."
Demand Progress on Monday led over 140 organizations "committed to protecting civil rights, promoting consumer protections, and fostering responsible innovation" in a letter opposing U.S. House Republicans' inclusion, in a megabill advanced by the Budget Committee late Sunday, of legislation that would ban state and local laws regulating artificial intelligence.
Section 43201(c)—added by U.S. Rep. Brett Guthrie (R-Ky.) ahead of last Tuesday's markup session—says that "no state or political subdivision thereof may enforce any law or regulation regulating artificial intelligence models, artificial intelligence systems, or automated decision systems during the 10-year period beginning on the date of the enactment of this act."
"Protections for civil rights and children's privacy, transparency in consumer-facing chatbots to prevent fraud, and other safeguards would be invalidated, even those that are uncontroversial."
In the new letter, the coalition highlighted how "sweeping" the GOP measure is, writing to House Speaker Mike Johnson (R-La.), Minority Leader Hakeem Jeffries (D-N.Y.), and members of Congress that "as AI systems increasingly shape critical aspects of Americans' lives—including hiring, housing, healthcare, policing, and financial services—states have taken important steps to protect their residents from the risks posed by unregulated or inadequately governed AI technologies."
"As we have learned during other periods of rapid technological advancement, like the industrial revolution and the creation of the automobile, protecting people from being harmed by new technologies, including by holding companies accountable when they cause harm, ultimately spurs innovation and adoption of new technologies," the coalition continued. "In other words, we will only reap the benefits of AI if people have a reason to trust it."
According to the letter:
This total immunity provision blocks enforcement of all state and local legislation governing AI systems, AI models, or automated decision systems for a full decade, despite those states moving those protections through their legislative processes, which include input from stakeholders, hearings, and multistakeholder deliberations. This moratorium would mean that even if a company deliberately designs an algorithm that causes foreseeable harm—regardless of how intentional or egregious the misconduct or how devastating the consequences—the company making that bad tech would be unaccountable to lawmakers and the public. In many cases, it would make it virtually impossible to achieve a level of transparency into the AI system necessary for state regulators to even enforce laws of general applicability, such as tort or antidiscrimination law.
"Many state laws are designed to prevent harms like algorithmic discrimination and to ensure recourse when automated systems harm individuals," the letter notes. "For example, there are many documented cases of AI having highly sexualized conversations with minors and even encouraging minors to commit harm to themselves and others; AI programs making healthcare decisions that have led to adverse and biased outcomes; and AI enabling thousands of women and girls to be victimized by nonconsensual deepfakes."
If Section 43201(c) passes the Republican-controlled Congress and is signed into law by President Donald Trump, "protections for civil rights and children's privacy, transparency in consumer-facing chatbots to prevent fraud, and other safeguards would be invalidated, even those that are uncontroversial," the letter warns. "The resulting unfettered abuses of AI or automated decision systems could run the gamut from pocketbook harms to working families like decisions on rental prices, to serious violations of ordinary Americans' civil rights, and even to large-scale threats like aiding in cyber attacks on critical infrastructure or the production of biological weapons."
The coalition also asserted that "Congress' inability to enact comprehensive legislation enshrining AI protections leaves millions of Americans more vulnerable to existing threats," and commended states for "filling the need for substantive policy debate over how to safely advance development of this technology."
In the absence of congressional action, former President Joe Biden also took some steps to protect people from the dangers of AI. However, as CNN pointed out Monday, "shortly after taking office this year, Trump revoked a sweeping Biden-era executive order designed to provide at least some safeguards around artificial intelligence. He also said he would rescind Biden-era restrictions on the export of critical U.S. AI chips earlier this month."
Today, Demand Progress and a coalition of artists, teachers, tech workers and more asked House leaders to reject a measure that would stop states from regulating AI. Read the full story by @claresduffy.bsky.social at @cnn.com
— Demand Progress (@demandprogress.bsky.social) May 19, 2025 at 10:15 AM
The groups asserted that "no person, no matter their politics, wants to live in a world where AI makes life-or-death decisions without accountability... Section 43201(c) is not the only provision in this package that is of concern to our organizations, and there are some provisions on which we will undoubtedly disagree with each other. However, when it comes to this provision, we are united."
"Americans deserve both meaningful federal protections and the ability of their states to lead in advancing safety, fairness, and accountability when AI systems cause harm," concluded the coalition, which includes 350.org, the American Federation of Teachers, Center for Democracy & Technology, Economic Policy Institute, Free Press Action, Friends of the Earth U.S., Greenpeace USA, Groundwork Collaborative, National Nurses United, Public Citizen, Service Employees International Union, and more.
In a Monday statement announcing the letter, Demand Progress corporate power director Emily Peterson-Cassin blasted the provision as "a dangerous giveaway to Big Tech CEOs who have bet everything on a society where unfinished, unaccountable AI is prematurely forced into every aspect of our lives."
"Speaker Johnson and Leader Jeffries must listen to the American people and not just Big Tech campaign donations," she said. "State laws preventing AI from encouraging children to harm themselves, making uninformed decisions about who gets healthcare, and creating nonconsensual deepfakes will all be wiped away unless Congress reverses course."