Pixels, Politics and the Press 

By: Inaaya Firoz
Edited by: Alexia Sextou and Jerry Benedict

Artificial intelligence has become the weapon of the future. Much as the United States and the Soviet Union once raced to outpace each other in advanced technology during the Space Race, nations are now competing to weaponize artificial intelligence. In the United States, both Democrats and Republicans strategically use AI to advance their political agendas. As a result, the lines between artificial intelligence and politics have blurred. 

Artificial intelligence has increasingly been used as a political weapon through AI-generated deepfakes, or “synthetic audio-visual media of human faces, bodies, or voices.” [1] Signicat, a leading provider of digital identity solutions, found a fivefold increase in deepfake content in recent years, corresponding with the broader rise of AI. The proliferation of AI-generated political content is difficult to escape: AI can easily create deepfakes, generate text, and fabricate synthetic identities (such as bots), all of which can be disseminated in real time to spread disinformation. In July 2024, for example, Russia’s intelligence services used an AI software tool called “Meliorator” to create social media bot accounts, complete with AI-generated profile pictures, names, and backstories, that posted pro-Kremlin narratives on geopolitical issues. The COVID-19 pandemic saw an especially unprecedented weaponization of artificial intelligence, as AI bots drove the surge of the “Reopen America” campaign; the bots ultimately fueled physical protests, with real citizens joining the movement. In both the “Meliorator” operation and the “Reopen America” campaign, the accounts pushed content into trending feeds, making the movements more visible and manipulating public opinion. Studies show that bots can be 66 times more active than ordinary users, allowing them to overload individuals with information and create “noise rapidly.” [2] Both tech companies and the government can play a role in curbing false information online: a 2025 Pew Research Center survey found that 51% of Americans wanted the government to take steps to restrict false information online, while 60% believed tech companies should do so. [3] 

In response to the rise of artificial intelligence and its political weaponization, 26 states have so far enacted legislation. To address political deepfakes, states have adopted two approaches: prohibition and disclosure. [4] Florida, for example, has required since April 2024, under bill CS/HB 919, that political advertisements and electioneering communications carry disclaimers disclosing whether artificial intelligence was used in their creation. [5] Such disclaimers help slow the spread of political deepfakes: individuals are notified when artificial intelligence was used to create the content they engage with, which curbs false narratives and fosters caution around AI-generated media. Other states take a more direct approach, banning political deepfakes for a set period before elections. Minnesota, for example, under bill SF 3550 as of May 2024, prohibits the dissemination of political deepfakes within 30 days of a primary and 90 days of a general election. [6] Regulating political deepfakes remains difficult because each state takes a different approach to defining artificial intelligence and legislating around it. As of December 2025, there is no federal legislation explicitly targeting political deepfakes. Recently, however, Congress passed the TAKE IT DOWN Act (S. 146), which addresses nonconsensual intimate visual depictions: it prohibits the publication of such depictions, including computer-generated deepfakes, and imposes criminal penalties on violators. 
The act defines digital forgery as “visual depiction of an identifiable individual created through the use of software, machine learning, artificial intelligence, or any other computer-generated or technological means, including by adapting, modifying, manipulating, or altering an authentic visual depiction, which, when viewed as a whole by a reasonable person, is indistinguishable from an authentic visual depiction of the individual.” [7]

The U.S. still lacks a set definition of artificial intelligence, in contrast to the European Union’s explicit framework under the Artificial Intelligence Act, which defines an AI system as “a machine-based system that is designed to operate with varying levels of autonomy and that may exhibit adaptiveness after deployment, and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments.” [8] At the federal level, the U.S. has lessons to learn, as it remains stagnant with no single definition of machine-based systems; state by state, however, legislators have attempted to craft legislation addressing the issues artificial intelligence raises. 

While numerous legislators have introduced bills, such as S. 2770, the Protect Elections from Deceptive AI Act, and H.R. 2794, the NO FAKES Act of 2025, none has yet passed. [9, 10] To date, only Texas, California, Colorado, and Utah have signed AI governance laws. [11] AI governance and its regulation are further complicated by the two parties’ divergent approaches to artificial intelligence. In his second term, President Trump issued Executive Order 14179, “Removing Barriers to American Leadership in Artificial Intelligence,” which “revokes certain existing AI policies and directives that act as barriers to American AI innovation, clearing a path for the United States to act decisively to retain global leadership in artificial intelligence.” [12] The Guaranteeing Access and Innovation for National Artificial Intelligence (GAIN AI) Act was also recently proposed, which would require AI chipmakers to prioritize American institutions over countries of concern. It would do so by amending the Export Control Reform Act to require a license to export advanced AI chip technology, specifically chips intended for data centers, to countries of concern; American institutions would not need a license for AI chip technology under the act. As Senator Elizabeth Warren, one of the bill’s sponsors, put it, “American customers—including small businesses and startups—shouldn’t be forced to wait in line behind China’s tech giants when purchasing the latest AI chips.” [13] In line with Executive Order 14179, the Trump administration released America’s AI Action Plan in July 2025, declaring “Build, Baby, Build!” The slogan signals that the AI industry should continue to expand, but the plan does not directly confront the conflicts associated with AI, such as bias; on that point, it states only that AI systems “must be free from ideological bias and be designed to pursue objective truth rather than social engineering agendas when users seek factual information or analysis.” 

Beyond legislative and executive measures, AI is also being integrated into regulatory initiatives such as “Catch and Revoke,” in which Palantir, a big-tech company, provides the government with AI capabilities to monitor the data of foreign visa holders. This raises a further question: how can AI be used in a way that is bias-free and does not make assumptions? To date, AI has no safeguard against misclassifying information, scoring people without adequate information, or introducing new biases, all concerns raised by the “Catch and Revoke” program’s use of artificial intelligence for monitoring foreign nationals. AI and its language reflect the environment surrounding it, so it can absorb new biases that emerge on social media. [14] This concern extends to profiling, which politically left-leaning states have been more inclined to define in their AI-related statutes. For example, both California and Connecticut (under Public Act No. 22-15) define profiling identically as “any form of automated processing performed on personal data to evaluate, analyze, or predict personal aspects related to an identified or identifiable individual's economic situation, health, personal preferences, interests, reliability, behavior, location, or movements.” [15] The insurance industry has addressed such concerns through the National Association of Insurance Commissioners’ model bulletin, which states, “An Insurer’s conduct in the state, including its use of AI Systems to make or support actions and decisions that impact consumers, is subject to investigation, including market conduct actions.” [16] There has also been action against “AI washing” under Section 5(a) of the FTC Act, as “the FTC has a mandate to pursue enforcement actions to prevent unfair or deceptive business practices.” In 2024 the FTC launched Operation AI Comply, intended to punish companies making false claims about AI that would “supercharge deceptive or unfair conduct that harms consumers.” [17] Congress, however, has yet to pass comparable legislation. 

Artificial intelligence’s role in politics has grown as the technology becomes more prevalent, but the federal government has been slow to respond, as seen in the state-by-state variation in political deepfake legislation, the lack of a standardized AI definition, and uneven oversight of AI in elections, profiling, and consumer protection. States like California, Texas, Colorado, and Utah have actively taken action, while the federal government has largely stood still. The privacy rights originally envisioned by the Founding Fathers are now challenged by algorithmic bias and deepfake impersonation, and politicized AI could be used to manipulate elections, spread misinformation, and entrench biased decision-making. With artificial intelligence blurring the line between what is real and what is not, it is important to be conscious of the media one consumes, to verify sources, and to cross-check information, as political deepfakes and AI-generated news are more common than ever. Finally, federal baseline standards should be created, with a unified definition of AI and regulations for AI-generated political content, to prevent the political weaponization of AI. 

Notes:

  1. Pawelec, Maria. “Deepfakes and Democracy (Theory): How Synthetic Audio-Visual Media for Disinformation and Hate Speech Threaten Core Democratic Functions.” Digital Society, vol. 1, no. 2, Sept. 2022, link.springer.com/article/10.1007/s44206-022-00010-6. Accessed December 16, 2025.

  2. Romanishyn, Alexander, Olena Malytska, and Vitaliy Goncharuk. 2025. “AI-Driven Disinformation: Policy Recommendations for Democratic Resilience.” Frontiers in Artificial Intelligence 8 (July). https://doi.org/10.3389/frai.2025.1569115. Accessed December 16, 2025.

  3. Lipka, Michael, and Christopher St. Aubin. 2025. “Support Dips for U.S. Government, Tech Companies Restricting False or Violent Online Content.” Pew Research Center. April 14, 2025. https://www.pewresearch.org/short-reads/2025/04/14/support-dips-for-us-government-tech-companies-restricting-false-or-violent-online-content/. Accessed December 16, 2025.

  4. NCSL. “Artificial Intelligence (AI) in Elections and Campaigns.” National Conference of State Legislatures. 24 Oct. 2024. www.ncsl.org/elections-and-campaigns/artificial-intelligence-ai-in-elections-and-campaigns. Accessed November 21, 2025.

  5. Beller, Ilana. “Tracker: State Legislation on Deepfakes in Elections.” Public Citizen, 20 October 2025, www.citizen.org/article/tracker-legislation-on-deepfakes-in-elections/. Accessed December 16, 2025.

  6. Ibid.

  7. Elvira, María. “S.146 - 119th Congress (2025-2026): TAKE IT DOWN Act.” Congress, 2025, www.congress.gov/bill/119th-congress/senate-bill/146. Accessed November 21, 2025.

  8. European Union. “EU Artificial Intelligence Act | Article 3: Definitions” European Union, artificialintelligenceact.eu/article/3/. Accessed November 21, 2025.

  9. Protect Elections from Deceptive AI Act, S.2770, 118 Cong. (2023). Congress, 2023, www.congress.gov/bill/118th-congress/senate-bill/2770. Accessed November 21, 2025.

  10. Elvira, María. “NO FAKES Act of 2025, H.R. 2794, 119th Congress.” Congress, 2025, www.congress.gov/bill/119th-congress/house-bill/2794. Accessed November 21, 2025.

  11. Botero, David and Cobun Zweifel-Keegan. “US State AI Governance Legislation Tracker.” International Association of Privacy Professionals, 2025, https://iapp.org/resources/article/us-state-ai-governance-legislation-tracker. Accessed November 21, 2025.

  12. The White House. “Removing Barriers to American Leadership in Artificial Intelligence.” The White House, 23 Jan. 2025, www.whitehouse.gov/presidential-actions/2025/01/removing-barriers-to-american-leadership-in-artificial-intelligence/. Accessed November 21, 2025.

  13. Smith, Kristen. “GAIN AI Act Would Prioritize US Access to AI Chips.” Executive Gov, 13 Nov. 2025, www.executivegov.com/articles/senate-bill-us-ai-chip-access. Accessed November 21, 2025.

  14. Stewart, Josie, and Nicol Turner Lee. “How Tech Powers Immigration Enforcement.” Brookings, 6 Oct. 2025, www.brookings.edu/articles/how-tech-powers-immigration-enforcement/. Accessed November 21, 2025.

  15. Cohen, Joel M, Maria Beguiristain, Marietou Diouf, and Robert J DeNault. “AI Watch: Global Regulatory Tracker - United States | White & Case LLP.” White & Case LLP, 13 May 2024, www.whitecase.com/insight-our-thinking/ai-watch-global-regulatory-tracker-united-states. Accessed November 21, 2025.

  16. National Association of Insurance Commissioners. 2025. “Implementation of NAIC Model Bulletin: Use of Artificial Intelligence Systems by Insurers.” https://content.naic.org/sites/default/files/cmte-h-big-data-artificial-intelligence-wg-ai-model-bulletin.pdf.pdf Accessed November 21, 2025.

  17. Cohen, Joel M, Maria Beguiristain, Marietou Diouf, and Robert J DeNault. 2025. “America's Investigations Review 2026: US Enforcement Agencies Intensify Scrutiny of AI Washing.” https://www.whitecase.com/sites/default/files/2025-08/gir-americas-investigations-review-2026-edition.pdf. Accessed November 21, 2025.



Bibliography:

Beller, Ilana. 2023. “Tracker: State Legislation on Deepfakes in Elections.” Public Citizen. November 20, 2023. https://www.citizen.org/article/tracker-legislation-on-deepfakes-in-elections/.

Botero, David, and Cobun Zweifel-Keegan. 2025. “US State AI Governance Legislation Tracker.” International Association of Privacy Professionals. October 6, 2025. https://iapp.org/resources/article/us-state-ai-governance-legislation-tracker.

Cohen, Joel M, Maria Beguiristain, Marietou Diouf, and Robert J DeNault. n.d. “Americas Investigations Review 2026: US Enforcement Agencies Intensify Scrutiny of AI Washing.” Accessed November 21, 2025. https://www.whitecase.com/sites/default/files/2025-08/gir-americas-investigations-review-2026-edition.pdf.

Cohen, Joel M, Maria Beguiristain, Marietou Diouf, and Robert J DeNault. 2024. “AI Watch: Global Regulatory Tracker - United States | White & Case LLP.” Www.whitecase.com. May 13, 2024. https://www.whitecase.com/insight-our-thinking/ai-watch-global-regulatory-tracker-united-states.

Elvira, María. 2025. “NO FAKES Act of 2025, H.R. 2794, 119th Congress.” Congress.gov. 2025. https://www.congress.gov/bill/119th-congress/house-bill/2794/text.

European Union. “Article 3: Definitions | EU Artificial Intelligence Act.” n.d. EU Artificial Intelligence Act. Accessed November 21, 2025. https://artificialintelligenceact.eu/article/3/.

Lipka, Michael, and Christopher St. Aubin. 2025. “Support Dips for U.S. Government, Tech Companies Restricting False or Violent Online Content.” Pew Research Center. April 14, 2025. https://www.pewresearch.org/short-reads/2025/04/14/support-dips-for-us-government-tech-companies-restricting-false-or-violent-online-content/.

National Association of Insurance Commissioners. “Implementation of NAIC Model Bulletin: Use of Artificial Intelligence Systems by Insurers.” Accessed November 21, 2025. https://content.naic.org/sites/default/files/cmte-h-big-data-artificial-intelligence-wg-ai-model-bulletin.pdf.pdf.

NCSL. 2024. “Artificial Intelligence (AI) in Elections and Campaigns.” National Conference of State Legislatures. October 24, 2024. https://www.ncsl.org/elections-and-campaigns/artificial-intelligence-ai-in-elections-and-campaigns.

Pawelec, Maria. 2022. “Deepfakes and Democracy (Theory): How Synthetic Audio-Visual Media for Disinformation and Hate Speech Threaten Core Democratic Functions.” Digital Society 1 (2). https://doi.org/10.1007/s44206-022-00010-6.

Smith, Kristen. “GAIN AI Act Would Prioritize US Access to AI Chips.” 2025. Executive Gov. November 13, 2025. https://www.executivegov.com/articles/senate-bill-us-ai-chip-access.

“S.146 - 119th Congress (2025-2026): TAKE IT DOWN Act.” Congress. 2025. https://www.congress.gov/bill/119th-congress/senate-bill/146.

“S.2770 - 118th Congress (2023-2024): Protect Elections from Deceptive AI Act.” Congress. 2023. https://www.congress.gov/bill/118th-congress/senate-bill/2770.

Stewart, Josie, and Nicol Turner Lee. 2025. “How Tech Powers Immigration Enforcement.” Brookings. October 6, 2025. https://www.brookings.edu/articles/how-tech-powers-immigration-enforcement/.

The White House. 2025. “Removing Barriers to American Leadership in Artificial Intelligence.” The White House. January 23, 2025. https://www.whitehouse.gov/presidential-actions/2025/01/removing-barriers-to-american-leadership-in-artificial-intelligence/.
