
AI Is Being Abused And Nobody Is Talking About It
The word 'abuse' is important here. It implies deliberate misuse not accidents, not unintended consequences, but people and organizations using AI in ways they know or should know cause harm, because doing so serves their interests. The abuse of AI is not concentrated in criminal networks, though that is real. It is happening in boardrooms, HR departments, advertising platforms, and political communications operations. It is happening at scale, with legal cover, in companies whose names you know. And the absence of regulatory infrastructure means that most of it is currently operating without meaningful accountability.
AI used to run cyberattacks. AI used to manufacture consent for financial fraud. AI used to automate discrimination at scale. AI used to replace workers and call it transformation. The abuse is systematic.
AI-Washing: Lying With Technology
The most pervasive form of AI abuse is the least discussed: using AI as a false explanation for decisions made for other reasons. Research firm Challenger, Gray & Christmas found that AI was cited in layoff announcements covering 55,000 jobs in 2025 out of 1.2 million total U.S. job cuts. A survey of 59% of hiring managers found they admitted to using AI as cover for cuts driven by overhiring, cost pressure, and organizational dysfunction.Jack Dorsey's Block provides the clearest documented case. In March 2025, he cut 931 people and explicitly stated in the internal memo: 'None of the above points are trying to hit a specific financial target, replacing folks with AI, or changing our headcount cap.' Eleven months later, he cut 4,000 people and attributed it entirely to AI capability advances. His own public record contradicts his framing. The abuse here is not of AI technology. It is the abuse of AI as a narrative a way to make a business decision sound like a natural force rather than a choice, and to avoid the accountability that comes with admitting the choice.
AI Used to Automate Discrimination
When Workday's hiring AI rejected a Black applicant over 40 more than 100 times, some of those rejections in under an hour it was not malfunctioning. It was doing exactly what it was trained to do. The abuse was in deploying a system trained on historically biased data without auditing for discriminatory outcomes, without informing applicants that AI was making the decision, and without building in any human review step. SafeRent's AI tenant scoring tool did the same thing in housing: trained on data that systematically disadvantaged Section 8 voucher holders, deployed at scale, no audit, no oversight. The $2.2 million settlement that followed did not compensate the individual renters who were denied housing. It compensated a legal process.The EU AI Act classifies AI systems used for employment screening, access to essential services, and credit scoring as high-risk requiring mandatory bias audits, transparency disclosures, and human oversight. The Trump administration's AI Action Plan in 2025 moved in the opposite direction, rescinding the Biden-era Executive Order on AI risk guidelines and explicitly prioritizing removing regulatory friction from AI development. The U.S. now has no federal framework requiring bias audits for AI systems making decisions that affect people's livelihoods.
AI Used as Cyberattack Infrastructure
Anthropic disclosed in 2025 that threat actors had used Claude Code as an orchestrator for a cyber-espionage campaign automating reconnaissance, scripting attacks, and chaining tools together. This is not a theoretical risk. It is a documented deployment of AI built with safety guardrails being used as infrastructure for a nation-state level cyberattack. OWASP's 2025 Top 10 for Agentic AI Applications documented that agentic AI systems those with access to real-world tools, file systems, and communication platforms represent a fundamentally new attack surface that most organizations' security frameworks were not designed to address.The abuse dynamic here is straightforward: AI dramatically lowers the expertise required to execute sophisticated cyberattacks. A threat actor who previously needed deep technical knowledge to chain together reconnaissance, exploitation, and lateral movement can now orchestrate those steps through an AI agent. The asymmetry between offense and defense has shifted.
AI Used to Manufacture Financial Fraud at Scale
Deepfake videos of Canadian Prime Minister Mark Carney promoting trading platforms circulated in 2025 and caused direct financial harm particularly to elderly viewers who trusted the simulated news broadcast format. This is not spam. It is AI-generated impersonation of a head of government, designed to steal savings from people who reasonably trusted what appeared to be a credible news source. The technology to produce these videos at the quality required to deceive a non-expert viewer is now available to anyone with a consumer internet connection. The legal infrastructure to prosecute it, compensate victims, or prevent it does not exist in any comprehensive form in the United States.
AI Used Against Children: The Meta Case
Meta AI's policies in 2025 permitted the chatbot to have romantic conversations with children. This was not a loophole discovered by edge-case testers. It was a documented policy that had to be specifically identified and reported before Meta addressed it. AI-embedded children's toys were found to be sending data to OpenAI and Perplexity AI without parental knowledge or consent. California's Attorney General issued formal warnings to 12 AI companies. The underlying problem: AI products are being built and launched with adult-use assumptions, deployed in environments where children will use them, and the safety review that should have happened before launch is triggered only after harm is documented.
The Accountability Gap
| Abuse Type | Current Legal Framework | Documented Cases | Gap |
|---|---|---|---|
| AI-washing (false layoff narratives) | None | Block, Salesforce, Amazon, dozens more | No disclosure requirement; no liability for false AI attribution |
| Algorithmic hiring discrimination | Civil rights law (existing) | Workday class action, SafeRent settlement | No proactive audit requirement; enforcement is reactive |
| AI cyberattack orchestration | Cybercrime law (existing) | Anthropic Claude Code espionage case | No AI-specific liability for tool providers |
| Deepfake financial fraud | Fraud law (existing) | Mark Carney impersonation, multiple others | No deepfake-specific law in U.S. federal code |
| Children's data collected by AI | COPPA (partial) | Meta chatbot, AI toys | COPPA not designed for conversational AI; enforcement lags |
| Workplace AI surveillance | Varies by state | Body language scoring, keystroke monitoring | No federal standard; most states have no relevant law |
What Accountability Requires
The pattern across every category of AI abuse is the same: the technology moves faster than the accountability infrastructure, harm is documented after the fact, legal remedies are inadequate or nonexistent, and the cost is borne by individuals who had no meaningful ability to consent to the risk. That is not inevitable. It is a policy choice specifically, the choice to treat AI development as a private innovation matter rather than a public accountability matter.The EU's approach classifying high-risk AI applications, requiring mandatory audits, transparency disclosures, and human oversight before deployment is the most comprehensive regulatory framework currently in operation. Its enforcement is still developing. But the underlying principle is correct: when an AI system makes decisions that determine whether a person gets a job, a loan, housing, or their savings, that system should be required to demonstrate fairness before it is deployed, not after it has harmed people at scale. The absence of that requirement in U.S. federal law is not a gap. It is a decision.