
The Dark Side of AI Nobody Wants to Talk About
The mainstream AI conversation is dominated by capability milestones and productivity promises. What gets less airtime is the documented harm: hiring algorithms that systematically reject Black and older applicants at 1:50 AM without any human review; AI deepfakes of political leaders used to steal savings from elderly people; workplace surveillance systems that track workers' movements, assess their body language, and dish out discipline without human judgment; AI systems trained on copyrighted content without consent or compensation. These are not hypothetical risks. They are documented cases from 2024 and 2025, with more arriving weekly.
Biased hiring algorithms. Workplace surveillance at scale. Deepfakes of public figures stealing money from seniors. AI used as a cyberattack orchestrator. The harms are documented and growing.
Algorithmic Discrimination: The Workday Case
In May 2025, a U.S. federal court allowed a nationwide class action to proceed against Workday, alleging that its AI-powered resume screening system discriminated against applicants over 40, and against Black and disabled applicants. The lead plaintiff applied to over 100 jobs through companies using Workday's AI screening and was automatically rejected every time. One rejection arrived at 1:50 AM, less than an hour after he applied. The speed made it physically impossible for a human to have reviewed the application.
The legal theory: Workday's AI 'baked in' bias by training on historical hiring data that reflected discriminatory human decisions. The AI did not invent new discrimination. It automated and scaled the discrimination that already existed in the data. SafeRent, an AI tenant scoring company, settled a $2.2 million lawsuit in November 2024 after its tool was found to unfairly weight credit history against Section 8 voucher holders, disproportionately harming protected classes. These are two documented cases. They are not isolated.
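To make the audit side of this concrete: the standard first-pass check for this kind of harm is a selection-rate comparison across groups, often measured against the 'four-fifths rule' used as a screening heuristic in U.S. EEOC guidance. The sketch below is illustrative only, with invented numbers and group labels; it is not an analysis of Workday's system.

```python
from collections import Counter

def selection_rates(records):
    """Compute per-group selection rates from (group, selected) pairs."""
    totals, selected = Counter(), Counter()
    for group, was_selected in records:
        totals[group] += 1
        if was_selected:
            selected[group] += 1
    return {g: selected[g] / totals[g] for g in totals}

def adverse_impact(records, threshold=0.8):
    """Flag groups whose selection rate falls below `threshold` times the
    highest group's rate (the 'four-fifths rule' heuristic: a red flag
    prompting further review, not proof of bias on its own)."""
    rates = selection_rates(records)
    best = max(rates.values())
    return {g: {"rate": round(r, 3), "ratio": round(r / best, 3),
                "flag": r / best < threshold}
            for g, r in rates.items()}

# Invented numbers purely for illustration: screening outcomes by age band.
history = [("under_40", True)] * 320 + [("under_40", False)] * 680 \
        + [("over_40", True)] * 110 + [("over_40", False)] * 890

print(adverse_impact(history))
# over_40 ratio is about 0.34 and gets flagged: the screen passes older
# applicants at roughly a third of the rate of younger ones.
```

A ratio that far below 0.8 does not prove discrimination by itself, but it is exactly the kind of disparity a mandatory bias audit would force a vendor to explain.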
Workplace Surveillance: The Invisible Manager
Employers are deploying increasingly sophisticated AI surveillance tools: body language analysis in video interviews, vocal stress assessment during calls, continuous keystroke and activity monitoring, location tracking, and behavioral scoring systems that generate automatic performance ratings and discipline without human review. One documented case involved a makeup artist in the UK who was scored poorly by an AI body language tool during a video interview, despite a strong performance on the skills evaluation, and lost her job as a result.
The AFL-CIO has made AI workplace surveillance a central bargaining demand, describing it as 'monitoring workers without consent' and 'firing workers by app because a machine told you to.' Steel workers' unions in 2025 specifically targeted AI tools that track workers' movements and automate disciplinary decisions. The concern is not theoretical. Amazon's warehouse workers have lived under AI-driven productivity monitoring for years: systems that set quotas, track every action, and generate termination recommendations without a manager's direct involvement.
Deepfakes as Financial Crime
In 2025, deepfake videos of Canadian Prime Minister Mark Carney circulated on social media, appearing to show him endorsing trading platforms. The AI-generated audio and video mimicked news broadcast formats, and viewers, particularly elderly people, lost savings to the fraudulent endorsements. ISACA's 2025 incident review noted that deepfake impersonation of public figures has become 'routine' and called on organizations to develop rapid takedown playbooks and to train the public to verify through secondary channels.
The scale of the deepfake problem is expanding faster than the detection and legal infrastructure designed to address it. At the time of publication, there is no U.S. federal law specifically governing AI-generated impersonation. The EU AI Act categorizes certain deepfake applications as high-risk, but enforcement lags deployment. Victims of financial deepfake scams have no clear legal remedy today.
AI as Cyberattack Infrastructure
Anthropic disclosed in 2025 that threat actors had used its Claude Code model as an 'orchestrator' for cyber-espionage operations: automating reconnaissance, scripting attack sequences, and chaining tools together in ways that would have required significant human expertise without AI assistance. It is a documented case of an AI model built with safety measures being used as core infrastructure for a cyberattack. The attack did not succeed in penetrating its targets, but it demonstrated the capability.
OWASP published its first dedicated Top 10 for Agentic AI Applications in late 2025, a separate risk framework from its LLM Top 10, specifically because AI agents operating with real-world tool access create a new category of attack surface. An AI agent with access to file systems, email, calendars, and communication platforms is not just a chatbot. It is an automated actor with access to sensitive organizational assets. The security implications are not yet widely internalized by the enterprises deploying these tools.
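To make that attack surface concrete, here is a minimal, hypothetical sketch of two baseline controls for agent tool use: an allowlist of permitted tools and an append-only audit log of every call attempt. The tool names and log format are invented for illustration and are not drawn from OWASP's framework or any vendor's product.

```python
import json
import time
from typing import Any, Callable

# Hypothetical tool names: only these may be invoked by the agent.
ALLOWED_TOOLS = {"read_calendar", "send_email"}

def audited_tool_call(audit_path: str, tool_name: str,
                      tool_fn: Callable[..., Any], **kwargs) -> Any:
    """Run an agent tool call only if it is allowlisted, and record every
    attempt (allowed or denied) to an append-only JSONL audit log."""
    allowed = tool_name in ALLOWED_TOOLS
    entry = {
        "ts": time.time(),
        "tool": tool_name,
        "args": {k: str(v)[:200] for k, v in kwargs.items()},  # truncated for the log
        "allowed": allowed,
    }
    with open(audit_path, "a") as log:
        log.write(json.dumps(entry) + "\n")
    if not allowed:
        raise PermissionError(f"Tool '{tool_name}' is not allowlisted for this agent")
    return tool_fn(**kwargs)

# Usage: a denied call is logged and refused rather than silently executed.
try:
    audited_tool_call("agent_audit.jsonl", "delete_files",
                      tool_fn=lambda path: None, path="/etc")
except PermissionError as err:
    print(err)
```

The point is not the specific code but the posture: an agent's tool calls are treated as privileged actions to be authorized and recorded, not as ordinary function calls.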
Privacy: Data You Didn't Know Was Public
In 2025, it was discovered that Grok, the xAI chatbot, was making user conversations searchable on Google, Bing, and DuckDuckGo without warning. Google was estimated to have indexed over 370,000 Grok conversations. Users who believed they were having private interactions with an AI found their conversations publicly accessible via search. This is not a niche problem. It is a structural consequence of AI applications being built with consumer-grade privacy defaults on systems handling sensitive personal communications.
Clearview AI, the facial recognition company that built its database by scraping billions of photos from the public internet without consent, has been fined and banned across Europe, with the Netherlands imposing a €2.95 million fine plus daily penalties. The legal principle established: harvesting biometric data without consent is illegal under European law regardless of whether the source images were technically 'public.' That principle has not been codified in U.S. federal law.
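The mechanism behind the Grok exposure is mundane: shared-conversation links are ordinary public web pages, and unless the server explicitly tells crawlers not to index them, any link a crawler discovers can become a search result. Below is a minimal sketch of the missing default; the route and framework are hypothetical and not a description of how xAI's service is built.

```python
from flask import Flask, Response

app = Flask(__name__)

@app.route("/share/<conversation_id>")
def shared_conversation(conversation_id: str) -> Response:
    # Hypothetical share page; the conversation lookup is omitted.
    html = f"<html><body>Shared conversation {conversation_id}</body></html>"
    resp = Response(html, mimetype="text/html")
    # The missing default: tell crawlers not to index or archive the page.
    # Without a signal like this (or an equivalent robots.txt / meta tag),
    # any share link that a crawler discovers becomes a search result.
    resp.headers["X-Robots-Tag"] = "noindex, noarchive"
    return resp

if __name__ == "__main__":
    app.run()
```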
Children: A Category Requiring Separate Treatment
In 2025, Meta AI was found to have policies permitting the chatbot to engage in romantic conversations with minors. AI-embedded children's toys were found to be sending collected data to third-party companies, including OpenAI and Perplexity AI, without parental knowledge or consent. California Attorney General Rob Bonta warned 12 leading AI companies that harms to children would be pursued to 'the fullest extent of the law' in California. These are not edge cases in the deployment of AI. They are consequences of products launched without adequate consideration of who would use them and how.
What Accountability Actually Looks Like
- Mandatory bias audits for AI systems used in high-stakes decisions: hiring, lending, housing, healthcare, law enforcement
- Human review requirements for algorithmic termination, discipline, and rejection decisions, with the human reviewer's identity logged and auditable (see the sketch after this list)
- Explicit consent requirements before AI systems collect behavioral, biometric, or conversational data from workers, customers, or users
- Legal liability for deepfake impersonation used for financial fraud, with specific remedies for victims, not just fines for platforms
- Children's data treated as a protected category in all AI applications, not an afterthought addressed after harm is documented
- Cybersecurity requirements for AI systems with agentic capabilities, including access controls, audit logs, and anomaly detection as baseline, not optional
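As a sketch of what 'logged and auditable' human review could mean in practice: every adverse algorithmic decision carries a record of who reviewed it, when, and what they decided, appended to a log where after-the-fact edits are detectable. The record fields and log format below are hypothetical, not a reference to any existing regulation or product.

```python
import hashlib
import json
import time
from dataclasses import dataclass, asdict

@dataclass
class ReviewedDecision:
    """One adverse algorithmic decision and the human review attached to it."""
    subject_id: str        # applicant or employee the decision affects
    decision: str          # e.g. "reject", "terminate", "discipline"
    model_version: str     # which system produced the recommendation
    reviewer_id: str       # the human who approved or overrode it
    reviewer_action: str   # "approved" or "overridden"
    reviewed_at: float

def append_decision(log_path: str, record: ReviewedDecision, prev_hash: str) -> str:
    """Append the record to a JSONL log, chaining each entry to the hash of
    the previous one so silent rewrites show up in an audit."""
    body = asdict(record) | {"prev_hash": prev_hash}
    line = json.dumps(body, sort_keys=True)
    entry_hash = hashlib.sha256(line.encode()).hexdigest()
    with open(log_path, "a") as log:
        log.write(json.dumps({"entry": body, "hash": entry_hash}) + "\n")
    return entry_hash

# Usage: the rejection only goes out once a named reviewer has signed off.
h = append_decision("decisions.jsonl", ReviewedDecision(
    subject_id="applicant-1042", decision="reject", model_version="screen-v3",
    reviewer_id="hr-jdoe", reviewer_action="approved", reviewed_at=time.time(),
), prev_hash="genesis")
```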