AI GUERRILLA /// DAILY INTELLIGENCE BRIEF

The AI Ethics War Just Got Its First Casualty

Sunday, March 8, 2026
🔥 TOP STORIES
ETHICS

OpenAI's Head of Robotics Resigns Over Pentagon Deal

Caitlin Kalinowski, who led OpenAI's robotics and hardware division, resigned on Saturday citing concerns that the company's new Pentagon contract was "rushed without the guardrails defined." She wrote that surveillance of Americans without judicial oversight and lethal autonomy without human authorization "deserved more deliberation." Her departure is the most high-profile fallout from OpenAI's classified military deployment deal, which came after Anthropic was blacklisted by the Pentagon for refusing similar terms.

Read on TechCrunch →
DEFENSE

Pentagon Official Reveals 'Whoa Moment' Over Anthropic Dependency

Emil Michael, the Pentagon's CTO, revealed on the All-In podcast how deeply embedded Anthropic's Claude had become in military operations — and the panic that set in when they realized they could lose access. After Anthropic pushed for strict limits on surveillance and autonomous weapons, the Pentagon designated it a "supply chain risk" and pivoted to OpenAI. Nearly 900 tech workers across Google and OpenAI have now signed open letters calling for clearer military AI limits.

Read on Fortune →
LEGAL

Supreme Court Kills AI Copyright: Humans Only

The U.S. Supreme Court denied certiorari in Thaler v. Perlmutter on March 2, ending a decade-long fight to grant copyright to AI-generated art. The ruling cements that "human authorship" is a bedrock requirement — AI systems cannot be listed as authors. Critically, this doesn't block AI-assisted works where humans maintain creative control, but fully autonomous AI outputs now have no copyright protection in the U.S.

Read on CNBC →
🛠️ TOOL OF THE DAY

Claude Code

Anthropic's command-line agentic coding tool has been getting major attention this week as the "SaaSpocalypse" narrative intensifies. Developers use it to delegate entire coding tasks — writing, debugging, file editing, and building websites — directly from their terminal. Wall Street analysts are citing it as one of the AI tools threatening traditional SaaS business models.

For: Developers, engineers, founders    Price: API usage (pay per token)

View Docs →
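For readers who haven't tried it, a minimal sketch of typical terminal usage — the `claude` binary name and the `-p` (print) flag are assumptions based on common documented usage and may differ by version; the task prompts are hypothetical examples:

```shell
# Requires the Claude Code CLI installed and an Anthropic API key configured.

# One-shot, non-interactive run: print the answer and exit.
claude -p "explain what src/main.py does"

# Start an interactive session seeded with a concrete task;
# the agent can read, edit, and create files in the current repo.
claude "fix the failing tests in tests/test_auth.py"
```

Because billing is per token, scoping prompts to a specific file or test tends to keep costs down versus open-ended "fix everything" requests.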
⚡ QUICK HITS

 Block CEO Jack Dorsey cut 4,000 jobs (40% of staff), calling it an AI-driven "structural change." Stock surged 18%. Salesforce CEO Benioff pushed back, calling it company-specific, not an industry trend.

Fortune →

 U.S. payrolls dropped 92,000 in February — far worse than expected — amid a wave of white-collar cuts at Morgan Stanley, Oracle, and Capital One. The "AI jobs" debate is intensifying.

Benzinga →

 Anthropic published a new framework for measuring AI's impact on labor markets, moving beyond simple job-displacement narratives to track how specific tasks are being augmented or automated.

Radical Data Science →

 The Connecticut Supreme Court was asked to dismiss a case after lawyers admitted their AI-generated legal brief contained completely fabricated citations — quotes that no court has ever written.

HumAI Blog →

 Google Gemini 3.1 Flash-Lite launched with a new "Thinking" mode. Alibaba's Qwen3.5-9B, a small open-source model, is reportedly outperforming OpenAI's much larger gpt-oss-120B on benchmarks.

LLM Stats →
💬 GUERRILLA TAKE

The Pentagon-AI saga is the defining story of 2026 — not benchmarks, not model releases. When a robotics executive walks away from OpenAI on principle, and 900 engineers sign protest letters, the industry is splitting in real time between "build for everyone" and "build for the state." The Supreme Court's copyright ruling adds another fault line: if AI-generated work can't be owned, the economic incentive shifts entirely to human-AI collaboration rather than full automation. For builders, the message is clear — the moat isn't the model anymore. It's trust, transparency, and the human layer on top.

AI GUERRILLA /// MARCH 8, 2026 /// NO FLUFF. NO FILLER. JUST SIGNAL.
