AI Week in Review - 10/11/25
Public Sector

The Details
Federal
GSA is offering federal agencies access to xAI’s Grok models for just $0.42 per agency under an 18‑month agreement, aiming to drive widespread AI adoption across government.
GSA has added Meta’s open‑source Llama models to its OneGov initiative, enabling streamlined, government-wide access without individual agency procurement deals.
NOAA and partners commissioned a concept study on AI‑driven Earth Observation Digital Twins to better integrate diverse environmental data, guide standards, and inform future architecture decisions.
The FY 2027 R&D guidance directs agencies to focus on AI, quantum, energy, security, health, and space, while instituting “Gold Standard Science,” workforce development, infrastructure access, cross‑sector collaboration, and high‑impact research.
The U.S. Department of State’s Enterprise Data & AI Strategy outlines a plan to modernize data infrastructure, standardize AI governance, and embed trusted AI across diplomatic operations to enhance decision‑making.
NIST’s CAISI evaluation found that DeepSeek’s AI models (from China) lag U.S. counterparts in performance, cost, and security, and are more vulnerable to hijacking, jailbreaks, and ideological bias.
President Trump signed an executive order to harness AI in pediatric cancer research, tasking the MAHA Commission with prioritizing the effort, strengthening the Childhood Cancer Data Initiative, and boosting federal-private collaboration.
The USPTO is launching an AI pilot called ASAP! to generate a “top‑10 prior art” list for applicants before formal review, enabling early claim adjustments and improving examination efficiency.
Sen. Sanders’ report warns that AI and automation could displace up to 97 million U.S. jobs over a decade, and calls for policies like profit‑sharing, robot taxes, stronger unions, and governance reform to protect workers.
State / Local
Massachusetts will pilot a semester‑long AI course—no coding background needed—in 30 school districts, reaching ~1,600 students and training 45 teachers in collaboration with PLTW.
Gov. Evers signed a bipartisan bill (2025 Act 34) expanding criminal bans on nonconsensual AI‑generated intimate imagery, defining “synthetic intimate representation,” and prohibiting reproduction/distribution with intent to harass.
Gov. Newsom signed SB 53 (Transparency in Frontier Artificial Intelligence Act), which obliges large AI firms to publicly disclose safety frameworks, report critical incidents, and protect whistleblowers, and establishes a public "CalCompute" cloud infrastructure.
New York State is launching a pilot program to train 1,000 state employees on responsible AI use, pairing coursework via InnovateUS with a secure generative tool powered by Google Gemini.
New York State courts released a groundbreaking AI policy that enforces ethical guardrails, restricts generative AI use, and mandates training to ensure fairness and human oversight across judicial operations.
International
At the U.N. Security Council, U.S. remarks urged stronger international norms and multilateral cooperation to govern AI’s impact on peace and security.
Hong Kong plans to deploy tens of thousands of AI facial-recognition surveillance cameras (up to 60,000 by 2028), with real-time recognition possibly beginning later this year.
Deloitte Australia will issue a partial refund after a government-commissioned report was found to contain fabricated quotes and references; the firm later admitted it had used Azure OpenAI tools in drafting it.
Both Ukraine and Russia are deploying AI-powered systems to gain battlefield advantage, but the shift toward autonomous decision‑making introduces serious ethical and tactical risks.
Germany’s new “Modernization Agenda” elevates AI and digitization to central roles, promising administrative streamlining, public service automation, and investments across high‑tech sectors.
Everything Else
OpenAI’s new “GDPval” evaluation tests AI models on 1,320 real‑work tasks across 44 occupations—measuring how well they perform economically valuable, real‑world labor.
Bain’s 2025 report argues that deploying “agentic AI” (autonomous, reasoning agents) demands rethinking enterprise architecture, governance, data pipelines, and interoperability as firms scale deployment.
Anthropic's new research shows that its AI model Claude Sonnet 4.5 matches or surpasses prior versions at identifying and patching software vulnerabilities, positioning it as a practical tool for cyber defense.
Analysts warn AI hype may be spilling into a bubble: Big Tech firms like Meta and Oracle are hiding debt in special vehicles and borrowing heavily to fund costly AI buildouts.
OpenAI's new video-generation social app Sora 2 has rapidly climbed to the top of the U.S. App Store, sparking deep concerns about copyright, deepfakes, and the erosion of trust in video content.
The State of AI 2025 report highlights a shift to reasoning-capable models, rapid commercialization, multi‑gigawatt compute expansion, global competition in frontier AI, and a recalibration toward alignment through transparency.