AI Week in Review - 2/15/26
Public Sector

The Details
Federal
NASA’s Perseverance rover on Mars completed its first drives planned entirely by generative AI, with the system autonomously generating safe route waypoints and successfully navigating Martian terrain.
The U.S. Department of Labor’s Employment and Training Administration released a national AI Literacy Framework outlining core content and delivery principles to guide workforce and education systems in building AI knowledge and skills.
The Pentagon added ChatGPT to its secure GenAI.mil platform—joining other large language models to provide unclassified AI support to millions of Defense personnel while officials develop governance for safe use.
The Trump administration championed widespread AI adoption across federal agencies—nearly 3,000 use cases—prioritizing innovation and deregulation while critics warn that speed and weak oversight risk ethical, civil rights, and safety harms.
The U.S. Department of Energy launched the Genesis Mission Consortium, a public-private partnership using AI to speed scientific discovery, strengthen security, and drive energy innovation by uniting government, labs, industry, and academia.
An OpenAI memo to the U.S. House Select Committee warns China’s state-backed AI efforts risk undercutting U.S. leadership, urging stronger democratic AI investment and safeguards against adversarial model distillation.
USPTO’s Scout generative AI platform is gaining internal adoption as the agency advances cloud modernization to 58 percent completion, using AI to boost productivity, streamline workflows, and support mission delivery.
State / Local
Alabama Governor Kay Ivey established the Technology Quality Assurance Board to guide secure, ethical adoption of emerging tech—including AI—across state agencies and strengthen cybersecurity oversight.
Washington, D.C. became the first major U.S. city to require Responsible AI training for all government employees and contractors, equipping the workforce with practical guidance for safe, ethical AI use.
Pennsylvania National Guard Soldiers and civilian employees took an AI 201 course at Fort Indiantown Gap to sharpen responsible AI use, effective prompting, and critical thinking for military decision-making.
The Norwalk Police Department in Connecticut is trialing AI-powered bodycams that provide real-time translation across 50+ languages to improve communication between officers and non-English speaking residents.
Massachusetts Gov. Maura Healey announced the state will become the first in the U.S. to deploy a ChatGPT-powered AI assistant across its executive branch to support 40,000 staff with secure, governed AI tools.
Pennsylvania’s report, Artificial Intelligence: Advisory Committee Recommendations on the Adoption and Use of AI, synthesizes research on AI risks, benefits, ethics, and sectoral impacts to guide responsible statewide AI policy and governance.
Oregon lawmakers are advancing Senate Bill 1546, a proposal aimed at governing how AI chatbots — especially “companion” tools like ChatGPT — interact with users, with a sharp focus on youth mental health and safety.
Montgomery County Public Schools will pilot an AI-powered weapons detection system using existing cameras at three high schools starting in March to flag visible threats and safety incidents for staff review.
International
China’s military is training AI-controlled weapons inspired by hawks and coyotes to enhance autonomous swarm tactics in drones and robots, intensifying the AI-driven arms race.
The International AI Safety Report 2026 delivers a global, evidence-based assessment of advanced general-purpose AI capabilities, emerging harms like misuse and cyber risk, and evolving safeguards to guide informed policymaking worldwide.
Australia’s Commercial Radio Code of Practice 2026 will require commercial radio stations to clearly disclose when AI-generated or synthetic voices host programs or news, plus new child-focused content safeguards.
The UK government is mobilizing leading British AI experts to modernize public services, strengthen national security, and accelerate responsible AI adoption across government through new partnerships and advisory initiatives.
The Institute for Global Change paper argues that middle powers can build national agency and influence in AI by cultivating broad open-source ecosystems across models, tools and data rather than competing at the frontier.
Canada and Germany signed a joint AI declaration of intent and launched the Sovereign Technology Alliance to deepen cooperation on secure AI infrastructure, research, talent development, commercialization, and sovereign tech capacity.
Singapore will invest over S$1 billion from 2025–2030 under its National AI Research and Development (NAIRD) Plan to deepen AI research, build talent, and strengthen its position as a global AI hub.
The INSS analysis describes underwater (subsea) data centers as ocean-deployed computing hubs cooled naturally by seawater, enhancing energy efficiency, reducing land use, and supporting sovereign digital infrastructure while addressing geopolitical and environmental considerations.
The Dutch Data Protection Authority warns that AI agents such as OpenClaw pose significant security and privacy risks, urging organizations to assess vulnerabilities carefully before deploying autonomous AI systems.
Carnegie argues that South–South AI collaboration can advance practical AI development through shared infrastructure, policy coordination, and capacity building, enabling developing countries to shape global AI governance and innovation pathways.
The UK government is expanding its AI training partnership with industry to provide free AI skills training to 10 million workers by 2030, aiming to boost productivity and digital inclusion nationwide.
Stanford HAI highlights Davos 2026 discussions emphasizing AI governance, safety, and economic transformation, with leaders calling for global cooperation, clearer regulation, and balancing rapid innovation with societal risk management.
Everything Else
Google Threat Intelligence Group reports rising distillation (model extraction) attacks and increasing experimentation with AI across the cyberattack lifecycle, including AI-augmented operations, phishing, and early AI-integrated malware development.
Elon Musk slammed Anthropic’s AI models as “misanthropic and evil,” accusing their Claude systems of racial and demographic bias in a viral social media post following the company’s massive funding round.
A global survey of 3,335 public servants shows AI adoption in government accelerating, but effectiveness varies widely depending on access to tools, training, leadership support, and integration into daily work.
Anthropic CEO Dario Amodei warns advanced AI systems will inevitably fail in unpredictable ways, urging stronger safety measures, oversight, and global coordination before increasingly powerful models outpace society’s ability to manage risks.
Matt Shumer’s widely shared X post argues that recent AI advances signal an accelerating disruption of knowledge work and a potential “something big” shift in how AI shapes the future of labor.
The RAND report argues that national competitive success in the AI era depends less on dominating AI technology itself and more on strengthening societal foundations—empowering citizens, cohesion, and adaptable institutions to harness AI’s benefits.
MIT Sports Lab researchers are using AI to analyze figure skating jumps—measuring rotation speed, height, and landings—to help athletes improve and make Olympic broadcasts more data-driven and understandable.
A study in Nature Medicine finds that large language models (LLMs) excel on medical knowledge benchmarks but perform poorly in real-world public use, with human–LLM interaction failing to improve correct diagnosis or action choices.
Capgemini’s AI Perspectives 2026 report finds organizations shifting from AI hype to sustained, long-term investment—boosting budgets, scaling enterprise-wide deployment, prioritizing governance, skills, and human-AI collaboration for competitive advantage.
This CSET workshop report warns that automating AI R&D could accelerate AI capabilities dramatically—potentially creating strategic surprise—while urging better indicators and transparency to assess and manage escalating risks.
Oliver Wyman Forum outlines four strategies AI leaders use to gain advantage: embedding AI into core strategy, scaling talent and governance, prioritizing high-impact use cases, and building strong data and technology foundations.
The Atlantic Council outlines eight ways AI will shape geopolitics in 2026, from intensifying US-China competition and military transformation to influencing elections, energy systems, global governance, and economic power dynamics.
Brookings examines how non-state actors—including terrorists, criminals, and proxy groups—could exploit advanced AI for cyberattacks, disinformation, and weapons development, urging stronger safeguards and international cooperation to mitigate escalating risks.