AI Week in Review - 11/9/25

Public Sector

The Details

Federal

Idaho National Laboratory is piloting GenAI tools to automate and accelerate the preparation of engineering and safety‑analysis documentation for nuclear reactor licensing and permitting.

At Charles George VA Medical Center, the use of AI during colonoscopies has improved adenoma‑detection rates—each 1% increase in detection lowers a patient’s future cancer risk by about 3%.

The Department of Commerce will establish the American AI Exports Program within 90 days — soliciting industry proposals for “full‑stack” U.S. AI technology export packages and offering federal financing and export support.

The White House’s AI and crypto czar David Sacks announced the U.S. government will not provide a bailout for AI companies, arguing that with at least five major frontier AI firms operating, if one fails others will fill the gap.

State / Local

California has enacted SB 243, the first U.S. law requiring safeguards on AI “companion” chatbots — including limits on interactions with minors, self‑harm protocols, and a private right of action.

Philadelphia’s City Council held its first AI hearing to question the Parker administration’s plans, oversight, and lack of clarity on AI uses—especially for public safety, surveillance, and data governance.

Texas has appointed Tony Sauerhoff as its first Chief AI & Innovation Officer, launching a new AI division in the state Department of Information Resources to oversee testing, ethics, and deployment of AI.

North Dakota’s Legislative Council adopted Meta’s Llama 3.2 (1B Instruct) model—running entirely on‑premises—to auto‑summarize bills, trimming hundreds of staff hours and accelerating legislative workflows.

A camera‑based AI gun‑detection system at Kenwood High School in the Baltimore County district flagged a student holding a Doritos bag as carrying a weapon, prompting armed police response—even though no weapon was found.

In Clark County Schools (Kentucky), school buses now feature reflective tape, bright lighting, four interior cameras, and an AI system that monitors bus health, driver speed, following distance and seatbelt usage.

The Arizona Department of Education reports that over 170,000 students — about 16% of its public‑school population — are currently using AI‑powered tutoring tools.

Maine’s AI Task Force report outlines a strategy to position the state as a responsible AI leader—boosting innovation, workforce readiness and infrastructure while safeguarding privacy, equity and public‑sector integrity.

The AI Readiness Project—launched by The Rockefeller Foundation and Center for Civic Futures—is aiding state governments in moving from strategy to execution by fostering shared experimentation, pilot projects and a collective knowledge hub for AI in public service.

Texas Gov. Greg Abbott has appointed six state officials and two tech executives to the newly formed Public Sector Artificial Intelligence Systems Advisory Board, created under 2025 legislation to guide the state’s adoption of high‑risk AI tools and state‑agency governance.

University of Pennsylvania has entered into a cooperative agreement with the Pennsylvania Office of Administration to advise the state on AI strategy, governance and risk‑assessment by leveraging its faculty and research infrastructure.

The Indiana Secretary of State’s Office overhauled its online notary education system—leveraging generative AI to create course scripts, audio, video and adaptive content for 50,000 licensees—to reduce costs and boost accessibility.

A Northern California prosecutor disclosed that a court filing drafted with AI included made‑up legal precedents, prompting an immediate withdrawal and raising serious questions about generative‑AI use in the legal system.

International

Chinese tech firms are now dominating the open‑source AI space, releasing models more powerful and widely adopted than U.S. counterparts—raising concerns over influence, standards, and the U.S.’s ability to compete.

Microsoft warns that Russia and China are increasingly deploying AI-driven techniques — including deepfakes — to scale and intensify cyberattacks against U.S. targets.

China has released new guidelines directing the deployment of AI in government operations—emphasizing scenario-based adoption, data governance, safety measures, and life‑cycle oversight.

The UK government’s AI tool Consult processed over 50,000 consultation responses in two hours—matching human-level accuracy—and is projected to save 75,000 days of manual work annually.

China’s Ministry of Education published a 2025 guide on using generative AI in K–12 schools, emphasizing safe integration, limits by grade, anti‑cheating measures, and balancing innovation with ethics.

Australia’s Department of Industry, Science and Resources outlines six foundational practices for organizations beginning to adopt AI — covering accountability, impact planning, risk management, transparency, monitoring and human oversight.

South Korean President Lee Jae‑Myung urged a tripling of government spending on AI infrastructure and technology—seeking 10.1 trillion won (~US$6.9 billion) in 2026—to build stronger computing, manufacturing and military‑AI capabilities.

Iceland is partnering with Anthropic to provide all teachers nationwide with access to Claude AI for lesson planning, personalized instruction, and professional development.

China plans to establish a Shanghai‑based global body for AI governance—putting Xi Jinping and Beijing at the heart of setting international AI standards.

India’s Ministry of Electronics and Information Technology unveiled the IndiaAI Mission’s AI Governance Guidelines, outlining seven ethical principles, six governance pillars and an action plan promoting a “do no harm” approach to AI across sectors.

Everything Else

Anthropic and partners show that roughly 250 malicious documents are enough to insert backdoors into language models of vastly different sizes, a near‑constant count that does not scale with model or training‑data size.

OpenAI launches AgentKit, a unified suite of tools to visually build, deploy, evaluate, and optimize AI agents—making agent development faster, safer, and more integrated with chat UIs and versioning.

Citi is mandating AI prompt‑training for ~175,000 employees globally, with a 60‑day window to complete tailored modules (experts ~10 min, beginners ~30 min).

Anthropic outlines how startups can build AI agents using Claude by defining clear goals, integrating external tools, and managing memory, planning and decision‑making capabilities.
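The ingredients Anthropic describes — a clear goal, external tools, working memory, and a plan‑act loop — can be sketched in a few lines. This is a minimal, provider‑agnostic illustration, not Anthropic's actual framework: `fake_llm` is a hypothetical stand‑in for a real model call (e.g., to Claude), and the `calculator` tool is an assumed example.

```python
# Minimal agent-loop sketch: goal -> plan -> tool call -> memory update.
# `fake_llm` is a stand-in for a real model call; in practice the planner
# would be an LLM deciding which tool to invoke next.

def calculator(expression: str) -> str:
    """External tool: evaluate a simple arithmetic expression."""
    return str(eval(expression, {"__builtins__": {}}))  # restricted eval for the sketch

TOOLS = {"calculator": calculator}

def fake_llm(goal: str, memory: list) -> dict:
    """Stand-in planner: picks the next action from the goal and memory so far."""
    if not memory:
        return {"action": "calculator", "input": goal}
    return {"action": "finish", "input": memory[-1]}

def run_agent(goal: str, max_steps: int = 5) -> str:
    memory = []  # running record of tool results (the agent's working memory)
    for _ in range(max_steps):
        decision = fake_llm(goal, memory)
        if decision["action"] == "finish":
            return decision["input"]
        tool = TOOLS[decision["action"]]       # integrate an external tool
        memory.append(tool(decision["input"]))  # store the result in memory
    return memory[-1] if memory else ""

print(run_agent("2 + 3 * 4"))  # → 14
```

Swapping `fake_llm` for a real model call turns this loop into the pattern Anthropic describes: the model plans, the runtime executes tools, and results flow back through memory.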

OpenAI announced that while it will restructure and broaden its partnership with Microsoft, its founding nonprofit will continue to exercise governance control over its commercial arm to preserve its original mission.