AI Week in Review - 7/12/25

Public Sector

The Details

Federal

The U.S. Interior and Energy secretaries signed an AI-and-energy MOU with Israel, aiming to bolster grid optimization, cybersecurity, and research collaboration as part of America’s “energy dominance” agenda.

An AI-generated voice impersonating U.S. Secretary of State Marco Rubio has phoned and messaged at least five senior officials, triggering federal investigations and security warnings.

Anthropic will roll out Claude for Enterprise to all 10,000 employees at Lawrence Livermore National Laboratory, marking one of the Energy Department’s largest generative-AI deployments pending FedRAMP High accreditation.

EPA CIO Carter Farmer warns agencies that “shiny-object” AI projects without clear use cases or process redesign waste money, urging rigorous data and workflow vetting before deployment.

FDA’s agency-wide GenAI rollout succeeded by first unifying data, governance and security so AI could augment its experts, underscoring that robust infrastructure, ROI focus and culture are prerequisites for effective government AI.

NOAA’s National Hurricane Center and Google DeepMind signed a CRADA to provide real-time AI tropical cyclone forecasts, letting forecasters evaluate, integrate and improve machine-learning models for hurricane track and intensity.

NASA upgraded its GCMD Keyword Recommender, using the INDUS large-language model to auto-generate precise metadata for 3,200+ keywords, speeding discovery of Earth-science datasets across 43,000 records.

Interior’s OIG finds DOI’s AI program ballooned to 180 use cases but now faces funding, workforce and governance hurdles after federal policy shifts.

State / Local

AI can dramatically boost productivity, innovation and collaboration in public-sector software engineering, yet obstacles around security, compliance, legacy code and skills still slow adoption.

Cal Fire’s new AI chatbot, meant to provide wildfire information, gives outdated data and inconsistent answers—especially on evacuation guidance—highlighting gaps in California agencies’ rush to deploy generative AI.

Idaho will issue 2-year AI guidance enabling agencies to adopt tools like ChatGPT, yet early HR and DMV chatbot pilots reveal accuracy gaps, highlighting the need for cautious, transparent rollout.

Code for America’s 2025 Government AI Landscape Assessment rates U.S. states’ readiness—most remain “Developing”—while spotlighting leaders such as Utah, North Carolina and Colorado advancing governance, skills and infrastructure.

With Congress inactive, California, New York and Michigan advance bills demanding AI-developer transparency, incident reporting and whistleblower safeguards, letting state laws shape national standards—or spawn compliance patchworks.

With Congress shelving a federal AI pre-emption, California, Colorado and other states are rolling out strict rules for AI-driven hiring, creating a widening compliance patchwork for employers.

After surpassing a 25% regulatory cut, Virginia Gov. Glenn Youngkin is piloting the nation’s first “agentic” generative-AI tool to spot redundancies and drive toward a 35% reduction.

Code for America’s 2025 assessment ranks Pennsylvania alongside Utah and New Jersey as “advanced” in AI readiness, citing Shapiro’s governance board, staff training partnerships and strong technical infrastructure.

AI-generated phishing, deepfakes and automated intrusions are escalating, so agencies must pair classic cyber hygiene—patching, zero trust, MFA and staff training—with AI-powered detection to stay ahead.

San Francisco’s 2025 guidelines require city staff to use only vetted gen-AI tools, rigorously review outputs, disclose public uses, protect sensitive data, and ban decision-making or deepfakes without human oversight.

Centerville’s AI pilot mounts cameras on recycling trucks to detect cart contamination in real time and mail residents item-specific tips, hoping to cut processing costs and improve recycling quality.

Amarillo’s Animal Management & Welfare now uses Petco Love Lost’s free AI-powered facial-recognition database to match photos of lost pets with shelter and community reports, speeding reunions and easing shelter crowding.

A new Pennsylvania law makes AI-generated deepfake voices or images used to defraud or harm a third-degree felony, bolstering Shapiro’s wider efforts to shield residents—especially seniors—from AI-powered scams.

International

École Polytechnique Fédérale de Lausanne, the Swiss Federal Institute of Technology Zurich and the Swiss National Supercomputing Centre will release an open-source multilingual model trained on the Alps supercomputer to advance research.

U.S. AI innovation alone won’t beat China; Washington must also accelerate economic and military adoption, infrastructure, standards and safety to preserve long-term strategic leadership.

The EU’s voluntary General-Purpose AI Code of Practice, published 10 July 2025, guides AI model providers on transparency, copyright compliance, and systemic-risk safety to meet forthcoming AI Act obligations.

The UK’s Meta-funded, 12-month Open-Source AI Fellowship will embed elite engineers in government to build open-source tools that cut costs, boost productivity, and enhance national security.

New Zealand’s 2025 AI strategy prioritizes accelerating private-sector adoption, offering responsible guidance, tackling adoption barriers and boosting innovation through global partnerships and the nation’s science, innovation and technology strengths.

South Korea's Land Ministry will fund AI-powered city data hub pilots in Ulsan, Jeju and Chungbuk to manage vacant houses, parking safety and population decline, offering up to ₩1 billion each.

France’s digital directorate quietly launched DiploIA, an in-house, 100-language translation-and-transcription AI running on sovereign servers to accelerate work for 13,000 diplomats while safeguarding sensitive data.

Japan’s annual communications white paper urges greater generative-AI uptake after revealing only 26.7% of citizens have ever used it—well below China’s 81% and the U.S.’s 69%.

Everything Else

Spotify’s viral ’60s-style act the Velvet Sundown admitted its music and personas are AI-generated, igniting debate over authorship, authenticity and playlist labeling on streaming platforms.

As AI threatens to automate entry-level white-collar roles, businesses, universities and policymakers must urgently redesign pathways for young workers to gain real-world skills critical for higher-level careers.

AI insiders say progress is racing ahead of public awareness, set to upend jobs, education and daily life within years, while governments and businesses scramble to catch up.

Apple’s stock slump reflects investor anxiety that the company is trailing rivals in AI, prompting analysts to urge an aggressive acquisition to catch up and avoid becoming a market “loser.”

Departing Llama researcher Tijmen Blankevoort warns Meta’s 2,000-strong AI division is stifled by layoffs, unclear goals and performance-review anxiety, calling its culture a “metastatic cancer” hobbling innovation.

After Grok’s racist spree and CEO Linda Yaccarino’s resignation, major brands stay silent yet further trim spending, underscoring X’s widening credibility gap with advertisers.

Stanford’s survey of 1,500 U.S. workers shows they welcome AI for repetitive chores yet insist on human oversight, revealing a big gap between staff desires and today’s AI capabilities.