AI Week in Review - 3/15/25
Public Sector

Hello, friends. Thank you all for supporting AI in the public sector. We are at a pivotal moment in history as AI continues to evolve, and it’s more important than ever to harness this technology to deliver mission outcomes. This week, we look at the key topics in public sector AI and then dive into the details with the latest news.
In this week’s edition:
AI in national security training
AI Action Plan comments delivered
The agents are coming!
The latest public sector AI news
The Highlights
AI in national security training

The Story
Who doesn’t love free training? Especially when it comes from great organizations like the Special Competitive Studies Project and Coursera. Oh, and I’m the instructor for module 2. 😀
More Details
Here’s what’s in the course:
Module 1: The Imperative
Understand the importance of being AI-ready.
Module 2: Practical Applications
Explore how tools like ChatGPT, Gemini, Claude, and Copilot can be applied.
Module 3: How to Leverage AI Tools for your Mission
Learn how to improve individual workflow in a national security context.
Module 4: Agency AI Playbook
Learn how to advocate for AI adoption at your agency.
AI Action Plan comments delivered

Source: ChatGPT
The Story
There have already been over 430 comments submitted on the development of the White House’s AI Action Plan. Some of these have been publicly released. The AI Action Plan is due in July, so stay tuned.
More Details
Many organizations have publicly released their comments, including the following:
OpenAI Submission
Anthropic Submission
Google Submission
Center for Data Innovation Submission
The agents are coming!

Source: Shutterstock
The Story
2025 is the year of agents! So, what does that really mean? It means GenAI is going to be able to do more - and without as much oversight.
More Details
AI agents can act autonomously, access available tools, and perform long-running, multi-step tasks.
Manus AI, a new Chinese agent, was flagged as the second DeepSeek moment earlier this week. Turns out, it may just be Anthropic’s Claude with tool calls, but the video is amazing.
OpenAI launched a platform to build custom AI agents that was covered by the Wall Street Journal.
The future of AI isn’t the model - it’s the system. AI agents will be a critical part of those systems.
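To make the agent idea concrete, here is a minimal sketch of the loop described above: an agent chooses a tool, acts, observes the result, and moves to the next step. All names here are illustrative; a real agent would call an LLM API to decide each step rather than follow the hard-coded plan below.

```python
# Toy "tools" the agent can invoke. In practice these might be web search,
# a code interpreter, or an enterprise API.
def add(a, b):
    return a + b

def multiply(a, b):
    return a * b

TOOLS = {"add": add, "multiply": multiply}

def run_agent(plan):
    """Execute a multi-step plan autonomously, one tool call per step.

    `plan` stands in for the model's step-by-step decisions: each entry
    names a tool and its arguments, and later steps can consume earlier
    results via the "prev" placeholder.
    """
    result = None
    for tool_name, args in plan:
        # Substitute the previous observation into this step's arguments.
        args = [result if a == "prev" else a for a in args]
        result = TOOLS[tool_name](*args)  # act, then observe the result
    return result

# Compute (2 + 3) * 4 across two chained tool calls.
answer = run_agent([("add", [2, 3]), ("multiply", ["prev", 4])])
print(answer)  # 20
```

The point of the sketch is the system, not the model: the value comes from wiring the model's decisions to tools and letting it chain results across steps with less human oversight at each one.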
The Details
Federal
NIST cuts would put US behind AI eightball, tech groups warn Commerce secretary
Tech groups warn that cutting NIST’s AI programs could weaken U.S. leadership in AI, urging the Commerce Secretary to sustain research, standards, and global competitiveness.
Top oversight Democrat warns of DOGE AI use in federal agencies
Rep. Gerry Connolly warns federal agencies about unauthorized AI use, raising concerns that Elon Musk’s Department of Government Efficiency may be mishandling sensitive data and violating privacy laws.
New House AI, Energy Working Group issues RFI
The new House AI and Energy Working Group seeks input on strategies to meet AI-driven energy demands, strengthen the grid, and maintain U.S. leadership over China in energy and technology.
From bureaucracy to brilliance: AI in federal IT
To fully harness AI’s potential, federal agencies must invest in AI fluency, secure sovereign AI solutions, modern IT infrastructure, and workforce development while integrating AI with emerging technologies.
How the Air Force is experimenting with AI-enabled tech for battle management
The 805th Combat Training Squadron is experimenting with AI tools like Maven and Maverick to improve battle management, dynamic targeting, and decision-making, integrating them into next-generation command systems.
AI in government: Lessons from six years of research
The Ada Lovelace Institute report on AI in the public sector highlights the need for clear terminology, high-quality data, transparency, public engagement, and a focus on reimagining services rather than just automation.
What federal regulators can learn from the states about AI oversight
State-level AI oversight offers federal regulators valuable lessons on governance models, consumer protection, and risk mitigation, highlighting the need for a unified yet adaptable national AI policy.
Understanding U.S. allies’ legal authority to implement AI export controls
The U.S. has aggressively expanded AI and semiconductor export controls on China, but the strategy's success depends on allied nations' willingness and ability to implement similar restrictions outside traditional multilateral frameworks.
Department of Education dismantling impacts the future of AI in schools
As the Department of Education gets cut, schools are turning to AI for tutoring and administrative support, though experts warn technology alone cannot replace human connection in learning.
Superintelligence strategy: Mutual Assured AI Malfunction (MAIM)
The Superintelligence Strategy report proposes a three-pronged approach—deterrence, nonproliferation, and competitiveness—to manage AI risks, introducing "Mutual Assured AI Malfunction" (MAIM) as a deterrence model akin to nuclear MAD.
Intel agency copes with workforce reductions amid AI modernization
The National Geospatial-Intelligence Agency is accelerating AI integration amid workforce reductions, balancing modernization with mission focus as it scales initiatives like Project Maven to manage growing intelligence data demands.
Military AI is here. Some experts are worried
The U.S. military partners with Scale AI for AI-driven operational planning, raising concerns among experts about autonomy, existential risks, and the need for global regulations on AI in warfare.
Trump’s uncertain AI doctrine
The Trump administration’s AI strategy prioritizes deregulation and innovation but remains unclear on safety measures, raising concerns about balancing rapid progress with public trust and global leadership.
State / Local
Charles County AI Task Force presents draft policy on responsible AI use
Charles County’s AI Task Force has proposed a responsible AI policy focusing on ethical use, security, and transparency, with plans for training, tool approval, and future AI integration.
Connecticut aims to lead AI innovation
The Connecticut AI Alliance (CAIA) is fostering AI growth with a $20M computing cluster, workforce training, and industry collaboration, positioning the state as a national AI leader.
North Carolina hires its first AI governance and policy exec
I-Sah Hsieh, a veteran AI and analytics expert, will lead North Carolina’s AI governance efforts, shaping policy and oversight to integrate AI responsibly into public services.
AI to reshape state transportation departments
State transportation departments in Texas and California are exploring AI to enhance operations, with Caltrans considering a chief data and AI officer role, signaling a major workforce transformation.
Adding AI into HR
Government agencies are integrating AI into human services to improve efficiency, workforce development, and service delivery while carefully balancing data privacy, ethical concerns, and maintaining the human element.
AI can help law enforcement make Nebraska safer
AI is enhancing law enforcement in Nebraska by automating tedious tasks, detecting scams, and improving efficiency, though ethical and privacy concerns must be carefully managed.
AI reporters unveiled for Arizona Supreme Court
The Arizona Supreme Court has introduced AI-generated reporters, Daniel and Victoria, to provide clear, timely explanations of case decisions, with potential for more AI spokespeople in the future.
CT looks to regulate workplace AI
Connecticut lawmakers are debating a bill to regulate AI in the workplace, with supporters emphasizing transparency and worker protections, while business groups warn of overregulation and high compliance costs.
Georgia lawmakers weigh how to regulate AI for state agencies
Georgia lawmakers are considering AI regulations for state agencies, with debate over extending oversight to local governments, as cities like Atlanta and Macon-Bibb begin integrating AI into public services.
State leaders call on Congress to ban DeepSeek
A coalition of 21 state attorneys general is urging Congress to ban the Chinese AI app DeepSeek from federal devices, citing national security risks and potential data access by the Chinese government.
AI helps teachers in FL
Florida is using AI assistants, Baxter and Professor Bruce, to help students with questions, support teachers in lesson planning, and enhance data collection for tracking student progress.
Maine docs cautious, but optimistic on AI
Maine’s largest health providers are cautiously adopting AI for administrative tasks like documentation, while smaller providers express concerns over privacy, security, and potential impacts on patient care.
Newsrooms are using AI to listen in on public meetings
Local newsrooms are using AI transcription tools to monitor public meetings, helping reporters uncover sources and leads, though human verification remains essential for accuracy and context.
Virginia legislation calls for human oversight of AI use in court decisions
Virginia lawmakers advance a bill mandating human oversight in AI-assisted court decisions, ensuring accountability and addressing concerns about algorithmic bias in the criminal justice system.
Can AI get past its power problem?
AI’s massive power demands are straining grids, prompting nuclear and renewable energy investments while raising concerns about sustainability, fairness, and transparency in energy allocation for AI-driven public services.
International
Iraq lags in AI readiness as neighbors advance
Iraq ranks among the lowest in the MENA region on the 2024 Government AI Readiness Index, struggling with weak AI strategy, outdated infrastructure, and a lack of investment in technology and talent.
China’s autonomous agent, Manus, changes everything
China's Manus AI, the world's first fully autonomous AI agent, disrupts the global AI landscape by independently executing complex tasks, raising ethical concerns and challenging Silicon Valley’s dominance.
Spain cracks down on AI: Mislabeling deepfakes could cost companies millions
Spain's proposed law could fine AI companies up to €35 million for mislabeling AI-generated content, aiming to curb deepfakes and align with the EU AI Act’s transparency rules.
French publishers and authors sue Meta over copyright works used in AI training
French publishers and authors are suing Meta, accusing it of using copyrighted works without permission to train its AI, highlighting ongoing legal battles over AI and intellectual property.
DeepSeek AI cranks open the spigots on Chinese venture capital
DeepSeek’s AI breakthrough has triggered a surge in Chinese venture capital interest, reversing years of decline and attracting global investors seeking opportunities in China’s AI sector.
UK Health Security Agency – AI update
The UKHSA continues expanding AI applications, including real-time pollen detection and AI-driven TB screening, while refining its AI strategy, readiness agenda, and enterprise-level adoption framework.
UK plans civil service staff reductions through AI efficiency
The UK government plans to integrate AI into public services to boost efficiency, potentially reducing civil service staff numbers, while investing in tech apprenticeships and AI teams to modernize Whitehall.
India’s path to AI autonomy
India’s AI vision emphasizes democratization, public-sector-led applications, and global leadership in ethical AI, leveraging homegrown innovations to address societal challenges and drive inclusive growth.
Seeking stability in the competition for AI advantage
A new report proposes "Mutually Assured AI Malfunction" (MAIM) to deter unilateral AI dominance, but experts warn it could heighten instability rather than prevent an AI arms race.
UK to take a ‘test and learn’ approach with spending on AI
The UK government is streamlining AI and digital project funding, adopting a “test and learn” approach to accelerate innovation, cut waste, and improve public services with agile, staged funding.
Everything Else
CEOs "shoving" AI "into everything" — with mixed results
Corporate leaders are aggressively pushing AI into workflows, but while it streamlines tasks, issues like hallucinations and job fears raise concerns about its true workplace value.
Over half of American adults have used an AI chatbot
Over half of U.S. adults have used AI chatbots, with ChatGPT leading in popularity, as an Elon University survey highlights growing integration of AI into daily life and personal interactions.
AI failed to detect critical health conditions
A study found that AI models predicting in-hospital mortality failed to detect 66% of critical injuries, highlighting concerns about relying solely on data-driven training in patient care.
5 issues to consider as AI reshapes work
As AI reshapes work, experts emphasize the need for worker input, transparency in hiring, proactive regulation, and policies that safeguard job security and well-being.
OpenAI wants businesses to build their own AI agents
OpenAI launched a platform for businesses to build custom AI agents, aiming to drive enterprise adoption by enabling automation in tasks like financial analysis and customer service.
AI search has a citation problem
A study found AI search engines frequently misattribute news sources, fabricate citations, and ignore publisher restrictions, raising concerns about transparency and accuracy in AI-generated search results.
Anthropic CEO says spies are after $100M AI secrets in a ‘few lines of code’
Anthropic CEO Dario Amodei warns that Chinese espionage threatens U.S. AI firms, urging the government to strengthen security at AI labs to protect valuable algorithmic secrets.
The future of AI isn’t the model—it’s the system
AI's value is shifting from standalone models to integrated systems of autonomous agents, as seen in Manus and enterprise AI adoption, while Google DeepMind advances robotics with AI-driven reasoning.
Malware's AI time bomb
Experts warn that AI-powered malware could revolutionize cyberattacks, but hackers are sticking to traditional tactics for now—giving companies a narrow window to bolster AI-driven cybersecurity defenses.
People find AI more compassionate than mental health experts
A study finds AI-generated mental health responses more compassionate than human experts, raising opportunities for AI in therapy but also concerns about privacy, bias, and over-reliance on artificial empathy.