  • When Human-Level Performance is not Enough – Leon Wong

    Imagine you get blood work done, and your doctor spots something unusual in your results. Would you trust AI to decide your treatment?

    It’s true—AI has shown impressive medical knowledge, like Google’s Med-PaLM 2 passing medical licensing exams. But does that mean we’re ready for AI doctors? Not quite.

    That’s because passing a test is one thing. Doing the actual job is another.

    If a doctor’s only job were to pass medical exams, then an AI like Med-PaLM 2 could be a perfect replacement. But real patient care goes far beyond tests. It requires judgment, experience, and human context.

    The same is true when we think about AI in government. Just because AI can pass a test doesn’t mean it’s ready to handle the messy, complicated choices that shape people’s lives.

    This is especially true for people working in Health and Human Services (HHS), where decisions about food assistance, housing, healthcare, and child welfare directly impact people’s lives, raising the stakes even higher.

    So when, if ever, can we confidently hand those decisions over to AI?

    The short answer: AI often has to do better than humans to be safe and effective in the real world. Here’s why.

    What does success really mean?

    To better understand why AI might struggle with real-world complexities, let’s look at an everyday example.

    Imagine a government system that automates data entry from handwritten applications. On paper, AI might match human performance in recognizing handwriting samples.

    But in the real world, caseworkers bring knowledge that AI lacks. For example, they know that a pay stub for “Elizabeth” might match an application for “Liz,” “Beth,” or “Betty.” They can also recognize that $1,200 is more likely to be a monthly rather than an annual income.
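
    To make the gap concrete, here is a minimal sketch (in Python) of the kind of contextual checks a caseworker applies without thinking. The nickname table and the income threshold are illustrative assumptions, not rules from any real benefits system.

    ```python
    # Contextual checks a caseworker applies intuitively when matching documents.
    # The alias table and the $15,000 cutoff below are illustrative assumptions.

    NICKNAMES = {
        "elizabeth": {"liz", "beth", "betty", "eliza"},
    }

    def names_may_match(application_name: str, document_name: str) -> bool:
        """Return True if two names plausibly refer to the same person."""
        a = application_name.strip().lower()
        b = document_name.strip().lower()
        if a == b:
            return True
        # Check the nickname table in both directions.
        return b in NICKNAMES.get(a, set()) or a in NICKNAMES.get(b, set())

    def likely_income_period(amount: float) -> str:
        """Guess whether a reported income figure is monthly or annual."""
        # $1,200 is far more plausible as a month's income than a year's.
        return "annual" if amount >= 15_000 else "monthly"

    print(names_may_match("Elizabeth", "Liz"))  # True
    print(likely_income_period(1_200))          # monthly
    ```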

    Without that broader understanding, AI risks falling short where it matters most—real-world use.

    What’s at stake?

    Even if AI makes mistakes at a comparable rate to humans, the types of mistakes it makes are different. That matters.

    For example, an Air Canada chatbot hallucinated a bereavement-fare refund policy that didn’t exist—something a human agent would almost never do. A customer took the airline to a tribunal, which held it financially liable for the chatbot’s mistake.

    Fairness is another issue to consider. McDonald’s ended its AI-powered drive-through trial after the system struggled with accented English. Of course, humans can also struggle to understand accents—but at the scale of AI systems, widespread failures can exclude entire groups of people, creating ethical and reputational fallout.

    In human services, the costs and benefits of AI must be weighed at the macro and micro levels. 

    Consider AI-powered benefits determinations. Automated determinations could happen in minutes instead of months, making backlogs disappear for millions of applicants. But what if it wrongly denies assistance to a struggling single mom? That’s not just a glitch. It’s skipped meals, empty grocery carts, and an impossible choice between rent and dinner.

    For this reason, we need to be nuanced in the application of AI for human services. 

    Mistakes that favor applicants—like approving someone who turns out to be ineligible—are usually less harmful than mistakes that deny an eligible person the help they need. At least, that’s true for a single case. But imagine erroneous approvals granted at scale: the consequences could be devastating for underfunded agencies that simply don’t have the resources to cover everyone.

    That’s why agencies need to weigh both the individual and system-wide risks when deciding how to use AI. One approach might be to allow automated approvals but require a human to review every denial. Another might be to decide that any error—at any level—is too risky, and keep AI out of decision-making entirely.
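
    As a rough illustration of the first approach, the sketch below finalizes only confident approvals and routes every denial (and any low-confidence result) to a caseworker. The confidence threshold and field names are assumptions for the example, not a recommended policy.

    ```python
    from dataclasses import dataclass

    @dataclass
    class Determination:
        applicant_id: str
        decision: str      # "approve" or "deny"
        confidence: float  # model-reported confidence, 0..1 (assumed field)

    def route(d: Determination, confidence_floor: float = 0.9) -> str:
        """Route an AI determination: auto-finalize or send to human review."""
        # Approvals clear automatically only when confidence is high;
        # denials always go to a human, whatever the model's confidence.
        if d.decision == "approve" and d.confidence >= confidence_floor:
            return "auto-finalize"
        return "human-review"

    print(route(Determination("A-001", "approve", 0.97)))  # auto-finalize
    print(route(Determination("A-002", "deny", 0.99)))     # human-review
    ```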

    But we also can’t ignore the cost of inaction. When agencies are stretched thin and backlogs drag on for months, people in need are left waiting—and that delay is a decision, too. In some cases, the speed of automation may be worth the trade-off.

    So, how do we make sure we’re using AI wisely? 

    Both agencies and vendors need to understand the “jagged frontier” of AI—the uneven edges where it excels at some tasks but struggles with others. That means getting specific about where AI can help, where it can hurt, and how to adapt as the technology evolves. Because AI capabilities are advancing so quickly, policies need to stay flexible to keep up.

    Can you fail gracefully?

    Even the best AI systems will sometimes make mistakes. Since errors are inevitable, organizations must have strategies for catching them when possible and minimizing the impact of those that slip through. 

    Humans tend to do this naturally. Think about when we have trouble hearing someone on the phone: we ask them to repeat themselves, confirm what we thought we heard, or spell out words. AI could, in theory, be designed with similar safeguards, but there are practical limits. The net result is that AI may need to outperform humans at certain tasks because it lacks reliable ways to catch and correct its own mistakes.
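
    One way to build that phone-call habit into software is to accept an AI output only when its confidence is high, and otherwise fall back to asking the person to confirm. The sketch below is a generic pattern with assumed callables, not any specific product’s API.

    ```python
    def transcribe_with_fallback(audio, transcribe, confirm_with_caller,
                                 min_confidence: float = 0.85) -> str:
        """Fail gracefully: keep a transcription only when confidence is high.

        `transcribe` is assumed to return (text, confidence); `confirm_with_caller`
        is assumed to re-prompt the person and return corrected text.
        """
        text, confidence = transcribe(audio)
        if confidence >= min_confidence:
            return text
        # Low confidence: mimic the human habit of asking the speaker to
        # repeat or confirm rather than silently guessing.
        return confirm_with_caller(f"I heard '{text}'. Is that right?")

    # Example with stand-in functions:
    print(transcribe_with_fallback(
        b"...",
        transcribe=lambda _: ("Betty Smith", 0.62),
        confirm_with_caller=lambda prompt: "Beth Smith",
    ))  # Beth Smith
    ```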

    Human review is a common strategy for catching AI errors. Instead of making critical decisions outright, AI tools can present evidence and recommendations to caseworkers. Appeals can also act as a safeguard, letting applicants flag suspected AI errors and request a review by a human caseworker.

    The key to making processes like this effective is to design AI systems with enough transparency (e.g., citing the specific regulations and evidence behind each recommendation) so that people can audit those decisions efficiently.
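
    In practice, that transparency can be as simple as refusing to surface a recommendation unless it carries its supporting citations. The structure below is a generic illustration with made-up field names and a placeholder regulation, not a real system’s schema.

    ```python
    from dataclasses import dataclass, field

    @dataclass
    class Recommendation:
        """An AI recommendation packaged for human review, not a final decision."""
        applicant_id: str
        recommendation: str                 # e.g., "approve" or "deny"
        cited_regulations: list = field(default_factory=list)
        cited_evidence: list = field(default_factory=list)

        def is_auditable(self) -> bool:
            # A caseworker can only review efficiently if the system shows its work.
            return bool(self.cited_regulations) and bool(self.cited_evidence)

    rec = Recommendation(
        applicant_id="A-003",
        recommendation="deny",
        cited_regulations=["Example income-limit rule §12.3 (placeholder)"],
        cited_evidence=["Pay stub dated 2025-01-15: $2,480/month"],
    )
    print(rec.is_auditable())  # True
    ```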

    When humans are in the loop to help catch and correct errors, higher AI error rates can be tolerated.

    Key takeaways

    It’s tempting to measure AI success by whether it can match human performance. However, this perspective can oversimplify the complexities of AI in real-world scenarios. 

    Matching human performance isn’t always enough. Often, AI needs to outperform humans to make up for its lack of context, the high stakes of getting it wrong, and the fact that it can’t always tell when it’s made a mistake.

    HHS leaders can make informed decisions about using AI by:

    1. Critically evaluating—and clearly defining—how success will be measured on the tasks they need AI to perform
    2. Understanding the different types of mistakes AI can make, the risks of those failures, and their associated costs
    3. Designing safeguards to mitigate AI’s limitations and ensure reliable outcomes

    Remember, the goal is not to replace humans but to enhance outcomes, ensuring that AI complements human strengths rather than merely imitating them.

    AI holds real promise for public service—but only when it’s applied with care, context, and humility.

    For HHS leaders, the challenge is not merely to decide whether to use AI, but how to use it in ways that protect people, improve outcomes, and reflect the values of public service. That means looking beyond flashy benchmarks and asking the harder questions: What does this tool really do? Who does it serve? And what happens when it gets it wrong?

    Getting it right will take cross-sector collaboration, thoughtful policy, and a commitment to centering both effectiveness and humanity. But if we do it well, we won’t just be keeping up with change—we’ll be shaping it.

    Because in public service, success isn’t just about speed or scale. It’s about trust. And trust isn’t built on performance alone—it’s built on judgment, transparency, and care.

  • This Week in Government Technology – March 9th – 16th, 2025

    Federal & National AI Developments

    AI Security & Compliance Firm Secures $5M in Funding

    Darwin, a technology company specializing in AI security and compliance services for state and local governments, has received $5 million in funding to scale its current projects. The company’s CEO expressed hopes that these funds would accelerate the development of tools to assist officials in planning AI deployments while ensuring compliance with ethical and regulatory standards.

    General Services Administration Continues “Demo Days”

    Despite facing budget and staff reductions, the General Services Administration (GSA) is proceeding with its internal “Demo Days” initiative. This program provides a platform for employees to propose and showcase innovative technology solutions that could enhance agency services and administration.

    State & Local AI Policies and Implementations

    California Advances AI Regulation Bills

    The California legislature is currently considering over 30 bills aimed at regulating AI use across government, nonprofit, and private sectors. Many of these proposals focus on safeguarding privacy rights and mitigating bias in AI deployments.

    Utah Partners with Nvidia for AI Training in Government

    Utah’s governor has signed a memorandum of understanding (MOU) with Nvidia to introduce AI-related apprenticeships for state employees. This initiative is part of a broader collaboration to enhance AI education for state university faculty and students.

    Florida Considers Statewide AI Policy

    Currently, AI policies in Florida government agencies are determined at the departmental level. However, state lawmakers are exploring the development of a standardized statewide AI policy. This initiative seeks to establish ethical AI use guidelines while also leveraging AI for cybersecurity improvements, financial audits, fraud prevention, and administrative optimization. If implemented, this policy could centralize AI deployment across Florida’s government operations.

    Transportation Agencies Leverage AI for Efficiency

    Government Technology published an analysis detailing how transportation agencies nationwide are increasingly using AI to enhance services and operations. AI applications include traffic flow prediction, bid document analysis, and infrastructure maintenance prioritization.

    South Carolina School District Explores AI for Education Support

    A South Carolina school district is evaluating an AI-powered assistant to streamline educational support tasks. Potential applications include generating tests and worksheets, providing individualized feedback to students, and translating school documents into multiple languages.

    Op-Eds & Thought Leadership in AI Governance

    AI & Cloud Technologies Boost State and Local Productivity

    A report from StateTech highlights that state and local agencies adopting AI and cloud technologies are experiencing notable productivity gains. Agencies that have been slower to integrate these technologies are seeing comparatively lower efficiency improvements, suggesting a competitive advantage for early adopters.

    Key AI Considerations for HHS Agencies

    Government Technology published an op-ed outlining essential factors for Health and Human Services (HHS) agencies when implementing AI. The article emphasized the importance of data security and client confidentiality, particularly as AI becomes more prevalent in child welfare services.

    State Governments Embrace Enterprise AI Management

    Government Technology also featured a sponsored article exploring how states are integrating AI into larger enterprise service management systems. These initiatives aim to create a coordinated and centralized approach to AI use, ensuring consistency and efficiency across state agencies.

    AI in Customer Service: Enhancing Human Support, Not Replacing It

    The Financial Times published an op-ed discussing the evolving role of AI in customer service. The article cautioned against using AI as a replacement for human interaction, instead advocating for its use as a tool to enhance efficiency and streamline the process of connecting customers with human representatives. The author stressed that AI should facilitate, rather than diminish, personalized customer experiences.

  • This Week in Government Technology – March 2nd – 9th, 2025

    Federal AI Initiatives

    Civil Society Groups Urge AI Transparency in Federal Agencies

    A coalition of 18 civil society organizations has sent a letter to the Trump administration, advocating for preserving President Biden’s AI transparency policy. This policy required federal agencies to publicly disclose how AI is used in government services and programs. Supporters argue that maintaining transparency in AI deployment is essential for public trust and accountability, while opponents have raised concerns about administrative burden and operational security.

    The Debate Over a Unified National AI Policy

    An op-ed published this week in Government Technology argues that the absence of a singular national policy on AI governance has allowed states to experiment freely. The article suggests that a decentralized approach to AI governance could lead to more tailored and effective solutions at the state level rather than a one-size-fits-all federal mandate.

    State Initiatives

    New Hampshire Explores AI for Unemployment Fraud Prevention

    New Hampshire, which transitioned to an electronic unemployment insurance system in 2011, is now investigating how AI can be used to detect and mitigate fraudulent claims.

    Arkansas Launches AI-Powered Career Matching System

    After three years of development, Arkansas has unveiled its new LAUNCH platform. This AI-driven system connects unemployment insurance recipients with career resources and job opportunities suited to their backgrounds. The system aims to provide more targeted job placement services by leveraging statewide data pools and AI-driven matching algorithms.

    Connecticut Weighs AI in Healthcare—But Also Considers a Ban

    Following a recent executive order encouraging government agencies to enhance healthcare efficiency and transparency, Connecticut is considering how AI can improve service delivery, particularly as Medicare budget cuts loom. However, at the same time, a bill has been introduced in the state legislature that would ban the use of AI by health insurers when determining patient care, reflecting ongoing concerns about bias and accountability in AI-driven healthcare decisions.

    Oklahoma’s GenAI Procurement Tool in the Spotlight

    Statescoop has published an in-depth look at Oklahoma’s new “Process Copilot” system, a generative AI platform designed to enhance procurement processes and reduce payment errors and delays. The technology aims to increase efficiency in state purchasing while minimizing administrative friction.

    AI in Law Enforcement: Macon-Bibb County Sees Positive Results

    The police department in Macon-Bibb County, Georgia, has reported a 2.5% reduction in homicides since implementing AI-driven tools to enhance suspect tracking, community outreach, and investigative procedures.

    Virginia DOT Adopts AI for Smarter Budgeting

    The Virginia Department of Transportation (VDOT) has integrated AI into its financial management system to improve project cost estimates and budgeting efficiency. The goal is to maximize the dollar-for-dollar impact of highway maintenance and infrastructure projects while reducing cost overruns.

  • This Week in Government Technology – February 23rd – March 2nd, 2025

    Federal AI Initiatives

    Google Public Sector released its annual survey of state and local government IT leaders, finding a notable decrease in concern around some common risks associated with AI adoption. Compared to last year:

    • 6% fewer leaders expressed concern about whether their staff would be AI-ready and well-trained.
    • 20% fewer reported worries about the impact of AI on government data security.
    • 13% fewer were concerned about risks to citizen privacy.

    According to Google Public Sector, these shifts suggest growing confidence among public sector leaders in managing AI responsibly and securely within government operations.

    State AI Initiatives

    New York

    Governor Kathy Hochul announced an expansion of the Empire AI consortium, a public-private research partnership backed by $165 million in new funding. The consortium’s work includes an “AI for Good” initiative focused on applying AI to public challenges such as food insecurity, education, government operations, climate change, cybersecurity, and health care.

    Indianapolis

    After identifying a gap in AI policy awareness among city employees, Indianapolis is partnering with InnovateUS to create AI policy training for its workforce. The program will educate staff about internal AI use guidelines and responsible implementation. Indianapolis joins states like California, New Jersey, and Maryland in working with InnovateUS on similar efforts.

    Utah

    Utah’s state CIO shared updates on current AI projects, including work to implement retrieval-augmented generation (RAG) technology within the state tax agency and develop AI-powered digital assistants for customer service across state agencies. These projects are part of Utah’s broader strategy to modernize government services through AI tools.

  • This Week in Government Technology – February 16th – 23rd, 2025

    Federal AI Initiatives

    Preserving Federal AI Leadership Structures

    A recent op-ed from StateScoop emphasizes the importance of retaining Biden-era Chief AI Officers (CAIOs) under the Trump administration. The piece argues that CAIOs offer two critical advantages:

    • Specialized Expertise: These leaders bring deep technical knowledge essential for integrating advanced AI tools into federal workflows.
    • Clear Accountability: Having dedicated AI leadership ensures that modernization efforts are transparent, coordinated, and aligned with national priorities.

    State AI Initiatives

    Nevada’s Ambitious Statewide AI Modernization Project

    Nevada continues to lead statewide government modernization with its ongoing Enterprise Resource Planning System Modernization Project. The initiative, launched in 2022, focuses on overhauling decades-old infrastructure and digitizing essential services:

    • Financial System Overhaul: A major milestone was reached in mid-2023 when Nevada’s CIO announced a complete transformation of the state’s financial systems, including upgrades for digital payments and improved interagency collaboration.
    • “One Nevada” Digital Portal: This week, Nevada unveiled its next major objective—creating a centralized portal to unify health and human services (HHS) access. The portal will allow residents to use a single digital ID for streamlined access to services across state agencies, with plans to expand this model to all state services.
    • AI Regulations and Procurement Framework: Recognizing the growing importance of AI, Nevada’s leadership is developing guidelines for AI use in government service delivery, ensuring ethical deployment and responsible innovation.

    Castle Rock, Colorado, Leverages AI for Water Conservation

    In Castle Rock, Colorado, the municipal government is addressing significant water loss caused by underground pipe leaks—enough to supply 1,000 families annually. Officials have partnered with a Canadian AI firm specializing in AI-powered leak detection technology to tackle the issue. This collaboration aims to modernize the town’s infrastructure while significantly improving water conservation efforts.

    AI Leadership for State and Local Governments: A Growing Trend

    The concept of appointing state-level Chief AI Officers is gaining traction, following the federal model’s success. Analysts argue that specialized leadership will help states accelerate AI adoption, foster cross-agency collaboration, and ensure alignment with ethical standards.

    AI Tools Tailored for Local Government Efficiency

    Route Fifty’s latest op-ed argues that AI solutions should prioritize frontline employees in state and local governments, emphasizing employee training for better public engagement and automating routine administrative tasks so staff can focus on direct community service.

  • This Week in Government Technology – February 9th – 16th, 2025

    Federal and National AI Initiatives

    Congressional Push for AI Regulation

    Jay Obernolte, a California Republican Congressman, has urged Congress to pass federal AI regulations to prevent a fragmented approach across states. The lawmaker warned that without cohesive federal policies, the nation could face inconsistent AI safety and data privacy regulations.

    Obernolte is also leading efforts to establish a National AI Research Resource, which aims to create a shared public-private research infrastructure to facilitate responsible AI development and data usage.

    Federal IT Infrastructure Faces AI-Driven Strain

    A new report from Hewlett Packard warns that federal agencies’ rapid adoption of AI technology may soon overwhelm legacy IT infrastructure. The report highlights the urgent need for strategic planning to ensure government systems can support AI-driven workloads, cautioning that failure to modernize could disrupt essential services. It emphasizes investments in AI-ready hardware and computing capabilities as critical to maintaining service delivery and security.

    State-Level AI Innovations

    Washington State Proposes AI Bargaining Rights for Public Sector Unions

    Washington’s state legislature has introduced a bill granting public sector unions a say in AI deployment within government services. If passed, this legislation would ensure that unions can negotiate how AI technologies are integrated into administrative functions, potentially shaping policies on automation and job security.

    Los Angeles County Implements AI for Multilingual Emergency Communication

    Los Angeles County emergency services have begun utilizing AI-powered translation tools to improve communication with non-English-speaking residents. This initiative aims to enhance public safety and accessibility, ensuring that emergency messages reach all community members efficiently, regardless of language barriers.

    Texas and Iowa Explore AI for Government Efficiency

    Texas has established a new Government Efficiency Committee to explore the role of AI in modernizing state IT systems and improving public sector operations. Inspired by this move, Iowa’s governor has signed an executive order creating the Iowa Department of Government Efficiency (IDOGE), a 15-member task force dedicated to assessing AI and automation technologies’ potential to enhance state government efficiency.

    Oregon’s AI Advisory Council Releases Action Plan

    The Oregon State Government AI Advisory Council, formed in 2023, has released its final AI action plan, offering policy recommendations for AI governance. The plan calls for executive actions to formalize AI policies related to data security, human oversight, and workforce preparedness. The recommendations aim to ensure transparent and responsible AI deployment within Oregon’s state government.

    California Enhances Firefighting Training with AI

    The California Department of Forestry and Fire Protection is integrating AI-driven augmented reality simulations into its firefighter training programs. This initiative aims to improve training effectiveness and help personnel better prepare for real-world fire prevention and emergency response scenarios.

  • This Week in Government Technology – February 2nd – 9th, 2025

    Federal and National AI Initiatives

    GovAI Coalition Launches AI Contract Hub

    The GovAI Coalition, a network of over 600 government agencies spanning local, state, and federal levels, has partnered with procurement services firm Pavilion to develop an AI Contract Hub. This initiative aims to provide agencies access to curated AI contracts, streamlining procurement processes and integrating ethical AI standards into government acquisitions.

    AI Safety Evaluation Initiative Underway

    The U.S. AI Safety Institute has announced that private firm Scale AI will serve as its first third-party AI safety evaluator. Organizations deploying AI systems can voluntarily submit their models for safety assessments, with the option of making evaluations publicly available. This initiative seeks to enhance transparency and accountability in AI deployment across government and industry.

    Tracking State Data Modernization Efforts

    Georgetown University’s Beeck Center for Social Impact and Innovation has introduced a new tool designed to monitor and analyze initiatives led by state Chief Data Officers. This resource offers insights into how states modernize their data governance and storage systems to support AI integration.

    Vice President Vance Engages in Global AI Policy Talks

    Vice President JD Vance has traveled to Paris for an international AI summit, discussing global AI regulatory strategies with world leaders. His participation signals the U.S. administration’s emphasis on a deregulatory approach to AI.

    State and Local AI Innovations

    Louisiana Launches AI-Focused Innovation Office

    Louisiana’s Department of Economic Development has established the Louisiana Innovation Office, focusing on AI deployment strategies for government and private sector applications. A key component of this initiative is the creation of the Louisiana Institute for AI, a nonprofit dedicated to AI regulation, strategy development, and funding opportunities for small businesses and startups in the state.

    Oklahoma Develops AI-Driven Budgeting and Procurement Tools

    The Oklahoma Office of Management and Enterprise Services (OMES) is leveraging AI to create a “digital twin” of the state’s financial and procurement data. This system allows for real-time monitoring of contracts and expenditures, helping agencies optimize spending and improve vendor management.

    Corpus Christi Port Authority Implements AI for Real-Time Port Management

    The Port Authority of Corpus Christi has integrated AI to develop a digital 3D model of the port, enabling real-time tracking of marine traffic. Officials hope this tool will enhance efficiency in port management and logistics planning.

  • This Week in Government Technology – January 26th – February 2nd, 2025

    As artificial intelligence continues to shape the public sector, government agencies are exploring new adoption, regulation, and governance strategies at all levels. This week, several significant developments have emerged in AI policy and deployment, ranging from municipal partnerships to federal strategy shifts.

    Federal AI Policy Shifts

    OpenAI Introduces ChatGPT Gov

    In an effort to streamline federal workloads, OpenAI launched ChatGPT Gov, a specialized chatbot designed to assist federal agencies. The tool is expected to support data analysis, automation, and other administrative functions.

    Changes in National AI Policy

    Following the rescission of President Biden’s AI executive order, President Trump issued a new AI executive order focused on American leadership in AI development. The order directs federal agencies to develop a comprehensive national AI strategy prioritizing innovation and competitiveness.

    Additionally, the Office of Management and Budget (OMB) has been tasked with revising two Biden-era AI memos to align them with the administration’s desire to remove “ideological bias or engineered social agendas.”

    National AI Advisory Committee’s Expanded Role

    The National AI Advisory Committee, established under the National Artificial Intelligence Initiative Act of 2020, announced plans to develop ten AI policy recommendations for the administration and Congress. These proposals may include governance frameworks for AI use in government and regulations addressing AI’s impact on public- and private-sector workforces.

    State CIOs Advocate for AI Modernization

    The National Association of State Chief Information Officers (NASCIO) provided the Trump administration with its top priorities for state government technology. The recommendations remain largely consistent with previous years, emphasizing the need to modernize government IT infrastructure and explore AI’s role in improving public services.

    Federal AI Governance: A Fragmented Approach

    A Stanford study released this week criticized the current state of AI governance in the federal government, calling it “fragmented” and “inconsistent.” The report advocates for a unified, government-wide AI governance framework that sets standardized policies for AI use across agencies.

    State AI Developments

    Understanding AI Deployment in Local Government

    The Urban Institute released a new research report detailing how local governments are integrating AI and the obstacles they face. The study highlights three primary use cases:

    • AI-powered digital assistants for government employees
    • AI chatbots for online customer service
    • AI-driven accessibility improvements, such as simplifying complex document guides

    This research underscores the varied ways municipalities leverage AI to improve service delivery and operational efficiency.

    State Governments and AI Governance

    A new Pew Research Center article examined how state governments structure AI governance through executive orders and inter-state collaborations. These initiatives aim to establish ethical guidelines and best practices for AI deployment, reflecting a growing recognition of the need for oversight in AI adoption at the state level.

    New York City’s AI Initiative

    New York City announced a partnership with OpenAI to enhance municipal services and stimulate economic growth. The city also unveiled a broader AI strategy, including workforce development programs to prepare government employees and private-sector workers for an AI-driven economy.

    Minnesota Tackles Medicaid Fraud with AI

    Minnesota Governor Tim Walz announced a new initiative to explore AI-based solutions for detecting and reducing Medicaid fraud. This effort reflects a growing trend among state governments to harness AI to improve efficiency and prevent fraud in public assistance programs.

  • This Week in Government Technology – December 1st – 8th, 2024

    Federal Highlights

    Creation of the AI and Crypto Czar Role

    President-elect Donald Trump announced the establishment of a new executive office role: the “AI and Crypto Czar.” Venture capitalist David Sacks has been appointed to this position. Sacks is known for advocating minimal regulation to foster innovation, aligning with Trump’s broader anti-regulation stance. He supports integrating AI into national defense and security programs while promoting a free AI ecosystem.

    GovAI Coalition Formed

    A coalition of private sector technology leaders and public sector CIOs from across the country has come together to form the “GovAI Coalition.” The group aims to shape AI governance policies and practices in government. This week, over 1,700 tech professionals and representatives from more than 550 government organizations gathered in California for the coalition’s first in-person meeting. Discussions focused on creating standardized AI governance policies, laying the groundwork for future policy recommendations.

    Proposed Federal Legislation on AI in Financial and Housing Sectors

    A bipartisan group of lawmakers has introduced a bill to study the use of AI in financial and housing sectors. The legislation aims to assess the need for regulation to ensure fair and ethical AI implementation in these critical areas.

    State-Level Developments

    New Jersey

    The New Jersey AI Task Force released its final recommendations last month, focusing on modernizing data infrastructure, enhancing workforce development, and proposing privacy and security regulations. Following these recommendations, New Jersey launched AI training curriculums for government employees, aimed at improving foreign language translations for documents and websites. Additionally, a Statescoop article highlighted the state’s partnership with InnovateUS, showcasing its leadership in developing AI training programs for public sector employees.

    Colorado

    As concerns grow over the use of biometric data in AI systems, Colorado has emerged as a model for state-level data privacy policies. Statescoop detailed Colorado’s comprehensive approach to regulating sensitive data usage while addressing potential discriminatory impacts.

    California

    The California Center for Government Innovation hosted a gathering of state IT officials to discuss the need for modernizing data infrastructure to better integrate AI technology into public services.

    Nevada

    Nevada’s state CIO reported success with an AI pilot project in the Department of Employment. The initiative has significantly reduced processing times for claims appeals, helping to address backlogs and improve efficiency.

    Wisconsin

    Wisconsin’s legislature has established an “AI Study Committee” to explore AI usage across the state. The committee will engage with public and private stakeholders to develop regulatory proposals and an innovation investment plan.

    Maryland

    The Maryland Department of Transportation has deployed an AI-powered platform to analyze traffic patterns. This initiative aims to alleviate congestion and improve roadway safety.

  • This Week in Government Technology – November 24th – December 1st, 2024

    State-Level Updates

    Arizona Updates AI Policies

    Arizona has updated its AI use policies based on insights from a state-wide survey of government employees’ technology usage. The revised policies prioritize modernizing the state’s data infrastructure to enable system-level AI integration across agencies while aligning with how employees are currently utilizing the technology.

    Survey on Local Government AI Readiness

    The Public Technology Institute published a survey examining AI adoption among municipal and county IT leaders. Findings include:

    • 38% of respondents feel unprepared to implement AI due to security, privacy, and skill-related obstacles.
    • 53% are developing AI governance frameworks.
    • 29% are collaborating with universities or industry experts for implementation.
    • 7% are creating AI-related job recruiting strategies.

    Insights from Georgia and Texas

    State CIOs from Georgia and Texas shared updates on their AI efforts. In Georgia, the state’s CIO discussed the “AI Lab,” a state-funded technology center for developing AI solutions tailored to government needs, as well as the challenges of modernizing data infrastructure. The Texas AI Advisory Council has been working with city and county governments, police departments, and academic experts to formulate AI policy recommendations.

    Federal-Level Updates

    AI Policy Direction Under New Administration

    A recent AP report analyzed the incoming Trump administration’s potential approach to AI. The article highlights moments where President-elect Donald Trump and his advisors, including Elon Musk, have expressed both enthusiasm and caution about AI’s potential, leaving questions about future federal regulation and bipartisan legislative efforts.

    DOJ Calls for Updated AI Governance

    The Department of Justice’s Office of the Inspector General has recommended revising its 2020 AI governance policy. The update aims to address new challenges posed by the evolving AI landscape, particularly regarding legal and judicial applications.

    US AI Safety Institute Task Force

    The US AI Safety Institute, under the National Institute of Standards and Technology (NIST), has launched a task force to test AI models deployed in national security contexts. The initiative focuses on assessing risks to critical infrastructure and data security.

    Global Updates

    UK Mandates AI Transparency

    The UK government announced a mandatory requirement for all government offices to register their use of AI in public records, aligning with a “pro-innovation approach” to regulation. However, only nine use cases have been registered to date, raising questions about the government’s commitment to transparency.

    EU Implements AI Act

    The European Union has begun enforcing its comprehensive AI regulatory law, the European Artificial Intelligence Act. The legislation mandates safety and impact evaluations for AI used in government contexts. Statescoop reported challenges faced by EU research organizations in standardizing evaluation criteria for diverse AI applications.