  • This Week in Government Technology – September 29th-October 6th, 2024

    This week brings several significant developments at the intersection of government and AI technology, highlighting how states and cities across the U.S. are advancing AI policy, data infrastructure, and education.

    Updated AI Policy Tracker Map

    Government Technology has published an updated AI policy tracker map, marking a major shift in state-level AI governance. As of early October, 33 states have formed AI Task Forces or equivalent bodies, up from just 26 earlier this year. This rapid growth signals an increasing commitment to formalizing AI oversight nationwide. The map also highlights which states have implemented significant AI regulations or governance policies and which are developing AI literacy and training curriculums for both public and private sector workforces. These developments reflect the growing need for structured, responsible AI integration at all levels of government.

    New York City’s Data Modernization Efforts

    New York City’s Office of Technology and Innovation has announced a major initiative to consolidate, reorganize, and update its city-wide datasets. This project is designed to optimize the city’s data infrastructure, enabling more effective deployment of AI systems. By modernizing and streamlining these datasets, New York is setting the stage for AI-driven improvements in city operations and public services, offering a model for other cities looking to harness AI’s potential.

    California’s AI Literacy Law for K-12 Schools

    In a groundbreaking move, California has passed new legislation requiring K-12 schools to develop and implement AI literacy curriculums. The law mandates that students be trained to use AI across key subjects such as math, science, and history, ensuring that the next generation is prepared for an AI-powered future. This forward-thinking policy positions California as a leader in integrating AI into education, addressing technological competency and ethical considerations.

  • Key Takeaways from NASCIO’s Annual Conference

    This past week, NASCIO (the National Association of State Chief Information Officers) held its annual conference, bringing together state CIOs to discuss some of the most pressing challenges in government IT, with AI at the forefront of the conversation. The event provided an opportunity for state leaders to exchange ideas and collectively examine how emerging technologies like AI are shaping the future of public services.

    The conference culminated in the release of NASCIO’s highly anticipated annual CIO survey, which offered valuable insights into the status of AI across state governments. The survey revealed critical data on the types of AI applications in use, implementation challenges, and state agencies’ growing ambitions to integrate AI more fully into their operations.

    Below are some key highlights from the conference:

    • The Debate on Chief AI Officers: NASCIO’s president posed a thought-provoking question during his keynote: Is it more effective to appoint Chief AI Officers at the state level, or would state-wide AI oversight teams provide better governance? This conversation underlined the need for balanced leadership and a thoughtful AI strategy across agencies.
    • AI’s Impact on IT Workforces: A common thread in the discussions was the readiness of the current IT workforce to manage AI technology. CIOs highlighted the need for upskilling and increased funding to ensure that teams can handle the complexity of AI projects and maintain data integrity.
    • Data Security and AI’s Role: With cybersecurity being a top priority, state CIOs acknowledged AI’s potential to enhance security. At the same time, they emphasized the importance of safeguarding government data, particularly as AI’s role expands and the threat landscape grows.
    • Enterprise Architecture Maturity: Only 13% of state CIOs reported having high levels of enterprise architecture (EA) maturity in their states, sparking discussions on the necessity of well-structured policies and data management practices to support AI initiatives effectively.

    State governments are already implementing AI, with North Carolina, Colorado, and Arizona among the states leading the charge. As these states confront challenges and embrace new approaches, it’s clear that the path forward for AI in the public sector will require a careful balance of innovation, security, and ethical considerations.

    Why Read These Articles?

    AI Strategy and Leadership: The approaches state CIOs take in AI adoption offer valuable insights into navigating similar challenges in public sector innovation.

    Optimizing Data Practices: The focus on enterprise architecture and data readiness directly aligns with the need for robust data management and modernization.

    Reskilling for AI: Addressing the skill gaps in IT workforces highlights the importance of training and reskilling initiatives.

    Mitigating AI-Related Risks: The discussions around security and privacy concerns emphasize the critical need for risk management strategies that ensure safe and compliant AI deployments.

  • This Week in Government Technology – September 22nd-29th, 2024

    Federal Highlights

    The U.S. Department of Labor has proactively promoted inclusive hiring practices by introducing an in-house AI framework. This new initiative focuses on reducing bias in the hiring process and improving recruitment efforts for disabled individuals. Meanwhile, over 20 federal executive branch agencies have released detailed compliance plans, following recent guidance from the Office of Management and Budget (OMB) on AI governance. These plans outline AI procurement, deployment, risk assessment, and data management strategies, signaling a more structured approach to federal AI oversight.

    In addition, the General Services Administration (GSA) reported a 41% increase in enrollment for its newly launched AI safety and training curriculum for federal employees. This significant uptake reflects a growing commitment within federal agencies to equip staff with the necessary knowledge and skills to manage and deploy AI technologies safely.

    State-Level News

    Maryland has joined New Jersey and California in rolling out an AI training curriculum tailored for public sector employees. This initiative aims to ensure that government workers are prepared to engage effectively with AI technologies in their daily operations, enhancing overall state agency efficiency.

    In New York, the city’s sixth annual Transit Tech Lab Competition concluded, bringing together dozens of AI developers eager to pitch solutions for NYC’s transit systems. Of the many participants, 18 developers were selected to partner with transit authorities over the next year to pilot AI technologies aimed at improving the efficiency of the city’s public transportation.

    In Other News

    A bipartisan group of Stanford professors released a series of essays titled “The Digitalist Papers,” offering diverse viewpoints on AI regulation in the U.S. Among the contributors, former Secretary of State Condoleezza Rice advocated for more public involvement in shaping the role of AI in government. In contrast, others argued for minimal government regulation of AI development.

    Lastly, Prepared, a leading supplier of emergency dispatch technology, raised $27 million in funding to develop AI tools that enhance 911’s rapid response capabilities. Meanwhile, StateTech published an insightful op-ed this week exploring how federal agencies can leverage AI to design programs and spaces that are more accessible to disabled individuals.

  • Perspectives on Governor Newsom’s AI Regulation Veto

    Governor Gavin Newsom of California has vetoed a landmark AI regulation bill that would have established comprehensive safety measures for artificial intelligence (AI). The bill, SB 1047, which aimed to regulate large-scale AI systems, faced criticism for its broad scope, particularly its uniform standards across both high- and low-risk AI applications.

    In his veto statement, Newsom emphasized that the bill failed to differentiate between AI use cases involving sensitive data and critical decision-making versus more routine tasks, and that a one-size-fits-all approach could stifle innovation. As the broader public discourse continues to evolve, we present four perspectives from leading media outlets—The Associated Press, The New York Times, The Wall Street Journal, and The Financial Times—each offering unique insights into the implications of this decision.

    Why Read These Articles?

    Regulatory Implications for AI in the Public Sector: The debate around SB 1047 could influence how future AI governance is shaped, particularly for government agencies.

    Balancing Innovation with Risk: These articles delve into the challenge of promoting AI innovation while ensuring public safety, a key concern for anyone involved in AI governance.

    Understanding Risk-Based AI Governance: The discussion on SB 1047’s failure to differentiate between high- and low-risk AI applications highlights the importance of tailoring AI regulations to specific use cases.

    Shaping Future AI Policy: California often sets the stage for broader tech policy trends.

  • This Week in Government Technology – September 15th-22nd, 2024

    State and Local AI Adoption

    EY recently released a nationwide survey revealing that state and local agencies are lagging behind their federal counterparts in adopting AI technologies. The survey found that only 51% of state and local agencies use AI daily, compared to 64% of federal agencies. It also highlighted the major challenges to wider AI adoption, including a lack of governance and ethical frameworks, inadequate data infrastructure, and uncertainty about which AI tools best meet local needs.

    These obstacles took center stage at the Digital Benefits Conference hosted by the Beeck Center for Social Impact and Innovation. Leaders from local and state governments discussed how the absence of proper governance and data quality programs is holding back AI advancements in public benefit programs.

    Indiana’s AI Innovations

    Indiana stood out this week with two major AI-related announcements. The state has become one of the first to deploy a generative AI chatbot on its official website, designed to guide users through state services and programs. Officials emphasized the importance of rigorous safety testing, mindful of lessons learned from previous AI missteps in other jurisdictions, such as New York City.

    Additionally, Indiana announced its plans to integrate an AI assistant into the “Pivot” software, a tool that connects job seekers with resources across six state agencies. The AI will help tailor recommendations for job seekers, though concerns about the need for a robust data quality program remain.

    Federal AI Updates

    In Washington, the Committee on House Administration and the Office of the Chief Administrative Officer introduced a House-wide policy for safely deploying AI. This framework includes ethical guidelines and approved use cases, ensuring responsible AI use by House members and offices.

    Meanwhile, a new bill progressing through the Senate could require federal agencies to designate officials to oversee service delivery improvements, including leveraging AI to enhance government efficiency.

    However, not all federal AI developments are being met with optimism. The U.S. Commission on Civil Rights chair raised concerns about the risks associated with AI-driven facial recognition technologies, calling for federal oversight to prevent civil rights violations. Senators Chuck Schumer and Ed Markey echoed these concerns in a letter to the White House, urging the establishment of civil rights offices in federal agencies using AI for significant decision-making.

    AI Safety on the Global Stage

    President Biden announced plans for an international AI Safety Summit in San Francisco post-election, where government leaders and experts from the U.S. and the EU will gather to discuss unified AI safety policies. This signals an increasing push toward global cooperation on AI governance.

    AI Legislation

    Government Technology took a deep dive into the 120+ AI-related bills currently making their way through Congress. Most of these focus on AI’s applications in science, commerce, and defense, with few addressing the need for governance or regulatory frameworks to curb algorithmic bias in the public and private sectors.

    In California, debate continues over a broad AI regulatory bill. An op-ed published by Government Technology urged Governor Newsom to embrace regulation now, even if imperfect, and refine the law as the state gains more practical experience with AI technologies.

  • This Week in Government Technology – September 8th-15th, 2024

    Building Trust Through AI in Colorado’s New Data Governance Framework

    Colorado’s newly released “Guide to Artificial Intelligence” aims to support state agencies in implementing generative AI technologies. Amy Bhikha, the state’s Chief Data Officer, emphasizes that success will depend on fostering innovation, building trust, and enhancing the state’s data governance program. The guidelines focus on balancing innovation with trust through a robust data governance framework, which involves inventorying state data and ensuring it is high quality and free of bias. This initiative follows earlier AI legislation, SB 24-205, which established consumer protections for AI systems. Bhikha also highlighted the need for AI literacy among state employees and collaboration with other states to shape AI policies.

    NASCIO Survey Reveals Gaps in AI Data Management Across States

    A recent survey by NASCIO (the National Association of State Chief Information Officers) reveals that while 95% of the 46 state respondents recognize the growing significance of AI and generative AI in data management, only 22% have implemented dedicated data quality programs. The survey found that states concentrate their efforts on data governance and business intelligence, while areas such as metadata and master data management remain under-prioritized. NASCIO’s executive director, Doug Robinson, stresses the need for robust data quality frameworks and a data-centric culture to drive innovation and improve citizen services. The report also identifies a skills gap, with many agencies lacking data stewards and data literacy managers.

    New Legislation Aims to Increase Transparency in New York City’s AI Use

    New York is taking significant steps to enhance the transparency and responsible use of artificial intelligence (AI) in city and state government. In New York City, Councilmember Jennifer Gutiérrez is introducing legislation that would require the city to publish a public list of the approved AI tools used by city agencies. This effort aims to boost transparency and accountability by providing detailed information on how AI tools are used and how they handle data. Scheduled for release in February 2025, the list will be updated every six months to foster collaboration between city agencies and ensure responsible use of AI technologies.

    At the state level, the New York State Forum has established an AI Workgroup led by Gail Galusha, Director of Data and AI Governance. This workgroup, which includes both public and private sector members, is tasked with deepening the understanding of AI in the public sector, promoting ethical AI practices, and equipping state governments with tools for AI implementation. Both initiatives reflect New York’s leadership in the evolving landscape of AI governance and responsible technology use.

    New AI Infrastructure Task Force to Boost US Leadership and Clean Energy Initiatives

    The White House announced the creation of a new task force on AI data center infrastructure following a meeting with top executives from major AI and tech companies, including OpenAI, Nvidia, and Anthropic. The task force will focus on securing clean energy, workforce needs, and accelerating the construction of AI data centers to strengthen U.S. leadership in AI. The initiative aims to support national security, create jobs, and ensure the infrastructure is powered by renewable energy. However, civil society groups expressed concerns over environmental impact and transparency, warning of potential negative consequences for energy costs and climate goals.

    House Panel Passes AI Bills to Boost Research, Workforce, and Safety Initiatives

    The House Science, Space, and Technology Committee passed nine bipartisan AI-related bills, including legislation to codify the National AI Research Resource (NAIRR) and establish the AI Safety Institute under a new name. These bills aim to enhance AI research, workforce development, and safety measures. While many House bills have counterparts moving through the Senate, differences remain, particularly around funding for AI safety initiatives. Legislators emphasize the importance of advancing AI technology without imposing burdensome regulations, but concerns over underfunding and international competition remain key discussion points.

  • Salesforce Expands into Public Sector with Industry-Specific AI Tools

    You can read the full article here.

    Salesforce has introduced an “Industries AI” initiative designed to cater to specific sectors, including government. This project provides AI tools to improve public sector operations, particularly in areas such as benefit application processing, digital assessments, and inspection preparation. These tools aim to streamline processes, reduce administrative burden, and enhance decision-making. The initiative, which also covers industries like energy, education, and healthcare, signals Salesforce’s growing focus on public sector needs and highlights AI’s role in transforming government operations.

    Why Read This Article?

    Targeted AI tools for government: The article provides insights into AI solutions specifically designed to address public sector needs.

    Real-world use cases: The article highlights actionable examples, such as benefits processing and inspection tools.

    Innovation in government technology: Understanding new AI technologies that large companies like Salesforce are investing in can help leaders anticipate future trends.

  • This Week in Government Technology – September 1st-8th, 2024

    State of California Seeks AI Solutions for Public Sector Problems

    California is turning to generative AI (GenAI) to tackle some of its most pressing challenges, including housing, homelessness, and budget analysis. To explore how large language models (LLMs) might assist, the state is hosting a GenAI showcase where vendors will demonstrate cutting-edge capabilities. The initiative follows Governor Gavin Newsom’s executive order and summit examining GenAI’s potential to enhance public services. Vendors will have the opportunity to present AI solutions to key state departments to address inefficiencies and improve service delivery for residents. The first showcase will focus on California’s housing crisis, inviting AI developers to present their technology to the state government for use case evaluation and possible acquisition.

    Presidential Advisors Advocate for AI Testing Protocols in Policing

    An advisory panel to the president has approved recommendations requiring federal law enforcement agencies to follow standardized protocols when field-testing AI tools. The National AI Advisory Committee’s Law Enforcement Subcommittee proposed a checklist for testing AI, emphasizing transparency and performance measurement. The recommendations aim to establish clear guidelines for AI testing, ensuring tools are safe and effective before broader adoption. The subcommittee also urged publicizing testing results and advocated for additional funding to support state and local law enforcement’s AI testing efforts. These steps mark progress toward responsible AI integration in law enforcement.

    Maryland’s Intentional Approach to AI Technology

    Maryland officials are taking a cautious approach to AI due to concerns over data security and unpredictable software behavior. While acknowledging AI’s potential to streamline services, officials stress the importance of safeguarding sensitive information. Governor Wes Moore’s executive order supports AI’s benefits but calls for strict oversight, and state agencies are closely monitoring AI’s use to prevent unauthorized data sharing. Local leaders are also ensuring that AI applications are carefully tested before broader implementation, protecting public trust in government operations.

    Civil Rights Groups Challenge DHS’s Use of AI in Immigration Enforcement

    A coalition of over 140 immigrant and civil rights organizations has sent a letter to Secretary of Homeland Security Alejandro Mayorkas, raising concerns about the Department of Homeland Security’s (DHS) use of artificial intelligence. The letter calls for DHS to suspend certain AI systems, particularly those used by Customs and Border Protection and Immigration and Customs Enforcement, citing violations of federal policies on responsible AI use. As agencies face upcoming deadlines related to AI use case inventories, the coalition argues that AI tools deployed for immigration enforcement lack transparency and may perpetuate bias and discrimination.

    A Comprehensive Guide to Data Policy for AI Development

    The Data Foundation has released a guide aimed at helping policymakers navigate the challenges of data policy in the context of artificial intelligence. As AI systems heavily rely on vast amounts of data, the guide emphasizes the importance of high-quality, responsibly governed data. It highlights key components for sound data practices, including data integrity, privacy protections, transparency, and technical infrastructure. The guide also addresses the evolving landscape of AI, urging policymakers to develop comprehensive approaches to ensure AI data use aligns with public interests, ethical standards, and democratic values.

  • How Intelligent Document Processing is Opening the Door to AI in State and Local Agencies

    You can read the full article here.

    This op-ed by Brent Blawat, an AI strategist with over 24 years of experience in technology development, explores the increasing adoption of Intelligent Document Processing (IDP) within state and local governments. Despite agencies’ initial skepticism about artificial intelligence, IDP stands out as a palatable AI application, delivering immediate value through automation while incorporating human oversight. Blawat highlights the role of IDP in streamlining document-heavy processes such as tax form processing and DMV interactions, offering a bridge between the traditional and digital worlds without displacing human workers. These attributes make IDP an appealing first step for agencies that are cautious about AI implementation.

    The article underscores the inherent safeguards in IDP, such as human intervention when confidence scores are low, which ensures reliability and trust. Additionally, IDP can significantly reduce manual data entry, enabling public sector workers to focus on more strategic tasks. Blawat concludes that IDP’s potential to handle routine tasks efficiently and cost-effectively makes it a gateway to broader AI initiatives in government. With the right safeguards, IDP could ease public sector agencies into more advanced AI applications.
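    The confidence-score safeguard Blawat describes is essentially a routing rule: extracted fields the model is sure about flow straight through, while anything below a threshold is queued for a human reviewer. The minimal sketch below illustrates that general pattern in Python; the field names, the 0.90 threshold, and the route_document helper are illustrative assumptions, not details from the article or any particular IDP product.

    ```python
    from dataclasses import dataclass


    @dataclass
    class ExtractedField:
        """One field pulled from a scanned document; names are illustrative."""
        name: str
        value: str
        confidence: float  # model confidence in [0.0, 1.0]


    # Assumed cutoff; real deployments tune this per document type.
    CONFIDENCE_THRESHOLD = 0.90


    def route_document(fields: list[ExtractedField]) -> str:
        """Auto-accept a document only if every extracted field is high confidence;
        otherwise flag it for a human reviewer."""
        low = [f.name for f in fields if f.confidence < CONFIDENCE_THRESHOLD]
        if low:
            return f"human_review (check: {', '.join(low)})"
        return "auto_accept"


    if __name__ == "__main__":
        tax_form = [
            ExtractedField("taxpayer_name", "Jane Doe", 0.98),
            ExtractedField("adjusted_gross_income", "54,200", 0.71),
        ]
        print(route_document(tax_form))  # -> human_review (check: adjusted_gross_income)
    ```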

    Why Read This Article?

    Government adoption: Highlights how IDP is becoming a trusted AI use case, offering a practical entry point for cautious agencies.

    Human oversight: Emphasizes the built-in safeguards in IDP, where human review ensures accuracy, reducing the risk of errors.

    Workforce enhancement: Shows how IDP supports workers by automating routine tasks, allowing them to focus on higher-value, strategic activities.

    Cost-effective AI: Demonstrates how IDP can provide a measurable return on investment, an appealing factor for budget-conscious agencies.

  • This Week in Government Technology – August 25th-September 1st, 2024

    AI Chatbots Writing Police Reports Raises Concerns

    Police departments in cities like Oklahoma City and Lafayette are beginning to experiment with AI tools that draft initial crime reports from body camera footage and audio recordings. This new technology, developed by Axon, promises to reduce the time officers spend on paperwork, freeing them to focus on active policing. While officers have praised its efficiency, legal experts and community advocates are increasingly concerned about the potential for AI to introduce errors or biases into critical documents used in the justice system. Prosecutors and watchdogs are particularly cautious about the technology’s use in cases that could affect arrests or court outcomes, noting the need for human oversight to ensure accuracy and accountability.

    AI Technology Enhances Emergency Response in Jefferson County

    Jefferson County’s 911 communications center, Jeffcom 911, is implementing a new AI-powered platform to improve emergency response efficiency and address staffing challenges. The cloud-based system, provided by Carbyne, will reduce call times, enhance caller location accuracy, and streamline the handling of emergency and non-emergency calls. AI tools like language translation and machine learning-based call triage will help manage surges in calls during major incidents, ensuring resources are allocated effectively. The platform also supports accessibility with features like text messaging and video calling, providing enhanced services for people with disabilities.
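    The machine learning-based call triage mentioned above amounts to ranking incoming calls by a model-estimated urgency so dispatchers reach the most critical ones first during a surge. The sketch below illustrates that general pattern; the keyword-based urgency_score stand-in, the Call fields, and the scoring values are illustrative assumptions, not details of Carbyne’s platform.

    ```python
    import heapq
    from dataclasses import dataclass, field


    @dataclass(order=True)
    class Call:
        priority: float                      # lower value = answered sooner
        call_id: str = field(compare=False)
        summary: str = field(compare=False)


    def urgency_score(summary: str) -> float:
        """Hypothetical stand-in for a trained classifier: returns urgency in [0, 1]."""
        keywords = {"fire": 1.0, "not breathing": 1.0, "chest pain": 0.9, "noise complaint": 0.1}
        return max((s for kw, s in keywords.items() if kw in summary.lower()), default=0.5)


    def triage(calls: list[tuple[str, str]]) -> list[Call]:
        """Order a surge of (call_id, summary) pairs so the most urgent are handled first."""
        queue: list[Call] = []
        for call_id, summary in calls:
            # Higher urgency -> lower priority value -> popped from the heap earlier.
            heapq.heappush(queue, Call(1.0 - urgency_score(summary), call_id, summary))
        return [heapq.heappop(queue) for _ in range(len(queue))]


    if __name__ == "__main__":
        surge = [
            ("c1", "Noise complaint downtown"),
            ("c2", "Caller reports chest pain"),
            ("c3", "House fire on Elm Street"),
        ]
        for call in triage(surge):
            print(call.call_id, "-", call.summary)
    ```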

    New AI Guidance Released While Tech Advocates Call for Software Reform

    The Biden administration has released its guidelines for federal agencies to submit their 2024 AI use case inventories, with a submission deadline of December 16, 2024. Notable updates in the final version include a more focused definition of excluded AI use cases and a new section for agencies to request extensions to deadlines for complying with risk management practices. At the same time, several major tech advocacy groups are urging Congress to pass critical legislation that would enhance oversight and transparency in software procurement across federal agencies before the end of this session.

    Former Labor CIO Calls for Accelerated Digital Modernization

    Former Labor Department CIO Gundeep Ahluwalia emphasized the need for continued modernization to support the rapidly evolving jobs market. During his tenure, foundational IT improvements like Wi-Fi access and laptops paved the way for advanced technologies such as AI and robotic process automation. As he transitions to a new role, Ahluwalia stressed that the department must accelerate its digital transformation to help retrain the American workforce for future jobs driven by emerging technologies, including green energy and AI. He warned that without sustained investment, the Labor Department risks falling behind in its mission to support and prepare workers for the future.

    NLRB Names David Gaston as First Chief AI Officer

    The National Labor Relations Board (NLRB) has appointed David Gaston as its first chief artificial intelligence officer. Gaston, a lawyer with significant experience in technology policy and digital evidence, will oversee the NLRB’s AI initiatives and ensure compliance with federal AI directives. This appointment comes amid a broader federal push to establish AI governance across agencies, with the Biden administration emphasizing regulatory oversight of AI technology. Gaston’s legal background makes his appointment unique compared to other agencies that have chosen more technical leaders for similar roles.