  • California Lawmakers Tackle the Challenges of Emerging Technologies

    You can read the full article here.

    California lawmakers are actively addressing concerns around artificial intelligence (AI) through a series of legislative measures, reflecting growing anxiety about the technology’s potential societal impacts. With approximately 50 AI-related bills, they aim to introduce safeguards that mitigate risks posed by AI, such as job displacement, data security issues, and racial discrimination. These legislative efforts respond to AI’s rapid advancement, which has surprised many, including experts. Proponents argue that proactive regulation is necessary to avoid the pitfalls of social media’s unchecked growth. However, there is significant pushback from tech industry groups, which fear that excessive regulation could stifle innovation and erode California’s competitive edge.

    Among the proposed measures are bills that mandate human oversight on driverless heavy-duty trucks and restrict the automation of jobs in public service call centers. Additionally, a bill by Senator Scott Wiener requires safety testing for large AI models to prevent misuse. Despite the tech industry’s concerns about overregulation, public sentiment appears to favor such safeguards, with polls indicating strong support for AI safety regulations. Advocates stress that without these measures, the public and workers remain vulnerable to the unchecked deployment of AI technologies, which could exacerbate existing biases and inequalities.

    Why Read This Article?

    Understand the Legislative Landscape: Gain insights into California’s current legislative efforts to regulate AI to address its potential risks.

    Public vs. Tech Industry Perspectives: Learn about the differing viewpoints between lawmakers and tech industry representatives on AI regulation.

    Specific Legislative Proposals: Get detailed information on specific bills and their intended impact on AI deployment and safety.

    Public Opinion and Advocacy: See how public sentiment and advocacy groups influence AI policy decisions.

  • This Week in Government Technology – June 9th-16th, 2024

    As artificial intelligence (AI) continues revolutionizing various sectors, recent developments underscore the critical need for robust data management and strategic oversight in the public sector. This week’s highlights include key advancements in AI deployment, state-led modernization efforts, and legislative initiatives addressing AI risks. From Atlanta’s innovative approach to water infrastructure and California’s pioneering generative AI traffic safety project to federal lawmakers’ push for comprehensive AI oversight, these updates showcase the multifaceted impact of AI across domains.

    Preparing Government Data for AI Success

    In an op-ed, Celeste O’Dea, vice president of federal capture and engagement at Oracle, highlights a key challenge government agencies face as they explore the potential of AI: preparing the necessary data for successful AI deployment. AI can significantly enhance efficiency across operations, but its effectiveness hinges on well-organized, accessible data. Many agencies are currently dealing with siloed and duplicated data spread across different platforms, including unstructured data stored on personal devices or held only as institutional knowledge. To leverage AI, agencies need to undertake a comprehensive data management overhaul, starting with identifying the key questions they want answered and then organizing data to eliminate redundancies and ensure completeness. This requires a disciplined approach and investment in scalable, cloud-based technologies, along with strong management commitment and expert guidance.
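
    The op-ed describes this cleanup at a strategic level. As a rough illustration of what the deduplication and completeness check might look like in practice, the sketch below merges two hypothetical record extracts in Python with pandas; the file names, columns, and required fields are illustrative assumptions, not details from the article.

    ```python
    # Minimal sketch: consolidate records from two siloed systems, drop duplicates,
    # and flag incomplete rows. All names here are hypothetical.
    import pandas as pd

    records_a = pd.read_csv("benefits_system_export.csv")   # hypothetical extract
    records_b = pd.read_csv("call_center_export.csv")       # hypothetical extract

    # Align field names before combining so the same field is not counted twice.
    records_b = records_b.rename(columns={"case_number": "case_id"})
    combined = pd.concat([records_a, records_b], ignore_index=True)

    # Eliminate redundant rows describing the same case.
    deduplicated = combined.drop_duplicates(subset=["case_id"], keep="last")

    # Flag incomplete records so gaps are fixed before the data feeds an AI system.
    required = ["case_id", "status", "last_updated"]
    incomplete = deduplicated[deduplicated[required].isna().any(axis=1)]
    print(f"{len(incomplete)} of {len(deduplicated)} records are missing required fields")
    ```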

    California’s First Generative AI Contract to Enhance Road Safety

    The California Department of Transportation has awarded its first generative AI contract to Inrix, a transportation data and software firm, to implement a proof of concept for their AI traffic software, Inrix Compass. This innovative technology leverages real-time and historical traffic data and statewide crash and roadway inventories to assess risks and recommend safety improvements. The AI system is designed to predict traffic conditions and suggest changes to enhance safety across all types of roads in California. This initiative follows a call for generative AI solutions to reduce traffic and protect vulnerable road users. It reflects a growing trend of states utilizing advanced data analytics for roadway safety.

    AI to Boost Atlanta’s Efforts in Modernizing Water Infrastructure

    In response to recent water main breaks and a boil-water advisory in Atlanta, Mayor Andre Dickens announced plans to deploy AI-enhanced devices on water line valves to detect and address issues in the city’s aging water infrastructure. Collaborating with the US Army Corps of Engineers, the city aims to develop a comprehensive plan to assess and upgrade its water system. Mayor Dickens also pointed to potential federal funding, which could amount to billions of dollars, to support these improvements.

    Lawmakers Push for AI Oversight Amid Rising Concerns

    In light of the recent bipartisan Senate report that largely avoided discussing the potential dangers of generative artificial intelligence, several lawmakers, including Senators Mitt Romney, Jack Reed, Jerry Moran, and Angus King, advocate for federal oversight of advanced AI models. They emphasize the importance of establishing regulatory frameworks to mitigate existential and security risks posed by AI technologies, such as ChatGPT, Claude 3, and Gemini Ultra. The proposed oversight aims to prevent AI from being misused in ways that could lead to biological, chemical, cyber, or nuclear threats.

  • AI Training for a Future-Ready Workforce in Oklahoma

    You can read the full article here.

    Oklahoma has partnered with Google to launch the AI Essentials course, a free training program designed to provide foundational AI skills to 10,000 residents at a time. Announced on May 30th, this initiative is part of Governor Kevin Stitt’s broader strategy to make Oklahoma a leader in AI technology. The course is accessible to participants without prior AI experience or a degree and aims to bridge the AI skills gap, enhancing the state’s workforce competitiveness. By offering hands-on experience and targeting diverse participants, including those in rural areas, the program seeks to prepare residents for the evolving job market and attract more companies to Oklahoma.

    This initiative is a key component of Oklahoma’s comprehensive approach to AI, which includes forming the Task Force on AI and Emerging Technologies and integrating AI training into public school curricula. The state will assess the program’s impact and continuously improve engagement efforts by tracking metrics such as course completion rates and participants’ confidence in using AI. Oklahoma aims to create an agile and skilled workforce through this partnership, positioning itself as a hub for AI innovation and economic growth.

    Why Read This Article?

    AI Education: Discover how Oklahoma is addressing the AI skills gap to prepare its workforce for future job markets.

    AI’s Impact on the Workforce: Understand the broader implications of this initiative for economic development and workforce competitiveness.

    Data-Driven Decision Making: Gain insights into the metrics used to track the success and impact of the training program.

  • This Week in Government Technology – June 2nd-9th, 2024

    As artificial intelligence (AI) continues revolutionizing various sectors, this week highlighted key advancements in AI governance, public-sector collaboration, and modernization efforts. From the launch of an international AI hub to New Jersey’s innovative employee engagement strategy and the VA’s progress amid budget challenges, here are the essential updates.

    FPF Launches International Hub for AI Governance

    The Future of Privacy Forum (FPF) has launched the FPF Center for Artificial Intelligence to support AI governance and policymaking on an international scale. The center aims to establish best practices, conduct research, and provide resources for stakeholders to navigate AI-related challenges, particularly in the public sector. It will include sector-specific working groups and a leadership council comprising senior public officials, academics, and the public. The center will also focus on AI assessments and their intersection with existing privacy assessments, supported by funding from the National Science Foundation and the Department of Energy.

    New Jersey Engages State Workers to Shape AI Future

    New Jersey is pioneering a collaborative approach to integrating artificial intelligence (AI) in state government by actively involving public-sector employees in the process. Through a comprehensive survey, the state aims to gather insights into employees’ knowledge, attitudes, and interests regarding AI, which will inform AI training and upskilling initiatives. This inclusive strategy seeks to enhance public services and empower workers, ensuring that AI tools are used responsibly and effectively. The state’s commitment to training its workforce and co-creating AI governance frameworks sets a model for public and private sectors in leveraging AI’s transformative potential.

    VA’s AI and Modernization Efforts Thrive Amid Budget Challenges

    Despite facing potential budget cuts and scrutiny from lawmakers, the Department of Veterans Affairs (VA) is making strides in AI and modernization efforts. Charles Worthington, the VA’s Chief Artificial Intelligence Officer and Chief Technology Officer, discussed the agency’s focus on targeted hiring for AI experts and sustaining its modernization initiatives. He emphasized the critical role of AI in transforming VA’s operations and the importance of integrating new technologies while maintaining existing systems. Worthington highlighted the need for a robust governance framework and AI governance council to manage AI use cases, ensuring secure and effective deployment. The VA’s efforts include enhancing technical infrastructure, upskilling the workforce, and running AI pilot projects to improve service delivery to veterans.

  • Public Perception on Generative AI in Government Services

    You can read the full article here.

    Keely Quinlin reports on KPMG’s 2024 annual survey, which points to strong public optimism about generative artificial intelligence (GenAI) in government services. More than half of respondents see GenAI as a game-changer, capable of enhancing services such as healthcare benefits and motor vehicle operations. However, perceived adequacy of the government’s current use of technology is low, with only 30% of participants expressing satisfaction. The survey also uncovers significant cybersecurity concerns: a majority of respondents are more worried about data breaches at government agencies than at private companies.

    Lorna Stark, a KPMG executive, underscores the imperative for government agencies to embrace technology like generative AI to meet rising public expectations and enhance service efficiency, transparency, and constituent engagement.

    Why Read This Article?

    • Understand Public Attitudes on Generative AI: Gain insights into how different demographics perceive the role of generative AI in improving government services.
    • Learn About Regional Differences: Discover how public opinion varies across major cities and regions in the U.S.
    • Explore Generational Views: Understand the differing confidence levels in government technology usage across age groups.
    • Cybersecurity Concerns: Learn about the public’s heightened concerns regarding data security in government agencies.
    • Governance and Innovation: See how government agencies can leverage technology to improve efficiency and public engagement in service delivery.
  • This Week in Government Technology – May 26th-June 2nd, 2024

    As artificial intelligence (AI) continues to advance, this week featured significant developments in government initiatives and legislative actions. From California’s groundbreaking AI bills to new educational initiatives, here are the key highlights.

    California Passes Landmark AI Bills: New Standards, Anti-Bias Measures, and Research Hub

    California Senate bills 892 and 893, designed to establish an AI risk management standard and a research hub, passed unanimously and are now proceeding to the Assembly. Senate Bill 892 instructs the California Department of Technology to implement AI risk management regulations and requires risk assessments in contracts for automated decision systems. Senate Bill 893 proposes the creation of the California Artificial Intelligence Research Hub to facilitate AI research and development through collaboration between government, academia, and the private sector. Senator Steve Padilla stressed the bipartisan necessity of these measures to protect against unchecked technological growth. Both bills were approved with a 37-0 vote, mirroring similar initiatives in New Jersey and New York.

    California lawmakers have advanced several significant AI-related proposals aimed at building public trust, combating algorithmic discrimination, and outlawing deepfakes related to elections and pornography. Additional measures would protect jobs and likenesses from AI-generated clones and impose penalties for unauthorized digital cloning of deceased individuals. Lawmakers are also considering regulations for large generative AI systems to prevent misuse that could cause disasters. These efforts position California as a leader in AI regulation, emphasizing the need for robust oversight and ethical guidelines to mitigate the risks of rapidly advancing AI technologies.

    Code for America Launches AI Studio to Enhance Government AI Adoption

    Code for America has announced the launch of AI Studio, a new series of workshops to assist governments in implementing human-centered artificial intelligence. These workshops will be available in-person and virtually, focusing on the importance of testing, scaling, and responsibly adopting AI tools. The initiative emphasizes mitigating risks and exploring potential use cases while ensuring that AI technologies treat individuals with dignity and respect, particularly in accessing essential services such as food assistance and tax benefits.

    State CIOs Address AI Workforce Preparedness in Indiana and Texas

    A recent Statescoop Priorities podcast featuring Indiana and Texas’s Chief Information Officers (CIOs) delves into their states’ strategies for adopting AI technology, particularly emphasizing workforce training and preparedness. The discussion, which echoes concerns raised at the NASCIO 2024 conference, addresses the importance of equipping state workforces with the necessary skills and knowledge to integrate AI into government operations effectively.

  • This Week in Government Technology – May 19th-26th, 2024

    As the world of artificial intelligence (AI) continues to expand, this week showcased government leadership prioritizing funding, regulatory frameworks, ethical considerations, and the deployment of emerging technology pilots across local and federal levels.

    Colorado Springs Chatbot

    The city of Colorado Springs has introduced “AskCOS,” an AI-enabled chatbot designed to enhance constituent engagement by answering questions based on data from the city’s official website. This innovative tool, powered by Citibot and supported by city staff, can respond in 71 languages, providing information on various topics such as wheelchair-accessible parks and city council meeting schedules. The chatbot is integrated into the city’s website and is accessible via text, offering a streamlined way for residents to find information and report issues. City officials will monitor AskCOS to ensure quality responses and to gain insights for improving website content and resident engagement.
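
    The article does not describe Citibot’s internals, so purely as an illustration of the general pattern of grounding a chatbot’s answers in published website content, the sketch below ranks a handful of made-up site snippets against a question. The snippets, the question, and the use of scikit-learn TF-IDF are assumptions, not details from the article.

    ```python
    # Illustration only: answer a resident question by ranking published site content.
    # The snippets and question are made up; this is not Citibot's implementation.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.metrics.pairwise import cosine_similarity

    site_snippets = [
        "Wheelchair-accessible playgrounds are available at Memorial Park and Panorama Park.",
        "City Council regular meetings are held on the second and fourth Tuesday of each month.",
        "Report a pothole online or by texting the city's service line.",
    ]
    question = "When does the city council meet?"

    # Rank the site content by similarity to the question and answer with the best match.
    matrix = TfidfVectorizer().fit_transform(site_snippets + [question])
    scores = cosine_similarity(matrix[-1], matrix[:-1]).ravel()
    print(site_snippets[scores.argmax()])
    ```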

    Las Vegas Explores AI to Improve Traffic Safety

    The Regional Transportation Commission (RTC) of Las Vegas is piloting an innovative program called Advanced Intersection Analytics to enhance traffic safety at high-risk intersections. This initiative leverages AI, predictive analytics, historical data, cameras, and sensors to gather detailed information on traffic patterns and safety issues. The program, in collaboration with local governments and police departments, aims to identify and address violations such as red-light running, lane infractions, and pedestrian non-compliance. Data collected will support enforcement efforts and guide potential engineering changes to improve road safety.

    Report Reveals Potential Bias in IRS AI Audit System

    A recent report by the Government Accountability Office (GAO) has highlighted potential biases in the IRS’s primary AI tool for flagging tax returns for audit. The Dependent Database (DDB) incorporates human inputs, which could introduce unintended racial biases. Despite regular reviews, the IRS has not comprehensively evaluated the DDB’s rules and filters, some of which have not been updated since 2001. This lack of thorough review may contribute to demographic disparities in audit selections, as indicated by a 2023 Stanford University study showing that Black taxpayers are disproportionately targeted for audits. The GAO urges the IRS to follow an AI accountability framework to address these issues, recommending six actions, all of which the IRS has agreed to implement.
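
    To make “demographic disparities in audit selections” concrete, the sketch below computes selection rates by group and a simple disparity ratio. The data is synthetic and the check is generic: it does not reflect the DDB’s actual rules or any real IRS figures.

    ```python
    # Generic disparity check on synthetic data; not the DDB's rules or real IRS figures.
    import pandas as pd

    returns = pd.DataFrame({
        "group":    ["A", "A", "A", "A", "B", "B", "B", "B"],
        "selected": [1,   0,   0,   0,   1,   1,   0,   1],   # 1 = flagged for audit
    })

    # Audit-selection rate per demographic group.
    rates = returns.groupby("group")["selected"].mean()

    # Ratio of the highest to the lowest selection rate; values well above 1
    # signal the kind of disparity the GAO framework asks agencies to monitor.
    print(rates.to_dict(), round(rates.max() / rates.min(), 2))
    ```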

    AI in Development to Enhance Government Websites

    The General Services Administration (GSA) is developing an open-source tool called “Gov CX Analyzer” to enhance how government websites gather and analyze user interaction data. Announced by GSA Administrator Robin Carnahan during the Workday Federal Forum, this AI-powered tool aims to provide deeper insights into user behavior, moving beyond traditional survey feedback to identify and address friction points on government websites. This initiative aligns with similar efforts by the Office of Management and Budget to improve customer experience performance across federal agency sites.
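
    The Gov CX Analyzer is still in development, and the article does not detail how it works. As a generic illustration of surfacing friction points from page-level interaction data, the sketch below flags pages with high abandonment or form-error rates; the fields, figures, and thresholds are hypothetical.

    ```python
    # Generic friction-point scan over hypothetical page-interaction data;
    # not the Gov CX Analyzer's design.
    import pandas as pd

    pages = pd.DataFrame({
        "path":         ["/apply", "/status", "/contact", "/renew"],
        "visits":       [12000,    9000,      4000,       7000],
        "abandonments": [5400,     900,       300,        2800],
        "form_errors":  [2100,     150,       40,         1900],
    })

    pages["abandon_rate"] = pages["abandonments"] / pages["visits"]
    pages["error_rate"] = pages["form_errors"] / pages["visits"]

    # Flag pages where users disproportionately give up or hit form errors.
    friction = pages[(pages["abandon_rate"] > 0.30) | (pages["error_rate"] > 0.20)]
    print(friction[["path", "abandon_rate", "error_rate"]])
    ```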

  • HHS Unveils Plan for Responsible Innovation in Government Services

    You can read the full article here.

    The U.S. Department of Health and Human Services (HHS) has announced a new strategy to ensure the ethical and effective use of artificial intelligence (AI) in public benefits programs managed by state, local, tribal, and territorial governments. This initiative aims to leverage AI advancements to significantly improve the administration and delivery of public benefits, making them more efficient and responsive to recipients’ needs. In line with the OMB Memorandum M-24-10, HHS is focused on enhancing governance, fostering responsible innovation, and mitigating risks associated with AI in these systems. The plan outlines the application of the risk framework from the memorandum to public benefits, details existing guidelines for AI use, and identifies areas for future guidance.

    Why Read This Article?

    • Understand the HHS AI Plan: Gain insights into HHS’s strategy for integrating AI responsibly within public benefits programs.
    • Learn About Governance and Risk Management: Discover how HHS aligns with OMB’s framework to manage risks associated with AI.
    • Future Guidance and Innovations: Stay informed about upcoming topics HHS is considering for future AI guidance.
    • Enhance Efficiency in Public Benefits Administration: Explore how AI can improve the effectiveness of public benefits programs.
  • This Week in Government Technology – May 12th-19th, 2024

    As the world of artificial intelligence (AI) continues to expand, we saw several instances of government leadership prioritizing funding, regulatory frameworks, and emerging technology pilots this week.

    $32 Billion Funding Proposal

    U.S. Senate Majority Leader Charles E. Schumer and his AI working group have endorsed an independent commission’s recommendation for the federal government to allocate at least $32 billion annually to non-defense AI systems. Initially put forth by the National Security Commission on Artificial Intelligence, the proposal is supported by a bipartisan group including Senators Martin Heinrich, Todd Young, and Mike Rounds. Alongside the funding proposal, the group released a road map for AI development, addressing privacy, data security, and emergency appropriations for research agencies. While Schumer aims to collaborate with House Speaker Mike Johnson, the plan has faced criticism for prioritizing innovation over addressing discrimination and civil rights concerns.

    Indiana Implements AI to Support Job Opportunities

    The Indiana Department of Workforce Development (DWD) has launched Pivot, an AI-powered tool designed to assist job seekers in identifying personalized career paths with precision. Developed by Resultant and introduced last November, Pivot uses wage record data to help users visualize current and future job opportunities. The AI capabilities of Pivot enable it to project sector-specific job demands and align job seekers’ skills with relevant training programs. DWD plans to expand Pivot’s functionality this fall to include recommendations on training providers and eventually make it accessible beyond the unemployment insurance system.

    Biden Administration Releases AI Principles to Protect Workers

    Following last year’s AI executive order, the Biden administration has introduced a set of principles to guide the interaction between workers and artificial intelligence (AI). These principles emphasize responsible use of workers’ data, upskilling support, and AI deployment transparency. Although voluntary, these guidelines reflect a commitment to enhancing work quality and life for all workers. Acting Labor Secretary Julie Su highlighted the administration’s dedication to ensuring technology serves people first. Companies like Microsoft and Indeed have already agreed to these principles, with the administration seeking further commitments from other tech firms. A new list of best practices is expected from the Labor Department soon.

  • The Risks and Rewards of AI-Generated Code in Government Systems

    You can read the full article here.

    As artificial intelligence (AI) is integrated into government operations, its advantages in efficiency and automation are undeniable, but they come with significant challenges, particularly the security vulnerabilities associated with AI-generated code. As AI continues to develop, government agencies must remain vigilant, ensuring that advancements in AI technology are matched with equally sophisticated security and oversight measures. This balanced approach will be crucial to harnessing AI’s full potential while safeguarding the public interest.

    John Breeden’s article “Feds Beware: New Studies Demonstrate Key AI Shortcomings” delves into the complexities of AI adoption in government and highlights the necessity of balancing innovation with rigorous security measures.

    Why Read This Article?

    Comprehensive AI Accountability Framework: The federal government has instituted guidelines such as the AI Accountability Framework for Federal Agencies, which underscores responsible AI usage. This framework emphasizes governance, data management, performance, and monitoring to ensure that AI applications align with agency missions and operate securely.

    Challenges with AI-Generated Code: While AI’s capability to swiftly generate code is impressive, it introduces pressing security concerns. For instance, a study by the University of Quebec revealed that most applications created by AI harbor severe cybersecurity vulnerabilities. This finding emphasizes the need for improved security measures in government AI coding practices.

    Human-AI Collaboration: Given the identified challenges, the article advocates for a model where AI tools are used with human oversight. This approach not only mitigates risks but also enhances the effectiveness of governmental operations. By pairing AI’s rapid capabilities with the nuanced understanding of human developers, agencies can better safeguard against potential security flaws.
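
    As a concrete, if simplified, example of the kind of flaw this pairing is meant to catch, the sketch below shows a SQL query built by string interpolation (a common weakness in quickly generated code) next to a reviewed, parameterized version. Both functions are hypothetical and do not come from the article or the cited study.

    ```python
    # Hypothetical example of an injection-prone query and its reviewed fix;
    # not taken from the article or the cited study.
    import sqlite3

    def find_case_unsafe(conn: sqlite3.Connection, case_id: str):
        # Building SQL by string interpolation lets a crafted case_id inject arbitrary SQL.
        return conn.execute(
            f"SELECT * FROM cases WHERE case_id = '{case_id}'"
        ).fetchall()

    def find_case_reviewed(conn: sqlite3.Connection, case_id: str):
        # A parameterized query keeps user input as data, closing the injection path
        # without changing the behavior.
        return conn.execute(
            "SELECT * FROM cases WHERE case_id = ?", (case_id,)
        ).fetchall()
    ```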

    Operational Enhancements: AI’s ability to automate complex coding tasks can significantly expedite government development processes. However, as technology continues to evolve, so too does the necessity for robust mechanisms to monitor and refine AI outputs to ensure they meet stringent security standards.