
  • Nevada’s Generative AI Integration Contributes to State Executive of the Year Award

    You can read the full article here.

    Under the guidance of Timothy Galluzi, Nevada’s Chief Information Officer, the state has pioneered the integration of generative AI (GenAI) to enhance government operations. The Nevada Department of Employment, Training and Rehabilitation (DETR) utilized GenAI to improve the unemployment claims adjudication process. This use of AI made it possible to automate interactions and data analysis, which helped streamline operations and reduce the administrative burden. Recognized for his leadership with the State Executive of the Year award, Galluzi attributes this success to the collective efforts of his team and the broader network of state agencies.

    Why Read This Article?

    • Comprehensive AI Guidelines: In response to emerging tech trends, Nevada developed robust guidelines to safeguard citizen data in AI applications, emphasizing a strategic and secure approach to technology adoption.
    • Operational Enhancements: DETR has effectively employed GenAI to optimize the unemployment claims adjudication process, showcasing substantial improvements in efficiency and user interaction.
    • Human-Centric AI Applications: Galluzi’s approach underlines the importance of maintaining human oversight in AI implementations, ensuring technology supports rather than supplants human decision-making.
  • Use Cases From Early AI Adoption in Government

    You can read the full article here.

    As governments at all levels grapple with the complexities of artificial intelligence (AI), Julia Edinger’s article, “Where to Start With AI? Cities and States Offer Use Cases,” highlights states, counties, and cities that demonstrate practical implementations of AI, from enhancing cybersecurity to streamlining 311 services. With little federal regulation in place until late 2023, Utah and Santa Cruz County pioneered their own policies, aiming to balance innovation with necessary safeguards. The article underscores a pivotal shift toward localized AI governance, suggesting that early, considered policy frameworks can foster safe and beneficial AI applications across the public sector.

    Why Read This Article?

    • Practical AI Applications: Learn from the diverse and practical ways various states and cities use AI to enhance public services and cybersecurity.
    • Policy Pioneering: Explore how early adopters navigate the absence of broad federal regulations to create localized frameworks that could serve as models nationwide.
  • California’s Landmark Bill Prioritizes Human Safety and Ethical Standards in Artificial Intelligence Development  

    You can read the full article here.

    In the Washington Post’s article, “In Big Tech’s backyard, California lawmaker unveils landmark AI bill,” writers Gerrit De Vynck and Cat Zakrzewski explore the proactive steps taken by California State Sen. Scott Wiener to regulate artificial intelligence (AI) technologies. The bill outlines the requirement for thorough testing and robust safety measures for AI models before deployment, addressing potential dangers and unethical applications. This legislation could set a precedent for other states and influence nationwide policy, reflecting a pivotal shift toward integrating safety and ethics in AI development.

    Why Read This Article?

    • Regulatory Pioneering: Gain insights into how California’s legislative actions could shape national AI regulation.
    • Safety and Ethics in AI: Understand the importance of preemptive testing and ethical frameworks in deploying AI technologies.
  • Policy2Code Prototyping Challenge

    You can read more about the Policy2Code Prototyping Challenge here.

    The Policy2Code Prototyping Challenge is an invitation to explore the application of generative AI, specifically Large Language Models (LLMs), in translating U.S. public benefits policies into plain language and software code. The initiative is part of the broader movement known as Rules as Code (RaC), which aims to make policy implementation more efficient and accessible by converting policy directives into machine-executable formats. Designed to foster collaboration among technologists, policy specialists, and other practitioners, the challenge explores AI’s potential benefits and limitations in this context. The approach could save countless hours currently spent on manual translation and coding and could accelerate adoption of RaC, potentially transforming how public services are delivered; a minimal illustrative sketch of the RaC idea appears at the end of this item.

    Applications are due May 22, 2024.

    Host Organizations

    The Policy2Code Prototyping Challenge is organized by the Digital Benefits Network (DBN) at the Beeck Center for Social Impact + Innovation in partnership with the Massive Data Institute (MDI), both based at Georgetown University in Washington, D.C.
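
    To make the Rules as Code idea concrete, below is a minimal, hypothetical Python sketch of how a single benefits eligibility rule might be expressed as executable code. The policy text, guideline amounts, and threshold are invented for illustration; they are not drawn from the Policy2Code Prototyping Challenge or from any actual program rules.

    ```python
    from dataclasses import dataclass

    # Hypothetical policy text (illustrative only, not an actual statute):
    # "A household is income-eligible if its gross monthly income does not
    #  exceed 130% of the poverty guideline for its household size."

    # Invented guideline amounts keyed by household size (example numbers only).
    POVERTY_GUIDELINE = {1: 1255, 2: 1704, 3: 2152, 4: 2600}
    INCOME_LIMIT_FACTOR = 1.30  # the "130%" figure from the rule above


    @dataclass
    class Household:
        size: int
        gross_monthly_income: float


    def is_income_eligible(household: Household) -> bool:
        """Return True if the household passes the illustrative income test."""
        guideline = POVERTY_GUIDELINE[household.size]
        return household.gross_monthly_income <= guideline * INCOME_LIMIT_FACTOR


    if __name__ == "__main__":
        applicant = Household(size=3, gross_monthly_income=2500)
        print(is_income_eligible(applicant))  # True: 2500 <= 2152 * 1.30
    ```

    The Prototyping Challenge explores whether LLMs can help draft translations like this from real policy language, with human policy experts and engineers verifying the result before it is put to use.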

  • Gov Tech Today Podcast: Bridging the Gap with AI in Government Services

    You can listen to this podcast on Apple and Spotify.

    In this episode of Gov Tech Today, hosts Russell Lowery and Jennifer Saha talk with Justin Brown, former Secretary of Human Services for Oklahoma, about the transformative impact of artificial intelligence (AI) on government operations. Brown brings valuable perspectives from his tenure during critical times and from his efforts to foster a national dialogue on AI’s potential across state governments. The conversation is vital for understanding how AI can streamline government services, enhance workforce efficiency, and reduce regulatory burdens, all while emphasizing the importance of ethical deployment and education.

    Listeners will gain a deeper understanding of how AI solutions can significantly improve decision-making processes, increase accessibility, and address systemic challenges such as the benefits cliff. The episode also highlights the pioneering efforts behind creating the Center for Public Sector AI (CPSAI), underscoring its ongoing mission to support ethical AI integration that respects public values and enhances service delivery.

    Why Listen?

    • Insightful Leadership: Learn from a leader’s firsthand experience with AI in public sector reforms.
    • Relevant Challenges and Solutions: Understand the practical challenges and innovative solutions AI brings to public services.
    • Alignment with CPSAI Goals: This episode aligns closely with CPSAI’s mission to promote ethical AI use that benefits all citizens.
  • Thousand Stories Podcast: CPSAI — A Framework for Supporting Ethical AI in Public Sector

    You can listen to this podcast on Apple and Spotify.

    The Thousand Stories Podcast episode from April 18, 2024, provides a comprehensive view of the Center for Public Sector AI (CPSAI), a transformative force in integrating Artificial Intelligence into public sector operations, particularly within health and human services. The episode delves into the multifaceted strategies and initiatives undertaken by CPSAI to ensure that AI technology enhances public services responsibly and effectively.

    CPSAI is a groundbreaking initiative demonstrating a commitment to nonpartisanship and nonprofit values. It operates with a clear focus on the ethical deployment of AI technologies, ensuring that these powerful tools are used to their fullest potential to improve public sector services without compromising ethical standards or public trust.

    CPSAI strongly emphasizes educating leaders within the health and human services sectors about the nuances of AI. This includes detailed training on the technological, ethical, and practical aspects of AI deployment, aiming to equip leaders with the knowledge to make informed decisions. These educational efforts are designed to transform state leaders into informed stakeholders who can effectively navigate the complexities of new technologies, thereby enhancing their ability to oversee and implement AI-driven projects.

    To safeguard against potential risks, CPSAI develops operational guardrails that serve as ethical and practical boundaries for AI deployment. These guidelines are continuously refined and shared across states to standardize safe and responsible AI integration. By establishing a framework of best practices and ethical standards, CPSAI helps ensure that AI applications are beneficial and do not inadvertently exacerbate existing disparities or introduce new ethical dilemmas.

    Project Clearinghouse stands out as a key initiative where AI projects are rigorously evaluated for their adherence to established guardrails and potential impact on public services. This platform fosters collaboration and resource sharing among various stakeholders. Project Clearinghouse standardizes the evaluation of AI projects and facilitates the exchange of best practices, enhancing the effectiveness and efficiency of AI applications across different states.

    Recognizing the importance of collaboration, CPSAI actively forms partnerships with academic institutions, technology firms, and other governmental and non-governmental organizations. These partnerships pool expertise and resources, thereby amplifying AI’s positive impacts. Through these strategic alliances, CPSAI leverages collective insights and innovations, ensuring that AI technologies are deployed in transformative ways that are aligned with public interests.

    Beyond theoretical discussions, CPSAI is deeply committed to the practical application of AI, focusing on how these technologies can concretely improve service delivery and the operational efficiency of public services. By encouraging experimentation and learning from successes and failures, CPSAI promotes an adaptive approach to technology deployment, aiming to solve real-world problems efficiently and ethically.

  • Governors Leading On Artificial Intelligence

    You can read the full article here.

    In a time when technology is rapidly advancing, governors across the United States are taking action on artificial intelligence (AI). Explore how these leaders are actively embracing AI’s potential and addressing its challenges.

    From healthcare to education and more, AI is reshaping industries, and governors are leading the way. Learn how the National Governors Association (NGA) supports states in navigating the AI landscape.

    But it’s not just about economic competitiveness—it’s also about fairness and privacy. Governors are promoting ethical AI adoption through initiatives like AI ethics commissions and workforce development programs.

    Discover how governors are driving progress and shaping a future where AI isn’t just a buzzword, but a tool for positive change. Read on to learn more about their efforts and the impact on citizens’ lives.

    “In this ever-evolving world, ensuring new technologies are both safe and effective is an important public safety measure. Today, that new technology is Artificial Intelligence, maybe better known as AI. And look y’all, I am not going to stand here and preach like I know a lick about AI. However, I do know that new technologies can have benefits, but if not used responsibly, they can be dangerous. We are going to ensure that AI is used properly.”

    Governor Kay Ivey, 2024 State of the State

  • AI and the Future of Government Services

    You can read the full article here.

    Benefits Data Trust (BDT) is a nonprofit dedicated to helping people live healthier, more independent lives by creating smarter ways to access essential benefits and services. Its new series, “Human-Serving AI,” features BDT CEO Trooper Sanders in discussion with AI experts, exploring how AI can transform government services by modernizing operations and increasing equity and efficiency in public benefits systems.

    Why Should You Watch the Series?

    • In-depth Expertise: Gain valuable insights from leaders like Afua Bruce and Karen Levy on integrating AI responsibly in public sectors.
    • Strategic Guidance: Learn about AI’s practical applications and ethical considerations in public services.
    • Policy Impact: Discover the implications of recent government actions on AI.
  • A Call for Community-Centric AI

    You can read the full article here.

    Afua Bruce’s article in The Guardian, “AI can be a force for good or ill in society, so everyone must shape it, not just the ‘tech guys’,” delves into the powerful role artificial intelligence (AI) plays in society and the crucial need for broad, inclusive engagement in its development and application. Bruce articulates the importance of considering community needs and ethical constraints in deploying AI technologies, emphasizing that while AI holds tremendous potential for societal benefit, it also poses significant risks if not carefully managed.

    Why Read This Article?

    • Community-Centric AI: Understand the importance of involving diverse communities in shaping AI to ensure it meets their needs and incorporates their goals.
    • Ethical Deployment: Explore the crucial role of ethical guidelines in AI development.
  • ’45 Years’ Newsletter – Generative AI in Human Services

    Thesis

    Forty-five years represents my lifetime. It also represents what feels like a millennium of changes – some good, some bad – in the human services space. This newsletter asks the question, “Can we do human services better, so that in 45 years those who follow see us as forward-thinking change agents, not as those who squandered their opportunity to improve systems for children and families?”

    Story

    NOTE: ChatGPT (Generative AI) was utilized to assist in writing this newsletter. Transparency in this space is important.

    With all the exciting recent developments in the human services space, a new conversation emerged at the American Public Human Services Association’s annual national Health & Human Services Summit. Since the release of ChatGPT a few months ago, leaders from across the nation have been eager to gather and discuss the opportunities and risks associated with Generative Artificial Intelligence. I had dozens of formal and informal conversations on this topic, and by and large, those conversations were optimistic and positive. It is clear that government is moving toward an attempted embrace of this technology. It is also clear that leaders who feel deeply and personally connected to serving have a healthy skepticism and a desire to build guardrails and share progress with each other around this technology.

    On the flight home, I asked GPT-4 to ‘create a fictional story about how AI can help a person in need,’ and the output was something I wanted to share.

    ‘Once upon a time in the small town of Oakridge, there lived a young woman named Emily. She had faced numerous challenges throughout her life, including a visual impairment that made everyday tasks more difficult. However, her indomitable spirit and determination kept her moving forward.

    One day, Emily came across an advertisement for a newly developed AI companion designed to assist individuals with disabilities. Intrigued, she decided to give it a try, hoping it could offer some support in her daily life.

    The AI companion, named “Aria,” was an advanced AI system equipped with natural language processing and computer vision capabilities. Aria could understand and respond to Emily’s voice commands, making tasks more accessible and intuitive.

    From the moment Emily activated Aria, her life took a remarkable turn. The AI companion quickly became an invaluable ally, transforming her day-to-day experiences. Aria assisted Emily in reading books, emails, and articles by using optical character recognition to convert text into speech. This opened up a world of knowledge and literature that she had previously struggled to access.

    Not only did Aria assist with reading, but it also became a reliable navigator. Using real-time mapping and object detection, Aria guided Emily through her town, describing the surroundings and alerting her to obstacles or potential hazards. With Aria by her side, Emily gained newfound independence and confidence, confidently traversing the streets she once found daunting.

    Aria’s assistance extended beyond physical tasks. The AI companion utilized machine learning algorithms to learn Emily’s preferences and interests, suggesting new activities and hobbies that aligned with her passions. It even recommended accessible art classes where Emily could explore her creativity through tactile experiences.’

    Within days of hearing about ChatGPT and learning about Generative AI, it became clear that the promise of this technology is dramatic. It is equally clear that government must take the lead in promoting its ethical deployment, or it risks being overrun. We all know that government isn’t good at taking the lead in these situations, but the current national landscape in human services is full of leaders and teams who can reverse the historical trend of government complacency. The sense of urgency is great. The opportunity is inspiring. The downside risk of inaction is severe, and it is the most likely outcome if the human services system doesn’t take the lead now.

    Three Bullets

    Government involvement is crucial in shaping the AI landscape for the betterment of all. Here’s why:

    • Ensuring Ethical Deployment: Government leadership is crucial in governing the deployment of AI in human services to ensure ethical practices. By taking the lead, governments can establish clear guidelines and regulations that prioritize fairness, transparency, and accountability. This helps prevent biases and the misuse of AI technologies, ensuring that all populations are protected and citizens receive improved access to services.

    • Safeguarding Privacy and Data Protection: AI in human services often involves the collection and analysis of sensitive personal data. Governments must play a key role in governing the deployment to safeguard privacy and ensure data protection. By establishing robust data governance frameworks, governments can regulate the collection, storage, and use of data, ensuring compliance with privacy laws and preventing unauthorized access or misuse.

    • Promoting Human-Centered and Inclusive Services: The deployment of AI in human services should prioritize human well-being, inclusivity, and individual autonomy. Government leadership is essential to set standards that ensure AI systems are designed with a human-centered approach, maintaining the dignity and rights of service recipients. By taking the lead, governments can encourage the development of AI systems that are accessible, understandable, and respectful of diverse needs, fostering streamlined service provision.

    Overall, government leadership in governing the deployment of AI in human services is vital to establish ethical practices, protect privacy, and promote human-centric approaches. By setting clear regulations and guidelines, governments can ensure that AI technologies are deployed responsibly, benefiting society as a whole while upholding principles of fairness and promoting privacy and data security.

    Lessons Learned

    We are continually learning more, but there are lessons learned in other venues that can be helpful in the discussion around AI.

    1. Collaboration and Stakeholder Engagement | The process of AI governance requires collaboration among governments, industry experts, policymakers, and the public. Engaging stakeholders from various backgrounds and perspectives ensures that governance frameworks consider diverse viewpoints and address the concerns and needs of all. Effective collaboration promotes legitimacy and transparency in the governance process.

    2. Continuous Learning and Adaptation | The field of AI is rapidly evolving, and governance structures must adapt accordingly. Governments must foster a culture of continuous learning and keep pace with emerging technologies, new challenges, and the evolving ethical landscape. Regular evaluation and refinement of governance frameworks are crucial to ensure their relevance and effectiveness.

    3. Global Cooperation and Harmonization | AI transcends national boundaries, necessitating international cooperation in governance efforts. Governments should foster collaboration, share best practices, and harmonize standards to address cross-border challenges such as data governance and ethical considerations. Global cooperation enables collective action, promoting responsible AI development on a global scale.

    True North Principles… we must…

    1. Ethical Accountability | Prioritize ethical considerations and hold organizations accountable for the ethical implications of their AI systems. Establish clear guidelines and regulations that promote transparency, fairness, and accountability in AI development and deployment.
    2. Human-Centered Design | Ensure that AI technologies are designed with human well-being and societal benefit in mind. Focus on creating AI systems that augment human capabilities and promote positive social impact.
    3. Continuous Evaluation and Adaptation | Implement mechanisms for ongoing evaluation, monitoring, and adaptation of AI governance frameworks. Foster a culture of learning, encouraging regular updates to address emerging challenges and capitalize on new opportunities.

    By embracing these principles and establishing effective governance structures, governments can steer AI towards a future that benefits society as a whole.

    The End of the Story

    As Emily shared her experiences with Aria with friends and family, word quickly spread about the transformative power of AI assistance. Others in the community began to recognize the potential of AI in helping those in need. Inspired by Emily’s story, the local government collaborated with AI developers to ensure that AI companions like Aria were accessible to all residents who required assistance.

    With the support of the government, AI companions like Aria soon became readily available to people with various disabilities throughout Oakridge. The town saw a significant improvement in the lives of its residents as AI technology provided newfound opportunities and independence.

    Emily’s story not only highlighted the immense benefits of AI assistance but also emphasized the importance of empathy and human connection. While Aria enhanced Emily’s capabilities and empowered her, it was the combination of technology and compassion that truly made a difference.

    As time went on, AI companions like Aria continued to evolve, becoming more intuitive and personalized. They became trusted companions, enabling individuals with disabilities to break barriers, pursue their dreams, and contribute to their communities in meaningful ways.

    Emily’s journey exemplified the incredible potential of AI to positively impact the lives of those in need. It served as a reminder that, with the right technology and compassionate implementation, AI could be a powerful tool in creating a more inclusive and supportive society.

    And so, the town of Oakridge became a shining example of how AI, when harnessed for good, could transform the lives of individuals and foster a community that valued diversity and accessibility.

    The end.

    Note: This fictional story (written by AI) emphasizes the positive aspects of AI technology and its potential to assist individuals with disabilities. It’s important to remember that AI should always be developed and used responsibly, with consideration for ethical guidelines and human-centered design principles.