Suggested Citation: "Introduction." National Academies of Sciences, Engineering, and Medicine. 2025. Strategies for Integrating AI into State and Local Government Decision Making: Rapid Expert Consultation. Washington, DC: The National Academies Press. doi: 10.17226/29152.

Introduction

Artificial intelligence (AI) technologies5 have grown rapidly and have the potential to transform the delivery of public services by automating routine tasks, streamlining operations, and providing data-driven insights to inform decision making (National Academies of Sciences, Engineering, and Medicine, 2017; Soe & Drechsler, 2018; Yigitcanlar et al., 2024a). New AI technologies continue to be introduced both as functional tools and as platforms that support collaboration, thereby changing administrative processes and potentially improving service outcomes across various domains such as health, human services, weather, public administration, public safety, and urban planning.

As AI capabilities expand, decision makers are increasingly becoming responsible for guiding the adoption of AI across the breadth of their jurisdictions while also protecting public values, rights, and wellbeing (Schiff et al., 2022). Notably, the rise of generative AI has significant implications and applications for state and local governments. In addition to the benefits enumerated above, AI adoption and integration present challenges ranging from gaps in infrastructure and digital transformation to policy and regulatory hurdles, including privacy, data security, bias, algorithmic accountability, transparency, financial constraints, and workforce skills and training gaps (Chen et al., 2024; D’Amico et al., 2020). Significant technology gaps are a challenge as well, as off-the-shelf AI tools that meet the specific needs of state and local agencies often do not yet exist. For example, even when privacy protections such as changing credentials and rotating message logs are built in, scholars and policy makers caution that municipal AI systems raise broad ethical concerns related to privacy, surveillance, algorithmic bias, and public accountability (New York City Office of Technology and Innovation, 2023; Young et al., 2019). These concerns can be addressed by engaging with the public, developing robust governance frameworks, and maintaining ongoing oversight (Lottu et al., 2024).

The increasing development of AI technologies presents a timely opportunity to provide evidence-based support that can inform state and local decision making around AI procurement, development, and adoption. While state and local leaders are tasked with providing strategic direction, overseeing procurement, and shaping implementation, they must also manage resource limitations and workforce gaps and balance the potential benefits and harms of AI technologies (Sloane & Wüllhorst, 2025). This rapid expert consultation6 provides actionable guidance for public-sector leaders, policy makers, and practitioners responsible for planning, procuring, and overseeing the use of AI in state and local government. It is intended to support decision makers at all levels and in all roles—from strategy and governance to implementation and evaluation—in adopting AI technologies responsibly and effectively. Note that this consultation focuses primarily on AI in decision making—both systems that directly make decisions and those that materially inform or guide decisions that affect services, rights, or resource allocations—and not on AI tools that can improve organizational efficiency and innovation without directly influencing public-facing decisions.

___________________

5While there is no one widely accepted definition of AI, it is defined for the purposes of this report as a “machine-based system that can, for a given set of human-defined objectives, make predictions, recommendations, or decisions influencing real or virtual environments” (Organisation for Economic Co-operation and Development, 2019, Section I).

6The full statement of task states: “The National Academies of Sciences, Engineering, and Medicine will produce a rapid expert consultation, ‘Strategies for integrating artificial intelligence (AI) tools into state and local decision-making,’ aimed at providing timely, actionable guidance to support state, tribal, territorial, and local decision-makers who are either using, in the process of integrating, or considering the adoption of AI technologies. Drawing on interdisciplinary research from behavioral economics, sociology, cognitive psychology, human-centered AI, public administration, policy studies, information sciences, and ethics, this consultation will consider the social, behavioral, and economic (SBE) aspects of AI adoption to provide practical insights into how decision-makers can assess whether AI technologies align with their governance goals. The consultation will explore the potential of AI to improve public services and governance at state and local levels and enhance efficiency and innovation across various sectors while also flagging associated ethical considerations, data security risks, and integration challenges. The pace at which AI systems are evolving presents unique opportunities and challenges for decision-makers. The focus of the consultation will be on the SBE aspects of AI adoption, ensuring that the discussion remains centered in the SBE domain rather than on the underlying technical components. The rapid expert consultation will be designed for timely, practical use by decision-makers but will not make recommendations. It will be reviewed in accordance with institutional guidelines.”

The Current State of AI in State and Local Government

Decision makers face several choices related to AI technologies—not only how to adopt AI, but also whether to adopt it at all, when, and under what circumstances. These decisions range from doing nothing or delaying adoption, to developing systems in-house, to hiring third-party vendors, to partnering with academics, to acquiring commercial tools, to leveraging public or open-source options (Weerts, 2025).7 These options offer varying levels of effectiveness, risk, cost, and control, and require balancing key considerations, including data privacy and security, affordability, ethical implications, and long-term sustainability. Such choices can arise across the entire technology implementation life cycle—from problem definition and planning, to scoping, to building readiness, to ongoing engagement, monitoring, and accountability.

Although public attention on AI has surged in recent years, governments have been adopting these technologies for years (Chen et al., 2024). For example, AI chatbots and virtual assistants have already been delivering public services, improving quality, and enhancing efficiency (Chen et al., 2024). Additionally, automated decision systems, decision-support systems, and earlier versions of predictive AI have been utilized for several years in various applications, including property tax assessments, emergency response systems, and public health surveillance (Gloudemans & Sanderson, 2021; Nihi et al., 2025).

The application of AI in government has been framed as a federal concern, with considerable attention focused on AI regulations and strategies associated with national strategic or economic interests (Allam & Dhunny, 2019; Sloane, 2022). However, uptake of AI is increasing among state and local governments. In 2024, more than 450 bills related to AI were introduced throughout the United States (National Conference of State Legislatures, 2024), many of which have now passed and constitute new regulatory environments for AI development (Sloane & Wüllhorst, 2025). In these efforts, state legislatures have considered issues such as local government use of AI, impact assessments, use guidelines, procurement standards, and the establishment of oversight bodies (National Conference of State Legislatures, 2024). Additionally, at least 30 states have issued guidance on AI use within state agencies, such as Georgia’s AI Responsible Use Guidelines (National Conference of State Legislatures, 2024). Local governments have also been developing policies for ethical use, from large cities such as New York City to smaller communities such as Tempe, Arizona.


Stages of AI Adoption and Decision Making

Not all states and local jurisdictions are at the same stage in the adoption and integration of AI; some have integrated AI technologies into their work, while others are still exploring the most effective approach. Research suggests that a clear roadmap aligned with timeframes can be beneficial when exploring AI adoption and implementation (Jöhnk et al., 2021; Liu et al., 2024; Reim et al., 2020):

___________________

7State and local governments are adopting two main types of AI tools: general-purpose tools such as enterprise ChatGPT (Pennsylvania), large language model (LLM) wrappers (New Jersey’s AI agent, Massachusetts Genie, Boston’s Launchpad), and integrated tools such as Gemini (Colorado, Boston) and Co-Pilot; and more targeted applications—custom (Boston’s BidBot, Massachusetts’s MECA) or commercial (Westlaw’s Co-Counsel, Axon’s police report tool).

  • Short-term actions (1–2 years): This period may include developing policies that help guide appropriate and intended uses of AI, creating policy frameworks that address aspects of AI adoption, using pilot evaluation programs to check whether AI technologies match users’ needs, cataloguing critical cases that AI either cannot address or itself creates, training the workforce, and building capacity (Eubanks, 2018; Sheridan, 2008). Maryland’s 2025 AI Enablement Strategy,8 for example, includes efforts to expand experimentation with generative AI technologies in state employee workflows to enhance service delivery while reducing administrative burden.
  • Medium-term goals (3–5 years): At this stage, AI systems are scaled up using lessons learned from short-term actions to refine technologies and systems and enhance collaboration across departments or agencies. For example, several states—including California and New Jersey—are using executive orders to establish formal AI governance task forces that set cross-agency priorities, issue procurement guidelines, and make recommendations on ethical implementation (Center for Democracy & Technology, 2025). California’s Executive Order N-12-239 directs agencies to conduct workforce impact assessments, develop ethical guidelines, and update procurement standards to ensure that scaled-up AI adoption delivers measurable operational and public service improvements. These coordinated efforts reflect a growing emphasis on multiagency collaboration and scaling of AI systems with ethical oversight.
  • Long-term vision (5+ years): This final period involves integrated mature AI systems that are governed by best practices and standards, with real-time decision-making support and clear mechanisms that enable effective adoption, engagement, and evaluation.

Several frameworks and approaches—including the Blueprint for an AI Bill of Rights (Office of Science and Technology Policy, 2022); NIST’s AI Risk Management Framework (National Institute of Standards and Technology, 2023); and recent work by Bignami (2022), Chen and colleagues (2023), Hanna and colleagues (2024), Kawakami and colleagues (2024), Merhi (2023), and Miller (2022)—offer converging guidance on responsible AI adoption. Synthesizing across these sources, the following guiding questions can help structure decision making throughout the AI adoption life cycle:

  • Goals and intended use: What problem is the AI system trying to solve, and who defines it? Are the goals of the community and public agencies aligned with the design and implementation of the AI system? How does one ensure that stakeholder engagement informs the system’s goals (as well as each step of the AI life cycle)? Has there been an evaluation of the successes of AI technologies in the specific application?
  • Societal and legal considerations: In what broader operational environment and communities will AI technologies be deployed? How are fairness, transparency, and safety being addressed? Beyond legal frameworks, how will system failures or unintended consequences of the system deployment be handled responsibly?

___________________

8See https://doit.maryland.gov/About-DoIT/Offices/Documents/2025%20Maryland%20AI%20Enablement%20Strategy%20and%20AI%20Study%20Roadmap.pdf

9See https://www.gov.ca.gov/wp-content/uploads/2023/09/AI-EO-No.12-_-GGN-Signed.pdf
  • Data and modeling constraints: Are the data that are being used to train or operate the AI technology accurate, complete, up to date, fit for the purposes for which they will be used, representative of the population or case to which they apply, and assessed for issues of bias?
  • Operational and maintenance factors: Are leadership buy-in and support sufficient? Are the dedicated organizational maintenance structure and workforce capacity needed to manage the system post-deployment in place, and can the organizational structure sustain the system over time? How is the implementation/change management being handled?
  • Strategic and policy-level governance: Who is responsible for governing the system? Is the governance in alignment with broader organizational goals and public policy objectives? Is there a designated authority or cross-functional body tasked with making decisions about system performance, evaluating alignment with mission priorities, and ensuring accountability over time?
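For jurisdictions that want to operationalize the five guiding-question areas above, they can be turned into a simple pre-adoption intake checklist that flags undocumented areas before review. The sketch below is a minimal, hypothetical illustration; the identifier names and flagging logic are assumptions for this sketch, not drawn from any cited framework:

```python
# Hypothetical pre-adoption review checklist built from the five
# guiding-question areas above. Names and flagging logic are illustrative
# assumptions, not a standard.

GUIDING_AREAS = [
    "goals_and_intended_use",
    "societal_and_legal_considerations",
    "data_and_modeling_constraints",
    "operational_and_maintenance_factors",
    "strategic_and_policy_governance",
]

def review_gaps(answers: dict[str, str]) -> list[str]:
    """Return the guiding-question areas still lacking a documented answer."""
    return [area for area in GUIDING_AREAS
            if not answers.get(area, "").strip()]

# Example: a proposal that has documented only its goals and data checks.
proposal = {
    "goals_and_intended_use": "Reduce benefits-application backlog.",
    "data_and_modeling_constraints": "Training data audited for coverage and bias.",
}
print(review_gaps(proposal))  # flags the three undocumented areas
```

A review board could require that `review_gaps` return an empty list before a proposal advances, making the qualitative questions above an explicit gate rather than an informal reference.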

The sections that follow provide guidance to support decision making, implementation, and experimentation throughout the AI adoption life cycle. The discussion is intended to provide decision makers with concrete actions they can take, tailored to their roles and the stage of AI adoption.

1. Foundations and Governance (Why use AI and how to ensure its responsible use?)

This section examines some foundational considerations that address the question, why use AI and how can we ensure its responsible use? In addition to governance issues, the discussion touches on the problem definition; public engagement; and collaboration with federal, state, and local partners. Table 1 summarizes some foundational actions that can be taken at the beginning of the AI adoption process.

Table 1: Foundational Strategies

Goal: Be purpose- and people-oriented
Examples: Illinois Department of Human Services, high-risk pregnancies model
Implementation Strategy: Start by grounding AI initiatives in public values and a clear understanding of human and organizational contexts
Responsibility: Public engagement processes and input from residents, staff, and civil society at the outset can shape accountability throughout the life cycle
Timeframe: Early-stage planning and development

Goal: Engage the public
Examples: Long Beach, California’s Smart City public survey; Vermont’s AI Task Force
Implementation Strategy: Conduct consultations, surveys, or forums to gather community feedback. Include public representatives in advisory bodies and task forces. Codesign ethical guidelines with community input
Responsibility: Local governments and task forces are responsible for including community input and reporting publicly on feedback and engagement outcomes
Timeframe: Project-based to ongoing

Goal: Build proportional AI governance
Examples: City of Boise, Idaho’s AI policy; Utah’s AI Governance Office; District of Columbia’s AI Values and Strategic Plan
Implementation Strategy: Establish governance and internal use policies to guide how AI is managed
Responsibility: AI ethics committees and interdepartmental oversight groups review use and ensure policies are followed
Timeframe: Short-term (development); medium- to long-term (implementation)

Goal: Participate in and help shape emerging collaborative frameworks
Examples: National Institute of Standards and Technology’s (NIST’s) AI Risk Management Framework; National League of Cities’ Artificial Intelligence Demystified—AI Toolkit for Municipalities
Implementation Strategy: Join and help form collaborations that provide technical assistance, shared toolkits, legal frameworks, and coordination infrastructure
Responsibility: Partnerships report outcomes and maintain collaboration to meet shared goals
Timeframe: Medium-term (forming partnerships and initiating projects)

Goal: Develop tiered AI procurement guidance
Examples: Alaska’s AI impact assessment bill
Implementation Strategy: Set clear procurement standards with which to evaluate AI before use. Align procurement with national standards, such as NIST’s AI Risk Management Framework
Responsibility: Procurement officers, review boards, and external evaluators ensure that vendors meet legal, ethical, and operational standards
Timeframe: Medium-term (development and implementation of standards)

Be Purpose- and People-Oriented

Grounding AI initiatives in public values and a clear understanding of human and organizational contexts enables the use of AI not only to solve well-defined problems but also to explore new possibilities, anticipate future needs, and augment the capacity of public institutions to respond to complexity (Chen et al., 2023; Shneiderman, 2020). Instead of an approach based on replacing humans, a human–AI teaming approach that aligns AI with human expertise and responsibilities can be adopted (McNeese et al., 2018). A human-centered approach incorporates AI to complement human judgment, respect human roles, and reflect community values (McNeese et al., 2018; Schelble et al., 2024; Shneiderman, 2020).

To these ends, local governments can

  • Clarify public values: Start with a clear sense of purpose, defining how AI initiatives interact with public values, including transparency, equity, and participation (Shneiderman, 2020).
  • Center people and context: Consider who benefits, who may be affected, and how AI initiatives integrate existing human and organizational workflows. Conducting self-assessments of goals, workflows, and community priorities can be useful in understanding when, where, and how AI can be implemented in a manner that complements the current work of human beings (Zira et al., 2023).
  • Engage early: Involve different stakeholders (e.g., communities, the public, businesses, and residents) from the outset to shape goals and inform decisions. This may involve initial input from advisory or oversight bodies that continue throughout the life cycle (Cheong, 2024).
  • Ground community needs: AI adoption is beneficial when driven by specific community needs rather than technological hype or vendor offerings (Jung, 2023). For example, faced with the challenge of identifying high-risk pregnancies—a community need—the Illinois Department of Human Services developed AI models that enhanced service outreach and impact, refining existing paper-based assessments (Pan et al., 2017).

Engage the Public

Public input and trust are essential in deploying AI within state and local government functions. Public engagement can help identify potential harms and conflicts of interest related to procurement, use, and governance, and bring a variety of perspectives to bear, thereby strengthening the design and monitoring of tools (Wilson, 2022).

Research indicates that meaningful public engagement, characterized by communities being represented and listened to and resources being provided to address their needs, can serve as a form of transparency and facilitate the acceptance of new technologies (Cheong, 2024; Zuiderwijk, Chen, & Salem, 2021). To this end, state and local governments can:

  • clearly define the goals and boundaries of system deployment to avoid mission creep (Yigitcanlar et al., 2021), and follow project scoping methodologies (Prasetyo et al., 2025);
  • embed public participation during the planning phase and oversight of AI technologies, and not as an afterthought (Stilgoe, 2024); and
  • engage a broad range of stakeholders (e.g., residents, community organizations, experts, and civil society) when designing and evaluating AI technologies (Bokhari & Myeong, 2023; Ulnicane et al., 2020).

Engagement can include consultations, surveys, forums, or participatory design processes. Examples of public engagement efforts include the following:

  • The City of Long Beach, California, which conducted a multilingual public survey as part of its Smart City Initiative, asking for feedback on such issues as data use for traffic monitoring, collection of personal information by law enforcement, and the potential sale of personal data (City of Long Beach, 2021). The city also hosted community meetings and workshops to advance the initiative.
  • Vermont’s AI Task Force, which included public representatives and codesigned ethical guidelines for generative AI tools (State of Vermont, 2020).

To support meaningful engagement, governments can use participatory design methods, including:

  • conducting consultations, surveys, or forums to gather diverse perspectives;
  • mandating public consultation for projects that may impact civil rights or liberties;
  • including public representatives in task forces and advisory bodies;
  • codesigning ethical guidelines with community input; and
  • publishing system impact assessments and soliciting feedback from the public (Dedema & Hagen, 2025).

Build Proportional and Iterative AI Governance

AI governance policies need to be tailored to the types of tools or systems being adopted. For example, high-impact or public-facing systems might require multifaceted formal policies, whereas low-risk technologies might prioritize lightweight, streamlined, values-based guidelines that promote responsible use without stifling innovation. Governance policies can address the following elements:

  • Decision making: It is important to consider how decisions are made about which AI technologies will be adopted, rejected, or paused (Selten & Klievink, 2024). Using existing frameworks, such as NIST’s AI Risk Management Framework (National Institute of Standards and Technology, 2023) and the National League of Cities’ Artificial Intelligence Demystified—AI Toolkit for Municipalities (2024), can facilitate the adoption of a structured decision-making approach for AI technology without having to start from scratch. Developing a multilevel governance approach with input from state and local decision makers, private corporations, academia, and the public can increase trust (Choung et al., 2024).
  • Power and responsibilities: Clear delineation of powers and responsibilities are needed to highlight how decision-making authority will be distributed across the technology life cycle (Selten & Klievink, 2024)—for example, allocating operational decision-making responsibility to those with the technical expertise to make those decisions.
  • Ethical principles: It is essential to be clear about which ethical principles, beyond legal requirements, inform the long-term vision for AI use. Ethical frameworks can clarify the risk management processes in place to mitigate potential harms and biases, as well as the implementation of human oversight mechanisms (Floridi et al., 2018). Existing frameworks, such as NIST’s AI Risk Management Framework and the Institute of Electrical and Electronics Engineers’ (IEEE’s) P7001 standard, can serve as a starting point for those seeking to develop effective policies. Sonoma County, California, offers an example of a countywide governance policy that outlines common definitions, ethical principles, and standards for ensuring that AI deployment aligns with public values and laws (County of Sonoma, 2024). The Colorado AI Act, which is based on the European Union AI Act,10 takes a risk-based approach to regulating high-risk AI systems, focusing on transparency, accountability, and risk mitigation over time.
  • Navigating tensions: Determining how to navigate potential tensions between competing goals, such as innovation versus fairness and efficiency versus transparency, is key when developing governance policies (Selten & Klievink, 2024). For example, policies can include guardrails for appropriate experimentation. State agencies in California11 and Pennsylvania,12 for example, encouraged pilot projects to explore ways of improving services and administration under clear guidelines. As technologies evolve, experimentation can support ongoing improvement in guidance and standards.
  • Communicating policy decisions: Proactively communicating clear and accessible information through multiple channels can promote better engagement and understanding among the public (Wang et al., 2024). Public disclosures that are transparent about the AI technologies being used, their functionality, and the data they utilize can increase transparency and accountability (Chen et al., 2025).
  • Adaptive policies: Policies need to be living documents that undergo regular reviews and updates as AI continues to evolve and more risks emerge (Bengio et al., 2024). Creating mechanisms for regular review of AI policies based on system changes, workforce capacity, worker feedback, and public input can help keep policies adaptive. Texas’s Responsible Artificial Intelligence Governance Act (H.B. No. 149),13 for instance, would establish a regulatory sandbox for AI testing, workforce development investments, and protections against algorithmic discrimination, positioning the state for long-term ethical and operational resilience.

___________________

10See https://artificialintelligenceact.eu/the-act/

11Contained in Executive Order N-12-23 (State of California, 2023).

12Contained in Executive Order 2023-19 (State of Pennsylvania, 2023).
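The proportionality principle behind these governance elements can be sketched as a simple risk-tiering rule: public-facing systems or systems affecting rights receive formal review, while low-risk internal tools follow lightweight guidelines. The tier names, criteria, and thresholds below are illustrative assumptions for this sketch, not drawn from NIST’s AI Risk Management Framework or any statute:

```python
# Illustrative risk-tiering rule for proportional AI governance.
# Tier names and criteria are assumptions for the sketch, not a standard.

def governance_tier(public_facing: bool,
                    affects_rights_or_benefits: bool,
                    automated_final_decision: bool) -> str:
    """Map coarse system characteristics to a governance tier."""
    if affects_rights_or_benefits and automated_final_decision:
        return "high: formal policy, impact assessment, and human oversight"
    if public_facing or affects_rights_or_benefits:
        return "medium: documented review and a monitoring plan"
    return "low: lightweight, values-based use guidelines"

# An internal drafting assistant vs. an automated eligibility screener.
print(governance_tier(False, False, False))  # low tier
print(governance_tier(True, True, True))     # high tier
```

In practice a jurisdiction would refine the inputs (e.g., data sensitivity, affected population size) and attach concrete obligations to each tier, but even a coarse rule like this makes the governance burden predictable for staff proposing new tools.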

These policies can build on existing rules, such as the Health Insurance Portability and Accountability Act (HIPAA) and the Family Educational Rights and Privacy Act (FERPA) for privacy preservation, Title VI and Title VII of the Civil Rights Act for discrimination in federally funded programs and in employment, and the Americans with Disabilities Act (ADA) for disability discrimination. For example, Utah’s Office of Artificial Intelligence Policy introduced privacy protocols for sensitive health and education data and developed responsible use policies with language similar to that of HIPAA and FERPA (Utah State Legislature, 2024).

Internal policies become especially important as AI technologies continue to expand. While these technologies offer significant potential for innovation and efficiency, especially in the short run, they also introduce challenges related to integration, regulation, ethics, governance, and liability (Weerts, 2025). For example, generative AI (GenAI) is rapidly transforming the way work is done, especially in the public sector, where its potential for innovation and efficiency is substantial. At the same time, the growing number of GenAI tools presents challenges around evaluation, regulation, ethics, and governance (Phillips-Wren & Virvou, 2025). Without clear policies, public agencies risk legal and reputational issues, as well as harm to their constituents. State and local guidelines can help by providing guardrails for responsible experimentation and safe use of AI-generated content. For example, the City of Boston, Massachusetts’ interim guidance14 provided principles, do’s and don’ts, and use-case examples, and encouraged city employees to experiment. Similarly, New York City’s AI Action Plan included a review process to ensure transparency, accuracy, and fairness. Since GenAI tools can produce inaccurate output, policies also play a role in requiring careful review before use. Ongoing pilot projects, such as those in California and Pennsylvania, show how agencies are testing some of these tools internally and can now position themselves to carry out more robust evaluations and develop stronger evidence bases and guidelines.

Local and regional governments often lack the legal, technical, or financial capacity to undertake policy development (David et al., 2024). Adaptation of widely recognized frameworks, regulations, and standards can address some of these capacity gaps. Table 2 summarizes some tools that may be relevant.

Table 2: Frameworks, Regulations, and Standards

Type: Principles and Frameworks
Purpose / Why It Is Important: Provide high-level guidance and values for ethical development and use
Examples: White House Blueprint for an AI Bill of Rights; OECD AI Principles; United Nations Educational, Scientific and Cultural Organization (UNESCO) AI Ethics Guidelines
Relevance for the Public Sector: Aligns systems with human rights, equity, and transparency; useful for oversight and procurement framing

Type: Best Practices and Templates
Purpose / Why It Is Important: Operational guidance for design, procurement, and deployment
Examples: National Institute of Standards and Technology’s (NIST’s) AI Risk Management Framework (RMF); National League of Cities AI Toolkit; GovAI Policy Stack
Relevance for the Public Sector: Useful for developing local policies and risk assessments; includes templates and process checklists

Type: Regulations
Purpose / Why It Is Important: Legally binding rules for development or use
Examples: European Union’s (EU’s) AI Act; New York Local Law 144; Illinois Biometric Information Privacy Act (BIPA); Colorado Privacy Act
Relevance for the Public Sector: Apply directly or influence local procurement and risk categories, especially in human resources or surveillance contexts

Type: Technical Standards
Purpose / Why It Is Important: Specifications for safety, quality, fairness, and interoperability
Examples: Institute of Electrical and Electronics Engineers (IEEE) P7001; IEEE 7010; International Organization for Standardization/International Electrotechnical Commission (ISO/IEC) 42001; IEEE Draft Standard 3119-2025
Relevance for the Public Sector: Supports vendor evaluation and technical due diligence in procurement and system design

___________________

13See https://capitol.texas.gov/tlodocs/89R/billtext/pdf/HB00149I.pdf#navpanes=0

14See https://www.boston.gov/sites/default/files/file/2023/05/Guidelines-for-Using-Generative-AI-2023.pdf

Broader internal policies also aid in managing daily operations by supporting transparency, clarifying oversight roles, and coordinating internal processes across departments. For instance, Massachusetts’ Enterprise Acceptable Use of Information Technology Policy15 mandated human oversight in high-risk scenarios, and the State of Hawaii’s data guidance outlined how long system-related data should be retained. The Utah State Legislature (2024) introduced privacy protocols for sensitive health and education data and developed responsible use policies. To support the implementation of these strategies, local governments can

  • establish baseline AI policies across all departments and contractors (e.g., City of Boise, 2023);
  • align policies with existing legal frameworks such as HIPAA, ADA, FERPA, and relevant state privacy laws;
  • update existing legal frameworks to reflect new AI capabilities and risks;
  • update cybersecurity protocols to reflect new risks;
  • create mechanisms for regular review of AI policies based on system changes and workforce capacity;
  • consult the public through advisory bodies or forums, and visibly document and respond to input;
  • include guardrails for appropriate experimentation;
  • mandate human oversight for high-risk decisions;
  • require documentation and (ideally external) review of outputs before action and deployment;
  • set retention and protection policies for data used, generated, or resulting from systems, such as the detailed guidance in the California Consumer Privacy Act (CCPA)16 and the Maryland Online Data Privacy Act (MODPA)17 on purpose limitation and data minimization; and

___________________

15See https://www.mass.gov/doc/isp002-acceptable-use-policy/download

16See https://oag.ca.gov/privacy/ccpa

17See https://mgaleg.maryland.gov/2024RS/Chapters_noln/CH_454_hb0567e.pdf

  • set minimum expectations, clarify responsibilities, and enable a coordinated response.

Participate in and Help Shape Emerging Collaborative Frameworks

Coordination across the federal, state, and local levels is difficult because no unified operational hub exists to drive it (Lawrence et al., 2023). While federal agencies such as NIST, the Office of Science and Technology Policy (OSTP), and the new AI Safety Institute issue AI guidance and standards, none functions as a central operational coordinator across federal, state, and local governments, nor do they generally have the authority or infrastructure to ensure coordinated rollout or shared governance (Lawrence et al., 2023). As a result, states and cities are left to interpret and apply AI standards independently, which leads to fragmentation, duplication of effort, and inconsistent protections (National Governors Association, 2025). A central coordination office or clearinghouse could address these gaps by providing

  • state-to-state knowledge sharing,
  • shared procurement or sandbox environments,
  • support for under-resourced localities, and
  • technical assistance and legal frameworks (Yigitcanlar et al., 2023).

Existing cross-agency collaborations highlight the value of coordination. An example is a partnership between Johnson County, Kansas, and the Data-Driven Justice Initiative, whose predictive model for jail diversion for individuals with complex needs links data from justice, mental health, and emergency medical services (Salomon et al., 2017). Some state-level efforts also are emerging, although it is too early to determine their downstream impact:

  • New Jersey’s Executive Order No. 346 establishes a cross-agency working group to study emerging AI tools and deliver public recommendations in order to ensure the ethical and responsible use of these technologies (Office of the Governor of New Jersey, 2023).
  • Pennsylvania’s Executive Order 2023-19 establishes an AI Governing Board tasked with reviewing generative AI proposals across agencies, with a focus on bias, security, and alignment with state values (Office of Governor of Pennsylvania, 2023).

Although federal–local partnerships also exist, coordination can be weak, with siloed efforts and unclear pathways for local input (Congressional Research Service, 2025). When federal pilot programs are accessible and accompanied by technical assistance, however, innovations can be scaled. For example, Charlotte-Mecklenburg Police’s Early Intervention System, developed as part of the White House’s Police Data Initiative, was later adopted by other jurisdictions (Helsby et al., 2017).18

Some national efforts support intergovernmental collaboration:

  • The National AI Research Institutes (e.g., Carnegie Mellon’s AI-SDM Institute and the University of Maryland’s TRAILS Institute) are funded by the National Science Foundation and support multisector partnerships involving local governments, universities, and industry players (National Science Foundation, 2023).
  • A second example is NIST’s AI Safety Institute and Consortium, still in the development stage.

Public associations can fill some of the coordination gaps. Organizations such as the National Governors Association, the International City-County Management Association, the National Association of Counties, the National League of Cities, and the National Conference of State Legislatures offer toolkits, showcase use cases and policies, and facilitate knowledge exchange through conferences and webinars.

___________________

18This program was developed to flag officers at risk of adverse interactions with the public (Helsby et al., 2018).

In addition, specialized intergovernmental networks focused on AI are emerging:

  • The GovAI Coalition,19 launched by the City of San Jose in 2023, includes cities such as Bellevue, Long Beach, San Antonio, San Diego, and St. Paul; the Colorado Department of Revenue; and a metropolitan transportation district in Oregon.
  • City AI Connect20 is supported by Bloomberg Philanthropies through The Johns Hopkins University.
  • The Western Regional Innovation and Technology Alliance (WRITA)21 serves western states.
  • The Connective (formerly the Smart Region Consortium)22 is based in the Phoenix–Mesa, Arizona, metropolitan area.

These networks, along with civil society partners such as the Center for Democracy & Technology’s AI Governance Lab,23 Upturn, and EPIC, are laying the groundwork for collective learning. However, not all localities are equally positioned to benefit from these networks. Large cities (e.g., New York City, Los Angeles) may have their own AI offices and innovative teams that can easily participate. But rural towns or small municipalities often lack the technical staff, legal counsel, and resources to participate meaningfully in cross-government AI networks. Funding mechanisms are needed to compensate these smaller localities for their participation.

Develop Tiered AI Procurement Guidance

As the number of AI technologies continues to increase, the development of robust tiered procurement processes and guidance has become essential for state and local governments. While standards reduce the risk of deploying a system that is ineffective, more costly to maintain than anticipated, insecure, or misaligned with public values and protections (Coglianese, 2023), they must also be scalable and tailored to user needs and risks. Procurement guidance navigates a tension between transparency and the proprietary interests of vendors, balancing commercial interests and secrecy with accountability and public trust (Hickok, 2024). Clear procurement guidance can ensure that vendors not only meet technical requirements, organizational needs, and operational goals, but also comply with legal, social, and ethical standards, avoiding a solution-in-search-of-a-problem (IEEE Draft Standard 3119-2025;24 World Economic Forum, 2023). The World Economic Forum’s AI Procurement in a Box provides guidance on assessing vendor transparency, mitigating bias, and implementing ethical safeguards (Coglianese & Shaikh, 2025). A tiered procurement approach that is purpose-oriented and aligns with system risk and public impact can ensure that AI technologies are not simply vendor driven.

Procurement policies can address a defined set of concerns rather than attempting to cover everything. In implementing AI procurement guidance, a tiered approach aligned with various use cases can help ensure that the procurement process prioritizes critical issues while maintaining the flexibility to meet varying needs. For example, low-risk AI use cases lend themselves to a lower level of scrutiny during the procurement process relative to high-risk use cases, which require more intense human oversight and accountability. Focusing on the following key elements can keep implementation manageable and actionable:

___________________

19See https://www.sanjoseca.gov/your-government/departments-offices/information-technology/ai-reviews-algorithm-register/govai-coalition

20See https://cityaiconnect.jhu.edu/

21See https://www.writa.us/

22See https://www.connective.city/

23See https://cdt.org/cdt-ai-governance-lab/

24See https://standards.ieee.org/ieee/3119/10729/

  • demonstrating that the system’s design will lead to appropriate use in its intended contexts;
  • identifying alignment or misalignment with current work system practices, and therefore avoiding potential additional costs;
  • addressing data protection and human oversight requirements, as appropriate to the system’s use and impact; and
  • incorporating security testing, auditability, and responsibility for AI outputs in alignment with the tool’s goals.
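For illustration only, the tiering logic described above can be expressed as a simple rubric. The tier names, risk signals, and required safeguards in the sketch below are hypothetical assumptions, not drawn from any cited standard or jurisdiction’s policy:

```python
# Hypothetical tiered-procurement rubric: tier names, risk signals, and
# required safeguards are illustrative assumptions, not a real framework.

TIER_SAFEGUARDS = {
    "low":    ["standard contract terms", "basic security review"],
    "medium": ["data protection addendum", "vendor documentation access", "pilot phase"],
    "high":   ["impact assessment", "third-party audit", "human oversight plan"],
}

def scrutiny_tier(affects_rights_or_services: bool, uses_personal_data: bool) -> str:
    """Route a proposed AI use case to a procurement scrutiny tier."""
    if affects_rights_or_services:
        return "high"    # e.g., benefits eligibility, policing, hiring
    if uses_personal_data:
        return "medium"  # e.g., resident-facing tools handling names or addresses
    return "low"         # e.g., internal drafting or summarization aids

# A hiring-screening tool lands in the high tier and inherits its safeguards.
tier = scrutiny_tier(affects_rights_or_services=True, uses_personal_data=True)
required = TIER_SAFEGUARDS[tier]
```

Even a rubric this small makes the key property of a tiered approach concrete: the level of scrutiny is determined by the use case’s risk profile, not by the vendor’s product category.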

Several state and local governments have started experimenting with innovative approaches to procurement. For example, California invited technology companies to propose GenAI technologies to help reduce traffic congestion and improve road safety (Sowell, 2024). Although framed as an open challenge, California’s approach emphasized alignment with ethical guidelines and responsible AI practices during vendor selection. And the State of Alaska introduced a bill requiring all AI systems used by state agencies to undergo an initial impact assessment and be continuously evaluated throughout their use (State of Alaska, 2024). It is important as well to allow for piloting so that users can test technologies before an AI tool or system is purchased or before full deployment. Doing so can help reduce procurement risks by allowing users to see whether the tool or system aligns with their expectations and needs.

Aligning procurement standards with broader national efforts, such as NIST’s AI Risk Management Framework (National Institute of Standards and Technology, 2023), can foster consistency, interoperability, and effective risk mitigation across jurisdictions. However, governments often encounter "black box" vendor solutions in which the underlying AI models are opaque, and unequal bargaining power complicates the government’s ability to demand accountability (Whittaker et al., 2018). To tackle these barriers, procurement contracts may incorporate access to sufficient system documentation, third-party testing or audits, mechanisms that address instances in which AI systems fail to meet performance expectations or cause harm, and concrete data governance guidelines that specify how the vendor’s access to user data will be restricted and how data controls will be enforced (Coglianese, 2023; Yigitcanlar et al., 2023). Collaborative design of templates for requests for proposals for use in procurement processes, as well as the use of best practices for vendor and product evaluation, can be a useful starting point for state and local agencies seeking to build on collective knowledge and consolidate lessons learned.

2. Planning and Scoping (How to get started responsibly?)

Getting started responsibly entails conducting feasibility assessments, mapping workflows, and scoping internal capacity. (See Table 3.)

Table 3: Planning and Scoping Strategies

Goal Example Implementation Strategy Responsibility Timeframe
Feasibility assessments and workflow mapping Hawaii’s wildfire forecast system Defining the problem and setting objectives, assessing data readiness and quality, evaluating technical feasibility, and mapping existing workflows to ensure that the AI solution integrates with current systems and processes Public-sector practitioners Early-stage planning
Scoping of internal capacity Tempe, Arizona’s, AI Review Evaluating workforce skills, assessing infrastructure readiness, documenting available resources, and implementing change management strategies to ensure that the organization can support and sustain AI technologies over time City managers, department heads, city council members, county-level executives Medium- to long-term

Conduct Feasibility Assessments and Workflow Mapping

Adopting a structured approach that includes initial scoping and feasibility assessment, followed by pilot testing, full-scale deployment, and ongoing improvement, facilitates validation, user training, and process refinement before adoption (Al-Amin et al., 2024). Feasibility assessments determine what needs to be built or procured and why, helping organizations understand whether the adoption of specific AI technologies is viable (Kinkel et al., 2021). Tools such as the SITUATE AI guidebook assist in mapping goals to technical feasibility (Kawakami et al., 2024). Feasibility assessments often focus on technical or budgetary constraints but can also benefit from the inclusion of public values (Madan & Ashok, 2022). A thorough feasibility assessment focuses on defining user needs, operational goals, ethical boundaries, and legal constraints (Guha et al., 2024). For example, the development of a model to address water main failures in Syracuse, New York, began with a citywide infrastructure mapping exercise to explore how the proposed technologies would integrate into public workflows (Kumar et al., 2018). Likewise, Philadelphia’s “Pitch & Pilot”25 program tests AI solutions on a small scale to evaluate key metrics before full-scale deployment.

Workflow planning involves mapping how the AI technology will be integrated with existing operational processes and how users will interact with it, and ensuring alignment with operational needs. It is important to center user experiences and solicit staff perspectives to understand oversight mechanisms, governance needs, and public trust considerations. When possible, state and local governments can use shared frameworks or templates to reduce planning burdens. A strong feasibility and workflow mapping process can include the following elements:

  • Defining the problem and setting objectives: Asking, “What is the problem?”, and “Where might AI be integrated into an intervention to help with addressing the problem?” can be a good way to start (Campion et al., 2020). Cross-department collaborations can ensure that key points are identified, problems are prioritized, and collective capabilities are realized (Campion et al., 2020). Objectives need to be specific, measurable, and time bound.
  • Assessing data readiness and quality: Data readiness and quality are crucial factors in determining AI integration. It is important to develop standards for how to control what data are collected, how they are collected, and in what format they are stored (Sun & Medaglia, 2019).
  • Evaluating technical feasibility: This step enables organizations to determine whether the AI technologies they want to adopt fit the organization’s existing IT infrastructure by identifying and documenting gaps and system requirements (Kinkel et al., 2021).
  • Additional deployment planning: Beyond technical feasibility, it is important to determine how staff will use AI technologies day to day, how decisions will be made using the AI outputs, and how disruptions to existing services will be managed.
  • Mapping existing workflows: Clear mapping ensures that the AI technologies being integrated are complementary to and in sync with existing systems and human roles, rather than replacing or duplicating them (Selten & Klievink, 2024). This step also includes defining roles and responsibilities for ongoing use, oversight, and issue resolution.

___________________

25See https://www.phila.gov/documents/pitch-and-pilot-calls-for-solutions/

An example of early scoping is Hawaii’s wildfire forecast system, which was developed in collaboration with the University of Hawaii.26 The scoping and feasibility phase involved conducting infrastructure planning and defining user-specific outcomes (National League of Cities, 2024). Using guides such as the Data Science Project Scoping Guide,27 designed and used specifically for state and local data science, machine learning (ML), and AI projects, can be a good starting point for ideation and scoping.

Scope Internal Capacity

Related to the feasibility assessment discussed above, which focuses on technical and operational viability, scoping of internal capacity provides a broader institutional review and explores whether an organization can support AI adoption. A holistic approach that aligns technology feasibility with organizational goals and employee capabilities is beneficial (Uren & Edwards, 2023). The emphasis here is on staffing, technology already in use, change management, governance, and sustained funding—particularly important concerns because local governments often lack “the legal, financial, technological, and human resources necessary to effectively integrate AI solutions” (Eichholz, 2025, p. 2). For example, AI capabilities are already present in technologies agencies may already use, such as spam detection in email or search functionality in Microsoft SharePoint. Building on the existing capabilities of current workflows, tools, and teams can be beneficial. The scoping exercise needs to focus on long-term sustainability with respect to the operation, maintenance, and responsiveness of AI technologies. Some actionable strategies for this phase include the following:

  • Evaluating current workforce skills: Assessing technical capacity is crucial for identifying potential gaps and determining the institution’s ability to integrate AI technologies effectively (Engstrom et al., 2020). Local governments need to consider both technical expertise (e.g., AI/ML, data science) and domain-specific knowledge. Findings from this process can be beneficial in making decisions about training, hiring, or outsourcing expertise (see the section on “Building Internal Capacity”).
  • Determining infrastructure needs: This step extends beyond assessing fit and performance requirements for the proposed AI integration with the existing system to documenting whether long-term operational sustainability and organizational readiness for AI integration are in place (Jöhnk et al., 2021). Some questions to ask here include: Is data governance in place? Is a secure and scalable network infrastructure in place? Is there a budget, and are processes in place for sustaining AI technologies in the long term? Tempe, Arizona’s AI Review, for example, requires infrastructure readiness and assigns oversight roles such as privacy officers (Yigitcanlar et al., 2024a).
  • Assessing available financial resources: Documenting available resources in terms of budget and funding is critical for the integration of AI technologies (Campion et al., 2022; Selten & Klievink, 2024; Wirtz et al., 2019). For smaller localities, access to internal funding is likely to be limited, and pooling resources with other regional partners or seeking external funding may be helpful. Some smaller municipalities use peer cities as vetted test cases before adopting AI tools, acknowledging the constraints of their staffing and budgets (National League of Cities, 2023).
  • Developing change management strategies: The introduction of new technologies may lead to some resistance within institutions. Anticipating the nature of this resistance can assist in efforts to secure internal buy-in. Change management strategies can include conducting tailored briefings for senior staff, department heads, and managers on the potential benefits and challenges of AI adoption (Martins, 2023). At the employee level, strategies include communicating a clear vision, articulating how employees will benefit, and outlining expected challenges (Martins, 2023). Additionally, engaging employees, including frontline staff, early on through workshops or meetings can enhance internal legitimacy for AI deployment (Eichholz, 2025).

___________________

26See https://hawaii.edu/epscor/ai-powered-wildfire-forecast-system-for-hawai%CA%BBi-is-goal-of-uh-researchers/

27See https://datasciencepublicpolicy.org/our-work/tools-guides/data-science-project-scoping-guide/

3. Design and Development (if internal) or Selection (if procuring externally) (How to align design and development with purpose and use?)

The effective and responsible use of AI in local government depends on whether the AI technologies being designed or adopted align with their intended goals and purpose. Research highlights principles of purpose-driven, participatory, and responsible design that are crucial in ensuring that AI meets community needs, supports public values, and addresses ethical concerns. (See Table 4.)

Table 4: How to Align Design with Purpose?

Goal Example Implementation Strategy Responsibility Timeframe
Aligning the problem definition with context, goals, and technical design National Institute of Standards and Technology’s (NIST’s) AI Risk Management Framework Refining the problem statement and verifying that the problem aligns with the context, organizational priorities, fairness considerations, and real-world feasibility City managers, department heads, city council members, county-level executives Early-stage planning
Defining evaluation criteria and assessing system-level impacts Tempe, Arizona’s, evaluation framework Defining evaluation criteria and how outcomes will be measured against pre-AI baseline benchmarks Public-sector practitioners Early–ongoing
Establishing feedback mechanisms International City/County Management Association’s (ICMA’s) Local Government AI Strategy Workbook Establishing feedback mechanisms, including measures for public perception, as well as staff feedback loops City managers, department heads, city council members, county-level executives Early–ongoing

Align the Problem Definition with Context, Goals, and Technical Design

Before proceeding with AI adoption, it is important to define the problem statement, incorporating input from all stakeholders. Misaligned AI technologies can cause significant harm. For example, Michigan’s MiDAS system,28 intended to automate unemployment insurance claims, used a flawed algorithm and lacked sufficient human oversight, ultimately falsely accusing 40,000 individuals of fraud, with a 93 percent error rate (Kohl, 2024). Similarly, predictive policing tools trained on historically biased crime data can result in disproportionate enforcement in already overpoliced neighborhoods (Richardson et al., 2019). These two cases illustrate the need for governance structures that prioritize accuracy, accountability, trust, and refinement and that ensure testing of real-world feasibility, especially when deploying AI for high-stakes uses. Below are some actionable steps to ensure relevant, effective, and publicly trusted AI:

  • Purpose definition: Identifying clear goals for AI adoption (e.g., decision support, automation, prediction, and service delivery). Application areas include planning, analytics, security, energy management, and modeling (Yigitcanlar et al., 2024b).
  • Responsible innovation: Balancing benefits, risks, and impacts using responsible adoption frameworks (Yigitcanlar et al., 2021). Such frameworks can address sources of control, agency discretion, and accountability. The map function of NIST’s Risk Management Framework involves documenting the system’s purpose, assumptions, and potential benefits and harms, as well as the social, legal, and operational contexts in which it will operate.
  • Participatory design: Gathering input from all stakeholders in the design, scoping, and implementation stages is crucial to ensure that AI solutions are relevant, trusted, and aligned with local needs. In this process, it can be helpful to use the strategies outlined in other sections of this report: “Engage the Public,” “Use Partnerships with Stakeholders,” and “Engage and Coordinate with Federal, State, and Local Partners.”
  • Matching the technical choices regarding design and development with the deployment and use context: Explicitly enumerating the design choices that need to be made to develop or procure the system and matching those to the specific deployment settings (Barmer et al., 2021).

Define Evaluation Criteria and Assess System-Level Impacts

Defining evaluation criteria that are based on human needs and reflect the social and organizational contexts in which AI systems will operate—and, where possible, establishing baselines before deployment—enables tracking the progress of implementation throughout the adoption life cycle. For example, measuring current performance (e.g., processing time, response time, human task accuracy, and outcomes) at baseline can help quantify whether an AI system is performing any better than the status quo. Developing baseline systems that are simpler, cheaper, and easier to maintain may be a good strategy to enable comparison against existing processes. Many local governments operate in environments in which reliable baseline data may not be available, or existing data may be unreliable, incomplete, or biased (Campion et al., 2020). Recognizing any such data limitations is critical, as evaluations based on incomplete or biased data lack an accurate starting point. In addition, evaluation frameworks are both technically and ethically complex, having to balance accuracy and efficiency with social values such as equity, transparency, and public trust (Sidorkin, 2025; Yigitcanlar et al., 2024a). Metrics for success need to be developed through participatory approaches and subject to independent oversight mechanisms to ensure that they align with public values (Heinisuo et al., 2025). Some actions at this point include the following:

___________________

28See https://wlr.law.wisc.edu/automated-stategraft-faulty-programming-and-improper-collections-in-michigans-unemployment-insurance-program

  • Defining performance criteria and desired outcomes: Clear, well-defined desired outcomes can help in determining whether adopted AI technologies are achieving their intended goals (Kelly et al., 2023). Defining criteria for success needs to include balancing competing priorities, including cost efficiency, operational efficiencies, accuracy, and precision, against fairness, bias detection, and community trust (Heinisuo et al., 2025; Yigitcanlar et al., 2024a). Identifying when and how outcomes will be measured, as well as who will be responsible for evaluating and adapting AI systems over time, is important as well (Leoni et al., 2021). For example, criteria for enhancing service delivery outcomes might include reducing waiting times for services, receiving positive feedback from residents, and reducing error rates. Adopting human-centered, social science–informed evaluation frameworks can capture qualities such as usefulness, clarity, fairness, and alignment with user expectations (Selbst et al., 2019). Such frameworks can ensure that AI outputs are meaningful and effective in real-world settings. Tempe, Arizona, for example, uses evaluation frameworks to track real-world performance over time.29
  • Establishing baselines where possible: Documenting pre-AI benchmarks for various processes can help in tracking whether the adopted technologies are working as intended (Königstorfer & Thalmann, 2022). Benchmarks can include service delivery times, information dissemination, and the efficiency of administrative processes (Yigitcanlar, 2024a), as well as organizational capabilities such as existing staff skills or digital infrastructure.
  • Using adaptive evaluation approaches: In cases where baseline data might not be available, adopting rolling assessments that evolve with system deployment and include iterative testing and refinement can be helpful (Yigitcanlar, 2024a). Another approach is to deploy AI in shadow or pilot mode, in which the AI system operates in parallel with the human process, creating a rolling baseline as both the AI system and the human perform the same tasks simultaneously (Sidorkin, 2025). In addition, qualitative indicators such as community feedback and perceptions of fairness should be incorporated alongside quantitative indicators (Sidorkin, 2025).
  • Evaluating at the system level and not just at the AI model level: An evaluation process needs to focus on the entire system within which the AI tool will be embedded. It includes the AI model but also incorporates downstream users and the impact on people.
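As a purely illustrative sketch of the shadow-mode and baseline ideas above, the comparison might look like the following; all case data, field names, and figures are hypothetical:

```python
# Illustrative sketch (not from the report): comparing an AI system run in
# "shadow mode" against the existing human process on the same cases.
# All case data, metric names, and figures below are hypothetical.

from dataclasses import dataclass

@dataclass
class CaseOutcome:
    case_id: str
    human_decision: str   # decision produced by the existing process
    ai_decision: str      # decision the shadow-mode AI would have made
    human_minutes: float  # staff time spent under the existing process

def shadow_mode_report(outcomes: list[CaseOutcome]) -> dict:
    """Summarize how the shadow AI compares with the pre-AI baseline."""
    n = len(outcomes)
    agreements = sum(1 for o in outcomes if o.human_decision == o.ai_decision)
    avg_minutes = sum(o.human_minutes for o in outcomes) / n
    return {
        "cases_reviewed": n,
        "agreement_rate": agreements / n,      # rolling baseline metric
        "baseline_avg_minutes": avg_minutes,   # pre-AI benchmark
        "disagreements": [o.case_id for o in outcomes
                          if o.human_decision != o.ai_decision],
    }

# Hypothetical usage: three benefit-eligibility cases.
report = shadow_mode_report([
    CaseOutcome("c1", "approve", "approve", 18.0),
    CaseOutcome("c2", "deny", "approve", 25.0),
    CaseOutcome("c3", "approve", "approve", 12.0),
])
# Disagreements (here, case c2) are exactly the cases flagged for human review.
```

Run in parallel with the existing process, a report like this accumulates agreement and timing metrics against the pre-AI baseline before the AI system is ever allowed to act on its own.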

Establish Feedback Mechanisms

Feedback mechanisms can be used to collect comments and experiences from those engaging with AI technologies, such as community members and staff, as suggested in the International City/County Management Association (ICMA) Local Government AI Strategy Workbook.30 Establishing these feedback loops can help in determining whether performance criteria and desired outcomes are being met by tracking key metrics, as well as identifying potential challenges and opportunities. The process encompasses the following elements:

  • Public perceptions: Providing opportunities for affected community members to offer feedback can help in understanding how the implemented AI technologies are being experienced, thereby enhancing trust and transparency (Yigitcanlar et al., 2022). It is important to integrate such opportunities for feedback before technologies are implemented. Offering various methods for feedback (e.g., surveys, suggestion boxes, feedback lines, social media, or email) can increase the likelihood of engagement by community members. Measurable factors include public satisfaction, trust, and experience with using AI technologies (Azzahro, Hidayanto, & Shihab, 2025).

___________________

29See https://performance.tempe.gov/

30See https://lgr.icma.org/wp-content/uploads/2024/04/Use-of-Artificial-Intelligence-in-Local-Government-Strategy-and-Tools-1.pdf
  • Staff experience and feedback loops: Identifying adoption challenges can help ensure that AI technologies are effective and meet the goals for which they are deployed. Having local government staff close the loop by providing feedback can aid in documenting any barriers or misalignments. Such staff feedback loops also need to be established at the beginning and can be used to track progress throughout the life cycle (David et al., 2024).

4. Capability and Culture (How to build readiness?)

This stage focuses on building internal capacity and competencies for AI, as well as forming collaborations with industry and academic partners. (See Table 5.)

Table 5: Building Readiness and Collaboration Strategies

Goal: Build internal capacity and competency
  Examples: Maryland DoIT and InnovateUS training; San Jose, California’s partnership with NVIDIA and San Jose State University; Boise, Idaho’s AI Ambassador program
  Implementation Strategy: Training existing staff to evaluate vendor claims, manage systems, and maintain tools over time; offering ongoing training on ethics, risks, and technology literacy
  Responsibility: Governance bodies, such as IT strategy committees or state-led training oversight teams; staff managers and department heads
  Timeframe: Ongoing and long-term

Goal: Use partnerships with stakeholders
  Examples: Schenectady, New York’s partnership with the State University of New York (SUNY) at Albany; California’s 2024 traffic AI pilot; Arizona’s AI Solutions Challenge
  Implementation Strategy: Collaborate with civil society, industry, and academic partners for cutting-edge tools, pilot programs, security, usability testing, and technical support
  Responsibility: City managers, department heads, city council members, county-level executives
  Timeframe: Pilot to long-term

Build Internal Capacity and Competency

As more AI technologies become available, internal technical capacity is increasingly critical to ensure effective adoption and implementation. Strategies differ by existing capacity. Localities may train existing staff, and if resources allow, they can hire additional staff with the needed technical skills, partner with other departments, or engage in knowledge exchange with other localities.

Implementing ongoing training programs for public officials ensures that staff can understand the potential and limitations of AI technologies and mitigate associated risks (Chun & Noveck, 2025; Lauterbach & Bonime-Blanc, 2018). Other research highlights the importance of continuous, broad-based literacy and the need to impart new skills to both leadership and technical workers (Stone et al., 2020; Tambe et al., 2019). Training can focus on ethical awareness; basic technical understanding; risk assessments; cost structures; and development trends, since technologies are evolving rapidly.

There are several examples of state and local training programs. The Maryland Department of Information Technology, for instance, partnered with InnovateUS to offer free, asynchronous courses that include instruction, specifically for state employees, on how to audit systems for bias related to AI in the workplace.31 At the municipal level, the City of San Jose, working with NVIDIA and San Jose State University, offers opportunities for city workers to build AI literacy skills.32 Such partnerships are beneficial for localities that may lack the resources to run their own training programs. There are also training programs from nonprofits, think tanks, and companies, some free and some fee-based, such as those offered by the Partnership for Public Service,33 InnovateUS,34 Intel,35 IBM,36 Apolitical,37 and Salesforce.38 Boise, Idaho’s AI Ambassador program39 is an internal program that trains staff to act as peer trainers.

Beyond individual technical skills, organizational structures play a critical role. Technical capacity and organizational structure are often inextricably linked, and building both requires significant investment, which can be challenging for smaller localities. Organizational readiness also includes leadership buy-in. A study of federal policies revealed a capacity and leadership gap that hindered the effective implementation and governance of AI across federal agencies (Lawrence et al., 2023). Chen and colleagues (2024) highlight that successful deployment depends not only on staff training but also on internal guidance and maintenance processes. To this end, local governments can establish governance bodies that are responsible for updating staff knowledge, as seen in early efforts by California, Oregon, and Pennsylvania (State of California, 2023; State of Oregon, 2023; State of Pennsylvania, 2023).

Building long-term capacity may also involve other strategies to support deployment and sustainability:

  • forming interdisciplinary teams to evaluate use cases and guide deployment (De Haes et al., 2020);
  • establishing competent IT strategy committees and structured communication systems (Ali & Green, 2012); and
  • sharing services and pooling expertise across jurisdictions (e.g., Rhode Island’s rural–urban agency collaborations, 2024).

Use Partnerships with Stakeholders

Cross-sector collaborations with industry, academia, and civil society partners can be beneficial. Governments may structure partnerships to reflect operational needs and to include real-world performance evaluation. Such partnerships can help address technical, organizational, and social challenges, accelerating AI adoption (Campion et al., 2020; Garousi et al., 2016). When structured well, these partnerships can also support local capacity building, for example, by collaborating with regional schools, community colleges, or universities through student projects and research labs to identify needs and usability issues and to build local competencies. Schenectady, New York, for example, partnered with the State University of New York (SUNY) at Albany to codevelop real-time infrastructure tracking tools, building shared capacity and test environments.40 Additionally, partnering with local universities can enable the creation of cost-effective tools (e.g., Arizona’s AI Innovation Challenge).41

___________________

31See https://innovate-us.org/partner/marylanddoit

32See https://www.sjmayormatt.com/news-room/city-of-san-jos-to-upskill-workforce-and-drive-ai-innovation-in-local-government-in-collaboration-with-nvidia

33See https://ourpublicservice.org/course/ai-government-leadership-program-state-local/

34See https://innovate-us.org/workshop-series/artificial-intelligence-for-the-public-sector

35See https://www.intel.com/content/www/us/en/developer/topic-technology/artificial-intelligence/training/overview.html

36See https://www.ibm.com/training/artificial-intelligence

37See https://apolitical.co/microcourses/en/ai-fundamentals-for-public-servants-opportunities-risks-and-strategies/

38See https://www.salesforce.com/artificial-intelligence/ai-course/

39See https://www.cityofboise.org/programs/innovation-plus-performance/ai-in-government/

Engaging with civil society groups can enhance community digital skills, accelerate the uptake of AI technologies, and improve public trust (Jaillant & Rees, 2022). California,42 for example, piloted a GenAI project in partnership with private-sector firms to analyze live traffic data and predict congestion. However, these partnerships can also create risks, including bias; vendor lock-in; poor alignment with broader needs and public values; privacy gaps; and the imposition of one-size-fits-all solutions on local contexts without addressing critical, unique needs (Eubanks, 2018; Whittlestone et al., 2019).

To guide the establishment of collaborations, local governments can

  • appoint individuals who are boundary spanners or have joint appointments to facilitate communication and align goals (Campion et al., 2020);
  • hold regular meetings and maintain consistent points of contact (Garousi et al., 2016);
  • outline clear benefits for all partners to encourage buy-in (Garousi et al., 2016); and
  • include all relevant stakeholders to ensure that AI solutions are tailored to local needs (Cubric, 2020).

5. Ongoing Accountability and Engagement (How to manage implementation and maintain trust?)

Managing AI implementation involves continuous monitoring and improvement processes, advisory bodies, and oversight mechanisms. (See Table 6.)

Table 6: Accountability and Engagement Strategies

Goal: Continuous monitoring and improvement mechanisms
  Examples: San Jose, California’s AI risk classification guidelines; Grove City, Ohio’s AI policy development
  Implementation Strategy: Building monitoring and evaluation into AI systems from the beginning; defining clear objectives and metrics; establishing a baseline; and developing an evaluation plan to measure progress
  Responsibility: Local governments, risk committees, and stakeholder teams
  Timeframe: Ongoing and long-term

Goal: Create advisory and oversight bodies
  Examples: Texas’s AI Advisory Council; Maryland’s AI Subcabinet; District of Columbia’s AI Values Alignment Group; Oregon’s AI Advisory Council; New York City’s Local Law 144 audit review issues
  Implementation Strategy: Establishing clear legal frameworks for enforceable authority; creating accountability mechanisms such as public-facing reports and independent evaluations; forming independent oversight bodies with broad representation, especially in high-impact areas
  Responsibility: State or local government leaders
  Timeframe: Medium- to long-term

Establish Tiered Continuous Monitoring and Improvement Mechanisms

Integrating monitoring and improvement mechanisms from the outset is crucial for maintaining public trust and accountability in AI-driven public services. Evaluation can cover the entire life cycle, from pre-deployment testing to real-world performance monitoring, and provide clear pathways for system improvement or decommissioning (National Association of Counties, 2024; Yigitcanlar et al., 2024b). Evaluations that focus on outcomes rather than outputs can measure the actual impact of AI technologies on citizens and communities, rather than simply technical performance (David et al., 2024; National Association of Counties, 2024). It is also important to ensure that the evaluation process itself is conducted ethically and is free of bias and privacy violations (David et al., 2024; Eichholz, 2025; National Conference of State Legislatures, 2024). Tempe, Arizona’s AI Evaluation Policy, for example, establishes expectations for ongoing system review and improvement (National League of Cities, 2023). However, some local governments may lack the funding, staff, or legal mandates to carry out long-term auditing or retraining, and their AI systems may degrade over time without ongoing maintenance (Badger et al., 2021; Li & Goel, 2024). These constraints need to be weighed deliberately when designing continuous evaluation processes, since the risks of deploying a system without robust evaluation can be significant.

___________________

40See https://www.albany.edu/news-center/news/2024-ctg-ualbany-students-using-ai-help-city-schenectady-track-government

41See https://ai.asu.edu/AI-Innovation-Challenge

42See https://www.govtech.com/artificial-intelligence/caltrans-pilots-generative-ai-to-probe-resolve-traffic-woes

Key strategies here can include the following:

  • Independent audits: These audits, particularly when monitoring has been outsourced to vendors, provide a practical governance mechanism that can focus on prospective risk assessments, operational audit trails, and compliance with jurisdictional requirements (Falco et al., 2021).
  • Responsible AI frameworks: Local governments can use the Responsible AI for Evaluation (RAI-E) framework, which provides practical tools and rubrics designed to help local governments evaluate AI technologies in a structured and accountable manner (Fonner & Coyle, 2024). The framework is adaptable to local contexts and includes methods for using feedback to refine a model informed by real-world use (Fonner & Coyle, 2024). Sharing templates, toolkits, and other resources can assist under-resourced localities. These resources can support ongoing monitoring and drive continuous improvement, even where funding and expertise are limited (Matthew et al., 2024).
  • Risk management for potential threats: As governments deploy AI models in operational contexts, particularly for high-stakes decision making, those models will become targets for malicious actors who may attempt to manipulate model predictions, poison training data, or exploit vulnerabilities in deployed systems (Bibri et al., 2024). Best practices include adopting robust machine learning practices, conducting systematic threat modeling, and combining technical solutions with ongoing risk assessment and transparent practices (Bose et al., 2022; Malali & Madugula, 2025).
  • Fairness and bias audits: Maintaining fairness and reducing bias requires examining data sources, model design, transparency, and cultural context. Accountability can be enhanced through regular audits, human oversight, and the integration of ethical values into policies (Landers & Behrend, 2022; Murikah et al., 2024). The City of San Jose, California’s AI guidelines, for instance, categorize applications by risk level, requiring increased oversight for high-risk uses, particularly those that affect public services. Existing tools, such as AI Fairness 360,43 Aequitas,44 and Fairlearn,45 can serve as a good starting point for such efforts.

___________________

43See https://ai-fairness-360.org/

44See https://aequitasresource.org/

45See https://fairlearn.org/

  • Ongoing monitoring and governance: Continuous monitoring is essential for long-term, real-world impact. Frameworks such as AI for IMPACTS, originally developed for health care but applicable to a broader range of settings, can help local governments evaluate integration, performance, trust, cost, safety, and scalability (Jacob et al., 2024). An example is Grove City, Ohio, which is developing AI policies based on governance and risk mitigation, using an approach that entails reviewing existing policies and engaging a diverse range of stakeholders to align AI use with community needs.46
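To make the fairness-audit idea above concrete, the sketch below hand-rolls one common audit statistic, the demographic parity gap (the largest difference in approval rates across groups). It is a minimal illustration only; the function names and toy data are assumptions, and a real audit would rely on richer data and established toolkits such as Fairlearn, Aequitas, or AI Fairness 360 rather than this sketch.

```python
# Minimal sketch of a demographic parity check for an automated screening tool.
# Input records are (group, approved) pairs.
from collections import defaultdict

def approval_rates(decisions):
    """Return per-group approval rates from (group, approved) records."""
    approved = defaultdict(int)
    total = defaultdict(int)
    for group, ok in decisions:
        total[group] += 1
        approved[group] += int(ok)
    return {g: approved[g] / total[g] for g in total}

def demographic_parity_gap(decisions):
    """Largest difference in approval rate between any two groups."""
    rates = approval_rates(decisions)
    return max(rates.values()) - min(rates.values())

# Toy audit data: group A is approved 3/4 of the time, group B only 1/4.
decisions = [("A", True), ("A", True), ("A", False), ("A", True),
             ("B", True), ("B", False), ("B", False), ("B", False)]
gap = demographic_parity_gap(decisions)  # 0.75 - 0.25 = 0.5
print(f"parity gap: {gap:.2f}")
```

In an audit workflow, a gap above a threshold agreed on in advance would trigger human review of the data sources and model design, in line with the regular-audit and human-oversight practices cited above.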

Create and Sustain Advisory and Oversight Bodies

The success of advisory groups and oversight bodies depends on having clear mandates, genuine representation, transparency, and established mechanisms for translating advice into action. For example, a study of New York City’s Local Law 144, which mandated annual bias audits for automated employment decision tools, found low compliance, with many employers failing to post the required audit reports (Wright et al., 2024). The study found that noncompliance arose because the law granted employers discretion over whether their systems fell within its scope, resulting in inconsistent enforcement and limited accountability. Additional challenges include a lack of binding and enforceable authority; insufficient technical expertise to evaluate complex AI systems; and failure to guide decision making amid political pressures and industry capture, including reliance on temporary and limited-mandate groups (Nemitz, 2018; Whittlestone et al., 2019). To address these challenges, state and local governments can take the following steps:

  • Establish clear legal frameworks: Groups with clear, enforceable authority can provide legitimacy and establish mechanisms for addressing noncompliance. Several states have already established advisory structures with this in mind. Examples include Texas’s AI Advisory Council,47 which was established in 2023 to “study and monitor AI systems developed, employed, and procured by state agencies.” Additionally, Maryland established an AI Subcabinet within the Governor’s Executive Council to oversee the ethical and responsible use of AI (State of Maryland, 2024).
  • Create accountability mechanisms: Accountability mechanisms can be strengthened by implementing public-facing progress reports and conducting independent evaluations. Local governments, such as the District of Columbia, have established an advisory group on AI values alignment to monitor the District government’s AI adoption efforts and advise the mayor on promoting and adhering to responsible AI values (Government of the District of Columbia, 2024).
  • Establish independent oversight bodies: Independent advisory groups and ethics committees can play a vital role. They can review proposed AI use cases, monitor deployment, assess algorithmic impacts, and provide independent risk evaluations (Cantens, 2025; Yigitcanlar et al., 2024b).
  • Ensure that all stakeholders are represented: Representing all stakeholders, including industry experts, public administrators, legislators, and impacted community representatives, increases buy-in from all constituencies. An example of a state-level committee with broad representation is the Texas AI Advisory Council, which comprises members from academia, law enforcement, and the legal field and is tasked with monitoring AI systems developed, employed, and procured by state agencies (State of Texas, 2023b).

___________________

46See https://www.americancityandcounty.com/artificial-intelligence/how-one-city-is-proactively-managing-ai-use-and-what-local-governments-can-learn-from-it

47See https://aiadvisorycouncil.texas.gov/s/
  • Allocate resources for technical support: These bodies and groups require resources to carry out their functions effectively. Oregon’s State Government AI Advisory Council was allocated resources for technical support in AI governance (State of Oregon, 2023) and tasked with guiding AI adoption across state agencies.

Table 7 shows different types of advisory and oversight bodies.

Table 7: Types of Advisory Groups and Committees

Type: AI Advisory Councils
  Core Function: Advising on AI adoption and procurement, ensuring responsible use across government
  Accountability Mechanism: Regular reports to legislative bodies and response to recommendations
  Examples: State of Texas’s (2023a) AI Advisory Council; Vermont’s AI Advisory Council (Vermont General Assembly, 2022)

Type: AI Ethics Review Boards
  Core Function: Evaluating ethical implications of AI, ensuring alignment with public values
  Accountability Mechanism: Binding reviews and independent audits before AI deployment
  Examples: Vermont’s AI Code of Ethics (2023); Delaware’s AI Commission (Delaware General Assembly, 2024)

Type: Algorithmic Transparency Task Forces
  Core Function: Ensuring the transparency and accountability of AI systems, focusing on public-sector applications
  Accountability Mechanism: Public meetings, progress tracking, and independent evaluations
  Examples: New York City’s Automated Decision Systems Task Force (2019); Government of D.C.’s AI Values Advisory Group (2024)

Type: Cross-Agency Coordination Bodies
  Core Function: Fostering collaboration across agencies for effective AI deployment
  Accountability Mechanism: Interagency reports; coordinated oversight with clear performance metrics
  Examples: California Department of Transportation’s AI Pilot (2024)

Type: Independent AI Oversight Committees
  Core Function: Providing post-deployment audits and assessing the risks of AI systems
  Accountability Mechanism: Legal mandates for audits and public-facing evaluation reports
  Examples: Maryland’s AI Subcabinet (2024); Texas’s Artificial Intelligence Advisory Council (2023)

KEY QUESTIONS TO FURTHER SUPPORT DECISION MAKING AND ADOPTION

AI is a fast-moving and rapidly changing field, and there remain unanswered questions regarding its integration into state and local decision making. These questions, listed below, point to opportunities for additional research and for creating avenues for gathering insights and exchanging knowledge across the experiences of a range of localities:

  • What centralized resources would effectively support local and state governments in the use and governance of AI technologies?
  • What is the potential for national standards in the future, and what should the process for developing those standards be?
  • How can state and local governments stay up to date on the relevance and alignment of their use of AI technology as AI capabilities, vendor offerings, and regulatory frameworks rapidly evolve?
  • How can potential consequences of the adoption of emergent AI technologies be anticipated and mitigated more effectively?
  • What forms of engagement are most productive for informing the adoption of AI technologies?
  • What governance structures best support AI use in different settings?
  • How does the use of AI technologies influence public-facing decisions?
  • What transferable lessons can be learned from ongoing evaluations of AI technologies in local settings?

CONCLUSION

The rapid growth of AI-based technologies requires state and local decision makers to focus on how these technologies can be integrated effectively and safely into local government, which plays a key role in shaping a region’s economy, safety, and well-being. The benefits of AI-enabled systems are multifaceted, from better equipping public-sector workers to increasing organizational capacity. Yet AI adoption and implementation also risk widening the rural–urban divide, a gap that needs to be addressed. Integrating AI into state and local government decision making carries real risks, and governments need to weigh them carefully before adopting and implementing AI-based systems, particularly when those systems automate decision-making processes.

SEAN is interested in your feedback. Was this rapid expert consultation useful? Send comments to sean@nas.edu or (202) 334-3440.
