Artificial intelligence (AI) technologies5 have grown rapidly and have the potential to transform the delivery of public services by automating routine tasks, streamlining operations, and providing data-driven insights to inform decision making (National Academies of Sciences, Engineering, and Medicine, 2017; Soe & Drechsler, 2018; Yigitcanlar et al., 2024a). New AI technologies continue to be introduced both as functional tools and as platforms that support collaboration, thereby changing administrative processes and potentially improving service outcomes across various domains such as health, human services, weather, public administration, public safety, and urban planning.
As AI capabilities expand, decision makers are increasingly responsible for guiding the adoption of AI across their jurisdictions while also protecting public values, rights, and wellbeing (Schiff et al., 2022). Notably, the rise of generative AI has significant implications and applications for state and local governments. In addition to the benefits enumerated above, AI adoption and integration present challenges ranging from gaps in infrastructure and digital transformation to policy and regulatory hurdles, including privacy, data security, bias, algorithmic accountability, transparency, financial constraints, and workforce skills and training gaps (Chen et al., 2024; D’Amico et al., 2020). Significant technology gaps are a challenge as well, as off-the-shelf AI tools that meet the specific needs of state and local agencies often do not yet exist. Moreover, even when privacy protections such as changing credentials and rotating message logs are built in, scholars and policy makers caution that municipal AI systems raise broad ethical concerns related to privacy, surveillance, algorithmic bias, and public accountability (New York City Office of Technology and Innovation, 2023; Young et al., 2019). These concerns can be addressed by engaging with the public, developing robust governance frameworks, and maintaining ongoing oversight (Lottu et al., 2024).
The increasing development of AI technologies presents a timely opportunity to provide evidence-based support that can inform state and local decision making around AI procurement, development, and adoption. While state and local leaders are tasked with providing strategic direction, overseeing procurement, and shaping implementation, they must also manage resource limitations, workforce gaps, and balance the potential benefits and harms of AI technologies (Sloane & Wüllhorst, 2025). This rapid expert consultation6 provides actionable guidance for public-sector leaders, policy makers, and practitioners responsible for planning, procuring, and overseeing the use of AI in state and local government. It is intended to support decision makers at all levels and in all roles—from strategy and governance to implementation and evaluation—in adopting AI technologies responsibly and effectively. Note that this consultation focuses primarily on AI in decision making—both systems that directly make decisions and those that materially inform or guide decisions that affect services, rights, or resource allocations—and not on AI tools that can improve organizational efficiency and innovation without directly influencing public-facing decisions.
___________________
5While there is no one widely accepted definition of AI, it is defined for the purposes of this report as a “machine-based system that can, for a given set of human-defined objectives, make predictions, recommendations, or decisions influencing real or virtual environments” (Organisation for Economic Co-operation and Development, 2019, Section I).
6The full statement of task states: “The National Academies of Sciences, Engineering, and Medicine will produce a rapid expert consultation, ‘Strategies for integrating artificial intelligence (AI) tools into state and local decision-making,’ aimed at providing timely, actionable guidance to support state, tribal, territorial, and local decision-makers who are either using, in the process of integrating, or considering the adoption of AI technologies. Drawing on interdisciplinary research from behavioral economics, sociology, cognitive psychology, human-centered AI, public administration, policy studies, information sciences, and ethics, this consultation will consider the social, behavioral, and economic (SBE) aspects of AI adoption to provide practical insights into how decision-makers can assess whether AI technologies align with their governance goals. The consultation will explore the potential of AI to improve public services and governance at state and local levels and enhance efficiency and innovation across various sectors while also flagging associated ethical considerations, data security risks, and integration challenges. The pace at which AI systems are evolving presents unique opportunities and challenges for decision-makers. The focus of the consultation will be on the SBE aspects of AI adoption, ensuring that the discussion remains centered in the SBE domain rather than on the underlying technical components. The rapid expert consultation will be designed for timely, practical use by decision-makers but will not make recommendations. It will be reviewed in accordance with institutional guidelines.”
Decision makers face several choices related to AI technologies—not only how to adopt AI, but whether to adopt it at all, when, and under what specific circumstances. These decisions range from doing nothing or delaying adoption, to developing systems in-house, to hiring third-party vendors, to partnering with academics, to acquiring commercial tools, to leveraging public or open-source options (Weerts, 2025).7 The options offer varying levels of effectiveness, risk, cost, and control, and require balancing key considerations, including data privacy and security, affordability, ethical implications, and long-term sustainability. These choices can arise across the entire technology implementation life cycle—from problem definition and planning, to scoping, to building readiness, to ongoing engagement, monitoring, and accountability.
Although public attention to AI has surged in recent years, governments have long been adopting these technologies (Chen et al., 2024). For example, AI chatbots and virtual assistants have already been delivering public services, improving quality, and enhancing efficiency (Chen et al., 2024). Additionally, automated decision systems, decision-support systems, and earlier versions of predictive AI have been used for years in applications such as property tax assessments, emergency response systems, and public health surveillance (Gloudemans & Sanderson, 2021; Nihi et al., 2025).
The application of AI in government has been framed as a federal concern, with considerable attention focused on AI regulations and strategies associated with national strategic or economic interests (Allam & Dhunny, 2019; Sloane, 2022). However, uptake of AI is increasing among state and local governments. In 2024, more than 450 bills related to AI were introduced throughout the United States (National Conference of State Legislatures, 2024), many of which have now passed and constitute new regulatory environments for AI development (Sloane & Wüllhorst, 2025). In these efforts, state legislatures have considered issues such as local government use of AI, impact assessments, use guidelines, procurement standards, and the establishment of oversight bodies (National Conference of State Legislatures, 2024). Additionally, at least 30 states have issued guidance on AI use within state agencies, such as Georgia’s AI Responsible Use Guidelines (National Conference of State Legislatures, 2024). Local governments have also been developing policies for ethical use, from large cities such as New York City to smaller communities such as Tempe, Arizona.
Not all states and local jurisdictions are at the same stage in the adoption and integration of AI; some have integrated AI technologies into their work, while others are still exploring the most effective approach. Research suggests that a clear roadmap aligned with timeframes can be beneficial when exploring AI adoption and implementation (Jöhnk et al., 2021; Liu et al., 2024; Reim et al., 2020):
___________________
7State and local governments are adopting two main types of AI tools: general-purpose tools such as enterprise ChatGPT (Pennsylvania), large language model (LLM) wrappers (New Jersey’s AI agent, Massachusetts Genie, Boston’s Launchpad), and integrated tools such as Gemini (Colorado, Boston) and Co-Pilot; and more targeted applications—custom (Boston’s BidBot, Massachusetts’s MECA) or commercial (Westlaw’s Co-Counsel, Axon’s police report tool).
Several frameworks and approaches—including the Blueprint for an AI Bill of Rights (Office of Science and Technology Policy, 2022); NIST’s AI Risk Management Framework (National Institute of Standards and Technology, 2023); and recent work by Bignami (2022), Chen and colleagues (2023), Hanna and colleagues (2024), Kawakami and colleagues (2024), Merhi (2023), and Miller (2022)—offer converging guidance on responsible AI adoption. Synthesizing across these sources, the following guiding questions can help structure decision making throughout the AI adoption life cycle:
___________________
8See https://doit.maryland.gov/About-DoIT/Offices/Documents/2025%20Maryland%20AI%20Enablement%20Strategy%20and%20AI%20Study%20Roadmap.pdf
9See https://www.gov.ca.gov/wp-content/uploads/2023/09/AI-EO-No.12-_-GGN-Signed.pdf
The sections that follow provide guidance to support decision making, implementation, and experimentation throughout the AI adoption life cycle. The discussion is intended to provide decision makers with concrete actions they can take, tailored to their roles and the stage of AI adoption.
This section examines foundational considerations that address two questions: why use AI, and how can its responsible use be ensured? In addition to governance issues, the discussion touches on problem definition; public engagement; and collaboration with federal, state, and local partners. Table 1 summarizes some foundational actions that can be taken at the beginning of the AI adoption process.
Table 1: Foundational Strategies
| Goal | Examples | Implementation Strategy | Responsibility | Timeframe |
|---|---|---|---|---|
| Be purpose- and people-oriented | Illinois Department of Human Services’ high-risk pregnancies model | Start by grounding AI initiatives in public values and a clear understanding of human and organizational contexts | Public engagement processes and input from residents, staff, and civil society at the outset can shape accountability throughout the life cycle | Early-stage planning and development |
| Engage the public | Long Beach, California’s Smart City public survey; Vermont’s AI Task Force | Conduct consultations, surveys, or forums to gather community feedback. Include public representatives in advisory bodies and task forces. Codesign ethical guidelines with community input | Local governments and task forces are responsible for including community input and reporting publicly on feedback and engagement outcomes | Project-based to ongoing |
| Build proportional AI governance | City of Boise, Idaho’s AI policy; Utah’s AI Governance Office; District of Columbia’s AI Values and Strategic Plan | Establish governance and internal use policies to guide how AI is managed | AI ethics committees and interdepartmental oversight groups review use and ensure policies are followed | Short-term (development); medium- to long-term (implementation) |
| Participate in and help shape emerging collaborative frameworks | National Institute of Standards and Technology’s (NIST’s) AI Risk Management Framework; National League of Cities’ Artificial Intelligence Demystified—AI Toolkit for Municipalities | Join and help form collaborations that provide technical assistance, shared toolkits, legal frameworks, and coordination infrastructure | Partnerships report outcomes and maintain collaboration to meet shared goals | Medium-term (forming partnerships and initiating projects) |
| Develop tiered AI procurement guidance | Alaska’s AI impact assessment bill | Set clear procurement standards with which to evaluate AI before use. Align procurement with national standards, such as NIST’s AI Risk Management Framework | Procurement officers, review boards, and external evaluators ensure that vendors meet legal, ethical, and operational standards | Medium-term (development and implementation of standards) |
Grounding AI initiatives in public values and a clear understanding of human and organizational contexts enables the use of AI not only to solve well-defined problems but also to explore new possibilities, anticipate future needs, and augment the capacity of public institutions to respond to complexity (Chen et al., 2023; Shneiderman, 2020). Instead of an approach based on replacing humans, a human–AI teaming approach that aligns AI with human expertise and responsibilities can be adopted (McNeese et al., 2018). A human-centered approach incorporates AI to complement human judgment, respect human roles, and reflect community values (McNeese et al., 2018; Schelble et al., 2024; Shneiderman, 2020).
To these ends, local governments can
Public input and trust are essential in deploying AI within state and local government functions. Public engagement can help identify potential harms and conflicts of interest related to procurement, use, and governance, and bring a variety of perspectives to bear, thereby strengthening the design and monitoring of tools (Wilson, 2022).
Research indicates that meaningful public engagement, characterized by communities being represented and listened to and resources being provided to address their needs, can serve as a form of transparency and facilitate the acceptance of new technologies (Cheong, 2024; Zuiderwijk, Chen, & Salem, 2021). To this end, state and local governments can:
Engagement can include consultations, surveys, forums, or participatory design processes. Examples of public engagement efforts include the following:
To support meaningful engagement, governments can use participatory design methods, including:
AI governance policies need to be tailored to the types of tools or systems being adopted. For example, high-impact or public-facing systems might require multifaceted formal policies, whereas low-risk technologies might prioritize lightweight, streamlined, values-based guidelines that promote responsible use without stifling innovation. Governance policies can address the following elements:
___________________
10See https://artificialintelligenceact.eu/the-act/
11Contained in Executive Order N-12-23 (State of California, 2023).
12Contained in Executive Order 2023-19 (State of Pennsylvania, 2023).
These policies can build on existing rules, such as the Health Insurance Portability and Accountability Act (HIPAA) and the Family Educational Rights and Privacy Act (FERPA) for privacy preservation, Title VI of the Civil Rights Act for discrimination in federally funded programs, including education, and the Americans with Disabilities Act (ADA) for disability discrimination. For example, Utah’s Office of Artificial Intelligence Policy introduced privacy protocols for sensitive health and education data and developed responsible use policies with language similar to that of HIPAA and FERPA (Utah State Legislature, 2024).
Internal policies become especially important as AI technologies continue to expand. While these technologies offer significant potential for innovation and efficiency, especially in the short run, they also introduce challenges related to integration, regulation, ethics, governance, and liability (Weerts, 2025). Generative AI (GenAI), for example, is rapidly transforming how work is done in the public sector, where its potential for innovation and efficiency is substantial, yet the growing number of GenAI tools presents challenges around evaluation, regulation, ethics, and governance (Phillips-Wren & Virvou, 2025). Without clear policies, public agencies risk legal and reputational issues, as well as harm to their constituents. State and local guidelines can help by providing guardrails for responsible experimentation and safe use of AI-generated content. For example, the City of Boston, Massachusetts’ interim guidance14 provided principles, do’s and don’ts, and use case examples, and encouraged city employees to experiment. Similarly, New York City’s AI Action Plan included a review process to ensure transparency, accuracy, and fairness. Because GenAI tools can produce inaccurate output, policies also play a role in requiring careful review before use. Ongoing pilot projects, such as those in California and Pennsylvania, show how agencies are testing some of these tools internally and can now position themselves to carry out more robust evaluations and develop stronger evidence bases and guidelines.
Local and regional governments often lack the legal, technical, or financial capacity to undertake policy development (David et al., 2024). Adaptation of widely recognized frameworks, regulations, and standards can address some of these capacity gaps. Table 2 summarizes some tools that may be relevant.
Table 2: Frameworks, Regulations, and Standards
| Type | Purpose / Why It Is Important | Examples | Relevance for the Public Sector |
|---|---|---|---|
| Principles and Frameworks | Provide high-level guidance and values for ethical development and use | White House Blueprint for an AI Bill of Rights; OECD AI Principles; United Nations Educational, Scientific and Cultural Organization (UNESCO) AI Ethics Guidelines | Aligns systems with human rights, equity, and transparency; useful for oversight and procurement framing |
| Best Practices and Templates | Operational guidance for design, procurement, and deployment | National Institute of Standards and Technology’s (NIST’s) AI Risk Management Framework (RMF); National League of Cities AI Toolkit; GovAI Policy Stack | Useful for developing local policies and risk assessments; includes templates and process checklists |
___________________
13See https://capitol.texas.gov/tlodocs/89R/billtext/pdf/HB00149I.pdf#navpanes=0
14See https://www.boston.gov/sites/default/files/file/2023/05/Guidelines-for-Using-Generative-AI-2023.pdf
| Regulations | Legally binding rules for development or use | European Union’s (EU’s) AI Act; New York City Local Law 144; Illinois Biometric Information Privacy Act (BIPA); Colorado Privacy Act | Apply directly or influence local procurement and risk categories, especially in human resources or surveillance contexts |
| Technical Standards | Specifications for safety, quality, fairness, and interoperability | Institute of Electrical and Electronics Engineers (IEEE) P7001; IEEE 7010; International Organization for Standardization/International Electrotechnical Commission (ISO/IEC) 42001; IEEE Draft Standard 3119-2025 | Supports vendor evaluation and technical due diligence in procurement and system design |
Broader internal policies also aid in managing daily operations by supporting transparency, clarifying oversight roles, and coordinating internal processes across departments. For instance, Massachusetts’ Enterprise Acceptable Use of Information Technology Policy15 mandated human oversight in high-risk scenarios, and the State of Hawaii’s data guidance outlined how long system-related data should be retained. The Utah State Legislature (2024) introduced privacy protocols for sensitive health and education data and developed responsible use policies. To support the implementation of these strategies, local governments can
___________________
15See https://www.mass.gov/doc/isp002-acceptable-use-policy/download
16See https://oag.ca.gov/privacy/ccpa
17See https://mgaleg.maryland.gov/2024RS/Chapters_noln/CH_454_hb0567e.pdf
The absence of a unified operational hub to drive AI coordination makes alignment across federal, state, and local levels difficult (Lawrence et al., 2023). While federal agencies such as NIST, the Office of Science and Technology Policy (OSTP), and the new AI Safety Institute issue AI guidance and standards, none functions as a central operational coordinator across federal, state, and local governments, nor do they generally have the authority or infrastructure to ensure coordinated rollout or shared governance (Lawrence et al., 2023). As a result, states and cities are left to interpret and apply AI standards independently, which leads to fragmentation, duplication of effort, and inconsistent protections (National Governors Association, 2025). A central coordination office or clearinghouse could address these gaps by providing
Existing cross-agency collaborations highlight the value of coordination. An example is a partnership between Johnson County, Kansas, and the Data-Driven Justice Initiative, whose predictive model for diverting individuals with complex needs from jail links data from justice, mental health, and emergency medical services (Salomon et al., 2017). Some state-level efforts are also emerging, although it is too early to determine their downstream impact:
Although federal–local partnerships also exist, coordination can be weak, with siloed efforts and unclear pathways for local input (Congressional Research Service, 2025). When federal pilot programs are accessible with technical assistance, innovations can be scaled. For example, Charlotte-Mecklenburg Police’s Early Intervention System, developed as part of the White House’s Police Data Initiative, was later adopted by other jurisdictions (Helsby et al., 2017).18
Some national efforts support intergovernmental collaboration:
Public associations can fill some of the coordination gaps. Organizations such as the National Governors Association, the International City/County Management Association, the National Association of Counties, the National League of Cities, and the National Conference of State Legislatures offer toolkits, showcase use cases and policies, and facilitate knowledge exchange through conferences and webinars.
___________________
18This program was developed to flag officers at risk of adverse interactions with the public (Helsby et al., 2018).
In addition, specialized intergovernmental networks focused on AI are emerging:
These networks, along with civil society partners such as the Center for Democracy & Technology’s AI Governance Lab,23 Upturn, and EPIC, are laying the groundwork for collective learning. However, not all localities are equally positioned to benefit from these networks. Large cities (e.g., New York City, Los Angeles) may have their own AI offices and innovative teams that can easily participate. But rural towns or small municipalities often lack the technical staff, legal counsel, and resources to participate meaningfully in cross-government AI networks. Funding mechanisms are needed to compensate these smaller localities for their participation.
As the number of AI technologies continues to increase, the development of robust, tiered procurement processes and guidance has become essential for state and local governments. Standards reduce the risk of deploying a system that is ineffective, more costly to maintain than anticipated, insecure, or misaligned with public values and protections (Coglianese, 2023), and they can be made scalable and tailored to user needs and risks. Procurement guidance navigates a tension between transparency and the proprietary interests of vendors, balancing commercial interests and secrecy with accountability and public trust (Hickok, 2024). Clear procurement guidance can ensure that vendors not only meet technical requirements, organizational needs, and operational goals, but also comply with legal, social, and ethical standards, avoiding a solution in search of a problem (IEEE Draft Standard 3119-2025;24 World Economic Forum, 2023). The World Economic Forum’s AI Procurement in a Box provides guidance on assessing vendor transparency, mitigating bias, and implementing ethical safeguards (Coglianese & Shaikh, 2025). A tiered procurement approach that is purpose-oriented and aligned with system risk and public impact can ensure that AI technologies are not simply vendor driven.
Procurement policies can address a defined set of concerns rather than attempting to cover everything. A tiered approach aligned with different use cases can help the procurement process prioritize critical issues while maintaining the flexibility to meet varying needs. For example, low-risk AI use cases warrant a lower level of scrutiny during procurement than high-risk use cases, which require more intensive human oversight and accountability. Focusing on the following key elements can keep implementation manageable and actionable:
___________________
19See https://www.sanjoseca.gov/your-government/departments-offices/information-technology/ai-reviews-algorithm-register/govai-coalition
20See https://cityaiconnect.jhu.edu/
22See https://www.connective.city/
Several state and local governments have started experimenting with innovative approaches to procurement. For example, California invited technology companies to propose GenAI technologies to help reduce traffic congestion and improve road safety (Sowell, 2024). Although framed as an open challenge, California’s approach emphasized alignment with ethical guidelines and responsible AI practices during vendor selection. And the State of Alaska introduced a bill requiring all AI systems used by state agencies to undergo an initial impact assessment and be continuously evaluated throughout their use (State of Alaska, 2024). It is important as well to allow for piloting so that users can test technologies before an AI tool or system is purchased or before full deployment. Doing so can help reduce procurement risks by allowing users to see whether the tool or system aligns with their expectations and needs.
Aligning procurement standards with broader national efforts, such as NIST’s AI Risk Management Framework (National Institute of Standards and Technology, 2023), can foster consistency, interoperability, and effective risk mitigation across jurisdictions. However, governments often encounter "black box" vendor solutions in which the underlying AI models are opaque, and unequal bargaining power complicates the government’s ability to demand accountability (Whittaker et al., 2018). To tackle these barriers, procurement contracts may incorporate access to sufficient system documentation, third-party testing or audits, mechanisms that address instances in which AI systems fail to meet performance expectations or cause harm, and concrete data governance guidelines that specify how the vendor’s access to user data will be restricted and how data controls will be enforced (Coglianese, 2023; Yigitcanlar et al., 2023). Collaborative design of templates for requests for proposals for use in procurement processes, as well as the use of best practices for vendor and product evaluation, can be a useful starting point for state and local agencies seeking to build on collective knowledge and consolidate lessons learned.
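To make the tiered, risk-proportional approach described above more concrete, the following minimal sketch, written in Python with entirely hypothetical tier names, criteria, and safeguard lists, shows how a jurisdiction might encode proportional review: lower-risk uses receive lightweight screening, while high-risk, rights-affecting systems trigger the documentation, audit, and oversight requirements discussed above.

```python
# Hypothetical sketch of a tiered AI procurement rubric; tier names,
# criteria, and required safeguards are illustrative, not prescriptive.
from dataclasses import dataclass


@dataclass
class UseCase:
    name: str
    public_facing: bool               # interacts directly with residents?
    affects_rights_or_benefits: bool  # e.g., eligibility, enforcement
    uses_sensitive_data: bool         # e.g., health or education records


# Required safeguards per tier, loosely echoing themes from NIST's AI RMF
# (documentation, testing, human oversight); the mapping is an assumption.
TIER_REQUIREMENTS = {
    "low": ["vendor self-attestation", "basic security review"],
    "medium": ["system documentation", "pre-deployment testing",
               "data governance terms in the contract"],
    "high": ["independent audit", "impact assessment",
             "human oversight plan", "public disclosure",
             "performance remedies in the contract"],
}


def classify_tier(uc: UseCase) -> str:
    """Assign a risk tier from simple, illustrative criteria."""
    if uc.affects_rights_or_benefits:
        return "high"
    if uc.public_facing or uc.uses_sensitive_data:
        return "medium"
    return "low"


def required_review(uc: UseCase) -> list[str]:
    """Look up the review steps a use case must clear before purchase."""
    return TIER_REQUIREMENTS[classify_tier(uc)]


# Example: a benefits-eligibility screener lands in the high tier.
tool = UseCase("eligibility screener", public_facing=True,
               affects_rights_or_benefits=True, uses_sensitive_data=True)
print(classify_tier(tool), required_review(tool))
```

A real rubric would be set by policy rather than code; the point is that tier criteria and required safeguards can be made explicit, inspectable, and consistent across procurements.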
Getting started responsibly entails conducting feasibility assessments, mapping workflows, and scoping internal capacity. (See Table 3.)
Table 3: Planning and Scoping Strategies
| Goal | Example | Implementation Strategy | Responsibility | Timeframe |
|---|---|---|---|---|
| Feasibility assessments and workflow mapping | Hawaii’s wildfire forecast system | Defining the problem and setting objectives, assessing data readiness and quality, evaluating technical feasibility, and mapping existing workflows to ensure that the AI solution integrates with current systems and processes | Public-sector practitioners | Early-stage planning |
| Scoping of internal capacity | Tempe, Arizona’s, AI Review | Evaluating workforce skills, assessing infrastructure readiness, documenting available resources, and implementing change management strategies to ensure that the organization can support and sustain AI technologies over time | City managers, department heads, city council members, county-level executives | Medium- to long-term |
Adopting a structured approach that includes initial scoping and feasibility assessment, followed by pilot testing, full-scale deployment, and ongoing improvement, facilitates validation, user training, and process refinement before adoption (Al-Amin et al., 2024). Feasibility assessments determine what needs to be built or procured and why, and establish whether the adoption of specific AI technologies is viable (Kinkel et al., 2021). Tools such as the SITUATE AI guidebook assist in mapping goals to technical feasibility (Kawakami et al., 2024). Feasibility assessments often focus on technical or budgetary constraints but can also benefit from the inclusion of public values (Madan & Ashok, 2022). A thorough feasibility assessment focuses on defining user needs, operational goals, ethical boundaries, and legal constraints (Guha et al., 2024). For example, the development of a model to predict water main failures in Syracuse, New York, began with a citywide infrastructure mapping exercise to explore how the proposed technologies would integrate into public workflows (Kumar et al., 2018). Likewise, Philadelphia’s “Pitch & Pilot”25 program tests AI solutions on a small scale to evaluate key metrics before full-scale deployment.
Workflow planning involves mapping how the AI technology will integrate with existing operational processes, how users will interact with it, and how alignment with operational needs will be ensured. It is important to center user experiences and solicit staff perspectives to understand oversight mechanisms, governance needs, and public trust considerations. When possible, state and local governments can use shared frameworks or templates to reduce planning burdens. A strong feasibility and workflow mapping process can include the following elements:
___________________
25See https://www.phila.gov/documents/pitch-and-pilot-calls-for-solutions/
An example of early scoping is Hawaii’s wildfire forecast system, which was developed in collaboration with the University of Hawaii.26 The scoping and feasibility phase involved conducting infrastructure planning and defining user-specific outcomes (National League of Cities, 2024). Using guides such as the Data Science Project Scoping Guide,27 designed and used specifically for state and local data science, machine learning (ML), and AI projects, can be a good starting point for ideation and scoping.
Related to the feasibility assessment discussed above, which focuses on technical and operational viability, scoping of internal capacity provides a broader institutional review, exploring whether an organization can support AI adoption. A holistic approach that aligns technology feasibility with organizational goals and employee capabilities is beneficial (Uren & Edwards, 2023). The emphasis here is on staffing, technology already in use, change management, governance, and sustained funding—particularly important concerns because local governments often lack “the legal, financial, technological, and human resources necessary to effectively integrate AI solutions” (Eichholz, 2025, p. 2). AI capabilities are often already embedded in tools agencies use, such as spam detection in email or search functionality in Microsoft SharePoint, and building on these existing workflows, tools, and teams can be beneficial. The scoping exercise needs to focus on long-term sustainability with respect to the operation, maintenance, and responsiveness of AI technologies. Some actionable strategies for this phase include the following:
___________________
26See https://hawaii.edu/epscor/ai-powered-wildfire-forecast-system-for-hawai%CA%BBi-is-goal-of-uh-researchers/
27See https://datasciencepublicpolicy.org/our-work/tools-guides/data-science-project-scoping-guide/
The effective and responsible use of AI in local government depends on whether the AI technologies being designed or adopted align with their intended goals and purpose. Research highlights principles of purpose-driven, participatory, and responsible design that are crucial in ensuring that AI meets community needs, supports public values, and addresses ethical concerns. (See Table 4.)
Table 4: Aligning Design with Purpose
| Goal | Example | Implementation Strategy | Responsibility | Timeframe |
|---|---|---|---|---|
| Aligning the problem definition with context, goals, and technical design | National Institute of Standards and Technology’s (NIST’s) AI Risk Management Framework | Refining the problem statement and verifying that the problem aligns with the context, organizational priorities, fairness considerations, and real-world feasibility | City managers, department heads, city council members, county-level executives | Early-stage planning |
| Defining evaluation criteria and assessing system-level impacts | Tempe, Arizona’s, evaluation framework | Defining evaluation criteria and how outcomes will be measured against pre-AI baseline benchmarks | Public-sector practitioners | Early–ongoing |
| Establishing feedback mechanisms | International City/County Management Association’s (ICMA’s) Local Government AI Strategy Workbook | Establishing feedback mechanisms, including measures for public perception, as well as staff feedback loops | City managers, department heads, city council members, county-level executives | Early–ongoing |
Before proceeding with AI adoption, it is important to define the problem statement, incorporating input from all stakeholders. Misaligned AI technologies can cause significant harm. For example, Michigan’s MiDAS system,28 intended to automate unemployment insurance claims, used a flawed algorithm and lacked sufficient human oversight, ultimately falsely accusing 40,000 individuals of fraud, with a 93 percent error rate (Kohl, 2024). Similarly, predictive policing tools trained on historically biased crime data can result in disproportionate enforcement in already overpoliced neighborhoods (Richardson et al., 2019). These two cases illustrate the need for governance structures that prioritize accuracy, accountability, trust, and refinement and that ensure testing of real-world feasibility, especially when AI is deployed for high-stakes uses. Below are some actionable steps to ensure relevant, effective, and publicly trusted AI:
Defining evaluation criteria that are based on human needs and reflect the social and organizational contexts in which AI systems will operate—and, where possible, establishing baselines before deployment—enables tracking the progress of implementation throughout the adoption life cycle. For example, measuring current performance (e.g., processing time, response time, human task accuracy, and outcomes) at baseline can help quantify whether an AI system performs any better than the status quo (see the sketch following this paragraph). Developing baseline systems that are simpler, cheaper, and easier to maintain can be a good strategy for enabling comparison against existing processes. Many local governments operate in environments in which reliable baseline data may not be available, or existing data may be unreliable, incomplete, or biased (Campion et al., 2020). Recognizing these data limitations is critical, since evaluations are only as sound as the baseline data on which they rest. In addition, evaluation frameworks are both technically and ethically complex, having to balance accuracy and efficiency with social values such as equity, transparency, and public trust (Sidorkin, 2025; Yigitcanlar et al., 2024a). Metrics for success need to be developed through participatory approaches and subjected to independent oversight to ensure that they align with public values (Heinisuo et al., 2025). Some actions at this point include the following:
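As a minimal, hypothetical illustration of the baseline comparison described above (every metric name and value below is invented for the sketch), the following snippet compares pre-AI measurements of a process with the same metrics from an AI-assisted pilot:

```python
# Hypothetical baseline-vs-AI comparison; the metrics and values are
# illustrative placeholders, not real measurements.
from statistics import mean

# Pre-AI baseline (e.g., days to process a benefits application)
baseline_processing_days = [14, 12, 15, 13, 16]
baseline_accuracy = 0.91  # share of decisions upheld on review

# The same metrics measured during an AI-assisted pilot
ai_processing_days = [9, 8, 11, 10, 9]
ai_accuracy = 0.93


def pct_change(before: float, after: float) -> float:
    """Relative change in percent; negative means a reduction."""
    return (after - before) / before * 100


delta_days = pct_change(mean(baseline_processing_days), mean(ai_processing_days))
delta_acc = pct_change(baseline_accuracy, ai_accuracy)
print(f"Processing time: {delta_days:+.1f}% vs. baseline")
print(f"Decision accuracy: {delta_acc:+.1f}% vs. baseline")
```

Even this simple comparison presupposes trustworthy baseline data, which, as noted above, many local governments lack.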
___________________
28See https://wlr.law.wisc.edu/automated-stategraft-faulty-programming-and-improper-collections-in-michigans-unemployment-insurance-program
Feedback mechanisms can be used to collect comments and experiences from those engaging with AI technologies, such as community members and staff, as suggested in the International City/County Management Association (ICMA) Local Government AI Strategy Workbook.30 Establishing these feedback loops can help in determining whether performance criteria and desired outcomes are being met by tracking key metrics, as well as identifying potential challenges and opportunities. The process encompasses the following elements:
___________________
29See https://performance.tempe.gov/
30See https://lgr.icma.org/wp-content/uploads/2024/04/Use-of-Artificial-Intelligence-in-Local-Government-Strategy-and-Tools-1.pdf
This stage focuses on building internal capacity and competencies for AI, as well as forming collaborations with industry and academic partners. (See Table 5.)
Table 5: Building Readiness and Collaboration Strategies
| Goal | Examples | Implementation Strategy | Responsibility | Timeframe |
|---|---|---|---|---|
| Build internal capacity and competency | Maryland DoIT and InnovateUS training; San Jose, California’s partnership with NVIDIA and San Jose State University; Boise, Idaho’s AI Ambassador program | Training existing staff to evaluate vendor claims, manage systems, and maintain tools over time; offering ongoing training on ethics, risks, and technology literacy | Governance bodies, such as IT strategy committees or state-led training oversight teams; staff managers and department heads | Ongoing and long-term |
| Use partnerships with stakeholders | Schenectady, New York’s partnership with the State University of New York (SUNY) at Albany; California’s 2024 traffic AI pilot; Arizona’s AI Innovation Challenge | Collaborate with civil society, industry, and academic partners for cutting-edge tools, pilot programs, security, usability testing, and technical support | City managers, department heads, city council members, county-level executives | Pilot to long-term |
As more AI technologies become available, internal technical capacity is increasingly critical to ensure effective adoption and implementation. Strategies differ by existing capacity. Localities may train existing staff, and if resources allow, they can hire additional staff with the needed technical skills, partner with other departments, or engage in knowledge exchange with other localities.
Implementing ongoing training programs for public officials ensures that staff can understand the potential and limitations of AI technologies and mitigate associated risks (Chun & Noveck, 2025; Lauterbach & Bonime-Blanc, 2018). Other research highlights the importance of continuous, broad-based literacy and the need to impart new skills to both leadership and technical workers (Stone et al., 2020; Tambe et al., 2019). Training can focus on ethical awareness, basic technical understanding, risk assessments, cost structures, and development trends, since technologies are evolving rapidly.
There are several examples of state and local training programs. The Maryland Department of Information Technology, for instance, partnered with InnovateUS to offer free, asynchronous courses that include instruction, specifically for state employees, on how to audit systems for bias related to AI in the workplace.31 At the municipal level, the City of San Jose, working with NVIDIA and San Jose State University, offers opportunities for city workers to build AI literacy skills.32 Such partnerships are beneficial for localities that may lack the resources to run their own training programs. There are also training programs offered by think tanks and companies—some free and some fee-based—such as those of the Partnership for Public Service,33 InnovateUS,34 Intel,35 IBM,36 Apolitical,37 and Salesforce.38 Boise, Idaho’s AI Ambassador program39 trains staff to act as peer trainers.
Beyond individual technical skills, organizational structures play a critical role. Skills and structures are often inextricably linked and require significant investment, which can be challenging for smaller localities. Organizational readiness also includes leadership buy-in. A study of federal policies revealed a capacity and leadership gap that hindered the effective implementation and governance of AI across federal agencies (Lawrence et al., 2023). Chen and colleagues (2024) highlight that successful deployment depends not only on staff training but also on internal guidance and maintenance processes. To this end, local governments can establish governance bodies responsible for keeping staff knowledge current, as seen in early efforts by California, Oregon, and Pennsylvania (State of California, 2023; State of Oregon, 2023; State of Pennsylvania, 2023).
Building long-term capacity may also involve other strategies to support deployment and sustainability:
Cross-sector collaborations with industry, academia, and civil society partners can be beneficial. Governments may structure partnerships to reflect operational needs and include real-world performance evaluation. Such partnerships can help address technical, organizational, and social challenges, accelerating AI adoption (Campion et al., 2020; Garousi et al., 2016). These partnerships can also support local capacity-building efforts when structured well—for example, collaborating with regional schools, community colleges, or universities through student projects and research labs to identify needs and usability issues and build local competencies. Schenectady, New York, for example, partnered with the State University of New York (SUNY) at Albany to codevelop real-time infrastructure tracking tools, building shared capacity and test environments.40
___________________
31See https://innovate-us.org/partner/marylanddoit
32See https://www.sjmayormatt.com/news-room/city-of-san-jos-to-upskill-workforce-and-drive-ai-innovation-in-local-government-in-collaboration-with-nvidia
33See https://ourpublicservice.org/course/ai-government-leadership-program-state-local/
34See https://innovate-us.org/workshop-series/artificial-intelligence-for-the-public-sector
35See https://www.intel.com/content/www/us/en/developer/topic-technology/artificial-intelligence/training/overview.html
36See https://www.ibm.com/training/artificial-intelligence
37See https://apolitical.co/microcourses/en/ai-fundamentals-for-public-servants-opportunities-risks-and-strategies/
38See https://www.salesforce.com/artificial-intelligence/ai-course/
39See https://www.cityofboise.org/programs/innovation-plus-performance/ai-in-government/
Additionally, partnering with local universities can enable the creation of cost-effective tools (e.g., Arizona’s AI Innovation Challenge).41
Engaging with civil society groups can enhance community digital skills, accelerate the uptake of AI technologies, and improve public trust (Jaillant & Rees, 2022). California,42 for example, piloted a GenAI project in partnership with private-sector firms to analyze live traffic data and predict congestion. However, these partnerships can also create risks, including bias; vendor lock-in; poor alignment with broader needs and public values; privacy gaps; and the imposition of one-size-fits-all solutions on local contexts without addressing critical, unique needs (Eubanks, 2018; Whittlestone et al., 2019).
To guide the establishment of collaborations, local governments can
The management of AI implementation involves continuous monitoring and improvement mechanisms, as well as advisory and oversight bodies. (See Table 6.)
Table 6: Accountability and Engagement Strategies
| Goal | Examples | Implementation Strategy | Responsibility | Timeframe |
|---|---|---|---|---|
| Continuous monitoring and improvement mechanisms | San Jose, California’s AI risk classification guidelines; Grove City, Ohio’s AI policy development | Building monitoring and evaluation into AI systems from the beginning; defining clear objectives and metrics; establishing a baseline; and developing an evaluation plan to measure progress | Local governments, risk committees, and stakeholder teams | Ongoing and long-term |
| Create advisory and oversight bodies | Texas’s AI Advisory Council; Maryland’s AI Subcabinet; District of Columbia’s AI Values Alignment Group; Oregon’s AI Advisory Council; New York City’s Local Law 144 audit review issues | Establishing clear legal frameworks for enforceable authority; creating accountability mechanisms such as public-facing reports and independent evaluations; forming independent oversight bodies with broad representation, especially in high-impact areas | State or local government leaders | Medium- to long-term |
Integrating monitoring and improvement mechanisms from the outset is crucial for maintaining public trust and accountability in AI-driven public services. Evaluation can cover the entire life cycle, from pre-deployment testing to real-world performance monitoring, and provide clear pathways for system improvement or decommissioning (National Association of Counties, 2024; Yigitcanlar et al., 2024b).
___________________
40See https://www.albany.edu/news-center/news/2024-ctg-ualbany-students-using-ai-help-city-schenectady-track-government
41See https://ai.asu.edu/AI-Innovation-Challenge
42See https://www.govtech.com/artificial-intelligence/caltrans-pilots-generative-ai-to-probe-resolve-traffic-woes
Evaluations that focus on outcomes rather than just outputs can measure the actual impact of AI technologies on citizens and communities, rather than simply technical performance (David et al., 2024; National Association of Counties, 2024). It is also important to ensure that the evaluation process itself is conducted ethically and is free of bias and privacy violations (David et al., 2024; Eichholz, 2025; National Conference of State Legislatures, 2024). Tempe, Arizona’s AI Evaluation Policy, for example, sets expectations for ongoing system review and improvement (National League of Cities, 2023). However, some local governments may lack the funding, staff, or legal mandates to carry out long-term auditing or retraining, and their AI systems might degrade over time without ongoing maintenance (Badger et al., 2021; Li & Goel, 2024). These challenges need to be weighed deliberately when designing continuous evaluation processes, since the risks of deploying a system without robust evaluation can be significant. Key strategies here can include the following:
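As a deliberately simplified illustration of the monitoring logic described above, the sketch below checks a deployed system’s periodic performance against its pre-deployment baseline and flags degradation that would trigger review or retraining; the metric, threshold, and monthly values are assumptions for illustration only.

```python
# Hypothetical performance-drift check for a deployed AI system; the
# metric, threshold, and logged values are illustrative assumptions.

BASELINE_ACCURACY = 0.92  # measured before deployment
ALERT_THRESHOLD = 0.05    # tolerated absolute drop before formal review

# Accuracy from periodic audits of a sample of the system's decisions
monthly_accuracy = {
    "2025-01": 0.91, "2025-02": 0.90, "2025-03": 0.88,
    "2025-04": 0.86, "2025-05": 0.85,
}


def needs_review(current: float,
                 baseline: float = BASELINE_ACCURACY,
                 threshold: float = ALERT_THRESHOLD) -> bool:
    """Flag the system when accuracy drops past the tolerated band."""
    return (baseline - current) > threshold


for month, acc in sorted(monthly_accuracy.items()):
    status = "REVIEW" if needs_review(acc) else "ok"
    print(f"{month}: accuracy={acc:.2f} -> {status}")
```

In practice, the choice of metric, audit sample, and threshold would itself be a governance decision, subject to the oversight mechanisms discussed in this section.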
___________________
43See https://ai-fairness-360.org/
The success of advisory groups and oversight bodies depends on having clear mandates, genuine representation, transparency, and established mechanisms for translating advice into action. For example, a study of New York City’s Local Law 144, which mandated annual bias audits for automated employment decision tools, found low compliance, with many employers failing to post the required audit reports (Wright et al., 2024). The study found that noncompliance arose because the law granted employers discretion over whether their systems fell within its scope, resulting in inconsistent enforcement due to limited accountability. Additional challenges include a lack of binding and enforceable authority; insufficient technical expertise to evaluate complex AI systems; and vulnerability to political pressures and industry capture, compounded by the prevalence of temporary, limited-mandate groups (Nemitz, 2018; Whittlestone et al., 2019). To address these challenges, state and local governments can take the following steps:
___________________
46See https://www.americancityandcounty.com/artificial-intelligence/how-one-city-is-proactively-managing-ai-use-and-what-local-governments-can-learn-from-it
Table 7 shows different types of advisory and oversight bodies.
Table 7: Types of Advisory Groups and Committees
| Type | Core Function | Accountability Mechanism | Examples |
|---|---|---|---|
| AI Advisory Councils | Advising on AI adoption and procurement, ensuring responsible use across government | Regular reports to legislative bodies and response to recommendations | State of Texas’s (2023a) AI Advisory Council; Vermont’s AI Advisory Council (Vermont General Assembly, 2022) |
| AI Ethics Review Boards | Evaluating ethical implications of AI, ensuring alignment with public values | Binding reviews and independent audits before AI deployment | Vermont’s AI Code of Ethics (2023); Delaware’s AI Commission (Delaware General Assembly, 2024) |
| Algorithmic Transparency Task Forces | Ensuring the transparency and accountability of AI systems, focusing on public-sector applications | Public meetings, progress tracking, and independent evaluations | New York City’s Automated Decision Systems Task Force (2019); Government of D.C.’s AI Values Advisory Group (2024) |
| Cross-Agency Coordination Bodies | Fostering collaboration across agencies for effective AI deployment | Interagency reports; coordinated oversight with clear performance metrics | California Department of Transportation’s AI Pilot (2024) |
| Independent AI Oversight Committees | Providing post-deployment audits and assessing the risks of AI systems | Legal mandates for audits and public-facing evaluation reports | Maryland’s AI Subcabinet (2024); Texas’s Artificial Intelligence Advisory Council (2023) |
AI is a fast-moving and rapidly changing field, and there remain unanswered questions regarding its integration into state and local decision making. These questions, listed below, point to opportunities for additional research and for creating avenues for gathering insights and exchanging knowledge across the experiences of a range of localities:
The rapid growth of AI-based technologies requires state and local decision makers to focus on how these technologies can be integrated effectively and safely into local governments, which play a key role in shaping their regions’ economies, safety, and well-being. The benefits of AI-enabled systems are multifaceted, from better equipping public-sector workers to strengthening organizational capacity and structures. Yet AI adoption and implementation risk widening the rural/urban divide, a gap that needs to be addressed. Strategies for integrating AI into state and local government decision making also carry risks, and governments need to weigh them carefully before adopting and implementing AI-based systems, particularly when those systems automate decision-making processes.
SEAN is interested in your feedback. Was this rapid expert consultation useful? Send comments to sean@nas.edu or (202) 334-3440.