Dispelling Myths About Artificial Intelligence for Government Service Delivery - Center for Democracy and Technology

Artificial intelligence (AI) has been promised to revolutionize nearly every aspect of society, including government services. Amidst the excitement, it can be difficult to separate fact from fiction: What exactly is AI? How does it differ from other technologies, and how can it really improve government services? These questions are especially important as government agencies allocate resources to execute contracts with private vendors based on the promises of AI. This brief aims to dispel common misconceptions about AI and government service delivery by establishing a common understanding of the points discussed below.

There is Not a Shared Definition of What AI is

These days, it is commonplace to call nearly any application of data or technology “artificial intelligence,” no matter how sophisticated it might be. That means different actors might talk past each other by using the term AI instead of a more specific, concrete one.

Some technologies might be referred to as AI even though they do not involve any AI at all; simple data sharing and descriptive statistics are two examples.

Examples are worth a thousand words, so consider the following hypothetical: the U.S. Department of Transportation (DOT) wants to accurately estimate the number of automobiles in the U.S. Here is how DOT could solve this problem without any AI:[1] it uses computer programs to pull and aggregate vehicle registration data from state DMVs (data sharing), then counts the number of vehicles and computes averages by region (descriptive statistics).
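The counting step in this hypothetical can be sketched in a few lines of ordinary code, which underscores that no AI is involved. This is a minimal illustration with made-up registration figures; the state names, regions, and counts are all hypothetical, not real DOT or DMV data.

```python
# Hypothetical sketch: aggregating state DMV registration counts (data sharing)
# and computing simple descriptive statistics. No AI is involved.

# Assumed input: registration counts pulled from a few state DMVs, keyed by region.
dmv_counts = {
    "Northeast": {"NY": 11_300_000, "MA": 5_100_000},
    "West": {"CA": 31_400_000, "WA": 7_000_000},
}

# Total number of registered vehicles across all states.
total = sum(n for states in dmv_counts.values() for n in states.values())

# Average registrations per state, by region (a descriptive statistic).
regional_avg = {
    region: sum(states.values()) / len(states)
    for region, states in dmv_counts.items()
}

print(f"Total registered vehicles: {total:,}")
for region, avg in regional_avg.items():
    print(f"{region}: average {avg:,.0f} registrations per state")
```

Everything here is a straightforward aggregation over shared records; calling a program like this "AI" would only obscure what it actually does.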

Moreover, even when considering technologies that could count as AI, a different term might be more specific and informative. The word “AI” might refer to one of several distinct approaches, such as machine learning[2] or deep learning.

We can also illustrate how DOT might use these techniques to count cars: for example, DOT could train a deep learning model to detect and count cars in satellite imagery.

Some Kinds of AI Technologies Are Advancing More Quickly Than Others

All AI has benefitted from the information technology explosion, which resulted in the widespread adoption of computers and the internet. Improvements in information technology and data collection have enabled a wide variety of applications, like applying for government benefits online or analyzing gigabytes of data using a personal computer.

But the most significant advancements in AI have come from deep learning. Deep learning has enabled technologists to make significant strides in, for example, image recognition and playing board games like Go. Those two tasks are hallmarks of progress because they had resisted computer scientists' efforts for decades. However, progress on these two tasks should not lead technologists and policymakers to believe that current AI techniques can solve any problem thrown at them. Predicting human behavior, in particular, remains a hard problem that deep learning does not offer much help with. That is an important lesson for government agencies, since AI might not be much better than humans at tasks like determining which defendants are more likely to commit crimes while released on bail.[3]

AI Is Not Necessarily Objective or Fair

Another important lesson for the government is that AI is not necessarily more objective or fair than alternatives. One reason is that many uses of AI involve data, and data often reflects existing social biases. This is an especially pressing concern for government agencies that want to change historical trends reflected in their data, like student achievement gaps or unhoused population rates.

Another reason is that choices about how to build AI can also introduce biases. The choice of data is one example, but another common pitfall is the extent to which measures of success account for all individuals, not just the majority demographic group. If an agency evaluates an AI program based on how well it works on average for all individuals, that favors individuals who comprise the majority demographic group and might mask harms to certain communities.

Even the Department of Transportation hypothetical might involve bias. The deep learning approach uses satellite imagery to count cars, which misses cars hidden in parking garages. Hypothetically, those hidden cars might be owned more often by some demographics than others.

AI Requires Resources

Proponents of AI claim that it will automate human tasks and require fewer resources, but deploying AI still requires significant resources that government agencies should be aware of. Agencies may lack technical staff with expertise in AI, so they might have to offload key decisions and judgments to partner vendors. As mentioned previously, making AI work well often requires data, which agencies might face legal, ethical, or logistical obstacles in analyzing and sharing. Finally, deploying AI is not a one-off engagement; it typically requires continuous oversight. All of these considerations add substantially to the time and money required to deploy AI successfully.

In the context of the Department of Transportation hypothetical problem, additional resources are absolutely necessary: DOT needs to coordinate with state DMVs to get data, work with lawyers to ensure that all relevant laws and policies have been respected, and employ technologists who can process the data.

Spotting AI in Your Government Agency

In light of these ideas about AI and public service delivery, what should government agencies who are interested in AI do? The first step is to understand whether the suggested approach involves AI or not. Two key questions to ask are:

If the answer to both of these questions is yes, then AI is likely to be involved.

The agency might then want to research how appropriate the technology is for the task at hand.

There are many questions that public agencies should answer before moving forward with using AI for service delivery. While not exhaustive, the considerations below are a starting point.

To address potential bias issues, the agency might want to ask for a bias audit or conduct its own. The agency should also consider the overall cost of the AI application, including indirect costs like additional staff, staff training, ongoing evaluation tools, and governance mechanisms.

Readers might also be interested in the recommendations from other CDT reports, such as those on using AI for identity verification and on making government data publicly available. The recommendations in those resources can also be broadly applied to government use of AI.

[1]  As an aside, though the hypothetical problem here seems innocuous, there are still privacy concerns. In particular, when state Departments of Motor Vehicles (DMVs) share data with DOT, what information about car owners are they also sharing? How long does DOT retain that data? How does it plan on protecting that data? If DOT plans on using satellite imagery, does it also plan on re-identifying who might own a particular car by looking at Census Bureau tract information?

[2]  So what is the difference between data science and machine learning? Here is the rundown: data science tries to explain the data, can be used on smaller datasets, and cannot be performed automatically by computers. In contrast, machine learning tries to make predictions about new or unobserved inputs, often requires large amounts of data, and can eventually be run automatically by computers (after a significant amount of manual building and experimentation).

[3]  There might still be other important applications of deep learning for government services though, like offering chatbot services to guide applicants.
