AEKit: Frequently Asked Questions

The Algorithmic Equity Toolkit (AEKit) is intended to provide resources to community groups involved in advocacy campaigns and to members of the general public interested in understanding surveillance, artificial intelligence, and automated decision systems.

Want to learn more about these topics? We have gathered some further resources below.

Q: What is artificial intelligence? What is AI? What are automated decision systems? What is an algorithm?

A: Rather than use the phrase “artificial intelligence”, which isn’t always a particularly helpful or illuminating term, we prefer to focus on “automated decision systems”. Automated decision systems are software systems that use algorithms (step-by-step procedures carried out by a computer) to make or support decisions. Many technologies that people call artificial intelligence are also automated decision systems.
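
To make the definition concrete, here is a minimal, purely hypothetical sketch of an automated decision system written in Python: a short algorithm that applies a fixed rule to an applicant’s data and returns a decision. The function name, fields, and thresholds are invented for illustration and do not describe any real system.

```python
# A hypothetical, deliberately simplified automated decision system:
# a step-by-step procedure (an algorithm) that a computer follows to
# reach a decision about a person. The rule, field names, and thresholds
# are invented for illustration only.

def screen_loan_application(annual_income: int, credit_score: int) -> str:
    """Apply a fixed rule and return either 'approve' or 'deny'."""
    if credit_score >= 650 and annual_income >= 40_000:
        return "approve"
    return "deny"

# The same rule runs instantly, at scale, on every applicant,
# regardless of whether it treats each of them fairly.
print(screen_loan_application(annual_income=38_000, credit_score=700))  # prints "deny"
```

Real systems are far more complex, but the core idea is the same: a procedure written in advance makes or shapes a decision about a person, without a human weighing the individual case.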

Q: Why did my loan get rejected? Why did my mortgage rate go up? Why did my application get denied? Why didn’t I get into college?

A: An increasing number of important decisions about people’s lives are made by software, and software is prone to mistakes. Examples include employment screening systems that analyze audio, video, and website interaction data to “predict” a person’s psychological profile. Other examples include loan evaluations based on an analysis of a person’s social media profile, and the use of data from health tracking apps to set insurance rates.  

Q: Why is everyone worried about facial recognition? Should I be worried about facial recognition?

A: Facial recognition is a powerful technology that raises concerns both because of its technical failures (its inaccuracy) and because of its social justice ramifications. The technology has been repeatedly shown to perform poorly on faces with darker skin tones. This systematic pattern of errors places people of color at greater risk of misidentification and potential confrontations with police. Facial recognition also represents an increased capacity for video surveillance that is being placed in the hands of troubled institutions, like police forces with a history of over-policing black and brown communities. The technology does nothing to resolve these issues and instead provides new tools for possible harassment, unnecessary arrests, and violence.

Q: Is Google spying on me? Is Apple spying on me? Is Amazon spying on me? Is Palantir spying on me? Is Facebook listening to what I say? Is my iPhone listening to what I say?

A: Yes.

Q: Is Siri artificial intelligence? Is Microsoft Excel AI? Is a calculator AI? Is my Roomba using AI?

A: We built the AEKit in part to answer exactly these questions. Check out the AEKit’s flowchart for identifying automated decision systems, one of the most common types of AI.

Q: What is the link between algorithmic fairness and racial justice? 

A: Many algorithmic systems are promoted as “objective” and as a means to diminish or eliminate racial bias in employment, court proceedings, and other domains. The evidence shows exactly the opposite. The prevailing forces of institutional racism in society are reflected in algorithmic systems, whether through inattention to the sources of the data a system uses or because the design of the system itself relies on logics that are not objective.

Q: What is the link between algorithmic fairness and economic justice? 

A: There is a long history of attempting to use science to eliminate poverty. Much of the time, the engineered solutions to poverty attack the dignity of people living in poverty without doing much to alleviate their suffering. For example, low-income mothers seeking prenatal care are often subject to invasive questioning about their lives and involuntary surveillance of their households. Algorithmic systems designed to do similar work can be similarly harmful. 

Q: How are activists and community advocates responding to algorithmic justice and related issues?

A: There are many organizations in the United States and elsewhere working to challenge the adoption of invasive and potentially oppressive technologies by both governmental and non-governmental actors. Here are links to a few prominent organizations and projects:

Q: What can technology designers contribute to the cause of tech justice? 

A: Visionary thinkers and designers are working to provide an alternative lens and chart a way forward to embrace intersectional justice in the production and adoption of technology. 

Q: What can policy makers and advocates do to promote tech fairness?

A: Policy initiatives aimed at making technology use more accountable to the public are on the rise in many state and municipal contexts. 

Q: What can academics do to promote justice? What can researchers do to promote justice? What can computer scientists do to promote justice?

A: Our own approach as a research group is to prioritize the experiences and needs of people most negatively impacted by systemic inequality. Sometimes that involves building or studying technology, but often it’s as simple as finding the number for your local city councilor.  

Q: What technology fairness issues are on the horizon? 

A: The COVID-19 pandemic has triggered a wave of proposed technological solutions, ranging from apps that track potential disease exposure (contact tracing) to heat-sensing cameras in public spaces that detect fevers. Meanwhile, technology companies, including some with a dubious record of respecting human rights, are rushing to offer services to harried governments. While there may be appropriate uses of technology to combat communicable diseases, we should approach these with caution and ensure that important social justice considerations are part of our deliberations.

Q: Are there other tools I can use that are anything like the Algorithmic Equity Toolkit? 

A: Yes.