The world is moving steadily toward artificial intelligence as it grows rapidly in use and ability. With it, life is meant to be made easier. We can see this in developments ranging from self-driving cars and shipping logistics to customer service chatbots, improved health diagnostics, and more. It seems like the best thing to happen to humanity in quite a long time.

Nonetheless, when artificial intelligence is applied to criminal justice and social policy, it can produce disastrous results, especially for the Black and Latinx people who find themselves in the American criminal justice system. Some may find it surprising that technology can even be related to ethnicity or race.

There is a growing problem in the data used to train artificial intelligence: it has persistently harmed people of color, and it will get worse unless significant action is taken to ensure the data isn't skewed by society's existing biases and prejudices. These biases in AI can, quite literally, become a matter of life or death.
Much of the data used to train machine learning algorithms, the lifeblood of AI, doesn't take ethnicity or race into consideration, and as a result it ignores the marginalization and the effects that discrimination has on members of impacted communities.

Facial recognition is notorious for misidentifying Black people. A good example comes from the ACLU's test of Amazon's Rekognition software, which falsely matched 28 members of Congress against a database of mugshots; nearly forty percent of those false matches were people of color. Sadly, this same software has been used by police departments across the country.

Notwithstanding the inadequacies of Rekognition, Amazon currently partners with about 400 police forces across the country through Ring, its home-surveillance camera subsidiary, whose reported facial recognition features are projected to serve as a kind of digital neighborhood watch despite still being in the development phase (as of November 18, 2019). Further illuminating the problems of bias in AI-driven criminal justice and policing, a ProPublica study found that a software algorithm programmed to identify future violent criminal threats is biased against Black people. Among many other alarming findings, the study showed that the algorithm's predictions of violent reoffending were accurate only 20 percent of the time, with most of the inaccuracies impacting the Black population.
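
To make the kind of disparity ProPublica measured concrete, here is a minimal sketch in Python of how an outside auditor might compare a risk tool's false positive rates across racial groups. Every record and number below is invented for illustration; this shows the general audit technique, not ProPublica's actual data or methodology.

```python
# Hypothetical audit of a risk-score tool. Each record is
# (group, predicted_high_risk, reoffended); all values are invented.
records = [
    ("Black", True, False), ("Black", True, True), ("Black", True, False),
    ("Black", False, False), ("Black", True, False),
    ("white", True, True), ("white", False, False), ("white", False, True),
    ("white", True, False), ("white", False, False),
]

def false_positive_rate(rows):
    """Share of people who did NOT reoffend but were still flagged high risk."""
    did_not_reoffend = [r for r in rows if not r[2]]
    if not did_not_reoffend:
        return float("nan")
    flagged = [r for r in did_not_reoffend if r[1]]
    return len(flagged) / len(did_not_reoffend)

for group in ("Black", "white"):
    rows = [r for r in records if r[0] == group]
    print(f"{group}: false positive rate = {false_positive_rate(rows):.0%}")
```

On this toy data, the tool wrongly flags non-reoffending Black people at more than twice the rate of white people, the same shape of disparity ProPublica reported.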

Artificial intelligence and other software algorithms are increasingly being used in life-altering matters of social and criminal justice, such as:

  • sentences for crimes
  • parole decisions
  • granting of loans
  • college admissions
  • life and health insurance decisions

This problem can still be corrected before it grows exponentially beyond repair. One significant fix is data ownership. Simply put, data ownership is the idea that the person a piece of data describes decides, transparently, which of their personal details are digitized and filed into databases and profiling systems. The ultimate power to decide how that data may be used also belongs to the individual the data describes.
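
As a rough sketch of what data ownership could mean in practice, the hypothetical Python below keeps only the records whose owners explicitly consented to a specific use. The record layout and the "risk_scoring" purpose flag are assumptions made for this illustration; no existing standard is implied.

```python
# Hypothetical data-ownership check: each record carries consent terms
# set by the person it describes, and a pipeline may only use records
# whose owner approved this exact purpose. The layout is invented.
dataset = [
    {"id": "A", "features": [0.2, 0.9], "consent": {"risk_scoring": False}},
    {"id": "B", "features": [0.7, 0.1], "consent": {"risk_scoring": True}},
    {"id": "C", "features": [0.5, 0.5], "consent": {}},  # no terms on file
]

def usable_for(purpose, records):
    """Keep only records whose owner explicitly opted in to this purpose."""
    return [r for r in records if r["consent"].get(purpose, False)]

training_rows = usable_for("risk_scoring", dataset)
print(f"{len(training_rows)} of {len(dataset)} records may be used")
```

The design point is that use of personal data becomes opt-in and purpose-specific, with exclusion as the default.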

Another critical solution is oversight and third-party (perhaps public) auditing of these software algorithms. Algorithms can be subjected to laws and regulations that hold them to ethical standards and guard against bias. The training data for AI systems would be reviewed again and again to root out the systemic prejudices that lead to racist and biased outcomes.
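
One concrete check such an auditor could run on every training set, before each retraining, is whether negative outcome labels are disproportionately concentrated on one group. A minimal hypothetical sketch in Python, with an arbitrary flagging threshold chosen purely for illustration:

```python
# Hypothetical training-data audit: flag any group whose share of
# negative outcome labels is far above the dataset-wide rate.
from collections import Counter

# (group, label) pairs; label True means a negative outcome was recorded.
training_data = [
    ("Black", True), ("Black", True), ("Black", False),
    ("white", False), ("white", False), ("white", False),
    ("white", True), ("white", False), ("Latinx", True),
]

counts = Counter(group for group, _ in training_data)
negatives = Counter(group for group, label in training_data if label)
overall_rate = sum(negatives.values()) / len(training_data)

for group, n in counts.items():
    rate = negatives[group] / n
    # The 1.25x threshold is arbitrary, chosen only for illustration.
    flag = "REVIEW" if rate > 1.25 * overall_rate else "ok"
    print(f"{group}: {n} records, negative-label rate {rate:.0%} [{flag}]")
```

An independent auditor running checks like this before every retraining could catch skewed labels before they harden into a model's predictions.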

Aside from the systemic racism that is directly built into these systems, matters are further complicated by the inadvertent bias that creeps into software algorithms because only a very small percentage of the people who create them come from the minority communities being harmed. We must achieve a comprehensive and balanced reflection of all populations, races, ethnicities, and other demographics in artificial intelligence.