Test Your Data and AI Ethics Program

Part 4 in a four-part series about how to start a data and AI ethics program

Once you have created your map of hotspots, where you have identified potential data and/or AI ethics issues, you are faced with choices. Can you validate whether an issue actually exists? If you can validate that some harm is resulting from the data or AI use, or you can't establish whether the harm is actual or merely theoretical, you must decide whether to tolerate it, pull the plug on the current plan, or switch to a new plan. For example, to comply with the European Union's recently enacted GDPR privacy requirements, a number of US companies simply stopped offering their services to EU citizens. The New York Times, on the other hand, switched its online advertisement targeting system to one that doesn't rely on personal (protected) information, with profitable results.

Photo by Ousa Chea on Unsplash

Maximize diversity. You might be surprised by how many well-intentioned technology systems fall flat on their face when "real" people try to use them. Here, I am reminded of the automatic soap dispensers that wound up unable to recognize darker skin because the people who created and tested them all had light skin. Beyond recruiting a diverse group of people from within your organization to provide feedback on data and AI issues, before launching any major initiative you may also want to invest in a group of intentionally diverse outsiders to provide feedback. The Diverse Voices project at the University of Washington Tech Policy Lab has a how-to guide for doing this that you might find useful.

Use algorithmic test platforms. With a reminder that "fairness" often isn't a yes-or-no, on-or-off quality, IBM, Microsoft, Google, and others have developed tools for testing AI for fairness. Once you have identified potential harms arising from data and AI you are using or plan to use, in some cases your technical team can use tools like these to evaluate the type and magnitude of the harm. The introduction to this post has a good partial list of these tools.
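To make the idea concrete, here is a minimal sketch of one of the simplest checks such tools automate: the demographic parity difference, the gap in positive-outcome rates between groups. The group labels and loan-approval data below are hypothetical, and real tools compute many subtler metrics than this one.

```python
from collections import defaultdict

def selection_rates(groups, outcomes):
    """Return the positive-outcome rate for each group."""
    totals = defaultdict(int)
    positives = defaultdict(int)
    for group, outcome in zip(groups, outcomes):
        totals[group] += 1
        positives[group] += outcome
    return {g: positives[g] / totals[g] for g in totals}

def demographic_parity_difference(groups, outcomes):
    """Gap between the highest and lowest group selection rates.

    0.0 means every group receives positive outcomes at the same
    rate; larger values flag a disparity worth investigating.
    """
    rates = selection_rates(groups, outcomes)
    return max(rates.values()) - min(rates.values())

# Hypothetical loan decisions (1 = approved) for two groups.
groups   = ["A", "A", "A", "A", "B", "B", "B", "B"]
approved = [ 1,   1,   1,   0,   1,   0,   0,   0 ]

print(selection_rates(groups, approved))                # {'A': 0.75, 'B': 0.25}
print(demographic_parity_difference(groups, approved))  # 0.5
```

A gap of 0.5 like this one wouldn't settle whether harm is occurring, but it tells your task force where to look next, which is exactly the role these testing tools play.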

Hold data and AI vendors’ feet to the fire. Consider the popularity of certified organic food in the US. Many individuals, restaurants, and stores are willing to pay a premium to know that their food meets certain standards. Increasingly, data and AI providers will be expected to answer questions about the ethics of their products. Start asking now, and consider switching to suppliers who can answer your questions to your satisfaction.

When should your organization tolerate harms or potential harms derived from data and AI? That question is one of the reasons your stakeholder task force (see the Recruit a Task Force post, above) should be broad. If the use is clearly legal (start there), then ask whether it fits your reputation, and what the risk is in terms of customer disappointment (or potential boycotts); impact on employee recruiting, retention, and motivation; shareholder value; and so on. Remember that some organizations brand themselves around being more ethical than average, and some are branded (by their actions if nothing else) around being barely legal. Your decisions about data and AI ethics issues will help place you somewhere in that range. Any time you move ahead on a part of your issues map that is a hotspot of potential issues, you may want to have PR work up an emergency response plan, just in case.

Systematize. Data and AI ethics is not a one-and-done effort. Your task force should also recommend a cadence for reviewing and mapping ethics issues: for example, whenever new technology is about to be acquired or promoted, and whenever internally developed technology is upgraded.

This was the fourth and final part of the four-part series How to Start Your Organization's Data and AI Ethics Program.

If the process described in this series is challenging for your organization, reach out! We'll set up a call to talk about your organization's goals and how I can help.
