Test Your Data and AI Ethics Program

part 4 in a 4 part series about how to start a data and AI ethics program

Once you have created your map of hotspots, identifying potential data and/or AI ethics issues, you are faced with choices. Can you validate whether an issue actually exists? If you can confirm that some harm is resulting from the data or AI use, or you can’t establish whether the harm is actual or merely theoretical, you must decide whether to tolerate it, pull the plug on the current plan, or switch to a new one. For example, to comply with the European Union’s recently enacted GDPR privacy requirements, a number of US companies simply stopped offering their services to EU citizens. The New York Times, on the other hand, switched its online advertisement targeting to a system that doesn’t rely on personal (protected) information—with profitable results.


Photo by Ousa Chea on Unsplash

Maximize Diversity. You might be surprised by how many well-intentioned technology systems fall flat on their faces when “real” people try to use them. Here, I am reminded of the automatic soap dispensers that could not recognize darker skin because the people who created and tested them all had light skin. Beyond recruiting a diverse group of people from within your organization to provide feedback on data and AI issues, you may also want to invest in a group of intentionally diverse outsiders to provide feedback before launching any major initiative. The Diverse Voices project at the University of Washington Tech Policy Lab has a how-to guide for this that you might find useful.

Use algorithmic test platforms. With a reminder that “fairness” often isn’t a yes-or-no, on-or-off quality, IBM, Microsoft, Google, and others have developed tools for testing AI for fairness. Once you have identified potential harms arising from data and AI you use or plan to use, in some cases your technical team can use tools like these to evaluate the type and magnitude of the harm. The introduction to this post has a good partial list of these tools.
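As a toy illustration of the kind of check these platforms automate, here is a minimal sketch (with hypothetical data and made-up numbers, not any vendor’s actual API) of one common fairness metric, the demographic parity difference: the gap in a model’s selection rates between groups, where 0 means parity.

```python
# Illustrative sketch: the kind of group-rate comparison that fairness
# toolkits automate. Data and function names are hypothetical.

def selection_rate(predictions):
    """Fraction of positive (e.g., 'approve') decisions."""
    return sum(predictions) / len(predictions)

def demographic_parity_difference(predictions, groups):
    """Largest gap in selection rate between any two groups.
    0.0 means every group is selected at the same rate."""
    by_group = {}
    for pred, group in zip(predictions, groups):
        by_group.setdefault(group, []).append(pred)
    rates = [selection_rate(preds) for preds in by_group.values()]
    return max(rates) - min(rates)

# Hypothetical model outputs (1 = approved) and a sensitive attribute.
preds  = [1, 1, 0, 1, 0, 0, 1, 0]
groups = ["a", "a", "a", "a", "b", "b", "b", "b"]

gap = demographic_parity_difference(preds, groups)
print(f"Demographic parity difference: {gap:.2f}")
# prints 0.50 (group a approved at 0.75, group b at 0.25)
```

Real toolkits compute this and many related metrics (equalized odds, disparate impact ratios, and so on) at scale, but the underlying idea is often this simple: slice the model’s decisions by group and compare.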

Hold data and AI vendors’ feet to the fire. Consider the popularity of certified organic food in the US. Many individuals, restaurants, and stores are willing to pay a premium to know that their food meets certain standards. Increasingly, data and AI providers will be expected to answer questions about the ethics of their products. Start asking now, and consider switching to suppliers who can answer your questions to your satisfaction.

When should your organization tolerate harms or potential harms derived from data and AI? That’s one of the reasons your stakeholder task force (see Recruit a Task Force, above) should be broad. If the issue is clearly legal (let’s start with that), then ask whether it fits your reputation, and weigh the risk in terms of customer disappointment (or potential boycotts), impact on employee recruiting, retention, and motivation, shareholder value, and so on. Remember that some organizations brand themselves around being more ethical than average, and some are branded (by their actions if nothing else) around being barely legal. Your decisions about data and AI ethics issues will help place you somewhere in that range. Any time you move ahead in an area your map flags as a hotspot, you may want to have PR work up an emergency response plan, just in case.

Systematize. Data and AI ethics is not a one-and-done effort. Your task force should also recommend a frequency for reviewing and mapping ethics issues, for example, whenever new technology is about to be acquired or promoted, and whenever internally developed technology is upgraded.

This was the 4th and final part of the 4 part series How to Start Your Organization’s Data and AI Ethics Program

If the process described in this series is challenging for your organization, reach out! We’ll set up a call to talk about your organization’s goals and how I can help.

Create a Map of Potential Data and AI Ethics Hot Spots

part 3 in a 4 part series about how to start a data and AI ethics program

Surely, we all know ethical from unethical conduct! Would that it were so. Have you ever heard the expression “just because it’s legal doesn’t make it right”? Even a set of values derived from applicable law (for instance, avoiding discrimination in hiring because it’s illegal) isn’t going to protect your organization from consumer outrage or employee defections if your values don’t live up to their expectations. If your organization has a vision and/or values statement, start with that. If you don’t, you’ll have something similar when you’re done here.

map data and ai ethics hot spots

Photo by Capturing the human heart. on Unsplash

With your knowledge of data and AI ethics issues (see Educate, above), and your stakeholder experts (see Recruit a Task Force, above), your next objective is to create a map of potential data and AI trouble spots overlaid with the issues that may arise there. Try to look at every point where data and AI enter your organization (in particular, when vendors supply these, or when internal teams build these) and exit your organization (in particular, when customers engage with your data and AI such as on a website, or when data and AI are released for use by partners or the public). A few examples:

  • HR—are recruiters inadvertently targeting job ads in a way that discourages women, minorities, or older potential applicants?
  • Marketing—does your recommendation engine rely on data that customers didn’t (and wouldn’t, if they knew about it) give you permission to use?
  • Application processing—are approvals dependent on features, like zip codes, that could serve as a proxy for race?
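To make the proxy concern concrete, here is a minimal sketch of one quick screen for proxy features (the data, numbers, and function name are mine, purely hypothetical, not a standard API): if knowing a feature value lets you guess a protected attribute much better than the overall base rate, that feature may be acting as a proxy.

```python
# Rough proxy-feature screen with hypothetical data: measure how much
# better a per-feature-value majority guess of the protected attribute
# is than the global majority guess.

from collections import Counter, defaultdict

def proxy_lift(feature_values, protected_values):
    """How much better the majority guess within each feature value is
    than the global majority guess (0.0 = no proxy signal)."""
    base = Counter(protected_values).most_common(1)[0][1] / len(protected_values)
    by_value = defaultdict(list)
    for f, p in zip(feature_values, protected_values):
        by_value[f].append(p)
    correct = sum(Counter(ps).most_common(1)[0][1] for ps in by_value.values())
    return correct / len(protected_values) - base

# Toy example: zip code almost perfectly predicts the protected group.
zips  = ["10001", "10001", "10002", "10002", "10003", "10003"]
group = ["x", "x", "y", "y", "x", "y"]
print(f"Proxy lift: {proxy_lift(zips, group):.2f}")
# prints 0.33 (zip code predicts group well above the 0.50 base rate)
```

A high lift doesn’t prove discrimination, but it does flag a feature your task force should examine before the model relying on it goes live.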

From the 4 part series How to Start Your Organization’s Data and AI Ethics Program
Next – Part 4: Test Your Data and AI Ethics Program


If the process described in this series is challenging for your organization, reach out! We’ll set up a call to talk about your organization’s goals and how I can help.

Educate Your Organization About Data and AI Ethics

part 2 in a 4 part series about how to start a data and AI ethics program

You, your task force, and eventually everyone in your organization need to get up to speed on the fundamentals of data and AI ethics so you can start using a common vocabulary to have productive discussions and make choices together.

Guidelines. Start by finding one or more technology ethics guidelines that fit your organization so that you can circulate and discuss them, and perhaps even adopt one if you don’t think you need to develop your own. There are many. I recently read a survey comparing 84 such guidelines. The task before you is to find a set that is accessible to the people in your organization: specific enough to your industry and geared to the level at which most of your people are working (i.e., not too technical for non-technical people).

educate your organization

Photo by Thomas Drouault on Unsplash

Find guidelines that resonate. They don’t have to be the last word for all time, just a good starting point. Read them, share them, discuss them. Kathy Baxter has a short list of guidelines (and other tools) here. Of those, I thought the guidelines created by Integrate.AI were very accessible. Another I like is this one by Susan Etlinger of Altimeter. Yet another, which I particularly liked because it’s both concise and non-technical, was created by a UK actuaries organization.

Books with Case Studies. At the same time, read some accessible books about how technology can unwittingly lead to ethical challenges. There are many of these, but just off the top of my head I’m going to call out Weapons of Math Destruction by Cathy O’Neil and Hello World by Hannah Fry as full of well-written, evocative examples at a level suitable for lay people (non-technology/non-lawyer types).

Papers. Finally, read some specialty pieces that may address the specific ways you use technology. For example, bias is an almost universal concern but is very difficult to eradicate. Get educated by reading papers and blog posts about narrower issues. There are obviously too many of these to summarize here, but as an example I want to call out Harini Suresh’s blog post summarizing her academic paper about sources of bias in AI. If you have specific issues you want to address, let me know and I’ll see if I can put my fingers on something I’ve already come across that might help you.

From the 4 part series How to Start Your Organization’s Data and AI Ethics Program
Next – Part 3: Create a Map of Potential Data and AI Ethics Hot Spots


If the process described in this series is challenging for your organization, reach out! We’ll set up a call to talk about your organization’s goals and how I can help.

Recruit a Task Force to Build a Data and AI Ethics Program

part 1 in a 4 part series about how to start a data and AI ethics program

Ethics are about translating values into action. Data and AI in the abstract don’t contain any values except those we humans imbue them with. More importantly, we make value choices about how we use data and AI—including, sometimes, the decision not to use them. So unless all you are looking to accomplish with your data and AI ethics program is publishing a statement of platitudes that no one is going to take seriously, you need to figure out what values your organization wants to rally around, and how honoring those values will look in practice in the day-to-day work of all of the various people you employ and partner with in their specialized roles.

recruit a task force

Photo by Perry Grone on Unsplash

Ultimately you want input and buy-in of representatives from many different stakeholder groups to formulate and implement your organization’s data and AI ethics policies.

If you already have a Chief Data Officer, a business ethics committee, or some other entity in place, they may be the logical starting place for putting together your group. Having said that, a mistake some organizations make is to treat data and AI ethics as a technical problem, entirely within the realm of data science. The truth is:

  • Reputation. Data and AI ethics touches your organization’s brand/reputation—how will the organization be perceived because of its choices (or lack thereof) about ethics?—so you need participation from senior leadership, marketing and PR.
  • Regulation. It touches regulatory issues, including legal scenarios that are just emerging. For example, we will soon be seeing enforcement actions giving context to the California Consumer Privacy Act, which goes into effect less than 30 days from when this is being written. So you need to get your legal and compliance people involved.
  • Employees. Your approach to data and AI ethics affects hiring and retention—what resonates with the people you want to attract to your organization?—so you need HR in the mix.
  • Partners. It affects the ways in which your data and AI can be used, so you need clear communication with whoever is responsible for your sales, licensing, and/or partnerships.
  • Technologists. In addition to all of these, of course you need the people who build and deploy your actual data systems (developers and data scientists).

Can’t get representatives from all of these constituencies on board at the onset? Work with who you’ve got. Start with a prototype—maybe one product, one department, or one scenario. As you go forward, promote the value proposition of your work throughout your organization, and solicit input from whoever is interested but unable to formally participate, while building momentum towards an expanded role down the road.

From the 4 part series How to Start Your Organization’s Data and AI Ethics Program
Next – Part 2: Educate Your Organization About Data and AI Ethics 

If the process described in this series is challenging for your organization, reach out! We’ll set up a call to talk about your organization’s goals and how I can help. 


How to Start Your Organization’s Data and AI Ethics Program

Introduction to a 4 Part Series

Let’s suppose your organization (or some part thereof) has decided to take a more principled approach towards the data and/or algorithms it uses by establishing ethics-based ground rules for their use. Maybe this stems from concerns expressed by leadership, legal counsel, shareholders, customers, or employees about potential harms arising from a technology you’re already using or about to use. Maybe it’s because you know companies like Google, Microsoft, and Salesforce have already taken significant steps to incorporate data and AI ethics requirements into their business processes.

ethics principles

Photo by Kelly Sikkema on Unsplash

Regardless of the immediate focus, keep in mind that you probably don’t need to launch the world’s best program on day one (or year one). The bad news is that there is no plug-and-play, one-size-fits-all solution awaiting you. You and your colleagues will need to begin by understanding where you are now, visualizing where you are headed, and incrementally building a roadway that takes you in the right direction. In fact, it makes sense to start small—like you would when prototyping a new product or line of business—learning and building support systems as you go. Over time, your data and AI ethics program will generate long-term benefits, as data and AI ethics increasingly become important for every organization’s good reputation, growth in value, and risk management.

In the following 4 part series about initiating a functional data and AI ethics program, we will cover the basic steps you and your team will need to take, including:

Part 1: Recruit a Task Force to Build a Data and AI Ethics Program

Part 2: Educate Your Organization About Data and AI Ethics

Part 3: Create a Map of Potential Data and AI Ethics Hot Spots

Part 4: Test Your Data and AI Ethics Program


Next – Part 1: Recruit a Task Force to Build a Data and AI Ethics Program

Podcast: Trust, Data, and Financial Services

Episode 8 of my podcast series, The BaDFun Podcast, is now live. Turning our attention to financial services, the title of this episode is “Changing Everything Without Breaking Anything, with Ken Chou”. Here’s the blurb:

Ken Chou podcast photo

“Welcome to Episode 8 of the BaDFun podcast. This week our guest is Ken Chou, a senior technology executive who began his career on the academic side, with a Ph.D. in digital signal processing from MIT, then was drawn to industry, where his roles included CTO for an internal startup within the global finserv giant Wells Fargo. Ken shared key takeaways from decades of technology innovation and leadership, including

  • How selecting the right architecture, oriented around loose coupling, can enable some technology functions to innovate quickly even though others need to change more slowly;
  • Why trust is the product in banking, and how that drives data classification, data quality, security, privacy, and the regulatory environment; and,
  • How both commercial and regulatory incentives drive banks to innovate.”

Please check it out and let me know what you think. And subscribe if you want to hear more.

What Warren Buffett said about Ethics

I recently read Warren Buffett’s authorized biography The Snowball. It was a chewy read, at just under 900 pages with copious footnotes and fine-grained details about what he ate, parties he attended, and vacations he took, in addition to background profiles of many of the companies he bought and larger-than-life personalities he associated with.

The bits I found most interesting dealt with Buffett’s concerns about corporate ethics. In general he sought to put his money behind individuals he felt he could trust, not only because he believed they could make money, but because of their ethics in business dealings. Of course, he didn’t always choose well. And sometimes he compromised—and later regretted certain choices.