Test Your Data and AI Ethics Program

part 4 in a 4 part series about how to start a data and AI ethics program

Once you have created your map of hotspots, where you have identified potential data and/or AI ethics issues, you are faced with choices. Can you validate whether an issue actually exists? If you can validate that some harm is resulting from the data or AI use, or you can’t establish whether the harm is actual or merely theoretical, a decision must be made whether to tolerate it, pull the plug on the current plan, or switch to a new plan. For example, to comply with the European Union’s recently enacted GDPR privacy requirements, a number of US companies simply stopped offering their services to EU citizens. The New York Times, on the other hand, switched its online advertisement targeting system to one that doesn’t rely on personal (protected) information—with profitable results.


Photo by Ousa Chea on Unsplash

Maximize Diversity. You might be surprised by how many well-intentioned technology systems fall flat on their faces when “real” people try to use them. Here, I am reminded of the automatic soap dispensers that wound up not being able to recognize darker skin because the people who created and tested them all had light skin. Beyond recruiting a diverse group of people from within your organization to provide feedback on data and AI issues, you may also want to invest in a group of intentionally diverse outsiders to provide feedback before launching any major initiatives. The Diverse Voices project at the University of Washington Tech Policy Lab has a how-to guide for doing this that you might find useful.

Use algorithmic test platforms. With a reminder that “fairness” often isn’t a yes-or-no, on-or-off quality, IBM, Microsoft, Google, and others have developed tools for testing AI for fairness. Once you have identified potential harms arising from data and AI you are using or plan to use, in some cases your technical team can use tools like these to evaluate the type and magnitude of the harm. The introduction to this post has a good partial list of these tools.
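To make this concrete, here’s a minimal sketch of the kind of check these platforms automate, using Microsoft’s open-source Fairlearn library. The file, model, and column names are hypothetical stand-ins; substitute your own data and features:

```python
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from fairlearn.metrics import MetricFrame, demographic_parity_difference

# Hypothetical historical loan decisions; substitute your own data.
df = pd.read_csv("loan_decisions.csv")
X = df[["income", "credit_score", "tenure_years"]]  # assumed feature columns
y = df["approved"]                                  # assumed 0/1 label
sensitive = df["gender"]                            # assumed protected attribute

X_tr, X_te, y_tr, y_te, s_tr, s_te = train_test_split(
    X, y, sensitive, test_size=0.3, random_state=42
)

model = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
y_pred = model.predict(X_te)

# Accuracy broken out per group; large gaps between groups are a red flag.
by_group = MetricFrame(
    metrics=accuracy_score, y_true=y_te, y_pred=y_pred, sensitive_features=s_te
)
print(by_group.by_group)

# Difference in approval rates between groups: 0.0 means identical
# rates; larger values mean larger disparities.
print(demographic_parity_difference(y_te, y_pred, sensitive_features=s_te))
```

Note that the tool only hands you numbers. Echoing the point above, deciding how much disparity is acceptable, and under which definition of fairness, remains a judgment call for your task force.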

Hold data and AI vendors’ feet to the fire. Consider the popularity of certified organic food in the US. Many individuals, restaurants, and stores are willing to pay a premium to know that their food meets certain standards. Increasingly, data and AI providers will be expected to answer questions about the ethics of their products. Start asking now, and consider switching to suppliers who can answer your questions to your satisfaction.

When should your organization tolerate harms or potential harms derived from data and AI? That’s one of the reasons why your stakeholder task force (see the Recruit a Task Force post, above) should be broad. If the issue is clearly legal (let’s start with that), then ask whether it fits your reputation, and what the risk is in terms of customer disappointment (or potential boycotts); impact on employee recruiting, retention, and motivation; shareholder value; etc. Remember that some organizations brand themselves around being more ethical than average, and some are branded (by their actions if nothing else) around being barely legal. Your decisions about data and AI ethics issues will help place you somewhere in that range. Any time you move ahead in an area your map flags as a hotspot, you may want to have PR work up an emergency response plan, just in case.

Systematize. Data and AI ethics is not a one-and-done effort. Your task force should also recommend a frequency for reviewing and mapping ethics issues, for example, whenever new technology is about to be acquired or promoted, and whenever internally developed technology is upgraded.

This was the 4th and final part of the 4 part series How to Start Your Organization’s Data and AI Ethics Program.

If the process described in this series is challenging for your organization, reach out! We’ll set up a call to talk about your organization’s goals and how I can help.

Create a Map of Potential Data and AI Ethics Hot Spots

part 3 in a 4 part series about how to start a data and AI ethics program

Surely, we all know ethical from unethical conduct! Would that it were so. Have you ever heard the expression “just because it’s legal doesn’t make it right”? Even a set of values derived from applicable law (for instance, avoid discrimination in hiring because it’s illegal) isn’t going to protect your organization from consumer outrage or employee defections if your values don’t live up to their expectations. If your organization has a vision and/or values statement, start with that. If you don’t, you’ll have something similar when you’re done here.


Photo by Capturing the human heart. on Unsplash

With your knowledge of data and AI ethics issues (see Educate, above) and your stakeholder experts (see Recruit a Task Force, above), your next objective is to create a map of potential data and AI trouble spots overlaid with the issues that may arise there. Try to look at every point where data and AI enter your organization (in particular, when vendors supply these, or when internal teams build these) and exit your organization (in particular, when customers engage with your data and AI such as on a website, or when data and AI are released for use by partners or the public). A few examples:

  • HR—are recruiters inadvertently targeting job ads in a way that discourages women, minorities, or older potential applicants?
  • Marketing—does your recommendation engine rely on data that customers didn’t (and wouldn’t, if they knew about it) give you permission to use?
  • Application processing—are approvals dependent on features like zip code, which can be a proxy for race? (A quick screening sketch follows this list.)
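
One rough way to screen for proxies like the zip code example is to measure how much information a feature shares with a protected attribute. Here is a minimal sketch using scikit-learn; the file and column names are hypothetical:

```python
import pandas as pd
from sklearn.metrics import normalized_mutual_info_score

# Hypothetical applicant data; substitute your own columns.
df = pd.read_csv("applicants.csv")

# Normalized mutual information runs from 0 (statistically independent)
# to 1 (one variable fully determines the other).
nmi = normalized_mutual_info_score(df["zip_code"], df["race"])
print(f"zip_code <-> race NMI: {nmi:.2f}")

# A high score means zip code can stand in for race, so a model that
# "never sees race" may still discriminate through it.
```

Treat this as a first-pass screen only: a low score doesn’t prove a feature is safe, and combinations of innocuous features can still act as a proxy together.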

From the 4 part series How to Start Your Organization’s Data and AI Ethics Program
Next – Part 4: Test Your Data and AI Ethics Program


If the process described in this series is challenging for your organization, reach out! We’ll set up a call to talk about your organization’s goals and how I can help.

Educate Your Organization About Data and AI Ethics

part 2 in a 4 part series about how to start a data and AI ethics program

You, your task force, and eventually everyone in your organization need to get up to speed on the fundamentals of data and AI ethics so you can start using a common vocabulary to have productive discussions and make choices together.

Guidelines. Start by finding one or more technology ethics guidelines that fit your organization so that you can circulate and discuss them, and perhaps even adopt one if you don’t think you need to develop your own. There are many. I recently read a survey comparing 84 such guidelines. The task before you is to find a set that is accessible to the people in your organization: specific enough to your industry, and geared to the level most of your people work at (i.e., not too technical for non-technical people).


Photo by Thomas Drouault on Unsplash

Find guidelines that resonate. They don’t have to be the last word for all time, just a good starting point. Read them, share them, discuss them. Kathy Baxter has a short list of guidelines (and other tools) here. Of those, I thought the guidelines created by Integrate.AI were very accessible. Another I like is this one by Susan Etlinger of Altimeter. Yet another that I particularly liked because it’s both concise and non-technical was created by a UK actuaries organization.

Books with Case Studies. At the same time, read some accessible books about how technology can unwittingly lead to ethical challenges. There are many of these, but just off the top of my head I’m going to call out Weapons of Math Destruction by Cathy O’Neil and Hello World by Hannah Fry, both full of well-written, evocative examples at a level suitable for lay people (non-technology/non-lawyer types).

Papers. Finally, read some specialty pieces that may address the specific ways you use technology. For example, bias is an almost universal concern but is very difficult to eradicate. Get educated by reading papers and blog posts about narrower issues. There are obviously too many of these to summarize here, but as an example I want to call out Harini Suresh’s blog post summarizing her academic paper about sources of bias in AI. If you have specific issues you want to address, let me know and I’ll see if I can put my fingers on something I’ve already come across that might help you.

From the 4 part series How to Start Your Organization’s Data and AI Ethics Program
Next – Part 3: Create a Map of Potential Data and AI Ethics Hot Spots


If the process described in this series is challenging for your organization, reach out! We’ll set up a call to talk about your organization’s goals and how I can help.

Recruit a Task Force to Build a Data and AI Ethics Program

part 1 in a 4 part series about how to start a data and AI ethics program

Ethics are about translating values into action. Data and AI in the abstract don’t contain any values except those we humans imbue them with. More importantly, we make value choices about how we use data and AI—including, sometimes, the decision not to use them. So unless all you are looking to accomplish with your data and AI ethics program is publishing a statement of platitudes that no one is going to take seriously, you need to figure out what values your organization wants to rally around, and how honoring those values will look in practice in the day-to-day work of all of the various people you employ and partner with in their specialized roles.


Photo by Perry Grone on Unsplash

Ultimately you want input and buy-in from representatives of many different stakeholder groups to formulate and implement your organization’s data and AI ethics policies.

If you already have a Chief Data Officer, a business ethics committee, or some other entity in place, they may be the logical starting place for putting together your group. Having said that, a mistake some organizations make is to treat data and AI ethics as a technical problem, entirely within the realm of data science. The truth is:

  • Reputation. Data and AI ethics touches your organization’s brand/reputation—how will the organization be perceived because of its choices (or lack thereof) about ethics?—so you need participation from senior leadership, marketing and PR.
  • Regulation. It touches regulatory issues, including legal scenarios that are just emerging. For example, we will soon be seeing enforcement actions giving context to the California Consumer Privacy Act, which goes into effect less than 30 days from when this is being written. So you need to get your legal and compliance people involved.
  • Employees. Your approach to data and AI ethics affects hiring and retention—what resonates with the people you want to attract to your organization?—so you need HR in the mix.
  • Partners. It affects the ways in which your data and AI can be used, so you need clear communication with whoever is responsible for your sales, licensing, and/or partnerships.
  • Technologists. In addition to all of these, of course you need the people who build and deploy your actual data systems (developers and data scientists).

Can’t get representatives from all of these constituencies on board at the outset? Work with who you’ve got. Start with a prototype—maybe one product, one department, or one scenario. As you go forward, promote the value proposition of your work throughout your organization, and solicit input from whoever is interested but unable to formally participate, while building momentum towards an expanded role down the road.

From the 4 part series How to Start Your Organization’s Data and AI Ethics Program
Next – Part 2: Educate Your Organization About Data and AI Ethics 

If the process described in this series is challenging for your organization, reach out! We’ll set up a call to talk about your organization’s goals and how I can help. 


How to Start Your Organization’s Data and AI Ethics Program

Introduction to a 4 Part Series

Let’s suppose your organization (or some part thereof) has decided to take a more principled approach towards the data and/or algorithms it uses by establishing ethics-based ground rules for their use. Maybe this stems from concerns expressed by leadership, legal counsel, shareholders, customers, or employees about potential harms arising from a technology you’re already using or about to use. Maybe it’s because you know companies like Google, Microsoft, and Salesforce have already taken significant steps to incorporate data and AI ethics requirements into their business processes.


Photo by Kelly Sikkema on Unsplash

Regardless of the immediate focus, keep in mind that you probably don’t need to launch the world’s best program on day one (or year one). The bad news is that there is no plug-and-play, one-size-fits-all solution awaiting you. You and your colleagues will need to begin by understanding where you are now, visualizing where you are headed, and incrementally building a roadway that takes you in the right direction. In fact, it makes sense to start small—like you would when prototyping a new product or line of business—learning and building support systems as you go. Over time, your data and AI ethics program will generate long-term benefits, as data and AI ethics increasingly become important for every organization’s good reputation, growth in value, and risk management.

In the following 4 part series about initiating a functional data and AI ethics program we will cover the basic steps you and your team will need to undertake, including:

Part 1: Recruit a Task Force to Build a Data and AI Ethics Program

Part 2: Educate Your Organization About Data and AI Ethics

Part 3: Create a Map of Potential Data and AI Ethics Hot Spots

Part 4: Test Your Data and AI Ethics Program


Next – Part 1: Recruit a Task Force to Build a Data and AI Ethics Program

Podcast: Trust, Data, and Financial Services

Episode 8 of my podcast series, The BaDFun Podcast, is now live. Turning our attention to financial services, the title of this episode is “Changing Everything Without Breaking Anything, with Ken Chou”. Here’s the blurb:


“Welcome to Episode 8 of the BaDFun podcast. This week our guest is Ken Chou, a senior technology executive who began his career on the academic side, with a Ph.D. in digital signal processing from MIT, then was drawn to industry, where his roles included CTO for an internal startup within the global finserv giant Wells Fargo. Ken shared key takeaways from decades of technology innovation and leadership, including

  • How selecting the right architecture, oriented around loose coupling, can enable some technology functions to innovate quickly even though others need to change more slowly;
  • Why trust is the product in banking, and how that drives data classification, data quality, security, privacy, and the regulatory environment; and,
  • How both commercial and regulatory incentives drive banks to innovate.”

Please check it out and let me know what you think. And subscribe if you want to hear more.

What Warren Buffett said about Ethics

I recently read Warren Buffett’s authorized biography The Snowball. It was a chewy read, at just under 900 pages with copious footnotes and fine-grained details about what he ate, parties he attended, and vacations he took, in addition to background profiles of many of the companies he bought and larger-than-life personalities he associated with.

The bits I found most interesting dealt with Buffett’s concerns about corporate ethics. In general he sought to put his money behind individuals he felt he could trust, not only because he believed they could make money, but because of their ethics in business dealings. Of course, he didn’t always choose well. And sometimes he compromised—and later regretted certain choices. Continue reading “What Warren Buffett said about Ethics”

Podcast: Making Data Accessible To Nonprofits

Episode 7 of my podcast series, The BaDFun Podcast, is now live. Diving into the nonprofit realm this time, the title of this episode is “Empowering managers to become internal data experts, with Laurel Curran”. Here’s the blurb:


“In this episode our guest is Laurel Curran, a Consultant at San Francisco based Exponent Partners. Laurel gives us an inside look at how she equips nonprofits to improve fundraising and program delivery using a customized overlay on the Salesforce platform. Her first task is to enable non-technical front-line managers to become internal experts for their teams. Their long term success depends on whether they can meet stringent nonprofit reporting requirements on a volunteer-driven budget, then make incremental improvements when more resources become available.”

Please check it out and let me know what you think. And subscribe if you want to hear more.

Digital Ethics – Introduction

WATCH:

Created and presented by Bruce Wilson

Reach out if you want to talk about digital & AI ethics in your organization—
Email: e-bruce@manydoors.net
Twitter: @bruce2b
Web: ManyDoors.net

See photo credits below

OVERVIEW:

If you work for an organization that uses data—and just about all organizations do, or will before long—even if your job isn’t specifically about data, your ability to make decisions using data, and decisions about data, is becoming more and more important.

Organizations are discovering they need to decide things like

• which problems to solve with data,
• who to hire to solve those problems,
• what kind of training to provide employees,
• what the long-term strategy will be, and
• how they are going to explain their data use to the world.

An important subset of these decisions that involves everyone—decision makers, employees, and customers alike—falls under the general category of digital ethics, which can encompass how data is collected, stored, used, and shared.

To illustrate, let’s look at two examples of digital ethics in action, one surprisingly successful, and one disastrous.

First, the happy story. My friend Aaron Reich is basically the futurist in residence at Avanade, the global technology consulting firm. From the vantage point of his high-level insight into many of their consulting projects, last year he called out a few examples where companies used data and artificial intelligence to achieve remarkable improvements in how they help their customers. One of these companies is a financial institution in Europe that used AI to predict which customers were likely to “churn”, or leave for a competitor. This was a huge problem for them, and obviously for their customers. By applying machine learning to their customer data, they were able to better understand their customers’ needs, improve their communication, and cut churn in half. That is a win-win for both the company and its customers.
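For readers curious what such a model looks like in practice, here is an illustrative sketch of a churn predictor of the general kind described, not the institution’s actual system; the data and column names are invented:

```python
import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

# Invented customer data; real systems draw on far richer features.
df = pd.read_csv("customers.csv")
features = ["balance", "products_held", "logins_per_month", "complaints"]

X_tr, X_te, y_tr, y_te = train_test_split(
    df[features], df["churned"], test_size=0.2, random_state=0
)

model = GradientBoostingClassifier().fit(X_tr, y_tr)
churn_prob = model.predict_proba(X_te)[:, 1]
print("AUC:", roc_auc_score(y_te, churn_prob))

# Rank customers by predicted churn risk so the retention team can
# reach out to the most at-risk customers first.
at_risk = X_te.assign(churn_risk=churn_prob).sort_values(
    "churn_risk", ascending=False
)
print(at_risk.head(10))
```

The value came less from the algorithm itself than from acting on its output: better-targeted communication with the customers most likely to leave.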

Next, the scary story: in 2015 it became widely known that Volkswagen had “cooked” the emissions test data from millions of its diesel vehicles in order to sell more cars.

• Five days after the story broke, their CEO resigned, and was indicted by the US (but not arrested, because there’s no extradition treaty between Germany and the US). His immediate successor was quickly replaced.

• The CEO of Audi, a division of VW, was eventually arrested for fraud and falsification of documents.

• Relatively few of the VW personnel who had significant roles in the scheme were also present in the US—and thus subject to US jurisdiction. One was an engineer sentenced to 40 months in prison, even though he was just doing what his bosses wanted him to do (which is of course not a defense under the law). Another was an engineering manager who was arrested when he entered the US to vacation in Florida.

• VW set aside $31.7 billion for fines, settlements, recalls and buybacks.

• VW experienced a $66 billion drop in value on the stock market after the fraud was revealed (and continued to underperform the market average for some time).

• VW sales fell in the US.

• A shareholder lawsuit was filed in Germany seeking $10.4 billion in damages for corporate stock manipulation (failing to promptly disclose its inability to comply with emissions requirements).

• Germany’s national reputation for manufacturing excellence was damaged—as offices of other German car makers were also raided by investigators searching for evidence of possible cheating.

• An engineering company which assisted VW in defeating emissions testing was fined $35 million—this amount was imposed because it was deemed the maximum the contractor could pay without putting it out of business.

• Even though Germany previously had no provision for what the US calls “class action” lawsuits, in response to “dieselgate” German lawmakers created a new form of collective legal action that, in November and December 2018, enabled 372,000 German owners of VW cars to seek compensation for being the victims of this fraud.

What’s the point? Why should ordinary businesses, government organizations, and non-profits take notice of digital ethics? Most people are unlikely to find themselves in the shoes of the people who successfully reduced churn at the European financial institution, or those who participated in Dieselgate. But many will. And we should all be prepared to find ourselves somewhere on that spectrum. We are increasingly likely to discover potential benefits from, and problems with, the ways our organizations use data. We can recommend, and sometimes resist, changes our organizations make. The key is to become more educated, and more fluent, in data and digital ethics. It’s like a muscle—you already have it, but you have to exercise it and train it.

In this series of posts about digital ethics, we’re going to cover issues like:

• What does “ethics” mean—and when is ethics important? Ethics are not clearly defined for many situations, and individuals’ views of what is ethical can depend largely on context (for example, healthcare, politics, or finance), and on individual backgrounds or professions.

• What are potential business gains, and avoidable negative consequences, that can result when organizations develop and apply standards of digital ethics internally?

• Who is responsible for digital ethics? Once again, there is no universal answer to this question, but it’s something that every organization and every individual must be prepared to answer for themselves.

• Who needs to talk to whom about digital ethics? And here the answer touches on customer relationships, shareholders, employees, leaders, government, and more.

Please join me as we explore this topic and help make it relevant to everyone—this is definitely not best left exclusively to professors, lawyers, and spin doctors.

Photos used in the video:

Photo by Ethan Hoover on Unsplash

Photo by Armando Arauz on Unsplash

Photo by Ryan Searle on Unsplash

Photo by Robert Haverly on Unsplash

Photo by Omer Rana on Unsplash

Photo by Karolina Maslikhina on Unsplash

Photo by abi ismail on Unsplash

Photo by Claire Anderson on Unsplash

Photo by Rob Curran on Unsplash

Photo by Rick Tap on Unsplash

Photo by Chris Liverani on Unsplash

Photo by Hédi Benyounes on Unsplash

Blind Men Appraising an Elephant by Ohara Donshu (Brooklyn Museum / Wikipedia)

References:

AI/ML success story

Uncovering the ROI in AI by Aaron Reich (Avanade.com)

VW’s Dieselgate

VW engineer sentenced to 40 months in prison for role in emissions cheating by Megan Geuss (ArsTechnica)

Five things to know about VW’s ‘dieselgate’ scandal (Phys.org)

$10.4-billion lawsuit over diesel emissions scandal opens against Volkswagen (Bloomberg / LA Times)

How VW Paid $25 Billion for ‘Dieselgate’ — and Got Off Easy (Fortune / Pro Publica)

VW Dieselgate scandal ensnares German supplier, to pay $35M fine by Nora Naughton (The Detroit News)

Car sales suffer second year of gloom by Alan Tovey & Sophie Christie (Telegraph UK)

Nearly 375,000 German drivers join legal action against Volkswagen (Business Day)


So what if the AI Bubble Bursts?

Last week, at a farewell party for a data scientist friend (who is about to ship out from Seattle to Palo Alto to work for a certain social media network based there), I had an interesting exchange with another friend who runs a self-funded AI-based startup. Our conversation turned to wondering whether we’re in the middle of an AI bubble (remember the dotcom bubble?). He asked whether I thought there would be any winners if the AI bubble bursts, and my answer was as follows.

Photo by Aaron Burden on Unsplash

Let’s set a floor on defining “winners” by looking at the table stakes… Continue reading “So what if the AI Bubble Bursts?”
