Digital Ethics – Introduction

Created and presented by Bruce Wilson

Reach out if you want to talk about digital & AI ethics in your organization—
Email: e-bruce@manydoors.net
Twitter: @bruce2b
Web: ManyDoors.net

See photo credits below

If you work for an organization that uses data—and just about all of them do, or will before long—even if your job isn’t specifically about data, your ability to make decisions using data, and decisions about data, is becoming more and more important.

Organizations are discovering they need to decide things like

  • which problems to solve with data,
  • who to hire to solve those problems,
  • what kind of training to provide employees,
  • what the long-term strategy will be,
  • and how they are going to explain their data use to the world.

An important subset of these decisions that involves everyone—decision makers, employees, and customers alike—falls under the general category of digital ethics, which can encompass how data is collected, stored, used, and shared.

To illustrate, let’s look at two examples of digital ethics in action: one surprisingly successful, and one disastrous.

First, the happy story. My friend Aaron Reich is essentially the futurist in residence at Avanade, the global technology consulting firm. From his high-level vantage point across many of their consulting projects, last year he called out a few examples of companies achieving remarkable improvements in how they help their customers using data and artificial intelligence. One of these companies is a European financial institution that used AI to predict which customers were likely to “churn”, or leave for a competitor. This was a huge problem for them, and obviously for their customers. By applying machine learning to their customer data, they were able to better understand their customers’ needs, improve their communication, and cut churn in half—a win-win for both the company and its customers.
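The Avanade case study doesn’t publish its model details, but the basic idea of churn prediction—scoring customers by warning signs in their own usage data so the company can reach out before they leave—can be sketched in a few lines. The feature names and thresholds below are illustrative assumptions, not anything from the actual project:

```python
# Hypothetical customer records; field names and thresholds are invented
# for illustration, not taken from the financial institution's real data.
customers = [
    {"id": 1, "logins_per_month": 2,  "complaints": 3},
    {"id": 2, "logins_per_month": 20, "complaints": 0},
    {"id": 3, "logins_per_month": 1,  "complaints": 2},
    {"id": 4, "logins_per_month": 15, "complaints": 1},
]

def churn_risk(customer):
    """Toy risk score in [0, 1]: low engagement and complaints raise risk."""
    score = 0.0
    if customer["logins_per_month"] < 5:   # disengaged customers churn more
        score += 0.6
    score += 0.2 * min(customer["complaints"], 2)
    return min(score, 1.0)

# Flag customers whose risk crosses a threshold for proactive outreach.
at_risk = [c["id"] for c in customers if churn_risk(c) >= 0.5]
print(at_risk)  # → [1, 3]
```

In a real deployment this hand-written rule would be replaced by a model trained on historical churn labels, but the workflow—score each customer, then act on the high-risk ones—is the same.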

Next, the scary story: in 2015 it became widely known that Volkswagen had “cooked” the emissions test data from millions of its diesel vehicles in order to sell more cars.

What’s the point? Why should ordinary businesses, government organizations, and non-profits take notice of digital ethics? Most people are unlikely to find themselves in the shoes of the people who successfully reduced churn at the European financial institution, or of those who participated in Dieselgate. But many will. And we should all be prepared to find ourselves somewhere on that spectrum. We are increasingly likely to discover potential benefits from, and problems with, the ways our organizations use data. We can recommend, and sometimes resist, changes our organizations make. The key is to become more educated, and more fluent, in data and digital ethics. It’s like a muscle—you already have it, but you have to exercise it and train it.

In this series of posts about digital ethics, we’re going to cover issues like:

  • What does “ethics” mean—and when is ethics important? Ethics are not clearly defined for many situations, and individuals’ views of what is ethical can depend largely on context (for example, healthcare, politics, or finance) and on individual backgrounds or professions.
  • What are potential business gains, and avoidable negative consequences, that can result when organizations develop and apply standards of digital ethics internally?
  • Who is responsible for digital ethics? Once again, there is no universal answer to this question, but it’s something that every organization and every individual must be prepared to answer for themselves.
  • Who needs to talk to whom about digital ethics? Here the answer touches on customers, shareholders, employees, leaders, governments, and more.

Please join me as we explore this topic and help make it relevant to everyone—this is definitely not best left exclusively to professors, lawyers, and spin doctors.

Photos used in the video:

Ethan-hoover-422836-unsplash.jpg – Photo by Ethan Hoover on Unsplash

Armando-arauz-318017-unsplash.jpg – Photo by Armando Arauz on Unsplash

Ryan-searle-377260-unsplash.jpg – Photo by Ryan Searle on Unsplash

Robert-haverly-125125-unsplash.jpg – Photo by Robert Haverly on Unsplash

Omer-rana-533347-unsplash.jpg – Photo by Omer Rana on Unsplash

Karolina-maslikhina-503425-unsplash.jpg – Photo by Karolina Maslikhina on Unsplash

Abi-ismail-551176-unsplash.jpg – Photo by abi ismail on Unsplash

Claire-anderson-60670-unsplash.jpg – Photo by Claire Anderson on Unsplash

Rob-curran-396488-unsplash.jpg – Photo by Rob Curran on Unsplash

Rick-tap-110126-unsplash.jpg – Photo by Rick Tap on Unsplash

Chris-liverani-552649-unsplash.jpg – Photo by Chris Liverani on Unsplash

Hedi-benyounes-735849-unsplash.jpg – Photo by Hédi Benyounes on Unsplash

Blind Men Appraising an Elephant by Ohara Donshu (Brooklyn Museum / Wikipedia)

References:

AI/ML success story

Uncovering the ROI in AI by Aaron Reich (Avanade.com)

VW’s Dieselgate

VW engineer sentenced to 40 months in prison for role in emissions cheating by Megan Geuss (ArsTechnica)

Five things to know about VW’s ‘dieselgate’ scandal (Phys.org)

$10.4-billion lawsuit over diesel emissions scandal opens against Volkswagen (Bloomberg / LA Times)

How VW Paid $25 Billion for ‘Dieselgate’ — and Got Off Easy (Fortune / Pro Publica)

VW Dieselgate scandal ensnares German supplier, to pay $35M fine by Nora Naughton (The Detroit News)

Car sales suffer second year of gloom by Alan Tovey & Sophie Christie (Telegraph UK)

Nearly 375,000 German drivers join legal action against Volkswagen (Business Day)


Amazon’s gender-biased recruiting software is a wake-up call

The recent news that Amazon inadvertently created gender-biased software for screening job applicants is a significant wake-up call for all organizations using AI. The software, which used machine learning to rank incoming resumes by comparison to resumes from people Amazon had already hired, could have discouraged recruiters from hiring women solely on the basis of their gender. Amazon, of all entities, should have known better. It should have expected and avoided this. If this can happen to Amazon, the question we really need to ask is: how many others are making the same mistake?
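Reporting on the Amazon system described a model that learned from the resumes of past hires—so when those past hires skewed heavily male, words correlated with women became negative signals. Amazon’s actual model is not public; the toy scorer below (invented data, invented word-count scoring) only illustrates the mechanism by which that kind of bias creeps in:

```python
from collections import Counter

# Hypothetical training data: past hires skew toward one demographic.
# All resume text here is invented for illustration.
past_hire_resumes = [
    "software engineer chess club captain",
    "software engineer robotics team",
    "backend engineer chess club",
]
rejected_resumes = [
    "software engineer women's chess club captain",
    "frontend engineer women's coding society",
]

def word_scores(hired, rejected):
    """Weight each word by how much more often it appears among hires."""
    hired_counts = Counter(w for r in hired for w in r.split())
    rejected_counts = Counter(w for r in rejected for w in r.split())
    vocab = set(hired_counts) | set(rejected_counts)
    return {w: hired_counts[w] - rejected_counts[w] for w in vocab}

def score_resume(resume, scores):
    """Rank a new resume by its similarity to past hires' vocabulary."""
    return sum(scores.get(w, 0) for w in resume.split())

scores = word_scores(past_hire_resumes, rejected_resumes)
# "women's" never appears among past hires, so the model assigns it a
# negative weight even though it says nothing about job performance.
print(scores["women's"])  # → -2
```

Nothing in this code mentions gender explicitly—the bias comes entirely from the historical labels, which is exactly why it is so easy to ship by accident.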

Photo by Rodion Kutsaev on Unsplash

Bias in hiring is a burden for our society as a whole, for tech companies in particular, and for Amazon specifically. Biased recruiting software exposes Amazon to a number of risks.

What you—yes you—need to do about Data and AI Ethics

What do you need to know about Data Ethics?

If you work for an organization that uses data and artificial intelligence (AI), or if you are a consumer using data and AI-powered services, what do you need to know about data ethics?

Quite a bit, it turns out. The way things are going, it seems like every few days new ethics controversies, followed by new commitments to privacy and fairness, arise from the ways that businesses and governments use data. A few examples:

• Voice assistants like Amazon’s Alexa, Siri, and “Hey Google” are everywhere—on smart phones, computers, and smart speakers. Voice commands satisfy more and more of our needs without resorting to keyboards, touch screens, or call centers. But recently one such assistant recorded a family’s private conversation without their knowledge and emailed the recording to a family member’s employee.

Photo by rawpixel on Unsplash
