Digital Ethics – Introduction

WATCH:

Created and presented by Bruce Wilson

Reach out if you want to talk about digital & AI ethics in your organization—
Email: e-bruce@manydoors.net
Twitter: @bruce2b
Web: ManyDoors.net

See photo credits below

OVERVIEW:

If you work for an organization that uses data—and just about all organizations do, or will before long—then even if your job isn’t specifically about data, your ability to make decisions using data, and decisions about data, is becoming more and more important.

Organizations are discovering they need to decide things like

• which problems to solve with data,
• who to hire to solve those problems,
• what kind of training to provide employees,
• what their long-term strategy will be, and
• how they are going to explain their data use to the world.

An important subset of these decisions that involves everyone—decision-makers, employees, and customers alike—falls under the general category of digital ethics, which can encompass how data is collected, stored, used, and shared.

To illustrate, let’s look at two examples of digital ethics in action, one surprisingly successful, and one disastrous.

First, the happy story. My friend Aaron Reich is basically the futurist in residence at Avanade, the global technology consulting firm. From the vantage point of his high-level insight into many of their consulting projects, last year he called out a few examples of companies that achieved remarkable improvements in how they help their customers using data and artificial intelligence. One of these companies is a financial institution in Europe that used AI to predict which customers were likely to “churn”, or leave for a competitor. This was a huge problem for the company, and obviously for its customers. By applying machine learning to their customer data, they were able to better understand their customers’ needs, improve their communication, and cut churn in half. This is obviously a win-win for both the company and its customers.
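For readers who want a concrete picture of what “applying machine learning to customer data” can look like, here is a minimal sketch of a churn model in Python with scikit-learn. The data file, column names, and model choice are illustrative assumptions on my part, not details from the Avanade engagement.

# Minimal churn-prediction sketch. The CSV file and its columns are hypothetical.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split

# Hypothetical customer history: usage, tenure, support contacts, and a churn label.
data = pd.read_csv("customers.csv")
features = data[["tenure_months", "monthly_fees", "support_calls", "products_held"]]
labels = data["churned"]  # 1 = left for a competitor, 0 = stayed

X_train, X_test, y_train, y_test = train_test_split(
    features, labels, test_size=0.2, random_state=42
)

model = RandomForestClassifier(n_estimators=200, random_state=42)
model.fit(X_train, y_train)

# Check how well the model generalizes before acting on any prediction.
print(classification_report(y_test, model.predict(X_test)))

# Rank customers by churn risk so the business can reach out proactively.
churn_risk = model.predict_proba(X_test)[:, 1]

The point is not the particular algorithm; it is that once churn risk is quantified, the company can decide, deliberately and ethically, what to do about it.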

Next, the scary story: in 2015 it became widely known that Volkswagen had “cooked” the emissions test data from millions of its diesel vehicles in order to sell more cars.

• Five days after the story broke, VW’s CEO resigned; he was later indicted in the US (but not arrested, because there is no extradition treaty between Germany and the US). His immediate successor was himself quickly replaced.

• The CEO of Audi, a division of VW, was eventually arrested for fraud and falsification of documents.

• Relatively few of the VW personnel who had significant roles in the scheme were present in the US—and thus subject to US jurisdiction. One was an engineer sentenced to 40 months in prison, even though he was just doing what his bosses wanted him to do (which, of course, is not a defense under the law). Another was an engineering manager who was arrested when he entered the US to vacation in Florida.

• VW set aside $31.7 billion for fines, settlements, recalls and buybacks.

• VW experienced a $66 billion drop in value on the stock market after the fraud was revealed (and continued to underperform the market average for some time).

• VW sales fell in the US.

• A shareholder lawsuit was filed in Germany seeking $10.4 billion in damages for corporate stock manipulation (VW’s failure to promptly disclose its inability to comply with emissions requirements).

• Germany’s national reputation for manufacturing excellence was damaged, as offices of other German carmakers were also raided by investigators searching for evidence of possible cheating.

• An engineering company that assisted VW in defeating emissions testing was fined $35 million—an amount deemed the maximum the contractor could pay without being put out of business.

• Germany previously had no provision for what the US calls “class action” lawsuits, but in response to “dieselgate” German lawmakers created a new form of collective legal action that, in November and December 2018, enabled 372,000 German owners of VW cars to seek compensation as victims of this fraud.

What’s the point? Why should ordinary businesses, government organizations, and non-profits take notice of digital ethics? Most people are unlikely to find themselves in the shoes of the people who successfully reduced churn at the European financial institution, or of those who participated in Dieselgate. But many will. And we should all be prepared to find ourselves somewhere on that spectrum. We are increasingly likely to discover potential benefits from, and problems with, the ways our organizations use data. We can recommend, and sometimes resist, changes our organizations make. The key is to become more educated, and more fluent, in data and digital ethics. It’s like a muscle—you already have it, but you have to exercise it and train it.

In this series of posts about digital ethics, we’re going to cover issues like:

• What does “ethics” mean—and when is ethics important? Ethics are not clearly defined for many situations, and individuals’ views of what is ethical can depend largely on context (for example, healthcare, politics, or finance) and on their backgrounds or professions.

• What are potential business gains, and avoidable negative consequences, that can result when organizations develop and apply standards of digital ethics internally?

• Who is responsible for digital ethics? Once again, there is no universal answer to this question, but it’s something that every organization and every individual must be prepared to answer for themselves.

• Who needs to talk to whom about digital ethics? And here the answer touches on customer relationships, shareholders, employees, leaders, government, and more.

Please join me as we explore this topic and help make it relevant to everyone—this is definitely not best left exclusively to professors, lawyers, and spin doctors.

Photos used in the video:

Ethan-hoover-422836-unsplash.jpg – Photo by Ethan Hoover on Unsplash

Armando-arauz-318017-unsplash.jpg – Photo by Armando Arauz on Unsplash

Ryan-searle-377260-unsplash.jpg – Photo by Ryan Searle on Unsplash

Robert-haverly-125125-unsplash.jpg – Photo by Robert Haverly on Unsplash

Omer-rana-533347-unsplash.jpg – Photo by Omer Rana on Unsplash

Karolina-maslikhina-503425-unsplash.jpg – Photo by Karolina Maslikhina on Unsplash

Abi-ismail-551176-unsplash.jpg – Photo by abi ismail on Unsplash

Claire-anderson-60670-unsplash.jpg – Photo by Claire Anderson on Unsplash

Rob-curran-396488-unsplash.jpg – Photo by Rob Curran on Unsplash

Rick-tap-110126-unsplash.jpg – Photo by Rick Tap on Unsplash

Chris-liverani-552649-unsplash.jpg – Photo by Chris Liverani on Unsplash

Hedi-benyounes-735849-unsplash.jpg – Photo by Hédi Benyounes on Unsplash

Blind Men Appraising an Elephant by Ohara Donshu (Brooklyn Museum / Wikipedia)

References:

AI/ML success story

Uncovering the ROI in AI by Aaron Reich (Avanade.com)

VW’s Dieselgate

VW engineer sentenced to 40 months in prison for role in emissions cheating by Megan Geuss (ArsTechnica)

Five things to know about VW’s ‘dieselgate’ scandal (Phys.org)

$10.4-billion lawsuit over diesel emissions scandal opens against Volkswagen (Bloomberg / LA Times)

How VW Paid $25 Billion for ‘Dieselgate’ — and Got Off Easy (Fortune / ProPublica)

VW Dieselgate scandal ensnares German supplier, to pay $35M fine by Nora Naughton (The Detroit News)

Car sales suffer second year of gloom by Alan Tovey & Sophie Christie (Telegraph UK)

Nearly 375,000 German drivers join legal action against Volkswagen (Business Day)

 

Amazon’s gender-biased recruiting software is a wake-up call

The recent news that Amazon inadvertently created gender-biased software for screening job applicants is a significant wake-up call for all organizations using AI. The software, which used machine learning to rank incoming resumes by comparison to resumes from people Amazon had already hired, could have discouraged recruiters from hiring women solely on the basis of their gender. Amazon, of all entities, should have known better. It should have expected and avoided this. If this can happen to Amazon, the question we really need to ask is: how many others are making the same mistake?
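To see how this kind of bias can creep in, here is a minimal sketch of a resume screener trained on historical hiring decisions, along with one simple check that can surface the problem. The files, column names, model, and threshold are hypothetical illustrations, not a description of Amazon’s actual system.

# Hypothetical sketch: a resume screener trained on past hiring decisions,
# plus a basic check for group-level disparities in its output.
import pandas as pd
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

# Hypothetical labeled history: resume text plus whether the applicant was hired.
history = pd.read_csv("past_applicants.csv")        # columns: resume_text, hired, gender

vectorizer = TfidfVectorizer(max_features=5000)
X = vectorizer.fit_transform(history["resume_text"])
model = LogisticRegression(max_iter=1000).fit(X, history["hired"])

# Score new applicants the same way the historical decisions were made.
new_applicants = pd.read_csv("new_applicants.csv")  # columns: resume_text, gender
scores = model.predict_proba(vectorizer.transform(new_applicants["resume_text"]))[:, 1]
new_applicants["shortlisted"] = scores > 0.5        # hypothetical cutoff

# Fairness check: compare shortlisting rates across groups.
# A large gap suggests the model has learned bias from the historical decisions.
print(new_applicants.groupby("gender")["shortlisted"].mean())

Note that a model like this never sees a “gender” column, yet it can still penalize resumes whose wording merely correlates with gender, which is reportedly what happened at Amazon. That is why the disparity check matters even when protected attributes are excluded from training.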

the wall
Photo by Rodion Kutsaev on Unsplash

Bias in hiring is a burden for our society as a whole, for tech companies in particular, and for Amazon specifically. Biased recruiting software exposes Amazon to a number of risks, among them: Continue reading “Amazon’s gender-biased recruiting software is a wake-up call”

What you—yes you—need to do about Data and AI Ethics

What do you need to know about Data Ethics?

If you work for an organization that uses data and artificial intelligence (AI), or if you are a consumer using data and AI-powered services, what do you need to know about data ethics?

Quite a bit, it turns out. The way things are going, it seems like every few days new ethics controversies, followed by new commitments to privacy and fairness, arise from the ways that businesses and governments use data. A few examples:

• Voice assistants like Amazon’s Alexa, Siri, and “Hey Google” are everywhere, on smart phones, computers, and smart speakers. Voice commands satisfy more and more of our needs without resorting to keyboards, touch screens, or call centers. But recently one such assistant recorded a family’s private conversation without their knowledge and emailed that recording to a family member’s employee.

doing data ethics
Photo by rawpixel on Unsplash

Continue reading “What you—yes you—need to do about Data and AI Ethics”

What’s the difference between machine learning and artificial intelligence?

Part II of The Completely Non-Technical Guide to Machine Learning and AI

My previous post raised the question “what is machine learning and artificial intelligence (AI)?” and answered with a functional definition: computer systems that combine measurements and math to make decisions so complicated that until recently only humans could make them.

Now a new question: “What’s the difference between machine learning and AI?” Continue reading “What’s the difference between machine learning and artificial intelligence?”

Machine Learning & AI for Non-Technical Businesspeople (Part I)

Part I: What is Machine Learning? Combining Measurements and Math to Make Predictions

The labels “machine learning” and “artificial intelligence” can be used interchangeably to describe computer systems that make decisions so complicated that until recently only humans could make them. With the right information, machine learning can do things like…

• look at a loan application, and recommend whether a bank should lend the money
• look at movies you’ve watched, and recommend new movies you might enjoy
• look at photos of human cells, and recommend a cancer diagnosis

Machine learning can be applied to just about anything that can be counted/measured, including numbers, words, and pixels in digital photos.
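To make the loan example above concrete, here is a minimal sketch of “measurements and math” in Python using scikit-learn. The numbers and features are made up purely for illustration.

# Minimal sketch of learning from examples: past loans (the measurements)
# plus a statistical model (the math) produce a recommendation.
from sklearn.linear_model import LogisticRegression

# Each past loan: [annual_income_in_thousands, debt_to_income_ratio]
past_loans = [[85, 0.20], [40, 0.55], [120, 0.10], [35, 0.65], [60, 0.30], [45, 0.70]]
outcomes = [1, 0, 1, 0, 1, 0]  # 1 = repaid, 0 = defaulted

model = LogisticRegression(max_iter=1000).fit(past_loans, outcomes)

# Ask for a recommendation on an application the model has never seen.
new_application = [[55, 0.40]]
print(model.predict(new_application))        # 1 = recommend lending, 0 = recommend declining
print(model.predict_proba(new_application))  # the underlying probabilities

No one wrote a rule like “decline if the debt-to-income ratio exceeds 0.6”; the model inferred its own decision boundary from the examples.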

pattern
What is it we see in this photo? How would you describe the details that let us recognize this? (Photo by Mike Tinnion on Unsplash)

What makes it different is that, with machine learning, computers don’t need humans to write out incredibly detailed instructions about how to identify bad loans, good movies, cancer cells, etc. Instead, computers are given examples (or goals) and math, and Continue reading “Machine Learning & AI for Non-Technical Businesspeople (Part I)”

Lessons in Agile Machine Learning from Walmart

Takeaways from Sam Charrington’s May 2017 interview with Jennifer Prendki, senior data science manager and principal data scientist for Walmart.com


I am very grateful to Sam Charrington for his TWiML&AI podcast series. So far I have consumed about 70 episodes (~50 hours). Every podcast is reliably fascinating: so many amazing people accomplishing incredible things. It’s energizing! The September 5, 2017 podcast, recorded in May 2017 at Sam’s Future of Data Summit event, featured his interview with Jennifer Prendki, who at the time was senior data science manager and principal data scientist for Walmart’s online business (she’s since become head of data science at Atlassian). Jennifer provides an instructive window into agile methodology in machine learning, a topic that will become more and more important as machine learning becomes mainstream and production-centric (or “industrialized”, as Sam dubs it). I’ve taken the liberty of capturing key takeaways from her interview in this blog post. (To be clear, I had no part in creating the podcast itself.) If this topic matters to you, please listen to the original podcast – available via iTunes, Google Play, SoundCloud, Stitcher, and YouTube – it’s well worth your time.


Overview

Jennifer Prendki was a member of an internal Walmart data science team supporting two other internal teams, the Perceive team and the Guide team, delivering essential components of Walmart.com’s search experience. The Perceive team is responsible for providing autocomplete and spell check to help improve customers’ search queries. The Guide team is responsible for ranking the search results, helping customers find what they are looking for as easily as possible. Continue reading “Lessons in Agile Machine Learning from Walmart”

EU Guidelines on Using Machine Learning to Process Customer Data

Summary: Every organization that processes data about any person in the EU must comply with the GDPR. Newly published GDPR Guidelines clarify that whenever an organization makes a decision using machine learning and personal data that has any kind of impact, a human must be able to independently review, explain, and possibly replace that decision using their own independent judgment. Organizations relying on machine learning models in the EU should immediately start planning how they are going to deliver a level of machine model interpretability sufficient for GDPR compliance. They should also examine how to identify whether any groups of people could be unfairly impacted by their machine models, and consider how to proactively avoid such impacts.


In October 2017, new Guidelines were published to clarify the EU’s GDPR (General Data Protection Regulation) with respect to “automated individual decision making.” These Guidelines apply to many machine learning models making decisions affecting EU citizens and member states. (A version of these Guidelines can be downloaded here—for reference, I provide page numbers from that document in this post.)

The purpose of this post is to call attention to how the GDPR, and these Guidelines in particular, may change how organizations choose to develop and deploy machine learning solutions that impact their customers.
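As one illustration of what a human-reviewable explanation could look like, here is a minimal sketch that reports each feature’s contribution to an individual automated decision made by a simple linear model. The features and data are hypothetical, and this is only an illustrative approach, not a statement of what the GDPR or the Guidelines require.

# Illustrative sketch: for a linear model, each feature's contribution to a decision
# is its learned coefficient times the feature's value for that individual.
import numpy as np
from sklearn.linear_model import LogisticRegression

feature_names = ["age", "account_balance", "late_payments"]   # hypothetical features
X_train = np.array([[25, 1200, 3], [52, 9400, 0], [33, 300, 5], [47, 7000, 1]])
y_train = np.array([0, 1, 0, 1])                               # hypothetical past decisions

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

def explain_decision(person):
    """Return each feature's contribution, largest first, for a human reviewer."""
    contributions = model.coef_[0] * person
    return sorted(zip(feature_names, contributions), key=lambda c: abs(c[1]), reverse=True)

applicant = np.array([30, 500, 4])
print(model.predict([applicant]))      # the automated decision
print(explain_decision(applicant))     # the evidence a reviewer could weigh or override

More complex models call for more sophisticated explanation techniques, but the goal is the same: give a human reviewer enough evidence to independently assess, explain, and if necessary replace the automated decision.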

Continue reading “EU Guidelines on Using Machine Learning to Process Customer Data”

Machine Learning Enterprise Adoption Roadmap

Be it the core of their product, or just a component of the apps they use, every organization is adopting machine learning and AI at some level. Most organizations are adopting it in an ad hoc fashion, but there are a number of considerations—with significant potential consequences for cost, timing, risk, and reward—that they really should address together.

That’s why I developed the following framework for organizations planning to adopt machine learning or wanting to take their existing machine learning commitment to the next level.

machine learning adoption roadmap preview

 

Define: Identify opportunities to adopt machine learning solutions in every part of your organization.

Does your organization have well-defined problems that can be solved using machine learning? Continue reading “Machine Learning Enterprise Adoption Roadmap”

Machine Learning / AI roundup for last week

optometrist
Photo by Markus Spiske on Unsplash

There’s much new goodness on the interwebs since my last machine learning and artificial intelligence roundup post (only 2 weeks ago). This isn’t by any means comprehensive, but take a quick skim and see if there’s anything you missed that’s relevant to you.

Business Uses

Recruiting / Hiring / HR: Video: Predictive Analytics for Hiring in 5 Minutes (Koru co-founder Josh Jarrett presenting at New Tech Seattle, August 8, 2017) – I saw this one live. Josh is a good presenter and the promise of his product is intriguing.

Sales: 3 Ways AI Is Upending the B2B Sales Experience (Seismic CEO Doug Winter, Entrepreneur.com, August 14 2017) – Useful overview from an industry insider.

Healthcare: Google’s Machine Learning Looks to Improve Predictions in Health Care (Dan Ochwat, H&HN.com [Hospitals & Health Networks], August 8 2017)

Energy: Google enters race for nuclear fusion technology (Damian Carrington, The Guardian, July 25 2017) – This is the first time I heard the phrase “Optometrist Algorithm”—the machine learning makes suggestions to the humans, who choose Continue reading “Machine Learning / AI roundup for last week”

It’s An AI (and Machine Learning) Thing

tabulator
Photo by Andrew Branch on Unsplash

AI and Machine Learning in Business and Education

I decided to share some links to a few of my favorite (mostly recent) articles and videos about #AI, aka artificial intelligence, and #ML, aka machine learning, in a post here. If anyone wants to submit additions, feel free to contribute in the comments below.

Recent overview articles about AI / Machine Learning

The Business of Artificial Intelligence / What it can — and cannot — do for your organization (Erik Brynjolfsson & Andrew McAfee, Harvard Business Review, July 2017)

Building machines that learn and think like people (Josh Tenenbaum, O’Reilly Artificial Intelligence, June 28 2017)

Video: Three Ways Businesses Use Artificial Intelligence (Tom Davenport with Allison Ryder, MIT Sloan Management Review, July 24 2017)

Continue reading “It’s An AI (and Machine Learning) Thing”
