Recruit a Task Force to Build a Data and AI Ethics Program

Part 1 in a 4-part series on how to start a data and AI ethics program

Ethics is about translating values into action. Data and AI in the abstract don’t contain any values except those we humans imbue them with. More importantly, we make value choices about how we use data and AI—including, sometimes, the decision not to use them. So unless all you want from your data and AI ethics program is a statement of platitudes that no one will take seriously, you need to figure out which values your organization wants to rally around, and what honoring those values will look like in practice in the day-to-day work of all of the people you employ and partner with in their specialized roles.


Photo by Perry Grone on Unsplash

Ultimately, you want input and buy-in from representatives of many different stakeholder groups to formulate and implement your organization’s data and AI ethics policies.

If you already have a Chief Data Officer, a business ethics committee, or some other entity in place, they may be the logical starting place for putting together your group. Having said that, a mistake some organizations make is to treat data and AI ethics as a technical problem, entirely within the realm of data science. The truth is:

  • Reputation. Data and AI ethics touches your organization’s brand/reputation—how will the organization be perceived because of its choices (or lack thereof) about ethics?—so you need participation from senior leadership, marketing and PR.
  • Regulation. It touches regulatory issues, including legal scenarios that are just emerging. For example, we will soon be seeing enforcement actions giving context to the California Consumer Privacy Act, which goes into effect less than 30 days from when this is being written. So you need to get your legal and compliance people involved.
  • Employees. Your approach to data and AI ethics affects hiring and retention—what resonates with the people you want to attract to your organization?—so you need HR in the mix.
  • Partners. It affects the ways in which your data and AI can be used, so you need clear communication with whoever is responsible for your sales, licensing, and/or partnerships.
  • Technologists. In addition to all of these, of course you need the people who build and deploy your actual data systems (developers and data scientists).

Can’t get representatives from all of these constituencies on board at the outset? Work with who you’ve got. Start with a prototype—maybe one product, one department, or one scenario. As you go forward, promote the value proposition of your work throughout the organization, and solicit input from anyone who is interested but unable to formally participate, while building momentum toward an expanded role down the road.

From the 4-part series How to Start Your Organization’s Data and AI Ethics Program
Next – Part 2: Educate Your Organization About Data and AI Ethics 

If the process described in this series is challenging for your organization, reach out! We’ll set up a call to talk about your organization’s goals and how I can help. 

 

How to Start Your Organization’s Data and AI Ethics Program

Introduction to a 4-Part Series

Let’s suppose your organization (or some part thereof) has decided to take a more principled approach towards the data and/or algorithms it uses by establishing ethics-based ground rules for their use. Maybe this stems from concerns expressed by leadership, legal counsel, shareholders, customers, or employees about potential harms arising from a technology you’re already using or about to use. Maybe it’s because you know companies like Google, Microsoft, and Salesforce have already taken significant steps to incorporate data and AI ethics requirements into their business processes.


Photo by Kelly Sikkema on Unsplash

Regardless of the immediate focus, keep in mind that you probably don’t need to launch the world’s best program on day one (or year one). The bad news is that there is no plug-and-play, one-size-fits-all solution awaiting you. You and your colleagues will need to begin by understanding where you are now, visualizing where you are headed, and incrementally building a roadway that takes you in the right direction. In fact, it makes sense to start small—like you would when prototyping a new product or line of business—learning and building support systems as you go. Over time, your data and AI ethics program will generate long-term benefits, as data and AI ethics become increasingly important to every organization’s reputation, growth in value, and risk management.

In this 4-part series on initiating a functional data and AI ethics program, we will cover the basic steps you and your team will need to undertake:

Part 1: Recruit a Task Force to Build a Data and AI Ethics Program

Part 2: Educate Your Organization About Data and AI Ethics

Part 3: Create a Map of Potential Data and AI Ethics Hot Spots

Part 4: Test Your Data and AI Ethics Program

 

Next – Part 1: Recruit a Task Force to Build a Data and AI Ethics Program

What you—yes you—need to do about Data and AI Ethics

What do you need to know about Data Ethics?

If you work for an organization that uses data and artificial intelligence (AI), or if you are a consumer using data and AI-powered services, what do you need to know about data ethics?

Quite a bit, it turns out. The way things are going, it seems like every few days new ethics controversies, followed by new commitments to privacy and fairness, arise from the ways that businesses and government use data. A few examples:

• Voice assistants like Amazon’s Alexa, Siri, and “Hey Google” are everywhere, on smart phones, computers, and smart speakers. Voice commands satisfy more and more of our needs without resorting to keyboards, touch screens, or call centers. But in one recent incident, such an assistant recorded a family’s private conversation without their knowledge and emailed the recording to an employee of one family member.

Photo by rawpixel on Unsplash

Continue reading “What you—yes you—need to do about Data and AI Ethics”

Who Needs Reasons for AI-Based Decisions?

Deep learning systems, the most headline-grabbing examples of the AI revolution—beating the best human chess and poker players, powering self-driving cars, and so on—impress us in part because they are inscrutable. Not even the designers of these systems know exactly why they make the decisions they make. We only know that they are capable of being highly accurate…on average.

Meanwhile, software companies are developing complex systems for business and government that rely on “secret sauce” proprietary data and AI models. To protect their intellectual property and profitability, the developers of these systems typically decline to reveal exactly how their systems work. This gives rise to a tradeoff between the profit motive, which enables rapid innovation (something government in particular isn’t known for), and transparency, which enables detection and correction of mistakes and biases. And mistakes do occur…on average.

pay no attention to the man behind the curtain
Photo by Andrew Worley on Unsplash

On the one hand, the lack of transparency in deep learning and proprietary AI models has drawn criticism from a number of sources. Organizations like AI Now and ProPublica are surfacing circumstances where a lack of transparency leads to abuses such as discriminatory bias. The EU has instituted regulations (namely the GDPR) that guarantee its citizens the right to appeal to a human being when AI-based decisions are made about them. And, last but not least, there is growing awareness that AI systems—including autonomous driving and health care systems—can be invisibly manipulated by those with a motive like fraud or simple mischief.

Continue reading “Who Needs Reasons for AI-Based Decisions?”

EU Guidelines on Using Machine Learning to Process Customer Data

Summary: Every organization that processes data about any person in the EU must comply with the GDPR. Newly published GDPR Guidelines clarify that whenever an organization makes a decision using machine learning and personal data that has any kind of impact, a human must be able to review, explain, and possibly replace that decision using their own independent judgment. Organizations relying on machine learning models in the EU should immediately start planning how they are going to deliver a level of model interpretability sufficient for GDPR compliance. They should also examine how to identify whether any groups of people could be unfairly impacted by their machine learning models, and consider how to proactively avoid such impacts.


In October 2017, new Guidelines were published to clarify the EU’s GDPR (General Data Protection Regulation) with respect to “automated individual decision-making.” These Guidelines apply to many machine learning models that make decisions affecting people in the EU. (A version of these Guidelines can be downloaded here—for reference, I provide page numbers from that document in this post.)

The purpose of this post is to call attention to how the GDPR, and these Guidelines in particular, may change how organizations choose to develop and deploy machine learning solutions that impact their customers.
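
To make that concrete, here is a minimal sketch, my own illustration rather than anything the Guidelines prescribe, of two checks a team might prototype: a small surrogate decision tree that gives a human reviewer a traceable account of a black-box model’s decisions, and a simple comparison of decision rates across groups. It assumes scikit-learn; the feature names, data, and the four-fifths threshold are all hypothetical choices on my part.

```python
# A minimal sketch of two GDPR-motivated checks -- my own illustration, not
# anything prescribed by the Guidelines. Assumes scikit-learn; the feature
# names, data, and threshold below are all hypothetical.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier, export_text

FEATURES = ["income", "tenure_months", "num_late_payments"]  # hypothetical

# Stand-in data for real customer records.
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, len(FEATURES)))
y = (X[:, 0] - X[:, 2] > 0).astype(int)  # toy approval rule

black_box = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# Check 1 -- interpretability: fit a small surrogate tree to the black-box
# model's predictions so a human reviewer can trace how a decision was reached.
surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
surrogate.fit(X, black_box.predict(X))
print(export_text(surrogate, feature_names=FEATURES))

# Check 2 -- group impact: compare favorable-decision rates across a
# (hypothetical) protected-group split.
group = rng.integers(0, 2, size=len(X))
decisions = black_box.predict(X)
rate_a = decisions[group == 0].mean()
rate_b = decisions[group == 1].mean()
# The four-fifths ratio is a common fairness heuristic, not a GDPR rule.
if min(rate_a, rate_b) / max(rate_a, rate_b) < 0.8:
    print(f"Warning: decision rates differ materially ({rate_a:.2f} vs {rate_b:.2f}).")
```

A real compliance effort would go well beyond this, but even a toy version gives legal and data science teams a shared starting point for discussing what “review and explain” will mean in their systems.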

Continue reading “EU Guidelines on Using Machine Learning to Process Customer Data”

3 privacy mistakes to avoid in social media

Nowadays everyone has to have a strategy for managing the complexity of social media privacy. Approaches vary:

  • A relatively small number of people just don’t care who knows what about them. By default they let it all hang out. We see evidence of this every so often when someone gets fired by an employer who thought a photo was too racy, or a comment too racist.
  • On the other extreme, certain people have abandoned social networks altogether, or avoided them in the first place. People who have had stalker problems fit comfortably in this category, for example.
  • The majority are somewhere in between. We seek to filter our private information in a practical, socially acceptable way, while minimizing the amount of time and effort we spend understanding policies and tweaking settings.

Everyone in this third group should be aware of three basic privacy mistakes to avoid.

1. Don’t post truly private information on social networks

The most important thing you can do to protect your privacy is to exercise self-restraint. You simply shouldn’t put information that you consider “private” on social networks. For starters, it’s easy to make a mistake with not-always-intuitive privacy settings, thus giving “public” access when you thought it was “friends only.” Facebook in particular seems to change its privacy system frequently in ways that make such mistakes easy (so much so that it almost seems intentional on Facebook’s part).

Also, people you share “private” information with in social media may goof up and share whatever you share with them. This can happen accidentally (see privacy settings, above) or because they don’t realize that some information they receive from you via social networks is private…unlike all of the…

Continue reading “3 privacy mistakes to avoid in social media”

3 reasons to try social media add-ons for Outlook or Gmail

Contracts expert Kenneth Adams via Rapportive

Social email plugins like Xobni, Rapportive (now owned by LinkedIn), Gist (now owned by RIM), and Outlook Social Connector (supported by Microsoft) offer an interesting and sometimes productive upgrade to your email experience.

Here’s the basic idea. When you’re reading or writing an email, if you have a social media connection to the senders or recipients, or if they have public social media profiles, you see their recent social media activity displayed to the right side of the email you’re looking at.

So instead of having to visit a bunch of different social media sites and look up a contact on each of them, just open an email and their social media information is all right there in one place.
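
Under the hood, the pattern is simple enough to sketch. The snippet below is a purely hypothetical illustration of that lookup-and-aggregate flow, not any vendor’s actual API; the stub functions stand in for whatever profile services a real plugin queries.

```python
# A purely hypothetical sketch of the social email plugin pattern: given the
# email address on a message, ask each network for public profile data and
# aggregate the results into one sidebar. The lookup functions are stubs,
# not any vendor's real API.

def lookup_linkedin(email: str) -> dict:
    # Placeholder: a real plugin would call the network's profile API here.
    return {"network": "LinkedIn", "headline": "Contracts expert"}

def lookup_twitter(email: str) -> dict:
    return {"network": "Twitter", "latest_post": "New post on drafting..."}

SOURCES = [lookup_linkedin, lookup_twitter]

def sidebar_for(email: str) -> list[dict]:
    """Collect whatever public profile data the sources return for a contact."""
    results = []
    for source in SOURCES:
        profile = source(email)
        if profile:
            results.append(profile)
    return results

# When an email is opened, the plugin renders this next to the message.
print(sidebar_for("someone@example.com"))
```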

A social email plugin can serve a number of business purposes.

1. Staying in touch

Social media updates can help you understand what a contact has been up to, or is doing right now, just as you are sending or receiving email from them. This is useful in much the same way as a shared calendar at work, which lets you know when someone is going to be busy or on vacation while you’re trying to schedule a meeting with them. But the social media updates offered by these plugins provide more…

Continue reading “3 reasons to try social media add-ons for Outlook or Gmail”

Social Media 2010: tension will increase between secrecy and openness

Bruce Wilson

I recently participated in an online discussion about corporate social media policies hosted within a social media group on LinkedIn. The person who started the discussion posted Intel’s social media policy as an example. But things got really interesting when another participant asked her attorney, Tedrick Housh at Kansas City’s Lathrop and Gage law firm, to compare Intel’s and IBM’s policies. She posted the following response from Tedrick, who has graciously given me permission to re-post it here with the caveat that it represents his own professional opinion, nothing more:

[Intel’s] policy is undoubtedly comprehensive and makes a lot of good points. Without delving into all the links in the policy and exploring them in detail, and notwithstanding the disclaimer that any comments are the responsibility of the employee alone, however, my gut reaction is that this is a little too wordy. It also may create an approval mechanism that borders on “deputizing” all the individual tweets and blogs (“if unsure, check with a manager,” etc.) By exercising control, Intel may find itself liable for not policing communications as closely as the policies promise. The sparser, more colloquial IBM policy leaves more of an impression that as an employee, you swim at your own risk, and will be held accountable for your actions. As an employer, I would want to keep these communications more at arm’s length than the Intel policy intimates.

Thanks Tedrick!

I think this is a highly important issue that will become a major political football within many corporations this year. And although I think Tedrick’s approach – pushing responsibility for exercising good judgment down to the individual level – is the optimal one, it runs counter to many corporate cultures and, I would imagine, the over-protective tendencies of a majority of attorneys.

Link: Social Media Governance, a repository of corporate social media policies. (Thanks to Chris Boudreaux for maintaining it.)
