What you—yes you—need to do about Data and AI Ethics

What do you need to know about Data Ethics?

If you work for an organization that uses data and artificial intelligence (AI), or if you are a consumer using data and AI-powered services, what do you need to know about data ethics?

Quite a bit, it turns out. It seems like every few days a new ethics controversy, followed by new commitments to privacy and fairness, arises from the ways that businesses and governments use data. A few examples:

• Voice assistants like Amazon’s Alexa, Siri, and “Hey Google” are everywhere, on smartphones, computers, and smart speakers. Voice commands satisfy more and more of our needs without resorting to keyboards, touch screens, or call centers. But recently one such assistant recorded a family’s private conversation without their knowledge and emailed the recording to one family member’s employee.


Continue reading “What you—yes you—need to do about Data and AI Ethics”

Who Needs Reasons for AI-Based Decisions?

Deep learning systems, the most headline-grabbing examples of the AI revolution—beating the best human chess and poker players, driving cars, etc.—impress us in part because they are inscrutable. Not even the designers of these systems know exactly why they make the decisions they make. We only know that they are capable of being highly accurate…on average.

Meanwhile, software companies are developing complex systems for business and government that rely on “secret sauce” proprietary data and AI models. To protect their intellectual property and profitability, the developers of these systems typically decline to reveal exactly how their systems work. This gives rise to a tradeoff between the profit motive, which enables rapid innovation (something government in particular isn’t known for), and transparency, which enables the detection and correction of mistakes and biases. And mistakes do occur…on average.


On the one hand, a lack of transparency in deep learning and proprietary AI models has led to criticism from a number of sources. Organizations like AI Now and ProPublica are surfacing circumstances where a lack of transparency leads to abuses such as discriminatory bias. The EU has instituted regulations (namely the GDPR) that guarantee its citizens the right to appeal to a human being when AI-based decisions are made. And, last but not least, there is growing awareness that AI systems, including autonomous driving and health care systems, can be invisibly manipulated by those with a motive like fraud or simple mischief.

Continue reading “Who Needs Reasons for AI-Based Decisions?”

EU Guidelines on Using Machine Learning to Process Customer Data

Summary: Every organization that processes data about any person in the EU must comply with the GDPR. Newly published GDPR Guidelines clarify that whenever an organization uses machine learning and personal data to make a decision that has any kind of impact on a person, a human must be able to independently review, explain, and possibly replace that decision using their own judgment. Organizations relying on machine learning models in the EU should immediately start planning how they are going to deliver a level of model interpretability sufficient for GDPR compliance. They should also examine how to identify whether any groups of people could be unfairly impacted by their models, and consider how to proactively avoid such impacts.


In October 2017, new Guidelines were published to clarify the EU’s GDPR (General Data Protection Regulation) with respect to “automated individual decision making.” These Guidelines apply to many machine learning models making decisions that affect people in the EU. (A version of these Guidelines can be downloaded here; for reference, I cite page numbers from that document throughout this post.)

The purpose of this post is to call attention to how the GDPR, and these Guidelines in particular, may change how organizations choose to develop and deploy machine learning solutions that impact their customers.
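To make that planning concrete, here is a minimal sketch in Python of the two capabilities the Guidelines point toward: producing a human-reviewable reason for an individual automated decision, and screening for group-level disparate impact. It uses synthetic data and a deliberately simple logistic regression as a stand-in for a real model; the feature names, the decision_reasons helper, and the group labels are illustrative assumptions, not anything the GDPR prescribes.

# A minimal sketch, not a compliance recipe. Assumes an interpretable
# stand-in model (logistic regression) and synthetic data.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Synthetic applicants: [income, debt_ratio], plus a hypothetical group label.
X = rng.normal(size=(1000, 2))
y = (X[:, 0] - X[:, 1] + rng.normal(scale=0.5, size=1000) > 0).astype(int)
group = rng.integers(0, 2, size=1000)

model = LogisticRegression().fit(X, y)
feature_names = ["income", "debt_ratio"]

def decision_reasons(x):
    # Rank features by their contribution to this one decision,
    # so a human reviewer can see why the model leaned the way it did.
    contributions = model.coef_[0] * x
    order = np.argsort(-np.abs(contributions))
    return [(feature_names[i], float(contributions[i])) for i in order]

applicant = X[0]
print("decision:", model.predict([applicant])[0])
print("reasons:", decision_reasons(applicant))

# Crude disparate-impact screen: compare approval rates across groups.
approvals = model.predict(X)
rate_a = approvals[group == 0].mean()
rate_b = approvals[group == 1].mean()
print(f"approval rate A={rate_a:.2f}, B={rate_b:.2f}, "
      f"ratio={min(rate_a, rate_b) / max(rate_a, rate_b):.2f}")

A reviewer armed with the per-feature contributions can explain a decision in plain terms (“your debt ratio counted most against you”) and, where warranted, replace it with their own judgment, which is exactly the kind of human review the Guidelines contemplate.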

Continue reading “EU Guidelines on Using Machine Learning to Process Customer Data”

3 privacy mistakes to avoid in social media

Nowadays everyone has to have a strategy for managing the complexity of social media privacy. Approaches vary:

  • A relatively small number of people just don’t care who knows what about them. By default they let it all hang out. We see evidence of this every so often when someone gets fired by an employer who thought a photo was too racy, or a comment too racist.
  • At the other extreme, certain people have abandoned social networks altogether, or avoided them in the first place. People who have had stalker problems fit comfortably in this category, for example.
  • The majority are somewhere in between. We seek to filter our private information in a practical, socially acceptable way, while minimizing the amount of time and effort we spend understanding policies and tweaking settings.

Everyone in this third group should be aware of three basic privacy mistakes to avoid.

1. Don’t post truly private information on social networks

The most important thing you can do to protect your privacy is to exercise self-restraint. You simply shouldn’t put information that you consider “private” on social networks. For starters, it’s easy to make a mistake with not-always-intuitive privacy settings, granting “public” access when you thought you had granted “friends only.” Facebook in particular seems to change its privacy system frequently in ways that invite such mistakes (so much so that it almost seems intentional on Facebook’s part).

Also, people you share “private” information with on social media may goof up and re-share whatever you share with them. This can happen accidentally (see privacy settings, above) or because they don’t realize that some information they receive from you via social networks is private…unlike all of the

Continue reading “3 privacy mistakes to avoid in social media”

3 reasons to try social media add-ons for Outlook or Gmail

Contracts expert Kenneth Adams via Rapportive

Social email plugins like Xobni, Rapportive (now owned by LinkedIn), Gist (now owned by RIM), and Outlook Social Connector (supported by Microsoft) offer an interesting and sometimes productive upgrade to your email experience.

Here’s the basic idea. When you’re reading or writing an email, if you have a social media connection to the senders or recipients, or if they have public social media profiles, you see their recent social media activity displayed to the right side of the email you’re looking at.

So instead of having to visit a bunch of different social media sites and look up a contact on each of them, just open an email and their social media information is all right there in one place.
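For the curious, here is a toy sketch in Python of that aggregation idea. The in-memory DIRECTORY stands in for the social-network APIs these plugins actually query; the Profile class and sidebar_for function are illustrative names of my own, not any plugin’s real code.

# A toy sketch of the lookup-and-aggregate idea behind social email plugins.
from dataclasses import dataclass, field

@dataclass
class Profile:
    name: str
    recent_activity: list = field(default_factory=list)

# Hypothetical public-profile directory keyed by email address;
# a real plugin would query the social networks live instead.
DIRECTORY = {
    "ken@example.com": Profile(
        name="Kenneth Adams",
        recent_activity=["Posted an update on contract drafting"],
    ),
}

def sidebar_for(message_participants):
    # Build the sidebar a plugin would render: one card per
    # sender/recipient with a findable public profile.
    cards = []
    for email in message_participants:
        profile = DIRECTORY.get(email.lower())
        if profile:
            cards.append((profile.name, profile.recent_activity))
    return cards

print(sidebar_for(["ken@example.com", "unknown@example.com"]))

In practice the plugins run this lookup live against the actual social networks and render the results as a sidebar next to the open message.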

A social email plugin can serve a number of business purposes.

1. Staying in touch

Social media updates can help you understand what a contact has been up to, or is doing right now, just as you are sending/receiving email from them. This is useful in much the same way as using a shared calendar at work, which allows you to know when someone is going to be busy or on vacation while you’re trying to schedule a meeting with them. But the social media updates offered by these plugins provide more

Continue reading “3 reasons to try social media add-ons for Outlook or Gmail”

Social Media 2010: tension will increase between secrecy and openness

I recently participated in an online discussion about corporate social media policies, hosted within a social media group on LinkedIn. The person who started the discussion posted Intel’s social media policy as an example. But things got really interesting when another participant asked her attorney, Tedrick Housh of Kansas City’s Lathrop & Gage law firm, to compare Intel’s and IBM’s policies. She posted the following response from Tedrick, who has graciously given me permission to re-post it here with the caveat that it represents his own professional opinion, nothing more:

[Intel’s] policy is undoubtedly comprehensive and makes a lot of good points. Without delving into all the links in the policy and exploring them in detail, and notwithstanding the disclaimer that any comments are the responsibility of the employee alone, however, my gut reaction is that this is a little too wordy. It also may create an approval mechanism that borders on “deputizing” all the individual tweets and blogs (“if unsure, check with a manager,” etc.) By exercising control, Intel may find itself liable for not policing communications as closely as the policies promise. The sparser, more colloquial IBM policy leaves more of an impression that as an employee, you swim at your own risk, and will be held accountable for your actions. As an employer, I would want to keep these communications more at arm’s length than the Intel policy intimates.

Thanks Tedrick!

I think this is a highly important issue that will become a major political football within many corporations this year. And although I think Tedrick’s approach, pushing responsibility for exercising good judgment down to the individual level, is the optimal one, it runs counter to many corporate cultures and, I would imagine, to the over-protective tendencies of most attorneys.

Link: Social Media Governance, a repository of corporate social media policies. (Thanks to Chris Boudreaux for maintaining it.)