What do you need to know about Data Ethics?
If you work for an organization that uses data and artificial intelligence (AI), or if you are a consumer using data and AI-powered services, what do you need to know about data ethics?
Quite a bit, it turns out. The way things are going, it seems that every few days new ethics controversies, followed by new commitments to privacy and fairness, arise from the ways that businesses and governments use data. A few examples:
• Voice assistants like Amazon’s Alexa, Siri, and “Hey Google” are everywhere, on smartphones, computers, and smart speakers. Voice commands satisfy more and more of our needs without resorting to keyboards, touch screens, or call centers. But recently one such assistant recorded a family’s private conversation without their knowledge and emailed that recording to a family member’s employee.
• AI-powered prediction and recommendation software is being used to help people make decisions in hundreds or even thousands of different situations, ranging from shopping to medical diagnosis. One such system, designed to help criminal court judges decide what punishment convicted criminals should receive, has been revealed to consistently recommend harsher terms of imprisonment for African-Americans—but it’s still widely used.
• Self-driving cars, which ultimately could be much safer, as well as more convenient, than human-driven cars, are being tested on our streets. Recently one such car “saw” a pedestrian but mistakenly dismissed the person as mere road debris, and struck and killed them.
• Robots are being designed to do dangerous and unpleasant jobs so that humans no longer have to do that work themselves. Among these are gun-equipped robots that replace human soldiers and are programmed to decide independently whom to shoot.
So back to the question: what do we need to know about data and AI ethics? And why should we care? Isn’t someone else already working on these things? I fear that many of us are tempted to think of “ethics” alongside “philosophy” as an esoteric academic subject with limited real-world application. Or for some, perhaps, it’s a matter of faith, to be defined primarily by religious texts and leaders. There are also those of us who feel overwhelmed by the breadth of the topic, and thus believe ourselves to be poorly prepared to join in discussions alongside people better versed in the subject. But ethics isn’t necessarily abstract. It doesn’t require a deep philosophical or religious point of view. And the rapid growth in data and AI use means it’s time for all of us to join the ethics conversation, because what’s being discussed right now will affect all of us.
What Is Ethics?
But what do we even mean when we talk about ethics? In practice, for you and me, for engineers and architects, judges and politicians, doctors and bankers, builders and farm workers, baristas and astronauts, for corporate CEOs and entry-level hires, ethics can be as simple as listening to our guts, our own personal moral codes. When you heard about the data controversies above, did you feel guilt, pride, outrage, or gratitude? You’re tapping into ethics. It’s about right and wrong behavior, considering the circumstances. This applies on a personal level (your personal code), when you evaluate other people (their reputations), or when you critique companies and government entities, or entire cultures.
Entire civilizations are built around a shared sense of ethics—not identical values, but overlapping and coordinated values more-or-less voluntarily acted on by a majority of individuals. Which requires education, typically through families and schools and peers and communities. Which requires conversations—conversations we’re just starting to have concerning data, despite its rapid rise to importance.
I think the most powerful touchpoint for ethics in the modern world is the idea of reputation or brand. Internally, you ask: am I the sort of person who is known (or wants to be known) for doing X or Y? That’s the reputation you aspire to, the essence of your sense of ethics. Outside of yourself, you ask: is this the sort of person, or company, who I can expect to do X or Y—based on their past words or behavior? That’s their reputation, or in the case of a company, their “brand”, the quality of decisions you expect them to make, the actions you expect them to take.
But of course, reputations and brands aren’t always consistent, fair, or deserved. Some words and actions speak louder than others. We don’t always recognize why they have the weight we give them. For instance, you might be more willing to forgive someone you know well—or yourself—for the same lapse in judgment that you would be unwilling to forgive a stranger for. But regardless, our judgments, including our willingness to excuse mistakes, are based on our ethics. And the fact that we might hold businesses to different standards than we hold individuals, or governments, is part of ethics.
The Law and Ethics
Moving beyond personal codes, reputations, and brands, we come to laws and the legal system, which together represent a collective (but not necessarily universally held) system of values. Law is often equated with ethics, but it’s better treated as a distinct and equally important consideration alongside personal codes and reputation or brand.
Even if the law doesn’t prohibit something, when should I choose not to do it because of my personal code and reputation? And do I expect businesses to do more than follow the letter of the law? Or perhaps I feel the law should prohibit something it doesn’t yet address. Which gives rise to the question: does the risk of unethical behavior by individuals, businesses, or governments rise to the level that a law should be put in place to reduce that risk?
Parallel to laws, but with less force, are the codes of ethics of self-regulating entities like professional associations for doctors and lawyers, which often have the power to terminate their members’ right to work in that profession if they violate written ethics standards.
Ethics and You
How does an evolving ethical perspective play itself out in our society? Perhaps the hottest ethics topic in the US right now—although not directly related to data and AI—is gun violence and rights. It certainly makes a good example of ethics in action. Many people have strong feelings about this issue, and many have been speaking out to advocate for their points of view. Some businesses and other organizations, particularly those facing day-to-day safety challenges or involved in gun distribution, are taking public stands on the issue, thereby incorporating new ethical positions into their brands. The policies and public statements of these organizations will embolden both individuals and other organizations to take similar or opposing stands, and their positions will make their brands more attractive to some customers, less attractive to others. And finally, at the government level, rules and laws are being proposed, debated, and enacted in an effort to put the values under discussion into practice.
If you haven’t already, it’s time for you to join the discussion about data and AI ethics. Drag your friends, family, and organizations into the debate, too, kicking and screaming if necessary. And sure, whenever you have the opportunity, listen to ethicists and academics and absorb their informed points of view. But you and your friends, family, and employer are now and forever directly responsible for making these important decisions on a day-to-day basis. With or without expert permission, you need to be ready.
It’s time to start listening to and telling stories about ethics and having these conversations at work and at home.