You, your task force, and eventually everyone in your organization need to get up to speed on the fundamentals of data and AI ethics so you can start using a common vocabulary to have productive discussions and make choices together.
Guidelines. Start by finding one or more technology ethics guidelines that fit your organization so that you can circulate and discuss them, and perhaps even adopt one if you don’t think you need to develop your own. There are many; I recently read a survey comparing 84 such guidelines. The task before you is to find a set that is accessible to the people in your organization, in the sense that it is specific enough to your industry and geared to the level at which most of your people are working (i.e., not too technical for non-technical people).
Find guidelines that resonate. They don’t have to be the last word for all time, just a good starting point. Read them, share them, discuss them. Kathy Baxter has a short list of guidelines (and other tools) here. Of those, I thought the guidelines created by Integrate.AI were very accessible. Another I like is this one by Susan Etlinger of Altimeter. Yet another, created by a UK actuaries organization, I particularly liked because it is both concise and non-technical.
Books with Case Studies. At the same time, read some accessible books about how technology can unwittingly lead to ethical challenges. There are many of these, but just off the top of my head I’m going to call out Weapons of Math Destruction by Cathy O’Neil and Hello World by Hannah Fry, both full of well-written, evocative examples at a level suitable for lay people (non-technology/non-lawyer types).
Papers. Finally, read some specialty pieces that address the specific ways you use technology. For example, bias is an almost universal concern but is very difficult to eradicate. Get educated by reading papers and blog posts about narrower issues. There are obviously too many of these to summarize here, but as an example I want to call out Harini Suresh’s blog post summarizing her academic paper about sources of bias in AI. If you have specific issues you want to address, let me know and I’ll see if I can put my finger on something I’ve already come across that might help you.
From the four-part series How to Start Your Organization’s Data and AI Ethics Program
Next – Part 3: Create a Map of Potential Data and AI Ethics Hot Spots
If the process described in this series is challenging for your organization, reach out! We’ll set up a call to talk about your organization’s goals and how I can help.