Deep learning systems, the most headline-grabbing examples of the AI revolution—beating the best human chess and poker players, driving cars, and so on—impress us in part because they are inscrutable. Not even the designers of these systems know exactly why they make the decisions they make. We only know that they are capable of being highly accurate…on average.
Meanwhile, software companies are developing complex systems for business and government that rely on “secret sauce” proprietary data and AI models. To protect their intellectual property and profitability, the developers of these systems typically decline to reveal exactly how they work. This creates a tradeoff between the profit motive, which enables rapid innovation (something government in particular isn’t known for), and transparency, which enables the detection and correction of mistakes and biases. And mistakes do occur…on average.
This lack of transparency in deep learning and proprietary AI models has drawn criticism from a number of sources. Organizations like AI Now and ProPublica are surfacing circumstances where a lack of transparency leads to abuses such as discriminatory bias. The EU has instituted regulations (namely the GDPR) that guarantee its citizens the right to appeal to a human being when AI-based decisions are made about them. And, last but not least, there is growing awareness that AI systems—including autonomous driving and health care systems—can be invisibly manipulated by those with a motive such as fraud or simple mischief.
Junction is designed to help small and medium-sized businesses plan, execute, and manage their social media initiatives effectively. Unlike many social media management solutions, which offer publishing to social media accounts, monitoring conversations, and analytics as independent or siloed components, Junction tightly integrates all three in a single dashboard.
I’ve noticed that a major stumbling block with many social media management solutions is that the feedback they offer about the success (or lack thereof) of social media efforts can be difficult to act on. Even when publishing, monitoring, and analytics are available under the same login, the gap between action and feedback can be wide enough to present a major hurdle to social media marketers. Meanwhile, the complexity of analytics tools can leave marketers, and the people they are accountable to, bewildered.
All three contain countless nuggets of recent scientific insight into behavioral economics—why people and markets behave as they do—as explained by three very cogent thinkers. All three focus on defining the abilities, strengths, and weaknesses of different brain areas; how human impulses mesh and are sorted and acted on; predictable biases of both “rational” and “emotional” sorts; and what we can do to avoid—and manipulate—biases and errors. Interestingly, all three authors acknowledge the increasing difficulty academics have in drawing sharp lines between “rational” and “emotional” behavior given contemporary knowledge of brain function, yet all three attempt to distinguish “rational” from “emotional” decisions nonetheless—with varying degrees of success.
The book I enjoyed most was Jonah Lehrer’s, which I could oversimplify by describing as “neuroscience discovers B.F. Skinner” because of its focus on learned behavior. But perhaps that’s because Lehrer’s approach best fit my personal preconceptions about behavior—and because B.F. Skinner was still working in the psych department where I earned my undergraduate degree in psychology.