What Do We Not Want Algorithms to Do for Us?
Transparency is the great virtue of our digital age, and the algorithm is often heralded as its handmaiden. Thanks to increasingly sophisticated algorithms, we can discern patterns and predict outcomes in everything from financial markets to Netflix preferences. Algorithms can write reports and nonfiction news stories; compose music; and offer medical diagnoses. Elegant in their simplicity, they bring into the open things that have long been hidden. They do extraordinary things.
What they can’t yet do is set limits on their own power; that remains a task for people. And it’s one we are largely failing to perform. A recent story in the Wall Street Journal about Castlight Health triggered concern when it was revealed that employers were using the service to mine employee health data in order to predict how many of their workers might develop specific health conditions, including pregnancy.
Castlight and other third-party data mining companies offer employers access to a great deal of information about their employees, although employers see the data only in aggregate form, and health alerts go directly to the employee, not the employer. “To determine which employees might soon get pregnant, Castlight recently launched a new product that scans insurance claims to find women who have stopped filling birth-control prescriptions,” the article noted, “as well as women who have made fertility-related searches on Castlight’s health app.” Given that companies such as Walmart and Time Warner outsource employee health data mining to Castlight, this represents a large number of people who might have browsed for prenatal vitamins online, or stopped taking birth control pills, only to trip Castlight’s algorithmic pregnancy detector.
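To make the design choices concrete, here is a deliberately crude, hypothetical sketch of the kind of rule-based flag the Journal describes. The field names, the 90-day lapse window, and the list of “fertility-related” search terms are all invented for illustration; this is not Castlight’s actual product, data model, or code.

```python
# Hypothetical illustration only: a toy rule-based "pregnancy prediction" flag
# of the sort described in the article, not Castlight's actual system.
from __future__ import annotations

from dataclasses import dataclass, field
from datetime import date, timedelta

@dataclass
class Employee:
    employee_id: str
    last_birth_control_fill: date | None = None          # from insurance claims
    app_searches: list[str] = field(default_factory=list)  # from the health app

# Assumed, made-up list of "fertility-related" search terms.
FERTILITY_TERMS = {"fertility", "prenatal vitamins", "ovulation"}

def flag_possible_pregnancy(e: Employee, today: date) -> bool:
    """Flag an employee if a birth-control prescription appears to have lapsed
    or if her recent app searches include fertility-related terms."""
    lapsed = (
        e.last_birth_control_fill is not None
        and today - e.last_birth_control_fill > timedelta(days=90)  # assumed threshold
    )
    searched = any(
        any(term in search.lower() for term in FERTILITY_TERMS)
        for search in e.app_searches
    )
    return lapsed or searched

# Example: a lapsed prescription plus one search is enough to trip the flag.
example = Employee(
    employee_id="anon-001",
    last_birth_control_fill=date(2015, 10, 1),
    app_searches=["prenatal vitamins reviews"],
)
print(flag_possible_pregnancy(example, today=date(2016, 2, 26)))  # True
```

Even in this toy version, the interesting questions are not technical: someone had to decide that a lapsed prescription counts as a signal, how long a lapse is meaningful, and which searches count as “fertility-related.”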
The gathering of this kind of granular information is evidently what Castlight means when it says on its website, “We’ve learned that it’s possible to engage employees in their healthcare,” noting that “transparency is an essential ingredient.” But the transparency in this case is more like a two-way mirror than a clear window: the information flows between Castlight and its employer clients, with the stated purpose of defraying health care costs, not spying on employees. Yes, the data is aggregated and anonymized, and employees can opt out of using the service, but the arrangement still raises larger questions about how much information we want employers to have about their employees and about the design of the algorithms that analyze that information. What of the employee who, finding herself pregnant, decides to terminate the pregnancy? How does Castlight’s algorithm translate that data point to her employer? Why should it?
Of course, periodic outcries about privacy violations are by now mundane; when “smart appliances” debuted, we spent a few anxious moments wondering whether our toasters might spy on us, then quickly ordered Nest thermostats that, in fact, could.
But the Castlight case creates a unique opportunity to ask: What do we not want algorithms to do for us? So far the debate over the use of algorithms has only hinted at what, for lack of a better term, might be called the creepiness factor. Robotics has the Uncanny Valley. Bioethicists argue about the “wisdom of repugnance,” a feeling of disgust some people experience when contemplating procedures such as genetic engineering. But when it comes to algorithms, we still focus largely on their effectiveness rather than the possibility that they might create something deeply unpleasant.
The unease triggered by stories such as Castlight’s pregnancy prediction is telling us something—not that we should abandon algorithms altogether but that we should think more clearly about the balance we want to strike between total transparency and the secrets we want to keep to ourselves. We need a way of assessing algorithms apart from their technical prowess, a meaningful way to think through the likely uses and abuses of algorithms. We need to ask a heretical question, at least by Silicon Valley standards: Is there such a thing as knowing too much?
Among the engineers at Castlight, who decided which people the algorithm would target? How well did the designers of the algorithm question their assumptions about what should and should not be measured? We need more transparency about how these ethical decisions are being made (or whether the questions are even being asked). And we need algorithm-auditing procedures whose starting point is an acknowledgment that, because algorithms are simplifications (albeit often elegant and effective ones), they will always miss something.
We also need to be alert to the dangers of Big Data hubris. As the disastrous experiment known as Google Flu Trends revealed, even well-designed algorithms crawling across masses of data can fail to give us useful guidance. (One critic of Flu Trends noted that the program was “remarkably opaque in terms of method and data.”) This is especially important in the medical arena, where people often have legitimate reasons to keep health conditions private.
Our use of algorithms is revelatory, but revelations don’t always lead to improvements. In Secrets: On the Ethics of Concealment and Revelation, philosopher Sissela Bok writes:
Control over secrecy and openness gives power: it influences what others know, and thus what they choose to do. … With no capacity for keeping secrets and for choosing when to reveal them, human beings would lose their sense of identity and every shred of autonomy.
Today the digital information we passively shed while going about our daily lives provides the fuel necessary to power a staggering array of algorithms. But by blithely allowing businesses and governments to capture it, and telling ourselves that this is the price we pay for innovation and convenience, do we risk unwittingly telling our own deepest secrets?
This article is part of the algorithm installment of Futurography, a series in which Future Tense introduces readers to the technologies that will define tomorrow. Each month from January through June 2016, we’ll choose a new technology and break it down. Read more from Futurography on algorithms:
· “What’s the Deal With Algorithms?”
· “Your Algorithms Cheat Sheet”
· “The Ethical Data Scientist”
· “How to Teach Yourself About Algorithms”
· “How to Hold Governments Accountable for the Algorithms They Use”
· “How Algorithms Are Changing the Way We Argue“
· “Algorithms Can Make Good Co-Workers”
· “Algorithms Aren’t Like Spock—They’re Like Capt. Kirk”
Future Tense is a collaboration among Arizona State University, New America, and Slate.
http://www.slate.com/blogs/future_tense/2016/02/26/algorithms_can_be_invasive_and_creepy_what_do_we_not_want_them_to_do.html