Google’s New Feature Wants Us to Make Time for Ourselves. It May Do the Opposite.
On Wednesday, Google announced an addition to its calendar service. The new feature, called Goals, uses machine learning algorithms to help you find time for activities you’ve been meaning to try but never get around to attempting. Goals invites you to describe commitments you’d like to make: The charming introductory video proposes possibilities such as “do yoga three times a week” or “practice Spanish.” It then tries to find holes in your schedule and prods you to actually put your intentions into practice.
Goals is the descendant of Timeful, an app that Google purchased last May. As Will Oremus explained shortly after the sale, Timeful aspired “to do for each user what a first-rate executive assistant would do for a CEO—anticipate needs, prioritize objectives, optimize your time.” By working with our individual desires, the app aimed to help us focus more on ourselves, an ideal that appears to be at the heart of Goals as well.
In a New York Times article on Goals, MIT professor Sherry Turkle is quoted comparing it favorably with the A.I. assistants that companies such as Facebook, Amazon, and even Google itself have developed. Programs like Siri and Cortana merely “pretend to know us,” in Turkle’s phrase, hiding their alienness behind a patina of humanlike banter. By contrast, Turkle suggests that Goals might actually remain within human bounds by focusing on our own needs and helping us meet them.
It’s true that Goals doesn’t feign a personality. Nevertheless, it’s precisely the sort of feature that technology companies are looking to integrate into their virtual helpmates. And like those other increasingly common forms of secretarial software, Goals may actually place structural limitations on our lives, even as it claims to set us free.
That matters in part because Goals looks as if it may re-create one of the central problems of contemporary consumer artificial intelligence. As Oremus shows in his comprehensive exploration of virtual assistants, the more we use such tools, the more we accommodate ourselves to their way of doing things. To work with them, we have to address them as they want to be addressed—learning and working within their limitations—such that their command words become our own cues: You can, Oremus points out, ask one of Amazon’s Echo devices what a kinkajou is, but you can’t tell it where to find its information. As is the case in other elements of our digital lives, algorithms increasingly direct us instead of the other way around.
In this respect, it’s not so much Goals’ bossiness that matters as the restricted range of things that it’s allowed to be bossy about. While its interface allows for a great deal of customization, its range still remains finite, shoehorning our roomy, loafer-like aspirations into the tightly laced combat boots of machine learning’s mathematical rigor. Goals implicitly tells us what constitutes a practicable goal, and in so doing may limit the features that make us human. You can practice a new skill—and even specify what skill you’d like to practice—but you can’t work on becoming a better listener or a more generous friend.
This is only a problem, of course, if we let services like Goals shape everything about our lives. But that’s exactly what Goals wants to do, shackling us even more fully to our busy schedules in the name of making us better. “You can even take a few minutes for those unexpected surprises,” the introductory video’s narrator explains, alluding to the way Goals rearranges itself around sudden upsets in your schedule. When it does so, however, that schedule—and all the structure it implies—still comes first. Algorithms like those powering Google Calendar accept uncertainty, even as they aspire to help us live in spite of it.
There’s a sweet, silly moment in that video when its protagonist, Brad, comes across a sloth on the street, offers it his half-eaten banana, and then, apparently, brings this lost wild animal into the office, where it chills at the water cooler. I too would very much like to hang out with a sloth, and, like Brad, would happily put off exercising to do so. But this is exactly the sort of thing that happens when we don’t make plans. If we want to embrace serendipity—a goal that we can all get behind, I suspect—we need more free time, not less of it. Is leisure still leisurely if we mechanically optimize our relationship to it?
http://www.slate.com/blogs/future_tense/2016/04/13/google_calendar_s_goals_feature_uses_machine_learning_to_help_users_try.html