Google and privacy vs security
I read such different accounts of Google as to puzzle me exceedingly1.
On the one hand, they are the evil empire who know everything about everybody, who sell your closest secrets to the highest bidder, all while trying to convince us it’s for our benefit, as we’ll get ‘better’ advertisements. Oh, and a better world.
On the other, they are the *most* secure place to store your online data, the least likely to be hacked, and the most protective of individual data. Even an entity like Wikileaks seems confused by this dichotomy. Wikileaker Jacob Appelbaum praised Google for their (ultimately failed) efforts to keep his data out of government hands.
Meanwhile, in a profile in The Monthly magazine, Julian Assange doesn’t hold back on his total mistrust of the Google machine:
“Google pretends it isn’t a company,” says Assange. “The world’s biggest and most dynamic media conglomerate portrays itself as playful and humane. But Google is not what it seems. It’s a deeply political operation. We must pay attention to how it operates, and prepare to defend ourselves against its seductive powers of surveillance and control.”
I’ve long resented their ubiquity, and the ease with which we seem to slip into giving away just about anything in the name of convenience and zero cost. I stopped using Google as much as possible, even going as far as to leave my Windows machine running IE & Bing2. Maps is the holdout, so I use that anonymously despite the slight inconvenience.
And it’s fine. Using the internet is pretty much exactly the same as when Google was everywhere. I guess maybe I don’t know what I’m missing out on — Google Now is apparently awesome? The new Gmail inbox is meant to be pretty tops? But it’s interesting how easy it is to barely use Google services.
However, just when you think you’ve escaped, they go and introduce something like Photos. Which everyone raves about, and which seems a genuine upgrade on just about every other photo service around. Almost foolproof automatic categorisation, super smart search, easy galleries. All we have to do is trust them with our photos.
Which leads us back to square one: why would we trust Google? Why do so many people trust very private data to a profit machine which clearly doesn’t have their best interests at heart? The usual answer is because it’s free. Because it’s convenient. Because it ‘just works’.
I’ve slowly come to realise there may be another reason, albeit one that we don’t think much about: Google will fight tooth and nail to keep your *identifiable* data out of everyone’s hands. Advertisers, governments, competitors, you name it. They really, really don’t want anyone getting access, because that data is where the entirety of their value lies.
Ben Thompson often makes this argument on Twitter and his Stratechery blog, though it’s hard to link to direct examples as it’s a paid service. But the extract in this tweet is a good summary:
Google and Facebook are highly motivated to *protect* user information. In fact, should Google or Facebook decide to sell your data, the value of each company would fall through the floor! Their competitive advantage in advertising is that they have data on customers that no one else has.
Dustin Curtis makes a similar argument:
Though Google has all of my data, it is still private. Google does not sell access to my data; it sells access to my attention. Advertisers do not get my information from Google. Once Google loses or starts giving away the enormous trove of user data it has collected, it loses all the value it has with advertisers. So Google becomes the most secure place for your data to be, purely due to its own self-interest.
It’s the most secure place, but is it the most private? This is where it gets confusing. It’s certainly private from the world at large, but it’s most definitely not private from Google itself. This is a point Google tends to ignore when questioned. They will, perhaps rightly, claim that your data is as secure and private as it can be online. But what of the thousands of Google employees, or, as Assange points out, Google’s “deep entanglement and collaboration with the American government”?
A recent Guardian feature about a visit to Google HQ, ostensibly arranged to quell European doubts over Google’s intent, is predictably a “charm offensive”. Any chance to investigate the fascinating intersection of ‘save the world’ and ‘make truckloads of money’ is quickly shut down3. Engineers “speak of an effective firewall between the science and the selling”, while UI guru Ben Gomes handily avoids answering whether the drive for profit might influence things like search:
“Not in my experience. Larry and Sergey set that division up very carefully.” It’s clear that the Engineers (saving the world) and the Marketeers (making the money) are largely strangers to each other. Intentionally so, as it leaves the dreamers free to do their own thing, unencumbered by the dirty business of advertising. The business side of Google is certainly not exposed to the journalists. The message is pretty clear: we’re not meant to think about that stuff — just focus on the self-driving cars and Google Now telling you your flight is running late.
Most interesting is how often Google employees seem to be almost begging people to like and trust them. From The Guardian article:
“My comfort comes from the fact that in Europe people love using our products. We work hard. I wish people could meet our people here.”
Google wants to suggest altruism as a driving principle, global problem-solving as its gift. Among the engineers, that is almost an article of faith.
The concept of faith is also raised by Google VP Bradley Horowitz whilst selling the trustworthiness of Photos:
People are very comfortable entrusting their data to Google. If you provide the right user value, with no apologies or agendas, I am sure we can win the faith of users.
Faith, or trust, is mentioned by Curtis too:
So as long as I trust Google’s employees, the only two potential breaches of my privacy are from the government or from a hacker.
It’s amazing how a company, a business, can make appeals to faith and trust. If a telephone company or utility or government agency made the same appeals, we’d be deeply suspicious and withhold as much as we could. Yet if Google does the same, we fall willingly into their arms.
Google’s own examples of how Photos might be data-mined — cherry-picked to sound as innocuous and genuinely helpful as possible — are unsettling. Horowitz:
The information gleaned from analyzing these photos does not travel outside of this product — not today. But if I thought we could return immense value to the users based on this data I’m sure we would consider doing that. For instance, if it were possible for Google Photos to figure out that I have a Tesla, and Tesla wanted to alert me to a recall, that would be a service that we would consider offering, with appropriate controls and disclosure to the user.
Sounds great, right? But “not today” is the catch, and there always seems to be one. Once we’re dependent on a service like Photos, what happens next, and what say do we have in it? I don’t want an ad for superhero costumes to appear next to a photo of my nieces after a dress-up birthday party. When asked if the face recognition knows who the person actually is, Horowitz replies:
Not in this incarnation of the product. If you look at the faces we have here, Google has no idea who these people are; it’s actually face clustering, not face recognition, so I can click on my stepdaughter Charlotte and see other pictures of her. But it doesn’t know Charlotte’s identity (and can’t make use of any of her own personal information).
‘Not in this incarnation of the product’. The advertiser doesn’t know who I am but Google certainly does. And why, other than misplaced faith, would I trust them?