Category: security

Social media ‘travel mode’

Maciej Cegłowski argues that since phones are now subject to the same invasive searches as suitcases, social media companies should offer a ‘travel mode’ that severely limits access and data while activated:

Both Facebook and Google make lofty claims about user safety, but they’ve done little to show they take the darkening political climate around the world seriously. A ‘trip mode’ would be a chance for them to demonstrate their commitment to user safety beyond press releases and anodyne letters of support.

I’m not sure it would really help, though. He suggests it would be irrevocable once set (to stop border agents from simply asking you to turn it back on), but that seems to create a whole other set of problems: what if you have to cancel your trip at short notice with travel mode already enabled?
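As a thought experiment, the mechanic Cegłowski describes might look something like the sketch below (entirely hypothetical; no such feature exists in any app): you declare the trip window up front, and the mode refuses to deactivate before the declared end date, even for the account owner.

```python
from datetime import datetime


class TravelMode:
    """Hypothetical sketch of an irrevocable 'travel mode': once enabled
    for a declared trip window, it cannot be switched off early, even by
    the account owner - so a border agent can't just ask you to disable it."""

    def __init__(self):
        self.active_until = None  # end of the declared trip, or None

    def enable(self, end_date: datetime) -> None:
        # The trip's end date must be declared up front, before travel.
        self.active_until = end_date

    def is_active(self, now: datetime) -> bool:
        return self.active_until is not None and now < self.active_until

    def disable(self, now: datetime) -> bool:
        # Refuses to deactivate before the declared end of the trip.
        # There is deliberately no override path.
        if self.is_active(now):
            return False
        self.active_until = None
        return True
```

The cancellation problem falls straight out of the design: if your trip is called off on day two, you’re still locked out until the declared end date, because any early-exit path is exactly what a border agent would exploit.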

John Gruber is right, we should be fighting the entire premise:

“Travel mode” would be better than nothing, but no technical solution is a substitution for proper civil liberties. Our phones and devices should be protected against unwarranted search and seizure, period.

It will be fascinating to see what impact the election has had on US tourism at the end of all this. A planned exploration of some US National Parks is definitely off the agenda for me now, and I’m sure for many others.

Phone security at border checks

A flurry of articles recently about how secure your phone is at border checks, particularly in the US (for obvious reasons).

First, a US-born NASA scientist was detained while returning home and told to unlock his phone - which he eventually did, unsure of what his rights were.

Turns out no one is really sure what rights you have - specifically whether you are obliged to unlock a locked device.

One thing we can say is that you have far fewer options to say ‘no’ if you’re not a citizen of the country you’re entering.

It’s staggering to think that courts and laws now allow a border agent to demand you unlock a personal device, without a warrant or any reasonable suspicion, and that all the data on that device is fair game for them to copy and do with as they will. How did we get here?

Accordingly, here’s a very thorough guide to securing your data at border crossings. Some of it seems over the top - mailing yourself a SIM - but given the steady erosion of privacy rights it’s probably all exactly right.

Securing Windows

Twitter security celebrity1 SwiftOnSecurity maintains Decent Security, providing some nice detail on how to securely install your Windows machines, and then how to maintain and recover them (particularly useful when your relatives call wondering why their browser is exploding):

This is a guide to bi-yearly maintenance for Windows 7 and higher. Although this isn’t a computer disinfection guide, it will remove many viruses and repair their damage.

Some of the info is incomplete, but it’s an excellent starting point.

  1. Talk about niche 

Moral cryptography

Phillip Rogaway, professor of computer science at the University of California, interviewed in The Atlantic about the failure of the cryptographic community to address moral implications of universal surveillance:

Waddell: What led you to understand the political implications of your own work?

Rogaway: I myself had been thinking increasingly in these terms when the Snowden revelations came out. Those revelations made me confront more directly our failings as a community to have done anything effectual about stemming this transition of the Internet to this amazing tool for surveilling entire populations.

Google and privacy vs security

I read such different accounts of Google as to puzzle me exceedingly1.

On the one hand, they are the evil empire who know everything about everybody, who sell your closest secrets to the highest bidder, all while trying to convince us it’s for our benefit, as we’ll get ‘better’ advertisements. Oh, and a better world.

On the other, they are the most secure place to store your online data, the least likely to be hacked, and the most protective of individual data.

Even an entity like Wikileaks seems confused by this dichotomy. Wikileaker Jacob Appelbaum praised Google for their (ultimately failed) efforts to keep his data out of Government hands. Meanwhile, in a profile in The Monthly magazine, Julian Assange doesn’t hold back on his total mistrust of the Google machine:

“Google pretends it isn’t a company,” says Assange. “The world’s biggest and most dynamic media conglomerate portrays itself as playful and humane. But Google is not what it seems. It’s a deeply political operation. We must pay attention to how it operates, and prepare to defend ourselves against its seductive powers of surveillance and control.”

I’ve long resented their ubiquity, and the ease with which we seem to slip into giving away just about anything in the name of convenience and zero cost. I stopped using Google as much as possible, even going as far as to leave my Windows machine running IE & Bing2. Maps is the holdout, so I use that anonymously despite the slight inconvenience.

And it’s fine. Using the internet is pretty much exactly the same as when Google was everywhere. I guess maybe I don’t know what I’m missing out on — Google Now is apparently awesome? The new Gmail inbox is meant to be pretty tops? But it’s interesting how easy it is to barely use Google services.

However just when you think you’ve escaped, they go and introduce something like Photos. Which everyone raves about and seems a genuine upgrade to just about every other photo service around. Almost foolproof automatic categorisation, super smart search, easy galleries.

All we have to do is trust them with our photos. Which leads us back to square one: why would we trust Google? Why do so many people trust very private data to a profit machine which clearly doesn’t have their best interests at heart?

The usual answer is because it’s free. Because it’s convenient. Because it ‘just works’.

I’ve slowly come to realise there may be another reason, albeit one that we don’t think much about: Google can fight tooth and nail to keep your identifiable data out of everyone’s hands. Advertisers, government, competitors, you name it. They really really don’t want anyone getting access, because that data is where the entirety of their value lies.

Ben Thompson often makes this argument on Twitter and his Stratechery blog, though it’s hard to link to direct examples as it’s a paid service. But the extract in this tweet is a good summary:

Google and Facebook are highly motivated to protect user information. In fact, should Google or Facebook decide to sell your data, the value of each company would fall through the floor! Their competitive advantage in advertising is that they have data on customers that no one else has.

Dustin Curtis makes a similar argument:

Though Google has all of my data, it is still private. Google does not sell access to my data; it sells access to my attention. Advertisers do not get my information from Google.

Once Google lose or start giving away the enormous trove of user data they have collected, they lose all their value to advertisers. So Google becomes the most secure place for your data purely through their own self-interest.

It’s the most secure place, but is it the most private? This is where it gets confusing. It’s certainly private from the world at large, but it’s most definitely not private from Google itself. This seems to be a point that is often ignored by Google when questioned.

They will, perhaps rightly, claim that your data is as secure and private as it can be online. But what of the thousands of Google employees with access to it, or, as Assange points out, Google’s “deep entanglement and collaboration with the American government”?

A recent Guardian feature about a visit to Google HQ, ostensibly arranged to quell European doubts over Google’s intent, is predictably a “charm offensive”. Any chance to investigate the fascinating intersection of ‘save the world’ and ‘make truckloads of money’ is quickly shut down3.

Engineers “speak of an effective firewall between the science and the selling”, while UI guru Ben Gomes handily avoids answering the question about whether the drive for profit might influence things like search:

“Not in my experience. Larry and Sergey set that division up very carefully.”

It’s pretty clear that the Engineers (saving the world) and the Marketeers (making the money) are pretty much unknown to each other. Intentionally, as it leaves the dreamers free to do their own thing, unencumbered by the dirty business of advertising. The business side of Google is certainly not exposed to the journalists.

The message is pretty clear: we’re not meant to think about that stuff — just focus on the self driving cars and Google Now telling you your flight is running late.

Most interesting is how often Google employees seem to be almost begging people to like and trust them. From The Guardian article:

“My comfort comes from the fact that in Europe people love using our products. We work hard. I wish people could meet our people here.”

Google wants to suggest altruism as a driving principle, global problem-solving as its gift. Among the engineers, that is almost an article of faith.

The concept of faith is also raised by Google VP Bradley Horowitz whilst selling the trustworthiness of Photos:

People are very comfortable entrusting their data to Google. If you provide the right user value, with no apologies or agendas, I am sure we can win the faith of users.

Faith, or trust, is mentioned by Curtis too:

So as long as I trust Google’s employees, the only two potential breaches of my privacy are from the government or from a hacker.

It’s amazing that a company, a business, can make appeals to faith and trust. If a telephony company or utility or government agency made the same appeals, we’d be deeply suspicious and withhold as much as we could. Yet when Google does the same, we fall willingly into their arms.

Google’s own examples of how Photos might be data mined — cherry picked to sound as innocuous and genuinely helpful as possible — are unsettling. Horowitz:

The information gleaned from analyzing these photos does not travel outside of this product — not today. But if I thought we could return immense value to the users based on this data I’m sure we would consider doing that. For instance, if it were possible for Google Photos to figure out that I have a Tesla, and Tesla wanted to alert me to a recall, that would be a service that we would consider offering, with appropriate controls and disclosure to the user.

Sounds great, right? But “not today” is the catch, and there always seems to be one. Once we’re dependent on a service like Photos, what happens next, and what say do we have in it? I don’t want an ad for superhero costumes to appear next to a photo of my nieces after a dress up birthday party.

When asked if the face recognition knows who the person actually is, Horowitz replies:

Not in this incarnation of the product. If you look at the faces we have here Google has no idea who these people are, it’s actually face clustering, not face-recognition, so I can click on my stepdaughter Charlotte and see other pictures of her. But it doesn’t know Charlotte’s identity [and can’t make use of any of her own personal information].

‘Not in this incarnation of the product’.

The advertiser doesn’t know who I am but Google certainly does. And why, other than misplaced faith, would I trust them?

  1. +1 for your Google+ account if you know the source of that phrasing. 

  2. Not nearly as dire a situation as you’d think. Though Bing is pretty clunky. Maybe it’s DuckDuckGo time. 

  3. And whatever you do, don’t mention ‘tax’. 

Google law

Interesting stuff from Tor and Wikileaks contributor Jacob Appelbaum on Google going into battle for him over the US government’s request for backdoor access to his Gmail.

Appelbaum had no idea this was happening, of course, but Google nonetheless threw their full legal weight behind fighting the DOJ request.

+1 for Google.