The Illusion of Free

Our data is out of our control. We might (wisely or unwisely) choose to publicly share our statuses, personal information, media and locations, or we might choose to only share this data with our friends. But it’s just an illusion of choice—however we share, we’re exposing ourselves to a wide audience. We have so much more to worry about than future employers seeing photos of us when we’ve had too much to drink.

Corporations hold a lot of information about us. They store the stuff we share on their sites and apps, and provide us with data storage for our emails, files, and much more. When we or our friends share stuff on their services, either publicly or privately, clever algorithms can derive a lot of detailed knowledge from a small amount of information. Did you know that you’re pregnant? Did you know that you’re not considered intelligent? Did you know that your relationship is about to end? The algorithms know us better than our families, and need only ten of our Facebook Likes before they know us better than an average work colleague.
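
The mechanics are less magical than they sound. Here’s a toy sketch, with entirely invented data and a made-up trait, of how a trait can be predicted from a binary user-by-Likes matrix using an off-the-shelf classifier. This is not any company’s actual model; it just shows the principle.

```python
# Toy sketch: predicting a personal trait from Facebook-style Likes.
# The data and the trait are invented for illustration; this is not
# any company's actual model.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(42)
n_users, n_pages = 1_000, 50

# Each row is a user; column j answers "did this user Like page j?"
likes = rng.integers(0, 2, size=(n_users, n_pages))

# Pretend the trait correlates with a handful of revealing pages.
revealing = [3, 7, 19]
trait = (likes[:, revealing].sum(axis=1) >= 2).astype(int)

model = LogisticRegression(max_iter=1_000)
model.fit(likes[:800], trait[:800])               # learn from most users

accuracy = model.score(likes[800:], trait[800:])  # test on the rest
print(f"Held-out accuracy: {accuracy:.0%}")
```

The unsettling part is how unremarkable this is: given enough users, a handful of seemingly innocuous binary signals per person is all a classifier needs.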

A combination of analytics and big data can be used in a huge variety of ways. Many sites use our data just to ensure a web page is in the language we speak. Recommendation engines are used by companies like Netflix to deliver fantastic personalized experiences. Google creates profiles of us to understand what makes us tick and to sell us the right products. 23andMe analyzes our DNA for genetic risk factors and sells the data to pharmaceutical companies. Ecommerce sites like Amazon know how to appeal to us as individuals, and whether we’re more persuaded by social proof, when our friends also buy a product, or by authority, when an expert recommends it. Facebook can predict the likelihood that we drink alcohol or take drugs, or determine whether we’re physically and mentally healthy. It also experiments on us and influences our emotions. What can be done with all this data varies wildly, from the incredibly convenient and useful to the downright terrifying.
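
For a sense of what a recommendation engine does under the hood, here’s a minimal item-similarity sketch, again with invented ratings. Production systems like Netflix’s are vastly more elaborate, but the core idea, people who behaved like you also liked X, is this simple.

```python
# Minimal item-based collaborative-filtering sketch with invented data.
# Real recommendation engines are far more elaborate; the principle holds.
import numpy as np

# Rows are users, columns are items; values are ratings (0 = unrated).
ratings = np.array([
    [5, 4, 0, 0],   # user 0: loves items 0 and 1, hasn't tried 2 or 3
    [4, 5, 5, 1],   # user 1: similar taste, and loved item 2
    [0, 1, 1, 5],
    [1, 0, 0, 4],
], dtype=float)

# Cosine similarity between every pair of item columns.
norms = np.linalg.norm(ratings, axis=0)
similarity = (ratings.T @ ratings) / np.outer(norms, norms)

def recommend(user: int) -> int:
    """Return the unrated item most similar to what the user rated highly."""
    scores = similarity @ ratings[user]
    scores[ratings[user] > 0] = -np.inf   # never re-recommend rated items
    return int(np.argmax(scores))

print(recommend(0))  # 2: user 1 shares user 0's taste and loved item 2
```

Swap “items” for products, articles, or people, and the same arithmetic quietly shapes what each of us is shown.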

This data has a huge value to people who may not have your best interests at heart. What if this information is sold to your boss? Your insurance company? Your potential partner?

As Tim Cook said, “Some companies are not transparent that the connection of these data points produces five other things that you didn’t know that you gave up. It becomes a gigantic trove of data.” The data is so valuable that cognitive scientists are giddy with excitement at the size of studies they can conduct using Facebook. For neuroscience studies, a sample of twenty white undergraduates used to be considered sufficient to say something general about how brains work. Now Facebook works with scientists on sample sizes of hundreds of thousands to millions. The difference between more traditional scientific studies and Facebook’s studies is that Facebook’s users don’t know that they’re probably taking part in ten “experiments” at any given time. (Of course, you give your consent when you agree to the terms and conditions. But very few people ever read the terms and conditions, or privacy policies. They’re not designed to be read or understood.)
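
As context for how a user ends up in ten experiments at once without noticing: large sites typically assign every user to a bucket in every running experiment with a deterministic hash, so there’s nothing to sign up for and nothing to notice. A hypothetical sketch, with invented experiment names:

```python
# Hypothetical sketch of how large sites silently assign every user to
# many concurrent experiments: deterministic hash-based bucketing.
# The experiment names and bucket split are invented for illustration.
import hashlib

EXPERIMENTS = ["feed_ranking_v2", "emotion_study", "new_signup_flow"]

def bucket(user_id: str, experiment: str, buckets: int = 100) -> int:
    """Deterministically map a user to a bucket for one experiment."""
    digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
    return int(digest, 16) % buckets

def assigned_variants(user_id: str) -> dict[str, str]:
    # Every user lands in *every* running experiment, without opting in.
    return {
        exp: ("treatment" if bucket(user_id, exp) < 50 else "control")
        for exp in EXPERIMENTS
    }

print(assigned_variants("user-12345"))
```

Nothing here asks for consent; assignment is a pure function of your ID, which is precisely why participation is invisible.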

There is the potential for big data to be collected and used for good. Apple’s ResearchKit is an open source framework that makes it easy for researchers and developers to create apps that collect iPhone users’ health data on a huge scale. Apple says they’ve designed ResearchKit with people’s privacy in mind: “You choose what studies you want to join, you are in control of what information you provide to which apps, and you can see the data you’re sharing.”

But the allure of capturing huge, valuable amounts of data may encourage developers to design without ethics. An app may pressure users to quickly sign the consent form when they first open it, without considering the consequences, in the same way we’re encouraged to quickly hit “Agree” when we’re presented with terms and conditions, or the way apps insist they need constant access to our location in order to give us “the best experience.”

The intent of the developers, their bosses, and the corporations as a whole is key. They didn’t just decide to use this data because they could. They can’t afford to provide free services for nothing, and that was never their intention. It’s a lucrative business. The business model of these companies is to exploit our data, to be our corporate surveillers. It’s their good fortune that we share it like, as Zuckerberg said, dumb fucks.

To say that this is a privacy issue is to give it a loaded term. The word “privacy” has been hijacked to suggest that you’re hiding things you’re ashamed of. That’s why Google’s Eric Schmidt said “if you’ve got something to hide, you shouldn’t be doing it in the first place.” (That line is immortalized in the fantastic song, Sergey Says.) But privacy is our right to choose what we do and don’t share. It’s enshrined in the Universal Declaration of Human Rights.

So when we’re deciding which cool new tools and services to use, how are we supposed to make the right decision? Those of us who vaguely understand the technology live in a tech bubble where we value convenience and a good user experience so highly that we’re willing to trade our information, privacy, and future security for them. It’s the same argument I hear again and again from people who choose to use Gmail. But does the tracking and algorithmic analysis of our data really give us a good user experience? We just don’t know enough about what the companies are doing with our data to judge whether it’s a worthwhile risk. What we do know is horrifying enough. And whatever corporations are doing with our data now, who knows how they’ll use it in the future?

And what about people outside the bubble, who aren’t as well informed about the consequences of using services that exploit our data? The everyday consumer chooses a product based on free and fantastic user experiences, without knowing the costs of running such businesses, or the data required to sustain them.

We need to be aware that our choice of communication tools, such as Gmail or Facebook, doesn’t just affect us, but also those who want to communicate with us.

We need tools and services that enable us to own our own data, and give us the option to share it however we like, without conditions attached. I’m not an Apple fangirl, but Tim Cook is at least talking about privacy in the right way:

None of us should accept that the government or a company or anybody should have access to all of our private information. This is a basic human right. We all have a right to privacy. We shouldn’t give it up.

“Apple has a very straightforward business model,” he said. “We make money if you buy one of these [pointing at an iPhone]. That’s our product. You [the consumer] are not our product. We design our products such that we keep a very minimal level of information on our customers.”

But Apple is only one potential alternative to corporate surveillance. Their services may have some security benefits if our data is encrypted and can’t be read by Apple, but our data is still locked into their proprietary system. We need more genuine alternatives.

What can we do?

It’s a big scary issue. And that’s why I think people don’t talk about it. When you don’t know the solution, you don’t want to talk about the problem. We’re so entrenched in using Google’s tools, communicating via Facebook, and benefiting from a multitude of other services that feed on our data that it feels wildly out of our control. When we feel like we’ve lost control, we don’t want to admit it was our mistake. We’re naturally defensive of the choices of our past selves.

The first step is understanding and acknowledging that there’s a problem. There’s a lot of research, articles, and information out there if you want to learn how to regain control.

The second step is questioning the corporations and their motives. Speak up and ask these companies to be transparent about the data they collect, and how they use it. Encourage government oversight and regulation to protect our data. Have the heart to stand up against a model you think is toxic to our privacy and human rights.

The third, and hardest, step is doing something about it. We need to take control of our data, and begin an exodus from the services and tools that don’t respect our human rights. We need to demand, find and fund alternatives where we can be together without being an algorithm’s cash crop. It’s the only way we can prove we care about our data, and create a viable environment for the alternatives to exist.

About the Author

Laura Kalbag

Laura Kalbag is a designer working on Ind.ie. She was freelance for five years and still holds client work dear. She can be found via her personal site, Twitter, and out on long walks with her big fluffy dog.

20 Reader Comments

  1. @Matt Smith 1: In what way is this a press release?

    I think it is a well researched article that raises awareness of an issue everybody should be concerned about.

  2. I do generally agree that we’re missing an interface where everyone can control their own data sharing, and where any service accessing that data must respect it.

    On the other hand, it’s quite useful if potential employers scan my public data, find a mismatch in our visions, and don’t continue the hiring process … and all that without me having to waste any time researching them, so I can focus on employers that do match my vision, or at least respect my privacy 🙂

  3. @Hidde, ok, maybe not very friendly, but I’m kinda tired of the stuff that indie trots out and this seems very different from the other articles that Laura has written, (some of) which I have also read and enjoyed.

    @Brad; yes, I did read it.

    @Johannes; I called it a press release because (a) it sounds very much like their tone of voice and (b) the fact that the article leads to the conclusion that ‘We need to demand, find and fund alternatives’ when that’s what indie are supposedly offering.

  4. The introductory phrase “privacy settings that let us control who of our friends sees what” threw me. It’s not very inviting to discover that someone doesn’t know how to use pronouns properly. I hope the ‘someone’ wasn’t Laura.

  5. Am I the only one who didn’t find any of those examples scary, terrifying, or any other negative term?

    The only examples I can think of in my own situation that scare me are physical real world threats. And in those situations, this data collection is entirely unnecessary to the execution of the threat.

    If I can see an ad for diapers because I have a child, instead of Viagra because the advertiser doesn’t know anything about me, I’m happy with that.

    I truly struggle to see the downside of the privacy trade-off, and really do think the benefits far outweigh the bad.

    The scientific research mentioned in the article alone is a boon. Getting data sets that don’t rely on self-identification greatly increases the value of the research.

  6. @BlackMagic You’ll be happy to know Laura didn’t write the introduction. I did. But I’m at a loss as to which aspect of pronoun usage is bothering you.

  7. Rose, try “which of our friends”. If you want to use “who” in the context in which you have used it, it would be “who among our friends”.

    Imagine a teacher standing in front of a class saying “Who of you didn’t do your homework?”, or “Who of the Founding Fathers was a slave owner?”. It’s either “which of …” or “who among …”

  8. Let me respond to the actual content of the article. It was great. Spot on and nailed it.

    Recently an article in Wired UK touched on the same subject, in the November 2014 issue.

    Keep the good work going!

  9. @BlackMagic, you’re right that the way it’s worded could look awkward to many readers. It seems that “which” is the far more popular option.

    Boring middle part: I shied away from “which” on the logic that it’s for things or classes, while “who” is for (individual) people. “Among” also applies more to groups or classes—think “to be among friends” or “this among other things.” Granted, it can also mean “distinguished in kind from the rest of the group,” but one definition of “of” is “distinguished out of a number, or out of all,” and I felt that was even more suitable. (Is anyone still awake after all that?)

    Anyway that was my thinking. And it actually might be kind of backward-looking. Language evolves, and if it seems wrong to most people, you could make a valid argument that it is wrong.

  10. Thank you, Rose. Good explanation.

    Intrigued, I researched the phrase “who of our friends” on Google Ngram, a useful tool for lexicographers. It turned up one instance of such usage in the last 200 years.

    I found a handful of other instances of the phrase, but they were mostly Biblical references. Rare indeed!

  11. I really enjoyed reading this incredibly insightful piece, and have shared it with a number of non-technical colleagues, friends, and family. A basic literacy of data mining is important. Thank you for writing this!

  12. My dismay got greater and greater reading this article, because I was fairly sure what would be coming at the end, and was getting ready to write a disapproving mail to A List Apart for allowing soap-boxing and product endorsement. I’m therefore very relieved it didn’t end with the obvious “Here’s something we prepared earlier.”

    As for the issue at hand, the real problem is that the monetisation models of the web grew out of a well-known working model: advertising. The money that came in from advertising on the web kickstarted the modern web as we know it, and it increased companies’ desire to harvest ever more data from users for targeted advertising.

    It is virtually impossible to put that genie back in the bottle. But if you want a free web without the concerns of big business holding your data, another monetisation model is required. As someone who earns a decent living building things on the web, I would be very happy to pay a yearly fee to access the web, and to help make it accessible to those who can’t afford it. Imagine if we had a model that allowed us to make this platform available, advert-free, to the entire world. It’s sadly a pipe dream. But what a dream!

  13. A great article about the most powerful control mechanism mankind has ever developed. Unfortunately very few people manage to grasp the concept behind data and how it’s being used to predict every single thing we do. A solution would be to develop a whole different suite of communication protocols and software tools to go with them that would be bound to a completely different set of rules. A parallel world that would slowly bypass the existing manipulation network. Just a thought…
