Category Archives: Tracking

Belgian Privacy Commission Found Facebook in Violation of EU and Belgian Privacy Law


About two weeks ago, KU Leuven University and the Vrije Universiteit Brussel in Belgium published a report, commissioned by the Belgian Privacy Commission, about Facebook's tracking behaviour on the internet: more specifically, how Facebook tracks its users (and non-users!) through the 'Like' and Share buttons found on millions of websites across the internet.

Based on this report and the technical report, the Belgian Privacy Commission published a recommendation, which can be found here. A summary article of the findings has also been published.

Findings

The results of the investigation are depressing: Facebook disregards European and Belgian privacy law in various ways. In fact, the commission identified 10 legal issues. Facebook frequently dismisses its own severe privacy violations as “bugs” that are still on the list to be fixed (ignoring the fact that these “bugs” are a major part of Facebook’s business model). This lets various privacy commissioners believe the violations are the result of unintended functionality, while in fact Facebook’s entire business model is based on profiling people.

Which law applies?

Facebook also does not recognise that Belgian law applies in this case, claiming that because it has an office in Ireland, it is only bound by Irish privacy law. This is simply not the case. The general rule seems to be that if you focus your site on a specific market, say Germany, as evidenced by a German translation of your site, a .de top-level domain, and various other indicators (such as the payment options provided, if your site offers ways to pay for products or services, or the marketing materials used), then you are bound by German law as well. In this example, that rule exists to protect German customers.

The same principle applies to Facebook. It is active world-wide, and so should be prepared to adjust its services to comply with the laws and regulations of all these countries. This is a difficult task, as laws are often incompatible, but it is necessary to safeguard consumers’ rights. If Facebook built its Like and Share buttons in such a way that they don’t phone home on page load and don’t place cookies without the user’s consent, it would have far fewer legal problems. The easiest way to comply when you run such an international site is to take the strictest applicable legislation and implement your site so that it satisfies that.
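
To make this concrete, here is a minimal sketch (in TypeScript) of a consent-first share button. The widget URL and element names are invented for illustration; this is not Facebook's actual code. The page only renders a local placeholder, and nothing is requested from the social network until the visitor explicitly clicks.

    // Minimal sketch of a consent-first ("two-click") share button.
    // The widget URL and names are illustrative, not Facebook's.
    function createShareButton(container: HTMLElement, pageUrl: string): void {
      const placeholder = document.createElement("button");
      placeholder.textContent = "Enable Share button";
      placeholder.addEventListener("click", () => {
        // Only after this click does the browser contact the third party,
        // so no cookies or page URLs leak on mere page load.
        const widget = document.createElement("iframe");
        widget.src =
          "https://social.example/plugins/share?href=" + encodeURIComponent(pageUrl);
        container.replaceChild(widget, placeholder);
      });
      container.appendChild(placeholder);
    }

Some publishers have adopted variations of this “two-click” approach, so that the click itself constitutes the user’s consent to contact the third party.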

In fact, Facebook’s presence in Ireland is mostly about tax: it allows the company to avoid taxes by means of the Double Irish and Dutch Sandwich financial constructions.

Another problem is that users cannot prevent Facebook from using the information they post on the social network for purposes other than pure social networking functionality. The information people post, together with other information that Facebook aggregates and collects from other sources, is used by Facebook for different purposes without the express and informed consent of the people concerned.

The problem with the ‘Like’ button

Special attention was given to the ‘Like’ and ‘Share’ buttons found on many sites across the internet. It was found that these social sharing plugins, as Facebook calls them, place a uniquely identifying cookie on users’ computers, which allows Facebook to then correlate a large part of their browsing history. Another finding is that Facebook places this uniquely identifying datr cookie on the European Interactive Digital Advertising Alliance opt-out site, where Facebook is listed as one of the participants. It also places an oo cookie (which presumably stands for “opt-out“) once you opt out of the advertising tracking. Of course, when you remove this cookie from your browser, Facebook is free to track you again. Also note that it does not place these cookies on the US or Canadian opt-out sites.
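
As an aside, a cookie-based opt-out like this is inherently fragile, because the preference lives only in your browser. A rough sketch of the idea follows; the cookie name oo is taken from the article, everything else is illustrative.

    // Sketch: an opt-out that exists only as a cookie disappears as soon as
    // the user clears their cookies, silently re-enabling tracking.
    function isOptedOut(): boolean {
      return document.cookie.split("; ").some(entry => entry.startsWith("oo="));
    }

    function shouldTrack(): boolean {
      return !isOptedOut(); // after a cookie wipe this is true again
    }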

As I wrote back in July 2013, the problem with the ‘Like’ button is that it phones home to Facebook without the user having to interact with the button at all. The very act of it loading on a page means that Facebook receives various information from the browser, such as the URL of the page being visited and a unique identifying cookie called the datr cookie, and this allows Facebook to correlate all the pages you visit with the profile it keeps on you. As the Belgian investigators confirmed, this happens even when you don’t have a Facebook account, when your account is deactivated, or when you are not logged in. As you surf the internet, a large part of your browsing history gets shared with Facebook, simply because these buttons are found on millions of websites across the world.
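
To illustrate the mechanism: roughly speaking, an embedded third-party widget can transmit something like the following the moment it loads, before any interaction. The endpoint and field names below are made up; the point is that the page URL, the referrer, and any existing cookie (such as datr) ride along automatically.

    // Illustrative sketch only; the endpoint and fields are hypothetical.
    function phoneHomeOnLoad(): void {
      void fetch("https://social.example/impression", {
        method: "POST",
        credentials: "include", // attaches existing tracking cookies (e.g. datr)
        body: JSON.stringify({
          page: document.location.href,   // the article you are reading
          referrer: document.referrer,    // where you came from
          userAgent: navigator.userAgent, // browser make and version
        }),
      });
    }
    phoneHomeOnLoad(); // runs as soon as the widget's script loads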

The Filter Bubble

A major problem with personalisation technology, as used by Facebook, Google, and others, is that it limits the information users are exposed to. The algorithm learns what you like, and then only serves you information that you’re bound to like. The problem is that a lot of information isn’t likeable: information that isn’t nice, but is still important to know. By heavily filtering the input stream, these companies influence how we think about the world and what information we’re exposed to. Eli Pariser describes this effect in his book The Filter Bubble: What the Internet Is Hiding From You: during the Egyptian revolution he did a Google search for ‘Egypt’ and got news articles about the revolution, while a friend only got information about holidays to Egypt, tour operators, flights, hotels, and so on. A vastly different result for the exact same search term, due to the heavy personalisation going on at Google, where algorithms infer which results you’re most likely to be interested in by analysing your previously entered search terms.

The same happens at Facebook, which controls what you see in your news feed based on what you like. The problem is that after a few rounds of this, you’re only going to see information that you like, and nothing that’s important but not likeable. This massively erodes the eventual value Facebook has, since eventually all Facebook will be is an endless stream of posts, images and videos that you like and agree with. It becomes an automatic positive-feedback machine: press a button, and you’ll get a cookie.
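
A toy sketch of that feedback loop (all names invented, and obviously nothing like Facebook’s real ranking code): every like boosts a topic, and the feed is then sorted by those boosts, so dissenting items sink out of sight.

    // Toy model of an engagement-driven feed: the more you like a topic,
    // the more of it you are shown, and the less you see of anything else.
    interface Item { id: string; topic: string; }

    const engagementByTopic = new Map<string, number>();

    function recordLike(item: Item): void {
      engagementByTopic.set(item.topic, (engagementByTopic.get(item.topic) ?? 0) + 1);
    }

    function rankFeed(items: Item[]): Item[] {
      return [...items].sort((a, b) =>
        (engagementByTopic.get(b.topic) ?? 0) - (engagementByTopic.get(a.topic) ?? 0));
    }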

What value does Facebook then have as a social network, when you never encounter radical ideas, or ideas you initially disagree with but that may alter your thinking once you engage with them? If we never encounter extraordinary ideas, we never improve. And what a poor world that would be!

Dutch Data Retention Law Struck Down

Good news on privacy protection for once: in its ruling of 11 March 2015 in the case of the Privacy First Foundation c.s. versus the Netherlands, the Court of The Hague struck down the Dutch data retention law. The law required telecommunications providers and ISPs to store communication and location data of everyone in the Netherlands for a year. The court reasoned that a major privacy infringement of this magnitude needs proper safeguards, and that the safeguards that were put in place were insufficient. The current law left too much room for abuse of power, which is why the court struck it down, effective immediately.

An English article by the Dutch Bits of Freedom foundation explains it in more detail here. An unofficial translation of the court’s decision in English can be found here.

The question remains what will happen now. The law has been struck down, so it seems logical to scrap it entirely. Whether that happens, and whether the decision stands if the Ministry of Security and Justice appeals, time will tell.

RT Going Underground Interview About Regin

I recently did an interview with RT‘s Going Underground programme, presented by Afshin Rattansi. We talked about the recently-discovered highly sophisticated malware Regin, and whether GCHQ or some other nation state could be behind it. The entire episode can be watched here. For more background information about Regin, you can read my article about it.

Talk at Logan Symposium 2014, London

A few weeks ago, I was in London at the Logan Symposium 2014, which was held at the Barbican Centre in London from 5 to 7 December 2014. During this event, I gave a talk entitled: “Security Dilemmas in Publishing Leaks.” (slides, PDF) The event was organised by the Centre for Investigative Journalism in London.

The audience was a switched-on crowd of journalists and hacktivists, bringing together key figures in the fight against invasive surveillance and secrecy, and it was great to be there and to be able to provide some insights and context from a technological perspective.

Gave Privacy By Design Talk At eth0

I gave my talk about privacy by design last Saturday at the eth0 2014 winter edition, a small hacker get-together organised in Lievelde, the Netherlands this year. eth0 organises conferences that aim to bring people with different computer-related interests together; they organise two events per year, one during winter. I previously gave a very similar talk at the OHM2013 hacker conference, held in August 2013.

Video

Here’s the footage of my talk:

Quick Synopsis

I talked about privacy by design, and what I did in relation to Annie Machon‘s site and, more recently, the Sam Adams Associates for Integrity in Intelligence site. The talk consists of two parts: in the first I explained what we’re up against, and in the second I discussed the two sites as more specific case studies.

I talked about the revelations about the NSA, GCHQ and other intelligence agencies, including the December revelations that were explained so eloquently by Jacob Appelbaum at 30C3 in Hamburg. Then I moved on to the threats to website visitors: how profiles are built up and sold, and browser fingerprinting. The second part consists of the case studies of Annie Machon’s website and the Sam Adams Associates’ website.

I mentioned the Sam Adams Associates for Integrity in Intelligence, for whom I had the honour of making their website, so they could have a more public space to share things relating to the Sam Adams Award with the world, and to provide a nice overview of previous laureates and their stories.

One of the things both sites have in common is that they are hosted on a Swiss domain, which provides a safer haven where content can be hosted without fear of being taken down by the U.S. authorities. The U.S. claims jurisdiction over the average .com, .net and .org domains, and there have been cases where such domains were taken down because they hosted content the U.S. government did not agree with. Case in point: Richard O’Dwyer, a U.K. citizen, was threatened with extradition to the United States for being the man behind TVShack, a website that provided links to copyrighted content. MegaUpload, the file locker company started by Kim Dotcom, was given the same treatment: if you visited its domain, you were served an image from the FBI telling you the domain had been seized.

The Rising Trend of Criminalizing Hackers & Tinkerers

Note: This article is also available in Portuguese, translated by Anders Bateva.

There seems to be a rising trend of criminalizing hackers and tinkerers. More and more, people who explore the limits of the equipment, hardware and software they own and use, whether by tinkering with it, re-purposing it, or expanding its functionality, are met with unrelenting persecution by the authorities. In the last couple of years, even things humans have done for thousands of years, like sharing, expanding and improving upon culture, are being prosecuted. An example is the recent possibility of making violations of Terms of Service, Terms of Use and other terms put forward by service providers a crime under the Computer Fraud and Abuse Act (CFAA). The companies that are now (for the most part) in control of our collective culture are increasingly limiting the methods of sharing, often through judicial and/or technical means. The technical means mostly don’t work, thankfully: DRM is still a big failure and never really got off the ground, although the content industry keeps clinging to it. The judicial means, however, can be very effective at crushing someone, especially in the litigious United States of America. In the U.S., about 95% of all criminal cases end in a plea bargain, because that’s cheaper than trial by jury. People are forced by financial pressure to enter a plea bargain, even if they didn’t commit the crimes of which they are accused.

Aaron Swartz

The late Aaron Swartz was persecuted heavily by the U.S. government for downloading millions of scientific articles from JSTOR at MIT. JSTOR is a closed library of scientific articles, access to which is commercially exploited by ITHAKA, the entity that runs it. Aaron believed that scientific research paid for by the public should be available to the public for free. It is completely logical that research paid for by the public belongs to the public, and not to some company that essentially says: “Thank you very much, we’ll have that; now we’re going to charge for access to the scientific results and reap the financial benefits.” It is sad that the world lost a great hacker and tinkerer. He committed suicide at only 26 years old, unable to bear the pressure brought down upon him any longer, when in the end, according to his lawyer Elliot Peters, he probably would have won the case, because the U.S. Secret Service failed to get a search warrant for his laptop until 34 days after they seized it.

The corporate world is seizing control of content creation

This trend is seen more and more lately. The companies in control of most of our content production, devices and systems don’t want you to tinker with them, not even if you own them. Apple is closing its systems: soon it will prevent you from installing your own software on OS X, with installs only permitted through the Apple-curated App Store. There is already software in OS X, called Gatekeeper, that is meant to prevent you from installing apps that might contain malware, and if you read between the lines in that previous link, you’ll see that it’s only a matter of time before they tighten the reins and make Gatekeeper more oppressive.

Google is rapidly closing Android, moving more and more parts of the once open-source system into its own Google Play Services app. Check the permissions on that app; it’s incredibly scary just how much of the system is now locked up in this closed-source binary blob, and how little the actual Android system still handles. Recently, text messaging functionality was moved from the Android OS to the Google Hangouts app, so texting with an Android 4.4 (KitKat)-equipped phone is no longer possible without a Google account that you are logged into. Of course, Google will store all your text messages, for easy access by American intelligence and law enforcement agencies. If you were to install Android today and remove the Google Play Services app, you might be surprised at how much depends on that app nowadays: remove Google Play Services, and your phone basically becomes a non-functional plastic brick.

These companies fail to see that every invention is made by standing on the shoulders of giants: building upon other people’s work, making it better, tinkering with it, modifying it, using it for purposes not envisioned by the original author, et cetera. This is what makes culture, this is what makes us. We are fundamentally social creatures; we share.

The same implementation of control systems is happening with e-books. The devices used to read them, like the Amazon Kindle for example, usually aren’t open, and that is a problem. We humans have been sharing culture for millions of years and sharing books for thousands of years, basically since writing was invented in Mesopotamia. It is as natural to human development as breathing. We are social creatures, and we thrive on feedback from our peers. But there’s something worse going on in e-book land. In the Netherlands, all e-book purchases now have to be stored in a database called Centraal Boekhuis, which records all buyer information, and this central database will be easily accessible to Stichting BREIN, the country’s main anti-piracy and content-industry lobby club. This was ostensibly done to prevent e-book piracy, but I would imagine that this database will soon be of interest to intelligence agencies as well. Think of it: a centralised database of almost all books and of which people read which books. You can learn a lot about a person just from the books they read. Joseph Stalin and Erich Honecker would be proud.

We reached a high-water mark as a society with the adoption of the Universal Declaration of Human Rights by the UN General Assembly on 10 December 1948, but it’s sad to see that here in the Western world, we’ve been slipping from that high pillar of decency and humanity ever since. To quote V from V for Vendetta:

“Where once you had the freedom to object, to think and speak as you saw fit, we now have censors and systems of surveillance coercing your conformity and soliciting your submission.”

The surveillance is now far worse than what George Orwell could have possibly imagined. We need to remind the spooks and control freaks in governments around the world that Nineteen Eighty-Four is not an instruction manual. It was a warning. And we’ve ignored it so far.

Facebook records self-censorship

Recently I came across an article about Facebook, more specifically about the fact that Facebook wants to know why you self-censor; in other words, why you wrote that status update but then decided not to click Publish. It turns out Facebook sends what you type in the Post textarea box (the one with the “What’s on your mind?” placeholder) to its servers. According to two Facebook scientists quoted in the article, Sauvik Das (PhD student at Carnegie Mellon and summer software engineering intern) and Adam Kramer (a data scientist), only information indicating whether you self-censored is sent back to Facebook’s servers, not the actual text you typed. They wrote an article entitled Self-Censorship on Facebook (PDF, copy here) in which they explain the technicalities.

It turns out this claim, that only metadata is sent back and not the actual text you type, is not entirely true. I wanted to check whether they really don’t send what you type to Facebook before you hit Publish, so I fired up Facebook and logged in, opened my web inspector and started monitoring the requests to and from my browser. When I typed a few letters I noticed that the site makes a GET request to the URL /ajax/typeahead/search.php with the parameters value=[your search string]&__user=[your Facebook user id] (there are more parameters, but these are the most important for the purposes of this article). The search.php script probably parses what you typed in order to find contacts that can then be shown to you as autocomplete options (for tagging purposes).
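
For reference, the request I observed looked roughly like the following. The parameter names value and __user come straight from the web inspector; the example values are of course made up.

    // Rough reconstruction of the typeahead request; example values are fake.
    const typed = "hel";                 // the few characters typed so far
    const userId = "100000123456789";    // hypothetical Facebook user id
    const url =
      "https://www.facebook.com/ajax/typeahead/search.php" +
      "?value=" + encodeURIComponent(typed) +
      "&__user=" + encodeURIComponent(userId);
    // The page issues this GET while you type, so the text reaches
    // Facebook's servers long before you ever click Publish.
    console.log(url);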

Now, the authors of the article gathered their data in a slightly different way. They monitored the Post textarea box and the comment box, and if more than 5 characters were typed, they counted it as self-censorship if you didn’t publish that post or comment within the next 10 minutes. So in their methodology, no actual textual content was needed. But as my quick research above shows, your comments and posts actually do get sent to Facebook before you click Publish, and even before 5 characters are typed. This is done for a different purpose (searching your contacts for tagging matches, etc.), but clearly the data is received by Facebook. What they subsequently do with it besides providing autocomplete functionality is anyone’s guess. The fact that the user ID is sent together with the typed text to the search.php script suggests that they can associate your profile with that text, but there’s no way to definitively prove that.
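
Their detection logic can be summarised in a few lines. The sketch below is my own reading of the methodology described in the paper (the thresholds come from the paper, everything else is illustrative), not their actual instrumentation code.

    // Flag a composer as "self-censored" when more than five characters were
    // typed but nothing was published within the next ten minutes.
    function watchComposer(box: HTMLTextAreaElement, onSelfCensor: () => void): () => void {
      let timer: number | undefined;
      box.addEventListener("input", () => {
        if (box.value.length > 5 && timer === undefined) {
          timer = window.setTimeout(onSelfCensor, 10 * 60 * 1000); // 10 minutes
        }
      });
      // Return a handler to call on Publish, which cancels the flag.
      return () => { if (timer !== undefined) window.clearTimeout(timer); };
    }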

When I read through the article, one particular sentence in the introduction stood out to me as bone-chilling:

“(…) Last-minute self-censorship is of particular interest to SNSs [social networking sites] as this filtering can be both helpful and hurtful. Users and their audience could fail to achieve potential social value from not sharing certain content, and the SNS [social networking site] loses value from the lack of content generation. (…)”

“Loses value from the lack of content generation.” Let that sink in. When you decide not to post something on Facebook, or rewrite it, Facebook considers that a bad thing, something that removes value from Facebook. The goal of Facebook is to sell detailed profiling information on all of us, even those of us wise enough not to have a Facebook account (through tagging and e-mail friend-finder functionality).

Big Data and Big Brother

And it isn’t just Facebook; it’s basically every social network and ad provider. There’s an entire industry of big-data brokers, companies most of us have never heard of, like Acxiom for instance, and many others like it, that thrive on selling profiles and associated services. Advertising works best when it is specific and plays into users’ desires and interests. This is why, for this to be successful, companies like Facebook need as much information on people as possible, to better target their clients’ ads. And the best way to get it is to provide a free service, like a social network, entice people to share their lives through that service, and then offer really specific targeting to your clients. This is what these companies thrive on.

The bigger problem is that we have no influence on how our data gets used. People who claim they have nothing to hide and do nothing wrong forget that they don’t decide what constitutes criminal behavior; the state makes that decision for them. And what happens when you are suddenly faced with a brutal regime that abuses all the information and data it has on you? Surely we want to prevent that.

This isn’t just a problem in the technology industry and business, but a problem with governments as well. The NSA and GCHQ, in cooperation with other intelligence agencies around the world, are collecting data on all of us, without providing us, the people, any possibility of appeal or of correcting erroneous data. We have no influence on how this data gets used, who will see it, how it might be interpreted by others, et cetera. The NSA is currently experiencing the same uneasiness as the rest of us, as they have no clue how much or what information Edward Snowden might have taken with him, or how it might be interpreted by others. It’s curious that they now complain about the same problem the rest of us have been experiencing for years; a problem the NSA partly created itself by overclassifying information that didn’t need to be kept secret. Of course there is information that needs to be kept secret, but the vast majority of information that now gets rubber-stamped with the TOP SECRET marking would pose no threat to national security if it were known to the public; more likely, it is information that might embarrass top officials.

We need to start implementing proper oversight of the secret surveillance states we are currently subjected to in a myriad of countries around the world, and take back the powers that were granted to them and subsequently abused, if we want to continue to live in a free world. For I don’t want to live in a Big Brother state; do you?

Choose Your Friends Wisely: Tracking & Profiling on the Web

Note: This article is also available in Portuguese, translated by Anders Bateva.

A lot of data about you and your Internet behavior gets collected when you simply surf the Internet ‘unprotected’. We are currently living in a time when data profiling and getting to know your customers are becoming more and more important. In this article I will explore the consequences of data sharing, browser tracking and profiling on the Internet, why it isn’t a good idea to share too much data about yourself, and some of the things we can do as a community.

Data Collection: What Is It?

There are companies out there, like Acxiom (link to Wikipedia) for example, that live on nothing but selling your information to other companies that may find a use for it. These companies get their data from you: from your browser, or from the social networks you’re a part of. Your movements across the Internet are tracked and recorded as well. One of the most ubiquitous forms of tracking on the Internet, next to ad networks, is the tracking done by social networks. These networks have convenient ‘share’ or ‘like’ buttons which can be found on millions of websites across the Internet. Simply by visiting these websites with an unprotected web browser, data gets sent to these social network sites: data about your browser make and version, the OS you use, the country you’re from (sometimes even down to the actual locality), but also your IP address and the URL of the site you visited. So they know your actual surfing behavior, since these buttons are found on so many sites. Nearly a quarter of the top 10,000 websites have Facebook integration, for instance, and that figure is from last year; I’m sure the number is higher today.

Another form of profiling is done via ad networks. Because it is inconvenient to manage your own advertising when you are just looking to make some money from your website, this often gets outsourced to companies that specialize in advertising, which then serve the ads from their own servers when you visit a site that uses them. Because all of this data gets collected and indexed at a single point, you can imagine that these companies know quite a lot about people’s surfing behavior. And this collecting of data, this profiling and tracking of people across the Internet, is done without your knowledge or consent.

Now, of course they claim this is done to better target their ads, so you get served ads aimed specifically at your current interests, your geographic location or your linguistic background. And this is true: the more they know about you, the better they can target ads. But this information is worth a lot of money to marketers, who are always on the lookout for ways to market their products to just the right audiences, because that increases the likelihood that people will click on their ads and buy their stuff. And this information gets collected centrally, at only a few companies that specialize in it. Most of us use content delivery networks hosted in the United States, implement social media integration, et cetera, and thereby facilitate easy data collection by these companies. This centralization means that only a few companies own the majority of the market in this business, and you can imagine that the amount of data they collect about a single person is quite substantial indeed. And of course, intelligence agencies like the NSA have access too, as the revelations by Edward Snowden in recent months have shown. Many people don’t know the sheer extent of the data collection going on, and the potential consequences it can have if that data is misinterpreted.
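
Conceptually, stitching these visits into a profile requires nothing more than a table keyed by the tracking cookie. A minimal sketch (all identifiers invented):

    // Every request that carries the same tracking-cookie id is appended to
    // the same browsing profile.
    interface Visit { url: string; timestamp: number; }

    const profiles = new Map<string, Visit[]>(); // keyed by tracking-cookie id

    function recordVisit(cookieId: string, url: string): void {
      const visits = profiles.get(cookieId) ?? [];
      visits.push({ url, timestamp: Date.now() });
      profiles.set(cookieId, visits);
    }

    recordVisit("cookie-abc123", "https://news.example/article-1");
    recordVisit("cookie-abc123", "https://shop.example/basket");
    // profiles.get("cookie-abc123") now reads like a slice of one person's history.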

Consequences of Overzealous Data Collection

The main problem with data collection is that data is often misinterpreted, or interpreted without context, and there can be serious consequences if this happens to you. The companies using your data infer certain things about you and your behavior based on that data alone: they profile you. However, their assessment is often wrong, and the more data you share, the more problematic this can eventually be. A recent example of a serious consequence is that having certain friends on Facebook can actually change your credit score. These companies base this credit-score correction on who your Facebook friends are, so if you have a lot of friends with questionable credit histories, you may be denied a mortgage or a credit card, even when you make sure you never miss a payment.

Search engines that know your search history have access to something very private indeed: you are revealing what you are thinking at that very moment, and what you are likely interested in. This is exactly why this information is so valuable to advertising companies: they can adjust their campaigns to make it more likely that they’ll persuade you to click one of their ads. Your search history also reveals your mental state at that moment, which, together with information on the groceries you buy at the supermarket for instance, can be very valuable to your health insurance company. It is not inconceivable that insurance companies will adjust your premiums based on the food you eat, whether you have a gym membership, whether you smoke or drink alcohol, or whether your search history suggests you have an increased risk of depression. Do we really want that?

This can lead to some very bad consequences indeed, and not just financial ones. You can also imagine health insurance companies rejecting you because of your unhealthy lifestyle, car rental companies rejecting you because of recent fines you received, et cetera. These conclusions are drawn without our knowledge or consent; usually we don’t even know where these companies get the data on which they base their decisions, and there’s not much we can do about it. The only way to prevent this is to become more aware of what your data is worth to someone else, why it is in their interest to have access to it, and whether you really want to give them that access; and, on the other hand, to think about what we as programmers and hackers can do ourselves, by building systems with privacy in mind from the start.

Privacy By Design

What we need to better protect our privacy on the Internet, next to browser add-ons like Ghostery and NoScript, is a change in mentality. We need systems that are built from the ground up with privacy in mind: privacy by design. Think about how much data you really need in order to complete the task at hand. When you’re building forms for your users to fill in, don’t require data that isn’t absolutely necessary for the current task. Don’t ask your customers for a phone number when an e-mail address will do. Don’t ask for a postal address when you don’t need it to send packages. Don’t ask for their real name either when it isn’t necessary (and usually it isn’t). The reason to limit the data you hold is that this data can come back to bite you later on, as I’ve explained above. It also better protects your business against cybercriminals looking for personal data to steal, as they cannot steal what isn’t there. Identity theft also becomes harder when you’re very selective about with whom you share your data. If we teach people how to protect their data on the Internet, how to be ‘street smart’ online so to speak, we will increase their overall security on the Internet, and that is very much needed nowadays.
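
As a small, purely illustrative example of what data minimisation looks like in practice (the field names are mine, not from any particular framework): the sign-up model below only asks for what the task actually needs, and validation enforces nothing more.

    // Data-minimisation sketch: collect only what the task at hand requires.
    interface SignUp {
      email: string;        // needed to deliver the service
      displayName?: string; // optional, and need not be a real name
      // deliberately no phone number, postal address or date of birth
    }

    function validateSignUp(input: SignUp): string[] {
      const errors: string[] = [];
      if (!/^[^@\s]+@[^@\s]+\.[^@\s]+$/.test(input.email)) {
        errors.push("A valid e-mail address is the only required field.");
      }
      return errors;
    }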

At the Crossroads: Surveillance State or Freedom?


When I went to OHM2013 last week, it was great to see such increased political activism from the hackers and geeks at the festival. I truly believe we are currently at a very important crossroads: either we let governments the world over get away with crimes against the people’s interests, with programs like PRISM, ECHELON, TEMPORA and countless other authoritarian global surveillance schemes, or we take the path towards more freedom, transparency and accountability.

A good example of what not to do is Google Glass. A few weeks ago I came across the story of a hacker who modded Google Glass to allow instant facial recognition and the covert recording of video. Normally you need to tap your temple or use voice commands to start recording with Glass, both of which are pretty obvious gestures, but now people can record video and run automatic facial recognition covertly while wearing Glass. I even saw that there’s an app developed for Glass, called MedRef, which also uses facial recognition technology and basically allows medical professionals to view and update patient records using Glass. Of course, having medical records available on Glass isn’t really in the interest of the patient either: it’s a totally superfluous technology, and it’s unnecessary to store patient records on a device like that, over which you have no control. It’s Google who is calling the shots. Do we really want that?

Image above © ZABOU.

As hackers, I think it’s important to remember the implications and possible privacy consequences of the things we do. By enabling the covert recording of video with Google Glass, and adding instant, automatic facial recognition on top of that, you are basically creating walking CCTV cameras. And given that these devices are controlled by Google, who knows where these videos will end up. These devices are interesting from a technical and societal standpoint, sure, but after PRISM we should be focusing on regaining what little we have left of our privacy and other human rights. As geeks and hackers we can no longer stand idly by and be content hacking some technical thing that has no political implications.

I truly and with all my heart know that geeks and hackers are key to stopping the encroaching global surveillance state. It has been said that geeks shall inherit the earth. Not literally of course, but unlike any other population group out there, I think geeks have the skills and technical know-how to have a fighting chance against the NSA. We use strong encryption, we know what’s possible and what is not, and we can work one bit at a time at restoring humanity, freedom, transparency and accountability.

These values were won by our parents and grandparents through very hard, bloody struggles, and for good reason. They saw very well what happens with an out-of-control government, and why government of the people, by the people, for the people is a very good idea. The Germans have had plenty of hands-on experience with the consequences as well: first with the Nazis, who took control and murdered entire population groups, not only Jews but also people who didn’t think along the prescribed lines: communists, activists, gay, lesbian and transgender people, and others. Later the Germans got another taste of what can happen when you live in a surveillance state, with the Stasi in former East Germany, who encouraged people to spy on one another, which is exactly what the US government is currently encouraging too. Dangerous parallels there.

But you have to remember that the capabilities of the Stasi and the Gestapo were limited, peanuts compared to what the NSA can do. Just to give a comparison: the Stasi, at the height of its power, could only tap 40 telephone lines concurrently, so at any one time there were at most 40 people under live Stasi phone surveillance. Weird, isn’t it? We all carry the image of East Germany under the Stasi as the prime example of a surveillance state, while they could only listen in on 40 people at a time. Of course they had files on almost everybody, but they could only actively spy on this very limited number of people concurrently. Nowadays the NSA gets to spy continuously on everyone in the world who is connected to the internet: billions of people. Which raises the question: if we saw East Germany as the prime example of the surveillance state, what do we make of the United States of America?

The Next Step?

I think the next step in defeating this technocratic nightmare of the surveillance state and regaining our freedom is to educate others. Hold cryptoparties; explain the reasons for, the need for, and the workings of encryption. Make sure that people leave with their laptops configured to use strong encryption. If we can educate the general population one person at a time, using our technological skill and know-how, and explain why this is necessary, then eventually the NSA will have no one to spy on, as almost all communication will flow across the internet in encrypted form. It’s sad that this is necessary, really, but I see no other option to stop the intelligence agencies’ excessive data-hunger. The NSA has a bad case of data addiction, and it urgently needs rehab. They claim more data is necessary to catch terrorists, but let’s face it: you don’t find a needle in a haystack by making the haystack bigger.