
Pegasus: NSO Group’s Insidious Spyware

Note: This article was first published at the World Ethical Data Forum.

“Malware Infection” by Visual Content is licensed under CC BY 2.0

Pegasus is advanced spyware that was first discovered in August 2016, developed by the Israel-based NSO Group, and sold to various clients around the world, including Saudi Arabia, Bahrain, the UAE, India, Kazakhstan, Hungary, Rwanda, Azerbaijan, Morocco and Mexico, probably among other nations. It is marketed by NSO Group as a “world-leading cyber intelligence solution that enables law enforcement and intelligence agencies to remotely and covertly extract valuable intelligence from virtually any mobile device”.

There’s a huge market for spyware like this: NSO Group is not the only company selling it, and countless other corporations across the globe are active in this market.

We are writing about this now because this month, in July 2021, seventeen news media organisations investigated a leak of over 50,000 phone numbers believed to have been selected as targets by clients of NSO Group since 2016. Pegasus continues to be widely used by authoritarian governments to spy on human rights activists, journalists and lawyers across the world.

Pegasus can infect phones running either Apple’s iOS operating system or Google’s Android OS. The earliest versions – from around 2016 until 2019 – used a technique called spear phishing: text messages or emails that trick targets into clicking on a malicious link. Nowadays, however, Pegasus can infect phones using a “zero-click” attack, meaning it does not require any interaction by the user for the malware to infect the phone. This seems to be the dominant attack method now. Whenever OTA (over-the-air) zero-click exploits are not possible, NSO Group states in their marketing material (page 12) that they will resort to sending a custom-crafted message via SMS, a messaging app (like WhatsApp) or e-mail, hoping that the target will click the link. In other words, if the newer zero-click exploits don’t work, targets can still be infected using the spear phishing approach. Both the OTA and ESEM (Enhanced Social Engineering Message) methods require only that the operator knows a phone number or e-mail address used by the target. Nothing more is needed to infect the target.

Pegasus is designed to overcome several obstacles on smartphones, namely: encryption, abundance of various communication apps, targets being outside the interception domain (roaming, face-to-face meetings, use of private networks), masking (use of virtual identities making it almost impossible to track and trace), and SIM replacement.

What data is collected?

Several types of data are extracted or made accessible from the phone, namely:

  • SMS records
  • Contact details
  • Call history
  • Calendar records
  • E-mails
  • Instant-messaging messages
  • Browsing history
  • Location tracking (both cell-tower based and GPS based) – cell-tower based locations get sent passively; whenever the operator requests a more precise location, the GPS gets turned on and the malware sends a precise latitude/longitude fix.
  • Voice call interception
  • Environmental (ambient) sound recordings via the microphone
  • File retrieval
  • Photo taking
  • Screen capturing

Pegasus supports both passive and active data capturing; some of the capabilities above work passively, while others require active interception. The difference is that once the Pegasus malware is installed on a device, it automatically (passively) collects various data, either in real time or when specific conditions are met (depending on how the malware is configured). Active data collection means the operator sends explicit requests to the device for information; these happen only at the specific request of the operator.

The collected data gets transmitted to the Command & Control server (C&C server). If data transmission is not possible, the collected data is stored in a collection buffer and sent when a connection becomes available again. The buffer size is capped at 5% of the free space available on the device, to avoid detection. The buffer operates on a FIFO (first in, first out) basis: when the buffer is full and no internet connection is available, the oldest data gets deleted and replaced by newer data, keeping the size of the buffer the same.
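As a rough illustration, here is a minimal Python sketch of the buffering behaviour described above. The class and method names are my own invention – the real implementation is of course not public – but the FIFO eviction and the 5%-of-free-space cap follow the description:

```python
import collections
import shutil

class CollectionBuffer:
    """Hypothetical sketch of the described buffering behaviour:
    a FIFO queue whose total size may not exceed 5% of the free
    disk space, flushed when connectivity returns."""

    def __init__(self, path="/"):
        self.items = collections.deque()
        self.size = 0
        # Cap the buffer at 5% of the currently free disk space.
        self.max_size = shutil.disk_usage(path).free * 0.05

    def add(self, blob: bytes):
        # Evict the oldest items first (FIFO) until the new blob fits.
        while self.items and self.size + len(blob) > self.max_size:
            self.size -= len(self.items.popleft())
        if len(blob) <= self.max_size:
            self.items.append(blob)
            self.size += len(blob)

    def flush(self, send):
        # Called when a connection to the C&C server is available again.
        while self.items:
            blob = self.items.popleft()
            self.size -= len(blob)
            send(blob)
```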

NSO Group published a brochure explaining the capabilities and general workings of the malware.

Interestingly, the malware has self-destruct capabilities. When it cannot contact its C&C server for more than 60 days, or if it detects that it was installed on the wrong device, it will self-destruct to limit discovery. That would imply it is possible to defeat the malware simply by placing your phone in a Faraday cage for 60 days.

How do you detect if you have been infected?

NSO Group claims that Pegasus leaves no traces whatsoever. That statement isn’t true. Amnesty International did quite a bit of research on the malware, which you can read in full detail.

It is possible to detect a Pegasus infection by checking the Safari logs for strange redirects: redirects to URLs that contain multiple subdomains, a non-standard port number and a random URI request string. For instance, Amnesty analysed an activist’s phone and discovered that immediately after the user tried to visit Yahoo.FR, the phone redirected them to a very strange URL (after which, I presume, the browser loads Yahoo to avoid detection). To see these requests you need to check Safari’s Session Resource logs instead of the browsing history: Safari only records the final site reached in the browsing history, not all the redirects it took along the way.
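To make the heuristic concrete, here is a small Python sketch (my own construction, not Amnesty’s tooling) that flags URLs matching the pattern described above – many subdomains, a non-standard port and a short random-looking path:

```python
from urllib.parse import urlparse

def looks_like_pegasus_redirect(url: str) -> bool:
    """Heuristic sketch flagging the redirect pattern described above."""
    p = urlparse(url)
    host = p.hostname or ""
    labels = host.split(".")
    many_subdomains = len(labels) >= 4             # deeply nested hostname
    odd_port = p.port not in (None, 80, 443)       # non-standard port number
    path = p.path.strip("/")
    random_path = 4 <= len(path) <= 24 and path.isalnum()
    return many_subdomains and odd_port and random_path

# Example with a made-up URL in the reported style:
print(looks_like_pegasus_redirect(
    "https://h4xp9r2.redirect.example.com:30827/a1b2c3"))  # -> True
```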

Another way to detect it is via the appearance of weird, malicious processes. According to Amnesty, the network usage databases on both Maati Monjib’s and Omar Radi’s phones contained a suspicious process called “bh”. This process was observed on multiple occasions immediately following visits to Pegasus installation domains, so it is probably related. References to “bh” were also found in the Pegasus iOS sample recovered from the 2016 attacks against UAE human rights defender Ahmed Mansoor, analysed by Lookout.

Other processes associated with Pegasus appear to be “msgacntd”, “roleaboutd”, “pcsd” and “fmld”. There seem to be many others, and there is also evidence that Pegasus spoofs the names of legitimate iOS processes to avoid detection.

Here are the full Pegasus Indicators of Compromise, courtesy of Amnesty Tech, which lists suspicious files, domains, infrastructure, e-mails, and processes. Also helpful is the Mobile Verification Toolkit (MVT), which is a collection of utilities to simplify and automate the process of gathering forensic traces helpful to identify a potential compromise of Android and iOS devices.
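As a toy example of the kind of check MVT automates, the sketch below matches process names extracted from a device backup or diagnostic log against the handful of Pegasus-associated names mentioned above. For real analysis you should use MVT with Amnesty Tech’s maintained IOC files rather than a hard-coded list:

```python
# Process names reported by Amnesty as associated with Pegasus; the full,
# maintained list lives in Amnesty Tech's published IOCs.
SUSPICIOUS = {"bh", "msgacntd", "roleaboutd", "pcsd", "fmld"}

def scan_process_names(names):
    """Flag known-bad process names in a list extracted from a device."""
    return sorted(set(names) & SUSPICIOUS)

# Example with a made-up process listing:
found = scan_process_names(["launchd", "bh", "SpringBoard", "fmld"])
print("suspicious processes:", found)  # -> ['bh', 'fmld']
```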

Interestingly, NSO Group rapidly shut down their Version 3 server infrastructure after the publications by Amnesty International and Citizen Lab on 1 August 2018:

Source: Amnesty

After this they moved on to Version 4 infrastructure, restructuring the architecture to make it harder to detect.

Measures to take to best secure your phone

Luckily, Apple was quick to react with an update back when the malware was first discovered in 2016. The company issued a security update (iOS 9.3.5) that patched all three vulnerabilities Pegasus was known to use at the time. Google notified Pegasus targets directly using the leaked list. According to cybersecurity company Kaspersky, if you have always updated your iOS phone and/or iPad as soon as possible and (in case you use Android) you haven’t gotten a notification from Google, you are probably safe and not under surveillance by Pegasus.

Of course, bear in mind that the article by Kaspersky was published a few years ago, and the fight against Pegasus and other similar malware continues: as Apple and Google (and other parties) keep pushing updates, NSO Group and its competitors will keep trying to find zero-days and other hacks to make sure their software can continue to infect a wide variety of devices. It’s basically an arms race. To limit the chances of infection, you need to remain continually vigilant and take proactive measures to improve your devices’ security.

We can give several additional tips for the future to make sure you keep your devices as secure as possible.

  • First of all, make sure that you install any updates to your software – whether the operating system (iOS, Android or others) or general software and apps – as fast as possible. These updates often fix important security vulnerabilities, so if you run recent software on your devices you are a lot less vulnerable. This is general advice: always make sure you regularly check for updates and, if possible, configure your systems so that updates are installed automatically, so you’re optimally protected.
       
  • A second tip is to not click any suspicious links that may have been sent to you via instant messaging, SMS or emails. This can be hard to detect, but signs like spelling & grammar mistakes, a sudden change in language (like the way you’re being addressed), or just a strange sequence of events (like why would a certain company contact you at that point to get you to click a link), can help to detect suspicious messages. This is being made a bit harder by the fact that smartphones often don’t show where a link leads until it’s already too late and you’ve clicked on it.
     
  • Another way to protect yourself is to get as many companies as possible to send you physical mail, if that is an option, instead of e-mails or other messages. That way, when you suddenly receive an email from, for example, your utility company, this raises a lot more red flags than if your normal communication with this party also goes over email. If you receive a link from a suspicious source, do not click on it.

Conclusions

Advanced spyware, whether fielded by intelligence agencies or sold by private companies like NSO Group, gets used to target human rights activists, journalists and other people active in organisations trying to effect societal change. Instruments that were previously used against criminals and terrorists are now being fielded on a massive scale against journalists and activists who do not have any criminal intent. Of course, seen from the viewpoint of the various regimes around the world, often with atrocious human rights records, the very existence of a free press, or of people campaigning for societal change, seems like a threat. However, the Universal Declaration of Human Rights considers certain rights to be inalienable and common among all people. The fact that highly advanced spyware is being used to disrupt and interfere with people trying to exercise their universal human rights is highly concerning.

We will only see more cyber attacks in the future, and they will become more and more sophisticated. They will become harder and harder to defend against, as the internet and computer networks have become a battleground for intelligence, for nation states, criminals and corporations alike.

Belgian Privacy Commission Found Facebook in Violation of EU and Belgian Privacy Law

About two weeks ago KU Leuven University and Vrije Universiteit Brussel in Belgium published a report, commissioned by the Belgian Privacy Commission, about the tracking behaviour of Facebook on the internet – more specifically, how they track their users (and non-users!) through the ‘Like’ and ‘Share’ buttons that are found on millions of websites across the internet.

Based on this report and the technical report, the Belgian Privacy Commission published a recommendation, which can be found here. A summary article of the findings is also published.

Findings

The results of the investigation are depressing. It was found that Facebook disregards European and Belgian privacy law in various ways; in fact, 10 legal issues were identified by the commission. Facebook frequently dismisses its own severe privacy violations as “bugs” that are still waiting to be fixed, ignoring the fact that these “bugs” are a major part of Facebook’s business model. This lets various privacy commissioners think that the violations are the result of unintended functionality, while in fact the entire business model of Facebook is based on profiling people.

Which law applies?

Facebook also does not recognise that in this case Belgian law applies, claiming that because they have an office in Ireland, they are bound only by Irish privacy law. This is simply not the case. In fact, the general rule seems to be that if you focus your site on a specific market – let’s say Germany – as evidenced by having a German translation of your site, being accessible through a .de top-level domain, and various other indicators (such as the payment options provided, if your site offers ways to pay for products or services, or perhaps the marketing materials), then you are bound by German law as well. This is done to protect German customers, in this example.

The same principle applies to Facebook. They are active world-wide, and so should be prepared to adjust their services to comply with the various laws and regulations of all these countries. This is a difficult task, as laws are often incompatible, but it is necessary to safeguard consumers’ rights. In Facebook’s case, if they built their Like and Share buttons in such a way that they don’t phone home on page load and don’t place cookies without the user’s consent, they would have far fewer legal problems. The easiest way to comply if you run such an international site is to take the strictest legislation and implement your site so that it complies with that.

In fact, the real reason Facebook is in Ireland is mostly tax: it allows them to evade taxes by means of the Double Irish and Dutch Sandwich financial constructions.

Another problem is that users are not able to prevent Facebook from using the information they post on the social network site for purposes other than pure social networking functionality. The information people post, and other information that Facebook aggregates and collects from other sources, is used by Facebook for different purposes without the express and informed consent of the people concerned.

The problem with the ‘Like’ button

Special attention was given to the ‘Like’ and ‘Share’ buttons found on many sites across the internet. It was found that these social sharing plugins, as Facebook calls them, place a uniquely identifying cookie on users’ computers, which allows Facebook to then correlate a large part of their browsing history. Another finding is that Facebook places this uniquely identifying datr cookie on the European Interactive Digital Advertising Alliance opt-out site, where Facebook is listed as one of the participants. It also places an oo cookie (which presumably stands for “opt-out”) once you opt out of the advertising tracking. Of course, when you remove this cookie from your browser, Facebook is free to track you again. Also note that it does not place these cookies on the US or Canadian opt-out sites.

As I wrote earlier, in July 2013, the problem with the ‘Like’ button is that it phones home to Facebook without the user having to interact with the button itself. The very act of it loading on the page means that Facebook receives various information from the user’s browser, such as the current page visited and a unique identifying browser cookie called the datr cookie, and this information allows them to correlate all the pages you visit with the profile they keep on you. As the Belgian investigators confirmed, this happens even when you don’t have an account with Facebook, when it is deactivated, or when you are not logged into Facebook. As you surf the internet, a large part of your browsing history gets shared with Facebook, because these buttons are found everywhere, on millions of websites across the world.
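To make concrete what “phoning home on page load” means, here is a rough Python simulation of the request a browser effectively makes when a page embeds the Like button. The endpoint and parameters here are illustrative assumptions, not Facebook’s documented API:

```python
import requests

# A cookie like "datr" would have been set on some earlier contact with
# facebook.com; here it is a made-up placeholder value.
cookies = {"datr": "example-unique-browser-id"}
embedding_page = "https://example.com/article-you-are-reading"
headers = {"Referer": embedding_page}

# Hypothetical widget URL; the real plugin URL and parameters may differ.
resp = requests.get("https://www.facebook.com/plugins/like.php",
                    params={"href": embedding_page},
                    cookies=cookies, headers=headers, timeout=10)

# Facebook's server now knows which browser (datr cookie) visited which
# page (Referer / href) -- without the user ever clicking the button.
print(resp.status_code)
```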

The Filter Bubble

A major problem of personalisation technology, such as that used by Facebook, Google and others, is that it limits the information users are exposed to. The algorithm learns what you like, and subsequently serves you only information that you’re bound to like. The problem is that there’s a lot of information that isn’t likeable – information that isn’t nice, but is still important to know. By heavily filtering the input stream, these companies influence how we think about the world, what information we’re exposed to, and so on. Eli Pariser describes this effect in his book The Filter Bubble: What the Internet Is Hiding From You: he did a Google search for ‘Egypt’ during the Egyptian revolution and got information about the revolution, news articles, and so on, while his friend only got information about holidays to Egypt – tour operators, flights, hotels. A vastly different result for the exact same search term. This is due to the heavy personalisation going on at Google, where algorithms infer which results you’re most likely to be interested in by analysing your previously entered search terms.

The same happens at Facebook, which controls what you see in your news feed on the Facebook site based on what you like. The problem is that after a few rounds of this, you’re only going to see information that you like, and no information that’s important but not likeable. This massively erodes the eventual value of Facebook, since eventually all Facebook will be is an endless stream of information – posts, images, videos – that you like and agree with. It becomes an automatic positive feedback machine. Press a button, and you’ll get a cookie.
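The feedback loop is easy to demonstrate. The following toy simulation (my own construction, not any real ranking algorithm) gives engagement a small multiplicative boost and shows how quickly one topic comes to dominate the feed:

```python
import random

# Toy model: each topic has a weight; the feed samples topics in
# proportion to their weights, and engagement boosts the weight.
weights = {"politics": 1.0, "travel": 1.0, "science": 1.0}

def pick_topic():
    total = sum(weights.values())
    r = random.uniform(0, total)
    for topic, w in weights.items():
        r -= w
        if r <= 0:
            return topic
    return topic  # fallback for floating-point edge cases

for _ in range(1000):
    topic = pick_topic()
    if topic == "travel":        # suppose the user only "likes" travel posts
        weights[topic] *= 1.01   # engagement increases future exposure

share = weights["travel"] / sum(weights.values())
print(f"share of the feed that is now travel: {share:.0%}")
```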

What value does Facebook then have as a social network, when you never come in touch with radical ideas, or ideas that you initially do not agree with, but that may alter your thinking when you come in touch with them? By never coming in touch with extraordinary ideas, we never improve. And what a poor world that would be!

Dutch Data Retention Law Struck Down

Good news on privacy protection for once: in its ruling of 11 March 2015 in the case of the Privacy First Foundation c.s. versus the Netherlands, the Court of The Hague struck down the Dutch data retention law. The law required telecommunication providers and ISPs to store communication and location data of everyone in the Netherlands for a year. The court based its decision on the reasoning that a major privacy infringement of this magnitude needs proper safeguards, and the safeguards that were in place were deemed insufficient. There is too much room for abuse of power in the current law, which is why the The Hague court struck it down, effective immediately.

An English article by the Dutch Bits of Freedom foundation explains it in more detail here. An unofficial translation of the court’s decision in English can be found here.

The question remains what will happen now. The law has been struck down, so it seems logical to scrap it entirely. Whether that will happen, and whether the decision will stand should the Ministry of Security and Justice appeal it, time will tell.

Talk at Logan Symposium 2014, London

A few weeks ago, I was in London at the Logan Symposium 2014, which was held at the Barbican Centre in London from 5 to 7 December 2014. During this event, I gave a talk entitled: “Security Dilemmas in Publishing Leaks.” (slides, PDF) The event was organised by the Centre for Investigative Journalism in London.

The audience was a switched-on crowd of journalists and hacktivists, bringing together key figures in the fight against invasive surveillance and secrecy, and it was great to be there and to be able to provide some insights and context from a technological perspective.

The Internet of Privacy-Infringing Things?

Let’s talk a little bit about the rapid proliferation of the so-called Internet of Things (IoT). The Internet of Things is a catch-all term for all sorts of embedded devices that are hooked up to the internet in order to make them “smarter”: able to react to certain circumstances, automate things, et cetera. This can include many devices, such as thermostats and autonomous cars. There’s a wide variety of possibilities, and some of them, like smart thermostats, are already on the market, with autonomous cars following closely behind.

According to the manufacturers peddling this technology, the purpose of hooking these devices up to the internet is to let them react better and provide services that were previously impossible. An example would be a thermostat that recognises when you are home and subsequently raises the temperature of the house. There are also possible scenarios linking various IoT devices together, like using your autonomous car to recognise when it is (close to) home and then letting the thermostat automatically increase the temperature, for instance.

There are myriad problems with this technology in its current form. Some of the most basic ones, in my view, are privacy and security considerations. In the case of cars, Ford knows exactly where you are at all times and knows when you are breaking the speed limit, by using the highly accurate GPS built into modern Ford cars. This technology is already active, and if you drive one of these cars, this information (your whereabouts at all times, and certain metrics about the car, like the current speed and mileage) is stored and sent to Ford’s servers. Many people don’t realise this, but it was confirmed by Ford’s Global VP of Marketing and Sales, Jim Farley, at the CES trade show in Las Vegas at the beginning of this year. Farley later retracted his statements after the public outrage, claiming that he had left the wrong impression and that Ford does not track the locations of their cars without the owners’ consent.

Google’s $3.2 billion acquisition

Nest Labs, Inc. used to be a separate company making thermostats and smoke detectors, until Google bought it for a whopping $3.2 billion. The Nest thermostat is a programmable thermostat with a little artificial intelligence inside that enables it to learn what temperatures you like, turning the temperature up when you’re at home and down when you’re away. It can be controlled via WiFi from anywhere in the world through a web interface: users can log in to their accounts to change the temperature and schedules, and to see energy usage.

Why did Google pay such an extraordinarily large amount for a thermostat company? I think the Internet of Things will be the next battleground for Google to gather more data. Home automation and cars are markets that Google has recently stepped into. Technologies like Nest and Google’s driverless car generate massive amounts of data about users’ whereabouts and things like sleep/wake cycles, patterns of travel and energy usage, for instance. And this is just for the two technologies I have chosen to focus on in this article. There are lots of different IoT devices out there that eventually will all be connected somehow, via the internet.

Privacy Concerns

One is left to wonder what is happening with all this data. Where is it stored, who has access to it, and most important of all: why is it collected in the first place? In most cases this collection of data isn’t even necessary. In the case of Ford, we have to rely on Farley’s say-so that they are the only ones with access to this data. And of course Google and every other company out there offers the same defence. I don’t believe that for one second.

The data is being collected to support a business model that we see often in the tech industry, where profiles and sensitive data about the users of a service are valuable, and are either used to better target ads or sold on directly to other companies. There seems to be a conception that the modern internet user is used to not paying for services online, and this has caused many companies to implement the default ads-, data- and profiling-based business model. However, other business models – like the Humble Bundle in the gaming industry, or online crowd-funding campaigns on Kickstarter or Indiegogo – have shown that the internet user is perfectly willing to spend a little money, or give a small donation, for a service or device they care about. The problem with the default ads-based business model discussed above is that it leaves users’ data vulnerable to exposure to third parties and others who have no business knowing it, and also causes companies to collect too much information about their users by default. It’s as if there is some kind of recipe out there called “How to start a Silicon Valley start-up” that has profiling and tracking of users, and basically not caring about the users’ privacy, as its central tenet. It doesn’t have to be this way.

Currently, a lot of this technology is developed and brought to market without any consideration whatsoever for the privacy of the customer or the security and integrity of the data. Central questions that in my opinion should be answered immediately, during the initial design process of any technology impacting privacy, are left unanswered. First: whether, and which, data should be collected at all. How easy is it to access this data? It is quite conceivable that unauthorised people could gain access to it too. What if it falls into the wrong hands? A smart thermostat like Google Nest knows when you’re home and knows all about your sleep/wake cycle – information that could be of interest to burglars, for instance. What if someone accesses your car’s firmware and changes it? What happens when driverless cars mix with regular, human-controlled cars on the road? This could lead to accidents.

Vulnerabilities

And what to think of all those “convenient” dashboards and other web-based interfaces that are enabled and exposed to the world on all those “smart” IoT devices? I suspect that a lot of security vulnerabilities will be found in that software. It’s all closed-source and not exposed to external code review. The budgets for the software development probably aren’t large enough to accommodate looking at the security and privacy implications of the software and implementing proper safeguards to protect users’ data. This is a recipe for disaster. Only with free and open source software can proper code review take place and code be inspected for back-doors and other unwanted behaviour. It generally also leads to better quality software, since more people can see the code and have an incentive to fix bugs in an open and welcoming community.

Do we really want to live in a world where we can’t have privacy any more, where your whereabouts are at all times stored and analysed by god-knows who, and all technology is hooked up to each other, without privacy and security considerations? Look, I like technology. But I like technology to be open, so that smart people can look at the insides and determine whether what the tech is doing is really what it says on the tin, with no nasty side-effects. So that the community of users can expand upon the technology. It is about respecting the users’ freedom and rights, that’s what counts. Not enslaving them to closed-source technology that is controlled by commercial parties.

The Age of the Gait-Recognising Cameras Is Here!

A few days ago I read an article (NRC, Dutch, published 11 September, interestingly) about how TNO (the Dutch Organisation for Applied Scientific Research, the largest research institute in the Netherlands) developed technology (PDF) for smart cameras for use at Amsterdam Schiphol Airport. These cameras were installed at Schiphol by Qubit Visual Intelligence, a company from The Hague, and are designed to recognise certain “suspicious behaviour”, such as running, waving your arms or sweating.

Curiously enough, these are all things commonly seen in the stressful environment an international airport is for many people. People need to get to the gate on time, which may require running (especially if you arrived at Schiphol by train, which in the Netherlands is notoriously unreliable); they may be afraid of flying and trying to get their nerves under control; and airports are also places where friends and family meet again after long periods abroad, which (if you want to hug each other) requires arm waving.

Because of this, I suspect this technology will produce a lot of false positives. It’s the wrong technology in the wrong place. I fully understand the need for airport security, and we all want a safe environment for both passengers and crew; flights need to operate under safe conditions. What I don’t understand is the mentality that every single risk in life needs to be minimised away by government agencies and combated with technology. More technology does not equal safer airports.

Security Theatre

A lot of the measures taken at airports constitute security theatre. This means that the measures are mostly ineffective against real threats, and serve mostly for show. The problem with automatic profiling, which is what this programme tries to do as well, is that it doesn’t work. Security expert Bruce Schneier has also written extensively about this, and I encourage you to read his 2010 essay Profiling Makes Us Less Safe about the specific case of air travel security.

The first problem is that terrorists don’t fit a specific profile; these systems can be circumvented once people figure out how; and the over-reliance on technology instead of common sense can actually cause more insecurity. In Little Brother, Cory Doctorow wrote about how Marcus Yallow put gravel in his shoes to fool the gait-recognising cameras at his high school so he and his friends could sneak out to play a game outside. Similar things will be done to try and fool these “smart” cameras, but the consequences can be much greater. We are actually more secure when we randomly select people, instead of relying on a specific threat or behavioural profile to decide who gets screened and who gets through security without secondary screening. The whole point of random screening is that it’s random: a potential terrorist cannot know in advance what criteria will make the system pick him out. If a system does use specific criteria, and its security depends on those criteria being secret, then someone merely has to observe the system for long enough to find out what the criteria are.
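The point about randomness can be made precise: with uniform random selection, every passenger faces the same screening probability, so there is no pattern for an adversary to learn and game. A minimal sketch:

```python
import random

def select_for_secondary_screening(p=0.05, rng=random.random):
    """Uniform random screening: every passenger has probability p,
    independent of appearance, behaviour or history."""
    return rng() < p

# Whatever an observer records about who was selected before,
# the next decision is statistically independent of it:
sample = [select_for_secondary_screening() for _ in range(10)]
print(sample)
```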

Technology may fail, which is something people don’t always realise. Another TNO report, entitled “Afwijkend Gedrag” (PDF; Abnormal Behaviour), states in the (admittedly tiny) section dealing with privacy concerns that collecting data about people’s abnormal behaviour is ethically justified because society as a whole can be made safer with this data and the associated technology. It also states (an argument I’ve read elsewhere as well) that “society has chosen that safety and security trumps privacy.”

Now, let’s say for the sake of argument that this might be true in a general sense (although it can be debated whether this is always the case; personally I don’t think so, as sometimes the costs are simply too high, and we need to keep a free and democratic society after all). The problem is that the way technology and security systems are implemented is usually not something we as a society get to vote on before the (no doubt highly lucrative) contracts get signed. In this case, Qubit probably saw a way to make a quick buck by talking the Schiphol leadership and/or the government (the Dutch state holds 69.77% of the Schiphol shares) into buying their technology. It’s not something the people had a conscious debate on before making a well-informed decision.

Major Privacy Issues

We have established that these systems are ineffective, can be circumvented (like any system), and won’t improve overall security. But much more importantly, there are major privacy issues with this technology. What Schiphol (and Qubit) are doing here is analysing and storing data on millions of passengers, the overwhelming majority of whom are completely innocent. This is like shooting a mosquito with a bazooka.

What happens with this data? We don’t know, and we have to take Qubit and Schiphol at their word that data about non-suspect members of the public gets deleted. However, in light of recent events, where it seems convenient to collect and store as much data about people as possible, I highly doubt any deletion will actually happen.

And the sad thing is: in the Netherlands the Ministry of Security and Justice is now talking about implementing the above-mentioned behavioural analysis system at another (secret) location in the Netherlands. Are we all human guinea pigs ready to be tested and played around with?

What is (ab)normal?

There are also problems with the definitions. This is something I see again and again with privacy-infringing projects like this. What constitutes “abnormal behaviour”? Who gets to decide, and who controls what is abnormal behaviour and what isn’t? Maybe, in the not-too-distant future, the meaning of the word “abnormal” begins to shift and comes to mean “not like us”, for some definition of “us”. George Orwell described this effect in his book Nineteen Eighty-Four, where ubiquitous telescreens watch and analyse your every move, and one can never be sure which thoughts are criminal and which aren’t.

In 2009, when the European research project INDECT was funded by the European Union, critical questions were put to the European Commission by the European Parliament. More precisely, this was asked:

Question from EP: How does the Commission define the term abnormal behaviour used in the programme?

Answer from EC: As to the precise questions, the Commission would like to clarify that the term behaviour or abnormal behaviour is not defined by the Commission. It is up to applying consortia to do so when submitting a proposal, where each of the different projects aims at improving the operational efficiency of law enforcement services, by providing novel technical assistance.

(Source: Europarl (Written questions by Alexander Alvaro (ALDE) to the Commission))

In other words: according to the European Commission it depends on the individual projects, which all happen to be vague about their exact definitions. When you don’t pin down definitions like this (and anchor them in law, so that the powerful governments and corporations that oversee these systems can be held to account!), they can be changed over time when a new leadership comes to power, either within the corporation controlling the technology or within government. This is a danger that is often overlooked. There is no guarantee that we will always live in a democratic and free society, and the best defence against abuse of power is to make sure that those in power have as little data about you as possible.

Keeping these definitions vague is a major tactic in scaring people into submission. This has the inherent danger of legislative feature creep. A measure that once was implemented for one specific purpose soon gets used for another if the opportunity presents itself. Once it is observed that people are getting arrested for seemingly innocent things, many people (sub)consciously adjust their own behaviour. It works similarly with free speech: once certain opinions and utterances are deemed against the law, and are acted upon by law enforcement, many people start thinking twice about what they say and write. They start to self-censor, and this erodes people’s freedom to the point where we slowly shift into a technocratic Orwellian nightmare. And when we wake up it will already be too late to turn the tide.

Gave Privacy By Design Talk At eth0

I gave my talk about privacy by design last Saturday at eth0 2014 winter edition, a small hacker get-together organised this year in Lievelde, The Netherlands. eth0 organises conferences that aim to bring people with different computer-related interests together; they organise two events per year, one during winter. I previously gave a very similar talk at the OHM2013 hacker conference, held in August 2013.

Video

Here’s the footage of my talk:

Quick Synopsis

I talked about privacy by design, and what I did in relation to Annie Machon’s site and, recently, the Sam Adams Associates for Integrity in Intelligence site. The talk consists of two parts: in the first part I explained what we’re up against, and in the second part I covered the two sites in a more specific case study.

I talked about the revelations about the NSA, GCHQ and other intelligence agencies, including the revelations from December, which were explained eloquently by Jacob Appelbaum at 30C3 in Hamburg in December. Then I moved on to the threats to website visitors, how profiles are built up and sold, and browser fingerprinting. The second part consists of the case studies of both Annie Machon’s website and the Sam Adams Associates’ website.

I mentioned the Sam Adams Associates for Integrity in Intelligence, whose website I had the honour of making, so they could have a more public space to share things relating to the Sam Adams Award with the world, and to provide a nice overview of previous laureates and their stories.

One thing both sites have in common is hosting on a Swiss domain, which provides a safer haven where content may be hosted without fear of being taken down by the U.S. authorities. The U.S. claims jurisdiction over the average .com, .net and .org domains, and there have been cases where these were taken down because they hosted content the U.S. government did not agree with. Case in point: Richard O’Dwyer, a U.K. citizen, was threatened with extradition to the United States for being the man behind TVShack, a website that provided links to copyrighted content. MegaUpload, the file locker company started by Kim Dotcom, was given the same treatment: visitors to its domain were served an image from the FBI stating that the domain had been seized.

NSA is coming to town!

I just stumbled upon this funny video made by the ACLU (American Civil Liberties Union). It fits perfectly, and it’s funny to see that when invasions of privacy get really personal (Santa photographing your face, recording your conversations and rifling through your smartphone), people really don’t like it and some respond strongly, yet when the exact same thing is done by some big, anonymous government agency it doesn’t provoke such a strong response, which is unfortunate. Anyway, without further ado: