
Why I won’t recommend Signal anymore

Note: This article is also available in Portuguese, translated by Anders Bateva.

One of the things I do is cryptography and infosec training for investigative journalists who need to keep their sources and communications confidential so they can more safely do their work in the public interest. Often they work in places that are heavily surveilled, like Europe, or the United States; Ed Snowden’s documents explain a thing or two about how the US intelligence apparatus goes about its day-to-day business. They sometimes also work in places where rubber-hose cryptanalysis is more common than in, say, the U.S. or Europe, which is why crypto tools alone are not the alpha and omega of (personal) security: it takes careful consideration of what to use when, and in what situation. One of the things I have recommended in the past for various cases is OpenWhisperSystems’ app Signal, available for Android and iOS. In this article, I want to explain why I won’t be recommending Signal in the future.

To be clear: the reason for this is not security. To the best of my knowledge, the Signal protocol is cryptographically sound, and your communications should still be secure. The reason has much more to do with the way the project is run, the focus and certain dependencies of the official (Android) Signal app, as well as the future of the Internet, and what future we would like to build and live in. This post was mostly sparked by Signal’s Giphy experiment, which shows a direction for the project that I wouldn’t have taken. There are other, bigger issues which deserve our attention.

What is Signal?

Signal is an app published by OpenWhisperSystems, a company run by Moxie Marlinspike, with official clients for Google Android and Apple iOS. Signal has been instrumental in providing an easy-to-use, cryptographically secure texting and calling app. It combines the previously separate apps TextSecure and RedPhone into a single app called Signal.

One of the main reasons I recommended it in the past, next to its cryptographic security, was that it was easy to use. That is one good thing Signal has going for it: people could just install it and then communicate securely. Cryptographic software needs to be much simpler to use, and to use securely, and Signal is doing its part on the mobile platforms to create an easy-to-use secure messaging platform. I do appreciate them for that, and I wanted to get that out of the way.

Multiple problems with Signal

There are, however, multiple issues with Signal, namely:

  • Lack of federation
  • Dependency on Google Cloud Messaging
  • Your contact list is not private
  • The RedPhone server is not open-source

I’ll go into these one at a time.

Lack of federation

There is a modified version of Signal called LibreSignal, which removes the Google dependency from the Signal app, allowing Signal to run on other (Android) devices, like CopperheadOS or Jolla phones (with the Android compatibility layer). In May this year, however, Moxie made it clear that he does not want LibreSignal to use the Signal servers, and that he does not approve of the name. The name is something that can change; that is not a problem. What is a problem is the fact that he does not want LibreSignal to use the Signal servers, which would be fine if he allowed LibreSignal to federate using its own servers. Federation was tried once (with CyanogenMod, and it was also offered to Telegram, of all people) but subsequently abandoned, because Moxie believes it slows down changes to the app and/or protocol.

The whole problem with his position, however, is that I don’t see the point of doing any of this secure messaging work without federation. The internet was built on federation. Multiple e-mail providers and servers, for instance, can communicate with one another, so I can send an e-mail to someone with a Gmail address or a corporate address and it all just works. This works because of federation: the protocols are open standards, and there are multiple implementations of those standards that can cooperate and communicate with each other. Another example is the Jabber/XMPP protocol, which also has multiple clients on multiple platforms that can communicate securely with one another, even when the users have accounts on different servers.

If we don’t federate, if we don’t cooperate, what is there to stop the internet from becoming a bunch of proprietary walled gardens again? Is the internet then really nothing more than a platform for us to use certain proprietary silo services on? Signal, then, just happens to be a (partly proprietary) silo through which your messages are transmitted securely.

Dependency on Google Cloud Messaging

Currently, the official Signal client depends on Google Cloud Messaging to work correctly. The alternative developed by the people behind LibreSignal removes that dependency, so people running other software, like Jolla or CopperheadOS, can run Signal. Unfortunately, the policy decisions of OpenWhisperSystems and Moxie Marlinspike make it impossible to reliably run unofficial Signal clients against the same server infrastructure so that people can communicate. And federation, as explained in the previous section, is expressly hindered and prohibited by OpenWhisperSystems, so it is not an option for LibreSignal to simply run its own servers and federate with the wider Signal network, allowing people to contact each other across clients.

What is Google Cloud Messaging?

Signal uses the Google Cloud Messaging service to send empty messages that wake up the device before the actual messages are delivered to it by Signal’s own servers.[1] There is a way to use Signal without depending on GCM, but it relies on microG, and that basically asks people to re-compile their kernel (at least I had to in my case). This is not something you can ask of non-technical users. I would like to be able to run an official Signal client (or any secure messaging client) on hardware that runs CopperheadOS, for example.
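To make that flow concrete, here is a minimal sketch of the push-to-wake pattern described above. This is my own illustration, not Signal’s actual code: the server URL, endpoint and function names are all hypothetical, and whether the client then pulls the queued messages or the server delivers them over the client’s own connection does not change the point, which is that Google only ever sees the empty wake-up.

```python
# Hypothetical sketch of the wake-up pattern described above; not Signal's code.
# The GCM payload is empty: Google only learns that *something* is waiting,
# while the actual end-to-end encrypted messages come from Signal's own servers
# over the client's own authenticated connection.
import requests  # any HTTPS client would do

SERVER = "https://signal-server.example"  # placeholder, not a real endpoint


def on_gcm_wakeup(payload: dict) -> None:
    """Invoked by the platform push service when the empty wake-up arrives."""
    assert not payload, "the wake-up push is expected to carry no content"
    for envelope in fetch_pending_envelopes():
        handle_envelope(envelope)


def fetch_pending_envelopes() -> list:
    # Retrieve queued message envelopes directly from the messaging server.
    resp = requests.get(f"{SERVER}/v1/messages", timeout=10)  # hypothetical path
    resp.raise_for_status()
    return resp.json().get("messages", [])


def handle_envelope(envelope: dict) -> None:
    ...  # Signal-protocol decryption and display happen locally on the device
```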

Not directly related to GCM, but since Google usually has root access on Android devices, there is also the issue of integrity. Google is still cooperating with the NSA and other intelligence agencies. PRISM is also still a thing. I’m pretty sure that Google could serve a specially modified update or version of Signal to specific surveillance targets, and they would be none the wiser that they had installed malware on their phones. For this reason it would be strongly preferable to run a secure messaging client on a more secure platform. Currently this cannot be done with Signal in any official way, and it would help the people who really need secure messaging (as opposed to those who merely use it as a replacement for, say, WhatsApp) if the software ran on other Android distributions, like CopperheadOS.[2]

Your contact list (social graph) is not private

Here is the permission list of Signal, including OpenWhisperSystems’ explanation of why each permission is needed. As you can clearly see, Signal is allowed (if you install it) to read and modify your contacts. Signal associates phone numbers with names in a similar way to WhatsApp, and this is a big reason why they feel they need to read your contact list. There is also a usability aspect: contacts’ names and pictures are displayed in the Signal app. The phone numbers are hashed before they are sent to the server, but since the space of possible phone numbers is so small, this does not provide a lot of security. Moxie stated previously (in 2014) that the problem of private contact discovery is difficult, laid out different strategies that don’t work or don’t give satisfying performance, and then admitted it’s still an unsolved problem. Discussion of this seems to have moved from a GitHub issue to a mailing list, and I don’t know of any improvement on this front.[2]
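To illustrate just how small that space is, here is a minimal sketch of how whoever holds the hashes could reverse them by simple enumeration. This is my own illustration: the hash function and the number format are chosen for the example and are not Signal’s actual contact-discovery scheme.

```python
# Minimal sketch (illustrative only) of why hashing phone numbers offers
# little protection: the space of valid numbers is small enough to enumerate,
# so whoever holds the hashes can brute-force or precompute them all.
import hashlib


def hash_number(number: str) -> str:
    return hashlib.sha256(number.encode()).hexdigest()


# Suppose the server receives this "protected" contact entry:
uploaded = hash_number("+31612345678")


# Dutch mobile numbers, for example, are +316 followed by 8 digits: only 10**8
# candidates, which a laptop can hash in minutes (a precomputed table is instant).
def reverse(target: str) -> str | None:
    for n in range(10**8):
        candidate = f"+316{n:08d}"
        if hash_number(candidate) == target:
            return candidate
    return None


print(reverse(uploaded))  # -> +31612345678
```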

This could of course all have been done differently, by using usernames to connect users instead of their phone numbers (incidentally, this would also allow people who use multiple phone numbers on the same device to use Signal reliably). And last time I checked, if you use the same phone number on a different device, Signal gets deregistered on the old device.

Another issue, and another argument for usernames, is that you may want to use Signal with people you don’t necessarily want to give your phone number to. Federation would also be easier with usernames and servers separated by a symbol like the @, just as in Jabber/XMPP. I also see no usability issues here, as even very non-technical people generally get the concept of an address, or an e-mail address, and this would be very similar.

RedPhone not open source

The phone-call component of Signal is called RedPhone. Its server component is unfortunately not open source, so people are prevented from running their own phone servers, and this is probably also the reason why secure encrypted phone calls don’t work in e.g. LibreSignal.

I don’t know exactly what prevents the RedPhone server code from being released (whether it is legal issues or simple unwillingness), but I do think it is strange that there is no movement whatsoever towards a different, alternative solution that respects users’ rights.

Moving forward

Image above © ZABOU.

The big question now, as @shiromarieke also said on Twitter, is what post-Signal tool we want to use. I don’t know the answer to that question yet, but I will lay out my minimum requirements for such a piece of software here. We as a community need to come up with a viable alternative to Signal that is easy to use and that does in fact respect people’s choices, both in the hardware and the software they choose to run.

In my view, there should be a tool that is fully free software (as defined by the GNU GPL), that respects users’ freedoms to freely inspect, use and modify the software and to distribute modified copies of it. This tool should also not depend on corporate infrastructure like Google’s (basically any partner in PRISM) that allows those parties to control whether the software works correctly. The fact that Signal depends on Google Cloud Messaging, and on Google technology in general, is something that should be avoided.

In the end, I think we need to move to an Internet with more federated services, not fewer, where information is openly shared and services are publicly run by multiple people all over the world. Otherwise, we’ll be in danger of ending up in a neo-90s Internet, with walled gardens and paywalls all over the place. You can already see this trend happening in journalism.

We need to remember that we’re fighting not only against government surveillance, but against corporate surveillance as well. We need ways to defend against both. Even when the communications themselves are not readable to these corporations, using solutions that create a dependency on them still leaves the problem of metadata, and of course Signal’s general reliance on the availability of Google’s services.

It’s really unfortunate that OpenWhisperSystems isn’t more welcoming to initiatives like LibreSignal; those people did a lot of work which is now basically going to be thrown away because the person running Signal is not friendly to such initiatives.

We need to cooperate more as a community instead of creating these little islands, otherwise we are not going to succeed in defeating or even meaningfully defending against Big Brother. Remember, our enemy knows how to divide and conquer. Divide et impera. It has been a basic tactic of government subjugation since Roman times. We should not allow our own petty egos and quest for eternal hacker fame to get in the way of our actual goal: dismantling the surveillance states globally.

Notes:
[1]: An earlier version of this article stated incorrectly that GCM was used to transport Signal messages. While correct for a previous version of TextSecure, this is in fact not correct anymore for Signal. I’ve updated it, in response to this HN comment: https://news.ycombinator.com/item?id=12882815.
[2]: Clarified my position re Google and GCM and the contact list / private contact discovery issue a bit.

Dutch Data Retention Law Struck Down

Good news on privacy protection for once: on 11 March 2015, the Court of The Hague in the Netherlands, ruling in the case of the Privacy First Foundation c.s. versus the Netherlands, struck down the Dutch data retention law. The law required telecommunication providers and ISPs to store communication and location data from everyone in the Netherlands for a year. The court based its decision on the reasoning that a privacy infringement of this magnitude needs proper safeguards, and the safeguards that were put in place were deemed insufficient. There is too much room for abuse of power in the current law, which was the reason for the Hague Court to strike it down, effective immediately.

An English article by the Dutch Bits of Freedom foundation explains it in more detail here. An unofficial translation of the court’s decision in English can be found here.

The question remains what will happen now. The law has been struck down, so it seems logical to scrap it entirely. Whether that will happen, and whether the decision will stand should the Ministry of Security and Justice appeal it, time will tell.

RT Going Underground Interview About Regin

I recently did an interview with RT‘s Going Underground programme, presented by Afshin Rattansi. We talked about the recently-discovered highly sophisticated malware Regin, and whether GCHQ or some other nation state could be behind it. The entire episode can be watched here. For more background information about Regin, you can read my article about it.

With Politicians Like These, Who Needs Terrorists?

The text on the cover says: “Love is stronger than hate.”

Last week, on the 7th of January 2015, the Paris office of the satirical magazine Charlie Hebdo was attacked by Islamic fundamentalists. Charlie Hebdo is a French satirical magazine featuring jokes, cartoons, reports and so on, and it is stridently anti-conformist in nature. They make fun of politics, Judaism, Christianity, Islam and all other institutions. Like all of us, they have every right to freedom of expression. But alas, fundamentalists did not agree, and opted to violently attack their office in Paris with assault rifles and rocket-propelled grenades, leaving 12 people dead and 11 wounded. This was a terrible attack, and my heart goes out to the families, colleagues and friends who have lost their loved ones.

After the attack, there was (rightly so) worldwide condemnation, and the sentence “Je suis Charlie,” French for “I am Charlie,” became the slogan of millions. What I am afraid of, however, is not the terrorists who perpetrate these attacks. What frightens me more is the almost automatic response by politicians who immediately see reasons to implement ever more oppressive legislation, building the surveillance state. After all, the goal of terrorism is to change society by violent means. If we allow them to, the terrorists have already won. Their objective is completed by our own fear.

Hypocrites At The March

When I was watching footage of the march in Paris for freedom of expression, I saw that a lot of government leaders were present, most of whom severely obstructed freedom of expression and freedom of the press in their home countries. Now they were at the march, claiming the moral high ground and posing as the guardians of press freedom.

Here’s an overview of some of the leaders present at the march and what they did in relation to restricting press freedom in their own countries, courtesy of Daniel Wickham, who made this list and published it on his Twitter feed:

Politicians like the ones mentioned above, but also the likes of May (UK Home Secretary), Opstelten (the Netherlands’ Justice Minister) and many others, are jumping on the bandwagon again to implement new oppressive laws limiting freedom of expression and the civil and human rights of their peoples. With leaders like these, who needs terrorists? Our leaders will happily implement legislation that severely curtails our freedoms and civil liberties instead of handling the aftermath of tragic events like these as grown-ups. It would be better if they viewed participating in the march as a starting point for improving the situation at home in the areas of freedom of expression and freedom of the press.

The Political Consequences Of Terrorist Attacks

What frightens me is that people like Andrew Parker, head of MI5, the kind of person who normally never makes headlines, was given all the space he needed to explain to us “why we need them,” to put it in the words of High Chancellor Adam Sutler, the dictator from the film “V for Vendetta,” which is set in a near-future British dystopia. UK Chancellor George Osborne immediately responded to Andrew Parker’s piece by saying that MI5 would get an extra £100 million in funding for combating Islamic fundamentalism. David Cameron has confirmed this.

Politicians are using the tragic events in Paris as a way to demand more surveillance powers for the intelligence community in a brazen attempt to curtail our civil liberties in a similar way to what happened after the 9/11 attacks.

All the familiar rhetoric is used again: how it’s a “terrible reminder of the intentions of those who wish us harm,” how the threat level in Britain has worsened and Islamic extremist groups in Syria and Iraq are trying to attack the UK, how the intelligence community needs more money to gather intelligence on these people, how our travel movements must be severely restricted and logged, the need for increased security at border checks, a European PNR (Passenger Name Record) system (which, incidentally, would mean the end of Schengen and of freedom of movement, one of the core principles on which the EU was founded). The list goes on and on.

A trend can be seen here. UK Home Secretary Theresa May wants to ban extremist speech, and ban people deemed extremist from publicly speaking at universities and other venues. The problem with that is that the definition of extremist is very vague, and certainly up for debate. Is vehemently disagreeing with the government’s current course in a non-violent way extremist? I fear that May thinks that would fit the definition. This would severely curtail freedom of speech both on the internet and in real life, since there are many people who disagree with government policies, and are able to put forward their arguments in a constructive manner.

Before we can even begin to implement laws like these, we need to discuss what extremism means, and what vague concepts like “national security” mean. There are no clear definitions for these terms at this point, while the legislation that has been put into place since 9/11 uses these vague notions intentionally, giving the security apparatus far too much leeway to abuse its powers as it sees fit.

I read that Cameron wants to ban all encrypted communications, since these cannot be decrypted by the intelligence community. This would leave banks, corporations and individuals vulnerable to all kinds of attacks, identity theft among them, the very problems that cryptographic technologies are meant to solve.

Cryptography is the practice of techniques for secure communication in the presence of adversaries. Without cryptography, you couldn’t communicate securely with your bank, or with companies that handle your data. You also couldn’t communicate securely with various government agencies, or health care institutions, etcetera. All these institutions and corporations handle sensitive information about your life that you wouldn’t want unauthorised people to have access to. This discussion about banning cryptography strongly reminds me of the Crypto Wars of the 1990s.

Making technologies like these illegal only serves to hurt the security of law-abiding citizens. Criminals, like the people who committed the attacks at Charlie Hebdo, wouldn’t be deterred by it. They are already breaking the law anyway, so why worry? But for people who want to comply with the law, this is a serious barrier, and restricting cryptography only hurts our societies’ security.

Norwegians’ Response to Breivik

Instead of panicking, which is what these politicians are doing right now, we should treat this situation with much more sanity. Look, for instance, at how the Norwegians handled the massacre of 77 people in Oslo and on the Norwegian island of Utøya by Anders Behring Breivik on July 22nd, 2011.

Breivik attacked the Norwegian government district in Oslo, and then subsequently went to Utøya, where a large Labour Party gathering was taking place. He murdered 77 people in total.

The response by the Norwegians was, however, very different from what you would expect had the attack taken place in the UK, the US or the Netherlands, for instance. In those countries, the reaction would have been the way it is now, with the government ever limiting civil liberties in an effort to build the surveillance state, taking away our liberties in a fit of fear. The Norwegians, however, urged that Norway continue its tradition of openness and tolerance. Memorial services were held, the victims were mourned, and life went on. Breivik got a fair trial and is now serving his time in prison. This is the way to deal with crises like this.

Is Mass Surveillance Effective?

The problem with more surveillance legislation is that it isn’t even certain that it would work. The effectiveness of the current (already quite oppressive) surveillance legislation has never been put to the test. No research has ever been published that definitively said: yes, storing all our communications in dragnet surveillance has stopped this many terrorist attacks and is a valuable contribution to society.

In fact, the White House released a review of the National Security Agency’s spy programmes in December 2013, months after the first revelations by Edward Snowden, and this report offered 46 recommendations for reform. The conclusion of the report was predictable, namely that even though the surveillance programmes have gone too far, they should stay in place. But the report did undermine the NSA’s claims that the collection of metadata and mass surveillance on billions of people is a necessary tool to combat terrorism.

The report says on page 104, and I quote:

“Our review suggests that the information contributed to terrorist investigations by the use of Section 215 telephony meta-data was not essential to preventing attacks and could readily have been obtained in a timely manner using conventional Section 215 orders.”

And shortly after Edward Snowden’s revelations about the existence of some of these programmes were published, former director of the NSA Keith Alexander testified to the Senate in defence of his agency’s surveillance programmes. He claimed that dozens of terrorist attacks were stopped because of the mass surveillance, both at home and abroad. This claim was also made by President Obama, who said that it was “over 50.” Often, 54 is the exact number quoted. Alexander’s claim was challenged by Senators Ron Wyden (D-OR) and Mark Udall (D-CO), who said that they “had not seen any evidence showing that the NSA’s dragnet collection of Americans’ phone records has produced any valuable intelligence.” The claim that the warrant-less global dragnet surveillance has stopped anywhere near that number of terrorist attacks is questionable to say the least, and much more likely entirely false.

More oppressive dragnet surveillance measures aren’t making the intelligence community any more effective at its job. In fact, the more intelligence gets scooped up in these dragnet surveillance programmes, the less likely it becomes that a terror plot is discovered before it occurs and stopped in time. More data needs to be analysed, and there’s only so much automated algorithms can do when tasked with filtering out the unimportant material. In the end, the intel needs to be assessed by analysts in order to determine its value and, if necessary, act upon it. There is also the problem of false positives, as people get automatically flagged because their behaviour fits certain patterns programmed into the filtering software. This may lead to all sorts of consequences for the people involved, despite the fact that they have broken no laws.
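A quick back-of-the-envelope calculation shows how badly the base-rate problem bites dragnet surveillance. The numbers below are illustrative assumptions of my own, not real figures, but the effect holds for any realistic values: when the thing you are looking for is extremely rare, even a very accurate filter buries analysts in false positives.

```python
# Illustrative base-rate calculation: all numbers below are assumptions.
population = 17_000_000      # roughly the population of the Netherlands
actual_plotters = 100        # assume 100 genuine plotters hide in that population
true_positive_rate = 0.99    # the filter catches 99% of real plotters
false_positive_rate = 0.001  # and wrongly flags only 0.1% of innocent people

flagged_real = actual_plotters * true_positive_rate
flagged_innocent = (population - actual_plotters) * false_positive_rate

print(f"real plotters flagged:   {flagged_real:.0f}")      # ~99
print(f"innocent people flagged: {flagged_innocent:.0f}")  # ~17000
precision = flagged_real / (flagged_real + flagged_innocent)
print(f"chance a flagged person is a real plotter: {precision:.1%}")  # ~0.6%
```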

Politicians can be a far greater danger to society than a bunch of Islamic terrorists. Because unlike the terrorists, politicians have the power to enact and change legislation, both for better and for worse. When we are being governed by fear, the terrorists have already won.

The objective of terrorism is not the act itself. It is to try and change society by violent means. If we allow them to change it, by implementing ever more oppressive mass surveillance legislation (in violation of Article 8 of the European Convention on Human Rights (ECHR)), or legislation that restricts the principles of freedom of the press and freedom of speech, enshrined in Article 10 of the ECHR, freedom of assembly and association enshrined in Article 11, or of freedom of movement which is one of the basic tenets on which the European Union was founded, the terrorists have already won.

Let’s use our brains and think before we act.

Talk at Logan Symposium 2014, London

A few weeks ago, I was in London at the Logan Symposium 2014, which was held at the Barbican Centre in London from 5 to 7 December 2014. During this event, I gave a talk entitled: “Security Dilemmas in Publishing Leaks.” (slides, PDF) The event was organised by the Centre for Investigative Journalism in London.

The audience was a switched-on crowd of journalists and hacktivists, bringing together key figures in the fight against invasive surveillance and secrecy, and it was great to be there and to be able to provide some insights and context from a technological perspective.

Regin: The Trojan Horse From GCHQ

In 2010, Belgacom, the Belgian telecommunications company, was hacked. The attack was discovered in September 2013 and had by then been going on for years. Thanks to documents from Edward Snowden, we know that this attack was the work of Western intelligence, more specifically GCHQ, in what was called Operation Socialist. Now, however, we know a little bit more about how exactly this attack was carried out, and by what means. Internet connections of Belgacom employees were redirected to a fake LinkedIn page that was used to infect their computers with malware, called “implants” in GCHQ parlance. We now know that Regin is the name given to the highly complex malware that seems to have been used during Operation Socialist.

Symantec recently reported on this malware (the full technical paper (PDF) can be found here), and its behaviour is highly complex. It is able to adapt to very specific missions, and the authors have made a tremendous effort to make it hard to detect. The malware is able to adapt and change, and since most anti-virus detection relies on heuristics, or on specific fingerprints of known malware, Regin was able to fool anti-virus software and stay undetected. However, Symantec put two and two together and has now revealed some of Regin’s inner workings.

The infections have ranged from telecoms and internet backbones (20% of infections) to the hospitality (hotels, etc.), energy, airline and research sectors, but the largest share of infections (48%) has been of private individuals and small businesses. The countries targeted are also diverse, but most of the attacks are directed against the Russian Federation (28%) and Saudi Arabia (24%).

The Regin malware works very much like a framework, which the attackers can use to inject various types of code, called “payloads,” to do very specific things like capturing screenshots, taking control of your mouse, stealing passwords, monitoring your network traffic and recovering files. Several Remote Access Trojans (also known as RATs) have been found, although even more complex payloads have also been found in the wild, like a Microsoft IIS web server traffic monitor (which makes it easy to spy on who visits a certain website, etcetera). Another example of a highly complex payload that has been found is malware to sniff the administration panels of mobile phone base station controllers.

How Regin Works

As mentioned above, Regin works as a modular framework, where the attackers can turn certain elements on or off and load specific code, called a “payload,” to create a Regin version tailored to a specific mission. Note that it is not certain whether all payloads have been discovered; there may be more than the ones specified in the report.

Regin does not appear to target any specific industrial sector; infections have been found across the board, but mostly in telecoms and among private individuals and small businesses. It is currently not known which infection vectors are used to infect a specific target with the Regin malware, but one could think of tricking the target into clicking a link in an e-mail, luring them to spoofed websites, or exploiting a vulnerable application installed on the victim’s computer. In one instance, according to the Symantec report, a victim was infected through Yahoo! Instant Messenger. During Operation Socialist, GCHQ used a fake LinkedIn page to trick Belgacom engineers into installing the malware. So one can expect infection to take place along those lines, but other possibilities may of course exist.


The various stages of Regin.

Regin’s architecture has six stages, called Stage 0 to Stage 5 in the Symantec report. First, a dropper trojan installs the malware on the target’s computer (Stage 0); it then loads several drivers (Stages 1 and 2); loads compression, encryption, networking and EVFS (encrypted file container) code (Stage 3); loads the encrypted file container, some additional kernel drivers and the payloads (Stage 4); and in the final stage (Stage 5) loads the main payload and the data files it needs to operate.

The malware seems to be aimed primarily at computers running the Microsoft Windows operating system, as all of the files discussed in the Symantec report are highly Windows-specific, but there may be payloads out there that target GNU/Linux or OS X computers. The full extent of the malware’s capabilities has not yet been revealed, and it will be interesting to find out more. The capabilities mentioned in the report are already vast and can be used to spy on people’s computers for extended periods of time, but I’m sure there are more payloads out there; we’ve only scratched the surface of what is possible.

Regin is a highly-complex threat to computers around the world, and seems to be specifically suited towards large-scale data collection and intelligence gathering campaigns. The development would have required significant investments of time, money and resources, and might very well have taken a few years. Some components of Regin were traced back all the way to 2003.

Western Intelligence Origins?

In recent years, various governments, like the Chinese and Russian governments, have been implicated in hacking attempts and attacks on Western infrastructure. In the article linked here, the FBI accuses the Russians of hacking for the purpose of economic espionage. However, Western governments also engage in digital warfare and espionage, and not just for national security purposes (a term that has never been legally defined); they engage in economic espionage as well. In the early 1990s, as part of the ECHELON programme, the NSA intercepted communications between Airbus and the Saudi Arabian national airline. Airbus was negotiating contracts with the Saudis, and the NSA passed information on to Boeing, which was able to deliver a more competitive proposal; as a result, Airbus lost the $6 billion contract to Boeing. This has been confirmed in the European Parliament report on ECHELON from 2001. Regin, too, very clearly demonstrates that Western intelligence agencies are deeply involved in digital espionage and digital warfare.

Due to the highly complex nature of the malware and the significant amount of effort and time required to develop, test and deploy it, together with the highly specific nature of the various payloads and the modularity of the system, it is highly likely that a state actor was behind the Regin malware. Significant effort also went into making the system stealthy and hard for anti-virus software to detect: it was carefully engineered to circumvent heuristic detection algorithms, and its modular nature makes it difficult to fingerprint.

Furthermore, when looking at the recently discovered attacks, and especially at where the victims are geographically located, it seems that the vast majority of attacks were aimed at the Russian Federation and Saudi Arabia.

According to The Intercept and Ronald Prins of the Dutch security company Fox-IT, there is no doubt that GCHQ and the NSA are behind the Regin malware. Der Spiegel revealed that NSA malware had infected the computer networks of the European Union. That might very well have been the same malware.

Stuxnet

A similar case of state-sponsored malware appeared in June 2010. In the case of Stuxnet, a disproportionate number of Iranian industrial sites were targeted. According to Symantec, which has published various reports on Stuxnet, the malware was used in one instance to change the speed of about 1,000 gas centrifuges at the Iranian nuclear facility at Natanz, thereby sabotaging the research done by Iranian scientists. This covert manipulation could have caused an explosion at the facility.

The fact that Israel and the United States are very much against Iran developing nuclear power for peaceful purposes, believing Iran to be developing nuclear weapons rather than power plants, together with Stuxnet’s purpose of attacking industrial sites, among them nuclear sites in Iran, strongly indicates that the US and/or Israeli governments are behind the Stuxnet malware. Both countries have the capabilities to develop it, and in fact they started thinking about this project as far back as 2005, when the earliest variants of Stuxnet were created.

Dangers of State-Sponsored Malware

The danger of state-sponsored malware is of course that, should it be discovered, it may very well prompt the companies, individuals or states targeted by the surveillance to take countermeasures, leading to a digital arms race. This may subsequently lead to war, especially when a nation’s critical infrastructure is targeted.

The danger of states creating malware like this and letting it out into the wild is that it compromises not only security, but also our very safety. Security gets compromised when bugs are left unfixed and back doors are built in to let the spies in and let malware do its work. This affects the safety of all of us. Government back doors and malware are not guaranteed to be used only by governments: others can get hold of the malware as well, and security vulnerabilities can be used by people other than spies. Think of criminals who are after credit card details, or who steal identities that are subsequently used for nefarious purposes.

Governments hacking other nations’ critical infrastructure would, I think, constitute an act of war. Nowadays every nation worth its salt has set up a digital warfare branch, where exploits are bought, and malware developed and deployed. Once you start causing millions of euros worth of damage to other nations’ infrastructure, you are on a slippery slope. Other countries may “hack back,” and this will inevitably lead to a digital arms race, the damage of which affects not only government computers and infrastructure, but also citizens’ computers and systems, corporations, and in some cases even our lives. The US attack on Iran’s nuclear installations with the Stuxnet malware was incredibly dangerous and could have caused severe accidents. Think of what would have happened had a nuclear meltdown occurred. And nuclear installations are not the only targets; other facilities, hospitals for instance, may come under attack as well.

Using malware to attack and hack other countries’ infrastructure is incredibly dangerous and can only lead to more problems; nothing has ever been solved by it. It causes a shady exploit market to flourish, which means that fewer and fewer critical exploits get fixed. Clearly, these exploits are worth a lot of money, and many people who previously pointed out vulnerabilities and supplied patches to software vendors are now selling these security vulnerabilities on the black market.

Security vulnerabilities need to be addressed across the board, so that all of us can be safer, instead of the spooks using software bugs, vulnerabilities and back doors against us, and deliberately leaving open gaping holes for criminals to use as well.

Dutch Intelligence Agencies AIVD/MIVD go TEMPORA

On November 21, 2014, the Dutch Ministry of the Interior and Kingdom Relations (Ministerie van Binnenlandse Zaken en Koninkrijksrelaties) sent a message to Parliament about the, in their view, necessary changes to the Wet op de inlichtingen- en veiligheidsdiensten (Wiv) 2002 (Intelligence and Security Act 2002). The old law (Wiv 2002) differentiates between cable-bound and non-cable-bound (satellite or radio) communications, and gives the intelligence agencies different powers for each of these two cases. In general, under the old law, according to Article 27, it is legal for the AIVD and MIVD to bulk-intercept non-cable-bound communications. It is not legal for them to do so for cable-bound communications (internet fibre-optic cables, etc.); in that case, it is only legal for them to intercept the communications of specific intelligence targets (as set out in Articles 25 and 26). In the case of targeted surveillance, the intercepted information can come from any source.


An outline of the new Dutch interception framework. The official document in Dutch can be found here.

The Dessens Committee concluded (PDF, on pages 10 and 11) that this distinction between the various sources of communication (cable vs non-cable) is no longer appropriate in the modern day and age, where the largest share of the world’s communications travels via cables. The way the cabinet wants to solve this problem is by changing the law so that the AIVD and its military sister agency the MIVD can lawfully intercept cable-bound communications in bulk, expanding their powers significantly. In other words, the Dutch government is planning to go full TEMPORA (original source PDF courtesy of Edward Snowden) and basically implement what GCHQ has done in Britain: bulk-intercept everything that goes across the internet.

Why does this matter?

This matters because by bulk-intercepting everything that goes across the internet, the communications of people who aren’t legitimate intelligence targets get intercepted and analysed as well. By intercepting everything, no one can have any expectation of privacy on the internet anymore, except when we all pro-actively take measures (like using strong encryption, Tor, OTR chat, VPNs, free/open source software, etc.) to make sure that our privacy is not being surreptitiously invaded by the spooks. It is especially important to do this when there isn’t any proper democratic oversight in place that could stop the AIVD or MIVD from breaking the law and correct corrupting tendencies (after all, as we all know, power corrupts).

Also, the Netherlands is home to the second-largest internet exchange in the world, the Amsterdam Internet Exchange (Ams-IX), second only to the German exchange DE-CIX in Frankfurt. So a very large amount of data goes across Ams-IX’s cables, and this makes it interesting from an intelligence point of view to bulk-intercept everything that goes across it. This was previously not allowed in the Netherlands. Now, of course, if the AIVD wanted access to these bulk-intercepts, it could simply ask its sister organisation GCHQ in Britain. There is a lively market for sharing intelligence in the world. For instance, in many jurisdictions where it would be illegal for a domestic intelligence agency to spy on their own citizens, a foreign intelligence agency has no such limitations, and can then subsequently share the gained intel with the domestic intelligence agency. But now, they are building their own capacity to do this in Amsterdam on a massive scale.

In terms of intelligence targets, the AIVD currently focuses on jihadists, Islamic extremists, and due to their historical tendencies still left over from the BVD-era, left-wing activists. The BVD’s surveillance on the left-leaning portion of the Dutch population was legendary.

Legalising the existing practices of intelligence agencies is something we see more and more, and that is what is happening here.

Lawyer-client confidentiality routinely broken

A few weeks ago, I read on RT that MI5, MI6 and GCHQ routinely snoop on lawyers’ communications with their clients. In the Netherlands, lawyer-client communications are routinely intercepted by police, prison administrations and intelligence agencies. In a normal criminal case, with the police or prisons doing the intercepting, this is illegal, and any intel gained isn’t supposed to end up in court documents. But when intelligence agencies do the intercepting, it is currently legal, since there are no legal provisions prohibiting the Dutch intelligence community from recording and analysing lawyer-client communications. And on a few occasions, these communications did end up in court documents, which strongly indicates that they are routinely intercepted and analysed. There is in fact a whole IT infrastructure in place to “exclude” these communications from phone-tap records. On this page, the Dutch Bar Association explains to its members how to submit their phone numbers to this system so that their conversations with their clients are (ostensibly) excluded from the taps (only police taps, though; the intelligence community, as I’ve explained above, is not affected by this).

This trend is incredibly dangerous to the right to a fair trial. If one cannot speak honestly to one’s lawyer any more, because every word spoken to one’s lawyer is intercepted and analysed, the government suddenly holds all the cards and will always be one step ahead. How can one build a defence under those conditions?

The Netherlands is by the way still the country with the dubious distinction of having the largest absolute number of wire-taps in the world, and that’s just gleaned from (partial) police records. We don’t even know how much the AIVD and MIVD tap, since that information is classified, and “threatens national security if released,” which in my opinion is spy-speak for: “We tap so much that you’d fall off your chair in outrage if we told you, so it’s better that we don’t.”

Instead of holding the intelligence community accountable for its actions for once, and making these practices stop, the government has always taken the position of legalising current practices, which, if you are the government minister responsible for oversight of the intelligence community, sure is a lot easier than confronting a powerful intelligence agency that may well hold some dirt on you.

All of these developments are so dangerous to our way of living, and to any sane definition of a free, open and democratic society where government is accountable to the people it claims to represent, that it makes me want to proclaim, as Cicero exasperatedly did in his first oration against Senator Catilina:

“O tempora! O mores!”

In the Roman case, Catilina conspired to overthrow the Republic and the Senate, and Cicero was frustrated that, in spite of all the evidence presented, Catilina was still not sentenced for the coup, whereas in earlier times in Roman history, Cicero noted, people had been executed on far less evidence.

Now we have the situation that, in spite of the mountains of evidence we have thanks to Snowden, governments around the world still won’t take the prudent and necessary steps to hold the intelligence community to account. We need to take action, and start to encrypt. As soon as the vast majority of the world’s communications are encrypted using strong encryption (not the kind where the NSA “helpfully” gives NIST the special constants to use in the standardisation of a crypto algorithm, all for free), blatantly collecting everything will be of no use.

The Age of the Gait-Recognising Cameras Is Here!


A few days ago I read an article (NRC, Dutch, published 11 September, interestingly) about how TNO (the Dutch Organisation for Applied Scientific Research, the largest research institute in the Netherlands) developed technology (PDF) for smart cameras for use at Amsterdam Schiphol Airport. These cameras were installed at Schiphol by Qubit Visual Intelligence, a company from The Hague, and are designed to recognise certain “suspicious behaviour,” such as running, waving your arms, or sweating.

Curiously enough, these are all things commonly found in the stressful environment that an international airport is for many people. People need to get to the gate on time, which may require running (especially if you arrived at Schiphol by train, which in the Netherlands is notoriously unreliable); they may be afraid of flying and be trying to get their nerves under control; and airports are also places where friends and family meet again after long periods abroad, which (if you want to hug each other) requires arm waving.

I suspect that this technology will therefore produce a lot of false positives. It’s the wrong technology in the wrong place. I fully understand the need for airport security, and we all want a safe environment for both passengers and crew. Flights need to operate under safe conditions. What I don’t understand is the mentality that every single risk in life needs to be minimised away by government agencies and combated with technology. More technology does not equal safer airports.

Security Theatre

A lot of the measures taken at airports constitute security theatre: they are largely ineffective against real threats and serve mostly for show. The problem with automatic profiling, which is what this programme tries to do as well, is that it doesn’t work. Security expert Bruce Schneier has written extensively about this, and I encourage you to read his 2010 essay Profiling Makes Us Less Safe about the specific case of air travel security.

The first problem is that terrorists don’t fit a specific profile; these systems can be circumvented once people figure out how; and the over-reliance on technology instead of common sense can actually cause more insecurity. In “Little Brother”, Cory Doctorow wrote about how Marcus Yallow put gravel in his shoes to fool the gait-recognising cameras at his high school so he and his friends could sneak out to play a game outside. Similar things will be done to try and fool these “smart” cameras, but the consequences can be much greater. We are actually more secure when we select people randomly instead of relying on a specific threat or behavioural profile to decide who gets screened and who gets through security without secondary screening. The whole point of random screening is that it’s random: a potential terrorist cannot know in advance which criteria will make the system pick him out. If a system does use specific criteria, and its security depends on those criteria being secret, then someone only has to observe the system for long enough to find out what they are, as the sketch below illustrates.
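As a small illustration of that argument, here is a toy simulation with made-up screening criteria of my own; it is not a model of any real system, only a sketch of why fixed, secret criteria leak through observation while random screening gives an observer nothing to plan around.

```python
# Toy simulation: a screening rule based on fixed, secret criteria can be
# reverse-engineered just by watching who gets stopped, whereas random
# screening exposes no pattern an attacker can exploit.
import random


def deterministic_screen(passenger: dict) -> bool:
    # "Secret" criteria (made-up example): one-way ticket paid in cash.
    return passenger["one_way"] and passenger["paid_cash"]


def random_screen(passenger: dict) -> bool:
    return random.random() < 0.05  # stop 5% of travellers, uniformly at random


def observe(screen, trials: int = 10_000) -> set:
    """Watch many passengers pass through and record which traits get stopped."""
    stopped_traits = set()
    for _ in range(trials):
        p = {"one_way": random.random() < 0.5, "paid_cash": random.random() < 0.5}
        if screen(p):
            stopped_traits.add((p["one_way"], p["paid_cash"]))
    return stopped_traits


print(observe(deterministic_screen))  # {(True, True)} -> the secret rule is exposed
print(observe(random_screen))         # all combinations -> nothing to avoid
```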

Technology may fail, which is something people don’t always realise. Another TNO report, entitled “Afwijkend Gedrag” (PDF; Abnormal Behaviour), states in the (admittedly tiny) section that deals with privacy concerns that collecting data about people’s abnormal behaviour is ethically justified because society as a whole can be made safer with this data and the associated technology. It also states (and this is an argument I’ve read elsewhere as well) that “society has chosen that safety and security trumps privacy.”

Now, let’s say for the sake of argument that this might be true in a general sense (although it can be debated whether this is always the case; personally I don’t think so, as sometimes the costs are simply too high, and we do need to keep a free and democratic society after all). The problem here is that the way technology and security systems are implemented is usually not something we as a society get to vote on before the (no doubt highly lucrative) contracts get signed. In this case, Qubit probably saw a way to make a quick buck by talking the Schiphol leadership and/or the government (the Dutch state holds 69.77% of the Schiphol shares) into buying their technology. It’s not something the people had a conscious debate on before a well-informed decision was made.

Major Privacy Issues

We have established that these systems are ineffective, can be circumvented (like any system), and won’t improve overall security. But much more importantly, there are major privacy issues with this technology. What Schiphol (and Qubit) is doing here is analysing and storing data on millions of passengers, the overwhelming majority of whom are completely innocent. This is like shooting a mosquito with a bazooka.

What happens with this data? We don’t know, and we have to believe Qubit and Schiphol on their word that data about non-suspect members of the public gets deleted. However, in light of recent events where it seems convenient to collect and store as much data about people as possible, I highly doubt any deletions will actually happen.

And the sad thing is: the Ministry of Security and Justice is now talking about implementing the above-mentioned behavioural analysis system at another (secret) location in the Netherlands. Are we all human guinea pigs, ready to be tested and played around with?

What is (ab)normal?

There are also problems with the definitions. This is something I see again and again with privacy-infringing projects like this. What constitutes “abnormal behaviour”? Who gets to decide that, and who controls what is abnormal behaviour and what isn’t? Maybe, in the not-too-distant future, the meaning of the word “abnormal” begins to shift and comes to mean “not like us,” for some definition of “us.” George Orwell described this effect in his book Nineteen Eighty-Four, where ubiquitous telescreens watch and analyse your every move and one can never be sure which thoughts are criminal and which aren’t.

In 2009, when the European research project INDECT was funded by the European Union, critical questions were put to the European Commission by the European Parliament. More precisely, this was asked:

Question from EP: How does the Commission define the term abnormal behaviour used in the programme?

Answer from EC: As to the precise questions, the Commission would like to clarify that the term behaviour or abnormal behaviour is not defined by the Commission. It is up to applying consortia to do so when submitting a proposal, where each of the different projects aims at improving the operational efficiency of law enforcement services, by providing novel technical assistance.

(Source: Europarl (Written questions by Alexander Alvaro (ALDE) to the Commission))

In other words: according to the European Commission it depends on the individual projects, which all happen to be vague about their exact definitions. And when you don’t pin down definitions like this (and anchor them in law so that the powerful governments and corporations that oversee these systems can be held to account!), they can be changed over time when a new leadership comes to power, either within the corporation in control of the technology, or within government. This is a danger that is often overlooked. There is no guarantee that we will always live in a democratic and free society, and the best defence against abuse of power is to make sure that those in power have as little data about you as possible.

Keeping these definitions vague is a major tactic in scaring people into submission. This has the inherent danger of legislative feature creep. A measure that once was implemented for one specific purpose soon gets used for another if the opportunity presents itself. Once it is observed that people are getting arrested for seemingly innocent things, many people (sub)consciously adjust their own behaviour. It works similarly with free speech: once certain opinions and utterances are deemed against the law, and are acted upon by law enforcement, many people start thinking twice about what they say and write. They start to self-censor, and this erodes people’s freedom to the point where we slowly shift into a technocratic Orwellian nightmare. And when we wake up it will already be too late to turn the tide.

Country X: The Country That Shall Not Be Named

On Monday, 19 May 2014, Glenn Greenwald published his report entitled Data Pirates of the Caribbean: The NSA is recording every cell call in the Bahamas, in which he reported on the NSA’s SOMALGET program, which is part of the larger MYSTIC program. MYSTIC has been used to intercept the communications of several countries, namely the Bahamas, Mexico, Kenya and the Philippines, and thanks to WikiLeaks we now know that the final country, redacted in Glenn Greenwald’s original report on these programs, was Afghanistan.

SOMALGET can be used to take in the entire audio stream (not just the metadata) of all the calls in an entire country, and to store this information for (at least) 30 days. This is a capability the NSA developed, and it was reported by The Washington Post in March this year.

Why the Censorship?

The question, however, is why Glenn Greenwald chose to censor the name of Afghanistan out of his report. He claims it was done to protect lives, but I honestly can’t for the life of me figure out why lives would be at risk if it were revealed to the Afghans that their country is one of the most heavily surveilled on the planet. This information is not exactly a secret. Why is this knowledge OK for Bahamians to possess, but not for Afghans? The US effectively colonised Afghanistan, and it seems that everyone with at least half a brain can figure out that calling someone in Afghanistan carries a very high risk of being recorded and analysed by the NSA. Now we know for certain that the probability of this happening is 1.

Whistleblowers risk their lives and livelihoods to bring to the public’s attention information that they deem to be in the gravest public interest. Whistleblowers carefully consider which information to publish and/or hand over to journalists, and intelligence whistleblowers in particular are clearly more expert than most journalists when it comes to security and to sensing which information has to be kept from the public in the interest of the safety of lives and which can be published in the public interest. After all, they have been doing exactly that for most of their professional lives, in a security-related context.

Now, it seems that Greenwald acts as a sort of filter between the information Edward Snowden gave him for publication and the information the public is actually getting. Greenwald is sitting on an absolute treasure trove of information and is clearly cherry-picking which information to publish and which to withhold. By what criteria, I wonder? Spreading out the publication of data is, however, a good strategy: about a year has passed since the first disclosures, and it’s still very much in the media, which is clearly a very good thing. I don’t think that would have happened if all the information had been dumped at once.

But on the other hand: Snowden risked his life and left his comfortable life in Hawaii behind to make this information public, a very brave thing to do and certainly not a decision to take lightly, and he personally selected Greenwald to receive this information. And here is a journalist who is openly cherry-picking and censoring the information given to him, already preselected by Snowden, and thereby withholding potentially critical information from the public.

So I would hereby like to ask: by what criteria is Greenwald selecting information for publication? Why the need to interfere with the judgement of the whistleblower, who is clearly more expert at assessing the security-related issues surrounding publication?

Annie Machon, whistleblower and former MI5 officer, has also done an interview on RT about this Afghanistan-censoring business of Greenwald’s and why whistleblowers deserve full coverage. Do watch. Whistleblowers risk their lives to keep the public informed of government and corporate wrongdoing. They need our support.

Update: Mensoh has also written a good article (titled: The Deception) about Greenwald’s actions, also in relation to SOMALGET and other releases. A highly recommended read.