
Pegasus: NSO Group’s Insidious Spyware

Note: This article was first published at the World Ethical Data Forum.

“Malware Infection” by Visual Content is licensed under CC BY 2.0

Pegasus is advanced spyware, first discovered in August 2016, developed by the Israel-based NSO Group and sold to clients around the world, including Saudi Arabia, Bahrain, the UAE, India, Kazakhstan, Hungary, Rwanda, Azerbaijan, Morocco and Mexico, and probably other nations as well. NSO Group markets it as a “world-leading cyber intelligence solution that enables law enforcement and intelligence agencies to remotely and covertly extract valuable intelligence from virtually any mobile device”.

There’s a huge market for spyware like this: NSO Group is far from the only seller, and countless other corporations across the globe are active in this market.

The reason we are writing about this now is that this month, in July 2021, seventeen news media organisations investigated a leak of over 50,000 phone numbers believed to have been selected as targets by clients of NSO Group since 2016. Pegasus continues to be widely used by authoritarian governments to spy on human rights activists, journalists and lawyers across the world.

Pegasus can infect phones running either Apple’s iOS operating system or Google’s Android OS. The earliest versions – from around 2016 until 2019 – used spear phishing: text messages or e-mails that trick targets into clicking on a malicious link. Nowadays, however, Pegasus can infect phones via “zero-click” attacks, which require no interaction from the user at all for the malware to infect the phone; this appears to be the dominant attack method now. Whenever OTA (over-the-air) zero-click exploits are not possible, NSO Group states in its marketing material (page 12) that it will resort to sending a custom-crafted message via SMS, a messaging app (like WhatsApp) or e-mail, hoping that the target will click the link. In other words, when the newer zero-click exploits don’t work, targets can still be infected using the older spear-phishing approach. Both the OTA and ESEM (Enhanced Social Engineering Message) methods require only that the operator know a phone number or e-mail address used by the target; nothing more is needed to infect them.

Pegasus is designed to overcome several obstacles on smartphones, namely: encryption, abundance of various communication apps, targets being outside the interception domain (roaming, face-to-face meetings, use of private networks), masking (use of virtual identities making it almost impossible to track and trace), and SIM replacement.

What data is collected?

Several types of data are extracted from, or made accessible on, the phone, namely:

  • SMS records
  • Contact details
  • Call history
  • Calendar records
  • E-mails
  • Instant messages
  • Browsing history
  • Location tracking (both cell-tower based as well as GPS based) – Cell-tower based locations get sent passively; whenever the operator requests a more precise location, the GPS gets turned on and the malware will send a precise lat-long location.
  • Voice call interception
  • Environmental (ambient) sound recordings via the microphone
  • File retrieval
  • Photo taking
  • Screen capturing

Pegasus supports both passive and active data capture: some of the capabilities above run passively, while others require active interception. Once the Pegasus malware is installed on a device, it automatically (passively) collects various data, either in real time or when specific conditions are met, depending on how the malware is configured. Active data collection, by contrast, happens only when the operator sends an explicit request to the device for information.

The collected data is transmitted to the Command &amp; Control server (C&amp;C server). If transmission is not possible, the collected data is stored in a collection buffer and sent once a connection becomes available again. To avoid detection, the buffer is capped at 5% of the free space available on the device. It operates on a FIFO basis: when the buffer is full and no internet connection is available, the oldest data is deleted and replaced by newer data, keeping the buffer at a constant size.
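The buffering behaviour described here amounts to a size-capped FIFO queue. As a rough illustration – a sketch of the described behaviour, not NSO Group’s actual code, and all names are mine – it could be modelled like this:

```python
from collections import deque

class CollectionBuffer:
    """Size-capped FIFO store: when adding a record would exceed
    the cap, the oldest records are evicted first."""

    def __init__(self, free_space_bytes, cap_fraction=0.05):
        # Cap the buffer at 5% of the device's free space,
        # as the brochure describes, to keep it inconspicuous.
        self.max_bytes = int(free_space_bytes * cap_fraction)
        self.records = deque()
        self.used = 0

    def add(self, record):
        # FIFO eviction: drop the oldest records until the new one fits.
        while self.records and self.used + len(record) > self.max_bytes:
            self.used -= len(self.records.popleft())
        if len(record) <= self.max_bytes:
            self.records.append(record)
            self.used += len(record)

    def flush(self):
        """Simulate a successful connection to the C&C server:
        return all buffered data and empty the buffer."""
        out = list(self.records)
        self.records.clear()
        self.used = 0
        return out
```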

NSO Group published a brochure explaining the capabilities and general workings of the malware.

Interestingly, the malware has self-destruct capabilities. When it cannot contact its C&amp;C server for more than 60 days, or if it detects that it was installed on the wrong device, it will self-destruct to limit discovery. That would imply it is possible to defeat the malware simply by placing your phone in a Faraday cage for 60 days.
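The self-destruct conditions amount to a simple dead-man switch. A minimal sketch of that logic – my own reconstruction of the two published triggers, not actual Pegasus code:

```python
from datetime import datetime, timedelta

SELF_DESTRUCT_AFTER = timedelta(days=60)

def should_self_destruct(last_cc_contact, now, on_intended_device):
    """Wipe if the C&C server has been unreachable for more than
    60 days, or if the implant finds itself on the wrong device."""
    return (not on_intended_device) or (now - last_cc_contact > SELF_DESTRUCT_AFTER)
```

Which is exactly why 60 days in a Faraday cage (no C&amp;C contact) would trip the first condition.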

How do you detect if you have been infected?

NSO Group claims that Pegasus leaves no traces whatsoever. That statement isn’t true. Amnesty International did quite a bit of research on the malware, which you can read in full detail.

It is possible to detect a Pegasus infection by checking the Safari logs for strange redirects: redirects to URLs that contain multiple subdomains, a non-standard port number, and a random URI request string. For instance, Amnesty analysed an activist’s phone and discovered that immediately after the user tried to visit Yahoo.FR, the phone redirected to a very strange URL (after which, I presume, the browser loads Yahoo to avoid detection). To see these requests, you need to check Safari’s Session Resource logs instead of the browsing history: Safari only records the final site reached in the browsing history, not all the redirects along the way.
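The traits Amnesty describes (many subdomains, a non-standard port, a random-looking request string) lend themselves to a simple heuristic. A sketch – the thresholds and regex are my own guesses, and a real check should match against Amnesty’s published domain IoCs instead:

```python
import re
from urllib.parse import urlsplit

def looks_like_pegasus_redirect(url):
    """Flag URLs with many subdomains, a non-standard port and a
    random-looking path, the traits of the observed redirects."""
    parts = urlsplit(url)
    host = parts.hostname or ""
    many_subdomains = host.count(".") >= 3          # e.g. a.b.c.example.com
    nonstandard_port = parts.port not in (None, 80, 443)
    random_path = bool(re.fullmatch(r"/[A-Za-z0-9]{8,}", parts.path))
    return many_subdomains and nonstandard_port and random_path
```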

Another way to detect it is via the appearance of weird, malicious processes. According to Amnesty, both Maati Monjib and Omar Radi’s network usage databases contained a suspicious process called “bh”. This process was observed on multiple occasions immediately following visits to Pegasus installation domains, so it is probably related. References to “bh” were also found in Pegasus’ iOS sample recovered from the 2016 attacks against UAE human rights defender Ahmed Mansoor, analyzed by Lookout.

Other processes associated with Pegasus appear to be “msgacntd”, “roleaboutd”, “pcsd” and “fmld”. There seem to be many others, and there is also evidence that Pegasus spoofs the names of legitimate iOS processes to avoid detection.
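Given a list of observed process names (for instance from a device’s network usage database), checking them against such indicators is straightforward. A minimal sketch, using only the handful of names mentioned above – a real check should use the full Amnesty Tech IoC list:

```python
# The handful of Pegasus-linked process names mentioned in the text;
# the full Amnesty Tech IoC list is far longer.
SUSPICIOUS_PROCESSES = {"bh", "msgacntd", "roleaboutd", "pcsd", "fmld"}

def flag_suspicious(observed_names):
    """Return observed process names matching known indicators,
    case-insensitively, in sorted order."""
    return sorted(p for p in observed_names if p.lower() in SUSPICIOUS_PROCESSES)
```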

Here are the full Pegasus Indicators of Compromise, courtesy of Amnesty Tech, which lists suspicious files, domains, infrastructure, e-mails, and processes. Also helpful is the Mobile Verification Toolkit (MVT), which is a collection of utilities to simplify and automate the process of gathering forensic traces helpful to identify a potential compromise of Android and iOS devices.

Interestingly, NSO Group rapidly shut down its Version 3 server infrastructure after the publications by Amnesty International and Citizen Lab on 1 August 2018:

Source: Amnesty

After this, they moved on to Version 4 infrastructure and restructured the architecture to make it harder to detect.

Measures to take to best secure your phone

Luckily, Apple reacted quickly when the malware was first discovered in 2016: the company issued a security update (iOS 9.3.5) that patched all three vulnerabilities Pegasus was then known to use. Google notified Pegasus targets directly using the leaked list. According to cybersecurity company Kaspersky, if you have always updated your iOS phone and/or iPad as soon as possible and (in case you use Android) you haven’t received a notification from Google, you are probably safe and not under surveillance by Pegasus.

Of course, bear in mind that the Kaspersky article was published a few years ago, and that the fight against Pegasus and similar malware continues: as Apple, Google and other parties keep pushing updates, NSO Group and its competitors will keep hunting for zero-days and other hacks to ensure their software can continue to infect a wide variety of devices. It’s basically an arms race. To limit the chances of infection, you have to remain continually vigilant and take proactive measures to improve your devices’ security.

Here are several additional tips to keep your devices as secure as possible.

  • First of all, install updates to your software (the operating system – iOS, Android or others – as well as general software and apps) as quickly as possible. Updates often fix important security vulnerabilities, so running recent software on your devices makes you a lot less vulnerable. Check for updates regularly and, if possible, configure your systems to install updates automatically, so you’re optimally protected.
       
  • A second tip is to not click suspicious links sent to you via instant messaging, SMS or e-mail. These can be hard to spot, but signs like spelling and grammar mistakes, a sudden change in tone (such as the way you’re being addressed), or simply an odd sequence of events (why would this company ask you to click a link right now?) can help you detect suspicious messages. Detection is made harder by the fact that smartphones often don’t show where a link leads until it’s already too late and you’ve clicked on it.
     
  • Another way to protect yourself is to get as many companies as possible to send you physical mail, if that is an option, instead of e-mails or other messages. That way, when you suddenly receive an email from, for example, your utility company, this raises a lot more red flags than if your normal communication with this party also goes over email. If you receive a link from a suspicious source, do not click on it.

Conclusions

Advanced spyware, whether fielded by intelligence agencies or sold by private companies like NSO Group, is used to target human rights activists, journalists and other people active in organisations trying to effect societal change. Instruments that were previously used against criminals and terrorists are now being fielded on a massive scale against journalists and activists who have no criminal intent. Of course, seen from the viewpoint of various regimes around the world, often with atrocious human rights records, the very existence of a free press, or of people campaigning for societal change, seems like a threat. However, the Universal Declaration of Human Rights considers certain rights to be inalienable and common to all people. The fact that highly advanced spyware is being used to disrupt and interfere with people trying to exercise their universal human rights is highly concerning.

We will only see more cyber attacks in the future, and they will become ever more sophisticated. They will also become harder and harder to defend against, as the internet and computer networks have become a battleground for intelligence – for nation states, criminals and corporations alike.

Automatically update WordPress to the latest version

This post is a quick, temporary break from my usual privacy/civil rights posts, to a post of a slightly more technical nature.

As WordPress is the most popular blogging platform on the internet, keeping it updated is crucial. However, the way WordPress runs at certain clients of mine means it’s not always just a question of clicking a button (or of it happening automatically, as in recent versions of WordPress).

For security reasons, at certain websites in need of high security, but whose editors still want the ease of use of something familiar like WordPress, I like to keep WordPress off the publicly-accessible internet and have a static HTML copy of the website publicly accessible instead. This has security advantages (the public web server only has to serve static HTML and images) and also puts much less load on the server, allowing it to handle a much higher number of requests. It does, however, break the automatic update feature built into WordPress.

I recently wrote a script that automatically updates WordPress to the latest version available from the WordPress website. This is useful in cases where the built-in automatic update feature does not work, for instance when the admin interface is not reachable from the public internet, so it never learns that a new version exists and cannot fetch the updates.

In that case you’re forced to do the updates manually, and the script was designed to help with that. Instead of manually removing certain directories and files, downloading the tarball from the official WordPress website, checking the SHA-1 checksum and then carefully copying the files and directories back over, the script expedites the whole task.

Usage

The script is meant to be run whilst in the directory containing the WordPress files. Put the script somewhere in your PATH, go to the directory containing your WordPress files, then run it like so:

$ update-wordpress.sh

The script automatically detects the latest available version (from the WordPress website), downloads it if necessary (or else uses the copy of WordPress stored in the cache), and only updates the website if the installed and latest versions don’t match.
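Two pieces of that logic, version comparison and checksum verification, can be sketched as follows (a simplified illustration in Python, not the shell script itself; the function names are mine):

```python
import hashlib

def needs_update(installed, latest):
    """Compare dotted version strings numerically, so that
    '5.9' correctly sorts before '5.10'."""
    as_tuple = lambda v: tuple(int(part) for part in v.split("."))
    return as_tuple(installed) < as_tuple(latest)

def checksum_ok(tarball_bytes, expected_sha1):
    """Verify the downloaded tarball against its published SHA-1 checksum."""
    return hashlib.sha1(tarball_bytes).hexdigest() == expected_sha1
```

Comparing versions numerically rather than as strings matters: a naive string comparison would consider “5.9” newer than “5.10”.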

Git

The script will also automatically detect if it’s running in a git repository. If this is the case, it will use the git rm command to properly record the removal of directories, and then do a git add . at the end.

To save even more time, the script can also auto-commit and push the changes to a git repository. For this, the variables GIT_AUTOCOMMIT and GIT_PUSH exist; both default to true, meaning the script will automatically make a commit with the message:

Updated WordPress to version <version>

and then push the changes to the git repository, provided that you’ve configured git so that a simple git push works.
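The git portion of the workflow boils down to a fixed sequence of commands. A sketch of how that sequence could be assembled and run (illustrative Python, not the actual shell script):

```python
import subprocess

def git_commands(version, push=True):
    """Build the git command sequence for recording the update:
    stage everything, commit with the standard message, optionally push."""
    cmds = [
        ["git", "add", "."],
        ["git", "commit", "-m", f"Updated WordPress to version {version}"],
    ]
    if push:
        cmds.append(["git", "push"])
    return cmds

def run_all(cmds):
    # Run each command in order, aborting on the first failure.
    for cmd in cmds:
        subprocess.run(cmd, check=True)
```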

Caching

The script caches the latest version of WordPress in a directory in your home directory called $HOME/.update_wordpress_cache, where it puts the latest.tgz file from the WordPress website, the SHA-1 checksum, and the actual files unpacked in a wordpress directory. This prevents the script from re-downloading the files when you have multiple sites to update.

License

The script is free software, MIT licensed, and the code is on GitHub.

Belgian Privacy Commission Found Facebook in Violation of EU and Belgian Privacy Law

About two weeks ago, KU Leuven University and the Vrije Universiteit Brussel in Belgium published a report, commissioned by the Belgian Privacy Commission, about Facebook’s tracking behaviour on the internet – more specifically, how Facebook tracks its users (and non-users!) through the ‘Like’ and Share buttons found on millions of websites across the internet.

Based on this report and the technical report, the Belgian Privacy Commission published a recommendation, which can be found here. A summary article of the findings is also published.

Findings

The results of the investigation are depressing: Facebook disregards European and Belgian privacy law in various ways. In fact, the commission found 10 legal issues. Facebook frequently dismisses its own severe privacy violations as “bugs” that are still on the list to be fixed, ignoring the fact that these “bugs” are a major part of its business model. This allows Facebook to let various privacy commissioners think the violations are the result of unintended functionality, while in fact the entire business model of Facebook is based on profiling people.

Which law applies?

Facebook also does not accept that Belgian law applies in this case, claiming that because it has an office in Ireland, it is bound only by Irish privacy law. This is simply not the case. The general rule seems to be that if you focus your site on a specific market – say, Germany – as evidenced by a German translation of your site, a .de top-level domain, and various other indicators (such as the payment options offered, if your site sells products or services, or perhaps your marketing materials), then you are bound by German law as well. This is done to protect German customers, in this example.

The same principle applies to Facebook. It is active world-wide, and so should be prepared to adjust its services to comply with the various laws and regulations of all these countries. This is a difficult task, as laws are often incompatible, but it is necessary to safeguard consumers’ rights. In Facebook’s case, if its Like and Share buttons were built so that they don’t phone home on page load and don’t place cookies without the user’s consent, it would have far fewer legal problems. The easiest way to comply, if you run such an international site, is to take the strictest applicable legislation and implement that.

In fact, the real reason Facebook is in Ireland is mostly tax: it allows the company to avoid taxes by means of the Double Irish and Dutch Sandwich financial constructions.

Another problem is that users cannot prevent Facebook from using the information they post on the social network for purposes other than the pure social networking functionality. The information people post, and other information that Facebook aggregates and collects from other sources, is used by Facebook for different purposes without the express and informed consent of the people concerned.

The problem with the ‘Like’ button

Special attention was given to the ‘Like’ and ‘Share’ buttons found on many sites across the internet. It was found that these social sharing plugins, as Facebook calls them, place a uniquely identifying cookie on users’ computers, which allows Facebook to correlate a large part of their browsing history. Another finding is that Facebook places this uniquely identifying datr cookie on the European Interactive Digital Advertising Alliance opt-out site, where Facebook is listed as one of the participants. It also places an oo cookie (which presumably stands for “opt-out”) once you opt out of the advertising tracking. Of course, when you remove this cookie from your browser, Facebook is free to track you again. Also note that it does not place these cookies on the US or Canadian opt-out sites.

As I’ve written earlier in July 2013, the problem with the ‘Like’ button is that it phones home to Facebook without the user having to interact with the button itself. The very act of it loading on the page means that Facebook gets various information from users’ browsers, such as the current page visited, a unique browser identifying cookie called the datr cookie, and this information allows them to correlate all the pages you visit with your profile that they keep on you. As the Belgian investigators confirmed, this happens even when you don’t have an account with Facebook, when it is deactivated or when you are not logged into Facebook. As you surf the internet, a large part of your browsing history gets shared with Facebook, due to the fact that these buttons are found everywhere, on millions of websites across the world.
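To see why a third-party button embedded everywhere enables this correlation, consider a toy model of the server side: each button load arrives carrying the page URL (via the Referer header) and the browser’s unique datr cookie, so grouping requests by cookie reconstructs a browsing history. The data below is invented purely for illustration:

```python
from collections import defaultdict

def history_by_cookie(button_requests):
    """Group the Referer of each button load by the unique 'datr'
    cookie sent along with it: per-browser browsing histories."""
    history = defaultdict(list)
    for req in button_requests:
        cookie = req["cookies"].get("datr")
        if cookie:                      # no cookie, no correlation
            history[cookie].append(req["referer"])
    return dict(history)
```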

The Filter Bubble

A major problem of personalisation technology, such as that used by Facebook, Google and others, is that it limits the information users are exposed to. The algorithm learns what you like, and then only serves you information that you’re bound to like. The problem is that a lot of information isn’t likeable – information that isn’t nice, but is still important to know. By heavily filtering the input stream, these companies influence how we think about the world and what information we’re exposed to. Eli Pariser describes this effect in his book The Filter Bubble: What the Internet Is Hiding From You: during the Egyptian revolution he did a Google search for ‘Egypt’ and got information about the revolution, news articles and so on, while his friend only got information about holidays to Egypt – tour operators, flights, hotels. A vastly different result for the exact same search term, due to the heavy personalisation going on at Google, where algorithms infer which results you’re most likely to be interested in by analysing your previously entered search terms.

The same happens at Facebook, which controls what you see in your news feed based on what you like. The problem is that after this happens a few times, you will only see information you like, and none that is important but not likeable. This massively erodes the value Facebook will eventually have: in the end, all Facebook will be is an endless stream of posts, images and videos that you like and agree with. It becomes an automatic positive feedback machine: press a button and you get a cookie.
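The positive feedback loop described here is easy to demonstrate in a toy model (entirely hypothetical numbers, not Facebook’s actual ranking): rank topics by a learned weight, show the highest-ranked one, and let each “like” reinforce that weight.

```python
def personalised_feed(weights, rounds=5, boost=1.0):
    """Toy filter bubble: each round, show the highest-weighted topic;
    engagement boosts its weight, so the feed collapses onto one topic."""
    shown = []
    for _ in range(rounds):
        top = max(weights, key=weights.get)
        shown.append(top)
        weights[top] += boost          # liking it makes it rank even higher
    return shown
```

Even a tiny initial preference locks the feed onto a single topic after the first round.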

What value does Facebook then have as a social network, when you never come in touch with radical ideas, or ideas that you initially do not agree with, but that may alter your thinking when you come in touch with them? By never coming in touch with extraordinary ideas, we never improve. And what a poor world that would be!

RT Going Underground Interview About Regin

I recently did an interview with RT‘s Going Underground programme, presented by Afshin Rattansi. We talked about the recently-discovered highly sophisticated malware Regin, and whether GCHQ or some other nation state could be behind it. The entire episode can be watched here. For more background information about Regin, you can read my article about it.

Talk at Logan Symposium 2014, London

A few weeks ago, I was in London at the Logan Symposium 2014, which was held at the Barbican Centre in London from 5 to 7 December 2014. During this event, I gave a talk entitled: “Security Dilemmas in Publishing Leaks.” (slides, PDF) The event was organised by the Centre for Investigative Journalism in London.

The audience was a switched-on crowd of journalists and hacktivists, bringing together key figures in the fight against invasive surveillance and secrecy, and it was great to be there and provide some insights and context from a technological perspective.

Regin: The Trojan Horse From GCHQ

In 2010, Belgacom, the Belgian telecommunications company, was hacked. The attack was discovered in September 2013 and had by then been going on for years. We know from documents provided by Edward Snowden that this attack was the work of Western intelligence – specifically GCHQ – under the name Operation Socialist. Now, however, we know a little bit more about how exactly the attack was carried out, and by what means: internet connections of Belgacom employees were redirected to a fake LinkedIn page that was used to infect their computers with malware, called “implants” in GCHQ parlance. We now know that Regin is the name given to the highly complex malware that appears to have been used during Operation Socialist.

Symantec recently reported on this malware (the full technical paper (PDF) can be found here), and its behaviour is highly complex. It can adapt to very specific missions, and its authors have made tremendous efforts to make it hard to detect. Because the malware can adapt and change, and most anti-virus detection relies on heuristics or fingerprints of known malware, Regin was able to fool anti-virus software and stay undetected. However, Symantec put two and two together and has now revealed some of Regin’s inner workings.

The infections range from telecoms and internet backbones (20% of infections) to the hospitality (hotels, etc.), energy, airline and research sectors, but the vast majority of infections (48%) have been of private individuals or small businesses. The targeted countries are diverse too, but the vast majority of attacks were directed against the Russian Federation (28%) and Saudi Arabia (24%).

The Regin malware works very much like a framework, into which the attackers can inject various types of code, called “payloads”, to do very specific things: capturing screenshots, taking control of the mouse, stealing passwords, monitoring network traffic and recovering files. Several remote access trojans (RATs) have been found, and even more complex payloads have been observed in the wild, such as a Microsoft IIS web server traffic monitor (which makes it easy to spy on who visits a certain website) and malware that sniffs the administration panels of mobile cellphone base station controllers.

How Regin Works

As mentioned above, Regin works as a modular framework: the attackers can turn certain elements on or off and load specific code, called a “payload”, to create a Regin version specifically suited to a given mission. Note that it is not certain that all payloads have been discovered; there may be more than those specified in the report.

Regin does not appear to target any specific industrial sector; infections have been found across the board, though mostly among telecoms, private individuals and small businesses. It is currently not known which infection vectors are used to infect a specific target, but one could think of tricking the target into clicking a link in an e-mail, luring them to spoofed websites, or exploiting a vulnerable application installed on the victim’s computer. In one instance, according to the Symantec report, a victim was infected through Yahoo! Instant Messenger. During Operation Socialist, GCHQ used a fake LinkedIn page to trick Belgacom engineers into installing the malware. One can expect infection to take place along those lines, but other possibilities may of course exist.

The various stages of Regin.

Regin’s architecture has six stages, called Stage 0 to Stage 5 in the Symantec report. First, a dropper trojan installs the malware on the target’s computer (Stage 0); it then loads several drivers (Stages 1 and 2); loads compression, encryption, networking and EVFS (encrypted file container) code (Stage 3); loads the encrypted file container, additional kernel drivers and the payloads (Stage 4); and in the final stage (Stage 5) loads the main payload and the data files it needs to operate.
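The staged architecture can be summarised as a strictly ordered pipeline, sketched below (stage descriptions paraphrased from the report; the code is purely illustrative):

```python
# Stage 0 to Stage 5, as described in Symantec's report.
REGIN_STAGES = [
    (0, "dropper installs the malware"),
    (1, "first-stage driver loaded"),
    (2, "second-stage driver loaded"),
    (3, "compression, encryption, networking and EVFS code loaded"),
    (4, "encrypted file container mounted; kernel drivers and payloads loaded"),
    (5, "main payload runs with its data files"),
]

def boot_order():
    """Each stage strictly depends on the previous one having run."""
    return [number for number, _description in REGIN_STAGES]
```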

The malware seems to be aimed primarily at computers running the Microsoft Windows operating system, as all of the files discussed in the Symantec report are highly Windows-specific, but there may be payloads out there that target GNU/Linux or OS X computers. The full extent of the malware has not been revealed, and it will be interesting to learn more about its exact capabilities. The capabilities mentioned in the report are already vast and can be used to spy on people’s computers for extended periods of time, but there must be more payloads out there; we have only scratched the surface of what is possible.

Regin is a highly-complex threat to computers around the world, and seems to be specifically suited towards large-scale data collection and intelligence gathering campaigns. The development would have required significant investments of time, money and resources, and might very well have taken a few years. Some components of Regin were traced back all the way to 2003.

Western Intelligence Origins?

In recent years, various governments, such as the Chinese and Russian governments, have been implicated in hacking attempts and attacks on Western infrastructure. In the article linked here, the FBI accuses the Russians of hacking for the purpose of economic espionage. However, Western governments also engage in digital warfare and espionage, and not just for national security purposes (a term that has never been legally defined): they engage in economic espionage too. In the early 1990s, as part of the ECHELON programme, the NSA intercepted communications between Airbus and the Saudi Arabian national airline. Airbus was negotiating contracts with the Saudis, and the NSA passed information to Boeing, which was able to deliver a more competitive proposal; as a result, Airbus lost the $6 billion contract to Boeing. This was confirmed in the European Parliament’s 2001 report on ECHELON. Regin, too, very clearly demonstrates that Western intelligence agencies are deeply involved in digital espionage and digital warfare.

Due to the highly complex nature of the malware, the significant amount of effort and time required to develop, test and deploy it, the highly specific nature of the various payloads and the modularity of the system, it is highly likely that a state actor was behind the Regin malware. Significant effort also went into making the system very stealthy and hard for anti-virus software to detect: it was carefully engineered to circumvent heuristic detection algorithms, and its modular nature makes it difficult to fingerprint.

Furthermore, looking at the recently discovered attacks, and especially at where the victims are geographically located, the vast majority of attacks appear to have been aimed against the Russian Federation and Saudi Arabia.

According to The Intercept and Ronald Prins of Dutch security company Fox-IT, there is no doubt that GCHQ and the NSA are behind the Regin malware. Der Spiegel revealed that NSA malware had infected the computer networks of the European Union; that might very well have been the same malware.

Stuxnet

A similar case of state-sponsored malware appeared in June 2010. With Stuxnet, a disproportionate number of Iranian industrial sites were targeted. According to Symantec, which has published various reports on Stuxnet, the malware was used in one instance to change the speed of about 1,000 gas centrifuges at the Iranian nuclear facility at Natanz, thereby sabotaging the work of Iranian scientists. This covert manipulation could have caused an explosion at the facility.

Israel and the United States strongly oppose Iran developing nuclear power, believing Iran is building nuclear weapons rather than peaceful power plants. That, together with Stuxnet’s purpose of attacking industrial sites – among them nuclear sites in Iran – strongly indicates that the US and/or Israeli governments are behind the Stuxnet malware. Both countries have the capability to develop it, and in fact work on the project began as far back as 2005, when the earliest variants of Stuxnet were created.

Dangers of State-Sponsored Malware

The danger of state-sponsored malware is of course that, should it be discovered, it may well prompt the companies, individuals or states under surveillance to take countermeasures, leading to a digital arms race. That may subsequently lead to war, especially when a nation’s critical infrastructure is targeted.

The danger of states creating malware like this and letting it out into the wild is that it compromises not only our security, but also our very safety. Security is compromised when bugs are left unfixed and back doors are built in to let the spies in and let the malware do its work. This affects the safety of all of us. Government back doors and malware are not guaranteed to be used only by governments: others can get hold of the malware as well, and security vulnerabilities can be exploited by more than just spies. Think of criminals who are after credit card details, or who steal identities that are subsequently used for nefarious purposes.

Governments hacking other nations’ critical infrastructure would, I think, constitute an act of war. Nowadays every nation worth its salt has set up a digital warfare branch, where exploits are bought and malware is developed and deployed. Once you start causing millions of euros’ worth of damage to other nations’ infrastructure, you are on a slippery slope. Other countries may “hack back”, and this will inevitably lead to a digital arms race, the damage of which affects not only government computers and infrastructure, but also citizens’ computers and systems, corporations, and in some cases even our lives. The US attack on Iran’s nuclear installations with the Stuxnet malware was incredibly dangerous and could have caused severe accidents. Think of what would have happened had a nuclear meltdown occurred. And nuclear installations are not the only targets; other facilities may come under attack as well, hospitals for instance.

Using malware to attack and hack other countries’ infrastructure is incredibly dangerous and can only lead to more problems; nothing has ever been solved by it. It causes a shady exploits market to flourish, which means that fewer and fewer critical exploits get fixed. These exploits are clearly worth a lot of money, and many people who were previously pointing out vulnerabilities and supplying patches to software vendors are now selling those security vulnerabilities on the black market instead.

Security vulnerabilities need to be addressed across the board, so that all of us can be safer, instead of the spooks using software bugs, vulnerabilities and back doors against us and deliberately leaving gaping holes open for criminals to use as well.

The Internet of Privacy-Infringing Things?

Let’s talk a little bit about the rapid proliferation of the so-called Internet of Things (IoT). The Internet of Things is a catch-all term for all sorts of embedded devices that are hooked up to the internet in order to make them “smarter”: able to react to certain circumstances, automate things, etcetera. This covers a wide variety of devices, such as thermostats and autonomous cars, and some of them, like smart thermostats, are already on the market, with autonomous cars following closely behind.

According to the manufacturers peddling this technology, the purpose of hooking these devices up to the internet is to let them react better and provide services that were previously impossible. An example would be a thermostat that recognises when you are home and subsequently raises the temperature of the house. It is also possible to link various IoT devices together: your autonomous car could recognise when it is (close to) home and let the thermostat automatically increase the temperature, for instance.
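The device linking described above boils down to a simple automation rule. As a minimal sketch, assuming an entirely hypothetical thermostat API (the names, thresholds and temperatures are invented for illustration):

```python
# Hypothetical sketch of linking two IoT devices: when the car reports
# that it is near home, raise the thermostat's target temperature.
HOME_RADIUS_KM = 1.0     # how close counts as "almost home"
AWAY_TEMP_C = 16.0       # energy-saving setpoint when nobody is home
COMFORT_TEMP_C = 21.0    # comfortable setpoint when (almost) home

def thermostat_target(distance_from_home_km: float) -> float:
    """Return the thermostat setpoint based on the car's reported distance."""
    if distance_from_home_km <= HOME_RADIUS_KM:
        return COMFORT_TEMP_C  # car is (almost) home: pre-heat the house
    return AWAY_TEMP_C         # car is far away: save energy

print(thermostat_target(0.4))   # car almost home -> 21.0
print(thermostat_target(12.0))  # car far away -> 16.0
```

The convenience is real, but note that implementing even this trivial rule requires the car to continuously report its location to some service, which is exactly the privacy issue discussed below.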

There are myriad problems with this technology in its current form, and some of the most basic ones in my view are privacy and security considerations. In the case of cars, Ford knows exactly where you are at all times, and knows when you are breaking the speed limit, by using the highly accurate GPS built into modern Ford cars. This technology is already active, and if you drive one of these cars, this information (your whereabouts at all times and certain metrics about the car, like its current speed and mileage) is stored and sent to Ford’s servers. Many people don’t realise this, but it was confirmed by Ford’s Global VP of Marketing and Sales, Jim Farley, at the CES trade show in Las Vegas at the beginning of this year. Farley later retracted his statements after the public outrage, claiming that he had left the wrong impression and that Ford does not track the locations of its cars without the owners’ consent.

Google’s $3.2 billion acquisition

Nest Labs, Inc. used to be a separate company making thermostats and smoke detectors, until Google bought it for a whopping $3.2 billion. The Nest thermostat is a programmable thermostat with a little artificial intelligence inside that enables it to learn what temperatures you like, turning the temperature up when you’re at home and down when you’re away. It can be controlled over WiFi from anywhere in the world via a web interface: users can log in to their accounts to change the temperature and schedules, and to see their energy usage.
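Nest’s actual learning algorithm is proprietary, but the general idea of “learning what temperatures you like” can be sketched very simply: remember the setpoints the user chooses at each hour of the day, and predict future setpoints from those past choices. Everything below is an illustrative assumption, not Nest’s implementation:

```python
# Minimal sketch of a "learning" thermostat: record the user's manual
# temperature adjustments per hour of day, and predict the setpoint for
# an hour by averaging past choices for that hour.
from collections import defaultdict

class LearningThermostat:
    def __init__(self, default_temp: float = 18.0):
        self.default_temp = default_temp
        self.history = defaultdict(list)  # hour of day -> list of chosen temps

    def record_adjustment(self, hour: int, temp: float) -> None:
        """The user manually set `temp` at `hour`; remember that choice."""
        self.history[hour].append(temp)

    def predicted_setpoint(self, hour: int) -> float:
        """Average of past choices for this hour, or the default if unknown."""
        temps = self.history[hour]
        return sum(temps) / len(temps) if temps else self.default_temp

t = LearningThermostat()
t.record_adjustment(7, 21.0)  # user likes it warm in the morning
t.record_adjustment(7, 22.0)
print(t.predicted_setpoint(7))  # 21.5
print(t.predicted_setpoint(3))  # 18.0 (no data for 3 a.m. yet)
```

Note what even this toy version implies: the device necessarily builds up a record of when you are home and awake, which is precisely the data discussed in the next section.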

Why did Google pay such an extraordinarily large amount for a thermostat company? Because I think the Internet of Things will be the next battleground in Google’s quest to gather more data. Home automation and cars are markets that Google has recently stepped into, and technologies like Nest and Google’s driver-less car generate massive amounts of data about users’ whereabouts, sleep/wake cycles, patterns of travel and energy usage, for instance. And those are just the two technologies I have chosen to focus on for this article. There are lots of different IoT devices out there that will eventually all be connected somehow, via the internet.

Privacy Concerns

One is left to wonder what is happening with all this data. Where is it stored, who has access to it, and most important of all: why is it collected in the first place? In most cases collecting this data isn’t even necessary. In the case of Ford, we have to rely on Farley’s say-so that they are the only ones with access to this data, and of course Google and every other company out there offer the same defence. I don’t believe that for one second.

The data is being collected to support a business model that we see often in the tech industry, where profiles and sensitive data about the users of a service are valuable, and are either used to better target ads or sold directly on to other companies. There seems to be a conception that the modern internet user is used to not paying for services online, and this has caused many companies to adopt an ads-based, data- and profiling-driven business model by default. However, other business models, like the Humble Bundle in the gaming industry or crowd-funding campaigns on Kickstarter and Indiegogo, have shown that internet users are perfectly willing to spend a little money, or give a small donation, for a service or device that they care about. The problem with the default ads-based business model is that it leaves users’ data vulnerable to exposure to third parties and others who have no business knowing it, and it causes companies to collect too much information about their users by default. It’s as if there is some recipe out there called “How to start a Silicon Valley start-up” that has profiling and tracking of users, and basically not caring about users’ privacy, as its central tenet. It doesn’t have to be this way.

Currently, a lot of this technology is developed and brought to market without any consideration whatsoever for the privacy of the customer or the security and integrity of the data. Central questions that in my opinion should be answered during the initial design process of any technology impacting privacy are left unanswered. Should we collect data at all, and if so, which data? How easy is it to access this data, and what if it falls into the wrong hands? It is quite conceivable that unauthorised people could gain access to it. A smart thermostat like the Google Nest knows when you’re home and knows all about your sleep/wake cycle; that is information that could be of interest to burglars, for instance. What if someone accesses your car’s firmware and changes it? And what happens when driver-less cars mix with regular, human-controlled cars on the road? This could lead to accidents.

Vulnerabilities

And what to think of all those “convenient” dashboards and other web-based interfaces that are enabled and exposed to the world on all those “smart” IoT devices? I suspect that a lot of security vulnerabilities will be found in that software. It’s all closed-source and not exposed to external code review, and the software development budgets probably aren’t large enough to accommodate examining the security and privacy implications of the software and implementing proper safeguards to protect users’ data. This is a recipe for disaster. Only with free and open source software can proper code review take place and the code be inspected for back doors and other unwanted behaviour. It generally leads to better quality software too, since more people are able to see the code and have an incentive to fix bugs in an open and welcoming community.

Do we really want to live in a world where we can’t have privacy any more, where your whereabouts are stored and analysed at all times by god-knows-who, and where all technology is hooked up together without privacy and security considerations? Look, I like technology. But I like technology to be open, so that smart people can look at its insides and determine whether the tech really does what it says on the tin, with no nasty side effects, and so that the community of users can expand upon it. It is about respecting the users’ freedom and rights; that’s what counts. Not enslaving them to closed-source technology controlled by commercial parties.

Killing Counterfeit Chips: Parallels with DRM

Last week, the Scottish chip manufacturer FTDI pushed out an update to their Windows driver that deliberately killed counterfeit FT232 chips. The FTDI FT232 is a very popular chip, found in thousands of different electronic appliances, from Arduinos to consumer electronics. The FT232 converts USB to a serial (UART) interface, which is very useful, and it is probably the most cloned chip on the planet.

Of course, not supporting counterfeit chips is any chip manufacturer’s right, since they cannot guarantee that their products work when used in conjunction with counterfeit hardware, and since providing support for devices not made by the company is a strain on customer support. This case, however, is different in that the update contains code deliberately written to (soft-)brick all counterfeit versions of the FT232. By doing this, FTDI was deliberately destroying other people’s equipment.

One could simply say: don’t use counterfeit chips. But in many cases you simply don’t know that some consumer electronic device you use contains a counterfeit FT232. Deliberately destroying other people’s equipment is a bad move, especially since FTDI doesn’t know what devices those fake chips are used in. A fake chip could, for instance, sit in a medical device on whose flawless operation people’s lives depend.

Hard to tell the difference

In the case of FTDI, one cannot easily tell an original chip from a counterfeit one; only by looking closely at the silicon are the differences between a real and a fake chip revealed. In the image above, the left one is a genuine FTDI FT232 chip and the right one is counterfeit. Can you tell the difference?

Even though they look very similar on the surface, the inner workings of the original and counterfeit chips differ. The driver update written by FTDI exploits these differences to create a driver that works as expected on original devices, but on counterfeit chips reprograms the USB product ID (PID) to 0, with the result that Windows, OS X and GNU/Linux no longer recognise the device.
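To make the mechanism concrete: every USB device identifies itself with a vendor ID and product ID pair, and the operating system matches drivers against that pair. A genuine FT232 enumerates with FTDI’s vendor ID 0x0403 and product ID 0x6001; after the driver update, affected clones report product ID 0x0000, which no driver claims. A small illustrative sketch (not FTDI’s driver code) of how a diagnostic tool might classify a device by its IDs:

```python
# Illustrative sketch: classify a USB device by its vendor/product IDs.
# A genuine FT232 enumerates as VID 0x0403, PID 0x6001; a clone hit by
# the driver update has had its PID rewritten to 0x0000, so the OS can
# no longer match a driver to it.
FTDI_VID = 0x0403
FT232_PID = 0x6001
BRICKED_PID = 0x0000

def classify_usb_device(vid: int, pid: int) -> str:
    """Return a human-readable classification of a (vid, pid) pair."""
    if vid != FTDI_VID:
        return "not an FTDI device"
    if pid == FT232_PID:
        return "FT232 (working)"
    if pid == BRICKED_PID:
        return "FT232 clone, soft-bricked (PID erased)"
    return "other FTDI device"

print(classify_usb_device(0x0403, 0x6001))  # FT232 (working)
print(classify_usb_device(0x0403, 0x0000))  # FT232 clone, soft-bricked (PID erased)
```

This is also why the brick is “soft”: the PID lives in rewritable configuration memory, so a bricked clone can be restored by reprogramming the PID back with a suitable tool.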

Parallels with Digital Rights Management (DRM)

I see some parallels with software DRM, which the Free Software Foundation aptly calls Digital Restrictions Management. Because that is what it is: it isn’t about protecting the rights of copyright holders, but about restricting what people have done since the early beginnings of humanity.

We copy. We get inspired by, modify and build upon other work, standing on the shoulders of the giants who came before us. That’s in our nature. Children copy and modify, which is great for their creativity; artists copy and modify culture to make new culture; authors read books and articles and use the ideas and insights they gain to write new books and articles, providing new insights that bring humanity as a whole forward. Musicians build upon the foundations of others to make new music. Some, like mashup artists, even outright copy other people’s music and use it in their compositions as-is, making fresh new compositions out of it. Copying and modifying is essential for human culture to thrive, survive and adapt.

According to the FSF’s definition, DRM is the practice of using technological restrictions to control what users can do with digital media, software, et cetera. Programs that prevent you from sharing songs, copying files or reading your ebooks on more than one device are forms of DRM. DRM is defective by design, as it damages the product you bought and has only one purpose: to prevent what would have been possible to do with the product or software had no DRM been imposed on you.

DRM serves no purpose other than to restrict possibilities in the interest of making you dependent on the publisher, creator or distributor (vendor lock-in), who, confronted with a rapidly changing market, chooses not to innovate and think of new business models and new ways of making money, and instead tries to impose restrictions on you in an effort to cling to outdated business models.

In the case of DRM, technical measures are put in place to prevent users from using software and media in certain ways. In the case of FTDI, technical measures are put in place to prevent users from using their own, legally purchased hardware, effectively crippling it. One often does not know whether the FT232 chip embedded in a device is genuine or counterfeit; as you can see in the image near the top of this article, the differences are tiny and hard to spot on the surface. FTDI wanted to protect their intellectual property, but doing so by sneakily exploiting differences between real and counterfeit chips, and thereby deliberately damaging people’s equipment, is not the way to go.

Luckily, a USB-to-serial-UART chip is easily replaced, but one is left to wonder what happens when other chip manufacturers, making chips that are not so easily replaced, start pulling tricks like this.