
Why I won’t recommend Signal anymore

Note: This article is also available in Portuguese, translated by Anders Bateva.

One of the things I do is cryptography and infosec training for investigative journalists who need to keep their sources and communications confidential so they can more safely do their work in the public interest. Often they work in places which are heavily surveilled, like Europe, or the United States. Ed Snowden’s documents explain a thing or two about how the US intelligence apparatus goes about its day-to-day business. They sometimes also work in places where rubber-hose cryptanalysis is more common than in, say, the U.S. or Europe. Which is why crypto tools alone are not the alpha and omega of (personal) security; careful consideration is required of what to use when, and in what situation. One of the things I have recommended in the past for various cases is OpenWhisperSystems’ app Signal, available for Android and iOS. In this article, I want to explain why I won’t be recommending Signal in the future.

To be clear: the reason for this is not security. To the best of my knowledge, the Signal protocol is cryptographically sound, and your communications should still be secure. The reason has much more to do with the way the project is run, the focus and certain dependencies of the official (Android) Signal app, as well as the future of the Internet, and what future we would like to build and live in. This post was mostly sparked by Signal’s Giphy experiment, which shows a direction for the project that I wouldn’t have taken. There are other, bigger issues which deserve our attention.

What is Signal?

Signal is an app published by OpenWhisperSystems, a company run by Moxie Marlinspike, which publishes official Signal apps for Google Android and Apple iOS. Signal has been instrumental in providing an easy-to-use, cryptographically secure texting and calling app. It is a merger of the previously separate apps TextSecure and RedPhone.

One of the main reasons I previously recommended it, besides the cryptographic security, was that it was easy to use. This is one good thing Signal has going for it: people can just install it and then communicate securely. Cryptographic software needs to be much simpler to use, and to use securely, and Signal is doing its part on the mobile platforms to create an easy-to-use secure messaging platform. I do appreciate them for that, and wanted to get that out of the way.

Multiple problems with Signal

There are, however, multiple issues with Signal, namely:

  • Lack of federation
  • Dependency on Google Cloud Messaging
  • Your contact list is not private
  • The RedPhone server is not open-source

I’ll go into these one at a time.

Lack of federation

There is a modified version of Signal called LibreSignal, which removed the Google dependency from the Signal app, allowing it to run on other (Android) devices, like CopperheadOS, or Jolla phones (with the Android compatibility layer). In May this year, however, Moxie made it clear that he does not want LibreSignal to use the Signal servers, and that he does not approve of the name. The name is something that can change; that is not a problem. What is a problem is that he does not want LibreSignal to use the Signal servers. Which would be fine if he allowed LibreSignal to federate with the Signal network using its own servers. Federation was tried once (with CyanogenMod, and also offered to Telegram, of all people) but subsequently abandoned, because Moxie believes it slows down changes to the app and/or protocol.

The whole problem with his position, however, is that I don’t see the point of doing any of this secure messaging stuff without federation. The internet was built on federation. Multiple e-mail providers and servers, for instance, can communicate effortlessly with one another, so I can send an e-mail to someone with a Gmail address or a corporate address, etc., and it all works. This works because of federation: the protocols are open standards, and there are multiple implementations of those standards that can cooperate and communicate with one another. Another example is the Jabber/XMPP protocol, which also has multiple clients on multiple platforms that can communicate securely with one another, even when the two parties have accounts on different servers.

If we don’t federate, if we don’t cooperate, what is there to stop the internet from becoming a bunch of proprietary walled gardens again? Is the internet then really nothing more than just a platform for us to use certain proprietary silo services on? Signal then, just happens to be a (partly proprietary) silo on which your messages are transmitted securely.

Dependency on Google Cloud Messaging

Currently, the official Signal client depends on Google Cloud Messaging to work correctly. The alternative developed by the people of LibreSignal has removed that dependency, so people running other software, like Jolla or CopperheadOS, can run Signal. Unfortunately, the policy decisions of OpenWhisperSystems and Moxie Marlinspike have made it impossible to reliably run unofficial Signal clients against the same server infrastructure, so that people can communicate. And federation, as explained in the previous section, is expressly hindered and prohibited by OpenWhisperSystems, so it is not an option for LibreSignal to simply run its own servers and federate within the wider Signal network, allowing people to contact each other across clients.

What is Google Cloud Messaging?

The Google Cloud Messaging service is used by Signal to send empty messages that wake up the device before the actual messages are pushed to it by Signal’s servers.[1] There is a way to use Signal without depending on GCM, but it relies on microG, and that asks people to basically re-compile their kernel (at least I had to in my case). This is not something you can ask of non-technical users. I would like to be able to run an official Signal client (or any secure messaging client) on hardware that runs CopperheadOS, for example.

Not directly related to GCM, but since Google usually has root access to Android phones, there is also the issue of integrity. Google is still cooperating with the NSA and other intelligence agencies. PRISM is also still a thing. I’m pretty sure that Google could serve a specially modified update or version of Signal to specific surveillance targets, and they would be none the wiser that they had installed malware on their phones. For this reason it would be strongly preferable to run a secure messaging client on a more secure platform. Currently this cannot be done with Signal in any official way, and it would help the people who really need secure messaging (rather than those who merely use it as a replacement for, say, WhatsApp) if the software ran on other Android distributions, like Copperhead.[2]

Your contact list (social graph) is not private

Here is the permission list of Signal, including OpenWhisperSystems’ explanation of why each is needed. As you can clearly see, Signal is allowed (if you install it) to read and modify your contacts. Signal associates phone numbers with names in a similar way to WhatsApp, and this is a big reason why they feel they need to read your contact list; there is also a usability benefit in displaying contacts’ names and pictures in the Signal app. Signal hashes the phone numbers before sending them to the server, but since the space of possible phone numbers is so small, this does not provide much security. Moxie stated previously (in 2014) that the problem of private contact discovery is difficult, laid out different strategies that don’t work or don’t give satisfying performance, and then admitted it is still an unsolved problem. Discussion of this seems to have moved from a GitHub issue to a mailing list, and I don’t know of any improvement on this front.[2]
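To illustrate why hashing offers so little protection here, consider a minimal sketch (hypothetical code, not Signal’s actual scheme): because the space of plausible phone numbers is tiny by cryptographic standards, a server that receives only hashes can simply enumerate candidate numbers until one matches.

```python
import hashlib

def hash_number(number):
    """Hash a phone number naively: a plain, unsalted digest."""
    return hashlib.sha256(number.encode()).hexdigest()

def invert_hash(target, prefix="+3161234", digits=4):
    """Brute-force the (tiny) space of candidate numbers until one matches.

    For the sketch we only enumerate the last few digits; a real attacker
    would enumerate entire national numbering plans."""
    for i in range(10 ** digits):
        candidate = prefix + str(i).zfill(digits)
        if hash_number(candidate) == target:
            return candidate
    return None

# The server only ever sees the hash, yet can recover the number:
leaked = hash_number("+31612340042")
print(invert_hash(leaked))
```

Even enumerating billions of numbers is trivial on modern hardware, which is why hashing alone does not make contact discovery private.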

This could of course all have been done differently, by using usernames to connect users instead of their phone numbers (incidentally, this would also allow people who use multiple phone numbers on the same device to use Signal reliably). And last time I checked, if you use the same phone number on a different device, Signal gets deregistered on the old device.

Another issue, and another argument for usernames, is that you may want to use Signal with people you don’t necessarily want to give your phone number to. Federation would also be easier with usernames and servers separated by a symbol like the @, just as with Jabber/XMPP. I see no usability issues here either, as even very non-technical people generally get the concept of an address, or an e-mail address, and this would be very similar.

RedPhone not open source

The calling component of Signal is called RedPhone. Its server component is unfortunately not open source, so people are prevented from running their own RedPhone servers, and this is probably also the reason why secure encrypted phone calls don’t work in e.g. LibreSignal.

I don’t know exactly what prevents the RedPhone server code from being released (whether it is legal issues or simple unwillingness), but I do think it is strange that there is no movement whatsoever to move to a different/alternative solution, that respects users’ rights.

Moving forward

Image above © ZABOU.

The big question now, as also said by @shiromarieke on Twitter, is what post-Signal tool we want to use. I don’t know the answer to that question yet, but I will lay out my minimum requirements of such a piece of software here. We as a community need to come up with a viable solution and alternative to Signal that is easy to use and that does in fact respect people’s choices, both in the hardware and software that they choose to run.

In my view, we need a tool that is fully free software (as defined by the GNU GPL), that respects users’ freedoms to inspect, use, and modify the software, and to distribute modified copies. This tool should also not depend on corporate infrastructure like Google’s (basically any PRISM partner) that allows those parties to control whether the software works correctly. Signal’s dependency on Google Cloud Messaging, and on Google technology in general, is something that should be avoided.

In the end, I think we need to move to an Internet with more federated services, not fewer, where information is openly shared and services are publicly run by many people all over the world. Otherwise, we are in danger of ending up with a neo-90s Internet, with walled gardens and paywalls all over the place. You already see this trend happening in journalism.

We need to remember that we are fighting not only against government surveillance, but against corporate surveillance as well, and we need ways to defend against both. Even when corporate solutions cannot read the communications themselves, using them creates a dependency: there is still the issue of metadata, and of course Signal’s reliance on the general availability of Google’s services.

It’s really unfortunate that OpenWhisperSystems isn’t friendlier to initiatives like LibreSignal; these people did a lot of work which is now basically going to be thrown away.

We need to cooperate more as a community instead of creating these little islands, otherwise we are not going to succeed in defeating, or even meaningfully defending against, Big Brother. Remember, our enemy knows how to divide and conquer. Divide et impera. It has been a basic tactic of government subjugation since Roman times. We should not allow our petty egos and the quest for eternal hacker fame to get in the way of our actual goal: dismantling surveillance states globally.

Notes:
[1]: An earlier version of this article stated incorrectly that GCM was used to transport Signal messages. While correct for a previous version of TextSecure, this is in fact not correct anymore for Signal. I’ve updated it, in response to this HN comment: https://news.ycombinator.com/item?id=12882815.
[2]: Clarified my position re Google and GCM and the contact list / private contact discovery issue a bit.

Automatically update WordPress to the latest version

This post is a quick, temporary break from my usual privacy/civil rights posts, to a post of a slightly more technical nature.

As WordPress is the most popular blogging platform on the internet, updates are crucial. However, the way WordPress runs at certain clients of mine means it’s not always just a question of clicking a button (or it happening automatically, as in recent versions of WordPress).

At certain websites in need of high security, but whose editors still want the ease of use of something familiar like WordPress, I like to keep WordPress off the publicly-accessible internet and make a static HTML copy of the website publicly accessible instead. This has security advantages (the public-facing web server only has to serve static HTML and images) and also puts much less load on the server, allowing it to respond to a much higher number of requests. It does, however, break the automatic update feature built into WordPress.

I recently wrote a script that can automatically update WordPress to the latest version available from the WordPress website. It is useful in cases where WordPress’s built-in automatic update does not work, for instance when the admin interface is not reachable from the public internet, so it never gets notified of new versions and can’t fetch the updates.

In that case you’re forced to do the updates manually. The script I wrote was designed to help with that: it expedites the task of updating WordPress, instead of having to manually remove certain directories and files, download the tarball from the official WordPress website, check the SHA-1 checksum, and then carefully copy the files and directories back over.

Demo

This is a quick demo of how it works:

(animated demo)

The script is meant to be run whilst in the directory containing the WordPress files. Put the script somewhere in your PATH, go to the directory containing your WordPress files, then run it like so:

$ update-wordpress.sh

The script automatically detects the latest available version (from the website), downloads it if necessary (or else uses the copy of WordPress stored in the cache), and only updates the website if the versions don’t match.
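The version check at the heart of that flow can be sketched roughly like this (a hypothetical sketch; the function names are mine, not the actual script’s):

```shell
CACHE_DIR="$HOME/.update_wordpress_cache"

installed_version() {
    # WordPress records its version in wp-includes/version.php, e.g.:
    #   $wp_version = '4.6.1';
    sed -n "s/^\$wp_version = '\([^']*\)';.*/\1/p" "$1"
}

update_needed() {
    # $1: path to wp-includes/version.php, $2: latest upstream version
    [ "$(installed_version "$1")" != "$2" ]
}

# The real script obtains the latest version string from wordpress.org
# and only downloads the tarball into $CACHE_DIR when update_needed
# reports a mismatch.
```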

Git

The script will also automatically detect if it’s running in a git repository. If this is the case, it will use the git rm command to properly record the removal of directories, and then do a git add . at the end.

To save even more time, the script can also auto-commit and push the changes back to a git repository if necessary. For this, the GIT_AUTOCOMMIT and GIT_PUSH variables exist. Both default to true, meaning that the script will automatically make a commit with the message:

Updated WordPress to version <version>

and then push the changes to the git repository, provided that you’ve configured git so that a simple git push works.
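Put together, the auto-commit step behaves roughly like this sketch (illustrative, not the script’s exact code; the function name is mine):

```shell
# Both variables default to true, as described above.
GIT_AUTOCOMMIT="${GIT_AUTOCOMMIT:-true}"
GIT_PUSH="${GIT_PUSH:-true}"

commit_update() {
    # $1: the WordPress version that was just installed
    if [ "$GIT_AUTOCOMMIT" = "true" ]; then
        git add .
        git commit -m "Updated WordPress to version $1"
        if [ "$GIT_PUSH" = "true" ]; then
            git push
        fi
    fi
}
```

Setting GIT_AUTOCOMMIT=false skips both steps, while GIT_PUSH=false commits locally without pushing.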

Caching

The script caches the latest version of WordPress in a directory in your home directory called $HOME/.update_wordpress_cache, where it puts the latest.tgz file from the WordPress website, the SHA-1 checksum, and the actual files unpacked in a wordpress directory. This prevents the script from re-downloading the files when you have multiple sites to update.
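The checksum verification against the cached tarball can be sketched as follows (a hypothetical helper, not the script’s exact code; wordpress.org publishes the SHA-1 of the release tarball alongside the archive):

```shell
verify_tarball() {
    # $1: path to the cached latest.tgz
    # $2: expected SHA-1 checksum (as published by wordpress.org)
    actual="$(sha1sum "$1" | awk '{print $1}')"
    [ "$actual" = "$2" ]
}

# Only unpack the cached tarball over the site when the checksum matches:
#   verify_tarball "$HOME/.update_wordpress_cache/latest.tgz" "$expected" || exit 1
```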

License

The script is free software, MIT licensed, and the code is on GitHub.

The Internet of Privacy-Infringing Things?

Let’s talk a little bit about the rapid proliferation of the so-called Internet of Things (IoT). The Internet of Things is a catch-all term for all sorts of embedded devices that are hooked up to the internet in order to make them “smarter”: able to react to certain circumstances, automate things, etcetera. This can include many devices, such as thermostats, autonomous cars, etc. There is a wide variety of possibilities, and some of them, like smart thermostats, are already on the market, with autonomous cars following closely behind.

According to the manufacturers who are peddling this technology, the purpose of hooking these devices up to the internet is to be able to react better and provide more services that were previously impossible to execute. An example would be a thermostat that recognises when you are home, and subsequently raises the temperature of the house. There are also scenarios possible of linking various IoT devices together, like using your autonomous car to recognise when it is (close to) home and then letting the thermostat automatically increase the temperature, for instance.

There are myriad problems with this technology in its current form. Some of the most basic ones, in my view, are privacy and security considerations. In the case of cars, Ford knows exactly where you are at all times, and knows when you are breaking the speed limit, by using the highly accurate GPS built into modern Ford cars. This technology is already active, and if you drive one of these cars, this information (your whereabouts at all times, plus certain metrics about the car, like current speed and mileage) is stored and sent to Ford’s servers. Many people don’t realise this, but it was confirmed by Ford’s Global VP of Marketing and Sales, Jim Farley, at the CES trade show in Las Vegas at the beginning of this year. Farley later retracted his statements after the public outrage, claiming that he had left the wrong impression and that Ford does not track the locations of its cars without the owners’ consent.

Google’s $3.2 billion acquisition

Nest Labs, Inc. used to be a separate company making thermostats and smoke detectors, until Google bought it for a whopping $3.2 billion. The Nest thermostat is a programmable thermostat with a little artificial intelligence inside that enables it to learn what temperatures you like, turning the temperature up when you’re at home and down when you’re away. It can be controlled via WiFi from anywhere in the world through a web interface: users can log in to their accounts to change the temperature and schedules, and to see energy usage.

Why did Google pay such an extraordinarily large amount for a thermostat company? I think the Internet of Things will be Google’s next battleground for gathering data. Home automation and cars are markets that Google has recently stepped into. Technologies like Nest and Google’s driverless car generate massive amounts of data about users’ whereabouts, sleep/wake cycles, patterns of travel, and energy usage, for instance. And these are just the two technologies I have chosen to focus on in this article. There are lots of different IoT devices out there that will eventually all be connected somehow, via the internet.

Privacy Concerns

One is left to wonder what is happening with all this data. Where is it stored, who has access to it, and most importantly: why is it collected in the first place? In most cases this data collection isn’t even necessary. In the case of Ford, we have to rely on Farley’s say-so that Ford is the only party with access to this data, and of course Google and every other company out there offers the same defence. I don’t believe that for one second.

The data is being collected to support a business model we see often in the tech industry, where profiles and sensitive data about the users of a service are valuable, and are either used to better target ads or sold on to other companies. There seems to be a conception that the modern internet user is used to not paying for services online, and this has led many companies to adopt the default ads-, data- and profiling-based business model. However, other business models, like the Humble Bundle in the gaming industry, or crowd-funding campaigns on Kickstarter or Indiegogo, have shown that internet users are perfectly willing to spend a little money or give a small donation for a service or device they care about.

The problem with the default ads-based business model is that it leaves users’ data vulnerable to exposure to third parties who have no business knowing it, and it also causes companies to collect too much information about their users by default. It’s as if there is some recipe out there called “How to start a Silicon Valley start-up” that has profiling and tracking of users, and basically not caring about their privacy, as its central tenet. It doesn’t have to be this way.

Currently, a lot of this technology is developed and brought to market without any consideration of customer privacy or the security and integrity of the data. Central questions that, in my opinion, should be answered during the initial design process of any technology impacting privacy are left unanswered. First, should we collect this data at all, and if so, which data? How easy is it to access this data? It is quite conceivable that unauthorized people could gain access to it too. What if it falls into the wrong hands? A smart thermostat like the Google Nest knows when you’re home and knows all about your sleep/wake cycle: information that could be of interest to burglars, for instance. What if someone accesses your car’s firmware and changes it? What happens when driverless cars mix with regular, human-controlled cars on the road? This could lead to accidents.

Vulnerabilities

And what to think of all those “convenient” dashboards and other web-based interfaces that are enabled on all those “smart” IoT devices and exposed to the world? I suspect a lot of security vulnerabilities will be found in that software. It is all closed-source and not exposed to external code review, and the development budgets probably aren’t large enough to accommodate examining the security and privacy implications of the software and implementing proper safeguards to protect users’ data. This is a recipe for disaster. Only with free and open source software can proper code review take place and the code be inspected for back-doors and other unwanted behaviour. It also generally leads to better-quality software, since more people can see the code and have an incentive to fix bugs in an open and welcoming community.

Do we really want to live in a world where we can’t have privacy any more, where your whereabouts are at all times stored and analysed by god-knows-who, and all technology is hooked up together without privacy and security considerations? Look, I like technology. But I like technology to be open, so that smart people can look at its insides and determine whether it really does what it says on the tin, with no nasty side effects, and so that the community of users can expand upon it. Respecting users’ freedom and rights is what counts, not enslaving them to closed-source technology controlled by commercial parties.

Killing Counterfeit Chips: Parallels with DRM

Last week, the Scottish chip manufacturer FTDI pushed out an update to their Windows driver that deliberately killed counterfeit FT232 chips. The FTDI FT232 is a very popular chip, found in thousands of different electronic appliances, from Arduinos to consumer electronics. The FT232 converts USB to a serial port, which is very useful, and it is probably the most cloned chip on the planet.

Of course, not supporting counterfeit chips is any chip manufacturer’s right: they cannot guarantee that their products work in conjunction with counterfeit hardware, and providing support for devices not made by the company is a strain on customer support. This case, however, is slightly different, in that the update contains code deliberately written to (soft-)brick all counterfeit versions of the FT232. By doing this, FTDI was deliberately destroying other people’s equipment.

One could simply say: don’t use counterfeit chips. But in many cases you simply don’t know that some consumer electronic device you use contains a counterfeit FT232. Deliberately destroying other people’s equipment is a bad move, especially since FTDI doesn’t know what device that fake chip is used in. It could, for instance, be a medical device on whose flawless operation people’s lives depend.

Hard to tell the difference

In the case of FTDI, one cannot easily tell an original chip from a counterfeit one; only by closely examining the silicon are the differences revealed. In the image above, the left one is a genuine FTDI FT232 chip; the right one is counterfeit. Can you tell the difference?

Even though they look very similar on the surface, the inner workings of the counterfeit chips differ from the originals. The driver update written by FTDI exploits these differences to create a driver that works as expected on original devices, but on counterfeit chips reprograms the USB product ID (PID) to 0, which makes the device unrecognizable to Windows, OS X, and GNU/Linux.

Parallels with Digital Rights Management (DRM)

I see some parallels with software DRM, which the Free Software Foundation aptly calls Digital Restrictions Management. Because that is what it is: it isn’t about protecting the rights of copyright holders, but about restricting what people have always done since the early beginnings of humanity.

We copy. We get inspired by, modify, and build upon the work of others, standing on the shoulders of the giants who came before us. That’s in our nature. Children copy and modify, which is great for their creativity; artists copy and modify culture to make new culture; authors read books and articles and use the ideas and insights they gain to write new books and articles, providing new insights which bring humanity as a whole forward. Musicians build upon the foundations of others to make new music. Some, like mashup artists, even outright copy other people’s music and use it in their compositions as-is, making fresh new compositions out of it. Copying and modifying are essential for human culture to thrive, survive, and adapt.

According to the FSF definition, DRM is the practice of using technological restrictions to control what users can do with digital media, software, et cetera. Programs that prevent you from sharing songs, copying, or reading ebooks on more than one device, etcetera, are forms of DRM. DRM is defective by design, as it damages the product you bought and has only one purpose: to prevent what would have been possible with the product or software had the DRM not been imposed on you.

DRM serves no purpose but to restrict possibilities in the interest of making you dependent on the publisher, creator, or distributor (vendor lock-in), who, confronted with a rapidly changing market, chooses not to innovate and think of new business models and new ways of making money, and instead tries to impose restrictions on you in an effort to cling to outdated business models.

In the case of DRM, technical measures are put in place to prevent users from using software and media in certain ways. In the case of FTDI, technical measures are put in place to prevent users from using their own, legally purchased hardware, effectively crippling it. One often does not know whether the FT232 chip embedded in a device is genuine or counterfeit; as the image near the top of this article shows, the differences are very tiny and hard to spot on the surface. FTDI wanted to protect their intellectual property, but doing so by sneakily exploiting differences between real and counterfeit chips, and thereby deliberately damaging people’s equipment, is not the way to go.

Luckily, a USB-to-serial-UART chip is easily replaced, but one is left to wonder what happens when other chip manufacturers, making chips that are not so easily replaced, start pulling tricks like these.

The Rising Trend of Criminalizing Hackers & Tinkerers

Note: This article is also available in Portuguese, translated by Anders Bateva.

There seems to be a rising trend of criminalizing hackers and tinkerers. More and more, people who explore the limits of the equipment, hardware and software they own and use, whether they tinker with it, re-purpose it, or expand its functionality, are met with unrelenting persecution by the authorities. In the last couple of years, even things which humans have done for thousands of years, like sharing, expanding, and improving upon culture, are being prosecuted. An example is the recent push to make violations of the Terms of Service, Terms of Use, and other terms put forward by service providers a crime under the Computer Fraud and Abuse Act (CFAA). The companies that are now (for the most part) in control of our collective culture are limiting the methods of sharing more and more, often through judicial and/or technical means. The technical means for the most part don’t work, thankfully: DRM is still a big failure and never got off the ground, although the content industry keeps clinging to it. The judicial means, however, can be very effective at crushing someone, especially in the litigious United States of America, where about 95% of all criminal cases end in a plea bargain, because that is cheaper than a jury trial. People are forced by financial pressure to enter a plea bargain, even if they did not commit the crimes of which they are accused.

Aaron Swartz

The late Aaron Swartz was persecuted heavily by the U.S. government for downloading millions of scientific articles from JSTOR at MIT. JSTOR is the closed-access library of scientific articles whose access is commercially exploited by ITHAKA, the entity that runs it. Aaron believed that scientific research paid for by the public should be available to the public for free. It is completely logical that research paid for by the public belongs to the public, and not to some company which is basically saying: “Thank you very much, we’ll have that; now we are going to charge for access to the scientific results and reap the financial benefits.” It is sad that the world lost a great hacker and tinkerer: he committed suicide at only 26 years old, unable to bear the pressure brought down upon him any longer, when in the end, according to his lawyer Elliot Peters, he probably would have won the case, because the U.S. Secret Service failed to get a search warrant for his laptop until 34 days after seizing it.

The corporate world is seizing control of content creation

This trend is seen more and more lately. The companies in control of most of our content production, devices, and systems don’t want you to tinker with them, not even if you own them. Apple is closing their systems by soon preventing you from installing your own software on OS X: software installs will soon only be permitted through the Apple-curated App Store. There is already software in OS X, called Gatekeeper, that is meant to prevent you from installing apps that might contain malware. If you read between the lines in that previous link, you’ll see that it’s only a matter of time before they tighten the reins and make Gatekeeper more oppressive.

Google is rapidly closing Android, moving more and more parts of the once open-source system into its own Google Play Services app. Check the permissions on that app; it’s incredibly scary just how much of the system is now locked up in this closed-source binary blob, and how little the actual Android system still handles. Recently, text messaging functionality was moved from the Android OS to the Google Hangouts app, so texting on an Android 4.4 (KitKat) phone is no longer possible without a Google account that you are logged into. Of course, Google will store all your text messages, for easy access by American intelligence and law enforcement agencies. If you were to install Android now and remove the Google Play Services app, you might be surprised at how much depends on it these days: without Google Play Services, your phone basically becomes a non-functional plastic brick.

These companies fail to see that every invention is made by standing on the shoulders of giants: working upon other people’s work, making it better, tinkering with and modifying it, using it for purposes not envisioned by the original author, et cetera. This is what makes culture, this is what makes us. We are fundamentally social creatures; we share.
The same control systems are being implemented for e-books. The devices used to read them, such as the Amazon Kindle, usually aren’t open, and that is a problem in itself. We humans have been sharing culture for millions of years and sharing books for thousands, basically since writing was invented in Mesopotamia. Sharing is as natural to human development as breathing; we are social creatures, and we thrive on feedback from our peers.

But something worse is going on in e-book land. In the Netherlands, all e-book purchases now have to be stored in a central database at Centraal Boekhuis, which records all buyer information, and this database will be easily accessible to Stichting BREIN, the country’s main anti-piracy and content-industry lobby club. This was ostensibly done to prevent e-book piracy, but I imagine the database will soon be of interest to intelligence agencies as well. Think of it: a centralized record of almost all books and of which people read which ones. You can learn a lot about a person just from the books they read. Joseph Stalin and Erich Honecker would be proud.

We reached a high-water mark of society with the adoption of the Universal Declaration of Human Rights by the UN General Assembly on 10 December 1948, but it’s sad to see that here in the Western world, we’ve been slipping from that high pillar of decency and humanity ever since. To quote V from V for Vendetta:

“Where once you had the freedom to object, to think and speak as you saw fit, we now have censors and systems of surveillance coercing your conformity and soliciting your submission.”

The surveillance is now far worse than anything George Orwell could have imagined. We need to remind the spooks and control freaks in governments around the world that Nineteen Eighty-Four is not an instruction manual. It was a warning, and so far we’ve ignored it.

My Move to Switzerland

Accelerated by whistleblower Edward Snowden’s recent exposure of the NSA’s horrible PRISM program, I’ve decided to finally take the step I’ve contemplated for roughly a year now: moving my online persona to Switzerland.

Why Switzerland?

I chose Switzerland because of United States policy, really. In recent years, the US administration has been flexing its jurisdictional muscles, putting several perfectly legitimate websites out of business because their owners published things the US junta didn’t like. This happens even when your servers aren’t located in the United States, and even when you don’t market your site to Americans. Having a .com, .net or .org domain is apparently enough to fall under US jurisdiction.

Examples are legion. There is MegaUpload (since relaunched as Mega), run by New Zealand resident Kim Dotcom, whose domains were seized by the US government over vague copyright-infringement allegations. Their website was defaced by the American government, and you can imagine the kind of damage this inflicts if you’re running a company or non-profit and the banner put up by the US authorities says your website was taken down because of, shall we say, ‘questionable’ content.

Then there is TVShack, the website run by the then 23-year-old Richard O’Dwyer, a UK citizen who faced extradition to the United States in 2011 over copyright allegations, even though he was doing nothing illegal under UK law. His website simply aggregated links to places on the Internet where copyrighted content could be found, and he complied with proper notice-and-takedown requests. Yes, you read that correctly: here is someone who actually faced extradition to the US, despite having done nothing illegal under UK law, based on what, exactly? Some vague copyright claims from Hollywood.

You have to be careful about which companies you deal with, and especially about the country in which they are incorporated. If you’re dealing with a US-based company, any US company, it is subject to the USA PATRIOT Act, National Security Letters (NSLs) and FISA, and can be legally required to put in back-doors and send logs of your traffic to the US intelligence community, the NSA in particular. The order by the FISC (Foreign Intelligence Surveillance Court) explicitly says that such a company can’t inform its clients that it has to send all their communications to the NSA, and it stipulates hefty prison sentences for the leadership of US companies found to be breaching that stipulation. And they aren’t collecting just metadata: the actual content of your communications is recorded, profiled and searched through as well. None of this is really new: the US, together with the UK and its former colonies, has been running the ECHELON program for many years. Its existence was confirmed by a European Parliament investigation into the capabilities and political implications of ECHELON in 2001.

What Can You Do?

The solution is quite complex and involves many factors you have to weigh. But here are some of the things I did:

Basically, you want to have no ties to US companies whatsoever. As soon as there is a US link, your service providers are subject to US legislation, have to comply with the spooks’ orders and, more importantly, can’t tell you about it. So avoid US companies, US cloud providers, etc. at all costs if you want to stay really secure. That means no Google, Facebook, Twitter, LinkedIn, etc. without a clear strategy in mind. Be careful when (if at all) you use these services.

Be sure to install browser plugins like HTTPS Everywhere (to use secure HTTPS connections wherever possible, encrypting the traffic between your browser and the server) and Ghostery (to prevent these companies from tracking the web pages you visit).

The hardware and software you’re using also need to be as secure as possible. Don’t order your new computer over the Internet; go to a physical (brick-and-mortar) store, picked at random from those that have the model you fancy in stock, and buy one with cash over the counter. This ensures the hardware cannot be compromised in transit from the manufacturer to you, since it’s impossible to predict which machine you’ll pick at the store. The computer should preferably run a free-software (free as in freedom, not free as in ‘free beer’) operating system like GNU/Linux (Ubuntu is an easy-to-use GNU/Linux distribution) or BSD, and the software running on top of that should preferably be free software as well, to ensure proper review of the source code you are using. Or, as Eric S. Raymond said in his book The Cathedral and the Bazaar: “Given enough eyeballs, all bugs are shallow.” You cannot trust proprietary software, since you cannot check its source code, and it’s less flexible than free software because you cannot extend or change it to fit your needs exactly. Even if you yourself don’t have the expertise to do so, you can always hire someone to do this work for you.

With regard to domain security (to prevent the US authorities from defacing your website), you can register a domain name that doesn’t fall under US jurisdiction. I chose Switzerland (.ch) because of the way the Swiss resisted pressure from the US authorities during the clampdown on Wikileaks. The server is also physically located in Switzerland, and it runs my email, which I access through a secure, encrypted SSL/TLS connection.

Now, e-mail is basically a plain-text protocol, so anyone who can sniff your packets somewhere between source and destination gets to read your messages. The best way to prevent this is to use encryption: not just for authentication, but to encrypt the content as well, whenever possible. I use GnuPG, a free-software implementation of the OpenPGP standard, together with the Enigmail plug-in for Thunderbird. It works using asymmetric encryption, with two keys, a public key and a private key, which you generate on your own machine. The public key can be published and shared freely, as this is what allows other people to send encrypted mail to you. The private key you have to keep secret. You can then send encrypted email to anyone whose public key you have.
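To illustrate the workflow, here is a minimal command-line sketch of that public/private key dance with GnuPG. The address alice@example.org and the message are placeholders, and the commands assume GnuPG 2.1 or later (which provides the --quick-generate-key option); in real use Enigmail drives these steps for you from within Thunderbird.

```shell
# Use a throwaway keyring so this demo doesn't touch your real keys.
export GNUPGHOME="$(mktemp -d)"

# Generate a keypair non-interactively (no passphrase, for demo purposes only).
gpg --batch --pinentry-mode loopback --passphrase '' \
    --quick-generate-key 'alice@example.org' default default never

# Export the public key: this is the part you publish and share freely.
gpg --armor --export alice@example.org > alice.pub

# Anyone holding the public key can now encrypt mail to Alice.
echo 'meet at noon' > msg.txt
gpg --batch --yes --recipient alice@example.org --encrypt msg.txt  # writes msg.txt.gpg

# Only the holder of the matching private key can decrypt it.
gpg --batch --quiet --decrypt msg.txt.gpg
```

In real use you would protect the private key with a strong passphrase and verify your correspondents’ keys via their fingerprints, but the flow stays the same: publish the public key, keep the private key secret, and encrypt to the recipient’s public key.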

If you want to read up some more on some of the practical measures you can take to increase your security, please visit Gendo’s Secure Comms webpage. It contains comprehensive practical advice and lots of links to the software you need to set up secure comms.

My plan is to write more articles on this website, so I’d like to thank you for your time, and hope to see you again soon!