Tag Archives: ethical

The Panama Papers: Dirty Money or Dirty Media?

On 3 April 2016, the first few of the so-called Panama Papers were published by mainstream media across the West. The Panama Papers are a collection of 2.6 TB of data and documents, allegedly by and related to Mossack Fonseca, a Panamanian law firm providing offshore trust services.

The leak, given by an anonymous whistle-blower to Bastian Obermayer of the German Süddeutsche Zeitung, consists of 11.5 million documents created by Mossack Fonseca between the 1970s and late 2015. The International Consortium of Investigative Journalists (ICIJ) subsequently organised the research and review of the documents.

These documents allegedly prove that the rich and powerful of the world store massive stashes of money in tax havens across the world, like the British Virgin Islands (BVI), Guernsey and the Netherlands. This practice is called tax avoidance, and it is usually not illegal. It is highly questionable from a moral standpoint, though. Billions of euros and dollars flow through thousands of shell companies that provide no benefit to society in terms of services, goods or employment. And the billionaire's country of residence doesn't receive tax income that could be put to use improving society, rather than sitting in an anonymous bank account in the Cayman Islands.

Media Bias

One of the first things that struck me as odd, but that is sadly no longer surprising, was the incredibly one-sided reporting done on this by the media. On 3 April, lots of articles appeared about the Panama Papers, and they strongly implied that President Putin of Russia was mentioned in these documents. Even though Putin was not mentioned in the few actual documents released to that point, the mainstream media strongly implied (by using photographs depicting Putin, for instance) that Putin is personally involved with the arrangements described in the Mossack Fonseca documents. The BBC Panorama documentary entitled "Tax Havens of the Rich and Powerful Exposed" is also strongly biased in its editing, showing documents on-screen for only a fraction of a second against a blurred background. When you pause the video and zoom in, you can clearly see that the documents shown are from the British Virgin Islands, yet this British overseas territory is not mentioned even once in the documentary itself, while the presenters drone on about Putin and the Icelandic former Prime Minister Gunnlaugsson.

Why this massive media bias? Why is it necessary to remind us that leaders from countries like Russia, China, Zimbabwe, North Korea, Syria etc. are corrupt? We know that. That is not news. What would be news is to reveal hard evidence that Western billionaires like George Soros are just as corrupt, and worse, that they influence politics and world affairs using their massive stashes of money.

The reason the bias is so strong is partly the methodology used, and partly other interests. The Süddeutsche Zeitung gives a detailed explanation of how these documents were searched for interesting titbits. One of the things they did was focus on countries that may be violating UN sanctions, which might partly explain why the bias falls on non-Western countries as it does. Also note that these documents come from just one law firm in Panama. If there were another leak from, say, a law firm on the BVI, we might find other people involved.

As Craig Murray, former UK Ambassador to Uzbekistan, has written, Western journalists, the corporate media gatekeepers, are withholding the vast majority of the actual documents from the public. If we truly want to know what the impact of the Panama Papers is, without spin from the media, we need access to the actual raw documents. Raw docs or it doesn't exist, so to speak. If you don't release 99% of the documents, you're engaged in 1% journalism by definition. This is why I like the work WikiLeaks is doing: they work very hard to publish the original source documents responsibly, so that we can all learn how the world works from the original and authoritative source material, and so that all journalists can read these documents on an equal footing. It has been a pet peeve of mine for many years that mainstream media don't link to their sources the way bloggers do. If a story is clearly based on documents, as in this case the Panama Papers, just release the source documents together with your explanatory articles. Why is this such a problem?

Or are the journalists who have access to these documents afraid of possible blow-back if they report on the hand that feeds them?

Who is funding this?

Because that is the big elephant in the room. Who could be funding this propaganda extravaganza? Let’s have a look at the ICIJ’s site shall we?


George Soros at the Festival of Economics 2012, Trento. Photo by Niccolò Caranti.

The International Consortium of Investigative Journalists is based in Washington, D.C., and is a project of the Center for Public Integrity. There, on the funding page, you can read that among the big institutional funders are names like the Omidyar Network (Pierre Omidyar, founder of eBay and owner of The Intercept), the Open Society Foundations (George Soros), the W.K. Kellogg Foundation, the Rockefellers, the Democracy Fund (again: Omidyar), and many others.

The OCCRP (Organized Crime and Corruption Reporting Project) is also heavily involved with the Panama Papers project, and is sponsored by (again) the Open Society Institute of George Soros, and also by USAID, a US government agency and front organisation posing as a charity that is frequently used as an instrument of regime change.

Is it strange that with such backers the very first news reports that came out were so incredibly biased? Given how much the US administration would like to see regime change in Russia, are these reports bashing the Russian President a surprise? No, sadly, I'm not surprised any more. What I find despicable is that so many journalists who worked on this like to think of themselves as independent and the ultimate arbiters of truth, when evidently they are not.

Why are there no reports about the vast amounts of wealth stashed away in tax havens by George Soros? Mark Zuckerberg? Warren Buffett? The journalists sacrificed a token Western leader like Gunnlaugsson from Iceland so they can claim to be bias-free ("look, we're also publishing on Western leaders!"), while in reality their entire enterprise is funded by the rich and powerful in the West. So I think I can quite confidently predict that, for instance, George Soros's financial arrangements in various tax havens will not be published. Mark my words.

Belgian Privacy Commission Found Facebook in Violation of EU and Belgian Privacy Law


About two weeks ago, KU Leuven University and Vrije Universiteit Brussel in Belgium published a report commissioned by the Belgian Privacy Commission about Facebook's tracking behaviour on the internet, more specifically how Facebook tracks its users (and non-users!) through the 'Like' and Share buttons found on millions of websites across the internet.

Based on this report and the technical report, the Belgian Privacy Commission published a recommendation, which can be found here. A summary article of the findings is also published.

Findings

The results of the investigation are depressing: Facebook disregards European and Belgian privacy law in various ways. In fact, the commission found 10 legal issues. Facebook frequently dismisses its own severe privacy violations as "bugs" that are still on the list to be fixed (ignoring the fact that these "bugs" are a major part of Facebook's business model). This lets various privacy commissioners think that the violations are the result of unintended functionality, while in fact profiling people is the entire basis of Facebook's business model.

Which law applies?

Facebook also does not recognise that Belgian law applies in this case, claiming that because it has an office in Ireland, it is bound only by Irish privacy law. This is simply not the case. In fact, the general rule seems to be that if you focus your site on a specific market (let's say Germany), as evidenced by a German translation of your site, your site being accessible through a .de top-level domain, and various other indicators (such as the type of payment options provided, if your site offers ways to pay for products or services, or perhaps marketing materials), then you are bound by German law as well. This is done to protect German customers, in this example.

The same principle applies to Facebook. They are active world-wide, and so should be prepared to adjust their services to comply with the various laws and regulations of all these countries. This is a difficult task, as laws are often incompatible, but it's necessary to safeguard consumers' rights. In the case of Facebook, if they built their Like and Share buttons such that the buttons don't phone home on page load and don't place cookies without the user's consent, they would have far fewer legal problems. The easiest way to comply, if you run such an international site, is to take the strictest applicable legislation and implement your site so that it complies with that.
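The consent-first approach described above is often implemented as a "two-click" pattern: the page first shows a local, static placeholder, and only loads the real third-party embed after the visitor explicitly opts in. The following is a minimal sketch of that idea; all names here (`ConsentStore`, `render_share_button`) are illustrative, not any real site's API.

```python
class ConsentStore:
    """Tracks per-visitor consent; a real site would persist this server-side."""

    def __init__(self):
        self._consented = set()

    def grant(self, visitor_id: str) -> None:
        self._consented.add(visitor_id)

    def has_consent(self, visitor_id: str) -> bool:
        return visitor_id in self._consented


def render_share_button(visitor_id: str, consent: ConsentStore) -> str:
    """Return HTML for the share widget.

    Without consent: a static placeholder that loads nothing remote and
    sets no cookies. With consent: the real embed, which phones home on load.
    """
    if consent.has_consent(visitor_id):
        return '<iframe src="https://www.facebook.com/plugins/like.php"></iframe>'
    # The placeholder button is the "first click" that asks for consent.
    return '<button data-action="request-consent">Enable social buttons</button>'


consent = ConsentStore()
print(render_share_button("alice", consent))  # static placeholder, no third-party request
consent.grant("alice")
print(render_share_button("alice", consent))  # real embed, only after opt-in
```

The point of the pattern is that the third-party server never even sees the page load until consent is given, which is exactly what the report faults the current buttons for.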

In fact, the real reason why Facebook is in Ireland is mostly due to tax reasons. This allows them to evade taxes, by means of the Double Irish and Dutch Sandwich financial constructions.

Another problem is that users are not able to prevent Facebook from using the information they post on the social network site for purposes other than the pure social network site functionality. The information people post, and other information that Facebook aggregates and collects from other sources, are used by Facebook for different purposes without the express and knowing consent of the people concerned.

The problem with the ‘Like’ button

Special attention was given to the 'Like' and 'Share' buttons found on many sites across the internet. It was found that these social sharing plugins, as Facebook calls them, place a uniquely identifying cookie on users' computers, which allows Facebook to correlate a large part of their browsing history. Another finding is that Facebook places this uniquely identifying datr cookie on the site of the European Interactive Digital Advertising Alliance, the opt-out site where Facebook is listed as one of the participants. It also places an oo cookie (which presumably stands for "opt-out") once you opt out of the advertising tracking. Of course, when you remove this cookie from your browser, Facebook is free to track you again. Also note that it does not place these cookies on the US or Canadian opt-out sites.
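The fragility of a cookie-based opt-out can be sketched in a few lines. The logic below is my own simplification of the mechanism described above, not Facebook's actual code: the opt-out is itself just a cookie, so clearing your cookies (or using a new browser) silently re-enables tracking.

```python
def should_track(cookies: dict) -> bool:
    """Tracker-side decision: track unless an opt-out ('oo') cookie is present."""
    return cookies.get("oo") != "1"


cookies = {"datr": "unique-browser-id-123"}
assert should_track(cookies)        # tracked by default

cookies["oo"] = "1"                 # user opts out on the opt-out site
assert not should_track(cookies)    # opt-out honoured...

cookies.clear()                     # ...until the user clears cookies
assert should_track(cookies)        # tracking silently resumes
```

A robust opt-out would have to be stored anywhere but in the very cookie jar users are encouraged to clear for privacy reasons.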

As I’ve written earlier in July 2013, the problem with the ‘Like’ button is that it phones home to Facebook without the user having to interact with the button itself. The very act of it loading on the page means that Facebook gets various information from users’ browsers, such as the current page visited, a unique browser identifying cookie called the datr cookie, and this information allows them to correlate all the pages you visit with your profile that they keep on you. As the Belgian investigators confirmed, this happens even when you don’t have an account with Facebook, when it is deactivated or when you are not logged into Facebook. As you surf the internet, a large part of your browsing history gets shared with Facebook, due to the fact that these buttons are found everywhere, on millions of websites across the world.

The Filter Bubble

A major problem of personalisation technology, such as that used by Facebook, Google and others, is that it limits the information users are exposed to. The algorithm learns what you like, and subsequently serves you only information that you're bound to like. The problem is that a lot of information isn't likeable: information that isn't nice, but is still important to know. By heavily filtering the input stream, these companies influence how we think about the world and what information we're exposed to. Eli Pariser describes this effect in his book The Filter Bubble: What the Internet Is Hiding From You: during the Egyptian revolution he did a Google search for 'Egypt' and got news articles about the revolution, while his friend only got information about holidays to Egypt: tour operators, flights, hotels, and so on. A vastly different result for the exact same search term. This is due to the heavy personalisation going on at Google, where algorithms refine which results you're most likely to be interested in by analysing your previously entered search terms.
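The filtering effect described above can be shown with a toy model (this is purely illustrative; it is not any real ranking algorithm): the feed weights items by how often you engaged with their topic before, so two users see completely different slices of the same content pool.

```python
from collections import Counter


def personalised_feed(items, liked_topics, size=3):
    """Rank items by how often their topic was previously liked, keep the top few."""
    weights = Counter(liked_topics)  # missing topics count as 0
    ranked = sorted(items, key=lambda item: weights[item["topic"]], reverse=True)
    return ranked[:size]


items = [
    {"title": "Revolution in Egypt", "topic": "news"},
    {"title": "Protest updates", "topic": "news"},
    {"title": "Cheap flights to Cairo", "topic": "travel"},
    {"title": "Nile cruise deals", "topic": "travel"},
]

# Two users looking at the same content pool get disjoint feeds:
print(personalised_feed(items, liked_topics=["news", "news", "travel"], size=2))
print(personalised_feed(items, liked_topics=["travel", "travel"], size=2))
```

After a few iterations the liked topic dominates entirely, which is the positive feedback loop the next paragraph describes.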

The same happens at Facebook, which controls what you see in your news feed based on what you like. The problem is that after a few rounds of this, you will only see information you like, and no information that's important but not likeable. This massively erodes the eventual value of Facebook, since in the end all it will be is an endless stream of posts, images and videos that you like and agree with. It becomes an automatic positive feedback machine. Press a button, and you'll get a cookie.

What value does Facebook then have as a social network, when you never come in touch with radical ideas, or ideas that you initially do not agree with, but that may alter your thinking when you come in touch with them? By never coming in touch with extraordinary ideas, we never improve. And what a poor world that would be!

The Internet of Privacy-Infringing Things?

Let's talk a little bit about the rapid proliferation of the so-called Internet of Things (IoT). The Internet of Things is a catch-all term for all sorts of embedded devices that are hooked up to the internet in order to make them "smarter": able to react to certain circumstances, automate things, et cetera. This can include many devices, such as thermostats and autonomous cars. There's a wide variety of possibilities, and some of them, like smart thermostats, are already on the market, with autonomous cars following closely behind.

According to the manufacturers who are peddling this technology, the purpose of hooking these devices up to the internet is to be able to react better and provide more services that were previously impossible to execute. An example would be a thermostat that recognises when you are home, and subsequently raises the temperature of the house. There are also scenarios possible of linking various IoT devices together, like using your autonomous car to recognise when it is (close to) home and then letting the thermostat automatically increase the temperature, for instance.

There are myriad problems with this technology in its current form. Some of the most basic ones, in my view, are privacy and security considerations. In the case of cars, Ford knows exactly where you are at all times, and knows when you are breaking the speed limit, by using the highly accurate GPS built into modern Ford cars. This technology is already active: if you drive one of these cars, this information (your whereabouts at all times, and certain metrics about the car, like current speed and mileage) is stored and sent to Ford's servers. Many people don't realise this, but it was confirmed by Ford's Global VP of Marketing and Sales, Jim Farley, at the CES trade show in Las Vegas at the beginning of this year. Farley later retracted his statements after the public outrage, claiming that he had left the wrong impression and that Ford does not track the locations of their cars without the owners' consent.

Google’s $3.2 billion acquisition

Nest Labs, Inc. used to be a separate company making thermostats and smoke detectors, until Google bought it for a whopping $3.2 billion. The Nest thermostat is a programmable thermostat with a little artificial intelligence inside that enables it to learn what temperatures you like, turning the temperature up when you're at home and down when you're away. It can be controlled via WiFi from anywhere in the world through a web interface: users can log in to their accounts to change temperature and schedules, and to see energy usage.

Why did Google pay such an extraordinarily large amount for a thermostat company? I think the Internet of Things will be Google's next battleground for gathering more data. Home automation and cars are markets that Google has recently stepped into. Technologies like Nest and Google's driverless car generate massive amounts of data about users' whereabouts: sleep/wake cycles, patterns of travel and energy usage, for instance. And those are just the two technologies I have chosen to focus on in this article. There are lots of different IoT devices out there that will eventually all be connected somehow. Via the internet.

Privacy Concerns

One is left to wonder what happens with all this data. Where is it stored, who has access to it, and most important of all: why is it collected in the first place? In most cases this collecting of data isn't even necessary. In the case of Ford, we have to rely on Farley's say-so that they are the only ones with access to this data. And of course Google and every other company out there offers the same defence. I don't believe that for one second.

The data is being collected to support a business model we see often in the tech industry, where profiles and sensitive data about the users of a service are valuable, and are either used to better target ads or sold on to other companies. There seems to be a perception that the modern internet user is not used to paying for services online, and this has led many companies to adopt the default ads-, data- and profiling-based business model. However, other business models, like the Humble Bundle in the gaming industry, or crowd-funding campaigns on Kickstarter and Indiegogo, have shown that internet users are perfectly willing to spend a little money, or give a small donation, on a service or device they care about. The problem with the default ads-based business model is that it leaves users' data vulnerable to exposure to third parties and others that have no business knowing it, and causes companies to collect too much information about their users by default. It's as if there is some recipe out there called "How to start a Silicon Valley start-up" that has profiling and tracking of users, and basically not caring about users' privacy, as its central tenet. It doesn't have to be this way.

Currently, a lot of this technology is developed and brought to market without any consideration whatsoever for the privacy of the customer or the security and integrity of the data. Central questions that, in my opinion, should be answered during the initial design process of any privacy-impacting technology are left unanswered. What data should we collect, if any? How easy is it to access this data? It is quite conceivable that unauthorised people could gain access to it. What if it falls into the wrong hands? A smart thermostat like the Google Nest knows when you're home and knows all about your sleep/wake cycle: information that could be of interest to burglars, for instance. What if someone accesses your car's firmware and changes it? What happens when driverless cars mix with the regular, human-controlled cars on the road? This could lead to accidents.

Vulnerabilities

And what about all those "convenient" dashboards and other web-based interfaces enabled and exposed to the world on all those "smart" IoT devices? I suspect a lot of security vulnerabilities will be found in that software. It's all closed-source and not exposed to external code review, and the software development budgets probably aren't large enough to accommodate examining the security and privacy implications and implementing proper safeguards to protect users' data. This is a recipe for disaster. Only with free and open source software can proper code review take place and code be inspected for back-doors and other unwanted behaviour. It generally also leads to better quality software, since more people are able to see the code and have an incentive to fix bugs in an open and welcoming community.

Do we really want to live in a world where we can’t have privacy any more, where your whereabouts are at all times stored and analysed by god-knows who, and all technology is hooked up to each other, without privacy and security considerations? Look, I like technology. But I like technology to be open, so that smart people can look at the insides and determine whether what the tech is doing is really what it says on the tin, with no nasty side-effects. So that the community of users can expand upon the technology. It is about respecting the users’ freedom and rights, that’s what counts. Not enslaving them to closed-source technology that is controlled by commercial parties.

Killing Counterfeit Chips: Parallels with DRM

Last week, the Scottish chip manufacturer FTDI pushed out an update to their Windows driver that deliberately killed counterfeit FT232 chips. The FTDI FT232 is a very popular chip, found in thousands of different electronic appliances, from Arduinos to consumer electronics. The FT232 converts USB to a serial port, which is very useful, and it is probably the most cloned chip on the planet.

Of course, not supporting counterfeit chips is any chip manufacturer's right: they cannot guarantee that their products work in conjunction with counterfeit hardware, and providing support for devices not made by the company is a strain on customer support. This case, however, is different in that the update contains code deliberately written to (soft-)brick all counterfeit versions of the FT232. By doing this, FTDI was deliberately destroying other people's equipment.

One could simply say: don't use counterfeit chips. But in many cases you simply don't know that some consumer electronic device you use contains a counterfeit FT232. Deliberately destroying other people's equipment is a bad move, especially since FTDI doesn't know what device a fake chip is used in. It could, for instance, be a medical device on whose flawless operation people's lives depend.

Hard to tell the difference

In the case of FTDI, one cannot easily tell an original chip from a counterfeit one; only by closely examining the silicon are the differences between a real and a fake chip revealed. In the image above, the left chip is a genuine FTDI FT232; the right one is counterfeit. Can you tell the difference?

Even though they look very similar on the surface, the inner workings of original and counterfeit chips differ. The driver update written by FTDI exploits these differences to create a driver that works as expected on original devices, but on counterfeit chips reprograms the USB product ID (PID) to 0, after which Windows, OS X and GNU/Linux can no longer match a driver to the device.
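To make the PID trick concrete, here is a small sketch of how you might spot a soft-bricked clone by its USB identifiers. The VID/PID constants are FTDI's well-known values; the device list is hard-coded for illustration (on a real system you would read the same vendor/product pairs from lsusb or a USB library).

```python
FTDI_VID = 0x0403     # FTDI's USB vendor ID
FT232_PID = 0x6001    # normal FT232R product ID
BRICKED_PID = 0x0000  # PID written to clones by the malicious driver update


def classify(vid: int, pid: int) -> str:
    """Classify a USB device by its vendor/product ID pair."""
    if vid == FTDI_VID and pid == FT232_PID:
        return "FT232 (working)"
    if vid == FTDI_VID and pid == BRICKED_PID:
        return "FT232 soft-bricked (PID zeroed)"
    return "other device"


# Example device list: a genuine FT232, a zeroed clone, and an unrelated device.
devices = [(0x0403, 0x6001), (0x0403, 0x0000), (0x046D, 0xC077)]
for vid, pid in devices:
    print(f"{vid:04x}:{pid:04x} -> {classify(vid, pid)}")
```

Because the OS selects drivers by matching exactly these VID/PID pairs, a PID of 0x0000 matches nothing, which is why the device simply stops working rather than failing loudly.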

Parallels with Digital Rights Management (DRM)

I see some parallels with software DRM, which the Free Software Foundation aptly calls Digital Restrictions Management. Because that is what it is. It isn't about protecting the rights of copyright holders, but about restricting what people have always done since the early beginnings of humanity.

We copy. We get inspired by, modify and build upon other work, standing on the shoulders of the giants that came before us. That's in our nature. Children copy and modify, which is great for their creativity; artists copy and modify culture to make new culture; authors read books and articles and use the ideas and insights they gain to write new books and articles, providing new insights that bring humanity as a whole forward. Musicians build upon the foundations of others to make new music. Some, like mashup artists, even outright copy other people's music and use it in their compositions as-is, making fresh new compositions out of it. Copying and modifying is essential for human culture to thrive, survive and adapt.

According to the FSF definition, DRM is the practice to use technological restrictions to control what users can do with digital media, software, et cetera. Programs that prevent you from sharing songs, copying, reading ebooks on more than one device, etcetera, are forms of DRM. DRM is defective by design, as it damages the product you bought and has only one purpose: prevent what would be possible to do with the product or software had there not been a form of DRM imposed on you.

DRM serves no other purpose but to restrict possibilities in the interest of making you dependent on the publisher, creator or distributor (vendor lock-in), who, confronted with a rapidly changing market, chooses not to innovate and think of new business models and new ways of making money, and instead try to impose restrictions on you in an effort to cling on to outdated business models.

In the case of DRM, technical measures are put in place to prevent users from using software and media in certain ways. In the case of FTDI, technical measures are put in place to prevent users from using their own, legally purchased hardware, effectively crippling it. One often does not know whether the FT232 chip embedded in a device is genuine or counterfeit; as you can see in the image near the top of this article, the differences are tiny and hard to spot on the surface. FTDI wanted to protect their intellectual property, but sneakily exploiting differences between real and counterfeit chips and thereby deliberately damaging people's equipment is not the way to go.

Luckily, a USB-to-serial-UART chip is easily replaced, but one is left to wonder what happens when other chip manufacturers, making chips that are not so easily replaced, start pulling tricks like these?

The Age of the Gait-Recognising Cameras Is Here!


A few days ago I read an article (NRC, Dutch, published on 11 September, interestingly) about how TNO (the Dutch Organisation for Applied Scientific Research, the largest research institute in the Netherlands) developed technology (PDF) for smart cameras for use at Amsterdam Schiphol Airport. These cameras were installed at Schiphol by Qubit Visual Intelligence, a company from The Hague, and are designed to recognise certain "suspicious behaviour," such as running, waving your arms, or sweating.

Curiously enough, these are all things commonly found in the stressful environment that an international airport is for many people. People need to get to their gate on time, which may require running (especially if you arrived at Schiphol by train, which in the Netherlands is notoriously unreliable); they may be afraid of flying and trying to get their nerves under control; and airports are places where friends and family meet again after long periods abroad, which (if you want to hug each other) requires arm waving.

I suspect that this technology is therefore going to produce a lot of false positives. It's the wrong technology in the wrong place. I fully understand the need for airport security, and we all want a safe environment for both passengers and crew: flights need to operate under safe conditions. What I don't understand is the mentality that every single risk in life needs to be minimised away by government agencies and combated with technology. More technology does not equal safer airports.

Security Theatre

A lot of the measures taken at airports constitute security theatre. This means that the measures are mostly ineffective against real threats, and serve mostly for show. The problem with automatic profiling, which is what this programme tries to do as well, is that it doesn’t work. Security expert Bruce Schneier has also written extensively about this, and I encourage you to read his 2010 essay Profiling Makes Us Less Safe about the specific case of air travel security.

The first problem is that terrorists don't fit a specific profile; these systems can be circumvented once people figure out how; and the over-reliance on technology instead of common sense can actually cause more insecurity. In Little Brother, Cory Doctorow wrote about how Marcus Yallow put gravel in his shoes to fool the gait-recognising cameras at his high school so he and his friends could sneak out to play a game outside. Similar things will be done to try to fool these "smart" cameras, but the consequences can be much greater. We are actually more secure when we select people randomly instead of relying on a specific threat profile or behavioural profile to decide who gets screened and who passes through security without secondary screening. The whole point of random screening is that it's random: a potential terrorist cannot know in advance which criteria will make the system pick him out. If a system does use specific criteria, and the security of the system depends on those criteria being secret, then someone would only have to observe the system for long enough to find out what the criteria are.
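The argument above can be illustrated with a toy simulation (purely illustrative traits and numbers, not a real screening model): a fixed secret profile can be probed and then reliably evaded, while uniformly random screening leaves an adversary nothing to optimise against.

```python
import random

# A fixed, "secret" screening profile: flag anyone matching all these traits.
SECRET_PROFILE = {"paid_cash", "one_way_ticket"}


def profile_screen(traits: set) -> bool:
    """Deterministic profile-based screening: same input, same outcome."""
    return SECRET_PROFILE.issubset(traits)


def random_screen(rng: random.Random, rate: float = 0.1) -> bool:
    """Random screening: every traveller faces the same 10% chance."""
    return rng.random() < rate


# An adversary probes the deterministic system until a trait set passes:
assert profile_screen({"paid_cash", "one_way_ticket"})  # flagged, so adjust...
assert not profile_screen({"paid_card", "return_ticket"})  # ...now reliably slips through

# Against random screening, no choice of traits guarantees evasion:
rng = random.Random(42)
screened = sum(random_screen(rng) for _ in range(10_000))
print(f"randomly screened: {screened}/10000")  # roughly 10%, regardless of traits
```

The deterministic screen is only secure while its criteria stay secret; the random screen stays equally effective even when everyone knows exactly how it works.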

Technology may fail, which is something people don't always realise. Another TNO report, entitled "Afwijkend Gedrag" (PDF; Abnormal Behaviour), states in the (admittedly tiny) section dealing with privacy concerns that collecting data about people's abnormal behaviour is ethically justified because society as a whole can be made safer with this data and the associated technology. It also states (and this is an argument I've read elsewhere as well) that "society has chosen that safety and security trump privacy."

Now, let's say for the sake of argument that this might be true in a general sense (although it can be debated whether this is always the case; personally I don't think so, as sometimes the costs are just too high, and we need to keep a free and democratic society after all). The problem is that the way technology and security systems are implemented is usually not something we as a society get to vote on before the (no doubt highly lucrative) contracts get signed. In this case, Qubit probably saw a way to make a quick buck by talking the Schiphol leadership and/or the government (the Dutch state holds 69.77% of the Schiphol shares) into buying their technology. It's not something the people had a conscious debate on, followed by a well-informed decision.

Major Privacy Issues

We have established that these systems are ineffective, can be circumvented (like any system), and won't improve overall security. But much more importantly, there are major privacy issues with this technology. What Schiphol (and Qubit) are doing here is analysing and storing data on millions of passengers, the overwhelming majority of whom are completely innocent. This is like shooting a mosquito with a bazooka.

What happens with this data? We don't know, and we have to take Qubit and Schiphol at their word that data about non-suspect members of the public gets deleted. However, in light of recent events, where it seems convenient to collect and store as much data about people as possible, I highly doubt any deletions will actually happen.

And the sad thing is: in the Netherlands the Ministry of Security and Justice is now talking about implementing the above-mentioned behavioural analysis system at another (secret) location in the Netherlands. Are we all human guinea pigs ready to be tested and played around with?

What is (ab)normal?

There are also problems with the definitions. This is something I see again and again with privacy-infringing projects like this. What constitutes “abnormal behaviour”? Who gets to decide, and who controls what counts as abnormal behaviour and what doesn’t? Maybe, in the not-too-distant future, the meaning of the word “abnormal” will begin to shift and come to mean “not like us,” for some definition of “us.” George Orwell described this effect in his novel Nineteen Eighty-Four, where ubiquitous telescreens watch and analyse your every move, and one can never be sure which thoughts are criminal and which aren’t.

In 2009, when the European research project INDECT was funded by the European Union, critical questions were put to the European Commission by members of the European Parliament. Specifically, this was asked:

Question from EP: How does the Commission define the term abnormal behaviour used in the programme?

Answer from EC: As to the precise questions, the Commission would like to clarify that the term behaviour or abnormal behaviour is not defined by the Commission. It is up to applying consortia to do so when submitting a proposal, where each of the different projects aims at improving the operational efficiency of law enforcement services, by providing novel technical assistance.

(Source: Europarl (Written questions by Alexander Alvaro (ALDE) to the Commission))

In other words: according to the European Commission it depends on the individual projects, which all happen to be vague about their exact definitions. And when you don’t pin down definitions like this (and anchor them in law so that powerful governments and corporations that oversee these systems can be held to account!), these can be changed over time when a new leadership comes to power, either within the corporation in control over the technology, or within government. This is a danger that is often overlooked. There is no guarantee that we will always live in a democratic and free society, and the best defence against abuse of power is to make sure that those in power have as little data about you as possible.

Keeping these definitions vague is a major tactic in scaring people into submission. This has the inherent danger of legislative feature creep. A measure that once was implemented for one specific purpose soon gets used for another if the opportunity presents itself. Once it is observed that people are getting arrested for seemingly innocent things, many people (sub)consciously adjust their own behaviour. It works similarly with free speech: once certain opinions and utterances are deemed against the law, and are acted upon by law enforcement, many people start thinking twice about what they say and write. They start to self-censor, and this erodes people’s freedom to the point where we slowly shift into a technocratic Orwellian nightmare. And when we wake up it will already be too late to turn the tide.

Country X: The Country That Shall Not Be Named

On Monday, 19 May 2014, Glenn Greenwald published his report entitled Data Pirates of the Caribbean: The NSA is recording every cell call in the Bahamas, in which he reported on the NSA SOMALGET program, which is part of the larger MYSTIC program. MYSTIC has been used to intercept the communications of several countries, namely the Bahamas, Mexico, Kenya and the Philippines, and thanks to Wikileaks we now know that the final country, redacted in Greenwald’s original report on these programs, was Afghanistan.

SOMALGET can be used to ingest the entire audio stream (not just the metadata) of every call in an entire country, and store it for (at least) 30 days. The NSA developed this capability, and The Washington Post first reported on it in March this year.
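To get a feel for the scale of such full-take collection, here is a back-of-envelope estimate. All the figures below (call volume, call length, codec bitrate) are my own illustrative assumptions, not numbers from the published documents:

```python
# Illustrative assumptions only -- actual call volumes, durations
# and codecs will vary widely in practice.
calls_per_day = 5_000_000       # assumed national call volume
avg_call_seconds = 120          # assumed average call length
codec_bits_per_second = 8_000   # assumed compressed speech codec

# Storage needed per call, per day, and for a 30-day rolling buffer.
bytes_per_call = avg_call_seconds * codec_bits_per_second // 8
daily_bytes = calls_per_day * bytes_per_call
rolling_30_day_bytes = daily_bytes * 30

print(f"per day: {daily_bytes / 1e12:.1f} TB")
print(f"30 days: {rolling_30_day_bytes / 1e12:.1f} TB")
```

Even under these assumptions, a 30-day rolling buffer of an entire country’s calls fits in a few dozen terabytes, which is well within reach of a well-funded agency.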

Why the Censorship?

The question, however, is why Glenn Greenwald chose to censor the name of Afghanistan in his report. He claims it was done to protect lives, but I honestly can’t for the life of me figure out why lives would be at risk if it were revealed to the Afghans that their country is one of the most heavily surveilled on the planet. This information is not exactly a secret. Why is this knowledge OK for the Bahamians to possess, but not the Afghans? The US effectively colonized Afghanistan, and anyone with at least half a brain can figure out that calling someone in Afghanistan carries a very high risk of being recorded and analysed by the NSA. Now we know for certain that the probability of this happening is 1.

Whistleblowers risk their lives and livelihoods to bring to the public’s attention information that they deem to be in the gravest public interest. They carefully consider which information to publish and/or hand to journalists, and intelligence whistleblowers in particular are clearly more expert than most journalists at judging which information must be kept from the public in the interest of safety and which can be published in the public interest. After all, they have been doing exactly that for most of their professional lives, in a security-related context.

Now, it seems that Greenwald acts as a sort of filter between the information Edward Snowden gave him for publication and the information the public actually gets. Greenwald is sitting on an absolute treasure trove of information and is clearly cherry-picking what to publish and what to withhold. By what criteria, I wonder? Spreading out the publication, however, is a good strategy: about a year has passed since the first disclosures and the story is still very much in the media, which is clearly a very good thing. I don’t think that would have happened if all the information had been dumped at once.

But on the other hand: Snowden risked his life and left his comfortable existence in Hawaii behind to make this information public (a very brave thing to do, and certainly not a decision to take lightly), and he personally selected Greenwald to receive it. And here is a journalist who is openly cherry-picking and censoring the information given to him, already preselected by Snowden, thereby withholding potentially critical information from the public?

So I would hereby like to ask: by what criteria is Greenwald selecting information for publication? And why the need to second-guess the judgement of the whistleblower, who is clearly more expert at assessing the security issues surrounding publication?

Annie Machon, whistleblower and former MI5 intelligence officer, has also given an interview on RT about Greenwald’s censoring of Afghanistan; do watch it. Whistleblowers risk their lives to keep the public informed of government and corporate wrongdoing. They deserve full coverage, and they need our support.

Update: Mensoh has also written a good article (titled: The Deception) about Greenwald’s actions, also in relation to SOMALGET and other releases. A highly recommended read.

Gave Privacy By Design Talk At eth0

Last Saturday I gave my talk about privacy by design at the eth0 2014 winter edition, a small hacker get-together held this year in Lievelde, The Netherlands. eth0 organises conferences that aim to bring people with different computer-related interests together, with two events per year, one during winter. I previously gave a very similar talk at the OHM2013 hacker conference, held in August 2013.

Video

Here’s the footage of my talk:

Quick Synopsis

I talked about privacy by design, and what I did in relation to Annie Machon‘s site and, recently, the Sam Adams Associates for Integrity in Intelligence site. The talk consists of two parts: in the first, I explained what we’re up against; in the second, I presented the two sites as more specific case studies.

I talked about the revelations about the NSA, GCHQ and other intelligence agencies, including the December revelations, which were eloquently explained by Jacob Appelbaum at 30C3 in Hamburg. Then I moved on to the threats to website visitors: how profiles are built up and sold, and browser fingerprinting. The second part consists of the case studies of both Annie Machon’s website and the Sam Adams Associates’ website.
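As an aside on the fingerprinting part of the talk, the basic idea can be sketched in a few lines: combine enough seemingly harmless browser attributes and you get a near-unique identifier without any cookie. The attribute values below are purely illustrative (real tracking scripts harvest far more signals, such as canvas rendering and installed fonts):

```python
import hashlib
import math

# Hypothetical attributes a tracking script might read from a browser.
attributes = {
    "user_agent": "Mozilla/5.0 (X11; Linux x86_64) Firefox/27.0",
    "screen": "1920x1080x24",
    "timezone": "UTC+1",
    "language": "nl-NL",
    "fonts": "DejaVu Sans,Liberation Serif,Noto Sans",
}

def fingerprint(attrs: dict) -> str:
    """Combine attributes into one stable identifier by hashing a
    canonical (sorted) serialisation of all key/value pairs."""
    canonical = "|".join(f"{k}={attrs[k]}" for k in sorted(attrs))
    return hashlib.sha256(canonical.encode()).hexdigest()

def bits_to_identify(population: int) -> float:
    """Bits of entropy needed to single out one individual."""
    return math.log2(population)

print(fingerprint(attributes)[:16])
print(round(bits_to_identify(7_000_000_000), 1))  # ~32.7 bits
```

The design point is that no single attribute identifies you, but their combination easily exceeds the roughly 33 bits of entropy needed to single out one person among billions, which is why fingerprinting works even with cookies disabled.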

I mentioned the Sam Adams Associates for Integrity in Intelligence, whose website I had the honour of building, giving them a more public space to share things relating to the Sam Adams Award with the world, and providing a nice overview of previous laureates and their stories.

One of the things both sites have in common is hosting on a Swiss domain, which provides a safer haven where content can be hosted without fear of being taken down by the U.S. authorities. The U.S. claims jurisdiction over the average .com, .net and .org domains, and there have been cases where such domains were brought down because they hosted content the U.S. government did not agree with. Case in point: Richard O’Dwyer, a U.K. citizen, was threatened with extradition to the United States for being the man behind TVShack, a website that provided links to copyrighted content. MegaUpload, the file locker company started by Kim Dotcom, received the same treatment: visitors to its domain were served an image from the FBI stating that the domain had been seized.

The Rising Trend of Criminalizing Hackers & Tinkerers

Note: This article is also available in Portuguese, translated by Anders Bateva.

There seems to be a rising trend of criminalizing hackers & tinkerers. More and more, people who explore the limits of the equipment, hardware and software they own and use, whether they tinker with it, re-purpose it or expand its functionality, are met with unrelenting persecution by the authorities. In the last couple of years, even things humans have done for thousands of years, like sharing, expanding and improving upon culture, are being persecuted. An example is the recent possibility that violations of Terms of Service, Terms of Use and other terms put forward by service providers become a crime under the Computer Fraud and Abuse Act (CFAA). The companies that are now (for the most part) in control of our collective culture increasingly limit the methods of sharing, often through judicial and/or technical means. The technical means for the most part don’t work, thankfully: DRM is still a big failure and never got off the ground, although the content industry keeps trying to cling to it. The judicial means, however, can be very effective at crushing someone, especially in the litigious United States of America, where about 95% of all criminal cases end in a plea bargain because that is cheaper than trial by jury. Defendants are forced by financial pressure to enter a plea bargain, even if they didn’t commit the crimes of which they are accused.

Aaron Swartz

The late Aaron Swartz was persecuted heavily by the U.S. government for downloading millions of scientific articles from JSTOR at MIT, JSTOR being the closed-access library of scientific articles whose access is commercially exploited by ITHAKA, the entity that runs it. Aaron believed that scientific research paid for by the public should be available to the public for free. It is completely logical that research paid for by the public belongs to the public, and not to some company that basically says: “Thank you very much, we’ll have that; now we’re going to charge for access to the scientific results and reap the financial benefits.” It is sad that the world lost a great hacker and tinkerer: he committed suicide at only 26 years old, no longer able to bear the pressure brought down upon him, when in the end, according to his lawyer Elliot Peters, he probably would have won the case because the U.S. Secret Service failed to obtain a search warrant for his laptop until 34 days after seizing it.

The corporate world is seizing control of content creation

This trend is seen more and more lately. The companies in control of most of our content production, devices and systems don’t want you to tinker with them, not even if you own them. Apple is closing its systems: soon you may be prevented from installing your own software on OS X, with installs only permitted through the Apple-curated App Store. There is already software in OS X, called Gatekeeper, that is meant to prevent you from installing apps that might contain malware. If you read between the lines in that previous link, you’ll see it is only a matter of time before they tighten the reins and make Gatekeeper more oppressive.

Google is rapidly closing Android, moving more and more parts of the once open-source system into its own Google Play Services app. Check the permissions on that app; it is incredibly scary just how much of the system is now locked up in this closed-source binary blob, and how little the actual Android system now handles. Recently, text messaging functionality was moved from the Android OS to the Google Hangouts app, so texting on an Android 4.4 (KitKat) phone is no longer possible without a Google account that you are logged into. Of course, Google will store all your text messages, for easy access by American intelligence and law enforcement agencies. If you were to install Android now and remove the Google Play Services app, you might be surprised at how much depends on it these days: without it, your phone basically becomes a non-functional plastic brick.

These companies fail to see that every invention is made by standing on the shoulders of giants: building upon other people’s work, making it better, tinkering with it and modifying it, using it for purposes not envisioned by the original author, et cetera. This is what makes culture, this is what makes us. We are fundamentally social creatures; we share.
The same implementation of control systems happens with e-books as well. The devices used to read them, like the Amazon Kindle, usually aren’t open, and that is a problem. We humans have been sharing culture for millions of years and sharing books for thousands of years, basically since writing was invented in Mesopotamia. It is as natural to human development as breathing. We are social creatures, and we thrive on feedback from our peers.

But there is something worse going on in e-book land. In the Netherlands, all e-book purchases now have to be stored in a database called Centraal Boekhuis, which details all buyer information, and this central database will be easily accessible to Stichting BREIN, the country’s main anti-piracy and content industry lobby club. This was ostensibly done to prevent e-book piracy, but I would imagine that this database will soon be of interest to intelligence agencies. Think of it: a centralised database of almost all books and of which people read which books. You can learn a lot about a person just from the books they read. Joseph Stalin and Erich Honecker would be proud. We reached a high-water mark of society with the adoption of the Universal Declaration of Human Rights at the UN General Assembly on 10 December 1948, but it is sad to see that here in the Western world, we have been slipping from that high pillar of decency and humanity ever since. To quote V from V for Vendetta:

“Where once you had the freedom to object, to think and speak as you saw fit, we now have censors and systems of surveillance coercing your conformity and soliciting your submission.”

The surveillance is now far worse than what George Orwell could have possibly imagined. We need to remind the spooks and control freaks in governments around the world that Nineteen Eighty-Four is not an instruction manual. It was a warning. And we’ve ignored it so far.

Speaking Truth to Power: Integrity in the Mainstream Media


Yesterday I watched a public discussion (last link in Dutch) on Sargasso between Jeroen Wollaars, NOS reporter, and Arjen Kamphuis, futurist, writer, and co-founder and CTO at Gendo. During his talk at OHM2013 (titled: Futureshock), someone asked Arjen a question that went somewhat like this: “If we cannot trust the mainstream media anymore to supply us with the information we need to act as informed citizens, what is the alternative?” To which Arjen replied that, if you want to be better informed about what happens in the Western world, RT (Russia Today) is pretty good.

Now it is important to be very nuanced here. You probably shouldn’t believe the RT reporting done on stuff that is happening in Russia, as RT is, just like any media organization, selective in the information they broadcast, and probably won’t be objective when it comes to Russia, just like the Western media aren’t objective on Western subjects. But on Western issues, and informing us about all the stuff the Western governments are doing, the RT reporting is very good because unlike the Western mainstream media, the Russians dare to ask the questions that need to be asked. Questions that you won’t hear from the Western mainstream media, and the Dutch media in particular.

So many questions…

Why are the people who committed war crimes and crimes against humanity from an attack helicopter during the Iraq War under the Bush Administration, as shown in the Collateral Murder video, still allowed to walk free, whereas Chelsea Manning was sentenced to 35 years for simply exposing those very same war crimes? How come Manning was sentenced to 35 years, while Anders Breivik was sentenced to just 21? Isn’t that a bit off? A man who ruthlessly and pointlessly murdered 77 people gets fewer years in prison than someone who exposed the dirty laundry of the powers that be?

When exactly did Dutch Prime Minister Jan Peter Balkenende know about the contents of the Downing Street Memos? Remember, these were the memos that proved definitively that “facts were being fixed around the policy” and that Governor Bush was set on provoking a war with Saddam Hussein’s Iraq. His administration claimed that Saddam had WMDs (which was a blatant lie, even then), and they even tried to connect Saddam to Al-Qaeda.

Where is the coverage of our own intelligence agencies, like the AIVD and MIVD, in relation to the revelations on PRISM? Do they have the same capabilities? Do they request data on Dutch citizens from their UK and US partners? What kind of data sharing happens in these inter-agency cooperations? We know the Americans spy on Dutch citizens as well (just like they do on every person on the planet connected to the Internet or phone networks), but where are the critical questions from the media? Where are the tough talk shows and debates that really press high-ranking politicians on these very important issues? The Germans have at least asked their politicians these questions.

What is the underlying reason for the massive nation-wide push for the RFID OV-chipkaart public transport ticket (at the expense of normal paper tickets), the ANPR (automatic number plate recognition) cameras above the nation’s highways (which are also used by police), or the fingerprints on the RFID chip on our passports? The government seems intent on tracking our every move.

And these are just a handful of questions the Dutch media didn’t bother to ask and issues they didn’t bother to cover.

The problem with the Dutch mainstream media

The Dutch mainstream media are unfortunately excruciatingly bad at journalism. For instance, the whole Manning case is barely on the news here, but whenever the American presidential elections draw near, the entire Dutch mainstream press corps gets its knickers in a twist trying to report on the American ‘elections’ in nitty-gritty detail.

There are more important things going on in the world than an election that is principally undemocratic to begin with. After the 2000 presidential election, Governor Bush squatted the White House for 8 years, while Al Gore won the popular vote. It sure was convenient that Bush’s brother Jeb happened to be Governor of Florida when the electoral votes for that state were the deciding factor in who would win the presidency. And there is voter suppression and gerrymandering going on in the US as well, which can influence elections quite substantially. This fixation the Dutch media have on the US elections has always surprised me, given that the coverage is almost on par with that of our own elections!

The Dutch media stopped asking the critical questions, and are now almost exclusively broadcasting propaganda from Washington. No questions asked, no background stories, no critical analyses, no audi alteram partem. They now mostly copy-paste the press releases from PR departments, and I really miss the critical tone. Most articles are less than 3 paragraphs long.

I will gladly watch the NOS and other Dutch media again (online, for free, not behind a paywall, and using open standards to provide streaming video) when they start being critical of the government which decides on their budget, and start speaking truth to power.

And this is the main reason why I use RT (among others) to keep me updated on the stuff our Western governments are doing. Unlike the Western mainstream media, RT is asking the questions, they currently speak truth to (Western) power. And again, nuance is important: you shouldn’t believe RT too much when it comes to Russia, just like you shouldn’t believe the Western media too much when it comes to the West. It’s both propaganda, one way or the other. The Russians are at least open and frank about where RT gets their money from; in the West they are much more indirect and subtle about these matters. It’s always best to get your news from as many sources as possible, and make your own decisions on who is more likely to tell you the truth.