By Rebecca Lowe, Senior Reporter, International Bar Association
The world is experiencing a data explosion. Huge stockpiles of information are being amassed online at an unprecedented rate. According to Cisco, annual global internet traffic will hit 1.4 zettabytes by 2017, 12 times the amount generated in 2008. To put that in layman’s terms, if one gigabyte is the equivalent of a cup of coffee, 1.4 zettabytes equates to the Great Wall of China. If someone took it upon themselves to watch all the video crossing the web, it would take them approximately five million years to do so.
There is now so much data stored in the world that we’re running out of language to describe it. The only quantity bigger than a zettabyte is a yottabyte, a figure with 24 zeroes. After that, we’re on our own. Yet how many people know how much of this data is theirs or where it is being stored? Or who is protecting it? Or what rights they have to access or remove it? The answer, it appears, is not many: more than two-thirds of consumers have ‘little idea’ what happens to their personal information, according to research by the Boston Consulting Group (BCG). The global population is pouring more and more of its life online, into a bottomless pit it neither understands nor controls.
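For the numerically inclined, the scale of these prefixes can be pinned down in a few lines of Python. This is an illustrative sketch only; the 1.4-zettabyte figure is Cisco’s projection cited above, and everything else is simple unit arithmetic.

```python
# Decimal (SI) data units: each step up the ladder is a factor of 1,000.
UNITS = ["byte", "kilobyte", "megabyte", "gigabyte", "terabyte",
         "petabyte", "exabyte", "zettabyte", "yottabyte"]

def bytes_in(unit: str) -> int:
    """Number of bytes in one of the named units."""
    return 1000 ** UNITS.index(unit)

# A zettabyte is 10^21 bytes; a yottabyte is 10^24 -- a one followed by 24 zeroes.
assert bytes_in("zettabyte") == 10 ** 21
assert bytes_in("yottabyte") == 10 ** 24

# Cisco's 1.4 zettabytes a year works out at 1.4 trillion gigabytes.
gigabytes = 1.4 * bytes_in("zettabyte") / bytes_in("gigabyte")
print(f"{gigabytes:.3g} GB")  # 1.4e+12 GB
```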
‘People have analogised that where we are with data is comparable to where they were in the industrial revolution with pollution,’ says Brian Hengesbaugh, partner and technology specialist at Baker & McKenzie, and former special counsel in the US Department of Commerce. ‘Pollution was coming out all over the place and was not really controlled. That is the environment we are in with data generation right now.’
If people realised the true worth of their data, however, they may try harder to get a handle on it. Described by the World Economic Forum in 2011 as ‘the new oil’, personal information has emerged as a distinct asset class with huge untapped value; the ‘currency of the digital economy’, according to Viviane Reding, European Commissioner for Justice, Fundamental Rights and Citizenship. It could, says the BCG, amount to around eight per cent of the EU-27 GDP by 2020.
Yet while technology sprints unshackled into the future, old data protection laws struggle to keep pace. It is currently unclear who owns the rights over personal information and clarification is urgently needed, says Taylor Wessing IT-specialist partner Chris Rees, Co-Chair of a new IBA Working Group on Digital Identity. ‘If you start from the position that information is the “new oil”, then what lawyers need to do is recognise that the law needs to change to adapt to that new reality. It needs to develop appropriate safeguards and economic redress for this new asset class.’
Benjamin Amaudric du Chaffaut, senior legal counsel at Google France, admits the sluggishness of the law means the company is often forging the legislative path as it goes. ‘We generally can’t really rely on existing legal provisions and case law because we often face new legal issues. We therefore have to convince the court that we are doing the right thing.’
The recent leaking of the US National Security Agency’s Prism programme, which gave US authorities unprecedented access to personal data online, drew much-needed attention to consumer vulnerability. Suddenly, what material is held on the internet and how it is protected were questions not only being asked by NGOs and policymakers, but by internet users across the world.
While a growing number of countries are enacting data protection legislation, there remain significant inconsistencies across the world. The EU is due to bring in some of the toughest legislation in the world in 2014, while China and India – which will soon have more people online than Europe and the US have citizens – are in the process of updating their privacy laws. The US currently relies on a mixture of legislation, regulation and self-regulation for a patchwork of categories of information, but last year announced its intention to enact a privacy bill of rights for web users.
Differences in privacy laws act as a trade barrier and obstacle to innovation, as well as providing legal loopholes for savvy companies to exploit. Aware of the need for clarity on the issue, the IBA Working Group on Digital Identity has set about drafting a set of high-level principles to address concerns surrounding the collection and use of online information. Such data is not simply inputted by users, they point out, but comprises an entire ‘digital identity’ based on behavioural information, such as web-surfing, paying bills – or merely wandering around with a mobile phone, which continuously tracks your movements. Such information is stored by companies, often indefinitely, acting as a form of indelible, sprawling cyber-tattoo, a permanent digital footprint.
Shutting the blinds
‘Until recently there was a general lack of appreciation about the amount of data being collected and stored in the online environment,’ says Sylvia Khatcherian, Managing Director and Global Head of Technology, Privacy and IP Law at Morgan Stanley and Co-Chair of the Working Group alongside Rees. ‘Many questions need to be asked, such as who has control over this information, how is it being used and how is it protected?’
Khatcherian stresses the principles are not intended to be critical of internet companies, but aim to provide a balanced perspective and ‘consider all stakeholder interests’. Key issues under discussion include transparency (are terms and conditions clear and precise?); access (can users obtain or delete their information easily?); privacy (who else has access to this information?); security (are sufficient safeguards in place to prevent data misuse?); and accountability (are there obvious means of redress for data violations?).
Internet service providers (ISPs) currently have highly diverse data protection policies, many of which change regularly. Users of Facebook own the rights over the information they post, but grant Facebook a ‘non-exclusive, transferable, sub-licensable, royalty-free, worldwide licence’ over that content. Google, in a case currently being fought in California, has surprised many by stating that users of Gmail should have no ‘legitimate expectation’ that their emails will not be read by the company, ‘just as a sender of a letter to a business colleague cannot be surprised that the recipient’s assistant opens the letter’. In a recent interview, Google executive chairman Eric Schmidt admitted he could easily read people’s emails should he choose to do so – though quickly stressed that he would certainly lose his job and ‘be sued to death’.
As new technologies continue to blur traditional lines between the public and private spheres, and old privacy laws are shoe-horned into ever more complex modern scenarios, internet users are advised to remain vigilant. ‘People are only just beginning to realise they need discipline online,’ says IBA Senior Staff Lawyer Anurag Bana, a member of the Working Group who is also managing a project on the way social media is affecting the legal profession. ‘Ten years down the line we’ll realise what we’ve said on the internet and how we’ve been perceived. We’ve been giving up our rights, and doing it willingly.’
Technology lawyer Erik Valgaeren, a partner at Stibbe Brussels and member of the Working Group, says that people must start to think differently about the way they interact online. ‘The internet is becoming an extension of our functioning as individuals. It is forcing us to think about identity in a much more holistic way, about the trail we create on ourselves, knowingly and unknowingly. This triggers questions about how we are developing identity, how we are protecting it and what are the legal implications?’
Law of information
Leading the charge on data protection is the European Commission, which plans to implement a unified EU law, the General Data Protection Regulation, in 2014. The new law aims to strengthen people’s rights over their personal data and make it easier to transfer information from one service provider to another. It also significantly increases fines for data violations and extends the scope of protection to foreign companies processing the data of EU residents. This means a company such as Google would find it harder to insist that European privacy lawsuits be heard in California – as it is doing in an ongoing UK case, in which it is accused of secretly bypassing Apple’s Safari browser security settings to track people’s online activity. In its submission to the UK’s High Court, Google said the information taken was not ‘private or confidential’, and claimed that British courts should not hear the case as no data processing takes place there. Representing the claimants, Olswang’s Daniel Tench said he found it ‘surprising that Google is seriously trying to contend that there is no expectation of privacy in one’s history of internet usage’, which is ‘something inherently, intimately, personal’.
One particularly controversial provision of the EU draft law is the ‘right to be forgotten’, whereby personal information must be deleted when consent is withdrawn – such as, for example, embarrassing childhood posts on Facebook. While the idea has many supporters, some believe such a right would be impractical and conflict irreconcilably with other rights, such as freedom of speech. It is also unclear whether search engines such as Google would be affected alongside original publishers of content. In a recent Spanish case, a claimant won the right to have a 15-year-old newspaper article referring to non-payment of social security contributions removed from Google’s search results – despite the fact the newspaper itself was not obliged to remove the story. However, an Advocate General of the European Court of Justice (ECJ) disagreed, stating in June that Google is not obliged to delete ‘legitimate and legal information’ upon request (how illegal or libellous information should be treated remains a moot point). The ECJ has yet to make a final ruling in the case.
Chaffaut outlines Google’s position on the issue. ‘We are not the publisher, we are just a tool that helps users to find the content available on the web. If something is disclosed – legally, of course, and with their consent – we can’t just ask third parties to clean the web for them. It goes against many principles, such as freedom of information.’
Rees believes that one way to inject some clarity into the muddy legal framework surrounding digital identity would be for the law to recognise personal data as a form of property. While such a property right is already recognised in UK data protection laws, he says, it has yet to be sufficiently recognised by the courts. Were the courts to do so, people could more easily assert their legal and economic rights over their digital assets, while having the option to transfer those rights to a third party. An internet company would therefore need to recompense the user for any profit derived from the use of their information – unless, says Rees, it was aggregated and anonymised with other data.
‘Google regards the information it is harvesting as being its own by virtue of the harvesting,’ Rees continues. ‘My proposition would be that this is an incorrect analysis akin to the argument made by colonial powers who took oil from the Middle East. They felt the whole of the profit was theirs, but as the economy developed, the countries from which the oil was being extracted asserted their rights to be paid for that commodity.’
Such a ‘law of information’ may not be far away. While not defined in explicit property terms, personal information is increasingly being interpreted as a commodity with monetary value. In a recent US case, Facebook agreed to pay a settlement of $20m to 614,000 users for using their data to promote products and services without asking permission or offering compensation – though the judge admitted it had not been established that Facebook ‘had undisputedly violated the law’.
While Rees stresses that he does not wish to stifle innovation or ‘limit the capability of the information harvesting’, others believe that asserting such a property right would do just that. Hogan Lovells partner Tim Tobin, a specialist in data security law, points out that a number of laws already regulate the internet – including the Federal Trade Commission Act in the US, which imposes harsh penalties on companies engaging in unfair or deceptive trade practices – and believes that anything more could hold innovation back.
‘There are very thorny issues as to how you would value that particular property right,’ he says. ‘And I think we are already in a world where there is a trade-off that occurs when people are providing their information for various internet platforms. The trade-off is the interaction they obtain, the content that is made available to them.’
Indeed, ISPs would argue that handing over the keys to one’s digital identity in exchange for free email and web facilities is a fair trade. After all, Google’s modus operandi helps it to generate targeted marketing to benefit its partners and also develop new products, fight fraud, filter spam, improve its geolocation services and far more. Google may have offered to put a ‘do not track’ button on its Chrome browser, but the consumer cannot expect to be getting something for nothing. It is a business, after all, not a charitable endeavour – and we are not only its customers, but its merchandise.
The key, it seems, is transparency. Until people know what and where data is being held, and exactly what is being done with it, neither they nor policymakers can make informed choices about this currency exchange being enacted on their behalf. If customers are not getting their money’s worth, they need to know about it. They could even be given the option of maintaining their privacy and paying for services the old-fashioned way – with cold, hard cash.
Data collection and storage, if done securely and transparently, should not necessarily be something to fear. Once clear access rights are agreed, such a wealth of information could prove as useful to consumers as to companies – and ultimately help to construct a more detailed and accurate picture of our lives than human memory could ever hope to replicate.
Indeed, Jeremy Bailenson, founding director of Stanford University’s Virtual Human Interaction Laboratory, believes big data analysis has immense potential to change society for the better. He has spent almost ten years assessing what knowledge can be determined from people’s non-verbal movements alone – knowledge, he claims, which is far more powerful than that garnered from other internet activity, such as web browsing. Many of his experiments are based around the Microsoft console Kinect, which continuously observes players’ behaviour and shares the information online. From this data alone, ‘there is nothing we haven’t been able to predict when we’ve tried,’ he says, from aspects of people’s personalities to their learning capabilities to their likelihood of crashing a car.
The concern, he says, is if such information falls into the wrong hands. ‘It’s easy to regulate a company like Microsoft because they are selling you these products and can be held accountable. What I’m worried about is that you have this device in your home that tracks your movements and others can easily access that.’ He adds: ‘When you’re online, you think you’re anonymous and that no-one is watching, but in reality you’re more identifiable and more telling than you are face-to-face.’
Now that vast amounts of data are being shifted into the ubiquitous ‘cloud’ – an aptly nebulous term for what is, in effect, a vast network of online servers – concerns over security are more pressing than ever. ‘The sharing and combination of data through cloud services will increase the locations and jurisdictions where personal data resides,’ says Olswang partner and technology specialist Blanca Escribano. ‘For machine-to-machine communication and the internet of things [devices connected via the internet that communicate with each other and with the wider world], some concepts of traditional data protection rules have to be rethought. As stated by the EU Commission, the meaning of “personal” data, the purpose of this data, who is liable and what is “consent” have to be adapted to this new context.’
However, it would be extremely hard for a hacker to access cloud data, Chaffaut insists, because it is both encrypted and fragmented across multiple servers, and would only be readable when accessed on a local device. ‘Now almost everything is in the cloud, and this is something that cannot be changed,’ he says. ‘The challenge is to show that the whole of the infrastructure is secured and we support the highest security standards to protect our users’ data.’
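Chaffaut’s description – encrypt the data, then scatter the pieces – can be sketched in a few lines of Python. This is a toy illustration only: the XOR ‘cipher’ below is not real cryptography, the ‘servers’ are just a list of byte strings, and nothing here should be read as describing Google’s actual infrastructure.

```python
import hashlib
from itertools import cycle

def toy_encrypt(data: bytes, key: bytes) -> bytes:
    # Toy XOR stream "cipher" for illustration only; real systems use AES etc.
    # Applying it twice with the same key restores the original bytes.
    stream = hashlib.sha256(key).digest()
    return bytes(b ^ k for b, k in zip(data, cycle(stream)))

def fragment(blob: bytes, chunk_size: int) -> list:
    # Scatter the ciphertext across several hypothetical servers, so that
    # no single machine ever holds the whole of a user's data.
    return [blob[i:i + chunk_size] for i in range(0, len(blob), chunk_size)]

key = b"key-held-on-the-local-device"
shards = fragment(toy_encrypt(b"a private email", key), chunk_size=4)

# Only by reassembling every shard AND knowing the key is the data readable.
restored = toy_encrypt(b"".join(shards), key)
assert restored == b"a private email"
```

The point of the pattern is that a hacker who compromises one server obtains only an encrypted fragment, which on its own is useless.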
Because any loss of social media data could have a devastating impact on users, Valgaeren believes ISPs could be called upon to sign up to a certification or ‘monitoring’ scheme that verifies data security and the overall health of the company. ‘For many people, the digital world is as important as – or more important than – the real world,’ he says. ‘So the question is, can we leave it up to the market to take on this very significant role of being your digital identity provider, or should there be some government-regulated or self-regulated trust label for these companies?’
The need for greater trust online has prompted a series of innovative proposals for identity management systems from both the private and public sectors. The emerging idea is to create a form of ‘identity ecosystem’, whereby users can store personal information across several different identity providers. Through this system, people’s identity could be verified without them necessarily having to reveal it publicly, as they could control how much information is disclosed in any one transaction. This way, users could more easily keep track of multiple accounts and passwords, while companies could guard against fraud.
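One way to picture the selective disclosure described above is a provider that holds an attribute and answers only yes-or-no questions about it. The class and method names below are hypothetical, and real identity ecosystems rely on signed, verifiable credentials rather than a trusted in-memory store – this is only a sketch of the principle.

```python
from datetime import date

class IdentityProvider:
    """Hypothetical identity provider: the relying party learns only the
    answer to its question, never the underlying attribute itself."""

    def __init__(self):
        self._vault = {}          # attributes stay with the provider

    def enroll(self, user: str, date_of_birth: date) -> None:
        self._vault[user] = date_of_birth

    def attest_over_18(self, user: str, on: date) -> bool:
        # Compute the user's age; only the yes/no answer leaves the provider.
        dob = self._vault[user]
        age = on.year - dob.year - ((on.month, on.day) < (dob.month, dob.day))
        return age >= 18

provider = IdentityProvider()
provider.enroll("alice", date(1990, 6, 1))
print(provider.attest_over_18("alice", on=date(2013, 9, 1)))  # prints True
```

A shop verifying age this way never sees the date of birth – exactly the kind of transaction-by-transaction control over disclosure the ‘identity ecosystem’ proposals envisage.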
While any kind of identification system tends to be viewed with mistrust in Western countries, elsewhere this is not always the case. The World Bank recently announced its plan to provide digital identities to all of India’s 1.2 billion citizens in order to fight fraud and help people access financial services. The scheme, according to World Bank President Jim Yong Kim, may prove to be a ‘poverty killer’.
What is clear is that this brave new world of big data has the potential both for greatness and for gross exploitation: ‘Like uranium,’ says Bailenson, ‘it can heat homes and it can destroy nations’. Which direction it takes is ultimately up to us.