Inline's Data Policy Tracker, May 2019
by Inline Policy on 16 May 2019
In this second edition of our regular Data Policy Tracker, we cover the key political and regulatory changes, trends and developments impacting the data sector.
One year on from the implementation of GDPR, we examine the latest interventions from regulators, policymakers and politicians within the context of this evolving data policy landscape.
In this edition of our Data Policy Tracker we explore:
- European data regulator wants to clean up privacy notices by removing 'unnecessary' data sharing commitments;
- UK’s data ethics body launches extensive review of online targeting;
- European Member States are increasingly concentrating on sectoral and activity-focused codes of practice; and
- Enforcement action against obtaining biometric data without adequate consent.
You can receive the Data Policy Tracker direct to your inbox by subscribing here, and you can read the first edition here. We would love to hear your feedback on this Tracker and about any particular issues you would like to see covered in future editions: please do get in touch, or leave a comment at the bottom of the post.
European data regulators move to unbundle 'unnecessary' data processing from privacy policies
Data regulators fear privacy policies are being used to trap users into a raft of different data processing activities and have issued Guidelines to ensure online contractual agreements do not include unnecessary add-ons.
The European Data Protection Board (EDPB), the body which brings together the EU's data protection authorities, has issued formal Guidelines on the processing of personal data under GDPR in the provision of online services. The Guidelines are an attempt to curtail the extent to which "the performance of a contract" is used by online platforms as a legal basis for processing data. In its Guidelines, the EDPB specifically warned that GDPR does not allow for situations where the processing is not genuinely necessary for the performance of a contract, but is instead imposed on the data subject.
Data processing activities not considered necessary for the performance of a service include fraud prevention, general service improvement, online behavioural advertising, and personalisation of content. Specifically, (and with one eye on large social media platforms), the Guidelines state that privacy policies “cannot provide a lawful basis for online behavioural advertising simply because such advertising indirectly funds the provision of the service”.
The European Data Protection Supervisor Giovanni Buttarelli has backed the Guidelines in a blogpost, warning privacy policies had evolved into “either long, verbose and impenetrable legalese, or else vague and soothing PR exercises”. Buttarelli issued a call to arms, requesting “regulators begin working together across disciplines to tackle real cases”. On cue, the Irish Data Protection Commission has launched a statutory inquiry into Quantcast International, citing the purposes of profiling and utilising the profiles generated for targeted advertising as potential issues.
Online targeting and algorithmic decision making subject to detailed examination by UK data ethics body
The Centre for Data Ethics and Innovation (CDEI) has launched a ‘call for evidence’ to support its research and position on online targeting and bias in algorithmic decision making.
In its first public consultation since it was formed last year, the CDEI has highlighted the proliferation of online targeting as posing serious risks to users. The review concerns itself with how targeting practices can undermine or reinforce autonomy: the ability to make choices freely, based on information that is as full and complete as reasonably possible. The CDEI will produce an analysis of the governance frameworks that regulate these practices and supply recommendations to government, regulators and industry.
Similar exercises are taking place across Europe, as policymakers seek a meaningful role in shaping regulatory frameworks and ethical guidelines for AI and machine-learning applications. The European Commission has responded enthusiastically to the first set of ethical guidelines from the EU's high-level expert group on AI, calling for the principles to be applied in a controlled 'piloting phase' and refined at the beginning of next year. Together with Member States and stakeholders, the Commission will begin discussions on implementing a model for data sharing that applies the ethical guidelines, with a focus on transport, healthcare and industrial manufacturing.
Data regulators across Europe ramp up efforts to implement statutory codes of practice
Since the implementation of GDPR, national authorities have been slow to prepare codes of practice in conjunction with industry bodies. However, there are signs that this is changing.
GDPR explicitly encourages the development of codes of practice to assist with its "proper application". It should therefore come as no surprise that national authorities are, in growing numbers, updating existing industry codes and placing them on a firmer statutory footing. Any code that relates to processing activities must be submitted to the European Data Protection Board for an opinion on its compliance with GDPR.
The UK’s Information Commissioner’s Office (ICO) has now published its proposed 'Age appropriate design: a code of practice for online services'. The consultation paper outlines practical guidance on how to ensure online services appropriately safeguard children’s personal data. The code will be laid before Parliament and, according to the ICO, if a company or organisation fails to act in accordance with it, this may “invite regulatory action”.
In time, the UK hopes to place before Parliament codes of practice for the use of data in direct marketing, journalism, and political campaigns. There are similar developments in other European countries. In its Annual Activity Report, the Belgian Data Protection Authority stated it was developing two national codes of conduct: one on technical security measures in clinical trials and one on obligations in the journalism sector. In its own Annual Activity Report, the French data regulator CNIL likewise cited the creation of sectoral codes of practice as a key tool to improve compliance.
The UK regulator reminds companies that biometric data is 'special category' data and requires greater protection
The Information Commissioner’s Office (ICO) has forced the UK Government's Revenue and Customs department (HMRC) to delete voice authentication data for about 5 million customers, after HMRC was judged to have gathered this data without adequate consent.
Deputy Commissioner of the ICO Steve Wood has publicly warned data controllers who process users' biometric data to be wary of the strict conditions that apply to the activity. Alongside the usual obligations under GDPR, organisations and companies must obtain explicit consent (when using consent as a legal basis for holding data), with the data subject fully informed of the purpose for gathering the data and the options available to them.
Following a complaint from the privacy rights group Big Brother Watch, the ICO revealed that callers to HMRC were not given adequate information, nor advised that they did not have to sign up to its voice verification service. This is the first enforcement action taken in relation to biometric data since the advent of GDPR, which for the first time specifically identified biometric data as special category data requiring greater protection.