The European Data Protection Board (EDPB), an expert body that advises EU lawmakers on how to interpret rules governing citizens’ personal data, has warned the bloc’s legislators that a set of incoming digital regulations risks undermining fundamental rights unless “decisive action” is taken to amend the proposals.
These are draft rules covering the governance and liability of digital platforms (the Digital Services Act; DSA); proposals for ex ante rules for Internet gatekeepers (the Digital Markets Act; DMA); the Data Governance Act (DGA), which aims to encourage the reuse of data as a driver of innovation and AI; and the Regulation on a European approach for Artificial Intelligence (AIR), which sets out a risk-based framework for regulating AI applications.
The EDPB’s analysis further suggests that the pan-EU digital rulebook update will be hampered by fragmented oversight and legal inconsistencies – potentially conflicting with existing EU data protection law – unless the package is clarified to head off inconsistent and prejudicial interpretations.
In particular, in a statement released today following a plenary meeting yesterday, the EDPB directly calls on EU lawmakers to impose stricter rules on targeted advertising in favor of alternatives that do not require the tracking and profiling of Internet users – urging them to consider “a phase-out leading to a ban on targeted advertising on the basis of ubiquitous tracking”.
In addition, the EDPB statement urges that the profiling of children for the purpose of ad targeting be prohibited altogether.
As it happens, the European Parliament’s Committee on the Internal Market and Consumer Protection (IMCO) was holding a hearing today to discuss targeted advertising, as MEPs consider amendments to the wider package of digital regulations known as the Digital Services Act (DSA).
A number of MEPs have called for an outright ban on tracking-based ads to be added to the DSA package – amid growing concern over the myriad harms flowing from surveillance-based advertising, from ad fraud to individual manipulation and democratic erosion, to cite just a few.
However, MEPs speaking at the IMCO committee hearing today suggested there would be no majority backing in parliament for a ban on ad tracking – despite compelling testimony from a range of speakers articulating the harms of surveillance-based advertising and calling out the adtech industry for deceptive lobbying on the issue, by seeking to conflate targeting and tracking.
While retail lobbyist Ilya Bruggeman spoke out in favor of tracking and profiling – echoing the big ad platforms’ claim that SMEs rely on privacy-invasive ads – other speakers in the committee session aligned themselves with civil society in challenging that line.
Johnny Ryan, a former adtech industry insider (now with the ICCL) – who has filed numerous GDPR complaints over the misuse of personal data by real-time bidding (RTB) ad auctions, calling it the biggest data breach in history – began his presentation with a sharp debunking of industry spin, telling MEPs the problem is not, as the session title put it, “targeted ads”; rather, the problem boils down to tracking-based ads.
“You can have targeting, without having tracking,” he told MEPs, warning: “The industry that makes money from tracking wants you to think otherwise.”
A parliamentary focus on “behavioral ads”, rather than on the tracking that powers them, sounds like great news for providers of dark patterns.
That said, MEPs do appear to be considering a ban on the tracking and profiling of minors for ad targeting – raising questions about how this could work without robust age verification also being implemented across all Internet services… which, uh, is far from the case now – nor a feature of most people’s favorite versions of the internet. (The UK government might like it, though.)
So, if this ends up in the final version of the DSA, one way for services to comply – and reduce their risk of being accused of ad targeting minors – might be to turn off ad tracking for all users by default, unless they can robustly verify that a specific user is an adult. (So perhaps adtech platforms like Facebook would start requiring users to upload national ID to use their “free” services, in this version of the future…)
In light of MEPs’ reluctance, the EDPB’s intervention looks significant – although the body itself has no legislative power.
But by urging the EU’s co-legislators to take “decisive action”, it is clearly pressing the Council, Parliament and Commission to redouble their efforts, to avoid the pitfall of self-interested lobbying – and to remember that alternative forms of online advertising (such as contextual targeting) are available. And profitable.
“Our concerns fall into three categories: (1) the lack of protection of individuals’ fundamental rights and freedoms; (2) fragmented supervision; and (3) the risks of inconsistencies,” the Board wrote in the press release, before warning that it “considers that, without further amendments, the proposals will negatively impact the fundamental rights and freedoms of individuals and will lead to significant legal uncertainty that would undermine both the existing and future legal framework”.

“As such, the proposals may fail to create the conditions for innovation and economic growth envisioned by the proposals themselves,” it also warns.
The EDPB’s concerns about citizens’ fundamental rights also extend to the Commission’s proposal to regulate high-risk applications of artificial intelligence, with the body arguing that the draft AI regulation does not go far enough to prevent the development of AI systems designed to categorize individuals – such as by using their biometric data (e.g. facial recognition) to sort them by ethnicity, gender and/or political or sexual orientation, or other prohibited grounds of discrimination.
“The EDPB considers that such systems should be banned in the EU and calls on the co-legislators to include such a ban in the AI Regulation,” it writes. “In addition, the EDPB considers that the use of AI to infer the emotions of a natural person is highly undesirable and should be prohibited, except for certain well-specified use cases, namely for health or research purposes, subject to appropriate safeguards, conditions and limitations.”
The Board also reiterated its earlier call for a ban on the use of AI for remote biometric surveillance in public places, following a joint statement with the European Data Protection Supervisor in June.

MEPs have also previously voted to ban remote biometric surveillance.
The Commission’s proposal offered only a lukewarm, caveated restriction, which was widely criticized as insufficient.
“[G]iven the significant adverse effect on the fundamental rights and freedoms of individuals, the EDPB recalls that the AI Regulation should include a ban on any use of AI for automated recognition of human features in publicly accessible spaces – such as faces, but also gait, fingerprints, DNA, voice, keystrokes and other biometric or behavioural signals – in any context,” the Board wrote in the press release.

“The proposed AI Regulation currently allows the use of real-time remote biometric identification systems in publicly accessible spaces for law enforcement purposes in certain cases. The EDPB welcomes the resolution recently adopted by the European Parliament, which highlights these significant risks.”
On oversight, the EDPB appears concerned that data protection authorities are being bypassed by the bloc’s ambitious flotilla of digital regulation updates – urging “complementarity in supervision” to strengthen legal certainty, as well as the need for DPAs to have “sufficient resources to carry out these additional tasks”. (A perennial problem in an age of ever-expanding data.)
Legal certainty would also be improved by including explicit references to existing data protection legislation (such as the GDPR and the ePrivacy Directive), it argues, to avoid the risk that the incoming legislative package weakens core GDPR data protection concepts, such as consent to data processing.
“It also creates the risk that certain provisions may be interpreted as deviating from the GDPR or the ePrivacy Directive. Consequently, certain provisions could easily be interpreted in a manner incompatible with the existing legal framework and subsequently lead to legal uncertainty,” warns the Board.
So, far from the much-vaunted reboot of EU digital regulations strengthening protections for citizens – and thereby boosting their trust in data-driven services – there is, absent very significant amendments, a risk of death by a thousand cuts (and/or regulatory complexity) for fundamental rights, with potentially ruinous consequences for the “European values” the bloc so loudly proclaims.