Privacy blockbusters, the Brussels Effect & AI disruption
Long queues formed at the entrance to last week’s sold-out IAPP annual congress, the largest gathering of top privacy professionals in Europe. With 3,000 people in attendance, and the likes of Google, Amazon and Microsoft prominent as sponsors, the ‘Brussels Effect’ certainly appears to be in full swing.
The term ‘Brussels Effect’ was coined around the time of the first IAPP gathering 12 years ago, and has come to mean the globalisation of EU regulatory standards through market forces. This has been seen in fields from chemicals to aeroplane emissions and, most recently, privacy with GDPR.
When GDPR came into force in 2018 it quickly became the gold standard for privacy, and the EU has recently doubled down with a suite of new regulations covering everything from social media to data portability and mobile devices.
AI, however, remains the star of the show, with its transformational qualities and unique challenges for governance. In October the White House published its own standards on AI security and safety, and unless the EU completes the passage of its own AI Act soon, Brussels risks losing its position to the US government’s buying power.
Nestled beside the EU parliament, the IAPP event had its fair share of scoops, not least that agreement on what counts as “high risk” activity under the AI Act is being finalised. Just as GDPR applied a risk-based approach to data-rich activities like marketing, risk levels will be key to how different applications of AI are treated. This will feed demand for tools and software to automate new governance processes, much as GDPR did for PETs (privacy-enhancing technologies) back in the day — but more on that shortly.
Alongside fresh challenges, one topic is evergreen amongst privacy folk, regulators and tech platforms alike: consent and choice in the use of personal data. The UK’s data and competition regulators recently published a joint paper focused on the online interfaces and controls that form a fundamental touchpoint between companies and the citizens who participate in digital markets.
They describe these as “online choice architecture” — or, when feeling less charitable in recent postings, “dark patterns” — UX design that emphasises the downside to users of exercising their data rights, a practice known as “biased framing”.
From the panel I joined in my capacity as Industry Commissioner to the DMC, it’s clear that advertisers, intermediaries and politicians alike want to move beyond threshold compliance in the form of static cookie banners, and to address public concern that organisations are using personal data without permission.
As connected devices from cars to hearing aids become pervasive, and AI makes increasingly intimate connections between these collection points, companies must look at how they build data dialogue and trust with citizens, rather than simply broadcast their terms.
The good news is that the conference offered plenty of examples of companies facing their challenges openly and sharing how they dealt with them: publishing ethics principles and mobilising governance frameworks for AI (based on a show of hands, around half of attendees already had these in motion). It’s now more common to see the results of DPIAs (Data Protection Impact Assessments) shared with regulators, stakeholders and citizens, which raises the general level of understanding and helps engineers build responsible applications.
Swash’s DPIA was one of the first projects I took on as their advisor, and at the IAPP I was repeatedly asked how data unions can play a role as intermediaries in the new type of data economy the EU and other blocs want to build.
The concept behind data unions is simple: a group which represents the shared interest of people and their data. I download a plugin or app, and assign control to the group, which is bound by rules of association. Unions already handle all sorts of data, from browsing to financial transactions (like Unbanx), and can be run on altruistic principles or as a new type of commercial intermediary.
What’s encouraging about the new data regulations is that the EU is putting its own financial muscle into play, pledging funding to kickstart the ecosystem. Each data union still faces the challenge of finding and scaling its business model, but I think AI offers fresh opportunities in this area too.
One interesting challenge shared by Mastercard resonated with other organisations I talked to at the event. They keep as little data as possible by design, but now need to add personal information to check the algorithms used in different parts of the business for bias. Data unions can provide targeted, zero-party data to meet this AI-driven need, and flow the proceeds back to their members or to good causes.
I think the political will is there for the EU to hit its target of getting the new regulations in place before next year’s elections, and to continue driving positive change in concert with the US and the other parts of the world represented at this event.
Given the AI Act is essentially a ‘product safety regulation’, this clarity will be essential to the business plans of current builders and users of this revolutionary technology. I predict that, much like the DPIA under GDPR, tools such as conformity assessments will be key to managing risk and disclosure around AI, and I’ll return to the subject in more detail in another post. If you agree, or found this interesting, please reach out or follow for more updates…