Europe’s 5-year digital priorities

POLITICO’s weekly transatlantic tech newsletter for global technology elites and political influencers.
By MARK SCOTT
Send tips here | Subscribe for free | View in your browser
WE’RE BACK. THIS IS DIGITAL BRIDGE, and I’m Mark Scott, POLITICO’s chief technology correspondent. I’m guessing some of you are still on vacation; enjoy! For those who are back at your desks, I bring you my favorite character from the Marvel Cinematic Universe.
Ready? Let’s get started.
— The European Union is preparing another digital legislative push for the next European Commission’s five-year term. That would be a mistake.
— Renée DiResta, most recently research manager at the Stanford Internet Observatory, talks about the power of online influencers, the digital mob and social media’s algorithms.
— A U.S. federal judge ruled that Google abused its search monopoly. Now comes the hard part: How do you fix that?
IF THE LITANY OF EMAIL BOUNCE-BACKS AND out-of-office WhatsApp messages I’ve received over the last few weeks tells me anything, it’s that many European officials are still resting ahead of the real work starting back up in early September. Still, plans are already afoot for a hefty dose of digital policymaking over the next five years in Brussels — on top of what has arguably been the most active digital lawmaking period ever, between 2019 and 2024. Before we dive into what’s coming, let’s do some housekeeping. The next Commission, the EU’s executive branch that sets the bloc’s policymaking priorities, won’t be in place until mid-November. The body must still be officially approved by the European Parliament, which will hold individual hearings for each would-be commissioner from late September onward. Only after the Commission is rubber-stamped in its entirety can it officially get to work. Got that? Phew.
The months-long wait doesn’t mean that officials are twiddling their thumbs. The manifesto of Ursula von der Leyen, the soon-to-be returning president of the European Commission, is full of digital policies, albeit most of them framed via the prisms of competition, industrial policy, security and climate change. If her last tenure was defined, at least digitally, by a revamp of the bloc’s approach to social media, antitrust and, belatedly, artificial intelligence, the upcoming five years should be viewed as an effort to harness the digital world in the name of broader societal goals. Alongside these new priorities (we’ll get to them in a minute), you can also expect long-standing battles — especially on efforts to revamp the bloc’s privacy standards, known as the General Data Protection Regulation — to trundle along with little, if anything, to show for it. And, for the extreme policy wonks, you get extra points for any movement on the so-called ePrivacy Regulation, the ugly duckling of Europe’s digital rulemaking.
So what can you expect? Von der Leyen secured a second term in the Berlaymont building by promoting a muscular vision of Europe — one based on defense, global competitiveness and growth. To that end, the German politician wants to double down on the bloc’s recently approved Artificial Intelligence Act with not one, but three AI-focused priorities. A so-called AI Factories Initiative will open up government-backed supercomputing capacity for local firms. An “Apply AI Strategy” will seek to embed the emerging technology in existing pan-EU industries. And a “European AI Research Council” will try to bring together research from across the region in a CERN-style program.
The German made a big point of warning of Russian disinformation and propaganda ahead of June’s European Parliament election. Those efforts are expected to lead to a so-called European Democracy Shield, an as-yet-undefined push to pool Europe’s work on countering online foreign interference. Much of this work already takes place within a team inside the EU’s diplomatic corps, as well as at the national level. The upcoming Commission wants to expand that work to detect and counter state-backed disinformation campaigns. My discussions with EU officials about what this actually means have left me none the wiser, other than that the proposal meets von der Leyen’s wider political aim of showing Europe as a tough global operator.
The next big-ticket items on the upcoming Commission’s agenda are the buzzwords of the 2020s: competition and industrial growth. Brussels eyes, enviously, what both the United States and China have been able to achieve by investing heavily in domestic industries in ways that, some would say, are anticompetitive. Expect the EU to continue along a similar line over the next five years, with investments targeted at local semiconductor production, quantum computing capacity and, as stated above, artificial intelligence use cases. Von der Leyen will also prioritize enhanced cybersecurity resilience — as part of her wider defense plans — by using EU funds to back local firms to create a “trusted European cyber-defense industry.”
To all of this, I say, “Fine, go ahead.” It’s understandable that von der Leyen 2.0 would want to propose new policies because, well, that’s what politicians do. But in her ongoing effort to move the EU forward, she’s forgetting that the bloc’s existing (newish) rules — notably the Digital Services Act, Digital Markets Act, Data Governance Act, Data Act and AI Act, to name just a few — still need to be implemented in an effective and reasonable way. That requires more resources to hire experts and enforcers; policy capacity to tweak these rules when, inevitably, things go wrong; and a willingness to acknowledge that portraying Europe as the West’s digital policeman doesn’t necessarily equate to either a safer online world or greater economic growth. In short, it would be better to prioritize these existing rules than to chase the next shiny digital priority.
It’s worth remembering the EU has form for this. Almost a decade ago, the Commission — then run by Luxembourg politician Jean-Claude Juncker — proposed a “Digital Single Market Strategy for Europe.” The basic aim was to break down the online barriers across the then-28-country bloc to boost economic growth, reduce internal trade barriers, and create a pan-EU market to compete with international rivals like the U.S. and China. Fast forward to 2024, and that vision remains incomplete. Europe, unlike its global competitors, is not one single country, and digital barriers, writ large, still represent unnecessary cross-border friction. If von der Leyen really wanted to make an impact with digital policy over the next five years, actually completing what Juncker started — namely a borderless digital single market across the EU — would be a good place to focus.
AFTER YEARS OF TRACKING ONLINE DISINFORMATION, propaganda and other digital nastiness, Renée DiResta sees patterns where others see chaos. In her new book, “Invisible Rulers: The People Who Turn Lies into Reality,” the former Stanford University researcher pieces together a theory about why, seemingly out of the blue, online coalitions of unconnected people can quickly jump on a specific (sometimes political) issue, make it trend across social media and, in the worst cases, take it offline in ways that lead to real-world harm. Case in point: the Jan. 6 Capitol Hill riots or the Jan. 8 attacks in Brasilia. For DiResta, these increasingly frequent events, which often appear random, come down to three overlapping trends. You need online influencers to fan the flames of a cause. You need social media algorithms to promote those messages, far and wide. And you need online crowds of users, often with similar viewpoints to the influencer, willing to share the posts with whoever will listen.
“Influencers are charismatic, they’re interesting, but they also have this innate sense of what the algorithm wants,” DiResta told me via Google Meet this week. “You can’t really talk about one of these things without the other. The influencers get support from the crowd. The crowd is going to be the person who is engaging with the content. But the influencer is making content, not only for the crowd, but also for the algorithm.” Confused? Let’s break this down. Policymakers and politicians struggle to understand how online chatter can quickly turn into offline harm. In DiResta’s theory, such action needs all three areas — an influencer to spark a cause, a social media algorithm, focused on driving user engagement, to surface that content, and an online crowd to fan the initial flames. In that way, global events like the Canadian truckers’ protest; conspiracy theories linked to Covid-19; and ongoing falsehoods about the war in Ukraine can take on a life of their own, in a matter of minutes.
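To make that feedback loop concrete, here is a minimal sketch, in Python, of how an engagement-driven ranking system couples DiResta’s three parts: influencer reach seeds a post’s visibility, the feed surfaces whatever scores highest on predicted engagement, and crowd re-shares push those same posts higher the next round. Everything in it (the Post fields, the scoring weights, the re-share rule) is hypothetical and purely illustrative, not any real platform’s ranking code.

```python
# A toy sketch of the influencer-algorithm-crowd loop. All weights,
# fields and rules here are hypothetical, for illustration only.
from dataclasses import dataclass


@dataclass
class Post:
    author_followers: int  # the influencer's reach
    shares: int = 0        # crowd engagement so far
    likes: int = 0


def engagement_score(post: Post) -> float:
    # The "algorithm": reach seeds visibility, but crowd shares are
    # weighted far more heavily, so traction beats raw follower counts.
    return 0.1 * post.author_followers + 5.0 * post.shares + 1.0 * post.likes


def feed(posts: list[Post], k: int = 1) -> list[Post]:
    # Surface the k posts predicted to drive the most engagement.
    return sorted(posts, key=engagement_score, reverse=True)[:k]


def simulate(posts: list[Post], rounds: int = 5) -> None:
    # Each round the crowd re-shares only what the feed surfaced,
    # raising those posts' scores for the next round: the feedback loop.
    for _ in range(rounds):
        for post in feed(posts):
            post.shares += max(1, post.shares // 2)


posts = [
    Post(author_followers=500_000),               # big account, no traction yet
    Post(author_followers=2_000, shares=15_000),  # niche post the crowd ran with
]
simulate(posts)
print([engagement_score(p) for p in posts])
# The niche post's early crowd traction compounds round after round;
# the big account's raw reach never catches up.
```

Crude as it is, the toy captures the dynamic DiResta describes: once the crowd engages, the algorithm keeps choosing the same post, and the loop feeds itself regardless of how small the original influencer’s audience was.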
It’s a compelling theory, especially as I read her book amid ongoing civil unrest in the United Kingdom, where I live, that led far-right mobs to carry out violence across the country. While none of their grievances started online, much of the sporadic, seemingly unconnected violence was coordinated online via Telegram channels, Facebook groups and posts on X. It’s a prime example of how individual influencers — even those known only to a small niche of online users — can have outsized effects when fueled by social media algorithms dedicated to keeping people glued to these platforms, and by online communities of like-minded users who share many of the underlying beliefs. I should add that such influencer-algorithm-crowd dynamics are as valid in explaining how certain TikTok dance crazes go viral as they are in explaining the inner workings of the Jan. 6 violence.
DiResta has views on how the negative sides of these dynamics should be tackled. Tech companies must do a better job of basing their content moderation rules on international human rights law and on quantifiable evidence of harm. FWIW, the firms mostly say they already do that. Regulation — still a four-letter word for many in the U.S. — should increase people’s understanding of how these social networks function via greater transparency. That includes the wonky risk assessments and audits the EU and the U.K. are planning via their social media legislation. Governments should also actively enforce existing rules on commercial and paid-for political speech (think: an influencer should be clear if she is being paid to promote a political view). And officials should out covert efforts to sway public opinion, akin to the so-called prebunking the U.S. Department of State has done when tackling Russian disinformation about Ukraine.
“The American political culture is polarized. This topic, particularly content moderation, has been made even more polarized,” she acknowledged when I asked her about how such efforts could play out in the U.S. versus elsewhere. “There’s a strong culture of preserving free expression that I think is good, and that is just different than what might make it through in Europe.” What, then, should the role of the government be in mitigating potential harm online? “I don’t believe it’s the business of government to involve itself in the nitty-gritty decisions of content moderation,” DiResta added. “What is the role of government? I would argue it’s to enable the discoveries of externalities and harms.” In short: Officials should make it easier for others to flag potential online nastiness. They shouldn’t be doing it themselves.
Despite the inherent nerdiness of her expertise, DiResta has become a central figure in the U.S. culture wars over content moderation — especially around how some on the right of American politics view the result of the 2020 presidential election. During her time at the Stanford Internet Observatory, the researcher was central in tracking online falsehoods, dubbed “The Big Lie,” alleging that Democrats stole the last presidential election from Donald Trump. Her team also worked to surface potentially life-threatening misinformation during the Covid-19 pandemic. Those efforts brought her to the attention of the likes of Jim Jordan, a U.S. congressman whose House subcommittee has done all it can to undermine research, primarily by academics, into how online hate and conspiracy theories can spill over into the offline world.
DiResta left her job at Stanford in June after also facing a deluge of lawsuits — primarily fronted by former Trump aide Stephen Miller. She is also public enemy No. 1 for those who believe in the so-called Censorship Industrial Complex, the unfounded claim that the U.S. federal government, academics and tech companies have attempted to silence rightwing voices online. FWIW, repeated studies have shown that is not true. “I have regrets,” she said when I asked how things had played out over the last four years. “I wish I had spoken out as me, ‘Hey guys, no, this is what actually happened, here are the facts.’”
“The thing that Congress set out to do, that Jim Jordan set out to do, was to destroy communication networks between different groups of people,” DiResta added. As a result of ongoing congressional subpoenas and requests for information to academics and tech companies about their potential (unproven) collusion with the federal government to silence rightwing voices, many of these groups have shut down collaboration efforts — primarily to avoid hefty legal costs and unwanted political attention. The result is that no meaningful, expansive research is underway to flag potential online-offline harm ahead of November’s election in the U.S. “That is what (Jordan) succeeded at,” the researcher added. “So even as there are very committed election officials, who are going to work as hard as they can to ensure a free and fair election, he’s created an environment in which they’re going to think twice before reaching out to an academic. And I think that’s terrible.”
IN EARLY AUGUST, AMIT MEHTA, a U.S. district judge, did the unthinkable: He ruled that Google abused its dominant position in search via exclusive partnerships with companies like Apple and Samsung. “Google is a monopolist, and it has acted as one to maintain its monopoly,” Mehta wrote in the most important digital antitrust case in the U.S. since the Microsoft ruling more than 20 years ago.
Google will now appeal, and the final outcome likely won’t be known for years. But, next month, a date will be set for a linked court case on what, exactly, the remedies should be to give other search engines a chance at competing with Google. This is where it gets incredibly complex, mostly because Google’s search product is so intertwined with not only the company’s own services, but also those across the wider internet.
Expect a massive political fight to ensue. In the worst case (for Google), the judge could rule to break Search off from the tech giant’s other businesses like Chrome and Android. He could also order the firm to end the exclusive search deals with device makers at the center of this case, or demand that consumers be offered a so-called choice screen from which they could select an alternative search engine.
None of these solutions is easy or perfect. Google will fight tooth and nail to protect its position, while Apple, for example, may voluntarily choose to keep its ties to its Big Tech rival, mostly because Google Search is still the best in the business. What is unclear from all of this is how any of it will improve consumers’ ability to make an informed choice over which search engine to use.
WE’RE HEADING TO THE WEST COAST this week, where Scott Wiener, a local Democratic politician, is the main force behind California’s efforts to regulate artificial intelligence via its proposed Safe and Secure Innovation for Frontier Artificial Intelligence Models Act.
A New Jersey native with degrees from Duke University and Harvard Law School, he had a legal career before joining the San Francisco Board of Supervisors. In 2016, he became a California state senator.
Wiener introduced the AI legislation that would impose safety requirements on the most advanced AI models, including mandatory risk assessments before companies can release such services. The bill has been vocally opposed by Nancy Pelosi, the former speaker of the U.S. House of Representatives.
“As the relevant content is accessible to EU users and being amplified also in our jurisdiction, we cannot exclude potential spillovers in the EU,” European Commissioner Thierry Breton wrote on X ahead of Elon Musk’s online interview with Donald Trump, in a post that some in the U.S. said represented potential European interference in the upcoming U.S. election. “We are monitoring the potential risks in the EU associated with the dissemination of content that may incite violence, hate and racism.”
— The U.S. Office of the Director of National Intelligence, the Federal Bureau of Investigation and the Cybersecurity and Infrastructure Security Agency published a joint statement accusing Iran of hacking Donald Trump’s presidential campaign in an act of foreign interference.
— The Massachusetts Institute of Technology published an AI-risk database that included more than 700 potential risks into which policymakers can drill down. More here.
— The EU Disinfo Lab updated its so-called Coordinated Inauthentic Behavior Detection Tree to give others the ability to track the potential online traits of covert influence campaigns. More here.
— Freedom House published an Election Vulnerability Index that included specific metrics on how countries worldwide could be affected by digital tactics linked to elections. More here.
— The U.S. Department of State created a risk-management framework for artificial intelligence to help organizations develop AI systems that are consistent with international human rights. More here.
— OpenAI and Meta published separate reports on how state-backed actors were trying to use their digital services in covert influence campaigns. More here and here.
Save the date! On September 25, POLITICO will host the event “Europe’s Digital Future: Navigating the Path of Connectivity and Innovation” to discuss the role of connectivity and innovation in the context of the ongoing digital transformation. Register today!
