How should we regulate digital platforms?


TSE held the 16th edition of its Digital Economics Conference on January 11 and 12 in Toulouse.

The breakneck pace of the digital transformation poses unprecedented challenges for regulators. Market competition, artificial intelligence (AI) and online hate speech were among the issues discussed during a lively roundtable at TSE Digital Center’s annual conference in January. Chairing the panel, William Kovacic (George Washington University) noted that lawyers and economists will need help from other disciplines - including computer scientists, engineers, historians, political scientists, sociologists and anthropologists - to address antitrust issues in today’s dynamic digital landscape.

Digital competition

In the context of tech competition, Fiona Scott Morton (Yale University) focused on the impact of Europe’s new Digital Markets Act (DMA). She recognized its potential to foster contestability and fairness on platforms, encouraging positive developments such as app-store variety and enhanced interoperability, as evidenced by new messaging applications. The DMA’s effect on competition between platforms may be less significant than its effect on competition within them: competition will increase in areas such as digital wallets and app stores on mobile devices, but it will take good enforcement, and some time, before the DMA leads to the emergence of platforms comparable to Facebook. It is also hard to know what creative developers will build. She raised the possibility of an alternative future in which app stores evolve into superapps, similar to WeChat, allowing users to manage their entire digital lives and leading to platform agnosticism among users.

The Yale University professor praised the DMA’s balanced approach, which steers clear of price fixing or controls on app-store content while maintaining a cautious stance on legislating emerging technologies. The DMA serves more as a directive for effective design, she said: “There is an interface, you can design it, but it needs to be effective [for the third-party business user].” The DMA is a welcome attempt to shake up the digital sector, she concluded: “The gatekeepers' businesses are not new, they are not terribly dynamic, and they have been a monopoly for a long time. In terms of competition in digital, things have gotten worse, not better.”

Artificial intelligence

AI technology has improved dramatically thanks to the development of deep learning, GPUs and generative AI, and to practical experience in applying them. It provides substitutes for brainpower in a growing range of useful applications. David Evans (Berkeley Research Group) highlighted that AI will touch many existing laws and regulations, because it will affect the subjects they govern, and will also require some new laws and regulations, simply because new issues and problems will arise.

In many cases, AI and its applications will simply introduce new facts to consider in applying existing rules. In others, AI technologies may change the underlying tradeoffs behind these laws and regulations, requiring society to recalibrate that balance. Finally, society may need new laws or regulations, and fast, to deal with harms such as the spread of tools for creating deepfake pornography or of websites that traffic in it.

The debate over how to regulate AI is occurring at a time when countries see the immense value that AI can bring to their economies. Sound policy must weigh the social benefits of AI technologies against their possible harms in deciding the right balance to strike. The timing and extent of AI regulation remain disputed, as seen in the contrast between the more pro-investment US and UK approaches and the stricter regulatory approach taken so far by the EU; as the panel’s final debate highlighted, this divide has far-reaching implications.

Polarization and hate speech

The potential for social media to amplify social divisions is a matter of much concern. Studies of platforms like Facebook do not unequivocally demonstrate an impact on societal polarization. Nevertheless, the creation and intensification of echo chambers by social media are valid grounds for regulation, if society considers them harmful. Ro’ee Levy (Tel Aviv University) argued that there are stronger grounds for regulating hate speech. First, there is evidence that hate speech on social media may increase hate crime. Second, many countries already intervene against hate speech offline, so there is justification for regulating it on social media as well. More generally, balancing the removal of harmful content with the protection of free speech remains a challenging yet crucial task.

Published in TSE Reflect, March 2024