
Algorithms & Competition Law: Interview of Michal Gal by Thibault Schrepel

Algorithmic antitrust is almost everywhere! It is in the doctrine: Google Scholar features more than 16,000 related articles. It is also in competition agencies and international institutions, which have published numerous reports on the subject. Where it is not, however, is in the case law. To this day, only a handful of cases treat algorithms as a central element of anti-competitive practices. Algorithmic antitrust is even less visible in empirical work: studies documenting the supposed extent of the phenomenon are still awaited.

For this first issue of the “e-Competitions Interviews Series”, Dr. Thibault Schrepel (T.S.) [1] has invited Prof. Michal Gal (M.G.), Professor and Director of the Center for Law and Technology at the Faculty of Law at Haifa University and author of several high-profile articles on the subject, to answer his questions. The “e-Competitions Interviews Series” aims to bring together the views of leading academics, enforcers, and private practitioners, substantiated by case law.

T.S.: Despite all the hype, the extent to which companies are implementing algorithmic agreements is still not quantified, and in fact, there is very little litigation. Shouldn’t academic research first focus on quantifying them before arguing for more enforcement? If so, where to start?

M.G.: Indeed, thus far, only a small number of cases involving algorithmic-facilitated cartels have been brought by competition authorities. This raises the question of what drives such low levels of litigation: is it because firms do not use algorithms to coordinate their conduct, or is there another explanation? The possibility of collusion is real. This has been proven both theoretically and empirically. Theoretical research points to several characteristics of algorithms, and of the digital world in which they operate, that increase the ease of coordination. For example, as I elaborate in my own work, [2] algorithms are “recipes for future action” that increase the clarity of how trade terms will be set and how the algorithm will react to competitors’ terms. By enabling other algorithms to “read their minds”, either directly (by exposing the algorithm, even before it has taken any action) or indirectly (through its actions and reactions), they limit the need for direct communication or physical meetings. Also, due to the conditions in the digital world, there is less need for communication ex ante; rather, algorithms can coordinate actions in a short sequence of low-value games. An empirical study recently published by researchers from the University of Bologna has shown that algorithms learn, by themselves, to charge supra-competitive prices without communicating with each other. The high prices are sustained by classical collusive strategies, with a punishment phase followed by a gradual return to cooperation. This finding is robust to asymmetries in cost or demand and to changes in the number of players. [3]
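To make the Bologna finding concrete, here is a minimal sketch (in Python) of the kind of experiment it describes: two simple Q-learning agents repeatedly set prices in a stylized duopoly, and nothing in the code instructs them to coordinate. The demand function, price grid, and learning parameters are illustrative assumptions, not those of the study:

```python
# Minimal sketch (NOT the Bologna study's code): two Q-learning agents
# price against each other in a repeated duopoly. All parameters below
# are illustrative assumptions. Runs in a few seconds.
import itertools
import random

PRICES = [round(1.0 + 0.1 * k, 1) for k in range(11)]  # candidate prices 1.0..2.0
COST = 1.0

def demand(p_own, p_rival):
    """Stylized linear demand: the cheaper seller attracts more buyers."""
    return max(0.0, 1.0 - p_own + 0.5 * p_rival)

def profit(p_own, p_rival):
    return (p_own - COST) * demand(p_own, p_rival)

# Each agent's state is last period's price pair; its action is its next price.
states = list(itertools.product(PRICES, PRICES))
Q = [{s: {a: 0.0 for a in PRICES} for s in states} for _ in range(2)]

ALPHA, GAMMA = 0.1, 0.95
random.seed(1)
state = (PRICES[0], PRICES[0])
for t in range(200_000):
    eps = 0.99997 ** t  # exploration decays slowly toward pure exploitation
    prices = []
    for i in range(2):
        if random.random() < eps:
            prices.append(random.choice(PRICES))
        else:
            prices.append(max(Q[i][state], key=Q[i][state].get))
    new_state = (prices[0], prices[1])
    for i in range(2):
        reward = profit(prices[i], prices[1 - i])
        best_next = max(Q[i][new_state].values())
        Q[i][state][prices[i]] += ALPHA * (
            reward + GAMMA * best_next - Q[i][state][prices[i]])
    state = new_state

# For this demand, the one-shot Nash price is about 1.33 and the
# joint-profit price is 1.5; runs often settle above the Nash level.
print("long-run prices:", state)
```

In runs of this kind, the learned prices often settle above the one-shot competitive level, illustrating how supra-competitive outcomes can emerge from independent learning alone, without any communication between the agents.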


"The possibility of collusion is real. This has been proven both theoretically and empirically."


There may be at least three reasons why enforcement is still scarce, none of which leads to the conclusion that algorithmic coordination is not a real problem. First, detection might be difficult. Parallel conduct, by itself, is insufficient to prove an illegal cartel; rather, one must prove the existence of an “agreement”. Yet, especially with artificial intelligence (AI)-based algorithms, this might be difficult. As Ezrachi and Stucke suggest in their seminal book, Virtual Competition, [4] such algorithms are often a “black box” for external observers. Second, enforcement agencies have only begun to wrap their heads around this new technological challenge, which may require adding computer and data scientists to their teams. Finally, low enforcement levels may indicate that existing legal tools are insufficient to capture some instances of algorithmic-facilitated coordination. Whatever the reason, given that both theory and experimental evidence already point to the potential coordination-facilitating capabilities of algorithms, it is urgent that we prepare for such algorithmic interactions. This need is strengthened by the fact that a growing number of firms use algorithms for their pricing decisions, increasing the possibility of algorithmic coordination that raises prices and harms consumers. Furthermore, given the exponential growth in machine learning capabilities of algorithms, we cannot afford to wait until algorithms become completely autonomous to check whether our laws are welfare-enhancing.


"Given the exponential growth in machine learning capabilities of algorithms, we cannot afford to wait until algorithms become completely autonomous to check whether our laws are welfare-enhancing."


Indeed, we are witnessing a surge of interest in how algorithms affect competition, including studies by competition authorities around the world, [5] [6] as well as a growing number of cases in which algorithms were used to harm competition. To give a few examples, [7] in the UK several energy firms, as well as a software provider, were found liable for cartel conduct based on the use of algorithms “that allowed the acquisition of certain customers to be blocked and customer lists to be shared,... in pursuit of the objective of sharing markets and/or allocating customers.” [8] Likewise, the European Commission fined four manufacturers for resale price maintenance. The Commission emphasized the fact that the companies had used sophisticated algorithms to monitor the prices set by distributors, allowing them to intervene quickly when prices decreased. It noted that “many, including the biggest online retailers, use pricing algorithms which automatically adapt retail prices to those of competitors. In this way, the pricing restrictions imposed on low pricing online retailers typically had a broader impact on overall online prices.” [9] Finally, the US DOJ sanctioned a cartel which was conducted through the use of algorithms for encrypted messaging. [10] It is noteworthy, however, that all cases brought so far are easy cases, in that the algorithm was used to facilitate a previously agreed-upon scheme rather than to create such a scheme in the first place.

T.S.: Algorithms are often described as a source of competition concerns, while the opportunities they create for competition are rarely discussed. Do you know of any empirical work exploring algorithms’ benefits for the competitive process? Isn’t there a risk that agencies will use algorithms as a scarecrow to strengthen their enforcement, without focusing on the big picture?

M.G.: Algorithms can offer three main advantages over human decision-making: (1) speedier decisions: given any number of decisional parameters and data sources, computers can apply the relevant algorithm far more quickly than the human brain, especially if the decision tree involves a large number of decision parameters that need to be balanced or many data inputs that must be analyzed or compared; (2) analytical sophistication: it is not that humans cannot perform these tasks, but that it might not be worth their while to do so, given the time and effort involved; (3) reduced information and transaction costs, for example by saving valuable time or by making suggestions based on past consumption (e.g., Netflix’s suggestions). To achieve these results, algorithms perform a myriad of tasks, including collecting, sorting, organizing, and analyzing data, making decisions based on those data, and even executing such decisions.

Accordingly, as you rightly observe, algorithms create many pro-competitive benefits both to suppliers and to consumers. [11] They may enable consumers to compare products and offers online more efficiently, allowing them to enjoy lower-priced goods or to find products that better fit their preferences. A wide variety of algorithms already help consumers make decisions in market transactions. At the most basic level, algorithms offer consumers information relevant to their choices. Some simply collect and organize relevant information provided by suppliers, such as Kayak, Expedia, and Travelocity, which offer information on flight prices and schedules. Others offer information about quality, such as the rating services TripAdvisor and Yelp. More sophisticated algorithms use data analytics to enable price forecasting. Still others use consumers’ characteristics and past revealed preferences to narrow down the options, presenting only those assumed to be most relevant, as Netflix and Tinder do. Such algorithms serve as tools to enhance consumer choice by aggregating and organizing relevant data so as to help the consumer make an informed decision. Algorithms can also automatically determine the need for certain transactions (e.g., a factory is almost out of bolts and new ones must be ordered), then automatically search for the best offer and execute it, saving time and resources. Algorithms can thus also increase competitive pressures on suppliers.

Algorithms may also create significant pro-competitive effects for suppliers. By using algorithms, suppliers can more quickly and efficiently analyze large amounts of data, allowing them to better respond to consumer demand, better allocate production and marketing resources, and save on human capital. By doing so, algorithms reduce entry barriers, potentially enabling more firms to operate in the market and increasing competition. Algorithms also enable suppliers to better analyze information about consumers’ preferences, helping firms to compete for consumer attention, to cater to consumer preferences, and to create more efficient and profitable marketing campaigns. Unless a supplier enjoys significant comparative advantages in performing such tasks (for example, significant economies of scale and scope in data collection that other firms cannot easily match), algorithms can potentially increase competition among suppliers.


"Algorithms create many pro-competitive benefits both to suppliers and to consumers."


The algorithm’s ability to facilitate coordination should therefore be balanced against its pro-competitive effects. Furthermore, because our understanding of how algorithms interact in the digital world is still rudimentary, the rules regulating algorithms should be developed in keeping with our understanding of their potential effects on the market and the potential chilling effects of overbroad prohibition. Put differently, the use of “smart algorithms” by suppliers requires “smart regulation”—setting rules that limit the harms of increased coordination while ensuring that the digital economy’s welfare-enhancing effects are not lost.

T.S.: One may argue that algorithms do not change the nature of illegal agreements and, as a consequence, that algorithmic collusion is very much old wine in new bottles. Would you agree?

M.G.: While algorithms do not change the nature of illegal agreements, they may change the nature of the interaction. [12] A good way to illustrate this point is by considering how algorithms affect the conditions for coordination identified by the Nobel laureate economist George Stigler: (1) reaching an agreement on trade conditions that will be profitable for all parties to the agreement; (2) detecting deviations from the supra-competitive equilibrium: the slower and less completely deviations are detected, the weaker the coordination; (3) creating a credible threat of retaliation in order to discourage deviations. The economic literature points to certain market conditions which make it easier to meet these three requirements. These include, inter alia, market concentration, product and cost homogeneity, transparency of transactions, and the availability of information regarding changing market conditions.

Algorithms may affect these conditions. Reaching an agreement is made easier by the use of algorithms. Several factors combine to reduce the difficulty of calculating a joint profit-maximizing equilibrium: (1) the greater availability of data, particularly real-time and more accurate data on market conditions, including the digital price offers of competitors and of suppliers of intermediate goods and services, as well as data on consumer preferences; (2) cheaper and easier data collection and storage tools (e.g., the cloud); (3) advances in Internet connectivity which allow for cheaper and faster transfer of data; and (4) the increasingly strong and sophisticated analytical power of algorithms due to advances in data science. Indeed, algorithmic sophistication makes it easier to solve the multidimensional problems raised by coordination, such as establishing a jointly profitable price in a market with differentiated products. Studies performed by Google’s artificial intelligence business, DeepMind, on algorithmic interactions found that algorithms with more cognitive capacity sustained more complex cooperative equilibria. The fact that algorithms (unless their developers code them otherwise) make rational decisions, devoid of ego and biases, also potentially eases coordination by making their decisions more predictable. They also shorten the time lags in reaching new equilibria when market conditions change.
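As a rough illustration of that multidimensional problem, the sketch below numerically compares the one-shot Nash prices with the jointly profitable prices in a stylized differentiated-products duopoly; the logit demand form and all parameter values are assumptions chosen for illustration:

```python
# Hedged sketch: the gap between competitive (Nash) prices and jointly
# profitable prices in a differentiated-products duopoly. The demand
# form and parameters are illustrative assumptions.
import numpy as np

A, COST = 1.0, 1.0                  # assumed demand intercept and marginal cost
grid = np.linspace(1.0, 4.0, 301)   # candidate prices

def demand(p_own, p_rival):
    """Logit demand for differentiated products (with an outside option)."""
    u_own, u_rival = np.exp(A - p_own), np.exp(A - p_rival)
    return u_own / (1.0 + u_own + u_rival)

def profit(p_own, p_rival):
    return (p_own - COST) * demand(p_own, p_rival)

# One-shot Nash: iterate best responses until the prices stop moving.
p1 = p2 = grid[0]
for _ in range(200):
    p1 = grid[np.argmax(profit(grid, p2))]
    p2 = grid[np.argmax(profit(grid, p1))]

# Jointly profitable prices: search the whole grid for the pair that
# maximizes the two sellers' combined profit.
P1, P2 = np.meshgrid(grid, grid)
total = profit(P1, P2) + profit(P2, P1)
i, j = np.unravel_index(np.argmax(total), total.shape)

print(f"Nash prices:         {p1:.2f}, {p2:.2f}")              # roughly 2.23 each
print(f"joint-profit prices: {P1[i, j]:.2f}, {P2[i, j]:.2f}")  # higher, roughly 2.46
```

The gap between the two sets of prices is precisely what a coordinating algorithm would need to compute and then sustain, and computing it takes a few lines of code rather than a meeting.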


"While algorithms do not change the nature of illegal agreements, they may change the nature of the interaction."


Algorithms also change the mode and dynamics of the communication needed to reach an agreement. An algorithm is a pre-set decision mechanism, a “recipe” for making decisions. Algorithms can also be instructed to read other algorithms, and to perform some action if the other algorithm’s content is of a particular kind. This simple but fundamental idea highlights a central difference between human and algorithmic coordination: when an algorithm is transparent to others, another algorithm can “read its mind” and accurately predict all its future actions given any specific set of inputs, including changes in market conditions and reactions to other players’ actions. This implies that future intended actions can be communicated to competitors simply by making one’s algorithm transparent and readable by (select) others. The importance of this observation cannot be overstated: the mere (direct or indirect) observation of the algorithm by competitors may, by itself, serve to facilitate coordination. Such communications need not be binding, but algorithms may strengthen this aspect as well.
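A minimal sketch may help here. Once a seller’s pricing rule is observable as a deterministic function, a rival need not communicate with the seller at all: it can simply execute the rule to predict the reaction to any contemplated move. The names and the matching rule below are hypothetical:

```python
# Minimal sketch of the "mind reading" point: a transparent, deterministic
# pricing rule can be simulated by a rival before it acts. The rule and
# all names here are hypothetical illustrations.

def published_pricing_rule(rival_price: float) -> float:
    """Seller A's transparent rule: match any undercutting,
    otherwise hold the current list price."""
    LIST_PRICE = 10.0
    return min(LIST_PRICE, rival_price)

# Seller B never talks to A: it simply runs A's rule to see what
# would happen if it undercut.
for my_price in (10.0, 9.5, 9.0):
    predicted_reaction = published_pricing_rule(my_price)
    print(f"if I charge {my_price}, A will charge {predicted_reaction}")

# The output shows every price cut is matched immediately, so undercutting
# wins no customers for more than an instant; the deviation is seen to be
# unprofitable before it is even tried.
```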

Stigler’s second condition, the detection of deviations from the status quo, is also fulfilled more easily and quickly by algorithms. Due to their high levels of sophistication and reduced ingrained biases, algorithms may better differentiate between intentional deviations from coordination and natural reactions to changes in market conditions (or even errors) that change the efficient status quo, thereby preventing unnecessary price wars. Furthermore, the incentives to deviate in the first place are also reduced. Since technology enables the algorithm to react almost immediately to changes in a competitor’s price, consumers may not be aware of ephemeral price differences between competitors and therefore may not switch between them. Competitors, acknowledging this fact, have weaker incentives to deviate.
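The underlying arithmetic can be shown with a toy calculation (all figures below are illustrative assumptions): a deviation pays only for as long as it goes unmatched, so shrinking the detection lag from many periods to one can flip its net payoff from positive to negative:

```python
# Back-of-the-envelope sketch of the deterrence arithmetic. The profit
# figures and period counts are illustrative assumptions, not from the text.
COORDINATED = 100.0  # per-period profit at the coordinated (high) price
DEVIATION = 150.0    # per-period profit while undercutting goes undetected
PUNISHMENT = 60.0    # per-period profit during the retaliatory price war

def deviation_gain(detection_lag, punishment_length=10):
    """Net payoff from deviating, given how many periods pass before rivals react."""
    gain = detection_lag * (DEVIATION - COORDINATED)
    loss = punishment_length * (COORDINATED - PUNISHMENT)
    return gain - loss

print(deviation_gain(detection_lag=10))  # slow human monitoring: 500 - 400 = +100
print(deviation_gain(detection_lag=1))   # instant algorithmic matching: 50 - 400 = -350
```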

Stigler’s third condition, creating a credible and sufficiently strong threat of retaliation against deviators, can also be met more easily by algorithms. Given their potentially high level of sophistication, algorithms can better calculate the level of sanctions necessary to discourage deviations. Moreover, algorithms may create a credible threat of retaliation if changing their decision tree is not simple, or if such a change takes a long time relative to the frequency of market transactions.

Accordingly, algorithms may facilitate coordination in three ways. First, they ease the fulfilment of Stigler’s conditions. Second, they lessen the need to establish those conditions a priori. Finally, algorithms may strengthen not only players’ ability to reach an agreement, but also their incentives to do so. One factor which affects such incentives is the risk of detection by enforcement agencies and private plaintiffs. A study performed by Google Brain has shown that algorithms can autonomously learn how and when to encrypt messages, given a specified secrecy policy, in order to exclude other algorithms from the communication. Unless third parties have a way of determining when the conduct of algorithms is based on such encryption, detection will become much harder.

This is not to say that algorithms can facilitate coordination in all circumstances. Where entry barriers are low, or where one or more of Stigler’s conditions cannot be effectively met, coordination will not take place. This may be the case, for example, in markets where demand fluctuations are significant and difficult to distinguish from deviations from the equilibrium, or where the relevant data is not easily accessed by all competitors.

T.S.: One may see two crucial issues regarding algorithms and competition law: detection and liability. It seems to me that we, collectively (scholars and agencies), will find answers to these issues, and that, as such, there is no urgency in deploying regulatory regimes. Would you agree? Do you see any other issues?

M.G.: Even if we could detect algorithmic coordination much better than we do today, the issue of liability raises at least two complex questions with no easy answers. The first involves liability for anti-competitive actions taken by an algorithm. The more sophisticated the algorithm becomes, the less it needs external determination of its decisional parameters, and the more independent its actions. In such situations, the question of who is liable for the algorithm’s actions (the coder, the user, both, or nobody) is not an easy one to answer.

The second challenge arises from the fact that algorithmic coordination, while potentially harmful and arguably quite stable, might constitute mere oligopolistic coordination. In such a case, it will not be captured by competition laws. Moreover, the traits of algorithms to which I alluded above may make oligopolistic coordination easier. Part of the challenge arises from the fact that current legal tools were designed to deal with human facilitation of coordination. New and improved ways to coordinate, as well as the potential scale and scope of the resulting coordination, were not envisioned when competition law prohibitions were fashioned. Antitrust currently relies on the exploitation of human limitations in order to increase competition in the market. For example, it prevents market players from discussing anti-competitive agreements and from using the legal system to implement them, in order to make such agreements harder to reach and enforce. But in the algorithmic world, where coordination, detection, and punishment are automated, questions of reaching or enforcing explicit agreements decline in importance. Similarly, the law is based on the assumption that humans’ capacity to respond quickly to market changes is limited when numerous or multi-factored decisions must be taken; algorithms are limited only by their computational powers. Furthermore, the current legal treatment of illegal agreements is generally focused on the means of communication used by market players in order to coordinate. When means of communication change, the law might no longer capture conduct which is socially harmful. Accordingly, given changes in modes of communication, which may facilitate many more instances of oligopolistic coordination, we need to explore whether it is still socially beneficial to consider such conduct legal. There may well be a case for not binding ourselves to past formulations which no longer fit economic realities. In particular, the time may be ripe to reconsider a no-fault prohibition of algorithmic coordination that has potential anticompetitive tendencies and no offsetting pro-competitive ones, even where such conduct does not constitute an agreement in the traditional sense. Yet designing such efficient regulation poses a big challenge: for example, what exactly would we prohibit the algorithm from doing?


"Even if we could detect algorithmic coordination much better than we do today, the issue of liability raises at least two complex questions with no easy answers."


In the meantime, I suggest that we use our existing laws to capture practices that facilitate algorithmic coordination. [13] Such practices should meet three basic criteria: (1) they may facilitate coordinated conduct; (2) they are potentially avoidable by the algorithm’s programmers or users; and (3) they are unlikely to be necessary in order to achieve pro-competitive results. Such practices may thus amount to “coordination by design.” One example is a situation where suppliers consciously use similar algorithms even when better algorithms are available to them; the algorithms need not be identical, but their operative part (the part which calculates the trade conditions) should generate relatively similar outcomes, as the sketch below illustrates. Another is where suppliers take actions that make it easier for their competitors to observe their algorithms and/or their databases, and their competitors take actions to observe them. The algorithm can thus signal to other market players how its user is likely to react to market conditions, thereby communicating intent and possibly a credible commitment.
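To sketch what screening for such “relatively similar outcomes” might look like in practice, one could sample hypothetical market states and compare the prices two suppliers’ rules generate. The two pricing functions and the notion of similarity below are purely hypothetical illustrations, not a proposed legal test:

```python
# Hedged sketch of a crude "coordination by design" screen: sample market
# states and measure how similar two pricing rules' outputs are. Both rules
# and the similarity measure are hypothetical assumptions.
import random

def supplier_a_price(demand_index: float, cost: float) -> float:
    return cost * 1.5 + 2.0 * demand_index

def supplier_b_price(demand_index: float, cost: float) -> float:
    return cost * 1.45 + 2.1 * demand_index   # not identical to A's rule, but close

random.seed(0)
# Sample hypothetical market states: (demand index, marginal cost).
states = [(random.uniform(0, 1), random.uniform(1, 3)) for _ in range(1_000)]
gaps = [abs(supplier_a_price(d, c) - supplier_b_price(d, c)) for d, c in states]
avg_gap = sum(gaps) / len(gaps)

print(f"average price gap across sampled states: {avg_gap:.3f}")
# A persistently small gap across many simulated states suggests the
# operative parts of the two algorithms generate relatively similar outcomes.
```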

Another issue regards remedies in data-based markets which involve algorithms. Many scholars focus on sharing the data from which the algorithm learned. In a recent article written with Nicolas Petit, we explore radical remedies for the digital economy. [14] We suggest that in rare cases, where the algorithmic learning resulted from unlawful data collection or use, the monopolist might be required to share the algorithmic learning itself. Such a remedy might be suitable, under some conditions, when a monopolist illegally excludes his rivals from access to the data necessary for their operations, thereby depriving them of the ability to train their algorithms to make high-quality decisions, while training his own algorithms on that data. It might also be suitable when the monopolist engages in anti-competitive data collection, analysis, or usage activities, such as when he conditions the use of his monopolized service on exclusivity which leads to single-homing. The remedy is aimed at the inherent advantages which are the fruits of the anti-competitive conduct. It may also provide a swift tool for restoring competition in some markets while not harming the advantages enjoyed by consumers and firms from the use of better algorithms. In some situations it is more efficient and effective than sharing the data or “unteaching” the algorithm.

A final issue regarding algorithms and competition law involves ensuring conditions for the operation of algorithms that are operated by consumers, as I elaborate below.

T.S.: Quite a lot has already been written on personalized pricing using algorithms. In the meantime, I haven’t read much about personalized services. To what extent do you think personalized services could be anti-competitive? What could the framework be? More broadly, what do you think of the idea of having competition agencies list forms of algorithmic personalization (in terms of both prices and services) that would benefit from per se legality?

M.G.: Algorithmic-assisted personalization is indeed generating a lot of interest, and rightly so. Personalization can relate to many aspects of the transaction, including the offer, the trade terms (price, credit, tying), and the products or services. Such personalization is often based on knowledge mined by algorithms from big data, which enables the supplier to design the transaction to fit the unique characteristics of the potential consumer (based upon his “digital profile”).

Oftentimes, the consumer will benefit from such personalization. Let me give two examples of personalized products and services. A well-known example involves algorithms which provide health recommendations based on the consumer’s unique health conditions. Sanofi and Google, for example, created a joint venture which uses real-time data on a person’s bodily conditions to monitor sugar levels and suggest actions for diabetes patients. Another example involves loans. As the US Federal Trade Commission observed in its report on the digital economy, the use of personalized data has made more loans available to some under-served segments of the population, due to the ability to better determine whether each specific person poses a risk of non-payment, rather than using aggregate indicators to determine such risk. Indeed, in a competitive market, firms will compete by offering trade terms, products, or services that better fit the needs of each individual consumer. Furthermore, if consumers are informed about their options, a sufficient number of competitors exist, and switching costs are low, suppliers will not be able to engage in welfare-reducing personalization. Accordingly, in such markets we can often assume a per se legality of algorithmic personalization, as you suggested.

In some situations, however, personalization might harm consumers. This might be the case where information regarding one’s preferences and conduct patterns, mined by the algorithm from the data, is used by the supplier to exploit the consumer’s vulnerabilities. As Oren Bar-Gill has shown, once the algorithm takes into account not only the consumer’s preferences but also his biases, personalization may not be welfare-enhancing. [15] This is true of offers for services as well. The ability to exploit consumers in this way hinges upon market conditions: should competition be truncated, trade terms might be supra-competitive. Interestingly, the personalization of products and services creates heterogeneity, which makes it harder for consumers to compare their options or to engage in product or service exchanges. Accordingly, even firms that are not monopolies might be able to enjoy supra-competitive profits in niche markets. In such cases, a per se legality rule is problematic. Observe, however, that in some cases competition law might not be the correct tool to deal with such consumer harms.

Let me make a final point. Personalization makes algorithmic coordination more difficult. The exponential increase in the number of parameters that must be taken into account in calculating personalized prices, as well as in the calculation of a jointly profitable price, introduces “noise” into the system. Furthermore, the ability to coordinate depends, inter alia, on the information about each consumer’s preferences held by each supplier. A focal point on which to base a coordinated equilibrium may be more difficult to find where differentiated products are offered. Accordingly, whereas the concern for unilateral exploitation might increase due to the personalization of offers, the concern for coordinated exploitation might decrease.

T.S.: I see a significant legal loophole in the absence of decisions coming out of courts and agencies regarding the design of software and platforms. Several agencies have recently announced the creation of a digital task force, but is that enough? What do you think enforcement will look like in 20 years? How would you like it to look?

M.G.: The question is what powers of enforcement the regulator (be it the antitrust agency, a digital task force, or any other regulator) would be given. From an institutional point of view, to effectively regulate the digital economy, such a regulator should be composed of computer and data scientists, economists, and lawyers, working together and taking actions based on up-to-date understandings of algorithmic capabilities and their effects on markets. This is no easy task, as the law is already playing “catch up” with these new technologies. But we cannot afford not to invest in such actions, given the potentially significant harmful effects on consumers. We will also need to be creative and to think outside the box, beyond tools which were designed to fit human interactions.

While I do not have a crystal ball to envision the future, I am certain that the use of algorithms to make decisions (by suppliers, by consumers, and also by regulators) will grow exponentially. As machine learning and data collection capabilities increase further, those who do not use algorithms might well suffer from significant comparative disadvantages. It might well be that regulators will use algorithms to automate some regulatory functions regarding other types of algorithms (detection and even enforcement).


"From an institutional point of view, to effectively regulate the digital economy, such a regulator should be comprised of computer and data scientists, economists and lawyers, working together and taking actions based on up-to-date understandings of algorithmic capabilities and their effects on markets."


I hope that the market will offer more algorithms to be used by consumers, to counteract at least some of the anti-competitive effects of algorithms used by suppliers. [16] By creating countervailing buyer power, algorithmic consumers can potentially reduce the ability of suppliers’ algorithms to exploit market power. They can also reduce price discrimination. The competition authority has an important role to play in enabling the operation of algorithmic consumers, by ensuring access to potential consumers and to the relevant data. The fact that the big technology companies currently dominate the markets for digital butlers should raise some concern.

I should also mention that algorithms might change other aspects of our lives, beyond the commercial sphere. For example, they affect social interactions, as well as our autonomy and the exercise of our “decision-making muscle”. Such considerations, albeit beyond competition law, should be acknowledged. [17]

Note from the Editors: although the e-Competitions editors are doing their best to build a comprehensive set of the leading EU and national antitrust cases, the completeness of the database cannot be guaranteed. The present foreword seeks to provide readers with a view of the existing trends based primarily on cases reported in e-Competitions. Readers are welcome to bring any other relevant cases to the attention of the editors.
