[Digvijay R. Singh is a 4th year BA LLB (Hons.) student at National Law School of India University, Bangalore]
An algorithm applies reasoning, increasingly that of artificial intelligence (“AI”), to find a probable solution to a problem. Algorithms are relied upon for the cost and efficiency benefits attributed to them. Today, algorithms contribute to an enterprise's conduct of business in multiple ways: controlling production, collecting market data, and self-learning by monitoring market conditions.
However, it is this use of algorithms in business conduct that often gives rise to instances or suspicions of algorithmic collusion. This post discusses what algorithmic collusion is and considers India’s experience in dealing with such collusion. It also analyzes whether the Competition Act, 2002 is well equipped to scrutinize algorithmic collusion, specifically algorithms with the logic to collude and self-learning algorithms.
How are Algorithms Used to Collude?
Algorithms and humans can interact in a wide variety of ways, which gives rise to several possible scenarios. Ariel Ezrachi and Maurice Stucke describe four ways in which algorithms can be used to collude:
Messenger: The people involved agree to collude, and the algorithm is used solely to implement the agreement. A well-known instance of such collusion is the Topkins case in the US, where David Topkins, the founder of Poster Revolution, and his co-conspirators were prosecuted for agreeing to coordinate prices through the adoption of specific algorithms for the sale of posters on Amazon’s marketplace.
Hub and Spoke: Here, the market players adopt the same algorithm, and collusion is made possible by the algorithm developer, who facilitates it by entering into vertical agreements with various competitors and ensures that prices are fixed.
Predictable Agent: Here, the competing parties do not use the same algorithm; instead, each independently and unilaterally designs its own algorithm to react to market changes in certain ways, which leads to predictable outcomes.
Digital Eye: This involves self-learning, machine-learning algorithms which, without being expressly programmed to collude, may arrive at collusive outcomes on their own while pursuing an objective such as profit maximization.
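To make the “predictable agent” category concrete, the pricing logic each competitor might unilaterally adopt can be sketched as follows. This is a hypothetical toy rule for illustration only; the function name and figures are not drawn from any case discussed here.

```python
def predictable_agent_price(competitor_prices, cost, step=1.0):
    """Toy 'predictable agent' rule: match the lowest rival price,
    but never price below own cost. The rule is deterministic, so
    rivals running similar logic can foresee each other's responses."""
    if not competitor_prices:
        # No rivals observed: price at a small markup over cost.
        return cost + step
    return max(min(competitor_prices), cost)
```

If every competitor runs a rule of this kind, each can predict how the others will react to any price move, so no one gains from undercutting; that predictability, rather than any communication, is what produces the parallel outcome Ezrachi and Stucke describe.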
India’s Experience with Algorithms
India has also had its fair share of experience with algorithmic collusion, which can be summarized through the two cases in which such collusion was examined.

The first is Samir Agarwal v. ANI Technologies Pvt. Ltd., where the possibility of a hub-and-spoke agreement was investigated. The Competition Commission of India (CCI) observed that a hub and spoke could not exist, given that there was no agreement to set prices through the platform, nor any agreement for the platform to coordinate prices between the parties.

The second is the recent Airlines Industry Cartel case, where cartelization was alleged among various domestic airlines operating in India. Here, the Director General (DG) investigated the role of algorithms in the software used by these airlines and their impact on ticket pricing. The CCI observed that the personnel in the revenue management team of each airline determined the algorithm and the final ticket price; the software's involvement was limited to aiding the revenue management team in arriving at a price for optimal revenue.

These cases indicate that while India’s competition law has been exposed to hub-and-spoke agreements, it has yet to encounter the predictable agent and digital eye forms of algorithmic collusion. This necessitates an examination of whether the Competition Act, 2002 can sufficiently regulate such algorithms.
India’s Competition Law Framework
To assess whether India’s Competition Act can deal with algorithms programmed with the logic to collude, or with self-learning algorithms, it is necessary to begin with section 3(3) of the Act, which deals with collusion. The section can be broken down into three components: (i) there is an “agreement entered into”, “practice carried on”, or “decision taken by”; (ii) “persons or association of persons or enterprises or association of enterprises”; (iii) which “directly or indirectly determines purchase or sale prices”; if so, the conduct shall be presumed to have an appreciable adverse effect on competition (AAEC).
Application of Section 3(3)
In its application, section 3(3) does not require AAEC to be proved; it is automatically presumed. The third component of the test above is a factual determination, so only the first two requirements remain to be analyzed.
The First Requirement
Section 3(3) deals with “horizontal restraint”, and the conduct which can fall under it is very expansive. For the first component, the definition of “agreement” includes any understanding or action in concert. The term “practice” also has a broad definition and includes any practice related to any trade carried on by a person or an enterprise. Both “predictable agent” and “digital eye” collusion would fall within the term “agreement”: the conduct in “predictable agent” would qualify as both a “practice” and an “action in concert”, while the conduct in “digital eye” would amount to an “action in concert”.
The Second Requirement
As regards the second requirement, it is doubtful whether categories such as “individual”, “artificial juridical person”, or the other categories within the definition of “person” could be read to include algorithms. However, the definition of “person” is inclusive. On a purposive interpretation, for the “predictable agent” category at least, since the algorithms merely reflect the logic of the programmer herself, the competitors themselves would be the “persons” who are acting in concert through indirect means.
Question of Liability
Under the Competition Act, the personal liability of employees where a company violates the provisions of the Act is dealt with by section 48. However, the Act does not envisage the reverse situation, i.e., whether an employee’s anti-competitive acts can be attributed to the company for which the employee works. The question of liability is much simpler in the “predictable agent” scenario: the competing parties would be liable, given that they programmed their algorithms to respond similarly to similar market conditions and thereby display the same prices. In the “digital eye” situation, however, the competitors do not expressly program their software to act in a specific way. It may be argued that the competitors possessed the requisite knowledge that self-learning algorithms programmed with the objective of maximizing profits would ultimately resort to collusion. Whether such knowledge is sufficient to hold competitors liable is something the CCI will have to decide.
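The attribution difficulty in the “digital eye” scenario can be seen in a toy sketch of a self-learning pricer. This is purely illustrative and hypothetical, not drawn from any case or software discussed here; the point is that no line of the code instructs collusion, since the agent is told only to maximize profit.

```python
import random

def self_learning_pricer(prices, demand, rounds=5000, eps=0.1, seed=0):
    """Toy 'digital eye' sketch: an epsilon-greedy learner told only
    to maximize profit. It experiments with candidate prices, tracks
    average profit per price, and mostly repeats the best performer."""
    rng = random.Random(seed)
    totals = {p: 0.0 for p in prices}   # cumulative profit per price
    counts = {p: 0 for p in prices}     # times each price was tried
    avg = lambda p: totals[p] / counts[p] if counts[p] else 0.0
    for _ in range(rounds):
        if rng.random() < eps:
            p = rng.choice(prices)       # occasionally explore
        else:
            p = max(prices, key=avg)     # otherwise exploit best so far
        totals[p] += p * demand(p)       # profit = price x quantity sold
        counts[p] += 1
    return max(prices, key=avg)          # the learned price
```

If several competitors each ran a learner of this kind against one another, any parallel pricing that emerged would flow from the profit-maximizing objective rather than from any express instruction, which is precisely why attributing the “agreement” to the company is harder here than in the “predictable agent” case.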
Practical Feasibility and Enforcement Concerns
Having discussed how India’s competition law would deal with the algorithms at hand, it is apparent that “digital eye” algorithmic collusion is the only form that needs to be watched out for. It is therefore worthwhile to consider whether the “digital eye” is practically feasible. Its practical feasibility rests on two factors:
(i) Technological Advances: Continued developments in AI and computer science are making anticompetitive conduct such as the “digital eye” increasingly feasible.
(ii) Increased Regulatory Vigilance: Regulatory authorities have become increasingly conversant with novel forms of algorithmic collusion and are now both willing and able to come down strongly on them.
Technological advances and regulatory vigilance generally share a direct relationship: as technology advances, the resulting conduct is picked up and regulated by the concerned authorities.
Anticipating Regulatory Vigilance: What can Companies Expect from Authorities?
As mentioned above, there is a direct relationship between technological development and regulatory vigilance: increased technological development leads to increased regulatory vigilance. This can be seen both in India and in foreign jurisdictions.
The Draft Competition (Amendment) Bill, 2020 expands the scope of the term ‘cartel’ to include hub-and-spoke cartels, which would allow tacit collusion to be addressed effectively and is a step in the right direction for tackling such algorithms.
The situation in foreign jurisdictions is more promising, given that machine-learning tools have been developed for detecting cases of algorithmic collusion. In a recent address, the French Competition Authority proposed that pricing algorithms be increasingly scrutinized or audited, including by requiring algorithms to be tested prior to their deployment in real market conditions. Moreover, the EU Competition Commissioner has also recently said that it is necessary to make clear to companies that they cannot escape responsibility under the pretext of a computer program.
This discussion of the applicability of the Competition Act, 2002 to algorithms with the logic to collude and to self-learning algorithms allows us to conclude that a broad interpretation of the Act, specifically of section 3(3), can allow the CCI to bring algorithmic collusion under its purview. With regard to self-learning algorithms, however, it remains to be seen whether the CCI will treat collusion effected by such algorithms as a basis for holding the respective companies liable.
– Digvijay R. Singh