ECONOMY

Lina Khan’s FTC Gears Up to Go After Algorithmic Black Boxes Governing Workers’ Lives, but It Might Be Running Out of Time


The Department of Justice just scored a landmark victory against Google over its monopoly behavior. It’s a feather in the cap of the DOJ antitrust division and Lina Khan’s Federal Trade Commission (FTC), which have had a pretty incredible four-year run. We’ll see if it continues under the next administration. JD Vance continues to back Khan, for whatever that is worth, while there have been some not-so-great signs on that front from Team Kamala, where Harris’s closest advisor is Uber general counsel and former Obama official Tony West. Tim Walz, too, has a record of friendly ties with Uber, to the detriment of workers in the state of Minnesota.

Just to quickly review some of the efforts of the DOJ and FTC that are really pretty earth-shattering considering they’re coming after 40-plus years of moving in the opposite direction:

They are now going after surveillance pricing. The DOJ is reportedly readying a civil case against RealPage, the private equity-owned corporation that creates software programs for property management. The company is accused of “selling software that enables landlords to illegally share confidential pricing information in order to collude on setting rents,” and the DOJ is also working on a complaint focused on landlords’ exchange of vacancy rate information, which helped to restrict supply.

Maybe most importantly, DOJ reversed enforcement policies put in place by the Clinton administration on information sharing. Between 1993 and 2011 the DOJ Antitrust Division issued a trio of policy statements (two during the Clinton administration and one under Obama) regarding the sharing of information in the healthcare industry. These rules provided wiggle room around the Sherman Antitrust Act, which “sets forth the basic antitrust prohibition against contracts, combinations, and conspiracies in restraint of trade or commerce.”

And it wasn’t just in healthcare. The rules were interpreted to apply to all industries. To say it has been a disaster would be an understatement. Companies increasingly turned to data firms offering software that “exchanges information” at lightning speed with competitors in order to keep wages low and prices high – effectively creating national cartels.

In a 2023 speech announcing the withdrawal, Principal Deputy Assistant Attorney General Doha Mekki explained that the development of technological tools such as data aggregation, machine learning, and pricing algorithms has increased the competitive value of historic information. In other words, it’s now (and has been for a number of years) way too easy for companies to use these Clinton-era “safety zones” to fix wages and prices:

An overly formalistic approach to information exchange risks permitting – or even endorsing – frameworks that may lead to higher prices, suppressed wages, or stifled innovation. A softening of competition through tacit coordination, facilitated by information sharing, distorts free market competition in the process.

Notwithstanding the serious risks that are associated with unlawful information exchanges, some of the Division’s older guidance documents set out so-called “safety zones” for information exchanges – i.e. circumstances under which the Division would exercise its prosecutorial discretion not to challenge companies that exchanged competitively-sensitive information. The safety zones were written at a time when information was shared in manila envelopes and through fax machines. Today, data is shared, analyzed, and used in ways that would be unrecognizable decades ago. We must account for these changes as we consider how best to enforce the antitrust laws.

***

We’ve seen the efforts and some major wins on antitrust and consumer protections. So what about the wages Mekki mentions? The DOJ has gone after no-poach deals and wage-fixing in recent years with limited success.

In November, the DOJ moved to dismiss one of its last no-poach criminal cases after failing to secure a conviction in three other no-poach or wage-fixing cases brought to trial over the last two years.

In 2022, the DOJ fined a group of major poultry producers $84.8 million over a long-running conspiracy to exchange information about wages and benefits for poultry processing plant workers and collaborate with their competitors on compensation decisions. More significant than the measly $84.8 million: the DOJ ordered an end to the exchange of compensation information, banned the data firm (and its president) from information-sharing in any industry, and prohibited deceptive conduct towards chicken growers that lowers their compensation. Neither the poultry groups nor the data consulting firm admitted liability.

Comments by FTC and DOJ officials in recent months also hint that they are still looking at going after wage-fixing cartels, as well as single companies using algorithms to exploit workers.

FTC officials are talking about opening up the algorithmic “black boxes” that increasingly control workers’ wages and all other aspects of their labor. While they became infamous from “gig” companies like Uber, they are now used by companies across all sectors of the economy.

Today I’d like to look at one such legal theory making the case for not just opening up the Uber et al. black box but smashing it altogether.

“Algorithmic wage discrimination” is the term Veena Dubal, a professor of law at University of California, Irvine, uses to describe the way outfits like Uber and increasingly companies across the economy set wages and control workers. The term also hints at her argument to ban the practice. More from Dubal’s “On Algorithmic Wage Discrimination,” published in November in the Columbia Law Review:

“Algorithmic wage discrimination” refers to a practice in which individual workers are paid different hourly wages—calculated with ever-changing formulas using granular data on location, individual behavior, demand, supply, or other factors—for broadly similar work. As a wage-pricing technique, algorithmic wage discrimination encompasses not only digitalized payment for completed work but, critically, digitalized decisions to allocate work, which are significant determinants of hourly wages and levers of firm control. These methods of wage discrimination have been made possible through dramatic changes in cloud computing and machine learning technologies in the last decade.

These automated systems record and quantify workers’ movement or activities, their personal habits and attributes, and even sensitive biometric information about their stress and health levels.

Employers then feed amassed datasets on workers’ lives into machine learning systems to make hiring determinations, to influence behavior, to increase worker productivity, to intuit potential workplace problems (including worker organizing)…

Maybe not on the same level as Khan’s 2017 article, “Amazon’s Antitrust Paradox,” which reframed antitrust, but Dubal’s piece, by zeroing in on the discrimination aspect of these employer algorithms, is an attempt to bring the issue under the legal umbrella of existing laws. Specifically, she argues that since “the on-demand workforces that are remunerated through algorithmic wage discrimination are primarily made up of immigrants and racial minority workers, these harmful economic impacts are also necessarily racialized.”

The primary focus of courts and regulators thus far has been on transparency specifically related to potential algorithm mistakes or the algorithm’s violations of the law.

That misses the point, argues Dubal. It is not, primarily, the secrecy or lack of consent that results in low and unpredictable wages; it is the “extractive logics of well-financed firms in these digitalized practices and workers’ comparatively small institutional power that cause both individual and workforce harms.”

While some workers have sought to use existing law to learn what data are extracted from their labor and how the algorithms govern their pay, Dubal argues that data-transparency reforms “cannot by themselves address the social and economic harms.”

The secrecy must be overcome, but the information gleaned must be used in pursuit of a blanket ban, argues Dubal, because algorithmic wage discrimination runs afoul of both longstanding precedent on fairness in wage setting and the spirit of equal pay for equal work laws.

If I’m reading this right, a successful argument of wage discrimination would lead to the outright ban favored by Dubal on the use of algorithms that control workers’ wages, activities, etc., because they are, by their very nature, discriminatory.

That would be a step beyond many other efforts nowadays to “reform” the algorithm, make the black box more transparent, compensate workers for their data and so forth. It would also be a death knell for so many of the most exploitative “innovative” companies in the US.

This argument also brings Dubal in for heavy criticism as it is a major threat to US oligarchs and their courtesans. Here’s Forbes attacking her last year using many of the same arguments that are used against Khan.

Worker attempts to go after companies like Uber for violations have been difficult due to a lack of knowledge of what exactly their algorithms are doing. It is not dissimilar to lawsuits challenging illegal government surveillance, which are made nearly impossible by the requirement that plaintiffs prove that the government surveilled them. Because such surveillance is conducted entirely in secret, there’s virtually no way to obtain proof. Companies like Uber have frequently and successfully argued that “the safety and security of their platform may be compromised if the logic of such data processing is disclosed to their workers.”

Even in cases where the companies have released the data, they have released little information about the algorithms informing their wage systems. The FTC, however, would have the authority to pry open the black boxes — as it is doing now in its investigation into surveillance pricing with orders to eight companies to hand over information.

“On Algorithmic Wage Discrimination” is well worth a read, but here is a quick breakdown.

How does it differ from traditional forms of variable pay?

…algorithmic wage discrimination—whether practiced through Amazon’s “bonuses” and scorecards or Uber’s work allocation systems, dynamic pricing, and wage incentives—arises from (and may function akin to) the practice of “price discrimination,” in which individual consumers are charged as much as a firm determines they may be willing to pay.

As a labor management practice, algorithmic wage discrimination allows firms to personalize and differentiate wages for workers in ways unknown to them, paying them to behave in ways that the firm desires, perhaps for as little as the system determines that the workers may be willing to accept.

Given the information asymmetry between workers and firms, companies can calculate the exact wage rates necessary to incentivize desired behaviors, while workers can only guess how firms determine their wages.
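The personalized wage-setting mechanism described in the passage above can be sketched in code. This is purely an illustration of the logic Dubal describes, not any company’s actual system; the function name, weights, and acceptance data are all invented:

```python
# Hypothetical sketch of "algorithmic wage discrimination": each worker is
# offered a personalized rate for identical work, nudged toward the lowest pay
# that worker has historically been willing to accept. All names, weights, and
# numbers here are invented for illustration.

def personalized_offer(base_rate, accept_history, demand_multiplier=1.0):
    """Return a per-job pay offer tailored to one worker.

    base_rate: nominal pay for the job (same work for everyone)
    accept_history: list of (offered_rate, accepted) tuples for this worker
    demand_multiplier: surge-style adjustment reflecting current demand/supply
    """
    accepted_rates = [rate for rate, accepted in accept_history if accepted]
    if accepted_rates:
        # Learn the cheapest rate this worker has accepted, and bias the
        # offer toward that personal floor rather than the nominal rate.
        personal_floor = min(accepted_rates)
        offer = 0.5 * base_rate + 0.5 * personal_floor
    else:
        offer = base_rate
    return round(offer * demand_multiplier, 2)

# Two workers doing identical work get different offers, because one has
# previously accepted lower pay.
worker_a = [(18.0, True), (15.0, True), (12.0, True)]   # has accepted $12/hr
worker_b = [(18.0, True), (15.0, False)]                # has only accepted $18/hr
print(personalized_offer(18.0, worker_a))  # 15.0 -- pulled toward $12 floor
print(personalized_offer(18.0, worker_b))  # 18.0 -- floor matches base rate
```

The point of the sketch is the asymmetry: the firm observes every past offer and acceptance, while each worker sees only their own final number, with no way to know that the identical job paid a colleague more.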

Isn’t that illegal?

Although the United States–based system of work is largely regulated through contracts and strongly defers to the managerial prerogative, two restrictions on wages have emerged from social and labor movements: minimum-wage laws and antidiscrimination laws. Respectively, these laws set a price floor for the purchase of labor relative to time and prohibit identity-based discrimination in the terms, conditions, and privileges of employment, requiring firms to provide equal pay for equal work. Both sets of wage laws can be understood as forming a core moral foundation for most work regulation in the United States. In turn, certain ideals of fairness have become embedded in cultural and legal expectations about work.

[Laws] which specifically legalize algorithmic wage discrimination for certain firms, compare with and destabilize more than a century of legal and social norms around fair pay.

What does it mean for workers? It’s not just that such compensation systems make it difficult for them to predict and ascertain their hourly wages. It also affects “workers’ on-the-job meaning making and their moral interpretations of their wage experiences.” More:

Though many drivers are attracted to on-demand work because they long to be free from the rigid scheduling structures of the Fordist work model, they still largely conceptualize their labor through the lens of that model’s payment structure: the hourly wage. Workers find that, in contrast to more standard wage dynamics, being directed by and paid through an app involves opacity, deception, and manipulation. Those who are most economically dependent on income from on-demand work frequently describe their experience of algorithmic wage discrimination through the lens of gambling. As a normative matter, this Article contends that workers laboring for firms (especially large, well-financed ones like Uber, Lyft, and Amazon) should not be subject to the kind of risk and uncertainty associated with gambling as a condition of their work. In addition to the salient constraints on autonomy and threats to privacy that accompany the rise of on-the-job data collection, algorithmic wage discrimination poses significant problems for worker mobility, worker security, and worker collectivity, both on the job and outside of it.

Is such a model coming for other industries?

So long as this practice does not run afoul of minimum-wage or antidiscrimination laws, nothing in the laws of work makes this form of digitalized variable pay illegal. As Professor Zephyr Teachout argues, “Uber drivers’ experiences should be understood not as a unique feature of contract work, but as a preview of a new form of wage setting for large employers . . . .” The core motivations of labor platform firms to adopt algorithmic wage discrimination—labor control and wage uncertainty—apply to many other forms of work. Indeed, extant evidence suggests that algorithmic wage discrimination has already seeped into the healthcare and engineering sectors, impacting how porters, nurses, and nurse practitioners are paid. If left unaddressed, the practice will continue to be normalized in other employment sectors, including retail, restaurant, and computer science, producing new cultural norms around compensation for low-wage work.

Here’s The Register detailing how it’s being used by Target, FedEx, UPS, and increasingly white collar jobs:

One example is Shipt, a delivery service acquired in 2017 by retailer Target. As recounted by Dana Calacci, assistant professor of human-centered AI at Penn State’s College of Information Sciences and Technology, the shipping service in 2020 introduced an algorithmic payment system that left workers uncertain about their wages.

“The company claimed this new approach was fairer to workers and that it better matched the pay to the labor required for an order,” explained Calacci. “Many workers, however, just saw their paychecks dwindling. And since Shipt didn’t release detailed information about the algorithm, it was essentially a black box that the workers couldn’t see inside.”

…“FedEx and UPS drivers continue to deal with the integration of AI-driven algorithms into their operations, which affects their pay among other things,” said [Wilneida Negrón, director of policy and research at labor advocacy group Coworker.org]. “The new UPS contract does not only increase wages, but give workers a bigger say in new technologies introduced and their impact.”

The situation is slightly different, said Negrón, in the banking and finance industries, where workers have objected to the use of algorithmic performance and productivity measurements that indirectly affect compensation and promotion.

“We’ve heard this from Wells Fargo and HSBC workers,” said Negrón. “So, two dynamics here: the direct and indirect ways that algorithmic systems can impact wages is a growing problem that is slowly affecting white collar industries as well.”

One doesn’t have to think too hard about the dangers these types of practices introduce to the workplace — for laborers and consumers. As Dubal points out:

Hospitals…have begun using apps to allocate tasks based on increasingly sophisticated calculations of how workers move through space and time. Whether the task is done efficiently in a certain time frame can impact a worker’s bonus. It’s not surge pricing per se, but a more complex form of control that often incentivizes the wrong things. “It might not be the nurse that’s really good at inserting an IV into a small vein that is the one that’s assigned that task,” Dubal said. “Instead, it’s the nurse that’s closest to it, or the nurse that’s been doing them the fastest, even if she’s sloppy and doesn’t do all the necessary sanitation procedures.”

What to do? Dubal proposes a simple solution:

… a statutory or regulatory nonwaivable ban on algorithmic wage discrimination, including, but not limited to, a ban on compensation through digitalized piece pay. This would effectively not only put an end to the gamblification of work and the uncertainty of hourly wages but also disincentivize certain forms of data extraction and retention that may harm low-wage workers down the road, addressing the urgent privacy concerns that others have raised.

…At the federal level, the Robinson–Patman Act bans sellers from charging competing buyers different prices for the same “commodity” or discriminating in the provision of “allowances”—like compensation for advertising and other services. The FTC currently maintains that this kind of price discrimination “may give favored customers an edge in the market that has nothing to do with their superior efficiency.”

Though price discrimination is generally lawful, and the Supreme Court’s interpretation of the Robinson–Patman Act suggests it may not apply to services like those provided by many on-demand companies, the idea that there is a “competitive injury” endemic to the practice of charging different buyers a different amount for the same product clearly parallels the legally enshrined moral expectations about work and wages…

If, as on-demand companies assume, workers are consumers of their technology and not employees, we may understand digitalized variable pay in the on-demand economy as violating the spirit of the Robinson–Patman Act.

While Khan’s FTC, which is charged with protecting American consumers, is increasingly coming after these non-employer employers, it has not yet gone the route recommended by Dubal. Here is a bit of what the FTC has been doing, however. A 2022 FTC policy statement on gig work reads:

As the only federal agency dedicated to enforcing consumer protection and competition laws in broad sectors of the economy, the FTC examines unlawful business practices and harms to market participants holistically, complementing the efforts of other enforcement agencies with jurisdiction in this space. This integrated approach to investigating unfair, deceptive, and anticompetitive conduct is especially appropriate for the gig economy, where law violations often have cross-cutting causes and effects…And the manifold protections enforced by the Commission do not turn on how gig companies choose to classify working consumers.

The FTC has gone after companies with smaller, targeted actions, but Benjamin Wiseman, Associate Director of the FTC’s Division of Privacy and Identity Protection, speaking at the Harvard Journal of Law & Technology earlier this year, said that larger actions are coming on the labor black box front:

…the Commission is also taking steps to ensure that the FTC has the resources and expertise to address harms workers face from surveillance tools. We are doing this in two ways. First, the Commission is forging relationships with partner agencies in federal government with expertise in the labor market. In the past two years, the Commission has entered into memoranda of understandings with both the National Labor Relations Board and the Department of Labor, recognizing our shared interest in protecting workers and, among other things, addressing the impact of algorithmic decision-making in the workplace. Second, the Commission is increasing its in-house capacity to investigate and analyze new technologies.

We’ll see if Khan and company get the chance to carry their work over to a Trump or Kamala administration. While the former might be a bit of a wild card, it looks like the writing’s on the wall for Khan and the Jonathan Kanter-led antitrust division under the latter.

While Uber is a ringleader in the use of the exploitative black box and its practices are diametrically opposed to Khan’s mission at the FTC, it enjoys a significant presence on Team Kamala. The “company” — probably better described as a venture capital-funded project to cement serfdom in the 21st century — also has a longstanding close relationship with Obama World, which helped orchestrate the crowning of Kamala. And now the plutocrats are showering Kamala with cash and insisting that Khan must go. One can only wonder if the same insistence that Biden step aside was in part influenced by his administration’s empowering of the FTC and DOJ antitrust.

It sure would be fitting if, after a career of supporting the police state, debt peonage, mass infection during a pandemic, and war, part of the reason Joe Biden was pushed aside was one of the few decent things the man ever did: appointing Khan and letting her do her job.
