Whilst the use of technology for workforce management is nothing new, covering various functions such as monitoring attendance and preparing payroll, a new phenomenon has emerged that has the potential to significantly impact the workforce: the use of artificial intelligence.
This article will provide insight into the use of AI to manage and monitor the performance of employees, a concept often referred to as “algorithmic management”.
Algorithmic management refers to automated monitoring and decision-making systems through which digital labour platforms control or supervise the assignment, performance, evaluation, ranking and review of, and other actions concerning, the work performed by people working through platforms.
Algorithmic management encompasses a diverse set of technology tools and techniques designed for remote workforce management and assessment, based on data collection and surveillance of workers to enable automated or semi-automated decision-making. It can include real-time responses to data that influence management decisions, and the use of ‘nudges’ and penalties to indirectly incentivise worker behaviour.
When this technology is used in the context of employee dismissals, a new term has emerged – “robofiring”. This term refers to the practice of using automated systems, algorithms, or AI tools to make decisions about employee retention or termination without direct human intervention. While these technologies promise objectivity and accuracy, they may inadvertently introduce bias and overlook vital contextual factors that humans consider.
The reach of these technologies extends beyond the gig economy, since they are now more affordable and easier to deploy. Such technology is increasingly becoming part of the ordinary infrastructure of workplaces across different sectors.
Employment laws are primarily designed to safeguard employee rights, prevent discrimination and ensure fair treatment at the workplace. The specific challenges of algorithmic management in the platform work context are not covered by existing labour laws. However, the adoption of these technologies is giving rise to a number of legal challenges concerning workers’ rights and may destabilise employment relationships.
Some of the legislative measures aimed at regulating specific aspects of this issue are briefly assessed below.
The proposed EU Platform Workers Directive, a legislative measure intended to address a number of issues relating to platform workers, includes specific rules for the use of artificial intelligence in the workplace.
The proposal identifies the issues arising from the digital control exercised by platforms (the use of algorithms not only to assign tasks, but also to monitor, supervise, evaluate, impose sanctions and terminate contracts). The proposed directive would also increase transparency in the use of algorithms by digital labour platforms by introducing a requirement for human monitoring, in order to ensure fairness and accountability in algorithmic management and respect for working conditions. Workers would also have the right to contest automated decisions. These new rights would be granted to both workers and genuine self-employed people.
In the US state of Pennsylvania, a new law has been proposed under the name ‘No Robot Bosses Act’, which would prohibit employers from making employment decisions based solely on automated systems, such as algorithms and machine learning tools. Additionally, it would require employers to train their employees on how to use such systems and to disclose when they are using them.
The bill would also require employers to provide real, human oversight of AI use in the workplace before an automated system makes an employment-related decision, such as during the hiring process, when adding or removing work shifts, or when firing a worker based on their performance.
In the UK, the Data Protection and Digital Information (No. 2) Bill, intended to make data protection legislation simpler for businesses to understand and implement, has been widely criticised for potentially reducing the protection against automated decision-making afforded to workers under Article 22 of the GDPR. The bill provides clarifications in relation to the prohibition on automated decision-making with no ‘meaningful human involvement’. It further states that, when considering whether there is meaningful human involvement in the taking of a decision, a person must consider, among other things, the extent to which the decision is reached by means of profiling. The bill appears to weaken the right to require app-based companies to provide an explanation when they make automated decisions.
Though relatively nascent, these concerns have already undergone judicial scrutiny in a highly significant case.
In April 2023, the Amsterdam Court of Appeal upheld the appeal filed by a group of drivers against the ride-hailing companies Uber and Ola Cabs. The drivers brought claims for breaches of the EU General Data Protection Regulation (“GDPR”) concerning decisions made using opaque algorithms which included managing, fining and sacking workers. Significantly, the Court of Appeal overturned the first instance ruling on this issue and rejected Uber’s attempt to rely on the “humans in the loop” who were supposed to have reviewed and verified the algorithms’ decisions. According to the Court, these reviews amounted to “not… much more than a purely symbolic act” in the context of the facts. As a result, the algorithmic decision-making was “solely automated”.
The Court also delved into the complex interplay between these rights and the protection of trade secrets and determined that the defendants were not permitted to refuse to disclose this information just because it constituted proprietary trade secrets.
The inevitable rise of AI and comparable technologies, coupled with their deployment into employment-related affairs, is poised to attract increased scrutiny.
These issues are likely to play a pivotal role in forthcoming court cases, where they will be further refined and elaborated.
Companies must take proactive measures to embrace automation while upholding employee rights, such as (i) auditing algorithms to ensure compliance with employment laws and to detect any potential bias; (ii) incorporating human oversight into the automated decision-making process; (iii) clearly communicating the use of automation in employment decisions; (iv) analysing data collection to ensure workforce diversity and inclusion; and (v) consulting an employment law expert for legal advice.
Mariella graduated from the University of Malta with a doctorate in law in 2005. She completed a master’s degree in European Private Law at La Sapienza, University of Rome, and was admitted to the bar in Malta in 2006.
Mariella is a people person – and it is this attribute which has really characterised and shaped her career.
Over the years, she headed the legal departments of several corporate services firms. Due to her skillset, she was also entrusted with managing and overseeing operations and human resources, where she gained technical and practical experience in various corporate, commercial and employment matters.
Her practical hands-on experience and insight perfectly complement Mariella’s technical knowledge of employment law, thus placing her in an ideal position to understand and advise employers and employees alike on various matters that may arise at the workplace.
Mariella is passionate about employment law matters and provides her clients with the highest-quality legal service to achieve the best possible outcome and resolve any employment law related issues and concerns.
Bradley graduated Doctor of Laws from the University of Malta in 2005 and was admitted to the Bar in Malta in 2006. He advises clients on various corporate, commercial, employment and regulatory matters, with particular focus on company and financial services law.
He has assisted clients in various corporate and commercial matters by providing company law advice and assisting in the implementation of corporate finance, restructuring, mergers and acquisitions and similar transactions.
Bradley has also advised and assisted investment funds, fund managers and other investment services providers, banks and financial institutions, on various legal and regulatory matters relating to the setting up, authorisation and ongoing conduct of their activities in Malta.
His practice also covers general employment law matters. Bradley’s experience in company and financial services law enables him to focus on various corporate and regulatory aspects of employment relationships. In particular, he advises organisations on the implementation of employee share option and participation schemes, the implications of business transfers on employment relationships, as well as relations with senior employees.
Karl graduated Doctor of Laws from the University of Malta in 2005 and was admitted to the Bar in Malta in 2006.
Karl has gained considerable expertise in technology law and regularly assists clients in relation to intellectual property issues, commercial contracts and ways to ensure compliance with the General Data Protection Regulation (GDPR) and privacy laws. Whilst such matters used to be only given incidental importance when dealing with employment matters, they are now widely acknowledged to be vital in all employment relationships.
He is also regularly engaged by C-level executives to assist in negotiating employment contracts and settlement agreements.
Karl advises across a multitude of industries including technology; marketing; adtech; financial services; gaming; esports; consumer products; and media and telecommunications.