The occupational risks of algorithms: the main shortcoming of the new European regulation on Artificial Intelligence
- Scientific Culture and Innovation Unit
- January 17th, 2022
Research by Adrián Todolí, Professor of Labour Law at the University of Valencia (UV), analyses the specific dangers that workplace algorithms pose to workers’ health. These include constant monitoring and the feeling of being observed, work intensification driven by software decisions, discrimination despite the appearance of algorithmic neutrality, and possible malfunctions. Todolí proposes specific European regulation to protect against these occupational hazards in a labour system increasingly shaped by artificial intelligence (AI).
The study, published in Transfer: European Review of Labour and Research, points to specific health risks for staff since, as the professor notes, new AI methods lack “empathy or knowledge of the limits of the human being”. This, the author argues, is one of the main shortcomings of the European Commission’s proposal for a Regulation on AI, published in April 2021 and currently being negotiated with the other European institutions for final approval.
Along these lines, the research identifies six groups of risks with possible consequences for workers. One example is constant monitoring, which can cause negative psychological effects such as feeling permanently watched or overwhelmed by technology. The author also points out that possible algorithm malfunctions are a further source of stress and anxiety.
Other risks highlighted in the article are the reduction of workers’ autonomy and the intensification of work in the absence of human empathy. Added to these are the biases produced by algorithms which, although apparently neutral a priori, the study shows are not. As Todolí explains: “to select a candidate from a minority group, a hiring algorithm will by default require more qualities, skills or knowledge than those required of a member of a majority group, simply because it is easier to statistically predict the behaviour of a candidate who belongs to the latter group”.
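The statistical mechanism Todolí describes can be illustrated with a minimal sketch (not taken from the article; all figures are hypothetical): a risk-averse scoring rule that ranks candidates by the lower confidence bound of their group’s estimated success rate will, even when observed success rates are identical, penalize the group with fewer historical examples, simply because its estimate is more uncertain.

```python
import math

def lower_confidence_bound(successes: int, n: int, z: float = 1.96) -> float:
    """Lower bound of a normal-approximation 95% confidence interval
    for a group's estimated success rate. Fewer observations -> wider
    interval -> lower bound, even at the same observed rate."""
    p = successes / n
    se = math.sqrt(p * (1 - p) / n)
    return p - z * se

# Hypothetical hiring data: both groups show the same 70% success rate,
# but the minority group has far fewer historical examples.
majority_score = lower_confidence_bound(700, 1000)  # → 0.672
minority_score = lower_confidence_bound(7, 10)      # → 0.416

# A selector ranking by this "safe" score demands more evidence from
# the smaller group despite identical observed performance.
print(round(majority_score, 3), round(minority_score, 3))
```

The point of the sketch is that no explicit discrimination is coded anywhere: the disparity emerges purely from uneven sample sizes interacting with a risk-averse decision rule.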
That is why the UV professor defends the need for specific regulation at the European level to protect the health of AI-managed staff, since “many of these risks can be reduced or avoided if they are taken into account when programming an algorithm”. “New regulations of this type must stipulate that this programming takes into account these occupational and health risks, such as the right to privacy. Just as supervisors must receive training in risk prevention in order to carry out their work, the algorithm must also be programmed to weigh these risks”, Todolí specifies.
Furthermore, the research makes it clear that AI must remain under human control and advocates specific rules on the safety of algorithms. “This would send a message about the importance of these specific risks. If, on the other hand, the European Union does not consider it appropriate to adopt new regulations in this area, it could convey the opposite message, implying that algorithms pose no risk to health when that is not the case”, says the expert. Along these lines, the professor also notes the article’s connection with the recent creation of the State Agency for the Supervision of Artificial Intelligence and control of algorithms in Spain, included in the State Budget for 2022.
Article: Todolí-Signes, Adrián, “Making Algorithms Safe for Workers: Occupational Risks Associated with Work Managed by Artificial Intelligence” (September 1, 2021). Transfer: European Review of Labour and Research, 2021. Available at SSRN: https://ssrn.com/abstract=3915718