In a context where technology is advancing rapidly, Artificial Intelligence (AI) is revolutionising the global work landscape, driving profound changes and opening up horizons that were previously unimaginable.
The law is therefore called upon, once again, to regulate new scenarios that do not conform to traditional legal paradigms. The first step in this direction comes from the European Union. In fact, the European Parliament’s website states that “as part of its digital strategy, the EU wants to regulate artificial intelligence to ensure better conditions for the development and use of this innovative technology”. Thus, on 9 December 2023, the Commission, the Council and the Parliament reached a political agreement on the content of the AI Act – proposed by the Commission in 2021 – the final text of which is currently being finalised.
The European legislature’s priority is to ensure that the AI systems used are safe, transparent, traceable, non-discriminatory and environmentally friendly.
There is therefore a growing awareness, also at a regulatory level, that AI (i) is the engine of a change that raises ethical, social and legal questions around its use and its consequences and (ii) represents one of the most important and complex challenges facing companies.
It is this last aspect for which organisations need to prepare: the profound transformation that, more or less silently, is underway in the world of work.
Taking advantage of a technology that autonomously collects information, processes it, draws conclusions from it or makes decisions contributes to the speed with which services can be carried out, improves operational efficiency and reduces the scope for error in routine activities. It also has an impact on the personnel needs of a business and may affect how the performance of individual employees is measured.
While this represents a great opportunity to make business processes faster, more reliable and more cost-effective, several issues also lurk beneath the surface. From an employment law point of view, these include (i) bias and (ii) the risk of intensive employer control.
The technology, although artificially intelligent, is programmed by humans and can therefore be affected by the biases of its programmers, reflecting and amplifying any errors present in the information it processes.
As we know, generative AI is programmed to learn and (self-)train itself to improve over time, and it does so partly on the basis of the information that is provided to it. The risk of bias replication is therefore very high.
In addition, AI processes and produces an enormous amount of data and can also, directly or indirectly, enable intensive remote monitoring of employees.
That said, in the Italian legal system remote control is regulated in detail and allowed only in the manner, and subject to the stringent conditions, provided for by law, including full compliance with the provisions on data protection. This is a matter that obviously plays a fundamental role when it comes to AI.
Continue reading the full version published on AIDP.
Artificial intelligence (AI) is playing an increasingly important role in the recruitment phase, offering new opportunities and challenges for personnel selection processes.
The use of artificial intelligence in the preparatory phase of the employment relationship is becoming ever more important in the following activities:
– Resume screening: AI can be used to perform the initial screening of resumes received for a job position. Through machine learning and data analysis, AI can identify the skills, experience and qualifications that are relevant to the role in question, thereby reducing the time and effort required for a manual pre-selection (a simplified sketch of this kind of scoring follows this list);
– Analysis of candidate profiles: AI can analyse candidate profiles on online platforms such as LinkedIn or recruiting websites, extracting relevant information such as skills, work experience, certifications and contact details. This can assist the HR functions of companies in identifying the most suitable candidates for a given role, thus speeding up the selection process;
– Initial interviews and skills assessment: some companies are experimenting with the use of chatbots or AI-based virtual assistants to conduct initial interviews with candidates. These systems can ask pre-defined questions, analyse the candidates’ answers and provide a score or assessment based on the answers given. This allows the interview process to be standardised and candidates to be further filtered prior to the subsequent selection phases;
– Predictive analysis: AI can be used to analyse large amounts of historical data and identify patterns or trends that can help predict a candidate’s future performance. For instance, AI can identify personality traits, experiences or skills that are related to job success in certain roles or sectors. This can be used as an additional factor in assessing candidates.
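By way of illustration only, the following minimal sketch shows how an automated resume screen of the kind described above might rank applications against a job profile. The skill list, the weights and the score_resume and shortlist functions are invented for this example and do not describe any specific vendor’s product.

```python
# Hypothetical illustration of automated resume screening.
# The skill weights and scoring logic are invented for this sketch
# and do not describe any real recruiting product.

JOB_PROFILE = {              # weights a recruiter might assign to required skills
    "python": 3.0,
    "sql": 2.0,
    "project management": 1.5,
    "english": 1.0,
}

def score_resume(resume_text: str, profile: dict) -> float:
    """Return a weighted score based on which required skills appear in the text."""
    text = resume_text.lower()
    return sum(weight for skill, weight in profile.items() if skill in text)

def shortlist(resumes: dict, profile: dict, top_n: int = 2) -> list:
    """Rank candidates by score and keep only the top_n for human review."""
    ranked = sorted(resumes, key=lambda name: score_resume(resumes[name], profile), reverse=True)
    return ranked[:top_n]

if __name__ == "__main__":
    resumes = {
        "Candidate A": "Five years of Python and SQL development, fluent English.",
        "Candidate B": "Project management experience, basic English.",
        "Candidate C": "Warehouse logistics experience, forklift licence.",
    }
    print(shortlist(resumes, JOB_PROFILE))   # e.g. ['Candidate A', 'Candidate B']
```

Even a filter this simple embeds its designer’s choices: which keywords count, how much weight they carry, and which candidates are never seen by a human at all. These choices are precisely where the biases discussed below can enter.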
The widespread perception is that such automated procedures are faster, more reliable and cheaper than ‘conventional’ selections, as they allow the candidates’ personal characteristics and aptitudes to be effectively identified by analysing a large amount of data collected during virtual interviews.
Continue reading the full version published at Norme e Tributi Plus Lavoro of Il Sole 24 Ore.
1. DIGITAL REVOLUTION AND LAW
The emergence of technologies using artificial intelligence systems has ushered in a new round of debates on the key ethical, social and legal issues surrounding the use of such technologies and their consequences.
Modern technologies – with their increasing impact on society and customs – raise the issue of devising instruments to safeguard fundamental rights, security and personal data, in order to ensure that technological advances take place in keeping with individual and collective protection needs, while at the same time ensuring a human-centred approach.
Indeed, it is clear that the development of new-generation algorithms and of increasingly sophisticated automated data processing techniques offers new opportunities but, at the same time, presents complex challenges that affect almost every area of law.
Labour law is not immune to this profound transformation, which necessitates constant adaptation to new demands stemming from practical experience. It has been noted, in this regard, that this renders labour law ‘a necessarily dynamic law, since the basis of the employment contract is functionally connected to productive organisations and structured in such a way that the contents of the employment relationship change in accordance with organisational and productive changes’.
One of the factors changing the organisation and performance of work is undoubtedly that particular branch of IT known as artificial intelligence (hereinafter referred to as A.I.).
2. ARTIFICIAL INTELLIGENCE IN MANAGING THE EMPLOYMENT RELATIONSHIP
In an effort to capture the endless variations and multiple applications of the phenomenon, several definitions of A.I. have emerged over time. The definition of Artificial Intelligence provided by the European Commission in its Proposal for a Regulation of the European Parliament and of the Council of April 2021 laying down harmonised rules on Artificial Intelligence (the A.I. Act) is particularly interesting, in view of its origin.
The Proposal for a Regulation, in Article 3, defines the ‘artificial intelligence system’ as ‘a system that is designed to operate with a certain level of autonomy and that, based on machine and/or human-provided data and inputs, infers how to achieve a given set of objectives using machine learning and/or logic- and knowledge-based approaches, and produces system-generated outputs such as content (generative AI systems), predictions, recommendations or decisions, which influence the environments with which the AI system interacts’.
The specific function of the Regulation, in the terms formulated by the Proposal, is to set out the specific requirements for A.I. systems and the obligations to be complied with by those who place this type of product on the market, right down to the user, in order to ensure that the A.I. systems which are marketed and used are safe and respect EU fundamental rights and values.
The relevant provisions are based on a ranking of the potential level of impact of the systems on the wider community, with particular attention to applications of A.I. formally qualified as ‘high risk’ (i.e. those which have ‘a significant harmful impact on the health, safety and fundamental rights of persons in the Union’).
For the purposes hereof, it is noted that the A.I. Act qualifies, inter alia, as ‘high-risk systems’ those used ‘in employment, workers management and access to self-employment, notably for the recruitment and selection of persons, for making decisions on promotion and termination and for task allocation, monitoring or evaluation of persons in work-related contractual relationships’.
This classification stems from the fact that ‘those systems may appreciably impact future career prospects and livelihoods of these persons’.
2.1 ARTIFICIAL INTELLIGENCE IN THE RECRUITING PHASE
Already in the preliminary phase of the employment relationship, A.I. is growing in importance: indeed, algorithmic hiring, understood as a personnel selection procedure wholly or partially entrusted to algorithms, is developing rapidly.
The widespread perception is that such automated procedures are faster, more reliable and cheaper than ‘conventional’ selections, thereby enabling the effective identification of candidates’ personal characteristics and aptitudes through analysing a large amount of data collected during virtual interviews.
While A.I. represents a great opportunity, when it is not properly controlled it can be undermined by an inherently insidious issue, namely human prejudice, which is inevitably reflected in the algorithms. The A.I. Act cited above expressly acknowledges this risk in classifying such systems as ‘high-risk’.
With reference to the risks associated with the use of artificial intelligence in the workplace, it was in fact found that ‘throughout the recruitment process and in the evaluation, promotion, or retention of persons in work-related contractual relationships, such systems may perpetuate historical patterns of discrimination, for example against women, certain age groups, persons with disabilities, or persons of certain racial or ethnic origins or sexual orientation. AI systems used to monitor the performance and behaviour of these persons may also impact their rights to data protection and privacy.’
Depending on the way the software is constructed, even a company that has no discriminatory purposes could unwittingly introduce so-called biases in the processing, which, with a knock-on effect, would affect the outcomes of the process, thus resulting in discriminatory effects.
This is because software, however artificially intelligent it may be, is still programmed by human beings and is therefore affected by the judgmental dynamics of its own programmers.
In addition, data entered into the software remains stored within the programme, thus influencing future predictive analyses that will be affected by outdated data.
The well-known Amazon case is worth mentioning in this regard.
The renowned US giant had developed an experimental automated talent-finding programme with the aim of assessing candidates according to a ranked scoring system. However, with specific reference to IT roles, the system did not select applications in a gender-neutral manner: female candidates were automatically excluded. The reason was that the software was based on data collected over the previous 10 years, and the majority of the resources hired during that time in the IT field had, in fact, been male.
The algorithms thus identified and exposed the biases of their own creators, thereby demonstrating that automated systems trained on non-neutral data go on to produce non-neutral decisions.
The Amazon case offers an interesting insight into the limits of artificial intelligence learning and the extent to which so-called human biases can be reflected in automated systems, thereby influencing their algorithms.
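To make the mechanism concrete, here is a deliberately toy sketch, with entirely fabricated data, of how a scoring rule ‘trained’ on historically skewed hiring decisions reproduces that skew. It is not Amazon’s actual system or dataset; the tokens, the learn_token_weights function and the figures are invented solely to illustrate the point made above.

```python
# Toy demonstration of bias replication: all historical records below are fabricated.
from collections import defaultdict

# Each historical record: (tokens found on the CV, was the candidate hired?)
HISTORY = [
    ({"python", "chess club"}, True),
    ({"python", "rugby team"}, True),
    ({"java", "rugby team"}, True),
    ({"python", "women's chess club"}, False),   # past hires were mostly male
    ({"java", "women's chess club"}, False),
]

def learn_token_weights(history):
    """For each token, estimate the share of past CVs containing it that led to a hire."""
    hired, seen = defaultdict(int), defaultdict(int)
    for tokens, was_hired in history:
        for token in tokens:
            seen[token] += 1
            hired[token] += int(was_hired)
    return {token: hired[token] / seen[token] for token in seen}

def score(tokens, weights):
    """Average learned weight of the tokens present on a new CV."""
    known = [weights[t] for t in tokens if t in weights]
    return sum(known) / len(known) if known else 0.0

weights = learn_token_weights(HISTORY)
print(score({"python", "rugby team"}, weights))          # ~0.83: favoured
print(score({"python", "women's chess club"}, weights))  # ~0.33: penalised via a proxy token
```

Gender never appears as an explicit input: the token “women’s chess club” acts as a proxy, and the imbalance in the historical outcomes resurfaces as a lower score, which is essentially the behaviour reported in the Amazon case.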
2.2 MANAGERIAL POWER THROUGH ALGORITHMIC MANAGEMENT
In addition to the pre-hiring phase, A.I. systems are also an important factor in organising work, e.g. systems for managing warehouse logistics as well as platforms used for managing riders.
In these sectors, decisions on how best to manage activities and human resources are increasingly being delegated to algorithms, which are able to analyse a vast amount of data and identify the most effective management and organisational solution: algorithms that assign tasks according to certain parameters, automated monitoring systems, and geolocation systems that provide alerts or intervene automatically in case of danger.
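A minimal, hypothetical sketch of the kind of algorithmic task assignment described above (for instance, allocating a delivery to the nearest available rider) may help to visualise where the ‘decision’ sits. The data structures, names and the distance rule are invented for illustration; real platforms rely on far richer criteria.

```python
# Hypothetical sketch of algorithmic task assignment: the nearest available rider
# is chosen automatically. All names, positions and rules are invented.
from __future__ import annotations
from dataclasses import dataclass
from math import dist

@dataclass
class Rider:
    name: str
    position: tuple[float, float]   # (x, y) in arbitrary map units
    available: bool

def assign_delivery(pickup: tuple[float, float], riders: list[Rider]) -> Rider | None:
    """Pick the closest available rider to the pickup point, or None if nobody is free."""
    candidates = [r for r in riders if r.available]
    if not candidates:
        return None
    chosen = min(candidates, key=lambda r: dist(r.position, pickup))
    chosen.available = False        # the algorithm's output directly shapes the rider's workload
    return chosen

riders = [
    Rider("Anna", (0.0, 1.0), True),
    Rider("Marco", (4.0, 4.0), True),
    Rider("Lia", (0.5, 0.5), False),
]
print(assign_delivery((0.0, 0.0), riders).name)   # -> Anna
```

Decisions of this kind, taken without human intervention yet directly affecting how work is allocated, are exactly what the transparency obligations discussed below are concerned with.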
In this rapidly changing working environment, the European Union has emphasised the need for workers to be fully and promptly informed as to the essential conditions of their work.
In order to ensure that employees and trade unions are aware of the digital systems in individual business organisations, the legislator, by transposing Directive (EU) 2019/1152 on transparent and predictable working conditions into national law, has introduced a disclosure obligation for the employer in cases where automated decision-making or monitoring systems are used (Article 1-bis of Italian Legislative Decree No. 152/1997 introduced by the so-called Transparency Decree, Italian Legislative Decree No. 104/2022).
The purpose of the new legislation, as can be seen from the recitals and Article 1 of the EU Directive, was to ‘improve working conditions by promoting more transparent and predictable employment while ensuring labour market adaptability’.
In practical terms, stripping away the sometimes difficult jargon, the worker must be able to know whether automated techniques are used and whether the employer relies on algorithmic decisions or similar means; furthermore, the worker is entitled to know how these techniques operate, their logic and their impacts, including in terms of security risks to personal data.
From a combined reading of Article 1(1)(s) and Article 1-bis, para. 1 of Italian Legislative Decree No. 152/1997, it follows that such specific disclosure is required where the manner in which workers’ services are performed is organised through the use of automated decision-making and/or monitoring systems, which are designed to ‘provide information relevant to recruitment or the conferral of an assignment, the management or termination of the employment relationship, the assignment of tasks or duties, as well as information affecting the monitoring, evaluation, performance and fulfilment of the contractual obligations of workers’.
The scope of the rule contained in Article 1-bis of the Transparency Decree has created interpretative uncertainties and practical difficulties in identifying which systems fall within this additional disclosure obligation, as opposed to remote control instruments, whose disclosure obligations are, as is widely known, governed by Article 4 of Italian Law No. 300/1970, a provision expressly spared from the reform and which appears to retain some degree of autonomy.
With reference to the types of tools to be regarded as automated systems, Circular No. 19/2022 of the Italian Ministry of Labour and Social Policies (Ministero del Lavoro e delle Politiche Sociali) has attempted to provide some clarifications on the innovations introduced by Italian Legislative Decree No. 104/2022. In particular, the Circular excluded the disclosure obligation where badges are used, i.e. automated tools for recording the attendance of employees upon entry or exit, provided that such recording does not automatically trigger an employer’s decision; by way of example, however, it provided for such an obligation in the case of automated systems for managing shifts or determining pay, as well as tablets, GPS devices, wearables and other such tools.
Continue reading the full version published on Guida al lavoro of Il Sole 24 Ore.