After a silent and rapid incubation, computer systems capable of performing tasks that would normally require human intelligence (known as Artificial Intelligence) have become a reality. They represent one of the most pressing challenges (if not the greatest and most complex) that enterprises now have to face and, in most cases, win, so as not to put their future and business continuity at stake. Social, ethical and legal aspects are critical and need to be taken into account in this challenge.
Taking up the A.I. challenge has become particularly urgent, as A.I. has broken into the business models of many industries from multiple angles. If, on the one hand, A.I. represents a major revolution in data management, processing and decision making, on the other hand it has simultaneously raised ethical, social and, as always, legal issues. While social and legal environments struggle to keep up, A.I. has already made its impact. Among the areas most deeply affected is certainly employment law, which often (if not always) crosses paths with data processing and protection, particularly when it comes to workers’ rights and, more generally, to people and work management.
To get a sense of how significant this impact may be, it suffices to say that A.I. is already changing the content of all those duties and tasks that consist of (or result in) decision making based on data (including big data) and information processing, quickly replacing manual activities that, until very recently, typically required extra resources and proved particularly time consuming. Having machines process information and data in order to draw conclusions or form decisions speaks for itself as to how deep this revolution can be in any work environment. When it comes to typical activities related to people management and employment, however, powerful applications of A.I. inevitably need to reckon with the restrictions and protections of employment and privacy laws.
As far as the pre-hiring and recruitment phase is concerned, for example, faster, more accurate and incomparably more cost-effective automated data collection and management have suddenly made ordinary recruitment processes obsolete and no longer useful. As a result, recruiting now requires fewer people to achieve greater results in a shorter time. Data privacy is, of course, very sensitive in terms of the rules on data processing (and in particular the need to avoid unlawful processing). In addition, automated recruitment processes, if left ungoverned when it comes to compliance (think of gender equality and discrimination), may entail significant legal issues, since unlawful outcomes conflicting with applicable laws may not only go unnoticed, but be repeated. A.I. is indeed typically based on capitalising on previous experience and on producing new results from previous inputs, which entails the risk that undesired patterns may be involuntarily replicated in the future. As machines have no capacity to fully base their results on ethics or good sense, human intervention remains unavoidable in most cases.
The first conclusion emerging from these preliminary considerations is that any form of A.I. implemented in business activity needs to be governed and regulated in dedicated internal policies, also in order to avoid accidental violations of the law.
It is no coincidence that, in a legal environment found unprepared for the sudden changes imposed by digital evolution and the advent of A.I., the EU A.I. Act, the world’s first comprehensive A.I. law, which advanced through the European Parliament in spring 2023, identifies as ‘high-risk systems’ those used “in employment, workers management and access to self-employment, notably for the recruitment and selection of persons, for making decisions on promotion and termination and for task allocation, monitoring or evaluation of persons in work-related contractual relationships, […] since those systems may appreciably impact future career prospects and livelihoods of these persons” (Recital 36 of the A.I. Act). Consistently, the A.I. Act observes that “Throughout the recruitment process and in the evaluation, promotion, or retention of persons in work-related contractual relationships, such systems may perpetuate historical patterns of discrimination, for example against women, certain age groups, persons with disabilities, or persons of certain racial or ethnic origins or sexual orientation” (Recital 36 of the A.I. Act).
On another employment law and HR-related subject, while A.I. makes it possible to “intelligently” process, almost instantly, an (until recently) unimaginable amount of data, restrictions on the employer’s use of tools that may (even indirectly) allow remote monitoring of employees remain in place, and they can be critical. Indeed, in many civil law jurisdictions (including Italy) remote monitoring of employees is generally forbidden or strongly limited, and even where it is allowed, employers can acquire and use the resulting data for disciplinary purposes only in specific cases and provided that information and guarantees on data processing and data privacy are complied with.
Artificial Intelligence is going to play a central role in many heterogeneous industries (e.g., consultancy, legal, delivery, as well as all fields where combining and processing information is fundamental) and, although its functioning is driven by machines and algorithms, data processing is the key factor. The data processed and stored may easily include personal and sensitive data (e.g., ethnicity or gender), hence requiring extra care when it comes to data protection compliance, also considering that the sanctions provided for by the GDPR in case of unlawful data processing are particularly significant and can be calculated on the basis of the global turnover of the group to which the entity held liable belongs.
The second conclusion is that, when it comes to A.I. implementation, data protection plays a critical role: all the data that might be voluntarily or involuntarily processed, as well as any form of monitoring that A.I. may directly or indirectly allow, need to be carefully analysed, properly addressed and not left unregulated (again, for example, by also covering these aspects in internal policies and guidelines).
Last but not least, A.I. is also expected to have an impact on the job market, in terms of the jobs, professions and duties available and/or in demand. A.I. is indeed a great opportunity, but at the same time a serious threat to traditional jobs, which are forced to face a totally different kind of challenge.
Continue reading the full version published in the IPBA Journal.
Our Managing Partner Vittorio De Luca will take part in the second “Welfare & HR Summit” organised by Il Sole 24 Ore on 22 February.
This digital event explores new labour market developments and their impact on corporate governance and personnel management. It offers a place for dialogue between experts and institutions on the new regulatory scenarios and helps Italian companies organise themselves to meet the challenge of the pandemic.
LOCATION AND TIMETABLE
22 February 2021
3 pm – 6 pm
A live event broadcast from a dedicated location and streamed via a platform that allows speakers to connect and interact remotely.