Legal departments have also not been spared from the persistent corporate mantra of managing costs and doing more with less.
However, lessons can already be learned from the use of automation in legal operations and the department’s role in setting up workable guardrails to mitigate privacy risks of other corporate departments.
Harnessing these learnings could help HR teams navigate the similar advantages and risks of using AI in their day-to-day operations.
Automation: a saving grace?
The modern legal department has installed contract management systems to tame voluminous contracting demands from other departments and has automated compliance approvals in regulated industries.
General counsel and their teams have long had to grapple with GDPR and other privacy laws, and crucially to operationalise and embed privacy compliance into corporate-wide activities so that they are understood by the business.
They can apply lessons learned in these areas to help HR navigate similar challenges for their own activities.
The possibilities are endless
HR departments are currently using (or starting to use) AI across four key areas:
1. Recruitment: AI is streamlining the recruitment process via automated candidate screening, talent identification, and predicting performance.
Beyond algorithms that analyse CVs, cover letters and LinkedIn profiles to identify the most qualified candidates, AI can also conduct phone interviews and Q&A sessions with candidates.
This reduces the manual time and effort otherwise expended by talent acquisition teams.
2. Performance management: AI can help companies manage employee performance by analysing data from performance reviews, productivity reports and other sources.
This is used to flag areas for employee improvement and create bespoke training for such needs.
3. Customer service: Automated responses, personalised recommendations and chatbots are routinely used to handle basic Q&A, enabling HR employees to deal with more complex matters.
4. Decision-making: AI-powered data analysis can also promote data-driven decisions, discerning patterns in customer behaviour, market trends and financial data to power strategic decision-making.
As applied to the HR function, AI-powered decision-making and data analysis can help forecast staffing requirements for various corporate functions.
Mitigating the risks
Now come the guardrails. Legal departments are tasked with ensuring that HR and other departments operate compliantly and in accordance with laws and regulations.
They must apply their understanding of a set of complex and often overlapping legal and privacy regimes to give their HR departments freedom to operate.
Conversations with a range of technology, legal and privacy experts point to six major risks requiring policies, procedures and monitoring for the safe implementation of AI and generative AI and the avoidance of litigation:
Bias and discrimination: The creation and coding of artificial intelligence invariably carries the risk of conscious or unconscious bias becoming embedded in the methods used to produce work product or data.
HR departments will need training on diversity, equity and inclusion methodology, and should avoid words and phrases associated with protected characteristics (for instance gender, ethnicity and sexual orientation).
Privacy: AI-powered tools used by HR will collect and analyse employee data, which includes personal information, performance metrics and communication logs.
This carries with it the corresponding duty to collect and store data securely to protect employee privacy.
Confidentiality: AI chatbots may reuse one user's inputs when performing similar tasks for others, so any information entered into a chatbot may end up being produced to a third-party user.
It will remain in an AI data bank outside the control of the business even if not reproduced.
Consider prohibiting input of business confidential information into chatbots and instead using non-automated channels and materials.
Data security: Chat histories of one user can be produced to a different user as output, which may constitute a data breach. A related threat is the fake app, where a cyber-criminal poses as ChatGPT or another genuine generative AI product to induce employees to download malware. Both risks should be covered in privacy training courses delivered through learning management systems.
Copyright infringement: Where an AI chatbot samples prior works to create 'new' documentation, it may copy copyrighted material from the internet without attribution. Unauthorised use of text or images can trigger litigation, so human oversight is required.
Incorrect data input and human error: To protect against so-called 'hallucinations', where a chatbot confidently produces false or fabricated output, the data fed into a chatbot or any other AI system requires human quality control. Staff performing that control need specialised training in how the AI is programmed so they can detect system faults.
In the most egregious recent example, a chatbot produced fictitious legal precedents that were presented to a New York judge in litigation, and the lawyer involved was sanctioned.
Is AI the future for HR?
AI will doubtless generate increased efficiencies and allow for greater productivity for HR departments.
The legal department can help ensure smooth implementation and outcomes by conducting a risk assessment of AI impact on employee privacy and diversity; testing AI systems for bias including corrective oversight and monitoring; and finally, evaluating and monitoring AI systems to ensure proper functioning and intended outcomes.
AI offers HR teams exciting possibilities for streamlining operations, freeing them to focus on value-add activities instead.
However, it might be wise to check in with your in-house lawyers before jumping on the ChatGPT bandwagon.
Jerry Temko is managing director, In-House Counsel Group at Major, Lindsey & Africa