HR leaders around the world use computers, clouds and cookies to process billions of pieces of employee data. Yet for many of those same people, the thought of using Artificial Intelligence (AI) and Machine Learning (ML) in the HR context is uncomfortable, even though these technologies have tremendous potential to help coach and empower employees (and thus help the business retain top talent). Such feelings are usually, albeit mistakenly, attributed to the European Union’s General Data Protection Regulation (GDPR) and other privacy laws, despite the fact that the terms “Artificial Intelligence” and “Machine Learning” aren’t mentioned anywhere in the regulation’s text.

To help better understand – and move past – these misconceptions, let’s explore how companies can stay compliant with global privacy laws, while still using the newest AI/ML workplace technologies to provide coaching and empowerment solutions to their employees. Here are the answers to some common privacy questions and issues in an HR context.

Is it unusual to analyze an employee’s incoming digital messages using AI and ML?

No. In fact, most applications you currently use already do this. Consider the following: When you reply to an email, have you ever wondered how your email client provides you with a list of suggested “canned” responses? Or how support emails or chat windows can answer your questions without human intervention? Or how your anti-virus software can tell whether your incoming email contains a virus before you open it? All of these technologies are based on AI/ML. HR technologies that do something similar are not unique.

Does the company need consent in those instances according to GDPR?

No. Under GDPR, employers have what’s known as “legitimate interests” to operate the company. These interests give them grounds to process personal data using these types of AI applications without requiring consent.

As a general rule, employers should avoid relying on consent to process employee personal data altogether. Employees are almost never in a position to voluntarily or freely give consent due to the imbalance of power inherent in employer-employee relationships, and therefore the consents are often invalid [1]. That may be surprising, but employers already process employee personal data without consent on a daily basis, whether or not they realize it. For example:

  • automatically storing employee communications;
  • creating employee review or discipline records;
  • sending sensitive employee payroll information to government agencies.

This isn’t an issue because GDPR offers five alternative legal bases pursuant to which employee personal data can be processed, including the pursuit of the employer’s “legitimate interests” [2]. This concept is intentionally broad and gives organizations the flexibility to determine whether their interests are appropriate, regardless of whether those interests are commercial, individual, or broader societal benefits, or even whether the interests are the company’s own or those of a third party [3].

Are there common software applications that use AI/ML on employee email messages without employees’ consent?

Yes. Many of these are likely already in use by your organization, and they do not ask for the consent of every sender for every message analyzed in your inbox. For example:

  • Gmail Smart Reply: Uses AI and ML to analyze inbox messages in order to suggest responses.
  • Microsoft Office 365 Scheduler with Cortana: Analyzes incoming emails using natural language processing to schedule meetings.
  • ZenDesk Answer Bot: Utilizes ML to analyze and provide answers to incoming messages.
  • Norton Email Antivirus Scan: Scans all incoming email messages to protect against security threats.

OK, so employers can process email communication data internally and remain compliant with GDPR, but should they?

It depends. Here is where we diverge from “law” to “ethics.” At Cultivate we believe that employees should control their own inbox, even though that’s not a requirement of GDPR. That means letting employees grant and revoke permission for company-approved applications to read their workplace inbox. And we feel this control should be opt-in, not opt-out. This lets individuals control their own data, while also allowing them to benefit from new AI/ML-powered HR technologies if they so choose.

There can be a middle ground that respects employees’ privacy and also lets HR departments use new technologies to empower the individual. This is more important than ever in the new era of WFH, where we have an abundance of workplace communication and companies are charting new courses to help their employees thrive in the future of work.

We hope this has answered some of your key questions around the ethics and legality of using AI/ML technologies in your HR initiatives. New technologies offer powerful solutions that can help your employees become more self-aware and better leaders. Exploring them, and understanding compliance around them, is key to rolling them out effectively.

[1] GDPR Article 4(11); GDPR Recital 43; Article 29 Data Protection Working Party, Guidelines on Consent under Regulation 2016/679.
[2] GDPR Article 6; GDPR Recital 47.
[3] UK Information Commissioner’s Office: “Legitimate interests” and “What is the ‘legitimate interests’ basis?”

This information is provided for your reference only and does not constitute the provision of legal advice.

Josh Pittel, JD, ESQ, CIPP/US/E

Josh is the Director of Global Privacy/Legal at Cultivate