Employees in 2020 might feel like they’re stuck in Orwell’s Nineteen Eighty-Four. Okay, so your computer is not a telescreen per se, but still, many employers electronically monitor and even physically read employee communications in an effort to improve productivity. The use of these invasive tactics in pursuit of an otherwise beneficial objective can create tension (and sow mistrust) between employees and employers.
At a basic level, employees are driven by internal motivation (e.g. the desire to learn, a sense of competence) and external motivation (e.g. salary)1. The Harvard Business Review indicates that employees who are intrinsically motivated are three times more engaged2. But external motivators like employee monitoring and the feeling of being watched have a strong influence and can even crowd out an individual’s existing internal motivations, significantly diminishing employee engagement and productivity3.
Additionally, employee monitoring is often implemented in ways that are not compliant with Europe’s General Data Protection Regulation (GDPR) — for example, using insufficient disclosures, relying on invalid employee consents, or failing to perform required risk assessments4. With fines of up to 4% of an employer’s annual global revenue, employers can be stuck with a bill that far exceeds any value of monitoring.
AI can help empower individuals to develop into effective people leaders, without the use of employee monitoring.
A manager’s emails, chats and calendar are not just communication and scheduling mechanisms. They’re digital relationships that reveal as much about an individual’s behavior as they do about content. Instead of monitoring these relationships to evaluate a manager’s behavior, AI can be used to point out interesting patterns in that behavior, and even provide coaching and tips based on those patterns. For example, AI can measure and surface discrepancies in communication frequency among a manager’s direct reports, and push relevant articles. These are powerful insights and tools that can help a manager become more self-aware. And they’re based on clear statistical measurements, not a potentially inaccurate evaluation of a manager (e.g. assuming that managers with communication discrepancies are poor communicators). This is a small but important distinction, as monitoring provisions under the GDPR could otherwise be triggered5.
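To make the idea of a “clear statistical measurement” concrete, here is a minimal sketch of how communication-frequency discrepancy could be computed. The metric (coefficient of variation over per-report message counts), the function name, and the sample data are all illustrative assumptions, not any particular product’s method:

```python
from statistics import mean, stdev

def communication_discrepancy(message_counts):
    """Coefficient of variation of a manager's per-report message counts.

    0.0 means the manager communicates evenly with every direct report;
    higher values mean some reports hear from the manager far more than
    others. This is a descriptive statistic, not an evaluation.
    """
    counts = list(message_counts.values())
    if len(counts) < 2 or mean(counts) == 0:
        return 0.0
    return stdev(counts) / mean(counts)

# Hypothetical weekly message counts for four direct reports
counts = {"alice": 42, "bob": 39, "carol": 5, "dan": 41}
print(round(communication_discrepancy(counts), 2))  # → 0.56
```

A tool built this way would only feed the score (and, say, a relevant coaching article) back to the manager herself — it reports a pattern without labeling anyone a good or bad communicator.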
AI doesn’t have to feel like monitoring.
People don’t like being watched, regardless of whether or not it falls under the GDPR’s definition of monitoring. AI’s insights and coaching can provide in-the-moment feedback directly to a manager, so there isn’t a need to involve anyone else.
But AI technology can be difficult to understand, and some organizations and their users may hold incorrect assumptions about how it works, even though they already rely on it in everyday tasks (for example, when sending emails or using customer-service chat bots). Getting comfortable with AI doesn’t have to be a major undertaking, though. To start, organizations should choose vendors with transparent data practices (vendors that host webinars, maintain easy-to-read privacy policies, publish FAQs, seek user buy-in, etc.) and secure procedures (vendors that hold well-known privacy and security certifications, etc.).
Organizations can further help users get comfortable with AI-related leadership development by giving users the choice to participate, as well as the right to delete their data should they later change their mind.
Business leaders who want to increase organizational productivity by helping managers hone their leadership abilities, without diminishing employee engagement or triggering the GDPR’s onerous monitoring provisions, should ask their data teams about the promise of AI technology.
Want to read more about the ethics and law of AI/ML? Check out this other recent blog post on the topic here.
4 GDPR Recital 43 and Articles 4(11), 6(1)(a), 7, 13 and 35; Article 29 Data Protection Working Party, Guidelines on consent under Regulation 2016/679
5 GDPR Recital 24; Article 29 Data Protection Working Party, Guidelines on Automated individual decision-making and Profiling for the purposes of Regulation 2016/679 (adopted 3 October 2017, as last revised and adopted 6 February 2018); EDPB Guidelines 3/2018 on the territorial scope of the GDPR (Article 3), Version 2.1, 12 November 2019
This information is provided for your reference only and does not constitute the provision of legal advice.