Companies have an expensive retention problem. Identifying, sourcing and training the right talent, a process that can cost anywhere from $4,000 to nearly $60,000 per hire, is only the first step in the employee experience journey. Yet this investment does not always bear fruit: the average employee tenure is just 4.2 years, according to the Bureau of Labor Statistics, and estimates for Silicon Valley are much lower.
What makes people stay at their place of work? According to an analysis of Facebook employees, those who were likely to stay felt they used their strengths more often and that their workplace enabled them to grow. Great employee experiences boost talent retention and acquisition, as employees feel more fulfilled at work and the business builds a talent brand as an employer of choice.
How AI will reshape employee experience and the nature of work
Employee experience is increasingly associated with advanced technology, especially data analytics and artificial intelligence (AI). The emergence of predictive intelligence, machine analysis, recommendation engines and process automation is giving businesses new tools for examining complex dimensions of the people in an organization, and for creating the interventions and capabilities that enable those people to perform at their best.
This technology is set to transform the employee experience at leading businesses very soon: in a recent survey of 400 CHROs, half of those interviewed recognized the power of cognitive computing to transform everything from HR operations and talent acquisition to talent development. There is a wide range of applications for AI in improving the employee experience, but some of the most compelling relate to development, training, collaboration and teaming. Here are some potential use cases:
- Engaging employees – Advanced sentiment analysis technologies use natural language processing, text analysis, biometrics and other emerging techniques to go beyond traditional ways of assessing employee experience and to seek deeper insights into employees’ behaviors and motivations. For example, AI can analyze email communications and biometric data and predict specific actions to improve team members’ sense of belonging or connection to activities. Sentiment analysis could also be used to predict when an employee is getting bored by their work; the AI could then provide data-based recommendations on actions to boost that employee’s engagement (a minimal sketch of this idea follows this list).
- Building culture and changing behavior – The possibilities for leveraging artificial intelligence to help organizations adapt culture and behavior are immense. As consumers, humans are already exposed to a broad variety of behavioral stimuli and nudges designed to encourage “views”, “likes”, or “buys”, and organizations are applying nudge theory and behavioral economics concepts in the workplace. By combining behavioral economics with the insight and scale of artificial intelligence, organizations can pursue culture and behavior change that is not possible through traditional transformation or change initiatives. Consider unconscious bias in promotions and reviews: a particular manager may show hard-to-identify patterns in his reviews of colleagues. AI can both discover these previously unrecognized trends and act as a “friendly intermediary”, presenting the evidence to the manager and helping him change his behavior toward colleagues, which benefits the whole organization.
- Building new skills – By applying data analysis and AI, employers will be able to build data-fueled, personalized career plans and training programs. These plans will be mapped exactly to the individual’s personality and approach to learning, based on insights and correlations that can only be achieved by machine analysis at scale. Moreover, AI can become an instructor and guide that supports workers as they improve their skills, while also managing the organization’s learning and development costs.
- Enabling teaming and collaboration – Machine-scale data analysis can also be applied to the wider employee experience by ensuring that people are placed in teams that will work well together and collaborate effectively. The right algorithmic models will provide recommendations which, when implemented, drive employee engagement by creating cohesive teams and boosting collaboration. As the workforce becomes more fluid, drawing on gig and contract workers to meet skills shortages, effective teaming will become critical: teams will come together ad hoc as required and be expected to be productive immediately. By recommending team structures and hierarchies, AI will prove a core enabler for this transformation.
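To make the sentiment-analysis idea concrete, here is a minimal sketch. It uses VADER from the NLTK library, a real and commonly used sentiment model, but the per-employee message log and the flagging rule are invented for illustration; any production system would need far more careful design (and consent).

```python
# Minimal sketch: flag declining engagement from message sentiment.
# The message log and the trend rule below are hypothetical,
# illustrative assumptions, not a production policy.
import nltk
from nltk.sentiment.vader import SentimentIntensityAnalyzer

nltk.download("vader_lexicon", quiet=True)
sia = SentimentIntensityAnalyzer()

# Hypothetical data: messages in chronological order per employee.
messages = {
    "alice": ["Great sprint, the team nailed it!",
              "Another release, fine I guess.",
              "Same tickets again. Bored of this."],
}

for employee, texts in messages.items():
    # VADER's compound score ranges from -1 (negative) to +1 (positive).
    scores = [sia.polarity_scores(t)["compound"] for t in texts]
    # Crude trend: compare the most recent message with the first one.
    if scores[-1] < scores[0] - 0.5:
        print(f"{employee}: sentiment dropping "
              f"({scores[0]:.2f} -> {scores[-1]:.2f}); consider a check-in")
```

Even this toy version shows why the ethics discussion below matters: the system only works by reading communications employees may consider private.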
With the rise of AI, the way in which enterprises engage, evaluate and manage employees will transform beyond recognition. Data and predictive analytics will lie at the heart of this change, as algorithms help businesses understand what motivates their employees and how they can architect their workflow, workplace culture and HR services to play on the strengths of individual workers.
The ethical implications of AI shaping employee experiences
The use of AI in employee hiring, promotion, and retention invites a new set of ethical questions for organizations, and wider society, to consider. The era of Big Data encouraged us to collect and store as much data as possible. Our more evolved perspective is that we need to address the quality of the data we keep and store, and to consider the privacy implications for the humans whose data we collect. Running algorithms on this data raises questions about unintended negative consequences of bias – algorithmic, human, and data-driven.
Data Quality
First, data quality is paramount. Is your data measuring what you think it’s measuring? Social scientists concern themselves with operationalization, the process of defining measurable data to represent a concept, with good reason.
For example, a valuable measure of a healthy workforce is employee satisfaction. How do you measure it? One strategy might be to develop a survey tool that explicitly asks employees about their satisfaction with particular aspects of the company. This might suffer from a biased sample (in other words, only very unhappy or very happy employees respond), or the response rate might be too low to give you an accurate measurement (what statisticians call a representative sample).
Another strategy might be to infer satisfaction through other means, what social scientists call a ‘construct’. Given the accessibility and variety of data at our fingertips, we might create a construct based on the number of company events the employee attends, how many vacation days they take, whether their internal network includes individuals outside their immediate department, and what websites they browse during company time.
The first consideration is whether this is an accurate and comprehensive measurement. For example, does the number of vacation days reflect employee satisfaction? Does being a workaholic mean the employee is happy? Might such a measure be biased against employees with families compared with single employees?
Second, how well is this data measured? Do you measure a person’s internal network by the meetings they have with non-department individuals? This might miss informal meetings, watercooler banter, and after-work events.
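Here is a minimal sketch of the kind of construct described above. Every field name, weight, and normalization is a fabricated assumption for illustration; the point is that each weight encodes a contestable claim about what satisfaction looks like, exactly the operationalization risk just discussed.

```python
# Minimal sketch of a 'satisfaction' construct built from proxy signals.
# All fields and weights are hypothetical assumptions; each one encodes
# a contestable claim (e.g., is attending evening events really a sign
# of satisfaction, or just of having no caregiving duties?).
from dataclasses import dataclass

@dataclass
class EmployeeSignals:
    events_attended: int        # company events this year
    vacation_days_taken: int    # out of an assumed 25-day allowance
    cross_dept_contacts: int    # distinct contacts outside own department

def satisfaction_construct(s: EmployeeSignals) -> float:
    """Return a 0-1 score. The weights below are arbitrary assumptions."""
    events = min(s.events_attended / 10, 1.0)
    vacation = min(s.vacation_days_taken / 25, 1.0)  # treats rest as positive
    network = min(s.cross_dept_contacts / 15, 1.0)
    return 0.4 * events + 0.3 * vacation + 0.3 * network

print(satisfaction_construct(EmployeeSignals(3, 20, 8)))  # ~0.52
```

Note how arbitrary the design choices are: doubling the weight on events attended would reorder who looks "satisfied" without any change in the underlying people.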
Finally, what are the social and cultural institutions that shape your data, particularly if it will be used in algorithms? The data that drives AI algorithms is non-neutral, reflecting existing gender and racial biases and disparities. A common fallacy is that removing identifying racial and gender information is sufficient to develop unbiased AI. This simply isn’t the case.
Privacy Implications
Employees share a significant amount of active and passive information. ‘Active’ information refers to data that is explicitly shared by the individual, such as email on a company email address. Most employees are aware that this information is stored and may be accessed by the company for its own purposes. ‘Passive’ information includes web logs, calendar invites, and other less obvious data that is shared through the use of employer software and hardware. For example, based on an online work calendar, an employer can construct a network of connections to understand how the employee spends their time and with whom, and track trends over time. Employees may not be aware that this is possible.
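To illustrate how much ‘passive’ data reveals, here is a minimal sketch of the calendar-based network just described. The invite format is hypothetical (real calendar APIs differ); it simply counts how often each pair of people meet, which already exposes who an employee spends their time with.

```python
# Minimal sketch: build a contact network from calendar invites.
# The invite format is a hypothetical assumption; real calendar
# APIs expose richer (and more revealing) data.
from collections import Counter
from itertools import combinations

# Hypothetical invites: each is the set of attendees of one meeting.
invites = [
    {"dana", "erik"},
    {"dana", "erik", "farah"},
    {"dana", "farah"},
]

# Count how often each pair co-attends a meeting.
edge_weights = Counter()
for attendees in invites:
    for pair in combinations(sorted(attendees), 2):
        edge_weights[pair] += 1

for (a, b), n in edge_weights.most_common():
    print(f"{a} <-> {b}: {n} meetings")
# Even this toy graph shows who 'dana' spends the most time with,
# a pattern the employee never explicitly chose to share.
```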
AI raises new ethical questions related to data privacy in the workplace. We are able to capture data from different sources, and utilize algorithms on this data to arrive at predictions and conclusions that were previously impossible.
Algorithmic considerations
The unintended consequences of algorithms are a staple of media coverage. For example, we’ve heard of advertising algorithms that show women lower-salaried jobs than men, or parole algorithms biased against black inmates. How might we enable the positive use of AI in employee experience when the potential for negative outcomes is so palpable?
Let’s look at an example: if a company is building an algorithm to inform employee promotions, it may use factors such as manager and co-worker reviews, title, or previous salary as variables. The developer may actively decide not to include race and gender in the algorithm. However, the data is already tainted: for each of these variables, women and minorities have been systematically discriminated against by the institution. While gender and race are not explicitly in the algorithm, they implicitly influence the algorithmic outcome.
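A minimal sketch of this proxy problem, on synthetic data. The numbers are fabricated and exaggerated to make the mechanism visible: gender is dropped from the model, but because previous salary in the training data was itself shaped by gender, the model reconstructs the bias from the proxy.

```python
# Minimal sketch: a promotion model with gender removed still learns
# gendered outcomes, because 'previous salary' acts as a proxy.
# All data here is synthetic and exaggerated to expose the mechanism.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 2000
gender = rng.integers(0, 2, n)               # 0 = women, 1 = men (synthetic)
review = rng.normal(3.5, 0.5, n)             # review scores, gender-neutral here
salary = 60_000 + 15_000 * gender + rng.normal(0, 5_000, n)  # historic pay gap
# Historic promotions depended partly on salary, importing the gap.
promoted = (0.6 * review + salary / 50_000 + rng.normal(0, 0.3, n)) > 3.6

# Train WITHOUT the gender column; salary is rescaled for the solver.
X = np.column_stack([review, salary / 10_000])
model = LogisticRegression().fit(X, promoted)

pred = model.predict(X)
print("predicted promotion rate, women:", round(pred[gender == 0].mean(), 2))
print("predicted promotion rate, men:  ", round(pred[gender == 1].mean(), 2))
# The rates differ sharply even though gender was never a feature:
# previous salary carries it into the model as a proxy.
```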
One study from researchers at Princeton University and the University of Bath, for example, looked at how AI learns bias through word embeddings (embeddings represent words as vectors so that commonly associated words sit close together). The study found the words ‘female’ and ‘woman’ were mostly associated with arts and humanities occupations, as well as with the home. ‘Male’ and ‘man’, meanwhile, were matched to math and engineering professions.
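The mechanism behind that finding can be shown in a few lines. The three-dimensional vectors below are invented toy values (real embeddings such as word2vec or GloVe have hundreds of dimensions); the cosine-similarity comparison mirrors the kind of word-association test the researchers used.

```python
# Minimal sketch of the word-association test behind embedding bias.
# The vectors are hypothetical toy values; real embeddings (word2vec,
# GloVe) learn similar skews from large text corpora.
import numpy as np

emb = {  # hypothetical 3-d embeddings
    "woman":    np.array([0.9, 0.1, 0.2]),
    "man":      np.array([0.1, 0.9, 0.2]),
    "nurse":    np.array([0.8, 0.2, 0.3]),
    "engineer": np.array([0.2, 0.8, 0.3]),
}

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

for occupation in ("nurse", "engineer"):
    bias = cosine(emb[occupation], emb["woman"]) - cosine(emb[occupation], emb["man"])
    lean = "female-associated" if bias > 0 else "male-associated"
    print(f"{occupation}: {bias:+.2f} ({lean})")
# In real embeddings, these skews leak into any downstream system,
# such as a resume screener, built on top of them.
```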
How can we navigate this potentially treacherous space? We cannot change the institutions from the ground up, and we cannot change a history of bias and discrimination. What we can do is create human-centric AI. In short, human centricity refers to the explicit inclusion of humans in AI decision-making. This can be infused in design, oversight, and execution.
Shaping an ethical future for AI in the workplace
AI will bring new levers of success, including collaboration, information sharing, experimentation, learning and more effective decision-making. Businesses that want to remain competitive must adopt these levers. However, if the application of AI in the workplace is to benefit all, it needs to be deployed in a human-first way that places the rights of individual employees front and center.
Central to this goal is ensuring that we use AI for good, not ill. We know that AI can be biased, and we know that people are biased. Any application of the technology that ignores the flaws of both human and machine will be inherently flawed. But there is also much good that can be achieved through AI, and those applications should be embraced. AI is excellent at pointing out when humans are being unconsciously biased, for example, and that knowledge allows us to correct our actions. Used in this way, AI will help us become better at our jobs and treat employees more fairly.