The AI Problem We’re Not Taking Seriously Enough
The key to employee acceptance of AI is an aggressive management commitment to use AI to enhance, not replace, employees’ jobs, and to communicate that goal loudly and clearly.
- By Rob Enderle
- August 28, 2023
If we want to benefit from the recent massive wave of generative AI products, we need to take one of the biggest problems surrounding the technology more seriously. This problem was recently showcased in an Ipsos report sent to me by the PHA Group (similar to this report), which concluded that only one in eight employees agree that AI will create more job opportunities; 55% of those sampled, mostly people with college degrees, disagreed.
Whether AI deployments take jobs will depend on both AI capabilities and management decisions, but these results suggest employees do not trust employers to do the right thing. The recent actors’ and writers’ strikes in the U.S. highlight this concern.
Let’s talk about the problems that will result if management does not aggressively commit to using AI to enhance employees’ work and improve working conditions, rather than replacing workers and reducing employment while enriching executive leadership.
Increased Unionization and Strikes
Like a lot of people who have degrees in manpower management, I think unions form only when management loses the trust of its employees. I have belonged to and had to fight unions over the years, so I’m not a fan, but I recognize that when management misbehaves toward employees, unions are one of the few defenses that work at scale (litigation, given the proliferation of arbitration clauses in employment contracts, isn’t nearly as strong).
Using the actors’ and writers’ strikes as an example, the reason unions are a problem is that they create a second chain of command not aligned with the business, and they can drive initiatives that destroy the companies and industries they operate in because their primary tool for eliciting a favorable management response is to temporarily shut the business down.
This is bad in a competitive environment because customers can’t do business with companies that cannot keep their doors open. Much of manufacturing’s move offshore was the direct result of union actions making labor too expensive domestically. The quickest way to get a union to form is to convince employees (or allow them to be convinced) that they are being treated unfairly. Having them train AI tools to replace them would be perceived as incredibly unfair.
Unless companies aggressively act to ensure these AI tools are used to make employee jobs easier and more desirable instead of replacing them, expect unionization and strikes like those in the entertainment industry to become more common.
Internal Sabotage
Workers may be asked to train the AIs they work with, but if they believe they will eventually be replaced by the AI once the work is done, a considerable number of them are likely to sabotage the effort. I have seen this play out in years past when someone who has been notified they are going to be let go is asked to first train their replacement. In that case, they rarely do a good job and instead mistrain or poorly train their replacement.
There are already reports that AI tools such as ChatGPT have suffered a massive degradation in math accuracy. Although researchers argue that this resulted from performance improvements that adversely impacted accuracy, it is at least possible that it also resulted from deliberate mistraining by workers determined that the AI not take their jobs. One of the researchers is quoted as saying, “These are black box models, so we don’t actually know how the model itself, the neural architectures, or the training data has changed.” Discussions of corrupted AI solutions are increasing, with intentional sabotage identified as one of the major causes.
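To make the mechanism concrete, here is a minimal sketch (my illustration, not anything from the article or the cited research) of how deliberate mistraining can degrade a model: flipping a fraction of training labels on a toy dataset measurably hurts held-out accuracy. The dataset, model, and 25% flip rate are all illustrative assumptions.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Build a toy binary classification problem.
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Baseline model trained on clean labels.
clean = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# "Sabotaged" copy: silently flip 25% of the training labels
# (the flip rate is an illustrative assumption).
rng = np.random.default_rng(0)
flip = rng.random(len(y_train)) < 0.25
y_bad = np.where(flip, 1 - y_train, y_train)
poisoned = LogisticRegression(max_iter=1000).fit(X_train, y_bad)

# The poisoned model scores noticeably worse on the same held-out data.
print(f"clean accuracy:    {clean.score(X_test, y_test):.3f}")
print(f"poisoned accuracy: {poisoned.score(X_test, y_test):.3f}")
```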
Lower Retention
Finally, if employees are convinced they will be replaced by AI, they are more likely to either “quiet quit” or leave the company altogether. Retaining employees who don’t trust management is difficult in any case, and you may be better off if such an employee leaves: disgruntled employees who do not trust management can create problems over and above retention, including acting out and violence. Both times I was close to an employee-violence event, it was triggered by the employee (or the employee’s spouse) believing management was mistreating the employee.
One of the areas where AI is correctly focused is gauging employee sentiment so that retention improves rather than declines. If you don’t take care to assure employees that AI will not be used against them, retention will suffer and workplace violence will increase.
Recommendations
As we explore the use of AI, we must constantly communicate and reaffirm that these tools are not being used to replace employees but to supplement and help them. Only in areas where staffing has been inadequate, or where employee safety drives AI use, should AI be positioned as an alternative to employees. These statements and claims must be matched by subsequent actions, with the clear goal of rebuilding employee trust, which appears to be unusually low in certain segments.
If you want AI to work, employees need to want to make it successful and accurate. Otherwise, the employees won’t be helpful and may actively work against the effort. Fixing a broken AI training set is not a trivial task.
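As a rough illustration of why repairing a corrupted training set is hard, here is a sketch of one common heuristic: flag examples whose cross-validated predictions confidently disagree with their recorded labels, then route them for human review. The model, the 0.9 threshold, and the integer-label assumption are all mine, not the article’s.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_predict

def flag_suspect_labels(X, y, threshold=0.9):
    """Return indices of examples whose cross-validated prediction
    confidently disagrees with the recorded label (assumes integer
    class labels 0..k-1)."""
    y = np.asarray(y)
    proba = cross_val_predict(
        LogisticRegression(max_iter=1000), X, y, cv=5, method="predict_proba"
    )
    predicted = proba.argmax(axis=1)   # most likely class per example
    confidence = proba.max(axis=1)     # how sure the model is
    return np.where((predicted != y) & (confidence > threshold))[0]

# Hypothetical usage: suspects = flag_suspect_labels(X_train, y_train)
# Every flagged example must then be re-checked by a person, which is
# exactly the slow, expensive part of fixing a poisoned training set.
```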
In the end, generative AI is not a technology you can just drop on employees. It must be positioned as helpful. Otherwise, the strikes we are seeing in the media industry will just be the tip of a very nasty iceberg on our path to AI implementation.
About the Author
Rob Enderle is the president and principal analyst at the Enderle Group, where he provides regional and global companies with guidance on how to create a credible dialogue with the market, target customer needs, create new business opportunities, anticipate technology changes, select vendors and products, and practice zero-dollar marketing. You can reach the author via email.