This article was published in TSE Mag, the TSE science magazine, as part of the Autumn 2023 issue dedicated to “The World of Work”.
The increasing integration of Artificial Intelligence (AI) into the workplace means that machines will at some point become colleagues we delegate tasks to, cooperate with, and report to. These relations will present psychological and ethical challenges as we start to map our way into the workplaces of the future.
Many workers are already delegating tasks to machines. This can be a positive experience: AI can handle repetitive or time-consuming work, delivering productivity gains and freeing humans to focus on tasks that require their special touch. The public release of ChatGPT made this experience available to everyone, but it also exposed our uncertainty about the ethics of relying on machines we do not understand to perform tasks we are not sure they can handle. Without transparency about which exact task was delegated, which instructions were given, and how much of the outcome is AI output, mistakes may be made without a clear sense of who is responsible. AI may also perform tasks using questionable means, without proper supervision.
Trust and cooperation
Cooperating with AI in the workplace can create unconventional team dynamics: employees must learn to trust that AI systems will help rather than compete with them, and learn, in turn, to assist AI systems with their goals. Trust and cooperation between humans rely on social norms, appropriate incentives, and evolved emotions, but all of these building blocks are missing when people cooperate with machines. First, there are no consensual norms about whether it is socially desirable to trust and help machines. Second, machines do not care about incentives the way humans do; people cannot assume, for example, that machines are motivated to cooperate by financial rewards. Third, we do not empathize with machines as we do with other humans, which removes a powerful motivation to help them or care about their goals.
Reporting to a machine
Having AI as a manager is a new and potentially disturbing idea for many. AI may be better than humans at making unbiased decisions and optimizing team performance, but employees may struggle with the idea of reporting to a non-human entity and develop feelings of alienation or disconnection. AI managers may themselves struggle to provide emotional support, show empathy, and navigate complex interpersonal conflicts. So far, we have little data to help us predict the future of AI management, and what data we do have can be contradictory. For example, employees with no experience of AI management are typically averse to the idea, but this aversion may be driven by distorted media coverage or fictional representations. Employees with experience of AI management, in contrast, are more positive about it, but their responses may be biased by pressure to speak favorably of their company, or by the possibility that companies that adopt AI management are precisely those in which it is least problematic.
FURTHER READING
- Humans feel too special for machines to score their morals, Zoe A. Purcell and Jean-François Bonnefon, 2023.
- Humans judge, algorithms nudge: The psychology of behavior tracking acceptance, Roshni Raveendhran and Nathanael J. Fast, 2021.