

There is no doubt that artificial intelligence (“AI”) has already changed the landscape in many industries, with benefits including increased efficiency and productivity, cost savings and improved recruitment and retention, to name just a few.
That being said, the use of AI by businesses is not without its risk, particularly for the uninitiated. When the risks include data security concerns, potential job losses, and biased decision making, it is fair to say that the conversation around AI is often contradictory and polarising.
However, with 75% of employees already using AI, and 70% of them saying they would delegate as much of their work to AI as they could, the AI conversation is not one that employers can afford to avoid with their employees.
Managing AI in the workplace
1) Know the risks
The first step, when managing any risk, is making sure you have a clear understanding of where the risk lies. While this sounds obvious, the risk of doing nothing – particularly when it comes to new technology – is often forgotten. When it comes to AI, that risk is huge. In the longer term, as AI technology continues to improve and adoption increases across all sectors and industries, the risk of doing nothing will far outweigh the potential risks associated with using AI.
AI is rapidly reshaping the workplace. It offers powerful tools to streamline operations, enhance productivity, and drive innovation. However, it also introduces risks which, if not properly managed, can lead to ethical dilemmas, job displacement, and security threats.
So, on the basis that doing nothing is the biggest risk of all, the next step is to look at the actual risks, as they might apply to your business.
These include:
- Job displacement and workforce anxiety: AI-driven automation can replace certain functions, leading to potential job losses and reduced security for employees. Even where the need for humans is not actually reduced, 49% of employees currently believe that AI will eventually replace their job – which can in itself create resistance to AI adoption, lower morale and employee relations issues as a result of colleagues feeling insecure, devalued and expendable.
- Bias and discrimination: AI systems are built and often “trained” by humans, which means that those systems can be subject to human biases – be they conscious or unconscious. AI systems also learn from past outcomes and historical data – otherwise known as “machine learning” – which means the technology can then amplify existing prejudices and biases which may have been “trained in”. Finally, AI, without a human in the loop, cannot look beyond the data it has been given. That means that, unlike a human, AI cannot mitigate its own bias, nor can it apply context to the decision-making process in the way humans can.
- Data privacy and security: In order to function effectively, AI predominantly relies on large datasets which are input into the system by humans. Depending upon the purpose for which the AI is used, that data may include sensitive personal data, such as employee or customer data, or commercially sensitive information. The AI tool may store that information, potentially leading to various issues with data security, depending upon where the data is hosted by the system. In addition to storing data, AI tools may use any data they are given in order to train and may share that data with other users – again leading to significant data security and commercial risks if not properly managed.
- Over-reliance on AI and loss of human skill and judgment: While AI can undoubtedly improve efficiency and productivity, excessive dependence on automated processing and decision making can lead to errors. This is particularly the case in situations requiring empathy, creativity, contextual understanding or consideration of ethical standards. At a more basic level, AI is not infallible and, like any system, is only as good as the data it is given. As such, where there is an over-reliance on AI, particularly by those in what might be described as entry-level or junior roles, those employees will have less of an opportunity to build up their knowledge and experience over time. This can result in an element of de-skilling, which means they may not be able to spot when the results or calculations provided by AI are incorrect.
2) Understand how AI can be and is being used in your business
Once you understand the risks posed by AI, it is far easier to then consider how it can be used and harnessed in your business.
As mentioned above, 75% of employees are already using AI, with 50% of employees admitting to using their own AI tools. That means, whether you have authorised the use of AI as an employer or not, it is likely being used already by at least half of the workforce.
Not only is it crucial in mitigating risk to understand what AI tools are being used and how – but that information can also help inform a business’s policy with regard to AI adoption. If an employer can understand what tools its workforce is using and why, it will be better able to understand what tasks can be automated and how best to do that.
Speaking to employees about their existing AI usage in a non-critical way, and from a place of curiosity, will allow an employer to “take the pulse” of its workforce, providing a detailed insight into each employee’s day-to-day role, their motivations, their skills and where they themselves see their value lying within that role.
3) Establish an AI committee or super user group
As every good employer knows, often the people with the most understanding of day-to-day operations within a business are the employees themselves. It is likely that your employees will know, better than you, how AI might assist them in their roles and even how the business might benefit from AI adoption, within the employer’s risk management framework.
Without doubt, employees will be the end users of AI and, ultimately, if you want to roll AI out across the business, you will need employee buy-in in order to achieve full user adoption. User adoption is a tricky area of change management. As mentioned above, it is likely that 50% of your workforce are already using AI.
That means that 50% likely aren’t and, bearing in mind the aforementioned statistic that 49% of employees believe AI will replace their jobs, you can bet that a sizeable proportion of those not currently using AI are reluctant to do so.
Super users, also known as power users, are a representative sample of the AI user population – both current and future – who can be used by the business to help shape adoption while ensuring employee voices are heard. AI committees or super users take responsibility for AI in the workplace – whether that be by testing new AI tools before a wider roll out, helping inform AI policy, advising on safeguarding measures drawing on their own day-to-day experience or by helping to monitor compliance.
When putting together an AI committee or group of super users, employers should think holistically and creatively. Avoid pulling super users from one demographic, and embrace the multi-generational workforce in order to benefit from diversity of voice and experience.
4) Have an AI policy
Whatever stage an employer is at, when it comes to AI, policy is key. Employers should have a policy to explain what is and isn’t permitted, the safeguards that are in place and the procedures to be followed when it comes to AI. The policy should also set out how it interacts with other relevant policies, such as cyber security procedures, codes of conduct, privacy policies and Diversity, Equity and Inclusion policies.
It might be that a business is content for AI to be used in order to generate creative content, whilst the risk and benefit profile of adoption of further AI tools is considered by the business. On the other hand, it might be that the company has invested in developing its own AI systems to help to mitigate the risk of confidential data being leaked – but that the use of any other publicly available AI tools is forbidden.
Whatever an employer’s approach to the use of AI, a good AI policy creates a framework which reflects the business’s AI ethos and sets out clear parameters within which employees can operate.
5) Train your employees
Employers should not only train employees who are using AI on how to use the tools permitted by the business, but also on the risks involved in using this technology and how to mitigate those risks.
Businesses should also train those who may not use AI themselves but who are responsible for managing employees who do, or who are ultimately responsible for the business’s culture and deliverables – such as HR, senior leadership teams and directors – so that the risks can be properly managed.
In conclusion
AI is a powerful tool which, if managed effectively, can transform the workplace for the better. However, its adoption must be accompanied by a well-structured governance strategy that prioritises ethical use, employee well-being, and human oversight.
By implementing clear policies, training employees, appointing super users, and maintaining human involvement in decision-making, employers can harness the true potential of AI. The future success of AI in the workplace depends not just on the technology itself, but on how organisations implement and use that technology effectively, ethically and responsibly.