Artificial Intelligence (“AI”) has numerous benefits, but it also comes with risk. As with any new technology, it is important that, in the rush to start using AI, you understand and manage those risks.
What is AI?
Legislators are busy working out how to define AI so that they can legislate in this area, and these definitions are not without controversy. In a nutshell, though, AI works by using algorithms and models to process data, learn from it, and make decisions or predictions.
It enables machines to mimic human intelligence to some extent. Current examples include ChatGPT for natural language processing, Siri and Alexa for voice recognition, and Tesla’s Autopilot for assisted driving. AI can enhance efficiency and decision-making.
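For readers who want a concrete picture of that loop, here is a minimal sketch in Python: a model “learns” a straight line from a handful of made-up data points and then predicts an unseen value. Real AI systems use vastly larger models and datasets, but the pattern of processing data, learning from it and predicting is the same.

```python
# A minimal sketch of the "learn from data, then predict" loop that
# underpins most AI systems. All numbers here are made up for illustration.

# Historical observations: hours of study vs. exam score.
data = [(1, 52), (2, 55), (3, 61), (4, 66), (5, 70)]

# "Learning": fit a straight line (score = a * hours + b) by least squares.
n = len(data)
sum_x = sum(x for x, _ in data)
sum_y = sum(y for _, y in data)
sum_xy = sum(x * y for x, y in data)
sum_xx = sum(x * x for x, _ in data)
a = (n * sum_xy - sum_x * sum_y) / (n * sum_xx - sum_x ** 2)
b = (sum_y - a * sum_x) / n

# "Prediction": apply the learned model to unseen input.
hours = 6
print(f"Predicted score after {hours} hours of study: {a * hours + b:.1f}")
```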
AI adoption has surged in recent years. As of early 2024, around 72% of organisations globally were using AI in some capacity. This is a large increase on previous years, and the trend looks set to continue. Many commentators are fearful of the direction of uncontrolled AI, but we are still some way from Terminator-style cyborgs!
What are the risks and how do you offset them?
There are many risks in using AI. It is important to understand them and to find the means to manage them.
- AI “washing”: Not every product marketed as AI genuinely is. Some are simply basic algorithms rebadged for buyers keen to adopt AI. If you’re looking to adopt AI, make sure you know what you’re getting: do your research and ensure you’re not paying extra for nothing.
- Bias, discrimination and fog: AI systems can inherit biases from poor training, insufficient data modelling or unrepresentative samples, which could lead to unfair treatment of certain groups. AI output trained on previous AI-generated output can exacerbate this, leading to greater distortion, or AI “fog”. You should seek to mitigate these risks by requiring your AI provider to use diverse and representative datasets for training its models and to audit them regularly for bias; a simple sketch of such an audit appears after this list. You should also check the AI tool’s terms of use to see what guarantees the provider gives about bias and diversity.
- IP and privacy: AI can process vast amounts of data, and without checks and balances in place this has already included proprietary, confidential and personal data. This presents risk to the original owner of that data, but also to users of output that is not free of encumbrances. We should all be restricting access to our data. Customers should also seek assurances from their AI provider that it follows strict data governance policies, keeps its datasets free of encumbrances and applies anonymisation techniques as appropriate; a minimal example of one such technique follows this list.
- Job replacement or displacement: Automation through AI could lead to job losses in certain sectors, affecting livelihoods. Some software developers estimate that AI could make them 30% more efficient, and some publishers are already using AI to write news items. The key question is whether this leads to a reduction in headcount or an opportunity to win more business. Employers should consider introducing programmes to reskill or upskill their workers. Educational and training institutions should direct their energies towards skills fit for the new market.
- Security threats: AI can be used maliciously, such as in creating deepfakes or automating cyber-attacks. The days of Cambridge Analytica using Facebook data to serve political adverts and manipulate voter intentions already seem very old-fashioned. It is therefore key to strengthen security by implementing advanced protocols to protect AI systems from malicious use. Businesses should ask their AI provider how it is addressing this.
- Transparency: Many AI systems operate as “black boxes”, making it difficult to understand how they make decisions, and it is hard to carry out due diligence on an AI tool if you cannot see or understand how it works. Users of AI should seek an explanation of how their chosen tool’s decision-making process operates, to better understand what is happening and to retain accountability; a short sketch of one common explainability technique appears after this list.
- Ethical issues: AI can be used in ways that raise ethical questions, such as interference with democracy, autonomous weapons or surveillance. You need to ensure your chosen AI tool adheres to ethical standards. Not only is this area likely to become regulated in the near future, but AI providers are also likely to use ethics as a differentiator in the marketplace.
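To make the bias audit mentioned above more concrete, the sketch below compares approval rates across two hypothetical groups and applies the widely cited “four-fifths” rule of thumb. The records, group labels and threshold are illustrative only; real audits use richer fairness metrics and real data.

```python
# A minimal sketch of one common bias audit: comparing approval rates
# across groups. The records and the "group" field are hypothetical.

records = [
    {"group": "A", "approved": True},  {"group": "A", "approved": True},
    {"group": "A", "approved": True},  {"group": "A", "approved": False},
    {"group": "B", "approved": True},  {"group": "B", "approved": False},
    {"group": "B", "approved": False}, {"group": "B", "approved": False},
]

# Tally totals and approvals per group.
rates = {}
for r in records:
    total, approved = rates.get(r["group"], (0, 0))
    rates[r["group"]] = (total + 1, approved + r["approved"])

for group, (total, approved) in rates.items():
    print(f"Group {group}: {approved / total:.0%} approved")

# A widely used rule of thumb flags concern if the lower rate falls
# below 80% of the higher rate (the "four-fifths rule").
rate_values = [approved / total for total, approved in rates.values()]
ratio = min(rate_values) / max(rate_values)
print(f"Ratio of lowest to highest rate: {ratio:.0%}"
      + (" - worth investigating" if ratio < 0.8 else ""))
```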
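On anonymisation, here is a minimal sketch of one common technique, pseudonymisation: direct identifiers are replaced with salted one-way hashes and unneeded fields are dropped before data reaches an AI pipeline. The record layout and salt are hypothetical, and note that pseudonymised data may still count as personal data under laws such as the GDPR if individuals remain re-identifiable.

```python
# A minimal sketch of pseudonymisation: replace direct identifiers with
# salted one-way hashes and drop fields the model does not need.
# The record layout and salt below are hypothetical.
import hashlib

SALT = "replace-with-a-secret-salt"  # kept separately from the data

def pseudonymise(record):
    cleaned = dict(record)
    # Replace the direct identifier with a salted one-way hash.
    cleaned["name"] = hashlib.sha256(
        (SALT + record["name"]).encode()
    ).hexdigest()[:12]
    # Drop fields that are not needed at all.
    cleaned.pop("email", None)
    return cleaned

record = {"name": "Jane Doe", "email": "jane@example.com", "score": 42}
print(pseudonymise(record))
```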
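Finally, on transparency, the sketch below illustrates one simple explainability technique, permutation importance: shuffle each input feature in turn and measure how much the model’s accuracy drops. The toy model and data are hypothetical; the point is that even a black-box tool can be probed from the outside.

```python
# A minimal sketch of permutation importance: shuffle one input feature
# at a time and see how much accuracy falls. Model and data are toy
# examples invented for illustration.
import random

random.seed(0)

# Toy model: approves if income is high enough relative to debt.
def model(income, debt):
    return income - 2 * debt > 10

# Hypothetical labelled examples: (income, debt, actual_outcome).
data = [(30, 5, True), (20, 8, False), (40, 10, True),
        (15, 1, True), (25, 12, False), (35, 6, True)]

def accuracy(rows):
    return sum(model(i, d) == y for i, d, y in rows) / len(rows)

baseline = accuracy(data)
for idx, name in [(0, "income"), (1, "debt")]:
    # Shuffle one feature column, leaving everything else intact.
    shuffled_col = [row[idx] for row in data]
    random.shuffle(shuffled_col)
    rows = [
        (v, d, y) if idx == 0 else (i, v, y)
        for v, (i, d, y) in zip(shuffled_col, data)
    ]
    print(f"Shuffling {name} drops accuracy by {baseline - accuracy(rows):.0%}")
```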
Use of AI can lead to great efficiencies and improvements. It is important to assess and address the risks proactively while we wait for legislation to take effect. Even then, remember that technology moves faster than lawmakers, so we all need to be aware of the issues.