AI – particularly generative artificial intelligence – needs data. The more data it can access, the better. However, there are risks to address when using AI with personal data.
The risks
Data leaks: Bad actors or hackers could breach your cyber security. They could gain access to all personal data stored in the AI system.
Data poisoning: The data could be compromised – intentionally by a hacker or accidentally by staff – leading to incorrect results. If data is no longer accurate, this is a potential breach of UK GDPR.
Algorithm corruption: A hacker might corrupt the underlying AI algorithm. This might result in inaccurate handling or processing of personal data.
Misuse of data: Personal data used to train AI models can be repurposed for other uses without the individual’s knowledge or consent.
Profiling: UK GDPR gives individuals the right not to be subject to decisions based solely on automated processing, including profiling, where those decisions have legal or similarly significant effects. This could restrict your use of AI for automated decision-making.
Discrimination: An AI tool can inadvertently learn and perpetuate biases in the training data, particularly if your dataset is not broad enough. The AI algorithm itself could also be flawed. Left unchecked, this could result in unfair treatment or discrimination.
Mitigating the risks
- Transparent policies: You should update your privacy notice to explain how the personal data you collect will be used in AI systems. You should also implement an internal policy guiding workers on how to use AI. This clarity and transparency can be crucial for compliance.
- Data minimisation: Collect and retain only the personal data that is necessary for your purpose. This will help ensure that you feed only relevant data into your AI tool.
- Encryption: You should encrypt personal data at rest and in transit where possible, reducing the impact of any leak and helping to detect tampering.
- Audit AI: You should put in place checks and balances to confirm the AI is being used as expected.
- Ethics: You should ensure you are using the AI in an ethical way, not favouring or discriminating against any particular group.
- Human oversight: It may seem odd to recommend human oversight of a tool designed to automate work. However, at this early stage of AI adoption, it is key. Microsoft’s Copilot and Tesla’s Autopilot, for example, both require human oversight.
- Contract: Check your contract with the AI provider. Identify what safeguards they provide. Pay special attention if the AI tool is run or hosted externally to your business. Does the provider retain a copy of your data to train their AI? Is personal data held in a jurisdiction whose laws don’t provide protection equivalent to UK GDPR?
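Data minimisation can be put into practice at the point where records enter an AI pipeline. The following is a minimal Python sketch, not a compliance solution: the field names, the allow-list, and the `pseudonymise` helper are all hypothetical, and the salted hash is a simple illustration of pseudonymisation rather than a full anonymisation technique.

```python
import hashlib

# Hypothetical allow-list: only the fields the AI tool genuinely needs.
ALLOWED_FIELDS = {"age_band", "region", "purchase_category"}

def minimise(record: dict) -> dict:
    """Drop every field not on the allow-list before the record reaches the AI tool."""
    return {k: v for k, v in record.items() if k in ALLOWED_FIELDS}

def pseudonymise(value: str, salt: bytes) -> str:
    """Replace a direct identifier with a salted one-way hash (illustrative only)."""
    return hashlib.sha256(salt + value.encode("utf-8")).hexdigest()

record = {
    "name": "Jane Doe",           # direct identifier - not needed for training
    "email": "jane@example.com",  # direct identifier - not needed for training
    "age_band": "35-44",
    "region": "North West",
    "purchase_category": "books",
}

clean = minimise(record)
print(clean)  # the name and email fields have been dropped
```

If a stable per-person key is still needed (for example, to link records), `pseudonymise` could supply one without exposing the raw identifier, provided the salt is kept secret and the mapping is documented in your records of processing.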
If you make the decisions about use of the personal data, you are the “controller” under UK GDPR. You are responsible for ensuring you don’t infringe data protection legislation. Failing to ensure compliance could result in enforcement action against you. It could also lead to a fine by the ICO or another regulatory body.