Artificial Intelligence is rapidly transforming our lives, in particular how we communicate, shop and work. The driving force behind this rapid emergence is our personal data.
AI analyses our data and knows what we watch, what we listen to, the search engines we use and the online purchases we make. Its aim is to pinpoint our interests and tailor the services it offers us accordingly, for example by suggesting new programmes to watch on streaming services. Of course, this ultimately makes our experience more enjoyable, but at what cost? AI tools are trained only on the information that has been fed into them. As these tools get smarter and faster, the question arises: how much of my information do AI tools store and use?
What are the possible risks in using AI tools?
AI systems are fed by data and are therefore only as good as the information they receive and process. From search histories, location and voice recognition to biometrics, all of this information is used by AI tools to learn and grow. However, questions remain as to how much information is being used, to what end, and what risks are involved.
Although both consumers and businesses can generally see the benefits of using AI tools, such as increased efficiency, cost savings, tailored recommendations and risk management, there are a substantial number of risks to keep in mind.
Compliance
Businesses must comply with many laws and regulations. In particular, the UK GDPR sets out rules and principles for processing personal data. The key principles are[1]:
- Lawfulness, fairness and transparency
- Purpose limitation
- Data minimisation
- Accuracy
- Storage limitation
- Integrity and confidentiality
- Accountability
This is a pressing risk for businesses, as failure to comply with the UK GDPR can lead to fines, investigations and, of course, reputational damage.
Data breach
AI tools have access to and process substantial amounts of data, some of it highly sensitive, including medical records, personal details and financial information. Unfortunately, the advancement of AI technologies has brought with it an increase in cybersecurity threats: AI models can be attacked just as easily as other systems, and sensitive data can be exposed.
However, threats do not necessarily come from outside. Some AI models have been shown to retain information, which may lead to unintentional data leaks. It is therefore crucial to have proper safeguards in place before feeding information into an AI tool.
Individual’s consent
Under the UK GDPR, businesses must have a lawful basis, such as an individual's consent, for collecting their data, and must ensure that such data is used only for a set purpose that is known to and agreed by the individual. More than that, individuals have a right to know what kind of data is being collected, why it is being collected and how it will be used. Where consent is relied upon, it must be clear and informed before data is collected and stored.
Businesses must also ensure that the collected data is being used for the intended purpose and not fed into an AI model without proper consent.
Reputational damage
A large part of a business's success rests on its reputation and the trust its customers place in it. Consumers are conscious of what they divulge, and it takes only one mistake, whether a misuse of information or AI bias at a customer's expense, for word to spread that data has not been processed safely. The inevitable result is lost customers and a damaged reputation.
Surveillance and bias
Just like HAL in 2001: A Space Odyssey, AI is a digital eye watching everything. It listens and learns from what is fed into it, which in a way makes it evolve. Unfortunately, AI models often carry bias which, in the context of surveillance, can cause irreparable damage through wrongful profiling. It would also be a misconception to think that AI collects data only when an individual uses their phone: AI is used through security cameras on public streets and at airports, and through tracking cookies on someone's phone.
So, the question now is what, if anything, is being done about these risks.
The EU’s AI Act and the UK’s approach
The EU's AI Act, which came into force on 1 August 2024, aims to establish a framework for artificial intelligence across the EU and to "foster responsible artificial intelligence development and deployment in the EU"[2].
In the UK, the Government adopted a “pro-innovation” approach based on five principles[3]:
- Safety, security and robustness;
- Appropriate transparency and explainability;
- Fairness;
- Accountability and governance; and
- Contestability and redress.
These are to be implemented by the various regulators in their respective sectors using existing laws.
In conclusion, AI is a technological revolution and will no doubt keep evolving and helping in many ways. Inherently, it is not meant to be a scary tool, albeit if left unchecked, it could become an invasive tool. It is important for individuals to understand what their rights are and how to implement them. It is also important for businesses to understand their obligations and how to safely collect and process customers’ data.
Our Commercial Team is widely experienced in data protection and would be available to assist your business however we can.
[1] A guide to the data protection principles | ICO
[2] AI Act enters into force - European Commission
[3] The UK’s framework for AI regulation | Deloitte UK
The information provided in this article is provided for general information purposes only, and does not provide definitive advice. It does not amount to legal or other professional advice and so you should not rely on any information contained here as if it were such advice.
Wright Hassall does not accept any responsibility for any loss which may arise from reliance on any information published here. Definitive advice can only be given with full knowledge of all relevant facts. If you need such advice please contact a member of our professional staff.
The information published across our Knowledge Base is correct at the time of going to press.