Law firms are increasingly looking to use generative artificial intelligence tools as part of their working practices. Melanie O’Brien explores some of the key compliance risks and the steps you should take before adopting this technology


The use of artificial intelligence (AI) can offer significant benefits for both your business and your clients. However, it’s vital to weigh these benefits against the associated risks. When determining whether to implement or allow staff to use AI – especially generative AI – as part of your practice, you will need to undertake a detailed and thorough risk evaluation process. This will inevitably require testing of, and training on, your systems. As with any management or research tool, AI should be properly supervised throughout its use. 

Your firm remains responsible for your activities, and accountability cannot be delegated or outsourced to a third party, such as an IT team or the third-party provider of the software. It is therefore vital that, before deploying any AI system, you fully understand what the technology does and does not do – and most importantly, what the law is regarding its use.

Data processing: key legislation

UK GDPR and the Data Protection Act

The use of AI technology will more than likely include the processing of personal data, and this must be carried out in accordance with the UK General Data Protection Regulation (UK GDPR) and the Data Protection Act 2018 (DPA). 

A firm’s compliance considerations will involve assessing the risks to the rights and freedoms of data subjects. This applies to the use of AI, just as to other technologies that process personal data.

Although the UK GDPR does not explicitly refer to AI, it can affect the use of AI in many ways. The principles of the UK GDPR include lawfulness, fairness and transparency, purpose limitation, data minimisation, accuracy, storage limitation and rules on automated decision-making (ADM), all of which can pose challenges when utilising certain AI tools.

Current guidance from the Information Commissioner’s Office (ICO) points out that, while AI increases the importance of embedding data protection into a firm’s processes, the technical complexities of AI systems can make this more difficult. Certainly, it can be hard to ensure data processing is fair and transparent when you do not understand how data is processed or who else may have access to it. AI tools can also perpetuate bias that may stem from the training data or programming algorithms, creating a considerable risk of information being inaccurate. 

Use of AI may pose difficulties when dealing with data requests, too. Individuals have the right to know who is controlling their data, the right to access it and the right to seek erasure in some cases. Where personal data is processed using AI, these rights can become very difficult to fulfil. 

Article 22 of the UK GDPR currently prohibits decisions based solely on automated processing (that is, without any meaningful human intervention) that affect the individual legally or in a similarly significant manner. Such solely automated decisions are currently permitted only where the individual has given explicit consent, the decision is necessary for contract performance, or a substantial public interest justification is satisfied.

Many AI tools involve the making of automated decisions or profiling on the basis of personal data, but the complexity of such systems and their data sources can make it impossible for someone to make an informed and meaningful assessment of the accuracy of the generated output or decision, or to give data subjects sufficient information to enable them to give informed consent to its use.

Data (Use and Access) Act

The restrictions on ADM have led to significant uncertainty about how AI can lawfully be used. The UK government has recognised this legal uncertainty as a barrier to AI innovation and competition, and it has led in part to some recent legislative developments. 

While it does not replace existing data protection laws, the Data (Use and Access) Act 2025 (DUAA), which will come into force in June 2026, makes a number of amendments to the UK GDPR, DPA and other legislation. The changes include the removal of some of the wider restrictions on ADM in the UK GDPR, allowing firms to make solely automated decisions on non-special category data without consent in a wider range of situations, provided that appropriate safeguards are in place. These safeguards include requirements to inform individuals about the decisions being taken using ADM and to provide them with rights to make representations, obtain human intervention and contest the decision.  

A firm will have significantly greater scope to rely on ‘legitimate interests’ as its lawful basis for processing non-special-category data for ADM, providing greater flexibility than reliance on consent. This will not apply, however, if the processing is based entirely or partly on ‘special category data’ (including information on health, sexual orientation, political opinions, racial and ethnic origin or biometric and genetic information used for unique identification). In such cases, ADM decisions producing legal or similarly significant effects will still need to meet one of the existing conditions. 

The DUAA also seeks to impose stricter duties in relation to the processing of children’s data. AI tools used for this purpose will have to be adapted and operated in a manner that prioritises the safeguarding of children’s privacy and welfare.

While the DUAA will generally provide more regulatory freedom to utilise AI and adopt other ADM methods, this freedom is countered by a clear responsibility to fully observe the relevant safeguards, which will include staying abreast of any future regulatory guidance. To provide more certainty on how to deploy AI in ways that uphold people’s rights and build public confidence, the ICO (which will be known as the ‘Information Commission’, as part of other changes to be brought in by the DUAA) has committed to continuing to consult on an update to its ADM and profiling guidance, as well as proposing to develop a statutory code of practice on AI and ADM.

Data protection impact assessment

Whenever a firm introduces new technologies that involve a type of data processing likely to result in a high risk to individuals’ rights and freedoms, it is required by article 35(1) of the UK GDPR to conduct a data protection impact assessment (DPIA).

Graphic of an umbrella shielding a screen from rain that resembles circuit board lines

© omadoig@btinternet.com

The ICO’s opinion is that, in the vast majority of cases, the use of AI will involve the level of processing that triggers this legal duty. In any case, if you have a major project, such as the introduction of new technology, that involves the use of personal data, it is good practice to undertake a DPIA. 

You may already maintain a DPIA procedure, in which case you can follow this to complete one for the AI system you propose to trial or use. Your AI risk assessment should generate information about data protection risks, which could be used to help formulate your DPIA. 

You do not need to publish your DPIA or send it to the ICO, although there is a process for doing so should you require official guidance about how to manage a particular risk. However, you may benefit from publishing a version of the DPIA and/or AI risk assessment to demonstrate your compliance with transparency and accountability obligations.

Identifying and mitigating data protection risks

As part of your AI risk assessment and DPIA, your firm should seek to identify and minimise the data protection risks of using AI technology. The processing of personal data can only be undertaken where the firm has a lawful basis for doing so, and it must be conducted in accordance with data protection principles. You should: 

  • Ensure there is a lawful basis for processing personal data by means of the AI technology. 
  • Consider the extent to which personal data can be input into or moved onto the AI technology. 
  • Consider the extent to which restrictions should be imposed on how and when personal data can be input into or moved onto an AI system. 
  • Assess the extent to which generated output data contains, or could contain, personal data, and consider imposing necessary restrictions on communicating this outside of the firm. 
  • Consider the need to limit the exchange and use of personal data via data processing terms or agreements. 

You will also need to adhere to the following data processing principles.

  • Lawfulness, fairness and transparency: ensuring you handle personal data in a manner that people expect, and do not use it in ways that have unjustified adverse effects on them.
  • Accuracy: ensuring that personal data is accurate and, where necessary, kept up to date. 

Adherence to other principles – including purpose limitation, data minimisation and storage limitation – may need to be addressed as well.

Ensuring statistical accuracy

While not itself a data processing principle, you should be mindful of the need to evaluate and improve the ‘statistical accuracy’ of data generated by any AI system. Statistical accuracy refers to the accuracy of the AI system itself, and is distinct from the ‘accuracy’ principle of data protection law: it describes how often an AI system produces the correct answer, measured against correctly labelled test data. Improving the statistical accuracy of your AI system’s outputs is one of the factors to consider in ensuring compliance with the fairness principle. 
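To make the idea concrete, statistical accuracy in this sense is simply the proportion of a system's outputs that match a correctly labelled test set. A minimal sketch in Python (the labels and predictions below are invented purely for illustration):

```python
# Minimal sketch: measuring the statistical accuracy of an AI system's
# outputs against correctly labelled test data. The labels below are
# invented purely for illustration.

def statistical_accuracy(predictions, true_labels):
    """Proportion of predictions that match the verified labels."""
    if len(predictions) != len(true_labels):
        raise ValueError("predictions and labels must be the same length")
    correct = sum(p == t for p, t in zip(predictions, true_labels))
    return correct / len(true_labels)

# Hypothetical test set: the system's guesses vs. the verified answers
predictions = ["approve", "reject", "approve", "approve", "reject"]
true_labels = ["approve", "reject", "reject", "approve", "reject"]

print(statistical_accuracy(predictions, true_labels))  # 0.8
```

Real evaluation would use a much larger, representative test set, but the principle is the same: an agreed, measurable benchmark against which the system's outputs can be audited.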

Any AI system you use needs to be sufficiently accurate to ensure that any personal data generated by it is processed lawfully and fairly. In many cases, the outputs of an AI system are not intended to be treated as factual information about an individual. Instead, they are intended to represent a statistically informed guess about something that may be true about the individual now or in the future. To avoid such personal data being misinterpreted, you may need to ensure your records make clear where: 

  • data outputs are statistically informed guesses rather than facts 
  • the inference was based on inaccurate data, or the AI system used to generate it was statistically flawed in a way that may have affected the quality of the inference, and/or 
  • the processing of the incorrect inference may have an impact on an individual, meaning they may request the inclusion of additional information in their records to counter the incorrect inference.

Wherever possible, you need to ensure any factors that may result in inaccuracies in personal data are corrected and the risk of errors is minimised.

Complaints about AI use and data rights

Under the UK GDPR, individuals have numerous rights relating to their personal data, including the rights to information, access, rectification and erasure, as well as to the restriction of processing, data portability and objection. These rights apply wherever personal data is used during the development and operation of an AI model or tool, and may cover personal data:

  • contained in the training data
  • used to make a prediction during deployment (and the result of the prediction itself), or
  • contained in the AI model itself.

Use of AI may pose challenges when complying with these rights, and the ICO has warned firms against regarding such requests as unfounded or excessive just because they may be harder to fulfil. As part of your obligations as a data controller, you should factor into your risk assessment how you will protect individuals’ data rights and respond to requests when processing data using AI. 

Data subjects have a legal right to complain about your handling of their data, and these rights will be enhanced by the new DUAA provisions. Under the DUAA, your firm will need to take steps, such as providing an electronic complaints form, to help individuals who want to make complaints about how you use their personal data. Your firm will also need to acknowledge complaints within 30 days and respond to them “without undue delay”.

These requirements will need to be integrated into the firm’s risk and compliance processes, along with the mandatory safeguards to inform individuals about any significant decision made via ADM based on their personal data, and their rights to make representations, obtain human intervention and contest the decision. You may need to integrate measures to deal with complaints about the use of personal data in AI, and/or respond to an individual’s exercise of their rights about the use of ADM, into your existing complaints handling procedure.

Intellectual property

You must ensure that your proposed use of AI and any output data generated does not infringe on the intellectual property rights of another firm or individual. You may need to seek independent legal advice regarding the relevant obligations and restrictions. 

Generative AI creates content, but it is not ‘creative’. It analyses data from other sources and will draw on various types of content, irrespective of who that content may belong to or any rights to use that data. If you use this content in your own work, you will not want to open yourself up to accusations of plagiarism. Of course, many argue that AI is simply plagiarism on a large scale, and it is yet to be fully established who owns the intellectual property in outputs produced by AI. 

The government’s Copyright and artificial intelligence consultation, which closed in February 2025, explored the broader relationship between copyright and AI, and how best to balance AI innovation with the protection of intellectual property rights. One of its aims was to seek to address the uncertainties surrounding the use of copyright material to train AI models. 

While copyright featured in parliamentary debates prior to the enactment of the DUAA, the government opted not to impose statutory transparency requirements in relation to the use of copyrighted content. However, the DUAA imposes an obligation on the government to publish a progress statement and a full impact assessment report on proposals outlined in its consultation, which should then feed into its anticipated consultation response. No timeline has yet been given for when this may be published.

Until the legal position is clearer, you will want to be assured that you are not infringing on the intellectual property rights of another firm or individual when: 

  • researching, sourcing and trialling AI technology, including contributing to any training or trial periods 
  • using AI technology 
  • inputting data, and 
  • using generated output data.

Intellectual property ownership depends on many factors, including: 

  • the associated terms and conditions in any agreements or licences 
  • pre-existing intellectual property rights in the output data generated, and 
  • laws and regulations relating to intellectual property rights.

AI technology may have been trained on content that contains intellectual property rights belonging to others. With some professional systems designed specifically for legal use, you may be able to obtain a licence for use of the content in your legal work, although there may be restrictions on that use. In short, unless the intellectual property position is clear, you need to be mindful that it is likely your firm won’t hold all intellectual property rights to data generated by AI systems, so you should be very cautious about its use.

Confidentiality and disclosure

Rule 6.3 of both the SRA Code of Conduct for Solicitors, RELs, RFLs and RSLs and its Code of Conduct for Firms requires you to keep the affairs of current and former clients confidential, unless disclosure is required or permitted by law or the client consents. This is not simply limited to the duty not to communicate such information to a third party but is a wider duty not to misuse it – that is, not to make any use of it or cause any use of it other than for the client’s benefit, and not without their consent. 

When providing prompts or training an AI tool with input data, you must be very mindful of how you use confidential or sensitive data. The information you upload should also not be privileged, as using it in this way may waive privilege. Even with professional-grade systems, you may not be able to control what happens to the data you input. Generative AI tools usually work by users inputting a prompt and the AI following up with a response, which may draw upon the original text input as well as text from its knowledge bank or external resources. In many cases, the data you input will be used to train the system to improve its knowledge bank, and it may potentially be used in other outputs, not just for your firm. AI is also susceptible to cyber-attacks, and hackers could gain access to sensitive information. 

You will need to ensure that you introduce, train and operate AI technology in a way that protects confidentiality. Once data has been entered into the system, it has left the confines of your firm and, to some degree, is outside of your control. In particular, you should: 

  • Be mindful of the need to avoid potential breaches of confidentiality when moving data to an online system.
  • Take care to avoid any breaches of client confidentiality when moving information between your firm and the AI provider. 
  • Consider the extent to which restrictions should be imposed on how and when confidential information can be input into an AI system. 
  • Assess the extent to which generated output data could contain confidential information, or allow it to be extrapolated, and consider imposing necessary restrictions on communicating this outside of the firm. 
  • Assess access to confidential information belonging to another organisation (such as a business partner or subsidiary) or another individual (such as an employee or consultant) and consider the need to put in place restrictions on the exchange and use of that information (for instance, by means of confidentiality and/or data processing agreements).

You may wish to consider using synthetic data when training a system. Synthetic data is artificial data generated by data synthesis algorithms that follows the same patterns as actual client data but without reproducing any real client’s information. You should review the ICO’s information on this before creating and inputting any synthetic data. 
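The core idea can be illustrated with a deliberately simplified sketch: fit the broad statistical shape of a real dataset, then sample new values from it. Real data-synthesis tools are far more sophisticated, and the "client ages" here are invented for illustration only:

```python
# Simplified sketch of data synthesis: generate artificial records that
# follow the same statistical pattern as real client data (here, a
# hypothetical age distribution) without reproducing any actual value.
# Production synthesis tools model far richer structure than this.
import random
import statistics

# Hypothetical "real" client ages (invented for illustration)
real_ages = [34, 41, 29, 55, 47, 38, 62, 44]

mean = statistics.mean(real_ages)
stdev = statistics.stdev(real_ages)

random.seed(0)  # reproducible output for this sketch
synthetic_ages = [round(random.gauss(mean, stdev)) for _ in range(8)]

# Same broad distribution, but no value is traceable to a real client
print(synthetic_ages)
```

Note that naive approaches like this can still leak information about small datasets; hence the recommendation to consult the ICO’s guidance before relying on synthetic data in practice.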

All firms should consider implementing a policy that restricts the inputting of confidential and sensitive data into any generative AI system. Certainly, this will need to be a major factor in your AI risk assessment process when determining how staff may use a particular AI tool and the extent to which any limits should be imposed.
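A policy of this kind can be backed up by a simple technical control. The sketch below, with illustrative patterns only (an email address and a UK National Insurance number format), shows how a prompt might be screened before it is ever sent to a generative AI tool; it is a minimal example of the approach, not a complete safeguard:

```python
# Minimal sketch of a pre-submission filter: block prompts that appear
# to contain sensitive identifiers before they reach a generative AI
# tool. The patterns and the blocking rule are illustrative only.
import re

# Hypothetical patterns: email addresses and UK National Insurance numbers
SENSITIVE_PATTERNS = [
    re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),                       # email address
    re.compile(r"\b[A-Z]{2}\s?\d{2}\s?\d{2}\s?\d{2}\s?[A-D]\b"),  # NI number
]

def prompt_is_safe(prompt: str) -> bool:
    """Return False if the prompt appears to contain sensitive data."""
    return not any(p.search(prompt) for p in SENSITIVE_PATTERNS)

print(prompt_is_safe("Summarise the law on easements"))          # True
print(prompt_is_safe("Draft a letter to jane.doe@example.com"))  # False
```

Pattern matching will never catch every disclosure (it cannot recognise, say, an unredacted case narrative), so a filter like this complements, rather than replaces, staff training and clear policy limits.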

Conclusion

Although AI may be a good starting point, it is vital that the generated outputs are checked for inaccuracy or bias, or you will run the risk of producing advice based on incorrect information. Generative AI may produce content that appears plausible and authoritative but is actually inaccurate. Not least, you should ensure that the legal basis or jurisdiction on which an output rests is correct for the given legal matter. Many AI systems, including ChatGPT, have been trained using precedents and case law from the US, and will therefore often produce outputs that are inaccurate for legal matters in the UK. 

Data outputs produced by generative AI must always be double-checked for accuracy, especially when they will be used in the context of legal work. Your clients have instructed your firm to provide accurate advice and act in their best interests. While AI may reduce costs and increase efficiency, you remain responsible for offering clients a professional service and ensuring your advice is reliable. If you fail to deliver accurate information, and this causes the client loss or damage, you will be fully liable, and it will be no defence to suggest that the fault ultimately lies with the software or AI system you used.