Practical Law Practice Compliance & Management, with Natalie Cooksey and Karim Nassar of Travers Smith LLP
Thomson Reuters offers practical guidance on deploying artificial intelligence (AI) safely in law firms, in line with the legal and regulatory obligations that now apply. Use cases for the most recent forms of AI in law firms are varied: firms may use AI for their own internal purposes (such as staff recruitment or appraisal processes) or on client matters. Any such use creates risk, and firms must ensure that their deployment of AI is safe for their business and clients. This article sets out the key overarching compliance issues that firms should consider and will be of particular interest to compliance professionals. It does not cover wider legal considerations such as employment law or product development issues.
Regulatory and legislative landscape
Most firms will already be aware of the need to comply with data protection legislation, and individual practitioners must observe their fiduciary obligations under the common law. The legislative and regulatory position is likely to evolve as further developments and guidance specific to AI systems emerge from the government and other public bodies such as the Information Commissioner’s Office (ICO). For now, however, the key compliance obligations for Solicitors Regulation Authority (SRA)-regulated law firms are set out in the SRA Standards and Regulations, which already provide a regulatory framework applicable to law firm use of AI.
The SRA Principles are the fundamental tenets of ethical behaviour that the SRA expects regulated firms and individuals to uphold in all aspects of their practice and service delivery. Most of the rules relevant to AI governance fall under the SRA Codes of Conduct, but all the SRA Principles should be borne in mind. For example, Principle 7 (“you act in the best interests of each client”) is central to decisions about the use of technology: to comply with it, firms will need appropriate governance, systems and controls in place to ensure they are using any such technology responsibly.
The SRA Code of Conduct for Firms and Code of Conduct for Individuals provide parameters for using AI both in running a legal business and in providing legal services to clients. For example, the Code for Firms includes requirements on business systems, record keeping and risk management (rule 2) and standards of service (rule 4), while both Codes contain rules on maintaining trust and acting fairly (rule 1) and on confidentiality and disclosure (rule 6).
Governance, monitoring and controls
The core regulatory requirements for a firm’s governance, monitoring and controls of its use of AI systems are set out in the SRA Code of Conduct for Firms, which provides that firms must:
- have effective governance structures, arrangements, systems and controls in place that ensure they comply with all the SRA’s regulatory arrangements, as well as with other applicable regulatory and legislative requirements (rule 2.1(a))
- keep and maintain records to demonstrate compliance with the firm’s obligations under the SRA’s regulatory arrangements (rule 2.2)
- identify, monitor and manage all material risks to the business (rule 2.5)
- ensure the service provided to clients is competent, timely and appropriate to clients’ needs (rule 4.2)
- ensure that staff are competent to carry out their role and keep their knowledge and skills, including their regulatory and ethical understanding, up to date (rule 4.3), and
- have effective systems for supervising client matters (rule 4.4).
When developing or implementing any new AI or legal technology system, the firm should build in measures to ensure compliance with these obligations and (pursuant to rule 2.2) carefully document the steps taken to do so. Measures should include:
- putting in place appropriate leadership and oversight
- undertaking thorough risk and impact assessments
- creating appropriate policies, procedures and controls
- ensuring thorough training and awareness, and
- monitoring and evaluating performance and usage to avoid unintended consequences.
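By way of illustration, the sketch below shows one way a firm might structure the records contemplated by rule 2.2 and the monitoring measure above: a simple log entry capturing who used an approved AI system, for what purpose, and whether the output was verified and supervised. The `AIUsageRecord` structure and its field names are hypothetical assumptions, not SRA requirements.

```python
# Illustrative only: a minimal, hypothetical record a firm might keep to
# evidence oversight of AI usage (supporting rule 2.2 record keeping and
# rule 4.4 supervision). Field names are assumptions, not SRA requirements.
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class AIUsageRecord:
    matter_ref: str        # client matter (or "internal" for business use)
    tool: str              # which approved AI system was used
    purpose: str           # e.g. "first-draft summary of disclosure documents"
    user: str              # staff member responsible for the use
    reviewed_by: str       # supervisor who checked the output
    output_verified: bool  # human verification of accuracy before use
    issues_noted: str = "" # e.g. suspected bias or inaccuracy in the output
    timestamp: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc)
    )

# Example entry: a supervised, verified use on a client matter.
record = AIUsageRecord(
    matter_ref="MAT-2024-0193",
    tool="ApprovedDraftingAssistant",
    purpose="summarise disclosure documents for internal review",
    user="a.associate",
    reviewed_by="s.partner",
    output_verified=True,
)
print(record)
```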
Assessing risks and opportunities
Various resources are available to help firms identify, assess and manage the risks and opportunities that AI use poses to the business. Examples include:
- The Law Society’s guidance Generative AI – the essentials, which may be of particular interest to small and medium firms and to in-house practitioners.
- Microsoft’s responsible AI impact assessment template (Microsoft: Responsible AI Impact Assessment Template (June 2022)). This includes a section for capturing any potential system benefits and harms, and sections to record details of human oversight and control and how the system will be used to inform decision making.
- The National Institute of Standards and Technology AI Risk Management Framework, which links to a playbook setting out suggested actions for achieving outcomes in relation to governance, mapping, measurement and management of AI functionality (NIST: AI Risk Management Framework and NIST: AI RMF playbook).
- For data protection considerations, the French Data Protection Authority (CNIL) self-assessment guide for AI systems, covering topics such as: “Asking the right questions before using an artificial intelligence system”, “Collecting and qualifying training data”, and “Achieving compliance” (CNIL: Self-assessment guide for artificial intelligence (AI) systems).
Development
Firms with sufficient resources may consider developing an AI system in-house, giving them control over the data used by the system and over the development and testing process. Firms developing in-house should keep careful records of the governance procedures followed.
Although adopting a third-party proprietary AI system may not provide the specificity achievable through bespoke in-house development, using a reliable proprietary system can reduce many of the risks involved in AI adoption.
Pilots and testing
Whether an in-house or proprietary AI system is adopted, some firms have taken a “bottom up” approach: staff are given access to the system and left to decide how best to use it, so that the firm’s approach evolves organically. Although users may prefer this individualistic approach, it can be difficult to monitor and control from a governance perspective. A better approach may be to permit access on a controlled pilot basis to specified user groups assigned to investigate specific issues. Feedback from a pilot will help determine the use cases for which the AI system works well (generating time and cost savings for clients) and those where it performs less well or carries higher risk, helping senior management to focus further adoption or development efforts appropriately.
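By way of a purely illustrative sketch, a firm might collect structured feedback from pilot participants and aggregate it by use case to show where the system performs well and where outputs needed substantial correction. The use cases, ratings and field names below are hypothetical.

```python
# Illustrative only: aggregating hypothetical pilot feedback by use case to
# highlight where an AI system works well and where it carries higher risk.
from collections import defaultdict
from statistics import mean

# Each pilot participant rates a use case for quality (1-5) and records
# whether the output needed substantial correction before it could be used.
feedback = [
    {"use_case": "contract summarisation", "quality": 4, "needed_correction": False},
    {"use_case": "contract summarisation", "quality": 5, "needed_correction": False},
    {"use_case": "case law research", "quality": 2, "needed_correction": True},
    {"use_case": "case law research", "quality": 3, "needed_correction": True},
]

by_use_case = defaultdict(list)
for entry in feedback:
    by_use_case[entry["use_case"]].append(entry)

# Report average quality and correction rate per use case for senior management.
for use_case, entries in by_use_case.items():
    avg_quality = mean(e["quality"] for e in entries)
    correction_rate = sum(e["needed_correction"] for e in entries) / len(entries)
    print(f"{use_case}: avg quality {avg_quality:.1f}, "
          f"correction rate {correction_rate:.0%}")
```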
Implementation
Once the firm has decided on its AI strategy, it should build compliance into its implementation programme, including:
- Implementing a structured quality assurance programme, both before and after roll-out, so that the AI system can be refined as needed. This should include monitoring whether any output generated by the system is influenced by implicit biases present in the data (one simple check is sketched after this list).
- Ensuring that staff receive training on the system before use, including appropriate warnings about the risk of biased or inaccurate outputs.
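One simple check such a quality assurance programme might run, sketched below on invented data, is to compare the rate of favourable outcomes across groups in the system’s outputs (for example, in a recruitment-screening use case) and flag large disparities for human review. The 0.8 threshold reflects the commonly cited “four-fifths” rule of thumb rather than any legal test, and all names and data are assumptions.

```python
# Illustrative only: a simple disparity check on an AI system's decisions
# (e.g. recruitment screening), comparing favourable-outcome rates across
# groups. The 0.8 threshold is a rule of thumb, not a legal test.
def selection_rates(decisions):
    """decisions: list of (group, favourable: bool) pairs."""
    totals, favourable = {}, {}
    for group, outcome in decisions:
        totals[group] = totals.get(group, 0) + 1
        favourable[group] = favourable.get(group, 0) + int(outcome)
    return {g: favourable[g] / totals[g] for g in totals}

def flag_disparity(decisions, threshold=0.8):
    """Flag groups whose favourable-outcome rate falls well below the best."""
    rates = selection_rates(decisions)
    best = max(rates.values())
    return {g: r for g, r in rates.items() if best and r / best < threshold}

sample = [("A", True), ("A", True), ("A", False),
          ("B", True), ("B", False), ("B", False)]
print(selection_rates(sample))  # per-group favourable-outcome rates
print(flag_disparity(sample))   # groups below 80% of the best rate
```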
Governance structures
When designing appropriate governance and monitoring structures, good practice includes:
- Appointing a senior individual to have overall oversight of the use of the AI system. The SRA has said that it expects compliance officers for legal practice (COLPs) “to be responsible for regulatory compliance when new technology is introduced” (SRA Innovate: Compliance tips for solicitors).
- Setting up a committee with responsibility for training staff and monitoring usage. Membership should comprise senior stakeholders and technical system experts.
- Carrying out regular audits to assess functionality and effectiveness, and to identify any concerning outcomes or emerging risks.
- Where specific risks are identified, ensuring these are reflected in the firm-wide risk assessment and risk register.
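For illustration only, an AI-specific risk might be captured in the firm-wide risk register along the following lines; the structure, scoring scale and field names are assumptions rather than any prescribed format.

```python
# Illustrative only: a hypothetical AI-specific entry in a firm-wide risk
# register, with fields a regular audit might review. The likelihood and
# impact scales (each 1-5) and the derived score are assumptions.
risk_register_entry = {
    "id": "AI-007",
    "description": "Generative drafting tool produces plausible but "
                   "incorrect citations in client advice",
    "owner": "COLP",
    "likelihood": 3,  # 1 (rare) to 5 (almost certain)
    "impact": 4,      # 1 (negligible) to 5 (severe)
    "controls": [
        "mandatory human verification of all citations before release",
        "quarterly audit of a sample of AI-assisted outputs",
    ],
    "review_date": "2025-01-31",
}

# A simple derived score helps prioritise risks for senior management.
risk_register_entry["score"] = (
    risk_register_entry["likelihood"] * risk_register_entry["impact"]
)
print(risk_register_entry["id"], "score:", risk_register_entry["score"])
```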
Governance structures should be agile, so that the firm can respond quickly as regulatory requirements and competitive challenges evolve.