
AI and Investment Funds

The recent rapid growth and proliferation of generative Artificial Intelligence (AI) brings with it new risks (and opportunities) for the Financial Services (FS) sector. Regulated entities, including funds and fund service providers, should be preparing now.

AI will introduce new regulatory obligations and shareholder considerations, and will affect, for example, risk assessments, governance frameworks, sustainability assessments, data protection and anti-money laundering compliance.

The new EU Artificial Intelligence Act (AI Act) will introduce a new regulatory framework and obligations on both developers of AI systems and their users, for example funds and/or their service providers. It will serve as a framework around which organisations can base their AI regulatory policies in areas such as transparency, accuracy, risk management, data governance and human oversight.

The impact of AI on FS entities is already being felt and is attracting increasing regulatory attention, and FS entities should take action now to address the bearing of AI on their operations and stakeholders. For instance, in October 2022 the UK Financial Conduct Authority and the Bank of England published a report on the state of machine learning (ML) in the UK financial services sector. The report highlighted the growing use of ML applications across the FS sector and suggested that the largest changes over the coming years are likely to be in the investment and capital markets sector, which has the largest proportion of ML applications in test stages. More recently, in February 2023, ESMA published an article on AI in EU securities markets, noting that the use of AI in finance is under increasing scrutiny from regulators, who are beginning to develop AI-specific governance principles and guidance for FS firms.

The deployment of AI in the FS sector presents numerous legal challenges. It is critical that FS entities navigate these challenges in collaboration with legal and technology experts, ensuring the innovative use of AI while remaining compliant with the EU regulatory landscape.

New Regulatory Obligations

The main piece of governing regulation will be the new AI Act proposed by the European Commission, the first law on AI by any major regulator. The AI Act will regulate providers of AI systems and entities making use of them in a professional capacity. Following the European Parliament’s adoption of its negotiating position on 14 June 2023, the AI Act is expected to come into effect before June 2024, with a two-year transition period, and is expected to become a global standard in the field.

Entities will need to be aware of their responsibilities and obligations under the AI Act, including evaluating the risk level of their AI systems, conducting risk assessments, and implementing the transparency, accountability and robustness measures needed to meet the AI Act’s obligations.

Risk Assessments

While the use of AI technologies may offer firms opportunities in automation, productivity, and efficiency, it can also bring risks.

Under the AI Act, all operators will be required to make best efforts to develop and use AI systems in accordance with principles such as human oversight, privacy and data governance, and social and environmental well-being. Developing a Responsible AI Framework and incorporating AI risks into risk registers and governance practices will become critical to verifying that an AI system is employed properly and ethically on behalf of shareholders.

Risks associated with AI systems can be difficult to assess, particularly given their potential for unpredictability and the complex risks associated with ML models. Risks may include, for example, algorithmic risks (the risk that the AI system behaves unpredictably or makes poor decisions) and data risks (the risk of bias in the training data or misuse of data). AI Impact Assessments will be central to ensuring that risk management obligations, including under the UCITS and AIFMD rules, are complied with when AI is being used.

Some of the risks that AI poses to the fund industry may include:

  1. Conflicts of interest between the fund manager’s initial investment strategy and the AI system’s investment strategy. Because ML algorithms learn on the basis of ‘rewards’ and ‘punishments’, they may select investments that do not offer the best returns for the investor. Alternatively, the AI system may stock-pick profitable stocks rather than adhering to the initial portfolio strategy.
  2. Data accuracy is critical for AI models to perform effectively. If service providers use incomplete or inferential data, or data that has not been properly anonymised, the AI model may make incorrect investment decisions.
  3. From the perspective of investors and stakeholders, some AI systems applied to investments may not be transparent or easily explainable, and their use may not be adequately disclosed.

To mitigate these risks, FS entities, including fund managers, should develop robust governance frameworks for AI, invest in AI ethics, and actively involve their legal teams in the AI deployment process. It is also essential to maintain transparency about how AI is used and to ensure that the systems are auditable and accountable.

Governance Frameworks / Outsourcing and Delegate Oversight

Under the AI Act, entities using AI systems must ensure data governance and managerial best practices are in place before running their AI system. Specific operational and governance impacts for FS entities may include:

  • Compliance with MiFID obligations. AI systems may be used to automate trading decisions or provide investment advice, and must comply with regulatory obligations including MiFID requirements on transparency, record-keeping and best execution for algorithmic trading. Such AI systems may be considered high-risk under the AI Act and therefore subject to heightened transparency, robustness and accountability requirements.
  • Compliance with UCITS/AIFMD obligations, including fiduciary and depositary duties for managing conflicts of interest and disclosure to investors.
  • Appropriate outsourcing and delegate oversight, where AI systems are utilised by service providers.

Sustainability Assessments

FS entities, including fund managers, are subject to ESG-related obligations including under the Sustainable Finance Disclosure Regulation (SFDR), and will need to consider the impact of the use of AI and the new AI Act on these existing activities.

The SFDR requires financial market participants to make disclosures related to sustainability risks and the impact of their investments on sustainability factors. The introduction of AI in this context could have certain implications, including for example:

  • Accuracy of Disclosures: AI systems could be subject to bias or error in assessing ESG (Environmental, Social and Governance) risks or impacts.
  • Transparency and Explainability: The transparency or explainability of an AI system may be impeded if its methodologies or decision-making processes (e.g., its data inputs and outputs) are complex, opaque or difficult to explain.

Firms will need to consider the implications of using AI to assess sustainability risks and impacts for compliance with sustainability-related regulation, including the SFDR.

Data Protection

Any data processed by AI systems, including data used in marketing and profiling activities, must be protected in accordance with data protection law including the General Data Protection Regulation (GDPR).

AI systems are already regulated by Article 22 of the GDPR if they are used to make automated decisions that could have legal or similarly significant effects on individuals. Such systems may also be subject to the AI Act’s provisions for high-risk AI systems, including requirements for transparency, robustness and accountability.

Moreover, under the GDPR, individuals have the right not to be subject to a decision based solely on automated processing, including profiling (such as shareholder profiling), which produces legal effects concerning them or similarly significantly affects them. The data controller, such as the fund or fund manager, must implement suitable measures to safeguard the data subject’s rights, freedoms and legitimate interests. This includes the right to obtain human intervention, to express one’s point of view, and to contest the decision. Furthermore, Article 35 of the GDPR requires those processing personal data “using new technologies” to carry out an assessment of the impact of the processing where that processing is “likely to result in a high risk to the rights and freedoms of natural persons”.

Anti-Money Laundering (AML)

Under the EU’s Fifth Anti-Money Laundering Directive (5AMLD), AI can be used to enhance transaction monitoring and to flag potential money-laundering activities, and such AI systems must adhere to 5AMLD’s requirements for reporting suspicious transactions and maintaining adequate records. AI systems used for transaction monitoring and identifying potential money-laundering activities might be considered high-risk under the AI Act if they exercise significant control over decisions with serious legal implications, in which case they would need to meet the AI Act’s requirements for transparency, robustness and accountability.

How William Fry Can Help

William Fry are at the forefront of the AI evolution and are monitoring developments daily, advising clients on the wide range of legal and practical implications. We can advise on best practices for implementing safeguards and ensuring compliance with relevant laws and regulations.

For instance, we can help with:

  1. Training, providing overviews of AI developments, requirements, and potential impacts.
  2. Assessments of the use of AI technologies that may affect an FS entity or its stakeholders, in particular through William Fry’s AI Impact Assessment process.
  3. Analysis of potential risks and areas requiring remediation, with recommended actions.
  4. Audits of contracts and policies to confirm that they cover the use or development of AI.
  5. Implementation, including developing frameworks, updating contracts and policies to account for AI risks and obligations, and drafting new AI policies.

Please contact Barry Scannell, Clodagh Ruigrok or any member of your usual William Fry team to discuss further.