Finance: A Breeding Ground for Innovation, Now Embracing Generative AI

There's definitely a range of concerns and fears surrounding the use of Artificial Intelligence (AI) in financial markets, but it's worth remembering that the financial sector has always been an early adopter of new technologies, and the current wave of disruption is no different. Generative AI is the latest game-changer, impacting everything from customer interactions to risk assessment. While its influence on financial decision-making is undeniable, generative AI also comes with significant challenges.

These challenges include the potential spread of misinformation, increased vulnerability to data breaches, and a widening digital divide between developed and developing economies.

Balancing Innovation with Security: A Navigational Challenge

Banks and financial institutions are actively developing strategies to navigate these complexities. Mitigating risks associated with generative AI integration requires innovative approaches. Simultaneously, establishing and expanding regulatory frameworks is crucial to ensure safe and secure deployment of this technology.

The key lies not just in recognizing generative AI's potential but in pairing it with strategic planning and sound regulation, so that its benefits can be fully captured while its risks are minimized.

Here's a breakdown of some of the most common concerns, followed by a post from Christopher Woolard, Chair of the EY Global Regulatory Network:

  • Job displacement: One major fear is that AI will automate many tasks currently performed by human financial professionals, leading to widespread job losses in areas like trading, portfolio management, and risk assessment.

  • Algorithmic bias: AI algorithms are only as good as the data they're trained on. If this data is biased, the AI's decisions could be unfair or discriminatory. This could lead to issues like unfair loan denials or biased investment recommendations (a simple check for this is sketched after this list).

  • Black box problem: Some AI systems, particularly complex ones, can be opaque and difficult to understand. This lack of transparency raises concerns about accountability and the potential for unintended consequences.

  • Market manipulation: If AI is used for high-frequency trading or other automated strategies, it could exacerbate market volatility and create opportunities for manipulation.

  • Financial crises: A reliance on AI for critical financial decisions raises concerns about potential systemic risks. If a large number of institutions use similar AI models and those models make the same wrong decisions, it could lead to a financial crisis.
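
To make the bias concern concrete, here is a minimal sketch (in Python, with invented numbers) of the "four-fifths rule" check that practitioners often apply to loan-approval outcomes across groups; the group names and counts are hypothetical, not drawn from any real dataset:

    # Minimal sketch: checking loan-approval decisions for disparate impact.
    # The counts below are invented for illustration; a real check would run
    # on a firm's actual decision logs, segmented by protected attribute.

    approvals = {
        "group_a": {"approved": 820, "applied": 1000},  # hypothetical counts
        "group_b": {"approved": 610, "applied": 1000},
    }

    rates = {g: d["approved"] / d["applied"] for g, d in approvals.items()}
    ratio = min(rates.values()) / max(rates.values())

    print(f"Approval rates: {rates}")
    print(f"Disparate impact ratio: {ratio:.2f}")

    # The common "four-fifths" rule of thumb flags ratios below 0.8 as
    # potential adverse impact worth investigating.
    if ratio < 0.8:
        print("Warning: possible adverse impact; review the model and data.")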

However, it's important to consider the potential benefits of AI in finance as well. Here are some:

  • Improved efficiency: AI can automate repetitive tasks, freeing up human professionals to focus on more strategic activities.

  • Enhanced risk management: AI can analyze vast amounts of data to identify and assess risks more effectively than humans.

  • Better decision-making: AI can sift through complex information and identify patterns that humans might miss, potentially leading to better investment decisions.

  • Democratization of finance: AI-powered tools could make financial products and services more accessible to a wider range of people.

The future of AI in finance will likely involve a balance between automation and human oversight. Regulatory bodies are also working on guidelines to ensure responsible AI use in the financial sector.

For further exploration, here is the post from Christopher Woolard mentioned above:

Christopher Woolard CBE

Partner at EY, EMEIA Financial Services Regulation Lead, Chair of the EY Global Regulatory Network. Trustee at Which?

What response could we expect from financial regulators and how can firms prepare?

Artificial intelligence (AI) has been increasingly integrated into the global financial services sector; now, with the rise of large language models (LLMs) and generative AI, a new window of opportunity brings both risk and reward. Significant attention has already been paid to the general role of AI in the economy and in consumer protection, but regulators are increasingly thinking about the possible dangers AI could introduce to the stability of the financial system. The UK’s Bank of England and the US Financial Stability Oversight Council are among those who believe that new AI tools could pose financial stability risks. Balancing the benefits and risks of AI in financial services is therefore becoming more critical.

How could AI undermine financial stability?

· Cyber risk: Greater AI use heightens firms' vulnerability to cyber-attacks targeting their technological infrastructure. This challenge intersects with increasing geopolitical instability, as concern grows about the threat of attacks by state-sponsored actors on a country’s critical infrastructure.

· Concentration risk: The widespread adoption of the same algorithms among numerous financial institutions can lead to undesirable outcomes such as liquidity hoarding and fire sales during stress periods. This has been noted by the Bank for International Settlements.

· Herding behavior or collusion: Reliance on the same data sets for decision-making can foster a "herd mentality," threatening financial stability.

· Governance and risk management: Growing dependence on critical third parties for datasets, AI algorithms, and IT outsourcing (such as cloud computing) can amplify systemic risk.

· Market manipulation: Heavy reliance on sentiment analysis and social media signals in AI trading can lead to abrupt price run-ups or crashes. The Bank of England's discussion paper highlights a potential risk of market manipulation and instability due to algorithmic misconduct in financial markets.

· Lack of explainability/auditability leading to unintended consequences: The complexity of AI decision-making mechanisms can make identifying and addressing errors or biases difficult, leading to financial consequences. Hence, explainability, interpretability, and, by extension, auditability are critical factors for AI's responsible and ethical use in the financial sector (a minimal illustration follows below).
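
As an illustration of what explainability can look like in practice, here is a minimal sketch assuming Python with scikit-learn; it uses permutation importance on synthetic data (a stand-in for real credit features) to audit which inputs drive a model's decisions:

    # Minimal sketch: auditing which features drive a model's predictions.
    # Uses scikit-learn's permutation importance on synthetic data; a real
    # audit would use production models, governed data, and richer methods.

    from sklearn.datasets import make_classification
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.inspection import permutation_importance
    from sklearn.model_selection import train_test_split

    # Synthetic stand-in for credit data; features are unnamed/hypothetical.
    X, y = make_classification(n_samples=2000, n_features=6,
                               n_informative=3, random_state=0)
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

    model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

    # Shuffle each feature in turn and measure how much accuracy degrades;
    # large drops flag features the model leans on heavily.
    result = permutation_importance(model, X_test, y_test,
                                    n_repeats=10, random_state=0)
    for i, score in enumerate(result.importances_mean):
        print(f"feature_{i}: importance {score:.3f}")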

Stepping up regulatory measures

Financial regulators are striving to strike a balance between innovation, financial stability, and the responsible development of AI. Regulatory responses are likely to increase in the following areas:

· Enhanced supervisory effectiveness: Regulators won’t just be thinking about downside risk. AI presents an opportunity for financial regulators to increase their supervisory efforts and responsiveness to trends and vulnerabilities. For example, the European Central Bank has outlined how AI helps it in its supervisory duties and has noted its intention to use LLMs and accelerate their adoption across the organisation. In due course, regulators are also likely to expect regulated firms to use AI for compliance in certain areas, such as combatting financial crime.

· Regulatory sandboxes: These are on the rise as a collaborative tool between the private sector and regulators. Notably, in the EU, Singapore, and the UK, AI regulatory sandboxes are being utilized to navigate the rapid growth of AI and regulators’ unfamiliarity with it. The EU Artificial Intelligence Act (AIA) has mandated the development of such sandboxes to spur innovation. Additionally, to prepare for the EU AIA, Spain, Sweden, and Germany are establishing AI sandboxes.

· Leverage wider regulatory frameworks: Regulators expect firms to continue meeting existing governance and system requirements while addressing AI-related risks, as observed in Australia, China, and India. President Biden's recent AI Executive Order emphasized regulatory protection against fraud and discrimination while preserving financial stability. In parallel, the EU's Network and Information Security Directive (NIS2) and Cyber Resilience Act are set to enhance the EU AIA by defining cybersecurity norms for high-risk AI systems. AI will also need to align with the broader EU digital resilience framework. Similarly, to assess operational resilience, UK authorities have published DP3/22, focusing on critical third parties.

· Develop FS-specific regulation: AI in the financial sector has largely been governed by extant laws and self-regulation. The Hong Kong Monetary Authority (HKMA) and the Monetary Authority of Singapore (MAS) have published FS-specific principles, but more sector-specific focus is set to gain momentum. The Financial Stability Board has called for research on AI's implications for financial stability, and for the topic to be addressed at the Central Bank Research Association's 2024 Annual Meeting in August. The U.S. Treasury will release a best-practice report by March 2024, and the Commodity Futures Trading Commission seeks insight into AI uses and risks in derivatives markets. The European Banking Authority (EBA) plans to map prudential and consumer protection requirements related to banking-sector AI under the upcoming EU AIA, which should trigger a harmonization of AI supervision. Meanwhile, the UK Financial Conduct Authority is required to outline its AI regulatory approach to the UK government by April 2024.

How can firms prepare?

The use of AI is interlinked with existing risks that can lead to financial instability. Firms are best served by adopting a proactive, risk-based approach to meet regulatory expectations; ensuring compliance is also more costly and complex once AI systems are operating than during the design phase. Here are some practical steps that firms can start implementing now:

· Establish formal governance: Create a diverse advisory board that will guide responsible AI design and resolve any issues.

· Upskill talent: Make sure you have the right people and resources to develop AI tools.

· Assign duties: Clearly define and enforce the roles and responsibilities associated with AI processes.

· Evaluate AI systems: Keep an inventory of AI algorithms with a risk assessment for each (a minimal sketch of such an inventory follows this list).

· Assess your risks: Ensure your organization has a comprehensive risk management framework in line with known regulatory requirements.

· Raise awareness: Share information and train your people on the benefits and risks of using AI.

· Verify independently: Arrange a third-party check that your AI systems meet international standards.

· Prepare now: Start making strategic improvements to your AI lifecycle to decrease complexity and implementation costs.

· Keep current: Proactively monitor new regulatory developments to anticipate their impact on your organization and respond to the changing landscape.
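
As a starting point for the inventory step above, here is a minimal sketch in Python; the record fields and risk tiers are illustrative assumptions, not a regulatory standard:

    # Minimal sketch: a register of AI systems with a risk assessment each.
    # Field names and risk tiers are illustrative, not a regulatory standard.

    from dataclasses import dataclass, field

    @dataclass
    class AISystemRecord:
        name: str
        owner: str                # accountable team or individual
        use_case: str             # e.g., credit scoring, fraud detection
        risk_tier: str            # e.g., "low", "medium", "high"
        third_party_dependencies: list[str] = field(default_factory=list)
        last_reviewed: str = ""   # ISO date of the last risk review

    inventory = [
        AISystemRecord(
            name="credit-scoring-v2",          # hypothetical system
            owner="Retail Credit Risk",
            use_case="loan approval decisioning",
            risk_tier="high",
            third_party_dependencies=["cloud ML platform", "external bureau data"],
            last_reviewed="2024-01-15",
        ),
    ]

    # Surface high-risk systems so their reviews can be prioritized.
    for record in inventory:
        if record.risk_tier == "high":
            print(f"{record.name}: high-risk, last reviewed {record.last_reviewed}")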

The views reflected in this article are the views of the author and do not necessarily reflect the views of the global EY organization or its member firms.

Christopher Woolard CBE
