When AI replaces proxy advisers: what J.P. Morgan Asset Management’s shift means for responsible investing

J.P. Morgan Asset Management’s recent decision to eliminate the use of traditional proxy advisory firms in favour of an internally developed AI-driven voting and governance tool marks a significant inflection point for responsible investing. While proxy advisers have long been a lightning rod in debates over stewardship quality, influence, and standardisation, their removal by one of the world’s largest asset managers signals something more profound: a redefinition of how fiduciary responsibility, governance oversight, and ESG integration are operationalised at scale.

This move does not simply raise questions about technology adoption. It forces asset owners, Investment Directors, and boards to confront a deeper issue: who should decide what “responsible” looks like, and how those decisions are made in increasingly complex, politicised, and data-rich markets.

Proxy advisers and the architecture of modern stewardship

For decades, proxy advisers have played a central role in institutional voting. Their value proposition was pragmatic: provide research, voting recommendations, and operational infrastructure to help large, diversified investors exercise voting rights across thousands of issuers globally. For many asset managers and asset owners, particularly those managing passive or quasi-passive strategies, proxy advisers were not just convenient—they were necessary.

Yet proxy advisers were always an imperfect solution. Their methodologies tended to be:

  • Standardised, sometimes insufficiently nuanced for company-specific or regional contexts

  • Backward-looking, relying heavily on disclosed policies and historical behaviour

  • Normative, embedding assumptions about governance “best practice” that did not always align with an investor’s own beliefs or fiduciary objectives

As responsible investing evolved from exclusion-based screens to more engagement-oriented and risk-aware frameworks, these limitations became more visible. Stewardship increasingly demanded judgement, prioritisation, and trade-offs—qualities that sit uncomfortably with one-size-fits-all voting recommendations.

Why AI, and why now?

J.P. Morgan Asset Management’s move reflects a broader convergence of pressures reshaping responsible investing:

  1. Scale meets complexity
    Large asset managers face a paradox: portfolios are broader and more global than ever, while expectations around stewardship quality and accountability continue to rise. Manual, case-by-case governance analysis does not scale easily. AI promises to bridge that gap by processing vast amounts of structured and unstructured data quickly, consistently, and at lower marginal cost.

  2. Fragmentation of ESG norms
    The global consensus around ESG has fractured. What constitutes “responsible” governance differs across jurisdictions, political environments, and client mandates. Outsourcing voting recommendations to third parties risks importing external value judgements that may not align with an asset manager’s fiduciary duty to diverse clients. AI tools can be trained to reflect house views, client-specific policies, or mandate-level constraints.

  3. Regulatory and reputational scrutiny
    Asset managers are under growing scrutiny to demonstrate that stewardship decisions are intentional, documented, and aligned with stated policies. An internally controlled AI system offers auditability, traceability, and defensibility—key attributes in a world where voting decisions can quickly become political flashpoints.

  4. The maturation of data and language models
    Advances in natural language processing allow AI systems to analyse proxy statements, shareholder proposals, governance disclosures, and historical voting outcomes at a level of depth that was not previously feasible. This moves AI from a blunt screening tool to something closer to a decision-support engine.
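
To give a flavour of the kind of language analysis involved, here is a minimal prototype using an off-the-shelf zero-shot classifier from the Hugging Face transformers library. This is a generic sketch, not a description of J.P. Morgan Asset Management’s actual system; the proposal text and theme labels are invented.

```python
from transformers import pipeline  # generic prototype, not any firm's production stack

# Zero-shot classification tags proposals against governance themes
# without first training a bespoke model.
classifier = pipeline("zero-shot-classification",
                      model="facebook/bart-large-mnli")

proposal = ("Shareholders request that the board adopt a policy "
            "requiring an independent chair.")
themes = ["board independence", "executive compensation",
          "climate disclosure", "shareholder rights"]

result = classifier(proposal, candidate_labels=themes)
print(result["labels"][0], round(result["scores"][0], 2))  # top theme and its score
```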

What changes for responsible investing?

1. From “outsourced judgement” to “codified beliefs”

The most profound shift is philosophical. Using proxy advisers implicitly delegated a portion of governance judgement to external parties. Replacing them with an AI system forces asset managers to explicitly codify their stewardship beliefs.

This includes answering difficult questions:

  • How should trade-offs between short-term performance and long-term governance resilience be handled?

  • How much weight should be given to environmental or social risks relative to shareholder rights?

  • When is engagement preferable to voting against management—and when is escalation warranted?

AI does not eliminate judgement; it forces judgement upstream, into model design, training data, and policy parameters. In that sense, it may improve responsible investing by making implicit assumptions explicit.
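
What “codifying stewardship beliefs” means in practice can be made concrete with a deliberately simplified sketch. Every field name, threshold, and override below is hypothetical; the point is only to show how implicit assumptions become explicit, reviewable parameters.

```python
from dataclasses import dataclass, replace

@dataclass(frozen=True)
class StewardshipPolicy:
    """Hypothetical policy parameters for an AI voting engine.

    Each field is an explicit, auditable encoding of a belief that a
    proxy adviser would otherwise apply implicitly on the firm's behalf.
    """
    max_board_tenure_years: int = 12          # beyond this, independence is questioned
    min_independent_board_share: float = 0.5  # require majority-independent boards
    support_climate_disclosure: bool = True   # a house view, not a universal truth
    escalations_before_voting_against: int = 2  # engagement attempts before escalation

# House view versus a client mandate that overrides it: whose values
# govern whose capital becomes a documented, deliberate choice.
house_policy = StewardshipPolicy()
client_policy = replace(house_policy, support_climate_disclosure=False)
```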

2. Consistency versus discretion

One long-standing criticism of proxy advisers was that they encouraged mechanical voting. Ironically, AI could either entrench or alleviate that problem.

Used poorly, AI risks:

  • Reinforcing historical biases embedded in training data

  • Optimising for consistency at the expense of context

  • Creating an illusion of objectivity where normative choices still exist

Used well, AI can:

  • Apply consistent frameworks while flagging exceptions that require human review

  • Identify patterns of governance risk that merit deeper engagement

  • Free human stewardship teams to focus on high-impact, high-judgement cases

The determining factor is governance: AI as decision-maker versus AI as decision-support.
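
The difference between the two modes can be expressed in a few lines. In this hedged sketch, the threshold, topic list, and return values are all illustrative; the design point is simply that low-confidence or contested items are routed to people rather than auto-voted.

```python
from typing import NamedTuple

class Recommendation(NamedTuple):
    proposal_id: str
    vote: str          # "FOR", "AGAINST", or "ABSTAIN"
    confidence: float  # model-reported confidence in [0, 1]
    topic: str

REVIEW_THRESHOLD = 0.85                                     # illustrative cut-off
CONTESTED_TOPICS = {"executive pay", "political spending"}  # always human-reviewed

def route(rec: Recommendation) -> str:
    """Decision-support routing: the model proposes, humans decide
    whenever confidence is low or the topic is contested."""
    if rec.topic in CONTESTED_TOPICS or rec.confidence < REVIEW_THRESHOLD:
        return "queue_for_human_review"
    return "auto_vote_with_logged_rationale"

print(route(Recommendation("2025-AGM-07", "AGAINST", 0.91, "executive pay")))
# -> queue_for_human_review: contested topics bypass automation entirely
```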

3. Engagement becomes more selective—and potentially more effective

One underappreciated benefit of AI-driven stewardship is prioritisation. Responsible investing has long struggled with the problem of “engagement inflation”: thousands of engagements reported, but limited evidence of material impact.

AI tools can help identify:

  • Companies where governance risks are most likely to translate into financial or operational stress

  • Repeated patterns of poor disclosure or board responsiveness

  • Situations where voting escalation is more likely to influence outcomes

This could shift stewardship from volume to materiality, aligning it more closely with portfolio risk management.
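
A prioritisation engine of this kind can be sketched in a few lines. The factors and weights below are invented for illustration; a real system would estimate them from data rather than hard-coding them.

```python
candidates = [  # toy inputs; all scores are invented, scaled to [0, 1]
    {"name": "Issuer A", "governance_risk": 0.8, "disclosure_gaps": 0.6,
     "board_unresponsiveness": 0.7, "escalation_leverage": 0.5},
    {"name": "Issuer B", "governance_risk": 0.3, "disclosure_gaps": 0.2,
     "board_unresponsiveness": 0.1, "escalation_leverage": 0.4},
]

def engagement_priority(company: dict) -> float:
    """Illustrative materiality score for ranking engagement candidates."""
    return (0.4 * company["governance_risk"]           # risk of financial/operational stress
            + 0.3 * company["disclosure_gaps"]         # persistent disclosure weakness
            + 0.2 * company["board_unresponsiveness"]  # history of ignoring investors
            + 0.1 * company["escalation_leverage"])    # odds escalation changes the outcome

# Engage where it is most likely to matter, not everywhere at once.
engagement_queue = sorted(candidates, key=engagement_priority, reverse=True)
print([c["name"] for c in engagement_queue])
```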

Implications for portfolio construction

While proxy voting may appear operational, the shift to AI-driven governance has second-order effects on portfolio construction.

Governance risk as a portfolio input

If governance analysis becomes more granular and forward-looking, it can feed into:

  • Security selection

  • Position sizing

  • Risk budgeting

For example, companies with persistent governance red flags may warrant lower conviction weights, higher required returns, or explicit risk premiums—particularly in markets where legal or regulatory protections are weak.
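
To show how such a signal could enter portfolio construction mechanically, here is one hedged sketch of a conviction-weight haircut. The functional form and parameters are assumptions made for illustration, not a recommended model.

```python
def adjusted_weight(base_weight: float, governance_score: float,
                    weak_protections: bool) -> float:
    """Scale a conviction weight by governance quality (1.0 = clean record).

    The square root makes the haircut concave, so red flags bite early;
    the 0.8 multiplier for weak-protection markets is an invented parameter.
    """
    haircut = governance_score ** 0.5
    if weak_protections:
        haircut *= 0.8  # deeper discount where legal recourse is limited
    return base_weight * haircut

# A 3% conviction position with persistent red flags (score 0.4) in a
# weak-protection market shrinks to roughly 1.5%.
print(f"{adjusted_weight(0.03, 0.4, weak_protections=True):.4f}")
```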

Passive strategies are no longer “neutral”

One of the strongest arguments for proxy advisers was their utility for passive strategies. AI challenges the notion that passive ownership must imply passive stewardship. Large index investors, armed with AI-enabled tools, can now exercise governance influence in a more differentiated way—raising questions about systemic responsibility and market-wide externalities.

Alignment with long-term risk management

Governance failures tend to surface during stress: geopolitical shocks, commodity price swings, regulatory shifts, or social unrest. AI-enhanced governance monitoring can help investors identify portfolio fragilities before they become tail risks—strengthening the role of responsible investing as a downside-risk mitigant, not just a reputational overlay.

New risks introduced by AI-driven stewardship

The shift is not without hazards.

1. Model risk and opacity

AI systems can be complex and difficult to explain, even to their designers. For asset owners and regulators, this raises uncomfortable questions:

  • How do you evidence that a voting decision was reasonable and fiduciary-aligned?

  • How do you detect errors or unintended bias?

  • Who is accountable when an AI-informed vote contributes to a controversial outcome?

Responsible investing frameworks will need to incorporate AI governance alongside corporate governance.
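
One building block for such a framework is an audit trail that captures, per vote, enough context to answer those three questions after the fact. Here is a minimal sketch, with hypothetical field names and inputs.

```python
import hashlib
import json
from datetime import datetime, timezone
from typing import Optional

def audit_record(proposal_id: str, vote: str, model_version: str,
                 policy_version: str, rationale: str,
                 reviewer: Optional[str]) -> dict:
    """Hypothetical audit entry: pins a vote to the model and policy
    versions in force, the stated rationale, and any human reviewer."""
    record = {
        "proposal_id": proposal_id,
        "vote": vote,
        "model_version": model_version,    # which model produced the recommendation
        "policy_version": policy_version,  # which codified policy applied
        "rationale": rationale,
        "human_reviewer": reviewer,        # None means auto-voted within policy
        "timestamp": datetime.now(timezone.utc).isoformat(),
    }
    # A content hash makes later tampering with the record detectable.
    record["digest"] = hashlib.sha256(
        json.dumps(record, sort_keys=True).encode()).hexdigest()
    return record

entry = audit_record("2025-AGM-07", "AGAINST", "gov-model-1.3",
                     "policy-2025.2", "Pay unlinked to performance", None)
```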

2. Concentration of influence

As large asset managers develop proprietary AI systems, stewardship power may become more concentrated and less transparent to the market. Smaller asset owners, who rely on managers to vote on their behalf, may have less visibility into how decisions are made—heightening the importance of reporting and client dialogue.

3. The risk of false precision

AI can create confidence where uncertainty remains. Governance outcomes are shaped by human behaviour, politics, and culture—factors that resist clean modelling. Over-reliance on AI outputs risks underestimating genuine ambiguity.

What this means for Investment Directors and asset owners

For Investment Directors, J.P. Morgan Asset Management’s move is not just a headline—it is a signal that responsible investing is entering an implementation phase where technology choices matter as much as principles.

Key questions asset owners should now be asking their managers include:

  • How are stewardship beliefs translated into voting logic?

  • What role does AI play, and where does human oversight intervene?

  • How are conflicts, edge cases, and contested issues handled?

  • How does governance analysis feed back into portfolio risk management?

For those overseeing internal teams, similar questions apply internally. If AI is reshaping stewardship, it must be integrated into investment governance frameworks, not bolted on as a technical upgrade.

A broader shift in responsible investing

J.P. Morgan Asset Management’s decision reflects a broader maturation of responsible investing. The field is moving:

  • From outsourcing to ownership

  • From principles to processes

  • From reporting activity to managing risk

In an era of geopolitical fragmentation, regulatory uncertainty, and politicised capital markets, responsible investing is increasingly about resilience—of portfolios, institutions, and the financial system itself. AI, used thoughtfully, can support that goal. Used carelessly, it can undermine trust.

Conclusion: a turning point, not an endpoint

Eliminating proxy advisers in favour of AI is not the end of the stewardship debate—it is the beginning of a more demanding one. Responsible investing is no longer judged solely by policies or intentions, but by how decisions are made, documented, and defended.

For Investment Directors, the message is clear: governance, risk management, and technology are converging. The challenge is not whether to adopt AI, but how to ensure it strengthens fiduciary judgement rather than replacing it. In that sense, responsible investing’s future will depend less on what tools are used—and more on who remains accountable when those tools are deployed.
