AI in quant finance: the new edge in trading

Quantitative finance and machine learning have coexisted for decades. The hedge funds and proprietary trading desks that built the first algorithmic trading systems were running what would now be called machine learning workflows before the terminology existed. What has changed is not that AI is entering quantitative finance. It is that the capabilities AI brings to quantitative finance in 2025 are qualitatively different from the statistical models that preceded them, and that the competitive dynamics of a market where every major participant is upgrading simultaneously are producing both real productivity improvements and new risks that the previous generation of quant infrastructure did not create.

The signal discovery problem and why LLMs changed it

Quantitative finance rests on the identification of signals: patterns in data that predict future price movements, risk events, or market conditions with reliability sufficient to generate risk-adjusted returns above the cost of capital. The signal discovery problem has always been the central intellectual challenge in quant finance, and it has always been constrained by the scope of data that human researchers can process and the hypothesis space they can explore.

Large language models have changed the signal discovery problem in two specific ways. First, they have made text-based data sources systematically processable at a scale that transforms previously unusable information into structured signal inputs. Earnings call transcripts, central bank communications, regulatory filings, news flows, and analyst research represent enormous volumes of information whose content has always been known to contain market-relevant signals but whose unstructured form made systematic incorporation into quantitative models impractical. LLMs can extract structured sentiment, topic, and event data from these sources at the speed and scale that quantitative strategy requires.
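
As a concrete illustration of what that extraction step looks like, the sketch below turns an earnings-call excerpt into structured sentiment and guidance fields that a quantitative model can consume. The prompt, the JSON schema, and the call_llm placeholder are illustrative assumptions rather than any firm's disclosed pipeline.

```python
# Sketch of extracting structured fields from an earnings-call excerpt with an
# LLM. `call_llm` is a placeholder for whatever client you use (OpenAI,
# Anthropic, a local model); the prompt and schema are illustrative assumptions.
import json
from typing import Callable

EXTRACTION_PROMPT = """Read the earnings-call excerpt below and return JSON with:
  "sentiment": one of "positive", "neutral", "negative" (management tone),
  "topics": list of short topic strings (e.g. "margin guidance", "capex"),
  "guidance_change": one of "raised", "maintained", "lowered", "none".
Return only the JSON object.

Excerpt:
{excerpt}
"""

def extract_signal_fields(excerpt: str, call_llm: Callable[[str], str]) -> dict:
    """Turn an unstructured transcript excerpt into structured signal inputs."""
    raw = call_llm(EXTRACTION_PROMPT.format(excerpt=excerpt))
    fields = json.loads(raw)
    # Validate before the output feeds a quantitative model.
    assert fields["sentiment"] in {"positive", "neutral", "negative"}
    assert fields["guidance_change"] in {"raised", "maintained", "lowered", "none"}
    return fields

if __name__ == "__main__":
    # Stub LLM so the sketch runs end to end without an API key.
    def fake_llm(prompt: str) -> str:
        return '{"sentiment": "positive", "topics": ["margin guidance"], "guidance_change": "raised"}'

    excerpt = "We are raising full-year margin guidance on stronger pricing."
    print(extract_signal_fields(excerpt, fake_llm))
```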

Second, they have enabled the generation and testing of hypotheses at a speed that changes the research process. A quant researcher who previously required weeks to formalize, code, and backtest a signal hypothesis can now use AI assistance to accelerate the hypothesis-to-backtest cycle substantially, exploring a larger space of potential signals in the same calendar time. The productivity gain in signal research is not primarily in the analysis of individual signals. It is in the breadth of the search.
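
A minimal version of that hypothesis-to-backtest loop looks something like the following: form a candidate signal, align it against forward returns without look-ahead, and score it with a cross-sectional information coefficient. The synthetic price data and the 20-day reversal hypothesis are purely illustrative.

```python
# Minimal sketch of the hypothesis-to-backtest loop: form a candidate signal,
# align it with forward returns, and score it. Illustrative synthetic data only.
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
dates = pd.bdate_range("2023-01-02", periods=500)
prices = pd.DataFrame(
    100 * np.exp(np.cumsum(rng.normal(0, 0.01, (500, 50)), axis=0)),
    index=dates, columns=[f"asset_{i}" for i in range(50)],
)

returns = prices.pct_change()
# Hypothesis: 20-day reversal -- assets that fell over the past month rebound.
signal = -prices.pct_change(20)
# Signal at day t uses only prices up to the close of day t, so aligning it
# with the day t -> t+1 return avoids look-ahead.
forward_ret = returns.shift(-1)

# Daily cross-sectional rank correlation (information coefficient).
ic = signal.corrwith(forward_ret, axis=1, method="spearman")
print(f"mean IC: {ic.mean():.4f}, IC t-stat: {ic.mean() / ic.std() * np.sqrt(ic.count()):.2f}")
```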

Two Sigma, Citadel, and Man Group are among the major quantitative funds that have invested heavily in LLM integration for signal research and portfolio management. The specific capabilities they are deploying are not publicly disclosed in detail, but the investment levels and the hiring patterns visible in their talent acquisition are consistent with organizations building LLM-based research infrastructure as a core competitive asset.

The alternative data revolution and AI’s role in making it usable

Alternative data, the broad category of non-traditional data sources that have proliferated as digital activity has expanded, has been a feature of sophisticated quantitative strategies for over a decade. Satellite imagery of retail parking lots, credit card transaction data, web traffic analytics, and logistics data each represent signals whose information content was recognized long before the infrastructure to process them systematically existed.

AI has solved the alternative data usability problem in a specific way: it has reduced the cost of extracting signals from messy, high-volume, unstructured sources to the point where far more of them can be incorporated into a strategy’s data infrastructure at reasonable research cost. A computer vision model that extracts parking lot occupancy estimates from satellite imagery is performing a task that previously required custom software development and ongoing maintenance. A foundation model fine-tuned on financial text that extracts structured sentiment from news flows is performing a task that previously required a specialized NLP development project.
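
The quant-side step that follows such extraction is usually straightforward normalization, along the lines of the sketch below: per-store occupancy estimates are z-scored against their own trailing history and averaged into a ticker-level foot-traffic surprise. The column names and the 90-day window are illustrative assumptions.

```python
# Sketch of turning vision-model output (parking-lot occupancy estimates) into
# a comparable signal. Data, names, and window lengths are illustrative.
import numpy as np
import pandas as pd

rng = np.random.default_rng(1)
dates = pd.date_range("2024-01-01", periods=180, freq="D")
# occupancy[date, store]: fraction of spaces occupied, 0..1, as estimated by an
# upstream computer-vision pipeline.
occupancy = pd.DataFrame(
    np.clip(rng.normal(0.6, 0.1, (180, 20)), 0, 1),
    index=dates, columns=[f"store_{i}" for i in range(20)],
)

# Z-score each store against its own trailing 90-day history, then average
# across stores to get a ticker-level "foot traffic surprise" series.
trailing_mean = occupancy.rolling(90, min_periods=30).mean()
trailing_std = occupancy.rolling(90, min_periods=30).std()
store_surprise = (occupancy - trailing_mean) / trailing_std
ticker_signal = store_surprise.mean(axis=1)
print(ticker_signal.dropna().tail())
```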

The competitive implication is that the alternative data signals that were once exclusive to the funds with the resources to build custom extraction infrastructure are now accessible to funds with the resources to use AI tools effectively. The democratization of alternative data extraction has shifted the competitive frontier from data access to data interpretation: the edge now lies less in having access to a signal and more in understanding what that signal means and how to combine it with others.

Risk management: AI in the function that matters most

Trading AI tends to dominate coverage of AI in quantitative finance, but the risk management applications may be more consequential over long timescales. Risk models that mischaracterize tail risk, correlation structures, or liquidity conditions are the proximate cause of the large-scale losses that periodically damage major financial institutions. AI risk management tools that improve the accuracy of these models deliver value not through consistent daily returns improvement but through the reduction of the catastrophic loss events that statistical risk models periodically fail to prevent.

AI-enhanced risk models incorporate several capability improvements over their statistical predecessors. Network analysis of counterparty relationships and market exposure correlations identifies systemic risk concentrations that position-level risk models miss. AI-driven scenario generation constructs adversarial scenarios more systematically than human stress testing does, surfacing vulnerabilities that scenario sets built from historical crisis templates overlook. Real-time monitoring of market microstructure conditions provides early warning of liquidity deterioration, allowing positions to be adjusted before the deterioration reaches the speed and depth that create forced liquidation dynamics.
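
The network-analysis piece, for example, can be made concrete in a few lines: counterparties become nodes, bilateral exposures become weighted edges, and a centrality measure flags the concentrations a position-level view would miss. The exposure figures below are invented for illustration.

```python
# Sketch of exposure-network concentration analysis. Node names and exposure
# amounts are made up for illustration.
import networkx as nx

exposures_musd = [
    ("FundA", "DealerX", 120), ("FundA", "DealerY", 40),
    ("FundB", "DealerX", 200), ("FundC", "DealerX", 90),
    ("FundC", "DealerZ", 60),  ("FundB", "DealerZ", 30),
]

G = nx.Graph()
for a, b, amount in exposures_musd:
    G.add_edge(a, b, weight=amount)

# Weighted eigenvector centrality: high values indicate nodes whose distress
# would propagate through many large exposures (DealerX here).
centrality = nx.eigenvector_centrality(G, weight="weight", max_iter=1000)
for node, score in sorted(centrality.items(), key=lambda kv: -kv[1]):
    print(f"{node:8s} {score:.3f}")
```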

JPMorgan’s AI risk infrastructure, Goldman Sachs’s machine learning-enhanced risk models, and similar investments at major banks represent the application of AI to the function where errors are most consequential. The regulatory dimension of AI in financial risk management, where model risk management requirements create specific validation and governance obligations for AI models used in risk measurement, is examined in our coverage of how Experian and data-intensive financial organizations are deploying AI.

Execution and market microstructure: the AI layer closest to the market

At the execution layer, AI has improved trading performance through better prediction of market impact, more sophisticated order routing, and real-time adaptation to market microstructure conditions. The productivity improvement at this layer is measured in basis points of execution cost reduction rather than in alpha generation, but for strategies with significant execution costs, basis point improvements in execution quality compound into meaningful return improvements over time.
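
The compounding arithmetic is simple but worth making explicit. Under the illustrative assumptions below, a two-basis-point improvement per unit traded, on a portfolio turned over twelve times a year, is worth roughly 24 basis points of annual return.

```python
# Back-of-the-envelope arithmetic for the compounding claim above. The figures
# are illustrative assumptions, not measured results.
savings_bps_per_trade = 2.0   # execution cost reduction per unit of notional traded
annual_turnover = 12          # portfolio notional traded 12x per year
annual_benefit_bps = savings_bps_per_trade * annual_turnover
print(f"Annual return improvement: {annual_benefit_bps:.0f} bps "
      f"({annual_benefit_bps / 100:.2f} percentage points)")
```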

Machine learning execution algorithms that adapt their behavior based on real-time market conditions, using historical pattern recognition to identify the market states in which different execution strategies perform better or worse, have become standard infrastructure at major trading organizations. The specific implementations are proprietary, but the outcome data from academic research and industry studies consistently shows adaptive execution algorithms reducing market impact costs compared to static execution approaches.
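
A toy version of that adaptive logic, stripped of anything proprietary, is sketched below: bucket historical parent orders by the market state at arrival, record which execution tactic had lower impact in each bucket, and route new orders accordingly. The two-feature state definition and the data are illustrative assumptions.

```python
# Toy state-conditioned execution routing. All data, buckets, and thresholds
# are illustrative assumptions.
import numpy as np
import pandas as pd

rng = np.random.default_rng(2)
n = 5000
hist = pd.DataFrame({
    "vol_bucket": rng.choice(["low", "high"], n),
    "spread_bucket": rng.choice(["tight", "wide"], n),
    "tactic": rng.choice(["passive", "aggressive"], n),
    "impact_bps": rng.normal(5, 2, n),
})
# Pretend aggressive execution was cheaper in low-vol / tight-spread conditions.
mask = (hist.vol_bucket == "low") & (hist.spread_bucket == "tight") & (hist.tactic == "aggressive")
hist.loc[mask, "impact_bps"] -= 1.5

# Best tactic per market state, from historical average impact.
policy = (hist.groupby(["vol_bucket", "spread_bucket", "tactic"])["impact_bps"]
              .mean().unstack("tactic").idxmin(axis=1))

def choose_tactic(realised_vol: float, spread_bps: float) -> str:
    state = ("high" if realised_vol > 0.02 else "low",
             "wide" if spread_bps > 5 else "tight")
    return policy.loc[state]

print(policy)
print("Current order ->", choose_tactic(realised_vol=0.01, spread_bps=3))
```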

The operational risk the productivity narrative obscures

The productivity improvements AI brings to quantitative finance are real and well-documented. The operational risks they introduce are real and less discussed in the technology adoption narrative.

AI models trained on historical data inherit the assumptions embedded in that data about the stability of market relationships. When those relationships shift, as the low-rate environment that defined the decade before 2022 shifted abruptly, AI models that learned in one regime can fail in the new one in ways that statistical models with more explicit assumptions would make more visible. Model regime risk is not new to quant finance, but the opacity of AI models makes regime risk harder to identify and monitor than in models whose assumptions are explicitly stated.
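
One simple way to make that regime risk observable is to monitor the distribution of each model input against its training-era distribution. The sketch below uses a two-sample Kolmogorov-Smirnov test as a crude drift alarm; the thresholds, window choices, and synthetic data are illustrative assumptions, not a validated monitoring standard.

```python
# Crude feature-drift alarm: compare a model input's training-window
# distribution against a recent live window. Illustrative synthetic data.
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(3)
# Training-era feature (e.g. a rate-sensitive spread) vs. the same feature today.
train_feature = rng.normal(0.0, 1.0, 2500)   # pre-2022-style regime
live_feature = rng.normal(0.8, 1.6, 250)     # shifted level and volatility

stat, p_value = ks_2samp(train_feature, live_feature)
print(f"KS statistic={stat:.3f}, p-value={p_value:.2e}")
if p_value < 0.01:
    print("Feature distribution has shifted: flag the model for regime review.")
```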

The correlation risk introduced by widespread adoption of similar AI models across major market participants is a structural concern that is beginning to receive attention from both risk professionals and regulators. When a large number of trading systems trained on similar data with similar architectures respond similarly to the same market signal, the resulting correlation in market behavior can amplify volatility in ways that no individual system’s risk model would predict. This is the AI equivalent of the crowded trades that have periodically created liquidity crises in quantitative strategies, operating through a mechanism that is harder to observe and measure because the similarity in models, unlike the similarity in positions, is not reported to regulators or visible in market data.
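
One observable proxy, imperfect as it is, is the rolling average pairwise correlation across a set of strategy return streams, as sketched below; rising co-movement is at least consistent with models converging on similar behavior. The synthetic returns and the 60-day window are illustrative assumptions.

```python
# Crowding proxy: rolling average pairwise correlation across strategy returns.
# Synthetic data with a deliberate increase in common-factor loading after day 250.
import numpy as np
import pandas as pd

rng = np.random.default_rng(4)
n_days, n_strats = 500, 8
common = rng.normal(0, 0.01, n_days)
loadings = np.where(np.arange(n_days)[:, None] < 250, 0.3, 0.8)
returns = pd.DataFrame(
    loadings * common[:, None] + rng.normal(0, 0.01, (n_days, n_strats)),
    columns=[f"strat_{i}" for i in range(n_strats)],
)

def avg_pairwise_corr(window: pd.DataFrame) -> float:
    c = window.corr().to_numpy()
    return c[np.triu_indices_from(c, k=1)].mean()

rolling_corr = pd.Series(
    [avg_pairwise_corr(returns.iloc[i - 60:i]) for i in range(60, n_days)],
    index=range(60, n_days),
)
print(rolling_corr.iloc[[0, 150, 300, -1]].round(2))
```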

AI has enhanced the productivity and capability of quantitative finance across the signal research, risk management, and execution dimensions that determine strategy performance. The organizations that have deployed AI most effectively in this context are those that have integrated it into their research and risk infrastructure with appropriate governance rather than treating it as a capability addition that standard model risk management frameworks cover adequately.

For the financial AI applications extending beyond trading into payments and consumer finance, see Mastercard AI tools: the future of payments explained and Experian AI: how data giants are using AI to transform finance. For the governance framework that AI in financial applications requires, read AI governance in enterprises: what leaders must fix now.

The question every quantitative organization’s risk management function should be asking: Which of your AI models were trained primarily on data from the pre-2020 market environment, and how have you validated that their behavior is appropriate for the market conditions they are operating in today?
