Can we avoid the next MOASS?
When markets go viral
Remember January 2021: thousands of small investors banded together on Reddit, sending GameStop shares soaring and forcing hedge funds to close out their short positions at heavy losses.
What became known as the "MOASS" - the Mother of All Short Squeezes - shook Wall Street.
At the time, institutions blamed online speculation. But behind the memes and tweets, there was a deeper reality: mistrust.
A generation of connected investors was rejecting the classic narrative of rational markets.
And that's where artificial intelligence comes in.
The illusion of algorithmic control
Since that episode, most major banks and hedge funds have integrated AI models designed to detect weak signals: abnormal trading volumes, correlated activity on social media, herd behavior.
The idea seems reassuring: thanks to machine learning, we could anticipate a panic, or at least contain it.
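To make that concrete, here is a minimal sketch of the simplest form of weak-signal detection: a rolling z-score that flags abnormal trading volume. The window, threshold and synthetic data are illustrative assumptions, not any institution's actual model.

```python
import numpy as np
import pandas as pd

def flag_abnormal_volume(volume: pd.Series, window: int = 20,
                         z_threshold: float = 4.0) -> pd.Series:
    """Flag days whose volume deviates sharply from recent history.

    A rolling z-score measures how many standard deviations today's
    volume sits above the trailing mean (excluding today itself).
    Window and threshold are illustrative, not calibrated values.
    """
    trailing = volume.shift(1)  # compare against history only
    z_score = (volume - trailing.rolling(window).mean()) / trailing.rolling(window).std()
    return z_score > z_threshold

# Synthetic example: quiet volume with one GameStop-style burst.
rng = np.random.default_rng(seed=42)
volume = pd.Series(rng.lognormal(mean=15, sigma=0.2, size=100))
volume.iloc[80] *= 20  # an abnormal surge of activity
print(flag_abnormal_volume(volume).iloc[80])  # True
```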
But beware of the illusion of control.
AI, by its nature, learns from the past. Short squeezes, however, are emergent events in which collective behavior defies any predictive rationality.
In other words: AI can explain what has just happened, but rarely what is going to happen.
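A deliberately stylized toy illustrates the limit, using invented numbers rather than any production risk model: a model calibrated on years of calm returns treats a squeeze-sized move as a statistical impossibility.

```python
import numpy as np
from scipy.stats import norm

# Fit a naive Gaussian model to years of "normal" daily returns.
rng = np.random.default_rng(seed=0)
daily_returns = rng.normal(loc=0.0005, scale=0.02, size=2000)  # ~8 years, calm regime
mu, sigma = daily_returns.mean(), daily_returns.std()

# Ask the fitted model how likely a +100% day is - roughly what
# GameStop did at the peak of the squeeze in January 2021.
p = norm.sf(1.0, loc=mu, scale=sigma)
print(f"Model-implied probability of a +100% day: {p:.3g}")  # ~0 - a "50-sigma" event
```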
As Andrew Lo argues in his Adaptive Markets Hypothesis (Lo, 2017), financial markets do not settle into a stable equilibrium but follow an evolutionary process of adaptation.
And no AI, however sophisticated, has yet learned to model human chaos.
When AI becomes part of the risk itself
Paradoxically, AIs that are supposed to prevent bubbles can also amplify them.
When several algorithms simultaneously identify the same "opportunities", they trigger self-reinforcing market movements.
This is the paradox of speed: the faster the models act, the more they create the conditions for a runaway market.
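A toy simulation, with invented parameters, makes the feedback visible: the more identical algorithms react to the same signal, the more their collective orders move the very price they are reacting to.

```python
import numpy as np

def simulate_herd(n_algos: int, impact: float = 0.001,
                  steps: int = 40, seed: int = 7) -> float:
    """Toy market: each algorithm buys after an up-tick, sells after a
    down-tick, and every order nudges the price. Parameters are
    illustrative, not calibrated to any real market."""
    rng = np.random.default_rng(seed)
    price, prev = 100.0, 100.0
    for _ in range(steps):
        signal = np.sign(price - prev)    # the shared "opportunity"
        herd = n_algos * impact * signal  # identical models, identical reaction
        noise = rng.normal(0.0, 0.002)    # ordinary market noise
        prev, price = price, price * (1 + herd + noise)
    return price

# Same noise, same starting point - only the number of copies changes.
for n in (1, 50, 200):
    print(f"{n:>3} identical algorithms -> final price {simulate_herd(n):10.1f}")
```

With a single copy the path is ordinary noise; with two hundred, the same noise locks into a runaway trend in whichever direction the first tick happened to point.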
A CFA Institute study (2023) shows that more than 65% of US equity trades are already executed by autonomous algorithms.
In such an environment, a micro-anomaly can become an avalanche.
And the question is no longer just economic, but systemic: who is responsible for a crash caused by an algorithmic loop?
Towards responsible financial AI
However, there are ways of controlling this power.
Institutions such as the Bank of England and the European Commission are working on frameworks for "responsible AI in finance", integrating transparency, explainability and auditability of models.
The idea is not to slow down innovation, but to reintroduce accountability into a system that has lost it.
Responsible financial AI should:
be explainable (able to justify a trading or risk assessment decision),
be traceable (history of decisions and training data),
and be aligned with macroeconomic stability objectives rather than instant returns alone (a minimal sketch of the first two properties follows this list).
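As a sketch of explainability and traceability under a hypothetical schema (the field names and model version below are invented for illustration): every decision is logged with its inputs, its model version and an explanation, and the inputs are hashed so that later tampering is detectable.

```python
import hashlib
import json
from dataclasses import dataclass, asdict, field
from datetime import datetime, timezone

@dataclass
class DecisionRecord:
    """One auditable model decision: what was decided, on which inputs,
    by which model version, and why. A hypothetical schema, not a
    regulatory standard."""
    model_version: str
    inputs: dict       # the features the model actually saw
    decision: str      # e.g. "reduce_exposure"
    explanation: dict  # e.g. per-feature attributions
    timestamp: str = field(default_factory=lambda: datetime.now(timezone.utc).isoformat())
    input_hash: str = ""

    def __post_init__(self):
        # Hashing the inputs makes the record verifiable after the fact.
        payload = json.dumps(self.inputs, sort_keys=True).encode()
        self.input_hash = hashlib.sha256(payload).hexdigest()

record = DecisionRecord(
    model_version="risk-model-2.3.1",
    inputs={"volume_zscore": 4.7, "social_mentions": 12800},
    decision="reduce_exposure",
    explanation={"volume_zscore": 0.8, "social_mentions": 0.2},
)
print(json.dumps(asdict(record), indent=2))  # append to an immutable audit log
```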
Some players are even exploring hybrid approaches: behavioral AI + human supervision + narrative stress tests (Harvard FinTech Lab, 2024).
This is an elegant way of putting some common sense back into a system that has become too fast to think.
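The human-supervision half of such a hybrid might look like the following gate, again an assumed sketch rather than anything from the cited work (thresholds and names are illustrative): the model trades alone only in calm, high-confidence conditions, and anything squeeze-like is escalated to a human desk.

```python
def route_order(model_confidence: float, anomaly_flagged: bool,
                confidence_floor: float = 0.90) -> str:
    """Hybrid gate: autonomous execution only when the model is confident
    AND no behavioral anomaly is flagged; everything else goes to a human.
    The confidence floor is an illustrative assumption."""
    if anomaly_flagged or model_confidence < confidence_floor:
        return "escalate_to_human"
    return "auto_execute"

# A squeeze-like signal always gets human eyes, however confident the model is:
print(route_order(model_confidence=0.97, anomaly_flagged=True))   # escalate_to_human
print(route_order(model_confidence=0.97, anomaly_flagged=False))  # auto_execute
```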
Can we avoid the next MOASS?
Probably not.
But we can reduce its violence.
Artificial intelligence, if governed rigorously, can become an instrument of financial resilience rather than an amplifier of volatility.
But institutions must dare to use it as something other than a profit-making tool.
Because behind every model lies a choice of values: to prevent or to profit.
And the real question, at the end of the day, isn't "Can AI prevent a MOASS?"
But rather: do we really want it to?
Sources:
Lo, A. (2017). Adaptive Markets: Financial Evolution at the Speed of Thought. Princeton University Press.
CFA Institute (2023). AI in Asset Management: Ethics, Governance, and Systemic Risk.
Bank of England (2024). AI Governance in Financial Supervision: From Trust to Verification.
European Commission (2024). AI Act: Implications for Financial Markets.
Harvard FinTech Lab (2024). Hybrid Models for Risk Anticipation: The Human-Machine Paradox.

