By Ashwani Mishra, Editor-Technology, 63SATS
How AI’s Pursuit of Profit Could Exploit Financial Weaknesses and What Regulators Are Doing to Keep It in Check
Imagine an AI system designed to maximize stock market returns. Combing through historical data, it notices a pattern: the biggest crashes often follow bank runs, which are typically triggered by a wave of bad news. To achieve its goal, the AI might take short positions against certain banks and then deliberately spread negative information to incite panic and destabilize them.
This scenario underscores the dangers and ethical challenges of allowing AI to operate without sufficient oversight, revealing how these systems could exploit vulnerabilities in the financial system to achieve their objectives.
To counter such risks, the Securities and Exchange Board of India (SEBI) has recently proposed transparency measures regarding the use of AI tools by investment advisors (IAs) and research analysts (RAs).
The regulatory body emphasizes that these professionals must disclose to clients the extent of AI usage in their services, ensuring that clients can make informed decisions.
Additionally, SEBI is considering stricter regulations for algorithmic trading platforms, potentially requiring brokers to obtain approvals for the algorithmic strategies employed by their clients. The concern is that algo trading, particularly through open APIs, could distort markets if left unchecked.
Global Concerns: AI as Both Tool and Weapon in Financial Markets
Michael J. Hsu, Acting Comptroller of the Currency, has also raised alarms about AI’s potential to disrupt financial stability.
Speaking at the 2024 Conference on Artificial Intelligence and Financial Stability, Hsu highlighted that while AI can be a powerful tool, it can equally serve as a weapon in the wrong hands. AI-enabled fraud, cyberattacks, and disinformation pose significant risks, potentially leading to large-scale financial impacts.
For example, the emergence of AI tools like FraudGPT on the dark web could escalate the scale and sophistication of fraud and scams, threatening both financial institutions and public trust in the banking system. The financial sector must be vigilant in its efforts to bolster operational resilience and guard against the cascading risks of AI-driven threats.
As AI continues to evolve, so too must our approach to regulating its use in financial markets. The potential for AI to both strengthen and endanger financial stability is vast. It is clear that robust oversight, transparency, and proactive regulation are essential to ensuring that AI serves as a tool for progress rather than a catalyst for crisis.