Let me suggest five principles that should guide us as we think about regulation. The first is balancing benefits and risks. This sounds really obvious. Who could be against that? Well, a lot of the AI regulatory discussions are against that. They say: don't deploy this until we're sure there are no risks. Get rid of all the risks.
Well, there are a lot of risks to slowing AI down. And so every time you regulate, you have to ask not just what are the risks that the AI does something bad, but what are the risks that you regulate it too much and don't get all the good things that you want. And you balance those.
Second, compare AI to humans or to the alternative; don't compare it to the Almighty. If your autonomous car crashes, that's not a reason not to have autonomous cars. You want to ask how often it crashes compared to how often humans crash.
Third, and this gets to something in your initial question: whenever possible, use domain-specific regulation. An AI super-regulator wouldn't understand financial markets. It wouldn't understand financial stability. It couldn't do that job.
But you do need more AI expertise in your regulators. So get all the regulators you already have to know more about AI.
Fourth, regulations should not be a moat protecting incumbents. A lot of the big companies have welcomed regulations. Some of that, I think, is public-spiritedness on their part, and should be welcomed and applauded. And some of it is that they know they can comply with the regulations while the smaller upstarts can't; we should be very skeptical of that.
And then finally, not every problem caused by AI can be solved by changing AI. If taxi drivers are losing their jobs to AI, the answer is not to reprogram the AI or to ban driverless cars; China is not doing that, and the United States is not doing that. The answer is maybe a training program for those taxi drivers. So a lot of the solutions to AI's problems are not about regulating AI. They're about separate programs in labor markets, the tax system, and the like.
Aetna Professor of the Practice of Economic Policy, Harvard University; Former Chairman, Council of Economic Advisers of the White House
1. Let me zoom in on the finance industry in three steps.
So first of all, we have already seen a revolution in trading, based not on generative AI but on machine learning, over the past decade or a little more. Trading has really changed fundamentally, particularly in the most liquid markets, by becoming very much algorithm-based, and arguably that is improving market efficiency and benefiting households, corporations, and governments by lowering funding costs.
The second example I would give is credit scoring. Artificial intelligence can improve credit allocation, and that can ultimately benefit the population at large via increased financial inclusion and a better allocation of credit across agents: across people and across firms (see the sketch after the third example below).
And thirdly, and that is really the frontier, what Jason was referring to as the next step of artificial intelligence: general intelligence. In the financial sector, this is the question about fully autonomous decision making in financial markets or financial institutions. There's a great degree of skepticism at the moment as to whether we will get there, and certainly we have not talked to any market participant who is willing to let the machine run without human intervention. But there is, theoretically, the possibility of fully autonomous agents, and presumably that would be achieved once there is a more general form of artificial intelligence.
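Returning to the credit-scoring example: here is a minimal, purely illustrative Python sketch, not IMF analysis. The synthetic data, the "alternative data" feature, and the 5% approval cutoff are all invented assumptions. The point it demonstrates is the inclusion channel: a score trained on a richer data set can approve more borrowers at a comparable realized default rate.

```python
# Illustrative sketch (assumptions, not IMF analysis): a richer "AI" score
# using an extra data source can approve more borrowers at a similar default rate.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 20_000
credit_history = rng.standard_normal(n)      # traditional bureau-style feature
alt_data = rng.standard_normal(n)            # hypothetical alternative data (e.g., cash-flow info)

# True (unobserved) default process depends on both features.
logit = -1.0 - 1.2 * credit_history - 1.2 * alt_data
default = rng.random(n) < 1.0 / (1.0 + np.exp(-logit))

X_trad = credit_history.reshape(-1, 1)                 # old model: one feature
X_ai = np.column_stack([credit_history, alt_data])     # richer model: both features

for name, X in (("traditional", X_trad), ("AI / richer", X_ai)):
    pd_hat = LogisticRegression().fit(X, default).predict_proba(X)[:, 1]
    approved = pd_hat < 0.05                 # approve if predicted default risk < 5%
    print(f"{name:12s}: approves {approved.mean():5.1%} of applicants, "
          f"realized default rate {default[approved].mean():5.1%} among approved")
```

In this toy setup, the richer model approves a substantially larger share of applicants while keeping the realized default rate among approved borrowers low, because it can identify low-risk borrowers the one-feature score cannot distinguish.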
2. In early August, we saw huge market swings. In Japan, the Nikkei lost 12% in one day. In the US, implied equity market volatility shot up from the mid-20s to levels above 60 intraday. These are the kinds of levels usually observed in severe crises, and one thing that market commentators pointed to is potentially the role of correlated trading, where algorithms using similar strategies trigger sales at the same time. So to some degree, systemic risk could be increased because of correlations across signals that amplify downside moves.
So we saw some of that already back in 2010 in the stock market, in what was called the flash crash, and then later in the Treasury market in 2014.
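To make that correlated-trading channel concrete, here is a minimal, purely illustrative Python sketch, not from the speakers; the trader count, price-impact coefficient, and sell threshold are invented assumptions. Each algorithmic trader sells when its signal crosses a threshold, and the signals share a common component whose weight is the correlation parameter.

```python
# Illustrative sketch of correlated trading signals amplifying downside moves.
# All parameters are assumptions chosen for clarity, not calibrated to any market.
import numpy as np

rng = np.random.default_rng(0)

def worst_move(rho, n_traders=100, n_steps=250, impact=0.002, threshold=-1.5):
    """Simulate one price path in which each trader sells when its signal
    falls below a threshold; rho sets the share of each signal that is a
    common component versus idiosyncratic noise."""
    returns = np.empty(n_steps)
    for t in range(n_steps):
        common = rng.standard_normal()                # shared market signal
        idio = rng.standard_normal(n_traders)         # trader-specific noise
        signals = np.sqrt(rho) * common + np.sqrt(1 - rho) * idio
        n_sellers = np.sum(signals < threshold)       # correlated strategies fire together
        returns[t] = -impact * n_sellers + 0.01 * rng.standard_normal()
    return returns.min()                              # worst single-step return

for rho in (0.0, 0.5, 0.9):
    avg_worst = np.mean([worst_move(rho) for _ in range(200)])
    print(f"signal correlation {rho:.1f}: average worst one-step return {avg_worst:+.3f}")
```

As the signal correlation rises, sell orders cluster in the same step and the worst one-step move deepens sharply, which is the amplification mechanism the flash-crash episodes illustrate.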
There could be broader macro consequences as well that could amplify systemic risk in principle. So this is certainly an area that regulators are studying, and a big challenge is that the kind of data policymakers need to understand those risks is not necessarily the data that was needed previously to assess risk: which entities to collect data from, and what kind of data to collect, may be different. And so, you know, additional transparency may be first order here.
Director of the Monetary and Capital Markets Department, International Monetary Fund (IMF)