Paul Diggle and Luke Bartholomew discuss the many potential economic impacts of Artificial Intelligence, including on productivity growth, the labour market, sectoral winners and losers, regulation, and global geopolitics.
The abrdn research note on AI’s macro impact is here.

Podcast

Paul Diggle

Hello and welcome to Macro Bytes, the economics and politics podcast from abrdn. My name is Paul Diggle.

 

Luke Bartholomew

And I’m Luke Bartholomew.

 

Paul Diggle

And today we are talking about a very topical issue in macro and markets at the moment, which is AI. How is the artificial intelligence revolution going to change the economy, the global macro picture and markets over the coming years and, indeed, decades? And when we talk about artificial intelligence here, we are referring to a suite of technologies which are about the simulation of human-style intelligence by machines. This includes things like natural language processing, making decisions, recognising patterns, learning from experience, and solving complex problems. So we're talking about something a bit wider than ChatGPT, very much the topic du jour. We're talking about machine learning, neural networks and computer vision, as well as large language models and generative AI. And already, increasingly pervasive products and services powered by AI are changing consumer experiences and corporate processes, and in the markets, US equity market performance over the past year has been very narrowly driven by a small number of perceived AI winners. So to structure how all this is going to change macro and markets over the long term, we're going to talk about five crucial areas in which it's going to have an impact: productivity, sectors, jobs, government policy and regulation, and then finally, the geopolitics of those categories.

 

Luke Bartholomew

So starting with productivity, which for economists is probably the most interesting of those categories. As Paul Krugman once put it, productivity isn't everything, but in the long run it's almost everything. And that's getting at the idea that, of course, we have cyclical fluctuations in unemployment and growth, and those matter, but over the longer run, what really pins down living standards is productivity. And there's a lot of excitement about AI as a driver of productivity, in part because recently we've been experiencing very poor productivity growth, at least since the financial crisis - so for 15 years or so. In Britain, in fact, we talk of the 'productivity puzzle': our productivity growth has been so bad that it's something of a mystery how it could possibly be as bad as it has been. But even in the US, at the frontier of technical change, productivity growth has recently been somewhat underwhelming compared to history. And perhaps that's a little bit surprising in the context of this sense of huge amounts of technological progress and huge churn in the economy. But certainly the AI revolution is not yet showing up at all in the economic data. To invoke another economist, we sometimes talk in economic theory and history of the 'Solow Paradox'. This is named after Bob Solow - a Nobel Prize-winning economist famous for his work on economic growth - who quipped in the late 1980s that you can see the computer age everywhere except in the productivity statistics. That is, there seemed to be all this change going on, but it wasn't showing up at all in the aggregate economic data - so perhaps it wasn't that important. And then, almost immediately after he made these comments, productivity growth in the US picked up quite significantly in the 1990s.
And we now think of that period as perhaps being the US enjoying the fruits of a third industrial revolution. And I think this framing of waves of industrial revolution is quite a useful way of thinking about the potential impacts of AI, because economic historians tend to think of industrial revolutions coming in big waves or phases, and they identify maybe three significant ones. First, in the late 18th century - this is the archetypal Industrial Revolution centred in Britain, on steam power, railways and the like. And this was obviously significant in driving economic growth through into the 1800s. Then a second industrial revolution in the second half of the 19th century, so the late 1800s, this one centred in the US around the internal combustion engine, industrial chemistry and electrification. And then, as I say, a third industrial revolution maybe starting in the 1970s, around the internet, circuit boards and computers. So the question is, will AI perhaps be the driver of a fourth industrial revolution?

 

Paul Diggle

Yeah. And I think what links those industrial revolutions, Luke, is that they're all powered, all underpinned, by a new general-purpose technology, what economists call a GPT. And this is different from the other sort of GPT we talked about here (generative pre-trained transformers). This is a longer-running definition of a general-purpose technology, which is a technology with widespread application across many domains, and which creates many spillover effects, many further waves of innovation. So across those industrial revolutions, steam power was the general-purpose technology of the first industrial revolution, electrification and the internal combustion engine of the second, and then computers of the third. And perhaps AI is a new general-purpose technology that, as you say, will power a fourth industrial revolution. And I think there are arguments that AI is just such a technology. It has lots of potential applications - many of those are speculative at this stage, but with a little bit of imagination we can see use cases across medicine, law, indeed the finance industry, military domains, a very broad range of sectors. It is also the sort of technology which could spawn wide and long-lived spillovers that then create further innovations in industry. It could, for example, be not just a use case in its own right, but also a new mode of invention, something that actually helps generate new insights and new technology. Think here, as a comparison, of the technology of the lens: its primary use wasn't improving glasses, it was generating a new mode of discovery through the invention of microscopes and telescopes. So that's also a crucial feature of a general-purpose technology, and one which AI seems to share. On the other hand, it's possible that AI isn't a GPT on the scale of steam, electrification and computing.
Perhaps many of its use cases are, at this stage, too speculative. Perhaps a lot of the hype currently around large language models is leading us down something of a blind alley. Perhaps chatbots are the kind of magic trick of today, but actually they suffer from problems like hallucinations, or they can't actually have application across a very wide range of domains. They don't, for example, have the ability to do calculations; they only draw on a pre-existing set of output.

 

Luke Bartholomew

But assuming it is a GPT in that general-purpose technology sense, the range of impacts that it could have on productivity is pretty wide. There are estimates ranging from basically 0% - even if it were to be a general-purpose technology, perhaps the impacts are pretty small, or they're spread over such a long period of time that the impact on annual growth is extremely small - up to a 3% increase every year in the rate of growth. And it is worth stressing that if you were to see 3% added to the rate of growth every year over several decades, the kind of compounded change that would bring about is absolutely transformative. It's almost impossible to imagine the degree of change that that would bring about. So it really is quite a wide range of impacts, with the possibility of being quite radical in what it brings.
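To get a feel for just how transformative that compounding would be, here is a quick back-of-the-envelope calculation. (The 30- and 50-year horizons are illustrative assumptions, not figures from the discussion.)

```python
# Compound effect of adding 3 percentage points to annual growth:
# an economy growing 3% a year faster ends up (1.03)^n times larger
# than it otherwise would be after n years.
extra_growth = 0.03

for years in (30, 50):
    multiplier = (1 + extra_growth) ** years
    print(f"After {years} years: {multiplier:.1f}x the baseline path")
```

So an economy growing 3 percentage points faster would be roughly 2.4 times richer than the baseline path after 30 years, and roughly 4.4 times richer after 50, which is why even the upper end of those estimates implies an almost unrecognisable economy.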

 

Paul Diggle

Yeah, so I want then to move us on to our second big topic area, which is: what might be the sectoral economic impact of AI? And I think you can split this into the short term and the long term. In the short term, a useful typology is that winners from AI fall into three categories. First, 'enablers' - these can be companies like chip manufacturers, or the designers of AI systems themselves. Then 'scalers' - those who are able to apply AI at scale, and many of the well-known tech platforms are scalers. And then finally, 'early adopters' - companies that are already at the leading edge of applying AI in their own business processes, so a lot of software companies will be in that category. And indeed, that narrow group of winners in the S&P, the US stock market index, at the moment largely falls into these three categories. Over the longer term, I think we're going to think about sectoral winners as likely to be those parts of the economy where there are large numbers of highly trained knowledge workers doing technical but essentially repetitive tasks that draw on a corpus of knowledge. And those could be finance (perhaps concerningly for ourselves, Luke, as one of them), education, healthcare, the law. I think those are the sectors in which, certainly at least capital in those sectors, perhaps not necessarily labour, could be significant long-term beneficiaries of the AI revolution.

 

Luke Bartholomew

So this distinction between the impact on capital and the impact on labour perhaps brings us onto the third section: the impact on labour markets. Typically, a lot of the commentary around AI and its impact on labour markets has been about job destruction - the way in which AI is going, to coin a phrase, to 'eat all the jobs'. But I think that's quite a narrow framing, to look only at job destruction. Typically, when we think about the impact of technical change, we see job destruction, of course, but also job enhancement and job creation. And it's important to think about all three aspects when weighing the aggregate impact on the labour market. So, as I say, it is almost certainly the case that some jobs will be destroyed. To some extent, that's just where some of the productivity-boosting impact comes from: you can do things more efficiently, so you need fewer workers to do them. And that is the story of technical change since the first industrial revolution. But significantly, there has been very little evidence of a long-run rise in unemployment over that period. Certainly we have swings in unemployment, but that's much more of a cyclical factor around the business cycle rather than a reflection of technological change.

 

Paul Diggle

And it strikes me, Luke, that worry about the job destruction effects of AI potentially falls into what economists call the 'lump of labour fallacy'. This is the idea that there is only a set number of jobs in the economy - it's often cited in the context of migration flows and worries that economic migration into an economy takes jobs away. And generally, that is a fallacy of economic understanding, in the sense that there isn't actually a fixed number of jobs in the economy. The number of jobs can expand or contract depending on aggregate demand, and worrying too much about AI's job destruction effects potentially falls foul of that same fallacy.

 

Luke Bartholomew

Indeed, as you say, part of the reason that we think of this as a fallacy is that it ignores both the enhancement and creation effects. The enhancement effect is the way in which AI can aid workers in some of their tasks. Basic examples are perhaps summarising or writing certain bits of text, or helping with research, which frees up human workers to concentrate on the really value-added, high-productivity activities. So perhaps that's lawyers looking at case precedents or, dare I say, economists doing data analysis, or whatever it might be. It just makes us better at doing our jobs. And then on the job creation front, of course, there are the direct jobs created by AI - programmers or perhaps, given those 'hallucinations' that you were talking about earlier Paul, a new set of jobs with people checking that the AI isn't hallucinating certain facts. But frankly, those are likely to be relatively small in a macro sense. I think the much more important way in which this works is indirect: just by virtue of being a more productive society, we're also a richer society, and so we can just do more things. There are more barristers, or baristas, or whatever else it might be that the economy needs. We're able to sustain more employment, more different kinds of demand, and produce a different range of goods with this wealthier, more productive society. But it is important to stress that whilst how wealthy a society is ultimately turns on its productivity, that framing does sometimes elide important distributional questions. The ways in which those productivity gains are allocated are ultimately mediated through particular institutional contexts - the way in which the market is structured, the kinds of laws, the kinds of norms that govern our society.
And this is, I think, particularly important in the AI context, partly because of those winners and losers that we've pointed to already, Paul, but also because we have a bit of historical precedent that shows us how this might work. There's quite a famous period in economic history sometimes referred to as 'Engels' Pause', which occurred during the first industrial revolution - think the early 1800s or thereabouts. During this period, there was extremely rapid productivity growth as a consequence of that first industrial revolution, but living standards stagnated. So the gains from productivity were not flowing through to workers. And I suspect it's not entirely coincidental that this was also the period in which Engels and Marx were writing about the Industrial Revolution and the impact it might have. And I think what this gets at is that the way in which the benefits of AI are distributed, and the impact it's going to have on people and the structure of the labour market, will turn to some extent on the particular political institutions that we end up developing. And there's no reason to think those we have at the moment will remain static in the face of the kind of economic changes that we're seeing.

 

Paul Diggle

Indeed, and that in fact gets us on to our fourth big topic, which is how AI changes government policy and regulation. So we've talked about productivity, sectors and the labour market, but actually AI is going to do a lot to change what governments themselves are doing. I think there are a few important things to say about the near-term picture here. First, governments really face a pacing problem: the speed of technological change is so rapid that it's going to be extremely difficult for government policy - whether that's labour market policy or the regulation of AI itself - to keep up. So in many ways governments are playing catch-up here just because of the pace of change. And indeed, what regulatory efforts there are, are pretty nascent - at quite an early stage. The European Commission is negotiating with member states to approve an AI Act that would, if passed, be the first comprehensive AI law in the world. In the US, the White House has introduced a non-binding AI Bill of Rights. In the UK, there are also preliminary efforts in this area. And generally speaking, what all these regulatory and government efforts focus on is human oversight of autonomous systems. So that human-in-the-loop element could get at checking for hallucinations, but more generally it means introducing a human element into AI-automated decision making as a kind of safety check. These regulatory efforts also try to tackle responsibility for AI decision making. The classic example here is: if an autonomous car causes an accident, with whom does the responsibility for that accident lie? That's still a much-contested part of AI regulation. Then there's the transparency of decision making - to the extent that AIs are black boxes, it's very hard to tell why they came to certain decisions. So I think regulators want to open up that black box and allow us to understand how decisions are made.
And related to that, a lot of efforts focus on privacy and, indeed, bias considerations, because these are profound potential problems with AI. I think the big point here is that a wave of AI regulation is coming. Governments will want to catch up in this area, and the current free-for-all will soon be regulated. But it's not just within-country politics and regulation that's going to get changed by the AI revolution, because our fifth big topic under this heading is geopolitics. The size of some of those economic impacts that you outlined, Luke - a potential boost to productivity growth as high as 3%, or perhaps smaller in more plausible scenarios - is such a big change either way that it's going to profoundly change the shape of the global economy, depending on who, and most of all which countries, can harness those gains. Also, AI's dual-use military and commercial applications, and the physical location of where the hardware is produced, all mean that AI is a geopolitical issue as well. And to give a few specific examples: we've done a fairly recent Macro Bytes episode on microchips, and on the importance of Taiwan in the global microchip ecosystem, and this all ties in with AI as well, as we covered on that episode, because the most advanced chips are so important to training AI systems. It's another reason why Taiwan is such an important location in terms of global geopolitical focus. But it's not just a hardware issue: the development of AI software could become something of a cyber arms race between countries. The location of software developers, the security of the intellectual property that they produce - all of these, I think, are politically sensitive issues. And then finally, the values embedded in AI systems are also a geopolitical issue.
It's not just about protecting privacy and preventing bias; it's also about broader political values that could get embedded in AI decision making - the uses to which AI is put in terms of, say, surveillance, or legal systems, or policing, or indeed military applications. So all of these, I think, are reasons why AI is also a geopolitical issue.

 

Luke Bartholomew

And then, finally, looking a little bit past those regulatory issues that you were discussing, Paul, as part of domestic politics, I think we can be thinking even bigger picture over the long term, in terms of the kind of social changes that AI might bring about. Now, of course, Engels' Pause came to an end. Productivity growth and real wage growth tended to equalise over time, and the gains from industrialisation were broadly spread. But that only happened because the situation was unsustainable - the gap that had opened up between productivity growth and living standards. Something political needed to be done. And there's a sense in which the political order, the institutional order, is endogenous to what's happening in the economy. So, specifically, you got the rise of trade unions, you got an extension of the franchise, and ultimately you got the rise of the welfare state that came from all of this. Particular mechanisms had to be built into society to ensure that the gains from significant productivity growth were shared widely. And you can imagine similar things occurring were the rate of AI-driven growth to be as rapid as some of those estimates suggest. At the very minimum, if we are growing at that rate, you have to imagine the degree of job destruction is such that people are going to experience quite significant change during the course of their careers. That will need labour market institutions that ensure training and skills are provided throughout a career - and indeed, the skills one would learn in formal education as a child are unlikely to be particularly useful by the end of a career that has seen that degree of productivity growth. The economy is going to look so very different.
And in the, I guess, somewhat more extreme cases, people talk of radical changes to the nature of the welfare system, up to and including a minimum income, or things like this, to ensure that purchasing power continues to be allocated broadly and allow the economy to continue to operate. Now, naturally, these are highly speculative thoughts and are going to play out over a very long horizon. So no doubt we will have more to say on all of this in Macro Bytes in due course, but I think for now that is all we have time for in this episode. So, as ever, please do subscribe and review us on your podcast platform of choice. And then all that remains is for me to thank you for listening. So thanks very much, and speak again soon.

 

 

 

This podcast is provided for general information only and assumes a certain level of knowledge of financial markets. It is provided for information purposes only and should not be considered as an offer, investment recommendation, or solicitation to deal in any of the investments or products mentioned herein, and does not constitute investment research. The views in this podcast are those of the contributors at the time of publication and do not necessarily reflect those of abrdn. The value of investments and the income from them can go down as well as up, and investors may get back less than the amount invested. Past performance is not a guide to future returns; return projections or estimates provide no guarantee of future results.
