DeepSeek and the AI arms race
The emergence of DeepSeek, a Chinese artificial intelligence (AI) model promising transformative cost savings and strong performance relative to other large language models, has raised a multitude of questions.
February 2025
Does DeepSeek represent a game-changer for the AI industry? What geopolitical implications might it have? And how does its arrival impact existing market players in the IT ecosystem?
To discuss these topics and more on the future of the AI industry, we’re joined by Richard Windsor, founder and CEO of Radio Free Mobile – an independent research firm specializing in the digital and mobile ecosystem.
Tim Graf (TG): This is Street Signals, a weekly conversation about markets and macro brought to you by State Street Global Markets. I'm your host Tim Graf, European Head of Macro Strategy.
Each week, we bring you the latest insights and thought leadership from our award-winning suite of research, as well as the current thinking from our strategists, our traders, our business leaders, and a wide array of external experts in the markets. If you listen to us and like what you're hearing, please do subscribe, leave us a good review, get in touch with us, it all helps us improve what we hope to bring to you. And with that, here's what's on our minds this week.
Last week, we touched briefly on the emergence and impact of DeepSeek, an AI model developed in China whose release two weeks ago briefly shook markets at the beginning of last week. Its apparent offering of comparable performance to other large language models for what appears to be an inexpensive infrastructure and R&D cost was initially viewed in one of two ways – either as an opportunity to lower the cost for those who leverage AI, or as a threat to chip producers who might be facing lower future tech spend.
As is so often the case, there is more here than meets the eye. So this week, I wanted to take a much closer look at what makes DeepSeek a unique entrant to the AI ecosystem and whether it actually is a threat. And I also wanted to talk to someone who knows a lot more than me about AI to have a much broader discussion about whether any long-term blows might actually be struck against the competitive moats that the large-cap tech names have built around themselves.
Joining me to do that is Richard Windsor, who has as good a grasp on the AI landscape as anyone. Richard is the founder and CEO of Radio Free Mobile, an independent research producer specializing in the digital and mobile ecosystems. The blog Richard writes, radiofreemobile.com, is central daily reading for me and for anyone interested in AI, technology and the companies driving innovation in these most important sectors.
Hi, Richard.
Richard Windsor (RW): Hello, Tim.
TG: I really appreciate you doing this. Nice to meet you. I figured this was a busy enough week for you, given you cover a lot of the names that are going through earnings season as it is, and then you had all this news on Monday.
RW: Keeping up with AI has been a real challenge.
TG: I mean, yeah. Look, as background, I'm not a complete neophyte to this, but I work mostly in global macro, and just trying to keep on top of AI as a part-time job is brutal. I can't imagine what it's like for someone covering it full time.
RW: Well, yeah. Basically, I cover five verticals, AI is one of them, and at the moment it's taking up almost all of my time.
TG: I appreciate it. It's a very busy time for you. We're going to talk a lot about DeepSeek, and I appreciate that by the time this actually goes out next week, some of the things we talk about may well have evolved, given how quickly this is moving.
RW: Yeah, it's possible. The honest answer is, it's difficult to tell. Yeah, we shall see. I'll tell you what, I wrote about DeepSeek on the 21st of January. No one gave a ****.
TG: Yeah.
RW: And then suddenly, Monday the 27th, whatever it was, the world's coming to an end.
TG: Exactly. Yeah, the app charts on Apple made all the difference in the world, I think.
RW: That's right. And the popular media got hold of it, and that was it. Boom.
TG: Yeah. And it's hard to believe that was only Monday. We're recording this on Friday, the 31st of January. And I don't really want to go over, look, this has been covered in the media in terms of why this is important. Effectively, we have a new entrant into a field that is ostensibly producing something of similar quality with lower cost of goods sold, to use the old accounting term, I think.
RW: And R&D.
TG: And R&D, yes. So as an opening general question: this has been described as something of a Sputnik moment for the sector, something that makes people sit up and take notice because of those factors I just went through. But, all things equal, do you think this emergence of DeepSeek is more of a threat or an opportunity?
RW: Well, it depends, as you say, it depends who you are. If you are OpenAI, which has been burning billions and billions of dollars over the last couple of years, obviously it's a threat. Because my suspicion is, although the model has gone into open source, the secret sauce has not. So I suspect this may prove to be somewhat difficult to replicate.
And if you then follow that to its logical conclusion, it would mean that Chinese companies, DeepSeek in particular and maybe its partners, would be able to provide artificial intelligence services at much lower cost than anyone else. And if you are a provider of an artificial intelligence service like Anthropic or OpenAI or Google, you can see how this would put significant pricing and margin pressure on your product. So from that perspective, absolutely a threat.
Flip that on its head and look at it the other way. If you're someone like Qualcomm or MediaTek or NVIDIA, you could view this as an opportunity, because if you can replicate those techniques that DeepSeek claims to have perfected, then obviously you can offer much more AI processing, quality of service and complexity for a lower cost. So from that perspective, potentially it is an opportunity.
Net-net, if I was to say opportunity or threat, net, it's more of a threat than it is an opportunity because of the geopolitical environment in which we find ourselves.
TG: Okay, great. We're going to definitely talk about that in a second. I wanted to just finish off the point, because the response to this has been, oh, this is great. You mentioned OpenAI. You had Sam Altman talking about this being a great thing. Satya Nadella, very positive about this, talking about it in the context of cheaper AI creating its own demand and it being a good thing.
Do you think their enthusiasm is then a bit disingenuous, or do they truly believe that this represents a new leap forward for everyone?
RW: No, there is an argument for that. If you just look at the internet, for example, when the marginal cost went to zero, the internet really took off. So you can make an argument for that.
However, what I would say is temper that with a little bit of reality, which is, if the media is to be believed, OpenAI is currently in discussions to receive US$25 billion from SoftBank, so you're not about to undermine your own valuation now, are you?
TG: Yeah, yeah, fair, fair. Okay, well, let's talk about the verification process of DeepSeek. That's really, I think, the crux of the matter here. And you've written about this on your blog, Radio Free Mobile, and I would encourage people really to read Richard's blog on a daily basis. It does highlight things in close to real time as far as what's developing.
You've talked about verification of DeepSeek's performance, and I wanted to go there and talk about whether you see this performance of DeepSeek as an improvement versus other large language models as a starting point.
RW: If you take the paper that DeepSeek has published, and bear in mind it's not a scientific paper; like OpenAI's papers, it's a marketing document. It's very important to remember that, because it's not published in a journal and it hasn't been peer reviewed. Setting that aside, if you take the paper at face value, DeepSeek has been able to substantially reduce the cost both of training a large language model on NVIDIA silicon and of running inference, which is basically what happens once the model is trained and people start asking it questions. Those are the innovations.
Now, if you look at the actual innovations that DeepSeek is claiming, it's firstly about the methodology, how it architected the model, and secondly about the technical tricks it applied during the training and inference processes. What is going on right now is everyone rushing off to see if they can replicate any of this. Will they be able to? I have no idea. Very difficult to tell.
Again, the geopolitics plays into whether or not it will actually be replicable. It is one thing to offer your model to the open-source community. But what you've offered is 671 billion parameters that take up, I think, about 700 gigabytes of space. Delving into the architecture when these things are pretty much a black box is going to be pretty tough.
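As a rough sanity check on those figures, the parameter count and the file size quoted here are broadly consistent if you assume the weights are stored at roughly one byte per parameter (for example in FP8). The sketch below is illustrative only, since the exact on-disk size depends on the precision and file format actually used.

params = 671e9              # reported DeepSeek parameter count
bytes_per_param = 1         # assumption: FP8 weights, roughly one byte per parameter
size_gb = params * bytes_per_param / 1e9
print(f"~{size_gb:.0f} GB of weights")   # about 671 GB, in the ~700 GB ballpark quoted above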
TG: The spend that has apparently gone into the development of DeepSeek is mooted as being a fraction of the cost of training other LLMs, and the compute power, in theory, a fraction of what other LLMs have used, as you've alluded to. But the story we're reading today is that the compute might have been obtained via Singapore, with chips routed through Singapore rather than China getting them directly, as in theory it cannot do. What are your thoughts on this? How likely is that story?
RW: Right, let's start with the chip in Singapore thing first. I think it's unlikely that DeepSeek trained this algorithm on the H100 chip.
Just for those who don't know, the H100 chip is currently restricted; you're not allowed to sell it into China. There is a version of the H100 called the H800, with the only real difference between the two being the chip-to-chip bandwidth: roughly 600 gigabytes per second on the H100 versus 300 gigabytes per second on the H800.
If you look at what DeepSeek has done, it has put in some very specific optimizations to try and mitigate the impact of that lower bandwidth. If it had been training on H100s, there would have been no upside for the company in doing that. So my suspicion is that it actually trained this on H800s.
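To make the point concrete, here is a purely illustrative bit of arithmetic. The bandwidth figures are the ones quoted in the conversation rather than official specifications, and the payload size is an arbitrary placeholder, but it shows why halving the link speed roughly doubles the time spent moving data between chips during training, which is the overhead the optimizations described above would target.

payload_gb = 50                      # hypothetical amount of data exchanged between chips per training step
for name, bandwidth_gb_per_s in [("H100-class link", 600), ("H800-class link", 300)]:
    seconds = payload_gb / bandwidth_gb_per_s
    print(f"{name}: {seconds * 1000:.0f} ms to move {payload_gb} GB")
# Bandwidth-mitigation tricks only pay off on the narrower link, which is why their
# presence in the code points towards training on H800s rather than H100s.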
TG: Another question I have about that, then, is thinking about the future. Will this genuinely represent a true lowering of future CapEx needs, if this is a replicable model that is shown to be as efficient and effective as it looks so far?
RW: It's a good question, and it really depends on demand. Your natural assumption would be that if the price of AI comes down significantly, then demand for that AI would increase by a similar amount. We saw this with the internet, and there is a very good argument for that to happen, which is the argument that Microsoft and Meta and all of the others are basically making on their quarterly conference calls.
The flip side is that AI at the moment is horrendously expensive to train anyway. People are burning billions of dollars to try and gain initial traction in a nascent market. So it's quite possible that demand may not increase by as much as cost has fallen, which will put pressure on those offering their services using the older technology or techniques of training.
So the big question is, can people outside of DeepSeek replicate the results that DeepSeek is claiming? Now, another thing to bear in mind is that the comparison between OpenAI and DeepSeek is not apples to apples. People have got this one wrong, in my opinion. If you look in the paper, DeepSeek claims it trained the model for US$6 million. What it is referring to is the cost to rent the chips.
TG: Yeah.
RW: That's it. And people are saying, oh yeah, but it cost OpenAI US$500 million to train GPT-4o. And that's based on the amount of money the company raised, which includes all kinds of other things like operations, and architecting, and R&D, and all that kind of stuff. So it's not an apples-to-apples comparison. I've done a very rough back-of-the-envelope calculation on what the real difference might be, and it's not 100x, it's about seven.
TG: Okay.
RW: I would say about seven times cheaper, based on the innovations that DeepSeek is claiming. But that's a very rough estimate. And I wouldn't put my hand up and say, yeah, I'm absolutely convinced that's right.
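To illustrate the shape of that back-of-the-envelope argument (and only the shape: the compute-share assumption below is an invented placeholder, not Richard's actual workings), stripping the non-compute items out of the headline figure is what collapses a roughly 100x gap into a single-digit one.

deepseek_chip_rental = 6e6      # the US$6 million chip-rental figure from DeepSeek's paper
openai_headline = 500e6         # the widely repeated US$500 million figure for GPT-4o
assumed_compute_share = 0.08    # hypothetical: the fraction of that headline that is comparable compute spend
openai_comparable = openai_headline * assumed_compute_share

print(f"naive ratio:         {openai_headline / deepseek_chip_rental:.0f}x")     # ~83x on headline numbers
print(f"like-for-like ratio: {openai_comparable / deepseek_chip_rental:.1f}x")   # ~6.7x, i.e. "about seven"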
TG: Okay. Well, let's close the topic off on DeepSeek with some of the political implications that you mentioned. The first one I wanted to start with was the timing of the release, which I think was actually the subject of the first piece you wrote about this.
You mentioned that it might have been very coincidentally, or maybe not so coincidentally, timed with the change of administration in the US. Can you elaborate on that a little bit?
RW: Yes. The reason is that there has been a lot of tit for tat going on for the last few years. For example, when Gina Raimondo, the previous Secretary of Commerce, visited China, Huawei mysteriously released a phone that had a seven nanometer chip in it, there was a huge fuss, and there have been several incidents of this nature.
DeepSeek released this on the day of the inauguration of the new administration. Is it a coincidence? Maybe. Was it deliberate? Maybe.
Whatever the intent, the timing undermines the credibility of DeepSeek, because you immediately start to ask the question: is DeepSeek cooperating in depth with the Chinese state? The Chinese state and the US state are at loggerheads, in what we would call an ideological struggle, not desperately dissimilar to the Cold War. And so consequently, if you can win PR points off each other, which has been going on for several years, that's obviously something they want to do.
There are signs that DeepSeek may be cooperating with the Chinese Communist Party. For example, the CEO has met very senior members of the CCP on numerous occasions. And all companies in China have to have members of the Communist Party somewhere on the board or in the management team. So there's always a link back to the state.
And then also finally, one thing to remember, DeepSeek has released its algorithm to open source, which basically means it's exported the algorithm. Now in China, there's a thing called the National Security Law, which prevents you from doing that without a license. So the CCP will have to have said, yes, you can export your algorithm, which again raises the question, why would the CCP want to give away the crown jewels?
TG: That's actually the next question I had was about those privacy concerns, because I can see how if you download the app, that is potentially fraught with privacy concerns. But the fact that it's open source, does that mitigate any concerns about privacy and data privacy specifically?
RW: It does somewhat. Right now, if you download the app on your phone, what you're really getting is an HTML front end for DeepSeek's R1 model, which is running on servers in China. So any data you put in goes straight to China. And I think it says that in the Ts and Cs.
If you download the open-source version, what you're doing is taking the model and running it somewhere else. So from that perspective, your data is not automatically going to China.
However, who's to say there aren't back doors or data funnels or anything else hidden somewhere in those roughly 700 billion parameters? Nobody knows. And because it's so large, it is very difficult to verify absolutely that that's not happening. We have seen this kind of thing before, particularly the claims made against Huawei about putting back doors and data leakage into the software it ships on hardware sold outside of China. So again, it's not a new story.
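For context on the "run it yourself" route described here, a minimal local-inference sketch might look like the following. It assumes the Hugging Face transformers library and one of the small distilled R1 checkpoints (the full 671-billion-parameter model is far too large to run this way), and the exact model identifier shown is an assumption rather than a recommendation.

# A minimal sketch of running an open-weights model locally, so prompts stay on your machine.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "deepseek-ai/DeepSeek-R1-Distill-Qwen-7B"   # assumed checkpoint name; substitute as appropriate
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

prompt = "In one sentence, what is inference?"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=100)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
# Running locally addresses the data-residency point, but, as noted above, it does not
# by itself prove that nothing undesirable is buried in the weights.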
TG: Is that something that could ever be found in that open source version?
RW: Yeah, you could find it, but given it has roughly 700 billion parameters, it might be difficult to find.
TG: It would take some time.
Final question on DeepSeek specifically. Given some of these political ramifications and the fact that it is part of this US-China rivalry and AI arms race, it seems, how seriously do you think the new administration will take this and will there be, in your view, any kind of competitive response at this stage?
From strictly a political point of view, obviously there will be a corporate and market competitive response, but from the political point of view?
RW: I think, from a political point of view, it might be tricky, simply because it doesn't appear that DeepSeek has broken the rules of the sanctions. What you might see is the US turn around and go, hang on a minute, the 300 gigabyte per second limit is not enough, maybe we should drop it to 100. You might see something of that nature.
Typically, this game of limitations and sanctions, this whack-a-mole if you like, where the US makes some rules, China dodges them, so the US puts in some more to stop China dodging, and China dodges again, and so on, plays out once a year, round about October or November. So if something's going to happen, I would expect to see it round about then.
TG: Okay. Well, I wanted to move on now to the sort of bigger picture, longer term questions about AI in general, and not so much about how DeepSeek affects them, but the broader issues. And if nothing else, the DeepSeek news has reminded us of the valuation concerns people have about tech, the concentration risk in US equities particularly, that people have started to worry about, especially over the last year.
Those concerns have been overcome by the strong fundamentals of those companies. Their ability to generate cash flow is unparalleled, and as we go through earnings season, it still looks pretty good. And that's where I wanted to go next: the moats of these companies we're talking about, and the key risk to those moats that I think DeepSeek highlights.
Let's start with a premise: we actually track institutional investor flows across all of the industry groups classified under the GICS classification system, and we did see, on Monday and Tuesday, massive selling of semis and tech hardware and a rotation into software, things like that. So I wanted to start on the hardware side.
Are there any emergent hardware competitors that can really threaten NVIDIA? I've read pieces about companies like Cerebras and Groq that are not public yet. Are they emergent threats in the next five years, do you think?
RW: So the answer is actually relatively straightforward, which is that, in the current generation, there is no threat to NVIDIA whatsoever. There are tons of competitors, all trying to eat NVIDIA's lunch, but frankly, in my opinion, no one has a chance of getting a look in.
And the reason for that is twofold. One, NVIDIA has a platform called CUDA, and all the developers who create AI use CUDA to do it. It's been around for nearly 20 years, it's very mature, and everyone knows how to use it. Consequently, it creates huge stickiness for NVIDIA's silicon.
The second thing is that NVIDIA's silicon is always at least one generation ahead of everyone else's. So it can credibly make the claim that, although we're making 75 percent plus gross margin on our silicon and our product, we're still cheaper than anyone else. And that is a credible claim.
In that environment, no one is going to lay a glove on NVIDIA, in my opinion, while CUDA remains so important.
Now, what is happening is that there are signs of change, which is that the large language models are converging in terms of their ability, and DeepSeek is just the latest example of that. There isn't much difference between GPT-4 and Gemini 2 and Llama 3.2 and Mistral and all the others. They're all pretty much equivalent in what they can do, which means they start to look like operating systems.
What these guys are saying is: don't bother worrying about CUDA or any of that stuff. Just come and use our API and develop on top of our foundation model. Take our foundation model, do your fine-tuning, there's your model, and you don't need to worry about the silicon at all.
Now, if developers start to move in that direction, what it basically means is that it starts to erode CUDA's position as the control point, the thing that everyone demands. If that starts to happen, and there are signs of it, then NVIDIA's position will start to materially weaken. And that's how the market share in AI training silicon, I think, could become more distributed among the major players.
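For a sense of what that API-first route looks like in practice, here is a minimal sketch of fine-tuning on top of a hosted foundation model without touching CUDA or the silicon at all. The client library, model name and training file are illustrative assumptions, not a specific vendor endorsement.

# A sketch of building on a hosted foundation model: the provider handles the GPU
# and CUDA-level work behind the API.
from openai import OpenAI

client = OpenAI()   # assumes an API key is configured in the environment

# Upload a small training set and kick off a fine-tune on a base model.
training_file = client.files.create(file=open("examples.jsonl", "rb"), purpose="fine-tune")
job = client.fine_tuning.jobs.create(training_file=training_file.id, model="gpt-4o-mini")
print(job.id)   # the resulting fine-tuned model is then called through the same API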
TG: And who are the companies that are developing software with potential to threaten CUDA? Is it just the other big tech companies or are there other ones?
RW: No, it's hardware and software together. CUDA is absolutely joined at the hip to NVIDIA silicon, because the silicon is how NVIDIA monetizes CUDA.
And you'll see the same thing for everyone else. Cerebras has a software development system, Amazon's Trainium has one, they've all basically got their own. When you look at the silicon itself, I don't think NVIDIA's silicon is particularly better than anyone else's. It's the software ecosystem built around it that makes it so powerful.
TG: The other question I had about hardware was the advances that are being made in quantum computing and high performance computing. Is that also still relying on NVIDIA chips, or are there companies unique to that ecosystem that might eventually be competitors in the kind of AI hardware race?
RW: Okay, I think quantum computing is a completely different kettle of fish. As far as I'm aware, NVIDIA hasn't even made a chip for quantum computing. There are two main routes of which I'm aware. One is the one that Google has taken, which basically uses a substrate of some description that runs at almost absolute zero.
And another approach that China has taken, which is basically massively parallel photonic computing. So quantum computing is a completely different ballgame. And frankly, I view it as the nuclear fusion of semiconductors.
You know, frankly, I don't think we're going to see anything in that area for 10 years or more, at least. So, you know, it's not something I worry about from a competitive dynamic.
TG: Okay. Well, just to wrap up those competitive dynamics and the market based outlook, I guess, for this, it sounds like NVIDIA is one of your winners long-term from the whole AI vertical.
RW: Certainly for the immediate term, while CUDA remains the control point, NVIDIA is the only game in town, in my opinion. The exception is something like Cerebras, which is very niche in terms of its appeal, because what it sells is massive chips with incredible performance. So if you've got a mission-critical system, something in the military, in government, or a financial institution that needs to compute really quickly, and you're happy to completely verticalize your entire software stack, then Cerebras may be the option for you.
TG: And what about within the large-cap tech names, just in relation to the overall AI outlook? NVIDIA is one of them, but I'm thinking more about the software companies. Who do you think are the leaders and laggards at this point?
RW: Part of the problem that you've got at the moment, let's take Google as an example. Google is investing incredibly heavily in this area, but if you are an investor in Google, you're still basically invested in Internet search. Yes, you could buy Google for exposure to artificial intelligence, but you're not really getting that kind of exposure.
I would make exactly the same argument about Apple, except probably greater.
Now, one company that I would argue has a much greater level of exposure is Meta Platforms. And that is striking, because historically Meta was really, really bad at artificial intelligence. In fact, I wrote pieces five or six years ago where we basically looked at their AI and said, this is just awful.
A lot has changed.
I would actually rate Meta Platforms now as one of the leaders in artificial intelligence. And what it is doing is implementing the AI it has developed to enable it to run much, much more efficiently. That's why you are seeing the company put up a 48 percent operating margin: even though it lost a massive US$5 billion on Reality Labs, its core business runs a 60 percent plus operating margin.
And one of the reasons it's been able to do that is that it is applying the artificial intelligence it has developed to its own operations, so it can basically do the same, but with fewer humans.
TG: It's possibly a stupid question. Is that because of Llama and the open source nature of that, or is that something completely different?
RW: That's something completely different.
The open-source decision by Meta is all about not being hostage to another platform. If you look at Mark Zuckerberg, one of the big problems he's had over the last five or 10 years is that he's been hostage to iOS and Android, which means that if Apple does something, it can mess around with his business.
And when Apple did exactly that with its "ask app not to track" change, the "ad-pocalypse" as I think it was referred to, it caused him a lot of problems. And that really ticked him off, to put it mildly. He's absolutely determined he's not going to be dependent on anyone else's platform again, which is why he's doing a full stack on Reality Labs, and it's one of the reasons why Meta put Llama out into open source: so that everyone would adopt it and Llama becomes one of the default places you go to develop artificial intelligence. That's really what it's all about.
TG: Well, just to close, I had a couple of fun philosophical questions that I wanted to ask just about AI, and particularly artificial general intelligence. This broad concept that we're all thinking about and potentially working towards, how close do you think we are to achieving AGI?
RW: Okay, so I'm going to go against the grain on this one. In my opinion, there is no evidence whatsoever that we are any closer to AGI than we were 10 years ago.
The reason why is very, very simple. AGI basically means that the AI can do 90 percent of human economic tasks, or something of that nature. The problem is that all generative AI today is based on a statistical pattern-matching system, which means it has no understanding of causality. It does not know what it is doing at a fundamental level. And that's why it makes all those horrible mistakes.
Just ask an AI to draw a picture of someone drawing or writing with their left hand and you'll see exactly what I mean. They can't do it because they don't understand what writing with your left hand is. And it is not until that problem is solved that we're going to get any closer to artificial general intelligence because intelligence requires an understanding of causality and the machines don't have it.
TG: Would we know if it was ever created?
RW: Yes, and this is why the concept of reasoning is so important. So, Satya Nadella and Mark Zuckerberg and Sam Altman and all his buddies all say that their models can reason.
I respectfully disagree. I say they can't. I say that they can simulate reasoning incredibly effectively, but they can't actually do it.
Why do I say that? For two reasons.
One, these models that they tout can all run, but they can't walk. So they can do the complicated PhD maths, but ask them to play noughts and crosses and, oh dear.
Secondly, they all catastrophically fail the simplest logical reasoning test, in my opinion, which is: if A equals B, then it follows that B equals A. When you empirically test them on that, they fail. That's why reasoning is important. They simulate reasoning, but they don't actually do it. Now, if we start to see hard evidence of real reasoning in these systems, that's a sign the causality problem is starting to be fixed, and then we're on the road to AGI. And that's why, in the daily that I write, I bang on about reasoning a lot, because that's my signal.
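As an illustration of the kind of symmetry check being described, the sketch below asks a chat model a fact in one direction and then in reverse and compares the answers. The client library, model name and example question are placeholder assumptions; any chat-style LLM API could be substituted, and the Tom Cruise/Mary Lee Pfeiffer pairing is simply a commonly cited probe of this failure mode.

# A sketch of the "if A equals B, does the model know B equals A?" check described above.
from openai import OpenAI

client = OpenAI()   # assumes an API key is configured in the environment

def ask(question: str) -> str:
    response = client.chat.completions.create(
        model="gpt-4o",   # placeholder model name
        messages=[{"role": "user", "content": question}],
    )
    return response.choices[0].message.content

# If the fact has genuinely been internalised rather than pattern-matched, the reversed
# question should return the same entity rather than a guess or a refusal.
print(ask("Who is Tom Cruise's mother?"))
print(ask("Who is Mary Lee Pfeiffer's son?"))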
TG: Very good. Well, I have one final question. It's actually not my question. It was given to me by actually the friend who recommended your blog all the way back about a year ago when I started reading it. He likes to posit that, “The simple economics of AI is to reduce the cost of prediction to zero”. Do you agree with that proposition?
RW: That is a very, very good question. If you want AI to be as pervasive as the internet, that has to happen, because that's how the internet operates.
Is it possible? Yes, it should be. It should basically be possible.
Because, again, when I look at how AI is implemented at the moment, and I think DeepSeek is an indication of this, we've written about it many times, it's incredibly inefficient, which means there's tons of low-hanging fruit for making the whole system much more efficient.
And part of the problem is that, because there's been this endless wad of money pouring into the Western AI sector, no one's really had to bother. If you run out of money, you just pop around the corner and get another 100 million. No problem.
And this is why I think China has been working on this. Because China has been hamstrung, unable to get access to the latest chips, and because money for this sort of thing is much shorter in China than it is overseas, the Chinese have had to work out how to do more with less. Which, arguably, is exactly what DeepSeek has just done.
So, from that perspective, yes, AI should be able to be much, much cheaper than it is today. And we need that for AI to become as pervasive as they would have us believe.
TG: And, sorry, I said that was the last question. It's not the last question. I wish I could take credit for that question as well, given you said it was a good one. This one is my question: what are you looking for, then? What are the kinds of things to look out for as we move towards that more efficient state?
RW: Good question. So I think the first thing is for the start-up companies to stop burning billions and billions and billions of dollars. And on that basis, when they actually start to move into some degree of profitability, it's a sign that they are starting to optimize and to run their algorithms much, much more efficiently.
DeepSeek's innovations are not actually new. It has taken a lot of stuff that has been well known in the industry for a while and tweaked it around the edges, changed it and optimized it. I think OpenAI hasn't even considered doing a lot of these things, because it hasn't been forced to; it's just been able to have wads of cash on tap.
So, from that perspective, that's really what I would look for: when they actually start to become economically viable, that's when the competitive nature of the market, combined with the efficiency of the implementations, starts to create a realistic economic opportunity.
TG: Richard, it's been so good to talk to you about this. I definitely want to do this again sometime. Radio Free Mobile is Richard's blog. He updates it, I'm pretty sure, just about every working day.
RW: Pretty much.
TG: Highly recommended. Please do check it out. That's how I got to see who he was and what he had to say. And I very much recommend you doing the same.
Richard again, thank you so much.
RW: No problem. Thanks.
TG: Thanks for listening to this week's edition of Street Signals from the research team at State Street Global Markets. This podcast and all of our research can be found at our web portal, Insights. There you'll be able to find all of our latest thinking on macroeconomics and markets where we leverage our deep experience in research on investor behavior, inflation, risk, and media sentiment, all of which goes into building an award-winning strategy product.
If you're a client of State Street, hit us up there at globalmarkets.statestreet.com. And again, if you like what you've heard, subscribe and leave a review. We'll see you next time.
Street Signals – our weekly podcast – brings to you the latest developments shaping the industry. In each episode, experts from the industry and State Street share their perspectives on market developments and key trends in the financial sector.