Chinese generative AI contender DeepSeek caused huge waves in markets across the globe at the end of January. Investors and analysts were caught flat-footed after the Chinese company, operating on inferior hardware, revealed a generative AI model comparable to leading Western models while claiming that development costs had been a fraction of those of its American counterparts. This week, with the help of Jonathan Rawicz and Henry Walters, we delve into the technical details behind why this development was such a revelation, and what the longer-term impacts of successful AI implementation might look like across sectors.
Speakers
Jonathan Rawicz and Henry Walters
Welcome to Beyond the Benchmark, the EFG podcast with Moz Afzal.
Moz Afzal:
Hi everyone. So today we are running an artificial intelligence special. Obviously there's a lot of noise around DeepSeek and lots of noise around developments in AI. So I'm welcoming today Henry Walters, who is our resident AI expert these days, and Jonathan Rawicz, who's our head of global equities. So Jonathan, Henry, welcome.
Henry Walters:
Thank you. Hi Moz. Good to be back.
Moz Afzal:
Yeah, I think it was roughly around this time last year, and obviously a huge amount has happened in the last 12 months around AI. So let's go straight into it, and the first question is around the noise: why have financial markets and investors got in such a tizz over DeepSeek? So DeepSeek, what is it?
Henry Walters:
Sure. So DeepSeek is a Chinese AI research lab, equivalent to an OpenAI or Anthropic, the big names in the US. It's been around for maybe a couple of years. It came out of, or has been affiliated with, a Chinese quant hedge fund that had a lot of experience in AI and machine learning, had its own data centre infrastructure and then pivoted towards AI. What shook the markets was the release of a couple of models and research papers in December and January, which culminated in a bit of a frenzy of social media activity pointing at these models. What they really showed, for the very first time, was that a Chinese AI research lab could produce a model that was close or equivalent in performance to leading Western models. So R1 was their reasoning model, and we can get into what that means specifically, but it could achieve similar performance to an OpenAI o1 at maybe three to six months behind. Particularly from a geopolitical perspective, the fact that it was Chinese was pretty significant, given all the US export restrictions and policy targeting Chinese AI advancement, so that was a bit of a shock.
And the other widely debated point was how much they spent to train these models. They are very open with their research and their disclosures, more so than most other firms, and they released a figure showing the final training run of the V3 model cost them $6 million. Nothing was disclosed for the R1 model. And this almost definitely doesn't reflect the entire cost of the infrastructure and all the R&D and other experiments they ran to train these models. But even so, it's probably an order of magnitude cheaper to serve these models than the leading models in the West. So what does that mean? The initial reaction was: well, if we can achieve similar performance from these models at a fraction of the cost, have we overinvested? Do we have too much Nvidia infrastructure everywhere that is made almost redundant by some of these advances?
And so far the reaction, and we can talk about infrastructure investment, has been no: a cheaper cost to serve means only an expansion of demand and capabilities, more than offsetting that. I think one useful data point is the initial GPT-3-level model, which came out in 2022. Initially the cost to serve that was around $60 per million output tokens. Over two and a half years that steadily decreased, and the cost now is at least 1,200 times cheaper, roughly five cents per million output tokens at a similar level of quality, and demand has far more than offset that cost reduction. So there's obviously a lot of talk about how they achieved these optimisations, but I don't think the surprise was that the cost has continued to fall. The surprise was that a Chinese lab did it rather than Meta or someone else. So initially that's been the point of debate around DeepSeek.
Moz Afzal:
So just maybe recapping some of the points you made there. For those people who actually knew what was going on, it wasn't a big surprise; people who had been tracking DeepSeek, their models and the papers they had been publishing were not necessarily surprised. But it suddenly seemed to capture the imagination of everybody. And I also find the timing quite interesting, just before the Chinese New Year holiday, in terms of the reaction in markets as well as the reaction to the papers. But let's put that aside for now. The added point is that, because they are Chinese, with all the embargoes on the most advanced chips produced by Nvidia and others, they did it on, if you like, second- or third-generation chips rather than the newest ones. Can you talk through how they were able to do that, and what were the sort of secret things they did to advance their speeds?
Jonathan Rawicz:
Yeah, sure. So on the infrastructure, there's a lot of debate around what they have. It's entirely possible they did everything within the rules and only bought chips that fully met all the export restrictions. But even then, those are still very good chips; they're not that far behind, because these things are trained with a lag. So they still have access to a lot of Nvidia Hopper chips. They haven't fully disclosed how many they have, but even if they have less than the largest clusters here, they're not that far off. In theory, what they've done is a lot of innovation at the software level: software optimisations and methods to architect these models in innovative ways. One, not to get too technical, was greater use of something called mixture of experts, which makes the models more divisible into subsections so they run much, much more efficiently.
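As a rough illustration for the technically minded: a mixture-of-experts layer routes each token to only a few small expert networks, so only a fraction of the model's parameters are active per token. The minimal PyTorch sketch below is an illustrative assumption, not DeepSeek's actual architecture; the dimensions, expert count and top-2 routing are made up for clarity.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MixtureOfExperts(nn.Module):
    """Illustrative top-k mixture-of-experts layer (not DeepSeek's real code)."""
    def __init__(self, dim=512, num_experts=8, top_k=2):
        super().__init__()
        self.top_k = top_k
        # Each expert is a small feed-forward network.
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(dim, 4 * dim), nn.GELU(), nn.Linear(4 * dim, dim))
            for _ in range(num_experts)
        )
        # The router scores every token against every expert.
        self.router = nn.Linear(dim, num_experts)

    def forward(self, x):  # x: (tokens, dim)
        weights = F.softmax(self.router(x), dim=-1)        # routing probabilities
        top_w, top_idx = weights.topk(self.top_k, dim=-1)  # keep only the top-k experts
        top_w = top_w / top_w.sum(dim=-1, keepdim=True)    # renormalise their weights
        out = torch.zeros_like(x)
        for slot in range(self.top_k):
            for e, expert in enumerate(self.experts):
                mask = top_idx[:, slot] == e               # tokens routed to expert e
                if mask.any():
                    out[mask] += top_w[mask, slot].unsqueeze(-1) * expert(x[mask])
        return out

tokens = torch.randn(16, 512)
print(MixtureOfExperts()(tokens).shape)  # torch.Size([16, 512]); only 2 of 8 experts ran per token
```

Because only two of the eight experts run for each token, the compute per token is a fraction of what a dense layer of the same total parameter count would need.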
And there are other innovations. I think one comparison versus Western firms is that the export restrictions have necessitated certain optimisation efforts: they can't simply buy more and more, so they have to make better use of what they have. Western labs typically think they have effectively unlimited budgets; they can just buy bigger, and they don't need to spend so much time on optimisation when they're racing so fast to scale everything. That being said, these software advancements have been very openly published. Mark Zuckerberg says Meta is going to try and copy everything they've done and implement it in their systems, and DeepSeek has effectively published a bit of a guide on how to do it. So these things should be adopted more broadly.
I also think the other important thing was that one of the limitations on the chips Nvidia was allowed to sell to China was essentially on their ability to network together: the data transfer rates between chips were limited. That was one of the key differences in terms of the export restrictions. So what was really surprising was how the DeepSeek guys went below the software layer, right down to a very, very low-level programming language, and got around the networking issue. They basically hacked the chips to get performance that nobody expected to be possible. And I think that was one of the other things that scared people, because the bull case around Nvidia is that its moat comprises hardware, the chips, software and networking altogether. So when somebody saw that there was basically a way of getting round the CUDA software, which is their USP, the moat came into question. That was one of the other things that drove a very big negative reaction in the stock, and it really highlighted to people that if you needed to, like the Chinese did, you could be innovative and get even more performance than expected. So perhaps you don't need as many chips in the big scheme of things, because you can get them to do more. That was one of the other things that mattered.
Moz Afzal:
And I think associated with that was all of the infrastructure that's needed. If you can do it quicker, faster and cheaper, it means you don't need as much energy, so you actually had a big reaction in the utility names as well, because electricity usage was probably not going to be as high. Look, I think all these, I guess, are theories. One thing we certainly do know about technology is that the cheaper it gets, as Henry mentioned earlier, the more adoption you get and the faster the adoption curve. And I think that's actually a really critical point here. The debate, certainly from my perspective, is: does this mean AI gets even faster adoption because it's cheaper, because we found a way to make it cheaper and more efficient, and what are the winning and losing components of this? And I guess for me, and we don't need to go into too much detail, my view on this is about margin and volume.
So at the moment we know the margins for the infrastructure are very, very high, probably the best pricing margins we've seen in the industry, I suspect ever, because there's such a huge shortage and such huge demand. Margins may suffer in the industry as a result of this, but it then means they more than make up for it in volume. And that's always been the technology holy grail: margins high, volume high, and they've got to meet somewhere in the middle. And certainly if you talk about semiconductors, we also know it's a commodity business, so eventually there is excess supply over demand and then there's a correction, just like there is in normal commodities. So to me, in the medium to long term, the debate is margin versus volume and what is a fair price for that. I suspect the volatility and the debate will run along those lines. So let's pivot back to the infrastructure requirements. Of course, within AI we'll talk about the applications and why people are doing this in a moment. So Henry, what about cloud infrastructure?
Henry Walters:
Yeah, so the biggest pool of profits from AI so far has been the infrastructure vendors, and that hasn't really changed. We're still waiting for full-year 2024 numbers, and quite a few companies have reported, but roughly speaking, capex on AI compute infrastructure in 2024 is probably in the region of $250 billion, maybe $300 billion, across regions, which is over a hundred percent growth year over year versus 2023. So a massive increase. And within that, the largest players are the well-known hyperscalers; they're still building as much to demand as possible, so that capacity is still being utilised heavily. We can talk through what that means for adoption, but that's, at a high level, where we are in terms of capex. And again, the companies whose earnings we've seen have revised their capex estimates upwards.
Moz Afzal:
And I guess, Jonathan, the question is: are we going to see an ROI on this investment? And the big question is how long the delay is before it really shows.
Jonathan Rawicz:
I think we're going to see ROIs at different stages in different sorts of businesses. We know already that Meta has got an ROI on all the Nvidia chips they've put in; they essentially needed to completely redevelop their ad recommendation engine after changes Apple made in late 2022. So they were early, they invested, their business is growing very fast, and they're able to offer very good return on ad spend for their customers. So we know they're making money out of it, and they were already starting to talk about generating ads using AI. So it's not only targeting, it's actually content. It's very clear they're making money out of this. As for the other guys, the big hyperscalers, we know, as Henry said, that there's more demand than supply, and we can see very healthy growth rates in actual cloud revenues, north of 20% in most cases.
And we also know that the mix is shifting in those cloud providers' sales: more and more people are buying cloud capacity to run AI models. What we don't know yet is whether those runs on the cloud are themselves generating value. We suspect they are, because we have many use cases, which we can talk about, that we know will probably save companies a lot of cost. So potentially not increased sales yet, that's to be determined, but we know costs are coming out in a lot of different areas. So there is some ROI. It's very hard to measure; it's very deep in the income statements of many companies. Some companies are giving us indications that it's going to pay off. But the uncertainty in the whole trade is: will that continue, will that expand? If it does, then AI is a massive structural bull trend that will continue, and if it fades, it won't. So that's the real question.
Moz Afzal:
So let's unpack the rationale for why they're spending so much money. The first piece, which I think we can all see tangibly, is cost: we can all save a bit of cost, so let's tackle that cost piece first. Let's take the hyperscalers, because my view is that the hyperscalers know AI better than anyone else; they're the first ones to actually use it in their own businesses. And you've already seen many software companies, for example, talking about cutting costs. We had a cloud software company called Workday yesterday announce steep job losses. You can see that those companies are the first to adopt AI because they know how to use it, if you like, to cut costs. So let's talk about how hyperscalers are saving costs, and then how other companies might save as well.
Jonathan Rawicz:
Okay, so I think the most obvious, clear value creation from all of this is in software coding. We know that not only the hyperscalers and not only software companies but all companies are implementing software to try and run their businesses more efficiently. And software has been very expensive, because you need to hire very technically advanced people who spend a lot of time and effort writing code and getting that code to work. What's really radically changed is that products have launched, for example Microsoft's GitHub Copilot, which is essentially a chatbot, but a chatbot that produces incredibly good-quality code. So you can put an individual programmer alongside this product and get them to be ten times more effective. What that means is that the demand for software programmers has essentially gone down, so you get cost deflation in software production.
So obviously the large hyperscalers are using that to enhance their platforms; Microsoft's using it to build its own pieces of software that it sells, et cetera, et cetera. The interesting thing most recently has actually been Meta's conference call, because Meta is starting to talk about creating basically an autonomous AI software engineer that will help it write code internally. It's very early, but it's going to move from just assisting, like a copilot, to maybe autonomously creating software. And that's going to be very interesting. That's right at the edge, but it's pretty obvious that this is a huge amount of value creation, just making software cheaper for all of us.
Moz Afzal:
I guess the other use cases around, say, software companies are agents, and different forms of agents. A lot of people listening to this podcast probably don't know what an agent is. Maybe, Henry, do you want to define what an agent is? There are obviously loose definitions as well; it seems to change quite a lot.
Henry Walters:
Yeah, it's a fuzzy definition that can vary depending on who's talking about it, but loosely defined, an AI agent is an AI model that you can set a task, and it will go away on its own and spend some amount of time iterating and working on it before coming back. So instead of you sending one query and it sending one response back, it's going to go, in theory, work on more of a longer-term project. Now that could be ten minutes, that could be half an hour, that could be hours or even longer. But in theory it's a bit more autonomous and can take more actions by itself without a human in the loop.
Moz Afzal:
So that's an agent, maybe you've got a use case. Give us an example of that.
Jonathan Rawicz:
I'll give you a very obvious one. Imagine having an AI agent that's a travel agent. You say, "I want to go to the Caribbean for two weeks," and that AI agent knows a bit about where you've booked before, and then it goes away, searches for flights, searches for hotels, makes all the bookings and sorts you out, without you spending any time on the internet going through that process. So it's something a company like Expedia or the airlines could implement that would just reduce the friction of buying the goods and services they sell or distribute, making it a whole lot easier for everybody, and that gives them a competitive edge. So it's just something that uses information it knows about you, knows what you're trying to do, and works through a series of steps itself to get a result.
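To make that loop concrete, here is a minimal sketch of an agent loop under stated assumptions: `call_model` and `search_web` are hypothetical stubs, not any vendor's real API, and real agent frameworks are far more elaborate. The shape, though, is the same: the model picks an action, the result is fed back, and it iterates until it declares the task done.

```python
# Minimal agent-loop sketch. call_model() and search_web() are hypothetical
# stubs so the example runs end to end; a real agent would call an LLM.

SCRIPT = [  # canned "model outputs" standing in for real LLM responses
    {"type": "tool_call", "tool": "search_web", "input": "two weeks in the Caribbean"},
    {"type": "final_answer", "content": "Booked: flights plus hotel, matching past trips."},
]

def call_model(history):
    """Stub: returns the next scripted action based on how many tool results exist."""
    return SCRIPT[sum(msg["role"] == "tool" for msg in history)]

def search_web(query):
    """Stub tool."""
    return f"3 results for {query!r}"

TOOLS = {"search_web": search_web}

def run_agent(task, max_steps=10):
    history = [{"role": "user", "content": task}]
    for _ in range(max_steps):
        action = call_model(history)            # model chooses the next step
        if action["type"] == "final_answer":    # model decides it is finished
            return action["content"]
        observation = TOOLS[action["tool"]](action["input"])       # run the chosen tool
        history.append({"role": "tool", "content": observation})   # feed the result back
    return "gave up after max_steps"

print(run_agent("Book me two weeks in the Caribbean"))
```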
Moz Afzal:
Alright. So in research, for example investment research, we would pose a particular problem, let's call it a macroeconomic problem, and it would go out, source all the data, do all the clean-up that's needed, and then produce a lovely graph for us. Something like that?
Henry Walters:
Yeah, there are some good products. Gemini and OpenAI have both released products, both called Deep Research, which are worth looking up and having a browse of what they can do. They can browse the internet, find a hundred different websites and produce quite interesting reports on what they've read. So yeah, definitely some products coming.
Moz Afzal:
So we have these agents. In terms of resource allocation from the hyperscalers and chips and everything else, how does it all come together, if you like?
Jonathan Rawicz:
The key implication of agents working is that the workloads massively increase, because there's a whole lot more that the AI does compared to what it does today, which is just: you ask it a question, it gives you an answer. There are steps in the process which might be very complicated and not necessarily linear. The AI agents may try a number of different solutions before they find the optimal one, and every time they try to solve the problem a certain way, that consumes compute, memory and energy. So all of that points to more demand for infrastructure, but it's obviously predicated on these agents' ability to ultimately solve problems. And so far there's plenty of evidence that they're making very rapid progress. So that's one of the things that holds up the bull case for the infrastructure guys.
Moz Afzal:
Certainly for the foreseeable future. So, next move. We talked about costs and how these agents can reduce friction. We talked about coding and how that can be automated, so AI can do the coding itself and find solutions for us, create a website and so on and so forth. So let's talk about the consumer cases rather than the cost cases. How do companies use AI now to, say, enhance the customer experience and therefore charge more for that service?
Jonathan Rawicz:
The classic second area where there's obvious evidence of value creation, other than software creation, is customer service. Customer service has obviously got two layers: one is selling you the product, and then it's servicing you once you're a customer. We know that AI is being embedded into the big customer relationship management software systems, which is enabling salespeople to target their sales much better, to understand their customer better and to customise the sales process to make it more efficient. That obviously saves companies money when trying to sell. Then on the back end, once somebody's a customer, call centres are obviously a horrible experience for most people: having to wait in a queue to talk to a person who doesn't know the answers to every single question is a painful process. So what we're starting to see is these agent-type functionalities being used on the back end, where you've got AI learning everything about the customer and all the problems customers have, and then automating more and more of that process of finding solutions for people when they call up with an issue.
And so obviously that makes the customer's experience much better, but it also saves the company a whole lot of cost; it removes a lot of the friction in servicing customers. We're starting to see that embedded particularly in some of the big software vendors who sell these products to companies, either on the CRM side or internally. There are large software companies that essentially sell systems that allow companies to manage their internal IT or internal HR. So having AI agents embedded in, for example, resolving IT issues within a company, or helping people manage HR processes, is very, very beneficial. We know those companies are working on agentic software and that they're seeing demand for that sort of solution from their enterprise customers. So that's where it's starting to hit people's lives in a more real way. I guess the simplest example is online chatbots; I'm sure many people have used them when trying to ask a question of a company, and they are becoming much, much better at giving you answers, or at least routing your query really smartly. So that's the real evidence of it from a consumer perspective.
Moz Afzal:
And actually being able to solve the problems as well: they can go one step further and make a payment, or stop a payment if you're using a banking service, which is actually pretty useful. So let's take the next step. We've done quite a lot of work, Henry, on humanoid robots, and of course autonomous driving is Elon Musk's big thing. Let's take the humanoid robots piece first, because obviously AI is going to be hugely important, certainly when it comes to consumer robots, but we're already seeing good progress in industrial manufacturing, where those things have been well understood for some time. How's AI helping to power that?
Henry Walters:
Sure. So yeah, as you mentioned, industrial robotics is a massive industry, but historically it's been very linearly programmed: robots on a manufacturing line, for example, performing repetitive, programmable tasks. The excitement around AI and humanoid robotics is that the AI models within the robots can make them much more adaptable. The model is effectively the brain, with sensors for sight and touch, and that's how they can interact without having to be programmed in advance for every specific movement; they can be more adaptive and reactive. Humanoid robotics is still very early stage, quite nascent; it's not like this is a massive market today. But you saw Jensen Huang, CEO of Nvidia, do a big presentation at Computex this year around it. He sees the challenge as building enough simulation and pre-training data to make these models, which then feed into the hardware. The hardware is relatively capable today; it's largely a lot of Chinese startups, and you see some fun demos online of what they're capable of doing. But what kind of market there is for humanoid robotics is still relatively proof of concept. The technology definitely looks interesting at this point.
Moz Afzal:
And then autonomous driving, I guess, is probably a little bit more real. For those of you who've been to San Francisco, and now Austin and various other places, Waymo is already available, people are using it, and generally the experience has been pretty decent. I guess that's using very much a centralised process, versus what Elon Musk wants to build, which is much more decentralised, where the cars actually drive themselves in real time and a central figure is not necessarily there.
Jonathan Rawicz:
Yeah, there are a lot of different avenues to go down in thinking about autonomous driving. Look, I think the most important thing is that there is very, very clear evidence that we are able to build cars that can drive themselves, without stopping and without human intervention, for longer and longer periods. Before, maybe you'd get in the car and go down the street and it would stop three times because it wouldn't know what to do at a certain juncture, and you'd have to intervene. Now that interval is extending and extending; in fact, there's an exponential increase in how long it lasts. So from a safety perspective, if you're an autonomous car manufacturer, you're starting to build a case you can take to the regulator and argue that it's time to talk about wider adoption.
Companies like Waymo, and obviously Tesla, are trying to do that. It's possible, but it's not yet going to handle every possible situation. Autonomous driving works really well in places like Arizona, where you have a nice, flat, grid-like road system, no snow, and pretty constant weather; there you can get cars to drive themselves. It's the edge cases, much windier roads, many more people on the streets, much more varied weather, where it becomes more difficult. But I think the point is this: it's not about whether it happens today; it's about how AI is evolving. We are getting AI to start learning and training itself. When we get AI models to train AI models, you potentially get to a point where the models start doing things much, much better than human beings can.
The typical case often cited to explain this is when we taught computers to play chess and the game of Go. Eventually we went from just letting the model look at a whole bunch of chess games to telling the model the rules of the game and letting it simulate many, many games. And those models worked out moves that human beings just could not come up with, and they smashed human players. That same concept, getting robots to learn about the real world from models that are interacting with the real world, could eventually get to a point where those models far exceed their capabilities today. So this is, as Henry said, very nascent, but we can see a pattern emerging where this sort of thing becomes more realistic. There's a lot of future hope discounted today, but it's not impossible that the technology continues to move the way it has and produces much better results than we see today. There's potentially an inflexion point.
Moz Afzal:
So let me just pick up on something you mentioned, which I think is quite key: AI is teaching AI, if you like. It's no longer us throwing scenarios at the AI model to simulate; it's now the AI coming up with its own simulations and then testing them on itself. And I think that's key. Now I want to go into some of the technical definitions that have been thrown around. AI training AI: what is that, if you like?
Henry Walters:
Sure. So maybe I'll start by setting some definitions and contextualising where we are in capabilities progress and what the different techniques have been to improve model performance. It's helpful to refer to something known as scaling laws, which is a belief rather than a fixed law, similar to Moore's law. Generally, a scaling law says that an AI model's capabilities improve with greater scale across a few different dimensions. At the moment there are three main categories of scaling: pre-training, post-training and then inference scaling. Pre-training was the initial first vector of improvement: more data and larger models equals better performance. Then around last year there was a lot of noise that pre-training scaling had slowed, with diminishing returns, and essentially what happened is we've trained on all the text data that's available on the internet.
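For reference, these empirical scaling laws are usually written as a power law in parameters and data. One commonly cited form, from DeepMind's Chinchilla paper (Hoffmann et al., 2022), is:

$$L(N, D) = E + \frac{A}{N^{\alpha}} + \frac{B}{D^{\beta}}$$

where $L$ is the model's loss (lower is better), $N$ is the parameter count, $D$ is the number of training tokens, and $E$, $A$, $B$, $\alpha$ and $\beta$ are empirically fitted constants. Loss falls smoothly but with diminishing returns along either axis, which is exactly the wall Henry describes once the data term $D$ can no longer grow.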
There are still ways to scale that further with more synthetic data, video data and other forms, but it had really hit a bit of a wall, so new vectors were required to continue driving model improvements. You touched on reinforcement learning, which fits within the second category, post-training. It's still training the model, so you're not using it yet, you're still improving it, but the model already has a load of base knowledge, and really you're showing it lots of examples, or refining the model after it's been trained, to align it to a certain form of output. You want to encourage it to perform things in specific ways, or you're trying to embed new abilities within the model through post-training. That's been a new vector, pushed through more synthetic data and through embedding these reasoning capabilities in post-training.
And now nearly more compute is spent on post-training than on pre-training these models. As for the reinforcement learning you mentioned, I don't know how widely applied it is yet. It can sound a bit sci-fi, letting these models self-play and improve, but it's an idea that's being pushed and experimented with at the moment. The other vector I mentioned, just to finish up, was inference scaling. Inference is the use of the model after it's already been trained: the model's created, you have an AI model, so when I send a request to ChatGPT, inference occurs and it spits out an answer. What happened last year, with the emergence of these reasoning capabilities, was a realisation that spending more computational time at inference could lead to better performance and results. So these so-called reasoning models, instead of generating one output, generate chains of thought and can think through a problem logically.
They don't go straight to the final answer; they break a problem down into multiple stages, and they can backtrack if they reach an illogical conclusion, recognise that a mistake has been made or an approach didn't work, hit a dead end and revisit earlier steps. This has mostly been focused on training models to perform much better at complex logical and mathematical problems. If you remember how bad ChatGPT was at maths when it came out, a lot of these innovations have made models much better at scientific and mathematical problems. The two important implications of inference scaling and reasoning models are, one, this kind of engineered logical deduction, and two, that it scales with compute. There was a great chart from OpenAI in December: the x-axis was the cost spent on inference and the y-axis was performance on specific hard maths questions, and there was a trade-off. You could spend a thousand or two thousand times more compute and get 10% better scores, but your queries, instead of costing cents, could cost $5 or even a thousand dollars per answer. That's a whole new economic trade-off: how smart do you want this output to be? Which is yet another exciting vector to push on.
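One simple way to picture that trade-off is self-consistency sampling: draw several independent chains of thought and majority-vote on the final answer, so accuracy tends to rise while cost grows linearly with the number of samples. The sketch below is illustrative only; `sample_chain_of_thought` is a hypothetical stub, where a real version would sample a reasoning model at non-zero temperature.

```python
# Self-consistency sketch: spend n times the inference compute, then
# majority-vote the final answers. sample_chain_of_thought() is a stub.
import random
from collections import Counter

def sample_chain_of_thought(question):
    """Stub: a real call would return (reasoning_text, final_answer) from a model."""
    answer = random.choices(["42", "41", "44"], weights=[0.6, 0.2, 0.2])[0]
    return f"step-by-step reasoning about {question!r}...", answer

def self_consistency(question, n_samples):
    answers = [sample_chain_of_thought(question)[1] for _ in range(n_samples)]
    best, votes = Counter(answers).most_common(1)[0]
    return best, votes / n_samples

for n in (1, 8, 64):  # cost grows linearly; agreement (a proxy for confidence) rises
    answer, agreement = self_consistency("a hard maths question", n)
    print(f"samples={n:3d}  answer={answer}  agreement={agreement:.0%}")
```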
Jonathan Rawicz:
I think the other thing to mention is that a lot of the reinforcement learning stuff really only works at this point on problems where you have a definitive answer, like a maths problem: there's only one right answer. So you can use these models to go through all those steps to find the right answer. And, as Henry was pointing to, the next vision is that you get a model to learn from how another model goes through all those steps and gets to the right answer, and you create a better model based on what it learns about how the first model solved the problem. That's what I meant when I said models teaching models. That's the dream at the moment. We haven't seen very firm results, but we know it's probably possible on problems where there's a definitive answer.
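A minimal sketch of that "models teaching models" pattern, under stated assumptions: on problems with a checkable answer you can sample a teacher model many times, keep only the attempts whose final answer verifies as correct, and use those traces to fine-tune a student. `teacher_solve` is a hypothetical stub; production pipelines (rejection sampling, distillation) are far larger, but this filter is the core of it.

```python
# "Models teaching models" on verifiable problems: sample the teacher,
# keep only the traces whose final answer checks out, and train the
# student on them. teacher_solve() is a hypothetical stub.

def teacher_solve(problem):
    """Stub: a real call would sample a reasoning model's worked solution."""
    return {"trace": f"worked steps for {problem['question']!r}", "answer": "12"}

def build_training_set(problems, attempts_per_problem=4):
    kept = []
    for problem in problems:
        for _ in range(attempts_per_problem):
            attempt = teacher_solve(problem)
            if attempt["answer"] == problem["answer"]:  # definitive answer = free verifier
                kept.append((problem["question"], attempt["trace"]))
    return kept

problems = [{"question": "What is 3 * 4?", "answer": "12"},
            {"question": "What is 7 + 9?", "answer": "16"}]
dataset = build_training_set(problems)
print(f"kept {len(dataset)} verified traces to fine-tune the student on")
```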
And I'd leave this with why it all really matters: every one of these scaling vectors points to more infrastructure requirements. So from a stock market perspective, it has reinforced the bull case for the companies producing all the layers of infrastructure, because, as we said earlier, the ultimate cost of serving is dropping, we are seeing improvements, and we know that in order to get those improvements we're going to need more infrastructure, both to develop them and ultimately to serve all the different needs we've discussed. That is what really underpins the bull case at the moment, and that's why everybody cares about these concepts and terms.
Moz Afzal:
Great. I think that's a good place to end. Maybe one last question, first to Henry and then Jonathan. When you're reading up on all the developments in AI, what's your main source of information?
Henry Walters:
Oh, great question. There are so many that I need AI to help teach me. I definitely use a lot of AI tools, but it's a lot of different researchers; the main sources are things like Twitter, Substack, YouTube and podcasts. There's just an abundance of great resources and publications out there.
Moz Afzal:
Any one you care to mention? Your current favourite, obviously, because that changes, doesn't it?
Henry Walters:
Yeah, my favourite at the moment is SemiAnalysis. The blog they have is very deep technical knowledge that can be a bit overwhelming at times, but that's probably my go-to at the moment.
Moz Afzal:
Okay, Jonathan?
Jonathan Rawicz:
Yeah, I didn't go quite as deep as Henry on the semiconductor details, but what I enjoy is that more and more specialised tech investors are doing podcasts where they debate these big issues, contextualise what's going on and think about who the winners and losers will be. Some of our favourites are people like Ben Thompson, who does an amazing podcast and blog around tech in general, and then there are guys like Bill Gurley, a specialist tech investor who's very thoughtful. So it's really trying to follow the details on the technical architecture, but then also thinking about why this all matters and where it's going, and trying to piece these things together. Obviously it's such a fast-moving space that it's very difficult to keep up; as you said, you almost need AI to keep up with everything that's coming out. So yeah, it's very interesting.
Moz Afzal:
And one thing we haven't even talked about yet is video and all the advances that can bring, which no doubt we'll talk about another time. So gentlemen, thank you very much for keeping us up to date. A lot of information we got through there; we'll probably need the transcript, and ChatGPT to summarise it. But again, thank you very much.
Jonathan Rawicz:
Pleasure. Thank you, Moz.
Moz Afzal:
With that, thank you very much for listening, and we'll be back again on the EFG podcast shortly.
This podcast is provided for general information only and assumes a certain level of knowledge of financial markets. It is provided for informational purposes only and should not be considered as an offer, investment recommendation or solicitation to deal in any of the investments or products mentioned herein, and it does not constitute investment research. The views in this podcast are those of the contributors at the time of publication and do not necessarily reflect those of EFG International or New Capital. The companies discussed in this podcast have been selected for illustrative purposes only, or to demonstrate our investment management style, and not as an investment recommendation or indication of their future performance. The value of investments and the income from them can go down as well as up, and investors may get back less than the amount invested. Past performance is not a guide to future returns; return projections or estimates provide no guarantee of future results.
The value of investments and the income derived from them can fall as well as rise, and past performance is no indicator of future performance. Investment products may be subject to investment risks involving, but not limited to, possible loss of all or part of the principal invested.
This document does not constitute and shall not be construed as a prospectus, advertisement, public offering or placement of, nor a recommendation to buy, sell, hold or solicit, any investment, security, other financial instrument or other product or service. It is not intended to be a final representation of the terms and conditions of any investment, security, other financial instrument or other product or service. This document is for general information only and is not intended as investment advice or any other specific recommendation as to any particular course of action or inaction. The information in this document does not take into account the specific investment objectives, financial situation or particular needs of the recipient. You should seek your own professional advice suitable to your particular circumstances prior to making any investment or if you are in doubt as to the information in this document.
Although information in this document has been obtained from sources believed to be reliable, no member of the EFG group represents or warrants its accuracy, and such information may be incomplete or condensed. Any opinions in this document are subject to change without notice. This document may contain personal opinions which do not necessarily reflect the position of any member of the EFG group. To the fullest extent permissible by law, no member of the EFG group shall be responsible for the consequences of any errors or omissions herein, or reliance upon any opinion or statement contained herein, and each member of the EFG group expressly disclaims any liability, including (without limitation) liability for incidental or consequential damages, arising from the same or resulting from any action or inaction on the part of the recipient in reliance on this document.
The availability of this document in any jurisdiction or country may be contrary to local law or regulation and persons who come into possession of this document should inform themselves of and observe any restrictions. This document may not be reproduced, disclosed or distributed (in whole or in part) to any other person without prior written permission from an authorised member of the EFG group.
This document has been produced by EFG Asset Management (UK) Limited for use by the EFG group and the worldwide subsidiaries and affiliates within the EFG group. EFG Asset Management (UK) Limited is authorised and regulated by the UK Financial Conduct Authority, registered no. 7389746. Registered address: EFG Asset Management (UK) Limited, Park House, 116 Park Street, London W1K 6AP, United Kingdom, telephone +44 (0)20 7491 9111.