I would make the argument that every company in Delaware has to move to a different domicile because they could be sued in a future derivative lawsuit for the risk they've taken by staying in Delaware. Oh my God. You're so right. Oh, mic drop on that. Hey, Bill, great to see you. I mean, people loved when you were here last week in person, so we got to make that happen again. But now where are you? Looks like you're in Texas somewhere. I'm back in Texas. Yes. Yeah. All right. All right. So what's on your mind? A lot of action the last couple of weeks. What's going on? One thing that I've reflected on quite a bit is just how lucky we are to be a part of the venture capital industry and the startup world, simply because things change so fast. And if you're a curious person, if you're someone that likes constant learning, it's really amazing. Like the stuff we're talking about, the stuff I'm listening to podcasts on every day, you know, two years ago didn't exist. And now it's 80 or 90 percent of the dialogue. And that's just pretty wild. Yeah. No, you know, our brains really aren't programmed to work in these exponentials. Right. I mean, you and I both know every sell side model on Wall Street has linear deceleration in growth rates. We're really good at thinking in these linear ways. I had that thought this morning that, you know, the biggest investment opportunities really do occur around these phase shift moments. I mean, Satya talks about how all the value capture occurs in the two to three year period around phase shifts, but it's hard to forecast in those moments, right? I mean, that's when you see these massive deltas in these forecasts. And I just went back and looked, for example: at the start of last year, the consensus estimate of the smartest people covering Nvidia day to day was that data center revenue was going to be 22 billion for the year. Right. Guess what it ended up being? 96 billion. Wow. Okay. They were off by a factor of three or four, right? The EPS at the beginning of last year, the earnings per share, was expected to be $5.70. And now it looks like it's going to be $25. Right. Over the course of your career, have you ever seen sell side estimates off by that much on a large cap stock? I mean, just, you know, very, very rare. Like, you know, once a decade, maybe, that something like this happens. Yeah. So, you know, I've had investors say to me when the stock was at 200, hell, you and I talked about this, you know, should we sell it all at 200? Sell it at 300? Sell it at 400? And now, you know, those investors are calling me every day saying, have you, you know, have you sold it yet? Our general view is that if the numbers are going up, so if our numbers are higher than the street's number for whatever variant perception that we have, right, then the stock is going to continue to go higher. At some point, the street will get ahead of itself and its numbers will be higher or at the same level as ours. And at that point, I think it becomes more of a market performer.
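Just to put the size of that forecast miss in perspective, here is a quick back-of-the-envelope sketch using only the figures mentioned above (all approximate):

```python
# Back-of-the-envelope on how far off the start-of-year consensus was,
# using the figures cited in the conversation (all approximate).
consensus_dc_revenue = 22e9   # start-of-year consensus for Nvidia data center revenue
actual_dc_revenue    = 96e9   # roughly what it ended up being
consensus_eps = 5.70          # expected earnings per share
actual_eps    = 25.0          # roughly where it landed

print(f"Revenue miss factor: {actual_dc_revenue / consensus_dc_revenue:.1f}x")  # ~4.4x
print(f"EPS miss factor:     {actual_eps / consensus_eps:.1f}x")                # ~4.4x
```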
But of course, some things will be wildly overestimated and some things will be wildly underestimated. But that sort of discontinuity really occurs around these moments of big phase shifts. So speaking of a big phase shift, right? We teased on the pod, I think at the start last time, that I had taken a, you know, a test ride in Tesla's new FSD 12. And I said, you know, it kind of felt like a little bit of a ChatGPT moment. But I think we left the audience hanging. We got a lot of feedback: hey, you know, dig in more to that. So you and I spent some time on this, both together and with some folks on the Tesla team. So roughly the setup here, the background, and I want to get your reaction to it, is about 12 months ago, the team pretty dramatically changed their self-driving model, right? Moving it from this really C++ deterministic model to what they refer to as an end-to-end model that's really driven by imitation learning, right? So think of this new model as really video in and controls out. It's faster. It's more accurate. You know, but after 11 different versions of FSD, I think there's a lot of skepticism in the world.
Like, is this going to be, you know, something different? You sent me a video, and I have tons of these videos, you know, floating around at the moment, that really kind of show, you know, how this acts more like a human than prior models out there. So Bill, just react. You know, you've watched this video, so react to this video and give us your thoughts. You know, I think you've been a long time observer of self-driving. I might even describe you as a bit of a critic, or a skeptic, when it comes to full self-driving. So is this a big moment, or have I overstated it? And kind of what are your thoughts here?
Yeah, so, you know, one of the critiques and concerns people had about self-driving is they would say that, yeah, we're 98% of the way there, or 99%, but the last 1% is going to take as long as the first 99. And one of the reasons for that is it's nearly impossible to code for all of the corner cases. And the corner cases are where you have problems. That's where you end up in wrecks, right? And so the approach Tesla had been taking up until this point in time was one where you would literally try and code every object, every circumstance, every case in, like, a piece of software. If X happens, then Y, right? And that ends up being a patchwork, just a big nasty, you know, rat's nest of code, and it builds up and builds up and builds up and maybe even steps on itself. And it's not very elegant.
What we learned this week is that they've completely tossed all of that out and gone with a neural network model where they're uploading videos from their best drivers. And literally the videos are the input, and the output is the steering wheel, the brake and the gas pedal. And you know, there's this principle known as Occam's razor, which has been around forever in science. But the simplified version of it is: a simpler approach is much more likely to be the optimal approach. Right. When I fully understood what they had done here, it seemed to me this approach has a much better chance of going all the way and of being successful, and certainly of being maintainable and reasonable.
It's way more elegant. It requires them to upload a hell of a lot of video, which we can talk about. And the other thing that's just so damn impressive is that this company, which is very large, hundreds of thousands of employees, made a decision so radical to kind of throw out the whole thing and start afresh. And it sounds like the genesis of that may have been, you know, three or four years ago, but they got to the point where they're like, this is going to be way better, and they threw the whole thing out. And I think about four months after they made the change, Elon did a drive where he uploaded and kind of streamed the drive, so we can put that in the notes and people can watch it. But it's way, way different. And in my mind, you know, basically with this Occam's razor notion, it's got a much higher chance of being wildly successful.
Yeah, let's dig in a little bit into how it's different. Right. And you referenced a little of this. So, you know, for example, this model does not have a deterministic view of a stop light, right? I mean, Karpathy has talked about this before. Before, you had to label a stop light, right? So you would basically take the data from the car, that would be your perception data, you would draw a box around a stop light, and you would say this is a stop light, so the first job on the car would be to identify that you're at a stop light. Then the second thing is you would write all of this C++ that would deterministically say, when you are at a stop light, here's what the controls should do, right? And so all of that second half of the model, you know, the heuristics, the planning and the execution, that was all driven by this patchwork that you're talking about. And you would just chase, you know, every one of these corner cases and you could never solve them all.
Now in this new model, it's pixels in. So the model itself has no code. It doesn't know this is a stop light per se. In fact, they just watch the driver's behavior. So the driver's behavior is actually the label. It says, when we see pixels like this on the screen, here's how the model should behave, which I think is just an extraordinary break. And I don't think there's a deep appreciation for the fact that, you know, again, because we've had 11 versions of what came before it, those were just slightly better patchwork models. In fact, I think what we learned was that the rate of improvement of this model is on the order of five to 10x better per month versus the rate of improvement of those prior systems. And once again, the audacity to throw out the whole old thing and put a new thing in is just crazy.
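To make the "video in, controls out" idea concrete, here is a minimal sketch of imitation learning in PyTorch. This is not Tesla's architecture, which isn't public; the tiny CNN, the three-output head, and the tensor shapes are all illustrative assumptions. The point is simply that the recorded human controls are the training label, with no hand-written driving rules anywhere:

```python
# Minimal imitation-learning sketch: camera frames in, control commands out.
# The human driver's recorded actions serve as the supervision signal.
import torch
import torch.nn as nn

class EndToEndDriver(nn.Module):
    def __init__(self):
        super().__init__()
        # Tiny CNN stand-in for whatever vision backbone a real system would use.
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=5, stride=2), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=5, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        # Head outputs the controls: steering, brake, accelerator.
        self.head = nn.Linear(32, 3)

    def forward(self, frames):              # frames: (batch, 3, H, W) camera pixels
        return self.head(self.encoder(frames))

model = EndToEndDriver()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.MSELoss()

# The "label" is simply what the human driver did at that moment.
frames = torch.randn(8, 3, 128, 128)        # a batch of camera frames (dummy data)
driver_controls = torch.randn(8, 3)         # recorded steering / brake / accel (dummy data)

optimizer.zero_grad()
pred = model(frames)
loss = loss_fn(pred, driver_controls)       # imitate the driver, no hand-written rules
loss.backward()
optimizer.step()
```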
One thing for the listeners, well, actually two things I would mention, one in terms of just how they got this going. You know, a lot of people, I fear, equate AI with LLMs, because it was really the arrival of ChatGPT and the LLM that introduced what AI was capable of to most people. But those are language models. That's what one of the Ls stands for. And the AI models that Tesla used for FSD 12 are these generic open source AI models that you can find on Hugging Face, you know, and they obviously customize them. So there's some proprietary code there at Tesla, but, you know, AI's been evolving for a very long time. And this notion of neural networks was around before the LLMs popped out, which is why I know they had started on this four years ago or whatever, right? But the foundational elements, you know, are there. And by the way, they use the hardware that we're talking about, right? They use the big Nvidia clusters to do the training. They need some type of GPU or TPU to do the inference at runtime. So it is the same hardware the LLMs use, but it's not the same type of code. I just thought that was worth mentioning.
Yeah, no, to me, if we dig in a little bit to, you know, the model itself, you know, the transformers, the diffusion architecture, the convolutional neural nets, those are all like these modular open source building blocks, right? Like, the thing that's extraordinary to me, and we're going to get later in the pod to this open versus closed debate, but this is just this great example. You know, you talk about ideas having sex. I mean, these open source, you know, kind of modular components, those have been worked on for the last decade.
And now they're bringing those components together. And now all of their energy, and I want to dig into this a little bit, is really going... they're taking all these engineers who were writing the C++, these deterministic, you know, patches effectively, and now they're focusing them on how do we make sure that our data infrastructure, that the data that we're pulling off of the edge, comes in and makes these models better. So all of a sudden it becomes about the data, because the model itself is just digesting this data, brute forcing it with a lot of this, you know, Nvidia hardware and outputting better models. You know, it's such a classic Silicon Valley startup thing where you need all the pieces to line up. So if you haven't watched it, go watch the General Magic video, which is fantastic, it's on the internet, about why General Magic didn't work. And Tony Fadell, who ended up building the iPod and ran engineering for the iPhone, talks about how the pieces just weren't there. So they were having to do all the pieces, right? Right. The network and the chips, and it just wasn't there yet.
And so these models have been around, maybe ahead of the hardware. And now Nvidia is bringing the hardware, and these pieces start to come together. And then the data. Like, I think one of the most fascinating things about this story of Tesla and FSD 12 is when you understand where they get the data. So they are tracking their best drivers with five cameras. And the drivers know it. They've opted into the program. And they upload the video overnight. And so, you know, talk about the pieces coming together. We found Reddit forums and stuff we can put links to in the notes where users, Tesla drivers, are saying they're uploading 10 gigabytes a night. And so, you know, you had to have the Wi-Fi infrastructure; like, how else would it be possible to upload that much? Here's someone whose Tesla uploaded 115 gigabytes in a month, right?
And so these are massive numbers. And with the infrastructure five years ago, your car couldn't have done this. And you know, I think we'll talk about competition in a minute, but, like, you know, who else has the capacity to do this, right? It's unbelievable, like, the footprint of cars they have. And then the notion that, oh yeah, we could just go upload this data, and it is a buttload of data that's coming out. Right. And even with this architecture, just do the math: 5 million cars, 30 miles a day, I think eight cameras on the car, five megapixels each, and then the data going back 10 years, right? This amount of shadow data, you could combine the clusters of every hyperscaler in the world and you couldn't possibly store all of this data, right? That's the size of the challenge.
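Here is a rough order-of-magnitude sketch of that challenge, using the fleet numbers above plus an assumed average speed, frame rate, and bytes per pixel, since those aren't stated in the conversation:

```python
# Rough order-of-magnitude for the raw (uncompressed) video the fleet could generate.
# Fleet size, miles/day, cameras, and megapixels come from the discussion above;
# average speed, frame rate, and bytes/pixel are assumptions for illustration only.
cars          = 5_000_000
miles_per_day = 30
avg_speed_mph = 30                     # assumption -> roughly 1 hour of driving per day
cameras       = 8
megapixels    = 5
bytes_per_px  = 1                      # assumption: ~1 byte/pixel after subsampling
fps           = 30                     # assumption

seconds_per_day   = (miles_per_day / avg_speed_mph) * 3600
bytes_per_car_day = cameras * megapixels * 1e6 * bytes_per_px * fps * seconds_per_day
fleet_bytes_per_day = cars * bytes_per_car_day

print(f"Per car per day: {bytes_per_car_day / 1e12:.1f} TB raw")      # ~4 TB
print(f"Fleet per day:   {fleet_bytes_per_day / 1e18:.0f} EB raw")    # ~20+ exabytes
```

Even with generous compression, numbers in that ballpark make it clear why nearly all of it has to be discarded at the edge.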
So what they've had to do is process this data on the edge. And in fact, I think 99% of the data that a car collects never makes it back to Tesla. So, you know, they're using video compression and these send filters. They're running, you know, neural nets and software on the car itself. So basically, for example, if 80% of your driving is on the highway and nothing interesting happens on the highway, then you can just throw out all that data. So what they're really looking for is, you know, what is the data that is a long way away from the mean data, right? What are these outlier moments? And then can we find tens or hundreds or thousands of those moments to train the model? So they're literally pulling this compressed, filtered data every single night off of these cars.
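Here is a hedged sketch of what that kind of on-car filtering logic could look like: keep only short clips around disengagements and abrupt control inputs, and discard everything else. The thresholds, field names, and clip lengths are illustrative assumptions, not Tesla's actual implementation:

```python
# Illustrative edge filter: keep only clips around "interesting" moments.
from collections import deque

CLIP_SECONDS_BEFORE = 10
CLIP_SECONDS_AFTER = 10

def is_interesting(sample, prev):
    """Decide whether a telemetry sample marks a moment worth uploading."""
    if sample["disengagement"]:                                          # driver took over
        return True
    if abs(sample["accel_pedal"] - prev["accel_pedal"]) > 0.4:          # abrupt throttle
        return True
    if abs(sample["brake"] - prev["brake"]) > 0.4:                      # abrupt braking
        return True
    if abs(sample["steering_angle"] - prev["steering_angle"]) > 15.0:   # steering jerk
        return True
    return False

def filter_drive(samples, hz=10):
    """samples: time-ordered per-frame telemetry dicts; returns frame indices to keep."""
    keep = set()
    lead_up = deque(maxlen=CLIP_SECONDS_BEFORE * hz)
    for i in range(1, len(samples)):
        lead_up.append(i)
        if is_interesting(samples[i], samples[i - 1]):
            keep.update(lead_up)                                          # the moments before
            keep.update(range(i, min(i + CLIP_SECONDS_AFTER * hz, len(samples))))
    return sorted(keep)
```

Everything that never trips a trigger, which is most highway driving, simply never leaves the car.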
They've built an autonomous system. So before, they would have engineers look at that data and say, okay, what have we perceived here, and how do we write, you know, this patchwork code? Instead, this is simply going into the model itself. It's fine tuning the model. And they're constantly running this autonomous process of fine tuning these models, and then they're re-uploading those models back to the car. Okay. This is why you get these exponential moments of improvement, right, that we're seeing now. Which then brings us back to this question, Bill. You know, Tesla has five million cars on the road. They have all this infrastructure. They are collecting this data. We know they are a couple of years ahead. Think about Waymo, for example. They're still using the old architecture. It's geo-fenced. I don't know, they have 30 or 40 cars on the road, and they're only running them in a few geo-fenced areas. So do they have any chance? Does Waymo have any chance of competing or even adopting this architecture? It's such an interesting question.
And by the way, just one quick comment on the previous thing you said: it's genius, actually, that they've taught the car what moments it should record. They mentioned to us an example: any time there's, you know, well, obviously a disengagement. So a disengagement becomes a moment where they want the video before and the video after. The other thing would be any abrupt movement. So if the gas pedal goes down fast, or the brake is hit quickly, or the steering wheel jerks, that becomes a recordable moment. And the part I didn't know, which they told us, is just fascinating.
People with LLMs have heard, you know, about reinforcement learning from human feedback, RLHF, and people have talked about how, even with Gemini, maybe that was what caused that. What we were told is that those moments, like where the car jerks or whatever, if they're super relevant, they can put them in the model with extra weight. And so it tells the model: if this circumstance arises, this is something that's more important and you have to pay extra attention to it. And so think about these corner case scenarios, which we all know are the biggest problems in self-driving.
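The mechanics of "extra weight" weren't spelled out, but one common way to do it is a per-sample weight on the training loss. Here is a minimal sketch of that idea, with made-up tensors and weights, continuing the PyTorch style of the earlier example:

```python
# Minimal sketch of up-weighting rare, critical clips during training.
# The per-sample weighting mechanism here is an assumption, not Tesla's stated method.
import torch
import torch.nn.functional as F

def weighted_imitation_loss(pred_controls, driver_controls, sample_weights):
    # Per-sample MSE, then scale the rare/severe examples up so the model
    # pays proportionally more attention to them.
    per_sample = F.mse_loss(pred_controls, driver_controls, reduction="none").mean(dim=1)
    return (per_sample * sample_weights).mean()

pred    = torch.randn(4, 3)                     # predicted steering / brake / accel
target  = torch.randn(4, 3)                     # what the human driver actually did
weights = torch.tensor([1.0, 1.0, 5.0, 1.0])    # third clip flagged as a critical corner case
loss = weighted_imitation_loss(pred, target, weights)
```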
Now they have a way to only capture the things that are most likely to be those moments and to learn on them. So the amount of data they needed to get started was this impossible amount of data, with the millions of cars. And now the way that plays to their advantage is they're much more likely to capture these more severe, less frequent moments because of the bigger footprint. And so you say to yourself, you ask the question: I don't know who could compete. Let's make an assertion: if this type of neural network approach is the right answer. Yes. And I would reason, once again, you know, Occam's razor, it seems that way to me. Then who could compete? And the companies that would be least likely are Cruise and Waymo and these things, because they just don't have that many cars.
And their cars cost $150,000. So if they wanted to build that footprint, the math just doesn't work. You can't build the footprint. You know, and so who could? I don't know, what would it cost to build a five-camera device to put on top of every Uber? I don't know, like a lot. It'd be weird. They're not going to do it. And that to me is, um, you know, when you look at these alternative models, right, if this really is about data... and remember, Bill just said an important point, which is it's not just about quantity of data.
The magic happens around a million cars. Yes, you've got to get all that quantity of data, but to get the long tail events, right? These are events that occur tens or just hundreds of times. That's where you really need millions of cars. Otherwise you don't have a statistically relevant pool of these long tail instances. And what they're uploading from the edge, Bill, he said each instance is a few seconds of video plus, you know, some additional vehicle driving metadata. And it's those events. If you only have hundreds of cars or thousands of cars, you can get a lot of data quickly.
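To make the long-tail point concrete, here is an illustrative bit of arithmetic. The event rate here is an assumption, not a figure from the conversation; the point is only that rare-event counts scale with fleet size, not with how many miles any individual car drives:

```python
# Illustrative only: why long-tail events require a huge fleet even though a small
# fleet produces plenty of raw miles. The event rate is an assumed placeholder.
fleet_sizes = [100, 10_000, 5_000_000]          # cars
miles_per_car_per_day = 30
event_rate = 1 / 1_000_000                      # assume one rare corner case per million miles

for cars in fleet_sizes:
    monthly_miles = cars * miles_per_car_per_day * 30
    events_per_month = monthly_miles * event_rate
    print(f"{cars:>9,} cars -> ~{events_per_month:,.1f} rare events captured per month")
```

With a hundred cars you would wait roughly a year to see a single one of these events; with millions of cars you collect thousands of them every month.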
It's not about quantum of data. A hundred cars can produce a huge quantum of data driving a thousand miles. It's about the quality of the data, those adverse events. Yes. And I guess the other type of company that maybe could take a swing at it would be like Mobileye or something. The problem they have is they don't control the whole design of the car. And so this part where Tesla has the car in the garage at night and uploads gigabytes and puts it right into the model, are they going to be able to get that done working with other OEMs? Are they going to be able to organize all that? You know, do they have the piece on the car that says when to record and when not to record? Like, it is just a massive infrastructure question. If I had to handicap anybody, it would probably be BYD or one of the Chinese manufacturers. Right. And if you think about it, they have a lot of miles driven in China, right? Much less so outside of China. I imagine you're going to have some of this nationalistic stuff that, you know, emerges on both ends of this. Like, one of the things I asked our analysts, Bill, is, if we just step back, I think these guys have network advantage. They have data advantage. They're clearly in the lead. They have bigger H100 clusters than the people they're competing against. I mean, they have all sorts of things that have come together here. But if you think about, like, what's the so what to Tesla, right? And just in the first instance, we'll pull up this slide from Frieda, our teammate. If you look at the unit economics of a Tesla, right, with no FSD, they're making about two and a half thousand bucks on a vehicle. If you look at it today, they have about 7% penetration of FSD. That was, let's call it, through FSD 11.
And those people paid $12,000 incrementally for that FSD. And as we know, you can go read about it on Twitter: people are like, yeah, it's good, but it's not as good as I thought it would be. So now we have this big moment of what feels like, you know, kind of a step function, a model getting better at a much faster rate. So I asked the question: what if we reduce the price on this by half? Right? What if Tesla said, this is such a good product, we think we want to drive penetration, so let's make it 500 bucks a month, not a thousand bucks a month. So if you assume that you have penetration go from 7% to 20%: give it to everybody for free, they drive around for a month, they're like, wow, this really does feel like a human driver, I'm happy to pay 500 bucks a month. You know, if you get to 20% penetration, then your contribution margin at Tesla, right, is about the same even though you're charging half as much. Now if you get to 50% penetration, all of a sudden you're creating billions of dollars in incremental EBITDA. Now think about this from a Tesla perspective. Why do they want to drive even more adoption of FSD? Well, you get a lot more information and data about disengagements and all these other things. So that data then, you know, continues to turn the flywheel. So my guess is that Tesla, seeing this meaningful improvement, is going to focus on penetration. My guess is that they want to get a lot more people trying the product and they're going to play around with price. Why not? Right? Well, I think that all of these things are occurring at an accelerating rate at Tesla.
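As a rough sketch of that penetration-versus-price trade, here is the arithmetic with the penetration and price points from the discussion, plus an assumed annual vehicle base; the vehicle count and the implicit near-100% software margin are my assumptions, not Tesla financials:

```python
# Illustrative sketch of the penetration-vs-price trade described above.
# Vehicle volume and margin treatment are assumptions; prices and penetration
# levels are the ones discussed in the conversation.
vehicles = 1_800_000                        # assumed annual vehicle base
price_old, pen_old = 1000 * 12, 0.07        # ~$1,000/mo equivalent at ~7% penetration
price_new, pen_new = 500 * 12, 0.20         # halve the price, drive penetration up

def annual_fsd_revenue(price_per_year, penetration):
    return vehicles * penetration * price_per_year

print(f"Old:                ${annual_fsd_revenue(price_old, pen_old)/1e9:.2f}B / yr")
print(f"New at 20%:         ${annual_fsd_revenue(price_new, pen_new)/1e9:.2f}B / yr")
print(f"New at 50%:         ${annual_fsd_revenue(price_new, 0.50)/1e9:.2f}B / yr")
```

At 20% penetration the revenue is roughly comparable to today despite the lower price, and at 50% it is several times larger, which is the "billions of incremental EBITDA" point.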
And when I look around, you know, I still hear people saying Waymo's worth 50 or 60 billion bucks, but you could be in a situation on that business where it just, you know, gets passed really quickly and they have a hard time structurally catching up. Well, you know, people have said that, and if someone has data that would correct this, once again, I'd be glad to restate and correct the data. But, you know, we've been told they have a headcount similar to Cruise, and the Cruise financials came out and they were horrific. And so I don't have any reason to believe that the Waymo financials are any different than the Cruise ones. And I've always thought about this model where we're going to build this incredible car and our business model is going to be to run a service. Like the CapEx: if you just build a 10-year model, the CapEx you need, they would have to go raise a hundred billion. And there's another element that's super interesting: the team at Tesla feels very strongly that LiDAR does not need to be a component of this thing.
And so the Waymo and Cruise approaches, and Mobileye, are LiDAR dependent, which is a very costly piece of material in those designs. And so if this is all true, if this is how it plays out, it's a pretty radical new discovery. So one of the things I also want to talk about, because one of the reasons I started going down this path, is our team's been spending a lot of time with the robotics companies, new robotics companies. We have Optimus at Tesla; Figure AI just raised some money from OpenAI and Microsoft. And we met with those guys, and they're all doing really interesting things.
But again, they're shifting their models. The robotics companies also were using these deterministic models to teach the robot maybe how to pour a cup of coffee or something. And now they're moving to these imitation models. So I was searching around the other day and I came across this video by a PhD student at Stanford, Ching Chae. And he showed how this robotic arm was basically just collecting data very quickly using a little camera on a handheld device. And then they literally take the SD card out of the camera, plug it into the computer, it uploads this data to the computer, it refreshes the model, and just based on two minutes of training data.
Now, video in, controls out, this robotic arm knows how to manipulate this coffee cup in all of these different situations. So I think we're going to see the application of these models, end-to-end learning models, imitation learning models, impact not just cars. I mean, 5 million cars on the road, that's probably the best robot we could possibly imagine for data collection. The challenge of course in robotics is going to be data collection. But then I saw this video and I said, well, maybe that's a manageable challenge, particularly for a discrete set of events. Yeah.
And the other great thing about that video, if people take the time to watch it, is it actually explains pretty simply how the Tesla stuff's working, right? I mean, it's just a different scale obviously, but that's the exact same thing, just at a very reduced scale. Right. And you can imagine when that's just this autonomous flywheel without a lot of human intervention; that's the direction, though Tesla still has some engineering intervention along the way. But I think the engineering team working on this at Tesla is about one-tenth the size of the teams at Cruise.
Well, I mean, that gets back to this simplicity point, right? This approach removes so much complexity that you should be able to do it with fewer people. And the fact that you can have something better with fewer people is really powerful. So we talked a little bit about how these open source models are driving a lot of the improvements at Tesla. We seem to get model improvements and model updates every day, Bill. Maybe I'll just go through a few of the recent ones. And I want to explore this open versus closed. Last week, we heard about Gemini 1.5. It has a huge expanded context window, and Gemini 1.5 is at about a GPT-4 level. Then yesterday we get the Claude 3 announcements.
Their best model, Opus, is just a little bit better than GPT-4. But I think the significant thing there, and we have a slide on this, is really the cost breakthrough: their Sonnet-level model can do workloads at a fraction of the price of GPT-4, even though it's performing at or near that quality. And then, you know, those models were trained on a mixture, I think, of H100s and prior versions of NVIDIA chips. The first H100-only trained models, I think, will be Llama 3 and GPT-5. So we're hearing rumors that both of those models are going to come out in the May to July timeframe.
With respect to Llama 3, which was trained on Meta's H100 cluster, rumors are that it has Claude 3-like performance, which is pretty extraordinary if you're thinking about a fully open-sourced model. And then GPT-5, which we hear is done, and they're simply in kind of their post-training safety, guardrails, their normal post-training work. And we hear that's going to launch sometime in May or June. And because that one was trained on H100s, we hear it is like a 2X improvement versus GPT-4.
But then we hear all the rest of the frontier models are kind of in this holding pattern because they're waiting for the B100s, which get launched in Q3 or Q4 out of NVIDIA, which probably means the next iteration of the frontier models will come out in Q2 of next year, Q2 of '25. That's after GPT-5. So Bill, if you go through this Bedrock page on AWS, if you just scroll through, you see that Amazon is offering all these different models. I mean, you can run your workloads on Llama, on Mistral, on Claude, et cetera.
Snowflake today just announced a deal with Mistral, and they're going to have Llama as well. I imagine Databricks will. Microsoft, you can use Llama or you can use Mistral or OpenAI. So where do you think all of this goes in terms of the models that will actually get used by enterprises and consumers in practice? Yeah. So I have a lot of different thoughts. My first one: when this new Anthropic thing came out and they list all the different math tests and science tests and PhD-level benchmarks, and they're all listed in the same thing, I wonder if they're racing up a hill, but they're all racing up the same hill. Yeah, that's the thing.
Because they're all running the same comparative tests and they're all releasing this data. And I don't know if any of them are creating the type of differentiation that's going to lead to one of them becoming the wholesale winner versus the others, right? And is this type of micro-optimization, you know, going to matter to people, to the users? It's not clear to me. I mean, I see some developers get way more excited about the pricing at the low end of those three choices than they do about the performance of the top end. So that's one thing.
The second thing on my mind, I don't have a lot of logic to put around this, it's more of an intuition. I wonder if these companies can simultaneously try and compete with Google to be this consumer app that you're going to rely on to get you information, so you could call that Wikipedia on steroids, you know, Google search redefined, whatever market you want to call that, and simultaneously be great at enterprise models. And I just don't know if they can do both.
I really don't. And maybe that gets to the third thing, which is more the essence of your question. Like, what am I hearing and seeing when it comes to companies that are actually utilizing these things? The Tesla example was interesting because, you know, they start with these bedrock components that are open source. And one thing that happened in the past 20 years, it happened very slowly, but we definitely got there.
CIOs at large companies, they used to be an IBM shop or an Oracle shop or a Microsoft shop. That was their platform. They slowly got to the place where most of the best, say, CIOs were open source first. So for any new project, they start... you know, they used to be skeptical of open source, and it flipped completely the other way. Like, oh, is there an open source choice we can use? And the reason is, one, they want more competition, and two, they don't want to get stuck on anything.
And so when I look at what I see going on in the startup world, they might start with one of these, you know, really well known proprietary models offered as a service. But the minute they start thinking about production, they become very cost focused on the inference side, and they'll just play these things off of one another, and they'll run a whole bunch of different ones. I saw one startup that moved between four different platforms. And I just think that competition is very different than the competition to compete with Google and this consumer thing.
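Here is a minimal sketch of that "play providers off each other" pattern: wrap each model behind the same tiny interface and pick by cost. The provider names, prices, and generate functions here are hypothetical placeholders, not real SDK calls; a real application would plug in the vendors' actual clients behind the same interface:

```python
# Illustrative provider-agnostic routing: swap inference backends by cost.
from dataclasses import dataclass
from typing import Callable

@dataclass
class ModelBackend:
    name: str
    usd_per_million_tokens: float
    generate: Callable[[str], str]       # prompt -> completion

def fake_backend(name):                  # stand-in; real code would call a vendor SDK here
    return lambda prompt: f"[{name}] answer to: {prompt}"

backends = [
    ModelBackend("hosted-open-source-llm", 0.30, fake_backend("oss")),       # hypothetical price
    ModelBackend("proprietary-frontier-llm", 15.00, fake_backend("frontier")),  # hypothetical price
]

def route(prompt, needs_frontier_quality=False):
    # Cheapest backend that meets the bar; only pay frontier prices when quality demands it.
    candidates = backends if not needs_frontier_quality else backends[-1:]
    choice = min(candidates, key=lambda b: b.usd_per_million_tokens)
    return choice.name, choice.generate(prompt)

print(route("Summarize this contract clause."))
```

The point of the abstraction is exactly the leverage Bill describes: once the interface is thin, switching or mixing providers is a one-line configuration change, not a rewrite.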
And I'll give you another example. I was talking to somebody: if you had a legal application you wanted to build, you'd be better off with a smaller model that had been trained on a bunch of legal data. It wouldn't need some of the training of this overall LLM. And it might be way cheaper to have something that's very focused, proprietary or not, from a vertical standpoint. You could imagine that in a whole bunch of different verticals.
So it just strikes me that on the B2B side, this stuff's getting cut up into a bunch of different pieces, where a bunch of different parties could be more competitive and where those components are most likely to be open source first. Yes, yes. I mean, you're causing me to think a couple of different things. One, I've said in the past, if I was Sam Altman running OpenAI, I think I might rename the company ChatGPT and just focus on the multi-trillion dollar opportunity to replace Google. Right? They're trying to win at both: beating Google at consumer and beating Microsoft at enterprise.
And he wants to beat NVIDIA at building chips. Those are three big battle fronts. And if I think about the road to AGI, building memory, building all these things that are going to differentiate you in the consumer competition, that just seems best aligned with who they are, what they're doing. I mean, ChatGPT has become the verb in the age of AI. They're replacing Google as the place you start. Nobody's saying we're Barding something. They're saying we're ChatGPT-ing something. So I think that they have a leg up there.
When I look at the competition in enterprise, right, I think Anthropic was up at the Morgan Stanley conference this morning, and they said they're hiring out their sales force; it went from two people last year to 25 this year. Think of the tens of thousands of salespeople at Microsoft, at Amazon, et cetera, that you've got to go compete with. Now, of course, they're also partnering with Amazon. But when you think about that, there's going to be all this margin stacking, Bill. So Amazon's got to get paid, Anthropic's got to get paid, NVIDIA's got to get paid.
Now if you use an open source model, you can pull one of those pieces of the margin stacking out, right? So now it's just Microsoft getting paid if you're using Llama, Llama 3 or Llama 2. They don't have to pay for the use of that model, and NVIDIA gets paid. So I think in the competitive dynamics of an open marketplace, right, that enterprise game is going to be tough for two different reasons for these model businesses. Number one, Zuckerberg is going to drive the price, right? He's going to give away frontier-esque models on the cheap. Okay.
And that's going to be highly disruptive to your ability to stack margin. If I'm a CIO of JP Morgan or some other, you know, large institution, do I really want to pay a lot for that model? I'd rather have the benefit of open, right? Because then I can, you know, move my data around a little bit more fluidly. I get the benefits, the safety benefits, of an open source model. And I'm not sending my data to OpenAI. I'm not sending my data to some of these places. Huge point you just made, in addition to everything we said, which is a lot of the big companies have concerns about their data being commingled or even uploaded at all.
Into these proprietary models. So I think the challenge for them in enterprise is not just, how do I build an enterprise sales force to go compete with the largest hyperscalers in the world, who are great enterprise businesses, and you've got to compete with Databricks and Snowflake, et cetera. But I think the second thing is just, you know, there is this bias, this tendency that you say has evolved over a couple of decades of open versus closed, which then brings me a little bit to... but wait, one more element that I think is important, too, for everyone to understand.
One of the reasons open source is so powerful is because it can be replicated for free, so you end up with just so much more experimentation. So it turns out right now there are multiple startups who believe they have an opportunity hosting open source models. So they're standing up Llama 3 or Mistral as a service provider competing with Amazon. But they're going to tune it a little different way. They're going to play with it a different way. So in terms of the number of places you can go buy one of these open source models delivered as a service, you have multiple choices. It's proliferating, and that creates optionality.
There's just so much more experimentation that's going to happen, on top of the data privacy problem and the pricing stuff you talked about. So there's a lot of different elements that make me think that the open source component models are going to be way more successful in the enterprise. And it's a really tough thing to compete with now. Yeah, well, it kind of brings into stark relief a big debate that erupted this week, you know, certainly on Twitter, with Elon's lawsuit, you know, that he filed.
And you know, part of that was about this nonprofit to for-profit conversion. That's, to me, a little bit less interesting; I don't want to talk a lot about that. But it blew the doors wide open on this open versus closed debate, right? And the potential, you know, that exists here for regulatory capture. Nobody's more thoughtful about this topic than you. I saw somebody tweet this two-by-two matrix, you know, dividing every conversation up between Marc and Vinod and, you know, Elon and Sam. But, you know, we saw a lot of very sharp opinions expressed. So help us think about the risk of regulatory capture and why this moment is so important. Yeah.
You know, I happened to mention this when I did my regulatory capture speech at the All-In conference. I mentioned very briefly, when I showed a picture of Sam Altman, that I was worried that they were attempting to use fear mongering about doomerism and AI to build regulation that would be, you know, particularly beneficial to the proprietary models. And then after that, there were, you know, rumors that people at some of the big model companies were going around saying we should kill open source, or we should make it illegal, or we should get the government to block it.
And then Vinod started basically saying that, literally, like, yes, we should block open source. And that became very concerning to me. I think it obviously became concerning to Marc Andreessen as well. And for me, the biggest reason that it's concerning is because I think it could become a precedent where all companies would try and eliminate open source. And there's a good reason why. I mean, we just talked about it: it's a hell of a fucking competitor. Like, I wouldn't want to go up against it. But it's also really amazing for the world. It's great for startups. It's amazing for innovation. It's great for worldwide prosperity. Think about Tesla. We just talked about all this open source that they're using.
Yeah. It's the last thing I would want to see happen. But, you know, we do live in this world where these pieces exist. And I would urge people to read, we'll put a link in the notes, a Politico article that shows the amount of lobbying that has been done on behalf of the large proprietary models. And I don't think you'll find... literally the only thing that comes close, perhaps, and people will think I'm being outlandish, is SBF, who was also lobbying at this kind of level. But this Politico article shows they have three or four different super PACs. They're literally inserting people onto the staffs of the different congressmen and senators to try and influence the outcome here.
I think we may have escaped this. Like, I think the open source models are so prolific right now that maybe we've gotten past it. And I also think their competitiveness has shown that there's a reason why they would, you know, want to stop them. I mean, I think at the time they started, maybe that wasn't clear, but I think it's remarkably clear right now. I also don't believe in the doomerism scenario. Someone who I admire quite a bit, Steve Pinker, posted a link to this article by Michael Totten where he goes through, I think in a very sophisticated way, the different arguments. I would urge people maybe to read that on their own.
But yeah, for me, if you want to spread the doomerism, let's get people to tell that story who aren't running billion dollar companies that are taking hundreds of millions out and giving it to their employees. I mean, there's a level of bias that's obvious here. And so I'd rather listen to a doomerism argument from someone who's not standing to gain from regulation. Yeah, I mean, I think you saw this tweet from Martin Casado, you know, that was in response to Vinod comparing open source, you know, would you use open source for the Manhattan Project, which really kind of opened up this box even more.
Weigh in a little bit here: if you're in Washington and you're hearing these things like, you know, we can't allow these types of models to be used on things like this. We saw India is now requiring approval, you know, to release models, which also, I think, was a scary development for people in the open source community. But, you know, again, just reinforce: why should we not be worried about open source AI models? How do they send us to a better place?
In the Totten article that Pinker posted, there's an analogy that I just love, which says, like, you could spread a doomerism argument that a self-driving car would just go 200 miles an hour and run over everybody. But if you look at the evolution of self-driving cars, they're getting safer and safer and safer. We don't program the AI with this singular purpose that overrides all the other things they've been taught and then they go crazy. Like, that's not what's happening. That's not how the technology works. That's not how we use the technology.
And so I think the whole article is great. And look, I also think Pinker is a really smart human. Like, he's also one of the biggest outspoken proponents of nuclear, which is another topic that I think has been wildly, you know, misconstrued. And so anyway, I'm more of an optimist about technology. These kinds of doomerism things go way back to the Luddites, hence the definition of the word, right? And ever since then. And someone else tweeted, like, you know, it'd be like telling the farmer, look out for the tractor, it's going to ruin you. It's just not how our world evolves.
Well, the reason I think this is so important is because, you know, the competition that's going to come from these models, all the evidence suggests that it moves us to a better place, not a worse place. However, during these moments, right, where, you know, you do have a new thing and it does sound scary, and then you have all these people coming to Washington saying, hey, we can't allow all this experimentation, we can't allow these open source models, what I worry about is that that can actually win the day, like it has in India.
But you know, I was in Washington last week talking to leadership in both the House and the Senate about, you know, a program near and dear to me called Invest America. But the conversation about AI came up with many senators and many senior leadership folks in the House. And when one of them asked me about AI, you know, I said I was worried about excessive government oversight, about them getting persuaded, particularly as it relates to open source models. And he said, don't worry. He said, you know, we had Sam Altman out here and we know what he's up to. And he ended by saying, we need competition. Like, the way we stay ahead of China is we need competition. So that was highly encouraging to me, you know, from a senior member.
It's interesting. That's so great to hear. And I think, you know, this China thing comes up all the time. Like, the one thing that would cause us to get way behind China is if we had to play without open source and they had it. And then the other thing I would just say is, you know, many academics I talk to are like, I have way more trust in open source, where I can get in and see and analyze what's going on. And, you know, the other side of this, because we talked about the LLMs, or, you know, AI, competing on the B2B side versus the B2C side, the consumer side.
You know, the Gemini release from Google, I think, is proof of the type of thing I worry about. The Google Gemini model was much more similar to something autocratic that you might equate with a communist society. Like, it's intentionally limiting the information you can have and painting it in a very specific way. And so, yeah, I'm more afraid of that. Yeah, they're effectively imposing a worldview by massaging the kernel here in ways that we don't understand. It's a black box influencing our opinions.
And you know, I just find it ironic in this moment in time that the person putting the most dollars up against open source is somebody, you know, Washington was pretty critical of a couple of years ago, which is Zuckerberg. And the fact of the matter is you need to have a million H100s. He's going to have, you know, hundreds of thousands of B100s. You need somebody who has a business model that can fund this level of frontier work on these open source models. And the good news: it appears we have it.
Yeah. That's awesome. I'm thrilled you heard that. You know, there was another interesting case over the course of the last couple of weeks... actually, one last thing on this, because I just recalled a conversation I was having with the senator. Like, let's assume that doomerism is right, and you have to be worried about this. What are the odds that our government could put together a piece of effective legislation that would actually solve the problem? Right. It's low. Well, I mean, I think the cost to society is certainly greater when you look at, you know, kind of the tail risk of it. But again, you know, how Vinod frames it... what I get worried about... I have no problem, you know, in him having an active defense and wanting to do everything in OpenAI's best interest. You know, I just don't want to see us attack technological progress, right? Which open source obviously contributes to, en route to that, right? Just compete against them heads up and win heads up. Like, that's fine.
But let's not try to, you know, cap the other guys by taking their knees out before they even get started. So back to what I was saying, you know, speaking of government's role in business: a couple of weeks ago, in the state of Delaware, the Chancery Court, you know, this judge, Kathleen McCormick, pretty shockingly struck down Elon's 2018 pay package. Remember, the company was on the verge of bankruptcy. They basically cut a pay package with him where he took nothing if the company didn't improve. But if the company hit certain targets, he would get paid out, you know, in 1% tranches of options, I think over 12 tranches. And because the company, right, had this extraordinary turnaround, you know, he achieved his goals.
So now she's kind of Monday morning quarterbacking. She's looking back and she says his pay package is unfathomable. And she said the board never asked the $55 billion question, Bill: was it even necessary to pay him this to retain him and to achieve the company's goals? So of course, this can be appealed to the Delaware Supreme Court, and it will be. But you know, in response to this, Elon, and I think many others, just said, hold on a second here. What the hell just happened? You know, the state of Delaware has had this historical advantage in corporate law because of its predictability, and its predictability wasn't because of the code, but because of the judiciary, right? There was a lot of precedent in the state of Delaware.
And this seemed to turn that totally on its head. He said he was going to move, you know, incorporation to the state of Texas. You know, we're starting to see other companies follow suit and other people talking about this. So what was your reaction, seeing something that, I think, most of us thought was highly unlikely and pretty shocking? Yeah. Well, first of all, I think it's super important for everyone to pay attention to this. I don't actually think it's just an outlier event. I think it's so unprecedented in Delaware's history that it really marks a moment for everyone to pay attention. And there's a couple of things I would pay attention to. One data point you left out, which came up recently, is the lawyers that pursued this case are asking for five or six billion dollars in payment. And it turns out when you bring a derivative suit in Delaware, there have been cases where people ask for a percentage, and the judge gets to kind of decide that. And, you know, if you step back and look, this is a victimless crime.
And I think that's the thing that makes Delaware look like a kangaroo court here. Everyone knows the lawyer grabbed someone that only had nine shares, and those nine shares went way up, but it's kind of silly because it's so small. Anyway, how could a client with nine shares lead to a multi-billion dollar award to a lawyer? And that's only true if you've created a bounty hunter system, you know, a bureaucratic bounty hunter system. There's something in California called PAGA that's kind of evolved this way. And if that's the new norm in Delaware, that's really, really concerning. The other thing that's different here is the stock went way, way up. So I think we've all become accustomed to, when stock prices go down, these litigators, you know, grab a handful of shareholders and bring a shareholder lawsuit, and we're like, oh yeah, unfortunately, that's become a way of life.
But to attack companies that go way up... you know, I would say two things. One, I would offer this pay package, and I looked at it in detail, to any CEO I work with. And I think they would all turn it down, because there's no cash, no guarantee, and the first tranche required a 2X of the stock. So, like, that's fantastic. I think the biggest problem with compensation packages, and we may tackle that some other day, is a misalignment with shareholders, where people are getting paid when the stock doesn't move. That's what RSUs do. And by the way, that's the standard in corporate America. We have this grift where people make a ton of money and the stock doesn't do anything. Look at the pay package for Mary Barra at GM. So the first tranche here was if the stock doubled, and I would offer that to anyone. I would also say, if any other CEO took a package like this in a public company, I would be very encouraged to consider buying a lot of it. And so it may be like one of the most, you know, shareholder-aligned incentive packages ever, which is exactly what you would think Delaware courts would be looking after. And ISS as well, which is a whole other subject. But I think it's just really bad. And it does show a new side of Delaware, you know, one that they haven't shown before. And so I think everyone has to pay attention. Right. No, I mean, it's shocking.
And, you know, I was a corporate lawyer in my first life, as you know. If you actually go and look at the actual corporate law code in the state of Delaware, right, it's almost word for word the same as Texas, the same as California and so on. The point here is it's not that Delaware has a legal code around corporations that's so much different than every other state. What has set it apart is it has way more legal precedent, way more trials that have occurred, and judges who have interpreted that in a way that is very shareholder aligned, shareholder friendly. And they're known for letter-of-the-law construction. Correct. And so here we have a moment. And the reason it's so shocking is because it's at odds with all of the precedent that people had come to expect. And we left out that there was 70% shareholder approval. I mean, this was a low probability event that happened to happen. And you can't look at that after the fact and say, oh, it was obvious this was going to happen, you know, right? I think that, you know, if this stands... so I imagine corporations right now are in holding patterns, right? Elon is moving, you know, reincorporating in Texas. I think a lot of other corporations will stay pending the Delaware Supreme Court appeal ruling, right? If they overturn this judge's ruling, then I think you may be back to the status quo in the state of Delaware.
But if they uphold the ruling and deny the appeal, I mean, I think Elon said, despite all the goodness that's occurred, saving the company from bankruptcy, this means he effectively gets paid zero for the last five years. I mean, it's such an outlandish outcome. So if it gets upheld, I expect you're going to see significant flight from the state of Delaware, by people reincorporating in these other states that, you know, frankly are pretty friendly as well. Brad, I just thought of something. So if it's upheld, and if these lawyers are paid anything as a percentage, anything other than maybe just their hourly fees, so if those two things happen, I would make the argument that every company in Delaware has to move to a different domicile, because they could be sued in a future derivative lawsuit for the risk they've taken by staying in Delaware. Oh my God. You're so right. You are so right. Oh, mic drop on that. You know, so now, on the boards that I sit on, I have to warn them that if they stay in the state of Delaware, then they're knowingly and negligently taking on this incremental risk. Absolutely.
Oh, wow. You know, let's just wrap with this, a quick market check. You know, one of the things I like to do is be responsive to the feedback we get. A lot of people, you know, loved some of the charts we had put up on kind of the market check on the last show. So, you know, we get asked about this all the time. We said on the prior pod, you know, prices have run a lot this year and the background noise around macro has not improved. Arguably it's getting a little worse. Inflation's running a little hotter. You know, rates are not expected to come down as much. So I just did a quick check on the multiples of the companies that we really care about: Microsoft, Amazon, Apple, Meta, Google and Nvidia. And I just want to walk through this really quick. So this is a chart that just shows the multiples between March of '21 and March of '24. Right.
So let's start with Meta. You can see over that time its multiple has gone from about 20 times earnings to about 23 times earnings. Right. So it's a little bit higher. Take a look at Google: its multiple has gone from about 25 times earnings to now just below 20 times earnings. Now this is to be expected. I mean, we've been having this debate about whether or not, you know, Google search share is going to go down and the impact that that will have. And so, you know, this is just the market's voting machine at a moment in time saying, hey, we hear that debate and we're a little bit more worried about those future cash flows than we were in March of '21, which makes a lot of sense to me. If you look at Apple, it too... And by the way, on that one, I mean, with the Gemini release, the world's looking at you with this lens, and then you release this thing and you trip. I mean, they basically tripped, right? And we know they tripped because they've apologized for tripping. And so it's just not good. Like, it's not confidence inspiring. Well, now you're seeing the drum beats starting, you know, you and I are getting the texts, the emails, the drum beats are out about whether Sundar is going to, you know, make it past this moment in time.
I mean, listen, I think boards have one job: hire and fire the CEO who leads the company forward. Can they execute against the plan? And I think that if I was on the board of Google, that's the question I'd be asking at this moment in time. Not is he a good human being, not is he a smart product guy, not is he a good technologist, not what's happened over the course of the last 10 years. But at this moment in time, do we have any risk of innovator's dilemma? And is this the team, is this the CEO, who can lead us through what is likely to be a tricky moment?
Just to finish it off, Apple's multiple is a little bit lower, right? That also makes sense to me. You see what's happening in China, some concerns about, you know, they get $20 billion a year from Google, like, what happens to that? In the case of Microsoft, its multiple is a little higher, but, you know, again, these multiples are all in the range. And then the final two. Amazon's multiple is actually quite a bit lower, you know, here. And so that's interesting to me. I actually think the retail business is doing better. I actually think the cloud business is doing better. And now that stock looks cheaper to me. And then Nvidia, of course, is the one that everybody's talking about. This goes back to where we started the show. I mean, if you look at Nvidia's multiple to start the year, Bill, hover there right above December '23, its multiple was at, you know, like a five or ten year low, right? But why? Because earnings exploded last year from five bucks to 25 bucks.
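A simple illustration of why the multiple sat near a multi-year low even after a big rally: the denominator (EPS) grew faster than the price. The EPS swing below is from the numbers discussed earlier in the show; the price move is an assumed placeholder, not a quoted figure:

```python
# Illustrative only: multiple compression when earnings grow faster than the price.
eps_start, eps_end = 5.70, 25.0        # roughly the EPS swing discussed earlier
price_start = 500.0                    # assumed starting price for illustration
price_end = price_start * 3            # assume the stock roughly tripled over the period

pe_start = price_start / eps_start
pe_end = price_end / eps_end
print(f"P/E went from ~{pe_start:.0f}x to ~{pe_end:.0f}x")   # compresses despite the rally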
And the multiple has obviously come up here a little bit at the start of the year, but you can see it's well below some of its historical, really frothy multiples. But I think the question in my mind, and we're big Nvidia shareholders, like in other people's minds, is, you know, is this earnings train durable for Nvidia, right? Are these revenues durable? Have we pulled forward this training demand? We showed that chart a couple of weeks ago: we think the future build-out of compute and supercompute, of B100s, of everything, is longer and wider than people think. And then the interesting thing, like when you see that note out of Klarna last week, Bill, and what they were able to achieve, this is really the question. At the end of the day, are companies and consumers getting massive benefits out of the models and inference that's running on these chips?
You know, if the answer is no, then all of these stocks are going lower. If the answer is yes, they probably have a lot of room to run. But that's the quick check. You know, maybe we'll do this at the end of each of these, a quick market check. But why don't we leave it there? It's good seeing you. Hey, next time get back out here. Let's do this together again. All right. Take it easy. As a reminder to everybody, just our opinions, not investment advice.