8news


Cerebras IPO, Warsh Confirmed Fed Chair, Musk-OpenAI Trial Nears End | Diet TBPN

AI • TBPN • May 15, 2026 at 12:42 AM • 32:30

TL;DR

Cerebras Systems surged to a roughly $64 billion market cap after a blockbuster IPO, signaling strong demand for ultra-fast AI inference despite technical scaling challenges.

KEY POINTS

Explosive IPO debut

Cerebras Systems exceeded expectations with shares jumping from an initial $150–$160 range to around $300–$350 on debut, effectively doubling projected valuations. The company now sits near a $64 billion market cap, far above earlier optimistic forecasts of $50 billion, reflecting intense investor demand and limited share allocation.

Radical chip architecture

Cerebras differentiates itself with a wafer-scale engine, using an entire silicon wafer as a single chip rather than dividing it into smaller units. This design dramatically increases compute density and speed, but initially raised concerns about manufacturing defects and low yields.

Yield problem solved with redundancy

Early skepticism centered on the risk that a single defect could ruin an entire wafer. Cerebras addressed this by embedding redundant cores, allowing defective sections to be bypassed. This engineering workaround has proven effective, helping validate the architecture in real-world deployments.
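The redundancy argument can be made concrete with a toy yield model. The sketch below (all numbers are illustrative, not Cerebras specifications) treats each core as independently defective with some probability and asks how likely a wafer is to remain fully usable, with and without a pool of spare cores:

```python
from math import comb

def wafer_yield(cores: int, spares: int, p_defect: float) -> float:
    """Probability that at most `spares` cores are defective, i.e. the
    wafer is still fully usable after mapping out the bad cores.
    Assumes independent per-core defects (a simple binomial model)."""
    return sum(
        comb(cores, k) * p_defect**k * (1 - p_defect)**(cores - k)
        for k in range(spares + 1)
    )

# Monolithic view: a single defect anywhere kills the wafer (0 spares).
no_redundancy = wafer_yield(cores=1000, spares=0, p_defect=0.005)

# Redundant view: a small spare pool absorbs the expected defects.
with_redundancy = wafer_yield(cores=1000, spares=15, p_defect=0.005)
```

With these made-up numbers, the monolithic wafer is usable well under 1% of the time, while a 1.5% spare pool pushes yield above 99% — which is the intuition behind shipping more cores than you activate.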

Speed over intelligence drives demand

Market behavior shows a clear willingness to pay for faster AI responses. Some enterprise users spend disproportionately on high-speed inference, even paying up to 6× higher costs for roughly 2× speed gains. This suggests that latency reduction, not just model capability, is becoming a key competitive factor.
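Whether a 6× price for 2× speed pencils out depends on what the waiting time is worth. A minimal back-of-the-envelope check, with all inputs illustrative rather than real pricing:

```python
def fast_mode_worth_it(
    base_minutes: float,      # wall-clock time of the task at baseline speed
    base_token_cost: float,   # token cost of the task at baseline price
    price_multiple: float,    # e.g. ~6x for a fast tier
    speed_multiple: float,    # e.g. ~2x faster responses
    value_per_minute: float,  # what a minute of the operator's time is worth
) -> bool:
    """Does the time saved by the fast tier outweigh its price premium?"""
    extra_cost = base_token_cost * (price_multiple - 1)
    minutes_saved = base_minutes * (1 - 1 / speed_multiple)
    return value_per_minute * minutes_saved > extra_cost

# A 10-minute task costing $0.50 in tokens, 6x price for 2x speed,
# operator time valued at $2/minute: saves 5 minutes (~$10 of time)
# for $2.50 of extra token cost.
fast_mode_worth_it(10, 0.50, 6, 2, 2.0)  # True
```

Because token costs are usually small next to the value of human (or agent-pipeline) time, even a steep premium can clear the bar, which matches the revealed preference described above.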

Real-world usage validates strategy

Cerebras chips are already deployed in production environments, including serving OpenAI’s GPT-5.3 “Spark” inference workloads. Users report a shift from token-by-token streaming toward near-instant full responses, improving usability for coding, research, and enterprise automation tasks.

Major commercial partnerships

A significant 750-megawatt deal with OpenAI underscores growing confidence in Cerebras infrastructure. Such large-scale commitments position the company as a serious challenger within the AI hardware supply chain, historically dominated by Nvidia.

Scaling limitations emerge

Despite its speed advantage, Cerebras faces constraints in handling larger AI models. Its architecture relies heavily on on-chip SRAM, which is no longer scaling efficiently with new semiconductor nodes. Recent chip iterations increased memory only marginally, from 40GB to 44GB, limiting support for larger contexts.
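The memory constraint can be sized with simple arithmetic. This sketch (the model sizes and precisions are hypothetical examples, not disclosed Cerebras deployments) estimates how many wafer-scale chips are needed just to hold a model's weights in on-chip SRAM, ignoring KV cache, activations, and interconnect overhead:

```python
from math import ceil

def wafers_for_weights(params_billions: float,
                       bytes_per_param: float,
                       sram_gb_per_wafer: float) -> int:
    """Chips needed to hold the weights alone in on-chip SRAM.
    1e9 params at 1 byte/param is ~1 GB of weights."""
    weights_gb = params_billions * bytes_per_param
    return ceil(weights_gb / sram_gb_per_wafer)

# Illustrative: a 70B-parameter model against 44 GB of SRAM per wafer.
wafers_for_weights(70, 1, 44)   # 8-bit weights: 70 GB -> 2 wafers
wafers_for_weights(70, 2, 44)   # 16-bit weights: 140 GB -> 4 wafers
```

The point of the exercise: a 10% generational bump in SRAM barely moves these counts, which is why multi-chip scaling (or aggressive quantization) becomes the binding question as models grow.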

Competition from networked systems

Rival systems like Nvidia’s NVL72 racks link multiple GPUs to handle massive models and extended context windows. Cerebras currently lacks equally robust multi-chip scaling, raising concerns about its ability to serve future workloads requiring hundreds of thousands of tokens.

Shift toward hybrid AI architectures

Industry trends suggest a hybrid future where large, intelligent models delegate tasks to smaller, faster systems. In this framework, Cerebras chips could excel as “speed workers,” handling rapid inference tasks while larger models manage reasoning and orchestration.
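The hybrid pattern is easy to express in code. This is a minimal sketch of the control flow only — `plan` and `speed_worker` are stand-ins for model calls, not real APIs:

```python
def plan(task: str) -> list[str]:
    """Stand-in for the large 'orchestrator' model: decompose a task.
    A real system would call a slow, high-capability model here."""
    return [f"{task} / step {i}" for i in (1, 2, 3)]

def speed_worker(subtask: str) -> str:
    """Stand-in for a low-latency 'speed worker' model on fast inference."""
    return f"done: {subtask}"

def orchestrate(task: str) -> list[str]:
    """Hybrid pattern: plan once with the smart model, then fan the
    subtasks out to fast workers — analogous to how models already
    delegate to database queries or web searches."""
    return [speed_worker(s) for s in plan(task)]
```

Under this division of labor, the orchestrator's latency is paid once per task while the latency-sensitive inner loop runs on the fast tier.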

Investor concentration and allocation dynamics

Demand for shares far outstripped supply, with roughly one-third of interested buyers receiving no allocation. The top 25 investors secured about 60% of shares, indicating strong institutional control and reinforcing confidence from major asset managers.

Long-term backing and growth trajectory

Cerebras’ valuation has climbed sharply from $720 million in 2016 to $48.8 billion at IPO, before post-listing gains. Early backing from firms like Eclipse Ventures, led by veteran investor Pierre Lamond, highlights the role of long-term conviction in emerging AI infrastructure bets.
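The trajectory from $720 million (2016) to a $48.8 billion IPO (2026) works out to roughly 52% compound annual growth, which a one-line calculation confirms:

```python
def cagr(start_value: float, end_value: float, years: float) -> float:
    """Compound annual growth rate between two valuations."""
    return (end_value / start_value) ** (1 / years) - 1

# $720M in 2016 to the $48.8B IPO valuation in 2026, ~10 years.
growth = cagr(720e6, 48.8e9, 10)  # ~0.52, i.e. roughly 52% per year
```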

CONCLUSION

Cerebras’ IPO success highlights a growing market premium on AI speed, but its long-term position will depend on overcoming memory and scaling limits as models continue to expand.

Full transcript

Uh there's a ton of news. Let's start with Cerebras. The IPO has gone spectacularly well. Cerebras doubled their valuation basically overnight. Brandon Gell had the uh good fortune of writing up some of the details of the Cerebras news in the newsletter today. TBPN.com you can go sign up. Yeah, right now it's sitting at a $64 billion market cap. And a lot of the prediction markets, they didn't even have a category above 50, right? A lot of people were just kind of uh trading or or betting. >> And when I wrote the newsletter Friday, Monday, I I said a $50 billion IPO and was sort of being uh optimistic, and uh it beat those expectations, which is great news. Chip design company Cerebras, if you're not familiar, they make a big big chip, big chip company, uh instead of >> the biggest chip >> instead of taking the wafer, putting a bunch of chips on it, cutting it up into smaller chips, they use the whole wafer. It's a genius idea, one of those simple ideas taken deadly seriously in some ways. Uh but it's trading at uh $350 a share on its first day of public trading, which values the company much higher >> three 300 now >> 300 >> $300 okay. The price on this IPO has been literally up only. On Monday the price range was 150 to 160. Then they raised it. That was up from 115 to 125. Uh and today we're seeing, you know, much higher prices. >> Go back to that picture. Someone should make a set in LA. You know, they have those like fake private jet sets. Imagine if entrepreneurs could have a set where they put their logo in the background and like they're hitting it with a hammer and there's confetti going everywhere. >> Yeah. But it's for your course. >> Yeah. >> Yeah. And and you walk right from there uh to to the Lambo. >> 1,000 students in your mastermind. >> Yeah. No, I had this I had this idea back in the day when do you remember the ice cream the ice cream museum this whole thing? >> Oh yeah.
>> So there were there there was this trend I mean really bad news for the museum industry but they're getting eaten alive. And so some entrepreneurs, I think they did very well, started something called the ice cream museum, which uh was not really a museum in the sense of like a presidential library or uh you know the the Norton Simon or the Getty or the you know natural history museum. It was more of like an experiential place to go and hang out. Good for first dates, good for you know taking kids maybe. Uh, and they would maybe give you some ice cream, but most of the most of the museum was just like very Instagrammable things. So, there would be like a ball pit or a bunch of raining confetti and stuff and uh a huge a huge fabricated statue of ice cream that was not a piece of art that would be sold. >> Sprinkle ball pit. >> There you go. That sounds real. I don't know if that's is real, but it sounds very believable. They had that. Okay. Yeah. And and there were a number of other kind of copycats that were trying to jump on and do like, oh, we'll do like the waffle museum or something or the pancake museum, you know, cuz they just wanted to cash in. And my idea was just the museum of Instagrammable objects. And so it would have all of those. So there would be a private jet set and then there would be a Lamborghini set and this one would fit right in. So, it's just they have a big pink wall so you can go take the pink wall photo and then there's a beach and then there was a gym with fake weights so you could go and look like you're maxing out and benching 500 lb and so it just says bring these clothes or we'll have them for you and then you move from room to room taking the ideal dating profile >> photo reveal. >> Yeah, exactly. Exactly. Oh, you had kids here. You in the hospital. You can live an entire life through this fictional museum of Instagrammable objects. 
Uh, more of a meme than a real business idea, but uh >> no, John, the Museum of Ice Cream now has uh seven locations. >> Okay, so they're cooking. >> They're global. >> They're global. They're they're doing well. Anyway, let's go back to the serious stuff. Cerebras, uh it's a complicated uh company because they are so deep in the AI supply chain, but we'll break it all down for you. So, there's a bunch of interesting takeaways, some really solid positives. Uh Cerebras chips work, which was something people were not expecting for a while. There was a lot of FUD around this company, just the idea of like, oh, that'll never work. What if the architecture changes? What if we go away from transformers or something? What if we need something quote completely different? Or maybe like the yields will never work, because there was this idea that if you're using the entire wafer, typically as you're etching the chips onto the wafer, sometimes there's little defects. And it's not a problem if you're going to break up a wafer into like 64 chips, because you just throw away one. But if there's one defect on basically every wafer, well then your yield is going to be super low. We talked to Andrew about how he solved that by uh creating redundant cores, and they don't actually activate all the cores, and so they sort of built in that redundancy and got through that. But that was an early critique of the strategy. Uh you can >> use Cerebras chips today uh >> in Codex 5.3 Spark, uh and so they are very fast. And I think the most important thing that SemiAnalysis points out is that token consumers, customers, businesses have shown this revealed preference for and and a willingness to pay for speed. And they sort of contextualize it and they quantify it based on their own usage and their experience with Anthropic's Opus models. So Opus 4.6 fast mode.
Famously, I like that they use famously, because it's like famous to like 100,000 people, but uh famously charges six times the price for two and a half times the interactivity, although it's now under 2x faster. So effectively, you're you're you're paying six times the price for two times the speed. That's uh that that's disproportionately more money for what you're getting. You would think you'd pay six times the price for six times the speed potentially. Um but there were a lot of questions about would people really pay that much more for faster models, faster inference. And Andrej Karpathy and Sam Altman were saying like, do you want faster models or smarter models? And Sam's point was sort of like, these models are very intelligent, but using them faster is sort of more of a magical superpower. I I felt like Sam was sort of leading it towards like, speed is really important as the next leg up on productivity, and Karpathy was like, no, I just want smarter, I'll just let it run overnight, I don't mind that. But that's not what everyone is feeling. Some people, especially the SemiAnalysis team, leaned more towards interactivity or speed over raw intelligence power >> well yeah, and then there's the other aspect, which is just capability capability, speed and intelligence. That's like the question I think people have had is like, okay, is there a 250 IQ model or is there just a much more capable model >> Yeah, that uses tools more efficiently and is really quick. >> Yeah. And that's actually important to Cerebras. SemiAnalysis was spending 80% of their AI spend on Opus 4.6 fast, and so they were willing to pay that 6x, like 80% of their spend, disproportionately more, even though uh their sort of expectation, as they put it, was that they would always want the smartest model. They would they would they would be very cost-conscious. They were in reality saying, I'm going to hammer fast mode.
I want to spend on fast mode. And then I think uh the price was significant, and so there's probably sort of a renegotiation about when is the right time to use fast versus when do you want to leave something running overnight. But OpenAI is clearly very pilled on Cerebras. Cerebras has a big 750 megawatt deal with OpenAI, and the chips are already serving GPT-5.3 in Codex under the name Spark, as we mentioned. Uh and I've used it, you should use it. It's a very interesting experience, because I think a lot of people have interacted with LLMs and chatbots and they're sort of used to this token streaming in. It's sort of cute because the phone vibrates and it and it feels like you're talking to someone who's typing. But it's way better when you just land on a Wikipedia page, the full thing loads, and you can just scroll however much you want. And that's the experience that I think people want and will demand across everything, especially if they're firing off a coding task. They just want the code immediately. Uh you can also just go talk to the model like it's ChatGPT. You don't need to use Codex 5.3 Spark in a coding context. You can ask it whatever you want and it will just act like a normal LLM. I personally think there will be huge demand for faster inference across all parts of the AI economy. There's this old lat >> Yeah. Another another way to think about it is like, if you have two employees with the same skill set, the same capability, but one is just five times faster, right? That person can create way more value in the organization. Yeah. Right. >> And for a lot of things, if they're if they're two times faster, they do command six times the price. A sales rep that that that sells twice as much, or someone who is twice as effective at their job, might actually command a salary that's five times, six times um the the actual price. And so there's lots of other context across different business lines that uh you could draw to.
There's also this old adage or saying about e-commerce that may or may not be real. It's probably been transposed so many times in think pieces. I don't know the real quote, but it goes something like: every 100 milliseconds of latency costs Amazon 1% in sales. I don't know if that's the right way to think about it, but basically as Amazon was scaling, they realized that there were a bunch of things that they could do on the UI side, a bunch of things they could do on the layout side, where does the buy button go, where does certain information go, the price, the discount, all of this stuff, the images. They they were tweaking the front end, but as they did that, they added bloat and the pages would slow down. And what they noticed was that the slower the page was, the lower the conversion rate, because people were waiting for amazon.com to load, click on the page, it takes a second, they get distracted, they go somewhere else. And I think that that's happening in LLM use cases all over the place. People fire off a query and they're like, "Ah, it's taking too long. I'll go scroll Instagram reels. There's always an Instagram reel." And they'll be like, "Oh, I kind of forgot about what I was asking about. I didn't get my answer." And that's certainly true in business contexts as well. This is currently playing out in AI inference. Uh companies are paying disproportionately more for faster inference, and this is good for Cerebras. But SemiAnalysis does point out a number of potential headwinds and problems that the team at Cerebras will have to solve or contend with over the next few years. Uh mainly, Cerebras chips are not currently as capable of holding larger models in the limited memory that they have, or networking multiple chips together to serve larger models. We've heard about the NVL72 racks that wire a whole bunch of Nvidia chips together and can serve these really large models. Uh that has potentially been a challenge.
So SemiAnalysis says, moreover, the industry is trending towards larger context windows ad infinitum. 128k context will certainly not be acceptable for long, especially with the prevalence of agentic workloads. And it doesn't look like there's a simple solution of just scaling the wafer size larger, because TSMC is set up with a standard wafer size, or adding more memory to the existing architecture, because Cerebras' whole design depends on a lot of SRAM, static random access memory, directly on the on the wafer. But SRAM is no longer shrinking as much with each new semiconductor node. So the last version of the Cerebras chip, they've done WSE-1, 2, and 3. They're on three now, but WSE-2 had 40 gigs of memory. WSE-3, you would expect, oh, we want a doubling, right? We want we want a 10x or something. Uh, it got 44. So, a 10% increase over one process node, one iteration. Is there an easy way to double this? There's a question like, how will this scale as the models get bigger? To add more SRAM, you might have to sacrifice compute area, because everything is being done on one wafer. If you want computation or memory, there's a direct trade-off, because you only have so much space on the actual wafer. But in an agentic workflow, I think it's entirely possible that you want like the biggest, most powerful model, like the vice president, delegating things. You want the vice president >> senior vice president uh maybe just the president uh handling the critical work. Uh so future models might not and that might not be on Cerebras, that might be on NVL72 or TPUs or something. But uh I imagine that we will quickly jump from the agentic age, where you're firing the best, smartest model at the full workload, to the orchestration age, and there will be hybrid approaches. So the biggest and best models will delegate certain tasks to smaller, faster models, just like they go and do database queries these days, or they go and search the web these days, and that's CPU-bound.
There will be certain workloads that the larger, smarter agent model, like the boss model, can sort of delegate to the Cerebras speed workers, the faster workers. A year or two ago when uh Daniel Gross wrote AGI bets and was sort of like, is Nvidia underpriced? I don't know if Nvidia he might have said that on on semi on strategy but you know we we entered the AI age and everyone was like oh GPUs are the future, Nvidia is the company, but then it was like Nvidia GPUs are good and then also CPUs are good and and ARM is getting into it and Intel's doing very well and it it >> we're going to make big computers >> big computers big computers for sure. Honam says this IPO illustrates the power of an individual partner over the brand name of the firm. Pierre Lamond was a partner at both Sequoia and Khosla, but instead of those firms backing Cerebras, it was Eclipse, the firm he joined at the age of 84, that backed this little-known chip company multiple times in the early days. What a way to wrap up a career. He was born in 1930, the same year as Warren Buffett. >> Wow, that is an awesome story. I love that. One-third of the order book, the folks that said, "I want shares in the Cerebras IPO," one-third of the book got zero. I guess the top 25 investors took 60%. That's probably the big investment funds, the Fidelities, the State Streets, the BlackRocks. >> They have done quite well today. This picture looks wildly different than the CLA IPO last year, in which uh only a handful of the team at CLA popped over, hit the NYSE, IPOed, and went back back home. >> Yeah, it was very much just like another day at the office for the team. Uh yeah, that's definitely what I was contrasting it to. Uh the Cerebras valuation every round: series A in 2016, $720 million. Foundation, Benchmark and Eclipse co-led the series B in 2016. Vy Capital led the series C in 2017, then 1.6 billion valuation in 2018, 2.4 in 2019, 4 billion in 2021.
That was like maybe a little bit of a slump, but then 2025, a trades and Fidelity come in at 8 billion. Then Tiger comes in at $23 billion. Then in May of 2026, they IPO at 48.8 billion. Let's run through uh the Kevin Warsh news, because he has been confirmed as the Fed chair. Kevin Warsh, who uh is most famous for interviewing Alex Karp on CNBC while Alex Karp appeared to have popped a nicotine pouch and then spun a notebook on his finger. Did you ever find that clip, Tyler? Is that in the >> Yeah, it should be timeline. >> Let's have the the video here. >> They really put Kevin Warsh on the map. And >> I remember I showed up in your office once. I was dressed like this and I think you screamed at one of the guys. You said, "Kevin's here. He looks like the guy from IBM." And I was talking about, well, you know, we need like really finance controls, and you know, how are you going to sell the product, and all this stuff. >> Okay, >> but I would say you certainly built that. >> He's really spinning it. I didn't realize he goes back to it like four times. He's >> really good at this. >> But somehow you grafted that onto the to to the strange company that can produce these products. How's that transition been? If I've got it right. >> I have so many questions. First, we have to get him to recreate that for sure. Second, uh I I thought, Tyler, I thought we were talking about that being on CNBC, but that looks like just a podcast. Like that doesn't have any chyron or >> Yeah. No, I I don't think it was actually on I think it was from Palantir. Like that was a Palantir. >> Oh, okay. So, it was just like a random podcast, and then and then uh when I've seen it on CNBC, they were playing the clip. Got it. >> I think so. Yeah. >> The vote was 54 to 45 in the Senate. The divided vote signals challenges ahead for Warsh, who faces a Fed committee skeptical of rate cuts that Trump has demanded. And of course, we talked about the inflation news.
Typically, you don't cut interest rates uh going into inflation and potentially economic stagnation. Uh you definitely don't cut rates in that's why stagflation is so difficult, because if you have uh stagnation and low inflation, you can cut rates very easily. Maybe the economy starts overheating a little bit, you get a little bit of inflation, but then you can pull back. That's what we've done historically. Vice versa, if the economy is running hot, you're seeing high GDP growth and high inflation, well, if you raise rates, you're going to pull back on both of those. But in stagflation, you're seeing both inflation and economic stagnation. Harder to deal with as a Fed chairman, which is potentially the task he will be faced with. So, uh, the Senate confirmed Kevin Warsh as the Federal Reserve's 17th chair Wednesday in a largely party-line vote that reflected how tensions with the White House have dragged the Fed deeper into the political fray. I was looking back at the old Fed chairs. There's some absolute legends in there, because some of them had really long runs. So, very quickly you get back to the black and white portrait and the painting as you go back in time. >> Who's your favorite Fed chair? >> Volcker. >> Yeah, Volcker is pretty good. Bernanke is crazy. >> An absolute dog. >> Yeah. I don't know. Hard to pick. Hard to pick. Chair Jerome Powell, whose leadership tenure ends Friday, captured at least 80 votes in Senate confirmations for each of his two terms atop the Fed. Wow. Jerome Powell, just a fan favorite of both teams. 80 votes in the Senate. That's pretty significant. I'm putting him in the conversation, Jordy, but I'm not giving him the GOAT trophy, uh merely because of the challenges faced. He wasn't confronted with a great recession, a dot-com bubble bursting, a uh a Black Friday.
Uh he like he the like the economy from 2018 to today >> global pandemic doesn't you don't you don't count a shutdown of >> no, because no I I I I actually don't, because the economy was pretty strong in 2019 and it went into it went into 2020 with uh pretty strong consumer balance sheets, low debt. There wasn't a shadow banking economy. There was no bomb in the US economy waiting to explode. And so although we saw high unemployment briefly and we did have to stimulate the economy, that's not his job. His job was to set rates. There was a little bit of like, I mean, maybe you put the inflation, you know, the the ZIRP era and the end of the ZIRP era and all of those gyrations on him, but those the problems that were downstream of both the ZIRP era and the end of the ZIRP era were suffered mostly and benefited mostly by like tech companies and Silicon Valley companies that had really long cash flow horizons. And so there was not a moment where it was a dire situation that the Fed had to intervene in a meaningful way and like save the economy, like in 2008. It it it's a big deal. He did a great job, but he didn't he wasn't faced with the same challenges of a Bernanke, for example. That's what I would say. Tyler, what do you think? Yeah, I think that that's reasonable. But also like if Powell was worse at his job and he saw some crazy crash because of COVID and then he brought it back, like then it'd be like, oh yeah, he did face this massive thing. But because he did, you know, such a good job, maybe you didn't see any like massive crash. Nothing super bad happening is evidence that he was really good as a Fed chairman. >> Uh yeah. Yeah, maybe he's a defensive back. Uh you know, if they don't score, there's no great plays because he's just a shutdown shutdown cornerback. Anyway, Jensen Huang is over in China. Jason Calacanis has a photo that looks uh extremely real.
Zero AI detected, but he's bringing two huge boxes of GeForce RTX 5090s, which are >> This is a picture from when he was in Alaska, too. >> Uh Jason says, "Never stop selling." I I agree. Uh there is some news which we will cover later uh in the week around uh the dynamics around H100 sales uh and Blackwells, what's actually happening. It's all in flux as the Trump China summit uh plays out on the front page of the Wall Street Journal every day this week, because it is headline news. High stakes US China summit kicks off. Watch a team of humanoid robots running a full eight-hour shift at human performance levels. And Brett Adcock said this is fully autonomous running Helix 2. >> All right. Pull up pull up this post from Pete. >> Yes. And the the stream did fantastically. It was 24 hours. It got 3.4 million views. But at a certain point during the stream, there was some questions about whether or not the humanoid robot was in fact >> back to the beginning. Back to the beginning. >> Okay. Let's play this. >> All right. So, it's cooking. I mean, the speed is actually >> And we were extremely impressed by this. This was remarkable. Remarkable. >> Even if it's teleoperated, it's extremely impressive. >> Yeah. Yeah. Yeah. Like the robot's clearly working. This is very >> but they're saying that it's not teleop. >> Okay. So then the robot starts missing things, being a little bit like an inch off, and then reaches up and touches the robot's head, the robot, which is something that wouldn't normally be necessary. It doesn't have like a logical explanation or conclusion. And so a lot of people are asking >> it does have a semi-logical conclusion, which is that Brett is claiming when it reaches across its body to go to the right, that it puts its hand up here to get the hand out of the way.
That's what I was thinking, was that if the hand is is halfway up, you might be blocking the camera sensor, and so the robot might reach the hand up further to move out of the view so then the robot can look at the next package. So that's one possible explanation, but a lot of people are asking even harder questions, saying was there potentially a human in the loop? Was this teleoperated? Which is something uh Brett has said it's fully autonomous. I feel like that means no humans in the loop. But Toraxes has an artist's representation of Helix 2, Figure's in-house neural network running entirely on board. And it of course is a human in a VR headset. Very uh very debatable. We'll let we'll see where you stand. But there is there is a third option which I have shared, which is potentially no humans involved. I don't know if you'd call it autonomous, but you would call it no humans in the loop >> loop >> because you have >> Well, it is an autonomous system, right? It just sort of runs. >> Yeah, I would I would consider this autonomous. It's the It's the image that I shared in the production chat. It's not of a human and it's not quite robotic, but there's no human in the loop. And so this could explain the system is running with no humans in the loop. If you make that claim and you follow this, >> I think this qualifies as no humans in the loop. If you have a giant orangutan in a VR headset puppeteering the robot via teleoperation, you could say that this system does not have a human in the loop. And you could make that >> and I could make the argument that it's autonomous. Yes, the chimpanzee is running its own. It has somewhat of a neural network, like a neural network. >> The OpenAI Elon Musk trial is in its final day. The trial is ending. People expected four weeks of trial. We only got three. They're cutting it short. Uh what are the prediction markets saying about who's going to win? I want to know that.
And I want to go to Mike Isaac, the rat king, because he has a breakdown of what's going on. He says, "Good morning. Closing arguments of Musk versus OpenAI with special guest Microsoft are happening today." >> The Kalshi: Will Elon win his case against OpenAI? It peaked at a 58% chance. Where is it now? >> April 28th. It's now sitting at a 30% chance. >> 30% chance. Okay. Uh, so right now the judge is instructing the jury on the criteria by which they should be judging the outcome of the case. Important, because if the jury listens and carries this out, it is a very very specific lens through which they view all the evidence. Ostensibly, it's where theater ends. Listening to this being read out in court for the last 20, 30 minutes is very helpful, because it's clarifying how high the bar is for the plaintiff's side in proving some of these claims. Uh, sort of feel bad for the AV guy during this trial. There's been feedback, been mic drops, but not in the good way. The mics have been dropping out. Janky video feeds. They need to revamp this place, says Mike Isaac. LMAO. The first joke of the tweet storm. He says Musk's counsel is going after OpenAI execs Altman and Brockman and has the mugshot-style photo of Altman on the screen again. The battle of Photoshops of executives in this trial has been entertaining to watch. You want to depict your opponent in in the worst possible light. Musk's counsel going back and forth, hammering the point they've made over and over, the argument essentially painting a picture: Sam Altman, liar. Chipping away at witness credibility has been a core strategy for the plaintiff's side. And we're back to everyone hates Google again. Uh Molo is using Larry Page, who they claim doesn't care about humanity, as a foil to the noble Musk, whose only care with respect to AI is the future of humanity. Uh Musk's counsel is painting the don't-trust-Sam picture in a bit more detail for the jury.
Also Musk's side has a picture of Elon and Altman on the screen now. Sam's looks like he's about to be processed by a US marshal. Musk's looks like he's getting ready for the Met Gala. Lol. Lots of Musk closing arguments on the semi-populist track of pointing at OpenAI and saying these billionaires are making gobs of cash while running a charity for the supposed good of the world. I'm curious if the jury can register this argument, even if it comes from Elon Musk, the world's richest man. Ouch. OpenAI counsel begins closing argument with a broadside against Musk: even the people who work for him, even the mother of his children, can't back his story. Oh yeah. Back to the war of the Photoshops. Closing remarks now. In the digital displays on the monitors for exhibits, all the OpenAI executives look like Olan Mills photo shoots. Do you know who Olan is? He says it's complimentary. I need to get up to speed on my photographers. Olan >> Olan Mills offers portrait photography. >> Oo, does look very nice. Pull up the the Google images on Olan Mills. Uh anyway, short summary of the closing. Musk camp: all these OpenAI executives are rich as hell and lie all the time. OpenAI camp: all of that is a sideshow, and literally all the claims Musk is bringing cannot be stood up by actual law. The Microsoft camp disappears into bushes. Dota got mentioned again. They love mentioning Defense of the Ancients. Uh, incredible Photoshop from the OpenAI camp of a calendar of events, complete with little characters and a timeline of events. I wonder if they're using Image Gen 2 or if they're doing it the old-fashioned way. I can't wait until it's entered into evidence this afternoon so he can show us. Uh, sort of want to buy this meme guitar, but I also have two Teles. Is that just completely side side side note? Uh, a gamer has entered the blog. The Dota moment has been mentioned nearly every single day during this three-week trial. AI researchers
We've got to have Mike back on the show. It's so good. >> What is the timeline for the jury to meet? Is this something they're doing today? >> They're getting a 30-minute recess, the most they've had in a month. "I might actually be able to go outside and get real food. There's a Popeyes across the street. Is it a bad idea to get a bucket of red beans and rice?" That's what he's thinking about doing. So, not much news on when this will close. It is 1:10 Pacific time. I imagine that, uh, they will wrap up by, what did he say, 3:00 p.m., 4:00 p.m.? So, a 30-minute break, and that happened 40 minutes ago. Um, so I imagine that >> But they've been taking Fridays off is kind of what I'm getting at. >> Oh yeah, because this could >> So maybe this goes to Monday. This is just closing arguments. It's not necessarily the end of the trial. Might get the results. >> Or the jury might make a quick call. But >> Well, there was an update >> 11 minutes ago: a lawyer for OpenAI on Thursday defended the company's chief executive, Sam Altman, from withering character attacks by Elon Musk's legal team as both sides delivered their closing arguments in a trial with potentially seismic implications. The stakes are high. Mr. Musk, who was not in the courtroom on Thursday because he was in China with President Trump, is asking for more than $150 billion in damages. He is also asking the court to remove Mr. Altman from the startup's board and to stop a shift the company made last year to operate as a for-profit company. They pushed back. Sarah Eddie, a member of OpenAI's legal team, tried in her closing argument to dull the attacks on Altman's credibility and to argue that there was never a firm agreement among the founders that could have been breached. No one in this case other than Elon Musk has testified to any commitments or promises that Sam Altman or Greg Brockman or OpenAI made to Mr. Musk, is what she's saying.
After the recess, William Savit, OpenAI's lead counsel, told the jury that Musk does not have a claim against the startup unless there was a specific agreement between Musk and OpenAI describing how his donations to the nonprofit should be spent, and that agreement does not exist, Savit said. So that's where, I guess, OpenAI is leaving it for now. We will continue to cover the story as it evolves. >> Is the jury allowed to use Codex? Could be done in one and a half hours. >> There's other tech problems going on. Max Zeff over at Wired has been covering the story as well and says Musk's lawyers brought a big monitor, maybe 36 inches, into the courtroom. OpenAI's lawyers asked to use it. Musk's lawyers said no. The judge told Musk's lawyers that they have to let OpenAI use it. Then OpenAI said it might not be possible to connect their laptops to it. AGI is here, but we'll still need a dongle. I suppose a dongle has entered the courtroom. He says, actually, there's about 15 lawyers standing in the middle of the room right now talking about how to use this big monitor. This is wild. >> In other news, >> break it down. >> Tim Draper says, "I think I broke a record. I took 52 pitches in 52 minutes at below 40°. Welcome to my office. #DraperUniversity #SurvivalTraining." What do we think about going in the ice tank? How cold are ice baths typically? You've done ice baths. I feel like I did one, and it wasn't as insanely difficult as people said, but then I checked the temperature, and I don't think it was 40. I think it was closer to 50. >> Yeah, you can totally get closer. There's a couple of companies that sell them. Personally, if you're going surfing and the water is below 45°, it can just be very painful, even in a wetsuit, anywhere that's not covered. A lot of people are putting on gloves, uh, booties. So apparently Joe Rogan's is at like 34. >> 34? >> Yeah. >> Wow. >> So that's like the cold plunge, you know.
He's the top of the mountain when it comes to ice baths. >> He's the final boss. >> Yeah, this is just a crazy picture. I did think it was AI, but it turns out it's real. It's just funny, because what is this setup? >> Yeah. What are all the trash bags there? And the wall is sort of decrepit, and there's piping. >> It looks kind of like a prison ice bath. >> Yeah. This is not what you'd expect. I mean, isn't he a billionaire investor? You'd expect something palatial. You see the properties that Mark Zuckerberg is acquiring, that big investors are acquiring; you would expect something, uh, much more regal. Uh, but he's doing it the old-fashioned way. Whipped this up himself, bought some trash bags, and took some pictures. Uh, yeah. That's our show, folks. Leave us five stars on Apple Podcasts and Spotify. >> Another one. >> Sign up for our newsletter at tbpn.com. See you tomorrow at 11:00 a.m. Pacific time. And have a great rest of your day. Goodbye.
