The conversation kicks off with the surprising news of Sam Altman's departure from OpenAI. Nabeel and Fraser then discuss the 'crazy week' of product releases, demystifying the complexity and impact of AI in various workflows. They spotlight AI startup Lindy, review custom GPTs, and share early impressions of Dot, an AI-driven guide for daily life. The duo categorizes AI developments into three 'buckets': the lead bullet strategy, the universal translator, and inventing new behaviors.
Nabeel Hyatt: The author Rory Sutherland has these three buckets for new market creation. You can make faster horses. That's one set of innovation, primarily about efficiency. And then there's teleportation, which is the thing that feels like sci-fi, but you know exactly what it is. If you could get there, everyone agrees teleportation would be awesome, and it should be here.
And then the third bucket he calls the Japanese toilet problem. You never knew you needed a Japanese toilet until the first time you showed up in Japan, and then you walk out and you're like, why doesn't everybody in the world have this thing? Right? That's the third bucket.
Fraser Kelton: That makes sense. And I'll tell you, it also needs different founders at different times for each of those buckets.
Let's do it. Hey, everybody. Hey, Nabeel.
Nabeel Hyatt: Hey, Fraser. And welcome to what turned out to be a pretty crazy day. Welcome to Hallway Chat. There are certainly some hallway conversations going on: literally 30 seconds before we turned on the mic here, Fraser caught the news that Sam Altman is out at OpenAI.
This is not CNN breaking news, so maybe by the end of this podcast we might have a comment or two on the whole thing. But our job is not to gossip, it's to analyze, with some thoughtful consideration, what is undeniably a pretty crazy thing that just happened in the industry.
Fraser Kelton: Yeah, we're not a news show, and I also think that this is probably a situation where we should process before we share anything, myself especially.
Nabeel Hyatt: We are going to talk about what was already a pretty crazy week though.
Because there was just a lot of product released this week. I don't know if you felt this, Fraser, but I felt like I needed an extra day in the week to try and demo all the stuff that hit. Everyone seemed to need to get their stuff out before Thanksgiving, and end-of-year deadlines are clearly playing a part here.
Fraser Kelton: I think we're going to see a tick, tick, tick cadence of releases over the next couple of weeks before end-of-year shutdowns happen.
Fraser Kelton: I carved out a good chunk of time to play around with Lindy this week, and I think it's... maybe the best glimpse of the future that a lot of us are going to live in.
And I know that you've played with GPTs as well. Do we want to start there?
Nabeel Hyatt: Yeah, why don't we start with Lindy?
Fraser Kelton: Okay. So Lindy is a startup billing itself as a platform to build AI employees, which is so ambitious that I love it.
What it really is is a way to stitch together different workflows using AI to automate tasks. And that sounds very dull and boring, but I will tell you, it is amazing. Right now I have a Lindy that gets copied into an email and then takes over the scheduling task: it comes back on the email thread with available times that work for me, gets the response from the person I'm trying to schedule with, then sends a calendar invite with a Google Meet link and replies to the original thread. Not science fiction, but a very hard orchestration problem. There were a couple of different moments where I had to pause and take in what was happening. First of all, you describe in natural language what you want it to do, and it then executes it. The first time it executes, it realizes that it needs access to your calendar, so it has you auth into your Google Calendar. Then it realizes that it needs to auth into your Gmail account, so it asks you, within this beautiful flow, to auth in. And then it does it.
Nabeel Hyatt: How much of that feels generative versus being on rails? Did somebody go and program the calendar auth and the Gmail auth inside of Lindy, or did you really feel like you were bumping up against truly generative new stuff?
Fraser Kelton: I think that this is what makes it very appealing, is you get the sense that it's a mix of the two. So it has an understanding where somebody's created a template that a calendar invite requires these fields to be formed. But then there's a generative piece that is going out and trying to work with the API.
And this is the part that was perhaps the most inspiring for me, is it got an error when it tried to create the calendar invite, and then you see its thought process as it reasons through the error. It says something along the lines of, Oh, a 400 error is commonly associated with a bad field entered into this area, so I'm going to try it again by removing the special characters that were in that field.
And then it got an error again, and it said, okay, it's not that, I got the error again. It worked through it and ended up shipping the calendar invite.
It is the combination of AI having some level of reasoning and some generative capabilities as it works through these problems within just enough structure that somebody's hard coded in that it can stay on the rails, that I think makes it work very well.
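The error-recovery behavior Fraser describes, call the API, read the failure, reason about a likely cause, adjust, and retry, can be sketched roughly like this. Everything here is a hypothetical stand-in (the function names, the 400-on-special-characters rule), not Lindy's actual implementation:

```python
import re

def call_calendar_api(event: dict) -> tuple[int, str]:
    # Stand-in for a real calendar API call. To mimic the error Fraser saw,
    # it rejects titles containing special characters with a 400.
    if re.search(r"[^\w\s:-]", event.get("title", "")):
        return 400, "Bad Request: invalid characters in field 'title'"
    return 200, "Created"

def ask_model_to_repair(event: dict, error: str) -> dict:
    # Stand-in for the generative step: the model reads the error text and
    # proposes a fix, here by stripping special characters from the title.
    if "invalid characters" in error:
        fixed = dict(event)
        fixed["title"] = re.sub(r"[^\w\s:-]", "", event["title"])
        return fixed
    return event

def create_event_with_retries(event: dict, max_attempts: int = 3) -> str:
    # The hard-coded rails: a bounded loop around a deterministic API,
    # with the generative repair step in between attempts.
    for _ in range(max_attempts):
        status, body = call_calendar_api(event)
        if status == 200:
            return "shipped"
        event = ask_model_to_repair(event, body)
    return "gave up"

print(create_event_with_retries({"title": "Fraser <> Nabeel sync!!"}))  # shipped
```

The structure (a bounded retry loop, known schemas) is the rails; only the repair step is generative, which mirrors the mix Fraser is pointing at.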
We have a habit of sending out weekly summaries on Sunday afternoon. It's going to go into my Google Drive and comb through, if it works, we'll see, various Google Docs and my calendar to figure out who I met with and what the summaries were for those meetings, and then send a well-formed email generated and synthesized through the model.
Nabeel Hyatt: We'll see. My experience with so many products that involve GPT is that they're terrible at dates. Nobody's yet built the scaffolding above it that allows it to handle dates. So let's see if it can understand the relative date structure of "gather docs from the last week and make an email."
I'm looking forward to this Sunday's weekly update.
Fraser Kelton: We'll see. But this is the case where I think that having an inelegant solution, if you will, of forcing structure and fields to create rails adjacent to the AI feels like the right product solution.
Nabeel Hyatt: I think we're talking about something pretty specific, right? We have a generative system, an LLM, and what we're really talking about is the tension point between our right-brain activities and left-brain activities, and how you connect the two systems together. This is obviously shorthand; the whole right-brain/left-brain thing isn't a real thing.
For me, the first example of this magic was in plugins, the Wolfram Alpha plugin, which is maybe the best singular use case in plugins I've come across. And it's exactly an example of this: let me generatively speak to a system, and then you call in a left-brain system as you need it.
And this seems analogous, right? I am generatively talking to you, and then I need you to call into these more deterministic systems to do the things that a generative model would hallucinate and do badly.
If you just think about what a computer has been asking you to do over the last 50 years, it's telling you to program, to be syntax-perfect and speak in computer language, and this is obviously the inverse of that.
This wave of LLMs, I don't know how much it'll stick, but this wave of LLMs is essentially a connectivity layer. It's funny, Fraser, this is literally the way I was going to talk about GPTs: it is a fundamentally connective layer between an iterative generative interface and left-brain, fixed-syntax systems.
Fraser Kelton: Yep. Yep. That's a wonderful way to put it. To go back to my earlier comment, this is the purest example of where I think the world's going. We all have a bunch of tasks that we do day to day, week to week, and there are certain triggers that we take as humans to go and do those tasks.
There are then certain different platforms and products that we use for those tasks.
And automating that, stitching that together is a great solution for LLMs. Because it can add a little bit of structure from unstructured data and then feed it into various pipelines that can complete that task for you.
And we'll see, I mean, we'll see on Sunday how the automated email feels. The scheduling stuff feels pretty good. We'll see how it goes.
Fraser Kelton: But how have GPTs been this past week?
Nabeel Hyatt: It's one week. So the first thing is I need to try it over the weekend a bunch more times. At this point I've built probably five or six GPTs and wrestled with the interface for a little while. I've also been using a handful of the GPTs that friends are generating and putting out into the world. As you saw internally, I built a Spark design mentor: I took our whole internal branding guideline, given that, you know, we don't even have a head of marketing, but we do have a very detailed branding guideline.
Fraser Kelton: We have a Nabeel, we have a Nabeel.
Nabeel Hyatt: And no we don't! We now have a GPT! It allows you to do things like upload a dinner invitation, or a Canva thing, or a Figma thing, and get design feedback on how it adheres or doesn't adhere to the brand guidelines, which fonts to use, a bunch of things like that.
I used the board game GPT that they built, GameTime, which did a relatively good job. And Universal Primer was probably one of my favorites: my long-time friend Siqi Chen built a learn-anything chatbot that does a very good job of being an educational assistant. I need to talk to him about what structured prompting he did to get there.
The smart thing they did is the natural language interface for building them. Look, I've been prompting for quite a while now, and we've all spent dozens and dozens of hours on it. But the average human? We're still very early in the cycle of understanding what a prompt is or how to write one.
So prompting is still technical and does feel syntactical. It doesn't feel like totally natural language; it feels like a syntax. I think of the GPT builder as an intermediary: you're now using a natural language interface, ostensibly as a prompt builder, that's what it really feels like. And then the very smart thing they did, instead of trying to hide it all, because of course it's still early software, it's buggy, there's lots of issues, maybe they got the prompt wrong, is that they still reveal the surface area behind it, right?
So the interface feels similar to Copilot, where you're typing natural language in the left-hand column, and then it shows up, not in Python but in prompt language. And you can look at the prompt language and double-check what you've done.
I'm pretty convinced it's not the App Store.
What you're going to see over the next couple of months is ostensibly a prompt finder. People are going to build wonderful prompts, and then you're going to have a search engine for prompts. That'll help everybody, right? But I suspect that the real unlock, and I think this goes back to the Lindy conversation, is elsewhere. I have not yet used a GPT that does a really good job of speaking directly to an API, versus prompt-to-a-doc or prompt-to-the-internet. A GPT being an API interactor, this similar syntax where I can speak natural language to you and you can speak to the API to gather data, is where the real magic will happen.
And I have a couple of APIs lined up, which I'll save for a future product podcast, that I'm going to hammer on over the Thanksgiving break.
Fraser Kelton: Going back to even well before the launch of ChatGPT, my observation was that extensibility and discoverability were the key things for really broad, deep utility. If you look at the response on Twitter, a lot of the people who have been early adopters tinkering and building with AI and LLMs feel disinterested or dissatisfied with it, and it's because they have no problem prompting these models, right?
They speak the syntax already. They've trained themselves over the past couple of years to do it. Siqi's GPT, I tried it, and it's simultaneously great and also underwhelming, right? It's underwhelming because I could
Nabeel Hyatt: expected it to work that way.
Fraser Kelton: Yeah. Listen, I could go have it do all of those things, but there's no way that many people in my life could.
But now if I'm like, oh, ChatGPT is a great way to learn any topic, I don't have to give them a Google Doc explaining the different ways to prompt and interact with the model. I can give them a link to that GPT. I share your view that it's not going to be the App Store, but if you want utility to come out of GPTs for a broad number of people, this is a very elegant solution to get there.
Nabeel Hyatt: Yeah, I mean, look, the problem that the natural language interface in GPT had before this is that nobody has any idea what the affordance
Fraser Kelton: That's
Nabeel Hyatt: is on its capabilities. And it very much reminds me of the Alexa problem, if I can rant for a minute. The affordance was just completely wrong with Alexa.
I have no idea what the full nature of commands are. Can I ask this thing for sad music or just an artist? Can it do math? I'm not sure until I've said the words and then I get a failure state back. And then I try again, it fails again, I'm frustrated, it's a horrible loop. And so the user is trained to just use Alexa for a couple of really reliable jobs to be done.
Usually a glorified voice alarm clock and music player, after all this amazing technology, right? I think of it as: we're in the MS-DOS era of generative AI. First, I don't know what commands this thing can do.
You know, exploration can be fun for explorers. So for people like you and me, it's great. But the vast majority just want to know what it's good for. This is why you end up with people sharing dozens of prompt spreadsheets to figure things out.
Ask any designer what it's like to get a creative brief from a random human. About what they want designed, and you will hear horror stories. People are just not great at translating what's in their head into the English language. And their words are not precise enough, careful enough, they don't even know what to ask for, right?
This serves a little bit like the Google spreadsheet in the sky of all the various wonderful skills, I'm glad they didn't use the Amazon phrasing, that GPTs have, all the better. And I suspect where it's going to end up is that there will eventually be a natural language interface for finding GPTs everybody else has built, which were themselves built with natural language, to talk to GPTs, to talk to APIs.
Fraser Kelton: Yeah, that's it. You have extensibility: how do you make it easier to do new things with the GPT? And you have discoverability: how do you allow as many people as possible to find that GPT at the moment that they need it? You know, when we launched DALL·E, even before it went live it was very clear that it was a great technology for creating children's coloring-book pages.
My kids had no idea how to use that. You put them in front of a blank box and it was just a struggle for them. But one of the GPTs that OpenAI launched is a coloring page creator, and it gives just enough of a scaffold for my eight-year-old to walk through it and do it well enough.
And then she gets exactly what she wants. In that sense, I think it's great. Are we going to buy a subscription to a coloring-page-creator GPT? No. But we are certainly going to get more utility out of the product because of these little scaffolds that are put in place.
Nabeel Hyatt: We're a year into ChatGPT, we've seen hundreds of millions of dollars go into a bunch of startups, and in my head right now, I'm sure it will change in three weeks, I would bucket the new types of consumer experiences being built on LLMs loosely into three buckets.
The first one, which is what we just talked about for a little while, I am not a good namer of things, but let's just call it talking to APIs, loosely. The second one, which is what I've actually been thinking about all week long, is the lead bullet strategy.
People have been asking, hey, which new GPT product are you really using every day that really changes your life, that really feels great? Why aren't there more of them? And I've been thinking that the most recent products I'd point to are basically lead bullet strategy, not silver bullet strategy.
It's things like Perplexity, where you know exactly what that interface is: it's Google, but just done better. They solved a million little paper cuts. And then the other one I've been talking about a lot lately is Arc as a browser, which is an incredibly good example where no single feature is incredible.
But they add up to a browser experience that's appreciably better.
Anything else that you would think of in that category?
Fraser Kelton: Not off the top of my head, but that's the second of the three. You said there are three different strategies you've been mulling on.
Nabeel Hyatt: Yeah, the first one is talking to APIs. The second one is the lead bullet strategy. And then the third one is one that we've teased out a little while, which is entirely new interfaces and experiences built out of AI. Descript is a good example of this in audio. I think people were hoping that a million new interfaces would show up on day zero, and as we've discussed a couple of times, I think we're of the view that those things take time.
I don't think I had been thinking about the first one that much, to be honest. But think more about what it would mean: which APIs are likely to surface? What kind of new durable value do you create if you're connecting all the APIs very fluidly? What happens when every single human has access to whatever APIs they want, to stitch together whatever they want?
There are very interesting implications when you think about where that goes.
Fraser Kelton: Interesting. Interesting. I think I missed that when you went through it the first time, because at OpenAI we used to say, and there's now a public slide on it, that the model is the product, which literally means that talking to APIs, in time, is going to be the product. But you mean, in that case, the Lindys of the world, where you are talking in natural language and it is stitching together a bunch of different tasks for you via API? Is that what you mean in the first bucket?
Nabeel Hyatt: When I say talking to APIs, it's not that the model is the product; the model is the interface. If you think about what Lindy is, the model is just the UI. It's the way for an average user to speak in the syntax of something that is actually not generative, but fixed. Copilot is a way I speak in a language, frankly, all of the coding companies are: I can use my English-language brain to speak the way that I know, and then you will translate, it's a transformer, into the fixed, left-brain syntax that the computer understands.
And Lindy is the same. You are speaking to it to try and loosely cast an incantation, and then Lindy translates that. The LLM's not answering; it's sending API calls to Google. It's doing very deterministic work. It's the same thing as ChatGPT sending a math query to Wolfram Alpha to get back a deterministic math answer.
So that's my first category, really. It's not models all the way down. It's models talking to the old systems.
It is a universal translator.
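The universal-translator idea Nabeel describes can be sketched in miniature: the model is only the interface, turning loose natural language into a structured call, and a deterministic system does the real work. Both functions below are illustrative stand-ins, not any real model or API; a real pipeline would get the structured call back from a tool-calling model and send real API requests.

```python
def model_parse(utterance: str) -> dict:
    # Pretend LLM step: free-form English in, fixed schema out.
    # A real system would receive this JSON from a tool-calling model.
    if "schedule" in utterance:
        return {"tool": "calendar.create",
                "args": {"title": "Coffee with Sam", "day": "Friday"}}
    return {"tool": "none", "args": {}}

def execute(call: dict) -> str:
    # Deterministic, left-brain side: no generation, just fixed behavior.
    if call["tool"] == "calendar.create":
        a = call["args"]
        return f"Created '{a['title']}' on {a['day']}"
    return "No action taken"

print(execute(model_parse("Can you schedule coffee with Sam on Friday?")))
# Created 'Coffee with Sam' on Friday
```

The point is the shape of the handoff: generative on one side, a fixed schema in the middle, deterministic execution on the other.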
Fraser Kelton: I think that is the future of much of work. Microsoft owning the term copilot is very smart. I think there are going to be work copilots where we talk to a model that then goes and orchestrates the old world, as you just phrased it, and does tasks for us.
Fraser Kelton: Lindy felt really remarkable. And this is almost a branch from that world into bucket number three: I played around in depth over the past week with a lovingly built product called Dot, by the gang at New Computer. They position it as a guide for your life, and it's very easy to hear that and think it's copilot for work.
But the more I play around with each of those, the more I realize that there is probably going to be a real opportunity for two distinct, impactful products here. Because one is cold, utilitarian, workflow-oriented, and I don't want any of that in the guide for my life.
Nabeel Hyatt: Yeah, can you explain it at the ground level? I open up the product and what happens? What does it look like?
Fraser Kelton: It is a very opinionated product. It is a single continuous chat, as if you and I were in iMessage, and that's all it is. You pinch to zoom out, and it shows you conversations that you've had with Dot, again chronologically, but this time some structure has been applied at a high level.
So it might say, Fraser's morning routine, Fraser's plan for the weekend, etc., etc., going back in time.
Nabeel Hyatt: But Fraser, when I hear that, and I'm going to ignore for a second having played with the product and met with the team, when I hear that description, it sounds to me like the same way that ChatGPT or Perplexity or any search engine has a list of the conversations you've had with them right now.
I zoom out, I look at the left panel, and there's a list of all the previous conversations I've had that I can drop back into. So
Fraser Kelton: Ah, so you,
Nabeel Hyatt: what's different here? Talk to me about what's missing and what's the nuance of the surface area of the interaction.
Fraser Kelton: That's a great callout. And I don't think you get the soul of the product, or appreciate the difference between this and a copilot for work, until you actually use it. It comes through in how Dot utilizes the memory it's creating from your conversations and the facts you're sharing with it.
And then how it tries to transition from gathering the facts to helping you make sense of the facts to be a guide in your life.
I was on a plane, I came across the BlackBerry movie, and I thought, I'd better not watch this now because my wife will not be happy; we should watch it together.
So I said, Dot, remind me that I want to watch the BlackBerry movie. Last weekend we had plans to send the kids off so that we could go out for an evening. It was going to be a rare, great evening for us. And Dot checked in, because I had asked it to remind me about some things for the date we were going to go on.
And I said, too bad, one of our daughters is sick, we're not going to go on that date anymore. And it was so amazing, Nabeel.
It comes back with a little bit of empathy. It says, you know, that's a bummer, hopefully Natalie gets better soon. And then it goes, I guess you won't be going out for dinner tonight; it could be a great opportunity to watch the BlackBerry movie. And it sounds so silly, right? It sounds so
Nabeel Hyatt: no, I get it.
Fraser Kelton: but you think, okay, it captured that beautifully.
It was exactly the right tone. It captured memory in the way that I would want it to. It was as if a friend had been in a series of conversations with me and just found the opportune time to say, hey, listen, why don't you watch that movie tonight?
Nabeel Hyatt: Yeah. It's one of those things you don't really know you need until you've got it.
It's a very good example of walking a fine line between utilitarian and empathetic, social-emotional. Most of the things we've seen in the market sit on one side of the equation or the other, right? Perplexity and ChatGPT feel firmly on the utilitarian side, and the Character.AI or Inflection Pi products feel very much on the "I'm just trying to be a human in your life" side.
But if we think about most of our relationships, most of them actually start semi-utilitarian and transition into depth, taking on some measure of humanity over time. We start as co-workers talking about a project we're working on, or we start in a board game group on Thursday nights, or basketball on Saturday mornings. There's a utility function at the beginning of many relationships, and I don't think we should ignore that. And I think Dot does a very good job in tone of not trying to be one or the other, but dancing in between the two.
Fraser Kelton: That's right. And I think that's an entirely new market that is likely to become important, because I don't want my code-completion copilot to start empathetically talking to me about my sick daughter. There is historically a strong separation between work and personal life, and I suspect that we will have assistants, agents, or droids that are executing tasks for us, cold, utilitarian, factory-like work, making us better, more efficient, more effective. They are tools. And then there are going to be these somewhat soft, empathetic guides through life that sit right on that boundary: the right tone and compassion for a machine, while still being useful and providing value in navigating life.
Nabeel Hyatt: This framework of three buckets of AI products is literally one I'm coming up with on the fly this week, so it'll probably change next week. But it just occurred to me that you could reorder those buckets by how broken the current way of doing the behavior is in the world.
How broken is the current interface? The lead bullet strategy you place in the first bucket. Browsing the internet works, right? So do search engines. They work. They don't need complete and total revolution, but AI can play a role in smoothing out a thousand paper cuts that add up to something that feels substantially different, in a more polished way.
I can almost hear a certain cohort of humans saying, that is just a sustaining innovation, and no current startup will be able to win with that strategy. I completely disagree, because large companies are often horrible at polish. Apple is great at polish.
They are the lead bullet kings, not the silver bullet kings. And they hold that position because nobody else can do it. It doesn't matter how big Google gets; they still ship stuff that's not nearly as polished. It is hard to do, and it's cultural.
That's bucket one: stuff where you just need iterative change.
The second bucket, the universal translator bucket, the let-me-talk-to-APIs bucket, is where technical people already have access to the goods, but real change needs to be made to give everyone access.
And then the third bucket is new behaviors, right? And I do think that this feels like new behaviors. The author Rory Sutherland has these three buckets for new market creation. You can make faster horses. That's one set of innovation, primarily about efficiency. And then there's teleportation, which is the thing that feels like sci-fi, but you know exactly what it is. If you could get there, everyone agrees teleportation would be awesome, and it should be here.
And then the third bucket he calls the Japanese toilet problem. You never knew you needed a Japanese toilet until the first time you showed up in Japan, and then you walk out and you're like, why doesn't everybody in the world have this thing? Right? That's the third bucket.
Fraser Kelton: That makes sense. And I'll tell you, it also needs different founders at different times for each of those buckets. With Lindy, you know what needs to be executed on that roadmap to make it great: it's sanding off a bunch of rough edges, making sure the pipes all stick together, and making sure the edge cases are gracefully handled.
And I'll tell you
Nabeel Hyatt: Very hard, obviously
Fraser Kelton: An excruciatingly hard technical challenge, and you need good intuition about which edges need to get sanded off in which way.
But then you sit down with Dot, and it's very malleable. I told Dot, I have time in the morning that's just for myself before the kids get up; help me program my day. It sends me a check-in at the time I've asked for, and I'm sitting around sipping my coffee, and we just chat.
Here's what I want to accomplish today. Can you send me a reminder by X to make sure I'm on track? And then I say, okay, that's it. And the lovely thing is, it came back a couple of mornings ago and said, your kids don't get up for another 15 minutes; here's a link that you told me to remember, do you want to read it right now? And I thought, oh hell yeah, I want to read that right now. It was so great. Now, it's very rough, and as you said, we're still figuring out even the problems that need to be solved in that area, but it's very inspiring.
Nabeel Hyatt: There's also a different measure of patience that's required. I think you build a different culture. You know, I do worry a little bit that we were in such a red-ocean world the last four, five, six years, maybe even a decade, that the founder ethos became ship early and often: the quick iterative feedback cycle, really truffle-seeking founders, right, that truffle pig sniffing something out really fast.
And that is right if you're doing the first bucket, for sure. It's intuitive, it's thoughtful, it's often hard to find the polish points that will get rid of paper cuts, but you can do it really fast; you kind of feel it. The second one takes a medium length of time. But in the third bucket, where you're really inventing new things, when you're working on a new video game, just to take something entirely different, it can often be six to nine months before you feel the fun.
A year.
And I worry that as a collective entity, we've lost some patience for craft, right? Especially in AI, where quote-unquote everything feels so hot. I think it's just so hard to be a founder; I have so much empathy for them. You raise a round, or you're just hacking away with some friends, you pop open Twitter and somebody's launching something every single week, and then they come back later and they've got a million users signed up for the thing. And it's very hard
to get to that third bucket of work, the work that is really generative and not iterative, that creates new interfaces, because it almost never works in the first few months. It's a lot of walking in the dark in the meantime, and you have to have real patience for it. I have a lot of empathy.
Fraser Kelton: On both sides, right? On both sides. A lot of patience when you're building it, to be navigating in the dark for so long; you have to be brave to go down that path, frankly. On the other side, you need users who are patient with it, who sit with it and try to figure out what it even is and what it can become. There's an art to it, and these experiences are being sculpted and discovered at the same time.
Nabeel Hyatt: That's right. Just very different sets of skills. All three of those buckets are very different sports. Although we should be clear, it's fun to talk in frameworks; I'll probably have a new framework next week.
Fraser Kelton: That's how you learn.
Nabeel Hyatt: Yancey at Kickstarter, who wrote a great book, has this idea that there are three phases to any innovation cycle.
The first phase is the paradigm shift, where there are basically no rules. The second phase is the science phase, where you're testing theses constantly and getting feedback really quickly. And the third phase is the production phase, where you pretty much know where you are.
Obviously it's easy to talk about AI generally as in the paradigm phase right now, but as I think we just talked about now, depending on the use case and the problem set and where you're going afterwards, you could be somewhere in between each of those things, and I think you build very different cultures and very different go to market formulas and probably different investors, probably different co founders, depending on what you're actually trying to build.
It's not a one size fits all.
Fraser Kelton: Yeah, your fundraising strategy should be completely different as well. And how you build the company, everything should be different across those.
Nabeel Hyatt: We should maybe leave it here and be done for the day. We can all go process.
Fraser Kelton: Yeah. We gotta think through what this means. It certainly means an awful lot. I have tremendous respect for Sam as a leader, especially at a place like OpenAI. He was a magnet for talent unlike anybody. And so we will have to see. It is going to be interesting in the weeks and months and years to come.
Nabeel Hyatt: Yeah, as we talked about earlier, it is such a careful thing to stay in that hyper-growth, high-execution mode where you're a massive magnet for entrepreneurial talent, not just talent that wants to build something. Companies only hold on to it for a very short amount of time. We will see what this means for what OpenAI is.
Nabeel Hyatt: This is not a boring market.
Fraser Kelton: No. The book and then movie that gets made around that company is going to be ridiculous.
Nabeel Hyatt: Who are they going to get to play you, Fraser?
Fraser Kelton: Yeah, all right. See ya, man.
Nabeel Hyatt: Talk to you soon.