May 12, 2024

Should we stop talking about models?

Hallway Chat

Is it still worth discussing models when discussing startups? Nabeel and Fraser discuss how that may be the wrong question to ask in the current landscape, and why customer-centric questions and user experience should be the basis for evaluating products. Later, they deliberate who might come out on top in the “horse race” for AI product dominance, and whether the winner will be a large, established company or whether the frontier of capabilities belongs to small innovators.

Chapters

00:00 - Decoding the Future: Puzzles vs. Mysteries in Tech

01:22 - Welcome to Hallway Chat: Podcast or Tweets?

01:35 - Exploring the Venture Firm USV's "Hallway Chat" Tweets

02:27 - The atomization of media

03:17 - Rethinking the Focus on AI Models in Startups

04:38 - The Importance of Use Cases Over Models in AI Innovation

07:45 - What makes a Foundational Model... Foundational

18:23 - AI for Consumers: Navigating the S Curve of Innovation

Transcript

Should we stop talking about models?


[00:00:00] Decoding the Future: Puzzles vs. Mysteries in Tech


Nabeel Hyatt: The question is, in three or four years, is it going to be a competitive barrier to entry? You know, I use this framework that's like, is the answer a puzzle or a mystery? Um, because I think we all try and be really smart and solve everything. And a puzzle is something you can think yourself through.


Brute force intellectual horsepower will get you there. And a mystery is something that's only discovered by going on the journey to get to it. And I think, is the model going to be performant over time? Like, I think we're joking ourselves if we think that that's a puzzle. It's a mystery.


Fraser Kelton: In that world, I think it's a net new entrant who wins. So if it's not a chatbot, and you have to have some sort of crazy product experience delivered, you know, that doesn't exist today, that we haven't seen, I think it's net new.


I think if it's a, if it's a chat like experience where the differentiation happens at the model layer, but the UI remains somewhat consistent with where we are, or simple, and the end user cares about the frontier of capabilities, I think it's, it's going to be OpenAI or, or, or similar. And then I think every other scenario, it's going to go to the incumbent.


And I think we're going to see Meta, Google, and Apple all just have a big piece of that market on their platforms.


[00:01:22] Welcome to Hallway Chat: Podcast or Tweets?


Fraser Kelton: Hi everybody. Welcome back to Hallway Chat. I'm Fraser.


Nabeel Hyatt: Hi there.


I'm Nabeel. Welcome to Hallway Chat. I know, I know you've got a thing you want to jump into, but I am going to start with whether we should be doing this as a podcast or a series of tweets.


[00:01:35] Exploring the Venture Firm USV's "Hallway Chat" Tweets


Nabeel Hyatt: I, I don't know if you saw, but the venture firm USV posted this last week. Basically, I think they're recording their Monday partner meetings, uh, as their, as their version of their intro podcast. We love that firm and they tend to talk about a lot of different things and they just posted, you know, their version of a little hallway chat.


What we talked about this week. And, um, it's kind of a great little read. I don't know if you read it. Did you read it at all?


Fraser Kelton: I didn't read it. No.


Nabeel Hyatt: No, I'll, I'll send it to you.


There's things in there like, Hey, this week we talked about how it's really important to digitally sign all types of content to maintain authenticity in the world of AI.


We talked about, um, the theory of like, won't be evil on the front end and can't be evil on the back end as a kind of like hybrid architecture, how to think about it. It was like, it was kind of cool. And it made me think like, uh, maybe we shouldn't have everybody, uh, spending 40 minutes reading this when they could just read a tweet with headlines.


I don't know.


[00:02:27] The atomization of media


Nabeel Hyatt: It's the


Fraser Kelton: bifurcation of media consumption, right? You either want a two-hour rolling conversation or you want the terse summary, as terse a summary as you can get.


Nabeel Hyatt: I think that's probably right, actually. Maybe the answer is both. Maybe the answer is we should have the two hour version, we should have the three second version, we could have the eight second version and you just pick your medium and we'll have AI cut it 55 different ways.


Fraser Kelton: Maybe. And I can't wait to see it. I have tremendous respect for them. They're all smart. So kudos. Although if we do the trans,


Nabeel Hyatt: uh, like all the media stuff, uh, in all the different mediums, that means that eventually we'll have to show people the video of us actually talking instead of just audio. But, and I'm not sure if I'm comfortable with that just yet.


Fraser Kelton: Let's jump in.


[00:03:17] Rethinking the Focus on AI Models in Startups


Fraser Kelton: You know, last time we ended because we had to go and it was a moment in time when you were talking internally a lot about how we just should stop talking about the models when we discuss companies. And I thought it was, um, an interesting conversation that we should pick up here. Like, where was that coming from?


What did you mean? And why did, why were you saying it?


Nabeel Hyatt: Yeah, I mean, the provocative thing I said internally was just like, how about if we just never talk about models ever again? Yeah. And it was a little bit of a prompting and, and yeah, it came from this feeling that we, personally, internally, and, and frankly, I think I picked it up talking to founders as they're, they're pitching as well.


Um, we may be focused on the wrong things. I mean, the core thing about a model is that it's an enabling technology. This is what we talk about. This, this world of generative AI is enabling technology.


And it reminds me a little bit of, you know, back in the day, back in the day, there used to be this adage that you would rather have an engineer from Georgia Tech or Waterloo than you would from MIT because the MIT CS grad who just graduated has only been taught a bunch of theoretical CS.


That's kind of like neat and wonderful academic use and build technology for technology's sake, but could care less about how that's going to be used in the real world. Whereas a more practical CS program, uh, you know, they're going to be builders and hackers and that's what you want. You want builders and hackers.


It's kind of like two points.


[00:04:38] The Importance of Use Cases Over Models in AI Innovation


Nabeel Hyatt: The first one is that we need to focus on the end user. You know, the models are not even equally good at everything. And so the question is not which database you used or which model you used or anything like that. It's, is the end user going to be happy with this product and going to come back to you?


And so then do you earn the right to keep working with them? Um, right. The second kind of root question is, is the model even important to the company at all? Uh, that is, is the model good for your particular use case? If you don't have a really clear use case, you have no way of evaluating whether the model is any good for that use case.


And we can be hand-wavy about like, oh, it's a model for music or text or sound or images, but like, we've seen like, that's not the answer. The answer is, is it really good at rendering faces, uh, you know, for deepfakes, or is it really good at inflection of tones in speech, because that's what the use case is. Like, the answers are in the nuances of the way you use it and evaluate the product, not because it's good at some big 10,000-foot headline.


And then lastly, even if you think the model is kind of awesome and like SOTA, uh, like state of the art right now, you still have to ask the question, is it going to be the best thing over time? Right. Right. Like, I don't care that it's this early-stage startup broken in 15 ways. The question is, in three or four years, is it going to be a competitive barrier to entry?


You know, is the answer a puzzle or a mystery? Um, because I think we all try and be really smart and solve everything. And a puzzle is something you can think yourself through; brute force intellectual horsepower will get you there. Um, and a mystery is something that's only discovered by going on the journey to get to it.


And I think, is the model going to be performant over time? Like, I think we're joking ourselves if we think that that's a puzzle. It's a mystery. It's an unknowable thing, because we've seen many, many times, you and me have several portfolio companies we work with that started out with models and are now taking situations where, like, the open source is catching up.


Right. And they're totally fine with, like, okay, we're going to flush the model down the toilet, let's go use the open source model. There's no reason I should spend the time and expense and energy to build a model internally when this other model is actually going to be cheaper, better. And I can, I can work with outside partners to make it good, because ultimately, like, my customers are still happy and I'm still working with them.


And frankly, by working with them, I have now learned some earned insight that will help me build a better product next year that somebody else who's not in the market with this customer is not going to have. Those are the things that matter. Everybody's super smart. Everybody's trying to answer a bunch of questions.


I just think it's super important that we don't ask the wrong question. Because if you ask the wrong question for, to a founder, or to each other, um, you will always still get an answer. And so it's just much better to root back to like, is this even a question worth answering? And what is the risk we're underwriting?


And the risk we're underwriting is that, can this enabling technology, whatever you happen to use, serve a customer need in the near term? And then from there, there's too much fog of war to know what's going to happen after that.


[00:07:45] What makes a Foundational Model... Foundational


Fraser Kelton: I agree with all of that, but certainly there's exceptions like foundation model companies are model companies.


Like, should you... We're, we're investors in Anthropic. I feel like we feel pretty good about that investment. Um, yeah, yeah. Right. Like, well, what, what, what's different then? Right. Why was there a moment when talking about models would have been the right thing, uh, in, in one market? And why do you think that talking about models in many of the other markets that we're seeing people talk about models doesn't make a lot of sense? Because that's where this is coming from, right?


Is like, we, we then spent, now it's, it's a couple of weeks ago, but we met a bunch of people over a compact number of weeks who were coming to us to talk about models. And we were, as in your, in your parlance earlier, like we were asking the wrong questions. Um, because ultimately you're like, we shouldn't even be talking about these models.


The question is, is what's going to happen for these customers?


Nabeel Hyatt: Yeah,


Fraser Kelton: is the product


Nabeel Hyatt: market fit? Is there a customer that's happy? Can they serve those customers over time? Do they have product and think and taste in order to follow them on that journey? Are they earning insights by that contact with the customer?


Those are the most important things. You have to make sure they have the technical competence to be able to build stuff. They have to, like, ship code, and velocity matters and all that stuff. Um, so how are foundational models different? Well, it's pretty simple. They're foundational models. Like that sounds simple, but like, no, no, that sounds pedantic.


It's not. We, we, we've had AI for decades. We've had models for decades, some small, some large. Um, nothing about that has changed the way we would have evaluated an AI company. I mean, you know, we invested in Cruise, which was a lot of AI and a lot of models. We invested in Grammarly, which is a lot of AI and a lot of models, back in the day.


You're not, you're not asking any of those foundational model questions five or 10 years ago, as we're investing in those companies. You're, you're asking about how it's going to change the world and whether you think it's really going to change the world, and then, and then making the call. It's not, it's not, it's not cart before the horse.


I think there is a thing called a foundational model, and I think it's not surprising that every other founder who's building any kind of AI model would now rename their thing a foundational model.


Fraser Kelton: Right, that's the issue here, right? Is, I, uh, we didn't see people pitching pinpoint models or specific models.


They were foundation model for X, foundation model for Y.


Nabeel Hyatt: Right, yeah, but a foundation model for making cupcakes is not a foundation model. I saw a post this last week, I think it dropped into Slack, that some of these cool one-shot prompts have wildly different levels of efficacy, even across all the foundational models.


So we stare at these loose evals, which give us some random number, but even individual prompts, are, you know, you'd give them a C minus in, in ChatGPT and an A plus in Claude and then the reverse for some other prompt. And so, no, they, and you see, you know, if a model has been trained and tuned a lot more for coding data, it's going to be better at coding.


We know that these large foundational model companies are kind of looking at different areas where they're kind of like piecemealing in data and structuring the data properly in order for it to get more efficacious in those areas. So it doesn't mean that one foundational model won't really be a massive, massive, massive company.


I'm, you know, we're obviously very bullish about Anthropic, but, but no, man, like I, I think even foundational models, it's worth asking the question, is it good at the thing I want it to do? It still comes down to a customer trying to do something.


Fraser Kelton: We're hearing from a lot of founders, remember that founder that told us that she used, um, I can't remember which one, but Anthropic for one task and then, uh, GPT-4 for a different task.


Nabeel Hyatt: Oh, we see that across lots of startups, man. Like, yeah, all the time. Yeah, yeah, yeah. For folks who don't have their own model, they absolutely are seeing that, like, I like the tone and tenor of Claude for X. I'm using, you know, Gemini Pro, if they give me access, for Y.


Fraser Kelton: Yeah, the thing that I thought was interesting is that they had just seen very clearly different qualitative aspects that manifest themselves in the product experience for different, different parts of their product. And that is not what you would expect if these are all, you know, foundation models that generalize broadly across everything.


Nabeel Hyatt: Like, there really are only large language models.


Multimodal, eventually multimodal, large language models. It's an N-of-one particular situation in foundational models, which has almost never existed in computing before. And it looks like we're going to have a three or four horse race, you know, between maybe Meta, I don't know how you feel about that, um, ChatGPT, Claude, Google.


We'll see if anybody else possibly emerges. Um, and everything else is an AI company. It's an enabling AI company. And maybe they're using the foundational models. You know, what we increasingly see is they're using a mix. They're using a foundational model for some stuff. They're using some smaller proprietary models for other things.


Using open source models for other things. Oh, that's great. And like, that's not what you're underwriting. And I think that's, it's just easier to evaluate that way. We have done a couple other large model investments. You know, one in bio, ProFluent, right? Um, one in Adept, action transformer models. But we are also, in those particular cases, very much in love with the use case.


So, so we did talk about models a lot back then, but I would argue you could have done the reverse and we would have gotten to maybe a faster, better decision, frankly, you could have worked from use case down and capabilities and still gotten it.


Fraser Kelton: Yeah, you know what, I'm now reflecting, so, uh, Ali Madani is the founder of ProFluent, who's scaling the transformer into biology, just to give some context, and I'm now replaying conversations with him over the past, you know, handful of months. And he positions it.


He doesn't call it a foundation model for biology. He positions it as they have a chassis that they're then going to pull into specific verticals. Um, and we saw the first one was gene editing, which is like amazing. Right? Um, but that is a use case first framing of the problem with an enabling technology that might have some, you know, ability to span across verticals.


And so, if I give him, like, credit, he's been talking about it in that way since the start.


Nabeel Hyatt: Um, I mean, to be, to be fair and give us some credit on our side, like, that's a lot of what made us excited about Ali, like, and what he was doing, right? He's like, oh, he's not just gonna be a science project where he's just gonna, like, throw data at the problem.


He's like, no, like, these are the use cases I want in the world. These are the places I think we can build proteins first that are going to help the world, and I'm gonna go do it.


Fraser Kelton: And he used the transformer


Nabeel Hyatt: architecture.


Fraser Kelton: Yeah. Yeah. Yeah. Yeah. Just as an aside, let me geek out for like 20 seconds. Their, their release came out since we've most recently spoken.


The, the idea that we used to get excited about using deep learning for natural language processing techniques five years ago and it could figure out, like, the double negation in a sentence. Like, I remember sitting in my company and being like, oh my goodness, it figured out the "not not" problem. And now not only is the transformer writing, like, full essays, but you know, Ali's company has used, uh, uh, the transformer to design a brand new protein that doesn't exist in nature that's doing gene editing.


In a, in a mammalian cell. Like, it's crazy. I, I think we lose track of how fast the world has changed over the past five years.


Nabeel Hyatt: I agree. Yeah. And, and, and just to be clear, this doesn't mean I am a, I'm suddenly some anti AI guy. Like, I like, I think you know that, right? Like, like I think, I think GA is gonna work its way through the entire GDP Yeah.


Of the United States over the next 20 years, over the world. And I think we will eventually hit a GI, we don't have to have a debate about when, um, and. It's just a question of how do you evaluate now versus later? And being off by two years in startup land is a dead company. And like off on timing is just wrong.


It's not, it's not off on timing. And so if we're trying to evaluate what you're going into right now, you just still have to just ask the question about whether a user is going to be satisfied.


Fraser Kelton: And you can start from there. Yeah, listen, I've, I've never once thought that you were a Luddite, um, right? Like I, I think that your pragmatism is appreciated.


Um, and what you're saying is, uh, most of these other models, we were people, the industry was asking the wrong questions of. And many people still are, um, and when you ask the wrong question, like, listen, we, we happily spent a lot of time debating, uh, you know, what's the defensibility of this model, this, that, and the other thing, but if you think of it as the enabling technology for the end user, I think you ask very different types of questions.


Nabeel Hyatt: And, and I mean, it's also just kind of encouraging, not just us, obviously, but like, cause we're just being transparent about what we're talking about internally, but I find that same loop happening with founders. Yeah, and it's probably not productive.


Fraser Kelton: Yeah, yeah, yeah, yeah. Some number of minutes ago, you asked me what I thought about Meta and, and their position delivering big models.


And I'm going to use this opportunity to transition into something that I want to talk to you about today. But before, before I get there, I'm going to give myself the odd pat on the back. Um, I think in terms of, like, the API world, where the, the builders are gaining access to, you know, the most capable models, I think a year ago my observation in a post was that there would be two, maybe three, a small number of players who do this.


I said OpenAI, I said maybe DeepMind, and it feels like, uh, open source would then be gapped by one to two years on, uh, on capabilities. And I feel just as good about that, even more so now, uh, in terms of, like, raw capability, and I think there's a lot of evidence suggestive of that, which is great. Like, I think if the open source community is 12 to 18 months behind, then that becomes, like, a, uh, not a commodity, but, like, a, uh, a much more accessible layer for product builders.


We're going to see a lot of beautiful things happen and then they'll pay up in every sense when they want to use the most capable models from a small handful of players. The conversation that I want to have with you today is put aside the API and like raw model capability discussion. Put aside the market of AI for work productivity, because I also think that that's a very important market, but it's different from the one that I want to talk to you about.


[00:18:23] AI for Consumers: Navigating the S Curve of Innovation


Fraser Kelton: There has been a lot of news over the past couple of weeks from Meta and others into the AI for consumer space, and this is a very important market. And I've been trying to think through a comment that our friend Diane from Anthropic made at a dinner recently that like what happens in that market depends on where we are in the S curve.


And, and I thought it was just a wonderful framing and, I don't know, I have, I have thoughts that I want to put by you. So I'm not in the middle on this. I think I'm, I'm either on the left or the right of the curve. I think if the model is the product for consumers, like if consumers are interacting and the model is


Nabeel Hyatt: What, what is your mom going to use in four years when it comes to AI? Using a


Fraser Kelton: ChatGPT-like product. That's right. If the model is the product and there's differentiated value to the end user of the frontier capabilities, I think that goes to a group like OpenAI. He says, being the person who launched ChatGPT, or part of the team. No, no, no, no, no, no. He says, being the head of product at OpenAI when ChatGPT launched.


No, no, no, no. My strong belief on that one is, like, I remember going to the whiteboard and drawing, like, a matrix for, for Brad. Like, I think the real potential there is around, uh, AI worker productivity for, for white collar workers. And I think that a lot of their use cases that they're publicizing around Moderna and stuff like this really reinforce that.


I think there's, there's, you know, certainly the AI in an API that builders use; there's an AI for work set of products that are going to be a completely different market. And then there's the, what is the consumer using when they think of using, uh, like, a broad AI product? Which for the past year and a bit would have been ChatGPT for many people.


I think we're at the early stages of the S curve for that. And I think there's a wonderful amount of ambiguity as to how things are going to play out over the next handful of years. And I've been trying to think through what dynamics lead to what outcome.


Nabeel Hyatt: I'm going to insert for a second here. Do you still think it's a chat interface in four years, the winner?


So you're assuming in four years there's a winner of this market, and they're the Google of the "I'm talking to an LLM" that your mother's going to use, right? Do you think that's still a chat box?


Fraser Kelton: Well, listen, um, let me dodge that question directly by setting up the conversation, right? There's a future where there needs to be real product exploration.


Like, it's a, it's a mystery that you're going to cut through and figure out the right product experience that delivers profound value to the end user. In that world, I think it's a net new entrant who wins. So if it's not a chatbot and you have to have some sort of crazy product experience delivered, you know, that doesn't exist today, that we haven't foreseen or, or that we haven't seen, I think it's net new.


I think if it's a, if it's a chat like experience where, and chat, chat can blossom into many different things. Like if you have tool use and all sorts of other capabilities that are coming into a basic chat like experience, if the differentiation happens at the model layer, but the UI remains somewhat consistent with where we are or simple, and the end user cares about the frontier of capabilities, I think it's, it's going to be OpenAI or, or, or similar.


And then I think every other scenario, it's going to go to the incumbent. And I think we're going to see Meta, Google, and Apple all just have a big piece of that market on their platforms.


Nabeel Hyatt: Well, I'll contend with your earlier point. You listed, I think, a handful of people there that could maybe win. The kind of like, Google, OpenAI, Meta.


Maybe Anthropic, like, and maybe nobody else, is your point. Like, it's going to be the incumbents and these handful of people. I don't, I just think Meta's thing is Meta's Google Wave. Like, like, I love that Zuck is doing what he's doing. I, I love, you can just tell when he's talking in a podcast that he feels this, like he has founder energy in such a wonderful way that you know the guy's not going to give up.


He's in the middle of it. We're going to get some good models from Llama over the next couple of years. Like, it's awesome. I liked all that. But when it comes to consumer surfacing, like, he just jammed it into Instagram and every other bit of everything. It's like Google Wave. I don't think they have the canvas.


To have this conversation properly with a customer, to have a customer come to them with the expectation of interacting with a Facebook platform in the right way. Maybe there's a case that messaging something like WhatsApp is maybe, maybe there, but it just feels very tacked on.


Fraser Kelton: It is very tacked on, of course it is, but it's so aggressive.


It's like wonderful Zuck energy, um, but here's what we can assume to be true, right? They shared, OpenAI, Sam shared that they have a hundred million weekly active users at some point. Um, sure. Let, like, let's just say that's a massive number. That's a massive number. You know that the large majority of those individuals, the large majority of those users, are on the free tier, which is 3.5. Right? Yeah. That means, that means that the large majority of users are now getting the exact same experience that you can get within the top-level feature of WhatsApp, Instagram, and every place else that Zuck has just jammed it in. And I saw that Chrome, Google now in Chrome, in the search bar, if you start your search with @, you actually get taken to Gemini.


Yeah, and so we're just going to see them get more and more aggressive to jam that into their distribution channels. I'm sure we're going to see the same thing at WWDC. And so then the question to you is, it's certainly not Wave. Like we have seen for a year and a half that, that a large majority of people who are turning to these products, whether it's Claude or, or, uh, ChatGPT are doing it for a basic chat experience that is now in the hands of both.


Both Google and Meta. And so for you to think that they're going to fail, you have to think that there's a net new product experience that needs to be delivered for this to win.


Nabeel Hyatt: I think there's a few things worth covering. One is, I'm always going to be coming off, uh, looking for the ways that startups beat large companies, because I've literally dedicated my whole life to it.


And so finding the seams where they screw up and they have problems, and, and therefore this is the path, like, is just literally what I wake up to do. So that's the first thing, because it also happens over and over and over again. The pattern of: large company wakes up to the new innovation thing and then jams in a fast solution, not fully thought through, trying to use their power and energy, and still loses.


Partly because, as you put it very aptly a couple weeks ago, they shipped their org chart, not just what customers want. That's just a, that happens all the time. Like, for, for every time that a war is won, the, you know, Microsoft does compete with Netscape and does release Internet Explorer, because they did get out to market properly and they used their incumbent bundling advantage of an OS to get there. There's a hundred other examples where it just didn't work.


And, and so the, the chat, the, the Google Wave comment was, you know, Google decided, Oh my gosh, like Facebook's really big. We got to have social too. And then just like shoving social in every little orifice that they possibly could in Google in front of your face. Not in the right context where you want to think about these problems and interact with these things.


And the context matters for a consumer. And that's my problem with the Meta thing. Like, they just shoved it everywhere, brute force. Most of the places are not really thought through and are not really contextually relevant. And then in the Google situation, it's built into search. Great. It comes up in my Gemini at the top.


I think probably for a bunch of people, that may be their first experience with real AI is the top of Google, just because of how big Google is. Makes sense. Right. But as we already talked about like four episodes ago, there's kind of three different types of search, right? Yeah, yeah. And there, and, and so the question is, I think for the third type of search, the research search, the deep back and forth search, the chat, the like, I'm trying to dig, dig, dig on something, I do think that's a new user interface.


I don't think there's any chance Google or Meta wins that. They'll never be thoughtful enough to build the right surface areas for it. Um, there's probably some amazing person inside of each of those orgs who knows that and would love to ship it, and they won't let them. And so they have to leave to start a new company, or they can join Claude and they can do it there.


But there's a, there will be a new set of companies that will meet those new affordances. That's my bet.


Fraser Kelton: I don't, uh, I don't personally believe in what I'm just going to say, but the argument against your argument is that to date, what we've seen is that consumers en masse don't value the frontier of capabilities.


And they're, and they don't care about differentiated product. They want, they want a good enough model.


Nabeel Hyatt: They want an answer that works fast.


Fraser Kelton: That's right. They want a good enough model that's now basically broadly available to anybody in a chat-like interface. And that's the case, that would be the bull case for what Zuck is doing.


Nabeel Hyatt: I think that's kind of true, as long as you understand why you're going to it. And when I, when I clicked the little message bar in Instagram, I don't know why I'm there. I don't know what I'm, like, just the same way that if you, if you shoved, uh, like a music player right in there, I would be like, what?


Like, why didn't you just launch this as an app? What are you talking about?


Fraser Kelton: On the, on the Instagram Explore tab, when they even give you, like, the Perplexity-inspired, you know, tongue in cheek, like they just ripped off the, the Perplexity UI. It is jarring. You're like, wait just a second. I want, I want, I want reels and the photos that I like.


I don't, I don't want, oh, write me a recipe for this.


Nabeel Hyatt: Can I just say one last little rant here and then let's, let's move on. You know, the fact that everybody's using 3.5, uh, because they're all on the free plans, is actually a quite interesting point. Um, it's an unfortunate point, because I can't tell you how many times I try and talk to people, even fellow founders and VCs and people in the industry.


And I'm like, wait, you're on the free plan? Like, what are you doing? You don't even, like, you're not even, and then they're complaining about it hallucinating and not giving good outputs and like, oh, my retention is off. And I'm like, well, you're not even using the good technology. I don't know what you're talking about.


I think we do suffer a little bit as an industry trying to communicate to consumers. Um, it's fine for enterprise, cause they can just test all the models and figure out what is efficacious, but we have a, we have done a very bad job. I'll blame you. You should have done this at OpenAI before you left.


Like the product marketing around why you would upgrade and move from this plan to this plan. Why you should pay for Opus at Claude. Like, why you should be on Gemini Pro. We've not found a good set of language to make people understand that it's a categorically different experience. It is so much better when you are using state-of-the-art models for so many things.


Not everything, right? For like 40 percent of the things, it's fine to use 3.5, which is why it caught everybody by surprise that ChatGPT spiked so quickly. But like, we don't really do a great job as an industry explaining to people what that real experience is going to be like. And that's kind of unfortunate.


Fraser Kelton: I don't think they care. I think that we like, happily so, because it's our, our you and me and others in this industry, like we have to care, uh, about trends and where things are going to go. But. Most people, this is, this is something that they're going to interact with a couple of times here and there, and it makes their life a little bit easier.


And then they want to go to the softball game after work.


Nabeel Hyatt: I get it. I get it. But that's, but that's, product marketing is so important, because they are distracted and they just want a solution. And if they get frustrated, that solution isn't there. It's our fault for not explaining to them that the solution is right around the curve.


It's just not on the free plan.


Fraser Kelton: Yeah, but they don't care enough about having that problem solved more elegantly to spend the 20 bucks a month. The crux of this discussion of where things are going to go is, um, are we always going to see that users don't value the frontier of capabilities in a consumer facing application and therefore won't be willing to pay?


And in that case, I don't think it's going to be the large labs, unless they change their, unless they change their business model. Um, I think I heard you say earlier, which is what I actually agree with, is I think there's going to be a net new product experience that gets delivered. Like I, the likelihood that what, what we shipped, because Noah iterated on a UI with Tina, uh, was the right UI, like.


That seems so preposterous to me. Somebody is going to show us a beautiful new creative way to shape the product experience. And I think at that point, we will see that users then care about it. We're still pre the end game. I like that, Fraser. I like that. Yeah. And in that case, like, I, I actually think that that's a net new entrant who wins that.


Nabeel Hyatt: Yeah. Or there's a small team at one of these folks who actually does get it out the door. That's right. It can come from up there, but it's still net new. I'm looking forward to that. You'll, you'll send me a text if you see it. Okay.


Fraser Kelton: Wonderful. Let's be done. Yeah, let's do it. Thanks everybody. We'll see, uh, uh, in a couple of weeks.


We'll, we'll see you. I don't know. We'll see you when


Nabeel Hyatt: we see you.


Fraser Kelton: Yeah. That's it. We'll see you with a couple of tweet summaries. And, and, and just let us know what format you'd actually like this in. That would be helpful. We can do it. We can transform. Later.