Dec. 11, 2023

Building entropically, Gemini, Prompt engineering's revenge, and Superpowered


Nabeel and Fraser discuss Google's new AI model, Gemini, and keeping authenticity in startup pitches. They cover the perils of trying to simplify when the technology is trending towards complexity, how to stay authentic to yourself when pitching and fundraising, and new findings in Claude prompting that suggest a continued need for prompt engineering. Finally, they talk about the startup Superpowered, its potential pivot, and how passion for problem-solving can shape a company's direction.

* Google Gemini's excellent product video "demo"
* History of Razorfish
* Claude 2.1 Prompting Technique
* AI Meeting notes from Superpowered.me

  • (00:00) - Building when entropy is increasing
  • (01:00) - Introduction and Welcome
  • (01:15) - Discussing the Frequency of AI Developments
  • (01:45) - Google's AI Developments and the Gemini Team
  • (02:19) - Explaining Gemini and its Significance
  • (05:04) - Analyzing Google Gemini from afar
  • (18:02) - You are 5 words away from being done
  • (27:00) - The analogy of the early web vs early LLMs
  • (30:05) - Do we need the Razorfish of AI
  • (33:27) - The Future of AI Tools and Platforms
  • (34:11) - The Importance of Implementation Engineers in AI
  • (35:54) - Are we in an entropic or de-entropy phase?
  • (37:57) - The Importance of Authenticity in Marketing
  • (38:53) - Founders just being real
  • (47:28) - How would you fundraise differently now?
  • (50:15) - Superpowered.me and AI Note Taking Startups

Transcript

What is Hallway Chat?

Conversations we're having about what’s happening and may happen in the world of AI.

Two former founders, now VCs, have an off-the-cuff conversation with friends about the new AI products that are worth trying, emerging patterns, and how founders are navigating a world that’s changing every week. Fraser is the former Head of Product at OpenAI, where he managed the teams that shipped ChatGPT and DALL-E, and is now an investor at Spark Capital. Nabeel is a former founder and CEO, now an investor at Spark, and has served on the boards of Discord, Postmates, Cruise, Descript, and Adept.

It's like your weekly dinner party on what's happening in artificial intelligence.

Nabeel Hyatt: Technology tends to go
through ages of entropy and de-entropy.

We all love, especially as engineers,
we love de-entropy, we love simplifying

everything, cleaning it up, getting
the signal from the noise, bringing it

all down into something that works.

Things that are trying to make a promise
of de-entropy too quickly, when all

of these LLMs are so new, just feel
incongruous to me when the goal is to

solve the problem reliably and, and
we're still not at a reliable solution.

Fraser Kelton: Boy, we wrestled with this
one, but that one feels really right.

It's going to get more complicated
in every direction because we are

not at the reliability required for
consistent value in many use cases.

And like, why bother adding abstractions
of simplicity if you say it's

still not going to be good enough?

Nabeel Hyatt: Yeah, exactly.

Fraser Kelton: It's a lot easier for you
to get something that's broken into production.

Nabeel Hyatt: That's
what, that's the headline.

Why are we making it easier to
get broken things into production?

Fraser Kelton: Oh, but what a teaser.

Nabeel Hyatt: I know, I know.

Hello everybody.

Welcome to Hallway Chat.

I'm Nabeel.

Fraser Kelton: Fraser.

Nabeel Hyatt: And we are here to
talk about what we've been talking

about in the world of AI mostly.

I, I didn't really know we were signing
up for this every week when we signed up.

They come fast, Fraser.

I felt like I just talked to you
on the hallway chat last week

about the launch of ChatGPT and
all of your stories around that.

But at the same time, there's like
a million things to also talk about.

So it is both: it feels like these
shows are coming all the time,

but also there's too much to talk about.

Fraser Kelton: I was thinking after
recording the last one, how nice it is to

be able to talk in depth with you about
these topics and just laugh and explore.

And so I'm good with it all.

The, you know, here, here's
something: last week, I think I

said the line, where's Google?

And we had an answer.

Kind of, right?

We kind of had an answer.

Nabeel Hyatt: Yeah.

Oh, I loved, we had our AI dinner this
week, and we had somebody from the

Gemini team sitting at the dinner,
all night long, mouth shut, and I

am like spouting off and spitting
all kinds of stuff about Google.

And I don't know that I picked up
on the smug look on his face.

Fraser Kelton: I

Nabeel Hyatt: Yep, tomorrow morning
you're gonna see that Google's

got a little bit of a comeback.

Although I think it's only a little
bit of a comeback, right?

So...

what is Gemini, Fraser?

Fraser Kelton: Gemini gets announced
in summer by Google, where they say,

we're training a large language
model that's going to be amazing.

And it has now become a little bit of a
meme, because they have talked and talked

about what they're about to do, and the rumors.

Nabeel Hyatt: their answer to
OpenAI and Anthropic and the others.

Yep.

Fraser Kelton: Yep, that's right.

And we're at that dinner, as you
say, and it's become a little bit

of a joke as like, where are they?

They've been talking about this
for months and nothing's here.

And then, boom, we wake
up the next morning.

I get a text from my friend
that says, surprise, and

they've shipped some of Gemini.

They haven't shipped the most capable
model, and they shipped a lot of

demo videos, which we'll come back to
and talk about a little bit. But they've

announced something called Gemini Ultra,
which you can think of as the

equivalent of GPT-4. Then they've shipped
Gemini, I don't know, the terminology is

crazy, Pro, Gemini Pro. Gemini Ultra
is not available, Gemini Pro is available

as of the day of the launch, and that
is comparable in performance, at least

on the evals, the evaluations, to GPT-3.5.

And then they have what I think
is probably one of the more

interesting things: they've then
distilled it all down to something

called Nano, which can run on, and is
running on, the Pixel, which is a pretty

awesome thing for them to have done.

Worth calling out: Gemini Pro, the
mid-tier model that's equivalent to 3.5,

is now live and integrated into
Bard, their ChatGPT-like product.

And it's, you know, one year after
the equivalent was rolled out

broadly by OpenAI, and so quite a
Nabeel Hyatt: I have to admit,
Fraser, if this is nothing else,

it reminded me that Bard exists.

It was the first time
we've even mentioned Bard.

Like, that says how useful it is
in at least our workflows, right?

Fraser Kelton: It has
reminded me that it's a thing.

It still didn't encourage me to go use it.

Like, I'm not sure what this is
an answer to in terms of ChatGPT.

So, they now have a parity with the model?

Okay.

Right.

Okay.

Sure.

Nabeel Hyatt: We were texting and
I've got some WhatsApp groups that

were fiddling around and talking about
this when it launched in a Discord

group of AI engineers and so on.

And I've got to say, it of course
went from, oh my god the evals, and the

demo videos, oh my god, to pretty
quickly, hey, is this a bunch of BS?

Fraser Kelton: Yeah, that's right.

I think the first thing to call out
is I think the general level of

performance: some eval benchmarks
give you a sense of where the relative

performance of the model might be, right?

So GPT-4 far outperforms basically
anything up until the release of

Ultra, and you could then have probably
high confidence that it's going to

perform in a very different class
once you get it into production.

I think the thing that we can say,
without having actually played with it,

is the evals suggest that Ultra, the large
one, is directionally equivalent to GPT-4.

And that, that's something, right?

I think that the fact that we might
now have two GPT-4-type equivalent

models in, in market proves that
somebody else other than OpenAI can

do something of this magnitude and,

Nabeel Hyatt: I'm not willing to say
that yet. I'm willing to say that

soon and, and, but, but,
Fraser Kelton: Yeah, that's fair.

Nabeel Hyatt: the MMLU benchmarks are
comparing a five-shot reported GPT-4

benchmark to a 32-shot, I think it was,
if I remember correctly, Ultra report.

It's just not, those are
not comparable at all.

Frankly, the stuff that everybody would
probably normally out of the box use

this product for, the average consumer,
those all, at least the evals we can

see, seem comparable and seem fine.

So let's not overstate some kind
of eval problem where there isn't

one, at least in this specific case.

But, the kind of like general
math cases in particular seemed

a little cooked and unfortunate.

And I think, frankly, when they're
speaking to a highly technical audience,

I'm not sure why they were doing that.

Like, I, I don't, I don't

Fraser Kelton: Yeah, yeah, that,

Nabeel Hyatt: it's not like them.

It just felt like hiding the ball when
clearly, even with code, like, the code

generation in Gemini is quite good.

I just don't understand
why they even did it.

Fraser Kelton: I think if you strip
it away and you look at a comparable

measure of five-shot, five examples
in the prompt, GPT-4 outperforms

it, but it only outperforms it
by a couple of percentage points.

But again, like, I think that
once we get our hands on it, my

guess is that we will see that
this is directionally similar ish.

Nabeel Hyatt: let's finish on this rant
and then I actually want to talk about

the thing I really loved about Gemini.

The part that I think was unfortunate,
and I hope no startup takes away from, is

that everybody gets excited because
there's an announcement from Google that

Gemini is finally out, and within a few
hours it just dawns on everybody that,

okay, Gemini is not really here, because
it's just Pro, we don't have an exact

release date still, this, this graph of
evals is cooked in a couple of places, and

the rest of them are still comparing to
GPT-4's March eval tests and only

beat them by three to five percent, and
since then GPT-4 has gotten a lot better.

And, by the way, evals don't really matter.

So, like, that's the negative side. The
positive side is: so many companies

absolutely fail to show their
product in action in unique and novel

ways that pull at the heartstrings.

And I think, if you haven't, if you
guys are listening to this on the podcast

and you haven't yet watched the Gemini
demo videos, go on YouTube, you should

take a look. There's some wonderful craft
work that is not too pretentious and not

too overblown but, in fact, very clean
and simple, except for the YouTuber

Mark Rober video, that's kind of
overblown. But the rest of it is very

simple, good demos showing various ways
that this product can be put to use by

an average consumer, because this
announcement is aimed much more at

the average consumer, you know, or Google
stockholder, than it is aimed at engineers.

I mean, didn't you love those demo videos?

You watch them, right?

Fraser Kelton: Oh, yeah, but I,
I don't know. You're

getting me riled up here, man.

I had shared the one demo video where
they show it, the three cup technique

where there's one ball underneath the
cup and then they shuffle the cups around

and it tells them which cup it's under.

Because this, it's a multimodal
model that has been trained from

the start for multimodality.

So, it's accounting for text and image
and, and video and, and audio right

from the, the start of the pre-training
steps, rather than, you know, kind

of bridging that in after the fact.

And this, this demo video is one of
the best demo videos I've ever seen.

Nabeel Hyatt: Yep.

Fraser Kelton: And then, and then
it comes out that it's all fake!

Nabeel Hyatt: Right.

Fraser Kelton: So in the
demo video, go watch it.

It's, it's remarkable.

There's a set of hands and some
cups and a ball, and the demo says,

okay, now I'm going to put the
cup under here and move it around.

Where is it?

And in, in real time, the Gemini
voice comes back and says, the cup's,

I don't know, under the left side,
and the man lifts up the cup on the

left and, sure enough, there it is.

Nabeel Hyatt: Right.

Fraser Kelton: And then, people
discovered shortly thereafter

that this is basically the
equivalent of, like, a simulated scene

where they had to prompt-engineer
along the way. It just turns

out that they fixed it in post.

Nabeel Hyatt: Yeah, that's really

Fraser Kelton: Now, I will compare that,
I will, yeah, I will compare that to

Greg's demo of GPT-4, which was all
live, without any editing and in real time.

And that is the, I think that is the
way that you introduce your products.

It's brave.

It's brave, right?

You're doing it live.

It could fail.

And you, you're owning it because you have
so much confidence in what you've built.

Nabeel Hyatt: uh, I don't,
I can't entirely disagree.

You know, it is as much as I love
the product demo, what I would

have loved was a demo like that
and then a how to behind it.

You know, I think it's okay to make
things that are somewhat polished and

beautiful, but it would be great if it
turned out that they pulled back the covers.

And by the way, that's no implication on

Fraser Kelton: I think, I think,
I think polish should be.

Nabeel Hyatt: separate

Fraser Kelton: Yeah, yeah.

Polished and beautiful is good, but I
think it has to be grounded in reality.

This is a case where they edited
across two different dimensions, and

people came away with a dramatically
different perspective of what

it is that's actually happening.

And so I think it's unfair to not
allow anybody to use the product, and

then introduce it with a demo video
that basically obfuscates the truth

from two different perspectives.

That's just weird.

I think that this is an excellent
moment for everybody at Google, because

they've shipped, or at least they've
partially shipped, and I think that,

you know, they've taken the first step.

No, they've taken the first step.

No, no, no, no, no.

Like, listen, this, the
race was set off a year ago.

They, they did this in a year.

For a company of their size, this
is, this is not to be scoffed at.

Think about all of the complexities.

They had to live through smashing
together Brain and DeepMind.

They had to go and find, like, the
path through all the bureaucracy and

politics to get an aggregate amount of
compute required to be able to do this.

They had to solve all of the
different challenges, both

technically and politically,
within the organization to do this.

And it's out.

And I think that, that itself is
something that should be respected.

And, we can squabble over the evals
and stuff, and the proof will be

when we actually get to use it.

But it looks, it looks directionally good.

And that's something.

You know, I, I feel like I also
did a good job playing your role.

You usually are the one who's
clairvoyant in, in many respects.

And at that dinner, my guess was that
Google was going to come roaring back.

Nabeel Hyatt: That was your quote.

Fraser Kelton: Exactly that.

Yep.

Because they are the best at, at the
technical pieces that have to come

together for training a model like this.

And if you look at some of the stats,
I forget what the stat's called, but

for basically the measure of efficiency
when they were training Ultra, I think

they reached some level of, like, 90,
97 percent efficiency in the utilization

of their hardware when training Ultra,
which is just a remarkable achievement.

And this is the area where we
should expect them to be great.

And I think they have shown
that they can be great, albeit,

you know, on a year's delay.

And then I think the real challenge for
them is going to be how they bring the

great technical piece into the two
products that are now their two-front war.

Bard, as you laughed earlier, is a
thing, and then the second one is they're

going to have to find the right way to
integrate these technologies into search.

And that's going to be an
excruciatingly hard challenge because

it's orthogonal to the business
model that is search historically.

Nabeel Hyatt: Look, I, I'm not going
to speak for Demis or Eli or anybody

else over on the Google team and what
they're doing, I'm sure they know

a lot more about how to do this than
we do. But I do think their way to fit,

if I were trying to navigate this
space and I was Google, would be to take

almost an Apple approach to this,
given their scale and size.
And what I mean by that is, I, I always
joke: people think of Apple as innovative,

and I think of Apple as a last-mover
advantage company, not a first-mover one.

They have had a few moments in their
life where they have been very early,

but in many ways it's taking the
things that are already out there,

that are already somewhat proven.

And then making them so polished
and so well thought through that

you just, they feel like they
fit into your life immediately.

And, you know, they were not the first
to release little notification widgets

on a smartphone; that was Android.

They're not the first to do
wireless charging; that was Android.

Go way back, they, they, they took a lot
of their early ideas from Xerox PARC.

And so if Google wants to play the game
of being last, because it's really

gonna work and work reliably,
there is a game to be played there.

Because I don't think OpenAI
wants to play that game, to be

honest, and you can't play both.

I think right now, in many ways,
OpenAI is playing closer to the Android

or Samsung model, if we're going to use
the smartphone analogy, where they are

riding the front edge of development.

It drives them crazy if somebody else gets
something out new ahead of them and they

want to play the front edge of the game.

I think both can be successful
strategies, as long as the thing that

Google eventually releases, as you get
to Ultra, is worth the time and energy.

That's the, you know, the, like,
is-it-worth-the-wait question; that's the

thing that will be left to find out.

Fraser Kelton: We shall see.

We shall see.

Nabeel Hyatt: And look, it's hard.

Of course, the cup example is tough.

You know, these, these
prompts are hard to shape.

It's hard to get the little alien
inside my computer to understand

that I'm playing a cup game.

Fraser Kelton: The issue with the cup
thing is that they imply, they lead the

viewer to believe, that there's zero prompting.

It's not that prompting's hard.

The way that the video is presented
suggests that there's zero prompting, and

that there's this real-time multimodal
model watching you and, and the cups

and inferring with real reasoning,

when there's somewhat complex prompting
happening at each step behind the

scenes, which is what I think has,
has caused everybody to be really

disappointed in, in the decision to do that.

Nabeel Hyatt: The last thing I'd say
on Gemini is that a lot of this

consternation would have been solved
if they would have released APIs for

developers to build with at the same time.

And I think, I think those are supposedly
gonna come out December 13th.

I don't know if Ultra's
gonna be involved in that.

But in a world of AI movement, that's
five, seven days from now. I mean,

OpenAI fires a CEO and goes
through a, a coup attempt, then gets

back a CEO in that time period.

Like a lot, a lot happens in five days.

Fraser Kelton: lot happens in five days.

Nabeel Hyatt: and so, like, I'm sure
this was PR-oriented, they wanted

people to watch a Mark Rober video and
so on and so forth before developers

had control of the narrative.

But it's really unfortunate,
because I think it creates a sense

of doubt when there shouldn't be one.

It should just be high fives,
hand clapping, and playground.

And I think that was a little bit
of a PR mishap, but we'll, we'll

see what happens in, in seven days.

Fraser Kelton: Yep.

Yep.

And anyway, prompt, prompting is hard.

We, we talked last time about efforts in
ChatGPT to simplify the complexity and

ambiguity of prompting, specifically with
DALL-E, where they want to take the, the

three words that somebody who's unfamiliar
or lazy with their, their directions wants

to give, and how, if you're a power user
such as yourself, it's just suboptimal.

Nabeel Hyatt: Yeah, it's,

Fraser Kelton: What, what

Nabeel Hyatt: I saw a great example
of that this week, because I'm going

to keep banging the drum that I think
Prompt Engineering is a real skill and

will be a career for quite some time
and that actually Prompt Engineering

is going to become more of a language.

Before it eventually gets abstracted out,
but our ability to totally abstract it

out while we're still trying to figure
out what these non-deterministic models

can actually do is very, is very far
away, maybe years and years away before

we can build these systems on top of them.

I got handed a wonderful
example of this today, I sent

it your way, which is Anthropic,

which we are investors in, full
disclosure to everybody.

Anthropic has a competitive model
to OpenAI and Gemini called Claude.

And there is a well-known research
problem and execution problem

in these long context windows
of AI, where I'm asking it to, for

instance, look at an entire PDF or look
at a long chat and find some phrase

or find some word inside of that: did
Sam talk about the beach or not, or

what's the best cooking technique?

And it turns out that all of these
LLMs show that they can find information

at the beginning of a doc and at
the end of the doc faster and more

reliably than in the middle of the doc.

The middle, it's the missing middle;
it just sometimes misses stuff.

Well, this has been a quote-unquote
known thing, for which people have been

trying all kinds of different
engineering techniques, chunking the

data into smaller bits, and then
there are comparison evals against

different models at different times,
and how they perform on the missing

middle, and so on and so forth.

And then it turns out that Anthropic
releases a post today on Claude 2.1

prompting, that says, well,
what did it say, Fraser?

What's the crazy, deep engineering
technique that, that scientists

have figured out in order to finally
unlock moving from 23 percent

missing-middle accuracy up to 97
percent missing-middle accuracy?

Fraser Kelton: I mean, they add a
line to the prompt that says, "Here

is the most relevant sentence in the
context," which basically nudges the

model to go and pull out the relevant
sentence for the question at hand.

And that's the bump.
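For readers following along at home, the trick Fraser describes can be sketched as a small prompt builder. This is a hedged sketch, not Anthropic's code: the document tags and question are placeholder assumptions, and the one load-bearing detail, per Anthropic's Claude 2.1 write-up, is that the fixed sentence is supplied as the beginning of the assistant's own turn.

```python
# Sketch of the Claude 2.1 long-context retrieval trick. Everything here
# is placeholder scaffolding except PREFILL, the one line the episode
# credits with the jump in missing-middle accuracy.

PREFILL = "Here is the most relevant sentence in the context:"

def build_messages(document: str, question: str) -> list[dict]:
    """Build a chat-style messages payload with the assistant turn prefilled."""
    return [
        # The long context and the question go in the user turn.
        {"role": "user",
         "content": f"<document>\n{document}\n</document>\n\n{question}"},
        # Prefilling the assistant turn nudges the model to begin its
        # answer by quoting the sentence it retrieved from the context.
        {"role": "assistant", "content": PREFILL},
    ]

msgs = build_messages("...a very long PDF...",
                      "Did Sam talk about the beach?")
```

The point of the sketch is how little changes: one extra message, six-ish words, and the retrieval behavior reportedly flips.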

Nabeel Hyatt: Yeah.

I mean, that's insane.

This afternoon, I'm going to do
some work and figure out whether

this works in GPT-4 as well.

I didn't have a chance to yet,
but, but it'd be really interesting.

Both of the possible results, don't
you think, Fraser, would be interesting?

Like, if that phrasing does work in
GPT-4 as well, then it's like, oh, you

just figured out a new incantation,
kind of like when we found out that if

you, you tell a model you're going
to tip it to do something, I'll give

you $20 if you answer this question,

it tends to perform better on
that question, even though, of course,

you're not giving the model $20.

Another crazy incantation.

And then if, so one, if it worked,
that's interesting, and it tells us

more, a little tip into the language
of how to use these models for large

context windows, which is particularly
valuable for Claude, because it

has such a large context window.

You can just put lots and
lots of text in there.

If it doesn't work in other models,
that's even more interesting, right?

Now, for all these companies that are
trying to say, don't worry, I'm building

middleware dev tools that let you switch
models in and out arbitrarily, as if,

like, like they're all the same.

They're not.

Fraser Kelton: I would be so
surprised if they're the same today.

And the difference is only
going to grow over time.

There's a whole bunch of
different things going on here.

This is a quote-unquote eval
called Needle in the Haystack.

And I think that, yet again, this is
a situation where the eval doesn't

measure anything proximately close
to what happens in, in production

for people who are building products,
right? If you, if you insert into

the middle of some set of financial
documents a single sentence that

says Dolores Park is the best place
to have a drink in San Francisco, and

then the model can't find it, right,

I'm not sure that that is reflective
of any real-world problem that

people are trying to solve with this.
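The eval Fraser is describing can be sketched in a few lines. This is an illustrative harness, not the published benchmark: the filler text, the fractional-depth convention, and the substring scoring rule are all assumptions, and the model call is deliberately left out.

```python
# Minimal needle-in-a-haystack harness: plant one out-of-place sentence
# at a chosen depth in a long document, then score whether the model's
# answer surfaces it. The actual model call is omitted.

NEEDLE = "Dolores Park is the best place to have a drink in San Francisco."

def plant_needle(haystack: str, needle: str, depth: float) -> str:
    """Insert `needle` at a fractional depth (0.0 = start, 1.0 = end)."""
    paragraphs = haystack.split("\n\n")
    i = round(len(paragraphs) * depth)
    return "\n\n".join(paragraphs[:i] + [needle] + paragraphs[i:])

def passed(answer: str) -> bool:
    """The eval only checks that the planted fact shows up in the reply."""
    return "Dolores Park" in answer

# Ten filler paragraphs standing in for a long financial document.
doc = "\n\n".join(f"Filler financial paragraph {n}." for n in range(10))
middle = plant_needle(doc, NEEDLE, 0.5)  # the hard case from the episode
```

Sweeping `depth` from 0.0 to 1.0 is what produces the familiar "models do fine at the ends, worse in the middle" curve.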

The other thing that is so interesting
here is that the Anthropic model, they

hypothesized, performed poorly when
people were running it through that,

that eval because they've trained their
models to cut down on inaccuracies

specifically for these types of use cases.

Right.

And so they basically have trained the
model to say, okay, if something feels

completely orthogonal from the rest of
the documents, it's probably not something

that's, that's important and or accurate.

It's probably not even accurate.

So, so just ignore that.

Right.

And then the eval is basically
penalizing the model for doing exactly that.

Nabeel Hyatt: Yeah, but I want to get back
to the point that I wanted to make, which

is: this is six words that you put into a
prompt, if you were trying to do long-text

retrieval from a long context window,
that, that does boost performance. And,

I don't know, it just, it tells
me how naive we are collectively

about how to use these models.

Um, Emma, who's an AI hacker in
residence for us, she did a benchmark

on some internal tools that she was
using on Glaive versus GPT, and she

found that without prompt engineering,
Glaive did better than GPT-4, probably

because it's trained only on
high-quality synthetic data and so on

and so forth, but that if you added the
sentence "You're a well known historian"

to the prompt for both Glaive and
GPT, then GPT-4 suddenly did better.

And it's just another good testament
to: you just need to find the five magic

incantation words to suddenly make your
business be able to move into prod.

That's

Fraser Kelton: That is so crazy.

Just try to internalize
the brittleness of these models.

You add, you're a well known historian,
and then it finally outperforms.

I don't know how this gets solved,
like, other than at the system level.
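The kind of A/B Emma ran can be sketched as a tiny harness: the same eval set scored with and without one system-prompt incantation. The function names, the toy model, and the substring scoring rule are all illustrative assumptions; `toy_model` is a stand-in for a real API call to GPT-4, Glaive, or Claude.

```python
from typing import Callable

INCANTATION = "You're a well known historian."

def accuracy(ask_model: Callable[[str, str], str],
             eval_set: list[tuple[str, str]],
             system_prompt: str = "") -> float:
    """Fraction of questions whose expected answer appears in the reply."""
    hits = sum(expected.lower() in ask_model(system_prompt, question).lower()
               for question, expected in eval_set)
    return hits / len(eval_set)

def toy_model(system: str, question: str) -> str:
    """Toy stand-in that mimics the brittleness described above: it only
    'performs' when the incantation is present in the system prompt."""
    return "Columbus sailed in 1492." if "historian" in system else "I'm not sure."

evals = [("When did Columbus first sail to the Americas?", "1492")]
baseline = accuracy(toy_model, evals)               # without the magic words
boosted = accuracy(toy_model, evals, INCANTATION)   # with them
```

Running the same harness against two real models is exactly the kind of comparison that reveals the incantations are model-specific, which is the argument against treating models as interchangeable.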

Nabeel Hyatt: But, in the very early
days of video games, you worked at the

assembly level to make a video game.

In the early days of computer
graphics, before we got to

engines, we had to work in code.

And we will eventually get
to lots of GUIs and engines.

And we've actually talked before
about how prompt engineering is not

how every average user to use these
products and people are bad at English.

But at the same time, if you need
performance, you need to be at bare

metal as close to the model as possible
for probably a little while until

everything really, really works.

And at that point, when it's automatic,
when we've made our 50th first-person

shooter that's in production and making
hundreds of millions of dollars a year,

then we can talk about making an engine
for making first-person shooters.

And then, in game parlance, you get
Unreal and, and you get

Unity and so on and so forth.

But it feels like we are still in,
you know, how was Pac-Man built?

Fraser Kelton: I don't, I don't want
to open up this can of worms, but

don't you think that is a measure of
the model's capabilities

not being strong enough?

Nabeel Hyatt: We know that.

Just like in, like, early programming
days, you were wrangling with

the amount of memory on the computer.

You've got 1,024 bytes of memory,
and you're just

trying to make a spreadsheet work in
this tiny little bit of memory, and

you need to squeeze every little bit
just to make it operate, right?

And it isn't about speed the way it often
was back then, but it's still about

whether the job can be done well or not.

And, and yeah, we'll need to be very
close to bare metal until all these

things run perfectly all the time, and be
about efficiency and cost and abstraction

Fraser Kelton: Yep.

Nabeel Hyatt: and the rest of that stuff.

If five words for your specific use
case are going to increase performance,

then, whether I'm Fidelity or
Procter & Gamble or Figma or Instawork

or another startup, like, I don't know
that I'm willing to gamble the future of

my business's effectiveness in AI, which
could twist and turn on five words,
Fraser Kelton: right.

Yeah.

Nabeel Hyatt: And who's going
to figure out those five words

for your specific business?

It's certainly not going to be
some random middleware company.

It's going to be you because you
care about your company and you've

hacked away at it or had a prompt
engineer who's hacking away at it.

You've really worked it to try and
figure out how to wrangle this alien

to do the work that you want it to do.

Fraser Kelton: The point here is that
the brittleness of these models today

across different use cases suggests that
you're going to want to have people,

quote unquote, like, working at the metal.

Yep.

Nabeel Hyatt: The analogy I would use is,
in the really early days of the web,

there was almost immediately
a bunch of WYSIWYG web page

developer software companies.

There were 30 startups that were like,
you don't have to learn CSS and HTML,

just use our little product, and you
can get your web page out without
tweaking any of that at all.

And, you know, if we fast
forward 10 years, of course,

there are many of those companies.

Today, there's Squarespace and
Webflow, a bunch of these companies

that are helping everybody from a
restaurant up the street all the

way to complex enterprise websites.

But in the early days, as a good example,
prior to CSS, the way that you laid

things out on a webpage, the way I got
something to show up on the right hand

side of a webpage versus the left
hand side, was to use a kludge, which

is to build a table, kind of like a
spreadsheet, on that webpage in HTML,

put my logo in one of the cells on the
right hand side so it's on the right,

and then make the borders of that
table invisible. And for me, it feels

like we are way more in that land than
we are in WYSIWYG abstraction land.

And so the whole first wave, the whole
first couple of years of WYSIWYG

website builder companies, all went
out of business very, very quickly.

What, what happened there?

If we're going to use that
analogy, what happened there?

What would be the business, if you
wanted to help a million companies

build their first LLM applications?

And the contention is that it's
not the time to build the Squarespace

of the space. Which I'm not certain
of, by the way; you know, this

is us just chatting on a podcast.

A founder could walk in tomorrow.

And pitch the most beautiful, wonderful
idea for a Squarespace for AI

and just prove you totally wrong.

And that's the joy of this process,

Fraser Kelton: That's,
that's what makes this so fun.

Nabeel Hyatt: Yeah, exactly.

So strong, strong convictions,
really loosely held.

But, actually, do you
believe in my analogy?

Do you think that's an apt analogy
or do you think I'm full of it?

Fraser Kelton: No, I don't
think you're full of it.

So, if I understand what's happened in
the Anthropic case, it is: the way that

they have tried to nudge the model to
improve performance has resulted in some

wonky behavior, and you can then nudge it
over that hurdle with five magic words.

And what does that say to me?

That, that says to me that
there's probably a solution that

happens at the system level.

If you think about how this may mature
why would they want their customers

to ever have to think about that?

They'll, they'll find ways to absorb
the solution or abstract the solution

for use cases where it makes sense.
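For listeners curious what this workaround looks like in practice: the Claude 2.1 finding referenced here was that prefilling the start of the assistant's reply with a particular sentence dramatically improved long-context recall. A minimal sketch, assuming the Anthropic messages payload shape; the prefill sentence is the one Anthropic reported, and everything else (function name, the example strings) is illustrative:

```python
# Sketch: nudging a model past "wonky behavior" by prefilling the start
# of the assistant's reply. The sentence below is the one Anthropic
# reported for Claude 2.1's long-context recall; the payload shape
# follows the Anthropic messages format, and the rest is illustrative.

MAGIC_PREFILL = "Here is the most relevant sentence in the context:"

def build_messages(context: str, question: str) -> list:
    """Build a messages payload whose final, prefilled assistant turn
    steers the model to answer from the supplied context."""
    return [
        {"role": "user", "content": f"{context}\n\n{question}"},
        # The model continues from this partial assistant message,
        # which nudges it over the refusal/hedging hurdle.
        {"role": "assistant", "content": MAGIC_PREFILL},
    ]

messages = build_messages(
    context="<a very long document pasted here>",
    question="What is the best thing to do in San Francisco?",
)
```

The point of the sketch is how small the lever is: nothing about the model changes, just a sentence fragment placed in the assistant turn.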

Nabeel Hyatt: Yeah, but I
don't have time for that.

I'm a founder that wants
first mover advantage.

Or, my boss told me that I need to
have an AI strategy and I need to

launch next month, and it's got to
get out of demo land because I've

got an earnings report next quarter.

Fraser Kelton: This, this is why
that person is having random success.

Sometimes they're succeeding,
sometimes they're failing.

And sometimes they come back to
the drawing board with an entirely

new approach one month later.

We've seen that a lot.

Nabeel Hyatt: That's very true.

I do wonder.

If Procter & Gamble and Fidelity
and JPMorgan and every other company

is trying to figure out how to use AI.

If I just think about the web, the
web analogy for a second, and you

don't want to overstretch any analogy
of course, but the really effective

companies in that first wave for
helping to bring everybody onto the

web were kind of a mixture of tools
companies slash consulting companies.

Fraser Kelton: Yeah,

Nabeel Hyatt: It was Scient and Viant
and Razorfish; you'd go pay them

hundreds of thousands of dollars and
they would build Time.com for the

first time. They were this kind of
mixture of design agency and software

engineering, and they ended up
with internal tool stacks
that they knew how to use.

I think there's an analogy to

Fraser Kelton: Oh, hell, yeah,

I mean, yeah.

There's a reason why OpenAI has, I
forget, I'm going to get the names

wrong here, but has a keystone
partnership with Bain, and Anthropic

has a keystone partnership with BCG.

These are the footholds to bring this
into the enterprise, as we've seen.

Five words makes the difference between
something that looks horrible and

something that would be delightful in
production, and there have to be people

who can help you navigate that, uh, as
the world is changing underneath your

feet every three months, no?

Nabeel Hyatt: Well, the contention is
that Razorfish and Scient and Viant

were net new orgs. Yes, they were
consulting organizations that rhyme

with Bain in what they actually
did, but Bain is old school.

Are there really great AI
implementation engineers waiting

at Bain to take you out to market?

Absolutely not, I would guess.

I, I suspect that there's an
opportunity for a net new company to be

filled with people who like to implement,
who will help take these tools, which

seem maybe very easy to stand up very
quickly, I can just go to a prompt and

type things in, but which I think people
will find are more complicated than they

think to actually implement and get live.

And that's why I like the HTML analogy.

It's incredibly simple to
build your first HTML page.

And it feels
like anyone can do it.

But actually trying to run
NYTimes.com, you know, is a whole

order of magnitude more difficult.

And especially in the early days,
when people didn't really know the

web and how to do web development.

You needed a set of people that were
your launch team and stood up the

internet, you know, website by website.

I think there's a little bit of that
that probably goes on and I just

don't think it's going to be McKinsey
or Bain or the folks that have,

Really very little of this specific
type of DNA, but I could be wrong.

Fraser Kelton: Yeah.

The people who did it back in the day,
transitioning people onto the internet.

Did they do it through just specialized
know how, or did they build tools

and platforms that allowed them to,
to simplify the task for others?

Nabeel Hyatt: Like anything, you start
out making a thing, and then, once

you've done it two or three times,
engineers can't help themselves.

And so you start to build efficient tools.

Fraser Kelton: But, so, are we back
to this: there actually is a middleware

company, like a tool that's going
to start from a consultancy-type

perspective and then get built out?

And then is your issue with the tool
startups just the fact that they're
not going to market appropriately?

Nabeel Hyatt: That's a good pushback.

It might be.

I mean, none of us know, we'll see
how this all plays out, but yeah, maybe

that's the right way. It's not what VCs want.

Hey, why don't you hire more
implementation engineers?

It's not what a VC on a panel would say.

They'd be like, no, no humans.

The AI should write itself.

But for where we are on the technology
side, it might be that the right answer

for the next 12 to 18 months is:

You have a whole bunch of
implementation engineers, script

monkeys who know all of the
unique folklore about how to wrangle

these models in the right direction.

So you're still selling your tool
set, but you're selling your tool set

along with a handful of implementation
engineers and a maintenance contract.

And I know that that breaks a lot
of the purity of software that we would

all love engineering to be, but
it might be the right thing for

this particular stage that we're in.

Fraser Kelton: Could be. You know,
going back to the start of the API,

there were two people, a guy named
Boris and a guy named Andrew at OpenAI

who were prompt wizards, like they
just knew how to construct and

orchestrate these things in a way.

And that's what, that's what they did.

They ran around to the implementations
that seemed most interesting and then

helped them sand off the rough edges
to see if it was a path to production.

And in many cases, they could nudge
them there, whereas few people could.

Nabeel Hyatt: Boris is a
great name for a startup.

Fraser Kelton: Yeah, he is remarkable.

He himself could be a startup.

So you don't think that these things
get abstracted the other way, where

they get pulled down into the actual
model level, and people aren't

interacting with any of this above that?

And, and it kind of ties back
to the Gemini thing, right?

Nabeel Hyatt: Oh, I think
that's a very good point.

Very likely that that happens in parallel.

And technology tends to go through
ages of entropy and de-entropy.

We all love, especially as engineers,
we love de-entropy, we love simplifying

everything, cleaning it up, separating
the noise from the signal, bringing it

all down into something that works.

But when things are not working fully,
you can't jump three steps ahead; you

have to go through a phase of entropy.

It's why I don't get nervous
about one more model launching

or one more startup launching.

We need as many shots on goal
and bets to move this technology

forward as quickly as possible.

Things that are trying to make a promise
of de-entropy too quickly just feel

incongruous to me when the goal is to
solve the problem reliably, and

we're still not at a reliable solution.

And so my bet is that the reliable
solution is going to get way more

complicated before it's going to get easier.

Fraser Kelton: Boy, we wrestled with this
one, but that one feels really right.

It's going to get more complicated
in every direction because we are

not at the reliability required for
consistent value in many use cases.

And like, why bother adding abstractions
of simplicity if you say it's

still not going to be good enough?

Nabeel Hyatt: Yeah, exactly.

Fraser Kelton: It's just a lot easier
for you to get something that's
broken into production.

Nabeel Hyatt: That's
what, that's the headline.

Why are we making it easier to
get broken things into production?

Or, we could just fix it
with marketing, Fraser.

Fraser Kelton: No, man, this is, like,
I'm listening to you talk about Gemini

and it, like, nudges me: they, they
misrepresented what the product is.

Nabeel Hyatt: you, you're
not reacting to the evals.

You're reacting to the demo video.

Fraser Kelton: That's right.

Actually, I don't even care
that much about the evals.

I think it's more interesting to
consider that all of these models are

going to have different tricks ranging
from those five words that Anthropic

had to do all the way up to like Q
star with test time compute type stuff.

The thing that bothers me is the video.

And I just thought about
what the equivalent is.

Remember like a decade ago Apple
started marketing their new cameras

by showing you the output of the
iPhone camera when they announced it.

And then, I don't know whether
it was Samsung or LG, when they

announced theirs, they shared photos
from DSLRs, and they silently

just wanted people to infer that
that was the image quality that was

coming from the phone. And then people
discovered within an hour that it was a

digital SLR that took the photo.

That feels exactly what happened here.

And I'm sure that the demo with a little
bit of rough edges that they would have

had if they had shown us the prompt steps
in between and the wait for an inference

to occur still would have been a magical
moment and people would have lost their

minds, but because we feel misled, it
erodes our trust and we feel betrayed,

which is a very funny thing to say.

This reminds me of a moment that
surprised me, and there's a

lesson here broadly for founders,
and it's not just, you know, be

honest in your marketing material.

I knew when I was a founder that the
common wisdom was just be completely

upfront with VCs because they have seen
so many pitches that they can sniff out

when something doesn't sound correct.

I will tell you, in a pitch a couple
of months ago, you may not remember it.

There was one moment where you paused, you
raised an eyebrow, you asked one question.

And it was, it was not an aggressive
question, but it, it pulled the

first thread that got to the truth.

And my sense is, in that case, if he
had just been up front, we'd have reached

a slightly different outcome, versus having
to pull that thread and discover that

there was a little bit of deception
in how he was presenting things.

Nabeel Hyatt: Oh, well, to be clear, he
was trying to put a gloss on everything.

And I do remember exactly that
meeting. The company had gone through

a pivot, some founder breakup-y
stuff, you know, just a lot of change.

And I think it had been around for a
little bit, all things we didn't know

when that founder came in to present.
But we take first meetings all week

long; if we're not good at reading
people and figuring out what's really

happened, we can't do this job.

And meanwhile, the most important part
of this job is establishing whether the

other person across the hall is authentic
and you can trust them, because you're

going to be on a long journey together.

And so, before it's a good business
model or it's an amazing product or it's

somebody you want to work with because
you love the intellectual banter and you

think they're going to be a great leader
or whatever else is going to get you

excited about this startup, you can't do
it if you don't think they're all being

authentic and real and honest with you.

And so, yeah, that was a founder who
had clearly gone through some stuff,

Fraser Kelton: We don't care if
they've gone through stuff, right?

Of course, that's part of the

Nabeel Hyatt: We would love it if
you've gone through stuff, like, you

learn some lessons. Just own it, and
tell the story about how you started

out thinking it was this other thing
and you were just wrong. Or

you moved to this town and it was
the wrong town because there was

just a bunch of fly-by-nights.

You took this founder on board, but they
were just a ne'er-do-well, so you had

to get rid of them, or whatever it
happens to be that you went through.

We just want your learned insights.

And I think way too often people
want to tell a glossy story about how

everything's up and to the right and
you got to get on board right now.

Oh, and the other trick is saying
this round is closing in two days.

This stuff that tries to manufacture
a sense of urgency. All of

that stuff just hurts.

If you have a slow fundraising
process, first of all,

people probably already know.

Just say, I think they're all dumb,
and this is the reason I think you are

going to be smarter than all of them.

You can try and appeal to their ego.

So I'm not saying you
don't try to storytell.

I'm just saying you have to know how
to do it while being yourself and being

authentic to the journey that you've been on.

Fraser Kelton: Yep.

How many times have we seen somebody,
oh, okay, the deal's coming together

in two days, you have to move quickly,
and then we're like, okay, well

then this is not the deal for us.

And then all of a sudden you see
them try to backtrack fairly quickly.

Well listen, we really like you, so maybe
we can give you a couple extra days.

And you're like, alright

Nabeel Hyatt: I had the exact opposite
thing happen to me last week, where a

founder emailed in and I passed over
email, but I wrote a good little

paragraph about why: these are the
reasons that I'm not sure you're

going to get there.

Mostly you get crickets;
they're going to move on,

which is totally understandable, right?

The second thing you get is defensive,
angry feedback that I'm just dumb.

I'm not sure exactly what that
sales tactic is, but whatever.

I got back from that founder:
you might be right; here are the

things that I think I've worked through
to try and prove what you're saying

wrong. And then they gave a couple of little

notes of the other things they tried
that don't come out, of course, in

the one paragraph pitch of the other
versions of that business over time,

the struggles they've had, and so forth.

I mean, I got on a Zoom with that
person, like, three hours later.

Fraser Kelton: Yep.

Nabeel Hyatt: I was like, Oh, you're

Fraser Kelton: get it.

I get it.

Nabeel Hyatt: ...in this business.

And you're authentically
trying to engage with me on it.

And you're not combative about it.

You're just having a
conversation about it.

Like, awesome.

And I, you know, didn't turn
into an investment that day.

It may in the future we'll
see, but I certainly hold that

founder in really high regard.

Fraser Kelton: I get it.

It was amazing to see some depth
of experience, such that you just

knew, based on two sentences,
that something wasn't right.

And it just reinforced what I had
been told when I was a founder.

Don't bother, right?

Nabeel Hyatt: It's similar when you're
pitching and you're trying to gloss

over the particular risks or problems
with your startup. The old trick

that realtors use is, when they
show you a house, they list all

the wonderful things, and then, while
they're doing the walkthrough, they

talk about the one thing that's
the problem with this house. What

they're trying to do is focus your time
and energy on that one thing so you

don't think of the 30 other things.

That's very different from
authentically having a conversation

with an investor about your business.

But similarly, these are
early stage startups.

There's no way nothing is
wrong with your business.

Fraser Kelton: Yeah.

Nabeel Hyatt: And so you might as
well talk about the things that you

think are really risky or are broken
or that you haven't figured out yet.

Because the right investor is going to
be the person that's going to be like,

I don't think those are real risks, or
I'm willing to take on that risk, or

like, I think you can solve that risk,
and that's, that's the right way to

have the conversation about the path.

Nobody expects these things to be
totally finished and that's a very,

Fraser Kelton: sure.

Yeah.

Somebody internally here said
that the quickest way to a no

is when there is no risk, right?

Because that's not a venture business.

That's, that's not for us.

Nabeel Hyatt: When a founder feels like
they know the problems they think they've

solved, and the areas where they're
self-reflective and self-aware enough to

realize they've got a lot of work to do,
then you can have an open and honest

conversation about doing that work.

Fraser Kelton: Mm hmm.

I also think the challenge here is
that every firm is different, right?

And so, whenever anybody has
shown us a demo, you see almost

everybody in the room lean forward.

And when people have been in stilted,
presentation pitch mode, everybody's,

you know, kind of leaning back.

Nabeel Hyatt: At Spark, we like
demos, we like talking product.

But you're right, that's not
how it is at every shop.

That's not how lots
of investors operate.

Fraser Kelton: That would probably
be the challenge here:

everybody operates differently.

That's where you have an opportunity in
these moments to find the person that

you want to be with for a long time.

Right?

So there are different founders
who appreciate different

types of techniques, too.

Nabeel Hyatt: It can feel from the
fundraising side, and I certainly felt

it as a founder, like, I just want
somebody to give me a first term sheet.

I'm just trying to raise
capital, whoever it can be.

But that's a little bit like, in today's
age, applying to college by just

saying, I really love your school
because it's a great school for learning.

That's not a great
way to get into college.

I had an admissions person at NYU tell
me that the admissions people there

always do a thing where they cover up
the answer to, why do you want to go

to NYU? And if you could put Columbia
instead of NYU in the answer, then

that's not the person for NYU, right?

Fraser Kelton: Yep.

Yep.

Nabeel Hyatt: You know, if it's,
I love the opportunity for

internships in a dynamic city
and, you know, stuff like that,

that's not really about NYU.
That's about New York.

So I think, similarly, when fundraising:
the thing I got to in the latter

half of my third and fourth startups
was, um, I'm going to pitch

the way I want to pitch, not the way
my founder friends tell me to pitch.

And I'm going to pitch in a way that is
authentically me, the way that I

want to talk about how I want to
run this company, the culture I want

to build, the problems my startup has.

I'm just going to lay it
on the table authentically.

And then the job isn't to
find 50 term sheets.

The job is to find one or two term sheets.

If I can pitch the way I want to
pitch my business, I'll get lots

of strong no's, but one strong yes.

And lots of strong no's with one or
two or three strong yes's is ten

times more valuable than a bunch of
meh, this seemed okay, because those

don't lead to board seats and checks
and people who are going to join
your cause for the next ten years.

Fraser Kelton: So, having listened to
that and then having a moment to reflect:

the thing that I would do differently,
that I think would have a material impact,

is to have a very authentic opening
as to why I was excited to have

this conversation with this specific
person at this specific firm.

You and I had the joy of sitting
with that founder a couple of months

ago now, who said, I'm excited
at the prospect of working with

Spark because you have a history of
supporting founders doing brave things.

And I know that worked on you and
me, because we independently

repeated it to other people after the fact.

Nabeel Hyatt: good

Fraser Kelton: And it was gr...
it was great sales.

But, just like any great
sales, it was authentic

and it resonated, right?

Nabeel Hyatt: and you could feel it in
the tone of when they were saying that,

it was something they were really feeling.

What do you do when you're pitching
XYZ fund and you don't really know

why you're talking to them? You just
can't figure it out; there's no

obvious, amazing reason why you're
talking to them in the first place.

Fraser Kelton: Why are you wasting
either party's time?

That's the first question.

If you can't put in 15 minutes of thought
and research and come up with one reason,

then why are you talking to that person?

Nabeel Hyatt: There are CEOs that
like to build very long spreadsheets

of the 40 people that they're going
to go through and talk to.

And look, there are times where
fundraising really is that: it was

the 38th person in the Excel spreadsheet
that you got to that raised the round.

Personally, I have had rounds where
I have had to do that in the past.

But I think that is different
from being casual about it.

These are long-term relationships,
these are big decisions for the person

on the other side, and they can feel
it when the work hasn't been put in.

And so, I know for a lot of founders,
raising money can feel like a, quote

unquote, distraction, and they want to
get back to, quote unquote, work.

And I've always really hated that
phrasing, because, you know, getting

rid of a board member is like 10
times harder than getting divorced.

Like, you're recruiting
somebody that is going to be with

you for a really long time.

Like, you should put the time and
effort in the same way that you would

to recruit a CTO or anybody else.

Fraser Kelton: Oh, for sure, right?

It's a byproduct of COVID, maybe,
where it became speed dating

and the whole community went crazy.
But the idea that you would sign

up for this level of intense
camaraderie without having made an
investment in each other seems rather silly.

You know, I got good news for you.

I got good news for you.

Nabeel Hyatt: What's up?

Fraser Kelton: I googled Superpowered.

Nabeel Hyatt: Mm hmm.

Fraser Kelton: And I'm reading
an article, and we'll come back

to the product, but I want to,
I want to read this with you.

The company says they are not
shutting down the initial product,

Nabeel Hyatt: Yes!

Fraser Kelton: All right.

Let's take a step back.

Tell us about Superpowered
and why you're over the moon.

Mm,

Nabeel Hyatt: So, this is a great
segue into product of the week. I have

been trying to record more and more
of my life on a daily basis, and

try and summarize it and make it
searchable and so forth. Superpowered

started out as kind of a meeting
bot helper company, actually prior

to GPT, and then post ChatGPT,
they turned it into an AI note taker

for your Zoom meetings or your Google
Meet meetings and so on and so forth.

Now, if that sounds like 30 other
startups, that is because there are

like 30 other startups that are also
AI note taking startups, folks like

Fireflies, and I think Gong
does this for salespeople.

And you could just go open the Zoom
app store and take a look through.

And by the way, Zoom itself has
natively launched summarization

as well to take notes while
you're inside of your meetings.

And so the question is, why am I
excited about Superpowered not

dying when all these things exist?

Everyone's going to launch a
version of a product, and they're

all going to be noisy, but the real
question is, who's done it right?

And at least in my personal view,
I've tried all of these products.

And none of them are good enough that
I would ever use them week after week

after week, except for Superpowered.

Fraser Kelton: What,
what is Superpowered?

Nabeel Hyatt: You mean,
what does the product do?

Fraser Kelton: You just said that it's
all of the small things that they've

done right that make it stand out
from any other AI transcription service.

Like, isn't it, don't you just want
it to do reliable transcription?

Nabeel Hyatt: No.

First of all, nobody wants to look
at the transcription of any meeting.

There is no way that I want to
actually look through all of the

ridiculous things that I talk about
every single day, word by word.

What you really want is summarization,
and what you really want is action items.

And the execution on that
summarization and the execution on

those action items is what matters.

And it just turns out that
there's actually wildly

variant execution on that job.

The two particular problems that I
have with most of the other products

that do summarization are, first,
they run inside of Zoom as an app.

And I don't want Zoom
having control over it.

I want my desktop to have control over it.

And so, Superpowered is a
desktop app, not a Zoom app.

That's the first thing, and it matters a lot.

Fraser Kelton: that's a big difference.

Nabeel Hyatt: And it allows
them, in particular, to add

new interfaces, new chrome.

They can iterate on it, like, 50
times faster than trying to be one

button on the toolbar at the bottom of Zoom.

It also means a user doesn't have to
ask their corporate overlords whether

they will approve this app
to run on their infrastructure.

Which I think is a real thing
we've got to think about in AI.

A quick aside: I was talking to my
friend who works at Amazon, and he said,

you know, we talk on these podcasts
about all these wonderful products, and

he's like, let me just remind
you what's going on in real life.

Every time he even goes to use ChatGPT,
Amazon internally puts up this big

prompt that yells at him and says,
listen, just so you know, if you put

any confidential information into this
product, we will come and kill you.

He can't have things scraping his
email to summarize it properly.

You can't have products that are
helping arrange meetings through AI;

Amazon's not letting that stuff happen.

They're on lockdown.

Anyway, Superpowered runs on your
desktop, that's the first thing,

and so I have control over its
use, not my corporate overlords.

And then the second thing, and again it
kind of goes back to this previous

conversation about understanding
where we are on the entropy curve, is

that they let you edit the prompts.

So they have, they have meeting types.

So for instance, if I'm meeting
a new company, I have a meeting

type called New Company.

And then when I meet with the founders we
work with, that's called Founders, right?

And the notes I want to take away
from each of these types of

meetings are remarkably different.

And of course they have starter
prompts that they put in there for the

noob who doesn't know what they're doing.

But inevitably, probably for 90
percent of people today, something's

wrong with that default. It says
something in there that's not right

for me, and it lets you open up the
prompt, edit the prompt, and get what

you want out of it. Which is the
difference between something that is

kind of meh and okay, it gave me a
couple of interesting summarization

topics and titles, versus I feel like an
active participant in making this thing work.
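The mechanic being described, starter prompts keyed by meeting type that the user can open up and edit, can be pictured with a small sketch. The meeting types, template text, and function names here are made up for illustration; this is not Superpowered's actual implementation:

```python
# Sketch of user-editable summarization prompts keyed by meeting type.
# Meeting types and template text are made-up placeholders, not
# Superpowered's real prompts.

DEFAULT_PROMPTS = {
    "new_company": (
        "Summarize this first meeting with a startup. Pull out the "
        "problem, the product, the team, and open questions."
    ),
    "founders": (
        "Summarize this check-in with a founder we work with. Pull out "
        "progress since last time, blockers, and action items."
    ),
}

def build_summary_prompt(meeting_type, transcript, overrides=None):
    """Use the user's edited prompt for this meeting type if one
    exists, otherwise fall back to the starter prompt."""
    prompts = {**DEFAULT_PROMPTS, **(overrides or {})}
    return f"{prompts[meeting_type]}\n\nTranscript:\n{transcript}"

# A user edits the starter prompt to match how they actually take notes:
my_edits = {"founders": "List only action items, one per line."}
prompt = build_summary_prompt("founders", "(transcript text)", my_edits)
```

The design point is that the prompt is user data rather than a hidden constant, which is what turns a "meh" default summary into one the user actively shaped.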

Fraser Kelton: Don't you think that
this is also maybe why they're pivoting

away from it, in the sense that this
is a really hard problem, right?

I was just thinking that the diversity
of meetings that people have, and then

the preferences of workflows across
those different types of meetings, means

that there's, like, an explosion in,
quote unquote, getting this to work well.

Nabeel Hyatt: I think there's two
points there worth touching on.

The first of which is that,
look, of course it's a problem.

It's a problem that there are lots
of different use cases in meetings,

and it's a problem that this is a
really busy market with lots of
competition, so it's hard to stick out.

If a startup doesn't want to solve
problems, then what are they doing?

Like, I, I do worry sometimes that we,
we try and avoid all of the risk in

our startups when problems existing
out in the world is why startups have

a chance to exist in the first place.

So you have to pick your
proper problems, but,

Fraser Kelton: Huh.

Nabeel Hyatt: But yeah, let's spend
a little time figuring out, if I'm

a startup, how to solve this problem.
Because if we can solve it, then it's

perfectly obvious that the 18 or 20
other AI meeting note companies

have not solved this problem yet.

So if I have a breakthrough,

Fraser Kelton: Mm

Nabeel Hyatt: then suddenly I can
explain that breakthrough very

clearly to my customers, and I now
have an advantage in the market.

Fraser Kelton: Hmm.

Nabeel Hyatt: So,

Fraser Kelton: Good, good point.

That's fair.

Nabeel Hyatt: And then my second point is
somewhat related, but I think it's also

back to the entropy, de-entropy thing.

If you honestly think that
we are still at the point where we're

trying to make all these AI products
work, then just accept the idea that

AI products are early adopter
products right now. And they will

not just be early adopter products
that only crazy people like you or

me might try and play with, being early
adopters across every single vertical

and horizontal, every single week, to see
how all of these tools are developing.

But early adopter doesn't just
mean nerd. It means that for

somebody, this problem is so acute

Fraser Kelton: Mm hmm.

Nabeel Hyatt: that they will be an early
adopter to try and figure out the solution.

And that early adopter customer will
help you find the solution if

you give them the tools to work on it.

And as an, as an example, you know,
Adept, which we're an investor in,

they created an action
transformer model, and they've released

a workflow tool for building your own
little webpage navigator to take actions

on a webpage and do little workflows.

Now, the model itself is not the large
model they'll be launching relatively

soon, so it's an earlier model, and
the workflow tool itself is, let's

be honest, like, kind of hard to use,
clearly an R&D product and definitely

not a late adopter product that I
would give my mother or father, right?

But

Fraser Kelton: mm hmm,

Nabeel Hyatt: But for the people for whom
those workflows are really,

really acute problems in their lives.

They are going to trudge through it.

And then you will learn with your customer,
versus in some R&D lab somewhere where

your assumptions about your customer
are wrong, which is the right way to

build when you're early in a market.

Fraser Kelton: Mm hmm. The challenges that
exist today are, you know, normal in terms

of trying to figure out how to solve a
large, meaningful problem. And it's

a shame, uh, if that's the reason why
a group who had a little bit of an edge

on it is likely to not be investing
too actively into trying to solve it.

Nabeel Hyatt: And look, we don't
know the superpowered AI founders.

I don't know where they are in
funding or their traction or their

progress or what excites them
and gets them up in the morning.

I'm just a consumer of the product. But
all I know is that everybody here should

go to SuperPowered AI and give them
money so that they stay in business,

so that I can keep using the product.

Fraser Kelton: You know, I
just skimmed the article.

It says that it's hard to
differentiate in this type of a

market to have sustained growth.

They are profitable, and so
they hope to find somebody who

will just continue to run it.

But they're pivoting to become an API
provider for anybody to create a natural

sounding voice based AI assistant.

Nabeel Hyatt: That is also a busy space,
of course, but, you know, it's also

very possible that's just a problem that
they are more excited about solving.

And they will work through the
hard difficulties of that particular

problem with more verve and with
more passion than they have for

meeting notes, which is fine.

Fraser Kelton: You know, yeah, people
don't often talk about that, right?

You go through a pivot and you're
doing it for a lot of, you know,

logical reasons, but you might either
pivot into or away from an idea

that you actually care deeply about.

Nabeel Hyatt: Yeah, it's not a short road.

Yeah, let's be done for today.

I

Fraser Kelton: Do it.

Nabeel Hyatt: think we went
through some good stuff.

Go download SuperPowered.

Give it a try.

We'd love to hear from
you on that product.

I'll also add there's a couple
other products that have launched

that are allowing you to do live
AI drawing if you wanna try one.

Leonardo AI launched a pretty good
live canvas feature where you can draw

on one side through a prompt and it
redraws it on the right hand side.

if you want to give it a shot.
And then we will see you all.

Are we doing one next week, Fraser?

Fraser Kelton: I don't know.

Let's see.

We'll see in the future.

We, we're not sure.

Nabeel Hyatt: We'll see
you all in the future.

Bye

Fraser Kelton: See ya.

All audio, artwork, episode descriptions and notes are property of Fraser Kelton & Nabeel Hyatt, for Hallway Chat, and published with permission by Transistor, Inc.
