Enxiang Qiu

Writing

Andrew Ng: The Biggest Change Isn't Stronger AI. It's Faster Startup Execution

Apr 30, 2026 · Blog

After watching Andrew Ng at YC AI Startup School, the shift that matters most to me is not that models got stronger. It is that AI is rewriting the execution economics of startups.


When people talk about AI startups right now, the easiest thing to get excited about is usually the obvious one: models got better, agents look more human, and new tools can automate more work than they could a few months ago.

But after watching Andrew Ng's talk at YC AI Startup School, I don't think his most important point was simply that AI is powerful. The more important point is something else:

AI is not changing the basic logic of startups first. It is changing the execution economics of startups.

Demand still matters. Product judgment still matters. Users do not suddenly love a bad product because the model behind it got stronger. What really changed is the layer underneath all of that: the speed at which a vague idea can become a prototype, the speed at which that prototype can be shown to users, and the speed at which feedback can be turned into another iteration.

That may be the most real and most underrated effect AI is having on startups right now.

1. AI did not change the basic rules of startups, but it changed the cost of learning

Andrew Ng keeps coming back to one point: one of the strongest predictors of startup success is execution speed.

That line sounds familiar, almost boring. But in the current AI environment, it means something sharper than it used to.

In the past, many teams moved slowly not because they lacked ambition, but because building anything at all was expensive. Writing code was expensive. Changing code was expensive. Changing direction was even more expensive. As a result, many product judgments stayed trapped inside slides, PRDs, and meetings for too long, and real validation happened too late.

That is no longer true to the same extent. AI coding, agentic workflows, and all kinds of reusable building blocks have made it dramatically cheaper to build something testable first. AI does not make startups easy, but it does make mistakes surface faster, and it gives the right direction a better chance of being discovered earlier.

So the capability founders most need to upgrade now is not storytelling. It is the ability to compress a vague direction into a testable prototype quickly.

2. The most important line in the talk: Concreteness buys speed

If I had to keep only one sentence from the talk, it would be this:

Concreteness buys speed.

The meaning is straightforward: specificity determines execution speed.

This is where a lot of teams get stuck.

We often think the missing input is time, money, or technology. In practice, the missing input is frequently the ability to make an idea concrete. As long as an idea remains abstract, nobody on the team is really imagining the same thing. Engineering cannot truly start, and feedback cannot truly happen.

Specificity does not guarantee that you are right. Its value is almost the opposite: even when you are wrong, you find out much sooner.

And in startups, learning that you are wrong quickly is often more valuable than preserving the illusion of being right for too long.

3. In the AI era, code is no longer the bottleneck. Product judgment is.

Another important observation in the talk is that once engineering speed increases, the bottleneck shifts toward product judgment.

In the past, many teams were constrained by "we do not have time to build this."
More and more teams are now constrained by "we can build it, but we do not know what to build."

That is a subtle change, but a fundamental one.

If a prototype is already cheap to build, what creates separation is no longer just coding ability. It is the ability to answer harder questions: who the user is, which task is actually being improved, and what would make the new experience clearly better than the old one.

That is why Andrew spends so much time on feedback loops. His message is not simply "move faster." It is "touch reality faster."

AI makes building faster, but it does not replace product judgment.

4. The biggest opportunities are still in the application layer

Another point I strongly agree with is his view on the application layer.

The loudest conversations are always about models, compute, and infrastructure because those topics feel more technical and produce stronger narratives. But Andrew's point is clear: the biggest startup opportunities still live in applications.

The reason is not complicated.

No matter how strong the underlying capability becomes, someone still has to organize it into concrete value. Models do not automatically become products. Capabilities do not automatically become workflows. A technical breakthrough does not automatically become an experience users want to pay for.

So the real work is not repeating that "AI is powerful." The real work is finding out where that power becomes concrete value: for which user, for which task, and why the result is clearly worth paying for.
In that sense, AI entrepreneurship is less about chasing the newest buzzword and more about understanding one specific user group and one specific task with much more clarity.

5. Fast does not mean careless

Andrew also uses a phrase I like a lot: move fast and be responsible.

It is a useful correction to the lazy interpretation of startup speed, the one that assumes velocity means every other form of judgment can wait.

A prototype can be rough. Validation can focus on the main path first. But that still does not remove a more basic question:

Is this thing actually worth building?

He mentioned that his team has turned down projects that made sense commercially but were not ones they wanted to support ethically. That detail matters, because it shows that responsibility does not show up in PR language. It shows up in topic selection, boundary-setting, and what you decide not to build.

The danger in the AI era is not only technical risk itself. It is also the temptation to confuse speed with permission to stop judging.

Mature speed is not reckless speed. It is the ability to move quickly while keeping moral and product judgment intact.

6. What I want to internalize is not a conclusion, but a way of working

If I try to turn the talk into a method rather than a set of quotes, the most useful thing to practice is this loop: make the idea concrete, build a testable prototype quickly, put it in front of real users, and turn their feedback into the next iteration.

At its core, the talk is not really about "how to understand AI." It is about how to build products more honestly in the AI era.

Closing

Many people leave AI talks carrying only excitement.
What makes this one more valuable is that it offers something sturdier: a better startup posture. Be concrete, build fast, touch reality early, and keep your product and moral judgment intact while you move.