Vibe coding is the practice of building software by describing what you want in natural language and letting AI write the code. The term was coined by Andrej Karpathy in February 2025, but the forces behind it stretch back over two decades. This article traces the full arc — from the earliest autocomplete features in IDEs to the AI-powered app builders that now let people with no programming experience ship working software.

Understanding this history matters because vibe coding did not appear from nowhere. It is the result of converging trends in AI research, developer tooling, and a persistent market demand: millions of people who wanted to build software but could not write code. Each phase built on the one before it, and knowing where we came from makes it easier to understand where this is heading.

The Demand That Always Existed

Long before anyone talked about vibe coding, the demand was there. Every industry had people with ideas for software — internal tools, customer-facing products, workflow automations — who lacked the technical skills to build them. The gap between "I know exactly what I need" and "I can make it" was enormous and expensive to cross.

For decades, the only options were to learn to code (a multi-year commitment), hire a developer (expensive and slow), or use a visual builder (constrained and limiting). Platforms like Microsoft Access in the 1990s, WordPress in the 2000s, and Bubble and Webflow in the 2010s each served a slice of this demand. But none of them fully solved the problem. Visual builders forced you into their constraints. Learning to code required a level of commitment that filtered out most people. Hiring developers required capital that most ideas could not justify.

This unmet demand — the gap between non-technical people who knew what they wanted and the ability to build it — is the economic engine behind everything that followed. Every tool in the vibe coding landscape exists because this gap persisted for decades.

The Precursors: Autocomplete to Copilot (2001–2022)

The technical lineage of vibe coding begins with something far more modest: autocomplete. Microsoft had shipped IntelliSense with Visual Studio since the late 1990s, and by Visual Studio .NET in 2002 it could suggest method names, complete function signatures, and display documentation inline as you typed. It was not intelligent — it worked by indexing the project's type system and presenting matching options in a dropdown menu. But it established a foundational idea: the editor should help you write code faster by predicting what you want to type next.

Over the next fifteen years, autocomplete got incrementally better. Eclipse's content assist, JetBrains' code completion in IntelliJ IDEA, and VS Code's built-in IntelliSense all improved on the pattern. These tools understood type systems, imported libraries, and common patterns. They saved keystrokes. They did not write code for you.

The first real leap came from machine learning. In 2018, researchers began training neural networks on large code corpora. Deep TabNine (later Tabnine) launched in 2019, using GPT-2 to predict multi-token completions. It was not just completing what you started typing — it was guessing what you might type next based on the patterns it had learned from millions of lines of open-source code. For the first time, the editor was doing something that felt like understanding context, even if the underlying mechanism was statistical prediction.

Then, in June 2021, GitHub Copilot launched in technical preview. Built on OpenAI's Codex model (a descendant of GPT-3 fine-tuned on code), Copilot was a qualitative leap. It could generate entire functions from a comment. Write a docstring describing what you wanted, press Tab, and Copilot would produce a plausible implementation. It was not always correct — the code sometimes had bugs, used deprecated APIs, or hallucinated non-existent functions — but it was fast, and it worked often enough to change how developers thought about writing code.
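The comment-to-code workflow can be sketched as follows. The docstring is the kind of prompt a developer would write; the body below it is the kind of completion a Copilot-era tool might propose (this is an illustrative example, not actual Copilot output):

```python
import re

def slugify(title: str) -> str:
    """Convert a post title to a URL-safe slug: lowercase,
    alphanumerics only, words joined by hyphens."""
    # Given only the signature and docstring above, tools like
    # Copilot would propose a plausible body such as this one.
    words = re.findall(r"[a-z0-9]+", title.lower())
    return "-".join(words)

print(slugify("Hello, World! 2021"))  # hello-world-2021
```

The appeal was exactly this shape of interaction: describe intent in prose, press Tab, and review (rather than type) the implementation.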

Copilot's significance was cultural as much as technical. It normalized the idea that AI could participate in the act of programming. Developers began to think of their editor as a collaborator, not just a text input tool. The phrase "AI pair programmer" entered common use. By the time Copilot became generally available in June 2022, it had over a million users and was generating an estimated 40% of the code in repositories where it was enabled.

The Inflection Point: ChatGPT and the Shift in Developer Habits (2022–2023)

If Copilot was the first chapter, ChatGPT was the plot twist. When OpenAI released ChatGPT on November 30, 2022, it reached one million users in five days and 100 million within two months — the fastest consumer adoption in history at that point. And a significant portion of those users immediately started using it to write code.

ChatGPT changed the workflow. Instead of writing code and asking an AI to complete it (the Copilot model), developers began describing what they wanted in plain English and asking ChatGPT to write the entire implementation. The conversation model was fundamentally different from the inline-completion model. You could explain a business requirement, ask for a database schema, request a React component, and get working code in response. You could paste an error message and get a debugging explanation. You could say "now add pagination" and get the updated code.
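The "now add pagination" style of iteration can be illustrated with a sketch like this (the function is a hypothetical example of the kind of incremental output the chat loop produced, not taken from any specific conversation):

```python
# First request: "give me a function that returns my items."
# Follow-up request: "now add pagination." The model rewrites
# the function into something like the following:

def paginate(items, page: int = 1, per_page: int = 10):
    """Return one page of items plus simple pagination metadata."""
    start = (page - 1) * per_page
    chunk = items[start:start + per_page]
    total_pages = max(1, -(-len(items) // per_page))  # ceiling division
    return {"items": chunk, "page": page, "total_pages": total_pages}

result = paginate(list(range(25)), page=3, per_page=10)
print(result["items"])        # [20, 21, 22, 23, 24]
print(result["total_pages"])  # 3
```

Each follow-up message replaced the previous version wholesale, which is why the copy-paste step became the bottleneck the next generation of tools removed.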

This was not autocomplete. This was dialogue-driven programming, and it was accessible to a much wider audience. Non-developers who had never opened a code editor could now paste ChatGPT's output into a file, run it, and see something work. The results were often rough — missing error handling, security vulnerabilities, no tests — but they worked. For prototyping, internal tools, and personal projects, "it works" was enough.

GPT-4, released in March 2023, raised the quality bar significantly. It could handle longer contexts, produce more coherent multi-file projects, and understand more complex requirements. By mid-2023, a growing community of non-developers was building real projects by copying code from ChatGPT into files, running them, pasting error messages back, and iterating. It was messy, manual, and inefficient — but it was working.

This period — late 2022 through 2023 — is when the behavioral shift happened. Developers stopped treating AI as a novelty and started treating it as infrastructure. Surveys from Stack Overflow, JetBrains, and GitHub consistently showed that 60–70% of professional developers were using AI coding tools regularly by the end of 2023. The question was no longer "should I use AI to code?" but "which AI tool should I use?"

AI-Native Editors Emerge: Cursor, Windsurf, and the IDE Wars (2023–2024)

The limitation of ChatGPT as a coding tool was obvious: you had to copy and paste. The model could not see your codebase. It could not read your existing files, understand your project structure, or make changes across multiple files simultaneously. Every interaction required manually providing context. This was slow, error-prone, and scaled poorly as projects grew.

Cursor launched in early 2023 to solve exactly this problem. Built as a fork of VS Code, Cursor embedded AI directly into the editor with full access to your codebase. Its key innovation was codebase-aware conversation: you could ask Cursor to make a change, and it would read the relevant files, understand the project structure, and generate edits across multiple files at once. The Cmd+K shortcut became iconic — highlight some code, describe what you want to change, and Cursor rewrites it in context.

Cursor also introduced Composer mode in 2024, which took the concept further. Instead of editing one file at a time, Composer could plan and execute multi-file changes: add a new API route, update the database schema, modify the frontend component, and adjust the types — all from a single natural language instruction. This was the first tool that made AI-driven development feel like it could handle real project complexity.

Windsurf, built by the team behind Codeium, took a different approach. Where Cursor focused on codebase-aware editing, Windsurf developed Cascade — an autonomous agent mode that could plan multi-step tasks, execute them, and iterate on the results. Cascade could be given a high-level objective ("add user authentication to this app") and would work through the implementation steps autonomously, asking for human confirmation at key decision points. It was smoother, more visual, and positioned as the tool for developers who wanted the AI to do more of the driving.

By mid-2024, the AI editor market had become genuinely competitive. GitHub enhanced Copilot with Copilot Chat and workspace-level context. JetBrains added AI features to IntelliJ and the rest of its IDE suite. Amazon's CodeWhisperer (later Q Developer) targeted the enterprise market. The term "AI-native IDE" entered common usage to distinguish tools like Cursor and Windsurf — where AI was the core design principle — from traditional editors that had added AI features as an afterthought.

The competition drove rapid improvement. By the end of 2024, the best AI editors could handle tasks that would have been unthinkable a year earlier: refactoring entire modules, migrating between frameworks, generating test suites, and managing complex state changes across dozens of files.

Andrej Karpathy Names It (February 2025)

The practice existed before it had a name. Developers and non-developers alike had been building software with AI for two years by early 2025. But the term that unified the movement came from Andrej Karpathy — former director of AI at Tesla, co-founder of OpenAI, and one of the most respected voices in machine learning.

On February 2, 2025, Karpathy posted on X (formerly Twitter): "There's a new kind of coding I call 'vibe coding,' where you fully give in to the vibes, embrace exponentials, and forget that the code even exists." He described a workflow where he would describe what he wanted, let the AI produce code, run it, observe the result, and iterate — without ever reading the code line by line. When something broke, he would paste the error back and let the AI fix it. "It's not really coding," he wrote. "I just see stuff, say stuff, run stuff, and copy paste stuff, and it mostly works."

The post went viral because it named something millions of people were already doing. The term stuck instantly. Within weeks, "vibe coding" was being used in blog posts, conference talks, YouTube videos, and product marketing. It captured the essence of the new workflow: you describe what you want, the AI handles the implementation, and you evaluate the result. The "vibe" was the human intent — the what and the why — while the AI handled the how.

Karpathy's framing was significant for another reason: it gave permission. He was not a hobbyist or a marketer. He was one of the most accomplished machine learning researchers in the world, and he was publicly saying that this approach to building software was legitimate and productive. That validation mattered enormously for the growing community of people who were building things with AI but felt uncertain about whether what they were doing "counted" as real development.

The App Builder Moment: Lovable, Bolt, and the Non-Coder Era

While Cursor and Windsurf were making developers faster, a parallel revolution was making developers optional. The AI app builders — Lovable, Bolt.new, and v0 — went further than any previous tool by generating complete, working applications from natural language descriptions.

Lovable (originally called GPT Engineer) launched in 2024 and quickly became the default tool for non-technical founders. You describe your app — "a project management tool with kanban boards, user accounts, and team invitations" — and Lovable generates a full-stack application with a React frontend, Supabase backend, authentication, and deployment. The output is real code: TypeScript, React components, SQL migrations. You can export it, modify it in Cursor, or continue iterating in Lovable's browser-based editor.

Bolt.new, from the team behind StackBlitz, took a speed-first approach. Using WebContainers to run a full Node.js environment in the browser, Bolt could generate and run applications in seconds without any server-side infrastructure. It became the fastest way to go from idea to working prototype. Where Lovable emphasized polish and full-stack generation, Bolt emphasized iteration speed and the ability to try ideas quickly.

v0, built by Vercel, focused on the component and UI layer. Rather than generating complete applications, v0 specialized in creating individual UI components and pages with high design fidelity. It was particularly strong at generating shadcn/ui components and integrating with Next.js projects. For developers building within the Vercel ecosystem, v0 became the go-to tool for UI generation.

The cultural impact of these tools was profound. For the first time, people with no programming background could describe an idea and get a working web application in minutes. When non-technical creators began shipping viral projects built entirely with AI tools, it marked a cultural turning point. These were not toys or demos. They were products that people actually used, built by makers who had never written a line of code before.

But the app builder moment also brought honest limitations into focus. A security study in May 2025 found vulnerabilities in a significant number of Lovable-generated applications, particularly around missing Row Level Security policies in Supabase. Code quality tended to degrade as projects grew more complex — the AI would lose context, introduce inconsistencies, or generate redundant code. These tools were exceptional for prototyping and early-stage MVPs, but graduating to production typically required moving the codebase to Cursor or another editor for manual refinement.
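The missing-RLS class of bug is easiest to see in miniature. The sketch below uses Python and sqlite3 rather than Supabase, and the function and table names are invented for illustration, but the shape of the flaw is the same: the query filter comes from client input instead of being enforced against the authenticated session:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE notes (id INTEGER, owner TEXT, body TEXT)")
conn.executemany("INSERT INTO notes VALUES (?, ?, ?)",
                 [(1, "alice", "alice note"), (2, "bob", "bob note")])

def list_notes_vulnerable(request: dict) -> list[str]:
    # Bug class found in generated apps: the row filter is taken from a
    # client-supplied parameter, so any caller can request any owner's data.
    owner = request["params"].get("owner")
    rows = conn.execute("SELECT body FROM notes WHERE owner = ?", (owner,))
    return [body for (body,) in rows]

def list_notes_safe(request: dict) -> list[str]:
    # The fix: scope the query to the authenticated user — the moral
    # equivalent of a database-enforced Row Level Security policy.
    owner = request["session"]["user"]
    rows = conn.execute("SELECT body FROM notes WHERE owner = ?", (owner,))
    return [body for (body,) in rows]

# Bob's session, but the request asks for Alice's notes:
req = {"session": {"user": "bob"}, "params": {"owner": "alice"}}
print(list_notes_vulnerable(req))  # ['alice note']  <- data leak
print(list_notes_safe(req))        # ['bob note']
```

In Supabase specifically, the database-level equivalent is an RLS policy, which holds even when the generated application code forgets to filter.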

Agents Enter the Picture: Autonomous Code, Autonomous Shipping

The next evolution was from assistant to agent. Where earlier tools responded to individual prompts — you ask, it generates — agents could be given a goal and work toward it autonomously. The distinction matters: an assistant generates code when you ask for it; an agent plans, executes, tests, and iterates on your behalf.

Claude Code, released by Anthropic, embodied the terminal-native agent approach. Running directly in the command line, Claude Code could navigate a codebase, read files, write code, run tests, handle errors, and commit changes — all from a single high-level instruction. A developer could say "add a user dashboard with recent activity, notification preferences, and account settings" and Claude Code would plan the implementation, create the necessary files, write the code, run the development server to check for errors, and iterate until it worked.

Replit Agent took the concept further for non-developers. Built into Replit's browser-based IDE, Replit Agent could take a project description and build the entire application autonomously — including setting up the development environment, installing dependencies, configuring the database, and deploying to production. The vision was literal: describe what you want, and the agent ships it.

Windsurf's Cascade mode, Cursor's agent mode, and GitHub Copilot's workspace agents all represented variations on the same theme. The AI was not just writing code — it was planning, reasoning about architecture, making decisions, and executing multi-step workflows. The human's role shifted from writing code to supervising an agent's work: reviewing plans, approving changes, and course-correcting when the agent took a wrong turn.

This shift also introduced new risks. Agents could make architectural decisions that seemed reasonable in isolation but created problems at scale. They could introduce security vulnerabilities, delete files that seemed unnecessary, or take expensive wrong turns that consumed API credits. The practical reality of agent-driven development in 2025 was a mix of impressive capability and necessary oversight — the agent could do 80% of the work, but the remaining 20% required human judgment.

The First Wave of Vibe-Coded Successes

By late 2025, the first generation of vibe-coded projects was reaching real revenue milestones. These were not side projects or demos — they were products with paying customers, built entirely or primarily with AI coding tools.

The pattern was consistent. A solo founder or small team would prototype in Lovable or Bolt, validate the idea with early users, graduate the codebase to Cursor for production hardening, deploy on Vercel or Railway, and handle payments through Stripe or Lemon Squeezy. The entire stack — from idea to revenue — could be assembled and built by a single person in weeks rather than months.

Several indie projects built with vibe coding tools crossed the $1,000 MRR mark, then $10,000, and in a handful of cases, $100,000. These were SaaS tools, marketplaces, content platforms, and niche utilities. The builders ranged from designers with no coding background to experienced developers who used AI to move ten times faster than they could have alone.

The economics were striking. A traditional software startup needed $50,000 to $200,000 to reach a working MVP with a small team of developers. A vibe-coded MVP could reach the same functional milestone for $50 to $200 per month in tool subscriptions, with a single person's time as the primary investment. The lower cost floor meant more ideas could be tested, more niches could be explored, and failure was far less expensive.

But the successes came with caveats. Many vibe-coded projects hit scaling walls when complexity grew beyond what AI tools could manage cleanly. Technical debt accumulated faster than in traditionally developed projects. Security reviews revealed gaps that required manual expertise to fix. The projects that succeeded tended to be the ones where the builder learned enough about code to review and guide the AI's output, rather than treating it as a black box.

Where We Are Now — and Where This Is Going

As of early 2026, the vibe coding landscape has settled into a clear structure. There are two primary categories of tools: app builders (Lovable, Bolt.new, v0) for generating applications from scratch, and AI editors/agents (Cursor, Windsurf, Claude Code) for working with existing codebases. Most successful projects start with an app builder and graduate to an editor as complexity grows.

The underlying AI models continue to improve rapidly. Claude, GPT, and Gemini each release new versions with better code generation, longer context windows, and more reliable reasoning. Specialized coding models from DeepSeek, Mistral, and others are narrowing the gap with frontier models for common coding tasks. The quality of AI-generated code in early 2026 is substantially better than what was possible even a year ago.

The Model Context Protocol (MCP), introduced by Anthropic, is becoming a standard for connecting AI tools to external data sources — databases, APIs, documentation, and development tools. MCP allows AI coding tools to access real-time information about your project's infrastructure, making their suggestions more accurate and their actions more reliable.

The open questions for the next phase are significant. How will AI-generated code handle scaling to millions of users? How will security practices evolve to match the speed of AI-assisted development? Will the quality of AI-generated code improve fast enough to handle genuinely complex systems, or will there always be a ceiling where human expertise is required? And what happens to the software development profession when a significant portion of new software is built by people who are not, in the traditional sense, software developers?

These questions do not have answers yet. What we know is this: the tools are good enough today that a determined individual can go from an idea to a working, revenue-generating product using AI as their primary coding tool. That was not true two years ago. It was not even close to true five years ago. Whatever comes next, the shift has already happened. Vibe coding is not a trend or a fad. It is a permanent change in how software gets made.

