Inside the launch — and future — of ChatGPT

As winter descended on San Francisco in late 2022, OpenAI quietly pushed a new service dubbed ChatGPT live with a blog post and a single tweet from CEO Sam Altman. The team labeled it a “low-key research preview” — they had good reason to set expectations low. 

“It couldn’t even do arithmetic,” says Liam Fedus, OpenAI’s head of post-training. It was also prone to hallucinating, or making things up, adds Christina Kim, a researcher on the mid-training team.

Ultimately, ChatGPT would become anything but low-key.

While the OpenAI researchers slept, users in Japan flooded ChatGPT’s servers, crashing the site only hours after launch. That was just the beginning.

“The dashboards at that time were just always red,” recalls Kim. The launch coincided with NeurIPS, the world’s premier AI conference, and soon ChatGPT was the only thing anyone there could talk about. ChatGPT’s error page — “ChatGPT is at capacity right now” — would become a familiar sight.

“We had the initial launch meeting in this small room, and it wasn’t like the world just lit on fire all of a sudden,” Fedus says during a recent interview from OpenAI’s headquarters. “We’re like, ‘Okay, cool. I guess it’s out there now.’ But it was the next day when we realized — oh, wait, this is big.”

“The dashboards at that time were just always red.”

Two years later, ChatGPT still hasn’t cracked advanced arithmetic or become factually reliable. It hasn’t mattered. The chatbot has evolved from a prototype to a $4 billion revenue engine with 300 million weekly active users. It has shaken the foundations of the tech industry, even as OpenAI loses money (and cofounders) hand over fist while competitors like Anthropic threaten its lead.

Whether used as praise or pejorative, “ChatGPT” has become almost synonymous with generative AI. Over a series of recent video calls, I sat down with Fedus, Kim, ChatGPT head of product Nick Turley, and ChatGPT engineering lead Sulman Choudhry to talk about ChatGPT’s origins and where it’s going next.

A “weird” name and a scrappy start

ChatGPT was effectively born in December 2021 with an OpenAI project dubbed WebGPT: an AI tool that could search the internet and write answers. The team took inspiration from WebGPT’s conversational interface and began plugging a similar interface into GPT-3.5, a successor to the GPT-3 text model released in 2020. They gave it the clunky name “Chat with GPT-3.5” until, in what Turley recalls as a split-second decision, they simplified it to ChatGPT. 

The name could have been the even more straightforward “Chat,” and in retrospect, he thinks perhaps it should have been. “The entire world got used to this odd, weird name; we’re probably stuck with it. But obviously, knowing what I know now, I wish we’d picked a slightly easier-to-pronounce name,” he says. (It was recently revealed that OpenAI purchased the domain chat.com for more than $10 million of cash and stock in mid-2023.)

As the team discovered the model’s obvious limitations, they debated whether to narrow its focus by launching a tool for help with meetings, writing, or coding. But OpenAI cofounder John Schulman (who has since left for Anthropic) advocated for keeping the focus broad.

The team describes it as a risky bet at the time: chatbots, they thought, were an unremarkable backwater of machine learning with no successful precedents. Adding to their concerns, Meta’s Galactica AI bot had just spectacularly flamed out and been pulled offline after generating false research.

The team grappled with timing. GPT-4 was already in development with advanced features like Code Interpreter and web browsing, so it would make sense to wait to release ChatGPT atop the more capable model. Kim and Fedus also recall people wanting to wait and launch something more polished, especially after seeing other companies’ undercooked bots fail.

Despite early concerns about chatbots being a dead end, The New York Times has reported that other team members worried competitors would beat OpenAI to market with a fresh wave of bots. The deciding vote came from Schulman, Fedus and Kim say. He pushed for an early release, alongside Altman, both believing it was important to get AI into people’s hands quickly.

OpenAI had demoed a chatbot at Microsoft Build earlier that year and generated virtually no buzz. On top of that, many of ChatGPT’s early users didn’t seem to be actually using it that much. The team shared their prototype with about 50 friends and family members. Turley “personally emailed every single one of them” every day to check in. While Fedus couldn’t recall exact figures, he estimates that about 10 percent of that early test group used it every day.


Later, the team would see this as an indication they’d created something with potential staying power.

“We had two friends who basically were on it from the start of their work day — and they were founders,” Kim recalls. “They were on it basically for 12 to 16 hours a day, just talking to it all day.” With just two weeks before the end of November, Schulman made the final call: OpenAI would launch ChatGPT on the last day of that month.

The team canceled their Thanksgiving plans and began a two-week sprint to public release. Much of the system had been built by that point, Kim says, but its security vulnerabilities were untested. So the team focused heavily on red teaming: stress-testing the system for potential safety problems.

“If I had known it was going to be a big deal, I would certainly not want to ship it right before a winter holiday week before we were all going to go home,” Turley says. “I remember working very hard, but I also remember thinking, ‘Okay, let’s get this thing out, and then we’ll come back after the holiday to look at the learnings, to see what people want out of an AI assistant.’”

In an internal Slack poll, OpenAI employees guessed how many users they would get. Most predictions ranged from a mere 10,000 to 50,000. When someone suggested it might reach a million users, others jumped in to say that was wildly optimistic.

On launch day, they realized they’d all been incredibly wrong.

After users in Japan crashed the servers, and red dashboards and error messages abounded, the team anxiously picked up the pieces while refreshing Twitter to gauge public reaction, Kim says. They believed the reaction to ChatGPT could only go one of two ways: total indifference or active contempt. They worried people might discover problematic ways to use it (like attempting to jailbreak it), and the uncertainty of how the public would receive their creation kept them in a state of nervous anticipation.

The launch was met with mixed emotions. ChatGPT quickly started facing criticism over accuracy issues and bias. Many schools moved to ban it immediately over cheating concerns. Some users on Reddit likened it to the early days of Google (and were shocked it was free). For its part, Google dubbed the chatbot a “code red” threat.

OpenAI would wind up surpassing the poll’s most optimistic prediction of 1 million users within five days of launch. Two months after its debut, ChatGPT had garnered more than 30 million users.

When someone suggested it might reach a million users, others jumped in to say that was wildly optimistic.

Within weeks of ChatGPT’s November 30th launch, the team started rolling out updates incorporating user feedback (like its tendency to give overly verbose answers). The initial chaos had settled, user numbers were still climbing, and the team had a sobering realization: if they wanted to keep this momentum, things would have to change. The small group that launched a “low-key research preview” — a term that would become a running joke at OpenAI — would need to get a lot bigger.

Over the coming months and years, ChatGPT’s team would grow enormously and shift priorities — sometimes to the chagrin of many early staffers. Top researcher Jan Leike, who played a crucial role in refining ChatGPT’s conversational abilities and ensuring its outputs aligned with user expectations, quit this year to join Anthropic after claiming that “safety culture and processes have taken a backseat to shiny products” at OpenAI.

These days, OpenAI is focused on figuring out what the future of ChatGPT looks like.

“I’d be very surprised if a year from now this thing still looks like a chatbot,” Turley says, adding that current chat-based interactions would soon feel as outdated as ’90s instant messaging. “We’ve gotten pretty sidetracked by just making the chatbot great, but really, it’s not what we meant to build. We meant to build something much more useful than that.”

Increasingly powerful and expensive 

I talk with Turley over a video call as he sits in a vast conference room in OpenAI’s San Francisco headquarters that epitomizes the company’s transformation. The office is all sweeping curves and polished minimalism, a far cry from the company’s original office, which was often described as a drab historic warehouse.

With roughly 2,000 employees, OpenAI has evolved from a scrappy research lab into a $150 billion tech powerhouse. The team is spread across numerous projects, including building underlying foundation models and developing non-text tools like Sora, its video generator. ChatGPT is still OpenAI’s highest-profile product by far, and its popularity has come with a lot of headaches.

“I’d be very surprised if a year from now this thing still looks like a chatbot”

ChatGPT still spins elaborate lies with unwavering confidence, but now they’re being cited in court filings and political discourse. It has allowed for an impressive amount of experimentation and creativity, but some of its most distinctive use cases turned out to be spam, scams, and AI-written college term papers.

While some publications (including The Verge’s parent company, Vox Media) are choosing to partner with OpenAI, others, like The New York Times, are opting to sue it for copyright infringement. And OpenAI is burning through cash at a staggering rate to keep the lights on.

Turley acknowledges that ChatGPT’s hallucinations are still a problem. “Our early adopters were very comfortable with the limitations of ChatGPT,” he says. “It’s okay that you’re going to double check what it said. You’re going to know how to prompt around it. But the vast majority of the world, they’re not engineers, and they shouldn’t have to be. They should just use this thing and rely on it like any other tool, and we’re not there yet.”

Accuracy is one of the ChatGPT team’s three focus areas for 2025. The others are speed and presentation (i.e., aesthetics).

“I think we have a long way to go in making ChatGPT more accurate and better at citing its sources and iterating on the quality of this product,” Turley says.

OpenAI is also still figuring out how to monetize ChatGPT. Despite deploying increasingly powerful and costly AI models, the company has maintained a limited free tier and a $20 monthly ChatGPT Plus service since February 2023.

When I ask Turley about rumors of a future $2,000 subscription, or if advertising will be baked into ChatGPT, he says there is “no current plan to raise prices.” As for ads: “We don’t care about how much time you spend on ChatGPT.” 

“They should just use this thing and rely on it like any other tool, and we’re not there yet.”

“I’m really proud of the fact that we have incentives that are incredibly aligned with our users,” he says. Those who “use our product a lot pay us money, which is a very, very upfront and direct transaction. I’m proud of that. Maybe we’ll have a technology that’s much more expensive to serve and we’re going to have to rethink that model. You gotta remain humble about where the technology is going to go.”

Only days after Turley told me this, ChatGPT did get a new $200 monthly price tag for a pro tier that includes access to a specialized reasoning model. Its main $20 Plus tier is sticking around, but it’s clearly not the ceiling for what OpenAI thinks people will pay.

ChatGPT and other OpenAI services require vast amounts of computing power and data storage to run smoothly. On top of the user base OpenAI has gained through its own products, it’s poised to reach millions more people through an Apple partnership that integrates ChatGPT with iOS and macOS.

That’s a lot of infrastructure pressure for a relatively young tech company, says engineering lead Sulman Choudhry. “Just keeping it up and running is a very, very big feat,” he says. People love features like ChatGPT’s advanced voice mode, but scaling limitations mean there’s often a significant gap between the technology’s capabilities and what people can experience. “There’s a very, very big delta there, and that delta is sort of how you scale the technology and how you scale infrastructure.”

Even as OpenAI grapples with these problems, it’s trying to work itself deeper into users’ lives. The company is racing to build agents, or AI tools that can perform complex, multistep tasks autonomously. In the AI world, these are called tasks with a longer “time horizon,” requiring the AI to maintain coherence over a longer period while handling multiple steps. For instance, at the company’s DevDay conference earlier this year, OpenAI showcased AI agents that could make phone calls to place food orders and book hotel reservations in multiple languages.

For Turley and others, this is where the stakes will get particularly steep. Agents could make AI far more useful by moving what it can do outside the chatbot interface. The shift could also grant these tools an alarming level of access to the rest of your digital life.

“I’m really excited to see where things go in a more agentic direction with AI,” Kim tells me. “Right now, you go to the model with your question, but I’m excited to see the model more integrated into your life and doing things proactively, and taking actions on your behalf.”

The goal of ChatGPT isn’t to be just a chatbot, says Fedus. As it exists today, ChatGPT is “pretty constrained” by its interface and compute. He says the goal is to create an entity that you can talk to, call, and trust to work for you. Fedus thinks systems like OpenAI’s “reasoning” line of models, which create a trail of checkable steps explaining their logic, could make it more reliable for these kinds of tasks.

Turley says that, contrary to some reports, “I don’t think there’s going to be such a thing as an OpenAI agent.” What you will see is “increasingly agentic functionality inside of ChatGPT,” though. “Our focus is going to be to release this stuff as gradually as possible. The last thing I want is a big bang release where this stuff can suddenly go out and do things over hours of time with all your stuff.”

“The last thing I want is a big bang release”

By ChatGPT’s third anniversary next year, OpenAI will probably look a lot different than it does today. The company will likely raise billions more dollars in 2025, release its next big “Orion” model, face growing competition, and have to navigate the complexity of a new US president and his AI czar.

Turley hopes 2024’s version of ChatGPT will soon feel as quaint as AOL Instant Messenger. A year from now, we’ll probably laugh at how basic it was, he says. “Remember when all we could do was ask it questions?”