
Q&A: What it’s like to have AI coworkers, according to Replit CEO Amjad Masad

AI agents are a much more clear-cut win than chatbots because of how well they imitate human labor.

Replit CEO Amjad Masad | Source: Charter

Some people who work in the tech industry say they already have AI coworkers. These are AI agents they communicate with on Slack that can work autonomously for an extended period of time on tasks including software coding and sales outreach. 

To better understand how that actually works, and how AI agents might be used in other industries, we spoke with Amjad Masad, founder and CEO of Replit, a company that makes AI agents and AI coding tools.

Here are excerpts from the conversation, which took place on the sidelines of Bloomberg’s Going to Work event in Baltimore this past week, edited for length and clarity:

Are AI coworkers here? And should more people expect to have them in their organizations?

Yes. They’re increasingly here, as of maybe January or February, when tech companies began embracing AI agents. 

I would differentiate AI agents from AI copilots. With AI copilots, you have a chatbot that’s sitting there and you’re chatting with it, handing it chunks of work, and it’s a one-shot type of relationship.

Whereas AI agents can work for an extended period of time without monitoring and can call a bunch of tools, can access a lot of different databases and knowledge, can do deep research, and then they determine their halting condition when they feel like they’re done or they couldn’t get the thing done and then come back to you. I would say we only got there about maybe January, February this year. 

There’s a [nonprofit] called METR that put out a paper maybe late last year about how long an AI agent can run unsupervised, and they were making the case that it is doubling every seven months. At the time it was like we were at five minutes, then 10 minutes. But they totally underestimated how fast it was going to go. I would say last year, two or three minutes was the max.

Replit Agent 1 could run for two minutes unsupervised before it went off the rails and the context window filled up and it just couldn’t stay coherent. In February, it was like 20 minutes. Now our AI agent can run three hours doing actual useful work that will often be largely correct. And so it is not doubling. It is 10xing every few months. 

By next year, you’ll be able to give AI agents chunks of work that will take a day or two to get done.

What is an example of the work that AI agents or AI coworkers are doing? People talk about, well, they could book your travel or things like that ...

This [booking travel] turns out to be a hard problem, because consumer problems are actually harder — they are more decentralized. The agent needs to use a lot of different tools that it’s not trained on. 

Software engineering is very, very clear, and for many reasons, software engineering is the one that companies are focused on. It’s clear value. You can create reinforcement learning environments where the agents are learning very, very quickly because you could just give them a virtual machine, give them a goal, a verifiable goal, and they can learn. So we’re making a lot of progress there.

But there are a lot of other things similar to software engineering that are coming down the line. Support tickets is one that is happening very quickly, so support agents are getting deployed, and I think our support team would have been 10x larger in prior eras, given the number of customers that we have.

You’re going to start to see it in sales development representative (SDR) and a lot of go-to-market-type (GTM) roles. It is essentially a deep research agent. It’s qualifying leads, it’s writing emails, doing outreach, scheduling calendar events for the sales team. 

The experience of working in a tech company right now is that you have in your Slack AI agents like Replit, Cursor, whatever. You can message @cursor, “Create this PR,” and it can go work for an hour or two and create a pull request that otherwise you would have given to a junior engineer or an intern. So a lot of people have that experience of being able to be on Slack and talking to an AI agent like they would talk to a human. Software engineering for many reasons is way ahead of the curve, but we’re going to start seeing it in other areas.

What are the implications for human work if you have AI coworkers? In terms of what human workers spend their time on, in terms of the nature and quantity of jobs, and the dynamics within organizations ...

For so long, the entire economy has been bottlenecked by software engineers. We need a lot more software. So it is hard to see AI agents actually creating displacement within engineering.

We need more software engineers to manage more AI agents. There’s still more demand for software engineers. But not every aspect of the economy is like that. Not every job and role is like that. I don’t need infinite support reps. I need enough support reps to answer the customer.

The more AI agents can answer successfully, the fewer support reps we need. Over the next year to 18 months, I would expect support as a role to start really getting affected, and QA [quality assurance] as well.

My optimistic take is that there’s going to be less specialization, perhaps counterintuitively. Starting with the Industrial Revolution, we went into extreme specialization, and that led to the sort of Marxist theory of alienation: I make only one part of the pencil, put it down the factory pipeline, and it goes to someone else. Right now, because people have access to these AI agents, entrepreneurs especially can do the marketing, the sales, and the engineering all by themselves.

You can see these companies that are making millions of dollars where it’s like one or two or three people. Even Replit — when we got to $150 million in annual recurring revenue, we were like 70 people. SaaS companies that were getting to that scale 10 years ago were 700 people. So there’s a factor of 10x right now where companies are potentially 10x smaller.

What that means is I would rather hire very smart generalists that can manage more AI agents. What kind of characteristics am I looking for? I’m looking for someone who’s a clear thinker and a clear communicator. Just being able to break down the ideas and give them to the AI requires clear communication, someone who’s organized and can do more work across the board, someone who understands the business problems. It benefits the generalists, the manager.

The consultant types are actually very high leverage right now because they are fundamentally generalists. It disadvantages the hyper-specialized person in the enterprise.

When you have AI agents doing things like support and basic coding, they’re potentially replacing jobs that people do early in their careers. Do you see that as an issue? And do you see any solutions?

Yes, it is very much an issue. If I am a software engineering manager at Meta, do I hire four junior engineers that I have to manage and they come in with all the overhead people come in with? Or do I hire one senior software engineer that can spin up 10 agents at a time?

It’s very obvious that I’m going to go with the senior engineer. So salaries for senior engineers have never been higher. And you hear anecdotally (I’m not sure we’re seeing this in the data yet) that a lot of new grads are struggling to find a job.

That being said, there are new grads who are very good at using AI. They’ve been using AI for four years now, and we hire some of those people. We hired an 18-year-old, for example, who learned how to code using AI and is now very good at coding with it. So that’s the counterpoint: he didn’t go to computer science school to get classical training in computer science and programming. He learned on his own how to be very, very proficient with AI.

This gives you a sign of where education should be going: more practical, more on-the-job training, more about how to work with AIs. Perhaps counterintuitively, I think the soft skills become more important than the hard skills. I don’t need them to know how assembly language works. I would rather see them be generative: able to produce a lot of ideas and to communicate those ideas clearly.

A lot of companies are stuck on AI adoption, trying to get people to use ChatGPT or Claude or your tools but haven’t gone the next step, which is where fundamental processes, workflows, and ways of working are impacted by AI. How do you jump from adoption to something more fundamental?

The reason companies struggled with AI adoption is that AI wasn’t ready. The crazy thing is a lot of them tried it in 2023 or 2024 and concluded, “These chatbots are cute and useful on the margins.” But you need to try it every three months, because the capabilities jump drastically.

Software engineering agents just became possible this year. If last time you had an AI group and did a study on AI productivity and didn’t see anything, and you concluded, “This is bullshit, this is hype,” you’re going to miss the next transition.

AI agents are a much more clear-cut win than AI chatbots and AI copilots, because their work resembles human labor. You can give them tasks that they can achieve on their own.

This interview was originally published by The Standard’s sister publication, Charter. Read Charter’s research playbook on the new practices of leadership for the age of AI and sign up for its free newsletter.

Kevin Delaney can be reached at [email protected]