Charlie 2025 - A Recap and What’s Next
We’re really proud of how far Charlie has come in 2025, and we wanted to share a brief reflection on the state of agentic software development and where Charlie fits into it.
Agents can RTFM. Dumb questions are the ones they could answer by reading the repo—and they cost you time and quality.
You’ve heard it a thousand times: “There are no dumb questions.” That’s not bad advice, at least for humans.
We ask questions to learn, to align, to reduce risk. If you’re a junior engineer trying to understand a codebase, asking the “obvious” question can be the fastest path to competence. If you’re a senior engineer reviewing something thorny, asking a clarifying question can save days of rework.
But agents aren’t humans. They have a different superpower: they actually RTFM. Quickly.
Agents can search the repository, scan previous PRs, inspect docs, look at tests and CI logs. They can follow imports and call sites, pull up the exact file, line, and commit where behavior is defined.
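As a concrete sketch of what that research pass can look like, here’s a minimal example. The helper and function names are ours, not any particular agent’s internals, but the underlying tools (`git grep`, `git log -S`, and the GitHub CLI’s `gh pr list`) are real:

```typescript
import { execSync } from "node:child_process";

// Run a shell command in the repo and return its output
// (empty string when there are no matches).
function run(cmd: string): string {
  try {
    return execSync(cmd, { encoding: "utf8" }).trim();
  } catch {
    return ""; // e.g. git grep exits non-zero on zero matches
  }
}

// Gather ground truth about a symbol before ever asking a human.
function research(symbol: string) {
  return {
    // Where is the behavior defined and used?
    definitions: run(`git grep -n "${symbol}" -- "*.ts"`),
    // Which commits introduced or changed it?
    history: run(`git log -S "${symbol}" --oneline -n 5`),
    // Has it come up in recent PRs? (requires the GitHub CLI)
    pullRequests: run(`gh pr list --search "${symbol}" --state all --limit 5`),
  };
}

console.log(research("parseConfig"));
```

Every one of those lookups is cheaper and faster than a round trip to a human.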
So for agents, some questions truly are dumb, and they point to a failure mode that wastes your time.
For an agent, a “dumb” question is one whose answer is already available inside the working set of the task—if the agent would just do the work to find it.
Questions like:
- “Which test framework does this repo use?”
- “Where is the retry logic implemented?”
- “What does CI run on pull requests?”
- “Has this module changed recently, and why?”
These questions often come dressed up as “clarification,” when what they really mean is that the coding assistant expects you to hand-feed it context before it can do the work. Early-generation coding assistants trained a lot of teams to accept this. People started writing mini-essays in prompts. They started anticipating follow-up questions. They started pre-loading context just in case.
That workflow is backwards. We believe agents should do their own research, gather context, and make choices in service of the task.
How many times have you come back to a coding assistant and realized it had been waiting for you to answer some silly question for the last few hours? These small, seemingly innocuous questions can cost you days.
Agents are asynchronous. Humans go to meetings. People step away. Notifications get buried. Time zones exist. The reviewer you need might be offline. The problem isn’t how simple or complex the question is; it’s how much forward progress gets wasted waiting for a response. If an agent is going to be part of your engineering team, it needs to reduce coordination overhead, not create more of it.
Another challenge with these dumb questions is that they force humans to over-specify. Once a team learns that the agent will ask them a bunch of basic questions, they start trying to answer those questions in advance. They cram the prompt with:
- file paths and directory layouts
- style guides and naming conventions
- links to internal docs and past decisions
- warnings about edge cases the agent “might miss”
This might feel useful, but it’s not. Over-specification has predictable failure modes:
- The hand-fed context goes stale the moment the code changes.
- It can contradict the actual system, and the agent has no way to know which source is true.
- The human becomes the bottleneck: no work starts until someone writes the essay.
When you lob these huge prompts over the wall at an agent, it stops being a tool that reads and reasons about your actual system and becomes a tool that guesses based on hand-fed context. Giant prompts that preempt dumb questions just turn the agent into a prompt-following robot with you driving from the passenger seat, taking the slow road to your destination.
We believe that if the answer is in the repo, the agent should find it. If the answer is in your tools, the agent should fetch it. If the answer is in previous work products (PRs, tickets, comments), the agent should read them.
Agents should only need to ask you a question when the answer isn’t available anywhere and is essential to delivering satisfactory work. Agents are asking good questions when those questions drive decisions, not information retrieval.
The job of an agent is not to outsource reading to a human babysitter; the job is to minimize human interruption while maximizing correctness.
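Expressed as a policy, that rule looks something like the sketch below. The type and function names are illustrative, not Charlie’s actual internals; the point is the ordering, with the human as the last resort:

```typescript
// A source that may or may not be able to answer a question.
type Source = (question: string) => Promise<string | null>;

async function answer(
  question: string,
  sources: Source[], // e.g. repo search, tool APIs, past PRs/tickets, in order
  askHuman: Source,  // the expensive, last-resort channel
): Promise<string> {
  for (const source of sources) {
    const found = await source(question);
    if (found !== null) return found; // the answer was already in the working set
  }
  // Only questions that survive every source are worth a human's time,
  // and only when they actually gate a decision.
  const decision = await askHuman(question);
  if (decision === null) throw new Error(`Unanswerable: ${question}`);
  return decision;
}
```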
One way great agents like Charlie are different from coding assistants is that they RTFM and gather the ground truth on their own.
When Charlie is triggered by work (a GitHub PR, a Slack/Linear mention, etc.), he doesn’t start by asking for a prompt-shaped info dump. He starts by gathering context:
- reading the PR, thread, or ticket that triggered the work
- searching the repository for the relevant code, tests, and docs
- reviewing previous PRs, comments, and CI logs that touch the same area
- following imports and call sites to where the behavior is actually defined
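In rough pseudocode, that trigger-first flow might look like this. Every name here is a hypothetical stand-in, not Charlie’s actual internals:

```typescript
// Hypothetical event shape: a PR opened, a Slack or Linear mention, etc.
interface TriggerEvent {
  kind: "pr" | "slack" | "linear";
  reference: string; // placeholder: PR number, message permalink, or ticket ID
}

// Stubbed steps; real implementations would hit GitHub, CI, docs, etc.
const readArtifact = async (e: TriggerEvent) => `task from ${e.kind}:${e.reference}`;
const gatherContext = async (task: string) => [`repo findings for "${task}"`];
const executePlan = async (task: string, ctx: string[]) =>
  `plan for "${task}" grounded in ${ctx.length} findings`;

// The flow: read the trigger, research first, then act.
// Note what's missing: no step that asks the user for a context dump.
async function handleTrigger(event: TriggerEvent): Promise<string> {
  const task = await readArtifact(event);
  const context = await gatherContext(task);
  return executePlan(task, context);
}

handleTrigger({ kind: "pr", reference: "1234" }).then(console.log);
```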
This is why “dumb questions” are such a big deal: avoiding them isn’t just a UX improvement. It changes the whole shape of the collaboration.
You deserve an agent that becomes a trusted teammate that you can assign real work to—not a slot machine you keep having to feed context to.