For agents, there really are dumb questions

Agents can RTFM. Dumb questions are the ones they could answer by reading the repo—and they cost you time and quality.

You’ve heard it a thousand times: “There are no dumb questions.” That’s not bad advice, at least for humans.

We ask questions to learn, to align, to reduce risk. If you’re a junior engineer trying to understand a codebase, asking the “obvious” question can be the fastest path to competence. If you’re a senior engineer reviewing something thorny, asking a clarifying question can save days of rework.

But agents aren’t humans. Agents have a different superpower: they actually RTFM. Quickly.

Agents can search the repository, scan previous PRs, inspect docs, and look at tests and CI logs. They can follow imports and call sites, and pull up the exact file, line, and commit where behavior is defined.

So for agents, some questions truly are dumb, and they point to a failure mode that wastes your time.

What are “dumb questions” for agents?

For an agent, a “dumb” question is one whose answer is already available inside the working set of the task—if the agent would just do the work to find it.

Questions like:

  • “Can I run ls inside this directory?”
  • “Where is the code for this?”
  • “Can you paste the error?”
  • “What does this function do?”
  • “What version are we on?”
  • “What’s the build command?”
  • “Do we already have a pattern for this?”
  • “What does the repo prefer here?”

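Every one of those questions has a mechanical answer. As a rough sketch, here is how an agent might answer a few of them itself in a typical Node repo; the createInvoice symbol and the paths are made up for illustration:

```ts
// Sketch: answering "dumb questions" by reading the repo instead of asking.
// Assumes a Unix-y environment and a Node repo with a package.json;
// "createInvoice" is a made-up symbol used for illustration.
import { execSync } from "node:child_process";
import { readFileSync } from "node:fs";

const sh = (cmd: string): string =>
  execSync(cmd, { encoding: "utf8" }).trim();

// "Where is the code for this?" -> search the tree, don't ask.
const callSites = sh(`grep -rn "createInvoice" src || true`);

// "What version are we on?" / "What's the build command?" -> read package.json.
const pkg = JSON.parse(readFileSync("package.json", "utf8"));

// "Do we already have a pattern for this?" -> scan history, don't ask.
const priorArt = sh(`git log --oneline -S "createInvoice" || true`);

console.log({ callSites, version: pkg.version, build: pkg.scripts?.build, priorArt });
```
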
These questions often come dressed up as “clarification,” when what they really mean is that the coding assistant expects you to hand-feed it context before it can do the work. Early-generation coding assistants trained a lot of teams to accept this. People started writing mini-essays in prompts. They started anticipating follow-up questions. They started pre-loading context just in case.

That workflow is backwards. We believe agents should do their own research, gather context, and make choices in service of the task.

The efficiency problem: a Y/N question can cost you a day

How many times have you come back to a coding assistant and realized it had been waiting on your answer to some silly question for the last few hours? These small, seemingly innocuous questions can cost you days.

Agents are asynchronous. Humans go to meetings. People step away. Notifications get buried. Time zones exist. The reviewer you need might be offline. The problem isn’t how simple or complex the question is; it’s how much forward progress gets wasted waiting for a response. If an agent is going to be part of your engineering team, it needs to reduce coordination overhead, not create more.

The quality problem: dumb questions create dumb prompts

Another challenge with these dumb questions is that they force humans to over-specify. Once a team learns that the agent will ask them a bunch of basic questions, they start trying to answer those questions in advance. They cram the prompt with:

  • directory structure explanations
  • commands to run
  • conventions for naming
  • historical context
  • design constraints
  • copy/paste chunks of code

This might feel useful, but it’s not. Over-specification has predictable failure modes:

  • You accidentally omit details the agent needed
  • You include stale details and steer the agent into outdated patterns
  • You bias the agent toward the prompt, away from the codebase
  • You end up doing “work about work” instead of shipping

When you lob these huge prompts over the wall at an agent, it stops being a tool that reads and reasons about your actual system and becomes a tool that guesses based on hand-fed context. Giant prompts that preempt dumb questions just turn the agent into a prompt-following robot, with you driving from the passenger seat and taking the slow road to your destination.

We believe agents should be able to answer their own questions

We believe that if the answer is in the repo, the agent should find it. If the answer is in your tools, the agent should fetch it. If the answer is in previous work products (PRs, tickets, comments), the agent should read them.

Agents should only need to ask you a question when the answer isn’t available anywhere and it’s essential to delivering satisfactory work. Good questions are about decision making, not information retrieval.
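
That rule is simple enough to write down. Here’s a sketch of its shape; both inputs are judgment calls the agent has to make for itself:

```ts
// Sketch of the ask/don't-ask rule above. Both inputs are judgment calls
// the agent makes; this just encodes the shape of the decision.
function shouldAskHuman(opts: {
  answerFindable: boolean; // is it in the repo, the tools, or prior work products?
  essentialToTask: boolean; // can the work be done well without it?
}): boolean {
  // Interrupt a human only when the answer can't be found anywhere
  // AND the task can't be completed satisfactorily without it.
  return !opts.answerFindable && opts.essentialToTask;
}

// "What's the build command?" -> findable in the repo, so never ask.
console.log(shouldAskHuman({ answerFindable: true, essentialToTask: true })); // false
```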

The job of an agent is not to outsource reading to a human babysitter; it’s to minimize human interruption while maximizing correctness.

What this looks like in practice

One way great agents like Charlie differ from coding assistants is that they RTFM and gather the ground truth on their own.

When Charlie is triggered by work (a GitHub PR, a Slack/Linear mention, etc.), he doesn’t start by asking for a prompt-shaped info dump. He starts by gathering context:

  • reading the relevant code and history
  • loading repo-specific instructions and constraints
  • pulling linked context from connected tools (when relevant)
  • starting an ephemeral dev environment to actually run commands and validate behavior
  • making a concrete plan and executing it
  • posting results back where the work is happening (PR, issue, thread)
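
In code terms, that loop looks something like the sketch below. This is not Charlie’s actual implementation, just the shape of the pattern: gather ground truth with real commands first, then plan, execute, and report. Every command and file name here is illustrative:

```ts
// Sketch of the "gather context before asking" loop -- not Charlie's actual
// implementation. Commands and file names are illustrative.
import { execSync } from "node:child_process";

const sh = (cmd: string): string =>
  execSync(cmd, { encoding: "utf8" }).trim();

function gatherContext(task: string) {
  return {
    task,
    // Read the relevant code and history instead of asking for a paste.
    recentHistory: sh("git log --oneline -n 20"),
    // Load repo-specific instructions and constraints, if present.
    instructions: sh("cat AGENTS.md CONTRIBUTING.md 2>/dev/null || true"),
    // Actually run the checks to validate behavior; don't ask what they print.
    checks: sh("npm test --silent 2>&1 || true"),
  };
}

const context = gatherContext("Fix the flaky invoice rounding test");
// With ground truth in hand: make a concrete plan, execute it, and post
// results back where the work is happening (PR, issue, thread).
console.log(context.recentHistory.split("\n").length, "commits of context loaded");
```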

This is why “dumb questions” are such a big deal: avoiding them isn’t just a UX improvement. It changes the whole shape of the collaboration.

You deserve an agent that becomes a trusted teammate that you can assign real work to—not a slot machine you keep having to feed context to.