
How to Read Technical Docs in the AI Era: Distilled Reading

The Unfinished Docs Problem

We’re often told to read more technical documentation. But here’s the reality I keep running into: some docs are so long that an extended reading session leads to:

  1. Drowsiness and declining efficiency
  2. Getting interrupted by something else midway
  3. Getting pulled into another article by an interesting reference — and that article is just as long. I’ve done the math: reading all of them is simply not feasible; the time cost is too high.

The result? My browser perpetually has dozens of half-read technical article tabs open, and I can’t bring myself to close them. Sometimes I save them to bookmarks. But that doesn’t solve the root problem — it just makes the bookmarks folder grow and grow. The bookmarks folder becomes one enormous todo list.

The Reading Bottleneck

This pile of accumulated technical docs weighs on me. It makes me feel guilty, like a form of technical debt in my mind. I keep telling myself I’m not trying hard enough — that’s why I haven’t finished reading them.

But today I realized: finishing all those technical docs is simply impossible. Because:

  1. Their length means reading them will inevitably consume far more time than I can reasonably afford.
  2. Docs spawn more docs. This process never stops.

Just as technical systems have bottlenecks, I call this the reading bottleneck. It’s a problem that needs a structural solution, not just more brute-force effort.

The essence of the bottleneck is that both time and attention are finite resources. Time, as a unit, is too abstract — it doesn’t capture the variability in how fast we actually read. So I prefer to use attention as the unit of measure. Think of human attention as a kind of token: your Attention Token, or AT. Strong focus generates an A.T. Field (LOL). Once it’s depleted, sleep is the only way to recharge.

This reading bottleneck is fundamentally:

Attention required to read all docs > Attention you actually have

Distilled Reading

Everything Can Be Distilled

Here’s something I realized: even though I’ve only skimmed most of those technical docs, I’ve still been doing solid technical work. Some docs were even obsolete before I got around to reading them. Most of the information in those docs doesn’t need to be memorized — I just need to know it exists, like a dictionary I can look things up in.

This even applies to docs that are already summaries or digests. They can be distilled further. A distilled piece of text can be distilled again — down to a single sentence, if needed. The distillation process loses information, but that loss is acceptable and expected.

LLM-Assisted Distillation

In the age of AI-assisted programming, a developer’s most valuable resource is attention.

When my attention starts to fade, I can use an LLM to summarize the rest of an article. After the LLM reads and condenses it, I review the outline first, then ask follow-up questions on the parts I care about.

Taking it further: why not have the LLM produce a short outline from the very beginning? I read the outline, then decide: do I continue reading, or do I move on? If I continue, I can choose between reading every line carefully or zooming in on the sections that matter most to me.
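The outline-first loop above can be sketched in a few lines of Python. This is a minimal sketch, not a real implementation: `call_llm` is a placeholder for whichever LLM client you actually use, and the prompt wording is my own assumption about how you might phrase the request.

```python
# Sketch of "distilled reading": ask for an outline first, read it,
# then drill into only the sections you care about.
# NOTE: call_llm is a stand-in for your LLM client of choice;
# its name and interface here are assumptions, not a real API.

DISTILL_PROMPT = (
    "Summarize the following article as a short outline "
    "({max_bullets} bullets max). Keep section names intact.\n\n{article}"
)

def build_distill_prompt(article: str, max_bullets: int = 7) -> str:
    """Build the request for a first-pass outline of the article."""
    return DISTILL_PROMPT.format(max_bullets=max_bullets, article=article)

def distilled_read(article: str, call_llm, interesting: list[str]) -> dict:
    """Outline first; then follow-up questions only on chosen topics."""
    outline = call_llm(build_distill_prompt(article))
    followups = {
        topic: call_llm(
            f"From the article below, expand on '{topic}'.\n\n{article}"
        )
        for topic in interesting
    }
    return {"outline": outline, "followups": followups}
```

Because `call_llm` is injected as a parameter, you can swap in any provider's client, and the decision to continue reading stays with you: the outline costs a little attention, and the follow-ups are spent only where you choose.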

Don’t Wander Off

When you come across an interesting concept while reading, resist the urge to Google it. Here’s what that behavior chain looks like:

Search keyword → See search results page → Click a result → Read that page

Each entry on a Google results page looks roughly like this:

Logo and site name
A one-line page description (not necessarily the title)
2-3 lines of preview text

You burn attention scanning each result. Then you open a new, visually busy page and burn more attention locating the thing you actually wanted. Other elements on that page may also consume your attention.

So: when you encounter an interesting concept, don’t immediately Google it.

  1. Ask the AI directly — let it find what you’re looking for
  2. Ask it to include links as supporting evidence
  3. Click the links to verify. If any are broken, tell the AI to check its links itself before responding and filter out the dead ones
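Step 3 above is easy to automate. Here is a minimal sketch of a dead-link filter; treating "alive" as "a HEAD request returns a status below 400 within 5 seconds" is my assumption about what counts as broken, and the `check` parameter is injectable so you can swap in a different probe.

```python
import urllib.request
import urllib.error

def is_alive(url: str, timeout: float = 5.0) -> bool:
    """Probe a URL with a HEAD request; False on any network/HTTP error."""
    try:
        req = urllib.request.Request(url, method="HEAD")
        with urllib.request.urlopen(req, timeout=timeout) as resp:
            return resp.status < 400
    except (urllib.error.URLError, ValueError):
        return False

def filter_live_links(links: list[str], check=is_alive) -> list[str]:
    """Keep only the links that pass the check."""
    return [url for url in links if check(url)]
```

In practice you would run the AI's cited links through `filter_live_links` before spending any attention on them; the injectable `check` also makes the function testable without touching the network.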

This is what I call Distilled Reading.

Verification

We need a way to verify this method — otherwise it can’t be falsified, and anything that can’t be falsified is pseudoscience. If this is just me talking nonsense, that should be provable. If the method doesn’t work, it’s wasting your time.

Verification dimensions:

  • The number of open technical doc tabs in your browser should decrease
  • The number of unread articles in your bookmarks should decrease, or be archived
  • At the end of the day, you should feel that you actually accomplished your planned reading — and that the guilt has eased (admittedly subjective, and a bit pseudoscientific, but that’s fine)

Use these same criteria to evaluate whether this method works for you — and decide whether to trust me and change your reading habits, or conclude that this is all bullshit. Either way, you’ve taken an important step: you’ve personally tested how AI can change how you live. That’s valuable, regardless of the outcome.