We've all seen it. Someone builds an elaborate developer workflow with 17 commands, documents it in a README that's 840 words long, and sprinkles in 383 emojis for good measure.
Cognitive load: you vs. an LLM
Here's the thing about those 17 commands and that 840-word README:
* You can hold ~7 novel commands in working memory. You have 17.
* 840 words is ~1,100 tokens. A typical context window is 128K+ tokens.
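The arithmetic behind that comparison, as a quick sketch (the ~1.33 tokens-per-word ratio is a common rule of thumb for English text, not a figure from this post):

```python
# Back-of-envelope: how much of a context window does an 840-word README use?
words = 840
tokens_per_word = 1.33        # rough rule of thumb for English prose
tokens = round(words * tokens_per_word)   # ~1,100 tokens, matching the claim above
context_window = 128_000      # the "128K+" window from the comparison

share = tokens / context_window
print(f"{tokens} tokens is about {share:.2%} of a {context_window:,}-token window")
```

Exact counts depend on the tokenizer, but the point stands at any reasonable ratio: the whole README is a rounding error for the model, and an overload for you.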
But here's the catch
Your workflow documentation was last updated last week. How much of it is still accurate?
The format matters: scripts vs. docs
There's a fundamental kind of complexity that engineers build and then don't maintain. It can be expressed as nested bash scripts or as nested markdown files. One of those formats plays to the strengths of LLMs. The other works against them.
* Markdown can't "fail" in the sense that it won't crash your terminal. But outdated docs mislead, which is its own kind of failure.
The workflow spectrum
Every agentic workflow sits somewhere on a spectrum between bespoke scripts and plain documentation. The best ones are invisible — not because they're simple, but because they don't require you to become a student of someone else's bespoke system.
The paradox
The best agentic workflow should feel invisible. Instead, we've somehow circled back to making simple tasks feel like piloting a spaceship.
The irony: you built an "agentic workflow" that requires a human to memorize commands, read a wall of documentation, and debug bespoke bash scripts. That's not an agentic workflow. That's a homework assignment.
The alternative
Use standard tools with standard commands. Put your project-specific knowledge in markdown files — not because humans will memorize them, but because LLMs will read them every single time, perfectly, without complaining. The complexity doesn't disappear. It just moves to a place where your tools can actually handle it.
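As a sketch of what that looks like in practice (the file name and contents below are a hypothetical illustration, not from this post): project knowledge lives in an ordinary markdown file that an agent reads in full on every run.

```markdown
<!-- CLAUDE.md / AGENTS.md / README -- hypothetical example -->
## Running the project
- Start everything: `docker compose up` (standard tool, standard command).
- Tests: `npm test`. Lint: `npm run lint`. Nothing to memorize.

## Project-specific gotchas
- The dev database seeds itself on first boot; don't run migrations by hand.
- CI requires lint to pass before tests run.
```

A human skims this once and forgets it; an LLM re-reads it on every task. That asymmetry is the whole argument for docs over scripts.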