AI Is Making Us Faster Than We Can Think

Our Perspectives highlights the voices of leadership and experts within the Redhorse Corporation on trends, topics and insights that are of interest to our customers and partners. The views and opinions expressed here are unique to each author and not to the company as a whole.
To read more and catch new insights regularly from our VP of Engineering, Matt Pikar, click here to connect with him on LinkedIn.
AI didn’t sneak into knowledge work by replacing people. It did something more subtle — and more dangerous for human capital.
It compressed time.
Now, somewhere between “just brainstorming” and “we shipped it,” an entire layer of thinking quietly disappeared.
Decisions that used to take hours now take minutes. Workflows that used to require deliberate thought now feel frictionless. Outputs arrive faster than we can explain them. And because the outputs look reasonable, we keep going.
The work didn’t get easier. It just got faster than we could explain. That’s the part worth paying attention to.
Recently, Anthropic published an internal study of its own engineers showing how AI tools are reshaping workflows: boosting productivity while also surfacing concerns about skill erosion and supervision capability as engineers rely more on AI for execution [1].
That’s not fear of job loss.
That’s fear of skill loss.
That distinction matters — especially for organizations that operate under speed, ambiguity, and consequence.
Judgment isn’t where most people think it is
In knowledge work, judgment doesn’t live in the final answer.
It’s built upstream, in the unglamorous parts:
- deciding what problem is worth solving
- forming and discarding hypotheses
- understanding tradeoffs
- explaining why an answer is acceptable, not just that it exists
AI is increasingly good at producing plausible answers quickly. And so we’ve started using it to replace exactly those intermediate steps — the ones that feel slow, fuzzy, and hard to measure.
Humans are still “in the loop.” They’re just in the loop at the part where nodding is expected.
Accountability remains, but causal contact weakens. That’s not a tooling issue. It’s a system design choice.
Software engineering is already feeling this
You don’t need a study to see it if you’re paying attention. Talk to senior engineers and you’ll hear a pattern repeat. Yes, code ships faster and surface quality looks fine. But:
- system intuition is thinner
- subtly wrong design patterns creep in
- explanations get weaker
- refactoring increases
“I don’t know why this works, but it does” is becoming an uncomfortably common sentence, echoed across reporting and discussion on AI-assisted coding [2].
Reviews validate outputs that the reviewer didn’t generate and couldn’t reconstruct from first principles. Oversight drifts away from authorship, and understanding is replaced with acceptance. This isn’t because engineers are lazy or incapable. It’s because the workflow no longer forces them to think in the way that builds judgment.
Speed masks the loss — until something breaks.
What this looks like in practice inside engineering teams
In practice, this shift shows up in a few consistent ways inside modern engineering organizations using LLM-assisted workflows.
To read the rest of this blog, please visit Matt Pikar’s Substack.