“Automating Tasks” - What AI news misses about how architects actually work
(Originally shared on LinkedIn, March 2026).
Anthropic’s graph implies that over 80% of architecture tasks could be automated.
We take a closer look at what Anthropic's AI labour market graph actually reveals about architecture, and at the key considerations it can't measure.
Just because AI could theoretically "do most of an architect's job" doesn't mean it should. The biggest risk for architecture is sitting around and waiting to see what happens.
Everyone seems to be sharing Anthropic’s AI and labour markets graph. But what does it really show? The blue area shows where an LLM could theoretically speed up a task; the red shows what's actually happening. For architecture & engineering, theoretical coverage is high but real-world usage is limited.
There’s a lot hiding in that gap for those in the know. This isn't just about adoption speed; it's about what the measure can't see.
I've been writing about AI's impact on architecture since 2016, and the core variable hasn't changed: it's the professionals who speak up to define the standard of their industry who will shape it moving forward.
If the profession stays quiet, clients, regulators, and the public will take graphs like this at face value and assume AI can do more than is sensible. It's not about what AI could theoretically do; it's about getting ahead of the messaging before the outcome is decided for you.
An architect's career is one of extensive learning to hold competing realities in tension. You develop instincts for where risk sits in a project, not from lists, but from lived experience of decisions playing out across dozens of projects, usually hidden in the way people talk, or the subtleties of how they work.
I've sat in meetings where demands were unactionable but couldn't be dismissed; it’s not uncommon for what someone describes to be very different from what they want. I've also known instinctively that a suggested path was wrong, even when the general consensus said otherwise. The hardest part was articulating why before things got cemented and the project moved on.
Now imagine an AI tool completing tasks around those moments. The path everyone agreed on gets accelerated, gaining documentation and momentum before anyone pauses to question it. The tasks were “completed”, and yet the project is in more danger than if nothing had been automated at all.
This is what the gap really means. Filling it carelessly isn't progress; it's risk. AI doesn’t assume your professional indemnity (PI) or Principal Designer liability, but it’s happy to “do the work” for you. It doesn't behave like liability matters, and it doesn't care.
Architects shouldn’t avoid AI; quite the opposite. But the tools architects use need to be designed for how architects actually work, not for how a capability benchmark says they could.
We built vBrief with this in mind. We use AI to extract, organise, and track project requirements from your documents and communications, but we designed it so that the professional judgement comes from you. Your brief, tasks, risks, and change logs sit aligned in one traceable platform: less admin, without sacrificing agency.
Practicing architects should be leading this conversation, not waiting to see where it lands.
If you want to learn more about how you can responsibly use AI to manage critical project information, you can contact us here.