Four symptoms of ownership falling prey to AI
Four ways ownership leaks when AI agents provide the first take.
Ignoring the system level through “happy vibecoding”
AI agents are excellent at solving isolated function-level problems but notoriously bad at maintaining system-wide consistency or handling non-standard constraints. Subject matter experts need to be aware of that.
When auditing a project for a client, I took a hard look at a solution that had been almost entirely vibecoded. It did solve the problem at hand, but it suffered from poor maintainability, had no documentation, and, worst of all, its logic was so perplexingly complex and diffuse that my team eventually decided to rewrite it.
Until we pulled the trigger on the rewrite, the solution had a lasting negative effect on the engineering team carrying the project forward. For several months nobody was confident making changes that could break the product, and taking it offline for an extended period of time was obviously a no-go.
We don’t live in a perfect world and there are always factors that affect optimal decision making when executing on a project - looming deadlines, lack of specific knowledge, or unplanned absences in crunch times.
That is all good and valid, but one shouldn’t sacrifice understanding the big picture of the system for quick gains in output. Even if the skills or talent aren’t there, there are still ways to supplement one’s own effort and understanding using an AI-based approach.
Effect of lack of ownership: wasteful solutions get created, and future improvements that actually address new system challenges require considerably more effort.
The quick lever: ownership of the outcome always starts with self-awareness. A quick and easy way to elicit that self-reflection is simple prompting that has the model adopt a “contrarian view” or act as a sparring partner:
“Pre-mortem” analysis: conduct a ‘pre-mortem’ on this design. Assume it is 6 months from now and this solution caused a critical incident in production. Describe the exact sequence of events that led to the failure and identify the root cause.
Feedback à la “ruthless minimalist”: act as a strict, minimalist Principal Engineer who hates adding new dependencies; critique this proposal and argue why it is over-engineered. Suggest a simpler alternative that uses only our existing stack.
Boosting own productivity at the cost of others’
Many AI tools are adept at centralizing knowledge and capable of rapidly creating documentation. Such a low entry barrier for synthesizing information makes it easy to fall into a false sense of rapid productivity and value creation, especially if the end product fails to create the expected level of clarity. No clarity means time wasted by others reviewing, commenting, and following up on it.
Not so long ago, whiteboarding and the creation of “demonstrative” code changes were the standard for proposing changes to existing systems. Ideation, exploration and consensus building have always been achieved among humans, and that will remain so.
In the meantime, productivity tools and platforms providing first-class support for distributed teams have elevated the process of asynchronously building understanding.
Knowledge organization has advanced, too — tools such as Obsidian, Notion or NotebookLM allow the ingestion of institutional knowledge in all forms: documents, audio or video recordings. It’s a no-brainer use case for AI integration — by creating a specific context, one can quickly synthesize new data based on the provided artifacts.
But arriving at such quick outcomes doesn’t mean that one exercises ownership; in fact, it means the opposite. Without intentional refining and editing of the artifact, the author risks wasting the time of fellow reviewers for the sake of their own time savings.
The feeling of the LLM “getting you” doesn’t mean that others will.
Let’s take work ticket descriptions as an example. Ticketing tools now generate first drafts for descriptions and other data needed to make it clear what the work should be about. If the author relies purely on the AI-generated text and doesn’t own the description, it causes:
resentment over the lack of effort put into documenting what should be done;
and, as a consequence, the author earning the label of a corner-cutter.
Effect of lack of ownership: reviewers lose time, and the perceived “productivity gain” is a net negative for the team.
The quick lever: the leader must react swiftly whenever this happens, and employ very basic, albeit targeted coaching. Consider the following questions (and statement):
Imagine you’re out sick tomorrow. Can another engineer start without a meeting? If not, what’s missing?
How much time did this save you? How much time will it cost reviewers to interpret?
Add a 5-line decision log: options considered, why we chose this, and what we’re deliberately not doing.
In the first question, the individual should realize that the “bus factor” is very real and that it exposes them as a potential blocker. The self-reflection of the second question should ideally lead to the insight that one ought to optimize for team time, not author time. The last statement is the most powerful of all, because it equips the individual with the notion that they have the agency to create recommendations for others. Once these self-realizations have led to much better documentation, the individual knows how to progress from “specialist” to “expert” mode by relieving others of the responsibility to scrutinize and quality-check their work.
Manager escalation as communication avoidance
Communication in a well-performing team is like oil to an engine - one doesn’t work without the other. Team members solve problems because they understand the assignment. They understand the assignment because they communicate.
One can’t simply defer stakeholder communication to the manager anymore. In the past, team members would come to me, signal problems that should be escalated or articulated to other team leads, and ask me to do my “manager thing” and take care of it. In a high-pace environment, it is the leadership of individuals that builds bridges between stakeholders and influential actors on a project.
AI agents already automate large portions of manual work. The degree of automation is expected to rise - at some point models will become so advanced that they have contextual coverage of a company’s entire knowledge base.
Many engineers still cling to their technical expertise as if it were their biggest asset. But the low entry barrier to programming - essentially anyone can now become a programmer - means technical execution is no longer the scarce skill it used to be.
The role of the subject matter expert in technology today is that of an auditor of AI agent output: adjusting the prompt until it covers the intended technical solution, or deciding to rewrite the output altogether.
Writing code is no longer the most valuable asset one can produce - communication, judgment, and ease of alignment are increasingly the differentiators.
Effect of lack of ownership: the individual misses out on increased impact and visibility across other teams and departments, diminishing their own career growth.
The quick lever: turn “problem escalation” into “proposal ownership” - before asking the manager to “do their thing”, ask the team member to prepare the following:
The 3–5 sentence message they would send;
What is needed from the other party;
Two options and their recommendation.
This increases individual ownership because it has the individual work out what the situation is, instead of the manager doing so after context-switching and trying to debug what’s going on.
Moreover, the main reason escalations fail is that the “what” isn’t formulated well - making the ask explicit is ownership of outcomes, not activity. Ultimately, having the recommendation is the biggest upgrade, because it signals “I’m thinking like an operator” instead of “I’m waiting for a prescription”. Even if the recommendation is wrong, it’s at least coachable and builds the “ownership muscle”.
Treating the agent as an oracle
The paper Your Brain on ChatGPT: Accumulation of Cognitive Debt… describes a pattern I’ve started recognizing in engineering teams too: when the assistant does too much of the thinking, the human’s sense of ownership and engagement tends to drop quietly.
In engineering, this shows up as a dangerous bias: the AI overlord has the answer. But an agent’s “wisdom” is not universal; it’s highly contextual - as it should be.
And that contextuality serves many companies well: the agent is optimized for the org’s main domain, its tooling, its stack, its institutional knowledge, and so on. That’s exactly why it can be so effective… and exactly why it can be misleading outside that context.
Subject matter experts then let rigor slip: they accept a polished proposal, skip the uncomfortable step of consulting teammates who own adjacent constraints (security, infra, data, product), and ship an answer they can’t fully defend.
Effect of lack of ownership: missed constraints show up late (when they’re expensive), and the team can’t explain or defend the decision under pressure.
The quick lever: one AI answer requires one human dissent - before proceeding, the owner must do three things:
Name the constraint owner they’ll ask for a 5-minute sanity check (security / data / infra / product);
Write 3 assumptions the AI made that could be wrong in our system;
Capture one dissent (a concern, alternative, or “this doesn’t apply here”) and either address it or explicitly accept the risk.


