AI’s threat to “judgment capital”
We have been asking the wrong question about artificial intelligence. For several years now, the dominant public debate has revolved around a familiar set of concerns. Will AI take jobs? Will it spread misinformation? Will it amplify bias? Will it concentrate power? Will it make certain professions obsolete? These are serious questions, but they are not the deepest one.
The central danger of AI is that, if deployed carelessly and at scale, it may erode the human and institutional capacities on which sound judgment depends. Once those capacities weaken, authority loses legitimacy and self-government becomes harder to trust.
The issue is what people and institutions may cease to be able to do if they increasingly rely on AI to perform the slow work of interpretation and deliberation through which mature judgment is built. AI is an extractive technology. And what it threatens to extract is judgment capital.
Judgment Capital
Judgment capital is the accumulated human and institutional capacity to perceive reality clearly and interpret ambiguity without flinching. It is what allows people to weigh competing considerations and justify decisions in public. It also includes the willingness to bear responsibility for those decisions, and to train successors who can renew those capacities across generations. It is among the most important forms of wealth any society, profession, or civilisation possesses. Yet unlike financial capital, it rarely appears on a balance sheet. It is invisible until it begins to fail.
When judgment capital is strong, institutions can adapt under pressure. They can distinguish signal from noise. They can make difficult decisions without panicking or posturing. They can correct course. They can learn. They can resist seduction by the merely fashionable or the merely efficient. They can produce leaders who are prudent.
When judgment capital is weak, institutions become brittle. They become articulate but shallow. They appear informed but are easily manipulated. They generate impressive analysis but poor discernment. They lose the capacity to tell what is merely plausible from what is actually true. They confuse what can be measured with what matters, and what is expedient with what is wise. They become, in effect, high-functioning forms of institutional confusion.
This is why AI cannot be understood adequately through the conventional categories of productivity or efficiency, let alone simple task substitution. Those categories are too narrow. They focus on what the technology appears to save in time or labour, but not on what it may slowly consume in capability. The possibility we have not yet reckoned with is that AI may help societies generate short-term performance by liquidating the accumulated stocks of human judgment on which their long-term resilience depends.
Structural Danger
That is a very different kind of risk. It is a structural danger. It arises precisely because AI can be immensely useful. It can draft and summarise. It can classify. It can recommend. It can persuade. It can remove friction. It can reduce effort. It can produce coherence quickly. It can make institutions look more intelligent than they are. And that is exactly why the danger is so profound.
Human judgment is formed through difficulty – by wrestling with ambiguity, confronting reality, and drafting badly before revising under pressure. It is formed by hearing objections, by noticing what does not fit, and by living with error long enough to learn what should be done and why. A technology that systematically removes those formative frictions may create gains in speed and output. But it may also degrade the process by which judgment itself is cultivated. That is what I mean by cognitive extractivism.
Cognitive extractivism is the creation of economic and institutional value through the large-scale depletion of the human capacities people need if they are to discern well, verify claims, accept responsibility, and govern themselves. Industrial society extracted nature. Digital society extracted attention. The AI age, if badly governed, may extract judgment.
The question is whether AI will be deployed in ways that augment and deepen human judgment, or in ways that quietly mine it. That distinction may determine more than the future of individual organisations. It may shape the future of institutions and professions. It may also shape democratic life, perhaps civilisation itself. We usually recognise civilisation by what can be seen from the outside – its laws, its institutions, the visible machinery of common life. But beneath all of these lies a more fundamental inheritance – the capacity of human beings to judge.
Civilisation is held together by disciplined human judgment when the ground is unclear and interests collide. In law, legitimacy rests on people who can tell evidence from assertion and principle from preference. Markets depend on decision-makers who can read signals sceptically and know where responsibility lies. Democratic life depends on the same thing. So do universities and the professions – people able to face reality and weigh competing goods, then answer for what they decide.
In each case, what sustains the institution is a cultivated human capacity – the ability to make sense of reality and deliberate about competing goods, then take responsibility for judgment when no algorithm can absolve anyone of moral or practical accountability. This is why judgment capital is so valuable. It is a hard strategic asset and a civilisational necessity.
The problem is that judgment capital is slow to build and easy to spend. It accumulates through education and apprenticeship. It grows through habits of explanation and verification, then through repeated exposure to the burdens of decision-making. It is embedded in routines of challenge and review. It also lives in traditions of craft and in cultures that reward seriousness over theatre.
But because judgment capital is intangible, institutions often consume it without recognising that they are doing so. They optimise for speed and call it progress. They remove developmental steps and call it efficiency. They compress deliberation and call it agility. They replace friction with fluency and call it intelligence.
The depletion is noticed only later. Succession weakens. Decisions become thinner. Dissent disappears. Formal compliance rises even as real prudence declines. At that point organisations discover that they are full of highly credentialled people who do not know how to exercise judgment. The danger with AI is that it may accelerate this depletion on a scale and with a subtlety unlike anything we have previously encountered.
Outputs and Capacities
The prevailing conversation about AI has focused overwhelmingly on outputs. We ask whether the model is accurate and fair, whether its reasoning can be explained, whether it is safe. These are important questions. But they are still largely questions about the behaviour of the system. They are not yet questions about the long-term condition of the humans and institutions that use the system. That omission matters enormously.
A society can regulate outputs and still degrade capacities. An organisation can govern models responsibly and still undermine the formation of judgment in its workforce. A board can install controls and still become dependent on AI-mediated forms of analysis that weaken the organisation’s ability to think independently. A school can prohibit cheating and still produce graduates whose habits of attention and interpretation, even of writing, have quietly atrophied. In other words, even lawful and carefully governed use of AI may carry a profound long-term risk if it normalises the outsourcing of the very activities through which judgment is formed.
That risk is easy to miss because output metrics often improve first. Drafting becomes faster. Reports become cleaner. Analysis becomes cheaper. Decisions can be prepared more quickly. Communications become more polished. The institution feels more capable. But capability is not the same as capacity. Capability refers to what can be produced now. Capacity refers to what can be sustained, renewed, and exercised over time. That distinction is fundamental. A company can increase capability while depleting capacity. A university can increase throughput while weakening scholarship. A government can improve administrative speed while eroding legitimacy. A civilisation can become more technologically sophisticated while becoming less able to govern itself wisely.
We have seen versions of this dynamic before. Financial institutions can boost short-term returns by underinvesting in maintenance, resilience, or even ethical discipline. States can centralise authority in ways that increase administrative efficiency while hollowing out civic responsibility. Organisations can deliver quarterly results by weakening the very apprenticeship structures that produce future leaders. In each case, the short-term performance is real. But it is funded by a hidden drawdown on something more foundational. That is how extractive systems work. The possibility before us is that AI will become the most powerful extractive technology yet devised for the realm of human cognition.
The depletion of judgment capital occurs through a series of reinforcing dynamics, each of which can look useful or harmless in isolation. Taken together, however, they may produce a profound civilisational deskilling. The pattern recurs in several overlapping mechanisms:
AI removes the formative work through which judgment is built
Human beings become capable of judgment by undertaking the difficult work that precedes good answers. A young lawyer learns judgment by grappling with conflicting precedents and messy facts, then trying to make a case under pressure. A junior analyst learns from building a model and then discovering, through checks and failures, the hidden assumptions inside it. A young manager learns by drafting a strategy memo. Then comes the defence of it. Then revision. Then the moment when reality pushes back. A student learns by struggling to explain something clearly. In each case, the developmental value lies partly in the friction itself.
If AI takes over too much of this formative labour, the user may still obtain the output while losing the process that would have developed the capacity. The work still gets done. The memo appears. The analysis is compressed for them. The presentation takes shape. Recommendations surface almost fully formed. But the human being shifts toward supervision. The person becomes less of a builder and more of a reviewer, less of a thinker and more of a selector.
This is a strategic problem. Civilisations and organisations depend on the continuous replenishment of judgment capital through apprenticeship. If the lower and middle rungs of professional formation are hollowed out, the upper ranks may still function for a time. But eventually the pipeline weakens. Institutions find themselves with people who can operate systems but cannot truly judge circumstances. They have outputs. They no longer have depth.
AI weakens contact with reality by substituting coherence for direct encounter
A second mechanism of depletion lies in AI’s extraordinary capacity to generate plausible coherence. This is one of its greatest strengths. It can gather fragments and synthesise information. It can impose structure on a messy situation, then produce articulate narratives with astonishing speed. In many contexts, that is useful. Indeed, it can be transformative.
But there is a hidden danger here. Reality is often resistant and disorderly. It contains contradiction, awkward detail, and long stretches of ambiguity that refuse to resolve on command. Much of mature judgment consists in learning to endure and interpret that mess without falsifying it.
AI, by contrast, often presents the world in pre-digested form. It tidies and compresses. It clarifies what was once messy. It creates a sense of command over complexity. It gives leaders something they crave – legibility. Yet legibility can be purchased at the cost of reality-contact.
When decision-makers increasingly encounter the world through AI-generated summaries and tidy strategic frames, they may become more fluent about reality while being less exposed to it. They may lose contact with the raw materials of judgment – unresolved tension, the outlier fact, some ugly human detail, the contradiction that will not fit the preferred frame. The result is a new kind of institutional fragility – organisations that speak more coherently but see less clearly.
AI suppresses the developmental role of dissent and interpretive struggle
Strong judgment emerges from contestation. Within healthy institutions, judgment is refined through argument and review. It is sharpened by objection, by alternative interpretation, and by the disciplined surfacing of what others would prefer to ignore. This is as true in strategy and governance as it is in science and law. AI can support these processes. It can also weaken them.
If AI becomes the first-draft author and the main device for compressing evidence and proposing options, then the range of what enters deliberation may become subtly narrower. People may challenge the output, but they do so after the frame has already been set. The terms of debate may be pre-structured by the model’s assumptions, training distributions, or rhetorical defaults.
More importantly, when organisations get used to receiving neat and fast recommendations, their tolerance for the slower, messier process of human disagreement often declines. Friction begins to look like inefficiency. Objection begins to look like obstruction. The temptation grows to treat AI-mediated coherence as a substitute for the strenuous work of genuine deliberation. Yet that difficult process is part of how institutions think. A world in which AI reduces the appetite for interpretive struggle is a world in which the muscles required for wise disagreement begin to weaken.
AI blurs authorship and responsibility
Judgment capital depends on knowing who judged and why, then being clear about where accountability sits. This is one of the most underappreciated institutional dangers of AI.
In an AI-mediated environment, it becomes easier for decisions and analyses to emerge from a murky sequence – someone prompts, the model generates, others revise, and approval spreads until responsibility is everyone’s and no one’s. The final product may look polished. But responsibility becomes harder to trace.
Who exactly formed the view? Who weighed the trade-offs? Who can explain the rationale in genuinely human terms? Who stands behind the judgment, rather than merely behind the process?
Formal accountability may remain. The executive signs the paper. The board approves the recommendation. The doctor signs the report. The official issues the decision. But when substantive cognition is increasingly distributed between people and systems, legitimacy can begin to thin. Institutions require intelligible authority. When authority becomes opaque or merely performative, little more than a residue, trust declines. And without trust, verification becomes costly. Dissent curdles. Governance hardens into something brittle.
AI makes it easier to consume inherited judgment than to renew it
One of the reasons AI is so attractive is that it allows people to benefit from accumulated human knowledge without having to reproduce the labour that originally created that knowledge. That is, in many contexts, precisely its promise.
But every civilisation faces a recurring challenge – how to transfer inheritance without destroying formation. How to give the next generation access to the fruits of past achievement while still requiring enough effort and responsibility to make them worthy stewards rather than passive consumers.
AI intensifies this challenge dramatically. It offers users synthesis without scholarship and polished prose without the labour that once produced it. It can deliver recommendations before real deliberation has taken place, and it can create an air of competence before the underlying formation exists.
This can feel like progress. Sometimes it is progress. But if the ratio of inheritance to formation becomes too high, societies begin living off accumulated stores of judgment without replenishing them. That is the essence of depletion.
One reason the risk is not yet fully visible is that we still describe many elite professional activities as forms of knowledge work. That phrase, once useful, is now increasingly inadequate. The most consequential roles in modern institutions are defined not by the processing of information but by the exercise of judgment.
The decisive question in these roles is whether the person can make sense of what matters, challenge what seems obvious, and take responsibility under conditions where information is abundant, but wisdom is scarce. That is why the AI debate needs to move from the language of knowledge work to the language of judgment work.
It is one thing for AI to retrieve information or help with drafting. It is another for it to become the default intermediary through which professionals meet complexity and shape decisions before explaining what they have done. Once that shift occurs, we are no longer merely augmenting labour. We are intervening in the ecology of judgment itself. And that should be treated as a first-order governance issue.
Why Does Judgment Matter?
Why should societies and the institutions within them care about this? Because the depletion of judgment capital is a direct threat to social resilience. It weakens institutions over time. At a larger scale it endangers continuity and corrodes democratic legitimacy. Every serious institution depends, whether it acknowledges it or not, on its judgment infrastructure. That infrastructure shows itself when an institution can notice weak signals and face uncomfortable truths without flinching, while still acting responsibly when scripts fail.
Neither procedure nor dashboard can substitute for this, nor can a model or an AI assistant. Indeed, the more sophisticated the informational environment becomes, the more valuable judgment becomes. When everyone has access to analysis, discernment becomes the scarce resource. When everyone can generate polished narratives, credibility and reality-testing become the competitive advantage. When every institution can automate parts of thought, the ability to preserve human seriousness becomes civilisationally strategic. This is why AI should be understood as a possible stressor on our collective judgment capital.
Societies and institutions should be asking questions such as these:
Are we using AI in ways that accelerate professional formation, or bypass it?
Are junior staff still being trained through demanding cognitive work, or merely supervising generated outputs?
Are our decision processes becoming more rigorous, or just more articulate?
Is dissent becoming less necessary because AI produces plausible consensus too quickly?
Do our people still know how to build an argument, interrogate evidence, and defend a judgment without synthetic assistance?
Are we augmenting our institution’s capacity for discernment, or quietly spending it?
These are survival questions for any institution or society that expects to remain competent over time. No society can buy mature judgment at scale once it has lost the conditions that produce it.
The Collapse of Apprenticeship
Perhaps the clearest institutional manifestation of this problem will be the collapse of apprenticeship. The great institutions of modern society reproduce themselves through developmental ladders. Novices begin with constrained responsibilities. They do laborious work, make mistakes under supervision, and slowly learn to tell surface fluency from substance. Over time they acquire the capacity to exercise discretion well. This is how institutions reproduce leaders. It is how professions produce masters and civilisations produce competent adults.
AI threatens this process precisely because it can perform much of the work previously assigned to beginners and intermediates. On one level, that looks like progress. Why force a junior professional to do work a machine can complete in seconds? But that is the wrong frame. The question is what function the task served in human formation.
Many so-called low-value activities were valuable because they built the person doing them. They trained attention and disciplined thought. They taught people how structure mattered and why relevance had to be earned. They demanded precision. Above all, they made accountability concrete by exposing the learner to the grain of the work.
Once those developmental stages are removed, the institution may gain efficiency while losing the ladder by which capability is transmitted. This is a profound strategic risk. Over time the bench thins. Middle ranks become more managerial than interpretive. Juniors become more presentational than substantive. And eventually the institution discovers that it has become highly dependent on inherited talent it no longer knows how to reproduce. The same logic applies beyond the institution. A civilisation that loses its mechanisms of apprenticeship begins to consume its own future.
The Strain on Verification
If judgment capital is one pillar of civilisation, another is verification. Human societies depend on institutions and practices so truth can be established and defended, even when it is contested. Courts rely on evidence. Science relies on replicability and disciplined standards of proof. Journalism relies on attribution and verification. Markets depend on trustworthy records and on rules of disclosure that people still believe. Ordinary personal life depends on the practical ability to know who said what, what happened, and what can reasonably be believed. AI places unprecedented strain on this verification infrastructure.
The concern is that synthetic content at scale raises the baseline cost of trust. When voice and image can be forged, and identities or interactions simulated, proving authenticity becomes harder and more expensive. It also becomes more socially contested. In such a world, bad actors enjoy a structural advantage. But so does cynicism. Once plausible deniability expands, institutions face a new burden – to prove that evidence has not been fabricated or subtly distorted.
Beyond Business
This matters far beyond the business sphere. Governance depends on verification. Internal investigations do too. So does public trust, and modern cyber defence. The more fragile the verification environment becomes, the more expensive it is to govern and deliberate, or simply to persuade. And there is a second-order effect. When verification becomes harder, authority itself weakens. A society can survive many lies. It struggles to survive when proving truth becomes systematically harder than manufacturing plausibility.
There is another danger that societies and institutions should recognise quickly – the rise of performative intelligence. AI can make institutions sound smarter than they are. It can sharpen prose and speed up synthesis. It can make public reasoning look polished and strategic narratives look elegant. In many cases, it can also improve substance.
But the risk is that people begin to mistake fluency for understanding. They confuse synthesis with judgment. They take articulate outputs as proof of institutional seriousness. An institution feels more intelligent because it is producing higher-quality expression. Yet expression is not the same as discernment.
An institution can become progressively more polished while becoming progressively less self-aware. It can produce better papers while asking worse questions. It can make governance look more rigorous while actually weakening challenge. It can create the impression of strategic depth while losing contact with operational and social reality. This is performative intelligence – the condition in which AI enhances the display of intelligence more reliably than the substance of it.
Why is this so dangerous? Because it hides depletion. A decline in judgment is easy to notice when institutions become chaotic or visibly incompetent. It is much harder to notice when they become smoother and faster, wrapped in more sophisticated language. AI threatens to make institutions look wiser than they are. That is a genuinely treacherous form of risk.
At this point, some may object that this is simply romanticising struggle. Surely it is irrational to preserve difficulty for its own sake. Surely civilisation advances by reducing unnecessary effort. Of course it does. The point is to distinguish between friction that is wasteful and friction that is formative. A mature society removes needless burdens. But it does not eliminate the disciplines through which human beings become capable of freedom, of responsibility, and above all of judgment. This is the moral stake in the AI debate. The question is what kinds of people we are training ourselves to become in relation to these systems.
The End of Human Agency?
If AI becomes a universal prosthesis for thought, performance may survive even as human agency thins out. Authorship begins to blur. Responsibility becomes harder to locate. Seriousness may drain away almost unnoticed. That would amount to an anthropological shift.
Human beings are creatures formed by the difficult exercise of understanding and judgment, then by taking responsibility for the consequences. To outsource too much of that is to change ourselves. That is why the stakes are civilisational. A civilisation is defined by the sort of human beings it cultivates and the sort of judgment it esteems.
If this diagnosis is right, then the governance conversation around AI must change. The central question cannot be only – how do we deploy AI safely and effectively? It must also be – how do we deploy AI without depleting the judgment capital on which our institutions and societies depend?
That requires a different kind of leadership. It requires leaders to think less like technology adopters and more like stewards of civilisational capability.
From that diagnosis comes a set of practical implications:
Treat judgment as a civilisational asset
Most institutions are trying to identify where AI can save time or improve quality. Far fewer are asking where human judgment is so central that the careless removal of cognitive labour would degrade the institution over time.
Leaders need to identify the domains in which judgment must still be formed by people. In those domains, reality has to be tested directly, and moral accountability remains mission-critical. Those domains should not be governed by a simple logic of maximum automation.
Protect apprenticeship
If entry-level and mid-level cognitive work is being automated, institutions must intentionally redesign pathways of formation. Otherwise they will eat their seed corn.
The question is whether younger people are still being required to do first-principles work and defend their reasoning, then learn from correction. If not, the institution is weakening the future stock of human judgment.
Design for challenge
Institutions need durable ways to preserve disagreement, alternative interpretations, and real human challenge. Where stakes are high, leaders should deliberately preserve occasions on which the human task is to reason independently of the machine.
Preserve contact with reality
Senior decision-makers must resist the temptation to encounter the world only through AI-mediated summaries. They need exposure to unfiltered signals and inconvenient facts. That includes dissenting voices and the texture of lived experience. Otherwise institutional authority becomes trapped in a synthetic cocoon of legibility. The more polished the informational environment becomes, the more intentional institutions must be about preserving unsmoothed access to reality.
Clarify authorship and accountability
In AI-mediated institutions, it becomes crucial to know who truly owns a judgment. Who actually stands behind the reasoning? Who can explain it in plain terms and defend it under scrutiny? Institutions that fail to preserve intelligible authorship will eventually weaken legitimacy.
Govern for replenishment
This may be the most important principle of all. Every use of AI should be assessed not only for its effect on output today, but for its effect on the replenishment of human and institutional capability tomorrow.
Human Wisdom v Artificial Intelligence
A system that delivers impressive short-term efficiency while weakening human formation and the habits of responsibility may be profitable in the narrow sense. It may also be deeply unwise.
The AI era demands a new test of leadership. In the first wave of digital transformation, the critical question was whether leaders understood technology well enough to adopt it. In the next wave, the question will be whether they understand human beings and institutions well enough to govern a civilisation shaped by it. That is a different standard.
It requires leaders to ask harder questions about where automation is safe and where human cultivation must remain intact. They need to know when speed helps and when it hollows out depth. They also need to watch what repeated use of the technology normalises, and what it slowly erodes. This is the work of stewardship.
A wise civilisation does not permit tools to quietly consume the human capacities that keep freedom and legitimacy alive, and make responsibility possible. That is the line we are in danger of crossing. The central strategic challenge of the AI age is to ensure that in building more intelligent systems, we do not leave ourselves with shallower people and weaker institutions, with self-government quietly eroding beside them.
The deepest question, then, is whether our societies can deploy AI without liquidating the judgment capital on which they depend. Everything of lasting importance follows from how we answer that question.
We stand at the beginning of a technological transformation whose full consequences we do not yet understand. But one thing is already becoming clear. AI will change which capabilities are strengthened, and which are quietly allowed to decay. That is why the conversation must rise above the current fascination with productivity, novelty, and above all scale.
The true test of AI is whether it can make institutions and societies wiser without making their people shallower. It is whether it can extend human capability without depleting the human capacities from which all serious capability ultimately springs.
If we fail that test, the cost will be borne in the weakening of the social and moral infrastructure on which civilisation depends. We will inherit institutions and societies that can generate endless output and endless simulation while possessing less of the disciplined judgment required to decide what is true and what matters, then act justly on that knowledge. That is a dangerous form of civilisational substitution – more synthetic cognition, less human discernment.
The responsibility of leadership, then, is to defend the conditions under which judgment remains possible. The leaders who matter in this century will be those who understand that judgment is a form of capital and that it can be depleted. They will also see that the first duty of serious stewardship is to ensure that the pursuit of intelligence does not quietly erode the human capacity to govern it. That is the defining question of the AI age. It confronts every serious institution in public life.
Roger Chao writes on major debates shaping contemporary Australia, examining political conflict, social change, cultural tension, and the policy choices that define national life.

