Friction by Design
Artificial Intelligence, Accountability, and the Limits of Speed in the Department of Defense
Abstract
The Department of Defense is often framed as facing a tradeoff between technological acceleration and bureaucratic inertia. This framing is misleading. The Department is not optimized for speed, but for accountability: documented authorship, procedural defensibility, and identifiable human responsibility for decisions carrying legal, ethical, or operational consequence. Artificial intelligence systems, by contrast, are optimized for output velocity, probabilistic generation, and diffuse authorship.
This paper argues that introducing AI into accountability-bound federal institutions initially increases administrative friction rather than reducing it. As AI accelerates drafting and analysis, it expands review burden and audit exposure and heightens institutional caution. The result is not streamlined governance, but a thickening of documentation, verification, and attestation requirements as organizations reassert control over responsibility.
At scale, acceleration produces gravity. When the cost of producing legible justifications collapses, uncertainty is increasingly resolved through formal written action. AI therefore does not merely increase the volume of drafts; it increases the likelihood that ambiguity becomes a record—memos, reports, justifications, and filings—expanding discoverability and oversight surface area. Speed produces records; records attract scrutiny; scrutiny reinforces further documentation.
Rather than collapsing bureaucracy, AI functions as a friction multiplier and an institutional stress test by forcing accountability mechanisms to operate at machine speed. Under current legal and democratic frameworks, efficiency gains plateau as compensatory controls expand to preserve legitimacy. Absent fundamental changes to the rules governing authority, authorship, and liability, accountability will continue to dominate speed. The expansion of friction is therefore not a transitional failure of adoption, but an expected outcome of introducing acceleration-oriented tools into accountability-centric institutions.
I. Introduction — The False Binary of Speed vs. Bureaucracy
The Department of Defense is frequently framed as an institution caught between technological acceleration and bureaucratic inertia. In this telling, emerging tools—particularly artificial intelligence—promise efficiency and speed, while institutional process is cast as resistance to change. This framing is misleading.
The DoD is not optimized for speed. It is optimized for accountability.
Its internal systems prioritize documented authorship, procedural defensibility, and identifiable human responsibility for decisions that carry legal, ethical, or operational consequence. These priorities are not cultural artifacts or legacy habits. They are the product of statutory requirements, constitutional constraints, and oversight regimes designed to preserve legitimacy under scrutiny.
Artificial intelligence systems are optimized along a different axis. They maximize output velocity, probabilistic exploration, and rapid convergence toward legible results. Authorship is diffuse. Iteration is opaque. Responsibility is externalized to the user who adopts the output.
The tension between these systems is not a matter of institutional reluctance to innovate. It is a structural mismatch between acceleration-oriented tools and accountability-bound governance. When introduced into federal workflows without corresponding changes to authority and liability structures, AI does not collapse bureaucracy. It exposes the depth of the constraints that bureaucracy exists to enforce.
This paper argues that early AI integration within the Department of Defense will increase administrative friction rather than reduce it. As production accelerates, review, documentation, and attestation requirements expand to compensate. The result is not streamlined governance, but a predictable amplification of process as the institution reasserts control over responsibility.
Understanding this dynamic requires abandoning the speed-versus-bureaucracy binary entirely. The relevant question is not whether the DoD can move faster, but whether accountability can move at machine speed without undermining the legal and democratic foundations the institution is designed to protect.
II. Accountability as the Operating Constraint
In federal institutions, accountability is not a secondary concern or an administrative preference. It is the primary operating constraint against which all processes, tools, and authorities are evaluated. Speed, efficiency, and innovation are permitted only insofar as they do not compromise traceability, responsibility, and procedural defensibility.
Within the Department of Defense, accountability functions as both a legal requirement and an institutional safeguard. Decisions must be attributable to identifiable humans, supported by a documented process, and defensible under retrospective scrutiny by inspectors general, courts, Congress, and the public. This requirement applies not only to outcomes, but to the process by which those outcomes are reached.
Bureaucracy, in this context, is not inefficiency. It is a risk-management architecture. Documentation, layered review, and formalized approval chains exist to preserve legitimacy when decisions are challenged after the fact. These mechanisms ensure that responsibility can be assigned, errors can be investigated, and authority can be reasserted without destabilizing the institution itself.
Importantly, accountability imposes limits on delegation. While tools may assist in analysis, drafting, or data aggregation, decision authority remains inseparable from human agency. This separation is not accidental. It reflects constitutional principles, statutory obligations, and democratic expectations that consequential actions are ultimately owned by people, not systems.
As a result, any technology introduced into this environment is evaluated not by its capacity to accelerate production, but by its compatibility with existing accountability structures. Tools that obscure authorship, compress deliberation, or diffuse responsibility trigger compensatory controls. Rather than streamlining workflows, they invite additional documentation, review, and oversight to restore legibility.
Understanding accountability as an operating constraint clarifies why efficiency gains in the Department of Defense are often incremental and bounded. It also explains why attempts to graft acceleration-oriented technologies onto accountability-bound systems frequently produce friction instead of flow. The institution is not resisting change; it is enforcing the conditions required for its own legitimacy and survival.
III. The Accountability–Efficiency Mismatch
The integration of artificial intelligence into federal workflows exposes a fundamental mismatch between two distinct optimization logics: institutional accountability and computational efficiency. Each system is internally coherent. Their collision produces friction.
The Department of Defense is designed to preserve accountability under uncertainty. Its processes emphasize traceable authorship, sequential review, and deliberate pacing to ensure that responsibility for decisions can be clearly identified and defended. Efficiency is valued, but only within the bounds of legibility and control.
Artificial intelligence systems are optimized for a different objective. They maximize throughput by generating probabilistic outputs at scale, exploring solution spaces rapidly, and converging on legible results without requiring deliberative justification. These systems are indifferent to authorship clarity and agnostic to downstream accountability. Responsibility is implicitly transferred to the human who elects to use the output.
This divergence creates a structural mismatch. When AI accelerates production within accountability-bound environments, it does not eliminate process. It displaces it. Tasks previously constrained by human labor become constrained by validation, certification, and review. Speed at the front end generates drag at the back end.
Crucially, this mismatch is not resolved through better interfaces, improved training, or cultural adaptation. It persists because the two systems optimize for incompatible outcomes. Institutions must slow processes to preserve accountability, while AI systems derive value precisely from removing those frictions.
Absent explicit changes to the rules governing authority, liability, and authorship, efficiency gains achieved through AI remain partial and unstable. The institution compensates by expanding oversight and documentation to realign accelerated output with its accountability requirements. What appears as bureaucratic resistance is, in fact, structural enforcement of constraint.
The accountability–efficiency mismatch is therefore not a transitional challenge on the path to automation. It is the defining condition under which AI operates within federal governance.
IV. What the Department of Defense Cannot Delegate
Despite widespread assumptions about automation and decision support, the Department of Defense operates under firm boundaries regarding what can and cannot be delegated to artificial systems. These boundaries are not informal norms or cultural preferences; they are imposed by law, oversight, and the requirements of democratic governance.
The Department cannot treat AI-generated output as authoritative in the absence of a clearly identifiable human owner. Decisions that carry legal, ethical, or operational consequence must be attributable to a person empowered to exercise judgment and accept responsibility. Automated outputs may inform that judgment, but they cannot replace it.
Similarly, the Department cannot accept anonymous or non-attributable authorship. Institutional legitimacy depends on the ability to reconstruct how and why a decision was made, who made it, and under what authority. Diffuse or machine-mediated authorship complicates this reconstruction and therefore triggers compensatory controls.
The Department also cannot permit automated decisions that produce binding consequences without human adoption. While AI systems may recommend courses of action or generate analytic products, the act of deciding remains inseparable from human agency. This separation preserves accountability and ensures that authority is exercised within established legal frameworks.
Finally, the Department cannot collapse review layers without increasing institutional risk. Review mechanisms exist to detect error, bias, and misalignment before decisions are executed. When tools accelerate output beyond the pace at which humans can confidently validate it, additional review is introduced to restore confidence rather than removed to preserve speed.
These constraints are often mischaracterized as bureaucratic inertia or resistance to innovation. In reality, they are the guardrails that allow the institution to function under constant scrutiny. AI challenges these boundaries not by offering new authority, but by accelerating processes that the Department is legally required to control. The resulting friction is not a failure of adoption; it is the predictable consequence of non-delegable responsibility.
V. AI as a Friction Multiplier
Artificial intelligence is frequently described as a force for reducing friction within complex organizations. In accountability-bound institutions, the opposite effect is observed. Rather than eliminating process, AI redistributes and amplifies it.
By dramatically accelerating drafting, analysis, and synthesis, AI systems increase the volume of material entering institutional workflows. This acceleration does not remove existing requirements for review, validation, and attribution. Instead, it expands the workload associated with those requirements. Friction that was previously constrained by human production limits is displaced upward into oversight and certification layers.
The bottleneck shifts accordingly. Where human labor once limited output, review latency becomes the dominant constraint. Supervisors, legal reviewers, and oversight bodies must now assess a greater quantity of material in the same or shorter timeframes. As a result, decision tempo is governed less by drafting speed than by institutional risk tolerance.
This dynamic produces a multiplier effect. Each incremental increase in output velocity generates a disproportionate increase in review burden. The faster the system produces legible results, the more effort is required to ensure those results are defensible. Rather than accelerating governance, AI forces institutions to invest additional resources in control mechanisms to preserve accountability.
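The multiplier can be made concrete with a toy model. The Python sketch below is illustrative only: the drafting time, per-item review cost, review premium for machine-generated material, and the assumption that demand expands to absorb cheap drafting are invented parameters, not measurements.

    # Toy model of the friction multiplier: as drafting accelerates,
    # total staff hours shift from production into review.
    # All parameters are illustrative assumptions, not measured values.

    DRAFT_HOURS_HUMAN = 8.0      # hours to hand-draft one product
    REVIEW_HOURS_PER_ITEM = 2.0  # review hours each product requires
    AI_REVIEW_PREMIUM = 1.5      # extra verification for unfamiliar output

    def weekly_hours(speedup: float, products_demanded: int) -> dict:
        """Split of drafting vs. review hours at a given drafting speedup."""
        drafting = products_demanded * DRAFT_HOURS_HUMAN / speedup
        premium = AI_REVIEW_PREMIUM if speedup > 1 else 1.0
        reviewing = products_demanded * REVIEW_HOURS_PER_ITEM * premium
        return {"drafting": drafting, "reviewing": reviewing}

    # Demand expands to absorb cheap drafting: 5 products per week by
    # hand becomes 20 per week once AI makes drafts nearly free.
    print(weekly_hours(speedup=1, products_demanded=5))
    # {'drafting': 40.0, 'reviewing': 10.0}  -> 50 total hours
    print(weekly_hours(speedup=10, products_demanded=20))
    # {'drafting': 16.0, 'reviewing': 60.0}  -> 76 total hours

Under these assumed numbers, a tenfold drafting speedup increases total staff hours, because review load scales with volume while per-item review time does not fall. Only the location of the labor changes.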
Importantly, this effect is not an implementation failure or a temporary imbalance. It is a predictable outcome of introducing acceleration-oriented tools into systems designed to survive scrutiny. AI does not bypass institutional safeguards; it activates them. The resulting friction is not evidence of dysfunction, but of an institution enforcing its operating constraints under new conditions.
Understanding AI as a friction multiplier, rather than a friction reducer, is essential for interpreting early adoption outcomes. It explains why productivity gains often appear muted and why administrative density increases even as technical capability improves.
VI. Documentation Will Get Slower, Not Faster
A common assumption underlying federal AI adoption initiatives is that automation will reduce administrative burden by accelerating drafting, analysis, and reporting. In accountability-bound environments, the opposite dynamic emerges.
Artificial intelligence systems dramatically increase the volume of legible output. That output must still be reviewed, validated, attributed, and defended by human officials. Each increase in generated material expands the surface area for error, misinterpretation, and post hoc scrutiny.
The resulting cycle is predictable:
1. Artificial intelligence increases output.
2. Increased output expands review requirements.
3. Expanded review increases audit exposure.
4. Increased audit exposure heightens institutional caution.
The consequence is not speed, but drag.
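A minimal simulation makes the drag visible. In the sketch below, the coupling coefficients (growth rate, audit sensitivity, review thickening) are invented; only the direction of each link is taken from the four-step cycle above.

    # Minimal sketch of the output -> review -> audit -> caution loop.
    # Coefficients are arbitrary; the point is the shape, not the values.

    output = 100.0         # units of material produced per period
    review_per_unit = 1.0  # review effort required per unit
    caution = 1.0          # institutional caution index (1.0 = baseline)

    for period in range(5):
        output *= 1.5                        # AI steadily accelerates production
        review_load = output * review_per_unit * caution
        audit_exposure = 0.01 * review_load  # more records, more audit surface
        caution += audit_exposure            # scrutiny raises caution...
        review_per_unit *= 1.05              # ...which thickens per-unit review
        print(f"period {period}: output={output:7.0f}  "
              f"review_load={review_load:8.0f}  caution={caution:.2f}")

Output grows by a constant 1.5x per period, but review load compounds super-linearly because each new record feeds back into caution. The rates are arbitrary; the divergence between production and review is the point.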
As AI-generated material proliferates, organizations respond by reinforcing procedural safeguards. Disclaimers multiply. Provenance requirements expand. Human attestations become more explicit and more frequent. Documentation becomes thicker rather than thinner as institutions attempt to reassert control over accelerated production.
This effect is not a temporary artifact of early adoption or inadequate training. It is a structural response to risk. In environments where decisions are discoverable, contestable, and legally consequential, increased output necessarily produces increased defensive documentation.
Artificial intelligence does not remove the need for accountability. It magnifies the effort required to maintain it. Until decision authority itself is automated—a condition incompatible with federal law and democratic governance—paperwork will grow faster than efficiency gains.
VII. The Verification Vortex
At the point of first contact, AI integration within accountability-bound institutions manifests not as automation, but as verification overload. The acceleration of output severs the implicit coupling between authorship and familiarity that traditionally underwrites confidence in official work products.
This dynamic is not unique to AI adoption. Contemporary commentary on staff performance has noted that many junior officers enter staff roles able to report facts but less able to produce decision-ready synthesis, recommendations, and written products that withstand scrutiny. The downstream effect is the same: increased clarification cycles, expanded supervisory intervention, and more time spent validating work products than producing them. AI does not create the verification burden; it removes the natural pacing that previously hid it.
In pre-AI workflows, sustained human effort functioned as an informal validation mechanism. Time spent researching, drafting, and revising created intuitive awareness of assumptions, sources, and limitations. When AI systems generate complete drafts or analyses in seconds, that familiarity is no longer inherent. Responsibility remains, but epistemic confidence does not.
The human user is therefore repositioned from author to certifier. Rather than producing content through deliberation, the official must validate content generated without the sustained cognitive engagement that authorship once guaranteed. This shift initiates a verification vortex in which effort is consumed by checking, justifying, and defending output after the fact.
Within this vortex, several reinforcing dynamics emerge. Probabilistic claims must be independently verified to guard against error or hallucination. Analytical reasoning must be reconstructed to ensure conclusions are defensible. Supervisory review layers expand to compensate for uncertainty introduced by accelerated generation. Each additional safeguard slows throughput while increasing administrative burden.
The verification vortex is self-reinforcing. As output accelerates, validation demands intensify. As validation slows decision tempo, pressure mounts to produce more material to justify action. The institution responds rationally by adding further review and documentation, deepening the cycle.
This phenomenon is not the result of individual distrust or resistance. It is a systemic response to the introduction of tools that decouple speed from certainty. In accountability-centric environments, verification becomes the dominant form of labor, and automation reshapes work not by removing effort, but by relocating it into defensive assurance.
VII-A. The Filing Cascade
As artificial intelligence lowers the cost of producing legible analysis, recommendations, and justifications, uncertainty is increasingly resolved not through informal deliberation, but through formal written action. What might previously have been settled through clarification, judgment, or tacit understanding is instead converted into a record.
This conversion is rational. In accountability-bound institutions, written filings preserve attribution, protect the actor, and create procedural defensibility under future scrutiny. As output accelerates, the safest response is not restraint, but documentation. AI therefore does not merely increase the volume of drafts; it increases the likelihood that ambiguity becomes formalized as a memo, report, justification, or filing.
At scale, this produces a filing cascade. Each AI-assisted product expands the universe of discoverable material, enlarges audit and oversight surface area, and increases the incentive for preemptive record creation. Review queues lengthen not because decision-makers lack information, but because they are saturated with defensively generated artifacts.
The effect is gravitational. Once formalization becomes the default response to uncertainty, additional acceleration compounds rather than relieves institutional load. Speed produces records; records attract scrutiny; scrutiny reinforces further documentation. The system slows not due to resistance, but because accountability converts velocity into mass.
Box 1 — Friction as a Weapon System: Historical Precedent
The dynamics described in the preceding sections are not novel. They have been observed, studied, and deliberately exploited before.
In 1944, the Office of Strategic Services published the Simple Sabotage Field Manual to guide resistance efforts in occupied Europe. Despite its title, the manual’s most consequential insight was not about physical destruction. Its core contribution was organizational: the identification of bureaucracy, procedure, and human accountability as systems that could be stressed into paralysis through small, deniable actions.
The manual emphasized what it termed “general interference with organizations and production.” Rather than attacking assets, it targeted processes. Saboteurs were instructed to insist on proper channels, demand written orders, refer matters to committees, reopen settled decisions, multiply paperwork, and advocate caution and propriety at every step. These actions were effective not because they violated rules, but because they followed them to the letter. The objective was not disruption through force, but drag through legitimacy.
This doctrine rested on a clear institutional insight: accountability systems protect authority, but they do so by generating friction under uncertainty. When responsibility is diffuse, when decisions are discoverable after the fact, and when error carries asymmetric punishment, organizations respond by slowing themselves down. Review expands. Documentation thickens. Ownership becomes defensive. The OSS treated this behavior as an exploit surface.
What artificial intelligence introduces into contemporary federal workflows is not a new vulnerability, but a new trigger. By accelerating output while diffusing authorship, AI activates the same accountability reflexes that simple sabotage once sought to induce deliberately. Review layers expand to compensate for epistemic uncertainty. Human actors shift from authorship to certification. Procedural density increases as institutions reassert control over responsibility.
The scaling logic is identical. The OSS manual rejected spectacular acts in favor of distributed micro-friction, multiplied across thousands of ordinary participants. AI produces a parallel effect incidentally. Each probabilistic output is defensible in isolation; collectively, they impose verification and attestation burdens that accumulate faster than institutions can absorb them. The result is systemic drag without malice.
This historical precedent clarifies an essential point: friction amplification is not evidence of institutional failure. It is a predictable outcome of accountability-bound systems under stress. Where the OSS viewed bureaucratic drag as a weapon to be wielded, modern AI systems reproduce it unintentionally by accelerating production beyond the pace at which legitimacy can be maintained. The mechanism is the same. Only the intent differs.
VIII. Diffuse Authorship and Procedural Defensibility
Procedural defensibility depends on clear and legible authorship. In accountability-bound institutions, the ability to attribute decisions, analyses, and recommendations to identifiable individuals is central to post hoc review, error correction, and the preservation of institutional legitimacy.
Artificial intelligence complicates this foundation. AI-generated content is the product of layered interactions among training data, model architecture, system configuration, and user prompts. Even when a human adopts the output, authorship is hybrid rather than singular. The resulting diffusion undermines the clarity required for retrospective accountability.
When authorship becomes ambiguous, institutions compensate by expanding process. Additional documentation is required to explain how output was generated, under what conditions it was reviewed, and why it was deemed acceptable. Attestations become more explicit as officials seek to reassert ownership over content they did not directly produce. Review chains lengthen to distribute risk and reinforce defensibility.
This expansion is not an overreaction. It is an adaptive response to uncertainty under scrutiny. In environments where decisions are subject to legal challenge, congressional inquiry, or public contestation, ambiguity in authorship increases institutional exposure. Procedural density grows as a protective measure.
Diffuse authorship therefore does not weaken bureaucracy; it strengthens it. The institution responds by reinforcing its safeguards, not relaxing them. What may appear externally as inefficiency is internally understood as risk containment.
As long as accountability remains anchored to human responsibility, clarity of authorship will remain non-negotiable. AI-generated content does not erode this requirement. It intensifies the institutional effort required to satisfy it.
IX. Why Efficiency Gains Plateau
Initial exposure to AI-enabled tools often produces the appearance of dramatic efficiency gains. Drafts materialize instantly. Analytic options proliferate. Work that once required days of effort can be produced in minutes. In accountability-bound institutions, these gains plateau quickly.
Box 2 — The Policy Countermove: Speed as Doctrine
The Department’s emerging AI posture is not merely technological. It is openly doctrinal. In the January 9, 2026 Artificial Intelligence Strategy for the Department of War, senior leadership directs the Department to become an “AI-first” warfighting force and explicitly frames bureaucratic friction as a target for elimination rather than a condition to manage.
The strategy’s logic is straightforward: the United States is in a race, therefore “speed wins.” The Department is directed to “weaponize learning speed,” measure cycle-time and adoption as decisive variables, and accept that the risks of moving too slowly outweigh the risks of imperfect alignment. In parallel, it calls for an aggressive “wartime approach to blockers,” explicitly naming authorities to operate (ATOs), test and evaluation/certification, contracting, hiring, and other policy constraints as inhibitors to rapid experimentation and fielding.
This is the policy-level counterstroke to the dynamics described in this paper. Where accountability systems thicken under scrutiny, the strategy attempts to compress the timeline of accountability through executive pressure, metric discipline, and formal barrier removal mechanisms (including a monthly “Barrier Removal Board” empowered to waive non-statutory requirements and escalate unresolved blockers). The intent is not to deny oversight, but to subordinate it to tempo as a strategic variable.
However, the same memorandum also reveals why friction reappears even under a speed mandate. The strategy emphasizes “single accountable leaders,” measurable outcomes, and recurring demonstrations and reporting—explicitly re-centering ownership as the stabilizing mechanism for accelerated execution. Even while attacking process drag, it reasserts the institutional requirement for attributable responsibility. Speed doctrine compresses timelines; it does not dissolve liability.
This produces the core tension: policy can demand velocity, but accountability cannot be outpaced indefinitely. When accelerated experimentation collides with retrospective scrutiny, the institution still pays the defensibility cost—either upfront (controls, documentation, attestations) or downstream (investigations, reversals, remedial governance). The strategy is therefore best understood not as a refutation of the accountability–efficiency mismatch, but as official recognition of it—and an attempt to win the mismatch through tempo, consolidation of ownership, and systematic removal of non-statutory friction.
The plateau occurs because production speed is not the dominant constraint on decision-making. Review, validation, and attestation impose per-item costs that do not shrink as output accelerates. As AI increases the volume of material requiring assessment, the aggregate review burden expands rather than compresses. Time saved in drafting is absorbed by time spent verifying, documenting, and defending.
This dynamic produces diminishing returns. Each additional increase in output velocity yields smaller net gains and greater marginal burden. Review latency becomes the limiting factor, and institutional caution rises in response to heightened exposure. The system stabilizes not at maximum efficiency, but at the threshold of acceptable risk.
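The shape of this bound resembles Amdahl's law in computing: if only the drafting fraction of a task accelerates, end-to-end speedup is capped by the fraction that cannot. The analogy, and the 30/70 split below, are illustrative assumptions rather than measured DoD workflow data.

    # Amdahl's-law analogy for the plateau: if drafting is 30% of
    # end-to-end cycle time and review/validation/attestation is the
    # other 70%, the whole process can never exceed 1/0.7 ~= 1.43x,
    # no matter how fast drafting becomes. The split is assumed.

    DRAFTING_FRACTION = 0.30
    FIXED_FRACTION = 1.0 - DRAFTING_FRACTION  # review and attestation

    def overall_speedup(drafting_speedup: float) -> float:
        return 1.0 / (FIXED_FRACTION + DRAFTING_FRACTION / drafting_speedup)

    for s in (1, 2, 10, 100, 1_000_000):
        print(f"drafting {s:>9,}x faster -> whole process "
              f"{overall_speedup(s):.2f}x faster")
    # Plateaus at ~1.43x: review, not drafting, sets the ceiling.

If anything, the analogy understates the effect, since Section V argues the review fraction grows with volume rather than holding constant.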
Crucially, this plateau is not a function of immature technology. It persists even as models improve in fluency and accuracy. The constraint is institutional, not technical. Absent changes to the rules governing authority, liability, and authorship, efficiency gains remain bounded by the requirements of accountability.
As a result, AI adoption within the Department of Defense does not follow a linear productivity curve. Early acceleration is followed by stabilization as compensatory controls expand. The institution adapts by absorbing the tool into existing processes rather than reorganizing itself around speed.
Efficiency gains therefore stall not because AI fails to perform, but because accountability cannot be compressed beyond a certain point without undermining legitimacy. Under current legal and governance structures, this plateau is not an anomaly. It is the expected outcome.
X. Implications — AI as an Institutional Stress Test
Viewed through the lens of accountability, artificial intelligence functions less as a productivity tool than as an institutional stress test. By accelerating output within systems designed to privilege traceability and responsibility, AI exposes the limits of existing governance structures rather than transcending them.
The friction generated by AI adoption is therefore diagnostic. It reveals where authority is tightly coupled to human judgment, where procedural defensibility is non-negotiable, and where risk tolerance is bounded by legal and democratic constraint. Increased documentation, expanded review, and heightened caution are not failures of implementation; they are signals of institutional self-regulation under pressure.
This lens reframes common adoption narratives. When AI fails to deliver anticipated efficiency gains, the explanation does not lie in user resistance or technical immaturity. It lies in the mismatch between acceleration-oriented tools and accountability-bound systems. Institutions respond to this mismatch not by abandoning control, but by reinforcing it.
AI thus clarifies rather than resolves institutional tension. It makes visible the costs of accountability that were previously masked by human production limits. As output scales, those costs become explicit, measurable, and unavoidable.
Understanding AI as a stress test rather than a solution allows observers to interpret early outcomes accurately. The expansion of bureaucracy is not evidence that the technology is incompatible with governance, nor that governance is incompatible with innovation. It is evidence that accountability remains the dominant organizing principle—and that any transformation must contend with it directly rather than assuming it away.
XI. Conclusion — Accountability Wins by Design
The Department of Defense does not face a choice between technological acceleration and bureaucratic inertia. It operates within a hierarchy of constraints in which accountability consistently outweighs efficiency. Artificial intelligence does not alter this hierarchy; it tests it.
By accelerating production without carrying authority, AI intensifies the demands placed on review, documentation, and attestation. Processes expand not because institutions fail to adapt, but because they must preserve legitimacy under scrutiny. What appears externally as friction is internally understood as institutional self-preservation.
This outcome is predictable. As long as decision authority remains inseparable from human responsibility, systems that generate output faster than humans can confidently validate it will trigger compensatory controls. Bureaucracy grows to contain acceleration, not to resist it.
AI therefore does not collapse federal process. It exposes the depth and necessity of the constraints that govern it. Under existing legal and democratic frameworks, accountability will continue to dominate speed, and efficiency gains will remain bounded by the requirements of defensibility.
In this sense, the expansion of friction is not a transitional failure on the path to automation. It is the expected result of introducing acceleration-oriented tools into accountability-bound institutions. Accountability wins not by accident, but by design.
About the Author
Brad N. is a U.S. Army Reserve noncommissioned officer and Department of Defense civilian whose work sits at the intersection of operational process, oversight, and institutional accountability. His writing examines how emerging technologies interact with high-oversight organizations, and how governance systems behave under acceleration. The views expressed are his own and do not represent the Department of Defense or the U.S. Government.