The Weave: Context Is Everything, and Most of Us Are Working Without It
On context, the cognitive pivot, and the literacy gap that is quietly reshaping how we think, work, and fail
The word was already old when the algorithms arrived. Context comes from the Latin contextus - a joining together - from contexere: con, together, and texere, to weave. It shares a root with textile. Context is literally what is woven around something. Before any machine learned to read it, the word had already traveled through centuries of human disciplines, each one adding a layer of meaning without discarding what came before.
Rhetoricians used it first: the passage means what it means because of what surrounds it. To quote someone out of context was already a complaint in the sixteenth century. Linguists formalized it by the twentieth: J.R. Firth argued that you know a word by the company it keeps. Anthropologists stretched it further - context became the web of social circumstance that makes an action intelligible. And long before large language models arrived, software engineers had already borrowed the word for precise technical purposes: a context switch, an execution context, a rendering context. Each meaning layered over the others, the weave thickening with each generation of use.
Then artificial intelligence collapsed all of these meanings into a single urgent concept and handed it to everyone at once. The context window. The prompt context. In-context learning. Context-aware retrieval. A word that had always required some training to use carefully was now in the mouths of every knowledge worker, every executive, every curious person who had typed something into a chat interface and watched it respond.
Most of them had not been given the tools to think about what they were holding.
I. The flow no one names
Consider what actually happens when a person sits down to prompt an AI system. There is a moment before any words are typed - a rich, mostly invisible interior event. The person already holds the task they are trying to accomplish, the prior attempts that failed, the constraints they have not yet articulated, the half-formed sense of what a good result would feel like. Most of this is tacit, in the philosopher Michael Polanyi’s sense. The person knows more than they can tell.
The act of writing a prompt is an act of transposition: moving from that rich interior knowing into something the model can work with. This transposition is always lossy. What makes it onto the page is a small, imperfect selection from everything the person meant. The model then receives this incomplete signal and does something remarkable - it reconstructs intent, filling in the gaps from its training on vast human expression. The response it returns reflects that reconstruction, not the original interior. The person reads the response through their own existing mental furniture and, if they are paying attention, something shifts. A new understanding persists - not the model’s output, but the changed configuration of the human mind that encountered it.
The context flow:
Tacit interior - what you mean
Prompt text - what you write
Model inference - what it reconstructs
Response - what it returns
Reading mind - what you receive
Persisted context - what changes in you
Each transition in this chain is a potential failure point. Each one involves a different kind of context operating by different rules. And almost no one - in the popular discourse, in the enterprise deployment guides, in the prompt engineering tutorials - names this chain as a chain, or identifies where specifically things tend to go wrong.
People who have rhetorical and editorial training have a structural advantage here. Their craft is already about making interior context visible to someone who does not share it. They have spent years learning to ask: what am I actually trying to say, and does this text say it? That habit of mind - the discipline of transposition - is exactly what effective prompting requires. The skill is not new. What is new is how urgently everyone needs it.
II. The technical weave and its discontents
Run the same conversation at the architectural level, and the word context is doing entirely different work - while sounding identical.
When a client says their AI system does not have context on their data, they may mean any of several distinct technical failures. The model may lack access to the document corpus. The text chunks being retrieved may be too short to carry enough surrounding meaning. The system may not maintain conversational state across sessions. The model may be unable to reason coherently across multiple retrieved documents simultaneously. These are all context problems. They require different solutions. A client who cannot distinguish them will buy the wrong architecture and blame the vendor.
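One of those failures - the too-short chunk - is easy to see in miniature. Below is a minimal sketch in Python; the splitter, the sample passage, and the window sizes are illustrative assumptions, not any particular system's defaults.

```python
def chunk(text: str, size: int, overlap: int = 0) -> list[str]:
    """Split text into fixed-size character windows, optionally overlapping."""
    step = size - overlap
    return [text[i:i + size] for i in range(0, len(text), step)]

passage = (
    "The renewal clause applies only if the customer has not "
    "filed a dispute within ninety days of the invoice date."
)

# A naive splitter severs the condition from the clause it governs;
# a retriever that returns only the second window has lost the
# antecedent entirely.
print(chunk(passage, size=60))

# Overlapping windows carry enough surrounding text that each
# chunk remains intelligible when retrieved on its own.
print(chunk(passage, size=60, overlap=25))
```

The point is not the splitter. It is that a model handed the second window alone will reason confidently about a clause whose governing condition never reached it - and the client will describe that failure, accurately but unhelpfully, as a lack of context.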
Vector databases, embedding models, retrieval pipelines, MCP integrations, chunking strategies, token window management - each of these is a form of context engineering. Each one is an attempt to answer the question: what does the model need to know, in what form, at what moment, to reason well about this problem? That is, at bottom, the same question the individual prompter is answering when they sit down to write. The vocabulary differs. The discipline is the same.
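That question can itself be rendered as code. Here is a minimal sketch of context assembly under a token budget, assuming a scored retriever already exists upstream; the Scored type, the four-characters-per-token heuristic, and the budget figures are all illustrative assumptions, not any particular framework's API.

```python
from dataclasses import dataclass

@dataclass
class Scored:
    text: str         # a retrieved chunk
    relevance: float  # score from some upstream retriever (assumed)

def estimate_tokens(text: str) -> int:
    # Crude heuristic: roughly four characters per token for English prose.
    return max(1, len(text) // 4)

def assemble_context(chunks: list[Scored], history: str, budget: int) -> str:
    """Pack the most relevant chunks into a fixed token budget,
    reserving room first for conversational state."""
    parts = [history]
    remaining = budget - estimate_tokens(history)
    for c in sorted(chunks, key=lambda c: c.relevance, reverse=True):
        cost = estimate_tokens(c.text)
        if cost <= remaining:
            parts.append(c.text)
            remaining -= cost
    return "\n\n".join(parts)

# Illustrative use: three chunks competing for a small budget.
chunks = [
    Scored("Q3 revenue grew 12% year over year.", relevance=0.91),
    Scored("The board meets on the first Tuesday.", relevance=0.34),
    Scored("Growth was driven by the enterprise tier.", relevance=0.88),
]
# With budget=18, only the history and the single most relevant
# chunk survive the cut; everything else is silently dropped.
print(assemble_context(chunks, history="User asked: why did revenue grow?", budget=18))
```

Every decision in that toy - the relevance ordering, the split between history and retrieval, what gets dropped first - is a context-engineering judgment, and each maps onto one of the failure modes above.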
The conflation of these two senses - the human-cognitive and the technical-architectural - is not merely a semantic inconvenience. It produces real failures in real organizations. Decision-makers who understand context only in the human sense assume that better prompting will solve what is actually a data architecture problem. Engineers who understand context only in the technical sense deploy systems that work correctly but are never adopted, because the humans operating them have not developed the interior literacy to use them well. The two layers are entwined. You cannot fix one without attending to the other.
III. The pivot and its cost
There is a second problem underneath the first, and it has barely been named.
Context is not static. The person working in an AI conversation is operating in one frame - a particular set of interpretive schemas, attentional modes, vocabulary, and relational assumptions. The person who then receives a text from a child, or steps into a difficult conversation with a colleague, or comes home to a partner who needs presence rather than productivity, must make a pivot. The frame must change. The prior context must recede. A different self must come forward.
This is not a new problem. Code-switching between social registers has always required cognitive work. Erving Goffman spent a career mapping the frames through which social life is organized and the effort required to move between them. What is new is the frequency, the depth, and the absence of natural boundaries that once enforced the transitions.
The always-on smartphone created the first wave. Work ended when you left the building - that enforced pivot was a psychological protection that looked like a constraint. When the building became portable, the protection dissolved. AI engagement deepens the problem in a specific way: an AI conversation is not passive consumption. You are actively constructing meaning, transposing interior context into language, interpreting responses. The cognitive engagement is closer to a demanding intellectual conversation than to scrolling a feed. And it is available everywhere, at all times, with no natural stopping point.
The result is what we might call pivot contamination - the state in which a person is physically present in one context while cognitively still inhabiting another. You are at the dinner table but still in the document. You are in the meeting but still in the chat thread. The frame switch has not completed. The prior context bleeds.
This is a psychological cost that accrues quietly, without fanfare, and compounds. Existing research - attention restoration theory, cognitive load frameworks, the contested but phenomenologically real literature on ego depletion - circles this problem without quite naming it. What none of these frameworks fully addresses is the quality of the pivot itself: the difference between a clean switch, in which you genuinely arrive in the new context, and a contaminated one, in which you perform arrival while remaining elsewhere.
The sociological dimension is that the norms have not formed. We have developed protocols, however imperfect, for phone use at dinner. We have almost no norms for AI engagement - and because it often looks like someone quietly reading, its intrusive quality is invisible to the people around them. The knowledge workers most likely to be doing deep AI work are the same people whose labor has already colonized personal time most aggressively. The pivot stress compounds existing overload rather than arriving on fresh ground.
The ability to pivot cleanly between contexts is itself a form of literacy. It requires knowing which context you are currently operating in - a metacognitive awareness of your own frame. It requires recognizing when a pivot is demanded. It requires the actual cognitive and behavioral practices that accomplish a clean transition. And it requires honest accounting of what frequent, incomplete, or forced pivots cost. These are learnable skills. They are almost never taught, because the problem is new enough and diffuse enough that no one has assembled them into a teachable framework.
IV. Three populations, one gap
The gap in context literacy does not distribute evenly. Three populations are visible, separated not by intelligence but by the depth of their working model of the systems they inhabit.
Population one - the casual user. Treats AI like a search engine. Pulls on one thread and expects a result. Has no model of the weave. When output disappoints, concludes the tool is limited or dishonest.
Population two - the professional user. Understands the prompt layer reasonably well. Treats model inference as a black box. Frustrated by inconsistency but lacks the framework to diagnose where in the chain things failed.
Population three - the architect. Holds all layers simultaneously: the user’s interior context, prompt engineering, retrieval architecture, model inference, and the organizational knowledge environment the whole system inhabits.
The gap between the first and third populations is not primarily a technical gap. It is a meta-taxonomic gap - the capacity to reason about context at multiple levels of abstraction simultaneously. This is a skill most professional education does not build, and one the dominant discourse mostly does not name. Prompt engineering tutorials address the second population at best. They rarely help someone develop a genuine model of the whole chain.
The folk theory of the system shapes everything downstream. The casual user who believes AI is “like a search engine, but smarter” is not wrong in an unintelligent way - they are applying the mental furniture they have to a situation that requires different furniture. When their mental model does not fit the room, they do not update the model. They conclude the room is defective. This is the same pattern visible in enterprise deployments where well-designed systems fail not because the technology is wrong but because the humans operating them have not developed the interior literacy to use it as designed.
V. The executive summary problem
The executive summary was born from a legitimate need. Senior decision-makers with genuine time constraints needed to allocate attention across large volumes of material. The original contract was honest: this is a compressed version for people who will delegate the full reading to others, then act on their recommendation. The summary was never meant to substitute for understanding. It was meant to route responsibility.
What happened over time is that the routing function collapsed. The delegates stopped reading the full document too. The summary migrated from the top of the decision chain to every level of it, and the implicit contract changed without acknowledgment: the summary became the document.
Skimming is a rational adaptation to information overload. When the volume of material demanding attention exceeds any human capacity to process it fully, the mind develops triage strategies. The problem is not that people skim. The problem is that skimming has become the only mode, applied indiscriminately to material that requires something different. The skill atrophying is not reading per se. It is the metacognitive ability to recognize when a piece of material requires sustained attention and then to actually shift modes - to pivot, deliberately, into depth.
Social media industrialized the substitution of surface contact for genuine engagement. AI is now automating it. You can ask a generative tool to summarize a paper, extract three talking points, and draft a LinkedIn comment in under a minute. The comment will be coherent. It will reference the paper correctly. It will read as engaged. It will be entirely hollow - not because AI wrote it, but because no understanding preceded it. Conversations that were once self-selecting - you had to have engaged with the material to participate plausibly - are now open to anyone with a summary and a generative tool. The signal-to-noise ratio in intellectual discourse collapses. The incentive to engage deeply erodes further.
When you read a conclusion without the argument that produced it, you have an opinion you can repeat but not a position you can defend, extend, or honestly revise when challenged.
This is particularly consequential for arguments about AI itself. The popular discourse on artificial intelligence is populated almost entirely by people who have engaged with summaries. The people writing those summaries often have not read the underlying research. The result is a public conversation conducted almost entirely at the surface, about a technology that operates almost entirely in the deep.
VI. What literacy requires
The argument, assembled, is this: context literacy is the foundational competency of the current moment, it is not being systematically developed, and the cost of that gap is paid daily by individuals and organizations who do not yet have a name for what is failing.
Context literacy is not a single skill. It is a cluster of related capacities: the ability to externalize tacit knowing into clear language; the ability to reason about a system at multiple levels of abstraction simultaneously; the ability to recognize which context you are currently inhabiting and when a pivot is demanded; the ability to make that pivot cleanly; the ability to shift between surface and depth reading deliberately and knowingly; and the metacognitive honesty to know when you have understood something versus when you have acquired a talking point about it.
None of these are exotic. Most of them have names in existing disciplines. Rhetoric addresses the transposition problem. Cognitive science addresses the frame-switching problem. Information science addresses the retrieval and architecture problem. What is missing is their integration - a framework that names the full chain from tacit interior knowing to persisted human understanding, identifies where along that chain things fail, and gives people the vocabulary to diagnose and address those failures.
The slow work tradition - the practice of sustained, deliberate engagement as a discipline rather than an inconvenience - is not nostalgic. It is a technical response to a technical problem. The person who has spent years practicing the externalization of interior context, who has developed habits of deep reading and clean cognitive transition, who has built a working model of the systems they inhabit - that person does not experience AI as mysterious or threatening or dishonest. They experience it as a powerful tool that rewards exactly the skills they have spent years developing.
The goal is not to make everyone an architect. It is to raise the floor - to give the casual user enough of a model that they can ask better questions, diagnose simple failures, and recognize when they need more help. To give the professional user enough understanding of the full chain that they can identify which layer failed when output disappoints. To give the architect a vocabulary that is legible outside the technical community, so that the case for context literacy can be made to the people who make decisions about organizational learning and development.
The word was already old when the algorithms arrived. It is not going away. What we do with it - whether we build genuine understanding around it or let it become another term of art that most people use without really knowing what they are holding - is a choice. It is, in the end, a literacy choice. And literacy choices, made collectively, over time, have a way of determining which futures become possible.
This essay draws on two prior conversations: “AI Prompting and Model Context Limitations” and “Context across Prompting and AI.” It is offered as a working draft for the longer paper, and as a demonstration of its own argument: the understanding it attempts to build cannot be extracted from a summary. It must be followed through.
