When Typewriters Became a Cheating Deterrent: What One Instructor’s AI Gambit Reveals About Assessment Design


The classroom door opens at 8 a.m. Students file past desks arranged in testing rows. No laptops. No phones. Just thirty-two manual typewriters, the kind with keys that jam if you strike too fast, borrowed from the theater department’s props closet and a local vintage shop that usually rents them for wedding photo booths.

This is what assessment authenticity looks like in 2025 at a mid-sized liberal arts college in Ohio, where one composition instructor decided that the only way to be certain students write their own essays is to make them do it on machines that predate spell-check. The typewriters aren’t pedagogical theory. They’re surrender terms.

The instructor—call her what she is: a contingent faculty member teaching four sections on a renewable contract—spent August watching her carefully designed writing prompts get fed into ChatGPT by students who saw no reason to struggle with thesis statements when a chatbot could generate five options in seconds. She tried detection software. False positives flagged international students whose English was improving too quickly. She tried oral defenses. Students scheduled appointments and didn’t show. She tried take-home essays with collaboration encouraged. The essays came back perfect, polished, and utterly lacking the specific struggle she knew these writers still had with subordinate clauses.

So she ordered typewriters.

The Students Who Couldn’t Type Fast Enough

Assessment authenticity used to mean something different. It meant tasks that mirrored real-world application, assignments that couldn’t be gamed by memorization or test-taking tricks. The typewriter gambit reveals how thoroughly that framework has collapsed. What we call authentic assessment now is really verification theater—an escalating series of constraints designed to prove a human did the work, not to prove the human learned anything worth doing.

The students who suffer first aren’t the ones using AI. They’re the ones who can’t type. Specifically: the international student whose written English has genuinely improved but who learned to write on a phone keyboard. The student with minor motor control issues who relies on backspace and autocorrect. The non-traditional student returning at forty whose last encounter with a typewriter was in 1987 and who now works slower, thinks slower, produces less because the medium itself is hostile.

Three students dropped the course after the first typewritten exam. Two cited anxiety. One, in an email the instructor shared with her department chair, wrote simply: “I cannot show what I know this way.”

The department chair approved the accommodation requests that followed—four students now take exams on laptops in a proctored room with internet disabled—but noted in a faculty meeting that this creates its own problem. How do you defend giving some students a faster, more forgiving tool while others hammer away at mechanical keys? The justification is disability law. The optics are a two-tiered system where some minds are trusted and others require analog surveillance.

What Gets Measured When the Medium Matters More Than the Message

Every constraint is a choice about what counts. In-class essays favor students who think quickly under pressure. Take-home essays favor students with time and quiet spaces. Oral exams favor native speakers and extroverts. The typewriter adds a new filter: it favors students whose thoughts arrive in roughly linear order, who can compose a paragraph in their head before their fingers touch the keys, who don’t rely on seeing words on a screen to know what they mean.

This isn’t a return to authentic assessment. It’s the abandonment of it. The instructor isn’t measuring writing ability anymore. She’s measuring the ability to write under conditions designed primarily to exclude machines, with human variation as collateral damage. The students who lose are the ones whose writing process depends on iteration—draft, revise, delete, reshape. The typewriter punishes the backspace thinker.

One student, a junior who’d earned A’s in two previous writing courses, stared at her typewriter for eleven minutes during the midterm before writing a single word. She finished three paragraphs in the allotted hour. Her previous essays, composed on a laptop with Track Changes visible, showed dozens of revisions per paragraph—not because she was cheating, but because that’s how her thinking worked. The typewriter didn’t reveal her authentic ability. It revealed what happens when you design assessment around exclusion rather than inclusion.

“We’re solving for the wrong variable. We should be asking what we want students to be able to do, not what we want to prevent them from doing. But prevention is easier to measure.”—Department chair at a liberal arts college

The prevention mindset has precedent. Universities banned calculators in math exams, then allowed them, then required specific models to prevent programming. They banned smartphones, then built Faraday cage exam rooms. They disabled copy-paste functions in online quizzes. Each prohibition was framed as protecting assessment authenticity, but each one redefined what was being assessed. When you ban calculators, you’re no longer testing mathematical reasoning—you’re testing arithmetic speed plus mathematical reasoning. When you require typewriters, you’re no longer testing writing ability—you’re testing typewriting speed plus writing ability.

The students left behind are the ones for whom the added requirement isn’t trivial. The calculator ban disadvantaged students with dyscalculia. The smartphone ban disadvantaged students who used phones as accessibility tools. The typewriter requirement disadvantages anyone whose composing process evolved in a digital environment, which is everyone under thirty.

The Credentialing Crisis Nobody Wants to Name

Here’s what the typewriter experiment actually demonstrates: institutions no longer trust their own credentials. A college degree supposedly certifies that a student can think critically, write clearly, solve problems. But if that certification requires mechanical typewriters to verify, what does the degree actually certify? That the student could perform these skills under surveillance? That they could perform them without digital tools they’ll use in every job afterward? That they could perform them at all, or just that they couldn’t get a chatbot to do it for them?

The employment implications arrive faster than anyone expected. A Fortune 500 HR director speaking at a higher education conference in March described a new interview protocol: candidates with degrees from 2023 onward face additional writing assessments, completed on-site, because hiring managers no longer assume the degree itself proves writing competence. The assumption isn’t that all recent graduates cheated. The assumption is that universities can’t tell who did.

That loss of faith doesn’t hurt universities most. It hurts the students who didn’t cheat, who developed real skills, who now carry credentials the market has decided to discount. They’re paying the reputational price for an institutional verification failure they didn’t cause.

Meanwhile, the students who did use AI throughout their coursework face a different problem. They have degrees but lack the skills those degrees supposedly represent. Some will fail upward into jobs that also rely on AI tools—the question of authentic human capability deferred indefinitely. Others will hit walls in work environments that require unassisted competence and discover they cannot deliver it. The gap between credential and capacity has always existed. AI widens it to a chasm.

The Assessment Arms Race Has No Winner

The typewriter is a temporary fix degrading in real time. Students have already started using AI to outline essays in advance, memorizing structures and arguments, then reproducing them on typewriters with minor variations. The instructor knows this. She can see the rhetorical patterns that appear across multiple exams, the suspiciously sophisticated argumentative moves from students who struggle in class discussion. But she can’t prove it. The typewriter proves the words were typed by human hands. It proves nothing about where the ideas came from.

Next semester, she’s considering oral exams exclusively. Then she’ll face a different problem: students who can’t articulate in speech what they could express in writing, students with speech anxiety, students for whom English is a third language and who need writing’s slower pace to access complex ideas. Each attempt to restore assessment authenticity excludes a different subset of learners.

Some institutions are moving in the opposite direction. A computer science program in California now allows unrestricted AI use in all assignments, betting that the skill worth certifying is “working effectively with AI tools” rather than “programming without assistance.” Students build more sophisticated projects. Employers report mixed results—impressive portfolios, but new hires who can’t debug without AI assistance, who lack intuition about why code works because they never struggled through the understanding that failure builds.

The assessment authenticity crisis isn’t about finding the right tool restrictions. It’s about the collapse of the assumption that academic performance and learned capability are the same thing. They never were, exactly. But the gap was manageable. A student could cheat on a test and still probably learn something from studying for it. A student could plagiarize a paper and still probably absorb some content from the sources. AI severs even that tenuous connection. A student can receive an A without reading a single page or understanding a single concept.

Universities are designed around the assumption that assessment correlates with learning. That assumption is breaking.

Who Loses When We Can’t Tell Who Learned

The students using AI aren’t the primary victims of this verification crisis, though some will become victims of their own skill gaps eventually. The students abstaining from AI aren’t victims either, though they’re subsidizing others’ shortcuts with their own effort. The real losers are the ones in between—students who use AI sometimes, for some tasks, trying to figure out where the line is when their institutions can’t tell them because their institutions don’t know.

These students get contradictory guidance. One professor bans all AI use. Another requires it for brainstorming. A third says “use it responsibly” without defining responsibility. Students learn that assessment authenticity is situational, arbitrary, unmoored from principle. They learn that the goal isn’t developing capability but producing work that passes verification, whatever that requires in each context.

This cohort will graduate uncertain of their own competence. They used AI for the literature review but not the analysis. They used it to polish prose but generated their own arguments. They think. When they struggle in future work, they won’t know if they’re facing a normal learning curve or revealing a gap that AI papered over. The uncertainty itself is corrosive.

Meanwhile, the students who can’t afford college at all watch this credentialing crisis with interest. If degrees no longer reliably signal capability, if employers are adding their own assessments anyway, why take on debt for credentials the market is discounting? Alternative certification accelerates. Bootcamps proliferate. Companies build internal academies. The students who lose are the ones who needed the traditional degree pathway’s structure, support, and social capital—the pathway that’s losing legitimacy precisely when they need it most.

The Typewriter Is a Symptom

Assessment will not return to a pre-AI baseline. The typewriter instructor knows this. She’s already planning for a fall semester where typewriters won’t work anymore, where students will have had time to develop workarounds, where she’ll need a new constraint. She’s tired. This is her seventh year as contingent faculty, her fifth course prep this academic year, and she’s spending weekends sourcing verification mechanisms instead of designing learning experiences.

The institutions aren’t solving this. They’re deferring it. Task forces meet. Policies are drafted. Committees discuss academic integrity at length. Then individual instructors, usually the ones with the least power and most precarious employment, implement whatever they can manage with the resources they have. Typewriters. Oral exams. In-class essays on paper. Each a small retreat from digital pedagogy that took decades to develop.

FetchLogic Take

Within eighteen months, a major accrediting body will require member institutions to verify assessment authenticity through methods that directly measure human capability, not just detect AI absence. This will force universities to choose: either redesign assessment around competencies that can be verified in person and in real time (narrowing what gets credentialed), or adopt multi-stage verification processes that increase costs and time-to-degree (narrowing who can afford credentials). Either path accelerates stratification. Elite institutions will afford robust verification and maintain credential value. Under-resourced institutions will either abandon verification rigor (credential devaluation) or lose students to alternative pathways that skip the pretense. The students caught between—those who need credentials but attend institutions that can’t afford to verify them—will hold degrees the market trusts least precisely when they need that trust most. The assessment crisis is a class filter in formation.

