If your inbox feels fuller than ever, imagine what it looks like inside a court registry, a scientific journal, or a lawmaker’s office right now. Around the world, institutions built on reading and responding to written documents are being hit by a wave of AI-generated text that keeps rising, with no sign of an ebb.
At the heart of this growing alarm is a simple shift. For most of history, writing took time, effort, and some skill. That natural friction quietly protected courts, universities, publishers, and public agencies from being buried in submissions. Generative AI has nearly removed that barrier. Anyone can now spin up letters, filings, essays, or research drafts in seconds, and the people on the receiving end are struggling to keep up.
From sci-fi magazines to real courtrooms
One of the earliest and clearest warning signs came from Clarkesworld Magazine. In early 2023, the respected science fiction outlet temporarily shut down story submissions after being flooded with hundreds of low-quality, AI-written pieces, a surge that reportedly multiplied its usual spam load many times over. The magazine has since reopened, using internal tools and policies to separate human stories from machine-generated ones, but even its editors admit they do not know how long that will be sustainable.
The same pattern is playing out in more serious arenas. Courts in several countries have already sanctioned lawyers for submitting briefs padded with fictitious case citations invented by chatbots, including the high-profile Mata v. Avianca decision in New York, where attorneys were fined after a judge confirmed that multiple cited cases simply did not exist. In the United Kingdom, a senior judge has publicly warned that courts must brace for a coming “tsunami” of AI-assisted legal claims that could slow down justice if safeguards are not in place.
Public confidence is beginning to wobble. A recent analysis by the National Center for State Courts notes that many Americans already fear AI will be harmful to the courts, and it highlights the risk that fake or AI-manipulated evidence could undermine trust in verdicts. When the paperwork gets shaky, faith in the outcome can follow.
Not all AI writing is bad news
It would be easy to look at this flood and conclude that AI text belongs nowhere near serious institutions. Yet the story is more complicated. As Bruce Schneier and Nathan Sanders point out in their recent analysis, AI tools can also make science and public debate more inclusive, especially for people who never had access to professional editors or English-language support.
Before AI, only well-funded researchers could routinely hire human help to polish journal articles. Now a grad student in a small lab, or a scientist writing in a second language, can lean on AI for grammar, structure, or basic clarity. The same holds for job seekers who use AI to clean up a résumé or draft a clearer cover letter. In practical terms, that means some doors that used to open mainly for those with money or insider connections can, at least in theory, open a bit wider.
The problem starts when the line between assistance and deception blurs. AI tools make it frighteningly easy to fabricate credentials, invent references, or generate persuasive but misleading political messages. Experts warn that lobby groups could run large-scale astroturf campaigns in which a single organization uses AI to produce thousands of “individual” letters that appear to come from ordinary citizens. The same software that helps a voter explain their lived experience to a representative can also help a powerful actor drown that experience out.
An arms race of text and filters
To a large extent, institutions are responding with the only tool that can keep up with AI-scale output: more AI. Journals and conference organizers are experimenting with automated checks to flag likely machine-written papers. Social networks rely heavily on automated moderation to spot spam, bots, and synthetic posts. Courts and agencies are testing AI systems that triage incoming filings so that human staff can focus on the most urgent or clearly legitimate cases.
This is quickly becoming an arms race. As detectors improve, people learn new prompts and tricks to slip around them. No editor or judge can realistically promise to distinguish human and machine writing with perfect accuracy forever. Some outlets are already shifting to a trust-based model, limiting submissions to a smaller pool of known contributors rather than leaving the door wide open to the entire internet.
What ordinary people should keep in mind
For most of us, AI writing tools are slipping into daily life in quiet ways, from drafting emails to summarizing long reports. Used honestly, they can save time and help people who are not professional writers express themselves more clearly. That matters at work, in school, and in civic life.
The red line is letting the software make things up about the world or about you. If an AI tool is inventing court cases, academic sources, or job experience, you are not just “speeding up paperwork.” You are putting your reputation at risk and adding more noise to systems that are already stretched.
At the end of the day, the tsunami of AI paperwork is not going away. Highly capable models can run on an ordinary laptop, and there is no realistic way to turn them off or ban them out of existence. What societies can do is nudge the balance so that AI assistance mostly empowers genuine voices rather than burying them. That means clearer rules, better tools for spotting fraud, and everyday users who treat AI as a helper, not a free pass to cut corners.
This article was originally published in The Washington Post.