What I Learned From 14 Technical Writers on the Future of AI

By
Josh Fechter
I’m the founder of Technical Writer HQ and Squibler, an AI writing platform. I began my technical writing career in 2014 at…
Quick summary
AI in technical writing is powerful, but only if you understand its role. I’ve used AI in documentation workflows because it can help you move faster, generate first drafts, enforce consistency, and repurpose content across audiences.

I’ve used AI tools in my documentation workflow enough to recognize the emotional trap.

You paste in a prompt, get back something clean, and your brain gives you that little dopamine hit that says “nice, we’re done.” Then a user pings support because the procedure skips a prerequisite, the concept explanation is correct but useless, or the API examples look plausible but do not run.

I treat AI as nothing more than a drafting accelerator. It can’t become a documentation authority.

Technical Writers’ Perspectives on AI

Below are the statement sections from the original interviews, kept as named perspectives, with expanded context and my interpretation of each.

1. Arthur C. Brooks, Author and Professor

Arthur C. Brooks frames the problem as two kinds of intelligence: fluid intelligence, the raw ability to generate and process facts, versus crystallized intelligence, the wisdom gained from lived experience.

In documentation terms, this shows up when AI can define an API field but cannot guide a user through the messy reality of how systems behave under pressure. It can write a plausible explanation, but it does not understand the ecosystem of constraints, edge cases, and human expectations that make docs useful.

When I use AI, I keep this framing in mind. I want the model to help with fluid work, like drafting and summarizing, while I keep ownership of crystallized work, like deciding what matters, what to emphasize, and what users will misunderstand.

2. Tirzah Alexander, Technical Writer and Editor at SAIC

Tirzah Alexander notes that AI can assist writers but cannot replace human writers when nuance and engagement matter.

I read this as a reminder that technical writing is not just technical. It’s communication. If your docs are accurate but dull, users stop reading. If your docs are engaging but vague, users get stuck. Humans balance that tension better than models.

AI can still help here in the early drafting stage. It can generate multiple ways to explain the same concept, which gives you options. But it cannot decide which explanation matches your users’ mental model unless you give it strong direction.

3. Jennifer Nelson, Senior Technical Writer

Jennifer Nelson sees AI as useful for drafting, formatting, and grammar checking, while emphasizing that key responsibilities remain human, like collaborating with SMEs and verifying processes.

This matches the workflow that feels safest to me. AI can draft, but humans validate. AI can format, but humans decide the structure. AI can propose language, but humans own voice and accuracy.

If you are trying to replicate a specific voice across a doc set, style overrides and governance become critical. Otherwise, you end up with a doc portal that sounds like ten different assistants wrote it, because they did.

4. Rachel Nix, Technical Writer at Rho

Rachel Nix emphasizes human intuition and empathy, and the idea that humans and AI can complement each other. 

I love this framing because it removes the false choice. It’s not “AI or humans.” It’s “AI plus humans,” with a clear division of responsibility.

Empathy is not a soft skill in documentation. It’s a performance skill. The more you anticipate confusion, the fewer support tickets you generate. AI can help you draft explanations, but you still need a human to feel the user’s frustration and design docs to reduce it.

5. Cameron Fletcher, Technical Writer at Aptera Motors

Cameron Fletcher points to human ingenuity as essential when documenting novel ideas and inventions, while still seeing AI as useful for brainstorming and outlines.

This is a great reminder that AI thrives on patterns. Novelty is where it struggles. If your product is new, AI will default to the closest thing it has seen, which can be misleading.

Where AI helps is getting you unstuck. It can generate outline options, propose structures, and help you create a drafting runway. But you still need to bring the real insight, because the model cannot invent product truth.

6. Steve Arrants, Technical Writer at Agilent Technologies

Steve Arrants compares documentation loss to technical debt, pointing out that organizations often ignore the debt created when writers leave.

This resonates because AI can speed up doc creation, but it does not solve doc maintenance. If anything, AI can accelerate doc debt if teams generate content without a plan for ownership and updates.

The fix is boring and structural: governance, clear ownership, and maintenance cycles. AI can help you identify drift, but humans still need to decide what to update and when.

7. Heather Flanagan, Lead Editor at OpenID Foundation

Heather Flanagan warns that the biggest future challenge may be teams taking shortcuts with AI when they skip reviews.

This is the risk I see most often: AI becomes permission to skip the hard parts. People generate text and then treat it as done.

Review is not optional. It is the quality gate that protects users. If you adopt AI, you should tighten the review, not loosen it, because the output can be wrong in unpredictable ways.

8. Shane Copland, Senior Technical Writer at Amazon

Shane Copland highlights that AI can deliver vague output when precision is required, and that “AI maintenance” becomes a real skill.

I interpret “AI maintenance” as the ongoing work of keeping prompts, constraints, and context aligned with the product. Tools change, models change, and your documentation changes. If your AI workflow is not maintained, it decays.

This is one reason I like agent-style workflows. They make maintenance explicit. You update the rules in one place, and the system follows them.

9. Edith E Bell, Ph.D., Technical Writer

Edith E Bell, Ph.D., emphasizes credibility and due diligence, pointing to proofreading and fact-checking as non-negotiable.

This perspective is a reality check: AI can make writing faster, but it cannot make responsibility disappear. When a doc is wrong, the user does not blame the model. They blame the company.

Proofreading and editing are not just editorial polish. They are risk management.

10. Elizabeth C. Haynes, Technical Writer at ScienceIO

Elizabeth C. Haynes warns that companies may assume AI means they no longer need technical writers, and that this mindset can cause documentation quality to suffer. 

I see this as a leadership problem, not a tooling problem. If leaders think documentation is just “words,” then AI looks like a replacement. If leaders understand documentation as user enablement, then AI looks like an assistant.

The irony is that the better AI gets at producing text, the more companies will need humans to protect quality, structure, and truth.

11. Jeanette DePatie, Senior Technical Writer

Jeanette DePatie argues that technical writing stands out when it adds insight and foresight, and that writers may need to distinguish their voice as AI becomes more common.

This lands for me because it focuses on differentiation. If anyone can generate a generic explanation, then generic explanations stop being valuable.

Insight is what makes docs feel like they were written by someone who understands the product, while foresight is what prevents user confusion before it happens.

12. Jen Heller, Technical Writer at CuneXus & National Student Clearinghouse

Jen Heller centers audience analysis and the human reality of frustration when users get stuck.

This is where I think AI can support writers without replacing them. AI can help you generate alternate explanations for different audiences. It can also help you identify where your docs assume too much knowledge.

But you still need a human to decide what the user feels at each step, and to design the documentation experience accordingly.

13. Kevin Halaburda, Senior Technical Writer at Keithley Instruments

Kevin Halaburda emphasizes the importance of trust in human-produced content and points out a procedural failure mode in which missing prerequisites surface late in the process.

This is one of the most practical warnings in the entire set of perspectives. AI errors are not always obvious. They can be delayed.

That’s why quality assurance testing matters. If you publish procedural content, you should test it or have it tested. If you cannot test it, you should not pretend it is tested.

14. Scott Clark, Technical Writer

Scott Clark expects businesses to rely on generative AI for documentation, while still needing human writers who can empathize with users.

He also points toward a shift in focus: more fact-checking and copyediting to ensure AI-generated content is accurate and free of bias.

That aligns with what I see: the writer becomes more editorial, more strategic, and more responsible for the system, not just the sentences.

Also, if you want to master AI technical writing skills, check out our AI technical writing course.

Benefits of AI in Technical Writing

AI’s biggest advantage in technical writing is compression. It compresses time, reduces friction, and helps you move faster through the necessary but repetitive parts of the job.

The highest-value teams use AI where it’s okay to be wrong early, as long as you get to something useful fast. That sounds abstract, so let me make it concrete: AI excels at first drafts, refactoring, consistency checks, and content suggestions. AI is dangerous when you ask it to invent product truth it does not have.

Faster content production without sacrificing clarity

The most obvious win is speed. Generative AI can turn scattered notes, meeting transcripts, and half-baked outlines into a readable draft that you can shape into documentation.

What matters is how you use that speed. If you take the draft as final, you ship fast and ship wrong. If you take the draft as clay, you get the best of both worlds: momentum plus judgment.

Improved accuracy through consistency tooling

A lot of people hear “AI” and think “it writes.” In practice, some of the best AI value is editorial.

AI tools can help with grammar checking, consistency in terminology, and real-time feedback on readability. They can flag when you call the same thing “token,” “API key,” and “auth key” across three pages, even if each page sounds fine on its own.

If you’ve ever inherited a doc set with multiple writers and zero governance, you know how painful this is to fix manually. AI-assisted consistency checks can turn that cleanup from a week-long project into a steady background habit.
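As a sketch of what an AI-assisted consistency pass automates, here is a minimal hand-rolled version of the terminology check. The synonym groups and the sample page are invented for illustration; a real tool would also handle inflections and context:

```python
import re

# Hand-maintained synonym groups for illustration: the key is the approved
# term, the values are variants that should be flagged wherever they appear.
TERM_GROUPS = {
    "API key": ["auth key", "token key"],
    "sign in": ["log in", "login to"],
}

def find_term_drift(text, groups=TERM_GROUPS):
    """Return (approved_term, variant_found, offset) for every variant hit."""
    hits = []
    for approved, variants in groups.items():
        for variant in variants:
            for match in re.finditer(re.escape(variant), text, re.IGNORECASE):
                hits.append((approved, match.group(0), match.start()))
    return hits

page = "Paste your auth key into the header, then log in to the console."
for approved, variant, offset in find_term_drift(page):
    print(f"offset {offset}: use {approved!r} instead of {variant!r}")
```

Run across a whole doc set, a check like this turns terminology cleanup from an archaeology project into a lint step.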

Content governance gets easier to enforce

When I say “content governance,” I mean the rules that keep documentation predictable. This includes your style guide, terminology list, preferred procedural patterns, and review standards.

An AI-powered platform can help you enforce those rules through content scoring, automated checks, and content suggestions that align with your standards. Used well, that’s less time policing formatting and more time improving information architecture and user outcomes.

Content reuse at scale

Content reuse is where AI starts to feel like leverage instead of convenience.

If your docs are modular, meaning they’re built from reusable chunks like concept blocks, prerequisite sections, warnings, and step patterns, then AI can help you recombine and adapt those chunks for different audiences. It can also help you identify duplicate content that should be consolidated or highlight where two pages contradict each other.

This does not replace structured authoring. It rewards it.
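A toy sketch of the modular idea, with invented chunk names: pages are assembled from named reusable blocks, so fixing a chunk once fixes it everywhere, and tooling (AI or otherwise) can operate on the chunk inventory rather than on freeform pages:

```python
# Invented chunk inventory for illustration; real systems (DITA, docs-as-code
# with includes) store these as files or topics rather than a dict.
CHUNKS = {
    "prereq:api_key": "Before you start, create an API key in Settings.",
    "warn:rate_limit": "Warning: requests are rate limited to 100 per minute.",
    "steps:create_widget": "1. Call POST /v1/widgets with a widget name.",
}

def assemble(chunk_ids, chunks=CHUNKS):
    """Build one page from named chunks, failing loudly on unknown names."""
    missing = [cid for cid in chunk_ids if cid not in chunks]
    if missing:
        raise KeyError(f"unknown chunks: {missing}")
    return "\n\n".join(chunks[cid] for cid in chunk_ids)

quickstart = assemble(["prereq:api_key", "steps:create_widget"])
reference = assemble(["prereq:api_key", "warn:rate_limit", "steps:create_widget"])
```

The quickstart and the reference page share the same prerequisite chunk, which is exactly the property that makes AI-assisted reuse and contradiction detection tractable.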

Quality assurance support that catches “doc debt”

Technical writers have a version of technical debt. I call it doc debt: small inconsistencies and outdated instructions that pile up until nobody trusts the docs.

AI can help reduce doc debt by scanning for drift, detecting inconsistencies across sections, and suggesting updates when terminology, UI labels, or workflow steps change. It’s not perfect, but it’s helpful as an early warning system.

If you want a broader overview of tools that support these workflows, I keep a running list in software documentation tools.

Best Practices for Using AI Tools

If I could tattoo one sentence on every doc team’s forehead, it would be this: AI is not a source, it’s a generator.

The best practices that follow are “how to keep AI in the generator lane” while humans keep ownership of truth, intent, and accountability.

Start with constraints, not prompts

Most people prompt AI as if it were a helpful coworker. That’s the wrong mental model.

I get better results when I treat prompts like a specification. I define the audience, the use case, the exact output format, and what must be included. I also define what must not be included, especially anything that requires product truth the model does not have.

The point of constraints is to prevent confident guessing. You are trying to make it hard for the model to improvise.
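Here is one way to make “prompt as specification” concrete. Everything below is an illustrative sketch, not any tool’s real schema; the point is that the constraints travel with every request instead of living in someone’s head:

```python
# Illustrative spec-style prompt template; the field names are made up.
SPEC_TEMPLATE = """\
Audience: {audience}
Use case: {use_case}
Output format: {output_format}
Must include: {must_include}
Must NOT include: {must_not_include}
If a required fact is not present in the context below, write MISSING
instead of guessing.

Context:
{context}
"""

def build_prompt(audience, use_case, output_format,
                 must_include, must_not_include, context):
    """Render the constraint spec into a single prompt string."""
    return SPEC_TEMPLATE.format(
        audience=audience,
        use_case=use_case,
        output_format=output_format,
        must_include="; ".join(must_include),
        must_not_include="; ".join(must_not_include),
        context=context,
    )

prompt = build_prompt(
    audience="backend developers new to the product",
    use_case="first-time webhook setup",
    output_format="numbered steps, one action per step",
    must_include=["prerequisites section", "verification step"],
    must_not_include=["invented endpoint names", "timing guarantees"],
    context="(paste approved source material here)",
)
```

The “write MISSING instead of guessing” line is the escape hatch: it gives the model a sanctioned way to fail instead of improvising.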

Provide strong API reference contexts

Developer documentation is where AI can help and hurt at the same time.

AI can produce decent descriptions of endpoints, parameters, and response objects, but it will also invent an example payload that looks realistic but is wrong. So I treat “API reference context” as a required input, not an optional detail.

When I use AI for API docs, I feed it the known facts I trust. That can include endpoint definitions, field lists, error codes, and a few real examples. Then I instruct it to never invent fields or behavior, and to highlight any place where the provided context is missing.
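A sketch of that input discipline, with an invented endpoint: the trusted facts are structured data, and the instructions explicitly forbid the model from going beyond them:

```python
import json

# Invented endpoint facts for illustration; in practice these would come
# from an OpenAPI spec, code annotations, or an SME-reviewed source of truth.
ENDPOINT_FACTS = {
    "method": "POST",
    "path": "/v1/widgets",
    "fields": ["name", "size", "color"],
    "errors": {"400": "validation failed", "401": "missing credentials"},
}

def api_doc_prompt(facts):
    """Wrap trusted facts in explicit no-invention instructions."""
    return (
        "Write reference documentation using ONLY the facts below.\n"
        "Never invent fields, parameters, payloads, or behavior.\n"
        "Where a reader needs something that is not in the facts, write\n"
        "[CONTEXT MISSING] so a human can fill the gap.\n\n"
        "Facts:\n" + json.dumps(facts, indent=2)
    )

prompt = api_doc_prompt(ENDPOINT_FACTS)
```

The `[CONTEXT MISSING]` markers then become a review checklist: each one is a question for an SME, not a gap for the model to paper over.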

If you want my step-by-step approach, I can teach you how to write API documentation.

Use agent-style documentation workflows

AI becomes safer when you make it boring.

By “boring,” I mean you build repeatable workflows that constrain output. For example, you can maintain agent markdown files that define your voice, terminology rules, formatting constraints, and approved patterns for concepts, tasks, and references.

You can also build context agent instructions that tell the model how to behave. Things like: cite only from approved sources, never guess steps, use the standard warning template, and output in the required section order.

This is the difference between “AI as a toy” and “AI as a controlled system.”

Build in automated content checks

A reliable AI workflow should not depend on a writer’s mood.

Automated content checks are your guardrails. These can include scanning for banned phrases, enforcing terminology consistency, verifying structure, detecting missing prerequisites, and flagging step sequences that violate your process template.

This is also where style overrides matter. If you have a distinctive voice or a specific way you structure procedures, you want those rules enforced automatically.
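A minimal sketch of such guardrails, with invented rules: the checks run the same way on every page, whether a human or a model drafted it:

```python
import re

# Illustrative guardrails; a real team would keep these in versioned config.
BANNED_PHRASES = ["simply", "obviously", "just click"]
REQUIRED_SECTIONS = ["## Prerequisites", "## Steps", "## Verification"]

def lint_page(markdown_text):
    """Return a list of guardrail violations for one doc page."""
    problems = []
    for phrase in BANNED_PHRASES:
        if re.search(re.escape(phrase), markdown_text, re.IGNORECASE):
            problems.append(f"banned phrase: {phrase!r}")
    for heading in REQUIRED_SECTIONS:
        if heading not in markdown_text:
            problems.append(f"missing section: {heading!r}")
    return problems

draft = "## Steps\n1. Simply run the installer.\n"
for problem in lint_page(draft):
    print(problem)
```

Wired into CI, a check like this is what makes the review gate independent of any one writer’s attention on any one day.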

Collaborate with subject matter experts like a pro

AI does not reduce your need for SMEs. It increases it because it lets you produce drafts faster, so you can ask SMEs to review more often.

The trick is to change what you ask them to do. I get better SME reviews when I give them something specific: validate steps, confirm constraints, verify examples, check edge cases. I avoid asking “is this good?” because you’ll get vague feedback or no feedback.

If your doc describes a process, verifying processes is not optional. You either run it yourself or get confirmation from the system’s owner.

Use AI for personalization, not invention

AI-assisted content customization is one of the better use cases when it’s grounded in approved content.

For example, you can take a single concept explanation and ask the model to produce a beginner-friendly version, an advanced version, and a version tailored to a specific role. The model is not creating truth. It’s translating and formatting approved truth.

This is one of the safest ways to use AI, and it’s also one of the most useful, because technical documentation often fails when it assumes every reader has the same context.

If you want a structured way to practice these workflows, I built an AI technical writing certification course.

Challenges and Risks of AI Integration

AI makes it easy to produce content. That’s the problem.

In technical writing, the hard part is not output. The hard part is correctness, usefulness, and trust. AI can degrade all three if teams adopt it without discipline.

AI hallucinations in procedural documentation

Hallucinations are not always blatant. Sometimes the model invents a command. Sometimes it omits a critical prerequisite. Sometimes it combines two workflows into a single hybrid flow that no system supports.

Procedures are vulnerable because users follow them in a linear fashion. If the error is at step two, they stop early, and you catch it. If the error is at step nine, they waste time, lose confidence, and often blame themselves first.

That’s why procedures require human oversight and a real run-through. If you cannot run it, you need SME confirmation. Anything else is gambling with your users.

Over-reliance turns AI into an accountability sink

An accountability sink is when nobody feels responsible.

I’ve seen teams ship AI-generated content with a vague assumption that “someone will catch it.” That someone often does not exist when timelines are tight and ownership is unclear.

AI can make this worse by making the output sound professional. It has the rhythm of authority. Humans are vulnerable to that.

So your process has to make accountability explicit. Someone owns accuracy. Someone owns approvals. Someone owns ongoing maintenance.

Automation can amplify bad documentation

Bad documentation is not always wrong. Sometimes it’s just unhelpful.

AI can generate accurate explanations that still fail to answer the real question. They describe a concept but do not show how to use it. They list parameters but do not explain behavior. They provide steps but not the why.

The danger is that automation makes it cheaper to produce that kind of content at scale. You can flood your doc portal with words and still increase the number of support tickets.

This is also where “unique angle” matters. Human writers bring product insight, user empathy, and a point of view about what matters. AI can imitate tone, but it cannot decide what matters without human direction.

Compliance risks and industry regulations

AI raises compliance risks in two ways.

First, there’s the content risk: incorrect instructions, missing warnings, or misleading claims in regulated environments. Second, there’s the process risk: who approved what, what sources were used, and whether you can audit changes.

If you work in a regulated domain, you need content governance that treats AI as a controlled tool. You need proofing and editing standards that align with the content’s risk level. You need a clear rule for what cannot be generated.

For a practical risk lens to help teams discuss controls and accountability, I reference the NIST AI Risk Management Framework.

Human emotions and user trust

Here’s something we do not talk about enough: documentation is emotional.

When users get stuck, they feel frustrated, confused, and embarrassed. Great docs reduce that. Bad docs amplify it.

AI can help you write cleaner sentences, but it cannot feel the user’s frustration. That’s why user feedback still matters, and why human writers still matter. The goal is not to produce text. The goal is to reduce user pain.

Changing Roles and Responsibilities

AI shifts technical writing from production-heavy to system-heavy.

When AI can draft fast, the writer’s value moves toward the parts of the job that protect quality: context, architecture, editorial judgment, and governance.

Context curator becomes a real role

Context curation is the work of assembling the inputs that make content trustworthy.

This includes maintaining approved sources, keeping terminology lists up to date, defining what the model is allowed to use, and ensuring the model sees the correct product context. In teams using retrieval workflows like RAG, this becomes even more important because the quality of retrieval determines the quality of output.

If the retrieval set is messy, you get messy docs. If it’s clean, AI becomes much more useful.
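To make the point concrete, here is a toy retrieval step, with invented chunks and naive word-overlap scoring standing in for embeddings. However the scoring works, the model can only cite what retrieval returns, so the chunk inventory caps the output quality:

```python
# Invented approved chunks; a real pipeline would embed and index these.
APPROVED_CHUNKS = [
    "To rotate an API key, open Settings > Keys and click Rotate.",
    "Webhooks retry failed deliveries three times with exponential backoff.",
    "Invoices are generated on the first day of each billing month.",
]

def retrieve(question, chunks=APPROVED_CHUNKS, top_k=1):
    """Rank chunks by shared lowercase words with the question (toy scorer)."""
    question_words = set(question.lower().split())
    ranked = sorted(
        chunks,
        key=lambda chunk: len(question_words & set(chunk.lower().split())),
        reverse=True,
    )
    return ranked[:top_k]

context = retrieve("How do I rotate an API key?")
```

Curating `APPROVED_CHUNKS`, in this framing, is the context curator’s job: every stale or contradictory chunk is a wrong answer waiting to be retrieved.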

From writer to content director

I’m seeing more writers step into a content director mindset.

That means you’re less focused on writing every sentence and more focused on orchestrating the documentation system: templates, publishing workflows, review paths, style rules, and quality assurance. You become the person who keeps the doc machine honest.

Editor-in-chief skills matter more

AI makes it easy to generate content, but not to maintain quality.

So copyediting, fact-checking, and information architecture become more valuable, not less. You need someone who can spot contradictions, reduce redundancy, and ensure every page maps to a user task.

You also need someone who can protect consistency across the doc set, because AI can introduce subtle drift if you let it generate freely across different sections.

Language model training as a documentation skill

Most technical writers will not train models in the deep machine learning sense. But more writers will shape model behavior through prompts, constraints, and curated context.

That’s a form of training in practice, and it becomes part of the job. The better you are at shaping AI output, the more leverage you get.

Avoiding the accountability sink

If you adopt AI without clear ownership, the “writer role” can become an accountability sink where everyone assumes the system is correct because it’s automated. Strong teams fight this by making responsibilities explicit: who owns accuracy, who owns governance, who owns approval, and who owns updates.

Future Outlook for Technical Writing Careers

I don’t buy the narrative that AI will eliminate technical writers. I think it will eliminate a certain kind of technical writing workflow: the workflow where value is measured purely by word count and output volume.

The future rewards writers who can own outcomes, not just deliverables.

Assistive technical writing jobs and new expectations

“Assistive technical writing” is a useful framing because it acknowledges a real shift: many writers will be expected to use AI-powered tools as part of their standard workflow.

That does not mean you become a prompt engineer all day. It means you become someone who can use automation responsibly, while still protecting documentation quality.

Job security comes from ownership and credibility

Job security in technical writing has always been tied to trust.

When a team trusts you to keep docs accurate, useful, and aligned with the product, you become essential. AI does not change that. It raises the bar.

Writers who can build governance systems, collaborate effectively with developers, manage content delivery methods, and translate user feedback into doc improvements will remain valuable.

Career pivots and collaboration with developers

AI can nudge some writers toward career pivots into content strategy, information architecture, knowledge management, and documentation operations.

It can also deepen collaboration between writers and developers, because the fastest way to reduce AI hallucinations is to tighten the loop with the people who own the system behavior.

Distinctive voice still differentiates

AI can mimic tone, but it struggles to maintain a distinctive voice that is consistent across a doc set and aligned with product values.

Writers who can craft a distinctive voice while staying precise will stand out. This is not marketing fluff. It’s the difference between docs that feel alive and docs that feel like generic help center filler.

If you want the bigger picture of how I think about progression in this field, I recommend following the technical writer career path.

Conclusion

AI can improve technical writing when you use it to accelerate drafts, enforce consistency, and support content reuse under strong governance. It can also degrade documentation fast if you use it as a substitute for verification, SME collaboration, and human editorial judgment.

My working rule is simple: let AI help you write, but never let it replace your responsibility to be right.
