The Nebula Paradox: LLM Usage in Science Fiction Writing

[Image: Close-up of a human hand writing a manuscript with a pen featuring subtle robotic elements, symbolizing technology as a writing tool.]

Science fiction has always been the genre where we test-drive uncomfortable futures before they arrive. Which makes it oddly fitting that the Science Fiction and Fantasy Writers Association (SFWA) now finds itself caught in a very familiar speculative bind: how to react when a new technology challenges long-standing assumptions about creativity, labor, and authorship.

Last week, the SFWA released updated eligibility rules for the Nebula Awards. Initially, the rules struck a balance that, while imperfect, at least acknowledged reality. Works wholly written by large language models were ineligible. Works that used LLMs in some capacity had to disclose that usage, and voters could decide for themselves what weight to give it.

Then came the follow-up e-mail. And with it, a sharp turn of the wheel.

Under the revised language, any use of an LLM at any point in the writing process now results in automatic disqualification.

The Sound of a Thousand Nastygrams

Absent transparency about what triggered the change, it’s hard not to assume the revision was driven by volume rather than reflection (think octogenarians shaking fists at stochastic parrots). People who are angry tend to write e-mails. People who are ambivalent, curious, or cautiously open-minded tend not to. The result is policy shaped by whoever shouts loudest—and historically, that’s not how the SFWA has done its best thinking.

The irony is that the original rules leaned on something the SFWA explicitly praised elsewhere: trust. Trust nominators. Trust voters. Trust the community to weigh disclosure and make informed decisions. That trust evaporated almost overnight.

The Absurd Edge Cases Nobody Thought Through

Under the new rules, the following are now disqualifying offenses:

  • An author asked an LLM for story ideas, picked one, and wrote the entire story themselves.
  • An author Googled a fact and unknowingly used an LLM-generated summary.
  • An author asked an LLM to explain a complex physical process so they could understand it well enough to write about it accurately.
  • An author bounced a plot problem off an LLM instead of a friend.
  • An author accepted grammar corrections from a tool that uses LLMs behind the scenes.
  • An author’s beta reader used an LLM to find a continuity error.
  • An author’s editor consulted an LLM about an obscure point of English grammar.

None of these involve an LLM writing the story. But all of them now “taint” the work beyond redemption.

That’s not a standard. That’s a purity test.

Authorship, Ghosts, and Convenient Blind Spots

Here’s where the paradox really sharpens.

The Nebulas honor authors, yet they do not outlaw ghostwriting. An author can have another human generate ideas, fix structure, punch up prose, solve plot problems, or even write large chunks of text—and none of that is disqualifying. The human did it, so we politely avert our gaze?

But swap that human for an LLM, and suddenly authorship becomes sacred again?

What exactly is the SFWA trying to champion here?

If the concern is authorship, the rules don’t enforce it. If it’s labor ethics, that rationale needs to be stated explicitly. If the worry is training data, then say so—and explain why “clean” or licensed models wouldn’t qualify. Right now, the rules communicate moral outrage without moral clarity.

The Thing That Actually Matters

I’m not arguing for LLM usage in science fiction writing. I’m arguing against pretending this is a simple problem with a simple solution.

Science fiction, of all genres, should understand that tools change, boundaries blur, and definitions matter. Ultimately, what readers respond to is not the workflow but the work itself: the quality of the prose, the strength of the ideas, the emotional impact, the questions that linger after the final page.

If a story does all of that—if it challenges, moves, and endures—are we really prepared to say it doesn’t count because a machine helped somewhere along the way?

That’s the real question. And the SFWA has answered it, but not in the way it probably intends. By design, the revised rules declare that none of this actually matters. Not the quality of the prose. Not the originality of the ideas. Not the emotional impact or the staying power of the story. What matters instead is whether an author ever brushed against a tool that has been arbitrarily forbidden.

In doing so, the SFWA isn’t just setting award eligibility rules—it’s implicitly asserting a boundary around what kinds of creative processes can produce “legitimate” science fiction in the first place. That’s an extraordinary amount of authority to claim, and one that merits far more dispassionate reflection than the knee-jerk reversal that brought us here. This is especially true for a genre built on interrogating new tools, new futures, and the consequences of drawing hard lines too early.

Jayson Adams is a technology entrepreneur, artist, and the award-winning and best-selling author of two science fiction thrillers, Ares and Infernum. You can see more at www.jaysonadams.com.