An Interview with Emily Hockaday, Senior Managing Editor of Asimov's Science Fiction Magazine 

 

by Jason Ellis

Published June 2025

 


Abstract: In this timely interview, Jason W. Ellis delves into the complex and rapidly evolving landscape of generative artificial intelligence (AI) with Emily Hockaday, the senior managing editor for Asimov’s Science Fiction and Analog Science Fiction and Fact. The conversation spans the ethical, environmental, and cultural implications of AI in literary production. Hockaday, building on her guest editorial in the May/June 2023 issue of Analog, reflects on the initial innocence of AI tools and their subsequent transformation into potent forces capable of threatening the livelihoods of authors and artists. She explores the tensions between the rights of creators and the burgeoning capabilities of AI, the environmental costs of AI training, and the potential for AI to flatten cultural diversity by favoring the works of well-known authors over new, emerging voices. As Hockaday and Ellis navigate these issues, they touch on the broader societal implications of AI, including the potential for misinformation and the erosion of human agency. Throughout, the interview grapples with a fundamental question: how can we ensure that the human voice remains central to the creation and curation of literary culture with the rise of generative AI?

Keywords: interview, literature, science fiction, artificial intelligence, editing 


 

 

 

 

 

Jason Ellis: Hello, my name is Jason Ellis and I'm an English professor at the New York City College of Technology, part of the CUNY system, which is also called City Tech. My guest in this interview for a special issue of NANO, New American Notes Online, is Emily Hockaday, who is the managing editor and poetry editor for Asimov’s Science Fiction and Analog Science Fiction and Fact. It's been a great honor for me to get to know Emily over the past six or seven years through collaborations on the annual City Tech Science Fiction Symposium, which includes an Analog Writers Panel and the announcement of the winner of the Analog Award for Emerging Black Voices. Emily agreed to talk with me today about generative artificial intelligence, which we've discussed before, and about what effects it's having on culture in general and the science fiction publishing industry specifically. She added to and expanded on her thoughts in a thought-provoking guest editorial in the May/June 2023 issue of Analog Science Fiction and Fact. And so I thought, why don't we begin there, Emily: can you tell us a little bit about the editorial on generative AI and SF publishing?

Emily Hockaday: Well, I wrote this very early on in the advent of AI reaching the masses. So I think I was writing it a full year ago. It was, yeah, October. I think we had our conversation in September, and that started the seeds of this editorial. Thank you for that, by the way, Jason. And I was writing it, yeah, in October and November. So it was early on, but we had already run through Craiyon, or DALL·E Mini, in the spring. I remember lots of friends going on there and laughing at the ridiculous images that came up from ridiculous prompts. And it was just before, I think, OpenAI, before ChatGPT had been released. But there was another algorithm, a text-generating algorithm, that had come out, and that had inspired me to get this editorial out because it seemed like it was just waiting to explode. And then of course it did.

Jason: And thinking about where we're at now where there are real threats to people's jobs, it seems like all this started with toys, that it was something silly and playful and has kind of turned into something much scarier or more advanced than that now.

Emily: Yeah, I think because living in a capitalist society, it's just like corporations can see a way to use this to raise profits and they're going to try to. Unfortunately, that's just, I think, how our society functions. I'm still clinging on desperately to optimism that this will fail. I probably am wrong. But I'm still trying to stay hopeful that it won't be as easy to replace people as I think a lot of companies are imagining it will be. 

Jason: Right. And I guess you tied in with these issues of what effect generative AI is going to have on society and creative work. It seems like there are these tensions that you explore in your editorial. And maybe one of the biggest that applies to SF is the rights of authors and artists versus the fair use of their works to train AI technologies. I'm wondering if you have some thoughts about a way to resolve that, or do we need to have a total rethink of the rights of the individual versus the rights of the commons, all of us, in terms of how we use the works that get published in Analog or in books?

Emily: Well, for me personally, my personal ethics, I think it's wrong to use people's writing to train AI. And of course, just in the past week, I've been seeing so many authors that I follow, so many authors in the industry, even other poets, whose work was included in this big grab that OpenAI did from the website where they could torrent books. And I don't know what's going to happen with that. It seems like there are a lot of people making complaints and maybe there'll be a class action lawsuit. I don't know. Is it Microsoft, I think, who's on the hook now?

Jason: I'm not sure exactly how some of those lawsuits are playing out, but certainly they've been making investments in OpenAI to incorporate it into Bing search, which I think in some ways is making AI much more accessible to a lot of people. On the one hand, some are using it for playful purposes. But then there are other people that are using it either for business or even to, in a sense, ape the work of creative artists, to make stories that are similar in style or poems that mirror the work of someone else. And it does so with very few barriers. You get a Microsoft account, you log in, and then you can start working with it. So I guess maybe there's that added aspect of how easy it is now to access these technologies for someone who just wants to find out what they're about, versus someone who may be using them for reasons that may not yet be termed illegal but that we might think are unethical: to make money, in a sense, or gain some notoriety or prestige from producing something that looks like the work of Heinlein or the work of Asimov or something like that. And so I guess another tension that exists is that there's competition now: regardless of the legality of these things, there are people in the marketplace producing AI-generated stuff, whether it's explicit or not. And then the human artists that are toiling away, trying to continue producing art, are essentially competing not only amongst themselves but also with the generative AI stuff. And I guess, from your perspective, not just as an editor but as a poet, how does that make you feel? What does that make you think about?

Emily: Well, I will say that certainly the text generating has gotten better. Maybe six months ago, the text generator couldn't even do free verse poetry. As far as I saw, you could ask ChatGPT 3 for a poem and it would give you something in rhyming couplets or ABAB scheme or tercets, but always rhymed, always metered. Plenty of modern poets write that way, but not predominantly. So I was feeling a little glib, I would say, at the time. I was like, oh, these algorithm-based generators can't even make a modern free verse poem. Well, now they can. That's changed. That's gotten more sophisticated, but the poems are still bad. You know, I saw a friend who asked an AI generator to write like her. I'm not going to name her, but if it was me, I would have typed, “write an Emily Hockaday poem.” So she put her name in and asked it to write a poem in the style of her. And she posted it to get a laugh. And it had taken the first line of one of her poems to start. And then the poem was really trash. It was very bad. It didn't make much sense, and it was overly narrative in places. And again, like I said, in six months it became much more sophisticated. So I don't want to say that a text generator will never be able to write as well as a human, because maybe it will. But right now, I still feel that we're at a place where it's extremely easy to tell the difference.

Jason: And I guess there's something about the way generative AI is stringing words together in a probabilistic fashion of what it thinks should come next based on a certain context and instruction. But there's something, at least for the present, that we as human beings can look at and point to and say this is obviously bad or not written by a human, versus something that we would readily identify as being human generated. It's something ineffable, hard to articulate, but at least for the time being it seems that is a hallmark of what we're able to do in distinguishing the artificial versus the quote unquote real human-generated stuff.

Now, one thing that maybe problematizes this a little bit is another tension that was in your editorial, about traditional compositional work, which historically has used and continues to use a variety of technologies. We have autocorrect, we have spell check, those types of things built into our word processors, for example, versus the potential for new fiction and poetry and art in which artists leverage generative AI. I'll throw this out there as a really great example: Nettrice Gaskins is a generative AI artist who does some really neat stuff using it as a part of her toolbox to make new things.

An example of this that I haven't yet read about, but I imagine will happen at some point, is if you're thinking about Meta's Llama, their large language model, which they call open source but isn't really open source, though it's the closest thing we've got to that right now. What someone can do is create basically an add-on to their large language model: you have the large language model, and then you can create a thing called a LoRA (low-rank adaptation of large language models) on a smaller data set, but it would be a tailored data set. So let's say Emily Hockaday: she could build a LoRA based on all of her poems. And so when she goes to use Llama, she layers the LoRA on top of it and says, I would like a poem in the style of Emily Hockaday, and it would produce something that probably would be a lot closer in line with your work than just using OpenAI stuff that may only know one or two examples of your work that it pulled from the Internet somewhere. So what are your thoughts about artists using AI as a part of their wheelhouse, or is that maybe taking things a step too far?
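[The low-rank adaptation Jason describes can be sketched in a few lines. The idea is that instead of retraining a base model's full weight matrix W, one trains two small matrices B and A whose product is added on top, so an author-specific adapter stays tiny and swappable. The following is a minimal, hypothetical NumPy illustration of that shape arithmetic, not Meta's or any library's actual implementation; all dimensions and values are invented for the example.

```python
import numpy as np

# A single frozen weight matrix from a base model layer (dimensions illustrative).
d = 1024
W = np.random.randn(d, d)

# A LoRA adapter: two low-rank matrices. Only B and A would be trained,
# e.g., on one poet's corpus, while W stays frozen.
r = 8  # the "low rank"
B = np.random.randn(d, r) * 0.01
A = np.random.randn(r, d) * 0.01

# At inference, the adapted layer effectively applies W + B @ A.
W_adapted = W + B @ A

# The adapter is a tiny fraction of the full layer's parameters.
full_params = W.size           # 1024 * 1024 = 1,048,576
lora_params = B.size + A.size  # 8,192 + 8,192 = 16,384
print(lora_params / full_params)  # 0.015625, about 1.6% of the layer
```

In practice the two small matrices are trained with gradient descent on the tailored dataset, and swapping adapters swaps styles without touching the base model, which is why a per-author "add-on" like the one Jason imagines is feasible.]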

Emily: That's a tricky question for sure. That's where it becomes very gray, I would say, for me. I don't remember if I touched on this in my editorial, but when I was researching for the editorial I was reading about writers who self-publish and make a living self-publishing and who are on really tight deadlines, because in the self-publishing game, I guess, when you have a series going and you have fans that are obsessed with your series, if you miss deadlines you really risk losing some of those fans. I guess there's metrics on it: fans will wait X number of months before they go to find a different series to follow, and you might lose fans. And so this one writer who I was reading an article about, she was using a text-generating algorithm that she had trained with all of her own work, all of her own novels. She had fed the novel that she was writing into it, and she used it, giving it prompts, to write filler scenes in between what she had written, as connective tissue, and then she added to them and made them her own. But she was really feeling the deadline, and she was like, this is going to help me so much. I feel there's a difference between doing something like that and just basically plagiarizing a bunch of individuals, anonymous to you, who have poured their blood, sweat, and tears into their writing, and then using an algorithm that's been fed off of this plagiarized data and trying to sell that as your own. I do think there's a distinct difference. I still feel very iffy about these text-generative AIs. Ethically, in general, I feel weird about them. They consume a lot of water, drinking water, just like mining for Bitcoin does, so they're very bad environmentally, and they open up doors to all these terrible places, in my opinion.
But like you're saying, there is a difference between an artist using their own work, with the algorithm as a tool to produce more of their own personal work, and just grabbing whoever's source material from the internet.

Jason: Right. And I think that this ties back into the point that you raised earlier about capitalism. On the one hand, I'm sure there's certainly that kind of research into how to retain fans if you're self-publishing. But there is a certain aspect of being able to produce, and produce quickly enough to meet the demand, in order to retain those fans, in order to simply put food on the table for some folks, right? And so, in a certain sense, there's this other aspect of capitalism that I think is driving the use of generative AI, which may be even more insidious or more scary: using it in the workplace to write an email, or in this case, to be able to meet a deadline. And in some ways it sounds a little bit like people are being pushed to have to use the technology simply to keep up with the rat race, to keep up with capitalism's demands on the individual in different ways. What are your thoughts on that?

Emily: Yeah, I think there is danger there, that everyone is going to suddenly feel that they need to be an expert algorithm trainer in order to keep their jobs. I'm very against an algorithm taking a person's job; it seems so unnecessary. There are already enough profits. I'm always here to rail against capitalism. It seems very unnecessary, but I hear it. I hear it just out in society. People are worried about this. People are thinking, how can I become an expert in this so that I can be marketable, so that I can keep a job? And we are running an editorial in Analog in a couple of issues where the argument is that AI is a tool that we will need to learn to stay in creative fields. I hope that doesn't become true, but it very well could.

Jason: And I guess related to that is also the issue that you bring up in the editorial about how certain authors may use a tool like this to take over a certain amount of market share. You think about how people are saying the same thing now about Taylor Swift: she's so big in our culture that she's pulling dollars away from smaller indie artists that just don't get recognized or don't pick up fans because, I don't know, it's a phenomenon. It's hard to imagine; she's just an individual, but it's this bigger thing. And so with generative AI, you can imagine if we start seeing new Asimov novels being published in his style, or Heinlein novels. And some of these dead authors are already taking up vast amounts of bookstore shelf space. So it seems like it's going to make a bad problem even worse. What do you think about that?

Emily: Yes, I did touch on that in the editorial, I think in a tongue-in-cheek way, saying what if we can get all these new novels by training a specific Asimov algorithm, or I made the joke that we'd want more albums from young singers who died too soon. There are certainly people on my list that I would love that from. But in all seriousness, I agree that it's taking shelf space from new writers. And that's how culture is supposed to progress. We're not supposed to stay stagnant. We don't need more Asimov novels. He wrote novels, he built up a genre, he did a lot. But it's time for new people to step up and it's time for the genre to change. And if we backpedal and start reading new works from classic authors, well, I guess that's why I titled my editorial the dystopia of culture. That's not the direction we want to go. We don't want to go with a flat randomization of AI nonsense being turned out. And we also don't want to go backward to highly trained, maybe technically good writing by dead people.

Jason: And I guess one last level related to your editorial is not just the way that text, or stories, might be generated, but the decision-making process of magazines or the publishing industry. It seems like AI could further flatten our culture: instead of opening it up to new vistas, new voices, new ways of doing things, if we train AI to make decisions about what goes into the magazine or what gets published on the shelves, it's only going to select those things that reproduce what came before. Is that a concern?

Emily: I did joke about that in the editorial as well. Right now, I'm not concerned with that. I just feel that what we're seeing from these algorithms is really bland. If an algorithm started choosing the stories for Asimov’s and Analog, our readers would unsubscribe immediately.

We don't accept AI submissions. That's a big part of the magazines. And in the submission process, you have to check that it wasn't AI generated before submitting. Obviously, we're still getting plenty, but it's easy, very easy, to weed them out right now, because they're so obvious and so bad. There's just nothing compelling.

Like I said, these things do get more sophisticated, and very quickly. But in the year that this has been going on, there has been nothing good. Still, neither the fiction nor the poetry submissions, and we have seen some of those, not as many, but some, come anywhere close at this point to passing as human writing, at least to an editor. There's no chance of us accidentally publishing something that was written with AI. So I guess that still has me feeling safe for now, that we need editors, that we need people, humans, to adjudicate and curate.

Jason: Right. And you mentioned that in the submission process now there's a checkbox for “this wasn't generated by AI.” When did you add that to the submission process, or had that been there for a while?

Emily: That was added to Asimov’s. Actually, it's in our guidelines for Analog, but we don't have the checkbox yet because we haven't upgraded; Asimov’s had that upgrade done in March 2023.

Jason: And was that in response to beginning to see more submissions that were AI generated? 

Emily: Yeah, we were hoping that that would slightly cut down some of those submissions.

Jason: Right. What numbers or percentages would you say you began to see in the two magazines that were AI generated?

Emily: It has gone up and down, but I would say our submissions almost doubled.

Jason: Wow. 

Emily: Yeah. Luckily, they've been very easy to weed out so far. It does still take time away from the editors, which is difficult, but it's easy to tell the difference. But yeah, basically submissions doubled.

Jason: That's amazing. And it puts a lot more work and labor on y'all, too, because even if something's bad, you still have to find out that it's bad; it takes time to read through.

Emily: You have to open the submission, and I would say usually you can tell in the first couple of sentences that it’s AI. So that isn't much time, but you do have to physically open the file, close it, mark that you want to ban it. It takes time.

Jason: And how many folks are having to read these per month? There's you and...

Emily: I don't read the slush actually. I only read the poetry so for Analog it's Trevor [Quachri] and for Asimov’s it's Sheila [Williams]. 

Jason: Wow. 

Emily: Yes, they read it all. I know they’re heroes. Yes.

Jason: It's like this is collateral damage from people that are abusing the technology of generative AI to try to slip something into the flagship magazines of the field. What are your thoughts, or maybe things that you've heard, about why people might do this? Are they just wanting to get a publication credit? Because I don't think publishing a single story is necessarily the path to riches.

Emily: Ironically, that is what it is. We were listed on websites as publications that pay. So because we were on those websites, they started garnering interest, and we're now getting submissions from all over the globe that have been generated using AI algorithms. I think it really is a way to make money, which I can't fault someone for trying, but it does make our job a lot harder when we're clogged up with submissions that we won't ever even consider.

Jason: And it's nothing off their back because it's open submissions, so they just submit and then you have to do the work. 

Emily: Yeah. 

Jason: And so I was wondering, since some months have passed after your editorial, what have been some of the fan responses, the reader responses, to your editorial?

Emily: So, mostly people are writing in to agree with it, that this is something of concern. And some people had some kind of funny observations. There was a mistake in the magazine and they were like, I read this editorial and then I saw this block of repeated text. And I responded, oops, that was just human error. Wish I could blame it on an algorithm, but no.

And then someone else wrote in with a very interesting point: that when the text generators work, it doesn't matter if the text is accurate. That's not the point. It just doesn't matter if it's truthful or accurate; the point is really just to write a thing and get text out there. And that's very dangerous. I thought that was a really interesting point. He was comparing it to propaganda, because we have had another article run in the magazine, a history piece, a special feature as we call it, by Ed Wysocki, Jr. And it was about texts that had been written by people but that were sort of trying to sway the reader and not going with the full truth. And so this person had seen the parallel there, that the two were almost about the same thing: getting inundated with text, people will feel overwhelmed by it and read it and might start to just think it's true, or just not care if it's true.

And I think we are seeing that in terms of meme sharing on social media. People post memes that agree with them whether they're true or not. And I think that problem will amplify with AI-generated text. Because when I wrote the editorial, I did look: I gave some prompts, and I gave the examples; I published them in the piece. It was a history of science fiction. And there were some very ridiculous things, like grandfather paradoxes: Ray Bradbury was influenced by a novel called Fahrenheit 451, that kind of thing. And then all sorts of other stuff that people who know science fiction would laugh about, because obviously this isn't true. But if you didn't know science fiction, and you didn't have this understanding of how the text was generated, you might just be like, oh, this isn't great writing, but maybe these facts are true. And so there's this potential for misinformation just spreading wildly. And that is how the generators work. They don't care. There's no fact checking with them.

Jason: And I guess that's kind of a scary thing for the future, especially thinking about the upcoming presidential election and thinking about the Cambridge Analytica scandal with Facebook: how granular the research is on how to push people's buttons, not necessarily to change their opinions 180 degrees, but how you can incrementally make adjustments. And the work that was done before from Russia and other actors required people working at scale, which requires a lot of capital and a lot of facilities, that kind of thing. But with generative AI, if you get someone smart enough to figure out the programming to do that kind of work at scale all within the AI, then you don't need all those people. Not to say that I want those people to have jobs, because the work they were doing was kind of evil and nasty, but it's an even more evil and nasty thing, how much worse it can be when generative AI is doing those little nudges, those little misinformation changes in people's thinking, what knowledge they have, how they make choices. That's kind of scary.

Emily: Yes, it can be much wider scale. It is scary.

Jason: Let's see. You mentioned the changes to the submission box on Asimov’s, and it's in the policies on Analog. Submissions have really gone up. And just scanning here, we've covered a lot of the things that we were going to talk about. Oh, one of the last things that I wanted to ask you about, since we've been talking about text generation: what about image generation? Have there been submissions or concerns about the artwork on the cover or the illustrations inside the magazine?

Emily: So the illustrations inside the magazine, we commission those from artists that we've worked with for years and that we trust. So those are totally solid. But as soon as all this happened, we were very clear: we talked to our art department and we all met together, and we were like, we need to make sure that any cover image we use isn't using AI-generated art. So we did very early on make sure that that was a priority, because we do use some stock art for our covers. There was a concern that AI art is going to start filtering in there, so we're on top of that. We definitely don't want to showcase any AI-generated art. We have such talented artists and we love working with them, and we want them to get to share their talent.

Jason: I think that's one of the things that is heartening to me about Analog and Asimov’s. And I think it's true for some of the other science fiction magazines; I'm thinking of Clarkesworld, which has been at the center of this online debate about the effects of generative AI on the back end of publishing. But it seems like all of you are working toward supporting human agency, supporting human beings to have livelihoods, to be able to create good art, whether it be in writing or visually, for fans who are totally engaged in what it is that we do in science fiction. And I think there's something really positive about that. So I guess in closing, do you have any final thoughts about generative AI writing, images, its effect on the magazines?

Emily: I think we covered most of it. Yeah, just that we're going to try and keep it all people-made as long as we can. I think part of the reason why we feel so passionately about it, and why Neil [Clarke] from Clarkesworld feels so passionately about it, is because we love our jobs. We love what we do. I don't want someone to replace me with an algorithm. I don't want someone to replace our incredible authors with algorithms. And I also think that our readers are really discerning. They would notice. They expect a certain level of quality. You've seen our Brass Tacks column [letters to the editor]. They write in. They would tell us right away. So I think that that's part of it. We feel really dedicated to the industry, and so do all the other science fiction magazines out there publishing. I hope that that will keep things as they are, moving forward with new writers, for the foreseeable future.

Jason: Well, I hope that you guys can keep making science fiction a people-first field. I think that with you all at the helm it's definitely a strong possibility. I think it'll happen. And I assume we're going to continue having these discussions about the effects of AI and I'm really curious what this is going to look like another year from now. So, thinking ahead, maybe pencil it in if you want, maybe we should have another conversation a year from now.

Emily: A yearly check-in on the status.

Jason: We might get to a point where your AI will have an interview with my AI, but hopefully we're not there yet …

Emily: for an AI audience.

Jason: For an AI audience, exactly. So Emily, thanks so much for taking your time to talk with me today about this, and I wish you and Asimov’s and Analog the best of luck with maintaining a human-first approach.

Emily: Thank you. Thanks for having me and thanks for engaging in this great conversation. 

Jason: Sure thing.


 

Production Process Notes: 


While spending the early part of Summer 2023 studying how generative AI products work and reading Emily’s editorial in the May/June 2023 issue of Analog Science Fiction and Fact, I reached out to her to ask about interviewing her on the effects of generative AI on the science fiction publishing industry for my YouTube channel, dynamicsubspace. During a conversation with Sean Scanlan and Patrick Corbett at Zatar Cafe in mid-August, Sean reminded me about the special interview issue of NANO, which seemed to be a great fit for the interview that I had pitched to Emily. After confirming with her that we would publish the interview in NANO, we set an early October 2023 date for our online interview. In preparation for our talk, I wrote out a list of general questions to serve as a beat sheet and shared it with Emily. For our interview, we used Google Meet, because I had difficulty installing Zoom, which had been my default video communication software since the COVID-19 pandemic began in early 2020, on my desktop computer running Debian 12. This was fortuitous, because I liked the aesthetics of its video window arrangement, in which the interviewee, Emily, was the main subject of the video frame by default. On the day of our interview, I ran OBS Studio in the background to record my system audio and the Chromium browser window in which I had logged into Google Meet. Emily and I chatted at the beginning of our online video meeting before starting the interview as shown on YouTube. After we completed the interview, I edited the video created by OBS Studio with Shotcut and made the title card and credits card with GIMP, or GNU Image Manipulation Program. I exported the video as an MP4 file and uploaded it to YouTube. To create a draft transcript, I extracted the audio and converted it to MP3 using FFmpeg, and uploaded the audio to Revoldiv, a free, AI-based transcription service that uses OpenAI’s speech recognition system called Whisper.
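The audio-extraction step described above can be done with a single FFmpeg command. This is a minimal sketch, assuming the exported video is named interview.mp4 (a placeholder filename, not the actual file):

```shell
# Extract the audio track from the exported video and encode it as MP3.
# -vn drops the video stream; -q:a 2 selects a high-quality VBR MP3.
ffmpeg -i interview.mp4 -vn -codec:a libmp3lame -q:a 2 interview.mp3
```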
It took less than a minute for Revoldiv to create a transcript that identified two speakers. Instead of using its built-in text editor/audio player, I exported the transcript as raw text and pasted it into a Google Doc for final editing. With the YouTube video open in a separate window, I played the interview video while correcting the nearly error-free transcript as I went along. I added names and links for context to the transcript, too. Finally, I read through the corrected transcript and used the search feature through several iterations to edit it for readability.