An author and freelance journalist has admitted to using AI to help him write a book review for The New York Times.
Alex Preston’s review of Jean-Baptiste Andrea’s novel Watching Over Her, published by The New York Times in January 2026, draws phrases and full paragraphs from Christobel Kent’s review in The Guardian. The “error” was brought to light by a reader, who alerted The New York Times to the similarities.
Preston told The Guardian he is “hugely embarrassed” and that he “made a huge mistake.”
The Times promptly dropped Preston, calling his “reliance on A.I. and his use of unattributed work by another writer” a “clear violation of the Times’s standards.” An editor’s note now precedes the review online, advising readers of the issue and providing a link to the Guardian review.
Preston’s apology to The Guardian raises more questions than it resolves. The portion quoted online seems to speak more to the issue of unattributed work than his use of AI. It reads: “I made a serious mistake in using an AI tool on a draft review I had written, and I failed to identify and remove overlapping language from another review that the AI dropped in.” This implies that if he had removed the “overlapping” language, the issue would have been avoided.
As a literary critic and scholar, I believe the deeper question isn’t whether critics should do more to hide their use of AI, but whether it is ethical to use it at all.
Why AI can’t do criticism
The role of the critic isn’t to summarize or repackage art, but to actively participate in a conversation about it. “Good criticism thrives in the complexity of its environment,” writes critic Jane Howard, who is also The Conversation’s Arts + Culture editor. “Each review sits in conversation with every other review of a piece of art, with every other review the critic has written.”
