- Inkwell Insights
Studio Ghibli, Stolen Books, and the Use of Art
Ah, yes, another bit of ranting about generative AI in the arts and publishing space. Let's do it.

It takes me a while to write each of these newsletters.
First, I have to try to arrange for guests to come onto the Inkwell Insights podcast so that I can tell you about them in the issue. Then, I must think through the topic I want to explore– will it be literary theory? Something of the moment in the news? A book review?
And then, of course, I must sit down and write. That’s probably the most time-consuming part. Unlike other things that I write day in and day out, the way I approach writing these newsletters is intentionally cumbersome. My goal is to force myself to capture a clear narrative and point of view while preserving my specific voice, and to do so while deliberately casting aside the trappings of writing for online readers that have dominated the industry for years (little “rules” like writing at a fifth-grade reading level, leaving plenty of white space, and using headers to break up your text so readers don’t actually have to read).
Given that my newsletters can be challenging to write and that most people who write and promote newsletters do so for the explicit purpose of monetization, a thought crossed my mind this past week. In my defense, I’ve had the flu and a temperature well over 102 degrees. But I did ask myself if it would be worth it to use AI to replicate my writing style and use that as an assistant to help me produce newsletter drafts faster.
No. The answer I arrived at is no. For a million reasons. We’ll touch on them in a moment.
I’ve mentioned this before– I think– in a previous newsletter or on the podcast, but I’m not anti-AI. Artificial intelligence has many practical applications that are of net benefit to society. Its uses in medical imaging, for example, have so far proved incredibly promising for detecting cancers faster. Businesses using retrieval-augmented generation to help customers get answers and solve problems faster can also be a big win. Even generative AI has applications in the workplace.
(Sidebar: I’m going in a different direction in this newsletter, so I’m not going to devote significant time to the environmental impact of AI here. I want that topic to have more space, so I’ll mention it briefly and reserve more time for it later. Here’s the TL;DR– AI uses a lot of computing power, which requires a lot of data centers, which require a lot of electricity and water. The carbon footprint of using ChatGPT to answer a query vs. a traditional search engine query is like Taylor Swift taking a 20-minute private jet flight vs. you or me flying in a sold-out economy cabin.)
So, if I’m not against the use of AI in all cases, why do I not want to use it for my newsletter?
Well… the main reason I kept returning to is that I enjoy writing these newsletters. I enjoy writing difficult things because doing so makes me a better writer. I enjoy it because it allows me to leverage my unique background and love of literature.
There’s so much in this life that I have to do despite not enjoying it. Why would I start outsourcing the things I do enjoy to a machine that can’t experience the same satisfaction?
Also, this newsletter is a form of artistic expression for me. And art is not merely a product. It’s a process.
Even if I could crank out five or six of these a month with AI, what would be the actual benefit? I would have done nothing to hone my writing skills. I wouldn’t have needed to create space in my schedule to sit down and think deeply. There’d be no need for me to dig into industry trends and ask questions. There’d be newsletters to read, but I would get nothing out of it.
(There’s a good chance they’d be rather soulless, too. I don’t mean to toot my own horn, but toot toot, honey, I put a little bit of my own zest into each of these.)
As I unpacked these thoughts, a few things were happening concurrently in the digital zeitgeist. And since I was trapped on my couch with the flu (miserable, sweaty, sore, and wishing I could be sedated), I had time to scroll, read, judge, and scroll some more.
Stolen Books Train Meta AI
On March 20th, 2025, The Atlantic published a piece titled “Search LibGen, the Pirated Book Database That Meta Used to Train AI.”
In it, journalist Alex Reisner covers LibGen, a database of pirated books that has been used to train various AI models, most notably some of Meta’s recent AI applications. Users can search the database to see which titles appear, and many authors were caught off guard to learn that their books had been included.
Many contemporary writers have learned to assume that anything they publish online will be indexed by search engines and swept into an AI training dataset. Many bloggers and website admins report that AI-based crawlers often disregard the meta tags and robots.txt directives intended to keep a page’s contents from being crawled. Historically, these methods gave writers and content creators a modicum of control over which parts of their content footprint were readily accessible to the public. As more crawlers ignore these directives entirely, though, we’re left with fewer options, such as gating content or retreating to physical media.
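For readers unfamiliar with these directives, an opt-out typically looks something like the sketch below. The user-agent strings shown are real ones published by OpenAI, Common Crawl, and Google, but– as noted above– honoring them is entirely voluntary on the crawler’s side:

```text
# robots.txt – asks specific AI crawlers not to fetch any page on the site.
# Nothing enforces this; well-behaved crawlers comply, others simply don't.

User-agent: GPTBot          # OpenAI's training-data crawler
Disallow: /

User-agent: CCBot           # Common Crawl, a frequent source of training data
Disallow: /

User-agent: Google-Extended # opts out of Google's AI training, not Search
Disallow: /
```

Page-level meta tags (e.g. `<meta name="robots" content="noindex">`) work the same way: they are requests, not locks, which is exactly why their growing irrelevance is so frustrating for writers.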
However, what the LibGen exposé revealed to many published authors was that even the world of traditional publishing and physical media wasn’t safe from ingestion. Their work was pulled into the database without their prior awareness or consent– or that of their publishers.
Intellectual property law is a dumpster fire in the best of times. When burgeoning technologies backed by billions of dollars are involved, it’s even more of a mess.
There are numerous ongoing lawsuits around the world challenging the status quo, seeking to define what AI companies have a right to access and use, and asking whether training on copyrighted works qualifies as “fair use”– the copyright doctrine that permits certain “transformative” uses of protected material.
But the more significant issue at hand, in my opinion, is not whether intellectual property law clearly defines what is or isn’t allowed. The issue is whether the actions are ethical. What is ethical and what is legally permitted aren’t always aligned, especially given how influential money is in expanding what large corporations are allowed to do.
For the writers whose work has been included in datasets like LibGen, the sense is that their work has been stolen. The hard work and dedication that goes into honing your craft well enough to produce a published work is significant, and to have a machine take the product of your labors and learn to spit out something that sounds vaguely like you is reductive, insulting, and disrespectful to the craft.
There’s a clear divorce between the product and the process of creating it. For AI companies leveraging sources like LibGen, books are nothing more than an output that can be trivialized as patterns and predictable sequences of words. For the authors behind the books, each work represents hours upon hours of hard labor, careful decision-making, thoughtfulness, and practice. For a machine to then take their creations and presume to be able to create “the same thing” is a clear misunderstanding of how art functions and what the “product” of art actually is.
What tech bros tout as a great democratizer of creativity is a flimsy sham. Spinning a story out of AI doesn’t make one a writer. To be a writer requires, well, writing. To write a story is to respond to the world around you– to inject your unique point of view into broader conversations. LLMs cannot do that. Even if you prompt them really well, they’re limited by their training data and the statistical median of stories that have been previously told. They can approximate a compelling tale but not craft something wholly unique.
Is Nothing Sacred? Hayao Miyazaki and the Studio Ghibli “Trend”
Shortly after authors found their work stolen and included in the LibGen database, a new AI trend emerged online.
Logging into Twitter and Facebook, one could find a deluge of AI-generated images approximating the style of acclaimed animator and Studio Ghibli founder, Hayao Miyazaki.
With new ChatGPT functionality, users could enter the popular chat interface, upload their photos, and prompt the model to recreate their image in the style of Studio Ghibli. And it did.
Studio Ghibli is renowned for the care it takes in telling stories and producing its artwork. Films like Howl’s Moving Castle, My Neighbor Totoro, and Spirited Away are revered worldwide for the quality of handcrafted artwork and ingenuity that brings them to life and their meaningful social commentary. These works of art have established themselves as classics. The cozy animation style gives each tale a sense of charm, and the stories are gripping and endearing.
It makes sense that there are people who would want to see themselves depicted in the style of their favorite artist, but the method of churning out these images with AI should raise concerns.
As quoted by The Independent,
[…] the trend also highlighted ethical concerns about artificial intelligence tools trained on copyrighted creative works and what that means for the future livelihoods of human artists, as well as ethical questions on the value of human creativity in a time increasingly shaped by algorithms.
Part of what makes Studio Ghibli’s art so meaningful is the care and craft that goes into creating it. When it is generated in bulk by an AI model, none of that care is present. You get something resembling the source material but entirely devoid of the same quality.
It’s like walking into a HomeGoods, buying a mass-produced piece of abstract art, and then claiming you own an original Pollock when you hang it on the wall. Maybe it’s a lovely work of art. Maybe it will brighten up your room. But it’s not valuable. It’s not unique. It’s not all that interesting.
Plus, we should be uncomfortable when there is a thematic chasm between something claiming to be art and its source material. Miyazaki, for example, has spoken at length about environmentalism and exploitation; these are well-trodden motifs throughout Studio Ghibli’s films. Using AI tools to churn out cheap reproductions with no artistic merit– while ignoring their environmental impact– is antithetical to what his art stands for.
It’s akin to the (very real) Hunger Games x Shein collab. Shein is one of the largest purveyors of fast fashion and has come under fire for its working conditions, unsafe products, and environmental impact. Like virtually all fast fashion brands, it exploits low-income workers to provide cheap, easy access to goods for wealthier audiences– namely, shoppers in the United States– who are geographically removed from the conditions in which those goods are produced.
The Hunger Games, meanwhile, is a dystopian narrative in which the residents of Panem’s Capitol pit children from the subservient districts against one another in an annual fight to the death. They do this both for entertainment and as a means of suppressing the population.
I doubt Suzanne Collins had anything to do with the collaboration (in my heart, I have to believe that she didn’t), but I can’t imagine anything less thematically appropriate for partnering with the Hunger Games than the exploitative practices of a company like Shein.
It’s Up to Us to Defend (Real) Art
Teachers at both the high school and university level are already lamenting the worsening literacy skills of young people in the classroom.
Chronically online tech bros would have you believe that they, too, can be authors without putting in the practice because they’ve engaged in some basic LLM prompting.
Artists are losing out on gigs to folks using generative tools to save money at the cost of coming up with anything unique and interesting.
Art is good for society. It is enduring. It is a means of challenging those in power, uplifting those outside of power, and pushing our cultural heritage to new heights.
For art to endure, we have to defend it. We have to constantly challenge ourselves to embrace new ideas, engage with new artists, and reject the cheap impersonations of art that would diminish it from within an echo chamber of stolen training data.
Plus, we should remind AI absolutists that generating creative writing or digital art is a really dumb use case for AI. Of all the powerful things it can do, why would we want to approximate the part of the human experience in which the process is as meaningful as the product?
I’ll conclude with a final quote from Hayao Miyazaki as he famously reacted to an AI-generated animation of a lurching zombie intended for a video game:
“Whoever creates this stuff has no idea what pain is whatsoever. I am utterly disgusted… I strongly feel that this is an insult to life itself.”