On Wednesday, Axios reported that OpenAI had signed agreements with The Atlantic and Vox Media that will allow the ChatGPT maker to license their editorial content to train its language models. But some journalists at these publications – as well as the unions that represent them – were surprised by the announcements and are not happy about them. Already, two unions have issued statements expressing alarm and concern.
“Union members of The Atlantic’s editorial and business and technology units are deeply troubled by the opaque agreement The Atlantic has reached with OpenAI – and above all by the complete lack of transparency from management about what the agreement entails and how it will affect our work,” read a statement from The Atlantic’s union.
The Vox Media Union, which represents The Verge, SB Nation and Vulture, among other publications, reacted similarly, writing in a statement: “Today, members of the Vox Media Union…were informed without warning that Vox Media had entered into a ‘strategic content and product partnership’ with OpenAI. As journalists and workers, we have serious concerns about this partnership, which we believe could adversely impact members of our union, not to mention the well-documented ethical and environmental concerns surrounding the use of generative AI.”
OpenAI has already admitted to using copyrighted material taken from publications like those that just signed licensing deals to train AI models like GPT-4, which powers its ChatGPT assistant. Although the company maintains that this practice is fair use, it has simultaneously licensed training content from publishing groups like Axel Springer and from sites like Reddit and Stack Overflow, sparking protests from users of those platforms.
Under multi-year agreements with The Atlantic and Vox Media, OpenAI will be able to openly and officially use the publishers’ archived materials (dating back to 1857 in the case of The Atlantic) as well as current articles to inform responses generated by ChatGPT and other AI language models. In exchange, the publishers will receive undisclosed sums of money and be able to use OpenAI technology “to power new journalism products,” according to Axios.
Journalists react
News of these agreements surprised both journalists and unions. One Vox writer said on social media that they were frustrated the deal had been announced without consulting the publication’s writers, but added that they had very strong assurances in writing from their editor that the publication wants more coverage like that of the past two weeks and will never interfere with it – and that they would quit if those assurances proved false.
Journalists also reacted to the announcement of these agreements in the publications themselves. On Wednesday, Atlantic senior editor Damon Beres wrote an article titled “A Devil’s Bargain With OpenAI,” in which he expressed skepticism about the partnership, likening it to a deal with the devil that could backfire on the publication. He highlighted concerns about AI’s unauthorized use of copyrighted material and its potential to spread misinformation at a time when publications have recently seen a series of layoffs. He drew parallels to the media’s earlier pursuit of social media audiences, which led to clickbait and SEO tactics that degraded the quality of journalism. While acknowledging the financial benefits and potential reach, Beres cautioned against relying on inaccurate and opaque AI models and questioned the implications of journalism companies being complicit in potentially destroying the Internet as we know it, even as they try to be part of the solution by partnering with OpenAI.
Similarly, at Vox, editorial director Bryan Walsh wrote an article titled “This article is OpenAI training data,” in which he expressed his misgivings about the licensing deal, drawing parallels between AI companies’ relentless pursuit of data and the classic AI thought experiment of Bostrom’s “paperclip maximizer,” warning that a single-minded focus on market share and profits could ultimately destroy the very ecosystem those companies rely on for training data. He fears that the growth of AI chatbots and generative AI search products will lead to a significant drop in search engine traffic to publishers, potentially threatening the livelihoods of content creators and the richness of the Internet itself.
Meanwhile, OpenAI still fights for ‘fair use’
Not all publications are eager to jump on the licensing bandwagon with OpenAI. The San Francisco-based company is currently locked in a lawsuit with The New York Times, in which OpenAI claims that scraping the publication’s data for AI training purposes is fair use. The New York Times had attempted to prevent AI companies from such scraping by updating its terms of service to prohibit AI training, and it argues in its lawsuit that ChatGPT could easily become a substitute for the NYT.
The Times accused OpenAI of copying millions of its works to train AI models, citing 100 examples in which ChatGPT regurgitated its articles. In response, OpenAI accused The New York Times of “hacking” ChatGPT with misleading prompts simply to build a lawsuit. New York Times lawyer Ian Crosby previously told Ars that OpenAI’s decision “to enter into agreements with news publishers only confirms that they know that their unauthorized use of copyrighted works is far from ‘fair.’”
Although this issue has not yet been resolved in court, for now The Atlantic’s union is seeking transparency.
“The Atlantic has upheld the values of transparency and intellectual honesty for more than 160 years. Its legacy is built on integrity, derived from the work of its writers, editors, producers and business staff,” the union wrote. “OpenAI, on the other hand, has used news articles to train AI technologies like ChatGPT without authorization. The people who continue to maintain and serve The Atlantic deserve to know exactly what management has licensed to an outside company and how, specifically, they plan to use the archives of our creative output and work product.”