To hear it from Apple or Elon Musk, AI is our inevitable future, a future that will radically reshape life as we know it, whether we like it or not. In Silicon Valley math, it’s all about getting there first and carving out territory so that everyone relies on your tools for years to come. When addressing Congress, someone like OpenAI CEO Sam Altman will at least mention the dangers of artificial intelligence and the need for strict regulatory oversight, but in the meantime, everything is moving full steam ahead.
Many companies and individual actors buy into this hype, often with disastrous results. Media outlets have been caught publishing AI-generated garbage under fictitious bylines; Google cluttered its search results with error-prone “AI Overview” content; and earlier this year, parents were outraged to learn that a Willy Wonka-themed family pop-up event in Scotland had been marketed to them with AI images that looked nothing like the sinister warehouse they actually entered. Amid all this discontent, a new marketing opportunity has emerged: joining an anti-AI, pro-human backlash.
Beauty brand Dove, owned by multinational conglomerate Unilever, made headlines in April by pledging to “never use AI-generated content to depict real women in its ads,” according to a statement from the company. Dove explained the choice as being in line with its successful and ongoing “Real Beauty” campaign, first launched in 2004, which saw professional models replaced by “ordinary” women in ads that focused more on the consumer than on products. “Committing to never use AI in our communications is just one step,” Alessandro Manfredi, Dove’s chief marketing officer, said in the press release. “We won’t stop until beauty is a source of happiness, not anxiety, for every woman and girl.”
But while Dove has taken a strong stance against AI to protect specific brand equity around body image, other brands and ad agencies are concerned about the broader reputational risk of relying on automated, generative content that bypasses human review. As Ad Age and other industry publications have reported, contracts between companies and their marketing firms are now more likely to include strict restrictions on how AI is used and who can sign off on it. These provisions not only help prevent shoddy AI-generated images or copy from embarrassing clients in public, but can also reduce reliance on artificial intelligence in internal operations.
Meanwhile, creative social platforms are carving out spaces meant to remain AI-free, and winning goodwill from customers for it. Cara, a new artist portfolio site, is still in beta testing but has generated significant buzz among visual artists thanks to its proudly anti-AI philosophy. “With the widespread use of generative AI, we decided to create a space that filters out generative AI images so that people who want to find authentic creations and artwork can do so easily,” the app’s website says. Cara also aims to protect its users from having their work harvested to train AI models, a condition automatically imposed on anyone uploading work to Meta’s Instagram and Facebook platforms.
“Cara’s mission began as a protest against the unethical practices of AI companies that scrape the internet for their generative AI models without consent or respect for people’s rights or privacy,” a company representative tells Rolling Stone. “This fundamental principle of opposition to such unethical practices, and the lack of legislation protecting artists and individuals, is what motivated our decision to refuse to host AI-generated images.” They add that, as AI tools are likely to become more common in the creative industries, they “want to take action and see legislation passed that will protect artists and our intellectual property from current practices.”
Older sites in this space are looking to add similar safeguards. PosterSpy, another portfolio site that helps poster artists network and land paid commissions, has been a vibrant community since 2013, and founder Jack Woodhams wants it to remain a haven for human talent. “I have a pretty strict no-AI policy,” he tells Rolling Stone. “The website exists to advocate for artists, and while generative AI users consider themselves artists, that couldn’t be further from the truth. I’ve worked with real artists from around the world, from emerging talents to household names, and comparing the blood, sweat, and tears these artists put into their work to a prompt typed into an AI generator is insulting,” Woodhams says, to “real artists who have trained for years to be as competent as they are today.”
Part of the pressure to set these standards comes from customers themselves. Game publisher Wizards of the Coast, for example, has repeatedly faced fan outrage over the use of AI in its Dungeons & Dragons and Magic: The Gathering franchises, despite the company’s various commitments to keep AI-generated images and writing out of those franchises and to stand by “innovation, ingenuity and the hard work of talented people.” When the company recently posted a job listing for a senior AI engineer, consumers sounded the alarm again, forcing Wizards of the Coast to clarify that it is experimenting with AI in video game development, not in its tabletop games. This back-and-forth demonstrates the perils for brands trying to stay out of debates over this technology.
It is also a measure of the vigilance needed to prevent a complete AI takeover. On Reddit, which has no blanket policy against generative AI, it falls to community moderators to ban or remove such content as they see fit. So far, the company has only argued that anyone seeking to train AI models on its public data must enter a formal commercial agreement with it, with CEO Steve Huffman warning that Reddit could report those who don’t to the Federal Trade Commission. The publishing platform Medium has been slightly more aggressive. “We’re blocking OpenAI because they gave us a protocol to block them, and we would block pretty much everyone if we had a way to do it,” CEO Tony Stubblebine tells Rolling Stone.
At the same time, Stubblebine says, Medium is counting on human curators to stem the tide of “bullshit” he sees surging across the internet in the nascent AI age, and to keep it from being recommended to users. “There are currently no effective tools for spotting AI-generated content,” he says, “but humans spot it immediately.” At this point, not even content filtering can be fully automated. “We used to delete a million spam posts a month,” Stubblebine notes. “Now we are removing 10 million.” For him, it’s a way of ensuring that real writers maintain fair visibility and that subscribers can discover writing that speaks to them. “There’s a huge gap between what someone will click on and what someone will be happy they paid to read,” Stubblebine says, and those who provide the latter could reap the rewards as the web grows burdened with the former. Even Google’s YouTube has promised to add warning labels to videos that have been “edited” or “synthetically created” with AI tools.
It is difficult to predict whether institutional resistance to AI will continue to grow, though between a series of high-profile AI failures and growing distrust of the technology, companies that meaningfully oppose it in one form or another appear well positioned to outlast the hype cycle (not to mention the fallout from the burst bubble some observers predict). Then again, if AI continues to dominate the culture, they could be left serving a smaller demographic that insists on non-AI products and experiences. As with any strategic business decision, it’s too bad there isn’t a robot that can predict the future.