Here’s a deal that’s as horrifying as it gets: For less than $100,000, it might now be possible to use artificial intelligence to develop a virus capable of killing millions of people.
That’s the conclusion of Jason Matheny, president of the RAND Corporation, a think tank that studies security and other issues.
“It would cost no more to create a pathogen that could kill hundreds of millions of people than it would to create a pathogen that could kill only hundreds of thousands of people,” Matheny told me.
By contrast, he noted, producing a new vaccine or antiviral in response could cost billions of dollars.
I told Matheny that I was the Times’ Tokyo bureau chief when a religious cult called Aum Shinrikyo used chemical and biological weapons in terrorist attacks, including one in 1995 that killed 13 people on the Tokyo subway. “These weapons would be capable of doing much greater damage,” Matheny said.
I’m a longtime member of the Aspen Strategy Group, a bipartisan organization that studies global security issues, and our annual meeting this month focused on artificial intelligence. That’s why Matheny and other experts joined us — and then scared us.
In the early 2000s, some of us feared that smallpox could be reintroduced as a biological weapon if the virus were stolen from the labs in Atlanta and the Novosibirsk region of Russia that have been storing the virus since the disease was eradicated. But thanks to synthetic biology, there would be no need to steal it.
A few years ago, a research team created a cousin of the smallpox virus, horsepox, in six months for $100,000, and with AI, it could be easier and cheaper to perfect the virus.
One reason biological weapons have rarely been used is their boomerang effect: if Russia released a virus in Ukraine, it could spread back into Russia. But a retired Chinese general has raised the possibility of biological warfare that targets particular races or ethnicities (probably imperfectly), which would make such weapons far more usable. It would also be possible to develop a virus that kills or incapacitates a particular person, such as a troublesome president or ambassador, if one obtained that person's DNA at a dinner or reception.
Assessments of China’s research into ethnic targeting are classified, but they could explain why the U.S. Defense Department has said the most significant long-term biological warfare threat comes from China.
Artificial intelligence also has a more promising side, of course. It promises to improve education, reduce road accidents, cure cancer and develop new miracle drugs.
One of the best-known benefits is protein folding, which could lead to revolutionary advances in medical care. Scientists used to spend years or even decades figuring out the shapes of individual proteins, but then a Google initiative called AlphaFold came along that can predict those shapes in minutes. “It’s like Google Maps for biology,” Kent Walker, Google’s president of global affairs, told me.
Scientists have since used updated versions of AlphaFold to work on pharmaceuticals, including a vaccine for malaria, one of the biggest killers of humans in history.
So it’s not certain whether AI will save us or kill us first.
Scientists have been studying for years how artificial intelligence could dominate warfare, with drones or autonomous robots programmed to find and eliminate targets instantly. The war could involve robots fighting robots.
Robotic killers will be heartless in the literal sense, but they won't necessarily be more brutal. They won't rape, and they may be less prone than human soldiers to the rage that leads to massacres and torture.
The magnitude and timing of job losses, for truck drivers, lawyers and perhaps even coders, are a major uncertainty, and those losses could amplify social unrest. A generation ago, U.S. officials failed to anticipate how trade with China would cost factory jobs and contribute to an explosion of deaths of despair and the rise of right-wing populism. Can we better manage the economic disruptions wrought by AI?
My wariness of artificial intelligence comes from this: While I see promise in it, the last two decades have reminded us of technology's capacity to oppress. Smartphones were dazzling (and I apologize if you're reading this on your phone), but there is evidence linking them to worsening mental health among young people. A randomized controlled trial published this month found that young people who gave up their smartphones reported improved well-being.
Dictators have taken advantage of new technologies. Liu Xiaobo, the Chinese dissident who won the Nobel Peace Prize, believed that “the Internet is a gift from God to the Chinese people.” Things didn’t turn out that way: Liu died in Chinese custody, and China used artificial intelligence to increase surveillance and tighten the grip on its citizens.
AI can also make it easier to manipulate people, in ways reminiscent of Orwell. A study published this year found that when GPT-4 had access to basic personal information about the people it was debating, it was about 80% more likely to persuade them than a human debater with the same information. Congress was right to worry about TikTok's algorithm manipulating public opinion.
All of this underscores why it is critical that the United States maintain its lead in artificial intelligence. While we may be reluctant to step on the accelerator, this is not a competition in which it is acceptable to come in second to China.
President Biden knows all this, and the limits he has placed on China’s access to the most advanced computer chips will help preserve our edge. The Biden administration has hired top people from the private sector to think about these issues, and last year issued a major executive order on AI safety, but we will also need to develop new systems in the years ahead to improve governance.
I’ve written before about AI-generated nude images and videos, as well as the irresponsibility of deepfake companies and the major search engines that drive traffic to these sites. Tech companies have also routinely used immunities to avoid liability for promoting child sexual exploitation. None of this inspires confidence in the ability of these companies to govern themselves responsibly.
“We have never had a situation where the most dangerous and impactful technology is entirely in the hands of the private sector,” said Susan Rice, who was President Barack Obama’s national security adviser. “There is no way that Silicon Valley technology companies are going to be able to decide the fate of our national security and perhaps even the fate of the world without restraint.”
I think that’s true. Managing AI without stifling it will be one of our great challenges as we embrace the most revolutionary technology since Prometheus brought us fire.