On Monday, as part of its Worldwide Developers Conference, Apple unveiled the software features of its various products, including the iPhone and iPad. The most anticipated part of the show was getting details on how the company would integrate artificial intelligence into its phones and operating systems.
During the presentation, Apple executives showed how the tech giant’s AI system — which they pointedly called Apple Intelligence instead of artificial intelligence — could help search for texts and photos, create images, correct grammar and spelling, summarize text, and edit photos.
After the announcement, tech experts, extremely online billionaires, and the cheap seats around the world complained that the features were small potatoes. CNET’s Katie Collins wrote that Apple’s coolest new features were long overdue, summing up her reaction with “finally.” Bloomberg’s Mark Gurman called them “minor upgrades.” My colleague Jordan Hart said they weren’t the silver bullet Apple needed to reinvigorate the company. And Elon Musk expressed his disappointment by sharing a stupid meme. In short, a lot of people were underwhelmed by Apple’s practical integration of AI. Sure, summarizing long emails and transcribing calls might seem boring compared to speculation that AI could be used to detect cancer earlier, but guess what? The scale and specificity of Apple’s vision also make it the first major technology company to successfully integrate AI.
Apple is using AI to do what the technology has proven it can do: be an assistant. Yes, the virality of OpenAI’s ChatGPT has highlighted AI’s potential. But using AI to power a bot that performs your tasks or answers open-ended questions is still extremely imperfect. Chatbots lie, they hallucinate, they tell my colleagues to eat glue. Google’s rollout and subsequent scaling back of AI answers to people’s search queries is just one sign that the current iteration of the technology is not ready for all the use cases Silicon Valley’s dreams are made of – not to mention venture capitalist Marc Andreessen’s claims that AI will be able to “save the world,” “improve war,” and become our therapists, guardians, confidants, and collaborators, ushering in a “golden age” of art.
Apple’s updates are a call for everyone to take control. It’s a clarion call to other tech companies to be concrete about what they promise consumers and to deliver AI products that make our lives incrementally easier instead of confusing us with overpromises. Apple’s use of AI at its best is also the best way for normal people to understand what it can do. It’s a way to build trust. Sure, maybe one day AI will figure out how to destroy civilization or something, but for now, it’s best at finding that photo of your dog dressed as a pickle that you took in 2019. And for the vast majority of people, that’s perfectly fine.
What does AI do?
The fact that people are disappointed with Apple says more about the hype around AI’s capabilities than it does about Apple. Since 2019, Musk has been promising that Tesla will make a self-driving robotaxi, and he has long oversold its driver-assistance technology by calling it “Autopilot.” OpenAI’s internal arguments, turned into palace intrigue and media fodder, are primarily about how quickly AI’s supposedly formidable power will reshape humanity, not about the limits of its current practical application. The biggest models, the most powerful Nvidia chips, the most talented teams poached from the hottest startups: this is the drumbeat of AI news from Silicon Valley and Wall Street. We’ve seen technology hype cycles before; they’re mainly about raising money and selling shares. Only time will tell whether the investments Wall Street and Silicon Valley have made in AI infrastructure will actually produce commensurate returns. That’s how this game goes.
But lost in all the noise is the reality of what AI is good (and bad) at right now – especially when it comes to the large language models that underpin most of the new AI tools consumers will use, such as virtual assistants and chatbots. The technology is based on pattern recognition: rather than making value judgments, LLMs simply analyze a vast library of information they have sucked up (books, web pages, speech transcripts) and guess which word most logically comes next in the chain. There is an inherent limitation in this design. Sometimes facts are improbable, but what makes them facts is that they are provable. It may not make sense that Albany, not New York City, is the capital of New York State, but it is a fact. Using glue, an adhesive, to stick cheese to a pizza might seem like a good idea if you’re a bot with no context for what “food” is. But that’s certainly not the way to do it. As they stand, large language models cannot make this value judgment between pattern and fact. We don’t know if they ever will. Yann LeCun, Meta’s chief AI scientist and one of the “godfathers of AI,” has said that LLMs have a “very limited understanding of logic” and that they “do not understand the physical world, do not have persistent memory, cannot reason in any reasonable definition of the term, and cannot plan.” He has also said that they cannot learn anything beyond the data they are trained on – anything new or original – which makes them mentally inferior to a house cat.
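To see what “guess the next word” means in its crudest form, here is a deliberately toy sketch: a bigram frequency table that predicts whichever word most often followed the previous one in its (invented, three-sentence) training text. Real LLMs are neural networks trained on billions of examples, not lookup tables, but the underlying objective is the same, and the sketch shows why this is pattern-matching rather than understanding.

```python
from collections import Counter, defaultdict

# A tiny, made-up "corpus" for illustration only.
corpus = (
    "the capital of new york is albany . "
    "the capital of france is paris . "
    "the capital of japan is tokyo ."
).split()

# Count which word follows each word.
successors = defaultdict(Counter)
for word, nxt in zip(corpus, corpus[1:]):
    successors[word][nxt] += 1

def predict_next(word):
    """Return the word most frequently observed after `word`."""
    return successors[word].most_common(1)[0][0]

print(predict_next("the"))      # "capital" — it has seen that pattern three times
print(predict_next("capital"))  # "of"
```

The model “knows” that “capital” is followed by “of” only because it counted it, not because it knows what a capital is — and if the corpus contained a joke about glue on pizza, it would happily repeat it with the same confidence.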
In other words, they’re not perfect.
Enter Apple, a company known for its culture of perfectionism. It was slow to embrace the hype around AI and, as I mentioned, for a time it refused to use the term “artificial intelligence,” preferring instead the long-dethroned snoozefest “machine learning.” Apple began developing its own generative AI after ChatGPT launched in 2022, but it revealed the new features only when it felt they were good and ready. The technology will power features like Genmoji, which lets you describe a custom emoji to fit whatever’s happening and then creates it – say, one of you crying while eating an entire pizza. It will also power more practical applications, like writing an email to your boss when you’re sick or surfacing the link your mother sent you in a text message. Right now, these basic call-and-response applications are where LLMs excel.
If you want to use the latest Apple products to enter the weirder, more freewheeling world of talking with a chatbot, Siri will call ChatGPT for you and let you go wild. This is Apple drawing a clear line between where its reliability ends and a world of technological inconsistency begins. For Apple, the distinction makes sense. It wants its products to be associated with cutting-edge technology, but also with efficiency and productivity.
This distinction, however, does not benefit the rest of Silicon Valley or its venture capitalists. Anyone raising money for or investing in this technology would prefer that you view AI’s capabilities and value as a moving target – one moving up, to the right, and fast. Apple’s rigorous standards serve to firmly establish AI’s current capabilities – or its limits, depending on how you look at the glass. The alternative is what we see at other companies, where users are guinea pigs, conditioned to work with technology that makes them question what they see. Societies around the world are already grappling with a crisis of trust in institutions; faulty AI only spreads that distrust wider and faster. It’s another brick in the wall between people and their faith in what they read on the internet. In this way, Apple’s cautious approach could be a service to the rest of the tech industry. By slowly acclimating its constellation of users to AI that improves their lives instead of frustrating them, Apple is making the technology seem like a natural upgrade rather than an unreliable and scary intrusion.
Sure, Apple’s AI may not be sexy or scary, but at least it doesn’t seem stupid. Ideally, this means it won’t make our world dumber either.
Linette Lopez is a senior correspondent at Business Insider.