iOS 18 Project Greymatter will use AI to summarize notifications, articles and more


Apple’s next-generation operating systems will incorporate Project Greymatter, bringing a host of AI-related improvements. We have new details on planned AI features for Siri, Notes, and Messages.

AI will enhance several core apps with summarization and transcription capabilities

Following numerous claims and reports about AI-related improvements in iOS 18, AppleInsider received additional information about Apple’s plans in the field of AI.

People familiar with the matter revealed that the company is internally testing a variety of new AI-related features ahead of its annual WWDC. Known by the project codename “Greymatter,” the company’s AI improvements will focus on practical benefits for the end user.

In pre-release versions of Apple’s operating systems, the company has been working on a notification summary feature known as “Greymatter Catch Up.” The feature is linked to Siri, meaning users will be able to request and receive an overview of their recent notifications via the virtual assistant.
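Apple has not published any of this API, but a minimal sketch shows how such a request could be modeled with the public App Intents framework; the `CatchUpIntent` and `NotificationSummarizer` names below are placeholders, not confirmed Apple code.

```swift
import AppIntents
import UserNotifications

// Placeholder summarizer: a stand-in for Apple's unannounced on-device model.
enum NotificationSummarizer {
    static func summarize(_ items: [String]) -> String {
        items.isEmpty ? "No recent notifications." : items.prefix(5).joined(separator: " • ")
    }
}

// Hypothetical Siri-invocable intent built on the public App Intents framework.
struct CatchUpIntent: AppIntent {
    static var title: LocalizedStringResource = "Catch Up on Notifications"

    func perform() async throws -> some IntentResult & ProvidesDialog {
        // Gather the notifications currently sitting in Notification Center.
        let delivered = await UNUserNotificationCenter.current().deliveredNotifications()
        let summary = NotificationSummarizer.summarize(delivered.map(\.request.content.body))
        return .result(dialog: "\(summary)")
    }
}
```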

Siri is expected to benefit from significantly updated response-generation capabilities via a new Intelligent Response Framework, as well as Apple's on-device LLM. When generating responses and summaries, Siri will be able to take into account entities such as people and businesses, calendar events, locations, dates, and more.

In our previous reports on Safari 18, the Ajax LLM, and the updated Voice Memos app, AppleInsider revealed that Apple plans to introduce AI-based text summarization and transcription into its built-in apps. We’ve since learned that the company intends to bring these features to Siri as well.

This means Siri will eventually be able to respond to queries on device, create summaries of long articles, and transcribe audio, as in the updated Notes and Voice Memos apps. Simpler tasks would be handled by the on-device Ajax LLM, with cloud-based processing reserved for more complex ones.
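How requests would be split between the device and the cloud is not public; the sketch below is purely an assumed routing heuristic, with an arbitrary word-count threshold, to make the split concrete.

```swift
import Foundation

// Assumed routing heuristic, for illustration only: short jobs stay on the
// on-device model, heavier jobs go to cloud processing.
enum SummaryBackend {
    case onDevice, cloud

    static func route(wordCount: Int) -> SummaryBackend {
        // The 4,000-word threshold is an arbitrary example value.
        wordCount > 4_000 ? .cloud : .onDevice
    }
}

print(SummaryBackend.route(wordCount: 350))     // onDevice: a short note
print(SummaryBackend.route(wordCount: 12_000))  // cloud: a long article
```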

We’re also told that Apple has been testing improved, “more natural” voices, as well as improvements to text-to-speech, which should ultimately result in a significantly better user experience.

Apple has also been working on multi-device media and TV controls for Siri. This feature would, for example, allow someone to use Siri on their Apple Watch to play music on another device, although this feature isn’t expected until later in 2024.

The company has decided to integrate artificial intelligence into several of its core system applications, with different use cases and tasks in mind. One notable area of improvement is photo editing.

Apple developed generative AI software to improve image editing

iOS 18 and macOS 15 are expected to bring AI-powered photo editing options to apps like Photos. Internally, Apple has developed a new Clean Up feature, which will allow users to remove objects from images through the use of generative AI software.

Clean Up tool will replace Apple’s current Retouch tool
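Apple has not described how Clean Up works internally. One plausible pipeline pairs the public Vision subject-segmentation API (iOS 17/macOS 14) with a generative fill; in the sketch below, only the `inpaint` function is hypothetical.

```swift
import Vision
import CoreImage

// Hypothetical generative fill; a real implementation would hand the image
// and mask to an inpainting model here.
func inpaint(image: CIImage, mask: CIImage) -> CIImage { image }

// Plausible Clean Up pipeline: segment the subject with Vision (real API),
// then erase it with a generative fill (assumed).
func removeSubject(from image: CIImage) throws -> CIImage {
    let handler = VNImageRequestHandler(ciImage: image)
    let request = VNGenerateForegroundInstanceMaskRequest()
    try handler.perform([request])

    guard let observation = request.results?.first else { return image }

    // A pixel buffer masking every detected foreground instance.
    let mask = try observation.generateScaledMaskForImage(
        forInstances: observation.allInstances,
        from: handler)

    return inpaint(image: image, mask: CIImage(cvPixelBuffer: mask))
}
```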

Also linked to Project Greymatter, the company has created an application for internal use known as “Generative Playground.” People familiar with the app exclusively revealed to AppleInsider that it can use Apple’s generative AI software to create and edit images, and that it integrates with iMessage through a dedicated app extension.

In Apple’s testing environments, it is possible to generate an image using artificial intelligence and then send it via iMessage. There are indications that the company is planning similar functionality for end users of its operating systems.
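The Messages framework already supports this kind of extension. The sketch below shows how a generated image could be handed to a conversation; the class name and `generateImage` call are invented stand-ins for Apple's internal software, while `insertAttachment` is real, public API.

```swift
import Messages
import UIKit

// Hypothetical Messages extension; only insertAttachment is public API.
class PlaygroundMessagesViewController: MSMessagesAppViewController {
    // Placeholder for Apple's internal image-generation call.
    func generateImage(from prompt: String) -> UIImage { UIImage() }

    func sendGeneratedImage(prompt: String) {
        guard let data = generateImage(from: prompt).pngData() else { return }

        // Write the image to a temporary file so it can be attached.
        let url = FileManager.default.temporaryDirectory
            .appendingPathComponent("generated.png")
        try? data.write(to: url)

        // Attach the file to the active iMessage conversation.
        activeConversation?.insertAttachment(
            url, withAlternateFilename: "generated.png") { _ in }
    }
}
```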

This information aligns with another report claiming that users will be able to use AI to generate unique emojis, although there are additional possibilities for image generation features.

According to people familiar with the matter, pre-release versions of Apple’s Notes app also contain references to a generation tool, although it is unclear whether that tool will generate text or images – as is the case with the Generative Playground application.

Notes will gain AI-powered transcription and summarization, as well as Math Notes

Apple has prepared significant improvements to its built-in Notes app, which are expected to debut with iOS 18 and macOS 15. The updated Notes will gain support for in-app audio recording, audio transcription, and LLM-powered summarization.

The Notes app in iOS 18 will support in-app audio recording, transcription, and summarization.

Audio recordings, transcriptions and text summaries will all be available in a single note, alongside any other material users choose to add. This means that a single note can, for example, contain a recording of an entire lecture or meeting, accompanied by images and text on a whiteboard.
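Apple has not exposed the Notes transcription API, but the existing Speech framework already demonstrates on-device transcription of a recording; everything in this sketch is public API, with only the surrounding plumbing assumed.

```swift
import Speech

// Transcribe an audio file with the public Speech framework, keeping the
// audio on device where the recognizer supports it. (Requires the user to
// grant speech-recognition permission first.)
func transcribe(fileAt url: URL) {
    guard let recognizer = SFSpeechRecognizer(), recognizer.isAvailable else { return }

    let request = SFSpeechURLRecognitionRequest(url: url)
    request.requiresOnDeviceRecognition = recognizer.supportsOnDeviceRecognition

    _ = recognizer.recognitionTask(with: request) { result, error in
        guard let result, result.isFinal else { return }
        print(result.bestTranscription.formattedString)
    }
}
```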

These features would turn Notes into a real powerhouse, making it a must-have app for students and professionals. Adding audio transcription and summarization will also allow Apple’s Notes app to position itself better against competing offerings such as Microsoft’s OneNote or Otter.

While app-level audio recording support and AI-powered transcription and summarization will significantly improve the Notes app, they’re not the only things Apple has been working on.

Math Notes: Create graphs and solve equations using AI

The Notes app will receive a brand new addition in the form of Math Notes, which will support proper math notation and allow integration with Apple’s new GreyParrot Calculator app. We now have additional details on what Math Notes will entail.

iOS 18’s Notes app will introduce support for AI-assisted audio transcription and math notes

People familiar with the new feature revealed that Math Notes will allow the app to recognize text in the form of math equations and offer solutions to them. Support for graphing expressions is also in the works, meaning we could see something similar to the Grapher app on macOS, but within Notes.
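For plain arithmetic, Foundation can already perform the "recognize and solve" step described here. The sketch below is illustrative only and is not Apple's Math Notes implementation.

```swift
import Foundation

// Evaluate a typed equation such as "(12 + 8) / 5 =" with Foundation's
// NSExpression. Note: NSExpression raises an exception on malformed input,
// so production code would validate the string first.
func solve(_ equation: String) -> Double? {
    let expression = equation.hasSuffix("=") ? String(equation.dropLast()) : equation
    let parsed = NSExpression(format: expression)
    return (parsed.expressionValue(with: nil, context: nil) as? NSNumber)?.doubleValue
}

print(solve("(12 + 8) / 5 =") ?? "unsolved")  // 4.0
```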

Apple is also working on math-related, typing-focused improvements in the form of a feature known as “Keyboard Math Predictions.” AppleInsider was told that this feature would complete mathematical expressions whenever they are recognized as partial text input.

This means that, in Notes, users would have the option to autocomplete their math equations in the same way that Apple currently offers predictive text or inline completions on iOS – which are also expected to come to visionOS later this year.
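iOS 17 already exposes the hook such a feature would most likely build on. In the sketch below, enabling inline predictions is real API, while the math-aware completion function is an assumption about how the reported feature might behave.

```swift
import UIKit

// Real API: opt a text view in to iOS 17's greyed-out inline predictions.
let textView = UITextView()
textView.inlinePredictionType = .yes

// Assumed behavior: offer a completion once input looks like a partial
// math expression ending in "=".
func mathPrediction(for partial: String) -> String? {
    guard partial.hasSuffix("=") else { return nil }
    let expr = NSExpression(format: String(partial.dropLast()))
    guard let value = expr.expressionValue(with: nil, context: nil) as? NSNumber else {
        return nil
    }
    return value.stringValue
}

print(mathPrediction(for: "2+2=") ?? "no prediction")  // "4"
```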

Apple’s visionOS will also benefit from improved integration with Apple’s Transformer LM, the predictive text model that offers suggestions as users type. The operating system is also expected to receive a redesigned voice-commands user interface, a sign of how much attention Apple is devoting to input-related improvements.

The company is also looking to improve user input through the use of “smart replies”, which will be available in Messages, Mail and Siri. This would allow users to respond to messages or emails with basic text responses instantly generated by Apple’s Ajax LLM on the device.
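None of this API is public; the sketch below only illustrates the reported flow, with `AjaxClient` as an invented stand-in for the on-device model.

```swift
import Foundation

// Invented stand-in for an on-device generation call; Apple has not
// published any Ajax API.
struct AjaxClient {
    func suggestReplies(to message: String, count: Int) -> [String] {
        // Stubbed output; a real model would return varied, context-aware options.
        Array(repeating: "Sounds good!", count: count)
    }
}

let replies = AjaxClient().suggestReplies(
    to: "Can we move the meeting to 3pm?", count: 3)
print(replies)
```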

Apple’s AI vs. Google Gemini and other third-party products

AI has made its way into virtually every application and device. AI-focused products such as OpenAI’s ChatGPT and Google Gemini have also seen a significant increase in overall popularity.

Google Gemini is a popular AI tool

While Apple has developed its own AI software to better position itself against the competition, the company’s AI isn’t as impressive as something like Google Gemini Advanced, AppleInsider has learned.

At its annual Google I/O developer conference on May 14, Google demonstrated an interesting use case for artificial intelligence: users could ask a question in video form and receive an answer or suggestion generated by the AI.

As part of the event, Google’s AI was shown a video of a broken record player and asked why it wasn’t working. The software identified the turntable model and suggested that it was not working because it was not properly balanced.

The company also announced Google Veo, a software capable of generating videos through the use of artificial intelligence. OpenAI also has its own video generation model called Sora.

Apple’s Project Greymatter and Ajax LLM cannot generate or process video, meaning the company’s software cannot answer complex video questions about consumer products. This is likely why Apple has sought to partner with companies like Google and OpenAI to enter into a licensing agreement and make more features available to its user base.

Apple will compete with products like the Rabbit R1 by offering vertically integrated AI software on established hardware

Compared to physical AI-themed products, such as the Humane AI Pin or Rabbit R1, Apple’s AI projects have a significant advantage in that they work on devices users already own. This means that users will not need to purchase a special AI device to enjoy the benefits of artificial intelligence.

Humane’s AI Pin and Rabbit R1 are also commonly considered unfinished or partially functional products, and the latter was even revealed to be little more than a custom Android app.

Apple’s AI-related projects are expected to debut at the company’s annual WWDC on June 10, as part of iOS 18 and macOS 15. Updates to the Calendar, Freeform, and System Settings apps are also in the works.


