Apple will soon use your on-device data to train Apple Intelligence
Apple is reportedly preparing to roll out a new technique for training its Apple Intelligence system in the upcoming beta versions of iOS 18.5 and macOS 15.5.

Apple is changing how it trains its artificial intelligence (AI) models, with a new approach that aims to improve performance while preserving user privacy. According to a Bloomberg report, the company is preparing to roll out the new technology in the upcoming beta versions of iOS 18.5 and macOS 15.5. Apple has also detailed the shift in an official post published on its machine learning research website.
The post explains that Apple currently depends on synthetic data – data generated artificially rather than collected from real users – to train AI features such as writing tools and email summaries. Although this approach helps protect privacy, Apple admits that synthetic data has limitations, especially when it comes to capturing how people actually write or how they summarise long messages.
To solve this, Apple is introducing a method that compares synthetic emails with real ones – without Apple ever accessing users' email content. It works like this: Apple first creates thousands of fake emails covering a range of everyday topics. As an example, Apple cites a message that reads: "Would you like to play tennis tomorrow at 11:30 am?" Each message is converted into an embedding – a numerical representation of its content, such as topic and length.
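To make the embedding step concrete, here is a toy sketch. The hand-picked features below (word count plus simple topic signals) are purely illustrative; Apple's actual embedding model is not public.

```python
# Toy sketch: turn a synthetic email into a small feature vector.
# These hand-picked features are illustrative only; Apple's real
# embedding model is not public.

def embed(message: str) -> list[int]:
    """Represent a message by simple content features: length and topic hints."""
    words = message.lower().split()
    return [
        len(words),                                           # message length in words
        sum(w in {"tennis", "play", "game"} for w in words),  # leisure-topic signal
        sum(w in {"meeting", "report", "deadline"} for w in words),  # work-topic signal
    ]

synthetic = "Would you like to play tennis tomorrow at 11:30 am?"
print(embed(synthetic))  # → [10, 2, 0]
```

A real embedding would be a dense vector from a learned model, but the principle is the same: the vector describes the message's content without containing its text.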
These embeddings are then sent to a small number of user devices that have opted in to Apple's Device Analytics program. The participating devices compare the synthetic embeddings with a small sample of the user's recent emails and determine which synthetic message is most similar. Apple says the real emails and the matching results never leave the device.
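The on-device comparison can be sketched as a nearest-neighbour search. This is a minimal illustration assuming cosine similarity; Apple has not published the exact similarity measure it uses.

```python
import math

def cosine(a: list[float], b: list[float]) -> float:
    """Cosine similarity between two vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

def pick_most_similar(synthetic_embs: list[list[float]],
                      local_email_embs: list[list[float]]) -> int:
    """Return the index of the synthetic embedding closest to any local email.
    This runs entirely on the device; only the winning index would be reported."""
    best_idx, best_score = 0, -1.0
    for i, s in enumerate(synthetic_embs):
        score = max(cosine(s, e) for e in local_email_embs)
        if score > best_score:
            best_idx, best_score = i, score
    return best_idx

# Hypothetical vectors: two synthetic messages vs. one local email.
synthetic = [[1.0, 0.0], [0.0, 1.0]]
local = [[0.1, 0.9]]
print(pick_most_similar(synthetic, local))  # → 1 (the second message is closer)
```

The key privacy property is that the local email vectors are inputs only; the single integer result is all that ever needs to leave the device.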
Apple states that it uses a privacy technique called differential privacy, in which devices send back only anonymised signals. Apple then analyses which synthetic messages were chosen most often – without knowing which device chose what. These popular messages are used to improve Apple's AI features so they better reflect the kinds of material people actually write, while maintaining complete privacy.
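One standard way to realise this kind of anonymised signalling is randomized response, a local differential-privacy mechanism. The sketch below is an assumption about the general technique, not Apple's published implementation: each device sometimes reports its true choice and sometimes a random one, so no single report is trustworthy, yet the aggregate still reveals the most popular message.

```python
import math
import random

def randomized_response(true_choice: int, num_options: int,
                        epsilon: float = 1.0) -> int:
    """Local differential privacy via randomized response: report the true
    choice with high probability, otherwise a uniformly random option."""
    p_true = math.exp(epsilon) / (math.exp(epsilon) + num_options - 1)
    if random.random() < p_true:
        return true_choice
    return random.randrange(num_options)

# Server side: tally noisy reports. In this toy run every device actually
# chose option 0, but each individual report is deniable.
random.seed(0)
num_options = 3
reports = [randomized_response(0, num_options) for _ in range(10_000)]
counts = [reports.count(i) for i in range(num_options)]
print(counts.index(max(counts)))  # → 0: the popular choice still dominates
```

With enough participating devices, the random noise averages out, which is exactly why the aggregator learns population-level preferences but nothing about any individual.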
Apple says this process helps refine its training data for features such as email summarisation, making the AI more accurate and useful without compromising user trust.
The same method is already in use for Genmoji, Apple's custom emoji tool. Apple explains that by anonymously identifying which prompts are common (such as "an elephant in a chef's cap"), the company can fine-tune its AI models to respond better to real-world requests. Rare or unique prompts remain hidden, and Apple never links the data to specific devices or users.
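The "common prompts surface, rare prompts stay hidden" behaviour can be sketched with a noisy threshold: only prompts whose noise-perturbed count clears a cutoff are ever revealed. The prompt strings, the Laplace noise, and the threshold below are all illustrative assumptions, not Apple's published parameters.

```python
import math
import random

def laplace_noise(scale: float) -> float:
    """Draw one sample of Laplace noise (inverse-CDF method)."""
    u = random.random() - 0.5
    return -scale * math.copysign(1.0, u) * math.log(1 - 2 * abs(u))

def visible_prompts(prompt_counts: dict[str, int],
                    threshold: float = 50.0) -> list[str]:
    """Reveal only prompts whose noisy count clears the threshold;
    rare or one-off prompts remain hidden in the noise."""
    return [prompt for prompt, count in prompt_counts.items()
            if count + laplace_noise(2.0) > threshold]

# Hypothetical aggregated counts across many devices.
counts = {
    "elephant in a chef's cap": 900,
    "my dog as an astronaut": 340,
    "one-off private prompt": 1,
}
random.seed(0)
print(visible_prompts(counts))  # common prompts pass; the rare one stays hidden
```

The rare prompt would need noise of roughly +49 to cross the threshold, which is astronomically unlikely, so unique requests are effectively never exposed.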
Apple confirmed that similar privacy-focused techniques will soon be applied to other AI tools, including Image Playground, Image Wand, Memories creation, and Visual Intelligence features.