The iPhone 15 Opts for Intuitive AI, Not Generative AI

Tech product launches in 2023 have become predictable: Everything now comes with generative AI features that will serve up chatty but knowledgeable text or mind-blowing images. The rollout of the iPhone 15 this week shows Apple opting to Think Different.

The new device comes with the A17 Pro processor, an Apple-designed chip built to put more power behind machine-learning algorithms. But the features highlighted at the launch event yesterday were generally subtle, not mind-expanding. The company appears focused on AI that is intuitive, not generative, making artificial intelligence a part of your life that smooths over glitches or offers helpful predictions without being intrusive. Apple made a similar choice to ignore the generative AI bandwagon at its developer conference in June.

A new voice-isolation feature for the iPhone 15, for example, uses machine learning to recognize and home in on the sound of your voice, quieting background noise on phone calls. As usual for iPhone launches, yesterday’s event spent ample time on the power of the new phone’s camera and image-enhancing software. Those features lean on AI too, including automatic detection of people, dogs, or cats in the frame, which gathers the depth information needed to turn any photo into a portrait after the fact.

Additional AI-powered services are also coming to newer iPhone models via the new iOS 17 operating system, due out next week. They include automated transcription of voicemails, so a person can see who’s calling before picking up, and more extensive predictive-text recommendations from the iPhone keyboard. Neither is as flashy as a know-it-all chatbot. But by making life easier, they just might convince people to spend more time with their phones, pushing up usage of Apple’s services.

Apple’s intuitive AI is also at work in some new accessibility features. For people who are blind or have low vision, a new Point and Speak feature in the Magnifier app will let them aim the camera at an object with buttons, like a microwave, and hear their phone say which button their finger is touching. For people with medical conditions like ALS that can rob them of the ability to speak, iOS 17 can create a synthetic voice that sounds like them after they read 15 minutes of text prompts.

Smartphones have become hard to improve on with transformative new features, and overall the iPhone 15 rollout was underwhelming, says Tuong Nguyen, a director analyst covering emerging technology at Gartner. But Apple excels at the kind of interface design that makes subtle AI-powered features work.

Nguyen thinks two machine-learning-powered features could prove so intuitive that they set a standard other companies emulate: adaptive audio, due out this fall for AirPods, which blends music or calls with nearby voices or ambient sound, and the new “double tap” gesture, which controls an Apple Watch Series 9 with a simple tap of index finger and thumb.
