The critique of Apple’s halting rollout is not entirely unfair, though it is, at times, missing the point.
Apple, like every other big player in tech, is scrambling to find ways to inject AI into its products. Why? Well, it’s the future! What problems is it solving? Well, so far that’s not clear! Are customers demanding it? LOL, no. Last year, the backlash against one of Apple’s early ads for its AI was so hostile that the company had to pull the commercial.
The real reason companies are doing this is that Wall Street wants them to. Investors have been salivating for an Apple “super cycle” — a tech upgrade so enticing that consumers will rush to get their hands on the new model.
In a rush to please shareholders, Apple made a rare stumble. The company is owning its error, it seems, and has said the delayed features would roll out “in the coming year.”
Of course, the cryptic delay has only given oxygen to the narrative that Apple has become a laggard in the Most Important Tech Advancement in decades.
AI can only be failed
There’s a popular adage in policy circles: “The party can never fail, it can only be failed.” It is meant as a critique of the ideological gatekeepers who may, for example, blame voters for their party’s failings rather than the party itself.
That same fallacy is taking root among AI’s biggest backers. AI can never fail; it can only be failed. Failed by you and me, the smooth-brained Luddites who just don’t get it. (To be sure, even AI proponents will acknowledge available models’ shortcomings — no one would argue that the AI slop clogging Facebook is anything but, well, slop — but there is a dominant narrative within tech that AI is both inevitable and revolutionary.)
Tech columnists such as the New York Times’ Kevin Roose have suggested recently that Apple has failed AI, rather than the other way around.
“Apple is not meeting the moment in AI,” Roose said on his podcast, Hard Fork, earlier this month. “I just think that when you’re building products with generative AI built into it, you do just need to be more comfortable with error, with mistakes, with things that are a little rough around the edges.”
To which I would counter, respectfully: No.
Roose is right that Apple is, to put it mildly, a fastidious creator of consumer products. It is, after all, the $3-trillion empire built by the notoriously detail-obsessed Steve Jobs.
The Apple brand is perhaps the most meticulously controlled corporate identity on the planet. Its “walled garden” of iOS — despised by developers and fair game for accusations of monopolistic behavior, to be sure — is also part of the reason one billion people have learned to trust Apple with their sensitive personal data.
Apple’s obsession with privacy and security is the reason most of us don’t think twice about scanning our faces, storing bank account information or sharing our real-time location via our phones.
And not only do we trust Apple to keep our data safe, we trust it to design things that are accessible out of the box. You can buy a new iPhone, AirPods or Apple Watch and trust that the moment you turn it on, a user-friendly system will hold your hand through the setup and seamlessly sync it with your other devices. You will rarely need a user manual filled with tiny print. Even your Boomer parents will be able to navigate FaceTime calls with minimal effort.
Roose contends, at one point in the episode, that “there are people who use AI systems who know that they are not perfect,” and that those regular users understand there’s a right way and a wrong way to query a chatbot.
This is where we, the people, are failing AI. Because in addition to being humans with jobs and social lives and laundry to fold and art to make and kids to raise, we should also learn how to tiptoe around the limitations of large language models that may or may not return accurate information to us.
Apple, Roose says, should keep pushing AI into its products and just get used to the idea that those features may be unpolished and a little too advanced for the average user.
And again, respectfully, I would ask: To what end?
As Hard Fork co-host Casey Newton notes in the same episode, it’s not as if Google or Amazon has figured out some incredible use case that’s making users rush to buy a new Pixel phone or an Echo speaker.
“AI is still so much more of a science and research story than it is a product story,” Newton notes.
In other words, large language models are fascinating science. They are an academic wonder with huge potential and some early commercial successes, such as OpenAI’s ChatGPT and Anthropic’s Claude. But a bot that’s 80% accurate — a figure Newton made up, but we’ll go with it — isn’t a very useful consumer product.
Back in June, Apple floated a compelling scenario for its newfangled Siri. Imagine yourself, frazzled and running late for work, simply saying into your phone: Hey Siri, what time does my mom’s flight land? And is it at JFK or LaGuardia? In theory, Siri could scan your email and texts with your mom and give you an answer. That saves you several annoying steps of opening your email to find the flight number, copying it, then pasting it into Google to find the flight’s status.
If it’s 100% accurate, it’s a fantastic time saver. If it is anything less than 100% accurate, it’s useless. Because even if there’s a 2% chance it’s wrong, there’s a 2% chance you’re stranding mom at the airport, and mom will be, rightly, very disappointed. Our moms deserve better!
Bottom line: Apple is not the laggard in AI. AI is the laggard in AI.