Dodging the hassle: Why has the iPhone’s autocorrect feature gone crazy?

Don’t worry, you’re not going crazy.

If you think your iPhone’s autocorrect has recently gone haywire, inexplicably correcting words like “come” to “coca” and “winter” to “w Inter”, you are not the only one.

Judging by comments online, hundreds of Internet sleuths feel the same, and some fear it will never be resolved.

Apple released its latest operating system, iOS 26, in September. About a month later, conspiracy theories abound, and a video purporting to show an iPhone keyboard changing a user’s spelling of the word “thumb” to “thjmb” has racked up more than 9 million views.

“There are many different forms of autocorrect,” said Jan Pedersen, a statistician who did pioneering work on autocorrect for Microsoft. “It’s a little difficult to know what technology people are actually using to make their predictions, because it’s all below the surface.”

One of the godfathers of autocorrect says that those waiting for an answer may never learn how the new system works, especially given who is behind it.

Kenneth Church, a computational linguist who helped develop some of the first self-correction approaches in the 1990s, said: “What Apple does is always a deep, dark secret. And Apple is better at keeping secrets than most companies.”

The internet has been grumbling about autocorrect for the last few years, even before iOS 26. But there is at least one concrete difference between autocorrect now and autocorrect several years ago: artificial intelligence, or what Apple called, in its iOS 17 release, an “on-device machine learning language model” that would learn from its users. The problem is that this could mean many different things.

In response to a query from The Guardian, Apple said it had updated autocorrect over the years with the latest technologies and that autocorrect was now a language model running on the device. The company said the keyboard issue in the video was not related to autocorrect.

Autocorrect is a development of an earlier technology: spell checking. Spell checking dates back to roughly the 1970s and included an early command in Unix (an operating system) that listed all misspelled words in a given text file. The approach was simple: compare each word in a document with a dictionary and tell the user which ones are missing.
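
A minimal sketch of that dictionary-lookup approach might look like the following; the word-list file and the sample sentence are hypothetical stand-ins, not any particular historical tool:

```python
# Minimal spell-check sketch: flag every word that is not in a dictionary.
# "words.txt" and the sample sentence are hypothetical stand-ins.
import re

def load_dictionary(path="words.txt"):
    with open(path) as f:
        return {line.strip().lower() for line in f if line.strip()}

def misspelled_words(text, dictionary):
    words = re.findall(r"[a-z']+", text.lower())
    return sorted({w for w in words if w not in dictionary})

if __name__ == "__main__":
    dictionary = load_dictionary()
    print(misspelled_words("Ths sentence has a typo.", dictionary))
```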

“One of the first things I did at Bell Labs was acquire the rights to the British dictionaries,” said Church, who used them for his early work in autocorrect and for speech synthesis programs.

Autocorrecting a word (that is, suggesting in real time that a user might have meant “their” instead of “thier”) is much more difficult. It’s all about math: the computer has to decide, statistically, whether by “graff” you are more likely to mean a giraffe (only two letters missing) or a homophone, like “graph.”
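
As a rough illustration of that statistical trade-off, the sketch below scores candidate corrections by combining spelling similarity with how common each word is; the frequency figures are invented for the example and are not from any real system:

```python
from difflib import SequenceMatcher

# Hypothetical word frequencies (occurrences per million words);
# real systems derive these from large text corpora.
FREQUENCIES = {"giraffe": 5.0, "graph": 60.0, "graft": 8.0}

def similarity(a, b):
    # Character-level similarity, a rough stand-in for edit distance.
    return SequenceMatcher(None, a, b).ratio()

def best_correction(typo, candidates=FREQUENCIES):
    # Score each candidate by how close its spelling is, weighted by
    # how common the word is, and pick the highest-scoring one.
    return max(candidates, key=lambda w: similarity(typo, w) * candidates[w])

print(best_correction("graff"))  # "graph": a common word with a close spelling
```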

In advanced cases, autocorrect also has to decide whether a real English word you used is actually appropriate for the context, or whether you probably meant that your teenager was good at “math” and not “meth.”

Until a few years ago, the cutting-edge technology was n-grams, a system that worked so well that most people took it for granted, except when it failed to recognize less common names, sanctimoniously replaced swear words with unsatisfying alternatives (which can be annoying), or, apocryphally, changed sentences such as “handing over a baby in a taxi” to “devouring a baby in a taxi.”


Simply put, n-grams are a very basic version of modern LLMs like ChatGPT. They make statistical predictions about what you’re likely to say based on what you’ve said before and how most people complete the sentence you’ve started. Different engineering strategies affect what data an n-gram autocorrect draws on, Church says.
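
A toy illustration of that idea, with a tiny invented corpus rather than real training data: count which word most often follows the previous one, then prefer whatever the counts make likely.

```python
from collections import Counter, defaultdict

# Tiny invented corpus; real n-gram models are trained on billions of words.
corpus = ("my teenager is good at math . "
          "he is very good at math . "
          "she is good at science .").split()

# Count bigrams: how often each word follows the one before it.
bigram_counts = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    bigram_counts[prev][nxt] += 1

def most_likely_next(prev_word):
    # Predict the word that most often followed prev_word in the corpus.
    followers = bigram_counts.get(prev_word)
    return followers.most_common(1)[0][0] if followers else None

print(most_likely_next("at"))  # "math": seen twice here, vs "science" once
```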

But they are no longer the state of the art; we are in the age of AI.

Apple’s new offering, a “transformer language model,” involves technology that is more complex than the old autocorrect, Pedersen says. A transformer is one of the key advances underpinning models like ChatGPT and Gemini: it makes these models more sophisticated at responding to human queries.

What this means for the new autocorrect is less clear. Pedersen says that whatever Apple has implemented will likely be much smaller than familiar AI models; otherwise it couldn’t run on a phone.

But more importantly, it is likely to be much harder to understand what is going wrong in the new autocorrect models than in previous ones, because of the challenges of interpreting AI systems.

“There’s this whole area of explainability and interpretability, where people want to understand how things work,” Church said. “With the older methods, you can get an answer to what’s going on. The latest and greatest things are like magic. They work much better than the older things. But when they go wrong, they’re really bad.”
