
Siri has jokes: Apple dictation thinks “racist” means “Trump”

You can’t make this stuff up. Apple has acknowledged a bug in the iPhone’s dictation feature that caused the word “racist” to be transcribed as “Trump.”

However, “Trump” did not appear every time you said “racist”.

The voice-to-text feature also typed words like “Reinhold” and “you” when a user said “racist,” according to Fox’s testing. Most of the time, it transcribed “racist” accurately.

The story of Apple’s iPhone dictation transcribing “racist” as “Trump” quickly went viral, and it’s hard not to think about AI bias, software glitches, and political symbolism.

What’s crazy is that some users on social media found it hilarious, with some jokingly praising Apple for making Siri “smarter.” I would only add that making Siri smarter is child’s play – that’s how low the bar is.

Some people celebrated the glitch as an “accidental truth,” while others wrote things like: “Apple Intelligence is real! There was no mistake here.”

I found this one particularly interesting: “‘Bug’… some programmer out there is doing God’s work. This was intentional, and it is a delight.”

Yes, we can all agree it’s very unlikely this was an accident. It has rogue (or not-so-rogue) programmer written all over it.

Meanwhile, others criticized Apple for what they saw as an example of bias, whether intentional or not.

Apple responded, saying:

We are aware of an issue with the speech recognition model that powers Dictation and we are rolling out a fix as soon as possible.

Fox adds: “Apple says the speech recognition models that power dictation can temporarily display words with some phonetic overlap before landing on the correct word. The bug affects other words with an ‘r’ consonant when dictated, Apple says.”
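
For what it’s worth, this is the behavior Apple is describing: a streaming dictation engine displays an interim best guess and keeps revising it as more audio arrives. The sketch below is a toy illustration in Python – the candidate words and scores are invented for the example and have nothing to do with Apple’s actual model:

```python
# Toy illustration of interim hypotheses in streaming speech-to-text.
# Real systems score thousands of candidates with acoustic and language
# models; the words and numbers below are invented for the example.

def interim_transcripts(partial_scores):
    """Yield the best-scoring candidate after each audio chunk."""
    for scores in partial_scores:
        # Display whichever candidate currently scores highest,
        # even if later audio will overturn it.
        yield max(scores, key=scores.get)

# Hypothetical scores as more of the word "racist" is heard: early
# chunks are ambiguous among phonetically overlapping candidates.
steps = [
    {"Reinhold": 0.42, "racist": 0.38, "raced": 0.20},  # first syllable
    {"racist": 0.55, "Reinhold": 0.30, "raced": 0.15},  # more audio
    {"racist": 0.90, "Reinhold": 0.06, "raced": 0.04},  # full word
]

for guess in interim_transcripts(steps):
    print(guess)  # Reinhold -> racist -> racist
```

On that account, a briefly wrong interim word is plausible; it’s the specific interim word that strains belief.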

I’m not sure I buy the phonetic-overlap theory. Neither, it seems, does John Burkey, the founder of Wonderrush.ai. The New York Times wrote:

But he said the data Apple has collected for its artificial intelligence offerings was unlikely to have caused the problem, and the word itself was probably a clue that the issue was not merely technical. Instead, he said, there was probably software code somewhere in Apple’s systems that caused iPhones to write the word “Trump” when someone said “racist.”

“This smells like a serious prank,” Mr. Burkey said. “The only question is: did someone slip this into the data or into the code?”
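
To make Burkey’s “data or code” question concrete: a code-level prank would amount to a post-processing pass that rewrites the transcript after the model has finished. Here is a purely hypothetical sketch – every name is invented, and none of this reflects Apple’s software:

```python
# Purely hypothetical illustration of a code-level substitution: a
# post-processing pass that rewrites a finished transcript. The
# function name and rewrite table are invented for this example.

SUBSTITUTIONS = {
    "word_a": "word_b",  # a planted rewrite rule would live in a table like this
}

def post_process(transcript: str) -> str:
    """Apply word-level rewrites after speech recognition completes."""
    return " ".join(SUBSTITUTIONS.get(word, word) for word in transcript.split())

print(post_process("word_a and more"))  # -> "word_b and more"
```

A data-level prank, by contrast, would mean poisoning training examples so the model itself learns the association – much harder to spot in a code review, which is why Burkey’s question matters.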

Why this is bigger than “just a prank”

This case highlights concerns about neutrality. If a dictation model makes politically loaded substitutions like the one above – whether accidental or systemic – it can erode trust in technology companies, which is already low, if I may add.

It also raises a bigger question: what other biases might be present in AI systems that aren’t as immediately visible?

I think this whole debacle mainly affects confidence in AI. Users may wonder whether tech companies are baking political biases into their products.

To be honest, I’ve seen too many examples of this kind of bias, and it always leans the same way – which is understandable, since most Big Tech employees share the same political beliefs and live in like-minded areas.

It’s only natural for some of their biases to find their way into the products they build.

Apple’s quick move to fix the error is a positive step, but the incident points to a bigger issue: the responsibility of tech companies to ensure their AI remains accurate and impartial.

As artificial intelligence becomes an integral part of our daily lives, ensuring transparency in how these systems are trained and monitored becomes paramount.

On the surface, Apple’s dictation bug is a funny, widely shared glitch. But below the surface, it’s a reminder that AI – whether through accidental errors or deeper biases – has real-world consequences.

Reactions to this incident reveal how people interpret mistakes through their own ideological lenses. If it supports my beliefs – glitch or bias, no matter – it’s funny. If it pushes what I consider a harmful way of thinking, it’s no laughing matter.

The ball is in Big Tech’s court to make sure AI is neutral and reliable. It will be a tougher test than we imagined.