
Opinion: Is Autocomplete Evil? Some Women Take A Hard Look At Google!

spnadmin

1947-2014 (Archived)
SPNer

Is Autocomplete Evil?


by Tom Chatfield

http://www.bbc.com/future/story/20131106-is-google-autocomplete-evil/1

“Women shouldn’t have rights.” “Women shouldn’t vote.” “Women shouldn’t work.” How prevalent are these beliefs? According to a recent United Nations campaign, such sexism is dispiritingly common, and it’s why they published these sentiments on a series of posters. The source? These statements were the top suggestions offered by Google’s “instant” search tool when the words “Women shouldn’t…” were typed into its search box.

Google Instant is an “autocomplete” service – which, as the name suggests, automatically suggests letters and words to complete a query, based on the company’s knowledge of the billions of searches performed across the world each day. If I enter the words “women should”, the number one suggestion on my own screen is “women shoulder bags”, followed by the more depressing “women should be seen and not heard”. If I type “men should”, the enigmatic suggestion “men should weep” pops up.
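To make the mechanism concrete, here is a minimal sketch of prefix-based suggestion. The toy query log and the `autocomplete` function are illustrative assumptions only; Google's actual system is vastly larger and proprietary.

```python
from collections import Counter

# Hypothetical query log: the real one is billions of searches per day.
query_log = [
    "women shoulder bags", "women should be seen and not heard",
    "women shoulder bags", "men should weep",
]

# Count how often each full query appears in the log.
query_counts = Counter(query_log)

def autocomplete(prefix, k=3):
    """Return up to k logged queries starting with prefix, most frequent first."""
    matches = [(count, q) for q, count in query_counts.items()
               if q.startswith(prefix.lower())]
    matches.sort(reverse=True)  # highest count first
    return [q for _, q in matches[:k]]

print(autocomplete("women should"))
# -> ['women shoulder bags', 'women should be seen and not heard']
```

Even this toy version shows the article's point: the suggestions are a ranking over past behaviour, not a statement of fact.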

The argument behind the UN campaign is that this algorithm offers a glimpse into our collective psyche – and a disturbing one at that. Is this really true? Not in the sense that the campaign implies. Autocomplete is biased and deficient in many ways, and there are dangers ahead if we forget that. In fact, there is a good case that you should switch it off entirely.

Like many of the world’s most successful technologies, the mark of autocomplete’s success is how little we notice it. The better it’s working, the more seamlessly its anticipations fit in with our expectations – to the point where it’s most noticeable when a site lacks the feature, or Google suddenly stops anticipating our needs. The irony is that the more effort is expended on making the results appear seamless, the more unvarnished and truthful they feel to users. Knowing what “everyone” thinks about any particular issue or question simply means starting to type, and watching the answer write itself ahead of our tapping fingers.

Yet, like any other search algorithm, autocomplete blends a secret sauce of data points beneath its effortless interface. Your language, location and timing are all major factors in results, as are measures of impact and engagement – not to mention your own browsing history and the “freshness” of any topic. In other words, what autocomplete feeds you is not the full picture, but what Google anticipates you want. It’s not about mere truth; it’s about “relevance”.
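A hedged sketch of what such blending might look like: the signal names and weights below are assumptions chosen for illustration, not Google's actual ranking formula.

```python
# Illustrative "relevance" blending: each candidate completion is scored
# from a handful of normalized signals (each assumed to lie in [0, 1]).
def relevance(signals, weights=None):
    """Combine per-candidate signals into a single relevance score."""
    weights = weights or {
        "popularity": 0.4,  # how often everyone searches it
        "freshness": 0.2,   # is the topic trending right now?
        "locality": 0.2,    # popular in the user's language/region?
        "personal": 0.2,    # does it match this user's own history?
    }
    return sum(w * signals.get(name, 0.0) for name, w in weights.items())

# A globally popular query can lose to a fresh, local, personalized one --
# which is exactly why two users typing the same prefix see different lists.
print(relevance({"popularity": 0.9, "freshness": 0.1,
                 "locality": 0.3, "personal": 0.0}))  # 0.44
print(relevance({"popularity": 0.5, "freshness": 0.9,
                 "locality": 0.8, "personal": 0.7}))  # 0.68
```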

This is before you get on to censorship. Understandably, Google suppresses terms likely to encourage illegality or materials unsuitable for all users, together with numerous formulations relating to areas like racial and religious hatred. The company’s list of “potentially inappropriate search queries” is constantly updated.
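In code terms, suppression is conceptually just a filter pass over the candidate list before anything is displayed. The blocklist entries below are placeholders, since the real list is proprietary and, as the article notes, constantly updated.

```python
# Illustrative only: a toy suppression filter applied before display.
BLOCKED_SUBSTRINGS = {"blocked phrase a", "blocked phrase b"}

def filter_suggestions(suggestions):
    """Drop any candidate completion containing a blocked substring."""
    return [s for s in suggestions
            if not any(bad in s.lower() for bad in BLOCKED_SUBSTRINGS)]

print(filter_suggestions(["harmless query", "blocked phrase a example"]))
# -> ['harmless query']
```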

False premise

None of this should be news to savvy web users. Yet many reactions to the UN Women campaign suggest, to me, a reliance on algorithmic expertise that borders on blind faith. “The ads are shocking,” explained one of the copywriters behind them, “because they show just how far we still have to go to achieve gender equality.” The implication is that these results are alarming precisely because they are impartial: unambiguous evidence of prejudice on a global scale. Yet, while the aim of the campaign is sound, the evidence from Google is far from straightforward.

The greatest danger, in fact, is the degree to which an instantaneous answer-generator has the power not only to reflect but also to remould what the world believes – and to do so beneath the level of conscious debate. Autocomplete is coming to be seen as a form of prophecy, complete with a self-fulfilling invitation to click and agree. Yet by letting an algorithm finish our thoughts, we contribute to a feedback loop that potentially reinforces untruths and misconceptions for future searchers.

Consider the case of a Japanese man who, earlier this year, typed his name into Google and discovered autocomplete associating him with criminal acts. He won a court case compelling the company to modify the results. The Japanese case echoed a previous instance in Australia where, effectively, the autocomplete algorithm was judged to be guilty of libel after it suggested the word “bankrupt” be appended to a doctor’s name. And there are plenty of other examples to pick from.

So far as Google engineers are concerned, these are mere blips in the data. What they are offering is ever-improving efficiency: a collaboration between humans and machines that saves time, eliminates errors and frustrations, and enriches our lives with its constant trickle of data. All of which is true – and none the less disturbing for all that.

As the company’s help page puts it, “even when you don’t know exactly what you’re looking for, predictions help guide your search.” Google has built a system that claims to know not only my desires, but humanity itself, better than I could ever manage – and it gently rams the fact down my throat every time I start to type.

Did you know you can turn autocomplete off just by changing one setting? I’d recommend you give it a try, if only to perform a simple test: does having a computer whispering in your ear change the way you think about the world? Or, of course, you can ask Google itself. For me, typing “is Google autocomplete...” offered the completed phrase “is Google autocomplete a joke?”. Unfortunately, the answer is anything but.
 

Attachments: evilgoogle.jpg (16.9 KB)

Ishna

Writer
SPNer
Reminds me of the time when I was 15 and did a Google search for 'teen pagan network' and got a sponsored ad result for "pre-teen l.olitas"...

15-year-old me sent a VERY angry message to Google. I never got a reply. Wow... that was a long time ago.
 

spnadmin

1947-2014 (Archived)
SPNer
They don't even reply when you write about a clear violation of the law. And I am talking about the law, not about one's personal social or ethical school of thought. Now Google expects us to believe them when they launch public relations stunts to regain public confidence after playing ball with the NSA.
 
