Thomas Klikauer for BuzzFlash: Algorithms – the Good, the Bad, and the Ugly
February 15, 2023
By Thomas Klikauer
We are inescapably and perhaps even inextricably exposed to algorithms. Ever since the Persian mathematician Muhammad al-Khwarizmi – whose Latinized name, “Algoritmi”, gave us the word – we have known them as algorithms.
Today, we understand al-Khwarizmi’s algorithms as a sequence of instructions used to solve a specific problem or to perform a computation. More advanced algorithms use conditional tests to divert code execution through different routes.
Such algorithms can be used in automated decision-making. They can even create a form of automated reasoning.
Algorithms also translate human characteristics into descriptions that a machine – a computer, for example – can process. This simply means putting human-invented opinions and ideas – and even biases, stereotypes, ideologies, and prejudices – into a mathematical form fit for computer code.
As the Internet increasingly defines our lives, computer algorithms progressively control and perhaps even decide our future. Yet, the boundaries between a controller (e.g. Google’s algorithms) and the “to be controlled” (us) become virtually imperceptible.
In a rather telling case, the good people of the northern English town of Scunthorpe discovered the hard way just how controlled they are. In what became known as the “Scunthorpe Problem”, Scunthorpe inhabitants were blocked from opening AOL accounts. The all-controlling internet giant had installed a new “profanity filter”. The filter blocked certain words even when they appeared inside other words – including the name of their town, Scunthorpe.
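AOL’s actual filter was never published, but the failure mode is simple to reproduce: matching banned words as raw substrings. A minimal Python sketch – with an illustrative, not authentic, word list – shows both the naive filter and the whole-word fix:

```python
import re

# A minimal sketch of the "Scunthorpe Problem": a naive profanity filter
# that matches banned words as raw substrings blocks innocent words too.
# The banned-word list below is illustrative, not AOL's actual list.
BANNED = {"cunt", "arse"}

def naive_filter(text: str) -> bool:
    """Block the text if any banned word appears anywhere inside it."""
    lowered = text.lower()
    return any(bad in lowered for bad in BANNED)

def word_boundary_filter(text: str) -> bool:
    """A smarter filter: block whole-word matches only."""
    lowered = text.lower()
    return any(re.search(rf"\b{re.escape(bad)}\b", lowered) for bad in BANNED)

print(naive_filter("Scunthorpe"))          # True  - blocked by substring match
print(word_boundary_filter("Scunthorpe"))  # False - whole words only, so it passes
```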
Behind such filters runs an invisible algorithm. As pieces of computer code, algorithms are the gears and cogs of our modern machine age. They come in three versions – Sergio-Leone-like, as “the Good, the Bad, and the Ugly”. The first is “the Good”.
On the positive side, such algorithms have given us everything from social media to search engines, from satellite navigation systems to music recommendation techniques. Thankfully, they are part of our modern infrastructure. They work – often for our benefit – in hospitals, courtrooms, etc. They even find the quickest way to a Thai restaurant.
Yet, Sergio Leone’s “the Bad” is never far away. Algorithms also have the secret power to slowly and subtly change the rules about what it means to be human. They move our power to make decisions onto machines and software code, i.e. algorithms.
In other words, medical doctors, for example, can use algorithms to override their very own diagnoses – for better or worse. Even more problematic – and this might well be “the Ugly” – algorithms have the power to undermine democracies. Yet, most algorithms are not inherently bad or ugly.
Perhaps one might like to argue that no machine, tool, or algorithm is ever good or bad “in itself”. What really matters may be how these algorithms are used. This also means that under our beloved system of capitalism, it is capitalism – and not us – that largely shapes how algorithms are used, and what for.
For example, GPS was invented to launch nuclear missiles. Today, it helps deliver pizzas. Quite apart from the military and the corporate – i.e. for-profit – use of algorithms, computer-aided decision-making often boils down to when to trust an algorithmic machine over human (mis-)judgement. This may well lead to one of the key questions on the issue of algorithms: when is the time to resist the temptation to leave algorithms in control of our destiny – or, quite literally, our destination?
These questions came to the fore on 11 May 1997, when IBM’s Deep Blue defeated Garry Kasparov – the world’s greatest chess player. Today, some argue that this day marked the beginning of the Age of the Algorithm.
An algorithm, in this sense, may indeed be defined as a series of logical instructions that accomplish – from start to finish – a given task, like code that can play chess. At its most basic level, even a pizza recipe – a set of instructions – might count as an algorithm.
In theory, any self-contained list of step-by-step instructions for achieving an objective could be seen as an algorithm. At a more sophisticated level, algorithms are almost always mathematical objects. They use equations, arithmetic, algebra, calculus, logic, and probabilities – all expressed in computer code. In the Age of Algorithms, they – more or less – run the Internet. But not just that.
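To make the idea concrete, here is perhaps the oldest example in the textbooks – Euclid’s method for the greatest common divisor, a self-contained list of step-by-step instructions in exactly the sense above (the Python rendering is merely illustrative):

```python
# Euclid's algorithm: a finite, self-contained list of steps that
# turns two numbers into their greatest common divisor.
def gcd(a: int, b: int) -> int:
    while b != 0:
        a, b = b, a % b   # one simple rule, repeated until nothing is left
    return a

print(gcd(1071, 462))  # 21
```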
Through algorithms, Amazon, Disney, and Netflix suggest which films people might “like” to watch next – or which movie makes the most profit for them. Meanwhile, the algorithm in your car’s navigation system selects the fastest route. All of them use a mathematical-algorithmic process to pick a decision out of a vast array of possible choices – such as the right streets that lead to your destination.
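How a navigation algorithm picks “the right streets” can be sketched with the classic shortest-path idea (Dijkstra’s algorithm). The toy road map and travel times below are entirely made up, and real navigation systems are vastly more elaborate, but the principle – systematically comparing routes by cost – is the same:

```python
import heapq

# A toy road map: place -> [(neighbouring place, travel time in minutes)].
# All places and times are illustrative.
ROADS = {
    "home":            [("high st", 4), ("ring rd", 2)],
    "ring rd":         [("high st", 1), ("thai restaurant", 7)],
    "high st":         [("thai restaurant", 5)],
    "thai restaurant": [],
}

def quickest(start: str, goal: str) -> float:
    """Dijkstra's algorithm: always extend the cheapest route found so far."""
    queue = [(0, start)]          # (time so far, place)
    best = {start: 0}
    while queue:
        time, place = heapq.heappop(queue)
        if place == goal:
            return time
        for nxt, cost in ROADS[place]:
            if time + cost < best.get(nxt, float("inf")):
                best[nxt] = time + cost
                heapq.heappush(queue, (time + cost, nxt))
    return float("inf")           # unreachable

print(quickest("home", "thai restaurant"))  # 8 - home -> ring rd -> high st -> thai
```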
Of course, in consumer capitalism, algorithms have become a preferred tool of corporate advertisers. They run – often unseen and unnoticed – behind the scenes and the screen. Worse, algorithms can also classify consumers as likely buyers of particular products, on the basis of consumer characteristics.
For marketing to be effective often means getting the consumer (us) to buy things we don’t need, with money we don’t have, to impress people we don’t even like. And for that, algorithms often need to eradicate unwanted information – information that is useless to corporations.
Instead, when programmed correctly, an algorithm focuses on what is important to the advertising corporation. Marketing calls this separating the “signal” from the “noise”.
When this becomes even more sophisticated, we enter the domain of self-learning. We tend to call this artificial intelligence or machine learning.
Ever since Norbert Wiener, we have known how this works: you give the machine data and a goal, and a feedback loop tells the program when it is on the right track. After that, you set it in motion and leave it to work out the best way of achieving the end – whether to sell stuff or to send missiles into Iraq or Ukraine.
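A minimal sketch of such a feedback loop – one adjustable number, one goal, one correction rule; every figure below is illustrative:

```python
# Fit y ≈ w * x by nudging w whenever the prediction misses (a
# Wiener-style feedback loop in miniature). The data and learning
# rate are illustrative choices.
data = [(1.0, 2.1), (2.0, 3.9), (3.0, 6.2)]  # (input, target) pairs
w = 0.0     # the machine's initial guess
lr = 0.05   # how hard each error nudges the guess

for _ in range(200):            # set it in motion...
    for x, y in data:
        error = (w * x) - y     # the feedback signal: how far off are we?
        w -= lr * error * x     # ...and let it correct itself

print(round(w, 2))  # ≈ 2.05 - the machine found the slope (about 2) on its own
```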
Yet, the capabilities of artificial intelligence are often grossly overestimated. We are not – yet – in the Age of Artificial Intelligence. What we have today is ‘intelligence’ only in the narrowest sense of the word.
Artificial intelligence should be understood not as a revolution in intelligence but, more realistically, as a revolution in computational statistics (finding meaning in masses of data) – hence algorithms, not artificial intelligence. It is basically sophisticated statistics – not (yet!) artificial intelligence.
Yet, sophisticated statistical algorithms are used by standard search engines. Google, for example, uses statistical analysis in its algorithms. These algorithms have the power to alter our view of the world.
They (the controller) also decide what we (the controlled) usually see on page one of search results. As the old joke goes, there is no better place to hide a dead body than on the second page of Google’s search results: around 91.5% of people never look past the first page, only 4.8% reach the second, and a mere 1.1% the third.
Many are tempted to believe that what is on the first page is enough. Yet, Internet corporations know – and exploit – the fact that when people are unaware that they are being manipulated, they tend to believe they have adopted their new thinking voluntarily.
It works on Google just as well as on Facebook, etc. Worse, it does not stop there. Many also believe that algorithms are right most of the time.
Furthermore, not a few people end up believing that algorithms always deliver superior judgement. For example, when searching for the cheapest airfare, we like to believe that the algorithms behind sites such as Skyscanner and Flightscanner check every airline for the cheapest deal.
Yet, there is a very human tendency to take algorithms at face value – often without really wondering what is going on behind the scenes of your computer. In short, algorithms are a bit like magic, mirages, and illusions. At first glance, algorithms may even appear to be like wizardry. Yet, as soon as you start to understand how the [algorithmic] trick is actually being done, the mystery “melts into air”.
In truth, algorithms – no matter how simple their actual design – are very likely to make better predictions than people in a remarkable range of cases. And this feeds a problem: human beings have a very strong tendency to over-trust anything we don’t really understand – algorithms included.
It is even more surprising to realize that we are less tolerant of an algorithm’s mistakes than of our own – even if a person’s very own mistakes are bigger! Here is a particularly telling example.
In the UK, a woman called Lisa contacted her local Tesco supermarket. Lisa complained that her online shopping data must be wrong: while shopping online, she had spotted “condoms” on her list of “My Favourites”. They couldn’t possibly be her boyfriend’s, Lisa claimed – they did not use condoms.
Upon her insistence, Tesco’s analysts looked into Lisa’s data. They discovered that her list was accurate – condoms had been purchased. But rather than create a tiff, Tesco reached a diplomatic decision: it apologized and removed the condoms from Lisa’s “My Favourites” list.
Similarly, the retailer Target runs a pregnancy predictor behind the scenes. In fact, most retailers do – with Amazon as the mastermind.
In fact, every time people shop online, sign up for a newsletter, register on a website, enquire about a new car (or condoms), fill out a warranty card, or buy a new sofa or home, personal data are handed over.
And do not sit on your toilet while your automated vacuum is running, recommends MIT Technology Review. One reason is that your information – and pictures of you – is not just collected and sold to a data broker, but may also be transmitted to the company that sold you the robot vacuum.
The details people type into an insurance comparison website, for example, are sold to a data broker. Worse, in some cases, your entire web browser history can be bundled up and sold on. Attention: now is the time to clear your recent history!
These data profiles form the digital shadow you leave behind every time you use the Internet. Very often, that shadow includes a tiny flag – known as a cookie – placed on your computer. Such a cookie acts as a signal – a red flag – to all kinds of other corporate websites floating around the highly commercialized Internet. It tells such corporations that you are someone who should be served up, for example, an advertisement for condoms or a Caribbean cruise.
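In technical terms, a cookie is just a small name-value pair that a website asks your browser to store and send back on every later visit. A sketch using Python’s standard library – the cookie name, visitor ID, and domain are all hypothetical:

```python
from http.cookies import SimpleCookie

# The first site you visit plants a unique identifier...
cookie = SimpleCookie()
cookie["tracker_id"] = "a1b2c3d4"                     # hypothetical visitor ID
cookie["tracker_id"]["domain"] = ".ads.example"       # hypothetical ad domain
cookie["tracker_id"]["max-age"] = 60 * 60 * 24 * 365  # persist for a year
print(cookie.output())  # the Set-Cookie header your browser will store

# ...and every later request to that domain sends it straight back,
# letting the network recognize the same visitor across many sites.
returned = SimpleCookie("tracker_id=a1b2c3d4")
print(returned["tracker_id"].value)  # a1b2c3d4
```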
In the vast space of the Internet’s five billion gigabytes, such corporations hide in the unseen background. Unnoticed, their algorithms sell information. In most cases, people will never know that some corporations hold those data in the first place. Worse, virtually none of us ever – voluntarily – offered these data to corporations.
As MIT Technology Review recently noted, we often opt in simply by using the product: privacy policies with vague language give companies broad discretion over how they disseminate and analyze consumer information.
Yet, these data can contain the most personal information and private secrets. And worse, they can be converted into a saleable commodity. Regrettably, in almost all countries, local laws do not protect you.
In other words, the selling of your data via data brokers remains largely unregulated – just as neoliberalism’s zealous drive for deregulation wants it. It also means that data on your private life are being sold – literally as we speak! Here is one rather illuminating example: a judge’s browser history on one particular day in August 2016 showed:
· 18.22: http://www.tubegalore.com/video/amature-pov-ex-wife-in-leather-pants-gets-creampie42945.html
· 18.23: http://www.xxkingtube.com/video/pov_wifey_on_sex_stool_with_beaded_thong_gets_creampie_4814.html
· 18.24: http://de.xhamster.com/movies/924590/office_lady_in_pants_rubbing_riding_best_of_anlife.html
· 18.27: http://www.tubegalore.com/young_tube/5762-1/page0
· 18.30: http://www.keezmovies.com/video/sexy-dominatrix-milks-him-dry-1007114?utm_sources
As juicy as it is, the judge was not doing anything illegal. It is not a crime to access these pages – perhaps not even immoral. Many people might correctly argue that he was not doing anything wrong at all. Yet, the data might nonetheless become very handy in the hands of someone who wants to blackmail the judge, humiliate him, or cause a little friction between him and his wife.
On a much grander scale are the operations of Cambridge Analytica. Cambridge Analytica claimed that a system of just five characteristics is able to ascertain an individual’s personality: openness to experience, conscientiousness, extraversion, agreeableness, and neuroticism.
Taken together, these five elements offer a highly useful way to describe what kind of a person you are. Built into an algorithm, they allow such a company – potentially – to infer someone’s personality.
The information feeding this can come from something as simple as Facebook “Likes”! In some cases, the entire enterprise is motivated by marketing; in others, by ideology or politics – leading to what is known as the micro-targeting of voters.
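In spirit, such a Likes-to-personality model can be as simple as a weighted checklist. The sketch below invents both the Facebook pages and the trait weights purely for illustration – Cambridge Analytica’s actual models were never made public:

```python
# Hypothetical pages and "Big Five" (OCEAN) trait weights - all invented.
TRAIT_WEIGHTS = {
    "Philosophy Memes":  {"openness": 0.6},
    "Extreme Couponing": {"conscientiousness": 0.4},
    "Karaoke Nights":    {"extraversion": 0.5},
    "Charity Runs":      {"agreeableness": 0.5},
    "Horror Movies":     {"neuroticism": 0.3, "openness": 0.2},
}

def score(likes: list[str]) -> dict[str, float]:
    """Sum the (hypothetical) trait weights over a user's Likes."""
    traits = {"openness": 0.0, "conscientiousness": 0.0,
              "extraversion": 0.0, "agreeableness": 0.0, "neuroticism": 0.0}
    for page in likes:
        for trait, weight in TRAIT_WEIGHTS.get(page, {}).items():
            traits[trait] += weight
    return traits

print(score(["Philosophy Memes", "Horror Movies"]))
# openness ≈ 0.8, neuroticism ≈ 0.3, everything else 0.0
```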
This might have worked its magic in the 2016 Trump election. In that election, it did not take much for Trump to win. Trump’s team knew it could not get a majority of Americans to vote for Trump.
They were right. The popular vote favoured Hillary Clinton by roughly 66 million to 63 million. Yet, Trump won the state of Michigan by just 10,704 votes, Wisconsin by 22,748 votes, and Pennsylvania by 44,292 votes.
At least to some extent, the election outcome (the Good) of the 2016 presidential run was – with the kind assistance of algorithms (the Bad) – nudged in favor of Trump (the Ugly). And the rest – as they say – is history.