AI can overcome human weaknesses

Many of AI's advantages are well known and widely praised, and its limitations are equally familiar. But there are other important features that, although rarely mentioned, deserve our attention.

Benefits – AI applications can perform incredibly complicated tasks with ease. They can tailor a recommendation for the next song you'll like, or scan millions of X-rays to find the one that indicates a problem. Moreover, they perform these tasks at volumes and levels of precision that human experts cannot match. Monotonous but important jobs are dispatched smoothly and without complaint.

Limitations – At the same time, many articles have been written about the abilities humans have that AI does not. These articles often argue that humans and AI should work together, with AI augmenting the more expansive abilities of humans. We can imagine, anticipate, feel, and judge in changing situations. Since a more comprehensive Artificial General Intelligence is not yet within reach, current AI models, which excel at constrained tasks, still benefit from human guidance.

Beyond the obvious, AI has advantages that directly offset human weaknesses. Unlike us, it understands probabilities, doesn't introduce new biases, is painfully consistent, and avoids undue risk.

Probabilities vs. outcomes – Humans understand outcomes but are generally poor at processing probabilities. The Monty Hall problem, from the game show Let's Make a Deal, shows how badly we handle probabilities and updated evidence. There are three doors: one hides a car, the other two hide goats. The contestant picks a door at random, and Monty opens one of the other doors to reveal a goat. Monty then offers the contestant the chance to switch to the remaining closed door. Should they? It turns out that by opening a door, Monty has given the contestant additional information: a contestant who always switches wins the car two-thirds of the time. We humans tend to get these kinds of questions wrong, but AI can answer them perfectly.
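The two-thirds claim is easy to check empirically. Here is a minimal Monte Carlo sketch in Python (the function name and structure are illustrative, not from any particular library):

```python
import random

def monty_hall(trials=100_000, switch=True):
    """Simulate the Monty Hall game and return the contestant's win rate."""
    wins = 0
    for _ in range(trials):
        car = random.randrange(3)       # door hiding the car
        choice = random.randrange(3)    # contestant's initial pick
        # Monty opens a door that is neither the car nor the contestant's pick
        opened = next(d for d in range(3) if d != car and d != choice)
        if switch:
            # switch to the one remaining closed door
            choice = next(d for d in range(3) if d != choice and d != opened)
        wins += (choice == car)
    return wins / trials

print(f"Stay:   {monty_hall(switch=False):.3f}")  # ≈ 1/3
print(f"Switch: {monty_hall(switch=True):.3f}")   # ≈ 2/3
```

Running it shows the stay strategy winning about a third of the time and the switch strategy about two-thirds, matching the probability argument above.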

Bias – Humans carry many biases, whether we call them "gut instinct" or something else. Confirmation bias is perhaps the most common: we seek out and interpret information that supports a preconceived assumption or theory. Two people can watch the same news program and come away with different conclusions about the day's events. In AI, by contrast, bias enters only through the data we provide for training. AI bias is confined to a finite dataset rather than the ever-shifting complexity of human experiences, memories, beliefs, and fears. In this sense, AI bias is arguably more contained and more resolvable than human bias.

Consistency – AI is consistent, painfully so. Unless we tell it otherwise, it will do exactly what we ask, every time. The only consistent trait of humans is that we are inconsistent: in exercise, in diet, in the routes we take to work, and so on. Worse, we find ways to rationalize our inconsistencies. It is not inconceivable that the same patient, presenting the same symptoms to the same physician, could receive different diagnoses at different times. AI ensures consistency of process and results, as long as the underlying population does not drift too far.

Risk – AI will not take risks, but humans will. Of course, that is exactly why we want human intelligence to augment AI: the power of human ingenuity lies in taking risks and betting on them, such as betting on electric cars when no algorithm would have suggested doing so. But sometimes that appetite for risk turns dysfunctional, as in the Challenger space shuttle disaster that killed its crew. Even though an engineer called for a postponement of the launch, citing safety concerns over a design flaw in the O-ring seals, the shuttle lifted off as scheduled and broke apart about a minute later. In her analysis of the 1986 disaster, sociologist Diane Vaughan coined the term "normalization of deviance" to describe teams that become desensitized to unsafe practices. An AI would have assessed the data objectively and determined that the launch should be delayed.

So yes, AI can't imagine, anticipate, feel, or judge. But AI also understands probabilities, doesn't introduce new biases, is consistent, and avoids undue risk. The fact that we can feel and judge does not always work in our favor.
