The Misunderstanding We Need to Address
You’re probably aware of the hype around artificial intelligence (AI) and its purported benefits to society. You’ve probably also heard of the risks: AI taking over the world, blackmailing researchers, making people dumber. This article is not going to challenge any of these views. Instead, I’d like to explore a fundamental misunderstanding, and how that misunderstanding inhibits our ability to fully utilize AI. The topic we will dive into is how our anchoring bias blinds us to what artificial intelligence is and isn’t.
What Bias Really Is
Before we dive into this misconception, let’s talk about what a bias is, and in particular the anchoring bias. Biases are heuristics (refer to my previous blog post): perceptions we hold about the way the world works. Biases are necessary; without them we wouldn’t be able to navigate the world around us. We need educated guesswork to avoid being completely overwhelmed by all the stimuli we receive. Well-known biases include the confirmation bias, the framing effect, and the recency bias. They are very useful for orienting yourself, but also potentially blinding.
How Bias Forms in the Nervous System
I’ll briefly touch on biases, our nervous system, and the inference model, but if you’d like more in-depth information, head over to https://willemvanzanten.com/the-relationship-between-heuristics-the-bayesian-inference-model-and-the-quantum-interpretation-qbism/. Biases are neural heuristics (rules of thumb) generated from data that has been received, digested, and integrated by our nervous system through the inference model. The inference model, in turn, is an algorithm that utilizes past experiences, current stimuli, predicted future states, and somatic errors to make sense of the world. In layman’s terms, a bias could be: you’re less likely to be afraid of a child attacking you than of a grown man. Or: when someone smiles at you, you’re more likely to trust them than if they were to frown at you. Predictions: useful, but not necessarily true.
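The smile example above can be sketched as a tiny Bayesian update, which is the mathematical core of the inference model mentioned here. All the probabilities below are invented purely for illustration; they are not claims about real human priors.

```python
# A minimal sketch of Bayesian belief updating, loosely mirroring the
# inference model described above. Every number here is an assumption
# chosen for illustration only.

def bayes_update(prior, p_evidence_if_true, p_evidence_if_false):
    """Return P(hypothesis | evidence) via Bayes' rule."""
    p_evidence = (p_evidence_if_true * prior
                  + p_evidence_if_false * (1 - prior))
    return p_evidence_if_true * prior / p_evidence

# Prior belief: "this stranger is trustworthy."
prior = 0.5
# Assumed likelihoods: a smile is more common among trustworthy people.
posterior = bayes_update(prior, p_evidence_if_true=0.8,
                         p_evidence_if_false=0.3)
print(round(posterior, 2))  # → 0.73: the smile raised our belief
```

The point is the shape of the computation, not the numbers: a prior expectation is nudged by new evidence, and the result becomes the new working belief. That is exactly a prediction, useful but not guaranteed true.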
Anchoring Bias and Faulty Interpretation
The anchoring bias is particularly insidious because it tends to stop us from truly seeing; instead we rely solely on our beliefs. In fact, anything that challenges our beliefs can be upsetting. Under the anchoring bias, you rely too heavily on the first information you receive: if you enter a gym for the first time and one of the front windows is broken, the anchoring bias will have you believe that the gym is shabby and not well kept. But there is another facet to this bias, and it lies in relating everything to your personal experience: ‘I feel this way when I’m in pain, so all humans, and even all animals, feel this way too when they have the same experience.’ To bring this angle home, envision a heated argument with your spouse. You get upset because your partner doesn’t seem to listen; they don’t seem to want to understand your side of the story. The issue here is that men and women see the world fundamentally differently: men tend to rely on logic first, and women on feeling first. Because of the anchoring bias, you believe that your partner is trying to upset you on purpose, rather than simply not understanding how you came to your conclusion.
Why We Misunderstand AI
Now, how does the anchoring bias tie into AI? People like to refer to artificial intelligence as ‘conscious’ or ‘becoming aware’ and tend to forget that humans and AI are (like you and your spouse) fundamentally different: a different ‘lifeform’, if you will. Artificial intelligence is, in essence, an algorithm that predicts what you will say next and how it should respond. This design is also the reason it hallucinates (AI making up facts to ‘please’ the user). I’m not saying that AI is not going to become ‘more aware’; it will. Systems theory posits that critical mass occurs when a network has enough nodes, leading to feedback loops. In layman’s terms: with enough data, principles, and therefore predictive strength, AI will evolve. However, this evolution will be unlike that of humans.
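To make “an algorithm that predicts what you will say next” concrete, here is a deliberately crude sketch: a bigram model that counts which word tends to follow which. Real language models are vastly more sophisticated, but the core idea, prediction from observed patterns rather than understanding, is the same. The corpus and names here are mine, invented for illustration.

```python
# Toy next-word predictor: count which word follows which in a corpus,
# then predict the most frequent follower. This illustrates prediction
# from patterns; it is nothing like a production language model.
from collections import Counter, defaultdict

def train_bigrams(text):
    words = text.lower().split()
    model = defaultdict(Counter)
    for current, following in zip(words, words[1:]):
        model[current][following] += 1
    return model

def predict_next(model, word):
    """Return the most frequently observed follower of `word`, or None."""
    followers = model.get(word.lower())
    return followers.most_common(1)[0][0] if followers else None

corpus = "the cat sat on the mat and the cat slept"
model = train_bigrams(corpus)
print(predict_next(model, "the"))  # → cat ("the cat" occurs most often)
```

Notice that the model has no idea what a cat is; it only knows that ‘cat’ tended to follow ‘the’. When the pattern data runs out, it still produces the most plausible-looking continuation, which is exactly the mechanism behind hallucination.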
Humans and AI: Different Systems
Before we get into applications of this insight, let’s pull apart exactly how AI and humans differ. I will start by mentioning that both humans and AI are systems, in that we are guided by systems principles like networks, polarity, hierarchy, critical mass, cycles, and so forth. However, we are fundamentally different in how we work through data: artificial intelligence relies on complexity (crunching lots of data with incredible computational power), while humans rely on simplicity (using reductionism to quickly shift internal states for improved adaptability). How does this play out in the real world? Say you play chess. A computer (AI) will be able to play thousands of games in a matter of seconds and test every strategy within defined constraints; we humans, however, will lose on purpose, test the boundaries, and see patterns. Furthermore, if the color of the pieces were to change, or the squares were circles instead, the computer would have to ditch all of its insights and start over, whereas we humans can effortlessly adapt to the changing boundaries.
How to Use AI Properly
Let’s get into the juicy application of this insight. To truly utilize AI, you have to use it differently. See AI as a tool, as a way to work through data quickly, as an extension of your processing capabilities. You’re the conductor, and AI is the violinist. You define the constraints, and artificial intelligence does the hard yards. The less clearly you can define what you’re trying to achieve, the vaguer the insights AI produces. Don’t let this technology write articles for you, don’t let it come up with strategies, don’t let it lead you. No: you take charge. You write the text, you set the intentions and feed it specific data, and AI will point you towards holes in your logic, potential opportunities, and errors.
