
AI Unveiled: Debunking Myths and Discovering Truths

  • Dorian Munro
  • Mar 22, 2024
  • 10 min read


Dear reader, before we delve into this week’s post and explore the AI myths, I’d like to share a word of caution: the topics we're about to unravel might provoke a range of emotions, some of which may make you feel uneasy or uncomfortable. It’s not my intention to cause discomfort; my goal is to navigate the vast and often divisive field of AI with honesty and an open mind, covering the good, the ugly and everything in between. On this journey, I position myself neither as an AI sceptic nor as an unwavering optimist, but as an inquisitive seeker trying to understand AI's true impact, acknowledging that even my own perspective can sometimes be clouded. So please bear this in mind as you continue reading this post.


Myth no.1: AI as the Villain of Sci-Fi


Let’s start with my personal favourite myth to debunk: the image of AI as a villain. In many AI conversations with friends and colleagues, I’ve noticed a recurring theme of apprehension and fear towards Artificial Intelligence, largely influenced by its portrayal in science fiction. Films like 'The Terminator' and 'The Matrix' have left a lasting impression, painting AI as a formidable adversary. Does anyone remember Robin Williams in 'Bicentennial Man'? Not all stories paint AI as the bogeyman. Still, it's important to distinguish between cinematic dramatization and the reality of AI's current capabilities. Contrary to the fears of a Skynet-style apocalypse, Mo Gawdat, former Chief Business Officer of Google X, suggests in his book ‘Scary Smart’ that such scenarios are among the least probable outcomes of AI's current evolution.


This doesn't mean we face no challenges. Issues like widespread, convincing deepfakes, job displacement (which I’ll elaborate on later in this post) and further societal disconnection are more immediate concerns, underscoring the need for a balanced, informed dialogue about AI's role in our future and how we intend to use it. The real adversary is not AI itself but how we, humans, choose to use these powerful systems. Much like a kitchen knife, which can be used to chop tomatoes or, regrettably, as a weapon, AI's impact is determined by human intent. This analogy serves to remind us that the tools we create, AI included, reflect our values and choices. It's our responsibility to steer these innovations towards beneficial ends, mindful of the potential for misuse.


Despite these challenges, I'm convinced there's a silver lining: collectively, we can inspire a vision of AI as a force for good. By fostering a deep understanding of AI's capabilities and limitations, we can imagine and work towards a future where AI enhances our creativity, bolsters our problem-solving abilities, and strengthens our connections with one another. On that note, I find recent research about younger generations using Generative AI to discuss and work through their traumas truly fascinating.


Range of various helpers available on Character.ai website

So here goes my first myth debunked: I strongly disagree with the belief that sentient machines pose an immediate threat to humanity. Instead, my greater concern lies with the wrong people gaining access to open-source AI models.

 

 

Myth no.2: AI’s Omnipotence and Superior Intelligence


Let's tackle another misconception that I find very intriguing: the belief in AI’s omnipotence and its potential to eclipse human intellect. Truth be told, when I first interacted with ChatGPT back in November 2022, it felt like talking to an entity that could really think and understand me the way another human would. It was like nothing I had seen before, despite the fact that I’d been interacting with various AIs for years, such as Apple's Siri. ChatGPT was (and still is, to be frank) something else, and back then it amazed and mesmerized me. This encounter marked the beginning of my deep dive into AI, leading me to seek answers to my questions: what is AI built of, and how does it really work?


When we first interact with systems like OpenAI’s ChatGPT, Anthropic’s Claude or Google’s Gemini, we might believe these systems are more intelligent than us; they seem to reason and can give a false sense of sentience. Here’s what I learned about how they do it. There’s a very important technique AI engineers use to teach AI, called Machine Learning. For those new to the concept, Machine Learning is how AI systems learn to do things without being directly programmed for every task, by being exposed to massive amounts of data. Imagine teaching a child to recognise animals by showing them picture cards. Over time, they start correctly spotting animals they've never seen before. That's a bit like how machine learning works: AI learns from loads of examples, finding patterns and making decisions all on its own.
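To make the picture-cards idea concrete, here's a tiny, deliberately simplified Python sketch of "learning from examples". The animals, features and numbers are all invented for illustration; real machine-learning systems learn from vastly richer data and far more sophisticated models, but the core idea is the same: classify something new by comparing it to labelled examples.

```python
# A minimal sketch of learning from labelled examples, using a
# 1-nearest-neighbour rule. All data here is made up for illustration.

# Each animal is described by simple numeric features:
# (has_fur, has_feathers, lays_eggs, number_of_legs)
examples = [
    ((1, 0, 0, 4), "dog"),
    ((1, 0, 0, 4), "cat"),
    ((0, 1, 1, 2), "bird"),
    ((0, 0, 1, 0), "snake"),
]

def distance(a, b):
    """Squared Euclidean distance between two feature tuples."""
    return sum((x - y) ** 2 for x, y in zip(a, b))

def classify(features):
    """Label a new animal by its closest known example."""
    _, label = min(examples, key=lambda ex: distance(ex[0], features))
    return label

# An unseen animal: feathers, lays eggs, two legs.
print(classify((0, 1, 1, 2)))  # closest match is "bird"
```

Notice there is no "understanding" anywhere in this code: the label comes purely from similarity to past examples, which is a fair caricature of pattern-matching at the heart of machine learning.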


Deep learning is a subset of ML, which is a subset of AI. https://www.ibm.com/design/ai/basics/ml/

But there’s something a tad more complex called Deep Learning, which uses what are called neural networks. If that sounds familiar, it's because they are loosely inspired by how our brains work, connecting dots (or in AI's case, data points) to make sense of text, images and sounds. Deep Learning helps AI tackle even more complicated tasks, like understanding human speech or recognising faces in photos, but it requires even bigger amounts of data. This, in very plain words, is my explanation of what’s under the hood of most Generative AI tools we see today. If anyone is keen to dive deeper into this subject, I highly recommend enrolling in Andrew Ng's free course, "AI for Everyone". It's a fantastic resource for getting a handle on these concepts without getting lost in the technical weeds.


Despite these advancements, it's crucial to remember that AI's 'learning' is fundamentally different from human learning. AI excels at identifying patterns and making predictions based on data, but it lacks the ability to understand context or the nuances of human experience that inform our decisions and understanding.


Now, we might think: with tools this powerful, could AI outsmart us? Well, not exactly, but I’ll cover this in a later part of this post. Despite the fancy term, neural networks are really just a series of calculations based on the data they have been fed. They're not thinking or understanding the way humans do. They operate within a so-called AI "black box", making it tricky at times to understand how they reach certain conclusions. This complexity, along with the need for massive computing power, presents big hurdles. Tech giants are pouring resources into AI development, but there's still a long way to go to reach the much-anticipated Artificial General Intelligence, where machines can perform any intellectual task a human can. What exactly is AGI? That's a subject for another day.
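To show just how literal "a series of calculations" is, here's a neural network stripped down to a single artificial neuron, in a few lines of Python. The inputs and weights below are made-up numbers; in a real network there are millions or billions of such neurons, and the weights are tuned automatically during training rather than written by hand.

```python
import math

# One artificial "neuron": multiply inputs by weights, add a bias,
# then squash the result through an activation function.
# The weights and bias here are invented for illustration.

def neuron(inputs, weights, bias):
    """Weighted sum of inputs, passed through a sigmoid activation."""
    total = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1 / (1 + math.exp(-total))  # squashes the output into (0, 1)

# Three inputs, three weights, one bias: just arithmetic, no "thinking".
output = neuron([0.5, 0.1, 0.9], [0.4, -0.6, 0.2], bias=0.1)
print(round(output, 3))
```

Stack thousands of layers of these and you get the impressive systems we interact with today, but at no point does the arithmetic turn into human-style understanding.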

 

 

Myth no.3: AI's Infallibility and Lack of Bias


After delving into the myth surrounding AI's omnipotence, it's time we examine another widespread misconception: that AI systems are flawless and devoid of bias. It's easy to fall into the trap of thinking that machines, with their complex algorithms, data processing capabilities and logical operation, work beyond the human shortcomings of error and prejudice. They may not have human-like intelligence yet, but they certainly can inherit our biases and make mistakes.


As I mentioned earlier, AI, at its core, learns from massive amounts of data. This data, collected from various sources, reflects the real world, including the good, the bad and, unfortunately, the biased aspects of society. When AI systems are trained on datasets that contain certain beliefs and opinions, they can inadvertently perpetuate and even amplify those in their outputs. An AI trained predominantly on data from one linguistic or cultural background might lean towards its prejudiced norms, potentially sidelining other equally valid perspectives. To remain neutral, I will not be providing concrete real-world examples. But whether it's facial recognition systems showing inaccuracies in identifying individuals from certain demographic groups, or job application tools favouring one set of candidates over another, examples abound of AI reflecting back at us the biases ingrained in the data it was fed. I recommend watching Sasha Luccioni's TED Talk titled "AI Is Dangerous, but Not for the Reasons You Think", where she covers this particular issue and many others.
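A tiny, fully invented toy model makes the mechanism obvious. Suppose a naive hiring model simply learns from historical records, and those records skew heavily towards one group; the numbers and group names below are made up, and no real system is this crude, but the principle of "skewed data in, skewed scores out" is exactly what the real examples above illustrate.

```python
from collections import Counter

# Toy illustration of inherited bias. The "historical" records are
# entirely invented: past hires skew towards group A, so a naive model
# that just learns the historical pattern reproduces that skew.

historical_hires = ["A"] * 80 + ["B"] * 20  # 80% of past hires from group A

def naive_model(candidate_group, history):
    """Score a candidate by how often their group was hired before."""
    counts = Counter(history)
    return counts[candidate_group] / len(history)

score_a = naive_model("A", historical_hires)
score_b = naive_model("B", historical_hires)
print(score_a, score_b)  # the model favours group A purely because of skewed data
```

Nobody programmed prejudice into this model; it simply mirrored the pattern in its training data, which is why scrutinising datasets matters as much as refining algorithms.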


Addressing the issue of inherited biases is not merely about refining algorithms; it requires a concerted effort towards ethical AI development. This involves scrutinizing the data used for training AI, ensuring it is as diverse and representative as possible. Moreover, developers and researchers are increasingly adopting techniques to identify and correct biases within datasets. But to achieve that at scale, we need a global, collective approach that defines a mutually agreed framework for AI training.


There’s a concept called AI alignment: a set of principles to make sure AI systems work in a way that matches our collective ethical beliefs and societal needs. It's like teaching AI to play fair and to respect our privacy and ways of life. This effort is vital to ensure that as AI gets more involved in our lives, it supports a future that benefits everyone, without causing harm and without broadening social inequality. It's about guiding AI to be a helpful companion in solving the world's problems, not just a tool for efficiency, productivity and profit.


In essence, while AI has the potential to transform our world in countless positive ways, its development and application must be approached with a critical eye towards the ethical implications. The myth of AI's infallibility and lack of bias is just that — a myth. Recognizing and addressing the biases within AI is not only about improving technology but is fundamentally about ensuring fairness and equity in the automated processes that increasingly shape our world.


I might be dreaming big, but I’m hopeful that together, we'll find a way to not only advance and evolve AI but also ourselves, sooner rather than later.

 

 

Myth no.4: The End of Work


It felt quite fitting to ask Copilot to create a humorous illustration of a relaxed human at home while AI does everything, for this part of the post

I promised to elaborate on the point of job displacement, so let's delve into the nuanced conversation about this subject in the current AI era. The notion that AI will usurp all jobs, rendering human effort obsolete, is a fear (or a hope, for some) deeply ingrained in our collective psyche, fuelled by the dystopian narratives and speculative fiction we have already discussed. Yet the unfolding reality is more intricate and, dare I say, optimistic.


AI does indeed herald a transformation of work as we know it today, by automating tasks that are routine, monotonous or heavily reliant on data processing. This shift is not unprecedented; history is full of technological advancements that have reshaped the workforce, from the industrial revolution to the digital age. Let’s ask ourselves: can we imagine working without a computer or the internet these days? Would I hire a person without these essential skills? The key distinction lies in AI's role as an augmentative force rather than a replacement. It's about reimagining work so that AI tools liberate us from drudgery, empowering human creativity, productivity and innovation to take the forefront. Karim Lakhani, a professor at Harvard Business School, summarised it well: "AI is not going to replace humans, but humans with AI are going to replace humans without AI."


Consider the sectors where AI's impact is palpable yet balanced by an enduring need for human insight: healthcare, where AI diagnoses are complemented by the care and understanding of medical professionals; creative industries, where AI-generated art sparks new forms of expression that remain deeply human in interpretation and appreciation; and even technology itself, where the design, ethical implementation and oversight of AI systems necessitate a nuanced understanding of human values and societal impacts. This blog serves as an example: it is powered by AI tools, but they help with only a small fraction of the entire process of publishing these posts.


The future of work, then, is not a stark landscape of human redundancy but a vibrant tableau of human-AI collaboration. It's about cultivating a workforce that is resilient, adaptable and enhanced by technology. New jobs will emerge, many of which we can scarcely imagine today, driven by the very technology we fear. One of the first brand-new job roles I noticed emerging during the ongoing AI revolution was Prompt Engineering: roles for people who know how to talk to AI systems and get the most accurate responses and results out of them.


This evolution calls for a recalibration of skills and an evolution of our education and societal structures, to ensure that everyone has the opportunity to thrive in this new era. As we venture forward, let's replace fear with curiosity, speculation with preparation, and uncertainty with innovation. The narrative of AI and work is ours to write, and its limitless opportunities for growth, discovery and human advancement are ours to harness.

 


Myth no.5: AI as the Silver Bullet Solution


Having explored some of the myths already, it’s clear that AI is not yet the universal solution we might hope for, but I’d like to expand on this thought a bit more. Though AI offers transformative capabilities in productivity, efficiency and creativity, it alone cannot unravel the most complex issues and challenges, such as climate change and social inequality. As I previously explained, biased data will lead to biased AI, and as a result we cannot and should not let it take the lead in solving the most complex challenges humanity faces today.


AI systems do, however, serve as a powerful ally in specific areas, such as scientific research, and Google DeepMind's efforts are great examples of that. Their work on combating antibiotic resistance, or the Lookout app's support for the visually impaired, is truly remarkable, and these are only a few of many great use cases out there. Yet the broader spectrum of challenges that still require attention calls for a united approach, combining AI’s analytical capabilities with human insight, ethical foresight and, I will say it again, collaborative action across global communities.


Acknowledging AI's limitations and its ethical implications doesn't diminish its potential but rather emphasizes the importance of a holistic approach to technology deployment. Engaging with AI thoughtfully, as if guiding a student, enhances our interactions and, by extension, AI's development, helping it better comprehend and align with human values. I often like to think that my current interactions will act as training data for future iterations of these AI systems, and that maybe one day my positive interactions will resonate and result in a better, more thoughtful and evolved AI.


Without a shadow of a doubt, AI represents a significant leap forward for humanity, but it is not yet a panacea for all problems; I often think it might never be. Nevertheless, it is a driver of change for good, and it will enable many of us to reach new heights in personal as well as professional development.


Human and AI working together - generated using DALL-E 3

And as I end this (rather lengthy) post on a positive note, I'd like to express my gratitude to you for taking the time to read it in its entirety, and I hope my insights have sparked a change in the way we look at AI: not with fear but with careful optimism. I have tried to convey a nuanced understanding of the subject and to show how much power we wield to shape a future intertwined with AI as a companion. I'd like to extend an invitation and encourage all my readers to share your thoughts on the myths we covered in this post, and maybe even bring up any other myths you've encountered. I'd be very interested to hear about them.


Thank you and until next time,

Dorian

 

 
 
 



© 2024 by Dorian Munro.
