The Myths of AI

Lingchao Mao, Georgia Institute of Technology
Fernando Acosta-Perez, University of Wisconsin-Madison

Artificial Intelligence (AI) is a field of science focused on building systems capable of performing tasks that typically require human intelligence, or of analyzing data at a scale far beyond human capacity. AI has a plethora of applications across domains, such as face recognition, medical diagnosis, playing chess, and language generation. In this article, we unpack some of the most common myths surrounding AI: what it can do, what it can’t, and what we often misunderstand. By clarifying these misconceptions, we hope to promote a more realistic and informed conversation about the future of these intelligent systems.

Myth 1: AI is a New Technology

Although artificial intelligence may seem new to the public, its roots go back decades. One of its core technologies, the artificial neural network, was introduced by Frank Rosenblatt in the late 1950s in the form of the perceptron. Despite early excitement, its inability to learn complex patterns (it could not even represent the simple XOR function, which is not linearly separable) led to harsh criticism, most notably from Minsky and Papert in 1969. Such criticisms led to the first "AI winter" (1974–1980), a period marked by declining AI research. Neural networks were largely abandoned until the mid-1980s, when the backpropagation algorithm enabled the training of multi-layer perceptrons, reigniting interest. This laid the groundwork for today’s revolution, which accelerated in the 2010s thanks to advances in data, computing power, and algorithm design.

Myth 2: AI is Just ChatGPT

In November 2022, with the public release of ChatGPT, Artificial Intelligence transitioned from a term used mainly in academic and industry circles to a mainstream worldwide concept that has shaped the conversation around technology ever since. Unfortunately, such rapid proliferation has led to inaccurate descriptions of what constitutes an AI system.

For the general public, the term AI is immediately associated with ChatGPT, which is arguably the most popular form of AI known to the public. However, AI is much more than ChatGPT. The U.S. National Defense Authorization Act defines AI as “any artificial system that performs tasks under varying and unpredictable circumstances without significant human oversight, or that can learn from experience and improve performance when exposed to data sets” (NASA, 2024). In other words, technologies like ChatGPT are one type of Artificial Intelligence, and arguably among the most advanced types available today. But Artificial Intelligence is a broad field concerned with designing systems that can learn on their own and complete complex tasks.

Myth 3: AI Agents Can Think Like a Human

Nowadays, headlines like “AI is at the level of a PhD student” or “AI outperforms humans on law and medical exams” can give the impression that AI is smarter than humans and will take over the world soon. However, it’s important to take such claims with a grain of salt. Researchers typically evaluate AI models using public benchmarks in which models perform specific tasks, such as question answering or pattern recognition. While AI can outperform humans on some tasks, it is not uncommon to see LLMs fail at basic logical tests (Huckle and Williams, 2025). So do these benchmarks truly measure intelligence? No single benchmark, or even a combination of them, can fully capture the complexity of intelligence. This has spurred ongoing research to develop harder, more comprehensive tests like Humanity’s Last Exam (Phan et al., 2025) and Vending-Bench (Backlund and Petersson, 2025), designed to better assess AI’s long-range reasoning and understanding.

Another popular belief is that AI is on a path to replicate human intelligence, or that this should even be the ultimate goal. But whether human intelligence is the ideal template for general intelligence is itself debatable. Human intelligence is just one form of general intelligence, the product of roughly 100 thousand years of evolution for survival (Korteling et al., 2021). We are brilliant at social reasoning, storytelling, and visual recognition, yet struggle with logic-heavy tasks, long-term memory, and high-dimensional math. This is known as Moravec’s Paradox: what’s easy for humans (like recognizing a friend’s face or detecting sarcasm) is hard for AI, and vice versa. For example, it is easier for us to identify a face in a photo than to multiply two six-digit numbers, yet computers handle the latter with ease. Unlike humans, AI systems are optimized on vast amounts of data rather than shaped by biological evolution. Vision-language models, for example, can spot subtle anomalies in radiology scans that doctors may miss, yet they lack real understanding of illness the way a physician does.

Human intelligence is slow, energy-efficient, and biologically constrained—shaped by evolution with limited memory, multitasking, and adaptability. In contrast, AI is lightning-fast, easily updated or replicated, highly scalable, and capable of computations far beyond human cognitive capacity (Korteling et al., 2021). The distinction is even sharper when it comes to creativity. Large language models write poems by predicting the most probable next token based on their training data and the prompt they are given. Human creativity, in contrast, is intentional—shaped by goals, values, experiences, and reasoning. While LLMs can imitate creativity, they lack the abstract insight, purpose, and novelty that characterize true innovation.
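To make the "next token" idea concrete, here is a minimal sketch of how a language model turns scores over its vocabulary into one sampled word. The tiny vocabulary, the scores, and the temperature value are invented purely for illustration; they are not taken from any real model.

```python
import numpy as np

# Toy next-token sampling: the model assigns a score (logit) to every word in its
# vocabulary, the scores become probabilities, and the next word is drawn at random.
# The vocabulary and logits below are hypothetical stand-ins.
vocab = ["the", "moon", "whispers", "quietly", "code"]
logits = np.array([1.2, 2.5, 0.3, 1.8, 0.1])       # made-up model scores

def sample_next_token(logits, temperature=1.0, rng=np.random.default_rng()):
    scaled = logits / temperature                   # higher temperature -> more surprising choices
    probs = np.exp(scaled - scaled.max())           # numerically stable softmax
    probs /= probs.sum()
    return rng.choice(len(probs), p=probs)

print(vocab[sample_next_token(logits)])             # e.g. "moon"; repeated calls can differ
```

Everything a chat model produces comes from repeating this single step, one token at a time.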

Rather than aspiring to mimic human thought, AI’s real power lies in complementing it—pushing beyond the bounds of evolved cognition to help us reason about systems we weren’t built to comprehend.

Myth 4: Training AI is Just Feeding it Data

When you start a job, your boss might ask, “We have some data—can we build an AI?” As a fresh graduate, it’s tempting to say “Sure!” But real-world AI development is far more complex than that. Before diving into modeling, you need to pause and ask: What specific business problem are we solving? How will the AI’s output create measurable value? Without clear objectives, even the most advanced models risk becoming solutions in search of a problem. Once the use case is clearly defined, the next step is to evaluate your organization’s data readiness.

Data readiness is often underestimated but is frequently the biggest hurdle in AI projects. Is your data truly accessible, or is it trapped in silos such as CRM systems, sales platforms, emails, or handwritten paperwork? Is the data clean, accurate, structured, and up to date? Do you have enough high-quality labeled data specific to your AI task? Are you collecting diverse data that covers all your business scenarios? Do you have the infrastructure to efficiently manage, analyze, and secure your data? Without solid data readiness, even the best-performing AI models will struggle to deliver real business impact in the long run.
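As a rough illustration, a first data-readiness check can be as simple as the sketch below. The file name and column names (customer_records.csv, label, updated_at) are hypothetical stand-ins for whatever your organization actually stores.

```python
import pandas as pd

# Quick data-readiness audit on a hypothetical labeled dataset: volume, duplicates,
# missing values, label coverage, and freshness. All names here are assumptions.
df = pd.read_csv("customer_records.csv", parse_dates=["updated_at"])

report = {
    "rows": len(df),
    "duplicate_rows": int(df.duplicated().sum()),
    "missing_rate_per_column": df.isna().mean().round(3).to_dict(),
    "label_distribution": df["label"].value_counts(normalize=True).round(3).to_dict(),
    "days_since_last_update": (pd.Timestamp.now() - df["updated_at"].max()).days,
}
print(report)
```

Even a crude report like this often surfaces the silo, staleness, and labeling issues described above before any modeling begins.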

Besides having data and building the model, developing a robust AI infrastructure is essential. What systems will you use to evaluate performance, both offline during development and online in production? How will you monitor for model drift or downtime? Can your infrastructure ensure low latency and stable responses when the model is applied in real time? And importantly, how will you safeguard the security and privacy of users’ data? Building and maintaining this infrastructure is a challenging yet critical part of turning AI models from prototypes into reliable, scalable business tools.
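One concrete piece of such monitoring is checking for data drift. The sketch below computes the population stability index (PSI) between the data a model was trained on and what it sees in production; the numbers are simulated, and the 0.2 alert level mentioned in the comment is only a common rule of thumb, not a universal standard.

```python
import numpy as np

# Drift check sketch: compare the training-time distribution of a score or feature
# with the production distribution using the population stability index (PSI).
def population_stability_index(expected, observed, bins=10):
    edges = np.quantile(expected, np.linspace(0, 1, bins + 1))
    edges[0] = min(edges[0], observed.min())        # widen edges so all data falls in a bin
    edges[-1] = max(edges[-1], observed.max())
    e = np.histogram(expected, edges)[0] / len(expected) + 1e-6
    o = np.histogram(observed, edges)[0] / len(observed) + 1e-6
    return float(np.sum((o - e) * np.log(o / e)))

train_scores = np.random.default_rng(0).normal(0.0, 1.0, 10_000)   # simulated training data
live_scores = np.random.default_rng(1).normal(0.3, 1.2, 10_000)    # simulated production data
print(f"PSI = {population_stability_index(train_scores, live_scores):.3f}")  # above ~0.2 warrants review
```

In practice a check like this would run on a schedule across many features, alongside latency and error-rate dashboards.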

Myth 5: AI Doesn’t Make Mistakes

It’s a common misconception that AI systems are purely quantitative, objective, and therefore do not make mistakes. In reality, AI can make simple—sometimes amusing, sometimes costly—mistakes. For example, chatbots may produce text with perfect grammar yet make illogical statements. Because of their probabilistic design, generative models can “hallucinate,” confidently presenting fabricated or incorrect information as fact.

AI systems are only as good as the data they are trained on and, thus, can behave unpredictably when faced with unfamiliar or edge cases. Self-driving cars, for example, can misidentify a plastic bag as a rock or stop abruptly for harmless shadows. Researchers have also shown that even high-performing image classifiers can be tricked by simple adversarial attacks, in which perturbing a few pixels or adding white noise to the image background causes the model to completely misclassify an object (Goodfellow et al., 2015).
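For readers curious about how such an attack works, below is a minimal sketch of the fast gradient sign method described by Goodfellow et al. (2015), written here in PyTorch. The model, image, label, and epsilon are stand-ins for any differentiable image classifier, its input, and an attack strength chosen for illustration.

```python
import torch
import torch.nn.functional as F

def fgsm_attack(model, image, label, epsilon=0.01):
    """Nudge every pixel slightly in the direction that increases the model's loss."""
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), label)        # how wrong the model is on the true label
    loss.backward()
    adversarial = image + epsilon * image.grad.sign()  # tiny, often imperceptible perturbation
    return adversarial.clamp(0, 1).detach()            # keep pixel values in a valid range
```

A perturbation of this size is usually invisible to a person, yet it can be enough to flip the classifier’s prediction entirely.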

Beyond obvious errors, AI systems can inherit the biases present in human-generated data. Facial recognition technologies have been shown to produce higher error rates for individuals from certain ethnic groups. A widely known example is the “COMPAS” algorithm used in the U.S. criminal justice system to assess recidivism risk, which has been found to systematically assign higher risk scores to Black defendants than to white defendants with similar profiles (Angwin et al., 2022). Hence, human oversight of AI outputs is always needed, as we must actively monitor and prevent these systems from amplifying existing societal biases.
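Auditing for this kind of bias can start with something as simple as comparing error rates across groups. The sketch below does so on a tiny made-up table; the column names and values are hypothetical, and a real audit would use far more data and several fairness metrics.

```python
import pandas as pd

# Hypothetical fairness check: false positive rate (people flagged as high risk who
# did not reoffend) broken down by group. Data and column names are made up.
df = pd.DataFrame({
    "group":     ["A", "A", "A", "B", "B", "B"],
    "actual":    [0,   0,   1,   0,   0,   1],    # 1 = reoffended
    "predicted": [1,   0,   1,   0,   0,   1],    # 1 = flagged high risk
})
negatives = df[df["actual"] == 0]                  # people who did not reoffend
fpr_by_group = negatives.groupby("group")["predicted"].mean()
print(fpr_by_group)                                # a large gap between groups is a red flag
```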

Myth 6: AI Content is Always Discernible

Generative AI is being used by millions of people every day. Unfortunately, there are no reliable ways to distinguish AI-generated content from human-generated content. Although this might not be an issue in most applications, generative AI will almost certainly be used by bad actors for illicit activities. In the case of LLMs, one of the main difficulties is that there is randomness at inference (i.e., when the model is responding to a prompt). This randomness is why giving the same prompt to an LLM multiple times yields different answers. Therefore, even with complete access to a model, it is hard to tell whether a given piece of content was generated by an LLM. Some have argued for including watermarks in generative AI outputs for easier identification, but such ideas have not been implemented in production settings as of 2025.
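To give a flavor of what such watermarking proposals look like, here is a toy sketch loosely inspired by published "green list" schemes: at each step the sampler slightly favors a pseudorandom half of the vocabulary, and a detector later checks whether suspiciously many tokens fall in those favored halves. The vocabulary size, bias strength, and stand-in logits are all invented for illustration and are not any vendor's actual method.

```python
import hashlib
import numpy as np

VOCAB_SIZE = 1000
rng = np.random.default_rng(0)

def green_list(prev_token):
    """Deterministically pick half of the toy vocabulary based on the previous token."""
    seed = int(hashlib.sha256(str(prev_token).encode()).hexdigest(), 16) % (2**32)
    return np.random.default_rng(seed).permutation(VOCAB_SIZE)[: VOCAB_SIZE // 2]

def sample_watermarked(prev_token, logits, bias=2.0):
    """Sample the next token after nudging probability mass toward the green list."""
    boosted = logits.copy()
    boosted[green_list(prev_token)] += bias
    probs = np.exp(boosted - boosted.max())
    probs /= probs.sum()
    return int(rng.choice(VOCAB_SIZE, p=probs))

def green_fraction(tokens):
    """Detector: how often each token landed in the green list of the token before it."""
    hits = sum(t in green_list(prev) for prev, t in zip(tokens, tokens[1:]))
    return hits / max(len(tokens) - 1, 1)

tokens = [0]
for _ in range(200):                               # random stand-in logits; a real model would supply these
    tokens.append(sample_watermarked(tokens[-1], rng.normal(size=VOCAB_SIZE)))
print(f"green fraction: {green_fraction(tokens):.2f}")  # well above the ~0.5 expected by chance
```

Even with a scheme like this, detection is not a solved problem: paraphrasing the text or generating it with an unwatermarked model can weaken the signal considerably.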

Artificial intelligence is here to stay. Although to some it may seem like a new technology, it is a field that has been evolving for more than 60 years. Some view systems such as ChatGPT as agents capable of human-like thought; however, it is important to recognize that these technologies are still in their early stages of development, and there is compelling evidence that highlights their limitations. In the coming years, AI is expected to penetrate nearly every industry in different forms. To integrate it safely and appropriately, we believe it is essential to understand both the capabilities and the constraints of the technology.

References: 

Angwin, J., et al., 2022. Machine bias, in: Ethics of data and analytics. Auerbach Publications, pp. 254–264.

Backlund, A., Petersson, L., 2025. Vending-Bench: A benchmark for long-term coherence of autonomous agents. arXiv preprint arXiv:2502.15840.

Goodfellow, I.J., Shlens, J., Szegedy, C., 2015. Explaining and harnessing adversarial examples.

Huckle, J., Williams, S., 2025. Easy problems that LLMs get wrong.

Korteling, J.E., et al., 2021. Human-versus artificial intelligence. Frontiers in Artificial Intelligence 4, 622364.

NASA, 2024. What is artificial intelligence? https://www.nasa.gov/what-is-artificial-intelligence/. Accessed: 2025-09-22.

Phan, L., et al., 2025. Humanity’s last exam. arXiv preprint arXiv:2501.14249.

Acknowledgements: We would like to thank Pranesh Saisridhar for taking time to review this article. Photo credit goes to Alex Shute for the header photo, and Xu Haiwei for footer photo.