Psychology Is Dying — and Artificial Intelligence Is Its Killer!

Sunday, May 25, 2025

Saed News: The wave of merging psychology with artificial intelligence promises a seductive future — but behind the scenes, a scientific and ethical disaster is looming. Don’t miss this report!

According to the science and technology desk of the Saed News analytical news outlet:
In an era when psychology is grappling with conceptual and reproducibility crises, and artificial intelligence dominates the headlines with exaggerated promises, the alliance of these two fields is more a danger than a hope.

If you think artificial intelligence can replace the human mind in psychological research or even propose a theory about it, you have likely fallen into one of the most deceptive scientific errors of our time. If you do not read this, soon you may witness the death of scientific theorizing and the rise of pseudoscience!


Psychology Wounded in the Arms of Hallucinating AI

Scientific psychology has been shaken to its foundations over the past decade: a crisis known as the "reproducibility crisis" revealed that a large portion of psychological findings cannot be reproduced when the studies are independently repeated. This revelation sounded alarm bells for the scientific legitimacy of the field.

In fact, the idea of building an artificial mind has occupied researchers’ minds for years. Now, with advances in language models and neural networks, some think this dream has come true. But the reality is that what machines do is nothing but a superficial imitation of human behavior. Machines do not understand; they only repeat. They have no feelings; they only compute. The human mind is a complex, flexible, and ambiguous phenomenon that cannot be reconstructed with a few million lines of code and data.

Some researchers emphasize that such crises can be beneficial if they lead to rethinking theories and methodologies. But the problem begins when these crises prepare desperate minds to accept "quick and seemingly pleasant solutions." This is the moment when AI enters the scene with slogans like "We can reconstruct the human mind!"

For example, many psychological researchers, instead of revising their theoretical foundations, have turned to machine learning models to predict human behavior. This is exactly where psychology’s vulnerability becomes a gateway for pseudoscientific claims.

A study titled "Combining Psychology with Artificial Intelligence" stresses: "While psychology has become vulnerable, society is experiencing an AI hype cycle more damaging than previous ones."

The problem with this "hyped" AI is not limited to overblown scientific claims. It also:

  • leaves a heavy environmental footprint;

  • relies on the hidden labor of underpaid and exploited workers;

  • in many cases not only reproduces but actively reinforces discrimination.

Meanwhile, psychologists suffering from theoretical and statistical poverty easily fall for these promises without realizing the risks. The situation is a textbook "perfect storm": a psychology mired in scientific crises, an AI industry peddling quick fixes, and a science teetering on the edge of pseudoscience.


Trap One: Is AI a "Mind"?

Let’s give a concrete example. Imagine you are chatting with a robot. It jokes, answers your questions, maybe even shows empathy. Does this mean the robot "has a mind"? This is the dangerous trap.

Some researchers put it bluntly: "AI systems may appear human-like, but they are merely stage sets that deceive us."

This mistake is called a "category error"—attributing the properties of one category (like the human mind) to things that do not belong to that category (like algorithms).

The problem becomes acute when some researchers seriously propose that AI can replace human participants in psychological experiments (Dillon et al., 2023). This proposal is not only unscientific but, according to van Rooij and Guest (2024), "unethical and inhuman."

If this trend continues, the result will be the gradual elimination of humans from research about themselves: studies conducted with no grasp of the complexity of human experience, and synthetic data passed off as scientific evidence.


Trap Two: Is AI a "Theory"?

Some might say: "We do not claim AI has a mind; we just say its models can be theories for understanding humans." This is the second mistake. Researchers expose several common errors in this argument:

  1. Prediction is not Explanation
    If a machine learning model can predict how a person will behave on a psychological test, does this mean it "explains" that behavior? The answer is no.

Simply put, if you can accurately predict tides with a chart, does that mean you understand the secret of tides? No. You only know when they happen, not why.

AI models work exactly like this. They can predict human behavior under certain conditions but have no explanation for the why and how. They imitate rather than interpret. Science is not just prediction; science means understanding.
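
As a toy illustration of this point, consider the following minimal sketch (the tide values, parameters, and the harmonic function are invented for illustration, not taken from any cited study): a generic sine curve fitted to tide-like measurements forecasts future readings well, yet contains no gravitational mechanism whatsoever.

```python
# "Prediction without explanation": fit a generic sine curve to synthetic
# tide-like data. The fitted model forecasts sea level accurately, yet it
# encodes no physics of the moon or gravity. All values here are made up.
import numpy as np
from scipy.optimize import curve_fit

def harmonic(t, amplitude, period, phase, offset):
    """A purely descriptive curve: no mechanism inside."""
    return amplitude * np.sin(2 * np.pi * t / period + phase) + offset

# Synthetic "observed" sea levels over 48 hours (semi-diurnal tide ~12.42 h)
t_obs = np.linspace(0, 48, 200)
sea_level = 1.5 * np.sin(2 * np.pi * t_obs / 12.42 + 0.3) + 2.0
sea_level += np.random.normal(0, 0.05, t_obs.size)  # measurement noise

params, _ = curve_fit(harmonic, t_obs, sea_level, p0=[1.0, 12.0, 0.0, 2.0])
forecast = harmonic(60.0, *params)  # extrapolate 12 hours past the data
print(f"Predicted sea level at t = 60 h: {forecast:.2f} m")
# The forecast may be accurate, but asking this model *why* tides occur is
# meaningless: it is a chart, not a theory.
```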

  2. Functional similarity is not cognitive similarity
    Based on the principle of multiple realizability, behavioral similarity between AI and humans does not necessarily imply cognitive similarity. This is just a logical illusion.

A machine might imitate human behavior, e.g., solve a problem correctly, but this does not mean it thinks like us. Just as a digital clock and an analog clock both show the time while operating through different mechanisms, similarity in appearance does not imply similarity in nature. AI, even when it gives human-like answers, does not necessarily possess human understanding.
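
The point can be made concrete with a small sketch of our own (the function names are invented): two routines that return identical answers on every probe while computing them through entirely different internal mechanisms.

```python
# Multiple realizability in miniature: identical input-output behavior,
# entirely different mechanisms. Watching only the answers, an observer
# cannot tell the two "systems" apart.

def add_arithmetic(a: int, b: int) -> int:
    """Mechanism 1: ordinary integer addition."""
    return a + b

def add_bitwise(a: int, b: int) -> int:
    """Mechanism 2: XOR/AND carry propagation, with no '+' at all."""
    while b:
        carry = (a & b) << 1
        a = a ^ b
        b = carry
    return a

# Behaviorally indistinguishable on every probe we try...
for a, b in [(2, 3), (10, 7), (0, 5), (123, 456)]:
    assert add_arithmetic(a, b) == add_bitwise(a, b)
print("Same behavior, different mechanisms.")
# By the same logic, an AI that matches human answers on some task does
# not thereby share human cognitive mechanisms.
```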

  3. Task ≠ Capacity
    In psychology, we design tests to measure things like memory or decision-making. But these tests only represent a part of human mental capacity. If we train a model to perform well on a test, this does not mean it understands mental function. It’s like someone who memorized a piano piece well but cannot improvise. True understanding goes far beyond performing predefined tasks. More technically, models of psychological tasks cannot alone be cognitive theories.
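
How large that gap can be is easy to demonstrate with a deliberately crude sketch (the quiz items and the Memorizer class are invented for illustration): a system that rote-memorizes the test items scores perfectly on them while possessing none of the capacity the test was designed to measure.

```python
# "Task performance is not capacity": a model that memorizes the test
# items aces the test without any underlying ability.

class Memorizer:
    """Learns question -> answer pairs by rote; no general capacity."""
    def __init__(self):
        self.answers = {}

    def train(self, items):
        self.answers.update(items)

    def respond(self, question):
        return self.answers.get(question, "no idea")

test_items = {"7 x 8?": "56", "9 x 6?": "54"}
model = Memorizer()
model.train(test_items)

print(model.respond("7 x 8?"))  # "56" -> a perfect score on the test
print(model.respond("7 x 9?"))  # "no idea" -> no multiplicative capacity
# High marks on a designed task reveal nothing, by themselves, about
# whether the system has the capacity the task was meant to measure.
```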


Trap Three: Automating Cognitive Science, an Impossible Dream

Suppose someone says: "We can automate the entire scientific process; theory-making, data analysis, even inference can be assigned to machines." Is this possible?

Can we really say yes? Probably never!

One of the biggest mistakes is thinking science can be automated: writing, analyzing, inferring, and theorizing by machines. The truth is that theory creation in science, especially psychology, is not only difficult but fundamentally not automatable.

Theory-making is a human process. It requires imagination, intuition, critique, revision, and dialogue. Entrusting this to algorithms may yield seemingly precise results but devoid of meaning and depth—like a beautiful painting with no soul.

Producing theory purely computationally, especially about human cognition, is an intractable problem: no algorithm can feasibly generate a cognitive theory from data alone. Science is more than statistical analysis; science means explanation, understanding, hypothesis formation, revision, and critique.
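
A back-of-the-envelope counting argument hints at why (this is our own illustration, not the formal intractability result in the cited work): even if "theories" were restricted to simple Boolean functions over a handful of binary observations, the hypothesis space explodes far beyond any exhaustive search.

```python
# Counting the hypothesis space of brute-force "theory discovery".
# Restricting candidate "theories" to Boolean functions over n binary
# features already gives 2**(2**n) of them -- hopeless to enumerate
# even for tiny n. (Illustrative only, not a formal proof.)

for n in range(1, 7):
    hypotheses = 2 ** (2 ** n)
    print(f"n = {n} binary features -> {hypotheses:,} candidate theories")
```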

AI cannot recognize its own conceptual and cognitive limits. If science is automated under these conditions, it degenerates into a set of soulless, mindless procedures; scientists lose their critical thinking; and the door opens to pseudoscience dressed up as science (van Rooij & Guest, 2024).


Suggested Path: "Non-Makeist AI" — A Cautious Approach

Researchers offer a scientific and defensible alternative: treat AI as a tool, not as a being with a mind or a theory of its own. In this approach, called "non-makeist AI," artificial intelligence assists theory-making rather than replacing it. We should use AI, but cautiously: not as a substitute for the human mind, but as an aid to analysis and theory construction; not as a deity, but as a precise and limited assistant. And we should always remember that science is a product of the human mind, not of machines.

We use AI for modeling, simulation, and examining theory limitations, but never treat it as a mind, theory, or even "understanding." In cognitive science, "the activity itself is cognitive, so our cognitive limitations naturally exist in theory-making" (Rich et al., 2021). This proposal is not only realistic but protects us from great dangers.


Summary

Psychology and AI are two important fields, but combining them without theoretical foundation leads more to scientific deviation than progress. AI is not a mind; not a theory; not science. If used without deep philosophical and epistemological understanding, these tools can lead to an illusion of science and the spread of pseudoscience.

When machines replace humans, we not only move away from understanding but fall into traps of errors and deceptions that appear scientific but hold no truth. Cognition is beyond computation. The mind is beyond algorithms.

Therefore, returning to strong theoretical foundations, accepting computational limits, and consciously using AI as a "theory assistant," not a human substitute, is the only way out of this crisis.

AI can be a powerful tool for psychology — but only if it knows its place! In this dazzling digital world, a calm, theory-driven, and human-centered thinking might be the only salvation for science.