Saed News: A recent study shows that older versions of artificial intelligence programs perform worse on cognitive tests than newer ones, suggesting that AI programs, like humans, may experience cognitive decline over time.
According to SaedNews' political service, a new study has shown that artificial intelligence programs, like humans, may experience cognitive decline over time. The research, published on December 20 in The BMJ, found that older versions of AI programs perform worse on cognitive tests.
This study evaluated the cognitive abilities of advanced AI models, including ChatGPT and Gemini, using the Montreal Cognitive Assessment (MoCA), a standardized test designed to detect early signs of dementia in humans. According to the results, ChatGPT 4.0 performed best, scoring 26 out of 30, while other models, such as Gemini 1.0, scored only 16 points. The researchers noted that older chatbots, like older human patients, perform more poorly on this test.
A notable finding of the study was the weakness of AI models in tasks involving visuospatial and executive skills, such as connecting numbers and letters in ascending order, which requires coordination between executive function and visual reasoning.
According to the researchers, this pattern of deficits in some cases resembles that seen in human patients with posterior cortical atrophy, a variant of Alzheimer's disease. Moreover, the decline in cognitive performance with age among chatbots mirrors the trend observed in humans.
For example, two versions of the Gemini model released less than a year apart showed a six-point difference in test scores, which the researchers interpreted as a sign of rapid cognitive decline in chatbots. These findings suggest that while artificial intelligence can excel in some areas, it also shows vulnerabilities resembling human cognitive disorders. The researchers warned that these vulnerabilities must be carefully considered before large language models are used in diagnosis and medical care.
The results also challenge the assumption that AI will soon replace human doctors and underscore the importance of continuing to develop and refine AI models.