Apparently, the More You Use Generative AI, the Dumber It Will Make Both of You


Generative artificial intelligence (AI) tools are seeing widespread adoption, from chatbots that write emails to systems that create stunning images and video, all promising great gains in efficiency.

However, a growing body of research and expert opinion points to a significant downside: relying too much on these powerful tools could make both human users and the AI models themselves less capable (or dumber) over time.


If you let Gen AI do most of your thinking for you, your mental muscles may atrophy because they don't get used as much

Research indicates a direct link between how often someone uses AI tools and a decline in their critical thinking skills. This happens largely because people "offload" their thinking to the AI. Instead of doing deep analysis themselves, they let the AI do the heavy lifting.

This over-reliance can lead to "cognitive atrophy", where our mental muscles for solving problems, analysing facts, and thinking creatively simply weaken from lack of use. It isn't that different from how many people can no longer find their way without navigation apps, or simply trust them far too blindly.

 

The Threat of Model Collapse in AI Development

Beyond how AI affects human minds, this over-reliance also poses a serious threat to the AI models themselves. This problem is called "model collapse" or "data collapse." Generative AI models learn from vast amounts of data, mostly collected from human-created content on the Internet.

But as more and more AI-generated content fills the Internet, future training datasets will contain more of this artificial data. When new AI models are trained on this diluted and often flawed artificial data, they pick up, and can even amplify, the errors and biases in the AI-generated content.


When Generative AI trains on content made by Generative AI, the errors tend to compound

This can lead to AI outputs that are increasingly generic, repetitive, and less original. Over time, the AI might even "forget" important but less common details or nuanced information (like extra fingers or hands).

Some studies suggest that even a small amount of artificial data, potentially as little as 1 percent, in continuous training cycles can contribute to this decline over successive AI generations. Epoch AI, a notable research group, estimates that the supply of high-quality, human-generated text data could run out between 2026 and 2032 if current trends continue. This makes the risk of model collapse even more urgent.
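The feedback loop behind model collapse can be sketched in a few lines of Python. This toy simulation is purely illustrative (no real model trains this way): each "generation" fits a simple statistical model to the previous generation's output and samples from it, while dropping rare, extreme examples the way generative models tend to under-represent uncommon details. The diversity of the data visibly shrinks.

```python
import random
import statistics

def train_generation(data):
    """'Train' a toy model on data, then generate its output: estimate a
    mean and spread, sample from that fit, and (mimicking how generative
    models favour the 'typical') drop anything beyond two standard
    deviations -- the rare tails get forgotten."""
    mu = statistics.fmean(data)
    sigma = statistics.stdev(data)
    samples = [random.gauss(mu, sigma) for _ in range(len(data))]
    return [x for x in samples if abs(x - mu) <= 2 * sigma]

random.seed(42)
human_data = [random.gauss(0, 10) for _ in range(5000)]  # diverse human content

data = human_data
for gen in range(10):  # each generation trains on the previous one's output
    data = train_generation(data)

print(f"spread of human data:      {statistics.stdev(human_data):.2f}")
print(f"spread after 10 AI 'gens': {statistics.stdev(data):.2f}")
```

Run it and the spread of the data collapses generation after generation: the later "models" only ever see (and reproduce) the most average content, which is the generic, repetitive output described above.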

 

Investing in Quality Data: A Critical Strategy

To prevent model collapse and ensure AI continues to improve, companies are making large investments in getting high-quality data created by humans. This means data must meet strict standards for accuracy, factual reliability, and broad diversity across topics, styles, and demographics.

The data must also be free from technical errors, duplicates, and unnecessary "noise". The process of data collection and preparation for large generative AI models alone can cost hundreds of thousands of dollars.
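To give a rough idea of what "free from errors, duplicates, and noise" means in practice, here is a minimal cleaning sketch. It is an illustration only; real data pipelines layer on far more, such as fuzzy deduplication, language identification, and quality scoring.

```python
import re

def clean_corpus(docs):
    """Minimal corpus cleaning: strip stray markup, normalise whitespace,
    and drop exact duplicates and very short documents."""
    seen = set()
    cleaned = []
    for doc in docs:
        text = re.sub(r"<[^>]+>", " ", doc)       # remove leftover HTML tags
        text = re.sub(r"\s+", " ", text).strip()  # collapse whitespace
        if len(text) < 20:                        # too short to be useful
            continue
        key = text.lower()
        if key in seen:                           # exact-duplicate filter
            continue
        seen.add(key)
        cleaned.append(text)
    return cleaned

docs = [
    "<p>Generative AI models learn from vast amounts of data.</p>",
    "Generative AI models learn from vast amounts of data.",  # duplicate
    "ok",                                                     # too short
    "High-quality training data must be accurate and diverse.",
]
print(clean_corpus(docs))  # only the 2 unique, usable documents remain
```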


AI apparently needs a lot of human-produced content

AI companies are now making direct deals with major content creators and publishers, such as Reuters and TIME, to get access to their large collections of verified, human-written content. There is also a growing trend of paying individual creators for unique, unpublished, or specialised content. This shift is driven by the clear need for fresh, untainted data, along with important legal and ethical concerns about intellectual property.

 

Reshaping Entry-Level Skills in the AI Era

The rise of generative AI also affects the job market, especially for entry-level positions. While AI might automate routine tasks, it is also creating new opportunities that demand specific human abilities.

These new entry-level roles often involve specialised data annotation and labeling, where workers use their human judgment to provide detailed input for AI training. There is also increasing need for people to review and refine AI-generated outputs, ensuring accuracy, identifying biases, and adding human nuance.
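One small example of where that human judgment comes in: annotation platforms typically collect labels from several people per item and flag disagreements for review. The sketch below is a simplified illustration of that idea (real platforms use richer agreement metrics such as Cohen's kappa).

```python
from collections import Counter

def majority_label(annotations):
    """Resolve one item's label from several human annotators by majority
    vote; flag items with weak consensus for expert review."""
    counts = Counter(annotations)
    label, votes = counts.most_common(1)[0]
    needs_review = votes / len(annotations) < 0.75  # weak consensus
    return label, needs_review

print(majority_label(["cat", "cat", "cat", "cat"]))   # ('cat', False)
print(majority_label(["cat", "dog", "cat", "bird"]))  # ('cat', True)
```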


Data annotation is a skill that can be trained; while a bit mind-numbing, it helps keep AI useful while creating jobs for entry-level workers

These emerging roles highlight a shift towards skills that complement AI, mostly in areas like critical thinking, understanding context, and the ability to follow complex quality standards.

What are your thoughts on balancing the convenience of AI with the potential for a decline in intelligence for both humans and AI models? Have you been using Gen AI and chatbots a bit too much lately? Have you noticed an over-reliance of sorts too? Share your thoughts and join the discussion in the comments below and stay tuned to TechNave.com for more updates.


Don't lean too heavily on him even though he's your buddy