
ChatGPT's Popularity Declines as Users Discover Limitations

By Shahid Maqbool
On Jul 20, 2023

Key Takeaways

  • ChatGPT is no longer as popular as it once was, as users become more aware of its limitations.

  • Users report that ChatGPT's responses seem less capable than they did at launch.

  • OpenAI attributes the perceived flaws to heavier usage exposing limitations that were previously less noticeable.

When ChatGPT first came out in late 2022, it was extremely popular, and people were eager to try the new AI chatbot.

They were amazed at how smart and human-like ChatGPT seemed. At that time, ChatGPT's responses were novel and impressive to most users.

However, recent data shows that people's interest and usage of ChatGPT have declined quite a bit compared to those initial heights.

Many of ChatGPT's earliest and most frequent users have noticed a change in its performance recently. These long-time users report that its responses no longer match the quality they saw just a few weeks or months ago.

They describe ChatGPT's latest outputs as appearing "lazier," "dumber," and "less intelligent," containing more mistakes, and sometimes even giving nonsensical or completely wrong answers.

Essentially, these loyal ChatGPT users perceive the chatbot's capabilities to have degraded compared to before.

Why Did ChatGPT Change?

The company OpenAI, which created ChatGPT, denies intentionally downgrading or worsening the chatbot's skills in any way. OpenAI claims that every new update and model version they release is intended as an improvement over previous versions.

However, OpenAI has a theory about why ChatGPT may seem worse lately.

They think the increased number of people using and interacting with ChatGPT more frequently has exposed some limitations in the system that were not as apparent when usage was lower.

With many more queries and prompts, flaws that were rarer before are now becoming more noticeable.

OpenAI's New Approach

Earlier in 2023, OpenAI released its GPT-4 language model, which received considerable praise. Experts viewed GPT-4 as the most advanced AI system at that point because of its ability to understand and interpret both text and images.

However, GPT-4 had drawbacks: it operated relatively slowly and was computationally expensive to run and train.

Now, there are reports that OpenAI may be pursuing a different architecture for GPT-4 going forward. Rather than utilizing one single, massive GPT-4 model, they could be developing multiple smaller, more focused GPT-4 models instead.

In this approach, called a "Mixture of Experts" (MoE), each smaller model specializes in a narrower set of tasks or knowledge areas.

Working together, these specialized sub-models can provide similar overall capabilities as one enormous model, but at a reduced computational cost.
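To make the idea concrete, here is a minimal, purely illustrative sketch of how MoE routing works in general. It is not OpenAI's implementation, and all names and dimensions are hypothetical: a gating network scores each expert for a given input, only the top-scoring experts run, and their outputs are combined, which is how compute per query stays lower than running one giant model.

```python
# Minimal Mixture-of-Experts (MoE) routing sketch (illustrative only).
import numpy as np

rng = np.random.default_rng(0)

D_IN, D_OUT = 8, 4       # hypothetical input / output dimensions
N_EXPERTS, TOP_K = 4, 2  # number of experts, experts used per input

# Each "expert" here is just a small linear layer.
experts = [rng.normal(size=(D_IN, D_OUT)) for _ in range(N_EXPERTS)]
# The gate scores how relevant each expert is to a given input.
gate = rng.normal(size=(D_IN, N_EXPERTS))

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def moe_forward(x):
    """Route input x to the TOP_K highest-scoring experts and
    combine their outputs, weighted by the gate's probabilities."""
    scores = softmax(x @ gate)                 # one score per expert
    top = np.argsort(scores)[-TOP_K:]          # indices of the best experts
    weights = scores[top] / scores[top].sum()  # renormalize their weights
    # Only the selected experts actually run; the rest are skipped.
    return sum(w * (x @ experts[i]) for w, i in zip(weights, top))

x = rng.normal(size=D_IN)
print(moe_forward(x))
```

In a real system the experts would be large neural sub-networks and the gate would be learned during training, but the routing principle is the same: most experts stay idle for any given query.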

What is OpenAI's Response to This Matter?

After receiving complaints about GPT-4, Peter Welinder - VP of Products at OpenAI - came to the model's defence.

Welinder said people may think GPT-4 got worse because more people are using it now. With more usage, problems that were not obvious before are now being noticed.

However, Welinder emphasized that each new version of GPT-4 is made to be smarter than the previous version. The goal is always to improve GPT-4 and make it more intelligent.

Welinder tweet on GPT-4 quality decline

Data from Similarweb shows a measurable drop of around 10% in global traffic to ChatGPT's website over the last couple of months.

Data from Similarweb on ChatGPT traffic decline

Fewer people are using ChatGPT now than when it first came out, which suggests the initial excitement around AI chatbots is fading over time.

The more people use ChatGPT and other AI assistants, the more they realize the current technology has important limitations.

Users now recognize that these AI chatbots fall short of human-level intelligence.

However, there could be other reasons for the decreased usage besides just the product itself. For example, with summer break starting, fewer students need to use AI assistants for schoolwork right now.

Additionally, more public discussions about possible rules and regulations for AI could make some people hesitant to fully adopt the technology widely.
