Why large language models aren’t headed toward humanlike understanding


Unlike people, today's generative AI isn’t good at learning concepts that it can apply to new situations.

Apart from the northward advance of killer bees in the 1980s, nothing has struck as much fear into the hearts of headline writers as the ascent of artificial intelligence.

But AI’s latest incarnations, known as large language models, or LLMs, have begun to threaten job security not only for writers but also for many other professionals. By identifying patterns in how words and word fragments, the linguistic molecules of text, are combined, LLMs can predict in what order words should be assembled to compose sentences or respond to a query. Basically, an LLM calculates probabilities of what word should follow another, something critics have derided as “autocorrect on steroids.”
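As a rough sketch of that idea (assumed here purely for illustration; no production LLM works from raw word counts like this), the following Python snippet estimates the probability of the next word from a tiny corpus and picks the most likely continuation:

```python
from collections import Counter, defaultdict

# Toy corpus; real LLMs train on vast text collections broken into tokens.
corpus = "the cat sat on the mat the cat ate the fish".split()

# Count how often each word follows each other word (a bigram table:
# a drastic simplification of the learned distribution inside an LLM).
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def next_word_probs(word):
    """Estimate P(next word | current word) from the counts."""
    counts = following[word]
    total = sum(counts.values())
    return {w: c / total for w, c in counts.items()}

probs = next_word_probs("the")
print(probs)                      # {'cat': 0.5, 'mat': 0.25, 'fish': 0.25}
print(max(probs, key=probs.get))  # 'cat' -- the most likely next word
```

Real LLMs replace the counts with a neural network trained on enormous amounts of text, and they operate on tokens, fragments of words, rather than whole words, but the output is the same kind of object: a probability for each possible continuation.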

At the heart of the debate is whether LLMs actually understand what they are saying and doing, rather than just seeming to. Some researchers have suggested that LLMs do understand, can reason like people or even attain a form of consciousness. But computer scientist Melanie Mitchell of the Santa Fe Institute and others insist that LLMs do not really understand the world.

In one test, Mitchell and her co-author Martha Lewis compared people and GPT models on problems built around a familiar concept and on counterfactual variants of those problems that change the familiar setup. “While humans exhibit high performance on both the original and counterfactual problems, the performance of all GPT models we tested degrades on the counterfactual versions,” Mitchell and Lewis report in their paper.
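To make the notion of a counterfactual problem concrete, here is a hypothetical sketch (the task and the shuffled alphabet are assumptions for illustration, not items from the paper): the same concept, replacing a string’s last letter with its successor, is applied first in the familiar alphabet and then in an arbitrarily reordered one.

```python
# Hypothetical illustration of a "counterfactual" concept problem (assumed for
# illustration; not an item from Mitchell and Lewis's paper).
STANDARD = "abcdefghijklmnopqrstuvwxyz"
SHUFFLED = "jyqbfxamkzrdwcevhslgtponiu"  # an arbitrary permutation (assumption)

def successor(letter, alphabet):
    """Return the letter after `letter` in the given alphabet."""
    return alphabet[(alphabet.index(letter) + 1) % len(alphabet)]

def apply_concept(string, alphabet):
    """Apply the concept 'replace the last letter with its successor'."""
    return string[:-1] + successor(string[-1], alphabet)

print(apply_concept("pqr", STANDARD))  # 'pqs' -- familiar setting, easy for people and LLMs
print(apply_concept("pqr", SHUFFLED))  # 'pqd' -- same concept, unfamiliar ordering;
                                       # this is where the tested GPT models' accuracy degraded
```

A person who has grasped the concept can apply it in either alphabet; the reported result is that GPT models handle the familiar version far better than the altered one.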

 
