
Do Chatbots Get Us Any Closer to Human-Level Artificial Intelligence? | PRO Insight
For AI to mirror human intelligence, it must perceive the world as we do

AI chatbots are getting so good people are starting to see them as human. Several users have recently called the bots their best friends, others have professed their love, and a Google engineer even helped one hire a lawyer. From a product standpoint, these bots are extraordinary. But from a research perspective, the people dreaming about AI reaching human-level intelligence are due for a reality check.

Chatbots today are trained only on text, a debilitating limitation. Ingesting mountains of the written word can produce jaw-dropping results — like rewriting Eminem in Shakespearean style — but it prevents perception of the nonverbal world. Much of human intelligence isn’t marked down. We pick up our innate understanding of physics, craft and emotion not by reading, but by living. Without written material on these topics to train on, the AI comes up short.

“The understanding these current systems have of the underlying reality that language expresses is extremely shallow,” Yann LeCun, Meta’s chief AI scientist and a professor of computer science at New York University, said. “It’s not a particularly big step towards human-level intelligence.” 

Holding up a sheet of paper, LeCun demonstrated ChatGPT’s limited understanding of the world in a recent “Big Technology Podcast” episode. The bot wouldn’t know what would happen if he let go of the paper with one hand, LeCun predicted. When asked, ChatGPT said the paper would “tilt or rotate in the direction of the hand that is no longer holding it.” For a moment — given its presentation and confidence — the answer seemed plausible. But the bot was dead wrong.

LeCun’s paper moved toward the hand still holding it, something humans know instinctively. ChatGPT, however, guessed wrong because people rarely describe the physics of letting go of a sheet of paper in text. (Perhaps until now.)

“I can come up with a huge stack of similar situations, each one of them will not have been described in any text,” LeCun said. “So then the question you want to ask is, ‘How much of human knowledge is present and described in text?’ And my answer to this is a tiny portion. Most of human knowledge is not actually language-related.”

Without an innate understanding of the world, AI can’t predict. And without prediction, it can’t plan. “Prediction is the essence of intelligence,” LeCun said. This explains, at least in part, why self-driving cars are still bumbling through a world they don’t completely understand. And why chatbot intelligence remains limited — if still powerful — despite the anthropomorphizing.
