An eerie video of “Figure 01,” a humanoid robot that can hold a conversation, has surfaced on the internet – it almost feels like a deleted scene from I, Robot.

In the demonstration, Figure 01, powered by OpenAI technology, is questioned about what it can “see.” Demonstrating its visual recognition abilities, the cutting-edge robot accurately describes its surroundings, including a red apple, a rack with dishes, and the man interacting with it.

Admittedly a bit uncanny, but not entirely unprecedented, right? For instance, last year, Google showcased how its AI model Gemini could identify various objects placed in front of it, from a blue rubber duck to hand-drawn illustrations (although it was later discovered that clever editing slightly exaggerated its capabilities).

However, the situation takes a turn when the man asks, “Can I have something to eat?” Figure 01 promptly picks up the apple, evidently recognizing it as the only edible item on the table, and hands it over.

Wait, could Will Smith unexpectedly show up any moment now?

How does the Figure 01 robot operate?

So, what exactly enables Figure 01 to engage so seamlessly with a human? A new vision-language model (VLM), which transforms Figure 01 from a bulky machine into a futuristic robot that seems a tad too human-like. (The VLM is the result of a collaboration between OpenAI and Figure, the company behind Figure 01.)

After handing over the apple, Figure 01 shows it can handle multiple tasks at once when asked, “Can you elucidate why you [gave me the apple] while picking up this waste?”

While sorting the trash from everything else on the table and dropping it into what it identifies as a bin, the robot explains that it offered the apple because it was the only edible item available. That’s some impressive multitasking!

Finally, the man asks Figure 01 to rate its own performance. In a conversational tone, the robot responds, “I-I think I did pretty well. The apple found its new owner, the trash is gone, and the tableware is in its place.”

Brett Adcock, the founder of Figure, explained that Figure 01’s onboard cameras feed data to the VLM, which helps the robot “understand” the scene in front of it and interact smoothly with the human. Alongside Adcock, Figure 01 is the work of key hires from Boston Dynamics, Tesla, Google DeepMind, and Archer Aviation.
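For the technically curious, the loop Adcock describes – camera frames in, language-grounded actions out – can be roughed out with off-the-shelf tools. Below is a minimal Python sketch using OpenAI’s vision-capable chat API. Note that capture_frame and execute are hypothetical stand-ins (Figure hasn’t published its actual control stack), and the model name is purely illustrative:

import base64
from openai import OpenAI  # pip install openai

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def capture_frame() -> bytes:
    # Hypothetical stand-in: a real robot would pull a JPEG frame
    # from its onboard camera; here we just read one from disk.
    with open("frame.jpg", "rb") as f:
        return f.read()

def execute(action: str) -> None:
    # Hypothetical stand-in for handing a high-level action
    # string to the robot's motion planner.
    print(f"[robot] executing: {action}")

def perceive_and_act(user_request: str) -> None:
    # Encode the current camera frame so it can ride along in the prompt.
    frame_b64 = base64.b64encode(capture_frame()).decode("utf-8")

    response = client.chat.completions.create(
        model="gpt-4o",  # illustrative; Figure hasn't named the exact model
        messages=[
            {"role": "system",
             "content": "You control a humanoid robot. Reply with one short action."},
            {"role": "user", "content": [
                {"type": "text", "text": user_request},
                {"type": "image_url",
                 "image_url": {"url": f"data:image/jpeg;base64,{frame_b64}"}},
            ]},
        ],
    )
    # The model's reply doubles as the robot's next action.
    execute(response.choices[0].message.content)

perceive_and_act("Can I have something to eat?")

The design choice worth noticing is that a single model handles both describing the scene and deciding what to do next, which is exactly why, as Adcock emphasizes, no human teleoperator needs to sit in the loop.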

Taking a jab at Elon Musk’s Optimus robot, Adcock proudly noted that Figure 01 does not require teleoperation. In other words, unlike Optimus, whose famous shirt-folding demo turned out to be human-controlled, Figure 01 can function on its own.

Adcock’s ultimate aim? To train an advanced AI system to control billions of humanoid robots, potentially revolutionizing numerous industries. It seems I, Robot may be closer to reality than we thought.

Kimberly Gedeon
East Coast Tech Editor

Kimberly Gedeon is a tech enthusiast who loves delving into the latest gadgets from cutting-edge iPhones to immersive VR headsets. She has an inclination towards unconventional, avant-garde tech marvels, whether it’s a 3D laptop, a gaming setup that doubles as a briefcase, or smart glasses capable of recording videos. Her journey in journalism began roughly ten years ago at MadameNoire, where she covered tech and business before becoming a tech editor at Laptop Mag in 2020.
