Modern AI models are usually trained on pre-existing data, such as text, images, and video, developing through successive rounds of learning algorithms. But that same foundation can lead to inconsistencies between what an AI generates and the physical reality it is attempting to mimic.
Attempting to overcome that challenge, Covariant, an OpenAI spinoff, has created a Robotics Foundation Model (RFM-1) that learns not only from existing online data but also by observing situations as they unfold in the physical world. In a press release, Covariant claims the model “provides robots the human-like ability to reason, representing the first time generative AI has successfully given commercial robots a deeper understanding of language and the physical world.”
Here, a “human-like ability to reason” refers to RFM-1’s ability to predict outcomes based on information gathered from the robot’s real-world surroundings. For example, when a robot is given a task, the model generates a visual of what the completed task could look like. That prediction helps determine whether the robot will encounter any performance obstacles, and allows it to ask its prompter for solutions. Through typed conversation in plain language, the person prompting the robot can then offer suggestions that help bring the task to completion.
So far, RFM-1 has been used only in a lab setting, but Covariant intends to release it soon to industrial customers that use AI in their work, such as production and distribution facilities.