Because they must interact with the real world, embodied agents need comprehensive prior knowledge, long-horizon planning capability, and swift response speed. Although recent large language model (LLM) based agents achieve promising performance, they still exhibit several limitations. For instance, an LLM outputs a descriptive sentence, which is ambiguous to map to a specific action. To address these limitations, we introduce the large auto-regressive model (LARM). LARM takes both text and multi-view images as input and predicts subsequent actions in an auto-regressive manner. To train LARM, we develop a novel data format named the auto-regressive node transmission structure and assemble a corresponding dataset. Trained with a two-phase regimen, LARM successfully harvests enchanted equipment in Minecraft, a task that demands significantly longer decision-making chains than the highest achievements of prior best methods. Moreover, LARM responds 6.8x faster than these prior methods.
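To make the auto-regressive decision process concrete, below is a minimal sketch of the kind of control loop the abstract describes: the model repeatedly consumes the task description, current observations, and environment feedback, and emits one discrete action per step. All names (`model`, `env`, and their methods) are illustrative assumptions, not the authors' actual API.

```python
# Hypothetical sketch of an auto-regressive embodied-agent loop.
# Interfaces are assumed for illustration; they are not from the paper.

def run_episode(model, env, task_description, max_steps=200):
    """Roll out one task by repeatedly predicting the next action."""
    feedback = None
    for _ in range(max_steps):
        obs = env.get_multiview_images()   # e.g. front/left/right/back views
        state = env.get_agent_info()       # health, inventory, position, ...
        # The model emits a single discrete action token rather than a
        # free-form sentence, so the action to execute is unambiguous.
        action = model.predict_next_action(task_description, obs, state, feedback)
        feedback = env.execute(action)     # feedback conditions the next step
        if feedback.task_completed:
            return True
    return False
```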
The overall framework of LARM. The network takes the target task description, multi-view images, agent information, and environment feedback as input and predicts a skill token. The skill token is matched against skill embeddings, generated from a pre-prepared skill library, to select the optimal skill. The agent then performs this skill, which brings it one step closer to completing the target task and changes the environment.
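The skill-matching step can be read as a nearest-neighbor lookup in embedding space. The sketch below illustrates one plausible realization using cosine similarity; the embedding dimension, similarity metric, and skill names are assumptions for illustration, not details from the paper.

```python
import numpy as np

def select_skill(skill_token: np.ndarray, skill_embeddings: np.ndarray) -> int:
    """Return the index of the library skill closest to the predicted token."""
    token = skill_token / np.linalg.norm(skill_token)
    library = skill_embeddings / np.linalg.norm(skill_embeddings, axis=1, keepdims=True)
    similarities = library @ token  # cosine similarity per library skill
    return int(np.argmax(similarities))

# Toy usage: a 3-skill library with 4-dim embeddings (values are synthetic).
rng = np.random.default_rng(0)
skill_names = ["chop_tree", "craft_table", "mine_stone"]
library = rng.normal(size=(3, 4))
token = library[1] + 0.05 * rng.normal(size=4)  # token near "craft_table"
print(skill_names[select_skill(token, library)])  # expected: craft_table
```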
More capability examples of LARM, including traveling a long distance to find a village, building a nether portal and entering the Nether, and multi-agent collaboration to combat zombies.
@article{li2024larm,
  title={LARM: Large Auto-Regressive Model for Long-Horizon Embodied Intelligence},
  author={Li, Zhuoling and Xu, Xiaogang and Xu, Zhenhua and Lim, SerNam and Zhao, Hengshuang},
  journal={arXiv preprint arXiv:2405.17424},
  year={2024}
}