World Models

From Humanoid Robots Wiki
Revision as of 06:36, 28 June 2024 by Vrtnis (talk | contribs)

World models leverage video data to create rich, synthetic datasets, enhancing the learning process for robotic systems. By generating diverse and realistic training scenarios, world models address the challenge of insufficient real-world data, enabling robots to acquire and refine skills more efficiently.
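The core loop behind most of the work below is: learn a predictive model of the environment's dynamics from collected transitions, then plan by "imagining" rollouts inside that model instead of acting in the real world. A minimal sketch of that idea, using a hypothetical 1D point-mass environment, a linear dynamics model fitted by least squares, and a random-shooting planner (all of these specifics are illustrative assumptions, not from any particular paper):

```python
import numpy as np

# Toy 1D point-mass environment (stand-in for the real world).
# State = [position, velocity]; action = acceleration.
def step(state, action, dt=0.1):
    pos, vel = state
    vel = vel + action * dt
    pos = pos + vel * dt
    return np.array([pos, vel])

rng = np.random.default_rng(0)

# 1. Collect transitions with a random policy (the "real" data).
states, actions, next_states = [], [], []
s = np.zeros(2)
for _ in range(500):
    a = rng.uniform(-1, 1)
    s2 = step(s, a)
    states.append(s)
    actions.append([a])
    next_states.append(s2)
    # Reset if the point mass wanders too far.
    s = s2 if abs(s2[0]) < 5 else np.zeros(2)

X = np.hstack([np.array(states), np.array(actions)])  # rows of [s, a]
Y = np.array(next_states)

# 2. Fit a linear world model s' ≈ [s, a] @ W by least squares.
W, *_ = np.linalg.lstsq(X, Y, rcond=None)

# 3. "Dream": score imagined rollouts entirely inside the learned model.
def imagined_return(s0, action_seq):
    s = s0.copy()
    for a in action_seq:
        s = np.hstack([s, a]) @ W  # predicted next state, no real env
    return -abs(s[0] - 1.0)        # reward: end near position 1.0

# Random-shooting planner: sample action sequences, keep the best.
candidates = [rng.uniform(-1, 1, size=10) for _ in range(64)]
best = max(candidates, key=lambda seq: imagined_return(np.zeros(2), seq))
```

Real systems replace the linear model with a learned latent model (e.g. a VAE plus a recurrent dynamics network, as in Ha and Schmidhuber's World Models) and the random-shooting planner with a trained controller, but the train-the-model-then-plan-in-it structure is the same.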

{| class="wikitable"
! Date !! Title !! Authors !! Summary
|-
| 2017 || Learning from Simulated and Unsupervised Images through Adversarial Training || Ashish Shrivastava et al. || Introduces a technique that refines simulated images to make them more realistic using adversarial training, improving the quality of synthetic data for training robotics models.
|-
| 2018 || World Models || David Ha and Jürgen Schmidhuber || An agent builds a compact model of the world and uses it to plan and "dream," improving its performance in real environments. This work aligns closely with the interest in universal simulators.
|-
| 2020 || NeRF: Representing Scenes as Neural Radiance Fields for View Synthesis || Ben Mildenhall et al. || Synthesizes high-fidelity views of complex 3D scenes; instrumental in creating synthetic data for robotics and relevant for generating diverse visual environments for training robots.
|-
| 2024 || Real-world Robot Applications of Foundation Models: A Review || K. Kawaharazuka, T. Matsushima et al. || Provides an overview of practical applications of foundation models in real-world robotics, including the integration of specific components into existing robot systems.
|-
| 2024 || Is SORA a World Simulator? A Comprehensive Survey on General World Models and Beyond || Z. Zhu, X. Wang, W. Zhao, C. Min, N. Deng, M. Dou et al. || Surveys applications of world models across fields, including robotics, and discusses the potential of the SORA framework as a world simulator.
|-
| 2024 || Large Language Models for Robotics: Opportunities, Challenges, and Perspectives || J. Wang, Z. Wu, Y. Li, H. Jiang, P. Shu, E. Shi, H. Hu et al. || Discusses perspectives on using large language models in robotics, focusing on model transparency, robustness, safety, and real-world applicability.
|-
| 2024 || 3D-VLA: A 3D Vision-Language-Action Generative World Model || H. Zhen, X. Qiu, P. Chen, J. Yang, X. Yan, Y. Du et al. || Presents 3D-VLA, a generative world model that combines vision, language, and action to guide robot control toward goal objectives.
|-
| 2024 || A Survey on Robotics with Foundation Models: Toward Embodied AI || Z. Xu, K. Wu, J. Wen, J. Li, N. Liu, Z. Che, J. Tang || Surveys the integration of foundation models in robotics, addressing safety and interpretation challenges in real-world scenarios, particularly in densely populated environments.
|-
| 2024 || The Essential Role of Causality in Foundation World Models for Embodied AI || T. Gupta, W. Gong, C. Ma, N. Pawlowski, A. Hilmkil et al. || Argues for the importance of causality in foundation world models for embodied AI, predicting that such models will simplify the introduction of new robots into everyday life.
|-
| 2024 || Learning World Models with Identifiable Factorization || Y. Liu, B. Huang, Z. Zhu, H. Tian et al. || Proposes a world model with identifiable blocks, ensuring the removal of redundancies.
|-
| 2024 || Imagine the Unseen World: A Benchmark for Systematic Generalization in Visual World Models || Y. Kim, G. Singh, J. Park et al. || Introduces a benchmark for systematic generalization in visual world models.
|}