| 2021 || [https://arxiv.org/abs/1912.06680 Augmenting Reinforcement Learning with Human Videos] || Alex X. Lee et al. || Explores the use of human demonstration videos to improve the performance of reinforcement learning agents, which is highly relevant for augmenting datasets in robotics.
|-
| 2024 || [https://arxiv.org/abs/2402.05741 Real-world Robot Applications of Foundation Models: A Review] || K Kawaharazuka, T Matsushima et al. || Reviews practical applications of foundation models in real-world robotics, including how specific components are integrated into existing robot systems.
|-
| 2024 || [https://arxiv.org/abs/2405.03520 Is Sora a World Simulator? A Comprehensive Survey on General World Models and Beyond] || Z Zhu, X Wang, W Zhao, C Min, N Deng, M Dou et al. || Surveys applications of general world models across fields including robotics, and discusses the potential of Sora as a world simulator.
|-
| 2024 || [https://arxiv.org/abs/2401.04334 Large Language Models for Robotics: Opportunities, Challenges, and Perspectives] || J Wang, Z Wu, Y Li, H Jiang, P Shu, E Shi, H Hu et al. || Surveys perspectives on using large language models in robotics, focusing on model transparency, robustness, safety, and real-world applicability.
|-
| 2024 || [https://arxiv.org/abs/2403.09631 3D-VLA: A 3D Vision-Language-Action Generative World Model] || H Zhen, X Qiu, P Chen, J Yang, X Yan, Y Du et al. || Presents 3D-VLA, a generative world model that links 3D vision, language, and action to guide robot control toward goal objectives.
|-
| 2024 || [https://arxiv.org/abs/2402.02385 A Survey on Robotics with Foundation Models: Toward Embodied AI] || Z Xu, K Wu, J Wen, J Li, N Liu, Z Che, J Tang || Surveys the integration of foundation models in robotics, addressing safety and interpretability challenges in real-world scenarios, particularly in densely populated environments.
|-
| 2024 || [https://arxiv.org/abs/2402.06665 The Essential Role of Causality in Foundation World Models for Embodied AI] || T Gupta, W Gong, C Ma, N Pawlowski, A Hilmkil et al. || Argues for the importance of causality in foundation world models for embodied AI, predicting that such models will simplify introducing new robots into everyday life.
|-
| 2023 || [https://arxiv.org/abs/2306.06561 Learning World Models with Identifiable Factorization] || Y Liu, B Huang, Z Zhu, H Tian et al. || Proposes learning world models with an identifiable block-wise factorization that removes redundant information from the latent representation.
|-
| 2023 || [https://arxiv.org/abs/2311.09064 Imagine the Unseen World: A Benchmark for Systematic Generalization in Visual World Models] || Y Kim, G Singh, J Park et al. || Introduces a benchmark for evaluating systematic generalization in visual world models.
|}