World Models

World models leverage video data to build rich synthetic training environments for robotic systems. By generating diverse and realistic training scenarios, they address the scarcity of real-world robot data, enabling robots to acquire and refine skills more efficiently.
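The core loop behind this idea can be illustrated with a toy sketch: compress an observation into a compact latent state, then roll a learned dynamics model forward ("imagining" future states) without querying the real environment. The linear encoder and dynamics below are illustrative stand-ins, not the components of any specific paper.

```python
def encode(obs):
    # Stand-in for a learned encoder (e.g. a VAE): maps a raw
    # observation to a crude 2-D latent state. Illustrative only.
    return [sum(obs) / len(obs), max(obs) - min(obs)]

def dream_step(z, action):
    # Stand-in for a learned latent dynamics model: predicts the
    # next latent state from the current one and an action.
    return [0.9 * z[0] + 0.1 * action, 0.9 * z[1]]

def imagine(obs, actions):
    """Roll out an action sequence entirely inside the latent model,
    returning the imagined latent trajectory (no real environment used)."""
    z = encode(obs)
    trajectory = [z]
    for a in actions:
        z = dream_step(z, a)
        trajectory.append(z)
    return trajectory

traj = imagine(obs=[0.0, 1.0, 2.0], actions=[1.0, -1.0, 0.5])
```

In a real system the encoder and dynamics model are trained on logged or video data, and a controller is optimized against imagined trajectories like `traj` before being deployed in the real environment.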
{| class="wikitable sortable"
! Date !! Title !! Authors !! Summary
|-
| 2017 || [https://arxiv.org/abs/1703.06907 Sim-to-Real Transfer of Robotic Control with Dynamics Randomization] || Josh Tobin et al. || Shows that robotic control policies trained entirely in simulation can transfer to the real world when the simulator's dynamics parameters are randomized during training, helping to bridge the gap between simulated and real-world data.
|-
| 2017 || [https://arxiv.org/abs/1612.07828 Learning from Simulated and Unsupervised Images through Adversarial Training] || Ashish Shrivastava et al. || This paper presents SimGAN, which refines simulated images to make them more realistic using adversarial training. This technique can be used to enhance the quality of synthetic data for training robotics models.
|-
| 2018 || [https://arxiv.org/abs/1803.10122 World Models] || David Ha and Jürgen Schmidhuber || Introduces an agent that builds a compact generative model of its environment and uses it to plan and "dream" — training its controller inside the learned model — improving its performance in the real environment. A foundational step toward universal simulators.
|-
| 2020 || [https://arxiv.org/abs/2003.08934 NeRF: Representing Scenes as Neural Radiance Fields for View Synthesis] || Ben Mildenhall et al. || NeRF (Neural Radiance Fields) represents a scene as a continuous volumetric radiance field and renders high-fidelity novel views of complex 3D scenes, making it useful for generating diverse visual environments and synthetic data for training robots.
|-
| 2021 || [https://arxiv.org/abs/2103.11624 Diverse and Admissible Trajectory Forecasting through Multimodal Context Understanding] || Krishna D. Kamath et al. || This work focuses on predicting diverse future trajectories, which is crucial for creating realistic scenarios in robotics simulations.
|-
| 2021 || [https://arxiv.org/abs/1912.06680 Augmenting Reinforcement Learning with Human Videos] || Alex X. Lee et al. || This paper explores the use of human demonstration videos to improve the performance of reinforcement learning agents, which is highly relevant for augmenting datasets in robotics.
|}
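The dynamics randomization idea from the Tobin et al. entry above can be sketched in a few lines: physics parameters are resampled at the start of each training episode, so a policy cannot overfit to any single simulator configuration. The parameter names and ranges below are illustrative assumptions, not values from the paper.

```python
import random

# Illustrative per-episode randomization ranges (not from the paper):
# scale factors on nominal simulator values, plus an integer action delay.
PARAM_RANGES = {
    "link_mass_scale": (0.8, 1.2),
    "joint_friction_scale": (0.5, 1.5),
    "motor_gain_scale": (0.9, 1.1),
    "action_delay_steps": (0, 3),
}

def sample_dynamics(rng):
    """Draw one randomized dynamics configuration for a training episode."""
    cfg = {}
    for name, (lo, hi) in PARAM_RANGES.items():
        if isinstance(lo, int) and isinstance(hi, int):
            cfg[name] = rng.randint(lo, hi)   # integer-valued parameter
        else:
            cfg[name] = rng.uniform(lo, hi)   # continuous parameter
    return cfg

def collect_episodes(num_episodes, seed=0):
    """Sample a fresh dynamics configuration for every episode.
    A real pipeline would reset the simulator with each config
    and collect a rollout for the policy update."""
    rng = random.Random(seed)
    return [sample_dynamics(rng) for _ in range(num_episodes)]

configs = collect_episodes(num_episodes=3)
```

Because the policy only ever sees randomized dynamics, it is pushed toward behavior that is robust to the simulator's modeling errors, which is what enables transfer to real hardware.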
