
Pose estimation is a computer vision technique that predicts the configuration of a person's or object's joints or parts in an image or video.

It involves detecting and tracking the position and orientation of these parts, usually represented as keypoints.
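
In code, a detected 2D pose is commonly stored as a list of named keypoints, each carrying image coordinates and a detection confidence. The following minimal sketch illustrates one such representation; the joint names, coordinate values, and confidence threshold are illustrative only and do not come from any particular dataset or model.

<syntaxhighlight lang="python">
# Minimal sketch of a 2D pose as a list of keypoints.
# Joint names and values below are illustrative placeholders.
from dataclasses import dataclass

@dataclass
class Keypoint:
    name: str          # joint name, e.g. "left_elbow"
    x: float           # horizontal pixel coordinate
    y: float           # vertical pixel coordinate
    confidence: float  # detector confidence in [0, 1]

# One person's pose is an ordered list of keypoints.
pose = [
    Keypoint("nose", 312.0, 140.5, 0.98),
    Keypoint("left_shoulder", 280.2, 210.7, 0.95),
    Keypoint("left_elbow", 265.9, 290.3, 0.91),
    Keypoint("left_wrist", 270.4, 360.8, 0.88),
]

# Keypoints below a confidence threshold are usually treated as missing.
visible = [kp for kp in pose if kp.confidence > 0.5]
print(f"{len(visible)} of {len(pose)} keypoints detected reliably")
</syntaxhighlight>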

Pose estimation is widely used in applications such as motion capture, human-computer interaction, augmented reality, and robotics. The process typically involves training machine learning models on large datasets of annotated images to accurately identify and locate the keypoints.

These models can range from simple algorithms for 2D pose estimation to more complex systems that infer 3D poses. Recent advances in deep learning have significantly improved the accuracy and robustness of pose estimation systems, enabling their use in real-time applications.

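As an illustration of the deep-learning approach to 2D pose estimation, the sketch below runs torchvision's pretrained Keypoint R-CNN model, which predicts the 17 COCO body keypoints for each detected person. The image file name and score threshold are placeholders, and the example assumes torchvision 0.13 or newer (for the <code>weights</code> argument).

<syntaxhighlight lang="python">
# Sketch: 2D human pose estimation with a pretrained deep model.
# Assumes torchvision >= 0.13 and an image file "person.jpg" (placeholder).
import torch
import torchvision
from torchvision.io import read_image
from torchvision.transforms.functional import convert_image_dtype

# Keypoint R-CNN pretrained on COCO predicts 17 body keypoints per person.
model = torchvision.models.detection.keypointrcnn_resnet50_fpn(weights="DEFAULT")
model.eval()

# Load the image as a float tensor of shape [3, H, W] with values in [0, 1].
image = convert_image_dtype(read_image("person.jpg"), torch.float)

with torch.no_grad():
    predictions = model([image])[0]  # one result dict per input image

# Keep only confident person detections and inspect their keypoints.
keep = predictions["scores"] > 0.8          # threshold chosen for illustration
keypoints = predictions["keypoints"][keep]  # shape [num_people, 17, 3] -> (x, y, visibility)

print(f"Detected {keypoints.shape[0]} people")
for person in keypoints:
    print(person[:, :2])  # pixel coordinates of the 17 COCO keypoints
</syntaxhighlight>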
Some widely used pose estimation models are listed in the table below.

{| class="wikitable sortable"
! Sr No !! Model !! Developer !! Key Points !! Source !! License
|-
| 1 || OpenPose || Carnegie Mellon University || Detects key points of the human body, including hand, facial, and foot keypoints || OpenPose GitHub || MIT
|}
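OpenPose, for example, can write its detections to one JSON file per processed image or video frame (via its <code>--write_json</code> flag). The sketch below reads the per-person 2D body keypoints from such a file, assuming output from OpenPose's default BODY_25 body model; the file name is a placeholder.

<syntaxhighlight lang="python">
# Sketch: parsing keypoints from an OpenPose JSON output file.
# Assumes OpenPose was run with --write_json; the file name is a placeholder.
import json

with open("frame_000000_keypoints.json") as f:
    frame = json.load(f)

for person in frame["people"]:
    # "pose_keypoints_2d" is a flat list of (x, y, confidence) triples,
    # 25 triples per person for the default BODY_25 body model.
    flat = person["pose_keypoints_2d"]
    triples = [tuple(flat[i:i + 3]) for i in range(0, len(flat), 3)]
    detected = sum(1 for _, _, c in triples if c > 0)
    print(f"person with {detected}/{len(triples)} detected keypoints")
</syntaxhighlight>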