Pose Estimation

Pose estimation is a computer vision technique that predicts the configuration of a person's or object's joints or parts in an image or video. It involves detecting and tracking the position and orientation of these parts, usually represented as keypoints.

Pose estimation is widely used in applications such as motion capture, human-computer interaction, augmented reality, and robotics. The process typically involves training machine learning models on large datasets of annotated images to accurately identify and locate the keypoints.
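
A detected pose is usually returned as a small set of named keypoints, each with a confidence score. The sketch below shows one common way to represent this in Python, using the 17-keypoint convention from the COCO dataset that MoveNet also follows; the Keypoint and Pose classes here are illustrative and not taken from any particular library.

<syntaxhighlight lang="python">
from dataclasses import dataclass
from typing import List

# The 17-keypoint convention popularized by the COCO dataset and used by MoveNet.
COCO_KEYPOINT_NAMES = [
    "nose", "left_eye", "right_eye", "left_ear", "right_ear",
    "left_shoulder", "right_shoulder", "left_elbow", "right_elbow",
    "left_wrist", "right_wrist", "left_hip", "right_hip",
    "left_knee", "right_knee", "left_ankle", "right_ankle",
]

@dataclass
class Keypoint:
    name: str     # e.g. "left_wrist"
    x: float      # normalized image coordinate in [0, 1]
    y: float      # normalized image coordinate in [0, 1]
    score: float  # detector confidence in [0, 1]

@dataclass
class Pose:
    keypoints: List[Keypoint]

    def visible(self, threshold: float = 0.3) -> List[Keypoint]:
        """Return only the keypoints the detector is reasonably confident about."""
        return [kp for kp in self.keypoints if kp.score >= threshold]
</syntaxhighlight>

Downstream applications such as motion capture or gesture recognition typically filter keypoints by confidence in this way before using them.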

Pose Estimation Related Models

{| class="wikitable"
! Sr No !! Model !! Developer !! Key Points !! Source !! License
|-
| 1 || OpenPose || Carnegie Mellon University || Detects key points of the human body, including hand, facial, and foot keypoints || [https://github.com/CMU-Perceptual-Computing-Lab/openpose OpenPose GitHub] || MIT
|-
| 2 || MoveNet || Google Research || Detects 17 key points of the human body || [https://github.com/tensorflow/tfjs-models/tree/master/posenet MoveNet GitHub] || Apache 2.0
|}
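
As an illustration of how one of these models is typically used, the sketch below runs MoveNet (single-pose Lightning) on a single image via TensorFlow Hub. The TF Hub model handle, the 192x192 input size, and the output_0 tensor layout come from the MoveNet release on TF Hub rather than from this page, so treat them as assumptions to verify against the official documentation; the tfjs-models repository linked above is a separate JavaScript distribution of the same model.

<syntaxhighlight lang="python">
# Minimal single-person keypoint detection with MoveNet Lightning.
# Assumes the tensorflow, tensorflow_hub, and numpy packages are installed.
import numpy as np
import tensorflow as tf
import tensorflow_hub as hub

# Model handle as published on TF Hub (assumption; check the official docs).
model = hub.load("https://tfhub.dev/google/movenet/singlepose/lightning/4")
movenet = model.signatures["serving_default"]

def detect_keypoints(image_path: str) -> np.ndarray:
    """Return a (17, 3) array of (y, x, score) rows, one per COCO keypoint."""
    image = tf.io.read_file(image_path)
    image = tf.image.decode_jpeg(image, channels=3)
    # MoveNet Lightning expects a 192x192 int32 batch of size 1.
    image = tf.image.resize_with_pad(tf.expand_dims(image, axis=0), 192, 192)
    image = tf.cast(image, dtype=tf.int32)
    outputs = movenet(image)
    # Output shape is [1, 1, 17, 3]: batch, person, keypoint, (y, x, score).
    return outputs["output_0"].numpy()[0, 0]

if __name__ == "__main__":
    for (y, x, score) in detect_keypoints("person.jpg"):
        print(f"y={y:.2f} x={x:.2f} score={score:.2f}")
</syntaxhighlight>

The returned coordinates are normalized to [0, 1], so they need to be scaled by the original image height and width before drawing a skeleton or feeding them to a downstream controller.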