Using pre-trained models (models that have already been trained on large amounts of labelled data), we can now do complex analysis of the body with our webcams. This includes detecting and tracking poses, hands, and facial features.
In order to track different body parts, the machine learning model identifies a set of predefined points, called landmarks, on each body part.
Below is an example of all 21 landmarks on a hand.
<aside> 💡 Pros and Cons
<aside> ✅ Because the model only analyses the landmarks, you can easily change the background or setting and the tracking will still work. This is different from the image-based methods we used with Teachable Machine.
</aside>
<aside> ⛔ These models can be slow to load and run in the browser because of the number of landmarks being tracked in real time.
</aside>
</aside>
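To make the idea of landmarks more concrete, the snippet below shows roughly what one detected hand looks like as data. This is a sketch of the result format assuming the ml5.js handPose model; the field names follow that model's keypoint naming, and the coordinate values are made up purely for illustration.

```js
// Rough shape of one detected hand from the ml5.js handPose model (assumed API).
// The coordinate values below are invented, just to show the structure.
const exampleHand = {
  handedness: "Right", // which hand the model thinks this is
  confidence: 0.97,    // how confident the model is about the detection
  keypoints: [
    { x: 320, y: 410, name: "wrist" },            // landmark 0
    { x: 290, y: 380, name: "thumb_cmc" },        // landmark 1
    // ...17 more landmarks...
    { x: 360, y: 190, name: "index_finger_tip" }, // landmark 8
    { x: 455, y: 250, name: "pinky_tip" },        // landmark 20
  ],
};

// Each landmark can be looked up by name and gives an x/y position in pixels:
const indexTip = exampleHand.keypoints.find((k) => k.name === "index_finger_tip");
console.log(indexTip.x, indexTip.y);
```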
Quickly Try it!
Real-Time Tracking - simple tracking of two hands.
We will focus on hand tracking for now, but examples for face and body tracking are included at the end of this worksheet. They follow very similar structures.
// We can look for the key lines where we can make changes.
/* Try changing name.x and name.y to track different points on the hand. */
circle(fingerIndexPt1.x, fingerIndexPt1.y, 10);
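If you want a standalone starting point, here is a minimal sketch of the same idea, assuming the current ml5.js handPose model is loaded in index.html. The variable and landmark names here are our own (from the handPose keypoint list), not the `fingerIndexPt1` variable used in the linked example.

```js
let video;
let handPose;
let hands = [];

function preload() {
  // Load the hand-tracking model (assumes the ml5.js library is included in index.html).
  handPose = ml5.handPose();
}

function setup() {
  createCanvas(640, 480);
  video = createCapture(VIDEO);
  video.size(640, 480);
  video.hide();
  // Start detecting hands from the webcam and store the latest results.
  handPose.detectStart(video, (results) => {
    hands = results;
  });
}

function draw() {
  image(video, 0, 0, width, height);
  if (hands.length > 0) {
    // Try changing "index_finger_tip" to another landmark name, e.g. "thumb_tip".
    const tip = hands[0].keypoints.find((k) => k.name === "index_finger_tip");
    fill(255, 0, 0);
    noStroke();
    circle(tip.x, tip.y, 10);
  }
}
```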
Control graphics with finger tracking (drawing a line between two fingers): https://editor.p5js.org/AndreasRef/sketches/QBfTGW8fj
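As a rough sketch of the same idea (assuming the ml5.js handPose model and its named keypoints, not necessarily how the linked example is written): read two landmarks each frame, draw a line between them, and use their distance to control the graphics.

```js
let video;
let handPose;
let hands = [];

function preload() {
  handPose = ml5.handPose(); // assumes ml5.js is included in index.html
}

function setup() {
  createCanvas(640, 480);
  video = createCapture(VIDEO);
  video.size(640, 480);
  video.hide();
  handPose.detectStart(video, (results) => (hands = results));
}

function draw() {
  image(video, 0, 0, width, height);
  if (hands.length > 0) {
    const pts = hands[0].keypoints;
    const thumb = pts.find((k) => k.name === "thumb_tip");
    const index = pts.find((k) => k.name === "index_finger_tip");
    // Use the pinch distance between the two fingertips to control the line thickness.
    const pinch = dist(thumb.x, thumb.y, index.x, index.y);
    stroke(0, 255, 0);
    strokeWeight(map(pinch, 20, 200, 2, 20, true));
    line(thumb.x, thumb.y, index.x, index.y);
  }
}
```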
Hand Tracking - Image Load