Miscellaneous


Published on Jun 25, 2012

Tennis Ball Collecting Robot Contest. Students have 3 minutes to pick up as many as 25 tennis balls; the more balls they collect, the higher their grade. Two groups earned full scores!
 

AI learns tennis

Oct 10, 2019

An AI trained via reinforcement learning learns how to play tennis. We explore different ways of nudging the AI. The environment shown in this video is part of the ML-Agents examples made by Unity3D. Thank you!!
 

Robots playing tennis!

Nov 12, 2019

Check out the world’s first cinebot tennis match by Mark Roberts Motion Control and Steve Giralt.

Thanks to Love High Speed for the Phantom cameras, and to the entire crew and post team.


Robot tennis - Behind the scenes

Nov 13, 2019

Behind the scenes of the world's first Cinebot tennis match, shot by Mrmoco and Steve Giralt with Phantom 4K support from Love High Speed.

Bolt CineBot, camera robot, Mark Roberts Motion Control Ltd, Surrey, United Kingdom
 

AI learns how to play physically simulated tennis at grandmaster level by watching tennis matches

May 4, 2023

A system has been developed that can learn a range of physically simulated tennis skills from a vast collection of broadcast video demonstrations of tennis play. The system employs hierarchical models that combine a low-level imitation policy and a high-level motion planning policy to control the character's movements based on motion embeddings learned from the broadcast videos. By utilizing simple rewards and without the need for explicit annotations of stroke types, the system is capable of learning complex tennis shotmaking skills and stringing together multiple shots into extended rallies.

To account for the low quality of motions extracted from the broadcast videos, the system utilizes physics-based imitation to correct estimated motion and a hybrid control policy that overrides erroneous aspects of the learned motion embedding with corrections predicted by the high-level policy. The resulting controllers for physically simulated tennis players are able to hit the incoming ball to target positions accurately using a diverse array of strokes (such as serves, forehands, and backhands), spins (including topspins and slices), and playing styles (such as one/two-handed backhands and left/right-handed play).
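As a rough illustration of the override idea, the sketch below combines a video-derived motion embedding with corrections on a subset of its dimensions. Everything here is invented for illustration: the dimensions, the fixed boolean mask, and the assumption that the first few dimensions encode the racket arm; in the actual system the corrections are predicted by a learned high-level policy, not selected by a hand-written mask.

```python
import numpy as np

LATENT_DIM = 8  # hypothetical embedding size

# Assume (for illustration only) the first 3 dimensions encode the
# racket-arm motion, which is the part most corrupted by video
# pose-estimation errors and therefore the part to override.
override_mask = np.zeros(LATENT_DIM, dtype=bool)
override_mask[:3] = True

def hybrid_embedding(z_video, z_corrected, mask):
    """Keep the video-derived embedding where it is reliable and
    substitute predicted corrections on the masked dimensions."""
    return np.where(mask, z_corrected, z_video)

z_video = np.full(LATENT_DIM, 0.5)   # embedding decoded from broadcast video
z_corr = np.full(LATENT_DIM, -0.5)   # stand-in for high-level corrections
z = hybrid_embedding(z_video, z_corr, override_mask)
```

The masked dimensions take the corrected values while the rest pass through unchanged, which is the essence of the hybrid scheme: trust the video data where it is good, and defer to the task-driven policy where it is not.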

Overall, the system is able to synthesize two physically simulated characters playing extended tennis rallies with simulated racket and ball dynamics, demonstrating the effectiveness of the approach.

research.nvidia.com/labs/toronto-ai/vid2player3d

0:00 Introduction to a new AI system that learns to play tennis
0:18 Permission to upload the video
0:26 The paper's video begins with an introduction
1:08 Motion capture has been the most common source of motion data for character animation
2:13 System Overview
3:07 Approach
5:00 Complex and Diverse Skills
6:05 Task Performance
6:46 Styles from Different Players
7:16 Two-Player Rallies
8:13 Ablation of Physics Correction
8:36 Ablation of Hybrid Control
8:58 Effects of Removing Residual Force Control

Computer animation faces a major challenge in developing controllers for physics-based character simulation and control. In recent years, a combination of deep reinforcement learning (DRL) and motion imitation techniques has yielded simulated characters with lifelike motions and athletic abilities. However, these systems typically rely on costly motion capture (mocap) data as a source of kinematic motions to imitate. Fortunately, video footage of athletic events is abundant and offers a rich source of in-activity motion data. This inspired a research paper by Zhang et al. that explores how video data can be leveraged to learn tennis skills.

The authors seek to answer several key questions, including how to use large-scale video databases of 3D tennis motion to produce controllers that can play full tennis rallies with simulated racket and ball dynamics, how to use state-of-the-art methods in data-driven and physically-based character animation to learn skills from video data, and how to learn character controllers with a diverse set of skills without explicit skill annotations.

To tackle these challenges, the authors propose a system that builds upon recent ideas in hierarchical physics-based character control. Their approach involves leveraging motions produced by physics-based imitation of example videos to learn a rich motion embedding for tennis actions. They then train a high-level motion controller that steers the character in the latent motion space to achieve higher-level task objectives, with low-level movements controlled by the imitation controller.
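The two-level control loop described above can be sketched in miniature as follows. This is only an illustration of the hierarchy, not the paper's architecture: the dimensions are invented, and random linear maps stand in for the trained networks. The high-level policy maps the character state and task observation (e.g. the incoming ball) to a latent motion embedding, and the low-level imitation policy decodes that embedding into joint targets for the physics simulator.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes -- the actual system's dimensions differ.
STATE_DIM, TASK_DIM, LATENT_DIM, ACTION_DIM = 16, 8, 32, 12

# Random linear maps standing in for the trained policy networks.
W_high = rng.standard_normal((LATENT_DIM, STATE_DIM + TASK_DIM)) * 0.1
W_low = rng.standard_normal((ACTION_DIM, STATE_DIM + LATENT_DIM)) * 0.1

def high_level_policy(state, task_obs):
    """Steer the character in the latent motion space: pick a motion
    embedding z that serves the task objective (e.g. hit the ball)."""
    return np.tanh(W_high @ np.concatenate([state, task_obs]))

def low_level_policy(state, z):
    """Imitation controller: decode the latent motion embedding into
    low-level joint targets for the simulated character."""
    return np.tanh(W_low @ np.concatenate([state, z]))

def control_step(state, task_obs):
    z = high_level_policy(state, task_obs)
    return low_level_policy(state, z)

state = rng.standard_normal(STATE_DIM)
task_obs = rng.standard_normal(TASK_DIM)
action = control_step(state, task_obs)
print(action.shape)  # prints (12,)
```

The key design point is the division of labor: the low-level policy only has to reproduce plausible tennis motions from the embedding, while the high-level policy only has to choose which motion to perform, which keeps each learning problem tractable.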

The system also addresses motion quality issues caused by perception errors in the learned motion embedding.
 