Page 2 of 2 FirstFirst 12
Results 11 to 14 of 14

Thread: Miscellaneous

  2. #12


    Vid2Player: Controllable Video Sprites that Behave and Appear like Professional Tennis Players

    Aug 12, 2020

    This video shows results from the paper "Vid2Player: Controllable Video Sprites that
    Behave and Appear like Professional Tennis Players".
    See the project page at: cs.stanford.edu/~haotianz/research/vid2player
    Haotian Zhang

  4. #14


    AI learns how to play physically simulated tennis at grandmaster level by watching tennis matches

    May 4, 2023

    The developers built a system that learns a range of physically simulated tennis skills from a vast collection of broadcast video demonstrations of tennis play. The system employs a hierarchical model that combines a low-level imitation policy with a high-level motion-planning policy, controlling the character's movements through motion embeddings learned from the broadcast videos. Using simple rewards, and without explicit annotations of stroke types, the system learns complex tennis shot-making skills and strings multiple shots together into extended rallies.
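    A minimal sketch of this two-level split, with randomly initialized linear policies standing in for the trained networks. The dimensions, weights, and function names below are illustrative assumptions, not values from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions (not from the paper).
STATE_DIM, LATENT_DIM, ACTION_DIM = 16, 8, 12

# High-level motion-planning policy: maps the game state (ball and
# player positions) to a point in the learned motion-embedding space.
W_high = rng.standard_normal((LATENT_DIM, STATE_DIM)) * 0.1

# Low-level imitation policy: decodes a motion embedding plus the
# character's proprioceptive state into joint-level actions.
W_low = rng.standard_normal((ACTION_DIM, LATENT_DIM + STATE_DIM)) * 0.1

def high_level_policy(state):
    """Choose a motion embedding intended to satisfy the task objective."""
    return np.tanh(W_high @ state)

def low_level_policy(embedding, state):
    """Imitate the motion that the chosen embedding encodes."""
    return np.tanh(W_low @ np.concatenate([embedding, state]))

state = rng.standard_normal(STATE_DIM)
z = high_level_policy(state)         # typically queried at a lower rate
action = low_level_policy(z, state)  # queried every simulation step
```

    The point of the split is that the high-level policy never has to reason about joint torques; it only picks where to go in the embedding space, and the imitation policy fills in plausible tennis motion.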

    To account for the low quality of motions extracted from the broadcast videos, the system utilizes physics-based imitation to correct estimated motion and a hybrid control policy that overrides erroneous aspects of the learned motion embedding with corrections predicted by the high-level policy. The resulting controllers for physically-simulated tennis players are able to hit the incoming ball to target positions accurately using a diverse array of strokes (such as serves, forehands, and backhands), spins (including topspins and slices), and playing styles (such as one/two-handed backhands and left/right-handed play).
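    One way to picture the hybrid-control override is a per-dimension gate on the latent embedding: gated-off components keep the value decoded from video, gated-on components are replaced by the high-level policy's correction. The gate, dimensions, and which components get overridden are all assumptions for illustration, not the paper's actual mechanism:

```python
import numpy as np

LATENT_DIM = 8
rng = np.random.default_rng(1)

# Noisy embedding recovered from broadcast-video pose estimation.
video_embedding = rng.standard_normal(LATENT_DIM)
# Correction predicted by the high-level policy.
predicted_correction = rng.standard_normal(LATENT_DIM)

# Per-dimension gate in [0, 1]: 0 keeps the video-derived value,
# 1 overrides it with the policy's prediction.
gate = np.zeros(LATENT_DIM)
gate[:3] = 1.0  # e.g. override the components judged unreliable

hybrid = (1 - gate) * video_embedding + gate * predicted_correction
```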

    Overall, the system is able to synthesize two physically simulated characters playing extended tennis rallies with simulated racket and ball dynamics, demonstrating the effectiveness of the approach.

    research.nvidia.com/labs/toronto-ai/vid2player3d

    0:00 Introduction: an AI that learns to play tennis
    0:18 Permission to upload the video
    0:26 Start of the paper's video: introduction
    1:08 Motion capture has been the most common source of motion data for character animation
    2:13 System Overview
    3:07 Approach
    5:00 Complex and Diverse Skills
    6:05 Task Performance
    6:46 Styles from Different Players
    7:16 Two-Player Rallies
    8:13 Ablation of Physics Correction
    8:36 Ablation of Hybrid Control
    8:58 Effects of Removing Residual Force Control

    Developing controllers for physics-based character simulation is a major challenge in computer animation. In recent years, combining deep reinforcement learning (DRL) with motion imitation has yielded simulated characters with lifelike motions and athletic abilities. However, these systems typically rely on costly motion capture (mocap) data as a source of kinematic motions to imitate. Video footage of athletic events, by contrast, is abundant and offers a rich source of in-activity motion data. This inspired the research paper by Zhang et al., which explores how video data can be leveraged to learn tennis skills.

    The authors seek to answer several key questions, including how to use large-scale video databases of 3D tennis motion to produce controllers that can play full tennis rallies with simulated racket and ball dynamics, how to use state-of-the-art methods in data-driven and physically-based character animation to learn skills from video data, and how to learn character controllers with a diverse set of skills without explicit skill annotations.

    To tackle these challenges, the authors propose a system that builds upon recent ideas in hierarchical physics-based character control. Their approach involves leveraging motions produced by physics-based imitation of example videos to learn a rich motion embedding for tennis actions. They then train a high-level motion controller that steers the character in the latent motion space to achieve higher-level task objectives, with low-level movements controlled by the imitation controller.
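    The higher-level task objective is expressed through a reward signal. The paper only says the rewards are "simple", so the dense ball-placement reward below is a hypothetical example of what such an objective could look like, not the paper's actual reward:

```python
import numpy as np

def rally_reward(landing_xy, target_xy, scale=2.0):
    """Dense reward that peaks at 1.0 when the ball lands on the commanded target.

    `scale` (an assumed constant) controls how quickly the reward decays
    with distance from the target.
    """
    dist = np.linalg.norm(np.asarray(landing_xy) - np.asarray(target_xy))
    return float(np.exp(-scale * dist))

print(rally_reward([1.0, 2.0], [1.0, 2.0]))  # 1.0 (perfect placement)
```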

    The system also addresses motion quality issues caused by perception errors in the learned motion embedding.


