U.S. Patent No. 10,632,385: Systems and methods for capturing participant likeness for a video game character
Issued April 28, 2020 to Electronic Arts Inc.
Priority Date: January 27, 2016
Summary:
U.S. Patent No. 10,632,385 (the ’385 Patent) relates to systems and methods for capturing the likeness of a person (such as a real-life athlete) from video of live events to create a video game character. The ’385 Patent describes systems and methods that send videos from multiple cameras, camera angles, and events to a processing server that generates poses of the person and associates them with a movement type, such as running or jumping, and/or a game stimulus, such as celebration or fatigue. This data is used to generate a character model that reflects the likeness of the person. These systems and methods may be cheaper and less time-consuming than traditional motion-capture techniques, such as having the person come to a production studio and wear a motion-capture suit.
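The pipeline described above can be illustrated with a minimal sketch. All names below (the data classes, the `estimate_pose` stub, and `build_graphic_dataset`) are illustrative assumptions, not the patent's actual implementation; the stub stands in for whatever pose-estimation processing the server would perform.

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class VideoFrame:
    camera_id: str   # which camera recorded this frame
    event_id: str    # which live event the footage came from
    pixels: bytes    # raw image data (placeholder)

@dataclass
class Pose:
    joints: list                 # e.g. 3-D joint coordinates
    movement_type: str           # e.g. "running", "jumping"
    game_stimulus: str           # e.g. "celebration", "fatigue"

@dataclass
class GraphicDataset:
    participant: str
    poses: list = field(default_factory=list)

def estimate_pose(frame: VideoFrame, participant: str) -> Optional[Pose]:
    # Stand-in for a real multi-view pose estimator; it emits a dummy pose
    # so the pipeline sketch is runnable end to end.
    return Pose(joints=[(0.0, 0.0, 0.0)],
                movement_type="running",
                game_stimulus="celebration")

def build_graphic_dataset(frames: list, participant: str) -> GraphicDataset:
    """Sketch of the described pipeline: detect the target participant in
    multi-camera footage, derive tagged poses, and collect them into a
    graphic dataset used to render the game character."""
    dataset = GraphicDataset(participant)
    for frame in frames:
        pose = estimate_pose(frame, participant)
        if pose is not None:
            dataset.poses.append(pose)
    return dataset
```

In this sketch, each pose carries its movement-type and game-stimulus tags, mirroring how the patent associates generated poses with those categories before the graphic dataset is stored.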
Abstract:
Systems and methods for capturing participant likeness for a video game character are disclosed. In some embodiments, a method comprises receiving, at a pose generation system, multiple videos of one or more live events, the multiple videos recorded from a plurality of camera angles. A target participant may be identified, at the pose generation system, in the multiple videos. A set of poses may be generated, at the pose generation system, of the target participant from the multiple videos, the set of poses associated with a movement type or game stimulus. The set of poses may be received, at a model processing system, from the pose generation system. The method may further comprise generating, at the model processing system, a graphic dataset based on the set of poses, and storing, at the model processing system, the graphic dataset to assist in rendering gameplay of a video game.
Illustrative Claim:
1. A system comprising: a memory comprising instructions; and a processor configured to execute the instructions to: correlate video images from a plurality of camera angles based on reference point locations; identify a target participant using multiple cameras at the plurality of camera angles during at least one live event; generate a set of poses of the target participant based on poses of a character model selected from a stored set of poses for the character model, the selection based on a movement type or game stimulus; generate a graphic dataset for the movement type or game stimulus based on the generated set of poses; and store the graphic dataset to assist in rendering a game character representative of the target participant during gameplay of a video game.
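The claim's pose-selection step, choosing poses of a stored character model based on a movement type or game stimulus, can be sketched as a lookup over a keyed pose library. The dictionary keys, pose names, and function names below are illustrative assumptions, not language from the patent.

```python
# Hypothetical pose library for a stored character model, keyed by
# movement type or game stimulus as recited in the claim.
stored_poses = {
    "running": ["run_stride_a", "run_stride_b"],
    "jumping": ["jump_takeoff", "jump_landing"],
    "celebration": ["arms_raised"],
}

def select_poses(key: str) -> list:
    """Select the subset of stored character-model poses matching a
    movement type or game stimulus (empty if no match)."""
    return stored_poses.get(key, [])

def generate_graphic_dataset(key: str) -> dict:
    """Bundle the selected poses into a graphic dataset for that
    movement type or game stimulus, ready to be stored to assist in
    rendering the game character."""
    return {"key": key, "poses": select_poses(key)}
```

The sketch covers only the selection and dataset-generation elements; the claim's earlier steps (correlating video images across camera angles by reference point locations and identifying the target participant) would feed into which stored character model the library represents.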