U.S. Patent No. 11,007,445: Techniques for curation of video game clips

Issued May 18, 2021, to Lenovo Singapore Pte Ltd
Filed August 16, 2019 (priority date)

Overview:

U.S. Patent No. 11,007,445 (the ’445 patent) relates to computer curation of video clips from a user’s gameplay of a video game. The ’445 patent describes a device storing instructions executable to analyze a user’s video game gameplay and to generate one or more searchable video clips based on that gameplay. The analysis includes analysis of audio from the user’s microphone, and at least one searchable video clip is generated upon identifying, from that audio, laughter, a positive exclamation, or a negative exclamation. The analysis can also cover the user’s inputs, for example by associating them with known input sequences. The ’445 patent allows the clips to be searched by key terms, including terms relating to input sequences, funny moments, attacks on particular areas of the body, and losses. A video clip can be from the perspective of the character controlled by the user or from the perspective of another character. For video game streamers, the ’445 patent could shorten the time spent editing clips out of longer streams.
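The curation logic the Overview describes can be illustrated with a minimal sketch. The event names, the `curate_clips` function, and the padding window below are all hypothetical stand-ins for the patent's audio-analysis output, not the patent's actual implementation: each clip-worthy audio event yields one tagged, searchable clip.

```python
from dataclasses import dataclass, field

# Hypothetical audio event labels mirroring those recited in the claim:
# laughter, a positive exclamation, and a negative exclamation.
CLIP_WORTHY = {"laughter", "positive_exclamation", "negative_exclamation"}

@dataclass
class Clip:
    start_s: float  # clip start, in seconds into the gameplay recording
    end_s: float    # clip end
    tags: list = field(default_factory=list)  # searchable key terms

def curate_clips(events, pad_s=5.0):
    """Turn (timestamp, event_type) pairs into searchable clips.

    `events` stands in for the output of the patent's audio analysis;
    each clip-worthy event produces one clip padded by `pad_s` seconds
    on either side and tagged with key terms for later search.
    """
    clips = []
    for t, kind in events:
        if kind in CLIP_WORTHY:
            tags = [kind]
            if kind == "laughter":
                tags.append("funny moment")  # example search term from the summary
            clips.append(Clip(max(0.0, t - pad_s), t + pad_s, tags))
    return clips

# Example: laughter at 12 s and a negative exclamation at 120 s each
# yield a clip; neutral speech at 60 s is ignored.
events = [(12.0, "laughter"), (60.0, "neutral_speech"),
          (120.0, "negative_exclamation")]
for clip in curate_clips(events):
    print(clip.start_s, clip.end_s, clip.tags)
```

A real system would of course need an audio classifier to produce the event stream; the sketch only shows how detected events could map to searchable clips.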


Abstract:

In one aspect, a device includes at least one processor and storage accessible to the at least one processor. The storage may include instructions executable by the at least one processor to analyze a user’s gameplay of a video game and to curate one or more video clips based on the user’s gameplay. The video game clips may be curated based on player input to an input device, player audio from a microphone, and/or player video from a camera.


Illustrative Claim:

  1. A device, comprising: at least one processor; and storage accessible to the at least one processor and comprising instructions executable by the at least one processor to: analyze a user’s gameplay of a video game; and generate one or more searchable video clips based on the user’s gameplay; wherein the analysis of the user’s gameplay of the video game comprises analysis of audio from the user detected via a microphone, and wherein at least one of the searchable video clips is generated based on identification from the audio of one or more of: laughter, a positive exclamation by the user, a negative exclamation by the user.