In the business of video games, intellectual property is critical to success, and Patents, Copyrights, and Trademarks are the bricks with which your IP portfolio is built. The Patent Arcade is the web’s primary resource for video game IP law, news, cases, and commentary.

U.S. Patent No. 10,688,390: Crowd-sourced cloud gaming using peer-to-peer streaming


Issued June 23, 2020 to Sony Interactive Entertainment LLC
Priority Date: November 5, 2018




Summary:
U.S. Patent No. 10,688,390 (the ’390 Patent) relates to real-time crowd-sourced gameplay control using peer-to-peer streaming. The ’390 Patent details a method of using a streaming technology named WebRTC to create a peer-to-peer, direct connection that avoids multiple hops that negatively impact latency in a video stream. A host can stream video to another user, who in turn can share the video to another user, and so on. In some embodiments, other players may be able to join a game session and take over the controls to help out. In some implementations, the primary and secondary user devices allow for real-time interactive gameplay in the cloud video game session. Peer-to-peer streaming may be used to deliver a massively multiplayer online gaming experience without the cloud gaming provider needing additional resources to support the secondary users.
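The patent does not publish source code, but the input-aggregation step can be sketched in a few lines. In the sketch below, the function name, event format, and device identifiers are all invented for illustration; the idea is simply that the primary device merges its own timestamped inputs with those relayed from secondary peers into one batch before forwarding them to the cloud gaming machine.

```python
def aggregate_inputs(primary_inputs, secondary_inputs_by_device):
    """Merge the primary player's inputs with inputs relayed from
    secondary peers into a single timestamp-ordered batch.

    primary_inputs: list of (timestamp, action) tuples
    secondary_inputs_by_device: dict of device_id -> list of (timestamp, action)
    """
    merged = [("primary", ts, action) for ts, action in primary_inputs]
    for device_id, inputs in secondary_inputs_by_device.items():
        merged.extend((device_id, ts, action) for ts, action in inputs)
    # Sort by timestamp so the cloud gaming machine can apply events
    # in the order they occurred across all peers.
    merged.sort(key=lambda event: event[1])
    return merged

# Hypothetical session: one primary player and two secondary peers.
batch = aggregate_inputs(
    [(0.10, "jump"), (0.35, "fire")],
    {"peer-1": [(0.20, "move_left")], "peer-2": [(0.05, "fire")]},
)
```

The claim leaves the aggregation policy open; a real implementation might also de-duplicate conflicting inputs or weight them, rather than simply interleaving by timestamp as this sketch does.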

Abstract:
A method includes: executing, by a cloud gaming machine, a session of a cloud video game, the session configured to generate gameplay video; streaming the gameplay video from the cloud gaming machine over a network to a primary user device; wherein the primary user device is configured to stream the gameplay video over a peer-to-peer network to one or more secondary user devices; wherein the primary user device is configured to process primary inputs, that are generated from interactive gameplay associated with the primary user device, and secondary inputs, that are generated from interactive gameplay associated with the one or more secondary user devices, to generate aggregated inputs; receiving, over the network by the cloud gaming machine, the aggregated inputs from the primary user device; wherein executing the session of the cloud video game includes applying the aggregated inputs to update a game state of the cloud video game.

Illustrative Claim:
1. A method, comprising: executing, by a cloud gaming machine, a session of a cloud video game, the session configured to generate gameplay video; streaming the gameplay video from the cloud gaming machine over a network to a primary user device, the primary user device configured to render the gameplay video to a primary display; wherein the primary user device is configured to stream the gameplay video over a peer-to-peer network to one or more secondary user devices, each of said secondary user devices being configured to render the gameplay video, respectively, to one or more secondary displays; wherein the primary user device is configured to process primary inputs, that are generated from interactive gameplay associated with the primary user device, and secondary inputs, that are generated from interactive gameplay associated with the one or more secondary user devices, to generate aggregated inputs; receiving, over the network by the cloud gaming machine, the aggregated inputs from the primary user device; wherein executing the session of the cloud video game includes applying the aggregated inputs to update a game state of the cloud video game that is processed to generate the gameplay video.

U.S. Patent No. 10,493,363: Reality-based video game elements


Issued December 3, 2019 to Activision Publishing Inc.
Priority Date: November 9, 2016




Summary:
U.S. Patent No. 10,493,363 (the ’363 Patent) relates to using video imagery from the camera of a real-life vehicle in a video game. The ’363 Patent details a method of displaying video from a vehicle’s camera by a game device and transmitting user input to the vehicle to modify the gameplay state. In some embodiments, overlays of in-game features (such as a heads-up display) may be superimposed onto the video from the vehicle’s camera. Other objects or structures that are visible to the vehicle’s camera, including other vehicles, may be identified by a process performed by a game device and assigned gameplay elements, such as points for circumnavigating an identified object or structure. Surfaces may also be identified and assigned gameplay elements, such as a point reduction for driving on grass, or a powerup for driving on another type of surface.
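The surface-based gameplay modification can be illustrated with a minimal sketch. The claim recites comparing the video imagery against a "library of information about potential types of surfaces"; the table below stands in for that library, and every surface name and point value is invented for this example rather than taken from the patent.

```python
# Hypothetical library of surface types and their gameplay effects,
# standing in for the claim's "library of information about potential
# types of surfaces" (names and values invented for this sketch).
SURFACE_EFFECTS = {
    "pavement": 0,      # neutral surface: no change
    "grass": -5,        # point reduction for driving on grass
    "boost_pad": +20,   # powerup surface
}

def apply_surface_effect(score, surface_type):
    """Modify the gameplay state (reduced here to a score) based on
    the surface type identified beneath the vehicle."""
    return score + SURFACE_EFFECTS.get(surface_type, 0)

score = apply_surface_effect(100, "grass")
```

In the patented method, `surface_type` would come from image analysis of the vehicle's camera feed; here it is passed in directly, since the recognition step is outside the scope of this sketch.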

Abstract:
A videogame may make use of real world imagery for play of a video game utilizing a real world vehicle. Items may be identified in the real world imagery, and the identified items may become gameplay elements.

Illustrative Claim:
1. A method for use in providing videogame play of a videogame, comprising: receiving, by a game device, video imagery from a camera of a vehicle; displaying the video imagery by the game device; receiving, by the game device, user inputs for operation of the vehicle; transmitting, by the game device, operation commands to the vehicle; modifying a gameplay state of the videogame based on information of the video imagery and the user inputs for operation of the vehicle, including determining types of surfaces for areas of the gameplay world by comparing information of the video imagery with a library of information about potential types of surfaces and modifying the gameplay state of the videogame based on a type of surface identified in the video imagery upon which the vehicle is located; identifying items in the video imagery and assigning gameplay elements of the videogame to at least some of the items; and modifying display of the at least some of the items in the video imagery identified as gameplay elements of the videogame based on gameplay status of the at least some of the items.

U.S. Patent No. 10,665,265: Event reel generator for video content


Issued May 26, 2020 to Sony Interactive Entertainment America LLC
Priority Date: February 2, 2018




Summary:
U.S. Patent No. 10,665,265 (the ’265 Patent) relates to generating an event reel of the best moments from a longer video, such as of a recorded video game session. The ’265 Patent details a method of selecting video segments based on spectator reaction data, and synchronizing the cuts between different segments to a music track to create an edited compilation of video clips. In the context of video game streaming, for example, spectators may “live react” to video game sessions by inputting emojis, likes, and thumbs up/down that correlate to the video content in real time. This spectator reaction data is used to identify which sections of video content generated certain feedback so that optimal video clips may be compiled into an event reel.
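The claim's reaction-intensity calculation — "a summation of reactions during each of the respective intervals of time" — lends itself to a short sketch. The function names, interval length, and threshold below are assumptions chosen for illustration; the patent leaves those parameters to the implementer.

```python
def reaction_intensity(reaction_timestamps, interval=10.0):
    """Sum timestamped spectator reactions into fixed time intervals,
    returning (interval_start, count) pairs in chronological order."""
    buckets = {}
    for ts in reaction_timestamps:
        start = int(ts // interval) * interval
        buckets[start] = buckets.get(start, 0) + 1
    return sorted(buckets.items())

def segments_of_interest(reaction_timestamps, interval=10.0, threshold=3):
    """Keep only intervals whose reaction intensity meets a threshold,
    mirroring the claim's 'threshold reaction intensity'."""
    return [start
            for start, count in reaction_intensity(reaction_timestamps, interval)
            if count >= threshold]

# Hypothetical reactions clustered around t≈12s and t≈46s of a stream:
hot = segments_of_interest([11, 12, 13, 14, 45, 46, 47, 90], threshold=3)
```

The video time slices corresponding to these high-intensity intervals would then be cut and aligned to the beat markers of the chosen music track, which this sketch does not attempt to model.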

Abstract:
Methods and systems are provided for generating an event reel based on spectator reaction data, the event reel is temporally synchronized to a music track. A method includes receiving a video file for video content and receiving spectator reaction data related to reactions generated by spectators while viewing the video content. The method includes processing the spectator reaction data to identify video time slices from the video content that correspond to segments of interest of the video content. The method includes processing a music track to identify markers for the music track that correspond to beats of the music track and generating an event reel having a video component defined by a sequence of the video time slices that is temporally synchronized to the markers of the music track.

Illustrative Claim:
1. A method for creating an event reel for video content that is synchronized to music, the method including, receiving a video file for the video content; receiving spectator reaction data defined by reactions generated by spectators while viewing the video content, the spectator reaction data is indicative of segments of interest wherein the reactions of the spectator reaction data are timestamped; processing the spectator reaction data for identifying video time slices from the video content that correspond to the segments of interest of the video content having a threshold reaction intensity, each of the video time slices includes a plurality of video frames from the video content, wherein processing the spectator reaction data includes calculating reaction intensity for the spectator reaction data during intervals of time for the video file, the reaction intensity is identified by a summation of reactions during each of the respective intervals of time; processing a music track, received for creating the event reel, to identify markers for the music track that correspond to beats associated with the music track; and generating the event reel having a video component assembled for a sequence of the video time slices and an audio component defined at least partially by the music track, the generating the event reel includes synchronizing the video time slices to the markers for the music track, wherein the synchronizing causes scene changes between sequential video time slices to correspond in time with at least a portion of the beats associated with the music track.



DISCLAIMER

The information on this site is provided for informative and educational use only and should not be relied on as legal advice.  No attorney-client relationship exists by virtue of you reading our blog.  Always consult an attorney if you need specific legal guidance.