U.S. Patent No. 10,987,579: Graphics Rendering System

Issued April 27, 2021, to Electronic Arts Inc.
Filed/priority date: March 28, 2018

Overview:

U.S. Patent No. 10,987,579 (the ‘579 patent) relates to a graphics rendering system in which a client receives data from a server that generates a virtual 3D environment, and the client renders that environment in 2.5D. The ‘579 patent describes a system that receives data such as 2D vertexes, meshes, textures, and depth ahead of the frames in which a player would see them; the 2D vertex positions are the 3D objects projected onto a 2D plane as seen from the player’s field of view (FoV), and the client maps the textures onto those positions and rasterizes the frames. The system detects and predicts the player’s FoV so that the projected planes are placed correctly as the player moves. The player experiences the effect of a full 3D space while the client loads fewer 3D objects, reducing the bandwidth and transfer-rate burden on the user. This could significantly lower the bandwidth needed for 3D games, allowing higher-resolution frames and benefitting users on devices such as phones or laptops that lack powerful 3D rendering hardware.
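As a rough illustration of what the transmitted 2D vertex positions and depth data represent, the following Python sketch projects the vertexes of a 3D mesh onto a 2D plane from a viewpoint. The pinhole-camera model and all names here are assumptions made for illustration; the patent does not prescribe any particular projection formula.

    from typing import List, Tuple

    # Illustrative sketch only: a simple pinhole projection, looking down the +z axis.
    # The patent does not specify this camera model; it is assumed here for clarity.

    def project_vertex(vertex: Tuple[float, float, float],
                       eye: Tuple[float, float, float],
                       focal_length: float = 1.0) -> Tuple[float, float, float]:
        """Return (x_2d, y_2d, depth) for one 3D vertex as seen from `eye`."""
        dx, dy, dz = (vertex[0] - eye[0], vertex[1] - eye[1], vertex[2] - eye[2])
        x_2d = focal_length * dx / dz
        y_2d = focal_length * dy / dz
        return (x_2d, y_2d, dz)  # dz doubles as the per-vertex depth value

    def project_mesh(vertices_3d: List[Tuple[float, float, float]],
                     eye: Tuple[float, float, float]
                     ) -> Tuple[List[Tuple[float, float]], List[float]]:
        """Flatten a 3D mesh into the 2D vertex positions and depth data that the
        server could stream to the client for a frame."""
        projected = [project_vertex(v, eye) for v in vertices_3d]
        return [(x, y) for x, y, _ in projected], [d for _, _, d in projected]

In this reading, the expensive 3D work (projection from the virtual character's viewpoint) stays on the server, and the client only needs the resulting 2D positions, depth values, and previously cached textures to draw a frame.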


Abstract:

A graphics rendering system is disclosed for generating and streaming graphics data of a 3D environment from a server for rendering on a client in 2.5D. 2D textures can be transmitted in advance of frames showing the textures. Data transmitted for each frame can include 2D vertex positions of 2D meshes and depth data. The 2D vertex positions can be positions on a 2D projection as seen from a viewpoint within the 3D environment. Data for each frame can include changes to vertex positions and/or depth data. A prediction system can be used to predict when new objects will be displayed, and textures of those new objects can be transmitted in advance.
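The abstract describes two streaming ideas: sending only per-frame changes to vertex positions and depth, and pre-transmitting textures for objects predicted to enter the field of view. A minimal server-side sketch of both is shown below; the linear-motion FoV prediction, the function names (predict_visible_object_ids, frame_delta), and the dictionary message format are illustrative assumptions, not the patent's actual method.

    import math
    from typing import Dict, List, Tuple

    def predict_visible_object_ids(position: Tuple[float, float],
                                   velocity: Tuple[float, float],
                                   facing_deg: float,
                                   half_fov_deg: float,
                                   objects: Dict[int, Tuple[float, float]],
                                   seconds_ahead: float) -> List[int]:
        """Extrapolate the player's position and return ids of objects expected to
        fall inside the FoV cone, so their textures can be transmitted in advance."""
        px = position[0] + velocity[0] * seconds_ahead
        py = position[1] + velocity[1] * seconds_ahead
        visible = []
        for object_id, (ox, oy) in objects.items():
            angle = math.degrees(math.atan2(oy - py, ox - px))
            delta = (angle - facing_deg + 180.0) % 360.0 - 180.0  # wrap to [-180, 180)
            if abs(delta) <= half_fov_deg:
                visible.append(object_id)
        return visible

    def frame_delta(prev: Dict[int, dict], curr: Dict[int, dict]) -> Dict[int, dict]:
        """Per-frame message containing only objects whose projected vertexes or
        depth changed since the previous frame, keyed by object identification."""
        return {oid: data for oid, data in curr.items() if prev.get(oid) != data}

Sending deltas rather than full frame data is one plausible way to realize the abstract's "changes to vertex positions and/or depth data," since unchanged objects cost nothing on the wire.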


Illustrative Claim:

1. A computing system comprising:
    a network communications interface configured to communicate via a network with a server that is generating a virtual 3D environment;
    a memory; and
    one or more processors configured to execute computer-readable instructions to perform steps comprising:
        receiving, from the server over the network, a 2D texture for an object in the virtual 3D environment;
        receiving first frame data comprising:
            an identification of the object;
            locations of vertexes of a 2D mesh for the object generated based in part on a field of view of a virtual character within the virtual 3D environment; and
            depth data for the object generated based in part on the field of view of the virtual character within the virtual 3D environment;
        storing the texture in the memory;
        mapping the texture onto the locations of the vertexes of the 2D mesh for the object, wherein the locations of the vertexes of the 2D mesh are locations of parts of a 3D object in the virtual 3D environment projected onto a 2D plane as seen from a viewpoint of the virtual character;
        rasterizing a first frame of a video based in part on the depth data and the texture mapped onto the locations of the vertexes of the 2D mesh for the object;
        receiving second frame data comprising:
            the identification of the object; and
            updated locations of vertexes of the 2D mesh for the object;
        mapping the texture to the updated locations of the vertexes of the 2D mesh for the object; and
        rasterizing a second frame of the video based in part on the depth data and the texture mapped onto the updated locations of the vertexes of the 2D mesh for the object.
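Read as a sequence of client-side operations, claim 1 could be sketched as follows. The class and method names are hypothetical, and "rasterizing" is reduced to building a draw record; this is an illustration of the claimed sequence, not an implementation from the patent.

    from typing import Dict, List, Tuple

    Vertex2D = Tuple[float, float]

    class ClaimedClient:
        """Illustrative walk-through of claim 1: cache a texture, map it onto
        received 2D mesh vertex locations, and rasterize successive frames."""

        def __init__(self) -> None:
            self.textures: Dict[int, bytes] = {}  # "storing the texture in the memory"
            self.depth: Dict[int, float] = {}     # depth data from the first frame data

        def receive_texture(self, object_id: int, texture: bytes) -> None:
            # Texture for the object arrives ahead of the frames that show it.
            self.textures[object_id] = texture

        def rasterize(self, object_id: int, vertices: List[Vertex2D]) -> dict:
            # Map the stored texture onto the 2D mesh vertex locations and record
            # the draw; a real client would rasterize pixels here.
            return {"object": object_id,
                    "texture": self.textures[object_id],
                    "vertices": vertices,
                    "depth": self.depth[object_id]}

        def first_frame(self, object_id: int, vertices: List[Vertex2D], depth: float) -> dict:
            # First frame data: identification of the object, vertex locations, depth data.
            self.depth[object_id] = depth
            return self.rasterize(object_id, vertices)

        def second_frame(self, object_id: int, updated_vertices: List[Vertex2D]) -> dict:
            # Second frame data: same object identification, updated vertex locations;
            # the texture and depth data already held by the client are reused.
            return self.rasterize(object_id, updated_vertices)

    # Example usage with made-up values:
    client = ClaimedClient()
    client.receive_texture(7, b"texture-bytes")
    frame1 = client.second_frame if False else client.first_frame(7, [(0.1, 0.2), (0.3, 0.2), (0.2, 0.4)], depth=5.0)
    frame2 = client.second_frame(7, [(0.15, 0.2), (0.35, 0.2), (0.25, 0.4)])

Notably, only the updated vertex locations need to be resent for the second frame; the texture and depth data already on the client carry over, which is where the claimed bandwidth savings come from.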