U.S. Patent No. 10,991,110: Methods and Systems to Modify a Two Dimensional Facial Image to Increase Dimensional Depth and Generate a Facial Image That Appears Three Dimensional

Issued April 27, 2021 to Activision Publishing, Inc.
Filed: April 9, 2020 (claiming priority to December 6, 2016)

Overview:

U.S. Patent No. 10,991,110 (the ’110 patent) is a continuation of U.S. Patent No. 10,055,880 and relates to creating an augmented reality (AR) face mask that appears three-dimensional by increasing the depth of a two-dimensional image. The ’110 patent details a computer-implemented method for generating a three-dimensional-appearing AR face mask from a two-dimensional image: the computer first acquires an image of a face from a camera, then maps two groups of key points on the two-dimensional image to generate a texture map.
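The key-point-to-texture-map step described above can be sketched as mapping pixel-space key points into normalized UV texture coordinates. The function name and the normalization scheme below are illustrative assumptions, not the patent's actual implementation:

```python
# Hypothetical sketch: convert 2-D facial key points (pixel coordinates)
# into normalized UV texture coordinates. The names and the normalization
# scheme are illustrative assumptions, not taken from the '110 patent.

def keypoints_to_uv(keypoints, image_width, image_height):
    """Normalize pixel-space key points into [0, 1] UV texture coordinates."""
    uv = []
    for (x, y) in keypoints:
        u = x / image_width
        v = 1.0 - (y / image_height)  # flip: image y grows down, V grows up
        uv.append((u, v))
    return uv

# Example: three key points on a 200x100 image
points = [(0, 0), (100, 50), (200, 100)]
print(keypoints_to_uv(points, 200, 100))  # [(0.0, 1.0), (0.5, 0.5), (1.0, 0.0)]
```

A real system would obtain the key points from a facial-landmark detector rather than hard-coding them; the normalization itself is the part the texture-mapping step depends on.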

One of the groups of key points represents anatomical locations on the eyebrows, eyes, nose, and lips. The computer compares the two groups of key points to determine the distances and proportions of facial features, as well as several scaling factors, each scaling factor being a function of one or more of those proportions. The computer uses the key points to generate a texture map of the two-dimensional image and then projects the map onto a three-dimensional mesh stored in the computer. The scaling factors are then used to adjust the 3D mesh to yield a three-dimensional-appearing AR face mask.
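The proportion-and-scaling step above can be sketched as follows: distances between key points give facial proportions, a scaling factor is computed as a function of one or more proportions, and the factor is applied to the mesh. The specific proportion chosen, the function names, and the depth-scaling rule here are all illustrative assumptions, not the patent's disclosed values:

```python
import math

# Hypothetical sketch of the proportion/scaling-factor step. The particular
# proportion (inter-ocular distance over nose-to-chin height) and the rule of
# scaling the mesh's depth axis are illustrative assumptions only.

def distance(p, q):
    """Euclidean distance between two 2-D key points."""
    return math.hypot(p[0] - q[0], p[1] - q[1])

def scaling_factor(eye_left, eye_right, nose_tip, chin):
    """One scaling factor as a function of a single facial proportion."""
    proportion = distance(eye_left, eye_right) / distance(nose_tip, chin)
    return proportion

def scale_mesh_depth(mesh_vertices, factor):
    """Adjust a 3-D mesh by scaling each vertex's z (depth) coordinate."""
    return [(x, y, z * factor) for (x, y, z) in mesh_vertices]

factor = scaling_factor((0, 0), (4, 0), (2, 1), (2, 9))   # 4 / 8 = 0.5
mesh = scale_mesh_depth([(1, 2, 4)], factor)               # [(1, 2, 2.0)]
```

In practice a system would likely compute several such factors from different proportion pairs and apply each to a different region of the mesh; the sketch shows only the single-factor case.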

This could be used for several features in games, such as, but not limited to: displaying faces to users of heads-up displays (HUDs), displaying faces on normally faceless characters in multiplayer shooters, or printing custom avatars.

Abstract:

The specification describes methods and systems for increasing a dimensional depth of a two-dimensional image of a face to yield a face image that appears three dimensional. The methods and systems identify key points on the 2-D image, obtain a texture map for the 2-D image, determine one or more proportions within the 2-D image, and adjust the texture map of the 3-D model based on the determined one or more proportions within the 2-D image.


Illustrative Claim:

  1. A computer-implemented method for increasing a dimensional depth of a two-dimensional image to yield an augmented reality (AR) face mask, said method being implemented in a computer having a processor and a random access memory, wherein said processor is in data communication with a display and with a storage unit, the method comprising:
     acquiring from the storage unit the two-dimensional image;
     acquiring an image of a face of a person from a camera;
     using said computer and executing a plurality of programmatic instructions stored in the storage unit, identifying a first plurality of key points on the two-dimensional image;
     using said computer and executing a plurality of programmatic instructions stored in the storage unit, identifying a second plurality of key points on the two-dimensional image;
     using said computer and executing a plurality of programmatic instructions stored in the storage unit, generating a texture map of the two-dimensional image;
     using said computer and executing a plurality of programmatic instructions stored in the storage unit, projecting said texture map of the two-dimensional image onto the image of the face of the person;
     using said computer and executing a plurality of programmatic instructions stored in the storage unit, modifying the first plurality of key points based on the second plurality of key points; and
     using said computer, outputting the AR face mask image based on the modified first plurality of key points.
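The sequence of steps recited in claim 1 can be sketched as a pipeline skeleton. Every function name and body below is a trivial stand-in chosen for illustration; the patent does not disclose these names or implementations:

```python
# Hypothetical skeleton mirroring the order of steps in claim 1.
# All function bodies are placeholder stand-ins, not the patent's method.

def identify_key_points(image, group):
    # Stand-in: pretend the first/second pluralities come from fixed offsets.
    base = 0 if group == "first" else 100
    return [(base + i, base + i) for i in range(3)]

def generate_texture_map(image):
    return {"source": image}                     # placeholder texture map

def project_texture(texture, face_image):
    return {**texture, "target": face_image}     # placeholder projection

def modify_key_points(first, second):
    # Stand-in "modification": average each first point with its counterpart.
    return [((a + c) / 2, (b + d) / 2) for (a, b), (c, d) in zip(first, second)]

def build_ar_face_mask(stored_image, camera_image):
    first = identify_key_points(stored_image, group="first")    # first plurality
    second = identify_key_points(stored_image, group="second")  # second plurality
    texture = generate_texture_map(stored_image)
    projected = project_texture(texture, camera_image)
    modified = modify_key_points(first, second)
    return {"projection": projected, "key_points": modified}    # AR face mask output

mask = build_ar_face_mask("stored.png", "camera.png")
```

The value of the sketch is the ordering: both pluralities of key points are identified before the texture map is generated, the projection targets the camera image of the person's face, and the output is based on the modified first plurality.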