U.S. Patent No. 10,636,217: Integration of tracked facial features for VR users in virtual reality environments

Issued April 28, 2020 to Sony Interactive Entertainment Inc.
Priority Date: September 30, 2016

Summary:
U.S. Patent No. 10,636,217 (the ’217 Patent) relates to mapping the tracked facial feature movements of a head mounted display (HMD) user onto an avatar that other HMD users can view. The ’217 Patent details a method of detecting an HMD user’s eye gaze and mouth movements with eye gaze sensors and cameras on the HMD. The position and size of the user’s nose may also be detected and used, together with the eye gaze and mouth data, to capture facial expressions. From this data, a virtual face is generated for the user’s avatar that approximates the user’s actual facial movements. Other players can see the generated virtual face of an HMD user when their perspective provides a line of sight or a viewable angle to that face.
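For illustration only, the claimed pipeline (gaze detection, mouth capture, nose modeling, face generation, and perspective-gated presentation) can be sketched in code. This is a minimal sketch, not the patent's implementation; every type and function name below is hypothetical, and real HMDs would supply the sensor data through vendor SDKs.

```python
from dataclasses import dataclass
from typing import Optional, Tuple

# Hypothetical sensor readings; a real HMD would produce these from
# eye gaze sensors and cameras disposed in/on the display housing.
@dataclass
class EyeGaze:
    yaw: float    # horizontal gaze angle, degrees
    pitch: float  # vertical gaze angle, degrees

@dataclass
class MouthFrame:
    openness: float  # 0.0 (closed) to 1.0 (fully open), derived from mouth images

@dataclass
class NoseModel:
    position: Tuple[float, float]  # (x, y) offset on the face mesh
    size: float                    # relative scale from nose sensor data

@dataclass
class VirtualFace:
    eye_yaw: float
    eye_pitch: float
    mouth_openness: float
    nose: NoseModel

def generate_virtual_face(gaze: EyeGaze, mouth: MouthFrame,
                          nose: NoseModel) -> VirtualFace:
    """Combine eye gaze, mouth movement, and nose data into one
    renderable virtual face, mirroring the claim's generation step."""
    return VirtualFace(gaze.yaw, gaze.pitch, mouth.openness, nose)

def present_avatar(face: VirtualFace,
                   viewer_has_line_of_sight: bool) -> Optional[dict]:
    """Render the avatar's face only when another user's perspective
    enables viewing it, as the summary describes; otherwise render nothing."""
    if not viewer_has_line_of_sight:
        return None
    return {
        "eyes": (face.eye_yaw, face.eye_pitch),
        "mouth": face.mouth_openness,
        "nose": (face.nose.position, face.nose.size),
    }
```

Note that the face is presented without the HMD: only the reconstructed features (eyes, mouth, nose) reach the viewer, not the headset itself.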

Abstract:
A method for rendering a virtual reality (VR) scene viewable via a head mounted display (HMD) is provided. The method includes detecting eye gaze of a user using one or more eye gaze sensors disposed in a display housing of the HMD. And, capturing images of a mouth of the user using one or more cameras disposed on the HMD, wherein the images of the mouth include movements of the mouth. Then, the method includes generating a virtual face of the user. The virtual face includes virtual eye movement obtained from the eye gaze of the user and virtual mouth movement obtained from said captured images of the mouth. The method includes presenting an avatar of the user in the VR scene with the virtual face. The avatar of the user is viewable by another user having access to view the VR scene from a perspective that enables viewing of the avatar having the virtual face of the user. Facial expressions and movements of the mouth of the user wearing the HMD are viewable by said other user, and the virtual face of the user is presented without the HMD.

Illustrative Claim:
1. A method for rendering a virtual reality (VR) scene viewable via a head mounted display (HMD), comprising, detecting eye gaze of a user using one or more eye gaze sensors disposed in a display housing of the HMD; capturing images of a mouth of the user using one or more cameras disposed on the HMD, the images of the mouth include movements of the mouth; capturing sensor data for a nose of the user when wearing the HMD; generating a virtual face of the user, the virtual face including virtual eye movement obtained from the eye gaze of the user and virtual mouth movement obtained from said captured images of the mouth, the virtual face including a virtual nose that is modeled based on the sensor data of the nose of the user; and presenting an avatar of the user in the VR scene with the virtual face, the avatar of the user being viewable by another user having access to view the VR scene from a perspective that enables viewing of the avatar having the virtual face of the user, such that facial expressions and movements of the mouth of the user wearing the HMD are viewable by said other user, the virtual face of the user being presented without the HMD.