U.S. Patent No. 10,573,048: Emotional reaction sharing
Issued February 25, 2020 to Oath Inc.
Filed July 24, 2016 (priority date)
U.S. Patent No. 10,573,048 (the ‘048 patent) relates to identifying the facial expressions of content viewers at different points in time and displaying them in real time. The ‘048 patent describes a device that maps landmark points onto facial features to identify changes in a user’s facial expressions over time, and that uses audio from the user to identify the user’s mood. At a user reaction distribution service, an expression recognition algorithm tracks changes in the mapped points, identifying different facial expressions over time, and the identified expressions are then verified against the user’s mood. The facial expressions of all viewers are tallied to find the most common expression, which is sent to viewers live and can be represented with an animation, image, text, or symbol. The ‘048 patent could allow viewers of game streams to gauge the general feelings and mood of an audience without having to read a fast-moving chat box.
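The patent does not publish its expression recognition algorithm, but the idea of mapping changes in landmark locations to an expression, then verifying against an audio-derived mood, could be sketched as follows. The feature names, thresholds, and the expression/mood pairings here are illustrative assumptions, not details from the ‘048 patent.

```python
from math import dist

# Hypothetical threshold: total mouth-corner movement (in pixels) that
# this sketch treats as indicating a smile.
SMILE_MOUTH_SPREAD = 5.0

def classify_expression(landmarks_t1, landmarks_t2):
    """Map changes in landmark point locations between two points in
    time to a facial expression label (illustrative only).

    Each argument is a dict of feature name -> (x, y) point.
    """
    left_delta = dist(landmarks_t1["mouth_left"], landmarks_t2["mouth_left"])
    right_delta = dist(landmarks_t1["mouth_right"], landmarks_t2["mouth_right"])
    if left_delta + right_delta > SMILE_MOUTH_SPREAD:
        return "smiling"
    return "neutral"

def verify_with_mood(expression, mood):
    """Verify the recognized expression against the mood identified from
    the user's audio; here verification is simple agreement between a
    hypothetical expression->mood table and the reported mood."""
    consistent = {"smiling": "happy", "neutral": "calm"}
    return expression if consistent.get(expression) == mood else "unverified"
```

For example, if the mouth corners spread apart between the two points in time and the audio-derived mood is "happy", the sketch reports a verified "smiling" expression; a mismatched mood leaves it "unverified".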
One or more computing devices, systems, and/or methods for emotional reaction sharing are provided. For example, a client device captures video of a user viewing content, such as a live stream video. Landmark points, corresponding to facial features of the user, are identified and provided to a user reaction distribution service that evaluates the landmark points to identify a facial expression of the user, such as a crying facial expression. The facial expression, such as landmark points that can be applied to a three-dimensional model of an avatar to recreate the facial expression, is provided to client devices of users viewing the content, such as a second client device. The second client device applies the landmark points of the facial expression to a bone structure mapping and a muscle movement mapping to create an expressive avatar having the facial expression for display to a second user.
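The receiving client's step of applying shared landmark points to an avatar's bone structure mapping could look roughly like the sketch below. The `Bone` and `AvatarRig` types and the `apply_landmarks` method are hypothetical names for illustration; the patent does not disclose an implementation, and a real rig would also involve the muscle movement mapping and a 3D model.

```python
from dataclasses import dataclass, field

@dataclass
class Bone:
    """A single avatar bone with a 2D position (3D in a real rig)."""
    name: str
    position: tuple

@dataclass
class AvatarRig:
    # Bone structure mapping: landmark name -> bone driven by that landmark.
    bones: dict = field(default_factory=dict)

    def apply_landmarks(self, landmarks):
        """Move each mapped bone to its landmark's location, recreating
        the sending user's facial expression on the avatar."""
        for name, point in landmarks.items():
            if name in self.bones:
                self.bones[name].position = point
        return self.bones
```

In this sketch, the second client device receives only compact landmark points rather than video, which is consistent with the abstract's description of recreating the expression locally on an avatar.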
- A computing device comprising: a processor; and memory comprising processor-executable instructions that when executed by the processor cause performance of operations, the operations comprising:
  - receiving, at a user reaction distribution service, a first set of landmark points, a second set of landmark points, and a mood of a first user from a client device, wherein: the first set of landmark points represents a set of facial features of the first user at a first point in time and the second set of landmark points represents the set of facial features of the first user at a second point in time while the first user is viewing content through the client device, and the mood is identified at the client device from audio of the first user while the first user is viewing the content through the client device;
  - evaluating, at the user reaction distribution service, the first set of landmark points and the second set of landmark points, using a facial expression recognition algorithm that maps changes in location of landmark points to facial movements indicative of facial expressions, to identify a facial expression of the first user while the first user is viewing the content;
  - verifying the facial expression of the first user based upon the mood;
  - identifying, at the user reaction distribution service, a set of facial expressions of other users viewing the content during a time interval between the first point in time and the second point in time based upon landmark points received from client devices of the other users, wherein: the client device of the first user and the client devices of the other users define a group of client devices, and the facial expression of the first user and the set of facial expressions of other users define a group of facial expressions;
  - ranking, at the user reaction distribution service, the group of facial expressions to determine a most frequently occurring facial expression, amongst the group of facial expressions, during the time interval; and
  - sending, from the user reaction distribution service, the most frequently occurring facial expression to a plurality of client devices amongst the group of client devices in real-time during viewing of the content by the first user and by the other users.