1. A method for automatically animating lip synchronization and facial expression of three-dimensional characters comprising:

obtaining a first set of rules that define output morph weight set stream as a function of phoneme sequence and time of said phoneme sequence;

obtaining a timed data file of phonemes having a plurality of sub-sequences;

generating an intermediate stream of output morph weight sets and a plurality of transition parameters between two adjacent morph weight sets by evaluating said plurality of sub-sequences against said first set of rules;

generating a final stream of output morph weight sets at a desired frame rate from said intermediate stream of output morph weight sets and said plurality of transition parameters; and

applying said final stream of output morph weight sets to a sequence of animated characters to produce lip synchronization and facial expression control of said animated characters.
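For readers unfamiliar with the animation terminology, the claimed pipeline can be sketched in code. The following is a minimal, hypothetical illustration of the claim's steps, not the patent's actual implementation: the rule table, function names, morph target names ("jaw_open", "lip_round"), and the choice of linear interpolation as the "transition parameter" mechanism are all assumptions made for illustration.

```python
# Hypothetical sketch of the claimed lip-sync pipeline. All names, rule
# contents, and the interpolation scheme are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class TimedPhoneme:
    phoneme: str
    start: float  # seconds into the audio track

# "First set of rules": each phoneme maps to a target morph weight set
# (weights for hypothetical morph targets on the character's face).
RULES = {
    "M":  {"jaw_open": 0.0, "lip_round": 0.3},
    "AA": {"jaw_open": 1.0, "lip_round": 0.0},
    "OO": {"jaw_open": 0.4, "lip_round": 1.0},
}

def intermediate_stream(phonemes):
    """Evaluate the timed phoneme data against the rules, producing
    keyframed morph weight sets; the keyframe times serve as the
    transition parameters between adjacent morph weight sets."""
    return [(p.start, RULES[p.phoneme]) for p in phonemes]

def final_stream(keys, duration, fps):
    """Resample the intermediate stream to one morph weight set per
    frame at the desired frame rate, linearly interpolating between
    adjacent keyframes."""
    frames = []
    for i in range(round(duration * fps)):
        t = i / fps
        prev = max((k for k in keys if k[0] <= t),
                   key=lambda k: k[0], default=keys[0])
        nxt = min((k for k in keys if k[0] > t),
                  key=lambda k: k[0], default=None)
        if nxt is None:  # past the last keyframe: hold the final pose
            frames.append(dict(prev[1]))
            continue
        a = (t - prev[0]) / (nxt[0] - prev[0])
        frames.append({m: (1 - a) * prev[1][m] + a * nxt[1][m]
                       for m in prev[1]})
    return frames

# The final stream would then be applied to the character rig,
# frame by frame, to drive lip synchronization.
phonemes = [TimedPhoneme("M", 0.0), TimedPhoneme("AA", 0.5)]
frames = final_stream(intermediate_stream(phonemes), duration=1.0, fps=4)
```

The sketch makes the two-stage structure of the claim concrete: the intermediate stream is sparse (one morph weight set per phoneme), while the final stream is dense (one per output frame), which is what distinguishes the claim from simply looking up a mouth shape per phoneme.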
Initially, the court noted that these claims do not appear to be directed to an abstract idea on their face. Op. at 13. Further, the court noted that the claims do not cover the prior art methods of computer-assisted, but non-automated, lip synchronization. Id. Additionally, the court cited defendant’s non-infringement positions as evidence that the claims did not preempt all manners of automating lip-synched animation. See Op. at 14. The court observed that, in section 101 motions, the parties’ positions are flipped: the patentee must argue that noninfringing alternatives exist, and the defendant must argue that there are no noninfringing alternatives. Id.
In one of the clearer statements to date on section 101 analysis, the court proceeded to explain that the claims must be evaluated in the context of the prior art: