Apple's FaceID technology uses an infrared dot projector, much like the Kinect, to capture a depth map of the face. According to Wikipedia, this depth map is also used when creating Animoji, so it sounds like facial-expression tracking draws on 3D depth data in addition to the regular camera image.
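If you want to experiment with that expression data yourself, ARKit exposes it directly: a face-tracking session reports per-frame blend-shape coefficients, the same kind of data that drives Animoji. A minimal sketch, assuming an iOS app target and a device with a TrueDepth camera (the class name here is just my own):

```swift
import UIKit
import ARKit

// Minimal sketch: run an ARKit face-tracking session and read the
// blend-shape coefficients ARKit derives from the TrueDepth camera.
class FaceTrackingViewController: UIViewController, ARSessionDelegate {
    let session = ARSession()

    override func viewDidLoad() {
        super.viewDidLoad()
        // Face tracking only works on devices with a TrueDepth camera.
        guard ARFaceTrackingConfiguration.isSupported else {
            print("Face tracking is not supported on this device")
            return
        }
        session.delegate = self
        session.run(ARFaceTrackingConfiguration())
    }

    func session(_ session: ARSession, didUpdate anchors: [ARAnchor]) {
        for case let faceAnchor as ARFaceAnchor in anchors {
            // blendShapes maps expression names to coefficients in [0, 1],
            // e.g. how far the jaw is open or how much each side smiles.
            let jawOpen = faceAnchor.blendShapes[.jawOpen]?.floatValue ?? 0
            let smileLeft = faceAnchor.blendShapes[.mouthSmileLeft]?.floatValue ?? 0
            print("jawOpen: \(jawOpen), mouthSmileLeft: \(smileLeft)")
        }
    }
}
```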
Example code if you want to start playing around with TrueDepth: https://developer.apple.com/documentation/avfoundation/cameras_and_media_capture/streaming_depth_data_from_the_truedepth_camera
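The core of that sample boils down to pointing an AVCaptureSession at the front TrueDepth camera and attaching an AVCaptureDepthDataOutput. A rough sketch of that setup (class and queue names are my own, error handling trimmed, and you'll need an NSCameraUsageDescription entry in Info.plist):

```swift
import AVFoundation
import CoreVideo

// Minimal sketch: stream per-frame depth maps from the TrueDepth camera.
class DepthStreamer: NSObject, AVCaptureDepthDataOutputDelegate {
    let session = AVCaptureSession()
    let depthOutput = AVCaptureDepthDataOutput()

    func configure() throws {
        // The TrueDepth camera is exposed as a front-facing device type.
        guard let device = AVCaptureDevice.default(.builtInTrueDepthCamera,
                                                   for: .video,
                                                   position: .front) else {
            fatalError("No TrueDepth camera on this device")
        }
        session.beginConfiguration()
        let input = try AVCaptureDeviceInput(device: device)
        guard session.canAddInput(input), session.canAddOutput(depthOutput) else {
            fatalError("Could not configure capture session")
        }
        session.addInput(input)
        session.addOutput(depthOutput)
        depthOutput.isFilteringEnabled = true // interpolate holes in the depth map
        depthOutput.setDelegate(self, callbackQueue: DispatchQueue(label: "depth"))
        session.commitConfiguration()
        session.startRunning()
    }

    func depthDataOutput(_ output: AVCaptureDepthDataOutput,
                         didOutput depthData: AVDepthData,
                         timestamp: CMTime,
                         connection: AVCaptureConnection) {
        // depthDataMap is a CVPixelBuffer of per-pixel distance values.
        let map = depthData.depthDataMap
        print("Depth map: \(CVPixelBufferGetWidth(map)) x \(CVPixelBufferGetHeight(map))")
    }
}
```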
Here is a useful link on how Animoji works, and another on how the iPhone X TrueDepth camera works.