A week ago, after my foray into experimental mixed-reality theater, I moved all my Kinects back to my lab and rebuilt my 3D video capture space / tele-presence site, consisting of an Oculus Rift head-mounted display and three Kinects. Now that I have a new extrinsic calibration procedure to align multiple Kinects to each other (more on that soon), and managed to finally get a really nice alignment, I figured it was time to record a short video showing what multi-camera 3D video looks like using current-generation technology (no, I don’t have any Kinects Mark II yet).
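The details of my calibration procedure are coming in a later post, but the core idea behind any extrinsic calibration is standard: estimate, for each secondary Kinect, the rigid transformation (rotation plus translation) that maps its points into a common reference frame. As a minimal sketch, here is one common way to do that from corresponding 3D point pairs, using the least-squares Kabsch/Procrustes method; this is an illustration of the general technique, not my actual procedure, and the function name is my own:

```python
import numpy as np

def estimate_rigid_transform(src, dst):
    """Estimate rotation R and translation t such that R @ src[i] + t ≈ dst[i],
    in the least-squares sense (Kabsch/Procrustes algorithm).
    src, dst: (N, 3) arrays of corresponding 3D points."""
    src_mean = src.mean(axis=0)
    dst_mean = dst.mean(axis=0)
    # Cross-covariance of the centered point sets
    H = (src - src_mean).T @ (dst - dst_mean)
    U, _, Vt = np.linalg.svd(H)
    # Guard against a reflection (det = -1) in the optimal orthogonal matrix
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = dst_mean - R @ src_mean
    return R, t
```

Once `R` and `t` are known for a camera, every point cloud it delivers can be mapped into the shared frame with `points @ R.T + t` before merging with the other cameras' data.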
I decided to embed the live 3D video into a virtual 3D model of an office, to show a possible setting for remote collaboration / tele-presence (more on that coming soon), and to contrast the “raw” nature of the 3D video with the much more polished look of the 3D model. One of the things we’ve noticed since we started working with 3D video to create “holographic” avatars many years ago is that, even with low-resolution and low-quality 3D video, the resulting avatars just feel real, in some sense even more real than higher-quality motion-captured avatars. I believe it’s related to the uncanny valley principle: fuzzy 3D video that moves in a very lifelike fashion is more believable to the brain than high-quality avatars that don’t quite move right.