Synchronization of images from multiple cameras to reconstruct a moving human
Moore, C., Duckworth, T., Aspin, R. and Roberts, D.J. 2010, Synchronization of images from multiple cameras to reconstruct a moving human, in: Distributed Simulation and Real Time Applications (DS-RT), 2010 IEEE/ACM 14th International Symposium on, 17th - 20th October 2010, Fairfax, VA.
What level of synchronization is necessary between images from multiple cameras in order to realistically reconstruct a moving human in 3D? Live reconstruction of the human form, from cameras surrounding the subject, could bridge the gap between video conferencing and Immersive Collaborative Virtual Environments (ICVEs). Video conferencing faithfully reproduces what someone looks like, whereas ICVEs faithfully reproduce what they look at. While 3D video has been demonstrated in tele-immersion prototypes, its visual and temporal quality has been well below what has become acceptable in video conferencing. Managed synchronization of the acquisition stage is universally used today to ensure that the multiple images feeding the reconstruction algorithm were taken at the same time. However, this inevitably increases latency and jitter. We measure the temporal characteristics of the capture stage and the impact of inconsistency on the reconstruction algorithm it feeds. This gives us both input and output characteristics for synchronization. From these we determine whether frame synchronization of multiple camera video streams actually needs to be delivered for 3D reconstruction and, if not, what level of temporal divergence is acceptable across the captured image frames.
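The notion of temporal divergence the abstract refers to can be illustrated with a minimal sketch: given the capture timestamps of one frame from each camera, the divergence is the spread between the earliest and latest capture. The function names, camera count, and tolerance value below are illustrative assumptions, not details from the paper.

```python
def frame_divergence(timestamps):
    """Spread (max - min) of capture timestamps, in seconds,
    for one set of frames taken across multiple cameras."""
    return max(timestamps) - min(timestamps)

def within_tolerance(timestamps, tolerance_s):
    """True if the frame set falls within an acceptable temporal divergence."""
    return frame_divergence(timestamps) <= tolerance_s

# Example: four cameras, capture timestamps in seconds since a common epoch.
capture = [0.000, 0.004, 0.011, 0.007]
print(frame_divergence(capture))           # spread across the four cameras
print(within_tolerance(capture, 0.016))    # e.g. within one 60 Hz frame period
```

Comparing the measured divergence against a tolerance like this is one way to decide, per frame set, whether unsynchronized streams are still usable by a reconstruction algorithm.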
|Item Type:||Conference or Workshop Item (Paper)|
|Uncontrolled Keywords:||3D reconstruction;3D video;acquisition stage;frame synchronization;image synchronization;immersive collaborative virtual environment;jitter;live reconstruction;moving human reconstruction algorithm;multiple camera video streams;multiple image feeding;teleimmersion prototypes;temporal divergence;video conferencing;visual-temporal quality;cameras;image motion analysis;image reconstruction;solid modelling;synchronisation;teleconferencing;video communication;|
|Themes:||Media, Digital Technology and the Creative Economy|
|Schools:||Colleges and Schools > College of Science & Technology > School of Computing, Science and Engineering > Virtual Environments & Future Media Research Centre|
|Depositing User:||Mr Rob Aspin|
|Date Deposited:||25 Oct 2011 11:00|
|Last Modified:||20 Aug 2013 18:15|