We are developing Aware Projectors that sense both their environment and their own projections, self-calibrate, and interact with one another to form large displays autonomously. Most of our prior work has been in the application areas detailed in the project below.
Members: Justen Hyde, Dan Parnham, John Robinson, John Mateer, Steve Smith, Lijiang Li
We have developed a number of Video-Augmented Environments (VAEs), including PenPets, dtouch and the Audiophoto Desk. Current research focuses on reliable image-analysis algorithms that work under all lighting conditions, exploitation of the structure of projections when interpreting 3D shape, and the development of innovative applications for education and cultural institutions. To these ends we have adopted the OpenIllusionist software framework, developed by Justen Hyde and Dan Parnham, as the infrastructure for our future VAEs. OpenIllusionist is particularly well suited to implementing large multi-agent VAEs, such as the Robot Ships exhibit we developed for the National Museum of Scotland. Future research aims towards the "Aware Projector" appliance, in which data projectors incorporate cameras, computers and wireless networking, and collaborate to form large, interactive, projected spaces.