Magic Window

Magic Window is an immersive live video experience: users can change their perspective as if they were looking through a real window. The augmented video supports interaction with both live and pre-recorded media content through gesture-based controls, such as leaning sideways to see further to the side or leaning in to zoom. Current applications include viewing live demos remotely and games such as charades and Taboo.

Magic Window as a concept has existed in several applications for a while now, starting with Brian Davidson and Jeff Wilson. The core idea is a live video stream controlled via gestures such as hand movement, leaning the body, and face tracking. Its current form is a web application, built to take advantage of WebRTC and other technical affordances of web browsers. It uses a Kinect to track users' bodies in motion and a high-resolution fisheye video camera to capture everything in front of the display. Together, these technologies make it a powerful tool for investigating how people would interact remotely given certain affordances.

Affordances are facets of a design that invite a user's interaction without explicit instruction, the way the contours of a computer mouse or the shape of a door handle suggest how to grasp them. How people notice and use those affordances is fundamental to interaction design.

For Magic Window, that interaction is currently being tested in the context of teleconferencing, in partnership with the office furniture company Steelcase. The affordances under test are those one would expect of a normal window. For example, to see what's further to the left on the other side of the window, one would lean or move to the right to see around the corner; to see more detail on the other side, one would get closer to the window. Those interactions are simulated via the Kinect's face tracking (detecting the lean) and a perspective algorithm applied to the fisheye camera's video feed (rendering the view further to one side). In addition to these interactions, there is content surrounding the video feed (see this article's photo) based on the nature of the current use case.
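As a rough illustration of the idea (not the project's actual perspective algorithm), the mapping from a tracked head position to a pan/zoom crop over a wide-angle frame could be sketched like this. All names and parameters below are illustrative assumptions: `headX` is the lateral head offset normalized to [-1, 1], and `headZ` is the normalized distance from the screen, where 0 means leaning all the way in.

```typescript
// Hypothetical sketch: map a tracked head position to a crop rectangle
// over the source video frame. Leaning in zooms; moving sideways pans
// the opposite way, mirroring how a real window behaves.

interface Crop {
  x: number;      // left edge of the crop, in source pixels
  y: number;      // top edge of the crop, in source pixels
  width: number;  // crop width, in source pixels
  height: number; // crop height, in source pixels
}

function headToCrop(
  headX: number,       // lateral offset from screen center, in [-1, 1]
  headZ: number,       // distance from screen, in [0, 1]; 0 = leaning in
  frameWidth: number,
  frameHeight: number
): Crop {
  // Leaning in (small headZ) zooms: shrink the crop toward half the frame.
  const zoom = 0.5 + 0.5 * Math.min(Math.max(headZ, 0), 1);
  const width = frameWidth * zoom;
  const height = frameHeight * zoom;

  // Moving right (positive headX) slides the crop left, so the viewer
  // sees further to the left of the remote scene, as through a window.
  const maxPan = (frameWidth - width) / 2;
  const x = (frameWidth - width) / 2 - headX * maxPan;
  const y = (frameHeight - height) / 2;

  return { x, y, width, height };
}
```

The resulting rectangle would then be drawn to the display each frame (for instance via a canvas), after the fisheye distortion has been corrected.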


External Links & Resources