Interactive Projection Arena
- teejaydub
- Aug 31, 2016
- 3 min read
This blog details the process, methods, and roadblocks on our way to Maker Faire 2016.
What it is
The project we'll be showcasing incorporates and innovates upon several different technologies.
Our vision is to have attendees walk up to the booth, pull out their phones, and take control of one of a few RC cars on a 4'x6' platform. It isn't an ordinary platform, though. A projected image will encompass the entire platform and dynamically adapt to where the cars are and what the users are telling them to do, transforming the platform into a landscape that could only exist in a video game.

Imagine seeing this: a field of grass with defined boundary lines and goals at either end, and a virtual soccer ball at the center. Users will be able to drive their real-life RC cars across the field and interact with the virtual ball. Press the 'boost' button, and a flame will be projected behind the car. The end result of the project will be several different games whose visual output depends on each user's interaction through their avatar, the RC car. A little more on how that's possible below.

Attendees will join a network broadcast by our router and be automatically directed to a lobby in an HTML5 web app, where they will wait for their turn to play. They will be controlling rechargeable 3D-printed RC cars designed just for this project. Atop each vehicle will be an IR LED and an IR receiver. The input the user provides on their phone (go forward, turn, boost) will be transcribed, transported, and translated into pulses of light picked up by the receiver on the car to tell it where to go.

A webcam will be pointed at the field, and a Python script on the computer will use image processing and computer vision to analyze the webcam image and pick out the location of each vehicle from the visual marker its LED provides. The software will then relay the car locations to Unity, which will drive the games being played and generate the dynamic imagery. The final image will be sent to a short-throw projector mounted above the field. Then, people will say, "Wow, that's cool."
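For a sense of how that tracking step might look, here's a minimal sketch using OpenCV: grab a webcam frame, threshold away everything that isn't a near-saturated bright spot, and take the centroid of each remaining blob. The camera index and threshold value are placeholders, not our actual setup.

```python
import cv2

cap = cv2.VideoCapture(0)  # webcam pointed at the field; index 0 is a guess

while True:
    ok, frame = cap.read()
    if not ok:
        break

    # IR LEDs show up as small, very bright spots: convert to grayscale
    # and keep only near-saturated pixels (threshold value is illustrative).
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    _, mask = cv2.threshold(gray, 230, 255, cv2.THRESH_BINARY)

    # Treat each remaining blob as one car's LED and take its centroid.
    contours = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                cv2.CHAIN_APPROX_SIMPLE)[-2]
    positions = []
    for c in contours:
        m = cv2.moments(c)
        if m["m00"] > 0:
            positions.append((m["m10"] / m["m00"], m["m01"] / m["m00"]))

    print(positions)  # pixel coordinates of each detected LED

    cv2.imshow("mask", mask)
    if cv2.waitKey(1) == 27:  # press Esc to quit
        break

cap.release()
cv2.destroyAllWindows()
```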
What we have so far
We have completed initial proof-of-concept testing, which involved developing a demo scene in Unity as well as the Python script that pulls images from the webcam to be analyzed. IR LED tracking is working. So far we've only tested tracking one object, but the code supports any number of them. Communication between Python and Unity has been established over UDP, allowing the measured car locations to be sent to Unity for collision detection and effect generation. This has been tested and confirmed working.
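To illustrate that Python-to-Unity link, here's a rough sketch of how detected positions could be pushed over UDP. The address, port, and JSON message format are my own placeholders for the example, not necessarily the exact protocol we're using.

```python
import json
import socket

UNITY_ADDR = ("127.0.0.1", 5005)  # Unity listens on this local port (placeholder)

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)

def send_positions(positions):
    """Send a list of (car_id, x, y) tuples to Unity as one UDP datagram."""
    message = json.dumps([{"id": i, "x": x, "y": y} for i, x, y in positions])
    sock.sendto(message.encode("utf-8"), UNITY_ADDR)

# Example: two cars detected by the tracker this frame.
send_positions([(0, 1.2, 0.8), (1, 3.4, 2.1)])
```

On the Unity side, a script can listen on the same port each frame, parse the message, and move the in-game avatars to match the real cars.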
What's next
In the coming weeks, I will be focusing on developing different game modes with interactive environments and objects. That will lead into developing the web application that acts as the user interface for each car. The control system for the vehicles will follow, which involves using a microcontroller to both drive each car's motors and receive user inputs. There's plenty more to do after that, up to and including building the structure that will contain the platform and support the projector and camera, but I'm confident we have the resources to pull it all off.
You'll see what's currently being worked on whenever I make a new post, so that's cool.