NO_THING

NO_THING is a tracking and mapping framework that uses infrared light to turn portable physical objects into interactive displays. Prepared with reflective foil markers, surfaces as unassuming as a sheet of paper can reveal customized, proximity-aware content that responds to point-and-click, drag-and-drop, or spatial gestures.

I joined the Milla & Partner Innovationlab in early 2015, just as the NO_THING technology was about to be used by 1.5 million people in the German Pavilion at the EXPO in Milan. Around the same time, this post on CreativeApplications.Net and the video above looked at the future potential of the technology. In the Innovationlab, we wanted to explore that potential, but the setup at the time was unwieldy, and the code, locked away in proprietary software, did not give us the control, freedom, and flexibility we wanted. Calibration was manual and time-consuming, and the results were never perfect. But credit where credit is due: this setup laid the foundation, served as the core technology at the aforementioned EXPO, and demonstrated what the approach was capable of. It was the next iteration, however, that would find its applications in the years to come.

The benefits were evident: a novel technology with intuitive handling and interaction; markers that can encode different languages, so visitors don't have to keep selecting their preferred one; no expensive, complicated devices to maintain and hand out; and the ability to explore and interact with a space while augmenting the analog world with digital information.

So the Innovationlab posed this question to me: is it possible to build NO_THING from the ground up, with the flexibility and artistic freedom we desire? Having just joined the team as an intern, I naturally started with a blank sheet. After many days of researching image processing, vector geometry, coordinate systems, camera APIs, and calibration techniques, and after much trial and error, the project slowly took shape in my mind. It would be composed of openFrameworks, motion-tracking cameras, an automated calibration system, various ways of displaying content, and the ability to communicate with other software, all tied together by a bunch of code.
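
To give a rough idea of the shape this took, here is a minimal openFrameworks skeleton of how the pieces could hang together – camera input feeding a tracker, calibration mapping the results into projector space, content drawn per board. Apart from the standard ofBaseApp hooks, every name in it is an assumption for illustration, not the actual NO_THING code.

```cpp
#include "ofMain.h"

// Skeleton only: the tracker, calibration, and renderer mentioned in the
// comments are placeholders for illustration, not the real classes.
class NoThingApp : public ofBaseApp {
public:
    void setup() override {
        // Load the calibration produced by the automated routine and open
        // the tracking camera (assumed helpers).
        // calibration.load("calibration.json");
        // camera.open();
    }
    void update() override {
        // Pull the latest IR blobs from the camera and resolve them into
        // board poses.
        // boards = tracker.detect(camera.getBlobs());
    }
    void draw() override {
        // Transform each board into projector space and draw its content,
        // e.g. from a web view texture.
        // for (auto& b : boards) renderer.draw(b, calibration);
    }
};

int main() {
    ofSetupOpenGL(1920, 1080, OF_WINDOW); // e.g. the projector's resolution
    ofRunApp(new NoThingApp());
}
```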

One of the major hurdles was the conversion from camera coordinate space to projector coordinate space. To recap briefly: when a piece of paper, or »board«, is held into the projector beam, the desired content should be displayed on that piece of paper, and the digital content should stay »glued« to the board as it moves. We obviously need a projector, but we also need a camera to track where the board is. Yet even once we know where the board is in the camera image, we still don't know where to draw pixels on the projector to actually hit the board. Using the automated calibration system, I finally obtained a calibration file with a set of numbers that encode the difference in position and rotation between the camera and the projector. After wrestling with coordinate systems, computer vision, and matrix calculations for quite a while, I was really happy when the projected coordinate system axes finally lined up with the board.
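
As an illustration of that final mapping step, here is a small sketch in C++ with GLM: given a 3D point tracked in camera space, a rigid camera-to-projector transform of the kind such a calibration file might encode, and the projector's intrinsics, it computes the projector pixel that hits the point. All names and the exact parametrization are assumptions; the real calibration data may be structured differently.

```cpp
// Minimal sketch (assumed structure, not the actual calibration format):
// map a 3D point tracked in camera space to projector pixel coordinates.
#include <glm/glm.hpp>

struct ProjectorIntrinsics {
    float fx, fy;   // focal lengths in pixels
    float cx, cy;   // principal point in pixels
};

// camToProj is the rigid transform (rotation + translation) between the two
// devices – the kind of information the calibration file has to encode.
glm::vec2 cameraPointToProjectorPixel(const glm::vec3& pCam,
                                      const glm::mat4& camToProj,
                                      const ProjectorIntrinsics& K)
{
    // Move the point from camera space into projector space.
    glm::vec4 pProj = camToProj * glm::vec4(pCam, 1.0f);

    // Pinhole projection: divide by depth, then scale and offset by the
    // intrinsics to land on a projector pixel (z > 0 in front, by convention).
    float u = K.fx * (pProj.x / pProj.z) + K.cx;
    float v = K.fy * (pProj.y / pProj.z) + K.cy;
    return glm::vec2(u, v);
}
```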

A lot of brain power also went into the algorithm that actually detects the objects carrying the markers. The camera only sees points where the infrared light is reflected off the markers – essentially just x,y pixel coordinates. The interesting part is figuring out which points belong to a particular object, which are just noise, like the reflection of a watch, and which don't belong to any object yet because half of an object's points are obscured by something else – and then determining the spatial orientation of every object the camera sees. Let's just say it involved a lot of vector geometry code.
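
To make the first step concrete, here is a deliberately naive sketch of grouping reflected IR blobs into candidate objects by distance – nothing like the production algorithm, just an illustration of the kind of reasoning involved. Points close to an existing cluster join it; clusters with too few points are discarded. Names and thresholds are made up.

```cpp
// Deliberately naive clustering of IR blobs into candidate objects.
// maxDist and minPoints are made-up tuning parameters.
#include <glm/glm.hpp>
#include <cstddef>
#include <vector>

using Cluster = std::vector<glm::vec2>;

std::vector<Cluster> clusterBlobs(const std::vector<glm::vec2>& blobs,
                                  float maxDist, std::size_t minPoints)
{
    std::vector<Cluster> clusters;
    for (const glm::vec2& p : blobs) {
        Cluster* home = nullptr;
        for (Cluster& c : clusters) {
            for (const glm::vec2& q : c) {
                if (glm::distance(p, q) < maxDist) { home = &c; break; }
            }
            if (home) break;
        }
        if (home) home->push_back(p);     // joins the nearest existing group
        else      clusters.push_back({p}); // starts a new candidate object
    }
    // Clusters with too few points are treated as noise (a watch reflection)
    // or as objects too occluded to identify reliably.
    std::vector<Cluster> objects;
    for (Cluster& c : clusters)
        if (c.size() >= minPoints) objects.push_back(std::move(c));
    return objects;
}
```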

Animation of how the computer sees and tracks multiple boards entering the field of view.

After probably a few months of development, I felt ready to present a small technology demonstration. The new NO_THING framework was able to control a small radio-controlled ball called »Sphero« by tilting a board. The tracking was stable enough to augment multiple pages of a brochure with different digital content, based on the markers applied to each page. This was not just live projection mapping with distorted textures, but live 3D projection mapping, so mapping onto three-dimensional objects was possible. Boards were aware of other nearby boards, as were their surroundings. And finally, content could be rendered in a web view, which made the development of interactive content viable.
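
For flavor, here is a hedged sketch of how a board's tilt could be turned into a drive command for the Sphero demo: project the board's normal onto the ground plane, use the direction as heading and the magnitude as speed. The mapping and the sendDriveCommand stand-in are pure assumptions; the actual radio interface is not shown here.

```cpp
#include <glm/glm.hpp>
#include <cmath>
#include <cstdio>

// Hypothetical drive interface for the demo – stubbed out so this compiles.
void sendDriveCommand(float headingDeg, float speed) {
    std::printf("drive: heading=%.1f deg, speed=%.2f\n", headingDeg, speed);
}

// Map a tracked board's tilt to a drive command, assuming a world space
// where z points up, so a level board has normal (0, 0, 1).
void driveFromBoardTilt(const glm::vec3& boardNormal) {
    glm::vec2 tilt(boardNormal.x, boardNormal.y); // ground-plane part = tilt
    float amount = glm::length(tilt);
    if (amount < 0.05f) {                         // dead zone: level board stops
        sendDriveCommand(0.0f, 0.0f);
        return;
    }
    float heading = glm::degrees(std::atan2(tilt.y, tilt.x)); // tilt direction
    float speed   = glm::clamp(amount * 2.0f, 0.0f, 1.0f);    // steeper = faster
    sendDriveCommand(heading, speed);
}
```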

Over the years, this new iteration of the NO_THING framework was used in a number of projects, one of which I would like to highlight – the »Changi Experience Studio« at the »Jewel«. The Jewel is a »nature-themed entertainment and retail complex« at Singapore's Changi Airport that opened in 2019. NO_THING was one of the foundational technologies throughout this enormous project, so I was involved in the development, assisted on-site with the software implementation, provided training on how the calibration and other parts of the software work, and, for a few weeks, simply had one of the most amazing workplaces, including a view of the largest indoor waterfall in the world.

This project has been quite a journey for me.

Text and media used with permission from Milla & Partner.