Archive for the ‘Labs’ Category
Whenever Stimulant embarks on a new installation project, we survey the landscape of hardware solutions to determine what we can use to create a robust and immersive experience. Our offices are full of touchscreens, sensors, video cards, and gaffer’s tape, and we’re constantly putting those pieces together in different ways to see what we can accomplish.
Our team has been spending a lot of time lately with depth-sensing cameras, which have gone from futuristic dream to mature product category since the release of the Microsoft Kinect in 2010. We’ve worked with each of these devices across an array of projects. How does each device fare? Read on.
The Original: Microsoft Kinect
Version: Microsoft Kinect for Windows SDK 1.7
It didn’t take long for the Microsoft Kinect accessory for Xbox 360 to be reverse-engineered by the open-source community and repurposed for uses well beyond games. Fortunately, Microsoft decided to foster that community rather than fight it, launching Kinect for Windows: a slightly-tweaked version of the hardware combined with a fully-featured set of development tools. The Kinect is actually a package of sensors: in addition to the depth-sensing camera, there’s an infrared camera, an RGB camera, and an array of directional microphones.
Microsoft Kinect Advantages:
- Skeletal tracking: Applications can track the positions of a user’s joints (head, shoulders, hips, hands, etc.) in space. Two people can be tracked simultaneously, in either standing or sitting poses.
- Face tracking: Various attributes of the user’s face can be tracked, including the relative positions of lips and eyebrows, which can be interpreted as facial expressions.
- Multiple sensors: Multiple Kinect sensors can be used together in order to track more users simultaneously, or to get a wider view of a space.
- Raw data: The Kinect SDK provides access to the raw depth data from the sensor, as well as images from the IR and RGB cameras. This can be processed by the application developer as they see fit.
- Voice control: The microphone array in the sensor can be pointed toward the user to better capture speech. Developers can pre-define a set of commands constituting a grammar for an application, and the application is notified when one of those commands has been spoken.
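The raw-data access mentioned above is where much of the Kinect’s flexibility comes from. As a minimal sketch (not the Kinect SDK itself, which is C#/C++; this assumes you’ve already pulled a depth frame into a NumPy array, e.g. via an open-source driver), here’s one crude way to segment the nearest user from a depth frame:

```python
import numpy as np

def nearest_user_mask(depth_mm, band_mm=400):
    """Return a boolean mask covering the depth band around the
    closest object in the frame -- a crude user segmentation."""
    valid = depth_mm > 0              # 0 means "no reading"
    if not valid.any():
        return np.zeros_like(depth_mm, dtype=bool)
    nearest = depth_mm[valid].min()   # closest valid reading, in mm
    return valid & (depth_mm <= nearest + band_mm)

# Tiny fake 4x4 depth frame in millimeters (0 = no reading)
frame = np.array([[0, 1200, 1250, 3000],
                  [0, 1210, 1260, 3100],
                  [0, 2900, 3000, 3050],
                  [0,    0, 3000, 3100]])
mask = nearest_user_mask(frame)
print(mask.sum())  # number of pixels within 400 mm of the nearest reading
```

Real applications would follow this with blob detection or hand off to the SDK’s skeletal tracker, but the principle is the same: the depth frame is just an array you can slice however your installation needs.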
Microsoft Kinect Issues:
- Compared to other sensors, the device is fairly large, making it more difficult to conceal in an installation.
- A dedicated power cord is required because the sensor uses more power than can be provided by a USB bus. The amount of data generated by the sensor also tends to saturate a USB controller, so it’s recommended that each sensor be on its own controller.
- The sensing resolution is not as fine as that of other sensors on the market. For example, the Kinect cannot easily distinguish individual fingers on a hand, which means gestures tend to involve larger movements than simple pointing.
- Up to six users can be recognized in the field of view of the sensor, but only two can be tracked in detail.
- While there are open-source toolkits for working with the Kinect on non-Windows platforms, most of the features listed above require the Microsoft SDK, which is only supported for desktop applications on Windows 7 and 8.
Microsoft Kinect Best Uses:
- Kiosks, installations, and digital signage projects where the user will be standing fairly far away from the display.
- Windows development environments.
The New Challenger: Intel Perceptual Computing SDK 2013
Version: Intel® Perceptual Computing SDK 2013
Intel’s Perceptual Computing SDK 2013 aims to provide many of the same features as Kinect in a smaller package and a less Microsoft-centered toolset. Intel recently opened up the Perceptual Computing Challenge to encourage development of innovative multimodal interfaces with this SDK. The hardware is designed for tracking of a single seated user at close range and includes RGB and depth cameras as well as dual microphones.
Intel Perceptual Computing SDK 2013 Advantages:
- Smaller and less expensive: The Intel camera (produced by Creative) is smaller than a Microsoft Kinect for Windows sensor, is powered over USB, and is designed to sit on top of most computer displays.
- Close-range tracking: It is specifically built for close-range tracking, with a range of 0.5ft to 3.25ft (and a diagonal field of view of 73 degrees).
- Hand posture/gesture recognition: The SDK allows recognition of hand postures like thumbs up, thumbs down and peace, and gestures like waving, swiping and circling with a hand. Other gestures like grab and release or pan and zoom can be implemented by examining the openness of the palm and fingertip positions.
- Facial analysis: The Intel SDK provides capabilities for face tracking, recognition and detection as well as age and gender determination. Expressions like smiling and winking can also be detected.
- Speech: Developers can leverage speech recognition by specifying a predefined list of commands, or multiple lists constituting a grammar. The SDK also has built-in support for speech synthesis powered by Nuance.
- Raw data: Developers have access to raw color and depth data from the sensor along with a confidence map to account for distance and light conditions.
- Framework support: The Intel SDK supports frameworks like Processing, Unity and OpenFrameworks and ships with basic examples to make setting up and writing simple apps with these frameworks easier.
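The grab-and-release idea from the hand-tracking bullet above can be sketched independently of any particular SDK. A minimal example in Python, assuming the tracker reports a palm “openness” value from 0 (fist) to 100 (open hand) each frame (the scale is our assumption, not Intel’s API):

```python
def make_grab_detector(close_below=30, open_above=60):
    """Classify grab/release events from a palm-openness stream.
    Two thresholds give hysteresis, so sensor jitter near a single
    cutoff doesn't fire spurious grab/release pairs."""
    state = {"grabbing": False}

    def update(openness):
        if not state["grabbing"] and openness < close_below:
            state["grabbing"] = True
            return "grab"
        if state["grabbing"] and openness > open_above:
            state["grabbing"] = False
            return "release"
        return None   # no state change this frame

    return update

detect = make_grab_detector()
# Hand closes, jitters while closed, then opens again
events = [detect(o) for o in [90, 50, 25, 28, 40, 70, 95]]
print([e for e in events if e])  # ['grab', 'release']
```

The same hysteresis pattern applies to most threshold-based gesture recognition, whichever sensor provides the input.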
Intel Perceptual Computing SDK 2013 Issues:
- Getting some of the deeper features (like age and gender detection) to work is a bit tricky.
- Due to the close range of the tracking system, hand gestures must be designed so that a user’s hand doesn’t occlude their own view of the display.
Intel Perceptual Computing SDK 2013 Best Uses:
- Desktop/laptop applications where the user will be seated in front of the PC.
- Close-range applications that need features beyond hand tracking and recognition, but don’t demand much precision or accuracy.
The Small Wonder: Leap Motion
Version: Leap Motion SDK v 0.7.7
The Leap Motion controller is a tiny unit designed to be placed below and in front of a display, and uses an infrared technique to determine the position and orientation of fingers (or finger-like things, such as pencils) positioned over it. Tracking is very accurate and fast, and the device can be calibrated to map fingertip positions to precise locations on screen. Unfortunately, the device’s range is quite limited: it’s quoted at 8 cubic feet, and in our experience it seems like quite a bit less. There is also no access to depth data or a point cloud. If what you want is very accurate finger tracking, the Leap Motion is perfect, but it’s not well-suited for much else.
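The calibration step described above boils down to a linear map from a sensing volume to screen pixels. A hedged sketch, with made-up bounds for the sensing box (real calibration would measure these per installation):

```python
def fingertip_to_screen(x_mm, y_mm,
                        x_range=(-150, 150), y_range=(80, 380),
                        screen=(1920, 1080)):
    """Linearly map a fingertip position (in mm, device frame:
    x = left/right, y = height above the sensor) to pixel
    coordinates, clamped to the screen edges."""
    def norm(v, lo, hi):
        # Normalize to 0..1 and clamp so off-box fingers pin to an edge
        return min(max((v - lo) / (hi - lo), 0.0), 1.0)

    px = norm(x_mm, *x_range) * (screen[0] - 1)
    # Screen y grows downward while sensor y grows upward, so flip it
    py = (1.0 - norm(y_mm, *y_range)) * (screen[1] - 1)
    return round(px), round(py)

print(fingertip_to_screen(0, 230))  # box center maps to screen center
```

Clamping matters in kiosk settings: a finger drifting out of the sensing box should pin the cursor to a screen edge rather than make it vanish.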
Leap Motion Advantages:
- Gesture recognition: Finger tracking is fast and accurate.
- Even smaller and less expensive: The device is physically very small and is inexpensive.
- Developer friendly: The “Airspace” application store provides a way for developers to market and distribute Leap apps.
- Framework support: The Leap supports a number of frameworks, including .NET, Processing, and Cinder.
- Compatible: Works on both Mac OS and Windows.
Leap Motion Issues:
- Sensing range is fairly limited.
- Only fingers are tracked. There is no skeleton or face tracking.
- No access to the raw sensor data.
Leap Motion Best Uses:
- Controlled kiosk environments with a pointing-based UI.
- General-audience desktop apps that can be distributed in the Airspace store.
We’ve discovered that while there are a number of interesting camera-based sensors available, each tends to specialize in a different set of features. For standing skeletons, Kinect is the only way to go. For seated voice and gesture, the Intel device may be a better choice. For precise pointing, the Leap is accurate and fast. It’s also possible to combine multiple devices. What could we do with two Kinects combined with a Leap, for example?
At Stimulant we always have a dual goal: creating amazing interactions beyond the typical mouse and keyboard, while making our installations bulletproof enough to withstand sustained public use (and abuse). These robust and affordable systems open up a huge range of possibilities for our work, and hopefully yours as well. Share your experiences with depth-sensing devices with us on Twitter @stimulant.
Big plans are in the works for Stimulant in 2013. We’re thrilled to announce that we’re adding an experience center and research laboratory to our San Francisco office. The Stimulab will incubate and showcase our latest innovations, as well as create a space where our team can work seamlessly with clients and partners to conceive and prototype new ideas.
The Stimulab will also showcase many of the concepts and technologies we explored in 2012, including:
- Multi-modal interfaces that merge speech, gesture, touch, and multiple devices
- Mobile and tablet experiences for Windows 8, Android, and iOS
- Massive, multi-touch touchwall installations
- Generative art and procedural graphics
- Digital interactive museum exhibits
- Embedded applications for small, portable devices
- Consultation on designing for devices
Continue to follow us on Twitter @stimulant. We’ll keep you up to date on the Stimulab’s progress as well as our latest ideas for events, museums, retail and beyond.
UPDATE: LoopLoop wins “Best in Category, Expressing” and “Best in Show” at the inaugural Interaction Design Awards! Read the full press release here.
Our work at Stimulant ranges from massive interactive wall-sized installations to small handheld devices. Our friends at Sifteo gave us an amazing opportunity to work on our smallest device yet.
Designing for “Inch-Scale” Computers
Sifteo cubes, originally featured in 2009 at TED, are sturdy 1.5-inch-square devices with 1-inch screens, not unlike a child’s building block. They have an amazing tactile quality and fit well in hands of all sizes and ages. Sifteo cubes are aware of their own orientation, tilt, direction, and proximity to other Sifteo cubes. A single button is embedded underneath each cube’s 128-pixel-wide screen. They are controlled wirelessly by a nearby computer and come in packs of three (expandable up to six) cubes.
Sifteo asked us to contribute to their launch portfolio of games that focus on kinesthetic learning, spatial reasoning, and collaboration. We whittled dozens of concepts down to a project we’d all love to work on: a multitrack music toy that was more exploratory than goal-based, and would leverage the minimalist and modular nature of the cubes themselves.
Passion, Prototyping and Playtesting
We all love music at Stimulant. Many of us have been DJs or musicians at some point in our lives. Combining this love with our penchant for interactive, playful experiences is part of what makes coming into work so rewarding, so it wasn’t a surprise when, after much deliberation, we decided to go down the path of making music.
The design team started prototyping the interaction design while our developers researched the technical constraints and possibilities of the Sifteo cubes themselves. The design began on paper, with lots of little doodles of possible screen states, and talking through the interactions between each Sifteo cube. We even used existing physical game pieces to playtest the application without writing a line of code, with verbal beatboxing standing in for actual audio output.
Bringing LoopLoop to Life
With the interaction model prototyped on paper, we began the process of laying down the technical framework and exploring our visual and audio design options. Sifteo’s development team was extremely supportive of our efforts, modifying their SDK framework and sharing our passion for what LoopLoop could become.
We opted for a visual style that would mimic the inferred emotional attributes of the Sifteo cubes themselves: cute, minimal, quirky, with surprising complexity revealed over time. This look and feel influenced the sound palette and naming of the application, which is a nod both to the onomatopoeia of the patterns, as well as the nature of how the tracks repeat themselves.
The Stimulant team aggressively stuck to a “less is more” ethic when it came to features, scope, and priorities. We focused on doing fewer things, and doing them better. Iteration and fine-tuning of the technical, interactive, visual, and aural aspects of the project led us towards something that we felt was fun, engaging, and a joy to play with.
When Microsoft showed SecondLight at PDC 2008, we were inspired to make something similar work with our current Surface unit. What you see here is a prototype that takes advantage of Surface’s object recognition capabilities to recognize the position of one or more iPhones on the Surface, and allows those phones to “see through” the images and reveal a second layer of information. The possibilities here are fairly extensive; what’s most interesting to us is the potential for adding a layer of personalized information on top of a public computing experience. This could enable users to capture content and take it with them, or to have the system display a personalized information layer (translated text/larger-print type/private messages) for individual users of a multi-user system.
iPhone was the first mobile platform we dug into, but we’ve also got XRay working on Android-based and Windows Mobile-based phones. Big props to Josh for pulling this all together, and to long-time friend Arthur Mount for the use of his fantastic illustrations.
What do you get when you mash-up Microsoft Surface with a Nintendo Wii Balance Board? Tilt-sensitive surface computing! Yes, this is Surface sitting directly on the Balance Board (it supports 600 pounds, we checked). Here, Josh Santangelo (who conceptualized and coded this mashup) demonstrates a simple application that lets users create bubbles of various sizes and roll them around the table by pressing on the edges of Surface. You also get a sneak peek at the WPF/Silverlight physics engine we’ve been working on as well. Tilt sensitivity adds an extra dimension to the Surface experience and opens new doors on an already highly advanced platform.
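The Balance Board reports the load at its four corner sensors, and tilt input like the bubble demo above can be derived from the center of pressure. A minimal sketch (the argument names and kg units are our assumptions, not the Board’s actual protocol):

```python
def center_of_pressure(tl, tr, bl, br):
    """Compute a normalized center of pressure (-1..1 on each axis)
    from a balance board's four corner load sensors.
    tl/tr/bl/br = top-left, top-right, bottom-left, bottom-right (kg)."""
    total = tl + tr + bl + br
    if total == 0:
        return 0.0, 0.0                        # nothing on the board
    x = ((tr + br) - (tl + bl)) / total        # +x = load shifted right
    y = ((tl + tr) - (bl + br)) / total        # +y = load shifted forward
    return x, y

print(center_of_pressure(10, 10, 10, 10))  # balanced -> (0.0, 0.0)
print(center_of_pressure(5, 15, 5, 15))    # leaning right -> (0.5, 0.0)
```

Feeding that (x, y) vector into a physics engine as a gravity direction is enough to roll objects “downhill” as users press on the table’s edges.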
Welcome to the Lab.
Stimulant's always creating things, and not everything is for a client. Our proofs of concept satisfy our own curiosity, solve common client problems, or just probe the edge of what's possible. This is one part of how our culture of technical creatives and creative technicians stays sharp, has fun, and keeps looking toward the future.