CART Transcript for Opening Keynote – Eric Brockmeyer • Disney Research

Thursday, April 3, 2014 9:00 a.m. – 10:04 a.m.

MUSEUMS AND THE WEB 2014: OPENING KEYNOTE ERIC BROCKMEYER

Held at:

Renaissance Baltimore Harborplace Hotel

202 East Pratt Street

Baltimore, MD

Communication Access Realtime Translation (CART) is provided in order to facilitate communication accessibility and may not be a totally verbatim record of the proceedings. 

>> Nancy Proctor: Good morning. I hope everybody’s had a chance to grab a coffee and maybe a little something else. It feels a bit early. I must confess, I’m not even quite sure where to start other than by saying, whew, we made it. As I’m sure you know, Woody Allen says 80% of success is just showing up. So I’d like for you all to give yourselves a big round of applause for making it to Baltimore this early in the morning.

                  [Applause]

                  I’m particularly excited to welcome you here because Baltimore is my new hometown.

How many of you got to take a tour of Baltimore on Tuesday?

Ok. I have to confess I’m really jealous. Today is actually my one‑month anniversary in my new role as Deputy Director of Digital Experience at Baltimore Museum of Art.

[Applause]

So I have to ask some of you to give me the tour a little bit later on. I’d also like to ask for a big round of applause for all the volunteers who made those tours and really this entire conference possible. We couldn’t even get out of bed without you guys. So thank you.

There are more than 600 of us at Museums and the Web this year. Some of you came from the other side of the planet. Some of you came twice. I’m not kidding. Liz Neely. More than 60 came from the DC‑Baltimore area. And we all know that making it to the museum just down the road from you, not to mention into the galleries of your own museum, can sometimes be the hardest journey of all. So well done, everyone, for finding your way here. And now that you know how to get here we expect to see a lot more of you in Baltimore.

For everyone in the area and friends of those in the area we’ve reserved not just a table but a whole room for Saturday morning’s “Birds of a Feather” Breakfast in the Watertable Ballroom. So drop in anytime from 8:00 to 10:30.

Being here, being there, having arrived, look around you and notice who’s sitting next to you. Just take a minute. Are these happy, shiny faces? Is this what success looks like? Success is the theme of the MW2014 proceedings, which you might have noticed if you’ve cracked open our ponderous program. It includes papers from Museums and the Web Asia 2013 as well as from this conference. As we experienced last night at the opening reception at the BMA, where we broke all previous MW records for opening reception attendance by nearly double, sometimes success looks a little unexpected. It leads to a little messiness.

As we recall from MW2012 in San Diego, sometimes success looks a lot like failure, as we saw in the brilliant closing session where our brave colleagues showed us how they learned from their museum mistakes, how they failed forward.

I think what this often messy always complicated multifaceted picture really shows is maturity. Technology has finally arrived in museums. And sitting at the high table now, alongside curatorial, education, marketing and all the other engines that drive the museum, we’re no longer confined to the IT ghetto. And with that, of course, comes increased responsibility. It’s a lot harder to drive the apple cart than to kick it over, as Bruce said so perfectly yesterday. And the pace at which we have to make critical path decisions for our institutions is only going to continue increasing.

How are we going to keep up? How are we going to hold on to our heads in this demanding new environment?

One answer is by working together, by recognizing that more heads are better than one, by using opportunities like Museums and the Web to come together to learn from one another’s triumphs and one another’s triumphal failures to share the burdens of the costs, to collaborate. And that’s something this community is very, very good at. It’s why I’m very proud to be part of it.

So thank you, again, all, for being here and for the amazing opportunities that these gatherings bring us.

As usual, the conference program is packed with difficult choices for you, so good luck with that. I do want to note some novelties in the program this year before handing over the mic. With the support of Johns Hopkins University Museum Studies Program, for the first time this year we are introducing CART transcription to Museums and the Web. This means that at any time one session in the program will be transcribed live with the text displayed on a second screen in the presentation. These sessions will also be videoed and then captioned with the transcriptions. This will make not only the published papers for our sessions freely available online but also the full text of those presentations and their discussions.

CART transcription is an important step towards increasing accessibility of Museums and the Web conferences as well as augmenting its archives. We are now actively seeking sponsorship to enable us ‑‑

[Applause]

Thank you. We are now actively seeking sponsorship, for all of you who clapped, to enable us to transcribe the entirety of Museums and the Web 2015 and add further supports to the accessibility of that meeting. We aim to break the vicious cycle of inaccessibility that has deprived our field of participants and colleagues with disabilities, and we want to, indeed, recognize that we are all differently abled and that ultimately is our greatest strength.

As you may know, I’m a firm believer that the two key drivers of innovation in the technology sphere are accessibility and artists. Once again, we have curator Vince Dziekan to thank for the Innovative Museums and the Web exhibition. Could you stand up?

>> Over here.

>> Nancy Proctor: There he is, in the front.

[Applause]

And through this exhibition series, which was conceived in memory of and inspired by the great work of Xavier Perrot, we hope to continue to inspire museum practice by demonstrating how artists use digital tools. If you missed Dan Deacon’s mobile participatory performance last night, I encourage you to download his groundbreaking app. It is a very important one to study, and you can use it to find your way to his next show.

If you have not yet downloaded the Museums and the Web app, you are missing out on some of the best Easter eggs I have ever seen in a mobile experience. I’m not going to say any more. Just download it.

We have Story to thank for developing that app for us. If you want a hint of what surprises you might find in the app, check out the exhibit of Jenny Holzer’s “Please Change Beliefs” on vintage computers loaned by friends of Museums and the Web.

Thank you all for that blast from the past.

I’d also like to thank IZI.Travel and the mobile providers participating in the Mobile Bakeoff. This is the first time that the museum field can see a range of mobile apps in a side‑by‑side comparison, the platforms that created them, as well as the final products.

So you can download the apps from the Museums and the Web 2014 website. Sandy Goldberg will lead the tasting session Friday at 10:00. There will be a reunion of those who participated, Saturday morning during “Birds of a Feather” Breakfast. You’re going to get your mobile fill this year, I think.

You can meet all of those mobile platform providers and many other partners to museums in the Exhibit Hall, starting with tonight’s reception from 6:00 to 8:00. The Exhibit Hall will be open until Saturday at noon. You can book speed dating sessions or a full honeymoon, if you like, with many of the vendors you will find in the hall. Just go to the signup sheet for that vendor on the exhibits page on the MW2014 site.

I’d like to close by thanking all of the vendors and museum partners who made this conference, our events in Hong Kong, Beijing, and Florence, as well as Tuesday’s deep dive on e-mail archiving, possible through their sponsorship and their support of Museums and the Web. They do much more than bring great products and services to museums. They offer a wealth of expertise and experience drawn from all over the field, and they’re actively helping us innovate not only our museum experiences but this conference as well. Thank you for helping to raise the quality and standards for everyone.

I’d like to particularly mention our platinum sponsor, MailChimp; our bronze sponsors, Piction and IZI.Travel; and last, but by no means least, our global sponsor, a new company with a familiar face. Many of you will recognize Axiell. How did I do?

>> Very well, thank you.

>> Nancy Proctor: He has been leading workshops and sharing his vast expertise with the Museums and the Web community for as long as I’ve been a part of it, and he’s going to say a few words now about Axiell ALM.

Thank you.

[Applause]

>> Thank you, Nancy. Thank you, Rich. Nancy introduced me. I’m one of those guys who came from the other side of the planet. Some people may know me as the Managing Director of Adlib Information Systems, which I’m no longer, because as of this February I’m the Chief Technology Officer of Axiell. It’s the company that bought Adlib Information Systems. It has also bought CALM, and recently we acquired Selago. And I’m very proud to say that as of today we have also acquired KE Software in Australia. So that makes us, I think, the largest software vendor for museum systems in the world.

When I read my favorite magazines, I read a lot about the internet of things. And when I think about the internet of things, I’m not immediately thinking of my coffee machine talking to my laundry machine; I’m thinking more of museums. And I think that, if we think about it slightly differently, you can talk about the internet of objects. I think it’s one of our tasks to make the internet of objects possible, to connect information and digital information about objects in museums, archives, and libraries. I think we have to get out of our silos. There are already some interesting projects such as -- I think we can do this on a much bigger scale. One of the steps was for us to team up with the different software companies and start this process.

I’d like to thank everybody for coming to the conference. I am really proud to be the global sponsor this year. I wish you all a very successful but also pleasant conference here in Baltimore.

Thank you.

[Applause]

>> Rich Cherry: This program is so long. Hi. I’m Rich Cherry, the Deputy Director of the Broad Art Foundation in Los Angeles and also Co-Chair with Nancy of Museums and the Web. Normally by this time it feels like we’re just getting started, but this week, starting off with a full-day session on e-mail archiving and art museums and then some fabulous workshops as well as art experiences last night, I’m a little bushed already. There’s going to be lots of caffeine.

Also, to motivate us up here for this conference, it’s going to be really exciting for me to listen to Eric Brockmeyer. Eric is a designer specializing in digital fabrication, parametric design, and making things. He is currently a lab associate at Disney Research Pittsburgh. He leads Digital Media at the School of Architecture at CMU and also runs his own company, 7Bit Design. He studied physics and art at Colby College before working for an orthopedic device manufacturer. He attended the architecture program at Ball State University, where he worked as a fellow for the Institute of Digital Fabrication. As a Master’s student in Tangible Interaction Design at Carnegie Mellon University, he focused on interactive fabrication, interactive projections, and organic materials integrated with lighting and touch-sensing technologies.

I think some of the things he’s going to talk about today are things we can see in the future coming into the museum space as potential ways of interacting with our patrons, but maybe not for a while. But let’s see.

Come on up, Eric.

[Applause]

>> Eric Brockmeyer:   Thank you very much, Rich. It’s an honor to be here. It’s a great room. I’ve not spoken to an audience quite this size before, so it’s pretty exciting for me.

As Rich said, I’m a member of Disney Research Pittsburgh. It’s an interaction design and research group, placed within the larger Disney Research environment. Disney Research started about six years ago now as a forward-looking research group inside of Walt Disney Imagineering. So we work and publish within an academic context and release our papers mostly to HCI, human-computer interaction, conferences. We do a little bit more than that: we also have computer graphics, computer vision, robotics, and RFID groups within Disney Research.

We’re located at Carnegie Mellon University in Pittsburgh. We’ve been there for about five, six years now.

Again, our group focuses mostly on novel displays, sensors, and interaction techniques. Most people ask when they see the work: Why Disney? I think the important driving factor behind our work is not the technology; it’s the experience. We’re trying to make magical experiences. We work for a company that has spent years taking technology and hiding it backstage; it’s got that drama background. So it might relate fairly well to some museum interactions as well.

Everything we do requires a story, a Disney story. We’re constantly spinning our work into a pitch that reads as a story so people will look at our work and say, ok, it’s very technology‑driven, really interesting. But when we’re developing in‑house, we have to pitch to each other, Disney proper, as a story. It’s a great driver for us.

Our group is made up of a diversity of engineers, scientists, and artists. It’s really the only way to push the work that you’ll see. I have an art and design background, but I work with electrical engineers, mechanical engineers, and computer scientists. It’s really interesting. The artists and designers often bring a perspective that the hard scientists just don’t see right up front. I think this is a model that’s starting to be accepted outside of research groups, probably in this community as well, since you work directly with artists and art. It’s one of the things that, when we present this work at more academic and more scientific conferences, can be a shock to people.

So, like I said, the big idea for our group is to make magical experiences. To achieve that, our smaller sub-idea is to make everything interactive. This falls in line with pervasive computing, ubiquitous computing, the internet of things, all of those big ideas about the world becoming more interactive. We are looking at it as a means to make magical experiences, but we’re right alongside big companies like, you know, Google, Microsoft, and other software and hardware companies who are driving those same ideas.

I have broken the talk up into these subsections. The first one is Making Objects Listen. It’s kind of this group of sensors and really just one big sensor. And one of the things that you’ll notice throughout the work is we cycle between the story and the technology and the application. And often where we start developing our research ‑‑ we don’t have a clear idea exactly of where we’re going to end up. And we have this idea of serendipity, where we’re going to stumble upon some solution that we didn’t expect at the beginning. It’s not that we’re looking for that serendipitous moment but it just happened, and you have to stay open to those.

This is Touché. It’s a smart sensor for conductive materials. The technology is a swept-frequency capacitive sensor. Lots of touch sensors and buttons in museum exhibitions are just single-frequency capacitive buttons; this is more intelligent than that. We’re scanning across around 200 different frequencies, which creates a capacitive profile for an object or a person or a plant. And then as another object is added, like a human hand, it changes that profile, and we can read something into that. We use a machine learning algorithm to derive some information from that and break it down into gestures.

So this is the sensor in action. It’s really convenient. It’s a single wire. It just plugs into the back of this door handle, in this case. This has already gone through a training procedure where we build in some calibration and teach the classifier what each gesture looks like.
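
As a rough sketch of that pipeline -- not the actual Touché code, and with a hypothetical read_profile() standing in for the sensing board -- each swept-frequency profile can be treated as a roughly 200-dimensional feature vector and handed to an off-the-shelf classifier in Python:

    # Sketch: classify touch gestures from swept-frequency capacitive profiles.
    # Assumes each profile is ~200 readings, one per excitation frequency;
    # read_profile() is a hypothetical stand-in for the sensor hardware.
    import numpy as np
    from sklearn.svm import SVC

    N_FREQS = 200

    def read_profile():
        # Hypothetical: return one sweep from the sensing hardware.
        return np.random.rand(N_FREQS)

    # Training: capture labeled sweeps for each gesture during calibration.
    gestures = ["no_touch", "one_finger", "pinch", "grasp"]
    X, y = [], []
    for label in gestures:
        for _ in range(20):               # 20 example sweeps per gesture
            X.append(read_profile())
            y.append(label)

    clf = SVC(kernel="rbf").fit(np.array(X), np.array(y))

    # Runtime: classify each new sweep into a gesture.
    print(clf.predict(read_profile().reshape(1, -1)))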

Once we developed the technology, the fun part wasn’t really building the hardware and testing it and sending it back and going through iterations of the board. It was getting a chance to take this and stick it on to random stuff in the lab, dunking our hands in water and playing with little fluid things, figuring out where this could be useful for us.

Touché can also be an on-body sensor. It’s a wrist-worn device where you could use the Touché board to turn the volume up on your phone by making a gesture, like a thumb repetition, teaching the board that this is the gesture for answering my phone -- you know, if you have your earpiece or something like that. It’s moving the input away from the mouse, the keyboard, even the Kinect-style free-air sensor, and placing the input device into the environment and into the objects of our environment.

So here’s a nice little chart showing some of the different capacitive profiles for each of these objects: a door handle, a little mobile device where you can start using the case of your mobile device as an input device, a person and a bucket of water.

We stumbled upon, in this case, the most interesting application, which was hooking this up to plants. It happens to be a very nice Disney story as well. We love talking objects: plants, candelabras. You know, an inanimate object that talks; that’s a win.

We hooked the sensor up to these plants and were able to get these really nice interactions. We developed this project that went to the graphics conference last year. So we coupled this with some nice visualizations in OpenGL, a little ghost routine. There’s a piece of acrylic behind the plants. As you touch the leaves or the stem of the plant, the sensor knows where you are touching, and it creates these nice sounds and visualizations alongside it.

We worked with a company called Studio NAND, from Berlin, to do the visualizations and the sound processing. It was a really successful installation.

This plant worked really, really well. You could play music along this. You can tell how far out you are on each of these stems, create these really beautiful sounds and patterns.

The other one that was really good was the bamboo. Bamboo has these breaks along the stem, and you could get a very precise location of your hand position on the bamboo because those breaks act like a little resistor-capacitor circuit. We simulated it using off-the-shelf hardware. Here’s the bamboo.

So that’s one of our, I guess, really the only big sensing project we’ve had for a while. It’s been around for a couple of years now. We’re still using it. We use it on other projects. It’s been a really successful piece for us.

Going into it, we didn’t start this one with the story let’s make plants talk. We kind of circled around to it. We started with let’s make everything talk or let’s make everything listen, rather.

This next section describes giving a little bit more voice to objects. We’re going to start with traditional touch screens. This technology is called electrostatic vibration for screens. It’s a really boring name. It used to have a much more exciting name. Then we hit some trademark issues and kind of pulled back to electrostatic vibration for screens. There you have it.

Basically what we’re doing is injecting a high‑voltage AC signal into a traditional capacitive touch screen. What that allows you to do is to create these synthetic frictional forces on the surface of the screen.

So as a user slides his or her finger on the screen, it catches at certain times when we crank up the voltage or change the frequency or the amplitude of that signal, so we can create different sensations on that finger. Typically your finger sliding on this glass screen has no sensation, and we’re trying to bring something more to that.
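
As a toy illustration of that idea -- the numbers, the texture map, and the position-to-amplitude mapping are all assumptions, not the actual system -- the drive signal is just an AC waveform whose amplitude changes with where the finger is:

    # Sketch: map finger position over a grayscale "texture" to the amplitude
    # of a high-voltage AC drive signal. All values are illustrative.
    import numpy as np

    texture = np.random.rand(64, 64)       # 0..1 roughness map (stand-in image)
    V_MAX = 100.0                          # peak drive amplitude, arbitrary units
    F_CARRIER = 250.0                      # carrier frequency in Hz (assumed)
    SAMPLE_RATE = 8000

    def drive_samples(finger_x, finger_y, n_samples=80):
        """One short block of drive-signal samples for a finger position
        given in normalized screen coordinates (0..1)."""
        row = int(finger_y * (texture.shape[0] - 1))
        col = int(finger_x * (texture.shape[1] - 1))
        amplitude = V_MAX * texture[row, col]   # rougher pixel -> stronger friction
        t = np.arange(n_samples) / SAMPLE_RATE
        return amplitude * np.sin(2 * np.pi * F_CARRIER * t)

    # A finger sliding across the screen would call this repeatedly:
    print(drive_samples(0.25, 0.5)[:5])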

We created some demos where you’re able to sense textures. If your finger is static, you feel nothing; it’s only frictional forces, so you have to feel your finger gripping more or gripping less as it moves. It’s, you might say, competitive with mechanical vibration, but it affords a different type of interaction.

We created this mobile device where the case itself is the ground for the high‑voltage signal so it’s all embedded in the screen. I say high‑voltage. You’re not getting shocked. It’s high voltage. It creates these fun sensations on your finger.

We also added a camera to the back to try to be able to feel textures. We’ve got the live video feed, and we kind of abstract that video feed. You can feel these really rough sensations on the screen.

And we’ve done some studies with the blind. It’s still in process, so that work is ongoing. The image needs to be abstracted to the point that you wouldn’t recognize it if you translated it to some RGB values. We’re getting closer to a more useful application for the blind.

The 2D screen‑based interaction was interesting. We really enjoyed the project. We pushed it, I think, as far as it can go. It was time to break free of the screen and to see what we could do with tangible objects.

This is REVEL, which is a better name, but it stands for reverse electrovibration. At the top you’ll see the electrovibration screen. We’re reversing the action: we’re placing the signal on the user’s body and grounding through an object with a conductive coating and an insulating layer. Same idea, same technology, but it allows us to augment the environment, add touch to objects that are around us.

We were able to transition from the screen and break out. As you’re touching and sliding along the screen, you get that sensation; you can pull it up and actually feel it along the objects as well. So you’re feeling certain points on the object and getting these different haptic sensations. We coupled this with depth cameras. We do that a lot. We use 3D cameras -- not as the driving technology behind our work, but we augment a lot of our work with the Kinect and other 3D depth cameras.

One of the nice things about this project, REVEL, was that we were able to find nice conductive materials off the shelf. This was a painting embedded with a conductive coat of paint. As you touch the objects, you get different sensations. And here we weren’t driving the sensations entirely digitally; they weren’t changing only because of a change in that digital signal. It was also a change in the thickness of the materials. So where the artist laid up extra coats of the conductive paint, you’d have a stronger signal, and elsewhere you’d have a weaker area. So it was a nice blend of the material properties of this copper-based conductive ink and our technology.

So we took that. You can see there’s kind of this progression from screen to object to human. So we decided to use that same technology to drive a signal which transmits audio.

So this is Ishin‑Den‑Shin. This was a project that we showed last year. What’s happening is a user is able to record an audio message into the microphone. The microphone is also providing the signal, the high‑voltage audio signal. And then as the user rubs somebody else’s ear, you can hear this hidden message that they’ve recorded into the microphone. So it’s pretty neat. Pretty intimate interaction. It’s great.

[Laughter]

It was an interesting experience in the lab. For two weeks we were just rubbing each other’s ears.

[Laughter]

I tried explaining it to my wife. I think she got it.

So, yeah, just like I said, it’s kind of like REVEL, but we decided to throw that onto the ear.

It was decided, and probably rightly so, that throwing that high-voltage signal right into somebody’s ear, next to their brain, isn’t a great idea for a museum installation. So we went with a much more innocuous application where we put a machined aluminum ear on a pedestal, on this little resonant box. You can listen to the message through that little box by rubbing the ear. That was our abstraction.

So the video looks cool. We never actually released it. That was probably way too much for whoever to handle.

Like I said before, we use these depth cameras quite often. One of the things that we found is that there’s very little haptic feedback that you can get from them. So we decided to try to create what we call free-air haptics: the experience in the air, in open space.

So we’re doing a lot of these things where we’re tracking, waving our hands, clicking buttons, doing whatever with our hands, but we don’t get any sort of feedback. If I try to click a button, there’s no moment when I really know I’ve clicked that button except for the visual feedback. There’s something satisfying about pressing down -- oops -- on the touchpad and getting that click, and coupling that with whatever visual feedback you’re getting on screen.

So we developed this project called AIREAL. It’s essentially a vortex gun. It creates vortices of air by firing these speakers at the same time, compressing the air in the chamber and forcing it down this nozzle. We can track users with the depth camera -- that’s the camera on the device there -- and pan-tilt servos let us move it around.
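
The aiming part is simple geometry. A minimal sketch (the coordinate frame, servo interface, and offsets are assumptions, not the actual device firmware) converts a 3D hand position reported by the depth camera into pan and tilt angles:

    # Sketch: aim a pan/tilt nozzle at a 3D point reported by a depth camera.
    # Assumes camera and nozzle share an origin, x = right, y = up, z = forward;
    # real hardware would need a calibration offset between the two.
    import math

    def aim_at(x, y, z):
        pan = math.degrees(math.atan2(x, z))                      # left/right
        tilt = math.degrees(math.atan2(y, math.hypot(x, z)))      # up/down
        return pan, tilt

    def fire_vortex(pan, tilt):
        # Hypothetical stand-in for moving the servos and pulsing the speakers.
        print(f"pan={pan:.1f} deg, tilt={tilt:.1f} deg -> fire")

    # A hand tracked 0.3 m right, 0.2 m up, and 1.5 m in front of the device:
    fire_vortex(*aim_at(0.3, 0.2, 1.5))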

Here it is. We pumped in party smoke for visualization, which was super fun to film and to photograph. It was pretty neat. Then we started to develop experiences. These kind of wrote themselves a little bit.

The vortices had a low-frequency pulse -- you know, we had to give the speakers time to compress and decompress the chamber -- so a butterfly flapping on your hand fit really well with what the technology made possible and the sensations it allowed.

Button presses and swipes, like I said, we used it to generate feedback for this mobile device. An iPad where you’re scrolling through iTunes and getting your click and scroll feedback.

Then we did a little gaming application using Unity. We built a quick little game. You can feel these seagulls moving around you in free space, more immersive gaming experience.

Again, this was the experience that we didn’t know we were looking for or expect. We had that story -- we wanted to create free-air haptics, we knew that -- but we again stumbled on an interesting application: augmenting the environment. So we took this device and, you know, if you can imagine this in your living room, we were originally just thinking it was going to be pointed at the people. But if it’s aimed at the curtains and you’re watching a scary movie and you get this ripple passing along the curtains, that’s a magical experience.

[Laughter]

Scary but magical. It was something we hadn’t thought about until we had this on in a room. We were like, oh, the paper’s moving across the table. It just worked.

This next project is a little bit more of a straight interaction project. It’s really cool. It utilizes pico projectors. It’s called SideBySide. These are two pico projectors that are augmented with infrared LEDs. There are two infrared LEDs and a white LED instead of the typical RGB channels in a small pico projector. There was a filter that made it look green. I say white here, but you’ll see they’re all green.

Basically we created two information streams. We created a visible-light stream, which shows the actual content that the users see, and then we’ve got our invisible infrared stream, which the cameras on each of the projectors can see. And there we’re presenting AR tags, those augmented reality tags.
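
The real system projects AR tags in the infrared channel; as a much-simplified sketch of the tracking idea (not their implementation), the IR camera frame can be thresholded and the brightest blob’s centroid used as the other projector’s position:

    # Sketch: find the centroid of the other projector's IR marker in a camera
    # frame. A simplification of the AR-tag approach described in the talk;
    # written against OpenCV 4.x (pip install opencv-python).
    import cv2
    import numpy as np

    def marker_centroid(ir_frame):
        """ir_frame: single-channel 8-bit image from the IR camera."""
        _, mask = cv2.threshold(ir_frame, 200, 255, cv2.THRESH_BINARY)
        contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                       cv2.CHAIN_APPROX_SIMPLE)
        if not contours:
            return None
        blob = max(contours, key=cv2.contourArea)     # largest bright region
        m = cv2.moments(blob)
        if m["m00"] == 0:
            return None
        return (m["m10"] / m["m00"], m["m01"] / m["m00"])

    # Fake frame with one bright square standing in for the projected tag:
    frame = np.zeros((240, 320), np.uint8)
    frame[100:120, 200:220] = 255
    print(marker_centroid(frame))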

So these two mobile projectors can speak to one another and convey their position and some other information about the interaction. Here you see the cute little bunny. Not a Disney character. We don’t use Disney characters. So she ‑‑ did he get the flowers already? Yeah. He’s a cutie. And there you can see at the bottom, the actual live input to the cameras on each of these ones.

And the boxing game. Each character has control of the boxer. They can push a little button to punch. Yeah. It was a fun demo to do. When kids play with these things, they just go like that really fast. I guess it depends on the user.

We did the typical user interaction scenario, built folders, sharing things. So you can imagine these in a more mobile context where your cell phone, your mobile phone, has a projector and you can share content with a user by projecting on to the wall. That’s a feature that might be here.

I’ll skip the last one.

This is a collaborative gaming environment where one’s holding the nest and the other’s holding the mama bird and they’re pushing the baby bird into the nest.

Another area of research that we’re interested in is rapid manufacturing, 3D printing, all of those buzzwords. We build a lot of devices in-house. We had this idea of making everything interactive, and there’s maybe an ability to provide interactive objects outside of the typical device manufacturers. So we could put out these objects that people could build themselves, or that could be built in parks, whatever: rapidly fabricated, interactive objects. So it was looking more at that front end of the development.

We used an optically clear photopolymer in a series of projects called Printed Optics. These are 3D printed chess pieces with embedded fiber optics. We printed the light pipes and the shapes around them. We added an infrared-reflective tag to put them onto the old Microsoft Surface -- but homemade, so a touch-surface screen with the camera embedded in the stand. It was pretty fun.

This is a fully 3D printed piece. We can channel light through it. That’s an animation overlay, the light bending around the corner there.

The way that we did this was through misusing the materials in the 3D printer. This optically clear material is a standard off-the-shelf material; it’s what we used for the core. We wrapped support material around those pipes. Support material is typically washed away and removed from the print, but we wrapped it around these pipes and found that it had a slightly different index of refraction, which allowed us to channel light through an enclosed, embedded object.
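
A back-of-the-envelope way to see why even a small index difference guides light (the index values below are assumptions for illustration, not the printer’s measured materials) is the critical angle for total internal reflection:

    # Sketch: critical angle for total internal reflection in a printed light
    # pipe. Index values are illustrative, not the actual printer materials.
    import math

    n_core = 1.47      # assumed: optically clear photopolymer core
    n_clad = 1.40      # assumed: support-material cladding (slightly lower index)

    critical = math.degrees(math.asin(n_clad / n_core))
    print(f"Rays hitting the wall at more than {critical:.1f} deg "
          "from the normal stay trapped inside the pipe")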

The resolution of the printer is really good -- it’s a high-end 3D printer -- but it’s not at the point where you’re going to be able to build fiber optics that will send data across the ocean or whatever. But we are able to make small objects: little embedded objects, mobile objects, toys, that type of thing.

This is a dPad we created, fully embedded, printed on the printer. We pause the print, drop in these electronic components, and create essentially super-robust buttons. It utilizes reflections: an embedded infrared LED, with phototransistors receiving the signal from that IR emitter. As you put your finger down on top of this clear 3D print, it reflects more or less light onto the sensor. Some touch screens work this way as well: infrared LEDs with an infrared camera at the bottom.
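
As a minimal sketch of that sensing loop (read_adc() is a hypothetical stand-in for the phototransistor reading, and the threshold is arbitrary), each button is just a baseline-plus-threshold test:

    # Sketch: detect a press on a reflective IR button. A finger above the
    # clear print reflects more IR back to the phototransistor, raising the
    # reading above its no-touch baseline.
    import random
    import statistics

    def read_adc():
        # Hypothetical hardware read; returns a 10-bit value.
        return random.randint(80, 120)

    baseline = statistics.mean(read_adc() for _ in range(50))   # untouched level
    THRESHOLD = 30                                              # assumed margin

    def is_pressed():
        return read_adc() > baseline + THRESHOLD

    print("pressed" if is_pressed() else "not pressed")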

We also created a series of lenses, which looked like this. We did embedded LEDs; these are LEDs with lenses where we add a secondary lens. It was more a study to see what we could do with the optics of the light.

And this is us going through that process of pausing the print, dropping these pieces in, and building it up. This is sped up. This is not realtime.

[Laughter]

It takes a little bit longer. We, again, stumbled upon a really elegant and useful solution for us, and that was to create this really polished internal surface. Typically with these 3D printed objects, you see the steps, the contours of the print layers stacked up, and you couldn’t go in later and polish these because they were completely enclosed objects. One time I accidentally left the 3D printer running for too long and got this extra layer of material drooping down onto the surface. We found that those lenses were much smoother, much closer to the hemispherical input geometry. We found that if we raised the printer up, deposited material from a height, and let it drip down onto the surface, it would create surface polishing. That was, again, a little more of an engineering find, but it was one of those things where, if you just continue to crank and to make mistakes -- that idea of failing forward that Nancy mentioned, which is what we do all the time -- you have to be on the lookout for those moments.

We were able to create light bulbs, which was completely not a research part of the project. But the nice thing about working where we do is they let us go out and explore, take these ideas and these technologies, enter design competitions, send stuff to art shows, and do these things. I think it’s really important for a research group to keep a nice balance between art, design, and the hard sciences. It’s one of the really great things about working where we do.

I’ll try to get through this more quickly. I realize I’m running low on time. I want to leave an opportunity for you to ask questions.

We then took this and really retrofitted it to the Disney story. It’s like, ok, we’ve got these really great 3D-printed light pipes; let’s make some cute toys. That’s what we did.

This is Papillon. It’s a project where we’re building eyes for toys and really small characters. The characters are Beep, Iggy, and Boop, and they have their nice, interesting personalities. One of the interesting parts about these eyes was trying to develop pixel arrangements appropriate for spherical shapes. We found a pattern that works as both a 3D and a 2D packing scheme; there’s a direct mapping from 3D to 2D.

This is the 2D version; it’s tessellated. When it’s extruded up onto that spherical surface, you keep a similar pixel size, so the cross-sectional surface area is consistent across the entire sphere. With some packing schemes you might have elongation at the hemisphere, or fewer or more pixels. Because we were 3D printing these, we were able to create nice, unique packing schemes.
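
The talk doesn’t spell out their exact packing, but one common way to get roughly equal-area pixel sites on a sphere -- a stand-in for the idea, not Papillon’s scheme -- is a Fibonacci lattice:

    # Sketch: place n roughly equal-area "pixel" sites on a unit sphere using
    # a Fibonacci lattice. Illustrative only; not the project's actual packing.
    import math

    def fibonacci_sphere(n):
        golden = math.pi * (3 - math.sqrt(5))        # golden angle in radians
        points = []
        for i in range(n):
            y = 1 - 2 * (i + 0.5) / n                # even spacing in latitude
            r = math.sqrt(1 - y * y)
            theta = golden * i
            points.append((r * math.cos(theta), y, r * math.sin(theta)))
        return points

    # 300 pixel sites for a small printed eye:
    for p in fibonacci_sphere(300)[:3]:
        print(p)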

We went on to turn these into input devices as well. So we were able to augment these 3D prints. Essentially we were printing our own 3D touchscreens. So it’s both an input and an output device. There’s a piece of acrylic in there, a camera, and a Pico projector.

Quickly, just want to get through the last project, leave a little bit of time for conversation.

This is Paper Generators. This project, just like the last ones, kind of stood on its own. We had a guy come in who has an electrical engineering background and was really fascinated by this phenomenon of electrostatics: what you can do with these really small amounts of charge that build up on the surface of your body when you scuff your feet.

So we’re synthesizing this. This is called Paper Generators. There’s no battery involved at all. The interaction uses just the input energy from the little girl rubbing the Teflon across that aluminized Mylar, driving the display and turning it on and off. Basically what’s happening is we start with a piece of Teflon and charge it -- just rub it against your body or, you know, the carpet, whatever. That gets the Teflon charged. You then take the Teflon, place it between two conductors, or against a single conductor, and you start to just move those up and down.

So this allowed us to create a variety of input experiences. We could do some sliding gestures. We could do some knocking gestures. And there’s a really small voltage differential that’s created -- it gives you this really low, low, low current but fairly high voltage output. And one of the things this was good for was driving the displays and also LEDs.
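
A back-of-the-envelope model of why you get high voltage at tiny current (every number below is an assumption, purely for illustration) treats the charged Teflon and the electrode as a parallel-plate capacitor holding a fixed charge:

    # Sketch: fixed-charge parallel-plate model of the paper generator.
    # All numbers are illustrative assumptions, not measured values.
    EPS0 = 8.854e-12          # vacuum permittivity, F/m
    AREA = 0.005              # electrode area, m^2 (about 50 cm^2)
    SIGMA = 1e-5              # assumed charge density on the Teflon, C/m^2
    Q = SIGMA * AREA          # total stored charge, ~50 nC

    def voltage(gap_m):
        """Open-circuit voltage for a given electrode gap."""
        capacitance = EPS0 * AREA / gap_m
        return Q / capacitance

    for gap in (0.0005, 0.002):   # 0.5 mm and 2 mm separation
        print(f"gap {gap * 1000:.1f} mm -> about {voltage(gap):.0f} V")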

Here’s one of the sliding gestures.

The project was a parallel story and technology. So we said, ok, we’ve got this great idea, we can get a little bit of energy out, but we want to make this into interactive paper. How can we make a storybook more interactive without adding a battery?

So this is a series of LEDs, and that’s a little rectifier. It’s kind of big and clumpy and ugly. But, again, it doesn’t require any batteries or microcontrollers, any embedded intelligence. It’s just directly translating your body’s motion into light, really inefficiently.

[Laughter]

It’s not like, yeah, there are other energy harvesting technologies that would be much better for putting in your car or whatever. Sorry. I just bumped the cable there. There it is.

And so we also were really interested in adding motion. But there’s just so little energy that mechanical motion was not a good candidate. You’re seeing the last three strokes of like 50 rubs to get that little flag to go like that.

[Laughter]

So it was not really the right output. That’s one of the things that you have to find. But LEDs work really well.

This is an infrared LED on the paper generator side. You can use that to transmit some information to the computer, so you can imagine augmenting your storybook with some digital content. That’s what is shown here.

So that’s it. Those are the projects. Again, what’s great about working at Disney Research Pittsburgh is the opportunity to work with artists and engineers and scientists, to publish at academic conferences, to be at places like this, and to meet interesting people in all sorts of fields.

We move constantly between story and technology.

Again, one of the things I didn’t mention is very often we’re seeing technology that may be ‑‑ at home in a certain field and bringing it into our field or into some sort of different context, turning it on its head a little bit. That’s another one of our approaches.

All of this work is available on our website at disneyresearch.com. We are the Interaction Group. You’ll see the Robotics Group, and you’ll see work from our other labs: we have Disney Research Boston, Disney Research Zurich, Disney Research L.A. The work I showed today is from the Interaction Design Group at Disney Research Pittsburgh.

That’s it. We have time for a couple of questions I think.

[Applause]

Please, yeah, come up to the microphones.

>> Thank you very much. That was incredible. I have one burning question which is: When you’re rubbing the plants, do you have audio sound for us to share?

>> Eric Brockmeyer:   Yeah.

>> It would be great to hear some.

>> To your right.

>> Eric Brockmeyer: I usually talk right over it.

>> It looked like it sounded pretty cool.

>> Eric Brockmeyer:   There is audio. If you go to the website, you can find clips of the audio.

I could maybe answer another question while that plays.

>> What do the three toys do?

>> Eric Brockmeyer:   What do the toys do? They’re, again, coupled with depth cameras. One of the toys was watching the users to see if you were dancing. And the more you were dancing, the more that its eyes started to flash and get really excited.

[Laughter]

That was one of them.

One of them was a little alarm clock where you could swipe between screens. And the last one was an emotional little robot, so it would follow your hand around; get skeptical, scared, happy, show these different emotions. It was pretty fun to build those experiences. A great question.

I think this just has an audio overlay. This is the type of sound that was coming out, but this is a soundtrack for the video. We’ve got some footage on the website with the sound output. And we have more recently done a musical instrument application. In fact, if you search -- this is called Botanicus Interacticus -- there was a Yahoo! News article on us maybe a month ago, two months ago. You could see the actual snake plant as an instrument. Yahoo! News. You’ll find it, I think. Disney Research Pittsburgh.

>> [Question Inaudible]

>> Eric Brockmeyer:   That’s a great question. There’s lots of testing to make sure that the plants are happy. If you touch a plant enough, it’s not going to be happy.

When the projects leave our group, we’re able to talk about them because we publish this work and release them to academic conferences. But then once it goes into Disney proper, that’s kind of ‑‑ I don’t really know what happens to it. It goes into the machine.

Yes?

>> So much of what you have shown us is not yet commercializable. How much of what you’re doing is done as open source, the things you’re working on?

>> Eric Brockmeyer:   I can repeat it. How much of our work is open, available, open hardware, open software? Honestly, we’re fortunate enough to exist in a world where we’re able to take advantage of communities that share that type of thing. We share our work explicitly through the conference publications. The Touché board that you saw -- we couldn’t release the actual drawings for the board; that wasn’t available in the paper. But there was enough information that somebody came out and built a version of the Touché sensor and open-sourced it, and I think the more people that are using that, the better. It’s not our goal and our mission, but when stuff like that happens, it benefits all of us.

It’s a good question.

Great. I guess that’s it. Thank you so much. Enjoy the conference.

[Applause]

>> Rich Cherry:   There’s coffee and tea outside. Get some caffeine and then head off to your next round of sessions.

[The Opening Keynote presentation ended at 10:04 a.m.]