Emo-Capsule » Input Methods

Archive for the ‘Input Methods’ Category

EmoCapsule is off to SIGGRAPH

Tuesday, July 8th, 2008


It has been a long while since we have posted anything on our blog, so here is an exciting update with great news: our group has been ACCEPTED to take part in SIGGRAPH 2008, being hosted in LA. We will be presenting the week of August 11 - 15 at the Los Angeles Convention Center. This is big news. For those unaware, SIGGRAPH (short for Special Interest Group on GRAPHics and Interactive Techniques) is an international conference and exhibition on computer graphics and interactive techniques. In 2005, 25,000 people attended the conference, so this is something to be pumped about.

EmoCapsule is a study of interactivity. Participants influence the installation by inputting their emotions on the project website, which provides users with emotional statistics and trends based on the frequency, location, and weather information collected. The installation is dynamically updated to reflect the dominant emotional mood.
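To make the "dominant emotional mood" idea concrete, here is a minimal sketch of how it could be computed from website submissions. The data shape (a flat list of emotion words) is my own assumption for illustration, not our actual schema:

```python
from collections import Counter

def dominant_emotion(submissions):
    """Return the most frequently submitted emotion word.

    `submissions` is a list of emotion words collected from the site;
    ties are broken arbitrarily by Counter.most_common.
    """
    if not submissions:
        return None
    counts = Counter(submissions)
    return counts.most_common(1)[0][0]

submitted = ["happy", "sad", "happy", "calm", "happy", "sad"]
print(dominant_emotion(submitted))  # -> happy
```

The same tally could also drive the statistics-and-trends views, e.g. by bucketing submissions by location or weather before counting.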

Visitors interact with the current emotional state of the EmoCapsule installation - depicted through sound, text, and colour. Participants move and catch emotion words using their own silhouette and other objects. Participants make loud noises, triggering the installation to emit reactive sounds and display new words. These form a sort of conversation between the user and the installation space.

So if you will, please check out our site and/or the SIGGRAPH website. With love, JSHAW and the EmoCapsule Team

Technical Difficulties

Friday, August 17th, 2007

From what I can tell (and what we’ve discussed), one of the bigger technical challenges we will face is keeping track of where each user is within the space. Assigning emotions to an id in a database is fairly straightforward, as is projecting emotion rings around a person from above. But I think knowing which person belongs to which id (so as to project the correct emotions around the correct person) might be tricky.

Some ideas for possible solutions: colour tracking or some other video-based tracking (probably our best bet; it would likely require more cameras to view from the side, and might affect how we choose to handle lighting within the installation), or giving users something to help us track them (something with a specific colour, RFID, sound…).
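To show what the "which person is which id" bookkeeping might look like, here is a rough Python sketch (names, structure, and the 80-pixel jump limit are all made up, not from our code): match each blob centroid detected in the current frame to the nearest previously known position, and mint a new id when nothing is close enough.

```python
import math

def assign_ids(tracked, detections, max_jump=80.0):
    """Match new blob centroids to known person ids.

    `tracked` maps id -> last known (x, y); `detections` is a list of
    (x, y) centroids from the current video frame.  Each detection takes
    the id of the closest previous position within `max_jump` pixels;
    unmatched detections get fresh ids.
    """
    assignments = {}
    free_ids = dict(tracked)           # ids still available this frame
    next_id = max(tracked, default=-1) + 1
    for (x, y) in detections:
        best, best_d = None, max_jump
        for pid, (px, py) in free_ids.items():
            d = math.hypot(x - px, y - py)
            if d < best_d:
                best, best_d = pid, d
        if best is None:               # nobody nearby: a new person
            best = next_id
            next_id += 1
        else:
            del free_ids[best]         # each id matches at most once
        assignments[best] = (x, y)
    return assignments
```

This breaks down when two people cross paths, which is exactly where the colour markers or RFID could disambiguate.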

That’s pretty much all I’ve got, but hopefully one of you will have a better idea. Please post anything you think of.

PS. My Arduino board arrived and I made an LED blink.

Messy sketches

Monday, August 13th, 2007

I was talking to Eric about the limitations of accelerometers — we’d only be able to use a few suits, which would severely limit the fun of the installation. One possible solution I thought of was to divide the installation into phases. In the first phase, users would be given instructions to put on the accelerometers and go in. They would see various poses displayed on the screen to represent emotions, mimic those poses, and acquire the emotion rings seen in Eric’s sketch. Once they’d had their fun and acquired many rings, they would take off the suit and leave - sort of. They would go out the exit door and through some kind of hallway (I had visions of dark hallways with graffiti-style light following users as they went through, using another webcam or Arduino sensors, which could also light up some floor panels or something cool like that). Then bam! The user enters a whole other installation… they still have their emotion rings (somehow; this we’d have to sort out). On the wall, since we probably wouldn’t have to show the emotion poses, we could instead display the whole web-inputted network of emotion compatibilities. It would be cool if people could visit the site, upload a photo of themselves, and choose their emotions - then their circle would be generated and interact with other people’s representations (much the same way the live-installation ones do on the floor).

I sketched it up very roughly as I was thinking it through so it might not be that clear, but I’m including it anyway. Ask me any questions.

Installation sketch

Obviously it’s just one of many possible solutions so feel free to tear it apart or suggest other directions!

Re: Eric’s Cool Sketches

Monday, August 13th, 2007

I think the idea of the rings around the user on the ground is awesome, and probably not too tough either from what I’ve been looking at (haha, relatively…). I think we could definitely create a simple installation around this, or work it into something more complex, although I even like the idea of just simple white panels, with all of the focus being on the floor. The biggest unresolved issue is how we assign an emotion to each person. Is it based on their movement? Something they touch, or type in, or say, or how they position their body? Body positioning could be difficult, but super cool if it worked. Alternatively, we could define poses for each emotion and flash/display them in some way so that people would know to “do a pose” in order to acquire an emotion ring, which would be totally doable with current accelerometer code. Or it could be based on where they walk initially when they enter, or the colour of their clothes? Or some combination of these, or something else entirely! What are you guys leaning towards? I’d really love to work in the accelerometers, as I think the pose detection is still sort of novel and it works surprisingly well. I’m also keen on RFID and/or touch/light/sound sensors with Arduino in some kind of subtle way (embedded into the floor or wall panels, maybe). It would be cool if we could use some combination of things (it would also help our project come across as less “simple/easy”), but I think combining them is easier said than done.
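For the "do a pose to acquire an emotion ring" idea, the classification step can be very simple: compare the accelerometer reading against a small set of pose templates and take the nearest one. The template values below are placeholders I invented for illustration; real numbers would come from calibrating the actual suits.

```python
import math

# Hypothetical pose templates: steady-state accelerometer (x, y, z)
# readings in g for each emotion pose.  Placeholder values only.
POSES = {
    "joy":   (0.0, 0.0, 1.0),   # e.g. arms up, sensor flat
    "anger": (1.0, 0.0, 0.0),   # e.g. arm thrust forward
    "sad":   (0.0, -1.0, 0.0),  # e.g. arms hanging down
}

def classify_pose(reading, tolerance=0.5):
    """Return the emotion whose template is closest to `reading`,
    or None if no template is within `tolerance` (Euclidean distance)."""
    best, best_d = None, tolerance
    for emotion, template in POSES.items():
        d = math.dist(reading, template)
        if d < best_d:
            best, best_d = emotion, d
    return best
```

The tolerance keeps the system from awarding a ring while someone is just walking around between poses.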

What does everyone else think?

Safari Tech Books Online!!

Sunday, August 5th, 2007

Safari: if you haven’t already heard about/used it, it’s amazing. Many, many technical books available to read online. If you can stand reading off of your computer screen, it is well worth it.

I’m currently reading a book called Physical Computing: Sensing and Controlling the Physical World with Computers by Dan O’Sullivan and Tom Igoe. I think it could be verrrrry helpful for getting into the microcontrollers and sensors and stuff.

I know I already posted about Arduino, but I found a booklet online that describes how to get started using it. I know I’ll end up going through it, and thought some of you might also be interested. I also found some ultrasonic range finders that are basically like bats (haha). They can detect the exact distance of objects from a couple of centimetres to several metres away. They are a bit more expensive than other sensors (in the $30-each range) but could come in handy in a low-light environment. I’m in the process of tracking down an Arduino microcontroller and some sensors that I can start playing with. I will post what I find!
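The maths behind those ultrasonic sensors is just echo timing: the sensor reports how long the ping took to bounce back, and distance is half the round trip at the speed of sound. A sketch of the conversion (in Python rather than Arduino code, purely to show the arithmetic):

```python
def pulse_to_distance_cm(echo_us, speed_of_sound=343.0):
    """Convert an ultrasonic echo time to a distance.

    `echo_us` is the round-trip echo duration in microseconds;
    `speed_of_sound` is in m/s (roughly 343 at ~20 C).  Returns
    the one-way distance to the object in centimetres.
    """
    seconds = echo_us / 1_000_000
    round_trip_m = seconds * speed_of_sound
    return round_trip_m / 2 * 100   # halve for one way, m -> cm
```

So an echo of about 5831 microseconds corresponds to an object roughly a metre away, which is the kind of range we’d care about for people in the space.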

Arduino and Myron

Friday, August 3rd, 2007

So after a WinXP reinstall, complete with a new version of Processing, WinVDig, and Myron, I’ve now got some of the Myron video tracking samples working with my webcam. Things worth noting: It is sooooo light-dependent. Granted, my camera isn’t very good, but I still think it might be tricky in a dark installation. I’m thinking about solutions.
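One direction I'm considering for the light-dependence problem, written here as a rough Python sketch rather than Processing/Myron code, is to threshold each frame relative to its own average brightness instead of a fixed cutoff, so the same logic works in a dim room and under gallery lights:

```python
def adaptive_mask(pixels, k=1.3):
    """Mark pixels brighter than k x the frame's mean brightness.

    `pixels` is a flat list of grayscale values (0-255).  Because the
    threshold is relative to the current frame, overall lighting
    changes shift the cutoff instead of breaking the tracking.
    The factor k = 1.3 is an arbitrary starting point to tune.
    """
    mean = sum(pixels) / len(pixels)
    threshold = mean * k
    return [1 if p > threshold else 0 for p in pixels]
```

It's no substitute for decent lighting, but it should at least degrade gracefully as the installation gets darker.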

In the meantime, I’ve been looking into Arduino, a sister project of Processing. I think it’s worth taking a look at the site. Essentially, they are relatively inexpensive microcontrollers that accept a wide variety of sensor input (temperature, light, movement) over a USB interface. The programming language is all open source and apparently works well with Processing. It sounds pretty good.

Thoughts on Installation Pt. 2

Wednesday, August 1st, 2007

Hannah and Group,


I do agree with the possibility of huge moveable objects creating the appearance of a cheap, half-assed final project (as we discussed during our meeting). I also agree we should remove that idea and go no further with it (the over-sized objects, that is).


The idea of having the emotion words float around/rain or trace the shape of the user’s shadow as they interact with the installation is definitely cool. The problem I see with this is that if or when large groups of people enter the installation, the whole bottom half of the screen will be left blank because of people’s shadows. To get around this we would have to restrict entry through the installation. As an art piece, I do not believe we should do this, because it will detract from the instant “wow factor” and possibly eliminate some of the users’ desire to experiment, explore, and push the limits of the installation (as discussed, we want users to push the boundaries of what we create). As long as we don’t have this feature throughout the whole installation, I believe the raining text would be a great addition; maybe set it up so only one or two people can interact with it on one panel of the installation near the entrance or exit.


As for ways to display the text, the idea of having the font colour match the meaning of the emotion is GREAT! I still think keeping the font size relative to the popularity of the word is a good idea (keeping in mind we will have to make sure that one word will not reach a font size of 99999999999999 and overtake the display).
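The popularity-to-size scaling with a cap is a one-liner; the specific numbers below (base size, step, maximum) are placeholders to argue about, not agreed values:

```python
def font_size(count, base=12, per_vote=2, max_size=96):
    """Scale font size with a word's popularity count, capped so a
    runaway word can never overtake the whole display."""
    return min(base + per_vote * count, max_size)
```

A logarithmic curve instead of a linear one would be another option if a few words dominate the counts.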


I know we discussed the different setups of the installation as either a walk-through or a circle-ish area where everyone just congregates. I think the best way to get the most out of the installation, allowing for maximum interaction and the least congestion, is to have an entrance and an exit with a medium-large space in the middle, for example -> ( ): enter through the bottom, exit through the top, and interact in the middle.


Hannah, is breaking the emotions up onto different walls, to try to make users feel the emotion of the installation, kind of what you meant? Having the emotions organically flow together will have an influence on the users anyway; the predominant emotion of the installation will hopefully create some sort of effect on the viewers. Breaking the emotions into separate walls could cause a battle between panels, with some very plain and boring while others are overly packed. If this is what we decide on as a group, I have no problem with it, as we are researching and developing to create the best project in the world! (I still like the idea of flowing text and passing a display across, say, 12 projectors in a very organic motion. I picture the program the prof from NYU did with the 6 screens intertwining an animation. It just looked cool, though I do understand that it has been done.)


I love the idea of tracking users through distinctive objects they hold or wear (would this be using fluorescent colours?). Tracking user paths through the installation to create inverted light/shadow graffiti would be cool and should be looked into. Another option: instead of random text floating around, while tracking user paths, use those paths to control the flow of the emotion words being displayed. Wherever the shadows created by the user were, that is where the words would flow. The more predominant the emotion (hence the larger the font), the more leeway the word has to travel outside of the user-created shadow, or vice versa.

I like the idea of temperature/touch/motion sensors a lot. For instance, using dance mats or accelerometers to alter the flow patterns of the display could be neat.


I also still like the original idea of standard text input through a keyboard/computer or phone.


I am for this: limit input to text and interpret it through interaction within the installation. I see this allowing comparison between the textual input and the way people act in the installation as the video captures images (such as the colours they are wearing, or their speed of movement).


Clean and clear installation: to achieve this, I feel the background should be one colour or a light gradient from top to bottom. The background colour could change depending on the overall emotion of the installation, but if we want people to focus on the emotional text and feel some sort of presence while in the installation, the emotion text should not be competing with other animated objects. Moving clouds, a shining sun, or other background illustrations would, I feel, become cumbersome and distracting.
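One way the background tint could follow the overall mood, sketched here with made-up emotion colours (the real palette is still to be decided), is a weighted average of per-emotion RGB values:

```python
# Placeholder emotion colours (RGB); the actual palette is undecided.
COLOURS = {
    "joy":     (255, 220, 80),
    "sadness": (60, 90, 180),
    "anger":   (200, 40, 40),
}

def background_colour(weights, colours=COLOURS):
    """Blend emotion colours by their current weights (e.g. vote counts).

    Returns a flat neutral white when there is no input yet, so the
    background stays a single calm colour rather than flickering.
    """
    total = sum(weights.values())
    if total == 0:
        return (255, 255, 255)
    channels = [0.0, 0.0, 0.0]
    for emotion, w in weights.items():
        for i, c in enumerate(colours[emotion]):
            channels[i] += c * w / total
    return tuple(round(c) for c in channels)
```

Because it is a single blended colour, it keeps the background one flat tone as argued above, while still letting the dominant emotion shine through.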


I was going to post this as a response to your post, Hannah, but it ended up being tremendously long and I felt it would be easier to read here. After typing and re-reading this, we may need another group meeting, or something of the sort. It seems that we are once again veering away from what we came up with in our previous group meeting.


Overall, I think having the user control the overall emotion of the installation is key. This can be done while still allowing them to experiment and push the limits of the project. As I saw it, having the large objects sitting in the installation and letting people play/experiment with them kind of took away from the “emotional aspect” of the installation, giving users the ability to alter the installation without knowing exactly what they were doing. Making users push the limit while still portraying their inner emotions to the project is definitely a challenge we will have to overcome.

Thoughts on Installation

Friday, July 27th, 2007

I’ve been considering some of the ideas we discussed at our in-person meeting (Jordan, Alicia, and I).

I think we need a really solid idea of how we want to tie everything together. Just because it’s feasible to have input through large, over-sized items doesn’t necessarily mean that the installation will make sense. I’m concerned that it will just end up a chaotic mishmash of direct input through a variety of large, tacky devices — which I’m not entirely opposed to, as I still think it might be fun.

Ideally, it might be more enjoyable if we could come up with ways that users can interact and influence the installation without necessarily moving around objects with sensors in them. For example, if we could use video and sound detection to  “sense” where in the room people are gravitating and then capture input from them (strictly text perhaps) we could translate this into less direct input.

Another idea I had: instead of mixing all of the emotion words together throughout the installation, it could be divided into the 6 (or however many we decide) themes (i.e. one wall per “basic emotion”). We could then detect emotions from users simply based on their location within the installation. To make this more interesting, we could trace the shadow of the users on each wall (perhaps using text or some other cool light graffiti in the shape of the person… the possibilities are endless) and give them the characteristics of that particular emotion.
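The location-to-emotion mapping itself is trivial once tracking gives us a position. A sketch under an assumed layout (six walls, a rectangular room sliced into equal strips; the dimensions and emotion order are placeholders):

```python
# Hypothetical layout: six basic-emotion zones across a rectangular room.
WALLS = ["joy", "sadness", "anger", "fear", "surprise", "disgust"]

def emotion_at(x, room_width=600.0, walls=WALLS):
    """Return the emotion zone for a tracked x position by slicing
    the room into equal vertical strips, one per emotion wall."""
    strip = room_width / len(walls)
    index = min(int(x // strip), len(walls) - 1)  # clamp the far edge
    return walls[index]
```

A real floor plan would probably use polygonal zones around each wall rather than strips, but the lookup stays this simple.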

Example: I come into the installation and go over to the “sadness” wall/area, which is displaying (in a super-cool-Eric-Chan way) all kinds of sad words, maybe using shades of sad colours and moving the words around in a sad way… whatever that is. A camera tracks my movement and projects my shadow (real time) on the wall amongst the text somehow — for just one idea, see Camille Utterback’s work. So now it’s almost as though the room is influencing the user, not just the other way around. As far as input goes… I’m not really sure. We could have users hold or wear some kind of object as they come in, or leave different objects (even something simple like cubes, spheres, etc. of different colours) around the installation that users can touch. There are also different input forms we can look at, like temperature/touch/motion sensors. We can also have the standard text input through keyboard/computer or phone — maybe even just limit the users to this and try to extract more emotion information through less direct methods, such as the video-captured images (the colours they are wearing, speed of movement… it wouldn’t have to be anything super complex). If we were to go this route, I think we could spend a lot of time creating and refining a really cool projection display. We could even explore things like connecting different users by their location, the mood words nearest them, the type of words they input, the colours they are wearing, the amount they are moving, etc. Or we might also track user paths through the room to create a sort of inverted light/shadow graffiti.

I definitely want the installation to be fun but I think it will be more impressive if the final presentation is coherent and “clean”. I think it might be more fun for everyone (kids and adults) if the main theme is slightly simpler but also less obvious, while still allowing (encouraging!) everyone to move around and try to manipulate/abuse the system.

Some Research on Emotions and Facial Expression-Ekman

Wednesday, July 11th, 2007

I’ve been doing some reading on emotions and facial expressions… Here are some PDFs I found relating to Ekman’s research. I haven’t had time to go through all the documents in detail, but they seem relevant and to hold useful information.

Basic Emotions CH. 3 - Paul Ekman

Facial Expressions CH. 16 - Paul Ekman

Facial Expression and Emotions - Paul Ekman


I have also started writing some stuff in a Google doc and shared it with everyone. If you haven’t got it, please let me know and I will make sure you do.


SEE you guys this weekend.

Research and Organization

Sunday, July 1st, 2007

Check out the csit server space: I put a bunch of stuff there, some articles and diagrams I grabbed from the books I found. Go snoop around. I sent out login information for it, so check your email.

Since I’m spending so much time working with the accelerometers, I’ve started spending my outside-of-work time looking into video processing and how this might be useful for deriving gestures. Over the next week, I’ll be posting my test files for these and the other stuff I’ve been working on.

I’m also trying to gather as much theoretical information as I can to help support our design decisions (when we actually make them) and how we will associate particular sounds/images/movements etc with words. I’m doing my best to compile this information on the csit server too.