Emo-Capsule » Emotions

Archive for the ‘Emotions’ Category

EmoCapsule.com is off to SIGGRAPH

Tuesday, July 8th, 2008

SIGGRAPH 2008

It has been a long while since we have posted anything on our blog, so here is an exciting update: our group has been ACCEPTED to take part in SIGGRAPH 2008 in Los Angeles! We will be presenting the week of August 11 - 15 at the Los Angeles Convention Center. This is big news. For those unaware, SIGGRAPH (short for Special Interest Group on GRAPHics and Interactive Techniques) is an international conference and exhibition on computer graphics and interactive techniques. In 2005 about 25,000 people attended, so being accepted is something to be pumped about.

EmoCapsule is a study of interactivity. Participants influence the installation by inputting their emotions at emocapsule.com. The website provides users with emotional statistics and trends based on frequency, location, and weather information collected. The installation is dynamically updated to reflect the dominant emotional mood.
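
For the technically curious: the "dominant emotional mood" boils down to a simple aggregation over the submitted emotions. Here is a rough Java sketch of that kind of tally; the class name, emotion words, and counts are placeholders for illustration, not our actual site code.

```java
import java.util.HashMap;
import java.util.Map;

// Toy sketch: pick the installation's dominant mood from tallied
// website submissions. Emotion names and counts are made up.
public class DominantMood {

    public static String dominant(Map<String, Integer> counts) {
        String best = null;
        int bestCount = -1;
        for (Map.Entry<String, Integer> e : counts.entrySet()) {
            if (e.getValue() > bestCount) {
                bestCount = e.getValue();
                best = e.getKey();
            }
        }
        return best;
    }

    public static void main(String[] args) {
        Map<String, Integer> counts = new HashMap<String, Integer>();
        counts.put("happy", 42);
        counts.put("sad", 17);
        counts.put("angry", 8);
        System.out.println("Dominant mood: " + dominant(counts)); // happy
    }
}
```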

Visitors interact with the current emotional state of the EmoCapsule installation - depicted through sound, text, and colour. Participants move and catch emotion words using their own silhouette and other objects. Participants make loud noises, triggering the installation to emit reactive sounds and display new words. These form a sort of conversation between the user and the installation space.
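
Again for the curious, here is a rough Java sketch of the "catch the falling words" logic described above: words drop through a grid, any word landing on a silhouette pixel counts as caught, and a loud-noise threshold spawns new words. The grid size, the threshold, and the mask here are invented placeholders, not the installation's actual code.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Random;

// Sketch of catching falling emotion words with a silhouette mask.
public class FallingWords {
    static final int W = 64, H = 48;                    // toy grid, not a real camera resolution
    static boolean[][] silhouette = new boolean[H][W];  // true where a person is detected
    static List<Word> words = new ArrayList<Word>();
    static Random rng = new Random();

    static class Word {
        String text; int x, y;
        Word(String text, int x) { this.text = text; this.x = x; this.y = 0; }
    }

    // Drop every word one row; report the ones that hit the silhouette.
    static void step() {
        List<Word> caught = new ArrayList<Word>();
        for (Word w : words) {
            w.y++;
            if (w.y < H && silhouette[w.y][w.x]) caught.add(w);
        }
        for (Word w : caught) {
            System.out.println("caught: " + w.text);
            words.remove(w);
        }
        words.removeIf(w -> w.y >= H); // fell off the bottom
    }

    // Loud sounds (level 0..1 from the microphone) spawn a new word.
    static void onSoundLevel(double level, String emotionWord) {
        if (level > 0.8) words.add(new Word(emotionWord, rng.nextInt(W)));
    }

    public static void main(String[] args) {
        for (int x = 20; x < 40; x++)                   // fake a silhouette blob
            for (int y = 20; y < H; y++) silhouette[y][x] = true;
        onSoundLevel(0.9, "joy");
        for (int i = 0; i < H; i++) step();
    }
}
```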

So please check out Emocapsule.com and/or the SIGGRAPH website. With love, JSHAW and the EmoCapsule Team.

Messy sketches

Monday, August 13th, 2007

I was talking to Eric about the limitations of accelerometers: we'd only be able to use a few suits, which would severely limit the fun of the installation. One possible solution I thought about was to divide the installation into phases. In the first phase, users would be given instructions to put on the accelerometers and go in. They would see various poses displayed on the screen to represent emotions. They would mimic those poses and acquire the emotion rings as seen in Eric's sketch. Once they'd had their fun and acquired many rings, they would take off the suit and leave (sort of). They would go out the exit door and through some kind of hallway. I had visions of a dark hallway with graffiti-style light following users as they went through, driven by another webcam or Arduino sensors, which could also light up some floor panels or something cool like that. Then bam! The user enters a whole other installation. They still have their emotion rings (somehow; this we'd have to sort out). On the wall, since we probably wouldn't have to show the emotion poses anymore, we could instead display the whole web-inputted network of emotion compatibilities. It would be cool if people could visit the site, upload a photo of themselves, and choose their emotions; their circle would then be generated and would interact with other people's representations, much the same way the live-installation ones do on the floor.

I sketched it up very roughly as I was thinking it through so it might not be that clear, but I’m including it anyway. Ask me any questions.

Installation sketch

Obviously it’s just one of many possible solutions so feel free to tear it apart or suggest other directions!

And another thing..

Monday, August 13th, 2007

We could incorporate a web component (and perhaps another component to the installation) by allowing users to define the compatibilities. So users can decide whether happy belongs with sad, or sad belongs with confused. If that makes sense?
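
To make that a bit more concrete, here is a rough Java sketch of how user-defined compatibilities could be stored as votes on emotion pairs. The class name, the voting scheme, and the example emotions are assumptions for illustration only.

```java
import java.util.HashMap;
import java.util.Map;

// Sketch: web users vote on which emotions "belong together".
// The pair key is order-independent.
public class CompatibilityVotes {
    private final Map<String, Integer> votes = new HashMap<String, Integer>();

    private String key(String a, String b) {
        return a.compareTo(b) < 0 ? a + "|" + b : b + "|" + a;
    }

    public void vote(String a, String b, boolean compatible) {
        votes.merge(key(a, b), compatible ? 1 : -1, Integer::sum);
    }

    // Positive score = users mostly say these two emotions attract.
    public boolean attracts(String a, String b) {
        return votes.getOrDefault(key(a, b), 0) > 0;
    }

    public static void main(String[] args) {
        CompatibilityVotes c = new CompatibilityVotes();
        c.vote("happy", "sad", false);
        c.vote("sad", "confused", true);
        System.out.println(c.attracts("sad", "confused")); // true
    }
}
```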

Re: Eric’s Cool Sketches

Monday, August 13th, 2007

I think the idea of the rings around the user on the ground is awesome, and probably not too tough either from what I've been looking at (haha, relatively...). I think we could definitely create a simple installation around this, or work it into something more complex, although I even like the idea of just simple white panels, with all of the focus being on the floor. The biggest unresolved issue is how we assign an emotion to each person. Is it based on their movement? Something they touch, type in, or say? The position of their body (which could be difficult, but super cool if it worked)? Alternatively, we could define poses for each emotion and flash or display them in some other way so that people would know to "do a pose" in order to acquire an emotion ring, which would be totally doable with the current accelerometer code. Or it could be based on where they walk when they first enter, or the colour of their clothes? Or some combination of these, or something else entirely! What are you guys leaning towards? I'd really love to work in the accelerometers, as I think the pose detection is still sort of novel and it works surprisingly well. I'm also keen on RFID and/or touch/light/sound sensors with Arduino in some kind of subtle way (embedded into the floor or wall panels, maybe). It would be cool if we could use some combination of things (it would also help our project come across as less "simple/easy"), but I think that's easier said than done.
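
For the "do a pose to get an emotion ring" option, something as simple as matching the current accelerometer reading against a few stored reference poses might be enough. A rough Java sketch follows; the reference values, the distance threshold, and the six-number reading format are all made up for illustration, not our actual accelerometer code.

```java
// Sketch: pick the closest reference pose (by Euclidean distance) for a
// flattened accelerometer reading; return null if nothing is close enough.
public class PoseMatcher {
    static final String[] POSES = { "joy", "anger", "sadness" };
    static final double[][] REFERENCE = {
        { 0.0,  1.0, 0.0,  0.0,  1.0, 0.0 },   // arms up
        { 1.0,  0.0, 0.0, -1.0,  0.0, 0.0 },   // arms out
        { 0.0, -1.0, 0.0,  0.0, -1.0, 0.0 },   // arms down
    };

    static String classify(double[] reading, double maxDistance) {
        String best = null;
        double bestDist = Double.MAX_VALUE;
        for (int i = 0; i < REFERENCE.length; i++) {
            double d = 0;
            for (int j = 0; j < reading.length; j++) {
                double diff = reading[j] - REFERENCE[i][j];
                d += diff * diff;
            }
            d = Math.sqrt(d);
            if (d < bestDist) { bestDist = d; best = POSES[i]; }
        }
        return bestDist <= maxDistance ? best : null;
    }

    public static void main(String[] args) {
        double[] reading = { 0.1, 0.9, 0.0, -0.1, 0.95, 0.05 };
        System.out.println(classify(reading, 0.5)); // joy
    }
}
```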

What does everyone else think?

Sketches Concept

Monday, August 13th, 2007

Hey guys, I’ve done some sketching of a concept I’ve been thinking about in regards to our interactive installation. Click the images to enlarge.

The idea is about compatibility. The user walks into the room and a halo surrounds them, showing different emotions or perhaps describing their profile. When one user approaches another, there could be an attraction or rejection response. If two emotions attract, then the halos of the two users become joined. The more users there are together, the more attraction and rejection there is.
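
Here is a rough Java sketch of how the attraction/rejection between two halos could be computed from the users' positions and emotions. The compatibility rule, the interaction range, and the strengths are invented placeholders.

```java
// Sketch: positive result pulls two halos together, negative pushes them apart.
public class HaloInteraction {
    static class User {
        String emotion; double x, y;
        User(String emotion, double x, double y) { this.emotion = emotion; this.x = x; this.y = y; }
    }

    // Placeholder rule; in practice this could come from the web-defined compatibilities.
    static boolean attract(String a, String b) {
        return a.equals(b)
            || (a.equals("happy") && b.equals("excited"))
            || (b.equals("happy") && a.equals("excited"));
    }

    // Signed strength: stronger when the users are closer, zero beyond the range.
    static double interaction(User a, User b, double range) {
        double dx = a.x - b.x, dy = a.y - b.y;
        double dist = Math.sqrt(dx * dx + dy * dy);
        if (dist > range) return 0;
        double strength = 1.0 - dist / range;
        return attract(a.emotion, b.emotion) ? strength : -strength;
    }

    public static void main(String[] args) {
        User a = new User("happy", 0, 0);
        User b = new User("excited", 1, 1);
        System.out.println(interaction(a, b, 5)); // positive, so the halos join
    }
}
```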

This is just an interesting take on compatibility between users. Of course we are still using text as our visual element.

Let me know of your thoughts.

BTW, I just got JMyron up and running on my Mac Book Pro (MBP) hehe and played around with some simple motion tracking stuff. Pretty cool.

Thoughts on Installation Pt. 2

Wednesday, August 1st, 2007

Hannah and Group,


I do agree that the huge moveable objects could make the final project look cheap and half-assed (as we discussed during our meeting). I also agree we should drop that idea and go no further with it (the oversized objects, that is).


The idea of having the emotion words float around, rain, or trace the shape of the user's shadow as they interact with the installation is definitely cool. The problem I see with this is that if or when large groups of people enter the installation, the whole bottom half of the screen will be left blank because of the people's shadows. To get around this we would have to restrict entry to the installation. As an art piece, I do not believe we should do this, because it will detract from the instant "wow factor" and possibly dampen some of the users' desire to experiment, explore, and push the limits of the installation (as discussed, we want users to push the boundaries of what we create). I believe that as long as we didn't have this feature throughout the whole installation, the raining text would be a great addition; maybe set it up so only one or two people can interact with it on one panel of the installation near the entrance or exit.


As for ways to display the text, the idea of having the font colour match the meaning of the emotion is GREAT! I still think keeping the font size relative to the popularity of the word is a good idea (keeping in mind we will have to make sure that one word won't reach a font size of 99999999999999 and overtake the display).
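
Just to pin the font-size idea down, here is a rough Java sketch of scaling a word's size with its popularity while capping it so one runaway word can't swallow the display. The size range and the log scaling are arbitrary choices for illustration.

```java
// Sketch: popularity -> font size, log-scaled and clamped to a maximum.
public class WordSize {
    static final float MIN_SIZE = 12f;
    static final float MAX_SIZE = 96f;

    static float fontSize(int count, int maxCount) {
        // Log scale keeps very popular words from dwarfing everything else.
        double scaled = Math.log1p(count) / Math.log1p(maxCount);
        double size = MIN_SIZE + scaled * (MAX_SIZE - MIN_SIZE);
        return (float) Math.min(MAX_SIZE, Math.max(MIN_SIZE, size));
    }

    public static void main(String[] args) {
        System.out.println(fontSize(5, 500));     // small-ish
        System.out.println(fontSize(500, 500));   // 96, the cap
        System.out.println(fontSize(99999, 500)); // still 96, no runaway words
    }
}
```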


I know we discussed different setups for the installation: either a walk-through or a circle-ish area where everyone just congregates. I think the best way to get the most out of the installation, allowing maximum interaction and the least congestion, is to have an entrance, an exit, and a medium-large space in the middle, for example -> ( ): enter through the bottom, exit through the top, and interact in the middle.


On breaking the emotions up onto different walls to try to make the user feel the emotion of the installation (Hannah, is this kind of what you meant?): having the emotions organically flow together will have an influence on the users anyway. The predominant emotion of the installation will hopefully have some sort of effect on the viewers. Breaking the emotions into separate walls could cause a battle between panels, with some very plain and boring while others are overly packed. If this is what we decide on as a group, I have no problem with it, as we are researching and developing to create the best project in the world! (I still like the idea of flowing text passing across, say, 12 projectors in a very organic motion. I picture the program the prof from NYU did with the 6 screens and an intertwining animation. It just looked cool, though I do understand that it has been done.)


I love the idea of tracking users through distinctive objects they hold or wear (would this use fluorescent colours?). Tracking user paths through the installation to create inverted light/shadow graffiti would be cool and could be looked into. Another option would be, instead of random text floating around, to use the tracked user paths to control the flow of the emotion words being displayed. Wherever the shadows created by the users were, that is where the words would flow. The more predominant the emotion, and hence the larger the font, the more leeway the word has to travel outside of the user-created shadow, or vice versa.
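
One simple way to make the words flow where the shadows were is to keep a trail buffer: stamp it wherever a tracked user (or their shadow) is, fade it a little every frame, and only let words travel where the trail is still bright. A rough Java sketch, with the grid size, decay rate, and positions as placeholders:

```java
// Sketch of a fading "light graffiti" trail buffer.
public class TrailBuffer {
    static final int W = 64, H = 48;
    static float[][] trail = new float[H][W];

    // Called once per frame with each tracked user's grid position.
    static void stamp(int x, int y) {
        if (x >= 0 && x < W && y >= 0 && y < H) trail[y][x] = 1.0f;
    }

    // Fade the whole buffer; words (or light) are drawn where trail > 0.
    static void decay(float rate) {
        for (int y = 0; y < H; y++)
            for (int x = 0; x < W; x++)
                trail[y][x] = Math.max(0f, trail[y][x] - rate);
    }

    public static void main(String[] args) {
        stamp(10, 10);
        for (int frame = 0; frame < 5; frame++) decay(0.1f);
        System.out.println(trail[10][10]); // roughly 0.5 after five frames of fading
    }
}
```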

I also really like the idea of temperature/touch/motion sensors. For instance, using dance mats or accelerometers to alter the flow patterns of the display could be neat.


I also still like the original idea of standard text input through a keyboard/computer and phone.


I am for this: limit input to text and interpret it through interaction within the installation. I see this allowing comparison between the textual input and the way people act in the installation as the video captures images (such as the colours they are wearing and their speed of movement).
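
The "speed of movement" part could be as simple as measuring how far a user's blob centroid moves between frames. A tiny Java sketch, where the frame rate and pixel units are assumptions:

```java
// Sketch: centroid displacement per frame, scaled by frame rate.
public class MovementSpeed {
    static double speed(double prevX, double prevY, double x, double y, double fps) {
        double dx = x - prevX, dy = y - prevY;
        return Math.sqrt(dx * dx + dy * dy) * fps; // pixels per second
    }

    public static void main(String[] args) {
        System.out.println(speed(100, 100, 104, 103, 30)); // 5 px/frame * 30 fps = 150 px/s
    }
}
```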


Clean and clear installation: to do this, I feel the background should be one colour or a light gradient from top to bottom. The background colour could change depending on the overall emotion of the installation, but if we want people to focus on the emotional text and feel some sort of presence from being in the installation, the emotion text should not be competing with other animated objects. Moving clouds, a shining sun, or other illustrations in the background will, I feel, become cumbersome and distracting.
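
For the background, a simple approach is to drift the current colour toward the colour of the dominant emotion a little every frame, so it never competes with the text. A rough Java sketch; the emotion-to-colour mapping and the blend rate are invented.

```java
// Sketch: ease the background RGB toward a target colour each frame.
public class BackgroundColour {
    static float[] current = { 1f, 1f, 1f };          // start white

    static void blendToward(float[] target, float rate) {
        for (int i = 0; i < 3; i++)
            current[i] += (target[i] - current[i]) * rate;
    }

    public static void main(String[] args) {
        float[] sadBlue = { 0.2f, 0.3f, 0.6f };       // placeholder "sadness" colour
        for (int frame = 0; frame < 60; frame++)
            blendToward(sadBlue, 0.05f);              // eases toward blue over ~60 frames
        System.out.printf("r=%.2f g=%.2f b=%.2f%n", current[0], current[1], current[2]);
    }
}
```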


I was going to post this as a response to your post, Hannah, but it ended up being tremendously long and I felt it would be easier to read here. After typing this and re-reading it, I think we may need another group meeting, or something of the sort. It seems that we are once again veering away from what we came up with in our previous group meeting.


Overall, I think having the user control the overall emotion of the installation is key. This can be done while still allowing them to experiment and push the limits of the project. As I saw it, having large objects sitting in the installation and letting people play and experiment with them kind of took away from the "emotional aspect" of the installation, giving the user the ability to alter the installation without knowing exactly what they are doing. Making users push the limits while still conveying their inner emotions to the project is definitely a challenge we will have to overcome.

Gesture and Affect Recognition in Music

Monday, July 30th, 2007

I was reading through some of the printed articles that I have, and I came across one that really sparked an interest. It’s from the University of Geneva and deals with translating the emotional and gestural aspects of dance and movement into related sounds and instruments.

The first one is the actual article, the following two just kind of expand on the ideas proposed.

ftp://infomus.dist.unige.it/pub/Publications/EyesWebIEEE99.pdf

ftp://infomus.dist.unige.it/Pub/Publications/CIM2003-Gesture.pdf

ftp://infomus.dist.unige.it/pub/Publications/Kansei97_LabProjetcs.pdf

The last thing that I thought would be necessary to include is the actual "Gesture Dictionary" that they propose using to determine the emotional aspects of movement. It's all pretty cool to take a quick look at.

http://recherche.ircam.fr/equipes/analyse-synthese/wanderle/Gestes/Externe/index.html 

Thoughts on Installation

Friday, July 27th, 2007

I’ve been considering some of the ideas we discussed at our in-person meeting (Jordan, Alicia, and I).

I think we need a really solid idea of how we want to tie everything together. Just because it's feasible to have input through large, over-sized items doesn't necessarily mean that the installation will make sense. I'm concerned that it will just end up as a chaotic mishmash of direct input through a variety of large, tacky devices, which I'm not entirely opposed to, as I still think it might be fun.

Ideally, it might be more enjoyable if we could come up with ways that users can interact with and influence the installation without necessarily moving around objects with sensors in them. For example, if we could use video and sound detection to "sense" where in the room people are gravitating and then capture input from them (strictly text, perhaps), we could translate this into less direct input.

Another idea I had was that instead of mixing all of the emotion words together throughout the installation, it could be divided into the six (or however many we decide on) themes (i.e. one wall per "basic emotion"). We could then detect emotions from users simply based on their location within the installation. To make this more interesting, we could trace the shadow of the users on each wall (perhaps using text or some other cool light graffiti in the shape of the person... the possibilities are endless) and give them the characteristics of that particular emotion.
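
Detecting an emotion from location alone is cheap to prototype: treat the room as a square, give each wall an emotion, and assign a tracked position the emotion of the nearest wall. A rough Java sketch; the four emotion labels and the room size are placeholders (with six walls the same idea just uses six zones):

```java
// Sketch: assign an emotion based on which wall a tracked position is nearest.
public class EmotionZones {
    static final double ROOM = 10.0; // room is ROOM x ROOM units
    static final String NORTH = "joy", SOUTH = "sadness",
                        EAST  = "anger", WEST  = "fear";

    static String emotionAt(double x, double y) {
        double dNorth = ROOM - y, dSouth = y, dEast = ROOM - x, dWest = x;
        double min = Math.min(Math.min(dNorth, dSouth), Math.min(dEast, dWest));
        if (min == dNorth) return NORTH;
        if (min == dSouth) return SOUTH;
        if (min == dEast)  return EAST;
        return WEST;
    }

    public static void main(String[] args) {
        System.out.println(emotionAt(5.0, 9.0)); // near the north wall -> joy
        System.out.println(emotionAt(0.5, 5.0)); // near the west wall  -> fear
    }
}
```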

Example: I come into the installation and I go over to the "sadness" wall/area, which is displaying (in a super-cool-Eric-Chan way) all kinds of sad words, maybe using shades of sad colours and moving the words around in a sad way... whatever that is. A camera tracks my movement and projects my shadow (in real time) on the wall amongst the text somehow (for just one idea, see Camille Utterback's work). So now it's almost as though the room is influencing the user, not just the other way around. As far as input goes, I'm not really sure. We could have users hold or wear some kind of object as they come in, or leave different objects (even something simple like cubes, spheres, etc. of different colours) around the installation that users can touch. There are also different input forms that we can look at, like temperature/touch/motion sensors. We can also have the standard text input through a keyboard/computer or phone; we could even limit users to just this and try to extract more emotional information through less direct methods, such as the video-captured images (the colours they are wearing, their speed of movement... it wouldn't have to be anything super complex). If we were to go this route, I think we could spend a lot of time creating and refining a really cool projection display. We could even explore things like connecting different users by their location, the mood words nearest them, the type of words they input, the colours they are wearing, the amount they are moving, etc. Or we might also track user paths through the room to create a sort of inverted light/shadow graffiti.

I definitely want the installation to be fun but I think it will be more impressive if the final presentation is coherent and “clean”. I think it might be more fun for everyone (kids and adults) if the main theme is slightly simpler but also less obvious, while still allowing (encouraging!) everyone to move around and try to manipulate/abuse the system.

Some Research on Emotions and Facial Expression-Ekman

Wednesday, July 11th, 2007

I've been doing some reading on emotions and facial expressions... Here are some PDFs I found relating to Ekman's research. I haven't had time to go through all the documents in detail, but they seem relevant and hold useful information.

Basic Emotions CH. 3 - Paul Ekman

Facial Expressions CH. 16 - Paul Ekman

Facial Expression and Emotions - Paul Ekman

NakedFace_NewYorker.pdf

 I have also started writing some stuff in a google doc. I have shared it with everyone. If you haven’t got it please let me know and I will make sure you do.

 ENJOY!

SEE you guys this weekend.