Emo-Capsule » 2007 » July

Archive for July, 2007

My hands smell like wax

Monday, July 30th, 2007

As does the rest of the house. So I made my first rear-projection screen using no-name wax paper. I have to admit, I was fairly disappointed. The stuff is cheap and not waxy enough, so it doesn’t really stick together, even when I melt the hell out of it with the iron. Still curious, I asked my dad to find the slide projector for me and we tried it out. Holy cow! It looks SO awesome. The photos don’t do it justice because of the projected light and the fact that I had to turn the lights on to get the exposure. In a dark room, it looks amazing. You don’t see the flaws in the screen at all. And it’s surprisingly sharp. The best part is that you can move it around and the image moves fluidly with it.

I also tried it with a slightly translucent plastic, which didn’t work as well - I suspect because it was too transparent. It might be something else to try, but it’s more expensive and probably not worthwhile if the wax paper holds up.

PS. Don’t iron on peel-and-stick-tiled floors. Even with a towel underneath. It’s bad news.

Gesture and Affect Recognition in Music

Monday, July 30th, 2007

I was reading through some of the printed articles that I have, and I came across one that really sparked an interest. It’s from the University of Geneva and deals with translating the emotional and gestural aspects of dance and movement into related sounds and instruments.

The first one is the actual article, the following two just kind of expand on the ideas proposed.

ftp://infomus.dist.unige.it/pub/Publications/EyesWebIEEE99.pdf

ftp://infomus.dist.unige.it/Pub/Publications/CIM2003-Gesture.pdf

ftp://infomus.dist.unige.it/pub/Publications/Kansei97_LabProjetcs.pdf

The last thing I thought would be necessary to include is the actual “Gesture Dictionary” that they propose using to determine the emotional aspects of movement. It’s all pretty cool to take a quick look at.

http://recherche.ircam.fr/equipes/analyse-synthese/wanderle/Gestes/Externe/index.html 

Thoughts on Installation

Friday, July 27th, 2007

I’ve been considering some of the ideas we discussed at our in-person meeting (Jordan, Alicia, and I).

I think we need a really solid idea of how we want to tie everything together. Just because it’s feasible to have input through large, oversized items doesn’t necessarily mean that the installation will make sense. I’m concerned that it will just end up a chaotic mishmash of direct input through a variety of large, tacky devices, which I’m not entirely opposed to, as I still think it might be fun.

Ideally, it might be more enjoyable if we could come up with ways for users to interact with and influence the installation without necessarily moving around objects with sensors in them. For example, if we could use video and sound detection to “sense” where in the room people are gravitating and then capture input from them (strictly text, perhaps), we could translate this into less direct input.

Another idea I had was that instead of mixing all of the emotion words together throughout the installation, it could be divided into the six (or however many we decide on) themes (i.e. one wall per “basic emotion”). We could then detect emotions from users simply based on their location within the installation. To make this more interesting, we could trace the shadow of the users on each wall (perhaps using text or some other cool light graffiti in the shape of the person.. the possibilities are endless) and give it the characteristics of that particular emotion.
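Just to make the location-to-emotion idea concrete, here’s a tiny sketch (in Python; the room dimensions, the four-wall simplification, and the wall-to-emotion assignments are all made up for illustration) of mapping a visitor’s floor position to the nearest emotion wall:

```python
# Hypothetical sketch: map a visitor's (x, y) floor position to the
# nearest "emotion wall" in a rectangular room. Everything here is a
# placeholder, not a real design decision.
ROOM_W, ROOM_H = 10.0, 10.0  # assumed room size in metres

# One emotion per wall (a 4-wall simplification of the 6-theme idea).
# Each entry computes the visitor's distance to that wall.
WALLS = {
    "sadness": lambda x, y: x,            # distance to left wall
    "joy":     lambda x, y: ROOM_W - x,   # right wall
    "anger":   lambda x, y: y,            # front wall
    "fear":    lambda x, y: ROOM_H - y,   # back wall
}

def nearest_emotion(x, y):
    """Return the emotion of the wall closest to the visitor."""
    return min(WALLS, key=lambda name: WALLS[name](x, y))
```

With real tracking data, the (x, y) would come from whatever video or sensor system we end up using.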

Example: I come into the installation and go over to the “sadness” wall/area, which is displaying (in a super-cool-Eric-Chan way) all kinds of sad words, maybe using shades of sad colours, moving the words around in a sad way.. whatever that is. A camera tracks my movement and projects my shadow (in real time) on the wall in amongst the text somehow (for just one idea, see Camille Utterback’s work here). So now it’s almost as though the room is influencing the user, not just the other way around.

As far as input goes.. I’m not really sure. We could have the user hold or wear some kind of object as they come in, or leave different objects (even something simple like cubes, spheres, etc. of different colours) around the installation that users can touch. There are also different input forms we can look at, like temperature/touch/motion sensors. We could also have the standard text input through keyboard/computer or phone, and maybe even limit users to this and try to extract more emotion information through less direct methods, such as the video-captured images (the colours they are wearing, speed of movement.. it wouldn’t have to be anything super complex). If we were to go this route, I think we could spend a lot of time creating and refining a really cool projection display. We could even explore connecting different users by their location, the mood words nearest them, the type of words they input, the colours they are wearing, how much they are moving, etc. Or we might track user paths through the room to create a sort of inverted light/shadow graffiti.
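For the shadow-tracking part, the simplest video trick I know of is background subtraction: grab a frame of the empty room once, then flag any pixels that differ much from it as the person’s silhouette. A rough sketch of that idea (Python with NumPy; the threshold value is a guess, and real camera noise and lighting changes would need more care than this):

```python
import numpy as np

def silhouette_mask(frame, background, threshold=30):
    """Boolean mask of pixels that differ from the empty-room background.

    frame, background: greyscale images as uint8 arrays of the same shape.
    threshold: minimum per-pixel difference to count as "person" (a guess).
    """
    # Widen to int16 first so the subtraction can't wrap around at 0/255.
    diff = np.abs(frame.astype(np.int16) - background.astype(np.int16))
    return diff > threshold
```

The resulting mask is exactly the shape we’d want to fill with text or light graffiti instead of a plain shadow.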

I definitely want the installation to be fun but I think it will be more impressive if the final presentation is coherent and “clean”. I think it might be more fun for everyone (kids and adults) if the main theme is slightly simpler but also less obvious, while still allowing (encouraging!) everyone to move around and try to manipulate/abuse the system.

Some Research on Emotions and Facial Expression - Ekman

Wednesday, July 11th, 2007

I’ve been doing some reading on emotions, facial expressions… Here are some PDFs I found relating to Ekman’s research. I haven’t had time to go through all the documents in detail, but they seem to be relevant and hold useful information.

Basic Emotions CH. 3 - Paul Ekman

Facial Expressions CH. 16 - Paul Ekman

Facial Expression and Emotions - Paul Ekman

NakedFace_NewYorker.pdf

I have also started writing some stuff in a Google doc and have shared it with everyone. If you haven’t got it, please let me know and I’ll make sure you do.

 ENJOY!

SEE you guys this weekend.

Interesting things…

Tuesday, July 3rd, 2007

Hey there. I found some interesting articles…

http://www.smoothware.com/danny/woodenmirror.html

Research and Organization

Sunday, July 1st, 2007

Check out http://ug.csit.carleton.ca/~imd_sp04/Hannah/ - I put a bunch of stuff there, some articles and diagrams I grabbed from the books I found. Go snoop around. I sent out login information for the csit server space, so check your email.

Since I’m spending so much time working with the accelerometers, I’ve started spending my outside-of-work time looking into video processing and how this might be useful for deriving gestures. Over the next week, I’ll be posting my test files for these and the other stuff I’ve been working on.
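As a first stab at the video-processing side, the crudest useful measure is probably frame differencing: compare consecutive frames and use the average per-pixel change as a proxy for how much (and roughly how fast) someone is moving. A sketch of what I mean (Python with NumPy; nothing here comes from a real gesture library):

```python
import numpy as np

def motion_energy(prev, curr):
    """Mean absolute per-pixel change between two consecutive greyscale
    frames. 0.0 means nothing moved; larger values mean more/faster motion."""
    diff = np.abs(curr.astype(np.float64) - prev.astype(np.float64))
    return float(diff.mean())
```

Thresholding this over time could already distinguish “standing still” from “waving like mad”, which might be enough emotional signal on its own, before we get into anything fancier.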

I’m also trying to gather as much theoretical information as I can to help support our design decisions (when we actually make them), including how we will associate particular sounds/images/movements with words. I’m doing my best to compile this information on the csit server too.