Emo-Capsule

Re: Eric’s Cool Sketches

August 13th, 2007 by Hannah

I think the idea of the rings around the user on the ground is awesome, and probably not too tough either from what I’ve been looking at (haha, relatively…). I think we could definitely build a simple installation around this, or work it into something more complex, although I even like the idea of just simple white panels, with all of the focus being on the floor.

The biggest unresolved issue is how we assign an emotion to each person. Is it based on their movement? Something they touch, or type in, or say? The position of their body (which could be difficult, but super cool if it worked)? Alternatively, we could define poses for each emotion and flash/display them in some way so that people would know to “do a pose” in order to acquire an emotion ring, which would be totally doable with the current accelerometer code. It could also be based on where they walk when they first enter, or the colour of their clothes, or some combination of these, or something else entirely! What are you guys leaning towards? I’d really love to work in the accelerometers, as I think the pose detection is still sort of novel and it works surprisingly well. I’m also keen on RFID and/or touch/light/sound sensors with Arduino in some kind of subtle way (embedded into the floor or wall panels, maybe). It would be cool if we could use some combination of inputs (it would also help our project come across as less “simple/easy”), but I think that’s easier said than done.
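To make the pose idea a bit more concrete, here is a minimal sketch of how the Processing side might look, assuming the accelerometer values arrive over the serial port as “x y z” lines. The message format, the pose rule, and the emotion colours are all placeholders, not our actual accelerometer code:

```java
// Hypothetical sketch: classify a coarse "pose" from accelerometer tilt
// (read over serial) and colour an emotion ring to match.
import processing.serial.*;

Serial port;
float ax, ay, az;   // latest accelerometer reading

void setup() {
  size(400, 400);
  // Assumes the Arduino is the first serial device and prints "x y z\n".
  port = new Serial(this, Serial.list()[0], 9600);
  port.bufferUntil('\n');
}

void serialEvent(Serial p) {
  String line = p.readStringUntil('\n');
  if (line == null) return;
  float[] v = float(split(trim(line), ' '));
  if (v.length == 3) { ax = v[0]; ay = v[1]; az = v[2]; }
}

color poseColour() {
  // Whichever axis dominates stands in for a "pose"; real pose detection
  // would obviously need something smarter than this.
  if (abs(ax) > abs(ay) && abs(ax) > abs(az)) return color(220, 60, 60);   // "anger"
  if (abs(ay) > abs(az))                      return color(60, 120, 220);  // "sadness"
  return color(240, 200, 60);                                              // "joy"
}

void draw() {
  background(0);
  noFill();
  stroke(poseColour());
  strokeWeight(8);
  ellipse(width / 2, height / 2, 200, 200);  // the ring around the user
}
```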

What does everyone else think?

Sketches Concept

August 13th, 2007 by Eric

Hey guys, I’ve done some sketching of a concept I’ve been thinking about for our interactive installation. Click the images to enlarge.

The idea is about compatibility. The user walks into the room and a halo surrounds them, displaying different emotions or perhaps words describing their profile. When one user approaches another, there could be an attraction or rejection effect. If two emotions attract, then the halos of the two users become joined. The more users together, the more attraction/rejection.
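Here is a quick toy sketch of the joining behaviour, with the mouse standing in for one user and a fixed point for the other, since we haven’t picked a tracking method yet (the “compatibility” rule is just a placeholder):

```java
// Toy sketch of the halo idea: two "users", each with an emotion.
// Compatible emotions join their halos when the users get close.
int EMOTION_A = 0;   // placeholder emotion labels
int EMOTION_B = 0;   // same emotion -> attraction, in this toy rule

boolean compatible(int a, int b) {
  return a == b;     // stand-in rule; the real mapping is still undecided
}

void setup() {
  size(600, 400);
}

void draw() {
  background(0);
  // User A follows the mouse; user B stays put (no tracking yet).
  float ax = mouseX,      ay = mouseY;
  float bx = width * 0.7, by = height * 0.5;

  noFill();
  strokeWeight(4);
  stroke(240, 200, 60);
  ellipse(ax, ay, 120, 120);   // halo A
  stroke(60, 120, 220);
  ellipse(bx, by, 120, 120);   // halo B

  // When they're close and compatible, draw one joined halo around both.
  if (compatible(EMOTION_A, EMOTION_B) && dist(ax, ay, bx, by) < 150) {
    stroke(255);
    float d = dist(ax, ay, bx, by) + 160;
    ellipse((ax + bx) / 2, (ay + by) / 2, d, d);
  }
}
```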

This is just an interesting take on compatibility between users. Of course we are still using text as our visual element.

Let me know your thoughts.

BTW, I just got JMyron up and running on my MacBook Pro (MBP), hehe, and played around with some simple motion tracking stuff. Pretty cool.
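For anyone else trying it, the glob-tracking samples boil down to roughly this (method names as in the example sketches bundled with JMyron, so double-check against the copy you installed):

```java
// Roughly what the JMyron glob-tracking samples do: show the camera
// image and draw boxes around bright "globs" (blobs).
import JMyron.*;

JMyron m;

void setup() {
  size(320, 240);
  m = new JMyron();
  m.start(width, height);           // start camera capture at sketch size
  m.trackColor(255, 255, 255, 256); // track bright pixels; tweak for your lighting
  m.findGlobs(1);                   // turn on glob (blob) detection
}

void draw() {
  m.update();                       // grab a new frame
  int[] img = m.image();            // camera pixels
  loadPixels();
  for (int i = 0; i < width * height; i++) {
    pixels[i] = img[i];
  }
  updatePixels();

  int[][] boxes = m.globBoxes();    // bounding boxes of detected globs
  noFill();
  stroke(255, 0, 0);
  for (int i = 0; i < boxes.length; i++) {
    rect(boxes[i][0], boxes[i][1], boxes[i][2], boxes[i][3]);
  }
  // The bundled examples also release the camera with m.stop() when the
  // sketch closes.
}
```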

Safari Tech Books Online!!

August 5th, 2007 by Hannah

Safari: if you haven’t already heard of or used it, it’s amazing. Many, many technical books are available to read online. If you can stand reading off your computer screen, it is well worth it.

I’m currently reading a book called Physical Computing: Sensing and Controlling the Physical World with Computers by Dan O’Sullivan and Tom Igoe. I think it could be verrrrry helpful for getting into the microcontrollers and sensors and stuff.

I know I already posted about Arduino, but I found a booklet online that describes how to get started with it. I know I’ll end up going through it and thought some of you might also be interested. I also found some Ultrasonic Range Finders that are basically like bats (haha). They can detect the distance of objects from a couple of cm to several meters away. They are a bit more expensive than other sensors (in the $30-each range) but could come in handy in a low-light environment. I’m in the process of tracking down an Arduino microcontroller and some sensors that I can start playing with. I will post what I find!
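Once one of those is wired up and the Arduino prints a distance reading per line over USB (the firmware and message format are ours to define, so treat this as a hypothetical sketch), the Processing side could be as small as this:

```java
// Hypothetical Processing side: read one distance value (cm) per line
// from the Arduino over serial and brighten the screen as someone gets close.
import processing.serial.*;

Serial port;
float distanceCm = 300;   // last reading; start "far away"

void setup() {
  size(400, 400);
  // Assumes the Arduino shows up as the first serial port and prints
  // a plain number followed by a newline, e.g. "142\n".
  port = new Serial(this, Serial.list()[0], 9600);
  port.bufferUntil('\n');
}

void serialEvent(Serial p) {
  String line = p.readStringUntil('\n');
  if (line == null) return;
  float v = float(trim(line));
  if (!Float.isNaN(v)) distanceCm = v;
}

void draw() {
  // Closer person -> brighter screen (2 cm to 3 m mapped to 255..0).
  float b = map(constrain(distanceCm, 2, 300), 2, 300, 255, 0);
  background(b);
  fill(255 - b);
  text(nf(distanceCm, 0, 1) + " cm", 20, height / 2);
}
```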

Arduino and Myron

August 3rd, 2007 by Hannah

So after a WinXP reinstall, complete with a new version of Processing, WinVDig, and Myron, I’ve now got some of the Myron video tracking samples working with my webcam. Things worth noting: It is sooooo light-dependent. Granted, my camera isn’t very good, but I still think it might be tricky in a dark installation. I’m thinking about solutions.

In the meantime, I’ve been looking into Arduino, a sister project of Processing. I think it’s worth taking a look at the site. Essentially, they’re relatively inexpensive microcontrollers that accept input from a wide variety of sensors (temperature, light, movement) over a USB interface. The platform is open source and apparently works well with Processing. It sounds pretty good.

Thoughts on Installation Pt. 2

August 1st, 2007 by Jordan

Hannah and Group,

 

I do agree that huge moveable objects could make the final project look cheap and half-assed (as we discussed during our meeting). I also agree we should drop that idea and go no further with it (the over-sized objects, that is).

 

The idea of having the emotion words float around/rain, or trace the shape of the user’s shadow as they interact with the installation, is definitely cool. The problem I see with this is that if or when large groups of people enter the installation, the whole bottom half of the screen will be left blank because of the people’s shadows. To get around this we would have to restrict entry to the installation. As an art piece, I don’t believe we should do that, because it would detract from the instant “wow factor” and possibly kill some of the users’ desire to experiment, explore and push the limits of the installation (as discussed, we want the users to push the boundaries of what we create). As long as we don’t have this feature throughout the whole installation, I believe the raining text would be a great addition; maybe set it up so only one or two people can interact with it on one panel near the entrance or exit.
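For reference, the raining-words behaviour is essentially the “Text Rain” mechanic: letters fall until the pixel below them is dark (someone’s shadow) and pile up there. Here is a rough Processing sketch of just the mechanic, with a circle that follows the mouse standing in for a real camera silhouette so it runs on its own:

```java
// Rough sketch of the raining-text mechanic: letters fall and rest on
// dark pixels. A circle following the mouse stands in for a person's
// shadow; a real version would threshold the camera image instead.
String word = "sadness";
float[] x = new float[40];
float[] y = new float[40];

void setup() {
  size(600, 400);
  textSize(18);
  for (int i = 0; i < x.length; i++) {
    x[i] = random(width);
    y[i] = random(-height, 0);
  }
}

boolean dark(int px, int py) {
  if (px < 0 || px >= width || py < 0 || py >= height) return false;
  return brightness(get(px, py)) < 60;
}

void draw() {
  background(230);
  fill(20);
  noStroke();
  ellipse(mouseX, mouseY, 160, 160);   // stand-in "shadow"

  fill(200, 40, 40);
  for (int i = 0; i < x.length; i++) {
    // Fall unless the pixel just below is dark (i.e. inside the shadow).
    if (!dark(int(x[i]), int(y[i]) + 1)) y[i] += 2;
    if (y[i] > height) { y[i] = random(-100, 0); x[i] = random(width); }
    text(word.charAt(i % word.length()), x[i], y[i]);
  }
}
```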

 

As for ways to display the text, the idea of having the font colour match the meaning of the emotion is GREAT! I still think keeping the font size relative to the popularity of the word is a good idea (keeping in mind we will have to make sure that one word can’t reach a font size of 99999999999999 and take over the display).
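Capping that is easy; something like this in Processing, where the word count is mapped into a fixed size range and clamped (the 12 to 96 point range is just a guess):

```java
// Map a word's popularity to a font size, clamped so that no single
// word can take over the display. The 12-96 pt range is arbitrary.
float fontSizeFor(int count, int maxCount) {
  float s = map(count, 0, maxCount, 12, 96);
  return constrain(s, 12, 96);
}
```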

 

I know we discussed different set-ups for the installation: either a walk-through, or a circle-ish area where everyone just congregates. I think the best way to get the most out of the installation, allowing for maximum interaction and the least congestion, is to have an entrance and an exit with a medium-large space in the middle, for example -> ( ): enter through the bottom, exit through the top, and interact in the middle.

 

Having the emotions broken up onto different walls to try to make the user feel the emotion of the installation (Hannah, is this kind of what you meant?): having the emotions organically flow together will have an influence on the users anyway. The predominant emotion of the installation will create, and hopefully have, some sort of effect on the viewers. Breaking the emotions into separate walls could cause a battle between panels, some very plain and boring while others are overly packed. If this is what we decide on as a group, I have no problem with it, as we are researching and developing to create the best project in the world! (I still like the idea of flowing text passing across a display spread over, say, 12 projectors, with very organic motion. I picture the program the prof. from NYU did with the 6 screens intertwining an animation. It just looked cool, though I do understand that it has been done.)

 

I love the idea of tracking users through distinctive objects they hold or wear (would this be using fluorescent colours?). Tracking user paths through the installation to create inverted light/shadow graffiti would be cool and should be looked into. Another option: instead of random text floating around, use the tracked user paths to control the flow of the emotion words being displayed. Where the shadows created by the user were, that is where the words would flow. The more predominant the emotion (hence the larger the font), the more leeway the word would have to travel outside the user-created shadow, or vice versa.
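A cheap way to prototype the light/shadow-graffiti part: keep an off-screen buffer, stamp the tracked position into it every frame, and slowly fade it out. A minimal sketch, with the mouse standing in for a tracked person:

```java
// Prototype of the light-graffiti idea: tracked positions (mouse here)
// leave glowing trails in an off-screen buffer that slowly fades.
PGraphics trail;

void setup() {
  size(600, 400);
  trail = createGraphics(width, height);
  trail.beginDraw();
  trail.background(0);
  trail.endDraw();
}

void draw() {
  trail.beginDraw();
  trail.noStroke();
  trail.fill(0, 8);                        // translucent black = slow fade
  trail.rect(0, 0, width, height);
  trail.fill(255, 240, 180);
  trail.ellipse(mouseX, mouseY, 20, 20);   // stamp the "user" position
  trail.endDraw();

  image(trail, 0, 0);
}
```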

I also like the idea of temperature/touch/motion sensors a lot. For instance, using dance mats or accelerometers to alter the flow patterns of the display could be neat.

 

I also still like the original idea of standard text input through a keyboard/computer and phone.

 

I am for this: limit input to text and interpret it through interaction within the installation. I see this allowing a comparison between the textual input and the way people act in the installation as the video captures images (such as the colours they are wearing or their speed of movement).

 

Clean and clear installation: to achieve this, I feel the background should be one colour or a light gradient from top to bottom. The background colour could change depending on the overall emotion of the installation, but if we want people to focus on the emotional text and to feel some sort of presence in the installation, the text should not be competing with other animated objects. Moving clouds, a shining sun, or other background illustrations would become cumbersome and distracting.
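The gradient background is trivial to prototype; something like this, where the overall mood just picks the top colour (both colours here are placeholders):

```java
// Simple top-to-bottom gradient background whose top colour could be
// driven by the installation's overall emotion (placeholder colours).
color moodColour = color(70, 110, 200);   // e.g. a calm blue
color baseColour = color(245);            // light bottom

void setup() {
  size(600, 400);
}

void draw() {
  for (int y = 0; y < height; y++) {
    stroke(lerpColor(moodColour, baseColour, y / float(height)));
    line(0, y, width, y);
  }
}
```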

 

I was going to post this as a response to your post, Hannah, but it ended up being tremendously long and I felt it would be easier to read here. After typing this and re-reading it, I think we may need another group meeting, or something of the sort. It seems that we are once again veering away from what we came up with in our previous group meeting.

 

Overall, I think having the user control the overall emotion of the installation is key. This can be done while still allowing them to experiment and push the limits of the project. As I saw it, having large objects sitting in the installation for people to play and experiment with took away from the “emotional aspect” of the installation, giving the user the ability to alter the installation without knowing exactly what they were doing. Making the user push the limits while still portraying their inner emotions to the project is definitely a challenge we will have to overcome.

My hands smell like wax

July 30th, 2007 by Hannah

As does the rest of the house. So I made my first rear projection screen using no name wax paper. I have to admit, I was fairly disappointed. The stuff is cheap and not waxy enough so it doesn’t really stick together, even when I melt the hell out of it with the iron. Still curious, I asked my dad to find the slide projector for me and we tried it out. Holy cow! It looks SO awesome. The photos don’t do it justice because of the projected light and the fact that I had to turn lights on to get it exposed. In a dark room, it looks amazing. You don’t see the flaws in the screen at all. And it’s surprisingly sharp. The best is that you can move it around and the image moves fluidly with it.

I also tried it with a slightly translucent plastic, which didn’t work as well; I suspect because it was too transparent. This might be something else to try, but it’s more expensive and probably not worthwhile if the wax paper approach holds up.

PS. Don’t iron on peel-and-stick-tiled floors. Even with a towel underneath. It’s bad news.

Gesture and Affect Recognition in Music

July 30th, 2007 by Alicia

I was reading through some of the printed articles that I have, and I came across one that really sparked my interest. It’s from the University of Genoa and deals with translating the emotional and gestural aspects of dance and movement into related sounds and instruments.

The first one is the actual article, the following two just kind of expand on the ideas proposed.

ftp://infomus.dist.unige.it/pub/Publications/EyesWebIEEE99.pdf

ftp://infomus.dist.unige.it/Pub/Publications/CIM2003-Gesture.pdf

ftp://infomus.dist.unige.it/pub/Publications/Kansei97_LabProjetcs.pdf

The last thing that I thought would be necessary to include is the actual “Gesture Dictionary” that they propose using to determine the emotional aspects of movement. It’s all pretty cool to take a quick look at.

http://recherche.ircam.fr/equipes/analyse-synthese/wanderle/Gestes/Externe/index.html 

Thoughts on Installation

July 27th, 2007 by Hannah

I’ve been considering some of the ideas we discussed at our in-person meeting (Jordan, Alicia, and I).

I think we need a really solid idea of how we want to tie everything together. Just because it’s feasible to have input through large over-sized items doesn’t necessarily mean that the installation will make sense. I’m concerned that it will just end up a chaotic mishmash of direct input through a variety of large, tacky devices (which I’m not entirely opposed to, as I still think it might be fun).

Ideally, it might be more enjoyable if we could come up with ways for users to interact with and influence the installation without necessarily moving around objects with sensors in them. For example, if we could use video and sound detection to “sense” where in the room people are gravitating and then capture input from them (strictly text, perhaps), we could translate this into less direct input.

Another idea I had was that instead of mixing all of the emotion words together throughout the installation, it could be divided into the 6 (or however many we decide on) themes (i.e. one wall per “basic emotion”). We could then detect emotions from users simply based on their location within the installation. To make this more interesting, we could trace the shadow of the users on each wall (perhaps using text or some other cool light graffiti in the shape of the person… the possibilities are endless) and give them the characteristics of that particular emotion.
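Assigning an emotion purely from location could be as simple as splitting the tracked position into zones, one per wall. A tiny sketch of the mapping (six walls only because that is the number floated above, and the labels are placeholders):

```java
// Map a tracked x position to one of the emotion walls/zones.
// Six zones here only because six basic emotions were floated above.
String[] emotions = { "joy", "sadness", "anger", "fear", "surprise", "disgust" };

String emotionAt(float x, float installationWidth) {
  int zone = int(constrain(x / installationWidth * emotions.length, 0, emotions.length - 1));
  return emotions[zone];
}
```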

Example: I come into the installation and go over to the “sadness” wall/area, which is displaying (in a super-cool-Eric-Chan way) all kinds of sad words, maybe using shades of sad colours, moving the words around in a sad way… whatever that is. A camera tracks my movement and projects my shadow (in real time) onto the wall amongst the text somehow; for just one idea, see Camille Utterback’s work here. So now it’s almost as though the room is influencing the user, not just the other way around.

As far as input goes, I’m not really sure. We could have the user hold or wear some kind of object as they come in, or leave different objects (even something simple like cubes, spheres, etc. of different colours) around the installation that users can touch. There are also different input forms that we can look at, like temperature/touch/motion sensors. We could also have the standard text input through a keyboard/computer or phone; maybe even limit the users to this and try to extract more emotional information through less direct methods, such as the video-captured images (the colours they are wearing, their speed of movement… it wouldn’t have to be anything super complex). If we were to go this route, I think we could spend a lot of time creating and refining a really cool projection display. We could even explore things like connecting different users by their location, the mood words nearest them, the type of words they input, the colours they are wearing, the amount they are moving, etc. Or we might also track user paths through the room to create a sort of inverted light/shadow graffiti.

I definitely want the installation to be fun but I think it will be more impressive if the final presentation is coherent and “clean”. I think it might be more fun for everyone (kids and adults) if the main theme is slightly simpler but also less obvious, while still allowing (encouraging!) everyone to move around and try to manipulate/abuse the system.

Some Research on Emotions and Facial Expression-Ekman

July 11th, 2007 by Jordan

I’ve been doing some reading on emotions and facial expressions. Here are some PDFs I found relating to Ekman’s research. I haven’t had time to go through all the documents in detail, but they seem to be relevant and hold useful information.

Basic Emotions CH. 3 - Paul Ekman

Facial Expressions CH. 16 - Paul Ekman

Facial Expression and Emotions - Paul Ekman

NakedFace_NewYorker.pdf

I have also started writing some stuff in a Google Doc and have shared it with everyone. If you haven’t got it, please let me know and I will make sure you do.

 ENJOY!

SEE you guys this weekend.

Interesting things……

July 3rd, 2007 by Alicia

Hey there. I found some interesting articles…

http://www.smoothware.com/danny/woodenmirror.html