
My design challenge

I am creating a musical instrument that can be played using only eye movements and facial gestures. This allows people with disabilities who cannot use their arms and hands to play music.


What’s your domain?

I want to focus on a group of users who cannot use their arms or fingers to interact with instruments.

My solution will therefore be a musical instrument/interface that is controlled only by interactions that can be performed with the face. That limits the interactions to things like:

  • Eye gaze positions
  • Facial gestures (position of mouth, eyebrows, jaw, eyes open/closed)
  • Head position
  • Blowing or singing into a microphone
  • Voice commands

Who are you designing for?

I believe that the ability to express oneself artistically should be available to all, regardless of physical disabilities or challenges.

My user group is deliberately narrow, but it exists. My instrument may also prove to be of value to others later on.


What value do you want to deliver to them?

The ability to play music for those who are unable to because of physical constraints. From my research with Kasper, Sebastian and at Jonstrupvagn, I learnt how much music means to this user group. Especially at Jonstrupvagn, where almost none of the residents are able to play an instrument without difficulty, music is a facilitator for social interaction, a way to express big emotions and an arena for creativity. If technology can make it possible for this group to overcome their physical limits and play together or compose by themselves, I am sure it would add a lot of value to their everyday lives.


Is it a product? Is it a service? Or what?

I am designing a product for the present. The instrument/software I end up designing will be fully working, and should be easy to set up in schools or in the homes of disabled users. At this stage I have some loose ideas about how it could later evolve into a service, but that is a future scenario, and not something I plan to explore in detail in the coming months.


What kind of technology are you exploring/considering?


Sensors:

1) Eye tracking via a dedicated sensor. I am currently working with a $99 eye tracker from The Eye Tribe. Although it is not the most precise tracker on the market, it is affordable and works across platforms (Mac, PC, Linux, Android). That accessibility is important to me, since most other trackers are quite expensive and only work on Windows. A minimal gaze-reading sketch follows below.

2) Face tracking via webcam. Most computers have a webcam, and if not, it is easy and affordable to hook up an external one. I want to use facial gestures as a supplement to the eye tracking interactions; a basic face detection sketch also follows below.
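To give an idea of the plumbing on the eye tracking side, here is a minimal Processing sketch that reads gaze coordinates from the Eye Tribe server. It assumes the server is running locally on its default port (6555) and speaking its JSON-over-TCP protocol; the crude string parsing is purely for illustration.

```
// Minimal sketch: read gaze frames pushed by the Eye Tribe server.
import processing.net.*;

Client tracker;
float gazeX, gazeY;

void setup() {
  size(1280, 800);
  tracker = new Client(this, "localhost", 6555);
  // Ask the server to push gaze frames to us.
  tracker.write("{\"category\":\"tracker\",\"request\":\"set\",\"values\":{\"push\":true,\"version\":1}}");
}

void draw() {
  // The server drops push clients that stop sending heartbeats (~every 250 ms).
  if (frameCount % 15 == 0) {
    tracker.write("{\"category\":\"heartbeat\"}");
  }
  if (tracker.available() > 0) {
    String raw = tracker.readString();
    // Each frame carries smoothed gaze coordinates in "avg". Crude string
    // extraction for illustration; a real sketch should use a JSON parser.
    int i = raw.lastIndexOf("\"avg\":{\"x\":");
    if (i >= 0) {
      String pair = raw.substring(i + 11, raw.indexOf("}", i));
      String[] xy = split(pair, ",\"y\":");
      if (xy.length == 2) {
        gazeX = float(xy[0]);
        gazeY = float(xy[1]);
      }
    }
  }
  background(0);
  ellipse(gazeX, gazeY, 30, 30);  // draw a cursor at the current gaze point
}
```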
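And a rough sketch of the webcam side, assuming the OpenCV for Processing library (gab.opencv) together with Processing's video library. Detecting the face rectangle is only the first step; reading gestures like mouth and eyebrow positions will take more work on top of this.

```
// Rough sketch: detect faces in the webcam feed with OpenCV for Processing.
import gab.opencv.*;
import processing.video.*;
import java.awt.Rectangle;

Capture cam;
OpenCV opencv;

void setup() {
  size(640, 480);
  cam = new Capture(this, 640, 480);
  opencv = new OpenCV(this, 640, 480);
  opencv.loadCascade(OpenCV.CASCADE_FRONTALFACE);  // bundled Haar cascade
  cam.start();
}

void draw() {
  if (cam.available()) {
    cam.read();
  }
  image(cam, 0, 0);
  opencv.loadImage(cam);
  Rectangle[] faces = opencv.detect();  // one rectangle per detected face
  noFill();
  stroke(0, 255, 0);
  for (Rectangle face : faces) {
    rect(face.x, face.y, face.width, face.height);
  }
}
```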


Software:

Processing will transform the raw sensor input (x and y coordinates from the eye tracker plus the webcam feed) into meaningful interactions for real-time musical performance. Designing a user interface that can be controlled exclusively via eye gaze and face gestures will be a big challenge, but it should be possible. I am going to rely on existing Processing libraries (oscP5, netP5, OpenCV, etc.), but will put a lot of effort into writing software that enables users to interact musically with a DAW (like Ableton) using only eye and face interactions. The Processing end of the OSC link is sketched below.

– I am going to use Max4Live to transform OSC messages from Processing into musical commands (trigger notes, insert notes in the sequencer, adjust effects, change scales, build more experimental/generative musical systems, etc.); one such mapping is sketched below.

– I will use Ableton Live to trigger sounds and add effects.
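The Processing end of that link can stay small. In this sketch the OSC address pattern and ports (/gaze, 9000, 12000) are placeholders I have picked for illustration, not final decisions.

```
// Minimal sketch of the Processing → Max4Live link using oscP5/netP5.
import oscP5.*;
import netP5.*;

OscP5 osc;
NetAddress maxPatch;

void setup() {
  osc = new OscP5(this, 12000);                  // local listening port
  maxPatch = new NetAddress("127.0.0.1", 9000);  // where Max4Live listens
}

// Call this whenever a fresh gaze sample arrives.
void sendGaze(float x, float y) {
  OscMessage msg = new OscMessage("/gaze");
  msg.add(x);  // normalized 0..1 screen coordinates
  msg.add(y);
  osc.send(msg, maxPatch);
}

void draw() {
  // Stand-in: send the mouse position as a fake gaze point
  // until the tracker is wired in.
  sendGaze(mouseX / (float) width, mouseY / (float) height);
}
```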
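To give a feel for the kind of musical command the Max4Live device will handle, here is a scale mapping sketched in Processing/Java for readability; in practice this logic will live in the Max patch.

```
int[] MAJOR = {0, 2, 4, 5, 7, 9, 11};  // semitone offsets of a major scale

// Snap a continuous control value v (0..1, e.g. vertical gaze position)
// to a note in the scale. root is a MIDI note (60 = middle C), octaves
// is the pitch range the control is spread over.
int quantizeToScale(float v, int root, int octaves) {
  int steps = MAJOR.length * octaves;
  int step = constrain(int(v * steps), 0, steps - 1);
  int octave = step / MAJOR.length;
  return root + 12 * octave + MAJOR[step % MAJOR.length];
}
```

For example, quantizeToScale(0.5, 60, 2) snaps a gaze point in the middle of the screen to MIDI note 72, the C above middle C, within a two-octave C major range; swapping the offset table is all it takes to change scale.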


What’s the magic for you?

Moments like this (with better sound design):


Progress so far?

See my posts in the following categories