Today we had our midterm presentation for our fellow students here at Aalborg University Copenhagen. For this presentation we had a collection of recordings from several laboratory sessions, which we edited into one movie. The movie shows small clips from some of the laboratory sessions, together with explanations of what is going on.
You can see the video by clicking this link.
At this presentation we were also asked to put up our PowerPoint presentation on our webpage. The presentation can be downloaded here.
The testing focused on image acquisition and the colour-tracking algorithm. Since we are mapping the computer-vision part to the sound part, it is also essential to test the scaling of the different values. We are using three objects (pucks) for the demonstration, representing an amplifier, a low-pass filter and a simple delay, respectively. The testing went pretty well.
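The scaling step mentioned above can be illustrated with a small sketch. This is not our actual Max/MSP patch, just a minimal Python illustration of the idea; the input range (camera resolution) and the output ranges (gain, cutoff frequency) are hypothetical placeholders.

```python
# Sketch of scaling a tracked value into a sound parameter range.
# All ranges here are assumptions, not our real settings.

def scale(value, in_min, in_max, out_min, out_max):
    """Linearly map value from [in_min, in_max] to [out_min, out_max]."""
    value = max(in_min, min(in_max, value))  # clamp to the input range
    ratio = (value - in_min) / (in_max - in_min)
    return out_min + ratio * (out_max - out_min)

# Example: map a tracked x position (0-640 px) to a hypothetical
# amplifier gain (0.0-1.0) and low-pass cutoff (100-5000 Hz).
x = 320
gain = scale(x, 0, 640, 0.0, 1.0)         # -> 0.5
cutoff = scale(x, 0, 640, 100.0, 5000.0)  # -> 2550.0
```

The clamp at the start matters in practice: tracking noise can briefly push a position outside the camera frame, and without it the scaled parameter would overshoot its intended range.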
You can view the video of our testing session in the lab here.
You can also view some of the pictures taken here.
Up until now we had been working with the main idea of creating the D.I.G.I Glove as the P1 project; however, we did not pay much attention to the sound part of the project. After the presentation of the P0 project we discussed how to put more sound into the project, which led us into a brainstorming process. During the brainstorming we observed that all of the group members found it more challenging and interesting to work with computer vision as the input and sound as the output, with the main goal being manipulated sound. This shift came about because, through attending all our classes, we have gained more knowledge of how to combine computer vision and sound output.
We did some more information retrieval and brainstorming about what is possible and suitable for a third-semester project in Medialogy. The motivation for putting this new concept into action came from the Audiopad device. This led us to our new concept, which so far goes by the name ConDio (controlling audio), in which we will focus on creating new real-time sampling patterns from existing samples, using computer vision as the input.
Basically, we are going to have a camera mounted above or below the application/installation. This camera will track a so far undetermined number of differently coloured objects on a table above or below the camera. Each object will represent a different sound sample, sound effect/filter or function, and the objects are able to interact with one another: by measuring the distance between two objects, the system generates different outputs/variables that affect the sample, filter or function in different ways.
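The distance-based interaction described above can be sketched as follows. This is only an illustration of the mapping, assuming hypothetical pixel coordinates and a hypothetical maximum interaction distance; the real implementation will live in Max/MSP with Jitter.

```python
import math

# Sketch: the distance between two tracked objects controls how
# strongly one object's effect is applied to the other's sample.
# Positions and max_dist are assumptions for illustration only.

def distance(a, b):
    """Euclidean distance between two (x, y) centroids in pixels."""
    return math.hypot(a[0] - b[0], a[1] - b[1])

def interaction_amount(a, b, max_dist=400.0):
    """Map distance to 0..1: closer objects -> stronger interaction."""
    d = min(distance(a, b), max_dist)
    return 1.0 - d / max_dist

sample_pos = (100, 200)  # centroid of a sample object
delay_pos = (400, 200)   # centroid of the delay object
mix = interaction_amount(sample_pos, delay_pos)  # -> 0.25
```

Inverting the normalised distance (`1.0 - d / max_dist`) means that sliding two objects together increases the effect amount, which matches the tangible interaction we have in mind.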
The idea, and the central question, is:
"How can we create new electronic music patterns from existing samples, through simple interaction between different objects placed on a table?"
Our device should be a tangible device, meaning that it reacts on a human sensorial basis; in this case the human senses (touch, hearing and visual perception) are among the ways the user judges the quality of the output against the desired sound. We aim to build the application in Max/MSP together with Jitter. However, we recently discovered that Java can also be embedded in the final application, e.g. for visual output, depending on creativity.
We really look forward to working on this project, because there is a lot of enthusiasm within the group regarding it.