Audio Textures: Opening The Inner Eye Through Music
School: New Mexico School for the Arts
Area of Science: Cognitive Science
Interim Report
Team Number: 80
Houston, We Have A Problem:
How do we perceive our world? Easy. We have five senses: sight, smell, taste, touch, and hearing. However, it seems as though most of our perception is based upon our ability to see. Through sight we can obtain information as minute as the tick of a minute hand on a clock or as grand as a view of the Grand Canyon. It is through the rods and cones of our eyes that light becomes understanding and, therefore, reaction. According to J.J. Gibson, “If we are to understand the problem of why the visual world looks as it does the first thing to do is to look at it.” But what next? Does our perception end there? How can we use more than our eyes to understand? The answer is music. Our project tests whether a human being can use sound, generated by technology, to navigate and perceive the surrounding world.
Solute + Solvent = Solution:
Our solution is to use computer programming to translate the visual world into an audible one. In NetLogo, we are writing code that analyzes an image pixel by pixel and translates its qualities of hue, saturation, and brightness into the musical aspects of timbre, velocity, and pitch. We begin by importing pictures and “performing” them simply from left to right, then manipulate the translation through edge detection, polar coordinates, and color filtering. Once we have a working way to “map” image to sound, we will move on to movies in order to understand how we perceive motion. Our method will be to use a polar coordinate system to see how the “flow field” surrounding the observer expands from a central point, and then convert that change into music. Lastly, after our model can interpret all aspects of the visual world, we will experiment with different ways of translating those aspects into sounds and run tests to see which “mappings” are the most effective and easiest to learn. That is the solution to a new perception.
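To make the hue/saturation/brightness mapping concrete, here is a minimal sketch in Python (our actual model is written in NetLogo; the instrument names, value ranges, and the function itself are illustrative, not our final mapping):

```python
def pixel_to_note(hue, saturation, brightness):
    """Map one pixel's HSB values (each 0.0-1.0) to musical parameters.

    Illustrative only: the real mapping is one of the things we are
    still experimenting with.
    """
    # Hue selects a timbre: divide the color wheel among four instruments.
    instruments = ["piano", "strings", "flute", "marimba"]
    timbre = instruments[int(hue * len(instruments)) % len(instruments)]
    # Saturation drives velocity (how hard the note is struck), 0-127.
    velocity = round(saturation * 127)
    # Brightness drives pitch: brighter pixels play higher notes,
    # here spanning two octaves above middle C (MIDI 60-84).
    pitch = 60 + round(brightness * 24)
    return timbre, velocity, pitch

# A fully bright, fully saturated red pixel:
print(pixel_to_note(0.0, 1.0, 1.0))  # ('piano', 127, 84)
```

Performing an image left to right then amounts to calling a function like this on each pixel column in turn and sending the results to a MIDI player.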
Work To Date:
So far our project is running smoothly. Our NetLogo model currently imports and interprets images, translating x-coordinate into pitch, y-coordinate into time played, brightness into loudness, and hue into timbre. Agents move across the screen and inspect the pixel below them, then play a sound according to the information received. We have also incorporated a major scale, rather than a chromatic scale, into the model for the sake of more harmonious results. Also for the sake of the musical aspect, we have built in a color filter that removes the dominant hue, or “background color,” so that only pixels that vary in color from it are performed. As far as technological integration goes, our model can import images from an Android camera in order to perceive actual surroundings. Most recently, we have added edge detection, which uses a Sobel filter to reduce an image to its basic edges, as well as polar coordinates to modify how we read and translate images. Finally, we can now import movies, and we are using our edge detection and polar coordinate features to interpret them and produce sound in real time.
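The major-scale constraint can be sketched as snapping each chromatic MIDI note to the nearest note of a major scale (a Python illustration of the idea, not our NetLogo implementation; the root and tie-breaking are illustrative):

```python
# Pitch classes of the major scale (C D E F G A B, relative to the root).
MAJOR = [0, 2, 4, 5, 7, 9, 11]

def snap_to_major(midi_note, root=60):
    """Snap a chromatic MIDI note to the nearest note of the major
    scale starting at `root` (default: C major on middle C)."""
    # Position within the octave, relative to the scale's root.
    pc = (midi_note - root) % 12
    # Pick the scale degree closest to that pitch class,
    # measuring distance around the octave circle.
    nearest = min(MAJOR, key=lambda d: min(abs(d - pc), 12 - abs(d - pc)))
    return midi_note - pc + nearest

print([snap_to_major(n) for n in range(60, 66)])  # [60, 60, 62, 62, 64, 65]
```

Out-of-scale notes like C# and D# collapse onto their scale neighbors, which is why the output sounds harmonious rather than chromatic.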
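The dominant-hue filter works on the same principle as this small sketch: find the most common hue and keep only the pixels that differ from it (Python for illustration; the 0-255 hue binning and the `tolerance` parameter are assumptions, not our model's actual values):

```python
from collections import Counter

def remove_dominant_hue(hues, tolerance=0):
    """Return indices of pixels whose hue differs from the most common
    hue (the "background color") by more than `tolerance`.

    Hues are integers 0-255; distance wraps around the color wheel.
    """
    background, _ = Counter(hues).most_common(1)[0]
    return [i for i, h in enumerate(hues)
            if min(abs(h - background), 256 - abs(h - background)) > tolerance]

hues = [10, 10, 10, 10, 200, 10, 55]
print(remove_dominant_hue(hues))  # [4, 6]
```

Only the surviving indices are handed to the agents to perform, so a mostly uniform sky or wall stays silent.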
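For the edge detection step, a Sobel filter convolves the image with two small kernels to estimate horizontal and vertical intensity gradients. A minimal pure-Python sketch of the standard filter (our version runs in NetLogo; this grid-of-lists representation is just for illustration):

```python
import math

# Standard Sobel kernels for horizontal (GX) and vertical (GY) gradients.
GX = [[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]]
GY = [[-1, -2, -1], [0, 0, 0], [1, 2, 1]]

def sobel_magnitude(image):
    """Return the gradient magnitude at each interior pixel of a 2-D
    grayscale image (a list of equal-length rows of numbers)."""
    h, w = len(image), len(image[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            gx = sum(GX[j][i] * image[y + j - 1][x + i - 1]
                     for j in range(3) for i in range(3))
            gy = sum(GY[j][i] * image[y + j - 1][x + i - 1]
                     for j in range(3) for i in range(3))
            out[y][x] = math.hypot(gx, gy)
    return out

# A vertical edge: dark left half, bright right half.
img = [[0, 0, 255, 255]] * 4
edges = sobel_magnitude(img)
print(edges[1])  # [0.0, 1020.0, 1020.0, 0.0]
```

Pixels straddling the boundary get a large magnitude and everything uniform drops to zero, which is what lets the model perform only the outlines of a scene.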
We expect to finish our project on time. Soon, we will be able to start experimenting with real world visual fields in order to test different mappings for our model. Through our different trials, we hope to prove that we can use more than just our sense of sight to perceive our world.
Sponsoring Teacher: Acacia McCombs