
Auditory Displays - perceptualizing information

Sound is wonderful, because, different than sight (which is also beautiful, of course), sound connects things, and sound accommodates many voices at one time and they remain intelligible. Whereas with sight, we see one object, and very rarely do we actually have transparencies and reflections. Sound, as a medium, aesthetically allows us to experience environment as connections between living things, and cycles, and rhythms.

Gordon Hempton (from: http://www.acousticecology.org/writings/writingsquotes.html)

ICAD

Sonification is the use of non-speech audio to convey information. Research in the field of sonification is coordinated by the International Community for Auditory Display (ICAD).

from: http://en.wikipedia.org/wiki/Sonification

ICAD 2006 at Queen Mary, University of London

ICAD is an international conference held annually on the topic of Auditory Display. The 2006 edition will be hosted at Queen Mary, University of London by the Interaction, Media and Communication research group in the Department of Computer Science.

http://www.dcs.qmul.ac.uk/research/imc/icad2006/index.php

Auditory Seismology by Florian Dombois

Seismic waves usually have a frequency spectrum below 1 Hz, so earthquakes accompanied by audible sound are rare. The human hearing range spans roughly 20 Hz to 20 kHz, far above the spectrum of the earth's rumbling and tumbling. This is one of the reasons why seismometric records are commonly studied by eye, using visual criteria. Nevertheless, if one compresses the time axis of a seismogram by a factor of about 2000 and plays it through a speaker (so-called 'audification'), the seismometric record becomes audible and can be studied by ear, using acoustic criteria.
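
A minimal sketch of this time-compression audification in Python, assuming the seismogram is already available as a 1-D NumPy array (the file name and the 20 Hz recording rate are illustrative, not taken from Dombois's setup):

```python
import numpy as np
from scipy.io import wavfile

# Hypothetical input: a seismogram as a 1-D array of ground-motion
# samples recorded at 20 Hz (a typical broadband seismometer rate).
seismogram = np.load("seismogram.npy")
recording_rate_hz = 20

# Audification: leave the samples untouched and simply play them back
# faster. A compression factor of 2000 shifts a 0.01-1 Hz signal to
# 20-2000 Hz, i.e. into the audible range.
compression = 2000
playback_rate_hz = recording_rate_hz * compression  # 40 kHz

# Normalize to the 16-bit range and write a WAV file at the new rate.
scaled = seismogram / np.max(np.abs(seismogram))
wavfile.write("audified.wav", playback_rate_hz,
              (scaled * 32767).astype(np.int16))
```

No resampling or interpolation is involved; the compression comes entirely from declaring a higher playback rate for the unchanged samples.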

Philosophical and psychological research shows that there is a substantial difference between seeing and hearing a data set, because each brings out and accentuates different aspects of a phenomenon. From a philosophical point of view, the eye is good at recognizing structure, surface, and steadiness, whereas the ear is good at recognizing time, continuity, remembrance, and expectation. For studying aspects such as tectonic structure, surface deformation, and regional seismic risk, visual modes of depiction are hard to surpass. But for questions of temporal development, of characterizing a fault's continuum, and of the tension between past and expected events, the acoustic mode of representation seems very suitable.

see: http://www.auditory-seismology.org/version2004/

Audio Microscope by Joe Davis

Part of my installation at Ars Electronica, undertaken in collaboration with Katie Egan, pertains to a question about two "singing plants". Almost exactly two years ago, a young pre-med student approached me with an interesting question. She had recently returned from South America, where she had carried out field work in the Ecuadorian rain forest. There she had encountered a Native American brujo, or "medicine man". The brujo had told her that a given species of plant in the mountains sings a different song than the same species of plant in the valley. The student wanted to know if it was possible to "listen" to plant cells.

All acoustic phenomena, including "sound", are the result of mechanical movements of physical objects within or upon the surface of a solid, gaseous, or liquid acoustic medium such as steel, air, or water. In the case of the acoustic phenomena we call "sound", the movement of physical objects occurs at or close to audio frequency, so that the resulting waves or pattern of waves passing through an acoustic medium do so at audio frequency. When these audio-frequency waves impinge on the human listening apparatus (the inner ear), the result is that "sound" is perceived in the human brain.

To begin with, it seemed to me that the problem wasn't that cells are naturally "mute". Many of them - and their flagella, cilia, pili, etc. - are normally at least partly engaged in activities that appear to occur at audio frequency. Further, no non-dormant living organisms are known to exist in a vacuum or otherwise outside of an acoustic medium.

At the time, there were to my knowledge no existing microphones of sufficient sensitivity to register microacoustic signatures of individual (microscopic) cells. The function of conventional microphones generally depends on the mechanical motion of crystals or diaphragms that react to impinging sound waves. Sound waves generated by individual cells or microorganisms are simply too weak to effect such movements in mechanical listening apparatus.

Conventional microphones translate audio-frequency sound waves into audio-frequency electrical (electromagnetic) signals. These electrical signals may then be routed through amplifiers, equalizers, and other electronic audio equipment, and eventually into speakers or earphones, where the electrical signals are transduced back into "sound". At the speaker, sound is created when an electromagnetically driven diaphragm or crystal produces corresponding sound waves in the surrounding air.

At the turn of the last century, Alexander Graham Bell built what was probably the world's first optical transducer of sound waves. He called it a "photophone". Instead of translating sound into electrical signals, Bell built an apparatus that turned sound waves into audio-frequency pulses of light. He also built "detectors" that would convert audio-frequency pulses of light into electrical signals, which could then be converted into sound. To construct my audio microscopes I also used optical detectors, together with specially illuminated stages and microscope slides that allow only light reflected from the surfaces of specimens to enter the objective lens of the microscope. These optical signals are then transduced into electrical signals via detectors mounted on the microscope eyepiece. The electrical signals are subsequently routed through more or less conventional audio equipment so that they may be perceived as sound in the ear/brain of the user/observer.
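
As a rough illustration of the signal chain described here, the sketch below simulates a photodetector output as a steady illumination level carrying a faint audio-frequency flicker, then AC-couples and normalizes it into an audio file. Every value is invented for the example; this is not a model of Davis's actual apparatus:

```python
import numpy as np
from scipy.io import wavfile

# Simulated photodetector output: constant illumination plus a tiny
# audio-frequency flicker, standing in for light reflected off a
# vibrating specimen. The 440 Hz "motion" is a hypothetical stand-in.
rate = 44100
t = np.arange(0, 2.0, 1 / rate)
flicker = 0.002 * np.sin(2 * np.pi * 440 * t)
detector = 1.0 + flicker + 0.0005 * np.random.randn(t.size)

# AC-couple (drop the steady illumination level) and normalize: in
# essence what the detector-to-audio-equipment chain does before the
# signal reaches a speaker or earphone.
audio = detector - detector.mean()
audio /= np.max(np.abs(audio))
wavfile.write("optical_pickup.wav", rate, (audio * 32767).astype(np.int16))
```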

At early stages of this work I was surprised to find a wide range and diversity of information in the microacoustic world. In the lab we find organisms on an almost daily basis that we have never seen or listened to before, so we now routinely listen to organisms for the first time. Different organisms make different sounds, in the way that, say, the sounds of horses are perceived as different from the sounds of sheep. My experiments with spectrum analysis tend to reinforce that notion: I found that slightly different acoustic signatures corresponded to slightly different species of microorganisms. Paramecium multimicronucleatum, for instance, has a slightly different audio signature than Paramecium caudatum. The signatures of a given species, however, tend to be uniquely distinct to that species. So, as it turns out, the two plants of the same species must indeed "sing the same song", unless perhaps the Ecuadorian brujo knows of some exceptional organism unlike those we have observed to date.


from: http://www.aec.at/festival2000/texte/artistic_molecules_2_e.htm
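
The species-by-species comparison of acoustic signatures mentioned in the passage above could be approximated with standard spectrum analysis. The sketch below computes a normalized average power spectrum of each recording and compares two of them by cosine similarity; the file names are hypothetical, both recordings are assumed to share one sample rate, and the similarity measure is a generic illustration rather than Davis's actual method:

```python
import numpy as np
from scipy.io import wavfile
from scipy.signal import welch

def spectral_signature(path, nperseg=4096):
    """Normalized average power spectrum of a recording."""
    rate, audio = wavfile.read(path)
    audio = audio.astype(float)
    if audio.ndim > 1:                      # mix stereo down to mono
        audio = audio.mean(axis=1)
    freqs, power = welch(audio, fs=rate, nperseg=nperseg)
    return power / power.sum()              # loudness-independent shape

# Hypothetical recordings made with an audio microscope.
sig_a = spectral_signature("p_caudatum.wav")
sig_b = spectral_signature("p_multimicronucleatum.wav")

# Cosine similarity: near 1 for matching signatures, lower otherwise.
similarity = np.dot(sig_a, sig_b) / (np.linalg.norm(sig_a) * np.linalg.norm(sig_b))
print(f"spectral similarity: {similarity:.3f}")
```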


Sonenvir

SonEnvir is a research project that investigates the application of sonification in a number of scientific disciplines, in order to develop a general sonification software environment. It is the first collaboration of the four universities in Graz, Austria: the Karl Franzens University, the University of Technology, the Medical University, and the University for Music and Dramatic Arts.

from: http://sonenvir.at/

related:

Sine Clock

If you spend enough time in a place, you will begin to know the patterns of that place - the sun and moon, traffic, tides, smells, faces, sounds. SineClock presents an aural pattern - the interaction of three sets of sine waves - representing the time of day.

http://music.columbia.edu/~douglas/portfolio/sineclock/
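
The page does not document SineClock's exact mapping, but a hypothetical version of the idea, letting hours, minutes, and seconds each set the frequency of one sine wave, might look like this (all frequency ranges are invented for illustration):

```python
import numpy as np
from datetime import datetime
from scipy.io import wavfile

# Hypothetical mapping: each component of the current time picks the
# frequency of one sine wave; the three waves are mixed into one tone.
now = datetime.now()
freqs = [
    110 + (now.hour % 12) * 20,   # hours:   110-330 Hz
    220 + now.minute * 5,         # minutes: 220-515 Hz
    440 + now.second * 10,        # seconds: 440-1030 Hz
]

rate = 44100
t = np.arange(0, 3.0, 1 / rate)
tone = sum(np.sin(2 * np.pi * f * t) for f in freqs) / len(freqs)
wavfile.write("sineclock_moment.wav", rate, (tone * 32767).astype(np.int16))
```

A listener who hears such a tone regularly could, in principle, learn to read the time of day from the beating and interaction of the three waves, which is the pattern-familiarity idea the description above appeals to.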

Terminology

  • Audiation - n. The mental review of sonic experiences with an auditory display [AD pp 188]. C.f. ideation - n. the power of the mind for forming ideas [Ch].
  • Audification - n. the direct playback of data samples [AD pp xxvii]: the direct conversion of data to sound [AD pp 190]. C.f. sonification.
  • Audiolisation - n. see auralisation.
  • Auditory icon - n. a mapping of computer events and attributes to the events and attributes that normally make sounds...In general, the result is to relate interface sounds to their referents in the same way that natural sounds are related to their sources and, thus, to allow people to use their existing everyday listening skills in listening to computers [Gaver, AD pp 420]: a caricature of sounds occurring as a result of our everyday interactions with the world ... mapped onto events and objects in the interface about which [it provides] auditory feedback [Lucas, An Evaluation of the Communicative Ability of Auditory Icons and Earcons, 1994]. C.f. earcon. An auditory icon uses sound effects whereas an earcon is music-based.
  • Auralisation - n. the auditory representation or "imaging" of data [AD pp xxvii]: the representation of program data using sound...an auralisation is based on the actual execution data of the program [Jackson, AD pp 292]. C.f. sonification.
  • Earcon - n. tone or sequence of tones as a basis for building messages [Blattner, AD pp 450]: a nonverbal audio message used in the user-computer interface to provide information to the user about some computer object, operation, or interaction: the aural counterpart of an icon [Blattner et al. Earcons and Icons: Their Structure and Common Design Principles, 1989]. C.f. auditory icon.
  • Sonification - n. a mapping of numerically represented relations in some domain under study to relations in an acoustic domain for the purposes of interpreting, understanding, or communicating relations in the domain under study [Scaletti, AD pp 224]: data-controlled sound [AD pp xxvii]: "Processes that disrupt the relationships of successive samples in favour of simplifying and enhancing features of the data, such as multiplying the data by a cosine wave, would be classified as sonification" [AD pp 190].

from: http://computing.unn.ac.uk/staff/cgpv1/lexicon1.htm
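
To make the audification/sonification distinction in the lexicon above concrete, the following sketch renders the same data series both ways: audification plays the samples directly as a waveform, while sonification maps each datum to the pitch of a short tone. The input file is hypothetical and the pitch mapping is an arbitrary choice for illustration:

```python
import numpy as np
from scipy.io import wavfile

rate = 44100
data = np.loadtxt("series.txt")   # hypothetical 1-D data series

# Audification: the data samples themselves become the waveform.
wave = data / np.max(np.abs(data))
wavfile.write("audification.wav", rate, (wave * 32767).astype(np.int16))

# Sonification: each datum is mapped to the pitch of a 100 ms tone,
# so relations in the data become relations in the acoustic domain.
lo, hi = data.min(), data.max()
pitches = 220 * 2 ** (3 * (data - lo) / (hi - lo))   # 220 Hz to 1760 Hz
t = np.arange(0, 0.1, 1 / rate)
tones = np.concatenate([np.sin(2 * np.pi * f * t) for f in pitches])
wavfile.write("sonification.wav", rate, (tones * 32767).astype(np.int16))
```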
