Sonification of network traffic using the Gullibloon framework:

"not to know anything about the beauty of webserver events is one of the greatest misses in everyones life" (dozent z, basel 2001)

This is an example of using Gullibloon components combined with audio generation software, in this case PD:

"PD (aka Pure Data) is a real-time graphical programming environment for audio, video, and graphical processing." http://puredata.org

The stream is driven by several types of events produced by a computer / webserver; these are gathered and translated into OSC messages, which PD can understand and process.

Data Processing and Management

All data is processed and managed with Python, a popular scripting language: http://www.python.org
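As a minimal sketch of that gathering side in Python (assuming the python-osc package and an example port of 9000; the actual Gullibloon components may use a different OSC implementation, port and argument layout), a webserver hit from a previously unseen address could be forwarded to PD like this:

from pythonosc.udp_client import SimpleUDPClient

# PD picks these messages up with its OSC receiving objects on the same port
pd = SimpleUDPClient("127.0.0.1", 9000)

# one hit from a previously unseen address yields two of the events listed below
pd.send_message("/new/host", ["203.0.113.7"])
pd.send_message("/new/connection", ["203.0.113.7", 80])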

Sound Generators Used:

Synthetic sources:

1) The PeRColate library for Pure Data

"A collection of synthesis, signal processing, and image processing objects"

http://www.music.columbia.edu/PeRColate/

Especially used: the physical models of a bowed bar and a mandolin

2) The PAF: a cosine ring modulator combined with a waveshaper

Sample sources:

3) A polyphonic sample player and a set of percussive and harmonic samples

Events:

Message names, explanation and musical representation (a small bookkeeping sketch in Python follows this list):

1) /new/connection

Every time a host sends some data for the first time, a new connection is established.

2) /new/host

Appears, e.g., when someone views a webpage on the monitored server, or checks their email via the server.


3) /delete/host

Every host has an expiration time; when this time is exceeded without the host sending or receiving any data, the host is deleted.

4) /delete/connection

Basically the same as /delete/host, applied to connections.
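To make the host bookkeeping concrete, here is a hypothetical Python sketch (timeout value, names and structure are illustrative, not taken from the actual Gullibloon code): a host is created the first time it is seen and deleted after an idle timeout, each transition emitting one of the OSC messages above.

import time

HOST_TIMEOUT = 60.0   # example idle time in seconds before /delete/host
hosts = {}            # address -> timestamp of the last packet seen

def saw_packet(addr, send):
    # send is any callable that emits an OSC message, e.g. pd.send_message
    if addr not in hosts:
        send("/new/host", [addr])
    hosts[addr] = time.time()

def expire_hosts(send):
    # called periodically; drops hosts that have been silent for too long
    now = time.time()
    for addr, last_seen in list(hosts.items()):
        if now - last_seen > HOST_TIMEOUT:
            del hosts[addr]
            send("/delete/host", [addr])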

The audio generation is divided into 4 layers:

a) event layer: network events trigger different sound generators

b) loop layer: as a meta layer, the musical output of the event-based audio generation is repeatedly recorded into buffers (also triggered by network events) and played back at different speeds and pitches. This layer serves as musical "glue" between the bare network events by producing repetitive rhythmic or harmonic patterns (see the sketch after this list).

c) mixing layer: each sound source is connected to an audio matrix that can dynamically route sound inputs to sound outputs

d) effect layer: this patch processes the generated musical output with delay, distortion and a filter bank
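As a schematic illustration only (the real loop layer is built from PD's record and playback objects, not Python), this sketch shows the varispeed idea behind layer b): reading a recorded buffer back faster shortens the loop and raises its pitch, reading it slower stretches the loop and lowers the pitch.

import numpy as np

def varispeed(buffer, speed):
    # read the recorded loop at a fractional rate; speed > 1 raises the
    # pitch and shortens the loop, speed < 1 lowers and stretches it
    n_out = int(len(buffer) / speed)
    positions = np.arange(n_out) * speed
    return np.interp(positions, np.arange(len(buffer)), buffer)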




