I woke up at 3AM and decided to extend the mapping used for the sonification of Robert Hart's 3GM homemade muon detector. You can read what I have done previously in terms of mapping here. The way in which I have extended the mapping is relatively opaque and distanced from the actual data - this is because mapping sound to what is essentially a one-bit stream in a meaningful way feels a little like squeezing blood from a stone.
You can listen to an outcome here. This does not represent the final state of a mapping with which I am happy, but it is some sort of start. I hope.
However, I consider this a challenge and a way to overcome limitations rather than a negative aspect. As a result, the mapping is a little more abstract and indirect now compared to my previous example.
The mapping and subsequent audio output have been extended in a number of ways. Instead of triggering just one sample, the mapping triggers a total of 11 samples - 3 of which are triggered 'directly' from the incoming data, and 8 of which form part of a drum machine.
The three 'directly' triggered samples have their velocity controlled by the accumulator system (as described in my previous blog entry on this subject). One of the samples also has its pitch set by the accumulator.
With the more indirect samples (the drum machine), the data that controls them is generated in a more complex fashion. In a sense, the first part of this generation / interpolation aspect is similar to the accumulator setup.
Keep in mind that the muon detector can detect at most one ray at any given moment, at a single fixed position. In terms of the data output, this means that when a ray is detected, a high state (a 'one', basically) is sent to the computer.
What the computer does, in order to generate the drum machine data, is to poll the output of the muon detector once every 350 milliseconds (this can be changed based on the environment and how 'busy' the outcome should be). If the computer has detected a cosmic ray during that polling period, a high state is stored; otherwise, a low state is stored.
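The polling step above can be sketched roughly as follows. This is a minimal illustration, not the actual code: the `detector_hit` function is a stand-in for however the real detector output reaches the computer (serial line, GPIO, etc.), simulated here with random hits.

```python
import random
import time

POLL_MS = 350  # polling period; adjustable for 'busier' or sparser output

def detector_hit():
    """Stand-in for reading the real detector output.
    Here a cosmic ray 'arrives' on roughly 10% of polls."""
    return random.random() < 0.1

def poll_once():
    """One polling period: return 1 if a ray was seen, else 0."""
    # In the real setup the detector would be watched for the whole
    # 350 ms window; the sleep stands in for that wait here.
    time.sleep(POLL_MS / 1000.0)
    return 1 if detector_hit() else 0
```

Each call to `poll_once` yields the single bit that gets stored for that period.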
This series of 1's and 0's, one per polling period, is placed in a 32-bit buffer (which, as a result of continuous detection polling, is being updated constantly). This 32-bit buffer forms the basis for the two high bytes and the two low bytes used in my bitwise rhythm generator (which can be read about here and here).
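A rolling 32-bit buffer of this kind can be sketched with two small helpers - the function names here are my own, purely for illustration:

```python
def push_bit(buffer, bit):
    """Shift the newest poll result into a rolling 32-bit buffer,
    discarding the oldest bit off the top."""
    return ((buffer << 1) | (bit & 1)) & 0xFFFFFFFF

def split_words(buffer):
    """Split the 32-bit buffer into two 16-bit words: the two high
    bytes and the two low bytes fed to the rhythm generator."""
    high_word = (buffer >> 16) & 0xFFFF
    low_word = buffer & 0xFFFF
    return high_word, low_word
```

Every polling period pushes one new bit, so the two words drift continuously as rays arrive.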
Basically, this bitwise rhythm generator uses these two words (so, two high bytes and two low bytes, or 32 bits' worth of information) and a set of user-determined logic gate operations to form a looping rhythmic sequence of one to four bars in length (depending on the user settings). The data from the bitwise rhythm generator is sent to Ableton Live as MIDI data - in a format suitable for the Impulse device.
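The post doesn't show the generator's internals, but the core idea of combining two words with a logic gate to get a trigger pattern can be sketched like this (one 16-step pattern per word pair; the operation names and step layout are assumptions of mine):

```python
# User-selectable logic gate operations applied between the two words.
OPS = {
    "AND": lambda a, b: a & b,
    "OR": lambda a, b: a | b,
    "XOR": lambda a, b: a ^ b,
}

def rhythm_steps(high_word, low_word, op="XOR"):
    """Combine the two 16-bit words with a logic gate and read the
    result out as a 16-step trigger pattern (MSB first). A 1 means
    a drum hit on that step."""
    pattern = OPS[op](high_word, low_word) & 0xFFFF
    return [(pattern >> (15 - i)) & 1 for i in range(16)]
```

In the real setup each 1 in the pattern would become a MIDI note for the Impulse device; chaining patterns from successive buffer states would give the one-to-four-bar loops described above.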
This bitwise rhythm generator also includes provision for combining the data inside of the sequencer to generate MIDI CC messages (to be precise, message streams). The idea is to be able to use just four bytes of data to generate (relatively) complex rhythms as well as control data.
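The post doesn't specify how the sequencer data is combined into CC values, so the mixing below is purely a guess at one plausible scheme: pair up the four bytes per step and scale the result into the 0-127 MIDI CC range.

```python
def cc_value(high_word, low_word, step):
    """Derive a 0-127 CC value for a given sequencer step by mixing
    the four bytes of the two words. The exact mixing here is a
    hypothetical example, not the scheme from the post."""
    bytes_ = [
        (high_word >> 8) & 0xFF, high_word & 0xFF,
        (low_word >> 8) & 0xFF, low_word & 0xFF,
    ]
    # Sum a rotating pair of bytes, wrap to 0-255, then halve
    # to land in the MIDI CC range of 0-127.
    mixed = (bytes_[step % 4] + bytes_[(step + 1) % 4]) & 0xFF
    return mixed >> 1
```

Stepping through the sequence this way turns the same four bytes that drive the rhythm into a slowly evolving control stream.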
This MIDI CC data is then also sent to Ableton Live, where it controls various effect and instrument parameters associated with the previously mentioned Impulse device. Approximately forty parameters are controlled from just three MIDI CC streams, from reverb depth, delay times, frequency cutoffs and compression thresholds, to transposition, time compression / expansion, bit crushing and saturation character. See the full list in the screenshots below.