Sunday, January 06, 2013

How Much SNR Bandwidth is Required to (Re)Construct Audio Signals On a Cognitive Level?

I am interested in the cognitive, process-based and technical aspects of storing audio signals at very low amplitude levels: what effect this has on the signal, and how such a signal is interpreted by the brain (assuming the brain has a prior memory of the original signal).

Here is an initial experiment.

Listen to the following source (original) signal. It is a loop that fades out over 40 seconds or so.



This signal was then sent through a series of gain stages, which applied a uniform gain of between -88 dB and -94 dB via the master bus. For reference, the theoretical SNR of a 16-bit signal is approximately 6.02 dB x 16 + 1.76 dB ≈ 98.1 dB.
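To put these gain values in context, here is a minimal sketch (plain Python, using the standard 6.02 × N + 1.76 dB rule of thumb) comparing the attenuated signal amplitude with the size of a single 16-bit quantisation step. At -88 dB to -94 dB the signal sits at roughly one least-significant bit, which is why so little of it survives quantisation.

```python
# Standard rule of thumb for an N-bit quantiser: SNR ≈ 6.02*N + 1.76 dB
def theoretical_snr_db(bits):
    return 6.02 * bits + 1.76

print(f"16-bit theoretical SNR: {theoretical_snr_db(16):.2f} dB")  # ~98.1 dB

# Compare the attenuated amplitudes (-94 dB ... -88 dB) with the size of a
# single 16-bit quantisation step (one LSB relative to full scale).
lsb = 1.0 / 2**15
for gain_db in range(-94, -87):
    amp = 10 ** (gain_db / 20.0)
    print(f"{gain_db} dB -> {amp:.2e} of full scale ({amp / lsb:.2f} LSB steps)")
```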


A file was rendered for each single-decibel increment from -94 dB to -88 dB. These were then normalised to 0 dBFS. You can watch this process below.
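For anyone who wants to approximate the batch offline, the following is a rough sketch of the render-and-normalise step. The soundfile library and the file names are assumptions on my part, and no dither is applied, so this only approximates the master-bus render rather than reproducing the exact DAW process.

```python
import numpy as np
import soundfile as sf  # assumed WAV I/O library; any equivalent would do

SRC = "source_loop.wav"  # hypothetical file name for the original loop

audio, sr = sf.read(SRC)

for gain_db in range(-94, -87):  # -94 dB ... -88 dB in 1 dB steps
    # Apply the uniform gain reduction
    attenuated = audio * 10 ** (gain_db / 20.0)

    # Quantise to 16-bit integers (rounded, no dither)
    quantised = np.round(attenuated * 32767).astype(np.int16).astype(np.float64)

    # Normalise the quantised result back up to 0 dBFS
    peak = np.max(np.abs(quantised))
    normalised = quantised / peak if peak > 0 else quantised

    sf.write(f"render_{abs(gain_db)}dB_normalised.wav", normalised, sr)
```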


You can see the resulting waveforms below. Note how each additional decibel results in a significant improvement in signal reconstruction (as expected), but the noise also still carries some of the signal, in the context of the original rhythm or sound. This is difficult to describe, but when listening to these examples I find it an interesting effect.
