Distance Vs. Intensity

It was now time to set forth and discover the relationship between a sound's distance and its loudness. I believe most people would find this relationship self-evident: loudness should decrease as distance increases. We assume this to be true in all cases, but our old pal physics makes it a little trickier.

 

In order to clarify the nature of this relationship, I went to Studio F and borrowed the JBL Eon speakers. I placed them in this fashion:

[Photo: speaker placement]

 

I then connected the speakers to my tone generator using an eighth-inch to dual quarter-inch cable. Since each test was limited to only 8 seconds, I set the tone generator to start playing in Logic at t = 0 and, since the tone generator is a Logic plugin, told Logic to bypass the plugin at t = 8 seconds.

 

Starting with a tone generated at 1,000 Hz, I loaded my SPL meter on my iPhone and set up three desks across the room: one 3 feet away from the JBLs, one 6 feet, and one 12 feet. At each position I recorded the SPL reading for each of the different test tones, which can be seen in the chart below:

[Chart: SPL readings at 3, 6, and 12 feet for each test tone]

 

The midrange frequencies acted pretty much the same. Due to the limited sound field (being only 3 feet from very large speakers), the close position actually recorded fewer dB than the middle-of-the-room readings at 6 feet. Beyond that quirk, loudness decreased with distance, as would be expected.
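
The textbook expectation here, assuming a free-field point source (which a live room only roughly approximates), is a drop of about 6 dB per doubling of distance. A quick sketch, using a hypothetical 90 dB reading at the 3-foot desk:

```python
import math

def spl_at_distance(spl_ref_db, d_ref, d):
    """Inverse-square law for a point source in a free field:
    SPL falls by 20*log10(d/d_ref) dB, i.e. about 6 dB per doubling."""
    return spl_ref_db - 20 * math.log10(d / d_ref)

# Hypothetical 90 dB reading at the 3-foot desk:
for d in (3, 6, 12):
    print(f"{d:>2} ft: {spl_at_distance(90.0, 3.0, d):.1f} dB")
```

Real rooms add reflections and speaker directivity on top of this, which is exactly why the measured numbers wander from the ideal curve.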

Oddly, the one frequency that behaved very differently was the low 100 Hz tone. This frequency actually recorded the highest dB level furthest from the source. I had a couple of theories as to why. My first impression was that an external sound was interfering (possibly from Studio A). However, I figured the soundproofing must be pretty good, so it had to come down to science. I ultimately concluded that since lower frequencies take longer to complete a cycle, there must be a relationship between time and cycle length that makes low frequencies take more time to build up. A further aspect of this build-up could be that lower frequencies are more reactive to walls, and since the 12-foot station was right next to a wall, the sound waves may have been reinforcing against it.
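
One number that supports the wall theory: at roughly 1,125 ft/s, a 100 Hz wave is about 11 feet long, comparable to the 12-foot distance and to typical room dimensions. That is exactly the regime where wall reflections and standing waves can reinforce low frequencies near a boundary. A quick sketch (the speed-of-sound figure is an approximation for room-temperature air):

```python
SPEED_OF_SOUND_FT_S = 1125.0  # approximate speed of sound in air at room temperature

def wavelength_ft(freq_hz):
    """Wavelength = speed of sound / frequency."""
    return SPEED_OF_SOUND_FT_S / freq_hz

for f in (100, 1000):
    print(f"{f} Hz wave is {wavelength_ft(f):.2f} ft long")
```

A 1,000 Hz wave, by contrast, is barely a foot long, so it doesn't interact with the room's dimensions in the same way.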

 

Listen-Record-Listen

[Photo: Thomas Paine Park]

On Monday afternoon I strolled down to Thomas Paine Park, located conveniently close to my abode at 80 Lafayette Street. The goal was this: to get an ambient recording of a relatively quiet area so that I could analyze noise that is not usually noticed or appreciated. Empty, pseudo-white noise can be quite complex and, barring interference from the recording device itself, should make for a pretty cool analysis of noise matter.

I armed my little GarageBand app on my phone and hit record. My surroundings, at the time, seemed somewhat clear and indistinct. I sat and waited:

As I finished my recording, I figured that truck traffic had probably interfered with my sound's consistency at times. I also wondered whether my subtle shifting could have triggered my phone's compressor, threatening the true natural ambience. The majority of my worries, though, lay in describing layers of indistinct tiny noises in musical terms. I suppose the sound of the city has a harsh, brittle timbre compared to the smoother ambience of the country. Surely the noise of machinery and hustle will activate more sound in the higher 2-5 kHz region. But at the same time, trucks and such also emit very low frequencies because of their engine size. Perhaps, if we are to compare city to country, the city is just overall louder, but we attribute this loudness to brittleness since 2-5 kHz is the most sensitive range for our ears.

 

Next up, I made my way down to campus to visit the library, where I would get some solo recordings.  I found my way to an empty corridor that overlooked the Bobst lobby.

 

Although I think it would have helped if there were not so much lobby noise, there is ultimately no true quiet in New York. I recited the audio tests, as heard below:

I then made my way to the big concert hall in Steinhardt's Education building. I repeated the task, as shown again in the sample below:

And finally I returned to my dorm room and recorded, as can be seen here:

 

I then had a small-room recording and two huge-room recordings to work with. The first thing I noticed upon playback was the reverberation. The small room had a far more noticeable reverb, one that ran right up against the original voice but didn't have much of a tail at all. The proximity of the walls seemed to bounce my voice back and forth quickly, but the items in the room and the plaster walls quickly killed the tail. The larger rooms, however, had little noticeable reverb. One might ask, "Hey, since you're in a bigger room, wouldn't you expect a bigger reverb?" The answer seemed to be no; my relatively small voice wasn't nearly loud enough to carry to the walls, and therefore it died on impact. There is probably an extremely subtle reverb, but without walls in close proximity it was undetectable.

Thus, the library and music hall were the most reliable in terms of true raw voice clarity, considering that the small-room reverb sounded pretty weird and obtrusive. The hall recordings, by contrast, sounded clean, barring the subtle background noises I picked up. However, just listening to the recordings side by side makes you favor the hall recording even more: for reasons I can't fully explain, the hall recording makes my voice sound much fuller, much stronger, whereas the room voice sounds dulled. These mysteries, and many many more, should be solved with subsequent research in sound acoustics.
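
One formula worth knowing here is Sabine's equation, the classic estimate of reverberation time. It predicts that a big hall, once excited, decays much more slowly than a small furnished room, which is consistent with what I heard: the hall's reverb wasn't absent, my voice was just too quiet to excite it audibly. A sketch with entirely hypothetical room numbers:

```python
def sabine_rt60(volume_m3, surfaces):
    """Sabine's formula: RT60 = 0.161 * V / A, where A (in metric sabins)
    is the sum over surfaces of (area * absorption coefficient)."""
    total_absorption = sum(area * coeff for area, coeff in surfaces)
    return 0.161 * volume_m3 / total_absorption

# Hypothetical numbers: a small, absorbent dorm room vs. a large, harder-surfaced hall.
dorm = sabine_rt60(30, [(50, 0.30)])
hall = sabine_rt60(10000, [(3000, 0.25)])
print(f"dorm: {dorm:.2f} s   hall: {hall:.2f} s")
```

RT60 is the time for sound to decay by 60 dB; the formula says nothing about how loud the reverb is in the first place, which is where the smallness of a lone voice comes in.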

Mixerz

Ahh yes, the world of noise color, sine waves, and EQ analysis. The only thing that makes this world even better is the subsequent manipulation of these noises that only a mixer can provide!

The first step in my experimenting was to acknowledge the properties of my mixer. I used the mixer in Logic Pro 9 to conduct my experiments, and a screenshot of this mixer is shown below:

[Image: the Logic Pro 9 mixer]

 

Even though 16 mixer tracks weren't needed for this experiment, it still looks cool to have them up, and it simulates a real analog mixer. The track I was focused on, though, was track one, as shown here:

[Image: the channel strip for track one]

 

The first thing to note is the input at the very top. Logic is unlike standard analog mixers in that there is no gain knob at the top, but rather a Channel Strip Setting. This is used as a way to save particular settings, covering both inputs and inserts. It takes away the duty of readjusting your mixer for each instrument and expedites the mixing process. It is unimportant, though, to the actual processing of sound. What is primarily important is the I/O in the center of the strip. The input here is an EVP88 (a virtual instrument provided by Logic, controlled by MIDI). Whether the input is a virtual instrument or an audio signal connected via an audio interface, the idea is the same; both produce a signal that, by default, runs to the main output (Output 1-2).

Next, one should not neglect the various sending options Logic provides. To send audio to an auxiliary track (which will, in turn, run to the main output), you can reroute the channel's output to a bus. Busses can also be fed by the Sends, which pass audio along at varying levels.

Next comes everyone's favorite part: the inserts. Logic is set up in a way that negates the need for EQ knobs by making visual equalization available through the inserts. Accordingly, the first insert shown in the image is an EQ chart, which connects to the Test Oscillator below it. Finally, below the inserts there are the channel fader and solo/mute buttons, as well as a panning knob.
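
The signal flow just described (input, then inserts in order, then fader and pan, then the output bus) can be sketched as a toy model. To be clear, this class and its numbers are purely illustrative, not Logic's actual internals:

```python
class ChannelStrip:
    """Toy model of a mixer channel strip's signal flow:
    input -> inserts (in order) -> fader -> pan -> output bus."""

    def __init__(self, fader_db=0.0, pan=0.0):
        self.inserts = []         # functions applied to the signal, top to bottom
        self.fader_db = fader_db  # fader position in dB
        self.pan = pan            # -1.0 = hard left, +1.0 = hard right

    def process(self, sample):
        for effect in self.inserts:
            sample = effect(sample)
        sample *= 10 ** (self.fader_db / 20)       # fader gain
        left = sample * min(1.0, 1.0 - self.pan)   # simple linear pan law
        right = sample * min(1.0, 1.0 + self.pan)
        return left, right

# A channel with one insert (a stand-in for any effect) and the fader at -6 dB:
strip = ChannelStrip(fader_db=-6.0)
strip.inserts.append(lambda s: s * 0.5)
print(strip.process(1.0))
```

The key point the model captures is ordering: inserts hear the raw input, while the fader and pan act on whatever the inserts produced.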

Now that I had my tools, I was ready to experiment. I first passed a 1 kHz signal through the channel via the Test Oscillator, shown below. On an EQ analysis graph, this sine wave shows a very obvious peak at the 1 kHz mark. Using EQ, I attempted to manipulate the signal. Unfortunately, no effect was generated other than the signal's volume rising and falling as I swept over it. So if you're trying to detune a sine wave using EQ, you're in for a bad time, buster.

[Image: the 1 kHz sine on the EQ analyzer]
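
There's a clean reason EQ can only change the sine's volume: a pure sine has energy in exactly one frequency bin, so any linear EQ can only scale (and phase-shift) that single component; there is nothing else to reshape. A small sketch with a plain-Python DFT, where a 2x gain stands in for an EQ boost at the peak frequency:

```python
import cmath
import math

N = 64
sine = [math.sin(2 * math.pi * 8 * n / N) for n in range(N)]  # 8 cycles in 64 samples

def dft_magnitudes(signal):
    """Magnitude spectrum (first half) of a real signal, via a plain DFT."""
    n = len(signal)
    return [abs(sum(signal[t] * cmath.exp(-2j * math.pi * k * t / n)
                    for t in range(n))) / n
            for k in range(n // 2)]

mags = dft_magnitudes(sine)
peak = max(range(len(mags)), key=mags.__getitem__)

# An "EQ boost" at the peak: since bin 8 is the only nonzero component,
# a gain at that frequency is equivalent to scaling the whole signal.
boosted_mags = dft_magnitudes([2.0 * s for s in sine])

print(peak, mags[peak], boosted_mags[peak])  # same peak bin, doubled magnitude
```

The peak stays in the same bin before and after the "boost"; only its height changes, which is exactly the volume-only behavior observed above.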

 

However, some goofy results came from testing the white and pink noises. White noise, interestingly, produced a level signal across all bands of the EQ analyzer. By adjusting the EQ, you can produce the rising and falling noise builds that we all know and love from EDM. Pink noise, which carries equal energy per octave (so it slopes downward toward the high end on the analyzer), could be shaped into the same effect, although it's less useful for our beloved EDM because of the increased low end.
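
For the curious, pink noise's 1/f character can be approximated in a few lines with the Voss-McCartney trick: sum several random rows, where each row is re-rolled half as often as the one before. The slowly updating rows supply the extra low-frequency energy. This is an illustrative sketch, not how Logic generates its pink noise:

```python
import random

def voss_pink_noise(n_samples, n_rows=8, seed=1):
    """Voss-McCartney pink-noise approximation: sum n_rows random values,
    where row r is re-rolled only every 2**r samples.  Slowly updating
    rows contribute the low-frequency energy, giving a roughly 1/f spectrum."""
    rng = random.Random(seed)
    rows = [rng.uniform(-1, 1) for _ in range(n_rows)]
    out = []
    for i in range(n_samples):
        for r in range(n_rows):
            if i % (2 ** r) == 0:           # row 0 updates every sample
                rows[r] = rng.uniform(-1, 1)
        out.append(sum(rows) / n_rows)
    return out

pink = voss_pink_noise(1024)
print(len(pink), min(pink), max(pink))
```

White noise, by contrast, would just be a fresh `rng.uniform(-1, 1)` every sample, with no slow rows at all.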

The mixer's application to music is obvious. The mixer, with EQ, can bring balance and unity to many instruments at once. Although EQ can't change every property of a sound, it is extremely useful for balance, as well as for its inherent audio effects.

Loudness – Real and Perceived

Today's experiment was to test specific sounds, comparing their physical sound output with their perceived loudness. This was done by taking one of the plainest elements of music: the sine wave. The first step was to create sine waves and adjust their pitch by frequency. Using Logic Pro, I was able to use the ES2 synth to create a sine wave. The ES2 is seen below:

[Image: the ES2 synth]

Turning all additional settings off, I was able to synthesize a pure sine wave. All I had to do now was set specific frequencies, so I created MIDI data and set up 8 different tracks for the 8 different pitches. Each pitch was set not to exceed -20 dB on the master bus:

[Image: the 8 MIDI tracks and their levels]
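
That -20 dB ceiling is easy to sanity-check numerically: dB and linear amplitude convert via 20·log10, so -20 dB is exactly one tenth of full scale. A tiny sketch:

```python
import math

def db_to_amplitude(db):
    """Convert a dB level to linear amplitude (0 dB -> 1.0)."""
    return 10 ** (db / 20)

def amplitude_to_db(amp):
    """Convert linear amplitude back to dB."""
    return 20 * math.log10(amp)

print(db_to_amplitude(-20.0))  # one tenth of full scale
print(amplitude_to_db(0.5))    # halving the amplitude is about -6 dB
```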

Now I could solo each track and compare the amplitude at each frequency using both a.) my ears and b.) my Android SPL meter. I used the JBL EON speakers first, setting up a chart in Excel for each of the 8 frequencies:

[Image: JBL EON SPL readings by frequency]

As you can see, the speakers are loudest around 250 Hz. But ouch! Some of those high pitches seem way louder than the measurements would indicate. I then decided to test for perceived loudness. For each frequency, I rated perceived loudness on an arbitrary scale of 1-100:

[Image: perceived loudness ratings]

 

Now we see that perceived loudness differs greatly from speaker loudness, peaking at 6 kHz and dropping off, with the exception of 2 kHz. One last step, though, was to use one more speaker system to see whether the speaker's own output was skewing the SPL results. I then used my Mac speakers, and the SPL results are as shown:

[Image: Mac speaker SPL readings]

 

Again, there is a sharp rise from the nearly inaudible 100 Hz to 250 Hz. However, these speakers don't peak until 2 kHz. Now we have enough data for a sufficient visual representation, shown here:

[Image: combined chart of perceived and measured loudness]

 

The blue line represents arbitrary perceived loudness, the red line the JBL EONs, and the green line the Mac speakers. The Y-axis represents the amplitude shown on the SPL meter (with the exception of the blue perceived-loudness measurement, which is overlaid to help with interpretation).

Several conclusions can be drawn from this graph. First, we see how greatly perceived loudness differs from actual amplitude. As sounds were played up the spectrum at loud volumes, the higher sounds became more and more difficult to bear until around 8 kHz. Looking at the blue line, we are increasingly sensitive to sound up to 2 kHz, where there is a drop-off before a second peak around 6 kHz. This differs greatly from the JBLs, which peak at 250 Hz and descend gradually. The Mac speakers, however, peak right where human perception is most sensitive, at 2 kHz. This is likely because Mac speakers aren't large enough to peak at the low end, and must instead create the illusion of loudness by peaking at 2 kHz (while not sounding overly tinny and piercing by peaking higher than that).
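
This perceived-loudness curve roughly mirrors the standard A-weighting curve, which models how the ear discounts low (and very high) frequencies relative to 1 kHz: deeply negative at 100 Hz, near zero at 1 kHz, and slightly positive in the low kilohertz range. A sketch of the IEC 61672 formula, evaluated at the test frequencies:

```python
import math

def a_weighting_db(f):
    """A-weighting (IEC 61672) in dB relative to 1 kHz: the standard model
    of the ear's reduced sensitivity at low and very high frequencies."""
    f2 = f * f
    ra = (12194.0 ** 2 * f2 ** 2) / (
        (f2 + 20.6 ** 2)
        * math.sqrt((f2 + 107.7 ** 2) * (f2 + 737.9 ** 2))
        * (f2 + 12194.0 ** 2)
    )
    return 20 * math.log10(ra) + 2.00

for f in (100, 250, 1000, 2000, 6000, 8000):
    print(f"{f:>5} Hz: {a_weighting_db(f):+.1f} dB")
```

It is a smoothed average over many listeners at moderate levels, so an individual's arbitrary 1-100 ratings at loud volumes won't match it exactly, but the overall shape (big penalty at 100 Hz, peak sensitivity in the low kHz) lines up with the blue line above.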