catapult magazine

discussion

Frequencies and Notes

Default

joelspace
Nov 11 2002
08:10 pm

Thought I’d make a new category to reply to Norb’s comment, which seemed to imply that you need more musical maturity to understand jazz than U2.

Maybe you’re right about maturity and jazz, but perhaps the older you get, the easier it is to understand previous generations.

On the other hand, if the suggestion is that electronic compositions by the likes of U2 (or Eminem of the hip hop generation) have inherently less musical depth than bebop, I would have to disagree.

The new electronic musical dimension, which is closely associated with timbre, has something to do with frequency allocation. The options are nearly endless: human hearing spans roughly 20 Hz to 20,000 Hz. Multiply that by 20 tracks and an endless variety of waveform patterns and you can really make a lot of different noises.
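
To put a rough number on those options: treated as equal-tempered pitches, the audible band spans about ten octaves. A back-of-the-envelope sketch in Python (the 20 Hz–20 kHz limits are the usual textbook figures, not a claim about any individual listener):

```python
import math

low_hz, high_hz = 20.0, 20_000.0          # nominal limits of human hearing
octaves = math.log2(high_hz / low_hz)     # how many doublings fit in the band
semitones = 12 * octaves                  # equal-tempered pitch steps

print(f"{octaves:.2f} octaves, {semitones:.1f} semitones")
```

Roughly 120 named pitches, then, but an unbroken continuum of frequencies between them — which is the space electronic timbre plays in.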

Default

JabirdV
Nov 16 2002
06:50 am

I think there is a bit of confusion regarding the topic title of this post. Notes are frequencies, and frequencies are notes. There are hundreds of thousands of frequencies, each being a harmonic of its root. While human hearing is confined to the range of 20 Hz to 20 kHz, that does not mean the harmonics above and below that range do not affect us.

Joelspace, the dryer that comforts you emanates a root frequency in the 40 Hz to 100 Hz range; its upper harmonics continue up the scale beyond what you can hear. Those upper harmonics, it is argued, contribute to your perception of the emotion of a sound as well as its point of origin. Lower frequencies have longer wavelengths; higher frequencies have shorter ones. All of them travel at the same speed in air, though the higher ones are absorbed more quickly over distance, and even the harmonics above the 20 kHz range that we cannot hear are, in this view, still bombarding us and shaping our perception of the frequencies we can hear. Literally, the upper harmonics shape the lower harmonics.

The more of a sound’s harmonics that fall within our range of hearing, the easier it is to place its point of origin; the fewer that do, the more difficult it becomes. A good example of this is the cricket. Its chirping begins in the upper range of our audible spectrum, and while it does create some lower harmonics, the majority of its sound sits in a narrow band at the top of that range, which is why crickets are so difficult to locate as they chirp away somewhere in your living room.
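
A harmonic series like the dryer’s can be enumerated directly. A small Python sketch, assuming a 60 Hz root (the actual root of any given appliance is a guess somewhere in that 40–100 Hz range):

```python
root_hz = 60.0   # assumed root; real appliances fall anywhere in ~40-100 Hz

# Integer multiples of the root that land inside the nominal audible band.
audible = [root_hz * n for n in range(1, 500)
           if 20.0 <= root_hz * n <= 20_000.0]

print(len(audible), audible[:4], audible[-1])
```

Hundreds of harmonics fit inside the audible band alone; everything above 20 kHz simply continues the series, whether or not we consciously hear it.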

All this for what? You are posting regarding how certain music affects us, from a “scientific” analysis of frequencies. There are other variables that have to be accounted for: volume, rhythm, sustain and more. All of these affect the way we “hear” sound. Yes, a low-frequency kick at 90 dB will hit us harder in the body because of the length of its waveform and the volume level it is projected at. This is why a lot of hip hop and dance music adds a 50 Hz tone to the kick to enhance the lower frequencies with a sustained and controlled frequency known to affect the body in certain ways.
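
The physical scale of that 50 Hz tone is easy to compute: wavelength is the speed of sound divided by frequency. A sketch in Python (343 m/s is the approximate speed of sound in air at room temperature):

```python
SPEED_OF_SOUND_M_S = 343.0   # approximate, in air at ~20 degrees C

for freq_hz in (50.0, 1_000.0, 10_000.0):
    wavelength_m = SPEED_OF_SOUND_M_S / freq_hz
    print(f"{freq_hz:>8} Hz -> wavelength {wavelength_m:.3f} m")
```

A 50 Hz wave is nearly seven meters long, which is part of why you feel it in your chest rather than pinpoint it in space.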

One last thing: the debate of analog vs. digital is really seated around this upper harmonic issue. CD technology only allows us to capture frequencies up to a maximum of about 22 kHz, while analog tape allows up to roughly 40 kHz. There is a certain “warmth” that is spoken of with the analog medium, and that is found, partially, in the inclusion of those upper frequencies that CD technology filters out. The recent revolution of digital technology, with 24-bit, 96 kHz sample rates, is allowing us to “hear” more of those upper harmonics, up to about 48 kHz. This is restoring some of that “warmth” within the digital realm that was lost at its birth. To hear this (if financially possible), listen to the Fleetwood Mac Rumours CD and then the Rumours DVD-A. Forget the 5.1 (only for a minute) and just listen to the stereo tracks. There is a noticeable difference due to the upper frequency range that is included on the DVD-A.
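
Those ceilings follow from the Nyquist limit: a digital system can only represent frequencies up to half its sample rate. A minimal sketch in Python (the sample rates are the standard CD and 96 kHz DVD-A figures):

```python
# Nyquist limit: a sampled system can represent frequencies
# only up to half its sample rate.
for rate_hz in (44_100, 96_000):        # CD and 24-bit/96 kHz rates
    nyquist_hz = rate_hz / 2
    print(f"{rate_hz} Hz sampling -> {nyquist_hz:.0f} Hz ceiling")
```

That is where the “about 22 K” and “about 48 K” numbers come from.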

OK, I’ve jibber-jabbered too long. Hope I don’t sound like a know-it-all. Just wanted to throw that bone out there.

Default

joelspace
Nov 17 2002
08:38 am

Thanks! I didn’t know that was the reason analog was warmer. So why aren’t the Beatles recordings brighter? Was that just their performance style, or does it have more to do with recording through tube mic preamps? Do transistor mic preamps filter out high frequencies as well? Why does digital sound seem to have more brightness and clarity?

And on the frequency discussion: maybe what I’m trying to say is that we need to expand our definition of notes. It seems ridiculous to try to explain a color that Underworld (techno) manipulates into existence as a Bb chord. Like you say, there are a lot of other variables involved, including the shape of the waveform, the attack, the release, the harmonics, level variation, etc.

Default

JabirdV
Nov 17 2002
12:20 pm

The Beatles’ music is bright because, first and most importantly, I think that is the way it was intended to be listened to, and second, they only had four-track recorders until Sgt. Pepper’s. Everything was just “fill the tracks up and bounce ’em down to one stem,” and later they mixed the stems. All of the bouncing down and layering causes lower frequencies to be lost irreversibly (the same generational problem as cloning a tape, just at an earlier stage). The engineers of the day knew that and could compensate.

Every microphone (tube, ribbon or dynamic) has its own characteristics. They are like brushes to a painter: the tools the engineer uses to craft the music he/she is recording and create a desired finished result. The best engineers hear the finished song in their heads and can choose microphones to record the different instruments that would best achieve that end.

Digital isn’t any brighter or clearer. It is just less “warm”. It is sterile. Some like that. Some don’t. It’s all a matter of taste that hopefully will become all-inclusive once DVD-A and SACD become the standard instead of current CD technology. Also be aware that the overuse of compression (via compressors and limiters) has tainted our judgment of true dynamic range, which might trick you into thinking that what you are listening to is “brighter” when actually it is just compressed out the wazoo and has no dynamics left in it.
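
The compression point can be made concrete with the standard static compressor curve: below the threshold the signal passes through untouched; above it, only a fraction of the excess gets through. A minimal sketch in Python (the threshold and ratio values here are arbitrary illustrations, not any particular hardware’s settings):

```python
def compress_db(level_db, threshold_db=-20.0, ratio=4.0):
    """Static compressor curve: levels above the threshold are
    reduced so only 1/ratio of the excess passes through."""
    if level_db <= threshold_db:
        return level_db
    return threshold_db + (level_db - threshold_db) / ratio

# A 30 dB dynamic range (-30 dB to 0 dB) shrinks to 15 dB.
for level in (-30.0, -10.0, 0.0):
    print(f"{level:6.1f} dB in -> {compress_db(level):6.1f} dB out")
```

Squash enough of the quiet-to-loud distance this way and everything sits near the top of the meter — loud and seemingly “bright,” but with no dynamics left.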

A Bb will always be a Bb; which octave it sits in determines which frequency it is. Underworld might utilize the Bb to accentuate what they are getting at (amidst the other noises and music surrounding the note), but it is the Bb that they are focused on and are drawing the audience towards. They may not know it is a Bb by name, but they know what a Bb sounds like. Yes, the other factors also determine the ultimate reaction of the audience member.
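
The octave-to-frequency relationship is fixed by equal temperament: each semitone multiplies frequency by the twelfth root of two. A sketch in Python using the standard MIDI numbering convention (A4 = 440 Hz = note 69, so Bb4 is note 70):

```python
def midi_to_hz(note):
    """Equal-tempered pitch: A4 (MIDI note 69) = 440 Hz,
    each semitone is a factor of 2 ** (1/12)."""
    return 440.0 * 2 ** ((note - 69) / 12)

# The same Bb in three octaves: MIDI 46 (Bb2), 58 (Bb3), 70 (Bb4).
for note in (46, 58, 70):
    print(f"MIDI {note}: {midi_to_hz(note):.2f} Hz")
```

One note name, three different frequencies — the octave picks which one you actually hear.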

Jazz music, as you all have been discussing, is the best example of this. The greatest jazz musicians could express themselves just as easily with a note as they could with a word. Their goal was to learn music and its expression just as someone would learn a language. That Bb could be used as a happy note by one musician…and the ultimate blue note by another. It was all in how they expressed the note through their bodies and instruments.

OK I am done.

Default

grant
Nov 17 2002
12:45 pm

This thread just became “Mix Magazine”-worthy. But very very helpful. It sounds to me like JabirdV is proving that we’re already speaking in frequency language, which is playing a larger and larger role in how we understand “the note”.

In some ways, this topic is also about how music (which many poets have called a direct link to the soul) became defined scientifically in terms of “notes”, and now frequencies, instead of as direct expressions of the human being (i.e. redefining the Bb chord as sadness/blueness/red-hotness/coolness/density/rich joy/righteous anger/vengeful rage/lustfulness, etc.).

I think there might be a problem, however, with reducing music to a “representation of emotion” just as we’re realizing that thinking of language as “representational” was a problem passed on to us by Modernism. Why can’t music BE an emotion rather than represent one?

Default

JabirdV
Nov 17 2002
04:34 pm

Grant,

It is possible that in coming to understand these frequencies and their respective notes, we may better understand which of them, and in what combinations, affect the brain, body, and spirit of a man.

I understand the theory in that music could be a language of its own. It reminds me of the idea that math is a language of its own that only God has mastered. The idea that music is an emotion is one I will have to ponder. I am still of the understanding that if you study the effects of certain frequencies on the body, brain and even spirit, that you will have a better understanding of how to make music that will have a very intentional effect on the listener.

Default

grant
Nov 19 2002
07:03 am

I wouldn’t compare math language with music-in-and-of-itself.

What I’m saying is that music comes before language. It isn’t a language at all, but is articulated in language via notes, frequencies—basically, mathematical symbols. Although using such a mathematical language to describe music does work, I think it is a reduction of what music fully is.

The burden of mathematical articulation often must be lifted from the shoulders of the musical performer before the music can be what it is. This is evident, I think, in the way classical performers have to commit the notes to memory and then forget them in order to truly let the piece breathe and be what it is.

I still think it’s necessary to articulate music in a language, but are there other ways, closer to the “being” of music itself, to describe music? Instead of a Bb, couldn’t we describe a note (in its performative context, of course) as sadness, surprise, joy, longing etc. ?

Default

JabirdV
Nov 19 2002
08:16 am

As I attempt to unravel the intricacies of the universe…(ha ha ha) I am discovering more and more that mathematics is more the language of God than anything else. It is within the parameters of mathematics that everything in our physical and non-physical universe is explained. (If you haven’t seen it, go rent the movie Pi.)

Music, itself, is no different. Music must adhere to the laws of mathematics. I understand what you are saying about the human spirit of learning the notes and then forgetting them to play with your soul, but regardless of the emotional deliverance of the music being performed, the player is still adhering to a mathematical language. It just so happens that he/she is much more fluent and has a deeper grasp of the language than the amateur. This separates the students from the masters. They find the life in the notes being played, and in some ways tap into the very heart of God (whether they realize it or not).

I think the mysticism that you are getting at is just that: the very nature of God hidden in the equations that the master musician taps into and performs from.

Default

grant
Nov 19 2002
05:35 pm

Along the same lines as “Pi” is a book by John Updike called “Roger’s Version”, in which a brilliant math student proposes for his doctoral dissertation to prove God’s existence with equations. He fails miserably, not just in his mathematics, but in other areas of his life as well.

Putting this question about music in terms of the language of God is very interesting. We’ve been studying this idea of God’s language in a Bible study here in Chicago by looking at the many facets of the Word of God as described in John 1 and Paul’s descriptions of the Holy Spirit. We’re finding that the Holy Spirit is such a wonderful gift precisely because it speaks to God for us. As the presence of God living among us, the Holy Spirit enables us to be united, connected, communioned, communicated with God.

The idea that God speaks in languages is a Biblical one, but there’s no good reason to say mathematics is that language. Shouldn’t we just stop at the realization that God created a universe with certain numerical relationships that are ours to discover?

Default

Norbert
Nov 19 2002
06:25 pm

I hate throwing 1.5 cents far into a discussion, but the language thing lately has grabbed me. Please view this as a mere side note (no pun intended). So much for a qualifier.
First, the opening of Tolkien’s Silmarillion is beautiful in describing the creation of the earth as a physical manifestation of God’s (Ilúvatar’s) perfect song. Check it out (the first dozen or so pages) if you’ve never read it before.
The other thing was the use of music as language on a much smaller level. My son, as of yet, is still incapable of speaking. Yet he has his own language of cooing, crying, humming and babbling. His tone and inflection most definitely point to an immature way of conversing. I realize that this is a bit simplistic, yet can’t music be seen as such at some level? Music doesn’t necessarily have to be the result of fine motor control and studied rhythms, does it (among other things, of course)? My son’s language, even at his most terrible moments, still seems musical to me. Sure, it may be reminiscent of a bad day for a bitter Charles Mingus, but it’s still musical and it’s still conversational.

Default

grant
Nov 20 2002
07:23 am

Yes, this is why I think the elements of music are already being learned in the womb. The child experiences the relationships between anxiety and the rushing heartbeat of the mother, serenity and the consistent, calm pulsing of the mother’s blood. And as you’re saying, Norb, it seems like a child learns the sounds that are supposed to go with the words before learning the words themselves. This is why I think music comes before language.

I’m defining language in terms of the actual signs we use to communicate. And when I say music, I should really be more specific by saying elements of music, or musicality. Music, just like art, is only Music in a certain context. But we learn the musicality of existence (the harmony of relationality) before we learn language. So the first experiences of musical relationships (like the relationship of anxiety to the rhythm of the heart) enable us to see the relationships of words to things and emotions.