Music Awareness

June 3, 2010

I just learned of this 2007 experiment by the Washington Post (post courtesy of The Bold Life), which can be summarized as follows:

  • The Post arranged to have a man play the violin for 45 minutes in the middle of a busy DC Metro station. The material consisted of six different works by J.S. Bach.
  • Reactions from onlookers and passersby were documented, peaking at mild, short-lived interest. (Oddly enough, some of the strongest reactions seemed to come from children, whose parents were quick to scurry them along regardless…)
  • In total, only six people stopped to listen and twenty gave money (grand total: $32).
  • Upon completion there was no applause or acknowledgement.

The violinist was world-renowned virtuoso Joshua Bell, playing a $3.5 million Stradivarius violin (more on Bell and his extravagant instrument here). Two days prior, Bell performed a sold-out show in Boston where seats averaged $100 each.

A litany of questions and conclusions followed (“In a commonplace environment at an inappropriate hour, do we perceive beauty? Do we stop to appreciate it? Do we recognize talent in an unexpected context?…”). But for me, it brought to mind a comment from jazz pianist Bill Evans in his biography Bill Evans: How My Heart Sings, where he laments the fact that jazz music is all too often relegated to being background music for the din of conversation in a club. Bill estimated that only a very small percentage of his listeners during a performance actually picked up on the nuances and excitement of what he and his trio were playing.

The DC experiment demonstrates the importance of listener participation. Listening to music is not a passive act, where the notes and chords wash over your ears and into your head effortlessly. To get the most out of a piece of music, and thus put it on par with a great novel in terms of complexity and storytelling magic, the listener needs to take an active role in the process. Literature transcends entertainment as soon as you learn to read, and music can do the same when you learn to listen.

Hearing and listening are not the same thing, as the DC Metro experiment makes clear…


Music and the Mind

June 1, 2010

Here is a link to a great New York Times conversation with Aniruddh D. Patel, author of “Music, Language, and the Brain,” fellow at the Neurosciences Institute in San Diego, and self-proclaimed Neuroscientist of Music. One insight, following the discovery of a parrot named Snowball who dances to the beat of a Backstreet Boys song:

“What do humans have in common with parrots? Both species are vocal learners, with the ability to imitate sounds. We share that rare skill with parrots. In that one respect, our brains are more like those of parrots than chimpanzees. Since vocal learning creates links between the hearing and movement centers of the brain, I hypothesized that this is what you need to be able to move to the beat of music.”

Patel continues:

“Before Snowball, I wondered if moving to a musical beat was uniquely human. Snowball doesn’t need to dance to survive, and yet, he did. Perhaps, this was true of humans, too?”

The question, of course, remains: why? Why do we, along with parrots, respond instinctively to music?

My take: it could be that music provides the same “neuro-catharsis” during daytime hours as dreaming does while we’re asleep, stimulating our brains and offering an escape from our analytical reality. Music (and, through association, dance) may be a vestigial “sanity check,” a screensaver of the mind, to bring us out of our day-to-day and prevent our mental processes from becoming too static and habitual.


On i

August 29, 2009


There is little doubt that Apple is not just a company, it’s a zeitgeist. Apple products inspire brand loyalty that rivals Harley-Davidson’s (Exhibit A), with a reputation centered on quality and innovation.

But there’s something more insidious going on, and it has nothing to do with Apple Fanboys: Apple has taken our identities. Not literally of course, but it has taken our own identifier, “I.” For those interested in the philosophical implications of the self and what it means to be conscious and self-aware, “I” holds great importance. 18th-century philosopher David Hume famously explored the concept of the self over time, and the book Gödel, Escher, Bach: An Eternal Golden Braid is a Pulitzer Prize-winning 800-page tome centered on defining the Self as a “strange loop,” exploring the concept through a wide range of analogies and examples. These are just two of hundreds of works based on “I.”

But what of “i”?

Apple’s iPod has relegated the once-capitalized “I” to the lower case, and instead gives Pod the distinction. The Pod is the Thing, not us. The iMac, the iPhone… iWork, iLife… What happens when we start to use the lower-case “i” to refer to ourselves?

i think, therefore i am not.

i don’t think this was an intentional move by Apple; more likely it was an unintended consequence. My feeling is that they used “i” because it looks like an upside-down exclamation point—a purely aesthetic choice. But perhaps they are playing with the use of i to represent imaginary numbers in mathematics, embedding the concept of “imagination.” Or maybe “innovation” is the suggestion. But the connection between the imaginary and the self is a dark philosophical notion, one that we are all familiar with after having watched The Matrix.

At the end of the day the concept works brilliantly from a marketing perspective. To get someone to fall in line and do your bidding, you must first break the will. You must destroy your subject’s sense of importance and worth. “I am nothing.” Or, rather:

iThink, therefore iBuy.


The Loudness War

August 24, 2009


A war has been raging and you can hear its noise grow louder, but you may never have noticed it.

It’s called The Loudness War: “the music industry’s tendency to record, produce, and broadcast music at progressively increasing levels of loudness to attempt to create a sound that stands out from others.” For the past few decades, mastering studios have been tasked with baselining singles and albums at ever-increasing volumes in order to keep up with, and attempt to exceed, the efforts of competing artists and radio hits. Airplay is at stake, and sheer volume is seen as the easiest method to get to the top of the charts. (The hardest method, by the way, is to write pop songs that strike an irresistible balance between catchiness and pretension, so as to straddle the teeny-bopper hunger for the hook and the more mature sensibility of nuanced and thought-provoking performances, all laced with passion and youth and drive. So, you have to admit, you can see the appeal of the easy out here…)

The problem is not merely the immaturity of rival companies spending time and money shouting their way out of an argument. The fact is, this Decibel Inflation has what most consider to be an unacceptable side effect: distortion. As volumes are increased with each mastering and re-mastering session, you lose definition and contrast between the highs and lows. In effect, the lows become high and the highs become higher. So you’re left with a more one-dimensional result than is likely desired. As Bob Dylan lamented:

“You listen to these modern records, they’re atrocious, they have sound all over them. There’s no definition of nothing, no vocal, no nothing, just like—static.”

The Loudness War’s collateral damage is dynamic range. Modern records are set in a world where there is little difference between black and white, red and yellow, green and purple. It is instead a compressed landscape of shades that lack distinction. Dark gray and pale gray, rose and salmon, jade and lavender… The Loudness War is the reason your older albums sound softer than the one you bought last year, and why classic records are constantly being remastered. The old standbys can’t keep up in the current marketplace without a little lift.
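The mechanics are easy to sketch numerically. The snippet below is a toy illustration, not any real mastering chain: it boosts a signal, hard-clips anything past full scale, and measures the crest factor—the ratio of peak to average (RMS) level, a rough stand-in for dynamic range. Push the gain and the crest factor collapses toward 1: everything is loud, and nothing stands out.

```python
import math

def rms(samples):
    """Root-mean-square (perceived average) level of a signal."""
    return math.sqrt(sum(s * s for s in samples) / len(samples))

def master(samples, gain):
    """Boost the signal, then hard-clip anything beyond full scale (1.0)."""
    return [max(-1.0, min(1.0, s * gain)) for s in samples]

# A toy "recording": a quiet verse followed by a loud chorus.
signal = [0.1 * math.sin(i / 5) for i in range(500)] + \
         [0.8 * math.sin(i / 5) for i in range(500)]

for gain in (1.0, 2.0, 8.0):
    loud = master(signal, gain)
    # Crest factor: peak divided by RMS. Lower = less dynamic range.
    crest = max(abs(s) for s in loud) / rms(loud)
    print(f"gain {gain:>4}: crest factor {crest:.2f}")
```

Real mastering uses compressors and limiters far gentler than a hard clip, but the direction of the effect is the same: more gain, less contrast between the quiet passages and the loud ones.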

If you want to hear an example firsthand, check out this YouTube clip: The Loudness War.

Of course, the matter does come down to preference. Some argue that the louder baseline volume of current recordings is in keeping with the increased noise of daily life, and with the prominence of portable playback devices that let listeners bring their music out into these noisy environments. This is in stark contrast to listening to LPs in a reverb-friendly and relatively quiet room.

Unfortunately there are no checks and balances here. I don’t know when the breaking point will be reached, but I hope it’s not our eardrums…


Medicinal Music

June 7, 2009


“We know music can calm, influence creativity, can energize. That’s great. But music’s role in recovering from disease is being ever more appreciated.” – Dr. Ali Rezai, director of the Center for Neurological Restoration at Ohio’s Cleveland Clinic

Music as medicine is the latest notion in the long-established principle that music affects physiology. It’s a mellifluous dance of organized vibrations in the air striking an eardrum, with the vibrations being transferred through bones and nerves into the gray matter of your brain. Listening to music, or any sound really, is the act of your body translating physical motion into aural playback in your mind.

Expanding the scope a bit, consider this:

  • Music begins as a concept in a musician or composer’s mind, a purely cerebral activity.
  • It is then transferred into either the physical act of playing an instrument, or composing.
  • If composed, it takes the vision of a musician to read (intake) the notes and convert them to the appropriate physical movement, whether vocal or instrumental.
  • Once the music is performed, it makes its way into a listener’s ear and is sent along to the brain, where the snapping of synapses creates playback inside the listener’s mind.

It’s a circular experience, beginning and ending in the mind.

But what if that’s not where the journey ends?

This is what a recent MSNBC article attempts to answer. The thought is that the journey continues past the mind and actually influences physical behavior and recovery from injury:

“Research has already shown that if you play a piece — like Mozart — at a certain slow beat, the listener will adapt their heart beat to the beat of the music.” – Dr. Claudius Conrad, senior surgical resident at Harvard Medical School; pianist

This extra leg of the aural journey, from listening to physical response, is detailed in the article: the synaptic pulse in your brain, in addition to stimulating your auditory cortex, also hits the hypothalamus, which controls heart rate and respiration in addition to stomach and skin nerves. This is why a tune can “give you butterflies or goose bumps.”

The journey also includes the chemical, in the form of hormones. It was found that, in addition to a reduction in blood pressure and heart rate, critically ill patients can show a “50 percent spike in pituitary growth hormone” when listening to Mozart sonatas. This hormone is known to stimulate healing.

So what does that mean for the aspiring musician looking to make a living?

“At Cleveland Clinic, Rezai and other neurosurgeons collaborate with The Cleveland Orchestra to compose classical pieces to play for patients during brain operations.”

And one of the oldest instruments, the harp, is still the go-to solution for music therapy:

The harp is the only instrument that has 20 to 50 strings and is open, unlike, say, a violin. When a harpist strikes a chord, she also opens vibrations in strings just above and below the few she plucks. Those vibes… are absorbed by the body.

The world of medicine is becoming entwined with the world of music, which is likely to result in a whole new cache of careers and job opportunities for musicians and doctors alike.


The Golden Page

May 29, 2009


This Information Age we’re living in is full of knowledge, most of which is free and entirely at our fingertips. Yet despite the litany of sites offering free downloadable copies of classics, the world at large remains largely unread. Why?

Perhaps it’s because the words are not on a page.

You may argue that words are words, and can be read wherever they appear. While this is true, I argue that the medium matters. A lot. More than we may realize. Amazon’s Kindle is trying to address this issue, which is this: people want to read things in a format that suits their field of vision.

I don’t think this is a conscious choice. It’s simply a more comfortable reading experience when you’re looking at something your eye is able to take in without trouble. This is why reading a novel on your computer screen, or scanning through a treatise typed on a billboard, will never be best practice. The medium matters.

So what, then, of music?

The term “medium” or “format” in music relates to the way in which the sound is recorded and listened to, and can range from LPs to streaming MP3s. And the format does matter. Audiophiles who swear by the warmth of long-playing records sometimes have a hard time enjoying the experience of listening to music on an iPod Shuffle. Similarly, Apple-philes find that the portability and interactive nature of the iPod and iPod Touch make listening to music more fun, and find LPs antiquated, crackly, and inconvenient.

In the end it amounts to personal preference, but always remember that the way you intake certain art forms can affect your opinion more than the art itself. The subtle way that content relates to medium is an overlooked aspect of preference.

(For further reading into the mysterious nature of aesthetics, check out the Wikipedia article on The Golden Ratio: http://en.wikipedia.org/wiki/Golden_ratio#Aesthetics )
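Aesthetic claims aside, the golden ratio itself is easy to see numerically: the ratios of consecutive Fibonacci numbers—the sequence behind the spiral counts in sunflower heads and pinecones—converge to φ ≈ 1.618. A quick sketch:

```python
# The golden ratio in closed form: phi = (1 + sqrt(5)) / 2
phi = (1 + 5 ** 0.5) / 2

# Ratios of consecutive Fibonacci numbers approach phi.
a, b = 1, 1
for _ in range(30):
    a, b = b, a + b

print(f"F(n+1)/F(n) = {b / a:.10f}")
print(f"phi         = {phi:.10f}")
```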


Creating Creativity

May 21, 2009
Credit: Cauê Rangle


SEED Magazine has a great article regarding creativity—it’s an investigation into the way artists are able to utilize their creative talents on command. They probe this mystery through the use of an fMRI machine to identify which parts of the brain are utilized, and when, during an improvised jazz solo. The goal was to untangle the disparate elements of inspiration:

William James described the creative process as a “seething cauldron of ideas, where everything is fizzling and bobbing about in a state of bewildering activity.”

The findings are interesting: before the solo even begins, a pianist was found to have “deactivated” their dorsolateral prefrontal cortex (DLPFC), which is the portion of the brain associated with planning and self-control: “In other words, they were inhibiting their inhibitions, which allowed the musicians to create without worrying about what they were creating.”

The article drives on from there, delving into other aspects of the improvisatory experience: spikes in medial prefrontal cortex activity, for example, an area associated with self-expression (“it lights up, for instance, whenever people tell a story in which they’re the main character”), and premotor cortex activity linked to the physical execution of notes. But it’s the first point I find the most interesting: it is a musician’s lack of activity in a particular area—conscious thought—that drives a successful solo before a single note is played.

Creativity, then, may not be a result of the presence of talent, but rather the lack of inhibition. One’s supreme willingness to simply try may be the best kept secret to artistic success.