Wheel of Musical Keys
Have you ever wondered which keys are the most popular in recorded music? Here at Pandora, our Music Analysis team has just finished analyzing a batch of 34,324 songs, and we thought you might like to know their dominant tonalities.
Well, it might not come as much of a surprise that the most common key has no sharps or flats, or that the least common keys have lots of accidentals. But we have now verified it with data! It’s what we do.
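For the curious, the tally behind a chart like this is conceptually simple. The snippet below is a toy sketch with invented song data, not Pandora's actual pipeline:

```python
# Toy sketch of tallying the keys of analyzed songs.
# The song data and key labels below are invented for illustration.
from collections import Counter

# Imagine each analyzed song carries a key label assigned by an analyst.
analyzed_keys = ["C major", "G major", "C major", "A minor", "D major", "C major"]

key_counts = Counter(analyzed_keys)

# Most common keys first, e.g. C major (no sharps or flats) on top.
for key, count in key_counts.most_common():
    print(f"{key}: {count}")
```

Scaled up to tens of thousands of songs, the same count is what a "wheel of keys" visualization would be drawn from.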
A Day in the Life of a Music Analyst
An interview with Music Analyst Kevin Seal
Who are the Music Analysts at the core of the Music Genome Project, driving the best music recommendation system in the world on Pandora? We are a team of musicians who listen to songs and tag them with musicological, lyrical, genre, and mood data. Today I interviewed one of our Music Analysts, Kevin Seal, to get a sense of what a music analyst does in a day.
Kevin Seal, Music Analyst
Camilo: I’m here with Kevin Seal, Music Analyst, and I’m interested in finding out, ‘what is a day in the life of a Music Analyst?’
Kevin: Absolutely. Well, my day starts with choosing my tracks for the day. Depending on the day, I might be 'scrubbing' old tracks that have already been analyzed, improving the analysis to be consistent with our new and improved analytical traits, or choosing fresh tracks that have just been released or that are about to be released.
In my personal case, I prefer to have a wide swath of different styles and genres. I find it easier to do half an hour of Folk, half an hour of Death Metal, half an hour of Reggae, and just mixing it up stylistically a lot. If I end up with, you know, three hours of Rock, my ears get tired, so I prefer keeping a very eclectic base of music that I’m going to be looking at.
Camilo: What happens when you are done analyzing for the day, what other projects are you working on?
Kevin: I write my own music, I have a band called Seal Party, so a lot of times I’m working on arrangements for new Seal Party songs, writing horn charts, trying to finish lyrics after months of writers’ block, a lot of that. I also teach middle school choir, right now I’m at East Bay School for Boys in Berkeley, so working out performances with them, teaching them how to read music notation, that kind of stuff. And then I also am the organist for the San Jose Sharks, so many weekends I’m driving down to San Jose and playing clap prompts for the Shark Tank fans to enjoy their hockey.
Camilo: Wow, that’s a lot of musical hats you wear! How do you manage it when you are touring with your band?
Kevin: Nowadays I’m trying to use as much vacation time as I can when I’m touring. I have gone out on tours where I’ll wake up, have my continental breakfast, and then try to squeeze in a few hours of analysis time before driving to the next city, and I will be doing some of this on the upcoming tour that starts this week, but I prefer when I’m on the road to focus as much as I can on just being on the road.
Camilo: Thanks Kevin! Keep up the good work.
-Camilo Landau, Music Analysis at Pandora
Over the last few years, we on the Music Analysis team here at Pandora have been immersed in a massive project revolving around the question of genre. We set out to identify, name and define all known musical genres - to the best of our ability - so we could then apply those genres to over a hundred thousand artists and millions of songs. The result of that project is a sprawling, complex, interlocking musical tree of nearly 1,400 different genres that is one of the most comprehensive musical genre systems in use. We call it the Analyst Genre Taxonomy (AGT).

Creating the AGT has required a team of trained musicians to spend thousands of hours researching, writing, hunting out perfect examples, and debating… endless debating, to come up with a functional system for tagging artists and songs with genre. Note that I say "functional," not "finalized," because with genre, nothing is ever final. Since genre is fluid and ever-expanding, this is a somewhat Sisyphean task that forced us to step back and address some fundamental, almost philosophical questions before we could even begin.

What is genre? Is it exclusively a musical phenomenon - a combination of specific instruments, harmonies, rhythmic patterns? What about musically adjacent things like the subject of the lyrics, the recording techniques, even the age of the instruments being played? Or how about completely non-musical elements: time period, cultural context, sexual or racial identity, the age of the artist, geography? Is genre just a marketing concept or is it an integral part of the compositional process? How do you draw distinctions between different genres while also acknowledging their relationship and overlap? For example, how do you simultaneously emphasize the differences and connections between New Wave and Synth Pop, or Avant-Garde Jazz and Free Improvisation?
How do you handle sorting genres that genuinely straddle the fence between entirely different musical universes: Reggae Rock, Folk Pop, Funk Jazz, Country Rap?

And then there's the question of nomenclature. There are many genres that people identify with several different, interchangeable names, like Soft Rock and Lite Rock, or Nashville Sound and Countrypolitan. And there are totally distinct genres that people use the same exact name for, like "Electronic Music," which could refer to 1950s Moog experimentation or EDM… or both.

Then, of course, there are genres that have evolved so much from their original form that their name has lost its meaning. Bedroom Pop from 1997 has nearly no relationship to what we call Bedroom Pop today. Oldies used to refer to material from the late 40s to early 60s, and now refers to the late 70s to early 90s. Then you've got something like Adult Album Alternative (which, as an aside, was a term clearly invented by a music industry professional, as no musician ever thought, "I want to write an Adult Album Alternative song"). Every single one of those words has aged out of its original meaning. "Adults" are now Millennials and Gen-X'ers instead of Boomers. What used to be heard as "Alternative" now sounds entirely mainstream. And "Albums," as a phenomenon, are no longer the way most people engage with music. So is there any value to that genre name anymore?

Furthermore, what feel like extremely important genre distinctions today might not even be recognizable a couple of decades down the line. Is the distinction that was made in the 1980s between Synth-Pop and Sophisti-Pop even audible to today's listeners? How about the endless parsing of Breakbeat sub-genres? Will the people of 2100 hear the distinctions between the 33 different genres of Metal that we have carefully identified, or will it all just sound like Rock to them? Perhaps most importantly for staying relevant, how do you decide when a new sound has earned full-fledged genre status?
Artists, critics, scholars and marketers are constantly throwing new terms and names at the wall to describe new music or reclassify older music. Some of the names stick and some don't. When does Glitchcore gain enough traction to be called a legit genre? When exactly did Cloud Rap tip that scale? Are there enough Paisley Underground or Madchester or Hyphy or Sertanejo Romântico artists that it's worth continuing to call those unique genres, or can we safely roll those artists up to their parent genres? Should TikTok music be considered a genre entirely unto itself?

Any time you set about naming things that already exist and sorting them into buckets, you're going to run into impossible problems. For example, we run the risk of offending makers and listeners by getting something wrong. It's dicey for any one person or institution to claim to have the definitive say on any of this stuff. And, while we're being honest, we struggle with how to handle the music that truly defies genre. Even if a track or an artist doesn't fit neatly into a slot (and few actually do), we still have to put them somewhere. Even truly singular artists like Moondog, Ryuichi Sakamoto or Diamanda Galas. Consider, if you will, that hat that holds two cans that you can sip from straws. Should you store that with your hats or with your drinking vessels? Or should you have an entirely separate box where you keep all your multi-use objects? Despite it being difficult, messy and often less than ideal, we have to do something with the many musical drinky hats out there.

If you spend enough time pondering these questions (which, to the annoyance of our managers, boy oh boy did we ever) you will inevitably wind up asking the big one: Is genre real? As musicians and listeners, we usually say, "No," as we don't like to perceive ourselves or our tastes as fitting into pigeonholes, or to believe that our inclinations can be reduced to a two-word description.
A top-forty DJ or a record exec might say, "Yes," with the kind of confidence necessary to retain their job in the multi-billion dollar music industry. As Music Analysts, we say, "We can't answer whether or not genre is real, but we need a way to help listeners find music they like, and this is just one tool in the toolkit, albeit a pretty handy one." If genre were the only thing we used to identify an artist or song, it would be a lot more problematic. But since it's only one of hundreds of traits that we consider, it removes some of the pressure to get our genre definitions 100% perfect (which, as noted above, is impossible). As for whether genre is real, I'd say, "What do you think?"

-Scott Rosenberg
MGP2 - What's Next for the Music Genome Project

Here at Pandora, we have recently completely redesigned the way we analyze music for the Music Genome Project, with a new system we call MGP2. We've developed a collection of new taxonomies that we use to describe songs, and a new, text-based tagging system that allows us to annotate music much more accurately and completely. This new way of annotating has improved all of our downstream systems and led to important improvements in the data science, machine learning, and content understanding that make our recommendation systems the best in the world.

What is MGP2?

When the Music Genome Project first began over 20 years ago, music analysis was done on pencil and paper, with analysts manually ripping CDs (remember those?) into the ingestion systems.

[Photo: Music Analysts analyzing music in the early days of Pandora]

The Music Genome Project Interface was soon developed, consisting of a series of "genes" that needed to be scored on a ten-point scale. This system enabled us to gather a massive amount of uniform data on millions of songs. However, as the years went by and music evolved, the original fixed set of genes could not keep up. As the tech stack aged, making changes to the system proved too costly and impractical. In the rapidly changing and increasingly genre-agnostic world of modern music, we needed a more fluid and flexible way to analyze songs.

Enter MGP2. This semantic, tag-based system lets us listen to songs and then add tags from a set of taxonomies. These taxonomies include:

- Genre
- Musicology
- Instrumentation
- Vocals
- Lyrics
- Mood
- Overall

Having fluid and dynamic control over these taxonomies allows us to add or adjust things as needed. It also turns out that these tags are much easier for humans to interpret, and also easier for machine learning models to use as inputs. MGP2 also allows us to provide a more detailed and in-depth level of song analysis.
Here is an example: in our old MGP1 system, we could indicate if a song contained Afro-Latin rhythms, and to what degree. In the MGP2 system, we can still indicate a general "Afro-Latin Feel" if we want, but now we can get more specific, with the Afro-Latin Feel tag acting as a parent to a number of more specific tags, such as Merengue Feel, Bomba Feel, or Afro-Peruvian Feel. Another example: previously we could indicate if a song was in an odd meter. Now, we can say specifically what meter the song is in, with children of the Odd Meter tag including 5/4 Meter, 7/4 Meter, and Mixed Meter (e.g., 7/8 + 5/4). This helps us have a more accurate understanding of the music so we can craft excellent listening experiences.

We also wrote thousands of translations, essentially a set of rules, to convert the numeric scores of the 2.2 million songs we analyzed in our old system into the text-based tags we created for MGP2. Once our science team updated our machine-learning models to use these tags as inputs, we saw immediate improvements in the scale and accuracy of all downstream systems, including predictions, recommendations, track grouping, and more. These models allow us to extend the information from the 2.2 million songs we have analyzed to the rest of the tens of millions of songs in our complete catalog. You can read more about the groundbreaking work they have published.

Perhaps two of the most significant details that we can now provide with our new MGP2 system are Genre and Mood. Our newest taxonomy is the Analysis Mood Taxonomy (AMT), which allows us to tag songs with a wide range of specific emotional states. Our comprehensive genre taxonomy, the AGT, is a detailed hierarchy of genres with over 1,400 specific sub-genres, painstakingly organized by our expert Music Analysts.
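The two ideas above, hierarchical tags where a specific tag implies its more general parent, and translation rules that map old numeric gene scores onto text tags, can be sketched in a few lines of Python. Everything here (the data structures, gene names, and thresholds) is a hypothetical illustration, not the actual MGP2 schema:

```python
# Hypothetical sketch of a hierarchical tag taxonomy and score-to-tag
# translation rules. Tag names come from the examples above; the gene
# names and thresholds are invented for illustration.

# child tag -> parent tag (None = top-level tag)
PARENT = {
    "Afro-Latin Feel": None,
    "Merengue Feel": "Afro-Latin Feel",
    "Bomba Feel": "Afro-Latin Feel",
    "Odd Meter": None,
    "5/4 Meter": "Odd Meter",
    "7/4 Meter": "Odd Meter",
}

def with_ancestors(tag):
    """Expand a tag to include every ancestor, so a specific tag like
    'Merengue Feel' also implies the general 'Afro-Latin Feel'."""
    chain = []
    while tag is not None:
        chain.append(tag)
        tag = PARENT.get(tag)
    return chain

def translate(scores):
    """Convert old ten-point gene scores into text tags via simple
    threshold rules (hypothetical gene names and cutoffs)."""
    tags = []
    if scores.get("afro_latin_rhythms", 0) >= 7:
        tags.append("Afro-Latin Feel")
    if scores.get("odd_meter", 0) >= 5:
        tags.append("Odd Meter")
    return tags

print(with_ancestors("Merengue Feel"))  # ['Merengue Feel', 'Afro-Latin Feel']
print(translate({"afro_latin_rhythms": 9, "odd_meter": 2}))  # ['Afro-Latin Feel']
```

The real system presumably has thousands of such rules and a far richer tag model, but the same parent-child expansion is what lets one specific tag carry general information for downstream models.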
Altogether, the new MGP2 system, built on a modern tech stack with flexible and dynamic taxonomies, is a large leap forward for the systems that give Pandora the most thorough and sophisticated content understanding, and the best music recommendations in the world.
What is the Music Genome Project?

Inspired by the Human Genome Project of the 1990s and early 2000s, the Music Genome Project (MGP) was conceived by Pandora founder Tim Westergren to catalog the fundamental characteristics of the vast body of recorded music. The goal of this ambitious project was to allow music lovers to discover music based on inherent musical qualities, rather than sales data or industry-backed marketing. The Music Genome Project provides a detailed analysis of millions of songs, describing features of harmony, rhythm, melody, vocals, instrumentation, lyrics, and more. This data powers our best-in-class recommendation algorithms. Thanks to the rigorous and detailed music analysis of the Music Genome Project, none of our competitors can come close to the depth of our content understanding.

Who are the Music Analysts and what do we do?

We are a group of trained musicians, working together to describe the music you listen to on Pandora. Leveraging our expertise in music theory, genre, and music production, we listen to individual songs and tag them with the musical attributes from the Music Genome. Over the past 20 years we have analyzed 2.2 million songs. With a Pandora music catalog in the tens of millions, we rely on a number of methods for prioritizing the most important songs to analyze. Assisting us in this prioritization effort is our amazing Curation team, who are top experts in their respective genre areas. Our data science team then helps us leverage the rich data from the MGP to extrapolate further information about related material, in order to unlock content understanding around more songs, albums, and artists.

So how do we analyze a song? Let's dive in.

When analyzing a song, we often start with the genre, choosing from one or more of the 1,400+ subgenres we have developed in our comprehensive genre taxonomy.
Since many songs fit into more than one main genre, or borrow influences from a variety of genres, our nuanced analysis process allows us to indicate multiple genres and influences.

Training is an essential ingredient for the Music Analyst team. In addition to extensive new hire training, ongoing training is required for maintaining broad genre knowledge and for staying on top of the latest trends in music.

Next, we apply our music theory backgrounds to assess the musicological features of the song. Is this song in a major or minor key, or somewhere in between? How complex is the harmony? Are there many chord changes, and how often do they repeat? What is the time signature and feel? Does this song have a heavy backbeat, a swing or shuffle feel, or an Afro-Latin beat? How is the melody presented, and how would you describe it musicologically?

Now we put on our producer caps and listen carefully to the arrangement and instrumentation of the song. What instruments are present, and what are their roles? Is that trombone taking a lyrical solo, and does it have a blaring or a mellow tone? Are those synthetic or acoustic drums?

We also listen carefully to the vocalists and describe their timbres and characteristics, which can have an enormous impact on the overall character of the song. Is this singer passionate or laid-back? Are they singing in a high or low register, and are they gruff and growly or light and breathy?

We also listen to the lyrics of the song and broadly assess the main themes and subjects. Is this a tender love song, a bittersweet breakup song, or a brash boasting song? Are the lyrics snarky and cynical, poetic and metaphorical, or laden with vernacular slang? Is this squeaky clean, or are there swear words or potentially offensive themes?

After so much close listening, we can step back and assess some broad overall characteristics of the song. Is this recorded live or in the studio? Is the production polished and sparkly, or does it have more of an underground vibe? What is the balance between acoustic, electric, and synthetic sonorities? What dominates the compositional balance of the song: is the emphasis on the lead vocals, the groove, or the performance?

Finally, we can step even further back and assess the dominant mood or moods present in the song, including valence and arousal levels, using the newest taxonomy we have developed, called the AMT.

Before we complete and submit the analysis, we ask ourselves: what is the most salient and notable aspect of this song? If you were to describe this song to a friend, what is the one thing you would call out, and have you done that with this analysis?

As you can see, the level of depth and detail that goes into tagging a single song is unmatched anywhere in the streaming music universe. We have provided this kind of analysis across millions of songs, which we then leverage to further our content knowledge across the tens of millions more songs that make up our full catalog. Our data science team has used this massive treasure trove of data to create the best music recommendation systems in the world, armed with the most comprehensive and complete content understanding database available anywhere.

We believe that the next song matters, and that is why we work hard to make our streaming service the best in the world. I hope this helps explain why, when that next song comes on your Pandora station, the recommendation we provide is the absolute best available.

-Camilo Landau, Music Analysis, Pandora
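To make the shape of such an analysis concrete, here is a toy sketch of what a completed record might hold, mirroring the taxonomy areas walked through above. The field names and example values are invented for illustration; the real MGP2 record format is not public:

```python
# Toy model of a completed song analysis, with one field per taxonomy
# area described above. All names and values are hypothetical.
from dataclasses import dataclass, field

@dataclass
class SongAnalysis:
    genres: list = field(default_factory=list)           # from the AGT hierarchy
    musicology: list = field(default_factory=list)       # key, harmony, meter, feel
    instrumentation: list = field(default_factory=list)  # instruments and roles
    vocals: list = field(default_factory=list)           # timbre, register, delivery
    lyrics: list = field(default_factory=list)           # themes and subjects
    moods: list = field(default_factory=list)            # from the AMT
    overall: list = field(default_factory=list)          # live/studio, production
    salient_trait: str = ""  # "the one thing you would call out to a friend"

analysis = SongAnalysis(
    genres=["Reggae Rock"],
    musicology=["Minor Key", "Heavy Backbeat"],
    vocals=["Gruff", "Passionate"],
    salient_trait="Gruff lead vocal over a heavy backbeat",
)
print(analysis.salient_trait)
```

Even this simplified shape shows why tags are friendlier inputs for machine learning models than fixed numeric genes: each field can grow new vocabulary without changing the record's structure.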