Music’s Quantitative Methods

Today, recommendation engines designed to curate the perfect individual playlist are redefining both the demand and supply sides of the music industry, affecting how listeners judge music and how purveyors sell it. Consumer and song information must be processed, and suitable algorithms found. The result is a great deal of machine-intelligent number crunching to determine music plays, royalty payments, and marketing targets. The occasional musical composition, too, now defaults to a formulaic template.

This article explores this new use of algorithms in the music business. It is not an easy task. Businesses do not reveal their secrets for fear of surrendering a competitive advantage. The target, moreover, is elusive, and methods for reducing big data to business operations are always evolving. Still, companies like Spotify and Pandora, and to a lesser extent Apple Music, have been open about how they use algorithms to help their customers discover artists. We turn to this next.


The Music Genome Project, developed by Pandora Media, has been classifying music data manually and through automated algorithms since 1999. Some 450 data points are collected on every one of its one million songs, including genre, instrumentation, tempo, and the gender of the vocalist. Specially trained musicians study each track and feed the information to dedicated software. This may not be unlike Netflix employees tagging shows for content, but the data is also processed statistically to uncover usage patterns that are not obvious to the naked eye and can be counterintuitive.
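The genome idea can be pictured with a toy sketch: represent each song as a vector of hand-scored attributes (the song names, attribute names, and numbers below are invented for illustration; the real project uses some 450 attributes) and treat the distance between vectors as musical similarity.

```python
import math

# Hypothetical attribute scores in [0, 1] for three songs.
songs = {
    "Song A": {"tempo": 0.80, "acoustic": 0.2, "vocal_grit": 0.7},
    "Song B": {"tempo": 0.75, "acoustic": 0.3, "vocal_grit": 0.6},
    "Song C": {"tempo": 0.20, "acoustic": 0.9, "vocal_grit": 0.1},
}

def distance(a, b):
    """Euclidean distance between two attribute dictionaries."""
    return math.sqrt(sum((a[k] - b[k]) ** 2 for k in a))

def most_similar(title):
    """Return the closest other song by attribute distance."""
    target = songs[title]
    others = [t for t in songs if t != title]
    return min(others, key=lambda t: distance(target, songs[t]))

print(most_similar("Song A"))  # Song B sits closest in attribute space
```

With many more attributes and songs, the same nearest-neighbor logic is what lets a service surface tracks that "sound like" a seed song.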

A good example is SoundScan. In the early 1990s, SoundScan collected bar-code point-of-sale data for the record labels. Its premium package offered correlation analysis and could identify songs and artists whose sales were peaking together for no apparent reason. For instance, today, if the correlation coefficient between Sheryl Crow, a country artist, and Andra Day, an urban artist, were strong after repeated releases, then label executives at Warner Music could tie the two together and devise a suitable marketing campaign, even pairing the two artists, not related by genre at all, in concert. The algorithm that generates the correlation coefficient is picking up an apparently random connection that on closer inspection turns out not to be so, perhaps because the cohort of listeners being tracked gravitates to both artists: birds of a feather flock together, and these fans may share similar income levels.
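The correlation analysis described above can be illustrated with a short, hypothetical sketch: given two artists' weekly sales series (the figures below are made up), the Pearson coefficient measures how closely the two move together, with values near 1.0 signaling the kind of lockstep pattern a label analyst would flag.

```python
import math

# Made-up weekly unit sales for two artists.
artist_x = [120, 150, 170, 160, 200, 230]
artist_y = [80, 100, 115, 110, 140, 165]

def pearson(x, y):
    """Pearson correlation coefficient of two equal-length series."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

r = pearson(artist_x, artist_y)
print(round(r, 3))  # close to 1.0: the two sales series rise together
```

A coefficient this high across repeated releases is exactly the sort of "non-obvious" signal that could prompt a joint marketing campaign.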

Spotify’s data-driven approach is well known. The company has developed a workflow manager it calls Luigi, which processes Spotify’s user data and generates the music recommendations and song picks on its online radio service. The data also feeds decision-making, forecasting, and business analytics. In 2013, Spotify used Luigi to predict the Grammy Award winners; four of its six predictions proved correct.

Moreover, Spotify recently acquired music intelligence company The Echo Nest (see “Spotify’s Secret Weapon,” MBJ, Oct. 2014). The Echo Nest is a platform for developers and media companies. It attempts a new taxonomy of hearing, decoding both the audio and the textual content of recorded music so that future playback devices can use its self-defined parameters to showcase music with attributes that are not yet self-evident but could become so in time. In this regard, it compares to Pandora’s Music Genome Project when it launched. The Echo Nest, however, is far more automated, and human decision-making tends to stop at the programming level. As well as using its own algorithms to analyze and classify music, The Echo Nest crawls the web for artist and recording data, supplementing its base information.

Another player is Pandora’s Next Big Sound. Pandora acquired the company in 2015, suggesting that spending on music intelligence analytics is catching on among the major online music players. Next Big Sound analyzes the popularity of artists on sites like Facebook, Twitter, and Wikipedia, as well as on streaming services and radio, and sells its data to record companies and other outlets. Its reports on the consumption of music are becoming ever more important in the industry.

Pandora’s Next Big Sound clients also include well-known consumer brands like Pepsi and American Express. Its analytics help identify artists likely to see significant growth in a desirable demographic. Sponsorship money, of course, is a welcome addition for talent and the record labels, and can supplement traditional revenues in live music and merchandising. It is also cheaper for a consumer brand to license a song for a commercial six months before it becomes a hit in the charts.


An important aspect of current data gathering is that services tend to offer curated plays to differentiate themselves from one another. Playlists have always been a core currency of streaming, but now more than ever they are the engine that drives both discovery and consumption. In doing so they are helping push hit singles to dominance and albums to the margins.

Streaming services are “hyper-personalizing” their user content. Spotify’s playlist Discover Weekly has played a big role. Its algorithm cross-references one user’s data with that of users with similar tastes to recommend new songs and artists. More personalized playlists, like Release Radar and Daily Mix, are intended both to net and to please subscribers. Sometimes they do not. Users of Discover Weekly complain that it can be hit or miss: it often suggests the same artists and songs and fails to account for the random factor in users’ preferences.
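The cross-referencing idea behind playlists like Discover Weekly can be sketched, in much simplified form, as neighborhood-based collaborative filtering: find the user whose listening history most resembles yours and recommend what they play that you have not heard. The users, songs, and play counts below are invented, and this is not Spotify's actual algorithm.

```python
import math

# Toy play-count histories for three hypothetical users.
plays = {
    "alice": {"song1": 5, "song2": 3, "song4": 2},
    "bob": {"song1": 4, "song2": 4, "song3": 6},
    "carol": {"song3": 2, "song5": 7},
}

def cosine(u, v):
    """Cosine similarity between two sparse play-count vectors."""
    shared = set(u) & set(v)
    dot = sum(u[s] * v[s] for s in shared)
    nu = math.sqrt(sum(c * c for c in u.values()))
    nv = math.sqrt(sum(c * c for c in v.values()))
    return dot / (nu * nv)

def recommend(user):
    """Songs played by the most similar other user, unheard by `user`."""
    neighbor = max((u for u in plays if u != user),
                   key=lambda u: cosine(plays[user], plays[u]))
    return sorted(set(plays[neighbor]) - set(plays[user]))

print(recommend("alice"))  # bob is alice's nearest neighbor -> ['song3']
```

The "hit or miss" complaint follows naturally from this design: a single nearest neighborhood keeps surfacing the same pool of songs and has no notion of a listener's appetite for randomness.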

Algorithms may only go so far in predicting human nature, but hope springs eternal. Artificial intelligence, that is, intelligence programmed by humans, is much in vogue, and extra-musical factors, as well as the crooked timber of humanity, are believed to be fair game for future predictions.

In effect, a listener’s location, mood, and even the weather are now becoming inputs to some recommendation engines. Google Play, in particular, is working on such adaptive functions. Its algorithms will apparently be able to recognize a listener’s guilty pleasures, see where the listener has been throughout the day, and understand how people listen to music overall. Combined with the sort of data a smartphone provides, this could mean that music services find the right track for one’s daily circumstances.
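A context-aware recommendation of this kind might be sketched, very loosely, as a scoring rule that boosts tracks whose tags match the listener's current situation on top of a base taste score. The tracks, tags, and weights below are purely hypothetical and do not describe any service's actual system.

```python
# Each track carries descriptive tags and a base "taste" score for the user.
TRACKS = [
    {"title": "Rainy Day Blues", "tags": {"rain", "calm"}, "taste": 0.6},
    {"title": "Gym Anthem", "tags": {"workout", "upbeat"}, "taste": 0.5},
    {"title": "Sunny Drive", "tags": {"sun", "driving"}, "taste": 0.7},
]

def pick_track(context, boost=0.3):
    """Rank tracks by taste plus a bonus for each matching context tag."""
    def score(track):
        return track["taste"] + boost * len(track["tags"] & context)
    return max(TRACKS, key=score)["title"]

# A smartphone might report the context as a set of signals.
print(pick_track({"rain", "commute"}))  # the rain tag lifts "Rainy Day Blues"
```

The interesting design question is the `boost` weight: set it too high and context overrides taste entirely, which is precisely the Pat Metheny-at-the-dentist's problem discussed below.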

The idea that an algorithm could help users choose music according to the minute drama of their daily lives is a novel one, and perhaps a good one. It is also fraught with problems, not least the trade-off it would imply with privacy. Moreover, if Joe Smith loves jazz guitarist Pat Metheny, how would he feel if that were the algorithm’s track of choice when he is at the dentist’s? Would he like it or not? It is hard to imagine someone programming that answer correctly.

Still, collecting all sorts of data for later reduction and monetization has much currency. The rise of smart assistants such as Apple’s Siri and Amazon’s Alexa in the home also points to the use of devices that become “musical concierges” of sorts in the living room or car. And IBM’s Watson engine, famous for beating human champions at Jeopardy!, is now engaged by London startup Quantico to improve music recommendations by drawing on music reviews, blogs, and Twitter comments.

Pity, if you will, the folks at Apple Music. They still depend on human curators to compile their playlists. Or so they say.

Machine Credits

Algorithmic music making may be on the horizon as well.

Tech companies do not usually work in a vacuum, and Jukedeck, part of London’s TechHub, a global community of entrepreneurs and startups, has already machine-written half a million pieces of original music. Jukedeck says the technology it employs has been around for half a century, but that it is one of the first companies to tap it commercially. Jukedeck is programmed to write music note by note rather than using loops; currently, users can choose from (i) seven genres, including pop, ambient, and rock, (ii) two moods, (iii) two instruments, and (iv) various tempos. Google and the Natural History Museum in London have used Jukedeck’s compositions for advertising and promotional jingles. A machine-written composition pays no royalties, so Jukedeck’s creators believe they may have a new business model to exploit, saving retailers the intellectual property costs of Muzak-style background plays.
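Note-by-note generation can be illustrated, without claiming this is Jukedeck's method, by a first-order Markov chain: learn which note tends to follow which in a training melody, then sample new notes one at a time from that table. The training melody below is invented.

```python
import random

# A short invented training melody.
training_melody = ["C", "D", "E", "C", "D", "G", "E", "C", "D", "E"]

# Transition table: each note maps to the notes observed following it.
transitions = {}
for a, b in zip(training_melody, training_melody[1:]):
    transitions.setdefault(a, []).append(b)

def compose(start, length, seed=0):
    """Generate `length` notes by sampling the transition table."""
    rng = random.Random(seed)
    melody = [start]
    for _ in range(length - 1):
        choices = transitions.get(melody[-1])
        if not choices:  # dead end: restart from the opening note
            choices = [start]
        melody.append(rng.choice(choices))
    return melody

print(compose("C", 8))
```

Commercial systems layer far more structure on top (harmony, phrasing, genre constraints), but the core step, emitting one note conditioned on what came before, is the same.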

Finally, it is worth noting that Google’s DeepMind has been used to create classical piano music, and that Google’s Magenta project seeks to use machine learning to create “compelling art and music.” Thus, the future of algorithmic composition, and the challenge it poses to musicians, cannot be dismissed offhand.


Across the board, both in the music business and in music production, there is a growing schism between the individual decisions made by musicians and their intermediaries and the more impersonal approach of technology companies. The market is ceding power to centralized, tech-savvy operators for whom music is of only peripheral value. For Google and Apple, propping up the music market is rarely an end in itself. Music, rather, is a means to other goals, like selling hardware or reaching users.

Pandora and Spotify work more in tandem with the music business. However, Pandora is financed through ads, not direct music sales. As for Spotify, it has relied for its growth on fresh money from late investors like Coca-Cola and Goldman Sachs. As paid music subscriptions are not yet sufficient to cover costs, and the service’s current market value exceeds revenues by a factor of four, Spotify’s investors seem more interested in things other than music. For them, music may well be the Trojan horse that opens up the big data treasure trove on a new generation of music listeners.

Thus, it must be admitted that if music’s big data future is aiding the market, it is also compromising its integrity. The intrinsic value of a piece of music is not as clear today as it once was, and in the age of automated purchases, this cheap product of mass consumption is worth more for what it tells us about the user. As the self-sufficiency of music making in the closed ecosystem of old is no longer an option, music stakeholders will have to play the game differently if they are to survive.


By Alexander Stewart



  • Sisario, Ben. “Pandora Buys Next Big Sound to Track Popular Music.” The New York Times, 19 May 2015.
  • Pham, Alex. “Business Matters: Why Spotify Bought The Echo Nest.” Billboard, 6 Mar. 2014.
  • Ratliff, Ben. “Slave to the Algorithm? How Music Fans Can Reclaim Their Playlists from Spotify.” The Guardian, 19 Feb. 2016.
  • Rijmenam, Mark van. “How Big Data Enabled Spotify To Change The Music Industry.” Datafloq, 29 Aug. 2015.
  • Langham, Matthew. “Spotify, Big Data, And The Future Of Music Streaming.” Dotted Music, 21 Nov. 2015.
  • Owsinski, Bobby. “The Music Industry’s Big Data Problem.” Forbes, 2 June 2016.
  • Shubber, Kadhim. “Music Analytics Is Helping the Music Industry See into the Future.” The Guardian, 9 Apr. 2014.
  • Kadziulis, Mike. “The Importance of Music Curation in the Digital Era.” Pigeons & Planes, Complex, 20 Oct. 2016.
  • McGoogan, Cara. “Jukedeck Has Taught a Computer Program to Write Songs in Seconds.” The Telegraph, 21 May 2016.
  • Hogan, Marc. “Up Next: How Playlists Are Curating the Future of Music.” Pitchfork, 16 July 2015.
  • Burg, Natalie. “The MacallanVoice: Next Big Sound Empowers Musicians With Big Data.” Forbes, 10 Nov. 2016.
  • Augur, Hannah. “Will Big Data Write The Next Hit Song?” Dataconomy, 1 Jan. 2016.
  • Marshall, Alex. “From Jingles to Pop Hits, A.I. Is Music to Some Ears.” The New York Times, 22 Jan. 2017.
  • “Music’s Smart Future: The Impact of AI on the Future of the Music Industry.” British Phonographic Industry, 17 Feb. 2017.
  • Fildes, Nic. Financial Times, 2 Dec. 2016.

