In 2018, Apple closed its $400 million acquisition of the music recognition app Shazam. Now, it's bringing Shazam's audio recognition capabilities to app developers in the form of the new ShazamKit. The framework will allow developers — on both Apple platforms and Android — to build apps that can identify music from Shazam's massive database of songs, or even from their own custom catalog of pre-recorded audio.
Many consumers are already familiar with the mobile app Shazam, which lets you push a button to identify what song you’re hearing, and then take other actions — like viewing the lyrics, adding the piece to a playlist, exploring music trends, and more. Having first launched in 2008, Shazam was already one of the oldest apps on the App Store when Apple snatched it up.
Now the company is putting Shazam to better use than being just a music identification utility. With ShazamKit, developers can leverage Shazam's audio recognition capabilities to create their own app experiences. The framework has three parts: Shazam catalog recognition, which lets developers add song recognition to their apps; custom catalog recognition, which performs on-device matching against a developer's own pre-recorded audio; and library management.
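To make the custom catalog piece concrete, here is a minimal Swift sketch of how a developer might build a catalog from their own audio and match against it on-device. The function name, title, and artist values are illustrative placeholders; the `SHSignatureGenerator`, `SHMediaItem`, `SHCustomCatalog`, and `SHSession` types are part of ShazamKit itself.

```swift
import ShazamKit
import AVFAudio

// A minimal sketch: build a custom catalog from pre-recorded audio,
// then create a session that matches against it entirely on-device.
func makeCustomSession(referenceAudio buffer: AVAudioPCMBuffer) throws -> SHSession {
    // Generate a reference signature from your own audio.
    let generator = SHSignatureGenerator()
    try generator.append(buffer, at: nil)
    let signature = generator.signature()

    // Describe the audio with whatever metadata your app needs.
    // (Illustrative values — any SHMediaItem properties can be attached.)
    let mediaItem = SHMediaItem(properties: [
        .title: "Episode 1",
        .artist: "My Podcast"
    ])

    // Add the signature and its metadata to a custom catalog.
    let catalog = SHCustomCatalog()
    try catalog.addReferenceSignature(signature, representing: [mediaItem])

    // A session created with a custom catalog never touches Shazam's servers.
    return SHSession(catalog: catalog)
}
```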
Shazam catalog recognition is what you probably think of as the Shazam experience today. The technology recognizes the song playing in the environment, then fetches the song's metadata, like the title and artist. The ShazamKit API can also return other metadata, like genre or album art, and it can identify where in the audio the match occurred.
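As a sketch of how Shazam catalog recognition looks in code, the snippet below streams microphone audio into a session and reads back the metadata described above. The `MatchListener` class name is hypothetical; the `SHSessionDelegate` callbacks and the `SHMatchedMediaItem` properties (`title`, `artist`, `genres`, `artworkURL`, `matchOffset`) come from ShazamKit.

```swift
import ShazamKit
import AVFAudio

// A minimal sketch of matching against the Shazam catalog.
final class MatchListener: NSObject, SHSessionDelegate {
    let session = SHSession() // defaults to the Shazam catalog

    override init() {
        super.init()
        session.delegate = self
    }

    // Feed audio captured from the microphone (e.g. via an
    // AVAudioEngine input-node tap) into the session.
    func listen(to buffer: AVAudioPCMBuffer, at time: AVAudioTime?) {
        session.matchStreamingBuffer(buffer, at: time)
    }

    // Called when ShazamKit finds a match.
    func session(_ session: SHSession, didFind match: SHMatch) {
        guard let item = match.mediaItems.first else { return }
        print("Title:", item.title ?? "unknown")
        print("Artist:", item.artist ?? "unknown")
        print("Genres:", item.genres)            // additional metadata
        print("Artwork:", item.artworkURL as Any)
        print("Match offset:", item.matchOffset) // where in the audio the match occurred
    }

    // Called when no match was found (or an error occurred).
    func session(_ session: SHSession,
                 didNotFindMatchFor signature: SHSignature,
                 error: Error?) {
        print("No match:", error?.localizedDescription ?? "none")
    }
}
```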