Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
MuMu is a Multimodal Music dataset with multi-label genre annotations that combines information from the Amazon Reviews dataset and the Million Song Dataset (MSD). The former contains millions of album customer reviews and album metadata gathered from Amazon.com. The latter is a collection of metadata and precomputed audio features for a million songs.
To map the information from both datasets we use MusicBrainz. This process yields the final set of 147,295 songs, which belong to 31,471 albums. For the mapped set of albums, there are 447,583 customer reviews from the Amazon dataset. The dataset has been used for multi-label music genre classification experiments in the related publication. In addition to genre annotations, this dataset provides further information about each album, such as average rating, selling rank, similar products, and cover image URL. For every text review, it also provides the review's helpfulness score, rating, and summary.
The mapping between the three datasets (Amazon, MusicBrainz and MSD), genre annotations, metadata, data splits, text reviews and links to images are available here. Images and audio files cannot be released due to copyright issues.
MuMu dataset (mapping, metadata, annotations and text reviews)
Data splits and multimodal feature embeddings for ISMIR multi-label classification experiments
These data can be used together with the Tartarus deep learning library https://github.com/sergiooramas/tartarus.
NOTE: This version provides simplified files with metadata and splits.
Scientific References
Please cite the following papers if using the MuMu dataset or the Tartarus library.
Oramas, S., Barbieri, F., Nieto, O., and Serra, X. (2018). Multimodal Deep Learning for Music Genre Classification, Transactions of the International Society for Music Information Retrieval, V(1).
Oramas S., Nieto O., Barbieri F., & Serra X. (2017). Multi-label Music Genre Classification from audio, text and images using Deep Features. In Proceedings of the 18th International Society for Music Information Retrieval Conference (ISMIR 2017). https://arxiv.org/abs/1707.04916
GNU General Public License 2.0 (GPL-2.0): http://www.gnu.org/licenses/old-licenses/gpl-2.0.en.html
What makes us humans able to tell two songs of different genres apart? Maybe you have been in the difficult situation of trying to explain to someone how the music style you like sounds. So, could automatic genre classification be possible?
Each row is an electronic music song. The dataset contains 100 songs for each of 23 electronic music genres; they were the top 100 songs of their genre in November 2016. The 71 columns are audio features extracted from a random two-minute sample of each audio file. These features were extracted using pyAudioAnalysis (https://github.com/tyiannak/pyAudioAnalysis).
The song name for each track is exactly the same as in the https://www.kaggle.com/caparrini/electronic-music-features-201611-beatporttop100 dataset.
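As a rough illustration of the kind of feature extraction described above, the minimal sketch below computes mid-term audio features with pyAudioAnalysis. The file path, window sizes and the averaging step are assumptions; the exact configuration behind the original 71 columns is not specified here.

```python
# Sketch: mid-term audio feature extraction with pyAudioAnalysis.
# Window sizes and the per-track averaging are assumptions, not the
# exact settings used to build the original 71 feature columns.
import numpy as np
from pyAudioAnalysis import audioBasicIO, MidTermFeatures

def extract_track_features(wav_path):
    sampling_rate, signal = audioBasicIO.read_audio_file(wav_path)
    signal = audioBasicIO.stereo_to_mono(signal)
    # 1 s mid-term windows over 50 ms short-term windows (assumed values)
    mid_feats, _, feat_names = MidTermFeatures.mid_feature_extraction(
        signal, sampling_rate,
        1.0 * sampling_rate, 1.0 * sampling_rate,
        0.050 * sampling_rate, 0.050 * sampling_rate)
    # average over time to obtain a single feature row per track
    return feat_names, mid_feats.mean(axis=1)

names, row = extract_track_features("track.wav")  # placeholder path
print(len(names), "features extracted")
```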
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
This dataset is a collection of mel-spectrogram features extracted from Indian regional music containing the following languages:
Hindi, Gujarati, Marathi, Konkani, Bengali, Oriya, Kashmiri, Assamese, Nepali, Konyak, Manipuri, Khasi & Jaintia, Tamil, Malayalam, Punjabi, Telugu, Kannada.
Five recordings were collected for each of four artists (2 male + 2 female) per language. For each language, 2 of the 4 artists are veteran performers and the remaining 2 are contemporary performers. Overall, the dataset includes 17 languages and 68 artists (34 male and 34 female). There are 340 recordings in the dataset, with a total duration of 29.3 hours.
A mel-spectrogram is extracted from each 1-second segment of a song, with a 1/2-second sliding window. The extracted mel-spectrogram for each segment is annotated with language, location, local_song_index, global_song_index, language_id, location_id, artist_id, gender_id and no_of_artists.
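As a non-authoritative illustration of the segmentation described above, the sketch below extracts mel-spectrograms from 1-second segments with a 1/2-second hop using librosa; the sampling rate, number of mel bands and file path are assumptions, since the description does not state them.

```python
# Sketch: mel-spectrograms over 1 s segments with a 0.5 s sliding window,
# using librosa. Sampling rate and n_mels are assumed values.
import librosa
import numpy as np

def segment_mel_spectrograms(audio_path, sr=22050, seg_dur=1.0, hop_dur=0.5, n_mels=128):
    y, _ = librosa.load(audio_path, sr=sr, mono=True)
    seg_len, hop_len = int(seg_dur * sr), int(hop_dur * sr)
    mels = []
    for start in range(0, len(y) - seg_len + 1, hop_len):
        segment = y[start:start + seg_len]
        mel = librosa.feature.melspectrogram(y=segment, sr=sr, n_mels=n_mels)
        mels.append(librosa.power_to_db(mel, ref=np.max))
    return np.stack(mels)  # shape: (n_segments, n_mels, n_frames)

print(segment_mel_spectrograms("song.wav").shape)  # placeholder path
```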
This project was funded under the grant number: ECR/2018/000204 by the Science & Engineering Research Board (SERB).
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
This dataset was studied in the paper "Temporal Analysis and Visualisation of Music", available at the following link:
https://sol.sbc.org.br/index.php/eniac/article/view/12155
This dataset provides a list of songs from 1950 to 2019 together with music metadata such as sadness, danceability, loudness, acousticness, etc. We also provide the lyrics, which can be used for natural language processing.
The audio data was scraped using the Echo Nest® API integrated engine with the spotipy Python package. The spotipy API permits the user to search for specific genres, artists, songs, release dates, etc. To obtain the lyrics, we used the Lyrics Genius® API as the base URL for requesting data based on the song title and artist name.
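The sketch below is a minimal example of how such a pipeline could look with the spotipy and lyricsgenius packages; the credentials, search query and field access are placeholders rather than the authors' actual scraping code.

```python
# Sketch: searching Spotify via spotipy and fetching lyrics via lyricsgenius.
# Credentials, the query and error handling are placeholders, not the
# authors' original scraping code.
import spotipy
from spotipy.oauth2 import SpotifyClientCredentials
import lyricsgenius

sp = spotipy.Spotify(auth_manager=SpotifyClientCredentials(
    client_id="YOUR_SPOTIFY_CLIENT_ID", client_secret="YOUR_SPOTIFY_CLIENT_SECRET"))
genius = lyricsgenius.Genius("YOUR_GENIUS_ACCESS_TOKEN")

# Search for tracks of a given genre and release-year range
results = sp.search(q="genre:blues year:1950-2019", type="track", limit=5)
for item in results["tracks"]["items"]:
    title = item["name"]
    artist = item["artists"][0]["name"]
    song = genius.search_song(title, artist)  # lyrics lookup by title + artist
    if song is not None:
        print(f"{artist} - {title}: {len(song.lyrics)} characters of lyrics")
```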
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
MGD: Music Genre Dataset
Over recent years, the world has seen a dramatic change in the way people consume music, moving from physical records to streaming services. Since 2017, such services have become the main source of revenue within the global recorded music market.
Therefore, this dataset is built using data from Spotify, which provides a weekly chart of the 200 most-streamed songs for each country and territory in which it is present, as well as an aggregated global chart.
Considering that countries behave differently when it comes to musical tastes, we use chart data from global and regional markets from January 2017 to December 2019, considering eight of the top 10 music markets according to IFPI: United States (1st), Japan (2nd), United Kingdom (3rd), Germany (4th), France (5th), Canada (8th), Australia (9th), and Brazil (10th).
We also provide information about the hit songs and artists present in the charts, such as all collaborating artists within a song (since the charts only provide the main ones) and their respective genres, which is the core of this work. MGD also provides data about musical collaboration, as we build collaboration networks based on artist partnerships in hit songs (a sketch of this construction follows the citation below). Therefore, this dataset contains:
This dataset was originally built for a conference paper at ISMIR 2020. If you make use of the dataset, please also cite the following paper:
Gabriel P. Oliveira, Mariana O. Silva, Danilo B. Seufitelli, Anisio Lacerda, and Mirella M. Moro. Detecting Collaboration Profiles in Success-based Music Genre Networks. In Proceedings of the 21st International Society for Music Information Retrieval Conference (ISMIR 2020), 2020.
@inproceedings{ismir/OliveiraSSLM20,
title = {Detecting Collaboration Profiles in Success-based Music Genre Networks},
author = {Gabriel P. Oliveira and
Mariana O. Silva and
Danilo B. Seufitelli and
Anisio Lacerda and
Mirella M. Moro},
booktitle = {21st International Society for Music Information Retrieval Conference},
pages = {726--732},
year = {2020}
}
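As referenced above, a minimal sketch of how a success-based collaboration network could be assembled from hit-song artist credits is shown below, using networkx; the input structure and artist names are hypothetical, not MGD's actual file layout.

```python
# Sketch: building an artist collaboration network from hit-song credits
# with networkx. The input (one artist list per charting song) is a
# hypothetical structure, not MGD's actual file layout.
import itertools
import networkx as nx

hit_song_artists = [
    ["Artist A", "Artist B"],
    ["Artist B", "Artist C", "Artist D"],
    ["Artist A", "Artist C"],
]

G = nx.Graph()
for artists in hit_song_artists:
    for a, b in itertools.combinations(artists, 2):
        if G.has_edge(a, b):
            G[a][b]["weight"] += 1  # count shared hit songs per pair
        else:
            G.add_edge(a, b, weight=1)

print(G.number_of_nodes(), "artists,", G.number_of_edges(), "partnerships")
```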
Attribution-NonCommercial-NoDerivs 4.0 (CC BY-NC-ND 4.0): https://creativecommons.org/licenses/by-nc-nd/4.0/
License information was derived automatically
Dataset Card for Music Genre
The Default dataset comprises approximately 1,700 musical pieces in .mp3 format, sourced from NetEase Music. The lengths of these pieces range from 270 to 300 seconds. All are sampled at a rate of 22,050 Hz. As the website providing the audio includes style labels for the downloaded music, there are no specific annotators involved; validation is performed concurrently with the downloading process. They are categorized into a total of 16… See the full description on the dataset page: https://huggingface.co/datasets/ccmusic-database/music_genre.
Attribution-ShareAlike 4.0 (CC BY-SA 4.0): https://creativecommons.org/licenses/by-sa/4.0/
License information was derived automatically
Dataset Card for MusicCaps
Dataset Summary
The MusicCaps dataset contains 5,521 music examples, each of which is labeled with an English aspect list and a free-text caption written by musicians. An aspect list is, for example, "pop, tinny wide hi hats, mellow piano melody, high pitched female vocal melody, sustained pulsating synth lead", while the caption consists of multiple sentences about the music, e.g., "A low sounding male voice is rapping over a fast paced… See the full description on the dataset page: https://huggingface.co/datasets/google/MusicCaps.
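A minimal sketch of loading the MusicCaps metadata with the Hugging Face datasets library is shown below; the split and field names ("caption", "aspect_list") are assumptions based on the dataset card, and the audio itself must be obtained separately.

```python
# Sketch: loading MusicCaps metadata with the Hugging Face datasets library.
# Split and field names are assumed from the dataset card; audio clips are
# not included and must be fetched separately.
from datasets import load_dataset

ds = load_dataset("google/MusicCaps", split="train")
example = ds[0]
print(example["caption"])      # free-text caption written by a musician
print(example["aspect_list"])  # comma-separated list of musical aspects
```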
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
MusicOSet is an open and enhanced dataset of musical elements (artists, songs and albums) based on musical popularity classification. It provides a directly accessible collection of data suitable for numerous tasks in music data mining (e.g., data visualization, classification, clustering, similarity search, MIR, HSS and so forth). To create MusicOSet, the potential information sources were divided into three main categories: music popularity sources, metadata sources, and acoustic and lyrical feature sources. Data from all three categories were initially collected between January and May 2019, and the data were then updated and enhanced in June 2019.
The attractive features of MusicOSet include:
| Data | # Records |
|:-----------------:|:---------:|
| Songs | 20,405 |
| Artists | 11,518 |
| Albums | 26,522 |
| Lyrics | 19,664 |
| Acoustic Features | 20,405 |
| Genres | 1,561 |
The large-scale MUSIC-AVQA dataset of musical performances contains 45,867 question-answer pairs, distributed over 9,288 videos totalling more than 150 hours. The QA pairs are divided into 3 modal scenarios, which contain 9 question types and 33 question templates. Finally, since the AVQA task is open-ended, all 42 kinds of answers constitute the answer set for selection.
According to a study on music consumption worldwide in 2022, younger generations tended to find new songs via music apps and social media, while older generations also used the radio as a format to discover new audio content.
Attribution-ShareAlike 4.0 (CC BY-SA 4.0): https://creativecommons.org/licenses/by-sa/4.0/
License information was derived automatically
The Song Describer Dataset: a Corpus of Audio Captions for Music-and-Language Evaluation
An example caption: "A retro-futurist drum machine groove drenched in bubbly synthetic sound effects and a hint of an acid bassline."
The Song Describer Dataset (SDD) contains ~1.1k captions for 706 permissively licensed music recordings. It is designed for use in the evaluation of models that address music-and-language (M&L) tasks such as music captioning, text-to-music generation and music-language retrieval. More information about the data, collection method and validation is provided in the paper describing the dataset. If you use this dataset, please cite our paper:
Manco, I., Weck, B., Doh, S., Won, M., Zhang, Y., Bogdanov, D., Wu, Y., Chen, K., Tovstogan, P., Benetos, E., Quinton, E., Fazekas, G., and Nam, J. (2023). The Song Describer Dataset: a Corpus of Audio Captions for Music-and-Language Evaluation. Machine Learning for Audio Workshop at NeurIPS 2023.
Neural Audio Fingerprint Dataset
(c) 2021 by Sungkyun Chang, https://github.com/mimbres/neural-audio-fp. This dataset includes all music sources, background noise and impulse response (IR) samples that have been used in the work "Neural Audio Fingerprint for High-specific Audio Retrieval based on Contrastive Learning" (https://arxiv.org/abs/2010.11910).
Format:
16-bit PCM Mono WAV, Sampling rate 8000 Hz
Description:
/… See the full description on the dataset page: https://huggingface.co/datasets/arch-raven/music-fingerprint-dataset.
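As a small check of the stated format (16-bit PCM, mono, 8000 Hz), the sketch below inspects a file with the soundfile library; the file name is a placeholder.

```python
# Sketch: verifying the stated format (16-bit PCM, mono, 8000 Hz) of one
# file with the soundfile library. The file name is a placeholder.
import soundfile as sf

info = sf.info("example.wav")
assert info.subtype == "PCM_16", info.subtype
assert info.channels == 1, info.channels
assert info.samplerate == 8000, info.samplerate

audio, sr = sf.read("example.wav")
print(f"{len(audio) / sr:.1f} s of mono audio at {sr} Hz")
```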
https://brightdata.com/license
Unlock powerful insights with our custom music datasets, offering access to millions of records from popular music platforms like Spotify, SoundCloud, Amazon Music, YouTube Music, and more. These datasets provide comprehensive data points such as track titles, artists, albums, genres, release dates, play counts, playlist details, popularity scores, user-generated tags, and much more, allowing you to analyze music trends, listener behavior, and industry patterns with precision.
Use these datasets to optimize your music strategies by identifying trending tracks, analyzing artist performance, understanding playlist dynamics, and tracking audience preferences across platforms. Gain valuable insights into streaming habits, regional popularity, and emerging genres to make data-driven decisions that enhance your marketing campaigns, content creation, and audience engagement.
Whether you’re a music producer, marketer, data analyst, or researcher, our music datasets empower you with the data needed to stay ahead in the ever-evolving music industry. Available in various formats such as JSON, CSV, and Parquet, and delivered via flexible options like API, S3, or email, these datasets ensure seamless integration into your workflows.
CC0 1.0 Universal Public Domain Dedication: https://creativecommons.org/publicdomain/zero/1.0/
License information was derived automatically
References:
Nakai, Koide-Majima, and Nishimoto (2021). Correspondence of categorical and feature-based representations of music in the human brain. Brain and Behavior. 11(1), e01936. https://doi.org/10.1002/brb3.1936
Nakai, Koide-Majima, and Nishimoto (2022). Music genre neuroimaging dataset. Data in Brief. 40, 107675. https://doi.org/10.1016/j.dib.2021.107675
We measured brain activity using functional MRI while five subjects (“sub-001”, …, “sub-005”) listened to music stimuli of 10 different genres.
The entire folder consists of subject-wise subfolders (“sub-001”,…). Each subject’s folder contains the following subfolders:
anat: T1-weighted structural images
func: functional signals (multi-band echo-planar images)
Each subject performed 18 runs, consisting of 12 training runs and 6 test runs. The training and test data are named using the following notation:
Training data: sub-00*_task-Training_run-**_bold.json
Test data: sub-00*_task-Test_run-**_bold.json
Each *_events.tsv file contains the following information (see the sketch after this list):
Onset: stimulus onset
Genre: genre type (out of 10 genres)
Track: index to identify the original track
Start: onset of excerpt from the original track (second)
End: offset of excerpt from the original track (second)
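As referenced above, a minimal sketch of reading one events file with pandas follows; the file name is a placeholder and the column names are taken from the list above (the released files may use different capitalization).

```python
# Sketch: reading one events file with pandas. The file name is a
# placeholder; column names follow the list above and may differ in
# capitalization in the released files.
import pandas as pd

events = pd.read_csv("sub-001_task-Training_run-01_events.tsv", sep="\t")
print(events[["Onset", "Genre", "Track", "Start", "End"]].head())
```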
The duration of all stimuli is 15 s. For each clip, 2 s of fade-in and fade-out effects were applied, and the overall signal intensity was normalized in terms of the root mean square.
For the training runs, the 1st stimulus (0-15 s) is the same as the last stimulus of the previous run (600-615 s). For the test runs, the 1st stimulus (0-15 s) is the same as the last stimulus of the same run (600-615 s).
Preprocessed data are available from Zenodo (https://doi.org/10.5281/zenodo.8275363). Experimental stimuli can be generated using GTZAN_Preprocess.py included in the same repository.
The original music stimuli (GTZAN dataset) can be found here: https://www.kaggle.com/datasets/andradaolteanu/gtzan-dataset-music-genre-classification
Caution: This dataset can be used for research purposes only. The data were anonymized, and users shall not perform analyses to re-identify individual subjects.
This dataset contains 80K music album reviews by the users of rateyourmusic.com.
The dataset was acquired by scraping in May 2022. It contains 79,922 album reviews and their ratings (where available).
The scraper can be found at this GitHub repo.
The chart of albums from which the reviews were scraped can be found here.
This dataset can be used to practice NLP.
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
The MuSe (Music Sentiment) dataset contains sentiment information for 90,408 songs. We computed scores for the affective dimensions of valence, dominance and arousal, based on the user-generated tags that are available for each song via Last.fm. In addition, we provide artist and title metadata as well as a Spotify ID and a MusicBrainz ID, which allow researchers to extend the dataset with further metadata, such as genre or year.
Though the tags themselves cannot be included in the dataset, we include a Jupyter notebook in our accompanying GitHub repository that demonstrates how to fetch the tags of a given song from the Last.fm API (Last.fm_API.ipynb).
We further include a Jupyter notebook in the same repository that demonstrates how one might enrich the dataset with audio features using different endpoints of the Spotify API and the included Spotify IDs (spotify_API.ipynb). Please note that in its current form, the dataset only contains tentative Spotify IDs for a subset (around 68%) of the songs.
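In the same spirit as the spotify_API.ipynb notebook mentioned above, the sketch below shows one way the included Spotify IDs might be used to fetch audio features via spotipy; the credentials and example ID are placeholders, and availability of the audio-features endpoint depends on Spotify's current API policy.

```python
# Sketch: fetching Spotify audio features for one of the included Spotify IDs
# via spotipy. Credentials and the example track ID are placeholders.
import spotipy
from spotipy.oauth2 import SpotifyClientCredentials

sp = spotipy.Spotify(auth_manager=SpotifyClientCredentials(
    client_id="YOUR_SPOTIFY_CLIENT_ID", client_secret="YOUR_SPOTIFY_CLIENT_SECRET"))

features = sp.audio_features(["4uLU6hMCjMI75M1A2tKUQC"])[0]  # placeholder track ID
print(features["valence"], features["energy"], features["danceability"])
```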
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
This dataset is a collection of mel-spectrogram features extracted from Indian folk music containing the following 15 folk styles:
Bauls, Bhavageethe, Garba, Kajri, Maand, Sohar, Tamang Selo, Veeragase, Bhatiali, Bihu, Gidha, Lavani, Naatupura Paatu, Sufi, Uttarakhandi.
The number of recordings per folk style varies from 16 to 50, reflecting the scarcity of some of these folk styles on the Internet. The number of artists per style ranges from 4 to 22. Overall, there are 125 artists (34 female + 91 male) across these 15 folk styles.
There is a total of 606 recordings in the dataset, with a total duration of 54.45 hours.
A mel-spectrogram is extracted from each 3-second segment of a song, with a 1/2-second sliding window. The extracted mel-spectrogram for each segment is annotated with folk_style, state, artist, gender, song, source, no_of_artists, folk_style_id, state_id, artist_id and gender_id.
This project was funded under the grant number: ECR/2018/000204 by the Science & Engineering Research Board (SERB).
The most successful music streaming service in the United States was Apple Music as of September, with the most up-to-date information showing that 49.5 million users accessed the platform each month. Spotify closely followed, with a similarly impressive 47.7 million monthly users.
What is a music streaming service?
Music streaming services provide their users with a database compiled of songs, playlists, albums and videos, where content can be accessed online, downloaded, shared, bookmarked and organized.
The music streaming business is huge, and has sometimes been lauded as the savior of the music industry. The biggest two services are in constant competition for the monopoly of the market. Apple Music was launched in 2015, whereas Spotify has been around since 2008. Other popular streaming services include Deezer, SoundCloud and iHeartRadio.
Do artists make a lot of money from streaming services?
In short, unfortunately not. Both Apple Music and Spotify have been frequently criticized for the tiny royalty payments they offer artists. Particularly for emerging talent, streaming services are far from a lucrative source of income. Bigger, established stars like Taylor Swift are more likely to regularly make a good amount of money this way. But either way, a track needs to go viral or be streamed several million times before it earns any real cash.
Our Moroccan Music Dataset is a comprehensive collection of audio samples representing six prominent genres of Moroccan music. The dataset is structured into six folders, each dedicated to a specific genre: Chaabi, Rap, Andalusian, Rai, Gnawa, and Imazighn.
Each folder contains a diverse range of audio samples showcasing the unique characteristics and styles of the respective genre. From the infectious beats of Chaabi to the powerful lyrics of Rap, the soulful melodies of Andalusian, the energetic rhythms of Rai, the spiritual chants of Gnawa, and the traditional sounds of Imazighn, the dataset encompasses the rich musical landscape of Morocco.