2025
Conference article  Open Access

GATSY: graph attention network for music artist similarity

Di Francesco A. G., Giampietro G., Spinelli I., Comminiello D.

Artist similarity  Graph attention networks  Graph neural networks  Recommendation systems  Information Retrieval (cs.IR)  FOS: Computer and information sciences

The quest for artist similarity has become a crucial subject in both social and scientific contexts, driven by the desire to enhance music discovery according to user preferences. However, defining similarity among artists remains challenging due to its inherently subjective nature, which can impact recommendation accuracy. This paper introduces GATSY, a novel recommendation system built upon graph attention networks and driven by a clusterized embedding of artists. The proposed framework leverages the graph topology of the input data to achieve strong performance without relying heavily on hand-crafted features. This flexibility allows us to include fictitious artists within a music dataset, creating connections between previously unlinked artists and enabling diverse recommendations drawn from heterogeneous sources. Experimental results demonstrate the effectiveness of the proposed method with respect to state-of-the-art solutions while maintaining flexibility. The code to reproduce these experiments is available at https://github.com/difra100/GATSY-Music_Artist_Similarity.

Source: Proceedings of the International Joint Conference on Neural Networks (IJCNN), pp. 1-8, Roma, Italy, 2025

Publisher: Institute of Electrical and Electronics Engineers Inc.


BibTeX entry
@inproceedings{oai:iris.cnr.it:20.500.14243/563682,
	title = {GATSY: graph attention network for music artist similarity},
	author = {Di Francesco A.  G. and Giampietro G. and Spinelli I. and Comminiello D.},
	publisher = {Institute of Electrical and Electronics Engineers Inc.},
	doi = {10.1109/ijcnn64981.2025.11228629},
	note = {arXiv:2311.00635},
	booktitle = {Proceedings of the International Joint Conference on Neural Networks (IJCNN)},
	address = {Roma, Italy},
	pages = {1-8},
	year = {2025}
}