Music Recommender Systems (mRS) are designed to give personalised
and meaningful recommendations of items (i.e. songs, playlists
or artists) to a user base, thereby reflecting and further complementing
individual users’ specific music preferences. Whilst accuracy
metrics have been widely applied to evaluate recommendations in
mRS literature, evaluating a user's item utility from other impact-oriented
perspectives, including their potential for discrimination,
is still a novel evaluation practice in the music domain. In this work,
we center our attention on a specific phenomenon and ask whether mRS
may exacerbate its impact: gender bias.
Our work presents an exploratory study, analyzing the extent to
which commonly deployed state-of-the-art Collaborative Filtering
(CF) algorithms may act to further increase or decrease artist gender
bias. To assess group biases introduced by CF, we deploy a
recently proposed metric of bias disparity on two listening event
datasets: the LFM-1b dataset and the earlier-constructed Celma's
dataset. Our work traces the causes of disparity to variations in
input gender distributions and user-item preferences, highlighting
the effect such configurations can have on users' gender bias after
recommendation generation.
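
The abstract names a bias disparity metric without defining it. As a rough, hypothetical sketch only, the Python below illustrates one common way such a metric is computed for artist-gender categories, comparing the bias in the input listening events (S) with the bias in the generated recommendations (R) via preference ratios; the function names, arguments, and exact formulation are assumptions for illustration, not necessarily the paper's own implementation.

```python
# Hedged sketch (not the paper's code): a preference-ratio style bias
# disparity computation over artist-gender categories.

def preference_ratio(interactions, item_category, group_users, category):
    """Fraction of a user group's interactions that fall in `category`.

    interactions : iterable of (user, item) pairs
    item_category: dict mapping item -> category label (e.g. artist gender)
    group_users  : set of users belonging to group G
    category     : category label c
    """
    group_events = [(u, i) for u, i in interactions if u in group_users]
    if not group_events:
        return 0.0
    in_cat = sum(1 for _, i in group_events if item_category.get(i) == category)
    return in_cat / len(group_events)

def bias(interactions, item_category, group_users, category):
    """Bias B(G, c): preference ratio normalised by the category's share of all items."""
    items = set(item_category)
    p_c = sum(1 for i in items if item_category[i] == category) / len(items)
    return preference_ratio(interactions, item_category, group_users, category) / p_c

def bias_disparity(input_events, recommended_events, item_category, group_users, category):
    """Relative change in bias from the input data S to the recommendations R.

    Only defined when the input bias B_S(G, c) is non-zero.
    """
    b_s = bias(input_events, item_category, group_users, category)
    b_r = bias(recommended_events, item_category, group_users, category)
    return (b_r - b_s) / b_s
```

Under this reading, a positive value for a given user group and gender category would indicate that the recommender amplifies that group's input bias, while a negative value would indicate attenuation.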