Williams, DAH (ORCID: https://orcid.org/0000-0003-4793-8330), Hodge, VJ and Wu, CY 2020, 'On the use of AI for generation of functional music to improve mental health', Frontiers in Artificial Intelligence, 3, p. 497864.
PDF - Published Version: Available under License Creative Commons Attribution 4.0 (688kB).
PDF - Accepted Version: Restricted to Repository staff only (461kB).
Abstract
Increasingly, music has been shown to have both physical and mental health benefits, including improvements in cardiovascular health, a link to reduction of cases of dementia in elderly populations, and improvements in markers of general mental well-being such as stress reduction. Here, we describe short case studies addressing general mental well-being (anxiety, stress reduction) through AI-driven music generation. Engaging in active listening and music-making activities (especially for at-risk age groups) can be particularly beneficial, and the practice of music therapy has been shown to be helpful in a range of use cases across a wide age range. However, access to music-making can be prohibitive in terms of access to expertise, materials, and cost. Furthermore, the use of existing music for functional outcomes (such as the targeted improvement in physical and mental health markers suggested above) can be hindered by issues of repetition and subsequent over-familiarity with existing material. In this paper, we describe machine learning (ML) approaches which create functional music informed by biophysiological measurement across two case studies, with target emotional states at opposing ends of a Cartesian affective space (a dimensional emotion space with points ranging from relaxation to fear). We use galvanic skin response (GSR) as a marker of psychological arousal and as an estimate of emotional state, serving as a control signal in the training of the ML algorithm. This algorithm creates a non-linear time series of musical features for sound synthesis ‘on-the-fly’, using a perceptually informed musical feature similarity model. We find an interaction between familiarity (or, more generally, the feature-set model we have implemented) and perceived emotional response, and so focus on generating new, emotionally congruent pieces. We also report on subsequent psychometric evaluation of the generated material, and consider how these, and similar, techniques might be useful for a range of functional music generation tasks, for example in non-linear soundtracking such as that found in interactive media or video games.
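The pipeline the abstract describes (a GSR-derived arousal estimate used as a control signal for selecting emotionally congruent musical features under a perceptually weighted similarity measure) can be illustrated with a minimal sketch. The feature names, perceptual weights, and min-max arousal estimate below are illustrative assumptions, not the paper's fitted model:

```python
import numpy as np

# Hypothetical perceptual weights: how strongly each musical feature
# (tempo, brightness, roughness, loudness) contributes to perceived
# similarity. Illustrative values only, not the paper's fitted model.
PERCEPTUAL_WEIGHTS = np.array([0.4, 0.2, 0.25, 0.15])

def arousal_from_gsr(gsr_window):
    """Estimate psychological arousal in [0, 1] from a window of raw
    GSR samples (a simple min-max baseline, assumed for illustration)."""
    g = np.asarray(gsr_window, dtype=float)
    span = g.max() - g.min()
    if span == 0:
        return 0.0
    # Position of the most recent sample within the window's range.
    return float((g[-1] - g.min()) / span)

def nearest_feature_set(target, candidates):
    """Select the candidate musical feature vector closest to the target
    under a perceptually weighted Euclidean distance."""
    diffs = candidates - target                      # (n_candidates, n_features)
    d = np.sqrt(((diffs ** 2) * PERCEPTUAL_WEIGHTS).sum(axis=1))
    return candidates[np.argmin(d)]

# Example: drive feature selection from a simulated GSR stream.
rng = np.random.default_rng(0)
gsr_stream = np.cumsum(rng.normal(0, 0.1, 500)) + 5.0  # fake sensor data
arousal = arousal_from_gsr(gsr_stream[-100:])

# Map arousal to a target feature vector (tempo, brightness, roughness,
# loudness), each normalized to [0, 1]: higher arousal -> faster, louder.
target = np.array([arousal, 0.5, arousal * 0.8, arousal])

candidates = rng.uniform(0, 1, size=(64, 4))  # pool of generated feature sets
print(nearest_feature_set(target, candidates))
```

In a real-time setting, the selected feature vector would be handed to the synthesis engine each window, so the generated music tracks the listener's estimated emotional state on-the-fly.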
| Item Type: | Article |
|---|---|
| Contributors: | Dannenberg, RE (Editor), Xia, G (Reviewer) and Bonnici, A (Reviewer) |
| Schools: | Schools > School of Computing, Science and Engineering > Salford Innovation Research Centre |
| Journal or Publication Title: | Frontiers in Artificial Intelligence |
| Publisher: | Frontiers |
| ISSN: | 2624-8212 |
| Funders: | Engineering and Physical Sciences Research Council (EPSRC) |
| Depositing User: | USIR Admin |
| Date Deposited: | 20 Oct 2020 08:13 |
| Last Modified: | 16 Feb 2022 05:51 |
| URI: | https://usir.salford.ac.uk/id/eprint/58601 |