Title:
ARTIFICIAL LIFE IN INTEGRATED INTERACTIVE SONIFICATION AND VISUALISATION: INITIAL EXPERIMENTS WITH A PYTHON-BASED WORKFLOW
Contributors:
Intelligent Instruments Lab, University of Iceland; Jones/Bulley Studios, London, UK
Publisher Information:
Georgia Institute of Technology
International Community for Auditory Display (ICAD)
Publication Year:
2024
Collection:
Georgia Institute of Technology: SMARTech - Scholarly Materials and Research at Georgia Tech
Document Type:
Conference object
File Description:
application/pdf
Language:
unknown
DOI:
10.21785/icad2024.003
Rights:
Creative Commons Attribution Non-Commercial 4.0 International (CC BY-NC 4.0); http://creativecommons.org/licenses/by-nc/4.0/
Accession Number:
edsbas.86625B9E
Database:
BASE

Further Information

Presented at the 29th International Conference on Auditory Display (ICAD 2024)

Multimodal displays that combine interaction, sonification, visualisation, and perhaps other modalities are seeing increased interest from researchers seeking to take advantage of cross-modal perception by increasing display bandwidth and expanding affordances. To support researchers and designers, many new tools are being proposed that aim to consolidate these broad feature sets into Python libraries, due to Python’s extensive ecosystem, which in particular encompasses the domain of artificial intelligence (AI). Artificial life (ALife) is a domain of AI that is seeing renewed interest, and in this work we share initial experiments exploring its potential in interactive sonification through the combination of two new Python libraries, Tölvera and SignalFlow. Tölvera is a library for composing self-organising systems, with integrated open sound control, interactive machine learning, and computer vision; SignalFlow is a sound synthesis framework that enables realtime interaction with an audio signal processing graph via standard Python syntax and data types. We demonstrate how these two tools integrate, and the first author reports on usage in creative coding and artistic performance. So far we have found it useful to consider ALife as affording synthetic behaviour as a display modality, making use of human perception of complex, collective and emergent dynamics. In addition, we think ALife implies a broader perspective on interaction in multimodal display, blurring the lines between data, agent and observer. Based on our experiences, we offer possible future research directions for tool designers and researchers.
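
To illustrate the kind of interaction the abstract describes (driving an audio signal processing graph from standard Python syntax), the minimal sketch below uses SignalFlow classes that appear in its public documentation (AudioGraph, SineOscillator, set_input, play, wait). It is not code from the paper: the mapping from an agent's position to oscillator frequency is a hypothetical stand-in for data that Tölvera might stream, for example over open sound control.

    from signalflow import *

    # Create the realtime audio graph (SignalFlow's documented entry point)
    graph = AudioGraph()

    # A simple oscillator node
    sine = SineOscillator(440)

    # Scale its output with ordinary Python arithmetic and start playback;
    # node arithmetic with plain Python floats illustrates interaction with
    # the signal graph via standard syntax and data types
    output = sine * 0.25
    output.play()

    # Hypothetical mapping (not from the paper): an agent's x-position,
    # e.g. received from a Tölvera simulation, modulates the frequency input
    agent_x = 0.5  # placeholder value in [0, 1]
    sine.set_input("frequency", 220 + agent_x * 660)

    # Keep the graph running until interrupted
    graph.wait()

In an interactive setting, the placeholder agent_x would be updated continuously from the simulation rather than set once, so the sonification tracks the self-organising behaviour in real time.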