Interaction with Machine Improvisation

Title:
Interaction with Machine Improvisation
Contributors:
Sciences et Technologies de la Musique et du Son (STMS), Institut de Recherche et Coordination Acoustique/Musique (IRCAM) - Université Pierre et Marie Curie - Paris 6 (UPMC) - Centre National de la Recherche Scientifique (CNRS); MuSync, IRCAM - UPMC - CNRS - Inria Paris-Rocquencourt, Institut National de Recherche en Informatique et en Automatique (Inria); University of California San Diego (UC San Diego), University of California (UC); Shlomo Argamon, Shlomo Dubnov, Kevin Burns
Source:
pp. 219-245
Publisher Information:
CCSD; Springer, 2010.
Publication Year:
2010
Collection:
collection:UPMC
collection:CNRS
collection:INRIA
collection:INRIA-ROCQ
collection:IRCAM
collection:INRIA_TEST
collection:TESTALAIN1
collection:STMS
collection:INRIA2
collection:UPMC_POLE_1
collection:SORBONNE-UNIVERSITE
collection:SU-SCIENCES
collection:SU-TI
collection:ALLIANCE-SU
collection:INRIA-ETATSUNIS
Original Identifier:
HAL: hal-00694801
Document Type:
Book section
Language:
English
ISBN:
978-3-642-12337-5
Relation:
info:eu-repo/semantics/altIdentifier/doi/10.1007/978-3-642-12337-5_10
DOI:
10.1007/978-3-642-12337-5_10
Rights:
info:eu-repo/semantics/OpenAccess
Accession Number:
edshal.hal.00694801v1
Database:
HAL

Further Information

We describe two multi-agent architectures for improvisation-oriented musician-machine interaction systems that learn in real time from human performers. The improvisation kernel is based on sequence modeling and statistical learning. We present two frameworks of interaction with this kernel. In the first, the stylistic interaction is guided by a human operator in front of an interactive computer environment. In the second, the stylistic interaction is delegated to machine intelligence, so that knowledge propagation and decision-making are handled by the computer alone. The first framework involves a hybrid architecture using two popular composition/performance environments, Max and OpenMusic, which communicate with each other, each handling the process at a different time/memory scale. The second framework shares the same representational schemes as the first but uses an Active Learning architecture based on collaborative, competitive, and memory-based learning to handle stylistic interactions. Both systems can process real-time audio/video as well as MIDI. After discussing the general cognitive background of improvisation practices, we present the statistical modeling tools and the concurrent agent architecture. We then describe an Active Learning scheme and consider it in terms of using different improvisation regimes for improvisation planning. Finally, we provide more details about the different system implementations and describe several performances with the system.
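To make the abstract's notion of an improvisation kernel "based on sequence modeling and statistical learning" concrete, here is a minimal, illustrative sketch: a Markov-chain learner that incrementally acquires transition statistics from a performed pitch sequence and generates a stylistically similar continuation. The class name, the fixed-order Markov model, and the greedy continuation strategy are assumptions made for illustration only; they are not the chapter's actual implementation, which the abstract describes only at the architectural level.

```python
from collections import defaultdict, Counter

class MarkovImproviser:
    """Toy improvisation kernel (illustrative assumption, not the chapter's
    system): learns symbol transitions online and continues in that style."""

    def __init__(self, order=2):
        self.order = order                 # context length in symbols
        self.model = defaultdict(Counter)  # context tuple -> next-symbol counts

    def learn(self, sequence):
        """Incrementally update transition counts from a performed sequence,
        e.g. MIDI pitches captured in real time."""
        for i in range(len(sequence) - self.order):
            context = tuple(sequence[i:i + self.order])
            self.model[context][sequence[i + self.order]] += 1

    def continue_from(self, seed, length):
        """Extend a seed phrase by greedily following the most frequent
        learned transition (ties resolved in first-encountered order)."""
        out = list(seed)
        for _ in range(length):
            context = tuple(out[-self.order:])
            counts = self.model.get(context)
            if not counts:
                break  # unseen context: nothing learned, stop improvising
            out.append(counts.most_common(1)[0][0])
        return out

# Learn from a short MIDI pitch phrase, then continue it in the same style.
imp = MarkovImproviser(order=2)
imp.learn([60, 62, 64, 62, 60, 62, 64, 65, 64, 62, 60])
print(imp.continue_from([60, 62], 4))  # -> [60, 62, 64, 62, 60, 62]
```

A real-time system of the kind described would replace the greedy choice with weighted sampling and interleave `learn` and `continue_from` as the performer plays; the point here is only the shape of the statistical model, not the interaction architecture.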