
Friday, October 28, 2016

10.28 - Distributed Immersive Performance Videos Online

Years ago, at the Integrated Media Systems Center at the University of Southern California, we embarked on a series of Distributed Immersive Performance experiments to determine the effect of network latency on ensemble performance. Building on earlier experiments, these were the first of their kind to use rhythmic and fast classical chamber music to test the limits of collaborative performance over the Internet.  The results and findings were reported in numerous publications including ACM TOMM and proceedings of ACM MM, AES, NASM, and CENIC.

Over the years, these videos have been shown at numerous conferences and invited lectures. In response to requests, they have now been shared online: documentation of the scientific experiments that led to the discovery that tolerance to network latency can be extended by enforcing a common clock, in this case by delaying each player's feedback from their own instrument so that it lines up with the signal arriving from their partner.
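The compensation idea can be sketched in a few lines of Python. This is a toy illustration of the timing arithmetic, not the actual experimental software; the function name and millisecond units are ours:

```python
def heard_at(note_time_ms, one_way_latency_ms, compensate):
    """Times at which each site hears a note played locally at note_time_ms.

    Without compensation, the player hears their own instrument immediately,
    while the partner hears it one network transit later, so the two sites
    experience different timelines.  Delaying the local monitor feed by the
    same one-way latency puts both sites on a common clock.
    """
    partner_hears = note_time_ms + one_way_latency_ms
    local_hears = note_time_ms + (one_way_latency_ms if compensate else 0)
    return local_hears, partner_hears

# Without compensation the two timelines diverge by the full latency:
print(heard_at(0, 50, compensate=False))  # (0, 50)
# With the local feedback delayed, both sites hear the note together:
print(heard_at(0, 50, compensate=True))   # (50, 50)
```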


Setup A: Delays: vimeo.com/187646226
This video shows the increasing difficulty in synchronizing over increasing delays.


Setup A: Perspectives: vimeo.com/187647682
This video shows the difference in experience of delay from different perspectives.


Setup A: Commentaries: vimeo.com/189241592
This video shows the players commenting on the experience of playing with delay.


Setup B: vimeo.com/189272144
This video shows the solution we came up with: delaying each player's feedback from their own instrument.



Credits

Elaine Chew, experiment design and analysis
Alexandre R. J. François, software architecture
Christos Kyriakakis, spatial audio
Christos Papadopoulos, audio streaming
Alexander A. Sawchuk, project coordinator
Vely Stoyanova and Ilya Tosheff, performances
Roger Zimmermann, databases

Anja Volk, systematic musicological analysis

Carley Tanoue, performance data analysis
Dwipal Desai, databases
Moses Pawar, databases
Rishi Sinha, audio streaming
Will Meyer, filming and video editing

This material is based upon work supported by the National Science Foundation under grant no. 0321377 at the Integrated Media Systems Center, an NSF Engineering Research Center at the Viterbi School of Engineering, University of Southern California, Los Angeles, California, USA. nsf.gov/awardsearch/showAward?AWD_ID=0321377

Friday, February 4, 2011

02.04 - Ussachevsky Memorial Festival @ Pomona


Isaac Schankler performs with Mimi (Multimodal Interaction for Musical Improvisation) at the Ussachevsky Memorial Electronic Music Festival at Pomona College organized by Tom Flaherty.  Details below:

Friday, February 4, 2011 - 8:00pm
Lyman Hall, Thatcher Music Building, Claremont, CA
FREE admission

Isaac Schankler's performance with Mimi has been posted on Vimeo.


Other concert performers include:
Robots, Rachel Rudich*, flute; Cynthia R. Fogg, viola; Roger Lebow*, cello; Mojave Trio: Sara Parkins, violin; Maggie Parkins, cello; Genevieve Feiwen Lee*, piano; Joti Rockwell*, electric guitar; Tony Perman, kalimba

Electronic and acoustic music by MaryClare Brzytwa, Karlheinz Essl, Tom Flaherty*, Matthew Malsky, Frank Zappa, and more

Thursday, September 30, 2010

09.30 - Mimi Concert Video Annotated



Video of the concert debut of Mimi with Isaac Schankler at the Boston Court Performing Arts Center in Pasadena on Saturday, June 5, 2010, as part of the People Inside Electronics concert event, Vicious Circles and Deadly Elements.

Mimi, which stands for multimodal interaction for musical improvisation, is a system for human-machine improvisation.  Mimi was created by Alexandre François using his Software Architecture for Immersipresence.

In Mimi, the computer learns from the human musician, creates a factor oracle from the music input, and recombines the material to generate improvisations like the music it 'hears'.  The visualizations show the music stream from the computer and from the human, the music material Mimi learns, and how the system recombines the material. 

The human musician determines when Mimi learns, when it starts and stops improvising, and the recombination rate.  The annotations in the video, provided by Isaac, show this decision process and reveal the improviser's thinking as the performance unfolds.
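For readers curious about the mechanics, a factor oracle can be built online in linear time, and improvisation amounts to walking it: mostly replaying the learned sequence in order, occasionally jumping along a suffix link to recombine material. Below is a minimal Python sketch under those assumptions; the function names, the random-walk strategy, and the use of `jump_prob` as a stand-in for the recombination rate are our simplification, not Mimi's actual implementation:

```python
import random

def build_factor_oracle(seq):
    """Online factor-oracle construction: one state per input symbol,
    forward transitions plus suffix links back to earlier repetitions."""
    n = len(seq)
    trans = [dict() for _ in range(n + 1)]  # trans[i][symbol] -> next state
    sfx = [-1] * (n + 1)                    # suffix links; sfx[0] = -1
    for i, sym in enumerate(seq):
        trans[i][sym] = i + 1               # forward transition along the input
        k = sfx[i]
        while k > -1 and sym not in trans[k]:
            trans[k][sym] = i + 1           # add shortcut transitions
            k = sfx[k]
        sfx[i + 1] = 0 if k == -1 else trans[k][sym]
    return trans, sfx

def improvise(seq, trans, sfx, length, jump_prob=0.3):
    """Random walk over the oracle: replay the input in order, but with
    probability jump_prob follow a suffix link to recombine material."""
    out, state = [], 0
    for _ in range(length):
        if (random.random() < jump_prob or state == len(seq)) and sfx[state] > 0:
            state = sfx[state]              # jump: splice in an earlier context
        if state == len(seq):
            state = 0                       # reached the end; restart
        out.append(seq[state])              # emit the next learned symbol
        state += 1
    return out

notes = list("CDECDG")                      # toy "melody" as symbols
trans, sfx = build_factor_oracle(notes)
print("".join(improvise(notes, trans, sfx, 12, jump_prob=0.4)))
```

Raising `jump_prob` makes the output stray further from the original order, which is the intuition behind letting the performer control the recombination rate live.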

Isaac is a composer-pianist-improviser who received his DMA in Composition from the USC Thornton School of Music in 2010; he is currently a research consultant at MuCoaCo.

Friday, July 23, 2010

07.23 - Mimi4x @ IMIDA Workshop

Mimi4x, an interactive installation for high level structural improvisation based on Mimi, is unveiled at IMIDA 2010, an IEEE Conference on Multimedia & Expo workshop.  Alex François and Elaine Chew present the paper:

Francois, A. R. J., I. Schankler, E. Chew (2010). Mimi4x: An Interactive Audio-Visual Installation for High-Level Structural Improvisation. In Proceedings of the IEEE International Conference on Multimedia & Expo (ICME 2010), Singapore, July, 2010.

and demonstrate Mimi4x in Singapore.  The Mimi4x system is also shown in the video below with four sets of music material composed by Isaac Schankler collectively titled Airport:


The paper will be extended and included in a special issue of the International Journal of Arts and Technology.