The Illinois eDream Institute is dedicated to promoting arts that are conceived, created, and conveyed through digital technologies
Astral Convertible Blog

On October 2nd, John Toenjes debuted an Astral Convertible blog (see eDream’s Projects page for more details), which will follow his and our collaborators’ progress in pre-production work for the restaging of this contemporary American dance masterpiece. The blog provides special insight into the complex technical challenges such a technology-saturated production faces, as well as into the importance and nature of collaboration across disciplines. To date, the Astral Convertible team has, under John’s lead, nearly completed assembling all of the foundational performance technologies, from networked sensors to a trained machine learning program to tower construction (a physical stage element), with only a circuit left to complete. What’s more, this work will be fully documented by the end of October.

Here is a taste of the Astral Convertible blog’s first entry, written by John Toenjes:

“I and a wonderful team of collaborators at the University of Illinois Urbana-Champaign have been working for over a year now on this project, ‘reimagining’ and ‘restaging’ the dance work Astral Convertible, by choreographer Trisha Brown…Today’s post concerns yesterday’s work, which was done at the wonderful Digital Performance Laboratory at NCSA. At noon, Mary Petrowicz and I met to view the video of the run-through of the dance, which was held in the Krannert Dance Rehearsal room on Friday, Sept. 18. The aims of this session were to

  1. Identify both qualities of movement and specific gestures in the dance that would lend themselves to the machine learning algorithms Mary has developed for use in the performance,
  2. Identify group dynamics and behaviors that could also work in her machine learning system,
  3. Determine the best location for a second sensor attached to the dancers’ bodies.
We identified a list, and at the next rehearsal we will hopefully get some dancers to wear the sensors and try to train the computer to recognize the qualities and gestures. In the meantime, Mary will be working on updating and extending her machine learning models to incorporate these new observations.”

— Kelly Searsmith