thank you greatly; I wanted to give some indication of the process -

- Alan

On Sat, 4 Apr 2015, Mab MacMoragh wrote:

fantastic alan

On Sat, Apr 4, 2015 at 10:11 PM, Alan Sondheim <[email protected]> wrote:


      (This is written as a final documentation for the piece,
      interesting I think from a collaborative and mixed-reality
      viewpoint.)


      Cave Residency During IRQ3

      Untitled ongoing performance (Alan Sondheim, Description)

      http://www.alansondheim.org/irq3day53.jpg

      During IRQ3, I had a residency in the Brown University Cave for
      the duration of the conference. Kathleen Ottinger, Azure Carter,
      and I worked together; we had also worked on a number of pieces
      for at least half a year before that. There were three pieces in
      the Cave itself, collaborations between Kathleen and myself.
      Kathleen did the programming and visuals, and she and I "wrote
      into" and through each other's texts beforehand, producing
      scripts that became independent work. These pieces are her own,
      with my textual collaboration. (I figure Cave setup and
      rehearsal for the performance as a whole was about 40 hours
      in-Cave and maybe 100 in-studio.)

      The Cave has both visuals and sound; during the conference, the
      sound originated from one of two laptops I set up in the room.
      This laptop's image was projected into the room, onto the
      right-hand wall. The Cave was physically only a small part of
      the room - perhaps a sixth - so there was plenty of room for
      other elements.

      In other words, the first laptop split sound and image; the
      image was projected across the room, and the sound came from the
      speakers surrounding the Cave.

      The second laptop was projected into the room, through the room
      projector, onto a screen, and the sound was sent through the
      large room speakers.

      Both laptops were capable of running virtual worlds, and I used
      three virtual worlds during the conference:

      1. My residency area in Second Life, the most popular online
      virtual world. The area is in the Odyssey sim, and was capable
      of video texturing, mesh modeling, and complex physics / avatar
      behaviors.

      2. My three sims in MacGrid, an experimental/research world,
      with completely modifiable physics and highly malleable
      landforms. I have an in-world theater set up in the grid, and
      can project into it.

      3. A local OpenSim virtual world on both laptops, with a
      different architecture on each; the fundamental configuration,
      an .iar file, was downloaded from the MacGrid.

      There were, most often, two worlds projected simultaneously into
      the room.

      One of the laptops also housed a configuration for Bambuser, an
      online application which creates personal online video channels.
      This laptop had a small USB light attached; at times the camera
      would pick up the room, but most often Azure's face. The channel
      would then be sent onto objects in Second Life; the textures
      were modified to image her face alone, without any background.
      The image was usually inverted, but through feedback, there were
      also smaller 'guide' images with her face normal.
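
      (For the technically curious, here is a minimal Python sketch of
      the two texture treatments just described - inversion, and
      keying out the background so only the face remains. It
      illustrates the principle only; the actual work used Second
      Life's own texture and transparency settings, and the threshold
      value below is an assumption.)

      import numpy as np

      def invert(frame):
          # Photographic negative of an RGB frame with values in
          # [0, 1].
          return 1.0 - frame

      def key_background(frame, threshold=0.15):
          # Crude luma key: add an alpha channel that hides pixels
          # darker than the threshold, leaving the face alone against
          # transparency.
          luma = frame @ np.array([0.299, 0.587, 0.114])
          alpha = (luma > threshold).astype(frame.dtype)
          return np.dstack([frame, alpha])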

      The face/image was embedded in the Second Life objects, with
      objects intersecting it, surrounding it. The appearance was
      ghostly, real-time, and uncanny.

      In this situation, Azure would sing a number of songs, many of
      which have appeared on our CDs or LPs. These songs were fed into
      one or more SuperCollider programs, designed by Luke Damrosch
      according to my specifications. The suite of programs is called
      "revrev" and allows a musician to work with live reverse
      reverberation - what I call an anticipatory music - the
      reverberation building up to the enunciation of the sound, a
      head instead of a tail. Combining programs allows for a thicker,
      more complex way of working with this. The programs also
      involved multiple coherent streams or chords stemming from the
      original sound-source, for example parallel streams a fifth
      above and below the original tones. The programs all ran from a
      prompt, and the parameters could be changed in process.
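
      (Again for the technically curious: the revrev suite itself is
      SuperCollider and ran live; the offline Python sketch below,
      with an invented impulse response, only illustrates the
      underlying principle - reverse the signal, reverberate it,
      reverse the result, so that the reverberation precedes the
      sound. The parallel fifths are crudely approximated by
      resampling at ratios of 3/2 and 2/3; all names and values here
      are assumptions, not Luke's code.)

      import numpy as np
      from scipy.signal import fftconvolve

      SR = 44100  # sample rate, an assumption

      def synthetic_ir(seconds=2.0, decay=4.0):
          # A hypothetical reverb impulse response: exponentially
          # decaying noise standing in for a real room or plate.
          t = np.linspace(0, seconds, int(SR * seconds))
          return np.random.randn(t.size) * np.exp(-decay * t)

      def reverse_reverb(signal, ir):
          # Reverse the input, convolve with the reverb, reverse
          # back: the reverberation now builds up toward each sonic
          # event - a head instead of a tail.
          wet = fftconvolve(signal[::-1], ir)[::-1]
          return wet / np.max(np.abs(wet))

      def parallel_stream(signal, ratio):
          # Crude pitch shift by resampling (duration changes too),
          # standing in for the coherent streams a fifth above (3/2)
          # and below (2/3) the original tones.
          idx = np.arange(0, signal.size - 1, ratio)
          return np.interp(idx, np.arange(signal.size), signal)

      # Toy input: a 440 Hz burst at the end of two seconds of
      # silence; after reverse_reverb, the reverberation swells
      # before the tone arrives.
      x = np.zeros(SR * 2)
      n = SR // 4
      x[-n:] = np.sin(2 * np.pi * 440 * np.arange(n) / SR) * np.hanning(n)
      y = reverse_reverb(x, synthetic_ir())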

      At times, I would also use alto clarinet, either to accompany
      Azure, or to create independent sounds which worked with the
      Cave room resonances; these often used a small instrument
      amplifier. One of my goals was to keep everything acoustically
      balanced; live revrev created an environment which could quickly
      go out of control. (I also used a standard clarinet to play into
      revrev directly at times.)

      The video feeds included other elements; the two main sources
      were pre-recorded materials and texts.

      The pre-recorded materials were produced at NYU's motion capture
      studio, with the help of Mark Skwarek. I worked with two
      performers who did one of two things:

      1. The performers moved at the edge of the recording space,
      producing deliberate glitches or anomalies that distorted the
      figures.

      2. The mocap markers were remapped between the two performers,
      who together drove a single avatar; as the performers moved in
      topologically complex ways, the projected avatar in the mocap
      room broke up in various ways. The result was an avatar that
      appeared more as an emanation from the performers than as an
      embodiment of them. (A toy sketch of the remapping follows
      below.)
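
      (A toy sketch of the remapping, assuming each performer's frame
      is simply a dictionary of marker names to 3D positions; the real
      remapping happened in the mocap studio's own software, and the
      marker names below are hypothetical.)

      def composite_frame(perf_a, perf_b, mapping):
          # Build one avatar frame by pulling each marker from
          # whichever performer the mapping assigns it to; as the two
          # bodies move independently, the implied bone lengths
          # stretch and break - the "emanation" effect.
          source = {"A": perf_a, "B": perf_b}
          return {marker: source[who][marker]
                  for marker, who in mapping.items()}

      # E.g. head and hands from performer A, hips and feet from B:
      mapping = {"Head": "A", "LeftHand": "A", "RightHand": "A",
                 "Hips": "B", "LeftFoot": "B", "RightFoot": "B"}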

      These are techniques I've used for close to a decade, in order to
      create avatar distortions that represent avant-dance, wounding,
      death throes, hysteria, desire, pain, and political issues. The
      videos that were made at NYU (just a few weeks earlier) were
      linked together in a half-hour piece that was played at times,
      as a marker or punctum of what was occurring in the virtual
      worlds.

      The second main source of the video feeds was a series of texts
      I would write into the virtual worlds themselves; these appeared
      as chats on the side of the image. The texts were improvised and
      related to the ongoing mise en scène in-world.

      The virtual world imagery was always complex and
      difficult to navigate in-world; for the spectator, it was also
      difficult to disentangle. This was deliberate; the result, and
      one of the main contents of the imagery, was the representation
      of extreme states of mind, which related to the ongoing crises
      of violence in the U.S., Africa, the Mid-East, and so on. The
      primary source for me, for all of this, was the special topic
      Johannes Birringer and I co-moderated for the empyre email list
      in November 2014, "ISIS, Absolute Terror, Performance" - a topic
      which considered issues of torture, beheading, violence,
      anguish, and fear, for the month. The distorted avatars I work
      with - distorted because of the distorted movement - go all the
      way back to 2011 and a second topic for the same list, this time
      with Sandy Baldwin, on "Pain, Desire, and Death," in the real
      and virtual (I'm not sure of the exact title). Both of these and
      my mocap
      lab work resulted in over 100 BVH files - files that represent
      real-life performer movement - which could then be fed into a
      virtual world to animate an avatar or avatars. The process is
      difficult, but the results are these distortions.
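
      (For orientation: a BVH file is plain text - a HIERARCHY section
      describing the skeleton, then a MOTION section with one line of
      joint rotations per frame. A minimal Python sketch of skimming
      that structure; the function name is my own invention.)

      def bvh_summary(path):
          # Skim a BVH file: collect joint names from the HIERARCHY
          # section and the frame count / frame time from MOTION.
          with open(path) as f:
              lines = [ln.strip() for ln in f]
          joints = [ln.split()[1] for ln in lines
                    if ln.startswith(("ROOT", "JOINT"))]
          frames = next(ln for ln in lines if ln.startswith("Frames:"))
          ftime = next(ln for ln in lines
                       if ln.startswith("Frame Time:"))
          return joints, frames, ftime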

      So the most recent distortions, from NYU, would be projected;
      other projections were live and in-world, and could be viewed by
      another avatar - an important element of the interactivity I
      work with. The in-world projections, then, resulted in the
      avatars moving wildly on the screen, creating particle
      emanations in the form of nude human warriors and charred
      bodies, and "dancing" with symbols made to represent ISIS and
      other forms of terror. All of this occurred at fairly high
      speed.

      The revrev was heard from three sources - Azure's voice itself;
      the revrev fed through either the projector speakers or the Cave
      speakers; and the revrev fed through the virtual worlds, as if
      it were emanating within them.

      All of this created a mobile and fluid sonic architecture, one
      that, for me, defined or modified the fixity of the Cave itself;
      room resonances and speaker interactions, beat frequencies,
      etc., all came into play. The sound was a hollowed organic body
      tied to, yet not tied to, the Cave pieces and the ongoing
      transformations visible in the projected images. I imagined a
      sonic bubble, almost a galactic bubble, in which there were
      occurrences both alien and domestic; texts would appear and
      disappear in the space, always grounded by the Cave pieces which
      were purely textual. Most of my time in the Cave was used for
      either working within the virtual worlds, or "tuning" the space
      itself - and the latter began to fascinate me. The sounds and
      images resonated with each other; the four sound streams had
      their own internal resonances; the darkness or brightness in the
      room affected the texture mapping and readability of the
      in-world texts, and so forth. Conditions were constantly
      changing. The room itself was always on the edge of feedback; I
      had to keep the revrev sounding full, but not overloading the
      in-world sounds, and not screeching. We used a lavalier mic to
      correct this in parts.

      The tuning of the room relates to the tuning of the body itself;
      much of my work deals with the labor involved in production,
      especially dance or music production (and the performers for the
      original mocap were almost all dancers); in this residency,
      labor was represented by voice and instrument, but also by the
      sheer weight of the production, which involved constantly
      adjusting the equipment and its position in the room. So even
      though the body was close to invisible (except for the video
      textures from Bambuser) on the screen, it was present in the
      sense of sonic architecture, the body of the piece, the galactic
      bubble, and so forth.

      Artists:

      Kathleen Ottinger
      Azure Carter
      Alan Sondheim
      Luke Damrosch

      Thanks to John Cayley for the opportunity.

      Thanks also to:

      Mark Skwarek, Johannes Birringer, Foofwa d'Imobilite, Sandy
      Baldwin, Kira Sedlock, Frances van Scoy, Patrick Lichty,
      Columbia College, West Virginia University, NYU, Brown
      University


      Audio-Visuals:

      http://www.alansondheim.org/theforge.png
      http://www.alansondheim.org/irq3day24.jpg
      http://www.alansondheim.org/irq3day50.JPG
      http://www.alansondheim.org/irq3day51.jpg
      http://www.alansondheim.org/irq3day49.JPG
      http://www.alansondheim.org/irqday3.mp4
      http://www.alansondheim.org/irqq3.jpg
      http://www.alansondheim.org/irqqb.mp4 (Kathleen Ottinger, Cave)
      http://www.alansondheim.org/irqrevrev.mp4
      http://www.alansondheim.org/irq3day54.jpg
      http://www.alansondheim.org/visage4.png
      http://www.alansondheim.org/irqspace1.mp3


      =====================================================






==
email archive http://sondheim.rupamsunyata.org/
web http://www.alansondheim.org / cell 718-813-3285
music: http://www.espdisk.com/alansondheim/
current text http://www.alansondheim.org/td.txt
==
_______________________________________________
NetBehaviour mailing list
[email protected]
http://www.netbehaviour.org/mailman/listinfo/netbehaviour
