This is the companion website to the Blueprint Series publication Echo Moiré.

On this page you will find full video documentation of Echo Moiré, as well as additional materials.

If you quote, build on, or otherwise use any material presented here, please credit the artist and notify him of the use. If you are inspired to improve on or go beyond the ideas presented, please share your ideas broadly and let the artist know too.

Performance History HERE

Video recording of the full performance, De Fabriek, 2016. Listening with headphones is advised.

Full choreography at 20x speed

Hardware diagram

Sound generation diagram

Controller mappings

Performance floor plan 

Extended technical description of the instrument

Overview

A pair of robots and a control station make up the instrumentation for this work. Each robot consists of a two-wheel differential-drive base on which a speaker horn is mounted. The robots are remote controlled, although they also have an autopilot with obstacle detection that can be used when performing in darkness; this is not essential. From the control station I control both the movement of the two robots and the sound that they project. I made most of the hardware and software myself, assembling a mix of parts found in car audio, mobility vehicles, amateur robotics, and so on. I have made three versions of the robots so far: a small pair of prototypes, which I will not describe here, and two larger pairs. The third and largest version is the most complete, and it is what I describe here unless otherwise specified.

Motion

The robot locomotion system is built around two strong DC gearmotors with rubber wheels, taken from electric wheelchairs. These might seem overpowered, as they are made to carry the weight of a person and are themselves quite heavy. However, this solution allows the vehicle to move smoothly even when driving very slowly. Running the motor at a small fraction of its rated speed is inefficient, but it is essential, as these very slow movements reveal how even the smallest displacements of the speaker affect our perception. And when accelerating from very slow movements to full speed, the motors offer a lot of range to compose the choreography with. I removed the brakes that were originally on the motors, as these are not necessary. I use a 15 A dual motor drive circuit with optical encoders mounted on the motor shafts before they are geared down (where the brake was previously installed), for very precise, bidirectional speed control. The optical encoders are not essential, but they make the very slow movements in particular easier to perform. The motor drive circuit is controlled by a microcontroller (MCU) that has wireless connectivity and also receives input from 3 range sensors for obstacle avoidance. I previously used only 1 ultrasound range sensor at the front, but this left blind spots to the sides while turning, so I added two infrared distance sensors angled at 45 degrees on each side. For the wireless data connection I currently use Marije Baalman's Sense/Stage system, based on her own MiniBee MCU: two mobile network nodes (one on each robot) and a network controller module at the control station. In principle any system can be used that allows low-latency, reliable, two-way data communication between the control station and the two robots across the maximum dimensions of the presentation space.
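For illustration, the sketch below shows in plain Python (with a toy motor model, not the actual firmware) why closed-loop speed control on the encoders helps at creeping speeds: friction swallows a small open-loop drive entirely, while a PI loop on the measured speed works the drive up until the wheel actually turns. All numbers are placeholders.

```python
class ToyMotor:
    """Toy first-order motor model so the sketch runs without hardware."""
    def __init__(self):
        self.velocity = 0.0   # encoder ticks per second
        self.pwm = 0.0        # normalised drive, 0..1

    def step(self, dt):
        drive = 400.0 * self.pwm
        target = max(0.0, drive - 30.0)  # constant friction drag
        self.velocity += (target - self.velocity) * dt / 0.2

def pi_speed_loop(motor, setpoint, seconds=2.0, dt=0.01, kp=0.002, ki=0.02):
    """PI controller: the integral term supplies the extra drive needed
    to keep creeping at speeds where fixed PWM would stall."""
    integral = 0.0
    for _ in range(int(seconds / dt)):
        error = setpoint - motor.velocity
        integral += error * dt
        motor.pwm = min(1.0, max(0.0, kp * error + ki * integral))
        motor.step(dt)
    return motor.velocity

# Open loop: the PWM that would give 10 ticks/s on a frictionless motor
# (10/400) is eaten entirely by the drag, so the wheel never moves.
open_loop = ToyMotor()
open_loop.pwm = 10.0 / 400.0
for _ in range(200):
    open_loop.step(0.01)
print("open loop:  ", round(open_loop.velocity, 2), "ticks/s")   # 0.0

# Closed loop: the integrator pushes the drive past the drag and holds
# the wheel near the 10 ticks/s setpoint.
print("closed loop:", round(pi_speed_loop(ToyMotor(), 10.0), 2), "ticks/s")
```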

Audio

For the audio components I use low-impedance (8 ohm), full-range PA speaker horns (round, aluminium). Each pair of robots uses two horns of the same design but of different sizes; the two sizes make the robots two different characters, both visually and sonically. The third pair of robots has 120-watt horn drivers that screw onto their horns, while the second pair has two different horns with integrated drivers of 40 and 25 watts. I find that the round shape (as opposed to rectangular designs) creates an elegant optical effect in combination with the rotational movements of the robots. The smaller horns are well suited for rooms of around 100-300 m2, depending on the interior acoustic properties. Their frequency range extends higher in the spectrum, which makes them more directional, and I find the image sources (the spatial acoustic illusion by which we perceive a sound source as originating from a mirror reflection behind an acoustically reflective surface such as a wall) more clearly perceivable when they emit sharp clicks. The larger horns are necessary for larger spaces (300-1000 m2), and they can activate the modal resonances of the room in the mid-lows better than the smaller pair. If I were to design my own horns in the future, I would seek to make a two-way coaxial system to get the best of both worlds.
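As a rough illustration of the image-source geometry (the positions below are hypothetical, not measurements from a performance): mirroring the source across the reflecting wall gives the position of the phantom source, and the difference between the direct and reflected path lengths gives the lag of the reflection behind the direct click.

```python
import math

SPEED_OF_SOUND = 343.0  # m/s at room temperature

def image_source_delay(src, listener, wall_x):
    """First-order image source against a wall at x = wall_x.
    Positions are (x, y) in metres; illustrative values only."""
    mirrored = (2.0 * wall_x - src[0], src[1])      # source mirrored behind the wall
    direct = math.dist(src, listener)
    reflected = math.dist(mirrored, listener)
    return (reflected - direct) / SPEED_OF_SOUND

# Robot 2 m from the wall, listener 8 m from the wall on the same line:
# the phantom source appears 2 m behind the wall, 4 m of extra path.
delay = image_source_delay(src=(2.0, 0.0), listener=(8.0, 0.0), wall_x=0.0)
print(f"reflection lags the direct click by {delay * 1000:.1f} ms")  # ~11.7 ms
```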

Each horn is driven by a car audio amplifier that is significantly overpowered (400 W RMS or more). This has to do with the short clicks that I send, which need a lot of peak power for a very short time to activate the space as intended.
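A quick back-of-the-envelope calculation illustrates this (the click length and rate below are assumed placeholders): a 1 ms click repeated once per second has a crest factor of about 30 dB, so the amplifier must deliver roughly a thousand times the average power during the click itself.

```python
import numpy as np

fs = 48_000                   # sample rate
click_len = 48                # 1 ms click
period = 48_000               # one click per second

signal = np.zeros(period)
signal[:click_len] = 1.0      # unit-amplitude rectangular click

peak_power = np.max(signal) ** 2
avg_power = np.mean(signal ** 2)
crest_db = 10 * np.log10(peak_power / avg_power)
print(f"crest factor: {crest_db:.0f} dB")   # 30 dB at this duty cycle
```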

I send the audio signal to the robots from the control station via a wireless audio system. The wireless audio is actually one of the most challenging aspects of the technical setup, and one I have never found an optimal solution for, so it is worth explaining in more detail. I have used different systems depending on the presentation space, each with its tradeoffs. A low-cost hi-fi digital wireless audio transmitter and receiver pair works for rehearsals and smaller venues, but I experience regular signal dropouts with these. Even a very short dropout matters: when a single click is missed, it draws our mind out of the entrainment, breaking the flow of the experience. The range and reliability of these low-power systems are also greatly affected by the absorbing bodies of the audience and by interference from their mobile devices. A rehearsal in the empty space can work perfectly, and then the signal can become less stable when the audience arrives for the performance. It helps to ask the audience to switch their mobile devices to airplane mode. For performances in larger venues I need to request the rental of a high-end professional wireless in-ear monitoring system; the body packs go on the robots and connect to their amplifiers. Apart from being expensive, the issue with these is that many high-end wireless systems are not able to transmit the square wave signals that I use correctly. These systems are optimised to transmit other types of musical or vocal signals, and the square waves can get filtered beyond recognition. So when using such a system, it is important that I carefully test the specific model ahead of the performance, to ensure it can transmit the pulse-train signals without filtering or distortion. The tradeoffs in the wireless audio connection between signal quality, latency and reliability remain an area that could be improved with further work.
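For testing a rented system before the show, something like the following sketch (standard-library Python; the file name and pulse parameters are arbitrary) can generate a reference pulse train. Playing it through the wireless link and comparing the received waveform with the original quickly shows whether the system rounds off or smears the pulses.

```python
import struct
import wave

fs = 48_000        # sample rate
freq = 4.0         # clicks per second
duty = 0.001       # very short on-time per cycle
seconds = 10

# Build a mono 16-bit pulse train: a short rectangular pulse per cycle.
frames = bytearray()
for n in range(fs * seconds):
    phase = (n * freq / fs) % 1.0
    sample = 26000 if phase < duty else 0
    frames += struct.pack("<h", sample)

with wave.open("wireless_test_pulses.wav", "wb") as f:
    f.setnchannels(1)
    f.setsampwidth(2)    # 16-bit samples
    f.setframerate(fs)
    f.writeframes(bytes(frames))
```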

Power

The robots are powered by gel lead-acid batteries. The third and largest pair uses 4 x 6 V, 6 Ah batteries connected in series to give a total of 24 V, 6 Ah. The second pair uses a single 12 V, 12 Ah battery. These can run for several hours continuously between charges, allowing me to repeat several rehearsals and performances in one day. The 24 V solution is best, as it allows the motors to run at full speed. To power the car audio amplifier I step this down to 12 V with a DC-DC step-down converter rated for 12 A, and step it down further to 5 V with a small voltage regulator to power the control electronics. To increase battery life, I added remotely controlled relay switches that disable the motors and the audio electronics when the robots are in standby mode (during breaks etc.), as the power electronics for these draw a substantial quiescent current.
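As a back-of-the-envelope check (the average current draw here is an assumed placeholder, not a measured value), the runtime of the 24 V pack can be estimated as follows:

```python
battery_ah = 6.0            # 4 x 6 V cells in series: voltage adds, capacity stays 6 Ah
avg_current_a = 1.5         # assumed average draw: motors + amplifier + logic
usable_fraction = 0.8       # avoid deep-discharging lead-acid cells

runtime_h = battery_ah * usable_fraction / avg_current_a
print(f"estimated runtime: {runtime_h:.1f} h")   # 3.2 h with these numbers
```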

Chassis

All the components are connected together on a metal frame. The horn attaches at the top, so that it appears as the head of the robot. The two motorised rubber wheels are placed at the rear of the base, one on each side, balanced at the front by a smaller swivelling freewheel. The amplifier at the back of the robot forms the connection between the head and the wheels; in between sit the batteries and the control electronics. The robot is back-heavy due to the weight of the horn driver at the rear, making the vehicle unbalanced and prone to falling backwards in case of collisions or excessive discontinuities in the floor surface. Nevertheless, it is important that the robot looks like a creature or character and not like a car; the dramaturgical effect matters more here than the stability of the vehicle.

Control Station

The control station consists of a laptop running custom-made software, two MIDI control surfaces, an audio interface, a DMX interface, and the wireless audio and data links. The control surfaces give me 8 motorised faders and 48 rotary encoders, plus some buttons. I use 4 faders to control the movement of the robots and the rest to control the sound. The control surfaces are programmed to send MIDI NRPN messages, as I need the higher (14-bit) resolution to control the parameter ranges that I use with sufficient precision. I use 2 output channels (L, R) on the audio interface, which connects to the wireless audio transmitter. The control node of the wireless data link is connected to the laptop. The DMX interface connects the laptop to the dimmers controlling the stage lighting.
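For reference, NRPN spreads a 14-bit value over standard control-change messages, giving 16384 steps instead of the 128 of a plain CC. A minimal sketch of the encoding (the parameter number and value below are arbitrary):

```python
def nrpn_messages(param, value, channel=0):
    """Encode a 14-bit NRPN parameter change as four MIDI CC messages.
    CC 99/98 select the parameter; CC 6/38 carry the 14-bit value."""
    status = 0xB0 | channel          # control change on the given channel
    return [
        (status, 99, param >> 7),    # NRPN MSB
        (status, 98, param & 0x7F),  # NRPN LSB
        (status, 6,  value >> 7),    # Data Entry MSB
        (status, 38, value & 0x7F),  # Data Entry LSB
    ]

for msg in nrpn_messages(param=42, value=9000):
    print(msg)
```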

Software

The software consists of several patches I made in Max/MSP for driving and sound generation, as well as Marije Baalman's Sense/Stage environment to manage the wireless data link. For driving, I take the input of the 4 faders to control the movement of the 4 wheels. I can switch between "tank mode", in which each fader controls one wheel, from full speed forward to full speed backwards, with a dead area in the middle of the fader travel to keep the wheel at rest, and a second mode in which one fader controls speed and the other direction. I programmed a button to switch between the two driving modes, and the motorised faders move automatically to the correct position in the new driving mode so the transition is seamless. Depending on the choreography, I use whichever driving mode is most convenient for the desired movement. In the software I have limited the acceleration to avoid rapid changes in speed that could cause the robot to flip over. I have also programmed a button to stop the robots instantly (which I use for the ending) and another to enable obstacle detection. When obstacle detection is enabled, the robot overrides my controls and steers away from obstacles in its path. This is handy when playing in the dark, and although it would be safer to keep it on the whole time, the resulting behaviour is not as fluid and elegant, so when the robots are visible I prefer to risk collisions and keep the controls manual.
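The sketch below restates the two driving modes and the acceleration limit in Python for clarity; the actual patches are in Max/MSP, and the dead-zone width, mixing law and slew rate here are illustrative guesses, not the values I use.

```python
def dead_zone(x, width=0.1):
    """Hold the wheel at rest near the fader centre (faders in -1..1)."""
    return 0.0 if abs(x) < width else x

def tank_mode(fader_l, fader_r):
    """Each fader drives one wheel directly."""
    return dead_zone(fader_l), dead_zone(fader_r)

def speed_direction_mode(speed, direction):
    """One fader sets speed, the other steers by unbalancing the wheels."""
    speed = dead_zone(speed)
    left = speed + direction
    right = speed - direction
    scale = max(1.0, abs(left), abs(right))  # keep within motor range
    return left / scale, right / scale

def slew_limit(target, current, max_step=0.02):
    """Acceleration limit per control tick, so the robot cannot flip."""
    step = max(-max_step, min(max_step, target - current))
    return current + step

print(tank_mode(0.05, -0.6))            # -> (0.0, -0.6): left wheel at rest
print(speed_direction_mode(0.5, 0.3))   # -> (0.8, 0.2): arcing right at half speed
```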

The sound is generated by a series of simple oscillators. My patch allows the robots to emit square waves, sine waves and pink noise. I have used all of these in the past, but in the composition documented here I only use square waves, so I will only describe this section. Each robot has 4 square wave generators, of which I control the frequency, duty cycle and loudness. The frequency ranges from individual clicks to continuous tones. The duty cycle determines the frequency spectrum, or colour, of the clicks. Each oscillator also has a low-pass filter, which I use when I want to focus on exciting the room's resonance modes instead of the reflections. The intention is to have only 1 generator playing on each robot at a time, but it is convenient to have 4, as I can fade between a few presets that I program in advance for the different sections of the performance. On the master output of each robot I have a variable delay line that I can open to send the signal of one robot to the other, in order to play with the time difference between the two sources; of this delay line I control the level and the time difference. I also control the stage lighting with one rotary encoder, to fade from light to dark and back. Finally, on all parameters except the driving faders, I have a line ramp of up to 60 seconds, which enables me to create very smooth and gradual transitions between parameter values. This is also useful to keep things in motion while controlling many parameters at once. Each sound generator, on its channel strip on the controller, has a button that blinks to indicate that the parameters on that strip are ramping to the selected values; I can press this button to interrupt the ramp and hold the parameters that are currently playing, which is handy when improvising. I also have a score section that allows me to script sequences of interpolating presets. This allows me to trigger sequences during the piece which are too complex for me to execute live while also driving both machines. I use it for certain more complex sections; other parts work best when I am fully able to react to the response of the space and the motion of the robots in real time.
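As a rough stand-in for the Max/MSP generators (all parameter values here are arbitrary), the following sketch shows the basic idea of a variable-duty-cycle pulse oscillator and a simple low-pass:

```python
import numpy as np

def pulse_train(freq, duty, seconds=1.0, fs=48_000):
    """Naive (not band-limited) variable-duty-cycle pulse oscillator.
    At very low freq this yields isolated clicks; raising freq fuses
    them into a continuous tone. Narrower pulses (smaller duty) push
    more energy into the highs, changing the colour of the clicks."""
    t = np.arange(int(fs * seconds)) / fs
    return np.where((t * freq) % 1.0 < duty, 1.0, -1.0)

def one_pole_lowpass(x, cutoff, fs=48_000):
    """Simple one-pole low-pass, standing in for the per-oscillator
    filter used to favour room modes over sharp reflections."""
    a = np.exp(-2.0 * np.pi * cutoff / fs)
    y = np.zeros_like(x)
    for i in range(1, len(x)):
        y[i] = (1.0 - a) * x[i] + a * y[i - 1]
    return y

clicks = pulse_train(freq=3.0, duty=0.002)   # sparse, sharp clicks
tone = pulse_train(freq=220.0, duty=0.5)     # continuous square tone
darker = one_pole_lowpass(tone, cutoff=400.0)
```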

The Space

Finally, I consider the performance space an integral part of the instrument. It only makes sense to present this work in a space with certain architectural and acoustic qualities. I have already mentioned above the minimum and maximum room dimensions for the two robot sizes that I play with (approx. 100-1000 m2). Sonically, the space is the resonance chamber that the robots play. Most musical instruments have a sound generator (e.g. the vibrating string of a violin) and a resonance chamber (the body of the instrument) that amplifies certain frequencies emitted by that generator. In this work the sound generators are the two robots, and the resonance chamber is the room; the audience is thus inside the instrument (I learned this from Edwin van der Heide). The walls of the room need to be reflective (the plastered walls of a gallery space or the raw concrete of an industrial space work best; raw brick or glass are also acceptable). A space whose walls are treated with acoustically absorbent material (such as curtains that cannot be removed, acoustic foam, etc.) cannot be used to present this work. A reverberation time of a few seconds is also nice to have, although excessive reverberation can drown out the first reflections, making the experience less detailed. I always like to visit the space in advance to check the acoustics; if this is not possible, I ask to receive a video in which a person walks around the space clapping their hands.
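For a feel of which low frequencies a given room will reinforce, the axial modes of an idealised rectangular room can be estimated from its dimensions (the dimensions below are placeholders, not those of any particular venue):

```python
SPEED_OF_SOUND = 343.0  # m/s

def axial_modes(length_m, count=4):
    """f_n = n * c / (2 L): axial mode frequencies along one dimension."""
    return [n * SPEED_OF_SOUND / (2.0 * length_m) for n in range(1, count + 1)]

for dim, size in (("length", 20.0), ("width", 12.0), ("height", 6.0)):
    freqs = ", ".join(f"{f:.0f}" for f in axial_modes(size))
    print(f"{dim} {size} m: {freqs} Hz")
```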

Visually, I use the architectural space as a found scenography, and use lighting where possible to highlight this. I prefer spaces where the architecture is raw, without any unnecessary furnishing. I use the lighting to illuminate the lower sections of the walls behind the robots, so that the robots appear only as shadows or silhouettes against the lit walls. I like to use simple theatre lighting for this, with two light projectors standing low on the floor in each corner of the performance area, projecting a cone of light horizontally across the wall, so that the light projection recalls the cone of sound emitted by the horns. The floor needs to be smooth to allow the robots to glide over it as if they were skating. The audience is seated on the floor, at the same height as the robots. There is only one area of the room that offers the correct visual and sonic perspective on the performance, and the audience is seated only in this area. I am seated in the same area with the control station and share the same perspective as the audience. Depending on the size of the space, half or more is used as a stage for the robots to drive on (in this part the floor needs to be left completely free of cables etc.); the remaining part is for the audience. Where there are architectural elements such as pillars in the room, I create a choreography around them; in this case I also need a second performer to help me with the driving, as the risk of collision with the pillars is too great. The audience is seated a maximum of two rows deep, so that the robots can drive up close to each audience member and everyone has an unobstructed line of sight to the robots. The audience also needs to be limited in number, as their bodies absorb sound and reduce the reverberation. For this reason I usually perform the work many times for small groups of 30-100 people. The performance requires complete focus, so the space needs to be very quiet, with no or only minimal sound spill from outside, and the audience is not allowed to enter after the performance has started. If darkness is used, which increases the strength of the work (but is not always possible), then it needs to be real darkness (pitch black), with all light spill from windows, safety exit signs, LED lights on equipment, etc. blacked out. It makes little sense for me to fade the theatrical lighting off, only to reveal a blaring safety exit sign above the stage area.
