[ alessandro anatrini ]

Sandy Island

Artificial ecosystem (2022)

Short documentary. Credits: Christian Frank, Peter Wolff (HfMT Webcast).

360° live recording. Credits: Christian Frank, Peter Wolff (HfMT Webcast).

Two mockups from the system’s output.

Sandy Island is an adaptive, site-specific installation running continuously for 60 hours. It consists of 288 speakers (Wave Field Synthesis) positioned about 2 metres above the ground and arranged to form a rectangular surface of about 60 m². LED panels close the space between the speakers and the floor to create an immersive environment.

In September 1774, explorer James Cook “discovered” Sandy Island in the Coral Sea, not far from French New Caledonia. For the next two centuries the island appeared on maps and charts, until in 2012 Google removed it from its online maps following a scientific expedition that definitively certified its non-existence. The Sandy Island affair is a blunder of reason, an embodiment of how fallible our ability to discern between representation and reality is. This is one of the human traits that draws us towards the arts, as when we go to the theatre: armed with imagination and comfortably seated, we are ready to lose ourselves in fiction and identify with people and events that seem real but remain imaginary. But what happens when the perspective is reversed, and the stage becomes the instrument that gives shape to a paradoxical encounter with a place that is also the projection of an absence?

The intervention is inseparable from the place that hosts it, and the sound and visual materials are a direct emanation of it. The visitor is the protagonist of this mise-en-scène: placed in an unprecedented position on the stage, they are called upon to be part of an ecosystem, the only intangible trace of something that has never existed. Each presence is recorded by the system through microphones, and an AI modifies the system's long-term evolution, defining a path based on the shared memory of a collective deception.

The project has been realised in collaboration with Lucas Xerxes (sound diffusion), KLARA Janina Luckow (floor projection) and Alessandro Alessandri (3D modeling).

Première: Forum, HfMT Hamburg, 1–3 December 2022
Tools: Blender, CloudCompare, Faust, Live, Max, Metashape, Python, TouchDesigner
Playback format: 288.0 (WFS) and LED videowall
Duration: 60 hours (continuous)

We used four computers to run the system: one for the control messages and sound generation, one for the video generation, one for the sound diffusion, and one for the floor projection. The first image shows the audio control patch on the first laptop; its messages were sent over a network to all the other machines. The waveform at the bottom is the signal coming from the microphones, which affects the synthesis parameters. The second image shows the custom 3D GUI I developed to handle the different states of the system and their evolution over time (you can read more about that on the Research page). The third image shows a detail of the TouchDesigner patch for generating the video content; the control messages for the video were sent from the same GUI shown in the previous picture.
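The microphone-to-synthesis mapping described above can be sketched in a few lines. This is a minimal, hypothetical illustration, not the installation's actual patch: it assumes the incoming microphone signal is reduced to an RMS level per block, which is then linearly mapped (and clamped) onto a synthesis parameter range such as a filter cutoff before being sent out as a control message. All names and parameter ranges here are invented for the example.

```python
import math

def rms(samples):
    # Root-mean-square amplitude of one block of audio samples (floats in [-1, 1]).
    return math.sqrt(sum(s * s for s in samples) / len(samples))

def map_to_param(level, in_lo, in_hi, out_lo, out_hi):
    # Linearly map an input level onto a synthesis-parameter range, clamped to [out_lo, out_hi].
    t = (level - in_lo) / (in_hi - in_lo)
    t = max(0.0, min(1.0, t))
    return out_lo + t * (out_hi - out_lo)

# Two synthetic microphone blocks: a quiet one and a loud one.
quiet = [0.01 * math.sin(2 * math.pi * i / 64) for i in range(64)]
loud  = [0.80 * math.sin(2 * math.pi * i / 64) for i in range(64)]

# Hypothetical mapping: mic level in [0, 0.7] drives a cutoff between 200 Hz and 8 kHz.
cutoff_quiet = map_to_param(rms(quiet), 0.0, 0.7, 200.0, 8000.0)
cutoff_loud  = map_to_param(rms(loud),  0.0, 0.7, 200.0, 8000.0)
```

In the real system a value like `cutoff_loud` would be packed into a network control message (e.g. over UDP) and sent to the machines handling sound generation and diffusion; the mapping itself is the part shown here.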

Credits

Artistic direction & audio-visual composition: Alessandro Anatrini

Sound diffusion: Lucas Xerxes

Video consultant & floor projection: KLARA Janina Luckow

3D modelling: Alessandro Alessandri

Production Manager: Benjamin Helmer

Technical Director: Oliver Kirschner & HfMT Forum Team

Production: KISS - Kinetics in Sound and Space, Stage 2.0, HfMT Hamburg

Documentation: Christian Frank, Peter Wolff