Agile and evocative: Where live coding meets choreography


McMaster researcher David Ogborn is working with a colleague at Virginia Commonwealth University to create a language meant for live coding that will produce 3D avatars that dance. (Shutterstock image)

David Ogborn, an associate professor in the Faculty of Humanities’ Communication Studies and Media Arts department, and the principal investigator behind McMaster’s Networked Imagination Laboratory (NIL), received a 2020 New Frontiers in Research Fund (NFRF) grant to further his research in collaborative online coding of dance and movement.

The NFRF recognizes initiatives that push boundaries in new areas of research and inspire high-risk, high-reward research. Ogborn’s project, which he is working on with Kate Sicchio from Virginia Commonwealth University, is one of nine 2020 NFRF awards at McMaster, and one of the few arts-based projects among the 2020 NFRF recipients.

We spoke with Ogborn and Sicchio about their work, which at the time of writing has entered an exciting new phase with the hiring of 12 research assistants from different faculties across the two universities.

Tell me a little about the project.

David Ogborn: The goal of the NFRF project is to make a language that’s meant for live coding that will produce 3D avatars that dance. As we work on it, we’re planning to make the language available on the web and as free and open-source software — having it available for people to use and rework is a key part of what we’re doing.

Both Kate and I have been involved with the live coding movement for a decade or so. This is an international group of people from a variety of different backgrounds, professions and places who explore the creative possibilities of programming, computation and media technologies in front of a live audience, in a way that the audience can see what’s going into the activity.

There’s actually a manifesto for the movement — the TOPLAP manifesto — which includes a number of resonant phrases like “Code should be seen as well as heard.”

The key phrase for me is “Show us your screen” — that’s the aspect of live coding that I’ve really emphasized in my own practice and shared with a lot of people through teaching, workshops and performances. There’s an interest in what happens when you’re programming to produce artistic results for an audience and sharing the code at the same time.

We’re bringing together many different things: There’s Kate’s work with using live coding for choreography. There’s also a growing tradition in live coding of people making languages for specific purposes — there’s a whole universe of new programming languages coming into existence, often designed by artists like Kate and me.

In live coding, you need code that’s agile — it’s not a situation where you can spit out huge programs that are thousands of lines long. A couple of lines, or a small number of characters, have to carry a lot of meaning — exactly the meaning that is necessary.
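To give a flavour of what that agility means, here is a purely hypothetical sketch in Python — not LocoMotion or Estuary syntax, and the function name and notation are invented for illustration. The idea is that a few characters of terse notation expand into a full sequence of timed events:

```python
# Hypothetical mini-notation, purely illustrative: a short string of
# move names expands into a whole looping phrase of timed events, the
# way terse live-coding expressions stand in for longer programs.

def expand(pattern: str, cycle_beats: int = 4) -> list[tuple[float, str]]:
    """Spread the named moves evenly across one cycle of beats."""
    moves = pattern.split()
    beat = cycle_beats / len(moves)
    return [(round(i * beat, 2), move) for i, move in enumerate(moves)]

# Four words are enough to describe a full cycle of movement.
print(expand("step hop turn pause"))
# [(0.0, 'step'), (1.0, 'hop'), (2.0, 'turn'), (3.0, 'pause')]
```

The performer edits the short string live, and the running system re-expands it into the longer event list on the next cycle — a common design pattern in live coding systems, though the details here are assumptions.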

You’ve been working on developing a way for people to live code together. Can you tell me about that?

DO: An important part of my research has been developing a collaborative platform for online live coding called Estuary, which is openly available for use online. This is a space where people can come together and use different live coding languages at the same time. For instance, you might have situations where one person is using language A to make beats, someone else is using language B to provide harmonic accompaniment or a sonic texture, and another person is using language C to create visuals or animation.

This choreographic live coding language can become part of that ecosystem — so now you have a fourth person who can add dance to the performance.

Kate, can you talk a little about the role of choreography?

Kate Sicchio: As long as there have been computers, people have been making dance with computers — algorithmic choreography has been around since the 1960s. In terms of live coding dance, that’s probably been around for the last 10 years.

What’s exciting about this project is that there’s the potential to use this language in many different ways. You can create dances for avatars in the networked space, which would be a completely digital dance, or you could combine a live dancer with the digital to create a hybrid duet. It can also be a way for a live dancer to learn a dance.

Another thing that’s exciting is the language itself, which will use dance and movement terminology. As David said, you need expressive terms when you’re live coding, because you’re on the fly, trying to create in front of an audience. So when you’re trying to code a dance, it makes sense to think in dance words.
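As a rough illustration of thinking “in dance words,” the following Python sketch maps a vocabulary of dance terms onto avatar movement parameters. The vocabulary, names, and parameter scheme are all invented for this example — they are not LocoMotion’s actual terms:

```python
# Illustrative sketch only: the vocabulary and command format here are
# invented, not LocoMotion's real syntax. The point is that a language
# built from dance terminology can map each word directly onto avatar
# movement parameters.

DANCE_WORDS = {
    "plie":   {"knees": "bend", "level": "down"},
    "releve": {"heels": "lift", "level": "up"},
    "turn":   {"rotate": 360},
}

def choreograph(phrase: str) -> list[dict]:
    """Translate a phrase of dance terms into avatar movement commands."""
    return [DANCE_WORDS[word] for word in phrase.split() if word in DANCE_WORDS]

commands = choreograph("plie releve turn")
print(len(commands))  # three movement commands from three dance words
```

Because each word already names a movement a dancer knows, a coder-choreographer can compose a phrase on the fly rather than translating mentally from dance into generic programming terms.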

Programming languages are for people, not machines!

Was there a way to capture choreography digitally before this? What were its drawbacks?

KS: The choreographer Merce Cunningham used software called DanceForms, which was really popular for a while — it was a really big moment for this kind of digital recording.

Almost all of the recent digital work relies heavily on video — there are programs you can use to notate, but they’re still video-based. Using a 3D avatar, as our language does, allows dancers to zoom in or see a movement in mirror image — things you can’t always do with video, which also requires a physical, real-life dancer at some stage of creation.

What outcomes do you foresee for this project? Does it make performance more accessible?

DO: Computing technologies have their own accessibility issues, so I tend to think of this as a change in the way people get access to some musical technologies, rather than a complete provision of accessibility.

It is certainly abstracting performance from physical requirements — if you were to try to learn to play the bassoon, there is only one way to play the bassoon, and you have to do it that way or you can’t play the instrument.

When you have computerized ways of making music, including but not limited to those that involve code, it does tend to open up this design dynamic around multiple ways of achieving a particular result.

Also, while this choreographic coding language is at the heart of the project, it’s not the only outcome. We’re working with dancers at Virginia Commonwealth University, where Kate works, on doing motion capture for dance — with the idea that these motion capture materials become the primary materials for the language, and they’ll shape its design. We’re also making 3D models that we’ll share openly as open-source software or Creative Commons materials — for us, these are the fire starters for the project itself.

Once we have the language up and running, we’ll be performing with it in different ways — so the language is one outcome, but so is all the knowledge that we’ll gain by doing different things with it and showing what’s possible. (A “work in progress” version of the new language, currently named LocoMotion, is available.)

It’s about gathering materials that are choreographically specific, but which can be used in a computational environment. It’s about making a language that lets us manipulate those materials improvisationally and collaboratively and fluidly, and then seeing what kinds of artistic performances and outputs we can now make that we couldn’t make before.
