Artificial (?) Antidancer


A digital companion for the antidance improviser: a stick figure that follows, remixes, and counterpoints the movements of a person.

  • One of them is the recording of a person
  • One of them is a person improvising live with the other two
  • One of them is a computer program improvising live with the other two

The process of this project is a tale about simplicity.

In the first three homeworks, I explored different ways of giving a computer control over the structure of a movement sequence.

The starting point was the skeleton given by the Kinect: a collection of 25 three-dimensional points in space.

With TSP dances, I used brute force to find the ordering of poses with the shortest total distance between consecutive poses, and I confirmed that the mathematical distance between two different poses correlates well with how similar they look.
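
For reference, a minimal sketch of such a distance in Processing, assuming each pose is stored as an array of 25 joint positions (the function name is mine, not the repo's):

    // Distance between two poses, each an array of 25 joint positions.
    // Summing the per-joint Euclidean distances gives a rough measure
    // of how different two poses look.
    float poseDistance(PVector[] a, PVector[] b) {
      float d = 0;
      for (int i = 0; i < a.length; i++) {
        d += PVector.dist(a[i], b[i]);
      }
      return d;
    }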

With GA Choreographic Counterpoint, I used a genetic algorithm to rearrange a given set of poses from a movement sequence, with the goal of finding new movement sequences with a similar amount of movement flow.
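
A hypothetical sketch of what such a fitness function could look like, assuming "movement flow" is measured as the total distance traveled between consecutive poses (the repo may define it differently):

    // Hypothetical GA fitness: how close a candidate ordering's total
    // movement is to the flow of the original sequence (lower is better).
    // Uses poseDistance() from the sketch above.
    float flowFitness(PVector[][] candidate, float targetFlow) {
      float flow = 0;
      for (int i = 0; i < candidate.length - 1; i++) {
        flow += poseDistance(candidate[i], candidate[i + 1]);
      }
      return abs(flow - targetFlow);
    }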

With kNN dances and Dijkstra dances, I used a mix of k-nearest neighbors (kNN) and graphs to build a map of similarities and relationships within a given set of poses. Dijkstra’s algorithm allowed the computer to find the shortest path between two given poses, using only the available set.
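
A sketch of that idea under the same assumptions as above; the graph construction and the pathfinding are standard, but all the names here are mine:

    // Connect every pose to its k nearest neighbors, weighted by
    // poseDistance(). The result is an adjacency matrix where 0 means
    // "no edge".
    float[][] buildKnnGraph(PVector[][] poses, int k) {
      int n = poses.length;
      float[][] w = new float[n][n];
      for (int i = 0; i < n; i++) {
        float[] d = new float[n];
        for (int j = 0; j < n; j++) {
          d[j] = (j == i) ? Float.MAX_VALUE : poseDistance(poses[i], poses[j]);
        }
        for (int picked = 0; picked < k; picked++) {  // take the k smallest
          int best = 0;
          for (int j = 1; j < n; j++) if (d[j] < d[best]) best = j;
          w[i][best] = d[best];
          w[best][i] = d[best];                       // keep it undirected
          d[best] = Float.MAX_VALUE;                  // don't pick it twice
        }
      }
      return w;
    }

    // Plain Dijkstra over that adjacency matrix: returns the chain of
    // pose indices from start to goal, or an empty array if none exists.
    int[] shortestPosePath(float[][] w, int start, int goal) {
      int n = w.length;
      float[] dist = new float[n];
      int[] prev = new int[n];
      boolean[] done = new boolean[n];
      for (int i = 0; i < n; i++) { dist[i] = Float.MAX_VALUE; prev[i] = -1; }
      dist[start] = 0;
      for (int step = 0; step < n; step++) {
        int u = -1;
        for (int i = 0; i < n; i++) {
          if (!done[i] && (u == -1 || dist[i] < dist[u])) u = i;
        }
        if (u == -1 || dist[u] == Float.MAX_VALUE) break;  // nothing reachable left
        done[u] = true;
        for (int v = 0; v < n; v++) {
          if (w[u][v] > 0 && dist[u] + w[u][v] < dist[v]) {
            dist[v] = dist[u] + w[u][v];
            prev[v] = u;
          }
        }
      }
      if (goal != start && prev[goal] == -1) return new int[0];
      ArrayList<Integer> path = new ArrayList<Integer>();
      for (int v = goal; v != -1; v = prev[v]) path.add(0, v);
      int[] out = new int[path.size()];
      for (int i = 0; i < out.length; i++) out[i] = path.get(i);
      return out;
    }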

In all that time, I overlooked the simplest solution: linear interpolation between two poses.

It seemed too simple.

It seemed too basic.

For this final project, I wanted to use the idea of a computer generating movement sequences, but have it counterpoint a sequence performed live by a person. I was interested in the improvisatory dialogue that could arise between the human and the program (made by the same human?).

For this final project, I thought that it would be interesting to have a mix of these past strategies for generating the movement sequences.

For this final project, I thought that it would be interesting to train an RNN to generate a movement sequence freely, or to constrain it so that it would effectively counterpoint a movement sequence executed live.

I went to the trouble of representing the poses in another way, since the raw points in space would present many problems when generalizing to other subjects. For example, simply because two people differ in size, the same pose performed by each would appear very dissimilar.

I decided to represent the poses as vectors of directions, one for each of the “bones” between joints. I went to the trouble of normalizing these vectors and of concatenating them all into a single multidimensional (and also normalized) vector.
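
A minimal sketch of that representation, assuming the Kinect's 25 joints; the bone list and names here are hypothetical, not the repo's:

    // BONES is a hypothetical list of (parent, child) joint index pairs;
    // the real sketch defines one pair per bone of the skeleton.
    int[][] BONES = { {0, 1}, {1, 20}, {20, 2}, {2, 3} /* ...and so on */ };

    // Convert joint positions into normalized bone directions, so that
    // body size no longer matters.
    float[] poseToVector(PVector[] joints) {
      float[] v = new float[BONES.length * 3];
      for (int i = 0; i < BONES.length; i++) {
        PVector bone = PVector.sub(joints[BONES[i][1]], joints[BONES[i][0]]);
        bone.normalize();                 // keep only the direction
        v[i * 3]     = bone.x;
        v[i * 3 + 1] = bone.y;
        v[i * 3 + 2] = bone.z;
      }
      float mag = 0;                      // normalize the whole vector too
      for (float x : v) mag += x * x;
      mag = sqrt(mag);
      for (int i = 0; i < v.length; i++) v[i] /= mag;
      return v;
    }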

Before using the RNN with those vectors, I figured out a way (documentation in progress) to train it to predict the next value along a line of any slope.

Then I spent a lot of time attempting, in multiple ways, to train an RNN to interpolate between poses.

The results were mostly similar to this.

It has its own beauty and quality, but I was still interested in the counterpoint possibilities.

I also went to the trouble of porting a lot of what I had done before (searching distances between poses, creating the graph between them) from p5.js to Processing, so that I could run everything live, without delays.

I even outlined the communication process between Processing and Keras.
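
That outline is not documented in the post, but a common channel between Processing and a Python process is OSC; here is a minimal sketch assuming the oscP5 library, with made-up ports and address pattern:

    import oscP5.*;
    import netP5.*;

    OscP5 oscP5;
    NetAddress keras;

    void setup() {
      oscP5 = new OscP5(this, 12000);              // listen for replies here
      keras = new NetAddress("127.0.0.1", 12001);  // the Python/Keras side
    }

    // Send the current pose vector to the Keras process.
    void sendPose(float[] poseVector) {
      OscMessage msg = new OscMessage("/pose");
      for (float x : poseVector) msg.add(x);
      oscP5.send(msg, keras);
    }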

When I got too impatient with all the vector math and representations of poses, I decided to program a simple linear interpolation between two random poses.
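
A minimal sketch of that interpolation, assuming the normalized bone-direction format described above (re-normalizing each in-between bone is my addition, not necessarily the repo's approach):

    // Componentwise linear interpolation between two poses, with t
    // running from 0 (pose a) to 1 (pose b). Each in-between bone is
    // re-normalized so it stays a unit direction.
    float[] lerpPose(float[] a, float[] b, float t) {
      float[] out = new float[a.length];
      for (int i = 0; i < a.length; i += 3) {
        PVector v = new PVector(lerp(a[i],     b[i],     t),
                                lerp(a[i + 1], b[i + 1], t),
                                lerp(a[i + 2], b[i + 2], t));
        v.normalize();
        out[i]     = v.x;
        out[i + 1] = v.y;
        out[i + 2] = v.z;
      }
      return out;
    }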

It seemed too simple.

It seemed too basic.

I saw the result, and it is basically what you already saw above.

I was impressed.

I added some features: the machine can move freely within its own set of poses, or it can choose to move to a pose from the recording, to a pose of the live person, or to the pose in its own set closest to the live person. So that’s actually what you already saw.
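
A hypothetical sketch of how those strategies could be wired together; none of these names come from the repo:

    // Each frame, the antidancer interpolates toward a target pose
    // chosen by the current mode.
    final int MODE_OWN = 0, MODE_RECORDING = 1, MODE_LIVE = 2, MODE_NEAREST = 3;

    float[] chooseTarget(int mode, float[][] ownSet,
                         float[] recordingPose, float[] livePose) {
      switch (mode) {
        case MODE_OWN:       return ownSet[int(random(ownSet.length))];
        case MODE_RECORDING: return recordingPose;
        case MODE_LIVE:      return livePose;
        default:             return nearestInSet(ownSet, livePose);
      }
    }

    // The pose in the machine's own set closest to the live person's pose.
    float[] nearestInSet(float[][] set, float[] target) {
      int best = 0;
      float bestD = Float.MAX_VALUE;
      for (int i = 0; i < set.length; i++) {
        float d = 0;
        for (int j = 0; j < set[i].length; j++) d += sq(set[i][j] - target[j]);
        if (d < bestD) { bestD = d; best = i; }
      }
      return set[best];
    }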

I want to add more features and improvisation strategies.

But isn’t that going against the simplicity that all this experience just showed me?

Source code

This project is open source and was built with Processing.

The source code for this project can be found in this GitHub repo:

  • SkeletonsCapture is the Processing sketch used to record a movement sequence into the vector format I’m using
  • SkeletonsDisplay is the Processing sketch that runs the recording, the live skeleton, and the Artificial (?) Antidancer
  • KerasProcess is the set of tests that I did for training and testing an LSTM with Keras.
