This generative video art installation stages a battle between two animated characters driven by Recurrent Neural Networks (RNNs).

The neural networks generate text from two sources that are completely different in tone and subject matter. Because the “training material” fed to each network is so different, the generated output differs completely in syntax, structure, and subject matter: each network has its own “personality”.
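As a rough illustration of the technique, the sketch below trains two small character-level RNNs on two different corpora and samples text from each, in the spirit of karpathy/char-rnn (which is Lua/Torch; this Python/PyTorch analogue is not the installation’s actual code, and the corpus file names are placeholders).

```python
# Minimal char-level RNN sketch (PyTorch), assuming two placeholder corpora.
import torch
import torch.nn as nn

class CharRNN(nn.Module):
    def __init__(self, vocab_size, hidden=128):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, hidden)
        self.rnn = nn.GRU(hidden, hidden, batch_first=True)
        self.out = nn.Linear(hidden, vocab_size)

    def forward(self, x, h=None):
        e = self.embed(x)
        y, h = self.rnn(e, h)
        return self.out(y), h

def train_char_rnn(path, epochs=5, seq_len=100):
    """Train a small character-level model on one corpus."""
    text = open(path, encoding="utf-8").read()
    chars = sorted(set(text))
    stoi = {c: i for i, c in enumerate(chars)}
    data = torch.tensor([stoi[c] for c in text], dtype=torch.long)
    model = CharRNN(len(chars))
    opt = torch.optim.Adam(model.parameters(), lr=2e-3)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(epochs):
        for i in range(0, len(data) - seq_len - 1, seq_len):
            x = data[i:i + seq_len].unsqueeze(0)       # input characters
            y = data[i + 1:i + seq_len + 1].unsqueeze(0)  # next-character targets
            logits, _ = model(x)
            loss = loss_fn(logits.squeeze(0), y.squeeze(0))
            opt.zero_grad()
            loss.backward()
            opt.step()
    return model, chars, stoi

def sample(model, chars, stoi, seed="The ", length=200, temperature=0.8):
    """Generate text one character at a time; seed must use characters from the corpus."""
    model.eval()
    idx = torch.tensor([[stoi[c] for c in seed]], dtype=torch.long)
    out = seed
    with torch.no_grad():
        logits, h = model(idx)
        for _ in range(length):
            probs = torch.softmax(logits[0, -1] / temperature, dim=-1)
            nxt = torch.multinomial(probs, 1).item()
            out += chars[nxt]
            logits, h = model(torch.tensor([[nxt]]), h)
    return out

# Each corpus yields a network with its own "voice":
# model_a, chars_a, stoi_a = train_char_rnn("corpus_a.txt")
# model_b, chars_b, stoi_b = train_char_rnn("corpus_b.txt")
# print(sample(model_a, chars_a, stoi_a))
```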

Using a “sentiment” analysis API, further detail of behavior is generated by assessing the “mood” of the generated text, so the animated characters behave differently: movement, speed, size, color, and shape change depending on the data. For instance, “happy” text makes the visuals move in a loose, fluid, affable way, while “angry” or “sad” text makes the visuals appear rough and disjointed, provoking a sense of unease.
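A hedged sketch of that mapping is shown below. The installation’s actual sentiment toolchain and parameter set aren’t published here; NLTK’s VADER analyzer stands in for the sentiment API, and the parameter names (speed, jitter, hue) are illustrative.

```python
# Sketch: map the "mood" of a text snippet to example animation parameters.
# VADER is a stand-in for the sentiment API; parameter names are assumptions.
import nltk
from nltk.sentiment import SentimentIntensityAnalyzer

nltk.download("vader_lexicon", quiet=True)
analyzer = SentimentIntensityAnalyzer()

def mood_to_motion(text):
    """Turn a compound sentiment score in [-1, 1] into movement parameters."""
    score = analyzer.polarity_scores(text)["compound"]  # -1 (negative) .. 1 (positive)
    return {
        "speed":  0.5 + 0.5 * score,       # happier text moves faster and more fluidly
        "jitter": max(0.0, -score),        # angrier/sadder text moves roughly, disjointedly
        "hue":    int(200 + 100 * score),  # shift color with mood (placeholder scale)
    }

print(mood_to_motion("What a wonderful, bright morning!"))
print(mood_to_motion("Everything is ruined and I am furious."))
```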

These two visual objects are projected into the same physical space and interact with each other. Their “personalities”, produced by the neural-network analysis, conflict in ways that are characteristic of their source material.

This is a collaborative work between Eric Medine and James Blaha.

For more information on neural nets:
http://karpathy.github.io/2015/05/21/rnn-effectiveness/
https://github.com/karpathy/char-rnn

“Sentiment” analysis tools:
http://nlp.stanford.edu/sentiment/index.html
http://help.sentiment140.com/api