
ACTION CODING
A collaboration with David Sheinkopf, Gene Kogan, Morgan Refakis, and Ramsey Nasser
Installation, Performance, Video
Kinect, PC, Mac
2015-2017



Action Coding installed in the solo exhibition “Easy Is Not A Concept” at Eyebeam, 2016


Screenshot of Action Coding in the Kinect2Gesture interface, designed by Gene Kogan and performed by Morgan Refakis

Video performance, BodyLang Dictionary

Video performance, BodyLang Code “Ornate”

PROJECT CREDITS

Kinect to Gesture Software: Gene Kogan
BodyLang stack language: Ramsey Nasser
Choreographic support: Morgan Hille-Refakis & Caitlin Sikora
Arduino support: David Sheinkopf
P5 support: Caitlin Sikora

Additional contributions by: Hillary Merman, Lee Sargent, Ashley Middleton, Dara Blumenthal, Alex Todaro, James George, Brandon Scott, Matt Romein, Adam Chapman, Thomas Goldberg, Kenneth Kirchner, Clarinda Mac Low, Brian Foo, Carlo Antonio Villeneuva



RELATED WORKS

Time Step

Video

7 minutes

2015


Action Coding imagines a space for learning code that is physical, cooperative, and visible, asking: ‘What if code were approached as an externalized, performed activity such as dance or sport? Could code therefore be learned cooperatively, by watching and repeating through the body? If coding is a visible, cooperatively learned physical experience, what access is afforded? Who gravitates toward this physical process? How do the physical and mental experiences differ from the traditional experience of coding? How does the experience and understanding of code change? And, more abstractly, what might the products of coding become if performed in this way?’

Action Coding challenges current systemic biases of software development by inserting the body, as an input device, into an increasingly disembodied system. It is both a speculative project concerned with the future of the body in a digital world and a working system, consisting of open-source machine learning software and a Kinect, that translates physical input into digital output within a wide variety of coding environments.

A visible, repeatable series of full-body actions aids in the transfer of the building blocks of coding: the mind learns through the body, syntax becomes repeatable phrases, and logic becomes physical patterns, like a dance. The procedural memory required by the physical process amplifies the procedural memory required by computer coding, and the motor programs acquired through this process underscore the computational programs of code. Because coding in Action Coding is, in part, a function of motor learning, a new ‘coder’ may learn and internalize syntax and logic patterns more quickly (1, 2) and, because those patterns are taken in through the full neuromuscular system, retain them longer (3).

Investigations in computer vision, movement languages, and machine learning resulted in gesture libraries for three coding environments (Arduino, P5, and BodyLang, a custom stack language written by Ramsey Nasser), performed in two live performances and several video works.

Kinect2Gesture, the application written by Gene Kogan for Action Coding, differs from other full-body gestural systems in that it uses machine learning algorithms to create gesture libraries. Users may build and train the computer to recognize any single gesture performed within a pre-set time-frame, which the application uses to define the start and end of the movement. To train the system on a new gesture, the gesture is performed repeatedly (20-60 times); each repetition generates a data set that the computer ‘learns.’ The wider the variety of approaches in the training process, the greater the system’s accuracy when predicting. As users update the system to create their own gesture libraries, they can also apply those libraries to a variety of coding environments, from Arduino to P5 and beyond. As an application, Kinect2Gesture is not constrained to any particular development environment, nor is anyone engaging with it constrained to a limited library of pre-made gestures.
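The paragraph above describes the general training loop: a fixed time window frames each gesture, every repetition becomes one labeled example, and variety across repetitions improves prediction. The sketch below illustrates that technique in Python; it is not Kinect2Gesture’s actual implementation, and the window length, joint count, and the k-nearest-neighbors classifier (via scikit-learn) are illustrative assumptions standing in for whatever model the application uses.

# Hypothetical sketch of the training loop described above, not the
# actual Kinect2Gesture code. Assumes each training example is a fixed
# window of skeleton frames (30 frames, 25 joints, 3 coordinates each),
# flattened into one feature vector.
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

FRAMES_PER_WINDOW = 30  # assumed pre-set time-frame bounding the movement
NUM_JOINTS = 25         # assumed per-frame skeleton joint count

def flatten_window(window):
    """Turn one recorded gesture window into a single feature vector."""
    window = np.asarray(window, dtype=np.float32)
    assert window.shape == (FRAMES_PER_WINDOW, NUM_JOINTS, 3)
    return window.reshape(-1)

def train_gesture_library(recordings):
    """recordings maps gesture name -> list of recorded windows.
    Each gesture is performed repeatedly (the text suggests 20-60 times);
    every repetition becomes one labeled training example."""
    X, y = [], []
    for name, windows in recordings.items():
        for w in windows:
            X.append(flatten_window(w))
            y.append(name)
    model = KNeighborsClassifier(n_neighbors=3)
    model.fit(np.stack(X), y)
    return model

def predict_gesture(model, window):
    """Classify a newly performed window; the caller then maps the label
    to an action in any coding environment (Arduino, P5, BodyLang)."""
    return model.predict(flatten_window(window)[None, :])[0]

Training on varied repetitions (different speeds, stances, performers) gives the classifier more of the input space to generalize from, which matches the observation above that a wider variety of approaches improves prediction accuracy.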

This system of hardware and software, though innovative, is not yet stable enough for real-world application, so Action Coding is best understood in a demonstration format. A system operator can run the system while engaging visitors in the project’s details and movement language, performing pieces of the language for visitors and teaching pieces of it to them, so that they may fully understand the concept and outcome and share in the vision for an expanded future for code and its products.


For a tutorial and documentation, please see the Kinect2Gesture video created by Gene Kogan.

1. http://www.nature.com/neuro/journal/v18/n5/full/nn.3993.html
2. http://www.bbc.com/future/story/20140321-how-to-learn-fast-use-your-body
3. Schmidt, R.A., & Lee, T.D. (2005). Motor Control and Learning: A Behavioral Emphasis (4th ed.). Champaign, IL: Human Kinetics.



The Arc of An Idea

Digital prints

2015
