This is a tutorial on how to use data from Microsoft's Kinect game controller to create generative visuals in Processing (a Java-based authoring environment). Discussion includes installing and running the OpenKinect libraries as well as the OpenNI API, creating generative visuals driven by Kinect tracking data, and interacting with “virtual” interface elements (think Minority Report). The authoring environment is a MacBook Pro with 4 GB of RAM and a 2.5 GHz processor.

This tutorial is informed in large part by the work of Daniel Shiffman (http://www.shiffman.net/), who is responsible for the OpenKinect library for Processing. For more detailed info and tutorials, head over to his site and check it out!

This video uses OpenKinect and a midi controller to re-render objects in the Kinect field of view as geometric shapes.

About Processing
from Processing.org:
Processing is an open source programming language and environment for people who want to create images, animations, and interactions. Initially developed to serve as a software sketchbook and to teach fundamentals of computer programming within a visual context, Processing also has evolved into a tool for generating finished professional work. Today, there are tens of thousands of students, artists, designers, researchers, and hobbyists who use Processing for learning, prototyping, and production.

About Kinect
The Kinect is a stereo camera (actually triple camera including the IR sensor) that has some pretty sophisticated firmware algorithms that can spit out a wide variety of depth and motion tracking data.
Hardware specs:
Array of 4 microphones supporting single speaker voice recognition
Depth Camera 640×480 pixel resolution @30FPS
Color VGA Motion Camera 640×480 pixel resolution @30FPS
NOTE: for the purposes of this demo I’m using Kinect model #1414. With the newer versions of the “old” kinect (model #1473) the firmware has been changed, and no longer supports the libraries used in this tutorial.

About this tutorial
“Kinect for Processing” involves configuring a set of libraries that can be compiled with the Processing programming environment to parse and manipulate Kinect data.

Use this tutorial however you want, feel free to add to it, be sure
to forward it to whomever and give it away for free. If you charge
some lazy bastard for this tutorial, you suck. If you use it for
a performance and you make money– you rock! If you have some questions,
advice, or praise, contact me at:
info(at)ericmedine(dot)com

What you will need
– Processing environment and compiler.
– OpenKinect libraries
– OpenNI libraries
– Kinect sensor

Other resources:
Processing experiments:
http://www.openprocessing.org/
Open Kinect:
openkinect.org
OpenNI
http://code.google.com/p/simple-openni/
http://code.google.com/p/simple-openni/wiki/Installation
My website:
https://ericmedine.com

UPDATED NOTES ON CERTAIN KINECT MODELS!
With the newer versions of the “old” Kinect (model #1473) the firmware has been changed, and no longer supports the libraries used in this tutorial. The Kinect I’m using is model #1414, and as far as I can tell most other libraries and tutorials you can find online refer to this model. So… when you buy a Kinect, be sure to specify the model number.

NOTES ON PROGRAMMING ENVIRONMENTS
You may prefer to use the Eclipse environment to work on your Processing files. I personally prefer it due to its code hinting, build management, and ease of integrating other Java libraries. The basic functions in this tutorial should work in the plain Processing environment, though.

HOWEVER! The way Eclipse handles its build paths makes certain kinds of applications very finicky to build correctly. For instance, if you choose to integrate OpenGL in Eclipse you have to be very careful about how you configure its native libraries. Since the Processing library (‘core.jar’) does not contain the functionality for using the OpenGL context, you need to add the library files named ‘gluegen-rt.jar’, ‘jogl.jar’ and ‘opengl.jar’, which are located in Processing/libraries/opengl/library/ (Windows) or Processing.app/Contents/Resources/Java/libraries/opengl/library/ (OS X).

Like the standard Processing library (‘core.jar’) the libraries for OpenGL have to be a part of the project build path. Copy the libraries into the ‘lib’ folder of the project. Right-click on the files in the Package Explorer of eclipse and select ‘Build Path’ → ‘Add to Build Path’.

The operating-system-specific files mentioned above are located in the same directory as ‘jogl.jar’. Create a folder named ‘jni’ in your project and copy all .jnilib documents (Mac OS X) into it. Processing version 1.0.7 contains 4 of them (libgluegen-rt.jnilib, libjogl_awt.jnilib, libjogl_cg.jnilib, libjogl.jnilib).

The last step is to link the ‘gluegen-rt.jar’ and ‘jogl.jar’ libraries with their native interfaces. Select one of them in the Package Explorer and right-click to open the context menu. Then choose ‘Build Path’ → ‘Configure Build Path’. A window opens where all library files are listed. Each library is represented by a node that can be opened and closed. Open the nodes of both libs, select the field ‘Native library location’, press ‘Edit’ on the right and set the ‘jni’ folder of your project. Confirm all changes and the project settings are ready for OpenGL.

From http://www.creativecoding.org/en/beyond/p5/eclipse_as_editor#using_opengl_other_libraries

I’ve also found that current versions of the Processing core.jar don’t play nice with OpenGL– you’ll get an error that looks like this: java.lang.NoSuchMethodError: processing.core.PImage.getCache(

To fix this, I’ve had to get the core.jar from Processing version 1.2 and use that in the build path instead; it seemed to compile without any problems.

EXPANDED NOTES on OPENKINECT in Eclipse
It seems as if OpenKinect needs its native files added as well in order to run– when you add your openkinect.jar file, rather than throwing it into your project all by itself, drag the entire folder containing the ‘openkinect.jar’ and ‘libKinect.jnilib’ files into your project (call this something like ‘kinectlibrary’). Right-click on ‘openkinect.jar’ and add it to your build path. Right-click on it again, choose ‘Configure Build Path’, then select the field ‘Native library location’, press ‘Edit’ on the right and set it to the ‘kinectlibrary’ folder of your project.

Gah! There’s also an issue with a conflict between 64-bit versions of Eclipse and the fact that OpenKinect is compiled using 32-bit. You’ll have to fix it by adding “-d32” in Eclipse > Run Configurations > Arguments Tab > VM Arguments, and it should run fine.

STEPS:

1) Install Processing and the OpenKinect libraries
2) Build your Processing application
3) Build a depth tracking function
4) Install OpenNI
5) Build a hand tracking application

STEP ONE: Install Processing and OpenKinect

There’s a lot that goes into preparing a workstation to handle software development. Some people like their dedicated programming environments, like Eclipse, some people don’t. For this demo we’re just going to be concentrating on setting up your workstation for the basics necessary to get you up and running.

First, you want to install Processing on your computer. Go here: http://www.processing.org/download/ and follow the instructions to install. This tutorial is based on the Mac OS X operating system, but there are Linux and Windows versions as well.

Go ahead and install the OpenKinect libraries here: https://github.com/shiffman/libfreenect/raw/master/wrappers/java/processing/distribution/openkinect.zip

By default Processing will create a “sketchbook” folder in your Documents directory, i.e. on my machine it’s: /Users/ericmedine/Documents/Processing/

If there isn’t already, create a folder called “libraries” there, i.e. /Users/ericmedine/Documents/Processing/libraries/

Then go and download openkinect.zip and extract it in the libraries folder, i.e. you should now see:
/Users/ericmedine/Documents/Processing/libraries/openkinect/
/Users/ericmedine/Documents/Processing/libraries/openkinect/library/
/Users/ericmedine/Documents/Processing/libraries/openkinect/examples/
etc.

Restart Processing, open up one of the examples in the examples folder and you are good to go! More about installing libraries:
http://wiki.processing.org/w/How_to_Install_a_Contributed_Library

Anyway, at this point you should have everything you need– Processing and the OpenKinect extras you need to compile to your device. Next step: build your application!

STEP TWO: Build your processing application

Alright, open Processing up and build something!

1) Setup our sketch!

First, you want to import your kinect libraries and create a reference object to your kinect:

import org.openkinect.*;
import org.openkinect.processing.*;

// Kinect Library object
Kinect kinect;

Now, setup your environment and decide how big you want it to be. Oops– the kinect only spits out 640×480… bummer. Luckily for us we are using a vector-based program, so we can upscale later on.

void setup() {
size(640, 480);
}

There you go– you’ve made a Processing app! You can test it from here, but it might throw an error if your Kinect libraries are not correctly integrated into the Processing environment. I’ve found it finicky, and had to restart Processing a couple times before they’re detected.

Now let’s make an instance of the Kinect in setup:

void setup() {
size(640, 480);
kinect = new Kinect(this);
kinect.start();
}

Once you’ve done this you can begin to access data from the kinect sensor. Currently, the library makes data available to you in four ways:

RGB image from the kinect camera as a PImage.

Grayscale image from the IR camera as a PImage

Grayscale image with each pixel’s brightness mapped to depth (brighter = closer).

Raw depth data (11 bit numbers between 0 and 2048) as an int[] array

Let’s look at these one at a time. If you want to use the Kinect just like a regular old webcam, you can request that the RGB image is captured:

kinect.enableRGB(true);

Then ask for the image as the PImage.

PImage img = kinect.getVideoImage();
image(img,0,0);

Here’s the full code, which will show the video pulled from the RGB camera:

import org.openkinect.processing.*;

// Kinect Library object
Kinect kinect;

void setup(){
size(640, 480);
kinect = new Kinect(this);
kinect.start();
kinect.enableRGB(true);
}

void draw(){
PImage img = kinect.getVideoImage();
image(img,0,0);
}

Let’s try using the IR camera! Notice the buggy scrolling…. I’m sure there’s a reason that happens, but I have no idea what it is.

kinect.enableIR(true);

Don’t care for a video image? We can get the depth image as follows:

PImage img = kinect.getDepthImage();

What’s that? It doesn’t work? of course not– you have to enable the depth camera in the “setup” first!

kinect.enableDepth(true);
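Putting those pieces together, a minimal depth-viewing sketch looks just like the RGB example above, with enableDepth() and getDepthImage() swapped in:

import org.openkinect.processing.*;

// Kinect Library object
Kinect kinect;

void setup(){
  size(640, 480);
  kinect = new Kinect(this);
  kinect.start();
  // enable the depth camera instead of RGB
  kinect.enableDepth(true);
}

void draw(){
  // brighter pixels = closer to the camera
  PImage img = kinect.getDepthImage();
  image(img, 0, 0);
}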

If you hate images and love numbers, try looking at the kinect data as an array of numbers. When you start getting into really sexy image manipulation, you’ll want to use this rather than getting the depth image. However, displaying this data will slow Processing to a crawl. Be prepared.

int[] depth = kinect.getRawDepth();
println(depth);

You can’t enable the IR camera with the depth or RGB– it will display the last one enabled by default. This is a hardware limitation, although I’m sure someone is already building a workaround. There is a bug with the IR camera library as well, it has some wacky scrolling thing going on, which is a bummer.

You might be asking at this point “what’s the Kinect’s range?” The answer is 0.7–6 meters, or 2.3–20 feet.

Note that you will get black pixels (or a raw depth value of 2048) both for objects that are too far away and for objects that are too close. Also, there is the typical stereo vision problem: a “dead zone” will appear where one camera can see a surface but the other can’t.
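If those invalid readings throw off your math (like the averaging we’ll do in the depth-tracking step below), you can simply skip them. A minimal sketch, assuming a raw value of 0 or 2048 means “no reading”:

int[] depth = kinect.getRawDepth();
float sum = 0;
int valid = 0;
for (int i = 0; i < depth.length; i++) {
  // skip pixels the sensor couldn't resolve (too close, too far, or in the dead zone)
  if (depth[i] == 0 || depth[i] == 2048) continue;
  sum += depth[i];
  valid++;
}
if (valid > 0) {
  println("average depth of valid pixels: " + sum / valid);
}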

Congratulations! You have built a Kinect video system! Now, let’s do something cooler with it.

STEP THREE: Average Depth Tracking!

So, we’ve seen that one of the most interesting things about the kinect is the lightweight data it provides. We can take the depth image we’re getting from it and turn it into a more interesting, abstract shape… and use it to track objects that fall within certain depth parameters!

We’re going to use more of Daniel Shiffman’s libraries, in particular the “tracker” class from his depth tracking demo. This is an external class that does all the heavy lifting in terms of data processing, so we can keep our main class fairly clean and easy to read.


import org.openkinect.*;
import org.openkinect.processing.*;

// Instantiate our kinect tracker. It’s a separate class that we’ll get into in a minute, basically it parses the depth lookup information for us.

KinectTracker tracker;
// Kinect Library object
Kinect kinect;
void setup() {
size(640,480);

// Along with making our new kinect object reference, we’re making a kinect tracker!

kinect = new Kinect(this);
tracker = new KinectTracker();
}

void draw() {
background(255);

// initialize the tracking analysis

tracker.track();

// Shows the depth image so we can see what it’s doing

tracker.display();

// Let’s draw a circle at the raw location

PVector v1 = tracker.getPos();
fill(50,100,250,200);
noStroke();
ellipse(v1.x,v1.y,20,20);

// Let’s draw a circle at the “lerped” location. What’s “lerp”? It’s an abbreviation
// for linear interpolation, which is basically finding a point partway between two
// points: a lerped point sits somewhere on the line between (x0, y0) and (x1, y1).
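// For example, lerp(0, 100, 0.3) returns 30, the point 30% of the way from 0 to 100.
// The tracker lerps with a factor of 0.3 every frame, so this circle eases toward the
// raw position instead of jumping straight to it.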

PVector v2 = tracker.getLerpedPos();
fill(100,250,50,200);
noStroke();
ellipse(v2.x,v2.y,20,20);

}

// Make it stop!
void stop() {
tracker.quit();
super.stop();
}

So! You should have a nice little Processing app that will spawn some circles around whatever falls within the depth threshold! Notice how few lines of code we’re looking at– this is the advantage of displaying the data this way.

Wait, what about the kinect tracking class? Here you go. This class takes the raw location from the kinect, and finds all the visual data that falls within a depth range. Bored with only one depth range? Make two!
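If you do want a band rather than a single cutoff, the change is small: keep a near and a far threshold and only count the pixels between them. A sketch of what the depth test inside the track() method (shown below) might look like, with placeholder values:

// a hypothetical near/far band instead of a single cutoff
int minThreshold = 600;
int maxThreshold = 745;

// ...and inside the pixel loop of track():
if (rawDepth > minThreshold && rawDepth < maxThreshold) {
  sumX += x;
  sumY += y;
  count++;
}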

class KinectTracker {
// Size of kinect image
int kw = 640;
int kh = 480;
// this is the depth range, distance past which we will ignore all pixels
int threshold = 745;
// Raw location
PVector loc;
// Interpolated location
PVector lerpedLoc;
// Depth data
int[] depth;
PImage display;

/// init constructor
KinectTracker() {
kinect.start();
kinect.enableDepth(true);
// We could skip processing the grayscale image for efficiency
// but this example is just demonstrating everything
kinect.processDepthImage(true);
display = createImage(kw,kh,PConstants.RGB);
loc = new PVector(0,0);
lerpedLoc = new PVector(0,0);
}

void track() {
// Get the raw depth as array of integers
depth = kinect.getRawDepth();
// Being overly cautious here, doing a “null” check
if (depth == null) return;
float sumX = 0;
float sumY = 0;
float count = 0;
for(int x = 0; x < kw; x++) {
for(int y = 0; y < kh; y++) {
// Mirroring the image
int offset = kw-x-1+y*kw;
// Grabbing the raw depth
int rawDepth = depth[offset];
// Testing the raw depth against the threshold — if it’s closer than the threshold, count this pixel
if (rawDepth < threshold) {
sumX += x; sumY += y; count++;
}
}
}
// As long as we found something
if (count != 0) {
loc = new PVector(sumX/count,sumY/count);
}
// Interpolating the location, doing it arbitrarily for now
lerpedLoc.x = PApplet.lerp(lerpedLoc.x, loc.x, 0.3f);
lerpedLoc.y = PApplet.lerp(lerpedLoc.y, loc.y, 0.3f);
}
PVector getLerpedPos() {
return lerpedLoc;
}
PVector getPos() {
return loc;
}

/// now let’s display some data!

void display() {
PImage img = kinect.getDepthImage();
// Being overly cautious here
if (depth == null || img == null) return;
// Going to rewrite the depth image to show which pixels are in threshold
// A lot of this is redundant, but this is just for demonstration purposes
display.loadPixels();
for(int x = 0; x < kw; x++) {
for(int y = 0; y < kh; y++) {
// mirroring image
int offset = kw-x-1+y*kw;
// Raw depth
int rawDepth = depth[offset];
int pix = x+y*display.width;

// if the pixel data falls within the threshold, make them red!

if (rawDepth < threshold) {
display.pixels[pix] = color(150,50,50);
} else {
display.pixels[pix] = img.pixels[offset];
}
}
}

display.updatePixels();
// Draw the image
image(display,0,0);
}

// stop it!

void quit() {
kinect.quit();
}

int getThreshold() {
return threshold;
}

//// set the depth threshold #
//// this is arbitrary– but we could make this a function called from the main class… hmm… (see the sketch just after this class)

void setThreshold(int t) {

threshold = t;
}
}
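As hinted in that last comment, setThreshold() isn’t actually called anywhere yet. One way to wire it up, just as a sketch, is to adjust the threshold from keyPressed() in the main sketch; the step size of 5 is arbitrary:

void keyPressed() {
  if (key == CODED) {
    if (keyCode == UP) {
      // push the threshold further away
      tracker.setThreshold(tracker.getThreshold() + 5);
    } else if (keyCode == DOWN) {
      // pull the threshold closer
      tracker.setThreshold(tracker.getThreshold() - 5);
    }
    println("threshold: " + tracker.getThreshold());
  }
}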

STEP FOUR: Open NI

OpenNI is an application programming interface (API) for writing applications utilizing “natural interaction”, which covers communication with both low-level devices (e.g. vision and audio sensors) and high-level middleware solutions (e.g. visual tracking using computer vision). For our purposes, OpenNI and NITE are being used through a wrapper for Processing, so not all functions of OpenNI are supported; the wrapper is meant more to deliver simple access to the functionality of the library. Basically it is another way to access the Kinect data, cameras, etc., and it is even instantiated in a similar way to OpenKinect. Unlike OpenKinect, however, it does a lot more with skeleton and gesture tracking, and it has a much more robust community. It is also a little more finicky and slightly less stable.

First, you’ll have to actually install OpenNI. It’s kind of a hassle: you have to download the installer, open up the terminal, and run some sudo commands. You also have to be on a particular version of Mac OS X or Windows 7, and there are some weird issues with running 32- vs 64-bit Java libraries. The full walkthrough is here: http://code.google.com/p/simple-openni/wiki/Installation

Once you have it installed, you need to make sure that you’ve put the OpenNI libraries in the “libraries” folder in the Processing “sketches” folder. After that’s done, we can use OpenNI!

First, let’s start with the basics– access the cameras. You’ll notice that we import/instantiate the class in a similar way. However, we’re calling the kinect object “context” rather than “kinect”. No reason– that’s just how the example is, we can call it something else.

import SimpleOpenNI.*;
SimpleOpenNI context;

void setup(){
context = new SimpleOpenNI(this);
// enable depthMap generation
context.enableDepth();
// enable camera image generation
context.enableRGB();
background(200,0,0);
// set the size of the canvas to the actual kinect image data
// which is actually 640×480

size(context.depthWidth() + context.rgbWidth() + 10, context.rgbHeight());
}

void draw(){
// update the cam
context.update();
// draw depthImageMap
image(context.depthImage(),0,0);
// draw camera
image(context.rgbImage(),context.depthWidth() + 10,0);
}

So– even though we’re using a totally different method of accessing the kinect data, notice that the video received looks exactly the same.
 
STEP FIVE: HAND TRACKING
Now, let’s add some deeper functionality and build a hand tracking function! Most of this is from the samples that you can download with SimpleOpenNI (and I’m not sure what some of this code does), but we’re going to add a couple of tweaks afterward.

/// first we’ll add the libraries
import SimpleOpenNI.*;
SimpleOpenNI context;
// add the NITE function
// this handles turning stuff on and off
XnVSessionManager sessionManager;
XnVFlowRouter flowRouter;
PointDrawer pointDrawer;

void setup() {
context = new SimpleOpenNI(this);
// mirror is by default enabled
context.setMirror(true);
// enable depthMap generation
context.enableDepth();
// enable the hands + gesture
context.enableGesture();
context.enableHands();
// setup NITE
sessionManager = context.createSessionManager("Click,Wave", "RaiseHand");
pointDrawer = new PointDrawer();
flowRouter = new XnVFlowRouter();
flowRouter.SetActive(pointDrawer);
sessionManager.AddListener(flowRouter);
size(context.depthWidth(), context.depthHeight());
smooth();
}

void draw() {
background(200,0,0);
// update the camera
context.update();
// update nite
context.update(sessionManager);
// draw depthImageMap
image(context.depthImage(),0,0);
// draw the list
pointDrawer.draw();
}

/// this unloads the session

void keyPressed() {
switch(key) {
case 'e':
// end sessions
sessionManager.EndSession();
println("end session");
break;
}
}
/////////////////////////////////////////////////////////////////////////////////////////////////////
// session callbacks
void onStartSession(PVector pos) {
println("onStartSession: " + pos);
}

void onEndSession() {
println("onEndSession: ");
}

void onFocusSession(String strFocus,PVector pos,float progress) {
println("onFocusSession: focus=" + strFocus + ",pos=" + pos + ",progress=" + progress);
}

/// end session callbacks //////////////////////////////////////////////////////////////////////////////////////////////////
// PointDrawer keeps track of the handpoints
class PointDrawer extends XnVPointControl {
HashMap _pointLists;
int _maxPoints;
color[] _colorList = { color(255,0,0),color(0,255,0),color(0,0,255),color(255,255,0)
};

public PointDrawer() {
_maxPoints = 30; _pointLists = new HashMap();
}

public void OnPointCreate(XnVHandPointContext cxt) {
// create a new list when it’s triggered by the point drawer
addPoint(cxt.getNID(),new PVector(cxt.getPtPosition().getX(),cxt.getPtPosition().getY(),cxt.getPtPosition().getZ()));
println("OnPointCreate, handId: " + cxt.getNID());
}
/// update the point

public void OnPointUpdate(XnVHandPointContext cxt) {
//println("OnPointUpdate " + cxt.getPtPosition());
addPoint(cxt.getNID(),new PVector(cxt.getPtPosition().getX(),cxt.getPtPosition().getY(),cxt.getPtPosition().getZ()));
}


// remove list

public void OnPointDestroy(long nID) {
println("OnPointDestroy, handId: " + nID);

if(_pointLists.containsKey(nID)) _pointLists.remove(nID);
}
//// get the hand points list

public ArrayList getPointList(long handId) {
ArrayList curList;
if(_pointLists.containsKey(handId)) curList = (ArrayList)_pointLists.get(handId);
else {
curList = new ArrayList(_maxPoints);
_pointLists.put(handId,curList);
}
return curList;
}

// put the hand points in an array list
public void addPoint(long handId,PVector handPoint) {
ArrayList curList = getPointList(handId);
curList.add(0,handPoint);
if(curList.size() > _maxPoints) curList.remove(curList.size() - 1);
}

// now that we have our points lists, let’s draw them!
public void draw() {
if(_pointLists.size() <= 0) return;
pushStyle();
noFill();
PVector vec;
PVector firstVec;
PVector screenPos = new PVector();
int colorIndex=0;
// draw the hand lists
Iterator<Map.Entry> itrList = _pointLists.entrySet().iterator();
while(itrList.hasNext()) {
strokeWeight(2);
stroke(_colorList[colorIndex % (_colorList.length - 1)]);
ArrayList curList = (ArrayList)itrList.next().getValue();
// draw line
firstVec = null;
Iterator<PVector> itr = curList.iterator();
beginShape();
while (itr.hasNext()) {
vec = itr.next();
if(firstVec == null) firstVec = vec;
// calc the screen position and find the vertex of the hand point
/// notice we only have an x and y– since we’re using DepthField data, we can check for a z position as well!
context.convertRealWorldToProjective(vec,screenPos);
vertex(screenPos.x,screenPos.y);
}
endShape();

// if we have a vector then put a red dot in it!
if(firstVec != null) {
strokeWeight(8);
context.convertRealWorldToProjective(firstVec,screenPos);
point(screenPos.x,screenPos.y);
}
colorIndex++;
}
popStyle();
}
}

So! We have a hand tracker that will draw a line that follows the center of your hand. Nice, but not super useful. Let’s add some interactivity! We can add a button and have something happen when we move the hand over it.

There’s a couple ways to do this. We can check the position of our hand tracker (screenPos.x, screenPos.y) and see if it falls within certain co-ordinates, or we can make a “button” that we can “touch”.
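The first approach is just a hard-coded bounds check on those screen coordinates; a quick sketch (the numbers are arbitrary):

// does the hand point fall inside a 30x30 box whose corner is at (20, 20)?
if (screenPos.x > 20 && screenPos.x < 50 && screenPos.y > 20 && screenPos.y < 50) {
  println("hand is over the hot spot");
}

The second approach, a virtual “button”, is more flexible.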

How do we do that?

Well, the easiest way is to take advantage of the java.awt geometry classes that Processing can use directly. Normally it’s quite hard to check whether a point is inside an irregular shape with the standard Processing functions, but it’s easy with the java.awt Shape/Polygon/Rectangle classes. Just call contains(x, y) on the shape (or intersects() to test against a rectangle)!

// So, the first thing we want to do is create a shape. Put this code at the top of the document
// with all the other code that you use to instantiate variables, etc.
Poly theButton;
/// array of x and y co-ordinates: upper left corner, upper right corner, lower right corner, lower left corner (then back to the start)
int[] x = { 20,50,50,20,20 };
int[] y = { 20,20,50,50,20 };
// then, inside setup(), build the button:
theButton = new Poly(x,y,5);

/// since we want to be able to see the buttons, let’s put them in a function
/// that way we can call them from inside the “draw” function after we draw the kinect data

/// otherwise the kinect depth data will “cover” the button

void doTargetButtons(){
/// to draw the button, just call its “drawMe()” method
theButton.drawMe();
}
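A note here: the Poly class isn’t part of Processing and isn’t defined in this tutorial; it’s a small helper that wraps java.awt.Polygon. A minimal sketch of what such a class might look like (your version may differ):

// a small helper class wrapping java.awt.Polygon
class Poly {
  java.awt.Polygon awtPoly;

  Poly(int[] xPoints, int[] yPoints, int numPoints) {
    awtPoly = new java.awt.Polygon(xPoints, yPoints, numPoints);
  }

  // hit test: is the point inside the polygon?
  boolean contains(float px, float py) {
    return awtPoly.contains(px, py);
  }

  // draw the outline with Processing's shape functions
  void drawMe() {
    beginShape();
    for (int i = 0; i < awtPoly.npoints; i++) {
      vertex(awtPoly.xpoints[i], awtPoly.ypoints[i]);
    }
    endShape(CLOSE);
  }
}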

Now, all we have to do is to go into the “if(firstVec != null) { ” statement, and drop in our “intersect” code like so:

if(theButton.contains(screenPos.x,screenPos.y)) {
/// do something here
println(“YOU HAVE HIT A BUTTON”);

}
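In context, the end of PointDrawer’s draw() would then look roughly like this:

// if we have a vector then put a red dot in it!
if (firstVec != null) {
  strokeWeight(8);
  context.convertRealWorldToProjective(firstVec, screenPos);
  point(screenPos.x, screenPos.y);
  // check the projected hand position against our virtual button
  if (theButton.contains(screenPos.x, screenPos.y)) {
    println("YOU HAVE HIT A BUTTON");
  }
}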

Note: This method will not work if you transform or rotate shapes using the Processing rotate() or translate() functions, as the underlying java.awt.Polygon (or Rectangle, etc.) will remain unmoved even though you will be drawing the shape at its translated position.

Now, let’s do something fun– turn it into a musical instrument!

There’s a ton of ways to make Processing do audio: you can play back a sound file, or generate audio using one of the many sound and MIDI libraries available to Processing. Let’s use Minim (http://code.compartmental.net/tools/minim/); it’s a pretty robust audio library with a lot of features. We’re going to focus on something simple: a tone generator!

/// import the Minim audio libraries
import ddf.minim.*;
import ddf.minim.signals.*;

/// set up the waves
Minim minim;
AudioOutput out;
// MouseSaw msaw; // leftover from the example sketch, not used here

/// instantiate our audio sine wave
SineWave sine;

//// now that everything is instantiated, let's build the function that spawns
//// the sound and changes the pitch depending on the hand position!
/// We will pass “screenPos.x and screenPos.y” from our hand tracking code into this “makeSound” function:

void makeSound(float theHandX, float theHandY){
float freq = map(theHandY, 0, height, 1600, 60);

sine.setFreq(freq);
// pan always changes smoothly to avoid crackles getting into the signal
// note that we could call setPan on out, instead of on sine
// this would sound the same, but the waveforms in out would not reflect the panning
float pan = map(theHandX, 0, width, -1, 1);
sine.setPan(pan);
}

///// this initializes the sine wave (call it from setup)
setUpSineWave();

/// stop the sound
void stop(){
out.close();
minim.stop();
// super.stop();
}
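The setUpSineWave() helper isn’t shown here; a minimal sketch of what it might contain, following the standard Minim sine-wave pattern (the 440 Hz starting pitch and 200 ms portamento are placeholder choices):

void setUpSineWave(){
  minim = new Minim(this);
  // get a stereo line out from Minim
  out = minim.getLineOut(Minim.STEREO);
  // a sine wave at 440 Hz, half amplitude, matching the output's sample rate
  sine = new SineWave(440, 0.5, out.sampleRate());
  // glide between frequencies over 200 ms so pitch changes don't click
  sine.portamento(200);
  // attach the signal to the output so we can hear it
  out.addSignal(sine);
}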

So, if we want to make our sound change according to the hand, call the “makeSound” function from our hand tracking code (instead of a red dot):

makeSound(screenPos.x,screenPos.y);

Now, this is kind of annoying since it’s a constant drone. Let’s toggle the audio– we can attach the sound to the buttons! Let’s make two buttons, one for “off” and one for “on”. Better yet, let’s do something with the depth position. Rather than this:

makeSound(screenPos.x,screenPos.y);

Let’s do this:

makeSound(screenPos.x,screenPos.y, screenPos.z);

Go ahead and experiment with the sound manager and see where to best put the depth tracking!
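As a starting point, here’s one sketch of what that three-argument makeSound() might look like, using the z (depth) value to control volume; the depth range used in the mapping is a placeholder you’ll want to tune to your space:

void makeSound(float theHandX, float theHandY, float theHandZ){
  // pitch from the vertical hand position, as before
  float freq = map(theHandY, 0, height, 1600, 60);
  sine.setFreq(freq);
  // pan from the horizontal hand position
  float pan = map(theHandX, 0, width, -1, 1);
  sine.setPan(pan);
  // volume from depth: the closer the hand, the louder the tone
  // (the 500-3000 range is a placeholder; tune it to your room)
  float amp = map(constrain(theHandZ, 500, 3000), 500, 3000, 1, 0);
  sine.setAmp(amp);
}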