Interactive X-Ray!

Updated: Jun 21, 2023
A collaboration with Elena Glazkova.

The What?
The user's movements are represented by a skeleton projection, and they get to choose whether they're hungry or full with the click of a button (well, two buttons).





The How?
Tools: Kinect 2*, Kinectron, a Windows Laptop, Projector, Arduino, LED Strips. 
Skills: Coding, Patience, Physical Computing, Soldering.
Time: December 2019
*Kinect 2 — a motion-sensing input device.
Kinectron — open-source software made of two components: an electron application that broadcasts Kinect data over a peer connection, and a client-side API that brings realtime motion-capture data into the browser. We used it together with the p5.js library and an Arduino to build this project.

The final version of the code was polished and explicitly commented by Lisa Jamhoury, based on our project, and published in the official Kinectron repository.
GitHub repository with our final code
Browser version of the code in p5.js

Developing Process
So, to achieve our goal, we receive data from the Kinect through the Kinectron server. The Kinect detects the 25 main joints of the human body in front of it, and Kinectron normalizes the data so that the body fits into the p5.js canvas. Next, we draw the bones of the skeleton by assigning 'x' and 'y' positions to the corresponding joints. Our code also includes serial communication: on the physical side of the project, there are two buttons controlled by an Arduino Nano, the 'hungry' button and the 'full' button.
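The normalization step above can be sketched as a small helper. This is our illustrative stand-in, not the published code: it assumes Kinectron delivers each joint with `depthX`/`depthY` values in the 0–1 range, which we then scale into canvas pixels so a bone is just a line between two scaled joints.

```javascript
// Sketch of the joint-scaling step (hypothetical function names and joint
// shape; assumes normalized 0–1 depthX/depthY coordinates from Kinectron).
function scaleJoint(joint, canvasWidth, canvasHeight) {
  // Map normalized coordinates into pixel space so the skeleton fits the canvas.
  return {
    x: joint.depthX * canvasWidth,
    y: joint.depthY * canvasHeight,
  };
}

// A "bone" is simply the segment between two scaled joints.
function boneEndpoints(jointA, jointB, w, h) {
  return [scaleJoint(jointA, w, h), scaleJoint(jointB, w, h)];
}
```

In the actual sketch these pixel positions feed the drawing calls each frame.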



The yellow button indicates that the user is hungry, and the green button means they're full.


If the user selects the 'hungry' option, non-edible food will appear in a thought bubble above their head. If they choose 'full', food will appear inside of them.



What we Learned
💀 It’s crucial to determine the correct scaling of the skeleton from the beginning, as it influences both the placement and rotation of the bones.

💀 Working with recorded Kinectron data while adjusting the skeleton and running the sketch locally at all stages is more efficient.

💀 We utilized cameraX and cameraY coordinates, the createVector() function in p5.js, the translate() function, and the offset JavaScript method to place the bones accurately. We received significant assistance in writing the placeBone() and rotateBone() functions from Elena's ICM. 

💀 The level of opacity of the bone images being drawn is crucial for accurate placement and rotation. The lower the opacity, the better.

💀 Factors such as the position of the Kinect, lighting in the environment, obstacles, and external objects like chairs and glass doors significantly impact the accuracy.
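The placement-and-rotation lesson above can be illustrated with a minimal sketch. `boneTransform` is our stand-in, not the published `placeBone()`/`rotateBone()` functions: given two joint positions, it returns the midpoint to translate to, the angle (via `Math.atan2`) to rotate by, and the length for scaling the bone image.

```javascript
// Illustrative bone placement math (not the published functions): translate
// to the midpoint of the two joints, rotate by the joint-to-joint angle,
// then draw the bone image centered and scaled to the segment length.
function boneTransform(ax, ay, bx, by) {
  return {
    midX: (ax + bx) / 2,
    midY: (ay + by) / 2,
    angle: Math.atan2(by - ay, bx - ax), // radians, ready for p5's rotate()
    length: Math.hypot(bx - ax, by - ay), // for scaling the bone image
  };
}
```

Lowering the image opacity while tuning these values (as noted above) makes it much easier to see whether the drawn bone actually lines up with the joints.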

A Closer Look at the Food


Building the Installation
We also built a physical component for our experience—a 'scanner' positioned in front of the X-Ray that 'scans' users (though it's simply an installation to enhance the completeness of the experience).

All parts of the installation must be securely connected while remaining highly flexible, since electricity is involved and we'll need to rearrange and reassemble the components multiple times. The initial design of this scanner included two hula hoops, numerous LEDs, some type of 'poles,' and many other elements to consider — particularly, how to assemble everything without it appearing chaotic. Fortunately, Ben Light provided timely recommendations. Our solution came in the form of PVC pipes.


Special thanks to Lisa Jamhoury!

