


Inspiration :
James Bridle
Autonomous Trap 001 (2017)



https://jamesbridle.com/works/autonomous-trap-001
With James Bridle's work there were a number of things that interested me: the documentary nature of it and the real-world implementation of his experiments. But the main takeaway was the humour in it and the lightness of his work, even though his work and his discoveries have very serious implications for the world we live in.
Tom White :
“This is not a …” collection



https://chauvetarts.com/artist/tom-white
I've been a fan of Tom White's work since I came across it in my research for DH&T in second year, and what I enjoyed then is what I enjoy now: his humour and his working relationship with AI. With this project I wanted to carry that relationship and humour through my own work.
Philipp Schmitt :
Declassifier


https://humans-of.ai/?img=8#15745,3139,3586
The Chair Project (Four Classics) :





From Philipp Schmitt's work these two pieces are the ones that spoke to me most, again for the same reasons I liked Tom White's work: the working relationship with machine learning and AI, using it as a tool or as a collaborator in the creation of the art. As well as the sense of humour, I also like the choice not to 'correct' the work, leaving the digital fingerprints on it, be they flaws or imperfections.
Initial Ideas:



Bringing Depth :


Option One :
Option one would be a drawing machine that lets the user draw with their tracked hands, using shapes and colours of their choosing on a blank canvas.

Option Two :
Option two would be a generative art piece whose elements could be controlled by the user in front of the screen using their body and limbs.
Generative Art Inspiration :



With these examples I'd like to emulate their level of detail and use of a cohesive colour scheme.
As in my control project, I think allowing people to work within one selected colour scheme, or a few of them, will keep the aesthetic interesting and appealing.
Ideally I'd use a regression model for this, so that there can be movement and control between the different positions of the user's hands.
I imagine the space for it being used as a kind of hemisphere in front of the camera, with trained points, where anywhere between those points would be estimated by the model.
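A minimal ml5.js regression sketch along these lines shows that interpolation idea (illustrative only, not the project code: the positions and labels here are made up, and with only two data points the guesses will be rough, but it shows the shape of the approach).

// Minimal regression test: train on two labelled positions,
// then ask the model to guess a value for a position in between.
let nn;

function setup() {
  createCanvas(480, 480);
  ml5.setBackend("webgl");
  nn = ml5.neuralNetwork({ task: "regression" });

  // Two trained points: far left of the frame labelled 0, far right labelled 100.
  nn.addData([0, height / 2], [0]);
  nn.addData([width, height / 2], [100]);

  nn.normalizeData();
  nn.train({ epochs: 50 }, trainingDone);
}

function trainingDone() {
  // A position halfway between the trained points should come back as a
  // guess somewhere between 0 and 100, rather than snapping to either end.
  nn.predict([width / 2, height / 2], (results) => {
    console.log(results[0].value);
  });
}

The real model would be trained on many more labelled arm poses, with the trained points sitting at the edges of that hemisphere.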

Moving Forward :
I think I'll go with option two for two reasons. Firstly, I feel that option one is very similar to my control project from the previous year, and I don't want to tread the same ground or have others comparing the two.
The other is that option two came to me more organically; I have a personal enthusiasm for it and I want to trust my gut more on this one. Taking the note from Cat, I believe option two also has more depth.
What I need to do to make it a reality :
- Train the skeleton model to recognise the positions of the user's hands.
- Create the generative art sketch with variables that can be tied to the outputs of the skeleton model.
- Combine the model and the sketch.
- Set up the webcam and test again; retrain if required.
BodyPose :
https://docs.ml5js.org/#/reference/bodypose
I think the best option for user interaction would be BodyPose detection, but thinking about it, the hemisphere idea would be too inconsistent, with the hands pointing straight down the barrel of the camera for ml5 to read them reliably.
Tested above, the snow-angel idea shows the strongest areas within the space I had, and with a screen in portrait and a webcam set up at the same distance it should give the same results.
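A bare-bones BodyPose sketch is enough for that kind of test, just marking both wrists over the webcam feed to see where in the frame the tracking holds up (this is a minimal reconstruction for illustration, not the exact test code):

// Minimal BodyPose test: draw the webcam feed and mark both wrists.
let webcam;
let bodyPose;
let poses = [];

function preload() {
  bodyPose = ml5.bodyPose();
}

function setup() {
  createCanvas(480, 480);
  webcam = createCapture(VIDEO);
  webcam.size(480, 480);
  webcam.hide();
  // Start continuous detection; results arrive in the callback.
  bodyPose.detectStart(webcam, (results) => {
    poses = results;
  });
}

function draw() {
  image(webcam, 0, 0);
  noStroke();
  fill(255, 0, 0);
  for (let pose of poses) {
    // Keypoints 9 and 10 are the left and right wrists in BodyPose's ordering.
    circle(pose.keypoints[9].x, pose.keypoints[9].y, 10);
    circle(pose.keypoints[10].x, pose.keypoints[10].y, 10);
  }
}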
Training with BodyPose:
With the ml5 BodyPose library, and with help from Cat, we trained the regression/prediction model on my arm poses, so that when people move their hands in view of the camera it controls the predetermined values within the p5 sketch.
Visuals Inspiration:

https://openprocessing.org/sketch/492429


https://openprocessing.org/sketch/1615214
For these OpenProcessing sketches I loved their painterly quality, as well as their use of colour and movement, finding the line between a traditional style and the digital.
Tutorial:
With the visuals I had no clue where to start, but I knew what I liked from OpenProcessing, and when I found a tutorial inspired by the same sketch I was, I thought it could be a good starting point. By doing it step by step I'd have a better understanding of what I'm doing, how I can change it and how I can tinker.
https://openprocessing.org/sketch/1776463/
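To keep the underlying technique clear before it gets mixed with BodyPose, this is a stripped-back version of the stacked-waves idea from the tutorial: rows of closed shapes whose top edges are pushed around by noise. It uses p5's built-in noise() instead of the SimplexNoise library used in the final code, and the numbers here are placeholders rather than the tutorial's values.

// Stripped-back stacked waves: each row is a closed shape whose top edge is
// displaced by noise; later (lower) rows are drawn over earlier ones, so only
// a wavy band of each row stays visible.
let palette = ["#EFBDEB", "#B68CB8", "#6461A0", "#314CB6", "#0A81D1"];
let waveCount = 40;
let xStep = 90;
let amplitude = 100;

function setup() {
  createCanvas(windowWidth, windowHeight);
  noStroke();
}

function draw() {
  background(palette[0]);
  let yStep = height / waveCount;
  for (let y = 0; y <= height; y += yStep) {
    fill(palette[floor(y / yStep) % palette.length]);
    beginShape();
    for (let x = 0; x <= width; x += xStep) {
      // The third noise() argument drifts with frameCount so the rows ripple over time.
      let n = noise(x * 0.009, y * 0.005, frameCount * 0.01) * amplitude;
      vertex(x, y + n - amplitude / 2);
    }
    // Close the shape down to the bottom of the canvas so the rows overlap.
    vertex(width, height);
    vertex(0, height);
    endShape(CLOSE);
  }
}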
Tinkering :






Above are some screenshots of my tinkering and messing around with the tutorial code I followed along with. I need to lose my preciousness around working with code and continue tinkering in my own time.
Final Output:




The code :
// BodyPose Webcam Waves assembled by Nicholas McLaughlin.
// BodyPose training code by Dr Cat Weir.
// Waves art code originally by Takawo on OpenProcessing, original work 220717a by Takawo -- https://openprocessing.org/sketch/1615214 --.
// Thank you to Cat, none of this would be possible or work without you!
/* This p5.js sketch uses the ml5.js BodyPose library, isolating the arms of the user's skeleton and then using the values from the shoulders, elbows and wrists on each arm to control the amplitude and velocity of the modified Waves art sketch. */
let palettesList = {
  0: ["#EFBDEB", "#B68CB8", "#6461A0", "#314CB6", "#0A81D1"],
  1: ["#2E1F27", "#854D27", "#DD7230", "#F4C95D", "#E7E393"],
  2: ["#BCC4DB", "#C0A9B0", "#7880B5", "#6987C9", "#6BBAEC"],
  3: ["#FFD289", "#FACC6B", "#FFD131", "#F5B82E", "#F4AC32"],
  4: ["#3A4F41", "#B9314F", "#D5A18E", "#DEC3BE", "#E1DEE3"],
  5: ["#CCFBFE", "#CDD6DD", "#CDCACC", "#CDACA1", "#CD8987"],
  6: ["#000000", "#CF5C36", "#EEE5E9", "#7C7C7C", "#EFC88B"],
  7: ["#E85D75", "#C76D7E", "#9F8082", "#8D918B", "#AD9B9A"],
  8: ["#EE6C4D", "#F38D68", "#EAF9FB", "#F662C9", "#17A398"],
  9: ["#000000", "#f2f2ed", "#000000", "#f2f2ed", "#000000"],
};

//------------- Waves Variables -------------//
let simplex;
let palette;
let xStep = 90;
let xFreq = 0.009;
let yFreq = 0.005;
let amplitude = 100;
let velocity = 0.001;
let waveCount = 40;

//------------- BodyPose Variables -----------------//
let webcam; // Variable to hold your webcam video
let bodyPose; // Variables to hold the body tracking values
let poses = [];
let connections;
let neuralNet; // Variables to hold the learning network
let state = "collection"; // A collection state to train for the values of the arm positions
let leftVal = -1;
let rightVal = -1;
let d = 10;
let f = 10;

//---------------- Load the BodyPose model ----------//
function preload() {
  let options = {
    flipped: true,
  };
  bodyPose = ml5.bodyPose(options);
}
// Set up the canvas, webcam, BodyPose model and neural network
function setup() {
  frameRate(10);
  createCanvas(windowWidth, windowHeight);
  simplex = new SimplexNoise();
  palette = palettesList[floor(random(Object.keys(palettesList).length))];
  noStroke();
  // background(255);
  ml5.setBackend("webgl");

  // Listing all your computer's available media devices
  navigator.mediaDevices.enumerateDevices().then(gotDevices);

  // Create a constraints object and specify your webcam's deviceId, groupId, kind, and label
  // (these will be printed to the console by the gotDevices function)
  let webcamInfo = {
    video: {
      deviceId:
        "81986da6f580bdf305cb87f6a276115e084e99c333d938fd64c9bcac4577735e",
      groupId:
        "2defdc8c883929d6a70b71a9c5df6f5cd2e60555fb9e09aad9cdea3e6803a437",
      kind: "videoinput",
      label: "HD Pro Webcam C920 (046d:08e5)",
    },
  };

  // Set up your webcam using createCapture, with the constraints set above
  webcam = createCapture(VIDEO, webcamInfo);
  webcam.size(480, 480);
  webcam.hide();

  bodyPose.detectStart(webcam, gotPoses);
  connections = bodyPose.getSkeleton();

  let nnOptions = {
    task: "regression", // a regression model so that when the arm is between positions the model will make estimates
    //learningRate: 0.01,
    //debug: true,
  };
  neuralNet = ml5.neuralNetwork(nnOptions);
  //neuralNet.loadData("2025-4-24_17-52-59.json", dataLoaded);

  const modelInfo = {
    // Set the info associated with the model
    model: "model.json",
    metadata: "model_meta.json",
    weights: "model.weights.bin",
  };
  neuralNet.load(modelInfo, modelReady); // Load in the custom model
}

// Changing the sketch from the collection state to prediction once the model is ready
function modelReady() {
  console.log("Model Ready!");
  state = "prediction";
}

// Load in the data instead of retraining each time
function dataLoaded() {
  console.log("Data Loaded!");
}

// Print a list of all available videoinput devices to the console
function gotDevices(deviceInfo) {
  for (let i = 0; i < deviceInfo.length; i++) {
    let currentDevice = deviceInfo[i]; // Get the current device info as a JavaScript object
    if (currentDevice.kind == "videoinput") {
      // Check if the current device type is "videoinput", i.e. a webcam
      console.log(currentDevice); // If so, print its info to the console
    }
  }
}
// Key functions to train the BodyPose model on how each arm position applies to each value
function keyPressed() {
  switch (key) {
    case "w":
      leftVal = 10;
      rightVal = 10;
      console.log(key);
      break;
    case "q":
      leftVal = 10;
      rightVal = 10;
      console.log(key);
      break;
    case "1":
      leftVal = 100;
      rightVal = 10;
      console.log(key);
      break;
    case "2":
      leftVal = 200;
      rightVal = 10;
      console.log(key);
      break;
    case "3":
      leftVal = 300;
      rightVal = 10;
      console.log(key);
      break;
    case "4":
      leftVal = 400;
      rightVal = 10;
      console.log(key);
      break;
    case "5":
      leftVal = 500;
      rightVal = 10;
      console.log(key);
      break;
    case "6":
      leftVal = 10;
      rightVal = 100;
      console.log(key);
      break;
    case "7":
      leftVal = 10;
      rightVal = 200;
      console.log(key);
      break;
    case "8":
      leftVal = 10;
      rightVal = 300;
      console.log(key);
      break;
    case "9":
      leftVal = 10;
      rightVal = 400;
      console.log(key);
      break;
    case "0":
      leftVal = 10;
      rightVal = 500;
      console.log(key);
      break;
    case "t":
      state = "training";
      neuralNet.normalizeData();
      let trainingOptions = {
        epochs: 50,
      };
      neuralNet.train(trainingOptions, whileTraining, finishedTraining);
      break;
    case "d":
      neuralNet.saveData();
      break;
    case "s":
      neuralNet.save();
      break;
  }
}
// Log progress while the regression model is training
function whileTraining(epoch, loss) {
  console.log(epoch);
}

// When training is finished, switch the sketch from collecting data to predicting from the data it has been trained on
function finishedTraining() {
  console.log("Finished Training");
  state = "prediction";
}

function gotResults(nnResults, err) {
  if (err) {
    console.error(err);
    return;
  }
  d = nnResults[0].value;
  f = nnResults[1].value;
  console.log(nnResults);
}

function gotPoses(results) {
  // Save the output to the poses variable
  poses = results;
  //console.log(results);
}
// Initially drew the BodyPose points, but now just used to draw the waves visuals when a body is detected
function draw() {
  //background(255);
  /*
  push(); // Draw the flipped webcam image to the canvas
  translate(width, 0);
  scale(-1, 1);
  image(webcam, 0, 0);
  pop();
  */
  noStroke();
  // Taking the required arm keypoints from BodyPose, used initially for training and now to track the user's arm keypoints for the prediction model
  if (poses.length > 0) {
    let inputs = [];
    for (let i = 0; i < poses.length; i++) {
      let pose = poses[i];
      let leftShoulderX = pose.keypoints[5].x;
      let leftShoulderY = pose.keypoints[5].y;
      inputs.push(leftShoulderX);
      inputs.push(leftShoulderY);
      //circle(leftShoulderX, leftShoulderY, 10);
      let leftElbowX = pose.keypoints[7].x;
      let leftElbowY = pose.keypoints[7].y;
      inputs.push(leftElbowX);
      inputs.push(leftElbowY);
      //circle(leftElbowX, leftElbowY, 10);
      let leftWristX = pose.keypoints[9].x;
      let leftWristY = pose.keypoints[9].y;
      inputs.push(leftWristX);
      inputs.push(leftWristY);
      //circle(leftWristX, leftWristY, 10);
      let rightShoulderX = pose.keypoints[6].x;
      let rightShoulderY = pose.keypoints[6].y;
      inputs.push(rightShoulderX);
      inputs.push(rightShoulderY);
      //circle(rightShoulderX, rightShoulderY, 10);
      let rightElbowX = pose.keypoints[8].x;
      let rightElbowY = pose.keypoints[8].y;
      inputs.push(rightElbowX);
      inputs.push(rightElbowY);
      //circle(rightElbowX, rightElbowY, 10);
      let rightWristX = pose.keypoints[10].x;
      let rightWristY = pose.keypoints[10].y;
      inputs.push(rightWristX);
      inputs.push(rightWristY);
      //circle(rightWristX, rightWristY, 10);
    }
    if (state == "collection") {
      if (mouseIsPressed) {
        let outputs = [rightVal, leftVal];
        neuralNet.addData(inputs, outputs);
        console.log(inputs);
        console.log(outputs);
      }
    } else if (state == "prediction") {
      neuralNet.predict(inputs, gotResults);
      fill(f);
      //circle(width / 2, height / 2, d);
      text("D: " + d + ", F: " + f, width / 2, height);
      console.log(d, f);
      randomSeed();
      let c = shuffle(palette);
      background(c[0]);
      let yStep = height / waveCount;
      for (let y = 0; y <= height; y += yStep) {
        push();
        translate(0, y);
        c = shuffle(palette);
        let gradient = drawingContext.createLinearGradient(0, height / 2, width, height / 2);
        gradient.addColorStop(0, c[0]);
        drawingContext.fillStyle = gradient;
        beginShape();
        for (let x = 0; x <= width; x += xStep) {
          let noise = simplex.noise3D(x * xFreq, y * yFreq, frameCount * d / 2) * f / 2;
          vertex(x, noise / 2);
        }
        vertex(width, height);
        vertex(0, height);
        endShape(CLOSE);
        pop();
        // A frame around the waves; its fill is still tied to the d value so that I can see if it's picking up a body from the webcam
        rect(0, 0, width, width * 0.05);
        rect(0, height - 20, width, 20);
        rect(0, 0, 20, height);
        rect(width - 20, 0, 20, height - 20);
      }
    }
  }
}
Reflection :
Throughout this project I have been unsure: unsure of my ideas, of what I want to make, unsure of my abilities with code. That outlook is something I think has been creeping up on me throughout second and third year, along with the idea that I'm not the 'coding type'. But through the exposure to it within every project, and more specifically this one, I've come to the realisation that maybe my colleagues have picked it up quicker, but I shouldn't let that deter me from continuing with my own learning and tinkering.
Within this project I might not have made my best work, but I have gained a better understanding of myself and my abilities with code and p5.js. Even though I might not use it for OpenShare or for the majority of my work in fourth year, my goal is to keep practising it throughout my life, the way others practise languages.
If I had more time with this project I'd have explored this rediscovery of code and the functionality of p5.js further, and maybe got the output to a level where the visuals were more appealing and the interactions more focused on the situations discussed at the review: making it an ambient piece that comes to life as people walk past it, gently inviting them to play.
