• Sense & Sensibility

    Inspiration :

James Bridle

    Autonomous Trap 001 (2017)

    https://jamesbridle.com/works/autonomous-trap-001

With James Bridle’s work there were a number of things that interested me: the documentary nature and the real-world implementations of his experiments. But the main takeaway was the humour and lightness of his work, even though his experiments and discoveries have very serious implications for the world we live in.

    Tom White :

    “This is not a …” collection

    https://chauvetarts.com/artist/tom-white

I’ve been a fan of Tom White’s work since I came across it in my research for DH&T in second year, and the same things I enjoyed then are what I enjoy now: his humour and his working relationship with AI. With this project I wanted to carry that relationship and humour through my own work.

    Philipp Schmitt :

    Declassifier

    https://humans-of.ai/?img=8#15745,3139,3586

    The Chair Project ( Four Classics) :

    https://philippschmitt.com/

From Philipp Schmitt’s work, these two are the ones that spoke to me most, again for the same reasons I liked Tom White’s work: this working relationship with machine learning and AI, using it as a tool or a collaborator in the creation of the art. As well as the sense of humour, I also like the choice not to ‘correct’ the work, leaving the digital fingerprints on it, be they flaws or imperfections.

    Initial Ideas:

    Bringing Depth :

    Option One :

Option one would be a drawing machine allowing the user to use their tracked hands to draw with shapes and colours of their choosing on a blank canvas.

    Option Two :

Option two would be a generative art piece whose elements could be controlled by the user in front of the screen using their body and limbs.

    Generative Art Inspiration :

With these examples I’d like to emulate this level of detail and the use of a cohesive colour scheme.

As with my control project, I think allowing people to work within a selected colour scheme, or a few colour schemes, will keep the aesthetic interesting and appealing.

Ideally I’d use a regression model for this so that there can be movement and control between the different positions of the user’s hands.

I imagine the space it would be used in as a hemisphere in front of the camera, with trained points, where anywhere between those points would be estimated by the model.

    Moving Forward :

I think I’ll go with option two, for two reasons. Firstly, I feel that option one is very similar to my control project from the previous year, and I don’t want to tread the same ground or have others comparing the two.

The other is that option two came to me more organically; I had a personal enthusiasm for it and I want to trust my gut more on this one. Taking the note from Cat, I believe that option two has more depth.

    What I need to do to make it a reality :

• Train the skeleton model to recognise the positions of the user’s hands.
    • Create the generative art model that has variables that can be tied to the values of the skeleton model.
    • Combine the model and the sketch.
    • Set up webcam and test again, retrain if required.

    BodyPose :

    https://docs.ml5js.org/#/reference/bodypose

I think the best option for user interaction would be BodyPose detection, but thinking about it, the hemisphere idea would be too inconsistent, with the hands pointing straight down the barrel of the camera for ml5 to read.

Tested above, the snow angel idea shows the strongest areas within the space I had, and with a screen in portrait and a webcam set up at the same distance it should give the same results.

    Training with BodyPose:

With the ml5 BodyPose library, and with help from Cat, we trained the regression/prediction model on my arm poses so that when people move their hands in the view of the camera, they control the predetermined values within the p5 sketch.
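Distilled from the full sketch below, the core of that collect, train, predict workflow looks roughly like this (a minimal sketch mirroring the ml5 calls used in the final code, with the key handling reduced to a single training key):

    let bodyPose;
    let poses = [];
    let neuralNet;
    let state = "collection";
    let leftVal = 10; // training labels, set by the number keys in the full sketch
    let rightVal = 10;

    function preload() {
      bodyPose = ml5.bodyPose({ flipped: true });
    }

    function setup() {
      createCanvas(480, 480);
      let webcam = createCapture(VIDEO);
      webcam.size(480, 480);
      webcam.hide();
      bodyPose.detectStart(webcam, (results) => (poses = results));
      neuralNet = ml5.neuralNetwork({ task: "regression" });
    }

    function draw() {
      if (poses.length === 0) return;
      // Flatten the six arm keypoints (shoulders, elbows, wrists) into one input array.
      let inputs = [];
      for (let i of [5, 7, 9, 6, 8, 10]) {
        inputs.push(poses[0].keypoints[i].x, poses[0].keypoints[i].y);
      }
      if (state === "collection" && mouseIsPressed) {
        // Hold an arm pose and click to record examples against the current labels.
        neuralNet.addData(inputs, [rightVal, leftVal]);
      } else if (state === "prediction") {
        // The two predicted values are what drive the wave variables d and f below.
        neuralNet.predict(inputs, (results) => {
          console.log(results[0].value, results[1].value);
        });
      }
    }

    function keyPressed() {
      if (key === "t") {
        neuralNet.normalizeData();
        neuralNet.train({ epochs: 50 }, () => {}, () => (state = "prediction"));
      }
    }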

    Visuals Inspiration:

    https://openprocessing.org/sketch/492429

    https://openprocessing.org/sketch/1615214

For these OpenProcessing sketches I loved their painterly quality, as well as their use of colour and movement, finding the line between the traditional style and the digital.

    Tutorial:

With the visuals I had no clue where to start, but I knew what I liked from OpenProcessing, and when I found a tutorial inspired by the same things I was, I thought it could be a good starting point. By doing it step by step I’d have a better understanding of what I’m doing, how I can change it, and how I can tinker.

    https://openprocessing.org/sketch/1776463/

    Tinkering :

Above are some screenshots of my tinkering and messing around with the tutorial code I followed along with. I need to remove my preciousness around working with code and continue tinkering in my own time.

    Final Output:

    The code :

// BodyPose Webcam Waves assembled by Nicholas McLaughlin.
// BodyPose training code by Dr Cat Weir.
// Waves art code originally by Takawo on OpenProcessing, original work 220717a by Takawo -- https://openprocessing.org/sketch/1615214 --.
    
//Thank you to Cat, none of this would be possible or work without you!
    
    
/* This code is for p5.js and uses the ml5.js BodyPose library, isolating the arms of the user's skeleton and then using the values from the shoulders, elbows and wrists on each arm to control the amplitude and velocity of the modified Waves art sketch. */
    
    
    
    let palettesList = {
      0: ["#EFBDEB", "#B68CB8", "#6461A0", "#314CB6", "#0A81D1"],
      1: ["#2E1F27", "#854D27", "#DD7230", "#F4C95D", "#E7E393"],
      2: ["#BCC4DB", "#C0A9B0", "#7880B5", "#6987C9", "#6BBAEC"],
      3: ["#FFD289", "#FACC6B", "#FFD131", "#F5B82E", "#F4AC32"],
      4: ["#3A4F41", "#B9314F", "#D5A18E", "#DEC3BE", "#E1DEE3"],
      5: ["#CCFBFE", "#CDD6DD", "#CDCACC", "#CDACA1", "#CD8987"],
      6: ["#000000", "#CF5C36", "#EEE5E9", "#7C7C7C", "#EFC88B"],
      7: ["#E85D75", "#C76D7E", "#9F8082", "#8D918B", "#AD9B9A"],
      8: ["#EE6C4D", "#F38D68", "#EAF9FB", "#F662C9", "#17A398"],
      9: ["#000000", "#f2f2ed", "#000000", "#f2f2ed", "#000000"],
    };
    
    //------------- Waves Variables
    
    let simplex;
    let palette;
    
    let xStep = 90;
    let xFreq = 0.009;
    let yFreq = 0.005;
    let amplitude = 100;
    let velocity = 0.001;
    let waveCount = 40;
    
    //------------- BodyPose Variables-----------------//
    
    let webcam; // Variable to hold your webcam video
    
    let bodyPose; // Variables to hold the body tracking values
    let poses = [];
    let connections;
    
    let neuralNet; // Variables to hold the learning network
    let state = "collection"; // A collection state to train for the values of the arm positions
    
    let leftVal = -1;
    let rightVal = -1;
    
    let d = 10;
    let f = 10;
    
    //---------------- Load the bodyPose model----------//
    function preload() {
      let options = {
        flipped: true,
      };
    
      bodyPose = ml5.bodyPose(options);
    }
    
// Set up the canvas, the webcam, the bodyPose detection, and the neural network
    function setup() {
      frameRate(10);
      createCanvas(windowWidth, windowHeight);
      simplex = new SimplexNoise();
      palette = palettesList[floor(random(Object.keys(palettesList).length))];
      noStroke();
    
      // background(255);
      ml5.setBackend("webgl");
    
      //listing all your computer's available media devices
      navigator.mediaDevices.enumerateDevices().then(gotDevices);
    
      // Create a constraints object and specify your webcam's deviceId, groupId, kind, and label (this will be printed to the console by the gotDevices function)
    
  let webcamInfo = {
        video: {
          deviceId:
            "81986da6f580bdf305cb87f6a276115e084e99c333d938fd64c9bcac4577735e",
          groupId:
            "2defdc8c883929d6a70b71a9c5df6f5cd2e60555fb9e09aad9cdea3e6803a437",
          kind: "videoinput",
          label: "HD Pro Webcam C920 (046d:08e5)",
        },
      };
    
      // Setup your webcam using createCapture; using the constraints set above
      webcam = createCapture(VIDEO, webcamInfo);
      webcam.size(480, 480);
      webcam.hide();
    
      bodyPose.detectStart(webcam, gotPoses);
      connections = bodyPose.getSkeleton();
    
      let nnOptions = {
        task: "regression", // a regression model so that when the arm is between postions the model will take estimates.
        //learningRate: 0.01,
        //debug: true,
      };
    
      neuralNet = ml5.neuralNetwork(nnOptions);
      //neuralNet.loadData("2025-4-24_17-52-59.json", dataLoaded);
    
      const modelInfo = {
        // Set the info associated with the model
        model: "model.json",
        metadata: "model_meta.json",
        weights: "model.weights.bin",
      };
    
      neuralNet.load(modelInfo, modelReady); // Load in the custom model
    }
    
//changing from the collection state to prediction when the loaded model is ready.
    function modelReady() {
      console.log("Model Ready!");
      state = "prediction";
    }
    
    //load in the data instead of retraining each time.
    function dataLoaded() {
      console.log("Data Loaded!");
    }
    
// Print a list of all available videoinput devices to the console
    function gotDevices(deviceInfo) {
      for (let i = 0; i < deviceInfo.length; i++) {
        let currentDevice = deviceInfo[i]; // Get the current device info as a JavaScript object
    
        if (currentDevice.kind == "videoinput") {
          // Check if the current device type is "videoinput", i.e. a webcam
    
          console.log(currentDevice); // If so, print its info to the console
        }
      }
    }
    
    //key functions to then train the body pose model on how each arm position applies to each value.
    function keyPressed() {
      switch (key) {
        case "w":
          leftVal = 10;
          rightVal = 10;
          console.log(key);
          break;
    
        case "q":
          leftVal = 10;
          rightVal = 10;
          console.log(key);
          break;
    
        case "1":
          leftVal = 100;
          rightVal = 10;
          console.log(key);
          break;
    
        case "2":
          leftVal = 200;
          rightVal = 10;
          console.log(key);
          break;
    
        case "3":
          leftVal = 300;
          rightVal = 10;
          console.log(key);
          break;
    
        case "4":
          leftVal = 400;
          rightVal = 10;
          console.log(key);
          break;
    
        case "5":
          leftVal = 500;
          rightVal = 10;
          console.log(key);
          break;
    
        case "6":
          leftVal = 10;
          rightVal = 100;
          console.log(key);
          break;
    
        case "7":
          leftVal = 10;
          rightVal = 200;
          console.log(key);
          break;
    
        case "8":
          leftVal = 10;
          rightVal = 300;
          console.log(key);
          break;
    
        case "9":
          leftVal = 10;
          rightVal = 400;
          console.log(key);
          break;
    
        case "0":
          leftVal = 10;
          rightVal = 500;
          console.log(key);
          break;
    
        case "t":
          state = "training";
          neuralNet.normalizeData();
          let trainingOptions = {
            epochs: 50,
          };
          neuralNet.train(trainingOptions, whileTraining, finishedTraining);
          break;
    
        case "d":
          neuralNet.saveData();
          break;
    
        case "s":
          neuralNet.save();
          break;
      }
    }
    
    // a training state for the regression model.
    function whileTraining(epoch, loss) {
      console.log(epoch);
    }
    
// a state for when the training is finished, switching from collecting data to making predictions based on the information the model has been trained on.
    function finishedTraining() {
      console.log("Finished Training");
      state = "prediction";
    }
    
    
    function gotResults(nnResults, err) {
      if (err) {
        console.error(err);
        return;
      }
    
      d = nnResults[0].value;
      f = nnResults[1].value;
      console.log(nnResults);
    }
    
    function gotPoses(results) {
      // Save the output to the poses variable
      poses = results;
      //console.log(results);
    }
    
    //Initially drawing the bodyPose points but now just used to draw the waves visuals when a body is detected.
    function draw() {
      //background(255);
    
      /*
      push(); // Draw the flipped webcam image to the canvas
      translate(width, 0);
      scale(-1, 1);
      image(webcam, 0, 0);
      pop();
      */
    
      noStroke();
      
  //taking the required arm keypoints from BodyPose to train initially, but now used to track the user's arm keypoints for the prediction model
    
      if (poses.length > 0) {
        let inputs = [];
    
        for (let i = 0; i < poses.length; i++) {
          let pose = poses[i];
    
          let leftShoulderX = pose.keypoints[5].x;
          let leftShoulderY = pose.keypoints[5].y;
          inputs.push(leftShoulderX);
          inputs.push(leftShoulderY);
          //circle(leftShoulderX, leftShoulderY, 10);
    
          let leftElbowX = pose.keypoints[7].x;
          let leftElbowY = pose.keypoints[7].y;
          inputs.push(leftElbowX);
          inputs.push(leftElbowY);
          //circle(leftElbowX, leftElbowY, 10);
    
          let leftWristX = pose.keypoints[9].x;
          let leftWristY = pose.keypoints[9].y;
          inputs.push(leftWristX);
          inputs.push(leftWristY);
          //circle(leftWristX, leftWristY, 10);
    
          let rightShoulderX = pose.keypoints[6].x;
          let rightShoulderY = pose.keypoints[6].y;
          inputs.push(rightShoulderX);
          inputs.push(rightShoulderY);
          //circle(rightShoulderX, rightShoulderY, 10);
    
          let rightElbowX = pose.keypoints[8].x;
          let rightElbowY = pose.keypoints[8].y;
          inputs.push(rightElbowX);
          inputs.push(rightElbowY);
          //circle(rightElbowX, rightElbowY, 10);
    
          let rightWristX = pose.keypoints[10].x;
          let rightWristY = pose.keypoints[10].y;
          inputs.push(rightWristX);
          inputs.push(rightWristY);
          //circle(rightWristX, rightWristY, 10);
        }
    
        if (state == "collection") {
          if (mouseIsPressed) {
            let outputs = [rightVal, leftVal];
            neuralNet.addData(inputs, outputs);
            console.log(inputs);
            console.log(outputs);
          }
        } else if (state == "prediction") {
          neuralNet.predict(inputs, gotResults);
          fill(f);
          //circle(width / 2, height / 2, d);
          text("D: " + d + ", F:" + f, width / 2, height);
          console.log(d, f);
    
          randomSeed();
    
          let c = shuffle(palette);
          background(c[0]);
    
          let yStep = height / waveCount;
    
          for (let y = 0; y <= height; y += yStep) {
            push();
            translate(0, y);
            c = shuffle(palette);
    
            let gradient = drawingContext.createLinearGradient(0, height / 2, width, height / 2);
            gradient.addColorStop(0, c[0]);
            drawingContext.fillStyle = gradient;
    
            beginShape();
    
            for (let x = 0; x <= width; x += xStep) {
              let noise = simplex.noise3D(x * xFreq, y * yFreq, frameCount * d/2) * f/2;
              vertex(x, noise / 2);
            }
            vertex(width, height);
            vertex(0, height);
            endShape(CLOSE);
            pop();
    
            //a frame around the waves; the fill is still tied to the d value so that I can see if it's picking up a body from the webcam.
            rect(0, 0, width, width * 0.05);
            rect(0, height - 20, width, 20);
    
            rect(0, 0, 20, height);
            rect(width - 20, 0, 20, height - 20);
          }
        }
      }
    }
    
    

    Reflection :

Throughout this project I have been unsure: unsure of my ideas, of what I want to make, unsure of my abilities with code. That outlook is something I think has been creeping up on me throughout second and third year, along with the idea that I’m not the ‘coding type’. But through the exposure to it within every project, and more specifically this one, I’ve come to the realisation that maybe my colleagues have picked it up quicker, but that shouldn’t deter me from continuing with my own learning and tinkering.

Within this project I might not have made my best work, but I have gained a better understanding of myself and my abilities with code and p5.js, and even though I might not use it for OpenShare or for the majority of my work in fourth year, my goal is to practice it throughout my life the way others practice languages.

If I had more time with this project I’d have explored this rediscovery of code and the functionality of p5.js further, and maybe got the output to a level where the visuals were more appealing and the interactions more focused on the situations discussed at the review, making it an ambient piece that comes to life as people walk past it, gently inviting them to play.

  • Design Domain Part Two

    Return to Form

    Chat with Paul:

    What I’m Thinking :

I like the idea of having the 3 sculptures from part one and giving them 3 different outputs that lend themselves to each sculpture, whether VR, 3D printing, AxiDraw, screen-based, or even an audio output.

Each piece has its own strengths, weaknesses, and considerations:

    Form One :

For this form I’m unsure, as it’s my favourite but I don’t think it has an obvious output to pair itself with.

    Form Two :

For this form I think 3D printing would be the best fit, as it has the most sound structure.

    Form Three :

As this form has the most small, stringy parts, a digital output such as screen-based or VR/AR would be the best fit.

    Display Ideas :

As discussed with Paul, the three different forms can be a collection without a relationship to each other; the only relationship I want them to have is to the original work, while still seeming to be part of the same family.

    “Design is about the outcome; art is about the process.”

    Ok, let’s work with what’s happening first in my head.

    Form Three:

With form three, my first and strongest instinct was to have it in VR, allowing the sculpture to be shown without being restrained by scale or gravity.

    Form Two:

    Chat with Gillian:

Again, I went to Gillian for ideas when I hit a wall. After explaining that I had plans for two of the three sculptures (VR and 3D printed) and nothing else, she suggested that the logical next step would be to have high-quality prints of the third form made, which seems obvious when pointed out; that’s when you know it’s a good idea.

    Problems Resolving itself :

So I planned to bring Form Three from Blender and put it into a Unity VR scene. But as soon as I put Form Three into Unity it started to struggle, and even as I tried to brute force it into a VR scene the HMD would not run it. After replacing it with Form One, the sculpture I had no idea what to do with, the scene ran almost perfectly.

After Lucia had tested it, she had a great suggestion of showing two at different sizes, and after I implemented that, along with the lighting on each sculpture and some ambient music in the background, the scene was where I wanted it to be.

    Trying to print a second Form :

So, the models I’ve made are not viable for 3D printing, and frankly I’m surprised that my computer only crashed at one point and didn’t burst into flames.

Ok, so my slicer does work; it’s just my models that are the problem.

I downloaded a tape cutter thing from Printables to make sure it was not a problem with my software, and that sliced in seconds.

    At this moment:

At this moment of reading week, I have been using the time to explore and solve the technological problems facing my Design Domain outputs.

    For Form One I have it ready to go on the VR headset.

    For Form Three I am currently rendering out high quality images of it to then get them sent to the printers at either the Reid or the Stow.

And Form Two in its current state is not viable for 3D printing, so I can either continue trying to get it to work, try to model it again with more of a focus on 3D printing and less madness, or try to think of a different output that Form Two would lend itself to.

    Rendering out Form Three :

With a focus on the composition of the render and allowing Form Three breathing room, I think three well-rendered outputs that show the shape and detail of the sculpture would be good.

I have tried to keep the overall image as close as I can to the original output, so as not to just slap some colour onto it. I’ve changed the material to a cool grey/off-white, built a background scene for the sculpture, and included a simple soft spotlight, similar to the way Form One is displayed in the VR scene.

    The next step when these renders are finished is to have them printed by experts at GSA.

    Display Idea:

When having the Form Three final renders printed, also have HQ images of each piece of work that each form came from, to show a clear connection.

Also include a plane with the Form One image within the VR space, so that even in the VR scene people can still compare it to the previous work.

    Where I’m At :

Well, first things first: I’m an idiot. The plan was initially to render out the images of Form Three and send them away to the printer with plenty of time, and be more than ready for Friday’s open studio. But between the rendering finishing and the time I booked the printing consultation, all the times were taken and the only available slot was 3:15 pm. So the printing will be late.

    Learning AR :

So, with the VR sorted and the renders sent away to printing, I started working on the AR. With a bit of messing around and trial and error I got it working with a simple Unity scene and a trigger, but again there was a problem.

I got the AR scene working on the Samsung tablet and reading the trigger, but as soon as I implemented my model into the scene the application crashed.

I’ll swap the model out for a less strenuous one and make sure everything is working correctly; again, I might just need to make a model that allows it. Ideally I could use the model that I made through the process in DD part one, but if it won’t work at all I might need to use smoke and mirrors.

    Writing for Design Domain :

    Form One : 

    Form One started in Creative Coding 01 using processing in the first year of my time at GSA. It was one of the first instances where I had seen the power and creative freedom of what we can do here at Interaction Design. 

    Please use the VR headset to explore the first of these digital sculptures.

    Form Two : 

    Form Two is taken from my Realtime Events scene using Unity from my second year of IxD. It was a creative outlet when I needed it, but it also taught me that we are only limited by our imaginations in the sphere of creative technology.

    Please use the tablet camera to explore the second digital sculpture.

    Form Three : 

    Form Three began in Imagined Environments using Adobe After Effects in my third year. It also reminded me that the problems and troubleshooting we tackle here at GSA and IxD are not problems or roadblocks but opportunities to find new and creative workflows, processes, and outputs. 

    Please enjoy these renders of the third sculpture. 

    Set Up Trial :

With the vitrine reserved for my output, I thought it would be a good chance to plan out the layout and get everything ready, so that all I have to do is drop everything in when it’s ready.

I’m planning it in chronological order so that when viewers come up to the display they instantly get the order of the work, at least going by Western reading conventions.

    Back to AR :

    Banana Man Test :

The trigger for the AR is working and is contained within the application. Even though we didn’t go through it in the workshops, I wanted to figure out this approach so that the AR isn’t relying on a shaky internet connection to work.

It worked well, if a little shaky, with the banana man model as a placeholder.

    Form Two Test:

With everything else ready, I had time to continue working on the Form Two AR and getting it to work as well as possible.

With the trigger image of my Unity image working with the placeholder banana man model, I put the model of Form Two into the scene. It ran and seemed to work at a certain distance, but getting close enough tanked the frame rate and would crash the scene.

I need to try it with a smaller model, or just allow it to crash if I can’t get a decimated model of Form Two.

    At this point :

At this point Form One is ready in the VR headset, Form Two is working on the tablet in AR even if it is running very slowly, and Form Three is scheduled to be printed Friday afternoon.

    Thursdays ToDo List :

    • Put VR headset on charge.
    • Put AR tablet on charge.
    • Try and make AR model less taxing on the tablet.
    • Set up lights for box

    Friday ToDo List :

    • Set up VR
    • Set up AR
    • Turn on lights
    • Submit Name Card
    • Collect Prints
    • Cut out Prints
    • Present.

    Getting Ready :

    With everything ready, well almost ready, I had time to consider the presentation of my work more and plan out the interactions as much as I could.

    Gillian Saving me:

Gillian yet again saved me, by getting the models into a state that can work within the tech we have, like the AR scene.

There is some detail lost in the models, which to be honest I expected, because there was no way they would run on a tablet unless I got a brand new iPad Pro or a Galaxy S9. And as it still fits within my theme and practice for this project, I think it means more that the AR actually works, even if detail is lost.

    Morning of :

With the VR headset charged and ready, the AR tablet charged and ready, and the prints due later today, I had more time to work away on the AR, implement the new decimated models that Gillian gave me, and get the AR running even smoother.

    Last minute change up :

During the final day I had the setup finished, or what I thought was finished, until I got the prints of Form Three back and had the thought that after going through all this effort the prints should be displayed. So after a lot of waiting around there was a lot of running around, trying to cut out the prints as perfectly and as quickly as I could.

    Final Output:

    What work was shown in the display:

• The description of the work with a QR code that takes people to this very page; next to each output is a brief description of the work and the project it was based on, along with a prompt for how to interact with each piece.
• The Meta Quest headset with the Unity scene of Form One in it, and the left controller to move around the scene freely.
• The Samsung tablet with the Unity AR APK for Form Two pre-loaded and self-contained, and the printed trigger image.
• The Blender renders of Form Three, printed by the photography printers in the Stow building, cut out and displayed in the vitrine.

    Return to Form :

For Design Domain I focused on two things: my own work, and my practice of always wanting to perfect everything and design a product. Within this project, my goal was to challenge myself to lean into the imperfections of digital mediums, the interesting forms and materials within them that would be difficult to replicate anywhere else. I would never have made them consciously, but by taking my previous work from each year and creating these sculptures by exploring software I’m unfamiliar with, and by not trying to ‘fix’ them, I wanted to shake myself out of my usual way of working and then show off these ‘odd’ pieces with pride and confidence, allowing them the space they deserve to stand alone and be considered as works of digital sculpture.

    Evaluation :

If I could do it over with more money and more time, I would like to have each sculpture shown perfectly in its respective output, give more time and consideration to the space within VR, and maybe give the user more control to explore the sculpture in a space with no restrictions.

I’d have improved the AR experience by having the tablet waiting in a better position, and of course by having the undecimated model of the form in AR, allowing people to explore the detail and odd interior of it.

And for the third, I’d like to plan better in the future to allow myself more time to display the prints well, or even time to think about how the renders themselves could be improved while still allowing for their form and material.

  • Extended Reality (XR)

    Initial Ideas :

With the first idea, shown on the bottom left of the page, I had the idea to meld the digital world with the real world, not through a frame or portal as is often done, but more like the work by MarshmallowLaserFeast – In the Eyes of the Animal.

    https://marshmallowlaserfeast.com/project/in-the-eyes-of-the-animal/

In ‘In the Eyes of the Animal’ the audience sits in a forest before even putting the HMD on, and when it is on their head they see the scene they’re already sitting in, creating the relationship between the real world and the digital. I’d love to have this relationship in my work.

The second idea, at the top of the left page, is an exploration of the idea that VR can give people experiences they would never otherwise be able to have, bringing a sense of wonder to the audience: for example, space travel. But so as not to make this project just a tech demo, and to make it more emotionally engaging, the scene would maybe have an impressionist style, an exaggeration of the real thing.

The third idea, at the bottom of the same page, is another interactive VR experience that uses light, shadows, and objects implied by their shadows. It was inspired by digital objects in an AR space projecting AR shadows into the real world, as well as the use of white space and black paint in the game The Unfinished Swan.

Idea four, noted on the left page, is pretty much as it says there: a VR museum of dead technology and experiences that have either died due to technological advancements and a lack of support, or ideas that were killed by those who had to pay for them, or even all of the above.

The museum would be a place for these 3D animations and effects to be shown in a virtual gallery.

This fifth idea is the concept of using materials not normally related to digital media to trigger a digital response, such as a QR code made out of wood or moss, or an AR target made out of organic materials.

    Testing AR :

    Exploring Commercial AR :

    Pokemon GO in the studio:

Amazon’s product placement AR:

    Acute Art VR app :

    My thoughts at this time :

    At this time in the project, I keep thinking about the fun that can be had with AR and VR, this idea of making the real world magical and lighthearted and beautiful. Using technology to make magic happen.

    Light hearted, joyful, magical, fun.

    Finding joy in artwork:

    Bird (2008)
    Natasha Light (b.1976)
    Ceramic & fishing line
    H 46 x W 20.5 x D 6 cm
    Peace Museum

In Light’s ‘Bird’ the grace and beauty is what draws me in; with such a simple visual style, the grace of a bird’s flight is captured.

    Joie de Vivre (1998)

Mark di Suvero

The joy I found in ‘Joie de Vivre’ (Joy of Living) was the essence of play: the colourful figure in stark contrast with the grey architecture and business people of New York’s lower Manhattan financial district.

Seven Magic Mountains, Land Art in the Nevada Desert, 2016

    Ugo Rondinone

Again, as with ‘Joie de Vivre’, it’s the contrast between the work and the setting it sits in. In ‘Seven Magic Mountains’ the outstretched, harsh Nevada desert is unapologetically interrupted by these colourful standing stones.

    La Joie de vivre [The Joy of Life]

    Max Ernst

    Talk with Neil:

    BlackLight Idea

    Inspiration:

    Unfinished Swan:

With The Unfinished Swan, the art style was a huge inspiration for me on this project, mainly for its visuals but also the way the main mechanic is introduced at the start of the game.

    Shadowmatic:

I love Shadowmatic’s use of shadows to create different shapes; I’d love to work this into my project somewhere.

    Schim:

With Schim I want to carry over its sense of warmth and joy in every situation; you work within the shadows in the game, but at no point is it scary.

Superliminal:

With Superliminal, it’s the sense of joy and wonder across the majority of the game, using the one mechanic of shifting and controlling perspective throughout.

    Understanding Shadows:

As the Unity scene will use shadows as its main interaction, I think it’s good to look at them with a different focus. Obviously we encounter shadows every day, but looking at them through a different lens or in a different context will give me a better idea of how my scene should look.

    Testing Light in Unity :

By turning off the mesh renderer but still allowing the shadows to be cast, I got the effect I wanted. And as the object still has its mesh/box collider, the player can’t just walk through it.

    Building a Scene:

My initial thoughts were to have a maze made completely of shadows, but as you need the shadows to be cast onto something, the white walls of the maze were born.

    Planning Scenes and Interactions:

So, with the basic concept of the torch sorted and the idea of a maze sorted, I needed something for the player to do in the maze, and for these interactions I wanted to use the functionality of VR and actions that are exclusive to VR.

    Setting the Tone:

In the scene I want to spark joy through the unknown and through adventure: yes, there is a little bit of fear, but at no point are you in danger. Through not knowing what is around the corner but being called to find out, I want to create a sense of adventure. Think Indiana Jones or The Mummy in movies, or even the games Portal and Ratchet & Clank. All of these examples have fun, mystery, and danger, but at no point do they take it to horror.

More Primary Research – PowerWash Simulator VR:

Knowing my concept and my functionality, I went back to some personal research to see how other studios have done it. PowerWash Simulator has a simple concept, simple movements, and functionality that is only possible in VR, so the footage above shows me messing around and exploring the functionality of VR again, through the lens of my own project.

    Building the initial scene and proof of concept:

At this point I was having trouble with the VR headset, so to the best of my ability I began modelling my scene. During this process I had the idea that, as the world would be made of shadows, recognisability would be a big factor: if the player doesn’t have an idea of what an object is, there won’t be any spark of joy or fun, because they cannot relate the object to anything in the real world.

    The soundtrack:

After the previous project, where audio was the main focus, I didn’t want it to fall by the wayside again and not get the time it deserved. I didn’t think I would have enough time to do it justice myself, but I had an idea of the tone, so I asked a friend of mine if he would like to make the music for it, and he thankfully accepted. I have lots of very musically talented friends, but I knew Michael was the right man for the job.

Below is his SoundCloud if you want more.

    https://on.soundcloud.com/38jtdNRKCKnNvwds6

    All the prefabs:

As the scene developed and got bigger and bigger, I used more and more prefabs with recognisable silhouettes to fill it. This step had a lot of trial and error; I was very conscious that with such a simple concept and such a minimalist art style there wasn’t really anywhere to ‘hide’.

    Building the scene:

I’ll be honest, at this point my documentation was a bit poor, as it was a lot of quick ideas, quick changes, and quick implementation that didn’t really seem like much at the time. But looking back now, it was the scene becoming the basic interactive maze demo that it is now.

    Play Testing:

Above is footage of my classmate Lucia testing out the scene. After this session of play testing with both Lucia and Mikhail, I finished the final scene and removed the snap turn on the controller.

Notes and fixes from play testing :

• Motion sickness : removing snap turn, reducing walk speed, removing the vignette.
• Eye strain : increasing light temperature, lowering light intensity.
• Lack of ending : adding in a final scene, defining a narrative.

    Planning the End :

Behind the scenes:

Above are stills of the scenes in Unity, using a green wireframe of the prefabs to show the invisible elements.

    Black Light :

    The Story of BlackLight –

In Black Light you find yourself in an odd space, armed only with your wits and your torch.

Step into a world where time stands still, filled with intricate mazes and shadowy corners that feel strangely familiar. As you navigate the labyrinth, you’ll encounter various structures and pathways that invite you to explore. Each area holds its own mysteries, waiting to be uncovered. Embrace the journey as you shine a light on the stories hidden within the shadows.

Where are you? What happened to the king? What happened to the men on the ship? And what’s at the end of the maze? What happened to the heroes?

    Reflection:

If I had more time and more resources, I really feel the concept has legs and could even be taken further into a full-fledged VR game; at this point it is a proof of concept or a demo.

If I were to do anything differently or take it further, I would like to develop my own shader that gives the same effect as The Unfinished Swan, and lean into the reliance on the torch more.

Also, one of the concepts I didn’t implement was inverting the colours and making the white black, but that would need either a post-processing effect on the entire scene or another custom shader.

  • Audio Visual

    Research :

At this point I have no clear plan for this project, but I want to keep it that way for as long as possible. For now, I’m exploring the links Paul provided for some inspiration.

Also in my head at this point is the feedback from my summative assessment: to widen my search for inspiration and to connect a number of different media. I’m not going to force it if it doesn’t happen naturally, but I’ll keep it in mind.

The psychedelic visuals of the classic iTunes visualiser have a nostalgic flair, and a part of me does love them, but it doesn’t feel like it’s 1:1 with whatever music is played through it.

Take away : However the audio is visualised or represented, it’s more appealing when it’s 1:1 and you can see what your ears naturally pick up.
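As a quick illustration of that takeaway, a p5.sound sketch can draw the raw waveform directly, so every wiggle on screen is a wiggle you hear (a minimal sketch; "track.mp3" is a placeholder file name):

    let track;
    let fft;

    function preload() {
      track = loadSound("track.mp3"); // placeholder file name
    }

    function setup() {
      createCanvas(720, 240);
      fft = new p5.FFT(0, 1024); // no smoothing, so the drawing stays 1:1 with the sound
      fft.setInput(track);
    }

    function mousePressed() {
      if (!track.isPlaying()) track.loop(); // browsers require a gesture to start audio
    }

    function draw() {
      background(255);
      stroke(0);
      noFill();
      let wave = fft.waveform(); // 1024 samples between -1 and 1
      beginShape();
      for (let i = 0; i < wave.length; i++) {
        vertex(map(i, 0, wave.length, 0, width), map(wave[i], -1, 1, height, 0));
      }
      endShape();
    }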

    An Optical Poem – Oskar Fischinger

With An Optical Poem the visuals have a clear relationship to the music played, and convey Fischinger’s personal vision of and relationship to colour and music. Even though he was restricted to the media of the times, the emotion and energy conveyed still feel indicative of the music. The problem is that this is one man’s vision of and connection to the music, and it might not resonate with others, who might relate different colours, motions, or shapes to different movements or instruments in the piece.

Take away: An Optical Poem is one man’s personal response to the piece of music, and might not ‘line up’ with others’ ideas.

    Electroplankton – Toshio Iwai

Electroplankton by Toshio Iwai really makes the act of discovering, creating, and visualising music obtainable and accessible. The light-hearted nature of the program removes the intimidation some may experience when interacting with technology or music production. Even if it does become a bit mad at times, it’s very forgiving of mistakes and encourages discovery.

Take away: The interactive nature of Electroplankton allows the user to both create a relationship with the sounds they’ve discovered and see a direct correlation with their actions.

Personal Note : I want something that can be moving to the user and that helps, in one way or another, to create a relationship between the viewer/user and the music they see/hear/create.

    Test Pattern 100m Version at Ruhrtriennale – Ryoji Ikeda

    Idea Sketch – Sound clock :

The sketch above in my notebook is an idea about logging the sound waves of an area or room in the formation of the hours on a clock face, to produce a collection of prints.

    Inspiration – George Crumb, Spiral Galaxy (1972)

Crumb’s spiral sheet music sparked the idea of the music being shown through time, and then the idea of the ambient noise of a location at the corresponding time being logged in the sketch, producing a visualisation of the whole 12 hours.
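A rough sketch of how the sound clock could log a room, assuming p5.sound’s microphone input (sampling once per second here for testing, rather than once per hour):

    let mic;
    let readings = []; // one ambient-level reading per sampled moment

    function setup() {
      createCanvas(600, 600);
      mic = new p5.AudioIn();
      mic.start();
      // Sample the room once per second; the real print would sample far less often.
      setInterval(() => readings.push(mic.getLevel()), 1000);
    }

    function mousePressed() {
      userStartAudio(); // browsers only enable audio input after a user gesture
    }

    function draw() {
      background(255);
      stroke(0);
      translate(width / 2, height / 2);
      // Lay the readings out like marks on a clock face, 60 per revolution.
      for (let i = 0; i < readings.length; i++) {
        let angle = map(i % 60, 0, 60, -HALF_PI, TWO_PI - HALF_PI);
        let len = 50 + readings[i] * 1500; // louder room, longer spoke
        line(cos(angle) * 50, sin(angle) * 50, cos(angle) * len, sin(angle) * len);
      }
    }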

    Idea Sketch – Audio leading Audio :

With this idea, the concept is to have an audio processing sketch driven by an external microphone that picks up the ambient noise of a room and controls whether the music plays or not. Initially the idea was to only play the music when the volume level of the room was below a certain threshold, but I actually think it would be more interesting to invert it and have the music only play when the room is loud enough, adding to the noise.
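A minimal sketch of that inverted version, assuming p5.sound’s microphone input and a placeholder music file:

    let mic;
    let music;
    const threshold = 0.05; // room loudness needed before the music joins in, tuned by ear

    function preload() {
      music = loadSound("music.mp3"); // placeholder file name
    }

    function setup() {
      createCanvas(400, 200);
      mic = new p5.AudioIn();
      mic.start();
    }

    function mousePressed() {
      userStartAudio(); // browsers need a gesture before audio can run
    }

    function draw() {
      background(255);
      let level = mic.getLevel();
      // The music only plays while the room is already loud, adding to the noise.
      // (The music also feeds the mic and keeps itself going, which is part of the point.)
      if (level > threshold && !music.isPlaying()) {
        music.loop();
      } else if (level <= threshold && music.isPlaying()) {
        music.pause();
      }
      fill(0);
      text("room level: " + nf(level, 1, 3), 20, height / 2);
    }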

    Idea sketch – FFT Relations:

    FFT images of two different songs mirroring each other on the same page.

FFT Relationship Development : Instead of two different songs, it’s the same song, but the audio wave on the left side of the image is the regular WAV file and the one on the right is the compressed MP3 file, so that on the print you can see the relationship between the two and what information is lost when audio is compressed, even if you can’t really hear it.

FFT Relationship Development : The visuals could be either the edge-justified frequency lines of the audio or the spectrogram visuals of the two files, allowing them to overlap or reach out but never come into contact with each other.

All three ideas above lend themselves to being displayed in a number of ways, and that can be explored in the development of this idea.
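To test the WAV-versus-MP3 idea, a p5.sound sketch could analyse the same song in both encodings and draw the two spectra mirrored around a centre line (a minimal sketch with placeholder file names; both copies play at once, as this is purely for analysis):

    let wavTrack, mp3Track;
    let fftWav, fftMp3;

    function preload() {
      // Placeholder file names: the same song in both encodings.
      wavTrack = loadSound("song.wav");
      mp3Track = loadSound("song.mp3");
    }

    function setup() {
      createCanvas(800, 400);
      fftWav = new p5.FFT(0.8, 512);
      fftMp3 = new p5.FFT(0.8, 512);
      fftWav.setInput(wavTrack);
      fftMp3.setInput(mp3Track);
    }

    function mousePressed() {
      // Start both copies in sync.
      if (!wavTrack.isPlaying()) {
        wavTrack.loop();
        mp3Track.loop();
      }
    }

    function draw() {
      background(255);
      noStroke();
      let specWav = fftWav.analyze();
      let specMp3 = fftMp3.analyze();
      let rowH = height / specWav.length;
      for (let i = 0; i < specWav.length; i++) {
        let y = map(i, 0, specWav.length, height - rowH, 0); // low frequencies at the bottom
        // WAV grows left of the centre line, MP3 grows right: gaps show what compression lost.
        let wavW = map(specWav[i], 0, 255, 0, width / 2);
        let mp3W = map(specMp3[i], 0, 255, 0, width / 2);
        fill(0);
        rect(width / 2 - wavW, y, wavW, rowH);
        fill(120);
        rect(width / 2, y, mp3W, rowH);
      }
    }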

    Idea Sketch – Three Visuals One Audio :

The first iteration of this idea is similar to the audio-leading-audio idea, but with a video or movie that only plays when there is audio detected, playing on the convention of being quiet when watching a movie. This could be displayed with a projector and a mic.

    Concept Test – Sound Clock :

Inspiration – Alberto Bernal, Impossible music #9

With Bernal’s mushroom cloud of musical notes, I was inspired less by the image itself and more by the density and overlap of the notes used to make up the image.

    Sound Waves + Compression :

    Personal Thoughts :

At this point in the project I am lacking inspiration. I’ve naturally come to the idea of the printed audio visuals, a data visualisation of audio or audio decay, but I can’t shake the feeling that my work for this project could be more exciting or more interesting.

    The Idea : Audible Decay

    One idea or concept that is really interesting to me is the loss of information or loss of audio quality.

How can this lost data be shown or represented to the viewer?

Digitally Visualising Audio :

    Fragile Territories – Robert Henke

    https://roberthenke.com/installations/fragile_territories.html

In Henke’s work, the striking visuals and awe-inspiring scale appealed to me: the ability of Fragile Territories to be both still and energetic, with an almost breaking-out nature.

    Minimal Visual Inspiration :

    Frank Stella :

With Frank Stella’s minimalist work from ’67, I was drawn to the creation of shapes within the lines, as well as the simplicity of the monochromatic colour scheme.

Within my work I was considering monochromatic colouring, so as not to distract from the audio. A visualiser inspired by Stella’s work would react and shift with the audio without being distracting.

Dan Flavin – untitled (to Jan and Ron Greenberg)

In this untitled piece by Dan Flavin from 1973, I love the use of contrast in colour and the use of the space: the wash of colour, with the passageway of blue inviting the viewer in.

In this project, as previously stated, I don’t think I’ll use colour, so as not to distract from the audio, as colour can carry its own connotations within a work.

    Talk with Paul:

    Don’t just deliver.

    After my conversation with Paul this sentiment kept rattling around in my head.

    I think I keep focusing more on delivering a finished piece or product and less on the creative exploration of the project. I’m thinking like a corporate designer and not like an artist.

    Testing Audio :

    Original Audio :

I chose the ‘Pas de Deux’ by Tchaikovsky for a number of reasons. Firstly, it is one of my favourite and most moving pieces of classical music, and I was reminded of it through a number of examples that use classical music, as well as Paul talking about how music “moves people” and how I was moved by this piece.

Secondly, I considered how, with my theme of audio data, I was emotionally moved by a piece of music first performed on 18th December 1892 at the Mariinsky Theatre in St. Petersburg, Russia: this piece of music streamed over the internet from a server in the UK to my phone, which sends those signals through a Bluetooth connection to my headphones and finally to my ears, eliciting a response.

Through my exploration of effects on Pas de Deux, and toying with the effects in Audition, it became less about destroying the music and more about isolating an element that caught my attention and pushing it to its extremes.

    Reverb to extremes :

Delay to Extremes :

⚠️ WARNING: LOUD AUDIO at 0:30 ⚠️

    Different Samples :

Other than a beautiful piece of classical music, I want to use other meaningful audio as well.

Above is the first commercially released digital audio recording on PCM, the jazz album by Steve Marcus and Jiro Inagaki.

I decided not to take this any further, but if this project is revisited it could be.

When looking for meaningful quotes, Mikhail suggested Marshall McLuhan as a known and insightful voice in the world of technology.

    Exploring Effecting the Audio:

At the most violent moment of the audio, the detached voices of Marshall McLuhan have been processed through the distortion effect to push them to extremes.

Above is the spectrogram of the digital router noise. I personally loved the visuals of the digital messages.

For the time-stretched router noise I used the denoise effect at either extreme, synced up as the bed of the audio.

With Pas de Deux, the Studio Reverb in After Effects gave the piece of music an ethereal tone, with the higher notes dragged out to become a constant of the piece.

The screenshots above are the visuals of the archive footage, using effects to make them unrecognisable.

    Final Audio:

    The isolated audio without the visuals of the piece.

    Taking it further:

In my head I envisaged this final piece as a teaser reel for a gallery piece that would be more long-form and on a larger scale.

    Lost Global Village

In my final piece, The Lost Global Village, the three audio elements in the composition are pushed to extremes with effects, and the visuals are archive footage of 3D animation promos from the 90s, with the same treatment applied through effects.

I had a personal goal in this project: to not just deliver a finished product, but to explore.

    Reflection:

    Feedback Notes:

    Personal Reflection:

I came into this project with preconceived notions of using personal projects, producing a music track, and using MIDI controllers, but that felt like what I usually do: lock into an idea.

So, in reaction to this preconceived notion, and after a conversation with Paul, I tried not to deliver a clear finished product, which I think I failed at and should have leaned into more.

If I were doing this again I would listen to Paul and Mikhail’s feedback and allow the work to go to extremes, not getting in the way of the process or subduing it. To use one element and let it loop until it becomes ‘difficult’, and to allow myself and my art to become comfortable with taking up space.

  • Design History & Theory (DH&T)
  • Design Domain Part One :

    Form

    Every Mickey –

    Matthew Plummer-Fernández

    2017
    50.688 x 23.858 x 49.376 cm
    SLS Nylon, acrylic paint, 3D files of Mickey Mouse sourced online

    Venus of Google –

    Matthew Plummer-Fernández

    2013
    17 x 9 x 30 cm
    3D-printed gypsum in colour

    Reality is not always probable –

    Troika

    In Search of an Impossible Object, 2018

    Anthony Dunne & Fiona Raby

    Conversation with dad :

When discussing my project with my dad, he mentioned that in greyhound racing, when talking about a dog’s form, you’re talking about how good the dog has been over its most recent 3-5 races, to make an educated guess on how well the dog will perform.

Applying this to the project, I was considering how I could look at my past 3-5 pieces of work and judge how the next piece of work might turn out.

    My thoughts just now :

    Wed 27th Nov 12:04

So, before I started this project I was very sure of and excited about the initial idea of giving form to intangible things, or senses that we can see but not interact with, but just now, for some reason, the idea isn’t exciting to me.

Usually my projects seem to be clean, clear, deliverable products that answer the questions of the brief in a very literal manner. But thinking back to Design Domain in second year, I used that as an opportunity to challenge my personal process and discover more about myself through it, and with the vague structure of this project I think it is a good time to challenge myself again: to do something I wouldn’t normally do, not just solve a problem and answer a brief, and to be more artistic.

I might be naive, but I think the take on form in the context of greyhounds and their ‘form’ (meaning previous results) is something that might not be done by many other people, and the idea of what that means to me as a creative practitioner does excite me.

    You’re not here to make happy shit. 

    Paul McGuire -2024

    Chat with Sean :

In the studio I was on my usual wander when struggling for ideas and not sure where to go, and was explaining my struggles to Sean in year two. He had a number of great ideas, one being that I should view my years at GSA as my races (years one, two, and currently three) and pick work from one of those years to mangle together.

When I say mangle together, in my head I have a clear idea of what I mean. I don’t intend to have a messy organic mangled mess, but one more like a glitch or Z-fighting in a digital space: a rough, textured, dry mess of my previous work, or aspects of my previous work.

    Thinking it out :

    Z fighting Visuals :

    Glitch Aesthetics :

    Glitch 3D Models :

    Metal Crushed Texture :

    Taking a closer look at the mouse :

    Guest Talk – Tim Rodenbroeker :

    Tim’s website:

One idea I had during Tim’s talk was to try what he did with limiting his technology, doing a version of it with an old MacBook I have at home and exploring the visuals that can be achieved with it. It is a little literal to just do the same thing Tim did, but I still think it would be a fun experiment and could yield fun results, considering my interpretation of form, the visual identity I’ve identified, and now the limitations of technology and how I can create within them.

Another takeaway from Tim’s talk was the idea I was touching upon earlier: that this project is not just a brief to answer but an opportunity to learn and delve deeper.

Studying at GSA is the opportunity to challenge preconceptions, learn about things more deeply, and educate myself before I go out into the world of business and don’t have much time to do that.

    • Self Evaluation
    • Self Discovery
    • Looking into the void

It’s not just to get you ready for the job market.

“Being an outsider is not a good way to make change.”

    • Make change from the inside.
    • Learn about something well enough to change it.

    Great design is more about removing parts, rather than creating or adding anything.

    Tim Rodenbroeker – 2024

    John Gerrard :

    Tutorial with Paul and Cat :

A possible outcome is to take photos of previous work and use those as a different outcome, maybe making them physical.

I like the idea of this, but it might be too similar to Elias’ work making photography a 3D element; it could, though, be one step in the process or one possible outcome.

Another option is taking the same piece of work, remixing it in 3 different ways, and presenting those ways in a clear, concise, thoughtful way.

I do like the idea of doing 3 different methods as an exploration, but I would like to keep the idea of my work being the form. The idea that it is 3 different pieces of work done 3 different ways feels more in line with my interpretation of form.

Digital Decimation.

    The workings of the computer, a certain style that can only be achieved through the use of technology.

    I think a good way of describing the process is as an almost feedback loop on my work, a process that goes from digital to physical and back again to abstract the work from what it originally was.

    Glitching with a purpose.

David Birkin :

    https://www.davidbirkin.net/embedded

    Nikita Diakur :

    Don’t spell it out / Don’t be so literal.

Simplicity at its core:

    • Differ in one aspect.
• Explore yourself

There is beauty in the simple, and power in it.

    Putting it to the test :

I was wondering how I would put my idea of decimation into practice, and when I crumpled a misprint of my hand-drawn example of the process, I realised that I had an example in my hands, and decided to scan and damage each iteration and repeat the process.

    Late Night Stress Ideas :

During the wee hours of the morning I was struggling to sleep, stressing myself out that tomorrow was Monday and that I had no clear direction for my Design Domain project, just vague ideas. But at that moment I was struck with a panic-induced idea of what to do and how to display it.

    As the key themes of my project are :

    • My own work (my form).
    • Challenging my usual practice.
    • Challenge my need for the bells and whistles

    My own work :

My interpretation of form refers to the context of greyhounds and races, where previous work is referred to as their form; applying that to myself, I’m looking at previous work I’ve done and remixing it in some way.

    Challenging my usual practice :

As with Design Domain in year two, I used it as an opportunity to challenge any preconceived notions I had, to challenge my creative process, and to force myself to do or think about things in a different way. So for this Design Domain I identified that for most briefs I produce a literal, product-like outcome, and I wanted to produce work that is more unconventional for me, something more sculptural.

    Challenge my need for the bells and whistles:

After the talk with Tim Rodenbroeker and his message about doing more with less, and recognising that I might go a bit too ‘more’ with it rather than less, I thought it would be a good opportunity to challenge that about myself.

    How to display the work :

    This might need to be taken into Design Domain part two, but an idea I had for displaying the work is repurposing old tech into an outward-reaching sculpture.

    Three pieces of work, all with the same treatment, all displayed on a different piece of technology.

    The form of my work, transformed into different digital forms and displayed on three different forms of technology.

    Have I said form enough ?

    Idea : Booklet

    For my Design Domain proposal I should make a booklet/zine about my proposal in InDesign, giving it some time and effort so people can flick through it properly.

    I’ll use the QR code feature in InDesign to connect the booklet to my learning journal, and I may even have the work published digitally so that people can view it on their devices.

    My form I want to remix:

    Year One :

    Creative Coding 01

    Year Two :

    Data Visualisation

    Year Three :

    Imagined Environments

    How to produce the work :

    I was mentally hitting a bit of a wall and wasn’t sure where to go or how to actually start making the work, as I didn’t want to be too similar to what Elias was talking about. But after a chat with Gillian I understand that we can start in the same place, or with similar workflows, and diverge greatly at some point to produce very different outcomes.

    Idea : Taking Processing sketches, exporting them to SVG files and then using those SVG files in 3D packages to create digital sculptures. Making a note not to be too precious with these sculptures; the digital texture produced is what I’m looking for.

    Idea : Similar to the previous idea, it takes SVG files from Processing, but instead of the sketches themselves I want to import documentation images of previous projects, export them and create 3D sculptures.

    Idea : Documentation of previous projects run through a Processing sketch that warps or glitches it with a purpose, in the same vein as David Birkin, creating a new piece of work from the documentation.

    Idea : A mix of both ideas above.

    All 3 ideas can be taken further into the display idea I’ve had for Design Domain Part Two.

    Still messing around with old computers :

    Music with it ? :

    This computer was released in 2001, the same year Daft Punk released their follow-up album, using technology and an artistic vision to drive the record, similar to our modern practice as creative technologists. It might be a nice idea to tie them together, inviting people to come closer to this odd technological sculpture.

    And personally one of my favourite albums, cover to cover.

    If I go with this idea I think it would be very fun, and the overall visuals of the piece should change to match: this is a fun album and the visuals are quite joyful, and as this is a self-reflective piece (and I’m not the most quiet and downtrodden of people) the visuals should probably follow that.

    Personal Thoughts :

    10:06 pm Tuesday 3rd of December 2024

    The past couple of days I’ve been feeling quite dejected about my possible outcome for this project, with no clear plans or ideas, but as I listen to the Daft Punk album I’m considering using for this project and look back at what I’ve put in this learning journal, there seem to be two conflicting thoughts and two conflicting styles.

    Which I feel mirror myself.

    • A need/want to be serious, meaningful and thought-provoking.
    • A natural colour, joy, optimism and conscious choice to be positive.

    Both rooted in a love and respect for technology, art and design.

    I think this is what Paul is talking about when he says art school is about self-discovery and ‘looking into the void’. When I look into the void, I see two parts of myself: the fun artist and the serious designer.

    I think the difficult part is picking one of them or even harder, trying to have them both coexist in the same place.

    Also it’s funny that the album is called Discovery, as I’m having a journey of self-discovery working on a project about my own work.

    “…like trying to wrestle with the dichotomy of a designer who wants to make impactful work, an artist who wants to make joyful work and attempting to do both without sacrificing anything from either side.”

    Lexie Mackie 2024

    Creating Artifacts :

    After a lot of behind-the-scenes work, the three digital sculptures were produced in a matter of hours, with help from Mikhail and from Lexie showing me how to actually use Blender. By either plan or complete accident, I stuck by the idea I had formed along the way: the sculptures are not the main focus, the process and the personal challenges are.

    Year One:

    Year Two :

    Year Three :

    Design Domain Design Proposal :

    Self Evaluation:

    Part Two :

    Working title :

    Return to Form

    In part two my aim is to develop and build upon the work done with 3D digital sculpture and the idea of old tech and ‘glitching with a purpose’, using old devices and displays to show the 3D sculptures, or even using some of the time in part two to explore Blender more and re-explore the sculptures of part one.

    Another idea for part two was to explore the digital-to-physical workflow, applying the same theory and practice of part one to part two… if the 3D print is to fail then we let it fail, and explore the textures and shapes of that ‘failed’ print.

  • Expressive Data

    Data Visceralisation

    We are not just visualising the data; we are giving people the opportunity to use other senses to experience it, the opportunity to connect with the data on an emotional level.

    My aim is to make one person connect, through data, with another person they have never met and never will. I think this is an important thought to have throughout this project. In its simplest form it is about helping people connect.

    The Data :

    For this project we will be using data given to us by LABDA, focusing on one entry to drive the Arduino.

    Launch Notes :

    Workshops :

    Through the workshops I loved that there is such joy and power in the simplest of things. Simple movements and lights can draw people in or excite them, so by elevating these very simple movements and lights, driven by data, I think/hope that they will still draw people in and excite them.

    Inspiration :

    https://zimoun.net

    One thing that came to mind during the second half of the launch was the work of Zimoun, which I looked into last year for my spatial audio project, and how they use multiple small, very simple parts at a large scale to create an all-encompassing experience.

    I’d like to use this method with my data visualisation: one small, simple sculptural piece that could be taken further into a huge representation of the data.

    Initial sketches :

    Building on the idea :

    Taking the ring light idea further, I envisioned the singular pillar of light being part of a bigger chandelier of hanging pillars of light, each illuminated by a NeoPixel ring driven by the data of one person’s day.

    If they do something at 6am then the corresponding light on the ring will shine down through the pillar.

    The 12 lights of the NeoPixel ring correspond to the hours on a clock face, AM and PM respectively.
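
    As a quick sketch of that mapping (my assumption of how the lookup might work, not final code), a 24-hour value simply wraps around the 12 pixels like hands on a clock face:

    // Hypothetical mapping of a 24-hour value onto a 12-pixel NeoPixel ring
    int pixelForHour(int hour) {
      return hour % 12;  // 06:00 and 18:00 both land on pixel 6, like a clock face
    }

    So an activity logged at 6pm lights the same position as one at 6am, leaving colour to distinguish what kind of activity it was.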

    Components :

    https://www.adafruit.com/product/1643

    https://www.adafruit.com/product/1586

    https://www.adafruit.com/product/1768

    I think for what I’m trying to do the smaller rings would be ideal, and would mean that when put into the chandelier I could use multiple rings without taking up too much space.

    I still need a translucent material:

    • greaseproof paper
    • heavy tracing paper
    • clouded Perspex

    Chat with Cat:

    When I discussed my idea with her, Cat came up with a great way of taking it even further.

    Usman’s research takes data from people around the world, so why not give each country its own chandelier displaying its own data? Each country would hang next to the others, showing the relationships within its own pillars of information as well as its relationship to the rest.

    Testing the materials :

    After getting multiple samples of material I simply tested them out with elastic bands and the NeoPixel to see how far the light would travel down each, the main requirement being light that is bright enough, and travels far enough, down the tube.

    I will say that all I could think of at this point was the NeoPixel lightsabers that you can buy.

    After testing it was obvious that the acetate was the best: it doesn’t weigh much, it’s robust enough, and it carries the light further down the tube, along with giving a cool white cast to the light.

    At this point it was just using the “diary” sample CSV file in the Arduino code, so I’m confident that with all of the lights on, the light will travel further.

    On the topic of how to display the information through the NeoPixel, I still have to test a few different ideas I’ve had :

    1. The whole NeoPixel ring illuminated within the tube, with one light changing colour at a time to indicate at what time the data was documented.
    2. The whole NeoPixel ring illuminated within the tube, with one light changing colour and staying that colour, so that over a 24-hour period the colour of the tube changes as more activities are documented (sketched below).
    3. Only one light illuminated at any one time; when no data is shown, the light is off.
    4. At the start of the day none of the lights are on, but as activities are logged each tube gets lighter and lighter until it resets.
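
    Of these, the second idea is the easiest to sketch. A minimal, hypothetical version (the ring object, pin and function name are my assumptions, not final code) would overwrite one pixel per logged activity and leave it lit, so the tube’s overall colour drifts across the day:

    // Hypothetical sketch of idea 2: the ring stays lit and colours accumulate over the day
    #include <Adafruit_NeoPixel.h>

    Adafruit_NeoPixel ring(12, 6, NEO_RGBW + NEO_KHZ800);  // 12 pixels on pin 6, as in my prototype

    void markActivity(int hour, uint32_t activityColour) {
      ring.setPixelColor(hour % 12, activityColour);  // overwrite only this hour's pixel...
      ring.show();                                    // ...every other pixel keeps its colour
    }

    Idea 4’s reset would then just be a ring.clear() and ring.show() at midnight.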

    Designing the prototype :

    Ordering more components :

    In future projects I’ll plan out all the components I need and order them in one go, as I’ve spent about £9 on delivery from PiHut.

    The CSV file :


    On Thursday, my alarm rang at 8:30 am, but I snoozed until around 9 am. Then I got up and set the breakfast table for me and my partner, and had yogurt with fruits, cereal and almond butter for breakfast. I got dressed, put on some skin care and make up and was ready to leave by 10 am. I took a bus with my partner to the university campus and got to my office at 10:30 am. Between 10:30 am and 12 pm, I replied to some emails, had a call with a senior colleague, and worked on a grant application. At 12:15 pm I headed to the canteen to get lunch, and ate with colleagues in the staff room. We then chatted and bought tickets to see a movie (Dune) the next day. From 2 to 3 pm I attended a seminar, then I worked on the grant application again until 5:30 pm. I took the bus back home and cooked dinner with my partner. During dinner, I video called my family in my home country. After dinner, I planned an upcoming vacation and then watched a movie until 11 pm. After the movie, I folded some laundry, washed myself, put on some skincare and went to bed. Between 11:30 pm and 12 am, I read a book and played a game on the phone with my partner. I fell sleep at 12:15 am.

    After a quick investigation of the data entries, mainly looking at the ones written in English, I found one with multiple timestamps over 24 hours that would show the full potential of my design concept.

    As the data I needed was in a freeform, casual format, I used ChatGPT to break it into the manageable pieces I needed for my CSV file.
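
    The columns were dictated by what the Arduino parser (further down) expects: a timestamp, the diary entry, an activity category, the next timestamp and the milliseconds until it. A hypothetical row, with made-up values (and no commas within the fields themselves, as the code reminds me), might look like:

    time,entry,activity,next,nextMillis
    10:00,Took the bus to the university campus,travel,10:30,1800000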

    Building the tubes :

    As I work over the weekend I was a bit limited in my choices of where to go, and I knew I would need to get the material quickly so that I had enough time to build my prototype.

    I currently work weekends at Dunelm in Clydebank, and during my shift I found a rubbery, malleable material used for floor protection in the home that seemed ideal.

    And with a full stand of Gorilla Glue in the store, I also had my bonding agent.

    It also helps that I get 15% off.

    With the measurements I did during the concept design of the project, I had a general idea of the sizes and structure I needed.

    I will be honest: while making the tubes I often solved problems by stumbling into solutions. Take holding the NeoPixels in place: initially I would have used fishing wire to hold them within the shade, but the material’s rubbery, tough properties held the rings in place securely when glued together.

    As the NeoPixels that came from PiHut didn’t have any wires, they needed to be soldered on. I moaned about it at the time, but I’m quite happy to solder components, especially when it goes well, so each NeoPixel was soldered with higher quality stranded wire, with terminals attached at the ends for prototyping purposes.

    As well as the NeoPixels needing soldered, I had the Adafruit SD card reader shield to solder too, so it was a two-birds-one-stone sort of thing, which then frees up the SD shield that I borrowed.

    The Wiring :

    With the NeoPixels soldered and the breadboard ready, the next step will be sorting out the housing for the prototype’s electronics and, if I have time, soldering the circuit onto something more solid than the breadboard.

    The Code:

    // Expressive Data
    // Nicholas Mclaughlin IxD Y3
    
    /* This Arduino sketch controls a 3-NeoPixel set-up driven by data from a CSV file, assigning colours to the actions and to when those actions took place,
    as outlined within the data in the CSV file "Data" */
    
    /* /\/\/\/\/\ This here code is the product of Dr Catherine M. Weir's brilliance and within it are the messings around by Nicholas McLaughlin /\/\/\/\/\ */
    
    // LIBRARIES ------------------------------------------
    #include <SPI.h>
    #include <SD.h>
    #include <RTC.h>  // Unless you're using the R4, which has its own RTC built in.
    #include <textparser.h>
    #include <Adafruit_NeoPixel.h>
    
    // GLOBAL VARIABLES -----------------------------------
    // SD CARD
    const int chipSelect = 10;  // Chip Select pin is usually 10 unless it's been hacked to read otherwise; if it is a hacked shield then change the pin select int
    
    File csv;
    String fileName = "Data.csv";  // This sketch is using the custom CSV file Data, REMEMBER - No longer than 8 characters, not including .CSV
    
    int numRows = 0;
    int sdIndex = 0;
    
    long startPos = 0;
    long pos = 0;
    
    // RTC --------------------------------------
    //RTC_PCF8523 rtc;  // This RTC (SD shield RTC) is disabled so as not to clash with the RTC included in the R4
    
    long tHour = 0;  // Create two longs to hold the hour and minute when you want the timer to fire
    long tMinute = 0;
    
    // NEOPIXELS
    const int pin1 = 3;  // What pins are running what rings
    const int pin2 = 5;
    const int pin3 = 6;
    const int numPixels = 12;  //Number of pixels per ring
    
    // to run the rings as one connected strip for testing/ demonstration purposes
    /* 
    const int pin = 6;
    const int numPixels = 36;
    
    Adafruit_NeoPixel ring(numPixels, pin, NEO_RGBW + NEO_KHZ800);
    */
    
    Adafruit_NeoPixel ring1(numPixels, pin1, NEO_RGBW + NEO_KHZ800);  // Initialising each NeoPixel ring separately so that they could in theory run separately,
    Adafruit_NeoPixel ring2(numPixels, pin2, NEO_RGBW + NEO_KHZ800);  // but for the prototype they'll be running off the same CSV data.
    Adafruit_NeoPixel ring3(numPixels, pin3, NEO_RGBW + NEO_KHZ800);
    
    // SETUP -----------------------------------------------
    void setup() {
    
      // Serial.begin(9600);
      // while (!Serial) {};  // This is only needed if running from your laptop. REMEMBER - Take it out when the Arduino is stand-alone.
    
      // NEOPIXELS
      ring1.begin();  //calling all 3 rings
      ring2.begin();
      ring3.begin();
    
      ring1.clear();  // turning off all 3 of the rings
      ring2.clear();
      ring3.clear();
    
      ring1.setBrightness(255);  // remember to change this prototyping brightness
      ring2.setBrightness(255);  // 0-255 brightness
      ring3.setBrightness(255);
    
      for (int i = 0; i < numPixels; i++) {
        ring1.setPixelColor(i, ring1.Color(0, 0, 0, 255));
        ring2.setPixelColor(i, ring2.Color(0, 0, 0, 255));
        ring3.setPixelColor(i, ring3.Color(0, 0, 0, 255));
      }
    
      ring1.show();
      ring2.show();
      ring3.show();
    
      // RTC
      //Again this is all removed as it was clashing with the R4 RTC
      /*if (!rtc.begin()) {                                      // Initialise the Real Time Clock
        Serial.println("Couldn't find the Real Time Clock.");  // If the clock does not start, print an error and put the Arduino to 'sleep'
        while (1) {};
      }
    
      Serial.println("Real Time Clock is running.");  // If the clock starts successfully print a confirmation
    
      rtc.adjust(DateTime(F(__DATE__), F(__TIME__)));  // Update the RTC's time from your computer's clock
    */
    
      if (!RTC.begin()) {
        Serial.println("Couldn't start the RTC.");
        while (1) {};
      }
    
      Serial.println("Clock started.");
      RTCTime timeToSet = currentRTCTime();
      RTC.setTime(timeToSet);
    
      // SD CARD
    
      // Checking for the SD card, reading it, checking that the correct file is actually there and then using the data found in the CSV file.
      // REMEMBER - Double check the CSV file is actually correct
      // REMEMBER - No commas in the CSV file!
    
      Serial.println("Starting SD Card...");
    
      if (!SD.begin(chipSelect)) {
        Serial.println("Something's gone wrong! Maybe check your Chip Select Pin and that the SD Card is properly inserted?");
        while (1) {};
      }
    
      Serial.println("SD Card Ready!");
    
      if (!SD.exists(fileName)) {
        Serial.println("File is missing.");
        while (1) {};
      }
    
      Serial.println("File found!");
    
      csv = SD.open(fileName);
    
      startPos = findStartPos();  // Find the position in the file that marks the end of the header line
      Serial.print("Start Pos: ");
      Serial.println(startPos);
    
      while (csv.available()) {
        csv.readStringUntil('\n');
        numRows++;
      }
    
      Serial.print(numRows);
      Serial.println(" rows in file.");
    
      csv.seek(startPos);     // Go back to the start of the file
      pos = findStartTime();  // Find the real time start position - see code on findStartTime tab
      Serial.print("New Start Pos: ");
      Serial.println(pos);
    
      csv.close();
    }
    
    // LOOP ------------------------------------------------
    void loop() {
    
      //DateTime now = rtc.now();  // Get the current time from the RTC
      /* NOTE: It is not a good idea to try and print this to the Serial every time you go through the loop - it will clog your Serial port! */
    
      RTCTime now;
      RTC.getTime(now);
    
      if (now.getHour() == tHour && now.getMinutes() == tMinute) {  // Check if the current time matches the target time
    
        // READ IN THE CSV DATA
        csv = SD.open(fileName);
        csv.seek(pos);
    
        String temp = csv.readStringUntil('\n');
        int l = temp.length() + 1;
        char currentRow[l];
        temp.toCharArray(currentRow, l);
        Serial.println(currentRow);
        pos = csv.position();
    
        csv.close();
    
        TextParser parser(","); //if youve forgot to actully take out the commas
        // CREATE VARIABLES TO HOLD THE DATA YOU WANT TO EXTRACT FROM THE CSV FILE HERE, ENSURING THE DATA TYPE MATCHES WHAT IS IN YOUR CSV FILE
    
        char time[6];
        char entry[193];
        char activity[10];
        char next[6];
        long nextMillis;
    
        parser.parseLine(currentRow, time, entry, activity, next, nextMillis);  // PASS YOUR VARIABLES INTO THE TEXT PARSER IN THE ORDER THEY APPEAR IN YOUR CSV FILE
    
        tHour = findHour(next);      // Update tHour and tMinute to control when data is next read from the SD Card
        tMinute = findMinute(next);  // was findHour(next) - assuming a findMinute() helper exists alongside findHour() in the supporting files
    
        // Print your data to the Serial
        Serial.print("Time ");
        Serial.println(time);
        Serial.print("Entry: ");
        Serial.println(entry);
        Serial.print("Activity: ");
        Serial.println(activity);
        Serial.print("Next: ");
        Serial.println(next);
        Serial.print("Next Millis: ");
        Serial.println(nextMillis);
    
        sdIndex++;
        Serial.print("Index: ");
        Serial.println(sdIndex);
        Serial.println();
    
        if (sdIndex >= numRows) {
          pos = startPos;
          sdIndex = 0;
        }
    
        // NEOPIXELS
        updateNeoPixels(time, activity);
      }
    }
    
    // Any supporting code and files are available on request. 
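
    The updateNeoPixels() helper lives in those supporting files, so below is only my own sketch of what it might look like, assuming the ring1–3 objects from the sketch above, a time field arriving as “HH:MM” and activity strings matching the colour categories described further down; the real helper may well differ.

    // Hypothetical sketch of updateNeoPixels() - the real helper is in the supporting files
    void updateNeoPixels(char *time, char *activity) {
      int hour = atoi(time);  // "18:30" -> 18 (atoi stops reading at the ':')
      int pixel = hour % 12;  // clock-face position; AM and PM share the ring

      uint32_t colour = ring1.Color(0, 0, 0, 255);  // default: natural white for non-logged time
      if (strcmp(activity, "activity") == 0) colour = ring1.Color(0, 255, 0, 0);       // green
      else if (strcmp(activity, "leisure") == 0) colour = ring1.Color(0, 0, 255, 0);   // blue
      else if (strcmp(activity, "travel") == 0) colour = ring1.Color(128, 0, 255, 0);  // purple

      ring1.setPixelColor(pixel, colour);  // the prototype mirrors the same data across all 3 rings
      ring2.setPixelColor(pixel, colour);
      ring3.setPixelColor(pixel, colour);
      ring1.show();
      ring2.show();
      ring3.show();
    }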

    Test Tube :

    At this point I had already put the NeoPixels into the tubes and basically wired everything up, so to test things out I made another tube whose only job was to be tested on, and not ruin the other ones. Through testing I decided that, with the tools at my disposal, I couldn’t get the clean cut I had imagined, so I chose not to cut the prototype tubes.

    I also used this ‘test tube’ to figure out how I would attach them to the ceiling, the best way being fishing wire and a sewing needle to get through the tough rubber, which seemed to close up no matter what I used to cut the holes.

    Testing the Colours :

    Up until this point I had not properly considered the colours, but while coding the Arduino I had the opportunity to give them more thought and elaborate a bit more.

    After testing a few colours I thought it would be interesting to assign different colours to different activities, but have them all harmonise together. As I already had green programmed into the NeoPixels to indicate activity, I moved across the colour wheel to blue to indicate rest, and further round to purple to indicate travel, keeping non-logged time as the NeoPixels’ standard natural white.

    Green – Activity
    Blue – Leisure
    Purple – Travel/Preparation

    Very sophisticated housing :

    As the housing for the Arduino, breadboard and battery ideally won’t be seen by anyone, I didn’t want to spend too much time on it. Initially I had ideas of building a bespoke box, but a modified Tupperware container does the same job in a fraction of the time.

    Thanks to Tom for supplying said container.

    The Wind Chimes :

    The wind chimes are hanging, not from the ceiling as I might have hoped, but from a shelf that still hides a multitude of sins and holds the housing for the Arduino.

    Final Prototype :

    “We are each our own worst critic” – Ellen Hendriksen

    I have a tendency to see the flaws in my work but I am quite happy with how this turned out and believe that it has some real potential to go further to become something quite beautiful and quite interesting.

    Maybe at some point in the future it will be.

    Documentation :

    I want to revisit it if I have time.

    Plan out the shots better, better quality camera, slicker production.

    As the shades are these long slim tubes of textured rubber, I wanted to use a 9:16 ratio and close-up shots to explore the texture and light of the prototype, and to imitate how some people got very close to the light, and underneath it, to inspect it further.

    Taking it further :

    I want to make a mock up of the final idea here, either an illustration or a 3D render.

    As outlined in my initial concept designs, if this prototype were taken further there would be a few things to expand as well as improve :

    1. The first thing to change would be the scale. As seen, the prototype is just 3 shades, but taken further I would increase the scale and have a large two-level chandelier showing the relationships between people in each data set, with each shade expressing the data of one person.
    2. The power behind it. As the project gets much bigger, the technology would need to follow, with a specific rig built to store the data of the CSV file, power and run each NeoPixel, and receive data if live data is in fact being fed into the lights.
    3. The materials of the whole thing. On a bigger scale, with many different shades, there would need to be a metal structure, or something strong enough and light enough, to hold the lights in the air. As well as the structure, I would like to revisit the shade materials and have them specially made, clouding the light of the NeoPixels enough to give the desired effect while still being light enough to hang a large number of them from one structure.

    Evaluation :

    For this project I wanted to keep it simple and I think I did that.

    I wanted to keep the concept simple for a number of reasons; firstly, I wanted the final product not just to be interesting and driven by the data, but to have the potential to be beautiful as well as modular.

    If I had more time, or were to do it again, I would give more time to the materials of the final piece and find the perfect one, both to allow the lights and the data to be seen, and to create a sleek, clean silhouette with an almost sculptural element.

    My main takeaway is that my coding skills need to be taken up a notch to match where I am in other aspects. I’ll have to allocate time to getting back into it, taking it further with examples and self-driven lessons and projects, instead of waiting for a brief to force me to.

    Extra Documentation :

  • Interactive Systems

    Before I go into any of this: I’m writing it after the fact. I was initially going to do this in a week-by-week chronological format, but I went down the rabbit hole with this project, threw myself in completely and lost track of the documentation.

    This documentation might seem to jump around a lot, and that’s because I’ve chosen to focus on the aspects I was involved in.

    Brief :

    What we took away from it :

    As a team we broke down the brief, distilling it down to the key elements that we wanted to look into and where our scope of work was.

    We did go a little bit out of scope on some aspects, but we treated those as possible suggestions to the client on top of what they asked for.

    Getting it on the board :

    Just my chicken scratchings of us breaking down the brief as a team.

    Team Building :

    Marco’s nicer more legible version of the team list.

    I thought of a good way for us to separate into our teams: by allowing everyone to voice their opinions on where they would most and least want to be, the teams naturally formed.

    There were some members of the team that either had no strong feeling about what they wanted to do, or wanted to do a bit of everything without committing to any one thing, so the role of Free Agent was created, meaning they could float between the teams and help out wherever needed.

    Planning the week :

    I seemed to be appointed to the role of organising the project, so I viewed my role as organising all the boring but important stuff, meaning everyone else can do the more enjoyable, more creative activities.

    Glasgow Science Centre Research:

    As we were designing for the Science Museum Group (SMG), I went with Marco to the closest equivalent we have, the Glasgow Science Centre, to see how they have made a fun educational exhibition work for the past 20 years.

    Figma board :

    Elias showed the team how to use Figma and how we can implement it as a team to work on the visuals of the project together. It’s been a great resource for this project and I’ll want to use it going forward for my own projects.

    Click here for the Figma board.

    The Figma board was updated by most of the team throughout the entire project.

    The teams (throughout the project) :

    Everyone has their own strengths, weaknesses and personalities. If there is anything I can do to make it easier on everyone, by organising things in a certain way or acting as a middle man between people, then I believe it’s my role to do so; that way everyone can focus on their work.

    I am human, and initially I was getting annoyed by people and their attitudes towards the group, the project and myself, but after taking a step back I had to remind myself (with help from others) that my role in logistics is not to try and manage people but to allow them to do what they’re best at, and if they don’t, it is not my job to police that.

    As with all group projects there has been some tension, but the people who have been in every day have done such a great job of defusing that energy, reminding whoever is going through it at the time that it’s not forever, and to take a step back and relax. I’m actually proud of the core members of the team, of how supportive they’ve been of each other and of their willingness to adapt and work.

    Attractor Screen Research

    Looking at font from the time :

    Sketching the form :

    All the quick sketches, discussions and ideas of the form just assumed a podium and a screen, after Paul had suggested that the podium sit further away from the screen (1.5m). As I was sketching out the drawings on the left, Mikhail was discussing his idea for a more connected, outreaching design, sketched on the right, which he then used as the base for a 3D Blender model.

    Iconography :

    Iso talk with Pawel :

    The talk with Pawel gave me a great insight into how design firms like ISO work, similar to projects we do in the studio, and how closely the system I was using for this project naturally followed structures like ISO’s.

    Planning Week Two:

    Each week I would start afresh: look at what had been done, what had to be done, what was in progress and how everyone was feeling about it personally, and then move forward from there.

    A big takeaway from this part of the project was how much I gained from running it by the team and asking the guys if I had missed anything. Once or twice I had missed something, which I’m thankful they mentioned.

    UX Journey :

    Using a Miro board I went through the user experience and made sure there were no dead ends the user could find themselves in, annoying them or breaking the program.

    In the future I want to use Miro boards to visualise the user experience flow and think it through, so I don’t run into any snags.

    Attractor Screen Text :

    Here is the copy for the possible text that would be on the attractor screen, the call to action and the text that could be on the panel next to the physical interface.

    Attractor Screen Mock up :

    Above are some initial mock-up ideas I had after discussion with the team, incorporating different elements and ideas from everyone: the fonts and bar that Elias and I worked on, the footage layout Mikhail suggested, and the Processing code Marco built, with suggestions from Paul on the carousel menu and Koyaanisqatsi, of course.

    How to communicate Speed and Progression :

    My research into how to effectively communicate the speed of the frame rates, and the relationship between that speed and ‘our time’.

    Planning Week Three :

    Bringing it all together for the final week. This week’s focus was obviously trying to finish up and bring everything together, because I had it in my head that without allocating proper time to the presentation we wouldn’t do it properly, and after all this work it would seem rushed and unprofessional.

    Capture Rate vs FPS :

    Mock ups for iconography on how to communicate the relationship of each element.

    Looking back at it now, I think we went down into it too much, over-designing the system instead of stripping it right back and making it as obvious as possible.

    Screen Mock ups with clocks:

    After we had decided on the clock icons I did a quick mock-up, and all seemed well until it was brought to our attention that what was obvious to us wouldn’t be obvious to anyone outside the project.

    I’m sure that kind of insight will come with more experience and time.

    Answer the question and work back :

    Struggling with visualising the relationship between the two different measurements after the conversation with Paul.

    The relationship between how fast things actually move.

    And how fast they have to move so that we can see something moving.

    Above are just the initial notes of trying to simplify the lesson we want the user to take away: working backwards, starting with the answer/thought we want the person to have, and then trying to reverse engineer how we would get from that answer to the question we have to ask them.

    Screen change mock ups :

    More mad scribbles on the boards showing the creative process of myself, Mikhail, Marco and Elias, all trying to come up with a way of communicating the lesson/takeaway of the whole project.

    Again, more quick Figma mock-ups of how we would want to show the 3 videos of footage.

    In future projects I’ll want to use Figma more, just for its speed and accessibility.

    I also think there is a more elegant way to display the 3 windows but at this time I don’t have it and would need to spend more time on it if I could.

    Asking Year two to test :

    We asked the second years to have a try and give us any suggestions or thoughts.

    For myself this was an opportunity to observe people using the exhibition from afar and see how they use it.

    The Presentation :

    The final thing

    After the project finished, as I looked back at my learning journal and tried to document everything, I realised that so much of my work on this project went undocumented, either because I didn’t think it was important at the time, or because I don’t know how I could have documented it.

    A lot of my work on this project was planning, team management and visual identity.

    On the planning and team management side, I didn’t particularly want to take that role on, but it happened naturally, and I thought that if I took it on it would allow the others to get on with what they’re good at. What did happen is that people would come to me and ask my opinion on each part of the project, but I made sure I would only encourage or guide when asked; at no point did I want it to become my singular vision.

    On the visual identity, Elias and I worked on it together and it all went quite easily. It wasn’t in the scope, but I thought it was an important consideration, so that even though we weren’t actually supposed to do it, we had a clear visual identity to refer to and to help drive the aesthetics of the project.

    Self Evaluation

    Looking back at everything that’s happened over the last three weeks, there are a few things I’ve taken away from this experience:

    I had to learn not to let things bother me as much. At the start I was understandably annoyed that some people were coasting while others were doing all the work and giving up their time, but the people in this project and in my life reminded me that it’s not the end of the world.

    What I will say, though, is that I’ve discovered I actually do like leading a team. It did help that the team had great core members, but when everyone was working and collaborating, on the same wavelength and in the same flow state, it was amazing.

    Another takeaway is that if I could do it again I would take Pawel’s advice and have team leader as its own role, instead of an afterthought or a secondary role for myself. Given a second chance and a bigger team, ideally there would be a team leader/free agent role: someone who could help out when needed on all parts of the project, but whose primary role is logistics.

    Overall I enjoyed this project, really enjoyed working as part of a team, and I’m really proud of how much everyone has done.

  • Imagined Environments

    Initial Ideas / Notes :

    Initial idea one is the story of two people making a journey to each other through Glasgow, with footage of both journeys overlapping or intertwining.

    Idea two is normal footage of people walking around Glasgow, but at different sizes, colours and speeds. The normal sights of Glasgow made weird and wonderful.

    Idea Three is playing with the idea of different aspect ratios within one video/screen. One story told from behind multiple different screens.

    Conscious of Time :

    As the project is only two weeks, and it’s already Wednesday, I want to nail down the concept as quickly as possible so that I have plenty of time to get the footage I’m happy with and then have time to make sure the video has polish to it.

    Effects with a good level of polish obviously add to their credibility and believability. And when doing something odd based on the real world, a certain degree of believability is vital.

    Normal made magical :

    I’ve decided to go with the change-in-size idea for my project. It is a pretty simple idea, but I think that, done well, it can be the most effective of the ideas I had.
    On personal evaluation, the changing-size idea excites me the most, and I think that’s an important consideration.

    Inspiration :

    The BBC Sounds trailer, shown at the project launch, shows how effects can be used to make the real world more amazing and magical. I really like that aspect of it—subverting expectations or the natural world while still telling a story or message.

    The music video for ‘Why’s this dealer?’ by Niko B also shows this aesthetic of weird or magical things happening in the real world, again with a story or narrative that would only be as clear with the effects.

    Again, I am looking at music videos that have used special effects to create something magical out of the routine mundane.

    Rómulo Celdrán

    Looking at the work of Rómulo Celdrán, who makes replicas of everyday objects he interacts with and blows them up. When these everyday objects get to this size, they almost take on a sculptural nature even though they are nearly exact replicas of ordinary objects.

    Proof of Concept :

    Above are two quick mock-ups of the concept for the video. I’m just testing out the visuals of how ordinary everyday people and vehicles will look moving around Glasgow.

    From my iPhone :

    Even as I mindlessly scroll through my phone, the algorithm helps me return to work on this project between the reposted TikToks and the Family Guy clips.


    Even in this clip, I can see the joy and wonder these oversized items bring to this creator. I’m sure other attendees at the con share this excitement.

    Points to focus on :

    After the proof of concept tests there are a number of things I’ve noticed :

    The size might need to be smaller to create a meaningful space for the larger object to work within. If they are too big, they’ll be in and out of frame in a second, so if they are smaller, the viewer will have more of a chance to see them and enjoy the juxtaposition of small to big.

    Does there have to be a narrative to the minute-long video? If so, what is it? An insert character for the audience to relate to, the only person in Glasgow noticing all the weird stuff happening around them?

    If I plan the shoot correctly, I can complete all the filming in one day and then use the rest of the time to work on my After Effects skills and make the footage believable.

    To Do List :

    Select 3 different elements from 3 different sources and combine to make a new composite, i.e. a background, a moving object (realistic or 3d model) and a human element.

    • background of Glasgow
    • moving objects (realistic) being the people and vehicles of Glasgow
    • the human element of the people of Glasgow shown in the video

    1 minute minimum duration. I can break that minute up in 2 ways :

    • I can have 3 scenes of the city with a different element changed in each, which gives me 20 seconds per scene.
    • Or I can have 4 different locations from around Glasgow, which breaks it down to 15 seconds per scene.

    Allowing a couple of seconds at the start and end of the video for a fade in and fade out with a title.

    What do I need to shoot ? :

    • 4 or 5 scenes of Glasgow’s cityscapes that show clear movement and busy life, 30 seconds each to allow for editing.
    • 4 or 5 up-close scenes of people, vehicles or animals that can be blown up to scale, with a clear left-to-right movement (or diagonal left to diagonal right).
    • Possibly 1 or 2 establishing shots of identifiable Glasgow scenes or landmarks for the title and end.

    All HQ to survive the upscaling or downscaling, all shot on the same day, around the same time, to keep the lighting similar.

    Future Ideas :

    I think these ideas are good but maybe a bit late in the game for this project.

    Idea : Fistful of Dollars / block blur effect / Tarantino-esque copy of western effects: snap zooms, close-ups and hard-line blurs.

    Instead of the standard close-up used in cinema, or the focus blur with soft feathering on the edges, I want a hard separation between clear and blurry.

    Idea : Tekken 3 fighters in the real world.

    Idea : James Bond title sequence effect.

    Idea : mirroring images of city and nature, double-exposed together on the one piece of footage, but with both camera movements the same, essentially tying them together.

    Having an element of the city blocked out with mirrored footage of nature, shot with mirrored camera movement, as if they are occupying the same space but at different times.

    To Remember :

    note : adjustment layer over all footage at the end to tie it together

    At the end of production, when all the effects are finished and the footage is as close as possible to its raw output, tie it together visually with an adjustment layer over the finished product.

    note : jigsaw effect to imitate a lower quality image that then ties the two levels of the footage together (mosaic mask).

    If the two pieces of footage are of different qualities and the difference is obvious, tie them together by using the jigsaw effect on the lower quality footage, imitating a low-res look.

    note : consider the movement of the footage and how that affects it.

    When recording footage, imagine how that footage will be used, and how the effects applied to it might follow a direction, or benefit from one.

    Trying to Plan :

    If I have a plan of what I need to shoot then I won’t just be wandering around aimlessly, and the footage will all be pretty similar in lighting and conditions… weather permitting.

    I had an idea of what I wanted to shoot that day, but the weather didn’t hold up for long; the busy streets got considerably less busy when the rain started. It’s almost as if people don’t want to be out in the rain.

    In all honesty the plan quickly went out the window, so I decided I would keep an eye out for shots that had the framing I was looking for, or the environment I was imagining.

    Out Filming :

    Camera in hand, tripod under my arm, Jack White album in my ears, I set out into Glasgow to get the footage I needed; I had some of it planned, but I thought it would be better to just try and discover shots while out and about.

    When I was out I got some shots I wouldn’t have thought of, but I also discovered that with the time constraints and the equipment I have, the up-close shots I wanted will either need to be done with my phone, or I’ll have to change my idea slightly.

    After realising that getting the footage I wanted wouldn’t be possible, I started to consider other methods and other footage I could use in its place. Listening to the Jack White album ‘No Name’ was making me think primal, of animals, so the image of the normal city with animals or fish moving through it appeals to me, and that footage can be found more easily at the scale I’m looking for.

    Change of Plans :

    So, as I couldn’t get the up-close footage of Glasgow I was looking for, I went with Gillian’s suggestion and looked at Adobe Stock footage, downloading a variety of animals to bring into the footage I’ve filmed in the city.

    An example of the stock footage fish I was trying to find an excuse to use, but the transparent fins were too much.

    Below are some notes on my process from when I wasn’t having the most luck, expanded on after the fact when I could take a better look at them :

    its crashed and i lost a good chunck after trying to brute force it

    At the start of the tracking I had done a few of the animals, but as I hadn’t yet been through the settings troubles and figured out how to make After Effects less destructive, it crashed and I lost a good chunk of my work, meaning I had to do the roto brush again.

    I tried to force it to preview one HD rotobrushed preview at a time and it crashed on the last couple of frames.

    I learned that solid contained shapes are best to roto brush

    Also, as I did more of the roto brushing, I learned that the smaller, difficult areas of this footage would obviously be harder. I then changed the footage to more solid animals with clearly defined shapes and movements.

    I learned that land animals are worse to roto brush onto existing footage because they actully have to stand on something

    After changing my animal footage I learned that any animal making contact with the floor adds another layer of difficulty; where they make contact with the ground looks too fake and amateur.

    my laptop was struggling during the entire thing

    During this entire process of rotobrushing and assembling my footage, I learned that to save some processing power, and in turn my laptop, I had to set the preview settings as low as they could go (quarter), assign more RAM to After Effects (7.5GB of my total 8GB) and break the work into manageable chunks, assembling it again in After Effects before rendering it out.

    i have became a master at the roto brush tool / fine detail brush tool in after effects because ive done the same thing over and over again

    As you can probably tell from this lack of modesty, I was going a bit mad after doing the roto brushing/fine detail brushing over and over on the same footage, but because of it I found an efficient workflow and key shortcuts for rotoscoping.

    I’d actually like to use it again when doing more work on the potential school-of-fish project.

    Comp in a Comp :

    To save my laptop I’m separating the footage into separate scenes, rendering them out with Media Encoder and then bringing them back into After Effects to edit together and add a fitting soundtrack.

    The Music can Change it :

    Porcelain was in my head for it because of the two main elements of the track: the steady drums being the street and normal life, and the strings and vocals being the sea life floating above.

    Nude has always given me this feeling of being underwater, so the connection again is sea life with this one.

    Feel It All Around initially gave me that same ocean feel as the others, but in a lighter tone. As I was looking for the video on YouTube to put here, though, I found another version.

    Excuse the visuals of this video, sorry.

    The same song, Feel It All Around by Washed Out, but slowed down with added reverb, giving it an even dreamier, spaced-out feel which I think might work well with the underwater visuals.

    After going down the rabbit hole of slowed-and-reverb songs I found Veridis Quo (slowed and reverb), which has the feeling I’m looking for: a magical wonder that still fits within the cityscape.

    The song has a solid minute at the very start that I’m going to use; it does start to ramp up at the end of that minute, which makes me want to make a longer video, but I’m capping it at a minute.

    Trust the Process :

    With all the footage rendered out, I now want to bring it back into After Effects to bring it together for a final composition and to add some effects:

    • To start and end the video I’ve put a fade in and out of black at either end, so as not to start and end so abruptly, fading the music at the same time to gently lead into the video and to set and maintain the light, relaxed tone.
    • Using the camera lens blur effect on an adjustment layer I made the transitions between each scene; I felt the adjustment of focus would again be easier on the eyes and the tone, rather than just cutting to the next shot.
    • And over each scene is an auto colour correction, auto levels and auto contrast, just to tie the scene and the imported fish a little more together. I could have gone deeper into it, but I enjoyed some contrast between the normal streets and the vibrant sea life.

    With the visual elements I wanted to really soften them and continue the feel of the entire thing (as I’ve probably said too many times at this point).

    Final : Magic in the Mundane

    Taking it further:

    I really love the visuals of the school of fish and would love to do another video with only images of schools of fish moving through a cityscape.

    I really like the combo of the school of fish, the solid rigidity of the city and the music that has elements that mirror both the rigidity and the flowing natural patterns of the fish.

    Evaluation :

    During this project I had to be adaptable, whether because I couldn’t get the exact footage I was looking for, or because of the hardware restrictions of my laptop. I had to take it on the chin and keep going.

    My main technical takeaway from this project is my confidence with the rotobrush; after having to do it a few times I’m quite happy to use it and would like to take my skills further. Even some footage in my final piece could be better, and given the opportunity I would do more.

    I could have been more open to other ideas; I was so bogged down with the one concept, and I might have made my life easier with a simpler idea.

    A personal criticism is that I might have locked in my concept too soon. Yes, the project is only two weeks, but given another day or a wider scope I could have come up with something more abstract or inspired. I am happy with the final visuals, but I think there is more potential for it, either to be more abstract and stylised, or more honed-in and focused.

    In future I could first look into the software to see what can be done, and then let that inform my decisions/process/workflow.

    In this project I was driven by my concept and then had to work around (and against) the software to make it work. This process had its own interesting workflow and outputs, but I wonder what I would have come up with if I had looked more into After Effects and let that inform my creative process.

  • Open Share Year Two

    Building

    What do I want to do ?:

    Narrowed down to 3 options, each with its own draws and aspects I want to focus on.

    Looking at the space again:

    Just familiarising myself with the garage space again, considering where I could present one of my pieces.

    Deciding what to do: Process of elimination

    Data Visualisation:

    So for the data visualisation I wanted to make it human scale, and for that the idea was to use real bricks in place of the Lego bricks.

    But when I looked into the cost of buying 297 bricks it would have come to £157.41 (roughly 53p a brick), and I don’t really want to pay that; there are cheaper alternatives, but the problem with those would be transportation.

    Control:

    For Control I could do it; if I did, I would want to do a better job with the soldering and make it more secure, as well as having a more performative output like the printer idea I had, or having the final piece output to a projector or second screen.

    But between the three options I personally feel that Control was the most realised of the projects and wouldn’t have gained much from Open Share.

    Typographical Narratives:

    For Typographical Narratives I initially aimed to have an interactive display that would react to the proximity of the viewer, making it more difficult to read the closer they got.

    Graphics

    As the main code and sensor are working for the piece, I want to use today to redesign the actual graphics, as well as thinking about how to elevate the piece to gallery level.

    The main task of today was to test the legibility of the font at certain distances and, by asking others, decide on a font for the smaller text, so it’s not too easy for people to read before the blur effect begins.

    Final Graphic:

    With the final graphic I wanted something that would draw people in and make them want to read on. I wanted the writing to have the same feel as The Hitchhiker’s Guide to the Galaxy: that sort of horrible news delivered as if it was nothing important, just a fact of life that everyone has accepted.

    “Time is an illusion. Lunchtime doubly so.”

    Installation

    Putting it in the space:

    Ready to present:

    Reflection:

    Last year, for Open Share year one, I did too much; I stressed myself out and maybe took too much on, or just got too wrapped up in it.

    This year I feel I made an overcorrection. My goal was to do something simple, continuing a practice I’ve been trying to keep up throughout the year, something that wasn’t too adventurous for a one-week project.

    For Open Share this year I wanted to realise my installation concept from Typographical Narratives, which I had explored in a number of prints, and turn those prints into reactive posters.

    On reflection I believe I could have done more: more examples of text on the screen, more effects that warp and shape the text.

    • Pixelisation effect on the text
    • Depth of the text (moving further away from you)
    • Warp and melt

    I do believe this concept has potential, and the relationship between the audience and a piece that doesn’t play fair is something I’d like to explore.