For my final in Quantified Humanists I am interested in exploring the data layers of the world as they are perceived by machines.
My project aims to raise my awareness of what machines see, through a minimally invasive display that uses audio as an interface. For a long time I have been interested in computer vision, machine learning and AI (in the complete science-fiction sense of humans and machines coexisting). I am ready to become a human cyborg, get a brain implant and go full “The Matrix”.
I am not alone: the world is beginning to see people who embrace this sort of living, like Steve Mann, Hugh Herr and Neil Harbisson, who each, in their own way, live as a human-machine hybrid, using machines to regain a lost ability or to gain extra senses. I find the idea of extra senses and information very interesting.
Experts warn that this sort of enhancement could be detrimental for human beings. In the case of the smartphone, research already suggests as much: humans now rely on the “extra limb” to, for instance, look up the answer to a question, so we no longer train the brain to remember such information – how many phone numbers do you remember right this moment? (https://www.sciencedirect.com/science/article/pii/S0747563215001272 and https://www.vox.com/science-and-health/2018/3/28/17054848/smartphones-photos-memory-research-psychology-attention)
This Quantified Humanists class has inspired me to think about what this sort of life would be like. So rather than use self-tracking to optimize my daily routines – running, eating, feeling, tasks – as is often the case with the quantified self (QS), I have been thinking more about the kind of optimization I desire, and to what degree that optimization is actually desirable.
I have this dream that being a cyborg would be amazing, and I care less about the ethics – when the ethics are probably what I should be focusing on.
Therefore my project will be an experiment into life as/with a machine, to figure out how I truly feel about the idea of life as a human-machine hybrid. I will begin by trying to better understand what the machine is actually capable of.
QS Method: Eliciting sensations
My project falls under the category of self-tracking focusing on eliciting sensations (Neff, Gina., Nafus, Dawn., Self-Tracking, Chapter 2: What is at stake? The personal gets political, MIT Essential Knowledge Series), where one tracks one’s reactions and emotions in various situations, to come closer to one’s true feelings.
“The data becomes a ‘prosthetic of feeling,’ something to help us sense our bodies or the world around us” – Neff & Nafus
“In this style of self-tracking, hypotheses and solutions are put on pause to develop the fullest possible physically embodied sense of what could be going on” – Neff & Nafus
The method includes:
- Tracking to get closer to feelings that may have been lost or need to be re-calibrated (e.g. to regulate weight or re-experience satiety)
- Trial and error is common here
- Start with quantitative numbers and move to more qualitative descriptions as you tune into ‘yourself’
- Most track feelings in real time, but you can consider setting a baseline and tracking deviations from it.
- Add contextual descriptors to data
(From Neff and Nafus + class slides from week 2 on GitHub here: https://github.com/joeyklee/quant-humanists-2019)
Inspiration and delimitation
I am inspired by previous projects which have attempted similar enhancement through machines. Not all are self-tracking related.
Us+ by Lauren McCarthy and Kyle McDonald uses machine learning in the browser to analyze a conversation and provide helpful advice to the user. To bring this into the self-tracking realm, one could make a wearable that does the same thing, tracking in real time, all day, how one’s conversations are handled – am I talking more than the people I talk to? What are our relationships as perceived by the machine vs. as perceived by me? But I think it is better to start smaller than this.
Pplkpr (people keeper) by Lauren McCarthy and Kyle McDonald is an experiment that allows you to track your relationships with people, i.e. who drains your energy and who makes you happy. This is pure self-tracking; there is nothing intelligent or analytical about the machine. But it downright creeps me out. I am not ready for a world where we quantify our relationships. Therefore, I will focus more on objects than people. Furthermore, I want to minimize the amount of data I have to enter manually; I want to use the machine as the point of departure, like in the first project.
Robot Readable World is a video composed of several computer vision clips that show what, and how, machines see. This video inspired me to look directly at what machine learning algorithms are readily available for me to begin experimenting with what machines can see. I am new to machine learning, and since I wish for this to be part of my everyday “cyborg” experience, and not just in front of the computer, I will limit it to algorithms that will work from my cell phone.
“Who Wants to be a Self-driving Car?” by moovelLab (Joey Lee included) is a trust exercise where you have to trust the data from the computer vision and sensors in order to navigate a car which you are lying on. The aim is “to see just how different sensor and computers are seeing the world compared to humans”. This project is really interesting, and I wish that I could go this far! The focus is on creating empathy for the machine, which I find to be the key ingredient to something interesting here.
I will build a machine running in my phone’s browser.
I will take the machine for walks at least once a day and show it the things I see.
The machine will tell me what it sees (using my headphones).
I will score the machine on how correct I think it was.
I will add a description / score for how the machine made me feel in that moment. Did it make me smile? Or scared? Or indifferent?
I will use audio for the machine to talk to me because the visual sense is already overstimulated with information, and I am very interested in more ambient displays for information. Also, I never take off my headphones.
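Each walk observation could be logged as a small record that combines the machine’s output with my own scores and the contextual descriptors. A minimal sketch – the `makeEntry` helper and its field names are my own assumptions for illustration, not part of any library:

```javascript
// Sketch of one logged observation from a walk.
// Field names and the helper itself are assumptions, not a fixed schema.
function makeEntry(label, confidence, correctnessScore, feeling, coords) {
  return {
    label,                                // what the machine said it saw
    confidence,                           // the model's confidence, 0..1
    correctnessScore,                     // my score: how right was the machine?
    feeling,                              // e.g. "it made me smile"
    lat: coords.lat,                      // contextual descriptor: location
    lng: coords.lng,
    timestamp: new Date().toISOString()   // contextual descriptor: date + time
  };
}

const entry = makeEntry("street sign", 0.82, 4, "indifferent", { lat: 40.69, lng: -73.99 });
```

On the phone, the coordinates would come from the browser’s Geolocation API, and the records could simply accumulate in an array that gets uploaded to the website.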
The project will:
- Acquire data through a readily available machine learning algorithm to obtain information about objects in the world. https://ml5js.org/docs/ImageClassifier and https://ml5js.org/docs/YOLO are possibilities.
- Read the data out loud to me
- Contextual descriptors: location + date + time
- Feeling descriptors: correct detection + my feeling about it (potentially through speech recognition, though this might not be necessary http://ability.nyu.edu/p5.js-speech/ )
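The classify-and-speak loop could look roughly like this in the browser, assuming ml5’s MobileNet-based ImageClassifier and the standard Web Speech API (speechSynthesis). The `describeResult` helper is my own, and the exact ml5 callback signature may differ between versions – a sketch, not a definitive implementation:

```javascript
// Turn an ml5-style result array ([{label, confidence}, ...])
// into a sentence the machine can speak. Helper name is my own.
function describeResult(results) {
  if (!results || results.length === 0) return "I see nothing I recognize.";
  const top = results[0];
  return "I see " + top.label + ", " + Math.round(top.confidence * 100) + " percent sure.";
}

// Browser-only wiring (phone camera -> classifier -> headphones).
if (typeof window !== "undefined" && window.ml5) {
  const video = document.querySelector("video"); // camera stream attached elsewhere
  const classifier = ml5.imageClassifier("MobileNet", video, () => {
    setInterval(() => {
      classifier.classify((err, results) => {
        if (err) return;
        const sentence = describeResult(results);
        window.speechSynthesis.speak(new SpeechSynthesisUtterance(sentence));
      });
    }, 5000); // speak roughly every five seconds
  });
}

const spoken = describeResult([{ label: "park bench", confidence: 0.73 }]);
```

The five-second interval is an arbitrary starting point; an always-on narrator through headphones would quickly become noise, so part of the experiment is tuning how often the machine speaks.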
This is a living project. I want to expand it with more data over time, to include layers of the world which can be perceived ONLY by machines (i.e. cannot be perceived by the human sensory system), in effect giving me extra-human senses like a real cyborg. The process will expand, and I will learn what is, and isn’t, interesting to use machines for as a potential cyborg, by filtering, mining and refining the data over time. But for now the above will be the extent of the project.
The data collected by the machine will live on this website, where users can explore it.
The visualization process follows Ben Fry’s data visualization pipeline: https://www.safaribooksonline.com/library/view/visualizing-data/9780596514556/ch01.html
The data visualization is based on location data, using Google Street View as a point of reference for each point where the machine perceived objects, along with the score I gave the machine and how it made me feel in that moment.
I will build it in three.js.
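To place each logged point in a three.js scene, one simple approach (my own sketch, and the helper name and scale are assumptions) is an equirectangular projection of latitude/longitude into flat scene coordinates relative to a reference origin – accurate enough at walking scale:

```javascript
// Project a lat/lng pair into flat x/z scene coordinates (meters),
// relative to an origin point, using a simple equirectangular projection.
// Good enough at walking scale; helper name and units are my own choices.
const EARTH_RADIUS_M = 6371000;

function toSceneCoords(origin, point) {
  const toRad = (d) => (d * Math.PI) / 180;
  const x = EARTH_RADIUS_M * toRad(point.lng - origin.lng) * Math.cos(toRad(origin.lat));
  const z = EARTH_RADIUS_M * toRad(point.lat - origin.lat);
  return { x, z };
}

// In three.js, each entry could then become a marker, e.g.:
// mesh.position.set(coords.x, 0, -coords.z);

const coords = toSceneCoords({ lat: 40.0, lng: -73.0 }, { lat: 40.0, lng: -73.0 });
```

Each marker could then carry the entry’s label, score and feeling, with the Street View image for that location shown on selection.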