“Humanist Text Camera” is an exploration of gesture recognition using the Microsoft Kinect sensor. Sang-In Chung, a thesis student in Media Design Practice, came to me for help developing this facet of her project. Using the Kinect, we built what Sang-In calls the “Humanist Text Camera”: the sensor’s input is written descriptively as text on a display, rather than used to create or control objects.
The following is an excerpt from Sang-In’s thesis: “Camera vision is an experience that allows people to read broadcast information: people’s movements are transmitted into textual information through camera vision. The future camera carries a range of software for posture, facial, object, and speed detection, letting it interpret and describe the space it sees, especially the people occupying it, as a key component of sensing environments. It reports back on what it sees in text, rather than through video or images. The text erases specific individuals, becoming a generic, computerized description, much as faces are blurred in current street-map practice. What does it mean when people can be read, and the camera’s version of them is broadcast? Does that change the relationship between the camera and the humans it captures?”
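To make the idea concrete, here is a minimal sketch, in Python, of how skeleton data from a depth sensor might be turned into a textual description. This is not the project's actual code: the joint names, coordinate convention, and thresholds are illustrative assumptions, and a real Kinect pipeline would supply the joint positions from its skeleton-tracking API.

```python
# A hypothetical "pose to text" step: map tracked joint positions
# to a short English sentence instead of drawing or controlling anything.
# Joint names and the screen-coordinate convention (y grows downward)
# are assumptions for illustration only.

def describe_pose(joints):
    """Return a short English description of a tracked person,
    given a dict of joint name -> (x, y) screen coordinates."""
    phrases = []
    head = joints.get("head")
    left_hand = joints.get("left_hand")
    right_hand = joints.get("right_hand")
    if head and left_hand and right_hand:
        # A hand is "raised" when it appears above the head,
        # i.e. has a smaller y value in screen coordinates.
        if left_hand[1] < head[1] and right_hand[1] < head[1]:
            phrases.append("a person with both arms raised")
        elif left_hand[1] < head[1] or right_hand[1] < head[1]:
            phrases.append("a person with one arm raised")
        else:
            phrases.append("a person standing with arms lowered")
    if not phrases:
        return "no one in view"
    return "The camera sees " + " and ".join(phrases) + "."

# Example: both hands above the head.
pose = {"head": (320, 120), "left_hand": (250, 80), "right_hand": (390, 90)}
print(describe_pose(pose))  # The camera sees a person with both arms raised.
```

In the actual installation this descriptive step would run continuously on live sensor frames, with the resulting text rendered to the display in place of any video image.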
Thanks for the project, Sang-In!