Location: San Francisco, CA, United States
I have been using machine learning neural nets trained on databases of real photographs to render images. In the course of this work I discovered disturbing issues of racial bias, privacy, and accuracy that accompany this emerging technology.
I trained a neural net on 512 pairs of archival mugshot photos from a database published by the National Institute of Standards and Technology (NIST). I then input a previously unseen mugshot, either a front view or a profile, and had the neural net predict and render what it thought the alternate view should look like.
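The statement does not name the architecture used, but paired view-to-view prediction of this kind is commonly built as a pix2pix-style conditional GAN (Isola et al., 2017). The sketch below is a minimal illustration under that assumption; the network sizes, hyperparameters, and the train_step helper are hypothetical, not the actual code behind the work.

```python
# Minimal sketch of paired image-to-image translation in the spirit of
# pix2pix. Assumes PyTorch; all sizes and names here are illustrative.
import torch
import torch.nn as nn

def conv_block(cin, cout, down=True):
    # Halve (down) or double (up) spatial resolution.
    layer = (nn.Conv2d(cin, cout, 4, 2, 1) if down
             else nn.ConvTranspose2d(cin, cout, 4, 2, 1))
    return nn.Sequential(layer, nn.BatchNorm2d(cout), nn.ReLU())

# Generator: maps one mugshot view (e.g., the front) to a prediction
# of the other view (the profile).
generator = nn.Sequential(
    conv_block(3, 64), conv_block(64, 128), conv_block(128, 256),
    conv_block(256, 128, down=False), conv_block(128, 64, down=False),
    nn.ConvTranspose2d(64, 3, 4, 2, 1), nn.Tanh(),
)

# Discriminator: judges whether an (input view, other view) pair is a
# real archival pair or a generated one (PatchGAN-style patch scores).
discriminator = nn.Sequential(
    nn.Conv2d(6, 64, 4, 2, 1), nn.LeakyReLU(0.2),
    nn.Conv2d(64, 128, 4, 2, 1), nn.LeakyReLU(0.2),
    nn.Conv2d(128, 1, 4, 1, 1),
)

bce = nn.BCEWithLogitsLoss()
l1 = nn.L1Loss()
g_opt = torch.optim.Adam(generator.parameters(), lr=2e-4)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=2e-4)

def train_step(front, profile):
    """One update on a batch of paired views (front -> profile here)."""
    fake = generator(front)
    # Discriminator: push real pairs toward 1, generated pairs toward 0.
    d_real = discriminator(torch.cat([front, profile], dim=1))
    d_fake = discriminator(torch.cat([front, fake.detach()], dim=1))
    d_loss = (bce(d_real, torch.ones_like(d_real)) +
              bce(d_fake, torch.zeros_like(d_fake)))
    d_opt.zero_grad(); d_loss.backward(); d_opt.step()
    # Generator: fool the discriminator while staying close (L1) to the
    # true alternate view.
    d_out = discriminator(torch.cat([front, fake], dim=1))
    g_loss = bce(d_out, torch.ones_like(d_out)) + 100.0 * l1(fake, profile)
    g_opt.zero_grad(); g_loss.backward(); g_opt.step()
```

The L1 term is what ties the prediction to the specific person in the input photo; the adversarial term only pushes the output towards looking like a plausible mugshot, which is one source of the accuracy worries raised below.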
I was greatly disturbed to find that the NIST database was heavily skewed towards African-Americans, who appeared at a rate far higher than the roughly 13% of the current US population they represent. I also felt that the individuals in the mugshots, most of them likely long dead, were being exploited for their faces. I decided that if I was going to use this database, then it was my responsibility to highlight the injustices contained within it.
While doing this work, I also became aware of how invasive this technology could be. In the future, for example, a single image of a protester caught on CCTV could be used to generate multiple camera views, or even a complete CG model of the person. There are also serious questions about how accurate these generated images will be. Will we one day be generating ever more photorealistic images of the wrong person?
In Hopeful Monsters, I used a completely different neural net, trained on 750 photos of myself in varied poses and clothing. The neural net was tasked with producing an image that looked like it belonged to that set of 750 photos. It was not making a "xerox" copy; it was trying to make a new image that would be indistinguishable from a member of the set. Hopeful Monsters shows 16 images selected from the 55,000 training cycles that were run.
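This "indistinguishable from the set" objective is the defining idea of a generative adversarial network, so a GAN is a reasonable assumption here, though the statement does not name the technique. A minimal sketch under that assumption follows; the latent size, layer widths, and function names are hypothetical.

```python
# Minimal sketch of an unconditional GAN (DCGAN-style) trained on a small
# personal photo set. Illustrative only; all names and sizes are assumed.
import torch
import torch.nn as nn

latent_dim = 100  # assumed size of the random input vector

# Generator: random noise -> a 32x32 image meant to pass as one of the
# 750 training photos.
generator = nn.Sequential(
    nn.ConvTranspose2d(latent_dim, 256, 4, 1, 0), nn.BatchNorm2d(256), nn.ReLU(),
    nn.ConvTranspose2d(256, 128, 4, 2, 1), nn.BatchNorm2d(128), nn.ReLU(),
    nn.ConvTranspose2d(128, 64, 4, 2, 1), nn.BatchNorm2d(64), nn.ReLU(),
    nn.ConvTranspose2d(64, 3, 4, 2, 1), nn.Tanh(),
)

# Discriminator: a single logit saying "member of the set" vs "fake".
discriminator = nn.Sequential(
    nn.Conv2d(3, 64, 4, 2, 1), nn.LeakyReLU(0.2),
    nn.Conv2d(64, 128, 4, 2, 1), nn.LeakyReLU(0.2),
    nn.Conv2d(128, 256, 4, 2, 1), nn.LeakyReLU(0.2),
    nn.Conv2d(256, 1, 4, 1, 0), nn.Flatten(),
)

bce = nn.BCEWithLogitsLoss()
g_opt = torch.optim.Adam(generator.parameters(), lr=2e-4, betas=(0.5, 0.999))
d_opt = torch.optim.Adam(discriminator.parameters(), lr=2e-4, betas=(0.5, 0.999))

def train_step(real_batch):
    """One of the ~55,000 training cycles: the discriminator learns to tell
    the 750-photo set from fakes; the generator learns to make images the
    discriminator accepts as members of the set."""
    z = torch.randn(real_batch.size(0), latent_dim, 1, 1)
    fake = generator(z)
    d_real = discriminator(real_batch)
    d_fake = discriminator(fake.detach())
    d_loss = (bce(d_real, torch.ones_like(d_real)) +
              bce(d_fake, torch.zeros_like(d_fake)))
    d_opt.zero_grad(); d_loss.backward(); d_opt.step()
    d_out = discriminator(fake)
    g_loss = bce(d_out, torch.ones_like(d_out))
    g_opt.zero_grad(); g_loss.backward(); g_opt.step()
```

At each cycle the discriminator's membership judgment is what drives the generator; saving the generator's output at different cycles is what would yield a series of candidate "members" of the set, like the 16 shown.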
Self-Portrait is the single image produced by the Hopeful Monsters neural net that I deemed "closest" to being one of the 750 input photographs.