My solo art show about #AlgorithmicBias and #SurveillanceCapitalism opened yesterday at Transformer gallery in Washington, D.C. It's called "The Algorithm Will See You Now": https://www.transformerdc.org/e19
Excited to be able to do something a little more advocacy-oriented. I'll follow up with some details about the artworks in the coming days.
Open through July 23, for those in the Washington, D.C., USA area!
Let's start with "Morale is Mandatory (Algorithm Livery)".
This artwork scans for nearby faces using facial-recognition hardware and a model Google provided for schoolchildren in its “AIY Vision Kit” (sold at Target). Each face deemed sufficiently cheerful counts toward a meter of “smiling faces.”
It hints at the potential for state or corporate monitoring of mood. Imagine a customer service job that mandates a percentage of joy.
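For the curious, the counting logic boils down to something like the sketch below. This is not the piece's actual code (that runs on the AIY Vision Kit hardware and its bundled model); it's a rough stand-in I wrote using OpenCV's stock Haar cascades for face and smile detection, with arbitrary thresholds.

```python
# Rough stand-in for the piece's logic, using OpenCV Haar cascades instead of
# the AIY Vision Kit model the artwork actually uses. Counts "sufficiently
# cheerful" faces toward a running meter.
import cv2

face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
smile_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_smile.xml")

smiling_faces_meter = 0
cap = cv2.VideoCapture(0)  # default webcam

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = face_cascade.detectMultiScale(gray, scaleFactor=1.3, minNeighbors=5)
    for (x, y, w, h) in faces:
        roi = gray[y:y + h, x:x + w]
        # A detected smile inside the face region counts as "sufficiently cheerful".
        smiles = smile_cascade.detectMultiScale(roi, scaleFactor=1.7, minNeighbors=20)
        if len(smiles) > 0:
            smiling_faces_meter += 1
    print("smiling faces so far:", smiling_faces_meter)
```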
“Probing GauGAN2”: #GauGAN2 is an image-generation “AI” that was trained by Nvidia data scientists strictly to create landscapes.
If you ask for people, the best it can do is a strange, gleaming blob.
I’d wager this is by design, given other high-profile AI-bias incidents.
Naturally, I was interested to find out whether any residual biases could be teased out of the network. I compared its outputs for the following phrases:
- where people live
- where white americans live
- where indigenous people live
- where african-american people live
It’s interesting to me that GauGAN2 learned a bit about ethnicities and geographies. The first two look almost the same.
“Feedback Loop (Related Content Machine)”: When you engage with a personalized website, it responds to your inferred interest by providing similar things. The problem with this approach is that it can quickly lead you into a sinkhole—or radicalize you. (Think YouTube autoplay.)
I feel this is related to a property of mistrained self-learning AIs: they can end up validating their own wrong results.
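To make the loop concrete, here's a toy simulation (my illustration, not the machine's internals): items get "topic" vectors, the inferred profile drifts toward whatever was just served, and the recommendations narrow accordingly.

```python
# Toy "related content" loop: the profile is nudged toward whatever it was
# just shown, so recommendations converge on a single topic cluster.
import numpy as np

rng = np.random.default_rng(0)
items = rng.normal(size=(500, 8))            # 500 items, 8 "topic" dimensions
items /= np.linalg.norm(items, axis=1, keepdims=True)

profile = rng.normal(size=8)                 # the user's inferred interests
profile /= np.linalg.norm(profile)

seen = set()
for step in range(20):
    scores = items @ profile                 # similarity of each item to the profile
    scores[list(seen)] = -np.inf             # don't repeat items
    pick = int(np.argmax(scores))
    seen.add(pick)
    # Inferred interest drifts toward whatever was just served.
    profile = 0.8 * profile + 0.2 * items[pick]
    profile /= np.linalg.norm(profile)
    print(step, pick, round(float(scores[pick]), 3))
```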
And here are the unmodified images generated by DALL-E Mini for the prompts:
- menace
- model
#AlgorithmicBias #dalleMini
Interesting what skin tones it chooses, isn’t it?
“Snap Judgment”: There’s a neural network in this device. It was trained on about 290 ImageNet categories for types of worker, such as “baker” or “landlord.”
Because the dataset comes nowhere near reflecting the diversity of Earth’s inhabitants, any neural network naively trained on it, as I have done here, will absorb its biases, such as assuming that people with particular skin tones or gender presentations are more or less likely to hold a given occupation.
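A toy illustration of that mechanism (mine, not the artwork's training code): when an occupation label co-occurs with a demographic attribute in a skewed training set, a naively trained classifier learns to lean on that attribute as a proxy.

```python
# Toy illustration: a classifier trained on a skewed dataset learns to use a
# demographic proxy feature when predicting occupation.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 2000
demo = rng.integers(0, 2, size=n)     # stand-in demographic attribute (0 or 1)
noise = rng.normal(size=n)            # noise standing in for everything else
X = np.column_stack([demo, noise])

# Skewed labels: in this training set, "baker" (label True) co-occurs with
# demographic value 1 far more often than with value 0.
p_baker = np.where(demo == 1, 0.8, 0.2)
y = rng.random(n) < p_baker

clf = LogisticRegression().fit(X, y)
print("coef on demographic proxy:", clf.coef_[0][0])   # large and positive
print("coef on noise:", clf.coef_[0][1])               # near zero
print("P(baker | demo=1):", clf.predict_proba([[1, 0.0]])[0][1])
print("P(baker | demo=0):", clf.predict_proba([[0, 0.0]])[0][1])
```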
“Print/Shred” relies on a companion web site:
https://algorithms.chriscombs.net?m
I’ve fleshed it out with details about algorithmic bias, surveillance capitalism, and some concrete next steps you can take.
@combs GIGO (Garbage in, garbage out)
@combs And if you leave the prompt empty, a significant number of Southeast Asian-looking women will turn up in the results. Is that the same kind of bias, or is there something going on that we don’t completely understand?
“Probing ImageNet”: These thumbnails are real-world screenshots from an illegitimately acquired copy of ImageNet, a massive dataset of more than a million images depicting tens of thousands of nouns (e.g., cats).
ImageNet was the raw material used to train many neural networks (“AIs”) that recognize images. They are shown the desired input and output, then left alone for thousands or millions of iterations until they recognize the desired topics correctly. (Oversimplifying, don’t @ me.)
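That "shown the desired input and output" step is, at its core, a loop like this minimal PyTorch sketch. Random tensors stand in for the ImageNet photos and labels, so it's the shape of the process rather than anyone's real training run.

```python
# Minimal supervised training loop: show the network inputs and the desired
# labels, nudge its weights, repeat. Random tensors stand in for ImageNet
# images and labels here.
import torch
import torch.nn as nn

num_classes = 290                      # e.g., the worker categories above
model = nn.Sequential(
    nn.Flatten(),
    nn.Linear(3 * 64 * 64, 512),
    nn.ReLU(),
    nn.Linear(512, num_classes),
)
loss_fn = nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)

for step in range(1000):               # real training runs far longer
    images = torch.randn(32, 3, 64, 64)            # placeholder "photos"
    labels = torch.randint(0, num_classes, (32,))  # placeholder "topics"
    logits = model(images)
    loss = loss_fn(logits, labels)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    if step % 200 == 0:
        print(step, float(loss))
```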