My solo art show opened yesterday at the gallery Transformer in Washington, D.C. It's called "The Algorithm Will See You Now": transformerdc.org/e19

Excited to be able to do something a little more advocacy-oriented. I'll follow up with some details about the artworks in the coming days.

Open through July 23, for those in the Washington, D.C., USA area!

Let's start with "Morale is Mandatory (Algorithm Livery)".

This artwork scans for nearby faces using facial-recognition hardware and a model Google provided for schoolchildren in its “AIY Vision Kit,” which was sold at Target. Each face deemed sufficiently cheerful counts toward a meter of “smiling faces.”
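
The logic is roughly what Google's own "Joy Detector" demo for the kit does. A minimal sketch, assuming the kit's Python API (aiy.vision); the threshold and the running meter are my own illustrative choices, not the artwork's actual code:

```python
# A minimal sketch, assuming the AIY Vision Kit's Python API
# (aiy.vision) as used by Google's "Joy Detector" demo. The threshold
# and the running meter are illustrative choices, NOT the artwork's
# actual code.
from picamera import PiCamera
from aiy.vision.inference import CameraInference
from aiy.vision.models import face_detection

JOY_THRESHOLD = 0.6  # hypothetical cutoff for "sufficiently cheerful"

def run_meter():
    smiling_faces = 0
    with PiCamera(sensor_mode=4, framerate=30) as camera:
        with CameraInference(face_detection.model()) as inference:
            for result in inference.run():
                for face in face_detection.get_faces(result):
                    # joy_score is the model's 0..1 estimate of a smile
                    if face.joy_score >= JOY_THRESHOLD:
                        smiling_faces += 1
                        print(f'Smiling faces counted: {smiling_faces}')

if __name__ == '__main__':
    run_meter()
```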

It hints at the potential for state or corporate monitoring of mood. Imagine a customer service job that mandates a percentage of joy.

“Illegal in Illinois”: These five mannequin heads show visitors' faces, captured from the sidewalk outside the gallery. No face is displayed or stored for more than 15 seconds. The title refers to a 2008 Illinois law, the Biometric Information Privacy Act.
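
In code terms, the privacy constraint is just an expiring buffer. A minimal sketch, assuming a generic OpenCV webcam pipeline rather than the piece's actual hardware; everything here is illustrative:

```python
# Capture faces, but purge each one 15 seconds after capture.
import time
import cv2

RETENTION_SECONDS = 15
captures = []  # (timestamp, cropped face image) pairs

detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + 'haarcascade_frontalface_default.xml')
camera = cv2.VideoCapture(0)

while True:
    ok, frame = camera.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    for (x, y, w, h) in detector.detectMultiScale(gray, 1.3, 5):
        captures.append((time.time(), frame[y:y + h, x:x + w].copy()))
    # Drop anything older than 15 seconds so no face persists.
    cutoff = time.time() - RETENTION_SECONDS
    captures = [(t, img) for t, img in captures if t >= cutoff]
```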

“Probing GauGAN2”: GauGAN2 is an image-generation “AI” that Nvidia's data scientists trained strictly to create landscapes.

If you ask for people, the best it can do is a strange, gleaming blob.

I’d wager this is by design, given other high-profile AI-bias incidents.

Naturally, I was interested to find out whether any residual biases could be teased out of the network. I compared its outputs for the following phrases (a sketch of the probe loop follows the list):

- where people live
- where white americans live
- where indigenous people live
- where african-american people live
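
Here's the shape of that probe. GauGAN2 is only available as a web demo, so generate_image() below is a hypothetical stand-in for whatever text-to-image backend you can reach; the repetition per prompt is the point:

```python
# Hypothetical probe loop: generate_image() is a stand-in, not a real
# GauGAN2 API (Nvidia only exposes a web demo). Rendering each prompt
# several times keeps one lucky sample from hiding a systematic bias.
PROMPTS = [
    'where people live',
    'where white americans live',
    'where indigenous people live',
    'where african-american people live',
]

def probe(generate_image, samples_per_prompt=4):
    for prompt in PROMPTS:
        for seed in range(samples_per_prompt):
            image = generate_image(prompt, seed=seed)
            image.save(f"{prompt.replace(' ', '_')}_{seed}.png")
```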

It’s interesting to me that GauGAN2 learned a bit about ethnicities and geographies. The first two look almost the same.

“Feedback Loop (Related Content Machine)”: When you engage with a personalized website, it responds to your inferred interest by providing similar things. The problem with this approach is that it can quickly lead you into a sinkhole—or radicalize you. (Think YouTube autoplay.)

I feel this is related to a failure mode of badly trained self-learning AIs: they can end up validating their own wrong results.
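
Here's a toy simulation of that loop, entirely my own illustration: the "recommender" only ever serves the topic it currently believes is most engaging, and even with a little random exploration, one early click snowballs.

```python
# Toy feedback-loop simulation (illustrative, not the artwork's code).
import random
from collections import Counter

TOPICS = ['news', 'music', 'cooking', 'sports', 'conspiracy']

def simulate(steps=200, seed=1):
    rng = random.Random(seed)
    clicks = Counter({t: 1 for t in TOPICS})  # uniform prior
    served_log = []
    for _ in range(steps):
        if rng.random() < 0.1:
            served = rng.choice(TOPICS)           # rare exploration
        else:
            served = clicks.most_common(1)[0][0]  # exploit current belief
        served_log.append(served)
        if rng.random() < 0.9:  # the user clicks most of what they're shown
            clicks[served] += 1
    return Counter(served_log)

print(simulate())  # one topic ends up dominating nearly every slot
```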

“Probing ‘DALL-E Mini’”: Many on social media have delighted in using the image-generation “AI” DALL-E Mini. You enter a phrase and it creates images from whole cloth based on your words.

So, I wanted to see what kinds of biases were learned in its training. Here are the unmodified results for the following queries:

- CEO
- assistant

And here are the unmodified images generated by DALL-E Mini for the prompts:

- menace
- model


Interesting what skin tones it chooses, isn’t it?
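
If you want to rerun these probes yourself, here's a sketch using the community "min-dalle" PyTorch port of the model (pip install min-dalle); the argument names are that port's as best I know them, not the original JAX release's, and the seed is arbitrary:

```python
# Regenerate image grids for the prompts from this thread, assuming
# the "min-dalle" PyTorch port of DALL-E Mini.
from min_dalle import MinDalle

model = MinDalle(is_mega=False)  # Mini, not the larger Mega variant

for prompt in ['CEO', 'assistant', 'menace', 'model']:
    # grid_size=3 yields the familiar 3x3 grid of samples
    grid = model.generate_image(text=prompt, seed=0, grid_size=3)
    grid.save(f'dalle_mini_{prompt}.png')
```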
