Yosemite under Orion's gaze

Friday, September 20, 2019

A theosophist meets this eccentric...

I'm not qualified to tell you exactly what ImageNet Roulette is. It describes itself as a provocation, a warning system against classification, a Cassandra-like clarion pointing to the dangers of relying on artificial intelligence and machine learning datasets.

What is ImageNet? It is a huge image database, organized and classified according to WordNet, the English-language lexical database developed at Princeton.
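If you want to poke at the skeleton yourself, here is a minimal sketch, my own and not the project's code, using Python's NLTK library, which ships the same WordNet data. ImageNet names its categories after WordNet noun synsets, so the "person" branch is easy to find:

    # pip install nltk, then run nltk.download('wordnet') once.
    from nltk.corpus import wordnet as wn

    # ImageNet categories correspond to WordNet noun synsets;
    # "person" is one of them.
    person = wn.synset('person.n.01')

    # ImageNet labels each category with a "wnid": the letter 'n'
    # plus the zero-padded 8-digit WordNet offset of the synset.
    wnid = 'n{:08d}'.format(person.offset())
    print(wnid, '-', person.definition())

    # The "person" categories are the hyponyms (subtypes)
    # hanging under this synset.
    for child in person.hyponyms()[:5]:
        print(' ', child.name(), '-', child.definition())

Run it and you get the identifier WordNet assigns to "person" plus a handful of its subtypes; it is that branch of the tree, with all its loaded labels, that the Roulette model was trained on.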

You have exactly one week to figure it out for yourself; then it is coming down.

The ImageNet Roulette project has achieved its aims. Starting Friday, September 27th, this application will no longer be available online.
ImageNet Roulette was launched earlier this year as part of a broader project to draw attention to the things that can – and regularly do – go wrong when artificial intelligence models are trained on problematic training data.

ImageNet Roulette is trained on the “person” categories from a dataset called ImageNet (developed at Princeton and Stanford Universities in 2009), one of the most widely used training sets in machine learning research and development.

We created ImageNet Roulette as a provocation: it acts as a window into some of the racist, misogynistic, cruel, and simply absurd categorizations embedded within ImageNet. It lets the training set “speak for itself,” and in doing so, highlights why classifying people in this way is unscientific at best, and deeply harmful at worst.

One of the things we struggled with was that if we wanted to show how problematic these ImageNet classes are, it meant showing all the offensive and stereotypical terms they contain. We object deeply to these classifications, yet we think it is important that they are seen, rather than ignored and tacitly accepted. Our hope was that we could spark in others the same sense of shock and dismay that we felt as we studied ImageNet and other benchmark datasets over the last two years.

“Excavating AI” is our investigative article about ImageNet and other problematic training sets. It’s available at https://www.excavating.ai/

A few days ago, the research team responsible for ImageNet announced that after ten years of leaving ImageNet as it was, they will now remove half of the 1.5 million images in the “person” categories. While we may disagree on the extent to which this kind of “technical debiasing” of training data will resolve the deep issues at work, we welcome their recognition of the problem. There needs to be a substantial reassessment of the ethics of how AI is trained, who it harms, and the inbuilt politics of these ‘ways of seeing.’ So we applaud the ImageNet team for taking the first step.

ImageNet Roulette has made its point: it has inspired a long-overdue public conversation about the politics of training data, and we hope it acts as a call to action for the AI community to contend with the potential harms of classifying people.

So, as the Monkees once sang, apparently I am a believer. Only I'm not, not even a closeted worshipper. And my wife is mildly eccentric at times, yes, but she is no flake. In fact, she is solid as a rock. I think you are better off with a random fortune cookie than sampling this sort of AI swill, but I guess that is their point.

But you should still amuse us: enter your picture and let me have a look-see.


The linked article is an interesting read. Computers and machines are not really very good at describing what they see. The newest mirrorless cameras are using AI to fill in the blanks of your photography, supposedly improving autofocus and making what are supposed to be very educated guesses based on huge databases of other people's pictures of similar subjects. They are largely terrible at it. Adobe tried something similar last year with Sensei, and I gave it a shot; it is frankly awful and mostly useless to anyone with their own set of eyes and a brain. How boring.

I read something a scientist said last week that gave me pause. People kill themselves or each other on the roads and freeways all the time. But a psychological Maginot line will be crossed when machines or autonomous cars start inadvertently killing us based on algorithms and faulty data collection. How will we react to that, and will we ultimately accept it as breaking a few eggs for the greater good?

Tesla kills. Self-driving Uber kills. When Robots Kill: Artificial Intelligence Under Criminal Law.

So am I a theosophist or a microeconomist, which is it?

2 comments:

Anonymous said...

Greetings! I've been reading your site for a long time now and finally got the bravery to go ahead and give you a shout out from Humble, Texas! Just wanted to say keep up the good job!

KAT JOY said...

I didn't know how to post an image here, so I uploaded them to my FTP site. Yes, them. It takes me a few hundred shots to get a decent "photogenic" one or two of me that I can agree with. And I have five I like and wanted to see the outcome from that site:

https://www.katjoy.com/kat-comedian.png
https://www.katjoy.com/kat-hotwheel-clip.png
https://www.katjoy.com/kat-face.png
https://www.katjoy.com/kat-lady.png
https://www.katjoy.com/kat-platinum.png