Janelle Shane, aka @JanelleCShane, is an industrial research scientist. In her spare time, she trains neural networks and runs hilarious experiments. Among other things, she has trained them to create Dungeons and Dragons spells, generate pick-up lines, and invent Irish tune names.
Interview published on July 11, 2019
Q. Could you introduce yourself in a few words?
A. I have a humor blog about artificial intelligence called AI Weirdness. I use fun experiments to look at how AI works, what it's good at, and what it's not so good at.
Q. How would you explain AI (Artificial Intelligence) to a child?
A. AI is a name for a kind of computer program that comes up with its own way to solve a problem, without someone telling it exactly what to do. The idea is that it can learn kind of like a human does (and there's even a kind of AI that imitates the way human brain cells are connected, called a neural network). That's really useful if it's a problem that we don't know how to describe very well - like how to tell the difference between a cat and a dog in a picture. People use AI for all kinds of things - like the camera filters that change your age or give you bunny ears, or translating a web page to a new language.
Q. You trained neural networks to do funny things like learning knock-knock jokes, creating recipes, and inventing Star Wars characters. Which case was the most interesting, in your opinion?
A. I never get tired of generating recipes with neural networks. It's hilarious to see how they don't really understand how baking works, or even how water works (one recipe told me to fold the water and roll it into cubes).
Q. Your experiments are as fascinating as they are humorous. Are you trying to prove something when the algorithm outputs silly results?
A. Since people also use "AI" to describe the super-sophisticated computer programs in science fiction (like R2-D2 or Wall-E), people can sometimes end up thinking the AI we have today is that smart. My experiments show how little our AIs understand about what we're really trying to get them to do.
Q. What is something you’d like to try with neural networks?
A. I'd love to build more neural networks that people can interact with.
Q. We often hear about the people designing AI and the diversity issues that go with it. Are you optimistic about this problem?
A. It's possible to build a biased AI without meaning to, and it takes work to recognize and fix those problems. Because the current tech workforce lacks diversity, the companies that build these algorithms have in general not put enough work into addressing algorithmic bias, and as a result we have biased algorithms sorting resumes or making parole recommendations. The good news is that more people are becoming aware of the problem.
Q. Lots of people are afraid of AI and what it could do in the future. Are you?
A. I'm not worried about AI becoming too smart - I think it's much harder to make an algorithm that's truly smart in a human way than a lot of people believe. But I think there is a real danger of people putting too much trust in an algorithm's decision and not checking to see whether it's faulty or biased. In other words, the danger is not that AI is too smart; it's that it's not smart enough for some of the jobs we're giving it.
Q. Are there any books you recommend about the topics we discussed?
A. I just finished Hello World: Being Human in the Age of Algorithms by Hannah Fry, which is a fascinating look at some of the algorithms that run our world. It's full of entertaining anecdotes about algorithmic mistakes.
Janelle’s book You Look Like a Thing and I Love You: How Artificial Intelligence Works and Why It’s Making the World a Weirder Place will be out on Nov 9; you can pre-order it here.