Ode to a “kabibble”
Or how to introduce 6th graders to the basics of A.I.
Artificial intelligence is a tricky topic for adults to grasp, let alone 6th graders. The team from Intellogo worked closely with Lisa Jacobsen, Pedagogy and Technology Integration Specialist & Technology Teacher at The Study school for girls in Montreal, to define a lesson plan. Our goal is to teach the students how A.I. systems learn, what bias is, and how bias is naturally a part of every A.I. implementation. We specifically want the students to understand the role humans play in how bias enters A.I. systems, and to appreciate that they can positively impact A.I. by recognizing and planning for bias.
The 6th graders at The Study already have a background in hands-on programming. They understand the basics of conditional logic and what a programming language is. At school, they are taught from kindergarten that a computer is only as intelligent as the person programming it: the robot or program will do exactly what it is told to do and nothing more. A.I. is quite different, so we build on this knowledge by explaining the difference between writing a program and teaching an A.I.
Next up, explaining how A.I. works using the Kabibble game.
The students are told that they are now the A.I. behind Google image recognition. I am going to teach them to recognize a new kind of object, a kabibble. I’ll do this by holding up a series of objects and telling them whether each object is a kabibble or not. After each object, I’ll ask them, “What is a kabibble?”
The first object is a DVD box for a cartoon. I say “This is a kabibble.” I ask the class “What is a kabibble?” and I get all kinds of answers: It’s a DVD; it’s a cartoon; it’s a box with lettering and an image on it; it’s something you watch; it’s just an object; etc. I point out how random all their answers are because there are no common features yet for them to latch onto.
The second object I hold up is a stapler. I say “This is not a kabibble.” I ask the class again, “What is a kabibble?” We get a lot of the same answers as before, but we also start to get answers that rule features out: whatever is unique to the stapler is probably not part of a kabibble.
We continue this process, holding up a book, an iPad, a sheet of paper, and a waste paper basket, and pretty soon almost everyone in the class agrees that a kabibble is a three-dimensional, rectangular box. Now we explain that since they know how to recognize a kabibble, we can hold up anything and they will pretty much correctly classify the object as either being a kabibble or not. There is no need to program each object on a case-by-case basis. This is the difference between programming and artificial intelligence: A.I. systems are taught a concept, and once it is learned, the system continues to make its own determination about a classification without needing to be explicitly programmed for it.
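For readers who want to see the idea in code, here is a minimal sketch of learning a concept from labeled examples. This is not the classroom exercise itself, and the feature names are invented for illustration: each object is described by a hand-picked set of features, and the learner keeps only the features shared by every positive example.

```python
# Toy concept learner in the spirit of the kabibble game.
# Feature names below are illustrative, not from any real system.

def learn(examples):
    """Keep only the features shared by every positive example."""
    positives = [set(f) for f, is_kabibble in examples if is_kabibble]
    hypothesis = set.intersection(*positives)
    # Negative examples act as a sanity check: the learned rule
    # should not match any of them.
    for features, is_kabibble in examples:
        assert is_kabibble or not hypothesis <= set(features)
    return hypothesis

def classify(hypothesis, features):
    """An object is a kabibble if it has every learned feature."""
    return hypothesis <= set(features)

lesson = [
    ({"3d", "rectangular", "box", "lettering"}, True),   # DVD box
    ({"3d", "metal", "hinged"}, False),                  # stapler
    ({"3d", "rectangular", "box", "pages"}, True),       # book
    ({"flat", "paper"}, False),                          # sheet of paper
]
rule = learn(lesson)  # -> {"3d", "rectangular", "box"}
# A never-before-seen tissue box is classified with no new programming:
print(classify(rule, {"3d", "rectangular", "box", "cardboard"}))  # True
```

Real image recognition learns its features from raw pixels rather than hand-labeled tags, but the pattern is the same one the students just acted out: extract what the positive samples have in common, and check it against the negatives.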
The kabibble exercise is great because it teaches the students some of the fundamental concepts of artificial intelligence. Through a fun interactive process we introduce the basic concepts of features, patterns, labels, classification, positive and negative samples, and teaching vs. programming. The last concept we need to teach is bias.
This is one of the most fun parts of the class for the students. It’s almost all hands-on and self-explanatory. To kick off the session we ask the students to go onto their computers and Google “Why is my dad…” and see how Google autocompletes their query. We give the students a whole bunch of different queries for Google to autocomplete like “why is my mom…”, “why is my teacher…”, “who was the first person in the world to…”, “never put a…”, “Is m…”
We explain that Google auto-completes by drawing on queries from millions of previous users who typed something similar. The students love the auto-completed queries, and we use them to show how bias enters into the A.I. training behind these auto-completes.
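The mechanism can be sketched in a few lines. This is a toy model only: the queries and counts below are invented for illustration, and a real engine's ranking is far more sophisticated.

```python
# Toy autocomplete: suggest the most frequent past query that
# starts with what the user has typed so far.
from collections import Counter

# Made-up history of past queries and how often each was typed.
past_queries = Counter({
    "why is my dad so tall": 30,
    "why is my dad always tired": 20,
    "why is my mom so nice": 25,
})

def autocomplete(prefix):
    matches = [(count, query) for query, count in past_queries.items()
               if query.startswith(prefix)]
    return max(matches)[1] if matches else None

print(autocomplete("why is my dad"))  # -> "why is my dad so tall"
```

Whatever biases are present in the stored queries flow straight into the suggestions; the system has no opinion of its own.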
Next we want to connect how bias has unintentional consequences. In other words, when we don’t consider bias when training A.I. systems bad things can happen. For this we show the following video.
Hands-on training a real A.I. system
As a final step in our journey toward understanding artificial intelligence, we log the students into the Intellogo cognitive platform to teach it their own concepts. We divide the class into 4 groups. Each group chooses a concept and then finds web pages they consider either positive or negative examples of that concept. For example, one group chooses sushi as their concept. They must find good examples of web pages that talk about sushi, as well as good examples of pages that don’t.
We help them input their positive and negative samples into the system and then have them press a button to initiate training. As the system tries to learn the concept, scanning a million web pages to find what it thinks are pages about their concept, we explain what’s going on. The system is using their positive samples to try to extract common features, and their negative samples to see if there are common negative features.
When the results come back, we ask them to debate why they are seeing false positives and false negatives. This involves reading each result in question to determine whether the system made an error or not. Great debates rage about whether a recipe for sushi rice is actually about sushi. We explain that if they choose to put the recipe in the negative samples, they will be biasing the system against recipes involving sushi rice. Is that okay? More debating.
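The sushi-rice debate can be made concrete with a toy word-overlap classifier. This is a stand-in sketch, not the Intellogo platform's actual method, and the one-line sample "pages" are invented for illustration.

```python
# Toy concept classifier: a page counts as on-topic when it shares
# more words with the positive samples than with the negatives.

POSITIVE_SAMPLES = [
    "sushi rolls made with fresh fish and seaweed",
    "how to order sushi sashimi and nigiri",
]
NEGATIVE_SAMPLES = [
    "pasta recipes with tomato sauce",
    "grilled steak and potato dinner ideas",
]

def words(text):
    return set(text.lower().split())

pos_vocab = set().union(*(words(t) for t in POSITIVE_SAMPLES))
neg_vocab = set().union(*(words(t) for t in NEGATIVE_SAMPLES))

def is_about_concept(page):
    w = words(page)
    return len(w & pos_vocab) > len(w & neg_vocab)

print(is_about_concept("fresh sushi rolls near me"))        # True
print(is_about_concept("sushi rice recipes with vinegar"))  # False: it
# ties with the negatives, because "recipes" appears only in them
```

Whether that second result is a false negative or correct behavior is exactly the judgment call the students debate: the labels they choose become the system's bias.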
That’s the real power of this lesson. We want the students to understand that A.I. doesn’t just sit out there alone. Humans are an integral part of it, and decisions like the ones they just debated shape how these systems perform. At the end of the day we want the students to be inspired to continue their journey of understanding and working with artificial intelligence. After all, a great many of us will be depending on the systems these young minds create.
If you like this post, don’t forget to recommend and share it. Check out more great articles at Code Like A Girl.