This is a story of how algorithms can cause harm in unexpected ways.

It takes place in the land of Polynomia, just as summer is about to start.

Everyone in Polynomia lives in one of three cities: Upp, Dow, and Sides.

Polynomia is home to lots of shapes!

The shapes go to school, and during the summers they like to go to camp.

There are three popular summer camps in Polynomia: music camp, outdoor camp, and math camp. They're all in the middle of Polynomia.

In the past, only shapes from Upp went to camp because Upp sponsored it.

This year, camps will be open to all shapes in Polynomia for the first time!

The summer camp directors are expecting to receive hundreds of applications.

To make this process more efficient and fair, the directors have decided to use an algorithm to help make decisions about who goes to each camp.


What might be an advantage of using an algorithm?
What problems might come up?


How do we train this algorithm to make decisions?


There's a record of who has attended which camps in the past. We have a historical dataset.

These are the features the algorithm considers.

The algorithm uses the historical data to train a model, tweaking it little by little.

Once the model fits the data pretty well, the algorithm stops.
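
If you want to picture what that training step looks like in code, here is a minimal sketch in Python. The feature names (number of sides, number of equal angles, home city) and the choice of a decision tree are illustrative assumptions, not the actual features or model the camp directors use.

```python
# A minimal sketch of training a model on historical camp records.
# The features and the model type are illustrative assumptions.
from sklearn.tree import DecisionTreeClassifier

# Hypothetical historical records: one row per shape that went to camp.
# Columns: [number of sides, number of equal angles, lives in Upp (1/0)]
X_history = [
    [3, 3, 1],  # equilateral triangle from Upp
    [4, 4, 1],  # square from Upp
    [4, 2, 1],  # rectangle from Upp
    [3, 0, 1],  # scalene triangle from Upp
]
# The camp each of those shapes attended.
y_history = ["music", "math", "outdoor", "music"]

# "Tweaking it little by little": the learner adjusts its internal rules
# until its predictions fit the historical records reasonably well.
model = DecisionTreeClassifier(max_depth=2)
model.fit(X_history, y_history)
```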

Let's use the historical dataset to train the algorithm, and see what happens.

Train from 2017 data
In your own words, how does the algorithm "learn" what to do?

Okay! Let's hear from the shapes who are excited about summer camp this year and have sent in their applications.

Awesome! Now we can deploy the model that the algorithm trained.

Notice how the model sorts the shapes into the summer camp it thinks each of them should attend.

Deploy model on this summer's applications
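
Deploying the model just means running this summer's applications through it and reading off its predictions. Continuing the sketch above (with the same made-up features), that might look like:

```python
# New applications from all three cities, encoded with the same
# hypothetical features used during training.
X_applications = [
    [3, 3, 0],  # equilateral triangle, not from Upp
    [5, 5, 0],  # regular pentagon, not from Upp
    [4, 4, 1],  # square from Upp
]

# The deployed model sorts each applicant into the camp it predicts.
assignments = model.predict(X_applications)
for features, camp in zip(X_applications, assignments):
    print(features, "->", camp)
```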

The next day at school, some of the shapes aren't too happy with the model's decisions.

If algorithms and models always knew better than humans...

...we'd never have to think critically about the results.

Let's use our own reasoning to discuss the outcomes of the model's decisions.

Let's start with one city: Upp.

What do you notice or wonder about the model's decisions?

Use the buttons below to visualize the results in a few different ways.
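
The buttons do this interactively, but the same kind of check can be done in code: tabulate the model's decisions against an attribute of the shapes and look for patterns. The column names and values here are illustrative assumptions, not the page's actual data.

```python
import pandas as pd

# Hypothetical results: each row is an applicant and the camp
# the model assigned them to.
results = pd.DataFrame({
    "city": ["Upp", "Upp", "Dow", "Dow", "Sides"],
    "sides": [3, 4, 3, 5, 4],
    "assigned_camp": ["music", "math", "music", "music", "outdoor"],
})

# Count assignments per city to spot patterns in who is sent where.
print(pd.crosstab(results["city"], results["assigned_camp"]))
```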

Explore more with these questions from people on the People + AI Research team.
Is there any pattern to the shapes the algorithm didn't work well for?
Why might this be? Does this seem fair?
Let's see what this trained model decides in a different city.

Let's look at another city: Dow.

Explore more with these questions from people on the People + AI Research team.
Is there any pattern to the shapes the algorithm didn't work well for in Dow?
How do these results compare to the results you saw earlier for Upp?

And finally, for Sides...

Explore more with these questions from people on the People + AI Research team.
Is there any pattern to the shapes the algorithm didn't work well for in Sides?
How do these results compare to the results you saw earlier for Upp and Dow?
What does this mean for algorithms and models in the real world?
Some closing thoughts
1. “What’s fair?” is a complicated social question
2. Many algorithms are built from human judgments, which might not be fair
3. One size probably doesn’t fit all, even if it’s faster and cheaper
4. Algorithms can unintentionally amplify existing harms
5. People impacted by algorithms need a voice in what’s fair

Connect with other folks below to learn more!