The Pain Test

A lot of math students don’t like math. That’s a problem worth tackling. One point of The Pain Test is to question our assumptions. The assumption that many students don’t like math seems sound, but it’s worth asking “why?”

We could found a startup to tackle students’ dislike of math from any number of angles, and several would be valid. Students don’t like math because their teachers don’t know how to teach it clearly. Students don’t like math because they get bad grades in math, leading to all kinds of social stigmatization. Students don’t like math because their experience of math has been one of button-pushing and formula-memorization and not at all creative.

I’m pursuing the theory that students don’t like math class because math class seems divorced from the world they live in, with text-based problems describing (for instance) two trains leaving Philadelphia going in opposite directions.

To solve this problem, teachers rely on publishers to include problems in their curricula that are "real world," though as stipulated earlier, these problems don't look like any real world a student has lived in. There are also various supplementary resources where teachers can find real world tasks they hope will engage their students.

Some of these supplementary resources are quite good. They offer interesting mathematical investigations of the world students live in. (I'll discount most video games from this category, not because they aren't effective, but because they're effective at the drill and practice of existing skills, less so at the facilitation of new ones.) These resources are often content repositories, which you pay a monthly or yearly subscription fee to access, at which point you can download any learning materials created by the owners.

There are three problems with these sites.

  1. There aren’t enough of them. There is a wealth of adaptive assessment engines that will throw math problems at your students all day long but comparatively few places for teachers to purchase interesting math problems for use with their students.
  2. These problem banks are understocked. I won’t cite specific cases, but the largest of these problem banks tops out at several dozen problems. Meanwhile, as a teacher, I don’t just want a lesson that interested the author’s students when she was a teacher. Her students were two years younger than mine and lived in Kansas. My students are two years older and like the beach. I want to pick from a broader selection of good problems so I can tailor them more specifically to my students’ interests.
  3. It takes time to learn to teach these lessons. In some cases, they come with teacher guides, true, but my working hypothesis is that every teacher has a threshold on the amount of time and effort she’ll invest in learning to teach a new lesson. Past that point, she’ll stick with whatever lesson she already had. It’s crucial, then, that the problem bank has a “house style” that’s easy to pick up on and extend to other tasks. As the teacher gets more experience teaching with that style, other problems will be easier to implement.

I’ll be surveying math educators about this shortly, but check my thinking: is my assessment of the pain accurate?

The Idea

A three- to four-sentence summary of your idea.

A student with a question in her head is a useful thing for math teachers. That’s a student we can teach, particularly if the question is mathematical. Words on paper can provoke those questions but other media — pictures and short videos — can do better. We need a website where we can find pictures and short videos that are proven to provoke questions that mathematics can answer.

What problem does your idea solve?

Two problems collide: 1) kids are, by and large, bored in math class, and 2) the Common Core State Standards ask students to do more than apply formulas to problems on paper in a textbook. Math problems that build from digital media have the potential to engage students and challenge them, but they’re hard to find. We need a crowdsourcing of those media and a unique curation mechanism for determining, not “which one is funny or interesting or worth retweeting?” but “which one provokes a question?”

How does your idea fix the problem?

At 101questions, people upload a short video or an image. It goes onto the homepage where people either ask a question about it (if they have a question) or click “skip it” (if they don’t). We get a clear picture very quickly of the images and videos that provoke the most questions. Then users can upload or link to the tools, information, and resources that will help students answer their question — a complete, engaging, and challenging learning experience.
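That question-or-skip mechanism can be sketched in a few lines. This is a hypothetical illustration of the ranking idea, not 101questions' actual internals; the data structures, field names, and sample uploads are all invented for the example.

```python
# A minimal sketch of ranking uploads by how often viewers ask a question
# rather than click "skip it." The uploads and vote counts are invented.

def perplexity_score(questions, skips):
    """Fraction of viewers who asked a question about the upload."""
    total = questions + skips
    return questions / total if total else 0.0

uploads = [
    {"title": "water tank filling", "questions": 83, "skips": 17},
    {"title": "two trains leaving Philadelphia", "questions": 12, "skips": 88},
    {"title": "stacking cups", "questions": 45, "skips": 55},
]

# The homepage surfaces the most question-provoking media first.
ranked = sorted(
    uploads,
    key=lambda u: perplexity_score(u["questions"], u["skips"]),
    reverse=True,
)

for u in ranked:
    print(u["title"], perplexity_score(u["questions"], u["skips"]))
```

The score here is a raw question rate; a production site would likely want something that also accounts for small sample sizes, but the core signal is the same.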

Why do you want to fix the problem?

I’ve seen firsthand the transformative effect of this kind of curriculum design. I see it in workshops and classrooms scattered around the world. It’s time to centralize that process and automate some of its pieces.

Space: Dan Meyer On Personalization

Here’s Einstein:

Everything that can be counted does not necessarily count; everything that counts cannot necessarily be counted.

So what can computers count? The enthusiasm for personalized learning (one of the “trends to watch” in our assignment description) requires we answer that question.

If you’re Netflix, personalization means you know a) what movies a customer has rented and b) how she’s rated those movies on a scale from one to five. That’s what can be counted. The system counts it and then does its best to recommend another movie she’ll like. Netflix does a good job with those two data points, but it’s far from perfect.
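Recommending from those two countable data points can be sketched very simply: find the user whose ratings most agree with yours and suggest something she liked that you haven't seen. This is an illustration of the general nearest-neighbor idea, not Netflix's actual algorithm, and the users and ratings are invented.

```python
# An illustrative sketch of rating-based recommendation. Invented data;
# not Netflix's real system, just the "count what can be counted" idea.

ratings = {
    "you": {"Moneyball": 5, "Heat": 4},
    "ann": {"Moneyball": 5, "Heat": 4, "Drive": 5},
    "bob": {"Moneyball": 1, "Heat": 2, "Speed": 5},
}

def agreement(a, b):
    """Negative total rating gap on shared movies (higher = more alike)."""
    shared = set(a) & set(b)
    if not shared:
        return float("-inf")
    return -sum(abs(a[m] - b[m]) for m in shared)

def recommend(user):
    me = ratings[user]
    # Most similar other user, judged only on movies both have rated.
    neighbor = max((u for u in ratings if u != user),
                   key=lambda u: agreement(me, ratings[u]))
    # That user's highest-rated movie the target user hasn't seen.
    unseen = {m: r for m, r in ratings[neighbor].items() if m not in me}
    return max(unseen, key=unseen.get) if unseen else None

print(recommend("you"))
```

Everything this sketch "knows" is a number on a one-to-five scale, which is exactly the limitation the next paragraph pokes at.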

Netflix might do a better job personalizing your movie-watching experience by having you type in a brief paragraph explaining your ranking. (Did you like a particular actor? Did one of the underlying themes resonate with you?) Then a Netflix employee would read that information and send you a recommendation.

I’m not saying that’s a good idea. That kind of personalization would be very expensive, for one. But that’s the axiom.

Good personalization is expensive.

Many math education startups (including those that have caught the eye of our instructors) have already met the challenge of cheap personalization, but good personalization still eludes them. Cheap personalization is easy if you look at math from the right angle. From that angle you see a lot of binary, right-or-wrong answers. You see multiple choice questions. You see text fields that are easy to parse as integers and then evaluate against a key. And those data are cheap. You can feed them into a machine that will then tell a student which video lecture she should watch next.

It’s cheap but it isn’t good. Students don’t enjoy it (see the section titled “Beyond the Videos“) and, per Einstein, it turns out that just because it can be counted easily and cheaply doesn’t mean it counts. These startups take feeble data and pile a year’s worth of personalized recommendations on top of them. But feeble data return too many false positives (they claim a student knows something she really doesn’t) and too many false negatives (they claim a student doesn’t know something she really does). The personalization fails and so do the students.

So what’s good?

What’s good is giving students constructed-response tasks that require them to empty out the contents of their heads, exposing all kinds of misconceptions about math to a trained educator, who can recognize them and help.

But educators are expensive. And a lot of teachers don’t have the training or interest in gathering those kinds of data, anyway, which results in the worst of both worlds: bad, expensive data.

Is there a third way here? Good, cheap data? ActiveGrade is interesting. It features a teacher dashboard similar to Khan Academy’s. The evaluations are made by a student’s teacher rather than a machine, though, so it allows for better, more accurate personalization, if also more expensive.

It would be an interesting experiment to scan lots of classes’ constructed responses and send them to the same trained math educator, who could evaluate those hundreds of assessments at a desk in an hour. I’m not volunteering, and I’m not saying I prefer that hypothetical option to a trained educator’s understanding of her own students’ understanding. But I prefer that hypothetical option to the current reality of personalized math education.

Dan Meyer — Introduction

The first week’s project in #edstartup is to introduce ourselves by way of a short video. Here’s mine.

The text:

My name is Dan Meyer. I’m a doctoral student at Stanford University in math education. It’s impossible to be at Stanford, in the belly of Silicon Valley, and not be curious about startup culture and startups generally. In math ed, a lot of them seem to be focused on cutting costs and finding inefficiencies rather than creating new sources of value. Those are interesting goals, certainly, but their end result so far seems to be putting video lectures online, which I think seems to fall short of the promise of technology so I’m real interested in ways of capturing that value and creating that value. So that’s my goal here, for this course. My initial efforts are at a website called 101questions, which aims to help math teachers find interesting, perplexing math problems. Looking forward to learning with you guys.