Apathy is the cancer of today’s classroom. Once it takes root in a student’s mind, it can be one tough beast to eradicate. Complaints like “I don’t care about this” and “When would I even use this?” are frighteningly common in higher education and indicate a malady far worse than boredom: time wasted. Not only are students wasting time in class, cranking out hours of work to learn material that won’t be retained or appreciated, but teachers are also wasting their time preparing mechanical, need-agnostic material that will ultimately make no impact on their students’ interaction with the world.

In one way or another, the point of education is to learn how to live in this world—whether to learn specific skills for a job that will pay for our livelihoods or to learn ideas that shape how we react to the people, laws, and situations around us. And one of the most important ways in which education applies in real life is in being able to recognize when and how different concepts pertain to the situation at hand. Too often in classrooms are we given the punch line before the build-up. As education blogger Dan Meyer puts it, teachers are too excited to present the concept without spending adequate time *motivating* the question behind the concept. The result is a lack of internalization of ideas, a failure to understand why the material being taught is relevant.

As I explained in my last post, the Interactive Mathematics Program (IMP) puts in an admirable effort to motivate the need for mathematical concepts. It presents a tough, multi-faceted problem at the beginning of each unit and develops the need for various math topics as they pertain to this overarching problem. One of the most memorable and instructive units was one in which we attempted to solve the unit problem on the very first day without any formal tools whatsoever. The best part (though frustrating for me at the time) was that this was *really hard*. There is definitely something to be said for having students try to do things inefficiently to learn the *merits* of the concepts that teachers are so excited to get to. Or in Dan’s words, there is definitely something to be said for “being less helpful”—not jumping straight to teaching students about power tools but momentarily convincing them that the logs on their desks can only be cut with butter knives.

Introductory statistics courses can definitely benefit from a more problem-oriented and “less helpful” mentality. For one, students face a deluge of formulae, and most of the time the reasons for using them are never really developed or are lost in the memorization process. The topic of confidence intervals serves as a great example here. We drill into students’ minds that the formula for a confidence interval is an estimate plus or minus some multiple of the standard error. And we can tell them that the interval gives us a range of possible values for the true population mean. But why are these possible values good ones? Why do we even have to give a confidence interval? Why isn’t it enough to just say that the mean is *around* 5, say? When do I ever see confidence intervals in real life?
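For concreteness, here is the formula in action—a minimal sketch using made-up sample data (the numbers are hypothetical, and the 1.96 normal multiplier stands in for a slightly wider t-based one):

```python
import math
import statistics

# Hypothetical sample: percent cholesterol reduction measured for 20 people
reductions = [4.1, 3.8, 5.0, 2.9, 4.4, 3.5, 4.8, 3.2, 4.0, 4.6,
              3.9, 4.3, 2.7, 5.2, 3.6, 4.1, 3.3, 4.7, 3.0, 4.5]

n = len(reductions)
mean = statistics.mean(reductions)                  # the estimate
se = statistics.stdev(reductions) / math.sqrt(n)   # the standard error

# "Estimate plus or minus some multiple of the standard error":
# 1.96 is the multiple for an approximate 95% interval
lower, upper = mean - 1.96 * se, mean + 1.96 * se
print(f"mean = {mean:.2f}, 95% CI = ({lower:.2f}, {upper:.2f})")
```

The mechanics are trivial—which is exactly the point: the formula itself is not where students struggle, the *why* is.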

A better way to teach this idea than to present a formula and give a two-minute interpretation is to make students see how the idea of a confidence interval is one they are exposed to frequently but in disguise. One way to do this is to present the students with common types of advertisement tricks.

The statistics presented in these ads are instinctively somewhat convincing. Hey, a 4% reduction in cholesterol is pretty good. Wow, 90% of doctors recommend this product! These are definitely worth buying! But we can also think about what numbers on these ads would make us less convinced of the products’ worth. A cholesterol reduction of 0 to 1% would definitely not make me want to buy Cheerios. And I would be much less impressed by this Colgate toothpaste if 50% or fewer of doctors recommended it.

Now we give the students data—several sets of data that support or fail to substantiate the Cheerios claim to varying degrees—and ask them to draw their own conclusions. I would expect many of them to take the average lowering of cholesterol as a summary measure; some might look at the median or mode; some might look at the percentage of people whose cholesterol was lowered by some minimal amount. No matter how they choose to look at the data, the key idea is that the students see how their chosen summary measure(s) vary from dataset to dataset. Sometimes the claims in the ad are supported, and other times they are definitely not. It’s just that companies often report only their summary measures, not how much those measures would vary had they tested their product on different groups of individuals.
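The datasets handed to students could be generated by simulation. A minimal sketch, with an assumed true reduction of 4% and made-up person-to-person noise, showing how the reported “average” bounces around from one group of testers to the next:

```python
import random
import statistics

random.seed(0)  # reproducible for the sake of the example

def run_study(n=30, true_mean=4.0, sd=3.0):
    """One hypothetical study: percent cholesterol reduction for n people."""
    return [random.gauss(true_mean, sd) for _ in range(n)]

# Ten different groups of testers -> ten different "averages" an ad could report
reported_means = [statistics.mean(run_study()) for _ in range(10)]
print([round(m, 2) for m in reported_means])
```

Each run is the same product and the same underlying truth, yet the summary measure differs every time—which is precisely the variability the ad never mentions.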

After this point, I think that students would be a little more ready to learn about confidence intervals because they have seen how advertisements, something they encounter all the time, can use (or really conceal) them in misleading ways. And no one likes being duped.

This is far from the best way of motivating confidence intervals, but it is more than can be said for the majority of introductory statistics classes. Just taking a little more time to think about the everyday applications of statistics can go a long way in making lessons less formulaic and more engaging, and this is something that the statistics community should strive for.