I’m sure many of us remember or are familiar with the “Math Wars”: two divergent schools of thought clashing vehemently over how mathematics should be taught in schools, procedural fluency vs. conceptual understanding. In some ways the battle rumbles on, even though most teaching materials and methods these days approximate “a bit of both.” However, recent advances in widely available commercial AI software have rendered the battle completely moot.

There has been a succession of products, first Photomath and now several AI programs, that allow anyone with a subscription to drop in a screenshot of any math problem and, within a few seconds, receive an absolutely perfect response. On one level, this renders all math homework somewhat irrelevant; on another, it raises serious questions about the role of humans in the learning of mathematics. Math class has long felt pretty irrelevant to many students, and the old math teachers’ adage of “you won’t always have a calculator in your hand” is now so outdated it’s tragic. Not only will today’s students likely always have a calculator in their hands, they will also have a tool that can solve any word problem and give them perfect answers. In this scenario, really, what’s the point of learning math?

By now, it has become clear that the old math wars are over. As I discussed in an earlier blog, “My Daughter Doesn’t Need My Mother’s Math Education,” it was already obvious that learning procedures for the sake of procedural fluency alone was a dead end, despite the protests of “but that’s how I learned it.” Now, with the power of commercially available AI, the details of any symbolic problem or math word problem can be laid bare almost immediately.

So what *does* have value anymore? What is the role of humans alongside AI? And what the heck are we supposed to teach the kids?

I’d like to look at two math problems, both concerning fractions at the upper elementary level.

Problem A is a released item from the SBAC Common Core testing consortium.

Problem B comes from youcubed.org.

First, let’s ask this question, “Which of these problems is more difficult for students?”

While there may be some difference of opinion on this, most teachers are likely to say that A is more difficult because it’s a word problem. This would match my experience both in and out of the classroom at the Los Angeles Unified School District during the first decade of this century.

Now let’s ask a different question, “Which of these problems is more difficult for an AI?”

Hmm. Neither of them is difficult? A is more difficult? B? This question intrigued me, so I bought a subscription to an AI math solver and ran some experiments. Would it solve them? I asked myself.

The AI took about 0.3 seconds to answer with complete perfection: it spelled out in detail the correct method for word problem A, converting 3/10 to 30/100, and told me with unerring accuracy that C was the answer. For the record, it did exactly the same with a calculus word problem involving partial differential equations. The complexity of the math was no match for it.

So then I dropped in B. At first, I was stunned by what I was reading. It was so good. I’m paraphrasing, but it said, “First we need to determine the total number of squares in a grid. Each grid is a 10 by 10 square with 100 squares.” This is excellent, I thought, and it followed it up with statements of immense certainty, “In the first grid there are 4 vertical stripes each consisting of 10 squares, so 40/100 are shaded which can be reduced to ⅖.” Nice…

What? The eagle-eyed among you may want to look at the picture again. Four vertical stripes? I don’t think so; there are three. It also only found answers (and wrong ones at that). It made no mention of the strategy you’d want to emerge from a class discussion with students: although the first three grids show different patterns, in each one there are always three shaded squares in each column and three in each row, so you don’t have to count the squares at all; once you spot the pattern, you can tell each one is 3/10. And there’s a dog!

So this got me really thinking. How well is the AI trained in visual problem solving? For the last 15 years, I have designed visual math games and interactive learning experiences for a living, so I threw in an ST Math puzzle, Alien Bridge, which is also about fractions, to see what it would make of that.

For those of you unfamiliar with ST Math, it’s a game-based learning system in which students use Spatial Temporal (ST) reasoning to solve visual puzzles, helping a penguin, JiJi, across the screen. In the above puzzle, the question being asked, visually, is this: one alien spaceship is shaded to show ½ and another is shaded to show ¼. The students have to manipulate the area model below the spaceships to create an amount equivalent to this sum; in this case, they would build 6/8, since ½ + ¼ = ¾ = 6/8. And here’s where the AI really started to lose the plot. It told me in no uncertain terms that the answer was 3 + 5 = 8. At first, I thought maybe it was seeing something connected to the denominator being 8.

And then I realized what it was likely doing. It is so desperate to see language and symbols that it interpreted the half-shaded square on the left spaceship as a 3 (can you see it?) and the quarter-shaded square on the right spaceship as a slightly weirdly drawn 5. The AI loves language and symbols, and I’m sure its ability to read symbols in badly written text is awesome, but in this context it’s trying to see what isn’t even there. It really struggles to make ANY sense of visual mathematics.

OK, now I was ready to throw the AI the ultimate test: Upright JiJi, a pure spatial-reasoning ST Math problem that we give to kindergarten and first-grade students and that has literally no symbols at all.

In this game, students have to choose a series of 3 rotational moves to get JiJi the penguin from the current position (legs pointing out of the screen, beak to the left, etc.), to an upright position ready to walk off as seen in ghostly form on the right. What would the AI make of this?

I could not have been more shocked. It did find some symbols on the screen after all: the dummy demo-account student name, “I. Newton”. It loved that, and it spat out a brilliant summary of the life and achievements of Isaac Newton:

So this was fascinating. The more accessible we make the mathematics and the thinking to students (humans) by making it visual, challenging, and maybe even interactive, the LESS accessible we make it to an AI trained on a diet of language and symbols to be the ultimate math homework cheat code.

Now it’s clearer than ever: the math wars are long over. The role of humans in learning mathematics is what it always should have been: rigorous training in a system of thought about patterns and problem-solving. Fluency within this system still has massive value. We can talk another time about the need to reduce working-memory load while solving non-routine problems, but procedural fluency is no longer useful as the sole objective of math class. The goal is to show how human you are and to develop your creative reasoning, your productive struggle, and your problem-solving skills.

To all the math teachers, especially those in middle and high school grappling with kids using apps to do their homework: assign more visual tasks. Having a student explain and discuss with others how they solved one good, interesting puzzle is worth a hundred textbook repetitions of the same question with different numbers. And to make sure you really throw the AI off the scent, maybe just add the name “Isaac Newton” somewhere on the page and see what happens.

Nigel Nisbet is the Vice President of Content Creation at MIND Education, a non-profit organization dedicated to equipping all students to solve the world's most challenging problems. He is also the author of the E-book “I think, I try, I learn” and presenter of the TEDx talk “The Geometry of Chocolate."

Copyright © 2024 MIND Education®. All rights reserved.
