




MSPnet Blog: “Apples, oranges, and TIMSS”


posted December 5, 2016 – by Brian Drayton

New TIMSS test scores are out, and the commentators are commenting. Generally speaking, international comparisons are used in the popular media, and in policy debates, as rhetorical weapons, to renew or update the hair-on-fire language of A Nation At Risk about how the poor quality of our educational system is causing the USA to decline from Top Nation status. The tut-tutting has been muted thus far on this release, since the US election and its aftermath have been the big stories, but the overall story line is set already, as in the blog post by Ed Week’s Sarah Sparks, “Rising, but mixed, math science performance.” I note that the goal posts are shifting, and that comparisons over time must be muddied by the re-design of the tests themselves. Sparks quotes a researcher at Boston College:

 “When we started [conducting TIMSS] in 1995, our math was all content—algebra, geometry—and in science, chemistry, physics…but now we also include cognitive demands, thinking skills … school is getting to have a broader dimension that is quite different than it was 20 years ago.”

Looking at the horse race, we can see that the US math scores come in well behind the front-runners (US 539 at fourth grade, compared to, e.g., Singapore at 618 and Korea at 608), as well as some non-East Asian stars, such as Northern Ireland (570) and Kazakhstan (544). Similar results are seen in science: US eighth graders at 530, compared with, e.g., Singapore (top again, 597), Japan (571), and Kazakhstan (533).

What do we think about this? I have to say I don’t think about it much at all, because there is persistent evidence that these comparisons are not very informative. A paper by Carnoy et al., published by the Economic Policy Institute about a year ago, “Bringing it back home,” argues that country-to-country comparisons are often deceptive — and there are many nuances to this. For example: while we sometimes hear that impoverished students in the US are pulling our scores down, Carnoy and his co-authors cite evidence that these “average scores” obscure positive trends among such students:

Focusing on national progress in average test scores obscures the fact that socioeconomically disadvantaged U.S. students in some states have made very large gains in mathematics on both the PISA and TIMSS—gains larger than those made by similarly disadvantaged students in other countries.

In this study, and in another by Carnoy alone, “International test comparisons and educational policy,” the researchers suggest that because of the tremendous differences between the educational systems of other nations and the 51 systems here in this country, “comparison” is hard to establish rigorously. By contrast, the important differences among the US states might be more fruitful ground for seeking comparisons, to understand why some states rank very high (even on the international comparisons) and others very low. For example, Carnoy et al. write:

As a suggestive strategy for further (qualitative) policy research, we paired off states with different patterns of gains in 8th grade math. This reveals, for example, that 8th grade students in Massachusetts made much larger gains after 2003 than students in Connecticut, that students in New Jersey made larger gains than students in New York after 2003, and that students in Texas already started out scoring higher in 8th grade math in 1992, but still made larger gains over 1992–2013 than students in California, especially after 2003.

This strategy might have the additional benefit of opening paths to more coherence across this country in educational inputs, e.g. in the opportunities for learning available to all children;  or in methodologies, e.g. a significant shift towards an inquiry approach, or a reduction in harmful levels of testing.

Yong Zhao writes about this year’s math scores in a piece that appears in the Washington Post’s Answer Sheet blog. His piece is entitled “East Asians topped US students again on international tests. But are their schools really better?” He points out first that US students have never scored at the top of international comparisons. Indeed, US scores have stayed roughly where they are, relative to other countries’, throughout the era of international testing, through one administration and “reform” wave after another. This has meant that test scores have served as perennial go-to ammunition for people making the case that schools are in decline and national mediocrity will result — even as the US has remained durably near the top in world measures of competitiveness, creativity, and productivity.

Zhao brings some other results from international comparisons that are thought-provoking, rarely mentioned, and, in my mind, argue for the intra-national comparisons that Carnoy et al. advocate. For example (I present only the claims; he provides the stats!):

1) East Asian parents are not “very satisfied” with their schools.
2) East Asian schools do not necessarily put a “very high emphasis” on academic success.
3) East Asian teachers are not “very satisfied” with their jobs.
4) East Asian students do not have a “high sense of school belonging.”
5) East Asian students do not necessarily receive more classroom instruction compared to the United States, Australia, Canada, or England.
6) East Asian systems are not the top users of computers in math lessons.
7) East Asian students receive the least engaging math lessons in the world.
8) East Asian students DO NOT “very much like learning mathematics.”
9) East Asian students have very little confidence in mathematics.
10) East Asian students don’t value math much.

So, he says, what does this tell us about the schools? What lessons should US schools learn from these high-scoring systems in Asia (which are not all identical by any means!)? Zhao summarizes:

So compared with most of the students who participated in the TIMSS 2015 study, East Asian students have less engaging math lessons, they spend less time studying math in schools, they like math or value math less, and they are less confident in math. So how did the East Asian students achieve the best scores?

His answer adroitly points up many of the oversimplifying and stereotyping tendencies rife in educational policy — with regard to international comparisons, yes, but elsewhere, too:

The answer may lie outside schools. To me, the answer has to be the chopsticks, something all these East Asian students interact with on a daily basis. To improve math scores, we should all begin using chopsticks.