At some point, in pretty much every class I teach, a student will ask me what the GMAT really measures. The tone of the question invariably suggests that the student doesn’t believe that the test accurately assesses anything of real significance, that the frustrations and anxieties we endure when preparing for the exam are little more than a form of admissions sadism.
When it comes to standardized testing, a certain amount of cynicism is understandable – if person A has a better grasp of the fundamentals of geometry and algebra than person B, why on earth would we conclude on that basis that person A is more likely to have a successful career in a field totally unrelated to geometry and algebra?
Of course, I have my stock answer: the test is designed to reward flexible thinking, to provide feedback on our ability to make good decisions under pressure. And though I do believe this, I’m also well aware that tests have their limitations. There are many incredibly talented and intelligent people who struggle in the artificial conditions of a testing environment, and no 3.5-hour exam can fully capture an individual’s potential. At some level, we all know this. It’s why the admissions process is holistic. Still, your GMAT score is important, so I thought it worthwhile to do a bit of research into what the data says about how well the test predicts future success.
In 2005, GMAC issued a report in which it examined data from 1997-2004 about the correlation between GMAT scores and graduate school grades. The report summarizes a regression analysis in which researchers generated what they term a “validity coefficient.” A coefficient of “1” would mean that the correlation between the GMAT and graduate school grades was perfect – the two variables would move in lockstep. According to this report, any coefficient between .3 and .4 is considered useful for admissions.
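For readers curious about what a validity coefficient actually is under the hood, it is essentially a Pearson correlation between an admissions variable and later grades. The sketch below is illustrative only – the function is a standard Pearson formula, and the scores and GPAs are made-up numbers, not GMAC’s data:

```python
# Illustrative sketch: a validity coefficient is, at its core, a Pearson
# correlation between an admissions variable (e.g., a test score) and a
# later outcome (e.g., graduate school GPA). A value of 1 means the two
# move in perfect lockstep; 0 means no linear relationship.

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length sequences."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    # Covariance numerator and the two standard-deviation terms
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    sd_x = sum((x - mean_x) ** 2 for x in xs) ** 0.5
    sd_y = sum((y - mean_y) ** 2 for y in ys) ** 0.5
    return cov / (sd_x * sd_y)

# Hypothetical (invented) test scores and first-year GPAs for six students
scores = [640, 700, 580, 720, 660, 690]
gpas = [3.2, 3.6, 3.0, 3.5, 3.1, 3.7]

r = pearson(scores, gpas)
print(round(r, 3))  # some value between -1 and 1
```

A real validity study involves far more than this – corrections for restriction of range, pooling across programs, and so on – but the coefficient GMAC reports is interpreted on this same -1-to-1 scale.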
The GMAT’s validity coefficient came out to .459, suggesting that the test does, in fact, have some predictive value, and this predictive value seems to be superior to that of other variables admissions committees consider. The validity coefficient for undergraduate grades, for example, was .283. (And when the variables are combined, the validity coefficient is higher than any individual coefficient.) So is that the end of the story? Can I rebuff my students’ complaints about standardized testing by sending them an abstract of this report? It’s not quite that simple.
In the conclusion section of the paper, we’re offered the following: “When examining the validity data in this study, one should recognize that there is a great deal of variability across programs and that the relative importance for each of the investigated variables differs for each program. This is to be expected.”
So one interpretation of the data is that the GMAT does a pretty good job of predicting how well students will do in their MBA programs. But if you’ve been studying for the GMAT for any length of time, hopefully your “correlation is not causation” reflex was triggered. What if students with higher GMAT scores attend more selective schools, and those schools grade more leniently on the theory that the necessary vetting has already been performed? In that case, the correlation between GMAT scores and grades wouldn’t be shedding much light on how well test-takers perform academically; it would instead be telling us what kinds of programs test-takers eventually attend.
Moreover, one could argue that looking at the correlation between GMAT scores and grad school grades is of limited usefulness. Schools no doubt hope their students do well in their classes, but it stands to reason that admissions decisions are also informed by predictions about what prospective students can contribute to the school’s community, as well as what kind of future career success these students can expect after they graduate. What, then, is the correlation between graduate grades and career success beyond the classroom? And how would we even begin to measure or define “success”? These are complex questions with no good answer.
Furthermore, while the paper appeared statistically rigorous to me, amateur that I am, we still have to consider that it was commissioned by GMAC, the organization that administers the test, so there is a conflict of interest to bear in mind. A recent article in the Journal of Education for Business questioned the results of the earlier research. It argued that the section of the GMAT that best predicted conventional managerial qualities, such as leadership initiative and communication skill, was the Analytical Writing section – the component of the test that admissions committees care about least, and the one with the lowest validity coefficient in the earlier paper.
Needless to say, though I found these papers interesting, they provided me with no definitive answers to offer my students when they ask what the GMAT really measures. And, paradoxically enough, this is something we should find encouraging. If the GMAT were measuring some fixed, inherent quality, there’d be little point in prepping for the test. But if the test requires a unique skillset, that skillset can be mastered, regardless of how directly it applies to future endeavors. Pragmatically speaking, the thing that matters most is that admissions committees do care about the GMAT score. So my ultimate message to my students is this: stop worrying about what the GMAT measures, and instead harness that energy to focus on what you need to do to maximize your score.