[GMAC] The Case for the Objective Evaluation of Talent

The disruptions caused by the COVID-19 pandemic, along with the imperatives of the racial justice movement, have reignited the debate about the use of standardized testing as an admissions instrument for Graduate Management Education (GME) programs.


There are generally three arguments put forward by proponents of the no-test or test-optional movements: 1) that standardized tests lack validity and predictive ability correlated to real-world performance in academic programs, and that undergraduate GPAs (UGPA) combined with a holistic admissions process are an effective substitute; 2) that dropping standardized testing would decrease racial and gender disparities in admitted cohorts; and 3) that standardized tests are biased against historically underrepresented communities, undermining our societal goals of equality. These arguments are well intentioned, but they are either not backed by empirical evidence in GME or, where data is cited, that data is generally drawn from undergraduate admissions and tests such as the SAT/ACT rather than graduate-level admissions tests such as the GMAT.

On the contrary, a close reading of the data shows that use of the GMAT in GME admissions is an effective tool for the admissions professional and reduces subjectivity and the potential for underlying bias. The data also shows that the GMAT provides a more accurate predictor of performance than other available data such as the UGPA, particularly in constructing a diverse classroom. Reliable test scores reflect objective, science-based data that an experienced admissions professional can use to complement their judgement about an applicant’s merits and fit within the GME program under consideration. In doing so, they provide an objective anchor point that is free of the variations of GPA across systems and countries and the unconscious biases that may be inherent in even the best trained admissions professional. It is the science that complements the art.

The GMAT has demonstrated predictive power, and other available data, such as undergraduate GPAs, are ineffective substitutes.

A common argument against standardized tests is that they lack predictive validity and that other data, such as the UGPA, are effective substitutes. This is not backed by empirical evidence. On the contrary, recent studies [1] have shown the UGPA to be an unreliable indicator for GME programs, largely due to the wide variability of GPA scoring across the US and the inherent differences that exist across the globe. As the tables below illustrate, the incremental contribution of the GMAT over the UGPA in predicting classroom success in US GME programs is significant across racial groupings of US citizens and across regions of origin for international students. For African Americans, the UGPA appears to have no predictive ability, and all such predictive validity comes from GMAT scores.



[Figure: incremental predictive validity of the GMAT by age]

The incremental predictive value of the GMAT also increases with age: the raw predictive value of the UGPA declines as the population gets older, while the incremental predictive value of the GMAT increases. As time passes, classroom grades such as the UGPA seem to have less relevance. This is of particular importance to programs designed for a post-experience student base.
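For readers unfamiliar with the statistical term, "incremental validity" is conventionally measured as the gain in variance explained (delta R-squared) when a predictor is added to a regression. The sketch below illustrates the idea on synthetic data; the numbers are invented for illustration and are not drawn from GMAC's studies.

```python
import numpy as np

# Illustrative sketch only: synthetic stand-ins for a cohort's UGPA,
# GMAT score, and the graduate grades we are trying to predict.
rng = np.random.default_rng(0)
n = 500
ability = rng.normal(size=n)                          # unobserved aptitude
ugpa = 0.3 * ability + rng.normal(scale=1.0, size=n)  # noisier signal
gmat = 0.7 * ability + rng.normal(scale=0.5, size=n)  # stronger signal
grades = ability + rng.normal(scale=0.8, size=n)      # outcome to predict

def r_squared(predictors, y):
    """R^2 of an ordinary least-squares fit with an intercept."""
    X = np.column_stack([np.ones(len(y))] + list(predictors))
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return 1 - resid.var() / y.var()

r2_ugpa = r_squared([ugpa], grades)
r2_both = r_squared([ugpa, gmat], grades)
print(f"R^2 (UGPA alone):  {r2_ugpa:.3f}")
print(f"R^2 (UGPA + GMAT): {r2_both:.3f}")
print(f"Incremental validity of GMAT (delta R^2): {r2_both - r2_ugpa:.3f}")
```

The "incremental contribution of the GMAT over the UGPA" in the tables above is this delta R-squared: how much better the prediction of classroom performance becomes once the test score is added to the model.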

GMAT test taking diversity closely mirrors the mix of undergraduate degree holders in the US

[Figure: US bachelor's degrees by race/ethnicity]

Another common argument against the use of standardized tests is that their test taking populations do not adequately reflect the racial mix that schools are looking for in their classrooms. This is the result of an incomplete reading of the data. It is true that the racial mix of the GMAT (and GRE) test taking population is not representative of the US population overall. For example, African Americans in the 20-34 age group comprise 14 percent of their age cohort but only 8 percent of GMAT test takers (and a similar 9 percent for the GRE). What is left out of this comparison is the fact that African Americans comprise 9 percent of bachelor's degree holders in the US and, since a bachelor's degree is a prerequisite for graduate education, that, and not the overall population mix, is the appropriate benchmark.

Analyzed further, the data shows that the GMAT test taker mix in the US is the most diverse, with the lowest percentage of test takers who are white Americans: 59 percent for the GMAT, compared with 65 percent for the GRE and 69 percent amongst the bachelor's degree universe.

[Figures: GMAT test takers by race/ethnicity, 2019; GRE test takers by race/ethnicity, 2019]

This is not to diminish the importance of the gap in college attainment that the falloff from 14 percent to 9 percent for African Americans represents, and what it says about the progress that we, as a society, still need to make in order to achieve greater equality in America. It's just that graduate admissions tests reflect today's reality. Waiving them will not change this picture – graduate schools would still recruit from the population that holds a bachelor's degree, not the entire population – but it could have the unintended consequence of introducing more bias, as we will discuss later.

The GMAT is designed so that each test item is free of racial bias

African Americans and Hispanic Americans have lower median scores across the SAT, GRE and GMAT. But attributing this to the tests themselves and ignoring the underlying reasons is akin to blaming the messenger for an inconvenient message. This underperformance is endemic across the education chain – high school graduation rates, college enrollment, and completion. It is emblematic of the malaise that exists in our society, but suppressing the signal – removing the test – is like not going for your annual physical because it may show what is wrong with you. It may feel good for a while, but the long-term outlook will not be positive. The right answer is to recognize these score differences and adjust for them in our admissions processes.

Firstly, we must ensure that the tests themselves do not discriminate and are free of biases. Most testing agencies take this seriously and all test items in the GMAT go through an extensive Differential Item Fairness review process where they are pretested to ensure that they yield the same result against different population groupings. But while the tests may be free of bias themselves, we cannot ignore that there are inequalities in our educational and preparation environment and that certain communities may not have access to the same foundational education or time and means of preparation as others.

If we accept, as shown, that tests such as the GMAT are efficient predictors of success in the classroom, we must also accept that low scoring candidates, regardless of background, will have a lower probability of success in that class. We cannot therefore simply waive test requirements and admit candidates who are not adequately prepared – having them expend their precious time and financial resources in the process – but must use the diagnostic data that we gain to build additional support mechanisms. This is, after all, an important part of holistic admissions: the ability to see the diamond even though all the facets are not yet shining through, and then to polish those facets so that the candidate can stand out, and shine, amongst their peers.

Rankings distort the fair use of the test

Business schools are driven by rankings, and these rankings are important signals of quality to the prospective student market. The unfortunate fact is that some important rankings agencies use GMAT scores as a proxy for the quality of admitted talent. This is a misuse of test data, for while test scores have strong predictive validity, as we have demonstrated, they were never designed to be the sole indicator of student quality – background, experience, and diversity must all play a part.

The question, then, is our response. Waiving test scores is certainly one way to respond, for if there is no data to share, such data cannot be used in rankings. The flaw in this approach is that, unless we believe rankings will be made to disappear by this move, rankings agencies will simply adopt different, less objective methodologies to address the question of student quality. Perhaps, in the fashion of our times, they would turn to student ratings of the “quality” of their cohorts – creating a new arms race to keep students “happy”, not to prepare them for challenging careers. A better answer is not to waive the tests but to work with the agencies to develop better proxies: GMAT scores, yes, but other factors as well.

GMAT is the science that complements the art

We like to think about the process of creating a class as a combination of science and art. The science is objective data, and the art is the experience of the practitioner. Working together, this combination has delivered high performing classroom cohorts in many of our greatest business schools. I like to think of them as the combination of diagnostic instruments and physicians. An experienced physician can make an accurate diagnosis without the use of radiological and laboratory testing. It is possible and even probable. But when we are bringing on new staff, developing their experience, dealing with corner cases, and managing increasing volumes, the probability of error is magnified. The science – whether it is radiology or pathology in medicine or standardized testing in admissions – complements the experience of the practitioner, adds objectivity, and ensures accuracy and fairness.


We have shown that the GMAT is an objective instrument that increases the predictive power of an admissions decision, and that it adds standardized objectivity and reduces subjective bias. Quite simply, it allows comparisons based upon data and not subjective judgements alone. It brings other benefits as well. Preparing for the GMAT is preparing for business school, and an important signal of grit and commitment to the GME journey. Requiring the GMAT is also an important signaling mechanism for schools. After all, if one of the benefits of a management degree is the network that will be developed as a result, it is important that we can assure prospective students that we have a robust evaluation process. Students will demand not only that they be admitted, but that all others who are admitted are held to an equally high standard – a factor that is even more important to international students, who have to make enrollment decisions with less than perfect knowledge.

At the end of the day, we are all trying to find what we call “right-fit”. That combination of intellectual capability, experience, teamwork and determination in the individual while ensuring diversity of background and experiences in the cohort. The GMAT, I put to you, is a critical element in establishing that right-fit. The science that complements your art.

[1] Differential Validity and Differential Prediction of the GMAT Exam, Eileen Talento-Miller, April 2017.

Posted by Sangeet Chowfla
Sangeet Chowfla is the president and chief executive officer of the Graduate Management Admission Council (GMAC)