In the four months since we launched the Veritas Prep GMAT Question Bank, we have collected nearly 300,000 responses and helped thousands of users get ready for the GMAT. Students have been nothing short of terrific, sharing success stories with us and giving us some great ideas for how to make the GMAT Question Bank even better.
We’re already hard at work implementing enhancement ideas that have been streaming in from all over the world, and today we’re excited to announce that the single most requested feature has now been added: After you complete a GMAT practice quiz and review your results, you now receive detailed information about each question’s difficulty level!
One of the most common questions we hear from students is, “How am I doing compared to everyone else?” While you should always focus on YOU more than anything as you prepare for the GMAT, it always helps to know where you stand: Do other students struggle with the same questions that give you trouble? Are you falling for a trap that most students are able to avoid? Our new item difficulty feedback will tell you! Here’s an example taken from one of our Data Sufficiency questions:
We’ll start with the bar graphs on the bottom… In this example, 72% of students have answered this question correctly. Pretty straightforward, right? The GMAT Question Bank will now not only tell you what percentage of all students got the question right or wrong, but will also tell you exactly which answers people chose. Are you worried that you’re falling into a common trap? This is an easy way to tell!
Also, we have introduced a new concept called our CAT Rating. Simply put, this rating gives you an idea of how easy or hard a computer-adaptive testing system would judge a problem to be. This is not based simply on the percentage of people who get it right (Beware… many test prep companies’ practice tests are no more sophisticated than this!), but is instead calculated from exactly WHO got it right and wrong, across the hundreds or thousands of students who have answered the problem. In the example above, the item received a CAT Rating of 58, meaning that if an adaptive test estimated a user’s ability to be around the 58th percentile among all test takers for this subject, then this is a question that the system would likely serve to the student. Depending on whether the student got this question right or wrong, the system would then raise or lower the student’s estimated ability level from there.
Admittedly, this is all a bit simplified from how our computer-adaptive testing engine actually works. We say “simplified” because the system does not track your estimated percentile at any given time, but rather your estimated ability level, measured on an entirely different scale. When you’re done with the test, the engine then compares your final ability level with those of every other student, and assigns you a score and a percentile. For this feature, we took a shortcut to show you roughly where the test would judge you to be. So, if you get this question right, then there’s a good chance that you’re above the 58th percentile for this subject.
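For the curious, here is a minimal sketch in Python of the kind of update loop an adaptive engine might run. It assumes a simple Rasch-style (one-parameter logistic) model, where ability and difficulty live on the same logit scale; this is a textbook illustration of the general idea, not the actual Veritas Prep algorithm, and the `step` size here is an arbitrary assumption.

```python
import math

def p_correct(ability: float, difficulty: float) -> float:
    """Rasch-model probability that a student at `ability` answers
    an item of `difficulty` correctly (both on a logit scale)."""
    return 1.0 / (1.0 + math.exp(-(ability - difficulty)))

def update_ability(ability: float, difficulty: float,
                   correct: bool, step: float = 0.5) -> float:
    """Raise the ability estimate after a correct answer and lower it
    after a wrong one, scaled by how surprising the outcome was."""
    expected = p_correct(ability, difficulty)
    actual = 1.0 if correct else 0.0
    return ability + step * (actual - expected)

# A student whose ability matches the item's difficulty has a 50/50
# chance, so a correct answer nudges the estimate up by step * 0.5:
theta = 0.0
theta = update_ability(theta, difficulty=0.0, correct=True)   # 0.25
theta = update_ability(theta, difficulty=0.0, correct=False)  # drops back below 0.25
```

Notice that a correct answer on an item the model already expected you to get right moves the estimate only slightly, while an upset (right or wrong) moves it a lot; that is what lets adaptive tests home in on your level quickly.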
Note that this item feedback looks a little different for Integrated Reasoning. Since that section is not adaptive, we go the simpler route and simply report each item’s estimated difficulty level based on the percent of students who get it right or wrong. Also, remember that the GMAT Question Bank is NOT adaptive (it is deliberately random, in fact), but all 15 of the GMAT practice tests that come with any Veritas Prep GMAT course are indeed adaptive.
Go ahead and give the new and improved GMAT Question Bank a try, and tell us what you think!
By Brian Galvin