Thursday, April 7, 2016

Comparing Whole Numbers

Earlier I wrote about using NWEA’s Conditional Growth Index (CGI) as a way of measuring growth in a school district.  One reason this is a sound mathematical way to look at data is that it compares whole numbers that are less likely to be greatly affected by future norms changes.  I will use two examples from our own data to explain.


In the first sample, containing real data from our school district, Column AQ indicates whether a student met the “typical” growth projected by NWEA.  Column AR shows each student’s Conditional Growth Index for the measurement period.  Note that in two instances a “yes” counts as exactly 0, meaning that for both of these students the observed growth matched the projection exactly at the mean (a CGI of 0, the 50th percentile).  Also note that for two other students, a “no” counted as a positive CGI of 0.01, and their growth actually fell in the 51st percentile.  In other words, although these students grew less than projected, their peers with the same starting RIT and a similar number of instructional days between the measurement periods (tests) grew even less than projected.  So in these cases a missed projection counts as a positive CGI, because the students actually grew more than the mean of the “pool” of their like peers.
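The “missed projection, positive CGI” case above can be sketched numerically.  This is a minimal illustration, assuming CGI behaves like a z-score of observed growth against the peer pool’s growth distribution; the pool mean and standard deviation below are hypothetical numbers chosen only to produce a CGI of about 0.01, as in our data.

```python
# Hypothetical illustration of a missed projection with a positive CGI,
# assuming CGI works like a z-score of observed growth against the peer
# pool's growth distribution. Pool mean and SD are made-up values.

def cgi(observed_growth, pool_mean_growth, pool_sd):
    """Standard deviations of observed growth above/below the pool mean."""
    return (observed_growth - pool_mean_growth) / pool_sd

projected = 8      # growth NWEA projected for this student
observed = 7       # the student grew less -> a "no" on typical growth
pool_mean = 6.9    # ...but similar peers grew even less, on average
pool_sd = 7.0      # hypothetical spread of growth in the pool

print(round(cgi(observed, pool_mean, pool_sd), 2))  # 0.01
```

Even though the student missed the projection by a point, the pool’s mean growth fell short of the projection by more, so the standardized value comes out slightly positive.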


In another real example from our data, two students had the exact same projected growth of 8 and the exact same observed growth of 8.  Yet one student’s CGI is negative (-0.08) while the other’s is positive (0.07).  Why might this be?


Let’s start with what exactly is being considered when stating what is “typical” and comparing it with what is “observed.”  To project “typical” growth, NWEA takes a student’s starting RIT, starting grade level, and number of instructional days between measurement periods, and pools the student with past students who match on those factors.  The projection, in effect, says: “students who began this grade with this starting RIT, and who had this many instructional days between exams, experienced this much growth on average in the past.”  That average past growth becomes the projection of what might be typical for similar students in the future.
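The pooling described above can be sketched as follows.  The records, field names, and day tolerance here are all hypothetical; this is only a rough sketch of the idea, not NWEA’s actual procedure.

```python
# Rough sketch of projecting "typical" growth from past data: pool past
# students who match on grade and starting RIT and had a similar number
# of instructional days between tests, then take the mean of their
# observed growth. All records and the day tolerance are hypothetical.
from statistics import mean

past_records = [
    # (grade, starting_rit, instructional_days, observed_growth)
    (4, 200, 120, 8),
    (4, 200, 118, 9),
    (4, 200, 121, 7),
    (4, 200, 119, 8),
    (4, 200, 122, 8),
    (5, 210, 120, 6),  # different grade/RIT: excluded from the pool
]

def project_typical_growth(grade, start_rit, days, records, day_tolerance=5):
    """Mean past growth of students in the same grade, with the same
    starting RIT, and a similar number of instructional days."""
    pool = [growth for (g, rit, d, growth) in records
            if g == grade and rit == start_rit and abs(d - days) <= day_tolerance]
    return mean(pool)

print(project_typical_growth(4, 200, 120, past_records))  # 8
```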

Observed growth is simply the ending RIT minus the starting RIT.  With CGI, however, that observed growth is compared against the same pool of students described above: those in the same grade, with the same starting RIT, and a similar number of instructional days between measurements.  The result?  In the first instance, where the projected and observed both equal 8 but the CGI is negative, the “pool” grew more than was projected at the beginning of the measurement period.  An observed growth of 8 sits 0.08 of a standard deviation below the mean growth of that pool; most kids in the pool grew more than the projected 8, with a mean a bit closer to 9, so this student grew less than the pool’s mean.  In the second instance, where a projected 8 and an observed 8 result in a positive CGI of 0.07, the pool actually grew less than was projected, so “meeting” the projection puts this student’s growth 0.07 of a standard deviation above the mean growth of the “pool.”
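The two cases above can be sketched with the same z-score assumption.  The pool means and standard deviation below are hypothetical, chosen only so the arithmetic reproduces the -0.08 and 0.07 from our data; NWEA’s actual norming procedure is more involved.

```python
# Hypothetical illustration of why identical projected (8) and observed
# (8) growth can yield CGIs of opposite sign, assuming CGI standardizes
# observed growth against the pool's actual growth. Pool means and SD
# are made up to reproduce the -0.08 and 0.07 discussed above.

def cgi(observed_growth, pool_mean_growth, pool_sd):
    """Standard deviations of observed growth above/below the pool mean."""
    return (observed_growth - pool_mean_growth) / pool_sd

# Student A's pool grew a bit MORE than projected, so meeting the
# projection lands just below the pool's mean growth:
print(round(cgi(8, 8.56, 7.0), 2))  # -0.08

# Student B's pool grew a bit LESS than projected, so the same score
# lands just above the pool's mean growth:
print(round(cgi(8, 7.51, 7.0), 2))  # 0.07
```

The same whole-number growth of 8 lands on opposite sides of the mean simply because the two pools grew by different amounts.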

I hope this clarifies why one would see these types of values in their data and how to use them.  Comparing whole numbers when measuring growth makes sense, and the CGI allows us to do what makes sense with the data.

