We’ve been encouraged by the attention our study, “Do High Flyers Maintain Their Altitude? Performance Trends of Top Students,” published by the Thomas B. Fordham Institute, has received thus far from journalists, bloggers, and the general public. We’ve received some positive feedback, some constructive criticism, and some not-so-constructive criticism, too (why yes, SamsDad67, all our parents were married, though in my own case it was a close call).
“It’s always been easier politically to defend the needs of struggling students than those of advanced students. The elitist whiff of arguing for gifted students is hard to shake; however sound the argument, it can come off like trying to nab more for the already fortunate.”
We’ve been monitoring the response to the High Flyers study and wanted to reply to some of the more interesting comments. In the study, we reported large and depressing attrition from the high-achieving group: 40% to 50%, depending on grade and subject. One argument offered in response (from several sources) is that we’ve simply rediscovered regression to the mean. The best (and snarkiest) comment offered Fordham the Nobel Prize for this landmark achievement. We were not aware that Nobel Prizes were offered in this category, but we want to make it clear that this should be OUR NOBEL PRIZE and not theirs.
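For readers curious what pure regression to the mean would actually look like, here is a minimal simulation sketch. Everything in it is an illustrative assumption rather than a parameter from our study: the cohort size, the test reliability, and the top-decile cutoff for counting someone a high flyer.

```python
import numpy as np

# Illustrative simulation: how much "attrition" from a top group would we
# see from regression to the mean alone? All parameters are assumptions.
rng = np.random.default_rng(42)

n = 100_000          # hypothetical cohort size
reliability = 0.90   # assumed test reliability (share of score variance that is "true")

true_ability = rng.normal(0, 1, n)
noise_sd = np.sqrt(1 / reliability - 1)   # error SD implied by the assumed reliability

score_t1 = true_ability + rng.normal(0, noise_sd, n)
score_t2 = true_ability + rng.normal(0, noise_sd, n)   # no real growth or decline

cut_t1 = np.quantile(score_t1, 0.90)   # "high flyers" = top 10% at time 1
cut_t2 = np.quantile(score_t2, 0.90)

high_flyers = score_t1 >= cut_t1
fell_below = high_flyers & (score_t2 < cut_t2)

print(f"High flyers below the time-2 cut: {fell_below.sum() / high_flyers.sum():.1%}")
```

With these assumed parameters, a noticeable share of time-1 high flyers falls below the time-2 cut from measurement error alone; the substantive question is whether the attrition we observed exceeds what a simulation like this predicts.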
In partnership with the Thomas B. Fordham Institute, we are releasing an interesting new report today, Do High Flyers Maintain Their Altitude? Performance Trends of Top Students. The report tracks a cohort of about 80,000 to 90,000 elementary students and a second cohort of about 45,000 middle school students as they move through school, with a particular focus on the high performers in each group. The study is one of the few to follow a relatively large cohort of high performers through several years of schooling.
Our intern, Clay Johnson, recently posted a blog about the NCES report linking the proficiency cut scores of the 50 state tests (plus the District of Columbia) onto the scale of the NAEP assessment. The report provided updated cut score estimates and noted changes in those estimates from prior NCES reports released in 2005 and 2007. As Clay observed in his post, the general findings of the NCES report echoed those of our own recently published State of Proficiency report. The two sets of rankings were not in perfect agreement; Spearman’s rho coefficients for the four sets of state rankings included in both studies ranged from a low of .32 for fourth-grade math to a high of .67 for eighth-grade reading. Even so, both studies found the same general pattern of variation across states in what constitutes a reading or mathematics proficiency standard: some states set standards easy enough for most students to meet with reasonable effort, whereas others set theirs so high that only the most gifted students could reasonably be expected to meet them.
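For anyone who wants to reproduce this kind of rank comparison, here is a minimal sketch using scipy’s spearmanr. The rankings below are invented for illustration; they are not the actual state rankings from either study.

```python
from scipy.stats import spearmanr

# Hypothetical difficulty rankings of five states under two studies.
# Illustrative numbers only; not the actual NCES or Kingsbury rankings.
nces_rank      = [1, 2, 3, 4, 5]
kingsbury_rank = [2, 1, 3, 5, 4]

rho, p_value = spearmanr(nces_rank, kingsbury_rank)
print(f"Spearman's rho = {rho:.2f} (p = {p_value:.3f})")
```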
As we move toward using value-added methodologies to evaluate teachers, the issue of non-random assignment of students to teachers is gaining more visibility. Walt Gardner blogs on the issue today in Education Week.
The technical problem is quite simple. Value-added models evaluate teacher performance by using a form of statistical regression to compare the progress of a teacher’s students to a predicted result for those same students. To make the comparison fair, many models introduce controls such as students’ starting achievement, gender, and ethnicity, and the school’s poverty rate, among many other possible factors. The underlying assumption of any value-added model, however, is that once these controls are introduced, students are in all other respects randomly assigned to teachers.
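To make the mechanics concrete, here is a minimal sketch of that residual-based logic. It is illustrative only: the data are simulated, the controls are a small assumed subset, and real value-added models are considerably more elaborate.

```python
import numpy as np
import pandas as pd
from sklearn.linear_model import LinearRegression

# Toy sketch of value-added logic (not any state's or vendor's actual model):
# predict each student's ending score from the controls, then average the
# leftover residuals by teacher.
rng = np.random.default_rng(7)
n = 1_000

df = pd.DataFrame({
    "prior_score":  rng.normal(200, 15, n),   # starting achievement
    "poverty_rate": rng.uniform(0, 1, n),     # stand-in for school poverty
    "teacher":      rng.integers(0, 25, n),   # 25 hypothetical teachers
})
# Simulated outcome: growth plus noise, with no true teacher effect built in.
df["end_score"] = df["prior_score"] + 10 - 5 * df["poverty_rate"] + rng.normal(0, 8, n)

controls = df[["prior_score", "poverty_rate"]]
model = LinearRegression().fit(controls, df["end_score"])
df["residual"] = df["end_score"] - model.predict(controls)

# Each teacher's "value-added" estimate is the mean residual of his or her students.
value_added = df.groupby("teacher")["residual"].mean()
print(value_added.sort_values(ascending=False).head())
```

The non-random-assignment problem surfaces in the last step: if some teachers systematically receive students who differ in ways the controls fail to capture, those differences land in the residuals and get attributed to the teachers.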
Please welcome our guest blogger, Kingsbury Center Summer Intern Clay Johnson! Clay is a doctoral student in Educational Statistics and Research Methods at the University of Arkansas at Fayetteville.
This week the US Department of Education released a new report linking individual states’ cut scores for labeling a student “proficient” with the latest results from NAEP (the National Assessment of Educational Progress, or “The Nation’s Report Card”). The resulting headlines focus on how much harder or easier scoring proficient has become in some states since the last round of NAEP scores. More interesting to me, though, are the four new sets of rankings comparing the difficulty of passing each state’s test, since a very similar set of state rankings was recently published online by the Kingsbury Center in an interactive “data gallery.”
On Saturday, thousands of educational advocates in DC and across the nation will be marching in order to Save Our Schools. A list of their guiding principles is posted here. Among their demands is “An end to high stakes testing used for the purpose of student, teacher, and school evaluation.” But it’s the very first bullet on the website under this principle that resonates with NWEA and the Kingsbury Center: we enthusiastically share SOS’s viewpoint that “The use of multiple and varied assessments…” is necessary “…to evaluate students, teachers, and schools.”
Check out the Data Quality Campaign’s new video advocating appropriate use of data for the field of education.
For anyone who missed it, there’s a good commentary in EdWeek called “My Nine ‘Truths’ of Data Analysis.” Ronald Thomas, a former data coach and assistant superintendent and current associate director of the Center for Leadership in Education, outlines nine good, common-sense ideas about using data in the educational world. My favorites are #1 and #9.
“My first truth. We don’t need ‘data driven’ schools. We desperately need ‘knowledge driven’ schools. There is a big difference…”
“My ninth truth. None of these steps is going to have any impact unless, as educational leaders, we clearly articulate compelling reasons why teachers should invest time and effort in data analysis. The message to teachers must be that their work is not about abstract concepts of state accountability or school improvement. We did not get into this business to increase state test scores or to implement federal mandates. We are here to help children learn.”
Also interesting are the comments left by thoughtful EdWeek readers. One commenter states:
“The one thing you left out, from my opinion, is that the data must be based on relevant testing. Most of the standardized tests are geared not towards gauging a student’s knowledge. They tend instead to seek those who are experienced at test taking. This is why I dislike the current data driven idea. It uses data that is inherently bad to make decisions that are often worse.”