In the middle of difficulty lies opportunity ― Albert Einstein
Now that the question of raising the charter cap in Massachusetts has been resolved, at least for now, I am hopeful that we can begin to examine and discuss other necessary changes to our educational system here in the Commonwealth. I say this with a certain amount of trepidation, as it seems that every time those discussions arise, the end results rarely align with what educators in the field believe is important for improving our system. However, there is one area of immediate concern that should be addressed.
The state accountability system here in Massachusetts is currently in a state of disorder. Over the last three years, students in the Commonwealth have taken four different state exams (MCAS, paper-based PARCC, computer-based PARCC, and MCAS 2.0). I understand that DESE has done what it could to crosswalk the scores from these assessments in order to place schools in levels and assign percentile rankings. However, with so many different assessment scores feeding into a district's four-year accountability calculation, the validity of those calculations is very much in question, at least among many of us in the field. Furthermore, regardless of the efforts taken to compare scores, growth, and achievement levels across the various exams, many important variables have not been taken into account.
For example, student performance across the state has demonstrated that students who took the PARCC exam on paper scored higher than those who took it on computers. Although analysis of the testing data has borne this out, the state does not take this fact into account for purposes of accountability, and those scores are compared as if they were of equal weight. Additionally, at least 40 schools across the state saw their accountability levels negatively impacted because opt-out students lowered participation rates. Not only do opt-out students affect participation rates, but they also lower scores, as we have seen that the vast majority of those students are our higher-achieving students. Thus, achievement in those schools is negatively impacted along with participation.
My reason for pointing out the above concerns is to encourage the department, while we finalize both the exam and its impact on our accountability system, to reset accountability next year, once all districts have taken MCAS 2.0. I am not suggesting a moratorium on accountability, but rather a resetting of accountability determinations for all schools, so that the variables, uncertainty, and problems introduced by having multiple state exams figure into accountability determinations are eliminated, restoring equity between districts.
In addition to resetting accountability determinations, we should also investigate developing a calculation to weight paper-based versus computer-based administration of MCAS 2.0 during the transition from one mode to the other. Some districts have decided to move immediately to full computer-based testing because, although they know it will negatively impact their scores in the short term compared to districts that remain with paper-based exams, they believe it will benefit them in the long run as their students gain familiarity with the platform.
Some school systems, however, even if they have the capability to administer MCAS 2.0 on computers, are reluctant to make that move before they absolutely have to, because they know that students score higher on paper-based exams than on computer-based ones. Basically, it is the difference between playing the long game and focusing on an immediate return. This dynamic inhibits the transition and results in an unequal playing field for districts.
These are not educational decisions, and they do not meaningfully affect the delivery of educational services to our students one way or the other. They are strategic calculations: take a big hit now rather than smaller ones over time, so that scores will be more competitive in upcoming years. This is gamesmanship that all superintendents are being driven to consider because the assessment system has been in flux for years now. This should not be the case.
Furthermore, in looking to mitigate the detrimental impacts of this system in transition on districts, DESE's idea of "hold harmless" is not a viable solution. To say that districts will be "held harmless" has no real meaning for those of us in the field, for two primary reasons. First, districts are not truly held harmless: if we continue with the current method of calculating accountability, each individual year's scores are still factored into a four-year accountability determination. Thus, those scores continue to follow (and harm) us for four years. Second, even in the current year we are not "held harmless." The harm is in public perception, not our actual accountability rating. That perception is shaped dramatically by a school's percentile ranking, even more so than by its accountability "level." Consequently, since the drop in percentile ranking is still shown on the district profile, public perception of the district is harmed even though the district is nominally "held harmless."
At a minimum, we need to reset accountability levels after the administration of MCAS 2.0 this year so that we all have a level playing field with the same assessment. Taking this action would also mean that districts were truly held harmless during the transition. As part of this recalculation, developing a method to weight computer-based versus paper-based testing would add further validity to the system and help spur the transition to fully computer-based testing by removing districts' incentives to delay.