We recently provided our teachers with a simple view showing student scores by learning standard, or TEKS (Texas Essential Knowledge and Skills), across multiple assessments.
The TEKS are further categorized according to their STAAR reporting category and type, as well as their MAP Goal Area. Using this data as an input, teachers can target and prioritize skills for whole-group and small-group reteach.
Several teachers have requested a “small group button” on these reports. At the push of this button, the scholars in a class or across a grade would be subdivided into groups based on the students’ results on specific TEKS.
So, for example, if John, Sandra, and Carlos are struggling to calculate the area of a rectangle, their teacher could provide targeted intervention to these three scholars on this skill.
The logic would also take into account each TEKS's priority, the relative ranking of results (lowest to highest), and the desired size of each group. Other features: use the number of instructional blocks available for reteach to limit how many TEKS are selected, based on their ranking, and expose these conditions as parameters to accommodate each teacher.
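A minimal sketch of what that grouping logic might look like, assuming a simple dictionary of scores per student per TEKS. The function name, the TEKS codes, and the mastery cutoff are all hypothetical placeholders, not part of our actual reports:

```python
from collections import defaultdict

def suggest_small_groups(scores, teks_priority, max_group_size=4,
                         max_teks=2, mastery_cutoff=70):
    """Rank TEKS by priority and class-wide need, then group struggling students.

    scores: {student: {teks: percent_correct}}
    teks_priority: {teks: weight}, higher = more important to reteach
    max_teks: instructional blocks available, caps how many TEKS are selected
    """
    # Average score per TEKS across the class (lower average = greater need).
    totals = defaultdict(list)
    for results in scores.values():
        for teks, pct in results.items():
            totals[teks].append(pct)
    avg = {t: sum(v) / len(v) for t, v in totals.items()}

    # Rank TEKS: highest priority first, then lowest average score.
    ranked = sorted(avg, key=lambda t: (-teks_priority.get(t, 0), avg[t]))
    selected = ranked[:max_teks]

    groups = {}
    for teks in selected:
        # Students below the mastery cutoff, lowest scores first.
        struggling = sorted(
            (s for s, r in scores.items() if r.get(teks, 100) < mastery_cutoff),
            key=lambda s: scores[s][teks],
        )
        # Chunk into groups no larger than max_group_size.
        groups[teks] = [struggling[i:i + max_group_size]
                        for i in range(0, len(struggling), max_group_size)]
    return groups

# Hypothetical example: John, Sandra, and Carlos all struggle with "4.5D".
scores = {
    "John":   {"4.5D": 40, "4.4A": 85},
    "Sandra": {"4.5D": 50, "4.4A": 90},
    "Carlos": {"4.5D": 55, "4.4A": 60},
}
groups = suggest_small_groups(scores, teks_priority={"4.5D": 2, "4.4A": 1})
# groups["4.5D"] -> [["John", "Sandra", "Carlos"]]
```

Because the group size, the number of blocks, and the cutoff are parameters, each teacher could tune the output to their own classroom.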
All of the above is technically possible. But I recently learned why it won't work practically, at least not perfectly. This morning I sat in a data meeting at one of our primary schools, where we were reviewing the results of their latest assessment and putting together small groups.
The small group button would have served us well, except that the decisions included these factors: “We can’t put Kevin, James, and Denise in the same group” and “Tina will respond better to your style of teaching; Michael will respond better to mine, so let’s switch those groups around.”
These are the qualitative pieces that our data doesn't currently capture but that are vital to small group decisions. While data can provide recommended groupings based on quantitative metrics, our solution must incorporate teacher feedback into the final listing. No amount of data wizardry can replace this teacher insight.