A new bar has been set for the evaluation of state-grant-funded digital math programs. The Utah STEM Action Center has released its 2020 annual report, which includes the results of its comprehensive statewide program evaluation. This represents a new high-water mark in a seven-year progression toward the vital practice of holding education programs and school districts that use state funds accountable for achieving student learning impacts.
Founded in 2013, the STEM Action Center is tasked with regular and rigorous reporting to the Utah state legislature on the use and impact of annual state funding for digital supplemental math programs, and with capturing and disseminating knowledge to advance STEM digital education best practices in the state. The Center offers a variety of grants to Utah schools, including the K-12 Math Personalized Learning Software Grant, which provides access to a selection of math personalized learning software programs to improve student outcomes in mathematics literacy. ST Math, created by MIND Research Institute, is one of the math programs available to schools and districts as part of the grant.
Clocking in at over 450 pages, the report tracks and unpacks digital program usage at a statewide level, and reports back usage fidelity and impact efficacy to the state legislature.
By conducting such a thorough and comprehensive analysis, the Utah STEM Action Center is a national pioneer and model in its transparent and highly visible commitment to accountability and continuous improvement.
Further, the STEM Action Center engaged university data researchers from the Utah Education Policy Center at the University of Utah to conduct the studies in this report – a strategic collaboration that further underscores the Center’s focus on sound research methodology, and independent and rigorous analysis.
Preparing a statewide analysis requires a commitment to rigor by all parties – the state, the researchers, the edtech program vendors, and the school and district users. Key challenges include:
Under the auspices of the STEM Action Center, I – along with representatives from all other program vendors – was delighted to have the regular opportunity to work closely with the researchers from the Utah Education Policy Center to share more about how we analyze the efficacy of ST Math using a model of repeatable results at scale.
Ultimately, digging so deeply into program efficacy, at scale and normalized across multiple programs, benefits every stakeholder. Edtech providers share and learn from apples-to-apples usage data that enables cross-program analysis. Schools and districts gain a statewide lens on digital program implementation findings, with benchmarks on usage and impact for themselves and their neighbors, leading to more valid, data-informed decisions on product fit and implementation planning for their teachers and students. And state stakeholders gain insight into the effectiveness of their initiative, with evidence for the return on investment of taxpayer dollars.
The "Impact of K-12 Math Personalized Learning Software on Student Achievement" portion of the report begins on page 311, and compares software users to non-users on three outcomes of interest: proficiency, percentile rank, and student growth percentile (SGP).
Overall, the researchers found “students who use digital math software more and students who use digital math software with greater consistency outperform students who use the software less and use software with less consistency.” (see page 351)
Proficiency: measures student performance relative to a predefined benchmark.
Percentile rank: an indicator of how students performed relative to other students who took the same test.
Student growth percentile (SGP): a statistical estimate of a student's growth relative to students who had similar performance in the past (a rough illustration of this type of calculation follows below).
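Because SGP is the least familiar of the three measures, here is a minimal, generic sketch of an SGP-style calculation. It is illustrative only and is not the methodology used by the Utah Education Policy Center (production SGP models typically use quantile regression over multiple prior-year scores); the column names, band count, and data are hypothetical. The core idea is that each student's current score is compared only against peers who started from a similar prior score.

```python
# A generic SGP-style sketch, NOT the Utah report's methodology.
# Students are bucketed by prior-year score, and each student's current
# score is ranked only against peers in the same prior-score band.
import pandas as pd

def simple_growth_percentile(df: pd.DataFrame, n_bands: int = 10) -> pd.Series:
    """Return a 1-99 growth percentile for each student (illustrative only)."""
    # Group students into bands of similar prior-year performance.
    bands = pd.qcut(df["prior_score"], q=n_bands, duplicates="drop")
    # Within each band, rank the current score as a percentile in (0, 1],
    # then rescale to the conventional 1-99 range.
    pct = df.groupby(bands)["current_score"].rank(pct=True)
    return (pct * 98 + 1).round().astype(int)

# Hypothetical example: eight students, two per prior-score band.
students = pd.DataFrame({
    "prior_score":   [410, 415, 500, 505, 600, 610, 700, 705],
    "current_score": [430, 470, 510, 560, 620, 590, 740, 710],
})
students["sgp_like"] = simple_growth_percentile(students, n_bands=4)
print(students)
```

In this toy example, a student who scored 590 after a prior score of 610 receives a low growth percentile even though 590 is a respectable raw score, because growth is judged against similarly situated peers rather than against the whole population.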
The researchers take care to note (see page 350) that their analysis is observational and correlational in nature. The study was not a fully experimental, randomized controlled trial (RCT) and is not evidence of causal impact.
While RCTs are the ultimate test for assessing the causal efficacy of a program, they are also lengthy, expensive, and rare. As a result, an RCT can quickly lose external validity beyond the original experiment: for example, after updates to the version of the program currently on the market, or to its implementation or support model. In my view, the Utah study is a sterling example of where the market should be moving: toward large data sets, using recent program versions, across many varied districts. At MIND, we believe a high volume of effectiveness studies is the future of a healthy market for product information in education.
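To make the observational/correlational distinction concrete, the sketch below shows the general shape of a usage-versus-outcome analysis: regress an achievement outcome on usage measures while controlling for prior achievement. This is a hypothetical illustration, not the Utah Education Policy Center's actual model; the variable names and model form are assumptions, and a positive usage coefficient in such a model indicates association, not causation.

```python
# A hedged sketch of a correlational usage-vs-outcome model, NOT the Utah
# researchers' actual specification. All variable names are hypothetical.
import pandas as pd
import statsmodels.formula.api as smf

def fit_usage_outcome_model(students: pd.DataFrame):
    """OLS of the outcome on software usage, controlling for prior achievement.

    Without random assignment, the usage coefficients measure association
    only; they are not evidence of causal impact.
    """
    model = smf.ols(
        "current_score ~ usage_minutes + consistency_weeks + prior_score + C(grade)",
        data=students,
    )
    return model.fit()

# Usage, given a hypothetical one-row-per-student data frame `students`:
# results = fit_usage_outcome_model(students)
# print(results.params["usage_minutes"], results.pvalues["usage_minutes"])
```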
While assembling a report of this scope is a significant undertaking, it provides an unparalleled look at the results of Utah’s major investment in STEM education.
There are many strong benefits that come from conducting a statewide analysis. The state grantor will see greater district accountability and buy-in for the fidelity of their implementation and quality of their usage. The state grantor will also have the ability to evaluate program usage and efficacy across districts, helping to uncover the best practices that lead to the strongest gains for students.
Ohio is another state that has invested in statewide efficacy analysis. For example, in 2015 the PAST Foundation conducted a formative evaluation of the Math Matters Projects, which studied ST Math implementation across nine districts in Fairfield and Franklin Counties. The evaluation, funded by the Ohio Department of Education's Straight A Fund, found that one of its most significant results was that teachers need more time and information in order to plan for a successful roll-out of a new blended learning program at scale.
It’s more important than ever for states to understand how effective their edtech spend has been – particularly in light of the need for CARES Act and CRRSA Act funding to go toward edtech programs that address unfinished learning and are effective in a distance learning environment. We believe the Utah STEM Action Center report is a strong exemplar for other states to review and follow for their own analyses.
[Image: ST Math in Utah, where over 21% of elementary schools use the program]
Andrew R. Coulson, Chief Data Science Officer at MIND Research Institute, chairs MIND Education's Science Research Advisory Board and drives and manages all research at MIND enterprises. This includes all student outcomes evaluations, usage evaluations, research datasets, and research partnering with grants-makers, NGOs, and universities. With a background in high-tech manufacturing engineering, he brings expertise in process engineering, product reliability, quality assurance, and technology transfer to edtech. Coulson holds a bachelor's and master's degree in physics from UCLA.