Every edtech product has a study that says it’s effective. But what do that study’s impact results really tell us? How meaningful is the difference between outcomes? And how do we know that difference is not due to chance?
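Those two questions map onto two standard statistics: an effect size (how meaningful the difference is, relative to the natural spread of scores) and a p-value (how likely a difference that large would arise by chance alone). As a minimal sketch of the distinction, using entirely synthetic score data and off-the-shelf tools (illustrative only, not a description of MIND’s methodology):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Hypothetical test scores: a treatment group that used a program
# and a comparison group that did not. These numbers are made up.
treatment = rng.normal(loc=75, scale=10, size=120)
comparison = rng.normal(loc=72, scale=10, size=120)

# Statistical significance: how likely is a gap this large by chance?
t_stat, p_value = stats.ttest_ind(treatment, comparison)

# Practical significance: Cohen's d, the gap measured in units of
# the pooled standard deviation of the scores.
pooled_sd = np.sqrt((treatment.var(ddof=1) + comparison.var(ddof=1)) / 2)
cohens_d = (treatment.mean() - comparison.mean()) / pooled_sd

print(f"p-value = {p_value:.3f}, effect size d = {cohens_d:.2f}")
```

A tiny p-value with a tiny effect size means the study detected a real but educationally trivial difference; both numbers matter when reading an impact claim.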
For quite some time now, the traditional approach to edtech evaluation has been broken: its goals, its conventional process, and the way studies are vetted, described and used. As a result, credible information about program effectiveness is scarce, and administrators are easily confused or misled by what is available. Yet every school year, they still need to decide which tools and programs to source, select, purchase, adopt and support in their schools and districts.
At MIND, we believe there needs to be more non-academic conversation about research, and that formal research findings can and should be unpacked and translated to become useful to decision makers.
Demanding More from Edtech Evaluations brings together content from MIND’s podcasts, blogs, webinars, videos and more into a single, easy-to-digest resource aimed at equipping administrators to expect, and get, more from current and future edtech research.
You can learn more about our methodology and the impact of our visual instructional program, ST Math, at stmath.com/impact.
Andrew R. Coulson, Chief Data Science Officer at MIND Research Institute, chairs MIND Education's Science Research Advisory Board and drives and manages all research at MIND enterprises. This includes all student outcomes evaluations, usage evaluations, research datasets, and research partnering with grant-makers, NGOs, and universities. With a background in high-tech manufacturing engineering, he brings expertise in process engineering, product reliability, quality assurance, and technology transfer to edtech. Coulson holds bachelor's and master's degrees in physics from UCLA.