More Research, More Problems: Fidelity of Implementation in Chess Education Research

by Emily Sholtis & Matthew Pepper, Basis Policy Research

Even the most carefully designed randomized controlled study put together by Nobel Prize winners will fall short if it is not implemented as intended “where the rubber hits the road.”  In this week’s blog, we dive into what fidelity of implementation means and why it is important to evaluate it when exploring the impact of chess in schools.

Recall from our earlier blog post that at the heart of studying the impact of chess is comparing a group that receives a “treatment” to a similar group that does not (the control group).  This is true for medical studies – where the treatment can be a particular medication or behavioral change – as well as for most experimental studies within education.  Fidelity of implementation is the extent to which the “treatment” actually occurred as designed.  It is broadly encompassing, covering everything from the appropriate use of a specific curriculum or instructional technique to adherence to boundaries like time limits.  While there is certainly variation from study to study, fidelity of implementation within educational chess studies would typically cover the following design elements:

  • The group of students – Did the students who were supposed to receive chess instruction actually receive it?  Have students dropped out or transferred out of the class, or have new students transferred in?

  • The course and instructor – Did all the classes occur, and was the teacher present and teaching for all of them?  Was the teacher qualified to teach chess?

  • The instruction – Was the planned instruction delivered as intended?  Did it follow the intended pacing, covering all the material in the time allotted?  Were students engaged, and to what extent did behavioral issues prevent the delivery of instruction?

  • The measures – Were outcome measures (surveys, assessments) administered at an appropriate time and under unbiased conditions?

The most rigorous and credible applied research studies do two things well.  First, they answer the question “What impact did the treatment have?”  Second, they provide enough contextual information and programmatic review to convince the reader that the treatment occurred as intended.  Omitting the latter leaves a study open to the critique that its findings are invalid.  Revisit some of our older blog posts, like this one here, and see if you can identify any of these threats to fidelity of implementation (note the excellent, extensive review of program implementation within this study).  Which do you see?  Respond via Twitter or Facebook and let us know what you think!