
Fidelity of implementation (FOI)

From The Learning Engineer's Knowledgebase

Fidelity of Implementation (FOI) is the degree to which an educational product or experience is implemented in the way intended by its designer. Educational products are often not implemented by the designers themselves, but by educators (such as teachers or facilitators). The educator thus becomes responsible for delivering the product to participants and carrying out the actions prescribed by the designer.

Definition

In cases where the designer is not also the implementer of an educational product, the fidelity of implementation (FOI) of that product is how closely it is implemented as the instructional designer expects.

FOI is concerned with how closely a product's implementation aligns with the expectations defined by the designer. It is an important aspect to measure for evaluation purposes, as it indicates whether a product or experience was used as intended.

Additional Information

The idea of implementation fidelity can be understood in terms of specific implementation tasks that a designer expects an implementer to complete. Implementation tasks are the specific actions or events that are expected to occur when using an educational product. In a well-documented design, each task of a teacher or participant is well described and supported, so that people can carry out the activities designed for them.

In poor designs with little specification of how to carry out activities, it is unclear what people should do, and the designer's intent is therefore harder to follow.

A goal of good design is to encourage and support use of a product or participation in an activity by clearly denoting the specific actions and steps that should be taken. In the case of someone implementing an educational product, this would be a clearly defined list or set of steps that should be done to implement the product, as well as any instructional tasks that need to be completed by the implementer to help participants learn (such as through differentiation, monitoring, and scaffolding/support mechanisms).

Thus, a well-documented design includes complete documentation of the expected implementation tasks that implementers should take, as well as when these tasks are scheduled. From the defined implementation tasks, fidelity of implementation can be better measured.
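As an illustration, a designer's documented implementation tasks might be recorded in a simple structured form so they can be shared with implementers and later checked during evaluation. This is a hedged sketch, not a prescribed format; the task names, weeks, and durations below are hypothetical.

```python
# Hypothetical documentation of expected implementation tasks.
# Task names, schedule, and durations are illustrative only.
expected_tasks = [
    {"task": "Introduce lesson objectives", "week": 1, "minutes": 10},
    {"task": "Run guided practice activity", "week": 1, "minutes": 25},
    {"task": "Administer formative quiz", "week": 2, "minutes": 15},
]

# A well-documented design states both what should be done and when.
for t in expected_tasks:
    print(f"Week {t['week']}: {t['task']} ({t['minutes']} min)")
```

A list like this doubles as the baseline against which fidelity of implementation can later be measured.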

Aspects of fidelity of implementation

There are four typical aspects of measuring fidelity of implementation (FOI):[1][2][3]

  • Adherence. Adherence is the degree to which the specific implementation tasks and events were completed during implementation. A list of tasks and procedures for the implementer to follow is provided as part of the design, and the implementer is expected to follow it. Adherence measures whether each expected task was performed or attempted; it does not capture whether a task was done well or poorly, which is the focus of the "quality" aspect below.
  • Quality. The evaluation and judgement of whether the implementation and instructional activities were of the expected level and type. Quality asks whether teaching and implementation activities were of higher or lower quality than expected, or whether a task was performed satisfactorily.
  • Adequate time given. Adequate time must be given to participants for the educational product to have its intended effects. This aspect measures the amount of time that was given by the implementer and the participants to each implementation task.
  • Ways of differentiation and adaptation. This aspect documents the ways in which implementers diverge from the set of implementation tasks and adapt the tasks to meet their specific needs. Capturing data about differentiation and adaptation of tasks and activities is extremely useful for a designer to know what was done instead of the expected implementation task.
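The four aspects above can be captured per implementation task during an observation. The following sketch shows one possible record structure; the field names, rating scale, and example values are assumptions introduced here for illustration, not part of any standard FOI instrument.

```python
from dataclasses import dataclass, field
from typing import Optional

# Illustrative observation record covering the four FOI aspects.
# Field names and the 1-5 quality scale are assumptions.
@dataclass
class TaskObservation:
    task: str
    adhered: bool                  # Adherence: was the task performed or attempted?
    quality: Optional[int] = None  # Quality: e.g., an observer's 1-5 rating
    minutes_given: int = 0         # Adequate time: time actually spent on the task
    adaptations: list = field(default_factory=list)  # Adaptation: what was done instead

obs = TaskObservation(
    task="Run guided practice activity",
    adhered=True,
    quality=4,
    minutes_given=20,
    adaptations=["Split the class into smaller groups than specified"],
)
```

Keeping the adaptations as free-form notes preserves exactly the information the "differentiation and adaptation" aspect asks for: what was done instead of the expected task.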

Understanding how and why implementation diverges from the plan

As designers cannot plan and account for everything that could happen, teachers and facilitators must often flexibly differentiate and adapt the implementation of any given product to meet the needs of their individual participants. By measuring FOI during implementation for product-evaluation purposes, a designer can learn which things went as expected and which were implemented differently from the initial design. This information can then be used to generate insights on how to improve the product so that the design more closely aligns with implementation in subsequent versions.

Divergence from expected performance is not necessarily bad or wrong. From a design perspective, the fidelity of implementation is not as much a measure of teacher quality or an evaluation of the teacher themselves, but instead simply a measure of whether things went according to plan. Knowing whether things go according to plan is valuable information for a designer so that the product may be evaluated and revised accordingly to account for real-world implementation contexts.

After a product is implemented, it is also highly valuable to consult with and interview implementers about their perspectives and opinions on how they understood and followed the prescribed implementation tasks. An evaluator can also collect information from teachers on their perspectives on how implementation tasks can be improved based on the specific contexts in which the product was implemented (e.g., a classroom, a volunteer group, a corporate training group).

Tips and Tricks

  • When considering how your product will be implemented, consider who is responsible for conducting the activities of the product and participating in activities. This includes both the participant learners, as well as any teachers, facilitators, or other support staff.
  • It is best to document how you expect people to use the product and interact with each other. By defining and documenting these intentions, you can provide a list of expected implementation tasks to implementers to help them stay aligned with your expectations.
  • When you are conducting a summative evaluation of the product, you can compare implementers' actual performance to the expected implementation tasks to evaluate fidelity of implementation. Remember, divergence from expected performance is not an indicator of a "bad" or "good" teacher or facilitator; rather, it indicates that the design did not fit the context in which it was deployed, or that the implementation tasks were not understood.
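The comparison described in the last tip can be sketched as a simple set comparison between expected and observed tasks. This is a minimal illustration with hypothetical task names; a real evaluation would also weigh quality, time, and adaptations, not adherence alone.

```python
# Minimal sketch: comparing observed tasks against the expected task list
# to compute a simple adherence rate. Task names are hypothetical.
expected = {"introduce objectives", "guided practice", "formative quiz"}
observed = {"introduce objectives", "guided practice"}

completed = expected & observed
adherence_rate = len(completed) / len(expected)
diverged = expected - observed  # tasks to discuss with the implementer

print(f"Adherence: {adherence_rate:.0%}")
print(f"Not observed: {sorted(diverged)}")
```

The tasks in `diverged` are a starting point for follow-up interviews: they mark where the design and the real implementation context did not line up.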

Examples

None yet - check back soon!

External Resources

None yet - check back soon!

References

  1. Dane, A. V., & Schneider, B. H. (1998). Program integrity in primary and early secondary prevention: Are implementation effects out of control? Clinical Psychology Review, 18(1), 23-45.
  2. Gersten, R., Baker, S. K., Haager, D., & Graves, A. W. (2005). Exploring the role of teacher quality in predicting reading outcomes for first-grade English learners: An observational study. Remedial and Special Education, 26(4), 197-206.
  3. Graham, L., & Fredenberg, V. (2015). Impact of an open online course on the connectivist behaviours of Alaska teachers. Australasian Journal of Educational Technology, 31(2).