Comparing educational products and alternatives
From The Learning Engineer's Knowledgebase
Comparing educational products and alternatives is a category of research questions for evaluation that is concerned with comparing the effectiveness of an educational product or version of a product with alternative products or approaches. Comparison of products is typically performed through a quantitative experimental design, but qualitative methods are also essential for data collection and interpretation. Two or more products are required for comparison.
Definition
The comparison of educational products and alternatives is a category of research question in which the evaluator compares data on performance (any type of learning, competence, or participation, for instance) or other outcomes to determine how one product compares to one or more alternative products or approaches. With this type of research question, evaluators typically ask which of the compared products or approaches performed best.
Additional Information
With this research question category, there is always some form of comparison of one product's performance to another. This could be in the form of asking whether a treatment condition performed better than a control condition (i.e., a traditional experiment), or if an educational product was comparable to other products or approaches.
Comparing educational products and alternatives is similar to the research question category of Evaluating learning and competency outcomes, which examines only the effectiveness of a single product or experience in achieving the intended outcomes.
Common Research Questions
In this category of research questions, the actual questions typically take one or more of the following general forms:
- Which product or approach worked best at achieving desired outcomes?
- Did a product perform better than another (or a control condition)?
- Was a new product or approach more effective than a normal or conventional educational practice?
- In what ways did a product perform better than another product?
- Was a product (and its design features) used more by people than another product?
- How cost effective is a product in comparison to alternative products or approaches? (i.e., cost effectiveness and efficiency analysis)
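As an illustration of the last question above, a cost-effectiveness comparison can be sketched as a simple cost-per-outcome ratio. This is a minimal sketch; the products, costs, and gains below are hypothetical, not data from a real evaluation:

```python
# Hypothetical cost-effectiveness comparison of two educational products.
# All figures are illustrative, not from a real evaluation.

def cost_per_gain(total_cost, mean_learning_gain):
    """Cost to produce one unit of average learning gain."""
    return total_cost / mean_learning_gain

products = {
    "Product A": {"cost": 12000.0, "gain": 8.0},  # mean pre-post gain in points
    "Product B": {"cost": 9000.0, "gain": 5.0},
}

for name, p in products.items():
    print(f"{name}: ${cost_per_gain(p['cost'], p['gain']):.2f} per point of gain")
```

A lower cost per unit of gain suggests better efficiency, though a real analysis would also weigh implementation costs and the comparability of the outcome measures.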
Common Instruments and Data
There are a variety of instruments and data sources that are commonly used for this category of research questions:
- Tests, quizzes, and surveys that measure performance, knowledge, skill, or learning
- Surveys and interviews that prompt participants to share their perceptions and satisfaction, particularly through the use of rubrics or scales to quantify information.
- Work products, documents, and artifacts of participants that can be evaluated and scored for indicators or measures of competence and learning, particularly with rubrics or systematically coded for the presence or absence of concepts or themes.
- Participation, behavior, and interaction, which can be observed and measured using observation notes, digital log files, and learning analytics
- Implementation data, such as participation levels and observations of implementation tasks, or self-reported information about how a teacher or facilitator conducted the activities as the designer expected (note: variation in implementation is an important factor in whether a person is exposed to and uses an educational product)
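As a minimal sketch of how digital log files might be tallied into a simple participation measure, assuming a hypothetical log format of (participant, condition, event) records:

```python
from collections import Counter

# Hypothetical digital log records: (participant_id, condition, event_type).
# The format and field names are illustrative, not from a real system.
log = [
    ("p1", "Product A", "video_play"),
    ("p1", "Product A", "quiz_attempt"),
    ("p2", "Product B", "video_play"),
    ("p3", "Product A", "quiz_attempt"),
    ("p3", "Product A", "quiz_attempt"),
]

# Count interactions per condition -- one simple participation measure
# that could feed a comparison between products.
by_condition = Counter(condition for _, condition, _ in log)
print(by_condition)  # Counter({'Product A': 4, 'Product B': 1})
```

Raw counts like these are only a starting point; differences in implementation (see the note above) should be considered before attributing usage differences to the products themselves.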
Common Variables and Concepts of Interest
To answer research questions in this category, evaluators are typically interested in measuring and examining one or more of these concepts or variables:
- Performance, which can be anything that a person is expected to do or demonstrate during or after participation in the educational product. This includes:
- Knowledge of the participant
- Competency (typically measured from a test, quiz, performance, or rubric)
- Learning (typically measured via a pre-post change in learners' performance at two or more time points)
- Behaviors that are not competency or learning
- The types and frequency of participation and interactions of people (including participants, teachers/facilitators, and any other actors)
- The affect, attitudes, and perceptions of participants and teachers
- Implementation, which identifies how the product was implemented, specifically fidelity of implementation
- Any factors, forces, and external considerations that might influence a person's performance with the educational product
- Examples could include any factors that represent the diversity of human experience, such as:
- Culture, race, history, language, location
- (dis)ability, gender, sexuality, age, physical or mental health conditions
- Education, prior knowledge, experience, history with formal education
- Economic status, income, employment, access to technology
- Religious beliefs, values, customs, philosophies, political beliefs
- Examples also include any institutional requirements or pressures, organizational culture and expectations, changes to the product during implementation, and other forces that influence a person's participation.
Common Analysis Methods to Answer the Research Questions
Both quantitative and qualitative analysis methods are commonly used to compare educational products in how, why, and to what degree they achieve the intended goals. Method choices should align with the intents and needs of the people who will be using the evaluation data. Quantitative data on effectiveness is typically preferred in most contexts, but both standalone qualitative methods and mixed methods that combine the two are increasingly common.
Note: It is beyond the scope of this knowledgebase to expand on each of these methods. It is recommended that researchers and evaluators seek additional training, web resources, or courses on individual methods they would like to use.
Quantitative Methods
- Experiments, which are also called randomized controlled trials (RCTs). Experiments that compare two or more educational products or experiences are designed and structured so that performance can be reliably and validly compared between the comparison groups (or conditions).
- Experiments use quantitative measures to estimate, for the average person who uses the product, the probability that the observed differences between groups resulted from the group assignment itself and not from some other force or influencing factor. Experimental results are never exact, and there are always exceptions to the rule.
- Experiments attempt to isolate the variables and concepts of interest so that they are the only thing that differs between the two or more products being studied. This is so that the study can control for outside influencing factors that could confound or otherwise explain the results. Experimental methods also often require large numbers of participants who are representative of the population being studied in order to make claims about generalizability and to show that the results from the evaluation are trustworthy.
- It is difficult to completely isolate the differences between comparison groups in educational product research, so quasi-experimental designs are more frequently used; these account for the evaluator's inability to fully randomize participants into comparison groups or to isolate all possible variables or forces that could influence the outcomes of the experiment. Typically, educational experiments will attempt to control for external differences between products so that the content, topic areas, audience, and implementation conditions remain constant across all comparison groups. This allows the evaluator to know that the differences observed between the compared products are because of the products themselves, and not from alternative explanatory factors.
- Experiments typically employ statistical analyses like ANOVA, linear regression, and multilevel/hierarchical linear models. The choice of statistical approach will depend on the experimental design and the structure of the data (for example, learners nested within classrooms often call for multilevel models).
- Experiments often statistically account for the degree of change in participants on the performance measures through a pre-post design. This is because people who are randomly assigned to each experimental group may start with different levels of knowledge, skills, or dispositions. A pre-post design can be used to control for the prior knowledge and experiences of participants. However, post-only designs are also common if a pre-assessment is not feasible.
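The pre-post logic above can be sketched with fabricated scores and a hand-computed Welch's t statistic; a real evaluation would use a statistics package and a formal significance test:

```python
from statistics import mean, variance

# Illustrative sketch of a pre-post comparison between two conditions.
# All scores are fabricated for demonstration purposes.
treatment = [(52, 78), (48, 70), (60, 85), (55, 74)]  # (pre, post) pairs
control = [(50, 61), (47, 58), (58, 66), (53, 60)]

def gains(pairs):
    """Gain score: post minus pre, per participant."""
    return [post - pre for pre, post in pairs]

def welch_t(a, b):
    """Welch's t statistic for two independent samples."""
    se = (variance(a) / len(a) + variance(b) / len(b)) ** 0.5
    return (mean(a) - mean(b)) / se

g_t, g_c = gains(treatment), gains(control)
print(f"mean gain: treatment={mean(g_t):.1f}, control={mean(g_c):.1f}")
print(f"Welch t = {welch_t(g_t, g_c):.2f}")
```

The gain scores partially adjust for differing starting points between groups; with real data, the t statistic would be compared against a t distribution (or replaced by ANOVA/regression) to judge statistical significance.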
Qualitative Methods
- Basic qualitative analysis. In any basic qualitative analysis[1], the evaluator will sort and examine qualitative and quantitative data to identify common themes that are evident in the data. The identified themes will be sorted into categories, which can then be described in detail by the evaluator.
- Instead of attempting to demonstrate that a product is statistically better than another, the evaluator doing any kind of basic qualitative analysis will instead attempt to identify, describe, and compare how and why the products are different. Additionally, using systematic criteria, the evaluator can build a descriptive argument that one product performed better than another at achieving goals and outcomes.
- Qualitative analyses are a useful approach to understanding how and why products differ, and they can provide a high degree of interpretability and "real-world" context for readers in comparison to more quantitative methods. Quantitative methods also require large datasets of participants to make inferences, and an experimental approach may not always be possible, making a qualitative approach more realistic and appealing in some settings.
- Some evaluations, such as those commissioned by funders or purchasers of educational products, require quantitative statistical comparisons of outcomes. However, where it is appropriate, an evaluator who collects and analyzes qualitative evidence systematically can also produce high-quality, valid, and reliable results that argue for one product having better outcomes than another.
- Two- or multiple-case study. A case study is a systematic approach to richly describing an educational product and the factors that influence how people learn from it. A multiple case study is when two or more "cases", or educational products, are evaluated simultaneously and compared from multiple perspectives.
- A case study, especially a comparative case study between two products, will investigate and detail differences in the design of the products, how people use them, and what kinds of outcomes were observed. Although case study is considered a qualitative method, case studies also often use quantitative data to enrich descriptions of how, why, and with what effect a product is used. So, case study is not limited to qualitative or text-based data only!
- Case studies are very valuable for understanding how individual participants are influenced while using a product. In a case study that compares two or more products, the evaluator can critically identify comparisons and contrasts between the products throughout writing the case study.
- Through rich descriptions and even storytelling about how products were designed and used, a reader of a case study is ideally informed about how the products differ and why the results between products were observed. However, it is difficult to infer in a measurable and generalizable way whether the observed differences between products occurred by chance, which is something that quantitative experimental studies can show with greater degrees of certainty.
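As a sketch of how the systematically coded qualitative data described above can support a descriptive comparison, assuming fabricated (product, theme) codes assigned by a coder:

```python
from collections import defaultdict

# Illustrative sketch of tallying systematically coded qualitative data.
# The coded excerpts are fabricated: (product, theme) pairs a coder
# might assign to interview or observation excerpts.
coded_excerpts = [
    ("Product A", "engagement"),
    ("Product A", "confusion"),
    ("Product A", "engagement"),
    ("Product B", "engagement"),
    ("Product B", "frustration"),
    ("Product B", "frustration"),
]

# Tally how often each theme appears per product to support a
# descriptive comparison (not a statistical claim).
themes = defaultdict(lambda: defaultdict(int))
for product, theme in coded_excerpts:
    themes[product][theme] += 1

for product, counts in sorted(themes.items()):
    print(product, dict(counts))
```

Counts like these describe how the products differ in participants' experiences; the evaluator's argument then rests on systematic coding criteria rather than on statistical inference.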
Mixed Methods
- Both qualitative and quantitative methods may be used to compare how, why, and to what degree educational products differ, and they may be used simultaneously to confirm and support arguments.
- Mixed methodology is an increasingly common approach for examining multiple sides of the same question. Such approaches add cross-validation and triangulation, so that multiple perspectives are considered and support each other when making claims about a product.
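A minimal sketch of the triangulation idea above: pairing a quantitative indicator with a qualitative one for the same products so each claim can be checked against the other strand. All values are fabricated:

```python
# Fabricated evidence from two strands of a mixed-methods evaluation.
quant = {"Product A": {"mean_gain": 23.0}, "Product B": {"mean_gain": 9.3}}
qual = {"Product A": {"top_theme": "engagement"},
        "Product B": {"top_theme": "frustration"}}

# Combine both strands into one summary per product so that claims
# (e.g., "Product A outperformed B") can be triangulated.
summary = {p: {**quant[p], **qual[p]} for p in quant}
for product, evidence in summary.items():
    print(product, evidence)
```

Here the higher mean gain for Product A is corroborated by the dominant qualitative theme, illustrating how the two strands can support the same claim.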
Related Concepts
Examples
None yet - check back soon!
External Resources
None yet - check back soon!
References
- ↑ Merriam, S. B., & Tisdell, E. J. (2015). Qualitative research: A guide to design and implementation. Chicago: John Wiley & Sons.