Design feature use evaluation
From The Learning Engineer's Knowledgebase
Evaluations of design feature use ask how the various design features of an educational product or technology were actually used by people and whether that usage matched the designer's expectations.
Definition
An evaluation of design feature use specifically examines the design features of a technology to investigate how they were used. This type of evaluation identifies whether features were used as expected. It can also examine the type of use, participants' varying levels of use, and potential categories of feature use that describe usage patterns.
If varying types and levels of use are measured, they can be compared with measures of learning outcomes to indicate the impact a feature had on achievement.
This differs from usage pattern analysis, as the focus in usage pattern analysis is on the participant and their personal patterns of interaction throughout the learning experience, not on the design feature itself.
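To make the comparison above concrete, here is a minimal sketch of relating each participant's level of feature use to an outcome measure. The column names and values are hypothetical placeholders, and a correlation of this kind indicates association, not causation.

```python
# Minimal sketch: relate each participant's level of feature use to an
# outcome measure. Column names and values are hypothetical placeholders.
import pandas as pd

data = pd.DataFrame({
    "user_id":      [1, 2, 3, 4, 5],
    "feature_uses": [2, 15, 7, 0, 22],    # e.g., times a hint button was used
    "quiz_score":   [55, 88, 70, 49, 93], # learning outcome measure
})

# A simple Pearson correlation indicates whether heavier feature use
# tends to accompany higher outcome scores (it does not prove causation).
r = data["feature_uses"].corr(data["quiz_score"])
print(f"correlation between feature use and outcome: {r:.2f}")
```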
Additional Information
The primary focus of design feature use research questions is on the feature itself. The goal of these evaluations is to understand whether the specified design features are used as expected. The insights gained from the analysis can show designers how to improve each feature to better meet its goals and can answer questions about why a feature did or did not meet the expected level of use. Such studies can also identify new categories of use that the designers did not anticipate.
Common analysis methods for design feature use evaluations
- Analytics. Learning analytics and usage data collected unobtrusively from digital logfiles can capture every interaction a participant makes with a design feature. Analysis can then reveal what a participant used, how they used it, and for how long (a minimal sketch of this kind of log aggregation follows this list).
- Participants' perceived use. In addition to digital logs and analytics, participants' perceptions can be gauged through self-report methods that ask how much they used a feature and what their reasons were for using it. Think-aloud methods, in which participants talk about what they are doing (or what they did), what they notice while performing the task, and the reasons behind their decisions, can also help evaluators understand how and why people use a design feature.
- Usability study. Usability studies investigate how people use and feel about design features and interfaces. They target whether people can easily use an interface or technology and gauge participants' emotions about its use. Usability studies aim to improve design features directly by revealing how and why people use (or do not use) a design feature or interface. They often employ think-aloud and task analysis methods that let participants explain to researchers why they perceive something the way they do. These methods "make thinking visible" through dialogue in which a participant can talk about their experience and how it could be improved.
- Process analysis. Process analyses focus on identifying specific patterns of action among participants. Each categorically distinct action verb is identified and coded in the data, with codes kept as specific as possible. Patterns of activity can then be defined from the codes to discover how people use the educational product's features. The focus of this method is not counting participation or checking it against a threshold of expectation, but identifying what types of participation exist (see the coding sketch after this list).
- Thematic analysis. Thematic analyses are a general qualitative approach for identifying and categorizing the themes within a set of qualitative data. Often, data sets are first coded loosely at high resolution to identify small-scale themes. The researcher then identifies larger themes from the first round of coding and nests the first-level themes within them; multiple layers of themes can be created this way. For design feature use, participation with features can be categorized in a first round of coding, and broader themes can then be identified that categorize the types of participation. As with process analysis, the focus is not counting participation or checking it against a threshold, but identifying what types of participation exist.
- Interaction tracing. Used in many different qualitative analysis approaches, interaction tracing is a family of techniques that trace the sequence of a single person's interactions with an interface, with activities, and with other people in a learning environment. Usage examples, patterns, and cases of participation can be generated that reveal commonalities in people's use of technologies and design features, as well as each participant's specific purposes and timing in using each feature (a tracing sketch follows this list).
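As referenced in the analytics item above, the sketch below shows one way logfile data might be aggregated. It assumes a hypothetical flat event log (events.csv) with columns user_id, feature, timestamp, and duration_seconds; real logging schemas will differ.

```python
# Minimal sketch of logfile analytics over a hypothetical event log.
import pandas as pd

log = pd.read_csv("events.csv", parse_dates=["timestamp"])  # hypothetical file

# What was used, how often, and for how long, per feature and per user.
usage = (
    log.groupby(["feature", "user_id"])
       .agg(interactions=("timestamp", "count"),
            total_seconds=("duration_seconds", "sum"))
       .reset_index()
)

# Observed use can then be compared against the designer's expectations.
print(usage.sort_values("interactions", ascending=False).head())
```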
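For the process analysis and thematic analysis items, the following sketch illustrates a two-level coding pass: specific action verbs are nested into broader themes, and the resulting types of participation are tallied. The action names and the codebook are hypothetical, not a standard taxonomy.

```python
# Minimal sketch of two-level coding, assuming raw log actions have already
# been coded to specific verbs; the verb-to-theme codebook is hypothetical.
from collections import Counter

coded_actions = ["post_reply", "read_post", "post_reply", "edit_profile",
                 "read_post", "upvote_post", "read_post"]

# First-level codes nested into larger themes (second round of coding).
codebook = {
    "post_reply":   "contributing",
    "upvote_post":  "contributing",
    "read_post":    "consuming",
    "edit_profile": "housekeeping",
}

themes = Counter(codebook[a] for a in coded_actions)
print(themes)  # which types of participation exist, not whether a threshold was met
```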
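And for interaction tracing, this sketch rebuilds each participant's ordered sequence of interactions from raw events and counts common two-step transitions. The users, timestamps, and actions are invented sample data.

```python
# Minimal sketch of interaction tracing: rebuild each participant's ordered
# trace of interactions, then surface common action-to-action transitions.
from collections import Counter, defaultdict

events = [  # (user_id, timestamp, action): hypothetical sample data
    ("amy", 1, "open_video"), ("amy", 2, "pause_video"), ("amy", 3, "open_notes"),
    ("ben", 1, "open_notes"), ("ben", 2, "open_video"), ("ben", 3, "pause_video"),
]

traces = defaultdict(list)
for user, ts, action in sorted(events, key=lambda e: (e[0], e[1])):
    traces[user].append(action)

# Count two-step transitions across all traces to reveal commonalities
# in how people move between features.
transitions = Counter(
    (a, b) for trace in traces.values() for a, b in zip(trace, trace[1:])
)
print(transitions.most_common(3))
```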
As with any evaluation approach, a full treatment of any specific method is beyond the scope of this wiki; each method could fill an entire college course on its own. Readers are encouraged to research the methods in more depth if they are interested in the research questions they answer.
Tips and Tricks
- The design features of your educational product should be well defined. Defining them lets you evaluate how and why features were used to help participants accomplish their goals and achieve learning outcomes.
- Consider what types of features are in your product and how you will measure people's use of them. The ways you measure usage of a design feature should align with your expectations for how people will participate in the activity. For instance, if you require people to participate in online discussions, you could measure how frequently they participate, the length of conversations, the number of replies, or even how often they click on the page. You should then set a level of expected usage, a threshold for how much participation you consider adequate. This threshold sets the mark for later evaluation of why people did or did not meet it (a simple threshold check is sketched after this list).
- When using digital technologies, you can measure use more easily through digital logs captured by the software, such as frequency of clicks, duration of active sessions, or dialogue between participants in discussion forums.
- In face-to-face settings, use of a feature may be measured by how often a person uses an object or resource, by the duration of conversation between participants, or through technologies that capture the duration and direction of eye gaze or audio recordings that capture the content and duration of participants' conversations.
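As a minimal illustration of the threshold idea from the tips above, the sketch below checks per-user discussion post counts against a designer-set expectation; the names, counts, and threshold value are hypothetical placeholders.

```python
# Minimal sketch of a usage threshold check over hypothetical post counts.
posts_per_user = {"amy": 12, "ben": 3, "cal": 7}
EXPECTED_POSTS = 5  # designer-set threshold for adequate participation

for user, posts in posts_per_user.items():
    status = "met" if posts >= EXPECTED_POSTS else "below"
    print(f"{user}: {posts} posts ({status} expectation)")

# Users below the threshold become cases for follow-up evaluation
# into why the expected level of use was not reached.
```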
Related Concepts
Examples
None yet - check back soon!
External Resources
None yet - check back soon!