B. Centralize infrastructure for short-format training assessment and evaluation
Most short-format training instructors and instructional designers develop and maintain their own assessments and evaluations. Centralizing assessment and evaluation can reduce duplicated effort while improving quality and preserving the flexibility to meet local needs.
Learner assessment and program evaluation are important for effective instruction, but there are many barriers to implementing these in SFT. Centralizing infrastructure (i.e., human expertise, software, and technology) for SFT assessment and evaluation could promote consistency within and across programs and instructors. Centralization could also promote use of both formal assessment and evaluation, while reducing the burden on individual instructors and instructional designers. This effort could include repositories of resources for assessment and evaluation design, online tools, question banks, help desks, etc., for use in SFT contexts.
How might this work:
SFT instructors and instructional designers could use or contract with a centralized assessment and evaluation resource to help them determine their objectives and needs. Consultation would generate recommendations on resources (e.g., assessment activities or items, evaluation methods or instruments), highlighting their relevance and utility. Web services could also support long-term assessment, follow-up, and evaluation. SFT participants could be directed to online tools to provide feedback using validated methods. Instructors would then receive actionable feedback reports to improve their teaching [1], along with additional consultation if needed. Insights generated could also be used to report to funders.
Currently, organizations such as ELIXIR and Melbourne Genomics use standardized question sets for diagnostic assessment and to document program effectiveness and impact in reports for stakeholders [2][3]. Centralized evaluation projects such as NISER [4] have been used for undergraduate STEM evaluation. In the US, the National Institute for Learning Outcomes Assessment has also been successful at promoting learning outcomes assessment [5]. There is ample evidence that a strategy that formalizes and centralizes these services could work.
Benefits to the learners:
- The existence of independent, high-quality assessment and evaluation data on any given SFT will give learners confidence in the quality of the SFT. This could help learners choose between “competing” SFT options that have similar topic areas and objectives.
- As learners become familiar with high-quality assessment and evaluation in their SFT options, they will be more likely to require these features, which will support broader adoption of this recommendation.
- If widely adopted, learners could have a consistent and familiar approach to rating and leaving feedback for SFT. This may also empower learners with the knowledge that their feedback can lead to changes.
Incentives to implementers:
For Instructors and Instructional Designers
- Feedback from effective assessment and evaluation could provide objective and actionable information to improve both learning and instruction.
- Positive evaluation results from a trusted authority could increase enrollment and engagement.
For Instructional Designers, Funders, and Organizations
- Expertise in assessment and evaluation is not typically available for SFT. A centralized resource can increase availability and consistency in both.
- A centralized resource could reduce the time and cost of implementing assessment and evaluation.
- Collecting long-term learner feedback is a challenge for SFT; a centralized resource could help address it.
- Evaluation data could be used for reporting to stakeholders and securing further funding.
Barriers to implementation:
- Creating, leading, resourcing, and sustaining a shared resource will take significant effort, time, energy, and expertise.
- Use of the resource will depend on interest from instructors, instructional designers, and organizations. Funding agencies or other funders could potentially encourage its use.
- Groups, organizations, or individuals may resist standardized assessments or evaluations because these might highlight current inadequacies. Encouragement from funders and organizations to use the centralized resource may be important to offset this concern.
After reading the survey instructions and consent page and the recommendation above, please rate and leave feedback using the survey questions below:
[1] Ahea M, Ahea M, Kabir R, Rahman I (2016). The Value and Effectiveness of Feedback in Improving Students' Learning and Professionalizing Teaching in Higher Education. Journal of Education and Practice. https://files.eric.ed.gov/fulltext/EJ1105282.pdf

[2] Gurwitz KT, Singh Gaur P, Bellis LJ, Larcombe L, Alloza E, et al. (2020). A framework to assess the quality and impact of bioinformatics training across ELIXIR. PLOS Computational Biology 16(7): e1007976. https://doi.org/10.1371/journal.pcbi.1007976

[3] Maher F, Lynch E, Marty M, Tytherleigh R, Charles T, Nisselle A, Gaff C. Genomic education for medical specialists: case-based workshops and blended learning. (in preparation)

[4] LoRe SM, Bishop P, Kidder K, Taylor R (2018). National Institute for STEM Evaluation and Research (NISER). Wicked Problems: Investigating real world problems in the biology classroom (SW 2018), (Version 2.0). QUBES Educational Resources. doi:10.25334/Q4J98F

[5] National Institute for Learning Outcomes Assessment (2016, May). Higher education quality: Why documenting learning matters. Urbana, IL: University of Illinois and Indiana University. https://files.eric.ed.gov/fulltext/ED567116.pdf