External review (Assessments of the IRDL program)

This post is part of a series, describing the assessments used to develop the Institute for Research Design in Librarianship (IRDL).

When we initially developed the components of the program, we could only imagine what the experience would be like for the Scholars. We hired an external evaluator to act as an observer throughout the Summer Research Workshop, asking that person to focus on how the Scholars engaged with each other, the curriculum, and the environment we created during the Workshop. We requested immediate feedback during the Workshop if the evaluator noticed something that should be changed, and a written reflection afterward, to provide us with their thoughts on how we might improve the design of future workshops.

In the first year of the program, we hired Linda Hofschire of the Colorado State Library to serve as the external evaluator. In addition to observing during the Workshop, we asked her to review some of the data we gathered (confidence scale data and proposal evaluation data) and to offer her perspective on how they might inform future iterations of the Workshop.

After the first year, we hired Nina Exner, an academic librarian pursuing a PhD, as the external reviewer, and asked her to continue in the role in each subsequent year so that she could observe and comment on the changes we made over time to the curriculum, cohort-building activities, and environment. We continued with Nina even when we moved to the online environment (IRDL Online, 2021-2024). In addition to evaluating the program, Nina served as a research consultant during the Workshop, providing one-on-one consultations with the Scholars on their research design.

My reflection on the use of this tool for assessing the program
When we designed the program, I thought carefully about the environment we were creating, wondering about things like the pacing of the Workshop, to make sure the Scholars had enough time to absorb and reflect on the material, so they felt empowered by what they were learning rather than overwhelmed. I thought about how the setup of the room might affect the interactions the Scholars would have, in formal hands-on group exercises as well as informal chatting during breaks. I thought about how their energy might be sustained throughout the day, and planned a variety of snacks, yoga breaks, and walking breaks so that they could retain their focus on the learning content when needed.

Having someone whose sole role was to observe the program in action and look out for the Scholars during the Workshop was reassuring to me; it helped ensure they were being cared for as I expected. Based on the evaluators' observations, we nimbly adjusted things for comfort and learning during the Workshop, and later considered their written reflections when designing future years of the Workshop. Over time we noticed that their written feedback suggested only minimal changes; overall, the things they observed and reported on were what we hoped for the Scholar experience.

Working with the same evaluator year after year benefited us both practically, by providing continuity for the program (someone who could notice whether a change we made based on their recommendation worked as they imagined it would), and interpersonally, by allowing us to develop a long-term professional relationship with an expert in our field.

The cost of this assessment tool
The costs included travel to and from Los Angeles, housing and food during the Workshop, and an honorarium.

Earlier posts in this series:
Introduction post, Confidence scale, Research networks of the Scholars

About Marie Kennedy

Putting everything into neat piles.
This entry was posted in IRDL.
