[The American University in Cairo] Authentic Learner Assessment for Online Programs: Rethinking Approaches

As recounted by Yasmine Yehia, Instructor Affairs Manager, and Naglaa Fawzy, Senior Program Development Manager

Only a year ago, before the COVID-19 wave began, the Program Development Unit was just a new initiative, with instructors working tirelessly and enthusiastically with instructional designers to establish a new culture of program design and development. Between training subject-matter experts, reviewing designs and deliverables, and working closely with program managers, the overarching goal was to develop and deliver best-in-class programs.

“We were at the doorstep of success and were waiting to celebrate it when we were hit by a new reality along with its challenges: COVID-19 and virtual learning,” reflects Naglaa, accurately describing the situation, and adding, “At one point, we felt like we were standing on shaky ground again, overwhelmed by tasks and lacking clear direction.”

While the pandemic had hit before March 22, it was only then that we were all confronted with its repercussions: shifting to online learning. The transition presented its own unique set of challenges, but for Yasmine it meant that “My top priority was to help our 38 instructors adapt to this changing learning environment, while turning to the Program Development Unit for support. Above all, my concern was finding the sweet spot: blending swift action with a coherent learning experience that encourages interaction and meets the programs’ objectives. We decided to start by launching instructor training sessions to equip them for fully online learning.”

Then came the discussion of assessment in the new context. Traditionally, we had two methods of assessment: project-based assessments and timed exams. In the online learning environment, timed exams were no longer feasible, given the difficulty of proctoring and their general ineffectiveness under the circumstances.

Accordingly, all timed exams were replaced with ‘major assignments’ in a variety of formats, including reflection papers, case studies with presentations, and project-based assignments with presentations. This was undoubtedly a major departure from standard practice, and it required its own training to acquaint both instructors and participants with the guidelines, outlines, and rubrics that evaluations would be based on, ensuring alignment with the course objectives.

While seemingly simple, this was in fact a lengthy trial-and-error phase that involved significant A/B testing and experimentation. We continuously reset our training strategies to meet needs that changed almost daily. Not to mention that instructors were understandably taken aback by the situation at first, and accordingly, their engagement levels weren’t on par with the usual; we could sense it.
