TYPE:
Research
SUMMARY:
Moderated usability testing of Canvas, the learning management system adopted by the University of Washington, with a focus on on-the-go usability of the mobile app. Conducted as part of master's program coursework at the University of Washington.
IMPACT:
Delivered a prioritized list of next steps, categorized by scale of change and suggested time frame, to facilitate more effective collaboration with cross-functional stakeholders in a real-world setting.
METHODS:
Moderated Usability Test
DURATION:
6 WEEKS
2024.02 - 2024.03
ROLE:
Research mentor and project manager in a team of 4 researchers
DELIVERABLES:
Research report and recommendations
Context
Canvas is the learning management system (LMS) adopted by the University of Washington to help students access and manage course materials such as syllabi, announcements, and assignments. We wanted to investigate the pain points student users face when using the mobile version to check feedback from professors, to-do items, calendars, and announcements. As fellow students at UW, the product and topic of study were close to our hearts.
Research Methodology
Knowing that both a desktop and a mobile version of the LMS exist, we ran an informal poll of our classmates (who were 3 months into our program and novice users of Canvas) to understand their usage of the two versions.
Based on their answers, we learned that student users use Canvas Mobile to check things on-the-go, such as new announcements, to-dos, and newly graded assignments.
We wanted to test the usability of the mobile app with novice / less confident users, as:
- UW gets new students regularly
- Students don’t get a say in the LMS used
- Learnability is important
- For fast-paced programs like ours, on-the-go usability is important
Research Questions
- How easily and successfully can users find the optimal paths that support their tasks?
- What are the key pain points preventing users from effectively using Canvas Mobile on-the-go?
- Does the system's behavior match users' expectations?
Research set-up
We first did a heuristic evaluation to help us understand the app better. Thereafter, we ran a moderated usability test.
Data we collected:
- Qualitative and quantitative user behavior data
  - Qual: interviews, observation, think-aloud
  - Quant: click count, System Usability Scale (SUS; see the scoring sketch below)
- Pre-test and post-test questions on attitudes
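For context on the quantitative measure above: the System Usability Scale is scored with a fixed formula (Brooke, 1996). Each participant answers ten 1-5 Likert items; odd-numbered (positively worded) items contribute (response - 1), even-numbered (negatively worded) items contribute (5 - response), and the sum is multiplied by 2.5 to give a 0-100 score. A minimal scoring sketch in Python, using made-up example responses rather than our actual study data:

```python
# Minimal sketch of standard SUS scoring; the sample responses below are
# hypothetical and not drawn from our study.

def sus_score(responses: list[int]) -> float:
    """Convert ten 1-5 Likert responses into a 0-100 SUS score.

    Odd-numbered items are positively worded (contribution = response - 1);
    even-numbered items are negatively worded (contribution = 5 - response).
    """
    if len(responses) != 10 or not all(1 <= r <= 5 for r in responses):
        raise ValueError("SUS expects ten responses on a 1-5 scale")
    contributions = [
        (r - 1) if i % 2 == 0 else (5 - r)  # i is 0-based, so even index = odd-numbered item
        for i, r in enumerate(responses)
    ]
    return sum(contributions) * 2.5

# Example: one (hypothetical) participant's responses
print(sus_score([4, 2, 5, 1, 4, 2, 5, 2, 4, 1]))  # -> 85.0
```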
Participants:
- Current undergraduate or graduate student at UW
- iPhone user
- Either (1) self-rated 3 or below for confidence in navigating the Canvas Mobile app, or (2) had used Canvas Mobile for less than 6 months
My role
As the most experienced researcher on the team, I mentored teammates who were newer to user research on research best practices.
I drafted the screener and created the data log; we collaboratively refined the task list and success criteria. I moderated the pilot session to test-run the moderation guide drafted by a teammate, and provided feedback based on my industry experience and observations from the pilot. I coached teammates on moderating interviews, observed the sessions they moderated, and provided feedback.
Findings
Our research uncovered 7 key findings ranging in severity (view our full report here).
Main sources of error on the Canvas App:
- Information architecture that did not match user expectations
- Misleading labels and icon choices
- Poor visual hierarchy
These sources of error led to poor learnability and discoverability of features on the app.
Recommendations
We generated a prioritized list of recommendations, which I organized graphically into 3 categories and 2 time frames based on effort and severity. This prioritization and categorization would help us collaborate more effectively with relevant colleagues and stakeholders in a real-world environment.
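To illustrate the kind of effort-and-severity mapping described above, here is a small sketch in Python. The category labels, time frames, and cutoffs are hypothetical placeholders, not the exact scheme from our report:

```python
# Hypothetical sketch of mapping a recommendation's effort and severity to a
# category and time frame; labels and cutoffs are illustrative only.

def prioritize(effort: str, severity: int) -> tuple[str, str]:
    """Map effort ('low'/'medium'/'high') and severity (1-4) to a bucket."""
    if effort == "low":
        category = "Quick fix"
    elif effort == "medium":
        category = "Design iteration"
    else:
        category = "Structural change"
    # Severe issues and cheap fixes get addressed sooner in this sketch.
    time_frame = "Near term" if severity >= 3 or effort == "low" else "Longer term"
    return category, time_frame

print(prioritize("low", 4))   # ('Quick fix', 'Near term')
print(prioritize("high", 2))  # ('Structural change', 'Longer term')
```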
Reflections
- Behavioral data, such as click count, was very valuable in this study, as some participants rated tasks as easy despite struggling significantly. We realized this was related to self-efficacy: participants who self-declared "I'm not good with apps / this app" tended to rate tasks as easy, attributing their struggles to their own ability rather than to poor design of the app.
- That being said, we felt that click count data was rather noisy. In the future, I would explore other data sources, such as error rate.
- I learned the importance of how tasks and success criteria are framed, as small lexical choices could have a great impact on how participants interpreted the tasks and how we analyzed the data afterward.