EduGREAT: Researching and Designing a Digital Collaborative Toolkit for K-12 Educators
My Role
This Capstone project was completed during the HCI Master of Science program at DePaul University. I made the following contributions to this project:
Managed project timeline
Conducted secondary research on K-12 educators and on remote and hybrid teaching tools and challenges
Defined project goals
Wrote interview protocol and discussion guide
Collaborated on interview analysis
Recruited K-12 educators for interviews, card sorts, and usability testing
Conducted interviews with K-12 educators
Programmed and analyzed card sorts
Created the journey map
Wrote usability testing protocol and task list
Conducted usability testing on low-fidelity and high-fidelity prototypes
Sketched initial design concepts
Assisted in the design of the low fidelity prototype
Wrote the abstract, introduction, and goals sections of the final report, and portions of the methods and results sections
Abstract
This project explores the challenges of virtual and hybrid education, which have become more apparent and are rapidly evolving due to COVID-19. Remote education presents a number of opportunities for improvement. Based on a preliminary literature review, a competitive analysis, and findings from live remote interviews and usability testing with remote and hybrid K-12 educators, we propose a collaborative digital toolkit. We believe this virtual education platform will be better adapted to the needs of educators, students, and guardians in the current educational climate than the existing remote education platforms we reviewed.
Our literature review indicated that there are currently few online platforms that allow educators to effectively collaborate with their peers on educational materials, or to effectively organize those materials. Through our competitive analysis and live interviews with K-12 educators, we discovered a pressing need to integrate multiple apps and resources into one platform, which would allow educators to better manage their time and promote easier access to and utilization of educational resources. The educators we interviewed also shared a wide range of needs regarding adapting educational materials to their diverse student populations. The usability tests we conducted showed that our users appreciated the all-in-one toolkit concept and largely found it simple to complete the tasks we gave them. They also liked the increased communication and ability to share resources with other teachers, as well as having training and support materials quickly and easily accessible. While we exclusively targeted remote and hybrid educators for our research, we believe this product will also benefit K-12 teachers who are educating in person. The EduGREAT Digital Collaborative Toolkit for Educators addresses the major challenges we discovered through our research.
Introduction
COVID-19 has dramatically impacted K-12 education over the past year. Our team felt that focusing this project on remote virtual education would be timely and would help raise awareness of some of the specific challenges educators are currently experiencing. To gain a better understanding of the goals, frustrations, and adjustments of teachers who have had to adapt to virtual teaching environments, we conducted live virtual interviews with K-12 remote and hybrid educators. We included general classroom educators and teachers who specialize in Special Education (SPED), English Language Learning (ELL), and Social and Emotional Learning (SEL). This gave us a glimpse into how these teachers have adjusted the resources they use, their technical skills, and in some cases large parts of their personal lives in order to provide the best education they can to students under the current circumstances.
The interviews, literature review, and competitive analysis we conducted revealed that although a number of online education platforms are available to K-12 educators, no single platform addresses all three of the major needs we identified. We found that a complete digital toolkit for educators would 1) integrate multiple apps and resources into one platform for easier access and utilization, 2) simplify the process of sharing tools, lessons, and encouragement with other educators, and 3) facilitate more effective collaboration and communication with students, guardians, and colleagues.
We believe streamlining teachers’ tools and encouraging them to share resources will allow them to better manage their time and apply a more personalized approach to educating students with diverse needs and abilities. For the purposes of this project, our team decided to focus exclusively on the educator-facing side of the Toolkit, because teachers create and manage the content. We envision our product as a responsive platform usable on mobile, tablet, or desktop web. We decided to prototype it in a web format because we found that educators used their computers most frequently for more robust tasks like lesson planning.
Goals
Goal 1: Evaluate K-12 educators’ level of satisfaction with their current educational resources for remote or hybrid teaching.
Measure: Interview a minimum of six K-12 educators who are currently teaching remote or hybrid classes, and ask them to use a 5-point Likert scale (1=Not at all Satisfied, 5=Very Satisfied) to rate their satisfaction with their current educational resources for remote or hybrid teaching. We will calculate the median and mode of the collective responses to determine overall satisfaction.
Goal 2: Evaluate the level of interest for an integrated collaborative digital toolkit among educators.
Measure: Interview a minimum of six K-12 educators who are currently teaching remote or hybrid classes. Ask them to use a 5-point Likert scale (1=Not at all Interested, 5=Very Interested) to rate their interest in an integrated collaborative digital toolkit. We will calculate the median and mode of the collective responses to determine overall interest.
Goal 3: Evaluate the effectiveness of the prototype features by conducting remote usability testing on the prototype with current K-12 educators. Use a testing task list focused on collaboration activities and adding and organizing multiple digital resources.
Measure: Rate each completed task as “pass” and each failed task as “fail”. The number of successfully completed tasks will be used to determine the effectiveness of the prototype.
Goal 4: Evaluate the perceived usefulness of the prototype amongst current K-12 educators by conducting a post-usability test evaluation.
Measure: Conduct a post-test evaluation with a minimum of five K-12 educators who are currently teaching remote or hybrid classes. Ask them to use a 5-point Likert scale (1=Not at all Useful, 5=Very Useful) to rate the perceived usefulness of the prototype. We will calculate the median and mode of the collective responses to determine overall perceived usefulness.
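As a note on the analysis plan, the sketch below shows one way to compute the median and mode of a set of Likert responses using Python's statistics module. The ratings shown are hypothetical placeholders, not study data.

```python
from statistics import median, mode

# Hypothetical 5-point Likert responses from six interviewees
# (placeholder values for illustration, not our actual study data).
ratings = [2, 3, 3, 4, 3, 2]

print(f"Median: {median(ratings)}")  # middle value of the sorted responses
print(f"Mode:   {mode(ratings)}")    # most frequent response
```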
Methods & Participants
We utilized a number of research and design methods throughout this 10-week project.
Competitive Review
We evaluated competitors’ products in the online learning space that support remote educators to gain a better understanding of the market, and identified the strengths and weaknesses of each product to determine areas of opportunity for our own product. After reviewing over a dozen potential competitor products in the market, we chose to analyze four that include useful features for a digital teacher toolkit.
Remote Interviews
We conducted in-depth remote interviews via Zoom with 7 current K-12 teachers to learn about their remote education tools, goals, and challenges, and to gather potential recommendations for improvement. Interview findings were used to develop design implications and shape the product direction.
Figure 1: Interview Participant Data Summary
User Archetypes
We developed user archetypes to help the team better understand different target users who will be interacting with our product. Based on our research, we created archetypes with specific goals, motivations, and pain points, to ensure that our product is designed to meet the needs of these users. Throughout the product development process, we referenced these archetypes to ensure we were staying on track with our design decisions. The user archetypes were created in Figma using the synthesized data from our interviews with K-12 educators.
Figure 2: Primary Archetype
Figure 3: Secondary Archetype
Journey Map
We designed a journey map to provide a visual representation of our primary archetype’s actions as they relate to remote education. This helped the team understand the mental models of our users and prioritize potential areas of opportunity for the prototype design. The Journey map was created in Figma, based on information gathered from the interviews conducted with K-12 remote and hybrid educators.
Figure 4: Journey Map for our Primary Archetype
Hybrid Digital Card Sort Round 1: Pilot
We recruited 8 classmates for this hybrid card sort through a class discussion post. Participants were not screened and demographic information was not collected, since they did not meet our target user criteria and this round was treated strictly as a pilot test. Optimal Sort was used to create and analyze this hybrid digital card sort.
Hybrid Digital Card Sort Round 2: K-12 Educators
We recruited 10 K-12 remote and hybrid educators for this hybrid card sort through social media requests for participants and emails to personal connections who met the target user criteria. Optimal Sort was used to create and analyze this hybrid digital card sort.
Figure 5: Card Sort Participants’ Grades and Subjects Taught
Crazy-8s Sketching
We used the Crazy-8s sketching exercise to generate design concepts for our prototype. Each team member quickly sketched eight distinct ideas for prototype screens that addressed the remote education problems we were exploring, then presented their designs to the group. We selected components and concepts from multiple designs and combined them to create a framework for our digital toolkit prototype.
Figure 6: Selection of Crazy-8s sketches used to brainstorm prototype ideas
Low-fidelity Prototyping
Our team created the low-fidelity prototype in Figma, incorporating the Crazy-8s interface designs selected by the team. We created a desktop prototype that fused features from several of the existing remote education products teachers are using into one product.
Figure 7: Low-Fidelity Prototype Dashboard
Figure 8: Low-Fidelity Educational Resources Page
Figure 9: Low-fidelity Teacher Collaboration Page
Low-fidelity Usability Testing
We recruited 5 K-12 educators for remote low-fidelity usability testing through personal connections. We conducted a low-fidelity prototype evaluation to identify potential usability problems our users might encounter when trying to complete tasks. This evaluation feedback allowed us to modify the prototype early in the design process, test ideas, and collect feedback on revised designs more efficiently.
Additionally, participants were asked to rate the perceived usefulness of the collaborative digital toolkit using a 5-point Likert scale (1=Not at all Useful, 5=Very Useful) in order to evaluate how users felt about the toolkit.
Figure 10: Low-Fidelity Usability Testing Participant Summary
Figure 11: Low-Fidelity Usability Tasks
High-fidelity Prototyping
We created the high-fidelity prototype in Figma using the low-fidelity screens as a base. The low-fidelity usability testing results informed the changes we made to the high-fidelity prototype. We added 20 additional screens to complete the six task flows for our second round of usability testing.
Some of the design changes we made to the high-fidelity prototype included:
Color and Contrast
We selected a color palette with high enough contrast to meet or exceed WCAG guidelines.
Logo
We designed a simple logo that combined colors and visual elements of the prototype.
Images
We added photos and illustrations in some areas of the high-fidelity prototype to make it more realistic.
Font Size
We increased the font size to a minimum of 16 pt throughout the prototype to meet WCAG guidelines.
Helper Text
After some users had difficulty initiating the Lesson Creation task, we created a help screen to guide them if they clicked on the “Need help creating a lesson” link.
More Flexibility
Because low-fidelity usability testing participants tried to initiate the Lesson Creation task from the Dashboard and Educational Resources pages in addition to the Lesson Portal, we added buttons and screens so the task could be started from each of these locations.
Figure 12: High-Fidelity Prototype Dashboard
Figure 13: High-fidelity Educational Resources Page
Figure 14: High-fidelity Teacher Collaboration Page
High-fidelity Usability Testing
We recruited 7 K-12 educators for participation in remote high-fidelity usability testing through personal connections. We conducted usability testing on the high-fidelity prototype to evaluate updates made based on the low-fidelity evaluation. We then used quantitative and qualitative data from our testing to create a list of findings, recommendations, and future areas of focus for the Collaborative Digital Toolkit for Educators.
Additionally, participants were asked to rate their perceived usefulness of the collaborative digital toolkit using a 5-point Likert scale (1= Not at all Useful, 5= Very Useful) in order to evaluate how they felt about the toolkit.
Figure 15: High-Fidelity Usability Testing Participant Summary
Figure 16: High-Fidelity Usability Tasks
Results
Remote Interviews
Interview Findings
A total of 7 remote user interviews were successfully completed with K-12 educators via Zoom. After transcribing each interview, the team created a shared affinity diagram in Mural to synthesize and organize findings into three main focus categories: 1) Collaboration, 2) Remote Teaching Methods and Challenges, and 3) Lesson Planning and Resources. While teachers were able to learn new programs and adapt available resources to their needs in the short transition period following the sudden arrival of the pandemic, they faced a number of difficulties throughout this process.
Collaboration
Access to student progress, lesson plans, and assignments
Lack of collaboration options for specialized educators
Remote Teaching Methods and Challenges
Managing multiple platforms and resources
Lack of training and support
Lack of time to focus on teaching
Lesson Planning and Resources
Utilizing arbitrary organizational methods
The need for lesson modification and adaptation
Lack of effective supplemental resources
Design Implications
Based on these findings, we integrated the following features into our system:
Ability to add supporting team members to student profiles to increase communication, access, and collaboration between main classroom instructors and supporting teachers.
IEP/504 and other specialized plan inclusion on student profile pages, giving main classroom and specialized teachers easier access to these items so they can better support their students.
Teacher groups, forums, and messaging to aid in collaboration, discussion, and connection between educators.
The system should be multi-functional in order to save educators time and effort, aid in task completion, and allow educators more time for teaching and focusing on the needs of students.
Training videos and tech support should be readily available to aid educators in utilizing software and new applications.
An easy lesson and resource organization system to help educators stay organized and aid in lesson planning.
The ability to modify lesson plans and resources to fit specific educator needs.
The ability to share lesson plans and resources to aid in collaboration between main classroom teachers and support educators.
A rating and review system to help educators find effective supplemental resources and minimize the time spent looking for them.
A resource verification feature to indicate resources that have been vetted by an authoritative source to minimize frustration over determining which resources are legitimate.
Goal-Related Statistical Results
In our interviews we addressed user satisfaction with currently available resources. We asked participants to rate their level of satisfaction with their current educational resources for remote or hybrid teaching on a scale of 1-5, with 1 being “Not at all Satisfied” and 5 being “Very Satisfied”. Only 1 participant reported feeling satisfied with their current resources; 6 of 7 rated their level of satisfaction as neutral or unsatisfied.
Figure 17: Level of Satisfaction with Current Resources
We then addressed interest in an all-in-one solution such as our proposed system. We asked participants to rate their interest in an integrated collaborative digital toolkit on a scale of 1-5, with 1 being “Not at all Interested” and 5 being “Very Interested”. Only 1 participant rated their interest in the toolkit as neutral; 6 of 7 were interested or very interested.
Figure 18: Interest in a Digital Collaborative Toolkit
Both sets of results had low mean absolute deviations, indicating consensus among participants on both their satisfaction with current resources and their interest in a digital collaborative toolkit. Overall, participants were neutral about their current resources, rating their level of satisfaction at an average of 3.14 out of 5. Interest in a digital collaborative toolkit was high, with an average rating of 4.43 out of 5.
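For reference, the mean absolute deviation (MAD) used here is the average distance of each response from the mean. The sketch below computes it for a hypothetical set of interest ratings chosen to match the reported distribution (one neutral, six interested or very interested, mean 4.43); the individual values are illustrative, not our raw data.

```python
def mean_absolute_deviation(values):
    """Average distance of each value from the mean; a low MAD
    means responses cluster tightly, i.e., there is consensus."""
    m = sum(values) / len(values)
    return sum(abs(v - m) for v in values) / len(values)

# Hypothetical interest ratings from seven interviewees (illustrative only).
interest = [3, 4, 4, 5, 5, 5, 5]
print(round(sum(interest) / len(interest), 2))      # mean: 4.43
print(round(mean_absolute_deviation(interest), 2))  # MAD:  0.65
```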
Hybrid Digital Card Sort Round 1: Pilot
This card sort was attempted by 10 participants, and completed by 8. None of the 8 participants had experience as a K-12 educator, so this was treated strictly as a pilot test.
Participants were asked to categorize a total of 29 cards into 5 categories:
Class Directory
Educational Resources
Lesson Portal
Support and Development
Teacher Collaboration
The highest percentage of agreement reached among cards was 88%, and the lowest percentage was 25%. The table below lists the top ten cards with the highest percentage of agreement from highest to lowest, and the categories participants placed them into.
Figure 19: Top 10 Cards with Highest Percentage of Agreement
We identified one issue among these top ten cards:
We intended for Permission Settings to be used when sharing resources with other teachers. We determined that this card was confusing because it represented a setting, rather than housing content as the other card labels indicated.
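Optimal Sort reports these agreement percentages directly. As a rough approximation of how such a score can be derived, card-level agreement can be treated as the share of participants who placed a card in its most popular category; the sketch below makes that assumption (the tool's exact formula may differ), and the placements shown are hypothetical.

```python
from collections import Counter

def card_agreement(placements):
    """Share of participants who placed the card in its most popular
    (modal) category; an approximation of the reported agreement score."""
    counts = Counter(placements)
    return counts.most_common(1)[0][1] / len(placements)

# Hypothetical placements of one card by the eight pilot participants.
placements = ["Lesson Portal"] * 7 + ["Educational Resources"]
print(f"{card_agreement(placements):.0%}")  # 88% (7 of 8 agree)
```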
The table below lists the bottom ten cards by lowest percentage of agreement from lowest to highest, and the categories participants placed them into.
Figure 20: Bottom 10 Cards with Lowest Percentage of Agreement
The overall percentage of agreement among card categories was low. The highest percentage of agreement reached among categories was 39%, and the lowest percentage was 22%. The table below lists the categories by percentage of agreement from highest to lowest.
Figure 21: Categories by Percentage of Agreement and Number of Cards
We concluded that the primary reason category agreement was so low amongst participants was because they placed so many different cards in each category.
We identified three reasons that participants placed so many different cards in each category:
Some of the cards we created were too vague, and did not clearly define whether their content was for teachers or students.
Without context around the categories and cards, participants were able to place most cards into multiple logical categories.
Our participants were not K-12 educators, so some card labels and categories were unfamiliar to them.
This indicated that we needed to revisit some of the card and category names to make them more clear for our Hybrid Digital Card Sort 2 with K-12 Educators.
The qualitative data we collected from participants focused on two areas: 1) confusing cards or categories, and 2) conflicting cards or cards that were too similar. Analysis of these comments indicated that:
Some participants were confused by cards that needed more context or a more descriptive label; without these, cards such as My Favorites, My Lessons, Teacher Resources, Teacher Collaboration, and Images could fit into multiple categories.
7 of 8 participants mentioned that there were no conflicting or overly similar cards.
Based on all data collected from this card sort, we made the following changes to categories and cards for the Hybrid Digital Card Sort 2 with K-12 Educators:
We eliminated the Permission Settings card, because we felt it required additional context that could not be conveyed in a card sort. We also decided to refine our focus to navigational labels.
We changed the name of three cards to provide more clarity:
Images was changed to Images and Graphics
Discussion Forum was changed to Community Forum
Learning Strategies was changed to SEL, a term more specific to education
Hybrid Digital Card Sort Round 2: K-12 Educators
This card sort was attempted by 14 participants, and completed by 10. All 10 participants had experience as remote or hybrid K-12 educators.
Participants were asked to categorize a total of 29 cards into 5 categories:
Class Directory
Educational Resources
Lesson Portal
Support and Development
Teacher Collaboration
The highest percentage of agreement reached among cards was 100%, and the lowest was 40%; both were significantly higher than in the first card sort. The table below lists the top ten cards with the highest percentage of agreement from highest to lowest, and the categories participants placed them into.
Figure 22: Top 10 Cards with Highest Percentage of Agreement
The table below lists the bottom ten cards by lowest percentage of agreement from lowest to highest, and the categories participants placed them into.
Figure 23: Bottom 10 Cards with Lowest Percentage of Agreement
The overall percentage of agreement among card categories was still low, but higher than in the first card sort. The highest percentage of agreement reached among categories was 49%, and the lowest percentage was 28%. The table below lists the categories by percentage of agreement from highest to lowest.
Figure 24: Categories by Percentage of Agreement and Number of Cards
As in the first card sort, we concluded that the primary reason category agreement was so low amongst participants was because they placed so many different cards in each category.
We identified two reasons that participants placed so many different cards in each category:
Some of the categories we created were still too vague, and did not clearly define whether their content was for teachers or students.
Without context around the categories and cards, participants were able to place most cards into multiple logical categories.
This indicated that we needed to adjust some of the card and category names for our prototype navigation and content.
The qualitative data we collected from participants focused on two areas: 1) confusing cards or categories, and 2) conflicting cards or cards that were too similar. Analysis of these comments indicated that:
Some participants were confused by cards because they needed more context or a more descriptive label to categorize them. SEL, Submissions, and Ratings and Reviews were all mentioned.
One participant said that it was difficult to distinguish whether some cards were intended for teachers or students without context, like Messaging and My Favorites.
One participant was concerned with having too many cards in one category, which is why she created the Other category.
4 of 10 participants said that there were no confusing cards or categories.
One participant suggested combining Educational Resources and Teacher Collaboration.
Another participant said Educational Resources and Development overlapped.
6 of 10 participants mentioned that there were no conflicting or overly similar cards.
Based on all data collected from this card sort, we made the following changes to navigation and content labels in our high-fidelity prototype:
We realized that the Support and Development category did not clearly indicate that this was support for teachers, not students. To further differentiate it from the Educational Resources category, we changed this navigation label to Teacher Development.
We changed the name of two cards to provide more clarity:
Messaging was changed to Teacher Messaging
Team Directory was changed to Teacher Directory
Low-fidelity Usability Testing
We conducted 5 remote moderated usability tests with K-12 educators, using Zoom with screen sharing to observe participants interacting with the low-fidelity prototype.
To evaluate the effectiveness of the prototype features through remote usability testing on the prototype with current K-12 educators, we used a testing task list of six tasks. We rated each completed task as “pass” or “1” and each failed task as “fail” or “0.” We then calculated the percentage of success.
Figure 25: Low-Fidelity Usability Task Completion for 5 participants
When we analyzed the results, we found some areas for improvement, especially in the Create a Lesson task and the Teacher Development - Tips and Tricks task, each of which had 60% effectiveness. We modified the high-fidelity prototype to address these issues and increase the success rate. Overall, the effectiveness of the prototype was 77%, which we considered acceptable.
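To illustrate the scoring, the sketch below computes per-task and overall effectiveness from a pass/fail matrix. The matrix is hypothetical, constructed only to be consistent with the reported rates (two tasks at 60%, 77% overall); it is not our recorded data.

```python
# Hypothetical pass/fail results: rows = the five participants, columns =
# the six tasks (1 = pass, 0 = fail). For illustration only.
results = [
    [1, 1, 1, 1, 1, 1],
    [1, 1, 1, 1, 0, 0],
    [1, 1, 1, 0, 1, 0],
    [1, 0, 1, 1, 0, 1],
    [1, 1, 0, 1, 1, 1],
]

# Per-task effectiveness: share of participants who completed each task.
for task in range(6):
    passes = sum(row[task] for row in results)
    print(f"Task {task + 1}: {passes / len(results):.0%}")

# Overall effectiveness: share of all attempted tasks that were completed.
total_passes = sum(sum(row) for row in results)
print(f"Overall: {total_passes / (len(results) * 6):.0%}")  # 77%
```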
In addition, to analyze whether we were meeting our goal of creating a useful tool for teachers, we asked participants to use a 5-point Likert scale (1=Not at all Useful, 5=Very Useful) to rate the perceived usefulness of the prototype. We then calculated the mean, median, mode, and mean absolute deviation (MAD).
Figure 26: Perceived Usefulness of the Toolkit, post Low-Fidelity Usability Testing
The results show a mean of 4.7 and a low MAD of 0.24, indicating consensus amongst participants in the high perceived usefulness of the digital collaborative toolkit.
During post-test debriefing, 4 of 5 participants expressed their satisfaction with the tool and the “having everything in one place” concept. One participant who currently uses Microsoft Teams really liked how easy and intuitive everything was, and told us that if the teacher side looks this good, the student side would be just as easy. She said she often works with students who have limited tech help at home, and some features of Teams are not intuitive for her students, so having everything in an easy-to-find, easy-to-use format would be wonderful.
High-fidelity Usability Testing
We conducted 7 remote moderated usability tests with K-12 educators, using Zoom with screen sharing to observe participants interacting with the high-fidelity prototype.
To evaluate the effectiveness of the prototype features through remote usability testing on the prototype with current K-12 educators, we used a testing task list of six tasks. We rated each completed task as “pass” or “1” and each failed task as “fail” or “0.” We then calculated the percentage of success.
Figure 27: High-Fidelity Usability Task Completion for 7 participants
In our analysis of the results, the success rate for the Create a Lesson and Teacher Development - COVID Safety tasks increased to 100% after we added an additional quick link on the dashboard and adjusted the prompts. Each of these tasks had 60% effectiveness in the prior low-fidelity usability testing.
One task declined from an 80% to a 57% success rate: the Join a Group task. We were unsure what prompted the decline, but based on participant feedback we would likely move the My Groups section into the Teacher Directory, where 2 of the 3 participants who were unable to complete the task said they would expect to find it. All other tasks had a 100% completion rate, which signaled that our changes from the low-fidelity to high-fidelity prototype, including additional paths and clearer task focus, helped users complete the tasks. The average successful task completion rate increased from 77% in the low-fidelity usability testing to 93% in the high-fidelity usability testing.
In addition, to analyze whether we were meeting our goal of creating a useful tool for teachers, we asked participants to use a 5-point Likert scale (1=Not at all Useful, 5=Very Useful) to rate the perceived usefulness of the prototype. We then calculated the mean, median, mode, and mean absolute deviation (MAD).
Figure 28: Perceived Usefulness of the Toolkit, post High-Fidelity Usability Testing
The results show a mean of 4.7 and a low mean absolute deviation of 0.33, indicating consensus amongst participants in the high perceived usefulness of the digital collaborative toolkit. The mean perceived-usefulness score of the high-fidelity prototype was the same as that of the low-fidelity prototype, indicating the toolkit had maintained its high score among educator participants.
One participant, who teaches at two different schools and primarily uses Seesaw and Microsoft Teams, told us at the end of the usability testing, “This makes sense to me. It is working better than some of the systems I currently use!” In post-test debriefing, a 4th grade teacher looking through the Teacher Development offerings said that she liked seeing the Mental Health Resources offering for teachers. She said it was important and “often overlooked.”
Accessibility & Diversity
We strongly believe that accessibility must be considered and prioritized for any K-12 education platform, so we designed EduGREAT to meet the latest Web Content Accessibility Guidelines (WCAG) international standards. We used WebAIM’s Web Accessibility Evaluation Tool (WAVE) to evaluate our prototype. Our product meets AA and AAA guidelines for normal and large text and has a contrast ratio of 8.6:1, which meets or exceeds the recommended contrast ratios for visually impaired users. Many of our usability testing participants commented on how clean and easy to read the EduGREAT interface was. Clean, simple interfaces reduce cognitive load for educators, and for their students, who may be dealing with attention deficit disorders, Zoom fatigue, and stressful conditions intensified by the pandemic.
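For reference, WCAG defines the contrast ratio between two colors as (L1 + 0.05) / (L2 + 0.05), where L1 and L2 are the relative luminances of the lighter and darker color. The sketch below implements that formula; the example color is a placeholder mid-gray rather than our exact palette, chosen because it happens to produce a ratio near the 8.6:1 reported above.

```python
def _linearize(channel):
    """Convert an 8-bit sRGB channel to linear light (WCAG 2.x formula)."""
    c = channel / 255
    return c / 12.92 if c <= 0.03928 else ((c + 0.055) / 1.055) ** 2.4

def relative_luminance(rgb):
    r, g, b = (_linearize(c) for c in rgb)
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

def contrast_ratio(fg, bg):
    """WCAG contrast ratio: (L_lighter + 0.05) / (L_darker + 0.05)."""
    lighter, darker = sorted(
        (relative_luminance(fg), relative_luminance(bg)), reverse=True
    )
    return (lighter + 0.05) / (darker + 0.05)

# Placeholder colors: mid-gray text (#4C4C4C) on a white background.
print(round(contrast_ratio((76, 76, 76), (255, 255, 255)), 1))  # ~8.6
```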
The educators who participated in our project are currently working with students whose abilities range from gifted to developmentally delayed. Their school districts include income levels from housing insecure to affluent. While we did not specifically ask about the racial make-up of student populations, some teachers shared that they are working in majority Black, Latinx, white, or racially diverse schools. Some teach first-generation, non-native English speakers. The teachers work with a range of special education students, from homebound to self-contained classes to more mainstream classroom settings. The teachers we spoke with teach a variety of subjects, from general classroom subjects like Math, English, Reading, Writing, Science, and Social Studies to more specialized subjects like Social and Emotional Learning (SEL), Foreign Languages, and Art.
The struggles educators discussed in the initial interviews drove our decision to design a streamlined, clean platform so they can easily collaborate and save time. This will allow teachers to truly focus on teaching their students, especially those who need extra attention. Teachers discussed the challenges of delivering their lessons virtually in resource-challenged schools. Most of our teachers were creating their own methods to reach diverse students with little direction or assistance from their schools.
We developed the EduGREAT Toolkit to meet the needs of educators and their students across a wide spectrum of areas that include ability, economics, race, and resources. Overall, we maintained a diversity, equity, and accessibility focus throughout the project. We feel that this human-centered design approach should be central to all areas of research and design, and we plan to promote it in our future work.
Project Limitations
There were several limitations to our work on EduGREAT. COVID was a factor, because it limited our interviews exclusively to remote Zoom sessions. It also affected the availability of some of our interview and usability participants, which made scheduling more difficult. In addition, our participants were all K-12 remote or hybrid educators under a tremendous amount of stress; they often interacted with us at the end of a full day of teaching, which may have affected some of their interview responses and usability testing results. The relatively short ten-week timeline of this project was another limitation, which forced us to prioritize our efforts. Finally, all of our interviewees and usability testing participants were personal connections. While we would have preferred participants we did not know, given our participant criteria and time limitations, recruiting acquaintances was our only viable option. Our connection to participants could have positively biased some of their responses, as we noted above in relation to our goals. In addition, the majority of our interviewees were female, which introduces the possibility of gender bias.
Future Considerations
For future work, we would like to explore the student- and guardian-facing sides of the platform, so that we can better understand and build a holistic Toolkit experience for all potential users. Our future work may include additional research on existing platforms that guardians and students currently interact with, to help us assess their challenges, successful features, and limitations. We would also like to interview guardians and students to determine their goals, challenges, and needs as they relate to remote educational platforms. These additional research methods would help us gather key insights and implement potential solutions for the guardian- and student-facing sides of the EduGREAT Toolkit. Finally, we would refine our high-fidelity prototype to include additional features such as Zoom integration, the ability to upload additional resources, and the incorporation of student feedback. Once these design changes were implemented, we would evaluate the additional features with a usability test, then refine the prototype as needed.