
Hiya Connect is a B2B SaaS product that helps businesses connect with customers through branded calls. Users access its services through a web-based dashboard console.

​

As part of a graduate course, my team and I partnered with the Hiya Connect team to conduct a usability study to improve the functionality of their web-based console.

 

I recruited and moderated 5 user interviews to assess task performance metrics and reveal user pain points. We presented our key findings and recommendations to Hiya stakeholders which were used to inform their upcoming product updates.

​

Project Deliverables:

​

OVERVIEW

Timeline
Jan 2023 - Mar 2023

 

My Role

UX Researcher

​

Team Members

Juan Flores, Amodini Khade,
Brandon Curley

​

Tools
Miro, Google Sheets, Zoom

​

Research Methods

Heuristic Evaluations, Interviews, Survey

​

OUR PROCESS: SIX STAGES

PROJECT TIMELINE

     DEFINE & SCOPE 

Context of Product

Hiya Connect is a B2B SaaS product that helps businesses to reach more customers through branded calls. Through its dashboard console, businesses can manage customer phone numbers and related information to be displayed during outbound calls.

​

Hiya Connect Dashboard Console View


Stakeholder Kick-off
In the initial two weeks, my team and I consulted with our stakeholders to understand our project context:

 

 

​

Heuristic Evaluations

Given that our stakeholders did not know which features the study should focus on, I conducted a heuristic evaluation based on Nielsen Norman's usability heuristics to explore the key usability issues of the console.

​

KEY FINDINGS:
I identified several usability issues but the most severe issues involved:


We then presented these key findings to our stakeholders, which helped us scope the goals of our project.

​

​

Project Goals

 

 

​
 

​

Research 
Questions

​


      PLAN & PREPARE

Test Materials 

Our usability test kit included: a screener survey, consent form, recruitment email templates, usability interview guide, task instructions slide deck, SUS questionnaire, and data collection spreadsheet. 

I prepared these materials:


     Interview Guide                             Task Instructions Slide-deck          Data Collection Spreadsheet

​

​


Usability Tests

We conducted remote, moderated usability tests because:

  • Moderation helps to facilitate deep-dive insight into what users feel and think (what stakeholders wanted).

  • Remote tests are easier to recruit for in a short period of time, compared to in-person tests.

  • We had a source of funding that allowed us to compensate users for the larger time commitment for interviews.

​

Each test had 4 parts: 1) introduction, 2) usability tasks + ease of use scale, 3) post-test interview, and 4) debrief.

 

Tasks List

Based on our research goals, tests involved a series of 6 usability tasks. Task instructions were delivered on a slide deck. 

​

​

Note:
To reduce sequence effects, we counterbalanced the order of tasks 2+3 and 4+5.
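The counterbalancing scheme can be sketched in code. This is a hypothetical illustration (the function name and assignment rule are assumptions, not our actual procedure): within each counterbalanced pair, the task order simply alternates from one participant to the next.

```python
# Hypothetical sketch of counterbalancing: alternate the order within the
# task pairs (2, 3) and (4, 5) across participants to offset sequence
# effects. Tasks 1 and 6 stay fixed at the start and end.
def task_order(participant_index):
    """Return the task sequence for the nth participant (0-based)."""
    swap = participant_index % 2 == 1
    pair_a = (3, 2) if swap else (2, 3)
    pair_b = (5, 4) if swap else (4, 5)
    return [1, *pair_a, *pair_b, 6]
```

A fuller counterbalance would rotate through all four combinations of the two pair orders so that each order appears equally often across participants.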


Post-Task Questionnaire

After each task, we asked users to rate the task on the 5-point Likert Ease of Use Scale because it aligned with our research question of evaluating how easily users could complete the tasks.

​

Post-Study Questionnaire

After all tasks, we conducted semi-structured interviews to understand users' overall experience, what they liked and disliked, and their suggestions, because this aligned with stakeholders' need for qualitative user data.


We then had users complete the System Usability Scale (SUS) questionnaire on
Google Forms because it is a quick and well-validated test that aligns with our project
goals to assess the overall functionality of the console system. 

​

Pilot Testing 

I moderated a pilot test with 1 user to quality-check our test procedure and materials. Based on our findings, I clarified the wording of our task instructions, reformatted the test guide, and familiarized myself with Zoom meeting logistics.


     PARTICIPANT RECRUITMENT

Target Users

Our study focused on users who satisfied these criteria:

  • Ages 21 years and older

  • Have 6+ months of current or prior experience as a working professional

  • Prior experience with a business-related SaaS product (must be web-based)

  • Have no prior experience with the Hiya Connect console

 

Sample Size: We aimed to recruit 8 users for our usability tests to maximize the benefit-cost ratio.

​

 

​

Recruiting Users

       1. We first developed a screener survey on Google Forms which was approved by stakeholders.

       2. We sent out recruitment ads to our school channels and professional marketing networks via LinkedIn.

       3. We emailed the best-fit users and had them complete the consent form.

       4. Once consent was obtained, we scheduled their tests on Calendly through email.

​

Our survey received 31 responses that fit all the criteria, but we only needed 8 users. We therefore narrowed the pool to those who also indicated familiarity with spreadsheets and had direct experience with outbound marketing/calling software.
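The narrowing step amounts to a simple filter over the qualifying respondents. A minimal sketch follows; the field names (`spreadsheet_familiar`, `outbound_exp`) are illustrative stand-ins, not the actual survey fields:

```python
# Hypothetical sketch of narrowing the screener pool: of the respondents
# who met all base criteria, keep only those who also reported spreadsheet
# familiarity and direct outbound marketing/calling software experience,
# then cap the shortlist at the target sample size.
def best_fit(respondents, n=8):
    """respondents: list of dicts of screener answers (assumed fields)."""
    shortlist = [r for r in respondents
                 if r["spreadsheet_familiar"] and r["outbound_exp"]]
    return shortlist[:n]
```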


      CONDUCT TESTS

Participant Demographics 

We tested a total of 8 users who were predominantly:

- Identified as female (75%)

- Between the ages of 22-32 years (88%)

- Held a Bachelor’s degree (63%)

- Worked in the technology (75%) or marketing/advertising (25%) industries

​

During the Test

I moderated 3 of the 8 usability tests and took notes for 2. All tests took 45-60 minutes and were video-recorded after receiving consent. During the tasks, users shared their screens while I encouraged them to "think aloud" their honest thoughts and feelings.

​

After the Test
I cleaned my interview notes and debriefed with the other researchers to discuss what went well and what could be improved.

 

​


      ANALYZE TEST DATA


Severity Rating

We developed a severity matrix to classify our observations into 1 of 3 severity levels: LOW, MEDIUM, HIGH.

​

The severity of each observation considered two measures:


We then plotted each observation onto the matrix to identify its severity level. The most frequent observations that occurred in the most challenging tasks were ranked as the most severe (top-right quadrant).
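The quadrant logic can be sketched as follows. Both inputs (observation frequency and whether the task was high-impact) and the frequency cutoff are assumptions for illustration, not the study's exact thresholds:

```python
# Hypothetical sketch of the severity matrix: an observation is placed by
# how many users it affected (frequency) and whether it occurred in a
# high-impact task. The cutoff of 3+ of 8 users is an assumed threshold.
def severity(frequency, high_impact_task):
    """Map one observation to a LOW / MEDIUM / HIGH severity level."""
    frequent = frequency >= 3
    if frequent and high_impact_task:
        return "HIGH"    # top-right quadrant: frequent and high impact
    if frequent or high_impact_task:
        return "MEDIUM"  # strong on one axis only
    return "LOW"         # infrequent and low impact
```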


Thematic Analyses

We analyzed the qualitative data from our usability tests with a 2-step thematic analysis: 

1. I individually coded my interview notes for key insights about user pain points, expectations, and notable quotes.

2. Our group then synthesized all our findings on an affinity map to identify common observations between users.



Task Metrics

I calculated and compared the completion rates, task counts, and average ease-of-use scores across all tasks. We then ranked all six tasks from highest to lowest impact on users' experience to inform our severity ratings.
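As a rough sketch of this aggregation (with made-up numbers, not our study data), each task's per-user results reduce to a completion rate and an average ease-of-use score:

```python
# Illustrative sketch: aggregate per-user results for one task into a
# completion rate and an average ease-of-use score. Data is invented.
def task_metrics(results):
    """results: list of (completed: bool, ease_rating: 1-5), one per user."""
    n = len(results)
    completion_rate = sum(1 for done, _ in results if done) / n
    avg_ease = sum(rating for _, rating in results) / n
    return completion_rate, avg_ease

# Example: 3 of 4 users completed the task; ratings 4, 5, 2, 3.
rate, ease = task_metrics([(True, 4), (True, 5), (False, 2), (True, 3)])
```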


SUS Calculations 

I calculated the console's SUS (System Usability Scale) score based on the average of users' survey responses. 
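The standard SUS scoring procedure works like this (a sketch of the method, not our actual spreadsheet): each of the 10 items is rated 1-5, odd-numbered items contribute (rating - 1), even-numbered items contribute (5 - rating), and the sum is multiplied by 2.5 to yield a 0-100 score.

```python
# Standard SUS scoring: odd-numbered items contribute (rating - 1),
# even-numbered items contribute (5 - rating); the sum is multiplied by
# 2.5 for a 0-100 score. The console's overall score is the mean across
# respondents.
def sus_score(ratings):
    """ratings: the 10 Likert responses (1-5) for one user, in item order."""
    total = sum((r - 1) if i % 2 == 1 else (5 - r)
                for i, r in enumerate(ratings, start=1))
    return total * 2.5

def mean_sus(all_ratings):
    """Average SUS score over every respondent's ratings."""
    return sum(sus_score(r) for r in all_ratings) / len(all_ratings)
```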

 


We identified a total of 20 positive (green) and negative (red) user observations on the affinity map. Each user is coded with a different colored sticky note.

KEY FINDINGS:

Negative Feedback

Most negative feedback occurred during the Bulk Number Upload and Onboarding tasks:

  • Bulk upload task instructions were confusing and lacked clear feedback [task 3].

  • The onboarding experience was confusing [task 1].

  • Search and filter functions fell short of user expectations [tasks 3 + 5].

Similar observations were clustered into one theme.

Positive Feedback

  • Users felt overall satisfied with their console experience.

  • The console interface looked clean and simple.

  • Uploading single numbers was the most intuitive task [task 2].

​


    Lower ease-of-use score = Higher impact level = More severe pain point

KEY FINDINGS: Console SUS score = 68.1 (Grade C), which means it is satisfactory with minor room for improvement.

KEY FINDINGS:

- Bulk number upload task is the most difficult and high impact.

- Single number upload was the easiest and lowest impact.


Design Recommendations

Lastly, we synthesized design recommendations to address each user pain point we found, drawing on design principles and our usability test findings.



     DELIVER FINDINGS

Stakeholder Presentation

At the end of our project, we presented a slide deck of findings to the Hiya Connect product team on Zoom. Given the 20-minute time limit for the presentation, we focused on highlighting our key findings and the most severe usability issues.

I designed the slide deck to visually match the company brand and organized our content in a clear and intuitive manner.

​

Scroll through to see the highlights of our presentation!                          View Full Deck

​

Research Impact

While we couldn't measure our exact research impact due to confidentiality, we received plenty of feedback from our stakeholders after the presentation!

 

Notably, Hiya Connect's team mentioned that our research:

      1. Helped to validate their prior assumptions about users.

      2. Helped to inform improvements to the console's Bulk Number Upload and Onboarding functionalities.

      3. Helped them better understand which functionalities to prioritize for the next product update.

​


REFLECTIONS

Things I would have done differently...

  • If we had more time, we would have conducted our usability interviews in person, as our stakeholders initially preferred during the planning stage.

  • I would break task instructions into smaller sub-tasks, as this can greatly reduce task fatigue and make it easier for participants to complete each task. We found that many of our users became discouraged and disengaged during the longer tasks.

  • I would familiarize myself more with the console interface and task flows, which would have helped me better guide users when they ran into difficulties.

 

My main takeaways are...​

Stakeholder Engagement:
I learned the importance of engaging stakeholders as early as possible and obtaining buy-in at every step of our study. Keeping our stakeholders informed through cadenced meetings and email communication built a sense of trust, which ultimately helped us gain buy-in for our final findings and recommendations.

​

Importance of Pilot Testing:

I learned that pilot testing is highly valuable for ensuring a streamlined, well-planned usability interview. Through the pilot, I identified key flaws in our study process and materials, such as poorly worded task instructions and the inability to give participants screen-sharing abilities. These kinks could have compromised our interview findings, and I am glad the pilot helped us improve our test materials before testing with real participants.

​
