Pattern
Getting That Sweet, Sweet Validation

The Brief

The Subject

Pattern is an emerging food ordering platform with a twist. Following the COVID-19 pandemic, Pattern worked with local eateries to increase footfall in restaurants and encourage people to dine in more often. It does this through loyalty programs and discounts offered in collaboration with restaurants.

The Problem

Pattern was facing a user retention issue. There was a large amount of user drop-off, especially after users signed up for the application. Pattern understood that the app needed an overhaul, and they came to us to understand how to improve it.

Summary of Process

  1. First, We Audit

  2. Competitor Analysis

  3. Planning the Study

  4. Usability Testing and Results

  5. Re-testing After Improvements

  6. Why This Was Fun + Key Takeaways

First, We Audit ✍🏼

I began by auditing the website and building a skeleton of the user flows. I approached the website the way a completely new user would, following the user journey from sign-up all the way to placing an order. This helped us identify gaps in the user experience, potential pain points, and any other design or functional problems we could highlight to the app’s leadership team.

Competitor Analysis 🤼

Next, we wanted to understand what competitors in the space were doing. As things stood, Pattern looked like just another food delivery application. Our goal was to consider how each feature within Pattern could better align with its vision of being a loyalty program app first and foremost: how could an app that wanted to target dine-in users serve those users better?

There was a lot of scope to be creative with the types of applications I looked at, so I didn’t restrict myself to other players in the food/loyalty space, such as Chope; I also turned my attention to apps like ClassPass and Groupon, which sell experiences and rely on people actively choosing to go somewhere. While some basic fixes were clearly necessary, there was also plenty of room to be creative.

As part of this exercise, I also did some guerrilla interviewing, speaking to a few people within the app’s target demographic about how they decide where to go out to eat. This reaffirmed that, alongside budget and location, one of the things users most wanted to know was the “vibe” of the eatery. For Pattern, communicating that vibe effectively was critical to encouraging users to dine in at its partner restaurants.

Usability Study Planning + Execution 🍔

Based on the findings from the audit and competitor analysis, we built a usability study to assess how real users reacted to the application. Initially, we wanted to work with both existing and new users for a more holistic approach. However, due to budget and access constraints, we had to limit the study to new users only.

I tested users in person. The fun part for them was that they got to order something for themselves at a partner restaurant; the downside was that they had to answer my questions and let me watch them do it through the Pattern app. No such thing as a free lunch.

I found that users faced frustrations at various points in the app, particularly while searching for restaurants and placing orders, and many of these matched the problem areas we had flagged during our own audit. Based on this, my recommendations included some quick fixes that would vastly improve the user experience at the outset, plus some longer-term, strategy-based feature implementations drawn from the earlier competitor analysis and the post-study interviews.

Re-testing Post Design Changes 🔄

Pattern returned to us with some of our suggested changes built into the application, and we decided to run a second round of user testing to see whether these changes actually improved the overall experience. Because I was testing users in person rather than through testing software, I used task completion times and ratings as comparison metrics, coupled with observational insights. I ran the same test with a new cohort of users who were unfamiliar with the application so that the results would be comparable and unbiased.
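For the curious, the number-crunching behind a comparison like this is simple. Here’s a minimal sketch of how task-level metrics from two cohorts can be compared; every number below is a hypothetical placeholder, not data from the actual study:

```python
# Sketch: comparing baseline vs. post-redesign cohorts on two metrics.
# All figures are illustrative placeholders, not real study data.
from statistics import mean

# Seconds to complete the "place an order" task, per participant (hypothetical).
baseline_times = [142, 168, 155, 171, 160]
redesign_times = [98, 110, 104, 95, 112]

# Post-task ease-of-use ratings on a 1-5 scale (hypothetical).
baseline_ratings = [2, 3, 2, 3, 3]
redesign_ratings = [4, 5, 4, 4, 5]

def pct_change(before, after):
    """Percent change from the baseline cohort's mean to the redesign cohort's."""
    return (mean(after) - mean(before)) / mean(before) * 100

time_delta = pct_change(baseline_times, redesign_times)      # negative = faster
rating_delta = pct_change(baseline_ratings, redesign_ratings)  # positive = happier

print(f"Task time change: {time_delta:+.1f}%")
print(f"Rating change:    {rating_delta:+.1f}%")
```

With small in-person cohorts like ours, these deltas are best read alongside observational notes rather than as statistically conclusive on their own.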

Across the board, the fixes we’d recommended and that had already been implemented greatly improved the user experience. Users completed tasks faster, encountered less friction, and rated both the individual tasks and the app overall significantly better.

Continued usability studies not only help validate design decisions, but also help find future areas to build on. This emphasizes the importance of going back to the user again and again with successive design iterations so that improvements are continuous and valuable.

Learnings and Takeaways

It’s critical to be adaptive in your approach; we weren’t able to execute what we had initially planned, but we worked around it and found our way to actionable, valuable insights.