Services
Product management
User research
User interface design
Interaction design
Information architecture
Tools
Figma
Adobe Photoshop
Whimsical
Hotjar
Google Analytics
Usertesting.com
Timeline
March 2020 - July 2022
Tiger Algebra has been a successful EdTech startup since before “EdTech” was even a term. Their core offering is an online algebra search engine that converts algebra students’ math problems into solutions and easy-to-understand, step-by-step explanations. Unlike most of their competitors, Tiger Algebra is completely free, operating solely on ad revenue.
Early on, I quickly got the sense that my hiring as the UX/UI guy was something of a box-checking exercise, with the added bonus that I could potentially make the site look a little better and answer some design-related questions if needed. However, I was determined to show our scrappy team of seven (four developers, a content strategist, an SEO marketer, and me) that there is far more to product design and management than meets the eye. Frankly, Tiger Algebra was probably a -0.5/5 on InVision’s Design Maturity Model at the time, which, to be fair, is where a lot of startups are at that point in their journey. My goal was to bring them up to at least a 3/5. So how do you go from “we just need it to look good” to user-driven, iterative product design? You start with what you know.

While the user’s experience of a product is undoubtedly important, it is also simply a means for an organization to reach its goals. In established companies like Tiger Algebra, business goals and user goals intersect in a trickle-down fashion, with business goals at the top and user goals at the bottom. If an organization’s goal is to increase revenue, for example, and part of reaching that goal is retaining paying customers, then that organization has a financial incentive to make its products and services highly user-friendly. For this reason, it doesn’t make much sense to develop a business strategy from the bottom up, starting with the product; in fact, that’s famously how so many companies fail to find product-market fit. Instead, I take a top-down approach: first identifying company and stakeholder goals, along with the metrics used to measure their progress, and then structuring the whole opportunity space in a way that creates a causal relationship between those goals and user goals. Teresa Torres, who developed this method for visualizing the problem space, calls it an opportunity-solution tree and explains it in detail here. Below is a branch of the opportunity-solution tree we used to map the opportunity space at Tiger Algebra.

All opportunities stem, either directly or indirectly, from the company goal of increasing site usage by at least 1,000 visitors/month by the end of Q3 2022. The only two real ways to do that are to increase user retention and to increase new-user visits. Site analytics showed that, while visits from new users were increasing, user retention was mostly stagnant, so we decided to target retention as our first opportunity.
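To make what “mostly stagnant” means concrete, here is a minimal sketch of the kind of month-over-month check we ran against the analytics exports. The figures below are illustrative placeholders, not Tiger Algebra’s actual numbers.

```python
# Illustrative sketch (made-up numbers, not Tiger Algebra's real analytics):
# new-visitor totals climb month over month, but the count and share of
# returning visitors barely move -- the "mostly stagnant" retention signal.
monthly = [
    {"month": "2022-01", "total_visitors": 41_000, "returning_visitors": 9_800},
    {"month": "2022-02", "total_visitors": 44_500, "returning_visitors": 9_900},
    {"month": "2022-03", "total_visitors": 48_200, "returning_visitors": 9_850},
]

for row in monthly:
    share_returning = row["returning_visitors"] / row["total_visitors"]
    print(f"{row['month']}: {row['returning_visitors']} returning "
          f"({share_returning:.1%} of {row['total_visitors']} visitors)")
```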
Considering the sizable competitive advantage Tiger Algebra has in offering its service for free, it was not a huge leap to suspect that the website was simply less user-friendly than its competitors’ sites. To validate or invalidate this hypothesis, we quickly turned to remote usability testing via usertesting.com and tested a few inputs, which revealed a handful of major usability issues. Because I was having trouble getting the team to understand the nature of these issues and grasp their severity, I planned two viewing sessions in which all team members who touch the product in any way, including me, would watch the recorded usability tests.

This was so effective that I have since made it a personal policy to present user insights as directly as possible to anyone who works on the product, as well as to anyone else in the organization who might be interested. I then held a series of solutions workshops with the team to review the problems and generate solutions, using the ideation strategies identified by researchers Runa Korde and Paul Paulus and summarized in Teresa Torres’s book Continuous Discovery Habits. As Torres notes, there are several reasons brainstorming can prove totally ineffective, and her suggestions for minimizing or avoiding them have allowed the TigerMilk team and me to continually walk away from workshops with solutions we all feel good about, solutions that would not have been reached without the contributions of the entire team.


In my opinion, the best-case scenario for user research is to directly involve everyone working on the product in order to cover more ground and ensure alignment on discovered opportunities. Sadly, this is not always feasible.

The next step was to list all of the assumptions our solutions relied on and test the most important ones. In an ideal setting, we would have gone through them all, moving from the most important to the least, until we had substantial evidence that our proposed solution was the right one. But settings are rarely ideal, and, due to a number of constraints, we only tested our most important assumption: users will go out of their way to find, click on, and read a formatting guide. To test this, we ran a classic fake-door (smoke) test: the main developer on the team implemented a link labelled “Formatting help” that took anyone who clicked on it to a Google Form with a few simple questions to help us gauge interest. The form allowed respondents to vote on whether or not they would be in favor of a formatting guide, but, knowing this would likely not yield statistically significant results, we also measured the number of users who clicked on the formatting guide link against unique sessions on the website. This allowed us to validate, with a fair degree of certainty, that users were interested in a formatting guide and could find the link to it.
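The arithmetic behind that check was simple. Here is a hedged sketch with invented numbers and an assumed interest threshold, just to show the shape of the measurement.

```python
# Illustrative sketch of the fake-door measurement (numbers are invented):
# clicks on the "Formatting help" link divided by unique sessions over the
# same period gives a click-through rate we can compare to a rough threshold.
formatting_help_clicks = 620      # clicks on the "Formatting help" link
unique_sessions = 18_400          # unique sessions in the same period
interest_threshold = 0.01         # assumed bar for "worth building"

click_through_rate = formatting_help_clicks / unique_sessions
print(f"Formatting-help CTR: {click_through_rate:.2%}")   # -> 3.37%

if click_through_rate >= interest_threshold:
    print("Enough interest (and findability) to justify building the guide.")
```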

Then came the fun parts: UI design and implementation. To cut down on production time and maintain consistency, I was careful to use existing elements of the site’s UI. Production, including content, took less than a week.
So now what? The feature has been implemented; the problem has been addressed, right? Well... no. New features almost always need to be tested, so we put our beloved formatting guide back through the wringer via remote usability testing. We uncovered a few bugs and usability issues, which we addressed according to severity, but for the most part the guide seemed to be in good shape. The work that remains is to continuously track its use via Hotjar, a feedback tool I particularly enjoy that, among other things, gathers user ratings and comments into a general likeability score not so different from a Net Promoter Score. This, of course, helps us learn what we are doing well and what we are doing poorly, but it also sometimes reveals new opportunities worth pursuing.
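As a rough illustration of how I read that score (the ratings below are invented, and this is not Hotjar’s actual formula), it behaves roughly like an average of 1-to-5 widget ratings plus an NPS-style split of positive versus negative responses:

```python
# Rough analogue of the feedback score we watch in Hotjar (not Hotjar's own
# formula): average the 1-5 widget ratings, then split promoters/detractors
# NPS-style to see whether sentiment is trending positive or negative.
ratings = [5, 4, 4, 2, 5, 3, 4, 1, 5, 4]   # invented 1-5 widget ratings

average_rating = sum(ratings) / len(ratings)
promoters = sum(1 for r in ratings if r >= 4)
detractors = sum(1 for r in ratings if r <= 2)
nps_style_score = (promoters - detractors) / len(ratings) * 100

print(f"Average rating: {average_rating:.1f} / 5.0")    # -> 3.7 / 5.0
print(f"NPS-style score: {nps_style_score:+.0f}")       # -> +50
```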

We are still at the beginning of the measure phase for the formatting guide, but in the weeks since implementation, we have seen an average increase of 0.3 points (on Hotjar’s 5.0-point scale). User retention is a bit of a lagging indicator, meaning we may never have a clear picture of the formatting guide’s effect on it (any number of factors can cause a random spike or dip), but we know from experience that reducing friction, especially in high-use areas of a product, often improves user retention rates. The key to maintaining forward progress on a metric like user retention, which is so heavily dependent on subjective experience, is finding ways to gather the right kinds of feedback and responding to it in ways that make users happy.

Ultimately, this experience was a very productive one for everyone involved, and I feel strongly that we moved the company toward a 3/5 on design maturity, evidenced by the team’s enthusiastic embrace of, and leadership’s strategic shift toward, a more user-guided approach to product and business. If I had to do it over again, I would have made a bigger push early on for establishing KPIs and systems for measuring them. It became clear throughout the process that I could holler and shout about clunky user journeys and buggy site interactions until my throat was sore, but what the team responded to was seeing the evidence for themselves via quantitative data and recorded usability sessions.