The problem

The survey is the most cross-functional aspect of the Dia&Co product.  It involves everyone from growth, merchandise, data, and engineering to our stylists.  Every question feeds important data into our models and impacts every corner of the business.

Our 36-question survey lagged behind our competitors in several ways. We lacked rich visual imagery in our questions.  We also used a multi-page approach instead of a single-question-at-a-time format, which added friction on mobile.  We had also moved to building most new features in React, but the survey had not yet been broken into components.  All of this added up to an experience that felt unexciting and unengaging, and that left our users with lots of questions.
Research

What UI changes would we want to incorporate into a newly designed survey? What would provide the best survey-taking experience? We were inspired by the excellent UI of Typeform, which we were already using for additional questions in post-conversion follow-ups. Using Typeform, we built a demo for usability testing, which saved a lot of time and gave us the quickest path to our “optimal” UX. I also made a list of design elements that we agreed might lead to improvement:
• a progress bar, with clearer indication of percent completion
• auto-advance on single-answer questions
• title cards to onboard users through each section
• a more on-brand aesthetic
• micro-interactions

Restrictions

After consultation with stakeholders, it was decided that Design and Product would not change or alter the underlying data of any question, due to the impact such changes would have on our Data and Merchandise teams.  The redesign test would have to succeed on UI and experience alone, although in some cases we could alter or add copy.  We knew this would limit our ability to make a great experience, but the upside was that it kept us focused strictly on interactivity.
The full survey redesign would be run as an A/B test against the current survey. Users visiting the site would see one survey or the other.
User testing plan

Because of this focus, we set out to find people to test in person and record their experiences taking the Typeform demo. I reached out to 50 customers and was able to secure commitments from 5 people. I then conducted in-person usability sessions using various mobile devices. Each session lasted about 20-30 minutes and consisted of the user taking the survey as a new customer, followed by questions about their experience. The goals were:
• Validate the Typeform UI: make sure users understood the interface and could complete the survey
• Understand how visuals impact people’s experience: record any positive or negative reactions and comments on the imagery
• Gauge how people react to the photos: see whether they understood the context of each photo and the idea it represented (as opposed to mistaking them for inventory, for instance)
• Watch for fatigue or disinterest due to length: see whether the new single-question format caused different user behavior than our current page-scrolling interaction
Insights
Usability testing went smoothly, with few surprises along the way.  Most of the feedback about the experience centered on the content and how we were asking the survey questions, which was off limits for this redesign.
However, this feedback about question wording was something we had heard before, and it led me to look for a deeper discrepancy in how we addressed aspiration versus getting to know her. Our content appeared to switch back and forth and was sometimes vague about how we wanted her to respond.  I believed this cognitive whiplash hurt our users' perception of the survey, but I’d have to table this line of research for another time.
Hypothesis


We had several goals as we defined what redesigning the experience meant:
• Make the experience more mobile-first
• Create a new question-by-question format for survey taking across the site, built in React
• Design reusable question components that could be used for quizzes, polls, and ratings, not just onboarding (see the sketch after this list)
• Make the survey more of a brand moment, and increase trust in the product
• Boost checkout conversion by 10%, driven by the improved experience of taking the new survey
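To make the reusable-component goal concrete, here is a minimal sketch of what a single-question component’s interface might look like in React with TypeScript. The names, props, and option shape are illustrative assumptions, not the production code.

```tsx
// Minimal sketch of a reusable single-question component (illustrative assumption, not production code).
import React from "react";

// One selectable option, with optional imagery for the richer visual questions.
export interface QuestionOption {
  id: string;
  label: string;
  imageUrl?: string;
}

export interface SurveyQuestionProps {
  title: string;
  options: QuestionOption[];
  selectedId?: string;
  // Called with the chosen option; the parent flow can auto-advance on single-answer questions.
  onAnswer: (optionId: string) => void;
}

// Renders one question at a time; the same component could back quizzes, polls, or ratings.
export function SurveyQuestion({ title, options, selectedId, onAnswer }: SurveyQuestionProps) {
  return (
    <fieldset>
      <legend>{title}</legend>
      {options.map((option) => (
        <button
          key={option.id}
          type="button"
          aria-pressed={option.id === selectedId}
          onClick={() => onAnswer(option.id)}
        >
          {option.imageUrl && <img src={option.imageUrl} alt="" />}
          {option.label}
        </button>
      ))}
    </fieldset>
  );
}
```

Keeping the question component agnostic about where it appears is what would let the same piece back onboarding, quizzes, and polls.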

Execution

After testing validated the approach, I had the green light to begin high-fidelity designs.  We already had a style guide and an existing survey, so the work meant building on and expanding those principles to give the survey more visual appeal.
I also had to work closely with our Creative Team to develop the photo assets.  I prepared a shot style guide with examples of how I wanted the clothing and models shot.  I also provided a photo-asset checklist and shared it with the team so they could track the shoots and deliver raw photos. I handled all retouching, editing, and preparation for the final product.
For animation, I built a high-fidelity mockup in Principle to demonstrate and test the motion-design elements of the experience.  Engineering and I could make sure the easing curves matched those already in use across the site, discuss feasibility, and catch and revise anything too difficult to build.
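As a rough illustration of how those easing curves could stay consistent site-wide, shared motion tokens might live in a small module like the one below; the names and cubic-bezier values are assumptions for the sketch, not the site’s actual curves.

```tsx
// Hypothetical shared motion tokens; names and values are illustrative assumptions.
export const easing = {
  // Standard curve for elements entering or leaving the viewport.
  standard: "cubic-bezier(0.4, 0.0, 0.2, 1)",
  // Snappier curve for the question-to-question auto-advance transition.
  advance: "cubic-bezier(0.25, 0.1, 0.25, 1)",
} as const;

export const duration = {
  short: 150, // ms, micro-interactions
  medium: 300, // ms, question transitions
} as const;
```

Reusing one set of tokens across the survey and the rest of the site helps motion read as a single system.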
I did layouts for each individual question, in both mobile and desktop.
I then imported each question into Zeplin and organized them by question category.  This let us use Zeplin as a shareable project home base where we could version questions individually and add new ones with ease.  Further development on the survey would be easier with this system in place.
Learnings

The results of our test were fairly flat, which in retrospect was not too surprising. Given all the underlying upgrades and visual improvements we made to the survey, we had hoped to see some positive change.  Still, it was good to know such changes could be implemented without a negative effect.
• There was no statistically significant change in conversion. UI, in isolation, brought limited impact; in this case it simply wasn't the strongest lever for changing behavior.
• The budget questions were putting more customers into the lowest price bracket. Was it somehow possible the new design was producing lower-LTV customers? Further QA of the budget questions revealed a UI issue that nudged users toward the lower bracket, and I put forth a solution for a later iteration to address it.
The engineering work that went into the new single-question system would not go to waste.  We planned further experiments for other areas of the site, and much of the design and codebase work was used for the launch of a new product line.
