UX RESEARCH & UI SOLUTIONS

Championing UX within a low-code platform

FOXO Technologies is defining the next generation of life insurance innovation, and our team successfully launched the research tool to get them there.

Their proprietary underwriting protocol, fueled by artificial intelligence, is based on molecular biomarkers of health and wellness, positioning longevity as a clear benefit for the customer and a profit driver for term life insurance. To gather those biomarkers in the first place, FOXO needed a research tool to collect data from its studies.

The result would be an independent biometric study platform, with content facilitated by SOCRA-certified clinical research professionals.

These studies would be extensive and invasive: they required in-depth health questionnaires plus blood and urine samples, and sample collection involved a third-party collection company.

The mobile view of the Scientific Testing Partners dashboard, viewed within an iPhone.

Question to be answered

How could we build a platform that was easy to understand, accommodated complex question formats, and, most importantly, that users trusted with their personal information?

A graphic of a mouse clicking a multiple choice question.

As a startup, FOXO wanted to ship this product as fast as possible; the team's goal was a 12-month project timeline. Prior to development, it was decided that the entire platform would be built on the low-code platform Unqork to speed up the build and save costs. In the end, however, it proved more expensive, because we had to work extensively with an Unqork representative to customize the product to our exact specifications.

I consulted as a primary UX/UI designer for the Study Tool, working under a Lead Product Designer at FOXO and alongside another designer from SDG. Our development team was small, and Unqork was unfamiliar to all of us; we faced a learning curve in discovering how to customize the software so that complex data entry stayed flexible for users. While we all worked remotely, a number of us were local to Minneapolis, so we met up in the office occasionally.

User groups

A study participant.

PRIMARY USER

Study participants

Motivation: Contribute to scientific research and get paid in the process.

  • Dislikes long forms and questionnaires
  • Values data protection and privacy
  • Wants a more convenient application experience

A scientist analyzing test samples.

SECONDARY USER

Study administrators

Motivation: Collect a large amount of honest and accurate user data.

  • Needs to pull data efficiently
  • Has to understand if study participant demographic quotas are being met

An individualized experience allowed users to skip questions that weren't relevant to them.

We structured our build using a "decision tree" format. The questionnaire displayed the minimum number of questions at any given time; only if a response required a follow-up did more questions appear.

The design of the questions was modular and interchangeable, which allowed for fast reuse of components in the build and quick scanning by participants.

Two screen mockups: One that shows one question with two radio button options, and a second that shows a reflexive question that relates to the response of the participant.
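
Unqork expresses this conditional behavior through visual configuration rather than hand-written code, but the structure we built is roughly what the TypeScript sketch below describes. The types and field names here are illustrative assumptions, not Unqork's actual schema.

  type QuestionId = string;

  interface Question {
    id: QuestionId;
    prompt: string;
    options: string[];
    // A follow-up question is only revealed when the parent answer matches its trigger.
    followUps?: { triggerAnswer: string; questionId: QuestionId }[];
  }

  type Answers = Record<QuestionId, string | undefined>;

  // Walk the tree and return only the questions that should currently be visible,
  // given the participant's answers so far.
  function visibleQuestions(
    roots: Question[],
    all: Map<QuestionId, Question>,
    answers: Answers
  ): Question[] {
    const visible: Question[] = [];
    const queue = [...roots];
    while (queue.length > 0) {
      const question = queue.shift()!;
      visible.push(question);
      const answer = answers[question.id];
      for (const followUp of question.followUps ?? []) {
        if (answer === followUp.triggerAnswer) {
          const next = all.get(followUp.questionId);
          if (next) queue.push(next);
        }
      }
    }
    return visible;
  }

Because each question is a self-contained unit in this structure, the same module can be reused across studies, which is what made the build fast to repeat and the forms quick to scan.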

User testing responses to the dashboard design prompted more survey progress indicators.

To test our assumptions, the initial designs went through one moderated user testing session plus one unmoderated session. We recruited participants to walk through the entire questionnaire, then virtually observed and recorded their actions and emotions. Our hope was that the testing would reveal friction points, provide data output from the questions we asked, and measure satisfaction and completion metrics over time.

Participants really enjoyed the Study Tool dashboard and knowing the progress of their survey. This led to requests to include more progress tracking within the study itself, instead of needing to go back to the homepage to view it. A section completion indicator was added next to the questionnaire.

The Scientific Testing Partners dashboard and progress sidebar indicator.

Auto-filling repetitive data could reduce time spent onboarding.

When the participant first signed up, a certain amount of demographic data was needed in order to start matching them with potential study opportunities.

At the beginning of each questionnaire, the same demographic data needed to be re-collected for HIPAA compliance. To reduce mental load, we decided to take the existing data and pre-fill as much of this repetitive input as possible.
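
In practice, the pre-fill amounted to mapping the participant's stored profile onto the questionnaire's initial values while still letting them edit every field before submitting. The sketch below shows the idea; the field names are hypothetical, not FOXO's actual data model.

  // Hypothetical profile shape; the fields below are illustrative.
  interface StoredProfile {
    firstName?: string;
    lastName?: string;
    dateOfBirth?: string; // ISO date, e.g. "1985-04-02"
    sex?: string;
    zipCode?: string;
  }

  type DemographicsForm = Record<string, string>;

  // Copy any demographic value already on file into the questionnaire's initial values,
  // leaving unknown fields blank for the participant to complete.
  function prefillDemographics(profile: StoredProfile): DemographicsForm {
    const initialValues: DemographicsForm = {};
    for (const [field, value] of Object.entries(profile)) {
      if (typeof value === "string" && value.trim() !== "") {
        initialValues[field] = value;
      }
    }
    return initialValues;
  }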

The mobile view of the Demographics portion of the questionnaire, viewed within an iPhone.

Tobacco history input was incredibly tricky and required multiple rounds of user testing to get right.

Designs in the "Tobacco" section that we thought were intuitive turned out to be much harder for users to understand.

A few of our initial ideas were received well in testing. The entry style provided a sentence format to connect all of the details together, and users seemed to grasp that. There was also a checkbox for users to select if they "smoked a cigarette today" or "still currently smoke," which auto-filled today's date instead of requiring the user to select it; this saved time for current tobacco users.

However, when we asked users to enter their detailed tobacco history directly on the page, testing revealed problems with logging changes in tobacco use across different age ranges of their lives. Users added extra age ranges by mistake because they "didn't realize it was popping down," and the button to add the next age range kept getting lost among all the field elements.

To remedy this, we referenced "Medications", a section that was successfully filled out every time and had each medication entry in a separate pop-up. Each entry about the participant's detailed tobacco use was added in a pop-up modal, separate from the rest of the UI. A sentence-format statement displayed their completed entry.

Mockups of the Tobacco section of the questionnaire, where the user interacts with a modal to input each age range of their history with tobacco.
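
The sentence-format summary is straightforward to generate from the structured entry captured in the modal. The sketch below illustrates the idea; the entry shape and wording are assumptions for illustration, not the shipped copy.

  // Hypothetical entry shape; field names and phrasing are illustrative.
  interface TobaccoEntry {
    product: string;       // e.g. "cigarettes"
    amountPerDay: number;
    startAge: number;
    endAge?: number;       // undefined while the participant still uses tobacco
  }

  // Render a completed modal entry back onto the page as a readable sentence.
  function summarizeEntry(entry: TobaccoEntry): string {
    const range =
      entry.endAge === undefined
        ? `from age ${entry.startAge} to today`
        : `from age ${entry.startAge} to age ${entry.endAge}`;
    return `Used ${entry.amountPerDay} ${entry.product} per day, ${range}.`;
  }

  // Example:
  // summarizeEntry({ product: "cigarettes", amountPerDay: 5, startAge: 18, endAge: 24 })
  // -> "Used 5 cigarettes per day, from age 18 to age 24."
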
An image of a carton of cigarettes, accented by brand graphics in the background.

"The pictures you had with the nicotine are the most helpful thing. It’s a faster way to explain the tobacco type instead of multiple sentences.”

Pictures were a quick read, and people were typically drawn more to them than to the description text. When asking questions about a specific type of tobacco, we relied on images to compensate for users potentially scanning the text and missing the details. However, we still had to keep much of the text for legal compliance.

Our team put just as much care into the study administrator's view.

I'm a strong proponent that the experience our internal team members have with a product is just as important as the customer's experience. We put a lot of detail into the backend administrative view.

Progress bars and percentages displayed how the size of each demographic bucket compared to its target. Participants were filterable by status, allowing held participants to be quickly released.

The study manager's view in the backend where they can see the demographic data of participants who have completed or been recruited for the study.
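
Conceptually, the quota progress bars and the status filter reduce to a couple of small calculations over the participant list. The sketch below shows the shape of that logic; the bucket names, statuses, and targets are illustrative assumptions, not the values configured in Unqork.

  // Hypothetical statuses and bucket names for illustration.
  type Status = "recruited" | "held" | "completed";

  interface Participant {
    id: string;
    demographicBucket: string; // e.g. "female, 35-44"
    status: Status;
  }

  interface BucketProgress {
    bucket: string;
    enrolled: number;
    target: number;
    percentOfTarget: number; // drives the progress bar width
  }

  // Compare each demographic bucket's current size against its recruiting target.
  // Excluding held participants from the count is an assumption made for this sketch.
  function bucketProgress(
    participants: Participant[],
    targets: Record<string, number>
  ): BucketProgress[] {
    return Object.entries(targets).map(([bucket, target]) => {
      const enrolled = participants.filter(
        (p) => p.demographicBucket === bucket && p.status !== "held"
      ).length;
      return {
        bucket,
        enrolled,
        target,
        percentOfTarget: Math.round((enrolled / target) * 100),
      };
    });
  }

  // Filter by status so held participants can be reviewed and released quickly.
  const heldParticipants = (participants: Participant[]) =>
    participants.filter((p) => p.status === "held");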

"Josie has ... fostered relationships with project sponsors, and harmoniously integrated into delivering high-value bodies of work with efficiency and speed. ... This impact takes effort and focus and she has delivered that since she kicked off this engagement."

– JARED JOHNSON, SENIOR UX CONSULTANT, SOLUTION DESIGN GROUP

A liquidy blob with purple and blue bubbles laid over a hexagon graphic. This image is intended to reinforce the product's branding and add visual interest.

We successfully launched our MVP on time, with user validation, and within low-code constraints.

As we knew from the beginning, working with a low-code platform would present limitations. An incredible amount of communication between designers, engineers, and Unqork's representatives was required to achieve the final output. My engineering handoff skills grew astronomically during this project.

It was fulfilling to validate our successful ideas with user testing, and I was equally grateful for the issues we found and the iterations we designed. Without that input, our studies might not have been completed at all, or the data collected might have been poor.

User testing revealed that we achieved our completion metric goals, resulting in an MVP launch within the 12-month timeline.

  • Ratio of users that completed the test without issue (Goal: 8/10): 8/10
  • Average time of individual questionnaire completion (Goal: less than 10 min): 9 minutes per user
  • Ratio of users that completed the entire test in less than 15 minutes (Goal: 8/10): 9/10
  • Ratio of users who would complete a survey like this again (Goal: 6/10): 8/10
