As a Product Manager for a Startup Venture Studio and Maker in my spare time, I've undertaken a lot of customer discovery research, user interviews, questionnaires, focus groups and usability testing sessions for various professional and personal projects.
Through the years I’ve learned that you really only get one chance to document this upfront; if you don’t, you can never go back and get that information. Resourcing is also typically limited in early-stage startups, and you don’t usually have many people to throw at the problem of documenting things, so I needed a good way to store and analyse our data and share it with the wider team.
- The issue is, gathering this amount of insight, feedback and associated data can be very time-consuming, and I personally believe this is part of the reason why so many founders, teams and PMs in particular don’t conduct enough user testing and discovery sessions.
- It’s also a very messy process at times due to the vast amounts of data spread across various tools and mediums and requires great organisational skills to manage effectively.
- There are hundreds of different tools we use to conduct research, all of which can complicate and slow down the process of customer discovery and user testing.
- There is a risk that the data collected is isolated within particular teams and not shared with the wider team.
When creating a personal side project a few years ago, I wanted to try to solve this problem whilst building the product. I did some research online for software that would let me store and manage user feedback, data and insight in one place; however, I couldn’t find anything to fit my particular needs. I also reached out for examples from other PMs in a similar situation, but it seemed a lot of their personal solutions still required a lot of manual work across a variety of different tools, something I was trying to avoid.
So I thought I would give it a shot using some no-code tools I have used in the past and try to create a faster, more efficient solution.
Firstly, I want to quickly provide some context to the first part of the solution as I have already covered this in a previous post which you can find below 👇
How to Completely Automate User Feedback Before you Launch
Nurture your early adopters, gather their feedback and drum up some excitement with a fully automated feedback loop for…
TLDR: I created an automated user feedback loop using a basic landing page, email capture form and drip campaign with a sequence of emails for newly registered users. Attached to the second email is a survey to gather feedback on my prototype using Typeform. This process is fully automated, allowing me the time to focus on driving traffic to my landing page and reviewing the data gathered.
However, now that I had responses coming in, I was quickly becoming overwhelmed managing the data.
So my next project was to improve the process of storing, analysing, segmenting and sharing the data.
As shown in the diagram below, I now added Step 7 to the existing system.
The missing component of my previous flow was the storing and management of the data, something which Typeform doesn’t provide with its basic reporting tool.
There were two obvious choices with great integrations across Typeform, MailChimp and Zapier: Google Sheets and Airtable.
I ended up going with Airtable as it allowed me to attach files to cells, and it also looked much better visually, which is a bonus.
The integration between Typeform and Airtable was created via Typeform👇
Quick setup:
- Step 1: Create an Airtable account > create a workspace > create a new base from a blank template.
- Step 2: Create a Typeform account > create a new survey > Connections > Airtable > link to your Airtable base > select the questions to send responses to Airtable > ✅
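Under the hood, the connection simply maps each survey answer onto an Airtable record. Here's a minimal sketch of that mapping, assuming hypothetical column names (`Email`, `Feedback`, `Happy to interview?`); Airtable's REST API accepts this payload as a POST to `https://api.airtable.com/v0/{base_id}/{table}` with a bearer token, though the Typeform connection handles all of this for you.

```python
import json

def build_airtable_record(typeform_answers: dict) -> dict:
    """Map a Typeform response onto an Airtable 'create records' payload.

    Airtable's REST API expects {"records": [{"fields": {...}}]}.
    Column names here are illustrative, not prescribed.
    """
    return {
        "records": [
            {
                "fields": {
                    "Email": typeform_answers.get("email", ""),
                    "Feedback": typeform_answers.get("feedback", ""),
                    "Happy to interview?": typeform_answers.get("interview", False),
                }
            }
        ]
    }

# Example: a single survey response, ready to POST with an
# 'Authorization: Bearer <api key>' header.
payload = build_airtable_record(
    {"email": "jane@example.com", "feedback": "Love the idea", "interview": True}
)
print(json.dumps(payload, indent=2))
```

The point of the no-code setup is that you never have to write or host this glue code yourself; the sketch just shows what the integration is doing on your behalf.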
Now that the Typeform and Airtable connection was in place, I wanted to be able to manage the data better, so I added a few additional columns to the base, including:
By adding a new long text column to the base, I can now add quick notes based on the user's feedback from the Typeform survey. This is particularly helpful when working as part of a team, as you can see each other's notes in a central repository.
Next up is attachments — on my Typeform survey I usually add a question at the end of the survey asking:
“Finally, would you be happy to take part in a brief call to discuss some of your answers? If so leave your email address below and I will be in touch”
If the user consents and provides their email address I will reach out and arrange a phone/video call to dig deeper into some of their responses to my questions and gather more detailed insights.
I tend to create a script prior to my call using Gdocs and add my notes and their responses. I then append this to the participant’s row on Airtable so I can refer back to it.
👩🏻💻 Review process (optional)
If you’re part of a team, you could add your profile image and assign yourself to the participant's row so others know a team member has reviewed it and who they are.
Alternatively, you can do this using the collaborator field type which pulls in the list of contributors on your Airtable workspace.
❌ ✅ Have I or my team been able to reach the user for an interview Y/N?
Next, I tick off that I have reached out to that user so I don’t forget, and so team members are aware.
🔢 User scoring system
Lastly, I add a score to that participant's row (1 = low, 5 = high), based on a few key points.
Persona match: How closely do they match my user persona?
Willingness to help and quality of response: Did they provide good quality insight and feedback in their initial survey response? Did they agree to arrange an interview call? Did they provide good quality insight and feedback during their interview call?
Intent to use the product: How interested were they to help test the beta product once it was ready? Is this a problem they are actively looking for a new solution to solve?
I then multiply the three scores together in a total score column.
This allows me to segment my list by score — for example, if I want to conduct a new usability test with my prototype I will filter my list by a score of ~10+ or segment by score type + score — then reach out to these users again as they will be more likely to respond and more importantly provide good responses and insight.
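The scoring and segmentation above can be sketched in a few lines, assuming the three criteria are each scored 1 to 5; the participant names and field names below are illustrative only.

```python
# Example participant rows, as they might appear in the Airtable base.
participants = [
    {"name": "Alex",  "persona_match": 4, "willingness": 5, "intent": 3},
    {"name": "Sam",   "persona_match": 2, "willingness": 1, "intent": 2},
    {"name": "Priya", "persona_match": 5, "willingness": 4, "intent": 5},
]

def total_score(p: dict) -> int:
    # Mirrors the 'total score' column: the three scores multiplied together.
    return p["persona_match"] * p["willingness"] * p["intent"]

# Segment the list: only participants scoring ~10+ get the follow-up invite.
shortlist = [p["name"] for p in participants if total_score(p) >= 10]
print(shortlist)  # Alex (60) and Priya (100) qualify; Sam (4) does not
```

Because the scores are multiplied rather than summed, one weak criterion drags the total down sharply, which helps surface only genuinely strong candidates for follow-up.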
To ensure I didn’t miss any new landing page signups or Typeform entries I added two Zaps via Zapier.
Each time I received a new sign up via my landing page via MailChimp I would get a new notification in my dedicated Slack channel — same goes for new Typeform entries. This way I didn’t need to check Typeform and MailChimp multiple times a day.
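For context, the message a Zap like this sends to Slack is just a small JSON payload posted to an incoming-webhook URL. Here's a rough sketch of its shape; the wording and channel setup are hypothetical, and Zapier builds this for you without any code.

```python
import json

def new_signup_message(email: str, source: str) -> dict:
    """Build a Slack incoming-webhook payload for a new signup.

    Slack's incoming webhooks accept a JSON body with a 'text' field;
    the message format here is purely illustrative.
    """
    return {"text": f"New {source} signup: {email}"}

# Example: what a Zap might post when a MailChimp signup arrives.
msg = new_signup_message("jane@example.com", "MailChimp")
print(json.dumps(msg))
```

This is the whole trick: each Zap watches one trigger (a new MailChimp subscriber or Typeform entry) and forwards a one-line summary to the Slack channel.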
Now that I have an automated system feeding into one central repository, I can continue to use it throughout the lifecycle of the product, continually enriching the dataset. The system can also scale as new team members join and the product grows.
Lastly, one of the most interesting benefits of this system is reducing the amount of time I personally need to spend writing up summaries, attending meetings and sitting in on sessions with the design team. I can now rest assured the design team knows how to use this system, giving them the autonomy to investigate the data themselves.
What’s your current process like? Any suggestions on how to improve this process? I would love to hear from you.
Update: I’m currently in the process of building a Kanban system to visualise the process better. Once it’s ready to share I'll update this post.