Evaluative Research Design Examples, Methods, And Questions For Product Managers


Looking for excellent evaluative research design examples?

If so, you’re in the right place!

In this article, we explore various evaluative research methods and the best data collection techniques for SaaS product leaders, to help you set up your own research projects.

Sound like it’s worth a read? Let’s get right to it then!

TL;DR

- Evaluative research assesses how well a product meets its goals, and it takes place at all stages of the product development process.
- It differs from generative research, which focuses on discovering user needs and defining the problem to solve.
- The main types are formative, summative, and outcome evaluation research, carried out with both quantitative and qualitative methods.
- Popular data collection techniques include user feedback surveys, A/B testing, usability testing, beta testing, and fake door testing.
- Tools like Userpilot let you run in-app surveys and segment users to support your evaluation research.

What is evaluative research?

Evaluative research, aka program evaluation or evaluation research, is a set of research practices aimed at assessing how well the product meets its goals.

It takes place at all stages of the product development process, both in the launch lead-up and afterward.

This kind of research is not limited to your own product. You can use it to evaluate your rivals to find ways to get a competitive edge.

Evaluative research vs generative research

Generative and evaluative research have different objectives.

Generative research is used for product and customer discovery. Its purpose is to gain a more detailed understanding of user needs, define the problem to solve, and guide product ideation.

Evaluative research, on the other hand, tests how good your current product or feature is. It assesses customer satisfaction by looking at how well the solution addresses their problems and its usability.

Why is conducting evaluation research important for product managers?

Ongoing evaluation research is essential for product success.

It allows PMs to identify ways to improve the product and the overall user experience. It helps you validate your ideas and determine how likely your product is to satisfy the needs of the target consumers.

Types of evaluation research methods

There are a number of evaluation methods that you can leverage to assess your product. The type of research method you choose will depend on the stage in the development process and what exactly you’re trying to find out.

Formative evaluation research

Formative evaluation research happens at the beginning of the evaluation process and sets the baseline for subsequent studies.

In short, its objective is to assess the needs of target users and the market before you start working on any specific solutions.

Summative evaluation research

Summative evaluation research focuses on how successful the outcomes are.

This kind of research happens as soon as the project or program is over. It assesses the value of the deliverables against the forecast results and project objectives.

Outcome evaluation research

Outcome evaluation research measures the impact of the product on the customer. In other words, it assesses if the product brings a positive change to users’ lives.

Quantitative research

Quantitative research methods use numerical data and statistical analysis. They’re great for establishing cause-effect relationships and tracking trends, for example in customer satisfaction.

In SaaS, we normally use surveys and product usage data tracking for quantitative research purposes.

Qualitative research

Qualitative research uses non-numerical data and focuses on gaining a deeper understanding of user experience and their attitude toward the product.

In other words, qualitative research is about the ‘why’ behind user satisfaction, or the lack of it. For example, it can shed light on what makes your detractors dissatisfied with the product.

What techniques can you use for qualitative research?

The most popular ones include interviews, case studies, and focus groups.

Best evaluative research data collection techniques

How is evaluation research conducted? SaaS PMs can use a range of techniques to collect quantitative and qualitative data to support the evaluation research process.

User feedback surveys

User feedback surveys are the cornerstone of the evaluation research methodology in SaaS.

There are plenty of tools that allow you to build and customize in-app and email surveys without any coding skills.

You can use them to target specific user segments at a time that’s most suitable for what you’re testing. For example, you can trigger them contextually as soon as users engage with the feature you’re evaluating.

Apart from quantitative data, like the NPS or CSAT scores, it’s good practice to follow up with qualitative questions to get a deeper understanding of user sentiment towards the feature or product.
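For instance, NPS is derived from a single 0–10 “how likely are you to recommend us?” question: respondents scoring 9–10 count as promoters, 0–6 as detractors, and the score is the percentage of promoters minus the percentage of detractors. A minimal Python sketch of that calculation:

```python
def nps(scores):
    """Net Promoter Score from a list of 0-10 survey responses."""
    if not scores:
        raise ValueError("no responses to score")
    promoters = sum(1 for s in scores if s >= 9)
    detractors = sum(1 for s in scores if s <= 6)
    # NPS = % promoters minus % detractors, reported as a whole number
    return round(100 * (promoters - detractors) / len(scores))

# 5 promoters, 3 detractors, 2 passives out of 10 responses -> NPS of 20
score = nps([10, 9, 9, 8, 7, 6, 3, 10, 5, 9])
```

The qualitative follow-up responses then tell you *why* the detractors scored you low.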

Evaluative Research Design Examples: in-app feedback survey

A/B testing

A/B tests are some of the most common ways of evaluating features, UI elements, and onboarding flows in SaaS. That’s because they’re fairly simple to design and administer.

Let’s imagine you’re working on a new landing page layout to boost demo bookings.

First, you modify one UI element at a time, like the position of the CTA button. Next, you launch the new version and direct half of your user traffic to it, while the remaining 50% of users still use the old version.

As your users engage with both versions, you track the conversion rate. You repeat the process with the other versions to eventually choose the best one.
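The traffic split above can be sketched in a few lines of Python. Hashing the user ID (rather than assigning randomly on every visit) keeps each visitor in the same bucket across sessions; the function and experiment names here are illustrative, not from any particular tool:

```python
import hashlib

def assign_variant(user_id: str, experiment: str = "landing-page-cta") -> str:
    """Deterministically bucket a user into variant 'A' or 'B' (50/50 split)."""
    digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
    return "A" if int(digest, 16) % 2 == 0 else "B"

def conversion_rate(conversions: int, visitors: int) -> float:
    """Share of visitors who completed the goal, e.g. booked a demo."""
    return conversions / visitors if visitors else 0.0

# The same user always lands in the same bucket, so each visitor
# sees a consistent version of the page throughout the experiment.
```

You would then compare `conversion_rate` across the two buckets (ideally with a significance test) before declaring a winner.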

Evaluative Research Design Examples: A/B testing

Usability testing

Usability testing helps you evaluate how easy it is for users to complete their tasks in the product.

There is a range of techniques that you can leverage for usability testing.

As with all the qualitative and quantitative methods, it’s essential to select a representative user sample for your usability testing. Relying exclusively on the early adopters or power users can skew the outcomes.
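Two of the most common quantitative usability metrics, task completion rate and time-on-task, are simple to compute from session data. A minimal sketch with made-up session records:

```python
# Each usability session: (task_completed, seconds_to_finish or None if abandoned)
sessions = [(True, 42.0), (True, 55.5), (False, None), (True, 38.0), (False, None)]

completed = [t for done, t in sessions if done]
completion_rate = len(completed) / len(sessions)    # share of users who finished
avg_time_on_task = sum(completed) / len(completed)  # mean seconds for finishers
```

Note that averaging time only over users who finished can understate difficulty, which is another reason the sample needs to be representative.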

Beta testing

Beta testing is another popular evaluation research technique. And there’s a good reason for that.

By testing the product or feature prior to the launch with real users, you can gather user feedback and validate your product-market fit.

Most importantly, you can identify and fix bugs that could otherwise damage your reputation and the trust of the wider user population. And if you get it right, your beta testers can spread the word about your product and build up the hype around the launch.

If you’re looking at expanding into new markets, you may opt for users who have no experience with your product. You can find them on sites like Ubertesters, in beta testing communities, or through paid advertising.

Otherwise, your active users are the best bet because they are familiar with the product and they are normally keen to help. You can reach out to them by email or in-app messages.

Evaluative Research Design Examples: Beta Testing

Fake door testing

Fake door testing is a sneaky way of evaluating your ideas.

Why sneaky? Well, because it kind of involves cheating.

If you want to test if there’s demand for a feature or product, you can add it to your UI or create a landing page before you even start working on it.

Next, use paid adverts or in-app messages, like the tooltip below, to drive traffic and engagement.

Evaluative Research Design Examples: Fake Door Test

By tracking engagement with the feature, it’s easy to determine if there’s enough interest in the functionality to justify the resources you would need to spend on its development.
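To make that concrete, here’s a rough sketch of how you might tally engagement with a fake-door entry point from an event log. The event names and log format are hypothetical, invented for illustration:

```python
from collections import Counter

# Hypothetical event log exported from your analytics tool: (user_id, event_name)
events = [
    ("u1", "fake_door_seen"), ("u1", "fake_door_clicked"),
    ("u2", "fake_door_seen"),
    ("u3", "fake_door_seen"), ("u3", "fake_door_clicked"),
    ("u4", "fake_door_seen"),
]

counts = Counter(name for _, name in events)
# Click-through rate on the fake entry point: a rough proxy for demand
click_through = counts["fake_door_clicked"] / counts["fake_door_seen"]
# Users who clicked double as a ready-made beta-tester invite list
interested_users = sorted({uid for uid, name in events if name == "fake_door_clicked"})
```

A click-through rate well above your usual feature-engagement baseline suggests the idea is worth building.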

Of course, that’s not the end. If you don’t want to face customer rage and fury, you must always explain why you’ve stooped to such a mischievous deed.

A modal will do the job nicely. Tell them the feature isn’t ready yet but you’re working on it. Try to placate your users by offering them early access to the feature before everybody else.

In this way, you kill two birds with one stone. You evaluate the interest and build a list of possible beta testers.

Evaluative Research Design Examples: Fake Door Test

Evaluation research questions

The success of your evaluation research very much depends on asking the right questions.

Usability evaluation questions

Example questions include:

- How easy or difficult was it to complete this task?
- What, if anything, did you find confusing or frustrating?
- Were you able to find what you were looking for?
- How would you rate the overall ease of use of the product?

Product survey research questions

Example questions include:

- How satisfied are you with the product?
- How well does the product solve the problem you bought it for?
- How likely are you to recommend the product to a colleague?
- What would you miss most if you could no longer use the product?

How Userpilot can help product managers conduct evaluation research

Userpilot is a digital adoption platform. It consists of three main components: engagement, product analytics, and user sentiment layers. While all of them can help you evaluate your product performance, it’s the latter two that are particularly relevant.

Let’s start with user sentiment. With Userpilot, you can create customized in-app surveys that blend seamlessly into your product UI.

Easy survey customization in Userpilot

You can trigger these for all your users or target particular segments.

Where do the segments come from? You can create them based on a wide range of criteria. Apart from demographics or jobs-to-be-done (JTBDs), you can use product usage data or survey results. In addition to the quantitative scores, you can also use qualitative NPS responses for this.

Segmentation is also great for finding your beta testers and interview participants. If your users engage with your product regularly and give you high scores in customer satisfaction surveys, they may be happy to spare some of their time to help you.
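Conceptually, a segment like that is just a filter over user attributes. In this sketch the field names and thresholds are hypothetical, not Userpilot’s actual data model:

```python
# Hypothetical user records combining product usage data and survey scores
users = [
    {"id": "u1", "weekly_sessions": 9,  "nps": 10},
    {"id": "u2", "weekly_sessions": 1,  "nps": 9},
    {"id": "u3", "weekly_sessions": 7,  "nps": 6},
    {"id": "u4", "weekly_sessions": 12, "nps": 9},
]

# Power users: frequent engagement AND promoter-level satisfaction
power_users = [u["id"] for u in users
               if u["weekly_sessions"] >= 5 and u["nps"] >= 9]
```

Users matching both conditions are your best candidates for beta tests and interviews.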

Power user segments in Userpilot

Conclusion

Evaluative research enables product managers to assess how well the product meets user and organizational needs, and how easy it is to use. When carried out regularly during the product development process, it allows them to validate ideas and iterate on them in an informed way.

If you’d like to see how Userpilot can help your business collect evaluative data, book a demo!