The value of user research

Why we do user research and what we have learned so far…

When we started building Climate Policy Radar, we had lots of assumptions about who our users were and what solutions would help them. User research has helped us ensure our research tool meets the needs of the people we’re building it for: mainly “knowledge workers”, people who process large quantities of messy textual data, such as policy analysts, academics and risk modellers. In this blog post, we’ll explain why we carry out user research, how we’ve done it so far, what we’re learning, and some of the improvements we’re making in response.

Why we do user research

User research is a systematic process of understanding the needs, preferences, behaviours, and motivations of the people who use our tools. It has helped us refine our approach to building our tools and helped us tweak, validate, and sometimes completely invalidate the assumptions that we started with nearly three years ago.

We’re also using AI technologies that are still quite new, and we operate in a domain where rapid change is needed and expected. The best way to ensure we focus on the right areas is to talk to our users regularly.

Things we want to learn during user research include:

  • The type of knowledge work the user does (e.g. are they a policymaker or academic) and what questions they are trying to answer 

  • Their existing research process and which parts they most struggle with 

  • How and why people use our tools and what might make them more useful

Insights from user research help us make sure that we are evidence-based in our decision-making and in setting our priorities for future improvements.

How we carry out user feedback conversations

There are many different approaches to user research, but the main method that we’ve used so far is user feedback conversations. These are 15- to 45-minute conversations with someone who has used our product or might use it in the future. We invite every user who contacts us to join a user feedback session. This includes people who submit a law or policy document or request to download our data.

Our feedback conversations typically involve us asking the user:

  • questions about how they found us, what their role is, and what their work in climate policy involves

  • to share their screen and show us how they work and how they use our tools, or try out new features that we are developing

  • what problems they’re struggling with and how our tools might evolve to help them in the future

We have had lots of valuable chats with users at events like UN climate conferences, and in workshops. In addition, more than 70 people have signed up for 20-minute conversations. Most have given us much more than 20 minutes of their time and agreed to follow-up conversations, and none have been paid.

What we have learnt so far

Conversations with users have helped us validate that people find our tools valuable, identify problems with our design that make them difficult to use, and find ways to make our tools better.

We have spoken to a wide variety of users across the world who use our tools in many different ways. This includes civil servant policy analysts, NGO researchers, PhD students, risk modellers, scientific advisors and many more. A common finding across these conversations has been that our users are knowledge workers who answer research questions as part of their jobs.

The knowledge workers we focus on answer their research questions by applying a research process to large libraries of long, dense, unstructured documents. They are searching for nuggets of information to use in artifacts like policy options papers, policy reports or data models. These knowledge workers have given us a lot of positive feedback: “CPR are like the Google of climate policy” and “Instead of having to look in 30 different places I can find it all in a one stop shop” are two of my favourites. We have also learnt a lot about things we need to improve.

“Translation is the problem you need to solve.”

We spoke to a senior civil servant involved in a recent UNFCCC COP summit. In the build-up to the summit, hundreds of documents were released in numerous languages, which people needed to read and synthesise quickly. A particular pain point for them was not being able to easily read and understand documents written in languages other than English. As a result of this and other feedback, we brought UNFCCC documents into our tools and enabled translation of search results, so our users can now read documents written in any language.

“I didn’t realise that was how it worked.”

We spoke to many academics who were using our search tool to speed up the process of finding information in long and complex documents. Interestingly, some were opting to use our exact match search instead of our semantic search, which identifies related phrases such as “zero-emission cars” and “EVs” if you search for “electric vehicles”. They explained that they had been surprised to see highlighted matches that didn’t contain their search terms, and had assumed the search was broken. We adjusted the design of our search page, using new language to make it easier for people to understand how our search function works. In subsequent user feedback sessions we tested whether people understood this new content before releasing it to users.
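We haven’t described our search implementation in this post, so the sketch below is only a rough illustration of the behaviour those users were reacting to. It uses the open-source sentence-transformers library with a generic embedding model; the model name and passages are assumptions for the example, not our production stack.

```python
# Illustrative only: contrasts exact-match and semantic search behaviour.
# Assumes the open-source sentence-transformers library and a generic
# embedding model; this is NOT Climate Policy Radar's actual search stack.
from sentence_transformers import SentenceTransformer, util

passages = [
    "Subsidies for electric vehicles will double by 2027.",
    "The plan funds charging infrastructure for EVs.",
    "Zero-emission cars are exempt from registration tax.",
    "Coal plants will be phased out by 2035.",
]
query = "electric vehicles"

# Exact match: returns only passages containing the literal query string.
exact_hits = [p for p in passages if query.lower() in p.lower()]

# Semantic match: ranks passages by embedding similarity, so related
# phrases like "EVs" and "zero-emission cars" also score highly, which
# is why users saw highlighted passages that didn't contain their query.
model = SentenceTransformer("all-MiniLM-L6-v2")
query_emb = model.encode(query, convert_to_tensor=True)
passage_embs = model.encode(passages, convert_to_tensor=True)
scores = util.cos_sim(query_emb, passage_embs)[0]

for passage, score in sorted(zip(passages, scores.tolist()), key=lambda x: -x[1]):
    print(f"{score:.2f}  {passage}")
```

Exact match returns one passage here, while the semantic ranking surfaces three relevant ones; without an explanation in the interface, that mismatch can easily read as a bug.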

“Extracting structured data from policy documents ourselves would be ideal, but we never have time.”

We spoke to an economist at an international organisation who is using spreadsheet-based models to help governments estimate the impact of climate policy. A pain point for them is access to structured data: they don’t have time to extract data from long policy documents themselves, and rely on data from third parties who curate this information. The ability to easily find all passages containing climate targets in a document and extract them into a table would massively improve this economist’s models. This feature is currently in development, and this user has agreed to provide early feedback on prototypes.
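Since the feature is still in development, the sketch below is purely illustrative of the kind of output this economist described wanting: target-like passages pulled out of a document and tabulated. The passages, pattern and column names are all hypothetical.

```python
# Illustrative only: a naive first pass at tabulating climate targets.
# The passages, regex and columns are hypothetical examples, not the
# feature Climate Policy Radar is building.
import re
import pandas as pd

# Hypothetical passages already extracted from a policy document.
passages = [
    ("p. 12", "Reduce greenhouse gas emissions by 55% by 2030 against a 1990 baseline."),
    ("p. 14", "The strategy supports regional adaptation planning."),
    ("p. 31", "Achieve net zero emissions by 2050."),
]

# Crude pattern for target-like statements: a percentage or "net zero",
# followed somewhere by a four-digit target year.
TARGET = re.compile(r"(\d+%|net zero).*?\b(19|20)\d{2}\b", re.IGNORECASE)

rows = [
    {"location": loc, "passage": text}
    for loc, text in passages
    if TARGET.search(text)
]
print(pd.DataFrame(rows))
```

A regex like this would miss most real-world phrasings of targets, which is precisely why structured extraction is worth building as a proper feature rather than leaving each user to do it by hand.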

Help us improve even more

These are just some of the many things that we have learnt from user research so far, but we have a lot more to learn. Many thousands of people use our tools every week. We want to speak to as many users as we can, so we can understand which areas of improvement to focus on and test out designs for new features.

Would you like to have a user feedback conversation with us? Or do you know anyone who might? Book a slot to speak to our team.
