Three key ways to make the most of your measures

Different research tools work with different audiences. Photograph © CC0 Public Domain

Our experience has taught us to design our measurement systems around the audience, writes Rebecca Gulc

This is one of a monthly series of blogs exploring what we have learned from our research into helping improve services for people with mental health problems, children, young people and families, and those in later life.

Standardised measures are now used extensively by academics and clinicians in a range of research fields. Also known as validated measures, they are rigorously tested for reliability and validity.

At Qa Research we have made increasing use of such tools in recent years. And we have found that they need to be deployed with care to maximise their usefulness.

After all, these tools place the emphasis on uniformity in delivery – and most of the audiences we work with are anything but uniform.

Meanwhile, the settings for our research are never the same, another variable that needs to be considered.

So what have we learned from using standardised measures? Here are three key discoveries…

1. Purpose is key

A lot of measures ask research participants to declare very personal information and feelings. In these cases, the benefits of using a particular question must outweigh the negatives, including the risk of making a situation worse for a participant, or even alienating them.

That is particularly true when covering topics such as mental health and isolation, using tools such as the Warwick-Edinburgh Mental Wellbeing Scale and the UCLA Loneliness Scale. As researchers often have only one chance to interview a participant, it is vital to get this judgement call right.

One of our clients, the Campaign to End Loneliness, has shown how other, similar tools can be used to gather the same kind of information. Each method has its own pros and cons, as this client has clearly set out in their toolkit (PDF).

For example, will a negatively worded question provide you with any better information than a more positively worded one?

Below are statements from two different measures aimed at eliciting the same kind of information about loneliness. The difference is that one is worded negatively and the other positively:

I miss having people around me [Yes / More or less / No]

I have enough people I feel comfortable asking for help at any time (Strongly Disagree / Disagree / Neutral / Agree / Strongly Agree / Don’t Know)

Here, the end product should be considered. Are the findings going to be compared to other national studies? Is the information to be used for a hard-hitting media campaign?

If so, you may need to use particular tools. If not, more positively worded questions, together with bespoke questions, can be worth considering.

2. Balance is key

To be ethical, a survey has to be balanced. We may use closed questions to elicit personal feelings. But can we then empower the participants – perhaps by asking for their opinions on how to improve a given situation that affects them?

We find that a balance of questions works well and retains engagement. No one wants to take part in a depressing survey, and it’s important for us to end on a positive/constructive line of questioning whenever possible.

When it comes to asking sensitive questions, it is crucial that we preface them with care, to help participants manage their emotions. Follow-up bespoke questions can also soften the somewhat harsh wording of some regularly used measures.

3. Knowing your participants is key

Standardised measures are often tested with specific kinds of audiences. With niche audiences, it's important to ask…

  • Realistically, will this method work?
  • Will the wording be understood?
  • Is it relevant to this audience?

If a measure isn’t going to be understood, a researcher is left to explain around the question, devaluing the response. A measure has to produce usable data.

Matching the tool to the audience is critical. Will showcards help? Will they make asking the questions seem less personal?

We’ve found such small considerations can make a big difference to how both participants and researchers feel about sensitive research.