Child Health and Nutrition Research Initiative (CHNRI) approach to research priority setting

Paper summarised: Igor Rudan, Jennifer L. Gibson, Shanthi Ameratunga, Shams El Arifeen, Zulfiqar A. Bhutta, Maureen Black, Robert E. Black, Kenneth H. Brown, Harry Campbell, Ilona Carneiro, Kit Yee Chan, Daniel Chandramohan, Mickey Chopra, Simon Cousens, Gary L. Darmstadt, Julie Meeks Gardner, Sonja Y. Hess, Adnan A. Hyder, Lydia Kapiriri, Margaret Kosek, Claudio F. Lanata, Mary Ann Lansang, Joy Lawn, Mark Tomlinson, Alexander C. Tsai, Jayne Webster. Setting Priorities in Global Child Health Research Investments: Guidelines for Implementation of CHNRI Method. Croat Med J. 2008;49:720-33. doi:10.3325/cmj.2008.49.720

Summary of the paper by Jill Pooler


The authors (Child Health and Nutrition Research Initiative (CHNRI)) propose a systematic yet flexible method for setting research priorities for global child health. The rationale for this project is the view that current research prioritization approaches may be flawed and thereby partly responsible for persistent high levels of mortality among children globally. Moreover, it is argued that whilst the purpose of all health research is to reduce the existing burden of disease and disability and improve health, many investments in research will never sufficiently achieve these goals. The purpose of the CHNRI priority setting method is to inform those who invest in research about the risks associated with their investments. The target audiences are international agencies, large research funding donors, national governments and policy makers.

This paper sets out the method in fifteen steps. These are summarised in the table below:

Step 1

Selecting managers of the process

A small team of people who represent investors in health research, their interests and visions (stakeholders).

Their role is to assess the likelihood that the proposed research will reduce the burden of disease within the context of the investments being made.

Step 2

Process managers to specify the context and risk management preferences

  1. Context in space: what is the population in which the investments in health research should contribute to a reduction in the burden of disease and improve health?
  2. Disease, disability and death burden: what is known about the problem to be addressed by the research?
  3. Context in time: what is time lag between the intervention and detectable disease reduction?
  4. Stakeholders: whose values and interests should be respected when setting research investment priorities?
  5. Risk management preferences: how will investment risk be managed?

Step 3

Process managers to discuss criteria for setting health research priorities

Define criteria specific to the ‘context’ for discriminating between competing ‘investment options’.  For example: i) answerability ii) attractiveness iii) novelty iv) potential for translation v) effectiveness vi) affordability vii) deliverability viii) sustainability ix) public opinion x) ethical issues xi) potential impact on disease burden xii) equity xiii) community involvement xiv) cost and feasibility xv) enterprise generation. However, the longer the list of criteria, the greater the possibility of overlap, which reduces their usefulness as independent criteria.

Step 4

Process managers to choose a limited set of the most useful and important criteria

Using milestones which set out the aims of any health research, select from the previous list criteria that should discriminate between competing options (merging criteria if necessary).

See Figure 1 from Rudan et al article. 

Step 5

Process managers to develop the means to assess the likelihood that proposed health research options will satisfy selected criteria

Invite a group of technical experts (e.g. methodologist, economist, statistician, health impact assessor) to work closely with the process managers to list, check and score research options/questions using a simple yes/no question proforma addressing each of the criteria individually. An example question regarding the criterion ‘answerability’ is: Is the research option/question well framed and are the endpoints well defined?

Step 6

Systematically list a large number of proposed health research options

Whatever the funding circumstances the research priorities are responding to, list and map: i) the research domain (e.g., research to assess health burden); ii) the research avenue (e.g., measuring the burden); iii) the research option (e.g., duration of research); and iv) the research question, in order to identify the most important and specific questions to be investigated.

Step 7

Pre-score all competing research options

Using the framework in Step 4 map the research options/questions to the milestones.

Step 8

Score health research options using the chosen set of criteria

Technical experts to score the research options/questions independently against the criteria selected by the process managers in Step 4: 1 = I agree; 0 = I disagree; 0.5 = neither agree nor disagree.

Step 9

Calculating intermediate scores for each health research option

The scores of the technical experts from Step 8 are summed for each research option/question and divided by the number of received answers. The results take a value between 0% and 100%, each representing a measure of the collective optimism among the technical experts about the likelihood that the option/question will satisfy each priority setting criterion in turn. The scores can then be ranked.
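The calculation in Steps 8 and 9 can be sketched as follows; this is a minimal illustration, not the authors' software, and assumes that experts who decline to answer are simply left out of the denominator.

```python
# Computing the "collective optimism" intermediate score for one research
# option on one criterion from the expert answers of Step 8
# (1 = agree, 0.5 = undecided, 0 = disagree; None = no answer given).
def intermediate_score(answers):
    """Return the collective optimism as a percentage (0-100)."""
    given = [a for a in answers if a is not None]  # ignore missing answers
    return 100.0 * sum(given) / len(given)

# Example: seven experts score the criterion "answerability";
# one expert declines to answer.
answers = [1, 1, 0.5, 0, 1, None, 0.5]
print(round(intermediate_score(answers), 1))  # → 66.7
```

The same calculation is repeated for every research option/question against every criterion, producing the matrix of intermediate scores that is ranked at the end of Step 9.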

Step 10

Obtaining further input from stakeholders

Involve stakeholders to i) define a minimal score (threshold) for each criterion that needs to be achieved for any research option to be considered a funding priority; ii) allocate different weights to the criteria, so that the final score is not a simple arithmetic mean but a weighted mean.

Step 11

Adjusting intermediate scores taking into account the values of stakeholders

Calculate the weighted mean of the scores using the stakeholder weights from Step 10. Discard research options that fail to reach all the suggested thresholds.
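Steps 10 and 11 combined might look like the following sketch; the criterion names, weights and thresholds are invented for illustration and are not taken from the paper.

```python
# Applying hypothetical stakeholder weights and thresholds (Step 10)
# to the intermediate scores of Step 9 (Step 11).
def weighted_priority(scores, weights, thresholds):
    """Return the weighted mean score, or None if any threshold is missed."""
    for criterion, score in scores.items():
        if score < thresholds[criterion]:
            return None  # option fails a threshold and is discarded
    total_weight = sum(weights.values())
    return sum(scores[c] * weights[c] for c in scores) / total_weight

scores = {"answerability": 80.0, "effectiveness": 65.0, "equity": 70.0}
weights = {"answerability": 1.0, "effectiveness": 2.0, "equity": 1.5}
thresholds = {"answerability": 50.0, "effectiveness": 50.0, "equity": 50.0}
print(weighted_priority(scores, weights, thresholds))  # → 70.0
```

Note how the weighting lets stakeholders make, say, effectiveness count twice as much as answerability without changing the experts' underlying scores.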

Step 12

Calculating overall priority scores and assigning marks

Calculate the mean of the scores given by the technical experts in Step 8 across all criteria chosen in Step 4; see the figure from the Rudan et al article.
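In its unweighted form, the overall research priority score (RPS) of Step 12 is just the mean of the intermediate scores across the chosen criteria; the criterion names below are illustrative only.

```python
# Overall research priority score (RPS) as the simple mean of the
# intermediate (collective optimism) scores across the Step 4 criteria.
intermediate = {"answerability": 80.0, "effectiveness": 65.0,
                "deliverability": 70.0, "equity": 55.0}
rps = sum(intermediate.values()) / len(intermediate)
print(rps)  # → 67.5
```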

Step 13

Performing an analysis of agreement between scorers

For transparency, assess the level of agreement between technical experts for each research option/question using a kappa statistic.
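The paper calls for a kappa calculation without specifying a variant; one plausible choice for multiple raters is Fleiss' kappa, sketched below over the three answer categories (0, 0.5, 1). The example ratings are invented.

```python
# Chance-corrected agreement (Fleiss' kappa) between experts across
# research options. Each inner list holds one option's expert answers.
from collections import Counter

def fleiss_kappa(ratings):
    """ratings: list of per-option lists of expert answers (0, 0.5 or 1)."""
    categories = [0, 0.5, 1]
    r = len(ratings[0])   # raters per option (assumed equal for all options)
    n = len(ratings)      # number of research options
    p_bar = 0.0           # mean per-option observed agreement
    totals = Counter()    # overall category counts, for chance agreement
    for answers in ratings:
        counts = Counter(answers)
        totals.update(counts)
        p_bar += (sum(c * c for c in counts.values()) - r) / (r * (r - 1))
    p_bar /= n
    p_e = sum((totals[c] / (n * r)) ** 2 for c in categories)
    return (p_bar - p_e) / (1 - p_e)

ratings = [[1, 1, 1, 0.5], [0, 0, 0.5, 0], [1, 1, 1, 1]]
print(round(fleiss_kappa(ratings), 2))  # → 0.41
```

Values near 1 indicate strong agreement among the experts; values near 0 indicate agreement no better than chance, flagging controversial options.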

Step 14

Linking computed research priority scores with investment decisions

All investment decisions should be based on: i) the research priority score (RPS) and cost of each research option/question, whether already supported or proposed as an alternative; ii) maximising the sum of RPS values of supported research options within a given fixed budget; iii) shifting resources from existing research options into new ones when the sum of RPS values within an existing programme is lower than that of the alternative.
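Maximising the sum of RPS values within a fixed budget is a 0/1 knapsack problem. The sketch below uses exhaustive search, which is adequate for the small option lists typical of a priority setting exercise; the option names, costs and scores are invented.

```python
# Choosing the portfolio of research options that maximises total RPS
# within a fixed budget (brute-force 0/1 knapsack, fine for small n).
from itertools import combinations

def best_portfolio(options, budget):
    """options: dict mapping name -> (cost, rps). Returns (chosen set, total RPS)."""
    best, best_score = (), 0.0
    for k in range(1, len(options) + 1):
        for subset in combinations(options, k):
            cost = sum(options[o][0] for o in subset)
            score = sum(options[o][1] for o in subset)
            if cost <= budget and score > best_score:
                best, best_score = subset, score
    return set(best), best_score

options = {"A": (40, 75.0), "B": (30, 60.0), "C": (50, 80.0), "D": (20, 35.0)}
chosen, total = best_portfolio(options, budget=100)
print(chosen, total)
```

Here the cheapest-per-point combination B, C and D (total cost 100, total RPS 175) beats funding the single highest-scoring option C plus A, illustrating point iii): resources shift to whichever portfolio yields the higher RPS sum.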

Step 15

Feedback and revision

Adjust the research investment portfolio to new contexts and aim to reduce the existing disease burden in the most cost-effective and equitable way by: i) adding further research options/questions to the list; ii) adding additional criteria; iii) re-scoring all research options in the redefined context; iv) revising thresholds and weights.

The advantages of this method are: i) transparent presentation of the context and criteria in the priority setting process; ii) management of the process by stakeholders/investors over its entire duration; iii) a structured way of scoring which should limit special-interest or personal biases; iv) involvement of non-technical stakeholders; v) flexibility of the process according to context; vi) potential to revise weights and thresholds according to context; vii) simple presentation of the strengths and weaknesses of competing research options; viii) ability to rank research options; ix) a simple quantitative outcome; x) exposure of points of agreement and controversy.

The authors report that whilst the process outlined above attempts to deal with complex issues, it would benefit from independent validation. Concerns relate to the potential for a limited number of research options/questions, and to the potential for bias when only a very limited group of technical experts and stakeholders is involved.