Multiple Criteria Decision Analysis for Health Technology Assessment

Thokala, P., & Duenas, A. (2012). Multiple Criteria Decision Analysis for Health Technology Assessment. Value in Health, 15, 1172–1181. http://dx.doi.org/10.1016/j.jval.2012.06.015

Paper summarised by Jill Pooler

Introduction

Whilst some health care organizations in a few countries have attempted to incorporate different criteria into their decision making, most rely on a weighted-sum approach, whereby the degree to which one decision option is preferred over another is represented by constructing and comparing numerical scores (overall value). A score is first developed for each individual criterion, and the scores are then aggregated.
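As a minimal sketch of this weighted-sum aggregation (all criteria, scores, and weights below are invented for illustration, not taken from the paper), the calculation amounts to a few lines of Python:

    # Hypothetical per-criterion scores (0-100) for two options.
    scores = {
        "drug_A": {"cost_effectiveness": 80, "equity": 40, "compliance": 70},
        "drug_B": {"cost_effectiveness": 55, "equity": 75, "compliance": 60},
    }
    # Hypothetical weights reflecting relative importance; they sum to 1.
    weights = {"cost_effectiveness": 0.5, "equity": 0.3, "compliance": 0.2}

    # Overall value of each option = weighted sum of its criterion scores.
    for option, s in scores.items():
        overall = sum(weights[c] * s[c] for c in weights)
        print(option, round(overall, 1))   # drug_A: 66.0, drug_B: 62.0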

It is suggested that multi-criteria decision analysis (MCDA) is a method of capturing benefits beyond quality-adjusted life-years in a transparent and consistent manner. The main aspects of MCDA methods are i) consideration of the alternatives to be appraised, ii) the criteria (or attributes) against which the alternatives are appraised, iii) scores that reflect the value of an alternative’s expected performance on the criteria, and iv) criteria weights that measure the relative importance of each criterion compared with the others. MCDA approaches can be classified broadly into three categories: i) value measurement models, ii) outranking models, and iii) goal, aspiration, or reference-level models.

This paper argues that whilst the evidence suggests MCDA methods can support decision makers faced with evaluating alternatives, the approaches remain limited, and further research is needed before they are implemented in the health technology appraisal process.

The objectives of this article are i) to analyze the possible application of MCDA approaches in health technology assessment (HTA) and ii) to describe their relative advantages and disadvantages.

Methods

Using the National Institute for Health and Clinical Excellence (NICE) as an example, the paper first compares the MCDA approach to the NICE appraisal process and extrapolates the findings to other international health care decision-making organizations.

Using a case study, the authors illustrate the three MCDA approaches mentioned earlier, i) value measurement models, ii) outranking models, and iii) goal, aspiration, or reference-level models, to demonstrate the potential advantages and pitfalls of the different approaches.

Results

MCDA versus NICE Appraisal Process

The findings reveal that both MCDA and NICE i) scope or structure the problem and ii) capture evidence. However, it is in the decision-making stage that the MCDA and NICE appraisal processes differ: NICE presents the evidence in report form to an appraisal committee, which makes a decision using the incremental cost-effectiveness ratio (ICER) to measure the incremental cost per quality-adjusted life-year (QALY) gained by recipients of treatment. Conversely, the MCDA approach quantifies the evidence to identify the best alternative(s).
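For concreteness, the ICER is simply the difference in cost divided by the difference in QALYs between the new and the current intervention; a toy calculation with invented figures:

    # Hypothetical per-patient costs and QALYs for the two interventions.
    cost_current, qalys_current = 10_000.0, 2.0
    cost_new, qalys_new = 18_000.0, 2.5

    # ICER = incremental cost per incremental QALY gained.
    icer = (cost_new - cost_current) / (qalys_new - qalys_current)
    print(f"ICER: {icer:,.0f} per QALY gained")   # 16,000 per QALY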

Case Study

The case study is based on a hypothetical NICE technology appraisal process in which a recommendation must be made between two drugs, A and B, where drug A is the current intervention and drug B is the new intervention. The characteristics of each drug when compared against best standard care are shown in Table 1 of the article.

The three MCDA approaches mentioned earlier are used to illustrate decision making in this case study, against the criteria of cost-effectiveness (C/E), equity, innovation, patient compliance, and quality of evidence. The aim is to compare different MCDA techniques for incorporating multiple criteria into the decision process once the relevant criteria have been identified.

Value measurement models

This approach is based on constructing a single overall value for each alternative to establish a preference order of alternatives. It is described as simple to use, but as observed in this scenario, poor performance on one criterion (C/E) can be compensated by strong performance on other criteria, depending on the weights and partial value functions (Table 2 of the article).
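A hedged sketch of how such a model can exhibit this compensation, assuming simple linear partial value functions that rescale each raw attribute onto a 0–1 scale (the ICER range, equity values, and weights below are all invented):

    # Linear partial value function: maps a raw attribute onto [0, 1].
    def partial_value(x, worst, best):
        return (x - worst) / (best - worst)

    # Invented ICERs (lower is better, so "best" is the low end of the range).
    v_ce_A = partial_value(20_000, worst=50_000, best=10_000)   # 0.750
    v_ce_B = partial_value(45_000, worst=50_000, best=10_000)   # 0.125
    v_eq_A, v_eq_B = 0.3, 0.9   # assumed partial values for equity

    w_ce, w_eq = 0.5, 0.5       # assumed swing weights
    print("A:", w_ce * v_ce_A + w_eq * v_eq_A)   # 0.5250
    print("B:", w_ce * v_ce_B + w_eq * v_eq_B)   # 0.5125: close despite poor C/E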

Outranking approach

The principle of outranking is based on the general concept of dominance. Strict dominance, however, rarely occurs in practice, and thus the evidence needs to be evaluated in a systematic manner. More generally, drug A outranks drug B if there is sufficient evidence to justify the conclusion that drug A is at least as good as drug B, taking all criteria into account.

The performance scores of the drugs against the individual criteria are shown in Table 3 of the article.

The matrix of outranking relations along with the relative weights for different criteria is shown in Table 4 of the article. 

The outranking approach recognizes that performance scores are imprecise measures. This method does not impose the theoretical requirement that weights represent trade-offs, as value measurement models do; the weights simply convey the relative importance of the different criteria. The method is intuitive, and the use of indifference and veto thresholds allows more flexible and realistic decision rules to be specified. The approach might lead to incomparability if two drugs are quite similar; however, one could argue that this is appropriate for the appraisal process, as further deliberation might be needed to choose between drugs whose performance is quite similar.
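As an illustrative sketch in the spirit of outranking methods such as ELECTRE (the paper does not prescribe this particular rule; the scores, weights, and thresholds below are assumptions): drug X outranks drug Y if the criteria on which X is at least as good as Y carry enough total weight, and Y does not beat X by more than a veto threshold on any single criterion.

    # Hypothetical per-criterion scores (0-100) and weights (sum to 1).
    weights = {"ce": 0.4, "equity": 0.3, "compliance": 0.3}
    A = {"ce": 70, "equity": 50, "compliance": 65}
    B = {"ce": 60, "equity": 70, "compliance": 60}

    CONCORDANCE_THRESHOLD = 0.6   # assumed required "weight of agreement"
    VETO = 25                     # assumed veto threshold per criterion

    def outranks(x, y):
        # Concordance: total weight of criteria on which x is at least as good.
        concordance = sum(w for c, w in weights.items() if x[c] >= y[c])
        # Veto: y must not beat x by more than VETO on any criterion.
        vetoed = any(y[c] - x[c] > VETO for c in weights)
        return concordance >= CONCORDANCE_THRESHOLD and not vetoed

    print(outranks(A, B))   # True: ce + compliance give concordance 0.7, no veto
    print(outranks(B, A))   # False: equity alone gives concordance 0.3
    # If neither drug outranked the other, they would be incomparable,
    # signalling that further deliberation is needed.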

Goal programming

Goal programming involves a mathematical formulation of the satisficing heuristic; the term “satisficing” is a combination of the terms “satisfy” and “suffice.” The emphasis of the satisficing model is on attaining satisfactory levels of performance on each criterion, considering the preference of criteria in their order of importance. Satisficing levels are predefined as “goals,” and a programming algorithm is used to identify the alternatives that satisfy the goals in the specified priority order.

In this case study, it was assumed that patient compliance and equity are difficult to change but that cost-effectiveness (C/E) can be improved by changing the price of the drug. Once both drugs can achieve the target C/E, the analysis moves to the next priority level, which includes all the other criteria. Drug A performs better than drug B in terms of getting closer to the equity and compliance goals; thus, it could be recommended on the condition that its price is reduced by 45%, to ensure that drug A satisfies the C/E goal (Table 5 of the article).
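A minimal sketch of this preemptive idea under invented numbers (the paper's actual figures and model are in Table 5; here the "mathematical programming" collapses to a one-line solve for the price cut that brings the ICER down to the C/E goal):

    # Hypothetical first-priority goal: ICER at or below 30,000 per QALY.
    CE_GOAL = 30_000.0
    price, other_costs = 40_000.0, 10_000.0   # invented cost breakdown
    incremental_qalys = 1.0                   # invented QALYs gained

    icer = (price + other_costs) / incremental_qalys
    if icer > CE_GOAL:
        # Solve (new_price + other_costs) / incremental_qalys == CE_GOAL.
        new_price = CE_GOAL * incremental_qalys - other_costs
        cut = 1 - new_price / price
        print(f"Price cut needed to meet the C/E goal: {cut:.0%}")   # 50%
    # Only once the first-priority goal can be met does the analysis move
    # on to the lower-priority goals (equity, compliance, ...).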

This goal programming approach requires mathematical programming techniques to estimate the price of the drug, based on the definition of “value” chosen by the health organization; the computational time this demands reflects the complexity of the approach.

Having used a case study comparing two drugs to illustrate the three MCDA approaches, the authors then compare the approaches against one another.

Comparison of three MCDA approaches

Table 6 of the article compares the different MCDA approaches on a number of dimensions to provide an indication of the potential benefits and limitations of each approach.

Table 6 shows that the requirements placed on the weights differ depending on the MCDA approach, with value measurement models requiring additional effort compared with outranking and goal programming methods because of the time needed to interpret swing weights. Similarly, value measurement models need significant effort to develop the performance value scores, while the goal programming and outranking methods can be applied to the attribute values directly. Value measurement models, however, are easy to understand and enable real-time sensitivity analysis. Both outranking and goal programming methods are easy to follow, but goal programming needs significant computational time. Furthermore, results from value measurement models lend themselves to easy visual presentation, while results from outranking and goal programming methods are difficult to follow. Finally, uncertainty is easier to incorporate in value measurement models than in outranking or goal programming approaches.

Generic issues with MCDA

The authors suggest that the MCDA process is flexible and can be tailored to any appraisal. However, for the MCDA method to be transparent, consistent, auditable, and defensible, the authors argue that the processes used must be made explicit, including the weighting, the aggregation of scores and values, and the handling of conflicts. Moreover, uncertainty regarding problem structuring (i.e., choosing the right MCDA model, criteria, level of detail, etc.), uncertainty in the evidence for the different alternatives, variation in preferences (i.e., uncertainty in performance scores, criteria weights, thresholds, etc.), and uncertainty in clinical effectiveness all have a direct effect on committee members’ preferences.

The authors argue that a range of analyses can be conducted to address these challenges: for example, scenario analyses, multi-attribute utility theory, fuzzy logic, and stochastic multicriteria acceptability analysis can be used to capture uncertainty. Variation in criteria values, weights, and thresholds, evident during the aggregation of the preferences of individuals in the decision committee, can be displayed using standard deviations associated with the mean values. Sensitivity analysis can be performed to check the robustness of the results to changes in the model parameters. Because uncertainty in the evidence for different alternatives is interdependent with variation in committee members’ preferences, probabilistic sensitivity analysis can be used to capture and propagate uncertainty using Monte Carlo simulation techniques.
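As an illustrative sketch of what such a probabilistic sensitivity analysis might look like in this setting (echoing the spirit of stochastic multicriteria acceptability analysis; all distributions, scores, and weights below are assumptions), one can repeatedly sample the uncertain inputs and report how often each drug comes out on top:

    import random

    # Assumed mean performance scores on a 0-1 scale.
    MEAN_A = {"ce": 0.7, "equity": 0.4}
    MEAN_B = {"ce": 0.5, "equity": 0.7}

    def sample_score(mean):
        # Normal draw around the mean, truncated to [0, 1].
        return min(max(random.gauss(mean, 0.1), 0.0), 1.0)

    N = 10_000
    wins = {"A": 0, "B": 0}
    for _ in range(N):
        w_ce = random.uniform(0.4, 0.8)   # uncertain weight for C/E
        v_A = w_ce * sample_score(MEAN_A["ce"]) + (1 - w_ce) * sample_score(MEAN_A["equity"])
        v_B = w_ce * sample_score(MEAN_B["ce"]) + (1 - w_ce) * sample_score(MEAN_B["equity"])
        wins["A" if v_A >= v_B else "B"] += 1

    # "Acceptability" of each drug: the share of simulations in which it wins.
    print({drug: count / N for drug, count in wins.items()})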

There is a practical burden associated with MCDA techniques, which rely on methods for capturing data and preferences, aggregating data, statistical analysis, and synthesizing results, and which may require specialist software or input during or between meetings. Moreover, the model outputs will need to be visualized and incorporated into the final documentation along with the recommendations, which may also require specialist input. A question for decision makers, therefore, is whether to train all the committee members in the relevant MCDA techniques or to have one or more facilitators help apply the techniques in the decision process. Collectively, the authors argue, this results in a substantial resource burden for decision makers, which needs to be balanced against the transparency and consistency achieved by using MCDA.

Conclusion

The objectives of this paper were i) to analyze the possible application of MCDA approaches in health technology assessment and ii) to describe their relative advantages and disadvantages.

Using the National Institute for Health and Clinical Excellence (NICE) as an example, the paper first compares the MCDA approach to the NICE appraisal process and extrapolates the findings to other international health care decision-making organizations. The authors conclude that MCDA approaches can support, rather than replace, the deliberative process already in place at NICE, adding a formal mathematical approach to decision making to ensure consistency and transparency through explicit scoring and weighting of criteria.

Using a case study, the authors illustrate the three MCDA approaches, i) value measurement models, ii) outranking models, and iii) goal, aspiration, or reference-level models, to demonstrate the potential advantages and pitfalls of the different approaches. The authors conclude that potential users need to understand the general practical issues that might arise from using an MCDA approach in the HTA process and to choose an appropriate MCDA method to ensure the success of MCDA techniques in the appraisal process.