Glossary

Closed loop

A set of three or more interventions in a network diagram that are connected by a polygon. In a closed loop, it is possible to follow a path from an intervention node back to that same node via two or more intermediate interventions. A closed loop occurs in a network of three comparisons when each intervention has been compared directly with both of the others. This is shown graphically in Figure 1. For the interventions A, B and C, A has been compared with both B and C, and B has been compared with C, so there are AB, AC and BC intervention comparisons. In a closed loop, each direct source of evidence can be complemented by an indirect source of evidence for the same comparison.
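As an illustration (not part of the glossary source), a set of pairwise comparisons can be checked for a closed loop programmatically. The sketch below uses union-find cycle detection and assumes each comparison appears once with a fixed orientation; the comparisons AB, AC and BC are made up:

```python
# Minimal sketch: detect whether a set of pairwise comparisons contains a
# closed loop (a path from an intervention back to itself via two or more
# intermediate interventions). Comparisons are illustrative, not real data.
def has_closed_loop(edges):
    """Return True if the comparison graph contains any cycle.

    Assumes each comparison appears once, as an (X, Y) tuple with a
    fixed orientation (no duplicate reversed pairs).
    """
    parent = {}

    def find(x):
        parent.setdefault(x, x)
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path compression
            x = parent[x]
        return x

    for a, b in edges:
        ra, rb = find(a), find(b)
        if ra == rb:
            return True  # a and b already connected: this edge closes a loop
        parent[ra] = rb
    return False

print(has_closed_loop({("A", "B"), ("A", "C"), ("B", "C")}))  # prints True
print(has_closed_loop({("A", "B"), ("A", "C")}))              # prints False
```

A "star" network (every intervention compared only against a common comparator) contains no closed loop, so it supplies no independent indirect evidence to check against the direct evidence.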

 

 

 

Direct comparison

A comparison of two or more interventions made within a study.

 

Direct evidence

Evidence on the relative effects of interventions derived entirely from direct comparisons.

 

Design of a study

The set of treatments being compared in a study. A study comparing treatment A with treatment B has an “AB design”; a three-arm study comparing treatments A, B and C has an “ABC design”.

 

Edge [in a network diagram]

A line connecting two intervention nodes in a network diagram. The term originates from graph theory.

 

GRADE

A system developed by the Grading of Recommendations Assessment, Development and Evaluation (GRADE) Working Group for grading the quality of evidence. In a systematic review, GRADE defines the quality of a body of evidence as the extent to which one can be confident that an estimate of effect is close to the quantity of interest.

Inconsistency (loop inconsistency) (synonyms: incoherence, incongruence)

A situation in which an intervention effect measured using an indirect comparison is not equivalent to the intervention effect measured using a direct comparison. This usually refers to the mean intervention effect in the context of a random-effects meta-analysis, therefore allowing for the usual variation due to heterogeneity within the direct evidence. In the presence of mixed evidence, it is possible to estimate the amount of loop inconsistency and evaluate it statistically by comparing direct and indirect estimates of intervention effect.
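In a single loop, this statistical comparison is often summarized by the difference between the direct and indirect estimates divided by its standard error (a z statistic). A minimal sketch, with made-up effect estimates on the log odds ratio scale:

```python
# Minimal sketch: quantify loop inconsistency as the difference between the
# direct and indirect estimates of the same comparison, divided by its
# standard error. All numbers are illustrative, not real data.
import math

def inconsistency_z(direct, se_direct, indirect, se_indirect):
    """z statistic for the direct-vs-indirect difference in one loop.

    Assumes the two estimates are independent, so their variances add.
    """
    diff = direct - indirect
    se_diff = math.sqrt(se_direct ** 2 + se_indirect ** 2)
    return diff / se_diff

z = inconsistency_z(direct=0.50, se_direct=0.15, indirect=0.20, se_indirect=0.25)
# Two-sided p-value from the standard normal distribution:
p = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
print(round(z, 2))  # prints 1.03
```

A large |z| (small p) suggests the direct and indirect evidence disagree by more than heterogeneity alone would explain.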

 

Inconsistency (design inconsistency) (synonym: design-by-treatment interaction)

A situation in which an intervention effect measured in direct evidence derived from a particular design is not equivalent to the intervention effect measured in a different design. That is, the relative effect of treatment A versus B differs between studies with an AB design and studies with an ABC design. In a network containing only two-arm studies, loop inconsistency and design inconsistency coincide. It is possible to estimate the amount of design inconsistency and evaluate it statistically by comparing estimates of the intervention effect from different designs.

 

 

Indirect comparison

A comparison of two interventions via one or more common comparators. For example, the combination of intervention effects from AC studies and intervention effects from BC studies may (in some situations) be used to learn about the AB intervention effect.
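On an additive scale (such as the log odds ratio), the indirect AB estimate via comparator C is the difference of the two direct estimates, and its variance is the sum of their variances. A minimal sketch with illustrative numbers:

```python
# Minimal sketch: indirect estimate of the A-vs-B effect via common
# comparator C, on an additive scale such as the log odds ratio.
# Variances add because the AC and BC estimates come from separate,
# independent sets of studies. The numbers are illustrative, not real data.
import math

def indirect_estimate(d_AC, se_AC, d_BC, se_BC):
    d_AB = d_AC - d_BC                           # effect of A vs B via C
    se_AB = math.sqrt(se_AC ** 2 + se_BC ** 2)   # SEs combine in quadrature
    return d_AB, se_AB

d_AB, se_AB = indirect_estimate(d_AC=0.8, se_AC=0.2, d_BC=0.3, se_BC=0.2)
print(round(d_AB, 3), round(se_AB, 3))  # prints 0.5 0.283
```

Note that the indirect estimate is always less precise than either of the direct estimates it is built from, since the uncertainties accumulate.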

 

Indirect evidence

Evidence on the relative effectiveness of two interventions derived entirely from indirect comparisons. Indirect evidence may be available via different routes and via more than one intermediate comparator; we consider these to be compound indirect evidence. Networks in which there is compound indirect evidence are most conveniently analysed using network meta-analysis.

 

Mixed evidence

Evidence on the relative effectiveness of two interventions derived from a combination of direct and indirect comparisons. If the indirect evidence comes from only one route (i.e. simple indirect evidence) then a mixed estimate can be obtained as a weighted average of the direct and the indirect estimates of intervention effect. When there are multiple indirect routes between A and B in the network, this approach can be extended, but the most convenient approach to combining them all is a network meta-analysis.  
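The weighted average described above is conventionally an inverse-variance weighted average. A minimal sketch (the input estimates are illustrative, not real data):

```python
# Minimal sketch: combine a direct and a simple indirect estimate of the
# same comparison as an inverse-variance weighted average. Inputs are
# illustrative effect estimates (e.g. log odds ratios), not real data.
def mixed_estimate(direct, se_direct, indirect, se_indirect):
    w_d = 1.0 / se_direct ** 2               # weight = 1 / variance
    w_i = 1.0 / se_indirect ** 2
    estimate = (w_d * direct + w_i * indirect) / (w_d + w_i)
    se = (w_d + w_i) ** -0.5                 # SE of the weighted average
    return estimate, se

est, se = mixed_estimate(direct=0.50, se_direct=0.15,
                         indirect=0.20, se_indirect=0.25)
# The mixed estimate lies between the two inputs, closer to the more
# precise (direct) one, and is more precise than either alone:
print(round(est, 3), round(se, 3))
```

A network meta-analysis generalizes this calculation to all comparisons and all indirect routes simultaneously.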

 

Network diagram (synonym: network map)

A graphical depiction of how each intervention is connected to the others through direct comparisons. Each line, or edge, depicts a direct comparison between two intervention nodes.

 

Figure 2: example of a network diagram

 

 

Network meta-analysis (synonyms: multiple treatments meta-analysis, mixed treatment comparison)

An analysis that synthesizes information over a network of comparisons to assess the comparative effects of more than two alternative interventions for the same condition. A network meta-analysis synthesizes direct and indirect evidence over the entire network, so that estimates of intervention effect are based on all available evidence for those comparisons. This evidence may be direct evidence, indirect evidence or mixed evidence. Typical outputs of a network meta-analysis are (a) relative intervention effects for all comparisons; and (b) a ranking of the interventions.

 

Node [in a network diagram]

A discrete intervention in a network diagram. The term originates from graph theory.

 

Rankogram

A two-dimensional, treatment-specific plot presenting on the horizontal axis the possible ranks of the treatment and on the vertical axis the probability that the treatment assumes each of the possible ranks for a specific outcome. Cumulative rankograms instead present on the vertical axis the cumulative probability that the treatment assumes each of the possible ranks. The cumulative rankogram is a step function but is often presented, equivalently, as a segmented line (with interpolation at the mean of each step).
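The cumulative probabilities plotted in a cumulative rankogram are simply running sums of the rank probabilities. A minimal sketch for one treatment in a four-treatment network, with made-up probabilities:

```python
# Minimal sketch: rank probabilities and their cumulative counterparts for
# one treatment among four. The probabilities are made up, not real data.
from itertools import accumulate

rank_probs = [0.50, 0.30, 0.15, 0.05]      # P(rank = 1), ..., P(rank = 4)
cumulative = list(accumulate(rank_probs))  # P(rank <= 1), ..., P(rank <= 4)

for rank, (p, cp) in enumerate(zip(rank_probs, cumulative), start=1):
    print(f"rank {rank}: P = {p:.2f}, cumulative P = {cp:.2f}")
```

The last cumulative probability is always 1, since the treatment must take some rank.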

 

Similarity

A term used to describe the situation in which different sources of direct or indirect evidence are similar with respect to moderators of the intervention effects. Two types of moderators are distinguished: (a) clinical similarity refers to similarity in patient characteristics, interventions, settings, length of follow-up, and outcomes measured; (b) methodological similarity refers to aspects of trials associated with the risk of bias.

In direct comparisons, similarity refers to clinical and methodological homogeneity of the studies, and a meaningful summary estimate is obtained when the studies are similar in their moderators of intervention effect. In indirect and mixed comparisons, similarity refers to the distribution of the effect modifiers across the different sets of studies grouped by comparison. Similarity in a network of trials can be thought of as an extension of the idea of clinical or methodological homogeneity in standard meta-analysis. See also transitivity.

 

Surface under the cumulative ranking curve (SUCRA)

The area under the cumulative rankogram; it takes a value between 0 and 1 and can be re-expressed as a percentage. The larger the SUCRA, the higher the treatment sits in the hierarchy for that outcome.
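Computed from the rank probabilities directly, SUCRA is the average of the cumulative rank probabilities over the first n − 1 ranks. A minimal sketch with made-up probabilities:

```python
# Minimal sketch: SUCRA from a treatment's rank probabilities, as the mean
# cumulative rank probability over the first n - 1 ranks. The probabilities
# are made up, not real data.
from itertools import accumulate

def sucra(rank_probs):
    cumulative = list(accumulate(rank_probs))
    n = len(rank_probs)
    return sum(cumulative[:-1]) / (n - 1)

print(sucra([0.50, 0.30, 0.15, 0.05]))  # a treatment likely to rank high
print(sucra([1.0, 0.0, 0.0, 0.0]))      # certain to be best: prints 1.0
print(sucra([0.0, 0.0, 0.0, 1.0]))      # certain to be worst: prints 0.0
```

A treatment certain to be the best has SUCRA 1; one certain to be the worst has SUCRA 0.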

 

 

Summary of findings

A table in a Cochrane intervention review that presents the main findings of a review in a transparent and simple form. Summary of findings tables provide key information concerning the quality of the evidence (GRADE), the magnitude of intervention effects and the sum of the available data on the main outcome. 

 

Transitivity

The situation in which an intervention effect measured using an indirect comparison is valid and equivalent to the intervention effect measured using a direct comparison. Specifically, the transitivity assumption states that (the benefit of A over B) is equal to (the benefit of A over C) plus (the benefit of C over B). Equivalently, this may be written as (the benefit of A over C) minus (the benefit of B over C). In practice, transitivity requires similarity; that is, the sets of studies used to obtain the indirect comparison must be sufficiently similar in characteristics that moderate the intervention effect.
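Writing $d_{XY}$ as shorthand for the relative effect of X versus Y (notation introduced here for illustration, not used elsewhere in the glossary), the additivity relation above can be expressed compactly as:

```latex
d_{AB} = d_{AC} + d_{CB} = d_{AC} - d_{BC}
```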

Transitivity can be thought of as a network meta-analysis extension of the idea of homogeneity in a standard meta-analysis. The terms ‘consistency’, ‘coherence’ and ‘congruence’ all refer to the same idea. However, they are sometimes used to refer to evidence from the data in favour of the transitivity assumption. Specifically, in the presence of mixed evidence, it is possible to examine statistically whether the transitivity assumption holds by comparing direct and indirect estimates of intervention effect. Such analyses may be described as investigations of (in)consistency, (in)coherence or (in)congruence. See also similarity, inconsistency.