Please refer to the latest version of Chapter 8 of the Cochrane Handbook for the most up-to-date version of the Risk of Bias tool.
A bias is a systematic error, or deviation from the truth, in results or inferences. Biases can operate in either direction: different biases can lead to underestimation or overestimation of the true intervention effect. Biases can vary in magnitude: some are small (and trivial compared with the observed effect) and some are substantial (so that an apparent finding may be entirely due to bias). Even a particular source of bias may vary in direction: bias due to a particular design flaw (e.g. lack of allocation concealment) may lead to underestimation of an effect in one study but overestimation in another study. It is usually impossible to know to what extent biases have affected the results of a particular study, although there is good empirical evidence that particular flaws in the design, conduct and analysis of randomized clinical trials lead to bias. Because the results of a study may in fact be unbiased despite a methodological flaw, it is more appropriate to consider risk of bias.
Differences in risks of bias can help explain variation in the results of the studies included in a systematic review (i.e. can explain heterogeneity of results). More rigorous studies are more likely to yield results that are closer to the truth. Meta-analysis of results from studies of variable validity can result in false positive conclusions (erroneously concluding an intervention is effective) if the less rigorous studies are biased toward overestimating an intervention’s effect. It can also result in false negative conclusions (erroneously concluding no effect) if the less rigorous studies are biased towards underestimating an intervention’s effect (Detsky 1992).
It is important to assess risk of bias in all studies in a review irrespective of the anticipated variability in either the results or the validity of the included studies. For instance, the results may be consistent among studies but all the studies may be flawed. In this case, the review’s conclusions should not be as strong as if a series of rigorous studies yielded consistent results about an intervention’s effect. In a Cochrane review, this appraisal process is described as the assessment of risk of bias in included studies.
Bias should not be confused with imprecision. Bias refers to systematic error, meaning that multiple replications of the same study would reach the wrong answer on average. Imprecision refers to random error, meaning that multiple replications of the same study will produce different effect estimates because of sampling variation even if they would give the right answer on average. The results of smaller studies are subject to greater sampling variation and hence are less precise. Imprecision is reflected in the confidence interval around the intervention effect estimate from each study and in the weight given to the results of each study in a meta-analysis. More precise results are given more weight.
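To illustrate how precision determines weight in a meta-analysis, the following sketch computes standard fixed-effect inverse-variance weights (w = 1/SE²) for three trials. All trial names and numbers are invented for illustration only:

```python
# Fixed-effect, inverse-variance meta-analysis sketch: more precise
# studies (smaller standard errors) receive more weight.
# (trial name, effect estimate, standard error) -- hypothetical values
studies = [
    ("Trial A", 0.40, 0.10),   # large, precise trial
    ("Trial B", 0.55, 0.25),   # smaller, less precise
    ("Trial C", 0.20, 0.30),   # smallest, least precise
]

weights = [1 / se**2 for _, _, se in studies]          # w_i = 1 / SE_i^2
total = sum(weights)
pooled = sum(w * est for (_, est, _), w in zip(studies, weights)) / total
pooled_se = (1 / total) ** 0.5

for (name, est, se), w in zip(studies, weights):
    print(f"{name}: estimate={est:.2f}, weight={100 * w / total:.1f}%")
print(f"Pooled estimate: {pooled:.3f} (SE {pooled_se:.3f})")
```

Note how the most precise trial dominates the pooled estimate, which is why the imprecision of each study, via its confidence interval, feeds directly into the weighting described above.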
Selection bias refers to systematic differences between baseline characteristics of the groups that are compared. The unique strength of randomization is that, if successfully accomplished, it prevents selection bias in allocating interventions to participants. Its success in this respect depends on fulfilling several interrelated processes. A rule for allocating interventions to participants must be specified, based on some chance (random) process. We call this sequence generation. Furthermore, steps must be taken to secure strict implementation of that schedule of random assignments by preventing foreknowledge of the forthcoming allocations. This process is often termed allocation concealment, although it could more accurately be described as allocation sequence concealment. Thus, one suitable method for assigning interventions would be to use a simple random (and therefore unpredictable) sequence, and to conceal the upcoming allocations from those involved in enrolment into the trial.
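As an illustrative sketch only (not part of the Handbook's guidance), a simple random allocation sequence for a two-arm trial could be generated as follows. The seed is fixed purely to make the example reproducible; in practice the sequence would be generated and held centrally so that those enrolling participants cannot foresee upcoming allocations:

```python
import random

# Simple (unrestricted) random sequence generation for a two-arm trial.
# Each assignment is an independent coin flip, so upcoming allocations
# cannot be predicted from those already revealed.
rng = random.Random(2023)  # hypothetical seed, for reproducibility only
allocations = [rng.choice(["intervention", "control"]) for _ in range(20)]

print(allocations)
```

Allocation concealment is an organisational safeguard rather than a computation: the point is that this list stays hidden from recruiters until each participant is irreversibly enrolled.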
For all potential sources of bias, it is important to consider the likely magnitude and the likely direction of the bias. For example, if all methodological limitations of studies were expected to bias the results towards a lack of effect, and the evidence indicates that the intervention is effective, then it may be concluded that the intervention is effective even in the presence of these potential biases.
Performance bias refers to systematic differences between groups in the care that is provided, or in exposure to factors other than the interventions of interest. After enrolment into the study, blinding (or masking) of study participants and personnel may reduce the risk that knowledge of which intervention was received, rather than the intervention itself, affects outcomes. Effective blinding can also ensure that the compared groups receive a similar amount of attention, ancillary treatment and diagnostic investigations. Blinding is not always possible, however. For example, it is usually impossible to blind people to whether or not major surgery has been undertaken.
Detection bias refers to systematic differences between groups in how outcomes are determined. Blinding (or masking) of outcome assessors may reduce the risk that knowledge of which intervention was received, rather than the intervention itself, affects outcome measurement. Blinding of outcome assessors can be especially important for assessment of subjective outcomes, such as degree of postoperative pain.
Attrition bias refers to systematic differences between groups in withdrawals from a study. Withdrawals from the study lead to incomplete outcome data. There are two reasons for withdrawals or incomplete outcome data in clinical trials. Exclusions refer to situations in which some participants are omitted from reports of analyses, despite outcome data being available to the trialists. Attrition refers to situations in which outcome data are not available.
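To make the distinction concrete, this hypothetical sketch tallies how many randomized participants per arm remain in the analysis after attrition (outcome data unavailable) and exclusions (data available but omitted by the trialists); all counts are invented:

```python
# Summarizing completeness of outcome data per arm, separating
# attrition from exclusions. Asymmetry between arms (here 12 vs 5
# lost) is the kind of pattern that raises concern about attrition bias.
arms = {
    "intervention": {"randomized": 100, "attrition": 12, "excluded": 3},
    "control":      {"randomized": 100, "attrition": 5,  "excluded": 1},
}

summary = {}
for name, a in arms.items():
    analysed = a["randomized"] - a["attrition"] - a["excluded"]
    summary[name] = analysed
    print(f"{name}: {analysed}/{a['randomized']} analysed "
          f"({a['attrition']} lost, {a['excluded']} excluded)")
```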
Reporting bias refers to systematic differences between reported and unreported findings. Within a published report those analyses with statistically significant differences between intervention groups are more likely to be reported than non-significant differences. This sort of ‘within-study publication bias’ is usually known as outcome reporting bias or selective reporting bias, and may be one of the most substantial biases affecting results from individual studies (Chan 2005).
In addition there are other sources of bias that are relevant only in certain circumstances. These relate mainly to particular trial designs (e.g. carry-over in cross-over trials and recruitment bias in cluster-randomized trials); some can be found across a broad spectrum of trials, but only for specific circumstances (e.g. contamination, whereby the experimental and control interventions get ‘mixed’, for example if participants pool their drugs); and there may be sources of bias that are only found in a particular clinical setting.
The risk of bias tool addresses the following domains. For each domain, review authors record support for their judgement and their judgement about the risk of bias, as follows.

Random sequence generation
Support for judgement: Describe the method used to generate the allocation sequence in sufficient detail to allow an assessment of whether it should produce comparable groups.
Review authors’ judgement: Selection bias (biased allocation to interventions) due to inadequate generation of a randomised sequence.

Allocation concealment
Support for judgement: Describe the method used to conceal the allocation sequence in sufficient detail to determine whether intervention allocations could have been foreseen in advance of, or during, enrolment.
Review authors’ judgement: Selection bias (biased allocation to interventions) due to inadequate concealment of allocations prior to assignment.

Blinding of participants and personnel (assessments should be made for each main outcome, or class of outcomes)
Support for judgement: Describe all measures used, if any, to blind study participants and personnel from knowledge of which intervention a participant received. Provide any information relating to whether the intended blinding was effective.
Review authors’ judgement: Performance bias due to knowledge of the allocated interventions by participants and personnel during the study.

Blinding of outcome assessment (assessments should be made for each main outcome, or class of outcomes)
Support for judgement: Describe all measures used, if any, to blind outcome assessors from knowledge of which intervention a participant received. Provide any information relating to whether the intended blinding was effective.
Review authors’ judgement: Detection bias due to knowledge of the allocated interventions by outcome assessors.

Incomplete outcome data (assessments should be made for each main outcome, or class of outcomes)
Support for judgement: Describe the completeness of outcome data for each main outcome, including attrition and exclusions from the analysis. State whether attrition and exclusions were reported, the numbers in each intervention group (compared with total randomized participants), reasons for attrition/exclusions where reported, and any re-inclusions in analyses performed by the review authors.
Review authors’ judgement: Attrition bias due to amount, nature or handling of incomplete outcome data.

Selective outcome reporting
Support for judgement: State how the possibility of selective outcome reporting was examined by the review authors, and what was found.
Review authors’ judgement: Reporting bias due to selective outcome reporting.

Other sources of bias
Support for judgement: State any important concerns about bias not addressed in the other domains in the tool. If particular questions/entries were pre-specified in the review’s protocol, responses should be provided for each question/entry.
Review authors’ judgement: Bias due to problems not covered elsewhere in the table.