
Durham University

Department of Anthropology


Publication details for Dr Adetayo Kasim

Bigirumurame, Theophile & Kasim, Adetayo S. (2017). Can testing clinical significance reduce false positive rates in randomized controlled trials? A snap review. BMC Research Notes 10(1): 775.

Author(s) from Durham


Using the minimum clinically important difference in the hypothesis formulation of a superiority trial is similar in principle to the approach taken in non-inferiority or equivalence trials. In practice, however, most clinical trials are analysed by testing against a null hypothesis of zero difference. Since the minimum clinically important difference is pre-specified for the power calculation, it should also be incorporated into both the hypothesis test and the interpretation of the trial's findings.
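The contrast between the two hypothesis formulations can be sketched with a minimal example. The code below is illustrative only and is not taken from the paper: it assumes a two-arm trial with a binary outcome, hypothetical event counts, and a hypothetical minimum clinically important risk difference of 0.10, and compares a one-sided z-test against zero difference with the same test shifted to the MCID.

```python
from math import sqrt
from statistics import NormalDist

def z_test_risk_difference(p1, n1, p2, n2, delta=0.0):
    """One-sided z-test of H0: p1 - p2 <= delta vs H1: p1 - p2 > delta.

    delta=0 is the conventional superiority test; setting delta to the
    minimum clinically important difference tests clinical significance.
    """
    diff = p1 - p2
    se = sqrt(p1 * (1 - p1) / n1 + p2 * (1 - p2) / n2)
    z = (diff - delta) / se
    p_value = 1 - NormalDist().cdf(z)
    return z, p_value

# Hypothetical trial: 60/200 events in the treatment arm, 45/200 in control
p_trt, p_ctl = 60 / 200, 45 / 200

# Conventional analysis: test against zero clinical difference
z0, p0 = z_test_risk_difference(p_trt, 200, p_ctl, 200, delta=0.0)

# Same data tested against the MCID assumed for the power calculation
z_mcid, p_mcid = z_test_risk_difference(p_trt, 200, p_ctl, 200, delta=0.10)
```

With these illustrative numbers the observed risk difference (0.075) is statistically significant against zero but not against the MCID of 0.10, mirroring the gap between statistical and clinical significance that the review describes.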

We reviewed a set of 50 publications (25 with a binary outcome and 25 with a survival-time outcome). Of the 50 published trials with statistically significant results, only 20% were also clinically significant based on the minimum clinically important risk difference (or hazard ratio) used in their power calculations. This snap review suggests that most published trials with statistically significant results were unlikely to be clinically significant, which may partly explain the high false-positive rate associated with findings from superiority trials. Furthermore, none of the reviewed publications explicitly used the minimum clinically important difference when interpreting their findings. A systematic review is needed to critically appraise the impact of current practice on the false-positive rate in published trials with significant findings.