Critical Analysis of Clinical Research Articles: A Guide for Evaluation
ISSN: 2471-9919
Evidence based Medicine and Practice

  • Editorial   
  • Evidence Based Medicine and Practice, Vol 2(1)
  • DOI: 10.4172/2471-9919.1000e116


Leonardo Roever1*, Elmiro Santos Resende1, Angélica Lemos Debs Diniz1, Nilson Penha-Silva1, Giuseppe Biondi-Zoccai2,3, Antonio Casella-Filho4, Paulo Magno Martins Dourado4 and Antonio Carlos Palandri Chagas4,5
1Department of Clinical Research, Federal University of Uberlandia, Uberlândia, MG, Brazil
2Department of Medico-Surgical Sciences and Biotechnologies, Sapienza University of Rome, Latina, Italy
3Eleonora Lorillard Spencer Cenci Foundation, Rome, Italy
4Heart Institute (InCor), University of São Paulo Medical School, São Paulo, SP, Brazil
5Faculty of Medicine of ABC, Santo André, SP, Brazil
*Corresponding Author: Leonardo Roever, Department of Clinical Research, Av Pará, 1720-Bairro Umuarama, Uberlândia-MG-CEP 38400-902, Brazil, Tel: +553488039878, Email: [email protected]

Received Date: Dec 21, 2015 / Accepted Date: Dec 28, 2015 / Published Date: Jan 04, 2016


Critical appraisal is used to identify the strengths and weaknesses of an article in order to evaluate the usefulness and validity of its research results. The components of critical appraisal are the appropriateness of the study design for the research question, a thorough evaluation of the key methodological characteristics of the study, the adequacy of the statistical methods used and of their interpretation, potential conflicts of interest, and the relevance of the research to clinical practice. This review outlines the steps of such an appraisal and helps in identifying high-quality studies that can safely guide evidence-based clinical practice.

Keywords: Critical appraisal; Clinical practice; Decision making; Evidence-based practice


Health professionals need to apply the results of scientific research to the individual circumstances of their patients. To do so, they should be able to select and evaluate the scientific literature relevant to their field, understand the implications of the research for individual patients, take the patients' own preferences into account, and develop an appropriate management plan based on the combination of this information [1-8].

The selection and critical appraisal of the literature to assess the validity and relevance of a research paper are presented in Table 1 [7-27].

Section: Questions

Title:
  • Is the title clear, accurate and concise, avoiding unnecessary words and abbreviations?

Abstract:
  • Does the abstract state what was done, how it was done, the results obtained and their implications?

Definition of the study theme:
  • Has the problem been properly defined?
  • Is the problem linked to articles already published on the subject?
  • Is the research goal described and correctly defined?

Research design:
  • Is the study controlled? What is the hypothesis? Is the hypothesis clearly defined?
  • What kind of study is it?
  • Is the type of study appropriate to achieve the objective of the investigation? Are there inherent limitations in the employed method that may have affected the results?
  • Was the method correctly applied?
  • Were the ethical aspects properly handled?

Sample:
  • Is the target group appropriate to achieve the objective?
  • How was the sample selected?
  • Was the sample selected at random? Was the selection somehow flawed?
  • In the case of an experimental study or clinical trial, was there randomization?
  • Was the sample size discussed?
  • Was the sample size sufficient for the purposes of the study?

Measurement of information:
  • Are the indicators and procedures used the most appropriate ones?
  • Have the variables been properly defined?
  • Is the evaluation of the effect objective and appropriate to the study goals? Is the response variable properly used to measure the effect?
  • Were there preparation (pre-test) and data collection instruments (questionnaires, devices)?
  • Was there training of data collectors and examiners?
  • How reliable is the information?
  • May the observation process have affected the outcome?
  • In the case of an experimental study, was there adherence to treatment, and did the study use a double-blind design?

Statistical analysis:
  • Did the authors correctly show the sample size calculation?
  • Were the statistical techniques employed adequate to the problem?
  • Were they used in the right way?
  • Were confidence intervals calculated and the accuracy of the results reported?

Internal consistency of results:
  • Do figures and tables add up correctly?
  • Are the totals in one table the same as the totals in another? If they differ, are there explanations for the differences?

Interpretation of results:
  • Is there coherence between the methods of the original protocol and the methods actually used?
  • May the differences be simply due to "chance" (type I error, or false-positive results)? What alpha was used?
  • If there were no statistically significant differences, can a type II error (false negative) be present? What beta was employed? What was the study power (1 - beta)?
  • Were there multiple comparisons (i.e., multiple hypotheses) to test various effects? If so, was the alpha for each hypothesis fixed a priori?
  • Can the differences be attributed to selection bias (i.e., bias arising in the composition of the sample or the constitution of the study groups)?
  • Was there loss to follow-up? What was the non-response rate?
  • In the case of an experimental study, could Hawthorne or placebo effects explain the results? Was there co-intervention or contamination?
  • Can the differences be attributed to measurement bias (i.e., bias in how the data were obtained)?
  • Can the differences be attributed to confounding bias (i.e., differences explainable by other factors such as age, gender or another confounding variable) due to differences in the composition of the groups? In other words, were techniques to control confounding variables properly used?
  • Are the results discussed and compared with those of previous studies?
  • Can the results be generalized to populations different from the one studied here?
  • To what kinds of populations would you apply the results?
  • Does the study change your practice?

Conclusions:
  • Are the conclusions justified by the presented results?
  • Are there conclusions not based on the study data?
  • Have the authors commented on the study limitations?
  • Have the authors identified possible flaws, estimated their magnitude and pointed out their likely implications?
  • Are the findings relevant to the problem and to the study objectives?

Style:
  • Is the style clear and direct, without unnecessary repetition?
  • Are technical terms and language in general used correctly?

Bibliographic references:
  • Are they current and relevant?
  • Are they presented in the right way?

Conflicts of interest:
  • Are there any conflicts of interest?

Table 1: Critical appraisal of clinical research articles.
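The alpha, beta, power and sample-size questions in the checklist above can be made concrete with a standard normal-approximation sample-size calculation for comparing two group means. The function below is an illustrative sketch, not taken from the article; the function name and the 5% alpha / 80% power defaults are assumptions chosen for the example.

```python
from math import ceil
from statistics import NormalDist

def sample_size_per_group(effect_size, alpha=0.05, power=0.80):
    """Approximate n per group for a two-sample comparison of means.

    Normal approximation: n = 2 * (z_{1-alpha/2} + z_{1-beta})^2 / d^2,
    where d is the standardized effect size (Cohen's d).
    """
    z = NormalDist()                    # standard normal distribution
    z_alpha = z.inv_cdf(1 - alpha / 2)  # two-sided type I error threshold
    z_beta = z.inv_cdf(power)           # power = 1 - beta
    n = 2 * (z_alpha + z_beta) ** 2 / effect_size ** 2
    return ceil(n)

# A "medium" effect (d = 0.5) at alpha = 0.05 and 80% power needs
# roughly 63 participants per group under this approximation.
print(sample_size_per_group(0.5))

# Bonferroni adjustment for multiple comparisons: with 5 hypotheses,
# each is tested at alpha / 5, which raises the required sample size.
print(sample_size_per_group(0.5, alpha=0.05 / 5))
```

This also illustrates why an a priori alpha for each of several hypotheses matters: splitting alpha across comparisons makes each individual test stricter and the study correspondingly larger.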

Use of this guide may assist in the evaluation of studies and in the incorporation of their results into clinical practice.


Citation: Roever L, Resende ES, Diniz ALD, Penha-Silva N, Biondi-Zoccai G, et al. (2015) Critical Analysis of Clinical Research Articles: A Guide for Evaluation. Evidence Based Medicine and Practice 2: e116. doi: 10.4172/2471-9919.1000e116

Copyright: © 2016 Roever L. This is an open-access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.