Critical Appraisal of a Questionnaire Study
ISSN: 2471-9919
Evidence based Medicine and Practice

  • Editorial   
  • Evidence Based Medicine and Practice, Vol 2(1)
  • DOI: 10.4172/2471-9919.1000e110

Critical Appraisal of a Questionnaire Study

Leonardo Roever*
Department of Clinical Research, Federal University of Uberlandia, Uberlândia, Brazil
*Corresponding Author: Leonardo Roever, Department of Clinical Research, Av Pará, 1720 - Bairro Umuarama, Uberlândia-MG-CEP 38400-902, Brazil, Tel: +553488039878, Email: [email protected]

Received Date: Dec 11, 2015 / Accepted Date: Dec 18, 2015 / Published Date: Dec 25, 2015


Surveys and questionnaires are an essential component of many types of research: they gather information from a sample of participants by asking questions on a particular topic. Table 1 shows the checklist needed for a critical appraisal of a questionnaire study [1-4].

Appraisal Questions
What information did the researchers seek to obtain? Was there a clear research question, and was it important and sensible? Was a questionnaire the most appropriate research design for this question? If not, what design might have been more appropriate?
What was the sampling frame and was it sufficiently large and representative? Did all participants in the sample understand what was required of them, and did they attribute the same meaning to the terms in the questionnaire?
Were there any existing measures (questionnaires) that the researchers could have used? If so, why was a new one developed and was this justified?
Were the views of consumers sought about the design, distribution, and administration of the questionnaire?
What claims for reliability and validity have been made, and are these justified? Did the questions cover all relevant aspects of the problem in a non-threatening and non-directive way? Were open-ended (qualitative) and closed-ended (quantitative) questions used appropriately? Was a pilot version administered to participants representative of those in the sampling frame, and the instrument modified accordingly?
What claims for validity have been made, and are they justified? (In other words, what evidence is there that the instrument measures what it sets out to measure?)
What claims for reliability have been made, and are they justified? (In other words, what evidence is there that the instrument provides stable responses over time and between researchers?)
Was the title of the questionnaire appropriate and if not, what were its limitations?
What formats did the questionnaire take, and were open and closed questions used appropriately?
Were easy, non-threatening questions placed at the beginning of the measure and sensitive ones near the end?
Was the questionnaire kept as brief as the study allowed? What was the response rate and have non-responders been accounted for?
Did the questions make sense, and could the participants in the sample understand them? Were any questions ambiguous or overly complicated?
Did the questionnaire contain adequate instructions for completion, e.g. example answers, or an explanation of whether a ticked or written response was required?
Were participants told how to return the questionnaire once completed?
Did the questionnaire contain an explanation of the research, a summary of what would happen to the data, and a thank you message?
Was the questionnaire adequately piloted in terms of the method and means of administration, on people who were representative of the study population?
How was the piloting exercise undertaken? What details are given?
In what ways was the definitive instrument changed as a result of piloting?
What was the sampling frame for the definitive study and was it sufficiently large and representative?
Was the instrument suitable for all participants and potential participants? In particular, did it take account of the likely range of physical/mental/cognitive abilities; language/literacy, understanding of numbers/scaling, and perceived threat of questions or questioner?
How was the questionnaire distributed?
How was the questionnaire administered?
Were the response rates reported fully, including details of participants who were unsuitable for the research or refused to take part?
Have any potential response biases been discussed?
What sort of analysis was carried out and was this appropriate? (e.g. correct statistical tests for quantitative answers, qualitative analysis for open-ended questions)
What measures were in place to maintain the accuracy of the data, and were these adequate?
Is there any evidence of data dredging—that is, analyses that were not hypothesis-driven?
What were the results and were all relevant data reported?
Are quantitative results definitive (significant), and are relevant non-significant results also reported?
Have qualitative results been adequately interpreted (e.g. using an explicit theoretical framework), and have any quotes been properly justified and contextualized?
What do the results mean and have the researchers drawn an appropriate link between the data and their conclusions?
Have the findings been placed within the wider body of knowledge in the field (e.g. via a comprehensive literature review), and are any recommendations justified?
Can the results be applied to your organization?
Are conflicts of interest declared?
Rate the overall methodological quality of the study, using the following as a guide:
High quality (++): Majority of criteria met. Little or no risk of bias.
Acceptable (+): Most criteria met. Some flaws in the study with an associated risk of bias.
Low quality (-): Either most criteria not met, or significant flaws relating to key aspects of study design.
Reject (0): Poor quality study with significant flaws. Wrong study type. Not relevant to guideline.

Table 1: Critical appraisal of a questionnaire study.

Using this checklist can improve the evaluation of a questionnaire study.
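Several checklist items ask the appraiser to judge quantitative claims, for example the response rate and the reliability of the instrument. As an illustrative sketch only (the data and function names below are hypothetical, not from the article), two such quantities can be recomputed directly: the response rate, and Cronbach's alpha, a widely used estimate of internal-consistency reliability.

```python
# Illustrative sketch: two quantities an appraiser might verify when reading
# a questionnaire study. All data below are hypothetical.
from statistics import pvariance


def response_rate(returned, distributed):
    """Fraction of distributed questionnaires that were returned."""
    return returned / distributed


def cronbach_alpha(items):
    """Cronbach's alpha for k questionnaire items.

    items: list of k lists, each holding one item's scores for n respondents.
    alpha = k/(k-1) * (1 - sum of item variances / variance of total scores)
    """
    k = len(items)
    totals = [sum(scores) for scores in zip(*items)]  # per-respondent totals
    item_var = sum(pvariance(col) for col in items)
    total_var = pvariance(totals)
    return k / (k - 1) * (1 - item_var / total_var)


# Hypothetical pilot data: 3 items, 5 respondents, 5-point scale
scores = [
    [4, 5, 3, 4, 2],
    [4, 4, 3, 5, 2],
    [3, 5, 4, 4, 1],
]
print(f"Response rate: {response_rate(6, 8):.0%}")
print(f"Cronbach's alpha: {cronbach_alpha(scores):.2f}")
```

A reported alpha of 0.7 or above is commonly taken to support a reliability claim, though the threshold is a convention, not a rule; an appraiser should also check that the authors report how the statistic was obtained.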


Citation: Roever L (2015) Critical Appraisal of a Questionnaire Study. Evidence Based Medicine and Practice 1: e110. doi: 10.4172/2471-9919.1000e110

Copyright: © 2016 Roever L. This is an open-access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.
