
Qualitative Research Resources: Assessing Qualitative Research

Created by health science librarians.


  • What is Qualitative Research?
  • Qualitative Research Basics
  • Special Topics
  • Training Opportunities: UNC & Beyond
  • Help at UNC
  • Qualitative Software for Coding/Analysis
  • Software for Audio, Video, Online Surveys
  • Finding Qualitative Studies

About this Page

LEGEND (Let Evidence Guide Every New Decision) assessment tools from Cincinnati Children's Hospital; the EQUATOR Network (Enhancing the Quality and Transparency of Health Research); and other tools for assessing qualitative research.

  • Writing Up Your Research
  • Integrating Qualitative Research into Systematic Reviews
  • Publishing Qualitative Research
  • Presenting Qualitative Research
  • Qualitative & Libraries: a few gems
  • Data Repositories

Why is this information important?

  • Qualitative research typically focuses on collecting very detailed information on a few cases and often addresses meaning, rather than objectively identifiable factors.
  • This means that typical markers of research quality for quantitative studies, such as validity and reliability, cannot be used to assess qualitative research.

On this page you'll find:

The resources on this page point to alternative criteria and tools you can use to assess qualitative research.

Evidence Evaluation Tools and Resources

The Cincinnati Children's Hospital LEGEND website offers resources for evaluating health sciences research across a variety of designs and study types, including an Evidence Appraisal form for qualitative research and forms for mixed methods studies across a variety of clinical question domains (see the Domain of Clinical Questions table). The site includes information on the following:

  • Evaluating the Evidence Algorithm (pdf download)
  • Evidence Appraisal Forms (see Domain of Clinical Questions table)
  • Table of Evidence Levels (pdf download)
  • Grading a Body of Evidence (pdf download)
  • Judging the Strength of a Recommendation (pdf download)
  • LEGEND Glossary (pdf download)
  • EQUATOR: Qualitative Research Reporting Guidelines
  • EQUATOR Network Home

The EQUATOR Network is an ‘umbrella’ organisation that brings together researchers, medical journal editors, peer reviewers, developers of reporting guidelines, research funding bodies and other collaborators with mutual interest in improving the quality of research publications and of research itself. 

The EQUATOR Library contains a comprehensive searchable database of reporting guidelines for many study types, including qualitative research, as well as links to other resources relevant to research reporting:

  • Library for health research reporting: provides an up-to-date collection of guidelines and policy documents related to health research reporting. These are aimed mainly at authors of research articles, journal editors, peer reviewers and reporting guideline developers.
  • Toolkits to support writing research, using guidelines, teaching research skills, selecting the appropriate reporting guideline
  • Courses and events
  • Librarian Network

Also see the Articles box below; some of the articles include checklists or tools.

Most checklists and tools are meant to help you think critically and systematically when appraising research. To use them appropriately, you should generally consult accompanying materials such as manuals, handbooks, and the cited literature. A broad understanding of the variety and complexity of qualitative research is also generally necessary, along with an understanding of its philosophical perspectives and knowledge of specific qualitative research methods and their implementation.

  • CASP (Critical Appraisal Skills Programme) Tool for Evaluating Qualitative Research, 2018
  • CASP Knowledge Hub: includes critical appraisal checklists for key study designs; a glossary of key research terms; key links related to evidence-based healthcare, statistics, and research; and a bibliography of articles and research papers about CASP and other critical appraisal tools and approaches, 1993-2012.
  • JBI (Joanna Briggs Institute) Manual for Evidence Synthesis (2024). See Chapter 2: Systematic reviews of qualitative evidence, which includes Appendix 2.1: Critical Appraisal Checklist for Qualitative Research; Appendix 2.2: Discussion of qualitative critical appraisal criteria; and Appendix 2.3: Qualitative data extraction tool. Also see Chapter 8: Mixed methods systematic reviews. Citation: Aromataris E, Munn Z (Editors). JBI Manual for Evidence Synthesis. JBI, 2020. Available from https://synthesismanual.jbi.global. https://doi.org/10.46658/JBIMES-20-01
  • McGill Mixed Methods Appraisal Tool (MMAT) public wiki: The MMAT is intended to be used as a checklist for concomitantly appraising and/or describing studies included in systematic mixed studies reviews (reviews including original qualitative, quantitative, and mixed methods studies). The MMAT was first published in 2009 and has since been validated in several studies testing its interrater reliability, usability, and content validity. The latest version of the MMAT was released in 2018.
  • McGill Mixed Methods Appraisal Tool (MMAT) 2018 User Guide: see the full public wiki site (linked above) for additional information, including FAQs, references and resources, earlier versions, and more.
  • McMaster University Critical Review Form & Guidelines for Qualitative Studies v2.0: includes links to the Qualitative Review Form (v2.0) and accompanying Guidelines from the Evidence Based Practice Research Group of McMaster University's School of Rehabilitation Science. Links are also provided for Spanish, German, and French versions.
  • NICE Quality Appraisal Checklist-Qualitative Studies, 3rd ed., 2012, from the UK National Institute for Health and Care Excellence: includes the checklist and notes on its use, from Methods for the Development of NICE Public Health Guidance, 3rd edition. © National Institute for Health and Clinical Excellence, 2006 (updated 2012). All rights reserved. This material may be freely reproduced for educational and not-for-profit purposes; no reproduction by or for commercial organisations, or for commercial purposes, is allowed without the express written permission of the Institute.
  • NICE Quality Appraisal Checklist-Qualitative Studies, 3rd ed. (PDF download): Appendix H checklist and notes, under the same copyright terms as above.
  • Qualitative Research Review Guidelines, RATS
  • SBU (Swedish Agency for Health Technology Assessment and Assessment of Social Services), Evaluation and Synthesis of Studies Using Qualitative Methods of Analysis, 2016: Appendix 2 (at the end of the document) contains a checklist for evaluating qualitative research. Citation: SBU. Evaluation and synthesis of studies using qualitative methods of analysis. Stockholm: Swedish Agency for Health Technology Assessment and Assessment of Social Services (SBU); 2016.
  • Users' Guides to the Medical Literature: A Manual for Evidence-Based Clinical Practice, 3rd ed (JAMA Evidence) Chapter 13.5 Qualitative Research
  • Slides: Appraising Qualitative Research, from Users' Guides to the Medical Literature, 3rd edition. Click the 'Related Content' tab to find the link to download the Appraising Qualitative Research slides.

These articles address a range of issues related to understanding and evaluating qualitative research; some  include checklists or tools.

Clissett, P. (2008) "Evaluating Qualitative Research." Journal of Orthopaedic Nursing 12: 99-105.

Cohen, Deborah J. and Benjamin F. Crabtree. (2008) "Evidence for Qualitative Research in Health Care: Controversies and Recommendations." Annals of Family Medicine 6(4): 331-339.

  • Supplemental Appendix 1. Search Strategy for Criteria for Qualitative Research in Health Care
  • Supplemental Appendix 2. Publications Analyzed: Health Care Journals and Frequently Referenced Books and Book Chapters (1980-2005) That Posited Criteria for "Good" Qualitative Research.

Dixon-Woods, M., R.L. Shaw, S. Agarwal, and J.A. Smith. (2004) "The Problem of Appraising Qualitative Research." Qual Saf Health Care 13: 223-225.

Fossey, E., C. Harvey, F. McDermott, and L. Davidson. (2002) "Understanding and Evaluating Qualitative Research." Australian and New Zealand Journal of Psychiatry 36(6): 717-732.

Hammarberg, K., M. Kirkman, S. de Lacey. (2016) "Qualitative Research Methods: When to Use and How to Judge them." Human Reproduction 31 (3): 498-501.

Lee, J. (2014) "Genre-Appropriate Judgments of Qualitative Research." Philosophy of the Social Sciences 44(3): 316-348. (Presents three strategies for evaluating qualitative research: two that the author finds inadequate and one that he considers more appropriate and accurate.)

Majid, Umair and Meredith Vanstone. (2018) "Appraising Qualitative Research for Evidence Syntheses: A Compendium of Quality Appraisal Tools." Qualitative Health Research 28(13): 2115-2131. PMID: 30047306. DOI: 10.1177/1049732318785358

Meyrick, Jane. (2006) "What is Good Qualitative Research? A First Step towards a Comprehensive Approach to Judging Rigour/Quality." Journal of Health Psychology 11(5): 799-808.

Miles, M.B., A.M. Huberman, and J. Saldana. (2014) Qualitative Data Analysis. Thousand Oaks, California: SAGE Publications, Inc. Chapter 11: Drawing and Verifying Conclusions. Check Availability of Print Book.

Morse, J.M. (1997) "'Perfectly Healthy, but Dead': The Myth of Inter-Rater Reliability." Qualitative Health Research 7(4): 445-447.

O'Brien, B.C., I.B. Harris, T.J. Beckman, et al. (2014) "Standards for Reporting Qualitative Research: A Synthesis of Recommendations." Acad Med 89(9): 1245-1251. DOI: 10.1097/ACM.0000000000000388. PMID: 24979285

The Standards for Reporting Qualitative Research (SRQR) consists of 21 items. The authors define and explain key elements of each item and provide examples from recently published articles to illustrate ways in which the standards can be met. The SRQR aims to improve the transparency of all aspects of qualitative research by providing clear standards for reporting qualitative research. These standards will assist authors during manuscript preparation, editors and reviewers in evaluating a manuscript for potential publication, and readers when critically appraising, applying, and synthesizing study findings.

Ryan, Frances, Michael Coughlin, and Patricia Cronin. (2007) "Step by Step Guide to Critiquing Research: Part 2, Qualitative Research." British Journal of Nursing 16(12): 738-744.

Stige, B, K. Malterud, and T. Midtgarden. (2009) "Toward an Agenda for Evaluation of Qualitative Research." Qualitative Health Research 19(10): 1504-1516.

Tong, Allison and Mary Amanda Dew. (2016, Epub ahead of print) "Qualitative Research in Transplantation: Ensuring Relevance and Rigor." Transplantation.

Tong, Allison, Peter Sainsbury, and Jonathan Craig. (2007) "Consolidated Criteria for Reporting Qualitative Research (COREQ): A 32-item Checklist for Interviews and Focus Groups." International Journal for Quality in Health Care 19(6): 349-357. https://doi.org/10.1093/intqhc/mzm042

The criteria included in COREQ, a 32-item checklist, can help researchers to report important aspects of the research team, study methods, context of the study, findings, analysis and interpretations. Items most frequently included in the checklists related to sampling method, setting for data collection, method of data collection, respondent validation of findings, method of recording data, description of the derivation of themes and inclusion of supporting quotations. We grouped all items into three domains: (i) research team and reflexivity, (ii) study design and (iii) data analysis and reporting.

Tracy, Sarah. (2010) "Qualitative Quality: Eight 'Big-Tent' Criteria for Excellent Qualitative Research." Qualitative Inquiry 16(10): 837-851.

  • Critical Appraisal Skills Programme
  • IMPSCI (Implementation Science) Tutorials
  • Johns Hopkins: Why Mixed Methods?
  • Measuring, Learning, and Evaluation Project for the Urban Reproductive Health Initiative This project ran 2010-2015. Some project resources are still available.
  • NIH OBSSR (Office of Behavioral & Social Sciences Research) Best Practices for Mixed Methods Research in Health Sciences, 2011. The OBSSR commissioned a team in 2010 to develop a resource providing guidance to NIH investigators on how to rigorously develop and evaluate mixed methods research applications. Authors: John W. Creswell, Ph.D. (University of Nebraska-Lincoln); Ann Carroll Klassen, Ph.D. (Drexel University); Vicki L. Plano Clark, Ph.D. (University of Nebraska-Lincoln); Katherine Clegg Smith, Ph.D. (Johns Hopkins University); with the assistance of a specially appointed working group.
  • NIH OBSSR eSource: Introductory Social and Behavioral Science Training Materials. eSource is a collection of online chapters that provide an introduction to selected behavioral and social science research approaches, including theory development and testing, survey methods, measurement, and study design. (The link on the OBSSR website is not working; see https://obssr.od.nih.gov/about-us/publications/. Formerly at https://obssr-archive.od.nih.gov/pdf/Qualitative.PDF.)
  • NSF Workshop on Interdisciplinary Standards for Systematic Qualitative Research. On May 19-20, 2005, a workshop on Interdisciplinary Standards for Systematic Qualitative Research was held at the National Science Foundation (NSF) in Arlington, Virginia, cofunded by a grant from four NSF programs: Cultural Anthropology, Law and Social Science, Political Science, and Sociology. It is well recognized that each of the four disciplines has a different research design and evaluation culture, as well as considerable variability in the emphasis on interpretation and explanation, commitment to constructivist and positivist epistemologies, and the degree of perceived consensus about the value and prominence of qualitative research methods. Within this multidisciplinary and multimethods context, twenty-four scholars from the four disciplines were charged to (1) articulate the standards used in their particular field to ensure rigor across the range of qualitative methodological approaches; (2) identify common criteria shared across the four disciplines for designing and evaluating research proposals and fostering multidisciplinary collaborations; and (3) develop an agenda for strengthening the tools, training, data, research design, and infrastructure for research using qualitative approaches.
  • Technical Note: Mixed-Methods Evaluations (USAID) This open source resource from USAID (2013) discusses the mixing of qualitative and quantitative methods in mixed methods research.
  • Qualitative Research Methods: A Data Collector's Field Guide (2005). From FHI 360/Family Health International with support from USAID. Natasha Mack, Cynthia Woodsong, Kathleen M. MacQueen, Greg Guest, and Emily Namey. The guide is divided into five modules: Module 1 – Qualitative Research Methods Overview; Module 2 – Participant Observation; Module 3 – In-Depth Interviews; Module 4 – Focus Groups; Module 5 – Data Documentation and Management.
  • Robert Wood Johnson Foundation Guidelines for Designing, Analyzing, and Reporting Qualitative Research
  • Robert Wood Johnson Foundation: Qualitative Research Guidelines Project

Qualitative Literacy


Small, Mario Luis, and Jessica McCrory Calarco. (2022) Qualitative Literacy: A Guide to Evaluating Ethnographic and Interview Research. Oakland: University of California Press. Not a checklist, this is a thorough discussion of assessing the scientific merit of a study based on in-depth interviews or participant observation, first by assessing exposure (e.g., time spent in the field). Then, assuming sufficient exposure, the authors propose looking for signs of:

  • cognitive empathy: how those interviewed or observed perceive themselves and their social world, the meaning they attach to those perceptions, and the motives they express for their actions
  • palpability: the evidence would be palpable or concrete rather than abstract or general
  • heterogeneity: showing diversity across people, over time, among situations, or between contexts 
  • follow-up: responding to the unexpected; following up on unanticipated statements or observations 
  • self-awareness: showing that the author is explicitly aware of the impact of their presence on who was accessed and what they disclosed

Critically appraising qualitative research

Ayelet Kuper, assistant professor,1 Lorelei Lingard, associate professor,2 and Wendy Levinson, Sir John and Lady Eaton professor and chair3

1 Department of Medicine, Sunnybrook Health Sciences Centre, and Wilson Centre for Research in Education, University of Toronto, 2075 Bayview Avenue, Room HG 08, Toronto, ON, Canada M4N 3M5
2 Department of Paediatrics and Wilson Centre for Research in Education, University of Toronto and SickKids Learning Institute; BMO Financial Group Professor in Health Professions Education Research, University Health Network, 200 Elizabeth Street, Eaton South 1-565, Toronto
3 Department of Medicine, Sunnybrook Health Sciences Centre

Correspondence to: A Kuper ayelet94{at}post.harvard.edu

Six key questions will help readers to assess qualitative research

Summary points

Appraising qualitative research is different from appraising quantitative research

Qualitative research papers should show appropriate sampling, data collection, and data analysis

Transferability of qualitative research depends on context and may be enhanced by using theory

Ethics in qualitative research goes beyond review boards’ requirements to involve complex issues of confidentiality, reflexivity, and power

Over the past decade, readers of medical journals have gained skills in critically appraising studies to determine whether the results can be trusted and applied to their own practice settings. Criteria have been designed to assess studies that use quantitative methods, and these are now in common use.

In this article we offer guidance for readers on how to assess a study that uses qualitative research methods by providing six key questions to ask when reading qualitative research (box 1). However, the thorough assessment of qualitative research is an interpretive act and requires informed reflective thought rather than the simple application of a scoring system.

Box 1 Key questions to ask when reading qualitative research studies

Was the sample used in the study appropriate to its research question?

Were the data collected appropriately?

Were the data analysed appropriately?

Can I transfer the results of this study to my own setting?

Does the study adequately address potential ethical issues, including reflexivity?

Overall: is what the researchers did clear?

One of the critical decisions in a qualitative study is whom or what to include in the sample—whom to interview, whom to observe, what texts to analyse. An understanding that qualitative research is based in experience and in the construction of meaning, combined with the specific research question, should guide the sampling process. For example, a study of the experience of survivors of domestic violence that examined their reasons for not seeking help from healthcare providers might focus on interviewing a …


How to use and assess qualitative research methods

Loraine Busetto (ORCID: orcid.org/0000-0002-9228-7875), Wolfgang Wick, and Christoph Gumbinger

Neurological Research and Practice, volume 2, article number 14 (2020). Open access; published 27 May 2020.

This paper aims to provide an overview of the use and assessment of qualitative research methods in the health sciences. Qualitative research can be defined as the study of the nature of phenomena and is especially appropriate for answering questions of why something is (not) observed, assessing complex multi-component interventions, and focussing on intervention improvement. The most common methods of data collection are document study, (non-) participant observations, semi-structured interviews and focus groups. For data analysis, field-notes and audio-recordings are transcribed into protocols and transcripts, and coded using qualitative data management software. Criteria such as checklists, reflexivity, sampling strategies, piloting, co-coding, member-checking and stakeholder involvement can be used to enhance and assess the quality of the research conducted. Using qualitative in addition to quantitative designs will equip us with better tools to address a greater range of research problems, and to fill in blind spots in current neurological research and practice.

The aim of this paper is to provide an overview of qualitative research methods, including hands-on information on how they can be used, reported and assessed. This article is intended for beginning qualitative researchers in the health sciences as well as experienced quantitative researchers who wish to broaden their understanding of qualitative research.

What is qualitative research?

Qualitative research is defined as "the study of the nature of phenomena", including "their quality, different manifestations, the context in which they appear or the perspectives from which they can be perceived", but excluding "their range, frequency and place in an objectively determined chain of cause and effect" [1]. This formal definition can be complemented with a more pragmatic rule of thumb: qualitative research generally includes data in the form of words rather than numbers [2].

Why conduct qualitative research?

Because some research questions cannot be answered using (only) quantitative methods. For example, one Australian study addressed the issue of why patients from Aboriginal communities often present late or not at all to specialist services offered by tertiary care hospitals. Using qualitative interviews with patients and staff, it found one of the most significant access barriers to be transportation problems, including some towns and communities simply not having a bus service to the hospital [ 3 ]. A quantitative study could have measured the number of patients over time or even looked at possible explanatory factors – but only those previously known or suspected to be of relevance. To discover reasons for observed patterns, especially the invisible or surprising ones, qualitative designs are needed.

While qualitative research is common in other fields, it is still relatively underrepresented in health services research. The latter field is more traditionally rooted in the evidence-based-medicine paradigm, as seen in "research that involves testing the effectiveness of various strategies to achieve changes in clinical practice, preferably applying randomised controlled trial study designs (...)" [4]. This focus on quantitative research and specifically randomised controlled trials (RCTs) is visible in the idea of a hierarchy of research evidence, which assumes that some research designs are objectively better than others, and that choosing a "lesser" design is only acceptable when the better ones are not practically or ethically feasible [5, 6]. Others, however, argue that an objective hierarchy does not exist, and that, instead, the research design and methods should be chosen to fit the specific research question at hand – "questions before methods" [2, 7, 8, 9]. This means that even when an RCT is possible, some research problems require a different design that is better suited to addressing them. Arguing in JAMA, Berwick uses the example of rapid response teams in hospitals, which he describes as "a complex, multicomponent intervention – essentially a process of social change" susceptible to a range of different context factors including leadership or organisation history. According to him, "[in] such complex terrain, the RCT is an impoverished way to learn. Critics who use it as a truth standard in this context are incorrect" [8]. Instead of limiting oneself to RCTs, Berwick recommends embracing a wider range of methods, including qualitative ones, which for "these specific applications, (...) are not compromises in learning how to improve; they are superior" [8].

Research problems that can be approached particularly well using qualitative methods include assessing complex multi-component interventions or systems (of change), addressing questions beyond “what works”, towards “what works for whom when, how and why”, and focussing on intervention improvement rather than accreditation [ 7 , 9 , 10 , 11 , 12 ]. Using qualitative methods can also help shed light on the “softer” side of medical treatment. For example, while quantitative trials can measure the costs and benefits of neuro-oncological treatment in terms of survival rates or adverse effects, qualitative research can help provide a better understanding of patient or caregiver stress, visibility of illness or out-of-pocket expenses.

How to conduct qualitative research?

Given that qualitative research is characterised by flexibility, openness and responsivity to context, the steps of data collection and analysis are not as separate and consecutive as they tend to be in quantitative research [13, 14]. As Fossey puts it: "sampling, data collection, analysis and interpretation are related to each other in a cyclical (iterative) manner, rather than following one after another in a stepwise approach" [15]. The researcher can make educated decisions with regard to the choice of methods, how they are implemented, and to which and how many units they are applied [13]. As shown in Fig. 1, this can involve several back-and-forth steps between data collection and analysis, where new insights and experiences can lead to adaptation and expansion of the original plan. Some insights may also necessitate a revision of the research question and/or the research design as a whole. The process ends when saturation is achieved, i.e. when no relevant new information can be found (see also below: sampling and saturation). For reasons of transparency, it is essential for all decisions, as well as the underlying reasoning, to be well documented.

Figure 1: Iterative research process

While it is not always explicitly addressed, qualitative methods reflect a different underlying research paradigm than quantitative research (e.g. constructivism or interpretivism as opposed to positivism). The choice of methods can be based on the respective underlying substantive theory or theoretical framework used by the researcher [ 2 ].

Data collection

The methods of qualitative data collection most commonly used in health research are document study, observations, semi-structured interviews and focus groups [ 1 , 14 , 16 , 17 ].

Document study

Document study (also called document analysis) refers to the review by the researcher of written materials [ 14 ]. These can include personal and non-personal documents such as archives, annual reports, guidelines, policy documents, diaries or letters.

Observations

Observations are particularly useful to gain insights into a certain setting and actual behaviour – as opposed to reported behaviour or opinions [ 13 ]. Qualitative observations can be either participant or non-participant in nature. In participant observations, the observer is part of the observed setting, for example a nurse working in an intensive care unit [ 18 ]. In non-participant observations, the observer is “on the outside looking in”, i.e. present in but not part of the situation, trying not to influence the setting by their presence. Observations can be planned (e.g. for 3 h during the day or night shift) or ad hoc (e.g. as soon as a stroke patient arrives at the emergency room). During the observation, the observer takes notes on everything or certain pre-determined parts of what is happening around them, for example focusing on physician-patient interactions or communication between different professional groups. Written notes can be taken during or after the observations, depending on feasibility (which is usually lower during participant observations) and acceptability (e.g. when the observer is perceived to be judging the observed). Afterwards, these field notes are transcribed into observation protocols. If more than one observer was involved, field notes are taken independently, but notes can be consolidated into one protocol after discussions. Advantages of conducting observations include minimising the distance between the researcher and the researched, the potential discovery of topics that the researcher did not realise were relevant and gaining deeper insights into the real-world dimensions of the research problem at hand [ 18 ].

Semi-structured interviews

Hijmans & Kuyper describe qualitative interviews as "an exchange with an informal character, a conversation with a goal" [19]. Interviews are used to gain insights into a person's subjective experiences, opinions and motivations – as opposed to facts or behaviours [13]. Interviews can be distinguished by the degree to which they are structured (i.e. a questionnaire), open (e.g. free conversation or autobiographical interviews) or semi-structured [2, 13]. Semi-structured interviews are characterized by open-ended questions and the use of an interview guide (or topic guide/list) in which the broad areas of interest, sometimes including sub-questions, are defined [19]. The pre-defined topics in the interview guide can be derived from the literature, previous research or a preliminary method of data collection, e.g. document study or observations. The topic list is usually adapted and improved at the start of the data collection process as the interviewer learns more about the field [20]. Across interviews the focus on the different (blocks of) questions may differ, and some questions may be skipped altogether (e.g. if the interviewee is not able or willing to answer them, or out of concern for the total length of the interview) [20]. Qualitative interviews are usually not conducted in written format, as this impedes the interactive component of the method [20]. In comparison to written surveys, qualitative interviews have the advantage of being interactive and allowing unexpected topics to emerge and be taken up by the researcher. This can also help overcome a provider- or researcher-centred bias often found in written surveys, which by nature can only measure what is already known or expected to be of relevance to the researcher. Interviews can be audio- or video-taped, but sometimes it is only feasible or acceptable for the interviewer to take written notes [14, 16, 20].
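Because an interview guide is essentially a structured list of broad topics with optional sub-questions, it can be sketched as plain data. The following minimal Python sketch is ours, not the article's, and all topics and questions in it are hypothetical; it only illustrates the "broad areas plus sub-questions" structure described above.

    # A minimal, hypothetical semi-structured interview guide: broad areas of
    # interest with optional sub-questions. Blocks can be reordered or skipped
    # during the interview, and the guide is revised as fieldwork proceeds.
    topic_guide = {
        "arrival and admission": [
            "Walk me through what happens when a stroke patient arrives.",
            "Who is notified, and how?",
        ],
        "delays": [
            "Can you recall a recent case where treatment was delayed?",
            "What do you think caused the delay?",
        ],
        "SOP awareness": [
            "How do you know what the current procedure requires?",
        ],
    }

    # Print the guide as an interviewer's checklist.
    for topic, questions in topic_guide.items():
        print(f"-- {topic} --")
        for q in questions:
            print(f"   {q}")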

Focus groups

Focus groups are group interviews to explore participants’ expertise and experiences, including explorations of how and why people behave in certain ways [ 1 ]. Focus groups usually consist of 6–8 people and are led by an experienced moderator following a topic guide or “script” [ 21 ]. They can involve an observer who takes note of the non-verbal aspects of the situation, possibly using an observation guide [ 21 ]. Depending on researchers’ and participants’ preferences, the discussions can be audio- or video-taped and transcribed afterwards [ 21 ]. Focus groups are useful for bringing together homogeneous (to a lesser extent heterogeneous) groups of participants with relevant expertise and experience on a given topic on which they can share detailed information [ 21 ]. Focus groups are a relatively easy, fast and inexpensive method to gain access to information on interactions in a given group, i.e. “the sharing and comparing” among participants [ 21 ]. Disadvantages include less control over the process and a lesser extent to which each individual may participate. Moreover, focus group moderators need experience, as do those tasked with the analysis of the resulting data. Focus groups can be less appropriate for discussing sensitive topics that participants might be reluctant to disclose in a group setting [ 13 ]. Moreover, attention must be paid to the emergence of “groupthink” as well as possible power dynamics within the group, e.g. when patients are awed or intimidated by health professionals.

Choosing the “right” method

As explained above, the school of thought underlying qualitative research assumes no objective hierarchy of evidence and methods. This means that each choice of single or combined methods has to be based on the research question that needs to be answered and a critical assessment with regard to whether or to what extent the chosen method can accomplish this – i.e. the “fit” between question and method [ 14 ]. It is necessary for these decisions to be documented when they are being made, and to be critically discussed when reporting methods and results.

Let us assume that our research aim is to examine the (clinical) processes around acute endovascular treatment (EVT), from the patient's arrival at the emergency room to recanalization, with the aim of identifying possible causes for delay and/or other causes for sub-optimal treatment outcome. As a first step, we could conduct a document study of the relevant standard operating procedures (SOPs) for this phase of care – are they up-to-date and in line with current guidelines? Do they contain any mistakes, irregularities or uncertainties that could cause delays or other problems? Regardless of the answers to these questions, the results have to be interpreted based on what they are: a written outline of what care processes in this hospital should look like.

If we want to know what they actually look like in practice, we can conduct observations of the processes described in the SOPs. These results can (and should) be analysed in themselves, but also in comparison to the results of the document analysis, especially as regards relevant discrepancies. Do the SOPs outline specific tests for which no equipment can be observed, or tasks to be performed by specialized nurses who are not present during the observation? It might also be possible that the written SOP is outdated, but the actual care provided is in line with current best practice.

In order to find out why these discrepancies exist, it can be useful to conduct interviews. Are the physicians simply not aware of the SOPs (because their existence is limited to the hospital's intranet), do they actively disagree with them, or does the infrastructure make it impossible to provide the care as described? Another rationale for adding interviews is that some situations (or all of their possible variations for different patient groups or the day, night or weekend shift) cannot practically or ethically be observed. In this case, it is possible to ask those involved to report on their actions – being aware that this is not the same as the actual observation. A senior physician's or hospital manager's description of certain situations might differ from a nurse's or junior physician's, maybe because they intentionally misrepresent facts or maybe because different aspects of the process are visible or important to them. In some cases, it can also be relevant to consider to whom the interviewee is disclosing this information – someone they trust, someone they are otherwise not connected to, or someone they suspect or are aware of being in a potentially "dangerous" power relationship to them.

Lastly, a focus group could be conducted with representatives of the relevant professional groups to explore how and why exactly they provide care around EVT. The discussion might reveal discrepancies (between SOPs and actual care, or between different physicians) and motivations to the researchers, as well as to the focus group members, that they might not have been aware of themselves. For the focus group to deliver relevant information, attention has to be paid to its composition and conduct, for example to make sure that all participants feel safe to disclose sensitive or potentially problematic information, and that the discussion is not dominated by (senior) physicians only. The resulting combination of data collection methods is shown in Fig. 2.

Figure 2: Possible combination of data collection methods. (Icons: "Book" by Serhii Smirnov; "Interview" by Adrien Coquet, FR; "Magnifying Glass" by anggun, ID; "Business communication" by Vectors Market; all from the Noun Project.)

The combination of multiple data sources as described in this example can be referred to as "triangulation", in which multiple measurements are carried out from different angles to achieve a more comprehensive understanding of the phenomenon under study [22, 23].

Data analysis

To analyse the data collected through observations, interviews and focus groups, these need to be transcribed into protocols and transcripts (see Fig. 3). Interviews and focus groups can be transcribed verbatim, with or without annotations for behaviour (e.g. laughing, crying, pausing) and with or without phonetic transcription of dialects and filler words, depending on what is expected or known to be relevant for the analysis. In the next step, the protocols and transcripts are coded, that is, marked (or tagged, labelled) with one or more short descriptors of the content of a sentence or paragraph [2, 15, 23]. Jansen describes coding as "connecting the raw data with 'theoretical' terms" [20]. In a more practical sense, coding makes raw data sortable. This makes it possible to extract and examine all segments describing, say, a tele-neurology consultation from multiple data sources (e.g. SOPs, emergency room observations, staff and patient interviews). In a process of synthesis and abstraction, the codes are then grouped, summarised and/or categorised [15, 20]. The end product of the coding or analysis process is a descriptive theory of the behavioural pattern under investigation [20]. The coding process is performed using qualitative data management software, the most common being NVivo, MAXQDA and ATLAS.ti. It should be noted that these are data management tools which support the analysis performed by the researcher(s) [14].

Figure 3: From data collection to data analysis. (Icons: see Fig. 2, plus "Speech to text" by Trevor Dsouza; "Field Notes" by Mike O'Brien, US; "Voice Record" by ProSymbols, US; "Inspection" by Made, AU; "Cloud" by Graphic Tigers; all from the Noun Project.)
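To make "coding makes raw data sortable" concrete, here is a minimal sketch of our own, independent of any QDA package, in which coded segments are plain records that can be filtered by code across data sources; this is the same operation NVivo, MAXQDA or ATLAS.ti perform at scale. All source names, texts and codes below are hypothetical.

    from dataclasses import dataclass

    @dataclass
    class Segment:
        source: str   # e.g. "SOP", "ER observation", "patient interview 03"
        text: str     # the raw sentence or paragraph
        codes: list   # one or more short descriptors applied by the coder

    # Hypothetical coded material, loosely following the EVT example above.
    data = [
        Segment("patient interview 03",
                "The neurologist joined by video within minutes.",
                ["tele-neurology"]),
        Segment("ER observation",
                "CT was booked before consent had been documented.",
                ["workflow", "delay"]),
        Segment("SOP",
                "Tele-neurology consultation within 10 minutes of arrival.",
                ["tele-neurology", "target"]),
    ]

    # Coding makes raw data sortable: pull every segment tagged
    # "tele-neurology", regardless of which data source it came from.
    for s in (s for s in data if "tele-neurology" in s.codes):
        print(f"[{s.source}] {s.text}")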

How to report qualitative research?

Protocols of qualitative research can be published separately and in advance of the study results. However, the aim is not the same as in RCT protocols, i.e. to pre-define and set in stone the research questions and primary or secondary endpoints. Rather, it is a way to describe the research methods in detail, which might not be possible in the results paper given journals’ word limits. Qualitative research papers are usually longer than their quantitative counterparts to allow for deep understanding and so-called “thick description”. In the methods section, the focus is on transparency of the methods used, including why, how and by whom they were implemented in the specific study setting, so as to enable a discussion of whether and how this may have influenced data collection, analysis and interpretation. The results section usually starts with a paragraph outlining the main findings, followed by more detailed descriptions of, for example, the commonalities, discrepancies or exceptions per category [ 20 ]. Here it is important to support main findings by relevant quotations, which may add information, context, emphasis or real-life examples [ 20 , 23 ]. It is subject to debate in the field whether it is relevant to state the exact number or percentage of respondents supporting a certain statement (e.g. “Five interviewees expressed negative feelings towards XYZ”) [ 21 ].

How to combine qualitative with quantitative research?

Qualitative methods can be combined with other methods in multi- or mixed methods designs, which "[employ] two or more different methods [...] within the same study or research program rather than confining the research to one single method" [24]. Reasons for combining methods can be diverse, including triangulation for corroboration of findings, complementarity for illustration and clarification of results, expansion to extend the breadth and range of the study, explanation of (unexpected) results generated with one method with the help of another, or offsetting the weakness of one method with the strength of another [1, 17, 24, 25, 26]. The resulting designs can be classified according to when, why and how the different quantitative and/or qualitative data strands are combined. The three most common types of mixed methods designs are the convergent parallel design, the explanatory sequential design and the exploratory sequential design. The designs with examples are shown in Fig. 4.

Figure 4: Three common mixed methods designs

In the convergent parallel design, a qualitative study is conducted in parallel to, and independently of, a quantitative study, and the results of both studies are compared and combined at the stage of interpretation. Using the above example of EVT provision, this could entail setting up a quantitative EVT registry to measure process times and patient outcomes in parallel to conducting the qualitative research outlined above, and then comparing results. Amongst other things, this would make it possible to assess whether interview respondents' subjective impressions of patients receiving good care match modified Rankin Scores at follow-up, or whether observed delays in care provision are exceptions or the rule when compared to door-to-needle times as documented in the registry.

In the explanatory sequential design, a quantitative study is carried out first, followed by a qualitative study to help explain the results from the quantitative study. This would be an appropriate design if the registry alone had revealed relevant delays in door-to-needle times and the qualitative study were used to understand where and why these occurred, and how they could be improved.

In the exploratory sequential design, the qualitative study is carried out first and its results help inform and build the quantitative study in the next step [26]. If the qualitative study around EVT provision had shown a high level of dissatisfaction among the staff members involved, a quantitative questionnaire investigating staff satisfaction could be set up in the next step, informed by the qualitative study as to the topics on which dissatisfaction had been expressed. Amongst other things, the questionnaire design would make it possible to widen the reach of the research to more respondents from different (types of) hospitals, regions, countries or settings, and to conduct sub-group analyses for different professional groups.

How to assess qualitative research?

A variety of assessment criteria and lists have been developed for qualitative research, ranging in their focus and comprehensiveness [ 14 , 17 , 27 ]. However, none of these has been elevated to the “gold standard” in the field. In the following, we therefore focus on a set of commonly used assessment criteria that, from a practical standpoint, a researcher can look for when assessing a qualitative research report or paper.

Assessors should check the authors’ use of and adherence to the relevant reporting checklists (e.g. Standards for Reporting Qualitative Research (SRQR)) to make sure all items that are relevant for this type of research are addressed [ 23 , 28 ]. Discussions of quantitative measures in addition to or instead of these qualitative measures can be a sign of lower quality of the research (paper). Providing and adhering to a checklist for qualitative research contributes to an important quality criterion for qualitative research, namely transparency [ 15 , 17 , 23 ].

Reflexivity

While methodological transparency and complete reporting are relevant for all types of research, some additional criteria must be taken into account for qualitative research. This includes what is called reflexivity, i.e. sensitivity to the relationship between the researcher and the researched, including how contact was established and maintained, and the background and experience of the researcher(s) involved in data collection and analysis. Depending on the research question and the population to be researched, this can be limited to professional experience, but it may also include gender, age or ethnicity [17, 27]. These details are relevant because in qualitative research, as opposed to quantitative research, the researcher as a person cannot be isolated from the research process [23]. It may influence the conversation when an interviewed patient speaks to an interviewer who is a physician, or when an interviewee is asked to discuss a gynaecological procedure with a male interviewer, and therefore the reader must be made aware of these details [19].

Sampling and saturation

The aim of qualitative sampling is for all variants of the objects of observation that are deemed relevant for the study to be present in the sample, "to see the issue and its meanings from as many angles as possible" [1, 16, 19, 20, 27], and to ensure "information-richness" [15]. An iterative sampling approach is advised, in which data collection (e.g. five interviews) is followed by data analysis, followed by more data collection to find variants that are lacking in the current sample. This process continues until no new (relevant) information can be found and further sampling becomes redundant – which is called saturation [1, 15]. In other words: qualitative data collection finds its end point not a priori, but when the research team determines that saturation has been reached [29, 30].

This is also the reason why most qualitative studies use deliberate instead of random sampling strategies. This is generally referred to as "purposive sampling", in which researchers pre-define which types of participants or cases they need to include so as to cover all variations that are expected to be of relevance, based on the literature, previous experience or theory (i.e. theoretical sampling) [14, 20]. Other types of purposive sampling include (but are not limited to) maximum variation sampling, critical case sampling or extreme or deviant case sampling [2]. In the above EVT example, a purposive sample could include all relevant professional groups and/or all relevant stakeholders (patients, relatives) and/or all relevant times of observation (day, night and weekend shift).

Assessors of qualitative research should check whether the considerations underlying the sampling strategy were sound and whether or how researchers tried to adapt and improve their strategies in stepwise or cyclical approaches between data collection and analysis to achieve saturation [ 14 ].
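Purely as an illustration (our sketch, not a prescribed procedure), the iterative collect-analyse cycle ending in saturation can be thought of as a loop that stops once a new batch of data adds no new codes. The names collect_batch and extract_codes are hypothetical stand-ins for real fieldwork and real analysis.

    def reached_saturation(collect_batch, extract_codes, max_batches=20):
        """Sketch of iterative sampling: stop when a batch adds no new codes.

        collect_batch(i)    -> list of raw data items (e.g. five interviews)
        extract_codes(item) -> set of codes found in one item
        """
        seen = set()
        for i in range(max_batches):
            new = set()
            for item in collect_batch(i):
                new |= extract_codes(item)
            if new <= seen:      # no relevant new information: saturation
                return True, seen
            seen |= new          # keep sampling to find the missing variants
        return False, seen       # stopped before saturation was reached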

Piloting

Good qualitative research is iterative in nature, i.e. it goes back and forth between data collection and analysis, revising and improving the approach where necessary. One example of this is pilot interviews, where different aspects of the interview (especially the interview guide, but also, for example, the site of the interview or whether the interview can be audio-recorded) are tested with a small number of respondents, evaluated and revised [19]. In doing so, the interviewer learns which wording or types of questions work best, or what the best length is for an interview with patients who have trouble concentrating for an extended time. Of course, the same reasoning applies to observations or focus groups, which can also be piloted.

Co-coding

Ideally, coding should be performed by at least two researchers, especially at the beginning of the coding process, when a common approach must be defined, including the establishment of a useful coding list (or tree) and a common meaning for individual codes [23]. An initial sub-set, or all transcripts, can be coded independently by the coders and then compared and consolidated after regular discussions in the research team. This is to make sure that codes are applied consistently to the research data.

Member checking

Member checking, also called respondent validation , refers to the practice of checking back with study respondents to see if the research is in line with their views [ 14 , 27 ]. This can happen after data collection or analysis or when first results are available [ 23 ]. For example, interviewees can be provided with (summaries of) their transcripts and asked whether they believe this to be a complete representation of their views or whether they would like to clarify or elaborate on their responses [ 17 ]. Respondents’ feedback on these issues then becomes part of the data collection and analysis [ 27 ].

Stakeholder involvement

In those niches where qualitative approaches have been able to evolve and grow, a new trend has seen the inclusion of patients and their representatives not only as study participants (i.e. “members”, see above) but as consultants to and active participants in the broader research process [ 31 , 32 , 33 ]. The underlying assumption is that patients and other stakeholders hold unique perspectives and experiences that add value beyond their own single story, making the research more relevant and beneficial to researchers, study participants and (future) patients alike [ 34 , 35 ]. Using the example of patients on or nearing dialysis, a recent scoping review found that 80% of clinical research did not address the top 10 research priorities identified by patients and caregivers [ 32 , 36 ]. In this sense, the involvement of the relevant stakeholders, especially patients and relatives, is increasingly being seen as a quality indicator in and of itself.

How not to assess qualitative research

The above overview does not include certain items that are routine in assessments of quantitative research. What follows is a non-exhaustive, non-representative, experience-based list of the quantitative criteria often applied to the assessment of qualitative research, as well as an explanation of the limited usefulness of these endeavours.

Protocol adherence

Given the openness and flexibility of qualitative research, it should not be assessed by how well it adheres to pre-determined and fixed strategies – in other words: its rigidity. Instead, the assessor should look for signs of adaptation and refinement based on lessons learned from earlier steps in the research process.

Sample size

For the reasons explained above, qualitative research does not require specific sample sizes, nor does it require that the sample size be determined a priori [ 1 , 14 , 27 , 37 , 38 , 39 ]. Sample size can only be a useful quality indicator when related to the research purpose, the chosen methodology and the composition of the sample, i.e. who was included and why.

Randomisation

While some authors argue that randomisation can be used in qualitative research, this is not commonly the case, as neither its feasibility nor its necessity or usefulness has been convincingly established for qualitative research [13, 27]. Relevant disadvantages include the negative impact of an overly large sample size as well as the possibility (or probability) of selecting "quiet, uncooperative or inarticulate individuals" [17]. Qualitative studies do not use control groups, either.

Interrater reliability, variability and other “objectivity checks”

The concept of "interrater reliability" is sometimes used in qualitative research to assess the extent to which the coding of two co-coders overlaps. However, it is not clear what this measure tells us about the quality of the analysis [23]. Such scores can be included in qualitative research reports, preferably with some additional information on what the score means for the analysis, but they are not a requirement. Relatedly, it is not relevant for the quality or "objectivity" of qualitative research to separate those who recruited the study participants from those who collected and analysed the data. Experience even shows that it can be better to have the same person or team perform all of these tasks [20]. First, when researchers introduce themselves during recruitment, this can enhance trust when the interview takes place days or weeks later with the same researcher. Second, when the audio-recording is transcribed for analysis, the researcher who conducted the interviews will usually remember the interviewee and the specific interview situation during data analysis. This can provide additional context for the interpretation of data, e.g. on whether something might have been meant as a joke [18].
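For readers who do choose to report such a score, the most common one, Cohen's kappa, is straightforward to compute: it compares observed agreement between two coders with the agreement expected by chance, given each coder's label frequencies. The sketch below is ours, not the article's, and the coder labels are hypothetical.

    from collections import Counter

    def cohens_kappa(codes_a, codes_b):
        """Cohen's kappa for two coders' labels on the same segments."""
        assert len(codes_a) == len(codes_b)
        n = len(codes_a)
        # Proportion of segments where the two coders agree.
        observed = sum(a == b for a, b in zip(codes_a, codes_b)) / n
        # Agreement expected by chance, from each coder's label frequencies.
        freq_a, freq_b = Counter(codes_a), Counter(codes_b)
        expected = sum(freq_a[c] * freq_b.get(c, 0) for c in freq_a) / n**2
        return (observed - expected) / (1 - expected)

    # Hypothetical example: two coders label eight transcript segments.
    coder1 = ["barrier", "barrier", "support", "cost",
              "barrier", "support", "cost", "cost"]
    coder2 = ["barrier", "support", "support", "cost",
              "barrier", "support", "cost", "barrier"]
    print(round(cohens_kappa(coder1, coder2), 2))  # 0.63: agreement beyond chance

As the surrounding text notes, such a score says little by itself about the quality of the analysis; if reported, it should be accompanied by an explanation of what it means for the coding approach.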

Not being quantitative research

Being qualitative research instead of quantitative research should not be used as an assessment criterion if it is used irrespectively of the research problem at hand. Similarly, qualitative research should not be required to be combined with quantitative research per se – unless mixed methods research is judged as inherently better than single-method research. In this case, the same criterion should be applied for quantitative studies without a qualitative component.

The main take-away points of this paper are summarised in Table 1. We aimed to show that, if conducted well, qualitative research can answer specific research questions that cannot be adequately answered using (only) quantitative designs. Seeing qualitative and quantitative methods as equal will help us become more aware and critical of the "fit" between the research problem and our chosen methods: I can conduct an RCT to determine the reasons for transportation delays of acute stroke patients – but should I? It also provides us with a greater range of tools to tackle a greater range of research problems more appropriately and successfully, filling in the blind spots on one half of the methodological spectrum to better address the whole complexity of neurological research and practice.

Availability of data and materials

Not applicable.

Abbreviations

EVT: Endovascular treatment

RCT: Randomised controlled trial

SOP: Standard operating procedure

SRQR: Standards for Reporting Qualitative Research

Philipsen, H., & Vernooij-Dassen, M. (2007). Kwalitatief onderzoek: nuttig, onmisbaar en uitdagend. In L. PLBJ & H. TCo (Eds.), Kwalitatief onderzoek: Praktische methoden voor de medische praktijk . [Qualitative research: useful, indispensable and challenging. In: Qualitative research: Practical methods for medical practice (pp. 5–12). Houten: Bohn Stafleu van Loghum.

Chapter   Google Scholar  

Punch, K. F. (2013). Introduction to social research: Quantitative and qualitative approaches . London: Sage.

Kelly, J., Dwyer, J., Willis, E., & Pekarsky, B. (2014). Travelling to the city for hospital care: Access factors in country aboriginal patient journeys. Australian Journal of Rural Health, 22 (3), 109–113.

Article   Google Scholar  

Nilsen, P., Ståhl, C., Roback, K., & Cairney, P. (2013). Never the twain shall meet? - a comparison of implementation science and policy implementation research. Implementation Science, 8 (1), 1–12.

Howick J, Chalmers I, Glasziou, P., Greenhalgh, T., Heneghan, C., Liberati, A., Moschetti, I., Phillips, B., & Thornton, H. (2011). The 2011 Oxford CEBM evidence levels of evidence (introductory document) . Oxford Center for Evidence Based Medicine. https://www.cebm.net/2011/06/2011-oxford-cebm-levels-evidence-introductory-document/ .

Eakin, J. M. (2016). Educating critical qualitative health researchers in the land of the randomized controlled trial. Qualitative Inquiry, 22 (2), 107–118.

May, A., & Mathijssen, J. (2015). Alternatieven voor RCT bij de evaluatie van effectiviteit van interventies!? Eindrapportage [Alternatives for RCTs in the evaluation of effectiveness of interventions!? Final report].


Berwick, D. M. (2008). The science of improvement. Journal of the American Medical Association, 299 (10), 1182–1184.


Christ, T. W. (2014). Scientific-based research and randomized controlled trials, the “gold” standard? Alternative paradigms and mixed methodologies. Qualitative Inquiry, 20 (1), 72–80.

Lamont, T., Barber, N., Jd, P., Fulop, N., Garfield-Birkbeck, S., Lilford, R., Mear, L., Raine, R., & Fitzpatrick, R. (2016). New approaches to evaluating complex health and care systems. BMJ, 352:i154.

Drabble, S. J., & O’Cathain, A. (2015). Moving from Randomized Controlled Trials to Mixed Methods Intervention Evaluation. In S. Hesse-Biber & R. B. Johnson (Eds.), The Oxford Handbook of Multimethod and Mixed Methods Research Inquiry (pp. 406–425). London: Oxford University Press.

Chambers, D. A., Glasgow, R. E., & Stange, K. C. (2013). The dynamic sustainability framework: Addressing the paradox of sustainment amid ongoing change. Implementation Science : IS, 8 , 117.

Hak, T. (2007). Waarnemingsmethoden in kwalitatief onderzoek [Observation methods in qualitative research]. In L. PLBJ & H. TCo (Eds.), Kwalitatief onderzoek: Praktische methoden voor de medische praktijk (pp. 13–25). Houten: Bohn Stafleu van Loghum.

Russell, C. K., & Gregory, D. M. (2003). Evaluation of qualitative research studies. Evidence Based Nursing, 6 (2), 36–40.

Fossey, E., Harvey, C., McDermott, F., & Davidson, L. (2002). Understanding and evaluating qualitative research. Australian and New Zealand Journal of Psychiatry, 36 , 717–732.

Yanow, D. (2000). Conducting interpretive policy analysis (Vol. 47). Thousand Oaks: Sage University Papers Series on Qualitative Research Methods.

Shenton, A. K. (2004). Strategies for ensuring trustworthiness in qualitative research projects. Education for Information, 22 , 63–75.

van der Geest, S. (2006). Participeren in ziekte en zorg: meer over kwalitatief onderzoek [Participating in illness and care: more about qualitative research]. Huisarts en Wetenschap, 49(4), 283–287.

Hijmans, E., & Kuyper, M. (2007). Het halfopen interview als onderzoeksmethode [The half-open interview as research method]. In L. PLBJ & H. TCo (Eds.), Kwalitatief onderzoek: Praktische methoden voor de medische praktijk (pp. 43–51). Houten: Bohn Stafleu van Loghum.

Jansen, H. (2007). Systematiek en toepassing van de kwalitatieve survey [Systematics and implementation of the qualitative survey]. In L. PLBJ & H. TCo (Eds.), Kwalitatief onderzoek: Praktische methoden voor de medische praktijk (pp. 27–41). Houten: Bohn Stafleu van Loghum.

Pv, R., & Peremans, L. (2007). Exploreren met focusgroepgesprekken: de ‘stem’ van de groep onder de loep [Exploring with focus group conversations: the “voice” of the group under the magnifying glass]. In L. PLBJ & H. TCo (Eds.), Kwalitatief onderzoek: Praktische methoden voor de medische praktijk (pp. 53–64). Houten: Bohn Stafleu van Loghum.

Carter, N., Bryant-Lukosius, D., DiCenso, A., Blythe, J., & Neville, A. J. (2014). The use of triangulation in qualitative research. Oncology Nursing Forum, 41 (5), 545–547.

Boeije, H. (2012). Analyseren in kwalitatief onderzoek: Denken en doen [Analysis in qualitative research: Thinking and doing]. Den Haag: Boom Lemma uitgevers.

Hunter, A., & Brewer, J. (2015). Designing Multimethod Research. In S. Hesse-Biber & R. B. Johnson (Eds.), The Oxford Handbook of Multimethod and Mixed Methods Research Inquiry (pp. 185–205). London: Oxford University Press.

Archibald, M. M., Radil, A. I., Zhang, X., & Hanson, W. E. (2015). Current mixed methods practices in qualitative research: A content analysis of leading journals. International Journal of Qualitative Methods, 14 (2), 5–33.

Creswell, J. W., & Plano Clark, V. L. (2011). Choosing a Mixed Methods Design. In Designing and Conducting Mixed Methods Research . Thousand Oaks: SAGE Publications.

Mays, N., & Pope, C. (2000). Assessing quality in qualitative research. BMJ, 320 (7226), 50–52.

O'Brien, B. C., Harris, I. B., Beckman, T. J., Reed, D. A., & Cook, D. A. (2014). Standards for reporting qualitative research: A synthesis of recommendations. Academic Medicine : Journal of the Association of American Medical Colleges, 89 (9), 1245–1251.

Saunders, B., Sim, J., Kingstone, T., Baker, S., Waterfield, J., Bartlam, B., Burroughs, H., & Jinks, C. (2018). Saturation in qualitative research: Exploring its conceptualization and operationalization. Quality and Quantity, 52 (4), 1893–1907.

Moser, A., & Korstjens, I. (2018). Series: Practical guidance to qualitative research. Part 3: Sampling, data collection and analysis. European Journal of General Practice, 24 (1), 9–18.

Marlett, N., Shklarov, S., Marshall, D., Santana, M. J., & Wasylak, T. (2015). Building new roles and relationships in research: A model of patient engagement research. Quality of Life Research : an international journal of quality of life aspects of treatment, care and rehabilitation, 24 (5), 1057–1067.

Demian, M. N., Lam, N. N., Mac-Way, F., Sapir-Pichhadze, R., & Fernandez, N. (2017). Opportunities for engaging patients in kidney research. Canadian Journal of Kidney Health and Disease, 4 , 2054358117703070–2054358117703070.

Noyes, J., McLaughlin, L., Morgan, K., Roberts, A., Stephens, M., Bourne, J., Houlston, M., Houlston, J., Thomas, S., Rhys, R. G., et al. (2019). Designing a co-productive study to overcome known methodological challenges in organ donation research with bereaved family members. Health Expectations . 22(4):824–35.

Piil, K., Jarden, M., & Pii, K. H. (2019). Research agenda for life-threatening cancer. European Journal Cancer Care (Engl), 28 (1), e12935.

Hofmann, D., Ibrahim, F., Rose, D., Scott, D. L., Cope, A., Wykes, T., & Lempp, H. (2015). Expectations of new treatment in rheumatoid arthritis: Developing a patient-generated questionnaire. Health Expectations : an international journal of public participation in health care and health policy, 18 (5), 995–1008.

Jun, M., Manns, B., Laupacis, A., Manns, L., Rehal, B., Crowe, S., & Hemmelgarn, B. R. (2015). Assessing the extent to which current clinical research is consistent with patient priorities: A scoping review using a case study in patients on or nearing dialysis. Canadian Journal of Kidney Health and Disease, 2 , 35.

Baker, S. E., & Edwards, R. (2012). How many qualitative interviews is enough? In National Centre for Research Methods Review Paper . National Centre for Research Methods. http://eprints.ncrm.ac.uk/2273/4/how_many_interviews.pdf .

Sandelowski, M. (1995). Sample size in qualitative research. Research in Nursing & Health, 18 (2), 179–183.

Sim, J., Saunders, B., Waterfield, J., & Kingstone, T. (2018). Can sample size in qualitative research be determined a priori? International Journal of Social Research Methodology, 21 (5), 619–634.


Acknowledgements

No external funding.

Author information

Authors and affiliations

Department of Neurology, Heidelberg University Hospital, Im Neuenheimer Feld 400, 69120, Heidelberg, Germany

Loraine Busetto, Wolfgang Wick & Christoph Gumbinger

Clinical Cooperation Unit Neuro-Oncology, German Cancer Research Center, Heidelberg, Germany

Wolfgang Wick


Contributions

LB drafted the manuscript; WW and CG revised the manuscript; all authors approved the final version.

Corresponding author

Correspondence to Loraine Busetto .

Ethics declarations

Competing interests

The authors declare no competing interests.

Additional information

Publisher’s note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/ .


About this article

Cite this article

Busetto, L., Wick, W. & Gumbinger, C. How to use and assess qualitative research methods. Neurol. Res. Pract. 2 , 14 (2020). https://doi.org/10.1186/s42466-020-00059-z


Received : 30 January 2020

Accepted : 22 April 2020

Published : 27 May 2020

DOI : https://doi.org/10.1186/s42466-020-00059-z


Keywords:
  • Qualitative research
  • Mixed methods
  • Quality assessment

Neurological Research and Practice

ISSN: 2524-3489



What Is Qualitative Research? | Methods & Examples

Published on June 19, 2020 by Pritha Bhandari . Revised on September 5, 2024.

Qualitative research involves collecting and analyzing non-numerical data (e.g., text, video, or audio) to understand concepts, opinions, or experiences. It can be used to gather in-depth insights into a problem or generate new ideas for research.

Qualitative research is the opposite of quantitative research , which involves collecting and analyzing numerical data for statistical analysis.

Qualitative research is commonly used in the humanities and social sciences, in subjects such as anthropology, sociology, education, health sciences and history. Examples of qualitative research questions include:

  • How does social media shape body image in teenagers?
  • How do children and adults interpret healthy eating in the UK?
  • What factors influence employee retention in a large organization?
  • How is anxiety experienced around the world?
  • How can teachers integrate social issues into science curriculums?


Approaches to qualitative research

Qualitative research is used to understand how people experience the world. While there are many approaches to qualitative research, they tend to be flexible and focus on retaining rich meaning when interpreting data.

Common approaches include grounded theory, ethnography, action research, phenomenological research, and narrative research. They share some similarities, but emphasize different aims and perspectives.

Qualitative research approaches:

  • Grounded theory: researchers collect rich data on a topic of interest and develop theories.
  • Ethnography: researchers immerse themselves in groups or organizations to understand their cultures.
  • Action research: researchers and participants collaboratively link theory to practice to drive social change.
  • Phenomenological research: researchers investigate a phenomenon or event by describing and interpreting participants’ lived experiences.
  • Narrative research: researchers examine how stories are told to understand how participants perceive and make sense of their experiences.

Note that qualitative research is at risk for certain research biases including the Hawthorne effect , observer bias , recall bias , and social desirability bias . While not always totally avoidable, awareness of potential biases as you collect and analyze your data can prevent them from impacting your work too much.


Qualitative research methods

Each of the research approaches involves using one or more data collection methods . These are some of the most common qualitative methods:

  • Observations: recording what you have seen, heard, or encountered in detailed field notes.
  • Interviews: personally asking people questions in one-on-one conversations.
  • Focus groups: asking questions and generating discussion among a group of people.
  • Surveys: distributing questionnaires with open-ended questions.
  • Secondary research: collecting existing data in the form of texts, images, audio or video recordings, etc.

For example, in a study of a company’s culture, you might combine several of these methods:

  • You take field notes with observations and reflect on your own experiences of the company culture.
  • You distribute open-ended surveys to employees across all the company’s offices by email to find out if the culture varies across locations.
  • You conduct in-depth interviews with employees in your office to learn about their experiences and perspectives in greater detail.

Qualitative researchers often consider themselves “instruments” in research because all observations, interpretations and analyses are filtered through their own personal lens.

For this reason, when writing up your methodology for qualitative research, it’s important to reflect on your approach and to thoroughly explain the choices you made in collecting and analyzing the data.

Qualitative data analysis

Qualitative data can take the form of texts, photos, videos and audio. For example, you might be working with interview transcripts, survey responses, fieldnotes, or recordings from natural settings.

Most types of qualitative data analysis share the same five steps (a short code sketch illustrating steps 3–5 follows this list):

  • Prepare and organize your data. This may mean transcribing interviews or typing up fieldnotes.
  • Review and explore your data. Examine the data for patterns or repeated ideas that emerge.
  • Develop a data coding system. Based on your initial ideas, establish a set of codes that you can apply to categorize your data.
  • Assign codes to the data. For example, in qualitative survey analysis, this may mean going through each participant’s responses and tagging them with codes in a spreadsheet. As you go through your data, you can create new codes to add to your system if necessary.
  • Identify recurring themes. Link codes together into cohesive, overarching themes.
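The following sketch mimics the mechanics of steps 3–5 above. The codebook, themes and responses are invented, and keyword matching is only a crude stand-in for the interpretive judgement real qualitative coding requires.

```python
# Hypothetical sketch of steps 3-5: a hand-built codebook is applied to
# invented survey responses, then codes are grouped into themes.
from collections import defaultdict

codebook = {                     # step 3: develop a coding system
    "wait": ["waiting", "delay", "queue"],
    "staff": ["nurse", "doctor", "receptionist"],
    "cost": ["expensive", "afford", "price"],
}
themes = {"access": ["wait", "cost"], "interpersonal care": ["staff"]}

responses = [
    "The waiting time was far too long.",
    "Every nurse I met was kind and helpful.",
    "I could not afford the follow-up visit.",
]

coded = defaultdict(list)        # step 4: assign codes to the data
for text in responses:
    for code, keywords in codebook.items():
        if any(k in text.lower() for k in keywords):
            coded[code].append(text)

for theme, codes in themes.items():   # step 5: link codes into themes
    segments = [s for c in codes for s in coded.get(c, [])]
    print(theme, "->", len(segments), "segment(s)")
```

In practice this tagging is usually done in dedicated software or a spreadsheet, with the codebook revisited and refined as new codes emerge from the data.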

There are several specific approaches to analyzing qualitative data. Although these methods share similar processes, they emphasize different concepts.

Qualitative data analysis approaches (a small content-analysis sketch follows this list):

  • Content analysis: used to describe and categorize common words, phrases, and ideas in qualitative data. Example: a market researcher could perform content analysis to find out what kind of language is used in descriptions of therapeutic apps.
  • Thematic analysis: used to identify and interpret patterns and themes in qualitative data. Example: a psychologist could apply thematic analysis to travel blogs to explore how tourism shapes self-identity.
  • Textual analysis: used to examine the content, structure, and design of texts. Example: a media researcher could use textual analysis to understand how news coverage of celebrities has changed in the past decade.
  • Discourse analysis: used to study communication and how language is used to achieve effects in specific contexts. Example: a political scientist could use discourse analysis to study how politicians generate trust in election campaigns.
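To make the first row above concrete, here is a deliberately tiny sketch of the counting side of content analysis. The app descriptions and stopword list are invented; real content analysis also categorizes phrases and ideas, not just words.

```python
# Toy illustration: tally the most frequent content words across
# a small set of invented app descriptions.
import re
from collections import Counter

descriptions = [
    "A calming app that guides you through daily meditation.",
    "Track your mood and build calming habits with gentle reminders.",
    "Guided breathing exercises for a calmer, less anxious day.",
]
stopwords = {"a", "an", "the", "and", "that", "you", "your", "with", "for", "through"}

words = [
    w
    for text in descriptions
    for w in re.findall(r"[a-z']+", text.lower())
    if w not in stopwords
]
print(Counter(words).most_common(5))
```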

Advantages of qualitative research

Qualitative research often tries to preserve the voice and perspective of participants and can be adjusted as new research questions arise. Qualitative research is good for:

  • Flexibility

The data collection and analysis process can be adapted as new ideas or patterns emerge. They are not rigidly decided beforehand.

  • Natural settings

Data collection occurs in real-world contexts or in naturalistic ways.

  • Meaningful insights

Detailed descriptions of people’s experiences, feelings and perceptions can be used in designing, testing or improving systems or products.

  • Generation of new ideas

Open-ended responses mean that researchers can uncover novel problems or opportunities that they wouldn’t have thought of otherwise.

Disadvantages of qualitative research

Researchers must consider practical and theoretical limitations in analyzing and interpreting their data. Qualitative research suffers from:

  • Unreliability

The real-world setting often makes qualitative research unreliable because of uncontrolled factors that affect the data.

  • Subjectivity

Due to the researcher’s primary role in analyzing and interpreting data, qualitative research cannot be replicated . The researcher decides what is important and what is irrelevant in data analysis, so interpretations of the same data can vary greatly.

  • Limited generalizability

Small samples are often used to gather detailed data about specific contexts. Despite rigorous analysis procedures, it is difficult to draw generalizable conclusions because the data may be biased and unrepresentative of the wider population .

  • Labor-intensive

Although software can be used to manage and record large amounts of text, data analysis often has to be checked or performed manually.


Frequently asked questions about qualitative research

Quantitative research deals with numbers and statistics, while qualitative research deals with words and meanings.

Quantitative methods allow you to systematically measure variables and test hypotheses . Qualitative methods allow you to explore concepts and experiences in more detail.

There are five common approaches to qualitative research :

  • Grounded theory involves collecting data in order to develop new theories.
  • Ethnography involves immersing yourself in a group or organization to understand its culture.
  • Narrative research involves interpreting stories to understand how people make sense of their experiences and perceptions.
  • Phenomenological research involves investigating phenomena through people’s lived experiences.
  • Action research links theory and practice in several cycles to drive innovative changes.

Data collection is the systematic process by which observations or measurements are gathered in research. It is used in many different contexts by academics, governments, businesses, and other organizations.

There are various approaches to qualitative data analysis , but they all share five steps in common:

  • Prepare and organize your data.
  • Review and explore your data.
  • Develop a data coding system.
  • Assign codes to the data.
  • Identify recurring themes.

The specifics of each step depend on the focus of the analysis. Some common approaches include textual analysis , thematic analysis , and discourse analysis .

Cite this Scribbr article


Bhandari, P. (2024, September 05). What Is Qualitative Research? | Methods & Examples. Scribbr. Retrieved September 18, 2024, from https://www.scribbr.com/methodology/qualitative-research/



Dr Karen Lumsden

trainer / coach / consultant / researcher


Assessing the ‘Quality’ of Qualitative Research

One of the questions that comes up regularly in training courses on qualitative methods is how we should assess the quality of a qualitative study. At some point in their research career, qualitative researchers will inevitably experience the ‘apples versus oranges’ phenomenon, whereby their qualitative research is evaluated against quantitative principles and criteria instead of qualitative ones. The quality standards used in quantitative research do not directly translate to qualitative studies.

Should We Use Standardized Criteria to Evaluate Qualitative Research?

Over the years, many qualitative scholars have proposed frameworks and criteria for assessing qualitative research (see Guba and Lincoln 1989; Lather 1993; Schwandt 1996; Bochner 2000; Ritchie et al. 2003; Tracy 2010; Altheide and Johnson 2011). Some have also argued that standardized criteria are unhelpful in qualitative inquiry (see Schwandt 1996; Altheide and Johnson 2011). For example, Bochner (2000) argues that ‘traditional empiricist criteria’ are ‘unhelpful’ when applied to new ethnographic approaches (cited in Tracy 2010: 838). As Altheide and Johnson (2011: 582) argue:

“There are many ways to use, practice, promote, and claim qualitative research, and in each there is a proposed or claimed relationship between some field of human experience, a form of representation, and an audience. Researchers and scholars in each of these areas have been grappling with issues of truth, validity, verisimilitude, credibility, trustworthiness, dependability, confirmability, and so on. What is valid for clinical studies or policy studies may not be adequate or relevant for ethnography or autoethnography or performance ethnography.”

Qualitative research is conducted within different research paradigms, which complicates the assessment of the quality of a particular study.


As Tracy (2010) notes, many of these critiques have resulted in the development of new quality standards and criteria for evaluating qualitative inquiry, which are seen as more flexible than quantitative standards and more sensitive to the context-bound nature of qualitative research. Below, we explore the main criteria proposed for assessing qualitative research:

Criteria for Assessing Qualitative Research

  • Trustworthiness

In the 1980s, Guba and Lincoln (1989; see also Krefting 1991) developed criteria which can be used to determine rigor in a qualitative inquiry. Instead of ‘rigor’, they focus on the development of trustworthiness in qualitative inquiry through determining credibility, transferability, dependability and confirmability, which map onto the traditional quantitative markers of rigor as follows:

  • Credibility (in place of internal validity)
  • Transferability (in place of external validity or generalizability)
  • Dependability (in place of reliability)
  • Confirmability (in place of objectivity)
  • Credibility

Credibility asks us to consider if the research findings are plausible and convincing. Questions to consider include:

  • How well does the study capture and portray the world it is trying to describe?
  • How well backed up are the claims made by the research?
  • What is the evidential base for the research?
  • How plausible are the findings?

As Stenfors et al. (2020) point out, there should be alignment between ‘theory, research question, data collection, analysis and results’ while the ‘sampling strategy, the depth and volume of data, and the analytical steps taken’ must be appropriate within that framework.

  • Transferability

Here, we are interested in how clear the basis is for drawing wider inference (Ritchie et al. 2003) from our study. Can the findings of our study be transferred to another group, context or setting?

As Ritchie et al. (2003) argue, the findings of qualitative research can be generalized, but the framework within which this can occur needs greater clarification; this is why we refer to the transferability of findings in a qualitative study. For example, in an empirical sense: can findings from qualitative research studies be applied to populations or settings beyond the particular sample of the study? We can also explore the generation of theoretical concepts or propositions from a qualitative study which are deemed to be of wider, or universal, application.

When attempting to extrapolate from a qualitative study we should be conscious that meanings and behaviours are context-bound. Extrapolation may therefore be possible if offered as a working hypothesis to help us make sense of findings in other contexts.

Questions to consider include:

  • Sample coverage: did the sample frame contain any known bias; were the criteria used for selection inclusive of the constituencies thought to be of importance?
  • Capture of the phenomena: were the environment and the quality of questioning effective in enabling participants to fully express their views?
  • Identification or labelling: have the phenomena been identified, categorised and named in ways that reflect the meanings assigned by participants?
  • Interpretation: is there sufficient internal evidence for the explanatory accounts that have been developed?
  • Display: have the findings been portrayed in a way that remains true to the original data and allows others to see the analytic constructions which have occurred? (see Ritchie et al. 2003)
  • Dependability

Dependability is ‘the extent to which the research could be replicated in similar conditions’ (Stenfors et al. 2020). The researcher should provide enough information on the design and conduct of their study that another researcher could follow the same steps. Given the context-specific nature of qualitative research, it can be difficult to demonstrate which features of the qualitative data should be expected to be consistent, dependable or reliable.

Questions to consider for reliability include:

  • Was the sample design/selection without bias, ‘symbolically’ representative of the target population, comprehensive of all known constituencies; was there any known feature of non-response or attrition within the sample?
  • Was the fieldwork carried out consistently, did it allow respondents sufficient opportunities to cover relevant ground, to portray their experiences?
  • Was the analysis carried out systematically and comprehensively, were classifications, typologies confirmed by multiple assessment?
  • Is the interpretation well supported by the evidence?
  • Did the design/conduct allow equal opportunity for all perspectives to be identified or were there features that led to selective, or missing, coverage? (see Ritchie et al. 2003).
  • Confirmability

Here, we are looking for a clear link between the data and the findings. For example, researchers should evidence their claims with the use of quotes/excerpts of data. Qualitative researchers should avoid the temptation to quantify findings with claims such as ‘70% of participants felt that xxx…’ It is also important in the Discussion to demonstrate how the research findings relate to the wider body of literature and to answer the research question. Any limitations of the study should also be flagged up.

  • Reflexivity

Stenfors et al. (2020) draw attention to reflexivity as another important criterion in assessing qualitative inquiry. For Guba and Lincoln (1989) the reflexive journal is a further means of helping to assess qualitative inquiry. A reflexive approach helps us to be aware of the social, ethical and political impact of our research, the central, fluid and changing nature/s of power relations (with participants, gatekeepers, research funders, etc.) and our relationships with the researched (Lumsden 2019).

We can ask whether the researcher has stepped back and critically reflected on their role in the research process, their relationships with the researched, and their social position. It should be clear how reflexivity has been embedded in the research process (Stenfors et al. 2020). As Altheide and Johnson (2011: 581) write:

‘Good qualitative research—and particularly ethnographies—shows the hand of the ethnographer. The effort may not always be successful, but there should be clear “tracks” that the attempt has been made.’

Additional Criteria: Ethics

Tracy (2010) also provides a useful overview of eight key criteria for excellent qualitative research: worthy topic, rich rigor, sincerity, credibility, resonance, significant contribution, ethics and meaningful coherence (p. 840). There is overlap with the above criteria, and some elements could be said to be subsumed in the above discussion already, so I will not delve into them all here. However, it is important to draw attention to ethical considerations in qualitative studies. As Tracy notes, researchers should consider:

  • Procedural ethics (such as human subjects);
  • Situational and culturally specific ethics;
  • Relational ethics;
  • Exiting ethics (leaving the scene and sharing the research) (see Tracy 2010: 840).


Strategies for Determining Trustworthiness (Rigor)

The strategies adopted in order to determine the trustworthiness of a qualitative study depend on a variety of factors including: research paradigm, the specifics of each research design, the research methods utilised (i.e. interviews, ethnography, observation, focus groups, creative methods, visual methods, secondary data analysis, narratives etc.) and the type of qualitative analysis being conducted.

Morse (2015) provides a useful evaluation of the use of various strategies for ensuring rigor in qualitative studies. Strategies which she evaluates as typically used in attempts to ensure validity and reliability include:

  • Prolonged engagement in ethnographic research via time spent in the field to reduce researcher effect;
  • Prolonged observation in ethnographic research reduces researcher effect;
  • Thick description;
  • Triangulation;
  • Development of a coding system and inter-rater reliability in semi-structured interviews;
  • Researcher bias;
  • Negative case analysis;
  • Peer review debriefing (in team research);
  • Member checks;
  • External audits (viewed as problematic and not routinely used) (see pages 1217-1220).

She provides a useful evaluation of the appropriateness and success of these strategies for ensuring rigor, for those who wish to explore this further. Interestingly, through her critique of these strategies, Morse also suggests that ‘qualitative researchers return to the terminology of social sciences, using rigor, reliability, validity, and generalizability’ (p. 1212) instead of the terms proposed in the 1980s by Guba and Lincoln (1989).

Awareness of the criteria used when assessing the quality of qualitative research is key for anyone conducting qualitative research. As we have seen these criteria typically include: trustworthiness, credibility, transferability, dependability, confirmability, reflexivity and ethics.

However, the strategies each researcher adopts to ensure the trustworthiness (rigor) of their study will depend on a variety of factors specific to each qualitative research project, including the research method they adopt and the research paradigm. As Morse (2015: 1219) writes: ‘…rigor, comprising both validity and reliability, is achieved primarily by researchers in the process of data collection and analysis’. In addition, the assessment criteria which are valid when assessing fields such as clinical studies may not be relevant for those working in areas such as ethnography or narrative studies (see Altheide and Johnson 2011). There is no easy route or ‘one size fits all’ approach for assessing the quality of qualitative research, but the above criteria give us a good starting point to which we can refer when designing and conducting our qualitative inquiries.

References and further reading

Altheide, D.L. and Johnson, J.M. (2011) ‘Reflections on Interpretive Adequacy in Qualitative Research.’ In N.K. Denzin and Y.S. Lincoln (eds) Handbook of Qualitative Research, Fifth Edition (pp. 581-594). London: Sage.

Bochner, A. (2000) ‘Criteria Against Ourselves.’ Qualitative Inquiry , 6: 266-272.

Braun, V. and Clarke, V. (2013) Successful Qualitative Research . London: Sage.

Guba, E. and Lincoln, Y. (1989) Fourth Generation Evaluation . Newbury Park, CA: Sage.

Krefting, L. (1991) ‘Rigor in Qualitative Research: The Assessment of Trustworthiness.’ American Journal of Occupational Therapy , 45: 214–222.

Lather, P. (1993) ‘Fertile Obsession: Validity after Poststructuralism.’ Sociological Quarterly , 34: 673-693.

Lingard L. (2015) ‘Joining a Conversation: The Problem/Gap/Hook Heuristic.’ Perspectives on Medical Education , 4(5): 252–253.

Lumsden, K. (2019) Reflexivity: Theory, Method and Practice . London: Routledge.

Morse, J.M. (2015) ‘Critical Analysis of Strategies for Determining Rigor in Qualitative Inquiry.’ Qualitative Health Research , 25(9): 1212-1222.

Schwandt, T.A. (1996) ‘Farewell to Criteriology.’ Qualitative Inquiry , 2: 58-72.

Spencer, L., Ritchie, J., Lewis, J., and Dillon, L. (2003) Quality in Qualitative Evaluation: A Framework for Assessing Research Evidence , GCSRO.  Available at: www.policyhub.gov.uk/publications

Stenfors, T., Kajamaa, A. and Bennett, D. (2020) ‘How to… Assess the Quality of Qualitative Research.’ The Clinical Teacher , https://doi.org/10.1111/tct.13242

Tracy, S.J. (2010) ‘Qualitative Quality: Eight “Big-Tent” Criteria for Excellent Qualitative Research.’ Qualitative Inquiry , 16: 837–851.


How to appraise qualitative research

  • Calvin Moorley 1 ,
  • Xabi Cathala 2
  • 1 Nursing Research and Diversity in Care, School of Health and Social Care , London South Bank University , London , UK
  • 2 Institute of Vocational Learning , School of Health and Social Care, London South Bank University , London , UK
  • Correspondence to Dr Calvin Moorley, Nursing Research and Diversity in Care, School of Health and Social Care, London South Bank University, London SE1 0AA, UK; Moorleyc{at}lsbu.ac.uk

https://doi.org/10.1136/ebnurs-2018-103044


Introduction

In order to make a decision about implementing evidence into practice, nurses need to be able to critically appraise research. Nurses also have a professional responsibility to maintain up-to-date practice. 1 This paper provides a guide on how to critically appraise a qualitative research paper.

What is qualitative research?


Useful terms

Some of the qualitative approaches used in nursing research include grounded theory, phenomenology, ethnography, case study (can lend itself to mixed methods) and narrative analysis. The data collection methods used in qualitative research include in depth interviews, focus groups, observations and stories in the form of diaries or other documents. 3

Authenticity

Title, keywords, authors and abstract

In a previous paper, we discussed how the title, keywords, authors’ positions and affiliations and abstract can influence the authenticity and readability of quantitative research papers; 4 the same applies to qualitative research. However, other areas such as the purpose of the study and the research question, theoretical and conceptual frameworks, sampling and methodology also need consideration when appraising a qualitative paper.

Purpose and question

The topic under investigation in the study should be guided by a clear research question or a statement of the problem or purpose. An example of a statement can be seen in table 2 . Unlike most quantitative studies, qualitative research does not seek to test a hypothesis. The research statement should be specific to the problem and should be reflected in the design. This will inform the reader of what will be studied and justify the purpose of the study. 5

Example of research question and problem statement

An appropriate literature review should have been conducted and summarised in the paper. It should be linked to the subject, using peer-reviewed primary research which is up to date. We suggest papers with an age limit of 5–8 years, excluding original seminal work. The literature review should give the reader a balanced view of what has been written on the subject. It is worth noting that for some qualitative approaches the literature review is conducted after the data collection to minimise bias, for example, in grounded theory studies. In phenomenological studies, the review sometimes occurs after the data analysis. If this is the case, the author(s) should make this clear.

Theoretical and conceptual frameworks

Most authors use the terms theoretical and conceptual frameworks interchangeably. Usually, a theoretical framework is used when research is underpinned by one theory that aims to help predict, explain and understand the topic investigated. A theoretical framework is the blueprint that can hold or scaffold a study’s theory. Conceptual frameworks are based on concepts from various theories and findings which help to guide the research. 6 It is the researcher’s understanding of how different variables are connected in the study, for example, the literature review and research question. Theoretical and conceptual frameworks connect the researcher to existing knowledge and these are used in a study to help to explain and understand what is being investigated. A framework is the design or map for a study. When you are appraising a qualitative paper, you should be able to see how the framework helped with (1) providing a rationale and (2) the development of research questions or statements. 7 You should be able to identify how the framework, research question, purpose and literature review all complement each other.

There remains an ongoing debate about what an appropriate sample size is for a qualitative study. We hold the view that qualitative research does not seek statistical power, and that a sample size can be as small as one (eg, a single case study) or any number above one (eg, a grounded theory study), provided that it is appropriate and answers the research problem. Shorten and Moorley 8 explain that three main types of sampling exist in qualitative research: (1) convenience, (2) judgement or (3) theoretical. In the paper, the sample size should be stated and a rationale for how it was decided should be clear.

Methodology

Qualitative research encompasses a variety of methods and designs. Based on the chosen method or design, the findings may be reported in a variety of different formats. Table 3 provides the main qualitative approaches used in nursing with a short description.

Different qualitative approaches

The authors should make it clear why they are using a qualitative methodology and the chosen theoretical approach or framework. The paper should provide details of participant inclusion and exclusion criteria as well as recruitment sites where the sample was drawn from, for example, urban, rural, hospital inpatient or community. Methods of data collection should be identified and be appropriate for the research statement/question.

Data collection

Overall, there should be a clear trail of data collection. The paper should explain when and how the study was advertised and when participants were recruited and consented. It should also state when and where the data collection took place. Data collection methods include interviews, which can be structured or unstructured, in-depth, one-to-one or group based. 9 Group interviews are often referred to as focus group interviews; these are usually voice recorded and transcribed verbatim. It should be clear whether they were conducted face to face, by telephone or via other media. Table 3 includes some data collection methods. Other collection methods not included in table 3 are observation, diaries, video recording, photographs, documents or objects (artefacts). The schedule of questions for interviews, or the protocol for non-interview data collection, should be provided, available or discussed in the paper. Some authors may state that ‘recruitment ended once data saturation was reached’. This simply means that the researchers were not gaining any new information at subsequent interviews, so they stopped data collection.
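Although saturation is ultimately a conceptual judgement rather than a simple count, one transparent way to support such a statement is to log how many new codes each successive interview contributed. The sketch below is a minimal illustration with invented codes, not a method drawn from this article.

```python
# Hypothetical log of codes identified in four successive interviews.
codes_per_interview = [
    {"access", "cost", "trust"},   # interview 1
    {"cost", "stigma"},            # interview 2
    {"trust", "stigma"},           # interview 3
    {"access", "cost"},            # interview 4
]

seen = set()
for i, codes in enumerate(codes_per_interview, start=1):
    new = codes - seen             # codes not heard in any earlier interview
    seen |= codes
    print(f"interview {i}: {len(new)} new code(s) {sorted(new)}")
# A sustained run of zero new codes is one pragmatic signal of saturation.
```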

The data collection section should include details of the ethical approval gained to carry out the study. For example, the strategies used to gain participants’ consent to take part in the study. The authors should make clear if any ethical issues arose and how these were resolved or managed.

The approach to data analysis (see ref  10 ) needs to be clearly articulated, for example, was there more than one person responsible for analysing the data? How were any discrepancies in findings resolved? An audit trail of how the data were analysed including its management should be documented. If member checking was used this should also be reported. This level of transparency contributes to the trustworthiness and credibility of qualitative research. Some researchers provide a diagram of how they approached data analysis to demonstrate the rigour applied ( figure 1 ).

Figure 1: Example of data analysis diagram.

Validity and rigour

The study’s validity is reliant on the statement of the question/problem, the theoretical/conceptual framework, design, method, sample and data analysis. When critiquing qualitative research, these elements will help you to determine the study’s reliability. Noble and Smith 11 explain that validity is the integrity of the data and methods applied, and that findings should accurately reflect the data. Rigour should acknowledge the researcher’s role and involvement, as well as any biases. Essentially, it should focus on truth value, consistency, neutrality and applicability. 11 The authors should discuss whether they used triangulation (see table 2 ) to develop the best possible understanding of the phenomena.

Themes and interpretations and implications for practice

In qualitative research no hypothesis is tested, therefore, there is no specific result. Instead, qualitative findings are often reported in themes based on the data analysed. The findings should be clearly linked to, and reflect, the data. This contributes to the soundness of the research. 11 The researchers should make it clear how they arrived at the interpretations of the findings. The theoretical or conceptual framework used should be discussed aiding the rigour of the study. The implications of the findings need to be made clear and where appropriate their applicability or transferability should be identified. 12

Discussions, recommendations and conclusions

The discussion should relate to the research findings as the authors seek to make connections with the literature reviewed earlier in the paper to contextualise their work. A strong discussion will connect the research aims and objectives to the findings and will be supported with literature if possible. A paper that seeks to influence nursing practice will have a recommendations section for clinical practice and research. A good conclusion will focus on the findings and discussion of the phenomena investigated.

Qualitative research has much to offer nursing and healthcare in terms of understanding patients’ experience of illness, treatment and recovery; it can also help us to better understand areas of healthcare practice. However, it must be done with rigour, and this paper provides some guidance for appraising such research. To help you critique a qualitative research paper, some guidance is provided in table 4 .

Some guidance for critiquing qualitative research

  • Nursing and Midwifery Council. The code: Standards of conduct, performance and ethics for nurses and midwives. 2015. https://www.nmc.org.uk/globalassets/sitedocuments/nmc-publications/nmc-code.pdf (accessed 21 Aug 18).

Patient consent for publication: Not required.

Competing interests: None declared.

Provenance and peer review: Commissioned; internally peer reviewed.

Critical appraisal of qualitative research: necessity, partialities and the issue of bias

  • http://orcid.org/0000-0001-5660-8224 Veronika Williams ,
  • Anne-Marie Boylan ,
  • http://orcid.org/0000-0003-4597-1276 David Nunan
  • Nuffield Department of Primary Care Health Sciences , University of Oxford, Radcliffe Observatory Quarter , Oxford , UK
  • Correspondence to Dr Veronika Williams, Nuffield Department of Primary Care Health Sciences, University of Oxford, Oxford OX2 6GG, UK; veronika.williams{at}phc.ox.ac.uk

https://doi.org/10.1136/bmjebm-2018-111132


Keywords: qualitative research

Introduction

Qualitative evidence allows researchers to analyse human experience and provides useful exploratory insights into experiential matters and meaning, often explaining the ‘how’ and ‘why’. As we have argued previously 1 , qualitative research has an important place within evidence-based healthcare, contributing to among other things policy on patient safety, 2 prescribing, 3 4 and understanding chronic illness. 5 Equally, it offers additional insight into quantitative studies, explaining contextual factors surrounding a successful intervention or why an intervention might have ‘failed’ or ‘succeeded’ where effect sizes cannot. It is for these reasons that the MRC strongly recommends including qualitative evaluations when developing and evaluating complex interventions. 6

Critical appraisal of qualitative research

Is it necessary?

Although the importance of qualitative research for improving health services and care is now widely supported (discussed in paper 1), the role of appraising the quality of qualitative health research is still debated. 8 10 Despite a large body of literature focusing on appraisal and rigour, 9 11–15 often referred to as ‘trustworthiness’ 16 in qualitative research, there remains debate about how to, and even whether to, critically appraise qualitative research. 8–10 17–19 However, if we are to make a case for qualitative research as integral to evidence-based healthcare, then any argument to omit a crucial element of evidence-based practice is difficult to justify. That being said, simply applying the standards of rigour used to appraise studies based on the positivist paradigm would be misplaced, given the different epistemological underpinnings of the two types of data. (Positivism depends on quantifiable observations to test hypotheses and assumes that the researcher is independent of the study. Research situated within a positivist paradigm is based purely on facts, considers the world to be external and objective, and is concerned with validity, reliability and generalisability as measures of rigour.)

Given its scope and its place within health research, the robust and systematic appraisal of qualitative research to assess its trustworthiness is as paramount to its implementation in clinical practice as any other type of research. It is important to appraise different qualitative studies in relation to the specific methodology used because the methodological approach is linked to the ‘outcome’ of the research (eg, theory development, phenomenological understandings and credibility of findings). Moreover, appraisal needs to go beyond merely describing the specific details of the methods used (eg, how data were collected and analysed), with additional focus needed on the overarching research design and its appropriateness in accordance with the study remit and objectives.

Poorly conducted qualitative research has been described as ‘worthless, becomes fiction and loses its utility’. 20 However, without a deep understanding of concepts of quality in qualitative research or at least an appropriate means to assess its quality, good qualitative research also risks being dismissed, particularly in the context of evidence-based healthcare where end users may not be well versed in this paradigm.

How is appraisal currently performed?

Appraising the quality of qualitative research is not a new concept; there are a number of published appraisal tools, frameworks and checklists in existence. 21–23 An important and often overlooked point is the confusion between tools designed for appraising methodological quality and reporting guidelines designed to assess the quality of methods reporting. An example is the Consolidated Criteria for Reporting Qualitative Research (COREQ) 24 checklist, which was designed to provide standards for authors when reporting qualitative research but is often mistaken for a methods appraisal tool. 10

Broadly speaking, there are two types of critical appraisal approaches for qualitative research: checklists and frameworks. Checklists have often been criticised for confusing quality in qualitative research with ‘technical fixes’, 21 25 resulting in the erroneous prioritisation of particular aspects of methodological processes over others (eg, multiple coding and triangulation). It could be argued that a checklist approach adopts the positivist paradigm, where the focus is on objectively assessing ‘quality’ and the assumption is that the researcher is independent of the research conducted. This may result in the application of quantitative understandings of bias when judging aspects of recruitment, sampling, data collection and analysis in qualitative research papers. One of the most widely used appraisal tools is the Critical Appraisal Skills Programme (CASP) tool, 26 which, along with the JBI QARI (Joanna Briggs Institute Qualitative Assessment and Review Instrument), 27 presents an example that tends to mimic the quantitative approach to appraisal. The CASP qualitative tool follows that of other CASP appraisal tools for quantitative research designs developed in the 1990s. The similarities are therefore unsurprising given the status of qualitative research at that time.

Frameworks focus on the overarching concepts of quality in qualitative research, including transparency, reflexivity, dependability and transferability (see box 1 ). 11–13 15 16 20 28 However, unless the reader is familiar with these concepts—their meaning and impact, and how to interpret them—they will have difficulty applying them when critically appraising a paper.

The main issue concerning currently available checklist and framework appraisal methods is that they take a broad-brush approach to ‘qualitative’ research as a whole, with few, if any, sufficiently differentiating between the different methodological approaches (eg, grounded theory, interpretative phenomenology, discourse analysis) or between different methods of data collection (interviewing, focus groups and observations). In this sense, it is akin to taking the entire field of ‘quantitative’ study designs and applying a single method or tool for their quality appraisal. Checklists therefore offer only a blunt and arguably ineffective tool for qualitative research and potentially promote an incomplete understanding of good ‘quality’ in qualitative research. Likewise, current framework methods do not take into account how concepts differ in their application across the variety of qualitative approaches and, like checklists, do not differentiate between different qualitative methodologies.

On the need for specific appraisal tools

Current approaches to the appraisal of the methodological rigour of the differing types of qualitative research converge towards checklists or frameworks. More importantly, the current tools do not explicitly acknowledge the prejudices that may be present in the different types of qualitative research.

Box 1: Concepts of rigour or trustworthiness within qualitative research 31

Transferability: the extent to which the presented study allows readers to make connections between the study’s data and wider community settings, ie, transfer conceptual findings to other contexts.

Credibility: extent to which a research account is believable and appropriate, particularly in relation to the stories told by participants and the interpretations made by the researcher.

Reflexivity: refers to the researchers’ engagement of continuous examination and explanation of how they have influenced a research project from choosing a research question to sampling, data collection, analysis and interpretation of data.

Transparency: making explicit the whole research process from sampling strategies, data collection to analysis. The rationale for decisions made is as important as the decisions themselves.

However, we often talk about these concepts in general terms, and it might be helpful to give some explicit examples of how the ‘technical processes’ affect these, for example, partialities related to:

Selection: recruiting participants via gatekeepers, such as healthcare professionals or clinicians, who may select them based on whether they believe them to be ‘good’ participants for interviews/focus groups.

Data collection: a poor interview guide with closed questions that encourage yes/no answers and/or leading questions.

Reflexivity and transparency: where researchers may focus their analysis on preconceived ideas rather than grounding it in the data, and do not reflect on the impact of this in a transparent way.

The lack of tailored, method-specific appraisal tools has potentially contributed to the poor uptake and use of qualitative research for informing evidence-based decision making. To improve this situation, we propose the need for more robust quality appraisal tools that explicitly encompass the core design aspects of all qualitative research (sampling/data collection/analysis) but also consider the specific partialities that can be present in different methodological approaches. Such tools might draw on the strengths of current frameworks and checklists while providing users with sufficient understanding of concepts of rigour in relation to the different types of qualitative methods. We provide an outline of such tools in the third and final paper in this series.

As qualitative research becomes ever more embedded in health science research, and for that research to have greater impact on healthcare decisions, we need to rethink critical appraisal and develop tools that allow differentiated evaluations of the myriad qualitative methodological approaches, rather than continuing to treat qualitative research as a single unified approach.



Understanding and evaluating qualitative research

Affiliation: School of Occupational Therapy, La Trobe University, Melbourne, Victoria, Australia.

  • PMID: 12406114
  • DOI: 10.1046/j.1440-1614.2002.01100.x

Qualitative research aims to address questions concerned with developing an understanding of the meaning and experience dimensions of humans' lives and social worlds. Central to good qualitative research is whether the research participants' subjective meanings, actions and social contexts, as understood by them, are illuminated. This paper aims to provide beginning researchers, and those unfamiliar with qualitative research, with an orientation to the principles that inform the evaluation of the design, conduct, findings and interpretation of qualitative research. It orients the reader to two philosophical perspectives, the interpretive and critical research paradigms, which underpin both the qualitative research methodologies most often used in mental health research, and how qualitative research is evaluated. Criteria for evaluating quality are interconnected with standards for ethics in qualitative research. They include principles for good practice in the conduct of qualitative research, and for trustworthiness in the interpretation of qualitative data. The paper reviews these criteria, and discusses how they may be used to evaluate qualitative research presented in research reports. These principles also offer some guidance about the conduct of sound qualitative research for the beginner qualitative researcher.


Qualitative research: its value and applicability

Published online by Cambridge University Press:  02 January 2018

Qualitative research has a rich tradition in the study of human social behaviour and cultures. Its general aim is to develop concepts which help us to understand social phenomena in, wherever possible, natural rather than experimental settings, to gain an understanding of the experiences, perceptions and/or behaviours of individuals, and the meanings attached to them. The effective application of qualitative methods to other disciplines, including clinical, health service and education research, has a rapidly expanding and robust evidence base. Qualitative approaches have particular potential in psychiatry research, singularly and in combination with quantitative methods. This article outlines the nature and potential application of qualitative research as well as attempting to counter a number of misconceptions.

Qualitative research has a rich tradition in the social sciences. Since the late 19th century, researchers interested in studying the social behaviour and cultures of humankind have perceived limitations in trying to explain the phenomena they encounter in purely quantifiable, measurable terms. Anthropology, in its social and cultural forms, was one of the foremost disciplines in developing what would later be termed a qualitative approach, founded as it was on ethnographic studies which sought an understanding of the culture of people from other societies, often hitherto unknown and far removed in geography.[Bernard 1] Early researchers would spend extended periods of time living in societies, observing, noting and photographing the minutiae of daily life, with the most committed often learning the language of the peoples they observed, in the hope of gaining greater acceptance by them and a more detailed understanding of the cultural norms at play. All academic disciplines concerned with human and social behaviour, including anthropology, sociology and psychology, now make extensive use of qualitative research methods whose systematic application was first developed by these colonial-era social scientists.

Their methods, involving observation, participation and discussion of the individuals and groups being studied, as well as reading related textual and visual media and artefacts, form the bedrock of all qualitative social scientific inquiry. The general aim of qualitative research is thus to develop concepts which help us to understand social phenomena in, wherever possible, natural rather than experimental settings, to gain an understanding of the experiences, perceptions and/or behaviours of those studied, and the meanings attached to them.[Bryman 2] Researchers interested in finding out why people behave the way they do, how people are affected by events, how attitudes and opinions are formed, and how and why cultures and practices have developed as they have, might well consider qualitative methods to answer their questions.

It is fair to say that clinical and health-related research is still dominated by quantitative methods, of which the randomised controlled trial, focused on hypothesis-testing through experiment controlled by randomisation, is perhaps the quintessential method. Qualitative approaches may seem obscure to the uninitiated when directly compared with the experimental, quantitative methods used in clinical research. There is increasing recognition among researchers in these fields, however, that qualitative methods such as observation, in-depth interviews, focus groups, consensus methods, case studies and the interpretation of texts can be more effective than quantitative approaches in exploring complex phenomena and as such are valuable additions to the methodological armoury available to them.[Denzin and Lincoln 3]

In considering what kind of research questions are best answered using a qualitative approach, it is important to remember that, first and foremost, unlike quantitative research, inquiry conducted in the qualitative tradition seeks to answer the question ‘What?’ as opposed to ‘How often?’. Qualitative methods are designed to reveal what is going on by describing and interpreting phenomena; they do not attempt to measure how often an event or association occurs. Research conducted using qualitative methods is normally done with an intent to preserve the inherent complexities of human behaviour as opposed to assuming a reductive view of the subject in order to count and measure the occurrence of phenomena. Qualitative research normally takes an inductive approach, moving from observation to hypothesis rather than hypothesis-testing or deduction, although the latter is perfectly possible.

When conducting research in this tradition, the researcher should, if possible, avoid separating the stages of study design, data collection and analysis, but instead weave backwards and forwards between the raw data and the process of conceptualisation, thereby making sense of the data throughout the period of data collection. Although there are inevitable tensions among methodologists concerned with qualitative practice, there is broad consensus that a priori categories and concepts reflecting a researcher's own preconceptions should not be imposed on the process of data collection and analysis. The emphasis should be on capturing and interpreting research participants' true perceptions and/or behaviours.

Using combined approaches

The polarity between qualitative and quantitative research has been largely assuaged, to the benefit of all disciplines which now recognise the value, and compatibility, of both approaches. Indeed, there can be particular value in using quantitative methods in combination with qualitative methods.[Barbour 4] In the exploratory stages of a research project, qualitative methodology can be used to clarify or refine the research question, to aid conceptualisation and to generate a hypothesis. It can also help to identify the correct variables to be measured, as researchers have been known to measure before they fully understand the underlying issues pertaining to a study and, as a consequence, may not always target the most appropriate factors. Qualitative work can be valuable in the interpretation, qualification or illumination of quantitative research findings. This is particularly helpful when focusing on anomalous results, since these put the main hypothesis to the test. Qualitative methods can also be used in combination with quantitative methods to triangulate findings and support the validation process, for example, where three or more methods are used and the results compared for similarity (e.g. a survey, interviews and a period of observation in situ).

‘There is little value in qualitative research findings because we cannot generalise from them’

Generalisability refers to the extent to which an account can be applied to people, times and settings other than those actually studied. A common criticism of qualitative research is that the results of a study are rarely, if ever, generalisable to a larger population because the sample groups are small and the participants are not chosen randomly. Such criticism fails to recognise the distinctiveness of qualitative research where sampling is concerned. In quantitative research, the intent is to secure a large random sample that is representative of the general population, with the purpose of eliminating individual variations, focusing on generalisations and thereby allowing for statistical inference of results that are applicable across an entire population. In qualitative research, generalisability rests instead on the premise that it is valuable to begin to understand similar situations or people, rather than on the sample being representative of the target population. Qualitative research is rarely based on the use of random samples, so the kinds of inference to wider populations made on the basis of surveys cannot be made in qualitative analysis.

Qualitative researchers utilise purposive sampling, whereby research participants are selected deliberately to test a particular theoretical premise. The purpose of sampling here is not to identify a random subgroup of the general population from which statistically significant results can be extrapolated, but rather to identify, in a systematic way, individuals who possess relevant characteristics for the question being considered.[Strauss and Corbin 5] The researchers must instead ensure that any reference to people and settings beyond those in the study is justified, which is normally achieved by defining, in detail, the type of settings and people to whom the explanation or theory applies, based on the identification of similar settings and people in the study. The intent is to permit a detailed examination of the phenomenon, resulting in a text-rich interpretation that can deepen our understanding and produce a plausible explanation of the phenomenon under study. The results are not intended to be statistically generalisable, although any theory they generate might well be.

‘Qualitative research cannot really claim reliability or validity’

In quantitative research, reliability is the extent to which different observers, or the same observers on different occasions, make the same observations or collect the same data about the same object of study. The changing nature of social phenomena scrutinised by qualitative researchers inevitably makes the possibility of the same kind of reliability problematic in their work. A number of alternative concepts to reliability have been developed by qualitative methodologists, however, known collectively as forms of trustworthiness.[Guba 6]

One way to demonstrate trustworthiness is to present detailed evidence in the form of quotations from interviews and field notes, along with thick textual descriptions of episodes, events and settings. To be trustworthy, qualitative analysis should also be auditable, making it possible to retrace the steps leading to a certain interpretation or theory to check that no alternatives were left unexamined and that no researcher biases had any avoidable influence on the results. Usually, this involves the recording of information about who did what with the data and in what order so that the origin of interpretations can be retraced.

In general, within the research traditions of the natural sciences, findings are validated by their repeated replication, and if a second investigator cannot replicate the findings when they repeat the experiment then the original results are questioned. If no one else can replicate the original results then they are rejected as fatally flawed and therefore invalid. Natural scientists have developed a broad spectrum of procedures and study designs to ensure that experiments are dependable and that replication is possible. In the social sciences, particularly when using qualitative research methods, replication is rarely possible given that, when observed or questioned again, respondents will almost never say or do precisely the same things. Whether results have been successfully replicated is always a matter of interpretation. There are, however, procedures that, if followed, can significantly reduce the possibility of producing analyses that are partial or biased.[Altheide, Johnson, Denzin and Lincoln 7]

Triangulation is one way of doing this. It essentially means combining multiple views, approaches or methods in an investigation to obtain a more accurate interpretation of the phenomena, thereby creating an analysis of greater depth and richness. As the process of analysing qualitative data normally involves some form of coding, whereby data are broken down into units of analysis, constant comparison can also be used. Constant comparison involves checking the consistency and accuracy of interpretations, and especially the application of codes, by constantly comparing one interpretation or code with others of a similar sort, as well as with those in other cases and settings. This is in effect a form of interrater reliability: multiple researchers or teams take part in the coding process so that it is possible to compare how they have coded the same passages, identify areas of agreement and disagreement, and reach consensus about a code's definition, improving consistency and rigour. It is also good practice in qualitative analysis to look constantly for outliers – results that are out of line with your main findings, or any which directly contradict what your explanations might predict – and to re-examine the data to find a way of explaining the atypical finding, producing a modified and more complex theory and explanation.
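To make the interrater check concrete, here is a minimal sketch in Python, assuming scikit-learn is installed; the passages and code labels are hypothetical, and this is just one way to quantify agreement, not a procedure prescribed by the article. Cohen's kappa summarizes agreement between two coders beyond what chance alone would produce, and the disagreements it surfaces are exactly the passages worth revisiting through constant comparison.

```python
# A minimal sketch, assuming scikit-learn. The labels below are hypothetical
# codes two researchers might assign to the same six interview passages.
from sklearn.metrics import cohen_kappa_score

coder_a = ["stigma", "access", "stigma", "cost", "access", "stigma"]
coder_b = ["stigma", "access", "cost",   "cost", "access", "stigma"]

# Kappa corrects raw percentage agreement for agreement expected by chance.
kappa = cohen_kappa_score(coder_a, coder_b)
print(f"Cohen's kappa: {kappa:.2f}")  # 0.75 for these labels

# Disagreements are candidates for discussion until consensus is reached
# on each code's definition.
for i, (a, b) in enumerate(zip(coder_a, coder_b)):
    if a != b:
        print(f"Passage {i}: coder A coded '{a}', coder B coded '{b}'")
```

In practice, teams iterate: discuss the disagreements, refine the codebook, re-code, and re-compute agreement until the definitions hold up.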

Qualitative research has been established for many decades in the social sciences and encompasses a valuable set of methodological tools for data collection, analysis and interpretation. Their effective application to other disciplines, including clinical, health service and education research, has a rapidly expanding and robust evidence base. The use of qualitative approaches to research in psychiatry has particular potential, singularly and in combination with quantitative methods.[Crabb and Chur-Hansen 8] When devising research questions in the specialty, careful thought should always be given to the most appropriate methodology, and consideration given to the great depth and richness of empirical evidence which a robust qualitative approach is able to provide.


Steven J. Agius. The Psychiatrist, Volume 37, Issue 6. DOI: https://doi.org/10.1192/pb.bp.113.042770


Qualitative Research with Socio-Technical Grounded Theory

A Practical Guide to Qualitative Data Analysis and Theory Development in the Digital World

  • © 2024
  • Rashina Hoda, ORCID: https://orcid.org/0000-0001-5147-8096

Faculty of Information Technology, Monash University, Melbourne, Australia


  • Enables researchers to conduct rigorous qualitative research on socio-technical topics in the digital world
  • Covers the foundations of research including research design, research philosophy, and literature review
  • Explains qualitative data collection, preparation, data analysis, theory development, and the possible role of AI


About this book

This book is a timely and practical guide to conducting qualitative research with a socio-technical approach. It covers the foundations of research including research design, research philosophy, and literature review; describes qualitative data collection, qualitative data preparation and filtering; explains qualitative data analysis using the techniques of socio-technical grounded theory (STGT); and presents the advanced techniques of qualitative theory development using emergent or structured modes. It provides guidance on evaluating qualitative research application and outcomes; and explores the possible role of Artificial Intelligence (AI) in qualitative research in the future.

The book is structured into five parts. Part I – Introduction includes three chapters that serve to provide: an overview of the book in Chapter 1; a brief history of the origins and evolution of the GT methods in Chapter 2; and an introduction to STGT in Chapter 3. Part II – Foundations of Research includes three chapters that cover: the building blocks of empirical research through a simple yet powerful approach to designing research methods (the research design canvas) in Chapter 4; the fundamental concepts of research philosophy in Chapter 5; and the myriad of literature review methods including those suited to STGT in Chapter 6. Part III – Qualitative Data Collection and Analysis includes four chapters that explain: the key concepts related to collecting qualitative data in Chapter 7; techniques used for collecting qualitative data in Chapter 8; how to go about preparing and filtering qualitative data in Chapter 9; and the qualitative data analysis procedures of open coding, constant comparison, and memoing in Chapter 10. Part IV – Theory Development includes two chapters that explain: what is considered theory (or theoretical outcomes) in Chapter 11; and the advanced STGT steps of theory development in Chapter 12. Finally, Part V – Evaluation and Future Directions includes two chapters that: present the evaluation guidelines for assessing STGT applications and outcomes in Chapter 13; and explore new opportunities in qualitative research using large language models in Chapter 14.

This book enables new and experienced researchers in modern as well as traditional disciplines to conduct rigorous qualitative research on socio-technical topics in the digital world. They will be able to approach qualitative research with confidence and produce valuable research outcomes in the form of rich descriptive findings, taxonomies, theoretical models, theoretical frameworks, preliminary and mature theories, recommendations, and guidelines, all grounded in empirical evidence.

  • Empirical Software Engineering
  • Design Science
  • Grounded Theory
  • Behavioral Science
  • Collaborative and Social Computing

Table of contents (14 chapters)

  • Part I – Introduction: About This Book; Traditional Grounded Theory Methods; Socio-Technical Grounded Theory: An Overview
  • Part II – Foundations of Research: Research Design Canvas; Research Philosophy; Literature Reviews
  • Part III – Qualitative Data Collection and Analysis: Basics of Qualitative Data Collection; Techniques of Qualitative Data Collection; Qualitative Data Preparation and Filtering; Socio-Technical Grounded Theory for Qualitative Data Analysis
  • Part IV – Theory Development: What is Theory; Theory Development
  • Part V – Evaluation and Future Directions: Evaluation Guidelines; Future Directions in Qualitative Research

“This book is a valuable and timely contribution to the software engineering (SE) community. It is a gold mine for PhD students who may learn by examples and be empowered to teach their supervisors to adopt socio-technical grounded theory to further advance SE research. It is published at the right time, when the acceptance for qualitative research is growing, and before new attempts to quantify it ... have taken ground, and before ChatGPT’s successors have taken over.”  (Prof. Per Runeson, Lund University, Sweden [from the Foreword])

About the author

Rashina Hoda is a Professor of Software Engineering in the Faculty of Information Technology at Monash University, Australia, where she researches the human and socio-technical aspects of software engineering (SE) and artificial intelligence. Rashina has been immersed in empirical qualitative research for two decades, practicing, supervising, reviewing, and editing qualitative SE research. She is an active member of the international SE research community, serving as an Associate Editor of IEEE Transactions on Software Engineering and on the organizing and program committees for top-rated international conferences such as ICSE, CHASE, FSE, ICIS, and XP. She is a public speaker, presenting her research at Agile conferences, industry and public events, and in the media.

Bibliographic Information

Book Title: Qualitative Research with Socio-Technical Grounded Theory

Book Subtitle: A Practical Guide to Qualitative Data Analysis and Theory Development in the Digital World

Authors: Rashina Hoda

DOI: https://doi.org/10.1007/978-3-031-60533-8

Publisher: Springer Cham

eBook Packages: Computer Science, Computer Science (R0)

Copyright Information: The Editor(s) (if applicable) and The Author(s), under exclusive license to Springer Nature Switzerland AG 2024

Hardcover ISBN: 978-3-031-60532-1 (published 14 September 2024)

Softcover ISBN: 978-3-031-60535-2 (due 28 September 2025)

eBook ISBN: 978-3-031-60533-8 (published 13 September 2024)

Edition Number: 1

Number of Pages: XXV, 368

Number of Illustrations: 6 b/w illustrations, 50 illustrations in colour

Topics: Software Engineering/Programming and Operating Systems, IT in Business


9 Data Collection Methods in Qualitative Research

Explore top methods for collecting qualitative data, from interviews to social media monitoring, to gain deeper customer insights for your strategy.

In the world of customer insights, having access to the right data is crucial. Numbers and metrics can provide valuable direction, but they often fail to capture the full picture of how your customers truly feel, what they need, or why they behave in certain ways.

That’s where qualitative research shines. Using multiple qualitative data collection methods is like casting a wider net for insights — the more varied your approach, the better your chances of capturing nuanced feedback that standard surveys might miss.

Whether it’s through in-depth interviews or mining customer chat logs, the diversity of data sources can help build a robust understanding of your customers’ experiences.

In this article, we’ll cover the top methods you can use to collect qualitative data to inform your customer experience strategy.

Table of contents

  • Qualitative vs Quantitative Methods
  • 9 Essential Qualitative Data Collection Methods
    1. In-depth Interviews
    2. Focus Groups
    3. Observational Research
    4. Case Studies
    5. Surveys with Open-ended Questions
    6. Ethnographic Research
    7. Customer Support Center Chat History
    8. Social Media Conversation Monitoring
    9. Review Sites
  • Pitfalls to Avoid in Qualitative Data Collection
  • Analyzing Qualitative Data

Qualitative vs Quantitative Methods

When it comes to gathering customer insights, there are two main avenues: qualitative and quantitative research. Both are crucial, but they serve different purposes.

Quantitative methods rely on numerical data. Think of it as your go-to for answering “how many?” and “how much?” questions. It’s all about measurable facts, trends, and patterns. For example, you might run a large-scale survey asking customers to rate their satisfaction on a 1-10 scale, and you’ll get hard numbers to analyze. This kind of data is easy to visualize in graphs and charts, which helps you track customer satisfaction metrics like NPS or CSAT scores over time.

But qualitative methods? This is where you dig deeper. These methods focus on the “why” and “how,” uncovering insights into the emotions, motivations, and thought processes behind customer behaviors. Instead of numerical data, qualitative research gives you rich, detailed feedback in the form of words. It’s an excellent way to get to the heart of customer experiences and understand their pain points on a human level.

Why Qualitative Research Is Critical for Customer Experience Strategy

Quantitative data can tell you what’s happening, but qualitative data tells you why it’s happening. If your quantitative research shows a drop in customer satisfaction scores, qualitative research can explain why. By diving into customer stories, open-ended survey responses, or even analyzing chat logs, you gain invaluable insights into where things might be going wrong (or right!).

Let’s dive into the most impactful methods you can use to gather valuable customer insights. Each of these methods offers a unique lens into the customer experience, helping you build a comprehensive understanding of your audience.

9 Essential Qualitative Data Collection Methods

1. In-Depth Interviews

In-depth interviews are one-on-one conversations where the researcher asks open-ended questions, allowing the customer to share their thoughts and experiences in detail. These interviews are incredibly useful when you want to understand the “why” behind customer behavior or preferences.

Maximizing the method: To get the most out of in-depth interviews, focus on creating a comfortable environment where participants feel free to express their honest opinions. Listen actively, ask follow-up questions, and don’t shy away from allowing the conversation to go off-script if it leads to richer insights.

Example: Imagine you’re an insights manager at a retail brand conducting an in-depth interview with a frequent shopper. By asking about their shopping habits, you can uncover that the customer values sustainability and chooses brands with eco-friendly packaging. This insight could inform future product packaging decisions.


2. Focus Groups

A focus group is a facilitated discussion with a small group of customers – usually around 6-10 people. The goal is to encourage interaction between participants, sparking conversations that reveal insights through group dynamics. The collective experience of a focus group can surface opinions that may not emerge in individual interviews.

Maximizing the method: Ensure that the focus group facilitator is skilled at guiding discussions without leading them. It’s important to let the conversation flow naturally, but the facilitator should know when to probe deeper or refocus the group when necessary.

Example: Let’s say a tech company runs a focus group with power users of their app. During the session, one participant mentions a feature they find confusing, which prompts others to agree. This shared feedback provides the company with a clear signal to revisit that feature for usability improvements.


3. Observational Research

Observational research (sometimes called field research) involves observing customers in their natural environment, whether it’s a store, website, or another setting. Instead of asking questions, researchers watch how customers interact with products, services, or environments in real-time.

Maximizing the method: The key to observational research is to remain unobtrusive. Customers should behave naturally without being influenced by the researcher’s presence. It’s also crucial to take detailed notes on both the behaviors you expected and any surprising actions that arise.

Example: A coffee shop chain might use observational research to see how customers navigate their in-store experience. Do they head straight to the counter or linger at the menu? Are they confused about the ordering process? These observations could highlight ways to improve the store layout or ordering flow.


4. Case Studies

Case studies are in-depth analyses of individual customer experiences, often focusing on how a product or service has solved a specific problem for them. By following a single customer’s journey from problem to solution, case studies offer detailed narratives that can illustrate the broader impact of your offerings.

Maximizing the method: Choose case study subjects that reflect common challenges or experiences within your customer base. The more relatable the story, the more likely other customers will see themselves in the narrative.

Example: A B2B SaaS company could create a case study around a client that successfully used their software to reduce employee churn. By detailing the challenges, implementation, and results, the case study could serve as a powerful testimonial for potential clients.


5. Surveys with Open-Ended Questions

While surveys are typically quantitative, surveys with open-ended questions provide a qualitative element by allowing customers to write out their responses in their own words. This method bridges the gap between structured data and personal insights, making it easier to spot recurring themes or unique perspectives.

Maximizing the method: Be strategic with the placement of open-ended questions. Too many can overwhelm respondents, but including one or two at key points in your survey allows for deeper insights without causing survey fatigue.

Example: A travel company might send out a post-trip survey asking, “What was the most memorable part of your experience?” The open-ended responses could reveal customer preferences that the company wasn’t previously aware of, informing future offerings or services.


6. Ethnographic Research

Ethnographic research takes immersion to a new level. In this method, researchers embed themselves in the customer’s environment for extended periods to observe and experience their behaviors firsthand. It’s about gaining a deep understanding of customer culture, motivations, and interactions.

Maximizing the method: This method works best when researchers fully integrate into the customer’s world, whether that’s living among a target community or spending time on-site with customers in their daily routines. It’s a time-intensive process, but the insights can be incredibly rich.

Example: A researcher for a clothing brand might spend several weeks with a group of customers, observing how they shop for and wear clothes in their daily lives. This immersive research could uncover nuanced preferences about fabric types, fit, and style that surveys alone wouldn’t reveal.


7. Customer Support Center Chat History

Your customer support center chat history can be a treasure trove of qualitative data. By analyzing conversations between customers and support agents, you can identify recurring issues, concerns, and sentiments that might not surface in formal surveys or interviews. This method provides an authentic view of how customers feel in real-time as they interact with your brand for problem-solving.

Maximizing the method: Use text analysis tools to sift through large volumes of chat data, identifying common themes and patterns. Pay special attention to moments of frustration or satisfaction, as these often hold the key to customer experience improvements.

Example: A software company analyzes its chat history and notices that many customers express confusion about a particular feature. This insight leads the product team to create clearer in-app tutorials, ultimately reducing the number of support requests related to that feature.
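As a toy illustration of the kind of text analysis described above, the sketch below counts theme mentions in chat transcripts with plain Python. The theme keywords and messages are hypothetical, and dedicated text analysis tools use far richer language models; the point is only the underlying idea of mapping raw chat text onto recurring themes.

```python
# A minimal sketch of surfacing recurring themes in support chats.
# THEME_KEYWORDS and the sample messages are hypothetical.
from collections import Counter

THEME_KEYWORDS = {
    "billing": ["invoice", "charge", "refund", "billing"],
    "onboarding": ["setup", "install", "getting started"],
    "feature_confusion": ["confusing", "can't find", "how do i"],
}

chats = [
    "Hi, I was double charged on my last invoice.",
    "The export button is confusing, how do I download my data?",
    "Setup went fine but I can't find the reporting page.",
]

theme_counts = Counter()
for message in chats:
    text = message.lower()
    for theme, keywords in THEME_KEYWORDS.items():
        # Count each theme at most once per message.
        if any(kw in text for kw in keywords):
            theme_counts[theme] += 1

for theme, count in theme_counts.most_common():
    print(theme, count)
```

Even this crude keyword matching would flag "feature_confusion" as the most frequent theme in the sample above; real tools refine the same idea with statistical and machine-learned models.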


8. Social Media Conversation Monitoring

Social media platforms are filled with candid, unsolicited customer feedback. Social media conversation monitoring involves tracking brand mentions, hashtags, and keywords to gauge customer sentiment and uncover insights about your audience. This method gives you access to a wide range of voices, including those who may never participate in formal research.

Maximizing the method: Leverage social listening tools to automate the process of monitoring and analyzing conversations across platforms like Instagram, Meta, or X. Be sure to track both direct mentions of your brand and broader industry-related conversations that could reveal trends or shifting customer preferences.

Example: A beauty brand might notice that customers are frequently discussing a competitor’s eco-friendly packaging on social media. By monitoring this trend, the brand could introduce more sustainable packaging solutions to align with emerging customer values.


9. Review Sites

Review sites such as Yelp, Google Reviews, and Trustpilot are another goldmine for qualitative data. Customers who leave reviews are often highly motivated to share their experiences, whether positive or negative. By mining these reviews, you can gather insights into customer satisfaction, product issues, and potential areas for improvement.

Maximizing the method: Don’t just focus on star ratings—read through the text of each review to extract the underlying emotions and motivations. Look for patterns in the language used and the specific aspects of your product or service that are frequently mentioned.

Example: A restaurant chain may notice through online reviews that customers often comment on the long wait times during dinner hours. This feedback prompts management to reassess staffing levels during peak times, improving both operational efficiency and customer satisfaction.

As with any research process, there are a few key pitfalls to watch out for when collecting qualitative data. Avoiding these three common mistakes will ensure that your insights are both accurate and actionable.

Pitfalls to Avoid in Qualitative Data Collection

1. Bias in Data Collection

Bias can creep into qualitative research in many forms, from how questions are phrased in interviews or surveys to how data is interpreted. For example, leading questions might push respondents toward a specific answer. Similarly, during observational research or focus groups, the presence or behavior of the researcher could unintentionally influence participants.

How to avoid it: Ensure your research methods are designed to be neutral and that questions are open-ended. It’s also important to train researchers to minimize their influence during interviews or observations. Using standardized protocols can help maintain consistency across different data collection methods.


2. Over-reliance on a Single Method

While one method may seem like the easiest or most convenient to implement, relying solely on one form of data collection can lead to incomplete or skewed insights. For example, in-depth interviews might provide detailed information, but they won’t capture broad patterns across your entire customer base.

How to avoid it: Combine multiple data collection methods, like surveys, focus groups, and social media monitoring, to get a fuller picture. Each method will reveal different aspects of customer experience, and when analyzed together, they provide more comprehensive insights.


3. Failing to Document the Research Process

One of the easiest ways to undermine the quality of your qualitative data is by failing to document the research process adequately. Without a clear record of how data was collected, analyzed, and interpreted, it becomes difficult to validate findings or replicate the study in the future.

How to avoid it: Keep detailed notes, records, and transcriptions of every stage of the research process. Having a clear audit trail ensures that your findings are credible and can be trusted by decision-makers.
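One lightweight way to keep such an audit trail is an append-only log. The sketch below is a minimal illustration in Python; the field names and file path are hypothetical, and real projects might rely on a QDA package's built-in logging instead. The point is simply to timestamp who did what, in what order.

```python
# A minimal sketch of an append-only audit trail for a qualitative project,
# recorded as JSON lines. Field names and the file path are hypothetical.
import json
from datetime import datetime, timezone

AUDIT_FILE = "audit_trail.jsonl"  # hypothetical location

def log_step(researcher: str, action: str, detail: str) -> None:
    entry = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "researcher": researcher,
        "action": action,   # e.g. "transcribed", "coded", "merged codes"
        "detail": detail,
    }
    # Appending (never rewriting) preserves the order of decisions.
    with open(AUDIT_FILE, "a", encoding="utf-8") as f:
        f.write(json.dumps(entry) + "\n")

log_step("AB", "coded", "Applied 'wait times' code to interview 07, lines 12-40")
log_step("CD", "merged codes", "Folded 'queueing' into 'wait times' after team review")
```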

Analyzing Qualitative Data

With these qualitative data collection methods at your disposal, you’ll find yourself with a wealth of unstructured qualitative data. While an abundance of data is valuable, it also presents a significant challenge: how to make sense of it all efficiently.

This is where advanced tools and technology come into play.

The Challenge of Unstructured Data

Qualitative research methods produce, by their nature, unstructured data. Whether you’re working with transcripts from focus groups, feedback from review sites, or social media conversations, the data doesn’t neatly fit into rows and columns like quantitative data does. Instead, you’re dealing with text—rich, narrative-driven, and full of context. This makes it incredibly insightful but also hard to analyze manually.

Manually categorizing themes, identifying patterns, and summarizing key takeaways from large datasets is time-consuming and prone to human error. It’s easy to miss out on emerging trends or nuances that could offer strategic value, especially if you're dealing with diverse data sources.
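To see why automation helps, consider a deliberately naive version of theme discovery: vectorize the feedback and cluster it. The sketch below uses scikit-learn's TF-IDF and k-means as a generic illustration only; it is not Kapiche's algorithm, the feedback strings are hypothetical, and production systems use far more sophisticated models.

```python
# A minimal sketch of automatic theme clustering, assuming scikit-learn.
# A generic illustration of the idea, not any vendor's implementation.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans

feedback = [
    "Checkout kept failing on my phone",
    "The mobile checkout crashed twice",
    "Support replied within minutes, great service",
    "Really helpful and fast support team",
]

vectorizer = TfidfVectorizer(stop_words="english")
X = vectorizer.fit_transform(feedback)

kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)

# Label each cluster by its highest-weighted terms to suggest a theme name.
terms = vectorizer.get_feature_names_out()
for c in range(kmeans.n_clusters):
    top = kmeans.cluster_centers_[c].argsort()[::-1][:3]
    members = [f for f, lab in zip(feedback, kmeans.labels_) if lab == c]
    print(f"Theme {c}: {[terms[i] for i in top]} -> {len(members)} comments")
```

On this tiny sample, the clustering would typically separate the checkout complaints from the praise for support, hinting at what purpose-built auto-theming does across thousands of comments.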

How Kapiche’s AI-Powered Auto-Theming Can Help

Kapiche’s automatic theming feature is designed to solve this problem. By leveraging AI-powered technology, Kapiche cleans, categorizes, and analyzes your text data quickly and accurately. The platform automatically identifies themes, clusters related data points, and even provides summaries that help you interpret what your customers are saying.


For example, Kapiche can scan through customer support chat histories or social media mentions and instantly group similar pieces of feedback together—whether customers are talking about product performance, customer service, or price sensitivity. With these insights readily available, you can take faster action to improve your customer experience.

Benefits of Auto-Theming for Insights Managers

Here's how auto-theming can transform your qualitative data analysis:

Speed and Efficiency: Automating the process saves you countless hours of manual work.

Comprehensive Analysis: By aggregating data from multiple sources, you get a fuller picture of customer sentiment across various touchpoints.

Uncover Hidden Insights: The AI detects patterns that you might not notice through manual analysis, offering deeper insights into customer behavior.

Actionable Summaries: Instead of wading through raw text, Kapiche provides concise summaries of key themes and trends, enabling you to act on insights faster.

With tools like this at your disposal, the overwhelming task of analyzing qualitative data becomes manageable, empowering your insights team to make data-driven decisions more effectively.

Let Us Help You

Navigating the complexities of qualitative data collection and analysis can be challenging, but you don’t have to do it alone. At Kapiche, we’re committed to helping insights teams like yours make the most of your qualitative customer data.

Our AI-powered auto-theming capabilities simplify the process by automatically categorizing, analyzing, and summarizing your data. This means you can quickly uncover key insights and trends without getting bogged down by the sheer volume of unstructured information.

Ready to see how Kapiche can transform your research process? Click the link below to watch an on-demand demo and discover how our platform can enhance your customer insights strategy.

Book a Demo with Kapiche


J Family Med Prim Care. 2015 Jul-Sep; 4(3).

Validity, reliability, and generalizability in qualitative research

Lawrence Leung

1 Department of Family Medicine, Queen's University, Kingston, Ontario, Canada

2 Centre of Studies in Primary Care, Queen's University, Kingston, Ontario, Canada

In general practice, qualitative research contributes as significantly as quantitative research, in particular regarding psycho-social aspects of patient care, health services provision, policy setting, and health administration. In contrast to quantitative research, qualitative research as a whole has been constantly critiqued, if not disparaged, for the lack of consensus on assessing its quality and robustness. This article illustrates with five published studies how qualitative research can impact and reshape the discipline of primary care, spiraling out from clinic-based health screening and community-based disease monitoring, through evaluation of out-of-hours triage services and a provincial psychiatric care pathways model, to national legislation of core measures for children's healthcare insurance. Fundamental concepts of validity, reliability, and generalizability as applicable to qualitative research are then addressed, with an update on current views and controversies.

Nature of Qualitative Research versus Quantitative Research

The essence of qualitative research is to make sense of and recognize patterns among words in order to build up a meaningful picture without compromising its richness and dimensionality. Like quantitative research, qualitative research aims to seek answers to questions of "how, where, when, who and why" with a perspective to build a theory or refute an existing theory. Unlike quantitative research, which deals primarily with numerical data and their statistical interpretations under a reductionist, logical and strictly objective paradigm, qualitative research handles nonnumerical information and its phenomenological interpretation, which inextricably ties in with human senses and subjectivity. While human emotions and perspectives from both subjects and researchers are considered undesirable biases confounding results in quantitative research, the same elements are considered essential and inevitable, if not treasurable, in qualitative research, as they invariably add extra dimensions and colors to enrich the corpus of findings. However, the issue of subjectivity and its contextual ramifications has fueled incessant controversy over yardsticks for the quality and trustworthiness of qualitative research results in healthcare.

Impact of Qualitative Research upon Primary Care

In many ways, qualitative research contributes significantly, if not more so than quantitative research, to the field of primary care at various levels. Five qualitative studies are chosen to illustrate how various methodologies of qualitative research helped in advancing primary healthcare, from novel monitoring of chronic obstructive pulmonary disease (COPD) via mobile-health technology,[1] informed decision-making for colorectal cancer screening,[2] triaging out-of-hours GP services,[3] evaluating care pathways for community psychiatry,[4] and finally prioritization of healthcare initiatives for legislation purposes at the national level.[5]

With the recent advances of information technology and mobile connected devices, self-monitoring and management of chronic diseases via tele-health technology may seem beneficial to both the patient and healthcare provider. Recruiting COPD patients who were given tele-health devices that monitored lung function, Williams et al.[1] conducted phone interviews and analyzed their transcripts via a grounded theory approach, identifying themes which enabled them to conclude that such a mobile-health setup and application helped to engage patients, with better adherence to treatment and overall improvement in mood. Such positive findings were in contrast to previous studies, which opined that elderly patients were often challenged by operating computer tablets[6] or conversing with the tele-health software.[7]

To explore the content of recommendations for colorectal cancer screening given out by family physicians, Wackerbarth et al.[2] conducted semi-structured interviews with subsequent content analysis and found that most physicians delivered information to enrich patient knowledge with little regard to patients' true understanding, ideas, and preferences in the matter. These findings suggested room for improvement for family physicians to better engage their patients in recommending preventative care.

Faced with various models of out-of-hours triage services for GP consultations, Egbunike et al.[3] conducted thematic analysis on semi-structured telephone interviews with patients and doctors in various urban, rural and mixed settings. They found that the efficiency of triage services remained a prime concern for both users and providers, alongside issues of access to doctors and unfulfilled or mismatched expectations from users, which could arouse dissatisfaction and have legal implications.

In the UK, a care pathways model for community psychiatry had been introduced but its benefits were unclear. Khandaker et al.[4] hence conducted a qualitative study using semi-structured interviews with medical staff and other stakeholders; adopting a grounded-theory approach, they identified major themes including improved equality of access, more focused logistics, increased work throughput and better accountability for community psychiatry provided under the care pathway model.

Finally, at the US national level, Mangione-Smith et al.[5] employed a modified Delphi method to gather consensus from a panel of nominators who were recognized experts and stakeholders in their disciplines, and identified a core set of quality measures for children's healthcare under the Medicaid and Children's Health Insurance Program. These core measures were made transparent for public opinion and later passed on for full legislation, illustrating the impact of qualitative research upon social welfare and policy improvement.

Overall Criteria for Quality in Qualitative Research

Given the diverse genera and forms of qualitative research, there is no single consensus standard for assessing a piece of qualitative research work. Various approaches have been suggested, the two leading schools of thought being that of Dixon-Woods et al.,[8] which emphasizes methodology, and that of Lincoln et al.,[9] which stresses rigor in the interpretation of results. By identifying commonalities across qualitative research, Dixon-Woods produced a checklist of questions for assessing the clarity and appropriateness of the research question; the description of, and justification for, sampling, data collection and data analysis; the level of support and evidence for claims; coherence among data, interpretation and conclusions; and, finally, the level of contribution of the paper. These criteria underpin the 10 questions of the Critical Appraisal Skills Programme checklist for qualitative studies.[10] However, such methodology-weighted criteria may not do justice to qualitative studies that differ in epistemological and philosophical paradigms,[11,12] one classic example being positivist versus interpretivist work.[13] Equally, without a robust methodological foundation, the rigorous interpretation of results advocated by Lincoln et al.[9] cannot stand on its own. Meyrick[14] argued from a different angle and proposed the dual core criteria of “transparency” and “systematicity” for good-quality qualitative research. In brief, every step of the research logistics (from theory formation, study design, sampling, data acquisition and analysis through to results and conclusions) must be shown to be transparent and systematic; in this manner, both the research process and its results can be assured of rigor and robustness.[14] Finally, Kitto et al.[15] distilled six criteria for assessing the overall quality of qualitative research: (i) clarification and justification, (ii) procedural rigor, (iii) sample representativeness, (iv) interpretative rigor, (v) reflexive and evaluative rigor and (vi) transferability/generalizability, which also double as evaluative landmarks for manuscript review at the Medical Journal of Australia. As with quantitative research, the quality of qualitative research can thus be assessed in terms of validity, reliability, and generalizability.

Validity

Validity in qualitative research means “appropriateness” of the tools, processes, and data: whether the research question is valid for the desired outcome, the choice of methodology appropriate for answering the research question, the design valid for the methodology, the sampling and data analysis appropriate, and, finally, the results and conclusions valid for the sample and context. In assessing the validity of qualitative research, the challenge can start from the ontology and epistemology of the issue being studied; for example, the concept of the “individual” is seen differently by humanistic and positive psychologists owing to their differing philosophical perspectives:[16] where humanistic psychologists believe the “individual” is a product of existential awareness and social interaction, positive psychologists hold that the “individual” exists side-by-side with the formation of any human being. Setting off along these different pathways, qualitative research on an individual's wellbeing will reach conclusions of differing validity. The choice of methodology must enable detection of the findings/phenomena in the appropriate context for it to be valid, with due regard to cultural and contextual variables. For sampling, procedures and methods must be appropriate for the research paradigm and must distinguish among systematic,[17] purposeful[18] and theoretical (adaptive) sampling,[19,20] where systematic sampling involves no a priori theory, purposeful sampling follows a certain aim or framework, and theoretical sampling is molded by the ongoing process of data collection and the theory in evolution. For data extraction and analysis, several methods can be adopted to enhance validity, including first-tier triangulation (of researchers) and second-tier triangulation (of resources and theories),[17,21] a well-documented audit trail of materials and processes,[22,23,24] multidimensional analysis as concept- or case-orientated[25,26] and respondent verification.[21,27]

Reliability

In quantitative research, reliability refers to the exact replicability of processes and results. In qualitative research, with its diverse paradigms, such a definition of reliability is challenging and epistemologically counter-intuitive. Hence, the essence of reliability in qualitative research lies in consistency.[24,28] A margin of variability in results is tolerated in qualitative research provided the methodology and epistemological logistics consistently yield data that are ontologically similar but may differ in richness and ambience within similar dimensions. Silverman[29] proposed five approaches to enhancing the reliability of process and results: refutational analysis, constant data comparison, comprehensive data use, inclusion of the deviant case, and use of tables. As data are extracted from the original sources, researchers must verify their accuracy in terms of form and context with constant comparison,[27] either alone or with peers (a form of triangulation).[30] The scope and analysis of the data included should be as comprehensive and inclusive as possible, with reference to quantitative aspects where feasible.[30] Adopting the Popperian dictum of falsifiability as the essence of truth and science, attempts to refute the qualitative data and analyses should also be made to assess reliability.[31]
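Since the passage above allows reference to quantitative aspects where feasible, one concrete and commonly used consistency check between peer coders is intercoder agreement. The sketch below, in Python, computes raw percent agreement and Cohen's kappa (a chance-corrected agreement statistic); its use here is an illustrative assumption, not a method prescribed by the articles cited, and the segments and code labels are entirely hypothetical.

```python
# Minimal sketch of a quantitative consistency check between two coders,
# one possible complement to peer-based constant comparison (researcher
# triangulation). All code labels and data below are hypothetical.

from collections import Counter

def percent_agreement(codes_a, codes_b):
    """Share of segments to which both coders assigned the same code."""
    matches = sum(a == b for a, b in zip(codes_a, codes_b))
    return matches / len(codes_a)

def cohens_kappa(codes_a, codes_b):
    """Agreement corrected for chance: kappa = (p_o - p_e) / (1 - p_e)."""
    n = len(codes_a)
    p_o = percent_agreement(codes_a, codes_b)
    freq_a, freq_b = Counter(codes_a), Counter(codes_b)
    # Expected chance agreement from each coder's marginal code frequencies.
    p_e = sum((freq_a[label] / n) * (freq_b[label] / n)
              for label in set(codes_a) | set(codes_b))
    return (p_o - p_e) / (1 - p_e)

# Two coders' labels for the same ten interview segments (hypothetical).
coder_1 = ["access", "stigma", "access", "cost", "stigma",
           "access", "cost", "stigma", "access", "cost"]
coder_2 = ["access", "stigma", "cost", "cost", "stigma",
           "access", "cost", "access", "access", "cost"]

print(f"Percent agreement: {percent_agreement(coder_1, coder_2):.2f}")
print(f"Cohen's kappa:     {cohens_kappa(coder_1, coder_2):.2f}")
```

A low kappa would then feed back into the constant-comparison discussions described above, rather than being treated as a mechanical pass/fail score.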

Generalizability

Most, if not all, qualitative research studies are meant to examine a specific issue or phenomenon in a certain population or ethnic group, in a focused locality and a particular context; hence, generalizability is usually not an expected attribute of qualitative research findings. However, with the rising trend of knowledge synthesis from qualitative research via meta-synthesis, meta-narrative or meta-ethnography, the evaluation of generalizability becomes pertinent. A pragmatic approach to assessing the generalizability of qualitative studies is to adopt the same criteria as for validity: that is, the use of systematic sampling, triangulation and constant comparison, proper audit and documentation, and multi-dimensional theory.[17] However, some researchers espouse the approach of analytical generalization,[32] in which one judges the extent to which the findings of one study can be generalized to another under a similar theoretical frame, and the proximal similarity model, in which the generalizability of one study to another is judged by similarities in time, place, people and other social contexts.[33] That said, Zimmer[34] questioned the suitability of meta-synthesis in view of the basic tenets of grounded theory,[35] phenomenology[36] and ethnography.[37] He concluded that any valid meta-synthesis must retain the two further goals of theory development and higher-level abstraction while in search of generalizability, and must be executed as a third-level interpretation using Gadamer's concepts of the hermeneutic circle,[38,39] dialogic process[38] and fusion of horizons.[39] Finally, Toye et al.[40] reported the practicality of using “conceptual clarity” and “interpretative rigor” as intuitive criteria for assessing quality in meta-ethnography, which somewhat echoes Rolfe's controversial aesthetic theory of research reports.[41]

Food for Thought

Despite the various measures for enhancing or ensuring the quality of qualitative studies, some researchers have opined, from a purist ontological and epistemological angle, that qualitative research is not a unified field but an ipso facto diverse one,[8] and hence that any attempt to synthesize or appraise different studies under one system is impossible and conceptually wrong. Barbour argued from a philosophical angle that these special measures or “technical fixes” (such as purposive sampling, multiple coding, triangulation, and respondent validation) can never confer the rigor they are conceived to provide.[11] In extremis, Rolfe et al., writing from the field of nursing research, opined that any set of formal criteria used to judge the quality of qualitative research is futile and without validity, and suggested that a qualitative report should be judged by the form in which it is written (aesthetic) and not by its contents (epistemic).[41] Rolfe's novel view was rebutted by Porter,[42] who argued via logical premises that two of Rolfe's fundamental statements were flawed: (i) that “the content of research reports is determined by their form” may not be a fact, and (ii) that if research appraisal is “subject to individual judgment based on insight and experience,” then those without sufficient experience of performing research will be unable to judge adequately – hence an elitist principle. From a realist standpoint, Porter then proposed multiple and open approaches to validity in qualitative research that incorporate parallel perspectives[43,44] and diversification of meanings.[44] Any work of qualitative research, when read, is always a two-way interactive process, such that validity and quality have to be judged by the receiving end too, and not by the researcher end alone.

In summary, the three gold criteria of validity, reliability and generalizability apply in principle to the assessment of quality in both quantitative and qualitative research; what differs is the nature and type of the processes that ontologically and epistemologically distinguish the two.

Source of Support: Nil.

Conflict of Interest: None declared.

  • Research Article
  • Open access
  • Published: 14 September 2024

A qualitative investigation into the effectiveness of a housing navigator program linking justice-involved clients with recovery housing

  • Jodie M. Dewey   ORCID: orcid.org/0000-0002-0255-7289 1 ,
  • Patrick Hibbard 1 ,
  • Dennis P. Watson 1 ,
  • Juleigh Nowinski Konchak 2 &
  • Keiki Hinami 2  

Health & Justice, volume 12, Article number: 37 (2024)

Roughly 24–36% of people who are incarcerated in the U.S. are formally diagnosed with opioid use disorder (OUD). Once released, individuals involved with the criminal legal system (CLS) face increased risks of return to use and fatality and are 129 times more likely to die from an overdose within the first two weeks of release compared to those without CLS involvement. People who are CLS-involved and who are seeking a recovery living environment can access temporary stable housing through recovery homes. However, entering a recovery home can be difficult due to fragmentation among recovery housing organizations and their non-uniform application and screening procedures. A navigation pilot program was implemented to provide clients with recovery home placement advice, pre-screening, and referrals in Cook County, IL. Existing research on recovery homes has rarely examined the importance of recovery housing navigation for enhancing service engagement among CLS-involved individuals receiving medications for OUD.

Semi-structured qualitative interviews were conducted with 22 clients and three recovery housing navigators as part of a program evaluation of the navigation pilot. Qualitative software was used to organize and analyze the transcripts through several rounds of coding that produced emergent themes, which were then triangulated and expanded using the navigator data.

Clients seeking recovery home services reported multiple prior challenges securing safe and supportive recovery living environments. Despite low initial expectations, clients described their interactions with housing navigators in favorable terms and felt navigators worked with them effectively to identify and meet their housing and substance use needs in a timely manner. Clients also commented on their partnerships with the navigator throughout the process. Interactions with navigators also calmed fears of rejection many clients had previously experienced and still harbored about the process, which bolstered client-navigator relationships and client motivation to engage with additional services.

Evidence from this study suggests recovery home navigation can improve the speed and efficiency with which clients are connected to appropriate services that are tailored to their specific needs as well as increase client motivation to engage with a myriad of recovery services.

Prior research focusing on individuals who have been incarcerated demonstrates considerable health-related inequities among them, including increased risks of mental and physical health issues, lower engagement with health services, and higher rates of substance misuse (Fazel, 2016; Ferguson et al., 2016). Eighty-five percent of those housed in prisons and jails either meet DSM criteria for a substance use disorder (SUD) or were under the influence at the time of their crime (de Viggiani, 2007; Gutierrez, 2021; Han, 2020; Krawczyk et al., 2018; Pho et al., 2021), with roughly 24–36% of this population diagnosed with an opioid use disorder (OUD; Bronson et al., 2020). Research has shown that providing medication-based opioid use disorder treatment (MOUD; e.g., methadone, buprenorphine, or naltrexone) during incarceration reduces reincarceration rates (Knight et al., 2012; McCollister & French, 1998; Wexler, 1995) and, when continued after release, reduces active drug use (Polcin et al., 2010; Wexler, 1995). However, carceral settings inconsistently provide MOUDs (Krawczyk et al., 2018; Pho et al., 2021; Williams et al., 2019), resulting in fewer than 33% of the incarcerated population receiving them (Kirk & Wakefield, 2018; Martin et al., 2021; Williams et al., 2019).

People returning from incarceration face an increased risk of use and overdose, with some studies showing them 129 times more likely to die from an overdose compared with those who have no CLS involvement, especially within the first two weeks of release (Binswanger et al., 2007; Hartung et al., 2023; Joudrey et al., 2019; Ranapurwala, 2018; Victor et al., 2021). Additional research has found the lack of stable post-release housing increases the risk of opioid-related emergency department visits and overdose and contributes to a dual public health crisis (Baggett, 2013; Yamamoto et al., 2019) as well as recidivism (Knight et al., 2012; Lo Sasso et al., 2012; Wexler, 1995). Recovery housing is evolving as a support service that may provide an answer for people at risk by interrupting the chains of risk and the revolving door of incarceration (Almquist & Walker, 2022).

Recovery housing offers a solution for people in recovery with insecure housing

Recovery homes are an evidence-based solution for simultaneously addressing housing insecurity and substance use for people in recovery, and they can be an important source of reintegration support for residents who are CLS-involved (Mericle et al., 2017; Polcin, 2006, 2018a). In addition to improvement in traditionally measured outcomes like substance use (e.g., Jason et al., 2007; see Reif et al., 2014 for a review), recovery homes can drastically reduce opioid use and recidivism for CLS-involved individuals by linking them to recovery support systems (Polcin, 2018b), thus reducing reintegration barriers that undermine long-term treatment retention (Majer et al., 2020). For example, a 2022 study indicated that a higher percentage of respondents serving probation and living in recovery housing reported improved health (Phelps et al., 2022), though that study included a broader range of housing options and its design limited rigor (e.g., cross-sectional data, potential selection bias). Additionally, Polcin and colleagues (2010) found CLS-involved recovery-home residents had improved outcomes on arrests, employment, and substance use over 18 months. Recovery homes can also help CLS-involved individuals adhere to the conditions of probation or parole by providing drug- and alcohol-free housing compatible with court-ordered monitoring (DeGuzman et al., 2019; Jason et al., 2014; Lo Sasso et al., 2012; Zywiak et al., 2009).

While recovery homes can provide the stability needed for successful recovery and reintegration post-incarceration, securing a recovery home placement is challenging for many CLS-involved people receiving MOUD (Clayman et al., 2020; Jason et al., 2022; Kepple et al., 2019; Wood et al., 2022). In addition to a broad set of barriers to finding recovery home services (e.g., costs, availability; see, e.g., Duffy & Baldwin, 2013; Komaromy et al., 2023; Manuel et al., 2017), individuals with OUD seeking recovery home placement often face policies and practices that do not support the use of methadone or buprenorphine (Miles et al., 2020; Wood et al., 2022), including within the very recovery homes in which they hope to reside (Miles et al., 2020). Anti-medication stigma derives from the belief that people receiving methadone or buprenorphine treatment are not in proper recovery because these medications are themselves opioids (Szalavitz, 2018). For example, a 2019 study of providers (n = 360) that included many recovery home workers found that 25% were unwilling to serve those receiving MOUD (Kepple et al., 2019). Add to this the complications of returning from incarceration, and the situation compounds (Cernasev et al., 2021). In addition to these barriers, those attempting to engage with recovery or treatment services often confront a confusing array of options. The sheer amount of information necessary to navigate such a system can be daunting. Misinformation or, worse, prior negative experiences (e.g., MOUD stigma) may leave potential service recipients with the perception of a hostile environment, impacting their motivation to engage in treatment and recovery (Cernasev et al., 2021; DiClemente et al., 2016; Stanojlović & Davidson, 2021). Patient navigation models may offer an effective approach for successfully linking this population to services.

The evidence base for recovery housing is still growing and is in need of rigorous study designs (e.g., RCTs) to demonstrate overall effectiveness and evaluate specific elements (Reif et al., 2014).

Patient navigation and types of support

Patient navigation programs link individuals to services by serving as a bridge between service recipients and providers. To improve overall health outcomes (Bovaird et al., 2015), health and service navigators work directly with clients and providers to identify barriers to care, minimize access delays, and improve health-related outcomes and client satisfaction for those most vulnerable to illness and disconnected from formalized health care systems (Paskett et al., 2011; Wang et al., 2015). This co-production process reformulates traditional paternalistic welfare models by inviting clients to participate more fully throughout the healthcare continuum (Needham & Carr, 2009). Navigators often know more about the availability of services, allowing them to more efficiently broker vital information on behalf of clients with multiple and complex needs (Ashford et al., 2020) and ideally foster a positive relationship that will increase client motivation (Bovaird et al., 2015; Coote, 2022; Durose et al., 2017; Needham & Carr, 2009). Existing research demonstrates the effectiveness of navigators providing both instrumental (e.g., transportation and making appointments) and relational support (e.g., building client-provider relationships; Davis, 2008; Paskett et al., 2011) within general healthcare contexts and, over the past 30 years, patient navigation models have expanded to connect patients to a broader range of health and social supports including housing (Freeman, 2012).

The current study is a qualitative investigation of how implementing a housing navigation program might improve the recovery home engagement process for CLS-involved individuals receiving MOUD. Data were collected as part of an evaluation of a housing navigation pilot program for Cook County Health. While a few existing studies focus on the effectiveness of navigation programs that connect those with unstable housing to health resources, no data exists on navigators’ role in connecting CLS-involved individuals who are seeking recovery housing.

The current study

The current study is a qualitative investigation designed to understand both clients’ and recovery housing navigators’ experiences with a housing navigation program. Understanding their perspectives is essential to fully grasp the complexity of issues that undermine linkage to recovery housing, a significant issue for CLS-involved individuals. Data were collected as part of a larger evaluation of the Cook County Recovery Home Coordinated Capacity Pilot Program, a three-year collaborative effort to enhance treatment and recovery service engagement among CLS-involved individuals receiving MOUD. The pilot utilized the expertise of recovery housing navigators, employed by a local MOUD provider, to place individuals referred to the navigation program into one of eight Chicago-based recovery homes willing to accept CLS-involved individuals receiving MOUD. One recovery navigator, who also served as the supervisor, shared their lived experience with substance use and recovery and indicated the purpose of the navigation process was to center clients’ needs and ensure a collaborative process. Clients were often referred to the housing navigation program by various agencies, such as hospitals and in-patient treatment facilities, and in some cases by court staff (e.g., probation, parole). While the program was specifically developed to meet the needs of CLS-involved individuals receiving MOUD, housing navigators assisted anyone who contacted them.

Participants

Qualitative interview participants included both clients and recovery housing navigators. Recovery navigators invited all clients referred to the housing navigation program to participate in the qualitative study. Of the 95 individuals who called for housing navigation and agreed to be contacted for a follow-up qualitative interview, 22 were successfully contacted, verbally consented, and completed a semi-structured interview. Only one client refused to participate. The remaining individuals could not be reached, as many failed to return messages left by the researcher or their phones had been disconnected. Lacking their own cell phones, many clients left agency numbers (e.g., hospitals, in-patient facilities) but were no longer at these sites when the researcher attempted to locate them. While not all participants were referred through a correctional site, the referral population does have an elevated risk of being CLS-involved (see Table 1).

The lead author also interviewed recovery housing navigators employed by a not-for-profit behavioral healthcare organization (n = 3). Two of the three navigators began at the start of the pilot, while one joined six months later. All three navigators identified as women and averaged 10 years’ experience working at the healthcare organization; one held a master’s degree in social work, one in counseling, and two of the three were Certified Alcohol and Drug Counselors (CADCs). Of the three navigators, one brought direct lived experience with substance use and recovery.

Data collection

Various agencies referred clients to the navigation program (e.g., hospitals, detox centers, corrections-based facilities). Once connected, navigators asked whether clients would be interested in sharing their experiences by participating in a 30-minute qualitative phone interview. If a client was interested, navigators filled out a consent-to-contact form and provided it to the lead author, a PhD-level research scientist with over 20 years of qualitative interviewing experience. The lead author would then contact participants, introduce herself, and describe the goal of the project. She would provide additional details regarding the project, answer questions, and obtain verbal consent from those interested in being interviewed. Interviews were audio-recorded and transcribed, with all identifying information removed. Following the interview, clients were mailed a $25 debit card to compensate them for their time. The lead author also conducted two dyad interviews with housing navigators. During the first year, she interviewed one housing navigator and one supervisor, while during the second year she interviewed the same housing navigator and a newly hired navigator. Following each interview, the lead author jotted down summaries of key findings and shared them with the research team.

Individual client and navigator dyad interviews followed a semi-structured interview guide developed by the lead PI (third author) in collaboration with the housing navigation program’s project leader. Questions for clients focused on reasons for seeking housing, prior challenges accessing housing, current substance use/MOUD status, and past/current CLS involvement. Clients were also asked to share their perceptions of, and satisfaction with, the housing navigation program and were encouraged to offer suggestions for program improvements. Questions for recovery housing navigators sought to uncover the procedures and decision-making processes that linked clients to recovery housing. The client and navigator interview guides were initially pilot tested and iteratively modified to allow further exploration of topics and issues introduced during earlier interviews. Data collection will continue through the duration of this three-year project.

All study procedures were approved by the lead author organization’s Institutional Review Board.

The first author used MAXQDA qualitative analysis software to identify, code, and cluster key themes found in the transcripts (VERBI Software, 2022). The lead author used both inductive and deductive approaches when coding the data. During the first round of coding, the lead author used the interview guide categories to reveal differences and similarities across client responses. Next, in alignment with grounded theory, the lead and third authors engaged in line-by-line coding specifically designed to extract the cultural knowledge clients shared to organize their behaviors and interpret their experiences. Grounded theory is an iterative method of data collection and analysis with the goal of inductively developing ideas through an illumination of variations across descriptive and process categories (Charmaz, 2014). By noting the factors that impeded, accelerated, or changed the process used to link clients to housing, the lead author drew on her existing knowledge of the recovery home literature to transform process codes into conceptual codes (Saldana, 2021). As a final step, findings were discussed with the other authors (who also have prior experience in recovery home research) to ensure the findings fit with the authors’ understanding of how recovery homes function. Data were presented to the recovery navigators and the housing navigation program project leaders during monthly meetings. Through “member checking,” the authors were able to refine the patterns used to establish the final analysis presented in this paper (Candela, 2019).
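To make the two-round workflow above concrete, here is a minimal sketch, in Python, of how first-round descriptive codes might be rolled up into second-round conceptual themes. The actual analysis was performed interactively in MAXQDA; the segments, code labels, and code-to-theme mapping below are entirely hypothetical.

```python
# Hypothetical sketch of rolling descriptive (first-round) codes up into
# conceptual (second-round) themes, mirroring the inductive/deductive
# workflow described above. All codes, themes, and segments are invented.

from collections import defaultdict

# First-round descriptive codes applied to interview segments.
coded_segments = [
    ("I called 20 places and only two took methadone.", "MOUD refusal"),
    ("They do a background check, so I just pass it up.", "background check"),
    ("She called me back that same afternoon.", "fast follow-up"),
    ("She spoke directly with me, not at me.", "respectful treatment"),
]

# Second-round mapping of descriptive codes to conceptual themes.
code_to_theme = {
    "MOUD refusal": "prior experiences shape help-seeking",
    "background check": "prior experiences shape help-seeking",
    "fast follow-up": "instrumental support",
    "respectful treatment": "relational support",
}

# Cluster segments under each emergent theme and tally frequencies.
themes = defaultdict(list)
for segment, code in coded_segments:
    themes[code_to_theme[code]].append(segment)

for theme, segments in themes.items():
    print(f"{theme} ({len(segments)} segment(s))")
    for s in segments:
        print(f"  - {s}")
```

In practice the code-to-theme mapping itself is what successive grounded-theory iterations revise; a script like this only tallies and displays the current state of the codebook.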

Prior experiences shape willingness to seek help

Client participant demographic information appears in Table 1. As displayed in the table, over 80% of the sample had experienced some type of CLS involvement (e.g., incarceration, detention, community supervision, open court cases). The table also shows that more than half of those interviewed were receiving MOUDs.

Client participants discussed several challenges that impeded their interest in, or ability to, seek out recovery housing. Lack of support, fear of homelessness while trying to recover from substances, prior CLS involvement, and MOUD stigmatization all contributed to clients’ future help-seeking behavior. One client described the significant challenge for those with little to no support, especially when being released from incarceration:

Oh man, one of my hardest challenges was when I, before I was arrested, I didn’t have anything or, I left a bad situation, and as I begin to get closer and closer to being released from the penitentiary, I don’t have anything that I can go back out there to. It’s like, they just throwing me back out into society with nothing, no job, no financial support, no housing, or anything. We go back to what we had left out there. (Male, 56)

Feeling as if society is ‘just throwing’ them back, clients perceive that they will simply be denied assistance owing to stigmas around CLS involvement and substance use:

It makes it very hard on you because most people look at you like, “he’s a convict and he’s a drug addict and if we put him in an apartment, it’s just gonna be a waste, because it’s gonna be drugs being sold out of the apartment. It’s gonna be people coming in here doing drugs and stuff like that.” (Male, 56)

The belief that past CLS involvement will harm their chances of being accepted into recovery housing prevented many from even attempting to access available services.

It is very difficult because every place I looked up, they say … you have to pass a background check and credit check. And my credit is not that good and I have a background. I haven’t been in trouble in over 20 years … [but] every place I check, when I see that they do a background check, I just pass it up … (Male, 50)

Finding housing that accepts clients receiving MOUD treatment added another layer of challenge for CLS-involved individuals attempting to secure housing. One client nearing the completion of an in-patient program and seeking housing stated: “I called like 20 or 30 different places and only two of them said they took methadone, but they were filled up.” (Male, 24) Although this client’s methadone treatment center provided him with a list of housing options, he was required to contact the sites independently, only to find that a small percentage even accepted methadone patients. When he asked the two homes accepting methadone patients when beds would likely be available, the client was told to “keep calling”, something he described as “frustrating”. Study clients further wondered whether a screening question about methadone dosage levels, commonly used by the few homes that accept methadone clients, was designed to deny entry, with one client sharing: “A lot of recovery homes, you have to be on 40 milligrams or lower in order to even get through the door.” (Female, 37) When asked why she thought this was the case, the client indicated that homes might believe a higher dose was used just to get high, and that the home would not want the liability of a potential overdose. Another client reiterated this point, stating: “A lot of people don’t want to take somebody who is on methadone. [They think] that we’re not necessarily in recovery and we’re still using.” (Female, 41)

Finally, for one client, mental health issues posed additional challenges to obtaining timely recovery support and housing.

There is such a high demand [for female housing] that there’s a two-month waiting list. For somebody who is in a dire situation, [and if] you’re not properly medicated, you know, clinical depression with PTSD and severe anxiety … I need help now. (Female, 41)

Without recovery housing support, clients fear a return to use and the potential to overdose:

I am in recovery and fell on hard times [he was evicted]. I had nothing to fall back on as far as family … I didn’t want to be on the street in recovery and relapse and be in a situation where there won’t be a return for me … I was clean and I knew I was gonna pick up the drink and the drugs and I was going to be back off to the races. (Male, 57)

Prior experiences of being turned away from housing due to criminal legal system involvement, substance use, mental health conditions and MOUD usage can undermine how individuals seek out help, especially at a time when their recovery and life depend on it.

Navigators and types of support

Study participants initially contacted the navigation program because they were in search of housing, with some “desperate for housing” (Male, 31). Although not all participants were successfully linked with recovery housing, all but one found the program helpful. Clients shared their experiences of accessing recovery housing through the navigation program, which can be divided into instrumental and relational support.

Instrumental support

Despite facing housing challenges for years, several individuals were unaware of available housing programs or how to navigate the process to secure a spot. One client stated:

Oh, well it [the navigator program] was great for me because I don’t know how to do any of that kind of stuff. So, I mean, she was very diligent and, you know, right on task and on the ball. I couldn’t believe that I got a phone call as soon as I did. (Female, 41)

Another client shared:

I didn’t know anything about it [housing]. But … she was very helpful … and she gave me a lot of directions to go. Gave me a couple of places to call, and actually, she helped me find sober living … and she even followed up on me. (Male, 53)

Clients recognized and appreciated how efficiently the navigator found them timely placement.

I contacted her. She called me back that afternoon. Then she called me back after [the recovery home] called. Then I called [the recovery home]. They did my phone interview the next morning. They called me and told me I was accepted. So, it was a day, and I was in. (Male, 43)

Navigators were helpful because several clients had been unaware that recovery housing existed, and clients felt supported in having someone help them get connected and follow up on their progress in reaching the homes.

Some felt the navigators had such a firm understanding of the recovery housing landscape that they could help chart a successful pathway. One client participant stated, “It’s very helpful in directing me in the right place to go versus me just going out there blind … I have somebody already there to do that for me to save me time.” (Female, 37)

Part of charting a successful pathway is valuing client needs, as this person expresses:

I thought I was going to be somewhere that I didn’t want to go. I thought it was going to be more of people pushing their own agenda, just to get paid and things like that. It wasn’t a bad experience … It was very honest and very open … helpful … straightforward. This is what I need, this is the help I need. (Male, 32)

It is important to note that this client valued how the assistance was provided in addition to the instrumental support itself (e.g., connection to housing services). Past paternalistic experiences had led him to expect similar treatment, yet he described this interaction as honest and genuine, demonstrating the added relational benefits of working with the navigator.

Relational support

Many clients related a sense of lost optimism in finding housing, something navigators helped to rebuild. Prior experiences of being denied housing can leave these individuals in disbelief that navigators will be successful in helping them secure housing:

I told her, I go, ‘this place does not have a bed for me.’ She’s like, ‘what do you mean?’ She’s like, ‘I just talked to them.’ I’m like, I go, ‘I walked to this place in person.’ I go, ‘I’ve called them.’ I’m like, ‘they do not have a spot for me.’ And she’s like, ‘let me explain something to you.’ She’s like, ‘they have a bed waiting for you right now, they know your name, and they’re expecting you.’ And so, I’m like, ‘really?’ And she’s like, ‘yeah.’ I’m like, ‘you’re not lying?’ Because, like, this is the place I really wanted to be. I’d done the research, and, like, I talked to guys who have been here and I’m like, ‘this is the spot.’ (Male, 40)

Previous encounters with rejection and stigmatization leave many individuals without hope, something navigators can help restore:

She just gave me a lot of hope. And, she was real useful, I thought. When I, like, first made that phone call, I was expecting this to be a waste of my time. And then after her talking to me, I realized, like, oh man, like, this could be something. Like, this might be really helpful and, it did, it was. It was super helpful. (Male, 40)

Navigators appear to strengthen client-navigator relationships throughout the housing process by reducing client anxiety and stress:

I was really comfortable talking to the navigator and it made me feel at ease with all my issues that I had, as far as drugs and all that. She made me feel real comfortable telling her what’s my drug choice and what she could do to help. (Male, 52)

Also, one 30-year-old female stated, “It was a stress relief. I felt like I had someone there for me and helping me, and, like, actually cared about my well-being. It made me feel good.”

More than the services provided, how navigators engaged with clients appeared to instill self-worth. One client shared that the navigator was “easy and informative. She spoke directly with me.” (Male, 24) To stress that one was spoken ‘with’ suggests that clients are typically spoken ‘at’ by staff tasked with helping them. Another client perceived something quite similar when asked about the best part of the navigator program:

Her helping me. ‘Cause I was, like I said, I was in dire straits. And for her to take her time and brainstorm with me, ‘cause we were brainstorming together to figure out a way to help me out in my situation, [it] was beautiful. (Male, 57)

Navigators seemed to successfully quell preconceived fears and broker positive, cooperative relationships between clients and recovery home staff by rewriting the traditionally paternalistic scripts of welfare service provision.

Brokering relationships and co-production: the navigator as ‘matchmaker’

Navigators recognize the importance of their position between clients and recovery homes, orchestrating these relationships to ensure successful placement and increase co-production. One navigator shared, “I think it’s been working. I think it’s actually a fun experience, just having someone call in and us, kind of, play matchmaker in a weird way to give them the services that they need.” (Navigator 2)

By developing trust and rapport with their clients, navigators also develop increased opportunities by identifying client needs beyond housing, expanding available services, and pivoting their approaches:

So, one of the things that we started doing maybe halfway or three-quarters of the way through this last year [was] tracking other resources or other things that we helped our callers out with because … sometimes a caller will need other services. So, maybe they’re not really even appropriate for recovery help, but they want some help with substance use. So, maybe referring them to a treatment center, or AA meeting, or, I think [referring to another navigator] you have helped someone with employment resources. So, that’s something that we’ve expanded, probably since our initial interview and, you know, tracking that and seeing that, you know, we will always ask is there anything else we can help you with or, you know, kind of focus on … (Navigator 2)

Navigators understand different housing programs offer different services and that clients will thrive when placed in a facility best suited to their needs. Through careful questioning, navigators ascertain client needs and preferences as they target potential housing options:

For some referrals, I ask about locations that they’re comfortable in. Because you might have some referrals where a certain part of town, a certain part of Chicago or Cook County, is triggering. So, that’s important to know, because I would never want to put anyone in a place that reminds them of where they just came from. So, that’s an important question to always ask. (Navigator 2)

This navigator further explained:

Like, if you have a referral who, they used to pick up drugs from the north side, then having them on the north side, where they know where all the spots are, might not be the best thing. So that, then you look in other areas. So, it’s really, kind of, just gauging the best results for the referral. (Navigator 2)

Once clients meet the necessary requirements for placement, navigators use their previously established connections with recovery homes to connect clients:

We’ve established really good relationships with the staff at the recovery homes. We have, you know, some people’s cell phone numbers, we’ll text and say, ‘hey, you know, do you have a bed available? Can you phone screen this person?’ (Navigator 1)

The navigator continued to discuss how the client interview process is also informed by the known requirements of each recovery home, information that is updated regularly.

So, then the recovery home … will screen the patient … They have their own, you know, type of assessment. So, we, kind of, do, like, a phone screen and each recovery home has different criteria … And so, we coordinate them to connect with the recovery home. Sometimes we’ll do a 3-way call, sometimes we … contact the recovery home first. (Navigator 1)

Despite their connection with recovery home staff and knowledge of the process, navigators are barred from participating in the recovery home’s initial intake and subsequent assessment of potential clients, resulting in somewhat of a “black box” placement process. Although excluded at this juncture, navigators attempt to broker a favorable outcome by advocating for their clients during initial calls to the recovery homes.

Kind of, maybe, go over a few questions they [the recovery home] might have about the individual … and just, you know, ‘hey, they’re on methadone or they’re, whatever. They just got out of [detox], is that okay?’ Or, you know, someone ‘recently got a surgery, but they can, you know, move on their own, they can rewrap the bandage.’ So, we definitely talk to [the recovery home] first. We try to do a warm hand off. (Navigator 1)

This black box of recovery home decision-making was most evident for navigators when evaluating clients’ mental health stability and MOUD usage. One navigator explained how she handles describing mental health status to recovery home intake staff:

We’re not being deceitful. I just think, like, especially when it comes to the mental health piece, it could really go one way or the other. We’re just taking a chance, you know. We want to be honest with everyone, of course, but sometimes it’s almost like, ‘let’s just [answer] these big questions, leave it here, and [the recovery homes] can do the rest because they ultimately make that decision.’ And I think we rarely have someone that won’t even screen them if they have [an open bed]. Like, if they meet the basic [requirements], you know, whatever, if we say, ‘can you screen this person,’ they’ll do it. (Navigator 3)

While navigators appeared to understand and accept the black box approach and recognized how client mental health might play a role in recovery homes’ decision-making, they conveyed discomfort with how some homes understand clients’ prescribed methadone levels. While homes participating in this project were required to accept those who were CLS-involved and receiving MOUD, navigators were stunned when the interviewer revealed that clients were asked to provide their MOUD dosage levels and some clients were told their dosages were too high. Navigators considered this revelation a potential teaching moment that could build their relationship with recovery homes:

Now, if there is a cap for their dose [for placement], then that’s probably something we need to either revisit as a larger group or even take it to a smaller group and, kind of, figure out, because maybe it’s some education that needs to be used or something. (Navigator 3)

The strategic placement and unique knowledge of patient navigators offer increased opportunities to continue conversations and coordinate efforts regarding service delivery. Rather than encountering a confusing system, clients who interact with a receptive and competent navigator are more likely to engage in the process. By providing understanding alongside instrumental and relational support, navigators do much more than meet the immediate housing needs of their clients: they create something larger and more sustainable, namely improved recovery health habits for self and others, and faith in the system.

Improved recovery health habits

Clients expressed that sensing the navigators’ care, and the time and energy contributed on their behalf, helped increase their motivation to engage with the process. The navigator program empowered some clients to adopt improved recovery health habits. One client indicated he was more likely to continue with recovery:

Optimistic. Because I was, like, eager to try something new. I was eager to get my life back. It was, like, if [the navigator] says they gonna help me, I might as well trust it. They haven’t lied to me, so, I was, like, ‘you know what, these people breaking their back for me in treatment, these people putting their neck out there for [me], so I’m gonna try to reciprocate that and do what they say.’ (Male, 32)

These improvements in motivation may extend to other healthcare efforts, as comments also revealed how these new partnerships between clients and the navigators impacted clients’ overall self-esteem and sense of wellness. When asked whether he thought he would be accepted into a recovery home, one client shared:

You know, I didn’t. Because it seemed like everybody I called, once they knew all my information about my medication … [but the navigator told me] ‘there’s no problem, oh that’s not a problem, that’s not a problem’ and you know, actually I was accepted to both places … [The navigator] didn’t treat me like a dope addict … that right there was the most powerful thing ever, to be treated like a human being, a normal person. That is powerful … Even if I didn’t get a place, that means a lot. (Male, 53)

Another client added: “The most helpful part of it, I would say, when I was informed that the best is yet to come, and then when the navigator was speaking to me and said that we’re gonna do all we can to help you, and don’t give up before the miracle happens. I like that. I really like that.” (Male, 56)

Through this approach, navigator efforts seem to reverberate beyond the immediate navigator-client relationship, as clients readily shared their success with others, encouraging friends to also contact the helpline. One person shared, “I’ll tell you, it was so helpful that I gave her number to, like, five other people I knew.” (Male, 40)

As these examples indicate, the patient navigators’ approach to building relationships with clients potentially strengthens self and other-care, helping clients improve their recovery health habits and those of others in their situation. The process of assessing needs and preferences through the navigator program also instilled faith in the health system, increasing the likelihood that clients will engage with the delivered services, a requirement for cultivating long-term recovery and ensuring programs are effective and successful.

Stable housing is crucial yet difficult to access for those experiencing substance use and CLS issues, a subpopulation at increased risk for relapse and overdose (Binswanger et al., 2007 ; Joudrey et al., 2019 ; Mericle et al., 2017 ; Ranapurwala, 2018; Victor et al., 2021 ). Recovery housing is an evidence-based solution to address substance use and provide reintegration services for those released from prisons and jails (Mericle et al., 2017 ; Polcin, 2006 , 2018a ). Finding effective pathways to link this often-disenfranchised group to recovery housing is critical for addressing health disparities (Mericle et al., 2017 , 2020 ; Polcin et al., 2023 ). With a paucity of research investigating navigator models within this context, this study provides the clients’ perspective to help understand what does and does not work in the current efforts to link clients with recovery housing.

Engaging with something as fundamental as housing can be frustrating; attempting to access a service within an unfamiliar, formalized process can be positively daunting. This study provides support that an intervention such as client navigation can improve efficiency (Ashford et al., 2020), connect clients with appropriate services tailored to individual needs and preferences, and strengthen client confidence and motivation to engage with housing and other recovery services. Our findings indicate navigators provided a service that was either previously unknown to the individuals or assisted them through a process that was perceived as intimidating and, at times, stigmatizing. The evidence also points toward clients not being accustomed to being spoken with (as opposed to being spoken to) or having someone follow up to ensure their needs were effectively met. More than securing housing, interactions with navigators also seemed to calm fears of rejection that many clients harbored about the process, and, in so doing, bolstered client-navigator relationships. The evidence laid out above indicates navigators served a vital function in matching client needs with the expectations of the recovery home site (Paskett et al., 2011; Wang et al., 2015) and could do so based on previously established relationships with recovery home staff. Therefore, through the well-orchestrated engagement of co-production, navigators can match clients to a recovery home that meets their unique needs faster and more efficiently than clients might accomplish on their own (Bovaird et al., 2015; Needham & Carr, 2009).

For clients who have experienced stigma around seeking help, including around the use of MOUD, the navigators’ approach to working with clients, specifically taking time to ask what they needed and preferred in recovery housing options, potentially altered clients’ perceptions by placing clients in an equal partnership in finding services (Bovaird & Loeffler, 2013; Coote, 2022; Needham & Carr, 2009). As opposed to more paternalistic approaches (talking “at” people), the navigator approach of talking “with” clients helps alleviate trepidation associated with past experiences. Further, clients indicated this collaborative process motivated them beyond the immediate recovery housing issue, making them more likely to continue a host of healthy recovery habits. The program’s perceived success, as reported by participants, may help expand its reach through word of mouth, instilling trust in a system many might otherwise choose to avoid. By brokering the relationship between potential clients and recovery home staff who have historically rejected this population (particularly CLS-involved clients engaged with MOUD), navigators develop trust with a vulnerable population. This level of trust translates into a heightened willingness to seek help and, as one client explained, a dedication to recovery, demonstrating that the process of co-production generates effective health outcomes (Bovaird & Loeffler, 2013). Further, through close and trusting relationships with clients, recovery housing navigators may glean important information about potentially discriminatory practices and help recovery homes come into alignment with the U.S. Department of Justice’s guidance on protections for people with opioid use disorder under the Americans with Disabilities Act (DOJ, 2022).

A concerning note comes from clients who shared the belief that recovery housing programs may continue to discriminate against those who use MOUD, although not in ways previously observed. Recovery housing programs involved in this pilot were required to accept clients’ prescribed MOUD, yet some homes asked potential clients about their dosage levels during the initial screening process. While the data capture only study participants’ perception that this question was used to reject clients on higher doses (i.e., they do not provide definitive proof), such a finding speaks to the potential need for more conversation and education around MOUD usage and treatment support.

The results of this study suggest several implications. First, providing a single point of contact for individuals attempting to engage with recovery services appears to improve efficiency. Rather than attempting to find relevant services within a confusing system, clients’ needs and preferences can be matched quickly and effectively by an individual providing informational and relationship-brokering services (Davis, 2008; Paskett et al., 2011). Second, providing this instrumental support in an understanding (i.e., non-stigmatizing) way leads to relational support (DiClemente et al., 2016). Clients expressed improved perceptions of both the specific service (recovery housing) and other general services. Third, this improved perception may lead to higher levels of service engagement; that is, clients appeared to be more willing to engage in recovery services (Bovaird & Loeffler, 2013). Finally, services within recovery ecosystems may continue to discriminate against potential clients engaged in MOUD, though this finding may relate more to stigma against individuals on higher dosages than to the use of the medication altogether.

Results also point to the need for further investigation. As this study provides exploratory analysis, more research is required to reach causal and more generalizable conclusions regarding improvements in efficiency and perceptions, as well as co-production activities. A study involving a larger sample size across a more representative geographic area is required, as well as one that includes both psychometrically sound instruments measuring constructs like motivation and more distal outcomes (e.g., continued engagement with services). More investigation is needed to explore whether recovery services continue to create barriers to entry for people prescribed MOUD.

Limitations

Given the nascency of this line of inquiry, the current study serves as an opening salvo into the question of housing navigation for CLS-involved individuals who use substances, but it also includes several limitations. The exploratory nature of this analysis draws on a small sample within one geographic area, so results cannot be generalized beyond this context. Further, the data represent individuals who could be reached by phone, while those most marginalized and disconnected from support were potentially lost to contact; we may therefore be missing valuable data to improve services for those who need them most. The inability to contact a significant portion of those seeking services introduces sample bias; however, their absence illuminates the ongoing challenges some populations experience in accessing vital services and in taking part in the research activities that can build evidence to address them.

This study evaluated a recovery housing navigation program, specifically investigating the role navigators play in linking CLS-involved individuals with recovery housing. Qualitative data from semi-structured interviews with clients were evaluated for common themes, which were triangulated and expanded using separate interviews with patient navigators. Results indicate housing navigation, specifically through co-production, improved service engagement efficiency by brokering information and the formal relationship between clients and services. Additionally, clients expressed improved perceptions of housing and recovery services more generally, and higher levels of motivation to continue engaging with healthy recovery habits. Though more work is needed to make causal and generalizable interpretations possible, navigation services can improve proximate outcomes, which may lead to downstream improvements (e.g., longer periods of recovery and lower overdose rates).

Data availability

Data sharing not applicable to this article as no datasets were generated or analyzed during the current study.

Almquist, L., & Walker, S. (2022). Reciprocal associations between housing instability and youth criminal legal involvement: A scoping review. Health & Justice, 10(15), 2–14. https://doi.org/10.1186/s40352-022-00177-7

Ashford, R., Brown, A., Ryding, R., & Curtis, B. (2020). Building recovery ready communities: The recovery ready ecosystem model and community framework. Addiction Research & Theory, 28(1). https://doi.org/10.1080/16066359.2019.1571191

Baggett, T., Hwang, S., O’Connell, J., Porneala, B., Stringfellow, E., Orav, E., Singer, D., & Rigotti, N. (2013). Mortality among homeless adults in Boston: Shifts in causes of death over a 15-year period. JAMA Internal Medicine, 173(3), 189–195. https://doi.org/10.1001/jamainternmed.2013.1604

Binswanger, I. A., Stern, M. F., Deyo, R. A., Heagerty, P. J., Cheadle, A., Elmore, J. G., & Koepsell, T. D. (2007). Release from prison—A high risk of death for former inmates. New England Journal of Medicine , 356 (2), 157–165. https://doi.org/10.1056/NEJMsa064115

Bovaird, T., & Loeffler, E. (2013). We’re all in this together: Harnessing user and community co-production of public outcomes. In C. Staite (Ed.), Making sense of the future: Do we need a new model of public services? (pp. 1–15). University of Birmingham: INLOGOV. https://www.birmingham.ac.uk/Documents/college-social-sciences/government-society/inlogov/publications/2013/chapter-4-bovaird-loeffler.pdf

Bovaird, T., Van Ryzin, G. G., Loeffler, E., & Parrado, S. (2015). Activating citizens to participate in collective co-production of public services. Journal of Social Policy, 44(1), 1–23. https://doi.org/10.1017/S0047279414000567

Bronson, J., Zimmer, S., & Berzofsky, M. (2020). Drug use, dependence, and abuse among state prisoners and jail inmates, 2007–2009. U.S Department of Justice Special Report . https://bjs.ojp.gov/content/pub/pdf/dudaspji0709.pdf

Candela, A. (2019). Exploring the function of member checking. The Qualitative Report. https://doi.org/10.46743/2160-3715/2019.3726

Cernasev, A., Hohmeier, K. C., Frederick, K., Jasmin, H., & Gatwood, J. (2021). A systematic literature review of patient perspectives of barriers and facilitators to access, adherence, stigma, and persistence to treatment for substance use disorder. Exploratory Research in Clinical and Social Pharmacy , 2 . https://doi.org/10.1016/j.rcsop.2021.100029

Charmaz, K. (2014). Constructing grounded theory . Sage Publications Ltd. https://us.sagepub.com/en-us/nam/constructing-grounded-theory/book235960

Clayman, M., Salisbury-Afshar, E., DePatie, H., Ali, M., & Arnold, J. (2020). Assessment of medications for opioid use disorder (MOUD) treatment related services in Marion County, Indiana. American Institutes for Research. https://www.rmff.org/wp-content/uploads/2021/05/RMFF_Federal-Opioid-Funding-Report_Final.pdf

Coote, A. (2022). Towards a sustainable welfare state: The role of universal basic services. Social Policy and Society , 21 (3), 473–483. https://doi.org/10.1017/S1474746421000385

Davis, C., Darby, K., Likes, W., & Bell, J. (2008). Social workers as patient navigators for breast cancer survivors: What do African-American medically underserved women think of this idea? Social Work in Health Care , 48 (6), 561–578. https://doi.org/10.1080/00981380902765212

de Viggiani, N. (2007). Unhealthy prisons: Exploring structural determinants of prison health. Sociology of Health & Illness , 29 (1), 115–135. https://doi.org/10.1111/j.1467-9566.2007.00474.x

DeGuzman, R., Korcha, R., & Polcin, D. (2019). I have more support around me to be able to change: A qualitative exploration of probationers’ and parolees’ experiences living in sober living houses. The International Journal of Therapeutic Communities , 40 (1), 51–65. https://doi.org/10.1108/TC-04-2018-0008

DiClemente, C. C., Norwood, A. E. Q., Gregory, W. H., Travaglini, L., Graydon, M. M., & Corno, C. M. (2016). Consumer-centered, collaborative, and comprehensive care: The core essentials of recovery-oriented system of care. Journal of Addictions Nursing , 27 (2), 94–100. https://doi.org/10.1097/JAN.0000000000000120

Duffy, P., & Baldwin, H. (2013). Recovery post treatment: Plans, barriers and motivators. Substance Abuse Treatment, Prevention, and Policy, 8(6). http://www.substanceabusepolicy.com/content/8/1/6

Durose, C., Needham, C., Mangan, C., & Rees, J. (2017). Generating ‘good enough’ evidence for co-production. Evidence & Policy , 13 (1), 135–151. https://doi.org/10.1332/174426415X14440619792955

Fazel, S. (2016). Mental health of prisoners: Prevalence, adverse outcomes, and interventions. Lancet Psychiatry, 3(9), 871–881. https://doi.org/10.1016/S2215-0366(16)30142-0

Ferguson, W. J., Cloud, D., Spaulding, A. C., Shelton, D., Trestman, R. L., Altice, F. L., Champion-Lippmann, C., Thomas, D., & Taxman, F. S. (2016). A call to action: A blueprint for academic health sciences in the era of mass incarceration. Journal of Health Care for the Poor and Underserved , 27 (2), 5–17. https://doi.org/10.1353/hpu.2016.0051

Freeman, H. (2012). The origin, evolution, and principles of patient navigation. Cancer Epidemiology Biomarkers & Prevention , 21 (10), 1614–1617. https://doi.org/10.1158/1055-9965.EPI-12-0982

Gutierrez, C., & Patterson, E. J. (2021). Risk and implications of COVID-19 among the community supervised population. Criminology & Public Policy, 20(3), 437–461. https://doi.org/10.1111/1745-9133.12563

Han, B., Williams, B., & Palamar, J. (2020). Medical multimorbidity, mental illness, and substance use disorder among middle-aged and older justice-involved adults in the USA, 2015–2018. Journal of General Internal Medicine, 36(5), 1258–1263. https://doi.org/10.1007/s11606-020-06297

Hartung, D. M., McCracken, C. M., Nguyen, T., Kempany, K., & Waddell, E. N. (2023). Fatal and nonfatal opioid overdose risk following release from prison: A retrospective cohort study using linked administrative data. Journal of Substance Use and Addiction Treatment , 147 , 208971. https://doi.org/10.1016/j.josat.2023.208971

Jason, L. A., Davis, M., & Ferrari, J. (2007). The need for substance abuse after-care: Longitudinal analysis of Oxford House. Addictive Behaviors, 32(4), 803–818. https://doi.org/10.1016/j.addbeh.2006.06.014

Jason, L. A., Light, J. M., Stevens, E. B., & Beers, K. (2014). Dynamic social networks in recovery homes. American Journal of Community Psychology , 53 (3), 324–334. https://doi.org/10.1007/s10464-013-9610-6

Jason, L. A., Majer, J. M., Bobak, T. J., & O’Brien, J. (2022). Medication assisted therapy and recovery homes. Journal of Prevention & Intervention in the Community , 50 (2), 178–190. https://doi.org/10.1080/10852352.2021.1934940

Joudrey, P. J., Khan, M. R., Wang, E. A., Scheidell, J. D., Edelman, E. J., McInnes, D. K., & Fox, A. D. (2019). A conceptual model for understanding post-release opioid-related overdose risk. Addiction Science & Clinical Practice , 14 (1), 17. https://doi.org/10.1186/s13722-019-0145-5

Kepple, N. J., Parker, A., Whitmore, S., & Comtois, M. (2019). Nowhere to go? Examining facility acceptance levels for serving individuals using medications for opioid use disorder. Journal of Substance Abuse Treatment, 104, 42–50. https://doi.org/10.1016/j.jsat.2019.06.004

Kirk, D. S., & Wakefield, S. (2018). Collateral consequences of punishment: A critical review and path forward. Annual Review of Criminology , 1 (1), 171–194. https://doi.org/10.1146/annurev-criminol-032317-092045

Knight, K., Hiller, M., & Simpson, D. (1999). Evaluating corrections-based treatment for the drug-abusing criminal offender. Journal of Psychoactive Drugs, 31(3), 299–304. https://doi.org/10.1080/02791072.1999.10471760

Komaromy, M., Stone, A., Peterson, A., Gott, J., Koenig, R., & Taylor, J. (2023). Facilitating exit from encampments: Combining low-barrier transitional housing with stabilizing treatment for substance related problems. Addiction Science & Clinical Practice , 18 (66). https://doi.org/10.1186/s13722-023-00420-y

Krawczyk, N., Negron, T., Nieto, M., Agus, D., & Fingerhood, M. I. (2018). Overcoming medication stigma in peer recovery: A new paradigm. Substance Abuse , 39 (4), 404–409. https://doi.org/10.1080/08897077.2018.1439798

Lo Sasso, A. T., Byro, E., Jason, L. A., Ferrari, J. R., & Olson, B. (2012). Benefits and costs associated with mutual-help community-based recovery homes: The Oxford House model. Evaluation and Program Planning , 35 (1), 47–53. https://doi.org/10.1016/j.evalprogplan.2011.06.006

Majer, J. M., Jason, L. A., Hickey, P., Joshua, N., Jeong, H., & Bobak, T. J. (2020). Social support among Oxford House residents utilizing medication-assisted treatments. Alcoholism Treatment Quarterly, 38(2), 199–214. https://doi.org/10.1080/07347324.2019.1678445

Manuel, J., Yuan, Y., Herman, D., Svikis, D., Nichols, O., Palmer, E., & Deren, S. (2017). Barriers and facilitators to successful transition from long-term residential substance abuse treatment. Journal of Substance Abuse Treatment, 74, 16–22. https://doi.org/10.1016/j.jsat.2016.12.001

Martin, R. A., Stein, L. A. R., Rohsenow, D. J., Belenko, S., Hurley, L. E., Clarke, J. G., & Brinkley-Rubinstein, L. (2021). Using implementation interventions and peer recovery support to improve opioid treatment outcomes in community supervision: Protocol. Journal of Substance Abuse Treatment , 128 , 108364. https://doi.org/10.1016/j.jsat.2021.108364

McCollister, K., & French, M. (1998). The economic cost of substance abuse treatment in criminal justice settings. Treatment of drug offenders: Policies and issues, 22–38. https://www.researchgate.net/publication/312889922_The_economic_cost_of_substance_abuse_treatment_in_criminal_justice_settings

Mericle, A. A., Polcin, D. L., Hemberg, J., & Miles, J. (2017). Recovery housing: Evolving models to address resident needs. Journal of Psychoactive Drugs , 49 (4), 352–361. https://doi.org/10.1080/02791072.2017.1342154

Mericle, A. A., Karriker-Jaffe, K., Patterson, D., Mahoney, E., Cooperman, L., & Polcin, D. L. (2020). Recovery in context: Sober living houses and the ecology of recovery. Journal of Community Psychology , 48 (8), 2589–2607. https://doi.org/10.1002/jcop.22447

Miles, J., Howell, J., Sheridan, D., Braucht, G., & Mericle, A. (2020). Supporting individuals using medications for opioid use disorder in recovery residences: Challenges and opportunities for addressing the opioid epidemic. The American Journal of Drug and Alcohol Abuse , 46 (3), 266–272. https://doi.org/10.1080/00952990.2020.1719410

Needham, C., & Carr, S. (2009). Co-production: An emerging evidence base for adult social care transformation. Social Care Institute for Excellence. https://www.researchgate.net/publication/237126419_Co_production_An_Emerging_Evidence_Base_for_Adult_Social_Care_Transformation

Paskett, E., Harrop, J., & Wells, K. (2011). Patient navigation: An update on the state of science. CA Cancer Journal for Clinicians , 61 (4), 237–249. https://doi.org/10.3322/caac.20111

Phelps, M., Osman, I., Robertson, C., & Shlafer, R. (2022). Beyond pains and gains: Untangling the health consequences of probation. Health & Justice , 10 (1), 1–29. https://doi.org/10.1186/s40352-022-00193-7

Pho, M., Erzouki, F., Boodram, B., Jimenez, A. D., Pineros, J., Shuman, V., Claypool, E. J., Bouris, A. M., Gastala, N., Reichert, J., Kelly, M., Salisbury-Afshar, E., Epperson, M. W., Gibbons, R. D., Schneider, J. A., & Pollack, H. A. (2021). Reducing opioid mortality in Illinois (ROMI): A case management/peer recovery coaching critical time intervention clinical trial protocol. Journal of Substance Abuse Treatment , 128 , 108348. https://doi.org/10.1016/j.jsat.2021.108348

Polcin, D. L. (2006). What about sober living houses for parolees? Criminal Justice Studies , 19 (3), 291–300. https://doi.org/10.1080/14786010600921712

Polcin, D. L. (2018a). Role of recovery residences in criminal justice reform. International Journal of Drug Policy , 53 , 32–36. https://doi.org/10.1016/j.drugpo.2017.10.009

Polcin, D. L., Korcha, R. A., Bond, J., & Galloway, G. (2010). Sober living houses for alcohol and drug dependence: 18-Month outcomes. Journal of Substance Abuse Treatment , 38 (4), 356–365. https://doi.org/10.1016/j.jsat.2010.02.003

Polcin, D. L., Mericle, A. A., Braucht, G. S., & Wittman, F. D. (2023). Moving social model recovery forward: Recent research on sober living houses. Alcoholism Treatment Quarterly , 41 (2), 173–186. https://doi.org/10.1080/07347324.2023.2167528

Ranapurwala, S., Shanahan, M., Alexandridis, A., Proescholdbell, S., Naumann, R., Edwards, D., & Marshall, S. (2018). Opioid overdose mortality among former North Carolina inmates: 2000–2015. American Journal of Public Health, 108(9), 1207–1213. https://doi.org/10.2105/AJPH.2018.304514

Reif, S., George, P., Braude, L., Dougherty, R., Daniels, A., Ghose, S., & Delphin-Rittmon, M. (2014). Recovery housing: Assessing the evidence. Psychiatric Services, 65(3). https://doi.org/10.1176/appi.ps.201300243

Saldaña, J. (2021). The coding manual for qualitative researchers. Sage Publications Ltd.

Stanojlović, M., & Davidson, L. (2021). Targeting the barriers in the substance use disorder continuum of care with peer recovery support. Substance Abuse: Research and Treatment , 15 . https://doi.org/10.1177/1178221820976988

Szalavitz, M. (2018, January 17). Opinion: The wrong way to treat opioid addiction. The New York Times. https://www.nytimes.com/2018/01/17/opinion/treating-opioid-addiction.html

United States Department of Justice Office of Public Affairs (2022). Justice Department issues guidance on protections for people with opioid use disorder under the Americans with Disabilities Act. https://www.justice.gov/opa/pr/justice-department-issues-guidance-protections-people-opioid-use-disorder-under-americans

VERBI Software. (2022). MAXQDA 2022 [Computer software]. https://www.maxqda.com

Victor, G. A., Bailey, K., & Ray, B. (2021). Buprenorphine treatment intake and critical encounters following a nonfatal opioid overdose. Substance Use & Misuse , 56 (7), 988–996. https://doi.org/10.1080/10826084.2021.1901933

Wang, M. L., Gallivan, L., Lemon, S. C., Borg, A., Ramirez, J., Figueroa, B., McGuire, A., & Rosal, M. C. (2015). Navigating to health: Evaluation of a community health center patient navigation program. Preventive Medicine Reports , 2 , 664–668. https://doi.org/10.1016/j.pmedr.2015.08.002

Wexler, H. K. (1995). The success of therapeutic communities for substance abusers in American prisons. Journal of Psychoactive Drugs , 27 (1), 57–66. https://doi.org/10.1080/02791072.1995.10471673

White House (2023, January 11). Dr. Rahul Gupta releases statement on CDC’s new overdose death data. White House Press Release. https://www.whitehouse.gov/ondcp/briefing-room/2023/01/11/dr-rahul-gupta-releases-statement-on-cdcs-new-overdose-death-data2/

Williams, A. R., Nunes, E. V., Bisaga, A., Levin, F. R., & Olfson, M. (2019). Development of a cascade of care for responding to the opioid epidemic. The American Journal of Drug and Alcohol Abuse , 45 (1), 1–10. https://doi.org/10.1080/00952990.2018.1546862

Wood, C. A., Duello, A., Miles, J., Lohmann, B., Gochez-Kerr, T., Richardson, K., Anderson-Harper, R., & Winograd, R. P. (2022). Acceptance of medications for opioid use disorder in recovery housing programs in Missouri. Journal of Substance Abuse Treatment , 138 , 108747. https://doi.org/10.1016/j.jsat.2022.108747

Yamamoto, A., Needleman, J., Gelberg, L., Kominski, G., Shoptaw, S., & Tsugawa, Y. (2019). Association between homelessness and opioid overdose and opioid-related hospital admissions/emergency department visits. Social Science & Medicine , 242 , 112585. https://doi.org/10.1016/j.socscimed.2019.112585

Zywiak, W. H., Neighbors, C. J., Martin, R. A., Johnson, J. E., Eaton, C. A., & Rohsenow, D. J. (2009). The important people drug and alcohol interview: Psychometric properties, predictive validity, and implications for treatment. Journal of Substance Abuse Treatment , 36 (3), 321–330. https://doi.org/10.1016/j.jsat.2008.08.001


Acknowledgements

Mona Stivers provided detailed editing services. This work is supported in part by the U.S. Department of Justice Comprehensive Opioid, Stimulant, and Substance Use Programs (2020-AR-BX-0074).


Author information

Authors and affiliations

Chestnut Health System’s Lighthouse Institute, 221 W. Walton Street, Chicago, IL, 60610, USA

Jodie M. Dewey, Patrick Hibbard & Dennis P. Watson

Cook County Health, 1950 W. Polk Street, Chicago, IL, 60612, USA

Juleigh Nowinski Konchak & Keiki Hinami


Contributions

JD collected, analyzed, and interpreted the data. PH assisted in the interpretation of the data. JD and PH examined and organized the literature review. All authors read and approved the final manuscript.

Corresponding author

Correspondence to Jodie M. Dewey .

Ethics declarations

Ethics approval and consent to participate

The Institutional Review Board at Chestnut Health Systems in Chicago approved this study. Consent was obtained from each participant during the project enrollment process. Additionally, each participant signed a consent to contact to be reached at a later date by the qualitative researcher.

Consent for publication

Competing interests

The authors declare they have no competing interests.

Additional information

Publisher’s note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License, which permits any non-commercial use, sharing, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if you modified the licensed material. You do not have permission under this licence to share adapted material derived from this article or parts of it. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by-nc-nd/4.0/ .


About this article

Cite this article

Dewey, J.M., Hibbard, P., Watson, D.P. et al. A qualitative investigation into the effectiveness of a housing navigator program linking justice-involved clients with recovery housing. Health Justice 12 , 37 (2024). https://doi.org/10.1186/s40352-024-00293-6


Received: 18 August 2023

Accepted: 05 September 2024

Published: 14 September 2024

DOI: https://doi.org/10.1186/s40352-024-00293-6


Keywords

  • Recovery housing
  • Patient navigation
  • Criminal legal system involvement



  • Open access
  • Published: 13 September 2024

Evaluating the implementation of the Pediatric Acute Care Education (PACE) program in northwestern Tanzania: a mixed-methods study guided by normalization process theory

  • Joseph R. Mwanga 1 ,
  • Adolfine Hokororo 1 , 2 ,
  • Hanston Ndosi 1 ,
  • Theopista Masenge 2 ,
  • Florence S. Kalabamu 2 , 3 ,
  • Daniel Tawfik 4 ,
  • Rishi P. Mediratta 4 ,
  • Boris Rozenfeld 5 ,
  • Marc Berg 4 ,
  • Zachary H. Smith 6 ,
  • Neema Chami 1 , 2 ,
  • Namala P. Mkopi 2 , 7 ,
  • Castory Mwanga 2 ,
  • Enock Diocles 1 ,
  • Ambrose Agweyu 8 &
  • Peter A. Meaney 4  

BMC Health Services Research, volume 24, Article number: 1066 (2024)


Background

In low- and middle-income countries (LMICs), such as Tanzania, the competency of healthcare providers critically influences the quality of pediatric care. To address this issue, we introduced Pediatric Acute Care Education (PACE), an adaptive learning program to enhance provider competency in Tanzania’s guidelines for managing seriously ill children. Adaptive learning is a promising alternative to current in-service education, yet optimal implementation strategies in LMIC settings are unknown.

Objectives

(1) To evaluate the initial PACE implementation in Mwanza, Tanzania, using the constructs of normalization process theory (NPT); (2) To provide insights into its feasibility, acceptability, and scalability potential.

Methods

Mixed-methods study involving healthcare providers at three facilities. Quantitative data was collected using the Normalization MeAsure Development (NoMAD) questionnaire, while qualitative data was gathered through in-depth interviews (IDIs) and focus group discussions (FGDs).

Results

Eighty-two healthcare providers completed the NoMAD survey. Additionally, 24 senior providers participated in IDIs, and 79 junior providers participated in FGDs. Coherence and cognitive participation were high, demonstrating that PACE is well understood and resonates with existing healthcare goals. Providers expressed a willingness to integrate PACE into their practices, distinguishing it from existing educational methods. However, challenges related to resources and infrastructure, particularly those affecting collective action, were noted. Early indicators point toward the potential for long-term sustainability of PACE, but assessment of reflexive monitoring was limited due to the study’s focus on PACE’s initial implementation.

Conclusions

This study offers vital insights into the feasibility and acceptability of implementing PACE in a Tanzanian context. While PACE aligns well with healthcare objectives, addressing resource and infrastructure challenges as well as conducting a longer-term study to assess reflexive monitoring is crucial for its successful implementation. Furthermore, the study underscores the value of the NPT as a framework for guiding implementation processes, with broader implications for implementation science and pediatric acute care in LMICs.


Contributions to the literature

Introduces PACE: This study uniquely evaluated the PACE program in a low-resource setting, offering initial evidence on its implementation and potential impact on pediatric care.

Utilizes the NPT framework: By employing an NPT framework, this research provides a novel methodological example of how to assess the incorporation of e-learning in LMIC clinical settings.

Informs implementation strategies: These findings contribute to the design of effective e-learning strategies for healthcare education in LMICs, suggesting practical steps for broader application.

Expands local capacity: Demonstrates how PACE can build local healthcare capacity, informing ongoing efforts to sustainably improve pediatric care through education in similar environments.

Context and importance of the study

Pediatric in-service education for healthcare providers in low- and middle-income countries (LMICs) often lacks reach, effectiveness, and sustainability, contributing to millions of child deaths annually [ 1 , 2 ]. Pneumonia, birth asphyxia, dehydration, malaria, malnutrition, and anemia cause over 4 million child deaths annually, with half occurring in sub-Saharan Africa and thousands in Tanzania [ 3 , 4 ]. The Tanzanian government aims to reduce neonatal mortality from 20 per 1,000 live births to the Sustainable Development Goals (SDGs) target of 12 per 1,000 by 2030 [ 5 ].

Brief review of the literature

Provider knowledge and skills competency are crucial for care quality in LMICs [ 2 , 6 ]. However, conventional in-service education methods are often inadequate and unsustainable [ 6 ]. These methods do not adapt to individual providers’ knowledge or schedules, target minimal competency, and lack long-term refresher learning, limiting their effectiveness [ 7 , 8 , 9 , 10 ].

Adaptive learning can address these limitations by customizing the timing and sequence of combined e-learning and in-person skills training, creating individualized pathways that reinforce learning and enhance skills competency. This approach helps mitigate manpower and resource shortages in LMICs and represents a strategic innovation in knowledge dissemination.

The World Health Organization (WHO) emphasizes the importance of e-learning solutions for healthcare workers globally [ 11 ]. Adaptive learning, with its capacity to adjust to individual needs, holds significant promise for enhancing training efficiency. However, formal studies on adaptive learning in LMIC contexts are scarce. Establishing best practices in e-learning and adaptive methodologies will enhance the dissemination of evidence-based interventions and improve clinical practice and patient outcomes.

To address current educational limitations for healthcare workers in LMICs, we developed the Pediatric Acute Care Education (PACE) program [ 12 , 13 ]. This adaptive e-learning program offers 340 learning objectives across 10 assignments, covering newborn and pediatric care guidelines for management of seriously ill children. The PACE program’s implementation strategy includes an adaptive e-learning platform optimized for mobile phones, a steering committee, a full-time PACE coordinator, and an escalating nudge strategy to encourage participation.

Study aims and objectives

The primary aim of this research is to assess the preliminary implementation of the PACE intervention across two types of pediatric acute care facilities: zonal hospitals and health centers. The study has two principal objectives: (1) To evaluate the initial PACE implementation in Mwanza, Tanzania, using the constructs of Normalization Process Theory (NPT); (2) To provide insights into its feasibility, acceptability, and scalability potential.

Study design

This study employed a mixed methods approach to evaluate the implementation of the PACE program in three healthcare settings in northwestern Tanzania, nested within a larger pilot implementation of PACE across eight health facilities of the Pediatric Association of Tanzania’s Clinical Learning Network. The study utilized NPT as a framework, combining quantitative and qualitative methods. Quantitatively, a tailored NoMAD survey instrument evaluated the integration of PACE into routine clinical practice; qualitatively, in-depth interviews and focus group discussions enriched the data.

Theoretical framework

NPT has been described as a sociological toolkit for helping us understand the dynamics of implementing, embedding, and integrating new technology or a complex intervention into routine practice [ 14 ]. NPT provides a conceptual framework for understanding and evaluating the processes (implementation) by which new health technologies and other complex interventions are routinely operationalized in everyday work (embedding) and sustained in practice (integration) [ 15 , 16 , 17 , 18 , 19 , 20 ]. The theory is organized around four main constructs, each of which has its own subconstructs [ 15 ]. These constructs collectively offer insights into the feasibility, acceptability, and scalability of an intervention or innovation (Fig. 1 ). Each of these constructs and subconstructs offers a unique lens through which the feasibility, acceptability, and scalability of a new practice can be evaluated, thereby aiding in its effective implementation.

Fig. 1 Boxplot of participant responses to NoMAD survey by NPT construct and subconstruct

Study setting

The study was conducted between August 2022 and July 2023 at three healthcare facilities in Mwanza, Tanzania. Bugando Medical Centre (BMC), an urban zonal referral and teaching hospital, sees about 7,000 births and 6,550 pediatric admissions per year for children aged 1 month to 5 years; the urban Makongoro Health Centre handles approximately 359 births per year but refers newborn and pediatric admissions to the nearby regional or zonal hospital; and the rural Igoma Health Centre sees about 3,850 births and 959 pediatric admissions per year for children aged 1 month to 5 years.

Eligibility criteria

Providers included in the study were required to have a minimum command of English and be actively providing pediatric care to sick patients at least part-time. Eligible providers encompassed a wide range of professional cadres, reflecting the diversity of healthcare providers in Tanzania. These included specialists (medical officers with 3 additional years of specialization), medical officers (5 years of education and a 1-year internship), nursing officers (4 years of education and a 1-year internship), assistant medical officers (clinical officers with 2 additional years of clinical training), assistant nursing officers (3 years of education), clinical officers (3 years of education), clinical assistants (2 years of education), enrolled nurses (2 years of education), and medical attendants (1 year of education). In addition to providers, senior facility staff with administrative roles who supervise PACE providers, such as ward matrons, medical officers-in-charge, and nursing officers-in-charge, were eligible to participate. The bulk of care is provided by junior medical officers and nurses, who have limited training and experience caring for children with severe illnesses.

Recruitment process

Healthcare providers were informed about the study through their facility leaders, and individuals who responded to the survey were not necessarily the same as those who participated in the focus groups or in-depth interviews.

Data collection tools

NoMAD questionnaire

The NoMAD is a 23-item questionnaire based on the NPT that was designed to assess the social processes influencing the integration of complex interventions [ 18 , 21 ]. It includes 3 general items and 20 related to specific NPT constructs (4 Coherence, 7 Collective Action, 4 Cognitive Participation, 5 Reflexive Monitoring). The general items were scored on a scale of 0–100, and the NPT construct items were modified to include a five-point Likert scale (1 = Strongly Agree, 5 = Strongly Disagree) and additional options for respondents to indicate whether a question was not relevant to their role, stage, or the intervention itself. The NPT subconstruct survey items are listed in Table 1, and the complete survey is provided in the Supplementary Materials.
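
For readers who want to operationalize this structure, the sketch below shows one way the instrument could be represented in analysis code. It is a minimal illustration only: the item counts and the 1–5 Likert coding come from the description above, while the example item wording and all identifiers are hypothetical rather than the actual NoMAD items.

```python
# Minimal sketch of the NoMAD instrument structure described above.
# Item counts and the 1-5 Likert coding follow the text; the example
# wording and field names are hypothetical, not the actual survey items.
from dataclasses import dataclass

LIKERT = {1: "Strongly Agree", 2: "Agree", 3: "Neutral",
          4: "Disagree", 5: "Strongly Disagree"}

# Respondents could also mark an item as not relevant to their role,
# their stage, or the intervention itself.
NOT_RELEVANT = {"not relevant: role", "not relevant: stage",
                "not relevant: intervention"}

@dataclass
class NomadItem:
    construct: str     # one of the four NPT constructs
    subconstruct: str  # e.g., "internalization", "legitimation"
    text: str          # placeholder wording

# Three general items are scored 0-100; the 20 construct items break down as:
construct_item_counts = {
    "coherence": 4,
    "cognitive participation": 4,
    "collective action": 7,
    "reflexive monitoring": 5,
}
assert sum(construct_item_counts.values()) == 20

example = NomadItem("coherence", "internalization",
                    "I can see the value PACE brings to my work")  # illustrative
```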

In-depth interviews (IDIs) and focus group discussions (FGDs)

Interview guides were developed based on previous experience with similar data collection tools. The training and pretesting of the tools were conducted by the study investigators.

Data collection process

NoMAD survey

All PACE participants were invited via WhatsApp to complete the NoMAD survey directly in REDCap, 30 days post-intervention or upon completion of the PACE course.

Focus group discussions and in-depth interviews

We employed a purposeful sampling strategy for the qualitative components, selecting senior healthcare providers for in-depth interviews (IDIs) and junior providers for focus group discussions (FGDs). This approach ensured junior providers felt comfortable speaking openly, avoiding inhibition in front of senior participants, and facilitated methodological triangulation to enhance the credibility and validity of the findings. Three types of triangulation were used: methodological triangulation across IDIs and FGDs, investigator triangulation with different research assistants collecting data, and data triangulation across IDIs, FGDs, and NoMAD surveys. Data collection began with a series of field visits, guided by NPT constructs, and included IDIs and FGDs. FGDs were segregated by sex but included a mix of cadres from each health facility, enriching the diversity of perspectives. The iterative nature of our methodology allowed for continuous refinement of our theoretical framework, methodologies, and sampling strategies, informed by emerging data; consequently, the guides for both the IDIs and FGDs were dynamically modified to reflect the evolving study themes. All sessions were conducted in Kiswahili at the providers’ work premises, adding contextual depth. The IDI and FGD interview guides were originally developed in English, translated into Kiswahili (the national language), and then back-translated into English to ensure that meaning was retained. Both IDIs and FGDs were audio-recorded, transcribed verbatim, and translated into English for analysis, with back-translation employed to ensure validity.

Data analysis

Quantitative analysis

Descriptive statistics are reported as frequencies and percentages or medians and interquartile ranges, with comparisons via Fisher’s exact test or the Mann‒Whitney U test as appropriate. Analyses were conducted using Stata 17.0 (Stata Corp, College Station, TX, USA).
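
As a concrete illustration of this analysis plan, the snippet below reproduces the same summary statistics and tests in Python with SciPy; the study itself used Stata, so this is only an equivalent sketch, and all response values and cell counts are invented for demonstration.

```python
# Illustrative Python equivalent of the descriptive statistics and tests
# described above (the study used Stata 17.0). All data here are invented.
import numpy as np
from scipy import stats

# Hypothetical 1-5 Likert responses for one NoMAD item, by facility type
zonal = np.array([1, 1, 2, 1, 2, 1, 1, 2, 1, 1])
center = np.array([2, 1, 2, 3, 2, 1, 2, 2])

def median_iqr(x):
    """Median and interquartile range, as reported in the paper."""
    q1, med, q3 = np.percentile(x, [25, 50, 75])
    return med, (q1, q3)

print("zonal:", median_iqr(zonal))
print("health center:", median_iqr(center))

# Mann-Whitney U test for ordinal responses between the two settings
u, p = stats.mannwhitneyu(zonal, center, alternative="two-sided")
print(f"Mann-Whitney U = {u:.1f}, p = {p:.3f}")

# Fisher's exact test for a categorical comparison, e.g. prior
# in-service education (yes/no) by facility type (counts invented)
table = [[42, 17],   # zonal: yes, no
         [16, 7]]    # health center: yes, no
odds_ratio, p_fisher = stats.fisher_exact(table)
print(f"Fisher's exact p = {p_fisher:.3f}")
```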

Qualitative analysis

The analysis process, conducted concurrently with data collection, was instrumental in achieving theoretical saturation, marked by the cessation of new information from ongoing IDIs and FGDs. To ensure the validity and depth of our findings, we implemented member checking and investigator triangulation, with two independent investigators coding and interpreting the data using NVivo 2020 software (QSR International Pty Ltd., Sydney, Australia). This software facilitated a hybrid coding approach in which blended deductive and inductive methods were used for comprehensive thematic content analysis. Contextual insights from the IDIs and FGDs were key to interpreting the findings, with representative quotations included to illustrate the identified themes. Data triangulation was achieved using diverse data sources, and the research team’s expertise further enhanced the rigor and reflexivity of the analysis.

Summary of feasibility, acceptability and scalability

We used Proctor’s definition of implementation outcomes and mapped the NoMAD survey results to NPT subconstructs using the definitions of May et al. [ 22 , 23 ].

Feasibility is concerned with the practical aspects of implementing a new intervention, including resource allocation, training, and ease of integration into existing work. In the NPT, this aligns closely with the construct of “collective action,” which refers to the operational work that people do to enact a set of practices. To assess feasibility, we interpreted our responses as follows: “Sufficient training is provided to enable staff to use PACE” (collective action, skill set workability); “Sufficient resources are available to support PACE”; “Management adequately supports PACE” (collective action, contextual integration); and “I can easily integrate PACE into my existing work” (collective action, interactional workability).

Acceptability refers to the extent to which the new intervention is agreeable or satisfactory among its users. To assess acceptability, we interpreted our responses as follows: “Staff in this organization have a shared understanding of the purpose of PACE” (coherence: communal specification); “I believe that participating in PACE is a legitimate part of my role” (cognitive participation, legitimation); “The staff agree that PACE is worthwhile” (reflexive monitoring, communal appraisal); and “I value the effects PACE has had on my work” (reflexive monitoring, individual appraisal). In addition, we compared scores between zonal hospitals and health centers.

Scalability involves the ability to expand the intervention to other settings while maintaining its effectiveness. To assess scalability, we interpreted our responses as “I will continue to support PACE” (cognitive participation, activation); “Work is assigned to those with skills appropriate for PACE” (collective action, skill set workability); “feedback about PACE can be used to improve it in the future”; and “I can modify how I work with PACE” (reflexive monitoring, reconfiguration).
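
Taken together, the three outcome definitions above amount to a mapping from individual NoMAD items to Proctor-style implementation outcomes via their NPT subconstructs. A minimal sketch of that mapping is shown below; the item wording is condensed from the quotes in the text, and the data structure itself is our illustration rather than anything specified by the authors.

```python
# Sketch of the outcome mapping described above: NoMAD items (condensed
# from the paper's own quoted lists) grouped under Proctor's implementation
# outcomes via their NPT construct and subconstruct. Structure is illustrative.
outcome_map = {
    "feasibility": [
        ("collective action", "skill set workability",
         "Sufficient training is provided to enable staff to use PACE"),
        ("collective action", "contextual integration",
         "Sufficient resources are available to support PACE"),
        ("collective action", "contextual integration",
         "Management adequately supports PACE"),
        ("collective action", "interactional workability",
         "I can easily integrate PACE into my existing work"),
    ],
    "acceptability": [
        ("coherence", "communal specification",
         "Staff have a shared understanding of the purpose of PACE"),
        ("cognitive participation", "legitimation",
         "Participating in PACE is a legitimate part of my role"),
        ("reflexive monitoring", "communal appraisal",
         "The staff agree that PACE is worthwhile"),
        ("reflexive monitoring", "individual appraisal",
         "I value the effects PACE has had on my work"),
    ],
    "scalability": [
        ("cognitive participation", "activation",
         "I will continue to support PACE"),
        ("collective action", "skill set workability",
         "Work is assigned to those with skills appropriate for PACE"),
        ("reflexive monitoring", "reconfiguration",
         "Feedback about PACE can be used to improve it in the future"),
        ("reflexive monitoring", "reconfiguration",
         "I can modify how I work with PACE"),
    ],
}

def outcome_items(outcome):
    """Return the survey items mapped to a given implementation outcome."""
    return [item for _, _, item in outcome_map[outcome]]

print(outcome_items("feasibility"))
```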

Ethical considerations

All the providers provided informed consent, and the study was approved by the Institutional Review Board of the Tanzania National Institute of Medical Research (NIMR/HO/R.8a/Vol. IX/3990), Stanford University (60379), the ethics committee of the Catholic University of Health and Allied Science (no ID number given), and the Mwanza Regional Medical Officer (Ref. No. AG.52/290/01A/115).

Techniques to enhance trustworthiness

Techniques to enhance trustworthiness included a purposeful sampling strategy, meticulous data collection in Kiswahili with back-translation, and the use of methodological, investigator, and data triangulation [ 24 ]. The analysis process was iterative and concurrent with data collection, employing hybrid coding and member checking to ensure systematic, explicit, and reproducible findings.

Reporting guidelines

This study adheres to the STROBE and SRQR reporting guidelines for comprehensive and explicit reporting of observational and qualitative studies, respectively [ 25 , 26 ].

Provider demographics

Eighty-two of the 272 eligible healthcare providers from the three facilities completed the NoMAD survey, resulting in a 30% response rate. Of the 82 respondents, 59 were from zonal hospitals and 23 from health centers (Table 2 ). The median ages were 27 and 29 years for zonal hospital and health center staff, respectively. The gender distribution was similar in both settings, with 39% female in the zonal hospital group and 43.5% in the health centers.

There were significant differences in cadre distribution: zonal hospitals had more medical staff (47.5% vs. 8.7%) and nurses (42.4% vs. 30.4%), while health centers had more clinical officers (30.4% vs. 0%). Clinical experience also varied, with a median of 1 year at zonal hospitals and 4 years at health centers (p = 0.004). Previous participation in newborn or pediatric in-service education (e.g., Helping Babies Breathe, Helping Children Survive) was similar across the facilities, ranging from 71% to 73%. Job satisfaction scores did not significantly differ between the two groups.

A total of seventy-nine healthcare providers participated in IDIs or FGDs. Twenty-four senior providers completed IDIs, 18 from the zonal hospital and 6 from health centers. Thirteen FGDs, with an average of 4 junior providers per group, were conducted to achieve thematic saturation, including 39 participants from zonal hospitals and 16 from health centers. The represented cadres included medical officers (26, 32.9%), nurses (19, 24.1%), interns (16, 20.3%), clinical officers (12, 15.2%), assistant medical officers (3, 3.8%), and medical attendants (3, 3.8%). Clinical experience among participants ranged from 1 to 20 years. Compared to the NoMAD survey, participants in IDIs and FGDs included a higher proportion of medical officers (including interns) and clinical officers, but a lower proportion of nursing officers and other cadres.

NoMAD survey results

General items

Familiarity and general satisfaction with PACE were high, with median scores of 89 and 91, respectively, and both showed moderate, balanced variability (interquartile ranges of 76–100 and 75–100, respectively) (Table 3, Fig. 1). Optimism for the future use of PACE was highest, with a median score of 99 and narrow variability (87–100), indicating a strong skew towards higher scores. No significant differences were observed between the zonal hospitals and health centers.

NPT constructs

Coherence

Providers reported understanding how to work together and plan the activities to put PACE and its components into practice. Strong agreement on the value of PACE is indicated by the median score for “Internalization” (1, “strongly agree,” IQR [1, 2]) (Table 3, Fig. 1). Agreement on PACE’s purpose and its differentiation from existing work is indicated by the median scores for “Communal Specification” (2, “agree,” IQR [1, 2]), “Differentiation” (2, “agree,” IQR [1, 4]), and “Individual Specification” (2, “agree,” IQR [2, 4]), respectively. No significant differences were observed between the zonal hospitals and health centers.

Cognitive participation

Providers reported understanding how to work together to create networks of participation and communities of practice around PACE and its components. Strong agreement for ongoing PACE support, PACE participation and leadership, and PACE integration into work is indicated by the median scores for “Activation” (1, “strongly agree,” IQR [1, 1]), “Enrollment” (1, “strongly agree,” IQR [1, 1]), “Initiation” (1, “strongly agree,” IQR [1, 1]), and “Legitimation” (1, “strongly agree,” IQR [1, 2]), respectively. Narrow IQRs highlight the homogeneous support among providers. No significant differences were observed between the zonal hospitals and health centers.

Collective action

Providers reported understanding how to work together to enact PACE and its components, with greater certainty in the zonal hospital than in the health centers that working relationships would not be disrupted. Strong agreement that the work required by PACE is manageable, has sufficient training and resources, and receives strong organizational support is indicated by the median scores for “Interactional Workability” (1, “strongly agree,” IQR [1, 1]), “Skill-set Workability” (1, “strongly agree,” IQR [1, 2]), and “Contextual Integration” (2, “agree,” IQR [2, 2]) (Table 3, Fig. 1). Agreement that PACE does not disrupt working relationships is indicated by the median score for “Relational Integration” (4, “disagree,” IQR [4, 5]). Zonal hospital providers showed significantly less variability than health center providers in responses indicating that PACE would not disrupt working relationships (relational integration; IQR [4, 5] vs. [3, 5], p = 0.02).

Reflexive monitoring

Providers reported understanding how to work together to evaluate the benefits of PACE and its components. Strong agreement on how people individually assess the value of PACE is indicated by the median score for “Individual Appraisal” (1, “strongly agree,” IQR [1, 1]). Agreement on how people access information to assess the value of PACE, how they value PACE collectively, and what work adjustments are needed for PACE is indicated by the median scores for “Systematization” (2, “agree,” IQR [2, 2]), “Communal Appraisal” (2, “agree,” IQR [2, 2]), and “Reconfiguration” (2, “agree,” IQR [2, 2]) (Table 3, Fig. 1). No significant differences were observed between the zonal hospitals and health centers.

IDI and focus group results

Coherence themes

Providers value PACE for its detailed guidance on specific pediatric cases, such as difficulty breathing, which was not covered in their basic training (Table 4 ). PACE is seen as a tool for empowering providers to reduce child mortality and improve service quality, aligning with facility goals. Providers believe that PACE has enhanced their understanding and management of seriously ill children. They find that PACE is consistent with Tanzanian and WHO guidelines and useful both in their work and in training medical students.

Cognitive participation themes

Providers were introduced to PACE by colleagues and supervisors, prompting them to enroll (Table 4 ). They mainly use PACE individually but also share modules to spread knowledge. PACE is seen as empowering providers to enhance their pediatric care. Despite busy schedules, providers are committed to PACE training.

Collective action themes

PACE's digital format allows for individual study and facilitates group discussions (Table 4 ). Initially, providers engaged with PACE for personal benefit but later saw the value in sharing knowledge. Providers value the practical application of PACE knowledge in patient care. Challenges like inadequate supplies and a lack of electricity hinder PACE implementation, but the availability of tools and support from PACE management facilitates implementation.

Reflexive monitoring themes

Providers find PACE valuable for educating junior doctors, simplifying complex topics, and boosting confidence (Table 4 ). They believe that PACE has enriched their knowledge and confidence in pediatric care. A notable challenge is the inaccessibility of learned material for future reference, hindering providers’ ability to refresh their knowledge.

Summary of feasibility, acceptability, and scalability

Overall, data from NoMAD survey responses indicated that PACE is generally feasible across healthcare settings, with providers either agreeing or strongly agreeing that people do the work required by interventions and their components (interactional workability median 1 “strongly agree” [1, 2]) or that the work of interventions and their components is supported by host organizations (contextual integration median 2 “agree” [1, 2]).

Furthermore, NoMAD survey responses indicated that PACE is also generally acceptable among healthcare providers. Providers collectively agreed about the purpose of PACE and its components (communal specification median 2 “agree” [1, 2]), agreed that PACE and its components are the right thing to do and should be part of their work (legitimation median 1 “strongly agree” [1, 2]), and collectively and individually agreed that PACE is worthwhile (communal appraisal median 2 “agree” [1, 2]; individual appraisal median 1 “strongly agree” [1, 2]).

Lastly, NoMAD survey responses indicated that PACE appears to be scalable, with some variability in its adaptability and skill-set alignment. Providers strongly agreed that they would continue to support PACE and its components (activation median 1 “strongly agree” [1, 1]), that they could modify their work in response to their appraisal of PACE, and that feedback could be used to improve it in the future (reconfiguration median 1 “strongly agree” [1, 2]). Providers agreed or were neutral about the work of PACE and its components being appropriately allocated to people (skill-set workability median 3 “neutral” [2, 4]), indicating that additional work is needed to identify the correct providers to participate in PACE or that additional support needs to be allocated to those providers to complete PACE.

Discussion

This mixed-methods pilot study explored the feasibility, acceptability, and scalability of the PACE intervention among healthcare providers in Mwanza, Tanzania, using the NPT framework. The study demonstrated that PACE is generally well understood, aligns with existing healthcare goals, and is feasible for providers. There was strong acceptance and understanding that PACE should become part of normal work. Challenges to scalability lie in ensuring adequate resource and infrastructure support. Qualitative data from IDIs and FGDs enriched the findings by providing detailed insights that supported and contrasted with the NoMAD survey results, highlighting both the strengths and challenges of implementing PACE in a resource-limited setting.

Interpretation of findings

The study demonstrated that PACE is feasible. It is generally well understood by healthcare providers and aligns with existing healthcare goals. Providers found PACE practical in enhancing their ability to manage pediatric cases, particularly those not adequately covered in their basic training. For instance, providers appreciated the detailed guidance PACE offers for managing conditions like difficulty breathing, which they found invaluable. This alignment with healthcare objectives, such as the focus on improving the quality of newborn and child acute care to reduce child mortality, underscores PACE’s potential for integration into routine clinical practice. The fact that PACE aligns with both Tanzanian and WHO guidelines further reinforces its relevance and applicability in the local healthcare context.

Provider training programs that focus on improving specific clinical performance objectives tend to yield better outcomes compared to those that cover broad topics. Targeted training programs, such as those designed to enhance specific clinical skills, have been shown to significantly improve the competency and confidence of healthcare providers. For instance, a systematic review we conducted in 2010 demonstrated that provider education programs in LMICs that focused on the needs and resources of the local healthcare environment had greater effectiveness [ 8 ]. Similarly, a study by Bluestone et al. (2013) found that focused training in neonatal resuscitation improved the performance of healthcare providers in emergency situations, as evidenced by increased neonatal survival rates [ 27 ]. In contrast, broad-topic training programs, while valuable for general knowledge enhancement, often lack the specificity needed to address critical clinical skills gaps effectively. As highlighted by Frenk et al. (2010), broad educational approaches may not adequately prepare providers for the complex, high-stakes situations they encounter in practice [ 28 ]. Therefore, training programs with a clear focus on enhancing specific clinical skills are generally more effective in improving clinical performance and patient outcomes.

The study demonstrated that PACE is acceptable. There is strong acceptance and understanding among providers that PACE should become part of their normal work. This cognitive participation reflects a high level of engagement and willingness to incorporate PACE into daily routines. Providers recognized the value of PACE in improving their knowledge and skills, with many noting that the program had significantly enhanced their understanding and management of seriously ill children. They also found PACE useful in training medical students and junior doctors, indicating its potential for broader educational impact. This widespread acceptance and integration into daily work routines suggest that PACE is viewed not just as an additional resource but as a vital component of their professional development.

When individuals perceive that a new activity should become part of their normal work, it is often associated with increased usage and integration into their daily routines [ 29 , 30 ]. This concept, known as cognitive participation, reflects a high level of engagement and commitment, which positively influences the adoption and sustained use of new practices. For instance, a study by May et al. (2009) on Normalization Process Theory highlighted that when healthcare providers viewed new clinical practices as integral to their work, they were more likely to implement them consistently [ 31 ]. Similarly, if people recognize the value of a new activity, there is substantial evidence that this recognition leads to increased usage and behavior change. Michie et al. (2011) found that perceived usefulness and perceived ease of use are significant predictors of the intention to use and actual usage of new interventions [ 32 ]. Furthermore, Rogers’ Diffusion of Innovations theory (2003) emphasizes that when individuals see clear benefits and value in a new practice, they are more likely to adopt it, leading to a transformation in their behavior and routines [ 33 ]. These findings collectively suggest that cognitive acceptance and perceived value are critical drivers of the successful implementation and sustained usage of new activities in various contexts.

However, challenges to scalability remain, particularly in ensuring adequate resource and infrastructure support. While the program itself is well-received, practical barriers such as adequate time to complete adaptive e-learning or participate in skills practice sessions and health system internet support hinder its full implementation. In addition, providers reported difficulties in accessing necessary equipment and managing cases during power outages, which directly impact their ability to apply PACE training effectively and cement long-term knowledge and skills. These challenges highlight the need for systemic improvements in resource allocation and infrastructure to support the sustainable and effective integration of PACE into the healthcare system. Without addressing these critical barriers, the scalability of PACE may be limited, preventing it from reaching its full potential impact.

Three strategies would address these challenges: 1) Strengthening Digital Infrastructure, 2) Flexible Scheduling and Time Management, and 3) Provision of Essential Equipment and Resources.

Strengthening digital infrastructure

Investing in robust digital infrastructure is crucial for the successful implementation of e-learning programs. Ensuring reliable internet connectivity and access to digital devices can significantly enhance the feasibility of adaptive learning modules. A UNICEF review of digital learning programs in low-resource settings highlights the positive impact of improved digital infrastructure [ 34 ]. Additionally, providing technical support and maintenance can prevent disruptions and ensure the smooth operation of online learning platforms (Aranda-Jan et al., 2014).

Flexible scheduling and time management

Allowing healthcare providers flexible scheduling to complete adaptive e-learning modules and participate in skills practice sessions can mitigate time-related barriers. Research by Yardley et al. (2012) demonstrates that flexible learning schedules increase participation and completion rates in professional development programs. Implementing self-paced learning options and modular training formats can help healthcare providers integrate training into their busy schedules without compromising clinical duties.

Provision of essential equipment and resources

Ensuring the availability of necessary medical equipment and resources is essential for the practical application of training programs. Partnerships with governmental and non-governmental organizations can facilitate the procurement and distribution of essential tools. A study by Bertram et al. (2018) suggests that strategic resource allocation and collaborative efforts can address equipment shortages and improve healthcare delivery. Additionally, creating contingency plans for managing power outages, such as providing backup power solutions, can enhance the reliability of training programs in resource-limited settings.

Qualitative data from focus groups and interviews enriched the findings by providing detailed insights that both supported and contrasted with the NoMAD survey results. These qualitative insights highlighted the strengths of PACE, such as its alignment with Tanzanian guidelines and its educational value, while also revealing challenges like resource constraints. Providers shared specific examples of how PACE had positively impacted their clinical practice, such as improving their ability to manage emergencies and enhancing their confidence in providing care. However, they also pointed out the difficulties in sustaining PACE’s benefits without adequate support and clinical resources to translate this knowledge into improved care delivery. This mixed-methods approach offered a comprehensive understanding of the implementation process, emphasizing the importance of addressing both the strengths and weaknesses of PACE in a resource-limited setting. The contrast between the high satisfaction reported in surveys and the practical challenges discussed in interviews underscores the need for a mixed methods approach when implementing new complex interventions in such environments.

Implications for implementation science and pediatric acute care

This study highlights the utility of the NPT as a conceptual framework for understanding the complexities involved in implementing adaptive learning interventions in LMICs. The findings provide valuable insights into the various factors that influence the implementation of adaptive learning, which can be applied to other healthcare interventions.

For pediatric acute care, the strong agreement among healthcare providers on the benefits of PACE for managing specific pediatric cases suggests that the program could significantly enhance provider proficiency and improve patient outcomes. Given the often time-sensitive nature of pediatric acute care, where timely and effective interventions such as oxygen therapy, intravenous fluids, and anti-microbial therapy can have a significant impact on patient outcomes, the effective and efficient training provided by PACE could lead to improved patient outcomes. Additionally, the consistency of these findings across various implementation contexts points to the scalability of the program, indicating its potential to be effectively expanded to other healthcare settings.

Limitations

The study has several limitations. The small sample size limits the generalizability of the findings, and the low response rate of 30% may introduce response bias. Additionally, the study's short duration did not allow for a comprehensive assessment of all NPT constructs, particularly reflexive monitoring. The reliance on self-reported data may also introduce social desirability bias. Despite these limitations, our mixed methods approach and methodological triangulation enhance the robustness of the findings.

Recommendations for future research

Future research should focus on longitudinal studies to assess the long-term sustainability and impact of PACE on provider proficiency, patient outcomes, and the quality of care. More rigorous qualitative research designs, such as detailed case studies and ethnographic studies, could provide a deeper understanding of the challenges and opportunities associated with implementing PACE. Additionally, research should explore the scalability of PACE, assessing how the program can be adapted for different healthcare settings and evaluating the resource implications of scaling up the intervention.

Conclusions

This study offers valuable insights into the feasibility, acceptability, and scalability of implementing PACE in a Tanzanian context. While PACE aligns well with healthcare objectives, addressing resource and infrastructure challenges is crucial for its effective and sustainable implementation. The study underscores the value of the NPT as a framework for guiding implementation processes, with broader implications for implementation science and pediatric acute care in LMICs. Future researchers can apply these insights by ensuring alignment with facility goals, engaging stakeholders early, planning for long-term evaluations, addressing resource challenges proactively, and considering the specific context and available resources when assessing scalability.

Availability of data and materials

Deidentified participant data from this study are available upon reasonable request. Interested researchers may obtain the data by contacting the corresponding author, Dr. Peter Meaney, at [email protected]. Access will be granted following approval by an independent review committee established to evaluate the scientific validity and ethical justification of the proposed use. Please note that only the deidentified participant data are available; no additional supporting information, such as study protocols or statistical analysis plans, will be provided. This process ensures that the data are used responsibly and in accordance with ethical research standards.


Abbreviations

BMC: Bugando Medical Centre

e-learning: Electronic learning

FGD: Focus group discussion

IDI: In-depth interview

LMIC: Low- or middle-income country

NPT: Normalization process theory

NoMAD: Normalization measure development

PACE: Pediatric Acute Care Education

References

Kruk ME, Gage AD, Joseph NT, Danaei G, García-Saisó S, Salomon JA. Mortality due to low-quality health systems in the universal health coverage era: a systematic analysis of amenable deaths in 137 countries. Lancet. 2018;392(10160):2203–12.


Lewis TP, Roder-DeWan S, Malata A, Ndiaye Y, Kruk ME. Clinical performance among recent graduates in nine low- and middle-income countries. Trop Med Int Health. 2019;24(5):620–35.

Perin J, Mulick A, Yeung D, Villavicencio F, Lopez G, Strong KL, et al. Global, regional, and national causes of under-5 mortality in 2000–19: an updated systematic analysis with implications for the sustainable development goals. Lancet Child Adolesc Health. 2022;6(2):106–15.

Sharrow D, Hug L, Lee S, Liu Y, You D. Levels & Trends in child mortality: report 2021, estimates developed by the United Nations Inter-agency Group for Child Mortality Estimation. New York: United Nations Children’s Fund; 2021.


UNICEF Data. United Republic of Tanzania (TZA) - demographics, health & infant mortality. Available from: https://data.unicef.org/country/tza/. Cited 2023 Jun 11.

Di Giorgio L, Evans DK, Lindelow M, Nguyen SN, Svensson J, Wane W, et al. Analysis of clinical knowledge, absenteeism and availability of resources for maternal and child health: a cross-sectional quality of care study in 10 African countries. BMJ Glob Health. 2020;5(12):e003377.

Meaney PA, Sutton RM, Tsima B, Steenhoff AP, Shilkofski N, Boulet JR, et al. Training hospital providers in basic CPR skills in Botswana: acquisition, retention and impact of novel training techniques. Resuscitation. 2012;83(12):1484–90.

Meaney PA, Topjian AA, Chandler HK, Botha M, Soar J, Berg RA, et al. Resuscitation training in developing countries: a systematic review. Resuscitation. 2010;81(11):1462–72.


Bloom BS. The 2 sigma problem: the search for methods of group instruction as effective as one-to-one tutoring. Educ Res. 1984;13(6):4–16.


Rowe SY, Peters DH, Holloway KA, Chalker J, Ross-Degnan D, Rowe AK. A systematic review of the effectiveness of strategies to improve health care provider performance in low- and middle-income countries: methods and descriptive results. PLoS ONE. 2019;14(5):e0217617.


WHO. Recommendations on digital interventions for health system strengthening. World Health Organization: Geneva; 2019. Licence: CC BY-NC-SA 3.0 IGO.

Meaney PA, Hokororo A, Masenge T, Mwanga J, Kalabamu FS, Berg M, et al. Development of pediatric acute care education (PACE): an adaptive electronic learning (e-learning) environment for healthcare providers in Tanzania. Digit Health. 2023;1(9):20552076231180470.

Meaney P, Hokororo A, Ndosi H, Dahlen A, Jacob T, Mwanga JR, et al. Feasibility of an adaptive e-learning environment to improve provider proficiency in essential and sick newborn care in Mwanza, Tanzania. medRxiv; 2023. p. 2023.07.11.23292406. Available from: https://www.medrxiv.org/content/10.1101/2023.07.11.23292406v1. Cited 2023 Aug 30.

Finch TL, Girling M, May CR, Mair FS, Murray E, Treweek S, et al. Improving the normalization of complex interventions: part 2 - validation of the NoMAD instrument for assessing implementation work based on normalization process theory (NPT). BMC Med Res Methodol. 2018;18(1):135.

May C, Finch T. Implementing, embedding, and integrating practices: an outline of normalization process theory. Sociology. 2009;43(3):535–54.

May C. A rational model for assessing and evaluating complex interventions in health care. BMC Health Serv Res. 2006;6(1):86.

May CR, Mair F, Finch T, MacFarlane A, Dowrick C, Treweek S, et al. Development of a theory of implementation and integration: normalization process theory. Implement Sci. 2009;4(1):29.

May CR, Cummings A, Girling M, Bracher M, Mair FS, May CM, et al. Using normalization process theory in feasibility studies and process evaluations of complex healthcare interventions: a systematic review. Implement Sci. 2018;13(1):80.

Murray E, Treweek S, Pope C, MacFarlane A, Ballini L, Dowrick C, et al. Normalisation process theory: a framework for developing, evaluating and implementing complex interventions. BMC Med. 2010;8(1):63.

McEvoy R, Ballini L, Maltoni S, O’Donnell CA, Mair FS, MacFarlane A. A qualitative systematic review of studies using the normalization process theory to research implementation processes. Implement Sci. 2014;9(1):2.

Finch TL, Rapley T, Girling M, Mair FS, Murray E, Treweek S, et al. Improving the normalization of complex interventions: measure development based on normalization process theory (NoMAD): study protocol. Implement Sci. 2013;8(1):43.

May CR, Albers B, Bracher M, Finch TL, Gilbert A, Girling M, et al. Translational framework for implementation evaluation and research: a normalisation process theory coding manual for qualitative research and instrument development. Implement Sci. 2022;17(1):19.

Proctor E, Silmere H, Raghavan R, Hovmand P, Aarons G, Bunger A, et al. Outcomes for implementation research: conceptual distinctions, measurement challenges, and research agenda. Adm Policy Ment Health Ment Health Serv Res. 2011;38(2):65–76.

Greenhalgh T, Taylor R. How to read a paper: papers that go beyond numbers (qualitative research). BMJ. 1997;315(7110):740–3.

von Elm E, Altman DG, Egger M, Pocock SJ, Gotzsche PC, Vandenbroucke JP, et al. Strengthening the Reporting of Observational Studies in Epidemiology (STROBE) statement: guidelines for reporting observational studies. BMJ. 2007;335(7624):806–8.

O’Brien BC, Harris IB, Beckman TJ, Reed DA, Cook DA. Standards for reporting qualitative research: a synthesis of recommendations. Acad Med J Assoc Am Med Coll. 2014;89(9):1245–51.

Bluestone J, Johnson P, Fullerton J, Carr C, Alderman J, BonTempo J. Effective in-service training design and delivery: evidence from an integrative literature review. Hum Resour Health. 2013;11(1):51.

Frenk J, Chen L, Bhutta ZA, Cohen J, Crisp N, Evans T, et al. Health professionals for a new century: transforming education to strengthen health systems in an interdependent world. Lancet. 2010;376(9756):1923–58.

Grol R, Grimshaw J. From best evidence to best practice: effective implementation of change in patients’ care. Lancet Lond Engl. 2003;362(9391):1225–30.

Eccles MP, Grimshaw JM, MacLennan G, Bonetti D, Glidewell L, Pitts NB, et al. Explaining clinical behaviors using multiple theoretical models. Implement Sci. 2012;7(1):99.

May C, Finch T, Mair F, Ballini L, Dowrick C, Eccles M, et al. Understanding the implementation of complex interventions in health care: the normalization process model. BMC Health Serv Res. 2007;7(1):148.

Michie S, van Stralen MM, West R. The behaviour change wheel: a new method for characterising and designing behaviour change interventions. Implement Sci. 2011;6(1):42.

Rogers EM, Singhal A, Quinlan MM. Diffusion of innovations. In: An integrated approach to communication theory and research. London: Routledge; 2014. p. 432–48.

UNICEF. Effectiveness of digital learning solutions to improve educational outcomes. Available from: https://www.unicef.org/documents/effectiveness-digital-learning-solutions-improve-educational-outcomes. Cited 2024 Aug 7.

Lawrence CE, Dunkel L, McEver M, Israel T, Taylor R, Chiriboga G, et al. A REDCap-based model for electronic consent (eConsent): moving toward a more personalized consent. J Clin Transl Sci. 2020;4(4):345–53.


Acknowledgements

We are grateful to the healthcare providers from Bugando Medical Centre and the Makongoro and Igoma Health Centres in Mwanza City, Tanzania, who participated in the study, for their time, cooperation, and invaluable feedback. The authors thank the Pediatric Association of Tanzania and the Tanzanian Ministry of Health Regional and Council Health Management Teams for participating in stakeholder meetings. We also thank the Catholic University of Health and Allied Sciences, the London School of Hygiene and Tropical Medicine, the Paediatric Association of Tanzania, and Area9 for practical support.

REDCap database

Study data were collected and managed using REDCap electronic data capture tools hosted at Stanford University [71, 72]. Research Electronic Data Capture (REDCap) is a secure, web-based software platform designed to support data capture for research studies, providing 1) an intuitive interface for validated data capture; 2) audit trails for tracking data manipulation and export procedures; 3) automated export procedures for seamless data downloads to common statistical packages; and 4) procedures for data integration and interoperability with external sources. The Stanford REDCap platform ( http://redcap.stanford.edu ) was developed and operated by the Stanford Medicine Research IT team. The REDCap platform services at Stanford are subsidized by a) the Stanford School of Medicine Research Office and b) the National Center for Research Resources and the National Center for Advancing Translational Sciences, National Institutes of Health, through grant UL1 TR001085. Area9 Rhapsode™ meets the requirements for full GDPR compliance, including encryption, data security, and 'forget me' functionality.
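
For readers unfamiliar with the REDCap export workflow described above, the sketch below shows one way a deidentified record export might be pulled programmatically through REDCap's API. The endpoint URL and token are hypothetical placeholders, and the parameter choices are illustrative assumptions rather than this study's actual configuration.

```python
# Minimal sketch of a REDCap record export over the API.
# The URL and token below are hypothetical placeholders.
import requests

REDCAP_URL = "https://redcap.example.edu/api/"   # hypothetical endpoint
API_TOKEN = "REPLACE_WITH_PROJECT_TOKEN"         # issued per project

payload = {
    "token": API_TOKEN,
    "content": "record",   # export study records
    "format": "json",      # csv and xml are also supported
    "type": "flat",        # one row per record
    "rawOrLabel": "raw",   # raw coded values rather than labels
}

response = requests.post(REDCAP_URL, data=payload, timeout=30)
response.raise_for_status()
records = response.json()  # list of dicts, one per record
print(f"Exported {len(records)} deidentified records")
```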

Use of ChatGPT

We employed ChatGPT in two ways: 1) to edit and revise manuscripts for clarity to ensure adherence to writing standards (such as STROBE and SRQR) and to format manuscripts, tables, and figures; and 2) to conduct preliminary data screening/monitoring using deidentified data exported from our REDCap databases. All the statistical results and conclusions in this manuscript were confirmed by a biostatistician or member of the author group.
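
As a rough illustration of the preliminary screening step described above, the sketch below flags high missingness and out-of-range survey responses in a deidentified export. The file name, column prefix, and 1-to-5 item scoring are assumptions made for the example, not details taken from the study.

```python
# Illustrative screening pass over a hypothetical deidentified export.
import pandas as pd

df = pd.read_csv("deidentified_export.csv")  # hypothetical file name

# Flag fields with more than 10% missing values for follow-up.
missingness = df.isna().mean().sort_values(ascending=False)
print(missingness[missingness > 0.10])

# Flag out-of-range responses, assuming survey items are scored 1-5
# and stored in columns with a "nomad_" prefix (invented for the example).
items = [c for c in df.columns if c.startswith("nomad_")]
out_of_range = df[items].apply(lambda s: s.notna() & ~s.between(1, 5))
print(out_of_range.sum())
```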

Patient and public involvement

This research was performed without patient involvement. Patients were not invited to comment on the study design and were not consulted to develop patient-relevant outcomes or interpret the results. Patients were not invited to contribute to the writing or editing of this document for readability or accuracy.

Funding

1. This study was funded by the Laerdal Foundation for Acute Medicine, Stanford University School of Medicine Maternal and Child Health Research Institute, Stanford Center for Innovation in Global Health, and the Stanford University School of Medicine Division of Pediatric Critical Care Medicine.

2. Funding sources had no role in the project design; data collection, analysis, or interpretation; reporting; or decision to submit results for publication.

3. Stanford CTSA award number UL1 TR001085 from the NIH/NCRR.

Author information

Authors and affiliations

Catholic University of Health and Allied Sciences, Mwanza, Tanzania

Joseph R. Mwanga, Adolfine Hokororo, Hanston Ndosi, Neema Chami & Enock Diocles

Pediatric Association of Tanzania, Dar Es Salaam, Tanzania

Adolfine Hokororo, Theopista Masenge, Florence S. Kalabamu, Neema Chami, Namala P. Mkopi & Castory Mwanga

Hubert Kairuki Memorial University, Dar es Salaam, Tanzania

Florence S. Kalabamu

Stanford University School of Medicine, Palo Alto, CA, USA

Daniel Tawfik, Rishi P. Mediratta, Marc Berg & Peter A. Meaney

Area9 Lyceum, Boston, MA, USA

Boris Rozenfeld

Kaiser Permanente, Oakland, CA, USA

Zachary H. Smith

Muhimbili National Hospital, Dar es Salaam, Tanzania

Namala P. Mkopi

London School of Hygiene and Tropical Medicine, London, UK

Ambrose Agweyu


Contributions

R.M., A.H., H.N., and P.A.M. made substantial contributions to the conception and design of the study, as well as to the acquisition, analysis, and interpretation of data. T.M., N.C., N.P.M., and A.A. were involved in the conception and design of the study and contributed significantly to the interpretation of the data. F.S.K., M.B., and Z.S. played key roles in the conception and design of the study and contributed to data analysis. D.T., R.P.M., B.R., and C.M. were primarily responsible for data analysis and interpretation. E.D. contributed to the interpretation of data. All authors were involved in drafting the manuscript or revising it critically for important intellectual content. Each author has approved the final version to be published and agrees to be accountable for their contributions. They have also committed to ensuring that questions related to the accuracy or integrity of any part of the work, including parts in which they were not personally involved, are appropriately investigated, resolved, and documented in the literature. In accordance with the guidelines of BMC journals, we have collaborated with local colleagues where the research was conducted. They have been included as co-authors as they meet all the authorship criteria mentioned above. Contributors who did not meet all criteria for authorship are acknowledged separately. Changes to the authorship list post-submission, including any changes in the order of authors, the deletion or addition of authors, or changes in the corresponding author, will be managed according to the journal’s policy. Such changes will be finalized only upon agreement by all authors and completion of the required change of authorship form.

Authors' information

1. Joseph R. Mwanga, B.A., MSc., PhD: Senior Lecturer and Qualitative Researcher at the Catholic University of Health and Allied Sciences, Tanzania. Focuses on pediatric care and public health initiatives.

2. Adolfine Hokororo, M.D., MMed, MSc: Chief of Quality Improvement at CUHAS and Clinical Epidemiologist. Brings extensive experience in clinical quality leadership, with significant influence on healthcare policies in Tanzania through her work with NIH and USAID.

3. Hanston Ndosi, M.D.: PACE Program Manager at the Catholic University of Health and Allied Sciences, Tanzania. Plays a crucial role in healthcare program management and implementation.

4. Theopista Masenge, M.D.: Vice Chair of the Pediatric Association of Tanzania and Pediatric GI Specialist. Leads implementation programs in child health and holds an MBA.

5. Florence S Kalabamu, M.D.: Pediatric Specialist at Hubert Kairuki Memorial University. Research has focused on health law and ethics.

6. Daniel Tawfik, M.D.: Assistant Professor of Pediatrics (Critical Care) at Stanford University School of Medicine. Practices at Stanford Medicine Children's Health, specializing in Pediatric Critical Care Medicine, and is a member of the MCHRI.

7. Rishi P Mediratta, M.D., MSc, MA: Clinical Associate Professor in Pediatrics at the Stanford University School of Medicine. A Pediatric Hospitalist at Lucile Packard Children’s Hospital and Faculty Fellow at CIGH. His work includes contributions to pediatrics and public health research in Ethiopia and teaching about the implications of COVID-19.

8. Boris Rozenfeld, M.D.: Learning Architect for Healthcare Education at Area9 Lyceum. Leads a team of medical learning engineers with a strong background in CME and educational technology.

9. Marc Berg, M.D.: Clinical Professor in Pediatrics - Critical Care at Stanford University School of Medicine. His research interests include CPR performance and pediatric defibrillation science through simulation. Medical Director of the Revive Initiative for Pediatric Resuscitation Excellence.

10. Zack Haines Smith, M.D., M.P.H.: Pediatric Critical Care Physician at Kaiser Permanente, Oakland, CA.

11. Neema Chami, M.D.: Neonatologist at Bugando Medical Center, Pediatric Faculty at CUHAS, Subject Matter Expert and Learning Engineer for PACE, and Key Member of the Pediatric Association of Tanzania.

12. Namala P Mkopi, M.D.: Tanzania’s first Pediatric Critical Care Physician, leading Pediatric Critical Care at Muhimbili National Hospital and Subject Matter Expert and Learning Engineer for PACE.

13. Castory Mwanga, M.D.: Senior Pediatrician and Leader at Simiyu Designated District Hospital, Subject Matter Expert and Learning Engineer for PACE, focusing on pediatric healthcare services.

14. Enock Diocles, M.D.: Nurse Educator at Mwanza College of Allied Sciences, Researcher, Clinician, and Founding Member of the Tanzanian Pediatric Nursing Association.

15. Ambrose Agweyu, M.D., MSc.: Kenyan Pediatrician and Professor of Epidemiology at the London School of Hygiene and Tropical Medicine. Influential in adapting WHO guidelines for Kenya and leading clinical trials for childhood pneumonia treatment.

16. Peter Andrew Meaney, M.D., MPH: Clinical Professor at Stanford University School of Medicine specializing in Pediatric Critical Care Medicine. His research focuses on provider education, implementation science, and quality care in resource-limited settings. He is an Associate Program Director for the T-32 Pediatric Subspecialty Global Health Fellowship and a Global Health Faculty Fellow at CIGH.

Corresponding author

Correspondence to Peter A. Meaney.

Ethics declarations

Ethics approval and consent to participate

The Institutional Review Board of the Tanzania National Institute of Medical Research (NIMR/HO/R.8a/Vol. IX/3990), Stanford University (60379), the ethics committee of the Catholic University of Health and Allied Sciences (no ID number given), and the Mwanza Regional Medical Officer (Ref. No. AG.52/290/01A/115) approved the study protocol, including consent. The data collection procedures were completed in compliance with the guidelines of the Health Insurance Portability and Accountability Act (HIPAA) to ensure subject confidentiality. Informed electronic consent was obtained through REDCap from all providers who participated in PACE [35]. All providers who provided consent were included. All surveys and questionnaires were entered directly by providers into REDCap. This study is reported according to the Consolidated Standards of Reporting Trials (CONSORT) 2010 extension to randomized pilot and feasibility trials.

Consent for publication

All individual persons' data included in this study are entirely unidentifiable, and no details on individuals are reported within the manuscript. Therefore, consent for publication is not applicable to this study. If there were any identifiable details, images, or videos relating to individual persons, consent would be obtained from those persons or, in the case of children, from a parent or legal guardian.

Competing interests

BR and MB are compensated by Area9 Lyceum as Senior Learning Architect and Medical Director, respectively.

Additional information

Publisher's note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Supplementary Information

Supplementary Material 1.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/ .

Reprints and permissions

About this article

Cite this article

Mwanga, J.R., Hokororo, A., Ndosi, H. et al. Evaluating the implementation of the Pediatric Acute Care Education (PACE) program in northwestern Tanzania: a mixed-methods study guided by normalization process theory. BMC Health Serv Res 24, 1066 (2024). https://doi.org/10.1186/s12913-024-11554-3


Received: 16 May 2024

Accepted: 06 September 2024

Published: 13 September 2024

DOI: https://doi.org/10.1186/s12913-024-11554-3


Keywords

  • Adaptive learning
  • Feasibility
  • Acceptability
  • Normalization Process Theory
  • Implementation Science


  • Open access
  • Published: 16 September 2024

Gaps in communication theory paradigms when conducting implementation science research: qualitative observations from interviews with administrators, implementors, and evaluators of rural health programs

  • Nicole L. Johnson (ORCID: orcid.org/0000-0001-5686-2062) 1,2,
  • Jennifer Van Tiem 1,2,
  • Erin Balkenende 1,3,
  • DeShauna Jones 1,4,
  • Julia E. Friberg 1,2,
  • Emily E. Chasco 1,4,
  • Jane Moeckli 1,2,
  • Kenda S. Steffensmeier 1,2,
  • Melissa J. A. Steffen 1,2,
  • Kanika Arora 5,
  • Borsika A. Rabin 6,7 &
  • Heather Schacht Reisinger 1,3,4

Implementation Science volume 19, Article number: 66 (2024)


Background

Communication is considered an inherent element of nearly every implementation strategy. Often it is seen as a means for imparting new information between stakeholders, representing a Transaction orientation to communication. From a Process orientation, communication is more than information exchange and is acknowledged as being shaped by (and shaping) the individuals involved and their relationships with one another. As the field of Implementation Science (IS) works to strengthen theoretical integration, we encourage an interdisciplinary approach that engages communication theory to develop a richer understanding of strategies and determinants of practice.

Methods

We interviewed 28 evaluators, 12 implementors, and 12 administrators from 21 Enterprise-Wide Initiatives funded by the Department of Veterans Affairs Office of Rural Health. Semi-structured interviews focused on experiences with implementation and evaluation strategies. We analyzed the interviews using thematic analysis, identifying a range of IS constructs, and then deductively classified the coded segments as reflecting a Transaction or Process orientation to communication.

Results

We organized findings using the two IS constructs most commonly discussed in interviews: Collaboration and Leadership Buy-in. The majority of segments coded as Collaboration (n = 34, 74%) and Leadership Buy-in (n = 31, 70%) discussed communication from a Transaction orientation and referred to communication as synonymous with information exchange, which emphasizes the task over the relationships between the individuals performing the tasks. Conversely, when participants discussed Collaboration and Leadership Buy-in from a Process orientation, they acknowledged both constructs as the result of long-term efforts to develop positive relationships based on trust and respect, and emphasized the time costliness of such strategies. Our findings demonstrate that participants who discussed communication from a Process orientation recognized the nuance and complexity of interpersonal interactions, particularly in the context of IS.

Conclusions

Efficient, reliable information exchange is a critical but often overemphasized element of implementation. Practitioners and researchers must recognize and incorporate the larger role of communication in IS. Two suggestions for engaging a Process orientation to communication are to: (a) use interview probes to learn how communication is enacted, and (b) use process-oriented communication theories to develop interventions and evaluation tools.


Contributions to the literature

Communication is a vital part of implementation. Yet predominant discussions about implementation strategies are limited to a Transaction orientation. Conversely, the Process orientation to communication acknowledges the multiple moving elements in an implementation context that influence collaboration and leadership buy-in.

Exemplars of interview segments about communication engaging a Process orientation were identified to demonstrate ways interviewers can probe to gain a deeper understanding of communication as a process.

We provide examples and suggestions for qualitatively examining communication processes to better understand the impact of implementation strategies.

Several theories with a Process orientation are identified for consideration in future research and implementation planning and evaluation.

Background

Most implementation strategies include a communication component, particularly when evidence-based interventions are introduced and promoted throughout an organization. When implementing new programming, it is common to consider communication as simply a means through which information is imparted [1, 2]. Implementation Science (IS) researchers have an imperative to understand the role of communication as more than a means for information exchange [3]. Yet, even as a means for information exchange, Manojlovich and colleagues recognized the lack of attention to communication in implementation research [1].

Broadly, the study of communication focuses on how messages are used to generate meanings [4], and provides perspective for moving beyond an emphasis on information exchange, thus moving beyond the task dimension and recognizing the value of the relational dimension. Despite its relatively young development both academically and professionally, the communication discipline offers valuable insight to IS research [5]. There are two predominant ways to characterize communication: (1) communication as Transaction, and (2) communication as Process.

When communication is viewed as a Transaction, it is discussed as a linear one-way flow of information [3]. The materiality – the element of substantive value – of communication is found in accurate, efficient information transfer, thus putting emphasis on the task dimension and the channel (e.g., phone, handout) through which information is exchanged. When practitioners focus their efforts on preparing thoughtful and detailed educational sessions intended to increase program adoption, but do not allow time for interactive questions or develop opportunities for building relationships between key personnel responsible for successful adoption, we see a reliance on the Transaction orientation to communication.

When communication is conceptualized as a Process, we emphasize its constitutive nature, wherein our environments – social, organizational, political, etc. – shape and are shaped through communication [3]. From a Process orientation, the transformative properties of communication emphasize its relational dimension and bring about a materiality from the intangible elements of the process (e.g., tone of voice, relational history, contextual exigency), and concepts such as psychological safety, mutual respect, and trust foreground the mechanics of information exchange. For example, someone may schedule multiple options for the same information session to ensure real-time interactivity for questions and build in opportunities for small group breakouts and post-presentation networking for relationship-building. When understanding of communication shifts to encompass more than information exchange, we begin to recognize the role of communication in building relationships and influencing long-term cultural shifts, which is often the goal for implementation scientists [3]. If the Process orientation is overlooked in favor of a Transaction orientation, we may miss opportunities for identifying evidence-based communication strategies to support implementation.

The majority of subsequent work engaging Manojlovich et al.'s assertions agrees on the imperative to engage a Process orientation to communication, but makes no strides in designing approaches for exploring the characteristics of communication surrounding effective implementation strategies (e.g., [6, 7, 8]). As the conversation initiated by Manojlovich and colleagues about the role of communication in implementation science has progressed, recognition of communication has grown, but emphasis continues to focus on formal contexts (e.g., trainings and webinars) [1]. Further, quantitative measures that assess information accuracy, like the one used in Zhao and colleagues' work, overlook the importance of informal communication (e.g., rapport-building before meetings, impromptu connections) and the nuanced influence of the relational dimension that contributes to effective implementation. Bustos et al.'s (2021) analysis acknowledges both the formal and informal strategies through which communication might occur, but the communication they refer to is discussed from a Transaction orientation (i.e., “how information… was communicated to program staff” (p. 10)) [9].

For this study, we draw on interviews with employees of the Department of Veterans Affairs (VA) who evaluated, implemented, and administered interventions focused on improving the health and well-being of rural Veterans or the clinical staff who serve them. These interviews were exploratory and wide-ranging; for the purposes of this manuscript, we treat the interviews as akin to direct observations of intervention stakeholders discussing their real-world experiences operationalizing implementation strategies. Instead of focusing on what we could learn from the communication described in the interviews, we directed our attention to what lessons could be missing because of the way participants discussed communication. In this manuscript, we provide examples of how Transaction and Process orientations to communication appear in the data when individuals described their experiences, as well as their relationships that supported IS strategies and facilitated intervention goals. We also suggest interview strategies to elicit detail about communication from a Process orientation to support ongoing learning of these informal communication processes. Though these interviews were not focused on communication, we use data from the interviews to argue that noticing communication helps us discover how to do implementation science better. Specifically, a Process orientation emphasizes the space between IS strategies and outcomes, and advances understanding of implementation challenges and solutions.

Methods

Study setting and context

The VA’s Office of Rural Health (ORH) supports the creation of Enterprise-Wide Initiatives (EWIs) to address issues facing rural Veterans from mental health and primary care access to training and education of VA staff who serve rural Veterans. As a part of the funding cycle, EWI teams must conduct annual evaluations. The Center for the Evaluation of Enterprise-Wide Initiatives (CEEWI) was created through a 2019 partnership between ORH and the VA’s Quality Enhancement Research Initiative to support EWI evaluation and disseminate best practices. The CEEWI team, consisting of implementation science experts and qualitative data analysts, reviews the annual reports and provides feedback to EWI teams on reporting standards.

Data collection

As part of the initial CEEWI project, EWI evaluators, implementors, and administrators were interviewed about the effectiveness of the IS strategies they used and why, in part to assist the CEEWI team in understanding key aspects of EWI implementation and evaluation. The interview guide included questions about the participant's role on the EWI, the core components of the EWI, implementation strategies and their impact on desired outcomes, outcome measures used for evaluation, and the evaluation process. CEEWI team members and EWI leadership identified the evaluators, implementors, and administrators to recruit for the study. While recruitment sought a purposive sample of roles from each EWI, ultimately the sample was a convenience sample based on availability and willingness to participate during the first nine months of the COVID-19 pandemic. Additional details about recruitment and data collection can be found in an earlier manuscript from this larger project [10]. We conducted 43 semi-structured interviews, which averaged 51 min (range 20–77 min), from April to December 2020 with evaluators, implementors, and administrators from 21 EWIs. While most interviews were conducted one-on-one, 8 were group interviews ranging from 2 to 4 participants [10]. This study uses these interviews as an example of how communication is described in discussions of implementation strategies.

Data analysis

Audio-recordings were transcribed, reviewed for accuracy, and uploaded into MAXQDA, a qualitative data management software [11]. Two doctorally trained qualitative analysts (NJ & JVT) leveraged their previous IS knowledge and conducted primary-cycle inductive coding to identify IS constructs and trends in the data [12]. The analysts initially coded all transcripts together in real time and resolved discrepancies immediately. During this first round of coding, several IS constructs were identified in participants' discussion of their implementation strategies, including Staff Buy-in, Tailoring, Rapport, Fidelity, and Mentorship. Collaboration and Leadership Buy-in emerged as the two most discussed IS constructs among participants. For secondary-cycle deductive coding to interpret how communication was conceptualized in discussions of Collaboration and Leadership Buy-in, the lead author, a Health Communication scholar, used an iterative process to develop a codebook identifying the language representing a Process or Transaction orientation for each construct (i.e., Collaboration and Leadership Buy-in) [3, 12]. The analysis focused on how communication was discussed, not on the form of communication that took place.
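
To make the two-cycle workflow concrete, the toy sketch below mimics the secondary-cycle deductive pass: segments already coded to an IS construct are tagged with an orientation. The CSV layout and cue phrases are invented stand-ins for the authors' codebook; in the study itself this judgment was made by a trained analyst, not by keyword matching.

```python
# Toy illustration of deductively tagging coded segments with a
# Transaction or Process orientation; cue lists are invented examples.
import csv

TRANSACTION_CUES = ["information", "update", "report", "meeting"]
PROCESS_CUES = ["trust", "rapport", "relationship", "respect"]

def orientation(segment: str) -> str:
    """Crude keyword heuristic standing in for analyst judgment."""
    text = segment.lower()
    process_hits = sum(cue in text for cue in PROCESS_CUES)
    transaction_hits = sum(cue in text for cue in TRANSACTION_CUES)
    return "Process" if process_hits > transaction_hits else "Transaction"

# Hypothetical export with columns: construct, segment
with open("coded_segments.csv", newline="") as f:
    for row in csv.DictReader(f):
        print(row["construct"], orientation(row["segment"]))
```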

Collaboration, a term often characterizing various levels of formal and informal partnerships between individuals, departments or organizations, is defined as a mutually beneficial and well-defined relationship between two or more parties to achieve common goals [ 13 ]. An example of discussing Collaboration from a Transaction orientation to communication would be using the term Collaboration to describe monthly meetings where the parties update one another about the status of their tasks and goals. From a Process orientation, Collaboration would be discussed in relational terms, describing the trust and rapport the team members have among one another.

Leadership Buy-in represents the role of support from individuals in leadership positions for a program’s adoption and sustainability, particularly when competing clinical and administrative demands are at play [ 7 ]. An example of discussing Leadership Buy-in using a Transaction orientation to communication would be a description of strategies for adoption that only focused on leadership education. However, someone who engaged a Process orientation to communication might: (1) discuss tailored persuasive strategies for demonstrating value to specific decision-makers, or (2) acknowledge the necessity for long-term relationships with individuals in leadership roles for sustainment.

Results

We conducted 43 interviews with 28 evaluators, 12 implementors, and 12 administrators. We coded a total of 90 segments as Collaboration (n = 46) and Leadership Buy-in (n = 44) across all the interviews. Most segments coded as Collaboration (n = 34, 74%) and Leadership Buy-in (n = 31, 70%) discussed communication from a Transaction orientation. The following results present examples of the discussion of Collaboration and Leadership Buy-in from the Transaction and Process orientations to communication.
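
As a quick check, the proportions reported above follow directly from the stated counts; the snippet below simply reproduces that arithmetic.

```python
# Reproduces the reported Transaction-orientation proportions.
segments = {
    "Collaboration": {"total": 46, "transaction": 34},
    "Leadership Buy-in": {"total": 44, "transaction": 31},
}

for construct, counts in segments.items():
    pct = 100 * counts["transaction"] / counts["total"]
    print(f"{construct}: {counts['transaction']}/{counts['total']} "
          f"= {pct:.0f}% Transaction")
# Collaboration: 34/46 = 74% Transaction
# Leadership Buy-in: 31/44 = 70% Transaction
```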

Transaction orientation to communication

When communication is treated as a transaction, it is discussed as a one-way flow of information traveling from one party to another during a discrete moment in time [ 3 ]. The materiality of communication is reduced to accurate, efficient information transfer, thus putting emphasis on the channel (e.g., Teams meeting, email) through which information is exchanged and the task dimension of the interaction.

Collaboration as transaction

Participants sometimes discussed Collaboration in a way that missed its nuance and treated communication as merely a means for transferring information that produced Collaboration. For example, one participant implied that communication, regardless of quality, is inherently good, thus the more there is, the better. They identified “communication across the team level” as an important strategy having the most impact on desired outcomes. “The more communication there is, the more people are able (…) to divide up [responsibilities].” (1A) In this instance, communication is synonymous with information exchange. While we do not have enough information to assess the quality of communication that Participant 1A is referring to, the fact they only discussed the parties involved and quantity of communication is an example of the Transaction orientation to communication.

In another example, a participant explained what they felt did not work as well in their evaluation process. “We have excellent communication with some, but not all members of the [EWI] (…) I’m not sure they’re always on the same page with each other, and then depending on who we’re having a meeting with, we might hear one thing but then that’s not what someone else was going to do (…) that’s one of the pieces that I think is hard for us.” (2A) Again, we see the Transaction orientation, and the barometer for effective communication is accuracy. The participant went on to discuss ways to improve this lack of alignment among team members, suggesting that “even if it’s just being invited to join calls (…) [for us] to answer questions about the [evaluation] data” would improve teamwork. (2A) This passage highlights an important aspect of communication – being present for an interaction and having the opportunity to answer questions enables information exchange.

One participant described the communication that occurred during a monthly videoconference:

The learning collaborative is focused on bringing people [together] to share their experiences and how various facilitators identify ways to shape their program, but also the way that our national team gives feedback about the data (…) One call a month is right after a report (…) they do a data review on the call where they go over the numbers with the entire learning collaborative, everyone in the program, giving them feedback from a national perspective and always reminding people of the milestones of the metrics that they’ve agreed to under the ORH grant. (3A)

Here, we see another example of a participant discussing communication in terms of information exchange.

Leadership buy-in as transaction

Participants also discussed Leadership Buy-in from a Transaction orientation. In the following passage, participant 4A described the benefits of the EWI leadership team visiting sites in-person:

They would do a site visit to all the hubs (…) and meet with the local leadership team and that’s where they confirmed if there were any issues that they might have. They would do like a 2–3 day site visit (…) so it helped create that structure where people knew exactly who to report to and how these programs were established and plenty of opportunities to address any concerns or any issues they might have.

There are substantial implications for local Leadership Buy-in through in-person visits, yet the only aspect of communication discussed here is information exchange and clarifying the information flow hierarchy (i.e., who to report to).

Participant 5A described their program’s efforts to obtain Leadership Buy-in:

Simple outreach and education, that was really the only things that we could do, and then as they continued, training kind of showed its usefulness. That had an impact on leadership buy-in.

Here, buy-in is attributed to education, which may account for some or even most of buy-in, but it does not recognize the relational dimension of communication.

For another EWI, leadership turnover at the facility presented a significant barrier to program sustainment because Leadership Buy-in was perpetually reset, which exacerbated a “conflict between implementation and sustainment strategies” when the decision-maker for sustainment funding was not the same person who had “sign[ed] off on it originally” (9A). Given that the EWI provided seed funding for specialty staff to implement the program, the expectation was that the facility would eventually incur the expense for sustainment, but the plan for funds was not made explicit at the time of application for the seed funding. Participant 9A went on to explain how their program responded to the unforeseen challenge of obtaining sustainment funding from sites:

Our clinical director worked really hard with the first cohort of sites prior to their funding ending to try to come up with strategies to pitch the program to leadership (…) Most sites had challenges with changing leadership priorities.

In response, the interviewer clarified their sources for funding, then changed topics: “Interviewer: Ok, alright. How about strategies that were intended to optimize the effectiveness outcomes for your EWI?” In this example, the interviewer seems to be approaching the participant's description of Leadership Buy-in from a Transaction orientation. A Process-oriented approach that asked about the nature and details of pitching the EWI to leadership may have provided more information about implementation strategy.

Process orientation to communication

From the process perspective, no single interaction serves as the cause or proof of effective Collaboration. Rather, the Process orientation recognizes the value of communication lies in the cumulative outcomes of consistent, often routine, interactions.

Collaboration as process

Collaborations require shared responsibility, mutual authority, accountability, and sharing of resources and rewards for success [13]. Collaboration in implementation has focused on strategies to enhance partners' ability to work together to achieve mutual benefits. We identified examples from participants discussing Collaboration with a Process orientation to communication. From these examples we see that Collaboration is seen as a product of long-term efforts to develop positive relationships and establish trust and autonomy to make one's own decisions. Many participants recognized the uniqueness and value in reaching the point of Collaboration. For example, Participant 10A shared, “The partnerships, it's like a very special kind of relationship, where we have to trust them, we rely on each other, but we also need to be able to make independent decisions.” Participant 6A also recognized the importance of relationships: “I would say they're collegial but they're not fully collaborative (…) when they're really more deeply integrated and their role is understood and recognized (…) they are more collaborative members.”

One participant on a different EWI echoed this sentiment that individuals’ intent and motivations for the work should extend beyond the assignment to be considered Collaboration, “It’s not just trying to check off a box (…) there truly is a passion behind it, on all of our parts, and that has been wonderful.” (7B) Recognizing others’ intent for their work allows one to acknowledge how interpersonal communication is influenced by more than information exchange.

In the following exemplars, we can see how interviewers were able to elicit detail about the interactions surrounding the implementation strategies they were discussing.

Exemplar for Probing Collaboration. In Table 1, we share an exemplar for engaging the Process orientation to communication, which led to greater explication of the role of communication in the implementation process.

Through this example, we see a more nuanced treatment of communication as a process after the interviewer probed twice to understand the participant’s use of “facilitation” as an implementation strategy. We gained description of the collaborative atmosphere within a team and how individuals’ psychological safety is manifested through authentic interactions.

Leadership buy-in as process

It takes more than information-exchange to garner support (e.g., financial, staff) for facilitation and sustainment. One participant acknowledged the web of influence that contributes to Leadership Buy-in and effective implementation:

We reached out to all the rural sites their leadership… sort of advertising the program, so we would schedule a conference call with a director, chief of staff, emergency room chief, to sort of discuss the program (…) then we would follow up with an actual 1-day on-site visit (…) where we meet with again, leadership, but we also meet with the [staff from several departments] (…) It’s an all-day visit to further introduce our program, to the team on site, as well as learn more about their program, and how [our EWI] might incorporate itself, and what challenges (…) we might face in implementation. (2B)

Here, we see an acknowledgement of reciprocal relationship-building to learn about priorities and needs.

Several participants discussed how time costly it is to gain Leadership Buy-in to ease the burden of change on an organization and staff, particularly for a nationwide program. One participant reflected:

Ten years ago, it was a [regional] project, so the main kind of instruction came from a [regional] level down, you know. The site visit was just a medical director and the nurse manager telling you that, ‘Hey, this is what’s going to happen,’ and it happened. Now (…) it’s like a year-long process to get people familiarized (…) go live went from one day to four days long. (11B)

Despite its value, garnering Leadership Buy-in has its challenges. Sometimes identifying the right individuals who represent the relevant leadership roles is not clear cut.

Once we have identified that our program can go to that site, we ask the local (…) program manager to identify who (…) key local leaders are (…) It’s important to have the managers of those sites involved in this process from the beginning (…) We (…) set up an initial meeting (…) where we review the implementation process plan with everybody on that call, and answer questions about what we and [specialty care] services will provide as part of the training opportunity and clearly delineate what we need the site or the facility to commit to provide (…) we answer questions, alleviate concerns, things like that. (7B)

Participant 7B went on to describe the challenge of identifying the right leadership representative:

The only barrier that we’ve encountered is some challenges in getting the right leadership on the call to review this in real time and answer questions (…) whether it is due to leadership turnover at the site, even from the time that we set up the call to the time that we actually do the call, there have been some change-overs, and that has been a challenge.

Again, we see this participant engaging a strong Process orientation to communication as they emphasize the importance of relationship-building for Leadership Buy-in.

Exemplar for Probing Leadership Buy-in. In the following example, the interviewer engaged the Process orientation to communication with probes that led to greater explication of the role of communication in developing Leadership Buy-in (Table 2).

Discussion

Results illustrate ways administrators, implementors, and evaluators characterized communication related to Collaboration and Leadership Buy-in. From the Transaction orientation, we saw that the term communication was used synonymously with information exchange. The problem of implementation lies beyond efficient and reliable information transfer, and instead centers on cooperative sensemaking and learning within and among teams situated in an organization that is influenced by its social, geographic, and political environments [2, 14, 15]. Communication necessary for effective implementation is based on improvisation and reciprocity and constitutes relationships over time [2, 15]. Our data indicate these processes are occurring in implementation, but we may not always be paying close enough attention to their occurrence. If most discussions about communication engage a Transaction orientation, then practitioners and evaluators will never have the insight necessary to maximize the impact of their communication efforts.

Participants often discussed Leadership Buy-in more as an outcome of education, and less as a byproduct of improvisational relationship-building, which demonstrates the predominant Transaction orientation to communication privileging rehearsed, often unidirectional, and mostly controlled interactions. Formal information exchange is undoubtedly an important element of effective implementation; the Transaction orientation aligns well with the goals of dissemination and implementation as a field [15]. However, our data point to the importance of thinking about communication from a Process orientation for improving the effectiveness of implementation strategies, and show how members of implementation and evaluation teams too often focus on the transaction elements of communication. Previous work that engages the Transaction orientation and points to the benefits of reliable information exchange has paved the way for more exploratory naturalistic methods for studying IS from a Process orientation to communication [3, 14, 15, 16]. As noted in our findings, the Transaction orientation overlooks the intricacies of processes that occur among individuals to build trust, cultivate buy-in, and influence team decision-making, all of which are markers of successful implementation.

Suggestions for engaging process orientation to communication

Given that the purpose of IS is to promote the adoption of research and evidence-based practices, it would behoove implementation scientists to tap into the richness of interdisciplinary theorizing and engage a Process orientation to communication [17]. As thinking about communication has evolved from a Transaction orientation, scholars have recognized the symbolic process that humans use to create meaning through informal, improvised interactions over a period of time [2]. A recent analysis of implementation strategies for behavioral health interventions called for explicit attention to the supportive role communication may play in most, if not all, strategies [15]. The Process orientation to communication enriches theorizing and elevates scholars' and practitioners' understanding of how to leverage implementation strategies to be meaningfully responsive to the relationships among the interested parties [18]. However, we warn against over-characterizing communication as a 'nebulous, global process' [2, 19]. For gaining insight on communication processes, we suggest two strategies: 1) interviewers focused on understanding implementation strategies could probe their interviewees to learn more about how communication is enacted; and 2) IS practitioners could utilize process-oriented communication theories in developing interventions and evaluation tools (e.g., interview guides).

The supplementary material accompanying this article includes excerpts from our interview data as examples demonstrating hypothetical ways interviewers can elicit more nuanced understanding of communication processes (see Tables S1 and S2).

Our analysis identified examples of missed opportunities for interviewers to probe about communication from a Process orientation recognizing the relational dimension of communication. Interview probes like those recommended in Tables S1 and S2 could lead to valuable understanding of the processes of communication, allowing exploration of the relational dimension of communication and implementation, and insight to individuals’ attitudes and sensemaking about those experiences. This may contribute to a more nuanced understanding of the importance of communication in implementation strategies beyond a transactional information exchange. We also provided examples highlighting the constitutive role communication plays in relationship-building. Our goal is to help attune IS researchers to the value of the processes of communication as a critical component of many implementation strategies.

Probing for communication processes in interviews

Challenges to implementing any new program may be significantly varied and widespread. No single barrier serves as an intervention's fatal flaw; rather, implementation is affected by numerous factors shaped through informal interactions [17, 20]. A recent study that aimed to identify which implementation strategies should be most closely considered for which determinants of practice reported that one of its limitations was the heterogeneity of responses [21]. This variation in responses among administrators, implementors, and evaluators points to the value of a more nuanced understanding of the unique, context-dependent, and relationally based communication processes undergirding implementation strategies [21]. Further, in their ethnographic study of hand hygiene programs, Goedken and colleagues poignantly emphasized the importance of understanding how implementation strategies are used and defined in real-world settings for understanding determinants of practice [22]. By looking below the surface of implementation strategies and focusing on the interactions surrounding those strategies, we may begin to recognize the determinants of practice, that is, the mechanisms for change, more precisely. Discussing communication from a Process orientation allows us to access what is happening below the surface that cannot be observed as an outsider. With greater insight on communication processes occurring throughout implementation, the field of IS would be poised to provide meaningful guidance for combining implementation strategies [22]. In a similar vein, IS researchers should consider the temporality of IS strategies and how this underscores the role of communication. The role of Leadership Buy-in at all stages of development and implementation cannot be overstated [23]. Albright suggests shifting away from the predominant focus of research on the active implementation period to explore activities occurring during design and preparation [15].

Most implementation strategies have a communication component representing the channel for education and promotion (e.g., workshops, webinars, brochures) [ 15 ]. Our proposed interview strategies interrogate communication in a way that recognizes the relational dimensions of interpersonal interactions, providing insight into what actually makes implementation effective. By understanding communication from a Process orientation, we may enrich our understanding of implementation strategies [ 24 ].
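To make these probing strategies concrete, the sketch below shows one way a team might tag probes in a semi-structured interview guide by communication orientation, so that Process-oriented probes are planned alongside, rather than crowded out by, Transaction-oriented ones. This is a minimal illustration under our own assumptions: the probe wordings are hypothetical and are not the example probes in Tables S1 and S2.

```python
# Minimal sketch: organize semi-structured interview probes by communication
# orientation. All probe wordings are hypothetical illustrations; they are
# NOT the example probes from Tables S1 and S2.
from dataclasses import dataclass, field
from typing import List

@dataclass
class Probe:
    orientation: str  # "Transaction" (information exchange) or "Process" (relational)
    text: str

@dataclass
class InterviewGuide:
    topic: str
    probes: List[Probe] = field(default_factory=list)

    def by_orientation(self, orientation: str) -> List[str]:
        """Return the probe wordings tagged with the given orientation."""
        return [p.text for p in self.probes if p.orientation == orientation]

guide = InterviewGuide(
    topic="Implementation strategy: ongoing training",
    probes=[
        Probe("Transaction", "What information was shared in the training sessions?"),
        Probe("Process", "How did conversations about the training change working "
                         "relationships on your team over time?"),
        Probe("Process", "Walk me through a specific interaction in which the "
                         "training came up informally."),
    ],
)

for wording in guide.by_orientation("Process"):
    print("-", wording)
```

The bookkeeping here is deliberately trivial; the interpretive work of listening and following up remains with the interviewer.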

Utilize process-oriented theories

Theories that engage a Transaction orientation to communication often subscribe to the traditional knowledge-intention-behavior paradigm, which proposes a stable, linear, positive relationship between knowledge and behavior change (e.g., the Theory of Reasoned Action, a predictive theory suggesting a strong relationship among individuals’ attitudes about a behavior, their intention, and their behavior [ 25 ]), and they tend to overlook the nuance of communication processes. However, humans are more complicated and inconsistent than these theories acknowledge. The Process orientation to communication allows for more realistic approaches that privilege the constitutive nature of communication to co-create meaning socially. In a recent scoping review of 158 studies in implementation research on maternity care, effective communication was noted as a key factor for promoting change across the body of work, but the majority of the research was atheoretical and ambiguous in its operationalization of communication [ 26 ].
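To make the ‘stable, linear’ character of this paradigm concrete, the classic Fishbein-Ajzen formulation of the Theory of Reasoned Action (our gloss of the standard textbook notation, not notation used elsewhere in this article) expresses behavioral intention as a weighted sum:

$$BI = w_1 A_B + w_2 SN$$

where $BI$ is behavioral intention, $A_B$ is attitude toward the behavior, $SN$ is the subjective norm, and $w_1$ and $w_2$ are empirically estimated weights; behavior is then assumed to follow intention. It is precisely this fixed, additive form that leaves little room for meaning negotiated in ongoing interaction.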

Health communication scholars are trained to be sensitive to the cooperative nature of establishing shared meaning, the multiple interpretations of behaviors, and the challenges of coordinating interactions when studying implementation strategies. Several theories, including two that pay special attention to how meaning is created socially, Coordinated Management of Meaning (CMM) [ 27 ] and Structuration Theory [ 28 ], could highlight perspectives that recognize communication as a complex process and translate well to practice. CMM is a constructivist theory that provides a practical heuristic for interpreting the interpersonal communication events that comprise larger conversations. As such, CMM informs practitioners’ decision-making by illuminating patterns of interaction to find ways of talking that could result in desired outcomes [ 29 ]. Structuration Theory, developed by sociologist Anthony Giddens in the late 1970s, describes the dynamic relationship between individuals and their environment that both constrains and enables social practices [ 28 ]. Through its critical lens, Structuration Theory highlights the (lack of) agency individuals perceive for themselves and others, and the rules and resources perpetuated through social interactions. Lastly, Diffusion of Innovations, a framework well-entrenched in IS research and practice, also engages a process paradigm [ 30 , 31 ]. There is ample opportunity, and an imperative, to employ a Process orientation to better understand communication in implementation science.

Limitations

This study has multiple limitations. We did not collect demographic data to describe our participants beyond the role they held on their EWI teams. The data represent a convenience sample of administrators, implementors, and evaluators working on EWIs funded at the time of data collection, which resulted in variability in representation across EWIs and staff roles. Further, because of the diversity of foci, designs, and timelines of the EWIs, we cannot draw conclusions about the effectiveness of the strategies discussed in this paper. Lastly, the interviews were not conducted to assess communication explicitly. Despite these limitations, our analysis supports concrete suggestions for improving understanding of the role of communication in implementation.

Future directions for research

Research analyzing the role of communication from a Process orientation would enrich the field of IS. Similar to Fishman et al.’s work comparing the measurement and operationalization of attitude in IS studies and in studies grounded in psychology, our work emphasizes the importance of interdisciplinary collaboration [ 32 ]. The interviewees and interviewers in our study focused predominantly on a Transaction orientation to communication; more studies are needed that focus on this level of distinction, particularly on how to adopt a Process orientation to communication for implementation strategy specification. A systematically developed body of knowledge about communication processes could inform IS strategies that support a range of aspects crucial to effectiveness, including Leadership Buy-in and Collaboration. Future research would do well to use direct observation to characterize communication processes related to implementation strategies from a rich Process orientation. Dissemination Science, as one facet of Dissemination and Implementation Science, is firmly rooted in the mechanics of communication and would greatly benefit from engaging the Process orientation. A recent scoping review demonstrated that the field of Dissemination Science lacks insight into communication from the Process orientation; in the dissemination determinants it reviewed, the Transaction orientation persists, focusing on imparting information from one party to the next [ 33 ].

This study described instances of two broadly accepted orientations to communication engaged by implementation scientists. The findings demonstrate opportunities, and strategies, for engaging the Process orientation to communication to gain greater insight into the role communication plays in implementation outcomes. We hope this work inspires dialogue, new interdisciplinary collaborations, and innovative methods that highlight the utility of the Process orientation and the value of communication theory to implementation science for improving health services. When communication is understood as a process, practitioners will be better able to prepare for the unpredictability and uniqueness of its relational dimensions.

Availability of data and materials

The datasets presented in this article are not readily available, in accordance with federal requirements and with standards and guidelines for protecting participants’ privacy and maintaining confidentiality. Requests to access the datasets should be directed to Dr. Heather Reisinger ([email protected]).

Abbreviations

CEEWI: Center for the Evaluation of Enterprise-Wide Initiatives

CMM: Coordinated Management of Meaning

EWI: Enterprise-Wide Initiative

IS: Implementation Science

ORH: Office of Rural Health

VA: Department of Veterans Affairs

References

1. Zhao X, et al. Perceived communication effectiveness in implementation strategies: a measurement scale. Implement Sci Commun. 2022;3(1):38.

2. Jordan ME, et al. The role of conversation in health care interventions: enabling sensemaking and learning. Implement Sci. 2009;4:15.

3. Manojlovich M, et al. Hiding in plain sight: communication theory in implementation science. Implement Sci. 2015;10:58.

4. National Communication Association. What is communication? 2023. Available from: https://www.natcom.org/about-nca/what-communication. Cited 2023 July 11.

5. Kreps G. Analysis of the interdisciplinary credibility of communication as a social science. Assoc Communication Bull. 1982;42:40–3.

6. Lette M, et al. Unknown makes unloved-a case study on improving integrated health and social care in the Netherlands using a participatory approach. Health Soc Care Community. 2020;28(2):670–80.

7. Lewis MA, et al. Multilevel communication to improve well-being during a pandemic. Implement Res Pract. 2021;2:2633489520988265.

8. Salvador JG, et al. Use of concept mapping to support evidence-based practice implementation improvement in rural areas. J Rural Mental Health. 2018;42(1):3–19.

9. Bustos TE, Sridhar A, Drahota A. Community-based implementation strategy use and satisfaction: a mixed-methods approach to using the ERIC compilation for organizations serving children on the autism spectrum. Implement Res Pract. 2021;2:26334895211058086.

10. Chasco EE, et al. RE-AIM for rural health innovations: perceptions of (mis)alignment between the RE-AIM framework and evaluation reporting in the Department of Veterans Affairs Enterprise-Wide Initiative program. Front Health Serv. 2024;4:1278209.

11. MAXQDA Plus 2022. Version 22.3.0. VERBI GmbH; 1995–2023. Updated 2023. https://www.maxqda.com. Accessed 4 Mar 2023.

12. Saldana J. The coding manual for qualitative researchers. Sage; 2009.

13. Huang KY, et al. Unpacking partnership, engagement, and collaboration research to inform implementation strategies development: theoretical frameworks and emerging methodologies. Front Public Health. 2018;6:190.

14. Kislov R, et al. Harnessing the power of theorising in implementation science. Implement Sci. 2019;14(1):103.

15. Albright K, et al. Communication strategies to facilitate the implementation of new clinical practices: a qualitative study of community mental health therapists. Transl Behav Med. 2022;12(2):324–34.

16. Salas E, et al. Communicating, coordinating, and cooperating when lives depend on it: tips for teamwork. Jt Comm J Qual Patient Saf. 2008;34(6):333–41.

17. Eccles MP, et al. An implementation research agenda. Implement Sci. 2009;4:18.

18. Ornstein JT, et al. Rugged landscapes: complexity and implementation science. Implement Sci. 2020;15(1):85.

19. Donovan EE, LeBlanc Farris K. Interpersonal communication and coping with cancer: a multidisciplinary theoretical review of the literature. Communication Theory. 2019;29(2):236–56.

20. McCullough MB, et al. The interplay of contextual elements in implementation: an ethnographic case study. BMC Health Serv Res. 2015;15:62.

21. Waltz TJ, et al. Choosing implementation strategies to address contextual barriers: diversity in recommendations and future directions. Implement Sci. 2019;14(1):42.

22. Goedken CC, et al. "The role as a champion is to not only monitor but to speak out and to educate": the contradictory roles of hand hygiene champions. Implement Sci. 2019;14(1):110.

23. Ross J, et al. Factors that influence the implementation of e-health: a systematic review of systematic reviews (an update). Implement Sci. 2016;11(1):146.

24. Baker R, et al. Tailored interventions to address determinants of practice. Cochrane Database Syst Rev. 2015;2015(4):CD005470.

25. Montano DE, Kasprzyk D. Theory of reasoned action, theory of planned behavior, and the integrated behavioral model. In: Glanz K, Rimer BK, Viswanath K, editors. Health behavior: theory, research, and practice. Jossey-Bass; 2015. p. 75–94.

26. Dadich A, Piper A, Coates D. Implementation science in maternity care: a scoping review. Implement Sci. 2021;16(1):16.

27. Pearce WB, Pearce KA. Extending the theory of the coordinated management of meaning (CMM) through the community dialogue process. Communication Theory. 2000;10(4):405–23.

28. Hardcastle MA, Usher KJ, Holmes CA. An overview of structuration theory and its usefulness for nursing research. Nurs Philos. 2005;6(4):223–34.

29. Imran M, et al. A critical study of coordinated management of meaning theory: a theory in practitioners’ hands. Int J Engl Linguistics. 2019;9(5):301.

30. Rogers EM. Diffusion of innovations. 5th ed. Free Press; 2003.

31. Powell BJ, et al. A mixed methods multiple case study of implementation as usual in children’s social service organizations: study protocol. Implement Sci. 2013;8(1):92.

32. Fishman J, Yang C, Mandell D. Attitude theory and measurement in implementation science: a secondary review of empirical studies and opportunities for advancement. Implement Sci. 2021;16(1):87.

33. Baumann AA, et al. A scoping review of frameworks in empirical studies and a review of dissemination frameworks. Implement Sci. 2022;17(1):53.


Acknowledgements

We thank the interview participants for their time and insights. We also acknowledge Office of Rural Health (ORH) program analysts Dr. Kelly Lora Lewis, Karyn Johnstone, Nicole Sanchez, Maura Timm, Anthony Achampong, Richard Huang, and Janice Garland for their assistance, as well as Dr. Sheila Robinson, former Deputy Director of ORH, Dr. Peter Kaboli, Executive Director of ORH, and Dr. Thomas Klobucar, former Executive Director of ORH, for their support. The views expressed in this article are those of the authors and do not necessarily reflect the position or policy of the Department of Veterans Affairs or the United States government.

Funding

VA Office of Rural Health and QUERI Project #: PEC 19–456.

Author information

Authors and Affiliations

Center for Access and Delivery Research and Evaluation, Iowa City VA Healthcare System, Iowa City, IA, USA

Nicole L. Johnson, Jennifer Van Tiem, Erin Balkenende, DeShauna Jones, Julia E. Friberg, Emily E. Chasco, Jane Moeckli, Kenda S. Steffensmeier, Melissa J. A. Steffen & Heather Schacht Reisinger

Veterans Rural Health Resource Center-Iowa City (VRHRC-Iowa City), VA Office of Rural Health, Iowa City, IA, USA

Nicole L. Johnson, Jennifer Van Tiem, Julia E. Friberg, Jane Moeckli, Kenda S. Steffensmeier & Melissa J. A. Steffen

Division of General Internal Medicine, Department of Internal Medicine, Carver College of Medicine, University of Iowa, Iowa City, IA, USA

Erin Balkenende & Heather Schacht Reisinger

Institute for Clinical and Translational Science, University of Iowa, Iowa City, IA, USA

DeShauna Jones, Emily E. Chasco & Heather Schacht Reisinger

Department of Health Management and Policy, College of Public Health, University of Iowa, Iowa City, IA, USA

Kanika Arora

Herbert Wertheim School of Public Health and Human Longevity Science, University of California San Diego, La Jolla, CA, USA

Borsika A. Rabin

UC San Diego ACTRI Dissemination and Implementation Science Center, University of California San Diego, La Jolla, CA, USA

Borsika A. Rabin


Contributions

HSR, EC, JVT, NJ, EB, DJ, and JF are responsible for the concept for this manuscript. NJ drafted the initial manuscript, and HSR, JVT, EC, EB, DJ, KSS, and JF contributed substantially to its structure and revisions. HSR developed the proposal for this project and obtained funding, with input from JVT, EB, and JM. HSR, JVT, EB, JM, and MS conducted interviews. KA and BR advised on all aspects of the project, including development of the standardized evaluation reporting template and manuscript revisions.

Corresponding author

Correspondence to Nicole L. Johnson.

Ethics declarations

Ethics approval and consent to participate

As designated by the IRB, this study is not human subjects research (Protocol #202001043).

Consent for publication

Not applicable.

Competing interests

HSR is an associate editor of Implementation Science. All decisions on this paper were made by another editor. The authors declare that they have no other competing interests.

Additional information

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Supplementary Information

Supplementary Material 1.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/ .


About this article

Cite this article

Johnson, N.L., Van Tiem, J., Balkenende, E. et al. Gaps in communication theory paradigms when conducting implementation science research: qualitative observations from interviews with administrators, implementors, and evaluators of rural health programs. Implementation Science 19, 66 (2024). https://doi.org/10.1186/s13012-024-01395-3


Received: 07 May 2024

Accepted: 03 September 2024

Published: 16 September 2024

DOI: https://doi.org/10.1186/s13012-024-01395-3


Keywords

  • Leadership buy-in
  • Collaboration
  • Interviewing
  • Qualitative methods
  • Communication theory
  • Implementation science strategies




Qualitative evaluation of Ultra Low Dose Molecular Breast Imaging

Research type

Research Study

Contact name

Nerys Forester

Contact email

[email protected]

Sponsor organisation

Newcastle upon Tyne Hospitals NHS Foundation Trust

Duration of Study in the UK

1 year, 10 months, 27 days

Research summary

Primary screening for breast cancer in the UK uses mammography for most women. Studies have shown that women with dense breast tissue have an increased risk of cancer and that mammography has reduced sensitivity in these women. Women with extremely dense breast tissue make up around 10% of the routine-risk screening population in the UK, and they are currently not treated any differently in the UK breast screening programme. In line with national targets to make cancer screening more personalised, current research is investigating whether supplementary imaging may benefit this cohort of women. Molecular Breast Imaging (MBI) is a nuclear medicine technique in which a radioactive substance is injected into the body; this substance is taken up preferentially by cancerous cells relative to the surrounding healthy tissue. An image of the breast is then taken by a camera that detects the radiation and shows the tumour clearly regardless of breast density. In theory, therefore, this technique should be more sensitive in women with dense breasts, and this is supported by a small number of studies showing superior diagnostic accuracy of MBI compared with mammography. Previous qualitative work highlighted that current MBI is too time-consuming and that its required radiation dose is considered too high for use in the UK. A commercial radiation detection company is making technical improvements to produce new detectors that will work more quickly and at a lower radiation dose. In this study we aim to conduct online or face-to-face qualitative interviews with patient and healthcare staff stakeholders, to learn about optimal patient populations, implementation of new interventions, MBI user requirements, future clinical study design, and health economic model inputs.
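As a purely illustrative back-of-envelope calculation (every number below is an assumption chosen for illustration, not a figure from this study), the potential yield of supplemental MBI in this cohort can be sketched as follows:

```python
# Back-of-envelope sketch: additional cancers detected per 1000 women with
# extremely dense breasts if MBI supplements mammography.
# ALL inputs are illustrative assumptions, not data from this study.
cancers_per_1000 = 8.0     # assumed underlying cancers per 1000 women screened
mammo_sensitivity = 0.60   # assumed mammography sensitivity in dense tissue
mbi_sensitivity = 0.90     # assumed MBI sensitivity (density-independent)

found_by_mammo = cancers_per_1000 * mammo_sensitivity  # 4.8 cancers
found_with_mbi = cancers_per_1000 * mbi_sensitivity    # 7.2 cancers
additional = found_with_mbi - found_by_mammo           # 2.4 cancers

print(f"Additional cancers detected per 1000 screened: {additional:.1f}")
```

Whether any such gain materialises in practice, and at what cost in radiation dose and scan time, is exactly the kind of question the planned interviews and health economic modelling are intended to inform.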

Wales REC 3

REC reference

Date of REC opinion

REC opinion

Further Information Favourable Opinion


Related readings on evaluating qualitative research

  1. How to use and assess qualitative research methods
  2. Criteria for Good Qualitative Research: A Comprehensive Review
  3. Presenting and Evaluating Qualitative Research
  4. Qualitative Research Resources: Assessing Qualitative Research
  5. Critically appraising qualitative research
  6. What Is Qualitative Research? An Overview and Guidelines
  7. Planning Qualitative Research: Design and Decision Making for New Researchers
  8. How to … assess the quality of qualitative research
  9. Evaluating Qualitative Research
  10. How to use and assess qualitative research methods
  11. What Is Qualitative Research?
  12. Commentary: Writing and Evaluating Qualitative Research Reports
  13. Assessing the 'Quality' of Qualitative Research
  14. How to appraise qualitative research (PDF)
  15. How to evaluate qualitative research
  16. How to appraise qualitative research
  17. Critical appraisal of qualitative research: necessity, partialities and the issue of bias
  18. Evaluating the findings of qualitative research
  19. Understanding and evaluating qualitative research
  20. Qualitative research: its value and applicability
  21. Qualitative Research with Socio-Technical Grounded Theory
  22. Qualitative Research Methods in Program Evaluation (PDF)
  23. How to evaluate qualitative research (PDF)
  24. 9 Data Collection Methods in Qualitative Research
  25. Intercoder reliability for use in qualitative research and evaluation
  26. Validity, reliability, and generalizability in qualitative research
  27. A qualitative investigation into the effectiveness of a housing navigation program
  28. Evaluating the implementation of the Pediatric Acute Care Education (PACE) program
  29. Gaps in communication theory paradigms when conducting implementation science research
  30. Qualitative evaluation of Ultra Low Dose Molecular Breast Imaging