Empirical evidence suggests that, for in-depth interview studies, thematic saturation typically occurs at around 12 interviews, although more interviews are usually needed for a heterogeneous sample or to achieve higher degrees of saturation [55, 56]. Both the DQIP and OPAL case studies were large: OPAL was designed to interview each of the 40 individual cases four times, and DQIP to interview the lead DQIP general practitioner (GP) twice (to capture change over time), plus another GP and the practice manager, from each of the 10 organisational cases. Despite the plethora of mixed methods research textbooks, there is very little guidance on sampling, as discussions typically link to method (e.g. interviews) rather than paradigm (e.g. case study). Process evaluations need to be tailored to the trial, the intervention and the outcomes being measured [45]. For example, in a stepped wedge design (where the intervention is delivered in a phased manner), researchers should ensure that process data are captured at relevant time points; in a two-arm or multiple-arm trial, they should ensure that data are collected from the control group(s) as well as the intervention group(s).
This finding is supported by a literature overview of process evaluations in public health published by Linnan and Steckler in 2002 [29]. We would encourage researchers to employ terms already used by other researchers, to facilitate meaningful comparisons across future studies, and to be mindful of comprehensively including the key components of a process evaluation: context, implementation, and mechanisms of impact [12]. Qualitative forms of data collection were the most common (43.4%), with individual interviews the predominant data collection method.
The seven process evaluations, all of which had been completed or nearly completed, came from projects in low- and middle-income countries (LMICs) including Fiji and Samoa, South Africa, Kenya, Peru, India, Sri Lanka and Tanzania, as well as from Indigenous communities in Canada (Table 1). The countries referred to in this manuscript were those in which projects were (a) funded through the GACD and (b) contained process evaluations at a sufficiently advanced stage to include in the analysis. These process evaluations were part of pragmatic trials of innovative interventions to prevent and manage hypertension in the areas of salt reduction, task redistribution, mHealth, community engagement and blood pressure control [8].
The choice of an appropriate conceptual or theoretical framework is, however, dependent on the philosophical and professional background of the researcher. The two examples within this paper used our own framework for the design of process evaluations, which proposes a number of candidate processes that can be explored, for example recruitment, delivery, response, maintenance and context [45]. This framework was published before the MRC guidance on process evaluations, and both the DQIP and OPAL process evaluations were designed before the MRC guidance appeared. The DQIP process evaluation explored all of the candidate processes in the framework, whereas the OPAL process evaluation selected four, illustrating that process evaluations can be selective in what they explore depending on their purpose, research questions and resources.
When designing a process evaluation, it is important to be mindful that the results may later be included in systematic reviews. Complex interventions usually undergo some tailoring when implemented in different contexts. We extracted and analyzed data on any theoretical guidance that was identified and discussed for the process evaluation stage of the included studies.
Our findings suggest a current propensity to collect data after intervention delivery rather than before and/or during it. It is unclear whether this reflects a lack of forethought about employing data collection before and during implementation, a lack of resources, or a reliance on post-intervention data collection approaches. This aside, based on our findings, we recommend that KT researchers planning process evaluations consider collecting data earlier in the implementation process, to prevent the challenges of retrospective data collection and to maximize the potential power of process evaluations. Consideration of the key components of process evaluations (context, implementation, and mechanisms of impact) is critically important to prevent the inference-observation confusion that can arise from an exclusive reliance on outcome evaluations [12].
The choice to include in-depth case study analyses necessitated a level of trust between the authors and the research teams, especially since most teams had not yet finalized their analyses or published their findings. We chose not to include other GACD projects, thereby reducing the scope of projects that could contribute to the analyses; we believe this approach facilitated more in-depth analyses, thereby enriching the findings of this study. Process evaluations are an important component of an effectiveness evaluation, as they focus on understanding the relationship between interventions and context to explain how and why interventions work or fail, and whether they can be transferred to other settings and populations. Historically, however, context has not been sufficiently explored and reported, resulting in poor uptake of trial results.
Case study is one appropriate methodology, but there is little guidance on what case study design can offer the study of context in trials. We address this gap in the literature by presenting a number of important considerations for process evaluation using a case study design. In the NERS process evaluation, implementation measures indicated that the intervention comprised a common core of health professional referrals to discounted, supervised, group-based exercise. The case study methodology is ideally suited to real-world, sustainable intervention development and evaluation because it can explore and examine contemporary complex phenomena, in depth, in numerous contexts and using multiple sources of data [8]. Case study design can capture the complexity of the case, the relationship between the intervention and the context, and how the intervention worked (or not) [8].
Many research projects experience delays in the formative and implementation phases. Some of the findings from this study align with those of other authors cited in this paragraph. In real-life implementation, the causal relationship between implementation and outcome is affected by the adaptability (or unpredictability) of actors and by a wide range of influencing elements [25], including the geographical and community setting. Using a mixed-methods approach deepens understanding by providing different perspectives, and supports validation and triangulation through the use of multiple sources [2, 26].
However, without good relationships, close observation of the intervention can be challenging. Evaluators also need to maintain sufficient independence to observe the work of stakeholders critically. Transparent reporting of relationships with policy and practice stakeholders, and mindfulness of how these relationships affect the evaluation, are crucial. Issues considered may include training and support, communication and management structures, and how these structures interact with implementers’ attitudes and circumstances to shape the intervention. As with all reviews, there is the possibility of incomplete retrieval of identified research; however, this review entailed a comprehensive search of the published literature and rigorous review methods.
During the implementation phase, the evaluation seeks to understand how the initiative is taking shape, where there is early progress, and how to maximize the ongoing success of the project. At the close of a project, the evaluation assesses the extent to which project aims were met and identifies circumstances that led to high and low levels of success. The evaluation also probes throughout for important unintended consequences of the work (e.g., a program designed to promote child car seat usage also motivates parents to use safety belts for themselves). All of this together helps to tell the full project story. FL, JW, GP, MM, JG, RV, RJ, JJM, BO, MP, JO, RW, KY, MR, AS, KT and AT were responsible for leading, providing oversight of and developing case studies for process evaluations of their respective studies.
At present, research studies often list common contextual factors, such as government or health board policies, organisational structures, and professional and patient attitudes, behaviours and beliefs, but without a depth of meaning and understanding [27]. The case study methodology is well placed to understand the relationship between context and intervention, where these boundaries may not be clearly evident. It offers a means of unpicking the contextual conditions pertinent to effective implementation.
A major finding from this systematic review is the lack of methodological rigor in many of the process evaluations. Almost 40% of the studies included in this review had an MMAT score of 50 or less, although scores varied considerably across the study designs used by the investigators. Moreover, the frequency of low MMAT scores among multi-method and mixed methods studies suggests a tendency toward lower methodological quality, which could point to the challenging nature of these research designs [32] or to a lack of reporting guidelines. Incorporating process evaluation data collection tools into the intervention process from the outset was identified as crucial for process evaluation.
The focus of a process evaluation will vary according to the stage at which it is conducted. The MRC framework recommends a feasibility and piloting phase after an intervention has been developed [1, 3]. At this stage, process evaluation can play a vital role in understanding the feasibility of the intervention and optimising its design and evaluation. Even when a process evaluation has been conducted at the feasibility stage, another will usually be needed alongside the full trial, because new problems are likely to emerge when the intervention is tested in a larger, more diverse sample. The implementation of research into healthcare practice is complex [1], with multiple levels to consider, such as the patient, healthcare provider, multidisciplinary team, healthcare institution, and local and national healthcare systems. Implementing evidence-based treatments to achieve healthcare system improvement that is robust, efficient and sustainable is crucially important. However, it is well established that improving the availability of research is not enough for successful implementation [2]; rather, active knowledge translation (KT) interventions are essential to facilitate the implementation of research into practice.
Although the studies generally took place in LMICs, there were varying geographical, cultural and economic settings within and across countries (Table 1). Most of the interventions (five) were tested in randomized controlled trials, with one stepped wedge trial and one pre-post study design. Single case studies usually sample typical or unique cases, their advantage being the depth and richness that can be achieved over a long period of time. The advantage of a multiple case study design is that cases can be compared to generate a greater depth of analysis.