Frequently Asked Questions
Website and content citations:
Please review and cite “www.RE-AIM.org” as well as our recent 20-year review of the application of RE-AIM; these are the most recent and integrative sources for up-to-date information on RE-AIM:
Glasgow, R. E., Harden, S. M., Gaglio, B., Rabin, B., Smith, M. L., Porter, G. C., Ory, M. G., & Estabrooks, P. A. (2019). RE-AIM Planning and Evaluation Framework: Adapting to New Science and Practice With a 20-Year Review. Frontiers in public health, 7, 64. https://doi.org/10.3389/fpubh.2019.00064
The original Glasgow, Vogt, and Boles (1999) paper is also a good historical resource for the initial call for public health reporting, but in citing it please be aware that many things in that paper have changed or evolved over the past 20 years; it is no longer the authoritative source on RE-AIM issues, and in particular it should no longer be the basis for characterizations of RE-AIM.
- What is RE-AIM?
- Is RE-AIM solely a quantitative model?
- How do you define each element?
- How do the RE-AIM elements relate to planning?
- Which RE-AIM element is the most important?
- Why isn’t cost one of the RE-AIM dimensions?
- How is RE-AIM different from other evaluation approaches?
- How is the RE-AIM definition of Implementation different from concepts such as intervention delivery, receipt of intervention, or implementation fidelity?
- Is RE-AIM used to design programs, or just to evaluate them?
Question: What Is RE-AIM?
- Reach – The absolute number, proportion, and representativeness of individuals who are willing to participate in a given initiative, intervention, or program, and reasons why or why not.
- Effectiveness – The impact of an intervention on important individual outcomes, including potential negative effects; broader impact including quality of life and economic outcomes; and variability across subgroups (generalizability or heterogeneity of effects).
- Adoption – (Setting level) The absolute number, proportion, and representativeness of settings and intervention agents (people who deliver the program) who are willing to initiate a program, and why. Note that adoption can have many nested levels, e.g., staff under a supervisor, under a clinic or school, under a system, under a community.
- Implementation– At the setting level, implementation refers to the intervention agents’ fidelity to the various elements of an intervention’s key functions or components, including consistency of delivery as intended and the time and cost of the intervention. Importantly, it also includes adaptations made to interventions and implementation strategies.
- Maintenance – At the setting level, the extent to which a program or policy becomes institutionalized or part of the routine organizational practices and policies. Within the RE-AIM framework, maintenance also applies at the individual level. At the individual level, maintenance has been defined as the long-term effects of a program on outcomes after a program is completed. The specific time frame for assessment of maintenance or sustainment varies across projects
RE-AIM was originally developed as a framework for consistent reporting of research results and was later used to organize reviews of the existing literature on health promotion and disease management in different settings. The acronym stands for Reach, Effectiveness, Adoption, Implementation, and Maintenance, which together determine public health impact. Since the original paper in 1999, there have been over 200 publications on RE-AIM by a variety of authors in fields as diverse as aging, cancer screening, dietary change, physical activity, medication adherence, health policy, environmental change, chronic illness self-management, well-child care, eHealth, worksite health promotion, women’s health, smoking cessation, quality improvement, weight loss, diabetes prevention, and practice-based research. See the article on RE-AIM at 20 in Frontiers in Public Health (Glasgow et al., 2019).

More recently, RE-AIM has been used to translate research into practice and to help plan programs and improve their chances of working in “real-world” settings. The framework has also been used to understand the relative strengths and weaknesses of different approaches to health promotion and chronic disease self-management, such as in-person counseling, group education classes, telephone counseling, and internet resources.

The overall goal of the RE-AIM framework is to encourage program planners, evaluators, researchers, readers of journal articles, funders, and policy-makers to pay more attention to essential program elements, including external validity, that can improve the sustainable adoption and implementation of effective, generalizable, evidence-based interventions to produce public health or population impact. See Applying the RE-AIM Framework for answers to some basic questions; you’ll also find ideas to improve your chances of having a positive impact on public health.
Question: Is RE-AIM solely a quantitative model?
No, RE-AIM is definitely not just a quantitative framework. There have been a very large number of qualitative applications of RE-AIM, some of which are cited below. We are in the process of including specific qualitative interview guides, and a useful summary of how to apply RE-AIM qualitatively is provided in Holtrop et al. (2018).
Question: How do you define each element?
Reach – The absolute number, proportion, and representativeness of individuals who participate in a given initiative, intervention or program, and reasons why or why not.
Representativeness refers to whether participants have characteristics that reflect the target population’s characteristics. For example, if your intent is to increase physical activity in sedentary people between the ages of 35 and 70, you wouldn’t test your program on triathletes.
Effectiveness/Efficacy – The impact of an intervention on important outcomes. This includes potential negative effects, quality of life, and economic outcomes. It is also important to understand variability across subgroups (heterogeneity) and why it occurs.
Adoption – The absolute number, proportion, and representativeness of settings and staff who are willing to initiate a program or approve a policy, and reasons why or why not. Note settings and staff can each be multi-level: delivery staff nested under supervisors, clinics or schools, health systems, communities, etc.
Implementation – At the setting level, implementation refers to how closely staff members follow the program that the developers provide. Importantly, this includes consistency of delivery as intended, adaptations made to the intervention or implementation strategies, and the time and cost of the program.
Maintenance – At the setting level, the extent to which a program or policy becomes part of the routine organizational practices and policies. Newer guidance includes tailoring the time frame of maintenance to specific issues and programs, and evaluation of adaptations made for sustainment.
At the individual level, maintenance refers to the longer-term effects of a program on outcomes after the most recent intervention contact. Time frame of maintenance assessment should be tailored to the program and health issue.
The underlying emphases across all RE-AIM dimensions are the percent or proportion meeting a given RE-AIM criterion, the representativeness (or equity) of that result, and the reasons for it (often assessed qualitatively).
Question: How do the RE-AIM elements relate to planning?
Answer: RE-AIM has successfully been used to plan programs for many years (see Klesges et al., 2005, for an early application). As you design, plan, or evaluate an intervention, consider each dimension:
- Reach your intended target population
- Efficacy (or more often effectiveness)
- Adoption by target staff, settings, systems or communities
- Implementation consistency, costs, and adaptations made during delivery
- Maintenance of intervention effects in individuals and settings over time
Question: Which RE-AIM element is the most important? (Isn't Reach really the bottom line in what you are trying to accomplish?)
Answer: Some have argued that Reach is the most important criterion, but we think that all five RE-AIM dimensions are very important, and in a given study some may be more important than others. An intervention with high Reach but little or no Effectiveness will have limited impact. Similarly, even if an intervention has high Reach and impressive Effectiveness, if no organizations will Adopt the intervention, or if only a handful of experts can successfully Implement the program, it will have limited real-world impact.
Which dimensions are most important should be decided with stakeholders a priori during the planning stages of a project. This priority setting should also inform pragmatic use of RE-AIM (Glasgow and Estabrooks, Prev Chronic Dis, 2018) if comprehensive application is not feasible.
Question: Why isn't cost one of the RE-AIM dimensions? Isn't it important to Adoption and other issues?
Answer: We agree that cost is often one of the key factors in determining how widely an intervention will be Adopted. However, we view cost (or replication costs, cost-effectiveness, return on investment, and cost-benefit) as a factor that influences several RE-AIM dimensions. For convenience, and to simplify application, we have listed costs under Implementation, recognizing that costs relate to many of the RE-AIM dimensions.
Question: How is RE-AIM different from other evaluation approaches?
Answer: RE-AIM draws upon previous work in several areas, including diffusion of innovations, multi-level models, and PRECEDE-PROCEED. The primary ways it differs are that it (a) is intended specifically to facilitate translation of research to practice, (b) places equal emphasis on internal and external validity issues and emphasizes representativeness, and (c) provides specific and standard ways of measuring key factors involved in evaluating potential for public health impact and widespread application.
Question: How is the RE-AIM definition of Implementation different from concepts such as intervention delivery, receipt of intervention, or implementation fidelity?
Answer: In the RE-AIM framework, Implementation is closely related to the concepts above. However, it has a greater focus on the intervention setting level and on the staff delivering the program and what they do, rather than on what the individual participant receiving the program does. Both are important, but RE-AIM places emphasis on the implications for delivering the intervention in applied settings, and on assessing Implementation for different components of the program and across diverse intervention staff and settings.
In addition, Implementation in RE-AIM is also very concerned with cost and with adaptations made to the program, policy, or implementation strategies.
Question: Is RE-AIM used to design programs, or just to evaluate them?
Answer: Both. Although historically it has been used more commonly to report results or compare interventions, it has also been demonstrated to be useful as a planning tool, as a method to review intervention studies, and as an iterative guide for mid-course adjustments to interventions or implementation strategies.
These articles provide examples of reporting results:
Evaluating Initial Reach and Robustness of a Practical Randomized Trial of Smoking Reduction.
Glasgow RE, Estabrooks PA, Marcus AC, Smith TL, Gaglio B, Levinson AH, Tong S. (2008).
Health Psychol, 2008 Nov;27(6):78-788.
Implementation, generalization, and long-term results of the “Choosing Well” diabetes self-management intervention.
Glasgow, R.E., Toobert, D.J., Hampson, S.E., & Strycker, L.A. (2002).
Patient Educ Couns, 48(2):115-122.
Tailored Behavioral Support for Smoking Reduction: Development and Pilot Results of an Innovative Intervention.
Levinson AH, Glasgow RE, Gaglio B, Smith TL, Cahoon J, Marcus AC. (2008).
Health Educ Res, 2008 Apr;23(2):335-46.
The following articles discuss using RE-AIM for planning:
Beginning with the Application in Mind: Designing and Planning Health Behavior Change Interventions to Enhance Dissemination.
Klesges, L.M., Estabrooks, P.A., Glasgow, R.E., Dzewaltowski, D.A. (2005).
Ann Behav Med, 29:66S-75S.
RE-AIM for Program Planning and Evaluation: Overview and Recent Developments.
Glasgow, R.E., Toobert, D.J. (2007).
Center for Health Aging: Model Health Programs for Communities/National Council on Aging (NCOA).
These articles provide examples of using RE-AIM to evaluate evidence and review the literature:
Promoting smoking abstinence in pregnant and postpartum patients: A comparison of 2 approaches.
Lando, H.A., Valanis, B.G., Lichtenstein, E.L., et al. (2001).
American Journal of Managed Care, 7, 685-693.
Reporting of Validity from School Health Promotion Studies Published in 12 Leading Journals, 1996-2000.
Estabrooks, P.A., Dzewaltowski, D.A., Glasgow, R.E., Klesges, L.M. (2003).
Journal of School Health, 73(1):21-28.
Review of External Validity Reporting in Childhood Obesity Prevention Research.
Klesges LM, Dzewaltowski DA, Glasgow RE. (2008).
Am J Prev Med, 34(3):216-223.
Smoking cessation interventions among hospitalized patients: What have we learned?
France, E.K., Glasgow, R.E., Marcus, A. (2001).
Preventive Medicine, 32(4):376-388.
Translating physical activity interventions for breast cancer survivors into practice: an evaluation of randomized controlled trials.
White SM, McAuley E, Estabrooks PA, Courneya KS. (2009).
Ann Behav Med, 2009 Feb;37(1):10-9. Epub 2009 Mar 3.
- Does RE-AIM apply to efficacy studies?
- Can data on all RE-AIM dimensions be collected in a single study?
- Is there a simple way to get an overall RE-AIM score?
- How specifically can RE-AIM be used to help translate research into practice?
- Do you have an example of RE-AIM being applied to a real program that promotes healthy behaviors?
- If sites decline to participate (when calculating adoption), are their potential participants included in the reach calculations?
Question: We are only conducting efficacy studies; do the RE-AIM framework and evaluation criteria still apply to us?
Answer: Yes, they do, but the specific types of information you collect may be different from those in an effectiveness or dissemination study. Specifically, you may want to select your participant sample or setting(s) to be similar to the population to which you want to generalize. You may also want to consider the practicality and intensiveness of your intervention so that it has good potential for later implementation, without actually collecting measures of cost-effectiveness until later studies.
Across all RE-AIM criteria, you may still want to have discussions with your intended target audiences of participants, implementers, and potential settings even if you do not collect formal data. The effect of moderator variables is very important to assess in efficacy studies, although often the research team will end up purposefully selecting one or more levels of a plausible moderator variable (e.g., education level, experience of the intervention agent, number of chronic conditions a patient has) rather than attempting to ensure complete representativeness at this stage of research.
Question: Is it really possible to collect data on all or most of the RE-AIM dimensions in a single study?
Answer: Yes, it is possible, and there are relatively inexpensive ways of collecting data on most RE-AIM dimensions as you are making arrangements for your study. Example publications that have reported on all five or several RE-AIM factors are listed below. You can also refer readers to other documents or websites that report on RE-AIM issues such as representativeness in more detail than may be possible in a given study.
That said, we recognize that funding and time constraints can preclude collecting data on all RE-AIM dimensions; a frequent example is Maintenance, when funding runs out at the end of the intervention period. In such cases, we recommend assessing intentions to maintain (or adapt) a program or implementation strategies.
We think it is important to consider all RE-AIM dimensions in every case. It should be decided a priori, along with stakeholders, whether it is possible to address all RE-AIM dimensions and, if so, whether the project will focus on enhancing all dimensions, assessing a given dimension, or both. This reasoning should then be transparently reported, along with the reasons for these decisions.
These issues and recommendations are discussed in detail in Glasgow and Estabrooks (2018, Prev Chronic Dis).
A brief smoking cessation intervention for women in low-income Planned Parenthood Clinics. Glasgow, R.E., Whitlock, E.P., Eakin, E.G., Lichtenstein, E. (2000). American Journal of Public Health, 90(5):786-789.
The D-Net Diabetes Self-Management Program: Long-term implementation, outcomes, and generalization results. Glasgow, R.E., Boles, S.M., McKay, H.G., Feil, E.G., Barrera, M., Jr. (2003). Preventive Medicine, 36(4):410-419.
Implementation, generalization, and long-term results of the “Choosing Well” diabetes self-management intervention. Glasgow, R.E., Toobert, D.J., Hampson, S.E., & Strycker, L.A. (2002). Patient Education and Counseling, 48(2):115-122.
Long-term Results of Smoking Reduction Program. Glasgow, R.E., Gaglio, B., Estabrooks, P.A., Marcus, A.C., Ritzwoller, D.P., Smith, T.L., Levinson, A.H., O’Donnell, C. (2009). Med Care, 47(1):115-120.
Promoting smoking abstinence in pregnant and postpartum patients: A comparison of 2 approaches. Lando, H.A., Valanis, B.G., Lichtenstein, E.L., et al. (2001). American Journal of Managed Care, 7, 685-693.
Question: The RE-AIM model seems very complicated. Is there a simple way to get an overall RE-AIM score?
Answer: Some have suggested that a multiplicative model best fits the intent of the framework. This assumes that all five dimensions are equally important (which is not always the case) and that if a program has a zero value on any dimension, its overall public health impact will be zero. It builds upon the increasingly accepted notion of Reach x Efficacy = Impact, extended to Public Health Impact = R x E x A x I x M.
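As a purely hypothetical sketch, the multiplicative logic can be expressed as a small calculation. The dimension scores below are invented proportions, and the equal weighting reflects the assumption noted above:

```python
# Hypothetical illustration of the multiplicative RE-AIM impact index:
# Public Health Impact = R x E x A x I x M.
# All dimension scores are invented proportions (0-1). Equal weighting is
# assumed, and a zero on any dimension drives the overall index to zero.

def public_health_impact(reach, effectiveness, adoption, implementation, maintenance):
    """Multiply the five dimension scores into a single index."""
    return reach * effectiveness * adoption * implementation * maintenance

program_a = public_health_impact(0.6, 0.5, 0.4, 0.7, 0.5)  # modest on every dimension
program_b = public_health_impact(0.9, 0.8, 0.0, 0.9, 0.9)  # strong, but never adopted

print(round(program_a, 3))  # 0.042
print(round(program_b, 3))  # 0.0
```

Note how the second program, despite strong scores elsewhere, yields zero overall impact because no settings adopt it, which is exactly the point made above about not relying on any single dimension.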
Others object to such formulas and feel that there is no single best way to combine the RE-AIM elements. One approach that allows users to define their own criteria, and to emphasize the issues most important to them, is to visually display different programs on the RE-AIM dimensions so that their relative strengths and limitations can be quickly seen. Examples of such visual displays appear in the articles and links below.
The key issues are: (a) transparency of decisions; (b) that these decisions, including whether an overall composite score will be reported (and which dimensions will receive the greatest importance and attention; see above), should be made during the planning stage of a project; and (c) that these decisions should be made with all relevant stakeholders.
Sample Visual Displays
1. This figure visually displays the relative strengths and limitations of interactive computer vs. in-person based behavior change counseling along the various RE-AIM dimensions (higher scores are better on the hypothetical scale).
2. This figure visually displays the hypothetical performance of a group counseling program versus a policy approach to smoking behavior change on the various RE-AIM dimensions (higher scores are better on the hypothetical scale).
Articles containing visual displays of RE-AIM dimensions:
Making a difference with interactive technology: Considerations in using and evaluating computerized aids for diabetes self-management education. Glasgow, R.E., & Bull, S. (2001). Diabetes Spectrum, 14(2): 99-106
The RE-AIM framework for evaluating interventions: What can it tell us about approaches to chronic illness management? Glasgow, R.E., McKay, H.G., Piette, J.D., Reynolds, K.D. (2001). Patient Education and Counseling, 44:119-127.
Question: How specifically can RE-AIM be used to help translate research into practice?
Answer: By providing a set of standard criteria (Reach, Efficacy/Effectiveness, Adoption, Implementation, and Maintenance), RE-AIM focuses attention on key factors important for application. By considering this set of RE-AIM issues when planning, conducting, evaluating, and reporting on intervention programs or policies, one should be able to anticipate and prepare for most of the major challenges in translating research programs into real-world applications.
Also, by comparing alternative interventions (see Lando et al., 2001), program delivery modalities (see Glasgow, McKay et al., 2001), or policies (see Jilcott et al., 2007) on the RE-AIM criteria, decision makers in applied settings should be better able to judge the fit of a possible program with their needs and priorities.
Finally, we refer you to the following two articles for more information on specific summary score formulas you may want to consider, especially the Efficiency Metric (Cost / (Reach x Effectiveness)). See above for considerations on whether and how you might want to construct composite scores.
Evaluating the impact of health promotion programs: Using the RE-AIM Framework to form summary measures for Decision Making Involving Complex Issues. Glasgow, R.E., Klesges, L.M., Dzewaltowski, D.A., Estabrooks, P.A., Vogt, T.M. (2006). Health Educ Res, 21(3):688-694.
Using RE-AIM Metrics to Evaluate Diabetes Self-Management Support Interventions. Glasgow, R.E., Nelson, C.C., Strycker, L.A., King, D.K. (2006). Am J Prev Med, 30(1):67-73.
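As a hypothetical illustration of the Efficiency Metric mentioned above, the calculation might look like the following; all costs, reach proportions, and effectiveness scores are invented for the example:

```python
# Hypothetical sketch of the Efficiency Metric: Cost / (Reach x Effectiveness).
# Lower values mean more impact per dollar spent. All figures are invented.

def efficiency_metric(total_cost, reach_proportion, effectiveness_score):
    """Return cost per unit of (reach x effectiveness); lower is better."""
    impact = reach_proportion * effectiveness_score
    if impact == 0:
        raise ValueError("Reach or effectiveness is zero; metric is undefined")
    return total_cost / impact

# A low-cost, low-reach program vs. a costly, higher-reach one (hypothetical):
print(round(efficiency_metric(10_000, 0.10, 0.50), 1))   # 200000.0
print(round(efficiency_metric(100_000, 0.60, 0.40), 1))  # 416666.7
```

On these invented numbers the cheaper program delivers more reach-weighted effectiveness per dollar, even though the costlier program reaches more people; that trade-off is what the metric is meant to surface.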
Question: Do you have an example of RE-AIM being applied to a real program that promotes healthy behaviors?
Answer: In Australia, a program that helps doctors promote physical activity was evaluated using the RE-AIM framework.
The Victoria Council on Fitness and General Health Inc. (VICFIT) was established through the Ministers for Sport and Recreation and Health to provide advice to government and to coordinate the promotion of fitness in Victoria.
One of VICFIT’s initiatives, the Active Script Program (ASP), is designed to enable all general practitioners in Victoria to give consistent, effective and appropriate physical activity advice in their particular communities. Phase II of ASP was implemented from July 2000 to June 2001.
Visit the VICFIT website for more details and for the complete report.
Question: If sites decline to participate when you are calculating Adoption, are their potential participants included in the Reach calculations?
For example: 5 clinics are approached to participate, each having 5,000 patients. Two say no, thus 3 do the intervention. When calculating reach, is the target population for recruitment all 25,000 patients at all clinics approached or only 15,000 in the three clinics that said yes?
Answer: The first step is Adoption, which defines the subset of potentially eligible participants. For analyses of Reach, you only want to analyze participation among those who could potentially have participated (those who had a chance to be invited), i.e., the 15,000 patients in the three clinics that said yes.
Note: the overall impact of both Reach and Adoption can be indicated with combined RE-AIM metrics, multiplying adoption by reach to get the percent of the total population. See above for caveats and issues to consider when combining scores across dimensions.
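The two-step Adoption-then-Reach calculation from the clinic example above can be sketched as follows (the enrollment figure is hypothetical):

```python
# Sketch of the Adoption-then-Reach calculation from the clinic example above.
# Five clinics of 5,000 patients each are approached; two decline and three
# adopt. The enrollment figure below is invented for illustration.

clinic_size = 5_000
clinics_approached = 5
clinics_adopting = 3
participants_enrolled = 4_500  # hypothetical enrollment across the three clinics

adoption_rate = clinics_adopting / clinics_approached      # 3 / 5 = 0.6
reach_denominator = clinics_adopting * clinic_size         # 15,000, not 25,000
reach_rate = participants_enrolled / reach_denominator     # 4,500 / 15,000 = 0.3

# Combining adoption x reach gives the share of the full approached population:
population_penetration = adoption_rate * reach_rate        # 4,500 / 25,000 = 0.18

print(adoption_rate, reach_rate, population_penetration)
```

The Reach denominator is restricted to patients in adopting clinics, while the combined adoption x reach figure recovers the share of all 25,000 patients originally approached.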
Support and Evidence of RE-AIM
Question: Is it a theory of behavior change?
Answer: RE-AIM is not a theory; rather, it is a framework and a set of criteria for planning and evaluating interventions that are intended eventually to be broadly implemented or widely adopted. As such, it is difficult to think of how one would “validate” RE-AIM or other approaches to evaluation, such as the PRECEDE-PROCEED framework. The ultimate value of RE-AIM will be whether both researchers and decision makers from potential adopting organizations find the framework helpful in planning, conducting, reporting, and selecting interventions.
Although RE-AIM has been used most often as an Evaluation model, it also has many of the characteristics of an implementation framework, a process framework (guiding implementation) and a determinants framework (when used as part of PRISM that addresses contextual factors that impact RE-AIM outcomes). We do not think that RE-AIM should be restricted to or categorized under only one type of D&I model. Such restricted categorization has led to confusion and to unnecessarily restricted use of the framework.
Question: Have any reviews of the literature been conducted using the RE-AIM framework? If so, what have these reviews found?
Answer: Yes, several reviews have been conducted using the RE-AIM criteria. The general conclusion is that representativeness (of individuals and, especially, of organizations and intervention agents) is among the least often reported RE-AIM elements. Greater attention needs to be paid to the AIM factors (Adoption, Implementation, and Maintenance).
If you are interested in the coding criteria used to score the various RE-AIM dimensions, view our Coding Sheet for Publications Reporting on RE-AIM Elements.
Behavior change intervention research in health care settings: A review of recent reports, with emphasis on external validity
Glasgow, R.E., Bull, S.S., Gillette, C., Klesges, L.M., & Dzewaltowski, D.A.
(2002). American Journal of Preventive Medicine, 23(1):62-69.
The future of health behavior change research: What is needed to improve translation of research into health promotion practice?
Glasgow, R.E., Klesges, L.M., Dzewaltowski, D.A., Bull, S.S., Estabrooks, P.
Ann Behav Med. 2004;27(1):3-12.
Reaching those most in need: A review of diabetes self-management interventions in disadvantaged populations.
Eakin, E.G., Bull, S.S., Glasgow, R.E., & Mason, M.
(2002). Diabetes Metab Res Rev, Jan.-Feb.;(1):26-35.
Review of External Validity Reporting in Childhood Obesity Prevention Research.
Klesges LM, Dzewaltowski DA, Glasgow RE.
Am J Prev Med, 2008;34(3):216-223.
Smoking cessation interventions among hospitalized patients: What have we learned?
France, E.K., Glasgow, R.E., Marcus, A. (2001)
Preventive Medicine, 32(4):376-388.
Translating physical activity interventions for breast cancer survivors into practice: an evaluation of randomized controlled trials.
White SM, McAuley E, Estabrooks PA, Courneya KS.
Ann Behav Med, 2009 Feb;37(1):10-9. Epub 2009 Mar 3.
- How do I calculate reach if denominator (target population) is unknown?
- How do I calculate adoption if the number of potentially eligible settings is unknown?
- What variables should I use to determine the representativeness of the organizations involved in my program?
- What are the most important characteristics to assess the representativeness of participants when evaluating Reach?
- Implementation is not well articulated in the model and thus the metric is a little ambiguous.
Question: How do I calculate Reach when our "denominator" or target population is not known?
Answer: There are several ways of estimating Reach, and there are some websites listed on our Links page that may be of help. It is often possible to use either census data, data from national representative surveys such as NHANES or the CDC BRFSS, or information available from public agencies or marketing organizations that will allow you to estimate the number of eligible persons in a given geographic area.
Two important guidelines to remember are to report the exclusion rate for your study (as well as the exclusion criteria) and to state specifically whether you drew your sample from an exhaustive list (such as all patients in a health plan or all students in a school) or whether your initial list of potential participants consisted of interested volunteers (such as those responding to an advertisement or self-selecting to participate).
Because of the assumptions involved in such estimates, we recommend that you consider providing a ‘sensitivity analysis’ that presents both a best-case and a worst-case estimate of Reach.
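A best-case/worst-case sensitivity analysis of the kind recommended above might be sketched like this; the participant count and the two denominator estimates are hypothetical:

```python
# Hypothetical sensitivity analysis for Reach when the true denominator
# (the eligible target population) is unknown. The participant count and
# both denominator estimates are invented for illustration.

participants = 1_200

denominator_estimates = {
    "worst case (largest plausible eligible population)": 20_000,
    "best case (smallest plausible eligible population)": 8_000,
}

for label, denominator in denominator_estimates.items():
    reach = participants / denominator
    print(f"{label}: reach = {reach:.1%}")
```

Reporting Reach as a range (here, 6.0% to 15.0%) makes the dependence on the denominator assumption explicit for readers.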
See a table illustrating the impact of different ways to calculate Reach and our recommendations.
View a flow diagram on “My Path” recruitment results.
See also Calculating and Reporting Reach
Read a list of suggestions for estimating target populations or “Reach denominators.”
See a table providing RE-AIM scoring examples for two built environment interventions.
See a table on RE-AIM Perspectives on Built Environment Strategies: Definitions, Challenges, and Metrics.
Question: How do I calculate Adoption if the population of potentially eligible settings is not known?
Answer: There are several ways of estimating Adoption rates, and some of the websites listed below may be of help. It is often possible to use census data, data from national surveys, professional associations (e.g., of businesses, schools, churches, employers), chambers of commerce, state licensing bureaus, marketing organizations, or even phone books to estimate the number of eligible organizations or settings in a given geographic area. Two important guidelines to remember are to report the exclusion rate for your study (as well as the exclusion criteria) and to state specifically whether you drew your sample from an exhaustive list or approached those settings that you judged most interested or best able to implement your protocol.
Because of the assumptions involved in such estimates, we recommend that you consider providing a ‘sensitivity analysis’ that presents both a best-case and a worst-case estimate of Adoption.
See also Adoption Calculator
Question: What types of variables should I use to determine the representativeness of the organizations involved in my program (and those who decline)?
Answer: This depends on what is known about organizational characteristics related to outcomes in the particular area you are studying. A fairly common list of issues to consider includes: size of the organization; history and stability of the organization; number of full-time versus part-time staff; whether the organization is unionized; history of health promotion; relevant health policies; whether time off work is provided for participation in such activities; and the percent of members or employees by gender, race and ethnicity, level of education, and job title. Of course, not all of this information will be available in every case. Often it can be collected over the phone from a personnel or human resources representative, or from the standard sources described above in the section on Adoption.
For community-wide applications, see a table giving examples of how to calculate adoption for built environment interventions.
Question: What are the most important characteristics to assess the representativeness of participants when evaluating Reach?
Answer: This varies depending upon the science of what factors are most strongly related to the health behavior of interest (e.g., medication adherence vs. physical activity). In general, we recommend assessing factors demonstrated to be related to the outcome of interest and/or health disparities, such as race/ethnicity, age, education, health literacy and numeracy, quality of life, social determinants of health, etc.
Question: Implementation is not well articulated in the model and thus the metric is a little ambiguous. As I see it there are two elements to Implementation: Fidelity and Dose/Exposure. You can "measure" the degree to which the staff/site implement the program as intended (# of lessons taught, # of minutes spent per session, # elements at each site) BUT you also want to know to what degree the participants were exposed to the program - meaning attendance, adherence to behavior change requirements, etc. (question and responses obtained via email discussion on 11 Feb 2010).
We have restricted our implementation results in published work to the percent of the intervention delivered as intended or, if there are multiple components, the percent to which (and the quality with which, if measures are available) the program was delivered. This includes consistency across delivery staff and settings. We have not used individual-level indicators of participant receipt and enactment of intervention materials in our calculations, but we typically provide descriptive statistics on attendance, participants’ reading of materials, etc. We are not sure how best to add this to a calculation, but in one paper we included participant reports of reading intervention materials and demonstrated that it moderated the outcomes. It may be that, due to the different modalities and structures of interventions, ‘quality’ indices at the individual level are more intervention-specific, while quantity/proportion measures at the delivery level are more generic.
Answer: The “dose” question raised here also comes up in our community-level work, where we have been discussing whether the impact on individual health or health behavior is greater for strategies where exposure is continuous (e.g., an environmental change such as opening a new grocery store) versus single or limited exposures (a cooking class, or a farmers’ market that is only available on Saturday mornings during the summer).
Another answer: Every evaluation approach, including RE-AIM, addresses this somewhat, but none, to our knowledge, does so comprehensively. There are several different constructs involved here, and the interpretation of results is heavily dependent upon one’s perspective.
What others above refer to as ‘dose/exposure’ is, from our perspective, a mixture of, and likely an interaction among, several constructs. Some of these are at the individual level, e.g., engagement, motivation, follow-through, participation, and adherence. These behaviors are themselves the likely result of a combination of personal, setting-level, and program variables, and are often influenced heavily by societal, environmental, and contextual factors such as income, class, competing demands, policy, and economic factors.
Other aspects of ‘dose/exposure’ are related to setting and program characteristics, some of which are measured by RE-AIM and other evaluation approaches: things like cost, consistency of delivery, and ESPECIALLY the extensiveness (related to, but distinct from, intensiveness) of the intervention.