Formative Evaluation: Fostering Real-Time Adaptations and Refinements to Improve the Effectiveness of Patient-Centered Medical Home Interventions

March 2013
AHRQ Publication No. 13-0025-EF
Prepared For:
Agency for Healthcare Research and Quality, U.S. Department of Health and Human Services, 540 Gaither Road, Rockville, MD 20850.

Prepared by: Kristin Geonnotti, Ph.D., Deborah Peikes, Ph.D., and Winnie Wang, B.A. (Mathematica Policy Research); and Jeffrey Smith, Ph.D. (VA Mental Health Quality Enhancement Research Initiative (QUERI), Central Arkansas Veterans Healthcare System)


This brief focuses on using formative evaluation methods in studies of patient-centered medical home (PCMH) models. It is part of a series commissioned by the Agency for Healthcare Research and Quality (AHRQ) and developed by Mathematica Policy Research under contract, with input from other nationally recognized thought leaders in research methods and PCMH models. The series is designed to expand the toolbox of methods used to evaluate and refine PCMH models. The PCMH is a primary care approach that aims to improve quality, cost, and patient and provider experience. PCMH models emphasize patient-centered, comprehensive, coordinated, accessible care, and a systematic focus on quality and safety.

Suggested Citation

Geonnotti K, Peikes D, Wang W, Smith J. Formative Evaluation: Fostering Real-Time Adaptations and Refinements to Improve the Effectiveness of Patient-Centered Medical Home Models. Rockville, MD: Agency for Healthcare Research and Quality. March 2013. AHRQ Publication No. 13-0025-EF.

This brief and companion briefs in this series are available for download from

I. Formative Evaluation

A comprehensive program evaluation ideally includes both formative and summative components. Both approaches can examine how an intervention was implemented, the barriers and facilitators to implementation, and the effects of the intervention on various outcomes. Although both components can provide feedback on the effectiveness of an intervention and offer ways to improve it, they differ in frequency, aim, and focus. Formative evaluations stress engagement with stakeholders when the intervention is being developed and as it is being implemented, to identify when it is not being delivered as planned or not having the intended effects, and to modify the intervention accordingly. The stakeholders include payers, clinicians, practice staff, patients and their caregivers, and other decisionmakers.

This conceptual distinction was first suggested by Scriven (1967) to describe the two main functions of evaluation: (1) to foster development and improvement of a program (formative) and (2) to assess whether the results of that program met its stated goals (summative). A summative evaluation generally provides feedback to stakeholders at the end of program implementation. In contrast, a formative evaluation focuses attention on ongoing, midstream assessments that feed information back to intervention implementers, allowing them to make real-time adaptations and refinements to ineffective aspects of an intervention. Formative feedback often leads to decisions about program development (such as whether to modify or revise the intervention), whereas summative feedback often leads to decisions about whether to ultimately continue, expand, or adopt the program (Worthen, Sanders, and Fitzpatrick, 1997).

“All assessments can be summative (i.e., have the potential to serve a summative function), but only some have the additional capability of serving formative functions.” (Scriven, 1967)

An example can illustrate the difference between the two approaches. Suppose that a quality improvement intervention allows patients at a primary care practice to obtain laboratory results through a secure Web portal. As part of a formative evaluation, an implementation analysis might note that patients either were not notified when their laboratory results became available or had difficulty logging into the portal. Formative evaluations provide findings such as these to practices and program sponsors on an ongoing basis, along with specific recommendations on how to improve patient access. In this example, this information could be used to refine the intervention by sending email alerts to patients when new lab results are entered. In contrast, a summative evaluation might also note the problem, but might wait to do so until the next scheduled summary report. A summative evaluation would not provide the intervention implementers with timely information or concrete suggestions that they could use to refine the intervention as it unfolds. In practice, most evaluations contain both formative and summative approaches.


II. Uses of Formative Evaluation

Implementing complex interventions in complex settings (such as the PCMH) is a difficult task that requires researchers and program managers to have a clear understanding of what should be implemented, how to best implement a suggested strategy, which elements may hinder or facilitate the implementation process, and why a strategy did or did not work once implemented. A formative evaluation can provide this information on an ongoing basis as the intervention is being delivered. Stetler, Legro, Smith, et al. (2006) conceptualize four components of a formative evaluation according to whether each occurs before, during, or after intervention implementation (see Figure 1).

Complete a needs assessment. Formative evaluations focus on pre-planning for the intervention design before it is implemented, which Stetler, Legro, Smith, et al. (2006) term the developmental component. Before the intervention begins, the evaluator conducts a needs assessment to identify areas where the practice should focus improvements, drawing on an understanding of the context in which the practice operates, potential barriers and facilitators to practice change, and the feasibility of implementing the intervention as initially designed.

Stetler, Legro, Smith, et al. (2006) also describe three other components that can occur during or after a formative evaluation: (1) an implementation-focused analysis, (2) a progress-focused analysis, and (3) analysis of interpretive data. Below we discuss these three components and the types of information they contribute. Unlike summative evaluations, formative evaluations use these components to deliver information back to intervention implementers frequently and intensively enough to change the delivery of the intervention itself. During the intervention, combining implementation and progress analyses can provide a comprehensive assessment of the intervention (Stetler, Legro, Smith, et al., 2006).

Conduct implementation analysis. An implementation-focused analysis assesses discrepancies between the implementation plan and the execution of that plan. This can include assessing fidelity to the implementation strategy and the clinical intervention, understanding the nature and implications of local adaptation, identifying barriers, identifying new intervention components or refining the original strategy to optimize the potential for success, and identifying the critical details necessary to replicate the implementation strategy in other settings. Data sources might include semistructured interviews with stakeholders, structured surveys, focus groups, direct observations through site visits, document reviews, electronic health records or charts, and management information systems.

Perform progress analysis. A progress-focused analysis monitors progress toward implementation and improvement goals during the intervention. Outcomes for the intervention practices are monitored on an ongoing basis. For example, audit and feedback of clinical performance data can give providers and practices data on key process and patient outcome indicators that can be used to refine the intervention during implementation. This information may also be used as positive reinforcement for high performers and as encouragement for low performers. Intervention impacts can then be estimated by comparing the outcomes of intervention practices with those of a comparison group to determine whether the intervention is having the intended effects on quality, cost, and patient and provider experience. Data sources for the progress-focused analysis typically include claims or billing data, electronic health records or charts, and structured surveys.
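As an illustrative sketch, the outcome comparison described above can be reduced to a few lines of code. The indicator, practice-level rates, and sample sizes below are entirely hypothetical; the point is simply the mechanics of comparing monitored outcomes for intervention practices against a comparison group during implementation.

```python
"""Sketch of a progress-focused analysis: comparing a hypothetical quality
indicator (e.g., share of patients with a recent recommended test) between
intervention and comparison practices. All rates are illustrative."""

from math import sqrt
from statistics import mean, stdev

# Hypothetical practice-level rates for one process indicator.
intervention = [0.78, 0.82, 0.74, 0.80, 0.85]
comparison = [0.70, 0.72, 0.68, 0.75, 0.71]

def difference_with_ci(treat, control, z=1.96):
    """Difference in mean rates with an approximate 95% confidence interval."""
    diff = mean(treat) - mean(control)
    se = sqrt(stdev(treat) ** 2 / len(treat) + stdev(control) ** 2 / len(control))
    return diff, (diff - z * se, diff + z * se)

diff, ci = difference_with_ci(intervention, comparison)
print(f"Estimated difference: {diff:.3f}, 95% CI: ({ci[0]:.3f}, {ci[1]:.3f})")
```

In a real progress-focused analysis, such estimates would be fed back to practices at regular intervals (for example, quarterly audit-and-feedback reports) rather than computed once at the end.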

Analyze interpretive data. After implementation of the intervention, interpretive data collected before and during implementation can be used to generate hypotheses about why the intervention did or did not work. It is critical to obtain stakeholders’ views on the usefulness and value of an intervention, barriers to and facilitators of implementation success, and any recommendations for refinements to the implementation strategy, regardless of the strategy’s ultimate success. Interpreting formative data in this way offers the opportunity to maximize learning from the implementation effort and distills lessons learned for future projects.

Figure 1. Stages of Formative Evaluation

Adapted from: Stetler, Legro, Smith, et al. (2006).


III. Advantages

Facilitate mid-stream modifications. The main advantage of a formative approach is that it encourages mid-stream modifications to improve the intervention, rather than taking a more “hands-off” approach for the sake of research objectivity. If interim feedback can provide insights about ways to improve the intervention, this information can be used to increase the chances of implementation success and also focus resources most efficiently (Worthen, Sanders, and Fitzpatrick, 1997).

Refine complex interventions. Although formative evaluations are useful for a variety of interventions, they are particularly useful for helping to refine wide-ranging and complex PCMH interventions. Primary care practices often implement multiple intervention components concurrently, and these components interact with the practice and external setting. As a result, there are numerous ways to implement each intervention component, and numerous ways for the components to interact with one another.

For example, suppose that a practice would like to provide after-hours care to its patients. Depending on the context, the practice could implement the intervention by establishing a nurse call line, rotating physician coverage, establishing an agreement with a local after-hours clinic, or sharing coverage with another practice. The providers could use ongoing formative feedback to continually improve the delivery of their complex intervention.


IV. Limitations

Formative evaluations present several challenges for researchers, which we discuss below.

Require additional time and resources. Formative evaluations are time- and resource-intensive because they require frequent data collection, analysis, and reporting, as well as rapid refinement of the intervention strategy as new information about implementation effectiveness becomes available.

Become part of the intervention. Formative feedback leads to real-time refinements to the intervention, which makes this evaluation component a part of the intervention itself. Although this maximizes the chances of program success, it also raises a question as to whether the intervention would work if replicated without the formative evaluation component. To be successful, ongoing PCMH interventions may wish to include formative feedback. This feedback might come from data or feedback reports from payers about the patients’ quality and cost outcomes, information the practices collect from patients using surveys or other feedback mechanisms, and process and outcome metrics the practices collect using their own internal tracking systems. Ultimately, formative feedback should be viewed as an integral part of the delivery of PCMH models and other complex health services interventions.

Create methodological challenges. This rapid refinement process also poses several methodological challenges when trying to evaluate the impact of the intervention. Outcomes can only be measured during the period when a particular variant of the intervention was implemented, which may lead to short followup periods—with corresponding smaller sample sizes—and less power to detect overall intervention effects. In addition, it is difficult to determine when a change in the intervention will be reflected in outcomes. When evaluators examine the entire time frame of the intervention, estimated effects will reflect the combined effects of the different variants of the intervention over time, essentially estimating an average treatment effect rather than the effectiveness of any one version of the evolving intervention.
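A small numerical sketch (with hypothetical intervention variants and effect sizes) illustrates this pooling problem: an estimate over the full follow-up period is an exposure-weighted average of the variant-specific effects, not the effect of any one version of the intervention.

```python
"""Sketch of the pooling issue: when an intervention is refined mid-stream,
an estimate over the full follow-up period mixes the effects of each
variant. Variant labels, durations, and effects are hypothetical."""

# (variant, months observed, effect on the outcome during that period)
periods = [
    ("initial portal design", 6, -0.5),  # little or no effect early on
    ("portal + email alerts", 12, 2.0),  # refined version performs better
]

total_months = sum(months for _, months, _ in periods)
pooled_effect = sum(months * effect for _, months, effect in periods) / total_months

for name, months, effect in periods:
    print(f"{name}: {effect:+.1f} over {months} months")
print(f"Pooled (exposure-weighted) effect: {pooled_effect:+.2f}")
```

Here the pooled estimate understates the effectiveness of the refined variant, because it averages in the weaker early period.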

Necessitate interpreting results in relation to implementation. Given the interim nature of formative assessments, results should be interpreted in the context of program implementation, rather than as a complete assessment of program effectiveness. For example, midstream assessments might show somewhat unfavorable outcomes, but the effects could become more favorable after collecting summative data from a longer time period. In practice, decisions based on midstream formative data need to be made with a lower certainty of evidence. Evaluators should combine these efforts with summative approaches that use the fuller sample over longer periods of time to ensure a balance between rapid interim feedback and rigorous conclusions.

Compel evaluators to preserve objectivity. The last challenge, one of objectivity, arises because evaluators are shaping the intervention with their ongoing feedback. Strategies should be devised to maintain appropriate distance and objectivity while still offering detailed and robust formative feedback. For example, evaluators may wish to pre-specify in a design report the statistical significance levels and the subgroups to be analyzed, to ensure that they follow pre-specified, objective decision rules.
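One way to operationalize such pre-specified decision rules is to encode them directly, so that only the subgroups and significance thresholds listed in the design report are treated as findings. The subgroup names and p-values below are hypothetical.

```python
"""Sketch of a pre-specified decision rule: a subgroup result counts as a
finding only if the subgroup and significance level were fixed in advance
in the design report. All subgroups and p-values are hypothetical."""

PRESPECIFIED = {"patients with diabetes", "patients 65 and older"}
ALPHA = 0.05  # significance level fixed in the design report

# Hypothetical p-values from interim subgroup analyses.
results = {
    "patients with diabetes": 0.03,  # pre-specified and significant
    "patients 65 and older": 0.20,   # pre-specified, not significant
    "left-handed patients": 0.01,    # not pre-specified: post hoc
}

def reportable(subgroup, p_value):
    """Report a result only if the subgroup was pre-specified and p < ALPHA."""
    return subgroup in PRESPECIFIED and p_value < ALPHA

findings = [s for s, p in results.items() if reportable(s, p)]
print(findings)
```

Applying the rule mechanically keeps a nominally significant but post hoc subgroup result (the third entry) out of the formative feedback, preserving the objectivity the design report was meant to protect.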


V. Conclusion

Formative evaluation, in conjunction with a traditional summative evaluation, is an invaluable tool to determine how to refine the PCMH. There is much to be learned from formative analyses, which can be used to refine interventions as they unfold and maximize the chances of success.


VI. References

  • Scriven M. The methodology of evaluation. In: Tyler RW, Gagne RM, Scriven M, eds. Perspectives of curriculum evaluation. Chicago: Rand McNally; 1967. p. 39-83.
  • Stetler C, Legro M, Smith J, et al. The role of formative evaluation in implementation research and the QUERI experience. J Gen Intern Med 2006 Feb;21:S1-8.
  • Worthen B, Sanders J, Fitzpatrick J. Program evaluation: alternative approaches and practical guidelines. Washington: Longman; 1997.

VII. Resources

General Overview Information on Formative Evaluation and Its Role in Implementation Research

  • Dehar MA, Casswell S, Duignan P. Formative and process evaluation of health promotion and disease prevention programs. Eval Rev 1993;17:204-20.
  • Hagedorn H, Hogan MM, Smith JL, et al. Lessons learned about implementing research evidence into clinical practice: experiences from VA QUERI. J Gen Intern Med 2006;21:S21-24.
  • Kealey E. Assessment and evaluation in social work education: formative and summative approaches. J Teach Soc Work 2010 Jan;30(1):64-74.
  • Schoster B, Altpeter M, Meier A, et al. Methodological tips for overcoming formative evaluation challenges: the case of the Arthritis Foundation Walk With Ease Program. Health Promot Pract 2012 Mar;13(2):198-203.
  • Scriven M. The methodology of evaluation. In: Tyler RW, Gagne RM, Scriven M, eds. Perspectives of curriculum evaluation. Chicago: Rand McNally; 1967. p. 39-83.
  • Stetler C, Legro M, Smith J, et al. The role of formative evaluation in implementation research and the QUERI experience. J Gen Intern Med 2006 Feb;21:S1-8.
  • Vastine A, Gittelsohn J, Ethelbah B, et al. Formative research and stakeholder participation in intervention development. Am J Health Behav 2005 Jan;29(1):59-69.
  • Worthen B, Sanders J, Fitzpatrick J. Program evaluation: alternative approaches and practical guidelines. Washington: Longman; 1997.

Examples of Formative Evaluations Applied in Various Fields of Research

  • Brown AH, Cohen AN, Chinman MJ, et al. EQUIP: implementing chronic care principles and applying formative evaluation methods to improve care for schizophrenia: QUERI series. Implement Sci 2008;3:9.
  • Curran G, Mukherjee S, Allee E, et al. Process for developing an implementation intervention: QUERI series. Implement Sci 2008;3:17.
  • Curran GM, Pyne J, Fortney JC, et al. Development and implementation of collaborative care for depression in HIV clinics. AIDS Care 2011 Dec;23(12):1626-36.
  • Farley DO, Battles JB. Evaluation of the AHRQ patient safety initiative: framework and approach. Health Serv Res 2009 Apr;44(2 Pt 2):628-45.
  • Harvey J, Avery A, Waring J, et al. A constructivist approach? Using formative evaluation to inform the electronic prescription service implementation in primary care, England. Stud Health Technol Inform 2011;169:374-8.
  • McLaren S, Woods L, Boudioni M, et al. Implementing a strategy to promote lifelong learning in the primary care workforce: an evaluation of leadership roles, change management approaches, interim challenges and achievements. Qual Prim Care 2008;16(3):147-55.
  • Miake-Lye IM, Amulis A, Saliba D, et al. Formative evaluation of the telecare fall prevention project for older veterans. BMC Health Serv Res 2011;11:119.
  • Schoster B, Altpeter M, Meier A, et al. Methodological tips for overcoming formative evaluation challenges: the case of the Arthritis Foundation Walk With Ease Program. Health Promot Pract 2012 Mar;13(2):198-203.
  • Smith JL, Williams JW Jr., Owen RR, et al. Developing a national dissemination plan for collaborative care for depression: QUERI series. Implement Sci 2008;3:59.

Using Conceptual Frameworks to Guide Formative Evaluations

  • Damschroder LJ, Aron DC, Keith RE, et al. Fostering implementation of health services research findings into practice: a consolidated framework for advancing implementation science. Implement Sci 2009;4:50.
  • Stetler CB, Damschroder LJ, Helfrich CD, et al. A guide for applying a revised version of the PARIHS framework for implementation. Implement Sci 2011;6:99.