Volume 5: No. 1, January 2008
COMMUNITY CASE STUDY
Box. Tactics Used to Guide Partner Engagement in the Prevention Research Centers (PRC) Program’s Project DEFINE (Developing an Evaluation Framework: Insuring National Excellence)
The evaluation planning process formally began when the contract was awarded in 2001. The core evaluation work group comprised evaluation contractors and PRC Program staff. One of the first activities of the work group was to form an advisory committee, named the Collaborative Evaluation Design Team (CEDT). The core evaluation work group and CEDT worked closely throughout the project and learned several lessons as a result of the collaborative project activities. Realizing that these lessons could be helpful to other programs, and in keeping with the evaluation field’s support for reflective practice (9), we (the core evaluation work group and the CEDT) reflected on the project and reviewed project documents. We then refined and came to consensus on four tactics that guided the planning process and led to the project’s success (Box). The first and second tactics were explicit in the contract’s scope of work, and we discuss how they were implemented and refined to engage stakeholders. The third and fourth tactics emerged from our observations of stakeholder feedback and our retrospective assessment of the project. These four tactics respond to the recommendations of the CDC framework, to the program evaluation standards for increasing a project’s utility and support, and to the multiple perspectives of the partners in the PRC Program.
When initiating Project DEFINE, CDC communicated with PRC partners through the existing standing committees: an overall steering committee and five topic-specific committees. These committees, whose members are PRC representatives who provide input and guidance to CDC’s PRC Program staff, were key to communicating with PRC directors and other PRC leaders and to seeking their input on the evaluation design. Involving these committees had two benefits. First, because the PRCs elected the committee members, a committee stamp of approval on evaluation activities offered a form of peer endorsement. Second, one of the committees, the National Community Committee (NCC), included leaders from the PRCs’ partnering communities, thereby ensuring that community perspectives were incorporated into the deliberations of the core evaluation work group and that important information on Project DEFINE was relayed to other community leaders involved with each PRC.
Because no PRC committee was devoted to an evaluation planning process, the CEDT was created to help guide the project and create additional avenues for stakeholder input. Consistent with the CDC framework (4), the CEDT included an array of perspectives and expertise from across the PRC Program. The PRCs nominated CEDT members, and the committee ultimately included representatives of the PRC directors, PRC staff, state health departments, community members, national partner organizations, and an evaluation expert experienced in conducting community-based research. The CDC project officer for Project DEFINE (L.A.A.) and a PRC director (R.C.B.) co-led the CEDT. Discussions between the core evaluation work group and the CEDT during monthly conference calls and semiannual in-person meetings were recorded in meeting minutes and summary documents, as were all written communications and feedback from participants.
The CEDT helped to ensure project relevance, feasibility, and utility and to facilitate input from stakeholders. The CEDT’s involvement in decision making and project oversight enabled stakeholders to trust that the project would represent their perspectives and not just those of the core evaluation work group. Finally, the CEDT and core evaluation work group reflected on the lessons learned and made adjustments as the project unfolded.
The contract’s scope of work outlined a series of participatory methods that were used during Project DEFINE. These methods led to the development of a program description in the form of a logic model that reflected input from all PRC Program partners. Concept mapping was the primary method used to develop the basic constructs of the logic model (10). Other methods provided a deeper understanding of the context, program realities, and viewpoints of the PRCs, all of which are necessary to develop a logic model that accurately describes the program (11). In particular, two in-person methods were useful in eliciting partners’ perspectives on the PRC Program overall and on the evaluation planning project specifically: 1) visiting six PRCs to understand contextual factors and 2) holding three regional meetings to obtain feedback and encourage open dialogue between partner groups. A written document that served as a structured feedback tool was then sent to four main groups (the PRCs, the NCC, state partners, and CDC’s PRC Program staff); the feedback tool allowed for comments on the final logic model and narrative. The core evaluation work group also consistently used other methods to inform the various partner groups, provide opportunities for discussion, and obtain feedback on activities and products before broader release. These methods included presenting updates at the semiannual meetings of PRC directors and at the meetings of other partner groups, participating in monthly CEDT conference calls and semiannual in-person meetings, and discussing issues with members of the standing committees during their regularly scheduled calls or retreats.
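To make the concept mapping step concrete, the sketch below illustrates the quantitative core of the method as described in the concept mapping literature (10): participants sort candidate statements into piles, the sorts are aggregated into a similarity matrix, multidimensional scaling places the statements on a two-dimensional map, and hierarchical clustering groups them into candidate constructs. This is a minimal illustration under stated assumptions, not Project DEFINE’s actual analysis; the statements, pile assignments, and the use of Python with NumPy, SciPy, and scikit-learn are all assumptions for demonstration.

import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage
from sklearn.manifold import MDS

# Hypothetical statements brainstormed by program partners.
statements = [
    "Train community health workers",
    "Disseminate research findings to practitioners",
    "Build and sustain community partnerships",
    "Conduct applied prevention research",
    "Translate research findings into practice",
]

# Each row is one participant's sort: position i holds the pile
# label that the participant assigned to statement i.
sorts = np.array([
    [0, 1, 0, 2, 1],
    [0, 1, 0, 2, 2],
    [0, 2, 0, 1, 2],
])

# Aggregate the sorts into a similarity matrix whose (i, j) entry
# counts how many participants placed statements i and j together.
n = len(statements)
similarity = np.zeros((n, n))
for sort in sorts:
    for i in range(n):
        for j in range(n):
            if sort[i] == sort[j]:
                similarity[i, j] += 1

# Convert co-sort counts into distances for multidimensional scaling.
distance = len(sorts) - similarity

# Map statements to two dimensions so that co-sorted statements land
# near one another, then cluster the map into candidate constructs.
coords = MDS(n_components=2, dissimilarity="precomputed",
             random_state=0).fit_transform(distance)
labels = fcluster(linkage(coords, method="ward"), t=2,
                  criterion="maxclust")

for label, text in sorted(zip(labels, statements)):
    print(f"construct {label}: {text}")

In practice the statistics only organize the statements; consistent with the participatory emphasis described here, naming and interpreting the resulting clusters as logic model constructs remains a task for the partners.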
Through these activities, stakeholders shared diverse perspectives, values, and priorities. The approach encouraged partners to work through challenges such as differences among stakeholders in program understanding and project expectations and the power differentials that are common in community–academic partnerships (12,13).
Because a transparent decision-making process, information sharing, and feedback on project changes are essential in partnerships and participatory projects (4,8,12,13), we paid careful attention to listening and responding to feedback from partners, following through on recommendations, and actively communicating the resulting changes. For example, the CEDT initially recommended developing two logic models for the PRC Program: one highlighting the national perspective and national-level outcomes and the other delineating the community perspective and community-level outcomes. After reviewing the two draft models at the regional meetings, representatives from all of the PRCs recommended that the models be combined. The core evaluation work group, with the CEDT’s guidance, then developed a single logic model for the program that incorporated the ideas of community members, PRCs, and other partners and distributed the draft for review using a structured feedback tool. This example illustrates the core evaluation work group’s commitment to responding to feedback in order to increase trust and buy-in. Seeing their ideas implemented confirmed for the partners that the project was genuinely aimed at creating products that were relevant and useful to all partner groups. Thus, partners experienced the benefits of the participatory process and continued to participate in project activities.
A participatory project often has a more fluid process than a nonparticipatory one, so funding organizations, researchers, and evaluators must be able to adjust project plans as needed (12,13). On the basis of partners’ participation and input, Project DEFINE’s direction changed several times, each time becoming more relevant and useful. For example, partners asked for another opportunity to review the logic model before it was finalized, particularly because the two logic models were being combined into one, and this step was added to the planning process. Development of a structured feedback tool for soliciting the comments used to finalize the national logic model was likewise added to the contract.
Flexibility in the use of project resources was also an important factor in Project DEFINE. When the core evaluation work group modified plans, resources were also reallocated across tasks. CDC supported the project year by year. CDC’s up-front commitment of both staff and funding ensured that the project was participatory, even though this approach to evaluation planning has higher costs and takes more time to complete (12,13). Foremost, however, the core evaluation work group recognized the importance of having dedicated project leaders and a committed group of partners who believed in the PRC Program and willingly gave their time and energy to the project.
The four tactics described allowed us to continually listen and provide feedback to our national community of researchers and others with a vested interest in prevention research. The processes also led to lessons learned, which we describe below.
Using participatory methods and a utilization-focused approach for Project DEFINE resulted in several benefits. First, CDC’s PRC Program office increased its ability to manage the program strategically. The PRC Program’s national logic model, a tangible product of the participatory process that is available on the PRC Program Web site (3), was used to improve the 5-year cooperative agreement program announcement (14), protocols for CDC site visits to PRCs, and templates for grantee work plans and progress reports. These new materials not only reflect the perspectives of the PRCs but also assess partner engagement in all PRC activities. The data from Project DEFINE continue to be used in the development of a national evaluation protocol. The increased evaluation activities respond to CDC’s accountability needs and address the long-term investment in the PRC Program by establishing mechanisms to understand how PRCs operate, their uniqueness and breadth, and the impact of the PRC Program overall.
Second, individual PRCs have increased their evaluation activities over the years. During the project, the involvement of PRC partners increased as they understood that CDC would use the logic model for planning and that their input could influence future program decisions. Since then, each PRC has been required to develop its own logic model and a related evaluation plan for the 5-year cooperative agreement application. Several PRCs have used these logic models for strategic planning and evaluation or have created logic models for specific research projects.
Third, academic, community, and state partners of the PRCs now expect to be engaged in PRC planning and research activities. The focus on community-based research has intensified during the 20-year history of the PRC Program and is now explicit in the national program’s requirements. Community partners have stated that they appreciate having their role more formally defined.
Fourth, unexpected benefits beyond the PRC Program have also resulted, including adaptation of the PRC logic model by CDC’s National Academic Centers of Excellence (ACE) on Youth Violence Prevention (15). The constructs and narrative description for the ACE logic model reflect input from ACE program partners and are consistent with youth violence prevention research policies and congressional language.
A primary challenge of the participatory process was the time and effort required for Project DEFINE. The work described here extended over 2 years. This time was necessary to engage the network of diverse program partners and address the complex issues involved in developing the logic model. A second challenge was the investment in program resources. The type of planning required for Project DEFINE can be costly, although methods can be tailored for various budgets and data requirements. A third challenge was identifying the critical project decision points for each step and the people to engage in feedback and decision-making processes. For example, when should program partners be consulted? When should CDC bring draft documents to the CEDT for comment? The best way to answer these questions was to communicate openly with members of the CEDT, asking for their perspective on the methods to use and the decisions in which they should be involved.
Having the infrastructure and processes in place to ensure routine and repeated communication with, and engagement of, stakeholders throughout the project was extremely valuable. By using a participatory model and staying attentive to the project’s practical use, we were motivated to use a variety of methods to involve all partner groups throughout the process, all of which influenced the development of the national logic model. Consistent communication and commitment to bringing in diverse viewpoints from the PRCs and their partnering communities led to stakeholder support of, and involvement in, Project DEFINE. Ongoing feedback also served as a periodic touchstone to ensure that the project remained pertinent and responsive to the needs of all partners. The project established the groundwork that the PRC Program needed to prepare for national evaluation and created momentum to continually engage partners in these activities.
The CDC framework and its program evaluation standards offer a structure for planning a public health evaluation project and principles to follow as the project progresses (4). Participatory mechanisms and methods for obtaining feedback or sharing updates need to be tailored for individual projects and programs. The experiences from Project DEFINE offer examples for other programs that need to cohesively and effectively engage diverse partners and stakeholders. We hope that our reflections on this evaluation planning project help guide others engaging in large-scale public health program evaluations and assist those working to involve a broader representation of program stakeholders, whether for building an evaluation framework, for developing a logic model, or for other purposes. For the national community of researchers and partners involved in the PRC Program, Project DEFINE propelled the program forward in documenting how PRCs conduct prevention research and in assessing whether the program is having the intended impact on public health research, policy, and practice.
This work was supported by the PRC Program, CDC’s One-Percent Evaluation Program, and CDC’s National Center for Chronic Disease Prevention and Health Promotion. We are grateful to all the partners who participated in Project DEFINE. Their dedication of time and their abundance of ideas set the stage for building a national evaluation for the PRC Program. The contractor for this project was COSMOS Corporation, Bethesda, Maryland.
Corresponding Author: Demia Sundra Wright, MPH, Centers for Disease Control and Prevention, 4770 Buford Hwy NE, Mailstop K-45, Atlanta, GA 30341. Telephone: 770-488-5506. E-mail: dswright@cdc.gov.
Author Affiliations: Lynda A. Anderson, Centers for Disease Control and Prevention and Rollins School of Public Health, Emory University, Atlanta, Georgia; Ross C. Brownson, Prevention Research Center at Saint Louis University School of Public Health, St. Louis, Missouri; Margaret K. Gwaltney, COSMOS Corporation, Bethesda, Maryland (now with Abt Associates Inc, Bethesda, Maryland); Jennifer Scherer, COSMOS Corporation, Bethesda, Maryland; Alan W. Cross, University of North Carolina at Chapel Hill, Chapel Hill, North Carolina; Robert M. Goodman, Tulane University School of Public Health and Tropical Medicine, New Orleans, Louisiana (now with Indiana University School of Health, Physical Education, and Recreation, Bloomington, Indiana); Randy Schwartz, American Cancer Society, New England Division, Framingham, Massachusetts; Tom Sims, West Virginia Bureau for Public Health, Charleston, West Virginia; Carol R. White, University of Kentucky, Lexington, Kentucky.
1. Franks AL, Brownson RC, Bryant C, Brown KM, Hooker SP, Pluto DM, et al. Prevention Research Centers: contributions to updating the public health workforce through training. Prev Chronic Dis 2005;2(2). http://www.cdc.gov/pcd/issues/2005/apr/04_0139.htm. Accessed August 14, 2006.
2. Franks AL, Simoes EJ, Singh R, Gray BS. Assessing prevention research impact: a bibliometric analysis. Am J Prev Med 2006;30(3):211–6.
3. Prevention Research Centers Program. Atlanta (GA): Centers for Disease Control and Prevention. http://www.cdc.gov/prc. Updated June 14, 2006. Accessed June 2, 2006.
4. Centers for Disease Control and Prevention. Framework for program evaluation in public health. MMWR Recomm Rep 1999;48(RR-11):1–40.
5. Gilliam A, Davis D, Barrington T, Lacson R, Uhl G, Phoenix U. The value of engaging stakeholders in planning and implementing evaluations. AIDS Educ Prev 2002;14(3 Suppl A):5–17.
6. Cousins JB, Whitmore E. Framing participatory evaluation. In: Whitmore E, editor. Understanding and practicing participatory evaluation. New Dir Eval 1998;(80):5–23.
7. Patton M. Utilization-focused evaluation. 3rd ed. Thousand Oaks (CA): SAGE Publications; 1997.
8. Greene JC. Stakeholder participation in evaluation design: is it worth the effort? Eval Program Plann 1987;10:379–94.
9. Stevahn L, King JA, Ghere G, Minnema J. Establishing essential competencies for program evaluators. Am J Eval 2005;26(1):43–59.
10. Anderson LA, Gwaltney MK, Sundra DL, Brownson RC, Kane M, Cross AW, et al. Using concept mapping to develop a logic model for the Prevention Research Centers Program. Prev Chronic Dis 2006;3(1). http://www.cdc.gov/pcd/issues/2006/jan/05_0153.htm. Accessed June 2, 2006.
11. Rossi PH, Freeman HE, Lipsey MW. Evaluation: a systematic approach. 6th ed. Thousand Oaks (CA): SAGE Publications; 1999.
12. Israel BA, Schulz AJ, Parker EA, Becker AB. Review of community-based research: assessing partnership approaches to improve public health. Annu Rev Public Health 1998;19:173–202.
13. Springett J. Issues in participatory evaluation. In: Minkler M, Wallerstein N, editors. Community-based participatory research for health. San Francisco (CA): Jossey-Bass; 2003. pp. 263–88.
14. Health Promotion and Disease Prevention Research Centers. Fed Regist 2003 Mar 27;68(59):14984–90.
15. Cooperative Agreement Program for the National Academic Centers of Excellence on Youth Violence Prevention. Fed Regist 2004 Nov 22;69(224):67915–30.
The opinions expressed by authors contributing to this journal do not necessarily reflect the opinions of the U.S. Department of Health and Human Services, the Public Health Service, the Centers for Disease Control and Prevention, or the authors’ affiliated institutions. Use of trade names is for identification only and does not imply endorsement by any of the groups named above.