Avoid These Common Mistakes in Your Program Evaluation Brief Submission
Manuscript Title
- Manuscript title overstates the manuscript’s findings and results.
General
- Inconsistent use of key terms throughout the manuscript to describe the name of the program, the phases of the program, the population (human subjects), and the outcomes (eg, measures, indicators, variables, study/evaluation/research).
- Use of insider language, acronyms, jargon, and program announcement names that are unfamiliar to readers working outside local/state health departments and CDC.
- Making claims, establishing a position, or borrowing the original ideas or work of others without providing references.
- Outdated or unrelated references used to describe programs, interventions, and implications for public health practice.
- Failure to review the manuscript to ensure narrative flow, consistency, and integration of the text when multiple authors generated different sections of the paper.
- Failing to follow the journal’s manuscript requirement to use AMA style when citing references within papers and in reference lists.
- Using multiple terms interchangeably to refer to a key partner (eg, local public health agency, health department, localities, agencies, local programs) when a single consistent term would avoid confusion.
Abstract
- The abstract is not revised after the manuscript is finalized and before submission to the journal.
Introduction
- The manuscript’s focus on the health disparities of a particular population is not justified early in the manuscript with examples, such as inadequate access to health care, behavioral factors, or educational inequalities; if examples are present, they are not referenced appropriately.
- Key terms that could be unfamiliar to the reader are not defined in the context of the public health topic under discussion in the manuscript.
Program and Objectives
- Program objectives are not guided by recent peer-reviewed literature that would establish how the results do or do not align with the manuscript’s proposed program intervention.
Intervention Approach
- Use of too many frameworks, models, logic models, and theories to describe the program’s foundation, making the paper difficult to follow.
- Use of frameworks, models, logic models, and theories without justification for their development and use (ie, support from existing literature).
- Generating frameworks, models, and logic models before reviewing the peer-reviewed literature to discover whether one already exists that can be considered and cited.
- Intervention approaches lack sufficient detail to understand how they were implemented.
- Intervention approaches are not appropriately linked to results being reported.
Evaluation Methods
- Selecting and using the wrong statistical test(s) for the type of data collected (a brief illustration follows this list).
- Evaluation questions are not supported by current peer-reviewed literature.
- Inadequate understanding or inappropriate use of evaluation study designs (eg, feasibility study vs pilot study), depending on the purpose of the evaluation.
- Inadequately defining and distinguishing among short-, intermediate-, and long-term outcomes throughout the manuscript.
- Referring to the use of a multimethod approach when the authors are actually using a mixed methods approach.
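The sketch below illustrates the first item in this list: matching the statistical test to the type of data collected. It is a hypothetical example, not guidance from the journal; the data, variable names, and use of the scipy.stats library are assumptions made for illustration only.

```python
# Hypothetical illustration of matching a statistical test to the data type.
# All data and variable names are invented for this sketch.
from scipy import stats

# Categorical outcome (eg, screened vs not screened, by program arm):
# a chi-square test of independence on a contingency table is appropriate.
contingency = [
    [45, 15],  # intervention arm: screened, not screened
    [30, 30],  # comparison arm:   screened, not screened
]
chi2, p_cat, dof, expected = stats.chi2_contingency(contingency)
print(f"chi-square = {chi2:.2f}, P = {p_cat:.3f}")

# Continuous outcome (eg, systolic blood pressure in mm Hg): an
# independent-samples t test compares group means. Applying a t test to
# the categorical counts above, or a chi-square test to these continuous
# values, would be the kind of mismatch this checklist flags.
intervention_bp = [128, 131, 125, 140, 122, 135, 129, 133]
comparison_bp = [138, 142, 130, 145, 139, 136, 141, 134]
t_stat, p_cont = stats.ttest_ind(intervention_bp, comparison_bp)
print(f"t = {t_stat:.2f}, P = {p_cont:.3f}")
```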
Results
- Failure to state the limitations of the sample size and/or inappropriate use of a small sample size for the purposes of the research reported in the manuscript.
- Generating and reporting results too soon when the intervention period is too brief to adequately measure impact.
- Attributing impact to factors that cannot be proven or validated because of poorly described frameworks or interventions and the type of data collected.
- Overreliance on qualitative data to make broad statements about program reach, impact, and sustainability (eg, multiple statements from individuals at multiple sites indicating “something” was successful).
Implications for Public Health
- Positive and negative findings of equal scientific merit are not emphasized equally.
- Findings from the evaluation are not compared with findings of similar evaluations in the peer-reviewed literature.
- Findings are not presented in the context of how they can advance the field, but only as justification for the use of or need for additional funding.
- Limitations of the evaluation are often either not provided or insufficiently discussed, which can make the findings less credible.
Tables and Figures
- Figures duplicate information in the text or tables.
- Tables duplicate information in the text or figures.
The opinions expressed by authors contributing to this journal do not necessarily reflect the opinions of the US Department of Health and Human Services, the Public Health Service, the Centers for Disease Control and Prevention, or the authors’ affiliated institutions.