Why is evaluation important in health promotion?

Evaluation approaches and frameworks have been applied effectively to diverse health programs, yielding useful learnings. Conclusion: evaluation of health promotion interventions is imperative if the benefits of research are to be delivered as improved health outcomes.

An understanding of evaluation concepts, including levels, approaches, frameworks and methods, is needed to facilitate consistent use of evaluation in research. Ultimately, health promotion programs require context-specific, adaptable evaluations. Greater opportunities exist for shared learnings to build evaluation capacity and to deliver greater health impacts.

Keywords: evaluation, health promotion, evaluating health programs, evaluation frameworks, translational research.

As the global prevalence of obesity and chronic disease continues to rise, the need for effective health promotion programs becomes ever more pressing.

Whilst research into the effectiveness of health promotion programs is needed to improve population health outcomes, translation of these research findings into policy and practice is crucial. Translation requires not only efficacy data on what to implement, but also information on how to implement it.

Evaluation seeks to optimise translation by answering questions about how to implement evidence-based interventions under real-world conditions. Hence evaluation is now recognised as an integral component of all health promotion programs. At present, however, evaluation has been applied inconsistently to health promotion programs [5], limiting the translation of knowledge. Information related to evaluation is often difficult to access through conventional academic literature and is more often located in non-academic grey literature.

Additionally, terminology in this field is inconsistent and there is considerable overlap between areas such as implementation research and evaluation. In this context, we aimed to review and summarise the literature and to discuss the planning and development of comprehensive health promotion evaluations.

This is relevant to health professionals, researchers and end users, who seek insight into evaluation designs and are engaged in driving evidence into policy and practice. An enhanced appreciation of evaluation methodologies and terminology aims to provide a foundation for those new to evaluation.

This method involves scanning the reference lists of all full-text papers and using judgement to decide whether to pursue texts further. Publications were included and reviewed if they clearly described an evaluation planning process or methodology and had applied these methods to a health promotion intervention. Publications were excluded if they did not include a formal evaluation methodology, inclusive of evaluation levels, approaches and frameworks, within the context of health promotion programs.

The literature suggests that in-depth planning is critical to comprehensive evaluation. The first step in evaluation planning is to identify the purpose for undertaking the evaluation and to formulate clear evaluation objectives and questions. Stakeholders should also be identified and engaged early; these may include funders, end users, service providers, government employees or the general public. Importantly, providing ample time to plan and conduct an evaluation is imperative.

Areas for consideration when developing an evaluation plan include evaluation levels (also referred to as types), approaches, frameworks (also referred to as models) and data collection tools. The levels are determined by the purpose of the evaluation and are influenced by the state of the program (under development or settled) and the timing of data collection (before program roll-out, during implementation or post implementation).

Common elements assessed in a process evaluation include program reach, fidelity in relation to the program protocol, program context, quality, and the dose delivered to and received by participants. However, process evaluation requires pre-planning with stakeholders to enable data collection throughout program delivery. We suggest combining a range of evaluation levels. Clinical research, as defined by the National Institutes of Health, incorporates: (1) patient-orientated research (direct human interaction), (2) epidemiological and behavioural studies, and (3) outcomes and health services research.

Evaluation is a comparative assessment of an intervention of interest against a standard of acceptability [54], utilising systematically collected data. It has also been defined as the systematic application of social research procedures for assessing the conceptualisation, design, implementation and utility of social intervention programs. An evaluation may also explore the initial impact of an intervention, that is, whether it has done more good than harm amongst a target population under specific conditions.

Knowledge translation describes the notion of moving the health knowledge generated into products, practices and policies, and can include knowledge exchange, transfer and mobilisation. Process evaluation measures the activities of the program, including its reach, implementation, satisfaction, quality and capacity, and determines whether a program is delivered as intended to the target audience.
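To make these process measures concrete, here is a minimal sketch in Python; the figures and variable names are hypothetical illustrations rather than data from any cited program, and a real process evaluation would draw these values from attendance records and protocol checklists.

```python
# Illustrative process-evaluation indicators (all figures are hypothetical).
eligible_population = 500        # people the program intended to reach
participants_enrolled = 180      # people who actually took part
sessions_planned = 10            # sessions specified in the protocol
sessions_delivered = 9           # sessions actually run
mean_sessions_attended = 6.3     # average attendance per participant
components_delivered = 17        # protocol components delivered as planned
components_specified = 20        # protocol components specified

reach = participants_enrolled / eligible_population          # proportion of target group reached
dose_delivered = sessions_delivered / sessions_planned       # proportion of planned sessions run
dose_received = mean_sessions_attended / sessions_delivered  # share of delivered sessions attended
fidelity = components_delivered / components_specified       # adherence to the program protocol

print(f"Reach {reach:.0%}, dose delivered {dose_delivered:.0%}, "
      f"dose received {dose_received:.0%}, fidelity {fidelity:.0%}")
```

Even simple ratios like these, tracked over the life of a program, can flag where delivery is drifting from the protocol before outcome data become available.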

Impact evaluation measures the immediate effect of the health intervention. In formative evaluation, a combination of measurements is obtained and judgements are made before or during the implementation phase about materials, methods and activities, in order to improve the quality of performance or the delivery of the program.

Summative evaluation is conducted after completion of a program and draws conclusions regarding the quality, impact, outcomes and benefits of the program. Each evaluation approach has a number of associated steps guiding the processes and activities of the evaluation (Table 3). A number of evaluation approaches are available, including objective-based, needs-based, theory-based, collaborative, utilization-focused and realistic evaluation.

Typically there is a large degree of overlap between the various evaluation approaches; however, the emphasis and the tasks related to each step of the evaluation vary in accordance with the nature and purpose of the evaluation. In an objective-based evaluation, the effectiveness and worth of a program are judged purely on whether pre-defined objectives have been successfully achieved.

Whilst this study was not strictly described as an objective-based evaluation, large elements of it are consistent with this type of evaluation, as unintended outcomes were not explored.

In this program the predefined end-points were determined as weight, blood pressure, serum lipids and fitness. A needs-based evaluation is one in which the determination of worth is driven by the community's wants or needs, which the planned program will then address.

A needs assessment was conducted in patients with advanced cancer to determine the most suitable delivery methods for health information. Participants were asked to complete a survey and indicated that their preferred mode of delivery was one-to-one education sessions. A theory-based evaluation is grounded in program theory and the logical relationship between program inputs and outcomes. This approach requires a good appreciation of the nature of the program, its context and its environment.

The Being Active Eat Well program was developed to address childhood obesity by promoting healthy lifestyles. In this program the intervention activities (inputs) focused on capacity building, policy development and community empowerment.

As a result of the program inputs, children in the intervention group had significantly lower increases in weight and BMI scores. In a collaborative (participatory) evaluation, stakeholders are involved in the evaluative endeavour, including interpreting findings and drawing conclusions. To determine the success of a community-based rehabilitation program for individuals with disabilities, a participatory evaluation was conducted.

Program participants, staff and managers were engaged in the evaluation process by taking part in interviews and focus groups to explore satisfaction with the service. Stakeholders reported that the program had supported the needs of the community. A utilization-focused evaluation is formulated around who the primary intended users and stakeholders are and how the results will be employed.

Utilization-focused evaluation is highly individualised, flexible and situational. A utilization-focused evaluation was conducted to assess the role and effectiveness of nurse practitioners in an acute hospital. Realistic evaluation, a theory-driven approach, focuses on the context in which a program is implemented and describes the mechanisms responsible for the outcomes achieved.

The mixed-methods evaluation explored the processes used during program implementation to provide contextual information and establish relationships between program inputs and outcomes.

Table 3: Evaluation approaches and previous applications within the health promotion setting.

Evaluation frameworks or models: Similarly, evaluation frameworks provide detailed guidance for evaluators, as they ensure the evaluation design considers the origins and contexts of the program being examined.

An evaluation framework can encourage the prioritisation of evaluation purpose and the selection of data collection tools. Here we briefly describe these evaluation frameworks, their prior applications and practical insights from our experience of utilising them. RE-AIM framework: the RE-AIM (Reach, Efficacy, Adoption, Implementation and Maintenance) framework is an adaptable and simple framework for evaluating large-scale projects, which considers both the individual and population impacts of the program.
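As a sketch of how the framework's five dimensions might be laid out when planning an evaluation (the questions and indicators below are generic examples written for illustration, not the framework's official wording):

```python
# Illustrative RE-AIM planning structure: each dimension is paired with an
# example evaluation question and a candidate indicator (hypothetical wording).
re_aim = {
    "Reach":          ("What proportion of the target population took part?",
                       "participants enrolled / eligible population"),
    "Efficacy":       ("Did the program improve the intended outcomes?",
                       "change in outcome measures against baseline or a comparison group"),
    "Adoption":       ("What proportion of eligible settings delivered the program?",
                       "sites delivering / sites invited"),
    "Implementation": ("Was the program delivered as intended?",
                       "protocol components delivered as planned"),
    "Maintenance":    ("Were effects and delivery sustained over time?",
                       "outcomes and delivery status at follow-up after roll-out"),
}

for dimension, (question, indicator) in re_aim.items():
    print(f"{dimension}: {question} Indicator: {indicator}")
```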

The associated resources provide clear examples of evaluation questions, potential measurement indicators and data collection tools, which are helpful for novice evaluators.

The context of the program refers to the evaluation environment in which the program will be implemented.

The input refers to the resources and activities required to meet the objectives of the program.

In practice, there may be inadequate consideration of evaluation, or evaluation may not be planned at all. Data are not always suitable to address the desired evaluation questions, or are not collected well.

Consider, for example, the use of surveys to obtain evaluation data: the people creating surveys often have no training in good survey design. A number of circumstances can contribute further to limited evaluation. For example, when resources are scarce or there is instability in the workforce, or when a program receives recurrent funding with no requirement for evidence of program effectiveness, the perceived value of rigorous evaluation is low.

Furthermore, in any funding round the program may no longer be supported, making any evaluative effort seem wasteful. Consequently, there appear to be few incentives for quality evaluation to inform program improvements, to modify, expand or contract programs, or to change direction in programming. Political influence also plays a role.

The difficulties of working with hard-to-reach and unengaged target groups may be ignored in the urgency to act quickly.

As a result, buy-in from community groups may not be actively sought, or may be tokenistic at best, with the potential to exacerbate the social exclusion experienced by marginalised groups [ 35 ]. There is also some reluctance among practitioners to evaluate, owing to the potential risk of unfavourable findings. Yet it is important to share both positive and negative outcomes so that others can learn from them and avoid similar mistakes.

Confidence to publish lessons learned firstly requires developing the skills to be able to disseminate knowledge through peer-reviewed journals and other forums. Secondly, practitioners require reassurance from funding agencies that funds will not automatically be withdrawn without opportunity for practitioners to adjust programs to improve outcomes.

Information about how to conduct evaluations for different types of programs is plentiful and readily available to practitioners in the public domain (see, for example, [ 7 , 9 , 10 , 16 ]). Strategies for building evaluation capacity and creating sustainable evaluation practice are also well documented [ 36 ]. Yet barriers to translating this knowledge into health promotion practice clearly remain [ 34 , 37 ].

What is needed to support practitioners to undertake improved evaluation? We propose that multi-level strategies are needed to address the organisational, capacity and translational factors that contribute to the currently limited program evaluation focused on health promotion program development [ 6 , 12 , 14 ].

We also suggest that supporting health promotion practitioners to conduct evaluations that are more meaningful for program development is a shared responsibility. We identify strategies and roles for an array of actors including health promotion practitioners, educators, policymakers and funders, organisational leadership and researchers.

Many strategies also require collaboration between different roles. Examples of the strategies and shared responsibilities needed to improve health promotion program evaluation are shown in Figure 3 and are discussed below. We have questioned here the expectations of funding agencies in relation to evaluation.

We concur with Smith [ 6 ] and question the validity of the outcomes some organisations may require, or expect, from their own programs. Assisting organisations to develop achievable and relevant goals and objectives, and processes for monitoring these, should be a focus of capacity building initiatives.

Organisational leadership needs to place a high value on evaluation as a necessary tool for continuous program development and improvement. There should be greater focus on quantifying the extent of impact needed. Practitioners need to feel safe to ask: could we be doing better?

Asking this question in the context of raising the standards and quality of programs to benefit the target groups [ 6 ] is recommended, rather than within a paradigm of individual or group performance management. Sharing both the processes and results of program evaluations in this way is especially important for the wider health practitioner community.

Furthermore, identifying organisations that use evaluation well for program development may assist in understanding the features of organisations and the strategies and practices needed to overcome common barriers to evaluation.

In some organisations, there is limited awareness of what constitutes evaluation beyond a survey, or collecting operational data. We would argue that with some modification, many existing program activities could be used for evaluation purposes to ensure systematic and rigorous collection of data.

Examples include recording journal entries of program observations, and audio or video recording of data to better understand program processes and participant involvement. Specialist evaluation skills are not always required. Practitioners may wish to consider appreciative inquiry methods [ 38 ] which focus on the strengths of a program and what is working well rather than program deficits and problems.

Examples include the most significant change technique, a highly participatory story-based method for evaluating complex interventions [ 39 ] and the success case method for evaluating investments in workforce training [ 40 ]. These methods use storytelling and narratives and provide powerful participatory evaluation methods for integrating evaluation and program development.

Boydell and colleagues provide a useful scoping review of arts-based health research [ 42 ]. Such arts-based evaluation strategies may be particularly suited to programs that already include creative components. They may also be more culturally acceptable when used as community engagement tools for groups where English is not the native language or where literacy may be low.

In our experience, funding agencies are increasingly open to the validity of these data and their potential for wider reach, particularly in vulnerable populations [ 43 , 44 ]. The outputs of arts-based methods for example, photography exhibitions, theatre performances are also powerful channels for disseminating results and have the potential to influence policy if accepted as rigorous forms of evidence [ 42 ].

We encourage practitioners to begin a dialogue with funders to identify relevant methods of evaluation and types of evidence that reflect what their programs are actually doing and that provide meaningful data for both reporting and program development purposes. Other authors have also recognised the paucity of useful evaluations and have developed a framework for negotiating meaningful evaluation in non-profit organisations [ 45 ]. Workforce development strategies, including mentoring, training, and skills building programs, can assist in capacity building.

There are several examples of centrally coordinated capacity building projects in Australia which aim to improve the quality of program planning and evaluation in different sectors through partnerships and collaborations between researchers, practitioners, funders and policymakers (see, for example, SiREN, the Sexual Health and Blood-borne Virus Applied Research and Evaluation Network [ 46 , 47 ]; the Youth Educating Peers project [ 48 ]; and the REACH partnership, Reinvigorating Evidence for Action and Capacity in Community HIV programs [ 49 ]).

These initiatives seek to provide health promotion program planning and evaluation education, skills and resources, and to assist practitioners to apply new knowledge and skills. The Western Australian Centre for Health Promotion Research (WACHPR) is also engaged in several university-industry partnership models across a range of sectors including injury prevention, infectious diseases, and Aboriginal maternal and child health.

These models have established formal opportunities for health promotion practitioners to work alongside health promotion researchers, for example, co-locating researchers and practitioners to work together over an extended period of time. Immersion and sharing knowledge in this way seeks to enhance evaluation capacity and evidence-based practice, facilitate practitioner contributions to the scholarly literature, and improve the relevance of research for practice.

There has been some limited evaluation of these capacity building initiatives and it is now timely to collect further evidence of their potential value to justify continued investment in these types of workforce development strategies. Also important is evaluability assessment, including establishing program readiness for evaluation [ 7 ] and the feasibility and value of any evaluation [ 22 ].

Though rare, some agencies may over-evaluate without putting the results to good use. Organisations need to be able to assess when to evaluate, consider why they are evaluating, and be mindful of whether evaluation is needed at all. Clear evaluation questions should always guide data collection. The use of existing data collection tools that have been shown to be reliable is advantageous for comparative purposes [ 7 ].

Evaluation has to be timely and meaningful, not simply confirming what practitioners already know, otherwise there is limited perceived value in conducting it. Practical evaluation methods that can be integrated into daily activities work well and may be more sustainable for practitioners [ 50 ]. It is not always possible to collect baseline data.

Where no baseline data have been collected against which to compare results, this does not have to be a barrier to evaluation, as comparisons against local and national data may be possible.

Post-test-only data, if constructed well, can also provide some indication of effectiveness; for example, collecting ratings of improved self-efficacy as a result of a project.
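A minimal sketch of this kind of comparison follows; the post-test figure and benchmark are hypothetical, and where possible the benchmark should come from published local or national data.

```python
# Hypothetical post-test-only comparison against an external benchmark.
post_test_improved_self_efficacy = 0.62   # share of participants reporting improvement
external_benchmark = 0.45                 # made-up figure standing in for local/national data

difference = post_test_improved_self_efficacy - external_benchmark
print(f"Post-test result: {post_test_improved_self_efficacy:.0%} "
      f"(benchmark {external_benchmark:.0%}, difference {difference:+.0%})")
```

Without baseline or comparison-group data such a result is only indicative, but it gives funders and program staff something more than anecdote to work with.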

Practitioners should be reassured that evaluation is a dynamic and evolving process. Theory-based approaches to evaluation move on from clarifying a programme's aims, objectives and outcomes to articulating the assumptions underlying the programme's design, in order to understand more about how and why the programme is supposed to operate to achieve its outcomes.

The Theory of Change approach, developed for the evaluation of comprehensive community initiatives in the US (Connell et al.), suggests that all programmes have explicit or implicit 'theories of change' about how and why they will work (Weiss). Once these theories have been made explicit, they can influence the design of the evaluation to ensure that it assesses whether the theory is correct when the programme is implemented.

This approach reconciles process and outcome measurement, ensures that practitioners and evaluators draw on established theory and their own observations about how change will happen, and has been used in, for example, the Health Action Zone evaluation in England (Judge et al.). Its proponents consider the approach to be 'post-positive', in that it recognises realities that can be investigated robustly and used to shape policy. On the other hand, they view the strict positivist approach of experimental design, particularly RCTs, as insufficient to understand the context of programmes and the constant changeability and potential intrusion of 'new contexts and new causal powers'.

Realistic evaluation considers that understanding how mechanisms are fired in certain contexts to produce certain outcomes generates theories about the effectiveness of the programme design, 'through a detailed analysis of the programme in order to identify what it is about the measure which might produce change, which individuals, sub-groups and locations might benefit most readily, and what social and cultural resources are necessary to sustain the changes'.

Both these approaches to evaluating health promotion will utilise both qualitative and quantitative data as appropriate, and critically 'open up the black box' of the intervention in order to understand what is working and why, and to improve the implementation of the intervention in order to increase effect.

It has increasingly been recognised that these issues also apply to public health evaluation in a wider sense, including the evaluation of healthcare interventions. Previous proponents of the RCT, and of strict systematic review processes based on RCTs and meta-analysis, have more recently revised their stance on these issues. As a brief illustration of this widening debate, three recent papers are discussed.

Oakley et al argue the case, somewhat belatedly, for the inclusion of process evaluation in RCTs of complex interventions such as peer-led sex education in school-based health promotion. They conclude that: Process evaluations should specify prospectively a set of process research questions and identify the processes to be studied, the methods to be used, and procedures for integrating process and outcome data. Expanding models of evaluation to embed process evaluations more securely in the design of randomised controlled trials is important to improve the science of testing approaches to health improvement.

It is also crucial for persuading those who are sceptical about using randomised controlled trials to evaluate complex interventions not to discard them in favour of non-randomised or non-experimental studies.

In their definition, complex health interventions include, for example, surgery and physiotherapy.

They summarise their position as: 'Quality can be assessed when other research provides clear indications of how interventions should be administered.

Such analyses should be specified in the review protocol and should focus on interactions between the quality and the effects of the intervention.'

Hawe et al propose a radical way of standardising complex community interventions for RCTs: in comparison with simple interventions, their approach pays less attention to the replicability of individual components of an intervention by form (eg a patient information kit, in-service training sessions) and more to their function (eg all sites devise information tailored to local circumstances, resources are provided to support all sites to run training appropriate to local circumstances, and so on).

While recognising the complexity of the systems under investigation, Hawe et al state that 'complex systems rhetoric should not become an excuse to mean "anything goes". In complex interventions, the function and process of the intervention should be standardised and not the components themselves. Intervention integrity would be defined as evidence of fit with the theory or principles of the hypothesised change process'.

However, as we have seen, the direct transference of the methods used for assessing research evidence in clinical medicine can be problematic when applied to health promotion.

Kelly summarises issues to consider when building the evidence base for health promotion: evidence of the effectiveness of interventions to reduce health inequalities is poor, and less than 0. Of the evidence that exists, there is more about 'downstream' interventions (eg individual behaviour change) than 'upstream' interventions (eg policy or environmental change). The RCT dominates the effectiveness literature, which has led to other forms of evidence being considered inferior.

As Kelly states, 'This is not helpful, because while RCTs are good on internal validity, they tend to be much less informative about issues of process and implementation which are vital to know about if the intervention is to be transferred… in health promotion where the issues involved are often highly complex and the settings difficult to control … key information will not be available from trial data'.

Other issues include the problems of synthesising evidence from different research traditions (Dixon-Woods et al) and the difficulty of grading the evidence, which applies both to the quality of systematic reviews and to primary research studies. Notwithstanding these difficulties, there has been considerable investment in developing robust review methodology for both secondary research and tertiary research (ie reviews of reviews) for public health. The following tables provide some brief examples of effective health promotion actions in the areas of:

- Tobacco use: reducing initiation, increasing cessation, and reducing exposure to environmental tobacco smoke (Table 5).
- Food-support programmes for low-income and socially disadvantaged childbearing women in developed countries (Table 5).
- Increasing physical activity: informational approaches, behavioural and social approaches, and environmental and policy approaches (Table 5).
- Motor vehicle occupant injury: increasing child safety seat use, increasing safety belt use, and reducing alcohol-impaired driving (Table 5).
- Housing and public health: rehousing and neighbourhood regeneration, refurbishment and renovation, accidental injury prevention, and prevention of allergic respiratory disease (Table 5).

This is by no means a comprehensive list. Comments in the tables about recommended interventions list only those where there is good evidence, and do not include the extensive caveats in the various reports about the research base. Similarly, they do not list interventions where there is insufficient evidence to make a judgement about effectiveness.

Readers are encouraged to read the full reports to understand more about the underlying issues. The topics were selected to provide a range of recent examples across health issues, and include upstream and downstream interventions. They also vary between systematic reviews and reviews of reviews. The final narrative review, on empowerment (Wallerstein), provides an interesting example of an inclusive and rigorous approach to reviewing the literature around a key health promotion concept and principle of practice.

Three main sources of reviews have been chosen for their relevance to public health and health promotion evidence:
- New York, OUP (www. ).
- National Institute for Health and Clinical Excellence (NICE); to access all NICE public health documents go to www.
- The Health Evidence Network (HEN), which gives rapid access to reliable health information and advice for policy-makers in evidence-based reports and summaries, together with access to other sources.

Risk behaviour in health and the effect of interventions in influencing health-related behaviour in professionals, patients and the public. New York, OUP.

Food-support programmes for low-income and socially disadvantaged childbearing women in developed countries. Food-support programmes aim to improve key maternal and perinatal outcomes. The lack of any significant impact on low birth weight (LBW), pre-term birth and other perinatal outcomes, along with the favourable impact on maternal weight gain and nutrient intakes, provides a basis for re-thinking the aims and objectives of current food-support programmes.

Setting out-of-reach goals for food-support programmes, such as reductions in rates of LBW and pre-term birth, is probably not useful until there is strong evidence of what works to improve those outcomes. With respect to the primary outcome of interest, LBW, the results of this review do not provide evidence that food-support programmes have any impact. However, there are favourable impacts on other outcomes. There is indicative evidence of an increase in the mean birth weight of babies born to heavy smokers, and of the beneficial impact of food support on maternal weight gain and dietary intake in a woman's first pregnancy.

Long-term measures, in the context of workplace health promotion, typically relate to outcomes such as reductions in disease or injury and the costs associated with them.

These are often similar to the goals of the program, and these long-term outcomes often take years to observe. Whatever program components have been included in the evaluation, it is important that they be both measurable and realistic. Finally, once the key outcomes have been identified and written as measurable and realistic, identify when each will be measured. For example, some outcomes may be measured only early or late in the program, while others may be measured several times for as long as the program is active.

The need for baseline measures is one key reason for designing the evaluation plan before implementation begins, because they establish a starting place and frame of reference for the workplace health program. These can usually be developed from data collected during the initial assessment activities and summarized in the assessment final report. Baseline measures determine where the organization currently stands on a given health problem.
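The short sketch below (hypothetical measure, values and timepoints, not taken from the CDC module) shows how a baseline value recorded before implementation gives later measurements a frame of reference:

```python
# Hypothetical workplace health measure tracked against its baseline.
measure = "employees meeting physical activity guidelines (%)"
baseline = 28.0                                      # from the initial assessment report
follow_ups = {"6 months": 31.5, "12 months": 36.0}   # planned measurement points

for timepoint, value in follow_ups.items():
    change = value - baseline
    print(f"{measure} at {timepoint}: {value:.1f} (change from baseline {change:+.1f})")
```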

The evaluation guidance so far has been general and can apply to any outcome. The evaluation module has been organized by the specific health topics listed above and, for each one, potential measures have been developed for four main outcome categories of interest to employers and employees.

Centers for Disease Control and Prevention. Framework for program evaluation in public health. Morbidity and Mortality Weekly Report 1999;48(No. RR-11).


