Putting it All Together
In the ERME program, outcomes/results achieved by producer participants are an expectation of program delivery. Pairing realistic outcomes for program participants with a well-designed evaluation system can lead to relevant and long-lasting impacts. Efforts to develop appropriate outcomes for a program audience may fall short if the applicant organization lacks a clear understanding of the types of outcomes the funder is seeking. If you are unsure of what may constitute good risk management outcomes, a signature resource is the Introduction to Risk Management (pdf). This document clearly defines agricultural risk categories along with the choices, decision aids, and other changes in practice or strategy needed to keep farm and ranch operations viable. Because farming is highly diverse in terms of size and of the crops and livestock produced, as well as in the knowledge level and culture of your target audience, outcomes will need to be tailored to these variables. If the funding organization has established priorities, determine whether they will fit within the scope of your proposed outcomes.
In the “How will you verify” column of the Results section in the ERME application/reporting system, the type of data collection (evaluation) instrument will depend on what you want to know and what kind of information you will need to gather from participants to verify the achievement of each of the proposed outcomes. The “When measured” column makes it easy to align each outcome with your program delivery schedule (Figure 4).
To be effective, data collection tools should be chosen, or designed, in the same timeframe in which you are developing the proposed set of participant outcomes; otherwise, gathering accurate data becomes more difficult. Use technology to the greatest degree possible; for example, Excel spreadsheets and online data collection tools such as SurveyMonkey will help to organize your data collection process. Streamline the process from start to finish with a well-thought-out plan for measuring (1) short-term outcomes (knowledge gained, actions taken); (2) mid-term outcomes (what participants did differently through actions and practice), gathered through follow-up evaluations, which may occur within 3 to 6 months after program delivery; and (3) long-term impacts (changes in the condition of the environment, community, economy, and so forth).
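As a concrete illustration of organizing outcome data with a spreadsheet or survey-tool export, the minimal Python sketch below tallies a short-term outcome (average knowledge gain on a pre/post quiz) and a mid-term outcome (share of participants reporting a practice change at follow-up). The column names and scores are hypothetical, standing in for whatever fields your own evaluation instrument collects:

```python
from statistics import mean

# Hypothetical rows, as might be exported to CSV from an online survey
# tool; field names and values are illustrative only.
rows = [
    {"participant": "A", "pre_score": 2, "post_score": 4, "adopted_practice": "yes"},
    {"participant": "B", "pre_score": 3, "post_score": 3, "adopted_practice": "no"},
    {"participant": "C", "pre_score": 1, "post_score": 4, "adopted_practice": "yes"},
]

# Short-term outcome: average knowledge gain (post-score minus pre-score).
avg_gain = mean(r["post_score"] - r["pre_score"] for r in rows)

# Mid-term outcome: proportion of participants reporting a practice
# change at the 3-6 month follow-up evaluation.
adoption_rate = sum(r["adopted_practice"] == "yes" for r in rows) / len(rows)

print(f"Average knowledge gain: {avg_gain:.1f} points")
print(f"Practice adoption rate: {adoption_rate:.0%}")
```

Keeping each outcome as its own column in this way makes it straightforward to align the data you collect with each row of the Results section.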
Finally, revisit your proposed risk management outcomes and consider each of the following: Does the focus of your outcomes accurately reflect the risk management topics and issues you will be covering? Are the outcomes achievable and pragmatic? And do the outcomes provide a current roadmap for the risk management progress you want participants to achieve? A healthy review of your proposed outcomes and data collection techniques during the application process, after the program is funded and before program delivery begins, while your program is being delivered, and lastly at the end of your program will ensure an evaluation process that can successfully assess producer risk management results.
A later article will address evaluation specifics using the Logic Model as a framework for evaluation and verifying risk management results.
Ripple Effects Mapping (REM)
Another evaluation technique that lends itself well to impact reporting is Ripple Effects Mapping (REM). The next article will feature a project in Utah that used REM with a ranching family, which resulted in unforeseen positive changes, including helping them to address and implement risk management practices not covered in the program delivery. Look for an in-depth article on the mechanics of Ripple Effects Mapping in our next newsletter. In the meantime, you may click on the following resource link for more information on Ripple Effects Mapping.
References & Citations:
Crane, L., Gantz, G., Isaacs, S., Jose, D., & Sharp, R. (2013). Introduction to risk management (2nd ed.). Extension Risk Management Education and Risk Management Agency. Retrieved from http://extensionrme.org/pubs/Intro-Risk-Mgmt.pdf.
Gertler, S., Premand, P., Rawlings, L., & Vermeersch, C. (2016). Impact evaluation in practice (2nd ed.). Washington, D.C.: World Bank.
Hoggarth, L. & Comfort, H. (2010). A practical guide to outcome evaluation. London: Jessica Kingsley Publishers.
Kalambokidis, L. (2013). Proceedings from Agricultural and Applied Economics Association (AAEA) Conference: Tell us about your extension program’s public value-level impacts. Washington, D.C.
Morell, J. (2005). Why are there unintended consequences of program action, and what are the implications for doing evaluation? American Journal of Evaluation, 26, 444–463.
Patton, M.Q. (2008). Utilization-focused evaluation. Los Angeles: Sage.
Saunders, R. (2015). Implementation monitoring and process evaluation. Los Angeles: Sage.
Warner, J. (2013). Proceedings from Agricultural and Applied Economics Association (AAEA) Conference: Designing outcome/results based programs for participant success & impact reporting. Washington, D.C.
Witkin, B. & Altschuld, J. (1994). Planning and conducting needs assessments: A practical guide. Los Angeles: Sage.
W.K. Kellogg Foundation (n.d.). Evaluation handbook. Battle Creek, MI: Sanders, J.