Rethinking Monitoring and Evaluation in Complex Systems — when Learning Is a Result in Itself
!tags:: #lit✍/📰️article/highlights
!links::
!ref:: Rethinking Monitoring and Evaluation in Complex Systems — when Learning Is a Result in Itself
!author:: medium.com
=this.file.name
Reference
=this.ref
Notes
Capture impact in the aggregate: We cannot evaluate individual interventions in isolation because we usually tackle systems challenges through portfolios of interconnected interventions.
- No location available
-
- [note::"Impact" is inherently complex - it is the net change produced by numerous interacting entities/mechanisms.]
Focus on contribution over attribution: We should focus on capturing our contribution to bigger change processes rather than seek to directly attribute change to our own work. Coe and Schlangen explain this well: In reality, contribution is not singular and additive; instead, multiple interacting causes make an effect more likely. Recognizing this, we should shift the lens from the “amount” of contribution a single actor makes to an understanding of the typologies of the different actors and how they combine to contribute to change.
- No location available
-
- [note::As in business, we should develop a portfolio of interventions to help bring about the outcome we want, instead of hoping that a single intervention will be successful.]
Over the past 12 months I have spoken with many people and organizations across the world who are also trying to tackle or measure complex systems challenges, and it turns out that most of them are grappling with these same challenges. Some are much further ahead than UNDP while others are just beginning their journey. For instance, the Bill and Melinda Gates Foundation is investing in building an evidence base around how to document systems change in areas such as food and agriculture, and Co-Impact has developed a learning, measurement, and evaluation guidebook for systems change. Meanwhile, the Cynefin Centre and Climate-KIC offer useful thoughts on developing transformative theories of change for complex systems, while the Small Foundation has developed a framework for measuring and managing impact networks. Blue Marble Evaluation is rethinking the role and approach of evaluators when it comes to global systems change, and organizations such as UNFPA and the Open Government Partnership are deploying developmental evaluation approaches to help them continuously learn and adapt in the face of complexity. Furthermore, the adaptive management community has built a solid evidence base and practice, while a variety of publications such as CEDIL’s Methods Briefs and work by Aston and Colnar discuss complexity-appropriate evaluation methods. Lastly, outfits such as the Transformative Innovation Policy Consortium, the Rockwool Foundation’s System Innovation Initiative, and FSG’s Water of Systems Change work offer useful conceptual frameworks for thinking about what to measure when documenting systems change.
- No location available
-
- [note::Wow - based on the resources shared here, it seems like the author is familiar with a ton of resources that might be useful to disseminate to the EA community.
I've added these resources to my Zotero]
UNDP has set up an “M&E Sandbox” to nurture and learn from innovative M&E efforts that we hope can help address these challenges. The Sandbox was originally intended as an internal (corporate) space for experimentation to support M&E innovations already emerging across UNDP. However, we decided to progressively open up the space for others to join once we saw the strong appetite among partners for this type of community.
- No location available
-
dg-publish: true
created: 2024-07-01
modified: 2024-07-01
title: Rethinking Monitoring and Evaluation in Complex Systems — when Learning Is a Result in Itself
source: hypothesis