[Report] Measuring the Impact of Tech for Accountability Initiatives
BY the engine room | Tuesday, May 20 2014
In mid-2013 we started work on a report to document best practices and useful frameworks for measuring the impact of tech and accountability programming. There’s been a lot of attention paid to the question of measuring impact, so we expected that there would be a lot to draw from. There wasn’t.
In the course of interviewing experts, supporting our partners and talking with peers, it quickly became clear that there was very little good practice out there, and almost no tools or frameworks expressly tailored for tech and accountability initiatives.
The lack of good practice is understandable. Tech for accountability initiatives tend to operate with very limited resources. Monitoring and evaluation doesn’t always get prioritized, and when it does, documentation presents additional costs and hurdles. It doesn’t help that there is no agreement on how to go about measuring the impact of technology (or the improvements in governance and accountability, for that matter). In fact, we didn’t find a single framework or methodology that could be used out-of-the-box for measuring the impact of technology and accountability programming.
What we did find was a lot of enthusiasm for developing such frameworks, and eagerness among initiatives for frameworks tailored to the way they work, the constraints they face, and the outcomes they want.
As a result, we shifted focus to produce a guide that will help tech for accountability initiatives to develop their own frameworks for monitoring and learning in real time. This Users’ Guide aims to do just that, providing introductory information, followed by concrete guidance and resources for further exploration. The Guide is organized as follows:
The first two sections describe how the Guide works, and provide an overview of the measurement problem, why it’s hard, and what’s unique about monitoring tech and accountability initiatives. This introduction is intended to help projects and project managers decide if real-time monitoring will give them the information they need, and what the costs will be.
The following section is the core of the guide, providing practical guidance for developing a tailored monitoring framework in 17 steps, from deciding to measure and mapping resources to collaborative development and implementation.
The guide closes with a collection of resources for further exploration, providing deep context and explanation of methodologies, tools and strategies.
As far as we know, this is the first guide of its kind that specifically targets small initiatives with limited resources, helping them develop tailored solutions and set their own agendas for measurement. Mindful of the demands on our audience, we’ve struggled with the balance between keeping this guide brief and accessible and providing enough detail to support meaningful decisions. We’d love your feedback on how useful you find it, and you can reach out to us with questions or comments at any time at post [at] theengineroom [dot] org.