Thank you, Nathan, for your comments and suggestions.

Each of the points you have raised is very much on our radar, but we are
still at a stage where we must make strategic choices as we move toward
the end goal of knowing which programs and activities have high potential
for impact, which have low or high costs relative to impact, and how to
value achieved impact in order to more clearly identify both successes
and failures. You are right that this will take deep cooperation from
those who design, implement, and actually evaluate program work. We sense
growing cooperation and collaboration on that front and are hopeful that
our team’s integration into Grantmaking will only strengthen those
connections and supports.

Responding to some of your discussion points below:

>The logic models are useful tools for thinking through and explaining to an
>audience the structure and goals of a program, but they are vulnerable to
>the same fuzziness that exists without the tools. They are also not well
>oriented to measuring performance, which is really the crux of the problem
>and of Pine's question. Let's look at the logic model you've used as an
>example from the WikiWomen's edit-a-thon[1]. Their logic model is great at
>explaining the goals of the program. This is a major improvement,
>particularly if it is standardized across all WMF-funded projects. But does
>it help us answer the question about impact? Using the Boulmetis-Dutwin
>model of analysis, we can get clear information about program efficiency
>and program effectiveness. But we don't get anywhere on impact, despite the
>use of the logic model.

=The Place for Logic Models=

To be clear, we also began with community-derived logic models for each of
the programs on which we initiated reporting these past six months;
however, they are also in need of some attention and better integration
into our portal resources:
https://meta.wikimedia.org/wiki/Programs:Share_Space/Overview_Logic_Model

Unfortunately, with many program leaders each holding slight variations in
their theory of change, the result is a crowded format compared to the
basic community model for a WikiWomen's edit-a-thon that has been shared
under community contributions on the resource page.[1] (We are working to
clean these up and include them on the resource page as well.) However, we
did use these initial logic models as our starting point in determining
which basic metrics to pilot, in which areas we have measurement gaps, and
what guidance to offer on evaluation measures for the programs that we
mapped. Now, after piloting those measures in the beta reports[2], we are
asking for community input at:
https://meta.wikimedia.org/wiki/Programs:Evaluation_portal/Parlor/Dialogue

>Judging by meta I think Edward and the PE team have made a great start. But
>it's 2014 and the WMF is still at a starting point. Proposing that funding
>requests include SMART goals is not good enough, and I'd love to see Lila
>and the board empower Edward to do a lot more, and to insist on deep
>cooperation from entities receiving funds. At some point in the future we
>can move this discussion from "does anything anyone does have any impact?"
>to "knowing that we *can have an impact*, how much impact is enough to
>justify funding?"


=SMART Targets and Collaboration across Grantmaking=

Our team is working in collaboration with grantmaking programs to better
guide the expectations and resources for evaluation, and this community
dialogue will also help to guide that. Still, this is not a top-down
approach, and we must allow reasonable time to explore programs and to
identify target measures that are appropriate and valid. We are still very
much in the process of drilling down while at the same time moving forward
with the clearest metrics we have identified. I appreciate that SMART
goals by themselves are not enough; still, they are one of many first
steps in advancing systematic program evaluation and design across
Wikimedia programs and activities, and there is much collaborative
planning going on within Grantmaking to empower the initiative further.
SMART targets must be aligned to relevant impact targets and must actually
be SMART (specific, measurable, attainable, relevant, and time-bound) in
order to include associated metrics and timelines. We have also added
guidance on writing SMART targets to our portal resources; however,
inclusion of SMART targets is highly variable across grant applications.
As this guidance was only added this past round, that is not too
surprising, and we expect it to improve, as will all of the evaluation
activities and strategies that are still relatively new to the process.


I would like to encourage you (and anyone else interested in this
discussion) to view our question prompts on our dialogue page [3] as well.
If you do not mind, I would also like to migrate your comments to the
appropriate discussion spaces there so that your feedback is also captured
in our process.

Thank you for this feedback. Please let us know if we can answer anything
further, and feel welcome to contribute to the discussion on the
Evaluation portal.

Sincerely,

Jaime

[1]
https://meta.wikimedia.org/wiki/Programs:Evaluation_portal/Library/Logic_models

[2]
https://meta.wikimedia.org/wiki/Programs:Evaluation_portal/Library/Overview

[3]
https://meta.wikimedia.org/wiki/Programs:Evaluation_portal/Parlor/Dialogue

-- 

Jaime Anstee, Ph.D
Program Evaluation Specialist
Wikimedia Foundation
+1.415.839.6885 ext 6869
www.wikimediafoundation.org

Imagine a world in which every single human being can freely share in the
sum of all knowledge. Help us make it a reality!
https://donate.wikimedia.org