Hi Sam,
The main thing that is often misunderstood (which is understandable, and has
also been pointed out before) is that Wiki Loves Monuments can be fundamentally
different projects from a goals-and-outcomes point of view, based on the
interests and strengths of the local organizers and the local situation. In some
On Thu, May 7, 2015 at 3:14 PM, Maria Cruz mc...@wikimedia.org wrote:
snip
All in all it is good to have something 'to shoot at', but I would prefer
that these reports are produced more in concert with the stakeholders
involved and affected, rather than 'announced' and 'presented' to the
Hi Lodewijk,
Thanks for your feedback about the process. It's been very valuable.
I have a few follow-up questions below:
Sure, the team did reach out in the collection phase; after all, without
the data such an evaluation would be impossible. But after that, the
conclusions were drafted
Regarding measurement of editor retention: this is tricky, as in fact many
participants created new accounts only to join the contest. Some of them
already had accounts on Wikipedia (but used different ones); others abandoned
their accounts and created new ones for various reasons (the most trivial: they
Editor retention really consists of three components:
* new temporary contributors. WLM helps here, and even if they leave after a few
edits this is of value for the projects. They have learned to edit, and will be
more open to correcting an error or complementing an article much later when
using
Hi,
I wasn't involved in this evaluation, but I would like to say that, as
someone who recently worked for WMF Learning and Evaluation, I believe that
the LE team is interested in producing useful and accurate reports. So, I
am optimistic that feedback from the community about methodology and
Hi Edward,
Thanks for the questions. The Wiki Loves Monuments mailing list would have
made a very logical starting place to ask for initial feedback. But also
sending an email to the people who shared their data with you in the first
place, or to people who worked on internal
On Thu, May 7, 2015 at 6:34 AM, Lodewijk lodew...@effeietsanders.org
wrote:
I hope that at some point WLM organizers can be given the tools, enthusiasm
and support to create their own evaluation on a larger scale. That way I
hope that some of the flaws can be avoided thanks to a better
I organized the Wiki Loves Monuments and Wiki Loves Earth contests in Algeria,
and coordinated with the other Arabic-speaking countries that organised the
contest.
I had a lot of fun organizing them in 2013 and 2014, and still do now in 2015.
In Algeria, to my astonishment, many people do not know what Wikipedia means;
Hi all,
In the past months the Wikimedia Foundation has been writing an evaluation
about Wiki Loves Monuments. [1]
In itself it is fine that the WMF is writing an evaluation; however, they fail
to actually understand Wiki Loves Monuments, and that shows in the
evaluation report.
As a result on the
Hi Romaine,
Are there other evals of WLM projects that capture the complexity you want?
Perhaps single-community evaluations done by the WLM organizers there?
Sam
On Wed, May 6, 2015 at 7:21 AM, Romaine Wiki romaine.w...@gmail.com wrote:
Hi Sam,
I am sure there are figures and stories that the various orgs collect
and publish. But they are spread across different wikis and websites
and/or languages. E.g. many of the FDC orgs are looking into ways to
demonstrate these more qualitative aspects of our work (e.g. by
Claudia, I share your concerns about reducing subtle things to a few
numbers. Data can also be used in context-sensitive ways. So I'm
wondering if there are any existing quantitative summaries that you find
useful? Or qualitative descriptions that draw from more than one project?
Figuring out
Hello my friends,
I haven't had the opportunity to organize a WLM contest yet, but I had the
opportunity to organize the Brazilian WLE last year, and I'm promoting that
same contest here in Brazil again this year.
Quantitative analyses are always easier to do than qualitative analyses. In
that
Hi all,
Thanks for the comments on the first two program evaluation reports. This
is the kind of feedback we are looking for from the community, and for that
reason we want to continue this conversation and learn more about which goals
and metrics make the most sense to program leaders.
As
Yes, I think that this may be considered the central problem.
It's easier to compare two different scenarios with a standard measure and
to use kilos to compare apples and oranges, for instance.
The problem is to understand that oranges will continue to be oranges after
this measure, and apples