Hello,

Many interesting observations here, typically pointing towards the "lip 
service" problem of evaluation: "There, we've evaluated it... now, let's file 
it away under 'F' as in 'Forgetting'..."

True, some summative reports are simply one-offs that rarely raise any 
eyebrows.

There's always room for improvement. I wonder if the group will also look 
at how evaluation loops are working, as summative evaluation (even 
"remedial evaluation") should be seen as part of an evaluation cycle.

In my experience, the institutions that use evaluation actively (through 
in-house teams, external consultants, or a combination) are usually those 
that practice the full cycle of audience research, i.e. 
baseline, front-end, formative, summative, and back again.
But there are also different aspects to evaluate, not all of them 
concerning the evaluation of learning.

I also hope this evaluation-of-evaluations project will be promoted 
internationally to the museum evaluation/audience research community, as it 
covers a very wide field (wherever interpretation activities are present).
To my mind, there's no simple answer to this one... there's no shortcut to 
good evaluation!

Best wishes,
Paul Henningsson

digital interpretation and evaluation

------------------------------------------------
musedia
box 12139
se-402 42 gothenburg
sweden

tel . +46 (0)735-52 23 36
e-mail. paul at musedia.net
www.musedia.net

http://blogg.museiteknik.com
------------------------------------------------

At 08:09 2012-06-07 -0400, you wrote:
>Hi everyone - I thought I'd bring over this interesting post from the MCG
>listserv.
>
>Thoughts?
>
>
>Sheila Carey (Chair, Metrics & Evaluation SIG)
>Analyste des publics et des programmes | Audience and Program Analyst
>Réseau canadien d'information sur le patrimoine (RCIP) | The Canadian
>Heritage Information Network (CHIN)
>Ministère du Patrimoine canadien | Department of Canadian Heritage
>Gatineau, Canada K1A 0M5
>sheila.carey at pch.gc.ca
>Téléphone | Telephone 819-934-5017
>Télécopieur | Facsimile 819-994-9555
>Téléimprimeur (sans frais) 1-888-997-3123 | Teletypewriter (toll-free)
>1-888-997-3123
>Gouvernement du Canada | Government of Canada
>
>
>
>
>--------------------
>
>
>
>
>Date:    Wed, 6 Jun 2012 12:51:14 +0100
>From:    Mia <mia.ridge at GMAIL.COM>
>Subject: 'Why evaluation doesn't measure up'
>
>There's an interesting post called 'Why evaluation doesn't measure up'
>on the Museums Association site
>http://www.museumsassociation.org/museums-journal/comment/01062012-why-evaluation-doesnt-measure-up
>
>or http://bit.ly/L9FlQz where they say:
>
>"No one seems to have done the sums, but UK museums probably spend
>millions on evaluation each year. Given that, it's disappointing how
>little impact evaluation appears to have, even within the institution
>that commissioned it."
>
>and:
>
>"Summative evaluations are expected to achieve the impossible: to help
>museums learn from failure, while proving the project met all its
>objectives. Is it time to rethink how the sector approaches
>evaluation?"
>
>I'm curious to know what others think.  Are they right?  Or are they
>missing something?
>
>Cheers, Mia
>
>--------------------------------------------
>http://openobjects.org.uk/
>http://twitter.com/mia_out
>
>_______________________________________________
>You are currently subscribed to mcn-l, the listserv of the Museum Computer 
>Network (http://www.mcn.edu)
>
>To post to this list, send messages to: mcn-l at mcn.edu
>
>To unsubscribe or change mcn-l delivery options visit:
>http://mcn.edu/mailman/listinfo/mcn-l
>
>The MCN-L archives can be found at:
>http://toronto.mediatrope.com/pipermail/mcn-l/

