Hi All,

At the last unconference in Oxford I did a quick talk on some interesting data 
I'd gathered that seems to indicate a trend of increased examination 
performance (vs previous years) in a unit that made recorded lectures available 
for revision. If you're interested in this and didn't attend, have a look at 
the "Learning Outcomes" video on this page:

http://opencast.org/video/opencast-matterhorn-2012-unconference-recordings

There's also some further detail here:
http://www.ucisa.ac.uk/bestpractice/awards/~/media/Files/members/awards/excellence/2011/Manchester.ashx

Anyway, I embarked on a larger study to see if the trend could be demonstrated 
on a larger scale and I said I'd post to the list when I had some initial 
findings. In short, although some units did show a trend of increased 
performance, others did not. There's still a fair bit of data analysis to do: 
a few paired t-tests (across lecture capture and non-lecture capture units 
taken by the same cohort) with unequal sample sizes and unequal variances are 
required. The system also has quite a lot of noise in it, mainly due to 
changes in staff, the LMS and other factors that could account for the 
variation in results. In addition, the selection criteria of the larger-scale 
test led to teaching staff volunteering whose teaching standards were already 
excellent, which likely made any impact of lecture capture more difficult to 
measure.
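For anyone curious about the mechanics, here's a minimal sketch of the kind of comparison involved, assuming Welch's t-test (the usual choice when sample sizes and variances differ); the marks below are made up purely for illustration, not real study data:

```python
from statistics import mean, variance
from math import sqrt

def welch_t(a, b):
    """Welch's t statistic and degrees of freedom for two independent
    samples with unequal sizes and unequal variances."""
    na, nb = len(a), len(b)
    va, vb = variance(a), variance(b)   # sample variances (n-1 denominator)
    se2 = va / na + vb / nb             # squared standard error of the difference
    t = (mean(a) - mean(b)) / sqrt(se2)
    # Welch-Satterthwaite approximation for the degrees of freedom
    df = se2 ** 2 / ((va / na) ** 2 / (na - 1) + (vb / nb) ** 2 / (nb - 1))
    return t, df

capture_marks = [62, 58, 71, 65, 69, 74, 60, 67]   # hypothetical unit with recordings
no_capture_marks = [55, 61, 59, 63, 57, 66]        # hypothetical unit without

t, df = welch_t(capture_marks, no_capture_marks)
print(f"t = {t:.2f}, df = {df:.1f}")
```

The resulting t value would then be checked against the t distribution with those degrees of freedom (e.g. via a table or scipy.stats) to get a p-value.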

I'll post again once I've done the in-depth analysis, but it'll take me a 
little while to get that done. Based on this, the next obvious question is 
what could be done to investigate this further (if further investigation is 
warranted). Ideally a single-blind control group experiment on a unit or set 
of units would produce a more conclusive answer. However, I doubt this would 
be ethical, and I can't imagine the practicalities of dividing a class in 
half and then telling them only 50% of students would receive lecture 
recordings. Even if it were done, in all likelihood the group receiving 
recorded lectures would share them with the control group. An alternative 
would be to target a set of units that showed low variation across an 
extended period of time, then measure short-term and long-term changes with 
the addition of lecture capture. It might be a bit tricky to resource this 
option, so it would probably be more appealing as an activity within a larger 
project to roll out lecture capture.

Sorry for the wall of text, but hopefully that's interesting to some of you.

Best Regards
Stuart Phillipson | Digital Media Projects Coordinator

Room 1.83 Simon Building
University of Manchester
Brunswick Street
Manchester
M13 9PL
United Kingdom

e-mail: [email protected]
Phone: 016130 60478
