Hi Stuart,

I'm just finishing some statistical analysis on how our students are
using lecture capture as well, which you and others following this
thread might be interested in.  We took a second-year science course
and made lecture capture available in two back-to-back offerings.
We analysed the students who used the system (n=135) and grouped them
into five clusters based on usage (we used k-means).  We then used these
clusters to characterize the next year's students, and compared marks.
We found, for both the midterm and the final, statistically
significant differences in marks for one cluster in particular -
the one where students habitually watched videos on a weekly basis.  In
short, if you watch videos every week, you will get a better mark.
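For anyone curious about the mechanics, here is a minimal sketch of that
clustering-then-characterization step.  It assumes scikit-learn's KMeans;
the feature layout (minutes watched per week) and all data here are
synthetic stand-ins, not our study's data:

```python
# Hypothetical sketch: cluster one cohort by weekly lecture-capture
# usage, then label a later cohort with the fitted clusters.
# All data below is synthetic; real features would come from viewing logs.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
# usage[i, w] = minutes student i watched in week w of a 13-week term
usage_year1 = rng.poisson(lam=20, size=(135, 13)).astype(float)
usage_year2 = rng.poisson(lam=20, size=(150, 13)).astype(float)

# Five usage clusters fitted on the first cohort.
km = KMeans(n_clusters=5, n_init=10, random_state=0).fit(usage_year1)

# Characterize next year's students via the year-1 centroids,
# then compare marks across the resulting groups.
year2_labels = km.predict(usage_year2)
```

The key design point is that the cluster model is fitted on one year and
only *applied* to the next, so the second cohort's marks never influence
how the usage groups are defined.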

We're verifying this on a larger data set this summer, but some initial
results (without the marks) were published at LAK11:

http://dl.acm.org/citation.cfm?id=2090128

> group experiment on a unit or set of units would produce a more
> conclusive answer. However I doubt this would be ethical. I can't
> imagine the practicalities of dividing a class in half and then
> telling them only 50% of students would receive lecture recordings.
> Even if it were done in all likelihood the group receiving recorded
> lectures would share them with the control group. An alternative

Using two back-to-back years with the same instructor helps mitigate
this, but it's tough to get more controlled than that.  Dealing with
large numbers and averages is the best we've been able to do in this
regard.

> would be to target a set of units that showed low variation across an
> extended period of time, then measure short term / long term changes
> with the addition of lecture capture. It might be a bit tricky to
> resource this option, so it's probably more appealing if it were an
> activity done within a larger project to rollout lecture capture.

The problem is that marks are a very artificial proxy for learning,
especially year after year.  If lecture capture raised averages by 10%
across the board, departments would just curve marks downwards by 10%
(either directly in the short term, or by making the course content
harder in the longer term).  This is an issue that the learning
analytics community is having trouble dealing with - evaluation of
students != learning.

Regards,

Chris

-- 
Christopher Brooks, BSc, MSc
ARIES Laboratory, University of Saskatchewan

Web: http://www.cs.usask.ca/~cab938
Phone: 1.306.966.1442
Mail: Advanced Research in Intelligent Educational Systems Laboratory
     Department of Computer Science
     University of Saskatchewan
     176 Thorvaldson Building
     110 Science Place
     Saskatoon, SK
     S7N 5C9
_______________________________________________
Community mailing list
[email protected]
http://lists.opencastproject.org/mailman/listinfo/community


To unsubscribe please email
[email protected]
_______________________________________________