Thanks Chris, that was an interesting read. I agree with many of the points you 
made, particularly about marking: even in a year of exceptional performance you 
can easily end up with the same mean mark. We make fairly extensive use of 
biweekly MCQ tests in quite a few of our course units. They often keep the 
same question set across multiple years; once the academic has developed them, 
they tend to be modified only if there's a problem. So in theory the marking 
of these should be relatively constant; I'll have to run them through a 
couple of statistical tests to see if anything interesting comes out.
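
Something like the sketch below is what I have in mind - a straight
comparison of the mark distributions for the same question set across two
years, with a t-test on the means and a Kolmogorov-Smirnov test on the
overall shape. The file name and column layout are just placeholders for
however the marks come out of our system:

    import pandas as pd
    from scipy import stats

    # Marks for the same MCQ question set, one column per academic year
    # (hypothetical export format)
    marks = pd.read_csv("mcq_marks.csv")
    y1 = marks["2010"].dropna()
    y2 = marks["2011"].dropna()

    # Welch's t-test: has the mean mark shifted between years?
    t_stat, p_t = stats.ttest_ind(y1, y2, equal_var=False)

    # Kolmogorov-Smirnov test: has the shape of the distribution changed?
    ks_stat, p_ks = stats.ks_2samp(y1, y2)

    print("t-test:  t = %.2f, p = %.3f" % (t_stat, p_t))
    print("KS test: D = %.2f, p = %.3f" % (ks_stat, p_ks))

If the marking really is constant, both tests should come back
non-significant year after year.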


Stuart Phillipson | Digital Media Projects Coordinator

Room 1.83 Simon Building
University of Manchester
Brunswick Street
Manchester
M13 9PL
United Kingdom

e-mail: [email protected]
Phone: 0161 306 0478

On 8 May 2012, at 21:17, Christopher Brooks wrote:

> Hi Stuart,
> 
> I'm just finishing some statistical analysis on how our students are
> using lecture capture as well, which you and others following this
> thread might be interested in.  We took a second-year science course
> and made lecture capture available in two back-to-back years.  We
> analysed the students who used the system (n=135) and clustered them
> into five groups based on usage (we used k-means).  We then used these
> clusters to characterize the next year's students and compared marks.
> We found, for both the midterm and the final, statistically
> significant differences in marks for one cluster in particular - the
> one where students habitually watched videos on a weekly basis.  In
> short, if you watch video every week, you will get a better mark.
> 
> We're verifying this on a larger data set this summer, but some
> initial results (without the marks data) were published at LAK11:
> 
> http://dl.acm.org/citation.cfm?id=2090128
> 
>> group experiment on a unit or set of units would produce a more
>> conclusive answer. However, I doubt this would be ethical. I can't
>> imagine the practicalities of dividing a class in half and then
>> telling them that only 50% of students would receive lecture
>> recordings. Even if it were done, in all likelihood the group
>> receiving recorded lectures would share them with the control
>> group. An alternative
> 
> Using two back-to-back years with the same instructor helps mitigate
> this, but it's tough to get more controlled than that.  Dealing with
> large numbers and averages is the best we've been able to do in this
> regard.
> 
>> would be to target a set of units that showed low variation across an
>> extended period of time, then measure short- and long-term changes
>> after the addition of lecture capture. It might be a bit tricky to
>> resource this option, so it's probably more appealing as an activity
>> within a larger project to roll out lecture capture.
> 
> The problem is that marks are a very artificial proxy for learning,
> especially year after year.  If lecture capture raised averages by 10%
> across the board, departments would just curve marks downwards by 10%
> (either directly in the short term, or by making the course content
> harder in the longer term).  This is an issue the learning analytics
> community is having trouble dealing with - evaluation of students !=
> learning.
> 
> Regards,
> 
> Chris
> 
> -- 
> Christopher Brooks, BSc, MSc
> ARIES Laboratory, University of Saskatchewan
> 
> Web: http://www.cs.usask.ca/~cab938
> Phone: 1.306.966.1442
> Mail: Advanced Research in Intelligent Educational Systems Laboratory
>     Department of Computer Science
>     University of Saskatchewan
>     176 Thorvaldson Building
>     110 Science Place
>     Saskatoon, SK
>     S7N 5C9
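
For anyone following the thread who wants to try the cluster-then-compare
analysis Chris describes above on their own capture logs, a rough sketch
in Python with scikit-learn might look like the below. To be clear, this
isn't Chris's actual pipeline - the input format is my guess (one row per
student, a view count per teaching week, and a final mark), and the file
and column names are made up:

    import pandas as pd
    from scipy import stats
    from sklearn.cluster import KMeans

    # One row per student: weekly view counts plus a final mark
    # (hypothetical export; adjust to your own capture logs)
    usage = pd.read_csv("weekly_views.csv")
    week_cols = [c for c in usage.columns if c.startswith("week_")]

    # Cluster students on their weekly viewing patterns
    # (k=5 to match the five clusters in Chris's study)
    km = KMeans(n_clusters=5, n_init=10, random_state=0)
    usage["cluster"] = km.fit_predict(usage[week_cols])

    # One-way ANOVA: do final marks differ across the usage clusters?
    groups = [g["final_mark"].values
              for _, g in usage.groupby("cluster")]
    f_stat, p = stats.f_oneway(*groups)
    print("ANOVA across clusters: F = %.2f, p = %.3f" % (f_stat, p))

A significant ANOVA would then justify pairwise comparisons against the
habitual weekly viewers' cluster.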
