On Thu, Feb 23, 2012 at 2:41 PM, Peter Bienstman
<[email protected]> wrote:
> * At the moment, the importer only copies over media files that are
> explicitly referenced in the cards, and not orphaned media. This makes
> correcting spelling mistakes more tedious for the user. I will change
> this behaviour to copy over all media files. You can still do that now:
> after import, just copy everything to dot_mnemosyne2/default_db.media.
> As long as you don't select 'delete unused media files' before you fix
> your spelling errors, you should be fine.
>
> * I will fix the duplicate message box warning about missing media

I'd suggest adding text to the message box warning users not to
delete unused media files before going through the cards tagged as
missing media.

> * future schedule dropping to 0 in a month's time: I cannot reproduce this
> from your data, but I've made some fixes to the code yesterday, and perhaps
> you were using a version from the day before yesterday. Updating from bzr
> and reimporting should do the trick.

Quite possible. I don't remember doing a bzr pull before trying your
suggestions which then made Mnemosyne 2 work (at which point I ceased
to mess with the source repo and began actually using it and taking
notes).

> * 100% retention rate: I checked your logs, and indeed, you never needed to
> use grades 0 and 1, so you always have perfect recall! Congratulations! Feel
> free to suggest another figure of merit which would be better suited for
> your experiments, it's trivial to add extra statistics in a plugin.

Hm. So retention rate is obviously no good: I simply never use the
failing grades, so it completely ignores large swathes of reviews.
What metric would you suggest? I was thinking 'average grade per day'
might work, but it seems to me that this could be confounded - what if
a bunch of cards with low easiness are grouped onto one day?

The question is whether my grade performance is lower than one would
expect; does the algorithm have an 'expected' grade for each card,
perhaps calculated from the 'easiness' metric? Then a statistic could
be defined on that: sum and average the differences between expected
and actual grades. This would distinguish between a day where I grade
a lot of easy cards a 2 because I've accidentally given myself metal
poisoning, and days where I grade a lot of cards a 2 because they're
jolly hard.

(This metric would also be good if I ever experiment with tDCS: cards
reviewed when using tDCS will have higher grades than they 'should' on
subsequent reviews.)

-- 
gwern
http://www.gwern.net

-- 
You received this message because you are subscribed to the Google Groups 
"mnemosyne-proj-users" group.
