OK, that could be an assumption of course. I more or less expected
that. Anybody who uses Mnemosyne regularly, adds all the words he is
using, and has only some small random influence on particular items
from other methods is the "regular" user, and everybody else creates
noise...

I just thought it's pretty hard to assess the noise level: checking
on myself, I found that I alone am creating three types of "noise". But
then again, it is probably possible to identify what kind of noise
gets generated by a particular "misuse" pattern, assess its
level, and filter it out. And datasets with big interruptions can be
included or excluded to see how they modify the picture. Good luck,
and I hope this data plan brings some interesting results.
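To make the include/exclude idea concrete, here is a minimal sketch of how one might partition per-user review histories by whether they contain a "big interruption". The data shape (a sorted list of review timestamps per user) and the 60-day threshold are my assumptions, not Mnemosyne's actual log format:

```python
from datetime import timedelta

def has_big_interruption(timestamps, max_gap=timedelta(days=60)):
    """True if any gap between consecutive reviews exceeds max_gap.
    `timestamps` is a sorted list of datetime objects, one per review.
    (Hypothetical data shape; threshold chosen arbitrarily.)"""
    return any(b - a > max_gap for a, b in zip(timestamps, timestamps[1:]))

def split_regular_and_interrupted(histories, max_gap=timedelta(days=60)):
    """Partition per-user histories so the analysis can be run both
    with and without the interrupted datasets and the results compared."""
    regular, interrupted = [], []
    for history in histories:
        (interrupted if has_big_interruption(history, max_gap)
         else regular).append(history)
    return regular, interrupted
```

Running the analysis twice, once on `regular` alone and once on everything, would show how much the interrupted datasets shift the picture.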

In fact, I think that info about an item being a duplicate of other items
could be included in the upload as well. Of course there are still
ways to introduce duplicates without getting "caught": I keep different
courses in different databases, so duplicates would never show up as
such :-/
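One way such duplicates could be flagged at upload time, even across separate databases, might be to fingerprint the normalized card content. This is only a sketch of the idea; the function names, normalization, and card layout are assumptions, not anything Mnemosyne actually does:

```python
import hashlib

def content_fingerprint(question, answer):
    """Hash the normalized question/answer text so identical cards
    produce identical fingerprints regardless of which database
    ('course') they live in. Purely illustrative normalization."""
    normalized = question.strip().lower() + "\x00" + answer.strip().lower()
    return hashlib.sha256(normalized.encode("utf-8")).hexdigest()

def find_duplicates(cards):
    """cards: iterable of (card_id, question, answer) tuples drawn from
    any number of databases. Returns a mapping of fingerprint -> list of
    card ids, keeping only fingerprints shared by more than one card."""
    seen = {}
    for card_id, question, answer in cards:
        seen.setdefault(content_fingerprint(question, answer), []).append(card_id)
    return {fp: ids for fp, ids in seen.items() if len(ids) > 1}
```

Because the fingerprint depends only on content, merging the card lists from several databases before calling `find_duplicates` would catch exactly the cross-database case described above.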

On Feb 7, 9:45 am, Peter Bienstman <[email protected]> wrote:
> On Sunday, February 06, 2011 04:41:44 pm normunds wrote:
>
> > I wonder whether a consistent interpretation of the Mnemosyne-gathered
> > data is possible. Has anybody analysed it,
>
> Not really, Mnemosyne 2.0 is my priority now.
>
> > and what assumptions do you
> > make, and how do you filter off "wrong data" from "regular" ones? Or if
> > not, how do you try to account for the presence of some unexpected use
> > patterns.
>
> It's an enormous dataset, with many thousands of users. The assumption is that
> any 'particularities' will be just noise and overshadowed by 'regular'
> entries.
>
> Cheers,
>
> Peter

-- 
You received this message because you are subscribed to the Google Groups 
"mnemosyne-proj-users" group.
To post to this group, send email to [email protected].
To unsubscribe from this group, send email to 
[email protected].
For more options, visit this group at 
http://groups.google.com/group/mnemosyne-proj-users?hl=en.