On Sun, Jul 26, 2009 at 10:32 PM, querido <[email protected]> wrote:
>
> I admit that most of the above is an overreaction to a problem I've
> given myself: I've been fanatically absorbed in Chinese study for the
> last seven months, and while I've made great progress, the rate can't
> be sustained. I might consolidate for a while.
>
> I have a big idea for you. I suggest you could skip to the last
> paragraph first if you'd rather avoid my unpolished verbosity below.
>
> About #1, above (This is about language, and especially relevant to my
> scenario of language learned from a graduated series of textbooks in
> which later lessons subsume earlier ones. I know your program is much
> more general than this, and other people use it all sorts of ways.):
>
> If I can show that all of the information on some subset of my cards,
> all of which are at intervals above some minimum, is present in
> composite form in some lesson or text I've studied, and if I can prove
> that I possess it now as language (by passing a "review scheduling"
> card that tests this whole chunk), then "graduating" from those cards
> looks reasonable, to be replaced by this scheduled reading/listening
> of the whole.
> We know the principle of atomic-data flashcards. But what I'm saying
> suggests a new theory of how information should be managed over
> time... leading toward the big, hard to flashcardize qualities of real
> language. Let's see: A subset of less-composite-data flashcards
> *should* be condensed into a more-composite-data flashcard as soon as
> some criteria are met. This would build toward "review assignment"
> cards (in a separate category to avoid interfering with your schedule
> of learning *new* things), like this: front "This month, read War and
> Peace (in Russian of course)" back "Did you understand everything to
> the standard that you demand of yourself?" At that point, you don't
> need the 50,000(?) atomic cards that it would break down into. You
> could declare yourself done, with a yearly reading. Corollary: the
> more composite, the less the interval should be stretched, leading
> asymptotically toward no-stretching, pure maintenance. Corollary: the
> more composite, the more time should be allowed for the card to avoid
> interfering with the normal reps of new cards. That means fewer of
> these cards per unit time, ultimately requiring let's say a button to
> indicate that you've started on the assignment, giving you the
> permitted day, week etc. to complete it. These cards would be "in
> progress", awaiting their grade.
>
> From: *learning* atoms, To: *maintaining* chunks of real language.
>
> Just as a software tool could chop up a book into atomic cards, a
> software tool could monitor the learning process and re-condense,
> letters into words, words into sentences, etc., as justified. (Chop up
> the book recursively down to letters or characters, storing the
> intermediate results in a database. Do the audio too!) Integrated into
> the flashcard program and automated, total card number would
> continually fold downward into fewer more complex cards with lower ef,
> until your flashcard displays a link to your favorite bookstore to
> fetch this month's assignment!
>
> A practical, partial alternative that acknowledges these principles
> and could be implemented now is this: Every time I correctly answer a
> composite card, every atom present on that card would have *its own*
> card's interval reset, from today, probably even incremented, because
> I just saw it, and knew it. The presence of these cards would be
> irrelevant then since their intervals should become astronomical! The
> list of its atoms, compiled when the composite card is made, could be
> stored like tags with the card. This would be huge, and is why
> increasing card-complexity should be sought. There you go.

This is an interesting idea, but I'm not sure about it. Aside from the
work of reconsolidating, is it efficient? eg. suppose we have 50k
cards, and half of them need to be reviewed in 6 months and the other
half in 16 months; then your proposal would schedule all of them at 12
months or thereabouts. Wouldn't that be wastefully soon for half the
cards and far too late for the other half? It would seem to encourage
forgetting.
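For what it's worth, here's how I picture your "partial alternative"
working mechanically - just a Python sketch, where the class names,
the `bonus` multiplier, and the doubling rule are all my own
placeholders, not Mnemosyne's actual internals:

```python
from dataclasses import dataclass, field
from datetime import date, timedelta

# Sketch of the "partial alternative": grading a composite card correct
# pushes back the interval of every atom card it subsumes, since each
# atom was just seen and known. All names/numbers here are illustrative.

@dataclass
class Card:
    front: str
    interval_days: int = 1
    due: date = field(default_factory=date.today)
    atoms: tuple = ()   # atom names, "stored like tags with the card"

def grade_composite(card, atom_index, correct, bonus=1.2):
    """On a correct answer, reschedule the composite card and reset
    every atom's interval from today, probably even incremented."""
    if not correct:
        return
    card.interval_days *= 2   # placeholder scheduling rule
    card.due = date.today() + timedelta(days=card.interval_days)
    for name in card.atoms:
        atom = atom_index.get(name)
        if atom is None:
            continue
        atom.interval_days = max(atom.interval_days,
                                 int(atom.interval_days * bonus))
        atom.due = date.today() + timedelta(days=atom.interval_days)
```

The `bonus` multiplier is arbitrary; the point is only that atom
intervals restart from today whenever a composite succeeds.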

While we're on the topic of new approaches to learning languages,
here's one I found interesting, although I never could quite work out
how to incorporate it into Mnemosyne or SRS:
http://jtauber.com/blog/2008/02/10/a_new_kind_of_graded_reader/

The idea is that a student has a small core vocabulary of Greek verbs
& nouns. You scan some large corpus looking for sentences and
paragraphs which have as few words falling outside that corpus as
possible, and ideally just one unknown word, and you present all the
matching sentences for the student to study/translate/learn.* Then,
you re-scan the corpus, having updated the corpus with the new word,
and so on and so forth.
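Mechanically, I picture the scan going something like this (a rough
Python sketch of my reading of the idea, not jtauber's actual code;
the tokenization and function names are mine):

```python
import re

# Rank corpus sentences by how many words fall outside the known
# vocabulary, keeping only those at or under the unknown-word budget
# (ideally surfacing "just one unknown word" sentences first).

def tokenize(sentence):
    return re.findall(r"\w+", sentence.lower())

def next_sentences(corpus, known, max_unknown=1):
    """Return (unknown_words, sentence) pairs, easiest first."""
    scored = []
    for sentence in corpus:
        unknown = sorted({w for w in tokenize(sentence)
                          if w not in known})
        if len(unknown) <= max_unknown:
            scored.append((unknown, sentence))
    return sorted(scored, key=lambda pair: len(pair[0]))
```

Once the student has studied the surfaced word, you add it to `known`
and re-scan, exactly as described above.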

It's a nice idea - intuitively I feel it's automating something that
good students are doing already eg. consider one blogger's 'sentence
mining' approach:
http://www.glowingfaceman.com/2008/12/sentence-mining.html

But I couldn't figure out the best way to marry it with SRS. I figured
that one viable approach might be to take a corpus, take a set of
foreign vocab which it is mandatory for the user to have, and then
generate the 'minimal' learning path. That is, it'd create thousands
of cards, each covering the next most rare word, and the user could
just work his way through them linearly.
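The path generation could be sketched like so (again my own
construction with hypothetical names - one card per word, introduced
from most to least frequent, each paired with a sentence from the
target text that contains it):

```python
import re
from collections import Counter

# Build a linear card sequence for a target text: introduce its
# vocabulary in descending frequency order, so the user hits the
# commonest words first and the rarest last.

def learning_path(sentences):
    words = lambda s: re.findall(r"\w+", s.lower())
    counts = Counter(w for s in sentences for w in words(s))
    cards = []
    for word, _ in counts.most_common():   # descending frequency
        example = next(s for s in sentences if word in words(s))
        cards.append((word, example))
    return cards
```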

(Actually, maybe this approach isn't as bad as I thought. I've been
generating large numbers of cards for memorizing poems, and it hasn't
worked out too badly as long as I didn't use the randomization plugin.
Hm. I should look into whether the guy's software could be repurposed
for this. A static set of cards could work well: imagine such a
generated card deck for someone learning French: she can choose from
one targeted at _In Search of Lost Time_, or she could pick a deck
targeted at Rene Descartes if her interests inclined that way.)

* There's some extra stuff about translating parts of sentences into
English to focus on a particular word, but I think this is extra - a
hack to get around the fact that a 'small' corpus like the New
Testament isn't often going to give you sentences which have *only*
one unknown word. By translating, you can take a sentence with
multiple unknown words and translate it into a sentence with only one
unknown word.

-- 
gwern

--~--~---------~--~----~------------~-------~--~----~
You received this message because you are subscribed to the Google Groups 
"mnemosyne-proj-users" group.
To post to this group, send email to [email protected]
To unsubscribe from this group, send email to 
[email protected]
For more options, visit this group at 
http://groups.google.com/group/mnemosyne-proj-users?hl=en
-~----------~----~----~----~------~----~------~--~---
