2008/12/9 duncan <[EMAIL PROTECTED]>:
> It does sound like the increase in time
> with number of items is greater than linear, and if that's the case
> there is probably only one spot that needs to be fixed. If this is
> inherent in the xml parser you're using I would consider it a bug in
> that library, and I would not bother to look for a cause in your code-
> I'd just switch in a better library. Do you know that your bottleneck
> is in the xml parser?

I think this assumption should be tested rather than... well, assumed
:) O(n^2) isn't necessarily the problem here; it could just as easily
be that the XML parsing is linear but with a relatively large constant
factor (the constant is ignored in big-O notation, but it can still be
the limiting factor in practice).

You could write a script that generates random decks of 100, 500,
1000, 5000, 25000 cards etc. and times an import for each, to get a
quick estimate of how the runtime grows with deck size, rather than
spending possibly unnecessary time digging into the algorithm first. I
would if I knew any Python. :)
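Something along these lines might do it. This is only a sketch: it
invents a toy card schema (<item>/<Q>/<A>, not Mnemosyne's actual deck
format) and times xml.etree.ElementTree parsing as a stand-in for the
real import step, then estimates the growth exponent from the smallest
and largest runs.

```python
import math
import random
import string
import time
import xml.etree.ElementTree as ET

def make_deck(n):
    """Build an XML string with n <item> elements of random text.

    Hypothetical schema for benchmarking only, not Mnemosyne's real format.
    """
    items = []
    for i in range(n):
        q = ''.join(random.choices(string.ascii_lowercase, k=40))
        a = ''.join(random.choices(string.ascii_lowercase, k=40))
        items.append(f"<item id='{i}'><Q>{q}</Q><A>{a}</A></item>")
    return "<deck>" + "".join(items) + "</deck>"

def time_parse(xml_text):
    """Time one parse; swap in the real import call to test it instead."""
    start = time.perf_counter()
    ET.fromstring(xml_text)
    return time.perf_counter() - start

sizes = [100, 500, 1000, 5000, 25000]
times = [time_parse(make_deck(n)) for n in sizes]
for n, t in zip(sizes, times):
    print(f"{n:6d} cards: {t:.4f} s")

# Rough growth exponent k, assuming runtime ~ n^k: k near 1 means linear,
# near 2 means quadratic.
k = math.log(times[-1] / times[0]) / math.log(sizes[-1] / sizes[0])
print(f"runtime grows roughly like n^{k:.2f}")
```

If the exponent comes out close to 1, the parser is off the hook and
the superlinear cost is elsewhere in the import code.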

Oisín

--~--~---------~--~----~------------~-------~--~----~
You received this message because you are subscribed to the Google Groups 
"mnemosyne-proj-users" group.
To post to this group, send email to [email protected]
To unsubscribe from this group, send email to [EMAIL PROTECTED]
For more options, visit this group at 
http://groups.google.com/group/mnemosyne-proj-users?hl=en
-~----------~----~----~----~------~----~------~--~---