Re: Reading directories faster
Thanks all!
It _was_ the 'dir? portion that was slowing things down.
'dir? uses 'info?, which uses 'query, so, as mentioned, there was a disk read on every
file.
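For contrast, here is a minimal sketch of the slow pattern being replaced (my reconstruction, not the exact original code): each 'dir? test goes through 'info? and 'query, so there is one disk read per entry.

```rebol
; Old approach: one disk hit per directory entry,
; because 'dir? -> 'info? -> 'query touches the disk each time.
dirs: make block! 16
files: make block! 16
foreach f read %./ [
    append either dir? f [dirs] [files] f  ; 'query runs here, per file
]
```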
I am currently using ...
either none? df: attempt [read %./] [
    tell directory "Error reading Directory"
][
    forall df [insert either #"/" = last df/1 [dirs] [files] df/1]
    ; either dir? df/1 ; old method
]
This is vastly faster than the old 'dir? method!
I do pre-set the size of the dirs and files blocks, but before moving to a new
directory I 'clear them rather than re-creating them ("clear dirs" instead of
"dirs: make block! 16"). They then grow dynamically only when I visit a larger
directory than any seen so far, so they never grow beyond what the largest
directory requires - no wasted memory allocation.
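The reuse pattern can be sketched like this (scan-dir is a hypothetical helper name, not from the original code):

```rebol
dirs:  make block! 16    ; pre-sized once, reused for every directory
files: make block! 16
scan-dir: func [dir [file!]] [
    clear dirs           ; drop contents, keep allocated capacity
    clear files
    foreach f read dir [
        append either #"/" = last f [dirs] [files] f
    ]
]
```

Since 'clear only resets the series tail, the blocks keep their capacity and reallocate only when a directory exceeds the largest one visited so far.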
Ashley, your "left field" sort works fine. It puts the directories at the head
of the output, but I need them separated from the files. It is also a neat way
to group files by extension!
probe sort/compare read %. func [a b] [
    either (last a) = (last b) [
        a < b
    ][
        (last a) < (last b)
    ]
]