>Well, my first suggestion would be not to do a linear search for free
>space. Keep all the free blocks in a balanced tree, sorted by size. You can
>then quickly search for a block of the proper size. And if the search is
>still too slow, though it won't be, do some hash-tabling to find a start
>point in the tree.

Anthony,

 I wasn't yet at that point, but I'm getting there.
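
 For illustration, the size-sorted free list you describe could look
roughly like this (just a sketch; the names are invented, not any actual
API):

#include <cstdint>
#include <map>
#include <optional>

// Sketch of a free list keyed by block size, so finding the smallest free
// block that fits a request is O(log n) instead of a linear scan.
class FreeList {
public:
    void AddFree(uint32_t offset, uint32_t size) {
        mBySize.insert({size, offset});
    }

    // Returns the offset of the smallest free block of at least `size`
    // bytes, or nothing if no block is big enough. The caller re-adds any
    // leftover space as a new, smaller free block.
    std::optional<uint32_t> TakeBestFit(uint32_t size) {
        auto it = mBySize.lower_bound(size);
        if (it == mBySize.end())
            return std::nullopt;
        uint32_t offset = it->second;
        mBySize.erase(it);
        return offset;
    }

private:
    std::multimap<uint32_t, uint32_t> mBySize; // size -> file offset
};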

>Adjacent free blocks should be combined into a single free block.

 I've been thinking about this, too. But we'd need to decide when to keep
excess space in a block because we'll still be streaming to it, and when to
split the block so the excess space can be put to good use.
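
 One simple way to express that trade-off (again only a sketch; the
threshold and names are made up):

#include <cstdint>

// Hypothetical policy constant: only split off the excess if it is big
// enough to be worth tracking as a separate free block. Smaller leftovers
// stay attached to the allocated block as slack for streaming into it.
const uint32_t kMinSplitSize = 64;

// Returns true if a free block of freeSize bytes should be split after
// satisfying a request for wantedSize bytes.
inline bool ShouldSplit(uint32_t freeSize, uint32_t wantedSize) {
    return (freeSize - wantedSize) >= kMinSplitSize;
}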

>Optimize it as we'll need it: Getting objects of the current card (or
>background) should be fast, and should not require searching through all of
>the objects of that type. It should also be fast to iterate through all
>objects. And lookup by name, id, and number must also be fast.

 I'm planning to have it keep track of the last block and last branch
accessed and check against those first, to speed up repeated accesses to
the most recently used blocks (along the lines of a cache).
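
 In its simplest form that would be a one-entry cache checked before the
real lookup (a sketch with invented names, not the real code):

#include <cstdint>
#include <vector>

struct BlockInfo {
    uint32_t id = 0;
    uint32_t offset = 0;
    uint32_t size = 0;
};

// One-entry "most recently used" cache: before searching the block index,
// check whether the requested block is the one we touched last, so repeated
// accesses to the same block skip the search entirely.
class BlockIndex {
public:
    void AddBlock(const BlockInfo& block) {
        mBlocks.push_back(block);
        mLast = nullptr; // push_back may reallocate, so drop the cached pointer
    }

    const BlockInfo* FindBlock(uint32_t id) {
        if (mLast && mLast->id == id)
            return mLast; // cache hit, no search needed
        for (const BlockInfo& block : mBlocks) { // slow path: linear scan
            if (block.id == id) {
                mLast = &block;
                return mLast;
            }
        }
        return nullptr;
    }

private:
    std::vector<BlockInfo> mBlocks;
    const BlockInfo* mLast = nullptr;
};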

>Profile the thing with a LOT of blocks (say, 100,000 -- a million would be
>better), doing random operations with blocks of random size, and not
>compacting often, and see where the time is spent. Focus on those parts
>first. I've suggested the above because I'm fairly sure you're going to be
>spending lots of time with free-block searches.

 I did that, and I also found this problem. But I have to admit that my
Profiler doesn't work right now (some day I'll have to read the manual to
find out how I'm supposed to use that new one), so I just made an educated
guess.

>Also, if the file is small, read the whole thing in. It's silly to be
>paging an 80K file, for example. On some OS's, it won't matter (Linux, for
>example), because they do real disk caching -- and have fast calls. But on
>the MacOS, it does.

 Of course, this should be done only if RAM isn't currently tight, and
there should be a way to purge unused parts of even small files should
memory become tight. I'd add a "preload" call that we could use for files
we consider small.
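
 Something along those lines, with the size threshold and all the names
invented purely for the sake of the example, might look like this:

#include <cstddef>
#include <cstdint>
#include <fstream>
#include <string>
#include <vector>

// Hypothetical preload helper: files below a size threshold are read into
// memory in one go, and the buffer can be dropped again if memory gets tight.
class FileCache {
public:
    bool Preload(const std::string& path, std::size_t maxSize = 128 * 1024) {
        std::ifstream file(path, std::ios::binary | std::ios::ate);
        if (!file)
            return false;
        std::size_t size = static_cast<std::size_t>(file.tellg());
        if (size > maxSize)
            return false; // too big, keep paging it instead
        mData.resize(size);
        file.seekg(0);
        file.read(reinterpret_cast<char*>(mData.data()),
                  static_cast<std::streamsize>(size));
        return static_cast<bool>(file);
    }

    void Purge() { mData.clear(); mData.shrink_to_fit(); } // free the buffer

private:
    std::vector<uint8_t> mData;
};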

>Ever wondered why you can insert a character before three megs of
>characters in a word processor and not deal with waiting for it to
>reallocate and move three megs of data? Because it, like I hope you'd do,
>is willing to fragment things. It cleans up at idle time. Something like
>this can be done for streamed blocks:

 I'd add things like this once we have a decently working block file
format. This falls under "additional optimizations" in my book. We need
some simple, usable basic components first. Then, once everything more or
less works, we can go around optimizing like this.

>Also, if someone starts streaming to a new block (or a small block), you
>could ask for an "expected final size" and place it somewhere where that
>much room is available -- at the end of the file, for example.

 This is already supported. See the FluffBlock() method.
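
 (I won't paste FluffBlock() itself here, but the general idea, sketched
with invented names rather than the actual code, is to reserve the expected
size up front, at the end of the file for example, so streaming never has
to relocate the block:)

#include <cstdint>

// Hypothetical reservation: given the caller's expected final size, place
// the new block at the current end of the file and grow the logical file
// size so that much contiguous room is set aside for streaming.
struct StreamReservation {
    uint32_t offset;   // where the block was placed
    uint32_t reserved; // how much room was set aside
};

StreamReservation ReserveForStreaming(uint32_t& fileSize, uint32_t expectedSize) {
    StreamReservation reservation = { fileSize, expectedSize };
    fileSize += expectedSize; // the writer streams into [offset, offset + reserved)
    return reservation;
}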

>I hope I've given you enough ideas to keep you busy until ResCraft is out :)

I'll be off on holidays next week, so you have a bit more time to get
ResCraft into a state where it'll work.

Cheers,
-- M. Uli Kusterer

------------------------------------------------------------
             http://www.weblayout.com/witness
       'The Witnesses of TeachText are everywhere...'

--- HELP SAVE HYPERCARD: ---
Details at: http://www.hyperactivesw.com/SaveHC.html
Sign: http://www.giguere.uqam.ca/petition/hcpetition.html
