Ben Rubinstein wrote:
I'm converting an old HyperCard stack for a client - it's a classic
HC-as-a-single-table-DB job, with a bit of interesting functionality. But it's big
- slightly over 38,000 cards, all with one background; about 30 MB on disk.

It's _very_ slow to do various things - but possibly that's only in the IDE (clients would use the stack with StackRunner or similar).

Not entirely - the stack will be slow anywhere, and some IDE functions will be particularly unusable.

 But doing anything with it in the IDE is a pain.

Yeah, don't open the Application Browser. And whatever you do, don't try to use the Object menu. :) On mousedown, that menu scans through all the cards looking for groups to populate its group-related items. In a stack that big, you could wait a very long time for the menu to appear. Rev will appear to hang, though it hasn't really.

I'm sure that there is wisdom in the community about this! Would storing the
data as stack properties and retrieving it into and out of a single card be
faster than leaving Rev to handle it as card data?  (The data is all
unstyled text, btw - half a dozen or so small fields, a couple of largeish
ones.)  Or is there some magic trick to make Rev handle stacks of this kind
more efficiently? Or is it something to do with importing from HC? Or is it an IDE issue?

I don't think there are any tricks to speed it up, and it isn't an import or IDE issue; it's just that the Rev engine isn't optimized the way HC was. And since the stack uses the find command a lot, it will be very slow to locate text as well. HC had patented "hint bits" that made disk-based searching very fast, but Rev's engine literally has to look through all the field content on every card to find anything. It doesn't have the automatic indexing HC had. So using "find" on a stack of this size isn't going to be very productive.

If I do move the data into custom properties and have a single card for editing, are there known benchmarks - disk space, memory usage, speed - comparing one property for each 'field', or a single property with '2d' keys, or those fancy new-fangled 2d arrays?

I don't know of any formal benchmarking. In general, arrays that live in RAM are very fast, but custom properties work almost as well too. Either would be fine, I'm sure. I did a similar thing using text files on disk (read into RAM on startup) and those were very fast too. That stack had about 40,000 cards with a single background, consisting of a couple dozen fields. It sounds pretty similar to your setup.
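As a rough sketch of the "single property with 2d keys" approach mentioned above - the property name (cData) and handler name are invented for illustration, and this assumes an array-valued custom property works for your data sizes - a one-time conversion could look something like:

```
-- Sketch only: walk every card once, storing each field's text in an
-- in-RAM array keyed by "cardNumber,fieldName", then save the whole
-- array as a custom property of the stack.
on buildDataProperty
  repeat with i = 1 to the number of cards
    repeat with f = 1 to the number of fields of card i
      put the short name of field f of card i into tName
      put field f of card i into tData[i, tName]  -- key is really "i,tName"
    end repeat
  end repeat
  set the cData of this stack to tData
end buildDataProperty
```

Your single editing card would then read the array back with something like "put the cData of this stack into tData" and fill its fields from tData[tRecordNum, "Surname"] and so on.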

The main issue you'll run across is that the stack relies on "find", and when the data is no longer in fields you can't use that command. You'll have to write your own handlers to search the data wherever it's stored.
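A replacement for "find" can be as simple as walking the stored data. This is a hedged sketch, assuming the records ended up in an array-valued custom property keyed by "cardNumber,fieldName" (cData and the function name are invented):

```
-- Sketch of a hand-rolled search over an array-valued custom property.
-- Returns a return-delimited list of record numbers whose text contains
-- pText (a record may appear once per matching field).
function findInData pText
  put the cData of this stack into tData
  repeat for each key tKey in tData
    if tData[tKey] contains pText then
      put item 1 of tKey & return after tHits  -- item 1 is the record number
    end if
  end repeat
  return char 1 to -2 of tHits  -- drop the trailing return
end findInData
```

You'd then jump to "line 1 of findInData(tSearchText)" in place of the old find behavior; it won't be as fast as HC's indexed find, but on in-RAM data it should be acceptable.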

--
Jacqueline Landman Gay         |     jac...@hyperactivesw.com
HyperActive Software           |     http://www.hyperactivesw.com
_______________________________________________
use-revolution mailing list
use-revolution@lists.runrev.com
Please visit this url to subscribe, unsubscribe and manage your subscription 
preferences:
http://lists.runrev.com/mailman/listinfo/use-revolution
