Hmm... that's exactly what prompted the question. I have a database with 4030 entries, but on one machine it generates a hash view with 4096 entries and on the other with 8192. Is it checking whether the database fits in memory?
Just infinitely curious:
------------ Machine 1: 512KB RAM, database size: 673725

mk4dump ~/.zinf/db/metadb | fgrep VIEW
VIEW 1 rows = dbview:V dbview_H1:V
VIEW 4030 rows = url:S type:S title:S artist:S album:S genre:S comment:S track:S year:S
VIEW 4097 rows = _H:I _R:I
---------------------------------- Machine 2: 2GB RAM, database size: 689683

VIEW 1 rows = dbview:V dbview_H1:V
VIEW 4030 rows = url:S type:S title:S artist:S album:S genre:S comment:S track:S year:S
VIEW 8193 rows = _H:I _R:I
No, "fits in memory" is not considered.

What does matter is the order of adds and deletes. Space is reclaimed and re-used only when the fill ratio drops too low - there is hysteresis. In other words, the same number of rows can end up with a different hash table size depending on how entries were added and deleted along the way.
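To illustrate the point about hysteresis, here is a toy model (not Metakit's actual code - the class name and the grow/shrink thresholds are invented for illustration) of a hash table that grows eagerly but shrinks only when the fill ratio falls well below capacity. Two histories ending at the same row count produce different table sizes:

```python
class HysteresisTable:
    """Toy hash-table sizing with hysteresis (illustrative, not Metakit's)."""

    def __init__(self):
        self.capacity = 8   # assumed minimum capacity
        self.count = 0

    def add(self, n=1):
        self.count += n
        # Grow eagerly: double whenever the row count exceeds capacity.
        while self.count > self.capacity:
            self.capacity *= 2

    def delete(self, n=1):
        self.count -= n
        # Shrink lazily: halve only when fill drops below 1/4 of capacity.
        while self.capacity > 8 and self.count < self.capacity // 4:
            self.capacity //= 2

# History A: 4030 rows added straight away.
a = HysteresisTable()
a.add(4030)

# History B: 8000 rows added, then deleted back down to 4030.
b = HysteresisTable()
b.add(8000)
b.delete(8000 - 4030)

print(a.count, a.capacity)   # 4030 rows in a 4096-slot table
print(b.count, b.capacity)   # 4030 rows in an 8192-slot table
```

With these (assumed) thresholds, history A settles at capacity 4096 and history B at 8192 for the same 4030 rows - the same kind of divergence seen in the two dumps above.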
Same build order, different platform? (If so, there could be a bug.)
-jcw
_______________________________________________
metakit mailing list - [EMAIL PROTECTED]
http://www.equi4.com/mailman/listinfo/metakit
