Greetings all,

I think I've finally found the *source* of the recursions etc. in the 
database compression.  Once 3.2.0b5 is out, I'll remove all the hacks 
to limit explicit recursion, and to keep the cache clean...

The problem is with the freelist of pages used when a compressed 
page is larger than a "real" page.  That freelist is part of the same 
environment as the rest of the database, and so shares the cache.  
That means that writing a page can itself access the cache, which 
may in turn require flushing dirty pages, and so on.
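To make the hazard concrete, here is a toy model of it (this is *not* ht://Dig or Berkeley DB code -- the class, page ids, and capacity are all made up for illustration): when the freelist lives in the same cache as the data pages, evicting a dirty page updates the freelist, which goes back through the same cache, which may evict again, without bound.

```python
# Toy model (illustrative only, not ht://Dig code): a freelist stored in
# the same page cache it helps manage can re-enter the cache during
# eviction, and the re-entry depth grows without bound.

class SharedCache:
    def __init__(self, capacity):
        self.capacity = capacity
        self.pages = {}          # page id -> dirty flag
        self.depth = 0           # current re-entrancy depth
        self.max_depth = 0       # deepest re-entry observed

    def put(self, page_id):
        self.depth += 1
        self.max_depth = max(self.max_depth, self.depth)
        if self.depth > 20:      # guard against runaway recursion
            raise RecursionError("cache re-entered while evicting")
        if len(self.pages) >= self.capacity and page_id not in self.pages:
            self.evict()
        self.pages[page_id] = True
        self.depth -= 1

    def evict(self):
        victim = next(iter(self.pages))
        del self.pages[victim]
        # Writing back a compressed page larger than a "real" page must
        # update the overflow freelist -- which lives in this same cache:
        self.put(("freelist", victim))

cache = SharedCache(capacity=4)
try:
    for i in range(30):
        cache.put(("data", i))
except RecursionError as e:
    print("recursed:", e)
```

Each new data page pushes the eviction one level deeper, so the guard eventually trips -- the same shape as the recursion hacks the hacks above were papering over.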

The solution seems to be simply to make it a "standalone" database.

Can anyone see any problems with that approach?  Do we need the 
environment for anything?
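In the same toy model, the "standalone database" fix amounts to giving the freelist its own cache (its own environment), so that flushing a data page can never re-enter the cache being flushed; again, the names here are invented for illustration, not taken from the real code:

```python
# Toy model of the proposed fix (illustrative only): data pages and the
# overflow freelist live in separate caches, so eviction write-backs
# cannot recurse.

class SplitCaches:
    def __init__(self, capacity):
        self.capacity = capacity
        self.data = {}           # data-page cache
        self.freelist = {}       # standalone freelist cache

    def put_data(self, page_id):
        if len(self.data) >= self.capacity:
            victim = next(iter(self.data))
            del self.data[victim]
            # The write-back still updates the freelist, but in its
            # *own* cache, so it cannot re-enter the data cache.
            self.put_freelist(("freelist", victim))
        self.data[page_id] = True

    def put_freelist(self, page_id):
        if len(self.freelist) >= self.capacity:
            # Evicting a freelist page touches only the freelist itself,
            # so recursion depth is bounded at one level.
            del self.freelist[next(iter(self.freelist))]
        self.freelist[page_id] = True

db = SplitCaches(capacity=4)
for i in range(30):
    db.put_data(("data", i))
print(len(db.data), len(db.freelist))  # prints: 4 4
```

Both caches stay bounded by their capacity, no matter how many pages are written.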

Cheers,
Lachlan

On Wed, 18 Jun 2003 22:36, Lachlan Andrew wrote:

> I've just come across a database bug :(  It was reporting
>   WordKey::Compare: key length for a or b < info.num_length
> repeatedly when I ran a large dig without  -i.
>
> I haven't tried repeating it yet, because the dig that produced it
> takes three days!!  (It uses a rather inefficient
> external_transport) I'll try to replicate it using a more
> manageable data set.


-- 
[EMAIL PROTECTED]
ht://Dig developer DownUnder  (http://www.htdig.org)


_______________________________________________
htdig-dev mailing list
[EMAIL PROTECTED]
https://lists.sourceforge.net/lists/listinfo/htdig-dev