--- Alexander Barkov <[EMAIL PROTECTED]> wrote:
> No, unfortunately we haven't found bug yet :-(
> Your last debug information should help.

it appears i was right about size_t causing the problem on Alpha.
again, sizeof(size_t) on the Alpha is 8, while on i386 it's 4. i'm not
exactly sure what contributed to the coredump, but my feeling is that
because of the difference in size, cachelogd wrote the wrong records to
disk only for certain words. the size difference also affects splitter.
with this bug, record 77C.log always had the largest size; again, i'm
not sure why. while most other records were less than 10K for each
splitter run, 77C.log ranged from 100K to over 400K.

i started over from scratch by re-indexing everything. i tried using
the changed splitter and cachelogd on the existing ./var/tree data, but
that caused more core dumps, not at 77C but at other locations. i believe
the existing data were tainted, so when splitter checked them during its
comparison or delete passes, it core dumped.

but before that i made some changes to "cache.c" and "cachelogd.c". in
"cache.c" i replaced all occurrences of "size_t" with "u_int32_t". for
"cachelogd.c" i replaced all "size_t" with "unsigned int". please note
that replacing "size_t" with "u_int32_t" in "cachelogd.c" will result
in an extremely high and ever-increasing server load: mine went from 1
to over 36 after trying that.

after erasing ./var/tree (is there a faster way than rm -rf?) and
starting the new cachelogd, i started indexer. i've been running it for
3 days, i've used splitter 4 times, and i've yet to get a core dump.
i've tested this on raw data of about 2 MB or less. i've not let it
climb to 30 MB like before. i'll do that soon, but the indexing
process is extremely slow (4 indexers running, not threaded). maybe
it's because of the 1/2 million expired urls in pgsql's db.
> 
> Caffeinate The World wrote:
> > 
> > hi alex,
> > 
> > could you let me know if you found anything and if you
> > have a patch for 3.1.9pre13. i have indexers still
> > going, just building up files, and i can't run splitter
> > on those large files unless i attend to the computer and
> > watch the size of the logs. thanks.
> > 
> > --- Alexander Barkov <[EMAIL PROTECTED]> wrote:
> > > We are trying to discover this bug now.
> > >
> > >
> > > Caffeinate The World wrote:
> > > >
> > > > mnogosearch 3.1.9-pre13, pgsql 7.1-current,
> > > > netbsd/alpha 1.5.1-current
> > > >
> > > > running cachemode. i've been indexing and splitter-ing
> > > > just fine. 'til today when after an overnight of
> > > > indexers running and gathering up a log file of over
> > > > 31 MB, cachelogd automatically started a new log file.
> > > >
> > > > i ran 'splitter -p' on that 31 MB log file. it was
> > > > split up just fine. then i ran 'splitter' and it core
> > > > dumped almost half way thru.
> > > > <cut>
> > > > ...
> > > > Delete from cache-file /usr/local/install/mnogosearch-3.1.9/var/tree/77/B/77BE1000
> > > > Delete from cache-file /usr/local/install/mnogosearch-3.1.9/var/tree/77/B/77BE2000
> > > > /usr/local/install/mnogosearch-3.1.9/var/tree/77/C/77C15000
> > > > old:   2 new:   4 total:   6
> > > > /usr/local/install/mnogosearch-3.1.9/var/tree/77/C/77C23000
> > > > old:   0 new:   1 total:   1
> > > > /usr/local/install/mnogosearch-3.1.9/var/tree/77/C/77C2B000
> > > > old:   0 new:   2 total:   2
> > > > /usr/local/install/mnogosearch-3.1.9/var/tree/77/C/77C2E000
> > > > old:   0 new:   1 total:   1
> > > > /usr/local/install/mnogosearch-3.1.9/var/tree/77/C/77C2F000
> > > > old:   1 new:   1 total:   2
> > > > /usr/local/install/mnogosearch-3.1.9/var/tree/77/C/77C30000
> > > > old:27049 new:13718 total:40767
> > > > Segmentation fault - core dumped
> > > > </cut>
> > > >
> > > > here is the backtrace:
> > > >
> > > > <cut>
> > > > ...
> > > > #0  0x120018c44 in UdmSplitCacheLog (log=
> > > > Cannot access memory at address 0x121f873bc.
> > > > ) at cache.c:591
> > > > 591          table[header.ntables].pos=pos;
> > > > (gdb) bt
> > > > #0  0x120018c44 in UdmSplitCacheLog (log=
> > > > Cannot access memory at address 0x121f873bc.
> > > > ) at cache.c:591
> > > > warning: Hit heuristic-fence-post without finding
> > > > warning: enclosing function for address
> > > > 0xc712f381000470e1
> > > > </cut>
> > > >
> > > > sorry i don't think i compiled splitter with debug
> > > > flag on so i don't have much more info.
> > > >
> > > > here are the file sizes:
> > > >
> > > > -rw-r--r--  1 root  wheel   48888 Jan 14 10:56 77A.log
> > > > -rw-r--r--  1 root  wheel   11732 Jan 14 10:56 77B.log
> > > > -rw-r--r--  1 root  wheel  465360 Jan 14 10:56 77C.log
> > > >                            ^^^^^^
> > > > -rw-r--r--  1 root  wheel   73696 Jan 14 10:56 77D.log
> > > > -rw-r--r--  1 root  wheel   22764 Jan 14 10:56 77E.log
> > > >
> > > > notice 77C.log, that's where it core dumped. it's
> > > > unusually large.
> > > >
> > > > i think there is a bug in splitter. how do i continue
> > > > with the splitter process at this point so that
> > > > 77C.log and others get processed?

