[EMAIL PROTECTED] wrote:
>
> Is this file to be used with 3.1.9 sources, or 3.1.10? (Either is fine - I can
>adjust as necessary quite easily).
3.1.10
>
> Thanks for the fix. I have over a million urls inserted and climbing. :-)
>
OK. How does searching with site or tag limits work?
Is this file to be used with 3.1.9 sources, or 3.1.10? (Either is fine - I can adjust
as necessary quite easily).
Thanks for the fix. I have over a million urls inserted and climbing. :-)
-- Dan
On Thu, 15 Feb 2001, Alexander Barkov wrote:
> Dan, please take the new cache.c and recompile everything.
Dan, please take the new cache.c and recompile everything.
It should fix the problem.
[EMAIL PROTECTED] wrote:
>
> I just have to put in my encounters here, because they seem very similar. I get a
> large amount of information indexed, but upon trying to run splitter, it will core
> dump somewhere midway
i didn't get this error on my NetBSD/Alpha. compile was fine.
what system are you on?
--- Zenon Panoussis <[EMAIL PROTECTED]> wrote:
>
>
> Alexander Barkov wrote:
> >
>
> > We finally found a bug in cache.c. The new version is in the attachment.
> > Everybody who has problems with splitter's crashes is welcome to test.
Zenon Panoussis wrote:
>
> Oops. Something else is not OK:
> cache.c:687:87: warning: #ifdef with no argument
[etc]
I think that the mailer is responsible for this. There are
lots of broken lines in the code that shouldn't be broken.
Perhaps it's better to attach the file in .gz format instead.
Alexander Barkov wrote:
>
> We finally found a bug in cache.c. The new version is in the attachment.
> Everybody who has problems with splitter's crashes is welcome to test.
> Please, give feedback!
Oops. Something else is not OK:
cache.c:687:87: warning: #ifdef with no argument
cache.c:692:87: warning: #ifdef with no argument
--- Alexander Barkov <[EMAIL PROTECTED]> wrote:
> Hello!
>
> We finally found a bug in cache.c. The new version is in the attachment.
> Everybody who has problems with splitter's crashes is welcome to
> test.
should the 'tree' directory be removed? can we split the raw log files
we have thus far o
Zenon Panoussis wrote:
>
> Alexander Barkov wrote:
> >
>
> > We finally found a bug in cache.c. The new version is in the attachment.
> > Everybody who has problems with splitter's crashes is welcome to test.
> > Please, give feedback!
>
> You guys are great! I'll re-compile and get back to you with reports.
Alexander Barkov wrote:
>
> We finally found a bug in cache.c. The new version is in the attachment.
> Everybody who has problems with splitter's crashes is welcome to test.
> Please, give feedback!
You guys are great! I'll re-compile and get back to you with
reports.
BTW, can I remove http://se
>
> There were actually two bugs. The first one was because of table[4096]
> in sql.c. Sometimes it may become larger. Now dynamic realloc'ing
> has been added.
Oops. I meant in cache.c, not sql.c.
> The second one that cmpcache() function passed to qsort gave ordering
> slightly different with
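The second bug mentioned above concerns a comparator (cmpcache()) passed to qsort() whose ordering was subtly inconsistent. qsort() requires a comparator that imposes a consistent total order: the same pair must always compare the same way, and ties must be broken identically everywhere. A minimal sketch of a correctly written comparator — the Entry fields here are hypothetical, not the real cache.c structure:

```c
#include <stdlib.h>

/* Hypothetical cache entry; the real struct in cache.c differs. */
typedef struct {
    unsigned wrd_id;
    unsigned url_id;
} Entry;

/* Compare field by field, returning a strict -1/0/1 sign.
   Avoiding subtraction (x - y) also avoids overflow and the
   sign ambiguity it causes with unsigned or large values. */
static int cmp_entry(const void *a, const void *b) {
    const Entry *x = a, *y = b;
    if (x->wrd_id != y->wrd_id)
        return x->wrd_id < y->wrd_id ? -1 : 1;
    if (x->url_id != y->url_id)
        return x->url_id < y->url_id ? -1 : 1;
    return 0;
}
```

A comparator that violates these rules can make qsort() produce orderings that differ between runs or platforms, which matches the symptom described in the thread.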
> > ok. what exactly was the bug?
>
>
> There were actually two bugs. The first one was because of table[4096]
> in sql.c. Sometimes it may become larger. Now dynamic realloc'ing
> has been added.
if there was a bug in sql.c, should we update that too before
re-compiling and testing this out?
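The fix described above — replacing a fixed table[4096] with dynamic reallocation — follows a standard C growable-array pattern. A minimal sketch under hypothetical names (this is not the actual cache.c code):

```c
#include <stdlib.h>

/* Hypothetical growable table, illustrating the kind of fix described:
   a fixed-size array replaced by a buffer that realloc's as entries arrive. */
typedef struct {
    int    *items;
    size_t  used;
    size_t  alloc;
} Table;

/* Append one value, doubling the buffer whenever it fills up.
   Returns 0 on success, -1 on allocation failure. */
static int table_add(Table *t, int value) {
    if (t->used == t->alloc) {
        size_t nalloc = t->alloc ? t->alloc * 2 : 4096;
        int *nitems = realloc(t->items, nalloc * sizeof *nitems);
        if (!nitems)
            return -1;
        t->items = nitems;
        t->alloc = nalloc;
    }
    t->items[t->used++] = value;
    return 0;
}
```

Unlike a fixed `int table[4096]`, this cannot overflow when more than 4096 entries show up; it only fails if realloc() itself fails, which the caller can check.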
Caffeinate The World wrote:
>
> --- Alexander Barkov <[EMAIL PROTECTED]> wrote:
> > Caffeinate The World wrote:
> > >
> > > --- Alexander Barkov <[EMAIL PROTECTED]> wrote:
> > > > Hello!
> > > >
> > > > We finally found a bug in cache.c. The new version is in the attachment.
> > > > Everybody who has problems with splitter's crashes is welcome to test.
--- Alexander Barkov <[EMAIL PROTECTED]> wrote:
> Caffeinate The World wrote:
> >
> > --- Alexander Barkov <[EMAIL PROTECTED]> wrote:
> > > Hello!
> > >
> > > We finally found a bug in cache.c. The new version is in the attachment.
> > > Everybody who has problems with splitter's crashes is welcome to test.
Caffeinate The World wrote:
>
> --- Alexander Barkov <[EMAIL PROTECTED]> wrote:
> > Hello!
> >
> > We finally found a bug in cache.c. The new version is in the attachment.
> > Everybody who has problems with splitter's crashes is welcome to
> > test.
>
> should the 'tree' directory be removed? can we split the raw log files
Hello!
We finally found a bug in cache.c. The new version is in the attachment.
Everybody who has problems with splitter's crashes is welcome to test.
Please, give feedback!
#include "udm_config.h"
#include
#include
#include
#include
#ifdef HAVE_UNISTD_H
#include <unistd.h>
#endif
#include
#include
#in
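The mangled fragment above lost most of its angle-bracketed header names in transit (the thread itself blames the mailer for this). The surviving `#ifdef HAVE_UNISTD_H` guard is the usual autoconf pattern: the configure script defines HAVE_UNISTD_H in udm_config.h where the header exists. A self-contained illustration of the same pattern, independent of the real sources:

```c
/* Illustration of the autoconf include-guard pattern quoted above.
   In the real project HAVE_UNISTD_H would be defined (or not) by the
   configure script via udm_config.h; here it depends only on whether
   the compiler was invoked with -DHAVE_UNISTD_H. */
#include <stdio.h>

#ifdef HAVE_UNISTD_H
#include <unistd.h>   /* only pulled in where configure found it */
#endif

/* Report which branch was compiled in. */
static const char *unistd_status(void) {
#ifdef HAVE_UNISTD_H
    return "unistd.h available";
#else
    return "unistd.h not available";
#endif
}
```

The point of the guard is that the same source file compiles on systems with and without the header, which is why stripping the macro arguments (as the mailer did) produces the "#ifdef with no argument" warnings quoted earlier in the thread.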
I just have to put in my encounters here, because they seem very similar. I get a
large amount of information indexed, but upon trying to run splitter, it will core
dump somewhere midway through, and on one round left weird directories in the $VAR/raw
directory:
[root@spider raw]# ls -al
total
Zenon Panoussis wrote:
>
> By now, I have almost 1 GB of indexed files, 4 indexer
> crashes and one splitter crash. I'll do the debugging and
> post its output tomorrow.
===
# gdb indexer core.indexer.01
GNU gdb 5.0
Copyright 2000 Free Software Foundation, Inc.
GDB is free sof
>
> >
> > The only disadvantage is that it will not work on huge
> > search engines with millions of documents. There is a limit on the total
> > number of files on the file system in most Unixes.
> > For example, my 30G /usr partition on a FreeBSD box can create about 8
> > mln files.
>
> is that a per f
>
> couldn't you do something like mount multiple FS:
>
> sd0a /data/part1
> sd1a /data/part2
> ...
> sdna /data/partn
>
> wouldn't that work?
>
this will work.
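The limit discussed above is the filesystem's inode count, not disk space: each cache file consumes one inode, and a filesystem is created with a fixed number of them. On most Unixes you can query the limit with statvfs(3). A minimal sketch:

```c
#include <stdio.h>
#include <sys/statvfs.h>

/* Report how many more files (inodes) the filesystem holding `path`
   can still create. Returns the number of free inodes, or 0 on error. */
static unsigned long free_inodes(const char *path) {
    struct statvfs vfs;
    if (statvfs(path, &vfs) != 0) {
        perror("statvfs");
        return 0;
    }
    printf("%s: %lu of %lu inodes free\n",
           path, (unsigned long)vfs.f_ffree, (unsigned long)vfs.f_files);
    return (unsigned long)vfs.f_ffree;
}
```

Checking the partition that holds the tree directory this way shows in advance whether "millions of documents" will exhaust inodes before it exhausts disk space, which is exactly the failure mode (and the multi-mount workaround) discussed above.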
__
If you want to unsubscribe send "unsubscribe udmsearch"
to [EMAIL PROTECTED]
--- Alexander Barkov <[EMAIL PROTECTED]> wrote:
> Caffeinate The World wrote:
> > > The only disadvantage is that it will not work on huge
> > > search engines with millions of documents. There is a limit on the total
> > > number of files on the file system in most Unixes.
> > > For example, my 30G /usr pa
Caffeinate The World wrote:
> > The only disadvantage is that it will not work on huge
> > search engines with millions of documents. There is a limit on the total
> > number of files on the file system in most Unixes.
> > For example, my 30G /usr partition on a FreeBSD box can create about 8
> > mln files
--- Alexander Barkov <[EMAIL PROTECTED]> wrote:
> Alexander Barkov wrote:
> >
> > > i completely forgot about this feature!!! i read about it when i first
> > > started using mnogosearch, but never bothered to use it.
> > >
> > > with mirror feature, wouldn't it be easy to implement Google's
Alexander Barkov wrote:
>
> > i completely forgot about this feature!!! i read about it when i first
> > started using mnogosearch, but never bothered to use it.
> >
> > with mirror feature, wouldn't it be easy to implement Google's "cache"
> > feature where the user can view a cache of the page
> i completely forgot about this feature!!! i read about it when i first
> started using mnogosearch, but never bothered to use it.
>
> with mirror feature, wouldn't it be easy to implement Google's "cache"
> feature where the user can view a cache of the page from the last time
> you indexed.
I
--- Alexander Barkov <[EMAIL PROTECTED]> wrote:
> Zenon Panoussis wrote:
> >
> > Caffeinate The World wrote:
> > >
> >
> > > i've been going through this and back again time and time again. what
> > > would really be nice is if indexer saved the logs in a format that's easy
> > > to use again.
Zenon Panoussis wrote:
>
> Caffeinate The World wrote:
> >
>
> > i've been going through this and back again time and time again. what
> > would really be nice is if indexer saved the logs in a format that's easy
> > to use again. for instance, you could use the format to re-index to sql etc.
>
> > or i
Caffeinate The World wrote:
>
> i've been going through this and back again time and time again. what
> would really be nice is if indexer saved the logs in a format that's easy
> to use again. for instance, you could use the format to re-index to sql etc.
> or if you want to reindex again, you don't
i've been going through this and back again time and time again. what
would really be nice is if indexer saved the logs in a format that's easy
to use again. for instance, you could use the format to re-index to sql etc.
or if you want to reindex again, you don't have to crawl through all
the external web
Zenon Panoussis wrote:
>
> Now for 31 MB adventures :)
# ./run-splitter -k
Sending -HUP signal to cachelogd...
Done
# ./run-splitter -p
Preparing logs...
Open dir '/var/mnogo3110/raw'
Preparing word log 982024900 [ 42176 bytes]
Preparing word log 982027284 [31465324 bytes]
Prepar
Alexander Barkov wrote:
>
> Could you check the count, j, w, table[w], and logwords[count+j]
> variable values? Use the gdb print command.
AAARGH! I deleted the core dump. I didn't know that I could do
that :(
Z
--
oracle@everywhere: The ephemeral source of the eternal truth...
Could you check the count, j, w, table[w], and logwords[count+j]
variable values? Use the gdb print command.
Zenon Panoussis wrote:
>
> Zenon Panoussis wrote:
> >
>
> > And a really HARD hang at the same place as before. So hard
> > that I can't even kill splitter.
>
> BTW, although I couldn't kill splitter, I did find a core dump
Zenon Panoussis wrote:
>
> I'll delete the entire tree directory and start re-indexing from
> scratch. I'll make and split a small file first, ca 5 MB, then a
> 31 MB file, if that works yet another 31 MB file, and so on until
> I get in problems again. Will report back later this evening.
Fi
Zenon Panoussis wrote:
>
> And a really HARD hang at the same place as before. So hard
> that I can't even kill splitter.
BTW, although I couldn't kill splitter, I did find a core dump
in sbin. Here's the backtrace:
# gdb splitter core
GNU gdb 5.0
This GDB was configured as "i386-red
Caffeinate The World wrote:
>
> in my tests your 3 little files wouldn't make a difference. he would
> have to run splitter -p and splitter on all the files starting from the
> first original RAW file, including all the 31 MB file. i believe in my
> case it was the original 31mb file which cau
--- Alexander Barkov <[EMAIL PROTECTED]> wrote:
> Zenon Panoussis wrote:
> >
> > Alexander Barkov wrote:
> > >
> >
> > > Could you please put the zipped /var/mnogo319/tree/12/B/12BFD000 and
> > > a /splitter/XXX.wrd file with the corresponding XXX.del which produce the
> > > crash somewhere on the net?
> >
Zenon Panoussis wrote:
>
> Alexander Barkov wrote:
> >
>
> > Could you please put the zipped /var/mnogo319/tree/12/B/12BFD000 and
> > a /splitter/XXX.wrd file with the corresponding XXX.del which produce the
> > crash somewhere on the net?
>
> http://search.freewinds.cx/logs/logs.tar.gz
>
There is no crash
Zenon Panoussis wrote:
>
> Alexander Barkov wrote:
> >
>
> > Could you please put the zipped /var/mnogo319/tree/12/B/12BFD000 and
> > a /splitter/XXX.wrd file with the corresponding XXX.del which produce the
> > crash somewhere on the net?
>
> http://search.freewinds.cx/logs/logs.tar.gz
>
It does not crash
Alexander Barkov wrote:
>
> > http://search.freewinds.cx/logs/logs.tar.gz
> Not Found
I'm senile. It's fixed (the 404, not the senility ;)
Z
--
oracle@everywhere: The ephemeral source of the eternal truth...
did you try the 60 MB file I emailed you the URL for earlier, Alex?
--- Alexander Barkov <[EMAIL PROTECTED]> wrote:
> Zenon Panoussis wrote:
> >
> > Alexander Barkov wrote:
> > >
> >
> > > Could you please put the zipped /var/mnogo319/tree/12/B/12BFD000 and
> > > a /splitter/XXX.wrd file with the corresponding
Zenon Panoussis wrote:
>
> Alexander Barkov wrote:
> >
>
> > Could you please put the zipped /var/mnogo319/tree/12/B/12BFD000 and
> > a /splitter/XXX.wrd file with the corresponding XXX.del which produce the
> > crash somewhere on the net?
>
> http://search.freewinds.cx/logs/logs.tar.gz
Not Found
The req
Alexander Barkov wrote:
>
> Could you please put the zipped /var/mnogo319/tree/12/B/12BFD000 and
> a /splitter/XXX.wrd file with the corresponding XXX.del which produce the
> crash somewhere on the net?
http://search.freewinds.cx/logs/logs.tar.gz
Z
--
oracle@everywhere: The ephemeral source of the eternal truth...
Could you please put the zipped /var/mnogo319/tree/12/B/12BFD000 and
a /splitter/XXX.wrd file with the corresponding XXX.del which produce the
crash somewhere on the net?
Zenon Panoussis wrote:
>
> Alexander Barkov wrote:
> >
>
> > Can you guys give us a log file produced by splitter -p which caused
>
in my tests your 3 little files wouldn't make a difference. he would
have to run splitter -p and splitter on all the files starting from the
first original RAW file, including all the 31 MB file. i believe in my
case it was the original 31mb file which caused the problem.
while processing the fi
Alexander Barkov wrote:
>
> Can you guys give us a log file produced by splitter -p which caused the
> crash? We can't reproduce the crash :-(
Huh? splitter doesn't accept the -v5 argument, so it won't give
more detailed logs than the normal ones. The only log I had, the one
to stdout, is the one I in
Hi!
Can you guys give us a log file produced by splitter -p which caused the
crash? We can't reproduce the crash :-(
Caffeinate The World wrote:
>
> i reported this problem a while back. i believe it's being worked on.
> at least they recently found the bug explaining why it wasn't splitting out to FFF.
> the se
i reported this problem a while back. i believe it's being worked on.
at least they recently found the bug explaining why it wasn't splitting out to FFF.
the seg fault happens during the splitter process and not indexing. i've
been running splitter when the logs are at about < 2 MB and i've not had
splitter core dump on