I'm getting this same error with the Red Hat htdig-3.2.0-2.011302 RPM.
htdig just spews out this error and gobbles up memory (and wakes me up
in the middle of the night by setting off my pager to warn me that the
memory on the server is disappearing fast!).

The error occurs consistently when htdig is run with the same configuration
file, but it does not recur when I run htdig on just the last URL listed
by -v before the error spewing starts.  I've attached the relevant htdig
configuration file (it indexes a public site, so you are free to run it
yourself).

We did not get this error before upgrading to the version above, but I 
don't want to downgrade and lose the security fixes.

-hal

****
FROM: Gilles Detillieux
DATE: 03/26/2002 09:30:20
SUBJECT: RE: [htdig] WordKey::Compare: key length for a or b < info.num_length



According to Chad Phillips:
 > I have been trying 3.2.0b4-20020217, for the most part it works fine.
 > But when I dig one site I keep getting this error part way through the
 > dig.
 >
 > WordKey::Compare: key length for a or b < info.num_length
 > WordKey::Compare: key length for a or b < info.num_length
 > WordKey::Compare: key length for a or b < info.num_length
 > .....
 >
 > It is like it is stuck in a loop at that point and I have to kill the
 > process.  I ran it a few times and it seems to get stuck around the same
 > url each time.  Any ideas?

These errors have been reported before, but they've always been too
elusive to nail down.  If you can narrow the problem down to a specific
URL, and the problem still occurs if you index just that one URL, please
let us know, as that would give us something solid to go on.
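
If you want a quick isolation test, a stripped-down configuration along
these lines is one way to dig a single suspect URL (only a sketch: the
file name, throwaway database directory and URL below are placeholders,
and it assumes the usual -c, -v and -i options of htdig 3.x):

    # one-url-test.conf -- hypothetical minimal config for isolating one URL
    # use a throwaway database directory so the real index is untouched
    database_dir:   /tmp/htdig-test-db
    start_url:      http://www.example.org/suspect-page.html
    limit_urls_to:  ${start_url}

Running something like "htdig -i -v -c one-url-test.conf" should then show
whether the WordKey::Compare errors come back on that one document.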

We're hoping that the upcoming mifluz code update in htdig will solve
some of these weird, intermittent word database problems.

-- 
Gilles R. Detillieux              E-mail: <<EMAIL: PROTECTED>>
Spinal Cord Research Centre       WWW:    http://www.scrc.umanitoba.ca/~grdetil
Dept. Physiology, U. of Manitoba  Phone:  (204)789-3766
Winnipeg, MB  R3E 3J7  (Canada)   Fax:    (204)789-3930

_______________________________________________
htdig-general mailing list <<EMAIL: PROTECTED>>
To unsubscribe, send a message to <<EMAIL: PROTECTED>>
with a subject of unsubscribe
FAQ: http://htdig.sourceforge.net/FAQ.html

                
#
# config file for ht://Dig.
# Last modified 3/19/00 by Wendy Seltzer
#
# This configuration file is used by all the programs that make up ht://Dig.
# Please refer to the attribute reference manual for more details on what
# can be put into this file.  (http://www.htdig.org/confindex.html)
# Note that most attributes have very reasonable default values so you
# really only have to add attributes here if you want to change the defaults.
#
# What follows are some of the common attributes you might want to change.
#

#
# Specify where the database files need to go.  Make sure that there is
# plenty of free disk space available for the databases.  They can get
# pretty big.
#
database_dir:           /opt/htdig/db_icann
common_dir:             /opt/htdig/common_icann

#
# This specifies the URL where the robot (htdig) will start.  You can specify
# multiple URLs here.  Just separate them by some whitespace.
# The example here will cause the ht://Dig homepage and related pages to be
# indexed.
#
#start_url:             http://www.htdig.org/
start_url: \
        http://cyber.law.harvard.edu/icann/ \
        http://www.icann.org \
        http://www.aso.icann.org \
        http://www.pso.icann.org \
        http://members.icann.org \
        http://www.dnso.org \
        http://cyber.law.harvard.edu/ifwp/ \
        http://cyber.law.harvard.edu/rcs/ \
        http://www.iana.org 

#
# This attribute limits the scope of the indexing process.  The default is to
# set it to the same as the start_url above.  This way only pages that are on
# the sites specified in the start_url attribute will be indexed and it will
# reject any URLs that go outside of those sites.
#
# Keep in mind that the value for this attribute is just a list of string
# patterns.  As long as a URL contains at least one of the patterns, it will
# be seen as part of the scope of the index.
#
limit_urls_to:          ${start_url} 

#
# If there are particular pages that you definitely do NOT want to index, you
# can use the exclude_urls attribute.  The value is a list of string patterns.
# If a URL matches any of the patterns, it will NOT be indexed.  This is
# useful to exclude things like virtual web trees or database accesses.  By
# default, all CGI URLs will be excluded.  (Note that the /cgi-bin/ convention
# may not work on your web server.  Check the path prefix used on your web
# server.)
#
exclude_urls:           /cgi-bin/ .cgi get3 get4

#
# The string htdig will send in every request to identify the robot.  Change
# this to your email address.
#
#maintainer:            [EMAIL PROTECTED]
maintainer:     [EMAIL PROTECTED]
#
# The excerpts that are displayed in long results rely on stored information
# in the index databases.  The compiled default only stores 512 characters of
# text from each document (this excludes any HTML markup...)  If you plan on
# using the excerpts you probably want to make this larger.  The only concern
# here is that more disk space is going to be needed to store the additional
# information.  Since disk space is cheap (! :-)) you might want to set this
# to a value so that a large percentage of the documents that you are going
# to be indexing are stored completely in the database.  At SDSU we found
# that by setting this value to about 50k the index would get 97% of all
# documents completely and only 3% were cut off at 50k.  You probably want to
# experiment with this value.
# Note that if you want to set this value low, you probably want to set the
# excerpt_show_top attribute to false so that the top excerpt_length characters
# of the document are always shown.
#
max_head_length:        100000

#set later
#max_doc_size:          200000
#
# Depending on your needs, you might want to enable some of the fuzzy search
# algorithms.  There are several to choose from and you can use them in any
# combination you feel comfortable with.  Each algorithm will get a weight
# assigned to it so that in combinations of algorithms, certain algorithms get
# preference over others.  Note that the weights only affect the ranking of
# the results, not the actual searching.
# The available algorithms are:
#       exact
#       endings
#       synonyms
#       soundex
#       metaphone
# By default only the "exact" algorithm is used with weight 1.
# Note that if you are going to use any of the algorithms other than "exact",
# you need to use the htfuzzy program to generate the databases that each
# algorithm requires.
#
search_algorithm:       exact:1 synonyms:0.2 endings:0.1
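#
# Note: because the endings and synonyms algorithms are enabled above, their
# databases have to be generated with the htfuzzy program before searching.
# The command below is only a sketch based on the htdig 3.x documentation;
# substitute the real path to this configuration file:
#
#       htfuzzy -c /path/to/this/config endings synonyms
#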

#
# local variables:
# mode: text
# eval: (if (eq window-system 'x) (progn (setq font-lock-keywords (list '("^#.*" . font-lock-keyword-face) '("^[a-zA-Z][^ :]+" . font-lock-function-name-face) '("[+$]*:" . font-lock-comment-face) )) (font-lock-mode)))
# end:


#local_urls:    http://eon.law.harvard.edu/=/home/httpd/html/
#local_user_urls:       http://eon.law.harvard.edu/=/home/,/public_html/
no_excerpt_show_top: yes
no_excerpt_text: 
noindex_start: <!---begin navbar--->
noindex_end: <!---end navbar--->
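#
# The noindex_start/noindex_end markers above are matched as plain strings in
# the page source, so the indexed pages are assumed to wrap their navigation
# bars roughly like this (illustrative sketch only):
#
#       <!---begin navbar--->
#         ... navigation links that htdig should skip ...
#       <!---end navbar--->
#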
#server_aliases:
#cyber.law.harvard.edu=cyber.harvard.edu=www.cyber.law.harvard.edu

#to speed the database
#common_url_parts:      http://eon.law.harvard.edu \
#               http://cyber.law.harvard.edu \
#               .html
#removed momentarily 2/23

#what have we seen?             
create_url_list: yes

#template_map
#use the long.html instead of the default
template_map: Long long ${common_dir}/long.html Short builtin-short builtin-short
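#
# (Each group of three values above is read as: the name shown to the user,
# the internal template name used by the search form, and the template file
# or builtin it maps to, per the htdig template_map attribute docs.)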
                
max_doc_size: 1000000 
#pdf_parser: /usr/bin/acroread -toPostScript
external_parsers: application/msword /usr/local/bin/parse_doc.pl \
                  application/postscript /usr/local/bin/parse_doc.pl \
                  application/pdf /usr/local/bin/parse_doc.pl
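#
# Each pair above maps a MIME type to the external command used to convert
# documents of that type into text htdig can index; parse_doc.pl is assumed
# to be the contributed wrapper script from the htdig distribution, installed
# locally in /usr/local/bin.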

# index two-letter ccTLDs and numbers such as IP addresses
minimum_word_length: 2 
allow_numbers: true

maximum_pages: 50
wordlist_cache_size: 2500000
