- - - - - - - - - - - - - - - - - - - - - - - - - - - -
Name: dragon
Subject: About Indexer
Here are the command-line options of indexer, as printed by its usage text in the source:
fprintf(stderr, "\nindexer from %s-%s-%s\n\
(C)1998-2003, LavTech Corp.\n\
(C)2003-2006, Datapark Corp.\n\
\n\
Usage: indexer [OPTIONS] [configfile]\n\
\n\
Indexing options:"
#ifdef HAVE_SQL
"\n\
-a reindex all documents even if not expired (may be\n\
limited using -t, -u, -s, -c, -y and -f options)\n\
-m reindex expired documents even if not modified (may\n\
be limited using -t, -u, -c, -s, -y and -f options)\n\
-e index 'most expired' (oldest) documents first\n\
-o index documents with less depth (hops value) first\n\
-d index most popular documents first\n\
-r try to reduce remote servers load by randomising\n\
url fetch list before indexing (recommended for very \n\
big number of URLs)\n\
-n n index only n documents and exit\n\
-c n index only n seconds and exit\n\
-q quick startup (do not add Server URLs)\n\
"
#endif
"\n\
-b block starting more than one indexer instances\n\
-i insert new URLs (URLs to insert must be given using -u or -f)\n\
-p n sleep n milliseconds after each URL\n\
-w do not warn before clearing documents from database\n\
"
#ifdef HAVE_PTHREAD
" -N n run N threads\n\
-U use one connection to DB for all threads\n\
"
#endif
#ifdef HAVE_SQL
"\n\
Subsection control options (may be combined):\n\
-s status limit indexer to documents matching status (HTTP Status code)\n\
-t tag limit indexer to documents matching tag\n\
-g category limit indexer to documents matching category\n\
-y content-type limit indexer to documents matching content-type\n\
-L language limit indexer to documents matching language\n\
-u pattern limit indexer to documents with URLs matching pattern\n\
(supports SQL LIKE wildcard '%%')\n\
-z maxhop limit indexer to documents with hops value less or equal to maxhop\n\
-f filename read URLs to be indexed/inserted/cleared from file (with -a\n\
or -C option, supports SQL LIKE wildcard '%%'; has no effect\n\
when combined with -m option)\n\
-f - Use STDIN instead of file as URL list\n\
"
#else
"\n\
URL options:\n\
-u URL insert URL at startup\n\
-f filename read URLs to be inserted from file\n\
"
#endif
"\n\
Logging options:\n\
"
#ifdef LOG_PERROR
" -l do not log to stdout/stderr\n\
"
#endif
" -v n verbose level, 0-5\n\
\n\
Misc. options:\n\
"
#ifdef HAVE_SQL
" -C clear database and exit\n\
-S print statistics and exit\n\
-T test config and exit\n\
-I print referers and exit\n\
-R calculate popularity rank\n\
-H send to cached command to flush all buffers\n\
(for cache mode only)\n\
-W send to cached command to write url data and to create limits\n\
(for cache mode only)\n\
-Y optimize stored database at exit\n\
-YY optimize and check-up stored database at exit\n\
-Z optimize cached database at exit\n\
-ZZ optimize and check-up cached database at exit\n\
-ZZZ optimize, check-up and urls verify for cached database at exit\n\
-Ecreate create SQL table structure and exit\n\
-Edrop drop SQL table structure and exit\n\
"
#endif
" -h,-? print help page and exit\n\
-hh print more help and exit\n\
\n\
\n",
Hope this can help everyone.
- - - - - - - - - - - - - - - - - - - - - - - - - - - -
Read the full topic here:
http://www.dataparksearch.org/cgi-bin/simpleforum.cgi?fid=02;post=