Hey,
    Well, I would write two scripts.  One to pregen a reference database and
one to search using that database.

The pregen could use the readdir() function that Matthew Luchak suggested to
walk your document root, and fopen() to fetch the pages.
fopen("http://localhost/name-of-file") will return the rendered page instead
of the page with the mixed-in source.  You could then run that through
strip_tags().  Now you have just the text, and it includes any dynamic
content generated by the PHP code.  Parse that text and throw out any words
shorter than, say, 3 characters.
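
Here's a rough sketch of what that pregen pass could look like (the paths
and file names are just examples, not anything you have to use):

<?php
// Sketch of the indexing pass: walk the document root with readdir(),
// fetch each page through the web server so PHP runs first, strip the
// markup, and count the words.  Paths here are made up.
$docroot = '/var/www/html';            // wherever your site lives
$index   = array();                    // word => array(page => count)

$dir = opendir($docroot);
while (($file = readdir($dir)) !== false) {
    if (substr($file, -4) != '.php') {
        continue;                      // only index the .php pages
    }

    // Fetching over HTTP returns the rendered output, not the source.
    $fp = fopen('http://localhost/' . $file, 'r');
    if (!$fp) {
        continue;
    }
    $html = '';
    while (!feof($fp)) {
        $html .= fread($fp, 8192);
    }
    fclose($fp);

    // Drop the markup, then split the remaining text into words.
    $text  = strip_tags($html);
    $words = preg_split('/[^a-zA-Z0-9]+/', strtolower($text));

    foreach ($words as $word) {
        if (strlen($word) < 3) {
            continue;                  // skip the short/noise words
        }
        if (!isset($index[$word][$file])) {
            $index[$word][$file] = 0;
        }
        $index[$word][$file]++;
    }
}
closedir($dir);
?>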

There are tons of ways to store the words and pages, the simplest being
just to store each word and a list of the pages it appears on.  Maybe even
tie in how many times it appears on each page.
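
One simple way to do that (just an example, not the only option) is to
serialize the word => page => count array from the sketch above into a flat
file that the search script can read back later:

<?php
// Write the index out; the file name is made up.
$fp = fopen('/var/www/search-index.dat', 'w');
fwrite($fp, serialize($index));
fclose($fp);
?>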

Anyway, throw that pregen script into a daily or hourly cron job and boom,
done.
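
For example, if you saved the indexer as pregen.php (call it whatever you
like), an hourly crontab entry might look something like:

0 * * * * php /var/www/scripts/pregen.php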

Then write a search page that just looks up each word, finds the matching
pages, and makes links to them.
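
A minimal sketch of that search page, assuming the serialized index file
from above (the file name and the "q" form field are just placeholders):

<?php
// Load the index written by the pregen script.
$index = unserialize(implode('', file('/var/www/search-index.dat')));

// Split the query the same way the indexer split the pages.
$query = isset($_GET['q']) ? $_GET['q'] : '';
$words = preg_split('/[^a-zA-Z0-9]+/', strtolower($query));

$scores = array();                     // page => total hit count
foreach ($words as $word) {
    if (strlen($word) < 3 || !isset($index[$word])) {
        continue;
    }
    foreach ($index[$word] as $page => $count) {
        if (!isset($scores[$page])) {
            $scores[$page] = 0;
        }
        $scores[$page] += $count;
    }
}

// Pages with the most hits first, each as a link.
arsort($scores);
foreach ($scores as $page => $count) {
    $url = htmlspecialchars($page);
    echo "<a href=\"/$url\">$url</a> ($count hits)<br>\n";
}
?>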

I don't know of any sites that explain how to build a search engine, but
this approach will work :)

SL.

----- Original Message -----
From: "Kevin A Williams" <[EMAIL PROTECTED]>
To: <[EMAIL PROTECTED]>
Sent: Thursday, April 19, 2001 11:11 AM
Subject: [PHP] Site Searchable function


Hi,

I was wondering whether anyone could point me in the direction of creating
a search function like the one on php.net?

I have looked rather fruitlessly for a way to implement such a system.
Would one possible implementation be to use reference files for the
information and then use string matching to analyse it, or to search the
source .php files (although that raises security issues around revealing
the scripts)?

Thanks in advance




