Hi All,

I'm putting together a spider to be used with a search engine, and have come
up against what looks like a limitation in using CF for the task. Here's
what's going on:

I'm using cfhttp to extract links from specific sites. It loops through,
collecting all hyperlinks within the site and adding them to a list of pages
to be visited. When visiting/indexing those links, however, it is still
looping inside a single CF page request, so it times out (the request timeout
is set to about ten minutes on our server). That won't do if I'm indexing a
site of several hundred pages.
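For what it's worth, here's roughly the shape of the collect-and-visit loop I mean, sketched in Python just to illustrate (the page data and helper names are invented, not my actual CF code). The whole crawl runs inside one request, which is where the timeout bites:

```python
import re
from collections import deque

# Toy in-memory pages standing in for cfhttp responses; the real spider
# would do an HTTP GET per page (this dict is purely illustrative).
PAGES = {
    "/index.html": '<a href="/a.html">A</a> <a href="/b.html">B</a>',
    "/a.html": '<a href="/b.html">B</a> <a href="/index.html">home</a>',
    "/b.html": "no links here",
}

def extract_links(html):
    """Pull href targets out of a page, as the link-collecting loop does."""
    return re.findall(r'href="([^"]+)"', html)

def crawl(start):
    """Breadth-first crawl: visit each page once, queueing any new links."""
    visited = []            # pages already indexed
    queue = deque([start])  # list of pages still to be visited
    seen = {start}
    while queue:            # one long loop = one long-running CF request
        url = queue.popleft()
        visited.append(url)
        for link in extract_links(PAGES.get(url, "")):
            if link not in seen:
                seen.add(link)
                queue.append(link)
    return visited
```

Running crawl("/index.html") on the toy data visits /index.html, /a.html, and /b.html once each; on a site of several hundred real pages, that single while loop is what runs past the ten-minute limit.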

Does anyone know of a better way to make the recursive cfhttp calls? Any way
I look at it, everything still runs in one page request, and it eventually
times out.

Thanks for any ideas!

Phill Gibson
[EMAIL PROTECTED]
Velawebs Web Design
www.Velawebs.com


------------------------------------------------------------------------------------------------
Archives: http://www.mail-archive.com/[email protected]/
Unsubscribe: http://www.houseoffusion.com/index.cfm?sidebar=lists or send a message 
with 'unsubscribe' in the body to [EMAIL PROTECTED]
