Could it be that a certain web server accepts your connection, you request a
file, and that file takes forever to come back, leaving your script hanging
until memory runs out or the connection dies on its own?  Do you have
timeouts set so the HTTP GET/POST gets cut off if nothing is happening on
that connection?
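
Something along these lines is what I have in mind -- just a rough sketch,
assuming you're fetching pages with fsockopen(); the host and path are made
up, and the 10/15 second limits are arbitrary:

    <?php
    // Hypothetical target -- substitute whatever URL the crawler pulled
    // out of the database.
    $host = 'intranet.example.com';
    $path = '/index.html';

    // Last argument caps how long the connect attempt itself may take.
    $fp = fsockopen($host, 80, $errno, $errstr, 10);
    if (!$fp) {
        echo "Connect failed: $errstr ($errno)\n";
    } else {
        // Cap how long any single read may block, so one slow page
        // can't hang the whole crawl.
        socket_set_timeout($fp, 15);

        fwrite($fp, "GET $path HTTP/1.0\r\nHost: $host\r\nConnection: close\r\n\r\n");

        $page = '';
        while (!feof($fp)) {
            $page .= fread($fp, 8192);
            $meta = socket_get_status($fp);
            if ($meta['timed_out']) {
                echo "Read timed out on $host$path, skipping.\n";
                break;
            }
        }
        fclose($fp);
    }
    ?>

If you're using fopen() or file_get_contents() with the http:// wrapper
instead, setting ini_set('default_socket_timeout', 10) before the fetch
should have a similar effect.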


On Tue, 11 Mar 2003, Nicholas Fitzgerald wrote:

> I'm having a heck of a time trying to write a little web crawler for my
> intranet. I've got everything functionally working, it seems, but
> there is a very strange problem I can't nail down. If I put in an entry
> and start the crawler it goes great through the first loop. It gets the
> url, gets the page info, puts it in the database, and then parses all of
> the links out and puts them raw into the database. On the second loop it
> picks up all the new stuff and does the same thing. By the time the
> second loop is completed I'll have just over 300 items in the database.
> On the third loop is where the problem starts. Once it gets into the
> third loop, it starts to slow down a lot. Then, after a while, if I'm
> running from the command line, it'll just go to a command prompt. If I'm
> running in a browser, it returns a "document contains no data" error.
> This is with php 4.3.1 on a win2000 server. Haven't tried it on a linux
> box yet, but I'd rather run it on the windows server since it's bigger
> and has plenty of cpu, memory, and raid space. It's almost like the
> thing is getting confused when it starts to get more than 300 entries in
> the database. Any ideas out there as to what would cause this kind of
> problem?
> Nick

Peter Beckman                                                  Internet Guy
[EMAIL PROTECTED]                   
