On Tue, September 1, 2009 6:14 pm, Michael Torrie wrote:
> William Attwood wrote:
>> This takes in each line from STDIN, adds it to an array, and when the
>> array hits 8000 (my memory limit at the time) it sends the array to a
>> PHP function that will process and insert it into the DB I am using.
>>
>> Just in case anyone needs to process large files, stream them in:
>>
>> # more file.log | php process.php
>
> I'm a little confused as to why you don't just process the file one line
> at a time with little or no memory consumption (file reads are normally
> buffered anyway, so reading until a line break is not a bottleneck).
> Why the big buffer? I don't see any speed increases coming from that.
Because the function he calls probably does a bulk insert with the buffer,
which is a LOT faster than inserting rows one at a time.

-- 
Matthew Walker
Kydance Hosting & Consulting, Inc. - http://www.kydance.net/
PHP, Perl, and Web Development - Linux Server Administration

/*
PLUG: http://plug.org, #utah on irc.freenode.net
Unsubscribe: http://plug.org/mailman/options/plug
Don't fear the penguin.
*/
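For anyone curious, a minimal sketch of that buffer-then-bulk-insert pattern might look like the following. This is not William's actual script; the table and column names (`log_entries`, `line`) and the PDO connection details are placeholders I've assumed for illustration.

```php
<?php
// Sketch of the buffered approach discussed above: read STDIN one line
// at a time, collect lines into a buffer, and flush the buffer to the
// database with a single multi-row INSERT. Table name, column name, and
// connection details are placeholders, not from the thread.

// Build the SQL for an $n-row bulk insert, e.g. $n = 3 gives
// "INSERT INTO log_entries (line) VALUES (?),(?),(?)".
function build_insert_sql(int $n): string {
    $placeholders = implode(',', array_fill(0, $n, '(?)'));
    return "INSERT INTO log_entries (line) VALUES $placeholders";
}

// Flush the buffered lines in one statement; one bulk insert per batch
// is far cheaper than one single-row INSERT per line.
function flush_buffer(PDO $db, array $buffer): void {
    if ($buffer === []) {
        return;
    }
    $stmt = $db->prepare(build_insert_sql(count($buffer)));
    $stmt->execute($buffer);
}

function process_stdin(PDO $db, int $limit = 8000): void {
    $buffer = [];
    while (($line = fgets(STDIN)) !== false) {
        $buffer[] = rtrim($line, "\n");
        if (count($buffer) >= $limit) {
            flush_buffer($db, $buffer);
            $buffer = [];
        }
    }
    flush_buffer($db, $buffer); // don't drop a partial final batch
}

// Example invocation (credentials are placeholders):
// $db = new PDO('mysql:host=localhost;dbname=logs', 'user', 'pass');
// process_stdin($db, 8000);
```

Invoked as `php process.php < file.log` -- no need for `more` on the front of the pipe. Wrapping the whole batch's flush calls in a transaction would speed it up further on InnoDB.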
