I think the two approaches do not really conflict with each other. :)

Sending line by line, or in very tiny amounts as data becomes available, 
can be what you need when you are reading a file as it is written. 
Sending data in bigger chunks is the better approach when you have large 
amounts of data.
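
For instance, a plain readable stream in Node already hands you the file 
in buffered chunks rather than line by line. A minimal sketch (the file 
name is an assumption):

    var fs = require('fs');

    var stream = fs.createReadStream('app.log', { encoding: 'utf8' });
    stream.on('data', function (chunk) {
      // `chunk` is a buffered piece of the file, not a single line
      process.stdout.write(chunk);
    });
    stream.on('end', function () {
      console.log('done reading');
    });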

In your solution, having both a *match* and a *done* event is useful; but 
if you are using *done* to send all the matched lines at once, that is 
not a good way to do it when the files you are reading are big, and log 
files generally are. Reading too much into memory and doing nothing until 
everything is done is a performance killer. This is not something you 
should postpone on the grounds that it would be premature optimization. 
It is not. What you are doing here is just a few functions, and there is 
no early or late for such a short piece of work.
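
As a minimal sketch of the idea (the emitter shape, the *page* event and 
the page size are my assumptions, not your API), you can collect matched 
lines into fixed-size pages and emit each page as soon as it is full, 
instead of holding everything back for *done*:

    var EventEmitter = require('events').EventEmitter;

    function createPagedMatcher(pageSize) {
      var emitter = new EventEmitter();
      var page = [];

      // call this for every matched line
      emitter.push = function (line) {
        page.push(line);
        if (page.length >= pageSize) {
          emitter.emit('page', page); // a full page: send it now
          page = [];
        }
      };

      // call this when the input is exhausted
      emitter.end = function () {
        if (page.length > 0) {
          emitter.emit('page', page); // flush the last, partial page
        }
        emitter.emit('done');
      };

      return emitter;
    }

This keeps memory bounded by the page size instead of the file size.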

Say you have a mound of pebbles in your front yard and you want to carry 
them to the back yard. You can find a big wheelbarrow, fill it to the 
top and, straining your strength to its limits, try to carry all of it 
at once. You can also carry or throw the pebbles one by one. A better 
way is to find _an optimum amount_ to carry at once. This way, you end 
up in the middle ground between one-by-one and all-at-once.

This way, you can also easily handle another situation, where your 
pebble mound is fed by trucks as they arrive and toss your order in 
front of your house. If you are reading from files as they are written, 
you can still read some data and send it to the client. Instead of 
sending every line separately, or waiting for your wheelbarrow/bucket to 
fill up, you can also use *timeout*s. Think about the last pebbles left 
in the front yard when they fill only half of your optimum carrying 
amount. You would not want to stand there until the next morning, when 
the next truck arrives, just to carry a full load. So, for every line 
you read, also check the time since you sent the last page; if new data 
is not coming, send the page even if it is only half the size of a 
normal one.
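
Here is the previous sketch extended with that idle timeout (the 
2-second limit is an assumption); if no new line arrives in time, the 
partially filled page is sent as it is:

    var EventEmitter = require('events').EventEmitter;

    function createPagedMatcher(pageSize, idleMs) {
      var emitter = new EventEmitter();
      var page = [];
      var timer = null;

      function flush() {
        clearTimeout(timer);
        timer = null;
        if (page.length > 0) {
          emitter.emit('page', page);
          page = [];
        }
      }

      emitter.push = function (line) {
        page.push(line);
        if (page.length >= pageSize) {
          flush();                           // full page: send immediately
        } else {
          clearTimeout(timer);
          timer = setTimeout(flush, idleMs); // idle: send what we have
        }
      };

      emitter.end = function () {
        flush();
        emitter.emit('done');
      };

      return emitter;
    }

    // usage: createPagedMatcher(500, 2000) sends a page of 500 lines,
    // or whatever has accumulated after 2 seconds of silence.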

For reading from files as they are written, I wrote a how-to answer 
with working code. You can read it here:
http://stackoverflow.com/questions/11225001/reading-a-file-in-real-time-using-node-js/11233045#11233045



On Tuesday, July 3, 2012 10:25:42 AM UTC-4, Adeel Qureshi wrote:
>
> hmm conflicting answers :) i guess ill try to run a test with a lot of 
> data and see which approach works more consistent .. meaning doesnt leads 
> to disconnects .. thanks for the input .. ill post my findings 
>
>
