Is this signature not worth considering?
stream.writev([chunks,...], encoding, callback)
It's an easier API to use. No need to create an object for each chunk.
Of course it'd be no use where there's a need for different encodings for
some of the chunks, but how common a requirement is that? Maybe I'm naive
in thinking that a single encoding for all chunks is the more common
scenario.
Perhaps being able to provide either a single string or an array of strings
would help.
stream.writev([chunks,...], encoding | [encodings,...], callback)
The common use case of a single encoding for all chunks is nice and easy to
use, but the other use case is still catered for.
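To make the idea concrete, here's a sketch of how a `writev` taking either a single encoding or an array of encodings could normalize its input into one `{chunk, encoding}` pair per chunk (the helper name and the `'utf8'` default are my assumptions, not anything proposed upstream):

```javascript
// Hypothetical helper: normalize the proposed
// writev(chunks, encoding | [encodings], callback) signature
// into one { chunk, encoding } object per chunk.
function normalizeWritev(chunks, encoding) {
  return chunks.map(function (chunk, i) {
    // A single string applies to every chunk; an array is per-chunk.
    var enc = Array.isArray(encoding) ? encoding[i] : encoding;
    return { chunk: chunk, encoding: enc || 'utf8' };
  });
}
```

The common case (one encoding) stays a one-liner at the call site, and the array form falls out of the same code path.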
On Tuesday, 23 April 2013 01:01:50 UTC+1, Isaac Schlueter wrote:
>
> There's a syscall called `writev` that lets you write an array (ie,
> "Vector") of buffers of data rather than a single buffer.
>
> I'd like to support something like this for Streams in Node, mostly
> because it will allow us to save a lot of TCP write() calls, without
> having to copy data around, especially for chunked encoding writes.
> (We write a lot of tiny buffers for HTTP, it's kind of a nightmare,
> actually.)
>
> Fedor Indutny has already done basically all of the legwork to
> implement this. Where we're stuck is the API surface, and here are
> some options. Node is not a democracy, but your vote counts anyway,
> especially if it's a really good vote with some really good argument
> behind it :)
>
> Goals:
> 1. Make http more good.
> 2. Don't break existing streams.
> 3. Don't make things hard.
> 4. Don't be un-node-ish
>
> For all of these, batched writes will only be available if the
> Writable stream implements a `_writev()` method. No _writev, no
> batched writes. Any bulk writes will just be passed to _write(chunk,
> encoding, callback) one at a time in the order received.
>
> In all cases, any queued writes will be passed to _writev if that
> function is implemented, even if they're just backed up from a slow
> connection.
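The fallback described above can be sketched like this — if the implementation exposes `_writev`, hand it the whole batch; otherwise drain the batch through `_write` one entry at a time, in order (the function name and batch shape here are illustrative, not Node internals):

```javascript
// Sketch of the described fallback behaviour: batched writes go to
// _writev when it exists, otherwise to _write one at a time, in order.
function flushBatch(stream, batch, done) {
  if (typeof stream._writev === 'function') {
    return stream._writev(batch, done);
  }
  var i = 0;
  function next(err) {
    if (err) return done(err);
    if (i === batch.length) return done();
    var entry = batch[i++];
    stream._write(entry.chunk, entry.encoding, next);
  }
  next();
}
```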
>
>
> Ideas:
>
>
> A) stream.bulk(function() { stream.write('hello');
> stream.write('world'); stream.end('!\n') })
>
> Any writes done in the function passed to `stream.bulk()` will be
> batched into a single writev.
>
> Upside:
> - Easier to not fuck up and stay frozen forever. There is basically
> zero chance that you'll leave the stream in a corked state. (Same
> reason why domain.run() is better than enter()/exit().)
>
> Downsides:
> - easier to fuck up and not actually batch things. eg,
> s.bulk(function(){setTimeout(...)})
> - bulk is a weird name. "batch" maybe? Nothing else really seems
> appropriate either.
> - somewhat inflexible, since all writes have to be done in the same
> function call
>
>
> B) stream.cork(); stream.write('hello'); stream.write('world');
> stream.end('!\n'); stream.uncork();
>
> Any writes done while corked will be flushed to _writev() when uncorked.
>
> Upside:
> - Easy to implement
> - Strictly more flexible than stream.bulk(writer). (Can trivially
> implement a bulk function using cork/uncork)
> - Useful for cases outside of writev (like corking a http request
> until the connection is established)
>
> Downsides:
> - Easy to fuck up and stay corked forever.
> - Two functions instead of just one (double the surface area increase)
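The "strictly more flexible" claim is easy to demonstrate: a `bulk()` built on cork/uncork is a few lines, and wrapping the writer in try/finally also addresses the stay-corked-forever downside (this assumes a stream exposing the proposed `cork()`/`uncork()` pair; nothing here is an actual implementation):

```javascript
// Sketch: bulk() implemented on top of the proposed cork()/uncork().
function bulk(stream, writer) {
  stream.cork();
  try {
    writer();
  } finally {
    // Uncork even if writer() throws, so the stream can never be
    // left in a corked state — recovering option A's main upside.
    stream.uncork();
  }
}
```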
>
>
> C) stream.writev([chunks,...], [encodings,...], callback)
>
> That is, implement a first-class top-level function called writev()
> which you can call with an array of chunks and an array of encodings.
>
> Upside:
> - No unnecessary surface area increase
> - NOW IT'S YOUR PROBLEM, NOT MINE, HAHA! (Seriously, though, it's
> less magical, simpler stream.Writable implementation, etc.)
>
> Downside:
> - A little bit tricky when you don't already have a list of chunks to
> send. (For example, with cork, you could write a bunch of stuff into
> it, and then uncork all at the end, and do one writev, even if it took
> a few ms to get it all.)
> - parallel arrays, ew.
>
>
> D) stream.writev([ {chunk:buf, encoding: blerg}, ...], callback)
>
> That is, same as C, but with an array of {chunk,encoding} objects
> instead of the parallel arrays.
>
> Same +/- as C, except the parallel array bit. This is probably how
> we'd call the implementation's stream._writev() anyway, so it'd be a
> bit simpler.
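A sketch of why D is simpler: the public array is already in the layout `_writev` would receive, so a stub `writev` can forward it with no translation step (the stub object below is illustrative, not Node's actual Writable):

```javascript
// Sketch of option D's calling convention: writev() takes the same
// { chunk, encoding } array that _writev receives internally.
var stub = {
  _writev: function (batch, cb) {
    this.received = batch; // record what the implementation saw
    cb();
  },
  writev: function (batch, cb) {
    // No parallel arrays to zip together — pass straight through.
    this._writev(batch, cb);
  }
};

stub.writev([
  { chunk: 'hello ', encoding: 'utf8' },
  { chunk: 'world!\n', encoding: 'utf8' }
], function () {});
```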
>
>
>
> Which of these seems like it makes the most sense to you?
>
> Is there another approach that you'd like to see here? (Note: "save
> all writes until end of tick always" and "copy into one big buffer"
> approaches are not feasible for obvious performance reasons.)
>
--
Job Board: http://jobs.nodejs.org/
Posting guidelines:
https://github.com/joyent/node/wiki/Mailing-List-Posting-Guidelines
You received this message because you are subscribed to the Google
Groups "nodejs" group.
To post to this group, send email to [email protected]
To unsubscribe from this group, send email to
[email protected]
For more options, visit this group at
http://groups.google.com/group/nodejs?hl=en