So what's your remaining issue with cork/uncork exactly? Just that it's
ugly?
:Marco
On Wednesday, April 24, 2013 5:48:59 PM UTC-7, Isaac Schlueter wrote:
>
> > Are you saying that whatever we choose, users will get the benefits
> > of this for free in the 80% case?
>
> Yes, that is what I'm saying.
>
> > Is this type of operation always synchronous? s.bulk(function() {})
> > suggests that you're ready to write everything within the execution
> > of that function.
>
> Yeah, if you setTimeout in there, you're not in bulk() any more, so it
> fails. It doesn't feel very node-ish to me. It looks like
> domain.run(fn) but it's got wildly different semantics.
>
> > s.writev assumes you've got all of the chunks ready when you call it
>
> Yes, that is the case. Cork/uncork doesn't require you to already
> know what you're going to write.
>
> > If there's an error in the middle, you're still corked.
>
> Errors generally indicate that the stream is hosed (hah) anyway, so
> whatever. I don't care too much about that, really. If your stream
> has an error, it's broken, and should be considered poisonous.
>
> > Or you could just forget to uncork which is even worse.
>
> Sure, but I think that the idea of uncorking automatically when you
> call .end() solves most of that hazard.
>
> > So getting some perspective on the use cases might be helpful here.
>
> The primary use case is the http-tcp interaction, and saving syscalls
> in web sites.
>
>
> On Wed, Apr 24, 2013 at 5:12 PM, Marco Rogers <[email protected]> wrote:
> > I think you're answering my primary concern. Are you saying that
> > whatever we choose, users will get the benefits of this for free in
> > the 80% case? If that's true, then I think the unusual case should
> > have the most flexible interface. That feels like cork/uncork to me.
> > But take my opinion with a grain of salt, because I don't often work
> > at that level.
> >
> > I'd be interested in hearing about some actual use cases, because I've
> > got another stupid question. Is this type of operation always
> > synchronous? s.bulk(function() {}) suggests that you're ready to write
> > everything within the execution of that function. Unless the function
> > parameter needs to have a callback. In which case, I really don't like
> > that option. s.writev assumes you've got all of the chunks ready when
> > you call it, but it also allows you to build them up asynchronously if
> > you need to. Cork/uncork also allows this and is much more explicit in
> > that affordance. But of course as you said, it's easier to get wrong.
> > If there's an error in the middle, you're still corked. Or you could
> > just forget to uncork which is even worse. So getting some perspective
> > on the use cases might be helpful here.
> >
> > :Marco
> >
> >
> > On Wed, Apr 24, 2013 at 4:57 PM, Isaac Schlueter <[email protected]> wrote:
> >>
> >> > Of course it'd be no use where there's a need for different
> >> > encodings for some of the chunks, but how common a requirement is that?
> >>
> >> It's as common a requirement as `res.write('some string that is
> >> probably utf8')`. Requiring all chunks to be the same encoding is not
> >> reasonable for the use case we care most about (http).
> >>
> >>
> >> Marco,
> >> We could certainly do s.bulk(function() { write() write() write() })
> >> on top of cork/uncork. But at that point, it's probably unnecessary,
> >> and could be something that userland streams do if they want to.
> >>
> >> In the r.pipe(w) case, it won't matter much. The reader will be
> >> calling write() and the writer will usually be writing one chunk at a
> >> time. If, for some reason, writes back up and the stream supports
> >> _writev, then yes, it'll writev it all at once.
> >>
> >>
> >> On Wed, Apr 24, 2013 at 2:05 PM, Mike Pilsbury <[email protected]> wrote:
> >> > Is this signature not worth considering?
> >> > stream.writev([chunks,...], encoding, callback)
> >> >
> >> > It's an easier API to use. No need to create an object for each chunk.
> >> >
> >> > Of course it'd be no use where there's a need for different encodings
> >> > for some of the chunks, but how common a requirement is that? Maybe
> >> > I'm naive in thinking that a single encoding for all chunks is the
> >> > more common scenario.
> >> >
> >> > Perhaps being able to provide either a single string or an array of
> >> > strings would help.
> >> > stream.writev([chunks,...], encoding | [encodings,...], callback)
> >> > The common use case of a single encoding for all chunks is nice and
> >> > easy to use, but the other use case is still catered for.
> >> >
> >> >
> >> > On Tuesday, 23 April 2013 01:01:50 UTC+1, Isaac Schlueter wrote:
> >> >>
> >> >> There's a syscall called `writev` that lets you write an array (ie,
> >> >> "Vector") of buffers of data rather than a single buffer.
> >> >>
> >> >> I'd like to support something like this for Streams in Node, mostly
> >> >> because it will allow us to save a lot of TCP write() calls, without
> >> >> having to copy data around, especially for chunked encoding writes.
> >> >> (We write a lot of tiny buffers for HTTP, it's kind of a nightmare,
> >> >> actually.)
> >> >>
> >> >> Fedor Indutny has already done basically all of the legwork to
> >> >> implement this. Where we're stuck is the API surface, and here are
> >> >> some options. Node is not a democracy, but your vote counts anyway,
> >> >> especially if it's a really good vote with some really good argument
> >> >> behind it :)
> >> >>
> >> >> Goals:
> >> >> 1. Make http more good.
> >> >> 2. Don't break existing streams.
> >> >> 3. Don't make things hard.
> >> >> 4. Don't be un-node-ish.
> >> >>
> >> >> For all of these, batched writes will only be available if the
> >> >> Writable stream implements a `_writev()` method. No _writev, no
> >> >> batched writes. Any bulk writes will just be passed to
> >> >> _write(chunk, encoding, callback) one at a time in the order received.
> >> >>
> >> >> In all cases, any queued writes will be passed to _writev if that
> >> >> function is implemented, even if they're just backed up from a slow
> >> >> connection.
> >> >>
> >> >>
> >> >> Ideas:
> >> >>
> >> >>
> >> >> A) stream.bulk(function() { stream.write('hello');
> >> >> stream.write('world'); stream.end('!\n') })
> >> >>
> >> >> Any writes done in the function passed to `stream.bulk()` will be
> >> >> batched into a single writev.
> >> >>
> >> >> Upside:
> >> >> - Easier to not fuck up and stay frozen forever. There is basically
> >> >> zero chance that you'll leave the stream in a corked state. (Same
> >> >> reason why domain.run() is better than enter()/exit().)
> >> >>
> >> >> Downsides:
> >> >> - easier to fuck up and not actually batch things. eg,
> >> >> s.bulk(function(){setTimeout(...)})
> >> >> - bulk is a weird name. "batch" maybe? Nothing else really seems
> >> >> appropriate either.
> >> >> - somewhat inflexible, since all writes have to be done in the same
> >> >> function call
> >> >>
> >> >>
> >> >> B) stream.cork(); stream.write('hello'); stream.write('world');
> >> >> stream.end('!\n'); stream.uncork();
> >> >>
> >> >> Any writes done while corked will be flushed to _writev() when
> >> >> uncorked.
> >> >>
> >> >> Upside:
> >> >> - Easy to implement
> >> >> - Strictly more flexible than stream.bulk(writer). (Can trivially
> >> >> implement a bulk function using cork/uncork)
> >> >> - Useful for cases outside of writev (like corking a http request
> >> >> until the connection is established)
> >> >>
> >> >> Downsides:
> >> >> - Easy to fuck up and stay corked forever.
> >> >> - Two functions instead of just one (double the surface area increase)
> >> >>
> >> >>
> >> >> C) stream.writev([chunks,...], [encodings,...], callback)
> >> >>
> >> >> That is, implement a first-class top-level function called writev()
> >> >> which you can call with an array of chunks and an array of encodings.
> >> >>
> >> >> Upside:
> >> >> - No unnecessary surface area increase
> >> >> - NOW IT'S YOUR PROBLEM, NOT MINE, HAHA! (Seriously, though, it's
> >> >> less magical, simpler stream.Writable implementation, etc.)
> >> >>
> >> >> Downside:
> >> >> - A little bit tricky when you don't already have a list of chunks
> >> >> to send. (For example, with cork, you could write a bunch of stuff
> >> >> into it, and then uncork all at the end, and do one writev, even if
> >> >> it took a few ms to get it all.)
> >> >> - parallel arrays, ew.
> >> >>
> >> >>
> >> >> D) stream.writev([ {chunk:buf, encoding: blerg}, ...], callback)
> >> >>
> >> >> That is, same as C, but with an array of {chunk,encoding} objects
> >> >> instead of the parallel arrays.
> >> >>
> >> >> Same +/- as C, except the parallel array bit. This is probably how
> >> >> we'd call the implementation's stream._writev() anyway, so it'd be a
> >> >> bit simpler.
> >> >>
> >> >>
> >> >>
> >> >> Which of these seems like it makes the most sense to you?
> >> >>
> >> >> Is there another approach that you'd like to see here? (Note: "save
> >> >> all writes until end of tick always" and "copy into one big buffer"
> >> >> approaches are not feasible for obvious performance reasons.)
> >>
> >> --
> >> Job Board: http://jobs.nodejs.org/
> >> Posting guidelines:
> >> https://github.com/joyent/node/wiki/Mailing-List-Posting-Guidelines
> >> You received this message because you are subscribed to the Google
> >> Groups "nodejs" group.
> >> To post to this group, send email to [email protected]
> >> To unsubscribe from this group, send email to
> >> [email protected]
> >> For more options, visit this group at
> >> http://groups.google.com/group/nodejs?hl=en
> >>
> >> ---
> >> You received this message because you are subscribed to a topic in the
> >> Google Groups "nodejs" group.
> >> To unsubscribe from this topic, visit
> >> https://groups.google.com/d/topic/nodejs/UNWhF64KeQI/unsubscribe?hl=en.
> >> To unsubscribe from this group and all its topics, send an email to
> >> [email protected].
> >>
> >> For more options, visit https://groups.google.com/groups/opt_out.
> >
> >
> >
> > --
> > Marco Rogers
> > [email protected] | https://twitter.com/polotek
> >
> > Life is ten percent what happens to you and ninety percent how you
> > respond to it.
> > - Lou Holtz
> >
>