Stefan Beller <sbel...@google.com> writes:
> So the API provided by these read/write functions is intended
> to move huge chunks of data. And as it puts the data on the wire one
> packet after the other, without the possibility to intervene and e.g. send
> a side-channel progress-bar update, I would question the design of this.
Hmph, I didn't think about it.
But shouldn't one be able to set up sideband and channel one such
large transfer on one band, while multiplexing other payload on
the other bands?
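A minimal sketch of that sideband idea, assuming Git's side-band convention (a pkt-line whose payload starts with a band byte; band 1 carries data, band 2 carries progress). The helper names here are illustrative, not Git's actual API:

```python
# Frame one packet: 4-hex-digit length prefix (counting itself),
# then a one-byte band code, then the payload.
def pkt_line(band: int, payload: bytes) -> bytes:
    body = bytes([band]) + payload
    return b"%04x" % (len(body) + 4) + body

# Split a byte stream back into (band, payload) tuples.
def demux(stream: bytes):
    out = []
    i = 0
    while i < len(stream):
        length = int(stream[i:i + 4], 16)
        body = stream[i + 4:i + length]
        out.append((body[0], body[1:]))
        i += length
    return out

# Bulk data on band 1 interleaved with a progress note on band 2:
wire = pkt_line(1, b"chunk-0") + pkt_line(2, b"50% done") + pkt_line(1, b"chunk-1")
assert demux(wire) == [(1, b"chunk-0"), (2, b"50% done"), (1, b"chunk-1")]
```

The point being that the packetized framing is exactly what makes it possible to slip a progress update between two data packets.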
> If I understand correctly this will be specifically used for large
> files locally,
> so e.g. a file of 5 GB (such as a virtual machine tracked in Git)
> would require about 80k packets.
What is wrong with that? 4*80k = 320kB of overhead in length fields
to transfer 5GB worth of data? I do not think it is worth worrying
about.
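A back-of-the-envelope check of those figures, assuming pkt-line's maximum payload of 65516 bytes (a 65520-byte packet minus its 4-byte length header):

```python
PAYLOAD_MAX = 65516        # LARGE_PACKET_MAX (65520) minus the 4-byte header
FILE_SIZE = 5 * 1024**3    # a 5 GB file

packets = -(-FILE_SIZE // PAYLOAD_MAX)   # ceiling division
overhead = packets * 4                   # 4 length bytes per packet

print(packets)   # 81946, i.e. the "about 80k packets" above
print(overhead)  # 327784 bytes, i.e. roughly 320kB of framing
```

So the framing costs about 0.006% of the payload.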
But I am more surprised to see that "why not a single huge
packet" suggestion immediately after you talked about "without the
possibility to intervene". They do not seem to be remotely related;
in fact, they go in opposite directions.