I see it like Dimitry and actually implemented what he describes a while ago. It is a single-producer, single-consumer circular buffer that offers a sliding window into the content. As is typical for a stream(!), it operates on variably sized blocks of bytes; producer and consumer never negotiate a fixed block size. Instead, the producer puts as many bytes into the buffer as it considers a good trade-off between the number of system calls and the delay before the consumer can start reading. The consumer, on the other hand, asks for the minimum number of bytes it needs and blocks until that requirement is met. Once enough bytes are available, the consumer may also decide to map *all* of the currently available data if that further improves processing performance.
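To make that interaction concrete, here is a condensed sketch of a consumer loop written against the primitives listed below. The buffer type, the exception handling and the processing callback are placeholders, not the actual declarations from the linked sources:

/// Sketch of a consumer loop; `Buffer` stands in for the real circular
/// buffer type and `process` is a placeholder callback. Only the primitive
/// names (mapAtLeast, commit, finish) match the list below.
void consumeAll(Buffer)(Buffer buf, size_t delegate(const(ubyte)[]) process)
{
    try
    {
        for (;;)
        {
            // Block until the producer has committed at least 4 bytes,
            // but map everything that is already available.
            const(ubyte)[] window = buf.mapAtLeast(4);
            immutable used = process(window); // handle some prefix of it
            buf.commit(used);                 // slide the window forward
        }
    }
    catch (Exception e)
    {
        // Once the producer has called finish(), asking for more bytes
        // than remain in the stream throws instead of setting an EOF flag.
    }
}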
Source: https://github.com/mleise/piped/blob/master/src/piped/circularbuffer.d

The public primitives (to come back to the topic) are the following, exposed through a "get" and a "put" pointer into the buffer:

commit(<byte count or data type>)
    Tells the buffer that the sliding window can be moved forward by X bytes after they have been processed (read from or written to).

mapAvailable()
    For the consumer: returns a window spanning all of the buffer that the producer has already committed. For the producer: returns all of the free memory in the buffer that can be written to.

mapAtLeast(<byte count>)
    Same as mapAvailable(), but blocks until a certain number of bytes are ready.

map(<byte count>)
    Same as mapAtLeast(), but shortens the resulting slice to exactly the byte count that was asked for: mapAtLeast(count)[0 .. count];

map(T)()
    Treats the start of the sliding window as data of type T and returns a pointer to it (once enough bytes are available).

finish()
    Tells the counterpart that we are done with the stream. For the producer this is like setting an EOF flag: no more data will be written, and all queries for more will result in an exception.

In this buffer I replaced EOF checks with exceptions thrown on attempts to read past the end of the stream. In my limited use cases the expected length of the stream could always be established on the fly, for example by reading file header fields.

There are also primitives for reading and writing bit runs, mostly to accommodate compressed streams:

peekBits(<bit count>)
    Returns a ubyte, uint, ulong, ... holding the next bits at the start of the sliding window, but does not remove them yet. Some compression algorithms need this to decide on the next steps to take.

skipBits(<bit count>)
    For performance: if you just peeked the bits and cached the result, you can simply skip them. Also useful whenever some bits in the stream are of no further use.

commitBits(<bit count>)
    The bit-level counterpart of commit() above, which works on integral bytes. The committed bits become available to the counterpart.

readBit()
    Reads a single bit. Optimized.

readBits(<bit count>)
    Works like peekBits() followed by skipBits().

skipBitsToNextByte()
    Very common in compression: skips the remainder of the current byte so we are at an integral byte position again.
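To show how the bit primitives combine, here is a similarly condensed sketch of the kind of read path a DEFLATE-style decoder goes through. Again, the buffer and table types are placeholders and the table lookup is invented; only the primitive names match the list above:

/// Sketch of skipping one variable-length (Huffman-style) code.
void skipOneCode(Buffer, Table)(Buffer input, Table table)
{
    // Peek enough bits to index the decode table without consuming them;
    // how many bits the code actually uses is only known after the lookup.
    immutable bits = input.peekBits(15);
    immutable codeLength = table.lengthOf(bits); // hypothetical lookup
    input.skipBits(codeLength);

    // A stored (uncompressed) block restarts at a byte boundary, so the
    // padding bits of the current byte are discarded first.
    if (table.isStoredBlock(bits))
        input.skipBitsToNextByte();
}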
-- Marco