I think you'll find the performance penalty is irrelevant. It doesn't matter how many constructors and destructors are called: in this context, all the sub-constructors and destructors can be fully inlined, and even if they couldn't be, the overhead of six function calls is negligible compared to the overhead of reading from a file descriptor. Deleting and re-allocating the buffer is more expensive, but even that is likely to be dwarfed by the I/O costs. Please do feel free to try it out and get actual numbers, but my guess is that you'll find no real difference.
As for code cleanliness, I find the Reset() method awkward: the user has to remember to call it at the same time as they perform some other operation, such as seeking the file descriptor. Calling Reset() alone, or seeking the file descriptor alone, leaves the object in an inconsistent state. It might make more sense to offer an actual Seek() method that safely performs both operations together, with an interface that is not so easy to misuse.

On Sun, Jan 17, 2010 at 2:46 PM, Jacob Rief <jacob.r...@gmail.com> wrote:
> Hello Kenton,
>
> > What makes you think it is inefficient? It does mean the buffer has to
> > be re-allocated, but with a decent malloc implementation that shouldn't
> > take long. Certainly the actual reading from the file would take
> > longer. Have you seen performance problems with this approach?
>
> Well, in order to see any performance penalty, I would have to
> implement FileInputStream::Reset() and compare the results with the
> current implementation (I can do that, if there is enough interest).
> I reviewed the implementation and saw that by re-instantiating a
> FileInputStream object, three destructors and three constructors have
> to be called, one of which (CopyingInputStreamAdaptor) invalidates a
> buffer that then has to be reallocated in the Next() call immediately
> afterwards. A Reset() function would avoid these unnecessary steps.
>
> > If there really is a performance problem with allocating new objects,
> > then sure.
>
> From the performance point of view, it's certainly not a big issue, but
> from the code-cleanliness point of view, it is.
> I have written a class named LzipInputStream, which offers Reset()
> functionality to randomly access any part of the uncompressed input
> stream without having to decompress everything. This Reset() function
> is therefore called quite often, and it has to destroy and recreate its
> lower layer, i.e. the FileInputStream.
> If each stackable ...InputStream offers a Reset() function, the upper
> layer then only has to call Reset() on the lower layer, instead of
> keeping track of how to reconstruct the lower-layered FileInputStream
> object.
>
> Regards,
> Jacob

--
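To illustrate the Seek() idea: the point is that repositioning the file descriptor and invalidating the buffered data must happen together, so a single method should do both. This is only a sketch of the design, not protobuf code; the class and method names (BufferedFdReader, Seek, ReadByte) are illustrative.

```cpp
#include <cassert>
#include <cstddef>
#include <fcntl.h>
#include <unistd.h>
#include <vector>

// Sketch: a buffered reader whose Seek() both repositions the file
// descriptor and discards the stale buffer, so the two states can
// never get out of sync the way separate Reset()+lseek() calls can.
class BufferedFdReader {
 public:
  explicit BufferedFdReader(int fd, size_t buf_size = 8192)
      : fd_(fd), buffer_(buf_size), pos_(0), limit_(0) {}

  // Discard buffered bytes and reposition the fd in one operation.
  bool Seek(off_t offset) {
    pos_ = limit_ = 0;  // invalidate the buffer
    return lseek(fd_, offset, SEEK_SET) != (off_t)-1;
  }

  // Read one byte through the buffer; returns -1 on EOF or error.
  int ReadByte() {
    if (pos_ == limit_) {
      ssize_t n = read(fd_, buffer_.data(), buffer_.size());
      if (n <= 0) return -1;
      pos_ = 0;
      limit_ = static_cast<size_t>(n);
    }
    return buffer_[pos_++];
  }

 private:
  int fd_;
  std::vector<unsigned char> buffer_;
  size_t pos_, limit_;
};
```

With this shape there is simply no public operation that seeks without invalidating, which is the "hard to misuse" property being argued for.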
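Jacob's stacking idea can be sketched as follows. Each layer implements a common resettable interface and simply forwards Reset() to the layer below, so the top of the stack never needs to know how to reconstruct the bottom. This is a hypothetical illustration, not the protobuf API: ZeroCopyInputStream does not define Reset(), and the classes here (ResettableStream, MemoryStream, DoublingStream) are stand-ins for FileInputStream and LzipInputStream.

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// Hypothetical common interface for a stack of resettable input layers.
class ResettableStream {
 public:
  virtual ~ResettableStream() {}
  virtual void Reset() = 0;      // rewind this layer and everything below
  virtual int ReadByte() = 0;    // -1 on EOF
};

// Bottom layer: reads from an in-memory buffer (stands in for
// FileInputStream reading from a file descriptor).
class MemoryStream : public ResettableStream {
 public:
  explicit MemoryStream(std::vector<int> data) : data_(data), pos_(0) {}
  void Reset() override { pos_ = 0; }  // rewind in place, no reallocation
  int ReadByte() override {
    return pos_ < data_.size() ? data_[pos_++] : -1;
  }
 private:
  std::vector<int> data_;
  size_t pos_;
};

// Upper layer (stands in for LzipInputStream): transforms the lower
// stream's bytes and forwards Reset() downward, so one call resets the
// whole stack without destroying and re-creating any object.
class DoublingStream : public ResettableStream {
 public:
  explicit DoublingStream(ResettableStream* lower) : lower_(lower), count_(0) {}
  void Reset() override {
    count_ = 0;        // clear this layer's own state...
    lower_->Reset();   // ...and propagate to the layer below
  }
  int ReadByte() override {
    int b = lower_->ReadByte();
    if (b < 0) return b;
    ++count_;
    return b * 2;
  }
  size_t BytesRead() const { return count_; }
 private:
  ResettableStream* lower_;
  size_t count_;
};
```

The upper layer holds only a pointer to the abstract interface, so it never needs the constructor arguments of the concrete lower layer, which is exactly the bookkeeping Jacob wants to avoid.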
You received this message because you are subscribed to the Google Groups "Protocol Buffers" group.
To post to this group, send email to proto...@googlegroups.com.
To unsubscribe from this group, send email to protobuf+unsubscr...@googlegroups.com.
For more options, visit this group at http://groups.google.com/group/protobuf?hl=en.