On Wed, Oct 13, 2010 at 4:47 PM, Shin Fujishiro <[email protected]> wrote:
> The biggest reason why I think so is ranges' inaptitude for filtering
> purposes. M-N conversions, which happen in base64, character code
> conversion, etc., can't be supported by ranges without twisted hacks.
> Most filters need to control how many items to read and write
> *by themselves*.

I'm not sure what you mean about having control over the number of items
processed. Do you mean that, because of caching, more bytes can be
encoded/decoded than are ever used?

> Input ranges can only support N-1 conversions in a sane way. They
> can read as many items as needed from the 'front' of their underlying
> source ranges, but can only expose a single item.
>
> Similarly, output ranges are restricted to 1-N conversions.
>
> Yeah, I know you can work around the problem by caching several items
> inside a decorator range. It's done in your code and works pretty
> well. :-) But I think it shows how unfit ranges are for filtering
> purposes.

I see that caching may be undesirable in some situations, but this
adapter (and, I assume, most others) can be implemented perfectly well
without it. That's a flaw in the implementation, not a limitation of
ranges.

When using an output range, I think there is an expectation that output
has been completed after each call to put, which does prevent you from
designing a range that only produces output every second call to put.
(I might be imagining this expectation; I haven't seen or written much
code that uses output ranges.) With forward ranges this problem doesn't
exist, because they guarantee you are only consuming your own view of
the data.

What other problems prevent ranges from modelling M:N filtering
properly? (Without twisted hacks, of course.)

> I don't see much benefit in making filters decorator ranges in the
> first place. You can implement them, but decorator ranges should be
> considered extensions to core filters implemented Masahiro's way.
>
> So, I believe that Masahiro's encode(src,sink) design wins.
> His base64 filter has control over the number of bytes to process, and
> hence no need for extra caching.
>
> Of course, decorator ranges are useful in some situations, and we'll
> eventually need them. But they should never supersede Masahiro's
> filters.

I don't see any real difference between the lazy range design and the
conversion-function design, apart from the usual lazy-vs-eager factors
of performance, memory consumption, and interface simplicity. I tend to
see the lazy solution as the primary one, and the conversion function as
an alternative implementation optimized for speed and/or usability. One
similar example is std.string.split vs std.algorithm.splitter.

That being said, I think we do need both, as the conversion function
should be more efficient and simpler to use for the most common case
(buffer -> buffer). I'd hate to have to write

    copy(Base64.decode(inputbuffer), outputbuffer);

over

    Base64.decode(inputbuffer, outputbuffer);

just as I'd never want to write

    copy(repeat(5), buffer);

over

    fill(buffer, 5);

So, what am I missing? What does a conversion-function design have to
offer that a range can't?

Thanks,
Daniel.
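[Editor's aside: the "caching several items inside a decorator range" workaround under discussion can be sketched outside of D. Below is a Python generator, purely for illustration, showing why base64's 3-bytes-to-4-characters grouping makes encoding an M-N conversion: a lazy filter that exposes one item at a time must still buffer up to three source bytes internally. The name `lazy_b64encode` is invented for this sketch and is not part of any library.]

```python
import base64

def lazy_b64encode(byte_iter):
    """Lazy 3-to-4 base64 filter: buffers up to 3 input bytes,
    then yields 4 output characters at a time (an M-N conversion)."""
    buf = bytearray()
    for b in byte_iter:
        buf.append(b)
        if len(buf) == 3:  # enough input for one complete output group
            yield from base64.b64encode(bytes(buf)).decode("ascii")
            buf.clear()
    if buf:  # final partial group is emitted with '=' padding
        yield from base64.b64encode(bytes(buf)).decode("ascii")

# Pulling one character at a time still forces the filter to cache:
# producing a single 'front' may require consuming up to three bytes.
encoded = "".join(lazy_b64encode(iter(b"any carnal pleasure")))
assert encoded == base64.b64encode(b"any carnal pleasure").decode("ascii")
```

This is exactly the internal caching the quoted message objects to; whether it is a "twisted hack" or just an implementation detail is the question at hand.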
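[Editor's aside: the split-vs-splitter relationship mentioned above — a lazy primitive plus a trivial eager wrapper for the common buffer-to-buffer case — can likewise be sketched generically. The Python functions below are hypothetical analogues, not the Phobos APIs themselves.]

```python
def splitter(s, sep):
    """Lazy analogue of std.algorithm.splitter: yields one piece at a time."""
    start = 0
    while True:
        i = s.find(sep, start)
        if i == -1:
            yield s[start:]  # last piece (may be empty)
            return
        yield s[start:i]
        start = i + len(sep)

def split(s, sep):
    """Eager analogue of std.string.split: the 'conversion function' is
    just a convenience wrapper around the lazy primitive."""
    return list(splitter(s, sep))
```

The point being illustrated: when the eager form is a thin shell over the lazy one, offering both costs almost nothing, and the eager entry point can still be specialized (e.g. preallocating the output buffer) where speed matters.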
_______________________________________________ phobos mailing list [email protected] http://lists.puremagic.com/mailman/listinfo/phobos
