Daniel Murphy <[email protected]> wrote: > This way encoding would convert an input range of ubyte to an input range of > char, and decoding would convert Range!char to Range!ubyte. > > This way you would be able to use it with std.algorithm, std.range etc. > When called with an array the range would be able to provide length and be > bidirectional. > This way there would be no allocations inside the range at all. > > You could create and fill a buffer using > auto buffer = array(encode(data)); > or fill an existing buffer using > copy(encode(data), buffer);
I don't see much benefit in making filters decorator ranges in the first place. You can implement them, but decorator ranges should be considered extensions to core filters implemented Masahiro's way.

The biggest reason I think so is ranges' inaptitude for filtering purposes. M-N conversions, which happen in base64, character code conversion, etc., can't be supported by ranges without twisted hacks. Most filters need to control how many items to read and write *by themselves*. Input ranges can only support N-1 conversions in a sane way: they can read as many items as needed from the 'front' of their underlying source ranges, but can only expose a single item. Similarly, output ranges are restricted to 1-N conversions.

Yeah, I know you can work around the problem by caching several items inside a decorator range. It's done in your code and works pretty well. :-) But I think it shows how unfit ranges are for filtering purposes.

So, I believe that Masahiro's encode(src, sink) design wins. His base64 filter has control over the number of bytes it processes, and hence no need for extra caching.

Of course, decorator ranges are useful in some situations, and we'll eventually need them. But they should never supersede Masahiro's filters.

Shin

_______________________________________________
phobos mailing list
[email protected]
http://lists.puremagic.com/mailman/listinfo/phobos
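As an addendum, the encode(src, sink) style argued for above can be sketched with the same toy hex encoder. The signature and name are hypothetical (this is not Masahiro's actual code); the point is that the filter itself decides how many items to read and write per step, so no state survives between elements.

```d
import std.range;

// Hypothetical sink-style filter: the function controls the M-N
// conversion itself -- read one byte, write two chars -- so there is
// no need to cache partial output between calls.
void hexEncode(Src, Sink)(Src src, ref Sink sink)
    if (isInputRange!Src && isOutputRange!(Sink, char))
{
    static immutable digits = "0123456789abcdef";
    foreach (ubyte b; src)
    {
        put(sink, digits[b >> 4]);
        put(sink, digits[b & 0xF]);
    }
}
```

A caller fills any output range, e.g. an appender: auto app = appender!string(); hexEncode(data, app);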
