On Wed, 13 Oct 2010 19:03:41 +0900, Daniel Murphy <[email protected]> wrote:

On Wed, Oct 13, 2010 at 4:47 PM, Shin Fujishiro <[email protected]> wrote:

Input ranges can only support N-1 conversions in a sane way.  They
can read as many items as needed from the 'front' of their underlying
source ranges, but can only expose a single item at a time.

Similarly, output ranges are restricted to 1-N conversions.

Yeah, I know you can work around the problem by caching several items
inside a decorator range.  You do that in your code, and it works
pretty well. :-)  But I think it shows that ranges are unfit for
filtering purposes.
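For concreteness, here is a minimal sketch of what such a caching decorator
looks like.  The name Base64Decoder and all of the code are mine, not
Phobos's, and it assumes well-formed base64 input; the point is only that
each refill consumes up to four chars from the source but produces up to
three bytes, so the bytes not yet exposed have to sit in a cache behind
front():

```d
import std.range.primitives;      // empty/front/popFront for arrays, isInputRange
import std.string : indexOf;

/// Hypothetical sketch, not Phobos code: a decorator input range that
/// decodes base64 on the fly.  Since front() can expose only one item,
/// the two or three bytes of each decoded group must be cached.
struct Base64Decoder(R) if (isInputRange!R)
{
    private enum table =
        "ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/";

    private R source;
    private ubyte[3] cache;       // decoded bytes not yet handed out
    private size_t len, pos;      // valid bytes in cache / next to expose

    this(R source) { this.source = source; fill(); }

    private void fill()
    {
        pos = len = 0;
        uint acc = 0;
        size_t n = 0;
        while (n < 4 && !source.empty)
        {
            immutable c = source.front;
            source.popFront();
            if (c == '=') break;              // padding: group is short
            acc = (acc << 6) | cast(uint) table.indexOf(c);
            ++n;
        }
        if (n < 2) return;                    // nothing left to decode
        acc <<= 6 * (4 - n);                  // left-align a partial group
        len = n - 1;                          // 4 chars -> 3 bytes, etc.
        foreach (i; 0 .. len)
            cache[i] = cast(ubyte)(acc >> (16 - 8 * i));
    }

    @property bool empty() const { return pos >= len; }
    @property ubyte front() const { return cache[pos]; }
    void popFront() { if (++pos >= len && !source.empty) fill(); }
}

void main()
{
    import std.algorithm.comparison : equal;
    assert(equal(Base64Decoder!string("TWFu"), cast(const(ubyte)[]) "Man"));
}
```

The cache and the len/pos bookkeeping are exactly the state being objected
to here; an encode(src, sink) filter avoids it because the filter itself
decides how many bytes to process per call.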

I see that caching may be undesirable in some situations, but this adapter
(and I assume most others) can be implemented perfectly well without it.
It's a flaw in the implementation, not a limitation of ranges.

I'm waiting for the no-caching implementation.  I think a range should
not carry useless state.  In addition, the current implementation seems
more complex than necessary.

I don't see much benefit in making filters decorator ranges in the first
place.  You can implement them that way, but decorator ranges should be
considered extensions to the core filters implemented in Masahiro's way.

So, I believe that Masahiro's encode(src, sink) design wins.  His base64
filter has control over the number of bytes to process, and hence no
need for extra caching.

Of course, decorator ranges are useful in some situations, and we'll
eventually need them.  But they should never supersede Masahiro's
filters.


I don't see any real differences between the lazy range design and the
conversion function design, apart from the usual lazy vs eager factors of
performance, memory consumption and interface simplicity.

I tend to see the lazy solution as the primary solution, and the conversion
function as an alternative implementation optimized for speed and/or
usability.
One similar example is std.string.split vs std.algorithm.splitter.
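To make that parallel concrete, here is how the eager/lazy pair looks in
use.  A minimal sketch against current Phobos, where split now lives in
std.array rather than std.string:

```d
import std.algorithm : splitter, equal;
import std.array : split;

void main()
{
    immutable s = "a,b,c";

    // Eager: allocates and returns the whole array of slices up front.
    string[] parts = s.split(",");
    assert(parts == ["a", "b", "c"]);

    // Lazy: a range that yields each slice on demand, no allocation.
    auto lazyParts = s.splitter(",");
    assert(equal(lazyParts, parts));
}
```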

I don't think so.  Ranges are a good concept, but they are not everything.
I think usability is the most important factor for a module's API.

That being said, I think we do need both, as the conversion function should
be more efficient and simpler to use for the most common case (buffer ->
buffer).

I agree.

I'd hate to have to use
  copy(Base64.decode(inputbuffer), outputbuffer);
over
  Base64.decode(inputbuffer, outputbuffer);
just as I'd never want to write
  copy(repeat(5), buffer);
over
  fill(buffer, 5);

Base64's API is as follows:

  encodeLength(length);
  encode(src, dst);
  encode(src);
  encoder();   // returns Encoder!(char[]) or Encoder!(char)
  decodeLength(length);
  decode(src, dst);
  decode(src);
  decoder();   // returns Decoder!(ubyte[]) or Decoder!(ubyte)

Do you see any problems?
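For reference, here is how that API shape reads at the call site, written
against the std.base64 module this proposal became.  A sketch only; the
assumption being illustrated is that encodeLength tells the caller exactly
how large dst must be, so the eager pair needs no hidden state:

```d
import std.base64 : Base64;

void main()
{
    ubyte[] data = [0x4D, 0x61, 0x6E];  // "Man"

    // Caller-managed buffer: encodeLength gives the exact size needed.
    auto buf = new char[](Base64.encodeLength(data.length));
    Base64.encode(data, buf);
    assert(buf == "TWFu");

    // Allocating convenience overloads.
    assert(Base64.encode(data) == "TWFu");
    assert(Base64.decode("TWFu") == data);
}
```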


Masahiro
_______________________________________________
phobos mailing list
[email protected]
http://lists.puremagic.com/mailman/listinfo/phobos
