On Wednesday, 18 February 2015 at 23:11:07 UTC, Vlad Levenfeld wrote:
On Wednesday, 18 February 2015 at 16:10:30 UTC, Pasqui23 wrote:
No. The main strength of the range API is its ability to
preserve range categories. The loss of range categories is a
*price* we pay for lazy evaluation, categories which can be
restored by .array in exchange for the usual price of a dynamic
allocation.

The argument naturally extends to ContiguousRanges:
The loss of contiguity is the price paid for lazy evaluation (note that retro is lazily evaluated), and the result of .array is always contiguous.
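For example (std.range.retro is the lazy view; std.array.array forces an eager, contiguous copy):

  import std.array : array;
  import std.range : retro;

  auto view = [9, 8, 7].retro;  // lazy view: iterates in reverse without copying
  int[] flat = view.array;      // one allocation, contiguous in iteration order
  assert (flat == [7, 8, 9]);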

The point is that you want to pay as little as possible for the most gain :)
you are describing ContiguousRange as
a type implicitly convertible to T[]

Not implicitly, but otherwise this may be a much nicer way to define ContiguousRange than what I had proposed.

I see a ContiguousRange as an underlying T[] with user-defined operations and constraints, e.g. a Bitmap or an AudioClip. This would enable a generic function to operate on custom ranges without losing information by decaying the argument into a T[]. For example, you could perform some generic search operation on an AudioClip and return a slice while preserving the AudioClip's sampling rate.
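A minimal sketch of that idea (AudioClip here is a hypothetical type, not something in Phobos):

  // a ContiguousRange: an underlying T[] plus domain metadata
  struct AudioClip
  {
    float[] samples;
    uint samplingRate;

    size_t length () const { return samples.length; }
    float opIndex (size_t i) const { return samples[i]; }

    // slicing yields another AudioClip, so the sampling rate survives
    AudioClip opSlice (size_t lo, size_t hi)
    {
      return AudioClip (samples[lo .. hi], samplingRate);
    }
  }

  // a generic search can return a slice without decaying it to float[]
  R skipSilence (R)(R clip, float threshold)
  {
    foreach (i; 0 .. clip.length)
      if (clip[i] > threshold)
        return clip[i .. clip.length];
    return clip[clip.length .. clip.length];
  }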


My proposal and yours are not mutually exclusive.
A range which can be cast to T[] is automatically considered contiguous, but not all contiguous ranges are castable to T[] (see for example BinaryHeap!(T[])).

As an aside, I prefer the implicit cast, as it eases the burden on both range authors and algorithm authors:
range authors just have to add
  T[] array;
  alias array this;
and algorithm authors just need to accept T[] (or R : T[] if the function returns the range parameter)
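A minimal sketch of that division of labor (Line is a made-up wrapper for illustration):

  // range author's side: wrap a T[] and alias it
  struct Line
  {
    char[] text;
    size_t number;    // extra metadata carried alongside the slice
    alias text this;  // Line implicitly converts to char[]
  }

  // algorithm author's side: just accept T[]
  size_t indent (char[] s)
  {
    size_t n;
    while (n < s.length && s[n] == ' ') ++n;
    return n;
  }

  unittest
  {
    auto l = Line ("  x = 1;".dup, 42);
    assert (indent (l) == 2);  // the implicit conversion does the work
  }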

for an array r, is r.retro contiguous or not?

I would say yes. In general, a good rule of thumb would be: can
this range be copied via memcpy? Then preserve contiguity.

The problem with this rule is illustrated with an example:

  import core.stdc.string : memcpy;
  import std.algorithm : min;
  import std.range : ElementType, retro;

  void copy (R1, R2)(R1 src, R2 tgt)
  out {
    foreach (i; 0..min (src.length, tgt.length))
      assert (tgt[i] == src[i]);
  }
  body {
    memcpy (tgt.ptr, src.ptr, ElementType!R1.sizeof * min (src.length, tgt.length));
  }

  auto s = [0,1,2,3,4,5];
  auto r = [9,8,7,6,5,4].retro;
  s.copy (r);

Originally, r == [4,5,6,7,8,9].
What's the value of r at the end of this program?

If *r.ptr == 4, then we have
  r == [0,5,6,7,8,9]
and a segfault if we're lucky.

If *r.ptr == 9, then we have
  r == [5,4,3,2,1,0]
and a failing postcondition.

There is no scalable, generic way to satisfy the postcondition and get our expected behavior, i.e.
  r == [0,1,2,3,4,5]

The second interpretation (*r.ptr == 9) is the correct one. You are raising an important point: the optimization I gave for copy is valid only if r1 and r2 are of the same type (in fact I gave it the signature void copy(R)(R r1, R r2) if (isContiguousRange!R); // only one type of range), as different types indicate different access implementations. In general, the use of contiguous ranges forces the user to check that both the source and destination ranges are of the same type, and this can be a source of errors.
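Spelled out, that constrained version might look like this (isContiguousRange is hypothetical, testing for the toContiguous primitive introduced below):

  import core.stdc.string : memcpy;
  import std.algorithm : min;

  enum isContiguousRange(R) = is (typeof (R.init.toContiguous()) == void[]);

  // sound only because src and tgt share one type, hence one layout
  void copy (R)(R src, R tgt) if (isContiguousRange!R)
  {
    void[] s = src.toContiguous;
    void[] t = tgt.toContiguous;
    memcpy (t.ptr, s.ptr, min (s.length, t.length));
  }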
Optimized data transfers don't care at all that the bytes are in order, only that the data are in the slice you gave them.

I have to disagree with this. While the transfer itself is free to occur out-of-order, if the result is out of order, then the program is most likely incorrect.


memcpy(dest, src, len) in general gives undefined behaviour if src or dest point to objects of a different type or of a size different from len; the same is true of contiguous ranges (that's why toContiguous is an explicit cast returning void[])

My proposal allows a simple extension to multiple dimensions:
if r is an n-dimensional range, then the following must
hold true:

  void[] s = r.toContiguous;
  foreach (i1; r) foreach (i2; i1) ... foreach (in; inprev)
    assert (s.ptr <= &in && cast (void*) &in < s.ptr + s.length);
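Concretely, for n = 2, assuming a hypothetical row-major Matrix type:

  // hypothetical row-major matrix exposing its storage as void[]
  struct Matrix
  {
    double[] data;  // rows * cols elements, row-major
    size_t rows, cols;

    double[] opIndex (size_t r) { return data[r * cols .. (r + 1) * cols]; }
    void[] toContiguous () { return cast (void[]) data; }
  }

  unittest
  {
    auto m = Matrix (new double[6], 2, 3);
    auto s = m.toContiguous;
    auto base = cast (ubyte*) s.ptr;
    // every element's address falls inside the contiguous slice
    foreach (r; 0 .. m.rows)
      foreach (c; 0 .. m.cols)
      {
        auto p = cast (ubyte*) &m[r][c];
        assert (base <= p && p < base + s.length);
      }
  }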

So, the catch here is illustrated with another example:

Consider a 2D array in row-major order (where [i,j] is adjacent to [i,j+1]). Here, slicing along the rows will yield a ContiguousRange r[x..y, 0..$], but r[x..y, z..w] where z > 0 || w < r.length will be non-contiguous at the column boundaries. There's contiguity hiding in there, but it can't be leveraged.

This could be solved by considering r[x..y, z..w] as a RandomAccessRange of ContiguousRanges, which would imply that r[a,b] is equivalent to r[a][b]. On the other hand, it could be considered a RandomAccessRange which yields ContiguousRanges under 1-dimensional slicing... more verbose, but avoids the former interpretation's implications.


The first implication (r[a,b] == r[a][b]) is checked by verifying that r is a contiguous range; the second one (r[a,b] != r[a][b]) by checking whether r[a] is itself contiguous.
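A sketch of that second check, reusing the hypothetical row-major layout: a column-restricted slice is a RandomAccessRange whose elements are each contiguous, even though the whole is not:

  // view over rows r0 .. r1 and columns z .. w of a row-major matrix;
  // the view as a whole is not contiguous...
  struct ColumnSlice
  {
    double[] data;  // full row-major storage
    size_t cols;    // stride of the parent matrix
    size_t r0, r1, z, w;

    size_t length () const { return r1 - r0; }

    // ...but each r[a] it yields is a contiguous row fragment,
    // which is exactly what checking r[a] for contiguity detects
    double[] opIndex (size_t a)
    {
      immutable row = r0 + a;
      return data[row * cols + z .. row * cols + w];
    }
  }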

But, like I said, extending range concepts to multiple dimensions is tricky. I've been working on the problem on my own for the last few months, and so far my solution has been to define linear spaces which convert to ranges under certain conditions.

This can be compatible with my proposal: both the 'linear space' and the individual ranges are contiguous.

My proposal, in general terms, aims to allow containers to be copied by memcpy when possible and safe, and in general to call C functions expecting untyped arrays (that's why toContiguous returns void[]), like the splice API on Linux. Yours aims to safely call C functions which expect arrays of a given type. Those are two use cases which are more different than I gave them credit for, and so our proposals are orthogonal, maybe even complementary, as I've already said.
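For instance, a sketch of the untyped-array use case (plain write(2) stands in for splice here, since splice itself needs pipe file descriptors; isContiguousRange is the same hypothetical trait as above):

  import core.sys.posix.unistd : write;

  enum isContiguousRange(R) = is (typeof (R.init.toContiguous()) == void[]);

  // a C API taking (int, const void*, size_t) doesn't care about
  // element types, only that the bytes live inside the slice
  void dumpToFd (R)(int fd, R r) if (isContiguousRange!R)
  {
    void[] s = r.toContiguous;
    write (fd, s.ptr, s.length);
  }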
