I need to get this straight:

A normal single-dimensional array in D is defined as

T[] arr

and is a linear, sequential block of T's with no fixed length. It is effectively a T* plus a length: D treats it differently from a bare pointer because a T[] is a slice, i.e. a (pointer, length) pair.
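
From what I can tell, a quick sketch bears this out (nothing here beyond built-in asserts):

void main()
{
    int[] arr = [1, 2, 3];
    // a slice is a (pointer, length) pair:
    assert(arr.length == 3);
    assert(*arr.ptr == 1);
    int* p = arr.ptr; // the bare pointer is still reachable via .ptr
    assert(p[2] == 3);
}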

We can fix the length by giving it a compile-time length:

T[N] arr;

and this gives a static array: one contiguous block of N T's, allocated in place (on the stack for locals) rather than on the heap. Layout-wise it is comparable to

auto arr = cast(T[])malloc(T.sizeof * N)[0 .. N];

or possibly

auto arr = cast(T[N])malloc(T.sizeof * N);

though then arr is a fixed type and we can't resize the array later if needed.

Neither of those actually works, however. The fix is to cast the pointer first and then slice the typed pointer; the parentheses matter, because otherwise the slice binds to malloc's result before the cast does:

auto arr = (cast(T*)malloc(T.sizeof * N))[0 .. N];

That seems to be the way to go.
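
Putting that together as a full, compilable sketch (malloc and free come from core.stdc.stdlib; N here is just an arbitrary size):

import core.stdc.stdlib : malloc, free;

void main()
{
    enum N = 10;
    // cast the void* to a typed pointer first, then slice it:
    auto arr = (cast(int*)malloc(int.sizeof * N))[0 .. N];
    arr[0] = 42;
    assert(arr.length == N);
    assert(arr[0] == 42);
    free(arr.ptr); // malloc'd memory is not GC-managed, so free it ourselves
}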

So, what this "shows" is that in memory, we have:

relative address        type
0                       T
1 * T.sizeof            T
2 * T.sizeof            T
...
(N-1) * T.sizeof        T
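
A little assert sketch to confirm that layout:

void main()
{
    int[4] arr;
    // each element sits exactly one element-size further along:
    assert(&arr[1] == arr.ptr + 1);
    assert(&arr[3] == arr.ptr + 3);
    static assert(arr.sizeof == 4 * int.sizeof);
}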



This is pretty basic, just standard arrays; all of that is fine and dandy!


Now, when it comes to multidimensional arrays:

T[][] arr;

There are two ways the array can be laid out, depending on how we interpret the order of the indices: row/column or column/row.
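
(Side note, as far as I can tell: the dynamic T[][] isn't one contiguous block at all. It is an array of slices, and each row is its own independently allocated (pointer, length) pair:)

void main()
{
    auto arr = new int[][](5, 3); // 5 rows, each a slice of 3 ints
    assert(arr.length == 5);
    assert(arr[0].length == 3);
    arr[4] = new int[7];          // rows can even be resized independently
    assert(arr[4].length == 7);
}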


The most natural way to do this is to extend single-dimensional arrays:

T[][] is defined to be (T[])[]

or, let's use a fixed-size array so we can be clear:

T[N][M]

which means we have M sequential chunks of memory where each chunk is a T[N] array.

This is the natural way because it coincides with single arrays.
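
And D does seem to agree with this reading of the type, at least as far as layout goes; a quick check:

void main()
{
    int[3][5] a; // reading the type right to left: 5 chunks, each an int[3]
    static assert(a.sizeof == 5 * 3 * int.sizeof);
    // chunk 1 starts immediately after the 3 ints of chunk 0:
    assert(cast(int*)&a[1] == cast(int*)&a[0] + 3);
}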

Similarly, to access the nth element in the mth chunk, we would write

t[n][m]

because, again, this conforms with how we think of single arrays.


Now, in fact, it doesn't matter too much if we call a row a column and a column a row (it may affect performance, but as far as dealing with them goes, as long as we are consistent, everything will work).


BUT! D seems to do something very unusual:

If one defines an array like

T[N][M]


one must access the element as

t[m][n]!

The accessors are backwards!

This is a huge problem!


int[3][5] a;

Let's access the last element:

auto x = a[4][2];
auto y = a[2][4]; // <- the logical way, which is invalid in D
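
Wrapping that in a main to check (the second line is my expected, logical order; the compiler rejects it because the constant index 4 is out of bounds for an int[3]):

void main()
{
    int[3][5] a;
    auto x = a[4][2]; // accepted: chunk 4 of 5, then element 2 of 3
    // auto y = a[2][4]; // rejected at compile time: 4 is out of bounds for int[3]
}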

This scheme creates confusion and invites bugs. If our array is dynamic rather than fixed, and we index it in the *logical* order, the mistake isn't caught at compile time: the bugs show up at runtime, and they can be subtle (see the sketch below).

Why? Because the logical way has only one thing to get right, namely being consistent, which is easy.

In D, we not only have to be consistent, we also have to remember to reverse our array accessors from the order in which we declared the array.
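
Here is what I mean for the dynamic case (a sketch; in a debug build the swapped access throws at runtime instead of failing to compile):

void main()
{
    auto a = new int[][](5, 3); // 5 rows of 3
    auto x = a[4][2];           // fine
    // auto y = a[2][4];        // compiles, but throws a RangeError at runtime
    // worse: if the two dimensions happen to be equal, swapped indices
    // are accepted silently and simply read the wrong element
}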

While it is a unique approach and may have something to do with quantum entanglement, I'm curious who the heck came up with this logic, and whether there is actually a valid reason for it.

Or are we stuck in one of those "Can't change it because it will break the universe" black holes?
