On Saturday, 28 December 2013 at 22:55:39 UTC, Ivan Kazmenko wrote:
Another quick question, two of them.

1. This is a minimal example of trying the permutations of a character array.

-----
import std.algorithm;
void main () {
    char [] a;
    do { } while (nextPermutation(a));
}
-----

This gives a compile error. However, it works when I change "char [] a" to "dchar [] a". Why?

I understand that permuting a char [] array might be wrong way to go when dealing with Unicode. But what if, at this point of the program, I am sure I'm dealing with ASCII and just want efficiency? Should I convert to ubyte [] somehow - what's the expected way then?

Because nextPermutation (AFAIK) works on ranges with *assignable* elements, and "char[]" is not such a range: it is a read-only range of dchars.
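You can see this with the range traits from std.range.primitives (a minimal illustration, not part of the original thread; nextPermutation constrains its argument with hasSwappableElements, which char[] fails):

```d
import std.range.primitives : ElementType, hasAssignableElements,
    hasSwappableElements;

// char[] auto-decodes to a range of dchar and its elements are neither
// assignable nor swappable, so nextPermutation's constraint rejects it.
static assert(is(ElementType!(char[]) == dchar));
static assert(!hasAssignableElements!(char[]));
static assert(!hasSwappableElements!(char[]));

// dchar[] is a plain random-access range of dchar, so it works.
static assert(hasAssignableElements!(dchar[]));
static assert(hasSwappableElements!(dchar[]));

void main() {}
```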

Arguably, the implementation *could* support it, *even* while taking Unicode into consideration (using the underlying UTF-8 knowledge). You should file an enhancement request (ER) for that. But currently, this is not the case, so you have to look for a workaround.

The "cast (ubyte [])" works, but "to!(ubyte [])" fails at runtime, expecting a string representation of the array, not its raw contents.
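For the ASCII case, the cast workaround might look like this (a minimal sketch; it reinterprets the same buffer, so it is only safe when you know the contents really are ASCII):

```d
import std.algorithm.sorting : nextPermutation;

void main()
{
    char[] a = "abc".dup;
    // Reinterpret the buffer as ubyte[]: same memory, but now an
    // assignable/swappable range that nextPermutation accepts.
    auto b = cast(ubyte[]) a;

    size_t count = 0;
    do
    {
        ++count; // each permutation of b is visible through a
    } while (nextPermutation(b));

    assert(count == 6); // 3! permutations of "abc"
    assert(a == "abc"); // nextPermutation leaves the range sorted again
}
```

Note that mutations of b show through in a, since both slices alias the same memory.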

You can use "representation" to "conveniently" transform a string/wstring/dstring to its corresponding numeric type (ubyte/ushort/uint). That *should* work.
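Sketched with std.string.representation, which returns the ubyte[] view of a char[] without copying (so, as with the cast, mutations show through in the original slice):

```d
import std.algorithm.sorting : nextPermutation;
import std.string : representation;

void main()
{
    char[] a = "abc".dup;
    // representation yields a ubyte[] aliasing the same buffer,
    // avoiding the explicit cast.
    auto bytes = a.representation;

    do
    {
        // use the current permutation of `a` here
    } while (nextPermutation(bytes));
}
```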

"Unfortunately", "to!T(string)" often does a parse. As convenient as that is, I think the added ambiguity often makes it the wrong tool.

2. Why does nextPermutation hang up for empty arrays? I suppose that's a bug?

I suppose so. Please file it. If it is deemed "illegal", at the very least, it should throw.

Ivan Kazmenko.