On 12.07.21 03:37, someone wrote:
I ended up with the following (as usual advice/suggestions welcomed):
[...]
> alias stringUTF16 = dstring; /// same as immutable(dchar)[];
> alias stringUTF32 = wstring; /// same as immutable(wchar)[];

Bug: You mixed up `wstring` and `dstring`. `wstring` is UTF-16. `dstring` is UTF-32.
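With the names matched to the right encodings, that would be:

    alias stringUTF16 = wstring; /// same as immutable(wchar)[];
    alias stringUTF32 = dstring; /// same as immutable(dchar)[];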

[...]
public struct gudtUGC(typeStringUTF) { /// UniCode grapheme cluster‐aware string manipulation

Style: `typeStringUTF` is a type, so it should start with a capital letter (`TypeStringUTF`).
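That is, something like:

    public struct gudtUGC(TypeStringUTF) { /// ...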

[...]
    private size_t pintSequenceCount = cast(size_t) 0;
    private size_t pintSequenceCurrent = cast(size_t) 0;

Style: There's no need for the casts (throughout).
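Without the casts, the declarations are simply:

    private size_t pintSequenceCount = 0;
    private size_t pintSequenceCurrent = 0;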

[...]
   @safe public typeStringUTF encode() { /// UniCode grapheme cluster to UniCode UTF‐encoded string

       scope typeStringUTF lstrSequence = null;
[...]
       return lstrSequence;

    }

Bug: `scope` makes no sense if you want to return `lstrSequence` (throughout).
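A minimal sketch of the fix, just dropping `scope` from the local (elided body assumed to build up `lstrSequence`):

    @safe public typeStringUTF encode() {
        typeStringUTF lstrSequence = null;
        // ... (elided code filling lstrSequence) ...
        return lstrSequence;
    }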

   @safe public typeStringUTF toUTFtake( /// UniCode grapheme cluster to UniCode UTF‐encoded string
       scope const size_t lintStart,
       scope const size_t lintCount = cast(size_t) 1
       ) {
Style: `scope` does nothing on `size_t` parameters (throughout).
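Dropping `scope` (and the cast, per the other comments), the signature can be just:

    @safe public typeStringUTF toUTFtake(
        const size_t lintStart,
        const size_t lintCount = 1
        ) {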

[...]
       if (lintStart <= lintStart + lintCount) {
[...]
          scope size_t lintRange1 = lintStart - cast(size_t) 1;

Possible bug: Why subtract 1?

          scope size_t lintRange2 = lintRange1 + lintCount;

         if (lintRange1 >= cast(size_t) 0 && lintRange2 <= pintSequenceCount) {

Style: The first half of that condition is pointless. `lintRange1` is unsigned, so it will always be greater than or equal to 0. If you want to defend against overflow, you have to do it before subtracting.
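A sketch of one way to guard before subtracting, keeping the original names (and assuming `lintStart` is meant to be 1-based, which the `- 1` suggests):

    if (lintStart >= 1 && lintCount <= pintSequenceCount
        && lintStart - 1 <= pintSequenceCount - lintCount) {
        size_t lintRange1 = lintStart - 1;
        size_t lintRange2 = lintRange1 + lintCount;
        // ...
    }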

[...]
          }

       }
[...]
    }
[...]
   @safe public typeStringUTF toUTFpadL( /// UniCode grapheme cluster to UniCode UTF‐encoded string
       scope const size_t lintCount,
       scope const typeStringUTF lstrPadding = cast(typeStringUTF) r" "

Style: Cast is not needed (throughout).
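String literals in D convert to `string`/`wstring`/`dstring` as the context requires, so for those instantiations the default can be written directly:

    scope const typeStringUTF lstrPadding = r" "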

       ) {
[...]
    }
[...]
}
[...]
