On Thursday, September 26, 2024 12:53:12 AM MDT Per Nordlöw via Digitalmars-d-learn wrote:
> Should a function like
>
> ```d
> uint parseHex(in char ch) pure nothrow @safe @nogc {
>     switch (ch) {
>     case '0': .. case '9':
>         return ch - '0';
>     case 'a': .. case 'f':
>         return 10 + ch - 'a';
>     case 'A': .. case 'F':
>         return 10 + ch - 'A';
>     default:
>         assert(0, "Non-hexadecimal character");
>     }
> }
> ```
>
> instead return an ubyte?
I would argue that ubyte would be better, because the result is guaranteed to fit into a ubyte. If the function returns uint, then anyone who wants to assign the result to a ubyte will need to cast it, whereas you can just do the casts right here - which could mean a lot less casting if this function is used much.

Not only that, but you'd be doing the casts in the code that controls the result. So if something ever changes so that the type needs to change (e.g. you make it operate on dchar instead of char), you won't end up with callers casting to ubyte when the result no longer actually fits into a ubyte. And if parseHex's signature changes from returning ubyte to returning ushort or uint or whatnot, the change would be caught at compile time in any code that assigned the result to a ubyte.

Now, I'm guessing that it wouldn't ever make sense to change this particular function in a way that required the return type to change, and returning uint should ultimately work just fine. But I think that restricting the surface area where narrowing casts are likely to happen will ultimately reduce the risk of bugs, and it's pretty clear that there will be less casting overall if the casting is done here instead of at the call site - unless the function is barely ever used.

- Jonathan M Davis
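To illustrate the point, here is a minimal sketch of the ubyte-returning variant being suggested. It assumes the same case-range logic as the quoted function; the only change is that the narrowing casts now live inside parseHex, the code that controls the result, so call sites can assign to a ubyte directly.

```d
// Sketch: same logic as the quoted parseHex, but the narrowing
// casts are done here rather than at every call site.
ubyte parseHex(in char ch) pure nothrow @safe @nogc
{
    switch (ch)
    {
        case '0': .. case '9':
            return cast(ubyte)(ch - '0');
        case 'a': .. case 'f':
            return cast(ubyte)(10 + ch - 'a');
        case 'A': .. case 'F':
            return cast(ubyte)(10 + ch - 'A');
        default:
            assert(0, "Non-hexadecimal character");
    }
}

unittest
{
    ubyte b = parseHex('f'); // no cast needed at the call site
    assert(b == 15);
    assert(parseHex('0') == 0);
    assert(parseHex('A') == 10);
}
```

The casts are needed because D's integer promotion rules evaluate expressions like `ch - '0'` as int, so returning them as ubyte is a narrowing conversion that the compiler rejects without an explicit cast.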