I started to add a test or two for this issue, but then I found the following
test in S29-conversions/ord_and_chr.t:
#?rakudo.moar todo 'chr max RT #124837'
dies-ok {chr(0x10FFFF+1)}, "chr out of range (max)";
Looking at https://en.wikipedia.org/wiki/Code_point and
http://www.unicode.org/glossary/#code_point, I understand that U+10FFFF is
indeed the maximum Unicode code point.
On the JVM backend we already throw for invalid code points (under the hood this
is handled by java.lang.Character's toChars method:
https://docs.oracle.com/javase/8/docs/api/java/lang/Character.html#toChars-int-):
$ ./perl6-j -e 'say chr(0x10FFFF+1)'
java.lang.IllegalArgumentException
in block <unit> at -e line 1
So, IMHO, we could do better on MoarVM as well. It feels to me that the check
for valid code points shouldn't be implemented in NQP, but in MoarVM itself.
Actually, MVM_unicode_get_name already implements such a check.
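To make the idea concrete, here is a minimal standalone sketch of the kind of
range check I have in mind (an illustration only, not actual MoarVM source; in
MoarVM itself the failing case would presumably raise an exception, e.g. via
MVM_exception_throw_adhoc, rather than return a flag):

#include <stdint.h>
#include <stdio.h>

#define MAX_CODEPOINT 0x10FFFF  /* U+10FFFF, the last Unicode code point */

/* Sketch: the bounds check chr could perform at the MoarVM level. */
static int is_valid_codepoint(int64_t cp) {
    return cp >= 0 && cp <= MAX_CODEPOINT;
}

int main(void) {
    printf("%d\n", is_valid_codepoint(0x10FFFF));     /* 1: in range      */
    printf("%d\n", is_valid_codepoint(0x10FFFF + 1)); /* 0: out of range  */
    return 0;
}

With a check like this in the VM, both backends would reject chr(0x10FFFF+1)
consistently and the todo marker on the test above could be removed.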