@xigoi
> Yeah, and `1e-1000` shouldn't be `0.0`. Unfortunately, computers have
> finitely many bits.
Point taken. I'm not saying `Natural` should be arbitrary precision; just that
`low` is a logical restriction from the `Natural` concept, while `high` is a
physical restriction of the representation. I'm not sure of a better way to
present this, though.
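Concretely, `Natural` is declared in the system module as
`range[0 .. high(int)]`, so the two bounds really do come from different
places:

```nim
echo Natural.low    # 0: the logical bound, from the concept of a natural number
echo Natural.high   # high(int): the physical bound, from the representation
```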
> Range checks are omitted with -d:danger, so making them catchable could break
> code.
Maybe I misunderstand `{.raises.}`, or maybe I'm expecting the compiler to do
work that should be left to static analysis tools like `DrNim`.
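For what it's worth, my current understanding is that `{.raises.}` deliberately
doesn't track `Defect`s, and a failed range check is only catchable at all
because `--panics:off` is the default. A minimal sketch of what I mean
(assuming a recent Nim where the defect is named `RangeDefect`):

```nim
proc toNatural(x: int): Natural =
  # with default checks this raises RangeDefect for x < 0;
  # with -d:danger the check is omitted and the bad value slips through
  Natural(x)

try:
  discard toNatural(-1)
except RangeDefect:
  echo "caught the out-of-range conversion"
```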
@ElegantBeef
> it's not overly complex to make that an operator that compiles
Nice!
> Is this really a rough edge? `Positive` and `Natural` are two different sub
> range types. The `is` operator checks if the type(or type of an expression in
> this case) is the same type or apart of a typeclass.
I definitely think so. I understand that this is a distinction at the type
level, but it's still an odd choice given that narrowing numeric type
conversions are otherwise explicit.
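For example, narrowing between plain integer types needs an explicit
conversion, but narrowing into a subrange does not:

```nim
let i: int = 3

# let b: int8 = i   # compile error: implicit int -> int8 isn't allowed
let b = int8(i)     # explicit conversion, range-checked at runtime

let n: Natural = i  # implicit int -> Natural: only a runtime check guards it
echo b, " ", n
```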
Here are a few more head-scratchers:
```nim
# you can create a bad value without casts:
pred(Natural 0)       # -1: Natural
var n = Natural(0)
n -= 1                # -1: Natural
Positive.default      # 0: Positive

# the result type differs depending on whether you use ordinal or integer operations:
succ(Natural 1, Natural 1)    # 2: Natural
(Natural 1) + (Natural 1)     # 2: int
```
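On the `n -= 1` hole specifically, the "operator that compiles" point seems
right; here's a sketch (my own, so take it with a grain of salt) of an overload
that routes through the conversion so the range check is re-applied:

```nim
proc `-=`(n: var Natural, d: Natural) =
  # converting back to Natural re-applies the range check
  n = Natural(int(n) - int(d))

var n: Natural = 1
n -= 1     # fine: n == 0
# n -= 1   # now raises RangeDefect instead of quietly storing -1
```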
> To the other issues, Natural is not a new data type it's a subrange type
> across the system integer range. Its goal like any subrange type is to easily
> give runtime and compiletime checks where needed and to make self documenting
> code.
I would rather have `positive` and `natural` be predicates that can be attached
(statically or dynamically) to a value than have them be concrete types in
their own right.
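Something in the spirit of this hypothetical sketch (the names `Nat` and
`natural` are made up) is what I'm picturing:

```nim
type Nat = distinct int

proc natural(x: int): Nat =
  # the predicate is checked once, at the point where the value is refined
  doAssert x >= 0, "value is not a natural number"
  Nat(x)

proc `+`(a, b: Nat): Nat =
  # naturals are closed under addition, so no re-check is needed
  Nat(int(a) + int(b))

echo int(natural(3) + natural(4))  # 7
```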
I should probably "put up or shut up" and write my own refinement type
implementation and see if I can make something both correct and ergonomic!