On Monday, July 29, 2019 1:35:18 PM MDT H. S. Teoh via Digitalmars-d-learn 
wrote:
> Generally, the idiom is to let the compiler do attribute inference by
> templatizing your code and not writing any explicit attributes, then use
> unittests to ensure that instantiations of the range that ought to have
> certain attributes actually have those attributes.  For example, instead
> of writing:
>
>   struct MyRange(R) {
>       ...
>       @property bool empty() const { ... }
>       ...
>   }
>
> write instead:
>
>   struct MyRange(R) {
>       ...
>       // No attributes: compiler does inference
>       @property bool empty() { ... }
>       ...
>   }
>
>   // unittest ensures .empty is callable with const object.
>   unittest {
>       const rr = MyRange!(const(Range))(...);
>       assert(rr.empty); // will fail compilation if .empty is non-const
>   }
>
> The unittest tests that a specific instantiation of MyRange has const
> .empty. It's still possible to use MyRange with a range that has
> non-const .empty, but this unittest ensures that the non-const-ness
> wasn't introduced by the implementation of MyRange itself, but only
> comes from the template argument.

Since when does const have anything to do with attribute inference? Unless
something has changed recently, const is _never_ inferred for functions.
Your unittest here should never compile, regardless of whether empty could
have been marked const. If you want empty to be const or not depending on
the range being wrapped, you'd need two separate function definitions (one
const, one not) and a static if to choose which one gets compiled in, based
on whether empty can be const for the range type being wrapped.
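
A minimal sketch of that static if approach (reusing the MyRange name from
the example above; the const-callability check via typeof is one possible
way to test it, not the only one):

```d
import std.range.primitives;

struct MyRange(R)
    if (isInputRange!R)
{
    R source;

    // Compile in the const overload only when the wrapped range's
    // empty is itself callable on a const object.
    static if (is(typeof((const R r) => r.empty)))
        @property bool empty() const { return source.empty; }
    else
        @property bool empty() { return source.empty; }

    @property auto front() { return source.front; }
    void popFront() { source.popFront(); }
}

unittest
{
    // A dynamic array's empty is callable on a const object, so this
    // instantiation gets the const overload.
    const r = MyRange!(int[])([1, 2, 3]);
    assert(!r.empty);
}
```

With a wrapped range whose empty is non-const, the typeof check fails to
typecheck, is() yields false, and the non-const overload is compiled in
instead.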

- Jonathan M Davis
