Hi Anders,

> "The floating-point arithmetic precision should match or exceed the precision 
> specified by computational_precision attribute. The allowed values of 
> computational_precision attribute are:
> 
> "32": 32-bit floating-point arithmetic
> "64": 64-bit floating-point arithmetic

This works for me.

> If the computational_precision attribute has not been set, then the default 
> value "32" applies."
> 
> That would ensure that we can assume a minimum precision on the user side, 
> which would be important. Practically speaking, high level languages that 
> support 16-bit floating-point variables, typically use 32-bit floating-point 
> arithmetic for the 16-bit floating-point variables (CPU design).

I'm not so sure about having a default value. In the absence of guidance from 
the creator, I'd probably prefer that the user be free to use whatever 
precision they like.
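Incidentally, the distinction between the precision a value is stored at and the precision it is computed at is easy to demonstrate. A quick Python sketch of my own (not part of the proposal): an increment that 64-bit arithmetic preserves is discarded when the result is rounded to 32-bit storage.

```python
import struct

def to_f32(x):
    # Round a 64-bit Python float to the nearest 32-bit float and back.
    return struct.unpack('f', struct.pack('f', x))[0]

# 1e-8 is below the float32 resolution near 1.0 (~1.2e-7) but well above
# the float64 resolution (~2.2e-16).
computed_64 = 1.0 + 1e-8           # 64-bit arithmetic keeps the increment
stored_32 = to_f32(computed_64)    # rounding to 32-bit storage discards it

print(computed_64 == 1.0)  # False
print(stored_32 == 1.0)    # True
```

This is the sense in which a guaranteed minimum arithmetic precision matters independently of the storage type of the variable.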

Thanks, David


-- 
Reply to this email directly or view it on GitHub:
https://github.com/cf-convention/cf-conventions/issues/327#issuecomment-854048598