I’ve considered whether it makes sense to have, say, a NullableBits{T,V} where
T is the bits type, e.g. Int64 or Float32, and V is the special value which is
considered null. Then you would use a set of functions similar to those defined in
NullableArray so that operations on these numbers propagate the ‘special’
nullable values. No extra storage is needed to determine nullability, though of
course there is the cost of checks against the value. Systems where V was
unpredictable might result in excessive amounts of code being compiled for the
various specialized types, e.g. NullableBits{Float32, 9999.0f0} vs
NullableBits{Float32, 8888.0f0}. But this is just speculation on my part; I was
curious whether the group had looked at this representation in their explorations.
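A minimal sketch of what I have in mind, in current (0.4) syntax — the type and the null-check function are my own invention, not anything that exists in NullableArrays:

```julia
# Hypothetical: bake the sentinel V into the type as a parameter, so that
# nullability costs no extra storage -- only a comparison against V.
immutable NullableBits{T,V}
    value::T
end

# A value is "null" exactly when it equals the sentinel in its type.
issentinelnull{T,V}(x::NullableBits{T,V}) = x.value === V

a = NullableBits{Float32, 9999.0f0}(1.5f0)
b = NullableBits{Float32, 9999.0f0}(9999.0f0)
# issentinelnull(a) == false, issentinelnull(b) == true
```

Note that NullableBits{Float32, 9999.0f0} and NullableBits{Float32, 8888.0f0} are distinct types, each getting its own compiled specializations — which is exactly the code-bloat concern above.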
On October 18, 2015 at 3:54:23 PM, David Gold ([email protected]) wrote:
@Sebastian: If I understand you correctly, then it should at least be possible.
If T is your datatype that behaves as such and x is the value of type T that
designates missingness, then it seems you could straightforwardly write your
own method to convert an Array{T} to a NullableArray{T} that sets a given entry
in the target's isnull field to true iff the corresponding entry in the argument
array is x. I don't know how the pros and cons will play out for your specific
use case.
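Something along these lines might work — a sketch only, with a made-up function name, assuming the NullableArray constructor that takes a values array plus a Bool mask:

```julia
using NullableArrays  # assumes the NullableArrays.jl package is installed

# Build a NullableArray from A, marking entries equal to `sentinel` as null.
function sentinel_to_nullable{T}(A::AbstractArray{T}, sentinel::T)
    NullableArray(A, Bool[a == sentinel for a in A])
end

A = [-1.2, 9999.0, -3.4]
N = sentinel_to_nullable(A, 9999.0)
# N.isnull == [false, true, false]
```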
On Saturday, October 17, 2015 at 7:38:24 AM UTC-7, Sebastian Good wrote:
This brings to mind a question that commonly comes up in scientific computing:
nullable arrays simulated by means of a canonical "no-data value", e.g. in
a domain where all values are expected to be negative, using +9999. It's ugly,
but it's really common. From what I can see of the implementation, this is a
different approach from the one used in NullableArrays, which keeps a lookaside
table of null/not-null flags.
Is it sensible or possible to work with this kind of data with this package?
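To make the contrast concrete, the two representations of the same three values look roughly like this (sentinel chosen arbitrarily):

```julia
# In-band sentinel: null is encoded in the data itself.
sentinel = 9999.0
data = [-1.2, 9999.0, -3.4]      # 9999.0 means "no data"

# Lookaside mask (the NullableArrays approach): values plus a parallel
# Bool array recording which entries are null.
values = [-1.2, 0.0, -3.4]       # payload at null slots is arbitrary
mask   = [false, true, false]    # true marks a null entry
```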