Cleve Moler's discussion is not quite as "contextually invariant" as 
William Kahan's and James Demmel's.
In fact "the numerical analysis community" has made an overwhelmingly 
strong case that, roughly speaking,
one is substantively better situated when denormalized floating point 
values are used wherever they may
arise than when one saves those extra cycles, at the mercy of the lost 
smoothness that comes from shoving those values to zero.
And this holds widely for floating-point-centered applications and 
libraries.
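
To make that concrete: one property gradual underflow buys is that 
x - y == 0 exactly when x == y, and flush-to-zero breaks that near the 
bottom of the normal range. A minimal Julia sketch (the particular 
values are only illustrative):

    # With subnormals in play (the IEEE default), two distinct tiny
    # Float64s never subtract to zero: the difference lands in the
    # subnormal range instead of vanishing.
    x = 3.0e-308            # near the bottom of the normal range (~2.2e-308)
    y = 2.5e-308
    x == y                  # false
    x - y == 0.0            # false: the difference ~5.0e-309 is representable
    issubnormal(x - y)      # true: gradual underflow kept it nonzero
    # Under flush-to-zero, x - y would be 0.0 even though x != y.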

If the world were remade with each sunrise by fixed-bitwidth floating point 
computations, supporting denormals
is like having made house calls with a few numerical vaccines to everyone 
who will be relying on those computations
to inform expectations about non-trivial work with fixed-bitwidth floating 
point types.  It does not wipe out all forms
of numerical untowardness, and some will find the vaccines more 
prophylactic than others; still, the analogy holds.

We vaccinate many babies against measles even though some of them would 
never have been exposed
to that disease ... and for those who forgot why, not long ago the news was 
about a Disney vacation disease nexus
and how far it spread -- then California changed its law to make it more 
difficult to opt out of childhood vaccination.
Having denormals there when the values they cover arise brings a benefit 
that parallels the good in that law change.
The larger social environment gets better by growing stronger, and that can 
happen because something that had
been bringing weakness (disease, or bad consequences from subtle numerical 
misadventures) no longer operates.

There is another way denormals have been shown to matter -- the reasoning 
above ought to help you feel at ease
with deciding not to move your work from Float64 to Float32 just to avoid 
values that hover around the smaller
magnitudes realizable with Float64s.  That sounds like a headache, and it 
would not have changed
the theory in a way that makes things work (or work at all).  Recasting the 
solving or transforming approach at hand
to use integer values would move the work away from any cost and 
benefit that accompany denormals.
Other than that, thank your favorite floating point microarchitect for 
giving you greater throughput with denormals
than everyone had a few design cycles ago.
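
If you want to measure what flushing subnormals would buy you before 
committing to such a recasting, newer Julia builds expose this directly; 
a hedged sketch, assuming a Julia with Base.set_zero_subnormals (the 
test values here are made up):

    # set_zero_subnormals(true) asks the CPU to flush subnormal results
    # (and inputs) to zero on the current thread -- the same MXCSR
    # FTZ/DAZ bits the mxcsr.jl gist quoted below pokes at directly.
    tiny = 2.0^-1000                # a normal Float64
    issubnormal(tiny * 2.0^-30)     # true: 2^-1030 is subnormal

    set_zero_subnormals(true)       # flush-to-zero / denormals-are-zero
    tiny * 2.0^-30 == 0.0           # true: the same product flushes to 0

    set_zero_subnormals(false)      # restore IEEE gradual underflow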

I would like their presence without measurable cost ... just not enough to 
dislike their availability.

On Monday, July 13, 2015 at 8:02:13 AM UTC-4, Yichao Yu wrote:
>
> > As for doing it in julia, I found @simonbyrne's mxcsr.jl[1]. However, 
> > I couldn't get it working without #11604[2]. Inline assembly in 
> > llvmcall is working on LLVM 3.6 though[3], in case it's useful for 
> > others. 
> > 
>
> And for future references I find #789, which is not documented 
> anywhere AFAICT.... (will probably file a doc issue...) 
> It also supports runtime detection of cpu feature so it should be much 
> more portable. 
>
> [1] https://github.com/JuliaLang/julia/pull/789 
>
> > 
> > [1] https://gist.github.com/simonbyrne/9c1e4704be46b66b1485 
> > [2] https://github.com/JuliaLang/julia/pull/11604 
> > [3] 
> https://github.com/yuyichao/explore/blob/a47cef8c84ad3f43b18e0fd797dca9debccdd250/julia/array_prop/array_prop.jl#L3
>  
> > 
>
