I would probably attempt an n-body calculation first. That would allow us 
to check the hypothesis that uboxes form ellipsoidal clouds as the 
computation progresses; that behavior is why Kahan came up with a form of 
arithmetic based on hyperellipsoids.
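
As a rough illustration of the kind of check I have in mind (plain interval 
endpoints in Julia, not actual ubox arithmetic, and the one-body setup and 
constants are just placeholders): integrate a body falling toward a point 
mass and watch the bounding box of (position, velocity) widen.

    # Crude interval Euler integration: the (position, velocity) box
    # widens as the computation progresses. Real ubox arithmetic would
    # track the set far more tightly; this only shows the spreading.
    const GM = 1.0
    iadd(a, b)   = (a[1] + b[1], a[2] + b[2])
    iscale(c, a) = c >= 0 ? (c * a[1], c * a[2]) : (c * a[2], c * a[1])
    accel(x)     = (-GM / x[1]^2, -GM / x[2]^2)  # monotone for x > 0

    x, v, dt = (0.99, 1.01), (0.0, 0.0), 1e-3
    for _ in 1:800
        global x, v
        v = iadd(v, iscale(dt, accel(x)))
        x = iadd(x, iscale(dt, v))
    end
    println("width of x: ", x[2] - x[1], ", width of v: ", v[2] - v[1])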

On Friday, July 31, 2015 at 12:51:21 PM UTC-7, Jeffrey Sarnoff wrote:
>
> What would be the first problem you would address with this hardware, once it is made? 
>
> On Friday, July 31, 2015 at 3:39:01 PM UTC-4, John Gustafson wrote:
>>
>> I discuss this in the book; there have to be strict bounds on how long a 
>> computation remains in the *g*-layer (fused) or people would dump their 
>> entire calculation in there. I think I got most of the fused operations 
>> that make sense, and I pointed out some that do not make sense. It is key 
>> that you should have a finite and predictable bound on the memory 
>> requirement of the *g*-layer where scratch work is done. It cannot be 
>> regarded as unlimited, or limited only by available system memory. For 
>> every fused operation, I can predict how many bits will be needed to return 
>> a correct answer, which means there is hope for a hardware implementation 
>> someday.
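>>
>> To make the bounded-scratch idea concrete, here is a toy stand-in in Julia 
>> (assuming nothing about the actual unum format): a fused dot product that 
>> does its scratch work exactly and rounds once at the end. For Float64 
>> inputs, a fixed-point accumulator of a few thousand bits covers every 
>> possible product, so the scratch requirement is known in advance.
>>
>>     # Toy g-layer stand-in: accumulate exactly, round once at the end.
>>     function fused_dot(xs::Vector{Float64}, ys::Vector{Float64})
>>         acc = zero(Rational{BigInt})              # exact scratch
>>         for (x, y) in zip(xs, ys)
>>             acc += Rational{BigInt}(x) * Rational{BigInt}(y)
>>         end
>>         return Float64(acc)                       # one rounding step
>>     end
>>
>>     xs = [1e16, 1.0, -1e16]; ys = [1.0, 1.0, 1.0]
>>     println(fused_dot(xs, ys))   # 1.0, exact
>>     println(sum(xs .* ys))       # 0.0, cancellation loses the 1.0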
>>
>> On Thursday, July 30, 2015 at 3:44:26 PM UTC-7, Stefan Karpinski wrote:
>>>
>>>  Fused polynomials do seem like a good idea (again, can be done for 
>>> intervals too), but what is the end game of this approach? Is there some 
>>> set of primitives that are sufficient to express all computations you might 
>>> want to do in a way that doesn't lose accuracy too rapidly to be useful? It 
>>> seems like the reductio ad absurdum is producing a fused version of your 
>>> entire program that cleverly produces a correct interval.
>>>
>>> On Thu, Jul 30, 2015 at 5:20 PM, Jason Merrill <[email protected]> 
>>> wrote:
>>>
>>>> On Thursday, July 30, 2015 at 4:22:34 PM UTC-4, Job van der Zwan wrote:
>>>>>
>>>>> On Thursday, 30 July 2015 21:54:39 UTC+2, Jason Merrill wrote:
>>>>>
>>>>>> <Analysis of examples in the book>
>>>>>>
>>>>>
>>>>> Thanks for correcting me! The open/closed element becomes pretty 
>>>>> crucial later on though, when he claims on page 225 that:
>>>>>
>>>>> a general approach for evaluating polynomials with interval arguments 
>>>>>> without any information loss is presented here for the first time.
>>>>>>
>>>>>  
>>>>> Two pages later he gives the general scheme for it (see the attached 
>>>>> picture; it was too much of a pain to extract that text with proper 
>>>>> formatting. This is OK under fair use, right?).
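>>>>>
>>>>> To make the information loss concrete, the classic dependency problem 
>>>>> shows up already in a tiny Julia sketch with plain closed endpoints 
>>>>> (nothing unum-specific):
>>>>>
>>>>>     # p(x) = x^2 - x on [0, 1]: the true range is [-1/4, 0].
>>>>>     x  = (0.0, 1.0)
>>>>>     x2 = (x[1]^2, x[2]^2)              # squaring is monotone, x >= 0
>>>>>     p  = (x2[1] - x[2], x2[2] - x[1])  # interval subtraction
>>>>>     println(p)                         # (-1.0, 1.0), far too wide
>>>>>
>>>>> The two occurrences of x get treated as independent, which is exactly 
>>>>> the loss his fused evaluation claims to avoid.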
>>>>>
>>>>> Do you have any thoughts on that?
>>>>>
>>>>
>>>> The fused polynomial evaluation seems pretty brilliant to me. He later 
>>>> goes on to suggest a fused product ratio, which should largely eliminate 
>>>> the dependency problem when evaluating rational functions. You can get an 
>>>> awful lot done with rational functions.
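>>>>
>>>> For instance, here is the dependency problem a fused product ratio would 
>>>> remove, again with plain interval endpoints in Julia rather than the 
>>>> book's machinery:
>>>>
>>>>     # x / (1 + x) on [1, 2] is monotone: true range is [1/2, 2/3].
>>>>     x   = (1.0, 2.0)
>>>>     den = (1.0 + x[1], 1.0 + x[2])        # 1 + x = [2, 3]
>>>>     q   = (x[1] / den[2], x[2] / den[1])  # naive quotient, all positive
>>>>     println(q)                            # (0.333..., 1.0), too wide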
>>>>
>>>>
>>>> <https://lh3.googleusercontent.com/-f-sYnCMJFpQ/VbqE8zbN5AI/AAAAAAAAHOk/cNTnxAUAyoU/s1600/polynomial.png>
>>>>
>>>> I actually think keeping track of open vs. closed intervals sounds like a 
>>>> pretty good idea. It might also be worth doing for other kinds of interval 
>>>> arithmetic, and I don't see any major reason it would be impossible.
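>>>>
>>>> As a minimal sketch of what that could look like (a plain 
>>>> flag-per-endpoint type of my own invention, not the unum encoding):
>>>>
>>>>     # An endpoint of a sum is closed only if both contributing
>>>>     # endpoints are closed.
>>>>     struct OCInterval
>>>>         lo::Float64; hi::Float64
>>>>         loopen::Bool; hiopen::Bool
>>>>     end
>>>>     Base.:+(a::OCInterval, b::OCInterval) = OCInterval(
>>>>         a.lo + b.lo, a.hi + b.hi,
>>>>         a.loopen || b.loopen, a.hiopen || b.hiopen)
>>>>
>>>>     a = OCInterval(0.0, 1.0, false, true)  # [0, 1)
>>>>     b = OCInterval(2.0, 3.0, true, false)  # (2, 3]
>>>>     println(a + b)  # OCInterval(2.0, 4.0, true, true), i.e. (2, 4)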
>>>> I didn't mean to say that open vs. closed intervals don't matter--I just 
>>>> meant that they don't seem to be the "secret sauce" in any of the challenge 
>>>> problems in Chapter 14. To me, the fused operations are the secret sauce in 
>>>> terms of precision, and the variable-length representation *might be* the 
>>>> secret sauce for performance, but I can't really comment on that. 
>>>>
>>>
>>>
