Zenna, I have been wondering whether there might be something about the 
tiling nature of unums that makes them particularly well suited to solving 
problems posed on higher-dimensional surfaces.


On Friday, July 31, 2015 at 3:54:29 PM UTC-4, Zenna Tavares wrote:
>
>> The quick explanation for why unums/ubounds don't diverge like intervals 
>> do is that when the result gets wider, it divides like an amoeba into a 
>> collection of unums that alternate between exact and inexact values that 
>> tile the interval.
>
>
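> A rough Python sketch of what that tiling means (purely illustrative, not 
> actual unum/ubox code; the ulp width, Fraction endpoints, and tile() helper 
> are my own simplifications):
>
>     from fractions import Fraction
>
>     # Split [lo, hi] into alternating exact values and open ULP-wide
>     # pieces, the way a widening ubound is described as tiling its
>     # interval. Assumes lo and hi are multiples of ulp.
>     def tile(lo, hi, ulp):
>         tiles = []
>         x = lo
>         while x < hi:
>             tiles.append((x, x, True))         # exact unum at x
>             tiles.append((x, x + ulp, False))  # open interval (x, x+ulp)
>             x += ulp
>         tiles.append((hi, hi, True))           # exact unum at hi
>         return tiles
>
>     print(len(tile(Fraction(0), Fraction(1), Fraction(1, 4))))
>     # 9 tiles: 5 exact points alternating with 4 open quarter-wide pieces
>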
> It seems like this will incur another problem: doing Cartesian-product 
> computations with an ever-increasing number of boxes.  In other words, if 
> at some intermediate stage you split your unum into N uboxes and then do 
> some operation with M other uboxes, you end up with N*M uboxes in the 
> output.  This will continue until your memory explodes.  Is the unum 
> solution to try to merge boxes at every stage?
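>
> A toy Python sketch of that growth (plain closed intervals standing in for 
> uboxes; the tile counts are made up):
>
>     # N tiles covering x in [0, 1] and M tiles covering y in [1, 2];
>     # one interval addition produces N*M result boxes.
>     N, M = 8, 8
>     xs = [(i / N, (i + 1) / N) for i in range(N)]
>     ys = [(1 + j / M, 1 + (j + 1) / M) for j in range(M)]
>
>     def iadd(a, b):
>         return (a[0] + b[0], a[1] + b[1])
>
>     results = [iadd(a, b) for a in xs for b in ys]
>     print(len(results))   # 64 boxes out of one operation on 8 x 8 tiles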
>
> In higher dimensions it gets worse. Fundamentally, you cannot approximate 
> (i.e. cover) high-dimensional geometric shapes efficiently with boxes: for 
> most shapes you need a number of boxes exponential in the dimension.
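>
> A brute-force toy count (my own sketch) of how many grid boxes of side 
> 0.25 it takes just to touch the unit sphere |x| = 1 in d dimensions:
>
>     from itertools import product
>     from math import sqrt
>
>     # Count cells of side eps in the grid over [-1, 1]^d whose closed box
>     # intersects the unit sphere.
>     def boxes_touching_sphere(d, eps=0.25):
>         n = int(round(2 / eps))              # cells per axis
>         count = 0
>         for cell in product(range(n), repeat=d):
>             lo = [-1 + c * eps for c in cell]
>             hi = [l + eps for l in lo]
>             dmin = sqrt(sum((0.0 if l <= 0.0 <= h else min(abs(l), abs(h))) ** 2
>                             for l, h in zip(lo, hi)))
>             dmax = sqrt(sum(max(abs(l), abs(h)) ** 2 for l, h in zip(lo, hi)))
>             if dmin <= 1.0 <= dmax:
>                 count += 1
>         return count
>
>     for d in (2, 3, 4):
>         print(d, boxes_touching_sphere(d))   # the count climbs steeply with d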
>
> Zenna
>
> On Friday, July 31, 2015 at 3:45:32 PM UTC-4, John Gustafson wrote:
>>
>> Existing methods for the evaluation of polynomials do not produce tight 
>> bounds. There is indeed a mountain of papers written about doing it, all 
>> with the flavor of "our method sucks less" because it usually gets a 
>> somewhat tighter bound, though still not as tight as possible. I was 
>> surprised to learn this, because I expected to just Google the evaluation 
>> of polynomials over intervals and make a unum version of it. That's when I 
>> discovered that no one had solved the problem, and that proofs had even 
>> been published that it is NP-hard. Gulp! So it took me an extra *three 
>> months* to solve that problem, and I wondered if I should publish it as a 
>> standalone paper, but no, I needed to get the book done.
>>
>> I strongly agree that people cannot be asked to solve a new numerical 
>> analysis problem every time they program a new function. That is 
>> *exactly* the point I make over and over in Part 2. We need to simply 
>> express what we want calculated, and let the *computer* do all the work 
>> to get an answer that satisfies the environment setting of maximum allowed 
>> relative error. The last thing we need is a set of "numerical recipes" to 
>> choose from.
>>
>> On Thursday, July 30, 2015 at 3:46:52 PM UTC-7, Steven G. Johnson wrote:
>>>
>>> People have devised methods for evaluation of polynomials with interval 
>>> arithmetic, too (google it).  Not sure how his method compares.  It is well 
>>> known that you can often work around the dependency problem for very 
>>> specific expressions.
>>>
>>> However, it is not practical to tell people that they need to solve a new 
>>> numerical-analysis research problem every time they have a new program in 
>>> which a variable is used more than once (used more than once => dependency 
>>> problem).
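>>>
>>> A toy Python illustration of why that reuse widens bounds (standard 
>>> interval arithmetic, nothing unum-specific; the example polynomial is 
>>> mine):
>>>
>>>     # Evaluate p(x) = x*x - x over X = [0, 1] two ways.
>>>     def isub(a, b):
>>>         return (a[0] - b[1], a[1] - b[0])
>>>
>>>     def imul(a, b):
>>>         p = [a[0] * b[0], a[0] * b[1], a[1] * b[0], a[1] * b[1]]
>>>         return (min(p), max(p))
>>>
>>>     X = (0, 1)
>>>     print(isub(imul(X, X), X))         # naive x*x - x:    (-1, 1)
>>>     print(imul(X, isub(X, (1, 1))))    # factored x*(x-1): (-1, 0)
>>>     # The true range is [-0.25, 0]; both bounds are loose because the
>>>     # two occurrences of X are treated as if they varied independently.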
>>>
>>> And if you don't solve the dependency problem, your error bounds are 
>>> useless for general-purpose tasks.  And without accurate error bounds, 
>>> adaptive precision is a non-starter.
>>>
>>
