I thank  Fergus Henderson <[EMAIL PROTECTED]>
and      Dave Tweed <[EMAIL PROTECTED]>

for their explanations of the Cryptarithm test and its C++ program.
Fergus Henderson writes

> But the comment about the ratio being smaller will probably not be,
> because the conditions when pass-by-value must be used are when
> the routine makes internal use of destructive update, and in precisely
> those cases the routine is likely to be significantly more efficient
> than the Haskell equivalent.
>
> But of course the ratio will vary significantly from benchmark to benchmark.


In this example, none of the functions  (condition1, condition2, main)
of the C++ program needs to copy any data.
First,  condition1 and condition2  take a pointer to the vector and
do not change the vector's components. Second, `main' changes this
vector in place each time, and the nature of the example is such that
this still leads to the correct solution.
Is it not the best possible task choice for C ?

As to Haskell, what might it do when, say,  condition1 p = all (< 20) p
starts to evaluate?
First it tries to find whether   head p < 20.
This has to apply `<' to  head p.
But  p  is needed by other parts of the program. Probably the
interpreter sees that the node of  p  is referenced more than once at
this moment. So, fearing that evaluating  head p < 20  may destroy the
value of  p,  the interpreter copies  head p  before applying `<'.
In this manner, it would copy the whole list  p  before  condition1 p
is evaluated.
What would the Haskell implementors say: does Haskell act this way?
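For reference, here is a sketch of how  all  can be defined (my own
illustration in the Prelude style, not GHC's actual source). Note that it
only pattern-matches on the cons cells of the list and applies the
predicate to each element; nothing in it writes to or duplicates a cell,
so at least at the source level no copying is called for:

```haskell
-- A Prelude-style definition of `all' (a sketch, not GHC's actual
-- source): it walks the spine of the list, applying the predicate
-- to each element.  Nothing here copies or mutates a list cell.
allSketch :: (a -> Bool) -> [a] -> Bool
allSketch _ []     = True
allSketch f (x:xs) = f x && allSketch f xs

condition1 :: [Int] -> Bool
condition1 = allSketch (< 20)
```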

On the other hand, the compiler might observe that   all (< 20)
does not change its argument list, hence             condition1 p
does not need to copy  p,  nor any part of it.
It might then compile this to code where a pointer to the list  p
is passed, not the whole list, and so avoid the copying.
Generally, it is interesting how Haskell evaluates  condition1.
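One piece of evidence that no whole-list copy is made: laziness lets
condition1  answer on an infinite list. A small sketch (my own demo,
assuming the usual Prelude  all):

```haskell
-- If evaluating `condition1 p' first copied all of p, the call below
-- could never terminate, because the list is infinite.  Instead,
-- `all' stops at the first element that fails the predicate.
condition1 :: [Int] -> Bool
condition1 = all (< 20)

demo :: Bool
demo = condition1 [19, 25 ..]   -- 25 fails (< 20), so this is False
```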

But the Haskell program also contains  permutations [0..9],
which builds a long list.  10! times, the current permutation is
built in a new place (unlike in C).  After memory fills up, many
cells become garbage - because only the current tail of this list
takes part in further computation.
Right?
I expect that  permutations [0..9]  unavoidably implies  10! * 10
allocations.
But I am in doubt about  condition1 and condition2.
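The  10! * 10  figure works out as follows (a back-of-the-envelope
count, assuming one cons cell per list element and ignoring any tail
sharing between permutations):

```haskell
-- Estimated cons-cell allocations for  permutations [0..9]:
-- 10! permutations, each built as a fresh 10-element list.
factorial :: Integer -> Integer
factorial n = product [1 .. n]

estimatedAllocs :: Integer
estimatedAllocs = factorial 10 * 10
-- factorial 10 = 3628800, so  estimatedAllocs = 36288000
```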

Further, observing all these functions, the compiler might conclude
that the result does not require forming the permutations in a new
place at all.
Could it compile this into forming the permutations "in place" ?
The language does not allow one to express this, but the compiler has
such freedom. It might use it in the simple cases where it succeeds in
detecting the "in-place" possibility.
- ?
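Haskell can at least express the in-place style explicitly, even if the
compiler does not derive it. Below is a sketch of Heap's algorithm over
a mutable array in the ST monad (my own illustration, using the standard
`array' package; it is not the transformation the compiler would
perform, only what "permutations by in-place swaps" looks like when
written by hand):

```haskell
import Control.Monad (forM, when)
import Control.Monad.ST (ST, runST)
import Data.Array.ST (STUArray, newListArray, readArray,
                      writeArray, getElems)

-- Heap's algorithm: every permutation is produced by a single swap
-- in one mutable array; only the snapshots taken by getElems are
-- fresh lists.
permsInPlace :: [Int] -> [[Int]]
permsInPlace xs = runST $ do
    arr <- newListArray (0, n - 1) xs :: ST s (STUArray s Int Int)
    go arr n
  where
    n = length xs

    swap arr i j = do
        a <- readArray arr i
        b <- readArray arr j
        writeArray arr i b
        writeArray arr j a

    go arr k
        | k <= 1    = do p <- getElems arr
                         return [p]
        | otherwise = fmap concat $ forM [0 .. k - 1] $ \i -> do
              ps <- go arr (k - 1)
              -- Heap's rule: after each sub-round, swap one pair.
              when (i < k - 1) $
                  if even k then swap arr i (k - 1)
                            else swap arr 0 (k - 1)
              return ps
```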


------------------
Sergey Mechveliani
[EMAIL PROTECTED]

------------------------------------------------------------------
import Data.List (find)

permutations :: [Int] -> [[Int]]
permutations    []     = [[]]
permutations    (j:js) = addOne $ permutations js
                             where
                             addOne []       = []
                             addOne (ks:pms) = (ao ks)++(addOne pms)

                             ao []     = [[j]]
                             ao (k:ks) = (j:k:ks):(map (k:) $ ao ks)

main = putStr $ shows (find condition $ permutations p0) "\n"
         where
         (p0, pLast) = ([0..9], reverse [0..9])
         condition p = all (< 20) p  &&  p==pLast
                       --condition1      condition2


