Re: Efficiency of using field labels vs pattern matching

2006-08-20 Thread Brian Hulley

Bulat Ziganshin wrote:

btw, if you want better efficiency, you may use unboxed
references (http://haskell.org/haskellwiki/Library/ArrayRef)


Thanks for the pointer to your ArrayRef library. I've downloaded it and it
will be very useful - it's extremely neat that the fact that something is
stored unboxed can be hidden from the rest of the program.


One thing I wondered was whether the functional dependency in Data.Ref.Universal
from the result type to the monad is actually necessary, since this FD
prevents me from adding an instance for any MonadIO, i.e. the following
instance is not valid:


   instance MonadIO m => URef m IOURef where
       -- m -> r is fine
       -- r -> m restricts m too much

Of course this isn't a big problem because I can simply define lifted
versions separately, i.e.:


   import Data.Ref hiding(newURef, readURef, writeURef)
   import GHC.Unboxed
   import Control.Monad.Trans

   -- instance MonadIO m => URef m IOURef where

   newURef :: (Unboxed a, MonadIO m) => a -> m (IOURef a)
   newURef v = liftIO $ newIOURef v

   readURef :: (Unboxed a, MonadIO m) => IOURef a -> m a
   readURef ref = liftIO $ readIOURef ref

   writeURef :: (Unboxed a, MonadIO m) => IOURef a -> a -> m ()
   writeURef ref v = liftIO $ writeIOURef ref v

   -- test monad
   newtype SomeIO a = SomeIO {runSomeIO :: IO a} deriving (Monad, MonadIO)


   foo :: SomeIO Int
   foo = do
       xRef <- newURef (57 :: Int)
       readURef xRef

   main = do
       x <- runSomeIO foo
       print x
       _ <- getChar
       return ()

Anyway, thanks for sharing your library. I'm going to put the URef functions
above into a module so I can use the same names for URef functions (i.e.
URef.T, new, read, write) as I'm already using for boxed refs.


Best regards,
Brian.
--
Logic empowers us and Love gives us purpose.
Yet still phantoms restless for eras long past,
congealed in the present in unthought forms,
strive mightily unseen to destroy us.

http://www.metamilk.com 




Re: Efficiency of using field labels vs pattern matching

2006-08-20 Thread Brian Hulley

Sigbjorn Finne wrote:

If you let the Simplifier have a crack at your code (i.e., compile
with -O or better), the same code should be generated for the two
defns of 'foo'. Try compiling with -ddump-simpl to verify.


Thanks. I tried that and you're right. It also works well with an even more 
complicated example:


   data R = R{_xRef :: !(Ref.T Int), _y :: Int}

   foo :: Bool -> R -> IO ()
   foo b r = do
       if b
           then Ref.write (_xRef r) ((_y r) * (_y r))
           else return ()
       x <- Ref.read (_xRef r)
       print x

Even though there are 4 applications of field label functions and _y is not
even a strict field, GHC -O2 manages to fetch both fields with a single case
expression:


   Main.foo :: GHC.Base.Bool -> Main.R -> GHC.IOBase.IO ()
   [GlobalId]
   [Arity 3
   Worker Main.$wfoo
   Str: DmdType SU(U(L)L)L]
   Main.foo = __inline_me (\ (w_s2sp :: GHC.Base.Bool)
   (w1_s2sq :: Main.R)
   (w2_s2sy :: GHC.Prim.State# GHC.Prim.RealWorld) ->
   case (# GHC.Prim.State# GHC.Prim.RealWorld, () #) w1_s2sq
   of w3_X2sB { Main.R ww_s2ss ww1_s2sw ->

   -- well done GHC :-)

   case (# GHC.Prim.State# GHC.Prim.RealWorld, () #) ww_s2ss
   of ww2_X2sI { GHC.STRef.STRef ww3_s2su ->
   Main.$wfoo w_s2sp ww3_s2su ww1_s2sw w2_s2sy
   }
   })

This is great because some of my records have about 20 fields so the pattern 
matching was getting really cumbersome in the source.


BTW I made a module that makes the IORef functions work with any MonadIO and
also gives them better names (for use by qualified import), which I enclose
below in case anyone is interested - I don't know where to put stuff like this:



-- |
-- Module: Prime.IORef
-- Author: Brian Hulley
-- License: Public Domain
--
-- Lifts (most of) Data.IORef to any MonadIO
-- This module should be used qualified so we can avoid
-- writing "IORef" everywhere

module Prime.IORef
   ( T
   , new
   , read
   , write
   , modify
   , weak
   ) where

   import Prelude hiding(read)
   import qualified Data.IORef as D
   import Control.Monad.Trans
   import System.Mem.Weak

   type T = D.IORef

   new :: MonadIO m => a -> m (T a)
   new x = liftIO $ D.newIORef x

   read :: MonadIO m => T a -> m a
   read x = liftIO $ D.readIORef x

   write :: MonadIO m => T a -> a -> m ()
   write x v = liftIO $ D.writeIORef x v

   modify :: MonadIO m => T a -> (a -> a) -> m ()
   modify x f = liftIO $ D.modifyIORef x f

   weak :: MonadIO m => T a -> m (Weak (T a))
   weak r = liftIO $ D.mkWeakIORef r (return ())

Regards, Brian.





Re: Efficiency of using field labels vs pattern matching

2006-08-20 Thread Bulat Ziganshin
Hello Brian,

Sunday, August 20, 2006, 5:15:41 PM, you wrote:

> data R = R {_xRef :: !(IORef Int)}

> foo :: Int -> R -> IO ()
> foo i R{_xRef = xRef} = writeIORef xRef i

> is the above more efficient than:

> foo i r = writeIORef (_xRef r) i

i think the first _may_ be more efficient because of strictness
analysis. btw, if you want better efficiency, you may use unboxed
references (http://haskell.org/haskellwiki/Library/ArrayRef)
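
for example (a sketch using the IOURef names from the library, as in
Brian's reply earlier in this thread; exact module names may differ
between ArrayRef versions):

    import GHC.Unboxed   -- from the ArrayRef library

    counter :: IO Int
    counter = do
        r <- newIOURef (0 :: Int)   -- stored unboxed, no pointer per cell
        writeIOURef r 42
        readIOURef r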




-- 
Best regards,
 Bulat                            mailto:[EMAIL PROTECTED]



Re: Efficiency of using field labels vs pattern matching

2006-08-20 Thread Sigbjorn Finne


If you let the Simplifier have a crack at your code (i.e., compile with -O
or better), the same code should be generated for the two defns of
'foo'. Try compiling with -ddump-simpl to verify.

The first version is stricter, so it'll be preferable in -Onot mode.

--sigbjorn

Brian Hulley wrote:

Hi,
I'm wondering if it is more efficient to pattern match against the 
components of a record instead of using the field functions, though 
I'd prefer to just use the field functions.


For example, if I write:

   data R = R {_xRef :: !(IORef Int)}

   foo :: Int -> R -> IO ()
   foo i R{_xRef = xRef} = writeIORef xRef i

is the above more efficient than:

   foo i r = writeIORef (_xRef r) i

I know this is probably a bit silly to worry about the overhead of an extra
function call etc, but at the moment I've got a lot of code using the first
version. I think it would look much neater just using field labels, but I
don't want to modify it if it will be less efficient, since speed is quite
critical here.


Thanks, Brian.




Efficiency of using field labels vs pattern matching

2006-08-20 Thread Brian Hulley

Hi,
I'm wondering if it is more efficient to pattern match against the 
components of a record instead of using the field functions, though I'd 
prefer to just use the field functions.


For example, if I write:

   data R = R {_xRef :: !(IORef Int)}

   foo :: Int -> R -> IO ()
   foo i R{_xRef = xRef} = writeIORef xRef i

is the above more efficient than:

   foo i r = writeIORef (_xRef r) i

I know this is probably a bit silly to worry about the overhead of an extra
function call etc, but at the moment I've got a lot of code using the first
version. I think it would look much neater just using field labels, but I
don't want to modify it if it will be less efficient, since speed is quite
critical here.


Thanks, Brian.




Re: Monomorphism restriction - is this still needed for efficiency?

2006-06-12 Thread Brian Hulley

Simon Marlow wrote:

Brian Hulley wrote:


I'm wondering if the monomorphism restriction is now just an
anachronism from Haskell 98 or if it is still needed for efficiency
ie should I just now use -fno-monomorphism-restriction when
compiling all my code to consign it to the dustbin of history? :-)

A related question is, if it can still improve efficiency when
applied to local bindings, whether it would be worthwhile to have an
additional option to switch monomorphism off for top level bindings
but still keep it for local bindings, since I can't think of any
situation where I don't want a top level binding to be fully
polymorphic. Does anyone know of any situation where a top level
binding should not be fully polymorphic?


The discussion on the monomorphism restriction on the haskell-prime
list recently should help you:

http://www.haskell.org//pipermail/haskell-prime/2006-February/000252.html

I measured what happened to the nofib suite with
-fno-monomorphism-restriction.  The answer was, on the whole, "not
much".  But there were a couple of programs that got worse.

It's a matter of personal taste whether you should use
-fno-monomorphism-restriction, I wouldn't like to emphatically declare
one way or the other which is best.


Hi Simon -
Thanks for the link to the H' MR discussion and the nofib measurements. I
don't find the MR to be a big problem - I only run into it occasionally, and
I don't really mind writing explicit type signatures if that makes sure
my code runs as fast as possible.
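
To illustrate (a standard textbook example of the MR, not specific to
this thread): a simple binding like 'len' below is monomorphic under
H98 rules unless it carries a signature, precisely so that its value
is computed at most once rather than once per use site:

    import Data.List (genericLength)

    len :: Num b => [a] -> b   -- without this signature, the MR
    len = genericLength        -- monomorphises 'len' and one of the
                               -- two uses below is rejected

    main :: IO ()
    main = do
        print (len "hello" :: Int)
        print (len "hello" :: Double)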


Best regards, Brian.






Re: Monomorphism restriction - is this still needed for efficiency?

2006-06-12 Thread Simon Marlow

Brian Hulley wrote:

I'm wondering if the monomorphism restriction is now just an anachronism 
from Haskell 98 or if it is still needed for efficiency ie should I just 
now use -fno-monomorphism-restriction when compiling all my code to 
consign it to the dustbin of history? :-)


A related question is, if it can still improve efficiency when applied 
to local bindings, whether it would be worthwhile to have an additional 
option to switch monomorphism off for top level bindings but still keep 
it for local bindings, since I can't think of any situation where I 
don't want a top level binding to be fully polymorphic.


Does anyone know of any situation where a top level binding should not 
be fully polymorphic?


The discussion on the monomorphism restriction on the haskell-prime list 
recently should help you:


http://www.haskell.org//pipermail/haskell-prime/2006-February/000252.html

I measured what happened to the nofib suite with 
-fno-monomorphism-restriction.  The answer was, on the whole, "not 
much".  But there were a couple of programs that got worse.


It's a matter of personal taste whether you should use 
-fno-monomorphism-restriction, I wouldn't like to emphatically declare 
one way or the other which is best.


Cheers,
Simon


Monomorphism restriction - is this still needed for efficiency?

2006-06-09 Thread Brian Hulley

Hi -
I'm wondering if the monomorphism restriction is now just an anachronism
from Haskell 98 or if it is still needed for efficiency, i.e. should I just
use -fno-monomorphism-restriction when compiling all my code, to consign the
restriction to the dustbin of history? :-)


A related question is, if it can still improve efficiency when applied to 
local bindings, whether it would be worthwhile to have an additional option 
to switch monomorphism off for top level bindings but still keep it for 
local bindings, since I can't think of any situation where I don't want a 
top level binding to be fully polymorphic.


Does anyone know of any situation where a top level binding should not be 
fully polymorphic?


Thanks, Brian.





RE: Int64 and efficiency

2005-06-06 Thread Simon Marlow
On 06 June 2005 12:44, Ketil Malde wrote:

> Recently, Marcin 'Qrczak' Kowalczyk posted a micro-benchmark on
> comp.lang.functional, illustrating performance with statically typed
> Int and Integer, and Kogut's dynamically typed automatically-promoted
> numbers.   (Int is fastest, Kogut second, and Integer quite a bit
> slower).
> 
> For fun, I tried to use Int64, but to my surprise it is a lot slower
> than the others.  Marcin suggested I look at stg and hc output (which
> I did without getting much wiser) and made some guesses as to what the
> reasons were.
> 
> I'm curious whether this is typical, and if so, whether there is a
> theoretical reason why Int64 is so slow?  (I would have expected a
> factor of 2-4 worse than Int, but in reality it was about 35x slower)
> 
> (Code attached, replace MyInt as appropriate.)

In theory, you're right.  I haven't investigated your example, but my
guess is that some inlining isn't happening - maybe some missing INLINE
pragmas somewhere.  You might be able to shed more light by comparing
the -ddump-stg output for the Int version with the Int64 version.

Oh, also the Int64 operations are currently performed by foreign calls,
rather than primitives, IIRC.  That will affect speed, but not by a huge
amount.

Cheers,
Simon


Int64 and efficiency

2005-06-06 Thread Ketil Malde

Hi,

Recently, Marcin 'Qrczak' Kowalczyk posted a micro-benchmark on
comp.lang.functional, illustrating performance with statically typed
Int and Integer, and Kogut's dynamically typed automatically-promoted
numbers.   (Int is fastest, Kogut second, and Integer quite a bit
slower).

For fun, I tried to use Int64, but to my surprise it is a lot slower
than the others.  Marcin suggested I look at stg and hc output (which
I did without getting much wiser) and made some guesses as to what the
reasons were.

I'm curious whether this is typical, and if so, whether there is a
theoretical reason why Int64 is so slow?  (I would have expected a
factor of 2-4 worse than Int, but in reality it was about 35x slower)

(Code attached, replace MyInt as appropriate.)

-kzm

module Main where

import Data.Int

type MyInt = Int  -- or Int64 or Integer

seqLength :: MyInt -> Int
seqLength x = loop x 0
  where
    loop :: MyInt -> Int -> Int
    loop 1 len = len
    loop k len
      | even k    = loop (k `quot` 2) $! len + 1
      | otherwise = loop (3 * k + 1) $! len + 1

main :: IO ()
main = print $ sum $ map seqLength [1..10]

-- 
If I haven't seen further, it is by standing in the footprints of giants


RE: About unsafeThaw and unsafeFreeze efficiency ....

2003-11-05 Thread Simon Marlow
 
> "unsafeThaw :: (Ix i, IArray a e, MArray b e m) => a i e -
> > m (b i e)
> Converts an immutable array into a mutable array without 
> taking a copy. This function is "unsafe" because any 
> subsequent modifications made to the mutable version of 
> the array will be shared with the immutable version. It
> is safe to use, therefore, if the immutable version is 
> never referenced
> again. "

The documentation is a bit misleading.  It should say something like:

   Converts an immutable array into a mutable array.  The 
   implementation may either simply cast the array from
   one type to the other without copying the array, or it
   may take a full copy of the array.  The non-copying
   implementation is supported between certain pairs of
   array types only; one constraint is that the array
   types must have identical representations.  The following
   pairs of array types have a non-copying O(1) implementation
   of unsafeThaw:
   
 * UArray -> IOUArray
 * UArray -> STUArray
 * Array  -> IOArray
 * Array  -> STArray

   Note that because the array is possibly not copied, any subsequent
   modifications made to the mutable version of the array may be
   shared with the immutable version.  It is safe to use, therefore, if
   the immutable version is never referenced again.

Note in particular that unsafeThaw to a StorableArray currently doesn't
have an efficient implementation.
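
For instance, a minimal sketch of one of the non-copying pairs above
(UArray -> IOUArray; the module locations follow the current library
layout and may differ in older GHCs):

    import Data.Array.Unboxed (UArray, listArray)
    import Data.Array.IO (IOUArray, readArray, writeArray)
    import Data.Array.Unsafe (unsafeThaw)

    main :: IO ()
    main = do
        let a = listArray (0, 9) [0 ..] :: UArray Int Int
        m <- unsafeThaw a :: IO (IOUArray Int Int)  -- O(1), no copy
        writeArray m 0 42    -- ok only because 'a' is never used again
        readArray m 0 >>= print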

I'll update the docs with the text above.

Cheers,
Simon



About unsafeThaw and unsafeFreeze efficiency ....

2003-11-04 Thread heron_carvalho
Hello,

 In the GHC documentation, it is said:

"unsafeThaw :: (Ix i, IArray a e, MArray b e m) => a i e -
> m (b i e)
Converts an immutable array into a mutable array without
taking a copy. This function is "unsafe" because any
subsequent modifications made to the mutable version of
the array will be shared with the immutable version. It
is safe to use, therefore, if the immutable version is
never referenced
again. "

I am trying to use unsafeThaw in a parallel program
written in a parallel language that I am developing. A
process has to transmit a UArray Int Double to another
process. I used unsafeThaw to translate the UArray into a
StorableArray. Thus, using the function
withStorableArray, it is possible to access the array
elements (doubles) in a contiguous buffer that can be
sent directly through MPI. The receiving process fetches
the message (MPI_Recv) into a buffer and copies its
elements (including array bounds information) into a
StorableArray (using withStorableArray and copyArray).
Then, applying unsafeFreeze to the StorableArray,
the original UArray Int Double sent by the sender is
obtained.
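
A sketch of the send side just described ('mpiSendDoubles' is a
hypothetical binding to an MPI wrapper, and the module locations
follow the current library layout):

    {-# LANGUAGE ForeignFunctionInterface #-}
    import Data.Array.Unboxed (UArray, bounds)
    import Data.Array.Storable (StorableArray, withStorableArray)
    import Data.Array.Unsafe (unsafeThaw)
    import Foreign.Ptr (Ptr)

    foreign import ccall "mpi_send_doubles"   -- hypothetical C wrapper
        mpiSendDoubles :: Ptr Double -> Int -> IO ()

    sendArray :: UArray Int Double -> IO ()
    sendArray a = do
        let (lo, hi) = bounds a
        sa <- unsafeThaw a :: IO (StorableArray Int Double)
        withStorableArray sa $ \p -> mpiSendDoubles p (hi - lo + 1)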

 But this is efficient only if the conversion from
UArray to StorableArray, using the unsafe versions of thaw
and freeze, does not take a copy, as the GHC documentation
suggests to the reader. However, my experiments
(profiling, measurements, etc.) have detected a source of
inefficiency in the application of unsafeThaw and
unsafeFreeze. It appears to be more efficient to
translate the array to a list (using elems) and then copy
the list into a "Ptr" array using pokeArray.

 Is unsafeThaw really copy-free? Are there any
exceptions? I need a way to access an unboxed immutable
array as a contiguous buffer in constant time, in
order to minimize the marshalling overhead of preparing a
message to be sent through a network, using MPI. This is
important for parallel scientific code, which makes
extensive use of arrays and has to transmit them
between processes.

 Any suggestions?

Heron







RE: efficiency of UArray

2002-05-17 Thread Russell O'Connor


On Thu, 16 May 2002, Simon Peyton-Jones wrote:

> GHC doesn't remove intermediate lists down both
> branches of a zip, so yes, you'll get intermediate lists.

Does deforestation not apply in this situation?

-- 
Russell O'Connor                        [EMAIL PROTECTED]
``Later generations will regard set theory as a disease from which one
has recovered.'' -- Poincare




RE: efficiency of UArray

2002-05-16 Thread Hal Daume III

> GHC doesn't remove intermediate lists down both
> branches of a zip, so yes, you'll get intermediate lists.

Okay.

> Why not use array indexing, as per your second version
> (only in Haskell)?

something along the lines of:

f arr = f' low 0
  where
    (low, high) = bounds arr
    f' pos acc | pos > high = acc
               | otherwise  = f' (pos+1) (acc + arr!pos)

?

would:

  sum [v!i + u!i | i <- range (bounds v)]

also generate an intermediate list?

And finally, what about something like:

  f u v = listArray (bounds u) [v!i * u!i | i <- range (bounds u)]

versus

  f u v = u // [(i, v!i*u!i) | i <- range (bounds u)]

?

It's very unclear to me exactly what is going on "behind the scenes" with
arrays.  I would like to see functions like:

  afoldl, afoldr, azipWith, etc...

to be defined in the libraries, since there are so many possible
implementations and, it seems, the "best" implementation could be very
compiler dependent.  But barring that happening, what's the best approach
to take for things like this?  Is // fast, or is it better to create new
arrays?
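
For concreteness, here is a sketch of what two of those might look
like for Int-indexed arrays (hypothetical names, nothing like this is
in the libraries today):

    import Data.Array.Unboxed

    afoldl :: IArray a e => (b -> e -> b) -> b -> a Int e -> b
    afoldl f z arr = go lo z
      where
        (lo, hi) = bounds arr
        go i acc | i > hi    = acc
                 | otherwise = go (i + 1) (f acc (arr ! i))

    azipWith :: (IArray a e1, IArray a e2, IArray a e3)
             => (e1 -> e2 -> e3) -> a Int e1 -> a Int e2 -> a Int e3
    azipWith f u v =
        listArray (bounds u) [ f (u ! i) (v ! i) | i <- range (bounds u) ]

afoldl traverses by index, so it builds no intermediate element list;
azipWith still builds one list for listArray, but only one.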

 - Hal

> | -Original Message-
> | From: Hal Daume III [mailto:[EMAIL PROTECTED]] 
> | Sent: 16 May 2002 00:55
> | To: GHC Users Mailing List
> | Subject: efficiency of UArray
> | 
> | 
> | can we expect a function like:
> | 
> |   sum [x*y | (x,y) <- zip (elems v) (elems u)]
> | 
> | to be as efficient as, say:
> | 
> | sum = 0
> | for i=1, n
> |   sum = sum + v[i] * u[i]
> | 
> | ?
> | 
> | Basically, will any intermediate lists be created here?
> | 




RE: efficiency of UArray

2002-05-16 Thread Simon Peyton-Jones

GHC doesn't remove intermediate lists down both
branches of a zip, so yes, you'll get intermediate lists.

Why not use array indexing, as per your second version
(only in Haskell)?
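
A sketch of that indexing version in Haskell (assuming Int-indexed
unboxed arrays with matching bounds):

    import Data.Array.Unboxed

    dot :: UArray Int Double -> UArray Int Double -> Double
    dot v u = go lo 0
      where
        (lo, hi) = bounds v   -- assumes u shares these bounds
        go i acc | i > hi    = acc
                 | otherwise = go (i + 1) (acc + v ! i * u ! i)

No intermediate lists are built here.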

Simon

| -Original Message-
| From: Hal Daume III [mailto:[EMAIL PROTECTED]] 
| Sent: 16 May 2002 00:55
| To: GHC Users Mailing List
| Subject: efficiency of UArray
| 
| 
| can we expect a function like:
| 
|   sum [x*y | (x,y) <- zip (elems v) (elems u)]
| 
| to be as efficient as, say:
| 
| sum = 0
| for i=1, n
|   sum = sum + v[i] * u[i]
| 
| ?
| 
| Basically, will any intermediate lists be created here?
| 



efficiency of UArray

2002-05-15 Thread Hal Daume III

can we expect a function like:

  sum [x*y | (x,y) <- zip (elems v) (elems u)]

to be as efficient as, say:

sum = 0
for i=1, n
  sum = sum + v[i] * u[i]

?

Basically, will any intermediate lists be created here?

--
Hal Daume III

 "Computer science is no more about computers| [EMAIL PROTECTED]
  than astronomy is about telescopes." -Dijkstra | www.isi.edu/~hdaume




Re: efficiency question

2002-02-11 Thread Jorge Adriano

On Monday 11 February 2002 02:10, Hal Daume III wrote:
> So instead of storing 1 and 2 in the list, it stores pointers to
> them.  The reason it does this is that it makes it easy to generate
> code for polymorphic functions.  (,) being boxed means that instead of
> storing a pair of elements, you're storing a pair of pointers.  That means
> you get worse performance because you have to chase more pointers.
>
> (at least that's my impression)
Thanks, I knew the concept but not the name.


On Sunday 10 February 2002 18:48, Kirsten Chevalier wrote:
> I'd guess that it's not just that you have to apply the (,) constructor --
> it also has to do with the fact that the tuples it's constructing here are
> boxed.
This brings another question to my mind: isn't tupling a standard
technique used in functional programming? I'm pretty sure I've seen it
covered in some papers/textbooks.
I for one would not expect that folding the list twice would be more
efficient...
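
A sketch of the usual fix - forcing the accumulator components with
seq. It is written as a left fold, so not literally equivalent to the
foldr versions, but it shows the single-traversal technique:

    test3 :: [Int] -> (Int, Int)
    test3 l = go l 1 1
      where
        go []     a b = (a, b)
        go (x:xs) a b = let a' = x + a
                            b' = x - b
                        in  a' `seq` b' `seq` go xs a' b'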

J.A.



RE: efficiency question

2002-02-11 Thread Julian Seward (Intl Vendor)


The transformations that the compiler does, especially with -O,
are sufficiently complex that trying to second-guess the performance
effects of source changes is difficult.  You'd get a much better
handle on this by reading the post-optimised program, which you
can get with -ddump-simpl.  Good luck!  This is not for the
faint of heart.

J

| -Original Message-
| From: Jorge Adriano [mailto:[EMAIL PROTECTED]] 
| Sent: Sunday, February 10, 2002 7:18 PM
| To: [EMAIL PROTECTED]
| Subject: Re: efficiency question
| 
| 
| On Sunday 10 February 2002 18:48, Kirsten Chevalier wrote:
| > I'd guess that it's not just that you have to apply the (,) 
| constructor --
| > it also has to do with the fact that the tuples it's 
| constructing here are
| > boxed.
| 
| could you elaborate a little more on that (boxed / unboxed) 
| or provide a link 
| on the subject?
| 
| thanks
| J.A.



Re: efficiency question

2002-02-10 Thread Hal Daume III

I don't know of a reference, but here's the basic idea:  

if you have a polymorphic type (like [a]), and it's instantiated to [Int]
(or whatever), instead of storing in memory a standard linked list 
representation like 
  
[1,2] = 1:2:[] =
  
  [ 1 | *-]---> [ 2 | *-]---> Nil

(which would be standard), it stores it as:
  
  [ * | *-]---> [ * | *-]---> Nil
    |             |
    v             v
    1             2
  
So instead of storing 1 and 2 in the list, it stores pointers to
them.  The reason it does this is that it makes it easy to generate
code for polymorphic functions.  (,) being boxed means that instead of
storing a pair of elements, you're storing a pair of pointers.  That means
you get worse performance because you have to chase more pointers.
  
(at least that's my impression)
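
A common remedy (a standard technique, not something from this thread)
is a specialised pair type with strict fields, which avoids the thunks
and the pointer chasing, and which GHC can often unbox with -O:

    data P = P !Int !Int

    sumAndDiff :: [Int] -> P
    sumAndDiff = go (P 1 1)
      where
        go p       []     = p
        go (P a b) (x:xs) = go (P (a + x) (b - x)) xs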

 - hal


On Sun, 10 Feb 2002, Jorge Adriano wrote:

> On Sunday 10 February 2002 18:48, Kirsten Chevalier wrote:
> > I'd guess that it's not just that you have to apply the (,) constructor --
> > it also has to do with the fact that the tuples it's constructing here are
> > boxed.
> 
> could you elaborate a little more on that (boxed / unboxed) or provide a link 
> on the subject?
> 
> thanks
> J.A.




Re: efficiency question

2002-02-10 Thread Jorge Adriano

On Sunday 10 February 2002 18:48, Kirsten Chevalier wrote:
> I'd guess that it's not just that you have to apply the (,) constructor --
> it also has to do with the fact that the tuples it's constructing here are
> boxed.

could you elaborate a little more on that (boxed / unboxed) or provide a link 
on the subject?

thanks
J.A.



Re: efficiency question

2002-02-10 Thread Kirsten Chevalier

On Sat, Feb 09, 2002 at 12:01:02PM -0500, [EMAIL PROTECTED] 
wrote:
> 
> > I'd say that's because in the second case you've also got to apply the (,),
> > besides the (+)/(-) constructor during the traversing...
> > Am I right?
> 
> oops... I meant to write: the (,) constructor besides the (+)/(-)...
> J.A.
> 

I'd guess that it's not just that you have to apply the (,) constructor -- it
also has to do with the fact that the tuples it's constructing here are boxed.

-- 
Kirsten Chevalier * [EMAIL PROTECTED] * Often in error, never in doubt
"When nothing remains of me, / Not a particle, / I shall sparkle in the
footnote of an article." -- Daniel Aaron
http://www.cs.berkeley.edu/~krc/



Re: efficiency question

2002-02-08 Thread Jorge Adriano


> I'd say that's because in the second case you've also got to apply the (,),
> besides the (+)/(-) constructor during the traversing...
> Am I right?

oops... I meant to write: the (,) constructor besides the (+)/(-)...
J.A.



Re: efficiency question

2002-02-08 Thread Jorge Adriano

On Friday 08 February 2002 22:14, Hal Daume III wrote:
> define
>
> test1 l =
>     let s1 = foldr (+) 1 l
>         s2 = foldr (-) 1 l
>     in  (s1, s2)
>
> test2 l =
>     let s = foldr (\x (a,b) -> (x+a,x-b)) (1,1) l
>     in  s
>
> why is test1 so much faster than test2 for long lists l (eg
> [1..100])?  replacing foldr with foldl makes it faster (of course),
> but test2 is still much slower.
>
> i *expected* test2 to be much faster because you're only traversing the
> list once.  presumably the two elements "a" and "b" in test2 could be put
> in registers and i'd imagine test2 should be faster (it certainly would be
> if written in c).
>
>  - hal


I'd say that's because in the second case you've also got to apply the (,),
besides the (+)/(-) constructor during the traversing...
Am I right?

(if you had not mentioned it though, I'd probably also expect the 2nd one to
be faster...)
J.A.



efficiency question

2002-02-08 Thread Hal Daume III

define

test1 l =
    let s1 = foldr (+) 1 l
        s2 = foldr (-) 1 l
    in  (s1, s2)

test2 l =
    let s = foldr (\x (a,b) -> (x+a,x-b)) (1,1) l
    in  s

why is test1 so much faster than test2 for long lists l (eg
[1..100])?  replacing foldr with foldl makes it faster (of course),
but test2 is still much slower.

i *expected* test2 to be much faster because you're only traversing the
list once.  presumably the two elements "a" and "b" in test2 could be put
in registers and i'd imagine test2 should be faster (it certainly would be
if written in c).

 - hal





++ efficiency. Reply

2000-05-05 Thread S.D.Mechveliani


Mike Jones <[EMAIL PROTECTED]> writes on ++ efficiency:

> how do I approximate its efficiency compared to adding one
> element at a time?

In a "regular" situation,   xs ++ ys  
needs the number of "steps" proportional to  (length xs).
This is according to the simplest Haskell program you could suggest 
for (++) - guess, what is this program?
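
For reference, that program is the standard Prelude definition:

    (++) :: [a] -> [a] -> [a]
    []     ++ ys = ys
    (x:xs) ++ ys = x : (xs ++ ys)

It recurses down the whole of xs and never inspects ys.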

> Does 1:2:3:4:5:6:7:8 execute faster than [1, 2, 3, 4] ++ [5, 6, 7, 8], 
> where the first case is executed recursively in a function?

These literal expressions the compiler is likely to evaluate at
compile time. They would both cost zero at run-time.

But if at run-time  ys = [5,6,7,8]  is already "ready", then
   [1,2,3,4]++ys
costs 4 "steps",  while  1:2:3:4:5:6:7:8  costs 8 steps.
Lazy evaluation makes the truth of these considerations
depend highly on the concrete situation in the program.

Also, if some designer implements Haskell based on some different
internal model for the lists (God save!), then one could make (++)
regularly cost 1 step ...

Generally, beware of the (++) danger in some situations.   

--
Sergey Mechveliani
[EMAIL PROTECTED]







++ efficiency

2000-05-03 Thread Mike Jones

Hi,

If I use ++ to concatenate two lists, how do I calculate the number of copy
operations, i.e. how do I approximate its efficiency compared to adding one
element at a time?

For example:

Does 1:2:3:4:5:6:7:8 execute faster than [1, 2, 3, 4] ++ [5, 6, 7, 8], where
the first case is executed recursively in a function?

Mike






Re: Q: Efficiency of Array Implementation

1999-02-22 Thread Shin-Cheng Mu


On Fri, 19 Feb 1999 08:33:13 -0800
Simon Marlow <[EMAIL PROTECTED]> wrote:
> There's a well-known trick the compiler can play to make (//) work in
> constant time: namely instead of copying the array you perform the update
> destructively, but ensure that any references to the old array are
> indirected through a filter that returns the old values for any updated
> slots.  In the common case where the array is used single-threadedly, the
> filter will be garbage collected.  I believe hbc uses this idea.

Just some noise. :) My boss has some papers on a relevant topic:

A randomized implementation of multiple functional arrays,
Tyng-Ruey Chuang.
In Proceedings of the 1994 ACM Conference on Lisp and Functional Programming,
pages 173-184. Orlando, Florida, USA, June 1994.
http://www.iis.sinica.edu.tw/~trc/lfp94.ps

Fully persistent arrays for efficient incremental updates and voluminous reads,
Tyng-Ruey Chuang.
In Bernd Krieg-Brueckner, editor, 4th European Symposium on Programming, 
pages 110-129. Rennes, France, February 1992. Lecture Notes in Computer
Science, Volume 582. Springer-Verlag. 

sincerely,
Shin-Cheng Mu




RE: Q: Efficiency of Array Implementation

1999-02-19 Thread Simon Marlow

> This reminds me of a question I could not solve on my own yet (due to
> lack of an operational profiler for the most part): Is it 
> possible that
> ghc knows how to transform Array data structures into destructive
> arrays in some settings?

If you mean does ghc ever implement the (//) operation destructively, the
answer is no.  It always copies the array before performing the update.  

There's a well-known trick the compiler can play to make (//) work in
constant time: namely instead of copying the array you perform the update
destructively, but ensure that any references to the old array are
indirected through a filter that returns the old values for any updated
slots.  In the common case where the array is used single-threadedly, the
filter will be garbage collected.  I believe hbc uses this idea.
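
A minimal, purely illustrative sketch of that trick (hypothetical API,
not hbc's actual implementation): the newest version owns the flat
mutable array, and a superseded version becomes a "filter" node that
shadows the slot it updated.  It assumes updates only ever go through
the newest handle:

    import Data.IORef
    import Data.Array.IO

    data VArr e = Flat (IOArray Int e) | Diff Int e (IORef (VArr e))

    vread :: IORef (VArr e) -> Int -> IO e
    vread r i = readIORef r >>= \v -> case v of
        Flat a                    -> readArray a i
        Diff j old r' | i == j    -> return old
                      | otherwise -> vread r' i

    vupdate :: IORef (VArr e) -> Int -> e -> IO (IORef (VArr e))
    vupdate r i x = do
        Flat a <- readIORef r        -- assumes r is the newest version
        old <- readArray a i
        writeArray a i x             -- the update itself is destructive
        r' <- newIORef (Flat a)      -- the new handle owns the flat array
        writeIORef r (Diff i old r') -- the old handle filters over it
        return r'

If the old handle becomes garbage, only the flat array survives.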

> I had an algorithm with a terrible time complexity and wrote an
> implementation using purely functional Arrays. Since the runtime was
> still too bad, I tried ST together with MutableArrays instead, but the
> runtime got worse.

There could be a number of reasons for this: perhaps your program wasn't
reliant on O(1) complexity for array update, perhaps the overloading was
expensive (indexing arrays is overloaded), or perhaps the ST combinators
weren't being inlined.  In GHC 4.02 the latter two shouldn't be a problem:
if your array is indexed with Int, then you'll get specialised array
operations, and the ST implementation is now much faster than pre-4.00
compilers.

If you still have the program, we'd love to take a look at it and perhaps
add it to our benchmark suite.

Cheers,
Simon



Re: Q: Efficiency of Array Implementation

1999-02-19 Thread fis


> Date: Fri, 19 Feb 1999 09:41:58 +0100 (MET)
> From: Lennart Augustsson <[EMAIL PROTECTED]>
> CC: [EMAIL PROTECTED], [EMAIL PROTECTED],
> [EMAIL PROTECTED]
>
>
> > Concerning 'rapid access' you found in docs - it is hard to believe
> > this access is as fast as in C array - i mean changing X[i] in array
> > X. Because when this change is done fast, it is a side effect, breaks
> > functionality.
> Extracting values from an array can be as fast as in C.  The tricky
> part is contructing the array.  The Haskell Array module provides
> one way of constructing monolithic arrays, i.e., arrays that are built
> once and never again modified.
> But there are other ways, e.g., using ST monad which allows incremental
> construction of the array.
>
>-- Lennart
>

This reminds me of a question I could not solve on my own yet (due to
lack of an operational profiler for the most part): Is it possible that
ghc knows how to transform Array data structures into destructive
arrays in some settings?

I had an algorithm with a terrible time complexity and wrote an
implementation using purely functional Arrays. Since the runtime was
still too bad, I tried ST together with MutableArrays instead, but the
runtime got worse.

I don't have the time to figure out the details right now, but if you
plan to stay interested in them for a longer period, tell me and I will
give you more information when I work on the problem again. (It's still
possible that it's a bug in my code, however.)


 -Matthias





-- 
Max-Planck-Institut für Informatik  |  DFKI
[EMAIL PROTECTED]   |  [EMAIL PROTECTED]
http://www.mpi-sb.mpg.de/~fis   |





Re: Q: Efficiency of Array Implementation

1999-02-19 Thread Ch. A. Herrmann

Hi,

> "S" == S D Mechveliani <[EMAIL PROTECTED]> writes:

S> Concerning 'rapid access' you found in docs - it is hard to
S> believe this access is as fast as in a C array - I mean changing
S> X[i] in array X. Because when this change is done fast, it is a
S> side effect and breaks functionality.

if the compiler can be sure that only a single copy of the array is
needed from then on, it should be possible in principle, instead of
making a modified copy of the array, just to change the old version in place.

S> I always thought that this is
S> the main cost the functional world pays for the nice functional
S> principles

It should not be necessary to *pay* for it more than perhaps losing
laziness for a small part of the program. Consider that in the
imperative style you also have to carry copies if you
need different modifications of an array. The question is how can
the compiler of a functional language be sure that you don't use
different modifications? At least a compiler implementation can
invite the programmer to help by providing an array monad together
with an interface that cuts off the array monad from the rest of the
program (a way to get out of the monad in a secure way).
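
This is essentially what GHC's ST monad provides: runSTUArray builds
an array by in-place updates and then seals it off as an immutable
array (a sketch, using the module layout of later GHCs):

    import Data.Array.ST
    import Data.Array.Unboxed

    squares :: Int -> UArray Int Int
    squares n = runSTUArray $ do
        a <- newArray (1, n) 0
        mapM_ (\i -> writeArray a i (i * i)) [1 .. n]
        return a

Outside squares the array is purely functional; the destructive
updates cannot be observed.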

S> to think of how to avoid indices in the critical
S> parts of the algorithm.

(1) Sometimes you don't want to because you are short on time
and have an algorithm based on indexing available already.
(2) Let's assume that array accesses can be performed in constant
time (which is an unrealistic, but frequently made assumption).
I'm not sure if graph algorithms can always be translated to an
equivalent which has the same asymptotic time complexity without
using marking. If you spend a logarithm, OK.
-
 Christoph Herrmann
 E-mail:  [EMAIL PROTECTED]
 WWW: http://brahms.fmi.uni-passau.de/cl/staff/herrmann.html





Re: Q: Efficiency of Array Implementation

1999-02-19 Thread Lennart Augustsson


> Concerning 'rapid access' you found in docs - it is hard to believe
> this access is as fast as in a C array - I mean changing X[i] in array
> X. Because when this change is done fast, it is a side effect and breaks
> functionality.
Extracting values from an array can be as fast as in C.  The tricky
part is constructing the array.  The Haskell Array module provides
one way of constructing monolithic arrays, i.e., arrays that are built
once and never again modified.
But there are other ways, e.g., using the ST monad, which allows
incremental construction of the array.
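
For instance, monolithic construction with accumArray (using the
hierarchical module name of later GHCs) - the whole array is defined
in one go and never modified afterwards:

    import Data.Array

    histogram :: [Int] -> Array Int Int
    histogram xs = accumArray (+) 0 (0, 9)
                              [ (x, 1) | x <- xs, x >= 0, x <= 9 ]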

   -- Lennart



Re: Q: Efficiency of Array Implementation

1999-02-19 Thread S.D.Mechveliani

Concerning 'rapid access' you found in docs - it is hard to believe
this access is as fast as in a C array - I mean changing X[i] in array
X. Because when this change is done fast, it is a side effect and breaks
functionality.
I always thought that this is the main cost the functional world pays
for the nice functional principles - having to think of how to avoid
indices in the critical parts of the algorithm.
Well, I may be mistaken ...



Re: Q: Efficiency of Array Implementation

1999-02-19 Thread Jan Laitenberger


Dear Mr. Mechveliani,
 

> I thought efficient arrays are impossible in a functional language.

I don't share this thought...

Here is a quote from the Haskell Library Report:

\begin{quote}
  ``Haskell provides indexable arrays, which may be thought of as functions
whose domains are isomorphic to contiguous subsets of the integers.
Functions restricted in this way can be implemented efficiently;
in particular, a programmer may reasonably expect rapid access to the
components.''
\end{quote}
   
> So we have to organise the data so as to avoid access by index
> - as far as possible.

Sure. If possible - ok.


Regards,

Jan

 ___
'---|--
|  __,   _  _  EMail: [EMAIL PROTECTED]
| /  |  / |/ | WWWeb: http://www.uni-passau.de/~laitenbe/
|/\_/|_/  |  |_/
   /| Laitenberger 
--(-|--
   \|



Re: Q: Efficiency of Array Implementation

1999-02-19 Thread S.D.Mechveliani

I thought efficient arrays are impossible in a functional language.
Binary trees are to be used instead, and still they will be much slower
than a C array. So we have to organise the data so as to avoid
access by index - as far as possible. Also, the C array holds a block of
memory, no matter what part of it is used. And a balanced binary tree
- it takes and returns the cells flexibly.

Sergey Mechveliani
[EMAIL PROTECTED]



RE: Efficiency of Array Implementation

1999-02-18 Thread Simon Peyton-Jones


> I recently noticed in a test program, that updating a table of
> fixed size (index and entries of type Int) was slower using an
> Array instead of our AVL implementation.
> 
> Does anybody know which compiler option I must give on the
> command line that the Array is translated to a C array?
> 
> I was using ghc-4.01 without special options.

GHC is very poor on array performance.  To do a halfway good
job we have to get list fusion to work, and that keeps slipping
down the agenda.

Sorry

Simon



Q: Efficiency of Array Implementation

1999-02-18 Thread Jan Laitenberger


Hi,


I recently noticed in a test program that updating a table of
fixed size (index and entries of type Int) was slower using an
Array than with our AVL implementation.

Does anybody know which compiler option I must give on the
command line so that the Array is translated to a C array?

I was using ghc-4.01 without special options.


Many thanks in advance,   

Jan
