Re: [Haskell-cafe] Does anyone know where George Pollard is?

2012-11-08 Thread Ketil Malde
Myles C. Maxfield myles.maxfi...@gmail.com writes:

 Does anyone know where he is? 

On GitHub? https://github.com/Porges  One of the repos was apparently
updated less than a week ago.

 If not, is there an accepted practice to
 resolve this situation? Should I upload my own 'idna2' package?

You can always upload a fork, but unless you have a lot of new
functionality that won't fit naturally in the old package, you can
perhaps try a bit more to contact the original author.

-k



[Haskell-cafe] Why doesn't GHC optimize recursive functions by converting them into `let`?

2012-11-08 Thread Petr P
  Hi,

recently I found out that some recursive functions can be greatly optimized
just by rewriting them using `let` so that they're not recursive themselves
any more (only the inner let is). To give an example:

 fix :: (a -> a) -> a
 fix f = f (fix f)

isn't optimized, because it's a recursive function and so it isn't inlined
or anything. However, defining it as

 fix f = let x = f x in x

makes it much more efficient, since `fix` is not recursive any more, so it
gets inlined and then optimized for a given argument `f`.
(See
http://stackoverflow.com/questions/12999385/why-does-ghc-make-fix-so-confounding
)
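
A minimal usage sketch (the `factorial` here is just a hypothetical example,
not from the discussion above): with the let-based definition, GHC can inline
`fix` at the call site and optimize the specialised loop directly.

fix :: (a -> a) -> a
fix f = let x = f x in x

-- The recursion now lives in the let-bound x, so after inlining fix here,
-- GHC can optimize the resulting loop for this particular argument.
factorial :: Integer -> Integer
factorial = fix (\rec n -> if n <= 1 then 1 else n * rec (n - 1))

main :: IO ()
main = print (factorial 10)  -- 3628800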

Another example is the well-known generic catamorphism function:

 newtype Fix f = Fix { unfix :: f (Fix f) }

 catam :: (Functor f) => (f a -> a) -> (Fix f -> a)
 catam f = f . fmap (catam f) . unfix

is not optimized. When `catam` is used on a data structure such as [] or a
tree, at each level `fmap` creates new constructors that are immediately
consumed by `f`. But when written as

 catam f = let u = f . fmap u . unfix in u

this gets inlined and then optimized, and the constructor creation/consumption
is fused into very efficient code.
(See http://stackoverflow.com/q/13099203/1333025)
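
For concreteness, a small self-contained sketch of the let-based catam in
use, assuming a simple hypothetical base functor ListF (the names and the
Int-list example are purely illustrative):

{-# LANGUAGE DeriveFunctor #-}

newtype Fix f = Fix { unfix :: f (Fix f) }

-- Hypothetical base functor for lists of Ints.
data ListF a r = NilF | ConsF a r
  deriving Functor

catam :: Functor f => (f a -> a) -> Fix f -> a
catam f = let u = f . fmap u . unfix in u

-- The algebra consumes exactly the constructors that fmap rebuilds,
-- which is what gets fused once catam is inlined at this call site.
sumList :: Fix (ListF Int) -> Int
sumList = catam alg
  where
    alg NilF        = 0
    alg (ConsF x s) = x + s

example :: Fix (ListF Int)
example = Fix (ConsF 1 (Fix (ConsF 2 (Fix (ConsF 3 (Fix NilF))))))

main :: IO ()
main = print (sumList example)  -- 6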

As one comment asked, this seems like something GHC could do automatically.
Would it be difficult? Is there a reason against it? I suppose in cases
where it doesn't help, the additional `let` would get optimized away, so no
harm would be done. And in cases like `fix` or `catam` the gain would be
substantial.

  Best regards,
   Petr


[Haskell-cafe] Computed promoted natural

2012-11-08 Thread Arie Peterson
Hi,


I'm trying to use data kinds, and in particular promoted naturals, to simplify 
an existing program.

The background is as follows: I have a big computation, that uses a certain 
natural number 'd' throughout, which is computed from the input. Previously, 
this number was present as a field in many of my data types, for instance

 data OldA = OldA Integer …

. There would be many values of this type (and others) floating around, with 
all the same value of 'd'. I would like to move this parameter to the type 
level, like this:

 data NewA (d :: Nat) = NewA …

The advantage would be that the compiler can verify that the same value of 
'd' is used throughout the computation.

Also, it would then be possible to make 'NewA' a full instance of 'Num', 
because 'fromInteger :: Integer -> NewA d' has a natural meaning (where the 
value of 'd' is provided by the type, i.e. the context in which the expression 
is used), while 'fromInteger :: Integer -> OldA' does not, because it is not 
possible to create the right value of 'd' out of thin air.
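
For concreteness, a minimal sketch of such a Num instance, written against
the present-day GHC.TypeLits interface (KnownNat/natVal) and pretending,
purely for illustration, that a NewA d value is a residue modulo d; the real
representation would of course differ:

{-# LANGUAGE DataKinds, KindSignatures, ScopedTypeVariables #-}

import GHC.TypeLits (Nat, KnownNat, natVal)
import Data.Proxy (Proxy (..))

-- Illustrative representation only: a residue modulo d, with d in the type.
newtype NewA (d :: Nat) = NewA Integer
  deriving Show

instance KnownNat d => Num (NewA d) where
  NewA x + NewA y = NewA ((x + y) `mod` natVal (Proxy :: Proxy d))
  NewA x * NewA y = NewA ((x * y) `mod` natVal (Proxy :: Proxy d))
  negate (NewA x) = NewA (negate x `mod` natVal (Proxy :: Proxy d))
  abs             = id
  signum (NewA x) = NewA (signum x)
  -- fromInteger reads d off the type, i.e. from the context of the expression.
  fromInteger n   = NewA (n `mod` natVal (Proxy :: Proxy d))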


Is this a sane idea? I seem to get stuck when trying to /use/ the computation, 
because it is not possible to create 'd :: Nat', at the type level, from the 
computed integer.

Can one somehow instantiate the type variable 'd :: Nat' at an integer that is 
not statically known?

Formulated this way, it sounds like this should not be possible, because all 
types are erased at compile time.

However, it feels as though it might not be unreasonable in this situation, 
because the computation is polymorphic in the type 'd :: Nat'. I just want to 
substitute a specific value for 'd'.


Maybe there is another way to approach this?


Thanks for any advice,

Arie




Re: [Haskell-cafe] Computed promoted natural

2012-11-08 Thread Ertugrul Söylemez
Arie Peterson ar...@xs4all.nl wrote:

 I'm trying to use data kinds, and in particular promoted naturals, to
 simplify an existing program.

 The background is as follows: I have a big computation, that uses a
 certain natural number 'd' throughout, which is computed from the
 input. Previously, this number was present as a field in many of my
 data types, for instance

  data OldA = OldA Integer …

 . There would be many values of this type (and others) floating
 around, with all the same value of 'd'. I would like to move this
 parameter to the type level, like this:

  data NewA (d :: Nat) = NewA …

 The advantage would be, that the compiler can verify that the same
 value of 'd' is used throughout the computation.

 Also, it would then be possible to make 'NewA' a full instance of
 'Num', because 'fromInteger :: Integer -> NewA d' has a natural
 meaning (where the value of 'd' is provided by the type, i.e. the
 context in which the expression is used), while 'fromInteger ::
 Integer -> OldA' does not, because it is not possible to create the
 right value of 'd' out of thin air.

 Is this a sane idea? I seem to get stuck when trying to /use/ the
 computation, because it is not possible to create 'd :: Nat', at the
 type level, from the computed integer.

This is a known and nice way to do it, and not just possible, but
actually quite beautiful.  It all revolves around two related concepts
called reflection and reification, the latter allowing precisely what
you think is impossible:

{-# LANGUAGE RankNTypes, ScopedTypeVariables #-}

reflectNum :: (Num a, ReflectNum n) => proxy n -> a

reifyNum ::
    (Num a)
    => a
    -> (forall n. (ReflectNum n) => Proxy n -> b)
    -> b


 Can one somehow instantiate the type variable 'd :: Nat' at an
 integer that is not statically known?

The idea is that reifyNum takes a polymorphic (!) function in 'n', such
that the function can guarantee that it can handle any 'n', as long as
it's an instance of ReflectNum.  Now since the argument function is
polymorphic, the reifyNum function itself can choose the type based on
whatever value you pass as the first argument:

reifyNum 0 k = k (Proxy :: Proxy Z)
reifyNum n k =
    reifyNum (n - 1) (\(_ :: Proxy n) ->
        k (Proxy :: Proxy (S n)))
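
A self-contained sketch of the machinery this assumes, so the snippet above
can actually be tried out; the Peano encoding (Z, S) follows the names used
here, while the Proxy type, the instances, and the Ord-based base case are
filled in as one possible choice:

{-# LANGUAGE RankNTypes, ScopedTypeVariables #-}

data Proxy n = Proxy

data Z
data S n

class ReflectNum n where
  reflectNum :: Num a => proxy n -> a

instance ReflectNum Z where
  reflectNum _ = 0

instance ReflectNum n => ReflectNum (S n) where
  reflectNum _ = 1 + reflectNum (Proxy :: Proxy n)

-- Reify a runtime number as a type-level Peano numeral and hand it
-- to a continuation that is polymorphic in that type.
reifyNum :: (Num a, Ord a)
         => a
         -> (forall n. ReflectNum n => Proxy n -> b)
         -> b
reifyNum n k
  | n <= 0    = k (Proxy :: Proxy Z)
  | otherwise = reifyNum (n - 1) (\(_ :: Proxy m) ->
                    k (Proxy :: Proxy (S m)))

main :: IO ()
main = reifyNum (42 :: Integer) (\p -> print (reflectNum p :: Integer))  -- 42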

Reflection and reification together are part of a larger concept called
implicit configurations; there is a paper about it (Kiselyov and Shan,
Functional Pearl: Implicit Configurations).


 Formulated this way, it sounds like this should not be possible,
 because all types are erased at compile time.

The types are, but the type class dictionaries remain.

Of course there is no reason to reinvent the wheel here.  Check out the
'reflection' library, which even uses some magic to make this as
efficient as just passing the value right away (without
Peano-constructing it).
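
A tiny usage sketch, assuming the reflection package as a dependency
(withModulus is just an illustrative name): reify passes the Integer through
a dictionary instead of building a Peano numeral.

{-# LANGUAGE RankNTypes #-}

import Data.Proxy (Proxy)
import Data.Reflection (Reifies, reflect, reify)

-- Make an Integer available at the type level for the duration of the
-- continuation, then read it back with reflect.
withModulus :: Integer -> (forall s. Reifies s Integer => Proxy s -> r) -> r
withModulus = reify

main :: IO ()
main = withModulus 17 (\p -> print (reflect p + 1))  -- 18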


Greets,
Ertugrul

-- 
Not to be or to be and (not to be or to be and (not to be or to be and
(not to be or to be and ... that is the list monad.




Re: [Haskell-cafe] Computed promoted natural

2012-11-08 Thread Iavor Diatchki
Hello Arie,

One way to achieve the additional static checking is to use values of type
`Sing (n :: Nat)` in the places where you've used `Integer` (and
parameterize data structures by the `n`).  If the code is fully polymorphic
in the `n`, then you can use it with values whose types are not statically
known, by using an existential.  Here is an example:

{-# LANGUAGE DataKinds, KindSignatures, ExistentialQuantification #-}

import GHC.TypeLits

data SomeNat = forall (n :: Nat). SomeNat (Sing n)

getSomeNat :: IO SomeNat
getSomeNat =
  do x <- getLine
     case reads x of
       -- The use of `unsafeSingNat` is OK here because it is wrapped in
       -- `SomeNat`, so we are not assuming anything about the actual number.
       [(n,_)] | n >= 0 -> return $ SomeNat $ unsafeSingNat n
       _ -> putStrLn "Invalid number, try again." >> getSomeNat

main :: IO ()
main =
  do x <- getSomeNat
     case x of
       SomeNat s -> polyFun s s

-- The arguments of this function are always going to be the same.
-- (Just an example; one could probably get more interesting properties.)
polyFun :: Sing (n :: Nat) -> Sing n -> IO ()
polyFun x y = print (x, y)

I can elaborate more, just ask if this does not make sense.   One issue at
the moment is that you have to pass the explicit `Sing` values everywhere,
and it is a lot more convenient to use the `SingI` class in GHC.TypeLits.
 Unfortunately at the moment this only works for types that are statically
known at compile time.  I think that we should be able to find a way to
work around this, but we're not quite there yet.

-Iavor








Re: [Haskell-cafe] Why doesn't GHC optimize recursive functions by converting them into `let`?

2012-11-08 Thread Carter Schonwald
Hey Petr, interesting post! (and links)
I imagine that one reason this isn't a default GHC transformation is that
this sort of strategy has the following implications:

1) GHC doesn't always want to inline: inlining increases the size of the
code, and depending on how that happens it can increase compilation time and
sometimes decrease performance. That said, there are many instances where
inlining does improve performance.

2) This approach *can* be extended to mutually recursive functions, but
again, naively inlining to depth k could blow up the code by a factor on the
order of ~2^k (or so I think offhand); see the sketch just below.
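
A sketch of what that extension might look like for a pair of mutually
recursive functions (names are illustrative, and this is the manual rewrite
rather than anything GHC does): the top-level bindings are no longer
recursive themselves; only the local let is.

-- Mutually recursive pair packaged behind non-recursive top-level bindings.
evenOdd :: (Int -> Bool, Int -> Bool)
evenOdd =
  let isEven 0 = True
      isEven n = isOdd (n - 1)
      isOdd  0 = False
      isOdd  n = isEven (n - 1)
  in (isEven, isOdd)

isEven', isOdd' :: Int -> Bool
isEven' = fst evenOdd
isOdd'  = snd evenOdd

main :: IO ()
main = print (isEven' 10, isOdd' 10)  -- (True,False)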

There are probably a few other problems with doing this automatically, but
it looks like a nice performance trick I may consider using sometime.

cheers
-Carter



