Re: How does GHC's testsuite work?

2017-10-31 Thread Wolfram Kahl
Cher Sébastien,

> Thanks. Unfortunately, the paper has been written in French.

No need to add ``Unfortunately''...  ;-)


> Will come back with a link ASAP!

I guess that many will agree with my opinion:
Links to French papers are welcome, too!


Amicalement,

Wolfram
___
Glasgow-haskell-users mailing list
Glasgow-haskell-users@haskell.org
http://mail.haskell.org/cgi-bin/mailman/listinfo/glasgow-haskell-users


Re: simultaneous ghc versions

2015-07-31 Thread Wolfram Kahl
On Sat, Aug 01, 2015 at 06:31:38AM +1000, Erik de Castro Lopo wrote:
 I maintain multiple versions of GHC on all the machines I use regularly
 for Haskell development. I have:
 
 * ghc-7.6.3 installed under /usr/lib/ghc-7.6/
 * ghc-7.8.4 installed under /usr/lib/ghc-7.8/
 * ghc-7.10.2 installed under /usr/lib/ghc-7.10/
 
 To switch between versions all I need to do is modify $PATH
 to remove say /usr/lib/ghc-7.6/bin and add /usr/lib/ghc-7.10/bin.
 This lets me have two terminal windows side by side with different
 versions of GHC.

I use essentially the same setup,
but found that cabal-install does not play nicely with this:
No matter under which prefix I install cabal-install,
it always assumes the same global path for its configuration file.
This is a nuisance in particular when for some reason different versions
of GHC need different versions of cabal-install --- apparently older GHCs
don't work with newer versions of cabal-install?

I did experiment with cabal-install's -w and sandboxes,
and agree with what was previously mentioned:
It only helps if you never use GHC without cabal-install.

It would be nice if cabal-install had an installation-time option
to set its prefix directory FOR EVERYTHING.
The best kludge I found was scripts /usr/lib/ghc-*/bin/mycabal
that call /usr/lib/ghc-*/bin/cabal with appropriate options...


Wolfram


[Haskell] Relational Algebraic Methods --- RAMiCS 2014 --- Final CFP, with Deadlines extended!

2013-10-26 Thread Wolfram Kahl
Peter Höfner     (NICTA, Australia; Publicity chair)
Ali Jaoua        (Doha, Qatar)
Peter Jipsen (Chapman U., USA; PC co-chair)
Wolfram Kahl (McMaster U., Canada; PC co-chair)
Tadeusz Litak    (Erlangen, Germany)
Larissa Meinicke (U. Queensland, Australia)
Szabolcs Mikulas (London, UK)
Bernhard Möller  (Augsburg, Germany)
Martin E. Müller (St. Augustin, Germany; General chair)
José Oliveira    (U. Minho, Portugal)
Ewa Orłowska (Warsaw, Poland)
Matthew Parkinson(Microsoft Research, UK)
Damien Pous  (CNRS, France)
Ingrid Rewitzky  (Stellenbosch, South Africa)
Holger Schlingloff   (Berlin, Germany)
Gunther Schmidt  (Munich, Germany)
Renate Schmidt   (Manchester, UK)
Georg Struth (Sheffield, UK)
George Theodorakopoulos  (Cardiff, UK)
Michael Winter   (Brock U., Canada)



Steering Committee
--

Rudolf Berghammer  (Kiel, Germany)
Jules Desharnais   (Laval U., Canada)
Harrie de Swart(Rotterdam, Netherlands)
Ali Jaoua  (Doha, Qatar)
Bernhard Möller(Augsburg, Germany)
Ewa Orłowska   (Warsaw, Poland)
Gunther Schmidt(Munich, Germany)
Renate Schmidt (Manchester, UK)
Michael Winter (Brock U., Canada)



Organising Committee


Martin E. Müller, Sankt Augustin, Germany: Conference Chair, Local Organiser
Peter Höfner, NICTA, Australia: Publicity
Peter Jipsen, Chapman U., USA: PC Co-Chair
Wolfram Kahl, McMaster U., Canada: PC Co-Chair

___
Haskell mailing list
Haskell@haskell.org
http://www.haskell.org/mailman/listinfo/haskell


Re: base package (Was: GHC 7.8 release?)

2013-02-15 Thread Wolfram Kahl
On Thu, Feb 14, 2013 at 03:48:51PM +0100, Joachim Breitner wrote:
 
 Yesterday, I experimented a bit with base’s code, [...]

 Maybe the proper way is to reverse the whole approach: Leave base as it is,
 and then build re-exporting smaller packages (e.g. a base-pure) on top
 of it. The advantage is:
   * No need to rewrite the tightly intertwined base.
   * Libraries still have the option to have tighter dependencies.
   * Base can evolve with lots of breaking changes, as long as they
 do not affect the API by the smaller packages.
   * Development of this collection can happen outside the GHC tree.
   * Alternative implementations for (some of) these packages can be
 created, if the reason why they could not be moved out of base
 is one of implementation, not of API
 
 How does that sound?

Essentially good to me...


One might consider instead (as has been proposed before, I believe),
to rename the current ``base'' to something like ``ghc-base''
which is not intended to be depended on by packages not shipped with GHC
(that is, by default ``hidden'' in ghc-pkg), and instead export:
  base   with a very stable interface
  io     with a very stable interface
  GHC    with a probably rapidly evolving interface
  *      possibly other packages giving access to internals

Most packages that currently depend on ``base'' would then depend
only on ``base'' and possibly ``io'', and by virtue of the stability
of these two interfaces would therefore not be affected
by most GHC releases.

This would effectively be
   ``splitting the interfaces GHC and io out from base''
instead of
   ``deprecating base and replacing it with the three new interfaces
 base-pure, io, and GHC''.

That choice is possibly mostly a matter of taste ---
I think that the name ``base'' is good for a user-facing interface,
and the name ``ghc-base'' more indicative of its
implementation-dependent character.


Wolfram

___
Glasgow-haskell-users mailing list
Glasgow-haskell-users@haskell.org
http://www.haskell.org/mailman/listinfo/glasgow-haskell-users


+RTS -S heap reporting oddity

2012-08-17 Thread Wolfram Kahl
During one of my long Agda runs (with GHC-7.4.2), I observed the following
output, with the run-time options:

   +RTS -S -H11G -M11G -K256M

7694558208  30623864 3833166176  0.11  0.11  234.75  234.7900  (Gen:  0)
7678904688  29295168 3847737784  0.11  0.11  242.04  242.0900  (Gen:  0)
7662481840  29195736 3861451856  0.11  0.11  249.31  249.3500  (Gen:  0)
7647989280  26482704 3872463688  0.12  0.12  256.64  256.6800  (Gen:  0)
4609865360  25764016 3886000448  0.09  0.09  261.04  261.0900  (Gen:  0)
4581294920  19435032 3891512272  0.07  0.07  265.37  265.4200  (Gen:  0)
4568757088  21095864 3902286000  0.08  0.08  269.70  269.7400  (Gen:  0)
4546421608  21618856 3913923976  0.09  0.09  274.04  274.0900  (Gen:  0)
452151 2894668056 3484748224  7.63  7.63  285.94  285.9800  (Gen:  1)
8085358392  23776128 3499185336  0.11  0.11  293.49  293.5300  (Gen:  0)
8064630856  32055112 3515876576  0.13  0.13  300.91  300.9500  (Gen:  0)
8040500112  31477608 3528105088  0.12  0.12  308.37  308.4100  (Gen:  0)
8031456296  29641328 3540632456  0.11  0.11  315.83  315.8700  (Gen:  0)
8018447264  30187208 3554339600  0.12  0.12  323.26  323.3100  (Gen:  0)

To my untrained eye, this seems to be saying the following:
In the first 4 lines, the heap runs (almost) full before (minor) collections.
In lines 5 to 9 it apparently leaves 3G empty before collection,
but ``those 3G'' then appear on line 9 in the ``amount of data copied
during (major) collection'' column, and after that it runs up to fill
all 11G again before the next few minor collections.

What is really going on here?
(Previously I had never seen such big numbers in the second column
 on major collections.)


Wolfram


P.S.: Same effect again, but more dramatic, later during the same Agda run:

448829488   4864536 5710435424  0.02  0.02 1422.80 1422.9000  (Gen:  0)
445544064   3251712 5710248752  0.01  0.01 1423.23 1423.3200  (Gen:  0)
450236784   4148864 5712696848  0.02  0.02 1423.68 1423.7700  (Gen:  0)
445240152   3828120 5713606328  0.02  0.02 1424.10 1424.1900  (Gen:  0)
443285616   5906448 5717731864  0.02  0.02 1424.52 1424.6100  (Gen:  0)
430698248 4773500032 5363214440  9.30  9.30 1434.21 1434.3000  (Gen:  1)
6148455592  13490304 5374609848  0.07  0.07 1439.83 1439.9200  (Gen:  0)
6185350848  27419744 5389326896  0.11  0.11 1445.50 1445.5900  (Gen:  0)
6168805736  23069072 5398725784  0.11  0.11 1451.22 1451.3200  (Gen:  0)
6157744328  23451872 5408370152  0.09  0.09 1456.93 1457.0300  (Gen:  0)
6151715272  25739584 5421044592  0.11  0.11 1462.62 1462.7200  (Gen:  0)
6132589488  24541688 5428809632  0.10  0.10 1468.26 1468.3700  (Gen:  0)



Re: Call to arms: lambda-case is stuck and needs your help

2012-07-09 Thread Wolfram Kahl
 for clunky...

For argument supply and ``into'' we use the opposite sequence,
``arg  into  pat'', and clunky could then become:

  clunky = caseof
    env ->
      var1 ->
        var2 ->
          lookup env var1  into
            Just val1 ->
              lookup env var2  into
                Just val2 -> val1 + val2
    ... other case bindings for clunky ...

(This runs into similar nested layout issues as discussed recently;
 if we were to change the layout rule (perhaps only for the new keywords)
 to not apply if the next token is separated from the layout keyword
 by only a single space, we could turn this into a more compact shape:

  clunky = caseof
    env -> var1 -> var2 -> lookup env var1  into  Just val1
      -> lookup env var2  into  Just val2 -> val1 + val2
    ... other case bindings for clunky ...

 (The alignment of the two lookups and their preceding -> is irrelevant here.)
)

``into'' would generalise pattern guards by allowing the same expression
to be matched against different patterns and still enable fall-through:

  f = caseof
    [x] -> g x  into  [] -> []
                      a : b : bs -> (a, b) : h bs
    ... other cases, including g returning a singleton ...
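
For comparison, the fall-through behaviour that ``into'' would provide
can already be approximated in today's Haskell with pattern guards, at
the cost of matching the result of g twice.  A minimal sketch (g, h,
and the final equation are hypothetical stand-ins, not from the
original message):

```haskell
-- Hypothetical stand-ins for g and h, for illustration only.
g :: Int -> [Int]
g x = replicate x x

h :: [Int] -> [(Int, Int)]
h _ = []

-- Pattern guards match (g x) against several patterns,
-- falling through to the remaining equations of f when none fits.
f :: [Int] -> [(Int, Int)]
f [x] | []         <- g x = []
      | a : b : bs <- g x = (a, b) : h bs
f _ = [(0, 0)]  -- other cases, including g returning a singleton
```

Unlike ``into'', this evaluates and matches g x once per guard, which
is exactly the duplication the proposal aims to avoid.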


This could be:

  Proposal 4: ``case-group argument supply'' or ``generalised pattern guards''



Best wishes,

Wolfram


-


@InProceedings{Kahl-2004a,
  author =   {Wolfram Kahl},
  title ={Basic Pattern Matching Calculi: A Fresh View on Matching Failure},
  crossref =  {FLOPS2004},
  pages ={276--290},
  DOI = {10.1007/978-3-540-24754-8_20},
  SpringerURL = {http://www.springerlink.com/content/3jet4qgw1q2nu0a8/},
  abstract = {We propose pattern matching calculi as a refinement
  of $\lambda$-calculus that integrates mechanisms appropriate for
  fine-grained modelling of non-strict pattern matching.

  Compared with the functional rewriting strategy
  usually employed to define the operational semantics of
  pattern matching in non-strict functional programming languages
  like Haskell or Clean, our pattern matching calculi
  achieve the same effects using simpler and more local rules.

  The main device is to embed into expressions
  the separate syntactic category of matchings;
  the resulting language naturally encompasses
  pattern guards and Boolean guards as special cases.

  By allowing a confluent reduction system and a normalising strategy,
  these pattern matching calculi provide a new basis for
  operational semantics of non-strict programming languages
  and also for implementations.}
}

@InProceedings{Kahl-Carette-Ji-2006a,
  author =   {Wolfram Kahl and Jacques Carette and Xiaoheng Ji},
  title ={Bimonadic Semantics for Basic Pattern Matching Calculi},
  crossref =  {MPC2006},
  pages ={253--273},
  DOI = {10.1007/11783596_16},
  SpringerURL = {http://www.springerlink.com/content/2715070606u63648/}
}



Re: ghc-cabal-Random

2011-12-31 Thread Wolfram Kahl
On Sat, Dec 31, 2011 at 11:43:26PM +0200, Yitzchak Gale wrote:
 Serge D. Mechveliani wrote:
  I have  ghc-7.4.0.20111219  made from source and tested it on the
  DoCon-2.12 application -- thanks to people for their help!
  It looks all right.
  This was -- with skipping the module Random
  Now it remains to add the Random package.
  I have taken  AC-Random Version 0.1  from hackage...
  Its installation requires Cabal..
  And Cabal is difficult to install..
 
 [...]

 Today, it is very unusual to use GHC by itself.
 To use Haskell, you install the Haskell Platform.
 That is GHC together with Cabal and a basic
 set of libraries. It is very easy to install.
 
 http://hackage.haskell.org/platform/

However, since you are willing and able to test bleeding-edge versions of GHC,
you need to be able to live without the platform, which typically
catches up to GHC versions only within a couple of months.

 Almost all Haskell software is expected to
 be installed using Cabal nowadays.

It is important to know that people associate two packages with the name 
``Cabal'':

 * cabal : package infrastructure shipped with GHC --- only a library.
 * cabal-install : package manager requiring a number of other packages
   (in particular networking packages), and providing the executable ``cabal''.

Life without cabal-install is not only possible,
but also safer.
(See also: http://www.vex.net/~trebla/haskell/sicp.xhtml )

If you installed an experimental GHC version,
it makes sense to install packages into the same directory,
say /usr/local/packages/ghc-7.4.0.20111219.

Assuming you downloaded AC-Random-0.1.tar.gz, do the following:

  tar xzf AC-Random-0.1.tar.gz
  cd AC-Random-0.1
  ghc --make Setup
  ./Setup configure --prefix=/usr/local/packages/ghc-7.4.0.20111219 -p
  ./Setup build -v
  ./Setup haddock
  ./Setup install -v


Hope this helps!


Wolfram



Re: Recompile with -fPIC (was building a patched ghc)

2011-02-18 Thread Wolfram Kahl
I am not certain, but this may be the same problem that I once had,
and that was solved by updating to binutils-2.20.

 ld --version
GNU ld (GNU Binutils) 2.20.1.20100303


Wolfram


On Fri, Feb 18, 2011 at 11:34:03AM +0100, José Pedro Magalhães wrote:
 Hi all,
 
 I'm getting the same error as Alexy below in some 64bit linux system. What
 can I do? Adding -fPIC and also -dynamic does not seem to solve the problem.
 Also, this only happens with a perf build; devel1 works fine.
 
 
 Thanks,
 Pedro
 
 On Sat, Jun 26, 2010 at 05:56, braver delivera...@gmail.com wrote:
 
  An attempt to build the trunk gets me this:
 
  /opt/portage/usr/lib/gcc/x86_64-pc-linux-gnu/4.2.4/../../../../x86_64-
  pc-linux-gnu/bin/ld: rts/dist/build/RtsStartup.dyn_o: relocation
  R_X86_64_PC32 against symbol `StgRun' can not be used when making a
  shared object; recompile with -fPIC
 
  -- I use prefix portage on a CentOS box, admittedly a non-standard
  setup.  Its gcc is found first and it wants -fPIC...  Should I just
  add it to CFLAGS or what?
 
  -- Alexy
  ___
  Haskell-Cafe mailing list
  haskell-c...@haskell.org
  http://www.haskell.org/mailman/listinfo/haskell-cafe
 
 
 




Re: Trying to build Agda 2.2.9 with ghc-7.1.20110131

2011-02-01 Thread kahl
On Tue, Feb 01, 2011 at 06:01:04PM +0300, Pavel Perikov wrote:
 If anyone interested...
 
 Agda-2.2.9 compiled perfectly with 7.0.1 release but with 7.1.20110131 the 
 compiler had a few problems including impossible happened when building 
 profiling library.

Possibly related:

  http://hackage.haskell.org/trac/ghc/ticket/4462

Therefore: does the problem change if you pass

  --ghc-options=-dcore-lint

to

  ./Setup configure -p

?


Wolfram



Re: In opposition of Functor as super-class of Monad

2011-01-04 Thread kahl
On Tue, Jan 04, 2011 at 02:24:21AM -0800, o...@okmij.org wrote:
  
  I'd like to argue in opposition of making Functor a super-class of
  Monad. I would argue that superclass constraints are not the right
  tool for expressing mathematical relationship such that all monads are
  functors and applicatives.
  
  The argument is practical. It seems that making Functor a superclass
  of Monad makes defining new monad instances more of a chore, leading
  to code duplication. To me, code duplication is a sign that an
  abstraction is missing or misused.

The argument about code duplication somehow seems to assume that
class member instances need to be defined as part of the instance
declaration. This is not the case, and in fact I am arguing in general
against putting any interesting code into instance declarations,
especially into declarations of instances with constraints
(since, in ML terminology, they are functors, and putting their
definition inside an instance declaration constrains their applicability).

In my opinion, the better approach is to define (generalised versions of)
the functions mentioned in the class interface,
and then just throw together the instances from those functions.
This also makes it easier to adapt to the ``class hierarchy du jour''.

The point for the situation here is that although we eventually need
definitions of all the functions declared as class members,
there is absolutely nothing that constrains the dependency relation
between the definitions of these functions to be conforming in any way
to the class hierarchy.

For a simpler example, assume that I have some arbitrary data type

 data T a = One a | Two a a

and assume that I am interested only in Ord instances, since I want to
use T with Data.Set, and I am not really interested in Eq instances.

Assume that the order will depend on that for |a|,
so I will define a function:

 compareT :: (a -> a -> Ordering) -> T a -> T a -> Ordering

Then I can throw together the necessary instances from that:

 instance Ord a => Ord (T a) where
   compare = compareT compare

 instance Ord a => Eq (T a) where
   (==) = eqFromCompare compare

assuming I have (preferably from the exporter of Eq and Ord):

 eqFromCompare :: (a -> a -> Ordering) -> (a -> a -> Bool)
 eqFromCompare cmp x y = case cmp x y of
   EQ -> True
   _  -> False
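
Assembled into a self-contained sketch (the particular ordering chosen
for compareT --- One before Two, then componentwise --- is an
assumption made here for illustration):

```haskell
import qualified Data.Set as Set

data T a = One a | Two a a

-- One possible ordering, chosen for illustration:
-- One before Two, then componentwise on the arguments.
compareT :: (a -> a -> Ordering) -> T a -> T a -> Ordering
compareT cmp (One x)     (One y)     = cmp x y
compareT _   (One _)     (Two _ _)   = LT
compareT _   (Two _ _)   (One _)     = GT
compareT cmp (Two x1 x2) (Two y1 y2) = cmp x1 y1 <> cmp x2 y2

eqFromCompare :: (a -> a -> Ordering) -> (a -> a -> Bool)
eqFromCompare cmp x y = case cmp x y of
  EQ -> True
  _  -> False

-- The instances are just thrown together from those functions:
instance Ord a => Eq (T a) where
  (==) = eqFromCompare compare

instance Ord a => Ord (T a) where
  compare = compareT compare
```

With these instances, T values work with Data.Set directly; e.g.
Set.fromList [Two 1 2, One 3, Two 1 2] collapses the duplicate and
has size 2.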

The same approach works for Oleg's example:

  For the sake of the argument, let us suppose that Functor is a
  superclass of Monad. Let us see how to define a new Monad
  instance. For the sake of a better illustration, I'll use a complex
  monad. I just happen to have an example of that: Iteratee.
  The data type Iteratee is defined as follows:

 type ErrMsg = String   -- simplifying
 data Stream el = EOF (Maybe ErrMsg) | Chunk [el] deriving Show

 data Iteratee el m a = IE_done a
                      | IE_cont (Maybe ErrMsg)
                                (Stream el -> m (Iteratee el m a, Stream el))

  [...]   
  
  It _almost_ makes me wish the constraint go the other way:
  
   instance Monad m => Functor m where
     fmap f m = m >>= (return . f)
  
  That is, we need an instance rather than a superclass constraint, and
  in the other direction. The instance constraint says that every monad
  is a functor. Moreover,
   \f m -> m >>= (return . f)
  
  is a _proof term_ that every monad is a functor. We can state it once
  and for all, for all present and future monads.

I would expect that proof term to exported by the package exporting
Functor and Monad; let us define it here:

 fmapFromBind (>>=) f m = m >>= (return . f)

Now you can write, no matter which class is a superclass of which:

 bindIt return (>>=) (IE_done a) f = f a
 bindIt return (>>=) (IE_cont e k) f = IE_cont e (\s -> k s >>= docase)
   where
     docase (IE_done a, stream) = case f a of
       IE_cont Nothing k -> k stream
       i                 -> return (i, stream)
     docase (i, s) = return (bindIt return (>>=) i f, s)

 instance Monad m => Monad (Iteratee el m) where
    return = IE_done
    (>>=) = bindIt return (>>=)

 instance Monad m => Functor (Iteratee el m) where
   fmap = fmapFromBind (>>=)

Of course this assumes that you are not actually interested in an
instance of shape: instance (Functor ...) = Functor (Iteratee el m),
but this seems to be a plausible assumption.

Defining the functionality really has nothing to do with declaring an
instance of a type class, and it is normally better to keep the two
separated. And that does not lead to any real code duplication,
only extremely boring instance declarations.
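
As a quick sanity check, the proof term restated self-containedly
reconstructs fmap for existing monads (the explicit signature is added
here; the bind operation is passed in as an ordinary argument):

```haskell
-- fmap obtained from any monad's bind and return,
-- without committing to a class hierarchy.
fmapFromBind :: Monad m => (m a -> (a -> m b) -> m b) -> (a -> b) -> m a -> m b
fmapFromBind bnd f m = m `bnd` (return . f)
```

For example, fmapFromBind (>>=) (+1) (Just 2) yields Just 3, and the
same definition serves lists, IO, and any future monad unchanged.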


Wolfram



___
Haskell-prime mailing list
Haskell-prime@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-prime


How to activate DEBUG in Data.HashTable?

2010-11-28 Thread Wolfram Kahl
Hello,

  with a large Agda development, I have a reproducible segmentation fault
that I have been able to localise to the serialisation
(Agda.TypeChecking.Serialise.encode), which heavily relies on Data.HashTable.

Now I find that Data.HashTable (from GHC-7.0.1) has a CPP-enabled DEBUG
version --- is there a ``proper'' way to activate this? Possibly even so
that it can be added ex-post to an existing installation?
(So I don't have to re-install all packages.)

mk/ways.mk says that the ``debug'' way only affects the RTS...


Wolfram



Re: [Haskell] Help-me read file

2010-10-18 Thread kahl
  I have a file with 100 lists, with 100 ints.
  
  I have to read the file and apply the map and sort functions on lists.
  
  II did it to read file:
  
  learquivo :: FilePath -> IO [[Int]]
  learquivo s = do
    conteudo <- readFile s
    return (read conteudo)
  
  But now applying the sort, the error appears:
  
  
  <interactive>:1:5:
     Couldn't match expected type `[a]'
            against inferred type `IO [[Int]]'
  
  how can I fix?

You did not even say how you produced that error.


Assuming you are in GHCi, you want to do:


m <- learquivo "miarquivo"

:t m


(Think of the interactive GHCi prompt as inside a big ``do''.)
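
In a stand-alone program, the same steps plus the map/sort part of the
exercise could look like this (a sketch; the file is assumed to contain
a value readable as [[Int]], and processa is a hypothetical helper name):

```haskell
import Data.List (sort)

-- Read a file whose contents show a [[Int]] value.
learquivo :: FilePath -> IO [[Int]]
learquivo s = do
  conteudo <- readFile s
  return (read conteudo)

-- Apply sort to every inner list read from the file.
processa :: FilePath -> IO [[Int]]
processa s = fmap (map sort) (learquivo s)
```

The key point is the same as at the GHCi prompt: the IO [[Int]] value
must be run with <- before pure functions like sort can be applied.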



Hope this helps!

Wolfram


Re: Feedback request: priority queues in containers

2010-03-16 Thread kahl


Louis Wasserman wrote:
  
  I'm not willing to do this sort of typeclass wrapper thing, primarily
  because nothing else in containers does -- even though we might have a
  Mapping type class that handles both IntMap and Map, we don't.
  
  I'm inclined to let that design choice stand, as far as containers is
  concerned.  It would make perfect sense to write a new package with such a
  type class and offering instances for the containers priority queue
  implementations, but I prefer to stick with the style that containers
  already seems to use -- that is, exporting separate modules without a
  unifying type class, but with nearly-identical method signatures.

Just an aside (and shameless plug ;-): Since the signatures overlap so much,
it is in fact easy to wrap these modules into instances
for many possible different type classes that one might consider using for
containers --- I have a tool that mechanises this instance generation,
available at:

  http://sqrl.mcmaster.ca/~kahl/Haskell/ModuleTools/


More about this in the forthcoming TFP 2009 proceedings paper:

@InCollection{Kahl-2009_TFP,
  author =   {Wolfram Kahl},
  title ={Haskell Module Tools for Liberating Type Class Design},
  crossref =  {TFP2009},
  pages = {129--144},
  chapter = {9},
  abstract ={Design of Haskell type class hierarchies for complex purposes,
 including for standard containers, is a non-trivial exercise,
 and evolution of such designs
 is additionally hampered by the large overhead
 of connecting to existing implementations.

 We systematically discuss this overhead,
 and propose a tool solution, implemented using the GHC API,
 to automate its generation.}
}

@Book{TFP2009,
  title = {Trends in Functional Programming, {TFP 2009}},
  booktitle = {Trends in Functional Programming, {TFP 2009}},
  year =  2010,
  editor ={Zolt\'an Horv{\'a}th and Vikt{\'o}ria Zs{\'o}k and Peter Achten and Pieter Koopman},
  address =   {UK},
  publisher = {Intellect},
  note = {(In press)}
}



Wolfram


Re: [Haskell] recursive definitions in Haskell (inductive and coinductive)

2010-02-02 Thread kahl
Norman,

  Haskell permits recursive definitions at both the type level and the
  term level.  Here's an example definition at the type level:
  
data Intlist = Nil | Cons Integer Intlist
  
  If my understanding is correct, Haskell takes as the definition of
  `Intlist` the *greatest* solution to this recursion equation.
  That is, in a vaguely domain-theoretic way, Intlist is the greatest
  fixed point of the function
  
 f(S) = {Nil} union {Cons n ns | n in Integer, ns in S}
  
  It's this 'greatest fixed point' or coinductive definition that admits
  of infinite lists in the `Intlist` type.  Right?

AFAIK, the normal understanding is that recursive types
are the least fixed points of endofunctors on the category of CPOs,
and it is the CPO property that least upper bounds of chains exist
that forces the existence of infinite lists.

Haskell uses non-coalesced sum, that is,

  Left undefined /= undefined /= Right undefined,

and non-strict product, that is,

  (undefined, undefined) /= undefined,

which make these chains non-trivial.

(OCa)ML has strict constructors,
so effectively uses coalesced sum and strict product,
which makes all chains resulting from polynomial functors finite.

Therefore it is not the CPO semantics alone that creates infinite lists,
but rather the presence of non-strict data constructors.
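
A concrete illustration: because (:) is non-strict in its tail, the
following self-referential definition denotes a genuine infinite list,
usable as long as only finitely much of it is demanded:

```haskell
-- Well-defined in Haskell precisely because (:) is non-strict;
-- with ML-style strict constructors this definition would diverge.
ones :: [Integer]
ones = 1 : ones

firstThree :: [Integer]
firstThree = take 3 ones  -- [1, 1, 1]
```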


Wolfram



Re: [Haskell] recursive definitions in Haskell (inductive and coinductive)

2010-02-02 Thread kahl
Norman,
  
AFAIK, the normal understanding is that recursive types
are the least fixed points of endofunctors on the category of CPOs,
and it is the CPO property that least upper bounds of chains exist
that forces the existence of infinite lists.
  
  But ML has CPOs and infinite chains too!  The situation is simpler
  because the only *interesting* infinite ascending chains are in
  function domains.

That's why I (later) said ``polynomial functors'' ---
in ML, infinite chains do not arise in simple recursive datatypes like

  List A = 1 + A \times List A
.

  To paraphrase, is what you're saying that the definition of a Haskell
  type is the smallest fixed point that contains the bottom element
  (divergent computation) as a member?

Standard Haskell types are interpreted as objects in the category of CPOs
that have a least element --- bottom.
(We are not talking about unboxed types here.)
Morphisms do not need to preserve the least element ---
non-strict functions are allowed.
Then + is interpreted as non-coalesced sum with non-strict injections,
and $\times$ is interpreted via non-strict pair construction,
i.e., the function (curry (id :: (a,b) -> (a,b))) is non-strict,
which gives rise to the property just emphasised by Jonathan:

(undefined, y) /= undefined

(This holds even if  y = undefined.)

So both + and \times produce non-flat CPOs even from flat CPOs,
introducing infinite ascending chains in the recursive case.
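
This non-flatness is directly observable: matching a pair pattern
succeeds even when one component diverges, which would be impossible
if (undefined, y) were identified with undefined:

```haskell
-- The case expression only forces the pair constructor itself,
-- not the components, so the undefined first component is never touched.
sndOfPair :: (a, b) -> b
sndOfPair p = case p of (_, y) -> y

observed :: Int
observed = sndOfPair (undefined, 3)  -- 3
```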


Anyway, the bottom does not come in through the fixed point construction,
i.e., the semantics of recursion,
but rather through the initial choice of semantic domains.

  To paraphrase, is what you're saying that the definition of a Haskell
  type is the smallest fixed point

Technically, that fixpoint is constructed as a colimit (``inverse limit'')
of an infinite diagram resulting from a chain of functor applications
starting from the trivial CPO with least element: {bottom}.
(The fact that it is a colimit is a generalisation of the ``smallest''
 aspect and is shared with initial algebras.)


Wolfram


Re: Running GHC 6.10.* on OpenBSD

2009-12-12 Thread kahl

On Sat, Dec 12, 2009 at 05:30:44AM +0200, Thanos Tsouanas wrote:
 Up to now I've only used binary versions of GHC, but since
 my operating system's (OpenBSD) version of GHC is
 lagging behind (currently at 6.6.1), I need to update it.
 I tried using my system's ghc-6.6.1 to compile ghc-6.10.4
 but it failed due to haskeline not being installed (and trying
 to install it also failed).

If Haskeline continues to make trouble
(it seems to at least contribute to the crashing of GHCi on PowerPC),
you could try to do a build without GHCi first ---
Haskeline is not needed for the compiler.

(I haven't tried that yet myself, since it only crashes at runtime.)


Wolfram



Re: Update on GHC 6.12.1

2009-10-29 Thread kahl
Simon Peyton Jones answered me:

  | do { a <- getChar
  |    ; rec { b <- f c
  |          ; c <- g b }
  |    ; putChar c
  |    ; return b }
  
  
  | This last point notwithstanding,
  | I find the scoping rules very unintuitive!
  | (b and c appear to escape their apparently nested scope.)
  
  well you are happy with
^

(Let's say: I got used to it...)

  
   do { z <- getChar
      ; let { b = f c
            ; c = g b }
      ; putChar c
      ; return b }
  
  It's just the same!

Indeed; I had not noticed that.
(I am spoilt by layout, and had never ``seen'' those braces before.)

However, if you write those braces, there is no reason anymore to omit the
 ``in do'' at the end! This variant of let is only motivated by layout...

Analogously, is

| do { a <- getChar
|    ; rec { b <- f c
|          ; c <- g b }
|    ; putChar c
|    ; return b }

equivalent to

| do { a <- getChar
|    ; rec { b <- f c
|          ; c <- g b } in do
|      { putChar c
|      ; return b }

?

Is

| do { rec { b <- f c
|          ; c <- g b }
|    ; putChar c
|    ; return b }

equivalent to

| rec { b <- f c
|     ; c <- g b } in do
|   { putChar c
|   ; return b }

?

Would ``dorec'', in analogy with ``letrec'', perhaps be a better name?


Wolfram


Re: Update on GHC 6.12.1

2009-10-28 Thread kahl
Simon Peyton Jones wrote:
  
  Recursive do-notation.
  ~~
  The change is this.  Instead of writing
  
    mdo { a <- getChar
        ; b <- f c
        ; c <- g b
        ; putChar c
        ; return b }
  
  you would write
  
    do { a <- getChar
       ; rec { b <- f c
             ; c <- g b }
       ; putChar c
       ; return b }
  
  That is, 
* 'mdo' is eliminated 
* 'rec' is added, which groups a bunch of statements
  into a single recursive statement
  See http://hackage.haskell.org/trac/ghc/ticket/2798
  
  This 'rec' thing is already present for the arrow notation, so it  
  makes the two more uniform.  Moreover, 'rec' lets you say more
  precisely where the recursion is (if you want to), whereas 'mdo' just
  says there's recursion here somewhere.  Lastly, all this works with
  rebindable syntax (which mdo does not).
  
  The main question is not whether to make this change, but when. 

This last point notwithstanding,
I find the scoping rules very unintuitive!
(b and c appear to escape their apparently nested scope.)
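
For background: a rec group is desugared to a call of mfix from
Control.Monad.Fix, and it is mfix's knot-tying that makes the bindings
visible throughout the group (and hence ``escape'' their apparent
scope). A tiny self-contained illustration in the Maybe monad:

```haskell
import Control.Monad.Fix (mfix)

-- mfix feeds the eventual result back into the computation,
-- just as b and c are in scope throughout a rec block.
onesM :: Maybe [Int]
onesM = mfix (\xs -> Just (1 : xs))

demo :: Maybe [Int]
demo = fmap (take 3) onesM  -- Just [1,1,1]
```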


Wolfram


Re: [Template-haskell] How to extract name and type of exported functions in modules

2009-10-20 Thread kahl
Simon Peyton Jones wrote:

  What is the info for an instance?
  Do we need a way to ask for all the instances of a class
  (or rules for a function)
  regardless of what module those instances come from?   etc
  
  
  Does this ring bells for anyone else?

It does --- but completely outside the context of Template-Haskell.

I have been trying to do things via the GHC API (the full API,
not just module GHC), and found it surprisingly hard to obtain
all exports for a given module
(whether source or compiled library module):

-- If it exports any prelude identifiers (not only the Prelude itself,
   but also for example Data.List), I have no idea how to get at
   their types --- lookupGlobalName does not find them.

-- I cannot find any re-exported instances. The ``package InstEnv''
   obtained from tcGetInstEnvs after loading a library module interface
   (via findAndReadIface) seems to contain only the module's own exported
   instances, which are the same instances I also find via mi_insts.
   (For source modules, where I use typeCheckModule,
I've only been able to find the module's own exported instances via
hptInstances).

   Even while not counting the missing Prelude instances,
   Data.Map is still re-exporting the Data.Set instances from the same
   package --- where does GHC keep those?
   And where does it keep those from different packages?
   My current understanding of the ``home InstEnv'' is that it contains
   instances encountered during ``ghc --make'' that may not be visible in
   the current module, so if I ever do find instances there,
   how would I filter the visible ones?

  Do we need a way to ask for all the instances of a class
  (or rules for a function)
  regardless of what module those instances come from?   etc

As long as a module re-exports an instance, I'd like to be able to find it
from that module.


  For example, if module exports T( C1, C2), where C1, C2 are constructors
  of T which also has constructors C3, C4, what should appear in
  mi_exports?  Should mi_exports reflect the structure of the export list
  (see the AvailInfo type in GHC).

Applications that look at a module ``from the outside''
should, in my opinion, not be able see C3 and C4.
I think they also do not need to be able to know
that there are additional constructors.
There is always _|_ that doesn't match C1 nor C2,
even if there are no C3 nor C4.

Applications that need to know more will want an interface
that allows one to ``look at all the insides'' of a module,
i.e., they want access to all items defined and imported inside,
and may of course also want to know which of these items are exported.

Since there are other interfaces to find out that C1 and C2 belong to T,
I don't think it is essential to know the structure of the export list.
It may be convenient, though...

(By the way, IIRC, sometimes I can import C1 as ``C1'',
 and sometimes I need to import it as ``T(C1)''.
 Is this a side effect of some LANGUAGE extension?)


Wolfram


Re: Announcing the new Haskell Prime process, and Haskell 2010

2009-07-08 Thread kahl
Don Stewart wrote:
  
  Tom Lokhorst suggests[1]
  
  Haskell'10
 
  [1] http://twitter.com/tomlokhorst/statuses/2539313506

How pessimistic.


Some people expect Haskell and/or Haskell'
not to be around anymore in 2110?



Wolfram
___
Haskell-prime mailing list
Haskell-prime@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-prime


Re: [Haskell] ANNOUNCE: OpenGLRaw 1.0.0.0

2009-06-11 Thread kahl
  
  Graphics.Rendering.OpenGL.GL.CoordTrans is awfully long.
  
  I think that Graphics.Rendering. is clutter, and OpenGL.GL. seems
  redundant to me. I'd very much like to encourage the idea that tremendously
  long package names ought to be considered poor form.

... or be made abstractable, e.g.:

import prefix Graphics.Rendering.OpenGL.GL as OpenGL
import OpenGL.CoordTrans


Wolfram
___
Haskell mailing list
Haskell@haskell.org
http://www.haskell.org/mailman/listinfo/haskell


Re: [Haskell] Marketing Haskell

2009-04-01 Thread kahl
  
  A good one indeed, Simon.

Check the ``Received: from'' headers of the original message...


Wolfram


Re: Portage slots for GHC?

2007-12-13 Thread kahl
Duncan Coutts wrote:
  
  Sadly not because portage is not clever enough to deal with the kind of
  dependencies you end up with by doing that. Portage does not know that
  some libs are installed for one ghc and some for another, so it'd try
  and satisfy a dependency for mtl using the mtl you'd installed for
  ghc-6.4.2 when in fact you were building something for 6.6.1.
  
  You may have heard of other new portage features that sound like they'd
  help, they don't. We've thought about this a lot and it is not feasible
  in with the current portage.
  
  We could slot multiple versions of a lib for the same ghc, but not slot
  ghc itself.


Why not library slots named

ghc-6.4
ghc-6.6
ghc-6.8

(or finer), each depending on the respective compiler slot?

The different slots of gtk+ also depend on different slots of glib.



Wolfram






Re: Portage slots for GHC?

2007-12-13 Thread kahl
Duncan Coutts wrote:
 We could slot multiple versions of a lib for the same ghc, but not slot
 ghc itself.
   
   
   Why not library slots named
   
   ghc-6.4
   ghc-6.6
   ghc-6.8
   
   (or finer), each depending on the repective compiler slot?
   
   The different slots of gtk+ also depend on different slots of glib.
  
  But those are actually different versions of the Gtk+ library. Our
  situation is worse, we have the exact same library version slotted for
  multiple versions of ghc.
  
  We'd want to be able to say something like:
  
  SLOT=${GHC_PV}-${PV}
  
  to slot on the ghc version and library version. But where does ${GHC_PV}
  come from?
  
  Well, suppose we could do that, now how do we specify the deps:
  
  DEPEND=dev-haskell/foo:???
  
  it's supposed to be the same ghc slot as our own slot is. But we can't
  get that information.

Until portage actually can do this in a better way,
you would have to multiply ebuilds
(probably each with common import and a one-line variable setting)
(I don't know the exact restrictions on portage version strings):


=== mtl-common-X.Y.Z ===
...
SLOT=${GHC_PV}-${PV}
DEPEND=dev-haskell/ghc-${GHC_PV}
...


=== mtl-ghc.6.6..X.Y.Z ===
GHC_PV=ghc.6.6
include mtl-common-X.Y.Z


=== mtl-ghc.6.8..X.Y.Z ===
GHC_PV=ghc.6.8
include mtl-common-X.Y.Z



One day, you may even want mtl for other compilers...


Wolfram


Re: Getting source code

2007-11-08 Thread kahl
  It would also be nice to have the 
  case issue with Mac OS X document, as recent file systems have been 
  case-insensitive for a while.

Just a quick note: At Mac OS X install time
(or other file system creation time)
you can choose case sensitivity for HFS+
(the default file system on Mac OS X).



Wolfram


Re: GHC 6.8.1 on Mac OS X 10.5 (Leopard)

2007-11-06 Thread kahl
  
  You do not perchance have one for powerpc, do you?  I attempted to
  bootstrap from GHC 6.6.1, but I keep getting errors about Illegal
  Instructions,

Try to force

./configure --build=powerpc-mac-darwin

or whatever is appropriate under MacOS ---
without it, ./configure will identify the target as
powerpc64-... --- just give it a try without the 64.
(This is what I need with 32bit userland under linux.)

Otherwise try adding

 GhcUnregisterised=YES
 SplitObjs=NO

to mk/build.mk.



Wolfram


Re: Unknown option -XPatternSig used in warning

2007-09-19 Thread kahl
  
  | Illegal signature in pattern: a
  | Use -XPatternSigs to permit it
  |

I am very happy to see
that it doesn't recommend -fglasgow-exts anymore!

However, I still would prefer

  Use LANGUAGE pragma with extension ``PatternSigs'' to permit it

since it points out even more things that many may not have known before...


By the way, some of my LANGUAGE pragmas are getting quite long,
and last time I tried I was not allowed to have a line break in there.

Are there any plans to allow line breaks,
or multiple LANGUAGE pragmas?



Wolfram


Re: building ghc6 on ppc64 linux

2007-07-29 Thread kahl
  
   Has anyone successfully built it on ppc64 yet?  The latest build I can
   find in a distro is ghc-6.4.2 in gentoo.  Has anyone tried ghc [6.6.1] on 
   ppc64?
  

I have a gentoo binary package built some time ago from the
gentoo ebuild for 6.6.1:

http://sqrl.mcmaster.ca/~kahl/Haskell/ghc-bin-6.6.1.tbz2

This is for 32-bit userland running on a 64-bit kernel
--- the default in gentoo, and currently restricts to unregisterised build.


Wolfram


Re: Make it possible to evaluate monadic actions when assigning record fields

2007-07-10 Thread kahl
Isaac Dupree [EMAIL PROTECTED] wrote:
  
  Adde wrote:
tmp <- foo
return Bar {
  barFoo = tmp
}
  
  There is a feature being worked on in GHC HEAD that would let you do
  
do
 tmp <- foo
 return Bar{..}
  
  which captures fields from everything of the same name that's in scope. 
I think this would also satisfy your desire.
  

I guess this means I could write:


data D = C {field1 :: Bool, field2 :: Char}

f x = do
  field1 <- foo1
  field2 <- foo2
  field3 <- foo3
  other stuff
  return C{..}


instead of


f x = do
  tmp1 <- foo1
  tmp2 <- foo2
  field3 <- foo3
  other stuff
  return $ C { field1 = tmp1, field2 = tmp2 }


This has a dangerous feel to it ---
extending the definition of D to include a field field3
may have quite unintended consequences.


What I am missing most in the record arena
is a functional notation for record update, for example:

{^ field1 }  =  \ f r -> r {field1 = f (field1 r)}
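Pending such syntax, the update function can of course be written by hand; a small sketch for the D type above (`onField1` is my own name for it):

```haskell
data D = C { field1 :: Bool, field2 :: Char } deriving Show

-- a hand-written version of the proposed {^ field1 }:
onField1 :: (Bool -> Bool) -> D -> D
onField1 f r = r { field1 = f (field1 r) }

main :: IO ()
main = print (onField1 not (C True 'x'))
-- prints C {field1 = False, field2 = 'x'}
```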


Re: [Haskell-cafe] What puts False before True?

2007-05-30 Thread kahl
  
  What is the basic philosophy for Bool being a member of Ord?
  What justifies False < True?

The implication ordering,
which on this smallest non-trivial Boolean algebra
happens to be a linear order,
is therefore the natural candidate for Ord,
the type class of ``default linear orders''.


False ==> True
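This is easy to check exhaustively (a small sketch; `implies` is my own helper name):

```haskell
implies :: Bool -> Bool -> Bool
implies p q = not p || q

main :: IO ()
main = print (and [ (p `implies` q) == (p <= q)
                  | p <- [False, True], q <- [False, True] ])
-- prints True: (<=) on Bool is exactly logical implication
```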


Wolfram
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] The C Equiv of != in Haskell miscommunication thread

2007-05-29 Thread kahl
  
  P.S. Have some cute code:
  
  Control.Monad.Fix.fix ((1:) . scanl (+) 1)


Cute!

But what an un-cute qualified name:

:t Control.Monad.Fix.fix
Control.Monad.Fix.fix :: (a -> a) -> a


Has nothing to do with monads,
and would perhaps be considered as ``out of Control'' in any case...

;-)
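For the record, the same `fix` later also became available from Data.Function, without the monadic-sounding qualifier; the cute code, spelled out as a sketch:

```haskell
import Data.Function (fix)  -- same fix, without the Control.Monad detour

-- the Fibonacci numbers, tied with an explicit knot:
fibs :: [Integer]
fibs = fix ((1:) . scanl (+) 1)

main :: IO ()
main = print (take 8 fibs)  -- prints [1,1,2,3,5,8,13,21]
```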


Wolfram


Re: Cost of Overloading vs. HOFs

2007-05-04 Thread kahl
Adrian Hey wrote:
  
  Duncan Coutts wrote:
   One might hope that in this case we could hoist the extraction of the
   dictionary members outside the inner loop.
  
  This possibility had crossed my mind too. If HOFs really are faster
  (for whatever reason) then it should be possible for a compiler to
  do this automatically.

Once everything is in terms of dictionaries,
this should even be part of ``full laziness''.


Wolfram



Re: [Haskell-cafe] Displaying infered type signature of 'offside' functions

2007-05-02 Thread kahl
Jules Bean [EMAIL PROTECTED] wrote,
concerning the problem of getting at the types of local definitions:

  Simon Peyton-Jones wrote:
  The principal difficulties here are to do with what do we want
  rather the implementation challenges.
 
   1.  Should the compiler print the type of every declaration? Should GHCi 
   allow you to ask the type of a local decl?
 
  
  IMO, ghci should definitely allow you to ask. This comes up for me every 
  time that I write any haskell code (and in general I end up hoisting 
  local definitions to the top level, which is a real pain if there is 
  local scope, data or type, to hoist with it).
  
   2.  How should the variables be identified?  There may be many local 
   bindings for 'f', so you can't say just :t f.  Ditto if dumping all 
   local bindings.
  
 
  
  I think this is a hard question. I was imagining some kind of 
  hierarchical system like foo.f, in the case that f is locally defined 
  inside foo. (Yes I know we can't easily use '.' for that). There might 
  be might be multiple fs inside the definition of foo; indeed there might 
  even be multiple fs nested inside each other.


I just wanted to contribute a PRACTICAL TRICK I use:


* If the local definition is a pattern binding

f = ...

  then I just add

f :: Ordering

* If the local definition is a, say binary, function binding

f p1 p2 = ...

  then I just add

f :: Float -> Double -> Ordering

  (Type does not matter for the number of function arrows you need to put;
   only the syntactic arity of the bindings matters here.)


This relies on the fact that the types Float, Double, and Ordering
very rarely occur in my code --- pick your own.


Now the compiler gives you wonderful error messages
``cannot match type `x y z' against Ordering'' ---
so you replace ``Ordering'' with ``x y z''.

If there are type variables in ``x y z''
that come from the context,
take care that you have

{-# LANGUAGE ScopedTypeVariables #-}

at the beginning of your module
(after the {-# OPTIONS ...#-} line,
 which should, as a matter of style,
 NOT contain -fglasgow-exts
)
and the necessary ``forall ty1 ty2 ...'' in front of the type
in the type signature of the enclosing definition.


At the cost of a re-compilation, this works wonderfully for me,
and is much less painful than hoisting AND exporting local definitions.
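The ScopedTypeVariables part of the trick, in a compilable sketch (`reverseTwice` and `go` are made-up names):

```haskell
{-# LANGUAGE ScopedTypeVariables #-}

reverseTwice :: forall a . [a] -> [a]
reverseTwice = go
  where
    -- this local signature mentions 'a' from the context; it is only
    -- legal because the enclosing 'forall a' scopes over the body:
    go :: [a] -> [a]
    go = reverse . reverse

main :: IO ()
main = print (reverseTwice [1, 2, 3 :: Int])  -- prints [1,2,3]
```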




But even I still have some wishes open:

* It would be nice if this worked inside the do-notation, too:

  do x :: Ordering
 x <- m

  (This is currently a syntax error.)

* It would be nice if {-# LANGUAGE ScopedTypeVariables #-}
  brought in no other extensions,
  and if GHC would recommend
  the appropriate LANGUAGE pragmas instead of -fglasgow-exts
  when it determines that the user might have wanted some extension.

  (What is the right Language.Extension for GADTs?)



Wolfram


Re: [Haskell-cafe] Displaying infered type signature of 'offside' functions

2007-05-02 Thread kahl
  
   * It would be nice if this worked inside the do-notation, too:
  
 do x :: Ordering
x <- m
  
  (This is currently a syntax error.)
  
  I think the following works with -fglasgow-exts:
  
  do (x :: Ordering) <- m

I know, but it is not as nice!

;-)


Wolfram


Re: [Haskell] -compiler-options vs {#-LANGUAGE Flags-#}

2007-03-23 Thread kahl
S. Alexander Jacobson [EMAIL PROTECTED] wrote:

  The correct answer, I believe, is to require that module authors put 
  flags in the module themselves where they belong.  At very least it 
  should be considered bad style not to have them in your code and really 
  what would make more sense is for compiler writers to disable command 
  line flags that specify language-extensions.

I strongly support this.

I think the LANGUAGE pragma of GHC-6.6 is a step in the right direction.
It is unfortunate that GHC still proposes to use -fglasgow-exts
whenever it suspects use of extensions;
it would be better to propose the appropriate extension for
the LANGUAGE pragma.

(By the way, why is there no extension for GADTs?
 My impression is that there is a number of other extensions,
 each of which is sufficient individually to enable GADTs.

 Some more documentation in Language.Haskell.Extension
 would also be useful...
)



Wolfram



Re: [Haskell-cafe] Recursion in Haskell

2007-02-19 Thread kahl
  
  For absorbing the functional style of programming (which is what you really 
  should be working on at this point),

For functional style and the reasoning attitude associated with lazy
functional programming, the following book is a good introduction:

@Book{Bird-2000,
  author = {Richard Bird},
  title = {Introduction to Functional Programming using {Haskell}},
  year = 2000,
  publisher = {Prentice-Hall}
}

This is the second edition of:

@Book{Bird-Wadler-1988,
  year = 1988,
  title = {Introduction to Functional Programming},
  publisher = {Prentice-Hall},
  author = {Richard Bird and Phil Wadler}
}


Wolfram


Re: [Haskell-cafe] factorising to prime numbers

2007-02-09 Thread kahl

  
  If you look at the definition of 'powers' you'll note it's infinite. So
  there's no easy way to take the product of this list, if I don't know
  how many items to take from it.

So you need finite lists.

  
  -- how many of the prime p are in the unique factorisation
  -- of the integer n?
  f 0 _ = 0
  f n p | n `mod` p == 0 = 1 + f (n `div` p) p
| otherwise = 0
  
  powers n = map (f n) primes

Extremely high-level hint:
Move from div to divMod (i.e., let f return a pair),
and add List.unfoldr (or takeWhile) capacities to map.

;-)
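Spelled out, the divMod step of that hint could look like this (a sketch; `multiplicity` is my own name, returning the exponent together with the remaining cofactor, which is exactly the shape an unfoldr step wants):

```haskell
-- how often p divides n, together with what is left of n:
multiplicity :: Integer -> Integer -> (Int, Integer)
multiplicity p = go 0
  where
    go k n = case n `divMod` p of
               (n', 0) -> go (k + 1) n'   -- p divides n: count and continue
               _       -> (k, n)          -- p no longer divides n
main :: IO ()
main = print (multiplicity 2 40)  -- prints (3,5), since 40 == 2^3 * 5
```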



Wolfram


Re: [Haskell-cafe] Space Leak Help

2007-02-04 Thread kahl
  
   \begin{code}
   catWithLen :: [a] -> (Int -> [a]) -> [a]
   catWithLen xs f = h 0 xs
     where
       h k [] = f k
       h k (x : xs) = case succ k of   -- forcing evaluation
                        k' -> x : h k' xs
   \end{code}
  
  
  Thanks but this gives a different problem:
  
  [EMAIL PROTECTED]:~/sha1 ./allInOne 101 +RTS -hc -RTS
  [2845392438,1191608682,3124634993,2018558572,2630932637]
  [2224569924,473682542,3131984545,4182845925,3846598897]
  Stack space overflow: current size 8388608 bytes.
  Use `+RTS -K<size>' to increase it.


Does it still do that if you youse seq instead of case?


\begin{code}
catWithLen :: [a] -> (Int -> [a]) -> [a]
catWithLen xs f = h 0 xs
  where
    h k [] = f k
    h k (x : xs) = let k' = succ k
                   in k' `seq` x : h k' xs
\end{code}


Wolfram


Re: [Haskell-cafe] Space Leak Help

2007-02-03 Thread kahl
  
  I have re-written SHA1 so that is more idiomatically haskell and it is easy 
  to 
  see how it implements the specification. The only problem is I now have a 
  space leak. I can see where the leak is but I'm less sure what to do about 
  getting rid of it.
  
  Here's the offending function:
  
  pad :: [Word8] -> [Word8]
  pad xs =
 xs ++ [0x80] ++ ps ++ lb
 where
l = length xs
pl = (64-(l+9)) `mod` 64
ps = replicate pl 0x00
lb = i2osp 8 (8*l)


I would try something along the following lines (untested):

\begin{spec}
catWithLen xs f = xs ++ f (length xs)
\end{spec}

\begin{code}
catWithLen :: [a] -> (Int -> [a]) -> [a]
catWithLen xs f = h 0 xs
  where
    h k [] = f k
    h k (x : xs) = case succ k of   -- forcing evaluation
                     k' -> x : h k' xs
\end{code}

\begin{code}
pad :: [Word8] -> [Word8]
pad xs = catWithLen xs f
  where
    f l = 0x80 : ps lb
      where
        -- we know that |l = length xs|
        pl = (64-(l+9)) `mod` 64
        ps = funPow pl (0x00 :)
        lb = i2osp 8 (8*l)
\end{code}

If you are lucky, then the replicate and the (++lb) in the original code
might be fused by the compiler as an instance of foldr-build
or something related --- check the optimised core code. 

In my variant I changed this to rely on efficient function powering
instead:

\begin{spec}
funPow k f = foldr (.) id $ replicate k f
\end{spec}

\begin{code}
funPow :: Int -> (a -> a) -> (a -> a)
funPow n f = case compare n 0 of
    LT -> error ("funPow: negative argument: " ++ show n)
    EQ -> id
    GT -> pow n f
  where
    pow m g = if m > 1
                then let (m',r) = divMod m 2
                         g' = g . g
                     in if r == 0
                          then pow m' g'
                          else pow m' g' . g
                else g
\end{code}

(You will probably also consider using Data.Bits
 for (`mod` 64), (8*), and (`divMod` 2).
)
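A quick sanity check of funPow against its one-line specification (a self-contained sketch repeating the definition above):

```haskell
-- efficient function powering by repeated squaring:
funPow :: Int -> (a -> a) -> (a -> a)
funPow n f = case compare n 0 of
    LT -> error ("funPow: negative argument: " ++ show n)
    EQ -> id
    GT -> pow n f
  where
    pow m g = if m > 1
                then let (m', r) = divMod m 2
                         g' = g . g
                     in if r == 0 then pow m' g' else pow m' g' . g
                else g

main :: IO ()
main = print (and [ funPow n (+ 3) 0 == foldr (.) id (replicate n (+ 3)) 0
                  | n <- [0 .. 6] ])  -- prints True
```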


Wolfram


Re: [Haskell-cafe] GADTs: the plot thickens?

2007-01-31 Thread kahl
Hi Connor,

  A puzzle for you (and for me too). I just tried to translate an old 
  Epigram favourite (which predates Epigram by some time) into GADT-speak. 
  It went wrong. I had a feeling it might. I wonder how to fix it: I 
  imagine it's possible to fix it, but that some cunning may be necessary. 
  Here it is:
  
  Type-level Nat
  
data Z = Z
data S x = S x
  
  Finite sets: Fin Z is empty, Fin (S n) has one more element than Fin n
  
data Fin :: * -> * where
  Fz :: Fin (S n)
  Fs :: Fin n -> Fin (S n)


To me, this looks more like ordinal numbers than like general finite sets:
For each type-level natural number n, the set  Fin n  contains
all object-level natural numbers k < n,
where  k = Fs^k Fz .

  The thinning function: thin i is the order-preserving embedding from 
  Fin n to Fin (S n) which never returns i.
  
   thin :: Fin (S n) -> Fin n -> Fin (S n)
   thin Fz  i   = Fs i
   thin (Fs i)  Fz  = Fz
   thin (Fs i)  (Fs j)  = Fs (thin i j)


So ``thin Fz i''  is defined for  i = undefined :: Fin Z.

If you don't want this kind of effect, you might consider to keep the empty
type away from your calculations:

 {-# OPTIONS -fglasgow-exts #-}
 data Z = Z
 data S x = S x

 data Fin :: * -> * where
   Fz :: Fin (S n)
   Fs :: Fin (S n) -> Fin (S (S n))

 thin :: Fin (S (S n)) -> Fin (S n) -> Fin (S (S n))
 thin Fz  i   = Fs i
 thin (Fs i)  Fz  = Fz
 thin (Fs i)  (Fs j)  = Fs (thin i j)

This gets me one line farther into thicken.
But what is possible now:

Add a lifting (or embedding) function:

 lift :: forall n . Fin n -> Fin (S n)
 lift Fz = Fz
 lift (Fs i) = Fs (lift i)

 thicken0 :: forall n . Fin n -> Fin n -> Maybe (Fin n)
 thicken0 Fz  Fz  = Nothing
 thicken0 Fz  (Fs j)  = Just (lift j)
 thicken0 (Fs i)  Fz  = Just Fz
 thicken0 (Fs i)  (Fs j)  = case (thicken0 i j) of
  Nothing -> Nothing
  Just k -> Just (Fs k)


I think that one key to your trouble is the fact that there is,
as far as I can see, no direct way to define
a predicate determining whether an Fin (S n) element
is the largest of its type:

 isMax :: forall n . Fin (S n) -> Bool
 isMax = undefined

This would need a type case on  Fz :: Fin (S Z)  ---
I'd dig through Oleg's posts about type equality checking.

If you can make this possible, cou can also define

 unlift :: forall n . Fin (S n) -> Maybe (Fin n)
 unlift = undefined

and get your original thicken from that.


Wolfram


Re: lambda-match vs PMC

2006-10-31 Thread kahl
After a particularly bad week in a very busy term
I am just now getting around to have a first look
at Claus Reinke's [EMAIL PROTECTED] lambda-match proposal
and at his message from 29 Oct 2006:
  
  - matchings are not first-class expressions in PMC
  
  the only syntactically correct way to mention a matching in an expression
  is inside a splice ( {| match |} ).
  this is fairly surprising given the aims of PMC,

The most important aim of the original PMC
was to be a confluent rewriting system
that emulates Haskell pattern matching
(also known as the ``functional rewriting strategy'').
This makes equational reasoning much easier
since you don't have to take a very complex evaluation strategy
into account.
(Isabelle proof of confluence described in FLOPS 2004.)

  as one would expect operations on matchings, matchings as 
  parameters and results of functions, etc. .. 

So I needed only those operations on matchings
that are ``somehow implicit'' in Haskell pattern matching,
and I did not need matching variables etc.


  application exists, but is not needed,
  because [corrected version]  {| e | ^f^ |} -- {| ^f e^ |} -- f e

I haven't tried yet whether the corresponding adaptation of the rules
still keeps the system confluent (I suspect it does),
but my original motivation was to deviate from Haskell as little as
possible, so you can have lambda-abstraction as syntactic sugar,
and the only real changes are matching groups without case-argument,
and the addition of argument supply for matchings.

  and the separation of 
  expressions and matchings seems needed only to support 
  conversions between the two:
  
  {| _ |} :: Match -> Expression
  ^ _ ^ :: Expression -> Match 

Which are handled as separate syntactic categories,
just like matching alternatives and expressions are separate
syntactic categories in Haskell.

  in spite of the monadic semantics, there are no monads in the type
  system,

Just like in ML.

In fact, just like in pure expression Haskell, which,
from the point of view taken in the MPC2006 paper,
still relies on a lifting monad to add undefined to constructed types.

  interactive:1:0:-)
  Couldn't match `Categories' against `Haskell'
  Expected type: Haskell
  Inferred type: Categories

The problem with a Haskell translation of PMC
(I do have a prototype implementation like that)
is that too much is implicit in the Haskell semantics
that needs to be made explicit, and making it explicit in Haskell notation
does not make it more readable than the category notation.

  A running commentary
  in computational lambda-calculus,

The main problem I see with that
is that the computational lambda-calculus only sugars away ONE monad,
while, in the MPC paper, we are dealing with TWO monads.

In fact, for our two-monad setting, I suspect that if we want
to get close in spirit to the computational lambda-calculus,
it is going to look pretty much like PMC again...
perhaps that's why the original description of the 
computational lambda-calculus is using categories ;-)


In the lambda-match proposal there is the remark:
  
  -- we use Maybe as the default matching monad, but [] can be fun, too..

In PMC (see the MPC paper)
you can change not only the matching monad,
but also the expression monad, for example:
 * the lifting monad in standard Haskell
 * a Maybe monad in the ``matching failure as exceptions'' view
of the pattern guards paper [Erwig-PeytonJones-2001],
 *  a list monad in functional-logic programming.


To me, it looks like the main difference between the lambda-match library
and Tullsen's pattern matching combinators [PADL 2000] is
that the lambda-match library also includes
the same ``pointwise lifting'' of the monadic operations into function types
that we use in the MPC paper.

Since PMC handles this as a language extension,
it only concerns the semantics, not the type system.

Since the lambda-match proposal does it as a library,
it has to be done inside the language,
leading to the type-class artistry involving declarations like the
following:

  -- extract (with function) from inner match monad
  -- (extraction is lifted over lambda-match parameters;
  --  we cannot express all functional dependencies,
  --  because the inner c could be a function type)
  class Ex a c da dc | da -> a, dc da -> c, da c -> dc {- , dc a c -> da -} 
  where
ex :: (a -> c) -> da -> dc 

So probably this is a seemingly innocuous extension to do notation
with the potential for some quite spectacular type error messages...

And, from my point of view,
all it achieves over my tentative PMC proposal
is to avoid the two language extensions of
alternatives inside lambda abstractions,
and argument supply (match ... with ...),
at the cost of a different language extension.

(I am also afraid that the ``|'' notation might be dangerous.
 To emphasise its character as a ``monadic lambda'',
 I would rather consider something like ``\'', to be typeset $\lambda_{}$.
)


Wolfram

Re: [Haskell] Replacing and improving pattern guards with PMC syntax

2006-10-02 Thread kahl
Brandon Moore [EMAIL PROTECTED] wrote in his answer to my PMC syntax
proposal:
  
  Maybe the '\' could be reiterated to introduce the layout group:
  
  take' = \
0 _ -> []
n \[] -> []
(x:xs) -> x : take (pred n) xs

Definitely not, since the lambda gets us back into expressions,
destroying the fall-through properties of matchings.
(In this particular case this is no problem,
 but in general it would destroy fall-through.)

  
   alt -> match exp with alts
  Maybe reuse case, unless a new keyword is needed to avoid confusion.

We definitely need this construct to produce a matching (or ``alt'').
If we use ``case'' here, then case expressions don't exist anymore.


Wolfram



[brand...@yahoo-inc.com: Re: [Haskell] Replacing and improving pattern guards with PMC syntax]

2006-10-02 Thread kahl
Brandon asked me to forward this --- I already posted a small
clarification.

Wolfram

--- Start of forwarded message ---
Date: Mon, 02 Oct 2006 00:07:17 -0700
From: Brandon Moore [EMAIL PROTECTED]
To: [EMAIL PROTECTED]
CC: haskell-prime@haskell.org
Subject: Re: [Haskell] Replacing and improving pattern guards with PMC syntax
In-Reply-To: [EMAIL PROTECTED]

[EMAIL PROTECTED] wrote:

...

 So far I never considered it important to devise a concrete syntax for PMC,
 but triggered by the current pattern guards thread on haskell-prime,
 I now try to give a PMC syntax that blends well with Haskell.
I think with some alterations your syntax would blend even better. In 
particular, it would be really nice to have a backwards-compatible syntax.
 One of my gripes with Haskell pattern matching is
 that the anonymous variant, namely case expressions,
 has an application to an argument built in,
 and therefore does not by itself produce a function.

 However, lambda-abstraction already allows single-alternative pattern 
 matching, so I propose to relax this to multi-alternative
 pattern matching:
 length = \
   [] -> 0
   x : xs -> 1 + length xs
I'm with you here. Extending lambda to introduce a layout group of 
alternatives makes sense even without the rest of the changes to 
matching. This isn't completely backwards-compatible, but it's pretty close.

While we're at it, why not have the identifier of a top level definition 
introduce a layout group of alternatives, rather than requiring the name 
to be repeated all the time.
How about

take
0 _ = []
_ [] = []
n (x:xs) = x : take (pred n) xs

 (Lists of alternatives correspond to the ``matchings'' of PMC,
  and concatenation of those lists (using ``;'' or layout) corresponds
  to matching alternative in PMC.
  ``\ m'' then corresponds to ``{| m |}'' in PMC.
 )

 For multi-argument pattern matching,
 I propose to allow nested matchings:
 zip = \
   x : xs -> y : ys -> (x,y) : zip xs ys
   _  -> _  -> []
This deviates from the current syntax of multi-argument lambda, and 
introduces overloading on -. Perhaps - could be reserved for match 
and lift, and whitespace could separate several alternatives:

zip = \
(x:xs) (y:ys) -> (x,y) : zip xs ys
_ _ -> []

There is some tension in the current syntax between argument lists which allow 
several arguments and require parentheses around constructor 
applications to delimit patterns, and case expression which match only 
one item, but without grouping.

If we are trying to unify things, perhaps case could be extended to 
support matching several values together, perhaps separating expressions 
with commas in the head and using the syntax of a lambda for the cases.

Perhaps '=' in function declarations vs. '->' for other matching could 
also be simplified somehow. I don't have any ideas here.

 take' = \
   0 ->   _  -> []
   n -> { [] -> []
; x : xs -> x : take (pred n) xs
}
I'm not sure how to handle a case like this. Introducing layout at 
whitespace seems a bit excessive, but requiring explicit separators 
looks a bit heavy on punctuation next to the current grammar, where only 
records (and perhaps
Maybe the '\' could be reiterated to introduce the layout group:

take' = \
0 _ -> []
n \[] -> []
(x:xs) -> x : take (pred n) xs


 Pattern guards now can be seen as a special case of
 alternatives-level argument application
 (``argument supply'' $\righttriangle$ in PMC).
 I find it useful to write the argument first, as in PMC.

 I see two options of writing argument supply in Haskell:
 either using new keywords, giving rise to the syntax production

 alt -> match exp with alts
Maybe reuse case, unless a new keyword is needed to avoid confusion.
 or using a keyword infix operator:

 alt -> exp | alts

 (Using this kind of argument supply instead of pattern guards
  also has the advantage that multiple alternatives become possible.)

I'm not using -> to separate patterns, but some leading symbol is 
required to separate the last pattern from the expression.

I like '|' from the current syntax, selected with hopes of incorporating 
and extending the current guard syntax.

I'm pretty sure incorporating the existing guard syntax will require 
that the grammar keep track whether the last bit of the matching was a 
pattern or a guard (just like currently | switches from patterns 
separated by whitespace to a list of guards separated by commas).

Pattern guards can be translated to matching plus argument supply, so 
argument supply needs to be part of the primitive matching syntax. I 
like the suggestion
alt -> exp | alts
It would be unambiguous but require unbounded lookahead to allow a case 
to begin with argument supply:
foo :: [Int] -> [Int]
foo = \ lookup env v | (Just r) [] -> [r]

Given argument supply, we can sugar in the existing guard syntax by allowing
alt | guard, guards, exp | alts
and
alt | guards -> expr

The sequence -> \ will transition from guards back to matching more 
arguments. It might be nice to abbreviate 

[Haskell] Replacing and improving pattern guards with PMC syntax

2006-10-01 Thread kahl

One of the problems of pattern matching in Haskell is
that it might well be regarded as not very declarative:
function definition lines cease being generalisable equations,
and the definition of the rewriting strategy involved
in Haskell pattern matching extends over about three pages,
no matter whether you look in the Haskell report,
or for a definition of the ``functional rewriting strategy''
in the theoretical literature.

<shameless plug>
To fix exactly this problem was my original goal
when I started developing a family of Pattern Matching Calculi,
see  http://www.cas.mcmaster.ca/~kahl/PMC/ .
</shameless plug>

(Other pattern matching calculi in the literature have different goals.)


The original calculus, which I shall call ``PMC'' for the remainder of
this message, is a confluent rewriting system with a normalising strategy
that incorporates Haskell pattern matching ---
the confluence proof has been mechanised using Isabelle.


So far I never considered it important to devise a concrete syntax for PMC,
but triggered by the current pattern guards thread on haskell-prime,
I now try to give a PMC syntax that blends well with Haskell.

PMC ``automatically'' includes
the ``expression pattern'' of pattern guards,
but organised differently, see below.


I am now presenting a series of syntactic extensions to Haskell,
without assuming that the reader is familiar with PMC,
but I will also explain how these extensions relate to PMC.


One of my gripes with Haskell pattern matching is
that the anonymous variant, namely case expressions,
has an application to an argument built in,
and therefore does not by itself produce a function.

However, lambda-abstraction already allows single-alternative
pattern matching, so I propose to relax this to multi-alternative
pattern matching:

 length = \
   [] -> 0
   x : xs -> 1 + length xs

(Lists of alternatives correspond to the ``matchings'' of PMC,
 and concatenation of those lists (using ``;'' or layout) corresponds
 to matching alternative in PMC.
 ``\ m'' then corresponds to ``{| m |}'' in PMC.
)

For multi-argument pattern matching,
I propose to allow nested matchings:

 zip = \
   x : xs -> y : ys -> (x,y) : zip xs ys
   _  -> _  -> []

 take' = \
   0 ->   _  -> []
   n -> { [] -> []
; x : xs -> x : take (pred n) xs
}

This means that the alternative ``->''
now can have either an expression or a list of alternatives
as its right argument.
(Having an expression there corresponds to
 ``lifting'' $\leftharpoonup \_ \rightharpoonup$ in PMC.)


Pattern guards now can be seen as a special case of
alternatives-level argument application
(``argument supply'' $\righttriangle$ in PMC).
I find it useful to write the argument first, as in PMC.

I see two options of writing argument supply in Haskell:
either using new keywords, giving rise to the syntax production

alt -> match exp with alts

or using a keyword infix operator:

alt -> exp | alts

(Using this kind of argument supply instead of pattern guards
 also has the advantage that multiple alternatives become possible.)


I start with Simon Peyton Jones' standard example:

 clunky env v1 v2 |  Just r1 <- lookup env v1
   , Just r2 <- lookup env v2  = r1 + r2
  | otherwise  = v1 + v2
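For reference, this pattern-guard version is already valid Haskell; here is a self-contained runnable form. Note that the Prelude's `lookup` takes the key first, unlike the env-first `lookup` assumed in the example.

```haskell
-- Runnable rendering of the "clunky" standard example with pattern
-- guards, using a plain association list as the environment.
clunky :: [(Int, Int)] -> Int -> Int -> Int
clunky env v1 v2
  | Just r1 <- lookup v1 env
  , Just r2 <- lookup v2 env = r1 + r2
  | otherwise                = v1 + v2

main :: IO ()
main = do
  print (clunky [(1, 10), (2, 20)] 1 2)  -- both keys found: 10 + 20
  print (clunky [(1, 10)] 1 2)           -- key 2 missing: 1 + 2
```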

This could now be written using either ``match _ with _'':

 clunky env v1 v2 = \
   match lookup env v1 with
 Just r1 ->  match lookup env v2 with
   Just r2 ->  r1 + r2
   v1 + v2

or infix ``_ | _'':

 clunky env v1 v2 = \
   lookup env v1   | Just r1 ->
 lookup env v2 | Just r2 ->  r1 + r2
   v1 + v2

Note that the syntactic structure is different from pattern guards;
here is the fully parenthesised version:

 clunky env v1 v2 = \
   { lookup env v1   | { Just r1 ->
   { lookup env v2 | { Just r2 ->  r1 + r2 }}}
   ; v1 + v2
   }


Boolean guards are matches against True:

 take = \
   0 ->   _  -> []
   n -> match n > 0 with
  True ->
{ [] -> []
; x : xs -> x : take (pred n) xs
}
  _ -> error "take: negative argument"

But Boolean guards can also be kept as syntactic sugar:


 take = \
   0 ->   _  -> []
   n | n > 0 ->
{ [] -> []
; x : xs -> x : take (pred n) xs
}
 | otherwise -> error "take: negative argument"
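For comparison, the same function can be rendered in plain Haskell 98, with ordinary Boolean guards doing the work of the match-against-True above:

```haskell
-- Haskell 98 rendering of the proposed take: the negative-argument
-- check is an ordinary guard tried before the list is inspected.
take'' :: Int -> [a] -> [a]
take'' 0 _              = []
take'' n _ | n < 0      = error "take: negative argument"
take'' _ []             = []
take'' n (x:xs)         = x : take'' (pred n) xs

main :: IO ()
main = print (take'' 2 [1, 2, 3 :: Int])
</imports>
```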


Now we have all the syntax in place ---
PMC's ``failure'' is of course the empty alternative
(allowed in Haskell 98).


For the grammar, the relevant productions in Haskell 98 are:


exp10   ->  \ apat1 ... apatn -> exp   (lambda abstraction, n >= 1)
        |   case exp of { alts }       (case expression)
alts    ->  alt1 ; ... ; altn          (n >= 1)
alt     ->  pat -> exp [where decls]
        |   pat gdpat [where decls]
        |   (empty alternative)
gdpat   ->  gd -> exp [ gdpat ]
gd      ->  | exp


This would change to the following, where I propose to rename
``alt'' to ``match'' for ``matching'' as in PMC,
since this is now

Re: [Haskell] timing/timeout (how to express that in Haskell)

2006-05-12 Thread kahl
  
  It's just annoying that turning a partial function into a total one 
  loses so much strictness, since it prevents strictness propagation. Of 
  course, this is easily solved using a `strict' Maybe:
  data Perhaps a = Just' !a | Nothing'
  
  Are other people experiencing the same thing, or is it just an academic 
  issue and can Haskell compilers optimize it?

I am using StrictMaybe.Maybe'.
I haven't tried to quantify the effect of the optimization pragmas...


Wolfram

==
%include polycode.fmt

\section{Strict Maybe Variant}

\begin{code}
module Data.Rel.Utils.StrictMaybe where

import Control.Monad
\end{code}

\begin{code}
data Maybe' a = Nothing' | Just' {fromJust' :: {-# UNPACK #-} !a}
  deriving (Eq, Ord, Show, Read)
\end{code}

\begin{code}
maybe' r f Nothing' = r
maybe' r f (Just' x) = f x
\end{code}

\begin{code}
instance Functor Maybe' where
  fmap f Nothing' = Nothing'
  fmap f (Just' x) = Just' (f x)
  {-# INLINE fmap #-}
\end{code}

\begin{code}
instance Monad Maybe' where
  return = Just'
  Nothing' >>= f = Nothing'
  (Just' x) >>= f = f x
  {-# INLINE (>>=) #-}
  fail = const Nothing'
\end{code}

\begin{code}
instance MonadPlus Maybe' where
  mzero = Nothing'
  Nothing' `mplus` m = m
  m@(Just' x) `mplus` _ = m
\end{code}
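A small self-contained usage sketch (re-declaring just the essentials so it runs on its own, and omitting the period's Monad instance, which predates the Applicative superclass): the strict field means that constructing `Just'` forces its payload.

```haskell
-- Sketch of the strict Maybe': the bang on the field forces the
-- payload to WHNF at construction time, which is what lets strictness
-- information propagate through total functions returning it.
data Maybe' a = Nothing' | Just' !a
  deriving (Eq, Ord, Show)

maybe' :: b -> (a -> b) -> Maybe' a -> b
maybe' r _ Nothing'  = r
maybe' _ f (Just' x) = f x

-- A typical partial-made-total function.
safeDiv :: Int -> Int -> Maybe' Int
safeDiv _ 0 = Nothing'
safeDiv x y = Just' (x `div` y)

main :: IO ()
main = do
  print (maybe' 0 id (safeDiv 10 2))
  print (maybe' 0 id (safeDiv 10 0))
```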
___
Haskell mailing list
Haskell@haskell.org
http://www.haskell.org/mailman/listinfo/haskell


Re: [Haskell] lhs2TeX: lazy pattern ~ formatting

2006-04-19 Thread kahl
  
  Does anyone have a nice way to format the ~ of a lazy pattern in
  lhs2TeX?  I use
  
%format ~ = "{}^\sim "
  
  but the result isn't as pretty as I'd like.  For one thing, no space
  is inserted before the ~ after a lambda, do, or whatever.

``Nice'' is in the eye of the beholder --- after limited fiddling,
I use:

\def\irref{{\lower0.3ex\hbox{\texttt{\bf\~{}}}\kern-0.1em\strut}}


Wolfram
___
Haskell mailing list
Haskell@haskell.org
http://www.haskell.org/mailman/listinfo/haskell


Re: [Haskell'-private] pragmas and annotations (RE: the record system)

2006-02-28 Thread kahl
Malcolm.Wallace wrote:

  (But then, how would you guarantee that the first three characters
  in the file must be {-# ?)

In particular, what do you propose for literate source?
(I hardly have any .hs files.)

As far as I can see,
it seems to be possible to get LaTeX to work with UTF8;
the (apparently not extremely active) ``Unicode TeX project'' Omega
apparently started out with ``16-bit Unicode'' (http://omega.enstb.org/)
and now turned to 31-bit characters (http://omega.cse.unsw.edu.au/omega/),
and the future may of course bring us other variants...


(Isn't it great that we can add a new dimension to Wadler's law
 by discussing character encodings?  ;-)


Wolfram
___
Haskell-prime mailing list
Haskell-prime@haskell.org
http://haskell.org/mailman/listinfo/haskell-prime


Re: public/private module sections (was: Haskell-prime Digest, Vol 2, Issue 58)

2006-02-24 Thread kahl
Bulat.Ziganshin responded to Claus Reinke:
  
  CR yes, this would add one constraint on where to place definitions. but
  CR grouping logically related definitions together is not quite what one 
  CR might think anyway: aren't the definitions making up the interface
  CR most strongly related, with the others just changeable auxiliaries?
  
  [...]
 
  and in order to see map's type or comment when i implement it, i
  should see to other part of file. i personally prefer to have
  public/private modifiers on each function and gather interface
  documentation by tools like haddock

In my personal experience, I have three kinds of Haskell modules:

 1.  Exporting everything
 2.  Exporting almost everything
 3.  Having a designed export interface


I agree with Claus' comment,
understanding it to refer to modules of kind 3.

I also agree with the motivation of Bulat's comment,
and understand it to refer mostly to modules of kind 1.


I found myself striving to eliminate large modules of kind 2,
typically replacing them by a large module of kind 1
which re-exports a small kernel module of kind 3
(or, to be honest, frequently of kind 2).
This method is mostly for statype encapsulation ---
another motivation to have kind 2 modules are auxiliary functions:
I mostly strive to abstract them away
into (kind 1) standard library supplement modules, or keep them local.

Therefore, I essentially do not have large kind 2 modules,
and for each module of kind 3 (or small module of kind 2) in my code base,
I guess I have at least five of kind 1.
(I.e., for each module with an explicit export list, at least five without,
 or with an export list that only adds re-exports of imports.)


Module kind 1 is of course extremely well-served by the option
not to have an explicit export list ---
I may have overlooked something,
but I did not have the impression that anybody wanted to abolish that.


Module kind 3, where we put real thought into design of the interface,
would of course be served better by a more explicit interface syntax
(like ML's signatures, or module types).


Therefore I would propose that we try to get a better
and more conscious feeling of the different module kinds we need supported,
and then optimise the language for each module kind separately
as far as possible.

Do you have or recognise other frequently-used module kinds?
Should we consider collection modules, or, more generally, kind 1 modules
that also re-export imports, as a separate kind?


Wolfram

___
Haskell-prime mailing list
Haskell-prime@haskell.org
http://haskell.org/mailman/listinfo/haskell-prime


Re: Problems with 6.x compilers

2005-11-23 Thread kahl
  
   6.4 and 6.4.1: When repeating a build (with ghc --make) all modules
   are rebuilt even if nothing has changed. With earlier compilers,
   only linking takes place in this setting. Can I change this
   behaviour? I cannot develop this way.
  
  Should not happen, if it does there is a bug.  Can you tell us how to
  repeat the behaviour you're seeing?  We aren't seeing any unnecessary
  recompilation here.

I think I had that effect exactly when using   -ddump-minimal-imports


Wolfram
___
Glasgow-haskell-users mailing list
Glasgow-haskell-users@haskell.org
http://www.haskell.org/mailman/listinfo/glasgow-haskell-users


Re: [Haskell] The Haskell mailing list

2005-11-16 Thread kahl

Simon Peyton-Jones [EMAIL PROTECTED] writes:

 Can anyone remember when the Haskell mailing list first opened?  Does
 anyone have an archive that goes right back to the beginning?

In my news archives
(which start 9 Feb 92,
 shortly after we got an internet connection back there :-)
I found the Haskell 1.2 announcement,
which refers to a mailing list:

 | From: [EMAIL PROTECTED] (john peterson)
 | Date: Fri, 20 Mar 1992 18:27:21 GMT
 | Newsgroups: comp.lang.functional
 | Subject: Haskell Report version 1.2 and tutorial now available
 | 
 | [...]
 | 
 | A mailing list for general discussions about Haskell is reachable in
 | one of two ways:
 | 
 | [EMAIL PROTECTED]  or  [EMAIL PROTECTED]
 | 
 | To be added to the above mailing list, send mail to either
 | [EMAIL PROTECTED] or [EMAIL PROTECTED],
 | whichever is the nearer site to you.  Please do not post
 | administrative requests direct to the mailing list.  To inquire about
 | implementation status, send mail to one of the addresses mentioned in
 | the preface of the report.

(One of the next messages:

 | From: [EMAIL PROTECTED] (Will Partain)
 | Date: 30 Mar 92 08:46:57 GMT
 | Newsgroups: mail.haskell,comp.lang.functional
 | Subject: glorious Glasgow Haskell compiler, 0.06, PATCH 1

;-)


Wolfram
___
Haskell mailing list
Haskell@haskell.org
http://www.haskell.org/mailman/listinfo/haskell


[Haskell] The next language definition version will be ``Haskell 1.6''

2005-11-02 Thread kahl
Hi all,

  somebody wrote something which reminds me:


  
  *****************************************************************
  *                                                               *
  *  The ``next stable version'' of the Haskell language          *
  *  definition should be called ``Haskell 1.6''.                 *
  *                                                               *
  *****************************************************************
  


This follows the numbering scheme used before the current stable version.

Some will still remember that
during the discussion for the name of that version,
a ballot was put out, and ``somehow'' the logical name
``Haskell 1.5'' was missing from that ballot ---
it still received the second-most votes after ``Haskell 98'':
as a free-form fill-in choice!
But ``somehow'' nobody drew the consequence
to do a corrected ballot...


Add to that, that ``Haskell 98'' did of course not come out
at a point of time that had any obvious relation with the number 98.

Choosing ``Haskell 2006'' would therefore be equally unwise.

And choosing something like ``Haskell 06'' would send a message
that is completely wrong for a language
that actually provides arbitrary-precision integers
as its default integer number type.
(It would also make version comparison even harder...)

I therefore encourage everybody
to refer to the current stable version as ``Haskell 1.5'',
and stick with ``Haskell 1.6'' as the name for the next stable version.


Wolfram

___
Haskell mailing list
Haskell@haskell.org
http://www.haskell.org/mailman/listinfo/haskell


Re: [Fwd: Re: [Haskell-cafe] Typeclasses and GADT [Was: Regular Expressions without GADTs]]

2005-10-26 Thread kahl
Tomasz Zielonka [EMAIL PROTECTED] wrote
Wed, 26 Oct 2005 13:37:29 +0200:
 
  Speaking about casts, I was playing with using GADTs to create a
  non-extensible version of Data.Typeable and Data.Dynamic.
  I wonder if it's possible to write such a thing without GADTs (and
  unsafeCoerce, which is used in Data.Dynamic, IIRC).

It is possible to get even closer to Typeable by using
the same interface --- which is sufficient for some applications.

Otherwise, what I did recently is essentially equivalent
to the ``non-Dynamic part'' of Tomasz code
(and adding Dyn would of course work in the same way):

Tomasz'            --   attached modules
---------------------------------------------------------
data Type          --   data TI.TI*

class Typed        --   class Typeable.Typeable*

newtype Wrapper*   --   replaced by the ``*''s above ;-)



Wolfram


[attachment: TI.lhs]
%include lhs2TeX.fmt
%include WKlhs.sty
\section{Type Indices}

%{-# OPTIONS_GHC -fglasgow-exts #-}  taken out for lhs2tex
\begin{code}
module TI where

import NatNum
\end{code}

For each ground type |a| constructed only from supported type constructors,
there is exactly one \emph{type index} in |TI a|.
In that respect,
an element of |TI a|
is similar to the satisfaction of the constraint |Typeable a|.

The important difference between |TI a| and |Typeable a|
is that structural induction over the structure of a type
documented by a |TI a| contains references to the constituent types
\emph{at the type level},
while structural induction over |TypeRep| representations of
type structure made accessible by |Typeable a|
has no connection to the type level at all.

\begin{code}
data TI :: * -> * where
  Triv :: TI ()
  Bool :: TI Bool
  Char :: TI Char
  Nat  :: TI Nat
  Int  :: TI Int
  Integer :: TI Integer
  Float :: TI Float
  Double :: TI Double
  Apply_0 :: TI_0 t -> TI a -> TI (t a)
  Apply_1 :: TI_1 t -> TI_0 a -> TI (t a)
\end{code}

Since one of the limitations of |Data.Typeable|
is its naming scheme that does not accommodate higher-kinded type constructors,
we use a naming scheme that is at least somewhat open in that direction,
and provide
alternative names following the scheme of |Data.Typeable| as type synonyms.

\begin{code}
type TI1 = TI_0
type TI2 = TI_0_0
type TI3 = TI_0_0_0
\end{code}

\begin{code}
data TI_0 :: (* -> *) -> * where
  Maybe :: TI_0 Maybe
  List :: TI_0 []
  Apply_0_0 :: TI_0_0 t -> TI a -> TI_0 (t a)
\end{code}

\begin{code}
data TI_0_0 :: (* -> * -> *) -> * where
  Either :: TI_0_0 Either
  Pair :: TI_0_0 (,)
  Fct :: TI_0_0 (->)
  Apply_0_0_0 :: TI_0_0_0 t -> TI a -> TI_0_0 (t a)
\end{code}

\begin{code}
data TI_0_0_0 :: (* -> * -> * -> *) -> * where
  Tup3 :: TI_0_0_0 (,,)
\end{code}

Just to demonstrate a higher-order kind in this context,
we define the datatype recursion type constructor.

\begin{code}
data TI_1 :: ((* -> *) -> *) -> * where
  Fix :: TI_1 DFix

data DFix f = DFix (f (DFix f))
\end{code}

Defining type index equality without requiring identical types
makes the implementation of the cast functions much easier.

\begin{code}
eqTI :: TI a -> TI b -> Bool
eqTI  Triv Triv = True
eqTI  Bool Bool = True
eqTI  Char Char = True
eqTI  Nat Nat = True
eqTI  Int Int = True
eqTI  Integer Integer = True
eqTI  Float Float = True
eqTI  Double Double = True
eqTI (Apply_0 t a) (Apply_0 t' a') = eqTI_0 t t' && eqTI a a'
eqTI (Apply_1 t a) (Apply_1 t' a') = eqTI_1 t t' && eqTI_0 a a'
eqTI _ _ = False
\end{code}

\begin{code}
eqTI_0 :: TI_0 t -> TI_0 t' -> Bool
eqTI_0 Maybe Maybe = True
eqTI_0 List List = True
eqTI_0 (Apply_0_0 t a) (Apply_0_0 t' a') = eqTI_0_0 t t' && eqTI a a'
eqTI_0 _ _ = False
\end{code}

\begin{code}
eqTI_1 :: TI_1 t -> TI_1 t' -> Bool
eqTI_1 Fix Fix = True
eqTI_1 _ _ = False
\end{code}

\begin{code}
eqTI_0_0 :: TI_0_0 t -> TI_0_0 t' -> Bool
eqTI_0_0 Either Either = True
eqTI_0_0 Pair Pair = True
eqTI_0_0 Fct Fct = True
eqTI_0_0 _ _ = False
\end{code}
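The payoff of keeping the type structure at the type level, rather than in a `TypeRep`, is that index comparison can produce coercion evidence instead of a mere `Bool`. A minimal self-contained sketch of that idea (names here are illustrative, not the TI module's own):

```haskell
{-# LANGUAGE GADTs #-}

-- A tiny GADT of type indices.  Comparing two indices structurally
-- can yield an actual conversion function on success, which is
-- exactly what TypeRep-based comparison cannot give you.
data Ty a where
  TyInt  :: Ty Int
  TyBool :: Ty Bool
  TyList :: Ty a -> Ty [a]

-- Evidence-producing equality on type indices.
eqTy :: Ty a -> Ty b -> Maybe (a -> b)
eqTy TyInt  TyInt  = Just id
eqTy TyBool TyBool = Just id
eqTy (TyList a) (TyList b) = fmap map (eqTy a b)
eqTy _ _ = Nothing

-- A safe cast falls out directly.
cast' :: Ty a -> Ty b -> a -> Maybe b
cast' ta tb x = fmap ($ x) (eqTy ta tb)

main :: IO ()
main = do
  print (cast' TyInt TyInt 42)
  print (cast' (TyList TyInt) (TyList TyBool) [1, 2])
```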


%{{{ EMACS lv
% Local Variables:
% folded-file: t
% fold-internal-margins: 0
% eval: (fold-set-marks %{{{  %}}})
% eval: (fold-whole-buffer)
% end:
%}}}
[attachment: Typeable.lhs]
%include lhs2TeX.fmt
%include WKlhs.sty
\section{Home-made |Typeable|}

%{-# OPTIONS_GHC -fglasgow-exts #-}  taken out for lhs2tex
\begin{code}
module Typeable where

import NatNum
import TI
import GHC.Base ( unsafeCoerce# )
\end{code}

This module is currently happily restricting itself to Glasgow Haskell.
Many things that are done in a more portable way in |Data.Typeable|
are done in a more concise, but GHC-specific way here.

The following function is GHC-specific:
\begin{code}
unsafeCoerce :: a -> b
unsafeCoerce = unsafeCoerce#
\end{code}

For the time being, we only go to |Typeable3|,
while |Data.Typeable| goes all the way to |Typeable7|.

\begin{code}
class Typeable  a where typeIndex  ::   a -> TI a
class Typeable1 t where 

Re: [Fwd: Re: [Haskell-cafe] Typeclasses and GADT [Was: Regular Expressions without GADTs]]

2005-10-26 Thread kahl
 
  On 26 Oct 2005 10:11:25 -0400, [EMAIL PROTECTED]
  [EMAIL PROTECTED] wrote:
   Tomasz Zielonka [EMAIL PROTECTED] wrote
   Wed, 26 Oct 2005 13:37:29 +0200:

 Speaking about casts, I was playing with using GADTs to create a
 non-extensible version of Data.Typeable and Data.Dynamic.
 I wonder if it's possible to write such a thing without GADTs (and
 unsafeCoerce, which is used in Data.Dynamic, IIRC).
  
   It is possible to get even closer to Typeable by using
   the same interface --- which is sufficient for some applications.
 
  Hmmm... but you use unsafeCoerce, don't you? ;-)
 
  I might have been unclear about it - I didn't want to use unsafeCoerce.
  With unsafeCoerce you can easily implement Typeable/Dynamic without
  GADTs, don't you?

The misunderstanding was entirely on my side:
I had not seen the previous discussion,
but only been forwarded your last message by Jacques.
Now I finally subscribed to haskell-cafe, too...

My concern was more with the interface than with the implementation.

I need ``TI'' (your ``Type'')
to make some other things possible that are not possible with Data.Typeable,
but also have a bunch of things that do work with Data.Typeable.

With -ddump-minimal-imports, I see that most of my files
import only (Typeable*(), gcast) from Data.Typeable ---
only a fraction actually use TypeRep or typeOf*.

My main point was that this common interface can be implemented
also with the GADT implementation of Typeable,
so there is no need to rewrite code based on that common interface.

A secondary point was the use of separate types at separate kinds,
corresponding to the separate Typeable* classes at separate kinds.

Since your Type and my TI are essentially the same,
your nice technique for the cast can of course be transferred ---
while your Wrapper* types have a custom flavour,
I use general type combinators that are useful also in other settings.
These are essentially just the standard combinatory logic combinators
(with some Haskell influence ;-):

newtype I x        = I x
newtype K x y      = K x
newtype S f g x    = S (f x (g x))
newtype Flip f x y = Flip (f y x)
newtype B f g x    = B (f (g x))

Unfortunately we do not have kind polymorphism,
so we have to define variants for all the kind combinations we need.
Naming these is a challenge --- I am still experimenting.
(attached module TypeCombinators)


The attached modules now don't use unsafeCoerce any more (thanks!),
but still include all the higher-kinded material.


Wolfram




TI.lhs
Description: Binary data


Typeable.lhs
Description: Binary data


TypeCombinators.lhs
Description: Binary data


GCast.lhs
Description: Binary data


NatNum.lhs
Description: Binary data

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell] Re: help with some basic code that doesn't work

2005-02-13 Thread kahl
Shin-Cheng Mu [EMAIL PROTECTED] wrote:
  
  Occasionally I would need to define recursive datatypes
  using an explicit fixed-point operator, such as:
  
data Fix f = In (f (Fix f))  deriving (Show, Eq)
data L a x = Nil | Cons a x  deriving (Show, Eq)
  
  However, Haskell was not able to  derive from Fix f any
  instances. The following is what happens in GHCi:
  
 *Main> In Nil == In Nil
  
 Context reduction stack overflow; size = 21
 [...]
  
   instance Show (f (Fix f)) => Show (Fix f) where
 showsPrec _ (In x) = ("In (" ++) . showsPrec 1 x . (')':)
  
  This is rather unsatisfactory, because I would not be able
  to inspect values of type Fix f in the interpreter. Is there
  a way to get around this?

You have to tie the knot yourself --- unfortunately this involves
re-coding the instance functors behind ``deriving''.

I did this last summer for demonstration purposes ---
you need only -fglasgow-exts for the ``deep instance'':


\begin{code}
module Fix where

data Fix f = F (f (Fix f))

data L a b = L (Maybe (a,b))
 deriving Show

oParen = ('(' :)
cParen = (')' :)
parens shows = oParen . shows . cParen

mkShowsPair showsA showsB (a,b) = parens $ showsA a . (", " ++) . showsB b

mkShowsMaybe showsA Nothing = ("Nothing" ++)
mkShowsMaybe showsA (Just a) = parens $ ("Just " ++) . showsA a

mkShowsL showsA showsB (L m) = parens $
   ("L " ++) . mkShowsMaybe (mkShowsPair showsA showsB) m

mkShowsFix :: ((Fix f -> ShowS) -> (f (Fix f) -> ShowS)) ->
  (Fix f -> ShowS)
mkShowsFix mkShowsF = showsF
 where
  showsF (F x) = parens $ ("F " ++) . mkShowsF showsF x

showsFL :: (Show a) => Fix (L a) -> ShowS
showsFL = mkShowsFix (mkShowsL shows)


instance (Show a) => Show (Fix (L a)) where
  showsPrec _ = showsFL


flEmpty = F (L Nothing)
flCons x xs = F (L (Just (x,xs)))

infixr 5 `flCons`
\end{code}
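As a present-day aside (not part of the 2005 answer): with GHC's UndecidableInstances, the ``deep instance'' from the question can be written directly, because GHC constructs a recursive dictionary, so the hand-rolled shows-functions are no longer needed. A sketch:

```haskell
{-# LANGUAGE FlexibleContexts, UndecidableInstances #-}

-- Knot-tying via a recursive dictionary: the instance context
-- mentions the result of the instance itself, which GHC accepts
-- once UndecidableInstances lifts the termination check.
data Fix f = In (f (Fix f))

instance Show (f (Fix f)) => Show (Fix f) where
  showsPrec d (In x) =
    showParen (d > 10) $ showString "In " . showsPrec 11 x

data L a x = Nil | Cons a x
  deriving Show

example :: Fix (L Int)
example = In (Cons 1 (In (Cons 2 (In Nil))))

main :: IO ()
main = print example
```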



Wolfram
___
Haskell mailing list
Haskell@haskell.org
http://www.haskell.org/mailman/listinfo/haskell


Re: [Haskell] Context of a pattern variable

2005-01-12 Thread kahl
Tomasz Zielonka [EMAIL PROTECTED] continues the thread
on his second-order extension of as-patterns:
  
   Malcolm Wallace [EMAIL PROTECTED] writes:
   
Marcus Mohnen, Context Patterns, Proceedings of IFL'96 (LNCS 1268)
Marcus Mohnen, Context Patterns II, Proceedings of IFL'97 (LNCS 1467)
   
   There is a web page too, with an implementation of context patterns
   for an old (2.01) version of ghc:
   
   http://www-i2.informatik.rwth-aachen.de/Staff/Current/mohnen/CP/index.html
  
  Wow, this gives much more than my proposal. 
  
  Actually, it seems to be a similar, but different idea.
  Mohnen's patterns must (AFAICS) be compiled to a
  pattern _search_ algorithm, while mine can be
  implemented with very simple code transformation. 

The way I would put this is that Mohnen's context patterns
are an addition to the matching aspect of patterns
(among other things, it introduces true second-order variables
 and replaces the Maybe monad
 used for example by Tullsen and Harrison to model pattern matching
 by a list monad),
while Tomasz' idea is an addition to the access interface
for the result of pattern matching (of which as-patterns are a part).


Tomasz' also asked:

  Probably I get something wrong, but couldn't we simply write it this way?
  
case t of
(ctx z) @ (_, _, z) -> ctx (z + 1)
  
  That is - completely forget about @ ?

One might, and I had considered it,
but I think that hiding the two sides of @ is conceptually not clean,
because z would then in some sense be bound by both the @ and the -.

I would consider this as syntactic artificial sweetener ...


   The first occurrence of zH is a binding occurrence,
   and the second occurrence is co-bound by the ``@''.
   (Replacing ``@'' with ``@'' could give rise to confusing errors,
no matter which way the binding of its left argument is resolved.)
  
  Could you give an example?

f3 z q3 = case q3 of
(ctx z) @ (x, z @ Just y) -> f z (ctx y)

could be defined to be alpha-equivalent to either of the following two
(distinguishing @ and @):

f3' z q3 = case q3 of
 (ctx zH) @ (x, zH @ Just y) -> f z (ctx y)

f3'' z q3 = case q3 of
 (ctx zH) @ (x, z @ Just y) -> f z (ctx y)

No matter which definition you choose,
somebody writing f3 could have the other definition in mind.
(They would probably arrive at the situation of f3 via modifying
 something else...)


  Nice. So we could also collapse the case expressions, nesting the
  second-order patterns?:
  
  case t of
(ctx1 vH) @ (x, y, vH @ (ctx2 zH) @ (a, zH @ Just z)) -> ctx1 (ctx2 z)

Of course.

We could even extend it to do third-order patterns, and so on:

case (1,[Left 2, Right 3, Left 4, Right 5]) of
  (ctx3 f) @ (x, (f h) @ (y : z : h @ (Left q : _))) -> ctx3 List.inits
=
(1, [[], [Left 4], [Left 4, Right 5]])

since f matches to \ h -> Left 2 : Right 3 : h
and ctx3 therefore matches to

\ f -> (1, f [Left 4, Right 5])



Have fun!



Wolfram


-

@InProceedings{Tullsen-2000,
  author = {Mark Tullsen},
  title = {First Class Patterns},
  crossref = {PADL2000},
  pages = {1--15},
  URL = {http://www.cs.yale.edu/~tullsen/patterns.ps},
}

@InProceedings{Harrison-Sheard-Hook-2002,
  author = {William L. Harrison and Timothy Sheard and James Hook},
  title = {Fine Control of Demand in {Haskell}},
  crossref = {MPC2002}
}

For the syntactic side of Haskell pattern matching:

@InProceedings{Kahl-2004a,
  author = {Wolfram Kahl},
  title =  {Basic Pattern Matching Calculi: A Fresh View on Matching Failure},
  pages = {276--290},
  booktitle = {Functional and Logic Programming, {Proceedings of FLOPS 2004}},
  year = 2004,
  editor =   {Yukiyoshi Kameyama and Peter Stuckey},
  volume =   {2998},
  series =   LNCS,
  publisher = Springer,
  note = {Long version: SQRL Report No. 16, available from
  \url{http://sqrl.mcmaster.ca/sqrl_reports.html}},
}
___
Haskell mailing list
Haskell@haskell.org
http://www.haskell.org/mailman/listinfo/haskell


Re: [Haskell] Re: Context of a pattern variable

2005-01-11 Thread kahl
Tomasz Zielonka [EMAIL PROTECTED]
proposed an interesting kind of ``second-order as-patterns''
that are ``safe'' from the usual problems of second-order matching
by virtue of fixing (at least) the spine
of the images of the second-order variables.

Tomasz' first example (incorporating Stefan Holdermans' correction) is:

 | before:
 | 
 |   case t of
 |   (x, y, z) -> (x, y, z+1)
 | 
 | after:
 | 
 |   case t of
 |   (_, _, ctx @ z) -> ctx (z + 1)

Stefan Monnier [EMAIL PROTECTED]
generalised to contexts with arbitrarily many holes:

 case t of
 (x, y, z) -> (x, y, z+1)
  
  How would you do:
  
  case t of
(x, y, z, a, b, c) -> (x+1, y, z, a, b, c+2)
  
  would you allow:
  
  case t of
(ctx @ x, _, _, _, _, ctx @ c) -> ctx (x+1) (c+2)

First a minor point of ``getting the names right'':

In either case ``ctx'' is a second-order variable,
or a metavariable in the sense of Klop's Combinatory Rewriting Systems
(CRSs, [Klop-1980]), and not a variable for a ``context''
in the technical sense of ``term with a hole''.
Even if patterns in Haskell included variable binders,
we would want to avoid the variable capture allowed by instantiating the
hole of a context containing variable binders on its spine.
(In the context of first-order patterns this distinction is not
 so relevant, but I still believe that it makes sense to be careful.)

Already in his original message, Tomasz identifies the following problem:

case t of
(x, y, (a, ctx @ Just z)) -> ctx z
  
  wouldn't be equivalent to this one
  
case t of
(x, y, v) -> 
case v of 
(a, ctx @ Just z) -> ctx z

[ The last line contains my interpretation of Tomasz' intention. ]

This could be alleviated by making the character of the intended binding
as a second-order binding more explicit
by allowing as-patterns to bind against function left-hand sides,
where variables occurring in patterns may be ``co-bound''
in the pattern part of the as-pattern
by a SINGLE occurrence of the ``co-as''
for which I continue to use Tomasz' symbol ``@''.

The first example would then be written:

  case t of
  (ctx zH) @ (_, _, zH @ z) -> ctx (z + 1)

I continue to use the variable name ``ctx'' to better exhibit the
relation with the original notation.
``zH'' stands for ``z-hole'' for lack of a better naming convention.
The first occurrence of zH is a binding occurrence,
and the second occurrence is co-bound by the ``@''.
(Replacing ``@'' with ``@'' could give rise to confusing errors,
 no matter which way the binding of its left argument is resolved.)

The first version of Tomasz' counterexample
is treated in the same way as the first example:

  case t of
(ctx zH) @ (x, y, (a, zH @ Just z)) -> ctx z

The expression corresponding to the ``broken'' context
then exhibits a composed context:

  case t of
(ctx1 vH) @ (x, y, vH @ v) ->
case v of
   (ctx2 zH) @ (a, zH @ Just z) -> ctx1 (ctx2 z)


Allowing arbitrary function left-hand sides
(non-terminal ``funlhs'' from the Haskell98 report)
is probably not useful, but should be otherwise unproblematic ;-)

As an example,

  case q1 of
(f (xH, yH)) @ ((xH @ x, yH @ y) : _) -> f p

would be equivalent to:

  case q1 of
(f pH) @ (pH @ (x, y) : _) -> f p


Essentially the same holds for admitting ``hole variables''
that are not bound by a @, since they will just be ignored arguments
of the ``context'' function.

   case q2 of
 (f xH yH) @ (xH @ x, Nothing) -> f (x+1) (Right 2)

would be equivalent to:

   case q2 of
 (f xH) @ (xH @ x, Nothing) -> f (x+1)


(The restriction to only single occurrences to the left of @
 corresponds directly to the linearity requirement for Haskell patterns.)


I think that this way we have the concepts right ---
how useful this feature would be
is probably going to be mostly a question of programming style.
For very deep pattern matching in, for example, XSLT-like applications
it could probably be quite convenient.
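For comparison, the effect of the proposed notation can be approximated in plain Haskell by binding the context function explicitly (a sketch of mine; the names bump and ctx are not part of the proposal):

```haskell
-- The proposed  (ctx zH) @ (_, _, zH @ z) -> ctx (z + 1)  corresponds
-- to binding the "context" metavariable ctx by hand as a function that
-- rebuilds the spine around the hole:
bump :: (a, b, Int) -> (a, b, Int)
bump t =
  case t of
    (x, y, z) -> let ctx z' = (x, y, z')   -- ctx plays the role of the metavariable
                 in  ctx (z + 1)

main :: IO ()
main = print (bump ("a", "b", 41))
```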



Best regards,

Wolfram



@TechReport{Klop-1980,
  author = {Jan Willem Klop},
  title = {Combinatory Reduction Systems},
  year = 1980,
  type = {Mathematical Centre Tracts},
  number = 127,
  note = {PhD thesis},
  institution = {Centre for Mathematics and Computer Science},
  address = {Amsterdam},
}


Re: [Haskell] Program maintenance tools

2004-10-12 Thread kahl
Gavin Lowe [EMAIL PROTECTED] asked for:
  
  - a graphical representation of dependencies between modules.

HsDep available from

http://www.cas.mcmaster.ca/~kahl/Haskell/

uses dot from AT&T's graphviz suite.
You can of course tune the generated dot file manually
and run dot again to render the adjusted result.


Wolfram



Re: tables

2002-06-13 Thread kahl


Peter Padawitz [EMAIL PROTECTED] wrote Thu, 13 Jun 2002:

  Have Parnas' tabular expressions and algorithms
  ever been implemented in Haskell?

Not yet, as far as I know.

  Would anyone consider this a reasonable project?

We are actually working on it, hoping to get some first
results over the summer.

It will probably be connected into RATH (Relation Algebra Tools for Haskell),
old homepage waiting to be transferred and updated:

http://ist.unibw-muenchen.de/relmics/tools/RATH/


Wolfram

---

Dr. Wolfram Kahl
Associate Professor
Department of Computing and Software
Information Technology Building, Room 245
Faculty of Engineering
McMaster University
Hamilton, Ontario, Canada, L8S 4K1

Phone:  (905) 525-9140, Ext. 27042
Fax:(905) 524-0340
E-Mail: [EMAIL PROTECTED]
URL:http://www.cas.mcmaster.ca/~kahl/



Re: Incoherence

2001-10-24 Thread kahl



John Hughes wrote:
  What we need is different binding syntax for monomorphic and polymorphic
  bindings. Roll on := and = ...

If I recall correctly, in some earlier language (KRC?)
this difference was achieved by letting let-bindings be
polymorphic, and where-bindings be monomorphic.
The idea was that where-bindings ONLY encode sharing,
i.e., making the term graph structure explicit.

I actually like that distinction



Wolfram





Re: The future of Haskell discussion

2001-09-13 Thread kahl


Marcin 'Qrczak' Kowalczyk [EMAIL PROTECTED] asks:
 
 12 Sep 2001 12:37:25 -, [EMAIL PROTECTED] 
[EMAIL PROTECTED] pisze:
 
  * Currently HOPS implements only one evaluation strategy,
namely leftmost outermost graph rewriting with sharing preservation
(without automatic sharing maximisation).
With the standard rules in place, this corresponds
to the original definition of lazy evaluation
(also known as the ``D-rule''), which is different
from the lazy pattern matching evaluation
of sequential Haskell implementations.
 
 What is the difference?

If you have

f x [] = 0
f x (y:ys) = 1 + f x ys

bot = bot

and consider the redex

f bot (enumFromTo 1 2)

then leftmost outermost rewriting (sharing is irrelevant here) diverges:

f bot (enumFromTo 1 2) -> 
f bot (enumFromTo 1 2) -> ...

while the ``functional strategy'' finds out that the second argument
needs to be evaluated first, and terminates:

f bot (enumFromTo 1 2) ->
f bot (1 : enumFromTo 2 2) ->
1 + f bot (enumFromTo 2 2) ->
1 + f bot (2 : enumFromTo 3 2) ->
1 + (1 + f bot (enumFromTo 3 2)) ->
1 + (1 + f bot []) ->
1 + (1 + 0) ->
1 + 1 ->
2
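The terminating trace above can be replayed directly in GHC (a small harness of mine; the definition of f is copied from the beginning of this message):

```haskell
-- f never demands its first argument, so passing undefined is safe
-- under the functional strategy that GHC implements:
f :: a -> [b] -> Int
f _ []     = 0
f x (_:ys) = 1 + f x ys

main :: IO ()
main = print (f undefined (enumFromTo 1 2))
```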


For a definition of the functional rewriting strategy
see for example p. 139f in:

@Book{Plasmeijer-vanEekelen-1993,
  author =   {Rinus Plasmeijer and van Eekelen, Marko},
  title ={Functional Programming and Parallel Graph Rewriting},
  publisher ={Addison-Wesley},
  year = 1993,
  series =   {International Computer Science Series},
  ISBN = {0-201-41663-8}
}

The fact that Haskell uses the functional strategy
should also follow from the translation for function bindings
given in the Haskell report, section 4.4.3

http://research.microsoft.com/~simonpj/haskell98-revised/haskell98-report-html/decls.html#sect4.4.3

in conjunction with the pattern matching semantics, section 3.17.

http://research.microsoft.com/~simonpj/haskell98-revised/haskell98-report-html/exps.html#pattern-matching


However, I already have troubles with
the first sentence of 3.17.2 ``Informal Semantics of Pattern Matching'':

http://research.microsoft.com/~simonpj/haskell98-revised/haskell98-report-html/exps.html#sect3.17.2

which says: ``Patterns are matched against values.'':

Variables are patterns, and in the above example,
the variable x is matched against the non-value bot ---
if my understanding of values is correct.
(Also, the direction of ``matching against'' in this
sentence is contrary to that used in most of the explanations that follow
after it.)

For the examples section of 3.17.2,
perhaps one might add an example under item 1. that illustrates this effect:

| If (x,'b') is matched against (_|_,'b'),
| the match succeeds and binds x to _|_.



Wolfram







Re: The future of Haskell discussion

2001-09-12 Thread kahl

Olaf Chitil [EMAIL PROTECTED] summarised the 10 minute talks
of this year's Haskell workshop, including my presentation:
 
 Wolfram Kahl: Animating Haskell by Term Graph Rewriting
 HOPS is a system for term graph rewriting that can also show the
 rewriting steps in a graphical animation. [...]

For clarification, a few core points:

* I have a GHC extension, called MHA (for ``Munich Haskell Animator'')
  that automatically converts programs written in a subset of Haskell
  into HOPS modules. This is not yet publicly available,
  but this may change in the future. Contact me if interested.

* Animation in HOPS can show every intermediate step,
  but it is also possible to show only a selection,
  currently by the module where the applied rules reside.

  On a 1.2GHz Athlon, I currently get up to over 1000 raw rule applications
  per second, and around 10 graph drawings per second, so
  striking some reasonable balance can be quite helpful.

  The current version of HOPS awaits mostly a thorough documentation
  update for release.

  The HOPS home page is: http://ist.unibw-muenchen.de/kahl/HOPS/

* Currently HOPS implements only one evaluation strategy,
  namely leftmost outermost graph rewriting with sharing preservation
  (without automatic sharing maximisation).
  With the standard rules in place, this corresponds
  to the original definition of lazy evaluation
  (also known as the ``D-rule''), which is different
  from the lazy pattern matching evaluation
  of sequential Haskell implementations.

  Naive translations can therefore yield inappropriate results,
  and for some Haskell programs this is still the case
  in the current version.
  Therefore, you have to understand the translation
  in order to be sure that what you see corresponds to
  what happens in the Haskell implementation.

  I hope to release MHA only after I have addressed this issue better
  than in the current version.

* For a variety of reasons, I would tend to say that the HOPS/MHA approach
  is really not very similar to GHood. Some of them may already
  be apparent from the above, others would go into more detail
than I intend to do in this message.


Best regards,

Wolfram










Re: outputs during runtime

2001-07-30 Thread kahl

Nicole Gabler [EMAIL PROTECTED] wrote:

 During runtime I want the program to output
 information about the status every time another file is parsed. That
 doesn't work, the output appears much later, mostly not until end of
 program. But I need these outputs at that specific time, so what can I
 do to manage it?

Perhaps the following fragment from my prelude extensions helps.
It requires ``import IO''.

Most probably you would either be calling

 printList \n statusInformationList

or factor out the flushing part
for interleaving with your other IO actions.


Wolfram


%{{{ hPutList, printList
It is frequently annoying if a long-running program
writes a list of items to a channel or file,
and while it calculates the next item,
the last item has usually not yet been (completely) written
because at least part of it is still held in some output buffer.

The following functions flush the output buffer after every
item.
The most general variant, \verb|hPutList|,
is parameterised by a separator string for list items,
a \verb|shows| function for items,
and a handle for output.
The variant \verb|putList| writes always to the standard output channel,
and \verb|printList| in addition uses the \verb|shows| function
supplied by the type class system.

\begin{code}
hPutList :: String -> (a -> ShowS) -> Handle -> [a] -> IO ()
hPutList sep shows h = mapM_ f
 where f x = hPutStr h (shows x sep) >> hFlush h

putList :: String -> (a -> ShowS) -> [a] -> IO ()
putList sep shows = hPutList sep shows stdout

printList :: Show a => String -> [a] -> IO ()
printList sep = putList sep shows
\end{code}
%}}}
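(A present-day aside, not part of the literate fragment above: much the same effect can be obtained by switching the handle's buffer mode once, up front, instead of flushing after every item.)

```haskell
import System.IO (BufferMode (LineBuffering), hSetBuffering, stdout)

-- Make stdout line-buffered once; each status line then appears as
-- soon as it is written, without explicit hFlush calls:
main :: IO ()
main = do
  hSetBuffering stdout LineBuffering
  mapM_ print [1 :: Int, 2, 3]
```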





GHC derives and accepts illegal Haskell98 contexts

2001-07-24 Thread kahl


I have experimented with the definition of class assertions in section 4.1.3
in the Haskell 98 report, and found that GHC 4.08 and GHC 5.00.1
(invoked via ``ghc -c Test.hs'') both derive and accept type arguments
(in class assertions) that do not have a variable as head.

I cannot really imagine where this might hurt, though.


By the way, I find the .hi format of 5.00, from a human-readability
point of view, not optimal in that there is no separator
after the __forall-bound variables:

r2 :: __forall b (E (PrelBase.String, b)) => b -> PrelBase.Bool;
                ^

There is no comparison with NHC since NHC still has the
``simple context restriction''.


Best regards,

Wolfram

=

module Test where

class T a where
  t :: a b c -> Bool

class E a where
  e :: a -> a -> Bool

instance T (,) where
  t (a,b) = True

q1 :: (T a,Eq (a String b)) => a String b -> Bool
q1 x = if x == x then t x else True

--r1 :: Eq (String, b) => b -> Bool -- derived and accepted by GHC
--r1 :: Eq a => a -> Bool   -- derived by Hugs98,
-- accepted by both Hugs98 and GHC
r1 x = q1 ("asd",x)


q2 :: (T a,E (a String b)) => a String b -> Bool
q2 x = if e x x then t x else True

--r2 :: E (String, b) => b -> Bool  -- derived and accepted by GHC
r2 x = q2 ("asd",x) -- not accepted by hugs +98
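As a postscript for modern readers: the non-variable-headed contexts GHC derived here were later legitimised by the FlexibleContexts extension. A standalone sketch of r1 (mine; with (==) in place of the module's class E, and the string literal made explicit):

```haskell
{-# LANGUAGE FlexibleContexts #-}

-- The context GHC derived for r1 above, made explicit; Haskell 98
-- forbids it, but FlexibleContexts accepts exactly such class
-- arguments that do not have a variable as head:
r1 :: Eq (String, b) => b -> Bool
r1 x = ("asd", x) == ("asd", x)

main :: IO ()
main = print (r1 (5 :: Int))
```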

___
Glasgow-haskell-bugs mailing list
[EMAIL PROTECTED]
http://www.haskell.org/mailman/listinfo/glasgow-haskell-bugs



Type generalization ``order''?

2001-07-20 Thread kahl


The Haskell 98 report (also in the current revised version)
includes the following in section 4.1.4:

  Types are related by a generalization order (specified below);
  the most general type that can be assigned to a particular expression
  (in a given environment) is called its principal type.
  Haskell 's extended Hindley-Milner type system can infer the
  principal type of all expressions, including the proper use of
  overloaded class methods (although certain ambiguous overloadings
  could arise, as described in Section 4.3.4). Therefore, explicit typings
  (called type signatures) are usually optional (see Sections 3.16 and 4.4.1).
  
  The type  forall u. cx1 => t1  is more general than the type
  forall w. cx2 => t2  if and only if there is a substitution S
  whose domain is u such that:
  
 t2 is identical to S(t1).
  
 Whenever cx2 holds in the class environment, S(cx1) also holds.

The relation defined here clearly is a preorder.
If it is an order, then in particular the following types are equal
(via identical substitutions):

(Eq a, Ord b) => (a,b)
(Ord b, Eq a) => (a,b)
(Eq b, Ord b, Eq a) => (a,b)
(Eq [a], Ord b) => (a,b)
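That the first three of these are indeed interchangeable as signatures for one and the same definition can be checked with GHC (a sketch; the binding names are mine, and the fourth type needs an extension beyond Haskell 98, so it is omitted here):

```haskell
-- Three copies of the same definition, one per listed signature;
-- GHC accepts each, illustrating that contexts are compared as
-- theories of class assertions, not as sequences:
keep1 :: (Eq a, Ord b) => (a, b) -> (a, b)
keep1 = id

keep2 :: (Ord b, Eq a) => (a, b) -> (a, b)
keep2 = id

keep3 :: (Eq b, Ord b, Eq a) => (a, b) -> (a, b)
keep3 = id

main :: IO ()
main = print (keep1 (1 :: Int, 2 :: Int))
```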

It seems to me that this arises since the report only defines
the SYNTAX of contexts
(see the heading ``4.1.3  Syntax of Class Assertions and Contexts''),
but never defines what a context IS, that is, the SEMANTICS of contexts.

A context could be:

 * a SEQUENCE of class assertions (according to the syntax)

 * a SET  of class assertions (slightly more intuitive)

 * a THEORY   of class assertions

If type generalisation is an order,
then the third alternative was chosen.

I suggest including an appropriate hint in the report...
(I am aware of the beginning of 4.1.4:

  In this subsection, we provide informal details of the type system.
  (Wadler and Blott [11] and Jones [6] discuss type and constructor classes,
  respectively, in more detail.)

 I think this kind of outsourcing of essential parts of the
 language definition is not desirable.
)


Best regards,

Wolfram




Re: Type generalization ``order''?

2001-07-20 Thread kahl

Simon Peyton-Jones [EMAIL PROTECTED] wrote:
 
 If, however, you can suggest some specific sentences that would help
 to eliminate confusion, then please do suggest them.

It is, to a certain extent, a design decision.

One possibility might be to change

  Types are related by a generalization order (specified below);
  the most general type that can be assigned to a particular expression
  (in a given environment) is called its principal type.

to

  Types are related by a generalization preorder (specified below);
  the most general type (up to the equivalence induced by that preorder)
  that can be assigned to a particular expression
  (in a given environment) is called its principal type.

Another possibility might be to leave generalization as it is:

  Types are related by a generalization order (specified below);
  the most general type that can be assigned to a particular expression
  (in a given environment) is called its principal type.
  Haskell 's extended Hindley-Milner type system can infer the
  principal type of all expressions, including the proper use of
  overloaded class methods (although certain ambiguous overloadings
  could arise, as described in Section 4.3.4). Therefore, explicit typings
  (called type signatures) are usually optional (see Sections 3.16 and 4.4.1).
  
  The type  forall u. cx1 => t1  is more general than the type
  forall w. cx2 => t2  if and only if there is a substitution S
  whose domain is u such that:
  
 t2 is identical to S(t1).
  
 Whenever cx2 holds in the class environment, S(cx1) also holds.

and add a sentence here:

  This implies that two contexts are considered equal
  if and only if they imply the same set of class assertions
  in the class environment.

Instead of ``equal'' one might consider
the somewhat less precise ``equivalent''.


Wolfram







Re: Haskell 98 Report

2001-06-11 Thread kahl


Simon Peyton-Jones wrote:
 
 I've finished what I hope is the final version of the Haskell 98
 Language and Library Reports
   http://research.microsoft.com/~simonpj/haskell98-revised

haskell98-library-html/index.html still contains the following line:

title The Haskell Library Report 1.4 /title

By my count, it should be ``1.6'' ;-)


Cheers,

Wolfram





Re: Implict parameters and monomorphism

2001-04-26 Thread kahl


Jeffrey R. Lewis [EMAIL PROTECTED] argued that
the monomorphism restriction enabled translations of
let-bindings into lambda-bindings:

   let z = x + ?y in z+z
  
  
  [...]
  
  The example above becomes:
  (\z -> z + z) (x + ?y)


In Hindley-Milner type systems,
the point about let-bindings is that they can generalise.
Not all let-bindings make use of that power,
but those that do cannot be converted to first-order-typed
lambda-expressions.

I now expand a little bit on these trivialities,
but only for being able to draw parallels below.
For example, the following expression cannot be converted into a
lambda-expression:

 let myid = \ x -> x
 in (myid 5, myid 'c')

This is because type inference derives a polymorphic type
for the let-bound variable

 myid :: a -> a

(or, more precisely, myid :: forall a . a -> a),
and this polymorphism is actually needed.

It is, however, possible to override polymorphism
by supplying a type annotation:

 let myid :: Int -> Int
 myid = \ x -> x
 in (myid 5, myid 6)

This can now be converted into a lambda expression:

 (\ myid -> (myid 5, myid 6)) ((\ x -> x) :: Int -> Int)



In Haskell, an additional point about let-bindings is
that let-bound variables may have constrained types,
that is, types with context, no matter whether this is a class context
or an implicit parameter constraint.

Lambda-bound variables must not have constrained types
as long as we want constraints be well-separated
from the rest of the type.
Just consider:

\ q -> ( \ (z :: (?y :: Int) => Int) -> q + (z with ?y = q) + ?x)

The type of this would have to be

(?x :: Int) => Int -> ((?y :: Int) => Int) -> Int

As far as I can see, currently nobody wants that.


To my mind, let-bindings that make use of the constraint-power
cannot be converted to lambda-bindings in the same way as
let-bindings that make use of the type generalisation power
cannot be converted to (first-order) lambda-bindings.

With implicit parameters, it is perfectly possible to write

 let z = x + ?y
 in (z with ?y = 5) + (z with ?y = 6)

This is the reason why the type checker derives (given x :: Int)

 z :: (?y :: Int) => Int

And the above expression cannot be converted to a lambda-expression.


The situation corresponds to that with type classes, as the following
expression shows:

 let z () = 0
 in (z () + (5 :: Int), z () + (6.5 :: Double))

where we have

 z :: Num a => () -> a

Just like polymorphism, type classes can be forced away
via type annotation (IF there is an instance for that class):

 let z :: () -> Int
 z () = 0
 in (z () + (5 :: Int), z () + 6)
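A runnable rendering of the class example (mine; placed at top level rather than in a let, which does not change the behaviour):

```haskell
-- z gets the constrained type  Num a => () -> a,  so each use site
-- may choose a different Num instance, just as each use site of an
-- implicit-parameter binding may see a different ?y:
z :: Num a => () -> a
z () = 0

main :: IO ()
main = print (z () + (5 :: Int), z () + (6.5 :: Double))
```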

In the paper mentioned in my last message
(http://ist.unibw-muenchen.de/Haskell/ModConstraints/)
we argue that this is really supplying a module (or dictionary)
argument (the ``default instance''), and that more direct ways
to supply module (or dictionary) arguments would be useful.

One such direct way is the ``with'' clause of implicit parameters ---
restricted to one-entry dictionaries. And there are no default instances
for these, so they cannot be forced away via type signatures.
Otherwise --- I feel I have to insist --- the situation
for implicit parameters is EXACTLY the same as for type classes.

Therefore, the monomorphism restriction enforces Simon's (A);
and Simon's (C) would be a first step towards finally abolishing
the monomorphism restriction.



Another argument towards the latter is that this is not a dark,
undecidable corner of the type system.
With this I mean that the type inference algorithm
does not NEED user-supplied type declarations
in order to do its job reliably.
Therefore, it should not make a difference whether I supply one or not.

(As long as the monomorphism restriction prevails,
 I strongly urge implementors not to say just
 ``cannot justify context ...'',
 but to output the full derived type,
 since then I could just copy that into the source!)
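For readers trying this in a later GHC: the `with` clause of the proposal corresponds to a local `let ?y = ...` binding under the ImplicitParams extension. A self-contained sketch (mine; x specialised to 1 to avoid a second parameter):

```haskell
{-# LANGUAGE ImplicitParams #-}

-- The post's  (z with ?y = 5) + (z with ?y = 6)  in later GHC syntax:
z :: (?y :: Int) => Int
z = 1 + ?y

main :: IO ()
main = print ((let ?y = 5 in z) + (let ?y = 6 in z))
```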



Best regards,

Wolfram







Re: Implict parameters and monomorphism

2001-04-25 Thread kahl



As a minor point in our draft paper

``From Type Classes to Module Constraints''

http://ist.unibw-muenchen.de/Haskell/ModConstraints/

we argue that implicit parameters and conventional type class contexts
are really the same thing.

This immediately dictated the answers to the first two questions,
and provides guidance for the third:

  Question 1: can we inherit implicit parameters
  
  Consider this:
  
  f x = (x::Int) + ?y
  
  where f is *not* a top-level binding.
  From the RHS of f we'll get the constraint (?y::Int).
  There are two types we might infer for f:
  
  f :: Int -> Int
  
  (so we get ?y from the context of f's definition), or
  
  f :: (?y::Int) => Int -> Int
  
  At first you might think the first was better, because then
  ?y behaves like a free variable of the definition, rather than
  having to be passed at each call site.  But of course, the WHOLE
  IDEA is that ?y should be passed at each call site (that's what
  dynamic binding means) so we'd better infer the second.

Wholeheartedly agreed --- the first would break many things.

  Question 2: type signatures
  ~~~
  OK, so is it legal to give an explicit, user type signature to f, thus:
  
  f :: Int -> Int
  f x = (x::Int) + ?y
  
 [...]
 
  Conclusion: the above type signature is illegal.  You'll get a message
  of the form could not deduce (?y::Int) from ().

Must be so.


  Question 3: monomorphism
  
 [...] 
  
  Possible choices
  
  (A) Always generalise over implicit parameters
  Bindings that fall under the monomorphism restriction can't
  be generalised
  
  Consequences:
  * Inlning remains valid
  * No unexpected loss of sharing
  * But simple bindings like
  z = ?y + 1
will be rejected, unless you add an explicit type signature
(to avoid the monomorphism restriction)
  z :: (?y::Int) => Int
  z = ?y + 1
This seems unacceptable

To me, it seems just as unacceptable as the other type signatures
I have to figure out only because of the monomorphism restriction.

It is, in fact, a logical consequence of the monomorphism restriction.
So as far as the monomorphism restriction is going to be kept,
this is the way to go.

  (B) Monomorphism restriction wins
  Bindings that fall under the monomorphism restriction can't
  be generalised
  Always generalise over implicit parameters *except* for bindings
  that fall under the monomorphism restriction
  
  Consequences
  * Inlining isn't valid in general
  * No unexpected loss of sharing
  * Simple bindings like
  z = ?y + 1
accepted (get value of ?y from binding site)
  
This is really unacceptable.
Just like the rejected alternatives
in questions 1 and 2, it relies on breaking an otherwise coherent system.

  (C) Always generalise over implicit parameters
  Bindings that fall under the monomorphism restriction can't
  be generalised, EXCEPT for implicit parameters
  
  Consequences
  * Inlining remains valid
  * Unexpected loss of sharing (from the extra generalisation)
  * Simple bindings like
  z = ?y + 1
accepted (get value of ?y from occurrence sites)
  
  
  Discussion
  ~
 [...]
 
  Choice (C) really says the monomorphism restriction doesn't apply
  to implicit parameters.

Precisely.

  Which is fine, but remember that every
  innocent binding 'x = ...' that mentions an implicit parameter in
  the RHS becomes a *function* of that parameter, called at each
  use of 'x'.

Just like anything that is bound to a type with a context.

  Now, the chances are that there are no intervening 'with'
  clauses that bind ?y, so a decent compiler should common up all
  those function calls.  So I think I strongly favour (C).  Indeed,
  one could make a similar argument for abolishing the monomorphism
  restriction altogether.

Indeed, this already is an argument for abolishing the monomorphism
restriction altogether.


Best regards,

Wolfram


















2nd CFP: RelMiS 2001

2001-01-02 Thread Wolfram Kahl


[please post.  apologies for multiple copies]


SECOND CALL FOR PAPERS


RelMiS 2001 - Relational Methods in Software


7-8 April 2001, Genova, Italy

 http://ist.unibw-muenchen.de/RelMiS/

   A Satellite Event to ETAPS 2001


Important Dates
===

Deadline for submission:     10 January  2001
Notification of acceptance:   9 February 2001
Final version due:           28 February 2001
Workshop dates:             7-8 April    2001


Workshop Topics
===

* Relational Specifications and Modelling:
 methods and tools, tabular methods, abstract data types
* Relational Software Design and Development Techniques:
 relational refinement, heuristic approaches for derivation, correctness
 considerations, dynamic programming, greedy algorithms, catamorphisms,
 paramorphisms, hylomorphisms and related topics
* Programming with Relations:
 prototyping, testing, fault tolerance, information systems, information
 coding
* Implementing relational algebra with mixed representation of relations
* Handling of Large Relations:
 problems of scale, innovative representations, distributed
 implementation


Submissions
===

Submissions will be evaluated by the Program Committee for inclusion in the
proceedings, which will be published in the ENTCS series. Papers must
contain original contributions, be clearly written, and include appropriate
reference to and comparison with related work. Papers should be submitted
electronically as uuencoded PostScript files at the address
[EMAIL PROTECTED] Preference will be given to papers that are no
shorter than 10 and no longer than 15 pages. A separate message should also
be sent, with a text-only one-page abstract and with mailing addresses (both
postal and electronic), telephone number and fax number of the corresponding
author.

Final versions will have to be submitted as LaTeX source and have to adhere
to the ENTCS style!


Programme Committee
===

Rudolf Berghammer (Kiel), Jules Desharnais (Quebec), Wolfram Kahl (Munich),
David L. Parnas (Hamilton), Gunther Schmidt (Munich)


-

E-Mail: [EMAIL PROTECTED]
Workshop home page: http://ist.unibw-muenchen.de/RelMiS/





Editor Combinators

2000-06-21 Thread Wolfram Kahl



Editor Combinators
==

   http://ist.unibw-muenchen.de/EdComb/


Editor combinators allow one to assemble structure editors compositionally
instead of generating them from language descriptions,
just as parsing combinators allow one to assemble parsers compositionally
instead of employing parser generators to generate parsers from
grammar descriptions.

We have put together a first, simple editor combinator library in Haskell,
connected it to the GUI toolkit FranTk,
and described and documented the editor combinator library
in a technical report, all available from the EdComb home page at URL:

http://ist.unibw-muenchen.de/EdComb/


Best regards,

Wolfram Kahl




Re: lines --- to derive, or not to derive ;-)

2000-05-11 Thread Wolfram Kahl


Simon Marlow writes:

   You didn't mention the accumulating parameter version:

[[[with correction pointed out by Koen Claessen:]]]

 lines :: String -> [String]
 lines s = lines' s ""
   where
   lines' []   acc = [reverse acc]
   lines' ('\n':s) acc = reverse acc : lines' s ""
   lines' (c:s)acc = lines' s (c:acc)

  This one is more than twice as fast as the foldr version, despite the fact
  that it needs to reverse the accumulating parameter for each line.

According to my measurements,
with HBC it is twice as fast as the prelude version,
and on the whole it is slightly faster then the foldr version.

However, it can still be sped up: Just use the old trick of
avoiding reverse by using a functional accumulator:

 lines :: String -> [String]
 lines s = lines' s id
   where
   lines' []   acc = [acc []]
   lines' ('\n':s) acc = acc [] : lines' s id
   lines' (c:s)acc = lines' s (acc . (c:))

(Interestingly, this is the only optimisation that HBC
 does not gratefully take advantage of.)


As Koen Claessen pointed out furthermore,
neither this nor my foldr definition are fully lazy.
I produced a first lazified variant of my foldr definition,
and it has running times similar to those of the prelude variants.


All the stuff is on the updated page:

http://ist.unibw-muenchen.de/Lectures/FT2000/FP/Lines.html


So the interesting question is whether we can get laziness
without these continuously reconstructed pairs.



Cheers,

Wolfram




Re: lines --- to derive, or not to derive ;-)

2000-05-10 Thread Wolfram Kahl


Shin-Cheng Mu [EMAIL PROTECTED]
is puzzled why the derived foldr version of lines
is significantly faster than the prelude version,
which he recognises as an unfold:

  I am curious why the Prelude version is less efficient than
  your fold version? It seems to me the fold version constructs a
  cons cell and deconstructs it immediately every time it glues a
  character to the first line, while the Prelude version only
  wastes a pair cell for every line.

I am surprised that none of the experts has taken this on.


So I have experimented a little bit further,
first assuming that ``break (== '\n')'' might be a lot slower
than ``span (/= '\n')'' or even ``span ('\n' /=)'' --
but the running time differences are small.

So here is my --- maybe superficial --- conclusion,
somewhat illuminated by the animations
now to be found on my ``lines page'' at:

http://ist.unibw-muenchen.de/Lectures/FT2000/FP/Lines.html

 * In the unfold definition,
   the pair cell is constructed and destructed for every character,
   and every construction of the pair cell
   induces two destructive accesses,
   via different evaluation paths, and at different times

 * In the fold definition,
   the cons cell is destructed locally in one step.

This is also illustrated by the fact that the HOPS animations
for an optimised version of the prelude definition
have essentially just one step more per character of the input string
than those for the fold definition.


Does this sound plausible to the experts?


Wolfram




lines --- to derive, or not to derive ;-)

2000-05-03 Thread Wolfram Kahl


Currently I teach functional programming classes again,
and I wanted to present the derivation of ``lines''
that can be found in [Bird-Wadler-1988].
However, since I wanted to spare my students the confusion of
a non-prelude-equivalent definition of ``unlines'',
I re-did the development of ``lines'' from the current
prelude definition of ``unlines'', therewith in fact solving
exercise 4.3.3. of that book :-).

To my surprise,
the prelude definition of lines:

 lines   :: String -> [String]
 lines "" = []
 lines s  = let (l, s') = break (== '\n') s
            in  l : case s' of
                      []      -> []
                      (_:s'') -> lines s''

(for which I have not yet seen a derivation,
 and which is actually used in hugs, ghc, and hbc
 although the report allows more efficient implementations)
not only is not much less cryptic than the definition I arrived at
(and anybody would arrive at):

 lines :: String -> [String]
 lines = foldr f []
  where
   f '\n' xss = "" : xss
   f x [] = [[x]]
   f x (ys:yss) = (x:ys) : yss

but it is also much less efficient:

Here are the best running times (out of ten runs) of

 main = interact (unlines . lines)

on a 1.4Mb source file using

 *  first the prelude definition,
 *  then the new definition,
 *  then the new definition with foldr expanded
(the fact that this is faster should be an
 urgent challenge to compiler writers),
 *  and finally even with f expanded: 

                ghc-4.06 -O     |   hbc-0..5b -O
              real    user      |  real    user
  PrelLines:  1.772s  1.660s  |  2.615s  2.330s
  NewLines:   1.340s  1.220s  |  1.428s  1.280s
  NewLines2:  1.204s  1.080s  |  1.361s  1.210s
  NewLines3:  1.190s  1.060s  |  1.291s  1.140s

Since everybody on the Haskell committee knows [Bird-Wadler-1988] :-),
I can only conclude that the prelude definition of ``lines''
that is contained in the report is a typo (TM) ;-),
especially since it has been stated repeatedly on the Haskell list
that prelude definitions in the report should also serve as
examples of good coding practices.


The picture is essentially the same for ``words'',
where I would propose the following definition:

 words :: String -> [String]
 words [] = []
 words (x:xs)
  | Char.isSpace x  =  words1 xs
  | otherwise       =  case words xs of
                         []       -> [[x]]
                         (ys:yss) -> (x:ys):yss
  where
   words1 [] = []
   words1 xs@(y:ys)
    | Char.isSpace y  =  words1 ys
    | otherwise       =  [] : case words ys of
                                []       -> [[y]]
                                (zs:zss) -> (y:zs):zss

All the details can be found at:

http://ist.unibw-muenchen.de/Lectures/FT2000/FP/Lines.html


Best regards,

Wolfram Kahl





Re: Trivial Haskell Concurrency problem

2000-02-15 Thread Wolfram Kahl


Simon Peyton-Jones [EMAIL PROTECTED] writes:
  
  | elegant.  If MVar's were instances of Ord as well as Eq, a 
  | neat solution would
  | be to always get the least MVar first, but they aren't.  So 
  | what should one do?
  
  But you could make Flag an instance of Ord
  
   data Flag = MkFlag Int (MVar Bool)
  
  Now newMVar needs to consult a global variable to get the
  next Flag number, but after that there's no global locking.
  
  This is, of course, precisely what we'd have to do to make
  MVars an instance of Ord --- but it would impose a cost on
  all MVars, whether or not they needed it, which is why we've not
  done it.
  
This is something that I have long been wondering about
(perhaps it is just because of my ignorance):
Wouldn't stable pointers be a cheaper and more appropriate means
to get Ord for MVars, STRefs, and IORefs?
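
For reference, Simon's suggestion can be sketched as follows (my sketch, not code from the thread; the global counter stands in for "consult a global variable", and `unsafePerformIO` for the global supply is an assumption of the sketch):

```haskell
import Control.Concurrent.MVar (MVar, newMVar)
import Data.IORef (IORef, newIORef, atomicModifyIORef')
import System.IO.Unsafe (unsafePerformIO)

-- global supply of unique numbers (assumed single-process)
counter :: IORef Int
counter = unsafePerformIO (newIORef 0)
{-# NOINLINE counter #-}

data Flag = MkFlag Int (MVar Bool)

instance Eq Flag where
  MkFlag i _ == MkFlag j _ = i == j

instance Ord Flag where
  compare (MkFlag i _) (MkFlag j _) = compare i j

-- allocate a flag, tagging it with the next unique number
newFlag :: Bool -> IO Flag
newFlag b = do
  i <- atomicModifyIORef' counter (\n -> (n + 1, n))
  m <- newMVar b
  return (MkFlag i m)
```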


Best regards,

Wolfram



Re: Trivial Haskell Concurrency problem

2000-02-15 Thread Wolfram Kahl

Simon Peyton-Jones [EMAIL PROTECTED] answers my question:
  
  | This is something that I have long been wondering about
  | (perhaps it is just because of my ignorance):
  | Wouldn't stable pointers be a cheaper and more appropriate means
  | to get Ord for MVars, STRefs, and IORefs?
  
  Could be -- but do we really want to clog up the stable-pointer table
  with an entry for every MVar, whether or not anyone is interested in
  ordering?
  
  I think what you want is a distributed way to get a unique,
  as George suggested.  Then you can pair that with an MVar when
  you want something comparable.  The unique can include the processor
  id, so it can be globally unique.  64 bits?
  
  I'm still leery of putting such a unique inside every MVar, IORef etc.
  But maybe I shouldn't worry.
  

Perhaps I should give some background:
I am interested in implementing graph structures,
and would need to handle mappings between graphs,
or node labellings, or whatever.
All these mappings need not reside in the graph itself,
so they would require some FiniteMap structure.
However, most decent such data types require Ord
for being able to work efficiently.
If IORefs (or whatever I use) are not ordered,
then I have essentially two possibilities:

1) Do the Integer trick: slows down my program
   (as actually experienced in OCaml)

2) Do the memory management myself by allocating
   huge arrays and using the indices which are in Ord: clumsy and unwieldy

So  I would already be happy if IORefs, STRefs and MVars came with a variant
in Ord (consider this as a concrete proposal for the standard library)
--- even if some implementations choose to implement that via the
Integer trick: hopefully the best implementations
would provide something faster ;-)


Best regards,

Wolfram




Re: graphs

2000-02-01 Thread Wolfram Kahl


Wojciech Moczydlowski, Jr [EMAIL PROTECTED] (Khaliff TM) wrote:
  
  The second question - does anybody know about a GHC/Haskell library with
  graphs implementation?

Depending on what precisely you need, Martin Erwig's
``Functional Graph Library'' might contain something useful for you:

http://www.informatik.fernuni-hagen.de/import/pi4/erwig/fgl/



Wolfram



Re: Referential Transparency

1999-10-08 Thread Wolfram Kahl


  
   | As far as I understood the matter, referential transparency
   | denotes the property of a language, in which variables of the
   | same scope do not change their value. 
  
  So ML and Erlang are referentially transparent too?

Before everybody gets completely muddled up,
I point to a page where I conserve an old USENET NEWS posting
by Tom DeBoni [EMAIL PROTECTED] quoting the definitions
from a few relevant books:

http://www2.informatik.unibw-muenchen.de/kahl/reftrans.html


Regards,

Wolfram






Re: Wanted: a Business Programming Language!

1999-09-21 Thread Wolfram Kahl


J.P. Morrison [EMAIL PROTECTED] writes on the dataflow list:

  
   I have been in the IT business for about 40 years, and have been
maintaining PL/I programs intensively for the last year or so.  In 1994
I wrote a book called "Flow Based Programming" which was quite well
received.  It describes a technique for developing big applications as
networks of asynchronous processes communicating via streams of
formatted data chunks.  In fact this technique has been in continuous
use for the last 30 years at various places I have worked (surprise!),
and I feel it has proven its usefulness.  It also leads very naturally
to the concept of mini-languages, where a given process can be written
in a language appropriate to the application area.  However, designing
such a language is non-obvious!  As a result of my work, I feel pretty
sure that both variables and constants are undesirable - variables
because they are very sensitive to the timing of events, and constants
because they usually aren't!
  
Now, even if we replace variables by entity attributes, in most
languages I am familiar with the coding schemes are usually mathematical
- i.e. most variables are either binary, packed decimal, BCD, etc.  What
I am looking for is a language where most values have dimension, which
constrains the operations that can be performed on them.  E.g.
  
- currency can be added, but not multiplied
- currency can be multiplied or divided by non-dimensioned quantities,
but not added to them
- distances can be added; if multiplied, you get area; multiply again,
and you get volume!
- dates can be subtracted, but not added; number of days can be added or
substracted to/from a date
- etc.
  
Further, dimensioned quantities are measured in units, so it would be
nice to be able to add miles to kilometers - I don't care what unit is
used for the result, so long as it is identified.  Currency conversions
are more problematic, as a) the rate changes frequently, and b) there is
often a charge, so some amount of cash has to be sent somewhere else!
  
If your answer is OOP, it must be symmetrical (the permitted operations
must depend on the types of all variables - in which case, how do you
manage all the combinations?) - the popular OOP languages are
asymmetrical, which is not acceptable.

Multi-parameter type classes in Haskell.

Haskell:

http://www.haskell.org/

GHC User Manual --- Section about Multi-parameter type classes:

http://www.haskell.org/ghc/docs/latest/users_guide/users_guide-5.html#ss5.5

Paper:

http://www.dcs.gla.ac.uk/~simonpj/multi.ps.gz
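
For instance, the quoted rule that distances can be added but multiply to give area can be sketched with a multi-parameter type class (a sketch of mine, not taken from the paper above; the class and type names are hypothetical):

```haskell
{-# LANGUAGE MultiParamTypeClasses #-}

newtype Length = Length Double deriving (Show, Eq)
newtype Area   = Area   Double deriving (Show, Eq)

-- the result dimension depends on both argument dimensions
class Mul a b c where
  times :: a -> b -> c

instance Mul Length Length Area where
  times (Length x) (Length y) = Area (x * y)

addLength :: Length -> Length -> Length
addLength (Length x) (Length y) = Length (x + y)
-- note: no instance permits adding Length to Area,
-- and no (*) on Length itself is provided
```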

Also, the language must be
natural to non-programmers - business types ought to be able to express
business rules without having to learn some abstruse notation...
  

Well, Haskell looks more like a kind of mathematical notation than
the kind of programming language (C, COBOL, Java) most non-programmers
are used to consider as non-natural, so it might fit your ideas ;-)

More seriously:

There has been quite some research about using Haskell as background
for domain-specific languages, so the approach to follow would
require some real Haskell programming to set up the necessary classes
and combinators, and then business-type sticking-together
of these building blocks.

See the home page of Paul Hudak for DSL work:

http://www.cs.yale.edu/~hudak.html


A recent thread on the Haskell mailing list discussed units:

Search for ``Units'' in:

http://www.dcs.gla.ac.uk/mail-www/haskell/


This message also goes to the Haskell list.


Best regards,

Wolfram





Re: The dreaded layout rule

1999-07-30 Thread Wolfram Kahl

Simon Peyton-Jones [EMAIL PROTECTED] writes:
  
   In other words, it is a bug (and GHC and Hugs don't do it
   right - see my previous message; from your comment, I
   presume HBC also doesn't follow the definition).  I think,
   the only Right Thing is to remove this awful rule (unless
   somebody comes up with a rule that can be decided locally).
  
  Maybe so.  But (H98 editors hat on) this is more than a "typo".

I am surprised!  ;-)  

  It's a Haskell 2  issue.  Perhaps there will be no fully conforming 
  H98 compilers!

Perhaps it would be a reasonable Haskell 1.6 issue?


Wolfram





Re: diagonalisation

1999-07-20 Thread Wolfram Kahl

Hi all,

  since I have gotten into the habit to relate proposed diagonalisation
function, I will not resist this time, either ;-)

Koen Claessen [EMAIL PROTECTED] writes:
  
as // bs = diag [] [ [ (a,b) | a <- as ] | b <- bs ]
  
diag current []  = diag [] current
diag current (new : pending) = heads ++ diag (new : tails) pending
 where
  (heads,tails) = unzip [ (c,cs) | (c:cs) <- current ]
  
  I thought it was short and sweet :-)

It is.  At a price, though:

Just as my first versions, it does not terminate in finite cases; try:
  
   take 20 (diag [] [[3..],[1,2]])

Otherwise, it is in the class of ``upside-down diagonals'':

Main> take 20 (diag [] [[(x,y) | y <- [0..] ] | x <- [0..]])
[(0,0),(1,0),(0,1),(2,0),(1,1),(0,2),(3,0),(2,1),(1,2),(0,3)
,(4,0),(3,1),(2,2),(1,3),(0,4),(5,0),(4,1),(3,2),(2,3),(1,4)]

and is very close to the variant of Jón Fairbairn:

 diag :: [[a]] -> [a]
 diag l = d [] l

 d [] [] = []
 d acc [] = --  d [] acc would do, but muddles the order;
            heads acc ++ d (rests acc) []
 d ls (l1:rest) = heads (ls') ++ d (rests ls') rest
   where ls' = l1 : ls

 heads l = [a  | (a: _)  <- l]
 rests l = [as | (_: as) <- l]

The second line of Jón Fairbairn's ``d'' handles termination;
this is what is missing in Koen Claessen's solution.
Koen Claessen's ``new'' is analysed one round earlier
in Jón Fairbairn's solution (and also in my corresponding solution DiagWK5).

The pairs in Koen Claessen's solution seem to be more expensive than
the double analysis in Jón Fairbairn's solution:


   220  200  500

DiagKC0:00.14  0:01.73  0:24.27  1:05.41
DiagJF0:00.13  0:01.64  0:21.85  0:57.99
DiagWK5   0:00.12  0:01.34  0:18.92  0:52.06


The infinite dimensional problem is also quite fun --- I think I shall try
to accumulate some more interesting stuff in this direction and then
set out to condense it in some form suitable for publication,
perhaps even as a Functional Programming Pearl.


Wolfram





Re: instance overlapping

1999-07-18 Thread Wolfram Kahl

Nguyen Phan Dung [EMAIL PROTECTED] writes:
  
  If I have the following declaration:
  
  class Soup a where
  ...
  
  instance Soup String where
  ...
  
  instance Soup t => Soup [t] where
  ...
  
  This will lead to an error: "instance overlapping".
  
  Is there anyway to solve this? 
  (I could not make an instance for Char)

One possibility is

 newtype MyString = MyString String

 instance Soup MyString where...



For another possibility let us assume:

 class Soup a where
   soupmethod :: Souptype a

 instance Soup String where
   soupmethod = stringsoupmethod

 instance Soup t => Soup [t] where
   soupmethod = souplistfunctor (soupmethod :: Souptype t)

A trick I am then using in these cases is the following:

 class Soup a where
   soupmethod :: Souptype a

 class SoupList a where
   souplistmethod :: Souptype [a]

 instance SoupList a => Soup [a] where
   soupmethod = souplistmethod

 instance SoupList Char where
   souplistmethod = stringsoupmethod

 instance SoupList t => SoupList [t] where
   souplistmethod = souplistfunctor souplistmethod

Maybe it does not give you all the power you want,
but in simple cases it serves me well.
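
For instance, instantiating the trick with a concrete `Souptype` (my choice of `Souptype a = a -> String` and the `souplistfunctor` below are hypothetical illustrations; the thread leaves both abstract):

```haskell
type Souptype a = a -> String

class Soup a where
  soupmethod :: Souptype a

class SoupList a where
  souplistmethod :: Souptype [a]

-- every SoupList type yields a Soup instance for its lists
instance SoupList a => Soup [a] where
  soupmethod = souplistmethod

-- the String case, expressed at Char as in the trick above
instance SoupList Char where
  souplistmethod = id              -- strings render as themselves

-- hypothetical "functor" lifting a soup method over lists
souplistfunctor :: Souptype [a] -> Souptype [[a]]
souplistfunctor f = concatMap f

instance SoupList t => SoupList [t] where
  souplistmethod = souplistfunctor souplistmethod
```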


Cheers,

Wolfram







Diagonalisation

1999-07-14 Thread Wolfram Kahl


Jón Fairbairn  [EMAIL PROTECTED] writes:
  
   diagonalise :: [[a]] -> [a]
   diagonalise l = d [] l


   d [] [] = []
   d acc [] = --  d [] acc would do, but muddles the order;
              heads acc ++ d (rests acc) []
   d ls (l1:rest) = heads (ls') ++ d (rests ls') rest
     where ls' = l1 : ls

   heads l = [a  | (a: _)  <- l]
   rests l = [as | (_: as) <- l]
  
This differs from the versions given by Mark Jones, Stefan Kahrs and myself
in that it does the diagonals ``upside down''.

However, methodologically it is actually very close to my version,
and even closer to some predecessor of that that I do not have anymore.
In fact, if I introduce into DiagWK3 the corresponding muddling avoidance,
we get:

DiagWK4:

 diag :: [[a]] -> [a]
 diag [] = []
 diag (l:ls) = split id [l] ls
  where
   split :: ([[a]] -> [[a]]) -> [[a]] -> [[a]] -> [a]
   split a [] ls  = case ls of
                      []     -> split id [] (a [])
                      (l:ls) -> split id (a [l]) ls
   split a (l : ls) r = case l of
                          []     -> split a ls r
                          (x:xs) -> x : split (a . (xs :)) ls r


Then a tiny change turns the diagonals ``upside down'', too:

DiagWK5:

 diag :: [[a]] -> [a]
 diag [] = []
 diag (l:ls) = split id [l] ls
  where
   split :: ([[a]] -> [[a]]) -> [[a]] -> [[a]] -> [a]
   split a [] ls  = case ls of
                      []     -> split id [] (a [])
                      (l:ls) -> split id (l : (a [])) ls -- CHANGED!!
   split a (l : ls) r = case l of
                          []     -> split a ls r
                          (x:xs) -> x : split (a . (xs :)) ls r

The timing differences between DiagJF and DiagWK5 are probably
mostly due to some combination of

* the double list analysis effort in heads and rests
* and the tradeoff between lists with (++) and functions with (.).

It might be interesting to look into the compiled code in detail.


   220  200  500

DiagMPJ   0:00.16  0:02.32  0:37.55  1:48.09
DiagMPJ1  0:00.12  0:01.50  0:23.83  1:06.71
DiagMPJ1a 0:00.12  0:01.47  0:23.54  1:06.04

DiagJF0:00.13  0:01.64  0:21.85  0:57.99

DiagWK3   0:00.12  0:01.34  0:18.82  0:52.31
DiagWK4   0:00.11  0:01.33  0:19.29  0:53.22
DiagWK5   0:00.12  0:01.34  0:18.92  0:52.06


Wolfram





Re: Deriving Enum

1999-07-12 Thread Wolfram Kahl

Mark P Jones [EMAIL PROTECTED] writes:
  
  Here's my definition of an integer free diagonalization function.
  [..] As written, I think
  it is a nice example of programming with higher-order functions,
  and, in particular, using function composition to construct a
  pipelined program:
  
   diag :: [[a]] -> [a]
   diag  = concat . foldr skew [] . map (map (\x -> [x]))
    where skew [] ys = ys
          skew (x:xs) ys = x : comb (++) xs ys
  
  This uses an auxiliary function comb, which is like zipWith
  except that it doesn't throw away the tail of one list when it
  reaches the end of the other:
  
   comb :: (a -> a -> a) -> [a] -> [a] -> [a]
   comb f (x:xs) (y:ys) = f x y : comb f xs ys
   comb f [] ys = ys
   comb f xs [] = xs

This is in fact much nicer and much more intuitive than my version!

The only improvement that comes to my mind is to apply the equality

   ([x] ++) = (x :)

wherever possible:

 diag :: [[a]] -> [a]
 diag  = concat . foldr skew []
  where skew [] ys = ys
        skew (x:xs) ys = [x] : conscomb xs ys

 conscomb :: [a] -> [[a]] -> [[a]]
 conscomb (x:xs) (y:ys) = (x : y) : conscomb xs ys
 conscomb [] ys = ys
 conscomb xs [] = map (:[]) xs

Perhaps this looks slightly less elegant, but it claws back quite
some efficiency: I compiled both versions (as DiagMPJ and DiagMPJ1)
and my original version (as DiagWK) with ghc-4.04 (perhaps a week old)
with ``-O'' and with the following test module
(substitute the respective definitions for ``@1''):

 module Main where

 import System

@1

 test = [[(x,y) | x <- [1..]] | y <- [1..]]
 main = do
  (arg : _) <- System.getArgs
  let n = (read arg :: Int)
  print (length (show (take n (diag test))))

(I also tried Tom Pledger's second version, but it runs out of stack space...)

Best times out of five runs where:


Argument:  220  200

DiagMPJ   0:00.16  0:02.32  0:37.55
DiagMPJ1  0:00.12  0:01.50  0:23.83
DiagWK0:00.11  0:01.33  0:19.10


This is not the first time that I get the impression that,
at least with current implementations,
functions are unbeatable when it comes to efficient data structures ---
perhaps this is why it is called functional programming ;-)


Cheers,

Wolfram






Diagonalisations (was: Re: Deriving Enum)

1999-07-12 Thread Wolfram Kahl


In the meantime I have discovered a flaw in my original
diagonalisation in that it looped in finite cases.

This can easily be mended:

DiagWK1:

 diag :: [[a]] -> [a]
 diag = f id where
  f :: ([[a]] -> [[a]]) -> [[a]] -> [a]
  f a [] = split id (a []) []
  f a (l:ls) = split id (a [l]) ls
  split :: ([[a]] -> [[a]]) -> [[a]] -> [[a]] -> [a]
  split a [] [] = diag (a [])  -- this line was originally missing
  split a [] r = f a r
  split a ([] : ls) r = split a ls r
  split a ((x:xs) : ls) r = x : split (a . (xs :)) ls r

Now we can eliminate f:

DiagWK2:

 diag :: [[a]] -> [a]
 diag [] = []
 diag (l:ls) = split id [l] ls
  where
   split :: ([[a]] -> [[a]]) -> [[a]] -> [[a]] -> [a]
   split a []            []     = diag (a [])
   split a []            (l:ls) = split id (a [l]) ls
   split a ([] : ls)     r      = split a ls r
   split a ((x:xs) : ls) r      = x : split (a . (xs :)) ls r

Another, equivalent version:

DiagWk3:

 diag :: [[a]] -> [a]
 diag [] = []
 diag (l:ls) = split id [l] ls
  where
   split :: ([[a]] -> [[a]]) -> [[a]] -> [[a]] -> [a]
   split a [] ls  = case ls of
                      []     -> diag (a [])
                      (l:ls) -> split id (a [l]) ls
   split a (l : ls) r = case l of
                          []     -> split a ls r
                          (x:xs) -> x : split (a . (xs :)) ls r

The timimgs are now:


   220  200

DiagMPJ   0:00.16  0:02.32  0:37.55
DiagMPJ1  0:00.12  0:01.50  0:23.83
DiagWK1   0:00.12  0:01.34  0:19.02
DiagWK2   0:00.12  0:01.35  0:19.09
DiagWK3   0:00.12  0:01.34  0:18.82


The only thing that surprises me is
that the compiler does not do the optimization from DiagWK2 to DiagWK3 itself.


Wolfram





Re: Deriving Enum

1999-07-11 Thread Wolfram Kahl

Koen Claessen [EMAIL PROTECTED]
proposes the following diagonalisation function:
  
  [ (a,b) | (a,b) <- [1..] // [1..] ]
  
  For a suitable definition of (//), for example:
  
  (//) :: [a] -> [b] -> [(a,b)]
  xs // ys = diagonalize 1 [[(x,y) | x <- xs] | y <- ys]
   where
    diagonalize n xss = 
      xs ++ diagonalize (n+1) (xss1 ++ xss2)
     where
      (xs,xss1) = unzip [ (x,xs) | (x:xs) <- take n xss ]
      xss2  = drop n xss
  
  And it works for any type.

The core function here is

 (diagonalize (1 :: Integer)) :: [[a]] -> [a]

This function diagonalises finite or infinite lists
with arbitrary finite or infinite element lists.
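
Restated in compilable form with a small driver (my transcription, with the counter specialised to Int for simplicity; the expected prefix was worked out by unrolling the definition by hand):

```haskell
-- Koen Claessen's (//), restated
(//) :: [a] -> [b] -> [(a,b)]
xs // ys = diagonalize 1 [ [ (x,y) | x <- xs ] | y <- ys ]
  where
    diagonalize :: Int -> [[c]] -> [c]
    diagonalize n xss = heads ++ diagonalize (n + 1) (tails ++ drop n xss)
      where
        (heads, tails) = unzip [ (x, rest) | (x:rest) <- take n xss ]

-- e.g.  take 6 ([1..] // [1..])
--         == [(1,1),(2,1),(1,2),(3,1),(2,2),(1,3)]
-- note (my observation): once a wholly finite input is exhausted the
-- recursion keeps going without producing anything, so on finite
-- inputs only lazy prefixes are usable.
```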


To me, it seems unsatisfactory to have a solution to this pure list problem
with auxiliary functions relying on integers.


It turns out to be a nice exercise to implement

 diagonalise :: [[a]] -> [a]

without any reference to numbers.

Spoiler ahead:
I arrive at the appended version (which does not even use pairs).


Have fun!

Wolfram




   Warning:  Spoiler ahead!!



















 diagonalise :: [[a]] -> [a]
 diagonalise = f id
  where
   f :: ([[a]] -> [[a]]) -> [[a]] -> [a]
   f a []      =  split id (a []) []
   f a (l:ls)  =  split id (a [l]) ls
   split :: ([[a]] -> [[a]]) -> [[a]] -> [[a]] -> [a]
   split a [] r             =  f a r
   split a ([] : ls) r      =  split a ls r
   split a ((x:xs) : ls) r  =  x : split (a . (xs :)) ls r
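
A quick check of the spoiler version (my transcription; the expected prefix was worked out by hand from the definition, on an infinite input since this version loops on finite ones):

```haskell
diagonalise :: [[a]] -> [a]
diagonalise = f id
  where
    f :: ([[a]] -> [[a]]) -> [[a]] -> [a]
    f a []     = split id (a []) []
    f a (l:ls) = split id (a [l]) ls

    split :: ([[a]] -> [[a]]) -> [[a]] -> [[a]] -> [a]
    split a []            r = f a r
    split a ([]     : ls) r = split a ls r
    split a ((x:xs) : ls) r = x : split (a . (xs :)) ls r

-- e.g.  take 10 (diagonalise [ [ (x,y) | y <- [0..] ] | x <- [0..] ])
--   == [(0,0),(0,1),(1,0),(0,2),(1,1),(2,0),(0,3),(1,2),(2,1),(3,0)]
```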






Re: Projects using HUGS or Haskell

1999-06-11 Thread Wolfram Kahl

Simon Peyton-Jones [EMAIL PROTECTED] writes:
  What I have not done (any volunteers) is to export these rules, or
  the function definitions to a thm prover.  

I am in the course of exporting function definitions
(and later probably also rules)
to the term graph transformation system HOPS
(
  URL: http://www2.informatik.unibw-muenchen.de/kahl/HOPS/
)
which can also be considered as a fledgling theorem prover ---
anyway I expect the problems to be essentially the same.

I am trying to finish a short summary next week...

Wolfram






Parsing qualified types

1999-06-08 Thread Wolfram Kahl


Consider the following two Modules:

File A.lhs:
==
 module A where

 data A a = MkA a
==

File B.lhs:
==
 module B where

 import qualified A

 type A a = A.A a   --- Problem!
==

ghc-4.03 (current sources) complains while compiling B.lhs:

B.lhs:5: parse error on input `.'

(Renaming all the types does not make a difference.)

Both ghc-3.02 and Hugs 98 (May 1999) are happy with these modules,
and I also cannot detect anything in the Haskell98 report
justifying this rejection.


Best regards,

Wolfram


P.S.: When I call ``make tags'' from fptools/ghc/compiler,
  the output consists mostly of error messages concerning the absence
  of hsp:

cpp: output pipe has been closed
sh: /var/tmp/kahl/ghc/current/build-4.03/ghc/compiler/hsp: No such file or directory



Re: Here's a puzzle to fry your brains ....

1999-04-29 Thread Wolfram Kahl


John Launchbury posed a nice puzzle about
mutually recursive bindings in the do notation:



test :: [Int]
test = do (x,z)  <- [(y,1), (2*y,2), (3*y,3)]
          Just y <- map isEven [z .. 2*z]
          return (x+y)

isEven x = if even x then Just x else Nothing

-----------------------------------------------------

I would consider it as equivalent to:

-----------------------------------------------------

testTrans :: [Int]
testTrans = do (x,z)  <- [(\y -> y,1), (\y -> 2*y,2), (\y -> 3*y,3)]
               Just y <- map isEven [z .. 2*z]
               return (x y + y)

isEven x = if even x then Just x else Nothing

-----------------------------------------------------

which hugs accepts and evaluates to [4,6,12,16,24].

The principle is to consider variables (x) bound to expressions containing
later-bound variables (y) as functions accepting values for y.
When x is used, the current y is supplied.


MORE GENERAL would be to consider the following equivalent version
of the original program:

-----------------------------------------------------

test' :: [Int]
test' = do (x,z) <- return (y,1)   ++
                    return (2*y,2) ++
                    return (3*y,3)
           Just y <- map isEven [z .. 2*z]
           return (x+y)

-----------------------------------------------------

and adapt all occurrences of return
to returning functions accepting the later-bound variables.

Patterns fed by such returns are replaced by new function variables (f).

Variables bound in patterns fed by such returns
occur in two situation:

1: (z) before the later bindings:
  Apply to undefined and extract from the pattern,

2: (x) after the later bindings:
  Apply to the now bound variable and extract from the pattern.

-----------------------------------------------------
testTrans' :: [Int]
testTrans' = do f <- return (\y -> (  y,1)) ++
                     return (\y -> (2*y,2)) ++
                     return (\y -> (3*y,3))
                let z = snd (f undefined)
                Just y <- map isEven [z .. 2*z]
                let x = fst (f y)
                return (x + y)
-----------------------------------------------------

Hugs says:

Main> testTrans'
[4,6,12,16,24]


How about it?
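
For reference, the second translation runs as-is on a modern implementation (a direct transcription of testTrans' above; in the list monad, the failing `Just y` matches simply drop out):

```haskell
isEven :: Int -> Maybe Int
isEven x = if even x then Just x else Nothing

testTrans' :: [Int]
testTrans' = do
  f <- [ \y -> (  y, 1)
       , \y -> (2*y, 2)
       , \y -> (3*y, 3) ]
  let z = snd (f undefined)          -- z never depends on y
  Just y <- map isEven [z .. 2*z]    -- failed matches drop out in []
  let x = fst (f y)
  return (x + y)

-- testTrans' evaluates to [4,6,12,16,24], as in the Hugs session above
```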


Best regards,

Wolfram





Re: Haskell 98 library: Directory.lhs

1999-03-11 Thread Wolfram Kahl


Simon Peyton-Jones proposes:

   A Haskell 98 addendum
 [ ... ]
  Well, the bits are frozen, but I propose to regard this as a gross
  "typo" and add it to the typos page. 
 [ ... ]
  So the "typo" fix I propose is
 [ ... ]

  Any objections?



Call it Haskell 1.6 ;-)


Best,

Wolfram






Re: Partial Type Declarations

1999-01-16 Thread Wolfram Kahl

Jeffrey R. Lewis" [EMAIL PROTECTED] writes:
  
  foo := C a => a -> b  roughly equiv to foo :: C _a => _a -> _b
  
  I can easily imagine that you might want some variables to be a bound, and
  others to be exact, as in
  
  foo :: C a => a -> _b
  
  I don't think the above can be expressed with Claus' proposal.
  

You could, by being more explicit than usual:

 foo := forall b => C a => a -> b
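
For comparison, much of this wish can nowadays be written directly in GHC with partial type signatures and named wildcards (a minimal sketch; the choice of body for `foo` is mine):

```haskell
{-# LANGUAGE PartialTypeSignatures, NamedWildCards #-}
{-# OPTIONS_GHC -Wno-partial-type-signatures #-}

-- `_b' is a named wildcard that GHC infers, here as (a, a)
foo :: Num a => a -> _b
foo x = (x, x + 1)
```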


Wolfram



Re: Partial Type Declarations

1999-01-16 Thread Wolfram Kahl

To my last message:
  Jeffrey R. Lewis" [EMAIL PROTECTED] writes:

foo := C a => a -> b  roughly equiv to foo :: C _a => _a -> _b

I can easily imagine that you might want some variables to be a bound, and
others to be exact, as in

foo :: C a => a -> _b

I don't think the above can be expressed with Claus' proposal.

  
  You could, by being more explicit than usual:
  
   foo := forall b => C a => a -> b
  
I forgot to add:

and this is exactly where problems begin,
since, besides harmless cases like

 foo :: forall b => C int => int -> b

you also want to allow instantiations such as

 foo :: forall b => C [b] => [b] -> b

Therefore

 * either the ``metavariables'' approach makes sense,
   where metavariables are context holes, and
   we allow context substitution

 * or we allow second order substitution.

In this second case, we have to distinguish between

 foo1 := forall b => C a => a -> b

which allows

 foo1 :: forall b => C int => int -> b

but not

 foo1 :: forall b => C [b] => [b] -> b

and

 foo2 := forall b => C (a b) => (a b) -> b

which allows both of


 foo2 :: forall b => C int => int -> b

and

 foo2 :: forall b => C [b] => [b] -> b


This second approach seems to be preferrable to me.


Wolfram



Re: Partial Type Declarations

1999-01-16 Thread Wolfram Kahl



Starting from Jeffrey R. Lewis' [EMAIL PROTECTED] wish to
let partial type declarations express binding of SOME type variables

  foo :: C a => a -> _b

and modulating the syntax proposed by Claus Reinke [EMAIL PROTECTED],

  foo := C a => a -> b

I suggested the following notation to express this desire:

 foo2 := forall b => C (a b) => (a b) -> b


This can also be seen as syntactic sugar
for an assertion that can be checked at compile-time
(a type judgement assertion):

 assert (exists a => foo2 :: forall b => C (a b) => (a b) -> b)

This way we get even closer to existing syntax,
although it is of course a departure from
the original intention of Koen Claessen [EMAIL PROTECTED]
to hide subtleties of Haskell's type system from DSL users.

So building on such an explicit assertion syntax,
we might easily adapt some variation of any of the proposals as syntactic sugar.


Nevertheless, I think that the distinction between

 assert (exists a => foo1 :: forall b => C a => a -> b)

and

 assert (exists a => foo2 :: forall b => C (a b) => (a b) -> b)

should not be blurred.

Am I too puristic here?



Cheers,

Wolfram



Re: Partial Type Declarations

1999-01-16 Thread Wolfram Kahl

Jeffrey R. Lewis" [EMAIL PROTECTED] wrote:
  
  Anyway, the only thing missing now in the above proposal
  is a similar flexibility with contexts.
  Say, you want `b' to be a bound, and thus use :=,
  but you want the context to be exact
  (i.e. you don't want extra context elements to be allowed).
  I can't think of any particularly compelling notation for
  "the context is `C a' and no more".
  In this case, the combination of allowing `...' to extend contexts,
  and _a (or similar annotation) for unquantified type variables seems
  more natural.

Getting more and more explicit,
we may well distinguish between

 assert (exists a :: * -> *,
                D :: * -> Context
                E :: * -> Context
                F :: Context
         =>
         foo3 :: (forall b :: *
                  =>
                  C (a b), D (a b), E b, F
                  =>
                  (a b) -> b))

and

 assert (exists a :: * -> *,
                D :: * -> Context
                E :: * -> Context
         =>
         foo3 :: (forall b :: *
                  =>
                  C (a b), D (a b), E b
                  =>
                  (a b) -> b))

or any other specialisations.

Now the DSL user really gets lost,
and we should consult Simon Marlow
for an appropriate layout rule for types ;-)


But the expressivity is ours ;-)


Have fun,

Wolfram



Existentially quantified types and ``deriving Eq''

1998-10-20 Thread Wolfram Kahl

Hello!

The following datatype declaration would, if possible,
actually be very useful for an application I have in mind:


 module Test(V(..)) where
 
 import ST
 
 data V s = forall a . MkV (STRef s a)  deriving Eq


But when compiling it with Ghc-4.00 I get:

==

ecserver ~~ ghc -fglasgow-exts -c test.hs
 
test.hs:5:
Inferred type is less polymorphic than expected
Quantified type variable `a' is unified with `a1'
When checking an existential pattern that binds
a1 :: STRef s a1
b1 :: STRef s1 a
In an equation for function `==':
== (MkV a1) (MkV b1) = (a1 PrelBase.== b1)
In the definition for method `=='


Compilation had errors

==

(Essentially the same happens in Hbc.)

Do I have to understand this error message as signalling that
the ``deriving'' mechanism may not yet be fully aware
of existentially quantified constructors?
(It should be prepared that the rule

== (MkV a1) (MkV b1) = (a1 PrelBase.== b1)

 may not be applicable for typing reasons,
 i.e., before the program-side pattern ``== (MkV a1) (MkV b1)''
 is matched, the typing pattern induced by it should be allowed to fail.)

Or is this a design decision that I just could not find
any documentation for?

Would other people also like to derive classes in such a way for
existentially quantified datatypes?
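[A sketch, mine rather than anything GHC derived in 1998: with today's GHC the wanted instance can be written by hand, at the price of a `Typeable` constraint on the hidden type. The datatype `V`, the sample payloads, and the use of `stToIO`/`RealWorld` to make `Typeable s` hold are illustrative assumptions.]

```haskell
{-# LANGUAGE ExistentialQuantification #-}
module Main where

import Control.Monad.ST (RealWorld, stToIO)
import Data.STRef (STRef, newSTRef)
import Data.Typeable (Typeable, cast)

-- A Typeable constraint on the hidden type lets (==) check at run
-- time whether the two references store the same type -- the
-- "typing pattern allowed to fail" described above.
data V s = forall a. Typeable a => MkV (STRef s a)

instance Typeable s => Eq (V s) where
  MkV r1 == MkV r2 =
    case cast r1 of          -- Just iff both refs hold the same type
      Just r1' -> r1' == r2  -- same type: compare the references
      Nothing  -> False      -- different hidden types: never equal

main :: IO ()
main = do
  result <- stToIO $ do
    r1 <- newSTRef (1 :: Int)
    r2 <- newSTRef "hello"
    return (MkV r1 == MkV r1, MkV r1 == MkV r2)
  print result
```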


BTW, the sparc-sun-solaris binary is not at the end of its link
on the GHC download page.



Best regards,


Wolfram



Re: Proofs and development

1998-05-28 Thread Wolfram Kahl

Alan Grover ([EMAIL PROTECTED]) writes (on the Clean list):

  I'd then like to modify the source code adding the better algorithm, but I'd
  like to keep the original algorithm as the documentation. The
  language/system should then help me prove that the better version is
  equivalent. I feel that this helps keep the source code understandable, and
  connects it more directly with the requirements and specifications.
   
  ...
   
   
  This all has relation to literate programming, does anyone else have
  interest in that?

This kind of approach is big part of the motivation behind my
graphically interactive strongly typed term graph program development
and transformation system HOPS (work in progress), see

URL: http://diogenes.informatik.unibw-muenchen.de:8080/kahl/HOPS/

HOPS in principle is a language-independent term graph programming
framework with support for user-defined text generation from term graphs.
The only constraints are the kind of term graphs supported (with
explicit variable binding and variable identity, and with second-order
rewriting) and the typing system (essentially simply-typed lambda-calculus
with type variables).

HOPS distinguishes

- ``declarations'' (type signatures in Haskell) that introduce a new
  identifier with its typing, and

- ``rules'', which are term graph transformation rules normally
  considered as semantics preserving equations.

  Rules in a certain shape can be used as definitions in functional
  programming languages; other rules may serve as theorems or lemmata.

  The possibility of classifying declarations and rules into groups
  according to their perceived role is not yet fully implemented;
  considerable flexibility is needed here since, among the many possible
  definitions, one may be the ``documentation version'', one may be
  best for this compiler, another may be best for that compiler,
  and yet another may be best as a starting point for transformation into
  a different paradigm.
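[A sketch, mine rather than HOPS output, of the ``rule as documentation'' idea from the quoted message: keep the naive definition as the specification, install the faster algorithm, and state their agreement as a checkable rule. The names `reverseSpec` and `reverseFast` are illustrative assumptions.]

```haskell
module Main where

-- Specification: clear, but quadratic -- kept as documentation.
reverseSpec :: [a] -> [a]
reverseSpec []     = []
reverseSpec (x:xs) = reverseSpec xs ++ [x]

-- Better algorithm: linear, accumulator-based.
reverseFast :: [a] -> [a]
reverseFast = go []
  where go acc []     = acc
        go acc (x:xs) = go (x:acc) xs

-- The "rule": both definitions agree on every input
-- (checked here on a few sample lists).
main :: IO ()
main =
  print (all (\xs -> reverseSpec xs == reverseFast xs)
             [[], [1], [1,2,3], [5,4..1 :: Int]])
```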


However, attributes on types (such as uniqueness or Haskell's type classes)
are not implemented yet, and currently types are always maximally identified,
which is incompatible with unique types.

Therefore, the current version of HOPS best supports a Haskellish style
of programming, and in the supplied rudimentary standard library
only output to Haskell is currently supported (to a certain extent).



Wolfram