Re: [Haskell-cafe] JavaScript (SpiderMonkey, V8, etc) embedded in GHC?

2012-11-10 Thread Claus Reinke

I've looked around with no success… this surprises me actually. Has
anyone embedded SpiderMonkey, V8, or any other relatively decent
JavaScript interpreters in GHC (using the FFI)?


I just started something [1].

Cheers,
Simon

[1] https://github.com/sol/v8


Out of curiosity: wouldn't it make more sense to focus on the
other direction (calling Haskell from V8)? Roughly like:

- devices/GUI:
   Javascript/HTML/CSS in the browser/webview

- server/IO+lightweight computation:
   Javascript on node.js/V8

- server/computation+algorithms+parallelism+concurrency+..:
   Haskell on GHC

Also, if I recall correctly, the behind-the-scenes upgrade of
evented IO in GHC was never carried over to Windows.

Since node.js had to solve similar issues, and did so by using
libuv, perhaps there is an opening for completing the cross-
platform support for efficient evented IO in GHC, reusing
node's library-level efforts[1,2,3]? Just a thought..

Claus

[1] https://github.com/joyent/libuv
   Its purpose is to abstract IOCP on Windows and
   libev on Unix systems.

[2] http://nikhilm.github.com/uvbook/introduction.html
   libuv as a high performance evented I/O library
   which offers the same API on Windows and Unix.

[3] libuv - The little library that could (slides)
   http://www.2bs.nl/nodeconf2012/#1



___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Reddy on Referential Transparency

2012-07-27 Thread Claus Reinke

Have we become a bit complacent about RT?


We're not complacent; we just know things without having to
check references. Just kidding, of course - functional programmers
tend to enjoy improving their understanding!-)

The Strachey reference is worth reading - great that it is online
these days, but it can be useful to complement it with others.

Before Quine, there was Carroll, who explored meaning [1]
and referential opacity [2], among other things. Also useful
is Søndergaard and Sestoft [3], which explores both the history
and the differences between not-quite-equivalent definitions.

I happen to disagree with Reddy's assertion that having to
explain a complicated language with the help of a less complicated
one is perfectly adequate. Reddy himself has done good work on
semantics of programming languages, but I'm a programmer
first - if the language I work with does not give me the qualities
that its semantics give me, then my means of expression and
understanding are limited by the translation.

All to be taken with grain of salt, active mind and sense of humor.
Claus

[1, Chapter 6]
   http://en.wikiquote.org/wiki/Through_the_Looking-Glass

[2, the name of the song]
   http://homepages.tcp.co.uk/~nicholson/alice.html

[3] Søndergaard, Sestoft (1990). Referential transparency,
   definiteness and unfoldability. Acta Informatica 27 (6): 505-517.
   http://www.itu.dk/people/sestoft/papers/SondergaardSestoft1990.pdf





Re: [Haskell-cafe] Need inputs for a Haskell awareness presentation

2012-06-01 Thread Claus Reinke

I have the opportunity to make a presentation to folks (developers and
managers) in my organization about Haskell - and why it's important - and
why it's the only way forward.


Haskell is important, but not the only way forward. Also, there have been
other great languages, with limited impact - incorporating great ideas is
no guarantee of uptake. If you want to be convincing, you need to be
honest.



1. Any thoughts around the outline of the presentation - target audience
being seasoned imperative programmers who love and live at the pinnacle of
object oriented bliss.


If you approach this from the Haskell side (what is Haskell good at),
you might get some people curious, but you won't connect their interest
to their daily work. You don't want to give them a show, you want to
inspire them to want to try coding in that language.

If you really want to understand what is good about Haskell, stop using
it for a while, and work in something like Javascript (or whatever your
audience is comfortable with, but for me Javascript was an eye opener).

You won't believe how basic the issues are that conventional coders
are struggling with until you realize that you do not have them in Haskell.

If you have felt that pain, and have understood that you can't make those
issues go away by saying "that wouldn't be an issue in Haskell", then
you can understand that their struggles and issues are real.

If you respect that, you can take one of their problems/source bases,
and translate it to Haskell. That step tells them (and you!) that Haskell
is adequate for their problem domains (which won't always be the
case - no point showing them a wonderful language that they won't
be able to apply).

The next step is to go through some of those pain points in that code
and show how to get rid of them in the Haskell version. Instead of
presenting ready-made solutions, show them how to work with code
they understand, much more confidently than they would be used to.

Go slowly, and focus on their daily pain points (which they probably
have stopped feeling because they can't do anything against them).
Explain why you are confident in your modifications, compare against
the obstacles that their current language would throw up against such
modifications. Some examples:

- types can replace basic documentation and documentation lookup

- no need to test things that the type system can check, not in test suites
   and not in the prelude of every function definition; you still need
   tests, but those can focus on interesting aspects; you don't need to
   run 10 minutes of tests to verify that a refactoring in a large code
   base hasn't broken scoping by misspelling a variable name, or that
   function calls have the proper number and type of parameters

- thanks to decades of development, Haskell's static type system does
   not (usually) prevent you from writing the code you mean (this is
   where the proof comes in - you've rewritten their code in Haskell),
   nor does it clutter the code with type annotations; types of functions
   can be inferred and checked, unlike missing or outdated documentation;

   (careful here: language weaknesses are often compensated for through
   extensive tool usage; some IDEs may check type annotations within
   comments, or try to cover other gaps in dynamic languages)

- part of the trick is to work with the type system instead of against it:
   document intentions in code, not comments

- separation of expressions and statements

- instead of every expression being a statement (side effects everywhere),
   every statement is an expression (functional abstraction works
   everywhere)

- since functional abstraction works everywhere, once you see repeated
   code, you know you can factor it out

- you can build libraries of useful abstractions

- building abstraction libraries in the language is so easy that you
   can build domain-specific abstraction libraries

- domain-specific abstraction libraries become embedded DSLs;
   no need to write parsers, no risk to useful program properties
   from overuse of introspection

- ..
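One of the points above - once you see repeated code, you can factor it out - can be sketched in a few lines. The lookup functions and names here are invented for illustration:

```haskell
-- Before: two lookups repeating the same default-on-missing pattern.
lookupAge :: String -> [(String, Int)] -> Int
lookupAge name db = maybe 0 id (lookup name db)

lookupScore :: String -> [(String, Int)] -> Int
lookupScore name db = maybe (-1) id (lookup name db)

-- After: the repeated pattern is itself a function, so it can be
-- factored out and reused.
lookupWithDefault :: Eq k => v -> k -> [(k, v)] -> v
lookupWithDefault def key db = maybe def id (lookup key db)

main :: IO ()
main = do
  let db = [("ada", 36), ("alan", 41)]
  print (lookupWithDefault 0 "ada" db)
  print (lookupWithDefault (-1) "grace" db)
```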

I better stop here - I hope you can see the direction:-)

Many of these do not even touch on the advanced language features,
but all of them rely on the simplicity and careful design of the core
language. All of these advantages carry over to concurrent and
parallel programming, without having to switch to another language.

Also, both functions and threads are so cheap (and controllable in
terms of side-effects) that you do not have to go out of your way to
avoid using them. And instead of noisy syntax, Haskell's syntax
tends not to get in the way of writing in domain-specific abstractions.

The main advantages of Haskell (as I see them at this moment;-):

- fewer problems to keep track of: can focus on higher-level issues

- great support for abstraction: you can scratch that itch and don't
   have to live with feeling bad about your code; over time, that
   means 

Re: [Haskell-cafe] Hierarchical tracing for debugging laziness

2012-01-25 Thread Claus Reinke

Look how one can watch the evaluation tree of a computation, to debug
laziness-related problems.


You might like the old Hood/GHood:

http://hackage.haskell.org/package/hood
http://hackage.haskell.org/package/GHood

Background info/papers:

http://www.ittc.ku.edu/csdl/fpg/Tools/Hood
http://www.ittc.ku.edu/csdl/fpg/node/26
http://community.haskell.org/~claus/GHood/

Claus




Re: [Haskell-cafe] How to speedup generically parsing sum types?

2011-11-03 Thread Claus Reinke

* syb: toJSON and fromJSON from the Data.Aeson.Generic module. Uses
the Data type class.
..
As can be seen, in most cases the GHC Generics implementation is much
faster than SYB and just as fast as TH. I'm impressed by how well GHC
optimizes the code!


Not that it matters much if you're going with other tools, but your
SYB code has a long linear chain of type rep comparisons, at every
level of the recursive traversals. That is partially inherent in the SYB
design (reducing everything to cast), but could probably be improved?
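For illustration only (not the poster's benchmark code), here is roughly what such a chain of type rep comparisons looks like when written out by hand with cast; every value walks the whole chain before it falls through to the generic case:

```haskell
{-# LANGUAGE ScopedTypeVariables #-}
import Control.Applicative ((<|>))
import Data.Maybe (fromMaybe)
import Data.Typeable (Typeable, cast)

-- Each type-specific case adds one more runtime type test that every
-- traversed value has to walk through before the generic default.
gdescribe :: Typeable a => a -> String
gdescribe x =
  fromMaybe "generic"
    (   (\(s :: String) -> "string: " ++ s)    <$> cast x
    <|> (\(n :: Int)    -> "int: " ++ show n)  <$> cast x
    <|> (\(b :: Bool)   -> "bool: " ++ show b) <$> cast x
    )

main :: IO ()
main = mapM_ putStrLn
  [gdescribe (3 :: Int), gdescribe True, gdescribe 'x']
```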

Claus





Re: [Haskell-cafe] What library package fulfills these requirements?

2011-10-28 Thread Claus Reinke
I am planning to give a workshop on FP using Haskell. 
The audience will be programmers with quite a bit of 
experience with conventional languages like Java and 
.net . I want to give them some feeling about FP. And 
hopefully, they will become interested so they want more...


My recommendations: 


- don't go for advanced coding styles or wow applications

- don't just walk through finished code, show how
   Haskell allows you to work with code (understanding
   and changing code, types as machine-checked 
   documentation, effect control, abstracting out ideas 
   for reuse, ..)


   you could:
   - show how to find relevant pieces of code in a large
   project, how to understand the piece, and how
   problematic interactions may be limited, compared 
   to Java-like languages


   - build up working code from empty (risky, but some
   Scala introductions have used this, and managed to
   give listeners the impression that they get what
   they see, and that they might be able to reproduce it)

   - take working code, then refactor it; for instance, start
   with simple code not too different from what an
   imperative coder might write, then start factoring
   out reusable patterns (when you get to factoring out
   control structures, you can go beyond what is easy
   in Java, and you can motivate introducing some of
   the more fancy Haskell idioms, in context)

   Don't be afraid of things going wrong, but have a script,
   know your tools: your audience will be interested to see
   what you do when the unexpected happens (debugging
   support, source navigation, ...). As usual, have a fallback,
   to rescue the talk if you cannot fix things on the spot.
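The "take working code, then refactor it" suggestion above can be sketched on a toy example (the functions here are invented for illustration):

```haskell
-- Step 1: code close to what an imperative coder might write,
-- with an explicit accumulating loop.
sumSquaresLoop :: [Int] -> Int
sumSquaresLoop = go 0
  where
    go acc []     = acc
    go acc (x:xs) = go (acc + x * x) xs

-- Step 2: recognize the traversal pattern and factor it out as a fold.
sumSquaresFold :: [Int] -> Int
sumSquaresFold = foldl (\acc x -> acc + x * x) 0

-- Step 3: separate the two concerns (squaring, summing) completely.
sumSquares :: [Int] -> Int
sumSquares = sum . map (^ 2)

main :: IO ()
main = print (map ($ [1 .. 4]) [sumSquaresLoop, sumSquaresFold, sumSquares])
```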

- it helps to know your audience: the advantages of Haskell
   over Java are different from those over Javascript; .Net
   coders might have different needs again (libraries; might
   prefer F# or something else that runs on the platform)

- complex apps or advanced coding patterns might wow 
   the audience (didn't think that was possible), but will 
   they try to reproduce that (will they even get the idea?)?


- simple everyday tasks solved in simple coding patterns
   might not wow in themselves, but make it easier to 
   follow the language differences, and can be built on


- try for a steady pace through the whole presentation
   (too many Haskell talks start great, then either take off
   and leave the audience behind, or lose pace and direction
   as the speaker tries to fill the time)

If the authors agree, it might be good to take an existing
talk and try to adapt/improve it. Would be good to work
toward one set of slides that can give a starting point for
such talks: a general set that one can point to, and sets of
modifications that tune the general set to different audiences.

There are lots of introduction to Haskell talks/slides on 
the web, btw., including these hands-on ones:


http://bos.github.com/strange-loop-2011/slides/slides.html 


(google for alternatives, but being on github, with an easy
HTML interface, allows for collaborative improvements)

My main recommendation again: don't just show working
code, show how to work with code.

Claus

I am wondering what package is suitable to be used as 
an example too? It needs to fulfill at least the following 
requirements:
+ I have to be able to explain the purpose of the software 
   in no more than 1 or 2 minutes
+ There should be parts of the code that can be easily 
   linked to the purpose /use of the package

+ These code parts must show the 'prettiness' of Haskell
+ It would be nice if there is something GUI-like to demo, 
  which runs under windows.

+ I prefer not to use some kind of a compiler as an example.





Re: [Haskell-cafe] Message

2011-10-22 Thread Claus Reinke
The world needs programmers to accept and take seriously Greg 
Wilson's extensible programming, and stop laughing it off as lolwut 
wysiwyg msword for programming, and start implementing it.

http://third-bit.com/blog/archives/4302.html


Who is the world? For starters, I don't think it is Greg Wilson's
idea, and if you look for alternate sources, often under other titles, 
you'll find parts of it implemented, with varying degrees of success
and often little acceptance. The idea is much older than one might 
think - conferences on extensible languages were held around 1970. 

Early implementation approximations didn't have the disposable
computing power of today's PCs, nor did early implementers find
an audience ready for their ideas (to feed their students or
themselves, some of those who were so far ahead of the curve
had to switch to working on more conventional, funded, topics).


Useful search keys:

- extensible languages (as in AI, the meaning of extensible tends
   to be redefined whenever a problem gets solved, so many features
   that used to mark an extensible language in the past have now
   become standard)

- structure editors (in that they were forerunners of projectional
   IDEs, and exhibited some of their advantages and disadvantages;
   there have been many efforts to generate structure editors from 
   language descriptions)


- projectional language workbenches (instead of parsing source
   to AST, the IDE/workbench operates on an AST-like abstract
   model, and source code views are just projections of that; 
   makes it easier to embed sublanguages);


   Smalltalkers will probably claim their image-based IDEs have
   been doing that all along.

- hyper-programming (where persistent runtime data can be
   embedded in code via linking, similar to hypertext, with
   generic editors instead of generic Read/Show)

- Banana Algebra: Syntactic Language Extension via an Algebra
   of Languages and Transformations (one example of research
   on language composition)

IDE generators, IDE tooling for domain-specific languages, 
language-oriented programming, language workbenches, ... 
they all contribute to the now broader interest in the topic.


In the context of Haskell, there once was Keith Hanna's
document-centered programming:

http://www.cs.kent.ac.uk/projects/vital/
http://www.cs.kent.ac.uk/projects/pivotal/

Perhaps Keith's projects can serve as an inspiration to just 
start hacking?-) The subject is an instance of these quotes:


The future is already here - it's just not very evenly distributed.
William Gibson

The best way to predict the future is to invent it.
Alan Kay

Claus
http://clausreinke.github.com/





Re: [Haskell-cafe] which tags program should I use?

2011-09-26 Thread Claus Reinke

suggests using :etags in GHCI or hasktags, or gasbag.  Of the three,
hasktags comes closest to working but it has (for me) a major
inconvenience, namely it finds both function definitions and type
signatures, resulting in two TAGS entries such as:


Some customization required? Tweaking the output format is
usually the easiest part of a tags file generator. Alternatively, add
information about the kind of tag, have both tags in the tags file,
and do a little filtering on the editor side (in Vim, try :help taglist()),
so that you can go to the type or value level definition with different
key combinations.


I'm also a user of the one true editor (namely vim ;) and I'd be
interested in an answer to that too.  Every once and a while I go look
at the various tag programs to see if there's anything that works
better than hasktags.  The continuing proliferation of tags programs
implies that others are also not satisfied with what's out there, but
no one is devoted enough to the cause to devote the effort to make the
One True Tags program.


Some notes from someone who has written his share of tags file
generators:

1. there are trade-offs to consider: fast or detailed? incremental (file by
   file) or tackle a whole project at once? should it work on syntactically
   incorrect code (regexes, parsing, or parsing with recovery)? how much
   detail can your editor handle, without requiring extra scripting (emacs
   tags files can't hold as much info as vim tags files - the latter are
   extensible; but to make use of that extra info, scripting is required)?

2. once you reach the borders of what quick and dirty tags generators
   can do, things get interesting: haskell-src-exts makes it quite simple
   to generate top-level tags with proper parsing, if the file can be
   parsed; GHCi knew about top-level definitions anyway, so it was easy
   to output that as a tags file; but what if the file cannot be parsed?
   what if the import hasn't got any source (not untypical in the world
   of GHC/Cabal)? do the interface files of the binary modules have
   source location info?? how to handle GHC/Cabal projects, not just
   collections of files?

3. what about local definitions? what about non-exported top-level
   definitions? Here the editors get into difficulties handling too many
   tags without distinguishing features. As it happens, I've recently
   released support for (lexically) scoped tags in Vim, with a generator
   for Javascript source:

   
http://libraryinstitute.wordpress.com/2011/09/16/scoped-tags-support-for-javascript-in-vim/

   I'd love to see a scoped tags generator for Haskell (the scoped
   tags format is language independent, and the Vim script support
   could be reused), but that would double the size and complexity
   of a simple parsing tags generator (beyond traversing top-level
   definitions: traversing expressions, handling Haskell's complex
   scopes) or require additional code for a GHCi-based generator
   (of course, a regex-based generator would fail here); also, the
   Haskell frontend used would need to record accurate source spans;

   I'd also like to see type info associated with scoped tags, for
   presentation as editor tooltips, at which stage the air is getting
   thin - parsing alone doesn't suffice, so you need parsing,
   type-checking, and generic scope-tracking traversals of typed
   ASTs, with embedded source span info; perhaps an application
   of scion, but by then you're beyond simple; and all that power
   comes at a price of complexity, speed and brittleness: no tags
   if parsing or typing fails, where the latter requires reading
   interface files, or handling whole projects instead of isolated files..

Perhaps combining a quick and dirty incremental tags generator
with a detailed and sound full-project generator could do the trick?
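To make the quick and dirty end of that spectrum concrete, here is a hypothetical line-based scanner (invented for illustration, not any of the tools mentioned): any line starting in column 0 with a lower-case identifier is taken as a top-level definition. No parsing, so it is fast and survives broken code, but a type signature and its definition both produce a tag - exactly the duplication discussed earlier in this thread.

```haskell
import Data.Char (isAlphaNum, isLower)
import Data.List (sort)

-- Quick-and-dirty tags generator sketch: emit one vim-style tags
-- line (name, file, line number) per column-0 definition-like line.
scanTags :: FilePath -> String -> [String]
scanTags file src =
  sort
    [ name ++ "\t" ++ file ++ "\t" ++ show lineNo
    | (lineNo, line) <- zip [1 :: Int ..] (lines src)
    , (c : _) <- [line]           -- skip empty and indented lines
    , isLower c
    , let name = takeWhile (\ch -> isAlphaNum ch || ch `elem` "_'") line
    , name `notElem` keywords     -- crude keyword filter
    ]
  where
    keywords = [ "module", "import", "where", "data", "type", "newtype"
               , "class", "instance", "infix", "infixl", "infixr" ]

main :: IO ()
main = mapM_ putStrLn
  (scanTags "Demo.hs" "module Demo where\nfoo :: Int\nfoo = 1\nbar x = x\n")
```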

Or, you could drop the tags file generation and treat the detailed
and sound full-project analyzer as a server which your IDE/editor
could ask for info about code, while reloading only those files that
change (as if the language query server was a programmed GHCi
session running in the background). Which seemed to be the
direction scion was heading in..

   https://github.com/nominolo/scion

Tags files are a nice interface between language-aware generators
and general purpose editors/IDEs, but they are limited. I still think
it is worth using them, in extended form, but if your editor doesn't
make such extension easy, or if you want to leverage the complex
language frontend for other purposes, switching from offline
tags file generators to online language query servers makes sense
(even back in the C days there were both ctags and cscope).

I hope this helps to explain why there is no One True Tags program?

Claus
http://clausreinke.github.com/





Re: [Haskell-cafe] Deciding equality of functions.

2011-04-10 Thread Claus Reinke
It is a common situation when one has two implementations of 
the same function, one being straightforward but slow, and the 
other being fast but complex. It would be nice to be able to check 
if these two versions are equal to catch bugs in the more complex 
implementation.


This common situation is often actually one of the harder ones 
to prove, I say coming from proving a few of them in Coq. The 
thing is that a lot of the common optimizations (e.g., TCO) 
completely wreck the inductive structure of the function which, 
in turn, makes it difficult to say interesting things about them.[1]


The traditional approach is to derive the efficient version from
the simple, obviously correct version, by a series of small code
transformations. The steps would include meaning-preserving
equivalences as well as refinements (where implementation
decisions come in to narrow down the set of equivalent code).

Advantages: codes are equivalent by construction (modulo
implementation decisions), and the relationship is documented
(so you can replay it in case requirements changes make you
want to revisit some implementation decisions).

Even with modern provers/assistants, this should be easier
than trying to relate two separately developed pieces of code,
though I can't speak from experience on this last point. But 
there have been derivations of tail-recursive code from the 
general form (don't have any reference at hand right now).
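As a toy instance of such a derivation (not from any particular reference), here is a tail-recursive sum calculated from the obviously correct version, with the equational steps recorded as comments:

```haskell
-- Naive, obviously correct version.
sumR :: [Integer] -> Integer
sumR []     = 0
sumR (x:xs) = x + sumR xs

-- Derivation sketch: generalize with an accumulator,
--   sumA acc xs = acc + sumR xs
-- and calculate:
--   sumA acc []     = acc + sumR []        = acc + 0  = acc
--   sumA acc (x:xs) = acc + (x + sumR xs)
--                   = (acc + x) + sumR xs  -- associativity of (+)
--                   = sumA (acc + x) xs
-- The result is tail recursive by construction.
sumA :: Integer -> [Integer] -> Integer
sumA acc []     = acc
sumA acc (x:xs) = sumA (acc + x) xs

sumT :: [Integer] -> Integer
sumT = sumA 0

main :: IO ()
main = print (sumR [1 .. 100] == sumT [1 .. 100])
```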


Claus




Re: [Haskell-cafe] why are trading/banking industries seriously adopting FPLs???

2011-03-25 Thread Claus Reinke

I am very curious about the readiness of trading and banking
industries to adopt FPLs like Haskell:
.. Why are trading/banking diving into FPLs?


Some possible reasons have been given, but to keep things
in perspective, you might want to consider that it isn't just
FPLs. Smalltalk, for instance, got some mileage out of finance
sector success stories. Happily, there is at least one showcase
with some documentation:

   http://www.cincomsmalltalk.com/main/successes/financial-services/jpmorgan/

if you don't see the pdf there, it is

JPMorgan Derives Clear Benefits From Cincom Smalltalk™
   http://www.cincom.com/pdf/CS040819-1.pdf

Comparing the claims in there (attributed to Dr. Colin Lewis,
Vice President, JPMorgan) with similar ones in Haskell success
stories might yield some clues:

With such a high productivity factor
that Smalltalk gives us, reaction times to
market changes have enabled us to
beat most of our competitors.

We have estimated that if we had built
Kapital in another language such as
Java, we would require at least three
times the amount of resources.

The key is that our development
resources do not have to be Smalltalk
experts.

Now a benchmark game comparing financial success of
Haskell, Smalltalk, Java, .. -backed companies, that would
be something. But it would still not account for the quality
of programmers that came with the language.

Claus





Re: [Haskell-cafe] Byte Histogram

2011-02-05 Thread Claus Reinke

Lately I've been trying to go the other direction: make a large
section of formerly strict code lazy.  


There used to be a couple of tools trying to make suggestions
when a function could be made less strict (Olaf Chitil's StrictCheck
and another that escapes memory at the moment). Often, it
comes down to some form of eta-expansion - making information
available earlier [(\x -> f x) tells us we have a function without
having to evaluate f, (\p -> (fst p, snd p)) marks a pair without
needing to evaluate p, and so on].
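A small, self-contained illustration of the pair case (the function names here are made up): after eta-expansion, a consumer that only matches the pair constructor no longer forces the original argument.

```haskell
import Control.Exception (SomeException, evaluate, try)

-- Eta-expansion for pairs: the right-hand side rebuilds the pair
-- constructor without evaluating p itself.
etaPair :: (a, b) -> (a, b)
etaPair p = (fst p, snd p)

shape :: (a, b) -> String
shape (_, _) = "a pair"   -- forces only the outer constructor

main :: IO ()
main = do
  -- succeeds: etaPair undefined is already a pair constructor
  putStrLn (shape (etaPair undefined))
  -- fails: matching undefined directly demands evaluation
  r <- try (evaluate (shape undefined)) :: IO (Either SomeException String)
  putStrLn (either (const "error") id r)
```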

I fully agree that once code size gets big these problems get a lot 
harder.  You have to be very careful passing around state that you 
don't do anything that causes too much to be evaluated at the 
wrong time.  


Generally, the trick is to develop an intuition early, before
growing the program in size;-) However, as long as you can
run your big code with small data sets, you might want to try
GHood, maintained on hackage thanks to Hugo Pacheco:

   http://hackage.haskell.org/package/GHood
   http://community.haskell.org/~claus/GHood/ (currently unavailable:-(

The latter url has a few examples and a paper describing how
GHood can be useful for observing relative strictness, ie, when
data at one probe is forced relative to when it is forced at another.

(it seems that citeseerx has a copy of the paper, at least:
http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.132.1397 )

For instance, if you put one probe on the output and one on the
input, you can see which parts of the input are forced to produce
which parts of the output. As I said, small data sets help, but since
you put in probes manually and selectively, large code size should
not interfere with observations (though it might hinder intuition:-).


But there's definitely a knack to be learned, and I think I might
eventually get better at it.  For example, I realized that the
criteria to make something non-strict wrt data dependency are the same
as trying to parallelize.  Sometimes it's easier to think what do I
have to do to make these two processes run in parallel and that's the
same thing I have to do to make them interleave with each other
lazily.


Sometimes, I think of non-strict evaluation as spark a thread for
everything, then process the threads with a single processor..

Claus




Re: [Haskell-cafe] Denotational semantics for the lay man.

2011-01-17 Thread Claus Reinke

I've recently had the opportunity to explain in prose what denotational
semantics are to a person unfamiliar with it. I was trying to get across
the concept of distilling the essence out of some problem domain. I
wasn't able to get the idea across so I'm looking for some simple ways
to explain it.


Trying to explain things to a layperson is often a useful exercise.
So here goes my attempt:

Semantics are all about relating two partially understood structures
to each other, in the hope of reaching a better understanding of both.

Often, the plan is to translate a problem systematically from a less
well-understood domain to a domain whose tools are more amenable
to solving the problem, then to translate the solution back to the
original domain (and originally, just about anything was better
understood than programming).

In the beginning, the meaning of a program was what it caused a
certain machine to do - nothing to relate to but experience. Then
there were specifications of how conforming machines were
supposed to execute a given program, so one could compare a
given machine against the specification. Mathematicians tried to
relate programs to mathematical objects (traditionally functions),
which they thought they understood better, only to learn a thing
or two about both programs and functions. Programs for concrete
machines were related to programs for abstract machines, or
concrete program runs to abstract program executions, in order
to focus only on those aspects of program executions relevant
to the problem at hand. Programs were related to conditions in
a logic, to allow reasoning about programs, or programming
from proofs. And so on..


The two main questions are always: how well do the semantics
model reality, and how well do programs express the semantics?
It helps if the mappings back and forth preserve all relevant
structure and elide any irrelevant details.
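A toy illustration of "relating programs to mathematical objects" (the language and names are invented, not taken from any particular semantics text): a tiny expression language whose denotation is a function from environments to integers, where laws can then be checked in the target domain.

```haskell
-- A tiny expression language ...
data Expr
  = Lit Integer
  | Var String
  | Add Expr Expr

type Env = String -> Integer

-- ... and its denotation: [[ e ]] : Env -> Z
denote :: Expr -> (Env -> Integer)
denote (Lit n)   = \_   -> n
denote (Var x)   = \env -> env x
denote (Add a b) = \env -> denote a env + denote b env

main :: IO ()
main = do
  let env v = if v == "x" then 40 else 0
  -- e.g. commutativity of Add holds in the target domain:
  print (denote (Add (Var "x") (Lit 2)) env)
  print (denote (Add (Lit 2) (Var "x")) env)
```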

Claus

anecdotal semantics:
   you know, once I wrote this program, and it just fried the printer..

barometric semantics:
   I think it is getting clearer..

conventional semantics:
   usually, this means..

detonational semantics:
   what does this button do?

existential semantics:
   I'm sure it means something.

forensic semantics:
   I think this was meant to prevent..

game semantics:
   let the dice decide

historical semantics:
   I'm sure this used to work

idealistic semantics:
   this can only mean..

jovial semantics:
   oh, sure, it can mean that.

knotty semantics:
   hm, this part over here probably means..

looking glass semantics:
   when I use a program, it means just what I choose it to mean, 
   neither more nor less


musical semantics:
   it don't mean a thing if it ain't got that swing.

nihilistic semantics:
   this means nothing.

optimistic semantics:
   this could mean..

probabilistic semantics:
   often, this means that..

quantum semantics:
   you can't ask what it means.

reactionary semantics:
   this means something else.

sherlockian semantics:
   since it cannot possibly mean anything else, ..

transitional semantics:
   for the moment, this means that..

utilitarian semantics:
   this means we can use it to..

venerable semantics:
   this has always meant..

weary semantics:
   sigh I guess that means..

xenophobic semantics:
   for us here, this means..

yogic semantics:
   we shall meditate on the meaning of this.

zen semantics:
   ah!





Re: [Haskell-cafe] Needed: A repeatable process for installing GHC on Windows

2011-01-15 Thread Claus Reinke

Earlier today I was trying to set up a Windows build bot for the
'network' package. That turned out to be quite difficult. Too much
playing with PATHs, different gcc versions, etc. Does anyone have a
repeatable, step-by-step process to install GHC and get a build
environment (where I could build network) going?


If you don't need to build GHC yourself, then the binary installers
(or daily snapshot builds) should come with their own copy of GCC
tools. So you only need some shell/autoconf environment on top
(either cygwin or MSYS). Fewer things to go wrong than in the past,
but cygwin's configure scripts tend to think they're on unix, and
for MSYS, it can be difficult to get the right pieces together.

Snapshots should appear as installers and tar-balls here:

   http://www.haskell.org/ghc/download#snapshots

but windows build failures tend to go unnoticed for a while,
so you might have to send a heads-up to cvs-ghc@.. if you 
need up to date builds.



No, but I used to (and sadly can't find it any more). I used to have a
script called ghcsetup which built GHC on Windows, and importantly
validated the setup was correct (the right gcc was first in the path
etc) and took actions to correct it.

I am sure there used to be a great web page on the GHC wiki, saying
the exact steps to build (written by Claus), but I can't find it any
more. Perhaps Claus knows where it has gone?


Yes, I ran into this so often that I made a record of the "build
your own GHC" steps one time, together with a cygwin package
description that recorded the cygwin packages needed for such
builds. Both cygwin and GHC have moved since then, and I guess
someone decided it was too much trouble to keep the step-by-step
guide up to date, or perhaps it was no longer needed (check with 
GHC HQ to be sure)?


If you do need to build your own GHC head, the GHC wiki building
guide has a page on

   Setting up a Windows system for building GHC
   http://hackage.haskell.org/trac/ghc/wiki/Building/Preparation/Windows

which not only claims to list suitable MSYS versions, but holds
copies (so you don't have to hunt for the MSYS packages), as well
as links to other necessary tools (python, alex, happy, darcs). What
I do not know is whether the versions are still up to date (check
with GHC HQ).

As usual, if the wiki page instructions are not working anymore,
please add a note, and include any improvements you figure out;-)

Claus

PS. GHC no longer uses buildbot, it has its own builder, so if
   you really want to set up a build bot, you can probably copy
   GHC HQ's setup for the purpose?




Re: [Haskell-cafe] dot-ghci files

2010-12-09 Thread Claus Reinke
Perhaps ghc should also ignore all group-writable *.hs, *.lhs, *.c, *.o, 
*.hi files.


dot-ghci files are *run* if you just start ghci (or ghc -e) in that
directory (even if you don't intend to compile, load, or run any
Haskell code).

Claus





Re: [Haskell-cafe] OverloadedStrings mixed with type classes leads to boilerplate type signatures

2010-12-05 Thread Claus Reinke

   ghci> :set -XOverloadedStrings
   ghci> "$name ate a banana." % [("name", "Johan")]
   "Johan ate a banana."



   class Context a where
       lookup :: a -> T.Text -> T.Text

   instance Context [(T.Text, T.Text)] where
       lookup xs k = fromMaybe (error $ "KeyError: " ++ show k) (P.lookup k xs)


This instance only applies if the pair components are Texts.

With OverloadedStrings, your unannotated String-like pairs
have variable type components, so the instance neither
matches nor forces the type variables to be Text.

It sounds as if you want an instance that always applies for
lists of pairs, instantiates type variable components to Texts,
and fails if the components cannot be Texts. Untested:

   instance (key~T.Text,value~T.Text) => Context [(key, value)] where

You might also want to parameterize Context over the
type of template, but then you might need to annotate
the template type, or use separate ops for different types
of template?
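For completeness, here is a small, self-contained sketch of that suggestion. The (%) implementation below is my own hypothetical stand-in for the template operator in the original post (it substitutes $-prefixed words), not the actual code:

```haskell
{-# LANGUAGE OverloadedStrings, FlexibleInstances, TypeFamilies #-}

import qualified Data.Text as T
import Data.Maybe (fromMaybe)
import Prelude hiding (lookup)
import qualified Prelude as P

class Context a where
  lookup :: a -> T.Text -> T.Text

-- The equality constraints let the instance match *any* list of pairs
-- first, and only then force both components to T.Text, so overloaded
-- string literals in the context need no type annotations.
instance (key ~ T.Text, value ~ T.Text) => Context [(key, value)] where
  lookup xs k = fromMaybe (error $ "KeyError: " ++ show k) (P.lookup k xs)

-- Hypothetical stand-in for the template operator from the post:
-- words starting with '$' are looked up in the context.
(%) :: Context c => T.Text -> c -> T.Text
template % ctx = T.unwords (map subst (T.words template))
  where subst w = case T.stripPrefix "$" w of
                    Just k  -> lookup ctx k
                    Nothing -> w
```

Loaded into ghci, `"$name ate a banana." % [("name", "Johan")]` now type-checks without annotations on the pair components.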

Claus


   -- "$foo" % [("foo" :: T.Text, "bar" :: T.Text)]
   (%) :: Context c => T.Text -> c -> LT.Text
   (%) = undefined

The problem is that the compiler is not able to deduce that string
literals should have type 'Text' when used in 'Context's. For example

   ghci> :t "$foo" % [("foo", "bar")]

   <interactive>:1:8:
       No instance for (Context [(a, a1)])
         arising from a use of `%'
       Possible fix: add an instance declaration for (Context [(a, a1)])
       In the expression: "$foo" % [("foo", "bar")]

This forces the user to provide explicit type signatures, which makes
the construct more heavy weight and defeats the whole purpose of
introducing the 'Context' class:

   ghci> :t "$foo" % [("foo" :: T.Text, "bar" :: T.Text)]
   "$foo" % [("foo" :: T.Text, "bar" :: T.Text)] :: LT.Text

Is there any way to make the syntactically short `"$foo" % [("foo",
"bar")]` work but still keep the 'Context' class?

Johan



Re: [Haskell-cafe] Conditional compilation for different versions of GHC?

2010-12-01 Thread Claus Reinke

This is obviously a personal preference issue, but I try to avoid the
Cabal macros since they don't let my code run outside the context of
Cabal. I often times like to have a test suite that I can just use
with runhaskell, and (unless you can tell me otherwise) I can't run it
anymore.

Also, I think

#if GHC7
...
#endif

is more transparent than a check on template-haskell.


Just in case this hasn't been mentioned yet: if you want to be
independent of cabal, there is the old standby

__GLASGOW_HASKELL__
http://haskell.org/ghc/docs/latest/html/users_guide/options-phases.html#c-pre-processor

Claus





Re: [Haskell-cafe] Type Directed Name Resolution

2010-11-11 Thread Claus Reinke

but if improved records are never going to happen


Just to inject the usual comment: improved records have
been here for quite some time now. In Hugs, there is TREX;
in GHC, you can define your own. No need to wait for them.

Using one particular random variation of extensible records 
and labels:


{-# LANGUAGE CPP,TypeOperators,QuasiQuotes #-}

import Data.Label
import Data.Record

data PetOwner = PetOwner deriving Show
data FurnitureOwner = FurnitureOwner deriving Show

-- abstract out labels so that we can bridge backwards-incompatibility
-- http://haskell.org/haskellwiki/Upgrading_packages/Updating_to_GHC_7
#if __GLASGOW_HASKELL__ >= 700
catOwner   = [l|catOwner|]
chairOwner = [l|chairOwner|]
owner  = [l|owner|]
#else
catOwner   = [$l|catOwner|]
chairOwner = [$l|chairOwner|]
owner  = [$l|owner|]
#endif

-- we can still give unique labels, if we want
oldcat   = catOwner := PetOwner
   :# ()

oldchair = chairOwner := FurnitureOwner
   :# ()

-- but we don't have to, even if the field types differ
newcat   = owner := PetOwner
:# ()

newchair = owner := FurnitureOwner
:# ()

main = do
 print $ oldcat #? catOwner
 print $ oldchair #? chairOwner
 print $ newcat #? owner
 print $ newchair #? owner

This variation collected some of the techniques in a sort-of
library, which you can find at

   http://community.haskell.org/~claus/

   in files (near bottom of page)


   Data.Record
   Data.Label
   Data.Label.TH
   
   (there are examples in Data.Record and labels.hs)


That library code was for discussion purposes only, there
is no cabal package, I don't maintain it (I just had to update
the code for current GHC versions because of the usual
non-backward-compatibility issues, and the operator
precedences don't look quite right). There are maintained
alternatives on hackage (eg, HList), but most of the time
people define their own variant when needed (the basics
take less than a page, see labels.hs for an example).

I'm not aware of any systematic performance studies
of such library-defined extensible records (heavy use
of type-class machinery that could be compile-time,
but probably is partly runtime with current compilers;
the difference could affect whether field access is
constant or not).

It is also worrying that these libraries tend to be defined
in the gap between Hugs' strict (only allow what is known
to be sound) and GHC's lenient (allow what doesn't bite
now) view of type system feature interactions. 


The practical success weighs heavily in favour of GHC's
approach, but I'm looking forward to when the current
give-it-a-solid-basis-and-reimplement-everything
effort in GHC reaches the same level of expressiveness
as the old-style lenient implementation!-)

Claus



Re: [Haskell-cafe] Re: change in overlapping instance behavior between GHC 6.12 and GHC 7 causes compilation failure

2010-11-09 Thread Claus Reinke

 instance (EmbedAsChild m c, m1 ~ m) => EmbedAsChild m (XMLGenT m1 c)

That looked to me like a long-winded way of saying:

 instance (EmbedAsChild m c) => EmbedAsChild m (XMLGenT m c)

Unless I'm missing something?  


These two instances are not equivalent:

- the first matches even if m and m1 differ, causing a type error.
- the second matches only if m~m1.

Claus

{-# LANGUAGE OverlappingInstances #-}
{-# LANGUAGE FlexibleContexts #-}
{-# LANGUAGE FlexibleInstances #-}
{-# LANGUAGE TypeFamilies #-}
{-# LANGUAGE MultiParamTypeClasses #-}

class C a b where c :: a -> b -> Bool
instance C a a where c _ _ = True
instance C a b where c _ _ = False

class D a b where d :: a -> b -> Bool
instance a~b => D a b where d _ _ = True
-- instance       D a b where d _ _ = False -- would be a duplicate instance

{-
*Main> c () ()
True
*Main> c () True
False
*Main> d () ()
True
*Main> d () True

<interactive>:1:0:
    Couldn't match expected type `Bool' against inferred type `()'
    When generalising the type(s) for `it'
-}


Re: [Haskell-cafe] http://functionalley.eu

2010-11-06 Thread Claus Reinke
I opted to host them there rather than uploading them to Hackage,
because they're part of a wider project.


Note that this means they won't be cabal installable or searchable. Was
that your intention?


I am curious about this: wasn't cabal designed with the
option of having several package repos in mind?

   please clarify/document --remote-repo
   http://hackage.haskell.org/trac/hackage/ticket/759

Claus




Re: [Haskell-cafe] Reference for technique wanted

2010-11-05 Thread Claus Reinke

I haven't the faintest idea what SML is doing with the third
version, but clearly it shouldn't.  


Those numbers are worrying, not just because of the third
version - should doubling the tree size have such a large effect?

I find your report that GHC doesn't do as well with the third 
version as the first two somewhat reassuring.


Well, I reported it as a performance bug;-) Also because
GHC 6.12.3 had the second version as fast as the first while
GHC 7.1.x had the second version as slow as the third. You
can find my code in the ticket:

http://hackage.haskell.org/trac/ghc/ticket/4474


The difference makes me wonder whether there might be
performance consequences of the point-free style in more cases.


It would be good to know what is going on, as those
code transformations are not unusual, and I like to be
aware of any major performance trade-offs involved.


I'd still call both patterns difference lists,


I have been shouting about how bad a name "difference list" is
in Prolog for about 20 years.


Ah, when you put it like this, I agree completely. The name
focusses on the wrong part of the idea, or rather on the
junk in one particular, explanatory representation of the idea.


It really is much more useful in Prolog
to think of list differences as a kind of *relationship*, because
then you are able to think hey, why just two ends?  Why can't I
pass around *several* references into the same list?


Names have power - if you want to get rid of one as well
established as "difference lists", you'll have to replace it
with another. How about

   "incomplete data structures"?

That is already used in connection with more general 
applications of logic variables, and it focusses on half of 
the interesting bit of the idea (the other part  being how 
one completes the structures).


Then logic programmers can think of logic variables,
functional programmers can think of parameterised
structures, operational semanticists can think of contexts
with hole filling, denotational semanticists can think of
partially defined structures, and so on..

Claus



Re: [Haskell-cafe] Reference for technique wanted

2010-11-04 Thread Claus Reinke

The bottom line is that

- in logic programming languages, building a list by working on
  a pair of arguments representing a segment of the list is the
  NORMAL way to build a list; it's as fast as it gets, and the
  list is inspectable during construction.


modulo usage patterns: e.g., mostly linear use.


- at least in SML, the (fn ys => xs @ ys)  [Haskell: \ys -> xs ++ ys]
  approach is stunningly slow, so slow as to make even naive use of
  append more attractive for most uses; it is not the normal way
  to build lists, and for good reason; you can do nothing with a list
  so represented except compose and apply.


Even in your SML code, the boring old plain lists seemed to
be list_flatten, which uses difference lists in disguise, and won
your little test, right? Using Haskell notation:

flat (LEAF x)    ys = x : ys
flat (FORK(a,b)) ys = flat a (flat b ys)
--
flat (LEAF x)    = \ys -> x : ys
flat (FORK(a,b)) = \ys -> flat a (flat b ys)
--
flat (LEAF x)    = (x :)
flat (FORK(a,b)) = flat a . flat b

As in Prolog, it is often better not to make the structure
explicit, though I am slightly disappointed that GHC's
optimizer doesn't give the same performance for the
two versions (when summing a flattened strict tree in 
Haskell, I get roughly a factor of 2 between naive list 
append, explicit diff lists, as in the lower version of flat, 
and hidden diff lists, as in your list_flatten, the latter 
being fastest).


Of course, one has to be careful about usage patterns and
efficiency considerations. For instance, head and tail with
Haskell diff lists are not O(n), as you mentioned, because
of non-strictness. And building up lots of nested operations
under a lambda means that many of those operations are 
not likely to be shared, but repeated every time the lambda
is applied (such as converting back to plain lists), so one 
has to be careful about not doing that too often. etc.
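To make the comparison above concrete, here is a small self-contained version; the Tree type, the function names, and the left-nested test tree are my own, not taken from the SML code, and the actual timing is left to the reader:

```haskell
data Tree = Leaf Int | Fork Tree Tree

-- naive append: quadratic in the worst case on left-nested trees
flatApp :: Tree -> [Int]
flatApp (Leaf x)   = [x]
flatApp (Fork a b) = flatApp a ++ flatApp b

-- explicit difference lists: the extra argument is the "rest" of the list
flatDiff :: Tree -> [Int] -> [Int]
flatDiff (Leaf x)   ys = x : ys
flatDiff (Fork a b) ys = flatDiff a (flatDiff b ys)

-- "hidden" difference lists: the same function, point-free
flatDL :: Tree -> ([Int] -> [Int])
flatDL (Leaf x)   = (x :)
flatDL (Fork a b) = flatDL a . flatDL b

-- a left-nested tree, the bad case for naive append
leftNested :: Int -> Tree
leftNested 0 = Leaf 0
leftNested n = Fork (leftNested (n - 1)) (Leaf n)
```

All three agree on the result; the interesting part is comparing their run times on large left-nested trees.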



The practical consequence of this is that calling both techniques by
the same name is going to mislead people familiar with one kind of
language when they use the other:  logic programmers will wrongly
expect dlists to be practically useful in functional languags,
functional programmers will expect them to be impractical in logic
programming languages.


I do tend to expect a little more insight from Haskellers, but
perhaps you're right. Having a pre-canned library instead of
writing out the near-trivial code patterns oneself, one might
be surprised when expected miracles don't happen (diffarrays
were one such case for me;-).

I'd still call both patterns difference lists, because it can be
useful to know about the connections, but if you could suggest
text for the DList documentation that warns of the differences,
and of performance pitfalls, I'm sure the package author would
be happy to add such improvements.

If we ever get per-package wikis for hackage, you could add 
such comments yourself.


Claus




Re: [Haskell-cafe] Non-hackage cabal source

2010-11-04 Thread Claus Reinke

remote-repo: myhackage:http://myhackage/packages
However, when I try to unpack my package with cabal:
$ cabal unpack MyPackage
Downloading MyPackage-0.0.1...
cabal: Failed to download
http://myhackage/packages/package/MyPackage-0.0.1.tar.gz : ErrorMisc
Unsucessful HTTP code: 404

Why is cabal even making this request?
Why is it not making the request to
http://myhackage/packages/MyPackage/0.0.1/MyPackage-0.0.1.tar.gz


Seems to be a hardcoded difference in repo layouts, in
Distribution/Client/Fetch.hs:packageURI. Refers to repo layouts
for current hackage and future package servers, although the
new docs (from an old GSoC project) and implementation do not
appear to be in sync:

http://hackage.haskell.org/trac/hackage/wiki/HackageDB
http://hackage.haskell.org/trac/hackage/wiki/HackageDB/2.0/OldURIs
http://hackage.haskell.org/trac/hackage/wiki/HackageDB/2.0/URIs

Btw, I've opened a couple of related tickets:

- please clarify/document --remote-repo
   http://hackage.haskell.org/trac/hackage/ticket/759

- expose existing functionality: fetchPackage/unpackPackage
   http://hackage.haskell.org/trac/hackage/ticket/758

Claus



Re: [Haskell-cafe] Reference for technique wanted

2010-11-03 Thread Claus Reinke

  The characteristics of the logical variable are as follows.
  An incomplete data structure (ie. containing free variables)
  may be returned as a procedure's output. The free variables
  can later be filled in by other procedures, giving the effect
  of implicit assignments to a data structure (cf. Lisp's rplaca,
  rplacd).


There they are *explaining* things to Lisp programmers;
not giving the origin of an idea.


If you want to read it that way, it still means that they
and their readers were sufficiently aware of the connections
that it made sense to explain things this way. I was trying to
point out the general context in which Prolog techniques
developed, but my impressions are undoubtedly biased by
the way I was introduced to Prolog in the late 1980s. If you 
have an undisputed reference to the original invention of 
difference lists, where the author(s) explicitly deny any 
connection to Lisp, I'd be interested.



Also, I thought that Prolog had two origins - one in
grammars, the other in logic as a programming language.


See http://en.wikipedia.org/wiki/Definite_clause_grammar
This was specifically the focus of Alain Colmerauer.

You may be thinking of Cordell Green's 'The use of
theorem-proving techniques in question-answering
systems'.


No, I haven't read that, yet (I've found the later 
'Application of Theorem Proving to Problem Solving'
online, but not this one). 

I was thinking of the later theme of 'Predicate Logic 
as a Programming Language', 'Algorithm = Logic + 
Control', etc (here exemplified by titles of Kowalski's 
papers), but Kowalski points to Green's paper as 
'perhaps the first zenith of logic in AI' in his 
'The Early Years of Logic Programming' (Kowalski's
papers online at: http://www.doc.ic.ac.uk/~rak/), 
so perhaps that was the start of this theme.


Claus




Re: [Haskell-cafe] Reference for technique wanted

2010-11-02 Thread Claus Reinke

Interesting discussion. I still think it is the same idea,
namely to represent not-yet-known list tails by variables,
embedded into two different kinds of languages.

  \rest -> start ++ rest
  [start|rest]\rest      -- '\' is an infix constructor


Savvy Prolog programmers wouldn't *DREAM* of
using an infix constructor here.


Well, you got me there, I'm not a savvy Prolog programmer
anymore, so I took the '\' for difference lists straight from
Sterling & Shapiro, who employ it for clarity over efficiency.

http://books.google.de/books?id=w-XjuvpOrjMC&pg=PA284&lpg=PA283&ots=4WF3_NFXOt&dq=difference+lists+Prolog

Since my argument was about common origin of ideas
vs differences in representation/implementation, I chose
representations that I considered suggestive of the ideas.


The differences arise from the different handling of
variables and scoping in those languages:

- functional languages: explicit, local scopes, variables
  are bound by injecting values from outside the scope
  (applying the binding construct to values); scoped
  expressions can be copied, allowing multiple
  instantiations of variables

- logic languages: implicit, global scopes, variables
  are bound by finding possible values inside the scope


Eh?  The scope of *identifiers* is strictly LOCAL in Prolog.
Logic *variables* don't *have* scope.


I was thinking of abstract declarative semantics, where
variables (even logic ones) are replaced by substitution.

If you make the quantifiers explicit, the identifiers become
alpha-convertible, so their names are irrelevant, and the
scope is given explicitly, as the part of the program enclosed
by the quantifiers. But since variables can be passed around
in Prolog, one needs to move the quantifiers out of the way,
upwards (called scope extrusion in process calculi, which
have the same issue).

You end up with no upper bound for the quantifiers - in
that sense, logic variables have a global scope, because
the quantifiers must enclose all parts of the program to
which the variables have been passed.


To close the circle: I understand that difference lists in
Prolog might have been a declarative translation of
in-place updating in Lisp lists:-)


It seems unlikely.  Prolog was born as a grammar formalism
(metamorphosis grammars) and the idea of a non-terminal as
a relation between a (starting point, end point) pair owes
nothing to Lisp.


I suspect you've researched the history of Prolog in more
detail than I have, so I'll just remind you that Prolog wouldn't
have been successful without efficient implementations
(including structure sharing), and that both implementers
and early users tended to be aware of implementation
aspects and of Lisp - they wanted to leave behind the impure
aspects of Lisp, but they wanted their implementations of
'Predicate Logic as a Programming Language' to be as
efficient as Lisp implementations.

One example reference:

   Warren, Pereira, Pereira; 1977
   Prolog - The Language and its Implementation
   compared with Lisp
   http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.65.7097

On page 111, we find:

   The characteristics of the logical variable are as follows.
   An incomplete data structure (ie. containing free variables)
   may be returned as a procedure's output. The free variables
   can later be filled in by other procedures, giving the effect
   of implicit assignments to a data structure (cf. Lisp's rplaca,
   rplacd).

This mindset - wanting to program declaratively, but
being aware of implementation issues that might help
efficiency, and being aware of how competing languages
do it, has persisted till today, and continues to fuel advances
(e.g., the way that logic programming could fill in incomplete
structures without traversing them again prompted the
development of circular programming techniques in
functional languages - recursion and non-strictness allow
variables to be bound by the result of a computation
that already distributes those variables over a structure).
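The textbook example of that circular-programming technique is repmin (usually credited to Bird); a minimal sketch, not taken from the cited papers:

```haskell
data Tree = Leaf Int | Fork Tree Tree deriving (Eq, Show)

-- Replace every leaf by the global minimum in a SINGLE traversal:
-- the minimum computed *by* the traversal is, thanks to laziness,
-- also fed back *into* the traversal as the value for the new leaves.
repmin :: Tree -> Tree
repmin t = t'
  where
    (m, t') = go t
    go (Leaf x)   = (x, Leaf m)            -- m is the final minimum
    go (Fork a b) = (min ma mb, Fork a' b')
      where (ma, a') = go a
            (mb, b') = go b
```

The binding (m, t') = go t is circular: go both produces m and consumes it, which non-strictness makes well-defined.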

Also, I thought that Prolog had two origins - one in
grammars, the other in logic as a programming language.

Claus




[Haskell-cafe] non-hackage cabal repos? unpacking non-repo package tarballs?

2010-11-02 Thread Claus Reinke

I often find myself writing example code that I'd like
to distribute via cabal, but without further burdening
hackage with not generally useful packages.

1. The simplest approach would be if cabal could expose
its internal 'unpackPackage' as a command, so that

   author: cabal sdist
   user: cabal unpackPackage Example.tar.gz

would work (the point being that cabal handles .tar.gz,
which not all users have installed separately; distributing
the .tar.gz would be handled via the usual channels; and
after unpacking, cabal can build the source package).

Note that 'cabal unpack' does not seem helpful here, 
as it touches the package index instead of just looking 
at the tarball.


Could this be made possible, please? Or is it already?

2. Failing that, I remembered that cabal used to be designed
without a fixed package repo address, and a little digging
found options --remote-repo and --local-repo, as well as
the directory layout for package repositories:

http://hackage.haskell.org/trac/hackage/wiki/HackageDB

So this scenario seems possible, in principle:

   author: expose a package repo http://myrepo
   user: cabal --remote-repo=http://myrepo fetch Example

Is it possible to add a (temporary) repo location like this, 
especially for 'cabal fetch'/'cabal unpack'? I've managed 
to get 'cabal --local-repo=myrepo list Example' to work 
(needs uncompressed 00-index.tar), but not the remote

variant, and fetching from the local repo doesn't work,
either.

Are there any usage examples of these features?

Claus



Re: [Haskell-cafe] Reference for technique wanted

2010-11-01 Thread Claus Reinke

To simplify, the difference in persistence between the two
representations is enough to consider them very different
as it makes a dramatic difference in interface.


Interesting discussion. I still think it is the same idea,
namely to represent not-yet-known list tails by variables,
embedded into two different kinds of languages.

    \rest -> start ++ rest
    [start|rest]\rest      -- '\' is an infix constructor

The differences arise from the different handling of
variables and scoping in those languages:

- functional languages: explicit, local scopes, variables
   are bound by injecting values from outside the scope
   (applying the binding construct to values); scoped
   expressions can be copied, allowing multiple
   instantiations of variables

- logic languages: implicit, global scopes, variables
   are bound by finding possible values inside the scope
   (unifying bound variables); outside of non-determinism/
   sets-of-solutions handling, only non-scoped terms can
   be copied, allowing single instantiation only

If current functional language implementations had not
abandoned support for reducing under binders, one could
float out the binder for a difference list, and get limitations
closer to those of logic languages (though binding to values
would then become much more unwieldy).

If logic languages allowed local binders in terms, one could
copy difference lists more easily (though substitution and
unification would then involve binders).

So, yes, the realization of the idea is different, as are the
language frameworks, and the junk in the representations,
but the idea is the same.

Btw, functional language implementations not reducing
under binders also implies that functional difference list
operations are not shared to the same extent as logic
difference list operations are - copying a closure copies
un-evaluated expressions, to be re-evaluated every
time the closure is opened with different variable bindings,
so the functional representation is not as efficient as the
logical one, in most implementations.

I can't confirm the reference, but several authors point
to this for an early description

[CT77] Clark, K.L.; Tärnlund, S.Å.:
A First Order Theory of Data and Programs.
In: Inf. Proc. (B. Gilchrist, ed.), North Holland, pp. 939-944, 1977.

To close the circle: I understand that difference lists in
Prolog might have been a declarative translation of
in-place updating in Lisp lists:-)

Claus





Re: [Haskell-cafe] Edit Hackage

2010-11-01 Thread Claus Reinke



Stack Overflow and Reddit are at least improvements over the traditional
web forums, starting to acquire some of the features Usenet had twenty
years ago.  Much like Planet-style meta-blogs and RSS syndication makes
it liveable to follow blogs.


Very much this. I mourn Usenet's potential as much as anyone, but life
goes on.


Agreed, in principle. However, the quality of discussions on Reddit
makes me want to run away more often than not - it is already worse
than Usenet was in its last throes (yes, I know it is living an
afterlife;-). One thing we learned from Usenet is that trying to
add to a thread gone bad is very unlikely to make it any better,
and too many Reddit threads go bad so quickly that I've never
felt like even trying to improve the signal/noise ratio. Just my
own impression, of course (and perhaps the Haskell Reddit
doesn't suffer quite as much).

Also, while both Reddit and Stack Overflow can be read without
Javascript, both require Javascript for posting (can't even log in
to Reddit without, and have to edit half-blind in Stack Overflow
without). Community-edited sites like these are the last ones on
which I'd want to be forced to enable Javascript.

Moving from a few Haskell mailing lists to many lists, to added
IRC channels, to added blogs and RSS-feeds and aggregators,
and added sites like Reddit and Stack Overflow does give more
options, but makes it rather harder to follow everything (in the
beginning, feeds and aggregators give you the feeling that
you're more up to date than ever, but at some point your feed
handler overflows your number of hours per day:-).

Which means that it is also getting more and more difficult to
reach people as easily as before (do you ask on haskell-cafe,
haskell-beginners, reddit, or SO? do you announce on haskell,
haskell-cafe, or reddit? do you survey on haskell, haskell-cafe,
google, or reddit? do you answer queries on the wiki, on -cafe,
on -beginners, on reddit, on SO, or where? and so on..). Some
people try to crosspost items to their favourite sites, in the
hope of finding them again, in a single place. So many social
sites now compete with each other that blog entries come
with one-click-forward-this-there buttons.

So, I agree with Don that you're missing things if you only
follow the -cafe, and I agree with others that the -cafe is the
most important forum for me. Overall, I'm not too happy
with the way things are diverging, though..

Apart from the Usenet-mailing list move, it also reminds
me of the command line-GUI movement - some people
are quite happy with tools that at least remind them of
command line control (such as most mail readers or
programmer's editors), while others want web and guis
that do not remind them of something they've never seen
or put to good use (the command line prompt).

Or perhaps, it is just a tick easier to get started on web
forums - you can read without subscribing, you can
subscribe without committing yourself (throwaway
accounts on reddit, for instance) or installing tools (if
I recall correctly, my last Windows notebook no longer
came with pre-installed email client..).


As you say, most email archives leave something to be desired. As far
as I know, the best way to find anything in old -cafe threads is to do
a google search with
"site:http://www.haskell.org/pipermail/haskell-cafe/", and there's no
good way to get an overview. Especially as topic drift leads to
subject lines being uninformative (I mean, "Edit Hackage"? What?).


I have the feeling that the existence of 4-5 archives for some
Haskell lists means that the Google ranking will be spread
among them, giving each a weaker ranking than one would
hope for (it certainly didn't help that some time ago, haskell.org
had robots banned from its mailing list archives for a while).

Btw, does anyone know why searching with "list:haskell-cafe"
does not help much, even though every single posting to this
list has a "List-Archive:" heading pointing to the pipermail
archive?

Claus




Re: [Haskell-cafe] who's in charge?

2010-10-29 Thread Claus Reinke
2) If there is a problem, here's what you could do about it, 
in descending order of attractiveness:


y) specify the requirements (a sample application
   of what needs to be supported would be a start)

z) review the existing options wrt to those requirements
   (which ones are you aware about, why don't they work
   for you?) (*)


a) Fix it yourself

b) Pay someone else to fix it

c) Motivate or politely encourage others to fix it, providing moral 
support, etc.


d) provide framework organisation (repo, discussion venue,
   code outline, issue and task tracking, ..) so that others can
   contribute small patches instead of having to take on full
   responsibilities?


(*)
IMO, the lack of good quality reviews of hackage contributions
(especially over whole usage areas, such as web development,
GUIs, databases, ..) has been a major and growing obstacle to
hackage use, not to mention targeting of efforts.

Some form of wiki-based hackage reviews column (with
editor-in-charge, invited reviews, and mild reviewing of
reviews for obvious problems, but otherwise free-form and
-schedule) would probably work, if integrated into hackage
itself. Unfortunately, I don't have the time to edit such a thing,
but perhaps others here feel motivated to do so?

Claus



Re: [Haskell-cafe] Need programming advice for Network ProtocolParsing

2010-10-27 Thread Claus Reinke

I'm occasionally working on making a friendly yet performant library that
simultaneously builds parsers and generators, but it's non-trivial. If you


I'm probably missing something in the "friendly yet performant"
requirements, but I never quite understood the difficulty:

A typical translation of grammar to parser combinators has very
little code specific to parsing - it is mostly encoding the grammar
in a coordination framework that calls on literal parsers at the
bottom. Since the coordination framework uses some form of
state transformers, exchanging the literal parsers with literal
unparsers should turn the grammar parser into a grammar
unparser (in essence, the non-terminal code is reusable, the
terminal code is specific to the direction of data flow).

Add switch-points (where the mode can switch from parsing
to unparsing and back), and one has syntax-directed editors
(here you need to be able to restart the process on arbitrary
non-terminals), or expect-like protocol-driven computations
(two or more agents with complementary views of which
parts of the grammar involve parsing and which unparsing).

The non-trivial parts I remember are to ensure that the unparser
is directed by the AST (unless you want to generate random
sentences from the whole language), just as the parser is directed
by the input String, and not biasing the combinator framework
towards parsing (which happens all too easily). But then, it has
been a long time since I wrote such parser/unparsers (the first
before Monads and do-notation became must-have aspects of
combinator parsers, when we were free just to make it work;-).

It would be useful to have an overview of the issues that
lead to the widespread view of this being non-trivial (other
than in the narrow interpretation of non-trivial as "needs
some code"). I've always wondered why there was so much
focus on just parsing in combinator libraries.

Just curious,
Claus




[Haskell-cafe] Haddock API and .haddock interface files questions

2010-10-26 Thread Claus Reinke

Some questions about Haddock usage:

1. Haddock executable and library are a single hackage package,
   but GHC seems to include only the former (haddock does not
   even appear as a hidden package anymore). Is that intended?

2. Naively, I'd expect Haddock processing to involve three stages:
   1. extract information for each file/package
   2. mix and match information batches for crosslinking
   3. generate output for each file/package

   I would then expect .haddock interface files to represent the
   complete per-package information extracted in step 1, so
   that packages with source can be used interchangeably
   with packages with .haddock files.

   However, I can't seem to use 'haddock --hoogle', say, with
   only .haddock interface files as input ("No input file(s).").

3. It would be nice if the Haddock executable was just a thin
   wrapper over the Haddock API, if only to test that the API
   exposes sufficient functionality for implementing everything
   Haddock can do.

   Instead, there is an awful lot of useful code in Haddock's
   Main.hs, which is not available via the API. So when coding
   against the API, for instance, to extract information from
   .haddock files, one has to copy much of that code.

   Also, some important functionality isn't exported (e.g., the
   standard form of constructing URLs), so it has to be copied
   and kept in synch with the in-Haddock version of the code.

   It might also be useful to think about the representation
   of the output of stage 2 above: currently, Haddock directly
   generates indices in XHtml form, even though much of
   the index computation should be shareable across
   backends. That is, current backends seem to do both
   stage 2 and stage 3, with little reuse of code for stage 2.

By exposing sufficient information in the API, and allowing
.haddock interface files as first-class inputs, there should be
less need for hardcoding external tools into Haddock
(such as --hoogle, or haddock-leksah). Instead, clients should
be able to code alternative backends separately, using Haddock
to extract information from sources into .haddock files, and
the API for processing those .haddock files.

Are these expectations reasonable, or am I misreading the
intent behind API and .haddock files? Is there any
documentation about the role and usage of these two
Haddock features, as well as the plans for their development?

Claus



Re: [Haskell-cafe] Bug in HTTP (bad internal error handling)

2010-10-16 Thread Claus Reinke

After it catches this error, the function returns (line 376):

return (fail (show e))

The fail is running in the Either monad (The Result type = Either).
This calls the default Monad implementation of fail, which is just a
call to plain old error. This basically causes the entire program to
crash.



Actually, it appears that simpleHTTP isn't actually supposed to throw
an IOException, and it is instead supposed to return a ConnError
result. So the real fix is to fix the code to make this happen. But


Sounds like a victim of 


   http://hackage.haskell.org/trac/ghc/ticket/4159

For mtl clients, 'fail' for 'Either' used to call 'Left'. That was
changed, though the ticket does not indicate the library
versions affected.
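The difference can be sketched with two illustrative functions (neither is HTTP's actual code): the old mtl-style behaviour turned failures into a Left value that callers can match on, while the default Monad 'fail' is plain 'error', which crashes the program as soon as the result is forced.

```haskell
-- old mtl behaviour for Either: `fail` produced a Left, so a caller
-- of simpleHTTP could pattern-match on the connection error
failSoft :: String -> Either String a
failSoft = Left

-- default Monad `fail` (what the changed instance falls back to):
-- a plain `error`, which crashes the whole program when forced
failHard :: String -> Either String a
failHard = error
```

So 'return (fail (show e))' is only safe when the Either instance in scope still maps 'fail' to Left; with the default it is a delayed crash.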

Claus



Re: [Haskell-cafe] A new cabal odissey: cabal-1.8 breaking its ownneck by updating its dependencies

2010-09-17 Thread Claus Reinke

On the topic of cabal odisseys:

I think it would help to document (prominently) what Cabal
fundamentally doesn't (try to) do, to avoid optimistic
expectations (and hence the surprises when Cabal doesn't
meet those expectations), and to point out the design choices
behind many bug tickets (even if the choices are temporary
and driven by limited manpower). Such as

- cabal doesn't keep track of what it installs, hence

   - no uninstall
   - no application tracking
   - library tracking and information about installed
   library configurations only via ghc-pkg
   ..

- cabal's view of package interfaces is limited to explicitly
   provided package names and version numbers, hence

   - no guarantee that interface changes are reflected in
   version number differences
   - no detailed view of interfaces (types, functions, ..)
   - no reflection of build/configure options in versions
   - no reflection of dependency versions/configurations
   in dependent versions
   - no knowledge of whether dependencies get exposed
   in dependent package interfaces
   ..

This is just to demonstrate the kind of information I'd like
to see - Duncan & co know the real list, though they don't
seem to have it spelled out anywhere? So users see that
it works to say 'cabal install' and build up their own often
optimistic picture of what (current) Cabal is supposed to
be able to do. It might even be useful to have a category
of core-tickets or such on the trac, to identify these as
work-package opportunities for hackathons, GSoC and 
the like, positively affecting many tickets at once.


On to the specific issue at hand:


2. cabal ought to allow using multiple versions of a single package in
more circumstances than it does now
..
2. This is a problem of information and optimistic or pessimistic
assumptions. Technically there is no problem with typechecking or
linking in the presense of multiple versions of a package. If we have
a type Foo from package foo-1.0 then that is a different type to Foo
from package foo-1.1. GHC knows this.

So if for example a package uses regex or QC privately then other
parts of the same program (e.g. different libs) can also use different
versions of the same packages. There are other examples of course
where types from some common package get used in interfaces (e.g.
ByteString or Text). In these cases it is essential that the same
version of the package be used on both sides of the interface
otherwise we will get a type error because text-0.7:Data.Text.Text
does not unify with text-0.8:Data.Text.Text.

The problem for the package manager (i.e. cabal) is knowing which of
the two above scenarios apply for each dependency and thus whether
multiple versions of that dependency should be allowed or not.
Currently cabal does not have any information whatsoever to make that
distinction so we have to make the conservative assumption. If for
example we knew that particular dependencies were private
dependencies then we would have enough information to do a better job
in very many of the common examples.

My preference here is for adding a new field, build-depends-private
(or some such similar name) and to encourage packages to distinguish
between their public/visible dependencies and their private/invisible
deps.
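A hypothetical .cabal fragment with such a field might look like this (build-depends-private is only proposed in this thread, not an implemented Cabal field; package names are illustrative):

```
library
  -- types from these packages appear in the exported interface,
  -- so all users must agree on the versions
  build-depends:         base >= 4, bytestring, text

  -- used only internally; other versions elsewhere in the same
  -- program would be harmless
  build-depends-private: regex-posix, QuickCheck
```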


This private/public distinction should be inferred, or at least be 
checked, not just stated. I don't know how GHC computes its
ABI hashes - are they monolithic, or modular (so that the 
influence of dependencies, current package, current compiler,
current option settings, could be extracted)? 

Even for monolithic ABI hashes, it might be possible to compute 
the package ABI hash a second time, setting the versions of all 
dependencies declared to be private to 0.0.0 and seeing if that 
makes a difference (if it does, the supposedly private dependency 
leaks into the package ABI, right?).


And secondly, how about making it possible for cabal files to 
express sharing constraints for versions of dependencies?


To begin with, there is currently no way to distinguish packages 
with different flag settings or dependency versions. If I write a 
package extended-base, adding the all-important functions car 
and cdr to an otherwise unchanged Data.List, that package will 
work with just about any version of base (until car and cdr 
appear in base:-), but the resulting packages may not be 
compatible, in spite of identical version numbers.


If it was possible to refer to those package parameters (build
options and dependency versions), one could then add constraints
specifying which package parameters ought to match for a build
configuration to be acceptable.

Let us annotate package identifiers with their dependencies,
where the current lack of dependency and sharing annotations
means "I don't care how this was built". Then


   build-depends:  a, regex

means I need a and regex, but I don't care whether a also uses
some version of regex, while

   build-depends: a, regex
   sharing: 

Re: Réf. : [Haskell-cafe] Re: circular imports

2010-09-07 Thread Claus Reinke

That sort of code (stripped out):

In Game.hs:

data Game = Game { ...
  activeRules :: [Rule]}

applyTo :: Rule -> Game -> Game
applyTo r gs = ...


Often, it helps to parameterize the types/functions (instead
of using recursive  modules to hardcode the parameters).

Would something like this work for your case (taking the
Game module out of the loop)?

data Game rule = Game { ...
 activeRules :: [rule]}

applyTo :: rule -> Game rule -> Game rule
applyTo r gs = ...


In Rule.hs:
.. 
isRuleLegal :: Rule -> NamedRule -> Game Rule -> Bool

isRuleLegal = ...

In Obs.hs:

evalObs :: Obs -> NamedRule -> Game Rule -> EvalObsType
evalObs = ...
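Put together, the parameterized version breaks the import cycle like this (a compilable sketch of the suggestion above; names are simplified - NamedRule and Obs are omitted, and the function bodies are made up for illustration):

```haskell
-- Game.hs side: the rule type is now a parameter, so this module
-- no longer needs to import the Rule module
data Game rule = Game { activeRules :: [rule] }

applyTo :: rule -> Game rule -> Game rule
applyTo r g = g { activeRules = r : activeRules g }

-- Rule.hs side: imports Game without creating a cycle
newtype Rule = Rule { ruleName :: String }

isRuleLegal :: Rule -> Game Rule -> Bool
isRuleLegal r g = ruleName r `elem` map ruleName (activeRules g)
```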


For the record, I'd like to have recursive modules without
having to go outside the language (some standardized
notion of Haskell module interfaces would be nicer than
implementation-specific boot files). 


But I'd like to have parameterized modules even more,
as that would allow me to avoid many use cases of
recursive modules (again, that would seem to require
module interfaces).

Claus



Re: [Haskell-cafe] what's the best environment for haskell work?

2010-08-06 Thread Claus Reinke

For another programs (that compile fine with ghc --make), I didn't
bother making the package. But I had to find out the package
dependencies by building, checking where it fails, and trying to add a
package to the dependency list. Maybe there's a better way, didn't
find it.


We do plan to fix this in the same way we resolve missing imports.  
I had a look to see if I could do it when a user cabalizes the source, 
but ghc --make -v does not include the packages automatically 
loaded in its output.  Instead we will need to wait for the error then 
resolve it when the user presses Ctrl+R.


I looked into this for Vim's haskellmode (would also be nice for
cabal itself), never got the time to implement more than 
':show packages', but perhaps this info helps you along:


- 'ghc -v -e ..' instead of 'ghc -v --make ..' avoids the temporary files
   (assuming the file can be loaded by GHCi).

- GHCi's ':show packages' was intended to provide package info,
   but seems to have bitrotted (you could try the undocumented
   internal ':show linker' and please file a ticket for ':show packages').

- to force loading without running the code, one might have to 
   resort to ugly things like:

ghc -v -e 'snd(main,1)' -e ':show linker' tst.hs

- at which stage, we might as well use something like

   $ ghc -v -e 'snd(main,1)' tst.hs 2>&1 | grep Load
   Loading package ghc-prim ... linking ... done.
   Loading package integer-gmp ... linking ... done.
   Loading package base ... *** Parser:
   Loading package ffi-1.0 ... linking ... done.
   Loading package ghc-paths-0.1.0.6 ... linking ... done.
   Loading package array-0.3.0.1 ... linking ... done.
   Loading package containers-0.3.0.0 ... linking ... done.

- for automatic package generation, one would also need
   the language extensions in use (among other things);
   unfortunately, the code for that in GHC stops at the
   first missing extension and is not yet written in a way
   that makes it easy to identify all such cases.

- ideally, one would have ghc --show-languages and
   ghc --show-packages, or ghc --mk-cabal, or implement 
   them via GHC API.


Hth,
Claus

PS. if someone started a haskell-tools list (for cross-tool
   interests in haskellmodes, ides, ghc-api clients, etc.), I
   would subscribe, though I can't afford the time to do
   much Haskell hacking at the moment



Re: [Haskell-cafe] Haddock anchors

2010-07-15 Thread Claus Reinke
One of the problems is that the anchors that Haddock 
currently generate aren't always legal in HTML, XHTML, 
or XML. I'd like to fix the anchor generation so that they 
are. If I do, then links between old and new generated 
Haddock pages will land on the right page, but won't 
always get to the right spot on the page.


If I recall correctly, Haddock's relative links hit a spot in
the specs that was open to interpretation, as in: different
browsers chose to interpret the specs differently. Hence
the doubled 'NAME' tags:

http://trac.haskell.org/haddock/ticket/45

Will this be a problem for anyone? On one's own 
machine, I imagine we can come up with a simple 
script that will just rebuild all the Haddock docs and 
that will take care of it.


What is the problem you have in mind, and what is
your solution? Apart from differing browsers, there 
used to be tools out there that scrape information 
from Haddock pages to provide access to docs, such 
as haskellmode for Vim or the Hayoo! indexer:


http://trac.haskell.org/haddock/ticket/113

To what extent any of these would be affected depends
on the nature of the changes. Also, while haskellmode
no longer constructs the relative urls itself, being able
to find the relative url for any identifier with Haddocks
in a scriptable/predictable way would be helpful (eg, to
open Haddocks from GHCi).

Claus
 


Re: [Haskell-cafe] Debugging cause of indefinite thread blocking

2010-07-07 Thread Claus Reinke
I am making use of the Data.Array.Repa module to achieve data-parallelism. 
On running my program I get the error:


thread blocked indefinitely on an MVar operation


Haven't seen any responses yet, so here are some suggestions:


Two questions:
1. What could be some of the potential causes for the above error when the 
only use of parallelism is Repa arrays?


If you're sure the issue is in Repa, contact the Repa authors?

2. What are the best strategies for debugging the cause(s) of such an 
error?


If it is in your code, you could try replacing the Control.Concurrent
operations with variants that generate a helpful trace before calling
the originals. That approach was used in the Concurrent Haskell
Debugger: http://www.informatik.uni-kiel.de/~fhu/chd/
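Such tracing wrappers can be very small; a sketch (assuming a reasonably recent base, where Debug.Trace exports traceIO - hPutStrLn to stderr would do equally well):

```haskell
import Control.Concurrent (MVar, putMVar, takeMVar)
import Debug.Trace (traceIO)

-- drop-in replacements that log each MVar operation before performing
-- it, so the last line printed points at the operation that blocked
takeMVarT :: String -> MVar a -> IO a
takeMVarT label mv = traceIO ("takeMVar " ++ label) >> takeMVar mv

putMVarT :: String -> MVar a -> a -> IO ()
putMVarT label mv x = traceIO ("putMVar " ++ label) >> putMVar mv x
```

Replacing the plain operations with these in the suspect code, then watching which label is printed last before the deadlock, usually narrows the search quickly.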

Google for concurrent haskell debugger to find related and
follow-on work on extensions of chd and alternative verification
approaches.

Newer GHCs have some internal runtime event logging features
(eg, link the code with -debug, then run with +RTS -Ds). See RTS
options in the GHC user manual (4.15.[67]).

Claus





Re: [Haskell-cafe] Re: checking types with type families

2010-07-03 Thread Claus Reinke

 Prelude> :t id :: Eq b => b -> b
 id :: Eq b => b -> b :: (Eq b) => b -> b
 Prelude> id :: Eq b => b -> b

 <interactive>:1:0:
 No instance for (Show (b -> b))
   arising from a use of `print' at <interactive>:1:0-19
 Possible fix: add an instance declaration for (Show (b -> b))
 In a stmt of a 'do' expression: print it

The Eq constraint is irrelevant to the fact that there is no b -> b Show
instance. The search for an instance looks only at the type part, and if
it finds a suitable match, then checks if the necessary constraints are
also in place.


Thanks - that first sentence is right, but not because of the second!-)

I thought the error message was misleading, speaking of instance
when it should have been speaking about instance *head*. But I
was really confused by the difference between (making the show
explicit and using scoped type variables):

   *Main> :t (show (id::a->a))::forall a.Eq a=>String

   <interactive>:1:0:
   Ambiguous constraint `Eq a'
   At least one of the forall'd type variables mentioned by the constraint
   must be reachable from the type after the '=>'
   In an expression type signature: forall a. (Eq a) => String
   In the expression:
 (show (id :: a -> a)) :: forall a. (Eq a) => String

   <interactive>:1:1:
   Could not deduce (Show (a -> a)) from the context (Eq a)
 arising from a use of `show' at <interactive>:1:1-15
   Possible fix:
 add (Show (a -> a)) to the context of an expression type signature
 or add an instance declaration for (Show (a -> a))
   In the expression:
 (show (id :: a -> a)) :: forall a. (Eq a) => String

which does list a non-empty context from which 'Show (a->a)'
could not be deduced and

   *Main> :t show (id :: Eq a => a -> a)

   <interactive>:1:0:
   No instance for (Show (a -> a))
 arising from a use of `show' at <interactive>:1:0-25
   Possible fix: add an instance declaration for (Show (a -> a))
   In the expression: show (id :: (Eq a) => a -> a)

which only talks about general 'Show (a->a)' instances. We
cannot define 'instance Show (forall a. Eq a => a -> a)', which
is why the 'Eq a' constraint plays no role, I think..

Btw, the issues in the other example can be made more
explicit by defining:

    class MyEq a b | a->b, b->a
    instance MyEq a a

First, both FDs and TFs simplify this:

    *Main> :t id :: (forall b. MyEq b Bool => b->b)
    id :: (forall b. MyEq b Bool => b->b) :: Bool -> Bool
    *Main> :t id :: (forall b. b~Bool => b->b)
    id :: (forall b. b~Bool => b->b) :: Bool -> Bool

but the FD version here typechecks (note, though, that the
type is only partially simplified)

    *Main> :t id :: (forall b. MyEq b Bool => b->b)
    -> (forall b. MyEq b Bool => b->b)
    id :: (forall b. MyEq b Bool => b->b)
    -> (forall b. MyEq b Bool => b->b)
  :: (forall b. (MyEq b Bool) => b -> b) -> Bool -> Bool

while the TF version doesn't

    *Main> :t id :: (forall b. b~Bool => b->b)
    -> (forall b. b~Bool => b->b)

    <interactive>:1:0:
    Couldn't match expected type `forall b. (b ~ Bool) => b -> b'
   against inferred type `forall b. (b ~ Bool) => b -> b'
  Expected type: forall b. (b ~ Bool) => b -> b
  Inferred type: forall b. (b ~ Bool) => b -> b
    In the expression:
  id ::
    (forall b. (b ~ Bool) => b -> b)
    -> (forall b. (b ~ Bool) => b -> b)

Claus




Re: [Haskell-cafe] Is my code too complicated?

2010-07-03 Thread Claus Reinke
Most languages today provide a certain glue to bring everything 
together.


Most languages today provide several kinds of glue and, while some
of those kinds are not recommended, Haskell unfortunately doesn't
provide all useful kinds of glue. Especially the module system is a
weak point: in SML, you'd have parameterized modules, in Java,
you'd have dependency injection (of course, being Java, they do
everything the hard way, via XML and reflection; but they are
already on their way back, with things like Spring, annotations,
and aspect-oriented programming, pushing full reflection under
the hood, permitting to compose plain-old Java objects, and
reducing the role of XML configuration files), in Haskell, we have ??
(yes, extended type-classes are equivalent to SML modules in
theory, but not in hackage practice, nor are first-class modules
modelled via extensible records).


The problem with that approach is:  This makes my code harder to
understand for beginners.  Usually they can tell /what/ my code is
doing, because it's written in natural language as much as possible, but
they couldn't reproduce it.  And when they try to learn it, they give up
fast, because you need quite some background for that.


What kind of beginner? What kind of background? Since you are
talking to a PHP developer, you will first have to repeat the common
parts of both languages, pointing out all the headaches that disappear
when moving from PHP to even imperative Haskell (static scoping
and IO typing means no accidental global variables or accidental
side-effects, much less manual-reading to figure out which parts
of some library API are functional, which have side-effects, etc.).

Then your friend has to start trusting the compiler (those unit
tests that only make sure that we don't break scoping disappear;
those defensively programmed runtime type checks and comment
annotations disappear in favour of real statically checked types; etc)
and libraries (much less worrying about whether some library
routine will modify its parameters in place; callbacks are no big
deal or new feature; many design patterns can actually be encoded
in libraries, rather than pre-processors; which means that small-scale
design patterns are worth a library; etc.).

Once that happens, a whole lot of thinking capacity is freed for
worrying about higher-level details, and you two will have an
easier time walking through high-level abstractions. Do not
try to lead your friends through higher-order abstractions in
Haskell when they are still worrying about small stuff like
scoping or type safety - that would be frightening.


Also sometimes when I write Haskell, my friend sits beside me
and watches.  When he asks (as a PHP programmer with some
C background), say, about my types, I can't give a brief explanation
like I could in other languages.


When looking through job adverts, I get the impression that nobody
is working in plain programming languages anymore: it is Java plus
Spring plus persistence framework plus web framework plus .., and
for PHP especially, it is some framework or content-management
system that just happens to be programmed and extended in PHP,
but otherwise has its own language conventions and configuration
languages.

If you think of monad transformers and the like as mini-frameworks,
implemented *without* changing the language conventions, should
they not be easier to explain than a PHP framework or preprocessor
that comes with its own syntax/semantics?


Yesterday I was writing a toy texture handler for OpenGL (for loading
and selecting textures).  He asked about my TextureT type.  I couldn't
explain it, because you couldn't even express such a thing in PHP or C.

 type TextureT = StateT Config

 -- Note that this is MonadLib.
 -- BaseM m IO corresponds to MonadIO m.
 selectTexture :: BaseM m IO => B.ByteString -> TextureT m ()


State monads are the ones that translate most directly to an
OOP pattern from the Smalltalk days: method chaining (each
method returns its object, so you can build chains of method
calls just as you can chain monad operations. The state is
held in the object (which is similar to holding a record in a
State monad instead of nesting State transformers, but
inheritance could be likened to nesting).

Of course, in imperative OOP languages, only programmer
discipline keeps you from modifying other objects as well,
while in Haskell, the type system sets safety boundaries
(not in the there is something wonderful you can't do
sense but in the you'd hurt someone if you'd do that sense).


I fear that my code is already too difficult to understand for
beginners, and it's getting worse.  But then I ask myself:  I've got a
powerful language, so why shouldn't I use that power?  After all I
haven't learnt Haskell to write C code with it.  And a new Haskell
programmer doesn't read my code to learn Haskell.  They first learn
Haskell and /then/ read my code.


It is necessary to understand enough of Haskell that one gets

Re: [Haskell-cafe] Re: checking types with type families

2010-07-02 Thread Claus Reinke

  class C a b | a->b where
    op :: a -> b



  instance C Int Bool where
    op n = n > 0



  data T a where
    T1 :: T a
    T2 :: T Int



  -- Does this typecheck?
  f :: C a b => T a -> Bool
  f T1 = True
  f T2 = op 3



The function f should typecheck because inside the T2 branch we know
that (a~Int), and hence by the fundep (b~Bool).  


Perhaps I'm confused, but there seems to be no link between
the call 'op 3' and 'a' in this example. While the 'desugaring' 
introduces just such a connection.



g :: C a b => TT a -> Bool
g TT1 = True
g (TT2 eq) = op (if False then cast eq undefined else 3)


If we add that connection to the original example, it typechecks
as well:

f :: forall a b. C a b => T a -> Bool
f T1 = True
f T2 = op (3::a)

We could also write:

f :: forall a b. C a b => T a -> Bool
f T1 = True
f T2 = (op :: a -> Bool) 3

Though there is a long-standing issue that we cannot write

f :: forall a b. C a b => T a -> Bool
f T1 = True
f T2 = (op :: a -> b) 3

as that results in the counter-intuitive

   Couldn't match expected type `Bool' against inferred type `b'
 `b' is a rigid type variable bound by
 the type signature for `f'
   at C:\Users\claus\Desktop\tmp\fd-local.hs:17:14
   In the expression: (op :: a -> b) 3
   In the definition of `f': f T2 = (op :: a -> b) 3

Which seems to be a variant of Oleg's example? I thought there
was a ticket for this already, but I can't find it at the moment. It
would be useful to have standard keywords (FD, TF, ..) to make
ticket querying for common topics easier, and to record the
status of FD vs TF in trac.


There used to be a difference the other way round as well
(more improvement with FDs than with TFs), which I found
while searching trac for the other issue:

http://hackage.haskell.org/trac/ghc/ticket/2239


From testing that example again with 6.12.3, it seems that
bug has since been fixed but the ticket not closed?

But we have no formal type system for fundeps that describes this, 
and GHC's implementation certainly rejects it. 


I've not yet finished Martin's paper, and your recent major work 
is also still on my reading heap, but just to get an idea of the 
issues involved: from CHR-based fundep semantics, the step 
to local reasoning would seem to encounter hurdles mainly in 
the interaction of local and non-local reasoning, right?


If we only wanted to add local reasoning, the standard solution
would be to make copies, so that the local steps don't affect the
global constraint store (somewhat more practical, we could add
tags to distinguish local and global stores; for sequential 
implementations, Prolog-style variable trails could probably be 
extended to account for constraints as well).


But we want local reasoning to make use of non-local constraints
and variable bindings, and we probably want to filter some, but
not all, local reasoning results back into the global store.

Are these the issues, or are there others?

Claus



Re: [Haskell-cafe] Re: checking types with type families

2010-07-02 Thread Claus Reinke

f :: forall a b. C a b => T a -> Bool
f T1 = True
f T2 = (op :: a -> b) 3

as that results in the counter-intuitive

   Couldn't match expected type `Bool' against inferred type `b'
 `b' is a rigid type variable bound by
 the type signature for `f'
   at C:\Users\claus\Desktop\tmp\fd-local.hs:17:14
   In the expression: (op :: a -> b) 3
   In the definition of `f': f T2 = (op :: a -> b) 3

Which seems to be a variant of Oleg's example? 


If it is, it is a simpler variant, because it has a workaround:

f :: forall a b. C a b => T a -> Bool
f T1 = True
f T2 = (op :: C a b1 => a -> b1) 3

While playing around with Oleg's example, I also found the
following two pieces (still ghc-6.12.3):

-- first, a wonderful error message instead of expected simplification
Prelude> (id :: (forall b. b~Bool=>b->b) -> (forall b. b~Bool=>b->b))

<interactive>:1:1:
    Couldn't match expected type `forall b. (b ~ Bool) => b -> b'
   against inferred type `forall b. (b ~ Bool) => b -> b'
  Expected type: forall b. (b ~ Bool) => b -> b
  Inferred type: forall b. (b ~ Bool) => b -> b
    In the expression:
    (id ::
   (forall b. (b ~ Bool) => b -> b)
   -> (forall b. (b ~ Bool) => b -> b))
    In the definition of `it':
    it = (id ::
    (forall b. (b ~ Bool) => b -> b)
    -> (forall b. (b ~ Bool) => b -> b))

-- second, while trying the piece with classic, non-equality constraints
Prelude> (id :: (forall b. Eq b=>b->b) -> (forall b. Eq b=>b->b))

<interactive>:1:0:
    No instance for (Show ((forall b1. (Eq b1) => b1 -> b1) -> b -> b))
  arising from a use of `print' at <interactive>:1:0-55
    Possible fix:
  add an instance declaration for
  (Show ((forall b1. (Eq b1) => b1 -> b1) -> b -> b))
    In a stmt of an interactive GHCi command: print it

Note that the second version goes beyond the initial problem,
to the missing Show instance, but the error message loses the
Eq constraint on b!

-- it is just the error message, the type is still complete
Prelude> :t (id :: (forall b. Eq b=>b->b) -> (forall b. Eq b=>b->b))
(id :: (forall b. Eq b=>b->b) -> (forall b. Eq b=>b->b))
 :: (Eq b) => (forall b1. (Eq b1) => b1 -> b1) -> b -> b

I don't have a GHC head at hand, perhaps that is doing better?

Claus

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Re: Mining Twitter data in Haskell and Clojure

2010-06-28 Thread Claus Reinke

Claus -- cafe5 is pretty much where it's at.  You're right, the proggy
was used as the bug finder, actually at cafe3, still using ByteString.


It would be useful to have a really tiny data source - no more than 
100 entries per Map should be sufficient to confirm or reject hunches 
about potential leaks by profiling. As it stands, my poor old laptop 
with a 32bit GHC won't be much use with your sample data, and 
now that the GHC bug is fixed, the size of the samples would only 
hide the interesting aspects (from a profiling perspective).


Having translated it from Clojure to Haskell to OCaml, 


Translating quickly between strict-by-default and non-strict-by-default
languages is always a warning sign: not only is it unlikely to make
best use of each language's strengths, but typical patterns in one
class of languages simply don't translate directly into the other.

I'm now debugging the logic and perhaps the conceptual 
data structures.  Then better maps will be tried.  


No matter what Maps you try, if they are strict in keys and 
non-strict in values, translating code from a strict language
needs careful inspection. Most of the higher-order functions
in Maps have issues here (eg, repeated use of insertWith
is going to build up unevaluated thunks, and so on). I'm
not even sure how well binary fares with nested IntMaps
(not to mention the occasional "too few bytes" error 
depending on strictness or package version - it would be 
useful to have a cabal file, or a README listing the versions 
of libraries you used).
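To make the insertWith point concrete, here is a small sketch (my code, not from the program under discussion, and using today's containers module names; at the time of this post the strict variant was spelled insertWith'). The value-lazy Map accumulates unevaluated (+) thunks under each key, while the value-strict variant forces each combined value as it is inserted:

```haskell
-- Illustrative only: counting occurrences with insertWith.
-- The lazy Map builds a chain of (+) thunks per key; the strict
-- Map forces the sum on every insert, avoiding the space leak.
import qualified Data.Map.Lazy   as ML
import qualified Data.Map.Strict as MS
import Data.List (foldl')

countLazy, countStrict :: [Int] -> Maybe Int
countLazy   ks = ML.lookup 1 (foldl' (\m k -> ML.insertWith (+) k 1 m) ML.empty ks)
countStrict ks = MS.lookup 1 (foldl' (\m k -> MS.insertWith (+) k 1 m) MS.empty ks)

main :: IO ()
main = print (countLazy [1,1,2], countStrict [1,1,2])
```

Both compute the same result; only the strict version keeps the heap free of pending additions.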


To binary package users/authors: is there a typed version 
of binary (that is, one that records and checks a representation 
of the serialized type before actual (de-)serialization)? It

would be nice to have such a type check, even though it
wouldn't protect against missing bytes or strictness changes. 
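For what it's worth, here is a hedged sketch of what such a typed layer could look like (not, as far as I know, an existing feature of the binary package): prefix each serialized value with a string rendering of its type via Typeable, and check the tag before deserializing.

```haskell
{-# LANGUAGE ScopedTypeVariables #-}
-- Sketch only: a type-tagged wrapper around binary's put/get.
import Data.Binary (Binary, Get, Put, get, put)
import Data.Binary.Get (runGet)
import Data.Binary.Put (runPut)
import Data.Typeable (Typeable, typeOf)

putTyped :: (Typeable a, Binary a) => a -> Put
putTyped x = put (show (typeOf x)) >> put x

getTyped :: forall a. (Typeable a, Binary a) => Get a
getTyped = do
  tag <- get
  if tag == show (typeOf (undefined :: a))
    then get
    else fail ("type tag mismatch: expected "
               ++ show (typeOf (undefined :: a)) ++ ", got " ++ tag)

main :: IO ()
main = print (runGet (getTyped :: Get Int) (runPut (putTyped (42 :: Int))))
```

The check costs a few bytes per value and, as noted, still cannot catch truncated input or strictness changes.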


Then a giant shootout will ensue, now that
Haskell finishes!  I'll post here when it's ready.


Just make sure Haskell isn't running with brakes on!-)

Claus





Re: [Haskell-cafe] Re: Mining Twitter data in Haskell and Clojure

2010-06-24 Thread Claus Reinke

I'll work with Simon to investigate the runtime, but would welcome any
ideas on further speeding up cafe4.


An update on this: with the help of Alex I tracked down the problem (an 
integer overflow bug in GHC's memory allocator), and his program now 
runs to completion.


So this was about keeping the program largely unchanged in 
order to keep the GHC issue repeatable for tracking? Or have 
you also looked into removing space leaks in the code (there 
still seemed to be some left in the intern/cafe5 version, iirc)?


Alexy: what does the latest version of the code look like - is 
there an uptodate text connecting all the versions/branches/tags, 
so that one can find the latest version, and is there a small/tiny 
data source for profiling purposes?


This is the largest program (in terms of memory requirements) I've ever 
seen anyone run using GHC.  In fact there was no machine in our building 
capable of running it, I had to fire up the largest Amazon EC2 instance 
available (68GB) to debug it - this bug cost me $26.  Here are the stats 
from the working program:


 392,908,177,040 bytes allocated in the heap


Ouch! If you keep on doing that, we Haskellers will be paged 
out of reality to make room for the heaps of GHC's executables!-)


Claus



Re: [Haskell-cafe] checking types with type families

2010-06-23 Thread Claus Reinke
I'm interested in situations where you think fundeps work 
and type families don't.  Reason: no one knows how to make 
fundeps work cleanly with local type constraints (such as GADTs).  

If you think you have such as case, do send me a test case.  


Do you have a wiki page somewhere collecting these examples?
I seem to recall that similar discussions have arisen before and
test cases have been provided but I wouldn't know where to 
check for the currently recorded state of differences.


Also, what is the difference between fundeps and type families
wrt local type constraints? I had always assumed them to be
equivalent, if fully implemented. Similar to logic vs functional
programming, where Haskellers tend to find the latter more 
convenient. Functional logic programming shows that there 
are some tricks missing if one just drops the logic part.
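For readers following along, a minimal side-by-side of the two styles under discussion (class, method and instance names are mine, purely illustrative):

```haskell
{-# LANGUAGE MultiParamTypeClasses, FunctionalDependencies, TypeFamilies #-}

-- fundep style: "a determines b" is stated as a relational constraint
class CFD a b | a -> b where
  opFD :: a -> b

-- type-family style: the dependency is a type-level function
class CTF a where
  type F a
  opTF :: a -> F a

instance CFD Int Bool where
  opFD = even

instance CTF Int where
  type F Int = Bool
  opTF = even

main :: IO ()
main = print (opFD (2 :: Int), opTF (3 :: Int))
```

Both give the same value-level behaviour here; the differences show up in how the compiler propagates the dependency, e.g. under local constraints such as GADT pattern matches.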


Claus




Re: [Haskell-cafe] Huffman Codes in Haskell

2010-06-23 Thread Claus Reinke

This seems like an example of list-chauvinism -- what Chris Okasaki
calls a communal blind spot of the FP community in Breadth-First
Numbering: Lessons from a Small Exercise in Algorithm Design --
http://www.eecs.usma.edu/webs/people/okasaki/icfp00.ps



Thanks for sharing; this was an interesting (and short!) read.

I would like to see other Haskeller's responses to this problem.  

..

How would you implement bfnum?  (If you've already read the paper,
what was your first answer?)


That paper is indeed a useful read, provided one does
try to solve the puzzle as well as carefully check any
imagined solutions. 

I happened on it when I was working on the first GHood 
prototype, so my struggles happen to be documented 
(together with the popular foldl vs foldl' strictness) in 
the GHood paper and online examples:


   http://community.haskell.org/~claus/GHood/

The observation tool and problem turned out to be well
suited for each other: by observing both the input and
the result tree, one can see how traversal of the result
tree drives evaluation of the input tree (or not, if one
gets the strictness wrong;-).

You might find this useful when testing your solutions
or trying to get an intuition about dynamic strictness.
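For readers who want something to test their own attempts against, here is one well-known solution (the lazy, circular one usually credited to Jones and Gibbons; not necessarily the one in the GHood examples, and Okasaki's paper develops a queue-based alternative). A stream carries, per level, the next free number, and the knot ties each level's final counter to the start of the next:

```haskell
-- Breadth-first numbering via a circular stream of per-level counters.
data Tree a = E | T (Tree a) a (Tree a) deriving (Show, Eq)

bfnum :: Tree a -> Tree Int
bfnum t = t'
  where
    -- circular: the counters produced by go feed back in, offset by one level
    (t', ks) = go t (1 : ks)
    go :: Tree a -> [Int] -> (Tree Int, [Int])
    go E ns = (E, ns)
    go (T l _ r) (k : ns) = (T l' k r', (k + 1) : ns'')
      where (l', ns')  = go l ns
            (r', ns'') = go r ns'

main :: IO ()
main = print (bfnum (T (T E 'b' E) 'a' (T E 'c' E)))
```

Getting the laziness of that knot right (or wrong) is exactly the kind of thing the GHood observations make visible.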

Claus



Re: [Haskell-cafe] TH instance code.

2010-06-22 Thread Claus Reinke

I have below duplicate code, but I don't know how to use TH instance code.


-- duplicate code start 
--

instance Variable PageType where
  toVariant = toVariant . show
  fromVariant x = fmap (\v -> read v :: PageType) $ fromVariant x


If this isn't an exercise to learn TH, you might also want to try
scoped type variables (7.8.7) to connect the 'read v' annotation
to the instance head:

http://www.haskell.org/ghc/docs/6.12.2/html/users_guide/other-type-extensions.html#scoped-type-variables
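For comparison, here is one TH-free way to remove the duplication, sketched with stand-ins (String plays the role of the real Variant type so the example is self-contained; class defaults do the work here, and scoped type variables, as suggested above, are another route when the method body must mention the instance type):

```haskell
-- Stand-in for the poster's Variable class: default methods mean
-- each concrete instance shrinks to a single line.
class (Show a, Read a) => Variable a where
  toVariant :: a -> String
  toVariant = show
  fromVariant :: String -> Maybe a
  fromVariant v = case reads v of
                    [(x, "")] -> Just x
                    _         -> Nothing

data PageType = Plain | Fancy deriving (Show, Read, Eq)

instance Variable PageType   -- all methods defaulted: no boilerplate

main :: IO ()
main = print (fmap toVariant (fromVariant "Fancy" :: Maybe PageType))
```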

Claus




Re: [Haskell-cafe] Re: How to browse code written by others

2010-06-20 Thread Claus Reinke

If you go this route, I will shamelessly promote hothasktags instead
of ghci.  It generates proper tags for qualified imports.

What do you mean by proper here?


I think Luke means that if you use qualified names then hothasktags can 
give you better location information than current ghci ctags.


GHCi :ctags doesn't output tags for qualified names (though it
probably should), but that isn't enough for proper handling of
qualified imports. I believe hothasktags is meant to output
multiple tags per target, one for each module referencing the
target. But that would rely on scoped tags.

I discussed this with Luke before and I summarized what would need to be 
done to improve ghci ctags to support qualified names better. Here is the 
post which explains it with an example:


http://permalink.gmane.org/gmane.comp.lang.haskell.cafe/73116


The problem with that example is that all occurrences of B.x point
to b.hs and C.x always points to c.hs, so it doesn't test the scoping
aspect of static tags. For instance, if you are in b.hs, and try ':tag C.x',
you'll still be sent to c.hs, even though that isn't in scope (see also
':help tag-priority').

If I add a file d.hs that is the same as a.hs but with the qualifiers
exchanged:

module D () where
import qualified B as C
import qualified C as B
localAct = do
 print B.x
 print C.x

and try to add corresponding scoped tags by hand, then I don't
see the scoping being taken into account (though my Vim is old
7.2 from August 2008). Depending on sort order of the tags file,
either all B.x point to b.hs or all B.x point to c.hs. So, one either
gets the wrong pointers in a.hs or in d.hs.

I did not go and add this to ghci ctags since I barely ever use qualified 
names, so it is a non-issue for me. Also, proper support for scoped tags 
would include some vim macro which (on ^] key-press) tries to find a 
qualified name first and only after a failure would try to find the plain 
name without qualification. So if one wants to use it well, he/she first 
needs to select the name in visual mode and only then press ^]. (Or one should 
use full ex commands for navigation like :tselect.)


One could redefine 'iskeyword' to include '.', but that may not
always be what one wants. haskellmode also uses a script,
though not yet for ^].


Your suggested use-case for such a feature is interesting, but
we're getting into an area where live calls to a scope resolution tool 
might make more sense.


Och I would like to have a vim with full incremental parser for Haskell 
... with things like AST being drawn in a pane, intellisense and 
completion on symbols but also on grammar/syntax, re-factoring options and 
all the goodies this could provide. It will not be a vim any more. 
Probably won't happen in my lifetime either but I can dream :)


The main thing standing in the way is Bram's strict interpretation
of ':help design-not'. It seems to be an overreaction to Emacs as
an OS, wanting to keep Vim small and simple. Ironically, adding
one standard portable way of communicating with external tools
would allow to make Vim smaller (removing all those special
case version of tool communication). And semantics-based
editing is wanted in all languages. So the feature has been
rising and is currently topping the votes:-)

http://www.vim.org/sponsor/vote_results.php

Just keep Vim for what it is good at, and add a way to interface
with language-aware external tools doing the heavy lifting (the
old Haskell refactorer had to use sockets for the Vim interface,
the Emacs interface was rather simpler in that respect).

Claus


PS: Thanks for your vim support.

You're welcome.





Re: [Haskell-cafe] What is Haskell unsuitable for?

2010-06-18 Thread Claus Reinke
If you want to use cool languages, you may have to get a cool job. I  
know: it's easy to say and harder to accomplish.


Most functional languages (e.g. Lisp, Haskell, ...) have a challenging  
time in industry since they require some savvy with multiple levels of  
higher abstractions and some savvy with mathematical ideas and some  
savvy with functional algorithms and data structures.


Javascript (the Basic of Lisps) isn't half bad, and is doing well.

But on the old topic of programming languages in industry: when 
I looked into PHP, I had my difficulties. But doing so helped me to 
understand an important aspect of programming language usage: 


   In the PhD world, people care more about their
   language than about their applications.

   In the PHP world, people care more about their
   applications than about their language.

Obviously, this is oversimplified in just about every way (you don't 
need a PhD to care about your language, not all PhDs care about

their language, some PhDs care about their applications, some
PHPs care about their language, many programmers care about
both their language and their applications, and so on). But as a 
starting point, and especially to shake up preconceived notions, 
it still helps to compress common prejudices this way.


Industry may well understand, and sometimes even already
know, when their tools are not the best. But their job is not
to use the best tools, it is to get things done (in a way that
ensures survival in their market). There is no point arguing 
with them about the quality of tools - they might even agree 
with you, and still not change anything. There is no point 
complaining that they don't understand our shiny advanced 
tools - you need to show them that these tools make a 
difference in getting things done (wrt market survival).


Industry also has its own levels of abstraction: if a
manager makes decisions, and programmers have to
implement them, problems with the tools are hidden
from the managerial level. If managers can buy solutions,
even problems with programmers are hidden from them.
And if developers can use frameworks, configurable
management systems, and configuration management
software, they are isolated from some programming
problems (*).

So you could think about the problem of selling a
new language as trying to sell a new implementation
of an existing API, for which a more-or-less working
implementation alread exists. And that is the good
case - the bad case is when you're competing with
frameworks and similar infrastructure, where the
programming language plays an increasingly small
role in building software.

Claus

(*) In the you can run, but you can't hide sense.

PS. There should be a Haskell for PHP developers,
   so that Haskellers understand that the advantages
   of Haskell begin long before we get into advanced
   type systems, so that PHPers see that most of what
   they want to do can be done in Haskell, with fewer
   problems and without using advanced features, 
   and so that Haskellers and PHPers get together 
   about getting things done.


   (challenge for adventurous minds: since PHP gets
   most of its functionality from C-libraries: would it
   be possible to use Haskell's FFI to hook into the
   C-libraries of a PHP installation and get the best
   of both worlds, as seen from a PHP developers
   perspective?-)




Re: [Haskell-cafe] Re: Mining Twitter data in Haskell and Clojure

2010-06-17 Thread Claus Reinke

I'll work with Simon to investigate the runtime, but would welcome any
ideas on further speeding up cafe4.


Just a wild guess, but those foldWithKeys make me nervous.

The result is strict, the step function tries to be strict, but if
you look at the code for Data.IntMap.foldr, it doesn't really
give you a handle for propagating that strictness. Unless
the foldr definition is inlined into your code, the generic,
non-strictly accumulating version might be called. Have
you checked that this isn't the case?

Also, shouldn't the two foldWithKeys on dm be merged?
And, while Maps are strict in their representation, putting 
them into a non-strict field of a data structure might lose 
that.
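Guessing at the shape of the code in question, a fold written as below gives an explicit handle on accumulator strictness, instead of relying on Data.IntMap's foldWithKey being inlined (newer containers versions also add primed, accumulator-strict folds for exactly this reason):

```haskell
-- Illustrative only: a strict left fold over an IntMap's pairs.
-- foldl' forces the accumulator at every step, so no thunk chain
-- builds up regardless of inlining decisions.
import qualified Data.IntMap as IM
import Data.List (foldl')

sumValues :: IM.IntMap Int -> Int
sumValues dm = foldl' (\acc (_k, v) -> acc + v) 0 (IM.toList dm)

main :: IO ()
main = print (sumValues (IM.fromList [(1,10),(2,20),(3,12)]))
```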


As I said, just guessing;-)
Claus




Re: [Haskell-cafe] How to browse code written by others

2010-06-15 Thread Claus Reinke
..ghci is able to generate the tags files for you. This allows you to 
jump to definitions of identifiers.


If you go this route, I will shamelessly promote hothasktags instead
of ghci.  It generates proper tags for qualified imports.


What do you mean by proper here? GHCi has the information
needed to generate more detailed tags, but the tags file format
did not support much more detail last time I checked. 

Tags for explicitly qualified names could be generated (and 
probably should be), though that would interact with the default 
identification of Haskell identifiers in Vim. But if you want to 
resolve imports properly (or at least a bit better, such as adding
import qualified as support, or pointing unqualified uses to
the correct import), you need more support from the tags 
mechanism.


There is a tags file format proposal here:

   http://ctags.sourceforge.net/FORMAT

that does (among other scopes) suggest explicitly file-scoped 
local tags
   
   file:  Static (local) tag, with a scope of the specified file.  
  When the value is empty, {tagfile} is used.


but in Vim 7.2 the help file still says

   http://vimdoc.sourceforge.net/htmldoc/tagsrch.html#tags-file-format

   The only other field currently recognized by Vim is file:
   (with an empty value).  It is used for a static tag.

If Vim somehow started supporting that extended file:scope
format without updating its docs, that would be good to know
(what version of Vim? where is this documented?).

Your suggested use-case for such a feature is interesting, but
we're getting into an area where live calls to a scope resolution 
tool might make more sense. If one is willing to depend on
Vim's Python-binding, one could keep a GHCi session live
in the background, track the current file/module, and use the 
":info" output's "-- Defined at" locations to find the correct definition. 


Btw, GHCi's :browse! gives information on where available
names come from, which can be useful for resolving unqualified
names (which map is that?) in unknown code.

Claus




Re: [Haskell-cafe] How to Show an Operation?

2010-06-14 Thread Claus Reinke
As others have pointed out, you can't go from operation to representation,
but you can pair operations and expressions with their representations.


This idea is also implemented in my little 'repr' package:

http://hackage.haskell.org/package/repr


And probably more completely/comfortably!-) The version I 
pointed to, which I have occasionally mentioned here over the 
years, was deliberately simplified and reduced. I think Lennart 
also had a fairly complete version. Wasn't there also a version

in one of the IRC bots?

This kind of trick also comes up in embedded DSLs, especially 
if used for embedded compilers / code generators (eg, I used 
to generate VRML and Javascript from a Haskell DSEL, and by 
changing the expression representation to Javascript, running
the Haskell-embedded expression would generate Javascript).

I first encountered this idea when learning about type classes: I
was trying to convince myself that overloading does not break
referential transparency, even though this example clearly
shows that the same expression can have different meanings,
depending only on type context.

Claus




Re: [Haskell-cafe] HP/Cygwin and Curl

2010-06-08 Thread Claus Reinke

Thanks Stephen--that was related to my original question, about using HP
with Cygwin. The answer seems to be No!--you must use MSYS (for real 
work).


The short version:

- Cygwin provides commandline tools, compilers and libraries
- MSYS provides commandline tools for the MinGW compilers and libraries

You can use the commandline tools from either Cygwin or MSYS,
but you need to compile and link with the compilers and libraries
from MinGW.

Cygwin's gcc produces binaries that live in a unix-emulation on top
of windows, and depend on a cygwin dll to act as a translator. MinGW's
gcc produces native windows binaries.

Claus

http://www.haskell.org/ghc/docs/6.12.2/html/users_guide/ghci-cygwin.html 




Re: [Haskell-cafe] HP/Cygwin and Curl

2010-06-08 Thread Claus Reinke

Your condensed summary was my understanding, but if I try to issue

   cabal install --reinstall cmu

It works every time from a MSYS shell, but with Cygwin I get


If MSYS works for you, why not stick with it?-)

Cygwin with MinGW tools should work, but it is very easy to mess up
with this setup (configure scripts might not notice that they are on
windows and use the wrong calling convention, Cygwin libraries and
compiler tools might be called accidentally if they are installed, and
so on). Of course, the more people give up and switch to MSYS, the
smaller the chances that such bugs (things that accidentally work when
using MSYS, or with a certain directory layout) get reported and fixed.


   Linking dist\build\cmu\cmu.exe ...
  C:\Program Files\Haskell Platform\2010.1.0.0\lib\..\mingw\bin\windres:
can't open temporary file `\/cca04932.irc': No such file or directory
   cabal.exe: Error: some packages failed to install:
  cmu-1.2 failed during the building phase. The exception was:
  ExitFailure 1

('cmu' is just an example; the same behaviour seems apparent whatever the
package; I see something very similar when I ask GHC to compile hello.hs.)


You could try running 'ghc -v hello.hs' and check the windres command
for oddities. I have no problems building a simple hello world from Cygwin:

   $ uname -a
   CYGWIN_NT-6.1-WOW64 VAIO 1.7.5(0.225/5/3) 2010-04-12 19:07 i686 Cygwin

   $ type ghc
   ghc is hashed (/cygdrive/c/haskell/ghc/ghc-6.12.2.20100522/bin/ghc)

   $ ghc tst.hs

   $ ./main.exe
   Hello, World!


The general answer I seem to have been getting (and responses I have seen
elsewhere top this problem) is 'don't expect the Haskell tools to work 
with

Cygwin'.


That tends to be a self-fulfilling prophecy, similar to 'don't expect the
Haskell tools to work on Windows (because so many people rely on unixisms)'.
But the result is that problems do surface more often in this configuration.
If you want to investigate, you could file a report on the HP trac.

Claus


At any rate it seems that, for some people at least, the latest version of
the Haskell tools won't work when launched from Cygwin Bash.

Chris





Re: [Haskell-cafe] How to Show an Operation?

2010-06-07 Thread Claus Reinke
 If I have a problem where I have to select from a set of operations, 
 how

 would I print the result?

 Example: If I can chose from (x+y), (x*y), (x^2+y)...
 and I feed them all into my problem solver
 and it finds that (x*y) is right, how can I print that string?


As others have pointed out, you can't go from operation to representation,
but you can pair operations and expressions with their representations.

Unless your problem solver just tries out a list of known expressions,
you'll probably end up with a proper expression representation as an
algebraic datatype, and a way to bind the variables. However, if you
really only want arithmetic expressions and their values, you could
define your own instances of the classes defining those operations.

A small example of this approach can be found here:

   http://community.haskell.org/~claus/misc/R.hs

It pairs up values with String representations of the expressions
that led to those values

   data R a = R { rep:: String, val:: a }

and defines a Num instance for 'Num a => R a'

Then you can do things like (more examples in the source):

   -- expressions at default type: show the value
   *R> foldl (+) 0 [1..4]
   10
   *R> foldr (+) 0 [1..4]
   10

   -- expressions at 'R Int' types: show the representation
   *R> foldl (+) 0 [1..4]::R Int
   ((((0 + 1) + 2) + 3) + 4)

   *R> foldr (+) 0 [1..4]::R Int
   (1 + (2 + (3 + (4 + 0))))

   *R> flip (foldr (+)) [1..4]::R Int->R Int
   \x->(1 + (2 + (3 + (4 + x))))

   *R> flip (foldl (+)) [1..4]::R Int->R Int
   \x->((((x + 1) + 2) + 3) + 4)

This approach does not easily scale to more complex expressions,
and the code was meant for small demonstrations only, but it might
give you some ideas.
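For readers without access to R.hs, here is a minimal reconstruction of the kind of Num instance it defines (the actual file may differ in details; note also that the [1..4] syntax in the session above needs an Enum instance, avoided here with an explicit list):

```haskell
-- Pair each value with a String rendering of the expression that built it.
data R a = R { rep :: String, val :: a }

instance Show (R a) where
  show = rep   -- showing an R value prints its representation

instance Num a => Num (R a) where
  R x a + R y b  = R ("(" ++ x ++ " + " ++ y ++ ")") (a + b)
  R x a * R y b  = R ("(" ++ x ++ " * " ++ y ++ ")") (a * b)
  R x a - R y b  = R ("(" ++ x ++ " - " ++ y ++ ")") (a - b)
  abs (R x a)    = R ("abs " ++ x) (abs a)
  signum (R x a) = R ("signum " ++ x) (signum a)
  fromInteger i  = R (show i) (fromInteger i)

main :: IO ()
main = print (foldl (+) 0 [1,2,3,4] :: R Int)
```

Numeric literals go through fromInteger, so the same expression yields a plain number at type Int and a representation at type R Int.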

Claus





Re: [Haskell-cafe] Chuch encoding of data structures in Haskell

2010-05-27 Thread Claus Reinke
The approach is so simple and trivial that it must have occurred to 
people a hundred times over. Yet I do not find any other examples of 
this. Whenever I google for church encoding the examples don't go beyond 
church numerals.


Am I googling for the wrong keywords?


You might find Typing Record Concatenation for Free
interesting:

   http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.49.401

Claus



Re: [Haskell-cafe] Re: How do you rewrite your code?

2010-03-04 Thread Claus Reinke
All my code, whether neat or not so neat is still way too concrete, too 
direct.
I think the correct answer is one should try to find abstractions and 
not code straight down to the point. Which to me is still a really tough 
one, I have to admit.


Taking this cue, since you've raised it before, and because the current
thread holds my favourite answer to your problem:

   Small moves, Ellie, small moves.. :-)

Don't think of finding abstractions as an all-or-nothing problem.
One good way of finding good abstractions is to start with straight
code, to get the functionality out of the way, then try to abstract over
repetitive details, improving the code structure without ruining the
functionality. The abstractions you're trying to find evolved this way,
and can be understood this way, as can new abstractions that have
not been found yet.

There are, of course, problems one cannot even tackle (in any 
practical sense) unless one knows some suitable abstractions, and 
design pattern dictionaries can help to get up to speed there. But 
unless one has learned to transform straight code into nicer/more 
concise/more maintainable code in many small steps, using other 
people's nice abstractions wholesale will remain a "Chinese room"
style black art.

For instance, the whole point of refactoring is to separate general
code rewriting into rewrites that change observable behaviour (API 
or semantics), such as bug fixes, new features, and those that don't 
change observable behaviour, such as cleaning up, restructuring

below the externally visible API, and introducing internal abstractions.
Only the latter group fall under refactoring, and turn out to be a nice
match to the equational reasoning that pure-functional programmers
value so highly. 

What that means is simply that many small code transformations are 
thinkable without test coverage (larger code bases should still have 
tests, as not every typo/thinko is caught by the type system). Even 
better, complex code transformations, such as large-scale refactorings, 
can be composed from those small equational-reasoning-based 
transformations, and these small steps can be applied *without having 
to understand what the code does* (so they are helpful even for 
exploratory code reviews: transform/simplify the code until we 
understand it - if it was wrong before, it will still be wrong the same 
way, but the issues should be easier to see or fix).
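A tiny instance of such a meaning-preserving step, applied without needing to understand the surrounding code (the names here are mine, purely illustrative):

```haskell
-- Map fusion as an equational refactoring step:
--   map f . map g  =  map (f . g)
import Data.Char (toUpper)

shout, shout' :: [String] -> [String]
shout  = map (map toUpper) . map reverse   -- before
shout' = map (map toUpper . reverse)       -- after fusion

main :: IO ()
main = print (shout ["olleh", "dlrow"] == shout' ["olleh", "dlrow"])
```

The step is justified by the map laws alone, so it is safe whether or not one knows what the pipeline is for.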



From a glance at this thread, it seems mostly about refactorings/
meaning-preserving program transformations, so it might be
helpful to keep the customary distinction between rewriting and
refactoring in mind. A couple of lessons we learned in the old
refactoring functional programs project:

1 refactoring is always with respect to a boundary: things within
   that boundary can change freely, things beyond need to stay
   fixed to avoid observable changes. It is important to make
   the boundary, and the definition of observable change
   explicit for every refactoring session (it might simply mean 
   denotational equivalence, or operational equivalence, or

   API equivalence, or performance equivalence, or..)

2. refactoring is always with respect to a goal: adding structure,
   removing structure, changing structure, making code more
   readable, more maintainable, more concise, .. These goals
   often conflict, and sometimes even lie in opposite directions
    (eg., removing clever abstractions to understand what is 
   going on, or adding clever abstractions to remove boilerplate),
   so it is important to be clear about the current goal when 
   refactoring.


Hth,
Claus

PS. Obligatory nostalgia:

- a long time ago, HaRe did implement some of the refactorings
   raised in this thread (more were catalogued than implemented,
   and not all suggestions in this thread were even catalogued, but
   the project site should still be a useful resource)

   http://www.cs.kent.ac.uk/projects/refactor-fp/
   
   A mini demo that shows a few of the implemented refactorings

   in action can be found here:

   http://www.youtube.com/watch?v=4I7VZV7elnY
   
- once upon a time, a page was started on the haskell wiki, to

   collect experiences of Haskell code rewriting in practice (the
   question of how to find/understand advanced design patterns
   governs both of the examples listed there so far, it would be
   nice if any new examples raised in this thread would be added
   to the wiki page)

   http://www.haskell.org/haskellwiki/Equational_reasoning_examples




Re: [Haskell-cafe] Questions about haskell CPP macros

2009-07-13 Thread Claus Reinke

I am trying to improve the error reporting in my sendfile library, and I
know I can find out the current file name and line number with something
like this:

{-# LANGUAGE CPP #-}
main = putStrLn (__FILE__ ++ ":" ++ show __LINE__)

This outputs:
test.hs:2

Unfortunately, if your file is in a hierarchy of folders, this flat file
name doesn't give much context. Is there a macro to find out the current
module? IE if I had a module Foo.Bar.Car.MyModule, I would like to be able
to output something like this on error:
Foo.Bar.Car.MyModule:2


Sounds like a job for cabal or ghc, to define appropriate macros for
package and module when compiling the source?


Any help is appreciated!


For actually making use of such information, see 

   http://hackage.haskell.org/trac/ghc/wiki/ExplicitCallStack 
   http://hackage.haskell.org/trac/ghc/wiki/ExplicitCallStack/StackTraceExperience


and also the recent thread on how to improve the quality of +RTS -xc
output via mapException (hmm, can't reach the archive at the moment,
one subject was Should exhaustiveness testing be on by default?, about
May; http://www.haskell.org/mailman/listinfo/glasgow-haskell-users ).

If you really mean any help, you could also use Template Haskell:-)

   {-# LANGUAGE TemplateHaskell #-}
   module Oh.Hi where 
   
   import Language.Haskell.TH
   
   main = print $( location >>= \(Loc f p m s e) -> 
   stringE (f++":"++p++":"++m++":"++show s++":"++show e))


Claus




Re: [Haskell-cafe] Haskell on JVM

2009-06-26 Thread Claus Reinke

For example, Clojure lacks proper tail recursion optimization due to
some missing functionality in the JVM. But does anybody know the  
details?


|Basically, the JVM lacks a native ability to do tail calls. It does  
|not have an instruction to remove/replace a stack frame without 
|executing an actual return to the calling method/function.


There is a conflict between preserving stack layout and efficient
tail calls. Unfortunately, stack inspection appears to be used for
security aspects in JVM. That doesn't make tail calls impossible,
but may have slowed down progress as this argument always
comes up in discussing tail calls on the JVM (since its byte code
isn't just an internal detail, but an externally used API).

None of the various partial workarounds are quite satisfactory
(jumps work only locally, there is an upper size limit on how
much code can be considered as local, trampolines return before
each call, there are alternatives that clear the stack not before
each call, but every n calls, ..., see the various Haskell to Java/JVM
papers).

I was curious about the current state (the issue is as old as JVM),
and here's what I've found so far (more concrete/official info would
be appreciated):

   tail calls in the VM [2007]
   http://blogs.sun.com/jrose/entry/tail_calls_in_the_vm

   RFE: Tail Call Optimization [2002]
   http://bugs.sun.com/bugdatabase/view_bug.do?bug_id=4726340

   [2009]
   http://wikis.sun.com/display/mlvm/TailCalls

   Tail Call Optimization in the Java HotSpot(TM) VM [2009]
   http://www.ssw.uni-linz.ac.at/Research/Papers/Schwaighofer09Master/

Still cooking, still not done, it seems, but perhaps closer than ever?

Claus




Re: [Haskell-cafe] Haskell on JVM

2009-06-26 Thread Claus Reinke
JVM 7 has tail calls, 


Source, please? JSR-292 seems the most likely candidate so far,
and its draft doesn't seem to mention tail calls yet. As of March
this year, the people working on tail calls for mlvm [1], which
seems to be the experimentation ground for this, did not seem to 
expect any fast route:


http://mail.openjdk.java.net/pipermail/mlvm-dev/2009-March/000405.html

There have been years of rumours and plans, so it would be 
nice to have concrete details, before any fp-on-jvm implementation
design starts to rely on this.

Claus

[1] http://openjdk.java.net/projects/mlvm/




Re: [Haskell-cafe] Error in array index.

2009-06-25 Thread Claus Reinke

 It's too bad that indexes are `Int` instead of `Word` under
 the hood. Why is `Int` used in so many places where it is
 semantically wrong? Not just here but also in list indexing...
 Indices/offsets can only be positive and I can't see any good
 reason to waste half the address space -- yet we encounter
 this problem over and over again.


Readers who disliked the above also disliked the following:

   index out of range error message regression
   http://hackage.haskell.org/trac/ghc/ticket/2669

   Int / Word / IntN / WordN are unequally optimized
   http://hackage.haskell.org/trac/ghc/ticket/3055

   Arrays allow out-of-bounds indexes
   http://hackage.haskell.org/trac/ghc/ticket/2120
   ..

Not to mention that many serious array programmers use their
own array libraries (yes, plural:-(, bypassing the standard, so 
their valuable experience/expertise doesn't result in improvements 
in the standard array libraries (nor have they agreed on a new API). 

If any of this is affecting your use of GHC or libraries, you might 
want to add yourself to relevant tickets, or add new tickets. Small
bug fixes, alternative designs and grand array library reunification 
initiatives might also be welcome.


Claus

PS. You could, of course, rebase your array indices to make
   use of the negatives, so the address space isn't wasted, just
   made difficult to use.
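   The rebasing mentioned in the PS can be sketched as follows (a toy
   illustration; `Data.Array` is standard, the name `rebased` is made up).
   Haskell array bounds are arbitrary Ints, so an array may start below
   zero, putting the negative half of the index range to use:

```haskell
import Data.Array

-- An array of six elements whose bounds start in the negatives,
-- so the "wasted" half of Int's range becomes usable index space.
rebased :: Array Int Char
rebased = listArray (-3, 2) "abcdef"

main :: IO ()
main = do
  print (bounds rebased)   -- (-3,2)
  print (rebased ! (-3))   -- 'a'
  print (rebased ! 2)      -- 'f'
```

   As the PS says, the space isn't wasted this way, just awkward: every
   index computation now has to carry the offset around.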




[Haskell-cafe] HaRe (the Haskell Refactorer) in action - short screencast

2009-06-22 Thread Claus Reinke
I've heard that many Haskellers know HaRe only as a rumour. It has 
been many years since the original project finished, and HaRe hasn't
been maintained for quite some time, so just pointing to the sources 
isn't quite the right answer. 


The sources are still available, and build with GHC 6.8.3 (I had to fix
one lineending issue on windows, iirc, and copy one old bug fix that
hadn't made it into the latest release), but there is currently no one with 
the time or funding for maintenance, fixing bugs, making releases, or 
ironing out practical issues. If anyone would provide funding, people to 
do the work could be found, but the effort would probably be better 
spent on reimplementing the ideas in a GHC / Cabal environment 
(instead of the Haskell'98 environment targeted by our Refactoring 
Functional Programs project). If you've got the funding, please get
in touch - even a three month run could get something started at least!-)

In principle, the project experiences and lessons learned are quite well 
documented at the project site 


   http://www.cs.kent.ac.uk/projects/refactor-fp/

but that doesn't give anyone an idea of what working with HaRe was 
like. With the recent interest in screencasts, I thought I'd make a little
demo, for archival purposes. Nothing fancy, using only features that 
were already present in HaRe 0.3 (end of 2004), and not all of those, 
on a tiny 2-module example (screenspace is a bit crowded to keep 
the text readable on YouTube). 

I hope it might give a rough idea of what the papers, slides and reports 
are talking about, for Haskellers who weren't around at the time:


   http://www.youtube.com/watch?v=4I7VZV7elnY

For the old HaRe team,
Claus

--- YouTube video description:
HaRe - the Haskell Refactorer (a mini demo) [4:10]

The Haskell Refactorer HaRe was developed in our EPSRC project 
   Refactoring Functional Programs 
   http://www.cs.kent.ac.uk/projects/refactor-fp/ 
Building on Programatica's Haskell-in-Haskell frontend and Strafunski's 
generic programming library, it supported module-aware refactorings 
over the full Haskell'98 language standard. Interfaces to the refactoring 
engine were provided for both Vim and Emacs (this demo uses HaRe 
via GVim on Windows).


While HaRe has continued to see occasional contributions by students 
and researchers, who use its Haskell program transformation API as a 
platform for their own work, it is not currently maintained. As the Haskell 
environment marches on, this demo is meant to record a snapshot of 
what working with HaRe could be like when it still built (here with GHC 6.8.3). 

The lessons learned (note, eg, the preservation of comments, and the 
limited use of pretty-printing, to minimize layout changes) are well 
documented at the project site, and should be taken into account 
when porting the ideas to the GHC Api, or other Haskell frontends.





Re: [Haskell-cafe] Code walking off the right edge of the screen

2009-06-21 Thread Claus Reinke

I (too) often find myself writing code such as this:

if something
  then putStrLn "howdy there!"
  else if somethingElse
    then putStrLn "howdy ho!"
    else ...


1. recognize something odd. done.
2. look for improvements. good.
3. define suitable abstractions for your special case
4. look for general abstractions covering the special case


I recall reading some tutorial about how you can use the Maybe monad
if your code starts looking like this, but as you can see, that
doesn't really apply here. something and somethingElse are simply
booleans and each of them have different actions to take if either of
them is True.


Maybe, or MaybeT (a monad transformer adding Maybe-style
functionality to your base monad, in this case IO) can be used here
as well, but may not be the first choice. As has been pointed out,
guards would seem to cover your use case:

e something somethingElse
  | something     = putStrLn "howdy there!"
  | somethingElse = putStrLn "howdy ho!"
  | otherwise     = putStrLn "hmm.. hi?"

If you need something more, you can define your own abstractions
to cover the repeated patterns in your code. Perhaps a function to
select one of a list of (condition,action) pairs:

g something somethingElse = oneOf
  [(something,     putStrLn "howdy there!")
  ,(somethingElse, putStrLn "howdy ho!")
  ,(otherwise,     putStrLn "hmm.. hi?")
  ]
  where oneOf = foldr (\(c,a) r -> if c then a else r) (error "no match in oneOf")

or some combinators for alternatives of guarded actions instead

h something somethingElse =
  (something -: putStrLn "howdy there!")
  `orElse`
  (somethingElse -: putStrLn "howdy ho!")
  `orElse`
  (otherwise -: putStrLn "hmm.. hi?")
  where
  c -: a       = when c a >> return c
  a `orElse` b = a >>= \ar -> if ar then return True else b

Now, the former can be quite sufficient for many situations, but it
doesn't quite feel like a general solution, and the latter clearly shows
the dangers of defining your own abstractions: if you overdo it, anyone
reading your code will need a translator!-) Which is where the search
for general abstractions comes in - we're looking for something that
will not only cover this special use case, but will be more generally
useful, in a form that only needs to be understood once (not once per
project).

And that brings us to things like MonadPlus: you don't have to use
the Monad combinator for sequencing, but if you do (as in IO),
then it is natural to ask for a second combinator, for alternatives.
Now, IO itself doesn't have a MonadPlus instance, but we can
use a monad transformer to add such functionality. Using MaybeT,
that will be similar to version 'h' above:

i something somethingElse = runMaybeT $
  (guard something     >> lift (putStrLn "howdy there!"))
  `mplus`
  (guard somethingElse >> lift (putStrLn "howdy ho!"))
  `mplus`
  (                       lift (putStrLn "hmm.. hi?"))

and it can also be used for related patterns, such as running
a sequence of actions until the first failure:

j something somethingElse = runMaybeT $ do
  (guard something     >> lift (putStrLn "howdy there!"))
  (guard somethingElse >> lift (putStrLn "howdy ho!"))
  (                       lift (putStrLn "hmm.. hi?"))

or other combinations of these two patterns.

MaybeT is not the only possibility, and not always the best,
but Maybe is perhaps the best known instance of MonadPlus
(and the only thing that needs to change to use other MonadPlus
instances is the 'runMaybeT').

Hth,
Claus

PS. for a more extensive example of MaybeT vs indentation creep, see 
http://www.haskell.org/haskellwiki/Equational_reasoning_examples#Coding_style:_indentation_creep_with_nested_Maybe


---
import Control.Monad
import Control.Monad.Trans

data MaybeT m a = MaybeT { runMaybeT :: m (Maybe a) }

instance Monad m => Monad (MaybeT m) where
  return   = MaybeT . return . Just
  a >>= b  = MaybeT $ runMaybeT a >>= maybe (return Nothing) (runMaybeT . b)
  fail msg = mzero

instance Monad m => MonadPlus (MaybeT m) where
  mzero       = MaybeT $ return Nothing
  a `mplus` b = MaybeT $ runMaybeT a >>= maybe (runMaybeT b) (return . Just)

instance MonadTrans MaybeT where
  lift m = MaybeT $ m >>= return . Just


main = do
  putStrLn "e:" >> mapM_ (uncurry e) args
  putStrLn "f:" >> mapM_ (uncurry f) args
  putStrLn "g:" >> mapM_ (uncurry g) args
  putStrLn "h:" >> mapM_ (uncurry h) args
  putStrLn "i:" >> mapM_ (uncurry i) args
  putStrLn "j:" >> mapM_ (uncurry j) args
  where args = [(x,y) | x <- [True,False], y <- [True,False]]







Re: [Haskell-cafe] IORef memory leak

2009-06-19 Thread Claus Reinke

It is not possible to write a modifyIORef that *doesn't* leak memory!
  

Why? Or can one read about it somewhere?


Possibly, Don meant that 'modifyIORef' is defined in a way that
does not allow enforcing evaluation of the result of the modification
function (a typical problem with fmap-style library functions):

    modifyIORef ref f = readIORef ref >>= writeIORef ref . f

No matter whether 'f' is strict or not, the 'writeIORef r' doesn't
evaluate its result, just stores the unevaluated application:

r <- newIORef 0
modifyIORef r (\x -> trace "done" $ x+1)
modifyIORef r (\x -> trace "done" $ x+1)
readIORef r
   done
   done
   2

If it had been defined like this instead

    mRef r ($) f = readIORef r >>= (writeIORef r $) . f

it would be possible to transform the strictness of 'writeIORef r'
to match that of 'f':

r <- newIORef 0
mRef r ($) (\x -> trace "done" $ x+1)
mRef r ($) (\x -> trace "done" $ x+1)
readIORef r
   done
   done
   2
r <- newIORef 0
mRef r ($!) (\x -> trace "done" $ x+1)
   done
mRef r ($!) (\x -> trace "done" $ x+1)
   done
readIORef r
   2
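
For completeness, a sketch of a modify that does enforce evaluation
(the name `modifyIORefStrict` is made up to avoid clashing with the
`modifyIORef'` that later versions of base export with this behaviour):

```haskell
import Data.IORef

-- Force the new value with seq before storing it, so no chain of
-- unevaluated (+1) applications can build up inside the IORef.
modifyIORefStrict :: IORef a -> (a -> a) -> IO ()
modifyIORefStrict ref f = do
  x <- readIORef ref
  let x' = f x
  x' `seq` writeIORef ref x'

main :: IO ()
main = do
  r <- newIORef (0 :: Int)
  mapM_ (\_ -> modifyIORefStrict r (+1)) [1 .. 1000000 :: Int]
  readIORef r >>= print   -- 1000000, in constant space
```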

Claus




Re: [Haskell-cafe] Runtime strictness analysis for polymorphic HOFs?

2009-06-15 Thread Claus Reinke

I was recently trying to figure out if there was a way, at runtime, to do
better strictness analysis for polymorphic HOFs, for which the strictness of
some arguments might depend on the strictness of the strictness of function
types that are passed as arguments [1]. As an example, consider foldl. The
'seed' parameter of foldl can be made strict as long as the binary function
used for the fold is strict in its arguments. Unfortunately, because
strictness analysis is done statically, Haskell can't assume anything about
the strictness of the binary function - assuming we only compile one
instance of foldl, it must be the most conservative version possible, and
that means making the seed parameter lazy. :-(


As has been pointed out, strictness analysis is not decidable. That doesn't
mean it is undecidable - running the code and seeing what it does gives a
naive semi-decision procedure. So strictness analysis can be made more
precise by using more and more runtime information; unfortunately it also
becomes less and less useful as a static analysis in the process (in practice,
not just termination is important, but also efficiency of the analyses, so an
analysis would tend to become unusable before it became possibly non-
terminating - there are trade-offs between precision and efficiency).

For typical higher-order functions, there's a simple workaround, already
employed in the libraries, namely to split the definition into a simple front
that may be inlined, and a recursive back where the parameter function 
is no longer a parameter. Then, after inlining the front at the call site, the

back can be compiled and analysed with knowledge of the parameter
function. See the comment above foldl:

http://www.haskell.org/ghc/docs/latest/html/libraries/base/src/GHC-List.html#foldl
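
The split described above can be sketched like this (a simplified
version of the pattern used in GHC.List; the names are illustrative):

```haskell
-- The small non-recursive front is marked INLINE; once it is inlined
-- at a call site, the recursive back 'go' is compiled with the
-- parameter function known, so strictness analysis can see through it.
myFoldl :: (b -> a -> b) -> b -> [a] -> b
{-# INLINE myFoldl #-}
myFoldl f z0 xs0 = go z0 xs0
  where
    go z []     = z
    go z (x:xs) = go (f z x) xs

main :: IO ()
main = print (myFoldl (+) 0 [1 .. 10 :: Int])   -- 55
```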

Claus




Re: [Haskell-cafe] Re: Documentation on hackage

2009-06-15 Thread Claus Reinke

who needs this kind of documentation?

http://hackage.haskell.org/packages/archive/tfp/0.2/doc/html/Types-Data-Num-Decimal-Literals.html


The code below is shown under 'Source' links
in that documentation. I don't understand it,
but it seems everything is generated automatically.
What should the author do to avoid those comments?


Older versions of haddock used to define a CPP constant that
could be used to tweak the code for haddock. Since newer
versions nominally support every feature that GHC supports,
that CPP support was dropped.

(a) it would be nice if haddock still identified itself via CPP,
   just like GHC does. Then users would at least have the
   option to work around current limitations / bugs in haddock,
   as well as present tweaked presentations of their interfaces.

(b) it seems sensible for haddock to provide two options for
   handling template Haskell code:
   - document the TH code
   - document the code generated by TH
   In this case, the first option would seem more suited.

Claus


module Types.Data.Num.Decimal.Literals where

import Types.Data.Num.Decimal.Literals.TH

import Types.Data.Num.Decimal.Digits
import Types.Data.Num.Ops

$( decLiteralsD "D" "d" (-1) (1) )

--



Re: [Haskell-cafe] How to know the build dependencies?

2009-06-14 Thread Claus Reinke

 I am learning to use cabal for my code.
 Just when I start, I met a question, is there an easy way to find
out what packages my code depends?


If you've managed to get your code to compile,

   ghc --show-iface Main.hi

is perhaps the easiest way (ghc --make and ghci will also report 
package dependencies as they encounter them).


If you're looking for the package for a particular module, ghc-pkg
can help

   ghc-pkg find-module Control.Concurrent
   c:/ghc/ghc-6.10.3\package.conf:
   base-3.0.3.1, base-4.1.0.0

   ghc-pkg find-module Data.Map
   c:/ghc/ghc-6.10.3\package.conf:
   containers-0.2.0.1

If you're looking for a minimal set of imports before hunting for 
packages, ghc's -ddump-minimal-imports will create a file Main.imports 
with that information. You could then run ghc-pkg find-module over
that list.

These are not the only options. Perhaps the available tools 
need to be advertized more?-)


Claus




Re: [Haskell-cafe] curious about sum

2009-06-14 Thread Claus Reinke
A much better idea than making sum strict would simply be to add a sum'.


Even better to abstract over strictness, to keep a lid on code duplication?

   {-# LANGUAGE TypeOperators #-}

   sum  = foldlS ($)  (+) 0
   sum' = foldlS ($!) (+) 0

   -- identity on constructors of t (from a), modulo strictness in a
    type a :-? t = (a -> t) -> (a -> t)

    foldlS :: (b :-? ([a] -> b)) -> (a -> b -> b) -> (b -> [a] -> b)
    foldlS ($) op n []    = n
    foldlS ($) op n (h:t) = (foldlS ($) op $ (op h n)) t

Strictness is encoded as a constructor transformer - ($) keeps the
constructor in question unchanged, ($!) makes it strict. Also works 
with container types (Maps strict or not strict in their elements can
share the same strictness-abstracted code, for instance). Though 
sometimes there is more than one strictness choice to make in the 
same piece of code..
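
A self-contained restatement of the idea, for experimenting (identical
in spirit to the definition above, minus the TypeOperators sugar, so it
runs on its own):

```haskell
-- Thread the application operator through a left fold: ($) keeps the
-- accumulator lazy, ($!) forces it at every step.
foldlS :: ((b -> [a] -> b) -> b -> [a] -> b)
       -> (a -> b -> b) -> b -> [a] -> b
foldlS _   _  n []    = n
foldlS app op n (h:t) = (foldlS app op `app` op h n) t

main :: IO ()
main = do
  print (foldlS ($!) (\x acc -> x + acc) 0 [1 .. 100000 :: Int])  -- constant space
  print (foldlS ($)  (\x acc -> x + acc) 0 [1 .. 100000 :: Int])  -- builds thunks
```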


Claus




Re: [Haskell-cafe] Debugging misbehaving multi-threaded programs

2009-06-11 Thread Claus Reinke

I've written a multi-threaded Haskell program that I'm trying to
debug. Basically what's happening is the program runs for a while, and
then at some point one of the threads goes crazy and spins the CPU
while allocating memory; this proceeds until the system runs out of
available memory. I can't fix this without figuring out what code is
being executed in the loop (or at least which thread is misbehaving,
which would narrow things down a lot).
..
Does anyone have any tips for dealing this? Have other people run into
similar problems? I'm out of ideas, so I'm open to any suggestions.


Don't know whether this still works, but there was a Concurrent
Haskell Debugger here:

http://www.informatik.uni-kiel.de/~fhu/chd/

The idea being that you put an indirection module between your
code and the Concurrent Haskell imports, and then instrument
the indirections to give you more information (they had built 
more tools on top of that idea).


In a similar direction, I once suggested a shell-jobs-like thread
interface for GHCi, in the context of this _|_-ed ticket:

http://hackage.haskell.org/trac/ghc/ticket/1399#comment:3

Claus




Re: [Haskell-cafe] Monad transformer responsibilities

2009-06-05 Thread Claus Reinke

From what I understand, the current best practices are to build your
package dependencies like so:

Parsec    MyMonadT
  MyMonadT_Parsec   -- orphan instances go here
    ProjectPackage

This does mean splitting up your project into three packages, but
decouples the orphan instance into its own package where it can do the
least damage :)


Let's assume the above are modules rather than packages (same
difference, but fewer indirections in the explanation to follow): if
ProjectPackage imports MyMonadT_Parsec and is itself meant
to be imported by other modules, then that decoupling breaks down
(unless MyMonadT is a private class, in which case there is only
one provider of instances, who can try to manage the situation).


At the very least it should go into its own module which can be
imported only in the places that need it, similar to
Control.Monad.Instances defining the orphan instance for Monad ((-)
r).


Orphan instances aren't themselves bad, I think. But since current
technology doesn't allow for import/export control, they always
indicate a problem, hence the warning.  When possible, the problem
should be avoided, by making either the class or the type private
(if necessary by wrapping a common type in a newtype). That 
doesn't mean that the problem can always be avoided, just that 
there is something that needs attention. Back to that import hierarchy:



Parsec    MyMonadT
  MyMonadT_Parsec   -- orphan instances go here
    ProjectPackage


If ProjectPackage is meant to be imported, there are at least two 
ways to proceed. Version A is to split the dependent modules, so 
that each of them can be used with or without the import.


Parsec    MyMonadT
  MyMonadT_Parsec         -- orphan instances go here
    ProjectPackageWith    -- imports, and implicitly exports, MyMonadT_Parsec
    ProjectPackageWithout -- no import, no implicit export

So clients can still use ProjectPackageWithout if they get the
instances by another route. This only works for convenience 
instances where the instances are nice to provide for clients,

but not needed in ProjectPackage itself - in essence:

module ProjectPackageWith (module ProjectPackageWithout) where
import MyMonadT_Parsec
import ProjectPackageWithout

If ProjectPackage actually depends on the existence of those 
orphan instances, plan B is to delay instance resolution, from 
library to clients, so instead of importing the orphan instances


module ProjectPackage where 
import MyMonadT_Parsec

f .. =  .. orphan instances are available, use them ..

(which leads to the dreaded implicit export), you'd just assert 
their existence:


module ProjectPackage where 
f :: .. Orphan x => .. ; f .. = .. use orphan instances ..


so the client module would have to import both ProjectPackage 
and MyMonadT_Parsec if it wants to call 'f', or find another way
to provide those instances. Of course, the same concerns apply to 
the client modules, so you end up delaying all instance resolution 
until the last possible moment (at which point all the orphans need to be imported).


If there is a main module somewhere (something that isn't itself
imported), that is the place where importing the orphan instances
won't cause any trouble (other than that all the users of such
instances better have compatible ideas about what kind of instance
they want, because they all get the same ones).

If there is no main module (you're writing a library meant to
be imported), you better delay the import of any orphans or
provide both libraryWith and libraryWithout. It isn't pretty.

Claus




Re: [Haskell-cafe] Monad transformer responsibilities

2009-06-05 Thread Claus Reinke

| bar :: (C T) => T
| *Main> :t bar
| 
| interactive:1:0:

| No instance for (C T)
|   arising from a use of `bar' at interactive:1:0-2
| Possible fix: add an instance declaration for (C T)
| In the expression: bar

I'm not sure where that comes from, but it does seem to be an 
artifact of GHC's type inference, which seems unwilling to infer
a flexible context even if flexible contexts are enabled:

*Main> :show languages
active language flags:
 -XImplicitPrelude
 -XFlexibleContexts
*Main> let f _ = negate []
*Main> :t f
f :: (Num [a]) => t -> [a]
*Main> let f _ = negate [()]

interactive:1:10:
   No instance for (Num [()])
 arising from a use of `negate' at interactive:1:10-20
   Possible fix: add an instance declaration for (Num [()])
   In the expression: negate [()]
   In the definition of `f': f _ = negate [()]
*Main> let f :: Num [()] => t -> [()]; f _ = negate [()]
*Main> :t f

interactive:1:0:
   No instance for (Num [()])
 arising from a use of `f' at interactive:1:0
   Possible fix: add an instance declaration for (Num [()])
   In the expression: f

This does look like a bug to me? Compare with Hugs (Hugs mode):

Main> :t let f _ = negate [] in f
let {...} in f :: Num [a] => b -> [a]
Main> :t let f _ = negate [()] in f
let {...} in f :: Num [()] => a -> [()]

Claus




Re: [Haskell-cafe] Bool as type class to serve EDSLs.

2009-06-01 Thread Claus Reinke

Do you argue that overloading logical operations like this in Haskell
sacrifices type safety? Could programs go wrong [1] that use such
abstractions?


If I understand your point correctly, you are suggesting that such programs
are still type safe.  I agree with the claim that such features are
detrimental in practice though.  Instead of lumping it with type safety,
then what do we call it?  I think I've heard of languages that do such
conversions as weakly typed.  Really the issue is with implicit
conversions, right?


Isn't it merely a matter of balance? In order for typed programs not
to go wrong, one has to define right and wrong, and devise a type
system that rules out anything that might go wrong, usually at the
expense of some programs that might go right. 

Advanced type system features like overloading take that unused space 
and devise ways to redirect code that would go wrong (in simpler 
systems) to go right in useful new ways (eg: adding two functions or 
matrices or .. does not have to be wrong, there are interpretations in 
which all of these make perfect sense, and Haskell can express many

of them).

What is happening then is that more and more of the previously wrong
space is filled up with meaningful ways of going right, until nearly every
syntactically valid program goes somewhere. That can make for an
extremely expressive and powerful language, but it renders the naive
notion of going "wrong" or "right" rather meaningless: "wrong" just
means we haven't figured out a meaningful way to interpret it, and
"going right" can easily be a far cry from where you wanted it to go.


Claus

PS. this problem can be made worse if the implicit conversions
   aren't consistent, if small twitches in source code can lead to 
   grossly different behaviour. There is a fine line between advanced

   and uncontrollable, and opinions on what side of the line any given
   definition is on can differ.




Re: [Haskell-cafe] Re: Error message reform (was: Strange type errorwith associated type synonyms)

2009-06-01 Thread Claus Reinke
I once thought that error messages must be configurable by libraries, 
too. This would be perfect for EDSLs that shall be used by non-Haskellers. 


Yes, that is a problem.


But I have no idea how to design that.


There was some work in that direction in the context of the Helium
project. See the publication lists of Bastiaan Heeren and Jurriaan Hage:

http://people.cs.uu.nl/bastiaan/#Publications
http://www.cs.uu.nl/research/techreps/aut/jur.html

Claus




Re: [Haskell-cafe] Question on kind inference

2009-05-31 Thread Claus Reinke

---
class A a where
 foo :: a b

class B a

class (A a, B a) => C a
---

GHC compiles it without errors, but Hugs rejects it: Illegal type in
class constraint.


The error message is horribly uninformative.


What is the correct behavior, and which part of the haskell 98 report
explains this?


4.6 Kind Inference, combined with 4.5(.1) dependency analysis.

My interpretation: 'A' and 'B' are not in the same dependency group,
so 'a's kind in 'B' defaults to '*', so 'C' is ill-kinded. Try moving 'B'
into a separate module to get the same effect in GHC (which, in the
single-module case, uses 'A' and 'C' to determine the kind of 'B's 'a').

Claus




Re: [Haskell-cafe] Which type variables are allowed in a context?

2009-05-31 Thread Claus Reinke

--
class A a where
  foo :: A (b d) => a (c b)
--

GHC compiles it successfully, but Hugs rejects it:

Ambiguous type signature in class declaration
*** ambiguous type : (A a, A (b c)) => a (d b)
*** assigned to: foo


'd' ('c' in the error message) does not occur in any position that
would allow determining it, so you'll have a hard time using 'foo'.


What is the correct behavior, and which part of the haskell 98 report
explains this?


4.3.4 Ambiguous Types, .. (?)

strictly speaking, that only rules out expressions of ambiguous
types, so GHC can defer complaining until you try to use 'foo',
and Hugs might give a dead code warning instead of an error,
but the late errors can be really confusing:

   Could not deduce (A (b d)) from the context (A (b d1)) ..

so GHC's no-error, no-warning approach for the class method
isn't optimal

Claus




Re: [Haskell-cafe] (no subject)

2009-05-31 Thread Claus Reinke

--
type F a = Int

class A a where
  foo :: A b => a (F b)
--

GHC - OK
Hugs - Illegal type F b in constructor application


This time, I'd say Hugs is wrong (though eliminating that initial
complaint leads back to an ambiguous and unusable method 'foo').

4.2.2 Type Synonym Declarations, lists only class instances as
exceptions for type synonyms, and 'Int' isn't illegal there.


--
type F a = Int

class A a where
 foo :: F a

instance A Bool where
 foo = 1

instance A Char where
 foo = 2

xs = [foo :: F Bool, foo :: F Char]
--

GHC:

M.hs:14:6:
   Ambiguous type variable `a' in the constraint:
 `A a' arising from a use of `foo' at M.hs:14:6-8
   Probable fix: add a type signature that fixes these type variable(s)

M.hs:14:21:
   Ambiguous type variable `a1' in the constraint:
 `A a1' arising from a use of `foo' at M.hs:14:21-23
   Probable fix: add a type signature that fixes these type variable(s)

Hugs: [1,2]


Neither seems correct? 4.3.1 Class Declarations, says:

   The type of the top-level class method vi is: 
    vi :: forall u,w. (C u, cxi) => ti 
   The ti must mention u; ..


'foo's type, after synonym expansion, does not mention 'a'.

Claus




Re: [Haskell-cafe] GHCi vs. Hugs (record syntax)

2009-05-31 Thread Claus Reinke

head[[]{}]

GHCi: []
Hugs: ERROR - Empty field list in update

What is the correct behavior?


Seems as if GHC interprets []{} as labelled construction instead
of labelled update - 3 Expressions (the grammar productions):

    | qcon { fbind1 , ... , fbindn }         (labeled construction, n >= 0) 
    | aexp<qcon> { fbind1 , ... , fbindn }   (labeled update, n >= 1) 


But the grammar (3.2) makes [] and () into exceptions (gcon, not qcon)

    gcon -> () 
          | [] 
          | (,{,}) 
          | qcon 


(though interpreting them as nullary constructors may be more
consistent..).
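(A minimal illustration of the two record forms in question, with a
hypothetical type 'P':)

```haskell
-- labelled construction vs labelled update
data P = P { px :: Int, py :: Int } deriving Show

p0 :: P
p0 = P { px = 1, py = 2 }   -- labelled construction (n >= 0 fields allowed)

p1 :: P
p1 = p0 { py = 3 }          -- labelled update (needs n >= 1 fields)

main :: IO ()
main = print (py p1)
```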

Btw, the language report is recommended browsing for all Haskellers:
http://www.haskell.org/haskellwiki/Language_and_library_specification

In addition to fun puzzles like the above, it also answers many beginner 
questions frequently asked on this list and provides lots of small code

snippets.

What do I win?-)
Claus




Re: [Haskell-cafe] (no subject)

2009-05-31 Thread Claus Reinke

--
type F a = Int

class A a where
 foo :: A b => a (F b)
--

GHC - OK
Hugs - Illegal type F b in constructor application


This time, I'd say Hugs is wrong (though eliminating that initial
complaint leads back to an ambiguous and unusable method 'foo').


I only just recognized the horrible error message from the first
example.. what Hugs is trying to tell us about is a kind error!

The kind of 'a' in 'F' defaults to '*', but in 'A', 'F' is applied to
'b', which, via 'A b', is constrained to '* -> *'. So Hugs is quite
right (I should have known!-).

The error message can be improved drastically, btw:

   :set +k
   ERROR file:.\hugs-vs-ghc.hs:19 - Kind error in constructor application
   *** expression : F b
   *** constructor : b
    *** kind : a -> b
   *** does not match : *

See http://cvs.haskell.org/Hugs/pages/hugsman/started.html and
search for '+k' - highly recommended if you're investigating kinds.
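(A sketch of how the kind clash can be resolved today: annotate F's
parameter with the kind that 'A b' forces on 'b'. AllowAmbiguousTypes is
still needed, since 'b' cannot be determined from foo's expanded type;
both extensions postdate this thread:)

```haskell
{-# LANGUAGE KindSignatures, AllowAmbiguousTypes #-}
import Data.Kind (Type)

-- F's parameter now has kind Type -> Type, matching what 'A b' requires
type F (f :: Type -> Type) = Int

class A (a :: Type -> Type) where
  foo :: A b => a (F b)

main :: IO ()
main = putStrLn "kind-checks"
```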

Claus




Re: [Haskell-cafe] Re: Error message reform

2009-05-30 Thread Claus Reinke
I find this slightly more complicated case quite confusing with the 
current wording:


   Prelude> :t (\x -> x) :: (a -> b) -> (a -> a)
   <interactive>:1:7:
   Couldn't match expected type `a' against inferred type `b'
     `a' is a rigid type variable bound by
       an expression type signature at <interactive>:1:14
     `b' is a rigid type variable bound by
       an expression type signature at <interactive>:1:19
   In the expression: x
   In the expression: (\ x -> x) :: (a -> b) -> (a -> a)

This message suggests that ghc has inferred type b for x.


Not really; but this looks like a nice simple example of what I 
was referring to: GHC is so careful not to bother the user with

scary type stuff that it mentions types as little as possible. In
particular, it never mentions the type of 'x'! 


It only mentions that it has run into 'a' and 'b' somewhere
*while looking into 'x's type*, so those are conflicting types 
for some parts of 'x's type, not 'x's type itself. 


In more complex code, this peephole view can be really
unhelpful, which is why I suggested [1] that type errors
should give types for all expressions mentioned in the 
type error context (here 'x' and '(\x->x)', the latter's

type is there only because it was part of the input).
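(For completeness: the identity function can only be given a signature
in which argument and result types coincide, so this variant of the
example does type-check:)

```haskell
-- same lambda, but with a signature the identity can actually have
idFun :: (a -> b) -> (a -> b)
idFun = \x -> x

main :: IO ()
main = print (idFun (+ 1) 41)
```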

Claus

[1] http://hackage.haskell.org/trac/ghc/ticket/1928#comment:2




Re: [Haskell-cafe] Re: Error message reform (was: Strange type error with associated type synonyms)

2009-05-28 Thread Claus Reinke

One user's view of error message history, perhaps helpful to reformers:-)

Once upon a time, Hugs tended to have better error messages than GHC.

They still weren't perfect, mostly when beginners were confronted with
messages referring to advanced concepts - eg, Simon Thompson had a list 
of translations of the kind "if Hugs reports this, it most likely means that",
especially targeted at beginners in his courses, ie users who were unlikely 
to want things like 'Num (a->b)' instances, or find non-existing semicolons. 

(the advice to focus on the error location usually means that either the 
error message is misleading or too difficult for the user to interpret - it

is a standard fallback in all programming language implementations,
but no more than that; if we rely on it too often, the messages aren't
doing their job)

Even for more advanced users, it was helpful to get messages from
several implementations (at least Hugs and GHC) for tricky cases, just
to get different views from which to piece together a picture. This, sadly,
is not as well supported these days as it used to be, but getting multiple
opinions on type errors is still useful advice (if your code can be handled
by multiple implementations, or if a single implementation can offer multiple
views, eg, beginner/advanced or top-down/bottom-up or ..).

Then Simon PJ invested a lot of energy into improving GHC's error 
messages, so the balance changed. Error message complaints didn't
stop, though, and the error messages were improved further, with 
more text, and suggestions for possible fixes, etc.


The suggestions are sometimes misleading, and I've felt there has 
been too much weight on narrative text, to the exclusion of actual 
type information (it is a programming language, after all, so why 
don't the error messages give type signatures for context instead of 
trying to talk me through it in natural language, without further 
references to types!-), but discussions like the present show that 
Simon has been on the right track with that, for many users.


Still, I would really like a "just the facts, please" mode for GHC, 
with less text and more type signatures (especially for the contexts 
of type mismatches). Error messages simply not including the

information I need has become my main issue with GHC messages,
and seems to be a common thread over many tickets and threads
apparently discussing phrasing (eg, expected/inferred: who expects
and why, and what is the inference based on?).

Somewhere in between, much research has focussed on type errors,
and how to report them in general, and implementations like Helium 
have set out to target beginners with simpler error messages. (*)


As for fixes, concrete suggestions are most likely to be adopted,
but sometimes it just isn't clear how to improve the situation, and
sometimes, there is no single best solution (which has spawned
countless interesting papers on type-error reporting;-).

It you want to help, file tickets for messages you find unhelpful,
and make suggestions for possible improvements. Unfortunately,
there doesn't seem to be an error message component in GHC's
trac, so querying for relevant tickets is not straightforward. Simon
keeps a partial list of error message related tickets here (keeping
track of trac tickets - meta-trac?-):

http://hackage.haskell.org/trac/ghc/wiki/Status/SLPJ-Tickets

Just, please, keep in mind that there is no one-size-fits-all:
improving a message for one group of users might well make
it less useful for another group.

Claus

(*) A beginner mode for Haskell systems has often been suggested,
   even before Helium. Instead, error messages, and even language
   design decisions have been constrained by trying to serve everyone
   with a single mode. I'd have thought that Helium, and Dr Scheme,
   have sufficiently demonstrated the value of having separate levels
   of a language and the corresponding error messages.




Re: [Haskell-cafe] Bool as type class to serve EDSLs.

2009-05-28 Thread Claus Reinke

Of course once you've got ifthenelse you find yourself wanting explicit
desugaring of pattern matching (could view patterns help here?),


Could you be more specific about what you want there, perhaps
with a small example? I recognize the other problems from my own
forays into EDSLs, but I'm not sure I recognize this one. If you want
to reuse the existing 'lhs=rhs' syntax for patterns specific to your EDSL,
both QuasiQuotes and ViewPatterns might help, if you want to be able 
to compose patterns beyond the limitations of that syntax, one of the

proposals and libraries for first-class patterns might help, adding
functionality at the expense of more complex syntax.
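(For readers wondering what the 'ifthenelse' desugaring quoted above
looks like, a minimal sketch - class and instance names are made up:)

```haskell
-- overloading if-then-else via a class, so the host syntax can be
-- reused for DSL-specific boolean-like types; Bool is the trivial case
class Boolean b where
  ifThenElse :: b -> a -> a -> a

instance Boolean Bool where
  ifThenElse c t e = if c then t else e

main :: IO ()
main = putStrLn (ifThenElse True "yes" "no")
```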


recursion (into an explicit use of fix), etc...


Yes, in principle (you're thinking of sharing, and of lifting pure code?). 
In practice, too much machinery is attached to the existing 'let' to 
make that likely.


But this reminds me of another closely related issue: GHC offers a
lot of help in moving stuff from compiler-baked-in to library-provided,
but almost none of the mechanisms works at the level of syntax/AST
(where most other languages do their meta-programming/language
extension support). 

RULES and compiler plugins work on core, where it is too late to 
do EDSL-specific source-level rewrites, TH requires quoted things 
to parse and type-check in the un-extended language, making it less 
useful for EDSL-specific language extensions. For instance, think of 
the arrows syntax: there doesn't seem to be any way that this could 
have been defined in a library, so it had to be baked into GHC?


QuasiQuotes seem to be the only feature that gives source-level
control, and they suffer from the lack of easily available parsers
(eg, if my EDSL requires an extension of some Haskell construct, 
I'd like to be able to reuse the Haskell parsers for the baseline). 
There is a package on hackage targetting that issue (haskell-src-meta), 
but it relies on a second frontend (haskell-src-ext), and only works

with a single GHC version.

Claus





Re: [Haskell-cafe] How to implement this? A case for scoped record labels?

2009-05-26 Thread Claus Reinke

I wonder if I am completely off here, but I am surprised that there is
no progress on the scoped labels front. The Haskell wiki mentioned
that the status quo is due to a missing optimum in the design space,
but the same can be said about generic programming in Haskell and yet,
GHC ships with Scrap your boilerplate. So we have to resort to type
classes hacks instead of a proper solution. 


There are various implementations of extensible records available.
HList may have the best-supported versions and the most experience,
but essentially, they are simple enough to define that some packages
ship with their own variants (as long as there is no agreement on the
future of the language extensions needed to implement these libraries,
there won't be any standard library). See the links on the Haskell wiki
[1], though there are also newer entries on the GHC trac wiki [2,3].

The Haskell wiki page also points to my old first class labels proposal, 
which included a small example implementation based on Daan's 
scoped labels (there was a more recent implementation of Data.Record
which noone seemed interested in, and the fairly new Data.Label 
suggestion offers a workaround for the lack of first class labels, see [4]

for unsupported experimental versions of both).

The various accessor packages and generators might be a more
lightweight/portable alternative. In particular, they also cover the
case of nested accessors. And, going back to your original problem,
there is an intermediate stage between

   data BayeuxMessage = HandshakeRequest { channel :: String , ... }
| HandshakeResponse { channel :: String, successful :: Bool, ... }
| ...

and 


   data HandshakeRequest = HandshakeRequest { channel :: String , ... }
   data HandshakeResponse = HandshakeResponse { channel :: String,
   successful :: Bool, ... }
   ...

   data BayeuxMessage = HSReq HandshakeRequest
   | HSRes HandshakeResponse
   ...

namely

   data HandshakeRequest = HandshakeRequest { ... }
   data HandshakeResponse = HandshakeResponse { successful :: Bool, ... }
   ...
   data BayeuxMessage = HSReq{ channel :: String, request :: HandshakeRequest }
   | HSRes{ channel :: String, response :: HandshakeResponse }
   ...

Generally, you'll often want to use labelled fields with parameterized
types, eg

   data NamedRecord a = Record { name :: a, ... }
    type StringNamedRecord = NamedRecord String
    type IntNamedRecord = NamedRecord Int

and, no, I don't suggest to encoded types in names, this is just an
abstract example;-) Mostly, don't feel bound to a single upfront design,
refactor your initial code until it better fits your needs, as you discover
them.
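(A runnable condensation of the parameterized-record sketch above, with
a single field for brevity:)

```haskell
-- the type constructor is NamedRecord, the data constructor Record
data NamedRecord a = Record { name :: a } deriving Show

type StringNamedRecord = NamedRecord String

greet :: StringNamedRecord -> String
greet r = "hello, " ++ name r

main :: IO ()
main = putStrLn (greet (Record { name = "world" }))
```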

Hth,
Claus

[1] http://www.haskell.org/haskellwiki/Extensible_record
[2] http://hackage.haskell.org/trac/ghc/wiki/ExtensibleRecords
[3] http://hackage.haskell.org/trac/ghc/ticket/1872
[4] http://community.haskell.org/~claus/




Re: [Haskell-cafe] fast Eucl. dist. - Haskell vs C

2009-05-19 Thread Claus Reinke

I understand from your later post that is was in fact specialized, but
how do I make sure it _is_ specialized? 


-ddump-tc seems to give the generalized type, so it seems you'd need
to look at the -ddump-simpl output if you want to know whether a
local function is specialized.

http://www.haskell.org/haskellwiki/Performance/GHC#Looking_at_the_Core

Can I just add a type signature in the dist_fast definition for euclidean, 


If you need it at exactly one type, that is the simplest way. There's
also the SPECIALIZE pragma

http://www.haskell.org/ghc/docs/latest/html/users_guide/pragmas.html#specialize-pragma

and for local and non-exported functions '-O' should enable auto-specialization

http://www.haskell.org/ghc/docs/latest/html/users_guide/faster.html
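(A sketch of what the SPECIALIZE pragma looks like on the 'euclidean'
helper from this thread; the pragma only affects optimization, not
results:)

```haskell
-- ask GHC for a monomorphic Double copy of the overloaded function
{-# SPECIALIZE euclidean :: Double -> Double -> Double #-}
euclidean :: Num a => a -> a -> a
euclidean x y = d * d
  where d = x - y

main :: IO ()
main = print (euclidean (3.0 :: Double) 1.0)
```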



After that, unrolling the fused fold loop (uvector internal) might  
help a bit, but isn't there yet:


http://hackage.haskell.org/trac/ghc/ticket/3123
http://hackage.haskell.org/trac/ghc/wiki/Inlining

And even if that gets implemented, it doesn't apply directly to your
case, where the loop is in a library, but you might want to control  
its unrolling in your client code. Having the loop unrolled by a default
factor (8x or so) should help for loops like this, with little  
computation.


This seems rather serious, and might be one of the bigger reasons why
I'm getting nowhere close to C in terms of performance...


You should be able to get near (a small factor) without it, but not having 
it leaves a substantial gap in performance, especially for simple loop 
bodies (there is also the case of enabling fusion over multiple loop 
iterations, but that may call for proper fix-point fusion).



The loop body is ridiculously small, so it would make sense to
unroll it somewhat to help avoid the loop overhead.
However, it seems like GHC isn't able to do that now.


Apart from the links above, the new backend also has relevant TODO
items: http://hackage.haskell.org/trac/ghc/wiki/BackEndNotes


Is there any way to unroll the loop myself, to speed things up?
Seems hard, because I'm using uvector...


You could 'cabal unpack' uvector, hand-unroll the core loop all
its operations get fused into, then reinstall the modified package..
(perhaps that should be a package configuration option, at least
until GHC gets loop or recursion unrolling - since this would be
a temporary workaround, it would probably not be worth it to 
have multiple package versions with different unroll factors; if
this actually helps uvector users with performance in practice, 
they could suggest it as a patch).
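(The general idea of hand-unrolling, shown on a plain list sum - this is
only a sketch of the technique; unrolling uvector's internal loop would
have to happen inside the package itself, as described above:)

```haskell
{-# LANGUAGE BangPatterns #-}

-- 4x-unrolled summation: one loop test per four elements consumed
sumUnrolled :: [Double] -> Double
sumUnrolled = go 0
  where
    go !acc (a:b:c:d:rest) = go (acc + a + b + c + d) rest
    go !acc xs             = acc + sum xs   -- leftover (< 4) elements

main :: IO ()
main = print (sumUnrolled [1 .. 10])
```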


Claus




Re: [Haskell-cafe] tips on using monads

2009-05-18 Thread Claus Reinke
I've got one of those algorithms which threatens to march off the right edge (in the words of 
Goerzen et al). I need something like a State or Maybe monad, but this is inside the IO monad. So 
I presume I need StateT or MaybeT. However, I'm still (slowly) learning about monads from first 
principles. I thought I might present my code and get some pointers... maybe someone could 
actually show me how to rewrite it, which would be a neat way to see MaybeT and StateT in action. 
I'm hoping to get anything from a one-line response to a rewrite of my code. Anything will help.


Perhaps this is useful:
http://www.haskell.org/haskellwiki/Equational_reasoning_examples#Coding_style:_indentation_creep_with_nested_Maybe

Claus 




Re: [Haskell-cafe] fast Eucl. dist. - Haskell vs C

2009-05-18 Thread Claus Reinke

My current best try uses the uvector package, has two 'vectors' of type
(UArr Double)  as input, and relies on the sumU and zipWithU functions
which use streaming to compute the result:

dist_fast :: UArr Double -> UArr Double -> Double
dist_fast p1 p2 = sumDs `seq` sqrt sumDs
  where
    sumDs = sumU ds
    ds    = zipWithU euclidean p1 p2
    euclidean x y = d*d
      where
        d = x-y


You'll probably want to make sure that 'euclidian' is specialized to
the types you need (here 'Double'), not used overloaded for 'Num a => a'
(check -ddump-tc, or -ddump-simpl output).

After that, unrolling the fused fold loop (uvector internal) might help
a bit, but isn't there yet:

http://hackage.haskell.org/trac/ghc/ticket/3123
http://hackage.haskell.org/trac/ghc/wiki/Inlining

And even if that gets implemented, it doesn't apply directly to your
case, where the loop is in a library, but you might want to control its
unrolling in your client code. Having the loop unrolled by a default
factor (8x or so) should help for loops like this, with little computation.

Claus




Re: [Haskell-cafe] fast Eucl. dist. - Haskell vs C

2009-05-18 Thread Claus Reinke

dist_fast :: UArr Double -> UArr Double -> Double
dist_fast p1 p2 = sumDs `seq` sqrt sumDs
  where
    sumDs = sumU ds
    ds    = zipWithU euclidean p1 p2
    euclidean x y = d*d
      where
        d = x-y


You'll probably want to make sure that 'euclidian' is specialized to
the types you need (here 'Double'), not used overloaded for 'Num a => a'
(check -ddump-tc, or -ddump-simpl output).


Sorry about that misdirection - as it happened, I was looking at the 
tc output for 'dist_fast' (euclidean :: forall a. (Num a) => a -> a -> a), 
but the simpl output for 'dist_fast_inline' .., which uses things like 


   __inline_me ..
   case Dist.sumU (Dist.$wzipWithU ..
   GHC.Num.- @ GHC.Types.Double GHC.Float.$f9 x_aLt y_aLv 

Once I actually add a 'dist_fast_inline_caller', that indirection 
disappears in the inlined code, just as it does for dist_fast itself.


    dist_fast_inlined_caller :: UArr Double -> UArr Double -> Bool
    dist_fast_inlined_caller p1 p2 = dist_fast_inlined p1 p2 < 2

However, in the simpl output for 'dist_fast_inline_caller', the
'sumU' and 'zipWithU' still don't seem to be fused - Don?

Claus




Re: [Haskell-cafe] tips on using monads

2009-05-18 Thread Claus Reinke
I've got one of those algorithms which threatens to march off the right edge (in the words of 
Goerzen et al). I need something like a State or Maybe monad, but this is inside the IO monad. 
So I presume I need StateT or MaybeT. However, I'm still (slowly) learning about monads from 
first principles. I thought I might present my code and get some pointers... maybe someone could 
actually show me how to rewrite it, which would be a neat way to see MaybeT and StateT in 
action. I'm hoping to get anything from a one-line response to a rewrite of my code. Anything 
will help.


Perhaps this is useful:
http://www.haskell.org/haskellwiki/Equational_reasoning_examples#Coding_style:_indentation_creep_with_nested_Maybe
I can't quite tell--is that example in the IO monad? Part of my difficulty is that I'm inside IO. 
I know how to do this with Maybe, except that I have to combine Maybe and IO (use MaybeT?)


It was in the GHC.Conc.STM monad, so yes, it used a MaybeT
and Control.Monad.Trans.MonadTrans's lift (btw, the MonadTrans
docs only point to [1], but [2] might also be of interest, if rather
more compact/terse).

Claus

[1] http://web.cecs.pdx.edu/~mpj/pubs/springschool.html
[2] http://web.cecs.pdx.edu/~mpj/pubs/modinterp.html






Re: [Haskell-cafe] fast Eucl. dist. - Haskell vs C

2009-05-18 Thread Claus Reinke
Once I actually add a 'dist_fast_inline_caller', that indirection  
disappears in the inlined code, just as it does for dist_fast itself.


    dist_fast_inlined_caller :: UArr Double -> UArr Double -> Bool
    dist_fast_inlined_caller p1 p2 = dist_fast_inlined p1 p2 < 2

However, in the simpl output for 'dist_fast_inline_caller', the
'sumU' and 'zipWithU' still don't seem to be fused - Don?


All the 'seq's and so on should be unnecessary, and even so, I still get
the expected fusion:


As I said, I don't get the fusion if I just add the function above to the 
original Dist.hs, export it and compile the module with '-c -O2 -ddump-simpl':


Dist.dist_fast_inlined_caller =
  \ (w1_s1nb :: Data.Array.Vector.UArr.UArr GHC.Types.Double)
    (w2_s1nc :: Data.Array.Vector.UArr.UArr GHC.Types.Double) ->
    case (Dist.$wzipWithU Dist.lvl2 w1_s1nb w2_s1nc)
         `cast` (trans
                   Data.Array.Vector.UArr.TFCo:R56:UArr
                   Data.Array.Vector.UArr.NTCo:R56:UArr
                 :: Data.Array.Vector.UArr.UArr GHC.Types.Double
                      ~
                    Data.Array.Vector.Prim.BUArr.BUArr GHC.Types.Double)
    of _
    { Data.Array.Vector.Prim.BUArr.BUArr ipv_s1lb
                                         ipv1_s1lc
                                         ipv2_s1ld ->
    letrec {
      $wfold_s1nN :: GHC.Prim.Double#
                  -> GHC.Prim.Int#
                  -> GHC.Prim.Double#
      LclId
      [Arity 2
       Str: DmdType LL]
      $wfold_s1nN =
        \ (ww_s1mZ :: GHC.Prim.Double#) (ww1_s1n3 :: GHC.Prim.Int#) ->
          case GHC.Prim.==# ww1_s1n3 ipv1_s1lc of _ {
            GHC.Bool.False ->
              $wfold_s1nN
                (GHC.Prim.+##
                   ww_s1mZ
                   (GHC.Prim.indexDoubleArray#
                      ipv2_s1ld (GHC.Prim.+# ipv_s1lb ww1_s1n3)))
                (GHC.Prim.+# ww1_s1n3 1);
            GHC.Bool.True -> ww_s1mZ
          }; } in
    case $wfold_s1nN 0.0 0 of ww_s1n7 { __DEFAULT ->
    GHC.Prim.<## (GHC.Prim.sqrtDouble# ww_s1n7) 2.0
    }
    }

Claus




Re: [Haskell-cafe] fast Eucl. dist. - Haskell vs C

2009-05-18 Thread Claus Reinke
As I said, I don't get the fusion if I just add the function above to the 
original Dist.hs, export it and compile the module with '-c -O2 
-ddump-simpl':


I can't reproduce this.


Interesting. I'm using ghc 6.11.20090320 (windows), uvector-0.1.0.3. 
I attach the modified Dist.hs and its simpl output, created via:


    ghc -c Dist.hs -O2 -ddump-tc -ddump-simpl-stats -ddump-simpl > Dist.dumps

Perhaps others can confirm the effect? Note that the 'dist_fast' in the 
same module does get fused, so it is not likely an options issue. I still 
suspect that the inlining of the 'Dist.zipWith' wrapper in the 'dist_fast_inlined'

'__inline_me' has some significance - it is odd to see inlined code in an
'__inline_me' and the fusion rule won't trigger on 'Dist.sumU . 
Dist.$wzipWithU',
right?


Does the complete program fragment I posted earlier yield the desired
result?


Yes. Note that the original poster also reported slowdown from
use of 'dist_fast_inlined'.

Claus



Dist.dumps
Description: Binary data


Dist.hs
Description: Binary data


Re: [Haskell-cafe] conflicting variable definitions in pattern

2009-05-15 Thread Claus Reinke

I miss lots of stuff from when I was a kid. I used to write

  elem x (_ ++ x : _)  = True
  elem _ _ = False

and think that was cool. How dumb was I?


Yeah, the Kiel Reduction Language had similarly expressive
and fun pattern matching, with subsequence matching and 
backtracking if the guard failed. Of course, these days, you 
could use view patterns for the simpler cases, but it doesn't 
look quite as nice:


    elem x (break (==x) -> (_, _:_)) = True
    elem _ _ = False

and gets ugly enough to spoil the fun quickly:

   -- lookup key (_++((key,value):_)) = Just value
    lookup key (break ((==key).fst) -> (_ , (_,value):_)) = Just value
   lookup _   _  = Nothing

Also, view patterns don't handle match failure by an implicit
Monad, let alone MonadPlus, so one often has to insert an
explicit Maybe, and there is no backtracking:-(

Claus

-- Nostalgia isn't what it used to be [source: ?]




Re: [Haskell-cafe] Functional Reactive Web Application Framework?

2009-05-13 Thread Claus Reinke

I assume you want to write FRP in a Haskell-embedded DSL and generate
FRP'd JavaScript.  If you wish to use Flapjax as a supporting library
I'd be glad to help.


I'm curious: how difficult is it nowadays for in-page JavaScript to
control the evolution of its surrouding page, FRP-style? I used to
do something like that, but with VRML+JavaScript instead of HTML
+JavaScript (there's a screencast for those without VRML viewer;-):

http://community.haskell.org/~claus/FunWorlds/VRML/

and I found that mapping FRP style behaviours and events wasn't
the problem (though VRML has better support for this than HTML),
the problem was gaining the same advantages one would have from
embedding FRP in a functional language. Problems included scoping,
linking dynamically created material with enclosing scope (JavaScript
being in strings with scopes separate from the enclosing VRML,
VRML to be created dynamically also via strings, also in their
own scope) 




Re: [Haskell-cafe] Functional Reactive Web Application Framework?

2009-05-13 Thread Claus Reinke

oops, sorry, keyboard accident


I assume you want to write FRP in a Haskell-embedded DSL and generate

FRP'd JavaScript.  If you wish to use Flapjax as a supporting library
I'd be glad to help.


I'm curious: how difficult is it nowadays for in-page JavaScript to
control the evolution of its surrouding page, FRP-style? I used to
do something like that, but with VRML+JavaScript instead of HTML
+JavaScript (there's a screencast for those without VRML viewer;-):

http://community.haskell.org/~claus/FunWorlds/VRML/

and I found that mapping FRP style behaviours and events wasn't
the problem (though VRML has better support for this than HTML),
the problem was gaining the same advantages one would have from
embedding FRP in a functional language. Problems included scoping,
linking dynamically created material with enclosing scope (JavaScript 
being in strings with scopes separate from the enclosing VRML, 
VRML to be created dynamically also via strings, also in their own 
scope) 


This became particularly obvious for dynamic recursion of reactive
behaviours, somewhat like 


  page n = code involving n `until` (keypress `then` (\key -> page n))

How are such things handled in Flapjax, if it follows the embedded
compiler approach? 


Claus



Re: [Haskell-cafe] Data.Map and strictness (was: Is Haskell aGoodChoice for WebApplications?(ANN: Vocabulink))

2009-05-07 Thread Claus Reinke

'seq' something like 'size map' that will force a traversal of the entire
tree, and ensure that the result is actually demanded, ..
(Not tested)


and not recommended, either, I'm afraid!-)

| Actually, I'm unsure how to fix this. For an expression like this:
| 
|Data.Map.delete key map
| 
| how can I use seq (or something else) to sufficiently evaluate the above

| to ensure that the value associated with key can be garbage collected?

You can ensure that the Map update no longer holds on to the old key
by keeping a copy of the old Map in an unevaluated thunk, but for
garbage collection, you'd also have to make sure that there are no other
unused references to the old Map, and no other references to the value
in that old key. I assume you knew that, but the discussion suggested 
that we should keep making such things explicit.


| My knowledge of Data.Map is limited to it's haddock documentation.

That won't be sufficient in practice, eg., most of the documentation is 
silent about strictness properties. If you are willing to look a little bit

under the hood, GHCi's :info is your friend:

    Prelude> :m +Data.Map
    Prelude Data.Map> :info Map
   data Map k a
 = Data.Map.Tip
 | Data.Map.Bin !Data.Map.Size !k a !(Map k a) !(Map k a)
   -- Defined in Data.Map
   ...

You can see that the constructors are not exported (GHCi reports
them qualified, even though we've brought Data.Map into scope),
so any use you make of their strictness properties is version dependent.
They are not likely to change soon, and should probably be documented
in the haddock API specification (at least for the abstract constructors
that are actually exported), so that we can rely on them with clear
conscience, but that isn't quite the case at the moment.

Anyway;-) You can see that size is actually pre-computed, so there's
no guarantee that asking for it will traverse the internal representation,
the element type is not stored strictly (which isn't all that unexpected, 
but sometimes surprises because it isn't documented), and everything

else is strict. So, with the current representation, if you force the Map,
its whole structure will be forced, though none of its elements will be.
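(A small demonstration of exactly that, with today's containers library:
spine-strict, element-lazy, so forcing the Map leaves the undefined
element below untouched until it is looked up:)

```haskell
import qualified Data.Map as M

-- Data.Map (the lazy variant) stores values lazily
m :: M.Map Int Int
m = M.insert 2 undefined (M.fromList [(1, 10)])

main :: IO ()
main = do
  m `seq` putStrLn "spine forced"   -- whole tree structure evaluated, no error
  print (M.lookup 1 m)              -- the lazy element at key 2 is never forced
```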

Hth,
Claus

PS. GHood is useful for observing dynamic strictness properties
   (what is inspected when, with what dependencies and results),
   though it won't help you directly with garbage collection issues
   (it can't observe when a value gets collected).




Re: [Haskell-cafe] Re: Visualizing Typed Functions

2009-05-07 Thread Claus Reinke
With these functions visualized, one could make a kind of drag and 
drop interface for Haskell programming, although that isn't really my 
intention.  I admit this is a little convoluted even for the purpose of 
visualization, but at least it's a starting place.  Does anyone know of 
another system or better representation?


Sure, google for visual programming languages - some examples:

- there was a Visual Haskell, before the Visual Studio plugin
   http://ptolemy.eecs.berkeley.edu/~johnr/papers/

- dissect a mockingbird
   http://users.bigpond.net.au/d.keenan/Lambda/

- vpl bibliography
   http://web.engr.oregonstate.edu/~burnett/vpl.html

@ It seems that we are getting pretty close to the point that youtube is 
getting to be a better reference than a paper, at least for 
practitioners. A lot of talks are on youtube :)


You could always post a screencast of yourself reading a paper!-)

Claus




Re: [Haskell-cafe] Combining computations

2009-05-03 Thread Claus Reinke

 mplus' :: MonadPlus m => Maybe a -> m a -> m a
 mplus' m l = maybeToMonad m `mplus` l

 maybeToMonad :: Monad m => Maybe a -> m a
 maybeToMonad = maybe (fail "Nothing") return

In general, however, this operation can't be done.  For example,
how would you write:

 mplus' :: IO a -> [a] -> [a]


Perhaps the question should be: is there an interesting structure
that would allow us to capture when this kind of merging Monads
is possible? We can convert every 'Maybe a' to a '[] a', but the 
other way round is partial or loses information, so let's focus on 
the first direction. Should there be a


   type family Up m1 m2
   type instance Up Maybe [] = []

so that one could define

    mplusUp :: m1 a -> m2 a -> (m1 `Up` m2) a 


? Well, we'd need the conversions, too, so perhaps

   {-# LANGUAGE FlexibleInstances, MultiParamTypeClasses, TypeFamilies, 
TypeOperators #-}

   import Control.Monad

   class Up m1 m2 where
     type m1 :/\: m2 :: * -> *
     up :: m1 a -> m2 a -> ((m1 :/\: m2) a, (m1 :/\: m2) a)

   instance Up m m where
     type m :/\: m = m
     up ma1 ma2 = (ma1, ma2)

   instance Up Maybe [] where
     type Maybe :/\: [] = []
     up m1a m2a = (maybe [] (:[]) m1a, m2a)

   instance Up [] Maybe where
     type [] :/\: Maybe = []
     up m1a m2a = (m1a, maybe [] (:[]) m2a)

   mplusUp :: (m ~ (m1 :/\: m2), Up m1 m2, MonadPlus m) => m1 a -> m2 a -> m a
   m1a `mplusUp` m2a = mUp1a `mplus` mUp2a
     where (mUp1a, mUp2a) = up m1a m2a
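Pulling the pieces into one compilable file, with a concrete use of 'mplusUp' (a sketch; only the Maybe/[] instance is kept for the demo):

```haskell
{-# LANGUAGE FlexibleInstances, MultiParamTypeClasses,
             TypeFamilies, TypeOperators #-}
import Control.Monad (MonadPlus, mplus)

class Up m1 m2 where
  type m1 :/\: m2 :: * -> *
  up :: m1 a -> m2 a -> ((m1 :/\: m2) a, (m1 :/\: m2) a)

-- merging a Maybe into a list loses nothing, so [] is the upper bound
instance Up Maybe [] where
  type Maybe :/\: [] = []
  up m1a m2a = (maybe [] (:[]) m1a, m2a)

mplusUp :: (m ~ (m1 :/\: m2), Up m1 m2, MonadPlus m)
        => m1 a -> m2 a -> m a
m1a `mplusUp` m2a = mUp1a `mplus` mUp2a
  where (mUp1a, mUp2a) = up m1a m2a

main :: IO ()
main = print (Just 1 `mplusUp` [2, 3])  -- [1,2,3]
```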

Whether or not that is interesting, or whether it needs to be defined
differently to correspond to an interesting structure, I'll leave to the 
residential (co-)Categorians!-)


Claus




Re: [Haskell-cafe] Array Binary IO molecular simulation

2009-05-01 Thread Claus Reinke
So I wonder of existing projects of such type, both Molecular dynamics and Monte Carlo methods. 


The fastest Haskell Monte Carlo code I've seen in action is 
Simon's port of a Monte Carlo Go engine:


http://www.haskell.org/pipermail/haskell-cafe/2009-March/057982.html
http://www.haskell.org/pipermail/haskell-cafe/2009-March/058069.html

That is competitive to lightly optimised non-Haskell versions, though
not competitive with highly optimised versions (the kind that slows
down when you update your compiler, but gives you dramatic boosts
due to detailed attention both to generated assembly code and to 
high-level algorithm shortcuts).


Though there's also specialization as an option

http://www.cse.unsw.edu.au/~chak/papers/KCCSB07.html

and googling for haskell monte carlo give a few more hits, such as

http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.47.2981
http://www.citeulike.org/user/dguibert/article/561815

and even some hackage package, though I don't know whether
efficiency was a concern there?

I've got also some technical questions. Now I'm using 2D DiffUArray 
to represent particle positions during the simulation (when there are lots 
of array updates). Is this reasonably fast (I want to use pure external 
interface of DiffArray)?


DiffArray is slow (don't know about DiffUArray):
http://hackage.haskell.org/trac/ghc/ticket/2727
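For comparison, the usual escape from DiffArray's update overhead is to run the update loop on an unboxed mutable array in ST and freeze the result back to a pure UArray. A minimal sketch (not from this thread; the name 'step' and the 1D layout are made up for illustration):

```haskell
import Control.Monad (forM_)
import Data.Array.ST (runSTUArray, thaw, getBounds, readArray, writeArray)
import Data.Array.Unboxed (UArray, listArray, elems)

-- Shift every particle position in place, then hand back a pure UArray.
step :: Double -> UArray Int Double -> UArray Int Double
step dx xs = runSTUArray $ do
  arr <- thaw xs                 -- mutable copy; the input stays pure
  (lo, hi) <- getBounds arr
  forM_ [lo .. hi] $ \i -> do
    x <- readArray arr i
    writeArray arr i (x + dx)    -- O(1) in-place update
  return arr

main :: IO ()
main = print (elems (step 0.5 (listArray (0, 2) [0, 1, 2])))  -- [0.5,1.5,2.5]
```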

The default Random is also slow (see the mersenne alternatives on
hackage instead, and be careful if you use them through the standard
class interface):

http://hackage.haskell.org/trac/ghc/ticket/2280
http://hackage.haskell.org/trac/ghc/ticket/427

Claus




Re: [Haskell-cafe] How to install HOpenGL to Windows?

2009-04-30 Thread Claus Reinke

The thing is, it doesn't really matter if autoconf macros work fine for
every Unix ever invented. The Windows users simply cannot use packages
with configure scripts. They complain about it a lot. We can call them
foolish for not installing cygwin/mingw, but they will not do it and
instead will simply not use our software and/or continue to complain.


Windows users - that's a lot of different people to generalize over. 
From what I've heard, most complaints are about assumptions made
by non-windows users that turn out not to hold on windows,
causing things to fail on windows. Which seems reason enough for
failure reports, otherwise known as complaints.

If someone wants to use a unix shell on an unknown platform, they 
should at least check that one exists there or -even better- provide 
one, not just assume that there'll always be one (and then be surprised 
about getting complaints from those windows users). Same for
autoconf, make & co.

If someone mixes up GCC's standard layout, they need to adjust
everything for those changes, and when that turns out to be rather
difficult, it is no surprise if GCC seems unable to pick the right 
include or library paths. This particular issue has just recently been 
fixed in GHC head, I understand (will that fix cause problems for
cabal, btw, when the existing path hacks are no longer needed?). 
http://hackage.haskell.org/trac/ghc/ticket/1502#comment:12


Apart from causing lots of other path issues (and confusing tools
like cabal, which tried to compensate by special-case code), this 
complicated the process of installing headers and libraries used 
by FFI bindings, at least for those windows users who didn't build 
their own GHCs, with the help of a full GCC install.


And so on..

Listen to the complaints, there is (usually) a real issue behind them.

A couple more examples of why installing the tools that some 
cabal packages rely on isn't straightforward: 

I like and use cygwin, but when installing it, one has to be careful to 
include all the bits that are needed, preferably leaving out the bits that 
might cause confusion when used for GHC/Cabal (such as a second 
GCC installation). I once tried to capture the list of dependencies 
needed for building GHC in a cygwin pseudo-package (no contents, 
only dependencies), which made that easy. 


http://www.haskell.org/pipermail/cvs-ghc/2005-May/025089.html
http://hackage.haskell.org/trac/ghc/wiki/Building/Windows/Cygwin

But then cygwin's installer added security signatures, and I don't
think such were ever added to the pseudo-package, the dependencies
are also not uptodate anymore (I don't have write access to where it 
was put). Nowadays, one could put the cygwin pseudo-package into 
a cabal package on hackage, including the necessary signature, and 
that cabal package could run the cygwin installer with the dependencies 
given in the cygwin package (assuming there's a commandline interface 
to the cygwin installer) or at least users could run the cygwin installers
with an updated version of that cygwin package, and get a cygwin 
setup useable for cabal packages.


Others like msys, but for some reason the msys developers hardly
ever put up a simple installer, one usually has to collect various
bits and pieces in the right version, then unpack them on top of
each other - very far from automated one-click installation. Simon
recently collected the pieces needed for building GHC, so one
could now point windows users to 


http://hackage.haskell.org/trac/ghc/wiki/Building/Preparation/Windows

for one-stop shopping. One could probably do better by 
collecting those pieces into a cabal package (assuming the

license is favourable), but I could never find documentation
for using the mingw/msys installers from the commandline,
as a cabal package would need to do. Someone familiar
with those installers might not find this difficult.

Improving Cabal to replace autoconf is a nice long-term goal,
but in the meantime, things could be made a lot easier for
those windows users. GHC using the standard layout for
its GCC is one step, packaging up the msys and cygwin
dependencies would make it straightforward to install those
correctly (someone with installer knowledge might even
be able to automate that from the cabal commandline..).

Then windows users could easily install either msys or
cygwin, and the remaining issue would be how to install
the headers and libraries for cabal packages with ffi 
bindings. Just as on other platforms.


Claus

PS. Don't think of "them" and "they" vs "we" and "us"
    and "our software". It doesn't help. Neither does
    classifying bug reports as complaints.

-- It is not that we think of "other" as "bad",
-- it is that we think of "bad" as "other".




Re: [Haskell-cafe] Google SoC: Space profiling reloaded

2009-04-30 Thread Claus Reinke

http://socghop.appspot.com/student_project/show/google/gsoc2009/haskell/t124022468245

There's less than a month left before I'm supposed to jump into coding,
and I'd love to hear about any little idea you think would make this
project even better! I created a project page with a rough draft of what
I intend to concentrate on, but the issue tracker is obviously begging
for content, so feel encouraged to populate it. :)

http://code.google.com/p/hp2any/


Hi there,

I'm sure lots of Haskellers are looking forward to profiling improvements!-)

The wxHaskell applications page

http://wxhaskell.sourceforge.net/applications.html

mentions 


http://dready.org/projects/HPView/

which seems relevant.

Shortening the song and dance needed to get profiles would be nice
(eliminating the PS viewer dependency in the process).
So would not having to select profile type in advance (and not having
to repeat the profile runs again and again, with only one view each time).

I assume you're only going to look into visualization, not semantics?-)
http://www.haskell.org/pipermail/cvs-ghc/2008-October/045981.html

Also, there is a lot of space profiling support, but I'd sometimes like
more detailed time profiling (not just end-of-run summaries, but time
sliced profiling - in this time slice, most of the time was spent in x,
in the next time slice, most of the time was spent in y, these functions
are frequented in the beginning, but are not used towards the end, etc). 
Perhaps the recent support for code coverage could be bent to support 
something like this without too much hacking?


Keep us informed!
Claus




Re: [Haskell-cafe] How to install HOpenGL to Windows?

2009-04-30 Thread Claus Reinke
If someone wants to use a unix shell on an unknown platform, they 
should at least check that one exists there or -even better- provide 
one, not just assume that there'll always be one (and then be surprised 
about getting complaints from those windows users). Same for
autoconf, make & co.


You mean that we should be shipping a haskell platform with a full copy
of mingw + all the tools to make configure scripts run? I doubt that'll
make people happy. They'll complain about it not being native.


The platform includes GHC, which already includes mingw (GCC)!-)

What I meant was a cabal package bundling the pieces of msys (sh,
autoconf, make,..). And, no, the haskell platform doesn't need this,
so it should be a separate package. But cabal install should know
that this package would be nice to have at hand if building a package
that uses build-type:configure on windows.


just fixed:
http://hackage.haskell.org/trac/ghc/ticket/1502#comment:12

In principle there is no problem with finding the mingw headers on
Windows. All we need is that the rts or base package specify them in the
include-dirs in the package registration info. Cabal uses the same info.


In practice, we have instructions like these circulating (these being
examples of the better kind, as they actually got things working):
http://joelhough.com/blog/2008/04/14/haskell-and-freeglut-at-last/
http://netsuperbrain.com/blog/posts/freeglut-windows-hopengl-hglut/

And I've lost count of the number of times the finding of mingw
headers or libs has caused bugs (which usually went unnoticed
on the author's machine because the author would have a full
copy of mingw in c:/mingw, where gcc looks for things when
everything else fails - oh, unless you're using ghc's gcc on d:,
when this failsafe would suddenly fail as well, until recently).

Perhaps cabal had no problems with this, but problems were plenty
and initially non-obvious. So I'm happy that this fix has eliminated one
common cause of problems with installing Haskell software on windows.
Every little bit helps.

A couple more examples of why installing the tools that some 
cabal packages rely on isn't straightforward: 


But there is a significant number of Windows users who do not care to
install these tools at all.

Making the stuff work better for people who choose to use cygwin is
great. I have no objections to that. But the majority want stuff to
work without them having to install cygwin. In this case stuff often
means random hackage packages and many of them are using configure
scripts.


If cabal can do autoconf/configure stuff without having autoconf/
configure installed, great. Someone could write "sh" as a Haskell
DSEL, for starters, then replicate the autoconf stuff (it's all just sh
libraries and some macro processing, right?). But I'm not optimistic 
about that in general.


There probably are windows users who don't want to have
msys or cygwin on their machine, but I suspect there are more
windows users who simply don't want to bother with msys (so
if cabal installed and used it internally, they might accept that,
just as they accept, or at least live with, GHC installing and 
using mingw internally).



Then windows users could easily install either msys or
cygwin, and the remaining issue would be how to install
the headers and libraries for cabal packages with ffi 
bindings. Just as on other platforms.


Making them easier to install would probably convince many 
Windows users to install one of them.


It would also simplify cabal package installation instructions,
and avoid the "it doesn't work" and "it thinks it is on unix so I
don't see how it could possibly work" feeling. Instead, cabal
install would simply point out another package dependency
that needs installing.

I'm not complaining about people complaining. :-) I'd like to see 
things work better on windows.


I'm not surprised!-)
Claus




Re: [Haskell-cafe] Non-atomic atoms for type-level programming

2009-04-29 Thread Claus Reinke

z :: client -> Label client
z client = undefined


  ok :: (B.Label client ~ A.Label client) =>
        client -> [A.Label client]
  ok client = [A.z client, B.z client]


This technique relies on the explicit management of the identities of 
modules both at compile-time (type annotation of D.ok) and run-time 
(creation of (D client) in the body of D.ok). While explicit management 
of modules at compile time is the point of the exercise, it would be 
better to avoid the passing of reified module identities at runtime.


In particular, this variant requires all bindings in the module to take
an explicit parameter, instead of having a single parameter for the 
whole module. Having a single module-identifier parameter for each 
binding is better than having lots of individual parameters for each 
binding, but the idea of a parameterized module is to avoid these 
extra per-binding parameters altogether.


Of course, my variant cheated a little, using the TF feature that
TF applications hide implicit parameters from contexts, so instead 
of 'Label a ~ type => type', we write 'Label a' (which meant I had
to fix the 'a' to something concrete to avoid ambiguous types, as
I needed to use type families, not data families, to get the sharing).

Nevertheless, interesting possibilities.
Claus




Re: [Haskell-cafe] Non-atomic atoms for type-level programming

2009-04-28 Thread Claus Reinke

Standard ML's answer to that kind of issue is type sharing.

Does type sharing help with making modules retroactively compatible?



It would be as if one could write modules parameterised by types,
instead of declaring them locally, and being able to share a type
parameter over several imports:



module A[type label] where x = undefined :: label
module B[type label] where x = undefined :: label



module C[type label] where
import A[label]
import B[label]
ok = [A.x,B.x]



assuming that:
- 'module X[types]' means a module parameterized by 'types'
- 'import X[types]' means a module import with parameters 'types'.


It appears I need to qualify my earlier statement that Haskell doesn't
have parameterized modules and type sharing (thanks to Tuve Nordius
[1] for suggesting to use type families for the former). Here is an encoding 
of the hypothetical example above using type families (using one of their 
lesser publicized features - they can be referred to before being defined):



{-# LANGUAGE TypeFamilies #-}
module LA where 


type family Label a
z = undefined::Label ()


{-# LANGUAGE TypeFamilies #-}
module LB where 


type family Label a
z = undefined::Label ()


{-# LANGUAGE UndecidableInstances #-}
{-# LANGUAGE TypeFamilies #-}
module LC where
import LA
import LB

-- express type sharing while leaving actual type open
type family Label a
type instance LA.Label a = LC.Label a
type instance LB.Label a = LC.Label a
ok2 = [LA.z,LB.z]


Modules 'LA' and 'LB' use the applications of the yet to be instantiated
type family 'Label a' as placeholders for unknown types (ie, type families
are used as glorified type synonyms, but with delayed definitions), effectively 
parameterizing the whole modules over these types. Module 'LC' adds

its own placeholder 'LC.Label a', instantiating both 'LA.Label a' and
'LB.Label a' to the same, as yet unknown type (we're refining their
definitions just enough to allow them to match identically), effectively 
expressing a type sharing constraint.


This is probably implicit in the work comparing SML's module language
with Haskell's recent type class/family extensions (can anyone confirm
this with a quote/reference?), but I hadn't realized that this part of the
encoding is straightforward enough to use in practice.
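To actually use the encoding, a client module can close the shared family with a concrete type. For instance (a hypothetical module 'LD', not from the original message, assuming the 'LA'/'LB'/'LC' modules above):

```haskell
{-# LANGUAGE TypeFamilies #-}
module LD where

import LC

-- Instantiate the shared placeholder; via LC's instances, both
-- LA.Label and LB.Label now also reduce to String.
type instance LC.Label a = String

-- ok2 = [LA.z, LB.z] from LC now has a concrete type
-- (a type-level demo only: the list elements are still undefined).
ok3 :: [String]
ok3 = ok2
```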

One remaining issue is whether this encoding can be modified to allow
for multiple independent instantiations of 'LA', 'LB', and 'LC' above,
each with their own type parameters, in the same program.

Claus

[1] http://www.haskell.org/pipermail/haskell-cafe/2009-April/060665.html




Re: [Haskell-cafe] Can subclass override its super-class' defaultimplementation of a function?

2009-04-27 Thread Claus Reinke

Basically, I have a bunch of instances that have a common functionality but
I'd like to be able to group those instances and give each group a different
default implementation of that functionality. It's so easy to do this in
Java, for example, but I have no idea how to do it in Haskell. The above
example will not compile, btw.


Before going into language extensions, is there anything wrong with:

class C a where foo :: a -> Double

fooA a = 5.0 -- default A
fooB a = 7.0 -- default B

data Blah = Blah
data Bar = Bar

instance C Blah where foo = fooA
instance C Bar  where foo = fooB

blah = Blah
bar = Bar

main = do
 print $ foo blah -- should print out 5
 print $ foo bar  -- should print out 7

It seems to match your spec.

Claus



