RE: About Haskell Thread Model

2003-10-13 Thread Yang, Qing
Thanks very much. It's very useful.

PS: Sorry for the inconvenience; I think I should have sent the mail in
plain text.

Qing 

-Original Message-
From: Wolfgang Thaller [mailto:[EMAIL PROTECTED] 
Sent: Sunday, October 12, 2003 4:58 AM
To: Yang, Qing
Cc: [EMAIL PROTECTED]
Subject: Re: About Haskell Thread Model

 I am a new learner of Haskell and I am interested in Haskell's
 concurrency model. Can somebody give me a brief intro to Haskell's
 thread model, like how the user-level threads are mapped to kernel
 threads and what scheduling mechanism Haskell uses, or point me to some
 links or documents that I can learn from.

In the interpreter Hugs, all Haskell threads are run in one kernel 
thread. They are scheduled cooperatively; thread switches only take 
place when a function from the Concurrent module is called.

In the currently released version of GHC, all Haskell threads are run 
in one kernel thread, too; however, thread switches can take place 
whenever memory is allocated --- and in Haskell, that means almost 
always. The optimizer manages to compile a fibonacci function that 
doesn't allocate any memory, but in the real world, it's as good as 
real preemption.
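
In sketch form, a tiny Concurrent Haskell program exercising this (module
names are those of later GHC releases; the forked computation allocates,
so the scheduler gets its chance to preempt it):

```haskell
import Control.Concurrent (forkIO)
import Control.Concurrent.MVar (newEmptyMVar, putMVar, takeMVar)

main :: IO ()
main = do
  done <- newEmptyMVar
  -- forkIO creates a lightweight Haskell thread; the allocation performed
  -- by sum below is what lets the scheduler switch threads under the
  -- allocation-based preemption scheme described above.
  _ <- forkIO (putMVar done (sum [1 .. 1000000 :: Int]))
  takeMVar done >>= print
```

Under the 2003-era scheme both threads here are still multiplexed onto a
single kernel thread.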

If you compile the bleeding-edge GHC from the CVS HEAD, you'll get 
something else; while most threads (those created using forkIO) are 
still light-weight threads that are scheduled in just one kernel 
thread, you can also create threads that get their own operating system 
thread. This solves all the problems that lightweight threads can 
have with foreign (i.e. non-Haskell) libraries.
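
The CVS-HEAD facility described here is what later shipped as forkOS,
i.e. "bound" threads; a minimal sketch, assuming a threaded RTS
(-threaded):

```haskell
import Control.Concurrent (forkOS)
import Control.Concurrent.MVar (newEmptyMVar, putMVar, takeMVar)

main :: IO ()
main = do
  done <- newEmptyMVar
  -- forkOS creates a *bound* thread: it gets its own OS thread, so
  -- foreign libraries that care about thread identity behave correctly.
  _ <- forkOS (putMVar done "called from a dedicated OS thread")
  takeMVar done >>= putStrLn
```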

You should also note that no Haskell implementation currently supports 
SMP; even when multiple kernel threads are used, there is a mutual 
exclusion lock on the Haskell heap, so a multithreaded Haskell program 
will use only one CPU on an SMP system.

I hope my answer was useful...

Cheers,

Wolfgang

P.S.: If you want to do me a favour, you could tell your mail program 
not to send multipart or HTML messages to the list; they look terrible 
to people like me who get a daily digest from the mailing list.

___
Haskell mailing list
[EMAIL PROTECTED]
http://www.haskell.org/mailman/listinfo/haskell


Re: About Haskell Thread Model

2003-10-13 Thread William Lee Irwin III
On Sat, Oct 11, 2003 at 10:58:04PM +0200, Wolfgang Thaller wrote:
 If you compile the bleeding-edge GHC from the CVS HEAD, you'll get 
 something else; while most threads (those created using forkIO) are 
 still light-weight threads that are scheduled in just one kernel 
 thread, you can also create threads that get their own operating system 
 thread. This solves all the problems that lightweight threads can 
 have with foreign (i.e. non-Haskell) libraries.
 You should also note that no Haskell implementation currently supports 
 SMP; even when multiple kernel threads are used, there is a mutual 
 exclusion lock on the Haskell heap, so a multithreaded Haskell program 
 will use only one CPU on an SMP system.
 I hope my answer was useful...

That's a painful-sounding state of affairs, though not entirely
unexpected. It would be interesting to hear of BKL breakup efforts
for Haskell runtime systems, though anymore I'm totally ignorant of
what the devil is going on in userspace except database-only syscalls.


-- wli


Re: About Haskell Thread Model

2003-10-13 Thread Wolfgang Thaller
William Lee Irwin III wrote:

> On Sat, Oct 11, 2003 at 10:58:04PM +0200, Wolfgang Thaller wrote:
> > You should also note that no Haskell implementation currently supports
> > SMP; even when multiple kernel threads are used, there is a mutual
> > exclusion lock on the Haskell heap, so a multithreaded Haskell program
> > will use only one CPU on an SMP system.
> > I hope my answer was useful...
> That's a painful-sounding state of affairs, though not entirely
> unexpected. It would be interesting to hear of BKL breakup efforts
> for Haskell runtime systems, though anymore I'm totally ignorant of
> what the devil is going on in userspace except database-only syscalls.
I'm not sure I understand you. I found out that "BKL" refers to the 
"Big Kernel Lock" (in the Linux kernel), but I have no idea what 
"database-only syscalls" are.

The reason why we currently do not take advantage of SMP is that the 
Haskell Heap is a shared data structure which is modified whenever a 
thunk (an unevaluated expression) is evaluated. Using synchronisation 
primitives like pthread_mutex_lock for every evaluation of a thunk 
would be deadly for performance.
There is some old (1999) code in the GHC RTS that attempts this (using 
intel cmpxchg instructions for synchronisation), but it's currently 
bitrotted and I don't know how successful it was.

Cheers,

Wolfgang



Re: About Haskell Thread Model

2003-10-13 Thread William Lee Irwin III
William Lee Irwin III wrote:
 That's a painful-sounding state of affairs, though not entirely
 unexpected. It would be interesting to hear of BKL breakup efforts
 for Haskell runtime systems, though anymore I'm totally ignorant of
 what the devil is going on in userspace except database-only syscalls.

On Mon, Oct 13, 2003 at 11:13:56AM +0200, Wolfgang Thaller wrote:
 I'm not sure I understand you. I found out that "BKL" refers to the 
 "Big Kernel Lock" (in the Linux kernel), but I have no idea what 
 "database-only syscalls" are.

remap_file_pages() (lightweight mmap() for doing, say, 65536 mappings
of distinct 32KB chunks of a file without kernel memory usage exploding)
and the async I/O syscalls are examples of database-only syscalls; I'm
aware that userspace is using them largely from various kernel mailing
list controversies.


On Mon, Oct 13, 2003 at 11:13:56AM +0200, Wolfgang Thaller wrote:
 The reason why we currently do not take advantage of SMP is that the 
 Haskell Heap is a shared data structure which is modified whenever a 
 thunk (an unevaluated expression) is evaluated. Using synchronisation 
 primitives like pthread_mutex_lock for every evaluation of a thunk 
 would be deadly for performance.
 There is some old (1999) code in the GHC RTS that attempts this (using 
 intel cmpxchg instructions for synchronisation), but it's currently 
 bitrotted and I don't know how successful it was.

cmpxchg and then taking a blocking lock sounds like the 2-tier locking
supported with Linux' new futex system calls. I wonder how they chose
to block in the older GHC RTS.


-- wli


RE: About Haskell Thread Model

2003-10-13 Thread Yang, Qing
Wolfgang,

According to the GHC 6.1 User Guide, GHC supports both Concurrent
Haskell and Parallel Haskell, and the two have very different purposes.
From my understanding of the document and your reply, it is Concurrent
Haskell that uses threads to process tasks concurrently, and the threads
are actually user-level threads. And no SMP is supported now in
Concurrent Haskell. Is this right? For Parallel Haskell, the User Guide
says the following:
---
Parallel Haskell is about speed - spawning threads onto multiple
processors so that your program will run faster. ..

A Parallel Haskell program implies multiple processes running on
multiple processors, under a PVM (Parallel Virtual Machine) framework.
..

Do you have any experience or knowledge of Parallel Haskell? And is
what you mentioned in your previous email all about Concurrent Haskell,
or about both?


Thanks,
Qing Yang
System Software Lab
Intel Corporation
[EMAIL PROTECTED]
86-10-85298800 Ext. 1955 (Office)
8751-1955 (iNet)



Re: About Haskell Thread Model

2003-10-13 Thread Wolfgang Thaller
> Do you have any experience or knowledge of Parallel Haskell? And is
> what you mentioned in your previous email all about Concurrent Haskell,
> or about both?

Everything I said was about Concurrent Haskell. I have no experience 
with Parallel Haskell. All the binaries available on the GHC web page 
are built for Concurrent Haskell.

Cheers,

Wolfgang



nitpicking?

2003-10-13 Thread Janis Voigtlaender
Hello,

just as an addition to the discussion (one week ago or so) about Haskell
being defined as a non-strict rather than lazy language: in that context
it struck me as odd that Section 6.2 of the Report says `seq` is used to
avoid unneeded laziness. The only other uses of lazy seem to appear
in the context of irrefutable patterns, getContents, and a source
comment for the function transpose in the List library.

Ciao, Janis.

--
Janis Voigtlaender
http://wwwtcs.inf.tu-dresden.de/~voigt/
mailto:[EMAIL PROTECTED]


TPLP Special Issue on Verification

2003-10-13 Thread Sandro Etalle
TPLP Special Issue on

 Specification, Analysis and Verification of Reactive Systems
  http://www.cs.utwente.nl/~etalle/specialissue.html

Motivations and Goals

The huge increase in interconnectivity we have witnessed in the last
decade has boosted the development of systems which are often
large-scale, distributed, time-critical, and possibly acting in an
unreliable or malicious environment. Furthermore, software and
hardware components are often mobile, and have to interact with a
potentially arbitrary number of other entities.

These systems require solid formal techniques for their specification,
analysis and verification.

In this respect, computational logic plays an increasingly important
role. Concerning the specification aspect, one can think, for instance,
of the specification, in temporal logic, of a communication
protocol. Such a specification offers the advantage that one can reason
about it using formal methods, and at the same time it is often easily
executable by rewriting it into a logic-based programming language. In
addition, computational logic plays a fundamental role by providing
formal methods for proving a system's correctness, and tools - e.g. using
techniques like constraint programming and theorem proving - for
verifying its properties.

This special issue is inspired by the ICLP workshops on
Specification, Analysis and Verification of Emerging Technologies
(SAVE) that took place during ICLP 2001 and ICLP 2002. Nevertheless,
submission is open to everyone.

TOPICS

The topics of interest include but are not limited to:

* Specification languages and rapid prototyping:
  * Logic programming and its extensions
  * First-order, constructive, modal and temporal logic
  * Constraints
  * Type theory
* Analysis:
  * Abstract interpretation
  * Program analysis and transformation
* Validation:
  * Simulation and testing
  * Deductive methods
  * Model checking
  * Theorem proving

The preferred issues include, but are not limited to:

* Security: e.g. specification and verification of security protocols.
* Mobility: specification and verification of mobile code.
* Interaction, coordination, negotiation, communication and exchange on the Web.
* Open and Parametrized Systems.

SUBMISSIONS:

Only electronic submissions are accepted. Please send your
contribution in PDF or PostScript format to [EMAIL PROTECTED]

DATES:

Deadline for submission: November 15, 2003
Notification: May 1, 2004
Revised paper due: October 1, 2004
Publication: 2005

Authors of submitted papers are encouraged to post their articles on
CoRR, thereby helping the timely distribution of scientific work.


EDITORS

- Giorgio Delzanno, University of Genova, Italy, 
  http://www.disi.unige.it/person/DelzannoG/

- Sandro Etalle, University of Twente and CWI Amsterdam, the
  Netherlands, http://www.cs.utwente.nl/~etalle/

- Maurizio Gabbrielli, University of Bologna, Italy,
  http://www.cs.unibo.it/~gabbri/

ABOUT TPLP

Theory and Practice of Logic Programming (TPLP, see
http://uk.cambridge.org/journals/tlp/ and
http://www.cwi.nl/projects/alp/TPLP/index.html) is published by the
Cambridge University Press (http://uk.cambridge.org/), and is the sole
official journal of the Association for Logic Programming
(http://www.cwi.nl/projects/alp).






Re: About Haskell Thread Model

2003-10-13 Thread mgross




On Mon, 13 Oct 2003, Wolfgang Thaller wrote:

  Do you have some experience or knowledge about Parallel Haskell? And

Parallel Haskell runs, but there are problems. Unless someone has slipped
something past me, there is no parallel implementation for Release 6 yet, 
so if you want to tinker with it, you'll need to go to the CVS repository
for Release 5 of GHC and the parallel run-time system (look for the
GUM branch). Please note that the Release 5 version of parallel Haskell
should be considered experimental and best left aside if you are
unwilling to fix code. 

Murray Gross
Metis Project,
Brooklyn College 






CFP for LDTA 2004

2003-10-13 Thread jas
Apologies if you receive multiple copies of this message.

  **
  ***   Fourth Workshop on   ***
  ***Language Descriptions, Tools and Applications   ***
  ***   LDTA 2004***
  ******
  ***APRIL, 3, 2004  ***
  ***   BARCELONA, SPAIN ***
  ******
  *** http://www.di.uminho.pt/LDTA04 ***
  **


Scope:
--

 The aim of this one day workshop is to bring together researchers
 from academia and industry interested in the field of formal
 language definitions and language technologies, with a special
 emphasis on tools developed for or with these language
 definitions. Some of the scientific areas that benefit from
 these active research fields are:

   - Program analysis, transformation, generation
   - Formal analysis of language properties
   - Automatic generation of language processing tools


 For example, language definitions can be augmented so that not
 only compilers and interpreters can be automatically generated,
 but also other tools such as syntax-directed editors, debuggers,
 partial evaluators, test generators, documentation generators,
 etc. These benefits are well known, and we would like to see them
 widely exploited in current practice.
 Domains of applications that are of interest for this workshop
 are among others:

- Language components, modeling languages
- Re-engineering, re-factoring
- Aspect-oriented programming, adaptive programming
- Domain-specific languages
- XML processing
- Visualization and graph transformation

  The workshop welcomes contributions on all aspects of formal
  language definitions, with special emphasis on applications and
  tools developed for or with these language definitions. We also
  encourage contributions on the use of these methodologies in
  education.


Tool demonstrations:

  The one day LDTA 2004 workshop program will also include a
  session on tool demonstrations. 

Submission procedure and publication: 
-

  Authors should submit a full paper (15-20 pages) to LDTA 2004
  program committee chairs. Tool demo papers (2 pages) should also
  be submitted to the PC chairs. Further information will be
  available at the LDTA 2004 home page.

  Accepted papers will be published and available during the
  workshop. After revision, final copies of the accepted papers
  will be published in Electronic Notes in Theoretical Computer
  Science (ENTCS), Elsevier Science. Author's instructions are
  given here. The authors of the best papers will be invited to
  write a journal version of their paper which will be separately
  reviewed and after acceptance be published in a special issue
  devoted to LDTA 2004 of the journal Science of Computer
  Programming (Elsevier Science).

Organizing Committee:
-
  Isabelle Attali, INRIA Sophia Antipolis, France
  Thomas Noll, Aachen University of Technology, Germany
  Joao Saraiva, University of Minho, Portugal 

Program committee co-chairs:

  Gorel Hedin, Lund Institute of Technology, Sweden
  Eric Van Wyk, University of Minnesota, USA

Program Committee members:
--
  John Boyland, University of Wisconsin, USA
  Olivier Danvy,  University of Aarhus, Denmark
  Jose Labra Gayo, Oviedo University, Spain 
  Paul Klint, CWI,  The Netherlands
  Jens Knoop, Vienna University of Technology, Austria 
  Erik Meijer, Microsoft Research, USA
  Didier Parigot, INRIA, France
  Paul Roe, QUT, Australia
  Ganesh Sittampalam, Oxford University Computing Laboratory, England
  Anthony Sloane, Macquarie University, Australia
  Yannis Smaragdakis, Georgia Institute of Technology, USA
  Doaitse Swierstra, Utrecht University, The Netherlands
  Kris de Volder, University of British Columbia, Canada

---
IMPORTANT DATES
---

December 1st 2003   Submission of full paper

January  5th 2004   Submission of a tool demo paper

January  15th 2004  Notification

February 15th 2004  Final version due

April3rd 2004   LDTA 2004
 
  

 
  

Re: About Haskell Thread Model

2003-10-13 Thread Jan-Willem Maessen
William Lee Irwin III [EMAIL PROTECTED] asks about true SMP
Haskell.
 On Mon, Oct 13, 2003 at 11:13:56AM +0200, Wolfgang Thaller wrote:
  The reason why we currently do not take advantage of SMP is that the 
  Haskell Heap is a shared data structure which is modified whenever a 
  thunk (an unevaluated expression) is evaluated. Using synchronisation 
  primitives like pthread_mutex_lock for every evaluation of a thunk 
  would be deadly for performance.
  There is some old (1999) code in the GHC RTS that attempts this (using 
  intel cmpxchg instructions for synchronisation), but it's currently 
  bitrotted and I don't know how successful it was.
 
 cmpxchg and then taking a blocking lock sounds like the 2-tier locking
 supported with Linux' new futex system calls. I wonder how they chose
 to block in the older GHC RTS.

This invariably proves tricky, even more so now that GHC effectively
uses a many-to-one threading model (and where it may therefore be
wrong to simply block the OS-level thread).  In pH, which isn't quite
Haskell but has the same syntax and runs on multiple processors with a
shared heap, we kludged: we simply called yield or sleep and kept
spinning.  We were actually using work stealing (with lock-free work
queues as I recall) along with wait queues on empty locations, so we
didn't sleep very often.  Nonetheless, this did occasionally turn out
to cause real performance problems.

In reality you probably want to use some sort of thin lock / fat lock
approach a la Java.  Is the location thinly locked?  If so, CAS in a
fat lock instead, then wait on that.  Now update requires a CAS as
well, to see whether the thin lock was fattened.  If so, the fat lock
must be unlocked and deallocated.
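
A deterministic, single-threaded sketch of the thin-lock fast path
described above; the names (LockState, tryThinAcquire) are illustrative,
not the GHC RTS's, and the contended branch where a fat lock would be
inflated is only indicated, not implemented:

```haskell
import Data.IORef (IORef, newIORef, atomicModifyIORef)
import Control.Concurrent.MVar (MVar)

-- A slot is free, thinly held, or inflated to a real (MVar-based) fat lock.
data LockState = Free | ThinHeld | Fat (MVar ())

-- Fast path: claim the slot with a single atomic update, no kernel call.
-- On contention the caller would CAS in a Fat lock and block on its MVar.
tryThinAcquire :: IORef LockState -> IO Bool
tryThinAcquire ref = atomicModifyIORef ref $ \s -> case s of
  Free -> (ThinHeld, True)   -- uncontended: thinly locked
  st   -> (st, False)        -- contended: fall back to fat-lock path
main :: IO ()
main = do
  lock <- newIORef Free
  first  <- tryThinAcquire lock   -- succeeds: the slot was Free
  second <- tryThinAcquire lock   -- fails: already thinly held
  print (first, second)
```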

Naturally, we'd like to keep track of unshared objects so we can avoid
any kind of complicated update protocol in the common case.  This
requires the implementor to establish a careful and clear contract
between the compiler, the GC, and the thread scheduler.

Finally, the complexity of thunk update pales in comparison to the
complexity of multiprocessor GC.  In pH we punted on this
issue---debugging a GC with per-thread allocation pools was hard
enough.  It killed us; thread performance didn't scale because GC
didn't scale.  Eager Haskell was designed with multithreaded
shared-memory execution in mind, but I simply couldn't find the year
or more it would take to build and debug a true multiprocessor GC---so
it remains a uniprocessor implementation.

-Jan-Willem Maessen
[EMAIL PROTECTED]


Re: About Haskell Thread Model

2003-10-13 Thread William Lee Irwin III
William Lee Irwin III [EMAIL PROTECTED] asks about true SMP
 cmpxchg and then taking a blocking lock sounds like the 2-tier locking
 supported with Linux' new futex system calls. I wonder how they chose
 to block in the older GHC RTS.

On Mon, Oct 13, 2003 at 12:02:52PM -0400, Jan-Willem Maessen wrote:
 This invariably proves tricky, even more so now that GHC effectively
 uses a many-to-one threading model (and where it may therefore be
 wrong to simply block the OS-level thread).  In pH, which isn't quite
 haskell but has the same syntax and runs on multiple processors with a
 shared heap, we kludged: we simply called yield or sleep and kept
 spinning.  We were actually using work stealing (with lock-free work
 queues as I recall) along with wait queues on empty locations, so we
 didn't sleep very often.  Nonetheless, this did occasionally turn out
 to cause real performance problems.

Abusing sched_yield() like this has been a performance problem with
threading engines on Linux for a while, though usually as the middle
tier of 3-tier locks as opposed to being the sole blocking primitive.


On Mon, Oct 13, 2003 at 12:02:52PM -0400, Jan-Willem Maessen wrote:
 In reality you probably want to use some sort of thin lock / fat lock
 approach a la Java.  Is the location thinly locked?  If so, CAS in a
 fat lock instead, then wait on that.  Now update requires a CAS as
 well, to see whether the thin lock was fattened.  If so, the fat lock
 must be unlocked and deallocated.
 Naturally, we'd like to keep track of unshared objects so we can avoid
 any kind of complicated update protocol in the common case.  This
 requires the implementor to establish a careful and clear contract
 between the compiler, the GC, and the thread scheduler.

This sounds very heavyweight. I'd be tempted to start looking at
lockless algorithms from the outset if you need to synchronize such
basic/frequently exercised operations.


On Mon, Oct 13, 2003 at 12:02:52PM -0400, Jan-Willem Maessen wrote:
 Finally, the complexity of thunk update pales in comparison to the
 complexity of multiprocessor GC.  In pH we punted on this
 issue---debugging a GC with per-thread allocation pools was hard
 enough.  It killed us; thread performance didn't scale because GC
 didn't scale.  Eager Haskell was designed with multithreaded
 shared-memory execution in mind, but I simply couldn't find the year
 or more it would take to build and debug a true multiprocessor GC---so
 it remains a uniprocessor implementation.

I've not really heard horror stories per se, but indirectly I've gotten
the idea that the JVM's out there spent a lot of time on this. I suspect
some of the Java userspace lock contention I've seen was related to GC.


-- wli


Re: global variable. No, we are not sectarians...

2003-10-13 Thread Jerzy Karczmarczuk
Our friend José Moreira asked:

I need a function called, say, newItem, that when first called
 returns 1, the next time it is called it would return 2, then 3 and so
 on. How can I achieve this?
Several people, among them Jón Fairbairn, explained the problems
with that in Haskell. The standard proposal to 'generate' a, then b, c, d, e, ...
is to make a lazy list [a,b,c,d,e,...] and to iterate over it. Since José
seems not to have mastered the notion of purely functional processing of
such 'non-deterministic collections', the magical advice to use monads,
seeds, etc. seems less useful, though...
My answer was triggered by the somewhat intransigent formulation of Jón:

 You can't (as Glynn has explained), and most of the time
 it's the wrong thing to do anyway.
==

Saying that something is a wrong thing to do is too normative, Jón.
I believe that even on the Haskell list we might try to understand the
*needs* of a potential Haskell user or a non-user.
1. If José wants to keep intact his idea of multiple calls giving different
   results, then the True Functionalists should convince him that there are
   other nice languages.
   The notion of *generators* whose purpose is EXACTLY this, is a popular one,
   and there is nothing 'wrong' with it.
   * Use Python and the yield construct, and 'next'.
   * Use streams and 'next' (or 'do') in Smalltalk.
   * Use Icon and its generators. Very well explained.
   * Try to master the relational programming, backtracking, and use Prolog.
   * Manufacture your generators using Object-Oriented programming, as
 the Smalltalkers did. Instead of using a global 'seed', embed all the
 state in an object and define the 'next' method.
2. I agree with all my venerable predecessors that José should perhaps specify
   his real needs, then the Only True Functional Church might show him the
   way of Salvation, by helping him to design his algorithms in a functional
   way.
   We are convinced that we might help you, José, but there is a price: you
   will have to understand what the 'laziness' is, or perhaps what is a
   'continuation'. [[I claim, however, that nobody will oblige you to learn
   what a 'monad' is, although it might do some good for you one day.]]
Jerzy Karczmarczuk
Caen, France
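
For reference, the effect José asked for can be had in Haskell by making
the state explicit in IO; a minimal sketch (mkNewItem is an illustrative
name, not a library function):

```haskell
import Data.IORef (newIORef, atomicModifyIORef)

-- Builds a fresh counter action; each call of the returned action
-- increments the hidden IORef and yields 1, 2, 3, ...
mkNewItem :: IO (IO Int)
mkNewItem = do
  ref <- newIORef (0 :: Int)
  return (atomicModifyIORef ref (\n -> (n + 1, n + 1)))

main :: IO ()
main = do
  newItem <- mkNewItem
  a <- newItem
  b <- newItem
  c <- newItem
  print (a, b, c)
```

The impurity is confined to IO, which is why the type of newItem is
IO Int rather than Int.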


___
Haskell-Cafe mailing list
[EMAIL PROTECTED]
http://www.haskell.org/mailman/listinfo/haskell-cafe


RE: Using existential types

2003-10-13 Thread Tim Docker
oleg wrote:

 Sorry I didn't know of that requirement. The desired sharing 
 can easily be introduced:

That's always a risk with posting a toy example - which bits
are important can be hard to tell. You are right of course
about being able to introduce the sharing.

FWIW, after the suggestions that I've had (thanks!), my
existential-free example is as below.

Tim



type ColumnFn rowv = [rowv] -> ([String], String)

column :: (rowv -> a) -> ([a] -> a) -> (a -> String) -> ColumnFn rowv
column valf sumf fmtf rows = (map fmtf vals, (fmtf . sumf) vals)
    where vals = map valf rows

calculate :: [ColumnFn rowv] -> [rowv] -> [([String], String)]
calculate cfs rows = [ cf rows | cf <- cfs ]



rows = ["I", "wish", "to", "define", "a", "report"]

cols = [
    column id (\_ -> "") id,
    column length sum show,
    column (\s -> length s * length s) maximum show
    ]

values = calculate cols rows


Re: AW: Using existential types

2003-10-13 Thread Graham Klyne
At 11:25 10/10/03 +0200, [EMAIL PROTECTED] wrote:
> Hi Graham,
>
> > Instead, I replace the class instances by a single algebraic
> > data type, whose members are functions corresponding to
> > OO-style class methods.
>
> could you give an example?

The code in a previous message of mine [1] was an example of sorts, though 
complicated by some other issues.  Look for type 'DatatypeVal'.

[1] http://haskell.org/pipermail/haskell-cafe/2003-October/005231.html

A simpler example might be:

Consider a class of values called "Shape", for which the following operations 
are defined:

   draw :: Shape -> Canvas -> Canvas
   flip :: Shape -> Shape
   move :: Shape -> Displacement -> Shape
   etc.
One can imagine defining a Haskell type class with these methods, but then 
you get the type mixing problem noted previously.  What I have found can 
work in situations like this is to define a type, thus:

data Shape = Shape
{ draw :: Canvas -> Canvas
, flip :: Shape
, move :: Displacement -> Shape
etc
}
then one would also need methods to create different kinds of shape, e.g.:

makeSquare :: Point -> Displacement -> Shape
makeCircle :: Point -> Length -> Shape
etc.
(Assuming appropriate type definitions for Point, Displacement, Length, etc.)
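
A self-contained rendition of this record-of-functions pattern, with
placeholder definitions for Canvas, Point, and Displacement (stand-ins,
not Graham's actual types), and with flip omitted since it would clash
with Prelude.flip:

```haskell
type Canvas = [String]
type Point = (Double, Double)
type Displacement = (Double, Double)

-- A "Shape" is just its bundle of methods; no type class needed.
data Shape = Shape
  { draw :: Canvas -> Canvas
  , move :: Displacement -> Shape
  }

makeCircle :: Point -> Double -> Shape
makeCircle (x, y) r = Shape
  { draw = \c -> c ++ ["circle at " ++ show (x, y) ++ " r=" ++ show r]
  , move = \(dx, dy) -> makeCircle (x + dx, y + dy) r  -- new closure, new state
  }

main :: IO ()
main = mapM_ putStrLn (draw (move (makeCircle (0, 0) 1) (2, 3)) [])
```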

#g


Graham Klyne
For email:
http://www.ninebynine.org/#Contact


Re: Using existential types

2003-10-13 Thread Jan-Willem Maessen
Oleg replies to my message on eliminating existentials:
 Jan-Willem Maessen wrote:
 
  data Expr a = Val a | forall b . Apply (Expr (b -> a)) (Expr b)
 
  I've been trying to convince myself that we really need the
  existential types here.
  ...
  Now (Expr a) is really two things:
  * A structural representation of a term
  * A fancy representation of a set of function applications
 
 Funny I've been thinking about the same example. I have also found a way
 to eliminate the existential quantification, in the most faithful way: by
 implicitly preserving it. The question remains: are existential-free
 expressions equivalent to the original ones? It seems, with
 the existential quantification we can do more things. However, all these
 things are equivalent under observation.
 ...
 [snip]
 Here's a quantification-free re-formulation. Note: it's Haskell 98:
 
  makev v f g seed = (g seed,v)
  makea opr opa f g seed = 
 let (seed1,opr') = opr f g seed in
 let (seed2,opa') = opa f g seed1 in
 (f seed2,(opr' opa'))
 
  test1 = makea (makev fromEnum) (makea (makea (makev (==)) (makev 'd')) (makev 'c'))
 
  eval1 t = snd $ t id id id
 
  size1 t = fst $ t (+1) (+1) 0

This is similar to my second formulation (a pair of value and
structure), except that you're quantifying over the traversal of
that structure and eliminating the structural type entirely.  I do
wonder how you'd write the type Expr a in this setting.  I think the
intermediate value of the traversal must inevitably get involved, at
which point we need either extra type variables or existential type.

 There is no polymorphic recursion anymore. 

Indeed, my second formulation (tupling) eliminated the need for
polymorphic recursion.

Interestingly, by making traversal explicit in this way you throw away
the chief advantage of laziness: memoization.  The two formulations I
presented both memoize the result of eval1 (which I called getValue).  

This memoization did not occur in the original structure, and might
not even be desirable.  With a bit of tweaking, I can control
evaluation of subtrees by using seq on the structural
representation.  But there isn't an easy way to *not* memoize eval1
unless we resort to a formulation like the one you give.

 I still have an uneasy feeling. The original representation would let
 us do more operations: we can flatten the tree:
 
  flatten1 n@(Val x) = n
  flatten1 (Apply f a) = Apply (Val $ eval f) (Val $ eval a)
 
 so the tree would have at most two levels (counting the root). 

Surprisingly easy if you explicitly represent the structure---again,
just as long as you're comfortable with the memoization that's going
on.  If you aren't, then it's hard.

I speculate that most of the time we don't care that the memoization
is happening, and it might even be a good thing.  I'm not sure that's
the case for the Hughes/Swierstra parser trick---on the other hand,
they cook up a very special set of functions to push values deeper
into the structure, which we may get for free in a separated formulation.

 So, the existential quantification permits more nuanced
 transformations -- and yet the nuances aren't observable. So, they
 don't matter?

Whether they matter or not depends upon our aims.  I'm curious about
what's possible---in the hopes that I can get a better handle on
what's really desirable in a particular situation.

-Jan-Willem Maessen
[EMAIL PROTECTED]
___
Haskell-Cafe mailing list
[EMAIL PROTECTED]
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: putStr

2003-10-13 Thread Hal Daume III
In general, you can't, not with these type signatures, since printing 
something is a side effect.

If this is only for debugging purposes, you might try using trace from 
IOExts.  You can use it in a nice fashion like:

 f1 :: Int -> Int
 f1 x 
   | trace ("The initial value is " ++ show x) False = undefined
   | otherwise = f2 x

In general, the 'trace ... False = undefined' idiom is quite useful, but 
note that trace uses unsafePerformIO and very little is guaranteed about 
the output (in particular, the ordering might differ from what you would 
expect...but it's usually fine for debugging purposes).
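For a complete, runnable version: in GHC, trace is also available from Debug.Trace (IOExts is the Hugs/older-GHC home). The assembly below is mine, with f2 taken from the original question:

```haskell
import Debug.Trace (trace)

f1 :: Int -> Int
f1 x
  | trace ("The initial value is " ++ show x) False = undefined
  | otherwise = f2 x

f2 :: Int -> Int
f2 x = 2 * x

main :: IO ()
main = print (f1 21)  -- traces "The initial value is 21" on stderr, prints 42
```

Note the trace guard always evaluates to False, so the message is emitted as a side effect and control falls through to the otherwise branch.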

On 13 Oct 2003, Jose Morais wrote:

 Hi,
 
   I am trying to do something like
 
 f1 :: Int -> Int
 f1 x = f2 x
 
 f2 :: Int -> Int
 f2 x = 2 * x
 
 
   but before f2 returns its result I'd like it to print something like
 "The initial value is " ++ show x.
 
   How could I do this?
 
 
   Thank you
 
 

-- 
 Hal Daume III   | [EMAIL PROTECTED]
 Arrest this man, he talks in maths.   | www.isi.edu/~hdaume



Re: putStr

2003-10-13 Thread Shawn P. Garbett
On Monday 13 October 2003 11:54 am, Jose Morais wrote:
 Hi,

   I am trying to do something like
 
 f1 :: Int -> Int
 f1 x = f2 x
 
 f2 :: Int -> Int
 f2 x = 2 * x
 
 
   but before f2 returns its result I'd like it to print something like
 "The initial value is " ++ show x.


f1 :: Int -> IO Int
f1 x = f2 x

f2 :: Int -> IO Int
f2 x = do
    putStrLn $ "The initial value is " ++ show x
    return (2 * x)

The previously mentioned trace method is preferred for debugging, because 
with this version one is forced into the IO monad.

Shawn


Re: putStr

2003-10-13 Thread Jason Wilcox
On Mon, Oct 13, 2003 at 06:32:14PM +0100, Jose Morais wrote:
 Hi,
 
   What I needed was actually something more like
 
 f3 :: Int -> String
 f3 x = show x ++ f4 x
  
 f4 :: Int -> IO String
 f4 x = do
   putStr ("initial value is " ++ show x)
   return (show x)
 
 
 but this gives the error
 
 
 Type checking
 ERROR teste.hs:11 - Type error in application
 *** Expression : show x ++ f4 x
 *** Term   : f4 x
 *** Type   : IO String
 *** Does not match : [a]
 
 
   Can you help me?
 
 
   Thank you

Your problem is that once you use the IO Monad in a function, every
function that calls it must use IO as well. (If a function you call
has side-effects then you have side-effects by association.) The
following version of f3 should fix your problem:

f3 :: Int -> IO String
f3 x = do x' <- f4 x
          return (show x ++ x')

This code first extracts the String from the IO String returned by f4
and binds it to the variable x'. x' is now just a plain String which
can be concat'd onto other strings, which is what we do in the second line.
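Put together, the corrected pair from this exchange compiles and runs as follows (main is my addition for demonstration):

```haskell
f4 :: Int -> IO String
f4 x = do
    putStr ("initial value is " ++ show x)
    return (show x)

f3 :: Int -> IO String
f3 x = do
    x' <- f4 x                -- extract the String from f4's IO String
    return (show x ++ x')

main :: IO ()
main = do
    s <- f3 3                 -- prints "initial value is 3"
    putStrLn (", f3 returned " ++ s)
```

Here f3 3 performs f4's printing as a side effect and returns "33" (show 3 concatenated with f4's "3").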

I hope this helps,

-- 
--
Jason Wilcox            CS Tutor Coordinator
[EMAIL PROTECTED]       503.725.4023
[EMAIL PROTECTED]       www.cat.pdx.edu        FAB 135-01
--