Re: newtype and existentials

2005-11-18 Thread Ross Paterson
On Fri, Nov 18, 2005 at 01:31:46PM -, Simon Peyton-Jones wrote:
 yes.  a newtype declares a new type isomorphic to an existing type.  
   newtype T = MkT S
 declares T to be isomorphic to S.
 
 There is no existing Haskell type isomorphic to your Dynamic.
 
 In concrete terms, the newtype constructor is discarded before we get to
 System F; but we can't do that with an existential.

The first argument, but not the second, also applies to

newtype T = MkT (forall a. S)
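
A small sketch of the contrast, assuming GHC with -fglasgow-exts (the
names T', D, MkD are only illustrative):

    {-# OPTIONS -fglasgow-exts #-}

    -- Accepted: the field is universally quantified, so the constructor
    -- can still be erased on the way to System F.
    newtype T' = MkT' (forall a. [a])

    -- A newtype constructor may not bind existential type variables;
    -- packaging up an existential needs a real data constructor:
    data D = forall a. Show a => MkD a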

___
Glasgow-haskell-bugs mailing list
Glasgow-haskell-bugs@haskell.org
http://www.haskell.org/mailman/listinfo/glasgow-haskell-bugs


RE: newtype and existentials

2005-11-18 Thread Simon Peyton-Jones
true.  GHC's intermediate language has first-class universal
quantification, but not first-class existentials.  I can't claim that is
the Right Thing, but it might explain why GHC at least makes the choices
it does.

Simon
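
At the source level, an existential package can also be simulated with
just universal quantification, in continuation-passing style. A minimal
sketch, assuming -fglasgow-exts (Dyn, toDyn, showDyn are invented names):

    {-# OPTIONS -fglasgow-exts #-}

    -- "exists a. Show a => a" encoded as "forall r. (forall a. Show a => a -> r) -> r"
    newtype Dyn = Dyn (forall r. (forall a. Show a => a -> r) -> r)

    toDyn :: Show a => a -> Dyn
    toDyn x = Dyn (\k -> k x)

    showDyn :: Dyn -> String
    showDyn (Dyn d) = d show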

| -Original Message-
| From: Ross Paterson [mailto:[EMAIL PROTECTED]
| Sent: 18 November 2005 13:44
| To: Simon Peyton-Jones
| Cc: Ralf Hinze; glasgow-haskell-bugs@haskell.org
| Subject: Re: newtype and existentials
| 
| On Fri, Nov 18, 2005 at 01:31:46PM -, Simon Peyton-Jones wrote:
|  yes.  a newtype declares a new type isomorphic to an existing type.
|  newtype T = MkT S
|  declares T to be isomorphic to S.
| 
|  There is no existing Haskell type isomorphic to your Dynamic.
| 
|  In concrete terms, the newtype constructor is discarded before we get to
|  System F; but we can't do that with an existential.
| 
| The first argument, but not the second, also applies to
| 
|   newtype T = MkT (forall a. S)

___
Glasgow-haskell-bugs mailing list
Glasgow-haskell-bugs@haskell.org
http://www.haskell.org/mailman/listinfo/glasgow-haskell-bugs


Confusing Typing Problem

2005-11-18 Thread Bernd Brassel
I am a bit confused by GHC behaviour for the following variants of a 
program:


data A = A (Int -> A)

f = A g

g _ = f

h = g 'c'

ghc:
 Error:
 Couldn't match `Int' against `Char'

The problem is that a small change eliminates the error:
data A = A (Int -> A)

f = A g
  where
  g _ = f

  h = g 'c'

ghc:
 Ok, modules loaded

But I really need the functions to be top-level, thus I added type 
signatures:

data A = A (Int -> A)

f :: A
f = A g

g :: a -> A
g _ = f

h :: A
h = g 'c'

ghc:
 Ok, modules loaded

Fine, but what I really need, is this:

data A = A (Int -> A)

f :: A
f = A g

g :: Ord a => a -> A
g _ = f

h :: A
h = g 'c'

 Contexts differ in length
 The signature contexts in a mutually recursive group should all be
 identical
 ...
 Couldn't match `Int' against `Char'

Again the problem does not occur for local functions:

data A = A (Int -> A)

f :: A
f = A g
  where
    g :: Ord a => a -> A
g _ = f

h :: A
h = g 'c'

Why is ghc treating local and global functions differently? Is there 
really a difference in decidability of the Polymorphic Type Inference?
And regarding the "Contexts differ in length" message, I read a comment in
"Typing Haskell in Haskell":

 a throw-away comment specifying that all explicit type signatures
 in a binding group must have the same context up to renaming of
 variables [10, Section 4.5.2]. This is a syntactic restriction that
 can easily be checked prior to type checking. Our comments here,
 however, suggest that it is unnecessarily restrictive.

Why does ghc still use that unnecessary restriction?

Cheers
Bernd
___
Glasgow-haskell-users mailing list
Glasgow-haskell-users@haskell.org
http://www.haskell.org/mailman/listinfo/glasgow-haskell-users


Re: kind inference

2005-11-18 Thread Ross Paterson
On Thu, Nov 17, 2005 at 05:43:28PM -0800, John Meacham wrote:
 On Thu, Nov 17, 2005 at 01:17:02PM +, Ross Paterson wrote:
  On Thu, Nov 17, 2005 at 12:46:31PM -, Simon Peyton-Jones wrote:
   On 17 November 2005 12:45, Ross Paterson wrote:
   | I think the H98 rule is arbitrarily restrictive.  But what about
   | going further and considering the occurrences of type constructors
   | in instance declarations, type signature declarations and expression
   | type signatures?
  
   one could. but GHC doesn't.  feels low prio to me...
  
  It would probably not make many more practical programs legal, but it
  would be less arbitrary and easier to explain.
 
 explicit kinds allow you to type everything that this would make legal
 right?  

Yes.  On the other hand if kind inference used all the available
information in a module you'd hardly ever need explicit kinds.
(And if polymorphic kinds were inferred in dependency order, you'd
never need them.)

I'm thinking of what a clean kind inference rule would look like,
and neither the H98 nor the GHC rules are easy to explain or remember.
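
A small illustration of the trade-off, assuming GHC's kind-signature
syntax under -fglasgow-exts (the type names are only illustrative):

    -- The parameter f is unused, so H98 kind inference defaults it to
    -- kind *, and a later use such as 'Phantom Maybe' is a kind error.
    data Phantom f = Phantom

    -- An explicit kind signature recovers the intended kind:
    data Phantom2 (f :: * -> *) = Phantom2

    -- A use inside the same binding group fixes the kind with no
    -- annotation at all:
    newtype Wrap f a = Wrap (f a)    -- f :: * -> * is inferred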

___
Glasgow-haskell-users mailing list
Glasgow-haskell-users@haskell.org
http://www.haskell.org/mailman/listinfo/glasgow-haskell-users


RE: Confusing Typing Problem

2005-11-18 Thread Simon Peyton-Jones
Tricky!  

It certainly is confusing, but that's the way Hindley-Milner type
inference for mutually recursive functions works.

The local-defn version works differently because it 
- assumes a monomorphic type for f
- completely typechecks g, including generalisation
- finishes typechecking f

That could be done if one typechecked the original defns in the right
order, but it's not clear what the right order is.  You specified the
order by choosing which defn to nest.  The other way round won't work:
    g _ = f  where  f = A g

Specifying a type sig should fix the problem, and does.
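
For reference, the poster's top-level variant with the extra signature,
which typechecks as-is:

    data A = A (Int -> A)

    f :: A
    f = A g

    g :: a -> A    -- the signature lets g be checked against a declared type,
    g _ = f        -- breaking the monomorphic knot between f and g

    h :: A
    h = g 'c'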

|  a throw-away comment specifying that all explicit type signatures
|   in a binding group must have the same context up to renaming of
|   variables [10, Section 4.5.2]. This is a syntactic restriction that
|   can easily be checked prior to type checking. Our comments here,
|   however, suggest that it is unnecessarily restrictive.
| 
| Why does ghc still use that unnecessary restriction?

It doesn't any more.  The HEAD lifts the restriction; that'll be in GHC
6.6.  Meanwhile you could try a snapshot build from the download site.

Simon


| -Original Message-
| From: [EMAIL PROTECTED]
[mailto:glasgow-haskell-users-
| [EMAIL PROTECTED] On Behalf Of Bernd Brassel
| Sent: 18 November 2005 10:16
| To: glasgow-haskell-users@haskell.org
| Subject: Confusing Typing Problem
| 
| I am a bit confused by GHC behaviour for the following variants of a
| program:
| 
| data A = A (Int -> A)
| 
| f = A g
| 
| g _ = f
| 
| h = g 'c'
| 
| ghc:
|   Error:
|   Couldn't match `Int' against `Char'
| 
| The problem is that a small change eliminates the error:
| data A = A (Int -> A)
| 
| f = A g
|where
|g _ = f
| 
|h = g 'c'
| 
| ghc:
|   Ok, modules loaded
| 
| But I really need the functions to be top-level, thus I added type
| signatures:
| data A = A (Int -> A)
| 
| f :: A
| f = A g
| 
| g :: a -> A
| g _ = f
| 
| h :: A
| h = g 'c'
| 
| ghc:
|   Ok, modules loaded
| 
| Fine, but what I really need, is this:
| 
| data A = A (Int -> A)
| 
| f :: A
| f = A g
| 
| g :: Ord a => a -> A
| g _ = f
| 
| h :: A
| h = g 'c'
| 
|   Contexts differ in length
|   The signature contexts in a mutually recursive group should all be
|   identical
|   ...
|   Couldn't match `Int' against `Char'
| 
| Again the problem does not occur for local functions:
| 
| data A = A (Int -> A)
| 
| f :: A
| f = A g
|where
|  g :: Ord a => a -> A
|  g _ = f
| 
|  h :: A
|  h = g 'c'
| 
| Why is ghc treating local and global functions differently? Is there
| really a difference in decidability of the Polymorphic Type Inference?
| And regarding the "Contexts differ in length" message, I read a comment in
| "Typing Haskell in Haskell":
| 
|  a throw-away comment specifying that all explicit type signatures
|   in a binding group must have the same context up to renaming of
|   variables [10, Section 4.5.2]. This is a syntactic restriction that
|   can easily be checked prior to type checking. Our comments here,
|   however, suggest that it is unnecessarily restrictive.
| 
| Why does ghc still use that unnecessary restriction?
| 
| Cheers
| Bernd
| ___
| Glasgow-haskell-users mailing list
| Glasgow-haskell-users@haskell.org
| http://www.haskell.org/mailman/listinfo/glasgow-hask
___
Glasgow-haskell-users mailing list
Glasgow-haskell-users@haskell.org
http://www.haskell.org/mailman/listinfo/glasgow-haskell-users


RE: Compiling GHC in Windows, Mingw

2005-11-18 Thread Simon Marlow
On 17 November 2005 21:37, Esa Ilari Vuokko wrote:

 On 11/17/05, Esa Ilari Vuokko [EMAIL PROTECTED] wrote:
 I will start a new clean build, hope it goes away and I need to look
 into buying non-broken hardware... ;)
 
 The build failed and I sent mail about it on the cvs-ghc list this time,
 even though something seemed to rip away my message and just leave an
 attachment..
 
 Anyway, I downloaded a few nightly builds and it seems this happens
 with the 11-17 build as well, but not with 11-16, so it's really recent.

I think I have just fixed this (modification to the mangler in
ghc/driver/mangler), please try again.

Cheers,
Simon
___
Glasgow-haskell-users mailing list
Glasgow-haskell-users@haskell.org
http://www.haskell.org/mailman/listinfo/glasgow-haskell-users


Re: Compiling GHC in Windows, Mingw

2005-11-18 Thread Esa Ilari Vuokko
On 11/18/05, Esa Ilari Vuokko [EMAIL PROTECTED] wrote:
 On 11/18/05, Simon Marlow [EMAIL PROTECTED] wrote:
  I think I have just fixed this (modification to the mangler in
  ghc/driver/mangler), please try again.

 It compiles again!  Thanks!

 ..but it still crashes in the installed location, probably still while
 reading the package.conf.  But it seems you committed stuff, so I'll
 cvs upd, clean and restart the build.

Yeah, still crashes, same as before :-(

Best regards,
--Esa
___
Glasgow-haskell-users mailing list
Glasgow-haskell-users@haskell.org
http://www.haskell.org/mailman/listinfo/glasgow-haskell-users


Re: [Haskell] Compiling wxHaskell from source?

2005-11-18 Thread Johan Jeuring
I cannot build it on Mac OSX either. I think it has to do with GCC  
4.x.


I managed to build wxHaskell on Mac OSX 10.4. Make sure to have the  
latest XCode, but also make sure you use gcc 3.3 instead of gcc 4.?.  
You can reset gcc by means of gcc_select.


-- Johan


___
Haskell mailing list
Haskell@haskell.org
http://www.haskell.org/mailman/listinfo/haskell


[Haskell] PhD position at the CWI, Amsterdam

2005-11-18 Thread Frank de Boer
Position Description


The Coordination and Component Based Software group in SEN3 at CWI has
an open position for a PhD student (OIO) for four years.

The PhD student will perform research in the context of the BSIK project
BRICKS (Basic Research in Informatics for Creating the Knowledge
Society). BRICKS addresses the need for a strong impulse in fundamental 
research in informatics. 

The participation of SEN3 in BRICKS focuses on the component-based
development of large software systems. These components can be viewed as
the basic building blocks out of which larger software systems can be
constructed, by composing the components together within large networks.
Since components in an application are supposed to be highly independent
of each other and of their environment, it is the responsibility of 
their connecting network to coordinate their individual activities to 
ensure their proper co-operation with one another to form a coherent 
whole. 

Aside from its clear advantages in the construction of traditional
complex software, this approach is a vital necessity in the development 
and deployment of software for an emerging class of systems that cater 
to ubiquitous computing and mobile applications. These systems must 
always be on and cannot be shut down or rebooted for software upgrade
or configuration adjustments. On the other hand, they must change in 
reaction to the changes in their environment, such as evolving user 
requirements or relocation. Incremental, dynamic composition of software
out of components is the only way to accommodate the requirements of 
such systems. The component networks of these applications are typically
active in that their various connections will vary and change at run 
time, reflecting the dynamic changes in the communication structure of 
their components.

The main aim of the present PhD research project is the development of 
compositional methods, tools, and formal techniques for the dynamic 
composition of components, the validation of their composition, and the 
validation  of components themselves.

The candidate for the PhD position should have a master's degree in
computer science or mathematics, with a clear interest in questions in
the field of theoretical computer science. Ideally, the candidate has a
background in such topics in theoretical computer science as automata
theory, semantics and proof theory.

The Theme SEN3 (http://www.cwi.nl/sen3) at CWI is a dynamic group of
internationally recognized researchers who work on Coordination Models
and Languages and Component-Based Software Composition.  The activity
in SEN3 is a productive, healthy mix of theoretical, foundational, and
experimental work in Computer Science, ranging in a spectrum covering
mathematical foundations of models of computation, formal methods and
semantics, implementation of advanced research software systems, as well
as their real-life applications.


General information
===

CWI is an internationally renowned research institute in mathematics and
computer science, located in Amsterdam, The Netherlands. The focus is on
fundamental research problems, derived from societal needs. Research is
carried out in 15 research themes. More information about these themes
can be found on the website www.cwi.nl where you can also take a look at
our Annual Reports. A substantial part of this research is carried out
in the framework of national or international programs. 

CWI maintains excellent relations with industry and the academic world,
at home as well as abroad. After their research careers at CWI, an
increasing number of young staff members find employment in these
sectors, for example in spin-off companies that are based on research
results from CWI.  Of course, library and computing facilities at CWI
are first-rate. CWI's non-scientific services to its personnel include
career planning, training  courses, assistance in finding housing, and
tailor-made solutions to problems that may occasionally arise.

Terms of employment
===

The salary is in accordance with the CAO-onderzoekinstellingen and is
commensurate with experience.  The starting
salary for a first year PhD student is around 1850 Euros/month with an
incremental raise for each subsequent year.  

Besides the salary, CWI offers very attractive and flexible terms of
employment, like a collective health insurance, pension-fund, etc.

Application
===

For more information on these vacancies you can contact either of the
PIs:

 F. Arbab, telephone +31-20-5924056, e-mail [EMAIL PROTECTED]
 F.S. de Boer, telephone +31-20-592-4139, email [EMAIL PROTECTED]

Official applications, together with curriculum vitae, letters of
references, and lists of publications must be sent to Mrs. J. Koster,
head of Personnel Department, P.O. box 94079, 1090 GB Amsterdam,
The Netherlands.

___
Haskell mailing list
Haskell@haskell.org

[Haskell] kernel 2.6.11 and readline.so

2005-11-18 Thread Irina Kaliman

I wonder if anybody knows the answer to this:
I have Fedora Core 4 (  2.6.11-1.1369_FC4smp ) installed on one of our
servers.

When I tried to install ghc-6.4-1, I got a dependency problem:

# rpm -ivh ghc-6.4-1.i386.rpm
error: Failed dependencies:
libreadline.so.4 is needed by ghc-6.4-1.i386

I have /usr/lib/libreadline.so.5, which is part of my distribution
(Fedora Core 4, no updates) on this machine.

On my other machine, which is 2.4.20-6, this package was installed with no
problems.

Thank you for your help,

*
Irina Kaliman, System Administrator
University of Miami, Computer Science Department
e-mail:  [EMAIL PROTECTED]

___
Haskell mailing list
Haskell@haskell.org
http://www.haskell.org/mailman/listinfo/haskell


[Haskell] Re: kernel 2.6.11 and readline.so

2005-11-18 Thread Jim Apple

Irina Kaliman wrote:

# rpm -ivh ghc-6.4-1.i386.rpm
error: Failed dependencies:
libreadline.so.4 is needed by ghc-6.4-1.i386


Go to rpmfind.net
Type libreadline.so.4
Find the text Fedora Core 4
Install the compat-readline RPM

Fedora has compat projects for other old libraries as well.

Jim

___
Haskell mailing list
Haskell@haskell.org
http://www.haskell.org/mailman/listinfo/haskell


[Haskell] Haskell Weekly News: November 15, 2005

2005-11-18 Thread John Goerzen
 Haskell Weekly News: November 15, 2005

   Greetings, and thanks for reading the 15th issue of HWN, a weekly
   newsletter for the Haskell community. Each Tuesday, new editions will be
   posted (as text) to [1]the Haskell mailing list and (as HTML) to [2]The
   Haskell Sequence.

   1. http://www.haskell.org/mailman/listinfo/haskell
   2. http://sequence.complete.org/

New Releases

 * York Haskell Compiler. Thomas Davie [3]announced the York Haskell
   Compiler project, which already has working code. Quite a few people
   chimed in with questions.

   3. http://thread.gmane.org/gmane.comp.lang.haskell.general/12485

Discussion

   Making Haskell more open. Simon Peyton-Jones began a [4]long thread with
   some ideas about making Haskell more open to the wider community. The
   thread is too long to completely summarize here, and branched out in
   several directions, including a [5]discussion of the haskell.org homepage.

   4. http://thread.gmane.org/gmane.comp.lang.haskell.general/12479
   5. http://article.gmane.org/gmane.comp.lang.haskell.general/12533

   Haskell users survey. John Hughes [6]wrote about a web-based survey about
   Haskell. He is encouraging everyone with an interest in Haskell to
   participate.

   6. http://thread.gmane.org/gmane.comp.lang.haskell.general/12448

Darcs Corner

   Darcs 1.0.4 was [7]released this week. There are quite a few performance
   improvements, a new posthook option, manifest feature, new darcs put
   command, more git support, and various bugfixes.

   7. http://thread.gmane.org/gmane.comp.version-control.darcs.user/8706

Quote of the Week

   "Impossible things are delayed immediately. Miracles may take a little
   longer." -- Claus Reinke on haskell-cafe.

About Haskell Weekly News

   Yes, it's late again. But, while I am no Zaphod Beeblebrox, carry no
   towel, have no Improbability Drive, and certainly couldn't have come up
   with 42 on my own, if Douglas Adams' Hitchhiker's Guide to the Universe
   series can still be called a trilogy, then HWN can still be
   called weekly :-)

   Want to continue reading HWN? Please help us create new editions of this
   newsletter. Please see the [8]contributing information, or send stories to
   hwn -at- complete -dot- org. There is also a Darcs repository available.

   8. http://sequence.complete.org/hwn-contrib

___
Haskell mailing list
Haskell@haskell.org
http://www.haskell.org/mailman/listinfo/haskell


[Haskell] ANNOUNCEMENT: HAskell Server Pages version 0.4.0

2005-11-18 Thread Lemmih
== HAskell Server Pages ==

HASP is an experimental framework for combining the flexibility of
Haskell with the popularity/necessity of web-pages.

HASP offers an embedded XML syntax without dispensing with the Glasgow
Haskell extensions we hold dear. By using the FastCGI library, HASP is
independent of webserver type, and keeps maximum performance. HASP supports
sessions, POST data and file uploads, persistent values of arbitrary types,
dynamic loading of Haskell code, and thus provides a sound foundation for
higher level programming models for dynamic web-pages.

Changes since 0.3.0:
  * Use the bytecode generator from GHC instead of hs-plugins.
  * Removed the need for Alex, Happy, plugins, ghc-src, trhsp and haspr-config.

Changes from Niklas Broberg's original HSP:
  * GHC bytecode instead of hs-plugins.
  * Dropped the support for Harp.
  * Dropped the support for xml in patterns.
  * Full support for every GHC extension.
  * HASP uses FastCGI where HSP implements its own web server.

HASP: darcs get http://scannedinavian.org/~lemmih/hasp
HSP: http://www.cs.chalmers.se/~d00nibro/hsp/
FastCgi: http://fastcgi.com/

--
Friendly,
  Lemmih
___
Haskell mailing list
Haskell@haskell.org
http://www.haskell.org/mailman/listinfo/haskell


RE: [Haskell-cafe] Project postmortem

2005-11-18 Thread Simon Peyton-Jones

| Unless lightning strikes and tomorrow morning I figure out what's the
| deal with the spurious Mac OSX crashes, I think this might be my last
| network app in Haskell. I should really be spending time on the
| business end of the app instead of figuring out platform differences
| and the like.

Joel, I think it's fantastic that you've been pushing on Haskell in the
way you have.  What I learn from your experience is that the *language*
is pretty good for what you wanted to do (esp lightweight concurrency)
but the *libraries* in the area of networking are lacking both
functionality and (more particularly) robustness.

I hope you don't abandon Haskell altogether.  Without steady, friendly
pressure from applications-end folk like you, things won't improve.
It's incredibly valuable feedback.  But I can see that when you have to
deliver something next week you can't wait around for someone to
get around to fixing your problem.  (They aren't paid either!)  Maybe
you can use Haskell for something less mission-critical, so that you can
keep up the pressure? 

Meanwhile, let me utter my customary encouragement to the Haskell
community out there: please pitch in and help!  Haskell will only break
into real applications, of the kind Joel has been writing, if we can
offer robust libraries, and that depends utterly on you.  Don't wait for
someone else to do it.

Simon
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Project postmortem

2005-11-18 Thread Joel Reymont

On Nov 18, 2005, at 10:17 AM, Simon Peyton-Jones wrote:


I hope you don't abandon Haskell altogether.  Without steady, friendly
pressure from applications-end folk like you, things won't improve.


Nah, I'm just having a very frustrating Friday. I think I need some
direction in which to dig and a bit of patience over the weekend. For
example,

What does this mean precisely? My take is that the GHC runtime is
trying to call a C function; this much I gathered from the source
code. It also seems that since I do not see another library at #0
then the issue is within GHC. Is that the right take on it?


Program received signal EXC_BAD_ACCESS, Could not access memory.
Reason: KERN_INVALID_ADDRESS at address: 0x3139322e
0x0027c174 in s8j1_info ()
(gdb) where
#0  0x0027c174 in s8j1_info ()
#1  0x0021c9f4 in StgRunIsImplementedInAssembler () at StgCRun.c:576
#2  0x0021cdc4 in schedule (mainThread=0x1100360, initialCapability=0x308548) at Schedule.c:932
#3  0x0021dd6c in waitThread_ (m=0x1100360, initialCapability=0x0) at Schedule.c:2156
#4  0x0021dc50 in scheduleWaitThread (tso=0x13c, ret=0x0, initialCapability=0x0) at Schedule.c:2050

#5  0x00219548 in rts_evalLazyIO (p=0x29b47c, ret=0x0) at RtsAPI.c:459
#6  0x001e4768 in main (argc=2262116, argv=0x308548) at Main.c:104

It's incredibly valuable feedback.  But I can see that when you have to
deliver something next week you can't wait around for someone to
get around to fixing your problem.  (They aren't paid either!)  Maybe
you can use Haskell for something less mission-critical, so that you can
keep up the pressure?


I can't change who I am, I just gotta push the envelope. I would not  
have stood the pain of doing this project in Erlang, for example,  
what with all the nested data structures, etc.


I'm not waiting for someone to fix my problem, I would gladly fix it  
myself if I understood where the problem is. It used to be fairly  
clear before when the stack trace pointed to one of the OpenSSL  
libraries. In this particular case I don't even know how to start  
debugging this. Do I set a break point in s8j1_info? But it's  
something else periodically, like s34n_info.


Do I inspect the C code somehow? But how do I do that? How do I debug  
the GHC runtime?


Thanks, Joel

--
http://wagerlabs.com/





___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


The IT buzzword of the next decade (was Re: [Haskell-cafe] Project postmortem)

2005-11-18 Thread Joel Reymont

This would be a good new thread to discuss it ;-)

On Nov 18, 2005, at 10:42 AM, Jan Stoklasa (gmail) wrote:


Hi,
so sad, so true...
At least Haskell ideas sneak into mainstream languages in disguise (LINQ
anyone?). The C-Java-C# syntax that business developers and their bosses
love so much is mandatory, so the results lack the beauty we all know and
appreciate, but it is kinda nice to see functional programming going
mainstream at last. Maybe Lambda is the IT buzzword of the next decade :-).


--
http://wagerlabs.com/





___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Records (was Re: [Haskell] Improvements to GHC)

2005-11-18 Thread Glynn Clements

Sebastian Sylvan wrote:

   How about (¤)? It looks like a ring to me, I'm not sure where that's
   located on a EN keyboard, but it's not terribly inconvenient on my SE
   keyboard. f ¤ g looks better than f . g for function composition, if
   you ask me.
  
  That symbol actually does look better, but isn't on any English
  keyboards to the best of my knowledge. I can get it in my setup with
  compose-key o x, but not many people have a compose key assigned.
  Also, this may just be a bug, but currently, ghc gives a lexical error
  if I try to use that symbol anywhere, probably just since it's not an
  ASCII character.
 
 Hmm. On my keyboard it's Shift+4. Strange that it's not available on
 other keyboards. As far as I know that symbol means nothing
 particularly swedish. In fact, I have no idea what it means at all
 =)

It's a generic currency symbol (the X11 keysym is XK_currency). It
doesn't exist on a UK keyboard (where Shift-4 is the dollar sign).

In any case, using non-ASCII characters gives rise to encoding issues
(e.g. you have to be able to edit UTF-8 files).

-- 
Glynn Clements [EMAIL PROTECTED]
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Records (was Re: [Haskell] Improvements to GHC)

2005-11-18 Thread Sebastian Sylvan
On 11/18/05, Tomasz Zielonka [EMAIL PROTECTED] wrote:
 On Thu, Nov 17, 2005 at 06:56:09PM +0100, Sebastian Sylvan wrote:

  Some people do use it more often than I do, but I find that in most
  cases except simple pipelined functions it only makes the code
  harder to read.

 But this case is quite important, isn't it?

I'm not so sure it is, and you can almost always write it using ($)
without too much trouble. I really only ever use (.) for pretty simple
things like filter (not . null).

Again, I'm thinking (<>) is a good operator. An intelligent editor
would pull them together a bit more to make it look even more like a
ring.
I could see myself using  and  for dot and cross products in
linear algebra, though, but I'm willing to sacrifice those operators
for the greater good :-)

/S

--
Sebastian Sylvan
+46(0)736-818655
UIN: 44640862
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Records (was Re: [Haskell] Improvements to GHC)

2005-11-18 Thread John Meacham
I always fancied (<>) as a synonym for 'mappend'
John

-- 
John Meacham - ⑆repetae.net⑆john⑈ 
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Records

2005-11-18 Thread Ketil Malde
Fraser Wilson [EMAIL PROTECTED] writes:

Isn't there a potential for confusion with function
  composition (f . g)?

 Perhaps, but I always have spaces on either side when it's function
 composition.

Good for you.  Syntax that changes depending on spacing is my number
one gripe with the Haskell syntax.  And too many infix operators and
symbolic elements are on the list as well.

How about a pair of (magical, if necessary) functions called set and
get?  Letting you do something like

  x `set` first 4 `set` second "foo"
?
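
A rough sketch of how far plain record update already goes in that
direction (the record type P and its fields are invented, and here
"set" is just reverse application rather than anything magical):

    data P = P { first :: Int, second :: String } deriving Show

    set :: a -> (a -> a) -> a
    set x f = f x

    example :: P
    example = P 0 "" `set` (\p -> p { first  = 4     })
                     `set` (\p -> p { second = "foo" })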

-k
-- 
If I haven't seen further, it is by standing in the footprints of giants

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Records (was Re: [Haskell] Improvements to GHC)

2005-11-18 Thread Tomasz Zielonka
On Fri, Nov 18, 2005 at 12:21:09PM +0100, Sebastian Sylvan wrote:
 On 11/18/05, Tomasz Zielonka [EMAIL PROTECTED] wrote:
  On Thu, Nov 17, 2005 at 06:56:09PM +0100, Sebastian Sylvan wrote:
 
   Some people do use it more often than I do, but I find that in most
   cases except simple pipelined functions it only makes the code
   harder to read.
 
  But this case is quite important, isn't it?
 
 I'm not so sure it is, and you can almost always write it using ($)
 without too much trouble. I really only ever use (.) for pretty simple
 things like filter (not . null).

Try not to look as if you wanted to _remove_ the composition operator,
because that will make people angry (w...) :-)
We are talking about _renaming_ the composition, not removing it,
right?

If you removed it from the Prelude, most people would write their own
versions, with different names, and we rather don't want that.

Anyway, is it realistic to expect that people will rewrite their
programs to use the new operator? I thought that the new version of
Haskell will be mostly downwards compatible with Haskell 98?

Best regards
Tomasz
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


RE: [Haskell-cafe] Project postmortem

2005-11-18 Thread Simon Marlow
On 18 November 2005 10:48, Joel Reymont wrote:

 On Nov 18, 2005, at 10:17 AM, Simon Peyton-Jones wrote:
 
 I hope you don't abandon Haskell altogether.  Without steady,
 friendly pressure from applications-end folk like you, things won't
 improve. 
 
 Nah, I'm just having a very frustrating Friday. I think I need some
 direction in which to dig and a bit of patience over the weekend. For
 example,
 
 What does this mean precisely? My take is that the GHC runtime is
 trying to call a C function. this much I gathered from the source
 code. It also seems that since I do not see another library at #0
 then the issue is within GHC. Is that the right take on it?

The stack trace doesn't mean much at all I'm afraid - GHC doesn't use
the C stack, so any stack trace generated for a crash inside the Haskell
code is mostly useless.  It does tell you the block in which the crash
happened (s8j1_info), and it tells you that the crash was in Haskell and
not C.  The rest of the frames on the stack are from the GHC runtime,
and you'll pretty much always see these same frames on the stack for any
crash inside Haskell code.

How we normally proceed for a crash like this is as follows: examine
where the crash happened and determine whether it is a result of heap or
stack corruption, and then attempt to trace backwards to find out where
the corruption originated from.  Tracing backwards means running the
program from the beginning again, so it's essential to have a
reproducible example.  Without reproducibility, we have to use a
combination of debugging printfs and staring really hard at the code,
which is much more time consuming (and still requires being able to run
the program to make it crash with debugging output turned on).

You can get debugging output by compiling your program with -debug, and
then running it with some of the -Dsomething options (use +RTS -? for
a list, +RTS -Ds is a good one to start with).

Cheers,
Simon
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Project postmortem

2005-11-18 Thread Joel Reymont

On Nov 18, 2005, at 1:55 PM, Simon Marlow wrote:

You can get debugging output by compiling your program with -debug, and
then running it with some of the -Dsomething options (use +RTS -? for
a list, +RTS -Ds is a good one to start with).


I'm still working on a repro case but here's what I get...

+RTS -Ds
...
scheduler: checking for threads blocked on I/O
sched: -- running thread 1103 ThreadRunGHC ...
sched: -- thread 1103 (ThreadRunGHC) stopped: is blocked on an MVar
all threads:
thread 1225 @ 0x1539000 is not blocked
thread 1224 @ 0x1506aa4 is not blocked
thread 1223 @ 0x15066a4 is not blocked
...
scheduler: checking for threads blocked on I/O
sched: -- running thread 1107 ThreadRunGHC ...
Segmentation fault

1107 is not blocked in the list of all threads. What options should I  
try next?


Thanks, Joel

--
http://wagerlabs.com/





___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


RE: [Haskell-cafe] Project postmortem

2005-11-18 Thread Simon Marlow
On 18 November 2005 14:42, Joel Reymont wrote:

 On Nov 18, 2005, at 1:55 PM, Simon Marlow wrote:
 
 You can get debugging output by compiling your program with -debug,
 and then running it with some of the -Dsomething options (use +RTS
 -? for a list, +RTS -Ds is a good one to start with).
 
 I'm still working on a repro case but here's what I get...
 
 +RTS -Ds
 ...
 scheduler: checking for threads blocked on I/O
 sched: -- running thread 1103 ThreadRunGHC ...
 sched: -- thread 1103 (ThreadRunGHC) stopped: is blocked on an MVar
 all threads:
  thread 1225 @ 0x1539000 is not blocked
  thread 1224 @ 0x1506aa4 is not blocked
  thread 1223 @ 0x15066a4 is not blocked
 ...
 scheduler: checking for threads blocked on I/O
 sched: -- running thread 1107 ThreadRunGHC ...
 Segmentation fault
 
 1107 is not blocked in the list of all threads. What options should I
 try next?

That doesn't tell us much unfortunately.  Can you send a disassembly of
the block in which the crash happened?

Is it always the same block, BTW?  Does changing the heap size (+RTS
-Hsize) have any effect?

Cheers,
Simon
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Records (was Re: [Haskell] Improvements to GHC)

2005-11-18 Thread Sebastian Sylvan
On 11/18/05, Tomasz Zielonka [EMAIL PROTECTED] wrote:
 On Fri, Nov 18, 2005 at 12:21:09PM +0100, Sebastian Sylvan wrote:
  On 11/18/05, Tomasz Zielonka [EMAIL PROTECTED] wrote:
   On Thu, Nov 17, 2005 at 06:56:09PM +0100, Sebastian Sylvan wrote:
 
Some people do use it more often than I do, but I find that in most
cases except simple pipelined functions it only makes the code
harder to read.
  
   But this case is quite important, isn't it?
 
  I'm not so sure it is, and you can almost always write it using ($)
  without too much trouble. I really only ever use (.) for pretty simple
  things like filter (not . null).

 Try not to look as if you wanted to _remove_ the composition operator,
 because that will make people angry (w...) :-)
 We are talking about _renaming_ the composition, not removing it,
 right?

Yes. I just don't think it's used enough to warrant giving it one of
the best symbols.

 Anyway, is it realistic to expect that people will rewrite their
 programs to use the new operator? I thought that the new version of
 Haskell will be mostly downwards compatible with Hashell 98?

Well, the records proposal is unlikely to go in Haskell 1.5 anyway, so
I'm mainly exercising wishful thinking here. In Haskell 2.0, which I
understand to be more of a complete make-over ("backwards-compatibility
be damned!"), this could be considered.

/S
--
Sebastian Sylvan
+46(0)736-818655
UIN: 44640862
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] How to use a wiki to annotate GHC Docs? was Re: [Haskell] Re: Making Haskell more open

2005-11-18 Thread Cale Gibbard
On 18/11/05, Wolfgang Jeltsch [EMAIL PROTECTED] wrote:
 Am Mittwoch, 16. November 2005 20:02 schrieb Cale Gibbard:
  [...]

  It's unfortunate, but if you don't put a little bit of effort into
  defending your forms, they will eventually get quite a lot of spam.
  Cleaning up 600+ pages by hand takes quite a lot of effort, even with
  the ability to revert.

 Is the wiki spam problem really that big?  If yes, I would really wonder how
 Wikipedia deals with it.  Since I have never come across a spammed Wikipedia
 article, it's hard for me to imagine that wiki spam is such a big problem.

Wikipedia has a page just to keep track of where the vandalism is
currently happening. See:
http://en.wikipedia.org/wiki/Vandalism_in_progress
There's a ridiculous amount of vandalism going on at any one time,
it's just that the wiki is so huge that it's only a tiny fraction of
the content, and it has so many users that things move quickly.

Yes, HaWiki has been hit several times now with large spamming runs,
often where nearly every page got hit. Normally someone who is
watching with permissions to do so will edit LocalBadContent and put a
stop to it, and clean things up, but it's not uncommon to have to
clean 100 pages, and when I got home one day, every page on the wiki
had been spammed (700+), and some of them twice, once by a different
spammer. This sort of thing is what prompted making all the pages
immutable to users not logged in.

  [...]

  Another way to raise the bar a bit perhaps would be to randomise the names
  of the form controls slightly, so that a spambot couldn't just use the same
  names for things every time, it would have to properly load the page and
  scrape the names out.

 This would mean modifying the wiki software, right?  This in turn would cause
 problems with, for example, security updates.

Perhaps the changes could be made upstream. Also, if people are
considering writing wiki software in Haskell, it's something to take
into account.

- Cale
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Project postmortem

2005-11-18 Thread Jason Dagit


On Nov 18, 2005, at 2:17 AM, Simon Peyton-Jones wrote:


I hope you don't abandon Haskell altogether.  Without steady, friendly
pressure from applications-end folk like you, things won't improve.
It's incredibly valuable feedback.  But I can see that when you  
have to

deliver something next week you can't wait around for some someone to
get around to fixing your problem.  (They aren't paid either!)  Maybe
you can use Haskell for something less mission-critical, so that  
you can

keep up the pressure?


Here is some feedback on a negative experience I had with Haskell  
recently (really about the only negative experience :)


I was playing with one of the Haskell OpenGL libraries (actually it's  
a refined FFI) over the summer and some things about it rubbed me the  
wrong way.  I wanted to try fixing them but I really couldn't figure  
out how to get ahold of the code and start hacking.  I found some  
candidates, but it seemed like old cvs repositories or something.  I  
was confused, ran out of time and moved on.  Why do I bring it up?   
If it had been obvious where to get an official copy of the library I  
could have tried sending in some patches to make things work the way  
I wanted.  I'm a huge fan of darcs repositories, BTW.


Thanks,
Jason
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Detecting Cycles in Datastructures

2005-11-18 Thread Paul Hudak

This is a very late response to an old thread...

Tom Hawkins wrote:
 In a pure language, is it possible to detect cycles in recursive
 data structures?  For example, is it possible to determine that
 cyclic has a loop? ...

 data Expr = Constant Int | Addition Expr Expr

 cyclic :: Expr
 cyclic = Addition (Constant 1) cyclic

 Or phrased differently, is it possible to make Expr an instance of
 Eq such that cyclic == cyclic is smart enough to avoid a recursive
 descent?

 -Tom

---

Perhaps it's obvious, but one way to do this is to make the cycle
*implicit* via some kind of a fixed-point operator, which is a trick I
used recently in a DSL application.  For example, you could define:

 data Expr = Const Int
           | Add Expr Expr
           | Loop          -- not exported
      deriving (Eq, Show)

The purpose of Loop is to mark the point where a cycle exists.
Instead of working with values of type Expr, you work with values of
type Expr -> Expr, or Fix Expr:

 type Fix a = a -> a

For example:

 fe1,fe2 :: Fix Expr
 fe1 e = Add (Const 1) (Const 1)  -- non-recursive
 fe2 e = Add (Const 1) e  -- recursive

You can always get the value of an Expr from a Fix Expr whenever you
need it, by computing the fixed point:

 fix f = x where x = f x

 e1,e2 :: Expr
 e1 = fix fe1  -- finite
 e2 = fix fe2  -- infinite

Note that e2 is equivalent to your cyclic.  But now we can also test
for equality:

 instance Eq (Fix Expr) where
   fe1 == fe2  =  fe1 Loop == fe2 Loop

For example, fe2 == fe2 returns True, whereas e2 == e2
(i.e. your cyclic == cyclic) diverges.

Of course, note that, although fe1 == fe2 implies that fix fe1 == fix
fe2, the reverse is not true, since there will always be loops of
varying degrees of unwinding that are semantically equivalent, but not
syntactically, or structurally, equivalent.  Thus this definition of
equality is only approximate, but still, it's useful.
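
(An added illustration of that caveat, not from the original message:
unwinding fe2 once gives a function with the same fixpoint that
nevertheless compares as unequal.)

    fe2' :: Fix Expr
    fe2' e = Add (Const 1) (Add (Const 1) e)   -- fe2 unwound one step

    -- fe2 == fe2' yields False under the Eq (Fix Expr) instance above,
    -- even though fix fe2 and fix fe2' denote the same infinite tree.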

If you want to have multiple loops, or a loop not at the top level,
then you need to add some kind of a loop constructor to Expr.  I've
sketched that idea below, and pointed out a couple of other useful
ideas, such as showing a loop, and mapping a function over a loop
without unwinding it.

I hope this helps,

  -Paul

--

 module Cyclic where

Loop now needs an ID because there may be more than one of them.

 data Expr = Const Int
           | Add Expr Expr
           | Rec (Fix Expr)   -- implicit loop
           | Loop ID          -- not exported

 type Fix a = a -> a
 type ID    = Int

To check equality of and show Exprs, we need to supply unique IDs to
each recursive loop, which we do via a simple counter.

 instance Eq Expr where
   e1 == e2  =
     let eq (Const x)   (Const y)     n  =  x == y
         eq (Loop i1)   (Loop i2)     n  =  i1 == i2
         eq (Add e1 e2) (Add e1' e2') n  =  eq e1 e1' n && eq e2 e2' n
         eq (Rec fe1)   (Rec fe2)     n  =  eq (fe1 (Loop n))
                                               (fe2 (Loop n)) (n+1)
         eq _ _                       n  =  False
     in  eq e1 e2 0

 instance Show Expr where
   show e  =
     let show' (Const x)   n =  "(Const " ++ show x ++ ")"
         show' (Add e1 e2) n =  "(Add " ++ show' e1 n ++ " "
                                         ++ show' e2 n ++ ")"
         show' (Loop i)    n =  "(Loop " ++ show i ++ ")"
         show' (Rec fe)    n =  "(Rec " ++ show n ++ " "
                                         ++ show' (fe (Loop n)) (n+1)
                                         ++ ")"
     in  show' e 0


Unwinding (i.e. computing fixpoints):

Note: unwind should never see a Loop constructor.

 unwind :: Expr -> Expr
 unwind (Add e1 e2) = Add (unwind e1) (unwind e2)
 unwind (Rec fe)    = x where x = unwind (fe x)
 unwind e           = e

The 2nd equation above is analogous to:
fix f = x where x = f x
And these two equations could also be written as:
fix f = f (fix f)
unwind (Rec fe) = unwind (fe (Rec fe))

Examples:

 fe1,fe2 :: Fix Expr
 fe1 e = Add (Const 1) (Const 1)  -- non-recursive
 fe2 e = Add (Const 1) e  -- recursive

 e1,e2,e3 :: Expr
 e1 = Rec fe1 -- no real loop
 e2 = Rec fe2 -- top-level loop
 e3 = Add e2 (Const 0)-- lower-level loop
 e4 = Rec (\e1 -> Add (Const 1)
                      (Rec (\e2 -> Add e1 e2)))  -- nested loop

 b1,b2 :: Bool
 b1 = e3==e3   -- returns True
 b2 = e3==e2   -- returns False

 e1u,e2u,e3u :: Expr
 e1u = unwind e1   -- finite
 e2u = unwind e2   -- infinite
 e3u = unwind e3   -- infinite
 e4u = unwind e4   -- infinite

Now suppose we define a function, say mapE, that applies a function to
the leaves (in this case the Const Int's) of an Expr.  For example:

mapE succ (Add (Const 1) (Const 2)) = Add (Const 2) (Const 3)

Then if we define something like:

cyclic1 = Add (Const 1) cyclic1

and do mapE succ 

Re: [Haskell-cafe] Detecting Cycles in Datastructures

2005-11-18 Thread Henning Thielemann


On Fri, 18 Nov 2005, Paul Hudak wrote:


For example:


fe1,fe2 :: Fix Expr
fe1 e = Add (Const 1) (Const 1)  -- non-recursive
fe2 e = Add (Const 1) e  -- recursive


Do you mean

fe1 _ = Add (Const 1) Loop

?
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Project postmortem

2005-11-18 Thread Cale Gibbard
test.wagerlabs.com seems really slow for me right now. I've mirrored
the repo on my own machine (might not be 100% reliable, but should
stay up nearly all of the time). The mirror address is
http://vx.hn.org/postmortem/

 - Cale

On 18/11/05, Joel Reymont [EMAIL PROTECTED] wrote:
 Folks,

 This is not quite the error that I was expecting but they could be
 related, I'm not sure. In any case, you can retrieve the repro
 project thusly:

 darcs get http://test.wagerlabs.com/postmortem

 You need OpenSSL to build these so don't forget to add -lssl -lcrypto
 to either ghc or ghci.

 I would appreciate it if we could all collectively look at this, as
 things are either weird or I'm missing something obvious. I will
 apply any patches sent to me.

 I run like this:

 ghci -fglasgow-exts -lssl -lcrypto
 :l Server
 main

 ghci -fglasgow-exts -lssl -lcrypto
 :l Client
 main

 I get in the server window:

 interactive: unknown exception

 14:51:39: ThreadId 1: Accepted new connection: {handle: socket: 5}
 14:51:39: ThreadId 1: Verify locations: 1
 14:51:39: ThreadId 1: sslGetError: 2
 14:51:39: ThreadId 4: Starting SSL handshake...
 14:51:39: ThreadId 4: Reading from BIO...
 14:51:39: ThreadId 4: Waiting for BIO 0x01108670
 14:51:39: ThreadId 4: waitForBio: gotta wait a bit...

 If you look at SSL.hs you will see that I'm calling threadDelay right
 after this message. No other messages are produced. This tells me
 that threadDelay is throwing an exception.

 Why would it, though? And how can I tell what the exception is? If I
 comment out the threadDelay then I get the exception somewhere in the
 expect code after bytes are sent to the other side.

 Overall, my intent is to get this to work for 1 thread and then try,
 say, 5 or 10 thousand.

 Thanks, Joel

 ___
 Haskell-Cafe mailing list
 Haskell-Cafe@haskell.org
 http://www.haskell.org/mailman/listinfo/haskell-cafe

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Project postmortem

2005-11-18 Thread Sven Panne
Am Freitag, 18. November 2005 17:16 schrieb Jason Dagit:
 [...]
 I was playing with one of the Haskell OpenGL libraries (actually it's
 a refined FFI) over the summer and some things about it rubbed me the
 wrong way.  I wanted to try fixing them but I really couldn't figure
 out how to get ahold of the code and start hacking.  I found some
 candidates, but it seemed like old cvs repositories or something.  I
 was confused, ran out of time and moved on.  Why do I bring it up?
 If it had been obvious where to get an official copy of the library I
 could have tried sending in some patches to make things work the way
 I wanted.  I'm a huge fan of darcs repositories, BTW.

Hmmm, as the OpenGL/GLUT/OpenAL/ALUT guy I have to admit that I should really, 
really update the web pages about those packages. But anyway: Asking on any 
Haskell mailing list (there is even one especially for the OpenGL/GLUT 
packages) normally gives you fast response times. Without even knowing that 
there is a problem, there is nothing I can fix. :-) And don't hesitate to ask 
questions about the usage of those packages, because this is valuable 
feedback, too. Regarding the repository: The normal fptools repository is the 
official one for those packages. But IIRC, most GHC binary packages include 
OpenGL/GLUT support, so there is normally no urgent need for a home-made 
version. All packages are already cabalized, but I have to admit that I have 
never tried to build them on their own.

Cheers,
   S.
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] Function application like a Unix pipe

2005-11-18 Thread Scherrer, Chad
I'm still trying to settle on a feel for good programming style in
Haskell. One thing I've been making some use of lately is

(\|) = flip ($)
infixl 0 \|

Then expressions like

f4 $ f3 $ f2 $ f1 $ x

become

x  \|
f1 \|
f2 \|
f3 \|
f4

I've seen something like this on haWiki using (#), but I prefer this
notation because it looks like a Unix pipe, which is exactly how it's
used. 

This style makes the sequential feel of monadic do notation available to
non-monadic functions, and would seem to ease the initial confusion most
people experience when they first program using lazy evaluation. Many
newbies have experience with Unix pipes, and this notation makes that
analogy explicit.
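
A self-contained version for anyone who wants to try it (the example
pipeline is arbitrary):

    import Data.List (sort)

    infixl 0 \|

    (\|) :: a -> (a -> b) -> b
    (\|) = flip ($)

    -- reads left to right, like a shell pipeline: sort | take 3 | sum
    main :: IO ()
    main = print ([9,1,7,3,5] \| sort \| take 3 \| sum)

Because the operator binds more loosely than function application, each
stage like take 3 needs no parentheses; only a ($) inside a stage still
does, which is the fixity question below.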

I have a couple of questions about this...

First, it would be convenient if ($) could appear in an fi without extra
parentheses, but this would require infixl (-1) \|. Is there a reason
this isn't allowed? The current fixities don't seem to allow much room
for expansion. Would it be possible to either (1) allow negative and/or
fractional fixities, or (2) shift and spread out the current fixities to
allow more flexibility in the introduction of new operators?

I'm also trying to determine to what extent this style should really be
used. Does anyone see this as bad style? Is there an argument why this
might not be such a good way to ease someone into the language? 

Chad Scherrer
Computational Mathematics Group
Pacific Northwest National Laboratory

Time flies like an arrow; fruit flies like a banana. -- Groucho Marx
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Detecting Cycles in Datastructures

2005-11-18 Thread Greg Woodhouse


--- Paul Hudak [EMAIL PROTECTED] wrote:

 This is a very late response to an old thread...
 
 Tom Hawkins wrote:
   In a pure language, is it possible to detect cycles in recursive
   data structures?  For example, is it possible to determine that
   cyclic has a loop? ...
  
   data Expr = Constant Int | Addition Expr Expr
  
   cyclic :: Expr
   cyclic = Addition (Constant 1) cyclic
  
   Or phrased differently, is it possible to make Expr an instance of
   Eq such that cyclic == cyclic is smart enough to avoid a recursive
   descent?
  

I'm a little confused. It's one thing to demonstrate that a (possibly
infinite) digraph contains a loop, but quite another to show that it
does not. Carrying around a copy of the subgraph consisting of nodes
actually visited is workable in a pure language, but there's no
guarantee of termination.

   -Tom
 
 ---
 
 Perhaps it's obvious, but one way to do this is to make the cycle
 *implicit* via some kind of a fixed-point operator, which is a trick
 I
 used recently in a DSL application.  


So, you imagine f to be a function that (non-deterministically) follows
an edge in a digraph. The idea being that G is a (possibly infinite)
digraph and F is a subgraph. The function f would
non-deterministically select a vertex of F, follow an edge in G (again
chosen non-deterministically), add the (possibly new) edge to F, and
return the resulting graph. Then, you are asking if f has a fixpoint in
this broader context?

 For example, you could define:
 
   data Expr = Const Int
 | Add Expr Expr
 | Loop  -- not exported
deriving (Eq, Show)
 
 The purpose of Loop is to mark the point where a cycle exists.
 Instead of working with values of type Expr, you work with values of
 type Expr -> Expr, or Fix Expr:
 
   type Fix a = a -> a
 
 For example:
 
   fe1,fe2 :: Fix Expr
   fe1 e = Add (Const 1) (Const 1)  -- non-recursive
   fe2 e = Add (Const 1) e  -- recursive
 
 You can always get the value of an Expr from a Fix Expr whenever you
 need it, by computing the fixed point:
 
   fix f = x where x = f x

If it can be computed. Maybe I'm off-track here, but it seems to me
that when you introduce laziness into the equation, you lose any
guarantee of there even being a fixpoint, much less one that can be
computed.
 
   e1,e2 :: Expr
   e1 = fix fe1  -- finite
   e2 = fix fe2  -- infinite
 
 Note that e2 is equivalent to your cyclic.  But now we can also
 test
 for equality:
 
   instance Eq (Fix Expr) where
 fe1 == fe2  =  fe1 Loop == fe2 Loop
 
 For example, fe2 == fe2 returns True, whereas e2 == e2
 (i.e. your cyclic == cyclic) diverges.
 
 Of course, note that, although fe1 == fe2 implies that fix fe1 == fix
 fe2, the reverse is not true, since there will always be loops of
 varying degrees of unwinding that are semantically equivalent, but
 not
 syntactically, or structurally, equivalent.  Thus this definition
 of
 equality is only approximate, but still, it's useful.

I'm lost. Are you saying bottom is bottom?
 
 If you want to have multiple loops, or a loop not at the top level,
 then you need to add some kind of a loop constructor to Expr.  I've
 sketched that idea below, and pointed out a couple of other useful
 ideas, such as showing a loop, and mapping a function over a loop
 without unwinding it.
 
 I hope this helps,
 
-Paul
 



===
Gregory Woodhouse  [EMAIL PROTECTED]


Interaction is the mind-body problem of computing.

--Philip Wadler











___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Project postmortem

2005-11-18 Thread Joel Reymont
The exception is actually from withTimeOut. Removing calls to that  
lets the handshake proceed. The server is using a client handshake,  
though, so the handshake of client vs. client goes on indefinitely.


I'm fixing the server side and once that is done will clean up SSL at  
the end of the handshake and launch a few thousand clients. It's not  
a good repro case yet although I would love to know why withTimeOut  
is throwing that exception.


Joel

On Nov 18, 2005, at 5:02 PM, Christian Maeder wrote:


Sorry, I can only show you my output on
Linux turing 2.6.11.4-21.9, but I don't know what's going on and  
will not have more time this week.


Cheers Christian

[EMAIL PROTECTED]:/local/maeder/haskell/postmortem ./server
17:55:14: ThreadId 1: Accepted new connection: {handle: socket: 4}
17:55:14: ThreadId 1: Verify locations: 1
17:55:14: ThreadId 1: sslGetError: 2
17:55:14: ThreadId 4: Starting SSL handshake...
17:55:14: ThreadId 4: Reading from BIO...
17:55:14: ThreadId 4: Waiting for BIO 0x080d10d8
17:55:14: ThreadId 4: waitForBio: gotta wait a bit...
server: unknown exception


--
http://wagerlabs.com/





___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] Formalizing lazy lists?

2005-11-18 Thread Greg Woodhouse
Maybe this is old hat, but the question about detecting loops in data
structures got me thinking about this. I know you can encode the cons
operator (and ordinary lists) in pure lambda calculus, but how could
you possibly represent something like [0, 1..]? One thought that
occurs to me is to treat it as a kind of direct limit, but of course,
the essence of a structure like this is that the ith entry is
computable in a natural way, and it seems that an algebraic limit
couldn't really capture this intuition unless we were to introduce some
sort of higher order structure (which may be what we want, anyway).


===
Gregory Woodhouse  [EMAIL PROTECTED]


Interaction is the mind-body problem of computing.

--Philip Wadler











___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Formalizing lazy lists?

2005-11-18 Thread Lennart Augustsson

What do you mean by represent?
It's easy enough to write down the lambda term that is the
encoding of [0..].

-- Lennart
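
To make that concrete in Haskell rather than raw lambda terms, here is
one Scott-style encoding of such a stream; a minimal sketch, assuming
-fglasgow-exts for the rank-2 field (all names are only illustrative):

    {-# OPTIONS -fglasgow-exts #-}

    -- A stream is just how it hands its head and tail to a continuation.
    newtype Stream a = Stream (forall r. (a -> Stream a -> r) -> r)

    cons :: a -> Stream a -> Stream a
    cons x xs = Stream (\k -> k x xs)

    headS :: Stream a -> a
    headS (Stream s) = s (\x _ -> x)

    tailS :: Stream a -> Stream a
    tailS (Stream s) = s (\_ xs -> xs)

    nats :: Stream Integer            -- the analogue of [0, 1 ..]
    nats = from 0  where from n = cons n (from (n + 1))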


Greg Woodhouse wrote:

Maybe this is old hat, but the question about detecting loops in data
structures got me thinking about this. I know you can encode the cons
operator (and ordinary lists) in pure lambda calculus, but how could
you possibly represent something like [0, 1..]? One thought that
occurs to me is to treat it as a kind of direct limit, but of course,
the essence of a structure like this is that the ith entry is
computable in a natural way, and it seems that an algebraic limit
couldn't really capture this intuition unless we were to introduce some
sort of higher order structure (which may be what we want, anyway).


===
Gregory Woodhouse  [EMAIL PROTECTED]


Interaction is the mind-body problem of computing.

--Philip Wadler











___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe



___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Detecting Cycles in Datastructures

2005-11-18 Thread Paul Hudak

Henning Thielemann wrote:

On Fri, 18 Nov 2005, Paul Hudak wrote:

For example:

fe1,fe2 :: Fix Expr
fe1 e = Add (Const 1) (Const 1)  -- non-recursive
fe2 e = Add (Const 1) e  -- recursive


Do you mean
fe1 _ = Add (Const 1) Loop
?


No, I really meant it as written.  I included this example just to point 
out that an expression didn't have to be infinite to represent it as the 
fixpoint of a function.  Note that the least fixpoint of fe1 is Add 
(Const 1) (Const 1).


Loop shouldn't ever be used by the user -- that's why I added the 
comment that it was not exported.  It's just there to open the 
function for inspection.  In this sense, the trick is analogous to the 
use of higher-order abstract syntax -- i.e. using the meta-language 
(Haskell) to represent functions in the object language (expressions).


  -Paul
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Formalizing lazy lists?

2005-11-18 Thread Greg Woodhouse


--- Lennart Augustsson [EMAIL PROTECTED] wrote:

 What do you mean by represent?
 It's easy enough to write down the lambda term that is the
 encoding of [0..].
 
   -- Lennart
 

You mean like \x -> x ? If I apply it to the Church numeral i, I get i
in return.  But that hardly seems satisfying because infinite lists
seem to be a wholly different type of object. 


===
Gregory Woodhouse  [EMAIL PROTECTED]


Interaction is the mind-body problem of computing.

--Philip Wadler











___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Detecting Cycles in Datastructures

2005-11-18 Thread Paul Hudak

Greg Woodhouse wrote:
 --- Paul Hudak [EMAIL PROTECTED] wrote:
Tom Hawkins wrote:
  In a pure language, is it possible to detect cycles in recursive
  data structures?  For example, is it possible to determine that
  cyclic has a loop? ...
 
  data Expr = Constant Int | Addition Expr Expr
 
  cyclic :: Expr
  cyclic = Addition (Constant 1) cyclic
 
  Or phrased differently, is it possible to make Expr an instance of
  Eq such that cyclic == cyclic is smart enough to avoid a
  recursive descent?

 I'm a little confused. It's one thing to demonstrate that a (possibly
 infinite) digraph contains a loop, but quite another to show that it
 does not. Carrying around a copy of the subgraph consisting of nodes
 actually visited is workable in a pure language, but there's no
 guarantee of termination.

The question that was originally posted assumed that the graph was
represented using direct recursion in Haskell.  In which case you
cannot detect a cycle, as previous replies pointed out.  Of course, if
you represented the graph in some more concrete form -- such as an
explicit list of nodes and edges -- then you could detect the cycle
using a standard graph algorithm, at the expense of losing the
elegance of the Haskell recursion.  My post was just pointing out that
there is a third possibility, one that retains most of the elegance
and abstractness of Haskell, yet still allows detecting cycles.

Perhaps it's obvious, but one way to do this is to make the cycle
*implicit* via some kind of a fixed-point operator, which is a trick
I used recently in a DSL application.

 So, you imagine f to be a function that (non-deterministically) follows
 an edge in a digraph. The idea being that G is (possibly infinite)
 digraph and F is a subgraph. The function f would
 non-deterministically select a vertex of F, follow an edge in G (again
 chosen non-deterministically), add the (possibly new) edge to F, and
 return the resulting graph. Then, you are asking if f has a fixpoint in
 this broader context?

I don't understand this... and therefore it's probably not what I
imagine!  I'm saying simply that a cyclic graph can be represented as
the fixed point of a function.

For example, you could define:

  data Expr = Const Int
            | Add Expr Expr
            | Loop  -- not exported
    deriving (Eq, Show)

The purpose of Loop is to mark the point where a cycle exists.
Instead of working with values of type Expr, you work with values of
type Expr -> Expr, or Fix Expr:

  type Fix a = a -> a

For example:

  fe1,fe2 :: Fix Expr
  fe1 e = Add (Const 1) (Const 1)  -- non-recursive
  fe2 e = Add (Const 1) e  -- recursive

You can always get the value of an Expr from a Fix Expr whenever you
need it, by computing the fixed point:

  fix f = x where x = f x

 If it can be computed. Maybe I'm off-track here, but it seems to me
 that when you introduce laziness into the equation, you lose any
 guarantee of there even being a fixpoint, much less one that can be
 computed.

Actually it's the other way around -- laziness is what makes this work! 
 The least fixpoint of fe2 in a strict language, for example, is bottom.


  e1,e2 :: Expr
  e1 = fix fe1  -- finite
  e2 = fix fe2  -- infinite

Note that e2 is equivalent to your cyclic.  But now we can also
test for equality:

  instance Eq (Fix Expr) where
    fe1 == fe2  =  fe1 Loop == fe2 Loop

For example, fe2 == fe2 returns True, whereas e2 == e2
(i.e. your cyclic == cyclic) diverges.

Of course, note that, although fe1 == fe2 implies that fix fe1 == fix
fe2, the reverse is not true, since there will always be loops of
varying degrees of unwinding that are semantically equivalent, but
not syntactically, or structurally, equivalent.  Thus this definition
of equality is only approximate, but still, it's useful.
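
Collected into one self-contained module, the pieces above can be loaded
and tried directly.  This is only a sketch of what this message describes;
the module name, fixE, and the LANGUAGE pragmas are additions:

  {-# LANGUAGE FlexibleInstances, TypeSynonymInstances #-}
  module FixExpr where

  data Expr = Const Int
            | Add Expr Expr
            | Loop                      -- marks the back-edge; not exported
    deriving (Eq, Show)

  type Fix a = a -> a                   -- an "open" expression

  fe1, fe2 :: Fix Expr
  fe1 e = Add (Const 1) (Const 1)       -- non-recursive
  fe2 e = Add (Const 1) e               -- recursive

  fixE :: Fix a -> a                    -- tie the knot when a value is needed
  fixE f = x where x = f x

  instance Eq (Fix Expr) where          -- approximate equality: compare the
    f == g  =  f Loop == g Loop         -- bodies with the back-edge as Loop

  -- fe2 == fe2 is True and terminates; fixE fe2 == fixE fe2 would diverge.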

 I'm lost. Are you saying bottom is bottom?

I suspect from your other post that you haven't seen the standard
trick of encoding infinite data structures as fixpoints.  Suppose you
have a lambda calculus term for cons, as well as for the numeral 1.
Then the infinite list of ones is just:

Y (\ones. cons 1 ones)

where Y (aka the paradoxical combinator or fixed point combinator)
is defined as:

\f. (\x. f (x x)) (\x. f (x x))

To see this, try unfolding the above term a few iterations using
normal-order reduction.  Note that the term has no recursion in it 
whatsoever.


Now, my earlier point above is that:

Y (\ones. cons 1 ones)

unwinds to the same term as:

Y (\ones. cons 1 (cons 1 ones))

yet the two terms (\ones. cons 1 ones) and
(\ones. cons 1 (cons 1 ones)) are not the same (in fact they're not 
lambda convertible, either).
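
As an aside, the same construction can be played with in Haskell, where
self-application (x x) is not typeable directly; the usual workaround is to
hide it behind a recursive newtype.  A small sketch (Rec, out and y are
names invented here):

  newtype Rec a = In { out :: Rec a -> a }

  -- a non-recursive definition of the fixed-point combinator
  y :: (a -> a) -> a
  y f = (\x -> f (out x x)) (In (\x -> f (out x x)))

  ones :: [Int]
  ones = y (1:)     -- no recursive value definition; the recursion
                    -- hides in the Rec type
  -- take 5 ones == [1,1,1,1,1]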


  -Paul
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Formalizing lazy lists?

2005-11-18 Thread Lennart Augustsson

How about:

nil = \ n c . n
cons x xs = \ n c . c x xs

zero = \ z s . z
suc n = \ z s . s n

listFromZero = Y ( \ from n . cons n (from (suc n))) zero

(Untested, so I might have some mistake.)
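
For comparison, a direct Haskell transcription of the same idea, with
ordinary numbers and lists standing in for the Church encodings (fix is
written out so the snippet stands alone):

  fix :: (a -> a) -> a
  fix f = f (fix f)

  -- Y (\ from n . cons n (from (suc n))) zero, in Haskell clothing
  listFromZero :: [Integer]
  listFromZero = fix (\from n -> n : from (n + 1)) 0
  -- take 5 listFromZero == [0,1,2,3,4]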

-- Lennart

Greg Woodhouse wrote:


--- Lennart Augustsson [EMAIL PROTECTED] wrote:



What do you mean by represent?
It's easy enough to write down the lambda term that is the
encoding of [0..].

-- Lennart




You mean like \x -> x ? If I apply it to the Church numeral i, I get i
in return.  But that hardly seems satisfying because infinite lists
seem to be a wholly different type of object. 



===
Gregory Woodhouse  [EMAIL PROTECTED]


Interaction is the mind-body problem of computing.

--Philip Wadler











___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe



___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Detecting Cycles in Datastructures

2005-11-18 Thread Greg Woodhouse


--- Paul Hudak [EMAIL PROTECTED] wrote:

 
 I suspect from your other post that you haven't seen the standard
 trick of encoding infinite data structures as fixpoints.  Suppose you
 have a lambda calculus term for cons, as well as for the numeral 1.
 Then the infinite list of ones is just:
 

Not in quite those terms. Intuitively, I think of the infinite data
structure here as being a kind of a completion of the space of
ordinary (finite) graphs to a space in which the given function actually
does have a fixed point.

 Y (\ones. cons 1 ones)
 
 where Y (aka the paradoxical combinator or fixed point
 combinator)
 is defined as:
 
 \f. (\x. f (x x)) (\x. f (x x))
 

Now, this I have seen, but it frankly strikes me as a formal
trick. I've never felt that this definition of Y gave me much insight
into the computation you describe below.

 To see this, try unfolding the above term a few iterations using
 normal-order reduction.  Note that the term has no recursion in it 
 whatsoever.
 
 Now, my earlier point above is that:
 
 Y (\ones. cons 1 ones)
 
 unwinds to the same term as:
 
 Y (\ones. cons 1 (cons 1 ones))
 
 yet the two terms (\ones. cons 1 ones) and
 (\ones. cons 1 (cons 1 ones)) are not the same (in fact they're not 
 lambda convertible, either).
 
-Paul

And this is what leaves me scratching my head. If you leave off the
ones, then you get a sequence of finite lists [1], [1, 1], ... that
seems to approximate the infinite list, but I find it difficult to
imagine how you might pluck a formula like \i. 1 (the ith term)  out of
the air.
 



===
Gregory Woodhouse  [EMAIL PROTECTED]


Interaction is the mind-body problem of computing.

--Philip Wadler











___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Formalizing lazy lists?

2005-11-18 Thread Greg Woodhouse


--- Lennart Augustsson [EMAIL PROTECTED] wrote:

 How about:
 
 nil = \ n c . n
 cons x xs = \ n c . c x xs
 
 zero = \ z s . z
 suc n = \ z s . s n
 
 listFromZero = Y ( \ from n . cons n (from (suc n))) zero
 
 (Untested, so I might have some mistake.)
 
   -- Lennart
 

Okay, I see what you mean here. What I was thinking is that repeatedly
unfolding Y gives us kind of an algorithm to extract initial segments of
the list (like from), but there's something disquieting about all of
this. Maybe it's just my lack of familiarity with lambda calculus, but
I think one thing I got used to as a student was the idea that if I
started peeling back layers from an abstract definition or theorem, I'd
end up with something that looked familiar. Now, to me a list
comprehension is something like an iterative recipe. Maybe [0, 1, ..]
is something primitive that can only really be understood
recursively, and then a comprehension like

[ x*x | x <- [0, 1..] ]

is something easily built from that (or itself defined as a fixed
point), but it would be nice to see the iterative recipe somehow fall
out of the more formal definition in terms of Y.


===
Gregory Woodhouse  [EMAIL PROTECTED]


Interaction is the mind-body problem of computing.

--Philip Wadler











___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Detecting Cycles in Datastructures

2005-11-18 Thread Paul Hudak

Greg Woodhouse wrote:

--- Paul Hudak [EMAIL PROTECTED] wrote:

Y (\ones. cons 1 ones)

where Y (aka the paradoxical combinator or fixed point
combinator) is defined as:

\f. (\x. f (x x)) (\x. f (x x))


Now, this I have seen, but it frankly strikes me as a formal
trick. I've never felt that this definition of Y gave me much insight
into the computation you describe below.


To see this, try unfolding the above term a few iterations using
normal-order reduction.  Note that the term has no recursion in it 
whatsoever.


Now, my earlier point above is that:
Y (\ones. cons 1 ones)
unwinds to the same term as:
Y (\ones. cons 1 (cons 1 ones))

yet the two terms (\ones. cons 1 ones) and
(\ones. cons 1 (cons 1 ones)) are not the same (in fact they're not 
lambda convertible, either).

-Paul


And this is what leaves me scratching my head. If you leave off the
ones, then you get a sequence of finite lists [1], [1, 1], ... that
seems to approximate the infinite list, but I find it difficult to
imagine how you might pluck a formula like \i. 1 (the ith term)  out of
the air.


The important property of Y is this:

Y f = f (Y f)

In this way you can see it as unwinding the function, one step at a 
time.  If we define f as (\ones. cons 1 ones), then here is one step of 
unwinding:


Y f == f (Y f) == cons 1 (Y f)

If you do this again you get:

cons 1 (cons 1 (Y f))

and so on.  As you can see, continuing this process yields the infinite 
list of ones.


Now note that if we define g as (\ones. cons 1 (cons 1 ones)), we get:

Y g == g (Y g) == cons 1 (cons 1 (Y g))

which obviously also yields the infinite list of ones.  Yet f is not 
equal to g.
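
The same point can be checked in Haskell, with ordinary lists standing in
for the lambda encodings (a small sketch; fix, f and g here just mirror the
terms above):

  fix :: (a -> a) -> a
  fix h = h (fix h)

  f, g :: [Int] -> [Int]
  f ones = 1 : ones           -- \ones. cons 1 ones
  g ones = 1 : 1 : ones       -- \ones. cons 1 (cons 1 ones)

  -- take 5 (fix f) == take 5 (fix g) == [1,1,1,1,1],
  -- even though f and g are different functions.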


Does this help?

  -Paul

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Records (was Re: [Haskell] Improvements to GHC)

2005-11-18 Thread Benjamin Franksen
On Friday 18 November 2005 02:59, you wrote:
 On Nov 17, 2005, at 1:52 PM, Benjamin Franksen wrote:
  ...
  Yes, yes, yes. I'd rather use a different operator for record
  selection.
  For instance the colon (:). Yes, I know it is the 'cons' operator
  for a
  certain concrete data type that implements stacks (so called
  'lists'). However I am generally opposed to wasting good operator
  and function names as well as syntactic sugar of any kind on a
  /concrete/ data type,
  and especially not for stacks aka lists.

 Would you be happier if it were the yield operator for iterators?

 Yours lazily,

Ok, ok, I tend to forget that Haskell lists are lazy streams, not just 
simple stacks... which makes them indeed a /lot/ more useful than the 
corresponding data type in strict languages.

I still think all those nice short and meaningful names in the Prelude 
(map, filter, ...) should be type class members in some suitable 
standard collection library.
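
A rough sketch of what such a class might look like; the class and method
names below are invented for illustration, not taken from any existing
library:

  class Collection c where
    cmap    :: (a -> b) -> c a -> c b
    cfilter :: (a -> Bool) -> c a -> c a

  instance Collection [] where
    cmap    = map
    cfilter = filter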

Ben
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Detecting Cycles in Datastructures

2005-11-18 Thread Greg Woodhouse


--- Paul Hudak [EMAIL PROTECTED] wrote:

 
 The important property of Y is this:
 
 Y f = f (Y f)

Right. This is just a formal statement of the property that f fixes Y
f. I'm with you so far.
 
 In this way you can see it as unwinding the function, one step at a
 
 time.  If we define f as (\ones. cons 1 ones), then here is one step
 of 
 unwinding:
 
 Y f == f (Y f) == cons 1 (Y f)
 

I'm with you here, too, except that we are *assuming* that Y has the
stated property. If it does, it's absolutely clear that the above
should hold.

 If you do this again you get:
 
 cons 1 (cons 1 (Y f))
 
 and so on.  As you can see, continuing this process yields the
 infinite 
 list of ones.

Right.

 
 Now note that if we define g as (\ones. cons 1 (cons 1 ones)), we
 get:
 
 Y g == g (Y g) == cons 1 (cons 1 (Y g))

Now, this is where things get a little mysterious. Where did g come
from?

I understand that the x x in the formal definition of Y is what makes
everything work (or, at least I think I do). When I try and think this
all through, I wind up with something like this:

First of all, I'm after a list (named ones) with the property that
ones = cons 1 ones. How can I express the existence of such a thing as
a fixed point? Well, if it does exist, it must be a fixed point of f =
\x. cons 1 x, because

f ones == (\x. cons 1 x) ones == cons 1 ones ==? ones

Now, if I write

Y = \y. (\x. y (x x)) (\x. y (x x))

then,

Y f == cons 1 ((\x. cons 1 (x x)) (\x. cons 1 (x x)))

and I'm left to ponder what that might mean. Formally, I can see that
this pulls an extra cons 1 out to the left, and so we're led to your g.

 
 which obviously also yields the infinite list of ones.  Yet f is not 
 equal to g.

To me, all this really seems to be saying is that for any n, there's a
lambda expression explicitly involving [1, 1, ..., 1] (n times) that is
a solution to the above problem, which I interpret (perhaps
incorrectly) to mean that *if* there were a real list that were a
solution to the above recurrence, then for any n, there would be an
initial sublist consisting of just 1's.

Now, I know that lambda expressions are not lists, though lists can be
encoded as lambda expressions. For finite lists, this all seems simple
enough, but in this case we seem to have a lambda expression that is
not a (finite) list, but that can nevertheless be approximated in
some sense by finite lists.

But I can't really put my finger on just what that sense might be.
 
 Does this help?

Yes, it does. Very much so.
 
-Paul
 
 



===
Gregory Woodhouse  [EMAIL PROTECTED]


Interaction is the mind-body problem of computing.

--Philip Wadler











___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Detecting Cycles in Datastructures

2005-11-18 Thread Lennart Augustsson

Greg Woodhouse wrote:


--- Paul Hudak [EMAIL PROTECTED] wrote:



The important property of Y is this:

Y f = f (Y f)



Right. This is just a formal statement of the property that f fixes Y
f. I'm with you so far.


In this way you can see it as unwinding the function, one step at a

time.  If we define f as (\ones. cons 1 ones), then here is one step
of 
unwinding:


Y f == f (Y f) == cons 1 (Y f)




I'm with you here, too, except that we are *assuming* that Y has the
stated property. If it does, it's absolutely clear that the above
should hold.


You can easily verify that
Y f = f (Y f)

LHS =
Y f =
(\f. (\x. f (x x)) (\x. f (x x))) f =
(\x. f (x x)) (\x. f (x x)) =
f ((\x. f (x x)) (\x. f (x x)))

RHS =
f (Y f) =
f ((\f. (\x. f (x x)) (\x. f (x x))) f) =
f ((\x. f (x x)) (\x. f (x x)))

So (\f. (\x. f (x x)) (\x. f (x x))) is a fixpoint combinator
(one of infinitely many).

-- Lennart


-- Lennart
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Records (was Re: [Haskell] Improvements to GHC)

2005-11-18 Thread John Meacham
On Fri, Nov 18, 2005 at 04:22:59PM +0100, Sebastian Sylvan wrote:
 Yes. I just don't think it's used enough to warrant giving it one of
 the best symbols.

 grep -o ' [-+.*/[EMAIL PROTECTED] ' GenUtil.hs | sort | uniq -c | sort -n
  1  $! 
  1  * 
  8  + 
 10  == 
 12  - 
 17  -- 
 30  . 
 31  $ 
 39  ++ 

one of the most common operators. I think experienced Haskell programmers
tend to use it a whole lot more often than beginning ones, and I am not
even a point-free advocate.

John

-- 
John Meacham - ⑆repetae.net⑆john⑈ 
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Formalizing lazy lists?

2005-11-18 Thread Lennart Augustsson

Greg Woodhouse wrote:


--- Lennart Augustsson [EMAIL PROTECTED] wrote:



How about:

nil = \ n c . n
cons x xs = \ n c . c x xs

zero = \ z s . z
suc n = \ z s . s n

listFromZero = Y ( \ from n . cons n (from (suc n))) zero

(Untested, so I might have some mistake.)

-- Lennart




Okay, I see what you mean here. What I was thinking is that repeatedly
unfolding Y gives us kind of an algorithm to extract initial segments of
the list (like from), but there's something disquieting about all of
this.


Unfolding Y is indeed part of the algorithm to generate the list.
The lambda calculus is just another programming language, so
why does this disturb you?


 Maybe it's just my lack of familiarity with lambda calculus, but

I think one thing I got used to as a student was the idea that if I
started peeling back layers from an abstract definition or theorem, I'd
end up with something that looked familiar. Now, to me a list
comprehension is something like an iterative recipe. Maybe [0, 1, ..]
is something primitive that can only really be understood
recursively, and then a comprehension like

[ x*x | x <- [0, 1..] ]

is something easily built from that (or itself defined as a fixed
point), but it would be nice to see the iterative recipe somehow fall
out of the more formal definition in terms of Y.


List comprehensions are just a different way of writing concats
and maps, both of which have recursive definitions.
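
For instance, the comprehension from the previous message can be written
with the sugar removed, using only Prelude functions:

  squares :: [Integer]
  squares = [ x * x | x <- [0 ..] ]

  -- the same list, desugared into concatMap (or, with a single
  -- generator and no guard, just map):
  squares' :: [Integer]
  squares' = concatMap (\x -> [x * x]) [0 ..]

  squares'' :: [Integer]
  squares'' = map (\x -> x * x) [0 ..]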

What is an iterative recipe?

-- Lennart
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Infinite lists and lambda calculus

2005-11-18 Thread Lennart Augustsson

Greg Woodhouse wrote:

Perhaps the issue is that the manipulations below are purely syntactic,

But all the computation rules of the lambda calculus are syntactic
in that sense.  When you can prove things by symbol pushing it's
the easiest way.

But as Paul Hudak mentioned, there are definitions that are equal, but
you need something stronger to prove it, like fix point induction.


and though they work, it is not at all clear how to make contact with
other notions of computability. Perhaps it is that

Y = (\f. (\x. f (x x)) (\x. f (x x))) 


is a perfectly sensible definition, it's still just a recipe for a
computation. Maybe I'm thinking too hard, but it reminds me of the mu
operator. Primitive recursive functions are nice and constructive, but
minimization is basically a search, there's no guarantee it will
work. If I write

g(x) = mu x (f(x) - x)

then I've basically said look at every x and stop when you find a
fixed point. Likewise, given a candidate for f, it's easy to verify
that Y f = f (Y f), as you've shown, but can the f be found without
some kind of search?

There is no search with this definition (as you can see), it's very
constructive.

It computes the fix point which you can also define as
fix f = lub { f^i(_|_) | i >= 0 }
where f^i is f iterated i times.  Is that a definition
of fixpoint that makes you happier?


Since there are recursive functions that aren't
primitive recursive, the answer has to be no.

Where did primitive recursion come from?  Y is used
to express general recursion.



Finally, you've exhibited a sequence of fixed points, and in this case
it's intuitively clear how these correspond to something we might call
an infinite list. But is there an interpretation that makes this
precise? The notation

ones = cons 1 ones
ones = cons 1 (cons 1 ones)
...


The list ones is the least upper bound in the domain of lazy (integer)
lists of the following chain
_|_, cons 1 _|_, cons 1 (cons 1 _|_), ...
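
That chain can even be poked at from Haskell, with undefined standing in
for bottom (a sketch; approx is just a name for the i-fold application):

  approx :: Int -> (a -> a) -> a          -- f^i applied to bottom
  approx 0 _ = undefined
  approx n f = f (approx (n - 1) f)

  -- take 3 (approx 5 (1:)) == [1,1,1]    -- enough of the chain is defined
  -- take 3 (approx 2 (1:)) hits bottom   -- the approximation is too short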

How much domain theory have you studied?  Maybe that's where you can
find the connection you're missing?

-- Lennart
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Formalizing lazy lists?

2005-11-18 Thread Greg Woodhouse


--- Lennart Augustsson [EMAIL PROTECTED] wrote:

 
 Unfolding Y is indeed part of the algorithm to generate the list.
 The lambda calculus is just another programming language, so
 why does this disturb you?
 

Well...think about this way. The function

f i = [1, 1 ..]!!i

is just a constant function expressed in a complicated way. Can I
algorithmically determine that f is a constant function? For any
particular value of i, the calculations you give demonstrate that f i =
1. It's a little less obvious that f really is constant. Yes, I know
this can be proved by induction, but verifying a proof is not the same
as discovering the theorem in the first place.

I suppose the big picture is whether I can embed the theory of finite
lists in the theory of infinite lists, preferably in such a way that
the isomorphism of the theory of finite lists with the embedded
subtheory is immediately obvious.


===
Gregory Woodhouse  [EMAIL PROTECTED]


Interaction is the mind-body problem of computing.

--Philip Wadler











___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Infinite lists and lambda calculus

2005-11-18 Thread Greg Woodhouse


--- Lennart Augustsson [EMAIL PROTECTED] wrote:

 
 It computes the fix point which you can also define as
   fix f = lub { f^i(_|_) | i >= 0 }
 where f^i is f iterated i times.  Is that a definition
 of fixpoint that makes you happier?

Believe it or not, yes. My background is in mathematics, so something
looking more like a fixed point of a continuous map seems a lot more
intuitive.
 
 
 How much domain theory have you studied?  Maybe that's where you can
 find the connection you're missing?
 

None. But it seems clear that this is a deficit in my education I need
to address.

   -- Lennart
 



===
Gregory Woodhouse  [EMAIL PROTECTED]


Interaction is the mind-body problem of computing.

--Philip Wadler











___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Detecting Cycles in Datastructures

2005-11-18 Thread jerzy . karczmarczuk

My simple-mindedness and naïveté begin to bother me. I begin to get lost,
and I don't know anymore what is the problem...

Greg Woodhouse wrote:


--- Paul Hudak [EMAIL PROTECTED] wrote:

...

The important property of Y is this:

Y f = f (Y f)


Right. This is just a formal statement of the property that f fixes Y
f. I'm with you so far.


No, not exactly. This *may be* the *definition* of Y in a typed system
(Haskell). On the other hand, you cannot write Y as a self-applying lambda
construct for obvious reasons, and still you can sleep well...

...

Now note that if we define g as (\ones. cons 1 (cons 1 ones)), we
get:

Y g == g (Y g) == cons 1 (cons 1 (Y g))


Now, this is where things get a little mysterious. Where did g come
from?

I understand that the x x in the formal definition of Y is what makes
everything work (or, at least I think I do).


Not exactly. Let's do it in Haskell, without this (x x) stuff...
My `g` means something different, though.

fix f = f (fix f)   -- Here you have your Y. No typeless lambda.

g f n = n : f n    -- This is a generic *non-recursive* `repeat`
ones = fix g 1 -- Guess what.


Now (take 30 ones) works perfectly well. 'ones' is a piece of co-data, or a
co-recursive stream, as many people, e.g. David Turner, would say. It has
no finite form. Yet we live happily with it, provided we have a normal
reduction scheme (laziness).
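
The same fix handles any non-recursive generator, not just this `repeat`;
for instance (a made-up example, not from the message):

fibs :: [Integer]                       -- with fix as defined above
fibs = fix (\fs -> 0 : 1 : zipWith (+) fs (tail fs))
-- take 8 fibs == [0,1,1,2,3,5,8,13]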
Why do you want (in a further posting) to have a real thing which for you
means a finite list? Are you sure that everybody needs the LEAST fixed
point? The co-streams give you something different...


Frankly, I don't know what is the issue...

Jerzy Karczmarczuk


___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Project postmortem

2005-11-18 Thread Joel Reymont
I'm happy to report that the problem can be reproduced by running the  
code from my darcs repo at http://test.wagerlabs.com/postmortem. See  
the README file. I'm on Mac OSX 10.4.3.


The server just sits there, goes through the SSL handshake and...  
does nothing else. The clients go through the handshake with the  
server and do nothing else. The handshake goes through X number of  
times and then the client crashes.


On Nov 18, 2005, at 1:55 PM, Simon Marlow wrote:


How we normally proceed for a crash like this is as follows: examine
where the crash happened and determine whether it is a result of  
heap or
stack corruption, and then attempt to trace backwards to find out  
where

the corruption originated from.  Tracing backwards means running the
program from the beginning again, so it's essential to have a
reproducible example.  Without reproducibility, we have to use a
combination of debugging printfs and staring really hard at the code,
which is much more time consuming (and still requires being able to  
run

the program to make it crash with debugging output turned on).


--
http://wagerlabs.com/





___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Infinite lists and lambda calculus (was: Detecting Cycles in Datastructures)

2005-11-18 Thread Cale Gibbard
On 18/11/05, Greg Woodhouse [EMAIL PROTECTED] wrote:


 --- Lennart Augustsson [EMAIL PROTECTED] wrote:

 I guess I'm not doing a very good job of expressing myself. I see that
 if you define Y as you do, then the various functions you list have the
 property that Y f = f (Y f).

 I don't want to draw out a boring discussion that is basically about my
 own qualms here, especially since I haven't really been able to
 articulate what it is that bothers me.

 Perhaps the issue is that the manipulations below are purely syntactic,
 and though they work, it is not at all clear how to make contact with
 other notions of computability. Perhaps it is that

 Y = (\f. (\x. f (x x)) (\x. f (x x)))

In a sense, the real definition of Y is Y f = f (Y f), this lambda
term just happens to have that property, but such functions aren't
rare.
One fun one is:

Y = (L L L L L L L L L L L L L L L L L L L L L L L L L L L L)
where
L = λabcdefghijklmnopqstuvwxyzr. (r (t h i s i s a f i x e d p o i n t
c o m b i n a t o r))

 is a perfectly sensible definition, it's still just a recipe for a
 computation. Maybe I'm thinking too hard, but it reminds me of the mu
 operator. Primitive recursive functions are nice and constructive, but
 minimization is basically a search, there's no guarantee it will
 work. If I write

 g(x) = mu x (f(x) - x)

 then I've basically said look at every x and stop when you find a
 fixed point. Likewise, given a candidate for f, it's easy to verify
 that Y f = f (Y f), as you've shown, but can the f be found without
 some kind of search? Since there are recursive functions that aren't
 primitive recursive, the answer has to be no.

 Finally, you've exhibited a sequence of fixed points, and in this case
 it's intuitively clear how these correspond to something we might call
 an infinite list. But is there an interpretation that makes this
 precise? The notation

 ones = cons 1 ones
 ones = cons 1 (cons 1 ones)
 ...

 is suggestive, but only suggestive (at least as far as I can see). Is
 there a model in which [1, 1 ...] is a real thing that is somehow
 approached by the finite lists?
 
  You can easily verify that
Y f = f (Y f)
 
  LHS =
  Y f =
  (\f. (\x. f (x x)) (\x. f (x x))) f =
  (\x. f (x x)) (\x. f (x x)) =
  f ((\x. f (x x)) (\x. f (x x)))
 
  RHS =
  f (Y f) =
  f ((\f. (\x. f (x x)) (\x. f (x x))) f) =
  f ((\x. f (x x)) (\x. f (x x)))
 
  So (\f. (\x. f (x x)) (\x. f (x x))) is a fixpoint combinator
  (one of infinitely many).
 
-- Lennart
 
 
-- Lennart
 



 ===
 Gregory Woodhouse  [EMAIL PROTECTED]


 Interaction is the mind-body problem of computing.

 --Philip Wadler











 ___
 Haskell-Cafe mailing list
 Haskell-Cafe@haskell.org
 http://www.haskell.org/mailman/listinfo/haskell-cafe

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Detecting Cycles in Datastructures

2005-11-18 Thread Benjamin Franksen
On Saturday 19 November 2005 02:16, Greg Woodhouse wrote:
 --- [EMAIL PROTECTED] wrote:
  fix f = f (fix f)   -- Here you have your Y. No typeless lambda.
 
  g f n = n : f n-- This is a generic *non-recursive* `repeat`
  ones = fix g 1 -- Guess what.

 Very nice! I honestly would not have expected this to work. Simple
 cases of lazy evaluation are one thing, but this is impressive.

[You should read some of his papers, for instance the most unreliable 
technique in the world to compute pi. I was ROTFL when I saw the title 
and reading it was an eye-opener and fun too.]

Ben
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Infinite lists and lambda calculus

2005-11-18 Thread Paul Hudak

Cale Gibbard wrote:

Y = (\f. (\x. f (x x)) (\x. f (x x)))


In a sense, the real definition of Y is Y f = f (Y f), this lambda
term just happens to have that property, but such functions aren't
rare.


Actually no, the real definition is the top one, because the other one 
isn't even a valid lambda term, since it's recursive!  There is no such 
thing as recursion in the pure lambda calculus.


Of course, there are many valid definitions of Y, as you point out, both 
recursive and non-recursive.  But I do believe that the first one above 
is the most concise non-recursive definition.


  -Paul
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Infinite lists and lambda calculus

2005-11-18 Thread Cale Gibbard
On 18/11/05, Paul Hudak [EMAIL PROTECTED] wrote:
 Cale Gibbard wrote:
 Y = (\f. (\x. f (x x)) (\x. f (x x)))
 
  In a sense, the real definition of Y is Y f = f (Y f), this lambda
  term just happens to have that property, but such functions aren't
  rare.

 Actually no, the real definition is the top one, because the other one
 isn't even a valid lambda term, since it's recursive!  There is no such
 thing as recursion in the pure lambda calculus.

I meant in the sense that Y f = f (Y f) is an axiomatic definition of
Y. We're satisfied with any concrete definition of Y which meets that
criterion. Of course it's not actually a lambda term, but it is an
equation which a lambda term Y can satisfy. Similarly in, say, set
theory, we don't say what sets *are* so much as we say what it is that
they satisfy, and there are many models of set theory which meet those
axioms. Y f = f (Y f) is really the only important property of Y, and
any model of it will do as a concrete definition.

 - Cale

 Of course, there are many valid definitions of Y, as you point out, both
 recursive and non-recursive.  But I do believe that the first one above
 is the most concise non-recursive definition.

-Paul

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe