[Haskell-cafe] Re: Parsec and network data

2008-08-30 Thread apfelmus
Johannes Waldmann wrote:
 Imagine you're writing a parser for a simple programming language.
 A program is a sequence of statements.
 Fine, you do readFile (once) and then apply a pure Parsec parser.
 
 Then you decide to include import statements in your language.
 Suddenly the parser needs to do IO. Assume the import statements
 need not be the first statements of the program
 (there may be headers, comments etc. before).
 Then you really have to interweave the parsing and the IO.
 
 If anyone has a nice solution to this, please tell. - J.W.

Design your language in a way that the *parse* tree does not depend on import
statements? I.e. chasing imports is performed after you've got an abstract
syntax tree.
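That suggestion can be sketched as follows (a toy illustration, not code from the thread: the line-based "parser" stands in for a real Parsec parser, and all names are made up):

```haskell
import Data.List (isPrefixOf)

data Stmt = Import FilePath | Other String
  deriving Show

-- Toy stand-in for a pure Parsec parser: one statement per line.
parseProgram :: String -> [Stmt]
parseProgram = map toStmt . lines
  where
    toStmt l
      | "import " `isPrefixOf` l = Import (drop 7 l)
      | otherwise                = Other l

-- Chasing imports happens *after* the pure parse, in ordinary IO,
-- so the parser itself never needs to do IO.
loadProgram :: FilePath -> IO [Stmt]
loadProgram path = do
  stmts <- fmap parseProgram (readFile path)
  fmap concat (mapM expand stmts)
  where
    expand (Import p) = loadProgram p
    expand s          = return [s]
```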


Regards,
apfelmus

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] Re: [Haskell] Top Level <-

2008-08-30 Thread Aaron Denney
On 2008-08-28, Yitzchak Gale [EMAIL PROTECTED] wrote:
 However we work that out, right now we need a working
 idiom to get out of trouble when this situation comes up.
 What we have is a hack that is not guaranteed to work.
 We are abusing the NOINLINE pragma and assuming
 things about it that are not part of its intended use.
 We are lucky that it happens to work right now in GHC.

 So my proposal is that, right now, we make the simple
 temporary fix of adding an ONLYONCE pragma that does
 have the proper guaranteed semantics.

 In the meantime, we can keep tackling the awkward squad.

What keeps this a temporary fix?  Even now, industrial user demands
keep us from making radical changes to the language and libraries.  If
we adopt a not entirely satisfactory solution, it's never going away.
If we keep the NOINLINE pragma hack, we can claim it was never supported
and do away with it.  If we don't have a real solution, perhaps in this
case we haven't worn the hair shirt long enough?
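For reference, the NOINLINE hack under discussion is usually written something like this (a sketch; the whole point of the thread is that the pragma is not guaranteed to make this safe):

```haskell
import Data.IORef
import System.IO.Unsafe (unsafePerformIO)

-- The classic top-level mutable variable hack.  NOINLINE matters:
-- if the binding were inlined, separate use sites could end up with
-- distinct IORefs -- exactly the behaviour that falls outside the
-- pragma's intended semantics.
{-# NOINLINE counter #-}
counter :: IORef Int
counter = unsafePerformIO (newIORef 0)

nextCount :: IO Int
nextCount = atomicModifyIORef counter (\n -> (n + 1, n))
```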

-- 
Aaron Denney
--



[Haskell-cafe] Re: 2 modules in one file

2008-08-30 Thread Aaron Denney
On 2008-08-27, Henrik Nilsson [EMAIL PROTECTED] wrote:
 And there are also potential issues with not every legal module name
 being a legal file name across all possible file systems.

I find this unconvincing.  Broken file systems need to be fixed.

-- 
Aaron Denney
--



Re: [Haskell-cafe] Re: [Haskell] Top Level <-

2008-08-30 Thread Ganesh Sittampalam

On Sat, 30 Aug 2008, Adrian Hey wrote:


Ganesh Sittampalam wrote:
Will Data.Unique still work properly if a value is sent across an RPC 
interface?


A value of type Unique you mean? This isn't possible. Data.Unique has 
been designed so that it cannot be Shown/Read or otherwise 
serialised/deserialised (for obvious reasons, I guess).


How do the implementers of Data.Unique know that they mustn't let them be 
serialised/deserialised? What stops the same rule from applying to 
Data.Random?



Also what if I want a thread-local variable?


Well actually I would say that threads are a bad concurrency model so
I'm not keen on thread local state at all. Mainly because I'd like to
get rid of threads, but also a few other doubts even if we keep
threads.


Even if you don't like them, people still use them.


AFAICS this is irrelevant for the present discussions as Haskell doesn't
support thread-local variable thingies. If it ever does, being precise
about that is someone else's problem.


The fact that your proposal isn't general enough to handle them is a mark 
against it; standardised language features should be widely applicable, 
and as orthogonal as possible to other considerations.


For the time being the scope of IORefs/MVars/Chans is (and should 
remain) whatever process is described by main (whether or not they 
appear at top level).


And if main isn't the entry point? This comes back to my questions about 
dynamic loading.


(I.E. Just making existing practice *safe*, at least in the sense that 
the compiler ain't gonna fcuk it up with INLINING or CSE and everyone 
understands what is and isn't safe in ACIO)


Creating new language features means defining their semantics rather more 
clearly than just no inlining or cse, IMO.


I wouldn't even know how to go about that to the satisfaction of
purists. But global variables *are* being used whether or not the top
level <- bindings are implemented. They're in the standard libraries!

So if this stuff matters someone had better figure it out :-)


It's a hack that isn't robust in many situations. We should find better 
ways to do it, not standardise it.


Cheers,

Ganesh


Re: [Haskell-cafe] Re: [Haskell] Top Level <-

2008-08-30 Thread Adrian Hey

Ganesh Sittampalam wrote:
How do the implementers of Data.Unique know that they mustn't let them be 
serialised/deserialised?


Because if you could take a String and convert it to a Unique there
would be no guarantee that the result was *unique*.


What stops the same rule from applying to Data.Random?


Well the only data type defined by this is StdGen, which is a Read/Show
instance. I guess there's no semantic problem with that (can't think of
one off hand myself).


Also what if I want a thread-local variable?


Well actually I would say that threads are a bad concurrency model so
I'm not keen on thread local state at all. Mainly because I'd like to
get rid of threads, but also a few other doubts even if we keep
threads.


Even if you don't like them, people still use them.


AFAICS this is irrelevant for the present discussions as Haskell doesn't
support thread-local variable thingies. If it ever does, being precise
about that is someone else's problem.


The fact that your proposal isn't general enough to handle them is a 
mark against it; standardised language features should be widely 
applicable, and as orthogonal as possible to other considerations.


I think the whole thread local state thing is a complete red herring.

I've never seen a convincing use case for it and I suspect the only
reason these two issues have become linked is that some folk are so
convinced that global variables are evil, they mistakenly think
thread local variables must be less evil (because they are less
global).

Anyway, if you understand the reasons why all the real world libraries
that do currently use global variables do this, it's not hard to see
why they don't want this to be thread local (it would break all the
safety properties they're trying to ensure). So whatever problem thread
local variables might solve, it isn't this one.

For the time being the scope of IORefs/MVars/Chans is (and should 
remain) whatever process is described by main (whether or not they 
appear at top level).


And if main isn't the entry point? This comes back to my questions about 
dynamic loading.


Well you're talking about some non-standard Haskell, so with this and
other non standard stuff (like plugins etc) I guess the answer is
it's up to whoever's doing this to make sure they do it right. I
can't comment further as I don't know what it is they're trying
to do, but AFAICS it's not a language design issue at present.

If plugins breaks, it's down to plugins to fix itself, at least until such
time as a suitable formal theory of plugins has been developed so
it can become standard Haskell :-)

(I.E. Just making existing practice *safe*, at least in the sense 
that the compiler ain't gonna fcuk it up with INLINING or CSE and 
everyone understands what is and isn't safe in ACIO)


Creating new language features means defining their semantics rather 
more clearly than just no inlining or cse, IMO.


I wouldn't even know how to go about that to the satisfaction of
purists. But global variables *are* being used whether or not the top
level - bindings are implemented. They're in the standard libraries!

So if this stuff matters someone had better figure it out :-)


It's a hack that isn't robust in many situations. We should find better 
ways to do it, not standardise it.


Nobody's talking about standardising the current hack. This is the whole
point of the top level <- proposal, which JM seems to think is sound
enough for incorporation into JHC (correctly IMO). Nobody's found
fault with it, other than the usual global variables are evil mantra
:-)

Regards
--
Adrian Hey



Re: [Haskell-cafe] Parsec and network data

2008-08-30 Thread Thomas Schilling
There's a whole bunch of other problems with lazy network IO.  The big
problem is that you cannot detect when your stream ends since that
will happen inside unsafeInterleaveIO which is invisible from inside
pure code.  You also have no guarantee that the lazy code actually
consumes enough of the input.  Finalisers don't help, either, since
there is in fact no guarantee they are actually run, never mind on time.

The proposed solution by Oleg & co. is to use enumerators/left folds
[1].  The basic idea is to use a callback which gets handed a chunk of
the input from the network.  When the last chunk is handed out the
connection is closed automatically.  Using continuations, you can turn
this into a stream again [2] which is needed for many input processing
tasks, like parsing.
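A minimal sketch of that left-fold style (illustrative names, not an actual library API): the step function receives each chunk and decides whether to continue, so the handle can be closed as soon as the fold returns.

```haskell
import qualified Data.ByteString as B
import System.IO

-- Left-fold enumerator sketch: 'step' is handed each chunk and
-- returns Left to stop early or Right to keep folding.
enumHandle :: Handle
           -> (acc -> B.ByteString -> IO (Either acc acc))
           -> acc
           -> IO acc
enumHandle h step = go
  where
    go acc = do
      eof <- hIsEOF h
      if eof
        then return acc                -- stream ended: final fold state
        else do
          chunk <- B.hGet h 4096       -- next chunk (blocks until data/EOF)
          r     <- step acc chunk
          case r of
            Left  done -> return done  -- consumer asked to stop early
            Right acc' -> go acc'
```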

I remember Johan Tibell (CC'd) working on an extended variant of
Parsec that can deal with this chunked processing.  The idea is to
teach Parsec about a partial input and have it return a function to
process the rest (a continuation) if it encounters the end of a chunk
(but not the end of a file).  Maybe Johan can tell you more about
this, or point you to his implementation.
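The continuation-based interface being described might look roughly like this (a hypothetical sketch, not Johan's actual code):

```haskell
import qualified Data.ByteString as B

-- A parse either finishes (with leftover input), fails, or suspends,
-- handing back a continuation that wants the next network chunk.
data Result a
  = Done B.ByteString a
  | Partial (B.ByteString -> Result a)
  | Fail String

-- Feeding a suspended parser another chunk resumes it; finished or
-- failed parses are left alone.
feed :: Result a -> B.ByteString -> Result a
feed (Partial k) chunk = k chunk
feed r           _     = r
```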

 [1]: http://okmij.org/ftp/papers/LL3-collections-enumerators.txt
 [2]: http://okmij.org/ftp/Haskell/fold-stream.lhs

/ Thomas

On Tue, Aug 26, 2008 at 10:35 PM, brian [EMAIL PROTECTED] wrote:
 Hi, I've been struggling with this problem for days and I'm dying. Please 
 help.

 I want to use Parsec to parse NNTP data coming to me from a handle I
 get from connectTo.

 One unworkable approach I tried is to get a lazy String from the
 handle with hGetContents. The problem: suppose the first message from
 the NNTP server is "200 OK\r\n". Parsec parses it beautifully. Now I
 need to discard the parsed part so that Parsec will parse whatever the
 server sends next, so I use Parsec's getInput to get the remaining
 data. But there isn't any, so it blocks. Deadlock: the client is
 inappropriately waiting for server data and the server is waiting for
 my first command.

 Another approach that doesn't quite work is to create an instance of
 Parsec's Stream with timeout functionality:

 instance Stream Handle IO Char where
   uncons h =
     do r <- hWaitForInput h ms
        if r
          then liftM (\c -> Just (c, h)) (hGetChar h)
          else return Nothing
     where ms = 5000

 It's probably obvious to you why it doesn't work, but it wasn't to me
 at first. The problem: suppose you tell parsec you're looking for
 (many digit) followed by (string "\r\n"). "123\r\n" won't match;
 "123\n" will. My Stream has no backtracking. Even if you don't need
 'try', it won't work for even basic stuff.

 Here's another way:
 http://www.mail-archive.com/haskell-cafe@haskell.org/msg22385.html
 The OP had the same problem I did, so he made a variant of
 hGetContents with timeout support. The problem: he used something from
 unsafe*. I came to Haskell for rigor and reliability and it would make
 me really sad to have to use a function with 'unsafe' in its name that
 has a lot of wacky caveats about inlining, etc.

 In that same thread, Bulat says a timeout-enabled Stream could help.
 But I can't tell what library that is. 'cabal list stream' shows me 3
 libraries none of which seems to be the one in question. Is Streams a
 going concern? Should I be checking that out?

 I'm not doing anything with hGetLine because 1) there's no way to
 specify a maximum number of characters to read 2) what is meant by a
 line is not specified 3) there is no way to tell if it read a line
 or just got to the end of the data. Even using something like hGetLine
 that worked better would make the parsing more obscure.

 Thank you very very much for *any* help.



[Haskell-cafe] Re: [Haskell] Top Level <-

2008-08-30 Thread Ashley Yakeley

Ganesh Sittampalam wrote:
If you want to standardise a language feature, you have to explain its 
behaviour properly. This is one part of the necessary explanation.


To be concrete about scenarios I was considering, what happens if:

 - the same process loads two copies of the GHC RTS as part of two 
completely independent libraries? For added complications, imagine that 
one of the libraries uses a different implementation instead (e.g. Hugs)


 - one Haskell program loads several different plugins in a way that 
allows Haskell values to pass across the plugin boundary


How do these scenarios work with use cases for <- like (a) Data.Unique 
and (b) preventing multiple instantiation of a sub-library?


That's a good question. But before you propose these scenarios, you must 
establish that they are sane for Haskell as it is today.


In particular, would _local_ IORefs work correctly? After all, the 
memory allocator must be global in some sense. Could you be sure that 
different calls to newIORef returned separate IORefs?


Perhaps this is the One True Global Scope: the scope in which refs from 
newIORef are guaranteed to be separate. It's the scope in which values 
from newUnique are supposed to be different, and it would also be the 
scope in which top-level <- would be called at most once.


--
Ashley Yakeley


[Haskell-cafe] Re: [Haskell] Top Level <-

2008-08-30 Thread Ashley Yakeley

Philippa Cowderoy wrote:

Talking of which, we really ought to look at an IO typeclass or two (not
just our existing MonadIO) and rework the library ops to use it in
Haskell'. You're not the only one to want it, and if it's not fixed this
time it may never get fixed.


This could allow the best of both worlds, as we could have a monad 
that one couldn't create global variables for, and a monad for which one 
could.


--
Ashley Yakeley


Re: [Haskell-cafe] Re: [Haskell] Top Level <-

2008-08-30 Thread Ganesh Sittampalam

On Sat, 30 Aug 2008, Ashley Yakeley wrote:


Ganesh Sittampalam wrote:
If you want to standardise a language feature, you have to explain its 
behaviour properly. This is one part of the necessary explanation.


To be concrete about scenarios I was considering, what happens if:

 - the same process loads two copies of the GHC RTS as part of two 
completely independent libraries? For added complications, imagine that 
one of the libraries uses a different implementation instead (e.g. Hugs)


 - one Haskell program loads several different plugins in a way that 
allows Haskell values to pass across the plugin boundary


How do these scenarios work with use cases for <- like (a) Data.Unique and 
(b) preventing multiple instantiation of a sub-library?


That's a good question. But before you propose these scenarios, you must 
establish that they are sane for Haskell as it is today.


In particular, would _local_ IORefs work correctly? After all, the memory 
allocator must be global in some sense. Could you be sure that different 
calls to newIORef returned separate IORefs?


Yes, I would expect that. Allocation areas propagate downwards from the OS 
to the top-level of a process and then into dynamically loaded modules if 
necessary. Any part of this puzzle that fails to keep them separate (in 
some sense) is just broken.


Perhaps this is the One True Global Scope: the scope in which refs from 
newIORef are guaranteed to be separate.


Every single call to newIORef, across the whole world, returns a different 
ref. The same one as a previous one can only be returned once the old 
one has become unused (and GCed).


It's the scope in which values from newUnique are supposed to be 
different, and it would also be the scope in which top-level <- would be 
called at most once.


I don't really follow this. Do you mean the minimal such scope, or the 
maximal such scope? The problem here is not about separate calls to 
newIORef, it's about how many times an individual <- will be executed.


Ganesh


[Haskell-cafe] Re: Calling Lockheed, Indra, Thales, Raytheon

2008-08-30 Thread Ashley Yakeley

Paul Johnson wrote:
This is a strange question, I know, but is there anyone working in any 
of the above companies on this mailing list?


Everyone will no doubt be wondering what they have in common.  I'm 
afraid I can't discuss that.


Air Traffic Control?

--
Ashley Yakeley


Re: [Haskell-cafe] Re: [Haskell] Top Level <-

2008-08-30 Thread Ganesh Sittampalam

On Sat, 30 Aug 2008, Adrian Hey wrote:


Ganesh Sittampalam wrote:
How do the implementers of Data.Unique know that they mustn't let them be 
serialised/deserialised?


Because if you could take a String and convert it to a Unique there
would be no guarantee that the result was *unique*.


Well, yes, but if I implemented a library in standard Haskell it would 
always be safely serialisable/deserialisable (I think). So the global 
variables hack somehow destroys that property - how do I work out why it 
does in some cases but not others?



I think the whole thread local state thing is a complete red herring.

I've never seen a convincing use case for it and I suspect the only


Well, I've never seen a convincing use case for global variables :-)


reason these two issues have become linked is that some folk are so
convinced that global variables are evil, they mistakenly think
thread local variables must be less evil (because they are less
global).


I don't think they're less evil, just that you might want them for the 
same sorts of reasons you might want global variables.


If plugins breaks, it's down to plugins to fix itself, at least until such 
time as a suitable formal theory of plugins has been developed so it can 
become standard Haskell :-)


Dynamic loading and plugins work fine with standard Haskell now, because 
nothing in standard Haskell breaks them. The <- proposal might well break 
them, which is a significant downside for it. In general, the smaller the 
world that the Haskell standard lives in, the less it can interfere with 
other concerns. <- massively increases that world, by introducing the 
concept of a process scope.


It's a hack that isn't robust in many situations. We should find better 
ways to do it, not standardise it.


Nobody's talking about standardising the current hack. This is the whole
point of the top level <- proposal,


It just amounts to giving the current hack some nicer syntax and stating 
some rules under which it can be used. Those rules aren't actually strong 
enough to provide a guarantee of process level scope.


which JM seems to think is sound enough for incorporation into JHC 
(correctly IMO). Nobody's found fault with it, other than the usual 
global variables are evil mantra :-)


Several people have found faults with it, you've just ignored or dismissed 
them. No doubt from your perspective the faults are irrelevant or untrue, 
but that's not my perspective.


Ganesh


Re: [Haskell-cafe] Re: Calling Lockheed, Indra, Thales, Raytheon

2008-08-30 Thread minh thu
2008/8/30 Ashley Yakeley [EMAIL PROTECTED]:
 Paul Johnson wrote:

 This is a strange question, I know, but is there anyone working in any of
 the above companies on this mailing list?

 Everyone will no doubt be wondering what they have in common.  I'm afraid
 I can't discuss that.

 Air Traffic Control?

Maybe global warming.

Thu


Re: [Haskell-cafe] Re: Calling Lockheed, Indra, Thales, Raytheon

2008-08-30 Thread Lennart Augustsson
They are all defense contractors.

On Sat, Aug 30, 2008 at 12:18 PM, Ashley Yakeley [EMAIL PROTECTED] wrote:
 Paul Johnson wrote:

 This is a strange question, I know, but is there anyone working in any of
 the above companies on this mailing list?

 Everyone will no doubt be wondering what they have in common.  I'm afraid
 I can't discuss that.

 Air Traffic Control?

 --
 Ashley Yakeley



[Haskell-cafe] Re: [Haskell] Top Level <-

2008-08-30 Thread Ashley Yakeley

Ganesh Sittampalam wrote:
Every single call to newIORef, across the whole world, returns a 
different ref.


How do you know? How can you compare them, except in the same Haskell 
expression?


The same one as a previous one can only be returned 
once the old one has become unused (and GCed).


Perhaps, but internally the IORef is a pointer value, and those pointer 
values might be the same. From the same perspective, one could say that 
every single call to newUnique across the whole world returns a 
different value, but internally they are Integers that might repeat.


It's the scope in which values from newUnique are supposed to be 
different, and it would also be the scope in which top-level - would 
be called at most once.


I don't really follow this. Do you mean the minimal such scope, or the 
maximal such scope? The problem here is not about separate calls to 
newIORef, it's about how many times an individual <- will be executed.


Two IO executions are in the same global scope if their resulting 
values can be used in the same expression. Top-level <- declarations 
must execute at most once in this scope.


--
Ashley Yakeley


Re: [Haskell-cafe] Re: [Haskell] Top Level <-

2008-08-30 Thread Ganesh Sittampalam

On Sat, 30 Aug 2008, Ashley Yakeley wrote:


Ganesh Sittampalam wrote:
Every single call to newIORef, across the whole world, returns a different 
ref.


How do you know? How can you compare them, except in the same Haskell 
expression?


I can write to one and see if the other changes.

The same one as a previous one can only be returned once the old one has 
become unused (and GCed).


Perhaps, but internally the IORef is a pointer value, and those pointer 
values might be the same. From the same perspective, one could say that


How can they be the same unless the memory management system is broken? I 
consider different pointers on different machines or in different virtual 
address spaces different too; it's the fact that they don't alias 
that matters.


every single call to newUnique across the whole world returns a different 
value, but internally they are Integers that might repeat.


The thing about pointers is that they are managed by the standard 
behaviour of memory allocation. This isn't true of Integers.


In fact this point suggests an implementation for Data.Unique that should 
actually be safe without global variables: just use IORefs for the actual 
Unique values. IORefs already support Eq, as it happens. That gives you 
process scope for free, and if you want bigger scopes you can pair that 
with whatever makes sense, e.g. process ID, MAC address, etc.
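A sketch of that suggestion (an illustration of the idea in this message, not the real Data.Unique): the IORef's identity *is* the unique value, and Eq on IORef provides the comparison.

```haskell
import Data.IORef

-- Unique values as bare IORefs: two Uniques are equal exactly when
-- they are the same allocation, so freshness comes from the allocator.
newtype Unique = Unique (IORef ())
  deriving Eq

newUnique :: IO Unique
newUnique = fmap Unique (newIORef ())
```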


Two IO executions are in the same global scope if their resulting 
values can be used in the same expression. Top-level <- declarations 
must execute at most once in this scope.


This brings us back to the RPC question, and indeed to just passing values 
to somewhere else via FFI. I think you can work around some of that by 
talking about ADTs that aren't serialisable (e.g. ban the class Storable), 
but now we have different global scopes for different kinds of values, so 
which scope do we use to define <- ?


Ganesh


[Haskell-cafe] Re: [Haskell] Top Level <-

2008-08-30 Thread Ashley Yakeley

Ashley Yakeley wrote:
I don't really follow this. Do you mean the minimal such scope, or the 
maximal such scope? The problem here is not about separate calls to 
newIORef, it's about how many times an individual <- will be executed.


Two IO executions are in the same global scope if their resulting 
values can be used in the same expression. Top-level <- declarations 
must execute at most once in this scope.


Better:

Two newIORef executions are in the same global scope if their 
resulting refs can be used in the same expression. Top-level <- 
declarations must execute at most once in this scope.


--
Ashley Yakeley


[Haskell-cafe] Re: [Haskell] Top Level <-

2008-08-30 Thread Ashley Yakeley

Ganesh Sittampalam wrote:
How do the implementers of Data.Unique know that they mustn't let them be 
serialised/deserialised? What stops the same rule from applying to 
Data.Random?


Unique values should be no more deserialisable than IORefs.

Is it the functionality of Data.Unique that you object to, or the fact 
that it's implemented with a global variable?


If the former, one could easily build Unique values on top of IORefs, 
since IORef is in Eq. Thus Data.Unique is no worse than IORefs (ignoring 
hashability, anyway).


If the latter, how do you recommend implementing Data.Unique? 
Implementing them on IORefs seems ugly. Or should they just be a 
primitive of the platform, like IORefs themselves?


--
Ashley Yakeley


[Haskell-cafe] Re: [Haskell] Top Level <-

2008-08-30 Thread Ganesh Sittampalam

On Sat, 30 Aug 2008, Ashley Yakeley wrote:

Is it the functionality of Data.Unique that you object to, or the fact that 
it's implemented with a global variable?


If the former, one could easily build Unique values on top of IORefs, since 
IORef is in Eq. Thus Data.Unique is no worse than IORefs (ignoring 
hashability, anyway).


If the latter, how do you recommend implementing Data.Unique? 
Implementing them on IORefs seems ugly.


This seems fine to me. It's based on something that already does work 
properly across a process scope, instead of some new language feature that 
is actually hard to implement across the process scope.


Ganesh


[Haskell-cafe] Re: [Haskell] Top Level <-

2008-08-30 Thread Ashley Yakeley

Ganesh Sittampalam wrote:
How can they be the same unless the memory management system is broken? 
I consider different pointers on different machines or in different 
virtual address spaces different too; it's the fact that they don't 
alias that matters.


But the actual pointer value might repeat.

every single call to newUnique across the whole world returns a 
different value, but internally they are Integers that might repeat.


The thing about pointers is that they are managed by the standard 
behaviour of memory allocation. This isn't true of Integers.


But it could be. A global variable allows us to do the same thing as the 
memory allocator, and allocate unique Integers just as the allocator 
allocates unique pointer values.
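That counter idea, written out (a sketch close in spirit to common Data.Unique implementations; note it leans on the very NOINLINE hack debated elsewhere in this thread):

```haskell
import Control.Concurrent.MVar
import System.IO.Unsafe (unsafePerformIO)

newtype Unique = Unique Integer
  deriving Eq

-- One global source of Integers, playing the allocator's role.
{-# NOINLINE uniqSource #-}
uniqSource :: MVar Integer
uniqSource = unsafePerformIO (newMVar 0)

newUnique :: IO Unique
newUnique =
  modifyMVar uniqSource $ \n ->
    let n' = n + 1
    in  n' `seq` return (n', Unique n')
```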


Now you can say that the same pointer value on different machines is 
different pointers; equally, you can say the same Integer in Unique on 
different machines is different Uniques: it's the fact that they don't 
alias that matters.


In fact this point suggests an implementation for Data.Unique that 
should actually be safe without global variables: just use IORefs for 
the actual Unique values. IORefs already support Eq, as it happens. That 
gives you process scope for free,


Isn't this rather ugly, though? We're using IORefs for something that 
doesn't involve reading or writing to them. Shouldn't there be a more 
general mechanism?


--
Ashley Yakeley


[Haskell-cafe] Re: [Haskell] Top Level <-

2008-08-30 Thread Ashley Yakeley

Ganesh Sittampalam wrote:
This seems fine to me. It's based on something that already does work 
properly across a process scope,


But you agree that IORefs define a concept of process scope?


instead of some new language feature that is actually hard to implement across 
the process scope.


If we have a concept of process scope, how is it hard to implement?

--
Ashley Yakeley


[Haskell-cafe] Re: [Haskell] Top Level <-

2008-08-30 Thread Ganesh Sittampalam

On Sat, 30 Aug 2008, Ashley Yakeley wrote:


Ganesh Sittampalam wrote:
This seems fine to me. It's based on something that already does work 
properly across a process scope,


But you agree that IORefs define a concept of process scope?


I'm not sure that they *define* process scope, because it might be safe to 
use them across multiple processes; it depends on OS-dependent properties. 
But they exist *at least* at process scope.


instead of some new language feature that is actually hard to implement 
across the process scope.


If we have a concept of process scope, how is it hard to implement?


Because memory allocation is already implemented, and not in a 
Haskell-dependent way. If two completely separate Haskell libraries are 
present in the same process, linked together by a C program, they don't 
even know about each other's existence. But they still don't share memory 
space.


Ganesh


Re: [Haskell-cafe] Re: [Haskell] Top Level <-

2008-08-30 Thread Adrian Hey

Ganesh Sittampalam wrote:

On Sat, 30 Aug 2008, Adrian Hey wrote:

Because if you could take a String and convert it to a Unique there
would be no guarantee that the result was *unique*.


Well, yes, but if I implemented a library in standard Haskell it would 
always be safely serialisable/deserialisable (I think). So the global 
variables hack somehow destroys that property - how do I work out why it 
does in some cases but not others?


This has nothing to do with the use of global variables. If you have
a set of values that are guaranteed to be distinct (unique) and you
add another random/arbitrary value to that set you have no way of
knowing that it is different from any current member (other than
searching the entire set, assuming it's available).


Well, I've never seen a convincing use case for global variables :-)


Well apart from all the libs that couldn't be implemented without them...


reason these two issues have become linked is that some folk are so
convinced that global variables are evil, they mistakenly think
thread local variables must be less evil (because they are less
global).


I don't think they're less evil, just that you might want them for the 
same sorts of reasons you might want global variables.


Global variables are needed to ensure important safety properties,
but the only reasons I've seen people give for thread local variables
is that explicit state threading is just so tiresome and ugly. Well
that may be (wouldn't disagree), but I'm not aware of any library
that simply couldn't be implemented without them.

If plugins breaks, it's down to plugins to fix itself, at least until 
such time as a suitable formal theory of plugins has been developed so 
it can become standard Haskell :-)


Dynamic loading and plugins work fine with standard Haskell now, because 
nothing in standard Haskell breaks them. The <- proposal might well 
break them, which is a significant downside for it.


I don't see how, but if so, <- bindings are not the cause of the
brokeness. They'd still be broken using the unsafePerformIO hack.

In general, the 
smaller the world that the Haskell standard lives in, the less it can 
interfere with other concerns. <- massively increases that world, by 
introducing the concept of a process scope.


All IORefs, MVars and Chans scope across the entire process defined by main.
Or at least they *should*, if they don't then something is already
badly wrong somewhere. This has nothing to do with whether or not they
appear at top level. This is what an IORef/MVar whatever is defined to
be.



It's a hack that isn't robust in many situations. We should find 
better ways to do it, not standardise it.


Nobody's talking about standardising the current hack. This the whole
point of the top level <- proposal,


It just amounts to giving the current hack some nicer syntax and stating 
some rules under which it can be used.


No, the unsafePerformIO hack is a hack because it's *unsound*. The
compiler doesn't know how to translate this into code that does
what the programmer intended. Fortunately ghc at least does have
a couple of flags that give the intended result (we hope).
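For readers following along, the hack being discussed is conventionally written as below. This is only a sketch with invented names: real uses must also be compiled with -fno-cse (and often -fno-full-laziness), the GHC flags alluded to above, to stop the compiler sharing or duplicating the cell.

```haskell
import Data.IORef (IORef, atomicModifyIORef, newIORef)
import System.IO.Unsafe (unsafePerformIO)

-- The "unsafePerformIO hack": a top-level mutable cell.  NOINLINE is
-- the part being abused: without it GHC could inline 'counter' and
-- allocate a fresh IORef at every use site, silently breaking the
-- one-cell-per-program intent.
{-# NOINLINE counter #-}
counter :: IORef Int
counter = unsafePerformIO (newIORef 0)

-- | Next value from the shared counter.
next :: IO Int
next = atomicModifyIORef counter (\n -> (n + 1, n + 1))

main :: IO ()
main = do
  a <- next
  b <- next
  print (a, b)  -- (1,2) only if 'counter' really is a single cell
```

A top-level <- binding would express the same intent directly, leaving the compiler no wriggle room.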

The new binding syntax is nicer, but its real purpose is to leave the
compiler no wriggle room when interpreting the programmers intent.

But then again, I'm sure that some will be adamant that any way
of making global variables is a hack. But they'll still be happy
to go on using file IO, sockets etc regardless, blissfully unaware
of the hacks they are dependent on :-)

Those rules aren't actually 
strong enough to provide a guarantee of process level scope.


The rules for <- bindings shouldn't have to guarantee this.
This should be guaranteed by newMVar returning a new *MVar*, wherever
it's used (for example).

which JM seems to think is sound enough for incorporation into JHC 
(correctly IMO). Nobody's found fault with it, other than the usual 
global variables are evil mantra :-)


Several people have found faults with it, you've just ignored or 
dismissed them. No doubt from your perspective the faults are irrelevant 
or untrue, but that's not my perspective.


I mean semantic faults, as in the proposal just doesn't do what it
promises for some subtle reason. If you consider not giving you thread
local variables a fault I guess you're entitled to that view, but this
was never the intent of the proposal in the first place (that's not
what people are trying to do when they use the unsafePerformIO hack).

Regards
--
Adrian Hey




[Haskell-cafe] Re: [Haskell] Top Level <-

2008-08-30 Thread Ashley Yakeley

Ganesh Sittampalam wrote:

On Sat, 30 Aug 2008, Ashley Yakeley wrote:


Ganesh Sittampalam wrote:
This seems fine to me. It's based on something that already does work 
properly across a process scope,


But you agree that IORefs define a concept of process scope?


I'm not sure that they *define* process scope, because it might be safe 
to use them across multiple processes; it depends on OS-dependent 
properties. But they exist *at least* at process scope.


How can one use IORefs across multiple processes? They cannot be serialised.


[Haskell-cafe] Re: [Haskell] Top Level <-

2008-08-30 Thread Ganesh Sittampalam

On Sat, 30 Aug 2008, Ashley Yakeley wrote:


Ganesh Sittampalam wrote:

On Sat, 30 Aug 2008, Ashley Yakeley wrote:


Ganesh Sittampalam wrote:
This seems fine to me. It's based on something that already does work 
properly across a process scope,


But you agree that IORefs define a concept of process scope?


I'm not sure that they *define* process scope, because it might be safe 
to use them across multiple processes; it depends on OS-dependent 
properties. But they exist *at least* at process scope.


How can one use IORefs across multiple processes? They cannot be serialised.


Firstly, that's a property of the current implementation, rather than a 
universal one, IMO. I don't for example see why you couldn't add a 
newIORef variant that points into shared memory, locking issues aside.


Also, the issue is not whether you can *use* them across multiple 
processes, but whether they are unique across multiple processes. 
Uniqueness has two possible definitions; aliasing, and representational 
equality. No two IORefs will ever alias, so by that definition they exist 
at global scope. For representational equality, that exists at least at 
process scope, and perhaps more.


Ganesh
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Re: Parsec and network data

2008-08-30 Thread Johannes Waldmann

apfelmus wrote:


Design your language in a way that the *parse* tree does not depend on import
statements? I.e. Chasing imports is performed after you've got an abstract
syntax tree.


OK, that would work.

This property does not hold for Haskell,
because you need the fixities of the operators
(so, another language design error :-)

J.W.





[Haskell-cafe] Re: [Haskell] Top Level <-

2008-08-30 Thread Ashley Yakeley

Ganesh Sittampalam wrote:
Firstly, that's a property of the current implementation, rather than a 
universal one, IMO. I don't for example see why you couldn't add a 
newIORef variant that points into shared memory, locking issues aside.


OK, so that would be a new Haskell feature. And it's that feature that 
would be the problem, not top-level <-. It would bring its own garbage 
collection issues, for instance.


Currently shared memory can only be raw bytes, and IORef values 
can't be serialised there.


Also, the issue is not whether you can *use* them across multiple 
processes, but whether they are unique across multiple processes. 
Uniqueness has two possible definitions; aliasing, and representational 
equality. No two IORefs will ever alias, so by that definition they 
exist at global scope. For representational equality, that exists at 
least at process scope, and perhaps more.


By global scope, I mean the largest execution scope an IORef created by 
newIORef can have. Each top-level IORef declaration should create an 
IORef at most once in this scope.


IORefs cannot be serialised, so they cannot be sent over serialised RPC. 
So let us consider your shared memory possibility.


Do you mean simply an IORef of a block of bytes of the shared memory? 
That would be fine, but that is really a different type than IORef. It 
still keeps the global scopes separate, as IORefs cannot be passed 
through [Word8].


Or do you mean you could use shared memory to pass across IORefs? This 
would mean joining the address spaces with no memory protection between 
them. It would mean joining the garbage collectors somehow. Once you've 
dealt with that, the issue of making sure that each initialiser runs 
only once for the new shared space is really only one more issue.


--
Ashley Yakeley


[Haskell-cafe] Re: [Haskell] Top Level <-

2008-08-30 Thread Ganesh Sittampalam

On Sat, 30 Aug 2008, Ashley Yakeley wrote:


Ganesh Sittampalam wrote:
Firstly, that's a property of the current implementation, rather than a 
universal one, IMO. I don't for example see why you couldn't add a 
newIORef variant that points into shared memory, locking issues aside.


OK, so that would be a new Haskell feature. And it's that feature that 
would be the problem, not top-level <-. It would bring its own garbage 
collection issues, for instance.


OK, never mind about that; I agree it's not a very good idea. An IORef 
shouldn't escape the scope of the RTS/GC that created it.


Also, the issue is not whether you can *use* them across multiple 
processes, but whether they are unique across multiple processes. 
Uniqueness has two possible definitions; aliasing, and representational 
equality. No two IORefs will ever alias, so by that definition they exist 
at global scope. For representational equality, that exists at least at 
process scope, and perhaps more.


By global scope, I mean the largest execution scope an IORef created by 
newIORef can have. Each top-level IORef declaration should create an IORef 
at most once in this scope.


That's a reasonable definition, if by execution scope you mean your 
previous definition of where the IORef can be directly used. But it's 
not process scope; two independent Haskell libraries in the same process 
can no more share IORefs than two separate Haskell processes.


[what I meant by global scope above was the entire world]

Ganesh


Re: [Haskell-cafe] Named field syntax

2008-08-30 Thread Johannes Waldmann

Consider for instance defining a datatype for a 3x3 matrix.


I think the only sensible modelling for that
would use dependent types.
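For what it's worth, later GHC extensions (GADTs and DataKinds) get close enough to dependent types for this particular case: length-indexed vectors give a 3x3 matrix type whose shape is checked at compile time. A sketch with invented names:

```haskell
{-# LANGUAGE DataKinds, GADTs, KindSignatures #-}

-- Length-indexed vectors: the size is part of the type, so a "3x3
-- matrix" type rejects rows of the wrong length at compile time.
data Nat = Z | S Nat

data Vec (n :: Nat) a where
  Nil  :: Vec 'Z a
  Cons :: a -> Vec n a -> Vec ('S n) a

type Three = 'S ('S ('S 'Z))
type Mat3 a = Vec Three (Vec Three a)

-- The length can still be recovered from the structure alone.
vlength :: Vec n a -> Int
vlength Nil         = 0
vlength (Cons _ xs) = 1 + vlength xs

row :: Vec Three Int
row = Cons 1 (Cons 2 (Cons 3 Nil))

main :: IO ()
main = print (vlength row)  -- 3
```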

Also, if positional record notation is a design error, then is it also a 
design error not to require all arguments to be explicitly associated 
with named formal parameters at a function call site (e.g. f(x = 1, y = 
2, z = 3))?


well, same question for incomplete case expressions.
Would you rule them out?

Leaving out some parameter associations would be possible
if we could declare default parameters
(in record types and function declarations) -

but this conflicts with partial application.
if you define  f (x :: Int) (y :: Int = 42) :: Int,
and write  (f 0), does it have type Int -> Int (partial app)
or type Int (using the default)?

I understand no-one seriously wants to remove partial application
for functions but if we (hypothetically, but we're in the *-cafe)
forbid positional notation for record constructors,
then we could have default values in record types,
and I can imagine quite some applications for that,
and especially so if the default value can be defined
to depend on the (other) values (that are given on construction).
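In the meantime, the closest standard-Haskell idiom is to export a default record and let callers override fields with record update syntax; no positional call is involved, so the partial-application ambiguity never arises. A sketch with invented names:

```haskell
-- A "default parameters" idiom using named fields plus record update:
-- callers state only the fields they want to change.
data Config = Config
  { width  :: Int
  , height :: Int
  , title  :: String
  } deriving Show

defaultConfig :: Config
defaultConfig = Config { width = 80, height = 24, title = "untitled" }

main :: IO ()
main = do
  let c = defaultConfig { title = "demo" }  -- override one field
  putStrLn (title c ++ " " ++ show (width c))
```

What this idiom cannot express is exactly the last wish above, a default that depends on the other values given on construction; that needs a smart constructor instead.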


Well, my horror for positional notation is basically
that introducing or removing a component/parameter
breaks all code that uses the type/function.
So, perhaps instead of changes in the language
I just want a refactoring tool that supports
* change function signature (remove, insert, swap parameters,
  including all necessary changes at call sites)
* for data declaration, convert positional to named notation

J.W.






Re: [Haskell-cafe] Re: [Haskell] Top Level <-

2008-08-30 Thread Adrian Hey

Adrian Hey wrote:

Global variables are needed to ensure important safety properties,
but the only reasons I've seen people give for thread local variables
is that explicit state threading is just so tiresome and ugly. Well
that may be (wouldn't disagree), but I'm not aware of any library
that simply couldn't be implemented without them.


I thought I ought to say a bit more about my unkind and hasty
words re. thread local variables. This is discussed from time to
time and there's a wiki page here summarising proposals...

 http://www.haskell.org/haskellwiki/Thread-local_storage

One thing that worries me is that nobody seems to know what problem
thread local storage is solving, hence diversity of proposals. I'm
also struggling to see why we need it, but I don't have any passionate
objections to it either.

Unfortunately for those of us that want a solution to the global
variables problem the two issues seem to have been linked as being
part of the same problem, so while there's all this uncertainty about what
thread local variables are actually going to be used for and what they
should look like the (IMO) much simpler global variables
problem/solution is in limbo. This has been going on 4 or 5 years now
IIRC.

But the global variables problem is really much simpler. All we
want is something that does exactly what the unsafePerformIO hack
currently does (assuming flag/pragma hackery does the trick), but
does it reliably. (IMO, YMMV..)

Regards
--
Adrian Hey



Re: [Haskell-cafe] Re: 2 modules in one file

2008-08-30 Thread Brandon S. Allbery KF8NH

On 2008 Aug 30, at 4:22, Aaron Denney wrote:

On 2008-08-27, Henrik Nilsson [EMAIL PROTECTED] wrote:

And there are also potential issues with not every legal module name
being a legal file name across all possible file systems.


I find this unconvincing.  Broken file systems need to be fixed.



Language people trying to impose constraints on filesystems is the  
tail wagging the dog.


--
brandon s. allbery [solaris,freebsd,perl,pugs,haskell] [EMAIL PROTECTED]
system administrator [openafs,heimdal,too many hats] [EMAIL PROTECTED]
electrical and computer engineering, carnegie mellon universityKF8NH




[Haskell-cafe] FunGEn

2008-08-30 Thread Henk-Jan van Tuyl

L.S.,

I found the Functional Game Engine FunGEn on the web:
  http://www.cin.ufpe.br/~haskell/fungen/download.html
It looks useful, but since it hasn't been maintained for a long time, it  
doesn't compile. Is there a newer version I can download?


--
Regards,
Henk-Jan van Tuyl


--
http://functor.bamikanarie.com
http://Van.Tuyl.eu/
--



Re: [Haskell-cafe] Parsec and network data

2008-08-30 Thread Johan Tibell
On Sat, Aug 30, 2008 at 3:51 AM, Thomas Schilling
[EMAIL PROTECTED] wrote:
 I remember Johan Tibell (CC'd) working on an extended variant of
 Parsec that can deal with this chunked processing.  The idea is to
 teach Parsec about a partial input and have it return a function to
 process the rest (a continuation) if it encounters the end of a chunk
 (but not the end of a file).  Maybe Johan can tell you more about
 this, or point you to his implementation.

I have written a parser for my web server that uses continuations to
resume parsing. It's not really Parsec like anymore though. It only
parses LL(1) grammars as that's all I need to parse HTTP. I haven't
released a first version of my server yet, indeed most of the code is
on this laptop and not in the Git repo [1], but if you would like to
steal some ideas feel free.

1. 
http://www.johantibell.com/cgi-bin/gitweb.cgi?p=hyena.git;a=blob;f=Hyena/Parser.hs;h=8086b11bfeb3bca15bfd16ec9c6a4b34aadf528e;hb=HEAD
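The continuation idea can be sketched in a few lines: a parse result is either finished or a function waiting for the next network chunk. This is only an illustration of the technique, not the actual hyena code linked above.

```haskell
-- A parse either finishes with a value and leftover input, or suspends
-- with a continuation that accepts the next network chunk.
data Result a
  = Done a String            -- parsed value plus unconsumed input
  | Partial (String -> Result a)

newtype Parser a = Parser { run :: String -> Result a }

-- Consume exactly n characters, suspending at chunk boundaries.
takeN :: Int -> Parser String
takeN n0 = Parser (go n0 [])
  where
    go 0 acc input    = Done (reverse acc) input
    go k acc []       = Partial (go k acc)    -- out of input: suspend
    go k acc (c : cs) = go (k - 1) (c : acc) cs

-- Feed one more chunk to a suspended parse.
feed :: Result a -> String -> Result a
feed (Partial k) chunk = k chunk
feed done        _     = done

main :: IO ()
main =
  case feed (run (takeN 5) "he") "llo!" of
    Done s rest -> putStrLn (s ++ " / leftover: " ++ rest)
    Partial _   -> putStrLn "still waiting for input"
```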

Cheers,

Johan


Re: [Haskell-cafe] Re: [Haskell] Top Level <-

2008-08-30 Thread Brandon S. Allbery KF8NH

On 2008 Aug 30, at 6:28, Adrian Hey wrote:

Ganesh Sittampalam wrote:
How do the implementers of Data.Unique know that they mustn't let  
them be serialised/deserialised?


Because if you could take a String and convert it to a Unique there
would be no guarantee that result was *unique*.


What stops the same rule from applying to Data.Random?


Well the only data type defined by this is StdGen, which is a Read/Show
instance. I guess there's no semantic problem with that (can't think of
one off hand myself).


You *want* to be able to reproduce a given random seed, for  
simulations and the like.


--
brandon s. allbery [solaris,freebsd,perl,pugs,haskell] [EMAIL PROTECTED]
system administrator [openafs,heimdal,too many hats] [EMAIL PROTECTED]
electrical and computer engineering, carnegie mellon universityKF8NH




[Haskell-cafe] Re: FunGEn

2008-08-30 Thread Simon Michael

Henk-Jan van Tuyl wrote:

L.S.,

I found the Functional Game Engine FunGEn on the web:
  http://www.cin.ufpe.br/~haskell/fungen/download.html
It looks useful, but since it hasn't been maintained for a long time, it 
doesn't compile. Is there a newer version I can download?


Hi Henk-Jan,

yes, see my updates at 
http://joyful.com/darcsweb/darcsweb.cgi?r=fungen;a=summary .


Thanks to Andre Furtado for this nice engine. I suggested he choose a 
license such as X11 or GPLv3 so it could be uploaded to hackage. He was 
unsure about his licensing options given that FunGEn depends on HOpenGL, 
and then got busy. Please contact him so this can get licensed and packaged 
for wider exposure.




[Haskell-cafe] Haskell Weekly News: Issue 83 - August 30, 2008

2008-08-30 Thread Brent Yorgey
---
Haskell Weekly News
http://sequence.complete.org/hwn/20080830
Issue 83 - August 30, 2008
---

   Welcome to issue 83 of HWN, a newsletter covering developments in the
   [1]Haskell community.

   This is the better late than never edition. As an excuse I could tell
   you that my home internet service has been horrible (now fixed) and I
   was away from home for a few days with my wife celebrating our third
   wedding anniversary. But instead, I give you a link to the
   [2]Uncyclopedia entry on Haskell. If you haven't already seen it, you
   should give it a read, being sure not to drink any milk at the same
   time, or at least pointing your nose away from the keyboard if you
   insist on drinking milk.

Community News

   If Dell sends John Goerzen (CosmicRay) one more catalog, it will
   [3]actually be a federal crime.

Announcements

   LogFloat 0.9. wren ng thornton [4]announced a new official release of
   the [5]logfloat package for manipulating log-domain floating numbers.
   This release is mainly for those who are playing with Transfinite
   rather than LogFloat, but the interface changes warrant a minor version
   change.

   validating xml lib - need some guidance. Marc Weber [6]asked for help
   developing an XML-generating library that validates the result against a
   given DTD.

   gsl-random 0.1 and monte-carlo-0.1. Patrick Perry [7]announced that he
   has started on [8]bindings for the random number generators and random
   distributions provided by the gsl. He has also written a [9]monad and
   transformer for doing monte carlo computations that uses gsl-random
   internally. For a quick tutorial in the latter package, see [10]his
   blog.

   Wired 0.1.1. Emil Axelsson [11]announced the first release of the
   hardware description library [12]Wired. Wired can be seen as an
   extension to Lava that targets (not exclusively) semi-custom VLSI
   design. A particular aim of Wired is to give the designer more control
   over the routing wires' effects on performance.

   darcs weekly news #1. Eric Kow [13]sent out the first edition of the
   new Darcs Weekly News!

   zip-archive 0.0. John MacFarlane [14]announced the release of the
   [15]zip-archive library for dealing with zip archives.

   The Monad.Reader - Issue 11. Wouter Swierstra [16]announced a new issue
   of [17]The Monad.Reader, with articles by David Place, Kenn Knowles,
   and Doug Auclair.

   First Monad Tutorial of the Season. Hans van Thiel [18]announced a new
   monad tutorial, [19]The Greenhorn's Guide to becoming a Monad Cowboy.

   Real World Haskell hits a milestone. Bryan O'Sullivan [20]proudly
   announced that the draft manuscript of Real World Haskell is
   [21]complete! It is now available online in its entirety. The authors
   expect the final book to be published around the beginning of November,
   and to weigh in at about 700 pages.

   Mueval 0.5.1, 0.6, 0.6.1, 0.6.2, 0.6.3, 0.6.4. Gwern Branwen
   [22]announced a number of new releases of [23]Mueval. Lambdabot now
   uses mueval for all its dynamic Haskell evaluation needs.

   Hoogle Database Generation. Neil Mitchell (ndm) [24]announced that a
   new release of the [25]Hoogle command line is out, including bug fixes
   and additional features. Upgrading is recommended. Two interesting
   features of Hoogle 4 are working with multiple function databases (from
   multiple packages), and running your own web server.

Blog noise

   [26]Haskell news from the [27]blogosphere.
 * Real-World Haskell: [28]Source handed over to production.
 * Douglas M. Auclair (geophf): [29]Earning \bot-Trophies.
 * Douglas M. Auclair (geophf): [30]Scanner-parsers II: State Monad
   Transformers.
 * Douglas M. Auclair (geophf): [31]Scanner-parsers I: lifting
   functions.
 *  software engineering radio: [32]Episode 108: Simon Peyton Jones
   on Functional Programming and Haskell. A podcast interview with
   Simon Peyton Jones.
 * Neil Mitchell: [33]Running your own Hoogle on a Web Server.
 * Braden Shepherdson: [34]Announcing xmonad-light. Braden is rolling
   out a new configuration framework for xmonad, providing an easier
   learning curve for those not wanting to learn Haskell right away,
   and an easy transition to a more powerful Haskell configuration
   when they want it.
 * Gabor Greif: [35]Category. Gabor is excited that base-3.0 will
   include Control.Category.
 * Douglas M. Auclair (geophf): [36]Ten = 1+2+3+4. Solving an
   arithmetic puzzle with Haskell, Prolog-style.
 * Paul R Brown: [37]perpubplat now on github.
 * Douglas M. Auclair (geophf): [38]Lucky you!?. Doug shares some
   secrets of his success in getting Haskell/Dylan/Mercury/Prolog
   jobs.
 * Neil Mitchell: [39]Hoogle Database Generation. Neil releases a new

Re: [Haskell-cafe] Re: FunGEn

2008-08-30 Thread Gwern Branwen
On 2008.08.30 12:06:33 -0700, Simon Michael [EMAIL PROTECTED] scribbled 0.7K 
characters:
 Henk-Jan van Tuyl wrote:
 L.S.,

 I found the Functional Game Engine FunGEn on the web:
   http://www.cin.ufpe.br/~haskell/fungen/download.html
 It looks useful, but since it hasn't been maintained for a long time,
 it doesn't compile. Is there a newer version I can download?

 Hi Henk-Jan,

 yes, see my updates at
 http://joyful.com/darcsweb/darcsweb.cgi?r=fungen;a=summary .

 Thanks to Andre Furtado for this nice engine. I suggested he choose a
 license such as X11 or GPLv3 so it could be uploaded to hackage. He was
 unsure about his licensing options given that Fungen depends on HOpenGL
 and got busy. Please contact him so this can get licensed and packaged
 for wider exposure.

The HOpenGL stuff is BSD3-licensed like the rest of the base libraries, isn't 
it? I think that means there are no license restrictions in that direction; 
Andre just needs to pick a Free license Hackage likes (==DFSG-compatible or 
OSI-approved). I don't see what the problem is.

--
gwern
InfoSec Z-150T HERF Met Information Texas Cornflower 20755 Bubba IS




[Haskell-cafe] Re: [Haskell] Top Level <-

2008-08-30 Thread Ashley Yakeley

Ganesh Sittampalam wrote:
By global scope, I mean the largest execution scope an IORef created 
by newIORef can have. Each top-level IORef declaration should create 
an IORef at most once in this scope.


That's a reasonable definition, if by execution scope you mean your 
previous definition of where the IORef can be directly used. But it's 
not process scope; two independent Haskell libraries in the same process 
can no more share IORefs than two separate Haskell processes.


[what I meant by global scope above was the entire world]


OK. Let's call it top-level scope. Haskell naturally defines such a 
thing, regardless of processes and processors. Each top-level <- would 
run at most once in top-level scope.


If you had two Haskell runtimes called by C code, each would have its own 
memory allocator and GC; IORefs, Uniques and thunks cannot be shared 
between them; and each would have its own top-level scope, even though 
they're in the same process.


--
Ashley Yakeley


[Haskell-cafe] Re: Parsec and network data

2008-08-30 Thread Aaron Denney
On 2008-08-30, Johannes Waldmann [EMAIL PROTECTED] wrote:
 apfelmus wrote:

 Design your language in a way that the *parse* tree does not depend
 on import statements? I.e. Chasing imports is performed after you've
 got an abstract syntax tree.

 OK, that would work.

 This property does not hold for Haskell,
 because you need the fixities of the operators
 (so, another language design error :-)

Yes, but you can partially parse into a list, which later gets
completely parsed.  It's not like C with its textual inclusion, and
constructs changing what counts as a type.
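Concretely, the first pass can keep each operator chain flat, and a second pass rebuilds the tree once fixities are known (in Haskell's case, after chasing imports). A small sketch; the precedence table and names are invented:

```haskell
import Data.Maybe (fromMaybe)

data Expr = Lit Int | App String Expr Expr
  deriving (Eq, Show)

-- Pass 1 output: first operand, then (operator, operand) pairs, with
-- no commitment yet to how the chain associates.
type FlatChain = (Expr, [(String, Expr)])

-- Fixity table; in the Haskell case this is only known after chasing
-- imports.  All operators treated as left-associative for brevity.
prec :: String -> Int
prec op = fromMaybe 9 (lookup op [("+", 6), ("*", 7)])

-- Pass 2: split at the last lowest-precedence operator and recurse.
resolve :: FlatChain -> Expr
resolve (e, []) = e
resolve (e, ops) = App op (resolve (e, ls)) (resolve (r, rs))
  where
    lowest = minimum (map (prec . fst) ops)
    ix     = last [ i | (i, (o, _)) <- zip [0 ..] ops, prec o == lowest ]
    (ls, (op, r) : rs) = splitAt ix ops  -- safe: ix < length ops

main :: IO ()
main = print (resolve (Lit 1, [("+", Lit 2), ("*", Lit 3)]))
```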

-- 
Aaron Denney



[Haskell-cafe] Discussing FFI

2008-08-30 Thread Maurício

Hi,

I saw the (a?) page about FFI on Haskell:

http://www.cse.unsw.edu.au/~chak/haskell/ffi

It shows a link to a mailing list but,
checking the list archives at:

http://www.haskell.org/pipermail/ffi ,

it seems it's dead since 2007. Is there
a new list about FFI? Where should I
discuss my beginner issues about it?

Thanks,
Maurício



Re: [Haskell-cafe] Discussing FFI

2008-08-30 Thread Don Stewart
briqueabraque:
 Hi,
 
 I saw the (a?) page about FFI on Haskell:
 
 http://www.cse.unsw.edu.au/~chak/haskell/ffi
 
 It shows a link to a mailing list but,
 checking the list archives at:
 
 http://www.haskell.org/pipermail/ffi ,
 
 it seems it's dead since 2007. Is there
 a new list about FFI? Where should I
 discuss my beginner issues about it?

The FFI is standard Haskell, so just talk about it on haskell-cafe@haskell.org
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] idea: TH calculating type level function results / binary types?

2008-08-30 Thread Marc Weber
Hi,

Maybe you've noticed that I've started writing an XML library which
validates the generated XML against a given DTD at compile time.
Not everything is implemented right now and it would be quite usable if
it didn't take that much time.

The problem:

To see some details, have a look at my small announcement on haskell-cafe.
Type checking 12 lines of XML code has taken:
 3 hours using some TypeToNat type equality comparison
 35 sec using an implementation proposed by Oleg
 30 sec no longer using HLists to check whether an attribute may be added,
  but an AttrOk tag attr class (proposed by Oleg)
 ? sec after finishing a state transformation implementation, probably
  generating thousands of intermediate states, or a mixture of this and
  the parser-like approach.

Even if the last approach works it's a pity because:
* it took much time to write
* hard to read code
* no error messages..
  In the 30 sec approach I've used some instances such as
  class Consume elType state state' | elType state -> state'
  class FailHelper state' state'' | state' -> state''
  instance FailHelper (Fail a) => FailHelper (F a) ()
    -- F indicating sub element could not be consumed, failure
  instance FailHelper a a

  So I got error messages telling me
  no instance for MoreElementsExpected
(Or (E Html_T)
(E Body_T))
  or such.. that's nice and usable.

I can't think of a nice way of reporting errors when transforming a dtd
line (a,b,c) (= tag sequence a b c) into
data State1
data State2
data State3
instance Consume State1 A_T State2 -- consume tag a
instance Consume State2 B_T State3 -- consume tag b
instance Consume State3 C_T ConsumedEnd -- consume tag c
in a convenient way without adding much bloat.
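Filled out with value-level witnesses, instances of that shape are runnable; the toy types here are invented for illustration. Consuming tags out of order is a compile-time error, which is the whole point:

```haskell
{-# LANGUAGE MultiParamTypeClasses, FunctionalDependencies #-}

-- Toy state machine for the DTD line (a,b,c): the type checker only
-- accepts the tag sequence a, b, c, in that order.
data State1 = State1
data State2 = State2
data State3 = State3
data ConsumedEnd = ConsumedEnd deriving Show
data A_T = A_T
data B_T = B_T
data C_T = C_T

class Consume st tag st' | st tag -> st' where
  consume :: st -> tag -> st'

instance Consume State1 A_T State2      where consume _ _ = State2
instance Consume State2 B_T State3      where consume _ _ = State3
instance Consume State3 C_T ConsumedEnd where consume _ _ = ConsumedEnd

main :: IO ()
main = print (consume (consume (consume State1 A_T) B_T) C_T)
-- e.g. 'consume State1 B_T' would be rejected at compile time
```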

The XHTML spec has about 150 ! tags.. So even creating this kind of
state transforming instances will result in a lot of bloat.
And if the compiler then has to load a several-MB .hi file it will spend
unnecessary time just loading them.



an idea: (new?) solution:
  What about enhancing ghc so that 

  a) you can define type level functions and
calculate the result using template haskell functions:

benefits:
  * speed (it can even be compiled! )
  * nice error messages on failure
  * maintainable code
  * reusing known source (such as parser combinator libraries)

syntax could look like this:

  class Bar a b c d | a b -> c, a b -> d

  instance Bar (Maybe x) z $(calculateC) $(calculateD)

  calculateC :: Type -> Type -> Type
  calculateD :: Type -> Type -> Type

an efficient implementation for
TypeEq a b typebool | a b -> typebool

could be done by everyone with ease without reading HList source or
Oleg's papers.. (you should do so anyway ..)
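As a historical footnote on this wish: GHC later grew closed type families (7.8), where an efficient TypeEq really is a two-line definition rather than an instance-trickery exercise. A sketch, with a small reflection class so the result is observable at the value level:

```haskell
{-# LANGUAGE DataKinds, TypeFamilies #-}

import Data.Proxy (Proxy (..))

-- A closed type family: equations are tried top to bottom, so the
-- overlap that made class-based TypeEq painful is unproblematic here.
type family TypeEq a b :: Bool where
  TypeEq a a = 'True
  TypeEq a b = 'False

-- Reflect the type-level Bool to a value so we can observe the result.
class KnownBool (b :: Bool) where
  boolVal :: Proxy b -> Bool

instance KnownBool 'True  where boolVal _ = True
instance KnownBool 'False where boolVal _ = False

main :: IO ()
main = do
  print (boolVal (Proxy :: Proxy (TypeEq Int Int)))   -- True
  print (boolVal (Proxy :: Proxy (TypeEq Int Char)))  -- False
```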

  b) add some binary serialization support so that you don't have to do
expensive data -> type level -> data (de)serializations.
.hi files wouldn't blow much this way as well.

So a parse specification such as (1) could really be included within
types and could become even more complex. Then an existing parser
combinator library could be used and a validating XML library could be
written within some hours..

I think ghc is already a superior compiler because it allows such
advanced techniques, thereby making users such as me ask for even more
:-) I consider this a good sign.


  So has anyone thought about this before?
  Would someone help / guide me implementing this extension in the near
  future?

  Of course it will clash with some instance -X extensions.. but I think
  that in everyday use you'd probably use either this template haskell
  approach or use normal class instance declarations, so I guess it
  could be handled.

Maybe this does already exist in one or the other way?

Sincerely
Marc Weber


(1)
  (Seq
 (Elem Title_T)
 (Seq
(Star
   (Or
  (Elem Script_T)
  (Or
 (Elem Style_T)
 (Or
(Elem Meta_T)
(Or (Elem Link_T) (Elem Object_T))
(Query
   (Seq
  (Elem Base_T)
  (Star
 (Or
(Elem Script_T)
(Or
   (Elem Style_T)
   (Or
  (Elem Meta_T)
  (Or
 (Elem Link_T) (Elem Object_T))



