Re: [cloud-haskell-developers] Does anyone have much experience generating Haskell from Coq?

2018-12-10 Thread Tim Watson
On Mon, 10 Dec 2018, 09:30, Gershom B wrote:

> The other approach, which has been quite successful, by the Penn team,
> is using hs-to-coq to extract Coq from Haskell and _then_ verify:
> https://github.com/antalsz/hs-to-coq


Thank you! Someone else proposed that off list yesterday too. If we get our
layering right, that could definitely be a viable alternative!

I will do some more research. I generally think that https://deepspec.org/
is an awesome idea. :)
___
Glasgow-haskell-users mailing list
Glasgow-haskell-users@haskell.org
http://mail.haskell.org/cgi-bin/mailman/listinfo/glasgow-haskell-users


Re: Guidelines for respectful communication

2018-12-08 Thread Tim Watson
I think this is brilliant. Will have a good read of them, and do my best to
adopt them for my own projects and any interactions I have within the
community.

Thank you Simon!

PS: we love you too! :D

On Thu, 6 Dec 2018 at 10:35, Simon Peyton Jones via Glasgow-haskell-users <
glasgow-haskell-users@haskell.org> wrote:

> Friends
> As many of you will know, I have been concerned for several years about
> the standards of discourse in the Haskell community.  I think things have
> improved since the period that drove me to write my Respect email<
> https://mail.haskell.org/pipermail/haskell/2016-September/024995.html>,
> but it's far from secure.
> We discussed this at a meeting of the GHC Steering Committee<
> https://github.com/ghc-proposals/ghc-proposals> at ICFP in September, and
> many of us have had related discussions since.  Arising out of that
> conversation, the GHC Steering Committee has decided to adopt these
>   Guidelines for respectful communication<
> https://github.com/ghc-proposals/ghc-proposals/blob/master/GRC.rst>
>
> We are not trying to impose these guidelines on members of the Haskell
> community generally. Rather, we are adopting them for ourselves, as a
> signal that we seek high standards of discourse in the Haskell community,
> and are willing to publicly hold ourselves to that standard, in the hope
> that others may choose to follow suit.
> We are calling them "guidelines for respectful communication" rather than
> a "code of conduct", because we want to encourage good communication,
> rather than focus on bad behaviour.  Richard Stallman's recent post<
> https://lwn.net/Articles/769167/> about the new GNU Kind Communication
> Guidelines expresses
> the same idea.
> Meanwhile, the Stack community is taking a similar approach<
> https://www.snoyman.com/blog/2018/11/proposal-stack-coc>.
> Our guidelines are not set in stone; you can comment here<
> https://github.com/ghc-proposals/ghc-proposals/commit/373044b5a78519071b9a24b3681cfd1af06e57e0>.
>  Perhaps they can evolve so that other Haskell committees (or even
> individuals) feel able to adopt them.
> The Haskell community is such a rich collection of intelligent,
> passionate, and committed people. Thank you -- I love you all!
> Simon


Does anyone have much experience generating Haskell from Coq?

2018-12-08 Thread Tim Watson
So far I've been reading
https://www.cs.purdue.edu/homes/bendy/Fiat/FiatByteString.pdf. I'm
interested in the ideas presented in
https://github.com/DistributedComponents/verdi-runtime, which is OCaml
based.

My goal is to provide building blocks for verifying and testing Cloud
Haskell programs. I've been looking at existing frameworks (such as
quickcheck-state-machine/-distributed and hedgehog) for model based
testing, and ways of injecting an application layer scheduler for detecting
race conditions. The final bit of the puzzle is being able to apply formal
methods to verify concurrent/distributed algorithms, and generate some (if
not all) of the required implementation code.
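
A minimal, base-only sketch of the "application layer scheduler" idea mentioned above: run a model program under every possible interleaving and inspect the final states. This is hypothetical glue for illustration, not the quickcheck-state-machine or hedgehog API:

```haskell
-- Hypothetical sketch (base only): detect a lost-update race by running a
-- model program under every interleaving of two processes, standing in for
-- an injected application-level scheduler.
import Data.List (foldl')

-- All interleavings of two instruction streams, preserving per-stream order.
interleavings :: [a] -> [a] -> [[a]]
interleavings [] ys = [ys]
interleavings xs [] = [xs]
interleavings (x:xs) (y:ys) =
  map (x:) (interleavings xs (y:ys)) ++ map (y:) (interleavings (x:xs) ys)

-- Each process does a non-atomic read-modify-write on a shared counter.
data Op = Rd Int | Wr Int  -- tagged with a process id (0 or 1)

step :: (Int, [Int]) -> Op -> (Int, [Int])
step (c, regs) (Rd p) = (c, [if i == p then c else r | (i, r) <- zip [0 ..] regs])
step (c, regs) (Wr p) = (regs !! p + 1, regs)

main :: IO ()
main = do
  let prog p = [Rd p, Wr p]
      finals = [ fst (foldl' step (0, [0, 0]) s)
               | s <- interleavings (prog 0) (prog 1) ]
  -- Atomic increments would always yield 2; the race shows up as a final 1.
  print (minimum finals, maximum finals)
```

Exhaustive interleaving only scales to tiny models, of course; the frameworks above use generated schedules instead, but the checking principle is the same.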

Any pointers to research or prior art would be greatly appreciated.

Cheers,
Tim Watson
___
Glasgow-haskell-users mailing list
Glasgow-haskell-users@haskell.org
http://mail.haskell.org/cgi-bin/mailman/listinfo/glasgow-haskell-users


Re: Static values language extension proposal

2014-01-28 Thread Tim Watson
Hi Mathieu,

On 28 Jan 2014, at 12:53, Mathieu Boespflug wrote:
 We would prefer to do it that way, to be honest. As explained in my
 previous email, we identified two problems with this approach:
 
 1) User friendliness. It's important for us that Cloud Haskell be
 pretty much as user friendly and easy to use as Erlang is.
 

Exactly!

a) I don't know that it's possible from Template Haskell to detect
 and warn the user when dependent modules have not been compiled into
 dynamic object code or into static code with the right flags.
 

I don't think that it is, from what I've seen, though I'm by no means an expert.

b)  It's very convenient in practice to be able to send not just
 `f` if `f` is a global identifier, but in general `e` where `e` is any
 closed expression mentioning only global names. That can easily be
 done by having the compiler float the expression `e` to the top-level
 and give it a global name. I don't see how to do that in TH in a user
 friendly way.

Agreed.
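
For context: this is essentially what later shipped in GHC 7.10 as the StaticPointers extension. A minimal sketch of the user-facing idea, local dereference only, no networking:

```haskell
{-# LANGUAGE StaticPointers #-}
-- Sketch: `static e` works on any closed expression, not only a named
-- top-level binding; the compiler floats it out and gives it a stable key.
import GHC.StaticPtr (StaticPtr, deRefStaticPtr)

doubleIt :: StaticPtr (Int -> Int)
doubleIt = static (\n -> n * 2)  -- closed lambda, no top-level name needed

main :: IO ()
main = print (deRefStaticPtr doubleIt 21)
```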

 
 2) A technical issue: you ought to be able to send unexported
 functions across the wire, just as you can pass unexported functions
 as arguments to higher-order functions. Yet GHC does not create linker
 symbols for unexported identifiers, so our approach would break down.
 Worse, I don't think that it's even possible to detect in TH whether
 an identifier is exported or not, in order to warn the user. One could
 imagine a compiler flag to force the creation of linker symbols for
 all toplevel bindings, exported or unexported. But that seems
 wasteful, and potentially not very user friendly.

Interesting.

 
 If the above can be solved, all the better!
 
 If not: we don't always want to touch the compiler, but when we do,
 ideally it should be in an unintrusive way. I contend our proposal
 fits that criterion. And our cursory implementation efforts seem to
 confirm that so far.

Good!

 
 But I really think that insisting linker symbol names denote the same datum 
 is punting on what should be handled at the application level: agreement in 
 a distributed system. Simon Marlow put some improvements into GHC to help 
 improve doing dynamic code (un)loading; stress test that!
 
 We could use either the system linker or rts linker. Not sure that it
 makes any difference at the application level.

No indeed.

 
 2) I've a work in progress on specing out a proper (and sound :) ) static
 values type extension for ghc, that will be usable perhaps in your case
 (though by dint of being sound, will preclude some of the things you think
 you want).
 
 I look forward to hearing more about that.

+1

 How is the existing proposal not (type?) sound?
 

I'd like to hear more about the concerns too.

 As for *how* to send an AST fragment, edward kmett and others have some
 pretty nice typed AST models that are easy to adapt and extend for an
 application specific use case. Bound
 http://hackage.haskell.org/package/bound is one nice one.
 
 heres a really really good school of haskell exposition
 https://www.fpcomplete.com/user/edwardk/bound
 
 These are nice encodings for AST's. But they don't address how to
 minimize the amount of code to ship around the cluster. If you have no
 agreement about what functions are commonly available, then the AST
 needs to include the code for the function you are sending, + any
 functions it depends, + any of their dependencies, and so on
 transitively.

That was precisely my concern with the idea of shipping *something* AST-like 
around. It's a lot of overhead for every application you want to develop, or a 
*massive* overhead to cover all bases.
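
For comparison, the "agree on a DSL" alternative being discussed can be as small as a closed GADT plus an interpreter. A hypothetical, base-only sketch; both binaries would compile this grammar, and only `Expr` values cross the wire:

```haskell
{-# LANGUAGE GADTs #-}
-- Sketch: a tiny typed AST both peers agree on. The closed grammar bounds
-- exactly what has to be serialised; no transitive code shipping needed.
data Expr a where
  Lit :: Int -> Expr Int
  Add :: Expr Int -> Expr Int -> Expr Int
  LeE :: Expr Int -> Expr Int -> Expr Bool
  If  :: Expr Bool -> Expr a -> Expr a -> Expr a

eval :: Expr a -> a
eval (Lit n)    = n
eval (Add a b)  = eval a + eval b
eval (LeE a b)  = eval a <= eval b
eval (If c t e) = if eval c then eval t else eval e

main :: IO ()
main = print (eval (If (LeE (Lit 1) (Lit 2)) (Add (Lit 20) (Lit 22)) (Lit 0)))
```

The trade-off in the thread is visible even here: anything not expressible in the grammar simply cannot be sent.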

 
 Tim, perhaps the following also answers some of your questions. This
 is where the current proposal comes in: if you choose to ship around
 AST's, you can minimize their size by having them mention shared
 linker symbol names.

Indeed, that does seem to simplify things.

 Mind, that's already possible today, by means of
 the global RemoteTable, but it's building that remote table safely,
 conveniently, in a modular way, and with static checking that no
 symbols from any of the modules that were linked at build time were
 missed, that is difficult.
 

Yep. It's awkward, and when you get it wrong you're either fighting with 
TH-obscured compiler errors or, worse, the damn thing just doesn't work (because 
you can't decode properly on the remote node and things just crash, or worse 
still, just hang waiting for the *correct* input types, which never arrive 
because they're not known to the RTS).

 By avoiding a RemoteTable entirely, we avoid having to solve that
 difficult problem. :)

Not having a RemoteTable sounds like a plus to me.
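
For readers unfamiliar with it, the remote-table idea amounts to a map from string labels to dynamically-typed values. A much-simplified, hypothetical sketch (not the distributed-static API) that also shows the failure mode above, where a missing label or mismatched type resolves to nothing:

```haskell
-- Hypothetical sketch of a label-based remote table using Data.Dynamic.
import Data.Dynamic (Dynamic, toDyn, fromDynamic)

remoteTable :: [(String, Dynamic)]
remoteTable =
  [ ("math.incr", toDyn ((+ 1)  :: Int -> Int))
  , ("math.neg",  toDyn (negate :: Int -> Int))
  ]

-- Resolution fails if the label was never registered *or* the type disagrees.
resolve :: String -> Maybe (Int -> Int)
resolve lbl = lookup lbl remoteTable >>= fromDynamic

main :: IO ()
main = do
  print (fmap ($ 41) (resolve "math.incr"))     -- registered, right type
  print (fmap ($ 41) (resolve "math.missing"))  -- never registered
```

Building such a table safely and modularly across many modules is precisely the difficulty the quoted text describes.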

Cheers,
Tim




Re: Static values language extension proposal

2014-01-27 Thread Tim Watson
Hi Brandon,

On 26 Jan 2014, at 19:01, Brandon Allbery wrote:

 On Sun, Jan 26, 2014 at 1:43 PM, Tim Watson watson.timo...@gmail.com wrote:
 In Erlang, I can rpc/send *any* term and evaluate it on another node. That 
 includes functions of course. Whether or not we want to be quite that general 
 is another matter, but that is the comparison I've been making.
 
 Note that Erlang gets away with this through being a virtual machine 
 architecture; BEAM is about as write-once-run-anywhere as it gets, and the 
 platform specifics are abstracted by the BEAM VM interpreter. You just aren't 
 going to accomplish this with a native compiled language, without encoding as 
 a virtual machine yourself (that is, the AST-based mechanisms).

Yeah, I do realise this. Of course we're not trying to reproduce the BEAM 
really, but what we /do/ want is to be able to exchange messages between 
nodes that are not running the same executable. The proposal does appear to 
address this requirement, at least to some extent. There may be complementary 
(or better) approaches. I believe Carter is going to provide some additional 
details about his work in this area at some point.

Anything that reduces the amount of Template Haskell required to work with 
Cloud Haskell is a good thing (tm) IMO. Not that I mind using TH, but the 
programming model is currently quite awkward from the caller's perspective, 
since you've got to (a) create a Static/Closure out of potentially complex 
chunks of code, which often involves creating numerous top-level wrapper APIs, 
and (b) fiddle around with the remote-table (both in the code that defines 
remote-able thunks *and* in the code that starts a node wishing to operate on 
them).

Also note that this problem isn't limited to sending code around the network. 
Just sending arbitrary *data* between nodes is currently discouraged (though 
not disallowed) because the receiving program *might* not understand the types 
you're sending it. This is very restrictive and the proposal does, at the very 
least, allow us to safely serialise, send and receive types that both programs 
know about by virtue of having been linked to the same library/libraries. 

But yes - there are certainly constraints and edge cases aplenty here. I'm not 
entirely sure whether or not we'd need to potentially change the (binary) 
encoding of raw messages in distributed-process, for example, in response to 
this change. Currently we serialise a pointer (i.e., the pointer to the 
fingerprint for the type that's being sent), and I can imagine that not working 
properly across different nodes running on different architectures etc.
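
The fingerprints in question are GHC's Typeable fingerprints. A small illustrative sketch, using base's Data.Typeable, of why comparing fingerprint *values* (rather than pointers to them) is the architecture-independent check:

```haskell
-- Sketch: type fingerprints are stable per type, so equal values (not equal
-- pointers) are what two different binaries can meaningfully compare.
import Data.Proxy    (Proxy (..))
import Data.Typeable (typeRep, typeRepFingerprint)

main :: IO ()
main = do
  let fpInt    = typeRepFingerprint (typeRep (Proxy :: Proxy Int))
      fpInt'   = typeRepFingerprint (typeRep (Proxy :: Proxy Int))
      fpString = typeRepFingerprint (typeRep (Proxy :: Proxy String))
  print (fpInt == fpInt')    -- same type, same fingerprint
  print (fpInt == fpString)  -- distinct types, distinct fingerprints
```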

 Perhaps you should consider fleshing out ghc's current bytecode support to be 
 a full VM?

After discussing this with Simon M, we concluded there was little point in 
doing so. The GHC RTS is practically a VM anyway, and there's probably not that 
much value to be gained by shipping bytecode around. Besides, as you put it, 
the AST-based mechanisms allow for this anyway (albeit with some coding 
required on the part of the application developer) and Carter (and others) 
assure me that the mechanisms required to do this kind of thing already exist. 
We just need to find the right way to take advantage of them.

 Or perhaps an interesting alternative would be a BEAM backend for ghc.
 

I've talked to a couple of people that want to try this. I'm intrigued, but 
have other things to focus on. :)

Cheers,
Tim


Re: Static values language extension proposal

2014-01-26 Thread Tim Watson
On 25 Jan 2014, at 18:12, Carter Schonwald wrote:

 1) you should (once 7.8 is out) evaluate how far you can push your ideas wrt 
 dynamic loading as a user land library.
 If you can't make it work as a library and can demonstrate why (or how, even 
 though it works, it's not quite satisfactory), that signals something!  
 

Is that something you'll consider looking at, Mathieu?

  Theres quite a few industrial haskell shops that provide products / services 
 where internally they do runtime dynamic loading of user provided object 
 files, so i'm sure that the core GHC support is there if you actually dig 
 into the apis! And they do this in a distributed systems context, sans CH.
 

We have a pull request from Edsko that melds hs-plugins support with static, as 
per the original proposal's notes, so this seems like a corollary issue to me. 

 2) I've a work in progress on specing out a proper (and sound :) ) static 
 values type extension for ghc, that will be usable perhaps in your case 
 (though by dint of being sound, will preclude some of the things you think 
 you want). BUT, any type system changes need to actually provide safety. My 
 motivation for having a notion of static values comes from a desire to add 
 compiler support for certain numerical computing operations that require 
 compiler support to be usable in haskell. BUT, much of the same work 
 

Timescales? There are commercial users of Cloud Haskell clamouring for 
improvements to the way we handle this situation, and I'm keen to combine 
reaching broader community agreement about the right thing to do with 
meeting our users' real needs. If there are other options pertaining to 
static support, I'd like to know more!

 @tim: what on earth does sending arbitrary code mean? I feel like the more 
 precise thing everyone here wants is: for a given application / 
 infrastructure deployment, I would like to be able to send my application-specific 
 computations over the network, using Cloud Haskell, and be sure that both 
 sides think it's the same code.
 

With Cloud Haskell in its current guise, I can Closure up pretty much any thunk I 
like and spawn it on a remote node. If the nodes are both running the same 
executable, we're fine. If they're not, we're potentially in trouble.

In Erlang, I can rpc/send *any* term and evaluate it on another node. That 
includes functions of course. Whether or not we want to be quite that general 
is another matter, but that is the comparison I've been making.

 As for *how* to send an AST fragment, edward kmett and others have some pretty 
 nice typed AST models that are easy to adapt and extend for an application 
 specific use case. Bound http://hackage.haskell.org/package/bound is one nice 
 one. 
 
 heres a really really good school of haskell exposition 
 https://www.fpcomplete.com/user/edwardk/bound
 
 And there's a generalization that supports strong typing that I've copied from 
 an hpaste https://gist.github.com/cartazio/5727196, where it's notable that 
 the AST data type is called Remote :).
 I think that's a hint it's meant to be a Haskell-manipulable way of 
 constructing a typed DSL you can serialize using a finally-tagless style API 
 approach (i.e., have a set of type class instances / operations that you use to 
 run the computation and/or construct the AST you can send over the wire)
 

These are all lovely, but aren't we talking about either (a) putting together 
an AST to represent whatever valid Haskell program someone wants to send, or 
(b) forcing every application developer to write an AST to cover all their 
remote computations? Both of those sound like a lot more work than the proposal 
below. They may be the right approach for some domains, but there is a fair 
bit of developer overhead involved from what I can see.

 On Fri, Jan 24, 2014 at 3:19 PM, Mathieu Boespflug 0xbadc...@gmail.com 
 wrote:
 The `static e` form could as well be a piece of Template Haskell, but
 making it a proper extension means that the compiler can enforce more
 invariants and be a bit more helpful to the user. In particular,
 detecting situations where symbolic references cannot be generated
 because e.g. the imported packages were not compiled as dynamic linked
 libraries. Or seamlessly supporting calling `static f` on an identifier
 `f` that is not exported by the module.
 

All of which sound like a usability improvement to me.

 I very much subscribe to the idea of defining small DSL's for
 exchanging code between nodes. And this proposal is compatible with
 that idea.
 
 One thing that might not have been so clear in the original email is
 that we are proposing here to introduce just *one such DSL*. It's just
 that it's a trivial one whose grammar only contains linker symbol
 names.
 

That triviality is a rather important point as well, because...

 As it happens, distributed-static today already supports two such
 DSL's: a DSL of labels, which are arbitrary string names for
 functions, and a small language 

Re: Static values language extension proposal

2014-01-24 Thread Tim Watson
I don't have time to weigh in on this proposal right now, but I have several 
comments...

On 24 Jan 2014, at 17:19, Facundo Domínguez wrote:
 Rationale
 ===
 
 We want the language extension to meet the following requirements:
 
  1. It must be a practical alternative to the remoteTable functions
 in the distributed-static package.
 

Agreed - this is vital!

  2. It must not change the build scheme used for Haskell programs. A
 collection of .o files produced from Haskell source code should still
 be possible to link with the system linking tools.
 

Also vital.

  3. It must not restrict all communicating processes using the
 extension to be launched from the same binary.
 

I personally think this is very valuable.

 About the need for using different binaries
 ==
 
 While using distributed-process we found some use cases for supporting
 communicating closures between multiple binaries.
 
 One of these use cases involved a distributed application and a
 monitoring tool. The monitoring tool would need to link in some
 graphics libraries to display information on the screen, none of which
 were required by the monitored application. Conversely, the monitored
 application would link in some modules that the monitoring application
 didn’t need. Crucially, both applications are fairly loosely coupled,
 even if they both need to exchange static values about bindings in
 some modules they shared.

Indeed - this is an almost canonical use-case, as are administrative (e.g., 
remote management) tools.

 As the application depends on shared libraries, now a tool to collect
 these libraries would be required so they can be distributed together
 with the executable binary when deploying a Cloud Haskell application
 in a cluster. We won’t delve further into this problem.

Great idea.

 
 Another possible line of work is extending this approach so a process
 can pull shared objects from a remote peer, when this remote peer
 sends a static value that is defined in a shared object not available
 to the process.

This would go a long way towards answering our questions about 'hot code 
upgrade' and be useful in many other areas too.



Re: Static values language extension proposal

2014-01-24 Thread Tim Watson
On 24 Jan 2014, at 17:59, Carter Schonwald wrote:
 0) I think you could actually implement this proposal as a userland library, 
 at least as you've described it. Have you tried doing so? 
 

I didn't pick up on that at all - how would we be able to do that?

 1) what does this accomplish that can not be accomplished by having various 
 nodes agree on a DSL, and sending ASTs to each other?
  1a) in fact, I'd argue (and some others agree, and I'll admit my 
 opinions have been shaped by those more expert than me) that sending a 
 wee AST you can interpret on the other side is much SAFER than sending a 
 function symbol that's hard-coded (hopefully) into both programs in a way that 
 it means the same thing.  I've had many educational conversations with 
 

I've still not seen a convincing example of how to do this though. It would 
help if someone explained what this would look like, running over two (or more) 
separate binaries and still shipping code. It's just that, afaict, that AST 
wouldn't be so wee once it had to represent any arbitrary expression. One 
could, of course, just ship source (or some intermediate representation), but 
that would also require compiler infrastructure to be installed on the target.

 2) how does it provide more type safety than the current TH based approach? 
 (I've seen Tim and others hit very very gnarly bugs in cloud haskell based 
 upon the magic static values approach). 
 

This is definitely true, but I see it as a problem related to our use of TH 
rather than our current use of closures and 'Static' per se. Having said that, 
it can be toe-curlingly difficult to work with closure/static sometimes, so 
*anything* that makes this easier sounds good to me.

 
 to repeat: have you considered defining an AST type + interpreter for the 
 computations you want to send around, and doing that? I think it's a much 
 simpler, safer, easier, more flexible and PORTABLE approach, though one current CH 
 doesn't do (though the folks working on CH seem to be receptive to switching 
 to such a strategy if someone validates it)
 

I/we are, I think, amenable to doing whatever makes the most sense. This could 
include doing more than one thing, when it comes to dealing with 'statics'. 
Personally I think the proposal sounds interesting, though as I mentioned in my 
previous mail, I haven't had time to sit down and look at it in detail yet. 

Cheers,
Tim


Re: GHC 7.8 release?

2013-02-08 Thread Tim Watson
On 8 Feb 2013, at 05:18, Carter Schonwald wrote:
 johan, how do you and Bryan have those Jenkins nodes set up?
 
 (I'm planning to set up something similar for my own use, and seeing how 
 that's set up would be awesome)
 

Likewise, I'm in the process of setting up Elastic Bamboo on EC2 for Cloud 
Haskell and would be very interested in seeing how you've dealt with multiple 
versions of GHC.

Cheers,
Tim


Re: GHC 7.8 release?

2013-02-08 Thread Tim Watson
Hi Bryan,

On 8 Feb 2013, at 11:53, Bryan O'Sullivan wrote:

 On Fri, Feb 8, 2013 at 1:29 AM, Tim Watson watson.timo...@gmail.com wrote:
 Likewise, I'm in the process of setting up Elastic Bamboo on EC2 for Cloud 
 Haskell and would be very interested in seeing how you've dealt with multiple 
 versions of GHC.
 
 It's easy to parameterize builds in Jenkins based on different values of an 
 environment variable, so Johan and I just have different versions of GHC 
 installed side by side, and then set $GHC_VERSION to 7.6 7.4 7.2 7.0 6.12 
 (or whatever), put /usr/local/$GHC_VERSION/bin at the front of $PATH, and the 
 right thing happens.

Ok cool, that's pretty much what I had in mind but I wasn't sure about 
installing dependencies and using cabal-install. In my development environment 
I quickly found that installing multiple GHCs and haskell-platform releases got 
a bit messy, so I was wondering if there was a recognised 'best way' to do 
this. I'll probably replicate what I've done with other things (such as Erlang) 
and manage it with ${PREFIX}/ghc/versions/... and symlink 
${PREFIX}/ghc/current/... to avoid the path switching. Hopefully telling 
cabal-install to use ${PREFIX}/ghc/current/lib will 'just work' when installing 
dependencies as I switch between ghc versions.
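
The layout described above might look something like this. Hypothetical shell sketch; the paths and helper name are illustrative, not an established tool:

```shell
# Hypothetical sketch of a versions/ tree with a 'current' symlink.
PREFIX="$(mktemp -d)"   # stand-in for a real install prefix
mkdir -p "$PREFIX/ghc/versions/7.6.3/bin" "$PREFIX/ghc/versions/7.4.2/bin"

# Switch the active GHC by repointing one symlink; PATH stays fixed.
ghc_use() { ln -sfn "$PREFIX/ghc/versions/$1" "$PREFIX/ghc/current"; }

ghc_use 7.6.3
export PATH="$PREFIX/ghc/current/bin:$PATH"
readlink "$PREFIX/ghc/current"
```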

Cheers!
Tim


Re: Cloud Haskell and network latency issues with -threaded

2013-02-06 Thread Tim Watson
Hi Kostirya,

I'm putting the parallel-haskell and ghc-users lists on cc, just in case other 
(better informed) folks want to chip in here.



First of all, I'm assuming you're talking about network latency when compiling 
with -threaded - if not I apologise for misunderstanding!

There is apparently an outstanding network latency issue when compiling with 
-threaded, but according to a conversation I had with the other developers on 
#haskell-distributed, this is not something that's specific to Cloud Haskell. 
It is something to do with the threaded runtime system, so it would need to be 
solved for GHC (or is it just the network package!?) in general. Writing a 
simple C program and an equivalent socket program in Haskell, and comparing the 
latency with -threaded, will show this up.

See the latency section in 
http://haskell-distributed.github.com/wiki/networktransport.html for some more 
details. According to that, there *are* some things we might be able to do, but 
the 20% latency isn't going to change significantly on the face of things.

We have an open ticket to look into this 
(https://cloud-haskell.atlassian.net/browse/NTTCP-4) and at some point we'll 
try and put together the sample programs in a github repository (if that's not 
already done - I might've missed previous spikes done by Edsko or others) and 
investigate further.

One of the other (more experienced!) devs might be able to chip in and proffer 
a better explanation.

Cheers,
Tim


On 6 Feb 2013, at 13:27, kosti...@gmail.com wrote:

 Have you ever needed to run Haskell in non-threaded mode during intense 
 network data exchange? 
 I am seeing a double performance penalty in threaded mode. But I must use 
 threaded mode because epoll and kevent are available in threaded mode 
 only. 
 

[snip]

 
 
 On Wednesday, 6 February 2013 at 12:33:36 UTC+2, Tim Watson wrote:
 Hello all, 
 
 It's been a busy week for Cloud Haskell and I wanted to share a few of 
 our news items with you all. 
 
 Firstly, we have a new home page at http://haskell-distributed.github.com, 
 into which most of the documentation and wiki pages have been merged. Making 
 sassy looking websites is not really my bag, so I'm very grateful to the 
 various authors whose Creative Commons licensed designs and layouts made 
 it easy to put together. We've already had some pull requests to fix minor 
 problems on the site, so thanks very much to those who've contributed 
 already! 
 
 As well as the new site, you will find a few of us hanging out on the 
 #haskell-distributed channel on freenode. Please do come along and join in 
 the conversation. 
 
 We also recently split up the distributed-process project into separate 
 git repositories, one for each component that makes up Cloud Haskell. This 
 was done partly for administrative purposes and partly because we're in the 
 process of setting up CI builds for all the projects. 
 
 Finally, we've moved from Github's issue tracker to a hosted Jira/Bamboo 
 setup 
 at https://cloud-haskell.atlassian.net - pull requests are naturally still 
 welcome 
 via Github! Although you can browse issues freely without logging in, you 
 will 
 need to provide an email address and get an account in order to submit new 
 ones. 
 If you have any difficulties logging in, please don't hesitate to contact me 
 directly, via this forum or the cloud-haskell-developers mailing list (on 
 google groups). 
 
 As always, we'd be delighted to hear any feedback! 
 
 Cheers, 
 Tim




Re: Master thesis

2013-01-16 Thread Tim Watson
Shameless plug: Cloud Haskell. See the links below for a list of open issues, 
some really complex, some really simple. Not sure if any are suitable for a 
master thesis, but feel free to look and see if there's anything meaty enough.

https://github.com/haskell-distributed/distributed-process/issues
https://github.com/haskell-distributed/distributed-process-platform/issues
https://github.com/haskell-distributed/distributed-process-platform/wiki/Contributing
 (same guidelines apply for both projects)

Versioning and conversion of serialized types (between versions) would be a 
boon to us if you could think of a neat way to do that. It would also come in 
handy for dphd I'm sure - see distributed-process-static and 
https://github.com/haskell-distributed/distributed-process/issues/106 for some 
of the reasons why we might want that.

On 16 Jan 2013, at 09:11, Vikraman wrote:

 Hi, I am looking to hack on ghc/haskell for my master thesis. What are some
 areas that I should be looking into?
 
 Any suggestions are welcome.
 
 




Re: How to start with GHC development?

2012-12-13 Thread Tim Watson
I'm in a very similar position. I have some background knowledge and would love 
to contribute to GHC in the future, but the barrier to entry is pretty high, even 
though I have some familiarity with compiler theory and a long history with 
functional languages like ML/OCaml. A background in C might mean that 
contributing to the runtime system could be an area where I make myself useful 
in the future.

More pointers to resources and research materials would be most helpful.

Tim

On 13 Dec 2012, at 08:56, Jan Stolarek jan.stola...@p.lodz.pl wrote:

 Dear list,
 
 I'm reposting my message from Haskell-cafe here since this seems like a more 
 appropriate place to 
 ask this question. I would like to learn about internals of GHC and 
 contribute to its development 
 in the future. I read a couple of papers that give a very general overview of 
 GHC (chapter from 
 AoS, papers about inliner and multicore support) and I'm thinking what 
 direction should I pursue 
 now. I got the GHC sources and started reading commentary on the wiki, but it 
 seems that entry 
 barrier is very high.
 
 Aside from the problem of understanding the whole project organization 
 itself, I also don't have 
 background on implementation of functional languages. In fact I have a rather 
 basic knowledge of 
 compilers in general - I only took Stanford's online course on Compilers at 
 Coursera. I was 
 thinking that perhaps I should read SPJs Implementing Functional Languages: 
 a tutorial or The 
 Implementation Of Functional Languages book to learn more about theory? On 
 the other hand I 
 don't know if getting stuck in theory for next couple of months is a good 
 idea.
 
 I would greatly appreciate any advice and help.
 
 Janek
 



Re: GHC Performance Tsar

2012-11-30 Thread Tim Watson
Could we not configure Travis CI to run the benchmarks for us, or something like 
that? A simple (free) CI setup would be easier than finding a pair of hands to 
do this regularly, I would have thought.
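
For what it's worth, a hypothetical sketch of such a setup (the file layout, 
the nofib make targets, and the analysis step here are assumptions, not a 
tested configuration):

```yaml
# .travis.yml -- hypothetical sketch only: build the nofib suite on
# every push and compare against a committed baseline log.
language: haskell
script:
  - cd nofib
  - make boot
  - make 2>&1 | tee new.log
  # Summarise regressions against the previous release's numbers:
  - nofib-analyse/nofib-analyse baseline.log new.log
```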

On 30 Nov 2012, at 14:42, Simon Peyton-Jones simo...@microsoft.com wrote:

 |  While writing a new nofib benchmark today I found myself wondering
 |  whether all the nofib benchmarks are run just before each release,
 
 I think we could do with a GHC Performance Tsar.  Especially now that Simon 
 has changed jobs, we need to try even harder to broaden the base of people 
 who help with GHC.  It would be amazing to have someone who was willing to:
 
 * Run nofib benchmarks regularly, and publish the results
 
 * Keep baseline figures for GHC 7.6, 7.4, etc so we can keep
   track of regressions
 
 * Investigate regressions to see where they come from; ideally
   propose fixes.
 
 * Extend nofib to contain more representative programs (as Johan is
   currently doing).
 
 That would help keep us on the straight and narrow.  
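 
 The workflow above can be sketched roughly as follows (a hedged sketch:
 the exact make targets and the nofib-analyse path are assumptions that
 may vary between GHC checkouts):
 
 ```shell
 # Run the nofib suite against two compilers and compare the results.
 # Assumes a GHC tree with the nofib/ subdirectory checked out.
 cd nofib
 make clean && make boot
 make NoFibRuns=5 2>&1 | tee ghc-7.6.log     # baseline release
 # ... rebuild nofib with the development compiler, then:
 make NoFibRuns=5 2>&1 | tee ghc-head.log
 # Summarise differences in allocation and runtime:
 nofib-analyse/nofib-analyse ghc-7.6.log ghc-head.log
 ```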
 
 Any offers?  It could be more than one person.

 Simon
 
 | -Original Message-
 | From: glasgow-haskell-users-boun...@haskell.org [mailto:glasgow-haskell-
 | users-boun...@haskell.org] On Behalf Of Simon Marlow
 | Sent: 30 November 2012 12:11
 | To: Johan Tibell
 | Cc: glasgow-haskell-users
 | Subject: Re: Is the GHC release process documented?
 | 
 | On 30/11/12 03:54, Johan Tibell wrote:
 |  While writing a new nofib benchmark today I found myself wondering
 |  whether all the nofib benchmarks are run just before each release,
 |  which then drove me to go look for a document describing the release
 |  process. A quick search didn't turn up anything, so I thought I'd ask
 |  instead. Is there a documented GHC release process? Does it include
 |  running nofib? If not, may I propose that we do so before each release
 |  and compare the result to the previous release*.
 | 
 |  * This likely means that nofib has to be run for the upcoming release
 |  and the prior release each time a release is made, as numbers don't
 |  translate well between machines so storing the results somewhere is
 |  likely not that useful.
 | 
 | I used to do this on an ad-hoc basis: the nightly builds at MSR spit out
 | nofib results that I compared against previous releases.
 | 
 | In practice you want to do this much earlier than just before a release,
 | because it can take time to investigate and squash any discrepancies.
 | 
 | On the subject of the release process, I believe Ian has a checklist
 | that he keeps promising to put on the wiki (nudge :)).
 | 
 | Cheers,
 |Simon
 | 
 | 
 





Re: help

2011-03-08 Thread Tim Watson
 fine.  On the other hand, you cannot simply type it at the ghci
 prompt; you will get a parse error like the one you mentioned.  ghci
 only allows you to enter expressions, not declarations.

Which in practice (for a total beginner not familiar with Haskell) means
typing `let doubleMe x = x + x` instead.
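
To make the distinction concrete, a minimal sketch (the module name and the
`main` wrapper are just illustrative):

```haskell
-- DoubleMe.hs
-- In a source file this is an ordinary top-level declaration;
-- no 'let' is required.
doubleMe :: Int -> Int
doubleMe x = x + x

main :: IO ()
main = print (doubleMe 21)  -- prints 42
```

At the ghci prompt of that era the same definition had to be entered as an
expression-style binding, `let doubleMe x = x + x`; newer versions of ghci
also accept the bare declaration.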
