Re: [Haskell-cafe] [Haskell] ANN: Monad.Reader Issue 22

2013-08-08 Thread Neil Davies
And yet others who believe the Axiom of Choice is flawed?


On 8 Aug 2013, at 09:04, Henning Thielemann lemm...@henning-thielemann.de 
wrote:

 
 On Wed, 7 Aug 2013, Edward Z. Yang wrote:
 
 I am pleased to announce that Issue 22 of the Monad Reader is now available.
 
   http://themonadreader.files.wordpress.com/2013/08/issue22.pdf
 
 Issue 22 consists of the following two articles:
 
 * Generalized Algebraic Data Types in Haskell by Anton Dergunov
 * Error Reporting Parsers: a Monad Transformer Approach by Matt Fenwick 
 and Jay Vyas
 * Two Monoids for Approximating NP-Complete Problems by Mike Izbicki
 
 That is, there are three kinds of Haskellers: The ones who can count and the 
 others who cannot. :-)
 
 ___
 Haskell mailing list
 hask...@haskell.org
 http://www.haskell.org/mailman/listinfo/haskell


___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Backward compatibility

2013-05-03 Thread Neil Davies
Isn't this a problem of timescale?

Nothing can be backward compatible forever (or at least nothing that 
is still being developed or maintained).

There will be, in the life of any project of non-trivial length, change.

We rely (and I mean rely) on Haskell s/w that was written over 10 years ago - 
we accept that when we want to recompile it (about every 2-3 years, as the 
systems it interacts with are updated) there will be a cost of change to bring 
it up to the latest libraries etc.

But the joy is that (combined with the regression and property tests around it) 
we can have very high confidence that once the old stuff recompiles it will 
operate the same. And as it is key to our business, that is a nice feeling - it 
lets me sleep soundly at night.

The beauty of the Haskell ecosystem is that such change is nowhere near as 
hazardous as with other approaches. Therefore its total costs are lower.

The experimentalism in this community is to be applauded and encouraged - it is 
spawning robust solutions to the real underlying problem of constructing a 
society that can actually support the technical infrastructure it is creating. 
Don't forget the motivation for Ada: the projected costs of supporting the US 
defence infrastructure were on course to exceed the projected GDP of the US by 
2012. Maintaining essential safety-critical systems isn't an optional extra.

We see the same Ada-like scenario working its way out in companies today - 
large Telcos, Government projects etc. - and I also see that formalism, such as 
that embodied in Haskell, is the ONLY hope we have to contain the costs of 
maintenance complexity.

Yes, the underlying issue is a human one - but it lies in the low value given 
to the medium- to long-term sustainability of organisations (e.g. Telcos) 
compared to the relatively high value given to novelty. Perhaps this is an 
inevitable by-product of a marketing-driven, short-termist business culture?

Neil

On 3 May 2013, at 10:04, Ertugrul Söylemez e...@ertes.de wrote:

 Raphael Gaschignard dasur...@gmail.com wrote:
 
 I'm pretty sure most of us have experienced some issue with
 dependencies breaking, and it's probably the most frustrating problem
 we can have in any language. It's hard not to take this all a bit
 personally. Maybe if we think more about how to solve this (getting
 people to maintain their stuff, for example) we can make the world a
 better place instead of bickering about issues that are more or less
 language-agnostic really.
 
 The problem can't be solved technically.  It's a human problem after all
 and it's amplified by the experimentalism in this community.  I think
 the best we can do is to acknowledge its existence, which places us way
 ahead of mainstream programming communities.
 
 We don't pretend that type X in lib-0.1.0 is the same as type X in
 lib-0.2.0.  What we need to work on is the ability to actually combine
 multiple versions of the same package conveniently, i.e. we shouldn't
 view this combination as an error.
 
 
 Greets,
 Ertugrul
 
 -- 
 Not to be or to be and (not to be or to be and (not to be or to be and
 (not to be or to be and ... that is the list monad.
 ___
 Haskell-Cafe mailing list
 Haskell-Cafe@haskell.org
 http://www.haskell.org/mailman/listinfo/haskell-cafe


___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Can a GC delay TCP connection formation?

2012-11-28 Thread Neil Davies
Jeff

Are you certain that all the delay can be laid at the GHC runtime's door? 

How much of the end-to-end delay budget is being allocated to you? I recently 
moved a static website from a 10-year-old server in Telehouse into AWS in 
Ireland and watched the access time (an HTTP GET to check the time on the top 
index page) increase by 150ms.

Neil
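
A minimal sketch of the strict periodic-update pattern Jeff describes below 
(the names are illustrative, not his code; assumes the deepseq package) - the 
point being that the new state is forced in full *before* atomicModifyIORef' 
publishes it, so readers never pay the evaluation cost:

import Control.Concurrent (threadDelay)
import Control.DeepSeq (NFData, force)
import Control.Exception (evaluate)
import Control.Monad (forever)
import Data.IORef

-- Re-run 'query' once a minute and swap in its fully evaluated result.
refreshLoop :: NFData st => IORef st -> IO st -> IO ()
refreshLoop ref query = forever $ do
  new <- query >>= evaluate . force     -- deep-evaluate outside the swap
  atomicModifyIORef' ref (\_ -> (new, ()))
  threadDelay (60 * 1000 * 1000)        -- one minute, in microseconds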

On 27 Nov 2012, at 19:02, Jeff Shaw shawj...@gmail.com wrote:

 Hello Timothy and others,
 One of my clients hosts their HTTP clients in an Amazon cloud, so even when 
 they turn on persistent HTTP connections, they use many connections. Usually 
 they only end up sending one HTTP request per TCP connection. My specific 
 problem is that they want a response in 120 ms or so, and at times they are 
 unable to complete a TCP connection in that amount of time. I'm looking at on 
 the order of 100 TCP connections per second, and on the order of 1000 HTTP 
 requests per second (other clients do benefit from persistent HTTP 
 connections).
 
 Once each minute, a thread of my program updates a global state, stored in an 
 IORef and updated with atomicModifyIORef', based on query results via 
 HDBC-odbc. The query results are strict, and atomicModifyIORef' should 
 receive the updated state already evaluated. I reduced the amount of time 
 that query took from tens of seconds to just a couple, and for some reason 
 that reduced the proportion of TCP timeouts drastically. The approximate 
 before and after TCP timeout proportions are 15% and 5%. I'm not sure why 
 this reduction in timeouts resulted from the query time improving, but this 
 discovery has me on the task of moving all database code out of the main 
 program and into a cron job. My best guess is that HDBC-odbc somehow disrupts 
 other communications while it waits for the DB server to respond.
 
 To respond to Ertugrul, I'm compiling with -threaded, and running with +RTS 
 -N.
 
 I hope this helps describe my problem. I can probably come up with some hard 
 information if requested, e.g. ThreadScope.
 
 Jeff
 
 On 11/27/2012 10:55 AM, timothyho...@seznam.cz wrote:
 Could you give us more info on what your constraints are?  Is it necessary 
 that you have a certain number of connections per second, or is it necessary 
 that the connection results very quickly after some other message is 
 received?
 
 
 ___
 Haskell-Cafe mailing list
 Haskell-Cafe@haskell.org
 http://www.haskell.org/mailman/listinfo/haskell-cafe


___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Can a GC delay TCP connection formation?

2012-11-28 Thread Neil Davies
No - the difference is 6.5ms each way



On 28 Nov 2012, at 14:44, Alexander Kjeldaas alexander.kjeld...@gmail.com 
wrote:

 
 Jeff, this is somewhat off topic, but interesting.  Are telehouse and AWS 
 physically close?  Was this latency increase not expected due to geography?
 
 Alexander
 
 On 28 November 2012 06:21, Neil Davies semanticphilosop...@gmail.com wrote:
 Jeff
 
 Are you certain that all the delay can be laid at the GHC runtime's door?
 
 How much of the end-to-end delay budget is being allocated to you? I recently 
 moved a static website from a 10-year-old server in Telehouse into AWS in 
 Ireland and watched the access time (an HTTP GET to check the time on the top 
 index page) increase by 150ms.
 
 Neil
 
 On 27 Nov 2012, at 19:02, Jeff Shaw shawj...@gmail.com wrote:
 
  Hello Timothy and others,
  One of my clients hosts their HTTP clients in an Amazon cloud, so even when 
  they turn on persistent HTTP connections, they use many connections. 
  Usually they only end up sending one HTTP request per TCP connection. My 
  specific problem is that they want a response in 120 ms or so, and at times 
  they are unable to complete a TCP connection in that amount of time. I'm 
  looking at on the order of 100 TCP connections per second, and on the order 
  of 1000 HTTP requests per second (other clients do benefit from persistent 
  HTTP connections).
 
  Once each minute, a thread of my program updates a global state, stored in 
  an IORef and updated with atomicModifyIORef', based on query results via 
  HDBC-odbc. The query results are strict, and atomicModifyIORef' should 
  receive the updated state already evaluated. I reduced the amount of time 
  that query took from tens of seconds to just a couple, and for some reason 
  that reduced the proportion of TCP timeouts drastically. The approximate 
  before and after TCP timeout proportions are 15% and 5%. I'm not sure why 
  this reduction in timeouts resulted from the query time improving, but this 
  discovery has me on the task of moving all database code out of the main 
  program and into a cron job. My best guess is that HDBC-odbc somehow 
  disrupts other communications while it waits for the DB server to respond.
 
  To respond to Ertugrul, I'm compiling with -threaded, and running with +RTS 
  -N.
 
  I hope this helps describe my problem. I can probably come up with some 
  hard information if requested, e.g. ThreadScope.
 
  Jeff
 
  On 11/27/2012 10:55 AM, timothyho...@seznam.cz wrote:
  Could you give us more info on what your constraints are?  Is it necessary 
  that you have a certain number of connections per second, or is it 
  necessary that the connection results very quickly after some other 
  message is received?
 
 
  ___
  Haskell-Cafe mailing list
  Haskell-Cafe@haskell.org
  http://www.haskell.org/mailman/listinfo/haskell-cafe
 
 
 ___
 Haskell-Cafe mailing list
 Haskell-Cafe@haskell.org
 http://www.haskell.org/mailman/listinfo/haskell-cafe
 

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] acid-state audit trail

2012-10-19 Thread Neil Davies
The history is there until you archive it (move a checkpoint out into a separate 
directory) and then delete the archive yourself.

The checkpointing just reduces the recovery time (i.e. it creates a fixed point 
in time); if you were to keep all the checkpoints/archives then you would have 
the complete history.

Neil
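
A sketch of that flow, assuming the acid-state and safecopy packages; the 
Counter state is purely illustrative:

{-# LANGUAGE DeriveDataTypeable, TemplateHaskell, TypeFamilies #-}
import Control.Monad.State (modify)
import Data.Acid
import Data.SafeCopy
import Data.Typeable

newtype Counter = Counter Int deriving (Typeable)
$(deriveSafeCopy 0 'base ''Counter)

bump :: Update Counter ()
bump = modify (\(Counter n) -> Counter (n + 1))

$(makeAcidic ''Counter ['bump])

main :: IO ()
main = do
  st <- openLocalState (Counter 0)
  update st Bump
  createCheckpoint st  -- fixes a recovery point; earlier events stay on disk
  createArchive st     -- moves pre-checkpoint logs into an Archive/ directory
  closeAcidState st    -- deleting Archive/ yourself is what discards history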

On 19 Oct 2012, at 06:18, Richard Wallace rwall...@thewallacepack.net wrote:

 Hey all,
 
 I've been looking at acid-state as a possible storage backend for an
 application.  It looks like it fits my needs pretty damn well, but one
 thing that I'm curious about is whether it is possible to get a list of
 update events.  You can obviously query for the current state, but
 it's not immediately apparent whether you can see the history of your
 value's state.  This is useful for some things, like providing audit
 trails and debugging, as well as for being able to re-create state in a
 different form.
 
 I was also curious whether the createCheckpoint function eliminates the
 state history or just creates a snapshot; it's not apparent
 from the docs.
 
 Thanks,
 Rich
 
 ___
 Haskell-Cafe mailing list
 Haskell-Cafe@haskell.org
 http://www.haskell.org/mailman/listinfo/haskell-cafe


___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Reddy on Referential Transparency

2012-07-28 Thread Neil Davies
Except in the complexity of the gymnastics and the fragility of the conclusions.

Humans can't do large-scale, complex brain gymnastics - that's why abstraction 
exists. If your proof process doesn't abstract (and in the C case you need to 
know *everything* about *everything*, have to prove it all in one go, and that 
proof will not survive a single change) then it isn't feasible.

Haskell gives you the means to manage the complexity - and grasping complexity 
is humanity's current challenge...

Neil



On 28 Jul 2012, at 05:43, damodar kulkarni wrote:

 
 
 So a language is referentially transparent if replacing a sub-term with
 another with the same denotation doesn't change the overall meaning?
 But then isn't any language RT with a sufficiently cunning denotational
 semantics?  Or even a dumb one that gives each term a distinct denotation.
 
 That's neat ... I mean, by performing sufficiently complicated brain 
 gymnastics, one can do equational reasoning on C subroutines (functions!) too.
 
 So, there is no big difference between C and Haskell when it comes to 
 equational reasoning...
 
  
 Regards,
 Damodar 
 
 
 On Sat, Jul 28, 2012 at 1:47 AM, Alexander Solla alex.so...@gmail.com wrote:
 
 
 On Fri, Jul 27, 2012 at 12:06 PM, Ross Paterson r...@soi.city.ac.uk wrote:
 On Fri, Jul 27, 2012 at 07:19:40PM +0100, Chris Dornan wrote:
   So a language is referentially transparent if replacing a sub-term with 
   another with the same
   denotation doesn't change the overall meaning?
 
  Isn't this just summarizing the distinguishing characteristic of a 
  denotational semantics?
 
 Right, so where's the substance here?
 
  My understanding is that RT is about how easy it is to carry out
  _syntactical_ transformations of a program that preserve its meaning.
  For example, if you can freely and naively inline a function definition
  without having to worry too much about context then your PL is deemed
  to possess lots of RT-goodness (according to FP propaganda anyway; note
  you typically can't freely inline function definitions in a procedural
  programming language because the actual arguments to the function may
  involve dastardly side effects; even with a strict function-calling
  semantics divergence will complicate matters).
 
 Ah, but we only think that because of our blinkered world-view.
 
 Another way of looking at it is that the denotational semanticists have
 created a beautiful language to express the meanings of all those ugly
 languages, and we're programming in it.
 
 A third way to look at it is that mathematicians, philosophers, and logicians 
 invented the semantics denotational semanticists have borrowed, specifically 
 because of the properties derived from the philosophical commitments they 
 made.  Computer science has a habit of taking ideas from other fields and 
 merely renaming them.  Denotational semantics is known as model theory to 
 everyone else. 
 
 Let's consider a referentially /opaque/ context: quotation marks.  We might 
 say "It is necessary that four and four are eight."  And we might also say 
 that "The number of planets is eight."  But we cannot unify the two by 
 substitution and still preserve truth-functional semantics.  We would get "It 
 is necessary that four and four are the number of planets" (via strict 
 substitution joining on 'eight') or a more idiomatic phrasing like "It is 
 necessary that the number of planets is four and four."
 
 This is a big deal in logic, because there are a lot of languages which 
 quantify over real things, like time, possibility and necessity, etc., and 
 some of these are not referentially transparent.  In particular, a model for 
 such a language will have to use frames to represent context, and there 
 typically is not a unique way to create the framing relation for a logic.
 
 ___
 Haskell-Cafe mailing list
 Haskell-Cafe@haskell.org
 http://www.haskell.org/mailman/listinfo/haskell-cafe
 
 
 
 
 
 ___
 Haskell-Cafe mailing list
 Haskell-Cafe@haskell.org
 http://www.haskell.org/mailman/listinfo/haskell-cafe

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] cool tools

2012-05-20 Thread Neil Davies
+1
On 20 May 2012, at 01:23, Simon Michael wrote:

 Well said!
 
 
 ___
 Haskell-Cafe mailing list
 Haskell-Cafe@haskell.org
 http://www.haskell.org/mailman/listinfo/haskell-cafe


___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Amazon AWS storage best to use with Haskell?

2011-11-01 Thread Neil Davies
We use all three (in various ways as they have arrived on the scene over time) 
in production systems.


On 1 Nov 2011, at 02:03, Ryan Newton wrote:

  Any example code of using hscassandra package would really help!
 
 I'll ask my student.  We may have some simple examples.
 
  Also, I have no idea as to their quality, but I was pleasantly surprised to 
  find three different Amazon-related packages on Hackage (simply by searching 
  for the word "Amazon" in the package list).  
 
http://hackage.haskell.org/package/hS3
http://hackage.haskell.org/package/hSimpleDB
http://hackage.haskell.org/package/aws
 
 It would be great to know if these work.
 
  -Ryan

 ___
 Haskell-Cafe mailing list
 Haskell-Cafe@haskell.org
 http://www.haskell.org/mailman/listinfo/haskell-cafe

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Amazon AWS storage best to use with Haskell?

2011-11-01 Thread Neil Davies
Word of caution

Understand the semantics (and cost profile) of the AWS services first - you 
can't just open an HTTP connection, dribble data out over several days, and 
hope for things to work. It is not a system that has that sort of laziness at 
its heart.

AWS doesn't supply traditional remote file store semantics - its queuing, 
simple database and object store have all been designed for large-scale systems 
being offered as a service to a (potentially hostile) large set of users - you 
can see that in the way things are designed. There are all sorts of 
(sensible from their point of view) performance-related limits and retries.

The challenge in designing nice clean layers on top of AWS is how/when to hide 
the transient/load-related failures.



Neil


On 1 Nov 2011, at 06:21, dokondr wrote:

 On Tue, Nov 1, 2011 at 5:03 AM, Ryan Newton rrnew...@gmail.com wrote:
  Any example code of using hscassandra package would really help!
 
 I'll ask my student.  We may have some simple examples.
 
  Also, I have no idea as to their quality, but I was pleasantly surprised to 
  find three different Amazon-related packages on Hackage (simply by searching 
  for the word "Amazon" in the package list).  
 
http://hackage.haskell.org/package/hS3
http://hackage.haskell.org/package/hSimpleDB
http://hackage.haskell.org/package/aws
 
 It would be great to know if these work.
 
  
  Thinking about how to implement Data.Map on top of hscassandra or any other 
  key-value storage ...
  For example, creating a new map with fromList will require storing *all* 
  (key, value) list elements in external storage at once. How to deal with 
  laziness in this case?
 
 ___
 Haskell-Cafe mailing list
 Haskell-Cafe@haskell.org
 http://www.haskell.org/mailman/listinfo/haskell-cafe

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Reading pcap

2011-10-12 Thread Neil Davies
It's the byte ordering being different between the pcap file and the machine on 
which the Haskell code is running.


On 12 Oct 2011, at 16:38, mukesh tiwari wrote:

 Hello all 
 I was going through Wireshark and read this pcap file in Wireshark. I wrote a 
 simple Haskell file which reads the pcap file and displays its contents; 
 however, it looks completely different from Wireshark. When I run this program 
 it does not produce anything, and when I press ^C (CTRL-C) it produces 
 output. 
 
 Output for the given file: 
 ^C0xd4 0xc3 0xb2 0xa1 0x02 0x00 0x04 0x00 0x00 0x00 0x00 0x00 0x00 0x00 0x00 
 0x00 0xff 0xff 0x00 0x00 0x01 0x00 0x00 0x00 0x0b 0xd4 0x9e 0x43 0x41 0x38 
 0x01 0x00 0x3c 0x00 0x00 0x00 0x3c 0x00 0x00 0x00 0x00 0x04 0x76 0xdd 0xbb 
 0x3a 0x00 0x04 0x75 0xc7 0x87 0x49 0x08 0x00 0x45 0x00 0x00 0x28 0x1a 0x6a 
 0x40 0x00 0x40 0x88 0x6f 0x71 0x8b 0x85 0xcc 0xb0 0x8b 0x85 0xcc 0xb7 0x80 
 0x00 0x04 0xd2 0x00 0x00 0x38 0x45 0x68 0x65 0x6c 0x6c 0x6f 0x20 0x77 0x6f 
 0x72 0x6c 0x64 0x00 0x00 0x00 0x00 0x00 0x00 
 
 The values displayed in Wireshark: 
   00 04 76 dd bb 3a 00 04  75 c7 87 49 08 00 45 00   ..v..:.. u..I..E.
 0010  00 28 1a 6a 40 00 40 88  6f 71 8b 85 cc b0 8b 85   .(.j@.@. oq..
 0020  cc b7 80 00 04 d2 00 00  38 45 68 65 6c 6c 6f 20    8Ehello 
 0030  77 6f 72 6c 64 0a 00 00  00 00 00 00   world... 
 
 
 
 import Data.Char
 import Data.List
 import Text.Printf
 import Control.Monad
 import System.IO
 
 fileReader :: Handle -> IO ()
 fileReader h = do
     t <- hIsEOF h
     if t
       then return ()
       else do
         tmp <- hGetLine h
         -- print each byte in hex; note hGetLine drops the newline bytes,
         -- so 0x0a never appears in the output
         forM_ tmp (printf "0x%02x " . ord)
         fileReader h
 
 main = do
     l <- openBinaryFile "udp_lite_full_coverage_0.pcap" ReadMode
     fileReader l
     print "end"
 
 I am simply trying to write a Haskell script which produces an interpretation 
 of a pcap packet the same as Wireshark does (at least for UDP packets). Could 
 someone please tell me how to approach this - a general guideline for the 
 project: what to read, which Haskell library to use, or anything else you 
 think would be useful. 
 
 Regards 
 Mukesh Tiwari
 ___
 Haskell-Cafe mailing list
 Haskell-Cafe@haskell.org
 http://www.haskell.org/mailman/listinfo/haskell-cafe

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Reading pcap

2011-10-12 Thread Neil Davies
There is a pcap library - it is a bit of overkill if all you are trying to do 
is read pcap files.

I have an (internal - could be made external to the company) library that does 
this sort of thing: it reads the pcap file using Binary and does the 
appropriate re-ordering of the bytes within the words depending on the pcap 
endianness.

Neil
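
A sketch of that byte-order handling, assuming the binary and bytestring 
packages: the pcap magic number tells you which way round the rest of the 
global header (and every packet header) must be read. The field selection 
here is illustrative.

import qualified Data.ByteString.Lazy as BL
import Data.Binary.Get
import Data.Word

data PcapHeader = PcapHeader
  { pcapMajor :: Word16, pcapMinor :: Word16, pcapSnapLen :: Word32 }
  deriving Show

getHeader :: Get PcapHeader
getHeader = do
  magic <- getWord32le
  let (w16, w32) = case magic of
        0xa1b2c3d4 -> (getWord16le, getWord32le)  -- written little-endian
        0xd4c3b2a1 -> (getWord16be, getWord32be)  -- opposite byte order
        _          -> error "not a pcap file"
  major <- w16
  minor <- w16
  skip 8             -- thiszone (4 bytes) + sigfigs (4 bytes)
  snaplen <- w32
  _network <- w32
  return (PcapHeader major minor snaplen)

main :: IO ()
main = BL.readFile "udp_lite_full_coverage_0.pcap" >>= print . runGet getHeader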

On 12 Oct 2011, at 16:38, mukesh tiwari wrote:

 Hello all 
 I was going through Wireshark and read this pcap file in Wireshark. I wrote a 
 simple Haskell file which reads the pcap file and displays its contents; 
 however, it looks completely different from Wireshark. When I run this program 
 it does not produce anything, and when I press ^C (CTRL-C) it produces 
 output. 
 
 Output for the given file: 
 ^C0xd4 0xc3 0xb2 0xa1 0x02 0x00 0x04 0x00 0x00 0x00 0x00 0x00 0x00 0x00 0x00 
 0x00 0xff 0xff 0x00 0x00 0x01 0x00 0x00 0x00 0x0b 0xd4 0x9e 0x43 0x41 0x38 
 0x01 0x00 0x3c 0x00 0x00 0x00 0x3c 0x00 0x00 0x00 0x00 0x04 0x76 0xdd 0xbb 
 0x3a 0x00 0x04 0x75 0xc7 0x87 0x49 0x08 0x00 0x45 0x00 0x00 0x28 0x1a 0x6a 
 0x40 0x00 0x40 0x88 0x6f 0x71 0x8b 0x85 0xcc 0xb0 0x8b 0x85 0xcc 0xb7 0x80 
 0x00 0x04 0xd2 0x00 0x00 0x38 0x45 0x68 0x65 0x6c 0x6c 0x6f 0x20 0x77 0x6f 
 0x72 0x6c 0x64 0x00 0x00 0x00 0x00 0x00 0x00 
 
 The values displayed in Wireshark: 
   00 04 76 dd bb 3a 00 04  75 c7 87 49 08 00 45 00   ..v..:.. u..I..E.
 0010  00 28 1a 6a 40 00 40 88  6f 71 8b 85 cc b0 8b 85   .(.j@.@. oq..
 0020  cc b7 80 00 04 d2 00 00  38 45 68 65 6c 6c 6f 20    8Ehello 
 0030  77 6f 72 6c 64 0a 00 00  00 00 00 00   world... 
 
 
 
 import Data.Char
 import Data.List
 import Text.Printf
 import Control.Monad
 import System.IO
 
 fileReader :: Handle -> IO ()
 fileReader h = do
     t <- hIsEOF h
     if t
       then return ()
       else do
         tmp <- hGetLine h
         -- print each byte in hex; note hGetLine drops the newline bytes,
         -- so 0x0a never appears in the output
         forM_ tmp (printf "0x%02x " . ord)
         fileReader h
 
 main = do
     l <- openBinaryFile "udp_lite_full_coverage_0.pcap" ReadMode
     fileReader l
     print "end"
 
 I am simply trying to write a Haskell script which produces an interpretation 
 of a pcap packet the same as Wireshark does (at least for UDP packets). Could 
 someone please tell me how to approach this - a general guideline for the 
 project: what to read, which Haskell library to use, or anything else you 
 think would be useful. 
 
 Regards 
 Mukesh Tiwari
 ___
 Haskell-Cafe mailing list
 Haskell-Cafe@haskell.org
 http://www.haskell.org/mailman/listinfo/haskell-cafe

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] GHC/Cabal on AFS

2011-09-05 Thread Neil Davies
Yep

We get this as well - as you say, once it is in the cache it works fine

Neil
On 5 Sep 2011, at 18:06, Tristan Ravitch wrote:

 I have the Haskell Platform (and my home directory with my
 cabal-installed packages) installed on an AFS (a network filesystem)
 volume and have been noticing a strange issue.  Whenever I install a
 package using cabal-install and it gets to a phase of the build where
 it needs to load a bunch of packages, the build fails without a useful
 error.  Example:
 
 
 cabal-dev install yesod
 Resolving dependencies...
 Configuring yesod-core-0.9.1.1...
 Preprocessing library yesod-core-0.9.1.1...
 Preprocessing test suites for yesod-core-0.9.1.1...
 Building yesod-core-0.9.1.1...
 [ 1 of 15] Compiling Yesod.Internal.Session (
 Yesod/Internal/Session.hs, dist/build/Yesod/Internal/Session.o )
 [ 2 of 15] Compiling Paths_yesod_core 
 (dist/build/autogen/Paths_yesod_core.hs, dist/build/Paths_yesod_core.o)
 [ 3 of 15] Compiling Yesod.Logger 
 (Yesod/Logger.hs,dist/build/Yesod/Logger.o )
 [ 4 of 15] Compiling 
 Yesod.Internal.RouteParsing(Yesod/Internal/RouteParsing.hs,dist/build/Yesod/Internal/RouteParsing.o)
 [ 5 of 15] Compiling Yesod.Internal   
 (Yesod/Internal.hs,dist/build/Yesod/Internal.o )
 Loading package ghc-prim ... linking ... done.
 Loading package integer-gmp ... linking ... done.
 Loading package base ... linking ... done.
 Loading package bytestring-0.9.1.10 ... linking ... done.
 Loading package array-0.3.0.2 ... linking ... done.
 Loading package containers-0.4.0.0 ... linking ... done.
 Loading package deepseq-1.1.0.2 ... linking ... done.
 Loading package text-0.11.0.5 ... cabal: Error: some packages failed to 
 install:
 yesod-0.9.1.1 depends on yesod-core-0.9.1.1 which failed to install.
 yesod-auth-0.7.1 depends on yesod-core-0.9.1.1 which failed to install.
 yesod-core-0.9.1.1 failed during the building phase. The exception
 was:
 ExitFailure 7
 yesod-form-0.3.1 depends on yesod-core-0.9.1.1 which failed to install.
 yesod-json-0.2.1 depends on yesod-core-0.9.1.1 which failed to install.
 yesod-persistent-0.2.1 depends on yesod-core-0.9.1.1 which failed to install.
 
 
 
 If I keep re-running it, it will eventually succeed.  It also always
 makes forward progress (the next attempt will get past text and a few
 more packages).  It seems to be related to the state of the AFS cache;
 if all of the required packages are in the local AFS cache it usually
 just works.  If the cache has just been flushed (due to other FS
 operations), this failure pretty much always shows up.
 
 
 Has anyone else experienced anything like this?  Alternatively, does
 anyone have ideas on getting a more useful error message/tracking it
 down?  I didn't see any relevant bugs filed yet, but I wanted to get
 more information before adding a report.
 ___
 Haskell-Cafe mailing list
 Haskell-Cafe@haskell.org
 http://www.haskell.org/mailman/listinfo/haskell-cafe


___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] Replacing stdin from within Haskell

2011-06-09 Thread Neil Davies
Hi

Anyone out there got an elegant solution to being able to fork a Haskell thread 
and replace its 'stdin'?

Why do I want this? Well, I'm using the pcap library and I want to uncompress 
data to feed into 'openOffline' (which will take "-" to designate reading from 
stdin). Yes, I can create a separate command and use the stuff in 
System.Process, or uncompress it into a temporary file - but that all seems 
inelegant.

Suggestions?

Cheers

Neil
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Replacing stdin from within Haskell

2011-06-09 Thread Neil Davies
Thanks

That is what I thought. I'll stick with what I'm doing at present, which is to 
use temporary files - I don't want to get into separate processes which I then 
have to co-ordinate.

Cheers

Neil

On 9 Jun 2011, at 17:01, Brandon Allbery wrote:

 On Thu, Jun 9, 2011 at 07:40, Neil Davies semanticphilosop...@gmail.com 
 wrote:
 Anyone out there got an elegant solution to being able to fork a haskell 
 thread and replace its 'stdin' ?
 
 File descriptors/file handles are per process, not per thread, on
 (almost?) every OS ghc supports.  You'll need to create a full
 subprocess.
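
A sketch of that subprocess route, assuming the process, bytestring and zlib 
packages; the child command (tcpdump reading pcap from stdin) and the file 
name are only examples:

import qualified Codec.Compression.GZip as GZip
import qualified Data.ByteString.Lazy as BL
import System.IO (hClose)
import System.Process

main :: IO ()
main = do
  compressed <- BL.readFile "capture.pcap.gz"
  (Just toChild, _, _, ph) <- createProcess
      (proc "tcpdump" ["-r", "-"]) { std_in = CreatePipe }
  BL.hPut toChild (GZip.decompress compressed)  -- becomes the child's stdin
  hClose toChild
  _ <- waitForProcess ph
  return ()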


___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] why are trading/banking industries seriously adopting FPLs???

2011-03-25 Thread Neil Davies

Trustworthiness

It provides the means of constructing systems that can be reasoned 
about, in which the risks of mistakes can be assessed, and in which 
concurrency can be exploited without compromising those properties.


I once sat on a plane with a guy who ran a company that made software 
to move money around markets; he was really pleased that they could 
handle up to 60 transactions a second. They would take a money movement 
(say $5B), split it into 'maximum risk units' and engage in the 
conversion between the two currencies. Given the nature of the 
distributed transaction and the way in which the commitment process 
operated, what was *really* important was managing the overall risk of 
currency fluctuation in the partially completed distributed 
transactions. His typical employee at the time (this was about 8-10 
years ago) was a good PhD in quantum chromodynamics - they had the 
ability to think about the 'all possible futures' that the code had to 
handle. (Yes, I did my FP evangelisation bit.)


That company, with today's Haskell, could start from a simple, 
obviously correct description of the issues and evolve a solution - 
knowing that, with equational reasoning, referential transparency and 
other properties, the transformations were 'safe'. That doesn't mean 
you don't test - it does mean you can do more with your good staff.


I've used some of the techniques that are in the Haskell libraries 
(esp. iteratees and DSLs) in developing s/w for intrusion-detection 
companies in the past - granted, they were not actually running GHC 
code, but specialised C coming out of a DSL.


Neil


On 25 Mar 2011, at 07:08, Vasili I. Galchin wrote:


Hello,

 I am very curious about the readiness of the trading and banking 
industries to adopt FPLs like Haskell: http://talenteze.catsone.com/careers/index.php?m=portal&a=details&jobOrderID=466095 
I currently work in computer security (intrusion 
detection). My colleagues are totally ignorant concerning the 
foundations/motivations of FPLs. (Ironically, http://www.galois.com 
participates in the computer security arena!) Why are trading and 
banking diving into FPLs?


Regards,

Vasili
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Anyone recommend a VPS?

2011-03-19 Thread Neil Davies

Hi

We run the whole of our distributed file system (AFS) on a single 
AWS micro instance with Linux containers inside.

We then use other instances for various things as/when needed (using 
Kerberos to distribute the management and control to the appropriate 
people). For example, we have an EC2 machine that we power up and down 
as needed (we still have to pay for the filestore when it is not being 
used - but that is very small) for GHC - we used it this morning to 
upgrade our shared (via AFS) development environment to 7.0.2. All our 
other systems read off that (we are a completely distributed operation).

They've been great - we had a physical processor go bad once, and they 
also had a h/w problem on one of the machines once; that is, in the 
last 2 years or so, about 6 operational system years.


Neil





On 19 Mar 2011, at 11:36, Pasqualino Titto Assini wrote:


If you need to run your server continuously you might be better off
with a cheap dedicated server.

To run my quid2.org site, a rather complex setup with a web server and
a number of background haskell processes, I use a server from the
French provider OVH/Kimsufi (http://www.kimsufi.co.uk/  and
http://www.ovh.co.uk/products/dedicated_offers.xml ,main site is
ovh.com).

You can get a decent box for 15 euro a month and a hell of a machine
for 50/60 euros.

They also have some Cloud/VPS options, that I have not used.


Does anyone have first-hand experience with Amazon EC2?

They also look very tempting.


Best,

   titto


On 19 March 2011 10:12, Lyndon Maydwell maydw...@gmail.com wrote:

Does anyone have any Binaries that are built to run on EC2?

That would be super!

On Tue, Feb 2, 2010 at 1:11 AM, Jason Dusek jason.du...@gmail.com  
wrote:

2010/01/31 Marc Weber marco-owe...@gmx.de:

If all you want is standard Debian or such it doesn't matter.
However I tried installing NixOS Linux and I had lots of
trouble until switching to Linode. NixOS was up and running
within 30min then..


 How did you get NixOS on your Linode system? They don't seem to
 offer it, last I checked.

 I'm looking in to doing this with PRGMR, which has pretty good
 pricing though it's not nearly as featureful as Linode.

--
Jason Dusek
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe



___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe





--
Dr. Pasqualino Titto Assini
http://quid2.org/

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe



___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] DSL for task dependencies

2011-03-18 Thread Neil Davies
I've always liked the semantics of UNITY - they seem the right sort of 
thing to construct such a system on - and they also permit concepts 
such as partial completion and recovery from failure. I used to use 
this as one of the concurrency models I taught - see 
http://www.amazon.com/Parallel-Program-Design-Mani-Chandy/dp/0201058669 .

It is one of those Friday-afternoon thoughts about constructing 
distributed, fault-tolerant systems that have formal semantics and can 
have a rich set of pre/post conditions, so that the 'average Joe' could 
write script-lets in it and it could take over the monitoring and 
(normal) fault management of some of our distributed systems.



On 18 Mar 2011, at 04:43, Conal Elliott wrote:

Speaking of which, for a while now I've been interested in designs 
of make-like systems that have precise & simple (denotational) 
semantics with pleasant properties. What Peter Landin called 
"denotative" (as opposed to functional-looking but semantically 
ill-defined or intractable).


Norman Ramsey (cc'd) pointed me to the Vesta system from DEC SRC. If  
anyone knows of other related experiments, I'd appreciate hearing.


  - Conal

On Thu, Mar 17, 2011 at 1:31 PM, David Peixotto d...@rice.edu wrote:
Hi Serge,

You may be thinking of the Shake DSL presented by Neil Mitchell at  
last years Haskell Implementers Workshop. Slides and video are  
available from: http://haskell.org/haskellwiki/HaskellImplementorsWorkshop/2010


Max Bolingbroke has an open source implementation available here: 
https://github.com/batterseapower/openshake

Hope that helps.

-David

On Mar 17, 2011, at 3:00 PM, Serge Le Huitouze wrote:

 Hi Haskellers!

 I think I remember reading a blog post or web page describing a
 EDSL to describe tasks and their dependencies a la make.

 Can anyone point me to such published material?

 Thanks in advance.

 --serge
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] ANN: HaLVM 1.0: the Haskell Lightweight Virtual Machine

2010-12-01 Thread Neil Davies
Yes, thanks for sharing
.. 

 Congrats to Galois for open sourcing this. Now let the collaboration begin.
 
 Would it be possible to run HaLVM on Amazon EC2?

They do say that you can boot any image from an EBS volume - if you start 
playing with this I would be interested to hear of any (positive or negative) 
progress...

 Jason M. Knight
 Ph.D. Electrical Engineering '13
 Texas A&M University
 Cell: 512-814-8101
 ___
 Haskell-Cafe mailing list
 Haskell-Cafe@haskell.org
 http://www.haskell.org/mailman/listinfo/haskell-cafe

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] ANN: network-2.2.3, merger with network-bytestring

2010-11-01 Thread Neil Davies

Chris

What you are observing is the effect of the delay on the operation of 
the TCP stacks and the way your 'sleep' works.

You are introducing delay (the sleep time is a 'minimum', and then at 
least one o/s jiffy) - that represents one limit. The other limit is 
the delay/bandwidth product of the connection; the hiding of this 
effect is dependent on the window size negotiated.

How accurate do you need this control of throughput to be? To get 
really accurate rates we had to write our own specialist rate-regulated 
thread library, which accounts for any scheduling delay and can even 
spin if you want low delay variance in the packet dispatch times.


Neil
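
As an illustration of accounting for scheduling delay, a sketch (assuming the 
clock package) of a pacing loop that computes each sleep against an absolute 
schedule, so scheduler overshoot does not accumulate:

import Control.Concurrent (threadDelay)
import Control.Monad (forM_)
import System.Clock (Clock (Monotonic), getTime, toNanoSecs)

-- Run the actions at a fixed interval (in nanoseconds), absorbing jitter.
paceAt :: Integer -> [IO ()] -> IO ()
paceAt interval actions = do
  start <- toNanoSecs <$> getTime Monotonic
  forM_ (zip [1 ..] actions) $ \(i, act) -> do
    act
    now <- toNanoSecs <$> getTime Monotonic
    let target = start + i * interval              -- absolute deadline
    threadDelay (fromIntegral (max 0 (target - now) `div` 1000))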

On 31 Oct 2010, at 17:56, Christopher Done wrote:


On 31 October 2010 16:14, Johan Tibell johan.tib...@gmail.com wrote:

This version marks the end of the network-bytestring package, which
has now been merged into the network package. This means that
efficient and correct networking using ByteStrings is available as
part of the standard network package.

As part of the merger, two new modules have been added:
Network.Socket.ByteString and Network.Socket.ByteString.Lazy


Huzzah! I updated my little throttler program to use ByteString.[1]

One thing I'm curious about, that maybe you know off the top of your
head: when I'm limiting the speed, I was merely recv'ing and send'ing
at 1024 bytes a pop[2]; however, when I did this at, say, ~500KB/s,
Firefox is really slow at receiving it, whereas when I set it to 4096
it's much faster/more accurate to the speed I intend. Chrome doesn't
seem to care.

I think the difference is that Firefox is single-threaded
(select/poll-based?) and has to switch between various jobs while
receiving every tiny block that I'm sending. Chrome probably has the
receiver in a separate process.

So it seems like, short of balancing the packet size (at powers of 2)
and the delay to get some ideal throughput, it's easiest and probably
realistically equivalent to set it to 4096 and just rely on an
accurate delay? What do you think? Just curious. This is something I
think I'll look into in the future; I really don't have a good feel
for typical network latency and throughput. It's not a big deal right
now, as 56k-slow is all I need to test my web app.

[1]: 
http://github.com/chrisdone/throttle/commit/97e03bfc64adc074c9d1f19c2605cb496576c593
[2]: 
http://github.com/chrisdone/throttle/commit/30dc1d970a7c0d43c1b6dc33da9deecf30808114
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] HDBC/ODBC/MySQL - working configuration?

2010-10-20 Thread Neil Davies
Hi

I'm having some difficulty getting the above combination to work. I've 
successfully used ODBC against MS, but the MySQL stuff is creating some really 
interesting error conditions in the HDBC-ODBC module.

My first thought is that it must be me, so does anyone out there have this 
combination working (with or without HaskellDB)? If so, can you tell me which 
versions of things you are using and which ODBC drivers (and configurations) 
you are using?

Cheers

Neil
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] HaskellDB/ODBC/MySQL issue

2010-10-19 Thread Neil Davies
Hi

I can't seem to get the combination of HaskellDB/ODBC/MySQL to even get off the 
ground; example:

import OmlqDBConnectData (connect'options)
import Database.HaskellDB
import Database.HaskellDB.HDBC.ODBC (odbcConnect)
import Database.HaskellDB.Sql.MySQL (generator)

main = odbcConnect generator connect'options 
  $ \db -> tables db >>= print

gives me 

Prelude.(!!): index too large

Suggestions?

Cheers

Neil
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Unified Haskell login

2010-09-17 Thread Neil Davies

Why not use Kerberos?

We find it works for us; it integrates with the web (natively or via 
WebAuth), remote command execution (remctl) and ssh - it's widely used 
and scales brilliantly.


Neil
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] Linking issues with HaskellDB/HDBC/ODBC

2010-09-02 Thread Neil Davies
Hi

Anyone got any hints as to why this linkage error is occurring (GHC 6.12.3, Mac 
OSX 10.6 (and 10.5 has same issue)):

cabal install haskelldb-hdbc-odbc --enable-documentation --root-cmd=sudo 
--global
Resolving dependencies...
Configuring haskelldb-hdbc-odbc-0.13...
Preprocessing library haskelldb-hdbc-odbc-0.13...
Preprocessing executables for haskelldb-hdbc-odbc-0.13...
Building haskelldb-hdbc-odbc-0.13...
[1 of 1] Compiling Database.HaskellDB.HDBC.ODBC ( 
Database/HaskellDB/HDBC/ODBC.hs, dist/build/Database/HaskellDB/HDBC/ODBC.o )
Registering haskelldb-hdbc-odbc-0.13...
[1 of 2] Compiling Database.HaskellDB.HDBC.ODBC ( 
Database/HaskellDB/HDBC/ODBC.hs, 
dist/build/DBDirect-hdbc-odbc/DBDirect-hdbc-odbc-tmp/Database/HaskellDB/HDBC/ODBC.o
 )
[2 of 2] Compiling Main ( DBDirect.hs, 
dist/build/DBDirect-hdbc-odbc/DBDirect-hdbc-odbc-tmp/Main.o )
Linking dist/build/DBDirect-hdbc-odbc/DBDirect-hdbc-odbc ...
Undefined symbols:
  _CFBundleCopyResourceURL, referenced from:
  _SQLDriverConnect_Internal in libodbc.a(connect.o)
  ___CFConstantStringClassReference, referenced from:
  cfstring=org.iodbc.core in libodbc.a(connect.o)
  cfstring=iODBCadm.bundle in libodbc.a(connect.o)
  _CFURLCopyFileSystemPath, referenced from:
  _SQLDriverConnect_Internal in libodbc.a(connect.o)
  _CFRelease, referenced from:
  _SQLDriverConnect_Internal in libodbc.a(connect.o)
  _SQLDriverConnect_Internal in libodbc.a(connect.o)
  _CFStringGetCString, referenced from:
  _SQLDriverConnect_Internal in libodbc.a(connect.o)
  _CFBundleGetBundleWithIdentifier, referenced from:
  _SQLDriverConnect_Internal in libodbc.a(connect.o)
ld: symbol(s) not found
collect2: ld returned 1 exit status
cabal: Error: some packages failed to install:
haskelldb-hdbc-odbc-0.13 failed during the building phase. The exception was:
ExitFailure 1

Hints gratefully received!...

Cheers

Neil
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Are there any female Haskellers?

2010-03-27 Thread Neil Davies
If you are looking for a real first - http://en.wikipedia.org/wiki/Ada_Lovelace 
- she is even credited with writing the first algorithm intended for machine 
execution.



On 27 Mar 2010, at 20:06, John Van Enk wrote:


http://en.wikipedia.org/wiki/Grace_Hopper

A heck of a lady.

On Sat, Mar 27, 2010 at 12:51 PM, Andrew Coppin andrewcop...@btinternet.com 
 wrote:

Ozgur Akgun wrote:
Nevertheless, I guess you're right. There are very few females in 
most of the CS topics, and Haskell is no different.


This is my experience too. Although note that apparently the world's 
very first computer programmer was a woman...



___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Re: Real-time garbage collection for Haskell

2010-03-03 Thread Neil Davies


On 2 Mar 2010, at 21:38, Simon Marlow wrote:


On 02/03/10 20:37, Luke Palmer wrote:
On Tue, Mar 2, 2010 at 7:17 AM, Simon Marlowmarlo...@gmail.com   
wrote:

For games,
though, we have a very good point that occurs regularly where we  
know
that all/most short-lived objects will no longer be referenced -  
at the

start of a fresh frame.


System.Mem.performGC is your friend, but if you're unlucky it  
might do a

major GC and then you'll get more pause than you bargained for.


Some fine-grained control might be nice here.  Eg. I could do a major
GC as a player is opening a menu, on a loading screen, when the game
is paused, or some other key points, and it might still be annoying,
but at least it wouldn't interfere with gameplay.  There is of course
the question of what happens if one of these key points doesn't  
happen

when we need to do an allocation, but... oh well.  Perhaps that could
be mitigated by saying I would rather you allocate than major GC
right now.  Are any of these options impossible, or be unreasonably
difficult to implement (I don't suspect so)?


Actually that's one thing we can do relatively easily, i.e. defer  
major GC for a while.  Due to the way GHC has a two-layer memory  
manager, the heap is a list of discontiguous blocks, so we can  
always allocate some more memory.


So it would be pretty easy to provide something like

 disableMajorGC, enableMajorGC :: IO ()

Of course leaving it disabled too long could be bad, but that's your  
responsibility.


Oh, I just checked, and System.Mem.performGC actually performs a 
major GC; here's its implementation:

foreign import ccall {-safe-} "performMajorGC" performGC :: IO ()

To perform a minor GC (or possibly a major GC if one is due), you 
want this:

foreign import ccall {-safe-} "performGC" performMinorGC :: IO ()

Cheers,
Simon
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Is there a similar set of runes for seeing how much mutation has 
occurred, how much was live at the last GC, and so on?


Neil
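
On later GHCs those counters are exposed through GHC.Stats; a sketch, assuming 
GHC 8.2 or newer and a program run with +RTS -T:

import GHC.Stats (RTSStats (..), getRTSStats)

main :: IO ()
main = do
  s <- getRTSStats
  print (allocated_bytes s)   -- total allocation, a proxy for mutation
  print (max_live_bytes s)    -- peak live data seen at a major GC
  print (gcs s, major_gcs s)  -- total vs major collection counts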

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Real-time garbage collection for Haskell

2010-03-03 Thread Neil Davies

Sorry, no.

We wanted a basic bound on the jitter - the application is not one 
that creates much (if any) long-lived heap.

Having just seen Simon's email on the fact that performGC forces a 
major GC, I think there is some new mileage here in making the 
speculative GCs minor ones.

More control needs more instrumentation of how much mutation is 
occurring, and ways of estimating how much of that is short- and 
long-lived. I know that past history is not necessarily a good 
indicator of future actions, but visibility of the counters that are 
being kept would help.


Neil

On 3 Mar 2010, at 00:00, Jason Dusek wrote:


2010/02/28 Neil Davies semanticphilosop...@googlemail.com:
I've never observed ones that size. I have an application that runs  
in 'rate
equivalent real-time' (i.e. there may be some jitter in the exact  
time of
events but it does not accumulate). It does have some visibility of  
likely
time of future events and uses that to perform some speculative  
garbage

collection.


 Do you have information on how it behaves without speculative
 GC?

--
Jason Dusek


___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Real-time garbage collection for Haskell

2010-03-01 Thread Neil Davies
I don't know that hanging your hat on the deterministic coat hook is 
the right thing to do.

The way that I've always looked at this is more probabilistic - you 
want the result to arrive within a certain time frame for a certain 
operation with a high probability. There is always the probability 
that the h/w will fail anyway - you could even reason that the 
software taking too long is just a transient fault that clears. 
Random (non-correlated, preferably Bernoulli) failures are OK; 
non-deterministic ones aren't.

This probabilistic approach - a low probability of being in the tail 
of the timing distribution - would give a lot more flexibility in any 
form of (say, incremental) GC: you may not be able to bound the 
incremental steps absolutely, but a strong probabilistic bound might 
well be more achievable.


Neil


On 1 Mar 2010, at 21:06, Job Vranish wrote:




On Mon, Mar 1, 2010 at 2:37 PM, Thomas Schilling nomin...@googlemail.com 
 wrote:

On 1 March 2010 16:27, Job Vranish job.vran...@gmail.com wrote:
 My current area of work is on real-time embedded software programming 
 for avionics systems. We do most of our coding in Ada, but I've been 
 dreaming of using Haskell instead.

A possible workaround would be to sprinkle lots of 'rnf's around your
code to make sure you don't build up a thunk or two that will delay
you later.  And if you do this, aren't you essentially programming in
a strict functional language (like SML or OCaml)?  By careful
profiling and auditing you can probably rule out most of the
potential bad cases, so it can be acceptable for a soft real-time
system (Galois did something like this, I believe).  But for avionics
systems you probably want more assurances than that, don't you?

Yes and no.
It's true that lazy evaluation makes reasoning about timings a bit 
more difficult (and might not be usable in very time-critical 
scenarios), but it still has well-defined, deterministic behavior.

It's the referential transparency that saves us here. If you run a 
lazy function with the same objects (in the same evaluation state) 
it should _theoretically_ take the same amount of time to run. All 
of our top-level inputs will be strict, and if we keep our 
frame-to-frame state strict, our variances in runtimes, given the 
same inputs, should be quite low modulo the GC.

Even our current code can take significantly different amounts of 
time to compute things depending on what you're doing. Some 
waypoints take longer to look up from the database than others. 
Predicting the time to arrival can take significantly longer/shorter 
depending on seemingly trivial parameters, etc...

It matters less that code always takes the same amount of time to 
run (though it needs to always be less than the frame time) and 
more that it always takes the same amount of time to run given 
the same initial conditions.


- Job
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Real-time garbage collection for Haskell

2010-02-28 Thread Neil Davies

My experience agrees with Pavel's.

I've never observed ones that size. I have an application that runs in 
'rate-equivalent real-time' (i.e. there may be some jitter in the 
exact time of events but it does not accumulate). It does have some 
visibility of the likely time of future events and uses that to 
perform some speculative garbage collection. GC is pretty short and 
I've not seen an effect > 1ms in those runs (all the usual caveats 
apply - my programs are not your programs etc.).



Neil

On 28 Feb 2010, at 09:06, Pavel Perikov wrote:

Did you really see 100ms pauses?! I never did extensive research on 
this, but my numbers are rather in the microseconds range (below 1ms). 
What causes such a long garbage collection? Lots of allocated and 
long-living objects?


Pavel.

On 28.02.2010, at 8:20, Luke Palmer wrote:


I have seen some proposals around here for SoC projects and other
things to try to improve the latency of GHC's garbage collector.  I'm
currently developing a game in Haskell, and even 100ms pauses are
unacceptable for a real-time game.  I'm calling out to people who  
have

seen or made such proposals, because I would be willing to contribute
funding and/or mentor a project that would contribute to this goal.
Also any ideas for reducing this latency in other ways would be very
appreciated.

Thanks,
Luke
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] writing graphs with do-notation

2009-12-13 Thread Neil Davies

Neat.

Surely there is somewhere on the Haskell wiki where something like 
this should live?


Neil

On 12 Dec 2009, at 21:00, Soenke Hahn wrote:


Hi!

Some time ago, I needed to write down graphs in Haskell. I wanted to 
be able to write them down without too much noise, to make them easily 
maintainable. I came up with a way to define graphs using monads and 
the do notation. I thought this might be interesting to someone, so I 
wrote a small script to illustrate the idea. Here's an example:

example :: Graph String
example = buildGraph $ do
    a <- mkNode "A" []
    b <- mkNode "B" [a]
    mkNode "C" [a, b]

In this graph there are three nodes, identified by ["A", "B", "C"], 
and three edges ([("A", "B"), ("A", "C"), ("B", "C")]). Think of the 
variables a and b as outputs of the nodes A and B. Note that each node 
identifier needs to be mentioned only once. Also, the definition of 
edges (references to other nodes via the outputs) can be checked at 
compile time.

The attachment is a little script that defines a Graph type (nothing
elaborate), the buildGraph function and an example graph that is a 
little more complex than the above. The main function of the script 
prints the example graph to stdout to be read by dot (or similar).

By the way, it is possible to define cyclic graphs using mdo 
(RecursiveDo).


I haven't come across anything similar, so I thought I'd share it. 
What do you think?

Sönke
Graph-Monads.hs
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Optimization with Strings ?

2009-12-04 Thread Neil Davies

Or maybe it should be renamed

  
proofObligationsOnUseNeedToBeSuppliedBySuitablyQualifiedIndividualPerformIO


which is what it really is - unsafe in the wrong hands

Neil

On 4 Dec 2009, at 08:57, Colin Adams wrote:


Please help me understand the holes in Haskell's type system.


Not really wanting to support the troll, but ...

unsafePerformIO?

Can't it be removed?
--
Colin Adams
Preston,
Lancashire,
ENGLAND
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Optimization with Strings ?

2009-12-04 Thread Neil Davies

Ah,

but the type system is the proof - it doesn't permit you to construct 
things that are 'unsafe'; the whole way the language (and its 
implementation) is constructed does that for you.

The issue is that, very occasionally, you the programmer (usually for 
reasons of performance - runtime or code lines) want something 
slightly out of the ordinary. This is the escape mechanism.

To quote the late, great DNA: it is all about rigidly defined areas 
of doubt and uncertainty. One of the arts of programming is to push 
all the nasty doubt and uncertainty into a small corner where you can 
beat it to death with a large dose of logic, proof and (occasional) 
handwaving...

Now, before you start talking about 'surely the type system should be 
complete', I refer you to http://en.wikipedia.org/wiki/Gödel%27s_incompleteness_theorem

Take comfort in that, I do; it means that us humans still have a 
role...


Neil

On 4 Dec 2009, at 09:16, Colin Adams wrote:

But the type system doesn't insist on such a proof - so is it not a  
hole?


2009/12/4 Neil Davies semanticphilosop...@googlemail.com:

Or maybe it should be renamed

  
proofObligationsOnUseNeedToBeSuppliedBySuitablyQualifiedIndividualPerformIO


which is what it really is - unsafe in the wrong hands

Neil

On 4 Dec 2009, at 08:57, Colin Adams wrote:


Please help me understand the holes in Haskell's type system.


Not really wanting to support the troll, but ...

unsafePerformIO?

Can't it be removed?
--
Colin Adams
Preston,
Lancashire,
ENGLAND
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


--
Colin Adams
Preston,
Lancashire,
ENGLAND


___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Re: Timing and Atom

2009-12-01 Thread Neil Davies

On 1 Dec 2009, at 05:44, Tom Hawkins wrote:




I never considered running Atom generated functions in an asynchronous
loop until you posted your Atomic Fibonacci Server example
(http://leepike.wordpress.com/2009/05/05/an-atomic-fibonacci-server-exploring-the-atom-haskell-dsl/ 
).

I'm curious how such a system would behave if it referenced a
hardware clock to enable and disable rules.  I can see how this could
be problematic for hard real-time, safety critical stuff.  But there
is a wide field of applications where this approach would work just
fine -- not having to worry about meeting a hard loop time would
certainly simplify the design.  Could some form of static analysis be
applied to provide the timing guarantees?



Yes this is possible. I work with Stochastic Process Algebras that
have these properties. With them it is possible to get strong
probabilistic guarantees and, when combined with a carefully chosen
priority and preemption model, you can have both 'hard' (i.e. response
within a given time with probability 1) and softer guarantees (yet
with a known time CDF).

The really nice property is that, given that your problem will permit  
it, you can even create systems that
can go into saturation (arrival rate exceeds processing rate) and  
define how they will gracefully degrade.


We use this for constructing and analysing large scale distributed  
systems and reasoning about,
and controlling, their emergent properties including under overload  
and when the communications is lossy.


Neil

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Conversion from string to array

2009-06-06 Thread Neil Davies

Look at Data.Binary (binary package)

It will marshall and unmarshall data types for you. If you don't like  
its binary encoding you can dive in there and use the same principles
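
For instance, a minimal round-trip sketch (my example, assuming the
binary package's Data.Binary API):

   import Data.Binary (encode, decode)
   import qualified Data.ByteString.Lazy as BL
   import Data.Word (Word8)

   main :: IO ()
   main = do
      let bytes = [0xde, 0xad, 0xbe, 0xef] :: [Word8]
          wire  = encode bytes          -- lazy ByteString, ready for a socket
      print (BL.unpack wire)            -- the encoded bytes
      print (decode wire :: [Word8])    -- recovers the original list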


Cheers

Neil

On 6 Jun 2009, at 07:13, John Ky wrote:


Hi Haskell Cafe,

I'm trying to send stuff over UDP.  To do that, I've written a test
program that sends strings across.  That was fine, but I wanted to
send binary data too.  So I tried that only to find I am having
difficulty putting together some binary data.  For example, take the
fromHex function in the following code.  It is supposed to convert
from a hexadecimal string to a list of bytes, but I am having trouble
coercing the integer types to the size I want.


Is this the right way to do it?

Cheers,

-John

  import Data.Bits
  import Data.Char
  import Data.Word
  import System.Environment
  import XStream.Tsmr.Client

  data CmdLineOptions = CmdLineOptions
     { optionHelp :: Bool
     , optionVersion :: Bool
     , optionPort :: String
     , optionMessage :: String
     , optionInvalids :: [String]
     }
     deriving (Eq, Show)

  initCmdLineOptions = CmdLineOptions
     { optionHelp = False
     , optionVersion = False
     , optionPort = "1234"
     , optionMessage = ""
     , optionInvalids = []
     }

  parseArgs :: [String] -> CmdLineOptions
  parseArgs [] = initCmdLineOptions
  parseArgs ("--port":port:xs) = (parseArgs xs) { optionPort = port }
  parseArgs ("--help":xs) = (parseArgs xs) { optionHelp = True }
  parseArgs ("--version":xs) = (parseArgs xs) { optionVersion = True }
  parseArgs (('-':opt):xs) = let option = (parseArgs xs) in
     option { optionInvalids = ('-':opt):optionInvalids option }
  parseArgs (message:xs) = (parseArgs xs) { optionMessage = message }

  printUsage = do
     putStrLn "Usage: udp-server.lhs [options] message"
     putStrLn ""
     putStrLn "Options:"
     putStrLn "  --help      Get help information."
     putStrLn "  --version   Get version information."
     putStrLn "  --port n    The port number to listen on."
     putStrLn ""
     putStrLn "Message:"
     putStrLn "  The message to send."
     putStrLn ""

  printVersion = do
     putStrLn "Version."

  fromHex :: String -> [Word8]
  fromHex [] = []
  fromHex (u:l:xs) = (hexU .|. hexL):fromHex xs
     where
        hexU = (fromInteger $ hexValue u) :: Word8
        hexL = (fromInteger $ hexValue l) :: Int
        hexValue c
           | '0' <= c && c <= '9' = ord c - ord '0'
           | 'a' <= c && c <= 'z' = ord c - ord 'a' + 10

  run port message = do
     h <- openlog "localhost" port "udp-client.lhs"
     syslog h (fromHex message)

  main = do
     args <- getArgs
     let options = parseArgs args
     let port = optionPort options
     let message = optionMessage options
     if optionHelp options
        then printUsage
        else if optionVersion options
           then printVersion
           else do
              putStrLn ("Starting UDP listener on port: " ++ port)
              run port message





___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe




Re: [Haskell-cafe] Simulation and GHC Thread Scheduling

2009-05-09 Thread Neil Davies

Thomas

You can build your own scheduler very easily using what is already
there.

As with any simulation the two things that you need to capture are
dependency and resource contention. Haskell does both beautifully -
the dependency stuff and the resource contention. Using STM you can
even get nice compositional properties.

All you really have to take care of is how time progresses (if that is
the sort of simulation you are into).
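
A toy discrete-event core in that spirit (a sketch of mine, not from
the original message): simulated time only advances when the earliest
pending event is dispatched, so no wall clock or forkIO is involved.

   import Data.List (insertBy)
   import Data.Ord (comparing)

   type Time = Double

   -- dispatch events in time order; a handler may schedule more events
   simulate :: [(Time, a)] -> (Time -> a -> [(Time, a)]) -> [(Time, a)]
   simulate [] _ = []
   simulate ((t, e) : rest) handle =
      (t, e) : simulate agenda' handle
      where
         agenda' = foldr (insertBy (comparing fst)) rest (handle t e)

   -- example: a node that re-arms itself every 5 time units
   demo :: [(Time, String)]
   demo = take 4 $ simulate [(0, "tick")] (\t e -> [(t + 5, e)])
   -- [(0.0,"tick"),(5.0,"tick"),(10.0,"tick"),(15.0,"tick")]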

Yes, refactor the code, choose an appropriate (monadic) framework to
run it in and build a consistent logging/monitoring/measuring model
into it - then things work great.

Cheers

Neil

On 9 May 2009, at 00:22, Thomas DuBuisson wrote:


All,
I have a simple Haskell P2P library that I've been playing with in
simulations of 20 to 600 nodes.   To run the simulation there is a
Haskell thread (forkIO) for every node in the system, one that starts
up all the nodes and prints the info (so prints aren't mangled), and
one that acts as the router.

Before its mentioned - I understand the best way forward would be to
refactor the code into IO-less 'algorithm' sections and other sections
that perform the needed IO when I'm not simulating.  I know this would
allow me to declare what order each node runs in and would free me
from the scheduler.  I'd like to do that if its practical... but!

None-the-less, here I am saying that there are many interesting little
simulations that could be done without refactoring and the correctness
isn't altered by the order of operations (not if the nodes behave
properly, the slight variation is actually a good test).  What I would
like to know is are there any plans for GHC to incorporate
user-definable scheduler?  It would be useful in numerous instance
beyond this poor example; I know user scheduling was briefly mentioned
in Li's paper but haven't seen or heard of any interest from others
since then.

Thomas
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe




Re: [Haskell-cafe] Estimating the time to garbage collect

2009-05-04 Thread Neil Davies

Duncan

That was my first thought - but what I'm looking for is some
confirmation from those who know better that treating the GC as a
'statistical source' is a valid hypothesis. If the thing is 'random'
that's fine - if its timing is non-deterministic, that's not fine.


So, GC experts, are there any hints you can give me? Are there any
papers that cover this timing aspect? And are there any corner cases
that might make the statistical approach risky (or at worst invalid)?


I don't want to have to build a stochastic model of the GC, if I can  
help it!
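
For the kind of black-box statistics Duncan suggests below, even a
simple least-squares fit of pause time against, say, live-heap size
would be a starting point - a toy sketch of mine, with made-up numbers:

   -- fit y = slope*x + intercept over (x = live data, y = pause) samples
   fitLine :: [(Double, Double)] -> (Double, Double)
   fitLine samples = (slope, intercept)
      where
         n   = fromIntegral (length samples)
         sx  = sum (map fst samples)
         sy  = sum (map snd samples)
         sxx = sum [x * x | (x, _) <- samples]
         sxy = sum [x * y | (x, y) <- samples]
         slope     = (n * sxy - sx * sy) / (n * sxx - sx * sx)
         intercept = (sy - slope * sx) / n

   -- e.g. fitLine [(10,0.8), (20,1.7), (40,3.2)] ~= (0.079, 0.050)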


Neil



On 4 May 2009, at 12:51, Duncan Coutts wrote:


On Fri, 2009-05-01 at 09:14 +0100, Neil Davies wrote:

Hi

With the discussion on threads and priority, and given that (in
Stats.c) there are lots of useful pieces of information that the run
time system is collecting, some of which is already visible (like the
total amount of memory mutated) and it is easy to make other measures
available - it has raised this question in my mind:

Given that you have access to that information (the stuff that comes
out at the end of a run if you use +RTS -S) is it possible to estimate
the time a GC will take before asking for one?

Ignoring, at least for the moment, all the issues of paging, processor
cache occupancy etc, what are the complexity drivers for the time to GC?

I realise that it is going to depend on things like, volume of data
mutated, count of objects mutated, what fraction of them are live etc
- and even if it turns out that these things are very program specific
then I have a follow-on question - what properties do you need from
your program to be able to construct a viable estimate of GC time from
a past history of such garbage collections?


Would looking at statistics suffice? Treat it mostly as a black box.
Measure all the info you can before and after each GC and then use
statistical methods to look for correlations to see if any set of
variables predicts GC time.

Duncan



___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] [tryReadAdvChan :: AdvChan a - IO (Maybe a)] problems

2009-05-02 Thread Neil Davies

Belka

Now that you've got some metrics to know when you have a successful  
design, start examining the trades.


Most systems have a turning point in their performance as load is
increased; you need to manage the traffic so that the offered load does
not push the system over that point (or if it does, not for too long -
it doesn't have to be perfect all the time, just suitably good - this
is where stochastics come in, another story).


What you are doing here is trading efficient use of the infrastructure  
(e.g. you are not going to try and run the servers close to 100%) for  
responsive behaviour.


This is one of the reasons why people create multi-levelled web
management architectures - you could think of the design as having an
initial stage that can analyse and identify the traffic and
characterise it - ideally it should be able to cope with line rate or
at least a known upper bound (which the load balancer in front knows
and can be configured for); it can then pass the real traffic on for
processing.


This all comes down to creating back pressure: typically by mediating
between producer and consumer with a finite queue - then configuring
that queue to have the right length so that the back pressure is felt
at an appropriate point. Too long and the experienced delay is too
large, too short and you are changing modes of behaviour too quickly -
again, that's that stochastics/queueing theory stuff.
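
A toy illustration of mediating with a finite queue (my sketch, not
from the thread): a quantity semaphore caps the items in flight, so
writers feel the back pressure by blocking once the queue is full.

   import Control.Concurrent.Chan
   import Control.Concurrent.QSem

   data BoundedChan a = BoundedChan QSem (Chan a)

   newBoundedChan :: Int -> IO (BoundedChan a)
   newBoundedChan n = do sem <- newQSem n
                         ch  <- newChan
                         return (BoundedChan sem ch)

   -- blocks when all n slots are taken: this is the back pressure
   writeBoundedChan :: BoundedChan a -> a -> IO ()
   writeBoundedChan (BoundedChan sem ch) x = waitQSem sem >> writeChan ch x

   -- frees a slot, letting one blocked writer proceed
   readBoundedChan :: BoundedChan a -> IO a
   readBoundedChan (BoundedChan sem ch) = do x <- readChan ch
                                             signalQSem sem
                                             return x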


Avoiding starvation is easy, FIFO service does that - but you'll find
that's not enough: you also need to bound the service time for certain
requests. People often think that 'priority' is the right answer but
usually that is too naive. Resources are finite; giving one source of
demand 'priority' means that the other sources lose out, and that
trade is highly non-linear and creates its own denial-of-service.


Neil

On 2 May 2009, at 06:27, Belka wrote:



Thanks, Neil. :)
You actually motivated me to determine/specify defense requirements -
that I should have done long before writing here.
Now I'm not experienced in DDoS defending, so my reasoning here might
be a bit vulnerable. A few basic requirements:
1. Server has services that shouldn't be endangered by computational
resource starvation. That is why I use load balancing for SAR (Services
under Attack Risk). I even use 2 types of load controls: one per each
SAR, and the second - above all SARs.
2. Even when under attack a SAR should be able to serve. Of course, its
effective input capability becomes much lower, but the requirement here
is to provide the maximum possible effectiveness. That is why
2.1. identification of a bad request should be fast, and
2.2. request processing should be fair (without starvation on
acceptance time).

After projecting this /\ specification onto the architecture plan, the
need for a *good* tryReadChan is now less sharp. However, it still
would be very useful - I also have other applications for it.

The *good* tryReadChan would be atomic, immediate, and with a
determinate result (of type Maybe)...
--
By the way, for

Actually, am I wrong in thinking that it can't be helped - and the
degradation from the cute concurrency synchronization model of Chan is
unavoidable?

I have an idea for such a solution (without getting down to lower level
programming) - call it "fishing": one should complicate the flow unit
(FlowUnit) that is being passed in the Channel. The FlowUnit
diversifies into real business data and service data. That way I may
now gain control over blocking.

But this solution is not simple and lightweight.  If anybody is
interested, I could describe the concept in more detail.

Belka


Neil Davies-2 wrote:


Belka

You've described what you don't want - what do you want?

Given that the fundamental premise of a DDoS attack is to saturate
resources so that legitimate activity is curtailed - ultimately the
only response has to be to discard load, preferably not the legitimate
load (and therein lies the nub of the problem).

What are you trying to achieve here - a guarantee of progress for the
system? a guarantee of a fairness property? (e.g. some legitimate
traffic will get processed) or, given that the DDoS load can be
identified given some initial computation, a guarantee to progress
legitimate load up to some level of DDoS attack?

Neil




--
View this message in context: 
http://www.nabble.com/-tryReadAdvChan-%3A%3A-AdvChan-a--%3E-IO-%28Maybe-a%29--problems-tp23328237p23343213.html
Sent from the Haskell - Haskell-Cafe mailing list archive at  
Nabble.com.


___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe




Re: [Haskell-cafe] Estimating the time to garbage collect

2009-05-02 Thread Neil Davies
Yes, you've got the problem domain. I don't have to deliver responses
to stimuli all the time within a bound, but I need to supply some
probability for that figure.
That problem domain is everywhere - all that varies is the bound on the time
and the probability of meeting it.

'Hard real time' systems are very expensive to build and, typically, make
very low utilisation of resources and have interesting failure modes when
timing stops being met. Meeting strict timing constraints is becoming
more difficult as processors become more complex (think multi-level caching,
clock rates that vary with temperature and/or load) and when those systems
use packet based multiplexing as their interconnect (a time slotted shared
bus being too expensive).

Yes, the proof obligations are more challenging, no more ability to
enumerate the complete state space and prove that the schedule can always be
met, no more 'certainty' that events and communications will occur within a
fixed time. Interestingly giving up that constraint may well have its
up-side, it was being used as a design 'crutch' - possibly being leaned on
too heavily. Having to explicitly consider a probability distribution
appears to create more robust overall systems.

On the flip side, this more stochastic approach has to work - the commercial
trends in wide area networking mean things are getting more stochastic;
deterministic timings for wide area communications will be a thing of the
past in 10 - 15 years (or prohibitively expensive). This is already worrying
people like the electricity distribution folks - their control systems are
looking vulnerable to such changes, and the issue of co-ordinating
electricity grids is only going to get more difficult as the number of
generation sources increases, as is inevitable.

Perhaps this is too much for a Saturday morning, sunny one at that

Neil

2009/5/1 John Van Enk vane...@gmail.com

 I think the problem becomes slightly easier if you can provide an upper
 bound on the time GC will take. If I understand your problem domain, Neil,
 you're most concerned with holding up other processes/partitions who are
 expecting to have a certain amount of processing time per frame. If we can
 give an upper bound to the GC time, then we can plan for it in the schedule
 without upsetting the other processes.

 I don't have an answer (though I'd love one), but I do think that asking
 for an upper bound substantially simplifies the problem (though, I could be
 wrong) and still gives you the characteristics you need to give a 'time to
 complete'.
  /jve
 On Fri, May 1, 2009 at 4:14 AM, Neil Davies 
 semanticphilosop...@googlemail.com wrote:

 Hi

 With the discussion on threads and priority, and given that (in Stats.c)
 there are lots of useful pieces of information that the run time system is
 collecting, some of which is already visible (like the total amount of
 memory mutated) and it is easy to make other measures available - it has
 raised this question in my mind:

 Given that you have access to that information (the stuff that comes out
 at the end of a run if you use +RTS -S) is it possible to estimate the time
 a GC will take before asking for one?

 Ignoring, at least for the moment, all the issues of paging, processor
 cache occupancy etc, what are the complexity drivers for the time to GC?

 I realise that it is going to depend on things like, volume of data
 mutated, count of objects mutated, what fraction of them are live etc - and
 even if it turns out that these things are very program specific then I have
 a follow-on question - what properties do you need from your program to be
 able to construct a viable estimate of GC time from a past history of such
 garbage collections?

 Why am I interested? There are all manners of 'real time' in systems,
 there is a vast class where a statistical bound (ie some sort of 'time to
 complete' CDF) is more than adequate for production use. If this is possible
 then it opens up areas where all the lovely properties of haskell can be
 exploited if only you had confidence in the timing behaviour.

 Cheers
 Neil
 ___
 Haskell-Cafe mailing list
 Haskell-Cafe@haskell.org
 http://www.haskell.org/mailman/listinfo/haskell-cafe




 --
 /jve

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] [tryReadAdvChan :: AdvChan a - IO (Maybe a)] problems

2009-05-01 Thread Neil Davies

Belka

You've described what you don't want - what do you want?

Given that the fundamental premise of a DDoS attack is to saturate
resources so that legitimate activity is curtailed - ultimately the
only response has to be to discard load, preferably not the legitimate
load (and therein lies the nub of the problem).

What are you trying to achieve here - a guarantee of progress for the
system? a guarantee of a fairness property? (e.g. some legitimate
traffic will get processed) or, given that the DDoS load can be
identified given some initial computation, a guarantee to progress
legitimate load up to some level of DDoS attack?

Neil


On 1 May 2009, at 05:09, Belka wrote:



Hi!

I need this function with a requirement of heavy reads, *possibly
under DDoS attack*.
I was trying to write such a function, but discovered some serious
problems of
** possible races,
** possible starvations,
** unbalance: readAdvChan users may get better service than those of
tryReadAdvChan.
These are totally unacceptable for my case of DDoS risk.

Actually, am I wrong in thinking that it can't be helped - and the
degradation from the cute concurrency synchronization model of Chan is
unavoidable?

My (untested) code:
---
---
module AdvChan ( AdvChan
               , newAdvChan
               , readAdvChan
               , writeAdvChan
               , writeList2AdvChan
               , advChan2StrictList
               , withResourceFromAdvChan
               , tryReadAdvChan
               , isEmptyAdvChan
               ) where

import Control.Concurrent (threadDelay)
import Control.Concurrent.Chan
import Control.Concurrent.MVar
import Control.Monad (liftM)

data AdvChan a = AdvChan {
     acInst    :: MVar (Chan a)
   , acWrite   :: a -> IO ()
   , acIsEmpty :: IO Bool
 }

newAdvChan :: IO (AdvChan a)
newAdvChan = do ch    <- newChan
                mv_ch <- newMVar ch
                return AdvChan {
                     acInst    = mv_ch
                   , acWrite   = writeChan ch
                   , acIsEmpty = isEmptyChan ch
                   }

readAdvChan :: AdvChan a -> IO a
readAdvChan ach = modifyMVar (acInst ach)
                             (\ ch -> do a <- readChan ch
                                         return (ch, a)
                             )

writeAdvChan :: AdvChan a -> a -> IO ()
writeAdvChan = acWrite

writeList2AdvChan :: AdvChan a -> [a] -> IO ()
writeList2AdvChan ach []    = return ()
writeList2AdvChan ach (h:t) = writeAdvChan ach h >> writeList2AdvChan ach t

advChan2StrictList :: AdvChan a -> IO [a]
advChan2StrictList ach = modifyMVar (acInst ach)
    (\ ch -> let readLoop = do emp <- isEmptyChan ch
                               case emp of
                                  True  -> return []
                                  False -> do _head <- readChan ch
                                              _rest <- readLoop
                                              return (_head : _rest)
             -- liftTuple :: (IO a, IO b) -> IO (a, b) is assumed to be
             -- a local helper
             in liftTuple (return ch, readLoop)
    )

withResourceFromAdvChan :: AdvChan a -> (a -> IO (a, b)) -> IO b
withResourceFromAdvChan ach f = do res <- readAdvChan ach
                                   (res_processed, result) <- f res
                                   writeAdvChan ach res_processed
                                   return result

isEmptyAdvChan :: AdvChan a -> IO Bool
isEmptyAdvChan = acIsEmpty

microDelta = 50

tryReadAdvChan :: AdvChan a -> IO (Maybe a)
tryReadAdvChan ach = emp2Maybeness $ do
      mb_inst <- tryTakeMVar (acInst ach)
      case mb_inst of
         Nothing   -> emp2Maybeness
                         (threadDelay microDelta >> tryReadAdvChan ach)
         Just chan -> do emp <- isEmptyChan chan
                         result <- case emp of
                                      True  -> return Nothing
                                      False -> Just `liftM` readChan chan
                         putMVar (acInst ach) chan
                         return result
   where
      emp2Maybeness f = do emp <- isEmptyAdvChan ach
                           case emp of
                              True  -> return Nothing
                              False -> f

---
---

Later, after writing my own code and understanding the problem, I
checked Hackage. Found the synchronous-channels package there
(http://hackage.haskell.org/cgi-bin/hackage-scripts/package/synchronous-channels),
but it isn't any further in solving my unbalancedness problems.

Any suggestions on the fresh matter are welcome.
Belka.
--
View this message in context: 
http://www.nabble.com/-tryReadAdvChan-%3A%3A-AdvChan-a--%3E-IO-%28Maybe-a%29--problems-tp23328237p23328237.html
Sent from the Haskell - Haskell-Cafe 

[Haskell-cafe] Estimating the time to garbage collect

2009-05-01 Thread Neil Davies

Hi

With the discussion on threads and priority, and given that (in  
Stats.c) there are lots of useful pieces of information that the run  
time system is collecting, some of which is already visible (like the  
total amount of memory mutated) and it is easy to make other measures  
available - it has raised this question in my mind:


Given that you have access to that information (the stuff that comes  
out at the end of a run if you use +RTS -S) is it possible to estimate  
the time a GC will take before asking for one?


Ignoring, at least for the moment, all the issues of paging, processor  
cache occupancy etc, what are the complexity drivers for the time to GC?


I realise that it is going to depend on things like, volume of data  
mutated, count of objects mutated, what fraction of them are live etc  
- and even if it turns out that these things are very program specific  
then I have a follow-on question - what properties do you need from  
your program to be able to construct a viable estimate of GC time from  
a past history of such garbage collections?


Why am I interested? There are all manners of 'real time' in systems,  
there is a vast class where a statistical bound (ie some sort of 'time  
to complete' CDF) is more than adequate for production use. If this is  
possible then it opens up areas where all the lovely properties of  
haskell can be exploited if only you had confidence in the timing  
behaviour.


Cheers
Neil
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Thread priority?

2009-04-25 Thread Neil Davies

Count me in too

I've got a library that endeavours to deliver 'rate-equivalence' - i.e.
there may be some jitter in when the events should have occurred but
their long term rate of progress is stable.


Testing has shown that I can get events to occur at the right time
within 1ms (99%+ of the time, with 50%+ of the events < 0.5ms) -
choose your O/S carefully though. And this is without resorting to
marking the threads as real time with the kernel. If it matters you
can tune the scheduling to further reduce the timer scheduling latency
(at the cost of more CPU) and increase the fidelity of event timing
another order of magnitude or more.


As for the 'garbage collection ruins my real time properties' argument -
I've been pragmatic about this, you can configure the scheduler to go
and perform a GC if it knows that it is going to sleep for a while
(yes, this doesn't resolve the issues of external events arriving in
that time but read on) but not perform that too often. Turning off
GHC's timers (to stop it garbage collecting after it has actually been
idle for a while) and having code that doesn't create vast amounts of
garbage means that a GC takes < 0.1ms.


Now if I could see within Haskell the amount of heap that has been  
recently mutated and some other GC statistics when it was running I  
could even decide to schedule that more effectively.


We use this to create synthetic network traffic sources and other  
timing/performance related activities in the wide area context - it  
serves our needs.


Neil


On 25 Apr 2009, at 03:48, Christopher Lane Hinson wrote:



Is there any interest or movement in developing thread priority or  
any other realtime support in Haskell?


Right now, if I have tasks that need to be responsive in real time,  
even if the realtime needs are very soft, it seems that the only  
option is to try to ensure that at least one hardware thread is kept  
clear of any other activity.


To be very useful to me, thread priority would not need to come with  
very strict guarantees, but there would need to be a way to make  
sure that `par` sparks and DPH inherit the priority of the sparking  
thread.


In part I ask because I'm working on a small library to support a  
degree of cooperative task prioritization.


Friendly,
--Lane
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe




Re: [Haskell-cafe] Issues with running Ghci from emacs

2009-04-16 Thread Neil Davies

I end up doing

 :set -i../Documents/haskell/SOE/src

To set the search directory so that ghci can find the source.

I've not found out how to tailor that 'cd' which is run at start up
(but I've not looked too hard)


Neil


On 16 Apr 2009, at 04:12, Daryoush Mehrtash wrote:



I am having a problem running GHCi with Haskell files that include
imports.  I am running emacs22 on Ubuntu, with haskell-mode-2.4
extensions.

I load my file (Fal.lhs in this case) in emacs.  Then I try to run
GHCi by doing C-c C-l.  The result is shown below.  GHCi fails to find
the Draw.lhs which is also in the same directory as Fal.lhs.  Note: if
I go to the directory where Fal.lhs is and run GHCi directly it all
works fine.  Any idea how I can get the interpreter to work from
emacs?


GHCi, version 6.10.1: http://www.haskell.org/ghc/  :? for help
Loading package ghc-prim ... linking ... done.
Loading package integer ... linking ... done.
Loading package base ... linking ... done.
Prelude> :cd ~/.cabal/
Prelude> :load ../Documents/haskell/SOE/src/Fal.lhs

../Documents/haskell/SOE/src/Fal.lhs:76:9:
Could not find module `Draw':
  Use -v to see a list of the files searched for.
Failed, modules loaded: none.
Prelude>

Thanks,

Daryoush

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe




[Haskell-cafe] GHC 6.10.2 - PCap package - affected by the change of Finalizers?

2009-03-24 Thread Neil Davies

Hi

Looks like the pcap package needs a little tweak for 6.10.2 - programs
compiled with it bomb out with


..error: a C finalizer called back into Haskell.
   use Foreign.Concurrent.newForeignPtr for Haskell finalizers.

Is my interpretation of this error message correct?

Cheers
Neil
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Re: System.CPUTime and picoseconds

2009-01-12 Thread Neil Davies
I've found the picosecond accuracy useful in working with 'rate
equivalent' real time systems - systems where the individual timings
(their jitter) are not critical but the long term rate should be
accurate. The extra precision helps with keeping the error
accumulation under control.


When you are selling something (like data bandwidth) and you are  
pacing the data stream on a per packet basis you definitely want any  
error to accumulate slowly - you are in the 10^10 events per day range  
here.
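
Back-of-envelope, with my own numbers: at 10^10 events per day, a
systematic per-event rounding error of 1 ns drifts by
10^10 * 10^-9 s = 10 s per day, while the same count at picosecond
rounding drifts by only 10 ms per day.

   driftPerDay :: Double -> Double    -- seconds of drift per day
   driftPerDay perEventError = 1e10 * perEventError

   -- driftPerDay 1e-9  == 10.0     (nanosecond rounding)
   -- driftPerDay 1e-12 == 1.0e-2   (picosecond rounding)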


Neil


On 12 Jan 2009, at 00:00, Lennart Augustsson wrote:

On Sun, Jan 11, 2009 at 8:28 PM, ChrisK  
hask...@list.mightyreason.com wrote:
A Double or Int64 are both 8 bytes and count with picosecond precision
for 2.5 hours to 106 days.  Going to a 12 byte integer lets you count
to 3.9 billion years (signed).  Going to a 16 byte integer is over
10^38 years.


Lennart Augustsson wrote:


A double has 53 bits in the mantissa which means that for a running
time of about 24 hours you'd still have picoseconds.  I doubt anyone
cares about picoseconds when the running time is a day.


The above is an unfounded claim about the rest of humanity.


It's not really about humanity, but about physics.  The best known
clocks have a long term error of about 1e-14.
If anyone claims to have made a time measurement where the accuracy
exceeds the precision of a double I will just assume that this person
is a liar.

For counting discrete events, like clock cycles, I want something like
Integer or Int64.  For measuring physical quantities, like CPU time,
I'll settle for Double, because we can't measure any better than this
(this can of course become obsolete, but I'll accept that error).

 -- Lennart
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe




Re: [Haskell-cafe] Haskell haikus

2008-12-06 Thread Neil Davies

Yesterday it worked
Today it is still working
Haskell is like that!

On 5 Dec 2008, at 23:18, Gwern Branwen wrote:



Hi everyone. So today I finally got around to something long on my
todo list - a compilation of all the Haskell haikus I've seen around!

It is at http://haskell.org/haskellwiki/Haiku

But I'm afraid I only have 5, and Google doesn't turn up any more.

So: does anybody have a haiku I missed? Or even better, is anyone
feeling poetically inspired tonight? :)

--
gwern
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe




Re: [Haskell-cafe] Proof of a multi-threaded application

2008-11-18 Thread Neil Davies

Ketil

You may not be asking the right question here. Your final system's  
performance is going to be influenced far more by your algorithm for  
updating than by STM (or any other concurrency primitive's) performance.


Others have already mentioned the granularity of locking - but that's
one of the performance design decisions that you need to quantify.


The relative merits of the various approaches are going to come down to
issues such as:

   * given that you have a lock, what is the cost of locking (in terms
     of the lack of forward progress)
   * how often will you have to pay this cost (that will be
     application / data dependent)
   * given you use STM, what is the (differential) cost of the
     underlying housekeeping (depends what processing is within the
     'atomically')
   * what is the likelihood that you will have to undo stuff
     (represents the same cost as the lack of forward progress).


So the answer to 'which is better', as it always is, will be - it depends

Welcome to the wonderful world of performance engineering.

The answer you seek is tractable - but will require more analysis.

Neil


On 18 Nov 2008, at 06:35, Ketil Malde wrote:


Tim Docker [EMAIL PROTECTED] writes:


My apologies for side-tracking, but does anybody have performance
numbers for STM? I have an application waiting to be written using
STM, boldly parallelizing where no man has parallelized before, but
if it doesn't make it faster,



Faster than what?


Faster than an equivalent non-concurrent program.  It'd also be nice
if performance was comparable lock-based implementation.


Are you considering using STM just to make otherwise pure code run in
parallel on multiple cores?


No, I have a complex structure that must be updated in non-predictable
ways.  That's what STM is for, no?

-k
--
If I haven't seen further, it is by standing in the footprints of  
giants

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe




Re: [Haskell-cafe] Proof of a multi-threaded application

2008-11-18 Thread Neil Davies


On 18 Nov 2008, at 10:04, Ketil Malde wrote:


Neil Davies [EMAIL PROTECTED] writes:


You may not be asking the right question here. Your final system's
performance is going to be influenced far more by your algorithm for
updating than by STM (or any other concurrency primitive's)
performance.


I may not be asking the right question, but I am happy about the
answer, including yours :-)

I think STM is semantically the right tool for the (my) job, but for
it to be useful, the implementation must not be the limiting factor.
I.e running on n+1 CPUs should be measurably faster than running on n,
at least up to n=8, and preferably more.


More detailed questions: how complex is the mutual exclusion block? If
it is well known now and not likely to change you can implement it
several ways and work out any extra overhead (run it a lot) against the
other approaches. Nothing like a quick benchmark. Otherwise stick with
STM (it's composable after all).
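
A small sketch of that composability (my code, not Ketil's): two
independent TVar updates combine into one atomic transaction with no
extra locking protocol.

   import Control.Concurrent.STM
   import Control.Monad ((>=>))

   transfer :: TVar Int -> TVar Int -> Int -> STM ()
   transfer from to n = do
      x <- readTVar from
      y <- readTVar to
      writeTVar from (x - n)
      writeTVar to   (y + n)

   main :: IO ()
   main = do
      [a, b, c] <- mapM newTVarIO [100, 0, 0]
      atomically $ transfer a b 30 >> transfer b c 10   -- one atomic step
      mapM_ (readTVarIO >=> print) [a, b, c]            -- 70, 20, 10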



With the assuming I can get enough parallelism and avoiding too many
rollbacks, of course.


It's not the parallelism that is the issue here, it is the locality of
reference. If you have data that is likely to be widely spread amongst
all the possible mutual exclusion blocks then you are on to a winner.
If your data is clustered and likely to hit the same 'block' then,
whatever you do, your scalability is scuppered.


Also, consider how much concurrency you've got, not just the  
parallelism. You need enough concurrency to exploit the parallelism  
but not too much more - too much more can start creating contention  
for the mutual exclusion blocks that would not have existed at less  
concurrency.




Others have already mentioned the granularity of locking - but that
one of the performance design decisions that you need to quantify.


Yes.  Fine grained - I'm thinking a large Array of TVars.  (If you
only have a single TVar, it might as well be an MVar, no?)


What do those TVars contain? How many of them are being updated
atomically?



-k
--
If I haven't seen further, it is by standing in the footprints of  
giants


Neil
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Haskell participating in big science like CERN Hadron....

2008-10-04 Thread Neil Davies

Gents

Too late to get in on that act - that software was designed over the
last 10/15 years, implemented and trialled over the last 5, and is
being tuned now.


But all is not lost, Haskell is being used! Well, at least in helping
ATLAS people understand how the data acquisition system performs (and
interacts) - it's what the stochastic performance algebra language
being used to capture behavioural models is written in.


Neil

On 3 Oct 2008, at 20:38, Don Stewart wrote:


wchogg:
On Fri, Oct 3, 2008 at 5:47 AM, Dougal Stanton [EMAIL PROTECTED] 
 wrote:

2008/10/3 Galchin, Vasili [EMAIL PROTECTED]:

Hello,

   One of my interests based on my education is grand challenge  
science.

Ok .. let's take the  CERN Hadrian Accelerator.

   Where do you think Haskell can fit into the CERN Hadrian effort
currently?

   Where do you think think Haskell currently is lacking and will  
have to

be improved in order to participate in CERN Hadrian?


Is that the experiment where Picts are accelerated to just short of
the speed of light in order to smash through to the Roman  
Empire? ;-)


I don't know what the main computational challenges are to the LHC
researchers. The stuff in the press has mostly been about
infrastructure --- how to store the gigabytes of data per second  
that

they end up keeping, out of the petabytes that are produced in the
first place (or something).


Well, with the LHC efforts I don't think a technology like Haskell
really has a place...at least not now.  Even just a few years back,
when I worked on this stuff, we were still doing lots of simulation  
in
preparation for the actual live experiment and Haskell might have  
been

a good choice for some of the tools.  All of the detector simulation
was written in C++, because C++ is the new FORTRAN to physicists, and
you ain't seen nothing till you've seen a jury-rigged form of lazy
evaluation built into a class hierarchy in C++.  Now, would the C++
based simulation have run faster than a Haskell based one?  Quite
possibly.  On the other hand, I remember how many delays and problems
were caused by the sheer complexity of the codebase.  That's where a
more modern programming language might have been extremely helpful.


How about EDSLs for producing high assurance controllers, and other
robust devices they might need. I imagine the LHC has a good need for
verified software components...
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe




Re: [Haskell-cafe] Data.Binary Endianness

2007-09-11 Thread Neil Davies
So the answer for persistence is a Data.Binary - ASN.1 converter that
can be used in extremis?

On 11/09/2007, Brandon S. Allbery KF8NH [EMAIL PROTECTED] wrote:

 On Sep 11, 2007, at 7:01 , Jules Bean wrote:

  The actual format used by Data.Binary is not explicitly described
  in any standard (although in most cases it's moderately obvious,
  and anyone can read the code), and it's not formally guaranteed
  that it will never change in a later version (although the
  maintainers will no doubt try very hard to ensure it doesn't); nor
  does it contain any automatic support for version-stamping to
  ensure backwards compatibility in the face of later unlooked-for
  format changes.

 I will just point out that, while this is one extreme, the other
 extreme is ASN.1.  I think we want to walk the middle path instead

 --
 brandon s. allbery [solaris,freebsd,perl,pugs,haskell] [EMAIL PROTECTED]
 system administrator [openafs,heimdal,too many hats] [EMAIL PROTECTED]
 electrical and computer engineering, carnegie mellon universityKF8NH


 ___
 Haskell-Cafe mailing list
 Haskell-Cafe@haskell.org
 http://www.haskell.org/mailman/listinfo/haskell-cafe

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Hackage and GHC 6.8

2007-09-09 Thread Neil Davies
Ah,

this begins to answer my question: there isn't really a plan

I would have thought that the first step is to be able to distinguish
which of the hackage packages compile under 6.8 - some annotation to
the hackage DB? Secondly, is there a dependency graph of the stuff on
hackage anywhere? That would identify which order to change packages
in (for example the cabal-install package is dependent on exactly one
version of the HTTP library). We need to size the problem.

Neil



On 09/09/2007, Duncan Coutts [EMAIL PROTECTED] wrote:
 On Sat, 2007-09-08 at 14:50 +0100, Neil Mitchell wrote:
  Hi Neil,
 
   Given that GHC 6.8 is just around the corner and, given how it has
   re-organised the libraries so that the dependencies in many (most/all)
   the packages in the hackage DB are now not correct.
  
   Is there a plan of how to get hackage DB up to speed with GHC 6.8 ?
 
  I think whatever we go with will be deeply painful. Especially given
  the switch to Cabal configurations comes at the same time, rather than
  before.

 Cabal 1.2 is out now and supports configurations and current ghc:

 http://haskell.org/cabal/download.html

 Duncan


___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Hackage and GHC 6.8

2007-09-09 Thread Neil Davies
Thinking on it more - surely there is enough information from the
failed compilations to suggest changes to the cabal files? Feed them
back to the developers?

My thoughts go this way - people like creating, that's why hackage is
beginning to grow. They don't tend to like the packaging issues - they
are seen as a distraction; the more we can automate this the better,
and the more quickly the libraries are going to get up to date.

As a community we are building a momentum here - let's not derail it
by a lack of attention to detail.

Neil

On 09/09/2007, Neil Davies [EMAIL PROTECTED] wrote:
 Ah,

 this begins to answer my question: there isn't really a plan

 I would have thought that he first step is to be able to distinguish
 which of the hackage packages compile under 6.8 - some annotation to
 the hackage DB? Secondly, is there a dependency graph of the stuff on
 hackage anywhere? That would identify which order to change packages
 in (for example the cabal-install package is dependent on exactly one
 version of the HTTP library). We need to size the problem.

 Neil



 On 09/09/2007, Duncan Coutts [EMAIL PROTECTED] wrote:
  On Sat, 2007-09-08 at 14:50 +0100, Neil Mitchell wrote:
   Hi Neil,
  
Given that GHC 6.8 is just around the corner and, given how it has
re-organised the libraries so that the dependencies in many (most/all)
the packages in the hackage DB are now not correct.
   
Is there a plan of how to get hackage DB up to speed with GHC 6.8 ?
  
   I think whatever we go with will be deeply painful. Especially given
   the switch to Cabal configurations comes at the same time, rather than
   before.
 
  Cabal 1.2 is out now and supports configurations and current ghc:
 
  http://haskell.org/cabal/download.html
 
  Duncan
 
 

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] Hackage and GHC 6.8

2007-09-07 Thread Neil Davies
Given that GHC 6.8 is just around the corner, and given how it has
re-organised the libraries, the dependencies in many (most/all) of the
packages in the hackage DB are now not correct.

Is there a plan of how to get hackage DB up to speed with GHC 6.8 ?

Cheers

Neil
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Learn Prolog...

2007-09-02 Thread Neil Davies
Cut is a means of preventing backtracking beyond that point - it
prunes the potential search space, saying the answer must be built on
the current set of bindings. (Lots of work went into how to
automatically get cuts into programs to make them efficient but
without the programmer having to worry about them.)

Atoms: think unique symbols that are elements in the set which is the
program's universe of discourse - one that is closed (no infinities
there please). All it's doing is unifying things to see if they fit the
rules. It is one large parser that gives you back the bindings it made
on the way.

I do remember the day when a Prolog researcher was fiercely
defending that Prolog could compute solutions to problems that were
not achievable with ordinary Turing complete languages - nothing as
ugly as a rampaging mob of Computer Scientists!

I've even written (late 80's) a program in Prolog that performed
real-time subtitling for the deaf, which I'm told is still being used
out there.

Would I use it now? - never - it may give you an answer but rarely
does using it give you understanding, and you can always code up the
searching algorithms yourself if you have to go that brute-force route.
And in the end it is the understanding that is reusable, not the
answer.

Neil


On 02/09/07, Andrew Coppin [EMAIL PROTECTED] wrote:

  One of the standard exercises in Prolog is the construction of the
  meta-interpreter of Prolog in Prolog. While this is cheating, I recommend
  it to you. It opens eyes.

 Ever tried implementing Haskell in Haskell? ;-)

  Prolog strategies are straightforward, and I simply cannot understand the
  comments of Andrew Coppin. Which arbitrary set of conclusions?? Which
  patently obvious results not derivable?? Be kind, give some examples,
  otherwise people may suspect that you are issuing vacuous statements...

 Read my whole message. What I was saying (in essence) is that Prolog
 seemed to be performing impossible feats of logical deduction - until
 I saw a unification algorithm implemented in Haskell, and then it all
 made sense.

 (They showed an example where you basically program in a list of who is
 related to who, and then the computer suddenly seems to be able to
 magically deduce arbitrary family relationships - without any code for
 doing this being defined. This seemed utterly far-out to me... I'm not
 used to computers begin able to think for themselves. I'm more used to
 having them blindly follow whatever broken sequence of commands you feed
 to them... And yet, given a set of facts, this Prolog interpreter seemed
 to be able to magically derive arbitrarily complex conclusions from
 them. Double-impossible! Until I learned how it's implemented...)

 Having said all that, I still don't get what the purpose of the cut
 operator is. I also failed to understand the Prolog syntax description.
 (What the heck is an atom when it's at home? I thought an atom is a
 unit composed of protons and electrons...)

 I can certainly see why Prolog would be very useful for certain types of
 problems. As it happens, not the kind of problems that usually interest
 me. ;-)

 ___
 Haskell-Cafe mailing list
 Haskell-Cafe@haskell.org
 http://www.haskell.org/mailman/listinfo/haskell-cafe

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] Data.Unique not safe in concurrent environments (or brain burned)?

2007-09-01 Thread Neil Davies
Hi, I was looking over the libraries for bits of GHC (no doubt a
standard form of relaxation for readers of this list), and noticed the
following statement (in Data.Unique):

 -- | Creates a new object of type 'Unique'.  The value returned will
 -- not compare equal to any other value of type 'Unique' returned by
 -- previous calls to 'newUnique'.  

This set me thinking - so I looked at the code 

newUnique :: IO Unique
newUnique = do
   val <- takeMVar uniqSource
   let next = val+1
   putMVar uniqSource next
   return (Unique next)

In the concurrent execution world in which we live - I don't think
that the implementation supports the uniqueness statement above - or
am I not understanding something?

Cheers

Neil

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] Re: Data.Unique not safe in concurrent environments (or brain burned)?

2007-09-01 Thread Neil Davies
Let me answer this myself - brain burnt...

Of course it is OK, that is precisely the semantics of MVars - they are
empty or full, thus assuring the mutual exclusion between threads.
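
A minimal illustration of that (my code, not the library's): many
threads bumping a shared counter. takeMVar leaves the MVar empty, so
any competing takeMVar blocks until the matching putMVar - which is
exactly what makes the read-modify-write atomic.

   import Control.Concurrent
   import Control.Concurrent.MVar
   import Control.Monad (forM_, replicateM_)

   main :: IO ()
   main = do
      source <- newMVar (0 :: Int)
      dones  <- mapM (const newEmptyMVar) [1 .. 10 :: Int]
      forM_ dones $ \d -> forkIO $ do
         replicateM_ 1000 $ do
            v <- takeMVar source
            putMVar source (v + 1)
         putMVar d ()
      mapM_ takeMVar dones        -- wait for all ten threads
      readMVar source >>= print   -- always 10000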

Been a hard week.

Neil
... so the deeper question - why don't you realise these mistakes till
five minutes *after* you've pressed the send button

Neil Davies wrote:
 Hi, I was looking over the libraries for bits of GHC (no doubt a standard 
 form of 
 relaxation for readers of this list), and noticed the following statement 
 (in Data.Unique):

  -- | Creates a new object of type 'Unique'.  The value returned will
  -- not compare equal to any other value of type 'Unique' returned by
  -- previous calls to 'newUnique'.  

 This set me thinking - so I looked at the code 

 newUnique :: IO Unique
 newUnique = do
    val <- takeMVar uniqSource
let next = val+1
putMVar uniqSource next
return (Unique next)

 In the concurrent execution world in which we live - I don't think that the 
 implementation 
 supports the uniqueness statement above - or am I not understanding 
 something?

 Cheers

 Neil


   

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Very freaky

2007-07-10 Thread Neil Davies

It means that you can view programming as constructing witnesses to
proofs - programs becoming the (finite) steps that, when followed,
construct a proof.

Intuitionism only allows you to make statements that can be proved
directly - no reductio ad absurdum, only good, honest-to-goodness
constructive computational steps - sounds like programming (and
general engineering) to me.
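
For instance (a small propositions-as-types sketch in Haskell, of my
own, ignoring the wrinkle that non-termination inhabits every type):

   -- modus ponens: from A -> B and A, conclude B
   modusPonens :: (a -> b) -> a -> b
   modusPonens f x = f x

   -- conjunction commutes: A and B implies B and A
   andComm :: (a, b) -> (b, a)
   andComm (x, y) = (y, x)

   -- and, true to intuitionism, no total term proves Peirce's law:
   -- peirce :: ((a -> b) -> a) -> a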

Powerful when you grasp it - which is why I've spent the last 15 or 20
years considering myself as an intuitionistic semantic philosopher -
reasoning about the meaning of things by constructing their proofs -
great way of taking ideas, refining them into an axiomatic system then
going and making them work.

Take it from me - it is a good approach: it generates exploitable ideas
that people fund and that make people money!

Neil

On 10/07/07, Andrew Coppin [EMAIL PROTECTED] wrote:

Jonathan Cast wrote:
 On Tuesday 10 July 2007, Andrew Coppin wrote:

 OK, so technically it's got nothing to do with Haskell itself, but...


 Actually, it does


I rephrase: This isn't *specifically* unique to Haskell, as such.

 Now, Wikipedia seems to be suggesting something really remarkable. The
 text is very poorly worded and hard to comprehend,


 Nothing is ever absolutely so --- except the incomprehensibility of
 Wikipedia's math articles.  They're still better than MathWorld, though.


Ah, MathWorld... If you want to look up a formula or identity, it's
practically guaranteed to be there. If you want to *learn* stuff...
forget it!

 So is this all a huge coincidence? Or have I actually suceeded in
 comprehending Wikipedia?


 Yes, you have.  In the (pure, non-recursive) typed lambda calculus, there is
 an isomorphism between (intuitionistic) propositions and types, and between
 (constructive) proofs and terms, such that a term exists with a given type
 iff a (corresponding) (constructive) proof exists of the corresponding
 (intuitionistic) theorem.  This is called the Curry-Howard isomorphism, after
 Haskell Curry (he whom our language is named for), and whatever computer
 scientist independently re-discovered it due to not having figured out to
 read the type theory literature before doing type theoretic research.


...let us not even go into what constitutes intuitionistic
propositions (hell, I can't even *pronounce* that!) or what a
constructive proof is...

 Once functional programming language designers realized that the
 generalization of this to the fragments of intuitionistic logic with logical
 connectives `and' (corresponds to products/record types) and `or'
 (corresponds to sums/union types) holds, as well, the prejudice that
 innovations in type systems should be driven by finding an isomorphism with
 some fragment of intuitionistic logic set in, which gave us existential types
 and rank-N types, btw.  So this is really good research to be doing.


On the one hand, it feels exciting to be around a programming language
where there are deep theoretical discoveries and new design territories
to be explored. (Compared to Haskell, the whole C / C++ / Java /
JavaScript / Delphi / VisualBasic / Perl / Python thing seems so boring.)

On the other hand... WHAT THE HECK DOES ALL THAT TEXT *MEAN*?! _

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe




Re: [Haskell-cafe] Re: Haskell serialisation, was: To yi or not to yi...

2007-06-21 Thread Neil Davies

Ah -

   The state of the world serialized into your representation.

That would be interesting to see

Neil

... ah you meant something different?

On 21/06/07, apfelmus [EMAIL PROTECTED] wrote:

Tom Schrijvers wrote:
 I understand that, depending on what the compiler does the result of :

 do
let  f = (*) 2
print $ serialise f

 might differ as, for example, the compiler might have rewritten f as
 \n -> n+n.

 But, why would that make equational reasoning on serialise not valid?

 Isn't that true for all functions in the IO monad that, even when
 invoked with the same arguments, they can produce different results?

 Not if you take the ``state of the world'' to be part of the arguments.
 If two programs behave differently for the same arguments and the same
 state of the world, then they're not equivalent. You do want your
 compiler to preserve equivalence, don't you?

You can put the internal representation of the argument into the state
of the world.

Regards,
apfelmus

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe




Re: [Haskell-cafe] Re: Hardware

2007-06-02 Thread Neil Davies

Bulat

That was done to death as well in the '80s - data flow architectures
where the execution was data-availability driven. The issue becomes
one of getting the most out of the available silicon area. Unfortunately
with very small amounts of computation per work unit you:
  a) spend a lot of time/area making the matching decision - the
     what-to-do-next
  b) find the basic sequential blocks of code are too small - you can't
     efficiently pipeline

Locality is essential for performance. It is needed to hide all the
(relatively large) latencies in fetching things.

If anyone wants to build the new style of functional programming
execution hardware, it is those issues that need to be sorted.

Not to say that Haskell is the wrong beast to think about these things
in. Its demand driven execution framework, and its ability to
perform multiple concurrent evaluations safely, are the unique points.

Neil

PS if any of you really want to attack this seriously - do get in
touch - I've got notes and stuff from the '80s when we (at Bristol)
looked into this. I've also got  evaluation frameworks for modeling
communication behaviour and performance (which you'll need) -
naturally all written in Haskell!

On 02/06/07, Bulat Ziganshin [EMAIL PROTECTED] wrote:

Hello Jon,

Friday, June 1, 2007, 11:17:07 PM, you wrote:

 (we had the possiblity of funding to make something).  We
 had lots of ideas, but after much arguing back and forth the
 conclusion we reached was that anything we could do would
 either be slower than mainstream hardware or would be

this looks rather strange to me. it's well known that the von Neumann
architecture is a bottleneck, with only one operation executed at a
time. it was developed in 1946 because in those times all programming
was in binary code and simplicity of programming was favored

but a more efficient computational model exists. if a cpu consists of
a huge number of execution engines which synchronize their operations
only when one unit uses results produced by another, then we get a
processor with a huge level of natural parallelism, friendlier for FP
programs. it seems that now we are moving right in this direction with GPUs


--
Best regards,
 Bulat                          mailto:[EMAIL PROTECTED]



___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Problem with finalizers

2007-05-11 Thread Neil Davies

Ivan

If I remember correctly there is a caveat in the documentation that
stdin/stdout could be closed by the time the finalizer is called. So it
may be being called - you just can't see it!

Neil
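
One way to take GC timing out of the picture - a sketch reusing the
ctestPtr import from the message below - is to run the finalizer
explicitly, while stdout is certainly still usable:

testExplicit :: Int -> IO ()
testExplicit i = with i $ \ptr -> do
    fptr <- newForeignPtr ctestPtr ptr
    putStrLn "test"
    finalizeForeignPtr fptr  -- runs ctest now, not at some later GC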

On 11/05/07, Ivan Tomac [EMAIL PROTECTED] wrote:

Why does the finalizer in the following code never get called unless I
explicitly call finalizeForeignPtr fptr?
Even adding System.Mem.performGC made no difference.

The code was compiled with ghc --make -fffi -fvia-c Test.hs

Ivan

-------- Test.hs --------

module Main where

import Foreign.Ptr
import Foreign.ForeignPtr
import Foreign.Marshal.Utils

import System.Mem

foreign import ccall safe "ctest.h &ctest" ctestPtr :: FunPtr (Ptr Int -> IO ())

test :: Int -> IO ()
test i = with i test'
    where
    test' ptr = do fptr <- newForeignPtr ctestPtr ptr
                   putStrLn "test"
--                 finalizeForeignPtr fptr

main = do putStrLn "before test..."
          test 33
          putStrLn "after test..."
          performGC

-------- ctest.h --------

#include <stdio.h>

static inline void ctest( int *i )
{
    printf( "finalizer called with: %d\n", *i );
}



___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Tutorial on Haskell

2007-04-18 Thread Neil Davies

Yep - I've seen it in coursework I've set in the past - a random walk
through the arrangement of symbols in the language (it was
process-algebra coursework with a proof system to check deadlock freedom).

... but ...

Haskell even helps those people - if they've created something that
works (and they are at least sensible enough to create a test suite, be
it regression- or property-based) - then there is more confidence that
they've coded a solution (if not a good one).

Haskell raises the value of formality (both economically and in terms
of its cachet) - changing the mindset of the masses - creating the
meme - that's tricky. Especially if they're really off the B Ark!
(http://www.bbc.co.uk/cult/hitchhikers/guide/golgafrincham.shtml)

Neil

On 18/04/07, Michael Vanier [EMAIL PROTECTED] wrote:

R Hayes wrote:




 On Apr 17, 2007, at 4:46 PM, David Brown wrote:

 R Hayes wrote:

 They *enjoy* debugging ...


 I have to say this is one of the best things I've found for catching
 bad programmers during interviews, no matter what kind of system it is
 for.  I learned this the hard way after watching someone who never
 really understood her program, but just kept thwacking at it with a
 debugger until it at least partially worked.


 I've seen this too, but I would not use the word "debugging" to describe
 it.  I don't think I agree that enjoying debugging is a sufficient
 symptom for diagnosing this condition.  There are many people that
 love the puzzle-box aspect of debugging.  Some of them are very
 talented developers.

 R Hayes
 rfhayes@/reillyhayes.com


 Dave

I agree with the latter sentiment.  I call the "thwacking at it"
approach "random programming" or "shotgun programming", the latter
suggesting that it's like shooting at the problem randomly until it
dies.  I prefer not having to debug, but when I do have to I find it fun
(up to a point).

Mike





___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] SIP SDP parsers

2007-02-13 Thread Neil Davies

Hi

Has anyone out there done any work on parsers for SIP (Session
Initiation Protocol) and/or SDP (Session Description Protocol)?

Thought that I would ask before I embarked on it myself.

Neil
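
In case it is useful to anyone starting the same thing, a minimal
sketch of SDP's line-level structure in Parsec - every SDP line is
<type>=<value> - with the per-field value grammars of RFC 4566 left as
further work, so this is only the outer shell:

import Text.ParserCombinators.Parsec

data SdpLine = SdpLine Char String deriving Show

-- One SDP line: a single-character type ('v', 'o', 'm', ...),
-- an '=', then the raw value up to the end of the line.
sdpLine :: Parser SdpLine
sdpLine = do
    t <- letter
    _ <- char '='
    v <- manyTill anyChar newline
    return (SdpLine t v)

sdpSession :: Parser [SdpLine]
sdpSession = many1 sdpLine

-- e.g. parse sdpSession "" "v=0\no=- 0 0 IN IP4 127.0.0.1\ns=-\n"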
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Strange behaviour with writeFile

2007-02-04 Thread Neil Davies

It's about the laziness of reading the file. The handle on the file
(underlying readFile) is still open - hence the resource being in use.

When you add that extra line, the act of writing out the remainder
causes the rest of the input to be fully evaluated and hence the
file handle is closed.

If you wish to overwrite the existing file you have to ensure that the
file is not open for reading - just like with any file interface.

Neil
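
The usual workaround, sketched here with plain Prelude IO rather than
the AbstractIO wrapper used below: force the whole input before
reopening the file for writing.

import Control.Exception (evaluate)

rewriteFile :: FilePath -> (String -> String) -> IO ()
rewriteFile path f = do
    s <- readFile path
    _ <- evaluate (length s)  -- demands every character, so the lazy
                              -- read completes and its handle is closed
    writeFile path (f s)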

On 04/02/07, C.M.Brown [EMAIL PROTECTED] wrote:

Hi,

I am observing some rather strange behaviour with writeFile.

Say I have the following code:

answer <- AbstractIO.readFile filename
let (answer2, remainder) = parseAnswer answer
if remainder == "" && answer2 == ""
  then do
    AbstractIO.putStrLn $ "completed"
  else do
    AbstractIO.putStrLn answer2
    AbstractIO.writeFile filename remainder

With the above I get an error saying the resources to filename are
locked. If I add the line AbstractIO.putStrLn $ show (answer2, remainder)
before I call writeFile it suddenly magically works!

Has anyone seen strange behaviour like this before?

Regards,
Chris.


___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Re: [Haskell] ANNOUNCE: binary: high performance, pure binary serialisation

2007-01-26 Thread Neil Davies

ASN.1 already has an existing encoding system - both the BER (Basic
Encoding Rules) and the PER (Packed Encoding Rules).

If you are looking to target a well supported standard - this would be the one.

Neil

On 26/01/07, Malcolm Wallace [EMAIL PROTECTED] wrote:

Tomasz Zielonka [EMAIL PROTECTED] wrote:

 Did you consider using an encoding which uses variable number of
 bytes? If yes, I would be interested to know your reason for not
 choosing such an encoding. Efficiency?

My Binary implementation (from 1998) used a type-specific number of bits
to encode the constructor - exactly as many as needed.  (If you were
writing custom instances, you could even use a variable number of bits
for the constructor, e.g. using Huffman encoding to make the more common
constructors have the shortest representation.)

The latter certainly imposes an extra time overhead on decoding, because
you cannot just take a fixed-size chunk of bits and have the value.  But
I would have thought that in the regular case, using a type-specific
(but not constructor-specific) size for representing the constructor
would be very easy and have no time overhead at all.

Regards,
Malcolm
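
The arithmetic of the type-specific scheme, as a sketch (these helpers
are illustrative only, not part of the binary package's interface):

-- Bits needed to tag a constructor of a type with n constructors:
-- none for a single-constructor type, otherwise ceil(log2 n).
-- (A production version should avoid floating point.)
tagBits :: Int -> Int
tagBits n = ceiling (logBase 2 (fromIntegral n) :: Double)

-- tagBits 1 == 0  (a single-constructor record needs no tag at all)
-- tagBits 2 == 1  (e.g. Bool)
-- tagBits 3 == 2  (e.g. Ordering: two bits, one pattern unused)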


___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] Re: GHC concurrency runtime breaks every 497 (and a bit) days

2007-01-23 Thread Neil Davies

I've prototyped a fix for this issue; the counter will now only wrap
every 585,000 years or so. The fix also removes the 1/50th-of-a-second
timer resolution from the runtime, which means that the additional 20ms
(or thereabouts) of delay in the wakeup has gone.
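
(For the record, and assuming the new counter is a 64-bit count of
microseconds - my inference from the quoted figure, not a statement of
the implementation: 2^64 / 10^6 seconds is about 1.8 x 10^13 seconds,
or roughly 585,000 years.)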

With the 1/50th-of-a-second rounding gone, GHC is now on a par with
any other program, i.e. down to the resolution of the jiffies within
the O/S.

I've done the non-Windows branch of GHC.Conc - but I'll need some
help with the Windows branch.

Anyone out there able to help with the intricacies of Windows?

Neil
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] GHC concurrency runtime breaks every 497 (and a bit) days

2007-01-22 Thread Neil Davies

In investigating ways of getting less jitter in the GHC concurrency
runtime I've found the following issue:

The GHC concurrency model does all of its time calculations in ticks
- a tick being fixed at 1/50th of a second. It performs all of these
calculations in terms of the number of these ticks since the Unix epoch
(Jan 1970), stored as an Int - unfortunately this 2^31-tick signed
range overflows every 497 (and a bit) days.
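
(The figure falls straight out of the arithmetic: 2^31 ticks at 50
ticks per second is 2^31 / 50 = 42,949,672.96 seconds, which is about
497.1 days.)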

When it does overflow, any threads on the timer queue will either a)
run immediately or b) not be run for another 497 days - depending on
which way the sign changes. Also, any timer calculation that spans the
wraparound will not be dealt with correctly.

Although this doesn't happen often (the last one was around Sat Sep 30
18:32:51 UTC 2006 and the next one will not be until Sat Feb  9
21:00:44 UTC 2008) I don't think we can leave this sort of issue in
the run-time system.

Take a look at the definition of getTicksofDay (in
base/include/HsBase.h (non-windows) / base/cbits/Win32Utils.c) and
getDelay (in base/GHC/Conc.lhs) to understand the details of the
issue.

I don't mind having a go at the base system to remove this problem -
but before I do I would like to canvas some opinions. I'll do that in
a separate thread.

Neil
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] Resolution of time for threadDelay / registerDelay

2007-01-22 Thread Neil Davies

The GHC concurrency runtime currently forces all delays to be
resolved to 1/50th of a second - independent of any of the runtime
settings (e.g. the -V flag) - with rounding up to the next 1/50th of a
second.

This means that, in practice, the jitter in the time that a thread
wakes, as compared with the time requested, can be rather large - 20ms
to 30ms: 20ms from the GHC base-library rounding, along with a jiffy
from the operating system.

As far as I can see there is no reason why there should be any
quantization of delay times - the implementation creates an ordered
list of times which could be at any resolution (say microseconds) -
this would reduce the jitter to that of an operating system jiffy.
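
A sketch of the data structure in question - an ordered queue of
absolute wake-up times, here in microseconds since some epoch; purely
illustrative, not the RTS's actual representation:

import Data.Int (Int64)

type USecs = Int64                -- microseconds: no 1/50 s rounding
type TimerQueue a = [(USecs, a)]  -- kept sorted by wake-up time

-- Insert a waiter; O(n), but ordering is kept at full resolution.
enqueue :: USecs -> a -> TimerQueue a -> TimerQueue a
enqueue t x q = before ++ (t, x) : after
  where (before, after) = span ((<= t) . fst) q

-- Pop every waiter whose time has arrived.
expire :: USecs -> TimerQueue a -> ([a], TimerQueue a)
expire now q = (map snd ready, rest)
  where (ready, rest) = span ((<= now) . fst) q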

Thoughts?

Neil
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Re: help with threadDelay

2006-11-29 Thread Neil Davies

On 29/11/06, Ian Lynagh [EMAIL PROTECTED] wrote:

On Wed, Nov 22, 2006 at 03:37:05PM +, Neil Davies wrote:
 Ian/Simon(s) Thanks - looking forward to the fix.

I've now pushed it to the HEAD.


Thanks - I'll pull it down and give it a try.



  It will help with the real-time environment that I've got.

Lazy evaluation and GHC's garbage collector will probably cause
headaches if you want true real time stuff.


So the wisdom goes, but I decided to try it and it works really
nicely. Yes, the GC can introduce additional jitter, but I can
arrange for GCs to be performed at more convenient times and not on
the time-critical path. I'm reliably able to get sub-millisecond
jitter in the wakeup times - which is fine for the application.
Laziness is fine - it'll help later when I can arrange evaluation
outside the time-critical path.

Yea, I'd love a non-locking, incremental GC that returned within a
fixed (configurable) time - but until that time ...



 Follow on query: Is there a way of seeing the value of this interval
 from within the Haskell program?  Helps in the calibration loop.

I don't follow - what interval do you want to know? You can't find out
how long threadDelay took, but the lower bound (in a correct
implementation) is what you ask for and the upper bound is what you get
by measuring it, as you did in your original message.


I was trying to find out (in the running Haskell program) the value
supplied by (for example) the -V RTS flag.

In order to get low jitter you have to deliberately wake up early and
spin - hey, what are all these extra cores for! - so knowing the
quantisation of the RTS is useful in the calibration loop for deciding
how much to wake up early.
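
A sketch of that wake-early-and-spin technique; the margin 'early' is
the calibration parameter under discussion, in microseconds:

import Control.Concurrent (threadDelay)
import Control.Monad (when)
import Data.Time

-- Sleep until 'early' microseconds before the target, then busy-wait
-- on the clock for the remainder.
wakeAt :: Int -> UTCTime -> IO ()
wakeAt early target = do
    now <- getCurrentTime
    let usLeft = floor (diffUTCTime target now * 1000000) - early
    when (usLeft > 0) $ threadDelay usLeft
    spin
  where
    spin = do
        now <- getCurrentTime
        when (now < target) spin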




Thanks
Ian



___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] Re: help with threadDelay

2006-11-22 Thread Neil Davies

Ian/Simon(s) Thanks - looking forward to the fix. It will help with
the real-time environment that I've got.

Follow on query: Is there a way of seeing the value of this interval
from within the Haskell program?  Helps in the calibration loop.

Neil
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] help with threadDelay

2006-11-15 Thread Neil Davies

Hi,

I'm using GHC 6.6 (debian/etch) - and having some fun with
threadDelay. When compiled without the -threaded compiler argument it
behaves as documented - waits at least the interval - for example:

Tgt/Actual = 0.000000/0.036174s, diff = 0.036174s
Tgt/Actual = 0.000001/0.049385s, diff = 0.049384s
Tgt/Actual = 0.000005/0.049492s, diff = 0.049487s
Tgt/Actual = 0.000025/0.049596s, diff = 0.049571s
Tgt/Actual = 0.000125/0.049655s, diff = 0.04953s
Tgt/Actual = 0.000625/0.04969s, diff = 0.049065s
Tgt/Actual = 0.003125/0.049684s, diff = 0.046559s
Tgt/Actual = 0.015625/0.04962s, diff = 0.033995s
Tgt/Actual = 0.078125/0.099668s, diff = 0.021543s
Tgt/Actual = 0.390625/0.399637s, diff = 0.009012s
Tgt/Actual = 1.953125/1.999515s, diff = 0.04639s
Tgt/Actual = 9.765625/9.799505s, diff = 0.03388s

however when -threaded is used you get some interesting effects,
including returning too early:

Tgt/Actual = 0.000000/0.000093s, diff = 0.000093s
Tgt/Actual = 0.000001/0.000031s, diff = 0.00003s
Tgt/Actual = 0.000005/0.000029s, diff = 0.000024s
Tgt/Actual = 0.000025/0.000028s, diff = 0.000003s
Tgt/Actual = 0.000125/0.000034s, diff = -0.000091s
Tgt/Actual = 0.000625/0.35s, diff = -0.00059s
Tgt/Actual = 0.003125/0.29s, diff = -0.003096s
Tgt/Actual = 0.015625/0.28s, diff = -0.015597s
Tgt/Actual = 0.078125/0.058525s, diff = -0.0196s
Tgt/Actual = 0.390625/0.389669s, diff = -0.000956s
Tgt/Actual = 1.953125/1.939513s, diff = -0.013612s
Tgt/Actual = 9.765625/9.749573s, diff = -0.016052s

The program I used to generate this is :-

import Control.Concurrent
import Data.Time
import Text.Printf

main = mapM_ delay (0 : take 11 (iterate (*5) 1))

delay n = do
  tS <- getCurrentTime
  threadDelay n
  tE <- getCurrentTime

  let n'  = toRational n / 10^6
      n'' = fromRational n' :: Double
      obs = diffUTCTime tE tS

  printf "Tgt/Actual = %0.6f/%s, diff = %s\n"
         n'' (show obs) (show $ obs - fromRational n')
  return ()

Any suggestions?

Neil
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe