Will
is your issue with the spikes in response time, rather than the mean values?
If so, once you’ve reduced the amount of unnecessary mutation, you might want
to take more control over when the GC is taking place. You might want to
disable GC on timer (-I0) and force GC to occur at points you
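A minimal sketch of that pattern (`handleRequest` is hypothetical, not from the thread): disable the idle-GC timer with the RTS flag and trigger a major collection yourself at a quiet point, e.g. right after a reply has gone out.

```haskell
-- Minimal sketch (handleRequest is hypothetical): disable the idle-GC
-- timer and trigger a major collection yourself at a quiet point.
-- Build with: ghc -rtsopts Main.hs
-- Run with:   ./Main +RTS -I0 -RTS
import System.Mem (performGC)

handleRequest :: String -> String
handleRequest msg = "ack: " ++ msg   -- placeholder workload

main :: IO ()
main = do
  putStrLn (handleRequest "ping")
  performGC   -- force a GC now, while no request is in flight
```

The point is that the collection happens when you choose, not when the idle timer fires in the middle of servicing a request.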
response time. Having said that I'm not exactly sure what you mean by
> "mean values".
>
> I will have a look into -I0.
>
> Yes the arrival of messages is constant. This graph shows the number
> of messages that have been published to the system:
> http://i.imgur.com/A
the reason why
BlockedIndefinitelyOnSTM is uncatchable, rather it sounded like this
is what Neil Davies was suggesting to be the reason. Also, I do seem
to recall something like this actually being the case; though it's
unclear whether the STSTM approach would actually be able to solve
Gabriel
Is the underlying issue one of “scope”? STM variables have global scope - would
a better approach be to create a scope for such things, so that some overall
recovery mechanism could handle such an exception within that scope?
Neil
On 14 Jul 2014, at 03:30, Gabriel Gonzalez
And yet others who believe the Axiom of Choice is flawed?
On 8 Aug 2013, at 09:04, Henning Thielemann lemm...@henning-thielemann.de
wrote:
On Wed, 7 Aug 2013, Edward Z. Yang wrote:
I am pleased to announce that Issue 22 of the Monad Reader is now available.
Isn't this a problem of timescale?
Nothing can be backward compatible forever (or at least nothing that
is being developed or maintained).
There will be, in the life of any project of non-trivial length, change.
We rely (and I mean rely) on Haskell s/w that was written over 10 years ago -
we accept
Simon
Looking at the wiki - I take it that the stage 1 compiler can now be used as a
native compiler on the RPi (last line of entry)?
Neil
On 25 Jan 2013, at 10:46, Simon Marlow marlo...@gmail.com wrote:
FYI, I created a wiki page for cross-compiling to Raspberry Pi:
That's good to hear - we've got AFS and the dev-RPi's are using it…
Neil
On 25 Jan 2013, at 12:20, Simon Marlow marlo...@gmail.com wrote:
On 25/01/13 11:23, Neil Davies wrote:
Simon
Looking at the wiki - I take it that the stage 1 compiler can now be used as
native compiler on the RPi
I looked at that route - the issue is that the emulator only has 256M of RAM
(and that's not changeable) - so there are going to be build issues with the
GHC tool chain - it was then that I moved to real hardware.
Neil
On 15 Jan 2013, at 17:01, rocon...@theorem.ca wrote:
On Tue, 15 Jan 2013,
wrote:
In theory we could try a couple variations of builds at the same time.
But at the moment, I'm running low on ideas on what to try.
I just got the, extensive, raspbian patches for 7.4.1 and I'm going to
browse through them when I get time (apt-get source ghc).
On Tue, 15 Jan 2013, Neil
Hi - would another RPi (or even 2 from tomorrow another one arriving) help?
I can make them accessible (i.e. in our DMZ) -
Neil
On 15 Jan 2013, at 16:36, rocon...@theorem.ca wrote:
On Mon, 14 Jan 2013, Thijs Alkemade wrote:
Op 14 jan. 2013, om 17:30 heeft rocon...@theorem.ca het volgende
Understood
I've got another RPi supposed to arrive this week - would more computation
power help anyone out there?
Neil
On 13 Jan 2013, at 15:59, rocon...@theorem.ca wrote:
On Sun, 13 Jan 2013, Neil Davies wrote:
Sounds like we're close - I must admit I've slightly lost track
Hi
I've found myself wanting to get GHC 7.4.2 (need TemplateHaskell for something)
working on the Raspberry Pi - can you (or anyone out there) share where you are
at?
My starting point is the Raspbian image of 2012-12-18-wheezy-raspian that has
GHC 7.4.1 on it...
Neil
On 11 Jan 2013, at
in the final stage with errors like:
/usr/bin/ld: error: libraries/ghc-prim/dist-install/build/cbits/debug.o uses
VFP register arguments,
libraries/ghc-prim/dist-install/build/HSghc-prim-0.3.0.0.o does not
I'm rebuilding with Karel's latest suggestion now.
On Sat, 12 Jan 2013, Neil
Jeff
Are you certain that all the delay can be laid at the GHC runtime?
How much of the end-to-end delay budget is being allocated to you? I recently
moved a static website from a 10-year old server in telehouse into AWS in
Ireland and watched the access time (HTTP GET to check time on top
On 28 November 2012 06:21, Neil Davies semanticphilosop...@gmail.com wrote:
The history is there until you archive (move a checkpoint out into a separate
directory) it and then delete the archive yourself.
the checkpointing just reduces the recovery time (i.e. creates a fixed point in
time); if you were to keep all the checkpoints/archives then you would have the
Except in the complexity gymnastics and the fragility of the conclusions.
Humans can't do large scale complex brain gymnastics - that's why abstraction
exists - if your proof process doesn't abstract (and in the C case you need to
know *everything* about *everything* and have to prove it all in
+1
On 20 May 2012, at 01:23, Simon Michael wrote:
Well said!
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe
We use all three (in various ways as they have arrived on the scene over time)
in production systems.
On 1 Nov 2011, at 02:03, Ryan Newton wrote:
Any example code of using hscassandra package would really help!
I'll ask my student. We may have some simple examples.
Also, I have no
Word of caution
Understand the semantics (and cost profile) of the AWS services first - you
can't just open a HTTP connection and dribble data out over several days and
hope for things to work. It is not a system that has that sort of laziness at
its heart.
AWS doesn't supply a traditional
It's the byte ordering being different between the pcap file and the machine on
which the Haskell is running.
On 12 Oct 2011, at 16:38, mukesh tiwari wrote:
Hello all
I was going through Wireshark and read this pcap file in it. I wrote a
simple Haskell file which reads the pcap file
There is a pcap library - it is a bit of overkill if all you are trying to do
is read pcap files.
I have an (internal - could be made external to the company) library that does
this sort of thing and reads using Binary the pcap file and does the
appropriate re-ordering of the bytes within the
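A sketch of that byte-order handling with Data.Binary.Get (helper names are made up, not from the library mentioned): the pcap global header's magic number tells you whether the file's multi-byte fields match the reader's order or need swapping.

```haskell
-- Sketch (helper names are made up): detect the pcap byte order from the
-- global header's magic number, then pick matching word readers for the
-- remaining multi-byte fields.
import qualified Data.ByteString.Lazy as BL
import Data.Binary.Get
import Data.Word (Word32)

data ByteOrder = SameAsReader | Swapped deriving (Show, Eq)

-- A pcap file starts with a 4-byte magic number: read little-endian it is
-- 0xa1b2c3d4 when the file matches that order, 0xd4c3b2a1 when swapped.
detectOrder :: Get ByteOrder
detectOrder = do
  magic <- getWord32le
  case magic of
    0xa1b2c3d4 -> return SameAsReader
    0xd4c3b2a1 -> return Swapped
    _          -> fail "not a pcap file"

-- Choose the right 32-bit reader for the rest of the header.
word32For :: ByteOrder -> Get Word32
word32For SameAsReader = getWord32le
word32For Swapped      = getWord32be

main :: IO ()
main = print (runGet detectOrder (BL.pack [0xd4, 0xc3, 0xb2, 0xa1]))
-- prints SameAsReader
```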
Hi
I have some long running (multi-gigabit, multi-cpu hour) programs and
as part of trying to speed up I thought I would set the -V0 flag -
when I did this there was a slow space leak that caused it to blow the
heap.
Anyone out there have an explanation? Is there some garbage collection
Yep
We get this as well - as you say, once it is in the cache it works fine
Neil
On 5 Sep 2011, at 18:06, Tristan Ravitch wrote:
I have the Haskell Platform (and my home directory with my
cabal-installed packages) installed on an AFS (a network filesystem)
volume and have been noticing a
Hi
Anyone out there got an elegant solution to being able to fork a haskell thread
and replace its 'stdin' ?
Why do I want this - well I'm using the pcap library and I want to uncompress
data to feed into 'openOffline' (which will take - to designate read from
stdin). Yes I can create a
Thanks
That is what I thought. I'll stick with what I'm doing at present which is to
use temporary files - don't want to get into separate processes which I then
have to co-ordinate.
Cheers
Neil
On 9 Jun 2011, at 17:01, Brandon Allbery wrote:
On Thu, Jun 9, 2011 at 07:40, Neil Davies
Trustworthiness
It provides the means of constructing systems that can be reasoned
about, in which the risks of mistakes can be assessed, in which
concurrency can be exploited without compromising those properties.
I once sat on a plane with a guy who ran a company that made software
to
Hi
We run the whole of our distributed file system (AFS) on a single
AWS micro instance with linux containers inside.
We then use other instances for various things as/when needed (using
kerberos to distribute the management and control to the appropriate
people). For example we have
I've always liked the semantics of Unity - they seem the right sort of
thing to construct such a system on - they also permit concepts such
as partial completion and recovery from failure. Used to use this as
one of the concurrency models I taught - see
Yes, thanks for sharing
..
Congrats to Galois for open sourcing this. Now let the collaboration begin.
Would it be possible to run HaLVM on Amazon EC2?
They do say that you can boot any image from an EBS volume - if you start
playing with this I would be interested to hear any
Chris
What you are observing is the effects of the delay on the operation of
the TCP stacks and the way your 'sleep' works.
You are introducing delay (the sleep time is a 'minimum' and then at
least one o/s jiffy) - that represents one limit. The other limit is
delay/bandwidth product of
Hi
I'm having some difficulty getting the above combination to work, I've
successfully used ODBC against MS but the MySQL stuff is creating some really
interesting error conditions in the HDBC-ODBC module.
My first thought is that it must be me, so does anyone out there have this
combination
Hi
I can't seem to get the combination of HaskellDB/ODBC/MySQL to even get off the
ground, example:
import OmlqDBConnectData (connect'options)
import Database.HaskellDB
import Database.HaskellDB.HDBC.ODBC (odbcConnect)
import Database.HaskellDB.Sql.MySQL (generator)
main = odbcConnect
Why not use kerberos?
We find it works for us, integrates with web (natively or via
WebAuth), remote command execution (remctl) and ssh - widely used,
scales brilliantly.
Neil
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
Hi
Anyone got any hints as to why this linkage error is occurring (GHC 6.12.3, Mac
OSX 10.6 (and 10.5 has same issue)):
cabal install haskelldb-hdbc-odbc --enable-documentation --root-cmd=sudo
--global
Resolving dependencies...
Configuring haskelldb-hdbc-odbc-0.13...
Preprocessing library
If you are looking for a real first - http://en.wikipedia.org/wiki/Ada_Lovelace
- she is even credited with writing the first algorithm for machine
execution.
On 27 Mar 2010, at 20:06, John Van Enk wrote:
http://en.wikipedia.org/wiki/Grace_Hopper
A heck of a lady.
On Sat, Mar 27, 2010
On 2 Mar 2010, at 21:38, Simon Marlow wrote:
On 02/03/10 20:37, Luke Palmer wrote:
On Tue, Mar 2, 2010 at 7:17 AM, Simon Marlowmarlo...@gmail.com
wrote:
For games,
though, we have a very good point that occurs regularly where we
know
that all/most short-lived objects will no longer be
On 3 Mar 2010, at 00:00, Jason Dusek wrote:
2010/02/28 Neil Davies semanticphilosop...@googlemail.com:
I've never observed ones that size. I have an application that runs
in 'rate
equivalent real-time' (i.e. there may be some jitter in the exact
time of
events but it does not accumulate
I don't know that hanging your hat on the deterministic coat hook is
the right thing to do.
The way that I've always looked at this is more probabilistic - you
want the result to arrive within a certain time frame for a certain
operation with a high probability. There is always the
My experience agrees with Pavel.
I've never observed ones that size. I have an application that runs in
'rate equivalent real-time' (i.e. there may be some jitter in the
exact time of events but it does not accumulate). It does have some
visibility of likely time of future events and uses
Neat
Surely there is somewhere in the haskell Twiki that something like
this should live?
Neil
On 12 Dec 2009, at 21:00, Soenke Hahn wrote:
Hi!
Some time ago, I needed to write down graphs in Haskell. I wanted to
be able
to write them down without too much noise, to make them easily
Or maybe it should be renamed
proofObligationsOnUseNeedToBeSuppliedBySuitablyQualifiedIndividualPerformIO
which is what it really is - unsafe in the wrong hands.
Neil
On 4 Dec 2009, at 08:57, Colin Adams wrote:
Please help me understand the holes in Haskell's type system.
Not really
that us humans still have a
role...
Neil
On 4 Dec 2009, at 09:16, Colin Adams wrote:
But the type system doesn't insist on such a proof - so is it not a
hole?
2009/12/4 Neil Davies semanticphilosop...@googlemail.com:
Or maybe it should be renamed
On 1 Dec 2009, at 05:44, Tom Hawkins wrote:
I never considered running Atom generated functions in an asynchronous
loop until you posted your Atomic Fibonacci Server example
(http://leepike.wordpress.com/2009/05/05/an-atomic-fibonacci-server-exploring-the-atom-haskell-dsl/
).
I'm
Look at Data.Binary (binary package)
It will marshall and unmarshall data types for you. If you don't like
its binary encoding you can dive in there and use the same principles
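A round trip with Data.Binary's default encoding might look like this (the payload type is illustrative; actually sending the resulting lazy ByteString over UDP via Network.Socket is elided):

```haskell
-- Sketch: Data.Binary round trip (payload type is illustrative; sending
-- the resulting lazy ByteString over UDP via Network.Socket is elided).
import Data.Binary (encode, decode)

main :: IO ()
main = do
  let payload = encode (42 :: Int, "hello")   -- lazy ByteString on the wire
  print (decode payload :: (Int, String))     -- prints (42,"hello")
```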
Cheers
Neil
On 6 Jun 2009, at 07:13, John Ky wrote:
Hi Haskell Cafe,
I'm trying to send stuff over UDP. To do
Thomas
You can build your own scheduler very easily using what is already
there.
As with any simulation the two things that you need to capture are
dependency and resource contention. Haskell does both the dependency
stuff beautifully and the resource contention. Using STM you can even
get
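The resource-contention half can be sketched with STM (names like `newResource` are illustrative, not from the thread): a counting resource that blocks acquirers via `retry` until a unit is free.

```haskell
-- Sketch of resource contention with STM (names are illustrative): a
-- counting resource that simulated tasks acquire and release; acquire
-- blocks via retry until a unit is free.
import Control.Concurrent.STM

newResource :: Int -> IO (TVar Int)
newResource = newTVarIO

acquire :: TVar Int -> STM ()
acquire r = do
  n <- readTVar r
  if n > 0 then writeTVar r (n - 1) else retry  -- block until available

release :: TVar Int -> STM ()
release r = modifyTVar' r (+ 1)

main :: IO ()
main = do
  r <- newResource 2       -- two units of some simulated resource
  atomically (acquire r)
  atomically (acquire r)   -- exhausted; a third acquire would block
  atomically (release r)
  readTVarIO r >>= print   -- prints 1
```

The `retry` gives you blocking contention without any explicit wait queue, which is what makes STM pleasant for this kind of simulation.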
Hi,
It does not appear that you can access the 'addrFlags' returned by
getAddrInfo, you get the exception: *** Exception: unpackBits: unhandled
bits set: 8
Is this just a MacOS issue? Should I raise a ticket for it?
Cheers
Neil
GHCi, version 6.10.2: http://www.haskell.org/ghc/ :? for help
Johan
Ticket (#8) raised - cheers.
Neil
2009/5/4 Johan Tibell johan.tib...@gmail.com
On Mon, May 4, 2009 at 8:20 PM, Bryan O'Sullivan b...@serpentine.com
wrote:
On Mon, May 4, 2009 at 8:35 AM, Neil Davies
semanticphilosop...@googlemail.com wrote:
It does not appear that you can access
, at 12:51, Duncan Coutts wrote:
On Fri, 2009-05-01 at 09:14 +0100, Neil Davies wrote:
Hi
With the discussion on threads and priority, and given that (in
Stats.c) there are lots of useful pieces of information that the run
time system is collecting, some of which is already visible (like the
total
and lightweight. If anybody is
interested,
I could describe the concept in more details.
Belka
Neil Davies-2 wrote:
Belka
You've described what you don't want - what do you want?
Given that the fundamental premise of a DDoS attack is to saturate
resources
so that legitimate activity is curtailed
that asking
for an upper bound substantially simplifies the problem (though, I could be
wrong) and still gives you the characteristics you need to give a 'time to
complete'.
/jve
On Fri, May 1, 2009 at 4:14 AM, Neil Davies
semanticphilosop...@googlemail.com wrote:
Hi
With the discussion on threads
Belka
You've described what you don't want - what do you want?
Given that the fundamental premise of a DDoS attack is to saturate
resources
so that legitimate activity is curtailed - ultimately the only
response has to be to
discard load, preferably not the legitimate load (and therein
Hi
With the discussion on threads and priority, and given that (in
Stats.c) there are lots of useful pieces of information that the run
time system is collecting, some of which is already visible (like the
total amount of memory mutated) and it is easy to make other measures
available -
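For reference, the modern GHC.Stats API (which postdates this thread) exposes some of the counters Stats.c collects; this hedged sketch assumes a program run with +RTS -T:

```haskell
-- Hedged sketch using the modern GHC.Stats API (which postdates this
-- thread) to read some of the counters Stats.c collects; the program
-- must be run with +RTS -T for the counters to be populated.
import GHC.Stats

main :: IO ()
main = do
  enabled <- getRTSStatsEnabled
  if enabled
    then do
      s <- getRTSStats
      -- number of GCs so far and CPU time spent in the mutator (ns)
      print (gcs s, mutator_cpu_ns s)
    else putStrLn "run with +RTS -T to enable RTS statistics"
```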
Count me in too
I've got a library that endeavours to deliver 'rate-equivalence' - i.e.
there may be some jitter in when the events should have occurred but
their long term rate of progress is stable.
Testing has shown that I can get events to occur at the right time
within 1ms (99%+ of
I end up doing
:set -i../Documents/haskell/SOE/src
to set the search directory so that ghci can find the source.
I've not found how to tailor that 'cd' which is run at start up (but
I've not looked too hard)
Neil
On 16 Apr 2009, at 04:12, Daryoush Mehrtash wrote:
I am having problem
Hi
Looks like the pcap package needs a little tweak for 6.10.2 - programs
compiled with it bomb out with
..error: a C finalizer called back into Haskell.
use Foreign.Concurrent.newForeignPtr for Haskell finalizers.
Is my interpretation of this error message correct?
Cheers
Neil
I've found the pico second accuracy useful in working with 'rate
equivalent' real time systems. Systems where the individual timings
(their jitter) is not critical but the long term rate should be
accurate - the extra precision helps with keeping the error
accumulation under control.
Yesterday it worked
Today it is still working
Haskell is like that!
On 5 Dec 2008, at 23:18, Gwern Branwen wrote:
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA512
Hi everyone. So today I finally got around to something long on my
todo list - a compilation of all the Haskell haikus I've seen
Ketil
You may not be asking the right question here. Your final system's
performance is going to be influenced far more by your algorithm for
updating than by STM (or any other concurrency primitive's) performance.
Others have already mentioned the granularity of locking - but that
one
On 18 Nov 2008, at 10:04, Ketil Malde wrote:
Neil Davies [EMAIL PROTECTED] writes:
You may not be asking the right question here. Your final system's
performance is going to be influenced far more by your algorithm for
updating than by STM (or any other concurrency primitive's)
performance
Gents
Too late to get in on that act - that software was designed over the
last 10/15 years, implemented and trialled over the last 5, and is being
tuned now.
But all is not lost, Haskell is being used! Well, at least in helping
ATLAS people understand how to the data acquisition system
So the answer for persistence is Data.Binary - ASN.1 converter that
can be used in extrema?
On 11/09/2007, Brandon S. Allbery KF8NH [EMAIL PROTECTED] wrote:
On Sep 11, 2007, at 7:01 , Jules Bean wrote:
The actual format used by Data.Binary is not explicitly described
in any standard
Ah,
this begins to answer my question: there isn't really a plan
I would have thought that the first step is to be able to distinguish
which of the hackage packages compile under 6.8 - some annotation to
the hackage DB? Secondly, is there a dependency graph of the stuff on
hackage anywhere?
- they
are seen as a distraction the more we can automate this the better,
the more quickly the libraries are going to get up to date.
As a community we are building a momentum here - let's not derail it
by a lack of attention to detail.
Neil
On 09/09/2007, Neil Davies [EMAIL PROTECTED] wrote:
Ah
Given that GHC 6.8 is just around the corner and, given how it has
re-organised the libraries so that the dependencies in many (most/all)
the packages in the hackage DB are now not correct.
Is there a plan of how to get hackage DB up to speed with GHC 6.8 ?
Cheers
Neil
Cut is a means of preventing backtracking beyond that point - it
prunes the potential search space saying the answer must be built on
the current set of bindings. (Lots of work went into how to automatically
get cuts into programs to make them efficient but without the
programmer having to worry
Hi, I was looking over the libraries for bits of GHC (no doubt a standard form
of
relaxation for readers of this list), and noticed the following statement
(in Data.Unique):
-- | Creates a new object of type 'Unique'. The value returned will
-- not compare equal to any other value of type
*after* you've pressed the send button
Neil Davies wrote:
Hi, I was looking over the libraries for bits of GHC (no doubt a standard
form of
relaxation for readers of this list), and noticed the following statement
(in Data.Unique):
-- | Creates a new object of type 'Unique'. The value
It means that you can view programming as constructing witnesses to
proofs - programs becoming the (finite) steps that, when followed,
construct a proof.
Intuitionism only allows you to make statements that can be proved
directly - no Reductio ad absurdum only good, honest to goodness
Ah -
The state of the world serialized into your representation.
That would be interesting to see
Neil
... ah you meant something different?
On 21/06/07, apfelmus [EMAIL PROTECTED] wrote:
Tom Schrijvers wrote:
I understand that, depending on what the compiler does the result of
Bulat
That was done to death as well in the '80s - data flow architectures
where the execution was data-availability driven. The issue becomes
one of getting the most of the available silicon area. Unfortunately
with very small amounts of computation per work unit you:
a) spend a lot of
Ivan
If I remember correctly there is a caveat in the documentation that
stdin/stdout could be closed when the finalizer is called. So it may
be being called - you just can't see it!
Neil
On 11/05/07, Ivan Tomac [EMAIL PROTECTED] wrote:
Why does the finalizer in the following code never get
Yep - I've seen it in course work I've set in the past - random walk
through the arrangement of symbols in the language (it was a process
algebra work and proof system to check deadlock freedom).
... but ...
Haskell even helps those people - if you've created something that
works (and you are
Hi
Has anyone out there done any work on parsers for SIP (Session
Initiation Protocol) and/or SDP (Session Description Protocol)?
Thought that I would ask before I embarked on it myself.
Neil
It's about the laziness of reading the file. The handle on the file
(underlying readFile) is still open - hence the resource
being in use.
When you add that extra line, the act of writing out the remainder
causes the rest of the input to be fully evaluated and hence the
filehandle is
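The pitfall and the fix can be sketched as follows (the file name is illustrative): force the lazily-read contents before reopening the file for writing.

```haskell
-- Sketch of the pitfall (file name is illustrative): readFile's handle
-- stays open until the contents are forced, so force them (here with
-- length via evaluate) before reopening the same file for writing.
import Control.Exception (evaluate)

main :: IO ()
main = do
  writeFile "demo.txt" "hello"
  s <- readFile "demo.txt"
  _ <- evaluate (length s)             -- fully evaluate: handle now closed
  writeFile "demo.txt" (s ++ " world") -- safe only after forcing
  readFile "demo.txt" >>= putStrLn     -- prints "hello world"
```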
existing encoding system - both the BER (Basic Encoding Rules) and the
PER (Packed Encoding Rules).
If you are looking to target a well supported standard - this would be the one.
Neil
On 26/01/07, Malcolm Wallace [EMAIL PROTECTED] wrote:
Tomasz Zielonka [EMAIL PROTECTED] wrote:
Did you
I've prototyped a fix for this issue which will now only wrap every
585,000 years or so. It also removes the 1/50th of a second timer
resolution for the runtime. This means that the additional 20ms (or
thereabouts) of delay in the wakeup has gone.
This means that GHC is now on a par with any
In investigating ways of getting less jitter in the GHC concurrency
runtime I've found the following issue:
The GHC concurrency model does all of its time calculations in ticks
- a tick being fixed at 1/50th of a second. It performs all of these
calculations in terms of the number of these ticks
The GHC concurrency run time currently forces all delays to be
resolved to 1/50th of a second - independent of any of the run time
settings (e.g. -V flag) - with rounding to the next 1/50th of a second.
This means that, in practice, the jitter in the time that a thread
wakes as compared with can be
On 29/11/06, Ian Lynagh [EMAIL PROTECTED] wrote:
On Wed, Nov 22, 2006 at 03:37:05PM +, Neil Davies wrote:
Ian/Simon(s) Thanks - looking forward to the fix.
I've now pushed it to the HEAD.
Thanks - I'll pull it down and give it a try.
It will help with the real time environment
Ian/Simon(s) Thanks - looking forward to the fix. It will help with
the real time environment that I've got.
Follow on query: Is there a way of seeing the value of this interval
from within the Haskell program? Helps in the calibration loop.
Neil
___
Hi,
I'm using GHC 6.6 (debian/etch) - and having some fun with
threadDelay. When compiled without the -threaded compiler argument it
behaves as documented - waits at least the interval - for example:
Tgt/Actual = 0.00/0.036174s, diff = 0.036174s
Tgt/Actual = 0.01/0.049385s, diff =
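The measurement quoted above can be reproduced with something like this (a sketch assuming the time package; threadDelay only guarantees a minimum wait, so the diff exposes the runtime's timer resolution):

```haskell
-- Sketch of the measurement above: compare a requested delay with the
-- elapsed wall-clock time to expose the runtime's timer resolution
-- (requires the time package).
import Control.Concurrent (threadDelay)
import Data.Time.Clock (getCurrentTime, diffUTCTime)

main :: IO ()
main = do
  let target = 0.01 :: Double          -- seconds
  t0 <- getCurrentTime
  threadDelay (round (target * 1e6))   -- threadDelay takes microseconds
  t1 <- getCurrentTime
  let actual = realToFrac (diffUTCTime t1 t0) :: Double
  putStrLn $ "Tgt/Actual = " ++ show target ++ "/" ++ show actual
          ++ "s, diff = " ++ show (actual - target) ++ "s"
```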