ChrisK wrote:
That is new. Ah, I see GHC.Conc.forkIO now has a note:
GHC note: the new thread inherits the /blocked/ state of the parent
(see 'Control.Exception.block').
BUT...doesn't this change some of the semantics of old code that used forkIO ?
Yes, it is a change to the semantics. I
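The inherited masking behaviour can be checked directly. A minimal sketch, using the modern getMaskingState/mask_ API (the successor of Control.Exception.block in current GHC); `childMaskingState` is a hypothetical helper name:

```haskell
import Control.Concurrent (forkIO)
import Control.Concurrent.MVar (newEmptyMVar, putMVar, takeMVar)
import Control.Exception (MaskingState (..), getMaskingState, mask_)

-- Fork a thread inside a masked region and report the masking
-- state the child observes at startup.
childMaskingState :: IO MaskingState
childMaskingState = do
  box <- newEmptyMVar
  mask_ $ do
    _ <- forkIO (getMaskingState >>= putMVar box)
    return ()
  takeMVar box

main :: IO ()
main = childMaskingState >>= print
```

A thread forked outside the masked region would report Unmasked instead; inside it, the child inherits the parent's masked state.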
ChrisK wrote:
A safer gimmick...
Ben Franksen wrote:
tickWhileDoing :: String -> IO a -> IO a
tickWhileDoing msg act = do
  hPutStr stderr msg
  hPutChar stderr ' '
  hFlush stderr
  start_time <- getCPUTime
  tickerId <- forkIO ticker
  ... an async exception here will leave the ticker running
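One safer shape (a sketch, not necessarily the exact gimmick ChrisK posted): acquire the ticker thread with bracket, so it is killed even if an async exception interrupts the action; the one-second tick interval is an assumption:

```haskell
import Control.Concurrent (forkIO, killThread, threadDelay)
import Control.Exception (bracket)
import System.IO (hFlush, hPutStr, stderr)

tickWhileDoing :: String -> IO a -> IO a
tickWhileDoing msg act = do
  hPutStr stderr (msg ++ " ")
  hFlush stderr
  -- bracket runs the acquire action masked, so there is no window
  -- in which the ticker can leak; killThread runs on any exit.
  bracket (forkIO ticker) killThread (const act)
  where
    ticker = do
      threadDelay 1000000        -- one tick per second (assumed interval)
      hPutStr stderr "."
      hFlush stderr
      ticker
```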
david48 wrote:
Threads won't give you a speedup unless you run the program on a
multi-core/multi-proc machine.
That's actually not true. Threads allow you to manage your IO blocking
better, and not letting IO block your whole program can certainly speed
it up by a couple of orders of magnitude.
Jules Bean wrote:
ChrisK wrote:
A safer gimmick...
Ben Franksen wrote:
tickWhileDoing :: String -> IO a -> IO a
tickWhileDoing msg act = do
  hPutStr stderr msg
  hPutChar stderr ' '
  hFlush stderr
  start_time <- getCPUTime
  tickerId <- forkIO ticker
  ... an async exception here will leave
Brad Clow wrote:
On Nov 28, 2007 11:30 AM, Matthew Brecknell [EMAIL PROTECTED] wrote:
Even with threads, results are evaluated only when they are needed (or
when forced by a strictness annotation). So the thread that needs a
result (or forces it) first will be the one to evaluate it.
So does
On 11/27/07, Matthew Brecknell [EMAIL PROTECTED] wrote:
wait_first :: [Wait a] -> IO (a, [Wait a])
wait_first [] = error "wait_first: nothing to wait for"
wait_first ws = atomically (do_wait ws) where
  do_wait [] = retry
  do_wait (Wait w : ws) = do
    r <- readTVar w
    case r of
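For reference, a hedged completion of the truncated snippet above, assuming the `newtype Wait a = Wait (TVar (Maybe a))` representation used elsewhere in the thread: return the first filled result together with the still-pending waits.

```haskell
import Control.Concurrent.STM

-- Assumed representation: a slot that a worker fills with Just a.
newtype Wait a = Wait (TVar (Maybe a))

wait_first :: [Wait a] -> IO (a, [Wait a])
wait_first [] = error "wait_first: nothing to wait for"
wait_first ws = atomically (do_wait ws)
  where
    -- Retry until some TVar holds a result; keep unfilled waits
    -- in order so the caller can wait on them again.
    do_wait [] = retry
    do_wait (Wait w : rest) = do
      r <- readTVar w
      case r of
        Just a  -> return (a, rest)
        Nothing -> do
          (a, pending) <- do_wait rest
          return (a, Wait w : pending)
```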
Belatedly I realized that this answer should have been going to the list:
---
ChrisK wrote:
On Wednesday, 28 November 2007, you wrote:
A safer gimmick...
Ben Franksen wrote:
tickWhileDoing :: String -> IO a -> IO a
tickWhileDoing msg act = do
  hPutStr stderr msg
  hPutChar
Ryan Ingram said:
Interesting, although this seems like a perfect use for orelse:
wait_stm :: Wait a -> STM a
wait_stm (Wait w) = readTVar w >>= maybe retry return
wait :: Wait a -> IO a
wait w = atomically $ wait_stm w
wait_first :: [Wait a] -> IO (a, [Wait a])
wait_first [] = error
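The orElse formulation Ryan is hinting at can be sketched like this, assuming the same `Wait` representation; `wait_any` is a hypothetical name for the simpler variant that returns only the first result, without the list of pending waits:

```haskell
import Control.Concurrent.STM

-- Assumed representation, as elsewhere in the thread.
newtype Wait a = Wait (TVar (Maybe a))

wait_stm :: Wait a -> STM a
wait_stm (Wait w) = readTVar w >>= maybe retry return

-- Compose the alternatives with orElse: each branch retries until
-- its slot is filled, and orElse falls through to the next branch.
wait_any :: [Wait a] -> IO a
wait_any [] = error "wait_any: nothing to wait for"
wait_any ws = atomically (foldr1 orElse (map wait_stm ws))
```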
On Nov 28, 2007 11:07 PM, Maurício [EMAIL PROTECTED] wrote:
Sorry, I don't agree. I try to write things in a
way that, when you read it, you can get an intuition
about why it's doing what it's doing; even when the
That's what comments are for :)
generate. So, instead of checking if threads have
Bryan O'Sullivan wrote:
But wait, there's more! If you're using the threaded RTS, you often
need to know how many threads you can run concurrently, for example to
explicitly split up a compute-bound task. This value is exposed at
runtime by the numCapabilities variable in the GHC.Conc
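A small sketch of that use: split a work list into one chunk per capability, e.g. to hand each chunk to its own worker. `chunkFor` is a hypothetical helper, not part of GHC.Conc:

```haskell
import GHC.Conc (numCapabilities)

-- Split xs into n roughly equal chunks (the last may be shorter).
chunkFor :: Int -> [a] -> [[a]]
chunkFor n xs = go xs
  where
    size = max 1 ((length xs + n - 1) `div` n)
    go [] = []
    go ys = let (c, rest) = splitAt size ys in c : go rest

main :: IO ()
main = print (length (chunkFor numCapabilities [1 .. 100 :: Int]))
```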
On Nov 29, 2007, at 13:38 , Andrew Coppin wrote:
Bryan O'Sullivan wrote:
But wait, there's more! If you're using the threaded RTS, you
often need to know how many threads you can run concurrently, for
example to explicitly split up a compute-bound task. This value
is exposed at runtime
A safer gimmick...
Ben Franksen wrote:
tickWhileDoing :: String -> IO a -> IO a
tickWhileDoing msg act = do
  hPutStr stderr msg
  hPutChar stderr ' '
  hFlush stderr
  start_time <- getCPUTime
  tickerId <- forkIO ticker
  ... an async exception here will leave the ticker running
  res <-
After I have spawned a thread with 'forkIO',
how can I check whether that thread's work has
finished already? Or wait for it?
The best way to do this is using
Control.Exception.finally: (...)
Changing ugly code for bad performance is not
that usual in Haskell code :(
I think you
On Nov 28, 2007 5:07 PM, Maurício [EMAIL PROTECTED] wrote:
Sorry if I sound rude. I just saw a place for a
small joke, and used it. Chris's code is pretty
elegant for what it is supposed to do. However,
knowing if a thread has finished is just 1 bit of
information. There's probably a reason why
Dan Weston wrote:
Silly or not, if I compile with -threaded, I always link in the
one-liner C file:
char *ghc_rts_opts = "-N2";
so I don't have to remember at runtime whether it should run with 2
cores or not. This just changes the default to 2 cores, so I am still
free to run on only one
Andrew Coppin wrote:
Dan Weston wrote:
Silly or not, if I compile with -threaded, I always link in the
one-liner C file:
char *ghc_rts_opts = "-N2";
Ah... you learn something useful every day! I was going to ask on IRC
whether there's any way to do this - but now I don't need to bother.
Maurício wrote:
Hi,
After I have spawned a thread with
'forkIO', how can I check whether that
thread's work has finished already?
Or wait for it?
Thanks,
Maurício
The best way to do this is using Control.Exception.finally:
myFork :: IO () -> IO (ThreadId, MVar ())
myFork todo = do
  m <-
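A hedged completion of the truncated definition, in the shape the advice describes: the MVar is filled via finally when the thread finishes, however it finishes, so waiting is just a takeMVar.

```haskell
import Control.Concurrent (MVar, ThreadId, forkIO, newEmptyMVar,
                           putMVar, takeMVar)
import Control.Exception (finally)

-- Fork an action and return its ThreadId together with an MVar
-- that is filled when the thread ends (normally or by exception).
myFork :: IO () -> IO (ThreadId, MVar ())
myFork todo = do
  m <- newEmptyMVar
  t <- forkIO (todo `finally` putMVar m ())
  return (t, m)
```

To wait for the thread: `(_, m) <- myFork act; takeMVar m`.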
Hi,
After I have spawned a thread with 'forkIO',
how can I check whether that thread's work has
finished already? Or wait for it?
The best way to do this is using
Control.Exception.finally: (...)
These techniques are needed because forkIO is a
very lightweight threading mechanism. Adding
briqueabraque:
Hi,
After I have spawned a thread with 'forkIO',
how can I check whether that thread's work has
finished already? Or wait for it?
The best way to do this is using
Control.Exception.finally: (...)
These techniques are needed because forkIO is a
very lightweight
If you would like to wait on multiple threads, you can use STM like so:
import Control.Concurrent
import Control.Concurrent.STM
import Control.Exception
main = do
tc <- atomically $ newTVar 2
run tc (print (last [1..1]))
run tc (print (last [1..11000]))
print "Waiting"
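A hedged reconstruction of the missing pieces of that snippet: `run` forks the action and decrements the counter when it finishes, and the main thread blocks with retry until the counter reaches zero. `waitAllDone` is a hypothetical name for the waiting step.

```haskell
import Control.Concurrent (forkIO)
import Control.Concurrent.STM
import Control.Exception (finally)

-- Fork an action; decrement the shared counter when it ends,
-- whether it ends normally or by exception.
run :: TVar Int -> IO () -> IO ()
run tc act = do
  _ <- forkIO (act `finally` done)
  return ()
  where
    done = atomically $ do
      n <- readTVar tc
      writeTVar tc (n - 1)

-- Block (via retry) until every forked action has finished.
waitAllDone :: TVar Int -> IO ()
waitAllDone tc = atomically $ do
  n <- readTVar tc
  if n == 0 then return () else retry
```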
I was just watching top while executing this and noticed that it
really only used one core (I am using GHC 6.8.1 on a MacBook). Does
anyone know why?
On Nov 28, 2007 10:34 AM, Brad Clow [EMAIL PROTECTED] wrote:
If you would like to wait on multiple threads, you can use STM like so:
import
On Tuesday 27 November 2007 18:46:00 Brad Clow wrote:
I was just watching top while executing this and noticed that it
really only used one core (I am using GHC 6.8.1 on a MacBook). Does
anyone know why?
Did you compile with -threaded, and run with +RTS -N2?
Cheers,
Spencer Janssen
Silly mistake. I had compiled with -threaded, but forgot the +RTS -N2.
However, I have a more complex app, where I haven't forgotten to use
the right flags :-) and the utilisation of cores is very poor. I am
thinking it is due to laziness. I am currently wondering how GHC
handles the case where
Silly or not, if I compile with -threaded, I always link in the
one-liner C file:
char *ghc_rts_opts = "-N2";
so I don't have to remember at runtime whether it should run with 2
cores or not. This just changes the default to 2 cores, so I am still
free to run on only one core with the
Brad Clow:
However, I have a more complex app, where I haven't forgotten to use
the right flags :-) and the utilisation of cores is very poor. I am
thinking it is due to laziness. I am currently wondering how GHC
handles the case where the function that is being forked uses lazy
arguments?
On Nov 28, 2007 11:30 AM, Matthew Brecknell [EMAIL PROTECTED] wrote:
Even with threads, results are evaluated only when they are needed (or
when forced by a strictness annotation). So the thread that needs a
result (or forces it) first will be the one to evaluate it.
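A small sketch of the consequence: if the forked thread, rather than the consumer, should do the work, force the result in the child, e.g. with Control.Exception.evaluate (which evaluates to weak head normal form). `forkEval` is a hypothetical helper:

```haskell
import Control.Concurrent (MVar, forkIO, newEmptyMVar, putMVar)
import Control.Exception (evaluate)

-- Fork a thread that forces the value (to WHNF) before handing it
-- over; without the evaluate, the child would just write an
-- unevaluated thunk and the reader would do the work instead.
forkEval :: a -> IO (MVar a)
forkEval x = do
  m <- newEmptyMVar
  _ <- forkIO (evaluate x >>= putMVar m)
  return m
```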
So does GHC implement some
Brad Clow:
If you would like to wait on multiple threads, you can use STM like so:
import Control.Concurrent
import Control.Concurrent.STM
import Control.Exception
main = do
tc - atomically $ newTVar 2
run tc (print (last [1..1]))
run tc (print (last [1..11000]))
Brad Clow:
So does GHC implement some synchronisation given that a mutation is
occurring under the covers, i.e. the thunk is being replaced by the
result?
I believe so, but I have no idea of the details.
I am using a TVar to build results of forked functions in. I had
a quick go at changing to
On Nov 28, 2007 2:39 PM, Matthew Brecknell [EMAIL PROTECTED] wrote:
Brad Clow:
Don's library is fairly simple. It adds a strictness annotation to force
each value you write to an MVar or Chan, so for example,
(Control.Concurrent.MVar.Strict.putMVar v x) is basically equivalent to
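The equivalence being described can be sketched as follows. This forces only to WHNF with `$!`; the strict-concurrency library can force more deeply (via NFData), so this is an approximation, and `putMVarStrict` is a hypothetical name:

```haskell
import Control.Concurrent.MVar (MVar, putMVar)

-- Force the value before it goes into the MVar, so the writer,
-- not the reader, pays for the evaluation.
putMVarStrict :: MVar a -> a -> IO ()
putMVarStrict v x = putMVar v $! x
```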