On Fri, Dec 16, 2005 at 04:41:05PM +0300, Bulat Ziganshin wrote:
> Hello Joel,
>
> Friday, December 16, 2005, 3:22:46 AM, you wrote:
>
> >> TZ> You don't have to check "every few seconds". You can determine
> >> TZ> exactly how much you have to sleep - just check the timeout/event with
> >
On Dec 16, 2005, at 1:41 PM, Bulat Ziganshin wrote:
JR> I do not have several fixed waiting periods, they are determined
JR> by the user.
By the user of the library? By the poker player? What exactly do you mean?
By the user of the library. Timers are used imprecisely, to send a
timeout event i
Hello Joel,
Friday, December 16, 2005, 3:22:46 AM, you wrote:
>> TZ> You don't have to check "every few seconds". You can determine
>> TZ> exactly how much you have to sleep - just check the timeout/event with
>> TZ> the lowest ClockTime.
JR> The scenario above does account for the situation
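Tomasz's suggestion above can be sketched concretely: instead of waking up every few seconds, compute the exact delay to the earliest pending timeout. This is only an illustration of the idea, not code from the thread; `UTCTime` from Data.Time stands in for the old-time `ClockTime`, and the function name is mine:

```haskell
import Control.Concurrent (threadDelay)
import Control.Monad (when)
import qualified Data.Map as M
import Data.Time (UTCTime, getCurrentTime, diffUTCTime)

-- Sleep exactly until the earliest pending timeout instead of
-- polling every few seconds.  The map is keyed on expiry time,
-- so the next event to fire is simply the minimum key.
sleepUntilNext :: M.Map UTCTime (IO ()) -> IO ()
sleepUntilNext timers =
  case M.lookupMin timers of
    Nothing       -> return ()           -- nothing pending at all
    Just (due, _) -> do
      now <- getCurrentTime
      let micros = ceiling (diffUTCTime due now * 1000000)
      when (micros > 0) (threadDelay micros)
```

As Bulat notes further down, the remaining wrinkle is that a new, earlier timer can be inserted while the thread is asleep, so a real loop has to be woken up (or re-check on a bounded interval) when the map changes.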
Hello Simon,
Thursday, December 15, 2005, 4:53:27 PM, you wrote:
SM> The 3k threads are still GC'd, but they are not actually *copied* during
SM> GC.
SM> It'll increase the memory overhead per thread from 2k (1k * 2 for
SM> copying) to 4k (4k block, no overhead for copying).
Simon, why not to i
On 16 December 2005 15:19, Lennart Augustsson wrote:
> John Meacham wrote:
>> On Thu, Dec 15, 2005 at 02:02:02PM -, Simon Marlow wrote:
>>
>>> With 2k connections the overhead of select() is going to start to
>>> be a problem. You would notice the system time going up.
>>> -threaded may help with this, because it calls select() less often.
John Meacham wrote:
On Thu, Dec 15, 2005 at 02:02:02PM -, Simon Marlow wrote:
With 2k connections the overhead of select() is going to start to be a
problem. You would notice the system time going up. -threaded may help
with this, because it calls select() less often.
we should be using /dev/poll on systems
On 16.12 07:03, Tomasz Zielonka wrote:
> On 12/16/05, Einar Karttunen wrote:
> > To make matters nontrivial, all the *nix variants use a different,
> > more efficient replacement for poll.
>
> So we should find a library that offers a unified
> interface for all of them, or implement one ourselves.
>
>
On Fri, Dec 16, 2005 at 07:03:46AM +0100, Tomasz Zielonka wrote:
> On 12/16/05, Einar Karttunen wrote:
> > To make matters nontrivial, all the *nix variants use a different,
> > more efficient replacement for poll.
>
> So we should find a library that offers a unified
> interface for all of them, or implement one ourselves.
On 12/16/05, Einar Karttunen wrote:
> To make matters nontrivial, all the *nix variants use a different,
> more efficient replacement for poll.
So we should find a library that offers a unified
interface for all of them, or implement one ourselves.
I am pretty sure such a library exists. It should fall
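The unified interface Tomasz suggests could be as simple as a record of functions, with one constructor per platform backend. The sketch below is entirely hypothetical (`Poller`, `fakePoller`, and the backend names are mine, not an existing library); the in-memory backend exists only to show the shape:

```haskell
import Data.IORef
import System.Posix.Types (Fd)

data Event = Readable | Writable deriving (Eq, Show)

-- One value of this type per platform backend, e.g.
--   epollPoller   :: IO Poller   -- Linux
--   kqueuePoller  :: IO Poller   -- *BSD, OS X
--   devPollPoller :: IO Poller   -- Solaris
data Poller = Poller
  { register :: Fd -> Event -> IO ()     -- start watching an fd
  , wait     :: Int -> IO [(Fd, Event)]  -- block up to n ms, return ready fds
  }

-- A trivial in-memory backend for illustration: it reports every
-- registered fd as immediately ready.
fakePoller :: IO Poller
fakePoller = do
  ref <- newIORef []
  return Poller
    { register = \fd ev -> modifyIORef ref ((fd, ev) :)
    , wait     = \_ -> readIORef ref
    }
```

The point of the record-of-functions style is that the scheduler code is written once against `Poller` and the platform choice happens at construction time.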
Einar Karttunen writes:
> To make matters nontrivial, all the *nix variants use a different,
> more efficient replacement for poll.
> Solaris has /dev/poll
> *BSD (and OS X) has kqueue
> Linux has epoll
Since this is 'cafe, here's a page that has some performance testing of
epoll:
http://lse.sourcefor
On 15.12 17:14, John Meacham wrote:
> On Thu, Dec 15, 2005 at 02:02:02PM -, Simon Marlow wrote:
> > With 2k connections the overhead of select() is going to start to be a
> > problem. You would notice the system time going up. -threaded may help
> > with this, because it calls select() less often.
On Thu, Dec 15, 2005 at 02:02:02PM -, Simon Marlow wrote:
> With 2k connections the overhead of select() is going to start to be a
> problem. You would notice the system time going up. -threaded may help
> with this, because it calls select() less often.
we should be using /dev/poll on systems
Bulat,
On Dec 14, 2005, at 9:00 PM, Bulat Ziganshin wrote:
TZ> You don't have to check "every few seconds". You can determine
TZ> exactly how much you have to sleep - just check the timeout/event with
TZ> the lowest ClockTime.
this scenario doesn't account for the fact that we can receive a new request while
Hello Tomasz,
Wednesday, December 14, 2005, 10:48:43 PM, you wrote:
TZ> You don't have to check "every few seconds". You can determine
TZ> exactly how much you have to sleep - just check the timeout/event with
TZ> the lowest ClockTime.
this scenario doesn't account for the fact that we can receive a new request while
Hello Joel,
Thursday, December 15, 2005, 5:13:17 PM, you wrote:
>>> The statistics are phys/VM, CPU usage in % and #packets/transfer
>>> speed
>>>
>>> Total: 1345, Lobby: 1326, Failed: 0, 102/184, 50%, 90/8kb
>>> Total: 1395, Lobby: 1367, Failed: 2
>>> Total: 1421, Lobby: 1394, Failed: 4
On Dec 15, 2005, at 2:02 PM, Simon Marlow wrote:
Hmm, your machine is spending 50% of its time doing nothing, and the
network traffic is very low. I wouldn't expect 2k connections to pose
any problem at all, so further investigation is definitely required.
With 2k connections the overhead of select() is going to start to be a problem.
On Dec 15, 2005, at 2:02 PM, Simon Marlow wrote:
The statistics are phys/VM, CPU usage in % and #packets/transfer
speed
Total: 1345, Lobby: 1326, Failed: 0, 102/184, 50%, 90/8kb
Total: 1395, Lobby: 1367, Failed: 2
Total: 1421, Lobby: 1394, Failed: 4
Total: 1490, Lobby: 146
On 15 December 2005 10:21, Joel Reymont wrote:
> Here are statistics that I gathered. I'm almost done modifying the
> program to use 1 timer thread instead of 1 per bot as well as writing
> to the socket from the writer thread. This should reduce the number
> of threads from 6k (2k x 3) to 2k plus
On Thu, Dec 15, 2005 at 10:46:55AM +, Joel Reymont wrote:
> One idea would be to index the timer on ThreadId and name and stick
> Nothing into the timer action once the timer has been fired/stopped.
> Since timers are restarted with the same name quite often this would
> just keep one relatively big map in memory.
One idea would be to index the timer on ThreadId and name and stick
Nothing into the timer action once the timer has been fired/stopped.
Since timers are restarted with the same name quite often this would
just keep one relatively big map in memory. The additional ThreadId
would help distin
After a chat with Einar on #haskell I realized that I would have,
say, 4k expiring timers and maybe 12k timers that are started and
then killed. That would make a 16k element map on which 3/4 of the
operations are O(n=16k) (Einar).
I need a better abstraction I guess. I also need to be able
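One better abstraction for the 16k-timer case is to key the map on `(expiry, name)` rather than name alone, so "what fires next" is an O(log n) `lookupMin` and "collect everything due" is a single ordered split, instead of an O(n) filter. The type names below are mine, sketched for illustration:

```haskell
import qualified Data.Map as M

type Expiry = Integer                 -- stands in for ClockTime
type Queue  = M.Map (Expiry, String) (IO ())

-- The earliest pending timer, if any: O(log n).
nextDue :: Queue -> Maybe (Expiry, String)
nextDue q = fmap fst (M.lookupMin q)

-- Split off every timer with expiry <= now; the caller fires the
-- first half and keeps the rest.  spanAntitone walks the ordered
-- keys, so no full-map filter is needed.
popDue :: Expiry -> Queue -> ([IO ()], Queue)
popDue now q =
  let (due, rest) = M.spanAntitone (\(t, _) -> t <= now) q
  in (M.elems due, rest)
```

Stopping a timer by name still needs a secondary name-to-expiry index to stay cheap; that is the usual trade-off of a priority-queue layout.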
Here are statistics that I gathered. I'm almost done modifying the
program to use 1 timer thread instead of 1 per bot as well as writing
to the socket from the writer thread. This should reduce the number
of threads from 6k (2k x 3) to 2k plus change.
It appears that +RTS -k3k does make a d
Something like this. If someone inserts a timer while we are doing
our checking we can always catch it on the next iteration of the loop.
--- Now runs unblocked
checkTimers :: IO ()
checkTimers =
    do t <- readMVar timers -- takes it and puts it back
       case M.size t of
         0 -> return () -- no timers pending, nothing to do
On Dec 15, 2005, at 12:08 AM, Einar Karttunen wrote:
timeout = 500 -- 1 second
Is that correct?
I think so. threadDelay takes microseconds.
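Worth pinning down, since this is exactly what Einar is querying: `threadDelay` counts in microseconds, so if `timeout = 500` is passed straight to `threadDelay` it sleeps for half a millisecond, not one second. A minimal sketch of the constants (my names):

```haskell
import Control.Concurrent (threadDelay)

-- threadDelay takes microseconds, so:
oneSecond, oneMillisecond :: Int
oneSecond      = 1000000
oneMillisecond = 1000
-- "timeout = 500" passed directly to threadDelay would therefore
-- sleep for 0.5 ms, not the 1 second the comment claims.

main :: IO ()
main = threadDelay oneMillisecond
```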
Here is a nice trick for you:
Thanks!
--- The filter expression is kind of long...
stopTimer :: String -> IO ()
stopTimer name =
    block $
On Thu, Dec 15, 2005 at 09:32:38AM +, Joel Reymont wrote:
> Well, my understanding is that once I do a takeMVar I must do a
> putMVar under any circumstances. This is why I was blocking checkTimers.
Perhaps you could use modifyMVar:
http://www.haskell.org/ghc/docs/latest/html/libraries/base
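With `modifyMVar_` the take and the put are paired automatically, and the MVar is restored even if an exception arrives mid-update, so the explicit block/takeMVar/putMVar dance is unnecessary. A sketch of `stopTimer` in that style (the `Timers` shape here is assumed for illustration, passing the MVar explicitly rather than using the global):

```haskell
import Control.Concurrent.MVar
import qualified Data.Map as M

type Timers = M.Map String (IO ())

-- Removing a timer by name: modifyMVar_ guarantees the MVar is
-- refilled whether the update succeeds or throws.
stopTimer :: MVar Timers -> String -> IO ()
stopTimer tv name = modifyMVar_ tv (return . M.delete name)
```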
Well, my understanding is that once I do a takeMVar I must do a
putMVar under any circumstances. This is why I was blocking checkTimers.
On Dec 15, 2005, at 12:08 AM, Einar Karttunen wrote:
Is there a reason you need block for checkTimers?
What you certainly want to do is ignore exceptions
fr
Hello Joel,
Wednesday, December 14, 2005, 7:55:36 PM, you wrote:
JR> With a 1 minute keep-alive timeout system is starting to get stressed
JR> almost right away. There's verbose logging going on and almost every
JR> event/packet sent and received is traced. The extra logging of the
JR> timeou
On 14.12 23:07, Joel Reymont wrote:
> Something like this? Comments are welcome!
> timeout :: Int
> timeout = 500 -- 1 second
Is that correct?
> {-# NOINLINE timers #-}
> timers :: MVar Timers
> timers = unsafePerformIO $ newMVar M.empty
>
> --- Call this first
> initTimers :: IO ()
> initT
On Dec 14, 2005, at 7:48 PM, Tomasz Zielonka wrote:
You don't have to check "every few seconds". You can determine
exactly how much you have to sleep - just check the timeout/event with
the lowest ClockTime.
Something like this? Comments are welcome!
It would be cool to not have to export an
On Dec 14, 2005, at 7:48 PM, Tomasz Zielonka wrote:
You don't have to check "every few seconds". You can determine
exactly how much you have to sleep - just check the timeout/event with
the lowest ClockTime.
Right, thanks for the tip! I would need to wait a predefined amount of
time when the
On Wed, Dec 14, 2005 at 07:11:15PM +, Joel Reymont wrote:
> I figure I can have a single timer thread and a timer map keyed on
> ClockTime. I would try to get the min. key from the map every few
> seconds, compare it to clock time, fire of the event as needed,
> remove the timer and repeat
On Dec 14, 2005, at 6:06 PM, Bulat Ziganshin wrote:
as I already said, you can write to the socket directly in your worker
thread
True. 1 less thread to deal with... multiplied by 4,000.
you can use just one timeout thread for all your bots. If this
timeout is constant across the program run, then
Hello Joel,
Wednesday, December 14, 2005, 7:55:36 PM, you wrote:
JR> In my current architecture I launch two threads per socket where
JR> the socket reader places results in a TMVar and the socket writer
JR> takes input from a TChan.
as I already said, you can write to the socket directly in your worker thread
Folks,
In my current architecture I launch two threads per socket where
the socket reader places results in a TMVar and the socket writer
takes input from a TChan. I also have the worker thread that does the
bulk of packet processing and a timer thread. The timer thread sleeps
for a few m
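The per-socket setup described above has roughly this shape. The sketch is my reconstruction, not Joel's code: the function name is invented, and separate read/write handles stand in for the two directions of the socket:

```haskell
import Control.Concurrent (forkIO)
import Control.Concurrent.STM
import Control.Monad (forever)
import System.IO (Handle, hGetLine, hPutStrLn, hFlush)

-- Reader thread fills a TMVar with incoming lines; writer thread
-- drains a TChan of outgoing ones.  The worker sits between them.
launchSocketThreads :: Handle -> Handle -> IO (TMVar String, TChan String)
launchSocketThreads rd wr = do
  incoming <- newEmptyTMVarIO
  outgoing <- newTChanIO
  _ <- forkIO $ forever $ do        -- socket reader
         line <- hGetLine rd
         atomically (putTMVar incoming line)
  _ <- forkIO $ forever $ do        -- socket writer
         msg <- atomically (readTChan outgoing)
         hPutStrLn wr msg >> hFlush wr
  return (incoming, outgoing)
```

At 2k connections this is 4k threads before the workers and timers are counted, which is exactly the multiplication the thread is trying to reduce.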