On Mon, Nov 08, 2004 at 12:23:23PM +0100, Gatti Lorenzo wrote:
> > The problem is that in some applications it is very natural for
> > the main thread to add elements to the queue. Which means that
> > if you work with a queue limited in size deadlocks can occur.
> 
> Quite a strange idea. In a normal application, if memory is really
> exhausted (almost never) a well behaved process fails instead of
> waiting for itself to free up some memory.

It has nothing to do with memory exhaustion, but with the choice
of limiting the number of slots in your queue.
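To make the deadlock concrete: a minimal sketch with the standard `queue` module (spelled `Queue` in the Python of this thread's day). If the main thread is both the producer and the only consumer, a blocking put() on a full bounded queue can never be unblocked. put_nowait() is used here so the example demonstrates the condition without actually hanging:

```python
import queue  # this module was called "Queue" in older Pythons

q = queue.Queue(maxsize=2)   # bounded queue: two slots
q.put("a")
q.put("b")                   # the queue is now full

# If the main thread is also the only consumer, a plain q.put("c")
# here would block forever: the get() that frees a slot can never
# run while the main thread is stuck inside put().  put_nowait()
# raises queue.Full instead of blocking.
would_block = False
try:
    q.put_nowait("c")
except queue.Full:
    would_block = True
print("a blocking put() would deadlock here:", would_block)
```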

> Note that exhausting
> memory isn't the same as filling a limited size producer/consumer
> queue, which in a Python program is an aberration (flexible length
> queues are significantly easier to implement) that serves no useful purpose. 

No, that is not an aberration. If you have a producer thread that
puts items on the queue faster than the consumer can take them off,
a queue with limited size will block your producer from
time to time.
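That blocking is exactly the point: the bounded queue acts as backpressure. A small sketch (the 3-slot limit and the timings are made up for illustration) where a fast producer feeds a deliberately slow consumer, and the backlog never exceeds the queue's capacity:

```python
import queue
import threading
import time

q = queue.Queue(maxsize=3)   # bounded: at most 3 items in flight
max_seen = 0

def producer():
    for i in range(10):
        q.put(i)             # blocks whenever the queue is full

def consumer():
    global max_seen
    for _ in range(10):
        max_seen = max(max_seen, q.qsize())
        q.get()
        time.sleep(0.01)     # deliberately slower than the producer

t1 = threading.Thread(target=producer)
t2 = threading.Thread(target=consumer)
t1.start(); t2.start()
t1.join(); t2.join()
print("backlog never exceeded", max_seen)
```

With an unbounded queue the producer would finish immediately and all ten items would pile up; here put() throttles it to the consumer's pace.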

> > Working with an unlimited queue can result in queues that grow
> > without bounds.
> 
> It's not the queue's fault: if the specific job and the scheduling
> algorithm cause a peak of N queue items, either there is enough memory
> and CPU for them or not. Having a data structure that grows adaptively
> is useful, while failing with available resources remaining is inefficient.

It has nothing to do with having enough CPU and memory. I have had a
program where the producers put so many elements on the queue that the
consumer couldn't keep up, and since the consumer was responsible
for putting a representation of the finished work in a window, the whole
thing became sluggish and hopelessly behind.
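The failure mode is easy to reproduce in miniature (the item counts are invented for illustration): with an unbounded queue, nothing slows the producer down, so the gap between what was produced and what the consumer has displayed only grows:

```python
import queue

q = queue.Queue()            # unbounded: put() never blocks
# The producer runs flat out...
for i in range(1000):
    q.put(i)
# ...while in the same span of time the consumer (here, the GUI
# thread drawing results in a window) has only handled a handful.
for _ in range(10):
    q.get()
print("backlog:", q.qsize())
```

The display then lags by the whole backlog, which is the "hopelessly behind" behaviour described above.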

> > I agree that queue like behaviour is the way to handle these
> > kind of things but my feeling is that python queues are a
> > bit too simple to be used in general although they can be
> > enough in (most) specific cases.
> 
> A "feeling"? Do you have examples?

I'll have to see if I can dig them up. I spent some time writing
a demo program with threads and pygtk, and even working with a
queue doesn't always prevent trouble. I was interested in getting
a working program, so I didn't care much about keeping the problem
cases around, and I'm not sure I can reproduce them. But if you are
interested I'll have a look.

> > It also may depend on whether you are satisfied with a polling
> > solution on your queue or want the main thread only to examine the
> > queue when it is not empty.
> 
> A thread could sleep for a time t, process items until the queue is
> empty and sleep again.  Increasing t increases latency but also
> efficiency, covering the spectrum between busy waiting and a very
> high probability of finding something in the queue.

But that doesn't help if your producers are faster than your consumers.
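For reference, the sleep-and-drain loop Lorenzo describes would look roughly like this (the parameter names and round count are invented for the sketch); it trades latency for efficiency via t, but if production outpaces consumption the drained batches just keep growing:

```python
import queue
import time

q = queue.Queue()

def poll_loop(t=0.05, rounds=3):
    """Sleep for t, drain the queue until empty, then sleep again."""
    handled = []
    for _ in range(rounds):
        time.sleep(t)        # larger t: less busy waiting, more latency
        while True:          # process items until the queue is empty
            try:
                handled.append(q.get_nowait())
            except queue.Empty:
                break
    return handled

for i in range(5):
    q.put(i)
result = poll_loop()
print("handled:", result)
```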

-- 
Antoon Pardon
_______________________________________________
pygtk mailing list   [EMAIL PROTECTED]
http://www.daa.com.au/mailman/listinfo/pygtk
Read the PyGTK FAQ: http://www.async.com.br/faq/pygtk/
