> The problem is that in some applications it is very natural for
> the main thread to add elements to the queue. Which means that
> if you work with a queue limited in size deadlocks can occur.

Quite a strange idea. In a normal application, if memory is really exhausted 
(which almost never happens), a well-behaved process fails rather than waiting 
for itself to free up some memory.
Note that exhausting memory isn't the same as filling a limited-size 
producer/consumer queue, which in a Python program is an aberration (flexible-
length queues are significantly easier to implement) that serves no useful 
purpose. 
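To make the point concrete, here is a minimal sketch (using the Python 3 
spelling `queue`; the module was `Queue` in the Python 2 era of PyGTK): a 
`queue.Queue` constructed without a `maxsize` grows as needed, so `put()` from 
the main thread never blocks and cannot deadlock against the consumer.

```python
import queue
import threading

results = []

q = queue.Queue()  # no maxsize argument: the queue grows as needed

def consumer():
    while True:
        item = q.get()
        if item is None:  # sentinel: stop consuming
            break
        results.append(item)

t = threading.Thread(target=consumer)
t.start()
for i in range(1000):
    q.put(i)   # never blocks, even if the consumer lags behind
q.put(None)    # tell the consumer to shut down
t.join()
```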
 
> Working with an unlimited queue can result in queues that grow
> without bounds.

It's not the queue's fault: if a specific job and scheduling algorithm produce 
a peak of N queued items, either there is enough memory and CPU for them or 
there isn't. A data structure that grows adaptively is useful; failing while 
resources are still available is wasteful.
 
> I agree that queue like behaviour is the way to handle these
> kind of things but my feeling is that python queues are a
> bit too simple to be used in general although they can be
> enough in (most) specific cases.

A "feeling"? Do you have examples?

> It also may depend on whether you are satisfied with a polling
> solution on your queue or want the main thread only to examine the
> queue when it is not empty.

A thread could sleep for a time t, process items until the queue is empty, and 
sleep again.
Increasing t increases latency but also efficiency, covering the spectrum 
between busy-waiting and a very high probability of finding something in the 
queue.
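The sleep-then-drain loop above can be sketched as follows (the function name 
`poll_once` and the interval value are illustrative, not from the original 
post):

```python
import queue
import time

def poll_once(q, t):
    """Sleep for t seconds, then drain every item currently queued."""
    time.sleep(t)
    items = []
    while True:
        try:
            items.append(q.get_nowait())  # non-blocking get
        except queue.Empty:
            break  # queue drained; go back to sleep
    return items

q = queue.Queue()
for i in range(5):
    q.put(i)

# larger t = less CPU spent polling, but more latency per item
drained = poll_once(q, t=0.05)
```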
 
Lorenzo Gatti
_______________________________________________
pygtk mailing list   [EMAIL PROTECTED]
http://www.daa.com.au/mailman/listinfo/pygtk
Read the PyGTK FAQ: http://www.async.com.br/faq/pygtk/