On Sat, Nov 29, 2014 at 9:07 AM, Nick Coghlan <ncogh...@gmail.com> wrote:

> Guido wrote a specific micro-benchmark for that case in one of the
> other threads. On his particular system, the overhead was around 150
> ns per link in the chain at the point the data processing pipeline was
> shut down. In most scenarios where a data processing pipeline is worth
> setting up in the first place, the per-item handling costs (which
> won't change) are likely to overwhelm the shutdown costs (which will
> get marginally slower).
>

If I hadn't written that benchmark I wouldn't recognize what you're talking
about here. :-) This is entirely off-topic, but if I didn't know it was
about one generator calling next() to iterate over another generator, I
wouldn't have understood which pattern you're referring to as a data
processing pipeline. And I still don't understand how the try/except
*setup* cost became the *shutdown* cost of the pipeline. But that doesn't
matter, since the number of setups equals the number of shutdowns.
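[For readers browsing the archive: a minimal sketch of the pattern under
discussion — this is not the actual benchmark, just an illustration of one
generator iterating another via next() with a try/except around it. The
function names are invented for the example.]

```python
def source(n):
    # Inner generator: produces n items.
    for i in range(n):
        yield i

def pipeline(it):
    # Outer generator: pulls items from the inner one explicitly with
    # next(). The try/except is entered once per item (the "setup"),
    # but its except branch runs only once, when the inner generator
    # is exhausted and the pipeline shuts down.
    while True:
        try:
            item = next(it)
        except StopIteration:
            return  # shutdown: stop the outer generator cleanly
        yield item * 2  # per-item work, which dominates total cost

print(sum(pipeline(source(1000))))  # 999000
```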

-- 
--Guido van Rossum (python.org/~guido)
_______________________________________________
Python-Dev mailing list
Python-Dev@python.org
https://mail.python.org/mailman/listinfo/python-dev