[Jacco van Dorp <j.van.d...@deonet.nl>]
> I've sometimes thought that exhaust(iterator) or iterator.exhaust() would be
> a good thing to have - I've often written code that basically says "call this
> for every element in this container, and don't care about return values", but
> find myself using a list comprehension instead of a generator. I guess it's
> such an edge case that exhaust(iterator) as a builtin would be overkill (but
> perhaps itertools could have it?), and most people don't pass around
> iterators, so (f(x) for x in y).exhaust() might not look natural to most
> people.
"The standard" clever way to do this is to create a 0-sized deque:
>>> from collections import deque
>>> deque((i for i in range(1000)), 0)
deque([], maxlen=0)
The deque constructor consumes the entire iterable "at C speed", but
throws all the results away because the deque's maximum size is too
small to hold any of them ;-)
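For concreteness, here is a hypothetical `exhaust()` helper built on that trick (the name and wrapper are mine, not a stdlib function):

```python
from collections import deque

def exhaust(iterable):
    # Hypothetical helper: run `iterable` to completion, discarding
    # every value.  The zero-size deque does the looping at C speed.
    deque(iterable, maxlen=0)

# Example: drive a side-effecting generator expression to completion.
seen = []
exhaust(seen.append(i) for i in range(5))
print(seen)  # prints [0, 1, 2, 3, 4]
```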
> It could return the value for the last() semantics, but I think exhaustion
> would often be more important than the last value.
>>> deque((i for i in range(1000)), 1)
deque([999], maxlen=1)
In that case the deque only has enough room to remember one element,
and so remembers the last one it sees. Of course this generalizes to
larger values too:
>>> for x in deque((i for i in range(1000)), 5):
...     print(x)
...
995
996
997
998
999
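The maxlen=1 form is easy to wrap as a `last()` helper; this is just a sketch, and the name and `default` handling are my own, not part of any proposal:

```python
from collections import deque

def last(iterable, default=None):
    # Keep only the final element seen; an empty input yields `default`.
    d = deque(iterable, maxlen=1)
    return d[0] if d else default

print(last(i * i for i in range(1000)))  # prints 998001
print(last([], default="empty"))         # prints empty
```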
I think I'd like to see itertools add a `drop(iterable, n=None)`
function. If `n` is not given, it would consume the entire iterable.
Else for an integer n >= 0, it would return an iterator that skips
over the first `n` values of the input iterable.
`drop n xs` has been in Haskell forever, and the Python itertools docs
include a similar `consume(iterator, n=None)` recipe. I'm not happy
about switching the argument order from Haskell's, but would really
like to keep omitting `n` as a way to spell "pretend n is infinity",
so there would be no more need for the "empty deque" trick.
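A minimal sketch of how that proposed `drop()` could behave, given the signature above (this is not an existing itertools function; `islice` handles the finite case):

```python
from collections import deque
from itertools import islice

def drop(iterable, n=None):
    # Sketch of the proposed itertools.drop().
    if n is None:
        deque(iterable, maxlen=0)  # consume everything, discard all
        return None
    return islice(iterable, n, None)  # skip the first n values

print(list(drop(range(10), 3)))  # prints [3, 4, 5, 6, 7, 8, 9]
drop(iter(range(10)))            # exhausts the iterator entirely
```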
Python-ideas mailing list
Code of Conduct: http://python.org/psf/codeofconduct/