str_iterator, bytes_iterator, range_iterator, list_iterator, and tuple_iterator (and probably others) should have a way to advance the iterator efficiently, rather than requiring repeated calls to next.
I suggest adding an itertools.advance function that dispatches to a dunder __advance__ method (if one exists) and, as a fallback, calls next repeatedly. The iterators mentioned above (and any others capable of doing so efficiently) would then implement __advance__, manipulating their index directly to "jump" the desired number of elements in constant time rather than linear time.

For example, if you have a large list and want to iterate over it while skipping the first 50000 elements, you should be able to do something like:

    it = iter(mylist)
    itertools.advance(it, 50000)

Note that you can technically do this with itertools.islice by supplying a start value, but islice effectively just calls next repeatedly on your behalf, so skipping a large number of elements is still unnecessarily slow. A rough pure-Python sketch of the proposed function is at the end of this message.

As a side note, I noticed that list_iterator has a __setstate__ method that can be used to (more or less) accomplish this, but that seems very hacky. It also sets the index directly (rather than adding to it), so it would be more awkward to use if the iterator is already partially exhausted:

    it = iter(mylist)
    it.__setstate__(50000)
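To make the dispatch and the fallback concrete, here is a rough pure-Python sketch of what the proposed function could look like. Both the advance name and the __advance__ dunder are hypothetical additions from this proposal (nothing in today's stdlib defines them); the fallback is just the islice-based "consume" recipe from the itertools documentation, which still walks the iterator one element at a time.

    from itertools import islice

    def advance(iterator, n):
        """Advance *iterator* by n elements.

        Dispatches to the (hypothetical) __advance__ method if the
        iterator's type provides one; otherwise falls back to the
        islice-based "consume" recipe, which is linear-time.
        """
        if n < 0:
            raise ValueError("n must be non-negative")
        advance_method = getattr(type(iterator), "__advance__", None)
        if advance_method is not None:
            # An iterator like list_iterator could implement this as a
            # direct index bump, making the skip constant-time.
            advance_method(iterator, n)
        else:
            # Fallback: consume and discard the first n elements.
            next(islice(iterator, n, n), None)

    # Usage (today this always takes the fallback path):
    #     it = iter(mylist)
    #     advance(it, 50000)
    #     next(it)   # -> mylist[50000]

The point of the dunder is only to give C-level iterators a hook for replacing that fallback with O(1) index arithmetic; iterators that cannot skip efficiently would simply not define it and keep the current behaviour.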