On Sun, Oct 25, 2020, at 18:45, Chris Angelico wrote:
> If you actually DO need to read null-terminated records from a file
> that's too big for memory, it's probably worth just rolling your own
> buffering, reading a chunk at a time and splitting off the interesting
> parts. It's not hugely difficult, and it's a good exercise to do now
> and then. And yes, I can see the temptation to get Python to do it,
> but unfortunately, newline support is such a weird mess of
> cross-platform support that I don't think it needs to be made more
> complicated :)
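
For concreteness, that chunk-at-a-time approach is roughly the sketch below
(the name and the chunk size are just illustrative):

def iter_records(f, delim=b'\0', chunk_size=8192):
    # Read fixed-size chunks from a binary file object and yield the
    # delimiter-terminated records found in them (delimiter stripped).
    buf = b''
    while True:
        chunk = f.read(chunk_size)
        if not chunk:
            break
        buf += chunk
        *records, buf = buf.split(delim)
        yield from records
    if buf:
        # Trailing data with no final delimiter.
        yield buf

That works when all you're doing is iterating records, but any bytes read
past the last delimiter live in the generator's local buffer, not in the
file object.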

Maybe a getdelim method that ignores all the newline-handling complexity and
just reads until it reaches a specified delimiter character? It would make
sense on binary files too.
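
Roughly the behaviour I have in mind, sketched as a helper on top of
io.BufferedReader since no such method exists today (the name getdelim is
just borrowed from C's getdelim(3), and a single-byte delimiter is assumed):

def getdelim(f, delim=b'\0'):
    # Read from a buffered binary file (e.g. what open(path, 'rb')
    # returns) up to and including the first occurrence of delim;
    # return b'' at EOF.  Only bytes that have already been peek()ed
    # are read(), so nothing past the delimiter is consumed.
    pieces = []
    while True:
        window = f.peek(1)        # currently buffered bytes; b'' at EOF
        if not window:
            break
        i = window.find(delim)
        if i >= 0:
            pieces.append(f.read(i + 1))
            break
        pieces.append(f.read(len(window)))
    return b''.join(pieces)

A real method could of course work on the internal buffer directly rather
than going through peek().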

The problem with rolling your own buffering is that there's no good way to put
the unused data after the delimiter back into the file object if you're mixing
this processing with something else. You'd have to read a character at a time,
which would be very inefficient in pure Python.
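
That is, without such a method the only position-preserving option is
something like the sketch below, which leaves the stream exactly one byte
past the delimiter but pays a Python-level loop iteration per byte:

def read_until(f, delim=b'\0'):
    # Read one delimiter-terminated record a byte at a time.  The file
    # position ends up just past the delimiter, so other code can keep
    # using the same file object, but it's slow for large records.
    out = bytearray()
    while True:
        ch = f.read(1)
        if not ch or ch == delim:
            return bytes(out)
        out += ch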