Bob Ippolito <[email protected]> added the comment:
I think the best start would be to add a bit of documentation with an example
of how you could work with newline-delimited JSON using the existing module
as-is. On the encoding side you need to ensure that each document gets a
compact representation without embedded newlines, e.g.:
    for obj in objs:
        yield json.dumps(obj, separators=(',', ':')) + '\n'
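As a self-contained sketch of that generator (write_ndjson and fp are
illustrative names, not a proposed API):

    import json

    def write_ndjson(objs, fp):
        # The compact separators avoid spaces after ',' and ':'.
        # json.dumps never emits newlines unless indent= is given,
        # so each document stays on a single line.
        for obj in objs:
            fp.write(json.dumps(obj, separators=(',', ':')) + '\n')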
I don't think it would make sense to support this directly from dumps, as it's
really multiple documents rather than the single document that every other form
of dumps will output.
On the read side it would be something like:
    for doc in lines:
        yield json.loads(doc)
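Again, as a self-contained sketch (read_ndjson is an illustrative name; it
assumes exactly one JSON document per line):

    import json

    def read_ndjson(fp):
        # Iterating a file-like object yields one line at a time;
        # blank lines (e.g. a trailing newline) are skipped.
        for line in fp:
            line = line.strip()
            if line:
                yield json.loads(line)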
I'm not sure if this is common enough (and separable enough from I/O and
error-handling constraints) to be worth adding the functionality directly to
the json module. I think it would be more appropriate, in the short to medium
term, for each service (e.g. BigQuery) to have its own module with helper
functions or a framework that encapsulates the protocol that the particular
service speaks.
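To illustrate why the error-handling policy belongs in such a wrapper, here is
a hypothetical lenient reader (all names are made up for illustration): one
service might want to skip malformed lines, another to abort, another to log
them.

    import json

    def load_ndjson_lenient(fp, on_error=None):
        # The wrapper decides the policy: raise, skip, or report.
        # Here malformed lines are handed to an optional callback
        # and otherwise skipped.
        for lineno, line in enumerate(fp, 1):
            try:
                yield json.loads(line)
            except json.JSONDecodeError as exc:
                if on_error is not None:
                    on_error(lineno, line, exc)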
----------
_______________________________________
Python tracker <[email protected]>
<https://bugs.python.org/issue34529>
_______________________________________