On Aug 12, 2019, at 15:34, Christopher Barker wrote:
>
> In fact, I'm pretty sure that setting custom separators is the only way to
> get it to generate invalid JSON now,
There’s also allow_nan. But that one is clearly intentional, because so many
other implementations (including the original JS
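Both foot-guns mentioned above are easy to reproduce; a minimal sketch (the "=" key separator is chosen purely for illustration):

```python
import json

# separators is (item_separator, key_separator); an "=" key separator
# yields output that is not valid JSON.
print(json.dumps({"a": 1}, separators=(",", "=")))  # {"a"=1}

# allow_nan defaults to True, so NaN/Infinity are emitted even though
# the JSON grammar has no such tokens -- a deliberate choice shared
# with many other implementations.
print(json.dumps(float("nan")))  # NaN
```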
On Aug 12, 2019, at 15:18, Christopher Barker wrote:
>
> If that is the goal, then strings would need a hook, too, as Unicode allows
> different normalized forms for some "characters" (see previous discussion of
> this; I, frankly, don't quite "get" it).
Although normalization can be a
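For anyone else who doesn't quite "get" it, a minimal stdlib illustration of what normalization means here:

```python
import unicodedata

# U+00E9 (precomposed "é") and "e" + U+0301 (combining accent) render
# identically but are different strings until normalized to one form.
composed = "\u00e9"
decomposed = "e\u0301"
assert composed != decomposed
assert unicodedata.normalize("NFC", decomposed) == composed
assert unicodedata.normalize("NFD", composed) == decomposed
```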
Nick Timkovich wrote:
> I actually gave a talk along these lines at the Chicago Python (ChiPy)
> meetup last week: slides
> https://docs.google.com/presentation/d/1v5z4f-FQkS-bQYE-Xv5SvA6cyaTiqlxs2w2C...
Nice presentation. I've adapted the examples in the "how to parent" section to
illustrate the
On Mon, Aug 12, 2019 at 3:58 PM Chris Angelico wrote:
> But if there is a way to support that use-case without the foot gun, then
> I think that's a better option.
>
> It's more that there's no reason to go to great lengths to *stop* you
> from encoding invalid JSON. For it to block it, it
On Tue, Aug 13, 2019 at 8:34 AM Christopher Barker wrote:
>
> On Mon, Aug 12, 2019 at 2:59 PM Chris Angelico wrote:
>>
>> On Tue, Aug 13, 2019 at 7:55 AM Christopher Barker
>> wrote:
>> > That may mean the cat's out of the bag, and we can neglect any role of the
>> > json module to try to
On Mon, Aug 12, 2019 at 9:53 AM Richard Musil wrote:
> Christopher, I understood that the risk of producing invalid JSON if
> custom type is allowed to serialize into output stream seems to be a major
> problem for many. This has been already mentioned in this discussion.
> However, I thought it
On 2019-08-08 11:52, Ryan Fox wrote:
My proposal is a new exception class as the preferred base for
user-defined exceptions:
>>> class MyException(ExceptionTemplate):
...     message = 'Bad thing happened during {action} in {context}'
>>> raise MyException(action=current_action,
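The excerpt cuts off mid-call, but a minimal sketch of how such an ExceptionTemplate base might work (the class body here is inferred from the example, not an existing stdlib API, and the argument values are made-up stand-ins for current_action etc.):

```python
# Hypothetical sketch of the proposed base class.
class ExceptionTemplate(Exception):
    message = ''  # subclasses override with a str.format template

    def __init__(self, **kwargs):
        self.kwargs = kwargs
        super().__init__(self.message.format(**kwargs))

class MyException(ExceptionTemplate):
    message = 'Bad thing happened during {action} in {context}'

try:
    raise MyException(action='upload', context='startup')
except MyException as exc:
    print(exc)  # Bad thing happened during upload in startup
```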
On Tue, Aug 13, 2019 at 7:55 AM Christopher Barker wrote:
> That may mean the cat's out of the bag, and we can neglect any role of the
> json module to try to enforce valid JSON, but still...
>
The *decoder* will enforce valid JSON, but the *encoder* doesn't need
to stop you from doing what
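That asymmetry is easy to demonstrate: whatever the encoder can be coaxed into emitting, the decoder still enforces the grammar strictly:

```python
import json

# Trailing commas are not JSON, and loads() rejects them.
try:
    json.loads('{"a": 1,}')
except json.JSONDecodeError as exc:
    print("rejected:", exc)
```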
Another doc note:
I see this:
"""
it is common for JSON numbers to be deserialized into IEEE 754 double
precision numbers and thus subject to that representation’s range and
precision limitations. This is especially relevant when serializing Python
int values of extremely large magnitude, or
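A concrete illustration of the boundary that doc passage is warning about: 2**53 + 1 is the first integer an IEEE 754 double cannot represent, so a decoder that maps every JSON number to a double (as many implementations do) would silently round it, while Python's own json keeps ints exact:

```python
import json

big = 2**53 + 1  # first integer not representable as a double

# Python's json round-trips ints exactly...
assert json.loads(json.dumps(big)) == big

# ...but a double-based decoder would lose the low bit:
assert float(big) == 2**53  # 9007199254740993 -> 9007199254740992.0
```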
On Sun, Aug 11, 2019 at 10:05 PM Chris Angelico wrote:
> > But it makes me nervous -- I think the goal of the json module is to
> produce valid json, and nothing else. Opening up a protocol would allow
> users to fairly easily, and maybe inadvertently, create invalid JSON. I'm
> not sure there
side note:
I'm reading the json docs more closely now, and noticed:
"""
parse_float, if specified, will be called with the string of every JSON
float to be decoded. By default, this is equivalent to float(num_str). This
can be used to use another datatype or parser for JSON floats (e.g.
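For example, routing JSON floats through decimal.Decimal, so the exact decimal digits from the document are preserved instead of a binary double:

```python
import json
from decimal import Decimal

doc = '{"price": 0.1}'

# Default: JSON floats become binary doubles, so 0.1 is inexact.
print(json.loads(doc)["price"])  # 0.1 (a float)

# parse_float receives the literal's source text, here handed to
# Decimal, which keeps the exact decimal value.
price = json.loads(doc, parse_float=Decimal)["price"]
print(repr(price))  # Decimal('0.1')
```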
I actually gave a talk along these lines at the Chicago Python (ChiPy)
meetup last week: slides
https://docs.google.com/presentation/d/1v5z4f-FQkS-bQYE-Xv5SvA6cyaTiqlxs2w2CI1yZcAU/edit?usp=sharing
Part of the argument was about using pure standard library so a
self-contained script/repo could run
On Aug 11, 2019, at 19:01, malin...@163.com wrote:
>
> The idea is mixing `PyLongObject` with `Python 2's PyIntObject`
> implementation.
>
> For example, on a 64-bit platform, if (an integer >=-9223372036854775808 and
> <=9223372036854775807), PyLongObject uses a native C type `signed long` to
Christopher, I understood that the risk of producing invalid JSON, if a custom
type is allowed to serialize into the output stream, seems to be a major problem
for many. This has already been mentioned in this discussion. However, I thought it
was related to the original idea of "raw output" (for
The idea is mixing `PyLongObject` with `Python 2's PyIntObject` implementation.
For example, on a 64-bit platform, if (an integer >=-9223372036854775808 and
<=9223372036854775807), PyLongObject uses a native C type `signed long` to
represent it.
People mostly use +-* operations, maybe using
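A toy Python model of the proposed split; the bounds quoted above are exactly the C `signed long` range on a 64-bit platform (this is only a sketch of the dispatch idea, not the C implementation):

```python
LONG_MIN, LONG_MAX = -2**63, 2**63 - 1  # C signed long range, 64-bit

def fits_in_signed_long(n: int) -> bool:
    """True if the proposal could store n as a native machine word."""
    return LONG_MIN <= n <= LONG_MAX

assert fits_in_signed_long(-9223372036854775808)
assert fits_in_signed_long(9223372036854775807)
assert not fits_in_signed_long(9223372036854775808)  # needs digit array
```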