On 2021-05-08 01:43, Pablo Galindo Salgado wrote:
Some update on the numbers: we made a draft implementation to corroborate them with some more realistic tests, and it seems that our original calculations were wrong.
The actual increase in size is quite a bit bigger than previously advertised:

Using a bytes object to encode the final object and marshalling that to disk (so using uint8_t as the underlying type):

BEFORE:

❯ ./python -m compileall -r 1000 Lib > /dev/null
❯ du -h Lib -c --max-depth=0
70M     Lib
70M     total

AFTER:
❯ ./python -m compileall -r 1000 Lib > /dev/null
❯ du -h Lib -c --max-depth=0
76M     Lib
76M     total

So that's an increase of 8.56 % over the original size. This is storing the start offset and end offset with no compression
whatsoever.
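For reference, the percentage can be reproduced from the du figures above (the 8.56 % presumably comes from exact byte counts rather than the rounded megabyte totals):

```python
# Sanity-check the overhead using the rounded `du` totals above.
before_mb = 70  # Lib/ compiled without offsets
after_mb = 76   # Lib/ compiled with start/end offsets, uncompressed
increase_pct = (after_mb - before_mb) / before_mb * 100
print(f"{increase_pct:.2f}%")  # ~8.57% from the rounded totals
```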

[snip]

I'm wondering if it's possible to compromise with a single position that's not as complete but still gives a good hint:

For example:

  File "test.py", line 6, in lel
    return 1 + foo(a,b,c=x['z']['x']['y']['z']['y'], d=e)
                                              ^
TypeError: 'NoneType' object is not subscriptable

That at least tells you which subscript raised the exception.
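Rendering that hint needs only a single stored column per instruction. A hypothetical helper (the name and signature are mine, not CPython's) sketches the formatting:

```python
def caret_hint(source_line: str, col: int, indent: str = "    ") -> str:
    """Return the source line with a single ^ under the 0-based column."""
    return f"{indent}{source_line}\n{indent}{' ' * col}^"

print(caret_hint("return 1 + foo(a,b,c=x['z']['x']['y']['z']['y'], d=e)", 42))
```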


Another example:

  Traceback (most recent call last):
    File "test.py", line 4, in <module>
      print(1 / x + 1 / y)
              ^
  ZeroDivisionError: division by zero

as distinct from:

  Traceback (most recent call last):
    File "test.py", line 4, in <module>
      print(1 / x + 1 / y)
                      ^
  ZeroDivisionError: division by zero
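This distinction works because each `/` compiles to its own bytecode instruction, so one column per instruction is enough to tell the two divisions apart. A quick check with `dis` (the opcode is `BINARY_TRUE_DIVIDE` up to 3.10 and `BINARY_OP` with a `/` argrepr from 3.11 on):

```python
import dis

code = compile("print(1 / x + 1 / y)", "test.py", "exec")
divisions = [
    ins for ins in dis.get_instructions(code)
    if ins.opname == "BINARY_TRUE_DIVIDE"                   # Python <= 3.10
    or (ins.opname == "BINARY_OP" and ins.argrepr == "/")   # Python >= 3.11
]
print(len(divisions))  # two distinct division instructions, one per `/`
```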
_______________________________________________
Python-Dev mailing list -- python-dev@python.org
To unsubscribe send an email to python-dev-le...@python.org
https://mail.python.org/mailman3/lists/python-dev.python.org/
Message archived at 
https://mail.python.org/archives/list/python-dev@python.org/message/4RGQALI6T6HBNRDUUEYX4FA2YKTZDBNA/
Code of Conduct: http://python.org/psf/codeofconduct/
