Re: [Python-Dev] Modify PyMem_Malloc to use pymalloc for performance

2016-02-04 Thread M.-A. Lemburg
On 03.02.2016 22:03, Victor Stinner wrote:
> Hi,
> 
> There is an old discussion about the performance of the PyMem_Malloc()
> memory allocator. CPython stresses memory allocators a lot. The last
> time I gathered statistics, it was for PEP 454:
> "For example, the Python test suite calls malloc(), realloc() or
> free() 270,000 times per second on average."
> https://www.python.org/dev/peps/pep-0454/#log-calls-to-the-memory-allocator
> 
> I proposed a simple change: modify PyMem_Malloc() to use the pymalloc
> allocator, which is faster for allocations smaller than 512 bytes, and
> fall back to malloc() (which is the current internal allocator of
> PyMem_Malloc()) for larger blocks.
> 
> This tiny change makes Python up to 6% faster on some specific (macro)
> benchmarks, and it doesn't seem to make Python slower on any
> benchmark:
> http://bugs.python.org/issue26249#msg259445
> 
> Do you see any drawback of using pymalloc for PyMem_Malloc()?

Yes: You cannot free memory allocated using pymalloc with the
standard C lib free().

It would be better to go through the list of PyMem_*() calls
in Python and replace them with PyObject_*() calls, where
possible.

> Does anyone recall the rationale for having two families of memory allocators?

The PyMem_*() APIs were needed to have a cross-platform malloc()
implementation which returns standard C lib free()able memory,
but also behaves well when passing 0 as size.
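The 0-byte case is the subtle part: C leaves the behaviour of malloc(0) implementation-defined (it may return NULL or a unique pointer). A minimal sketch of the normalization, modeled in Python with a bytearray standing in for a raw C buffer (the function name is illustrative, not the real implementation):

```python
def mem_malloc(n):
    # C leaves malloc(0) implementation-defined: it may return NULL or a
    # unique pointer.  A portable wrapper normalizes this by always
    # requesting at least one byte, so a successful zero-size request
    # never returns NULL and the result is always free()-able.
    if n < 0:
        raise ValueError("negative size")
    return bytearray(max(n, 1))

buf = mem_malloc(0)
assert buf is not None and len(buf) == 1   # zero-size request is still usable
```

This mirrors the guarantee callers of the PyMem_*() APIs rely on: a request for 0 bytes yields a distinct, freeable pointer on every platform.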

-- 
Marc-Andre Lemburg
eGenix.com

Professional Python Services directly from the Experts (#1, Feb 04 2016)
>>> Python Projects, Coaching and Consulting ...  http://www.egenix.com/
>>> Python Database Interfaces ...           http://products.egenix.com/
>>> Plone/Zope Database Interfaces ...           http://zope.egenix.com/


::: We implement business ideas - efficiently in both time and costs :::

   eGenix.com Software, Skills and Services GmbH  Pastor-Loeh-Str.48
D-40764 Langenfeld, Germany. CEO Dipl.-Math. Marc-Andre Lemburg
   Registered at Amtsgericht Duesseldorf: HRB 46611
   http://www.egenix.com/company/contact/
  http://www.malemburg.com/

___
Python-Dev mailing list
Python-Dev@python.org
https://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
https://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] Modify PyMem_Malloc to use pymalloc for performance

2016-02-04 Thread M.-A. Lemburg
On 04.02.2016 13:29, Victor Stinner wrote:
> Hi,
> 
> 2016-02-04 11:17 GMT+01:00 M.-A. Lemburg :
>>> Do you see any drawback of using pymalloc for PyMem_Malloc()?
>>
>> Yes: You cannot free memory allocated using pymalloc with the
>> standard C lib free().
> 
> That's not completely new.
> 
> If Python is compiled in debug mode, you get a fatal error with a huge
> error message if you free the memory allocated by PyMem_Malloc() using
> PyObject_Free() or PyMem_RawFree().
> 
> But yes, technically it's possible to use free() when Python is *not*
> compiled in debug mode.

Debug mode is a completely different beast ;-)

>> It would be better to go through the list of PyMem_*() calls
>> in Python and replace them with PyObject_*() calls, where
>> possible.
> 
> There are 536 calls to the functions PyMem_Malloc(), PyMem_Realloc()
> and PyMem_Free().
> 
> I would prefer to modify a single place rather than having to replace 536 calls :-/

You have a point there, but I don't think it'll work out
that easily, since we are using such calls to e.g. pass
dynamically allocated buffers to code in extensions (which then
have to free the buffers again).

>>> Does anyone recall the rationale for having two families of memory allocators?
>>
>> The PyMem_*() APIs were needed to have a cross-platform malloc()
>> implementation which returns standard C lib free()able memory,
>> but also behaves well when passing 0 as size.
> 
> Yeah, PyMem_Malloc() & PyMem_Free() help to have portable behaviour.
> But why were PyObject_Malloc() & PyObject_Free() not used in the
> first place?

Good question. I guess developers simply thought of PyObject_Malloc()
as being for PyObjects, not arbitrary memory buffers, most likely
because pymalloc was advertised as an allocator for Python objects,
not random chunks of memory.

Also: the PyObject_*() APIs were first introduced with pymalloc, and
no one was really interested in going through all the calls to the
PyMem_*() APIs and converting them to use the new pymalloc at the
time.

All this happened between Python 1.5.2 and 2.0.

One of the reasons probably also was that pymalloc originally
did not return memory back to the system malloc(). This was
changed only some years ago.

> One explanation could be that PyMem_Malloc() can be called without the
> GIL held. But that wasn't true before Python 3.4, since PyMem_Malloc()
> (indirectly) called PyObject_Malloc() when Python was compiled in
> debug mode, and PyObject_Malloc() requires the GIL to be held.
> 
> When I wrote PEP 445, there was a discussion about the GIL. It was
> proposed to allow calling PyMem_xxx() without the GIL:
> https://www.python.org/dev/peps/pep-0445/#gil-free-pymem-malloc
> 
> This option was rejected.

AFAIR, the GIL was not really part of the consideration at the time.
We used pymalloc for PyObject allocation, that's all.

-- 
Marc-Andre Lemburg
eGenix.com




Re: [Python-Dev] Modify PyMem_Malloc to use pymalloc for performance

2016-02-04 Thread Victor Stinner
Hi,

2016-02-04 11:17 GMT+01:00 M.-A. Lemburg :
>> Do you see any drawback of using pymalloc for PyMem_Malloc()?
>
> Yes: You cannot free memory allocated using pymalloc with the
> standard C lib free().

That's not completely new.

If Python is compiled in debug mode, you get a fatal error with a huge
error message if you free the memory allocated by PyMem_Malloc() using
PyObject_Free() or PyMem_RawFree().

But yes, technically it's possible to use free() when Python is *not*
compiled in debug mode.
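The debug-mode check Victor mentions can be modeled as tagging each block with the API family that allocated it and verifying the tag on free. A toy sketch of the idea (the names and the exception are illustrative; the real check lives in CPython's debug allocator hooks and aborts with a fatal error rather than raising):

```python
class MismatchedFreeError(Exception):
    """Raised when a block is freed through the wrong allocator family."""

_family = {}   # id(block) -> family that allocated it

def _alloc(family, n):
    block = bytearray(max(n, 1))      # bytearray stands in for a raw buffer
    _family[id(block)] = family
    return block

def pymem_malloc(n):
    return _alloc("pymem", n)

def pyobject_malloc(n):
    return _alloc("pyobject", n)

def _free(family, block):
    # In a debug build this is where CPython aborts with a fatal error.
    if _family.pop(id(block)) != family:
        raise MismatchedFreeError("block was allocated by another family")

def pymem_free(block):
    _free("pymem", block)

def pyobject_free(block):
    _free("pyobject", block)

# Freeing through the matching family is fine...
b = pymem_malloc(32)
pymem_free(b)

# ...but crossing families is caught, mirroring the debug-mode fatal error:
b = pymem_malloc(32)
try:
    pyobject_free(b)
except MismatchedFreeError:
    print("mismatched free detected")
```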


> It would be better to go through the list of PyMem_*() calls
> in Python and replace them with PyObject_*() calls, where
> possible.

There are 536 calls to the functions PyMem_Malloc(), PyMem_Realloc()
and PyMem_Free().

I would prefer to modify a single place rather than having to replace 536 calls :-/


>> Does anyone recall the rationale for having two families of memory allocators?
>
> The PyMem_*() APIs were needed to have a cross-platform malloc()
> implementation which returns standard C lib free()able memory,
> but also behaves well when passing 0 as size.

Yeah, PyMem_Malloc() & PyMem_Free() help to have portable behaviour.
But why were PyObject_Malloc() & PyObject_Free() not used in the
first place?

One explanation could be that PyMem_Malloc() can be called without the
GIL held. But that wasn't true before Python 3.4, since PyMem_Malloc()
(indirectly) called PyObject_Malloc() when Python was compiled in
debug mode, and PyObject_Malloc() requires the GIL to be held.

When I wrote PEP 445, there was a discussion about the GIL. It was
proposed to allow calling PyMem_xxx() without the GIL:
https://www.python.org/dev/peps/pep-0445/#gil-free-pymem-malloc

This option was rejected.

Victor


Re: [Python-Dev] Opcode cache in ceval loop

2016-02-04 Thread Matthias Bussonnier

> On Feb 4, 2016, at 08:22, Sven R. Kunze  wrote:
> 
> On 04.02.2016 16:57, Matthias Bussonnier wrote:
>>> On Feb 3, 2016, at 13:22, Yury Selivanov  wrote:
>>> 
>>> 
>>> An ideal way would be to calculate a hit/miss ratio over time
>>> for each cached opcode, but that would be an expensive
>>> calculation.
>> Do you mean like a sliding window?
>> Otherwise, if you just want, let's say, a 20% miss threshold, you
>> increment by 1 on a hit and decrement by 4 on a miss.
> 
> Division is expensive.

I'm not speaking about division here.
If you do +M on a hit and -N on a miss, the counter will decrease on
average only if the hit ratio is below N/(M+N), but you never need to
perform a division.

Then you deoptimize only if the counter goes below 0.

> 
>> 
>> On Feb 3, 2016, at 13:37, Sven R. Kunze  wrote:
>> 
>>> On 03.02.2016 22:22, Yury Selivanov wrote:
 One way of tackling this is to give each optimized opcode
 a counter for hit/misses.  When we have a "hit" we increment
 that counter, when it's a miss, we decrement it.
>>> Within a given range, I suppose. Like:
>>> 
>>> c = min(c+1, 100)
>> 
>> Min might be overkill; maybe you can use an OR mask to limit the
>> window to 256 consecutive calls?
> 
> Sure, that is how I would have written it in Python. But I would suggest an 
> AND mask. ;-)


Sure, an implementation detail, I would say. I should not write emails
before breakfast...

The other problem with the mask is that if your counter hits 256 it
wraps around back to 0, where it deoptimizes (which is not what you
want), so you might need to keep the sign bit out of the mask and
deoptimize only past a certain negative threshold.

Does it make sense?
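The scheme discussed in this thread can be sketched in a few lines. With +M on hits and -N on misses (here the example values +1/-4), the counter trends downward once misses exceed M/(M+N), i.e. 20%, with no division anywhere; the final assertion shows the wraparound pitfall of masking the counter into an 8-bit window:

```python
def update(counter, hit):
    # +1 on a cache hit, -4 on a miss.  On average the counter only
    # decreases when more than 1/5 of events are misses, so the 20%
    # threshold falls out of the increments themselves.
    return counter + 1 if hit else counter - 4

# At exactly 20% misses the ups and downs cancel out:
counter = 100
for hit in [True] * 80 + [False] * 20:
    counter = update(counter, hit)
assert counter == 100          # 80 hits (+80) balance 20 misses (-80)

# Pitfall of AND-masking the counter into an 8-bit window: one more hit
# at 255 wraps to 0, which would then drift negative and look like a
# deoptimization trigger, so the sign bit must stay outside the mask.
assert (255 + 1) & 0xFF == 0
```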

-- 
M

 

> 
> Best,
> Sven



Re: [Python-Dev] More optimisation ideas

2016-02-04 Thread Terry Reedy

On 2/4/2016 12:18 PM, Sven R. Kunze wrote:

On 04.02.2016 14:09, Nick Coghlan wrote:

On 2 February 2016 at 06:39, Andrew Barnert via Python-Dev
  wrote:

On Feb 1, 2016, at 09:59,mike.romb...@comcast.net  wrote:

  If the stdlib were to use implicit namespace packages
(https://www.python.org/dev/peps/pep-0420/  ) and the various
loaders/importers as well, then python could do what I've done with an
embedded python application for years.  Freeze the stdlib (or put it
in a zipfile or whatever is fast).  Then arrange PYTHONPATH to first
look on the filesystem and then look in the frozen/zipped storage.

This is a great solution for experienced developers, but I think it would be 
pretty bad for novices or transplants from other languages (maybe even 
including Python 2).

There are already multiple duplicate questions every month on StackOverflow from people 
asking "how do I find the source to stdlib module X". The canonical answer 
starts off by explaining how to import the module and use its __file__, which everyone is 
able to handle. If we have to instead explain how to work out the .py name from the 
qualified module name, how to work out the stdlib path from sys.path, and then how to 
find the source from those two things, with the caveat that it may not be installed at 
all on some platforms, and how to make sure what they're asking about really is a stdlib 
module, and how to make sure they aren't shadowing it with a module elsewhere on 
sys.path, that's a lot more complicated. Especially when you consider that some people on 
Windows and Mac are writing Python scripts without ever learning how to use the terminal 
or find their Python packages via Explorer/Finder.

For folks that *do* know how to use the terminal:

$ python3 -m inspect --details inspect
Target: inspect
Origin: /usr/lib64/python3.4/inspect.py
Cached: /usr/lib64/python3.4/__pycache__/inspect.cpython-34.pyc
Loader: <_frozen_importlib.SourceFileLoader object at 0x7f0d8d23d9b0>

(And if they just want to *read* the source code, then leaving out
"--details" prints the full module source, and would work even if the
standard library were in a zip archive)


This is completely inadequate as a replacement for loading source into 
an editor, even if just for reading.


First, on Windows, the console defaults to 300 lines.  Print more, and 
only the last 300 lines remain.  The buffer size can be raised to its 
maximum, but setting it that high is obnoxious because the buffer is 
then padded with blank lines up to that size.  The little rectangle that 
one grabs in the scrollbar is then scaled down to almost nothing, 
becoming hard to grab.


Second is navigation.  No Find, Find-next, or Find-all.  Because of 
padding, moving to the unpadded 'bottom of file' is difficult.


Third, for a repository version, I would have to type, without error, 
instead of 'python3', some version of, for instance, a path like 
'F:/python/dev/35/PcBuild//python_d.exe', where the intermediate 
directory depends, I believe, on the build options.



I want to see and debug core Python in PyCharm as well, and this is not
acceptable.

If you want to make it opt-in, fine. But opt-out is a no-go. I have a
side-by-side comparison as we use Java and Python in production. It's
the *ease of access* that makes Python great compared to Java.

@Andrew
Even for experienced developers it just sucks and there are more
important things to do.


I agree that removing stdlib Python source files by default is a poor 
idea.  The disk space saved is trivial, and so, for me, would be nearly 
all of the time saving.


Over recent versions, more and more source files have been linked to in 
the docs.  Guido recently approved of linking the rest.  Removing source 
contradicts this trend.


Easily loading modules, including stdlib modules, into an IDLE editor 
window is a documented feature that goes back to the original commit in 
Aug 2000.  We do not usually break stdlib features without 
acknowledgement, some discussion, and a positive decision to do so.


Someone has already mentioned the degradation of tracebacks.

So why not just leave the source files alone in /Lib?  As far as I can 
see, they would not hurt anything.  At least on Windows, zip files are 
treated as directories and python35.zip comes before /Lib on sys.path.


The Windows installer currently has an option, selected by default I 
believe, to run compileall.  Add to compileall an option to compile 
everything to python35.zip rather than __pycache__, and use that in the 
installer.  Even if the zip is included in the installer, compiling to a 
zip while keeping the source files would let adventurous people patch 
their stdlib files.


Editing a stdlib file, to see if a confirmed bug disappeared (it did), 
was how I made my first code contribution.  If I had had to download and 
set up svn and maybe Visual C to try a one-line change, I would not have 
done it.


--
Terry Jan Reedy



Re: [Python-Dev] Python environment registration in the Windows Registry

2016-02-04 Thread Nick Coghlan
On 3 February 2016 at 15:15, Steve Dower  wrote:
> Presented in PEP-like form here, but if feedback suggests
> just putting it in the docs I'm okay with that too.

We don't really have anywhere in the docs to track platform
integration topics like this, so an Informational PEP is your best
bet.

Cheers,
Nick.

P.S. While I guess you *could* try to figure out a suitable home in
the docs, I don't think you'd gain anything particularly useful from
the additional effort.

-- 
Nick Coghlan  |  ncogh...@gmail.com  |  Brisbane, Australia


Re: [Python-Dev] Modify PyMem_Malloc to use pymalloc for performance

2016-02-04 Thread M.-A. Lemburg
On 04.02.2016 14:25, Victor Stinner wrote:
> Thanks for your feedback, you are asking good questions :-)
> 
> 2016-02-04 13:54 GMT+01:00 M.-A. Lemburg :
>>> There are 536 calls to the functions PyMem_Malloc(), PyMem_Realloc()
>>> and PyMem_Free().
>>>
>>> I would prefer to modify a single place rather than having to replace 536 calls :-/
>>
>> You have a point there, but I don't think it'll work out
>> that easily, since we are using such calls to e.g. pass
>> dynamically allocated buffers to code in extensions (which then
>> have to free the buffers again).
> 
> Ah, interesting. But I'm not sure that we delegate the responsibility
> of freeing the memory to external libraries. Usually, it's more the
> opposite: a library gives us an allocated memory block, and we have to
> free it. No?

Sometimes, yes, but we also do allocations for e.g.
parsing values in Python argument tuples (e.g. using
"es" or "et"):

https://docs.python.org/3.6/c-api/arg.html

We do document that PyMem_Free() should be used on those; I'm not
sure whether everyone does this, though.

> I checked whether we call malloc() directly to pass the buffer to a
> library, but I failed to find such a case.
>
> Again, in debug mode, calling free() on a memory block allocated by
> PyMem_Malloc() will likely crash. Since we run the Python test suite
> with a Python compiled in debug mode, we would already have detected
> such a bug, no?

The Python test suite doesn't exercise third-party C extensions,
so it's not surprising that it passes :-)

> See also my old issue http://bugs.python.org/issue18203 which replaced
> almost all direct calls to malloc() with PyMem_Malloc() or
> PyMem_RawMalloc().
> 
>> Good question. I guess developers simply thought of PyObject_Malloc()
>> as being for PyObjects,
> 
> Yeah, I also understood that, but in practice, it looks like
> PyMem_Malloc() is slower, so using it makes the code less
> efficient than it could be.
> 
> Instead of teaching developers that, well, in fact PyObject_Malloc()
> is unrelated to object programming, I think that it's simpler to
> modify PyMem_Malloc() to reuse pymalloc ;-)

Perhaps if you add some guards somewhere :-)

Seriously, this may work if C extensions use the APIs
consistently, but in order to tell, we'd need to check a
few. I know that I switched all mx Extensions over to
PyObject_*() instead of PyMem_*() or native malloc()
several years ago and have not run into any issues.

I guess the main question then is whether pymalloc is good enough
for general memory allocation needs; and the answer may well be
"yes".

BTW: Tuning pymalloc for commonly used object sizes is
another area where Python could gain better performance,
i.e. reserving more / pre-allocating space for often-used
block sizes. Note also that pymalloc only works well for
small blocks (up to 512 bytes); everything else is routed
to the system malloc().
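The routing Lemburg describes can be made concrete with a small sketch. The cutoff mirrors CPython's SMALL_REQUEST_THRESHOLD constant; the 8-byte size-class granularity is an assumption for illustration (newer 64-bit builds use a wider alignment), and the function names are made up:

```python
SMALL_REQUEST_THRESHOLD = 512   # pymalloc's small-object cutoff
ALIGNMENT = 8                   # assumed granularity of the size classes

def size_class(n):
    # Small requests are rounded up to a multiple of ALIGNMENT and served
    # from pools dedicated to that size class, which is what makes
    # tuning per-class pre-allocation possible.
    return (n - 1) // ALIGNMENT

def route(n):
    """Return which allocator a request of n bytes would go to."""
    if 0 < n <= SMALL_REQUEST_THRESHOLD:
        return ("pymalloc", size_class(n))
    return ("system-malloc", None)

assert route(1) == ("pymalloc", 0)        # 1..8 bytes share class 0
assert route(512) == ("pymalloc", 63)     # last small class
assert route(513) == ("system-malloc", None)
```

Requests that land in the same class share pools, so "reserving more space for often-used block sizes" amounts to giving popular classes more pre-allocated pools.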

-- 
Marc-Andre Lemburg
eGenix.com




Re: [Python-Dev] More optimisation ideas

2016-02-04 Thread Nick Coghlan
On 2 February 2016 at 02:40, R. David Murray  wrote:
> On the other hand, if the distros go the way Nick has (I think) been
> advocating, and have a separate 'system python for system scripts' that
> is independent of the one installed for user use, having the system-only
> python be frozen and sourceless would actually make sense on a couple of
> levels.

While omitting Python source files does let us reduce base image sizes
(quite significantly), the current perspective in Fedora and Project
Atomic is that going bytecode-only (whether frozen or not) breaks too
many things to be worthwhile. As one simple example, it means
tracebacks no longer include source code lines, dramatically
increasing the difficulty of debugging failures.

As such, we're more likely to pursue minimisation efforts by splitting
the standard library up into "stuff essential distro components use"
and "the rest of the standard library that upstream defines" than by
figuring out how to avoid shipping source files (I believe Debian
already makes this distinction with the python-minimal vs python
split).

Zipping up the standard library doesn't break tracebacks though, so
it's potentially worth exploring that option further.

Cheers,
Nick.

-- 
Nick Coghlan  |  ncogh...@gmail.com  |  Brisbane, Australia


Re: [Python-Dev] Modify PyMem_Malloc to use pymalloc for performance

2016-02-04 Thread Victor Stinner
Thanks for your feedback, you are asking good questions :-)

2016-02-04 13:54 GMT+01:00 M.-A. Lemburg :
>> There are 536 calls to the functions PyMem_Malloc(), PyMem_Realloc()
>> and PyMem_Free().
>>
>> I would prefer to modify a single place rather than having to replace 536 calls :-/
>
> You have a point there, but I don't think it'll work out
> that easily, since we are using such calls to e.g. pass
> dynamically allocated buffers to code in extensions (which then
> have to free the buffers again).

Ah, interesting. But I'm not sure that we delegate the responsibility
of freeing the memory to external libraries. Usually, it's more the
opposite: a library gives us an allocated memory block, and we have to
free it. No?

I checked whether we call malloc() directly to pass the buffer to a
library, but I failed to find such a case.

Again, in debug mode, calling free() on a memory block allocated by
PyMem_Malloc() will likely crash. Since we run the Python test suite
with a Python compiled in debug mode, we would already have detected
such a bug, no?

See also my old issue http://bugs.python.org/issue18203 which replaced
almost all direct calls to malloc() with PyMem_Malloc() or
PyMem_RawMalloc().


> Good question. I guess developers simply thought of PyObject_Malloc()
> as being for PyObjects,

Yeah, I also understood that, but in practice, it looks like
PyMem_Malloc() is slower, so using it makes the code less
efficient than it could be.

Instead of teaching developers that, well, in fact PyObject_Malloc()
is unrelated to object programming, I think that it's simpler to
modify PyMem_Malloc() to reuse pymalloc ;-)

Victor


Re: [Python-Dev] Opcode cache in ceval loop

2016-02-04 Thread Nick Coghlan
On 3 February 2016 at 06:49, Stephen J. Turnbull  wrote:
> Yury Selivanov writes:
>
>  > Not sure about that... PEPs take a LOT of time :(
>
> Informational PEPs need not take so much time, no more than you would
> spend on ceval.txt.  I'm sure a PEP would get a lot more attention
> from reviewers, too.
>
> Even if you PEP the whole thing, as you say it's a (big ;-)
> implementation detail.  A PEP won't make things more controversial (or
> less) than they already are.  I don't see why it would take that much
> more time than ceval.txt.

For a typical PEP, you need to explain both the status quo *and* the
state after the changes, as well as provide references to the related
discussions.

I think in this case the main target audience for the technical
details should be future maintainers, so Yury writing a ceval.txt akin
to the current dictnotes.txt, listsort.txt, etc would cover the
essentials.

If someone else wanted to also describe the change in a PEP for ease
of future reference, using Yury's ceval.txt as input, I do think that
would be a good thing, but I wouldn't want to make the enhancement
conditional on someone volunteering to do that.

Cheers,
Nick.

-- 
Nick Coghlan  |  ncogh...@gmail.com  |  Brisbane, Australia


Re: [Python-Dev] speed.python.org

2016-02-04 Thread Nick Coghlan
On 4 February 2016 at 16:48, Zachary Ware  wrote:
> I'm happy to announce that speed.python.org is finally functional!
> There's not much there yet, as each benchmark builder has only sent
> one result so far (and one of those involved a bit of cheating on my
> part), but it's there.
>
> There are likely to be rough edges that still need smoothing out.
> When you find them, please report them at
> https://github.com/zware/codespeed/issues or on the sp...@python.org
> mailing list.
>
> Many thanks to Intel for funding the work to get it set up and to
> Brett Cannon and Benjamin Peterson for their reviews.

This is great to hear!

Cheers,
Nick.

-- 
Nick Coghlan  |  ncogh...@gmail.com  |  Brisbane, Australia


Re: [Python-Dev] More optimisation ideas

2016-02-04 Thread Nick Coghlan
On 2 February 2016 at 06:39, Andrew Barnert via Python-Dev
 wrote:
> On Feb 1, 2016, at 09:59, mike.romb...@comcast.net wrote:
>>
>>  If the stdlib were to use implicit namespace packages
>> ( https://www.python.org/dev/peps/pep-0420/ ) and the various
>> loaders/importers as well, then python could do what I've done with an
>> embedded python application for years.  Freeze the stdlib (or put it
>> in a zipfile or whatever is fast).  Then arrange PYTHONPATH to first
> look on the filesystem and then look in the frozen/zipped storage.
>
> This is a great solution for experienced developers, but I think it would be 
> pretty bad for novices or transplants from other languages (maybe even 
> including Python 2).
>
> There are already multiple duplicate questions every month on StackOverflow 
> from people asking "how do I find the source to stdlib module X". The 
> canonical answer starts off by explaining how to import the module and use 
> its __file__, which everyone is able to handle. If we have to instead explain 
> how to work out the .py name from the qualified module name, how to work out 
> the stdlib path from sys.path, and then how to find the source from those two 
> things, with the caveat that it may not be installed at all on some 
> platforms, and how to make sure what they're asking about really is a stdlib 
> module, and how to make sure they aren't shadowing it with a module elsewhere 
> on sys.path, that's a lot more complicated. Especially when you consider that 
> some people on Windows and Mac are writing Python scripts without ever 
> learning how to use the terminal or find their Python packages via 
> Explorer/Finder.

For folks that *do* know how to use the terminal:

$ python3 -m inspect --details inspect
Target: inspect
Origin: /usr/lib64/python3.4/inspect.py
Cached: /usr/lib64/python3.4/__pycache__/inspect.cpython-34.pyc
Loader: <_frozen_importlib.SourceFileLoader object at 0x7f0d8d23d9b0>

(And if they just want to *read* the source code, then leaving out
"--details" prints the full module source, and would work even if the
standard library were in a zip archive)
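For reference, the same details are available programmatically through the inspect module; a rough equivalent of the command-line invocation above (the choice of inspect itself as the target is just an example):

```python
import inspect

target = inspect                  # any imported module works here
print("Target:", target.__name__)
print("Origin:", inspect.getsourcefile(target))

# inspect.getsource() returns the full source text, and it also works
# when the module is loaded from a zip archive via zipimport:
source = inspect.getsource(target)
print(source.splitlines()[0])     # first line of the module's source
```

This is the machinery the "how do I find the source to stdlib module X" answer relies on, without needing __file__ directly.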

Cheers,
Nick.

-- 
Nick Coghlan  |  ncogh...@gmail.com  |  Brisbane, Australia


Re: [Python-Dev] More optimisation ideas

2016-02-04 Thread Steven D'Aprano
On Thu, Feb 04, 2016 at 07:58:30PM -0500, Terry Reedy wrote:

> >>For folks that *do* know how to use the terminal:
> >>
> >>$ python3 -m inspect --details inspect
> >>Target: inspect
> >>Origin: /usr/lib64/python3.4/inspect.py
> >>Cached: /usr/lib64/python3.4/__pycache__/inspect.cpython-34.pyc
> >>Loader: <_frozen_importlib.SourceFileLoader object at 0x7f0d8d23d9b0>
> >>
> >>(And if they just want to *read* the source code, then leaving out
> >>"--details" prints the full module source, and would work even if the
> >>standard library were in a zip archive)
> 
> This is completely inadequate as a replacement for loading source into 
> an editor, even if just for reading.
[...]

I agree with Terry. The inspect trick Nick describes above is a great 
feature to have, but it's not a substitute for opening the source in an 
editor, not even on OSes where the command line tools are more powerful 
than Windows' default tools.

[...]
> I agree that removing stdlib Python source files by default is a poor
> idea. The disk space saved is trivial, and so, for me, would be nearly
> all of the time saving.

I too would be very reluctant to remove the source files from Python by 
default, but I have an alternative. I don't know if this is a ridiculous 
idea or not, but now that the .pyc bytecode files are kept in a separate 
__pycache__ directory, could we freeze that directory and leave the 
source files available for reading?

(I'm not even sure if this suggestion makes sense, since I'm not really 
sure what "freezing" the stdlib entails. Is it documented anywhere?)


-- 
Steve


Re: [Python-Dev] Opcode cache in ceval loop

2016-02-04 Thread Stephen J. Turnbull
Nick Coghlan writes:

 > If someone else wanted to also describe the change in a PEP for ease
 > of future reference, using Yury's ceval.txt as input, I do think that
 > would be a good thing, but I wouldn't want to make the enhancement
 > conditional on someone volunteering to do that.

I wasn't suggesting making it conditional, I was encouraging Yury to
do it himself as the most familiar with the situation.  I may be
underestimating the additional cost, but it seems to me explaining
both before and after would be very useful to people who've hacked
ceval in the past.  (Presumably Yury would just be explaining
"after" in his ceval.txt.)

The important thing is to make it discoverable, though, and I don't
care if it's done by PEP or not.  In fact, perhaps "let Yury be Yury",
plus an informational PEP listing all of the *.txt files in the tree
would be more useful?  Or in the devguide?



Re: [Python-Dev] Speeding up CPython 5-10%

2016-02-04 Thread Nick Coghlan
On 3 February 2016 at 03:52, Brett Cannon  wrote:
> Fifth, if we manage to show that a C API can easily be added to CPython to
> make a JIT something that can simply be plugged in and be useful, then we
> will also have a basic JIT framework for people to use. As I said, our use
> of CoreCLR is just for ease of development. There is no reason we couldn't
> use ChakraCore, v8, LLVM, etc. But since all of these JIT compilers would
> need to know how to handle CPython bytecode, we have tried to design a
> framework where JIT compilers just need a wrapper to handle code emission
> and our framework that we are building will handle driving the code emission
> (e.g., the wrapper needs to know how to emit add_integer(), but our
> framework handles when to have to do that).

That could also be really interesting in the context of pymetabiosis
[1] if it meant that PyPy could still at least partially JIT the
Python code running on the CPython side of the boundary.

Cheers,
Nick.

[1] https://github.com/rguillebert/pymetabiosis


-- 
Nick Coghlan   |   ncogh...@gmail.com   |   Brisbane, Australia


Re: [Python-Dev] speed.python.org

2016-02-04 Thread Nick Coghlan
On 4 February 2016 at 16:48, Zachary Ware  wrote:
> I'm happy to announce that speed.python.org is finally functional!
> There's not much there yet, as each benchmark builder has only sent
> one result so far (and one of those involved a bit of cheating on my
> part), but it's there.
>
> There are likely to be rough edges that still need smoothing out.
> When you find them, please report them at
> https://github.com/zware/codespeed/issues or on the sp...@python.org
> mailing list.
>
> Many thanks to Intel for funding the work to get it set up and to
> Brett Cannon and Benjamin Peterson for their reviews.

Heh, cdecimal utterly demolishing the old pure Python decimal module
on the telco benchmark means normalising against CPython 3.5 rather
than 2.7 really isn't very readable :)

Cheers,
Nick.

-- 
Nick Coghlan   |   ncogh...@gmail.com   |   Brisbane, Australia


Re: [Python-Dev] Opcode cache in ceval loop

2016-02-04 Thread Matthias Bussonnier

> On Feb 3, 2016, at 13:22, Yury Selivanov  wrote:
> 
> 
> An ideal way would be to calculate a hit/miss ratio over time
> for each cached opcode, but that would be an expensive
> calculation.

Do you mean like a sliding window?
Otherwise, if you just want a, say, 20% miss threshold, you increment by 1
on a hit and decrement by 4 on a miss.


On Feb 3, 2016, at 13:37, Sven R. Kunze  wrote:

> On 03.02.2016 22:22, Yury Selivanov wrote:
>> One way of tackling this is to give each optimized opcode
>> a counter for hit/misses.  When we have a "hit" we increment
>> that counter, when it's a miss, we decrement it.
> 
> Within a given range, I suppose. Like:
> 
> c = min(c+1, 100)


Min might be overkill; maybe you can use an OR mask to limit the window range
to 256 consecutive calls?
-- 
M





> 
> Yury
> 



Re: [Python-Dev] Opcode cache in ceval loop

2016-02-04 Thread Sven R. Kunze

On 04.02.2016 16:57, Matthias Bussonnier wrote:

On Feb 3, 2016, at 13:22, Yury Selivanov  wrote:


An ideal way would be to calculate a hit/miss ratio over time
for each cached opcode, but that would be an expensive
calculation.

Do you mean like a sliding window?
Otherwise, if you just want a, say, 20% miss threshold, you increment by 1
on a hit and decrement by 4 on a miss.


Division is expensive.



On Feb 3, 2016, at 13:37, Sven R. Kunze  wrote:


On 03.02.2016 22:22, Yury Selivanov wrote:

One way of tackling this is to give each optimized opcode
a counter for hit/misses.  When we have a "hit" we increment
that counter, when it's a miss, we decrement it.

Within a given range, I suppose. Like:

c = min(c+1, 100)


Min might be overkill; maybe you can use an OR mask to limit the window range
to 256 consecutive calls?


Sure, that is how I would have written it in Python. But I would suggest 
an AND mask. ;-)
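For concreteness, here is a minimal Python sketch of the saturating hit/miss
counter discussed in this thread. The names and thresholds are invented for
illustration; the real implementation would be C code inside ceval.

```python
HIT_INC = 1    # a hit nudges the counter up gently
MISS_DEC = 4   # a miss knocks it down hard (~20% miss break-even)
LIMIT = 255    # saturation cap, so old history can't dominate

def update(counter, hit):
    """Update a per-opcode confidence counter, saturating instead of wrapping."""
    if hit:
        return min(counter + HIT_INC, LIMIT)
    return max(counter - MISS_DEC, 0)

# The opcode cache would stay enabled only while the counter is positive:
c = 50
for outcome in [True, True, False, True, False]:  # 60% hit rate
    c = update(c, outcome)
print(c)  # 50 + 3*1 - 2*4 = 45
```

The asymmetric increment/decrement is what avoids the division: a cache line
stays "hot" only while its hit ratio is above the implied break-even point.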


Best,
Sven


Re: [Python-Dev] More optimisation ideas

2016-02-04 Thread Sven R. Kunze

On 04.02.2016 14:09, Nick Coghlan wrote:

On 2 February 2016 at 06:39, Andrew Barnert via Python-Dev
 wrote:

On Feb 1, 2016, at 09:59, mike.romb...@comcast.net wrote:

  If the stdlib were to use implicit namespace packages
( https://www.python.org/dev/peps/pep-0420/ ) and the various
loaders/importers as well, then python could do what I've done with an
embedded python application for years.  Freeze the stdlib (or put it
in a zipfile or whatever is fast).  Then arrange PYTHONPATH to first
look on the filesystem and then look in the frozen/zipped storage.
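The zip half of that scheme already works with the stock import machinery;
here is a small self-contained sketch (the file and module names are invented
for the demo):

```python
import os
import sys
import tempfile
import zipfile

# Build a tiny archive standing in for the frozen/zipped stdlib.
tmpdir = tempfile.mkdtemp()
archive = os.path.join(tmpdir, "stdlib_demo.zip")
with zipfile.ZipFile(archive, "w") as zf:
    zf.writestr("demo_mod.py", "ANSWER = 42\n")

# Filesystem entries earlier in sys.path still shadow the archive,
# which is exactly the PYTHONPATH arrangement described above.
sys.path.append(archive)
import demo_mod
print(demo_mod.ANSWER)  # loaded transparently via zipimport
```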

This is a great solution for experienced developers, but I think it would be 
pretty bad for novices or transplants from other languages (maybe even 
including Python 2).

There are already multiple duplicate questions every month on StackOverflow from people 
asking "how do I find the source to stdlib module X". The canonical answer 
starts off by explaining how to import the module and use its __file__, which everyone is 
able to handle. If we have to instead explain how to work out the .py name from the 
qualified module name, how to work out the stdlib path from sys.path, and then how to 
find the source from those two things, with the caveat that it may not be installed at 
all on some platforms, and how to make sure what they're asking about really is a stdlib 
module, and how to make sure they aren't shadowing it with a module elsewhere on 
sys.path, that's a lot more complicated. Especially when you consider that some people on 
Windows and Mac are writing Python scripts without ever learning how to use the terminal 
or find their Python packages via Explorer/Finder.

For folks that *do* know how to use the terminal:

$ python3 -m inspect --details inspect
Target: inspect
Origin: /usr/lib64/python3.4/inspect.py
Cached: /usr/lib64/python3.4/__pycache__/inspect.cpython-34.pyc
Loader: <_frozen_importlib.SourceFileLoader object at 0x7f0d8d23d9b0>

(And if they just want to *read* the source code, then leaving out
"--details" prints the full module source, and would work even if the
standard library were in a zip archive)
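For completeness, the same origin information is also reachable
programmatically; a small sketch using the stdlib importlib machinery:

```python
import importlib.util

# Look up where a module would be loaded from, without importing it.
spec = importlib.util.find_spec("inspect")
print(spec.name)    # "inspect"
print(spec.origin)  # e.g. /usr/lib64/python3.4/inspect.py
```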


I also want to see and debug core Python in PyCharm, and this is not
acceptable.


If you want to make it opt-in, fine. But opt-out is a no-go. I have a 
side-by-side comparison as we use Java and Python in production. It's 
the *ease of access* that makes Python great compared to Java.


@Andrew
Even for experienced developers it just sucks and there are more 
important things to do.



Best,
Sven



Re: [Python-Dev] Python environment registration in the Windows Registry

2016-02-04 Thread Alexander Walters
I am well aware of this.  In the SO question I referenced, which is the
first Google hit related to this... that is the answer *I* gave. It
only works, in my experience, 60% of the time, and not with two big
packages (pywin32, for which you have to go to third parties to get the
wheel, which does not include all of pywin32, and wxPython), and perhaps more.


On 2/3/2016 14:32, Matthew Einhorn wrote:
On Wed, Feb 3, 2016 at 3:15 AM, Alexander Walters wrote:


...just when I thought I have solved the registry headaches I have
been dealing with...

I am not saying this proposal will make the registry situation
worse, but it may break my solution to the headaches Python's
registry use causes with some non-standard module installers (and
even the standard distutils exe installers, but that is being
mitigated). In the wild there exist modules with their own EXE or MSI
installers that check the registry for 'the system python'.  No
matter how hard you hit them, they will only install to *that one
python*.



If I remember correctly, you can use `wheel convert filename.exe` on
those installers, which creates a wheel that you can install. I think
that's what I used to do with pywin32 before pypiwin32 came along.


I just tested it and it still works on the pywin32 exe.







Re: [Python-Dev] speed.python.org

2016-02-04 Thread Victor Stinner
Great!

2016-02-04 7:48 GMT+01:00 Zachary Ware :
> I'm happy to announce that speed.python.org is finally functional!
> There's not much there yet, as each benchmark builder has only sent
> one result so far (and one of those involved a bit of cheating on my
> part), but it's there.
>
> There are likely to be rough edges that still need smoothing out.
> When you find them, please report them at
> https://github.com/zware/codespeed/issues or on the sp...@python.org
> mailing list.
>
> Many thanks to Intel for funding the work to get it set up and to
> Brett Cannon and Benjamin Peterson for their reviews.
>
> Happy benchmarking,
> --
> Zach