[RELEASE] Python 3.7.2 and 3.6.8 are now available

2018-12-24 Thread Ned Deily
https://blog.python.org/2018/12/python-372-and-368-are-now-available.html

Python 3.7.2 and 3.6.8 are now available. Python 3.7.2 is the next
maintenance release of Python 3.7, the latest feature release of Python.
You can find Python 3.7.2 here:
https://www.python.org/downloads/release/python-372/

See the What’s New In Python 3.7 document for more information about the
many new features and optimizations included in the 3.7 series. Detailed
information about the changes made in 3.7.2 can be found in its change log.

We are also happy to announce the availability of Python 3.6.8:
https://www.python.org/downloads/release/python-368/

Python 3.6.8 is planned to be the last bugfix release of Python 3.6. Per
our support policy, we plan to provide security fixes for Python 3.6 as
needed through 2021, five years following its initial release.

Thanks to all of the many volunteers who help make Python Development and
these releases possible! Please consider supporting our efforts by
volunteering yourself or through organization contributions to the Python
Software Foundation.

https://docs.python.org/3.7/whatsnew/3.7.html
https://docs.python.org/3.7/whatsnew/changelog.html#python-3-7-2-final
https://docs.python.org/3.6/whatsnew/changelog.html#python-3-6-8-final
https://www.python.org/psf/

--
  Ned Deily
  n...@python.org -- []

-- 
https://mail.python.org/mailman/listinfo/python-list


Re: cython3: Cannot start!

2018-12-24 Thread Stefan Behnel
Paulo da Silva wrote on 22.12.18 at 19:26:
> Sorry if this is OT.
> 
> I decided to give cython a try and cannot run a very simple program!
> 
> 1. I am using Kubuntu 18.04 and installed cython3 (not cython).
> 
> 2. My program tp.pyx:
> 
> # cython: language_level=3
> print("Test",2)
> 
> 3. setup.py
> from distutils.core import setup
> from Cython.Build import cythonize
> 
> setup(
> ext_modules = cythonize("tp.pyx")
> )
> 
> 4. Running it:
> python3 setup.py build_ext --inplace
> python3 -c 'import tp'
> 
> 5. output:
> ('Test', 2)
> 
> This is wrong for python3! It should output
> Test 2
> 
> It seems that it is parsing tp.pyx as a python2 script!
> 
> I tried to change the print to python2
> print "Test",2
> 
> and it recognizes the syntax and outputs
> Test 2
> 
> So, how can I tell cython to use python3?

Ubuntu 18.04 ships Cython 0.26, which has a funny bug that you hit above.
It switches the language-level too late, so that the first token (or word)
in the file is parsed with Py2 syntax. In your case, that's the print
statement, which is really parsed as a (Py2) statement here, not as a (Py3)
function call. In normal cases, the language level does not matter for the
first statement in the source (because why would you have a print() there?), so
it took us a while to find this bug.

pip-installing Cython will get you the latest release, where this bug is
resolved.
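
For reference, a minimal sketch of the two options, reusing the setup.py from
the original post. compiler_directives is a regular cythonize() argument, but
whether setting it sidesteps the 0.26 bug is not guaranteed; upgrading is the
reliable fix:

    # Option 1: upgrade Cython, e.g.
    #   pip3 install --user --upgrade Cython
    # Option 2: pass the language level to cythonize() instead of (or as well
    # as) the in-file "# cython: language_level=3" comment.
    from distutils.core import setup
    from Cython.Build import cythonize

    setup(
        ext_modules=cythonize(
            "tp.pyx",
            compiler_directives={"language_level": "3"},  # request Py3 semantics
        )
    )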

Stefan

-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Side by side comparison - CPython, nuitka, PyPy

2018-12-24 Thread Stefan Behnel
Anthony Flury via Python-list wrote on 21.12.18 at 09:06:
> I thought I would look at a side by side comparison of CPython, nuitka and
> PyPy

Interesting choice. Why nuitka?


> *The functionality under test*
> 
> I have a library (called primelib) which implements a Sieve of Eratosthenes
> in pure Python - it was originally written as part of my Project Euler attempts
> 
> Not only does it build a sieve to test primality, it also builds an
> iterable list of primes, and has functionality to calculate the prime
> factors (and exponents) and also calculate all divisors of a given integer
> (up to the size of the sieve).
> 
> To test primelib there is a simple harness which:
> 
>  * Builds a sieve for integers from 2 to 104729 (104729 is the 10,000th
>    prime number)
>  * Using a pre-built list from primes.utm.edu:
>    o For every integer from 2 to 104729, confirm that the prime sieve and
>      the pre-built list agree on primality or non-primality
>    o Confirm that the list of ALL primes identified by the sieve is
>      the same as the pre-built list
>    o For every integer from 2 to 104729, get primelib to generate the
>      prime factors and exponents, and confirm that they multiply up
>      to the expected integer
>    o For every integer from 2 to 104729, get primelib to generate the
>      divisors of the integer, and confirm that each divisor does
>      divide cleanly into the integer
> 
> The sieve is rebuilt for each test; there is no caching of data between
> test cases, so the test harness forces a lot of recalculation.
> 
> I have yet to convert primelib to be Python 3 compatible.
> 
> Exactly the same test harness was run in all 3 cases:
> 
>  * Under CPython 2.7.15, the execution of the test harness took around
>    75 seconds over 5 runs - fastest 73, slowest 78.
>  * Under Nuitka 0.6, the execution of the test harness after compilation
>    took around 85 seconds over 5 runs, fastest 84, slowest 86.
>  * Under PyPy, the execution of the test harness took 4.9 seconds on
>    average over 5 runs, fastest 4.79, slowest 5.2
> 
> I was very impressed at the execution time improvement under PyPy, and a
> little surprised about the lack of improvement under Nuitka.
> 
> I know Nuitka is a work in progress, but given that Nuitka compiles Python
> to C code I would have expected some level of gain, especially in a maths
> heavy implementation.

It compiles to C, yes, but that by itself doesn't mean that it makes it run
faster. Remember that CPython is also written in C, so why should a simple
static translation from Python code to C make it run faster than in CPython?

Cython [1], on the other hand, is an optimising Python-to-C compiler, which
aims to generate fast code and allow users to manually tune it. That's when
you start getting real speedups that are relevant for real-world code.
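
To make that concrete, here is a purely illustrative sketch (not the OP's
primelib) of the kind of manual tuning Cython allows, written in its "pure
Python" annotation style so that, with a recent Cython, the typed variables
become plain C ints after compilation while the same file still runs
unchanged under CPython:

    # primes_cy.py -- illustrative only
    import cython

    def sieve(limit: cython.int):
        i: cython.int
        j: cython.int
        flags = bytearray(b"\x01") * (limit + 1)   # one byte per candidate
        flags[0] = flags[1] = 0
        i = 2
        while i * i <= limit:
            if flags[i]:
                for j in range(i * i, limit + 1, i):
                    flags[j] = 0
            i += 1
        return [n for n in range(2, limit + 1) if flags[n]]

    print(len(sieve(104729)))   # -> 10000, i.e. all primes up to the 10,000th prime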


> This comparison is provided for information only, and is not intended as
> any form of formal benchmark. I don't claim that primelib is as efficient
> as it could be - although every effort was made to try to make it as fast
> as I could.

I understand that it came to life as an exercise, and you probably won't
make production use of it. Actually, I doubt that there is a shortage of
prime detection libraries. ;) Still, thanks for the writeup. It's helpful
to see comparisons of "code how people write it" under different runtimes
from time to time.

Stefan


[1] http://cython.org/

-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Fastest first

2018-12-24 Thread Stefan Behnel
Avi Gross wrote on 17.12.18 at 01:00:
> SHORT VERSION: a way to automatically run multiple algorithms in parallel
> and kill the rest when one returns an answer.

One (somewhat seasonal) comment on this: it doesn't always have to be about
killing (processes or threads). You might also consider a cooperative
implementation, where each of the algorithms is allowed to advance by one
"step" in each "round", and is simply discarded when a solution is found
elsewhere, or when it becomes unlikely that this specific algorithm will
contribute a future solution. This could be implemented via a sequence of
generators or coroutines in Python. Such an approach is often used in
simulations (e.g. SimPy and other "discrete event" simulators), where exact
control over the concurrency pattern is desirable.
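
A minimal sketch of that round-robin shape with plain generators (the
count_up "algorithms" here are invented just to show the mechanics):

    # Each algorithm is a generator that yields None while still working and
    # yields its answer once it has one.
    def race(algorithms):
        active = list(algorithms)
        while active:
            for gen in list(active):
                try:
                    step = next(gen)          # advance by one "step"
                except StopIteration:
                    active.remove(gen)        # gave up without an answer
                    continue
                if step is not None:          # first answer wins
                    return step
        return None

    def count_up(target):                     # a stand-in "algorithm"
        n = 0
        while n < target:
            n += 1
            yield None                        # still working
        yield n                               # done: report the answer

    print(race([count_up(5), count_up(3)]))   # -> 3, the faster one finishes first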

Stefan

-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Why am I getting Error 405 while uploading my package to https://test.pypi.org/legacy?

2018-12-24 Thread sntshkmr60
>   $ pip install readme_renderer[md]

Thanks a lot for this, I wasn't able to figure it out earlier.
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: cython3: Cannot start!

2018-12-24 Thread Paulo da Silva
At 14:07 on 24/12/18, Stefan Behnel wrote:
> Paulo da Silva wrote on 22.12.18 at 19:26:
...

> 
> Ubuntu 18.04 ships Cython 0.26, which has a funny bug that you hit above.
> It switches the language-level too late, so that the first token (or word)
> in the file is parsed with Py2 syntax. In your case, that's the print
> statement, which is really parsed as (Py2) statement here, not as (Py3)
> function. In normal cases, the language level does not matter for the first
> statement in the source (because, why would you have a print() there?), so
> it took us a while to find this bug.
> 
> pip-installing Cython will get you the latest release, where this bug is
> resolved.
> 

Thank you Stefan for the clarification.
Regards.
Paulo

-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Why am I getting Error 405 while uploading my package to https://test.pypi.org/legacy?

2018-12-24 Thread ant
sntshkm...@gmail.com wrote:
>>   $ pip install readme_renderer[md]
>
> Thanks a lot for this, I wasn't able to figure it out earlier.

  does it work ok now?  got the upload to go to pypitest?


  ant
-- 
https://mail.python.org/mailman/listinfo/python-list


RE: Fastest first

2018-12-24 Thread Avi Gross
Stefan,

Yes, techniques like the one you mention are among the ways I am aware of,
but they may not be the easiest to automate.

I was thinking of a style of solving problems. I note Python (on various 
platforms) has an amazing number of ways to do things in parallel and quite a 
few ways those tasks can communicate with the parent(s) or with each other.

My goal was not so much to kill processes but to make a killing.

OK, maybe not. Just a pun.

The obvious goal is to have a framework where you have multiple parties trying 
to solve a problem using divergent methods with no obvious way in advance to 
figure out which would be first with a good answer. Some of the algorithms used 
might be heuristic and you might not want the first one that finishes but 
rather the first one that returns a value, say, within 2% of the best possible 
result. An example would be designing an airplane that comes within 2% of the 
results of one with negligible friction flying in fairly empty space. If the 
best current plane is only within 20% of that, then any algorithm getting within 
10% might be good enough.

And some algorithms may not reliably terminate on the data supplied. Eventually 
they need to be stopped even if no other solution comes in.

Your message seems to be talking about threads running within the same process 
as the parent and being time-sliced. If writing code for that environment, I 
am aware of many ways of building the module as described. Yes, you want the 
threads to regularly yield back control, for many reasons, including preventing 
some from dominating at the expense of others. So if they are regularly pausing 
and yielding, you can set it up so they get a message and perform carefully 
controlled apoptosis. They might carefully clean up what is needed, such as 
closing open files, releasing any locks, deleting any objects exclusively used 
by them, perhaps scribbling away some data, and then exit. Sure, the parent 
could just kill them but that can leave a mess.
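
As a bare-bones sketch of that cooperative "ask, don't kill" pattern, a shared
threading.Event can carry the shutdown message that each worker polls between
steps (the worker body below is just a placeholder):

    import threading
    import time

    stop = threading.Event()                  # the "please wind down" signal

    def worker(name):
        while not stop.is_set():
            time.sleep(0.1)                   # stand-in for one unit of work
        # controlled wind-down: close files, release locks, save state, ...
        print(name, "cleaned up and exiting")

    t = threading.Thread(target=worker, args=("algo-1",))
    t.start()
    time.sleep(0.5)
    stop.set()                                # ask the worker to stop
    t.join()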

And your other suggestions also make sense in some scenarios. Yes, a thread 
that seems to be making slow progress might even suspend itself for a while and 
check to see if others have made more progress.

Let me be very concrete here. I am thinking of an example such as wanting to 
calculate pi to some enormous number of decimal places. Don't ask why or it 
will spoil the mood.

There are a stupendous number of ways to calculate pi. Many involve summing the 
results of an infinite series. Some do very little on each iteration and some 
do very complex things. Most of them can be viewed as gradually having more and 
more digits trailing off to the "right" that no longer change with each 
iteration while the ones further to the right are clearly not significant yet 
since they keep changing. So an algorithm might stop every hundred iterations 
and calculate how many digits seem quite firm since the last check. One 
algorithm might report to a central bulletin board that it is getting five 
more digits and another might report 0 and yet another might report 352. It 
might make sense for only the fastest ones to continue at full pace. But if an 
algorithm later slows down, perhaps another one should be revived. A quick 
search online shows lots of ways to do such a set of analyses but note many do 
not return pi directly but something like pi/4 or 1/pi. There is even a formula 
that calculates the nth digit of pi without calculating any that came before!
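
A toy sketch of that progress check, counting how many leading digits have
stopped changing between checkpoints (the Leibniz series for pi/4 and the
checkpoint sizes are arbitrary choices, purely for illustration):

    from decimal import Decimal, getcontext

    getcontext().prec = 50

    def leibniz(terms):                      # deliberately slow pi/4 series
        s = Decimal(0)
        for k in range(terms):
            s += Decimal((-1) ** k) / (2 * k + 1)
        return 4 * s

    def stable_digits(old, new):             # leading characters that agree
        count = 0
        for a, b in zip(str(old), str(new)):
            if a != b:
                break
            count += 1
        return count

    prev = leibniz(1000)
    for n in (2000, 4000, 8000):
        cur = leibniz(n)
        print(n, "terms,", stable_digits(prev, cur), "digits firm")
        prev = cur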

Clearly you might be able to start hundreds of such methods at once. It may be 
better not to have them all within one process. Since none of them would 
terminate before reaching say the billionth digit, and even the fastest may run 
for a very long time, you can imagine a scheme where each one checks its 
progress periodically and the slowest one quits every 10 minutes, till only the 
fastest survive and perhaps now run faster.

But none of this applies to my original question. I am talking about whether 
anyone has created a general framework for a much simpler scenario. Loosely 
speaking, I might be asked to create a list of objects or functions that 
somehow can be started independently in the same way. I hand this over to some 
module that takes my list and does the rest and eventually returns results. 
That module might accept additional choices and instructions that fine tune 
things. But the overall code might look like:

Import some module
Make a new object that will run things for me as some kind of controller.
Invoke incantations on that controller that include setting things and 
especially getting it the list of things to do.
I ask it to get started and either wait or go do something else.
At some later point I find it has completed. The results of the problem are 
somewhere I can reach. Perhaps there are logs I can search to see more details 
like which methods quit or were stopped or what their last result was before ...

Please see the above as an outline or example, not exa

Re: Fastest first

2018-12-24 Thread Cameron Simpson

On 24Dec2018 16:29, Avi Gross  wrote:
But none of this applies to my original question. I am talking about 
whether anyone has created a general framework for a much simpler 
scenario. Loosely speaking, I might be asked to create a list of 
objects or functions that somehow can be started independently in the 
same way. I hand this over to some module that takes my list and does 
the rest and eventually returns results. That module might accept 
additional choices and instructions that fine tune things. But the 
overall code might look like:


Import some module
Make a new object that will run things for me as some kind of controller.
Invoke incantations on that controller that include setting things and 
especially getting it the list of things to do.
I ask it to get started and either wait or go do something else.
At some later point I find it has completed. The results of the problem 
are somewhere I can reach. Perhaps there are logs I can search to see 
more details like which methods quit or were stopped or what their last 
result was before ...


Well, I have a pair of modules named "cs.result" and "cs.later" in PyPI 
which offer this.  I imagine the futures module also does: I had 
cs.later before that arrived.


cs.result has a class called Result; instances are objects with a 
.result attribute which may be fulfilled later. They're also callable, and 
calling one returns the .result when it becomes ready.  cs.later is a 
capacity-managed queue using these Result objects.


Basically, you can use a cs.later.Later to submit functions to be run, or 
make cs.result.Results directly and tell them to fulfil with an 
arbitrary function. Both of these dispatch Threads to run the function.


Having submitted a bunch of functions and having the corresponding 
Results to hand, you can wait for the Results as they complete, which 
does what you're after: get the earliest result first.


Typical code looks like:

   # a Later with 4 permitted parallel tasks at a given time
   L = Later(4)
   LFs = []
   # get a LateFunction from the Later (a Result subclass)
   LF = L.defer(function, *args, **kwargs)
   LFs.append(LF)

You can submit as many as you like, and keep them in a list, for example 
"LFs" above.


There's a report generator function which takes a collection of Results 
and yields them as they complete.


   for LF in report(LFs):
       print(LF, "completed")

You can clearly build on this depending on your use case.

The typical implementation uses Threads, but of course a Thread can run 
a subprocess if that's sensible, and so forth, or you can use some other 
system to fulfil the Results; from the outside the user doesn't care.
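
For comparison, a rough standard-library sketch of the same "earliest result
first" shape using concurrent.futures (the solver callables are placeholders,
and a ThreadPoolExecutor cannot truly kill work that has already started):

    from concurrent.futures import ThreadPoolExecutor, FIRST_COMPLETED, wait

    def run_first(solvers, *args):
        # Note: leaving the "with" block still waits for solvers that are
        # already running; truly killing them needs processes, as discussed.
        with ThreadPoolExecutor(max_workers=4) as pool:
            futures = [pool.submit(s, *args) for s in solvers]
            done, not_done = wait(futures, return_when=FIRST_COMPLETED)
            for f in not_done:
                f.cancel()                   # only cancels work not yet started
            return next(iter(done)).result()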


Cheers,
Cameron Simpson 
--
https://mail.python.org/mailman/listinfo/python-list