[Python-Dev] Can I make marshal.dumps() slower but stabler?

2018-07-11 Thread INADA Naoki
I'm working on making pyc files stable, via stabilizing marshal.dumps():
https://bugs.python.org/issue34093

Sadly, it makes marshal.dumps() 40% slower.
Luckily, the overhead is small (only 4%) for the dumps(compile(source)) case.
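For reference, here is a rough sketch of the two cases being compared (only an
illustration, not the patch itself; the module path is just a placeholder):

import marshal
import timeit

SOURCE = open("Lib/datetime.py").read()       # any reasonably large module
CODE = compile(SOURCE, "datetime.py", "exec")

# marshal.dumps() alone: this is where the ~40% slowdown shows up.
print(timeit.timeit(lambda: marshal.dumps(CODE), number=1000))

# dumps(compile(source)): the pyc-writing case, where the overhead is only ~4%
# because compile() dominates the total time.
print(timeit.timeit(
    lambda: marshal.dumps(compile(SOURCE, "datetime.py", "exec")),
    number=1000))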

So my question is:  May I remove the unstable but faster code?

Or should I make this optional, so that we maintain two complex code paths?
If so, should the option be enabled by default or not?


For example, xmlrpc uses marshal.  But xmlrpc has significant overhead
other than marshaling, much like the dumps(compile(source)) case.  So I expect
marshal.dumps() performance is not critical for it either.

Is there any real application for which marshal.dumps() performance is critical?

-- 
INADA Naoki  


Re: [Python-Dev] Micro-benchmarks for PEP 580

2018-07-11 Thread INADA Naoki
On Thu, Jul 12, 2018 at 4:54 AM Jeroen Demeyer  wrote:
>
> On 2018-07-11 10:50, Victor Stinner wrote:
> > > As you wrote, the
> > > cost of function calls is unlikely to be the bottleneck of an application.
>
> With that idea, METH_FASTCALL is not needed either. I still find it very
> strange that nobody seems to question all the crazy existing
> optimizations for function calls in CPython, yet at the same time people
> claim that those are just stupid micro-optimizations which are surely not
> important for real applications.

METH_FASTCALL for Python functions and builtin C functions made applications
significantly faster.  That is proven by application benchmarks, not only by
micro benchmarks.

On the other hand, calls into 3rd party extensions are much less frequent.
Our benchmark suite contains some extension calls, but they are not frequent
enough, and I can't find any significant boost from using METH_FASTCALL there.
That's why the METH_FASTCALL benefit is proven for builtins, but not for
3rd party extensions.

If you want to prove it, you can add a benchmark that heavily uses extension
calls.  With it, we can measure the impact of using METH_FASTCALL in 3rd
party extensions.
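Just to sketch the shape of such a benchmark (the extension module and its
function below are made-up placeholders, not a real package):

import timeit
from some_extension import compute   # placeholder for a real 3rd party extension

# A loop dominated by calls into the extension: the pattern where
# METH_FASTCALL in 3rd party code could become visible in a macro benchmark.
print(timeit.timeit("for i in range(1000): compute(i, scale=2)",
                    globals={"compute": compute}, number=10000))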

---

But for now, I'm +1 on enabling FASTCALL for custom types in 3.8.
The Cython authors confirmed they really want to use a custom method type,
and lack of FASTCALL support would block them.

So my current question is: should we go with PEP 576 or PEP 580, or something in between?


>
> Anyway, I'm thinking about real-life benchmarks but that's quite hard.
> One issue is that PEP 580 by itself does not make existing code faster, but
> allows faster code to be written in the future.

At this time, there is no need to show a performance difference.
I need some application benchmarks that are our targets for optimization.

With such target sample application benchmarks, we can understand concretely
how PEP 580 (and possible future optimizations based on PEP 580) can boost
some types of applications.
We can estimate the performance impact using these benchmarks too.
We can also write a PoC to measure the performance impact.

But for now, we don't have any target application in our benchmark suite.
I think that is a showstopper for us.

> A second issue is that
> Cython (my main application) already contains optimizations for
> Cython-to-Cython calls. So, to see the actual impact of PEP 580, I
> should disable those.
>

Could you be more concrete?
Which optimization do you refer to?  Direct calls of cdef/cpdef functions?
METH_FASTCALL + LOAD_METHOD?
Or further optimizations that PEP 580 enables?

Is switching between "binding=True" and "binding=False" not enough for that?

I expect we can have several Cython modules which call
each other.  I feel that is the right way to simulate a "Cython (not Python)
as a glue language" workload.

Anyway, I'm not asking you to show a "performance impact".
For now, I'm only asking for a "target application we want to optimize with
PEP 580 and future optimizations based on PEP 580".

Regards,

-- 
INADA Naoki  


[Python-Dev] Accepting PEP 572, Assignment Expressions

2018-07-11 Thread Guido van Rossum
As anticipated, after a final round of feedback I am hereby accepting PEP
572, Assignment Expressions: https://www.python.org/dev/peps/pep-0572/

Thanks to everyone who participated in the discussion or sent a PR.

Below is a list of changes since the last post
(https://mail.python.org/pipermail/python-dev/2018-July/154557.html) -- they
are mostly cosmetic so I won't post the doc again, but if you want to go over
them in detail, here's the history of the file on GitHub:
https://github.com/python/peps/commits/master/pep-0572.rst, and here's a diff
since the last posting:
https://github.com/python/peps/compare/26e6f61f...master (sadly it's
repo-wide -- you can click on Files changed and then navigate to
pep-0572.rst).

   - Tweaked the example at line 95-100 to use result = ... rather than return
   ... so as to make a different rewrite less feasible
   - Replaced the weak "2-arg iter" example with Giampaolo Rodolà's while
   chunk := file.read(8192): process(chunk) (see the sketch after this list)
   - *Added prohibition of unparenthesized assignment expressions in
   annotations and lambdas*
   - Clarified that TargetScopeError is a *new* subclass of SyntaxError
   - Clarified the text forbidding assignment to comprehension loop control
   variables
   - Clarified that the prohibition on := with annotation applies to
   *inline* annotation (i.e. they cannot be syntactically combined in the
   same expression)
   - Added conditional expressions to the things := binds less tightly than
   - Dropped section "This could be used to create ugly code"
   - Clarified the example in Appendix C
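
For reference, the new idioms mentioned above look like this (adapted from the
PEP's examples; they need an interpreter that implements PEP 572):

import io

def process(chunk):
    print(len(chunk))

# Giampaolo's loop-and-read idiom:
file = io.BytesIO(b"x" * 20000)
while chunk := file.read(8192):
    process(chunk)

# Reusing a computed value inside a comprehension:
data = range(10)
filtered_data = [y for x in data if (y := x * 2) > 5]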

Now on to the implementation work! (Maybe I'll sprint on this at the
core-dev sprint in September.)

-- 
--Guido van Rossum (python.org/~guido)


Re: [Python-Dev] Micro-benchmarks for PEP 580

2018-07-11 Thread Jeroen Demeyer

On 2018-07-11 10:50, Victor Stinner wrote:

> As you wrote, the
> cost of function calls is unlikely to be the bottleneck of an application.


With that idea, METH_FASTCALL is not needed either. I still find it very 
strange that nobody seems to question all the crazy existing 
optimizations for function calls in CPython, yet at the same time people
claim that those are just stupid micro-optimizations which are surely not
important for real applications.


Anyway, I'm thinking about real-life benchmarks but that's quite hard. 
One issue is that PEP 580 by itself does not make existing code faster, but
allows faster code to be written in the future. A second issue is that 
Cython (my main application) already contains optimizations for 
Cython-to-Cython calls. So, to see the actual impact of PEP 580, I 
should disable those.



Jeroen.


Re: [Python-Dev] Micro-benchmarks for PEP 580

2018-07-11 Thread Jeroen Demeyer

On 2018-07-11 10:27, Antoine Pitrou wrote:

> I agree PEP 580 is extremely complicated and it's not obvious what the
> maintenance burden will be in the long term.


But the status quo is also very complicated! If somebody were to write a
PEP describing the existing implementation of builtin_function_or_method
and method_descriptor with all their optimizations, you would probably
also find that complicated.


Have you actually looked at the existing implementation in
Python/ceval.c and Objects/call.c for calling objects? One of the things
that PEP 580 offers is replacing 5 (yes, five!) functions
_PyCFunction_FastCallKeywords, _PyCFunction_FastCallDict,
_PyMethodDescr_FastCallKeywords, _PyMethodDef_RawFastCallKeywords and
_PyMethodDef_RawFastCallDict with a single function PyCCall_FASTCALL.


Anyway, it would help if you could say why you (and others) think that
it's complicated. Sure, there are many details to be considered (for
example, the section about Descriptor behavior), but those are not
essential to understanding what the PEP does. I wrote the PEP as a complete
specification, giving full details. Maybe I should add a section just
explaining the core ideas without the details?



Jeroen.


Re: [Python-Dev] Micro-benchmarks for PEP 580

2018-07-11 Thread Jeroen Demeyer

On 2018-07-11 00:48, Victor Stinner wrote:

> About your benchmark results:
>
> "FASTCALL unbound method(obj, 1, two=2): Mean +- std dev: 42.6 ns +- 29.6 ns"
>
> That's a very big standard deviation :-(


Yes, I know. My CPU was overheating and was slowed down. But that seems
to have happened for only a small number of benchmarks.


But given that you find these benchmarks stupid anyway, I assume that 
you don't really care.



Jeroen.


Re: [Python-Dev] A more flexible task creation

2018-07-11 Thread Michel Desmoulin

> To be honest, I see "async with" being abused everywhere in asyncio,
> lately.  I like to have objects with start() and stop() methods, but
> everywhere I see async context managers.>
> Fine, add nursery or whatever, but please also have a simple start() /
> stop() public API.
> 
> "async with" is only good for functional programming.  If you want to go
> more of an object-oriented style, you tend to have start() and stop()
> methods in your classes, which will call start() & stop() (or close())
> methods recursively on nested resources.  So of the libraries (aiopg,
> I'm looking at you) don't support start/stop or open/close well.

Wouldn't calling __enter__ and __exit__ manually work for you? I
started coding begin() and stop(), but I removed them, as I couldn't
find a use case for them.
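Concretely, I mean something like this (a sketch only; since ayo.scope() is
used as an async context manager, the methods involved are the async ones,
__aenter__ and __aexit__):

import asyncio
import ayo

async def zzz(seconds):
    await asyncio.sleep(seconds)

async def main():
    scope = ayo.scope(max_concurrency=10)
    run = await scope.__aenter__()        # what "async with" does on entry
    try:
        run.asap(zzz(0.005))
    finally:
        await scope.__aexit__(None, None, None)   # and what it does on exit

asyncio.run(main())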

And what exactly is the use case that doesn't work with `async with`?
The whole point is to spot the boundaries of the tasks' execution easily.
If you start()/stop() at arbitrary points, it kind of defeats the purpose.

It's a genuine question though. I can totally accept I overlooked a
valid use case.


> 
> I tend to slightly agree, but OTOH if asyncio had been designed to not
> schedule tasks automatically on __init__ I bet there would have been
> other users complaining that "why didn't task XX run?", or "why do tasks
> need a start() method, that is clunky!".  You can't please everyone...

Well, ensure_future([schedule_immediately=True]) and
asyncio.create_task([schedule_immediately=True]) would take care of that.
They are the entry points for task creation and scheduling.

> 
> Also, in
>              task_list = run.all(foo(), foo(), foo())
> 
> As soon as you call foo(), you are instantiating a coroutine, which
> consumes memory, while the task may not even be scheduled for a long
> time (if you have 5000 potential tasks but only execute 10 at a time,
> for example).

Yes, but this has the benefit of accepting any awaitable, not just
coroutines. You don't have to wonder what to pass, or in which form. It's
always the same. Too many APIs are hard to understand because you never
know if they accept a callback, a coroutine function, a coroutine, a task,
a future...

For the same reason, requests.get() creates and destroys a session every
time. It's inefficient, but way easier to understand, and it fits the
majority of use cases.
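Concretely, the analogy is (a sketch using the requests API):

import requests

# Convenience API: a throwaway Session is created and torn down internally.
requests.get("https://example.com")

# Explicit API: reuse one Session (and its connection pool) across calls.
with requests.Session() as session:
    session.get("https://example.com")
    session.get("https://example.com/other")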

> 
> But if you do as Yuri suggested, you'll instead accept a function
> reference, foo, which is a singleton, you can have many foo references
> to the function, but they will only create coroutine objects when the
> task is actually about to be scheduled, so it's more efficient in terms
> of memory.

I made some tests, and the memory consumption is indeed radically smaller
if you just store references, compared to storing the corresponding
raw coroutine objects.
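A rough way to reproduce that comparison (just a sketch; the exact numbers
depend on the workload):

import asyncio
import tracemalloc

async def zzz(seconds):
    await asyncio.sleep(seconds)

def measure(make_item, count=100_000):
    """Allocate `count` items and return the memory they use, in bytes."""
    tracemalloc.start()
    items = [make_item() for _ in range(count)]
    size = tracemalloc.get_traced_memory()[0]
    tracemalloc.stop()
    for item in items:
        if asyncio.iscoroutine(item):
            item.close()        # avoid "coroutine was never awaited" warnings
    return size

print("coroutine objects :", measure(lambda: zzz(0.005)))
print("callable + args   :", measure(lambda: (zzz, 0.005)))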

However, this is a rare case. It assumes that:

- you have a lot of tasks
- you have a max concurrency
- the max concurrency is very small
- most tasks reuse a similar combination of callables and parameters

It's a very specific, narrow case. Also, everything you store on the
scope will be wrapped into a Future object whether or not it's scheduled,
so that you can cancel it later. So the difference in memory
consumption is not as large as it seems.

I didn't want to compromise the quality of the current API for the
general case for the sake of an edge-case optimization.

On the other hand, this is low-hanging fruit, and on platforms such as the
Raspberry Pi, where asyncio has a lot to offer, it can make a big difference
to shave 20 off the memory consumption of a specific workload.

So I listened and implemented an escape hatch:

import random
import asyncio

import ayo

async def zzz(seconds):
    await asyncio.sleep(seconds)
    print(f'Slept for {seconds} seconds')


@ayo.run_as_main()
async def main(run_in_top):

    async with ayo.scope(max_concurrency=10) as run:
        for _ in range(1):
            run.from_callable(zzz, 0.005)  # or run.asap(zzz(0.005))

This only creates the awaitable (here, the coroutine) lazily, at
scheduling time. I see a 15% memory saving for the WHOLE program when using
`from_callable()`.

So it's definitely a good feature to have, thank you.

But again, and I hope Yuri is reading this because he will implement
that for uvloop and it will trickle down to asyncio, I think we
should not compromise the main API for this.

asyncio is hard enough to grok, and too many concepts fly around. The
average Python programmer is used to much easier things from past
encounters with Python.

If we want asyncio to one day be considered the clean AND easy way to
do async, we need to work on the API.

asyncio.run() is a step in the right direction (although, again, I wish we
had implemented it 2 years ago when I talked about it, instead of being
told no).

Now, if we add nurseries, they should hide the rest of the complexity, not
add to it.


Re: [Python-Dev] Micro-benchmarks for PEP 580

2018-07-11 Thread Nick Coghlan
On 11 July 2018 at 18:50, Victor Stinner  wrote:
> I'm skeptical about "50% gain": I want to see a working implementation
> and reproduce the benchmarks myself to believe that :-) As you wrote, the
> cost of function calls is unlikely to be the bottleneck of an application.
>
> Sorry, I didn't read all these PEPs about function calls, but IMHO a
> minor speedup on micro benchmarks must not drive a PEP. If someone
> wants to work on CPython performance, I would suggest thinking bigger
> and targeting a 2x speedup on applications. To get to that point, the
> bottleneck is the C API, so we have to fix our C API first.

Folks, I'd advise against focusing too heavily on CPython performance
when reviewing PEP 580, as PEP 580 is *not primarily about CPython
performance*. The key optimisations it enables have already been
implemented in the form of FASTCALL, so nothing it does is going to
make CPython faster.

Instead, we're being approached in our role as the *central standards
management group for the Python ecosystem*, similar to the way we were
involved in the establishment of PEP 3118 as the conventional
mechanism for zero-copy data sharing. While Stefan Krah eventually
brought memoryview up to speed as a first class PEP 3118 buffer
exporter and consumer, adding the memoryview builtin wasn't the
*point* of that PEP - the point of the PEP was to let libraries like
NumPy and PIL share the same backing memory across different Python
objects without needing to know about each other directly.

The request being made is a similar case of ecosystem enablement -
it's less about the performance of any specific case (although that's
certainly a significant intended benefit), and more about providing
participants in the Python ecosystem more freedom to architect their
projects in the way that makes the most sense from an ongoing
maintenance perspective, without there being a concrete and measurable
performance *penalty* in breaking a large monolithic extension module
up into smaller independently updatable components.

Cheers,
Nick.

-- 
Nick Coghlan   |   ncogh...@gmail.com   |   Brisbane, Australia


Re: [Python-Dev] Micro-benchmarks for PEP 580

2018-07-11 Thread Victor Stinner
2018-07-11 2:12 GMT+02:00 INADA Naoki :
> If my idea has a 50% gain and the current PEP 580 has only a 5% gain,
> why should we accept PEP 580?
> But no one knows the real gain, because there is no realistic application
> whose bottleneck is calling overhead.

I'm skeptical about "50% gain": I want to see a working implementation
and reproduce the benchmarks myself to believe that :-) As you wrote, the
cost of function calls is unlikely to be the bottleneck of an application.

Sorry, I didn't read all these PEPs about function calls, but IMHO a
minor speedup on micro benchmarks must not drive a PEP. If someone
wants to work on CPython performance, I would suggest thinking bigger
and targeting a 2x speedup on applications. To get to that point, the
bottleneck is the C API, so we have to fix our C API first.

http://vstinner.readthedocs.io/python_new_stable_api.html

Victor


Re: [Python-Dev] Micro-benchmarks for function calls (PEP 576/579/580)

2018-07-11 Thread Victor Stinner
2018-07-11 9:19 GMT+02:00 Andrea Griffini :
> Maybe it is something obvious, but I often find myself forgetting that
> most modern CPUs can change speed (and energy consumption) depending on a
> moving average of CPU load.
>
> If you don't disable this "green" feature and the benchmarks are quick,
> then the result can have huge variations depending on exactly when, and
> whether, the CPU switches to fast mode.

If you use "sudo python3 -m perf system tune": Turbo Boost is disabled
and the CPU frequency is fixed.

More info at:
http://perf.readthedocs.io/en/latest/system.html

Victor


Re: [Python-Dev] Micro-benchmarks for PEP 580

2018-07-11 Thread Antoine Pitrou
On Wed, 11 Jul 2018 09:12:11 +0900
INADA Naoki  wrote:
> 
> Without an example application, I can't consider PEP 580 seriously.
> Micro-benchmarks don't attract me.
> And PEP 576 seems a much simpler and more straightforward way to expose
> FASTCALL.

I agree PEP 580 is extremely complicated and it's not obvious what the
maintenance burden will be in the long term.

Regards

Antoine.


Re: [Python-Dev] why is not 64-bit installer the default download link for Windows?

2018-07-11 Thread Paul Moore
On 11 July 2018 at 06:39, Steven D'Aprano  wrote:
> On Wed, Jul 11, 2018 at 05:14:34AM +0300, Ivan Pozdeev via Python-Dev wrote:
>> On 11.07.2018 1:41, Victor Stinner wrote:
>> >2018-07-09 18:01 GMT+02:00 Steve Dower :
>> >>The difficulty is that they *definitely* can use the 32-bit version, and
>> >>those few who are on older machines or older installs of Windows may not
>> >>understand why the link we provide didn't work for them.
>
> I think Steve's comment is right on the money.
>
> Although professional programmers should be a bit more technically
> competent than the average user, many are just hobbyist programmers, or
> school kids who are just as clueless as the average user since they
> *are* average users.

I'm perfectly happy for the default installer that you get from the
obvious first choice button (the one that says "Python 3.7.0") on the
"Downloads" drop-down to be the 32-bit installer[1]. But making people
who know they want the 64-bit installer click through "View the full
list of downloads" -> "Release 3.7.0", scroll down to the bottom of a
page that looks more like a release note if you just glance at the
top, and find "Windows x86-64 executable installer" is a bit much. And
the convoluted route is a nightmare for people like me to explain when
I'm trying to tell people who I know should be getting the 64-bit
version, how to do so.

Which is why I'd like to see a bit more choice on that initial
dropdown. Just a second button for the 64-bit version is enough - for
the full lists the set of links to the left of the dropdown is fine.

Paul

[1] Although I strongly dislike the fact that there's no indication at
all in that dropdown that what you're getting *is* the 32 bit version,
short of hovering over the link and knowing the naming convention of
the installers :-(.


Re: [Python-Dev] Micro-benchmarks for PEP 580

2018-07-11 Thread INADA Naoki
First of all, please don't be so defensive.

I'm just saying I need an example target application.   I'm not against PEP 580.
Actually, I lean more toward PEP 580 than PEP 576, although I wonder if
some parts of PEP 580 could be simplified or postponed.

But PEP 580 is very complicated.  I need enough evidence of what
PEP 580 provides before voting for it.

I know Python is important for data scientists, and Cython is important
for them too.
But I don't have any example target applications, because I'm not a data
scientist and I don't use Jupyter, numpy, etc.
Python's performance test suite doesn't contain such applications either.
So we can't measure or estimate the benefits.

That's why I have requested a real-world sample application again and again.

In my experience, I need Cython for making hotspots faster.  The calling
overhead is much smaller than the hotspot itself.  I haven't focused on
inter-extension calling performance because `cdef` or `cpdef` is enough.
So I really don't have any target application whose bottleneck is the
calling performance of Cython.


> >>> Currently, we create a temporary long object for passing an argument.
> >>> If there were a protocol for exposing the format used by PyArg_Parse*, we
> >>> could bypass the temporary Python object and call myfunc_impl directly.
>
> Note that this is not fast at all. It actually has to parse the format
> description at runtime. For really fast calls, this should be avoided, and
> it can be avoided by using a str object for the signature description and
> interning it. That relies on signature normalisation, which requires a
> proper formal specification of C/C++ signature strings, which ... is pretty
> much the can of worms that Antoine mentioned.
>

Please don't start a discussion about the details.
This is just an example of a possible future optimization.
(And I'm happy to hear that Cython will be tackling this.)

> > If my idea has a 50% gain and the current PEP 580 has only a 5% gain,
> > why should we accept PEP 580?
>
> Because PEP 580 is also meant as a preparation for a fast C-to-C call
> interface in Python.
>
> Unpacking C callables is quite an involved protocol, and we have been
> pushing the idea around and away in the Cython project for some seven
> years. It's about time to consider it more seriously now, and there are
> plans to implement it on top of PEP 580 (which is also mentioned in the PEP).
>

I want to see it before accepting PEP 580.

>
> > And PEP 576 seems a much simpler and more straightforward way to expose
> > FASTCALL.
>
> Except that it does not get us one step forward on the path towards what
> you proposed. So, why would *you* prefer it over PEP 580?
>

I prefer PEP 580!
I just don't have enough rationale to accept PEP 580's complexity.

>>
>> But I have a worry about it.  If we do it for all functions, it makes the
>> Python binary fatter and consumes more CPU cache.  Once the CPU cache
>> starts thrashing, application performance gets slower quickly.
>
> Now, I'd like to see benchmark numbers for that before I believe it. Macro
> benchmarks, not micro benchmarks! *wink*

Yes, when I try inlining argument parsing or other optimizations that have
significant memory overhead, I'll try a macro benchmark of cache efficiency.

But for now, I'm working on making Python's memory footprint smaller,
not fatter.

Regards,
-- 
INADA Naoki  


Re: [Python-Dev] Micro-benchmarks for function calls (PEP 576/579/580)

2018-07-11 Thread Andrea Griffini
Maybe it is something obvious, but I often find myself forgetting that
most modern CPUs can change speed (and energy consumption) depending on a
moving average of CPU load.

If you don't disable this "green" feature and the benchmarks are quick,
then the result can have huge variations depending on exactly when, and
whether, the CPU switches to fast mode.

On Wed, Jul 11, 2018 at 12:53 AM Victor Stinner  wrote:

> The pyperformance benchmark suite had micro benchmarks on function
> calls, but I removed them because they were sending the wrong signal.
> A function call by itself doesn't matter to compare two versions of
> CPython, or CPython to PyPy. It's also very hard to measure the cost
> of a function call when you are using a JIT compiler which is able to
> inline the code into the caller... So I moved all these stupid
> "micro benchmarks" into a dedicated Git repository:
> https://github.com/vstinner/pymicrobench
>
> Sometimes, I add new micro benchmarks when I work on one specific
> micro optimization.
>
> But more generally, I suggest that you not run micro benchmarks and that
> you avoid micro optimizations :-)
>
> Victor
>
> 2018-07-10 0:20 GMT+02:00 Jeroen Demeyer :
> > Here is an initial version of a micro-benchmark for C function calling:
> >
> > https://github.com/jdemeyer/callbench
> >
> > I don't have results yet, since I'm struggling to find the right options
> > to "perf timeit" to get a stable result. If somebody knows how to do this,
> > help is welcome.
> >
> >
> > Jeroen.


Re: [Python-Dev] why is not 64-bit installer the default download link for Windows?

2018-07-11 Thread Glenn Linderman

On 7/10/2018 11:14 PM, Stephen J. Turnbull wrote:

> Ivan Pozdeev via Python-Dev writes:
>
>  > "One or more issues caused the setup to fail. Please fix the issues and
>  > then retry setup. For more information see the log file .
>  >
>  > 0x80070661 - This installation package is not supported by this
>  > processor type. Contact your product vendor."
>  >
>  > Pretty descriptive in my book.
>
> Experience shows that's definitely not descriptive enough for my
> university's students (and starting from AY 2021 we're moving to
> Python 3 as the university-wide programming course language, yay!)
> They have no idea that "processor type" means "word size", or what
> alternative package to look for.  Sometimes they take the "contact
> vendor" wording to mean "package is broken".  I don't think the
> Japanese or Chinese students will have 32-bit machines (haven't seen
> one among my advisees since March 2016), but we do get some students
> from less wealthy countries who may be using older machines.
>
> So I think it would be really nice if the installer detects the
> wordsize mismatch, and issues an explicit message like
>
>  This package is intended for a 64-bit machine, but yours is a 32-bit
>  machine.
>
>  Please download and install the package specifically for 32-bit
>  machines instead.
Which would be far, far better, regardless of which bitness(es) of 
installer is(are) displayed (prominently) on the web site.


Re: [Python-Dev] why is not 64-bit installer the default download link for Windows?

2018-07-11 Thread Stephen J. Turnbull
Ivan Pozdeev via Python-Dev writes:

 > "One or more issues caused the setup to fail. Please fix the issues and 
 > then retry setup. For more information see the log file .
 > 
 > 0x80070661 - This installation package is not supported by this 
 > processor type. Contact your product vendor."
 > 
 > Pretty descriptive in my book.

Experience shows that's definitely not descriptive enough for my
university's students (and starting from AY 2021 we're moving to
Python 3 as the university-wide programming course language, yay!)
They have no idea that "processor type" means "word size", or what
alternative package to look for.  Sometimes they take the "contact
vendor" wording to mean "package is broken".  I don't think the
Japanese or Chinese students will have 32-bit machines (haven't seen
one among my advisees since March 2016), but we do get some students
from less wealthy countries who may be using older machines.

So I think it would be really nice if the installer detects the
wordsize mismatch, and issues an explicit message like

 This package is intended for a 64-bit machine, but yours is a 32-bit
machine.

Please download and install the package specifically for 32-bit
machines instead.
