[Python-ideas] Re: multiprocessing: hybrid CPUs

2021-08-18 Thread Stephen J. Turnbull
Christopher Barker writes:

 > The worker pool approach is probably the way to go, but there is a fair bit
 > of overhead to creating a multiprocessing job. So fewer, larger jobs are
 > faster than many small jobs.

True, but processing those rows would have to be awfully fast for the
increase in overhead from 16 chunks x 10^6 rows/chunk to 64 chunks x
250,000 rows/chunk to matter, and that would be plenty granular to
give a good approximation to his 2 chunks by fast core : 1 chunk by
slow core nominal goal with a single queue, multiple workers
approach.  (Of course, it almost certainly will do a lot better, since
2 : 1 was itself a very rough approximation, but the single queue
approach adjusts to speed differences automatically.)

And if it's that fast, he could do it on a single core, and still be
done by the time he's finished savoring a sip of coffee. ;-)
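For concreteness, the chunking arithmetic above (64 chunks of 250,000 rows instead of 16 chunks of a million) might be sketched like this; `make_chunks` is a hypothetical helper, not code from the thread:

```python
def make_chunks(n_rows, n_chunks):
    """Split row indices 0..n_rows-1 into contiguous, near-equal chunks."""
    base, extra = divmod(n_rows, n_chunks)
    chunks, start = [], 0
    for i in range(n_chunks):
        size = base + (1 if i < extra else 0)  # spread any remainder
        chunks.append(range(start, start + size))
        start += size
    return chunks

# 64 chunks of 250,000 rows each, as in the example above
chunks = make_chunks(16_000_000, 64)
```

Feeding these chunks through a single queue lets faster cores simply pick up more of them, which is the automatic load-balancing being described.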

Steve
___
Python-ideas mailing list -- python-ideas@python.org
To unsubscribe send an email to python-ideas-le...@python.org
https://mail.python.org/mailman3/lists/python-ideas.python.org/
Message archived at 
https://mail.python.org/archives/list/python-ideas@python.org/message/TCC7ZZLP7YMOCWSKIC2KXQQVBKT3UIMZ/
Code of Conduct: http://python.org/psf/codeofconduct/


[Python-ideas] Re: Stack-scoped variables

2021-08-18 Thread Cameron Simpson
On 18Aug2021 16:30, Paul Prescod  wrote:
>Let's imagine I have an algorithm which depends on a context variable.
>
>I write an algorithm (elided below for space) which depends on it. Then I 
>realize that I can improve the performance of my algorithm by using 
>concurrent.futures. But my algorithm will change its behaviour because it does 
>not inherit the context. Simply trying to parallelize an "embarrassingly 
>parallel" algorithm changes its behaviour.
>
>What I really want is a stack-scoped variable: a variable which retains its 
>value for all child scopes whether in the same thread or not, unless it is 
>overwritten in a child scope (whether in the same thread or not).

Do you mean a facility for setting an attribute on an object and 
reversing that when you exit the scope where you set it?

I've got a stackattrs context manager I use for that. Example use:

# this is on PyPI
from cs.context import stackattrs

with stackattrs(getcontext(), prec=8):
    # getcontext().prec == 8 inside this block
    vals = map(my_algorithm, range(0, 10))
# .prec is whatever it used to be, including absent
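For readers without cs.context installed, a minimal save/set/restore context manager in the same spirit might look like this (my approximation of the idea, not Cameron's actual code):

```python
from contextlib import contextmanager

_MISSING = object()  # sentinel for "attribute was absent"

@contextmanager
def stackattrs(obj, **attrs):
    # save the old values (or note their absence), then set the new ones
    saved = {k: getattr(obj, k, _MISSING) for k in attrs}
    for k, v in attrs.items():
        setattr(obj, k, v)
    try:
        yield
    finally:
        # restore, deleting attributes that were absent before
        for k, old in saved.items():
            if old is _MISSING:
                delattr(obj, k)
            else:
                setattr(obj, k, old)
```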

Otherwise it is not clear to me what you are after. If this isn't what 
you want, can you describe what's different?

Cheers,
Cameron Simpson 
___
Python-ideas mailing list -- python-ideas@python.org
To unsubscribe send an email to python-ideas-le...@python.org
https://mail.python.org/mailman3/lists/python-ideas.python.org/
Message archived at 
https://mail.python.org/archives/list/python-ideas@python.org/message/76G7B7DICTKHZRB6QOOAXYYKYDCSSMQS/
Code of Conduct: http://python.org/psf/codeofconduct/


[Python-ideas] Re: New 'How to Write a Good Bug Report' Article for Docs

2021-08-18 Thread Thomas Grainger
Jack DeVries wrote:
> Hi All!
> We are trying to replace a link in the official docs which is now
> broken, but used to link to this article:
> https://web.archive.org/web/20210613191914/https://developer.mozilla.org/en-...
> Can you offer a suggestion for a replacement? The bad link has already
> been removed. Also see the bpo:
> https://bugs.python.org/issue44830
> Also see the rather stale discourse post to the same effect as this
> email (asking for replacement article ideas):
> https://discuss.python.org/t/alternate-article-for-how-to-wite-good-bug-repo...
> Thanks!
> Jack
Looks like the content is back online with a redirect pending

https://github.com/mdn/content/issues/8036#issuecomment-901262333
___
Python-ideas mailing list -- python-ideas@python.org
To unsubscribe send an email to python-ideas-le...@python.org
https://mail.python.org/mailman3/lists/python-ideas.python.org/
Message archived at 
https://mail.python.org/archives/list/python-ideas@python.org/message/32PVUY5IWFOGMFNJB4S5Q4S35K7LZRBD/
Code of Conduct: http://python.org/psf/codeofconduct/


[Python-ideas] Stack-scoped variables

2021-08-18 Thread Paul Prescod
Let's imagine I have an algorithm which depends on a context variable.

I write an algorithm (elided below for space) which depends on it. Then I 
realize that I can improve the performance of my algorithm by using 
concurrent.futures. But my algorithm will change its behaviour because it does 
not inherit the context. Simply trying to parallelize an "embarrassingly 
parallel" algorithm changes its behaviour.

What I really want is a stack-scoped variable: a variable which retains its 
value for all child scopes whether in the same thread or not, unless it is 
overwritten in a child scope (whether in the same thread or not).

from decimal import getcontext
import concurrent.futures


def my_algorithm(input):
    # some real algorithm here, which relies on decimal precision
    return getcontext().prec


getcontext().prec = 8
vals = map(my_algorithm, range(0, 10))
print(list(vals))

with concurrent.futures.ThreadPoolExecutor(max_workers=5) as executor:
    # run the algorithm across worker threads
    results = executor.map(my_algorithm, range(0, 10))

print(list(results))

Results:

[8, 8, 8, 8, 8, 8, 8, 8, 8, 8]
[28, 28, 28, 28, 28, 28, 28, 28, 28, 28]
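Until such a stack-scoped construct exists, one workaround under the current semantics is to capture the relevant value in the submitting thread and re-establish it in each worker; decimal contexts are thread-local, so worker threads otherwise see the default precision. The `with_current_prec` wrapper below is a hypothetical helper, not part of any library:

```python
from decimal import getcontext, localcontext
import concurrent.futures


def my_algorithm(input):
    return getcontext().prec


def with_current_prec(func):
    prec = getcontext().prec  # captured in the submitting thread

    def wrapper(arg):
        # re-establish the captured precision in the worker thread
        with localcontext() as ctx:
            ctx.prec = prec
            return func(arg)

    return wrapper


getcontext().prec = 8
with concurrent.futures.ThreadPoolExecutor(max_workers=5) as executor:
    results = list(executor.map(with_current_prec(my_algorithm), range(10)))
```

With the wrapper, the threaded run reports 8 everywhere instead of the default 28 shown above.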
___
Python-ideas mailing list -- python-ideas@python.org
To unsubscribe send an email to python-ideas-le...@python.org
https://mail.python.org/mailman3/lists/python-ideas.python.org/
Message archived at 
https://mail.python.org/archives/list/python-ideas@python.org/message/LKZRC7OK7GSBYWLGO2ZOCSBM3C2G3WEW/
Code of Conduct: http://python.org/psf/codeofconduct/


[Python-ideas] Re: multiprocessing: hybrid CPUs

2021-08-18 Thread Christopher Barker
The worker pool approach is probably the way to go, but there is a fair bit
of overhead to creating a multiprocessing job. So fewer, larger jobs are
faster than many small jobs.

So you do want to make the jobs as large as you can without wasting CPU
time.

-CHB

On Wed, Aug 18, 2021 at 9:09 AM Barry  wrote:

>
>
> > On 18 Aug 2021, at 16:03, Chris Angelico  wrote:
> >
> > On Thu, Aug 19, 2021 at 12:52 AM Marc-Andre Lemburg 
> wrote:
> >>
> >>> On 18.08.2021 15:58, Chris Angelico wrote:
> >>> On Wed, Aug 18, 2021 at 10:37 PM Joao S. O. Bueno <
> jsbu...@python.org.br> wrote:
> 
>  So,
>  It is out of scope of Python multiprocessing, and, as I perceive it,
> from
>  the stdlib as a whole to be able to allocate specific cores for each
> subprocess -
>  that is automatically done by the O.S. (and of course, the O.S.
> having an interface
>  for it, one can write a specific Python library which would allow
> this granularity,
>  and it could even check core capabilities).
> >>>
> >>> Python does have a way to set processor affinity, so it's entirely
> >>> possible that this would be possible. Might need external tools
> >>> though.
> >>
> >> There's os.sched_setaffinity(pid, mask) you could use from within
> >> a Python task scheduler, if this is managing child processes (you need
> >> the right permissions to set the affinity).
> >
> > Right; I meant that it might require external tools to find out which
> > processors you want to align with.
> >
> >> Or you could use the taskset command available on Linux to fire
> >> up a process on a specific CPU core. lscpu gives you more insight
> >> into the installed set of available cores.
> >
> > Yes, those sorts of external tools.
> >
> > It MAY be possible to learn about processors by reading /proc/cpuinfo,
> > but that'd still be OS-specific (no idea which Unix-like operating
> > systems have that, and certainly Windows doesn't).
>
> And next you find out that you have to understand the NUMA details
> of your system because the memory attached to the CPUs is not the same
> speed.
>
> >
> > All in all, far easier to just divide the job into far more pieces
> > than you have processors, and then run a pool.
>
> As others already stated, using a worker pool solves this problem for you.
> All you have to do is break your big job into suitably small pieces.
>
> Barry
> >
> > ChrisA
-- 
Christopher Barker, PhD (Chris)

Python Language Consulting
  - Teaching
  - Scientific Software Development
  - Desktop GUI and Web Development
  - wxPython, numpy, scipy, Cython
___
Python-ideas mailing list -- python-ideas@python.org
To unsubscribe send an email to python-ideas-le...@python.org
https://mail.python.org/mailman3/lists/python-ideas.python.org/
Message archived at 
https://mail.python.org/archives/list/python-ideas@python.org/message/LIJV3DAK3I6J3QQFJ2HTGVJHHLQZIHCL/
Code of Conduct: http://python.org/psf/codeofconduct/


[Python-ideas] Re: multiprocessing: hybrid CPUs

2021-08-18 Thread Barry


> On 18 Aug 2021, at 16:03, Chris Angelico  wrote:
> 
> On Thu, Aug 19, 2021 at 12:52 AM Marc-Andre Lemburg  wrote:
>> 
>>> On 18.08.2021 15:58, Chris Angelico wrote:
>>> On Wed, Aug 18, 2021 at 10:37 PM Joao S. O. Bueno  
>>> wrote:
 
 So,
 It is out of scope of Python multiprocessing, and, as I perceive it, from
 the stdlib as a whole to be able to allocate specific cores for each 
 subprocess -
 that is automatically done by the O.S. (and of course, the O.S. having an 
 interface
 for it, one can write a specific Python library which would allow this 
 granularity,
 and it could even check core capabilities).
>>> 
>>> Python does have a way to set processor affinity, so it's entirely
>>> possible that this would be possible. Might need external tools
>>> though.
>> 
>> There's os.sched_setaffinity(pid, mask) you could use from within
>> a Python task scheduler, if this is managing child processes (you need
>> the right permissions to set the affinity).
> 
> Right; I meant that it might require external tools to find out which
> processors you want to align with.
> 
>> Or you could use the taskset command available on Linux to fire
>> up a process on a specific CPU core. lscpu gives you more insight
>> into the installed set of available cores.
> 
> Yes, those sorts of external tools.
> 
> It MAY be possible to learn about processors by reading /proc/cpuinfo,
> but that'd still be OS-specific (no idea which Unix-like operating
> systems have that, and certainly Windows doesn't).

And next you find out that you have to understand the NUMA details
of your system because the memory attached to the CPUs is not the same speed.

> 
> All in all, far easier to just divide the job into far more pieces
> than you have processors, and then run a pool.

As others already stated, using a worker pool solves this problem for you.
All you have to do is break your big job into suitably small pieces.

Barry
> 
> ChrisA

___
Python-ideas mailing list -- python-ideas@python.org
To unsubscribe send an email to python-ideas-le...@python.org
https://mail.python.org/mailman3/lists/python-ideas.python.org/
Message archived at 
https://mail.python.org/archives/list/python-ideas@python.org/message/62AXMS62J2H7TBHANIXZTTS2RJPUZZ5Z/
Code of Conduct: http://python.org/psf/codeofconduct/


[Python-ideas] Re: multiprocessing: hybrid CPUs

2021-08-18 Thread Chris Angelico
On Thu, Aug 19, 2021 at 12:52 AM Marc-Andre Lemburg  wrote:
>
> On 18.08.2021 15:58, Chris Angelico wrote:
> > On Wed, Aug 18, 2021 at 10:37 PM Joao S. O. Bueno  
> > wrote:
> >>
> >> So,
> >> It is out of scope of Python multiprocessing, and, as I perceive it, from
> >> the stdlib as a whole to be able to allocate specific cores for each 
> >> subprocess -
> >> that is automatically done by the O.S. (and of course, the O.S. having an 
> >> interface
> >> for it, one can write a specific Python library which would allow this 
> >> granularity,
> >> and it could even check core capabilities).
> >
> > Python does have a way to set processor affinity, so it's entirely
> > possible that this would be possible. Might need external tools
> > though.
>
> There's os.sched_setaffinity(pid, mask) you could use from within
> a Python task scheduler, if this is managing child processes (you need
> the right permissions to set the affinity).

Right; I meant that it might require external tools to find out which
processors you want to align with.

> Or you could use the taskset command available on Linux to fire
> up a process on a specific CPU core. lscpu gives you more insight
> into the installed set of available cores.

Yes, those sorts of external tools.

It MAY be possible to learn about processors by reading /proc/cpuinfo,
but that'd still be OS-specific (no idea which Unix-like operating
systems have that, and certainly Windows doesn't).
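A hedged sketch of that OS-specific probing, guarding for platforms without a /proc filesystem (this only counts logical processors; it cannot tell fast cores from slow ones):

```python
import os


def count_cpuinfo_processors(path="/proc/cpuinfo"):
    """Count 'processor' entries in /proc/cpuinfo; None if unavailable."""
    if not os.path.exists(path):
        return None  # not a Linux-style /proc filesystem (e.g. Windows)
    with open(path) as f:
        return sum(1 for line in f if line.startswith("processor"))
```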

All in all, far easier to just divide the job into far more pieces
than you have processors, and then run a pool.

ChrisA
___
Python-ideas mailing list -- python-ideas@python.org
To unsubscribe send an email to python-ideas-le...@python.org
https://mail.python.org/mailman3/lists/python-ideas.python.org/
Message archived at 
https://mail.python.org/archives/list/python-ideas@python.org/message/UQNSUSHUONT4AO6NJEPEUENQG2AINAMO/
Code of Conduct: http://python.org/psf/codeofconduct/


[Python-ideas] Re: multiprocessing: hybrid CPUs

2021-08-18 Thread Marc-Andre Lemburg
On 18.08.2021 15:58, Chris Angelico wrote:
> On Wed, Aug 18, 2021 at 10:37 PM Joao S. O. Bueno  
> wrote:
>>
>> So,
>> It is out of scope of Python multiprocessing, and, as I perceive it, from
>> the stdlib as a whole to be able to allocate specific cores for each 
>> subprocess -
>> that is automatically done by the O.S. (and of course, the O.S. having an 
>> interface
>> for it, one can write a specific Python library which would allow this 
>> granularity,
>> and it could even check core capabilities).
> 
> Python does have a way to set processor affinity, so it's entirely
> possible that this would be possible. Might need external tools
> though.

There's os.sched_setaffinity(pid, mask) you could use from within
a Python task scheduler, if this is managing child processes (you need
the right permissions to set the affinity).
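A hedged sketch of such a call (os.sched_setaffinity is exposed on Linux; pid 0 means the calling process; the helper name is mine, not stdlib):

```python
import os


def pin_to_cpus(cpus, pid=0):
    """Pin `pid` (0 = this process) to the given CPU set, if supported."""
    if not hasattr(os, "sched_setaffinity"):
        return None  # e.g. macOS/Windows: affinity API not exposed here
    os.sched_setaffinity(pid, cpus)
    return os.sched_getaffinity(pid)  # read back the effective set
```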

Or you could use the taskset command available on Linux to fire
up a process on a specific CPU core. lscpu gives you more insight
into the installed set of available cores.

multiprocessing itself does not have functionality to define the
affinity upfront or to select which payload goes to which worker.
I suppose you could implement a Pool subclass to handle such cases,
though.

Changing the calculation model is probably better, as already
suggested. Having smaller chunks of work makes it easier to even
out work load across workers in a cluster of different CPUs. You
then don't have to worry about the details of the CPUs - you just
need to play with the chunk size parameter a bit.

-- 
Marc-Andre Lemburg
eGenix.com

Professional Python Services directly from the Experts (#1, Aug 18 2021)
>>> Python Projects, Coaching and Support ... https://www.egenix.com/
>>> Python Product Development ... https://consulting.egenix.com/


::: We implement business ideas - efficiently in both time and costs :::

   eGenix.com Software, Skills and Services GmbH  Pastor-Loeh-Str.48
D-40764 Langenfeld, Germany. CEO Dipl.-Math. Marc-Andre Lemburg
   Registered at Amtsgericht Duesseldorf: HRB 46611
   https://www.egenix.com/company/contact/
 https://www.malemburg.com/

___
Python-ideas mailing list -- python-ideas@python.org
To unsubscribe send an email to python-ideas-le...@python.org
https://mail.python.org/mailman3/lists/python-ideas.python.org/
Message archived at 
https://mail.python.org/archives/list/python-ideas@python.org/message/QJBYMO2FDD2PHHAACNI2BQZIBXJV7AZT/
Code of Conduct: http://python.org/psf/codeofconduct/


[Python-ideas] Re: multiprocessing: hybrid CPUs

2021-08-18 Thread Chris Angelico
On Wed, Aug 18, 2021 at 10:37 PM Joao S. O. Bueno  wrote:
>
> So,
> It is out of scope of Python multiprocessing, and, as I perceive it, from
> the stdlib as a whole to be able to allocate specific cores for each 
> subprocess -
> that is automatically done by the O.S. (and of course, the O.S. having an 
> interface
> for it, one can write a specific Python library which would allow this 
> granularity,
> and it could even check core capabilities).

Python does have a way to set processor affinity, so it's entirely
possible that this would be possible. Might need external tools
though.

> As it stands, however, you simply have to change your approach:
> instead of dividing your workload into different cores before starting, the
> common approach there is to set up worker processes, one per core, or
> per processor thread, and use those as a pool of resources to which
> you submit your processing work in chunks.
> In that way, if a worker happens to be in a faster core, it will be
> done with its chunk earlier and accept more work before
> slower cores are available.
>

But I agree with this. Easiest to just subdivide further.

ChrisA
___
Python-ideas mailing list -- python-ideas@python.org
To unsubscribe send an email to python-ideas-le...@python.org
https://mail.python.org/mailman3/lists/python-ideas.python.org/
Message archived at 
https://mail.python.org/archives/list/python-ideas@python.org/message/AAIOSDVRE57G3ARY6YGNATW4YBP5G7UA/
Code of Conduct: http://python.org/psf/codeofconduct/


[Python-ideas] Re: multiprocessing: hybrid CPUs

2021-08-18 Thread c . buhtz

Dear Joao,

On 18.08.2021 14:36, Joao S. O. Bueno wrote:
> As it stands, however, you simply have to change your approach:
> instead of dividing your workload into different cores before starting, the
> common approach there is to set up worker processes, one per core, or
> per processor thread, and use those as a pool of resources to which
> you submit your processing work in chunks.
> In that way, if a worker happens to be in a faster core, it will be
> done with its chunk earlier and accept more work before
> slower cores are available.

thanks for your feedback and your idea about the alternative solution 
with the ProcessPool.


I think I will use this for future projects. Thanks a lot.

Kind
Christian
___
Python-ideas mailing list -- python-ideas@python.org
To unsubscribe send an email to python-ideas-le...@python.org
https://mail.python.org/mailman3/lists/python-ideas.python.org/
Message archived at 
https://mail.python.org/archives/list/python-ideas@python.org/message/M7FG2RW2YP5ROOYFFEV5PH2DSSP6EJQM/
Code of Conduct: http://python.org/psf/codeofconduct/


[Python-ideas] Re: multiprocessing: hybrid CPUs

2021-08-18 Thread Joao S. O. Bueno
So,
It is out of scope of Python multiprocessing, and, as I perceive it, from
the stdlib as a whole to be able to allocate specific cores for each
subprocess -
that is automatically done by the O.S. (and of course, the O.S. having an
interface
for it, one can write a specific Python library which would allow this
granularity,
and it could even check core capabilities).

As it stands, however, you simply have to change your approach:
instead of dividing your workload into different cores before starting, the
common approach there is to set up worker processes, one per core, or
per processor thread, and use those as a pool of resources to which
you submit your processing work in chunks.
In that way, if a worker happens to be in a faster core, it will be
done with its chunk earlier and accept more work before
slower cores are available.

If you use "concurrent.futures" or a similar approach, this pattern will
happen naturally with no
specific fiddling needed on your part.

On Wed, 18 Aug 2021 at 09:19,  wrote:

> Hello,
>
> before posting to python-dev I thought it is best to discuss this
> here. And I assume that someone else had the same idea before me.
> Maybe you can point me to the relevant discussion/ticket.
>
> I read about Intel's hybrid CPUs. It means there are multiple cores, e.g.
> 8 high-speed cores and 8 low-speed (but more energy-efficient) cores
> combined in one CPU.
>
> In my use cases I parallelize with Python's multiprocessing package to
> work on millions of rows of pandas.DataFrame objects. These are tasks that
> are not vectorizable. I simply cut the DataFrame horizontally into pieces
> (numbered by the available cores).
>
> But when the cores differ in their "speed" I need to know that,
> e.g. with a 16-core CPU where half of the cores are slow and every core
> has 1 million rows to work on. The 8 high-speed cores finish
> earlier and just wait until the slow cores are finished. It would be
> more efficient if the 8 high-speed cores each worked on 1.3 million
> rows and the slow cores each on 0.7 million rows. It is not perfect,
> but better. I know that they will not all finish at the same time,
> but their end times will be closer together.
>
> But to do this I need to know the type of the cores.
>
> Am I wrong?
>
> Are there any plans in the Python development taking this into account?
>
> Kind
> Christian
> ___
> Python-ideas mailing list -- python-ideas@python.org
> To unsubscribe send an email to python-ideas-le...@python.org
> https://mail.python.org/mailman3/lists/python-ideas.python.org/
> Message archived at
> https://mail.python.org/archives/list/python-ideas@python.org/message/C3BYESZBZT2PNQSWCW3HGD25AGABJGOJ/
> Code of Conduct: http://python.org/psf/codeofconduct/
>
___
Python-ideas mailing list -- python-ideas@python.org
To unsubscribe send an email to python-ideas-le...@python.org
https://mail.python.org/mailman3/lists/python-ideas.python.org/
Message archived at 
https://mail.python.org/archives/list/python-ideas@python.org/message/K4MVBQMCOVE64CG76WSBXH5MJZSWRQ3Z/
Code of Conduct: http://python.org/psf/codeofconduct/


[Python-ideas] Re: Notation for subscripts.

2021-08-18 Thread Matsuoka Takuo
Dear Guido van Rossum,

Thank you for bringing the PEPs to my attention.

The idea of PEP 637 on a[*x] is different from my idea. The PEP's idea
appears to be making subscription analogous to function calls. In the end,
a[*x] would have been equivalent to

  a[tuple(x)]

if the PEP had been adopted. a[*x] in PEP 646 is similar, but has more
restrictions on what expressions can fill the brackets. I understand only
the a[*x] part of PEP 637 has attracted enough interest so
far. For this part, 637 seems better than 646 in that it looks
simpler.

A concern about pursuing the analogy with function calls may be that there
already is a difference, namely, a[x] and a[x,] are not
equivalent while f(x) and f(x,) are. In particular, a[*(x, )] not
being equivalent to a[x] might at least be confusing.
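The asymmetry between subscripts and calls can be checked directly (the class is illustrative, not from the thread):

```python
class Sub:
    def __getitem__(self, key):
        return key  # echo back whatever the subscript produced


s = Sub()
assert s[1] == 1
assert s[1,] == (1,)  # in a subscript, a trailing comma makes a tuple


def f(x):
    return x


assert f(1) == f(1,) == 1  # in a call, f(x,) is just f(x)
```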

On the other hand, my idea was just a slight change from the current
syntax by simple two steps described in my previous post. In
particular,

  a[*x]

would remain invalid, but

  a[*x, ]

would be valid and equivalent to a[tuple(x)] (just as

  *x,

at certain places is equivalent to tuple(x) ).

PEP 637 has proposed to leave a[] invalid, which in their scheme seems
confusing to me since a[*()] would be valid (and equivalent to
a[tuple(())] or a[()] ), as well as seeming to go against the analogy to
function calls: f() is valid, just not equivalent to f(()) .

Best regards,
Takuo


On Mon, 16 Aug 2021 at 10:58, Guido van Rossum  wrote:
>
> Have you seen PEP 637? IIRC it has discussions on a[] and a[*x]. Note that it 
> was rejected, but the idea of a[*x] is being resurrected for PEP 646.
>
> On Fri, Aug 13, 2021 at 5:43 AM Matsuoka Takuo  wrote:
>>
>> Dear Developers,
>>
>> Given a subscriptable object s, the intended rule for the notation for
>> getting an item of s seems that, for any expression {e}, such as
>> "x, ",
>>   s[{e}]
>> (i.e., s[x, ] if {e} is "x, ") means the same as
>>   s[({e})]
>> (i.e., s[(x, )] in the considered case), namely, should be evaluated
>> as s.__getitem__(({e})) (or s.__class_getitem__(({e})) when that
>> applies). If this is the rule, then it looks simple and hence
>> friendly to the user. However, there are at least two exceptions:
>>
>> (1) The case where {e} is the empty expression "":
>> The expression
>>   s[]
>> raises SyntaxError instead of being evaluated in the same way as
>> s[()] is.
>>
>> (2) The case where {e} contains "*" for unpacking:
>> An expression containing the unpacking notation, such as
>>   s[*iterable, ]
>> raises SyntaxError instead of being evaluated in the same way as
>> s[(*iterable, )] in this example, is.
>>
>> Are these (and other if any) exceptions justified? If not, I propose
>> having the described rule to have full effect if that would simplify
>> the syntax. This would affect currently working codes which rely on
>> SyntaxError raised in either of the described ways (through eval, exec
>> or import??). I wonder if reliance on SyntaxError in these cases
>> should be supported in all future versions of Python.
>>
>> Best regards,
>> Takuo Matsuoka
>> ___
>> Python-ideas mailing list -- python-ideas@python.org
>> To unsubscribe send an email to python-ideas-le...@python.org
>> https://mail.python.org/mailman3/lists/python-ideas.python.org/
>> Message archived at 
>> https://mail.python.org/archives/list/python-ideas@python.org/message/V2WFMNVJLUBXVQFPNHH4TJNRYNPK2BKJ/
>> Code of Conduct: http://python.org/psf/codeofconduct/
>
>
>
> --
> --Guido van Rossum (python.org/~guido)
> Pronouns: he/him (why is my pronoun here?)
___
Python-ideas mailing list -- python-ideas@python.org
To unsubscribe send an email to python-ideas-le...@python.org
https://mail.python.org/mailman3/lists/python-ideas.python.org/
Message archived at 
https://mail.python.org/archives/list/python-ideas@python.org/message/YOONIQ6EBDQYHS3JP4Q3ENIYZCGYJEE7/
Code of Conduct: http://python.org/psf/codeofconduct/


[Python-ideas] SyntaxError says: "can't use starred expression here", but it's not the same "starred expression" defined in the Language Reference.

2021-08-18 Thread Matsuoka Takuo
The error mentioned in the title is this.

>>> *()
  File "", line 1
SyntaxError: can't use starred expression here

According to the Language Reference

https://docs.python.org/3/reference/expressions.html#expression-lists

it's not really a starred expression. In the context of defining the
notion of a starred expression, it looks like it's called a starred
item. I think the message is confusing and should be fixed. I don't
know if "starred item" is the right name to be used there, but it's
at least documented (in the Reference).

Best regards,
Takuo
___
Python-ideas mailing list -- python-ideas@python.org
To unsubscribe send an email to python-ideas-le...@python.org
https://mail.python.org/mailman3/lists/python-ideas.python.org/
Message archived at 
https://mail.python.org/archives/list/python-ideas@python.org/message/GBSKQRG5CKOUIOGDUYSXQJOKA3VJHCGT/
Code of Conduct: http://python.org/psf/codeofconduct/


[Python-ideas] multiprocessing: hybrid CPUs

2021-08-18 Thread c . buhtz

Hello,

before posting to python-dev I thought it is best to discuss this
here. And I assume that someone else had the same idea before me.
Maybe you can point me to the relevant discussion/ticket.


I read about Intel's hybrid CPUs. It means there are multiple cores, e.g.
8 high-speed cores and 8 low-speed (but more energy-efficient) cores
combined in one CPU.


In my use cases I parallelize with Python's multiprocessing package to
work on millions of rows of pandas.DataFrame objects. These are tasks that
are not vectorizable. I simply cut the DataFrame horizontally into pieces
(numbered by the available cores).


But when the cores differ in their "speed" I need to know that,
e.g. with a 16-core CPU where half of the cores are slow and every core
has 1 million rows to work on. The 8 high-speed cores finish
earlier and just wait until the slow cores are finished. It would be
more efficient if the 8 high-speed cores each worked on 1.3 million
rows and the slow cores each on 0.7 million rows. It is not perfect,
but better. I know that they will not all finish at the same time,
but their end times will be closer together.

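The weighted split described above might be computed like this; `split_sizes` is a hypothetical helper, and the 2:1 speed ratio is the rough guess from the thread:

```python
def split_sizes(n_rows, n_fast, n_slow, speed_ratio=2.0):
    """Rows per fast core and per slow core, weighted by speed_ratio."""
    total_weight = n_fast * speed_ratio + n_slow
    per_fast = int(n_rows * speed_ratio / total_weight)
    per_slow = int(n_rows / total_weight)
    return per_fast, per_slow


# 16 million rows over 8 fast + 8 slow cores at a 2:1 ratio
fast, slow = split_sizes(16_000_000, 8, 8)
```

With these numbers each fast core gets about 1.33 million rows and each slow core about 0.67 million, matching the rough 1.3/0.7 figures above.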

But to do this I need to know the type of the cores.

Am I wrong?

Are there any plans in the Python development taking this into account?

Kind
Christian
___
Python-ideas mailing list -- python-ideas@python.org
To unsubscribe send an email to python-ideas-le...@python.org
https://mail.python.org/mailman3/lists/python-ideas.python.org/
Message archived at 
https://mail.python.org/archives/list/python-ideas@python.org/message/C3BYESZBZT2PNQSWCW3HGD25AGABJGOJ/
Code of Conduct: http://python.org/psf/codeofconduct/


[Python-ideas] Re: Notation for subscripts.

2021-08-18 Thread Matsuoka Takuo
Matsuoka Takuo :
>
> Now, is "1,2," more boxed up than "*(1,2)," is? The *current* rule
> surely says the former is a tuple at some places and the latter
> is not,

Actually, this was wrong. First of all,

>>> *(1,2),
(1, 2)

Moreover, while the Language Reference says

  return_stmt ::=  "return" [expression_list]

at

https://docs.python.org/3/reference/simple_stmts.html#the-return-statement

things like

  return *(),

are already allowed and () gets returned in this case, and so on! I
haven't examined everything, but so far, subscription is the only
place I've found where a starred expression in place of an expression
list indeed raises SyntaxError.

In particular, there really doesn't seem to be any reason why a
starred expression should be rejected from any place where an
expression list is accepted. In fact, rejection should _not_ be
expected, it seems.
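The accepted cases listed above can be checked directly (shown for the tuple display and the return statement; behaviour of the subscript case depends on the Python version under discussion):

```python
# a starred item where an expression list is expected is widely accepted:
assert (*(1, 2),) == (1, 2)  # tuple display


def g():
    return *(),  # a starred item in a return statement


assert g() == ()  # () gets returned, as described above
```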

My proposal at this point would be:
(1) remove the definition of "expression_list" from the specification of
the syntax
(2) replace every occurrence of it in the specification with
"starred_expression".

At most places this seems to only recover the current behaviour. (Even
if this idea does not survive in the end, I think the Language Reference
should be fixed to describe the behaviour correctly. The actual
behaviour makes better sense to me than what the Reference is
describing now.)

A minor question to be left then would be which new instance of
"starred_expression" in the specification of the syntax may further be
replaced with the optional one or "[starred_expression]", e.g.,
whether

  s[]

would be better allowed and evaluated to s.__getitem__(()). This may
be complicated. For example, I'm already used to returning None with

  return

and so on. Even then, s[] doesn't seem very bad to me.

Best regards,
Takuo
___
Python-ideas mailing list -- python-ideas@python.org
To unsubscribe send an email to python-ideas-le...@python.org
https://mail.python.org/mailman3/lists/python-ideas.python.org/
Message archived at 
https://mail.python.org/archives/list/python-ideas@python.org/message/6CKPSNDUPMA2HMX2I25ECFEQ4F4CD2XB/
Code of Conduct: http://python.org/psf/codeofconduct/


[Python-ideas] Re: New 'How to Write a Good Bug Report' Article for Docs

2021-08-18 Thread c . buhtz

Bug Report
https://github.com/mdn/content/issues/8036
___
Python-ideas mailing list -- python-ideas@python.org
To unsubscribe send an email to python-ideas-le...@python.org
https://mail.python.org/mailman3/lists/python-ideas.python.org/
Message archived at 
https://mail.python.org/archives/list/python-ideas@python.org/message/LPEBIVIWMDINEBVZTSSKINCNWCAPQTYN/
Code of Conduct: http://python.org/psf/codeofconduct/