Re: [Python-ideas] Start argument for itertools.accumulate() [Was: Proposal: A Reduce-Map Comprehension and a "last" builtin]

2018-04-09 Thread Tim Peters
[Peter O'Connor ]
> Ok, so it seems everyone's happy with adding an initial_value argument.

Heh - that's not clear to me ;-)


> Now, I claim that while it should be an option, the initial value should NOT
> be returned by default.  (i.e. the returned generator should by default
> yield N elements, not N+1).

-1 on that.

- It goes against prior art.  Haskell's scanl does return the initial
value, and nobody on the planet has devoted more quality thought to
how streams "should work" than those folks.  The Python itertoolz
package's accumulate already supports an optional `initial=` argument,
and also returns it when specified.  It requires truly compelling
arguments to go against prior art.

- It's "obvious" that the first value should be returned if specified.
The best evidence:  in this thread's first message, it was "so
obvious" to Raymond that the implementation he suggested did exactly
that.  I doubt it even occurred to him to question whether it should.
It didn't occur to me either, but my mind is arguably "polluted" by
significant prior exposure to functional languages.

- In all but one "real life" example so far (including the
slice-summer class I stumbled into today), the code _wanted_ the
initial value to be returned.  The sole exception was one of the three
instances in Will Ness's wheel sieve code, where he discarded the
unwanted (in that specific case) initial value via a plain

next(wheel)

Which is telling:  it's easy to discard a value you don't want, but to
inject a value you _do_ want but don't get requires something like
reintroducing the

chain([value_i_want], the_iterable_that_didn't_give_the_value_i_want)

trick the new optional argument is trying to get _away_ from.  Talk
about ironic ;-)


I would like to see a simple thing added to itertools to make dropping
unwanted values easier, though:

"""
drop(iterable, n=None)
Return an iterator whose next() method returns all but the first `n`
values from the iterable.  If specified, `n` must be an integer >= 0.
By default (`n`=None), the iterator is run to exhaustion.
"""

Then, e.g.,

- drop(it, 0) would effectively be a synonym for iter(it).

- drop(it, 1) would skip over the first value from the iterable.

- drop(it) would give "the one obvious way" to consume an iterator
completely (for some reason that's a semi-FAQ, and is usually
answered by suggesting the excruciatingly obscure trick of feeding the
iterable to a 0-size collections.deque constructor).

Of course Haskell has had `drop` all along, although not the "run to
exhaustion" part.
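
A minimal sketch of such a helper in terms of existing itertools/collections
machinery (the name and signature are just the proposal above - nothing like
this exists in itertools today):

    from collections import deque
    from itertools import islice

    def drop(iterable, n=None):
        """Return an iterator over all but the first `n` values of `iterable`.

        With n=None (the default), run the iterable to exhaustion.
        """
        it = iter(iterable)
        if n is None:
            deque(it, maxlen=0)   # the obscure 0-size deque trick, hidden away
            return it             # already exhausted; yields nothing
        if n < 0:
            raise ValueError("n must be >= 0")
        return islice(it, n, None)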


> Example: suppose we're doing the toll booth thing, and we want to yield a
> cumulative sum of tolls so far.  Suppose someone already made a
> reasonable-looking generator yielding the cumulative sum of tolls for today:
>
> def iter_cumsum_tolls_from_day(day, toll_amount_so_far):
> return accumulate(get_tolls_from_day(day, initial=toll_amount_so_far))
>
> And now we want to make a way to get all tolls from the month.  One might
> reasonably expect this to work:
>
> def iter_cumsum_tolls_from_month(month, toll_amount_so_far):
> for day in month:
> for cumsum_tolls in iter_cumsum_tolls_from_day(day,
> toll_amount_so_far = toll_amount_so_far):
> yield cumsum_tolls
> toll_amount_so_far = cumsum_tolls
>
> But this would actually DUPLICATE the last toll of every day - it appears
> both as the last element of the day's generator and as the first element of
> the next day's generator.

I didn't really follow the details there, but the suggestion would be
the same regardless: drop the duplicates you don't want.
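
For instance, a sketch along those lines (reusing the names from the quoted
example; islice does the dropping of each later day's duplicated first value):

    from itertools import islice

    def iter_cumsum_tolls_from_month(month, toll_amount_so_far):
        for i, day in enumerate(month):
            day_cumsums = iter_cumsum_tolls_from_day(day, toll_amount_so_far)
            if i:   # every later day repeats the previous day's last value first
                day_cumsums = islice(day_cumsums, 1, None)
            for cumsum_tolls in day_cumsums:
                yield cumsum_tolls
                toll_amount_so_far = cumsum_tolls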

Note that making up an example in your head isn't nearly as persuasive
as "real life" code.  Code can be _contrived_ to "prove" anything.


> This is why I think that there should be an additional
> "include_initial_in_return=False" argument.  I do agree that it should be an
> option to include the initial value (your "find tolls over time-span"
> example shows why), but that if you want that you should have to show that
> you thought about that by specifying "include_initial_in_return=True"

It's generally poor human design to have a second optional argument
modify the behavior of yet another optional argument.  If the presence
of the latter can have two distinct modes of operation, then people
_will_ forget which one the default mode is, making code harder to
write and harder to read.

Since "return the value" is supported by all known prior art, and by
the bulk of "real life" Python code known so far, "return the value"
should be the default.  But far better to make it the only mode rather
than _just_ the default mode.  Then there's nothing to be forgotten
:-)
___
Python-ideas mailing list
Python-ideas@python.org
https://mail.python.org/mailman/listinfo/python-ideas
Code of Conduct: http://python.org/psf/codeofconduct/


Re: [Python-ideas] Start argument for itertools.accumulate() [Was: Proposal: A Reduce-Map Comprehension and a "last" builtin]

2018-04-09 Thread Peter O'Connor
* correction to brackets from first example:

def iter_cumsum_tolls_from_day(day, toll_amount_so_far):
    return accumulate(get_tolls_from_day(day), initial=toll_amount_so_far)


On Mon, Apr 9, 2018 at 11:55 PM, Peter O'Connor 
wrote:

> Ok, so it seems everyone's happy with adding an initial_value argument.
>
> Now, I claim that while it should be an option, the initial value should
> NOT be returned by default.  (i.e. the returned generator should by default
> yield N elements, not N+1).
>
> Example: suppose we're doing the toll booth thing, and we want to yield a
> cumulative sum of tolls so far.  Suppose someone already made a
> reasonable-looking generator yielding the cumulative sum of tolls for today:
>
> def iter_cumsum_tolls_from_day(day, toll_amount_so_far):
> return accumulate(get_tolls_from_day(day, initial=toll_amount_so_far))
>
> And now we want to make a way to get all tolls from the month.  One might
> reasonably expect this to work:
>
> def iter_cumsum_tolls_from_month(month, toll_amount_so_far):
> for day in month:
> for cumsum_tolls in iter_cumsum_tolls_from_day(day,
> toll_amount_so_far = toll_amount_so_far):
> yield cumsum_tolls
> toll_amount_so_far = cumsum_tolls
>
> But this would actually DUPLICATE the last toll of every day - it appears
> both as the last element of the day's generator and as the first element of
> the next day's generator.
>
> This is why I think that there should be an additional "
> include_initial_in_return=False" argument.  I do agree that it should be
> an option to include the initial value (your "find tolls over time-span"
> example shows why), but that if you want that you should have to show that
> you thought about that by specifying "include_initial_in_return=True"
>
>
>
>
>
> On Mon, Apr 9, 2018 at 10:30 PM, Tim Peters  wrote:
>
>> [Tim]
>> >> while we have N numbers, there are N+1 slice indices.  So
>> >> accumulate(xs) doesn't quite work.  It needs to also have a 0 inserted
>> >> as the first prefix sum (the empty prefix sum(xs[:0]).
>> >>
>> >> Which is exactly what a this_is_the_initial_value=0 argument would do
>> >> for us.
>>
>> [Greg Ewing ]
>> > In this case, yes. But that still doesn't mean it makes
>> > sense to require the initial value to be passed *in* as
>> > part of the input sequence.
>> >
>> > Maybe the best idea is for the initial value to be a
>> > separate argument, but be returned as the first item in
>> > the list.
>>
>> I'm not sure you've read all the messages in this thread, but that's
>> exactly what's being proposed.  That. e.g., a new optional argument:
>>
>> accumulate(xs, func, initial=S)
>>
>> act like the current
>>
>>  accumulate(chain([S], xs), func)
>>
>> Note that in neither case is the original `xs` modified in any way,
>> and in both cases the first value generated is S.
>>
>> Note too that the proposal is exactly the way Haskell's `scanl` works
>> (although `scanl` always requires specifying an initial value - while
>> the related `scanl1` doesn't allow specifying one).
>>
>> And that's all been so since the thread's first message, in which
>> Raymond gave a proposed implementation:
>>
>> _sentinel = object()
>>
>> def accumulate(iterable, func=operator.add, start=_sentinel):
>> it = iter(iterable)
>> if start is _sentinel:
>> try:
>> total = next(it)
>> except StopIteration:
>> return
>> else:
>> total = start
>> yield total
>> for element in it:
>> total = func(total, element)
>> yield total
>>
>> > I can think of another example where this would make
>> > sense. Suppose you have an initial bank balance and a
>> > list of transactions, and you want to produce a statement
>> > with a list of running balances.
>> >
>> > The initial balance and the list of transactions are
>> > coming from different places, so the most natural way
>> > to call it would be
>> >
>> >result = accumulate(transactions, initial = initial_balance)
>> >
>> > If the initial value is returned as item 0, then the
>> > result has the following properties:
>> >
>> >result[0] is the balance brought forward
>> >result[-1] is the current balance
>> >
>> > and this remains true in the corner case where there are
>> > no transactions.
>>
>> Indeed, something quite similar often applies when parallelizing
>> search loops of the form:
>>
>>  for candidate in accumulate(chain([starting_value], cycle(deltas))):
>>
>> For a sequence that eventually becomes periodic in the sequence of
>> deltas it cycles through, multiple processes can run independent
>> searches starting at carefully chosen different starting values "far"
>> apart.  In effect, they're each a "balance brought forward" pretending
>> that previous chunks have already been done.

Re: [Python-ideas] Start argument for itertools.accumulate() [Was: Proposal: A Reduce-Map Comprehension and a "last" builtin]

2018-04-09 Thread Peter O'Connor
Ok, so it seems everyone's happy with adding an initial_value argument.

Now, I claim that while it should be an option, the initial value should
NOT be returned by default.  (i.e. the returned generator should by default
yield N elements, not N+1).

Example: suppose we're doing the toll booth thing, and we want to yield a
cumulative sum of tolls so far.  Suppose someone already made a
reasonable-looking generator yielding the cumulative sum of tolls for today:

def iter_cumsum_tolls_from_day(day, toll_amount_so_far):
    return accumulate(get_tolls_from_day(day, initial=toll_amount_so_far))

And now we want to make a way to get all tolls from the month.  One might
reasonably expect this to work:

def iter_cumsum_tolls_from_month(month, toll_amount_so_far):
    for day in month:
        for cumsum_tolls in iter_cumsum_tolls_from_day(day, toll_amount_so_far=toll_amount_so_far):
            yield cumsum_tolls
            toll_amount_so_far = cumsum_tolls

But this would actually DUPLICATE the last toll of every day - it appears
both as the last element of the day's generator and as the first element of
the next day's generator.

This is why I think that there should be an additional "
include_initial_in_return=False" argument.  I do agree that it should be an
option to include the initial value (your "find tolls over time-span"
example shows why), but that if you want that you should have to show that
you thought about that by specifying "include_initial_in_return=True"





On Mon, Apr 9, 2018 at 10:30 PM, Tim Peters  wrote:

> [Tim]
> >> while we have N numbers, there are N+1 slice indices.  So
> >> accumulate(xs) doesn't quite work.  It needs to also have a 0 inserted
> >> as the first prefix sum (the empty prefix sum(xs[:0]).
> >>
> >> Which is exactly what a this_is_the_initial_value=0 argument would do
> >> for us.
>
> [Greg Ewing ]
> > In this case, yes. But that still doesn't mean it makes
> > sense to require the initial value to be passed *in* as
> > part of the input sequence.
> >
> > Maybe the best idea is for the initial value to be a
> > separate argument, but be returned as the first item in
> > the list.
>
> I'm not sure you've read all the messages in this thread, but that's
> exactly what's being proposed.  That. e.g., a new optional argument:
>
> accumulate(xs, func, initial=S)
>
> act like the current
>
>  accumulate(chain([S], xs), func)
>
> Note that in neither case is the original `xs` modified in any way,
> and in both cases the first value generated is S.
>
> Note too that the proposal is exactly the way Haskell's `scanl` works
> (although `scanl` always requires specifying an initial value - while
> the related `scanl1` doesn't allow specifying one).
>
> And that's all been so since the thread's first message, in which
> Raymond gave a proposed implementation:
>
> _sentinel = object()
>
> def accumulate(iterable, func=operator.add, start=_sentinel):
> it = iter(iterable)
> if start is _sentinel:
> try:
> total = next(it)
> except StopIteration:
> return
> else:
> total = start
> yield total
> for element in it:
> total = func(total, element)
> yield total
>
> > I can think of another example where this would make
> > sense. Suppose you have an initial bank balance and a
> > list of transactions, and you want to produce a statement
> > with a list of running balances.
> >
> > The initial balance and the list of transactions are
> > coming from different places, so the most natural way
> > to call it would be
> >
> >result = accumulate(transactions, initial = initial_balance)
> >
> > If the initial value is returned as item 0, then the
> > result has the following properties:
> >
> >result[0] is the balance brought forward
> >result[-1] is the current balance
> >
> > and this remains true in the corner case where there are
> > no transactions.
>
> Indeed, something quite similar often applies when parallelizing
> search loops of the form:
>
>  for candidate in accumulate(chain([starting_value], cycle(deltas))):
>
> For a sequence that eventually becomes periodic in the sequence of
> deltas it cycles through, multiple processes can run independent
> searches starting at carefully chosen different starting values "far"
> apart.  In effect, they're each a "balance brought forward" pretending
> that previous chunks have already been done.
>
> Funny:  it's been weeks now since I wrote an accumulate() that
> _didn't_ want to specify a starting value - LOL ;-)
> ___
> Python-ideas mailing list
> Python-ideas@python.org
> https://mail.python.org/mailman/listinfo/python-ideas
> Code of Conduct: http://python.org/psf/codeofconduct/
>
___

Re: [Python-ideas] Start argument for itertools.accumulate() [Was: Proposal: A Reduce-Map Comprehension and a "last" builtin]

2018-04-09 Thread Tim Peters
[Tim]
>> while we have N numbers, there are N+1 slice indices.  So
>> accumulate(xs) doesn't quite work.  It needs to also have a 0 inserted
>> as the first prefix sum (the empty prefix sum(xs[:0]).
>>
>> Which is exactly what a this_is_the_initial_value=0 argument would do
>> for us.

[Greg Ewing ]
> In this case, yes. But that still doesn't mean it makes
> sense to require the initial value to be passed *in* as
> part of the input sequence.
>
> Maybe the best idea is for the initial value to be a
> separate argument, but be returned as the first item in
> the list.

I'm not sure you've read all the messages in this thread, but that's
exactly what's being proposed.  That, e.g., a new optional argument:

accumulate(xs, func, initial=S)

act like the current

 accumulate(chain([S], xs), func)

Note that in neither case is the original `xs` modified in any way,
and in both cases the first value generated is S.

Note too that the proposal is exactly the way Haskell's `scanl` works
(although `scanl` always requires specifying an initial value - while
the related `scanl1` doesn't allow specifying one).

And that's all been so since the thread's first message, in which
Raymond gave a proposed implementation:

_sentinel = object()

def accumulate(iterable, func=operator.add, start=_sentinel):
    it = iter(iterable)
    if start is _sentinel:
        try:
            total = next(it)
        except StopIteration:
            return
    else:
        total = start
    yield total
    for element in it:
        total = func(total, element)
        yield total
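
A quick sanity check of those semantics, assuming the accumulate() above (and
its operator import) is in scope - expected output in the comments:

    print(list(accumulate([1, 2, 3])))            # [1, 3, 6]  (default unchanged)
    print(list(accumulate([1, 2, 3], start=10)))  # [10, 11, 13, 16]
    print(list(accumulate([], start=10)))         # [10]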

> I can think of another example where this would make
> sense. Suppose you have an initial bank balance and a
> list of transactions, and you want to produce a statement
> with a list of running balances.
>
> The initial balance and the list of transactions are
> coming from different places, so the most natural way
> to call it would be
>
>result = accumulate(transactions, initial = initial_balance)
>
> If the initial value is returned as item 0, then the
> result has the following properties:
>
>result[0] is the balance brought forward
>result[-1] is the current balance
>
> and this remains true in the corner case where there are
> no transactions.

Indeed, something quite similar often applies when parallelizing
search loops of the form:

 for candidate in accumulate(chain([starting_value], cycle(deltas))):

For a sequence that eventually becomes periodic in the sequence of
deltas it cycles through, multiple processes can run independent
searches starting at carefully chosen different starting values "far"
apart.  In effect, they're each a "balance brought forward" pretending
that previous chunks have already been done.

Funny:  it's been weeks now since I wrote an accumulate() that
_didn't_ want to specify a starting value - LOL ;-)
___
Python-ideas mailing list
Python-ideas@python.org
https://mail.python.org/mailman/listinfo/python-ideas
Code of Conduct: http://python.org/psf/codeofconduct/


Re: [Python-ideas] Proposal: A Reduce-Map Comprehension and a "last" builtin

2018-04-09 Thread David Mertz
I continue to find all this weird new syntax to create absurdly long
one-liners confusing and mysterious. Python is not Perl for a reason.

On Mon, Apr 9, 2018, 5:55 PM Peter O'Connor 
wrote:

> Kyle, you sounded so reasonable when you were trashing
> itertools.accumulate (which I now agree is horrible).  But then you go and
> support Serhiy's madness:  "smooth_signal = [average for average in [0] for
> x in signal for average in [(1-decay)*average + decay*x]]" which I agree
> is clever, but reads more like a riddle than readable code.
>
> Anyway, I continue to stand by:
>
> (y:= f(y, x) for x in iter_x from y=initial_y)
>
> And, if that's not offensive enough, to its extension:
>
> (z, y := f(z, x) -> y for x in iter_x from z=initial_z)
>
> Which carries state "z" forward but only yields "y" at each iteration.
> (see proposal: https://github.com/petered/peps/blob/master/pep-.rst)
>
> Why am I so obsessed?  Because it will allow you to conveniently replace
> classes with more clean, concise, functional code.  People who thought they
> never needed such a construct may suddenly start finding it indispensable
> once they get used to it.
>
> How many times have you written something of the form?:
>
> class StatefulThing(object):
>
> def __init__(self, initial_state, param_1, param_2):
> self._param_1= param_1
> self._param_2 = param_2
> self._state = initial_state
>
> def update_and_get_output(self, new_observation):  # (or just
> __call__)
> self._state = do_some_state_update(self._state,
> new_observation, self._param_1)
> output = transform_state_to_output(self._state, self._param_2)
> return output
>
> processor = StatefulThing(initial_state = initial_state, param_1 = 1,
> param_2 = 4)
> processed_things = [processor.update_and_get_output(x) for x in x_gen]
>
> I've done this many times.  Video encoding, robot controllers, neural
> networks, any iterative machine learning algorithm, and probably lots of
> things I don't know about - they all tend to have this general form.
>
> And how many times have I had issues like "Oh no now I want to change
> param_1 on the fly instead of just setting it on initialization, I guess I
> have to refactor all usages of this class to pass param_1 into
> update_and_get_output instead of __init__".
>
> What if instead I could just write:
>
> def update_and_get_output(last_state, new_observation, param_1,
> param_2)
> new_state = do_some_state_update(last_state, new_observation,
> _param_1)
> output = transform_state_to_output(last_state, _param_2)
> return new_state, output
>
> processed_things = [state, output:= update_and_get_output(state, x,
> param_1=1, param_2=4) -> output for x in observations from
> state=initial_state]
>
> Now we have:
> - No mutable objects (which cuts of a whole slew of potential bugs and
> anti-patterns familiar to people who do OOP.)
> - Fewer lines of code
> - Looser assumptions on usage and less refactoring. (if I want to now pass
> in param_1 at each iteration instead of just initialization, I need to make
> no changes to update_and_get_output).
> - No need for state getters/setters, since state is is passed around
> explicitly.
>
> I realize that calling for changes to syntax is a lot to ask - but I still
> believe that the main objections to this syntax would also have been raised
> as objections to the now-ubiquitous list-comprehensions - they seem hostile
> and alien-looking at first, but very lovable once you get used to them.
>
>
>
>
> On Sun, Apr 8, 2018 at 1:41 PM, Kyle Lahnakoski 
> wrote:
>
>>
>>
>> On 2018-04-05 21:18, Steven D'Aprano wrote:
>> > (I don't understand why so many people have such an aversion to writing
>> > functions and seek to eliminate them from their code.)
>> >
>>
>> I think I am one of those people that have an aversion to writing
>> functions!
>>
>> I hope you do not mind that I attempt to explain my aversion here. I
>> want to clarify my thoughts on this, and maybe others will find
>> something useful in this explanation, maybe someone has wise words for
>> me. I think this is relevant to python-ideas because someone with this
>> aversion will make different language suggestions than those that don't.
>>
>> Here is why I have an aversion to writing functions: Every unread
>> function represents multiple unknowns in the code. Every function adds
>> to code complexity by mapping an inaccurate name to specific
>> functionality.
>>
>> When I read code, this is what I see:
>>
>> >x = you_will_never_guess_how_corner_cases_are_handled(a, b, c)
>> >y =
>> you_dont_know_I_throw_a_BaseException_when_I_do_not_like_your_arguments(j,
>> k, l)
>>
>> Not everyone sees code this way: I see people read method calls, make a
>> number of wild assumptions about how those methods work, AND THEY ARE
>> CORRECT!  How do they do it!?

Re: [Python-ideas] Is there any idea about dictionary destructing?

2018-04-09 Thread Joao S. O. Bueno
On 9 April 2018 at 22:10, Brett Cannon  wrote:
>
>
> On Mon, 9 Apr 2018 at 05:18 Joao S. O. Bueno  wrote:
>>

>> we could even call this approach a name such as "function call".
>
>
> The harsh sarcasm is not really called for.

Indeed - on rereading, I have to agree on that.

I do apologize for the sarcasm - really, I not only stand corrected:
I recognize I was incorrect to start with.

But my argument that this feature is needless language bloat stands.

On the other hand, as for getting variable names out of _shallow_ mappings,
I've built that feature in a package I authored, using a context manager
to abuse the import mechanism -

In [96]: from extradict import MapGetter

In [97]: data = {"A": None, "B": 10}

In [98]: with MapGetter(data):
   ...: from data import A, B
   ...:

In [99]: A, B
Out[99]: (None, 10)


That is on Pypi and can be used by anyone right now.
___
Python-ideas mailing list
Python-ideas@python.org
https://mail.python.org/mailman/listinfo/python-ideas
Code of Conduct: http://python.org/psf/codeofconduct/


Re: [Python-ideas] Start argument for itertools.accumulate() [Was: Proposal: A Reduce-Map Comprehension and a "last" builtin]

2018-04-09 Thread Greg Ewing

Tim Peters wrote:

while we have N numbers, there are N+1 slice indices.  So
accumulate(xs) doesn't quite work.  It needs to also have a 0 inserted
as the first prefix sum (the empty prefix sum(xs[:0]).

Which is exactly what a this_is_the_initial_value=0 argument would do
for us.


In this case, yes. But that still doesn't mean it makes
sense to require the initial value to be passed *in* as
part of the input sequence.

Maybe the best idea is for the initial value to be a
separate argument, but be returned as the first item in
the list.

I can think of another example where this would make
sense. Suppose you have an initial bank balance and a
list of transactions, and you want to produce a statement
with a list of running balances.

The initial balance and the list of transactions are
coming from different places, so the most natural way
to call it would be

   result = accumulate(transactions, initial = initial_balance)

If the initial value is returned as item 0, then the
result has the following properties:

   result[0] is the balance brought forward
   result[-1] is the current balance

and this remains true in the corner case where there are
no transactions.
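
A concrete illustration of those properties, assuming the proposed initial=
argument behaves as described (values invented for the example):

    transactions = [100, -20, 50]
    initial_balance = 1000
    result = list(accumulate(transactions, initial=initial_balance))
    # result == [1000, 1100, 1080, 1130]
    # result[0]  == 1000  (balance brought forward)
    # result[-1] == 1130  (current balance)
    # with no transactions: result == [1000], and both properties still hold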

--
Greg
___
Python-ideas mailing list
Python-ideas@python.org
https://mail.python.org/mailman/listinfo/python-ideas
Code of Conduct: http://python.org/psf/codeofconduct/


Re: [Python-ideas] Is there any idea about dictionary destructing?

2018-04-09 Thread Brett Cannon
On Mon, 9 Apr 2018 at 05:18 Joao S. O. Bueno  wrote:

> I have an idea for an inovative, unanbiguous, straightforward and
> backwards compatible syntax for that,
> that evena llows one to pass metadata along the operation so that the
> results can be tweaked acording
> to each case's needs.
>
> What about:
>
> new_data = dict_feed({
> "direct": "some data",
> "nested": {
> "lst_data": [1, 2, 3],
> "int_data": 1
> }
> },
> data
> )
>
> we could even call this approach a name such as "function call".
>

The harsh sarcasm is not really called for.

-Brett


>
>
> In other words, why to bloat the language with hard to learn, error prone,
> grit-looking syntax, when a simple plain function call is perfectly
> good, all you need to do over
> your suggestion is to type the function name and a pair of parentheses?
>
>
>
>
> On 7 April 2018 at 14:26, thautwarm  wrote:
> > We know that Python support the destructing of iterable objects.
> >
> > m_iter = (_ for _ in range(10))
> > a, *b, c = m_iter
> >
> > That's pretty cool! It's really convenient when there're many corner
> cases
> > to handle with iterable collections.
> > However destructing in Python could be more convenient if we support
> > dictionary destructing.
> >
> > In my opinion, dictionary destructing is not difficult to implement and
> > makes the syntax more expressive. A typical example is data access on
> nested
> > data structures(just like JSON), destructing a dictionary makes the logic
> > quite clear:
> >
> > data = {
> > "direct": "some data",
> > "nested": {
> > "lst_data": [1, 2, 3],
> > "int_data": 1
> > }
> > }
> > {
> >"direct": direct,
> > "nested": {
> > "lst_data": [a, b, c],
> > }
> > } = data
> >
> >
> > Dictionary destructing might not be very well-known but it really helps.
> The
> > operations on nested key-value collections are very frequent, and the
> codes
> > for business logic are not readable enough until now. Moreover Python is
> now
> > popular in data processing which must be enhanced by the entire support
> of
> > data destructing.
> >
> > Here are some implementations of other languages:
> > Elixir, which is also a popular dynamic language nowadays.
> >
> > iex> %{} = %{:a => 1, 2 => :b}
> > %{2 => :b, :a => 1}
> > iex> %{:a => a} = %{:a => 1, 2 => :b}
> > %{2 => :b, :a => 1}
> > iex> a
> > 1
> > iex> %{:c => c} = %{:a => 1, 2 => :b}
> > ** (MatchError) no match of right hand side value: %{2 => :b, :a => 1}
> >
> > And in F#, there is something similar to dictionary destructing(actually,
> > this destructs `struct` instead)
> > type MyRecord = { Name: string; ID: int } let IsMatchByName record1
> (name:
> > string) = match record1 with | { MyRecord.Name = nameFound; MyRecord.ID
> = _;
> > } when nameFound = name -> true | _ -> false let recordX = { Name =
> > "Parker"; ID = 10 } let isMatched1 = IsMatchByName recordX "Parker" let
> > isMatched2 = IsMatchByName recordX "Hartono"
> >
> > All of them partially destructs(or matches) a dictionary.
> >
> > thautwarm
> >
> >
> > ___
> > Python-ideas mailing list
> > Python-ideas@python.org
> > https://mail.python.org/mailman/listinfo/python-ideas
> > Code of Conduct: http://python.org/psf/codeofconduct/
> >
> ___
> Python-ideas mailing list
> Python-ideas@python.org
> https://mail.python.org/mailman/listinfo/python-ideas
> Code of Conduct: http://python.org/psf/codeofconduct/
>
___
Python-ideas mailing list
Python-ideas@python.org
https://mail.python.org/mailman/listinfo/python-ideas
Code of Conduct: http://python.org/psf/codeofconduct/


Re: [Python-ideas] Start argument for itertools.accumulate() [Was: Proposal: A Reduce-Map Comprehension and a "last" builtin]

2018-04-09 Thread Greg Ewing

Peter O'Connor wrote:
The behaviour where the first element of the return is the same as the 
first element of the input can be weird and confusing.  E.g. compare:


 >> list(itertools.accumulate([2, 3, 4], lambda accum, val: accum-val))
[2, -1, -5]
 >> list(itertools.accumulate([2, 3, 4], lambda accum, val: val-accum))
[2, 1, 3]


This is another symptom of the fact that the first item in
the list is taken to be the initial value. There's no way
to interpret these results in terms of an assumed initial
value, because neither of those functions has a left
identity.

--
Greg
___
Python-ideas mailing list
Python-ideas@python.org
https://mail.python.org/mailman/listinfo/python-ideas
Code of Conduct: http://python.org/psf/codeofconduct/


Re: [Python-ideas] Start argument for itertools.accumulate() [Was: Proposal: A Reduce-Map Comprehension and a "last" builtin]

2018-04-09 Thread Tim Peters
Woo hoo!  Another coincidence.  I just happened to be playing with
this problem today:

You have a large list - xs - of N numbers.  It's necessary to compute slice sums

sum(xs[i:j])

for a great many slices, 0 <= i <= j <= N.

For concreteness, say xs is a time series representing a toll booth's
receipts by hour across years.  "Management" may ask for all sorts of
sums - by 24-hour period, by week, by month, by year, by season, ...

A little thought showed that sum(xs[i:j]) = sum(xs[:j]) - sum(xs[:i]),
so if we precomputed just the prefix sums, the sum across an arbitrary
slice could be computed thereafter in constant time.  Hard to beat
that ;-)
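
A tiny concrete check of that identity:

    xs = [3, 1, 4, 1, 5]
    prefix = [0, 3, 4, 8, 9, 14]        # prefix[k] == sum(xs[:k])
    assert sum(xs[2:5]) == prefix[5] - prefix[2] == 10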

But computing the prefix sums is exactly what accumulate() does!  With
one twist:  while we have N numbers, there are N+1 slice indices.  So
accumulate(xs) doesn't quite work.  It needs to also have a 0 inserted
as the first prefix sum (the empty prefix sum(xs[:0])).

Which is exactly what a this_is_the_initial_value=0 argument would do
for us.  As is, using the chain trick:

class SliceSummer:
    def __init__(self, xs):
        from itertools import accumulate, chain
        self.N = N = len(xs)
        if not N:
            raise ValueError("need a non-empty sequence")
        self.prefixsum = list(accumulate(chain([0], xs)))
        assert len(self.prefixsum) == N+1

    def slicesum(self, i, j):
        N = self.N
        if not 0 <= i <= j <= N:
            raise ValueError(f"need 0 <= {i} <= {j} <= {N}")
        return self.prefixsum[j] - self.prefixsum[i]

def test(N):
    from random import randrange
    xs = [randrange(-10, 11) for _ in range(N)]
    ntried = 0
    ss = SliceSummer(xs)
    NP1 = N + 1
    for i in range(NP1):
        for j in range(i, NP1):
            ntried += 1
            assert ss.slicesum(i, j) == sum(xs[i:j])
    assert ntried == N * NP1 // 2 + NP1, ntried
___
Python-ideas mailing list
Python-ideas@python.org
https://mail.python.org/mailman/listinfo/python-ideas
Code of Conduct: http://python.org/psf/codeofconduct/


Re: [Python-ideas] Proposal: A Reduce-Map Comprehension and a "last" builtin

2018-04-09 Thread Peter O'Connor
Kyle, you sounded so reasonable when you were trashing itertools.accumulate
(which I now agree is horrible).  But then you go and support Serhiy's
madness:  "smooth_signal = [average for average in [0] for x in signal for
average in [(1-decay)*average + decay*x]]" which I agree is clever, but
reads more like a riddle than readable code.
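
For anyone trying to decode the riddle: the extra "for average in [...]"
clauses act as assignments inside the comprehension.  Spelled out as a plain
loop (same names as the quoted snippet, with `decay` and `signal` assumed to
be defined):

    smooth_signal = []
    average = 0                                        # "for average in [0]"
    for x in signal:
        average = (1 - decay) * average + decay * x    # "for average in [...]"
        smooth_signal.append(average)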

Anyway, I continue to stand by:

(y:= f(y, x) for x in iter_x from y=initial_y)

And, if that's not offensive enough, to its extension:

(z, y := f(z, x) -> y for x in iter_x from z=initial_z)

Which carries state "z" forward but only yields "y" at each iteration.
(see proposal: https://github.com/petered/peps/blob/master/pep-.rst)

Why am I so obsessed?  Because it will allow you to conveniently replace
classes with more clean, concise, functional code.  People who thought they
never needed such a construct may suddenly start finding it indispensable
once they get used to it.

How many times have you written something of the form?:

class StatefulThing(object):

    def __init__(self, initial_state, param_1, param_2):
        self._param_1 = param_1
        self._param_2 = param_2
        self._state = initial_state

    def update_and_get_output(self, new_observation):  # (or just __call__)
        self._state = do_some_state_update(self._state, new_observation, self._param_1)
        output = transform_state_to_output(self._state, self._param_2)
        return output

processor = StatefulThing(initial_state=initial_state, param_1=1, param_2=4)
processed_things = [processor.update_and_get_output(x) for x in x_gen]

I've done this many times.  Video encoding, robot controllers, neural
networks, any iterative machine learning algorithm, and probably lots of
things I don't know about - they all tend to have this general form.

And how many times have I had issues like "Oh no now I want to change
param_1 on the fly instead of just setting it on initialization, I guess I
have to refactor all usages of this class to pass param_1 into
update_and_get_output instead of __init__".

What if instead I could just write:

def update_and_get_output(last_state, new_observation, param_1, param_2):
    new_state = do_some_state_update(last_state, new_observation, param_1)
    output = transform_state_to_output(last_state, param_2)
    return new_state, output

processed_things = [state, output := update_and_get_output(state, x, param_1=1, param_2=4) -> output
                    for x in observations from state=initial_state]

Now we have:
- No mutable objects (which cuts off a whole slew of potential bugs and
anti-patterns familiar to people who do OOP.)
- Fewer lines of code
- Looser assumptions on usage and less refactoring. (if I want to now pass
in param_1 at each iteration instead of just initialization, I need to make
no changes to update_and_get_output).
- No need for state getters/setters, since state is passed around
explicitly.
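
For reference, roughly what that proposed comprehension would desugar to
today, written as a plain generator function over the same names (a sketch
only, assuming the update_and_get_output above):

    def scan_outputs(observations, initial_state, **params):
        state = initial_state
        for x in observations:
            state, output = update_and_get_output(state, x, **params)
            yield output

    processed_things = list(scan_outputs(observations, initial_state,
                                         param_1=1, param_2=4))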

I realize that calling for changes to syntax is a lot to ask - but I still
believe that the main objections to this syntax would also have been raised
as objections to the now-ubiquitous list-comprehensions - they seem hostile
and alien-looking at first, but very lovable once you get used to them.




On Sun, Apr 8, 2018 at 1:41 PM, Kyle Lahnakoski 
wrote:

>
>
> On 2018-04-05 21:18, Steven D'Aprano wrote:
> > (I don't understand why so many people have such an aversion to writing
> > functions and seek to eliminate them from their code.)
> >
>
> I think I am one of those people that have an aversion to writing
> functions!
>
> I hope you do not mind that I attempt to explain my aversion here. I
> want to clarify my thoughts on this, and maybe others will find
> something useful in this explanation, maybe someone has wise words for
> me. I think this is relevant to python-ideas because someone with this
> aversion will make different language suggestions than those that don't.
>
> Here is why I have an aversion to writing functions: Every unread
> function represents multiple unknowns in the code. Every function adds
> to code complexity by mapping an inaccurate name to specific
> functionality.
>
> When I read code, this is what I see:
>
> >x = you_will_never_guess_how_corner_cases_are_handled(a, b, c)
> >y = you_dont_know_I_throw_a_BaseException_when_I_do_not_like_your_arguments(j, k, l)
>
> Not everyone sees code this way: I see people read method calls, make a
> number of wild assumptions about how those methods work, AND THEY ARE
> CORRECT!  How do they do it!?  It is as if there is some unspoken
> convention about how code should work that's opaque to me.
>
> For example before I read the docs on
> itertools.accumulate(list_of_length_N, func), here are the unknowns I see:
>
> * Does it return N, or N-1 values?
> * How are initial conditions handled?
> * Must `func` perform the initialization by accepting just one
> parameter, and accumulate with 

Re: [Python-ideas] Start argument for itertools.accumulate() [Was: Proposal: A Reduce-Map Comprehension and a "last" builtin]

2018-04-09 Thread Neil Girdhar


On Friday, April 6, 2018 at 9:03:05 PM UTC-4, Raymond Hettinger wrote:
>
> > On Friday, April 6, 2018 at 8:14:30 AM UTC-7, Guido van Rossum wrote: 
> > On Fri, Apr 6, 2018 at 7:47 AM, Peter O'Connor  
> wrote: 
> >>   So some more humble proposals would be: 
> >> 
> >> 1) An initializer to itertools.accumulate 
> >> functools.reduce already has an initializer, I can't see any 
> controversy to adding an initializer to itertools.accumulate 
> > 
> > See if that's accepted in the bug tracker. 
>
> It did come up once but was closed for a number of reasons including lack of 
> use cases.  However, Peter's signal processing example does sound 
> interesting, so we could re-open the discussion. 
>
> For those who want to think through the pluses and minuses, I've put 
> together a Q&A as food for thought (see below).  Everybody's design 
> instincts are different -- I'm curious what you all think think about the 
> proposal. 
>
>
> Raymond 
>
> - 
>
> Q. Can it be done? 
> A. Yes, it wouldn't be hard. 
>
> _sentinel = object() 
>
> def accumulate(iterable, func=operator.add, start=_sentinel):
>     it = iter(iterable)
>     if start is _sentinel:
>         try:
>             total = next(it)
>         except StopIteration:
>             return
>     else:
>         total = start
>     yield total
>     for element in it:
>         total = func(total, element)
>         yield total
>
> Q. Do other languages do it? 
> A. Numpy, no. R, no. APL, no. Mathematica, no. Haskell, yes. 
>

Isn't numpy a yes?  
https://docs.scipy.org/doc/numpy/reference/generated/numpy.ufunc.accumulate.html

They definitely support it for add and multiply.  It's defined, but doesn't 
seem to work on custom ufuncs (the result of frompyfunc).
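
For example, with the built-in ufuncs:

    import numpy as np

    print(np.add.accumulate([1, 2, 3]))       # [1 3 6]
    print(np.multiply.accumulate([1, 2, 3]))  # [1 2 6]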
  

>
> * 
> http://docs.scipy.org/doc/numpy/reference/generated/numpy.ufunc.accumulate.html
>  
> * https://stat.ethz.ch/R-manual/R-devel/library/base/html/cumsum.html 
> * http://microapl.com/apl/apl_concepts_chapter5.html 
>   \+ 1 2 3 4 5 
>   1 3 6 10 15 
> * https://reference.wolfram.com/language/ref/Accumulate.html 
> * https://www.haskell.org/hoogle/?hoogle=mapAccumL 
>
>
> Q. How much work for a person to do it currently? 
> A. Almost zero effort to write a simple helper function: 
>
>myaccum = lambda it, func, start: accumulate(chain([start], it), func) 
>
>
> Q. How common is the need? 
> A. Rare. 
>
>
> Q. Which would be better, a simple for-loop or a customized itertool? 
> A. The itertool is shorter but more opaque (especially with respect 
>to the argument order for the function call): 
>
> result = [start]
> for x in iterable:
>     y = func(result[-1], x)
>     result.append(y)
>
> versus: 
>
> result = list(accumulate(iterable, func, start=start)) 
>
>
> Q. How readable is the proposed code? 
> A. Look at the following code and ask yourself what it does: 
>
> accumulate(range(4, 6), operator.mul, start=6) 
>
>Now test your understanding: 
>
> How many values are emitted? 
> What is the first value emitted? 
> Are the two sixes related? 
> What is this code trying to accomplish? 
>
>
> Q. Are there potential surprises or oddities? 
> A. Is it readily apparent which of assertions will succeed? 
>
> a1 = sum(range(10)) 
> a2 = sum(range(10), 0) 
> assert a1 == a2 
>
> a3 = functools.reduce(operator.add, range(10)) 
> a4 = functools.reduce(operator.add, range(10), 0) 
> assert a3 == a4 
>
> a4 = list(accumulate(range(10), operator.add)) 
> a5 = list(accumulate(range(10), operator.add, start=0)) 
> assert a5 == a6 
>
>
> Q. What did the Python 3.0 Whatsnew document have to say about reduce()? 
> A. "Removed reduce(). Use functools.reduce() if you really need it; 
> however, 99 percent of the time an explicit for loop is more readable." 
>
>
> Q. What would this look like in real code? 
> A. We have almost no real-world examples, but here is one from a 
> StackExchange post: 
>
> def wsieve():       # wheel-sieve, by Will Ness.  ideone.com/mqO25A->0hIE89
>     wh11 = [ 2,4,2,4,6,2,6,4,2,4,6,6, 2,6,4,2,6,4,6,8,4,2,4,2,
>              4,8,6,4,6,2,4,6,2,6,6,4, 2,4,6,2,6,4,2,4,2,10,2,10]
>     cs = accumulate(cycle(wh11), start=11)
>     yield( next( cs))   #   cf. ideone.com/WFv4f
>     ps = wsieve()       # codereview.stackexchange.com/q/92365/9064
>     p = next(ps)        # 11
>     psq = p*p           # 121
>     D = dict( zip( accumulate(wh11, start=0), count(0)))   # start from
>     sieve = {}
>     for c in cs:
>         if c in sieve:
>             wheel = 

Re: [Python-ideas] Start argument for itertools.accumulate() [Was: Proposal: A Reduce-Map Comprehension and a "last" builtin]

2018-04-09 Thread Tim Peters
[Tim]
>> Then why was [accumulate] generalized to allow any 2-argument function?

[Raymond]
> Prior to 3.2, accumulate() was in the recipes section as pure Python
> code.  It had no particular restriction to numeric types.
>
> I received a number of requests for accumulate() to be promoted
> to a real itertool (fast, tested, documented C code with a stable API).
> I agreed and accumulate() was added to itertools in 3.2.  It worked
> with anything supporting __add__, including str, bytes, lists, and
> tuples.

So that's the restriction Nick had in mind:  a duck-typing kind, in
that it would blow up on types that don't participate in
PyNumber_Add():

> More specifically, accumulate_next() called PyNumber_Add() without
> any particular type restriction.
>
> Subsequently, I got requests to generalize accumulate() to support any
> arity-2 function (with operator.mul offered as the motivating example).

Sucker ;-)

>  Given that there were user requests and there were ample precedents
> in other languages, I acquiesced despite having some reservations (if used
> with a lambda, the function call overhead might make accumulate() slower
> than a plain Python for-loop without the function call). So, that generalized
> API extension went into 3.3 and has remained unchanged ever since.
>
> Afterwards, I was greeted with the sound of crickets.  Either it was nearly
> perfect or no one cared or both ;-)

Or nobody cared _enough_ to endure a 100-message thread arguing about
an objectively minor change ;-)

If you can borrow Guido's time machine, I'd suggest going back to the
original implementation, except name it `cumsum()` instead, and leave
`accumulate()` to the 3rd-party itertools packages (at least one of
which (itertoolz) has supported an optional "initial" argument all
along).

> It remains one of the least used itertools.

I don't see how that can be known.  There are at least tens of
thousands of Python programmers nobody on this list has ever heard
about - or from - writing code that's invisible to search engines.

I _believe_ it - I just don't see how it can be known.


> ...
> Honestly, I couldn't immediately tell what this code was doing:
>
> list(accumulate([8, 4, "k"], lambda x, y: x + [y], first_result=[]))

Of course you couldn't:  you think of accumulate() as being about
running sums, and _maybe_ some oddball out there using it for running
products.  But that's a statement about your background, seeing code
you've never seen before, not about the function.  Nobody knows
immediately, at first sight, what

 list(accumulate([8, 4, 6], lambda x, y: x + y, first_result=0))

does either.  It's learned.  If your background were in, e.g., Haskell
instead, then in the latter case you'd picture a list [a, b, c, ...]
and figure it out from thinking about what the prefixes of

0 + a + b + c + ...

compute.  In exactly the same way, in the former case you'd think
about what the prefixes of

[] + [a] + [b] + [c] + ...

compute.  They're equally obvious _after_ undertaking that easy
exercise, but clear as mud before doing so.


> This may be a case where a person would be better-off without accumulate() at 
> all.

De gustibus non est disputandum.


>> In short, for _general_ use `accumulate()` needs `initial` for exactly
>> the same reasons `reduce()` needed it.

> The reduce() function had been much derided, so I've had it mentally filed
> in the anti-pattern category.  But yes, there may be wisdom there.

The current accumulate() isn't just akin to reduce(), it _is_
reduce(), except a drunken reduce() so nauseated it vomits its
internal state out after each new element it eats ;-)


>> BTW, the type signatures on the scanl (requires an initial value) and
>> scanl1 (does not support an initial value) implementations I pasted
>> from Haskell's Standard Prelude give a deeper reason:  without an
>> initial value, a list of values of type A can only produce another
>> list of values of type A via scanl1.  The dyadic function passed must
>> map As to As.  But with an initial value supplied of type B, scanl can
>> transform a list of values of type A to a list of values of type B.
>> While that may not have been obvious in the list prefix example I
>> gave, that was at work:  a list of As was transformed into a list _of_
>> lists of As.  That's impossible for scanl1 to do, but easy for scanl.

> Thanks for pointing that out.  I hadn't considered that someone might
> want to transform one type into another using accumulate().  That is
> pretty far from my mental model of what accumulate() was intended for.

It's nevertheless what the current function supports - nothing being
suggested changes that one whit.  It's "worse" in Python because while
only `scanl` in Haskell can "change types", the current `scanl1`-like
Python `accumulate` can change types too.  Perhaps the easiest way to
see that is by noting that

map(f, xs)

is generally equivalent to

accumulate(xs, lambda x, y: f(y))

right now.  

Re: [Python-ideas] Start argument for itertools.accumulate() [Was: Proposal: A Reduce-Map Comprehension and a "last" builtin]

2018-04-09 Thread Peter O'Connor
Also, Tim Peters' one-line example of:

print(list(itertools.accumulate([1, 2, 3], lambda x, y: str(x) + str(y))))

I think makes it clear that itertools.accumulate is not the right vehicle
for this change - we should make a new itertools function with a required
"initial" argument.

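For the record, the result of that call is:

    >>> import itertools
    >>> list(itertools.accumulate([1, 2, 3], lambda x, y: str(x) + str(y)))
    [1, '12', '123']

- the first element is passed through untouched, which is exactly the oddity
at issue.
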
On Mon, Apr 9, 2018 at 1:44 PM, Peter O'Connor 
wrote:

> It seems clear that the name "accumulate" has been kind of antiquated
> since the "func" argument was added and "sum" became just a default.
>
> And people seem to disagree about whether the result should have a length
> N or length N+1 (where N is the number of elements in the input iterable).
>
> The behaviour where the first element of the return is the same as the
> first element of the input can be weird and confusing.  E.g. compare:
>
> >> list(itertools.accumulate([2, 3, 4], lambda accum, val: accum-val))
> [2, -1, -5]
> >> list(itertools.accumulate([2, 3, 4], lambda accum, val: val-accum))
> [2, 1, 3]
>
> One might expect that since the second function returned the negative of
> the first function, and both are linear, that the results of the second
> would be the negative of the first, but that is not the case.
>
> Maybe we can instead let "accumulate" fall into deprecation, and instead
> add a new more general itertools "reducemap" method:
>
> def reducemap(iterable: Iterable[Any], func: Callable[(Any, Any), Any],
> initial: Any, include_initial_in_return=False): -> Generator[Any]
>
> Benefits:
> - The name is more descriptive of the operation (a reduce operation where
> we keep values at each step, like a map)
> - The existence of include_initial_in_return=False makes it somewhat
> clear that the initial value will by default NOT be provided in the
> returning generator
> - The mandatory initial argument forces you to think about initial
> conditions.
>
> Disadvantages:
> - The most common use case (summation, product), has a "natural" first
> element (0, and 1, respectively) when you'd now be required to write out.
> (but we could just leave accumulate for sum).
>
> I still prefer a built-in language comprehension syntax for this like: (y
> := f(y, x) for x in x_vals from y=0), but for a huge discussion on that see
> the other thread.
>
> --- More Examples (using "accumulate" as the name for now)  ---
>
> # Kalman filters
> def kalman_filter_update(state, measurement):
> ...
> return state
>
> online_trajectory_estimate = accumulate(measurement_generator, func=
> kalman_filter_update, initial = initial_state)
>
> ---
>
> # Bayesian stats
> def update_model(prior, evidence):
>...
>return posterior
>
> model_history  = accumulate(evidence_generator, func=update_model,
> initial = prior_distribution)
>
> ---
>
> # Recurrent Neural networks:
> def recurrent_network_layer_step(last_hidden, current_input):
> new_hidden = 
> return new_hidden
>
> hidden_state_generator = accumulate(input_sequence, func=
> recurrent_network_layer_step, initial = initial_hidden_state)
>
>
>
>
> On Mon, Apr 9, 2018 at 7:14 AM, Nick Coghlan  wrote:
>
>> On 9 April 2018 at 14:38, Raymond Hettinger 
>> wrote:
>> >> On Apr 8, 2018, at 6:43 PM, Tim Peters  wrote:
>> >> In short, for _general_ use `accumulate()` needs `initial` for exactly
>> >> the same reasons `reduce()` needed it.
>> >
>> > The reduce() function had been much derided, so I've had it mentally
>> filed in the anti-pattern category.  But yes, there may be wisdom there.
>>
>> Weirdly (or perhaps not so weirdly, given my tendency to model
>> computational concepts procedurally), I find the operation of reduce()
>> easier to understand when it's framed as "last(accumulate(iterable,
>> binop, initial=value)))".
>>
>> Cheers,
>> Nick.
>>
>> --
>> Nick Coghlan   |   ncogh...@gmail.com   |   Brisbane, Australia
>> ___
>> Python-ideas mailing list
>> Python-ideas@python.org
>> https://mail.python.org/mailman/listinfo/python-ideas
>> Code of Conduct: http://python.org/psf/codeofconduct/
>>
>
>
___
Python-ideas mailing list
Python-ideas@python.org
https://mail.python.org/mailman/listinfo/python-ideas
Code of Conduct: http://python.org/psf/codeofconduct/


Re: [Python-ideas] Start argument for itertools.accumulate() [Was: Proposal: A Reduce-Map Comprehension and a "last" builtin]

2018-04-09 Thread Peter O'Connor
It seems clear that the name "accumulate" has been kind of antiquated since
the "func" argument was added and "sum" became just a default.

And people seem to disagree about whether the result should have a length N
or length N+1 (where N is the number of elements in the input iterable).

The behaviour where the first element of the return is the same as the
first element of the input can be weird and confusing.  E.g. compare:

>> list(itertools.accumulate([2, 3, 4], lambda accum, val: accum-val))
[2, -1, -5]
>> list(itertools.accumulate([2, 3, 4], lambda accum, val: val-accum))
[2, 1, 3]

One might expect that since the second function returned the negative of
the first function, and both are linear, that the results of the second
would be the negative of the first, but that is not the case.

Maybe we can instead let "accumulate" fall into deprecation, and instead
add a new more general itertools "reducemap" method:

def reducemap(iterable: Iterable[Any], func: Callable[[Any, Any], Any],
              initial: Any, include_initial_in_return=False) -> Generator[Any]:

Benefits:
- The name is more descriptive of the operation (a reduce operation where
we keep values at each step, like a map)
- The existence of include_initial_in_return=False makes it somewhat clear
that the initial value will by default NOT be provided in the returning
generator
- The mandatory initial argument forces you to think about initial
conditions.

Disadvantages:
- The most common use case (summation, product) has a "natural" first
element (0 and 1, respectively) which you'd now be required to write out
(but we could just leave accumulate for sum).

I still prefer a built-in language comprehension syntax for this like: (y
:= f(y, x) for x in x_vals from y=0), but for a huge discussion on that see
the other thread.

--- More Examples (using "accumulate" as the name for now)  ---

# Kalman filters
def kalman_filter_update(state, measurement):
    ...
    return state

online_trajectory_estimate = accumulate(measurement_generator,
                                        func=kalman_filter_update,
                                        initial=initial_state)

---

# Bayesian stats
def update_model(prior, evidence):
    ...
    return posterior

model_history = accumulate(evidence_generator,
                           func=update_model,
                           initial=prior_distribution)

---

# Recurrent Neural networks:
def recurrent_network_layer_step(last_hidden, current_input):
    new_hidden = ...
    return new_hidden

hidden_state_generator = accumulate(input_sequence,
                                    func=recurrent_network_layer_step,
                                    initial=initial_hidden_state)
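
All three sketches above assume the proposed initial= argument.  A runnable
approximation with today's itertools, using the chain trick, for the
signal-smoothing example from the other thread (`signal` and `decay` assumed
to be defined):

    from itertools import accumulate, chain

    smooth_signal = list(accumulate(chain([0], signal),
                                    lambda avg, x: (1 - decay) * avg + decay * x))
    # the leading 0 (the initial value) is included in the output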




On Mon, Apr 9, 2018 at 7:14 AM, Nick Coghlan  wrote:

> On 9 April 2018 at 14:38, Raymond Hettinger 
> wrote:
> >> On Apr 8, 2018, at 6:43 PM, Tim Peters  wrote:
> >> In short, for _general_ use `accumulate()` needs `initial` for exactly
> >> the same reasons `reduce()` needed it.
> >
> > The reduce() function had been much derided, so I've had it mentally
> filed in the anti-pattern category.  But yes, there may be wisdom there.
>
> Weirdly (or perhaps not so weirdly, given my tendency to model
> computational concepts procedurally), I find the operation of reduce()
> easier to understand when it's framed as "last(accumulate(iterable,
> binop, initial=value)))".
>
> Cheers,
> Nick.
>
> --
> Nick Coghlan   |   ncogh...@gmail.com   |   Brisbane, Australia
> ___
> Python-ideas mailing list
> Python-ideas@python.org
> https://mail.python.org/mailman/listinfo/python-ideas
> Code of Conduct: http://python.org/psf/codeofconduct/
>
___
Python-ideas mailing list
Python-ideas@python.org
https://mail.python.org/mailman/listinfo/python-ideas
Code of Conduct: http://python.org/psf/codeofconduct/


Re: [Python-ideas] Accepting multiple mappings as positional arguments to create dicts

2018-04-09 Thread Daniel Moisset
No worries, already-implemented features happen so often in this list that
there's a story about Guido going back in a time machine to implement them
;-)

Just wanted to check that I had understood what you suggested correctly

On 9 April 2018 at 12:42, Andrés Delfino  wrote:

> Sorry, I didn't know that kwargs unpacking in dictionary displays doesn't
> raise a TypeError exception.
>
> On Mon, Apr 9, 2018 at 8:23 AM, Daniel Moisset 
> wrote:
>
>> In which way would this be different to {**mapping1, **mapping2,
>> **mapping3} ?
>>
>> On 8 April 2018 at 22:18, Andrés Delfino  wrote:
>>
>>> Hi!
>>>
>>> I thought that maybe dict could accept several mappings as positional
>>> arguments, like this:
>>>
>>> class Dict4(dict):
>>>     def __init__(self, *args, **kwargs):
>>>         if len(args) > 1:
>>>             if not all([isinstance(arg, dict) for arg in args]):
>>>                 raise TypeError('Dict4 expected instances of dict since '
>>>                                 'multiple positional arguments were passed')
>>>
>>>             temp = args[0].copy()
>>>
>>>             for arg in args[1:]:
>>>                 temp.update(arg)
>>>
>>>             super().__init__(temp, **kwargs)
>>>         else:
>>>             super().__init__(*args, **kwargs)

>>>
>>> AFAIK, this wouldn't create compatibility problems, since you can't pass
>>> two positional arguments now anyways.
>>>
>>> It would be useful to solve the "sum/union dicts" discussion, for
>>> example: requests.get(url, params=dict(params, {'foo': bar}))
>>>
>>> What are your thoughts?
>>>
>>> ___
>>> Python-ideas mailing list
>>> Python-ideas@python.org
>>> https://mail.python.org/mailman/listinfo/python-ideas
>>> Code of Conduct: http://python.org/psf/codeofconduct/
>>>
>>>
>>
>>
>> --
>> Daniel F. Moisset - UK Country Manager - Machinalis Limited
>> www.machinalis.co.uk 
>> Skype: @dmoisset T: + 44 7398 827139
>>
>> 1 Fore St, London, EC2Y 9DT
>> 
>>
>> Machinalis Limited is a company registered in England and Wales.
>> Registered number: 10574987.
>>
>
>


-- 
Daniel F. Moisset - UK Country Manager - Machinalis Limited
www.machinalis.co.uk 
Skype: @dmoisset T: + 44 7398 827139

1 Fore St, London, EC2Y 9DT

Machinalis Limited is a company registered in England and Wales. Registered
number: 10574987.
___
Python-ideas mailing list
Python-ideas@python.org
https://mail.python.org/mailman/listinfo/python-ideas
Code of Conduct: http://python.org/psf/codeofconduct/


Re: [Python-ideas] Is there any idea about dictionary destructing?

2018-04-09 Thread Joao S. O. Bueno
I have an idea for an innovative, unambiguous, straightforward and
backwards compatible syntax for that, which even allows one to pass
metadata along the operation so that the results can be tweaked
according to each case's needs.

What about:

new_data = dict_feed({
        "direct": "some data",
        "nested": {
            "lst_data": [1, 2, 3],
            "int_data": 1
        }
    },
    data
)

We could even give this approach a name, such as "function call".


In other words, why bloat the language with hard-to-learn, error-prone,
gritty-looking syntax when a simple, plain function call is perfectly good?
All you need to add over your suggestion is the function name and a pair of
parentheses.
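
(For the record, a rough sketch of how such a helper might look -- "dict_feed"
is just an illustrative name here, not an existing function, and in this sketch
the leaf strings in the spec name the values you want pulled out:)

    def dict_feed(spec, data):
        # walk `spec`, which mirrors the shape of `data`; wherever a leaf of
        # `spec` is a string, treat it as the name for the matching value in
        # `data` and collect name -> value pairs in a flat result dict
        result = {}
        for key, want in spec.items():
            if isinstance(want, dict):
                result.update(dict_feed(want, data[key]))
            else:
                result[want] = data[key]
        return result

    data = {"direct": "some data",
            "nested": {"lst_data": [1, 2, 3], "int_data": 1}}

    new_data = dict_feed({"direct": "x", "nested": {"int_data": "y"}}, data)
    # new_data == {'x': 'some data', 'y': 1}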




On 7 April 2018 at 14:26, thautwarm  wrote:
> We know that Python support the destructing of iterable objects.
>
> m_iter = (_ for _ in range(10))
> a, *b, c = m_iter
>
> That's pretty cool! It's really convenient when there're many corner cases
> to handle with iterable collections.
> However destructing in Python could be more convenient if we support
> dictionary destructing.
>
> In my opinion, dictionary destructing is not difficult to implement and
> makes the syntax more expressive. A typical example is data access on nested
> data structures (just like JSON); destructing a dictionary makes the logic
> quite clear:
>
> data = {
>     "direct": "some data",
>     "nested": {
>         "lst_data": [1, 2, 3],
>         "int_data": 1
>     }
> }
> {
>     "direct": direct,
>     "nested": {
>         "lst_data": [a, b, c],
>     }
> } = data
>
>
> Dictionary destructing might not be very well-known but it really helps. The
> operations on nested key-value collections are very frequent, and without it
> the code for business logic is not readable enough. Moreover, Python is now
> popular in data processing, which would benefit from full support for data
> destructing.
>
> Here are some implementations of other languages:
> Elixir, which is also a popular dynamic language nowadays.
>
> iex> %{} = %{:a => 1, 2 => :b}
> %{2 => :b, :a => 1}
> iex> %{:a => a} = %{:a => 1, 2 => :b}
> %{2 => :b, :a => 1}
> iex> a
> 1
> iex> %{:c => c} = %{:a => 1, 2 => :b}
> ** (MatchError) no match of right hand side value: %{2 => :b, :a => 1}
>
> And in F#, there is something similar to dictionary destructing (actually,
> this destructs `struct` instead):
>
> type MyRecord = { Name: string; ID: int }
>
> let IsMatchByName record1 (name: string) =
>     match record1 with
>     | { MyRecord.Name = nameFound; MyRecord.ID = _; } when nameFound = name -> true
>     | _ -> false
>
> let recordX = { Name = "Parker"; ID = 10 }
> let isMatched1 = IsMatchByName recordX "Parker"
> let isMatched2 = IsMatchByName recordX "Hartono"
>
> All of them partially destructs(or matches) a dictionary.
>
> thautwarm
>
>
> ___
> Python-ideas mailing list
> Python-ideas@python.org
> https://mail.python.org/mailman/listinfo/python-ideas
> Code of Conduct: http://python.org/psf/codeofconduct/
>
___
Python-ideas mailing list
Python-ideas@python.org
https://mail.python.org/mailman/listinfo/python-ideas
Code of Conduct: http://python.org/psf/codeofconduct/


[Python-ideas] Fwd: Is there any idea about dictionary destructing?

2018-04-09 Thread Thautwarm Zhao
I'm sorry that I didn't send a copy of the discussions here.

-- Forwarded message --
From: Thautwarm Zhao 
Date: 2018-04-09 1:24 GMT+08:00
Subject: Re: [Python-ideas] Is there any idea about dictionary destructing?
To: "Eric V. Smith" 


Thank you, Eric. Your links really helped me and I've investigated them
carefully.

After reading them, I found the discussion focused almost entirely on
non-nested data structures.

Flat key-value pairs can easily be handled by something like

  x, y, z = [some_dict[k] for k in ('a', 'b', 'c')]

I couldn't agree more, but when it comes to nested data such as

 some_dict = {
     'a': {
         'b': {
             'c': V1},
         'e': V2
     },
     'f': V3
 }

I agree that there could be other ways as intuitive as dict destructing to
get `V1, V2, V3`; however, dict destructing is about consistency with
existing Python language behaviour.

When I write code like this:

[a, *b] = [1, 2, 3]

The LHS actually mirrors the RHS, and if we implement a way to apply this
to dictionaries

{'a': a, 'b': b, '@': c, **other} = {'a': 1, 'b': 2, '@': 3, '*': 4}

it, too, would show that the LHS mirrors the RHS. Dict destructing/constructing
is fully compatible with Python's unpacking/packing, just as iterable
destructing/constructing is. It's neat that when we talk about Python's data
structures we can talk about consistency, readability and intuitive
expression.

In the real world, the following could really help when working with stored
data.

 some_dict =  {'a': [1, 2, 3, {"d": 4, "f": 5}]}
 {'a': [b, *c, {"d": e, **_}]} =  some_dict

The LHS not only shows the structure of the data intuitively (which makes
review easier, too), but also provides a way to access the data in less
code.
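
(For comparison, the closest spelling available today, without any new syntax,
is something like the following -- `tail` is just a throwaway name for the
trailing dict:)

 some_dict = {'a': [1, 2, 3, {"d": 4, "f": 5}]}
 b, *c, tail = some_dict['a']    # b == 1, c == [2, 3], tail == {"d": 4, "f": 5}
 e = tail["d"]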

In the previous discussion people showed multiple real-world uses of dict
destructing:

   - Django Rest Framework validation
   - Loading config files and using them, specifically YAML/JSON data
     access.

In fact, any operation on a dictionary beyond simply getting a value from a
key might need dict destructing, once the task is complicated enough.

I do think the use cases are now general enough to justify allowing syntax
along these lines for the above tasks.

P.S:

Some other suggestions from the previous discussion, like the following:

 'a' as x, 'b' as y, 'c' as z = some_dict
 'a': x, 'b': y, 'c': z = some_dict
 mode, height, width = **prefs

Each of them either conflicts with the current syntax or breaks the
consistency of the language (LHS != RHS).

thautwarm


2018-04-08 5:39 GMT+08:00 Eric V. Smith :

> There was a long thread last year on a subject, titled "Dictionary
> destructing and unpacking.":
> https://mail.python.org/pipermail/python-ideas/2017-June/045963.html
>
> You might want to read through it and see what ideas and problems were
> raised then.
>
> In that discussion, there's also a link to an older pattern matching
> thread:
> https://mail.python.org/pipermail/python-ideas/2015-April/032907.html
>
> Eric
>
> On 4/7/2018 1:26 PM, thautwarm wrote:
>
>> We know that Python support the destructing of iterable objects.
>>
>> m_iter = (_ for _ in range(10))
>> a, *b, c = m_iter
>>
>> That's pretty cool! It's really convenient when there're many corner
>> cases to handle with iterable collections.
>> However destructing in Python could be more convenient if we support
>> dictionary destructing.
>>
>> In my opinion, dictionary destructing is not difficult to implement and
>> makes the syntax more expressive. A typical example is data access on
>> nested data structures(just like JSON), destructing a dictionary makes the
>> logic quite clear:
>>
>> data = {
>>     "direct": "some data",
>>     "nested": {
>>         "lst_data": [1, 2, 3],
>>         "int_data": 1
>>     }
>> }
>> {
>>     "direct": direct,
>>     "nested": {
>>         "lst_data": [a, b, c],
>>     }
>> } = data
>>
>>
>> Dictionary destructing might not be very well-known but it really helps.
>> The operations on nested key-value collections are very frequent, and the
>> codes for business logic are not readable enough until now. Moreover Python
>> is now popular in data processing which must be enhanced by the entire
>> support of data destructing.
>>
>> Here are some implementations of other languages:
>> Elixir, which is also a popular dynamic language nowadays.
>>
>> iex> %{} = %{:a => 1, 2 => :b}
>> %{2 => :b, :a => 1}
>> iex> %{:a => a} = %{:a => 1, 2 => :b}
>> %{2 => :b, :a => 1}
>> iex> a
>> 1
>> iex> %{:c => c} = %{:a => 1, 2 => :b}
>> ** (MatchError) no match of right hand side value: %{2 => :b, :a => 1}
>>
>> And in F#, there is something similar to dictionary destructing (actually,
>> this destructs `struct` instead):
>>
>> type MyRecord = { Name: string; ID: int }
>>
>> let IsMatchByName record1 (name: string) =
>>     match record1 with
>>     | { MyRecord.Name = nameFound; MyRecord.ID = _; } when nameFound = name -> true
>>     | _ -> false
>>
>> let recordX = { Name = "Parker"; ID = 10 }
>> let isMatched1 = IsMatchByName recordX "Parker"
>> let isMatched2 = IsMatchByName recordX "Hartono"

Re: [Python-ideas] Accepting multiple mappings as positional arguments to create dicts

2018-04-09 Thread Andrés Delfino
Sorry, I didn't know that kwargs unpacking in dictionary displays doesn't
raise a TypeError exception.

On Mon, Apr 9, 2018 at 8:23 AM, Daniel Moisset 
wrote:

> In which way would this be different to {**mapping1, **mapping2,
> **mapping3} ?
>
> On 8 April 2018 at 22:18, Andrés Delfino  wrote:
>
>> Hi!
>>
>> I thought that maybe dict could accept several mappings as positional
>> arguments, like this:
>>
>> class Dict4(dict):
>>> def __init__(self, *args, **kwargs):
>>> if len(args) > 1:
>>> if not all([isinstance(arg, dict) for arg in args]):
>>> raise TypeError('Dict4 expected instances of dict since '
>>> 'multiple positional arguments were passed')
>>>
>>> temp = args[0].copy()
>>>
>>> for arg in args[1:]:
>>> temp.update(arg)
>>>
>>> super().__init__(temp, **kwargs)
>>> else:
>>> super().__init__(*args, **kwargs)
>>>
>>
>> AFAIK, this wouldn't create compatibility problems, since you can't pass
>> two positional arguments now anyways.
>>
>> It would be useful to solve the "sum/union dicts" discussion, for
>> example: requests.get(url, params=dict(params, {'foo': bar}))
>>
>> What are your thoughts?
>>
>> ___
>> Python-ideas mailing list
>> Python-ideas@python.org
>> https://mail.python.org/mailman/listinfo/python-ideas
>> Code of Conduct: http://python.org/psf/codeofconduct/
>>
>>
>
>
> --
> Daniel F. Moisset - UK Country Manager - Machinalis Limited
> www.machinalis.co.uk 
> Skype: @dmoisset T: + 44 7398 827139
>
> 1 Fore St, London, EC2Y 9DT
> 
>
> Machinalis Limited is a company registered in England and Wales.
> Registered number: 10574987.
>
___
Python-ideas mailing list
Python-ideas@python.org
https://mail.python.org/mailman/listinfo/python-ideas
Code of Conduct: http://python.org/psf/codeofconduct/


Re: [Python-ideas] Accepting multiple mappings as positional arguments to create dicts

2018-04-09 Thread Daniel Moisset
In which way would this be different to {**mapping1, **mapping2,
**mapping3} ?
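
(A quick sketch, with three made-up mappings m1, m2 and m3, of what that
existing spelling already does:)

    m1, m2, m3 = {'a': 1}, {'b': 2}, {'a': 10, 'c': 3}

    {**m1, **m2, **m3}    # {'a': 10, 'b': 2, 'c': 3} -- later mappings win
    # the proposal would spell the same merge as dict(m1, m2, m3)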

On 8 April 2018 at 22:18, Andrés Delfino  wrote:

> Hi!
>
> I thought that maybe dict could accept several mappings as positional
> arguments, like this:
>
> class Dict4(dict):
>> def __init__(self, *args, **kwargs):
>> if len(args) > 1:
>> if not all([isinstance(arg, dict) for arg in args]):
>> raise TypeError('Dict4 expected instances of dict since '
>> 'multiple positional arguments were passed')
>>
>> temp = args[0].copy()
>>
>> for arg in args[1:]:
>> temp.update(arg)
>>
>> super().__init__(temp, **kwargs)
>> else:
>> super().__init__(*args, **kwargs)
>>
>
> AFAIK, this wouldn't create compatibility problems, since you can't pass
> two positional arguments now anyways.
>
> It would be useful to solve the "sum/union dicts" discussion, for example:
> requests.get(url, params=dict(params, {'foo': bar}))
>
> What are your thoughts?
>
> ___
> Python-ideas mailing list
> Python-ideas@python.org
> https://mail.python.org/mailman/listinfo/python-ideas
> Code of Conduct: http://python.org/psf/codeofconduct/
>
>


-- 
Daniel F. Moisset - UK Country Manager - Machinalis Limited
www.machinalis.co.uk 
Skype: @dmoisset T: + 44 7398 827139

1 Fore St, London, EC2Y 9DT

Machinalis Limited is a company registered in England and Wales. Registered
number: 10574987.
___
Python-ideas mailing list
Python-ideas@python.org
https://mail.python.org/mailman/listinfo/python-ideas
Code of Conduct: http://python.org/psf/codeofconduct/


Re: [Python-ideas] Start argument for itertools.accumulate() [Was: Proposal: A Reduce-Map Comprehension and a "last" builtin]

2018-04-09 Thread Nick Coghlan
On 9 April 2018 at 14:38, Raymond Hettinger  wrote:
>> On Apr 8, 2018, at 6:43 PM, Tim Peters  wrote:
>> In short, for _general_ use `accumulate()` needs `initial` for exactly
>> the same reasons `reduce()` needed it.
>
> The reduce() function had been much derided, so I've had it mentally filed in 
> the anti-pattern category.  But yes, there may be wisdom there.

Weirdly (or perhaps not so weirdly, given my tendency to model
computational concepts procedurally), I find the operation of reduce()
easier to understand when it's framed as "last(accumulate(iterable,
binop, initial=value))".
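
(As a rough sketch -- assuming accumulate() grows the proposed initial=
argument, and using a small last() helper since no such builtin exists:)

    from functools import reduce
    from itertools import accumulate
    from operator import add

    def last(iterable):
        # run the iterable to exhaustion, keeping only the final value
        value = None
        for value in iterable:
            pass
        return value

    data = [1, 2, 3, 4]
    reduce(add, data, 10)                       # 20
    last(accumulate(data, add, initial=10))     # 20 as well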

Cheers,
Nick.

-- 
Nick Coghlan   |   ncogh...@gmail.com   |   Brisbane, Australia
___
Python-ideas mailing list
Python-ideas@python.org
https://mail.python.org/mailman/listinfo/python-ideas
Code of Conduct: http://python.org/psf/codeofconduct/


Re: [Python-ideas] Proposal: A Reduce-Map Comprehension and a "last" builtin

2018-04-09 Thread Rhodri James

On 09/04/18 11:52, Rhodri James wrote:

On 07/04/18 09:54, Cammil Taank wrote:

Care to repeat those arguments?


Indeed.

*Minimal use of characters*


Terseness is not necessarily a virtue.  While it's good not to be 
needlessly verbose, Python is not Perl and we are not trying to do 
everything on one line.  Overly terse code is much less readable, as all 
the obfuscation competitions demonstrate.  I'm afraid I count this one 
*against* your proposal.



*Thoughts on odd usage of "!"*

In the English language, `!` signifies an exclamation, and I am
imagining a similar usage to that of introducing something by its name
in an energetic way. For example, a boxer walking into the ring:

"Muhammed_Ali! ", "x! get_x()"


I'm afraid that's a very personal interpretation.  In particular, '!' 
normally ends a sentence very firmly, so expecting the expression to 
carry on is a little counter-intuitive.  For me, my expectations of '!' 
run roughly as:


   * factorial (from my maths degree)
   * array dereference (because I am old: a!2 was the equivalent of a[2] 
in BCPL)

   * an exclamation, much overused in writing
   * the author was bitten by Yahoo! at an early age.


Also logical negation in C-like languages, of course.  Sorry, I'm a bit 
sleep-deprived this morning.


--
Rhodri James *-* Kynesim Ltd
___
Python-ideas mailing list
Python-ideas@python.org
https://mail.python.org/mailman/listinfo/python-ideas
Code of Conduct: http://python.org/psf/codeofconduct/


Re: [Python-ideas] Proposal: A Reduce-Map Comprehension and a "last" builtin

2018-04-09 Thread Rhodri James

On 07/04/18 09:54, Cammil Taank wrote:

Care to repeat those arguments?


Indeed.

*Minimal use of characters*


Terseness is not necessarily a virtue.  While it's good not to be 
needlessly verbose, Python is not Perl and we are not trying to do 
everything on one line.  Overly terse code is much less readable, as all 
the obfuscation competitions demonstrate.  I'm afraid I count this one 
*against* your proposal.



*Thoughts on odd usage of "!"*

In the English language, `!` signifies an exclamation, and I am
imagining a similar usage to that of introducing something by its name
in an energetic way. For example, a boxer walking into the ring:

"Muhammed_Ali! ", "x! get_x()"


I'm afraid that's a very personal interpretation.  In particular, '!' 
normally ends a sentence very firmly, so expecting the expression to 
carry on is a little counter-intuitive.  For me, my expectations of '!' 
run roughly as:


  * factorial (from my maths degree)
  * array dereference (because I am old: a!2 was the equivalent of a[2] 
in BCPL)

  * an exclamation, much overused in writing
  * the author was bitten by Yahoo! at an early age.

--
Rhodri James *-* Kynesim Ltd
___
Python-ideas mailing list
Python-ideas@python.org
https://mail.python.org/mailman/listinfo/python-ideas
Code of Conduct: http://python.org/psf/codeofconduct/


Re: [Python-ideas] PEP 572: Statement-Local Name Bindings, take three!

2018-04-09 Thread Nick Coghlan
On 9 April 2018 at 01:01, Steven D'Aprano  wrote:
> On Sun, Apr 08, 2018 at 09:25:33PM +1000, Nick Coghlan wrote:
>
>> I was writing a new stdlib test case today, and thinking about how I
>> might structure it differently in a PEP 572 world, and realised that a
>> situation the next version of the PEP should discuss is this one:
>>
>> # Dict display
>> data = {
>> key_a: 1,
>> key_b: 2,
>> key_c: 3,
>> }
>>
>> # Set display with local name bindings
>> data = {
>> local_a := 1,
>> local_b := 2,
>> local_c := 3,
>>}
>
> I don't understand the point of these examples. Sure, I guess they would
> be legal, but unless you're actually going to use the name bindings,
> what's the point in defining them?

That *would* be the point.

In the case where it occurred to me, the actual code I'd written
looked like this:

   curdir_import = ""
   curdir_relative = os.curdir
   curdir_absolute = os.getcwd()
   all_spellings = [curdir_import, curdir_relative, curdir_absolute]

(Since I was testing the pydoc CLI's sys.path manipulation, and wanted
to cover all the cases).
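
(In a PEP 572 world, a sketch of that same setup written as a single list
display with the proposed := bindings might look like:)

    all_spellings = [
        curdir_import := "",
        curdir_relative := os.curdir,
        curdir_absolute := os.getcwd(),
    ]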


>> I don't think this is bad (although the interaction with dicts is a
>> bit odd), and I don't think it counts as a rationale either, but I do
>> think the fact that it becomes possible should be noted as an outcome
>> arising from the "No sublocal scoping" semantics.
>
> If we really wanted to keep the sublocal scoping, we could make
> list/set/dict displays their own scope too.
>
> Personally, that's the only argument for sublocal scoping that I like
> yet: what happens inside a display should remain inside the display, and
> not leak out into the function.
>
> So that has taken me from -1 on sublocal scoping to -0.5 if it applies
> to displays.

Inflicting the challenges that comprehensions have at class scope on
all container displays wouldn't strike me as a desirable outcome (plus
there's also the problem that full nested scopes are relatively
expensive at runtime).

Cheers,
Nick.

-- 
Nick Coghlan   |   ncogh...@gmail.com   |   Brisbane, Australia
___
Python-ideas mailing list
Python-ideas@python.org
https://mail.python.org/mailman/listinfo/python-ideas
Code of Conduct: http://python.org/psf/codeofconduct/


Re: [Python-ideas] Start argument for itertools.accumulate() [Was: Proposal: A Reduce-Map Comprehension and a "last" builtin]

2018-04-09 Thread Greg Ewing

Raymond Hettinger wrote:


I don't want to overstate the case, but I do think a function signature that
offers a "first_value" option is an invitation to treat the first value as
being distinct from the rest of the data stream.


I conjecture that the initial value is *always* special,
and the only cases where it seems not to be are where
you're relying on some implicit initial value such as
zero.
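
(A small illustration, assuming an accumulate() that accepts the proposed
initial= argument: spelling the usually-implicit zero explicitly makes the
special value visible in the output.)

    from itertools import accumulate

    list(accumulate([1, 2, 3]))             # [1, 3, 6]   -- implicit start is the first item
    list(accumulate([1, 2, 3], initial=0))  # [0, 1, 3, 6] -- the special initial value is yielded too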

--
Greg

___
Python-ideas mailing list
Python-ideas@python.org
https://mail.python.org/mailman/listinfo/python-ideas
Code of Conduct: http://python.org/psf/codeofconduct/


Re: [Python-ideas] Proposal: A Reduce-Map Comprehension and a "last" builtin

2018-04-09 Thread Jacco van Dorp
> With the increased emphasis on iterators and generators in Python 3.x,
> the lack of a simple expression level equivalent to "for item in
> iterable: pass" is occasionally irritating, especially when
> demonstrating behaviour at the interactive prompt.

I've sometimes thought that exhaust(iterator) or iterator.exhaust() would be
a good thing to have - I've often written code that basically says "call this
function for every element in this container, and I don't care about the
return values", but I find myself using a list comprehension instead of a
generator. I guess it's such an edge case that exhaust(iterator) as a builtin
would be overkill (though perhaps itertools could have it?), and most people
don't pass around iterators, so (f(x) for x in y).exhaust() might not look
natural to most people. It could return the final value to support the last()
semantics, but I think exhaustion would often be more important than the last
value.
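
A minimal sketch of such a helper (using the same deque trick the itertools
"consume" recipe uses) could be:

    from collections import deque

    def exhaust(iterator):
        # run the iterator to completion, discarding every value
        deque(iterator, maxlen=0)

    exhaust(print(x) for x in range(3))    # prints 0, 1, 2; keeps nothing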

2018-04-09 0:58 GMT+02:00 Greg Ewing :
> Kyle Lahnakoski wrote:
>
>> Consider Serhiy Storchaka's elegant solution, which I reformatted for
>> readability
>>
>>> smooth_signal = [
>>> average
>>>for average in [0]
>>>for x in signal
>>> for average in [(1-decay)*average + decay*x]
>>> ]
>
>
> "Elegant" isn't the word I would use, more like "clever".
> Rather too clever, IMO -- it took me some head scratching
> to figure out how it does what it does.
>
> And it would have taken even more head scratching, except
> there's a clue as to *what* it's supposed to be doing:
> the fact that it's assigned to something called
> "smooth_signal" -- one of those "inaccurate names" that
> you disparage so much. :-)
>
> --
> Greg
>
> ___
> Python-ideas mailing list
> Python-ideas@python.org
> https://mail.python.org/mailman/listinfo/python-ideas
> Code of Conduct: http://python.org/psf/codeofconduct/
___
Python-ideas mailing list
Python-ideas@python.org
https://mail.python.org/mailman/listinfo/python-ideas
Code of Conduct: http://python.org/psf/codeofconduct/