On Sat, May 18, 2019 at 8:17 PM Yonatan Zunger wrote:
> Instead, we should permit any expression to be used. If a value does not
> expose an __enter__ method, it should behave as though its __enter__
> method is return self; if it does not have an __exit__ method, it should
> behave as though
On Sun, May 12, 2019, 5:36 PM Paul Moore wrote:
> On Sun, 12 May 2019 at 21:06, David Mertz wrote:
> > I thought of 'as' initially, and it reads well as English. But it felt
> to me like the meaning was too different from the other meanings of 'as' in
> Python. I might be pers
On Sun, May 12, 2019, 3:33 PM Gustavo Carneiro wrote:
> # Hypothetical future labelled break:
>> def find_needle_in_haystacks():
>> for haystack in glob.glob('path/to/stuff/*') label HAYSTACKS:
>> fh = open(fname)
>> header = fh.readline()
>> if get_format(header) ==
the break issue).
On Sun, May 12, 2019, 3:38 PM Chris Angelico wrote:
> On Mon, May 13, 2019 at 3:26 AM David Mertz wrote:
> >
> > To be clear in this thread, I don't think I'm really ADVOCATING for a
> multi-level break. My comments are simply noting that I persona
To be clear in this thread, I don't think I'm really ADVOCATING for a
multi-level break. My comments are simply noting that I personally fairly
often encounter the situation where they would be useful. At the same
time, I worry about Python gaining sometimes-useful features that
complicate the
ly bookkeeping lines instead).
On Sat, May 11, 2019, 11:22 PM Chris Angelico wrote:
> On Sun, May 12, 2019 at 1:16 PM David Mertz wrote:
> >
> > Ok, sure. But what if different seen_enough() conditions are in the two
> inner loops? By "break to outer" ... I mean, d
Terry reminds me of a common case I encounter that cannot be transformed
into a loop over itertools.product(). E.g.
for main in stuff:
if thing_about(main):
for detail in more_stuff(stuff, main):
if seen_enough(detail):
# break to outer somehow
else:
I don't love the 'break break' syntax idea. But I find myself wanting to
break out of nested loops quite often. Very rarely could this be
transformed to an 'itertools.product' de-nesting, albeit perhaps more
often than I actually do that.
My usual kludge is a sentinel 'break_outer' variable
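That sentinel kludge, and the for/else idiom that avoids it, might be sketched like this (the data and `is_needle` helper are hypothetical stand-ins for the haystack example):

```python
# Hypothetical data standing in for the haystack example above
haystacks = [[1, 4], [6, 7, 8], [9, 10]]

def is_needle(x):
    return x % 7 == 0

# The sentinel kludge: a flag variable re-tested after the inner loop
found = None
for haystack in haystacks:
    for item in haystack:
        if is_needle(item):
            found = item
            break              # only breaks the inner loop...
    if found is not None:
        break                  # ...so the outer loop repeats the test

# The for/else idiom needs no sentinel: the else clause runs only
# when the inner loop finishes WITHOUT hitting break
for haystack in haystacks:
    for item in haystack:
        if is_needle(item):
            break
    else:
        continue               # inner loop exhausted: keep searching
    break                      # inner loop broke: break the outer too

print(found, item)             # prints: 7 7
```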
On Thu, Apr 25, 2019 at 6:42 PM Peter O'Connor
wrote:
> Despite the general beauty of Python, I find myself constantly violating
> the "don't repeat yourself" maxim when trying to write clear, fully
> documented code. Take the following example:
>
You do know that OPTIONAL type annotations are
On Mon, Apr 8, 2019, 6:26 AM Steven D'Aprano wrote:
> Given all the points already raised, I think that an explicit SortedList
> might be more appropriate.
>
This one looks cool. I've read about it, but haven't used it:
http://www.grantjenks.com/docs/sortedcontainers/
I think a "sort hint"
On Mon, Apr 8, 2019, 5:46 AM Paul Moore wrote:
> Still not convinced this is safe enough to be worth it ;-)
>
I'm convinced it's NOT safe enough to be worth it.
On the other hand, a sortedlist subclass that maintained its invariant
(probably remembering a key) sounds cool. I think there are
On Mon, Apr 1, 2019 at 8:54 PM Steven D'Aprano wrote:
> I can think of at least one English suffix pair that clash: -ify, -fy.
> How about other languages? How comfortable are you to say that nobody
> doing text processing in German or Hindi will need to deal with clashing
> affixes?
>
Here
On Mon, Apr 1, 2019 at 10:11 PM Dan Sommers <
2qdxy4rzwzuui...@potatochowder.com> wrote:
> So I've seen someone (likely David Mertz?) ask for something
> like filename.strip_suffix(('.png', '.jpg')). What is the
> context? Is it strictly a filename processing program? Do
>
On Sun, Mar 31, 2019, 9:35 PM Steven D'Aprano wrote:
> > That's simply not true, and I think it's clearly illustrated by the
> example I gave a few times. Not just conceivably, but FREQUENTLY I write
> code to accomplish the effect of the suggested:
> >
> > basename = fname.rstrip(('.jpg',
On Sun, Mar 31, 2019, 8:11 PM Steven D'Aprano wrote:
> Regarding later proposals to add support for multiple affixes, to
> recursively delete the affix repeatedly, and to take an additional
> argument to limit how many affixes will be removed: YAGNI.
>
That's simply not true, and I think it's
I just found this nice summary. It's not complete, but it looks well
written. https://tomassetti.me/parsing-in-python/
On Sun, Mar 31, 2019, 3:09 PM David Mertz wrote:
> There are about a half dozen widely used parsing libraries for Python.
> Each one of them takes a dramatically dif
On Sun, Mar 31, 2019 at 12:09 PM MRAB wrote:
> > That said, I really like Brandt's ideas of expanding the signature of
> > .lstrip/.rstrip instead.
> >
> > mystring.rstrip("abcd") # remove any of these single character suffixes
>
> It removes _all_ of the single character suffixes.
>
> >
The only reason I would support the idea would be to allow multiple
suffixes (or prefixes). Otherwise, it just does too little for a new
method. But adding that capability of startswith/endswith makes the cut off
something easy to get wrong and non-trivial to implement.
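The character-set semantics are exactly the trap: `rstrip` takes a set of characters, not a suffix, so the same call can look right on one filename and mangle another:

```python
# str.rstrip treats its argument as a SET of characters {'.', 'j', 'p', 'g'},
# not as a suffix, and keeps deleting trailing characters from that set:
print('spam.jpg'.rstrip('.jpg'))   # 'spam' -- happens to look right
print('jug.jpg'.rstrip('.jpg'))    # 'ju'   -- the 'g' of 'jug' goes too
```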
That said, I really like
On Sat, Mar 30, 2019, 8:42 AM Steven D'Aprano wrote:
> Most of us have had to cut a prefix or a suffix from a string, often a
> file extension. It's not as common as, say, stripping whitespace, but it
> happens often enough.
I do this all the time! I never really thought about wanting a method
Dropping the mailing list is another topic that often comes up, and is
always a terrible idea. Every suggester had a different platform in mind,
only consistent in all being vastly worse than email for this purpose
That said, if someone writes a FAQ about this mailing list, the first
answer can
All of this would be well served by a 3rd party library on PyPI. Strings
already have plenty of methods (probably too many). Having `stringtools`
would be nice to import a bunch of simple functions from.
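Such a `stringtools` function might look like the sketch below (the name and signature are illustrative, not an existing package):

```python
def strip_suffix(s, suffixes):
    """Remove at most one matching suffix, preferring the longest match."""
    for suffix in sorted(suffixes, key=len, reverse=True):
        if suffix and s.endswith(suffix):
            return s[:-len(suffix)]
    return s

print(strip_suffix('photo.jpeg', ('.jpg', '.jpeg')))  # photo
print(strip_suffix('notes.txt', ('.jpg', '.jpeg')))   # notes.txt
```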
On Mon, Mar 25, 2019 at 10:45 AM Alex Grigoryev wrote:
> strip_prefix and strip_suffix I
On Thu, Mar 21, 2019, 10:15 PM Steven D'Aprano wrote:
> What would be the most useful behaviour for dict "addition" in your
> opinion?
>
Probably what I would use most often was a "lossless" merging in which
duplicate keys resulted in the corresponding value becoming a set
containing all the
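One reading of that "lossless" merge, sketched as a standalone function (the name is hypothetical):

```python
def merge_lossless(*dicts):
    # Every key maps to the set of ALL values seen for it, so a
    # duplicate key never silently discards data.
    merged = {}
    for d in dicts:
        for key, value in d.items():
            merged.setdefault(key, set()).add(value)
    return merged

print(merge_lossless({'a': 1, 'b': 2}, {'b': 3}))
# {'a': {1}, 'b': {2, 3}}
```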
On Thu, Mar 21, 2019, 7:48 PM Steven D'Aprano wrote:
> A number of people including Antoine and Serhiy seem to have taken the
> position that merely adding dict.__add__ will make existing code using +
> harder to understand, as you will need to consider not just numeric
> addition and
I dislike the symbol '+' to mean "dictionary merging with value updates." I
have no objection to, and mildly support, adding '|' with this meaning.
It's not really possible to give "that one example" where + for merging
makes code less clear... In my eyes it would be EVERY such use. Every
example
There are few cases where I would approve of 'if x is True'. However, the
names used in the example suggest it could be one of those rare cases.
Settings of True/False/None (i.e. not set) seem like a reasonable pattern.
In fact, in code like that, merely "truthy" values are probably a bug that
It was a VERY long time ago when True and False were not singletons. I
don't think we should still try to write code based on rules that stopped
applying more than a decade ago.
On Mon, Mar 18, 2019, 5:42 PM Greg Ewing
wrote:
> Oleg Broytman wrote:
> >Three-way (tri state) checkbox. You
This is an interesting challenge you have. However, this list is for
proposing ideas for changes in the Python language itself, in particular
the CPython reference implementation.
Python-list or some discussion site dealing with machine learning or
natural language processing would be appropriate
y. This is
> low-level tooling, really.
>
> So at least now there is a clear separation of concerns (and dedicated
> issues management/roadmap, which is also quite convenient. Not to mention
> readability !).
>
> To cover your concern: decopatch depends on makefun, so both come at th
t, I'm not sure I ever want them independently in practice. I did
write the book _Functional Programming in Python_, so I'm not entirely
unfamiliar with function wrappers.
On Tue, Mar 12, 2019, 10:18 AM David Mertz wrote:
> The wrapt module I linked to (not functools.wraps) provides all the
>
The wrapt module I linked to (not functools.wraps) provides all the
capabilities you mention since 2013. It allows mixed use of decorators as
decorator factories. It has a flat style.
There are some minor API difference between your libraries and wrapt, but
the concept is very similar. Since yours
What advantage do you perceive decopatch to have over wrapt? (
https://github.com/GrahamDumpleton/wrapt)
On Tue, Mar 12, 2019, 5:37 AM Sylvain MARIE via Python-ideas <
python-ideas@python.org> wrote:
> Dear python enthusiasts,
>
> Writing python decorators is indeed quite a tedious process, in
Maybe it's just the C++ IO piping that makes me like it, but these actually
seem intuitive to me, whereas `+` or even `|` leaves me queasy.
On Sat, Mar 9, 2019 at 7:40 PM Ian Foote wrote:
> > It might also be worth considering YAML's own dict merge operator, the
> > "<<" operator, as in
I'm really old ... I remember thinking how clever attrgetter() was when it
was added in Python 2.4.
On Fri, Mar 8, 2019, 7:51 PM David Mertz wrote:
> You could use the time machine:
> https://docs.python.org/3/library/operator.html
>
> On Fri, Mar 8, 2019, 11:57 AM Samuel Li wrote
You could use the time machine:
https://docs.python.org/3/library/operator.html
On Fri, Mar 8, 2019, 11:57 AM Samuel Li wrote:
> Don't know if this has been suggested before. Instead of writing something
> like
>
> >>> map(lambda x: x.upper(), ['a', 'b', 'c'])
>
> I suggest this syntax:
> >>>
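The "time machine" point is that the stdlib already covers this: the unbound method, or operator.methodcaller, both avoid the lambda:

```python
from operator import methodcaller

# For methods defined on the class, the unbound method works directly:
print(list(map(str.upper, ['a', 'b', 'c'])))              # ['A', 'B', 'C']

# methodcaller also handles arguments and duck-typed receivers:
print(list(map(methodcaller('upper'), ['a', 'b', 'c'])))  # ['A', 'B', 'C']
```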
On Mon, Mar 4, 2019, 11:45 AM Steven D'Aprano wrote:
> > Like other folks in the thread, I also want to merge dicts three times
> per
> > year.
>
> I'm impressed that you have counted it with that level of accuracy. Is it
> on the same three days each year, or do they move about? *wink*
>
To be
On Mon, Mar 4, 2019, 8:30 AM Serhiy Storchaka wrote:
> But is merging two dicts a common enough problem that needs introducing
> an operator to solve it? I need to merge dicts maybe not more than one
> or two times by year, and I am fine with using the update() method.
> Perhaps {**d1, **d2} can
"foo" + "bar" != "bar" + "foo"
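The same order-sensitivity already exists in the {**d1, **d2} spelling, where the right operand wins on overlapping keys:

```python
d1 = {'a': 1, 'b': 2}
d2 = {'b': 20, 'c': 30}

print({**d1, **d2})  # {'a': 1, 'b': 20, 'c': 30} -- d2 wins on 'b'
print({**d2, **d1})  # {'b': 2, 'c': 30, 'a': 1}  -- d1 wins on 'b'
```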
On Wed, Feb 27, 2019, 12:35 PM George Castillo wrote:
> The key conundrum that needs to be solved is what to do for `d1 + d2` when
>> there are overlapping keys. I propose to make d2 win in this case, which is
>> what happens in `d1.update(d2)` anyways. If you
I've sort of lost the threads about who recommends what. I do not think
that PEP8 needs to include a sentence like "Better tooling would be really
cool" ... notwithstanding that I think that is a true sentence.
On Mon, Feb 25, 2019 at 2:50 PM Jonathan Fine wrote:
> Hi David
>
> Thank you for
I find the main pain point of line width limits to be string literals that
call out to some *other* code-like thing. Long URLs, user messages, and
long SQL are three common examples.
It's actually less an issue with SQL since that is itself more readable
across multiple lines, plus SQL itself
As a human, and one who reads and writes code even, I know that MY ability
to understand the meaning of a line of code starts to plummet when it
reaches about 65-70 characters in length.
Yes, of course there are some "it depends" caveats that make some lines
easier and some harder. But an 80
Wow... Sorry about all my typos from autocorrect. I should not write a
longish reply on my tablet, or should proofread and correct.
On Tue, Feb 19, 2019, 9:30 AM David Mertz wrote:
> On Tue, Feb 19, 2019, 9:07 AM simon.bordeyne wrote:
>
>> I find that 100 to 120 characters is ideal as far as li
imple setting to set the number of characters
> before a linting warning occurs would be acceptable.
>
Every linter I know about is customizable this way.
>
>
> Envoyé depuis mon smartphone Samsung Galaxy.
>
> Message d'origine
> De : David Mertz
>
19, 2019, 1:11 AM Anders Hovmöller
>
> > On 19 Feb 2019, at 05:48, David Mertz wrote:
> >
> > You either have much better eyes to read tiny fonts than I do, or maybe
> a much larger monitor (it's hard for me to fit a 30" monitor in my laptop
> bag).
>
You either have much better eyes to read tiny fonts than I do, or maybe a
much larger monitor (it's hard for me to fit a 30" monitor in my laptop
bag).
But that's not even the real issue. If the characters were in giant letters
on billboards, I still would never want more than 80 of them on a
On Fri, Feb 8, 2019 at 3:17 PM Christopher Barker
wrote:
> >vec_seq = Vector(seq)
> >(vec_seq * 2).name.upper()
> ># ... bunch more stuff
> >seq = vec_seq.unwrap()
>
> what type would .unwrap() return?
>
The idea—and the current toy implementation/alpha—has .unwrap return
On Thu, Feb 7, 2019 at 6:48 PM Steven D'Aprano wrote:
> I'm sorry, I did not see your comment that you thought new syntax was a
> bad idea. If I had, I would have responded directly to that.
>
Well... I don't think it's the worst idea ever. But in general adding more
operators is something I
Many apologies if people got one or more encrypted versions of this.
On 2/7/19 12:13 AM, Steven D'Aprano wrote:
It wasn't a concrete proposal, just food for thought. Unfortunately the
thinking seems to have missed the point of the Julia syntax and run off
with the idea of a wrapper class.
I did
On Mon, Feb 4, 2019 at 7:14 AM Kirill Balunov
wrote:
> len(v) # -> 12
>
> v[len] # ->
>
>
> In this case you can apply any function, even custom_linked_list from
> my_inhouse_module.py.
>
I think I really like this idea. Maybe as an extra spelling but still
allow .apply() to do the same
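A sketch of how a Vector could support both spellings (toy code, not the actual implementation from the thread):

```python
class Vector:
    def __init__(self, it):
        self._it = list(it)

    def __getitem__(self, key):
        if callable(key):                 # v[len] applies the function
            return Vector(key(x) for x in self._it)
        return self._it[key]              # normal indexing still works

    apply = __getitem__                   # v.apply(len) as alternate spelling

v = Vector(['abc', 'de', 'f'])
print(v[len]._it)        # [3, 2, 1]
print(v.apply(len)._it)  # [3, 2, 1]
print(v[0])              # 'abc'
```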
On Mon, Feb 4, 2019, 12:47 AM Christopher Barker
> I've lost track if who is advocating what, but:
>
Well, I made a toy implementation of a Vector class. I'm not sure what that
means I advocate other than the existence of a module on GitHub.
FWIW, I called the repo 'stringpy' as a start, so
How would you spell these with funcoperators?
v.replace("a","b").upper().count("B")
vec1.replace("PLACEHOLDER", vec2)
concat = vec1 + vec2
On Sun, Feb 3, 2019, 6:40 PM Robert Vanden Eynde
>
> On Sat, 2 Feb 2019, 21:46 Brendan Barnwell
> Yeah, it's called pip install funcoperators :
>
>>
On Sun, Feb 3, 2019, 6:36 PM Greg Ewing
> But they only cover the special case of a function that takes
> elements from just one input vector. What about one that takes
> corresponding elements from two or more vectors?
>
What syntax would you like? Not necessarily new syntax per se, but what
>
> >>> len(v) # Number of elements in the Vector `v`
>
Agreed, this should definitely be the behavior. So how do we get a vector
of lengths of each element?
> >>> # Compute the length of each element of the Vector `v`
> >>> v.apply(len)
> >>> v @ len
>
Also possible is:
v.len()
We
On Sun, Feb 3, 2019 at 3:16 PM Ronald Oussoren
wrote:
> The @ operator is meant for matrix multiplication (see PEP 465) and is
> already used for that in NumPy. IMHO just that is a good enough reason for
> not using @ as an elementwise application operator (ignoring if having an
> such an
On Sun, Feb 3, 2019 at 1:32 PM Ryan Gonzalez wrote:
> - I seriously doubt he's going to come back.
>
The election for Steering Council is underway, and Guido is one of the
candidates. He may or may not be one of 5 members of the SC, but that's up
to the voters among the core committers. But if
can use in map. But I do
> agree it’s not easy to have functions with parameters. That’s why I used
> functools.partial
>
I really did not understand how that was meant to work. But it was a whole
lot of lines to accomplish something very small either way.
> On Sun 3 Feb 2019 at 19:2
On Sun, Feb 3, 2019 at 3:54 AM Adrien Ricocotam wrote:
> I think all the issues you have right now would go of using another
> operation. I proposed the @ notation that is clear and different from
> everything else,
>
plus the operator is called "matmul" so it completely makes sense. The
>
Guido left for a variety of personal reasons, only some of which I know.
Soon we will have a wonderful new Steering Council who will act as a
collective BDFL (except each for finite terms, and not dictator since it's
a council... but benevolent :-)).
On Sun, Feb 3, 2019, 12:33 PM James Lu Are
>
> I think it should follow the pre-existing behaviour of list, set, tuple,
> etc.
>
> >>> Vector("hello")
>
>
I try to keep the underlying datatype of the wrapped collection as much
as possible. Casting a string to a list changes that.
>>> Vector(d)
>>> Vector(tuple(d))
>>> Vector(set(d))
On Sat, Feb 2, 2019 at 10:00 PM MRAB wrote:
> Perhaps a reserved attribute that lets you refer to the vector itself
> instead of its members, e.g. '.self'?
>
> >>> len(v)
>
> >>> len(v.self)
> 12
>
I like that! But I'm not sure if '.self' is misleading. I use an attribute
called '._it'
> list(vi)
['Jun', 'Jul', 'Aug', 'Sep', 'Oct', 'Nov', 'Dec']
>>> vi
>
>>> list(vi)
[]
On Sat, Feb 2, 2019 at 9:03 PM David Mertz wrote:
> Slightly more on my initial behavior:
>
> >>> Vector({1:2,3:4})
> TypeError: Ambiguity vectorizing a map, perhaps
a plain string already
does (largely just the same thing slower).
On Sat, Feb 2, 2019 at 8:54 PM David Mertz wrote:
> Here is a very toy proof-of-concept:
>
> >>> from vector import Vector
> >>> l = "Jan Feb Mar Apr May Jun Jul Aug Sep Oct Nov Dec".split
Here is a very toy proof-of-concept:
>>> from vector import Vector
>>> l = "Jan Feb Mar Apr May Jun Jul Aug Sep Oct Nov Dec".split()
>>> v = Vector(l)
>>> v
>>> v.strip().lower().replace('a','X')
>>> vt = Vector(tuple(l))
>>> vt
>>> vt.lower().replace('o','X')
My few lines are at
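The proof-of-concept above can be reconstructed in a few lines (this is a sketch of the idea, not the actual repo code):

```python
class Vector:
    """Toy vector: broadcast method calls over a wrapped collection."""
    def __init__(self, it):
        self._it = it

    def __getattr__(self, name):
        # Called only for attributes not found normally, i.e. the
        # methods we want to broadcast elementwise.
        def broadcast(*args, **kwargs):
            results = (getattr(x, name)(*args, **kwargs) for x in self._it)
            return Vector(type(self._it)(results))  # keep underlying type
        return broadcast

    def __repr__(self):
        return f"<Vector of {self._it!r}>"

v = Vector("Jan Feb Mar".split())
print(v.lower().replace('a', 'X')._it)   # ['jXn', 'feb', 'mXr']
```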
My double dot was a typo on my tablet, not borrowing Julia syntax, in this
case.
On Sat, Feb 2, 2019, 6:43 PM David Mertz wrote:
> On Sat, Feb 2, 2019, 6:23 PM Christopher Barker wrote:
>
>> a_list_of_strings.strip().lower().title()
>>
>> is a lot nicer than:
>>
>> [s.
On Sat, Feb 2, 2019, 6:23 PM Christopher Barker
> a_list_of_strings.strip().lower().title()
>
> is a lot nicer than:
>
> [s.title() for s in (s.lower() for s in [s.strip(s) for s in
> a_list_of_strings])]
>
> or
>
> list(map(str.title, (map(str.lower, (map(str.strip, a_list_of_strings
> #
Beyond possibly saving 3-5 characters, I continue not to see anything
different from map in this discussion.
list(vector) applies list to the vector itself.
> list.(vector) applies list to each component of vector.
>
In Python:
list(seq) applies list to the sequence itself
map(list, seq)
I still haven't seen any examples that aren't already spelled 'map(fun, it)'
On Sat, Feb 2, 2019, 3:17 PM Jeff Allen wrote:
> On 02/02/2019 18:44, MRAB wrote:
>
> On 2019-02-02 17:31, Adrien Ricocotam wrote:
> > I personally would the first option to be the case. But then vectors
> shouldn't be list-like
On Sat, Feb 2, 2019, 8:15 AM Oleg Broytman wrote:
> For the question "Does Python REPL need more batteries?" is your
> answer "No, just point people to IDLE"?
>If it is - well, I disagree. I implemented a lot of enhancements for
> REPL myself, and I don't like and avoid GUI programs
>
IPython and
On Fri, Feb 1, 2019, 6:16 PM Adrien Ricocotam wrote:
> A thing I thought about, but am not satisfied with, is using the new
> matrix-multiplication operator:
>
> my_string_vector @ str.lower
>
> def compute_grad(a_student):
> return "you bad"
> my_student_vector @ compute_grad
>
This is
t response there is extremely useful.
I do not want python-ideas to resemble those. It is simply not the
appropriate kind of discussion.
On Fri, Feb 1, 2019, 7:30 PM Abe Dillon wrote:
> [David Mertz]
>
>> I have absolutely no interest in any system that arranges comments in
>> anything but related threa
I have absolutely no interest in any system that arranges comments in
anything but related thread and chronological order. I DO NOT want any
rating or evaluation of comments of any kind other than my own evaluation
based on reading them. Well, also in reading the informed opinions of other
If any non-email system is adopted, it will exclude me, and probably many
other contributors to this list. A mailing list is an appropriate and
useful format. "Discussion systems" are not.
On Fri, Feb 1, 2019 at 1:36 PM Abe Dillon wrote:
> I've pitched this before but gotten little feedback
On Fri, Feb 1, 2019 at 12:43 PM Adrien Ricocotam
wrote:
> What I think is bad using mailing list it's the absence of votes. I'd
> often like to just hit a "+1" button for some mails just to say to the
> author I'm with them and I think their ideas are great.
>
I feel like the strongest virtue
On Thu, Jan 31, 2019 at 12:52 PM Chris Barker via Python-ideas <
python-ideas@python.org> wrote:
> I know that when I'm used to working with numpy and then need to do some
> string processing or some such, I find myself missing this "vectorization"
> -- if I want to do the same operation on a
On Wed, Jan 30, 2019, 4:23 PM Abe Dillon wrote:
> Consider that math.pi and math.e are constants that are not all caps,
> have you ever been tempted to re-bind those variables?
>
I generally use 'from math import pi as PI' because the lower case is
confusing and misnamed.
I really don't get the "two different signatures" concern. The two
functions do different things, why would we expect them to automatically
share a signature.
There are a zillion different open() functions or methods in the standard
library, and far more in third party software. They each have
.
On Tue, Jan 29, 2019, 10:46 PM David Mertz wrote:
> Of course not! The request was for something that worked on Python
> *collections*. If the OP wanted something that worked on iterables in
> general, we'd need a different function with different behavior.
>
> Of course, it also doesn't work on
Of course not! The request was for something that worked on Python
*collections*. If the OP wanted something that worked on iterables in
general, we'd need a different function with different behavior.
Of course, it also doesn't work on dictionaries. I don't really have any
ideas what the desired
ficient implementation either.
> Original Message ----
> On Jan 29, 2019, 18:44, David Mertz < me...@gnosis.cx> wrote:
>
>
> stringify = lambda it: type(it)(map(str, it))
>
> Done! Does that really need to be in the STDLIB?
>
> On Tue, Jan 29, 2019, 7:11 PM
stringify = lambda it: type(it)(map(str, it))
Done! Does that really need to be in the STDLIB?
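For reference, the one-liner in action, including the dict case raised elsewhere in the thread:

```python
stringify = lambda it: type(it)(map(str, it))

print(stringify([1, 2.5, None]))   # ['1', '2.5', 'None']
print(stringify((1, 2)))           # ('1', '2')

# It does NOT work on dicts: iterating a dict yields keys, and
# dict() then expects (key, value) pairs, not bare strings.
try:
    stringify({'a': 1})
except ValueError as exc:
    print('dict fails:', exc)
```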
On Tue, Jan 29, 2019, 7:11 PM Alex Shafer via Python-ideas <
python-ideas@python.org> wrote:
> 1) I'm in favor of adding a stringify method to all collections
>
> 2) strings are special and worthy of a
On Tue, Jan 29, 2019, 12:22 AM Brendan Barnwell
> What would you expect to happen with this line:
> >
> > ['foo', b'foo', 37, re.compile('foo')].join('_')
>
> That problem already exists with str.join though. It's just
> currently spelled this way:
>
> ','.join(['foo', b'foo', 37,
On Mon, Jan 28, 2019 at 8:44 PM Jamesie Pic wrote:
> ['cancel', name].join('_')
>
This is a frequent suggestion. It is also one that makes no sense
whatsoever if you think about Python's semantics. What would you expect to
happen with this line:
['foo', b'foo', 37,
millions of there's objects, memory probably isn't that
important. But I guess you might... and namedtuple did sell itself as "less
memory than small dictionaries"
On Sat, Jan 26, 2019, 1:26 PM Christopher Barker wrote:
> On Sat, Jan 26, 2019 at 10:13 AM David Mertz wrote:
>
>> In
On Sat, Jan 26, 2019, 1:21 PM Christopher Barker
> As I understand it, functions return either a single value, or a tuple of
> values -- there is nothing special about how assignment is happening when a
> function is called.
>
No. It's simpler than that! Functions return a single value, period.
Indeed! I promise to use dataclass next time I find myself about to use
namedtuple. :-)
I'm pretty sure that virtually all my uses will allow that.
On Sat, Jan 26, 2019, 1:09 PM Eric V. Smith
>
> On 1/26/2019 12:30 PM, David Mertz wrote:
> > On Sat, Jan 26, 2019 at 10:31 AM Ste
On Sat, Jan 26, 2019 at 10:31 AM Steven D'Aprano
wrote:
> In what way is it worse, given that returning a namedtuple with named
> fields is backwards compatible with returning a regular tuple? We can
> have our cake and eat it too.
> Unless the caller does a type-check, there is no difference.
I was going to write exactly the same idea Steven did.
Right now you can simply design APIs to return dictionaries or, maybe
better, namedtuples. Namedtuples are really nice since you can define new
attributes when you upgrade an API without breaking any old code that
used the prior
You could write a context manager that used an arbitrary callback passed in
to handle exceptions (including re-raising as needed). This doesn't require
new syntax, just writing a custom CM.
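A minimal sketch of such a custom context manager (the names here are made up for illustration):

```python
from contextlib import contextmanager

@contextmanager
def handling(callback):
    """Delegate exception policy to a callback: return True to
    suppress the exception, False/None to let it propagate."""
    try:
        yield
    except Exception as exc:
        if not callback(exc):
            raise

seen = []

def log_and_suppress_keyerrors(exc):
    seen.append(type(exc).__name__)
    return isinstance(exc, KeyError)

with handling(log_and_suppress_keyerrors):
    {}['missing']                # raises KeyError, which is suppressed

print(seen)  # ['KeyError']
```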
On Tue, Jan 22, 2019, 4:20 PM Barry Scott
>
> On 22 Jan 2019, at 20:31, Michael Selik wrote:
>
> On Tue,
>
> One possible argument for making PASS the default, even if that means
>> implementation-dependent behaviour with NANs, is that in the absense of a
>> clear preference for FAIL or RETURN, at least PASS is backwards compatible.
>>
>> You might shoot yourself in the foot, but at least you know
On Tue, Jan 8, 2019 at 11:57 PM Tim Peters wrote:
> I'd like to see internal consistency across the central-tendency
> statistics in the presence of NaNs. What happens now:
>
I think consistent NaN-poisoning would be excellent behavior. It will
always make sense for median (and its variants).
On 2019-01-07 16:34, Steven D'Aprano wrote:
> > On Mon, Jan 07, 2019 at 10:05:19AM -0500, David Mertz wrote:
> [snip]
> >> It's not hard to manually check for NaNs and
> >> generate those in your own code.
> >
> > That is correct, but by that logic, we don'
On Mon, Jan 7, 2019 at 12:19 PM David Mertz wrote:
> Under a partial ordering, a median may not be unique. Even under a total
> ordering this is true if some subset of elements form an equivalence
> class. But under partial ordering, the non-uniqueness can get much weirder.
>
On Mon, Jan 7, 2019, 11:38 AM Steven D'Aprano wrote:
> It's not a bug in median(), because median requires the data implement a
> total order. Although that isn't explicitly documented, it is common sense:
> if the data cannot be sorted into smallest-to-largest order, how can you
> decide which value is in
On Mon, Jan 7, 2019 at 6:50 AM Steven D'Aprano wrote:
> > I'll provide a suggested batch on the bug. It will simply be a wholly
> > different implementation of median and friends.
>
> I ask for a documentation patch and you start talking about a whole new
> implementation. Huh.
> A new
On Mon, Jan 7, 2019 at 1:27 AM Steven D'Aprano wrote:
> > In [4]: statistics.median([9, 9, 9, nan, 1, 2, 3, 4, 5])
> > Out[4]: 1
> > In [5]: statistics.median([9, 9, 9, nan, 1, 2, 3, 4])
> > Out[5]: nan
>
> The second is possibly correct if one thinks that the median of a list
> containing NAN
This statement is certainly false:
>
> * If two items are equal, and pairwise inequality is deterministic,
> exchanging the items does not affect the sorting of other items in the list.
>
Just to demonstrate this obviousness:
>>> sorted([9, 9, 9, b, 1, 2, 3, a])
[1, 2, 3, A, B, 9, 9, 9]
>>>
ist.
On Sun, Jan 6, 2019 at 11:09 PM Tim Peters wrote:
> [David Mertz ]
> > Thanks Tim for clarifying. Is it even the case that sorts are STABLE in
> > the face of non-total orderings under __lt__? A couple quick examples
> > don't refute that, but what I tried was not
[... apologies if this is dup, got a bounce ...]
> [David Mertz ]
>> I have to say though that the existing behavior of
`statistics.median[_low|_high|]`
>> is SURPRISING if not outright wrong. It is the behavior in existing
Python,
>> but it is very strange.
>>
>
I have to say though that the existing behavior of
`statistics.median[_low|_high|]` is SURPRISING if not outright wrong. It
is the behavior in existing Python, but it is very strange.
The implementation simply does whatever `sorted()` does, which is an
implementation detail. In particular,
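The implementation-detail point can be seen directly: with a NaN present, sorted() returns a list that is not actually ordered, so the middle element median() picks out is essentially arbitrary:

```python
from statistics import median

nan = float('nan')
data = [9, 9, 9, nan, 1, 2, 3, 4, 5]

s = sorted(data)
# Every comparison against nan is False, so sorting silently produces
# an out-of-order result; an "is it sorted?" check fails:
print(all(s[i] <= s[i + 1] for i in range(len(s) - 1)))  # False

# median() just indexes into that list, so its answer depends on
# where the nan happened to land after the "sort".
print(median(data))
```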
Would these policies be named as strings or with an enum? Following Pandas,
we'd probably support both. I won't bikeshed the names, but they seem to
cover desired behaviors.
On Sun, Jan 6, 2019, 7:28 PM Steven D'Aprano wrote:
> Bug #33084 reports that the statistics library calculates median and
> other
Like everyone other than Abe in this thread, I find judicious use of
CONSTANTS to be highly readable and useful.
Yes, there is a little wiggle room about just how constant a constant has
to be since Python doesn't have a straightforward way to create real
constants. Very rarely I might change a