Re: [Python-Dev] How is the GitHub workflow working for people?

2018-02-22 Thread Gregory P. Smith
On Tue, Feb 20, 2018 at 6:50 PM Brett Cannon  wrote:

> It's been a year and 10 days since we moved to GitHub, so I figured now is
> as good a time as any to ask people if they are generally happy with the
> workflow and if there is a particular sticking point to please bring it up
> on the core-workflow mailing list so we can potentially address it.
>

+1 happy!  Especially with the amazing automation piled on top.

It makes it sooo much easier to deal with changes coming from people than
anything involving manual patch files and clients, even within GitHub's
quite limited concept of a code review tool (from a Googler's perspective).

I do feel like we need more CI resources during sprints.  But we always
need more of everything during sprints, so that is nothing new and not
related to GitHub itself.

The move to our GitHub workflow is a win for all Python users in the world.

-gps


Re: [Python-Dev] How is the GitHub workflow working for people?

2018-02-22 Thread Gregory P. Smith
How often do we find ourselves grumbling over .py file style in PRs on
github?  If the answer to that isn't very often, the rest of my response
below seems moot. :)

On Wed, Feb 21, 2018 at 7:30 PM Guido van Rossum  wrote:

> Where I work we have some teams using flake8 and some teams that use
> pylint, and while pylint is more thorough, it is also slower and pickier,
> and the general sense is to strongly prefer flake8.
>
> I honestly expect that running either with close-to-default flags on
> stdlib code would be a nightmare, and I wouldn't want *any* directives for
> either one to appear in stdlib code, ever.
>
> In some ideal future all code would just be reformatted before it's
> checked in -- we're very far from that, and I used to be horrified by the
> very idea, but in the Go world this is pretty much standard practice, and
> the people at work who are using it are loving it. So I'm trying to have an
> open mind about this. But there simply isn't a tool that does a good enough
> job of this.
>

I don't know what your definition of "a good enough job of this" for an
auto-formatter is (we'd need to settle on that definition in a PEP)...
but there is one for my definition: yapf.

Many teams require it pre-check-in for their code at Google these days. We
also use it for auto-reformatting of surrounding lines of code during all
sorts of mass-refactorings.  IIRC Lukas said Instagram/Facebook adopted it
as standard practice as well.

Some teams go all the way to enforce a "yapf must suggest no changes to the
edited areas of .py files" pre-submit error on their projects.
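
For concreteness, a rough, hypothetical sketch of what such a gate could look
like as a stand-alone script (not Google's actual tooling; it assumes yapf is
installed and that the change is staged in git):

    #!/usr/bin/env python3
    # Fail the check if yapf would reformat any staged .py file.
    import subprocess
    import sys

    staged = subprocess.run(
        ["git", "diff", "--cached", "--name-only", "--diff-filter=ACM"],
        capture_output=True, text=True, check=True,
    ).stdout.split()

    needs_formatting = []
    for name in staged:
        if not name.endswith(".py"):
            continue
        # "yapf --diff" prints a unified diff when reformatting is needed.
        diff = subprocess.run(["yapf", "--diff", name],
                              capture_output=True, text=True)
        if diff.stdout.strip():
            needs_formatting.append(name)

    if needs_formatting:
        sys.exit("yapf would reformat: " + ", ".join(needs_formatting))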

As Eric (and, I believe, Lukas in the past) has mentioned: auto-formatters
don't have to produce the mythical(*) "perfect" style an individual might
choose - they just need to be good enough to keep people from arguing about
style with one another.  That's the productivity and consistency win.

> What we need now is not more opinions on which formatter or linter is best.
> We need someone to actually do some work and estimate how much code would
> be changed if we ran e.g. tabnanny.py (or something more advanced!) over
> the entire stdlib, how much code would break (even the most conservative
> formatter sometimes breaks code that wasn't expecting to be reformatted --
> e.g. we used to have tests with significant trailing whitespace), and how
> often the result would be just too ugly to look at. If you're not willing
> to work on that, please don't respond to this thread.
>

Indeed.  There are at *least* four Python style and gotcha checkers in wide
use today.  Prematurely picking one up front, rather than first coming up
with criteria for the limited set of things we would want and could
practically use on the stdlib + test suite, seems wrong.

I've added "run yapf on all of CPython's .py files" to my list of things to
explore (or encourage someone else to) some day...
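
If someone wants a head start on that experiment, a rough sketch of my own
(it assumes yapf is installed and that it is run from a CPython checkout; it
only counts files whose formatting would change, it doesn't rewrite anything):

    import pathlib
    import subprocess

    changed = total = 0
    for path in pathlib.Path("Lib").rglob("*.py"):
        total += 1
        # "yapf --diff" prints a diff only if the file would be reformatted.
        diff = subprocess.run(["yapf", "--diff", str(path)],
                              capture_output=True, text=True)
        if diff.stdout.strip():
            changed += 1

    print(f"yapf would reformat {changed} of {total} files under Lib/")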

-gps

(*) caveat: Given that Guido has obtained History status and is BDFL, he can
define the mythical perfect style.


[Python-Dev] Dataclasses, frozen and __post_init__

2018-02-22 Thread Jim J. Jewett
On Mon, Feb 19, 2018 at 5:06 PM, Chris Barker - NOAA Federal <
chris.barker at noaa.gov> wrote:

> If I have this right, on the discussion about frozen and hash, a use
> case was brought up for taking a few steps to create an instance (and
> thus wanting it not frozen) and then wanting it hashable.

> Which pointed to the idea of a “freeze this from now on” method.

> This seems another use case — maybe it would be helpful to be able to
> freeze an instance after creation for multiple use-cases?

Yes, it would be helpful.  But in practice, I've just limited the hash
function to the attributes that are available before I need to stick the
object in a dict, and that has always been more than sufficient.

-jJ


Re: [Python-Dev] Should the dataclass frozen property apply to subclasses?

2018-02-22 Thread Nick Coghlan
On 22 February 2018 at 20:55, Eric V. Smith  wrote:
> On 2/22/2018 1:56 AM, Raymond Hettinger wrote:
>>
>> When working on the docs for dataclasses, something unexpected came up.
>> If a dataclass is specified to be frozen, that characteristic is inherited
>> by subclasses which prevents them from assigning additional attributes:
>>
>>  >>> @dataclass(frozen=True)
>>  class D:
>>      x: int = 10
>>
>>  >>> class S(D):
>>      pass
>>
>>  >>> s = S()
>>  >>> s.cached = True
>>  Traceback (most recent call last):
>>    File "", line 1, in 
>>      s.cached = True
>>    File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/dataclasses.py", line 448, in _frozen_setattr
>>      raise FrozenInstanceError(f'cannot assign to field {name!r}')
>>  dataclasses.FrozenInstanceError: cannot assign to field 'cached'
>
>
> This is because "frozen-ness" is implemented by adding __setattr__ and
> __delattr__ methods in D, which get inherited by S.
>
>> Other immutable classes in Python don't behave the same way:
>>
>>
>>  >>> class T(tuple):
>>      pass
>>
>>  >>> t = T([10, 20, 30])
>>  >>> t.cached = True
>>
>>  >>> class F(frozenset):
>>      pass
>>
>>  >>> f = F([10, 20, 30])
>>  >>> f.cached = True
>>
>>  >>> class B(bytes):
>>      pass
>>
>>  >>> b = B()
>>  >>> b.cached = True
>
>
> The only way I can think of emulating this is checking in __setattr__ to see
> if the field name is a field of the frozen class, and only raising an error
> in that case.

If you were going to do that then it would likely make more sense to
convert the frozen fields to data descriptors, so __setattr__ only
gets called for attempts to add new attributes.
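
For illustration, here is one way to read that suggestion as a rough,
self-contained sketch (written against a plain class rather than the
dataclass machinery; FrozenField is a made-up name, not anything that
exists in dataclasses):

    from dataclasses import FrozenInstanceError

    class FrozenField:
        # Data descriptor: allows one initial assignment, then refuses
        # changes, so the class needs no custom __setattr__ for its fields.
        def __set_name__(self, owner, name):
            self.name = name
        def __get__(self, instance, owner=None):
            if instance is None:
                return self
            try:
                return instance.__dict__[self.name]
            except KeyError:
                raise AttributeError(self.name) from None
        def __set__(self, instance, value):
            if self.name in instance.__dict__:
                raise FrozenInstanceError(f'cannot assign to field {self.name!r}')
            instance.__dict__[self.name] = value
        def __delete__(self, instance):
            raise FrozenInstanceError(f'cannot delete field {self.name!r}')

    class D:
        x = FrozenField()
        def __init__(self, x=10):
            self.x = x          # the first assignment is allowed

    class S(D):
        pass

    s = S()
    s.cached = True             # no descriptor involved, so this is allowed
    try:
        s.x = 99                # a frozen field, so this is refused
    except FrozenInstanceError as e:
        print(e)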

Then for the `frozen=False` case, the decorator could force
__setattr__ and __delattr__ back to the default implementations from
object, rather than relying on the behaviour inherited from base
classes.

> A related issue is that dataclasses derived from frozen dataclasses are
> automatically "promoted" to being frozen.
>
> >>> @dataclass(frozen=True)
> ... class A:
> ...     i: int
> ...
> >>> @dataclass
> ... class B(A):
> ...     j: int
> ...
> >>> b = B(1, 2)
> >>> b.j = 3
> Traceback (most recent call last):
>   File "", line 1, in 
>   File "C:\home\eric\local\python\cpython\lib\dataclasses.py", line 452, in _frozen_setattr
>     raise FrozenInstanceError(f'cannot assign to field {name!r}')
> dataclasses.FrozenInstanceError: cannot assign to field 'j'
>
> Maybe it should be an error to declare B as non-frozen?

It would be nice to avoid that, as a mutable subclass of a frozen base
class could be a nice way to model hashable-but-mutable types:

>>> @dataclass(frozen=True) # This is the immutable/hashable bit
... class A:
...     i: int
...
>>> @dataclass # This adds the mutable-but-comparable parts
... class B(A):
...     j: int
...     __hash__ = A.__hash__


However, disallowing this case for now *would* be a reasonable way to
postpone making a decision until 3.8 - in the meantime, folks would
still be able to experiment by overriding __setattr__ and __delattr__
after the dataclass decorator sets them.

Cheers,
Nick.

-- 
Nick Coghlan   |   ncogh...@gmail.com   |   Brisbane, Australia


Re: [Python-Dev] The `for y in [x]` idiom in comprehensions

2018-02-22 Thread Stephen J. Turnbull
Barry Warsaw writes:

 > My questions are 1) will this become idiomatic enough to be able to
 > understand at a glance what is going on,

Is it similar enough to

def f(x=[0]):

which is sometimes seen as a way to produce a mutable default value
for function arguments, to be "idiomatic"?

 > rather than having to pause to reason about what that 1-element
 > list-like syntax actually means, and 2) will this encourage even
 > more complicated comprehensions that are less readable than just
 > expanding the code into a for-loop?

Of course it will encourage more complicated comprehensions, and we
know that complexity is less readable.  On the other hand, a for loop
with a temporary variable will take up at least 3 statements vs. a
one-statement comprehension.

I don't have an opinion about the equities there.  I myself will
likely use the [(y, f(y)) for x in xs for y in [costly(x)]] idiom very
occasionally, with emphasis on "very" (for almost all "costly"
functions I might use that's the Knuthian root of error).  But I don't
know how others feel about it.

Steve



Re: [Python-Dev] The `for y in [x]` idiom in comprehensions

2018-02-22 Thread Barry Warsaw
On Feb 22, 2018, at 11:04, Serhiy Storchaka  wrote:
> 
> Stephan Houben proposed an idiom which looks similar to a new hypothetical syntax:
> 
> result = [y + g(y) for x in range(10) for y in [f(x)]]
> 
> `for y in [expr]` in a comprehension means just assigning expr to y. I have 
> never seen this idiom before, but it can be a good replacement for a 
> hypothetical syntax for assignment in comprehensions. It changes the original 
> comprehension less than other approaches, just adding one more element to the 
> sequence of for-s and if-s. I think that after it is used more widely it will 
> become pretty idiomatic.

My questions are 1) will this become idiomatic enough to be able to understand 
at a glance what is going on, rather than having to pause to reason about what 
that 1-element list-like syntax actually means, and 2) will this encourage even 
more complicated comprehensions that are less readable than just expanding the 
code into a for-loop?

for-loops-are-not-evil-ly y’rs,
-Barry





Re: [Python-Dev] The `for y in [x]` idiom in comprehensions

2018-02-22 Thread Ethan Furman

On 02/22/2018 11:54 AM, Joao S. O. Bueno wrote:
> On 22 February 2018 at 16:04, Serhiy Storchaka wrote:

>> Stephan Houben proposed an idiom which looks similar to new hypothetic
>> syntax:
>>
>>  result = [y + g(y) for x in range(10) for y in [f(x)]]
>
> This thing has bitten me in the past -

Do you recall how?  That would be useful information.

--
~Ethan~


Re: [Python-Dev] The `for y in [x]` idiom in comprehensions

2018-02-22 Thread Joao S. O. Bueno
This thing has bitten me in the past -

At the time I put together the "stackfull" package -

It allows stuff like:

from stackfull import push, pop
...
 [push(f(x)) + g(pop()) for x in range(10)]


It is painfully simple in its workings: it creates a plain old list in
the frame's f_locals and uses that as a stack in all stackfull.*
operations.
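
For anyone who just wants to see the shape of it without installing
anything, a crude approximation of the push/pop interface (this is not the
real stackfull implementation, which keeps the stack in the caller's frame
locals; f and g are placeholders):

    _stack = []

    def push(value):
        _stack.append(value)
        return value

    def pop():
        return _stack.pop()

    f = lambda x: x * 2
    g = lambda y: y + 1
    print([push(f(x)) + g(pop()) for x in range(10)])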

Just posting because people involved in this thread might want to
experiment with that.
(it is on pypi)

   js
 -><-



On 22 February 2018 at 16:04, Serhiy Storchaka  wrote:
> Yet another discussion about reusing common subexpressions in comprehensions
> took place last week on the Python-ideas mailing list (see the topic "Temporary
> variables in comprehensions" [1]). The problem is that in a comprehension like
> `[f(x) + g(f(x)) for x in range(10)]` the subexpression `f(x)` is evaluated
> twice. In a normal loop you can introduce a temporary variable for `f(x)`. The
> OP wanted to add a special syntax for introducing temporary variables in
> comprehensions. This idea has already been discussed multiple times in the past.
>
> There are several ways of resolving this problem with existing syntax.
>
> 1. Inner generator expression:
>
> result = [y + g(y) for y in (f(x) for x in range(10))]
>
> 2. The same, but with extracting the inner generator expression as a
> variable:
>
> f_samples = (f(x) for x in range(10))
> result = [y+g(y) for y in f_samples]
>
> 3. Extracting the expression with repeated subexpressions as a function with
> local variables:
>
> def func(x):
>     y = f(x)
>     return y + g(y)
> result = [func(x) for x in range(10)]
>
> 4. Implementing the whole comprehension as a generator function:
>
> def gen():
>     for x in range(10):
>         y = f(x)
>         yield y + g(y)
> result = list(gen())
>
> 5. Using a normal loop instead of a comprehension:
>
> result = []
> for x in range(10):
>     y = f(x)
>     result.append(y + g(y))
>
> And maybe there are other ways.
>
> Stephan Houben proposed an idiom which looks similar to a new hypothetical
> syntax:
>
> result = [y + g(y) for x in range(10) for y in [f(x)]]
>
> `for y in [expr]` in a comprehension means just assigning expr to y. I have
> never seen this idiom before, but it can be a good replacement for a
> hypothetical syntax for assignment in comprehensions. It changes the original
> comprehension less than other approaches, just adding one more element to the
> sequence of for-s and if-s. I think that after it is used more widely it will
> become pretty idiomatic.
>
> I have created a patch that optimizes this idiom, making it as fast as a
> normal assignment. [2] Yury suggested asking Guido on the mailing list whether
> he agrees that this language pattern is worth optimizing/promoting.
>
> [1] https://mail.python.org/pipermail/python-ideas/2018-February/048971.html
> [2] https://bugs.python.org/issue32856
>


[Python-Dev] The `for y in [x]` idiom in comprehensions

2018-02-22 Thread Serhiy Storchaka
Yet another discussion about reusing common subexpressions in comprehensions 
took place last week on the Python-ideas mailing list (see the topic "Temporary 
variables in comprehensions" [1]). The problem is that in a comprehension 
like `[f(x) + g(f(x)) for x in range(10)]` the subexpression `f(x)` is 
evaluated twice. In a normal loop you can introduce a temporary variable 
for `f(x)`. The OP wanted to add a special syntax for introducing 
temporary variables in comprehensions. This idea has already been discussed 
multiple times in the past.
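
A tiny demonstration of the cost, with placeholder functions:

    calls = 0

    def f(x):
        global calls
        calls += 1
        return x * 2

    def g(y):
        return y + 1

    result = [f(x) + g(f(x)) for x in range(10)]
    print(calls)   # 20 -- f() ran twice for each of the 10 items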


There are several ways of resolving this problem with existing syntax.

1. Inner generator expression:

result = [y + g(y) for y in (f(x) for x in range(10))]

2. The same, but with extracting the inner generator expression as a 
variable:


f_samples = (f(x) for x in range(10))
result = [y+g(y) for y in f_samples]

3. Extracting the expression with repeated subexpressions as a function 
with local variables:


def func(x):
    y = f(x)
    return y + g(y)
result = [func(x) for x in range(10)]

4. Implementing the whole comprehension as a generator function:

def gen():
    for x in range(10):
        y = f(x)
        yield y + g(y)
result = list(gen())

5. Using a normal loop instead of a comprehension:

result = []
for x in range(10):
    y = f(x)
    result.append(y + g(y))

And maybe there are other ways.

Stephan Houben proposed an idiom which looks similar to a new hypothetical 
syntax:


result = [y + g(y) for x in range(10) for y in [f(x)]]

`for y in [expr]` in a comprehension means just assigning expr to y. I 
have never seen this idiom before, but it can be a good replacement for a 
hypothetical syntax for assignment in comprehensions. It changes the 
original comprehension less than other approaches, just adding one more 
element to the sequence of for-s and if-s. I think that after it is used 
more widely it will become pretty idiomatic.
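
For the curious, you can see what this currently compiles to: the interpreter 
builds a real one-element list and loops over it just to bind y (the exact 
bytecode differs between versions):

    import dis

    dis.dis(compile("[y + g(y) for x in range(10) for y in [f(x)]]",
                    "<example>", "eval"))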


I have created a patch that optimizes this idiom, making it as fast as a 
normal assignment. [2] Yury suggested asking Guido on the mailing list 
whether he agrees that this language pattern is worth optimizing/promoting.


[1] https://mail.python.org/pipermail/python-ideas/2018-February/048971.html
[2] https://bugs.python.org/issue32856
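
For anyone who wants to check the effect of such a patch locally, a hedged 
micro-benchmark sketch (f and g are placeholders; the numbers are only 
meaningful when comparing interpreters with and without the patch):

    import timeit

    setup = "f = lambda x: x * 2; g = lambda y: y + 1"
    idiom = "[y + g(y) for x in range(10) for y in [f(x)]]"
    genexp = "[y + g(y) for y in (f(x) for x in range(10))]"

    print("for y in [f(x)]:", timeit.timeit(idiom, setup, number=100_000))
    print("nested genexp:  ", timeit.timeit(genexp, setup, number=100_000))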



Re: [Python-Dev] How is the GitHub workflow working for people?

2018-02-22 Thread Eric Snow
On Thu, Feb 22, 2018 at 9:30 AM, Paul Moore  wrote:
> My experience on pip is that automated style review is helpful for
> avoiding debates over subjective details.

This is the allure of Go's official linting tools.  Nobody is happy
with *all* the style choices but there isn't any room to fuss about it
so people don't.  Without that overhead, folks are more apt to lint
their changes.  (At least, that was my experience after several years
working on projects written in Go.)  One nice thing is that it frees
you up to argue about other things. :)

> But it does result in a
> certain level of "tweak to satisfy the style checker" churn in PRs.
> That can be frustrating when CI takes a long time to run.

I had exactly that experience on one particularly large Go project (on
GitHub, with slow CI, driven by bots).  To make matters worse, the
project had a dozen people actively working on it, meaning a high
potential that your PR would not apply cleanly if you took too long to
merge it.  So, coupled with slow CI, linting failures were
particularly onerous.  We all got in the habit of locally running the
linter frequently.  IIRC, I even set up VIM to run the linter whenever
I saved.

FWIW, I'm fine with a bot that leaves a message (or a review) if there
are linting issues.  Regardless, we should be careful about adding any
extra overhead to our workflow, particularly since the move to GH has
been driven by the desire to reduce overhead.

-eric


Re: [Python-Dev] How is the GitHub workflow working for people?

2018-02-22 Thread Paul Moore
On 22 February 2018 at 16:08, Antoine Pitrou  wrote:
> On Thu, 22 Feb 2018 07:51:17 -0800
> Steve Dower  wrote:
>> It then becomes grunt work for reviewers, who also have to carefully balance 
>> encouraging new contributors against preventing the code base from getting 
>> worse.
>
> That's a fair point I hadn't considered.  OTOH the style issues I
> usually comment on as a reviewer aren't the kind that would be caught
> by an automated style check (I tend to ask for comments or docstrings,
> or be nitpicky about some variable or function name).  YMMV :-)
>
>> I’d rather have a review bot that can detect problems in PRs and comment on 
>> them. We can choose to merge anyway and it won’t keep being noisy, but it 
>> also saves committers from potentially telling someone their contribution 
>> isn’t welcome because of their camelCase.
>
> Yeah, that sounds like an interesting feature.

My experience on pip is that automated style review is helpful for
avoiding debates over subjective details. But it does result in a
certain level of "tweak to satisfy the style checker" churn in PRs.
That can be frustrating when CI takes a long time to run.

Paul


Re: [Python-Dev] How is the GitHub workflow working for people?

2018-02-22 Thread Antoine Pitrou
On Thu, 22 Feb 2018 07:51:17 -0800
Steve Dower  wrote:
> It then becomes grunt work for reviewers, who also have to carefully balance 
> encouraging new contributors against preventing the code base from getting 
> worse.

That's a fair point I hadn't considered.  OTOH the style issues I
usually comment on as a reviewer aren't the kind that would be caught
by an automated style check (I tend to ask for comments or docstrings,
or be nitpicky about some variable or function name).  YMMV :-)

> I’d rather have a review bot that can detect problems in PRs and comment on 
> them. We can choose to merge anyway and it won’t keep being noisy, but it 
> also saves committers from potentially telling someone their contribution 
> isn’t welcome because of their camelCase.

Yeah, that sounds like an interesting feature.

Regards

Antoine.




Re: [Python-Dev] How is the GitHub workflow working for people?

2018-02-22 Thread Steve Dower
It then becomes grunt work for reviewers, who also have to carefully balance 
encouraging new contributors against preventing the code base from getting 
worse.

I’d rather have a review bot that can detect problems in PRs and comment on 
them. We can choose to merge anyway and it won’t keep being noisy, but it also 
saves committers from potentially telling someone their contribution isn’t 
welcome because of their camelCase. (Maybe Mariatta’s tutorial will build this 
bot? Maybe I’ll go and learn how to do it myself :) )

(and here ends my contribution on this topic. Pretty sure we're firmly in 
core-workflow territory now.)

Top-posted from my Windows phone

From: Ethan Furman
Sent: Thursday, February 22, 2018 7:39
To: python-dev@python.org
Subject: Re: [Python-Dev] How is the GitHub workflow working for people?

On 02/22/2018 02:12 AM, Antoine Pitrou wrote:

> Overall it makes contributing more of a PITA than it needs to be.  Do
> automatic style *fixes* if you want, but please don't annoy me with
> automatic style checks that ask me to do tedious grunt work on my spare
> time.

+1

--
~Ethan~




Re: [Python-Dev] How is the GitHub workflow working for people?

2018-02-22 Thread Ethan Furman

On 02/22/2018 02:12 AM, Antoine Pitrou wrote:


> Overall it makes contributing more of a PITA than it needs to be.  Do
> automatic style *fixes* if you want, but please don't annoy me with
> automatic style checks that ask me to do tedious grunt work on my spare
> time.


+1

--
~Ethan~



Re: [Python-Dev] Should the dataclass frozen property apply to subclasses?

2018-02-22 Thread Ivan Levkivskyi
On 22 February 2018 at 10:55, Eric V. Smith  wrote:

> On 2/22/2018 1:56 AM, Raymond Hettinger wrote:
>
>> Other immutable classes in Python don't behave the same way:
>
>
>>  >>> class T(tuple):
>>      pass
>>
>>  >>> t = T([10, 20, 30])
>>  >>> t.cached = True
>>
>>  >>> class F(frozenset):
>>      pass
>>
>>  >>> f = F([10, 20, 30])
>>  >>> f.cached = True
>>
>>  >>> class B(bytes):
>>      pass
>>
>>  >>> b = B()
>>  >>> b.cached = True
>>
>
> The only way I can think of emulating this is checking in __setattr__ to
> see if the field name is a field of the frozen class, and only raising an
> error in that case.
>

How about checking that the type of self is the type where the decorator was
applied? For example (pseudocode):

def dataclass(cls, ...):
    def _set_attr(self, attr, value):
        if type(self) is not cls:
            use super()
        else:
            raise AttributeError
    cls.__setattr__ = _set_attr

It can also be more sophisticated, for example raising for all fields on the
class where frozen=True was used, while raising only for frozen fields on
subclasses.
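
For concreteness, a rough demonstration of the idea applied after the fact on
top of the current decorator (names and placement are made up; the real
change would live inside dataclasses itself):

    from dataclasses import dataclass, FrozenInstanceError

    @dataclass(frozen=True)
    class D:
        x: int = 10

    def _set_attr(self, name, value):
        # Only instances of D itself stay frozen; subclass instances fall
        # back to normal assignment, like tuple/frozenset subclasses do.
        if type(self) is not D:
            object.__setattr__(self, name, value)
        else:
            raise FrozenInstanceError(f'cannot assign to field {name!r}')

    D.__setattr__ = _set_attr

    class S(D):
        pass

    s = S()
    s.cached = True        # allowed: S can grow new attributes
    try:
        D().x = 2          # refused: D itself is still frozen
    except FrozenInstanceError as e:
        print(e)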

--
Ivan


Re: [Python-Dev] Should the dataclass frozen property apply to subclasses?

2018-02-22 Thread Eric V. Smith

On 2/22/2018 1:56 AM, Raymond Hettinger wrote:

When working on the docs for dataclasses, something unexpected came up.  If a 
dataclass is specified to be frozen, that characteristic is inherited by 
subclasses which prevents them from assigning additional attributes:

 >>> @dataclass(frozen=True)
 class D:
     x: int = 10

 >>> class S(D):
     pass

 >>> s = S()
 >>> s.cached = True
 Traceback (most recent call last):
   File "", line 1, in 
     s.cached = True
   File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/dataclasses.py", line 448, in _frozen_setattr
     raise FrozenInstanceError(f'cannot assign to field {name!r}')
 dataclasses.FrozenInstanceError: cannot assign to field 'cached'


This is because "frozen-ness" is implemented by adding __setattr__ and 
__delattr__ methods in D, which get inherited by S.



Other immutable classes in Python don't behave the same way:


 >>> class T(tuple):
     pass

 >>> t = T([10, 20, 30])
 >>> t.cached = True

 >>> class F(frozenset):
     pass

 >>> f = F([10, 20, 30])
 >>> f.cached = True

 >>> class B(bytes):
     pass

 >>> b = B()
 >>> b.cached = True


The only way I can think of emulating this is checking in __setattr__ to 
see if the field name is a field of the frozen class, and only raising 
an error in that case.
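
For example, a rough sketch of that check, layered on top of the current
decorator rather than inside it (the helper name here is made up):

    from dataclasses import dataclass, fields, FrozenInstanceError

    @dataclass(frozen=True)
    class D:
        x: int = 10

    def _fields_only_setattr(frozen_cls):
        # Refuse assignment only for the frozen class's own fields, so a
        # subclass can still add unrelated attributes such as `cached`.
        frozen_names = {f.name for f in fields(frozen_cls)}
        def __setattr__(self, name, value):
            if name in frozen_names:
                raise FrozenInstanceError(f'cannot assign to field {name!r}')
            object.__setattr__(self, name, value)
        return __setattr__

    class S(D):
        pass

    S.__setattr__ = _fields_only_setattr(D)

    s = S()
    s.cached = True        # a new attribute on the subclass: allowed
    try:
        s.x = 99           # a field of the frozen class: still refused
    except FrozenInstanceError as e:
        print(e)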


A related issue is that dataclasses derived from frozen dataclasses are 
automatically "promoted" to being frozen.


>>> @dataclass(frozen=True)
... class A:
...     i: int
...
>>> @dataclass
... class B(A):
...     j: int
...
>>> b = B(1, 2)
>>> b.j = 3
Traceback (most recent call last):
  File "", line 1, in 
  File "C:\home\eric\local\python\cpython\lib\dataclasses.py", line 452, in _frozen_setattr
    raise FrozenInstanceError(f'cannot assign to field {name!r}')
dataclasses.FrozenInstanceError: cannot assign to field 'j'

Maybe it should be an error to declare B as non-frozen?

Eric.



Re: [Python-Dev] How is the GitHub workflow working for people?

2018-02-22 Thread Antoine Pitrou
On Wed, 21 Feb 2018 14:19:54 -0800
Barry Warsaw  wrote:
> On Feb 21, 2018, at 13:22, Guido van Rossum  wrote:
> > 
> > I'm willing to reconsider if there's a good enough tool. Ditto for C code 
> > (or do we already do it for C?).  
> 
> For Python code, flake8 -- possibly with our own custom plugins -- is the way to 
> go.  It would probably take some kind of ratchet or transition period before 
> all of the stdlib were compliant.  We’d have to be careful of the inevitable 
> raft of PRs to fix things, which may distract from actual bug fixes and 
> improvements.  OTOH, that would be another external dependency pulled in for 
> core Python development.

Every time I contribute to a project which has automatic style checks in
CI, I find myself having to push pointless "cleanup" commits because
the CI is barking at me for some ridiculous reason (such as putting two
linefeeds instead of one between two Python functions).  Then I have to
wait some more until CI finishes, especially if builds take long or
build queues are clogged.

Overall it makes contributing more of a PITA than it needs to be.  Do
automatic style *fixes* if you want, but please don't annoy me with
automatic style checks that ask me to do tedious grunt work on my spare
time.

Regards

Antoine.

