Re: Checking if email is valid

2023-11-01 Thread Ian Hobson via Python-list

See https://www.linuxjournal.com/article/9585?page=0,0
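If a library-based check is acceptable, below is a minimal sketch assuming the
free third-party email-validator package is an option (not the approach from
the linked article):

    # pip install email-validator
    from email_validator import validate_email, EmailNotValidError

    def is_valid_address(addr: str) -> bool:
        try:
            # syntax-only check; pass check_deliverability=True to also query DNS
            validate_email(addr, check_deliverability=False)
            return True
        except EmailNotValidError:
            return False

    print(is_valid_address("simon@example.org"))   # True
    print(is_valid_address("not-an-address"))      # False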

On 01/11/2023 17:09, Simon Connah via Python-list wrote:

Hi,

I'm building a simple project using smtplib and have a question. I've been 
doing unit testing but I'm not sure how to check if an email message is valid. 
Using regex sounds like a bad idea to me and the other options I found required 
paying for third party services.

Could someone push me in the right direction please? I just want to find out if 
a string is a valid email address.

Thank you.

Simon.




--
Ian Hobson
Tel (+66) 626 544 695
--
https://mail.python.org/mailman/listinfo/python-list


Re: Get fully qualified class name from class object

2023-08-23 Thread Ian Pilcher via Python-list

On 8/22/23 11:13, Greg Ewing via Python-list wrote:

Classes have a __module__ attribute:

 >>> logging.Handler.__module__
'logging'


Not sure why I didn't think to look for such a thing.  Looks like it's
as simple as f'{cls.__module__}.{cls.__qualname__}'.
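For instance, a small helper along those lines (a sketch; the special case for
builtins is my own assumption, so that int comes out as 'int' rather than
'builtins.int', matching repr()):

    import logging

    def full_class_name(cls):
        if cls.__module__ == 'builtins':
            return cls.__qualname__
        return f'{cls.__module__}.{cls.__qualname__}'

    print(full_class_name(logging.Handler))   # logging.Handler
    print(full_class_name(int))               # int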

Thanks!

--

Google  Where SkyNet meets Idiocracy


--
https://mail.python.org/mailman/listinfo/python-list


Get fully qualified class name from class object

2023-08-22 Thread Ian Pilcher via Python-list

How can I programmatically get the fully qualified name of a class from
its class object?  (I'm referring to the name that is shown when str()
or repr() is called on the class object.)

Neither the __name__ or __qualname__ class attributes include the
module.  For example:

  >>> import logging

  >>> str(logging.Handler)
  "<class 'logging.Handler'>"

  >>> logging.Handler.__name__
  'Handler'
  >>> logging.Handler.__qualname__
  'Handler'

How can I programmatically get 'logging.Handler' from the class object?

--

Google  Where SkyNet meets Idiocracy

--
https://mail.python.org/mailman/listinfo/python-list


Which is more Pythonic - self.__class__ or type(self)?

2023-03-02 Thread Ian Pilcher

Seems like an FAQ, and I've found a few things on StackOverflow that
discuss the technical differences in edge cases, but I haven't found
anything that talks about which form is considered to be more Pythonic
in those situations where there's no functional difference.

Is there any consensus?

--

Google  Where SkyNet meets Idiocracy

--
https://mail.python.org/mailman/listinfo/python-list


Re: Tool that can document private inner class?

2023-02-08 Thread Ian Pilcher

On 2/8/23 08:25, Weatherby,Gerard wrote:

No.

I interpreted your query as “is there something that can read docstrings 
of dunder methods?”


Have you tried the Sphinx specific support forums? 
https://www.sphinx-doc.org/en/master/support.html 


Yes.  I've posted to both the -user and -dev groups, and I've also filed
an issue[1].  I haven't received a response.

Thus my conclusion that Sphinx, specifically autodoc, simply can't do
this.

[1] https://github.com/sphinx-doc/sphinx/issues/11181

--

Google  Where SkyNet meets Idiocracy


--
https://mail.python.org/mailman/listinfo/python-list


Re: Tool that can document private inner class?

2023-02-07 Thread Ian Pilcher

On 2/7/23 14:53, Weatherby,Gerard wrote:

Yes.

Inspect module

import inspect


class Mine:

    def __init__(self):
        self.__value = 7

    def __getvalue(self):
        """Gets seven"""
        return self.__value


mine = Mine()
data = inspect.getdoc(mine)
for m in inspect.getmembers(mine):
    if '__getvalue' in m[0]:
        d = inspect.getdoc(m[1])
        print(d)



Can inspect generate HTML documentation, à la Sphinx and other tools?

--

Google  Where SkyNet meets Idiocracy


--
https://mail.python.org/mailman/listinfo/python-list


Tool that can document private inner class?

2023-02-07 Thread Ian Pilcher

I've been banging my head on Sphinx for a couple of days now, trying to
get it to include the docstrings of a private (name starts with two
underscores) inner class.  All I've managed to do is convince myself
that it really can't do it.

See https://github.com/sphinx-doc/sphinx/issues/11181.

Is there a tool out there that can do this?

--

Google  Where SkyNet meets Idiocracy

--
https://mail.python.org/mailman/listinfo/python-list


Re: set.add() doesn't replace equal element

2022-12-31 Thread Ian Pilcher

On 12/30/22 17:00, Paul Bryan wrote:
It seems to me like you have two ideas of what "equal" means. You want to 
update a "non-equal/equal" value in the set (because of a different time 
stamp). If you truly considered them equal, the time stamp would be 
irrelevant and updating the value in the set would be unnecessary.


I would:

a) /not/ consider two different leases with two different time stamps to 
be equal, and
b) as already mentioned, store them in another data structure like a 
dictionary.


Not knowing the specifics of the DHCP object structure, if a DHCP lease 
object has some immutable key or other durable immutable attribute, I 
would be inclined to make that the dictionary key, and store the DHCP 
object as the value.


I have come to the conclusion that you are correct.  Thanks!

--

Google  Where SkyNet meets Idiocracy


--
https://mail.python.org/mailman/listinfo/python-list


Re: set.add() doesn't replace equal element

2022-12-30 Thread Ian Pilcher

On 12/30/22 15:47, Paul Bryan wrote:
What kind of elements are being added to the set? Can you show 
reproducible sample code?


The objects in question are DHCP leases.  I consider them "equal" if
the lease address (or IPv6 prefix) is equal, even if the timestamps have
changed.  That code is not small, but it's easy to demonstrate the
behavior.

>>> import datetime
>>> class Foo(object):
...     def __init__(self, index):
...         self.index = index
...         self.timestamp = datetime.datetime.now()
...     def __eq__(self, other):
...         return type(other) is Foo and other.index == self.index
...     def __hash__(self):
...         return hash(self.index)
...     def __repr__(self):
...         return f'Foo({self.index}) created at {str(self.timestamp)}'
...
>>> f1 = Foo(1)
>>> s = { f1 }
>>> s
{Foo(1) created at 2022-12-30 16:24:12.352908}
>>> f2 = Foo(1)
>>> f2
Foo(1) created at 2022-12-30 16:24:35.489208
>>> s.add(f2)
>>> s
{Foo(1) created at 2022-12-30 16:24:12.352908}

--

Google  Where SkyNet meets Idiocracy


--
https://mail.python.org/mailman/listinfo/python-list


set.add() doesn't replace equal element

2022-12-30 Thread Ian Pilcher

I just discovered this behavior, which is problematic for my particular
use.  Is there a different set API (or operator) that can be used to
add an element to a set, and replace any equal element?

If not, am I correct that I should call set.discard() before calling
set.add() to achieve the behavior that I want?
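A minimal sketch of the discard-then-add pattern (Lease is a made-up stand-in
for the real element type, hashed on its identifying attribute):

    class Lease:
        def __init__(self, address, timestamp):
            self.address, self.timestamp = address, timestamp
        def __eq__(self, other):
            return isinstance(other, Lease) and other.address == self.address
        def __hash__(self):
            return hash(self.address)

    leases = {Lease('10.0.0.1', 1)}
    newer = Lease('10.0.0.1', 2)
    leases.discard(newer)   # removes the equal element already present (no-op if absent)
    leases.add(newer)       # the set now holds the newer object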

--

Google  Where SkyNet meets Idiocracy

--
https://mail.python.org/mailman/listinfo/python-list


Re: Calling pselect/ppoll/epoll_pwait

2022-12-13 Thread Ian Pilcher

On 12/2/22 14:00, Ian Pilcher wrote:

Does Python provide any way to call the "p" variants of the I/O
multiplexing functions?


Just to close this out ...

As others suggested, there's no easy way to call the "p" variants of the
I/O multiplexing functions, but this can be worked around by "mapping"
signals to file descriptors.

There are a few ways to accomplish this.

1. Use a Linux signalfd.  There's at least one library out there that
   provides signalfd support to Python.

2. Use signal.set_wakeup_fd()[1].  I didn't really explore this, as it
   appears that there isn't any way to filter the signals that will be
   reported.

3. Roll your own.  This turned out to be really simple for my use case,
   which is simply to set an exit flag and wake my program up if it
   receives SIGINT or SIGTERM.

    import os, selectors, signal

    _sel = selectors.DefaultSelector()
    _exit_flag = False
    _sig_pipe_r, _sig_pipe_w = os.pipe2(os.O_NONBLOCK | os.O_CLOEXEC)

    def _sig_handler(signum, frame):
        global _exit_flag
        _exit_flag = True
        os.write(_sig_pipe_w, b'\x00')

    _sel.register(_sig_pipe_r, selectors.EVENT_READ)
    # register other file descriptors of interest
    signal.signal(signal.SIGINT, _sig_handler)
    signal.signal(signal.SIGTERM, _sig_handler)
    while not _exit_flag:
        ready = _sel.select()
        # handle other file descriptors

[1] https://docs.python.org/3/library/signal.html#signal.set_wakeup_fd

--

Google  Where SkyNet meets Idiocracy


--
https://mail.python.org/mailman/listinfo/python-list


Calling pselect/ppoll/epoll_pwait

2022-12-02 Thread Ian Pilcher

Does Python provide any way to call the "p" variants of the I/O
multiplexing functions?

Looking at the documentation of the select[1] and selectors[2] modules,
it appears that they expose only the "non-p" variants.

[1] https://docs.python.org/3/library/select.html
[2] https://docs.python.org/3/library/selectors.html

--

Google  Where SkyNet meets Idiocracy

--
https://mail.python.org/mailman/listinfo/python-list


Re: Dealing with non-callable classmethod objects

2022-11-12 Thread Ian Pilcher

On 11/12/22 14:57, Cameron Simpson wrote:
You shouldn't need a throwaway class, just use the name "classmethod" 
directly - it's the type!


    if not callable(factory):
        if type(factory) is classmethod:
            # replace factory with a function calling factory.__func__
            factory = lambda arg: factory.__func__(classmethod, arg)
        else:
            raise TypeError("unhandled factory type %s:%r"
                            % (type(factory), factory))

    value = factory(d[attr])


Or I could use the C++ version:

  face << palm;

Thanks!

--

Google  Where SkyNet meets Idiocracy


--
https://mail.python.org/mailman/listinfo/python-list


Re: Dealing with non-callable classmethod objects

2022-11-12 Thread Ian Pilcher

On 11/11/22 16:47, Cameron Simpson wrote:

On 11Nov2022 15:29, Ian Pilcher  wrote:

* Can I improve the 'if callable(factory):' test above?  This treats
 all non-callable objects as classmethods, which is obviously not
 correct.  Ideally, I would check specifically for a classmethod, but
 there doesn't seem to be any literal against which I could check the
 factory's type.


Yeah, it does feel a bit touchy feely.

You could see if the `inspect` module tells you more precise things 
about the `factory`.


The other suggestion I have is to put the method name in `_attrs`; if 
that's a `str` you could special case it as a well known type for the 
factory and look it up with `getattr(cls,factory)`.


So I've done this.

class _HasUnboundClassMethod(object):
    @classmethod
    def _classmethod(cls):
        pass  # pragma: no cover
    _methods = [ _classmethod ]

_ClassMethodType = type(_HasUnboundClassMethod._methods[0])

Which allows me to do this:

    def __init__(self, d):
        for attr, factory in self._attrs.items():
            if callable(factory):
                value = factory(d[attr])
            else:
                assert type(factory) is self._ClassMethodType
                value = factory.__func__(type(self), d[attr])
            setattr(self, attr, value)

It's a bit cleaner, although I'm not thrilled about having a throwaway
class, just to define a literal that ought to be provided by the
runtime.

--

Google  Where SkyNet meets Idiocracy


--
https://mail.python.org/mailman/listinfo/python-list


Re: Superclass static method name from subclass

2022-11-11 Thread Ian Pilcher

On 11/11/22 11:02, Thomas Passin wrote:

You can define a classmethod in SubClass that seems to do the job:

class SuperClass(object):
    @staticmethod
    def spam():  # "spam" and "eggs" are a Python tradition
        print('spam from SuperClass')

class SubClass(SuperClass):
    @classmethod
    def eggs(cls):
        super().spam()

SubClass.eggs()  # Prints "spam from SuperClass"



That did it!  Thanks!

--

Google  Where SkyNet meets Idiocracy


--
https://mail.python.org/mailman/listinfo/python-list


Re: Superclass static method name from subclass

2022-11-11 Thread Ian Pilcher

On 11/11/22 11:29, Dieter Maurer wrote:

Ian Pilcher wrote at 2022-11-11 10:21 -0600:


   class SuperClass(object):
       @staticmethod
       def foo():
           pass

   class SubClass(SuperClass):
       bar = SuperClass.foo
             ^^^^^^^^^^

Is there a way to do this without specifically naming 'SuperClass'?


Unless you overrode it, you can use `self.foo` or `SubClass.foo`;
if you overrode it (and you are using either Python 3 or
Python 2 and a so called "new style class"), you can use `super`.
When you use `super` outside a method definition, you must
call it with parameters.


SubClass.foo doesn't work, because 'SubClass' doesn't actually exist
until the class is defined.

>>> class SubClass(SuperClass):
...   bar = SubClass.foo
...
Traceback (most recent call last):
  File "", line 1, in 
  File "", line 2, in SubClass
NameError: name 'SubClass' is not defined. Did you mean: 'SuperClass'?

Similarly, self.foo doesn't work, because self isn't defined:

>>> class SubClass(SuperClass):
...   bar = self.foo
...
Traceback (most recent call last):
  File "", line 1, in 
  File "", line 2, in SubClass
NameError: name 'self' is not defined

--

Google  Where SkyNet meets Idiocracy


--
https://mail.python.org/mailman/listinfo/python-list


Dealing with non-callable classmethod objects

2022-11-11 Thread Ian Pilcher

I am trying to figure out a way to gracefully deal with uncallable
classmethod objects.  The class hierarchy below illustrates the issue.
(Unfortunately, I haven't been able to come up with a shorter example.)


import datetime


class DUID(object):

    _subclasses = {}

    def __init_subclass__(cls, **kwargs):
        super().__init_subclass__(**kwargs)
        cls._subclasses[cls.duid_type] = cls

    def __init__(self, d):
        for attr, factory in self._attrs.items():
            setattr(self, attr, factory(d[attr]))

    @classmethod
    def from_dict(cls, d):
        subcls = cls._subclasses[d['duid_type']]
        return subcls(d)


class DuidLL(DUID):

    @staticmethod
    def _parse_l2addr(addr):
        return bytes.fromhex(addr.replace(':', ''))

    duid_type = 'DUID-LL'
    _attrs = { 'layer2_addr': _parse_l2addr }


class DuidLLT(DuidLL):

    @classmethod
    def _parse_l2addr(cls, addr):
        return super()._parse_l2addr(addr)

    duid_type = 'DUID-LLT'
    _attrs = {
        'layer2_addr': _parse_l2addr,
        'time': datetime.datetime.fromisoformat
    }


A bit of context on why I want to do this ...

This is a simplified subset of a larger body of code that parses a
somewhat complex configuration.  The configuration is a YAML document,
that pyyaml parses into a dictionary (which contains other dictionaries,
lists, etc., etc.).  My code then parses that dictionary into an object
graph that represents the configuration.

Rather than embedding parsing logic into each of my object classes, I
have "lifted" it into the parent class (DUID in the example).  A
subclasses need only provide a few attributes that identifies its
required and optional attributes, default values, etc. (simplified to
DuidLL._attrs and DuidLLT._attrs in the example).

The parent class factory function (DUID.from_dict) uses the information
in the subclass's _attrs attribute to control how it parses the
configuration dictionary.  Importantly, a subclass's _attrs attribute
maps attribute names to "factories" that are used to parse the values
into various types of objects.

Thus, DuidLL's 'layer2_addr' attribute is parsed with its
_parse_l2addr() static method, and DuidLLT's 'time' attribute is parsed
with datetime.datetime.fromisoformat().  A factory can be any callable
object that takes a dictionary as its only argument.

This works with static methods (as well as normal functions and object
types that have an appropriate constructor):


>>> duid_ll = DUID.from_dict({ 'duid_type': 'DUID-LL',
...                            'layer2_addr': 'de:ad:be:ef:00:00' })
>>> type(duid_ll)
<class 'wtf.DuidLL'>
>>> duid_ll.duid_type
'DUID-LL'
>>> duid_ll.layer2_addr
b'\xde\xad\xbe\xef\x00\x00'

It doesn't work with a class method, such as DuidLLT._parse_l2addr():


>>> duid_llt = DUID.from_dict({ 'duid_type': 'DUID-LLT',
...                             'layer2_addr': 'de:ad:be:ef:00:00',
...                             'time': '2015-09-04T07:53:04-05:00' })
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/home/pilcher/subservient/wtf/wtf.py", line 19, in from_dict
    return subcls(d)
  File "/home/pilcher/subservient/wtf/wtf.py", line 14, in __init__
    setattr(self, attr, factory(d[attr]))
TypeError: 'classmethod' object is not callable

In searching, I've found a few articles that discuss the fact that
classmethod objects aren't callable, but the situation actually seems to
be more complicated.

>>> type(DuidLLT._parse_l2addr)
<class 'method'>
>>> callable(DuidLLT._parse_l2addr)
True

The method itself is callable, which makes sense.  The factory function
doesn't access it directly, however, it gets it out of the _attrs
dictionary.

>>> type(DuidLLT._attrs['layer2_addr'])
<class 'classmethod'>
>>> callable(DuidLLT._attrs['layer2_addr'])
False

I'm not 100% sure, but I believe that this is happening because the
class (DuidLLT) doesn't exist at the time that its _attrs dictionary is
defined.  Thus, there is no class to which the method can be bound at
that time and the dictionary ends up containing the "unbound version."

Fortunately, I do know the class in the context from which I actually
need to call the method, so I am able to call it with its __func__
attribute.  A modified version of DUID.__init__() appears to work:

    def __init__(self, d):
        for attr, factory in self._attrs.items():
            if callable(factory):  # <= ???!
                value = factory(d[attr])
            else:
                value = factory.__func__(type(self), d[attr])
            setattr(self, attr, value)

A couple of questions (finally!):

* Is my analysis of why this is happening correct?

* Can I improve the 'if callable(factory):' test above?  This treats
  all non-callable objects as classmethods, which is obviously not
  correct.  Ideally, I would check specifically for a classmethod, but
  there doesn't seem to be any literal against which I could check the
  factory's type.

Note:  I am aware that there are any number of workarounds for this
issue.  I just want to make sure that I understand what is going on, and
determine if there's a cleaner way to handle it.

Superclass static method name from subclass

2022-11-11 Thread Ian Pilcher

Is it possible to access the name of a superclass static method, when
defining a subclass attribute, without specifically naming the super-
class?

Contrived example:

  class SuperClass(object):
      @staticmethod
      def foo():
          pass

  class SubClass(SuperClass):
      bar = SuperClass.foo
            ^^^^^^^^^^

Is there a way to do this without specifically naming 'SuperClass'?

--

Google  Where SkyNet meets Idiocracy

--
https://mail.python.org/mailman/listinfo/python-list


Re: Comparing sequences with range objects

2022-04-09 Thread Ian Hobson



On 09/04/2022 13:14, Christian Gollwitzer wrote:

On 08.04.22 at 09:21, Antoon Pardon wrote:

The first is really hard. Not only may information be missing, no single
single piece of information is unique or immutable. Two people may have
the same name (I know about several other "Peter Holzer"s), a single
person might change their name (when I was younger I went by my middle
name - how would you know that "Peter Holzer" and "Hansi Holzer" are the
same person?), they will move (= change their address), change jobs,
etc. Unless you have a unique immutable identifier that's enforced by
some authority (like a social security number[1]), I don't think there
is a chance to do that reliably in a program (although with enough data,
a heuristic may be good enough).


Yes I know all that. That is why I keep a bucket of possible duplicates
per "identifying" field that is examined and use some heuristics at the
end of all the comparing instead of starting to weed out the duplicates
at the moment something differs.

The problem is that when an identifying field is judged to be unusable,
the bucket associated with it should conceptually contain all other
records (which in this case are the indexes into the population list).
But that would eat a lot of memory. So I want some object that behaves as
if it were an (immutable) list of all these indexes without actually
containing them. A range object almost works, with the only problem being
that it is not comparable with a list.



Then write your own comparator function?

Also, if the only case where this actually works is the index of all 
other records, then a simple boolean flag "all" vs. "these items in the 
index list" would suffice - doesn't it?


 Christian

Writing a comparator function is only possible for a given key. So my 
approach would be (a rough sketch of steps 1 and 2 follows the list below):


1) Write a comparator function that takes params X and Y, such that:
   if key data is missing from X, return 1
   if key data is missing from Y, return -1
   if X > Y, return 1
   if X < Y, return -1
   return 0  # they are equal and key data for both is present

2) Sort the data using the comparator function.

3) Run through the data with a trailing enumeration loop, merging 
matching records together.


4) If there are no records copied out with missing
key data, then you are done, so exit.

5) Choose a new key and repeat from step 1).
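
A rough Python sketch of steps 1 and 2 (the record layout here is an
assumption: dicts whose key field may be missing or None):

    from functools import cmp_to_key

    def make_comparator(key):
        def compare(x, y):
            if x.get(key) is None:      # key data missing from X
                return 1
            if y.get(key) is None:      # key data missing from Y
                return -1
            if x[key] > y[key]:
                return 1
            if x[key] < y[key]:
                return -1
            return 0                    # equal, and key data present for both
        return compare

    population = [{'surname': 'Holzer'}, {'surname': None}, {'surname': 'Pardon'}]
    population.sort(key=cmp_to_key(make_comparator('surname')))   # step 2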

Regards

Ian

--
Ian Hobson
Tel (+66) 626 544 695

--
This email has been checked for viruses by AVG.
https://www.avg.com

--
https://mail.python.org/mailman/listinfo/python-list


[issue38242] Revert the new asyncio Streams API

2022-03-22 Thread Ian Good


Change by Ian Good :


--
nosy: +icgood
nosy_count: 9.0 -> 10.0
pull_requests: +30142
pull_request: https://github.com/python/cpython/pull/13143

___
Python tracker 
<https://bugs.python.org/issue38242>
___
___
Python-bugs-list mailing list
Unsubscribe: 
https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue46550] __slots__ updates despite being read-only

2022-01-28 Thread Ian Lee


Ian Lee  added the comment:

@ronaldoussoren - right, I agree that raising the AttributeError is the right 
thing. The part that feels like a bug to me is that the exception says the 
attribute is read-only and yet it is not being treated that way (even though, 
as you point out, the end result doesn't "work").

Maybe this is something about the augmented assignment that I'm just not 
grokking... I read the blurb @eryksun posted several times, but I'm still not 
seeing what is going on.
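
For what it's worth, a rough sketch of how the augmented assignment expands
(using the class A and instance a from the examples below):

    tmp = a.__slots__     # attribute lookup finds the list stored on the class
    tmp += ['baz']        # list.__iadd__ mutates that shared list in place
    a.__slots__ = tmp     # the store back onto the instance raises AttributeError,
                          # but the class-level list has already been changed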

--

___
Python tracker 
<https://bugs.python.org/issue46550>
___
___
Python-bugs-list mailing list
Unsubscribe: 
https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue46550] __slots__ updates despite being read-only

2022-01-27 Thread Ian Lee


Ian Lee  added the comment:

@sobolevn - Hmm, interesting.. I tested in python 3.9 which I had available, 
and I can reproduce your result, but I think it's different because you are 
using a tuple. If I use a list then I see my same reported behavior in 3.9:


```python
Python 3.9.10 (main, Jan 26 2022, 20:56:53)
[GCC 10.2.1 20210110] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> class A:
...     __slots__ = ('x',)
...
>>> a = A()
>>> a.__slots__
('x',)
>>> a.__slots__ += ('y',)
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
AttributeError: 'A' object attribute '__slots__' is read-only
>>> a.__slots__
('x',)
>>>
>>>
>>>
>>> class B:
...     __slots__ = ['x']
...
>>> b = B()
>>> b.__slots__
['x']
>>> b.__slots__ += ['y']
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
AttributeError: 'B' object attribute '__slots__' is read-only
>>> b.__slots__
['x', 'y']
```

--

___
Python tracker 
<https://bugs.python.org/issue46550>
___
___
Python-bugs-list mailing list
Unsubscribe: 
https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue46550] __slots__ updates despite being read-only

2022-01-27 Thread Ian Lee

New submission from Ian Lee :

Hi there - I admit that I don't really understand the internals here, so maybe 
there is a good reason for this, but I thought it was weird when I just ran 
across it. 

If I create a new class `A` and set its `__slots__`:

```python
➜  ~ docker run -it python:3.10
Python 3.10.2 (main, Jan 26 2022, 20:07:09) [GCC 10.2.1 20210110] on linux
Type "help", "copyright", "credits" or "license" for more information.

>>> class A(object):
...     __slots__ = ["foo"]
...
>>> A.__slots__
['foo']
```


If I then go to add a new attribute to extend it on the class, that works:

```python
>>> A.__slots__ += ["bar"]
>>> A.__slots__
['foo', 'bar']
```

But then if I create an instance of that class, and try to update `__slots__` 
on that instance, I get an AttributeError that `__slots__` is read-only, and 
yet it still updates the `__slots__` variable:

```python
>>> a = A()
>>> a.__slots__
['foo', 'bar']

>>> a.__slots__ += ["baz"]
Traceback (most recent call last):
  File "", line 1, in 
AttributeError: 'A' object attribute '__slots__' is read-only

>>> a.__slots__
['foo', 'bar', 'baz']
>>> A.__slots__
['foo', 'bar', 'baz']
```

Maybe there is a good reason for this, but I was definitely surprised to get a 
"this attribute is read-only" error and yet still see the attribute updated.

I first found this in Python 3.8.5, but I also reproduced the example above 
using the docker python:3.10 image, which gave me Python 3.10.2.

Cheers!

--
components: Library (Lib)
messages: 411886
nosy: IanLee1521
priority: normal
severity: normal
status: open
title: __slots__ updates despite being read-only
type: behavior
versions: Python 3.10, Python 3.8

___
Python tracker 
<https://bugs.python.org/issue46550>
___
___
Python-bugs-list mailing list
Unsubscribe: 
https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue45677] [doc] improve sqlite3 docs

2021-11-21 Thread Ian Fisher


Ian Fisher  added the comment:

I think it would also be helpful to make the examples at the top simpler/more 
idiomatic, e.g. using a context manager for the connection and calling 
conn.execute directly instead of spawning a cursor.

I think the information about the isolation_level parameter should also be 
displayed more prominently as many people are surprised to find sqlite3 
automatically opening transactions.
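
For illustration, a sketch of the kind of example being suggested (table and
file names are made up):

    import sqlite3

    with sqlite3.connect("library.db") as conn:   # commits, or rolls back on error
        conn.execute("CREATE TABLE IF NOT EXISTS book (title TEXT, year INTEGER)")
        conn.execute("INSERT INTO book VALUES (?, ?)", ("Flatland", 1884))
        for row in conn.execute("SELECT title, year FROM book"):
            print(row)
    conn.close()   # the context manager handles the transaction, not the connection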

--
nosy: +iafisher

___
Python tracker 
<https://bugs.python.org/issue45677>
___
___
Python-bugs-list mailing list
Unsubscribe: 
https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue26651] Deprecate register_adapter() and register_converter() in sqlite3

2021-11-21 Thread Ian Fisher


Ian Fisher  added the comment:

See bpo-45858 for a more limited proposal to only deprecate the default 
converters.

--

___
Python tracker 
<https://bugs.python.org/issue26651>
___
___
Python-bugs-list mailing list
Unsubscribe: 
https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue45858] Deprecate default converters in sqlite3

2021-11-21 Thread Ian Fisher


Ian Fisher  added the comment:

See also bpo-26651 for a related proposal to deprecate the converter/adapter 
infrastructure entirely.

The proposal in this bug is more limited: remove the default converters (though 
I think the default adapters should stay), but continue to allow users to 
define their own converters.

--

___
Python tracker 
<https://bugs.python.org/issue45858>
___
___
Python-bugs-list mailing list
Unsubscribe: 
https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue45858] Deprecate default converters in sqlite3

2021-11-21 Thread Ian Fisher


New submission from Ian Fisher :

Per discussion at 
https://discuss.python.org/t/fixing-sqlite-timestamp-converter-to-handle-utc-offsets/,
 the default converters in SQLite3 have several bugs and are probably not worth 
continuing to maintain, so I propose deprecating them and removing them in a 
later version of Python.

Since the converters are opt-in, this should not affect most users of SQLite3.

--
components: Library (Lib)
messages: 406727
nosy: erlendaasland, iafisher
priority: normal
severity: normal
status: open
title: Deprecate default converters in sqlite3
type: behavior

___
Python tracker 
<https://bugs.python.org/issue45858>
___
___
Python-bugs-list mailing list
Unsubscribe: 
https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue45611] pprint - low width overrides depth folding

2021-10-26 Thread Ian Currie


New submission from Ian Currie :

Reproducible example:

>>> from pprint import pprint
>>> data = [["aa"],[2],[3],[4],[5]]
>>> pprint(data)
[["aa"], [2], [3], [4], [5]]

>>> pprint(data, depth=1)
[[...], [...], [...], [...], [...]]

>>> pprint(data, depth=1, width=7)
[[...],
 [...],
 [...],
 [...],
 [...]]

>>> pprint(data, depth=1, width=6)
[['aa'],
 [2],
 [3],
 [4],
 [5]]

The depth "folds" everything deeper than 1 level. Then if you lower the width 
of the output enough, what was once on one line, will now be split by newlines.

The bug is that if you lower the width below seven characters (the length of 
`[[...],`), it seems to override the `depth` parameter and print everything 
with no folding. This is true of deeply nested structures too.

Expected behavior: for the folding `...` to remain.

Why put the width so low?

I came across this because of the behavior of `compact`:

By default, if a data structure can fit on one line, it will be displayed on 
one line. `compact` only affects sequences that are longer than the given width.

**There is no way to force compact as False for short items**, so as to make 
sure all items, even short ones, appear on their own line.

[1,2,3] will always appear on a single line; there is no way to make it appear 
like:

[1,
 2,
 3]

The only way is by setting a very low width.

--
components: Library (Lib)
messages: 405027
nosy: iansedano
priority: normal
severity: normal
status: open
title: pprint - low width overrides depth folding
type: behavior
versions: Python 3.10

___
Python tracker 
<https://bugs.python.org/issue45611>
___
___
Python-bugs-list mailing list
Unsubscribe: 
https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue34798] pprint ignores the compact parameter for dicts

2021-10-25 Thread Ian


Ian  added the comment:

I came across this and was confused by it too. I also don't understand the 
justification for not having dicts affected by the `compact` parameter.

If the "compact form" means putting separate entries or elements on one line, 
instead of having each element separated by a new line, then this seems like 
inconsistent behavior.

**If a dict is short enough, it will appear in "compact form", just like a 
list.**

If a dict is too long for the width, then each item will appear in "expanded 
form", also like a list.
However, the actual compact parameter only affects sequence items. Why is this?

No reason is given in #19132. It does mention a review, but that doesn't seem 
to be available (or I don't know how to get to it), so I can't see the 
rationale for that decision.

--
nosy: +iansedano

___
Python tracker 
<https://bugs.python.org/issue34798>
___
___
Python-bugs-list mailing list
Unsubscribe: 
https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue45335] Default TIMESTAMP converter in sqlite3 ignores UTC offset

2021-10-24 Thread Ian Fisher


Change by Ian Fisher :


--
keywords: +patch
pull_requests: +27469
stage:  -> patch review
pull_request: https://github.com/python/cpython/pull/29200

___
Python tracker 
<https://bugs.python.org/issue45335>
___
___
Python-bugs-list mailing list
Unsubscribe: 
https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue19065] sqlite3 timestamp adapter chokes on timezones

2021-10-07 Thread Ian Fisher


Change by Ian Fisher :


--
nosy: +iafisher

___
Python tracker 
<https://bugs.python.org/issue19065>
___
___
Python-bugs-list mailing list
Unsubscribe: 
https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue26651] Deprecate register_adapter() and register_converter() in sqlite3

2021-10-07 Thread Ian Fisher


Change by Ian Fisher :


--
nosy: +iafisher

___
Python tracker 
<https://bugs.python.org/issue26651>
___
___
Python-bugs-list mailing list
Unsubscribe: 
https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue45335] Default TIMESTAMP converter in sqlite3 ignores UTC offset

2021-10-05 Thread Ian Fisher


Ian Fisher  added the comment:

Okay, I started a discussion here: 
https://discuss.python.org/t/fixing-sqlite-timestamp-converter-to-handle-utc-offsets/10985

--

___
Python tracker 
<https://bugs.python.org/issue45335>
___
___
Python-bugs-list mailing list
Unsubscribe: 
https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue45335] Default TIMESTAMP converter in sqlite3 ignores UTC offset

2021-10-05 Thread Ian Fisher


Ian Fisher  added the comment:

> Another option could be to deprecate the current behaviour and then change it 
> to being timezone aware in Python 3.13.

This sounds like the simplest option.

I'd be interested in working on this myself, if you think it's something that a 
new CPython contributor could handle.

--

___
Python tracker 
<https://bugs.python.org/issue45335>
___
___
Python-bugs-list mailing list
Unsubscribe: 
https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue45335] Default TIMESTAMP converter in sqlite3 ignores UTC offset

2021-10-04 Thread Ian Fisher


Ian Fisher  added the comment:

Unfortunately fixing this will have to be considered a backwards-incompatible 
change, since Python doesn't allow naive and aware datetime objects to be 
compared, so if sqlite3 starts returning aware datetimes, existing code might 
break.

Alternatively, perhaps this could be fixed in conjunction with changing 
sqlite3's API to allow per-database converters and adapters. Then, the old 
global TIMESTAMP converter could be retained for compatibility with existing 
code, and new code could opt-in to a per-database TIMESTAMP converter with the 
correct behavior.

--
title: Default TIMESTAMP converter in sqlite3 ignores time zone -> Default 
TIMESTAMP converter in sqlite3 ignores UTC offset

___
Python tracker 
<https://bugs.python.org/issue45335>
___
___
Python-bugs-list mailing list
Unsubscribe: 
https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue45335] Default TIMESTAMP converter in sqlite3 ignores time zone

2021-10-02 Thread Ian Fisher


Ian Fisher  added the comment:

Substitute "UTC offset" for "time zone" in my comment above.

I have attached a minimal Python program demonstrating data loss from this bug.

--
Added file: https://bugs.python.org/file50324/timestamp.py

___
Python tracker 
<https://bugs.python.org/issue45335>
___
___
Python-bugs-list mailing list
Unsubscribe: 
https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue45335] Default TIMESTAMP converter in sqlite3 ignores time zone

2021-09-30 Thread Ian Fisher


New submission from Ian Fisher :

The SQLite converter that the sqlite3 library automatically registers for 
TIMESTAMP columns 
(https://github.com/python/cpython/blob/main/Lib/sqlite3/dbapi2.py#L66) ignores 
the time zone even if it is present and always returns a naive datetime object.

I think that the converter should return an aware object if the time zone is 
present in the database. As it is, round trips of TIMESTAMP values from the 
database to Python and back might erase the original time zone info.

Now that datetime.datetime.fromisoformat is in Python 3.7, this should be easy 
to implement.
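
A sketch of what such a converter could look like, registered here from user
code rather than changed in the stdlib (a column declared as TIMESTAMP is
assumed):

    import datetime
    import sqlite3

    def convert_timestamp(val: bytes) -> datetime.datetime:
        # fromisoformat() keeps a "+HH:MM" offset, giving an aware datetime
        return datetime.datetime.fromisoformat(val.decode())

    sqlite3.register_converter("TIMESTAMP", convert_timestamp)

    conn = sqlite3.connect(":memory:", detect_types=sqlite3.PARSE_DECLTYPES)
    conn.execute("CREATE TABLE t (ts TIMESTAMP)")
    conn.execute("INSERT INTO t VALUES (?)", ("2021-09-30 12:00:00+02:00",))
    print(conn.execute("SELECT ts FROM t").fetchone()[0].tzinfo)   # UTC+02:00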

--
components: Library (Lib)
messages: 402979
nosy: iafisher
priority: normal
severity: normal
status: open
title: Default TIMESTAMP converter in sqlite3 ignores time zone
type: enhancement

___
Python tracker 
<https://bugs.python.org/issue45335>
___
___
Python-bugs-list mailing list
Unsubscribe: 
https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue45241] python REPL leaks local variables when an exception is thrown

2021-09-19 Thread Ian Henderson


Ian Henderson  added the comment:

Ah, you're right -- it looks like the 'objs' global is what's keeping these 
objects alive.  Sorry for the noise.

--
stage:  -> resolved
status: open -> closed

___
Python tracker 
<https://bugs.python.org/issue45241>
___
___
Python-bugs-list mailing list
Unsubscribe: 
https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue45241] python REPL leaks local variables when an exception is thrown

2021-09-19 Thread Ian Henderson


New submission from Ian Henderson :

To reproduce, copy the following code:


import gc
gc.collect()
objs = gc.get_objects()
for obj in objs:
    try:
        if isinstance(obj, X):
            print(obj)
    except NameError:
        class X:
            pass

def f():
    x = X()
    raise Exception()

f()



then open a Python REPL and paste repeatedly at the prompt.  Each time the code 
runs, another copy of the local variable x is leaked.  This was originally 
discovered while using PyTorch -- tensors leaked this way tend to exhaust GPU 
memory pretty quickly.

Version Info:
Python 3.9.7 (default, Sep  3 2021, 04:31:11) 
[Clang 12.0.5 (clang-1205.0.22.9)] on darwin

--
components: Interpreter Core
messages: 402144
nosy: ianh2
priority: normal
severity: normal
status: open
title: python REPL leaks local variables when an exception is thrown
type: resource usage
versions: Python 3.9

___
Python tracker 
<https://bugs.python.org/issue45241>
___
___
Python-bugs-list mailing list
Unsubscribe: 
https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue45081] dataclasses that inherit from Protocol subclasses have wrong __init__

2021-09-12 Thread Ian Good


Ian Good  added the comment:

Julian,

That is certainly a workaround, however the behavior you are describing is 
inconsistent with PEP-544 in both word and intention. From the PEP:

> To explicitly declare that a certain class implements a given protocol, it 
> can be used as a regular base class.

It further describes the semantics of inheriting as "unchanged" from a "regular 
base class". If the semantics are "unchanged" then it should follow that 
super().__init__() would pass through the protocol to the object.__init__, just 
like a "regular base class" would if it does not override __init__.

Furthermore, the intention of inheriting a Protocol as described in the PEP:

> Static analysis tools are expected to automatically detect that a class 
> implements a given protocol. So while it's possible to subclass a protocol 
> explicitly, it's not necessary to do so for the sake of type-checking.

The purpose of adding a Protocol sub-class as an explicit base class is thus 
only to improve static analysis, it should *not* to modify the runtime 
semantics.

Consider the case where a package maintainer wants to enhance the flexibility 
of their types by transitioning from using an ABC to using structural 
sub-typing. That simple typing change would be a breaking change to the package 
consumers, who must now remove a super().__init__() call.

Ian

--

___
Python tracker 
<https://bugs.python.org/issue45081>
___
___
Python-bugs-list mailing list
Unsubscribe: 
https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue45081] dataclasses that inherit from Protocol subclasses have wrong __init__

2021-09-11 Thread Ian Good


Ian Good  added the comment:

I believe this was a deeper issue that affected all classes inheriting 
Protocol, causing a TypeError on even the most basic case (see attached):

Traceback (most recent call last):
  File "/.../test.py", line 14, in <module>
    MyClass()
  File "/.../test.py", line 11, in __init__
    super().__init__()
  File "/usr/local/Cellar/python@3.9/3.9.7/Frameworks/Python.framework/Versions/3.9/lib/python3.9/typing.py", line 1083, in _no_init
    raise TypeError('Protocols cannot be instantiated')
TypeError: Protocols cannot be instantiated


This was a new regression in 3.9.7 and seems to be resolved by this fix. The 
desired behavior should be supported according to PEP 544: 
https://www.python.org/dev/peps/pep-0544/#explicitly-declaring-implementation
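
The attached test.py is not reproduced here, but a minimal sketch of the
pattern being described looks like this (names are illustrative):

    from typing import Protocol

    class HasClose(Protocol):
        def close(self) -> None: ...

    class MyClass(HasClose):          # explicitly declaring implementation per PEP 544
        def __init__(self):
            super().__init__()        # raised TypeError under 3.9.7's _no_init
        def close(self) -> None:
            pass

    MyClass()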

--
nosy: +icgood
Added file: https://bugs.python.org/file50277/test.py

___
Python tracker 
<https://bugs.python.org/issue45081>
___
___
Python-bugs-list mailing list
Unsubscribe: 
https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



Re: neoPython : Fastest Python Implementation: Coming Soon

2021-05-05 Thread Ian Clark
I wish you the best of luck. On top of everything else, it looks like the
neopython namespace has already been eaten up by some crypto project.
On Wed, May 5, 2021 at 11:18 AM Benjamin Schollnick <
bscholln...@schollnick.net> wrote:

> > Why? The currently extant Python implementations contribute to climate
> change as they are so inefficient;
>
> That same argument can be made for every triple-AAA video game, game
> console, etc.
>
> Python is more efficient for some problem sets, and Python is less
> efficient for other problem sets.
>
> Please feel free to come out with NeoPython.  When you are done, and it is
> backward compatible with existing Python code, I’ll be happy to benchmark
> it against Native python.  But don’t blame Python for global climate
> change.  There are plenty of bigger “causations” to Global climate change,
> then a freakin’ computer programming language.
>
> - Benjamin
>
>
> --
> https://mail.python.org/mailman/listinfo/python-list
>
-- 
https://mail.python.org/mailman/listinfo/python-list


[issue43895] Unnecessary Cache of Shared Object Handles

2021-04-23 Thread Ian H


Change by Ian H :


--
keywords: +patch
pull_requests: +24282
stage:  -> patch review
pull_request: https://github.com/python/cpython/pull/25487

___
Python tracker 
<https://bugs.python.org/issue43895>
___
___
Python-bugs-list mailing list
Unsubscribe: 
https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue43895] Unnecessary Cache of Shared Object Handles

2021-04-20 Thread Ian H


Ian H  added the comment:

Proposed patch is in https://github.com/python/cpython/pull/25487.

--

___
Python tracker 
<https://bugs.python.org/issue43895>
___
___
Python-bugs-list mailing list
Unsubscribe: 
https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue43895] Unnecessary Cache of Shared Object Handles

2021-04-20 Thread Ian H


New submission from Ian H :

While working on another project I noticed that there's a cache of shared 
object handles kept inside _PyImport_FindSharedFuncptr. See 
https://github.com/python/cpython/blob/b2b6cd00c6329426fc3b34700f2e22155b44168c/Python/dynload_shlib.c#L51-L55.
 It appears to be an optimization to work around poor caching of shared object 
handles in old libc implementations. After some testing, I have been unable to 
find any meaningful performance difference from this cache, so I propose we 
remove it to save the space.

My initial tests were on Linux (Ubuntu 18.04). I saw no discernible difference 
in the time for running the Python test suite with a single thread. 
Single-threaded runs of the test suite show a lot of variance, but after 
running with and without the cache 40 times, the mean times were nearly the 
same. Interpreter startup time also appears to be unaffected. This was all 
with a debug build, so I'm in the process of collecting data with a release 
build to see if that changes anything.

--
messages: 391453
nosy: Ian.H
priority: normal
severity: normal
status: open
title: Unnecessary Cache of Shared Object Handles
versions: Python 3.10

___
Python tracker 
<https://bugs.python.org/issue43895>
___
___
Python-bugs-list mailing list
Unsubscribe: 
https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue43870] C API Functions Bypass __import__ Override

2021-04-16 Thread Ian H

New submission from Ian H :

Some of the import-related C API functions are documented as bypassing an 
override to builtins.__import__. This appears to be the case, but the 
documentation is incomplete in this regard. For example, PyImport_ImportModule 
is implemented by calling PyImport_Import which does respect an override to 
builtins.__import__, but PyImport_ImportModule doesn't mention respecting an 
override. On the other hand some routines (like 
PyImport_ImportModuleLevelObject) do not respect an override to the builtin 
import.

Is this something that people are open to having fixed? I've been working on an 
academic project downstream that involved some overrides to the __import__ 
machinery (I haven't figured out a way to do this with just import hooks) and 
having some modules skip going through our override threw us for a bad 
debugging loop. The easiest long-term fix from our perspective is to patch the 
various PyImport routines to always respect an __import__ override. This 
technically is a backwards compatibility break, but I'm unsure if anyone is 
actually relying on the fact that specific C API functions bypass 
builtins.__import__ entirely. It seems more likely that the current behavior 
will cause bugs downstream like it did for us.
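
For context, a rough sketch of the kind of override in question, done from
Python (illustrative only; our real hook is more involved):

    import builtins

    _real_import = builtins.__import__

    def logging_import(name, globals=None, locals=None, fromlist=(), level=0):
        print('import requested:', name)
        return _real_import(name, globals, locals, fromlist, level)

    builtins.__import__ = logging_import
    import json   # import statements route through logging_import
    # ...but C code calling e.g. PyImport_ImportModuleLevelObject bypasses it.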

--
messages: 391220
nosy: Ian.H
priority: normal
severity: normal
status: open
title: C API Functions Bypass __import__ Override
type: behavior
versions: Python 3.9

___
Python tracker 
<https://bugs.python.org/issue43870>
___
___
Python-bugs-list mailing list
Unsubscribe: 
https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue43819] ExtensionFileLoader Does Not Implement invalidate_caches

2021-04-12 Thread Ian H


New submission from Ian H :

Currently there's no easy way to get at the internal cache of module spec 
objects for compiled extension modules. See 
https://github.com/python/cpython/blob/20ac34772aa9805ccbf082e700f2b033291ff5d2/Python/import.c#L401-L415.
 For example, these module spec objects continue to be cached even if we call 
importlib.invalidate_caches. ExtensionFileLoader doesn't implement the 
corresponding method for this.

The comment in the C file referenced above implies this is done this way to 
avoid re-initializing extension modules. I'm not sure if this can be fixed, but 
I figured I'd ask for input. Our use-case is an academic project where we've 
been experimenting with building an interface for linker namespaces into Python 
to allow for (among other things) loading multiple copies of any module without 
explicit support from that module. We've been able to do this without having 
custom builds of Python. We've instead gone the route of overriding some of the 
import machinery at runtime. To make this work we need a way to prevent caching 
of previous import-related information about a specific extension module. We 
currently have to rely on an unfortunate hack to get access to the internal 
cache of module spec objects for extension modules and modify that dictionary 
manually. What we have works, but any sort of alternative would be welcome.

--
messages: 390905
nosy: Ian.H
priority: normal
severity: normal
status: open
title: ExtensionFileLoader Does Not Implement invalidate_caches
type: behavior
versions: Python 3.9

___
Python tracker 
<https://bugs.python.org/issue43819>
___
___
Python-bugs-list mailing list
Unsubscribe: 
https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue43749] venv module does not copy the correct python exe

2021-04-06 Thread Ian Norton


Change by Ian Norton :


--
keywords: +patch
pull_requests: +23954
stage:  -> patch review
pull_request: https://github.com/python/cpython/pull/25216

___
Python tracker 
<https://bugs.python.org/issue43749>
___
___
Python-bugs-list mailing list
Unsubscribe: 
https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue43749] venv module does not copy the correct python exe

2021-04-06 Thread Ian Norton


Ian Norton  added the comment:

This may also cause https://bugs.python.org/issue35644

--

___
Python tracker 
<https://bugs.python.org/issue43749>
___
___
Python-bugs-list mailing list
Unsubscribe: 
https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue43749] venv module does not copy the correct python exe

2021-04-06 Thread Ian Norton


New submission from Ian Norton :

On Windows, the venv module does not copy the correct python exe if the 
currently running exe (e.g. sys.executable) has been renamed (e.g. named 
python3.exe).

venv will only make copies of python.exe, pythonw.exe, python_d.exe or 
pythonw_d.exe.

If, for example, the python executable has been renamed from python.exe to 
python3.exe (e.g. to co-exist on a system where multiple pythons are on PATH), 
then this can fail with errors like:

Error: [WinError 2] The system cannot find the file specified

When venv tries to run pip in the new environment.

If the running python executable is a differently named copy then errors like 
the one described in https://bugs.python.org/issue40588 are seen.

--
components: Library (Lib)
messages: 390329
nosy: Ian Norton
priority: normal
severity: normal
status: open
title: venv module does not copy the correct python exe
versions: Python 3.10, Python 3.8, Python 3.9

___
Python tracker 
<https://bugs.python.org/issue43749>
___
___
Python-bugs-list mailing list
Unsubscribe: 
https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue43047] logging.config formatters documentation is out of sync with code

2021-01-27 Thread Ian Wienand


Change by Ian Wienand :


--
keywords: +patch
pull_requests: +23182
stage:  -> patch review
pull_request: https://github.com/python/cpython/pull/24358

___
Python tracker 
<https://bugs.python.org/issue43047>
___
___
Python-bugs-list mailing list
Unsubscribe: 
https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue43047] logging.config formatters documentation is out of sync with code

2021-01-27 Thread Ian Wienand


New submission from Ian Wienand :

The dict-based configuration does not mention the "class" option, and neither 
the ini-file nor the dict sections mention the "style" option.
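
For reference, a sketch of a dict-based config exercising both options (as far
as I can tell this is accepted by logging.config.dictConfig, but treat it as an
illustration rather than documented behaviour):

    import logging, logging.config

    logging.config.dictConfig({
        'version': 1,
        'formatters': {
            'brief': {
                'class': 'logging.Formatter',
                'style': '{',
                'format': '{levelname}: {message}',
            },
        },
        'handlers': {
            'console': {'class': 'logging.StreamHandler', 'formatter': 'brief'},
        },
        'root': {'handlers': ['console'], 'level': 'INFO'},
    })
    logging.getLogger(__name__).info('hello')   # -> INFO: hello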

--
components: Library (Lib)
messages: 385825
nosy: iwienand
priority: normal
severity: normal
status: open
title: logging.config formatters documentation is out of sync with code
type: enhancement

___
Python tracker 
<https://bugs.python.org/issue43047>
___
___
Python-bugs-list mailing list
Unsubscribe: 
https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue42442] Tarfile to stdout documentation example

2020-11-23 Thread Ian Laughlin


New submission from Ian Laughlin :

Recommend adding an example to the tarfile documentation showing how to write 
a tarfile to stdout.


example:

import io
import sys
import tarfile

files = [(file_1, filename_1), (file_2, filename_2)]

with tarfile.open(fileobj=sys.stdout.buffer, mode='w|gz') as tar:
    for file, filename in files:
        file_obj = io.BytesIO()                  # start a BytesIO object
        file_obj.write(file.encode('utf-8'))     # write the file contents into it
        info = tarfile.TarInfo(filename)         # create the TarInfo
        file_obj.seek(0)                         # rewind so addfile() reads from the start
        info.size = file_obj.getbuffer().nbytes  # set the member size in bytes
        tar.addfile(info, fileobj=file_obj)      # write the member to stdout

--
assignee: docs@python
components: Documentation
messages: 381665
nosy: docs@python, ilaughlin
priority: normal
severity: normal
status: open
title: Tarfile to stdout documentation example
versions: Python 3.6, Python 3.7, Python 3.8, Python 3.9

___
Python tracker 
<https://bugs.python.org/issue42442>
___
___
Python-bugs-list mailing list
Unsubscribe: 
https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue42225] Tkinter hangs or crashes when displaying astral chars

2020-11-08 Thread Ian Strawbridge


Ian Strawbridge  added the comment:

On Ubuntu, Tk version is showing as 8.6.10
On Windows 10, Tk version is showing as 8.6.9

--

___
Python tracker 
<https://bugs.python.org/issue42225>
___
___
Python-bugs-list mailing list
Unsubscribe: 
https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue42225] Tkinter hangs or crashes when displaying astral chars

2020-11-08 Thread Ian Strawbridge


Ian Strawbridge  added the comment:

Further to the information I posted on Stack Overflow (referred to above) 
relating to reproducing emoticon characters from Idle under Ubuntu, I have done 
more testing.  Based on some of the code/comments above, I tried modifications 
which I hoped might identify errors before Idle crashed.
At a simple level I can generate some error information in a Ubuntu terminal 
from the following.
usr/bin$ idle-python3.8
Entering chr(0x1f624) gives the following error message in terminal.
X Error of failed request:  BadLength (poly request too large or internal Xlib 
length error)
  Major opcode of failed request:  139 (RENDER)
  Minor opcode of failed request:  20 (RenderAddGlyphs)
  Serial number of failed request:  4484
  Current serial number in output stream:  4484

Another test used this code.
--
def FileSave(sav_file_name, outputstring):
    with open(sav_file_name, "a", encoding="utf8", newline='') as myfile:
        myfile.write(outputstring)

def FileSave1(sav_file_name, eoutputstring):
    with open(sav_file_name, "a", encoding="utf8", newline='') as myfile:
        myfile.write(eoutputstring)

tk = True
if tk:
    from tkinter import Tk
    from tkinter.scrolledtext import ScrolledText
    root = Tk()
    text = ScrolledText(root, width=80, height=40)
    text.pack()
    def print1(txt):
        text.insert('insert', txt + '\n')

errors = []
outputstring = "Characters:" + "\n" + "\n"
eoutputstring = "Errors:" + "\n" + "\n"

#for i in range(0x1f600, 0x1f660):   # crashes at 0x1f624
for i in range(0x1f623, 0x1f624):    # 1f624, 1f625 then try 1f652
    chars = chr(i)
    decimal = str(int(hex(i)[2:], 16))
    try:
        outputstring = str(hex(i)) + " " + decimal + " " + chars + "\n"
        FileSave("Charsfile.txt", outputstring)
        print1(f"{hex(i)} {decimal} {chars}")
        print(f"{hex(i)} {decimal} {chars}")
    except Exception as e:
        print(str(hex(i)))
        eoutputstring = str(hex(i)) + "\n"
        FileSave1("Errorfile.txt", eoutputstring)
        errors.append(f"{hex(i)} {e}")

print("ERRORS:")

for line in errors:
    print(line)

--
With the range starting at 0x1f623 and changing the end point, in Ubuntu, with 
end point 0x1f624, this prints ok, but if higher numbers are used the Idle 
windows all closed. However on some occasions, if I began with end point at 
0x1f624 and run, then without closing the editor window I increased the end 
point to 0x1f625, save and run, the Text window would close, but the console 
window would remain open.  I could then increase the upper range further and 
repeat and more characters would print to the console.
I have attached screenshots of the console output with the 
fonts-noto-color-emoji fonts package installed(with font), then with this 
package uninstalled (no font) and finally the same when run under Windows 10.  
For the console output produced while the font package is installed, if I 
select in the character column where there is a blank space, "something" can be 
selected.  If I save the console as a text file or select all the rows, copy 
and paste to a text file, the missing characters are revealed. When the font 
package is uninstalled, the missing characters are truly missing.  It is the 
apparently missing characters (such as 0x1f624, 0x1f62c, 0x1f641, 0x1f642, 
0x1f644-0x1f64f) which appear to be causing the Idle crashes.  Presumably such 
as 0x1f650 and 0x1f651 are unallocated codes so show up as rectangular 
outlines. 

In none of the tests with the more complex code above did I manage to generate 
any error output.

My set up is as follows.
Ubuntu 20.04.1 LTS
x86_64
GNOME version: 3.36.3
Python 3.8.6 (default, Sep 25 2020, 21:22:01) 
Tk version: 8.6.10
[GCC 7.5.0] on linux

Hopefully, the above might give some pointers to handling these characters.

--
nosy: +IanSt1
versions: +Python 3.8 -Python 3.10
Added file: https://bugs.python.org/file49581/Screenshots_128547-128593.pdf

___
Python tracker 
<https://bugs.python.org/issue42225>
___
___
Python-bugs-list mailing list
Unsubscribe: 
https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



_pydoc.css

2020-10-25 Thread Ian Gay
The default colors of pydoc are truly horrendous! Has anyone written a
_pydoc.css file to produce something reasonable?

(running 3.6 on OpenSuse 15.1)

Ian

-- 
*** To reply by e-mail, make w single in address **
-- 
https://mail.python.org/mailman/listinfo/python-list


[issue41883] ctypes pointee goes out of scope, then pointer in struct dangles and crashes

2020-09-30 Thread Ian M. Hoffman


Ian M. Hoffman  added the comment:

I agree with you. When I wrote "desired behavior" I intended it to mean "my 
selfishly desired outcome of not loading my struct with a dangling pointer." 
This issue seems to have descended into workarounds that treat the symptoms; 
I'm all for treating the cause.

--

___
Python tracker 
<https://bugs.python.org/issue41883>
___
___
Python-bugs-list mailing list
Unsubscribe: 
https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue41883] ctypes pointee goes out of scope, then pointer in struct dangles and crashes

2020-09-29 Thread Ian M. Hoffman


Ian M. Hoffman  added the comment:

You are correct.

After further review, I found an older ctypes issue #12836 which was then 
enshrined in a workaround in the numpy.ndarray.ctypes interface to vanilla 
ctypes.

https://numpy.org/doc/stable/reference/generated/numpy.ndarray.ctypes.html

Numpy ctypes has both a `data` method for which "a reference will not be kept 
to the array" and a `data_as` method which has the desired behavior: "The 
returned pointer will keep a reference to the array."

So, we've all got our workarounds. What remains is whether/how to implement a 
check in Python for the dangling pointer. I have no advice on that, except that 
it is desirable to avoid the fault crash, no matter who is to blame.

--

___
Python tracker 
<https://bugs.python.org/issue41883>
___
___
Python-bugs-list mailing list
Unsubscribe: 
https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue41883] ctypes pointee goes out of scope, then pointer in struct dangles and crashes

2020-09-28 Thread Ian M. Hoffman


New submission from Ian M. Hoffman :

A description of the problem, complete example code for reproducing it, and a 
work-around are available on SO at the link:

https://stackoverflow.com/questions/64083376/python-memory-corruption-after-successful-return-from-a-ctypes-foreign-function

In summary: (1) create an array within a Python function, (2) create a 
ctypes.Structure with a pointer to that array, (3) return that struct from the 
Python function, (4) pass the struct out and back to a foreign function, (5) 
Python can successfully dereference the return from the foreign function, then 
(6) Python crashes.
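The full example is on SO rather than reproduced here; a rough pure-ctypes sketch
of the pattern (illustrative names, deliberately building the pointer from a raw
address so ctypes cannot keep the buffer alive - not the original NumPy-based code):

import ctypes

class Packet(ctypes.Structure):
    _fields_ = [("data", ctypes.POINTER(ctypes.c_double))]

def make_packet():
    buf = (ctypes.c_double * 3)(1.0, 2.0, 3.0)       # array local to the function
    pkt = Packet()
    # storing a pointer built from a raw address defeats ctypes' keep-alive bookkeeping
    pkt.data = ctypes.cast(ctypes.addressof(buf), ctypes.POINTER(ctypes.c_double))
    return pkt                                        # buf becomes garbage here

pkt = make_packet()
print(pkt.data[0])   # dereferences freed memory: may print garbage or crash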

As far as I can tell, when the array in the function goes out of scope at the 
end of the function, the pointer to it in the struct becomes dangling ... but 
the dangling doesn't catch up with Python until the very end when the Python 
struct finally goes out of scope in Python and the GC can't find its pointee.

I've reproduced this on Windows and linux with gcc- and MSVC-compiled Python 
3.6 and 3.8.

Perhaps it is not good practice on my part to have let the array go out of 
scope, but perhaps a warning from Python (or at least some internal awareness 
that the memory is no longer addressed) is in order so that Python doesn't 
crash upon failing to free it.

This may be related to #39217; I can't tell.

--
components: ctypes
messages: 377652
nosy: NankerPhelge
priority: normal
severity: normal
status: open
title: ctypes pointee goes out of scope, then pointer in struct dangles and 
crashes
type: crash
versions: Python 3.6, Python 3.7, Python 3.8

___
Python tracker 
<https://bugs.python.org/issue41883>
___
___
Python-bugs-list mailing list
Unsubscribe: 
https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



Re: What kind of magic do I need to get python to talk to Excel xlsm file?

2020-09-03 Thread Ian Hill
If you don't need to do any specific data analysis using pandas, try
openpyxl.

It will read xlsm files and it quite straight forward to use.

https://openpyxl.readthedocs.io/en/stable/

pip install openpyxl.

Ian
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Silly question, where is read() documented?

2020-08-29 Thread Ian Hobson



https://docs.python.org/3/tutorial/inputoutput.html#methods-of-file-objects

(It is in the top result returned by Google, searching for
Python read documentation)
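For what it's worth, read() is a method of the file objects returned by open()
(and of the io stream classes generally), which is why it is documented under
the file-object methods rather than as a built-in function. A minimal
illustration (the filename is just illustrative):

with open("example.txt", "w") as f:
    f.write("hello\n")

with open("example.txt") as f:
    data = f.read()          # read the whole file into one str
print(repr(data))            # 'hello\n'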

On 29/08/2020 17:18, Chris Green wrote:

Well it sounds a silly question but I can't find the documentation for
read().  It's not a built-in function and it's not documented with
(for example) the file type object sys.stdin.

So where is it documented?  :-)



--
Ian Hobson


--
This email has been checked for viruses by AVG.
https://www.avg.com

--
https://mail.python.org/mailman/listinfo/python-list


Re: LittleRookie

2020-08-19 Thread Ian Hill
You can access Dr.Chuck's Python for Everybody course here
https://www.py4e.com/

or you need to be on the audit track on Coursera.

R.

ushills
-- 
https://mail.python.org/mailman/listinfo/python-list


[issue41389] Garbage Collector Ignoring Some (Not All) Circular References of Identical Type

2020-07-27 Thread Ian O'Shaughnessy


Ian O'Shaughnessy  added the comment:

>I don't know of any language that guarantees all garbage will be collected 
>"right away". Do you?

I'm not an expert in this domain, so, no. I am however attempting to find a way 
to mitigate this issue. Do you have any suggestions how I can avoid these 
memory spikes? Weak references? Calling gc.collect() on regular intervals 
doesn't seem to work consistently.

--

___
Python tracker 
<https://bugs.python.org/issue41389>
___
___
Python-bugs-list mailing list
Unsubscribe: 
https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue41389] Garbage Collector Ignoring Some (Not All) Circular References of Identical Type

2020-07-27 Thread Ian O'Shaughnessy

Ian O'Shaughnessy  added the comment:

"Leak" was likely the wrong word.

It does appear problematic though.

The loop uses a fixed number of variables (yes, there are repeated dynamic 
allocations, but they fall out of scope with each iteration), and only one of those 
variables occupies 1 MB of RAM (aside from the static variable).

The problem: there is really only one variable occupying 1 MB of in-scope memory, 
yet the app's memory usage can/will exceed 1 GB after extended use.

At the very least, this is confusing – especially given the lack of user 
control to prevent it from happening once it's discovered as a problem.

--

___
Python tracker 
<https://bugs.python.org/issue41389>
___
___
Python-bugs-list mailing list
Unsubscribe: 
https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue41389] Garbage Collector Ignoring Some (Not All) Circular References of Identical Type

2020-07-24 Thread Ian O'Shaughnessy


Ian O'Shaughnessy  added the comment:

For a long-running process (greatly exceeding a million iterations) the 
uncollected garbage will become too large for the system (many gigabytes). A 
manual execution of the gc would be required.

That seems flawed given that Python is a garbage-collected language, no?

--

___
Python tracker 
<https://bugs.python.org/issue41389>
___
___
Python-bugs-list mailing list
Unsubscribe: 
https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue41389] Garbage Collector Ignoring Some (Not All) Circular References of Identical Type

2020-07-24 Thread Ian O'Shaughnessy


New submission from Ian O'Shaughnessy :

Using a script that has two classes A and B which contain a circular reference 
variable, it is possible to cause a memory leak that is not captured by default 
gc collection. Only by running gc.collect() manually do the circular references 
get collected.

Attached is a sample script that replicates the issue.

Output starts:

Ram used: 152.17 MB - A: Active(125) / Total(2485) - B: Active(124) / 
Total(2484)
Ram used: 148.17 MB - A: Active(121) / Total(12375) - B: Active(120) / 
Total(12374)
Ram used: 65.88 MB - A: Active(23) / Total(22190) - B: Active(22) / Total(22189)
Ram used: 77.92 MB - A: Active(35) / Total(31935) - B: Active(34) / Total(31934)

After 1,000,000 cycles, 1 GB of RAM is being consumed:

Ram used: 1049.68 MB - A: Active(1019) / Total(975133) - B: Active(1018) / 
Total(975132)
Ram used: 1037.64 MB - A: Active(1007) / Total(984859) - B: Active(1006) / 
Total(984858)
Ram used: 952.34 MB - A: Active(922) / Total(994727) - B: Active(921) / 
Total(994726)
Ram used: 970.41 MB - A: Active(940) / Total(100) - B: Active(940) / 
Total(100)
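The attached gc.bug.py is not reproduced in this archive; a minimal sketch of the
circular-reference pattern described above (names and sizes are illustrative, not
the original script):

import gc

class A:
    def __init__(self):
        self.partner = None
        self.payload = bytearray(1024 * 1024)   # ~1 MB per A instance

class B:
    def __init__(self):
        self.partner = None

def make_pair():
    a, b = A(), B()
    a.partner, b.partner = b, a   # reference cycle: a <-> b
    # both names go out of scope here, but the cycle keeps the pair alive
    # until the cyclic garbage collector decides to run

for _ in range(1000):
    make_pair()

print(gc.collect())   # force the cyclic collector; returns the number collected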

--
files: gc.bug.py
messages: 374210
nosy: ian_osh
priority: normal
severity: normal
status: open
title: Garbage Collector Ignoring Some (Not All) Circular References of 
Identical Type
type: resource usage
versions: Python 3.7
Added file: https://bugs.python.org/file49337/gc.bug.py

___
Python tracker 
<https://bugs.python.org/issue41389>
___
___
Python-bugs-list mailing list
Unsubscribe: 
https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue35786] get_lock() method is not present for Values created using multiprocessing.Manager()

2020-07-06 Thread Ian Jacob Bertolacci


Ian Jacob Bertolacci  added the comment:

What's being done about this?
I would say this is less "misleading documentation" and more "incorrect 
implementation"

There is also not an obvious temporary work-around.

--
nosy: +IanBertolacci

___
Python tracker 
<https://bugs.python.org/issue35786>
___
___
Python-bugs-list mailing list
Unsubscribe: 
https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue39619] os.chroot is not enabled on HP-UX builds

2020-02-12 Thread Ian Norton


New submission from Ian Norton :

When building on HP-UX using:

The configure stage fails to detect chroot().  This is due to setting  
_XOPEN_SOURCE to a value higher than 500.

The fix for this is to not set _XOPEN_SOURCE when configuring for HP-UX

--
components: Interpreter Core
messages: 361921
nosy: Ian Norton
priority: normal
severity: normal
status: open
title: os.chroot is not enabled on HP-UX builds
type: enhancement
versions: Python 3.7, Python 3.8, Python 3.9

___
Python tracker 
<https://bugs.python.org/issue39619>
___
___
Python-bugs-list mailing list
Unsubscribe: 
https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue39480] referendum reference is needlessly annoying

2020-01-28 Thread Ian Jackson


New submission from Ian Jackson :

The section "Fancier Output Formatting" has the example below.  This will 
remind many UK readers of the 2016 EU referendum.  About half of those readers 
will be quite annoyed.

This annoyance seems entirely avoidable; a different example which did not 
refer to politics would demonstrate the behaviour just as well.

Changing this example would (in the words of the CoC) also show more empathy, 
and be more considerate towards, python contributors unhappy with recent 
political developments in the UK, without having to make anyone else upset in 
turn.

  >>> year = 2016
  >>> event = 'Referendum'
  >>> f'Results of the {year} {event}'
  'Results of the 2016 Referendum'

  >>> yes_votes = 42_572_654
  >>> no_votes = 43_132_495
  >>> percentage = yes_votes / (yes_votes + no_votes)
  >>> '{:-9} YES votes  {:2.2%}'.format(yes_votes, percentage)
  ' 42572654 YES votes  49.67%'

--
assignee: docs@python
components: Documentation
messages: 360883
nosy: diziet, docs@python
priority: normal
severity: normal
status: open
title: referendum reference is needlessly annoying
versions: Python 3.7, Python 3.8, Python 3.9

___
Python tracker 
<https://bugs.python.org/issue39480>
___
___
Python-bugs-list mailing list
Unsubscribe: 
https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue38869] Unexpectedly variable result

2019-11-20 Thread Ian Carr-de Avelon


New submission from Ian Carr-de Avelon :

I can't understand why the result of changes() in the example file changes. I 
get:
[[6.90642211e-310]
 [1.01702662e-316]
 [1.58101007e-322]]
[[0.]
 [0.]
 [0.]]
with an Ubuntu 14 system that has had a lot of changes made. I've checked that the 
same happens on pythonanywhere.com, so it does not seem to be just that my system is 
broken. I wondered if there was some strange state in cv2 I don't know about, 
but as commenting out the tvec=np.zeros(3) line removes the behaviour, I think 
there is something strange here. Now that I've got it down to a few lines I can find 
a workaround, and obviously numpy and opencv are huge packages and it may be 
in their court, but I think it is worth a look from someone who knows more 
than me.
Yours
Ian

--
files: bug.py
messages: 357105
nosy: IanCarr
priority: normal
severity: normal
status: open
title: Unexpectedly variable result
type: behavior
versions: Python 2.7
Added file: https://bugs.python.org/file48727/bug.py

___
Python tracker 
<https://bugs.python.org/issue38869>
___
___
Python-bugs-list mailing list
Unsubscribe: 
https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



Re: Any socket library to communicate with kernel via netlink?

2019-11-20 Thread Ian Pilcher

On 11/18/19 9:23 PM, lampahome wrote:

As title, I tried to communicate with kernel via netlink. But I failed when
I receive msg from kernel.

The weird point is sending successfully from user to kernel, failed when
receiving from kernel.

So I want to check code in 3rd library and dig in, but always found library
called netlinkg but it actually does something like modify network address
or check network card...

Any idea is welcome



https://pypi.org/project/pyroute2/
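A minimal sketch of talking RTNL netlink to the kernel with it (assuming
pyroute2's IPRoute class and its get_links()/get_attr() helpers):

from pyroute2 import IPRoute

# open an rtnetlink socket, send a dump request, and read the kernel's replies
with IPRoute() as ipr:
    for link in ipr.get_links():
        print(link.get_attr('IFLA_IFNAME'))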

--

Ian Pilcher arequip...@gmail.com
 "I grew up before Mark Zuckerberg invented friendship" 


--
https://mail.python.org/mailman/listinfo/python-list


Distutils - bdist_rpm - specify python interpretter location

2019-10-30 Thread Ian Pilcher

I am trying to use Distutils "bdist_rpm" function on Fedora 30.  It is
failing, because Fedora does not provide a "python" executable; it
provides /usr/bin/python2 and /usr/bin/python3.

The error message is:

  env: 'python': No such file or directory
  error: Bad exit status from /var/tmp/rpm-tmp.a3xWMd (%build)

When run with the --spec-only option, one can see where 'python' is
used in the generated SPEC file:

  %build
  env CFLAGS="$RPM_OPT_FLAGS" python setup.py build

  %install
  python setup.py install -O1 --root=$RPM_BUILD_ROOT 
--record=INSTALLED_FILES


Is there a way to tell Distutils to use 'python2'?

Thanks!

--
========
Ian Pilcher arequip...@gmail.com
 "I grew up before Mark Zuckerberg invented friendship" 


--
https://mail.python.org/mailman/listinfo/python-list


[issue34975] start_tls() difficult when using asyncio.start_server()

2019-10-28 Thread Ian Good


Ian Good  added the comment:

#36889 was reverted, so this is not resolved.

I'm guessing this needs to be moved to 3.9 now too. Is my original PR worth 
revisiting? https://github.com/python/cpython/pull/13143/files

--
resolution: fixed -> 
status: closed -> open

___
Python tracker 
<https://bugs.python.org/issue34975>
___
___
Python-bugs-list mailing list
Unsubscribe: 
https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



Re: What's the purpose the hook method showing in a class definition?

2019-10-20 Thread Ian Hobson

Hi Jach,

On 20/10/2019 09:34, jf...@ms4.hinet.net wrote:

What puzzles me is how a parent's method foo() can find its child's method 
goo(), no matter it was overwrote or not? MRO won't explain this and I can't 
find document about it also:-(


This is a generalised description - Python may be slightly different.

When foo invokes goo, the search for goo starts at the class of the 
object (which is B), not the class of the executing method (i.e. not A). 
It then proceeds to look for goo up the class hierarchy - first in B, 
then A, then object.

If that fails, the run-time system modifies the call to look for a magic 
method and starts again at B. When the magic method is found in object, 
you get the "not found" error. If you implement the magic method in A or B, 
it will be run instead.
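A minimal sketch of that lookup order (illustrative classes, not Jach's
original code):

class A:
    def foo(self):
        # the lookup for goo starts at type(self), i.e. at B for a B instance
        return self.goo()

class B(A):
    def goo(self):
        return "B.goo"

print(B().foo())   # prints "B.goo" even though foo() is defined in A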


Regards

Ian

--
This email has been checked for viruses by AVG.
https://www.avg.com

--
https://mail.python.org/mailman/listinfo/python-list


Re: Get __name__ in C extension module

2019-10-07 Thread Ian Pilcher

On 10/7/19 2:09 AM, Barry Scott wrote:

I meant pass in the specific named logger that you C code will use.


Right, but I'm assuming that your C/C++ code will receive the logger
object by calling PyArg_ParseTuple (or similar), and I further assume
that you want to validate that the object actually is a logger.  Doing
that validation (by using an "O!" unit in the PyArg_ParseTuple format
string) requires access to the logging.Logger type object, and I was
unable to find a way to access that object by name.

--
========
Ian Pilcher arequip...@gmail.com
 "I grew up before Mark Zuckerberg invented friendship" 

--
https://mail.python.org/mailman/listinfo/python-list


Re: Get __name__ in C extension module

2019-10-06 Thread Ian Pilcher

On 10/6/19 12:55 PM, Barry Scott wrote:

Then the answer to your question is simple. Do it in python and passt
logger into the C++ module.


Funny thing, that's exactly where I started this journey.  I couldn't
figure out how to get the logging.Logger type object, so that I could
use a "O!" format string unit.  This led me to read a bit more about
the logging framework, which led me to the advice to get loggers by
name, rather than passing them around, etc., etc.


Next I would never code directly against the C API. Its a pain to use
and get right, get the ref counts wrong and you get memory leaks of
worse crash python.


Well, I like driving cars with manual transmissions, so ...

--
========
Ian Pilcher arequip...@gmail.com
 "I grew up before Mark Zuckerberg invented friendship" 

--
https://mail.python.org/mailman/listinfo/python-list


Re: Get __name__ in C extension module

2019-10-06 Thread Ian Pilcher

On 10/6/19 11:55 AM, MRAB wrote:
Don't you already have the module's name? You have to specify it in the 
PyModuleDef struct that you pass to PyModule_Create.


I do.  Perhaps I'm trying to be too Pythonic, but there's so much advice
out there about using getLogger(__name__) in Python code, rather than
hardcoding the name.  I've been trying to follow that pattern in my
extension module.

Calling PyModule_Create returns a reference to the module, and you can 
get its namespace dict with PyModule_GetDict(...).


Right.  I have that in my module init function, but how do I access that
later in one of my extension functions?  The only thing I can think of
would be to save it in a static variable, but static variables seem to
be a no-no in extensions.

--

Ian Pilcher arequip...@gmail.com
 "I grew up before Mark Zuckerberg invented friendship" 

--
https://mail.python.org/mailman/listinfo/python-list


Re: Get __name__ in C extension module

2019-10-06 Thread Ian Pilcher

On 10/5/19 12:55 PM, Ian Pilcher wrote:

This is straightforward, except that I cannot figure out how to retrieve
the __name__.


Making progress.  I can get a __name__ value with:

  PyDict_GetItemString(PyEval_GetGlobals(), "__name__")

I say "a __name__ value" because the returned value is actually that of
the calling module, not the name of my extension.

Is this normal?

Thanks!

--
====
Ian Pilcher arequip...@gmail.com
 "I grew up before Mark Zuckerberg invented friendship" 


--
https://mail.python.org/mailman/listinfo/python-list


Get __name__ in C extension module

2019-10-05 Thread Ian Pilcher

On 10/4/19 4:30 PM, Ian Pilcher wrote:

Ideally, I would pass my existing logging.Logger object into my C
function and use PyObject_CallMethod to call the appropriate method on
it (info, debug, etc.).


As I've researched this further, I've realized that this isn't the
correct approach.

My extension should be doing the C equivalent of:

logger = logging.getLogger(__name__)

This is straightforward, except that I cannot figure out how to retrieve
the __name__.  I can get it from the module object with
PyModule_GetName, but that requires that I have a reference to the
module object in order to do so.

I would have thought that this would be easy for a module function to
access, but I haven't been able to find any API which does this.
(Module functions get NULL as their 'self' argument.)

Any pointers appreciated.  Thanks!

--

Ian Pilcher arequip...@gmail.com
 "I grew up before Mark Zuckerberg invented friendship" 


--
https://mail.python.org/mailman/listinfo/python-list


Using a logging.Logger in a C extension

2019-10-04 Thread Ian Pilcher

I am working my way through writing a C extension, and I've realized
that I need to log a few messages from the C code.

Ideally, I would pass my existing logging.Logger object into my C
function and use PyObject_CallMethod to call the appropriate method on
it (info, debug, etc.).

PyArg_ParseTuple should be able to handle this with an "O!" format unit,
but I can't figure out how to retrieve the type object for
logging.Logger.

Any hints, links, etc. appreciated.

Thanks!

--
========
Ian Pilcher arequip...@gmail.com
 "I grew up before Mark Zuckerberg invented friendship" 


--
https://mail.python.org/mailman/listinfo/python-list


[issue38300] Documentation says destuction of TemporaryDirectory object will also delete it, but it does not.

2019-09-27 Thread Ian


Ian  added the comment:

I'm sorry, I should've thought to check my Python version. I was on 3.6.3, where 
it would not be deleted; after updating to 3.6.8 it works as intended.

--
resolution:  -> not a bug
stage:  -> resolved
status: open -> closed
versions:  -Python 3.7

___
Python tracker 
<https://bugs.python.org/issue38300>
___
___
Python-bugs-list mailing list
Unsubscribe: 
https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue38300] Documentation says destuction of TemporaryDirectory object will also delete it, but it does not.

2019-09-27 Thread Ian


New submission from Ian :

The documentation found here 
https://docs.python.org/3.7/library/tempfile.html#tempfile.TemporaryDirectory 
states the following

"On completion of the context or destruction of the temporary directory object 
the newly created temporary directory and all its contents are removed from the 
filesystem."

However calling del on the object does not call the cleanup method.

t = tempfile.TemporaryDirectory()
del t

I'm not sure if that is incorrect documentation or my own misunderstanding of 
what you call destruction. I tested adding my own def __del__(self): self.cleanup(), 
which worked as I expected.
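For comparison, the context-manager form does remove the directory on exit - a
minimal sketch:

import os
import tempfile

with tempfile.TemporaryDirectory() as path:
    print(os.path.isdir(path))    # True while the context is active

print(os.path.exists(path))       # False: removed when the context exited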

--
messages: 353393
nosy: iarp
priority: normal
severity: normal
status: open
title: Documentation says destuction of TemporaryDirectory object will also 
delete it, but it does not.
type: behavior
versions: Python 3.6, Python 3.7

___
Python tracker 
<https://bugs.python.org/issue38300>
___
___
Python-bugs-list mailing list
Unsubscribe: 
https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



Irritating bytearray behavior

2019-09-16 Thread Ian Pilcher

I am using a bytearray to construct a very simple message, that will be
sent across the network.  The message should always be 20 bytes:

  2 bytes - address family (AF_INET or AF_INET6) - network byte order
  2 bytes - (padding)
  4 or 16 bytes - IP address

The size of the IP address is dependent on whether it is an IPv4 address
(4 bytes) or an IPv6 address (16 bytes).  In the IPv4 case, it should be
followed by 12 bytes of padding, to keep the message size consistent.

Naïvely, I thought that I could do this:

   ip = ipaddress.ip_address(unicode(addr))
   msg = bytearray(20)
   msg[1] = socket.AF_INET if ip.version == 4 else socket.AF_INET6
   msg[4:] = ip.packed
   sock.sendto(msg, dest)

This doesn't work in the IPv4 case, because the bytearray gets truncated
to only 8 bytes (4 bytes plus the size of ip.packed).

Is there a way to avoid this behavior and copy the contents of ip.packed 
into the bytearray without changing its size?
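(For reference: giving the slice an explicit end makes the assignment replace
exactly len(ip.packed) bytes rather than the whole tail, so the length stays
fixed - a minimal Python 3 sketch, not the original Python 2 code:)

import ipaddress
import socket

ip = ipaddress.ip_address('192.0.2.1')
msg = bytearray(20)
msg[:2] = int(socket.AF_INET).to_bytes(2, 'big')   # address family, network byte order
msg[4:4 + len(ip.packed)] = ip.packed              # bounded slice: length stays 20
assert len(msg) == 20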

TIA

--

Ian Pilcher arequip...@gmail.com
 "I grew up before Mark Zuckerberg invented friendship" 


--
https://mail.python.org/mailman/listinfo/python-list


Re: ``if var'' and ``if var is not None''

2019-09-01 Thread Ian Kelly
On Sun, Sep 1, 2019, 8:58 AM Terry Reedy  wrote:

> On 9/1/2019 2:12 AM, Hongyi Zhao wrote:
>
> > The following two forms are always equivalent:
> > ``if var'' and ``if var is not None''
>
> Aside from the fact that this is false, why would you post such a thing?
>   Trolling?  Did you hit  [Send] prematurely?
>

I suspect it was posted as a question despite not being phrased as such.
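(For the record, the two tests differ for any falsy value other than None:)

var = 0                    # not None, but falsy
print(bool(var))           # False -> "if var" would skip the block
print(var is not None)     # True  -> "if var is not None" would enter it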

>
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: itertools cycle() docs question

2019-08-21 Thread Ian Kelly
On Wed, Aug 21, 2019 at 12:36 PM Calvin Spealman 
wrote:
>
> The point is to demonstrate the effect, not the specific implementation.

But still yes, that's pretty much exactly what it does. The main difference
between the "roughly equivalent to" code and the actual implementation is
that the former is in Python while the latter is in C.
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Enumerate - int object not subscriptable

2019-08-20 Thread Ian Kelly
Or use the "pairwise" recipe from the itertools docs:

from itertools import tee

def pairwise(iterable):
    "s -> (s0,s1), (s1,s2), (s2, s3), ..."
    a, b = tee(iterable)
    next(b, None)
    return zip(a, b)

for num1, num2 in pairwise(a):
    print(num1, num2)

On Tue, Aug 20, 2019 at 7:42 AM Cousin Stanley 
wrote:

> Sayth Renshaw wrote:
>
> > I want to do basic math with a list.
> >
> > a = [1, 2, 3, 4, 5, 6, 7, 8]
> >
> > for idx, num in enumerate(a):
> > print(idx, num)
> >
> > This works, but say I want to print the item value
> > at the next index as well as the current.
> >
> > for idx, num in enumerate(a):
> >
> > print(num[idx + 1], num)
> > 
>
>
> #!/usr/bin/env python3
>
> # sum each adjacent pair of elements in a list
>
> ls = list( range( 10 , 1 , -1 ) )
>
> print('\n  ' , ls , '\n' )
>
> for enum , n in enumerate( range( len( ls ) - 1 ) ) :
>
> i_left , i_rite = ls[ n : n + 2 ]
>
> i_tot = i_left  +  i_rite
>
> print( '  {:2d} :  {:2d}  +  {:2d} = {:4d} '.format( enum , i_left ,
> i_rite , i_tot ) )
>
>
> --
> Stanley C. Kitching
> Human Being
> Phoenix, Arizona
>
> --
> https://mail.python.org/mailman/listinfo/python-list
>
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Remote/Pair-Programming in-the-cloud

2019-08-04 Thread Ian Kelly
On Sat, Aug 3, 2019, 9:25 AM Bryon Tjanaka  wrote:

> Depending on how often you need to run the code, you could use a google doc
> and copy the code over when you need to run. Of course, if you need linters
> and other tools to run frequently this would not work.
>

I've conducted a number of remote interviews using Google Docs and while
it's okay for that use case I wouldn't really recommend it for actual
productivity. It's not designed to be an IDE nor does it satisfactorily
replace one.

>
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: if bytes != str:

2019-08-04 Thread Ian Kelly
On Sun, Aug 4, 2019, 9:02 AM Hongyi Zhao  wrote:

> Hi,
>
> I read and learn the the following code now:
>
> https://github.com/shadowsocksr-backup/shadowsocksr-libev/blob/master/src/
> ssrlink.py
> 
>
> In this script, there are the following two customized functions:
>
>
> --
> def to_bytes(s):
>     if bytes != str:
>         if type(s) == str:
>             return s.encode('utf-8')
>     return s
>
> def to_str(s):
>     if bytes != str:
>         if type(s) == bytes:
>             return s.decode('utf-8')
>     return s
> --
>
> I've the following confusion on the above code:
>
> Why should use `if bytes != str:' here?  I mean, this will always return
> True, IMO.
>
> See my following test in ipython:
>
> In[20]: bytes != str
> Out[20]: True
>
>
> Any hints on this?
>

In Python 2.7, bytes and str are the same type.
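Which is exactly what the to_bytes/to_str check exploits:

# Under Python 2.7:
>>> bytes is str
True

# Under Python 3:
>>> bytes is str
False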

>
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: super or not super?

2019-07-16 Thread Ian Kelly
On Tue, Jul 16, 2019 at 1:21 AM Chris Angelico  wrote:
>
> On Tue, Jul 16, 2019 at 3:32 PM Ian Kelly  wrote:
> >
> > Just using super() is not enough. You need to take steps if you want to
> > ensure that you class plays nicely with MI. For example, consider the
> > following:
> >
> > class C1:
> >     def __init__(self, name):
> >         self._name = name
> >
> > class C2(C1):
> >     def __init__(self, name, value):
> >         super().__init__(name)
> >         self._value = value
> >
> > This usage of super is just fine for the single-inheritance shown here.
But
> > there are two reasons why this cannot be neatly pulled into an MI
> > hierarchy. Can you spot both of them?
>
> Well, obviously it's violating LSP by changing the signature of
> __init__, which means that you have to be aware of its position in the
> hierarchy. If you want things to move around smoothly, you HAVE to
> maintain a constant signature (which might mean using *args and/or
> **kwargs cooperatively).

That's pretty close to what I had in mind. Many people treat __init__ as a
constructor (I know, it's not) and so long as you're following that
doctrine it's not really an LSP violation. But anything else that gets
worked into an MI hierarchy has to somehow be compatible with both of these
method signatures while also adding whatever new parameters it needs, which
is a problem. The usual advice for working with this is to use **kwargs
cooperatively as you suggested. *args doesn't work as well because every
class needs to know the absolute index of every positional parameter it's
interested in, and you can't just trim off the end as you go since you
don't know if the later arguments have been consumed yet. With **kwargs you
can just pop arguments off the dict as you go.
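A minimal sketch of that cooperative pattern (using keyword-only parameters to
"consume" each class's arguments; illustrative names, not from the quoted example):

class Base:
    def __init__(self, **kwargs):
        super().__init__(**kwargs)      # keep forwarding so every class in the MRO runs

class Named(Base):
    def __init__(self, *, name, **kwargs):
        super().__init__(**kwargs)      # 'name' has been consumed; pass the rest along
        self._name = name

class Valued(Base):
    def __init__(self, *, value, **kwargs):
        super().__init__(**kwargs)
        self._value = value

class Both(Named, Valued):
    pass

b = Both(name="spam", value=42)         # each __init__ picks out its own keyword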

On Tue, Jul 16, 2019 at 2:06 AM Antoon Pardon  wrote:
>
> I guess the second problem is that C1 doesn't call super. Meaning that if
> someone else uses this in a multiple heritance scheme, and the MRO reaches
> C1, the call doesn't get propagated to the rest.

That's it. With single inheritance it's both easy and common to assume that
your base class is object, and since object.__init__ does nothing there's
no point in calling it. But with MI, that assumption isn't valid.
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: super or not super?

2019-07-15 Thread Ian Kelly
On Sun, Jul 14, 2019 at 7:14 PM Chris Angelico  wrote:
>
> On Mon, Jul 15, 2019 at 10:51 AM Paulo da Silva
>  wrote:
> >
> > Às 15:30 de 12/07/19, Thomas Jollans escreveu:
> > > On 12/07/2019 16.12, Paulo da Silva wrote:
> > >> Hi all!
> > >>
> > >> Is there any difference between using the base class name or super to
> > >> call __init__ from base class?
> > >
> > > There is, when multiple inheritance is involved. super() can call
> > > different 'branches' of the inheritance tree if necessary.
> > > ...
> >
> > Thank you Jollans. I forgot multiple inheritance. I never needed it in
> > python, so far.
> >
>
> Something to consider is that super() becomes useful even if someone
> else uses MI involving your class. Using super() ensures that your
> class will play nicely in someone else's hierarchy, not just your own.

Just using super() is not enough. You need to take steps if you want to
ensure that you class plays nicely with MI. For example, consider the
following:

class C1:
    def __init__(self, name):
        self._name = name

class C2(C1):
    def __init__(self, name, value):
        super().__init__(name)
        self._value = value

This usage of super is just fine for the single-inheritance shown here. But
there are two reasons why this cannot be neatly pulled into an MI
hierarchy. Can you spot both of them?
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: How Do You Replace Variables With Their Values?

2019-07-14 Thread Ian Kelly
On Thu, Jul 11, 2019 at 11:10 PM Chris Angelico  wrote:
>
> On Fri, Jul 12, 2019 at 2:30 PM Aldwin Pollefeyt
>  wrote:
> >
> > Wow, I'm so sorry I answered on the question : "How do you replace a
> > variable with its value". For what i understood with the example values,
> > CrazyVideoGamez wants 3 variables named like the meal-names in
dictionary.
> > Yes, it's not secure unless you work with your own dataset (just like
> > sending your own created commands with set=True in subprocess). Yes
there
> > might be better solutions for the real problem. But maybe the user
really
> > has a purpose for it, in a secure environment with own datatset, it's a
> > valid answer for "How do you replace a variable with its value".
> >
>
> What you gave was dangerous advice, and yes, there IS a better
> solution - and an easier one. If you want to create variables
> dynamically, then just create them!
>
> for meal, parts in dinner.items():
>     globals()[meal.replace(' ','_')] = dinner[meal]
>
> Python has a rich set of metaprogramming tools. Don't just always
> reach for exec and caveat it with "it's okay if you trust everything".

To be fair, if dinner is untrusted then this new version is still unsafe.
You've just allowed it to shadow any global or built-in it wants to.
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Seeking help regarding Python code

2019-07-11 Thread Ian Hobson

Hi,

Afraid the formatting gremlins got to your data before we saw it,
so I am taking a guess at what you want to achieve.

On 11/07/19 06:54, Debasree Banerjee wrote:


I want to calculate the difference between LAQN_NO2 and NO2_RAW everyday at
04:00 and add that value to NO2_RAW values in all rows on a particular day.
I have 2 months of data and I want to do this for each day.


What is the difference value at 04:00? Your times all appear to be the 
same. I doubt you mean that you run the program at 04:00 every day, because
that would not update any rows that don't yet exist, and anyway NO2_RAW + 
(LAQN_NO2 - NO2_RAW) simply equals LAQN_NO2.


Once calculated, I would either store it in a new field (and add to 
NO2_RAW on every use), or copy the whole day's dataset to a new table. 
That way, should you have to re-run, you have not lost the old value of 
NO2_RAW.


Writing the code will come after you understand exactly what to achieve, 
and how a computer could do it.


Regards

Ian
(Another one).


--
https://mail.python.org/mailman/listinfo/python-list


[issue36889] Merge StreamWriter and StreamReader into just asyncio.Stream

2019-05-16 Thread Ian Good


Change by Ian Good :


--
nosy: +icgood

___
Python tracker 
<https://bugs.python.org/issue36889>
___
___
Python-bugs-list mailing list
Unsubscribe: 
https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue34975] start_tls() difficult when using asyncio.start_server()

2019-05-09 Thread Ian Good


Ian Good  added the comment:

I added start_tls() to StreamWriter. My implementation returns a new 
StreamWriter that should be used from then on, but it could be adapted to 
modify the current writer in-place (let me know).

I've added docs, an integration test, and done some additional "real-world" 
testing with an IMAP server I work on. Specifically, "openssl s_client -connect 
xxx -starttls imap" works like a charm, and no errors/warnings are logged on 
disconnect.

--
type:  -> enhancement

___
Python tracker 
<https://bugs.python.org/issue34975>
___
___
Python-bugs-list mailing list
Unsubscribe: 
https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



Re: Conway's game of Life, just because.

2019-05-07 Thread Ian Kelly
On Tue, May 7, 2019 at 1:00 PM MRAB  wrote:
>
> On 2019-05-07 19:29, Eli the Bearded wrote:
> > In comp.lang.python, Paul Rubin   wrote:
> >
> > Thanks for posting this. I'm learning python and am very familiar with
> > this "game".
> >
> >> #!/usr/bin/python3
> >> from itertools import chain
> >>
> >> def adjacents(cell):# generate coordinates of cell neighbors
> >>     x, y = cell # a cell is just an x,y coordinate pair
> >>     return ((x+i,y+j) for i in [-1,0,1] for j in [-1,0,1] if i or j)
> >
> [snip]
> >
> > Elijah
> > --
> > is the torus game board unintentional?
> >
> I've never seen a version of Conway's Game of Life where the board
> doesn't wrap around.

I don't think I've ever seen one where it does.
-- 
https://mail.python.org/mailman/listinfo/python-list


[issue34975] start_tls() difficult when using asyncio.start_server()

2019-05-06 Thread Ian Good


Change by Ian Good :


--
keywords: +patch
pull_requests: +13056
stage:  -> patch review

___
Python tracker 
<https://bugs.python.org/issue34975>
___
___
Python-bugs-list mailing list
Unsubscribe: 
https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



Re: Syntax for one-line "nonymous" functions in "declaration style"

2019-04-02 Thread Ian Kelly
On Tue, Apr 2, 2019 at 1:43 AM Alexey Muranov 
wrote:
>
> > On Mon, Apr 1, 2019 at 3:52 PM Alexey Muranov  > gmail.com>
> > wrote:
> > >
> > > I only see a superficial analogy with `super()`, but perhaps it is
> > > because you did not give much details of you suggestion.
> >
> > No, it's because the analogy was not meant to be anything more than
> > superficial. Both are constructs of syntactic magic that aid
> > readability at
> > a high level but potentially obscure the details of execution (in
> > relatively unimportant ways) when examined at a low level.
>
> Since i understand that the "super() magic" is just evaluation in a
> predefined environment, it does not look so very magic.

It's the reason why this doesn't work:

superduper = super

class A:
    def f(self):
        return 42

class B(A):
    def f(self):
        return superduper().f()

>>> B().f()
Traceback (most recent call last):
  File "", line 1, in 
  File "", line 3, in f
RuntimeError: super(): __class__ cell not found

But this does:

class C(A):
    def f(self):
        return superduper().f()
        not super

>>> C().f()
42

I don't know, seems magical to me.

> Moreover, without this "magic", `super()` would have just produced an
> error.  So this magic did not change behaviour of something that worked
> before, it made "magically" work something that did not work before
> (but i am still not excited about it).

I'm curious how you feel about this example then (from the CPython 3.7.2
REPL; results from different Python implementations or from scripts that
comprise a single compilation unit may vary)?

>>> 372 is 372
True
>>> b = 372; b is 372
True
>>> b = 372
>>> b is 372
False

> > Maybe it was from my talk of implementing this by replacing the
> > assignment
> > with an equivalent def statement in the AST. Bear in mind that the def
> > statement is already just a particular kind of assignment: it creates
> > a
> > function and assigns it to a name. The only difference between the
> > original
> > assignment and the def statement that replaces it is in the __name__
> > attribute of the function object that gets created. The proposal just
> > makes
> > the direct lambda assignment and the def "assignment" to be fully
> > equivalent.
>
> `def` is not an assignment, it is more than that.

def is an assignment where the target is constrained to a single variable
and the expression is constrained to a newly created function object
(optionally "decorated" first with one or more composed function calls).
The only ways in which:

@decorate
def foo(blah):
return stuff

is more than:

foo = decorate(lambda blah: stuff)

are: 1) the former syntactically allows statements inside the function
body, not just expressions; 2) the former syntactically allows annotations
on the function; and 3) the former syntactically sets a function name and
the latter doesn't. In other words, all of the differences ultimately boil
down to syntax.
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Syntax for one-line "nonymous" functions in "declaration style"

2019-04-02 Thread Ian Kelly
On Mon, Apr 1, 2019 at 3:52 PM Alexey Muranov 
wrote:
>
> I only see a superficial analogy with `super()`, but perhaps it is
> because you did not give much details of you suggestion.

No, it's because the analogy was not meant to be anything more than
superficial. Both are constructs of syntactic magic that aid readability at
a high level but potentially obscure the details of execution (in
relatively unimportant ways) when examined at a low level.

> On the other hand, i do use assignment in Python, and you seem to
> propose to get rid of assignment or to break it.

I thought the proposal was clear and succinct. "When [lambda expressions]
are directly assigned to a variable, Python would use the variable name as
the function name." That's all. I don't know where you got the idea I was
proposing "to get rid of assignment".

Maybe it was from my talk of implementing this by replacing the assignment
with an equivalent def statement in the AST. Bear in mind that the def
statement is already just a particular kind of assignment: it creates a
function and assigns it to a name. The only difference between the original
assignment and the def statement that replaces it is in the __name__
attribute of the function object that gets created. The proposal just makes
the direct lambda assignment and the def "assignment" to be fully
equivalent.
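A small illustration of the difference being discussed:

f = lambda x: x + 1

def g(x):
    return x + 1

print(f.__name__)   # '<lambda>' (the assignment does not rename the function)
print(g.__name__)   # 'g'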

> Note that
>
> foo.bar = baz
>
> and
>
> foo[bar] = baz

I wrote "directly assigned to a variable", not to an attribute or an item.
These are not part of the suggestion.

> Do you propose to desugar it into a method/function call and to get rid
> of assignments in the language completely? Will the user be able to
> override this method? Something like:
>
> setvar("foo", bar)  # desugaring of foo = bar

No, this is entirely unrelated to what I suggested.

> I am so perplexed by the proposed behaviour of `f = lambda...`, that i
> need to ask the followng: am i right to expact that

I'm not going to address what these would do because I haven't developed
the idea to this level of detail, nor do I intend to. It was just an idea
put forth to address the complaint that lambda functions assigned to
variables don't get properly named, as well as the obstacle that the
proposed "f(x) = ..." syntax not only explicitly refuses to address this,
but potentially makes the problem worse by making lambda assignment more
attractive without doing anything to actually name them. I have no
expectation of writing a PEP for it.

> I suppose in any case that
>
> return lambda x: 
>
> and
>
> result = lambda x: 
> return result
>
> would not return the same result, which is not what i want.

Correct, the first would return a function with the name "<lambda>" (as it
does now), while the second would return a function with the name "result".
Whether or not that is more useful than "<lambda>" in this case is up to
the reader.

> I tried to imagine what semantics of the language could cause your
> proposed behaviour of `f = lambda...` and couldn't think of anything
> short of breaking the language.

I'm fairly sure the language would continue to function just fine. It
creates a gotcha to be sure, but it's far from being the only one, or even
the biggest one that has to do with lambdas (that award surely goes to the
behavior of lambda closures created in a loop, which comes up on this list
with some regularity).
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Syntax for one-line "nonymous" functions in "declaration style"

2019-03-31 Thread Ian Kelly
On Sun, Mar 31, 2019 at 1:09 PM Alexey Muranov 
wrote:
>
> On dim., Mar 31, 2019 at 6:00 PM, python-list-requ...@python.org wrote:
> > On Sat, Mar 30, 2019, 5:32 AM Alexey Muranov
> > 
> > wrote:
> >
> >>
> >>  On ven., Mar 29, 2019 at 4:51 PM, python-list-requ...@python.org
> >> wrote:
> >>  >
> >>  > There could perhaps be a special case for lambda expressions such
> >>  >  that,
> >>  > when they are directly assigned to a variable, Python would use
> >> the
> >>  > variable name as the function name. I expect this could be
> >>  >  accomplished by
> >>  > a straightforward transformation of the AST, perhaps even by just
> >>  >  replacing
> >>  > the assignment with a def statement.
> >>
> >>  If this will happen, that is, if in Python assigning a
> >> lambda-defined
> >>  function to a variable will mutate the function's attributes, or
> >> else,
> >>  if is some "random" syntactically-determined cases
> >>
> >>  f = ...
> >>
> >>  will stop being the same as evaluating the right-hand side and
> >>  assigning the result to "f" variable, it will be a fairly good extra
> >>  reason for me to go away from Python.
> >>
> >
> > Is there a particular reason you don't like this? It's not too
> > different
> > from the syntactic magic Python already employs to support the
> > 0-argument
> > form of super().
>
> I do not want any magic in a programming language i use, especially if
> it breaks simple rules.
>
> I do not like 0-argument `super()` either, but at least I do not have
> to use it.

Well, you wouldn't have to use my suggestion either, since it only applies
to assignments of the form "f = lambda x: blah". As has already been
stated, the preferred way to do this is with a def statement. So just use a
def statement for this, and it wouldn't affect you (unless you *really*
want the function's name to be "<lambda>" for some reason).

That said, that's also the reason why this probably wouldn't happen. Why go
to the trouble of fixing people's lambda assignments for them when the
preferred fix would be for them to do it themselves by replacing them with
def statements?

> Neither i like how a function magically turns into a generator if the
> keyword `yield` appears somewhere within its definition.

I agree, there should have been a required syntactic element on the "def"
line as well to signal it immediately to the reader. It won't stop me from
using them, though.
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Syntax for one-line "nonymous" functions in "declaration style"

2019-03-30 Thread Ian Kelly
On Sat, Mar 30, 2019, 5:32 AM Alexey Muranov 
wrote:

>
> On ven., Mar 29, 2019 at 4:51 PM, python-list-requ...@python.org wrote:
> >
> > There could perhaps be a special case for lambda expressions such
> >  that,
> > when they are directly assigned to a variable, Python would use the
> > variable name as the function name. I expect this could be
> >  accomplished by
> > a straightforward transformation of the AST, perhaps even by just
> >  replacing
> > the assignment with a def statement.
>
> If this will happen, that is, if in Python assigning a lambda-defined
> function to a variable will mutate the function's attributes, or else,
> if is some "random" syntactically-determined cases
>
> f = ...
>
> will stop being the same as evaluating the right-hand side and
> assigning the result to "f" variable, it will be a fairly good extra
> reason for me to go away from Python.
>

Is there a particular reason you don't like this? It's not too different
from the syntactic magic Python already employs to support the 0-argument
form of super().
-- 
https://mail.python.org/mailman/listinfo/python-list


  1   2   3   4   5   6   7   8   9   10   >