[issue46073] ast.unparse produces: 'FunctionDef' object has no attribute 'lineno' for valid module

2021-12-14 Thread Brian Carlson


Brian Carlson  added the comment:

I don't think requiring `lineno` and `col_offset` to be passed is the preferred option. It makes programmatic code generation harder, because `lineno` and `col_offset` are hard to know ahead of time, before the code is unparsed.

--

Python tracker <https://bugs.python.org/issue46073>



[issue46073] ast.unparse produces: 'FunctionDef' object has no attribute 'lineno' for valid module

2021-12-14 Thread Brian Carlson


Brian Carlson  added the comment:

The second solution seems better to me. I monkey-patched the function like this in my own code:
```
def get_type_comment(self, node):
    comment = (self._type_ignores.get(node.lineno)
               if hasattr(node, "lineno")
               else node.type_comment)
    if comment is not None:
        return f" # type: {comment}"
```

Thanks!

--

Python tracker <https://bugs.python.org/issue46073>



[issue46073] ast.unparse produces: 'FunctionDef' object has no attribute 'lineno' for valid module

2021-12-14 Thread Brian Carlson


New submission from Brian Carlson :

The test file is attached. When unparsing the output of ast.parse on a simple class, ast.unparse raises: 'FunctionDef' object has no attribute 'lineno', even though the class and the AST are valid. It also fails when the module AST is built programmatically.

It seems to be from this function: 
https://github.com/python/cpython/blob/1cbb88736c32ac30fd530371adf53fe7554be0a5/Lib/ast.py#L790
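
For reference, a minimal sketch of the failure and a workaround (this is an assumed reproduction, not the attached test.py): building a FunctionDef by hand with every optional field supplied except the location attributes triggers the same AttributeError, and ast.fix_missing_locations makes it unparse cleanly.

```
import ast

# Hypothetical minimal reproduction (assumed, not the attached test.py).
func = ast.FunctionDef(
    name="f",
    args=ast.arguments(posonlyargs=[], args=[], vararg=None, kwonlyargs=[],
                       kw_defaults=[], kwarg=None, defaults=[]),
    body=[ast.Pass()],
    decorator_list=[],
    returns=None,
    type_comment=None,
)
module = ast.Module(body=[func], type_ignores=[])

try:
    ast.unparse(module)
except AttributeError as exc:
    print(exc)                      # 'FunctionDef' object has no attribute 'lineno'

ast.fix_missing_locations(module)   # fills in lineno/col_offset recursively
print(ast.unparse(module))          # unparses cleanly once locations exist
```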

--
components: Library (Lib)
files: test.py
messages: 408546
nosy: TheRobotCarlson
priority: normal
severity: normal
status: open
title: ast.unparse produces: 'FunctionDef' object has no attribute 'lineno' for 
valid module
type: crash
versions: Python 3.10, Python 3.11, Python 3.9
Added file: https://bugs.python.org/file50490/test.py

Python tracker <https://bugs.python.org/issue46073>



[issue37699] Explicit mention of raised ValueError's after .detach() of underlying IO buffer

2021-11-29 Thread Brian Skinn


Brian Skinn  added the comment:

Indeed, I hadn't been thinking about the testing/maintenance burden to CPython 
or other implementations when I made the suggestion.

I no longer have a strong opinion about this change, so I am happy to 
reject/close.

--
resolution:  -> rejected
stage:  -> resolved
status: pending -> closed

Python tracker <https://bugs.python.org/issue37699>



[issue45833] NamedTemporaryFile deleted before enclosing context manager exit

2021-11-17 Thread Brian McCutchon


New submission from Brian McCutchon :

Consider the following code:

# Copyright 2021 Google LLC.
# SPDX-License-Identifier: Apache-2.0
import contextlib
import os
import tempfile

@contextlib.contextmanager
def my_tmp_file():
  with tempfile.NamedTemporaryFile('w') as f:
    yield f

os.stat(my_tmp_file().__enter__().name)  # File not found
os.stat(contextlib.ExitStack().enter_context(my_tmp_file()).name)  # Same

I would expect the file to still exist, as __exit__ has not been called and I 
can't see why the file would have been closed. Also, it performs as expected 
when using NamedTemporaryFile directly, but not when it is nested in another 
context manager. It also performs as expected when my_tmp_file() or 
contextlib.ExitStack() is used in a "with" statement.

--
components: Library (Lib)
messages: 406494
nosy: Brian McCutchon
priority: normal
severity: normal
status: open
title: NamedTemporaryFile deleted before enclosing context manager exit
type: behavior
versions: Python 3.9

Python tracker <https://bugs.python.org/issue45833>



[issue45391] 3.10 objects.inv classifies UnionType as data

2021-10-06 Thread Brian Skinn


Brian Skinn  added the comment:

Matching identifiers whose final component starts with two uppercase letters returns a HUGE list.

>>> pat2 = re.compile(r"([.][A-Z][A-Z])[^.]*$")

Filtering down to only those whose lowercased name contains "type":

>>> pprint([obj.name for obj in inv.objects if obj.role == "data" and 
>>> pat2.search(obj.name) and "type" in obj.name.lower()])

['errno.EPROTOTYPE',
 'locale.LC_CTYPE',
 'sqlite3.PARSE_DECLTYPES',
 'ssl.CHANNEL_BINDING_TYPES',
 'token.TYPE_COMMENT',
 'token.TYPE_IGNORE',
 'typing.TYPE_CHECKING',
 'xml.parsers.expat.XMLParserType']

Of these, only 'xml.parsers.expat.XMLParserType' seems to me a likely problem 
entry.

--

Python tracker <https://bugs.python.org/issue45391>



[issue45391] 3.10 objects.inv classifies UnionType as data

2021-10-06 Thread Brian Skinn


Brian Skinn  added the comment:

If I understand the problem correctly, these mis-attributions of role (to 'data' instead of 'class') come about when a thing that is technically a class is defined in source using simple assignment, as with UnionType.

Problematic entries will thus have 'data' as role, and their identifiers will 
be camel-cased.

So, as a quick search to identify likely candidates:


>>> import re, sphobjinv as soi
>>> from pprint import pprint
>>> inv = soi.Inventory(url="https://docs.python.org/3.10/objects.inv")

# Find entries where the first character after the final period
# is uppercase, and the second character after the final period
# is lowercase.
>>> pat = re.compile(r"([.][A-Z][a-z])[^.]*$")

>>> pprint([obj.name for obj in inv.objects if obj.role == "data" and 
>>> pat.search(obj.name)])

['_thread.LockType',
 'ast.PyCF_ALLOW_TOP_LEVEL_AWAIT',
 'ast.PyCF_ONLY_AST',
 'ast.PyCF_TYPE_COMMENTS',
 'importlib.resources.Package',
 'importlib.resources.Resource',
 'socket.SocketType',
 'types.AsyncGeneratorType',
 'types.BuiltinFunctionType',
 'types.BuiltinMethodType',
 'types.CellType',
 'types.ClassMethodDescriptorType',
 'types.CoroutineType',
 'types.EllipsisType',
 'types.FrameType',
 'types.FunctionType',
 'types.GeneratorType',
 'types.GetSetDescriptorType',
 'types.LambdaType',
 'types.MemberDescriptorType',
 'types.MethodDescriptorType',
 'types.MethodType',
 'types.MethodWrapperType',
 'types.NoneType',
 'types.NotImplementedType',
 'types.UnionType',
 'types.WrapperDescriptorType',
 'typing.Annotated',
 'typing.Any',
 'typing.AnyStr',
 'typing.Callable',
 'typing.ClassVar',
 'typing.Concatenate',
 'typing.Final',
 'typing.Literal',
 'typing.NoReturn',
 'typing.Optional',
 'typing.ParamSpecArgs',
 'typing.ParamSpecKwargs',
 'typing.Tuple',
 'typing.TypeAlias',
 'typing.TypeGuard',
 'typing.Union',
 'weakref.CallableProxyType',
 'weakref.ProxyType',
 'weakref.ProxyTypes',
 'weakref.ReferenceType']


I would guess those 'ast.PyCF...' objects can be ignored; they appear to be constants.

--
nosy: +bskinn

Python tracker <https://bugs.python.org/issue45391>



[issue40027] re.sub inconsistency beginning with 3.7

2021-09-22 Thread Brian


Brian  added the comment:

txt = ' test'
txt = re.sub(r'^\s*', '^', txt)

substitutes once because the * is greedy.

txt = ' test'
txt = re.sub(r'^\s*?', '^', txt)

substitutes twice, consistent with the \Z behavior.

--

Python tracker <https://bugs.python.org/issue40027>



[issue40027] re.sub inconsistency beginning with 3.7

2021-09-22 Thread Brian


Brian  added the comment:

I just ran into this change in behavior myself.

It's worth noting that the new behavior appears to match perl's behavior:

# perl -e 'print(("he" =~ s/e*\Z/ah/rg), "\n")'
hahah

--
nosy: +bsammon

Python tracker <https://bugs.python.org/issue40027>



[issue45151] Logger library with task scheduler

2021-09-09 Thread Brian Hunt

New submission from Brian Hunt :

Version: Python 3.9.3
Package: Logger + Windows 10 Task Scheduler
Error Msg: None

Behavior:
I built a logging process to use with a python script that will be scheduled 
with Windows 10 task scheduler (this could be a bug in task scheduler too, but 
starting here). I noticed it wasn’t working and started trying to trace the 
root of the issue.

If I created a new file called scratch.py and ran some code, the logs showed up. However, the exact same code in the exact same folder (in a file titled run_xyz.py) didn't log those same messages. It appears that something in either the task scheduler or the logging library doesn't like the fact that my file name contains an underscore: as soon as I pointed the task that wasn't logging at my other file, it worked again, and when I finally removed the underscores it started working. I believe it is related to the logging library because the task with underscores does in fact run the Python code and generate the script output.



Code in both files:
-a_b_c.py code---
import os
import pathlib
import sys

pl_path = pathlib.Path(__file__).parents[1].resolve()
sys.path.append(str(pl_path))



from src.Core.Logging import get_logger
#
logger = get_logger(__name__, False)
logger.info("TESTING_USing taskScheduler")



---src.Core.Logging.py get_logger code
import logging
import datetime
import time
import os

# from logging.handlers import SMTPHandler

from config import build_stage, log_config
from Pipelines.Databases import sqlAlchemy_logging_con


class DBHandler(logging.Handler):
    def __init__(self, name):
        """

        :param name: Deprecated
        """
        logging.StreamHandler.__init__(self)
        self.con = sqlAlchemy_logging_con()
        self.sql = """insert into Logs (LogMessage, Datetime, FullPathNM,
            LogLevelNM, ErrorLine) values ('{message}', '{dbtime}', '{pathname}',
            '{level}', '{errLn}')"""
        self.name = name

    def formatDBTime(self, record):
        record.dbtime = datetime.strftime("#%Y/%m/%d#",
                                          datetime.localtime(record.created))

    def emit(self, record):
        creation_time = time.strftime("%Y-%m-%d %H:%M:%S",
                                      time.localtime(record.created))

        try:
            self.format(record)

            if record.exc_info:
                record.exc_text = logging._defaultFormatter.formatException(record.exc_info)
            else:
                record.exc_text = ""

            sql = self.sql.format(message=record.message,
                                  dbtime=creation_time,
                                  pathname=record.pathname,
                                  level=record.levelname,
                                  errLn=record.lineno)

            self.con.execute(sql)

        except:
            pass


def get_logger(name, use_local_logging=False):
    """
    Returns a logger based on a name given. Name should be __name__ variable or
    unique for each call.
    Never call more than one time in a given file as it clears the logger. Use
    config.log_config to define configuration of the logger and its handlers.
    :param name:
    :return: logger
    """
    logger = logging.getLogger(name)
    logger.handlers.clear()

    # used to send logs to local file location. Level set to that of logger
    if use_local_logging:
        handler = logging.FileHandler("Data\\Output\\log_" + build_stage +
                                      str(datetime.date.today()).replace("-", "") + ".log")
        handler.setLevel(log_config['logger_level'])
        logger.addHandler(handler)

    dbhandler = DBHandler(name)
    dbhandler.setLevel(log_config['db_handler_level'])
    logger.addHandler(dbhandler)
    logger.setLevel(log_config['logger_level'])

    return logger

--
components: IO, Library (Lib), Windows
messages: 401482
nosy: btjehunt, paul.moore, steve.dower, tim.golden, zach.ware
priority: normal
severity: normal
status: open
title: Logger library with task scheduler
type: behavior
versions: Python 3.9

Python tracker <https://bugs.python.org/issue45151>



[issue44992] functools.lru_cache does not guarantee cache hits for equal values

2021-08-24 Thread Brian Lee


Brian Lee  added the comment:

Thanks for clarifying - I see now that the docs specifically call out the lack 
of guarantees here with "usually but not always regard them as equivalent".

I did want to specifically explain the context of my bug:

1. NumPy's strings have some unexpected behavior because they have fixed-length 
strings (represented inline) and var-length strings (which are pointers to 
Python strings). Various arcana dictate which version you get, and wrappers 
like pandas.read_csv can also throw a wrench in the mix. It is quite easy for 
the nominal "string type" to change from under you, which is how I stumbled on 
this bug.

2. I was using functools.cache as a way to intern objects and short-circuit 
otherwise very expensive equality calculations by reducing them to pointer 
comparisons - hence my desire for exact cache hits when typed=False.

While I agree this is Working As Documented, it does not Work As Expected in my 
opinion. I would expect the stdlib optimized implementation to follow the same 
behavior as this naive implementation, which does consider "hello world" and 
np.str_("hello world") to be equivalent.

import functools

def cache(func):
  _cache = {}
  @functools.wraps(func)
  def wrapped(*args, **kwargs):
    cache_key = tuple(args) + tuple(kwargs.items())
    if cache_key not in _cache:
      _cache[cache_key] = func(*args, **kwargs)
    return _cache[cache_key]
  return wrapped
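
For what it's worth, here is a sketch of where the two lookups diverge, using the pure-Python key helper (functools._make_key is private and shown purely for illustration; as far as I can tell the C accelerator behaves the same way):

```
import functools
import numpy as np  # assumed available, as in this report

py_key = functools._make_key(("hello world",), {}, typed=False)
np_key = functools._make_key((np.str_("hello world"),), {}, typed=False)

print(type(py_key))      # <class 'str'> -- a single exact-str argument is used as-is
print(type(np_key))      # functools._HashedSeq -- str subclasses take the tuple path
print(py_key == np_key)  # False, so the two calls land on different cache entries
```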

--
title: functools.lru_cache does not consider strings and numpy strings as 
equivalent -> functools.lru_cache does not guarantee cache hits for equal values

Python tracker <https://bugs.python.org/issue44992>



[issue44992] functools.lru_cache does not consider strings and numpy strings as equivalent

2021-08-24 Thread Brian Lee


New submission from Brian Lee :

This seems like unexpected behavior: Two keys that are equal and have equal 
hashes should yield cache hits, but they do not.


Python 3.9.6 (default, Aug 18 2021, 19:38:01) 
[GCC 7.5.0] :: Anaconda, Inc. on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import functools
>>> 
>>> import numpy as np
>>> 
>>> @functools.lru_cache(maxsize=None)
... def f(x):
...   return x
... 
>>> py_str = 'hello world'
>>> np_str = np.str_(py_str)
>>> 
>>> assert py_str == np_str
>>> assert hash(py_str) == hash(np_str)
>>> 
>>> assert f.cache_info().currsize == 0
>>> f(py_str)
'hello world'
>>> assert f.cache_info().currsize == 1
>>> f(np_str)
'hello world'
>>> assert f.cache_info().currsize == 2
>>> print(f.cache_info())
CacheInfo(hits=0, misses=2, maxsize=None, currsize=2)

--
components: Library (Lib)
messages: 400209
nosy: brilee
priority: normal
severity: normal
status: open
title: functools.lru_cache does not consider strings and numpy strings as 
equivalent
versions: Python 3.9

Python tracker <https://bugs.python.org/issue44992>



[issue44819] assertSequenceEqual does not use _getAssertEqualityFunc

2021-08-04 Thread Brian


Brian  added the comment:

I've attached an example of what I want. It contains a class, a function to be 
tested, and a test class which tests the function.

What TestCase.addTypeEqualityFunc seems to offer is a chance to compare objects however is needed for each test. Sometimes all I want is to compare the properties of the objects, and maybe not even all of the properties! When I have a list of these objects and I've added an equality function for the object's type, I was expecting the test class to use my equality function when comparing the objects in the list. A sketch of the kind of stopgap I mean is below.
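
As a stopgap in my own tests, something like the following works (the helper name is mine, not a unittest API): comparing element-wise lets assertEqual dispatch to whatever was registered with addTypeEqualityFunc for each pair of items.

```
import unittest

class MyTestCase(unittest.TestCase):
    def assertListEqualByItems(self, first, second):
        # Hypothetical helper: compare element-wise so assertEqual can use
        # the type equality functions registered with addTypeEqualityFunc.
        self.assertEqual(len(first), len(second))
        for index, (a, b) in enumerate(zip(first, second)):
            with self.subTest(index=index):
                self.assertEqual(a, b)
```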

--
Added file: https://bugs.python.org/file50201/example.py

Python tracker <https://bugs.python.org/issue44819>



[issue44819] assertSequenceEqual does not use _getAssertEqualityFunc

2021-08-03 Thread Brian


New submission from Brian :

As the title says, TestCase.assertSequenceEqual does not behave like TestCase.assertEqual, which uses TestCase._getAssertEqualityFunc. Instead, TestCase.assertSequenceEqual compares items with `item1 != item2` directly. If it used the registered equality functions, I could do something like this:

```
def test_stuff(self):
    self.addTypeEqualityFunc(
        MyObject,
        comparison_method_which_compares_how_i_want,
    )
    self.assertListEqual(
        get_list_of_objects(),
        [MyObject(...), MyObject(...)],
    )
```

--
components: Tests
messages: 398851
nosy: Rarity
priority: normal
severity: normal
status: open
title: assertSequenceEqual does not use _getAssertEqualityFunc
type: behavior
versions: Python 3.6, Python 3.9

Python tracker <https://bugs.python.org/issue44819>



[issue44476] "subprocess not supported for isolated subinterpreters" when running venv

2021-06-21 Thread Brian Curtin


Brian Curtin  added the comment:

I think there was either something stale that got linked wrong or some other kind of build failure, as I just built the v3.10.0b3 tag again from a properly cleaned environment and this is no longer occurring. Sorry for the noise.

--
stage:  -> resolved
status: open -> closed

Python tracker <https://bugs.python.org/issue44476>



[issue44476] "subprocess not supported for isolated subinterpreters" when running venv

2021-06-21 Thread Brian Curtin


Brian Curtin  added the comment:

Hmm. I asked around about this and someone installed 3.10.0-beta3 via pyenv and 
this worked fine. 

For whatever it's worth, this was built from source on OS X 10.14.6 via a pretty normal setup of `./configure` with no extra flags and then `make install`.

--

Python tracker <https://bugs.python.org/issue44476>



[issue44476] "subprocess not supported for isolated subinterpreters" when running venv

2021-06-21 Thread Brian Curtin

New submission from Brian Curtin :

I'm currently trying to run my `tox` testing environment—all of which runs 
under `coverage`—against 3.10b3 and am running into some trouble. I initially 
thought it was something about tox or coverage, but it looks lower level than 
that as the venv script in the stdlib isn't working. I'm not actually sure if 
the direct problem I'm experiencing is the same as what I'm listing below, but 
it reproduces the same result:

$ python3.10 -m venv ~/.envs/test
Error: subprocess not supported for isolated subinterpreters

Here's the version I'm running: Python 3.10.0b3 (tags/v3.10.0b3:865714a117, Jun 
17 2021, 17:37:28) [Clang 10.0.1 (clang-1001.0.46.4)] on darwin


In case it's helpful, here's a full traceback from tox:

action: py310-unit, msg: getenv
cwd: /Users/briancurtin/elastic/cloud/python-services-v3
cmd: 
/Users/briancurtin/elastic/cloud/python-services-v3/.tox/py310-unit/bin/python 
-m pip install coverage -rrequirements.txt -rtest-requirements.txt
Collecting coverage
  Using cached coverage-5.5.tar.gz (691 kB)
ERROR: Error subprocess not supported for isolated subinterpreters while 
executing command python setup.py egg_info
ERROR: Exception:
Traceback (most recent call last):
  File 
"/Users/briancurtin/elastic/cloud/python-services-v3/.tox/py310-unit/lib/python3.10/site-packages/pip/_internal/cli/base_command.py",
 line 188, in _main
status = self.run(options, args)
  File 
"/Users/briancurtin/elastic/cloud/python-services-v3/.tox/py310-unit/lib/python3.10/site-packages/pip/_internal/cli/req_command.py",
 line 185, in wrapper
return func(self, options, args)
  File 
"/Users/briancurtin/elastic/cloud/python-services-v3/.tox/py310-unit/lib/python3.10/site-packages/pip/_internal/commands/install.py",
 line 332, in run
requirement_set = resolver.resolve(
  File 
"/Users/briancurtin/elastic/cloud/python-services-v3/.tox/py310-unit/lib/python3.10/site-packages/pip/_internal/resolution/legacy/resolver.py",
 line 179, in resolve
discovered_reqs.extend(self._resolve_one(requirement_set, req))
  File 
"/Users/briancurtin/elastic/cloud/python-services-v3/.tox/py310-unit/lib/python3.10/site-packages/pip/_internal/resolution/legacy/resolver.py",
 line 362, in _resolve_one
abstract_dist = self._get_abstract_dist_for(req_to_install)
  File 
"/Users/briancurtin/elastic/cloud/python-services-v3/.tox/py310-unit/lib/python3.10/site-packages/pip/_internal/resolution/legacy/resolver.py",
 line 314, in _get_abstract_dist_for
abstract_dist = self.preparer.prepare_linked_requirement(req)
  File 
"/Users/briancurtin/elastic/cloud/python-services-v3/.tox/py310-unit/lib/python3.10/site-packages/pip/_internal/operations/prepare.py",
 line 487, in prepare_linked_requirement
abstract_dist = _get_prepared_distribution(
  File 
"/Users/briancurtin/elastic/cloud/python-services-v3/.tox/py310-unit/lib/python3.10/site-packages/pip/_internal/operations/prepare.py",
 line 91, in _get_prepared_distribution
abstract_dist.prepare_distribution_metadata(finder, build_isolation)
  File 
"/Users/briancurtin/elastic/cloud/python-services-v3/.tox/py310-unit/lib/python3.10/site-packages/pip/_internal/distributions/sdist.py",
 line 40, in prepare_distribution_metadata
self.req.prepare_metadata()
  File 
"/Users/briancurtin/elastic/cloud/python-services-v3/.tox/py310-unit/lib/python3.10/site-packages/pip/_internal/req/req_install.py",
 line 550, in prepare_metadata
self.metadata_directory = self._generate_metadata()
  File 
"/Users/briancurtin/elastic/cloud/python-services-v3/.tox/py310-unit/lib/python3.10/site-packages/pip/_internal/req/req_install.py",
 line 525, in _generate_metadata
return generate_metadata_legacy(
  File 
"/Users/briancurtin/elastic/cloud/python-services-v3/.tox/py310-unit/lib/python3.10/site-packages/pip/_internal/operations/build/metadata_legacy.py",
 line 70, in generate_metadata
call_subprocess(
  File 
"/Users/briancurtin/elastic/cloud/python-services-v3/.tox/py310-unit/lib/python3.10/site-packages/pip/_internal/utils/subprocess.py",
 line 185, in call_subprocess
proc = subprocess.Popen(
  File "/usr/local/lib/python3.10/subprocess.py", line 962, in __init__
self._execute_child(args, executable, preexec_fn, close_fds,
  File "/usr/local/lib/python3.10/subprocess.py", line 1773, in _execute_child
self.pid = _posixsubprocess.fork_exec(
RuntimeError: subprocess not supported for isolated subinterpreters
WARNING: You are using pip version 20.1.1; however, version 21.1.2 is available.
You should consider upgrading via the 
'/Users/briancurtin/elastic/cloud/python-services-v3/.tox/py310-unit/bin/python 
-m pip install --upgrade pip' command.

--
messages: 396262
nosy: brian.curtin
priority: normal
severity: normal
status: open
title: "subprocess not sup

[issue28007] Bad .pyc files prevent import of otherwise valid .py files.

2021-02-19 Thread Brian Hulette


Brian Hulette  added the comment:

Hey there, I just came across this bug when looking into a problem with 
corrupted pyc files. Was the patch ever applied? I'm still seeing the original 
behavior in Python 3.7.

Thanks!

--
nosy: +hulettbh

Python tracker <https://bugs.python.org/issue28007>



[issue42974] tokenize reports incorrect end col offset and line string when input ends without explicit newline

2021-01-20 Thread Brian Romanowski


Brian Romanowski  added the comment:

I took a look at Parser/tokenizer.c. From what I can tell, the tokenizer does fake a newline character when the input buffer does not end with actual newline characters, and the returned NEWLINE token has an effective length of 1 because of this faked newline character. That is, tok->cur - tok->start == 1 when the NEWLINE token is returned.

If this part of the C tokenizer is meant to be modeled exactly by the Python 
tokenize module, then the current code is correct.  If there is some wiggle 
room because tok->start and tok->cur are converted into line numbers and column 
offsets, then maybe it's acceptable to change them?  If not, then the current 
documentation is misleading because the newline_token_2.end[1] element from my 
original example is not "...  column where the token ends in the source".  
There is no such column.

I'm not sure whether the C tokenizer exposes anything like 
newline_token_2.string, directly.  If so, does it hold the faked newline 
character or does it hold the empty string like the current tokenize module 
does?

I'm also not sure whether the C tokenizer exposes anything like 
newline_token_2.line.  If it does, I'd be surprised if the faked newline would 
cause this to somehow become the empty string instead of the actual line 
content.  So I'm guessing that current tokenize module's behavior here is still 
a real bug?  If not, then this is another case that might benefit from some 
edge-case documentation.

--

Python tracker <https://bugs.python.org/issue42974>



[issue42974] tokenize reports incorrect end col offset and line string when input ends without explicit newline

2021-01-20 Thread Brian Romanowski


Brian Romanowski  added the comment:

Shoot, I just realized that consistency isn't the important thing here; the most important thing is that the tokenize module exactly matches the output of the Python tokenizer. It's possible that my changes violate that constraint, so I'll take a look at that.

--

Python tracker <https://bugs.python.org/issue42974>



[issue42974] tokenize reports incorrect end col offset and line string when input ends without explicit newline

2021-01-19 Thread Brian Romanowski


Change by Brian Romanowski :


--
keywords: +patch
pull_requests: +23085
stage:  -> patch review
pull_request: https://github.com/python/cpython/pull/24260

Python tracker <https://bugs.python.org/issue42974>



[issue42974] tokenize reports incorrect end col offset and line string when input ends without explicit newline

2021-01-19 Thread Brian Romanowski


New submission from Brian Romanowski :

The tokenize module's tokenizer functions output incorrect (or at least misleading) information when the content being tokenized does not end in a line-ending character. This is related to the fix for issue 33899, which added the NEWLINE tokens for this case but did not fill out the whole token tuple correctly.

The bug can be seen by running a version of the test in 
Lib/test/test_tokenize.py:

import io, tokenize

newline_token_1 = list(tokenize.tokenize(io.BytesIO("x\n".encode('utf-8')).readline))[-2]
newline_token_2 = list(tokenize.tokenize(io.BytesIO("x".encode('utf-8')).readline))[-2]

print(newline_token_1)
print(newline_token_2)

# Prints:
# TokenInfo(type=4 (NEWLINE), string='\n', start=(1, 1), end=(1, 2), line='x\n')
# TokenInfo(type=4 (NEWLINE), string='', start=(1, 1), end=(1, 2), line='')  # bad "end" and "line"!

Notice that "len(newline_token_2.string) == 0" but "newline_token_2.end[1] - 
newline_token_2.start[1] == 1".  Seems more consistent if the 
newline_token_2.end == (1, 1).

Also, newline_token_2.line should hold the physical line rather than the empty 
string.  This would make it consistent with newline_token_1.line.

I'll add a PR shortly with a change so the output from the two cases is:

TokenInfo(type=4 (NEWLINE), string='\n', start=(1, 1), end=(1, 2), line='x\n')
TokenInfo(type=4 (NEWLINE), string='', start=(1, 1), end=(1, 1), line='x')

If this looks reasonable, I can backport it for the other branches.  Thanks!

--
components: Library (Lib)
messages: 385313
nosy: romanows
priority: normal
severity: normal
status: open
title: tokenize reports incorrect end col offset and line string when input 
ends without explicit newline
versions: Python 3.10, Python 3.6, Python 3.7, Python 3.8, Python 3.9

Python tracker <https://bugs.python.org/issue42974>



[issue36702] test_dtrace failed

2020-12-23 Thread Brian Costlow


Brian Costlow  added the comment:

There are actually two different issues here.

dtrace -q will not work on Fedora-based Linux (I haven't tried elsewhere) and that probably should be corrected, but that is NOT what causes the test failure.

The tests' setup checks whether dtrace is usable, and since it is not, those tests are skipped.

However, stap IS usable, so those tests run.

test.test_dtrace.SystemTapOptimizedTests.test_line will always fail because it 
expects files in dtracedata (line.stp and line.stp.expected) that are not there.

I've attached a file showing isolated runs of test_dtrace on a newly built 
Python 3.8.6 on two Centos 7 systems.

The first is against the official Centos 7 Docker container, and stap fails 
because Linuxkit kernel modules are not installed. The test_dtrace check for a 
working stap fails, and all 4 tests are skipped.

The second is against a virtualized Centos 7 where the kernel modules are 
properly installed and stap works.

I don't see how test_dtrace ever passes on a system with a working /bin/stap 
command.

--
nosy: +brian.costlow
Added file: https://bugs.python.org/file49696/test_dtrace_stap_bug.txt

Python tracker <https://bugs.python.org/issue36702>



[issue42096] zipfile.is_zipfile incorrectly identifying a gzipped file as a zip archive

2020-10-27 Thread Brian Kohan


Brian Kohan  added the comment:

I concur with Gregory. It seems that the action here is just to make the very real possibility of false positives apparent in the docs.

In my experience processing data from the wild, I see a pretty high false-positive rate of about 1 in 1,000. I'm sure the probability is a function of the types of files I'm working with. But in any case, is_zipfile can't be made sufficient in and of itself for reliably identifying zip files. It still has utility in weeding out true negatives, though. In my case I don't ever expect to see a self-extracting file or a zip compounded into an executable, so I use the result of is_zipfile as well as a manual check of the magic bytes at the start. So far so good.
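
Roughly, the combined check I use looks like this (just a sketch; the helper name is arbitrary, and requiring the magic at offset 0 intentionally rejects things like self-extracting archives):

```
import zipfile

def looks_like_zip(path):
    # Require both the local-file-header magic at offset 0 and zipfile's own
    # end-of-central-directory scan, to weed out gzip and other false positives.
    with open(path, "rb") as fh:
        has_magic = fh.read(4) == b"PK\x03\x04"
    return has_magic and zipfile.is_zipfile(path)
```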

--

Python tracker <https://bugs.python.org/issue42096>



[issue42096] zipfile.is_zipfile incorrectly identifying a gzipped file as a zip archive

2020-10-22 Thread Brian Kohan


Brian Kohan  added the comment:

Hi all,

I'm experiencing the same issue. I took a look at the is_zipfile code; it seems like it's not checking the start of the file for the magic numbers, but looking deeper in. I presume that's because the magic numbers at the start are considered unreliable for some reason? It seems like this opens the check up to unlucky random false positives, though.

Offending file:

https://www.dropbox.com/s/t2kafn6ek1m2huy/CHPI_Rinex.crx?dl=1

--
nosy: +bckohan
versions: +Python 3.7, Python 3.8

Python tracker <https://bugs.python.org/issue42096>



[issue37426] getpass.getpass not working with on windows when ctrl+v is used to enter the string

2020-08-23 Thread Brian Rutledge


Brian Rutledge  added the comment:

In addition to Ctrl+V, Shift+Insert also doesn't work. The behavior is the same in Command Prompt and PowerShell on Windows 10.

Workarounds include:

- Clicking `Edit > Paste` from the window menu
- Enabling `Properties > Options > Use Ctrl+Shift+C/V as Copy/Paste` from the 
menu
- Use right-click to paste
- Using the new Windows Terminal: 
https://www.microsoft.com/en-us/p/windows-terminal/9n0dx20hk701

I stumbled upon this issue while investigating 
https://github.com/pypa/twine/issues/671. Thanks for writing it up!

--
nosy: +bhrutledge2
versions: +Python 3.10, Python 3.7, Python 3.8, Python 3.9

Python tracker <https://bugs.python.org/issue37426>



[issue29269] test_socket failing in solaris

2020-08-03 Thread Brian Vandenberg


Brian Vandenberg  added the comment:

I accidentally hit submit too early.

I tried changing the code in posixmodule.c to use lseek(), something like the 
following:

offset = lseek( in, 0, SEEK_CUR );

do {
  ret = sendfile(...);
} while( ... );
lseek( in, offset, SEEK_SET );

... however, in addition to sendfile not advancing the file pointer, it also doesn't seem to cause an EOF condition. In my first attempt at the above I was doing this after the loop:

lseek( in, offset, SEEK_CUR );

... and it just kept advancing the file pointer well beyond the end of the file 
and sendfile() had absolutely no qualms about reading beyond the end of the 
file.

I even tried adding a read() after the 2nd lseek to see if I could force an EOF 
condition but that didn't do it.

--

Python tracker <https://bugs.python.org/issue29269>



[issue29269] test_socket failing in solaris

2020-08-03 Thread Brian Vandenberg


Brian Vandenberg  added the comment:

Christian, you did exactly what I needed.  Thank you.

I don't have the means to do a git bisect to find where it broke.  It wasn't a 
problem around 3.3 timeframe and I'm not sure when this sendfile stuff was 
implemented.

The man page for sendfile says "The sendfile() function does not modify the 
current file pointer of in_fd, (...)".  In other words the read pointer for the 
input descriptor won't be advanced.  They expect you to use it like this:

offset = 0;
do {
  ret = sendfile(in, out, &offset, len);
} while( ret < 0 && (errno == EAGAIN || errno == EINTR) );

... though making that change in posixmodule.c would break this test severely 
since the send & receive code is running on the same thread.

In posixmodule.c I don't see anything that attempts to return the number of 
bytes successfully sent.  Since the input file descriptor won't have its read 
pointer advanced, the variable "offset" must be set to the correct offset 
value, otherwise it just keeps reading the first 32k of the file that was 
generated for the test.

--

Python tracker <https://bugs.python.org/issue29269>



[issue29269] test_socket failing in solaris

2020-08-02 Thread Brian Vandenberg


Brian Vandenberg  added the comment:

Solaris will be around for at least another 10-15 years.

The least you could do is look into it and offer some speculations.

--

Python tracker <https://bugs.python.org/issue29269>



[issue41086] Exception for uninstantiated interpolation (configparser)

2020-06-22 Thread Brian Faherty


New submission from Brian Faherty :

The ConfigParser in Lib has a parameter called `interpolation` that expects an instance of a subclass of Interpolation. However, when ConfigParser is given an uninstantiated subclass of Interpolation as the argument, ConfigParser's __init__ accepts it and continues on. This results in the user later receiving an error message along the lines of `TypeError: before_set() missing 1 required positional argument: 'value'` when methods are called on the ConfigParser instance. This delay between the original mistake and the feedback has led to a few bugs being opened on the issue tracker (https://bugs.python.org/issue26831 and https://bugs.python.org/issue26469), both of which were closed after a quick and simple explanation, which can be easily implemented in the library itself.
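
A minimal sketch of the mistake being described and the delayed failure it produces (ExtendedInterpolation is chosen purely for illustration):

```
import configparser

# Passing the Interpolation *class* is accepted silently at construction time...
cp = configparser.ConfigParser(interpolation=configparser.ExtendedInterpolation)
cp["section"] = {"key": "value"}
# TypeError: before_set() missing 1 required positional argument: 'value'

# ...whereas the intended usage passes an *instance*:
cp = configparser.ConfigParser(interpolation=configparser.ExtendedInterpolation())
cp["section"] = {"key": "value"}  # works
```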

I've created a PR for this work and will attach it shortly. Please let me know if there is a better name for the exception than `InterpolationIsNotInstantiatedError`. It seems long, but it is also in line with the other errors already in configparser.

--
components: Library (Lib)
messages: 372137
nosy: Brian Faherty
priority: normal
severity: normal
status: open
title: Exception for uninstantiated interpolation (configparser)
type: behavior
versions: Python 3.10, Python 3.5, Python 3.6, Python 3.7, Python 3.8, Python 
3.9

Python tracker <https://bugs.python.org/issue41086>



[issue39685] Python 3.8 regression Socket operation on non-socket

2020-05-28 Thread Brian May


Brian May  added the comment:

Consensus seems to be that this is a bug in sshuttle, not a bug in python. 
Thanks for the feedback.

I think this bug can be closed now...

--

Python tracker <https://bugs.python.org/issue39685>



[issue40406] MagicMock __aenter__ should be AsyncMock(return_value=MagicMock())

2020-05-01 Thread Brian Curtin


Brian Curtin  added the comment:

graingert: Do you have a workaround for this? I'm doing roughly the same thing 
with an asyncpg connection pool nested with a transaction and am getting 
nowhere.


async with pg_pool.acquire() as conn:
    async with conn.transaction():
        ...
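
For concreteness, a sketch of the kind of setup I am after (the pg_pool/conn names mirror the asyncpg-style snippet above; this is my own attempt, not an established recipe). MagicMock pre-configures __aenter__ as an AsyncMock, so pointing its return_value at another MagicMock makes the nested `async with` blocks usable:

```
import asyncio
from unittest import mock

pg_pool = mock.MagicMock()
conn = mock.MagicMock()
# Make "async with pg_pool.acquire()" yield our conn mock, and make the
# nested transaction context yield a MagicMock rather than an AsyncMock.
pg_pool.acquire.return_value.__aenter__.return_value = conn
conn.transaction.return_value.__aenter__.return_value = mock.MagicMock()

async def demo():
    async with pg_pool.acquire() as c:
        async with c.transaction():
            assert c is conn

asyncio.run(demo())
```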

--
nosy: +brian.curtin

Python tracker <https://bugs.python.org/issue40406>



[issue40274] 3D plotting library doesn't occlude objects by depth/perspective (objects in the scene are printed in layers).

2020-04-13 Thread Brian O'Sullivan


Brian O'Sullivan  added the comment:

matplotlib yes
Will do.

--
resolution: third party -> 
status: closed -> open

Python tracker <https://bugs.python.org/issue40274>



[issue40274] 3D plotting library doesn't occlude objects by depth/perspective (objects in the scene are printed in layers).

2020-04-13 Thread Brian O'Sullivan


New submission from Brian O'Sullivan :

3D plotting library doesn't occlude objects by depth/perspective.

When the figure is regenerated during the animation, each additional line and 
points is printed on top of the prior lines and points.

Bug resolution:
 - incorporate occlusions in the 3D plotting / animation library

--
components: Library (Lib)
files: hopf1.py
messages: 366334
nosy: mo-geometry
priority: normal
severity: normal
status: open
title: 3D plotting library doesn't occlude objects by depth/perspective 
(objects in the scene are printed in layers).
type: enhancement
versions: Python 3.6
Added file: https://bugs.python.org/file49060/hopf1.py

Python tracker <https://bugs.python.org/issue40274>



[issue39645] Expand concurrent.futures.Future's public API

2020-03-02 Thread Brian Quinlan


Brian Quinlan  added the comment:

I'll try to take a look at this before the end of the week, but I'm currently 
swamped with other life stuff :-(

--

Python tracker <https://bugs.python.org/issue39645>



[issue39685] Python 3.8 regression Socket operation on non-socket

2020-02-18 Thread Brian May


New submission from Brian May :

After upgrading to Python 3.8, users of sshuttle report seeing this error:

Traceback (most recent call last):
  File "", line 1, in 
  File "assembler.py", line 38, in 
  File "sshuttle.server", line 298, in main
  File "/usr/lib/python3.8/socket.py", line 544, in fromfd
return socket(family, type, proto, nfd)
  File "/usr/lib/python3.8/socket.py", line 231, in __init__
_socket.socket.__init__(self, family, type, proto, fileno)
OSError: [Errno 88] Socket operation on non-socket

https://github.com/sshuttle/sshuttle/issues/381

The cause of the error is this line: 
https://github.com/sshuttle/sshuttle/blob/6ad4473c87511bcafaec3d8d0c69dfcb166b48ed/sshuttle/server.py#L297
 which does:

socket.fromfd(sys.stdin.fileno(), socket.AF_INET, socket.SOCK_STREAM)
socket.fromfd(sys.stdout.fileno(), socket.AF_INET, socket.SOCK_STREAM)

Where sys.stdin and sys.stdout are stdin/stdout provided by the ssh server when 
it ran our remote ssh process.

I believe this change in behavior is as a result of a fix for the following 
bug: https://bugs.python.org/issue35415

I am wondering if this is a bug in Python for causing such a regression, or a 
bug in sshuttle. Possibly sshuttle is using socket.fromfd in a way that was 
never intended?

Would appreciate an authoritative answer on this.

Thanks

--
components: IO
messages: 362255
nosy: brian
priority: normal
severity: normal
status: open
title: Python 3.8 regression Socket operation on non-socket
versions: Python 3.8

Python tracker <https://bugs.python.org/issue39685>



[issue39205] Hang when interpreter exits after ProcessPoolExecutor.shutdown(wait=False)

2020-01-27 Thread Brian Quinlan


Brian Quinlan  added the comment:


New changeset 884eb89d4a5cc8e023deaa65001dfa74a436694c by Brian Quinlan in 
branch 'master':
bpo-39205: Tests that highlight a hang on ProcessPoolExecutor shutdown (#18221)
https://github.com/python/cpython/commit/884eb89d4a5cc8e023deaa65001dfa74a436694c


--

Python tracker <https://bugs.python.org/issue39205>



[issue39205] Hang when interpreter exits after ProcessPoolExecutor.shutdown(wait=False)

2020-01-27 Thread Brian Quinlan


Change by Brian Quinlan :


--
keywords: +patch
pull_requests: +17601
stage:  -> patch review
pull_request: https://github.com/python/cpython/pull/18221

Python tracker <https://bugs.python.org/issue39205>



[issue39251] outdated windows store links in WindowsApps folder

2020-01-07 Thread Brian McKim


New submission from Brian McKim :

When I uninstalled the Windows Store version of 3.8, it appears to have placed two links in my \AppData\Local\Microsoft\WindowsApps folder (though they may have always been there): python.exe and python3.exe. When I run these in PowerShell, both send me to the 3.7 version in the store. There is a note on that page stating that the version is not guaranteed to be stable and pointing users to the 3.8 version. As these links exist to make installation as painless as possible, they should point to the most stable version: 3.8 in the store.

--
components: Windows
files: Annotation 2020-01-07 161226.png
messages: 359547
nosy: Brian McKim, paul.moore, steve.dower, tim.golden, zach.ware
priority: normal
severity: normal
status: open
title: outdated windows store links in WindowsApps folder
type: enhancement
versions: Python 3.7, Python 3.8
Added file: https://bugs.python.org/file48831/Annotation 2020-01-07 161226.png

Python tracker <https://bugs.python.org/issue39251>



[issue39205] Hang when interpreter exits after ProcessPoolExecutor.shutdown(wait=False)

2020-01-03 Thread Brian Quinlan


New submission from Brian Quinlan :

```
from concurrent.futures import ProcessPoolExecutor
import time

t = ProcessPoolExecutor(max_workers=3)
t.map(time.sleep, [1,2,3])
t.shutdown(wait=False)
```

Results in this exception and then a hang (i.e. Python doesn't terminate):
```
Exception in thread QueueManagerThread:
Traceback (most recent call last):
  File "/usr/local/google/home/bquinlan/cpython/Lib/threading.py", line 944, in 
_bootstrap_inner
self.run()
  File "/usr/local/google/home/bquinlan/cpython/Lib/threading.py", line 882, in 
run
self._target(*self._args, **self._kwargs)
  File 
"/usr/local/google/home/bquinlan/cpython/Lib/concurrent/futures/process.py", 
line 352, in _queue_management_worker
_add_call_item_to_queue(pending_work_items,
  File 
"/usr/local/google/home/bquinlan/cpython/Lib/concurrent/futures/process.py", 
line 280, in _add_call_item_to_queue
call_queue.put(_CallItem(work_id,
  File "/usr/local/google/home/bquinlan/cpython/Lib/multiprocessing/queues.py", 
line 82, in put
raise ValueError(f"Queue {self!r} is closed")
ValueError: Queue  is closed
```

--
assignee: bquinlan
messages: 359257
nosy: bquinlan
priority: normal
severity: normal
status: open
title: Hang when interpreter exits after 
ProcessPoolExecutor.shutdown(wait=False)
type: behavior
versions: Python 3.9

Python tracker <https://bugs.python.org/issue39205>



[issue38923] Spurious OSError "Not enough memory resources" when allocating memory using multiprocessing.RawArray

2019-11-26 Thread Brian Kardon


Brian Kardon  added the comment:

Ah, thank you so much! That makes sense. I'll have to switch to 64-bit python. 

I've marked this as closed - hope that's the right thing to do here.

--
resolution:  -> not a bug
stage:  -> resolved
status: open -> closed

Python tracker <https://bugs.python.org/issue38923>



[issue38923] Spurious OSError "Not enough memory resources" when allocating memory using multiprocessing.RawArray

2019-11-26 Thread Brian Kardon


New submission from Brian Kardon :

When I try to create a series of multiprocessing.RawArray objects, I get an 
"OSError: Not enough memory resources are available to process this command". 
However, by my calculations, the total amount of memory I'm trying to allocate 
is just about 1 GB, and my system reports that it has around 20 GB free memory. 
I can't find any information about any artificial memory limit, and my system 
seems to have plenty of memory for this, so it seems like a bug to me. This is 
my first issue report, so I apologize if I'm doing this wrong.

The attached script produces the following output on my Windows 10 64-bit 
machine with Python 3.7:

Creating buffer # 0
Creating buffer # 1
Creating buffer # 2
Creating buffer # 3
Creating buffer # 4
...etc...
Creating buffer # 276
Creating buffer # 277
Creating buffer # 278
Creating buffer # 279
Creating buffer # 280
Traceback (most recent call last):
  File ".\Cruft\memoryErrorTest.py", line 10, in 
buffers.append(mp.RawArray(imageDataType, imageDataSize))
  File "C:\Users\Brian 
Kardon\AppData\Local\Programs\Python\Python37-32\lib\multiprocessing\context.py",
 line 129, in RawArray
return RawArray(typecode_or_type, size_or_initializer)
  File "C:\Users\Brian 
Kardon\AppData\Local\Programs\Python\Python37-32\lib\multiprocessing\sharedctypes.py",
 line 61, in RawArray
obj = _new_value(type_)
  File "C:\Users\Brian 
Kardon\AppData\Local\Programs\Python\Python37-32\lib\multiprocessing\sharedctypes.py",
 line 41, in _new_value
wrapper = heap.BufferWrapper(size)
  File "C:\Users\Brian 
Kardon\AppData\Local\Programs\Python\Python37-32\lib\multiprocessing\heap.py", 
line 263, in __init__
    block = BufferWrapper._heap.malloc(size)
  File "C:\Users\Brian 
Kardon\AppData\Local\Programs\Python\Python37-32\lib\multiprocessing\heap.py", 
line 242, in malloc
(arena, start, stop) = self._malloc(size)
  File "C:\Users\Brian 
Kardon\AppData\Local\Programs\Python\Python37-32\lib\multiprocessing\heap.py", 
line 134, in _malloc
arena = Arena(length)
  File "C:\Users\Brian 
Kardon\AppData\Local\Programs\Python\Python37-32\lib\multiprocessing\heap.py", 
line 38, in __init__
buf = mmap.mmap(-1, size, tagname=name)
OSError: [WinError 8] Not enough memory resources are available to process this 
command

--
components: Library (Lib), Windows, ctypes
files: memoryErrorTest.py
messages: 357531
nosy: bkardon, paul.moore, steve.dower, tim.golden, zach.ware
priority: normal
severity: normal
status: open
title: Spurious OSError "Not enough memory resources" when allocating memory 
using multiprocessing.RawArray
type: behavior
versions: Python 3.7
Added file: https://bugs.python.org/file48744/memoryErrorTest.py

Python tracker <https://bugs.python.org/issue38923>



[issue38478] inspect.signature.bind does not correctly handle keyword argument with same name as positional-only parameter

2019-10-14 Thread Brian Shaginaw


New submission from Brian Shaginaw :

>>> import inspect
>>> def foo(bar, /, **kwargs):
...   print(bar, kwargs)
...
>>> foo(1, bar=2)
1 {'bar': 2}
>>> inspect.signature(foo).bind(1, bar=2)
Traceback (most recent call last):
  File "", line 1, in 
  File 
"/Library/Frameworks/Python.framework/Versions/3.8/lib/python3.8/inspect.py", 
line 3025, in bind
return self._bind(args, kwargs)
  File 
"/Library/Frameworks/Python.framework/Versions/3.8/lib/python3.8/inspect.py", 
line 2964, in _bind
raise TypeError(
TypeError: multiple values for argument 'bar'


Python 3.8 introduced positional-only parameters, which allow parameter names 
to remain available for use in **kwargs. It looks like `inspect.signature.bind` 
does not recognize this, and thinks the parameter is being passed twice, which 
causes the above TypeError.

Expected result: bind() succeeds, mapping bar=1 with kwargs={'bar': 2}, mirroring the actual call above.

--
components: Library (Lib)
messages: 354683
nosy: brian.shaginaw
priority: normal
severity: normal
status: open
title: inspect.signature.bind does not correctly handle keyword argument with 
same name as positional-only parameter
type: behavior
versions: Python 3.8

Python tracker <https://bugs.python.org/issue38478>



[issue38029] Should io.TextIOWrapper raise an error at instantiation if a StringIO is passed as 'buffer'?

2019-09-04 Thread Brian Skinn


New submission from Brian Skinn :

If I read the docs correctly, io.TextIOWrapper is meant to provide a str-typed 
interface to an underlying bytes stream.

If a TextIOWrapper is instantiated with the underlying buffer=io.StringIO(), it 
breaks:

>>> import io
>>> tw = io.TextIOWrapper(io.StringIO())
>>> tw.write(b'abcd\n')
Traceback (most recent call last):
  File "", line 1, in 
TypeError: write() argument must be str, not bytes
>>> tw.write('abcd\n')
5
>>> tw.read()
Traceback (most recent call last):
  File "", line 1, in 
TypeError: string argument expected, got 'bytes'
>>> tw.read(1)
Traceback (most recent call last):
  File "", line 1, in 
TypeError: underlying read() should have returned a bytes-like object, not 'str'



Would it be better for TextIOWrapper to fail earlier, at instantiation-time, 
for this kind of (unrecoverably?) broken type mismatch?
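
For contrast, a quick sketch of the pairing TextIOWrapper is designed for, with a bytes buffer underneath:

```
import io

tw = io.TextIOWrapper(io.BytesIO(), encoding="utf-8")
tw.write("abcd\n")   # str in...
tw.seek(0)
print(tw.read())     # ...str back out; the bytes live in the underlying BytesIO
```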

--
components: Library (Lib)
messages: 351139
nosy: bskinn
priority: normal
severity: normal
status: open
title: Should io.TextIOWrapper raise an error at instantiation if a StringIO is 
passed as 'buffer'?
type: behavior
versions: Python 3.8, Python 3.9

Python tracker <https://bugs.python.org/issue38029>



[issue36714] Tweak doctest 'example' regex to allow a leading ellipsis in 'want' line

2019-08-09 Thread Brian Skinn


Brian Skinn  added the comment:

On reflection, it would probably be better to limit the ELLIPSIS to 3 or 4 
periods ('[.]{3,4}'); otherwise, it would be impossible to express an ellipsis 
followed by a period in a 'want'.

--

Python tracker <https://bugs.python.org/issue36714>



[issue36714] Tweak doctest 'example' regex to allow a leading ellipsis in 'want' line

2019-08-09 Thread Brian Skinn


Brian Skinn  added the comment:

I suppose one alternative solution might be to tweak the ELLIPSIS feature of 
doctest, such that it would interpret a run of >=3 periods in a row (matching 
regex pattern of "[.]{3,}") as 'ellipsis'.

The regex for PS2 could then have a negative lookahead added, so that it *only* 
matches three periods, plus optionally other content: '\.\.\.(?!\.)'

That way, a line like "... foo" would retain the current meaning of "'source' 
line, consisting of PS2 plus the identifier 'foo'", but the meaning of 
"arbitrary content followed by ' foo'" could be achieved by " foo", since 
the leading "" would NOT match the negative lookahead for PS2.

In other situations, where "..." is *not* the leading non-whitespace content, 
the old behavior suffices: the PS2 regex won't match anyways, so it'll be left 
for ELLIPSIS to process.
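
A sketch of the proposed distinction (the patterns here are illustrative, not doctest's actual internals): PS2 would match exactly three periods, while a run of four or more would be left for ELLIPSIS to handle.

```
import re

PS2 = re.compile(r"\.\.\.(?!\.)")
print(bool(PS2.match("... foo")))   # True  -> parsed as a continued 'source' line
print(bool(PS2.match(".... foo")))  # False -> left available as ellipsis content
```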

--

Python tracker <https://bugs.python.org/issue36714>



[issue36714] Tweak doctest 'example' regex to allow a leading ellipsis in 'want' line

2019-08-09 Thread Brian Skinn

Brian Skinn  added the comment:

Mm, agreed--that regex wouldn't be hard to write.

The problem is, AFAICT there's irresolvable syntactic ambiguity in a line 
starting with exactly three periods, if the doctest PS2 specification is not 
constrained to be exactly "... ". In such a case, "..." could mark either (1) 
an ellipsis standing in for an entire line of 'want', or (2) a PS2, marking a 
blank line in 'source'.

I don't really think aggressive lookahead would help much -- an arbitrary 
number of following lines could contain exactly "...", and the intended 
transition from 'source' to 'want' could lie at any one of them.  The 
nonrecursive nature of regex is unhelpful here, but I don't think one could 
even write a recursive-descent parser, or similar, that could be 100% reliable 
on a single comparison. It would have to test the string against all the 
various splits between 'source' and 'want' along those "..." lines, and see if 
any match. Hairy mess.

AFAICT, defining "... " as PS2, and "..." as 'ellipsis representing a whole 
line' is the cleanest solution from a logical point of view.

Of course, then it's *visually* confusing, because trailing space. ¯\_(ツ)_/¯

--

Python tracker <https://bugs.python.org/issue36714>



[issue37699] Explicit mention of raised ValueError's after .detach() of underlying IO buffer

2019-07-28 Thread Brian Skinn


New submission from Brian Skinn :

Once the underlying buffer/stream is .detach()ed from an instance of a subclass 
of TextIOBase or BufferedIOBase, accession of most attributes defined on 
TextIOBase/BufferedIOBase or the IOBase parent, as well as calling of most 
methods defined on TextIOBase/BufferedIOBase/IOBase, results in raising of a 
ValueError.

Currently, the documentation of both .detach() methods states simply:

> After the raw stream has been detached, the buffer is in an unusable state.

I propose augmenting the above to something like the following in the docs for 
both .detach() methods, to make this behavior more explicit:

> After the raw stream has been detached, the buffer
> is in an unusable state. As a result, accessing/calling most
> attributes/methods of either :class:`BufferedIOBase` or its
> :class:`IOBase` parent will raise :exc:`ValueError`.

I confirm that the error raised for both `BufferedReader` and `TextIOWrapper` 
after `.detach()` *is* ValueError in all of 3.5.7, 3.6.8, 3.7.3, and 3.8.0b1.
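
For reference, a minimal snippet (illustration only; the exact error messages may vary 
between versions) showing the behaviour the proposed wording would describe:

```
import io

wrapper = io.TextIOWrapper(io.BytesIO(b"spam"), encoding="utf-8")
wrapper.detach()                   # the underlying buffer is now detached
try:
    wrapper.read()
except ValueError as exc:
    print("TextIOWrapper:", exc)   # e.g. "underlying buffer has been detached"

reader = io.BufferedReader(io.BytesIO(b"eggs"))
reader.detach()
try:
    reader.peek()
except ValueError as exc:
    print("BufferedReader:", exc)  # e.g. "raw stream has been detached"
```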

--
assignee: docs@python
components: Documentation
messages: 348594
nosy: bskinn, docs@python
priority: normal
severity: normal
status: open
title: Explicit mention of raised ValueError's after .detach() of underlying IO 
buffer
type: enhancement
versions: Python 3.5, Python 3.6, Python 3.7, Python 3.8, Python 3.9

___
Python tracker 
<https://bugs.python.org/issue37699>
___
___
Python-bugs-list mailing list
Unsubscribe: 
https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue37699] Explicit mention of raised ValueError's after .detach() of underlying IO buffer

2019-07-28 Thread Brian Skinn


Change by Brian Skinn :


--
type: enhancement -> behavior

___
Python tracker 
<https://bugs.python.org/issue37699>
___
___
Python-bugs-list mailing list
Unsubscribe: 
https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue31783] Race condition in ThreadPoolExecutor when scheduling new jobs while the interpreter shuts down

2019-07-01 Thread Brian Quinlan


Brian Quinlan  added the comment:

Can I add "needs backport to 3.8" and "needs backport to 3.7" labels now or
do I have to use cherry_picker at this point?

On Mon, Jul 1, 2019 at 3:55 PM Ned Deily  wrote:

>
> Ned Deily  added the comment:
>
> > I don't know what the backport policy is.
>
> It does seem that the devguide does not give much guidance on this; I've
> opened an issue about it (https://github.com/python/devguide/issues/503).
> But, in general, if the fix is potentially beneficial and does not add
> undue risk or an incompatibility, we would generally consider backporting
> it to the currently active maintenance branches; at the moment, that would
> be 3.8 (in beta phase) and 3.7 (maintenance mode).  We have a lot of
> buildbot tests that show non-deterministic failures, some possibly due to
> concurrent.futures.  If there is a chance that this fix might mitigate
> those, I'd like to consider it for backporting.
>
> --
>
> ___
> Python tracker 
> <https://bugs.python.org/issue31783>
> ___
>
>

--

___
Python tracker 
<https://bugs.python.org/issue31783>
___
___
Python-bugs-list mailing list
Unsubscribe: 
https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue31783] Race condition in ThreadPoolExecutor when scheduling new jobs while the interpreter shuts down

2019-07-01 Thread Brian Quinlan


Brian Quinlan  added the comment:

I don't know what the backport policy is. The bug is only theoretical AFAIK
i.e. someone noticed it through code observation but it has not appeared in
the wild.

On Mon, Jul 1, 2019 at 3:25 PM Ned Deily  wrote:

>
> Ned Deily  added the comment:
>
> Brian, should this fix be backported to 3.8 and 3.7?  As it stands now, it
> will only be showing up in Python 3.9 unless you add the backport labels to
> the original PR.  Unless it can be shown to be a security issue, it would
> not be appropriate to backport to 3.6 at this stage.
>
> --
> nosy: +ned.deily
>
> ___
> Python tracker 
> <https://bugs.python.org/issue31783>
> ___
>
>

--

___
Python tracker 
<https://bugs.python.org/issue31783>
___
___
Python-bugs-list mailing list
Unsubscribe: 
https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue31783] Race condition in ThreadPoolExecutor when scheduling new jobs while the interpreter shuts down

2019-06-28 Thread Brian Quinlan


Change by Brian Quinlan :


--
resolution:  -> fixed
stage: patch review -> resolved
status: open -> closed

___
Python tracker 
<https://bugs.python.org/issue31783>
___
___
Python-bugs-list mailing list
Unsubscribe: 
https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue31783] Race condition in ThreadPoolExecutor when scheduling new jobs while the interpreter shuts down

2019-06-28 Thread Brian Quinlan


Brian Quinlan  added the comment:


New changeset 242c26f53edb965e9808dd918089e664c0223407 by Brian Quinlan in 
branch 'master':
bpo-31783: Fix a race condition creating workers during shutdown (#13171)
https://github.com/python/cpython/commit/242c26f53edb965e9808dd918089e664c0223407


--

___
Python tracker 
<https://bugs.python.org/issue31783>
___
___
Python-bugs-list mailing list
Unsubscribe: 
https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue37294] concurrent.futures.ProcessPoolExecutor and multiprocessing.pool.Pool fail with super

2019-06-28 Thread Brian Quinlan


Change by Brian Quinlan :


--
resolution:  -> duplicate
stage:  -> resolved
status: open -> closed
superseder:  -> function changed when pickle bound method object

___
Python tracker 
<https://bugs.python.org/issue37294>
___
___
Python-bugs-list mailing list
Unsubscribe: 
https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue37208] Weird exception behaviour in ProcessPoolExecutor

2019-06-28 Thread Brian Quinlan


Change by Brian Quinlan :


--
stage:  -> resolved
status: open -> closed

___
Python tracker 
<https://bugs.python.org/issue37208>
___
___
Python-bugs-list mailing list
Unsubscribe: 
https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue37208] Weird exception behaviour in ProcessPoolExecutor

2019-06-14 Thread Brian Quinlan


Change by Brian Quinlan :


--
resolution:  -> duplicate
superseder:  -> pickle cannot dump Exception subclasses with different super() 
args

___
Python tracker 
<https://bugs.python.org/issue37208>
___
___
Python-bugs-list mailing list
Unsubscribe: 
https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue37287] pickle cannot dump Exception subclasses with different super() args

2019-06-14 Thread Brian Quinlan


Change by Brian Quinlan :


--
title: pickle cannot dump exceptions subclasses with different super() args -> 
pickle cannot dump Exception subclasses with different super() args

___
Python tracker 
<https://bugs.python.org/issue37287>
___
___
Python-bugs-list mailing list
Unsubscribe: 
https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue37287] pickle cannot dump exceptions subclasses with different super() args

2019-06-14 Thread Brian Quinlan


New submission from Brian Quinlan :

$ ./python.exe nopickle.py
TypeError: __init__() missing 1 required positional argument: 'num'

The issue is that the arguments passed to Exception.__init__ (via `super()`) 
are collected into `args` and then serialized by pickle e.g.

>>> PickleBreaker(5).args
()
>>> PickleBreaker(5).__reduce_ex__(3)
(<class '__main__.PickleBreaker'>, (), {'num': 5})
>>> # The 1st index is the `args` tuple

Then, during load, the `args` tuple is used to initialize the Exception i.e. 
PickleBreaker(), which results in the `TypeError`

See https://github.com/python/cpython/blob/master/Modules/_pickle.c#L6769
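
For anyone bitten by this, a short sketch of the usual workaround (a change to the 
subclass, not to pickle): forward the constructor arguments to Exception.__init__ so 
they land in `args` and get replayed on load:

```
import pickle

class PickleFriendly(Exception):
    def __init__(self, num):
        super().__init__(num)    # args == (num,), so pickle can re-call __init__
        self.num = num

restored = pickle.loads(pickle.dumps(PickleFriendly(5)))
print(restored.num)              # 5
```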

--
components: Library (Lib)
files: nopickle.py
messages: 345647
nosy: bquinlan
priority: normal
severity: normal
status: open
title: pickle cannot dump exceptions subclasses with different super() args
versions: Python 3.7, Python 3.8, Python 3.9
Added file: https://bugs.python.org/file48420/nopickle.py

___
Python tracker 
<https://bugs.python.org/issue37287>
___
___
Python-bugs-list mailing list
Unsubscribe: 
https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue37208] Weird exception behaviour in ProcessPoolExecutor

2019-06-14 Thread Brian Quinlan


Brian Quinlan  added the comment:

That's a super interesting bug! It looks like this issue is that your exception 
can't be round-tripped using pickle i.e.

>>> class PoolBreaker(Exception):
...     def __init__(self, num):
...         super().__init__()
...         self.num = num
... 
>>> import pickle
>>> pickle.loads(pickle.dumps(PoolBreaker(5)))
Traceback (most recent call last):
  File "", line 1, in 
TypeError: __init__() missing 1 required positional argument: 'num'

--
assignee:  -> bquinlan

___
Python tracker 
<https://bugs.python.org/issue37208>
___
___
Python-bugs-list mailing list
Unsubscribe: 
https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue24980] Allow for providing 'on_new_thread' callback to 'concurrent.futures.ThreadPoolExecutor'

2019-06-14 Thread Brian Quinlan


Brian Quinlan  added the comment:

Joshua, I'm closing this since I haven't heard from you in a month. Please 
re-open if you use case isn't handled by `initializer` and `initargs`.

--
assignee:  -> bquinlan
resolution:  -> out of date
stage:  -> resolved
status: open -> closed

___
Python tracker 
<https://bugs.python.org/issue24980>
___
___
Python-bugs-list mailing list
Unsubscribe: 
https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue37134] Use PEP570 syntax in the documentation

2019-06-06 Thread Brian Skinn


Brian Skinn  added the comment:

:thumbsup:

Glad I happened to be in the right place at the right time to put it together. 
I'll leave the tabslash repo up for future reference.

--

___
Python tracker 
<https://bugs.python.org/issue37134>
___
___
Python-bugs-list mailing list
Unsubscribe: 
https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue37134] Use PEP570 syntax in the documentation

2019-06-05 Thread Brian Skinn


Brian Skinn  added the comment:

Brett, to be clear, this sounds like the tabbed solution is not going to be 
used at this point? If so, I'll pull down the tabbed docs I'm hosting.

--

___
Python tracker 
<https://bugs.python.org/issue37134>
___
___
Python-bugs-list mailing list
Unsubscribe: 
https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue37134] Use PEP570 syntax in the documentation

2019-06-05 Thread Brian Skinn


Brian Skinn  added the comment:

First, for anyone interested, there are screenshots and links to docs versions 
at the SC GH issue 
(https://github.com/python/steering-council/issues/12#issuecomment-498856524, 
and following) where we're exploring what the tabbed approach to the PEP570 
docs might look like.

Second, I'd like to suggest an additional aspect of the docs situation for 
consideration.

Regardless of which specific approach is decided upon here, it seems likely the 
consensus will be a course of action requiring touching the majority 
(entirety?) of the stdlib API docs. Currently, all of the API docs are manually 
curated in the .rst, at minimum for the reasons laid out by Victor in bpo25461 
(https://bugs.python.org/issue25461#msg253381). If there's any chance that the 
balance of factors has changed such that using autodoc (or similar) *would* now 
be preferable, IWSTM that now is a good time to have that discussion, so that 
an autodoc conversion could be done at the same time as the PEP570 rework.

--

___
Python tracker 
<https://bugs.python.org/issue37134>
___
___
Python-bugs-list mailing list
Unsubscribe: 
https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue37134] [EASY] Use PEP570 syntax in the documentation

2019-06-04 Thread Brian Skinn


Change by Brian Skinn :


--
nosy: +bskinn

___
Python tracker 
<https://bugs.python.org/issue37134>
___
___
Python-bugs-list mailing list
Unsubscribe: 
https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue37084] _ctypes not failing, can't find reason

2019-05-28 Thread Brian Spratke


New submission from Brian Spratke :

I am trying to cross-compile Python 3.7 for Android. I have Python building, 
but I keep getting an error that _ctypes failed to build, and I see nothing 
that jumps out as a reason.

_ctypes_test builds, before that I see this INFO message
INFO: Can't locate Tcl/Tk libs and/or headers

The grpmodule and crypt module have issues as well, but I do not feel that 
those are related.

Are there any other ideas people can throw out?

--
components: Cross-Build
files: Python3.7Output.zip
messages: 343822
nosy: Alex.Willmer, Brian Spratke
priority: normal
severity: normal
status: open
title: _ctypes not failing, can't find reason
versions: Python 3.7
Added file: https://bugs.python.org/file48375/Python3.7Output.zip

___
Python tracker 
<https://bugs.python.org/issue37084>
___
___
Python-bugs-list mailing list
Unsubscribe: 
https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue36780] Interpreter exit blocks waiting for futures of shut-down ThreadPoolExecutors

2019-05-09 Thread Brian Quinlan


Brian Quinlan  added the comment:

We can bike shed over the name in the PR ;-)

--

___
Python tracker 
<https://bugs.python.org/issue36780>
___
___
Python-bugs-list mailing list
Unsubscribe: 
https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue26903] ProcessPoolExecutor(max_workers=64) crashes on Windows

2019-05-09 Thread Brian Quinlan


Change by Brian Quinlan :


--
resolution:  -> fixed
stage: patch review -> resolved
status: open -> closed

___
Python tracker 
<https://bugs.python.org/issue26903>
___
___
Python-bugs-list mailing list
Unsubscribe: 
https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue32679] concurrent.futures should store full sys.exc_info()

2019-05-09 Thread Brian Quinlan


Brian Quinlan  added the comment:

My understanding is that tracebacks have a pretty large memory profile so I'd 
rather not keep them alive. Correct me if I'm wrong about that.
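
One possible middle ground (a sketch only, not what concurrent.futures currently does): 
keep the formatted traceback text rather than the traceback object, which preserves the 
information without keeping frames and their locals alive.

```
import traceback

def format_exc_info(exc):
    return "".join(traceback.format_exception(type(exc), exc, exc.__traceback__))

try:
    1 / 0
except ZeroDivisionError as exc:
    tb_text = format_exc_info(exc)

print(tb_text.splitlines()[-1])   # ZeroDivisionError: division by zero
```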

--

___
Python tracker 
<https://bugs.python.org/issue32679>
___
___
Python-bugs-list mailing list
Unsubscribe: 
https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue24882] ThreadPoolExecutor doesn't reuse threads until #threads == max_workers

2019-05-08 Thread Brian Quinlan


Brian Quinlan  added the comment:

After playing with it for a while, https://github.com/python/cpython/pull/6375 
seems reasonable to me.

It needs tests and some documentation.

Antoine, are you still -1 because of the complexity increase?

--

___
Python tracker 
<https://bugs.python.org/issue24882>
___
___
Python-bugs-list mailing list
Unsubscribe: 
https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue24882] ThreadPoolExecutor doesn't reuse threads until #threads == max_workers

2019-05-08 Thread Brian Quinlan


Brian Quinlan  added the comment:

When I first wrote and started using ThreadPoolExecutor, I had a lot of code 
like this:

with ThreadPoolExecutor(max_workers=500) as e:
  e.map(download, images)

I didn't expect that `images` would be a large list but, if it was, I wanted 
all of the downloads to happen in parallel.

I didn't want to have to explicitly take into account the list size when 
starting the executor (e.g. max_workers=min(500, len(images))), but I also didn't 
want to create 500 threads up front when I only needed a few.

My use case involved transient ThreadPoolExecutors so I didn't have to worry 
about idle threads.

In principle, I'd be OK with trying to avoid unnecessary thread creation if the 
implementation can be simple and efficient enough.

https://github.com/python/cpython/pull/6375 seems simple enough but I haven't 
convinced myself that it works yet ;-)

--

___
Python tracker 
<https://bugs.python.org/issue24882>
___
___
Python-bugs-list mailing list
Unsubscribe: 
https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue30006] Deadlocks in `concurrent.futures.ProcessPoolExecutor`

2019-05-08 Thread Brian Quinlan


Brian Quinlan  added the comment:

Was this fixed by https://github.com/python/cpython/pull/3895 ?

--

___
Python tracker 
<https://bugs.python.org/issue30006>
___
___
Python-bugs-list mailing list
Unsubscribe: 
https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue36395] Add deferred single-threaded/fake executor to concurrent.futures

2019-05-08 Thread Brian Quinlan


Brian Quinlan  added the comment:

Brian, I was looking for an example where the current executor isn't sufficient 
for testing i.e. a useful test that would be difficult to write with a real 
executor but would be easier with a fake.

Maybe you have such an example from your tests?

--

___
Python tracker 
<https://bugs.python.org/issue36395>
___
___
Python-bugs-list mailing list
Unsubscribe: 
https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue31783] Race condition in ThreadPoolExecutor when scheduling new jobs while the interpreter shuts down

2019-05-07 Thread Brian Quinlan


Change by Brian Quinlan :


--
pull_requests: +13087
stage:  -> patch review

___
Python tracker 
<https://bugs.python.org/issue31783>
___
___
Python-bugs-list mailing list
Unsubscribe: 
https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue31783] Race condition in ThreadPoolExecutor when scheduling new jobs while the interpreter shuts down

2019-05-07 Thread Brian Quinlan


Brian Quinlan  added the comment:

I think that ProcessPoolExecutor might have a similar race condition - but not 
in exactly this code path since it would only be with the queue management 
thread (which is only started once).

--

___
Python tracker 
<https://bugs.python.org/issue31783>
___
___
Python-bugs-list mailing list
Unsubscribe: 
https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue36395] Add deferred single-threaded/fake executor to concurrent.futures

2019-05-07 Thread Brian McCutchon


Brian McCutchon  added the comment:

No, I do not have such an example, as most of my tests try to fake the 
executors.

--

___
Python tracker 
<https://bugs.python.org/issue36395>
___
___
Python-bugs-list mailing list
Unsubscribe: 
https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue31783] Race condition in ThreadPoolExecutor when scheduling new jobs while the interpreter shuts down

2019-05-07 Thread Brian Quinlan

Brian Quinlan  added the comment:

Great report Steven!

I was able to reproduce this with the attached patch (just adds some sleeps and 
prints) and this script:

from threading import current_thread
from concurrent.futures import ThreadPoolExecutor
from time import sleep

pool = ThreadPoolExecutor(100)

def f():
print("I'm running in: ", current_thread().name)

def g():
print("I'm running in: ", current_thread().name)
for _ in range(100):
pool.submit(f)
sleep(0.1)

pool.submit(g)
sleep(1.5)

The output for me was:

 Creating new thread:  ThreadPoolExecutor-0_0
I'm running in:  ThreadPoolExecutor-0_0
Setting _shutdown
 Killing 1 workers 
 Creating new thread:  ThreadPoolExecutor-0_1
I'm running in:  ThreadPoolExecutor-0_1

So another thread was created *after* shutdown.

It seems like the most obvious way to fix this is by adding a lock for the 
global _shutdown variable.
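
Roughly, that idea would look like this (a toy sketch of the locking pattern, not the 
committed patch): the check-and-spawn step and the shutdown step share one lock, so a 
new worker can never be started after shutdown has been observed.

```
import threading

_shutdown = False
_shutdown_lock = threading.Lock()

def _interpreter_exit():
    global _shutdown
    with _shutdown_lock:          # shutdown and worker creation share one lock
        _shutdown = True

def _spawn_worker(target):
    with _shutdown_lock:
        if _shutdown:
            raise RuntimeError("cannot create new thread at interpreter shutdown")
        t = threading.Thread(target=target)
        t.start()
        return t
```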

--
keywords: +patch
Added file: https://bugs.python.org/file48311/find-race.diff

___
Python tracker 
<https://bugs.python.org/issue31783>
___
___
Python-bugs-list mailing list
Unsubscribe: 
https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue31783] Race condition in ThreadPoolExecutor when scheduling new jobs while the interpreter shuts down

2019-05-07 Thread Brian Quinlan


Change by Brian Quinlan :


--
assignee:  -> bquinlan

___
Python tracker 
<https://bugs.python.org/issue31783>
___
___
Python-bugs-list mailing list
Unsubscribe: 
https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue22729] concurrent.futures `wait` and `as_completed` depend on private api

2019-05-07 Thread Brian Quinlan


Brian Quinlan  added the comment:

How did the experiment go? Are people still interested in this?

--

___
Python tracker 
<https://bugs.python.org/issue22729>
___
___
Python-bugs-list mailing list
Unsubscribe: 
https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue22630] `concurrent.futures.Future.set_running_or_notify_cancel` does not notify cancel

2019-05-07 Thread Brian Quinlan


Brian Quinlan  added the comment:

Ben, do you still think that your patch is relevant or shall we close this bug?

--

___
Python tracker 
<https://bugs.python.org/issue22630>
___
___
Python-bugs-list mailing list
Unsubscribe: 
https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue24980] Allow for providing 'on_new_thread' callback to 'concurrent.futures.ThreadPoolExecutor'

2019-05-07 Thread Brian Quinlan


Brian Quinlan  added the comment:

Do the `initializer` and `initargs` parameters deal with this use case for you?

https://docs.python.org/3/library/concurrent.futures.html#concurrent.futures.ThreadPoolExecutor
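
For illustration, a minimal sketch of that pattern (the names here are made up; the 
hook itself is the documented `initializer`/`initargs` API):

```
import threading
from concurrent.futures import ThreadPoolExecutor

def on_new_thread(prefix):
    # Runs once in each worker thread before it starts taking tasks.
    threading.current_thread().name = f"{prefix}-{threading.get_ident()}"

with ThreadPoolExecutor(max_workers=2,
                        initializer=on_new_thread,
                        initargs=("callback",)) as pool:
    print(pool.submit(lambda: threading.current_thread().name).result())
```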

--

___
Python tracker 
<https://bugs.python.org/issue24980>
___
___
Python-bugs-list mailing list
Unsubscribe: 
https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue24767] can concurrent.futures len(Executor) return the number of tasks?

2019-05-07 Thread Brian Quinlan


Brian Quinlan  added the comment:

If we supported this, aren't we promising that we will always materialize the 
iterator passed to map?

I think that we'd need a really strong use-case for this to be worth-while.

--

___
Python tracker 
<https://bugs.python.org/issue24767>
___
___
Python-bugs-list mailing list
Unsubscribe: 
https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue36395] Add deferred single-threaded/fake executor to concurrent.futures

2019-05-07 Thread Brian Quinlan


Brian Quinlan  added the comment:

Hey Brian,

I understand the non-determinism. I was wondering if you had a non-theoretical 
example i.e. some case where the non-determinism had impacted a real test that 
you wrote?

--

___
Python tracker 
<https://bugs.python.org/issue36395>
___
___
Python-bugs-list mailing list
Unsubscribe: 
https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue36780] Interpreter exit blocks waiting for futures of shut-down ThreadPoolExecutors

2019-05-07 Thread Brian Quinlan


Brian Quinlan  added the comment:

Hey Hrvoje,

I agree that #1 is the correct approach. `disown` might not be the best name - 
maybe `allow_shutdown` or something. But we can bike shed about that later.

Are you interested in writing a patch?

--

___
Python tracker 
<https://bugs.python.org/issue36780>
___
___
Python-bugs-list mailing list
Unsubscribe: 
https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue36395] Add deferred single-threaded/fake executor to concurrent.futures

2019-05-06 Thread Brian McCutchon


Brian McCutchon  added the comment:

I understand your hesitation to add a fake. Would it be better to make it 
possible to subclass Executor so that a third party implementation of this can 
be developed?

As for an example, here is an example of nondeterminism when using a 
ThreadPoolExecutor with a single worker. It sometimes prints "False" and 
sometimes "True" on my machine.

from concurrent import futures
import time

complete = False

def complete_eventually():
  global complete
  for _ in range(15):
    pass
  complete = True

with futures.ThreadPoolExecutor(max_workers=1) as pool:
  pool.submit(complete_eventually)
  print(complete)
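
For comparison, a deterministic variant of the same snippet (a suggestion only, and it 
sidesteps rather than answers the fake-executor question): wait on the returned future 
before asserting.

```
from concurrent import futures

complete = False

def complete_eventually():
    global complete
    complete = True

with futures.ThreadPoolExecutor(max_workers=1) as pool:
    pool.submit(complete_eventually).result()   # block until the task has run
    print(complete)                             # always True
```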

--

___
Python tracker 
<https://bugs.python.org/issue36395>
___
___
Python-bugs-list mailing list
Unsubscribe: 
https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue24195] Add `Executor.filter` to concurrent.futures

2019-05-06 Thread Brian Quinlan


Brian Quinlan  added the comment:

Hey Ethan, I'm really sorry about dropping the ball on this. I've been burnt 
out on Python stuff for the last couple of years.

When we left this, it looked like the -1s were in the majority and no one new 
has jumped on to support `filter`.

If you wanted to add this, I wouldn't object. But I've been inactive so long 
that I don't think that I should make the decision.

--

___
Python tracker 
<https://bugs.python.org/issue24195>
___
___
Python-bugs-list mailing list
Unsubscribe: 
https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue23697] Module level map & submit for concurrent.futures

2019-05-06 Thread Brian Quinlan


Brian Quinlan  added the comment:

Using a default executor could be dangerous because it could lead to deadlocks. 
For example:

mylib.py


def my_func():
  tsubmit(...)
  tsubmit(...)
  tsubmit(somelib.some_func, ...)


somelib.py
--

def some_func():
  tsubmit(...) # Potential deadlock if no more free threads.
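
A concrete version of that failure mode (a sketch only, with an explicit single-worker 
executor standing in for the hypothetical module-level default; the timeouts are added 
so the example terminates instead of hanging):

```
from concurrent.futures import ThreadPoolExecutor, TimeoutError

shared = ThreadPoolExecutor(max_workers=1)   # stand-in for a global default executor

def inner():
    return "done"

def outer():
    # Needs a free worker to make progress, but outer() holds the only one.
    return shared.submit(inner).result(timeout=2)

try:
    print(shared.submit(outer).result(timeout=5))
except TimeoutError:
    print("deadlocked: inner() could not start while outer() occupied the worker")
shared.shutdown(wait=True)
```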

--

___
Python tracker 
<https://bugs.python.org/issue23697>
___
___
Python-bugs-list mailing list
Unsubscribe: 
https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue22361] Ability to join() threads in concurrent.futures.ThreadPoolExecutor

2019-05-06 Thread Brian Quinlan


Brian Quinlan  added the comment:

So you actually use the result of ex.submit i.e. use the resulting future?

If you don't then it might be easier to just create your own thread.

--

___
Python tracker 
<https://bugs.python.org/issue22361>
___
___
Python-bugs-list mailing list
Unsubscribe: 
https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue36395] Add deferred single-threaded/fake executor to concurrent.futures

2019-05-06 Thread Brian McCutchon


Brian McCutchon  added the comment:

Mostly nondeterminism. It seems like creating a ThreadPoolExecutor with one 
worker could still be nondeterministic, as there are two threads: the main 
thread and the worker thread. It gets worse if multiple executors are needed.

Another option would be to design and document futures.Executor to be extended 
so that I can make my own fake executor.

--

___
Python tracker 
<https://bugs.python.org/issue36395>
___
___
Python-bugs-list mailing list
Unsubscribe: 
https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue36395] Add deferred single-threaded/fake executor to concurrent.futures

2019-05-06 Thread Brian Quinlan


Brian Quinlan  added the comment:

Do you have an example that you could share?

I can't think of any other fakes in the standard library and I'm hesitant to be 
the person who adds the first one ;-)

--

___
Python tracker 
<https://bugs.python.org/issue36395>
___
___
Python-bugs-list mailing list
Unsubscribe: 
https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue26374] concurrent_futures Executor.map semantics better specified in docs

2019-05-06 Thread Brian Quinlan


Change by Brian Quinlan :


--
stage:  -> resolved
status: open -> closed

___
Python tracker 
<https://bugs.python.org/issue26374>
___
___
Python-bugs-list mailing list
Unsubscribe: 
https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue26374] concurrent_futures Executor.map semantics better specified in docs

2019-05-06 Thread Brian Quinlan


Brian Quinlan  added the comment:

Can we close this bug then?

--

___
Python tracker 
<https://bugs.python.org/issue26374>
___
___
Python-bugs-list mailing list
Unsubscribe: 
https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue36395] Add deferred single-threaded/fake executor to concurrent.futures

2019-05-06 Thread Brian Quinlan


Brian Quinlan  added the comment:

Hey Brian, why can't you use threads in your unit tests? Are you worried about 
non-determinism or resource usage? Could you make a ThreadPoolExecutor with a 
single worker?

--

___
Python tracker 
<https://bugs.python.org/issue36395>
___
___
Python-bugs-list mailing list
Unsubscribe: 
https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue26903] ProcessPoolExecutor(max_workers=64) crashes on Windows

2019-05-06 Thread Brian Quinlan


Change by Brian Quinlan :


--
keywords: +patch
pull_requests: +13045
stage: needs patch -> patch review

___
Python tracker 
<https://bugs.python.org/issue26903>
___
___
Python-bugs-list mailing list
Unsubscribe: 
https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue36780] Interpreter exit blocks waiting for futures of shut-down ThreadPoolExecutors

2019-05-06 Thread Brian Quinlan


Brian Quinlan  added the comment:

OK, I completely disagree with my statement:

"""If you added this as an argument to shutdown() then you'd probably also have 
to add it as an option to the constructors (for people using Executors as 
context managers). But, if you have to add it to the constructor anyway, you 
may as well *only* add it to the constructor."""

This functionality wouldn't apply to context manager use (since the point of 
the context manager is to ensure resource clean-up). So maybe another argument 
to shutdown() would make sense.

--

___
Python tracker 
<https://bugs.python.org/issue36780>
___
___
Python-bugs-list mailing list
Unsubscribe: 
https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue26903] ProcessPoolExecutor(max_workers=64) crashes on Windows

2019-05-06 Thread Brian Quinlan


Brian Quinlan  added the comment:

BTW, the 61 process limit comes from:

63 -  - 

--

___
Python tracker 
<https://bugs.python.org/issue26903>
___
___
Python-bugs-list mailing list
Unsubscribe: 
https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue26903] ProcessPoolExecutor(max_workers=64) crashes on Windows

2019-05-06 Thread Brian Quinlan


Change by Brian Quinlan :


--
assignee:  -> bquinlan

___
Python tracker 
<https://bugs.python.org/issue26903>
___
___
Python-bugs-list mailing list
Unsubscribe: 
https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue26903] ProcessPoolExecutor(max_workers=64) crashes on Windows

2019-05-06 Thread Brian Quinlan


Brian Quinlan  added the comment:

If no one has short-term plans to improve multiprocessing.connection.wait, then 
I'll update the docs to list this limitation, ensure that ProcessPoolExecutor 
never defaults to >60 processes on Windows and raises a ValueError if the user 
explicitly passes a larger number.
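
Sketched out, the guard might look something like this (the names and the exact 
reserved-handle count are assumptions, working from the 63-handle 
WaitForMultipleObjects limit mentioned earlier in this issue):

```
import sys

_MAX_WINDOWS_WORKERS = 63 - 2    # 63-handle wait limit minus 2 internal handles (assumed)

def check_max_workers(max_workers):
    if sys.platform == "win32" and max_workers > _MAX_WINDOWS_WORKERS:
        raise ValueError(f"max_workers must be <= {_MAX_WINDOWS_WORKERS} on Windows")
    return max_workers
```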

--

___
Python tracker 
<https://bugs.python.org/issue26903>
___
___
Python-bugs-list mailing list
Unsubscribe: 
https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue36780] Interpreter exit blocks waiting for futures of shut-down ThreadPoolExecutors

2019-05-06 Thread Brian Quinlan


Brian Quinlan  added the comment:

>> The current behavior is explicitly documented, so presumably
>> it can't be (easily) changed

And it isn't clear that it would be desirable to change this even if it were 
possible - doing structured resource clean-up seems consistent with the rest of 
Python.

That being said, I can see the user case for this.

If you added this as an argument to shutdown() then you'd probably also have to 
add it as an option to the constructors (for people using Executors as context 
managers). But, if you have to add it to the constructor anyway, you may as 
well *only* add it to the constructor.

I'll let an active maintainer weigh in on whether, on balance, they think that 
this functionality is worth complicating the API.

--

___
Python tracker 
<https://bugs.python.org/issue36780>
___
___
Python-bugs-list mailing list
Unsubscribe: 
https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue36714] Tweak doctest 'example' regex to allow a leading ellipsis in 'want' line

2019-04-26 Thread Brian Skinn


Brian Skinn  added the comment:

Ahh, this *will* break some doctests: any with blank PS2 lines in the 'source' 
portion without the explicit trailing space:

1] >>> def foo():
2] ...print("bar")
3] ...
4] ...print("baz")
5] >>> foo()
6] bar
7] baz

If line 3 contains exactly "..." instead of starting with "... ", it will not 
be recognized as a PS2 line and the example will be parsed as:

'source'
>>> def foo():
...print("bar")

'want'
...
...print("baz")

IMO this isn't a *terribly* unreasonable tradeoff, though -- it would enable 
the specific ellipsis use-case as in the OP, at the cost of breaking some 
doctests, which shouldn't(?) be in any critical paths?

--

___
Python tracker 
<https://bugs.python.org/issue36714>
___
___
Python-bugs-list mailing list
Unsubscribe: 
https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue36695] Change (regression?) in v3.8.0a3 doctest output after capturing the stderr output from a raised warning

2019-04-24 Thread Brian Skinn


Brian Skinn  added the comment:

LOL. No special thanks necessary, that last post only turned into something 
coherent (and possibly correct, it seems...) after a LOT of diving into the 
source, fiddling with the code, and (((re-)re-)re-)writing! Believe me, it 
reads as a lot more knowledgeable and confident than I actually felt while 
writing it. :-D

Thanks to all of you for coming along with me on this dive into the CPython 
internals!

--

___
Python tracker 
<https://bugs.python.org/issue36695>
___
___
Python-bugs-list mailing list
Unsubscribe: 
https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com


