[issue28855] Compiler warnings in _PyObject_CallArg1()

2016-12-01 Thread Benjamin Peterson

Benjamin Peterson added the comment:

I also think forcing callers to cast is fine. Most of our APIs require PyObject 
*.

--

___
Python tracker 

___
___
Python-bugs-list mailing list
Unsubscribe: 
https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue23507] Tuple creation is too slow

2016-12-01 Thread STINNER Victor

STINNER Victor added the comment:

haypo: "status: open -> closed"

Oops, it's a mistake, sorry. I only wanted to ask the question, not to close 
the issue.

Serhiy Storchaka: "No, I'm referring to the crashing. The code that worked 
before your changes is now crashing."

Sorry, I didn't notice that you attached a script! I opened the issue #28858.

--




[issue28858] Fastcall uses more C stack

2016-12-01 Thread STINNER Victor

New submission from STINNER Victor:

Serhiy Storchaka reported that Python 3.6 crashes earlier than Python 3.5 when 
calling json.dumps() after sys.setrecursionlimit() is increased.

I tested the script he wrote. Results on Python built in release mode:

Python 3.7:

...
58100 116204
Segmentation fault (core dumped)

Python 3.6:

...
74800 149604
Segmentation fault (core dumped)

Python 3.5:

...
74700 149404
Segmentation fault (core dumped)

Oh, it seems like Python 3.7 does crash earlier.

But to be clear, it's hard to control the usage of the C stack.
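
A minimal way to see the recursion guard at work (a sketch, not the attached 
stack_overflow.py, which isn't reproduced here):

```python
import json
import sys

def nested_list(depth):
    """Build a list nested `depth` levels deep."""
    obj = []
    for _ in range(depth):
        obj = [obj]
    return obj

# Well inside the recursion limit, serialization succeeds:
# depth 50 produces 51 pairs of brackets, i.e. 102 characters.
print(len(json.dumps(nested_list(50))))

# Past the recursion limit, CPython raises RecursionError instead of
# letting the C stack overflow.  The segfaults reported above happen
# when sys.setrecursionlimit() is raised so high that this guard no
# longer reflects the real C stack size.
try:
    json.dumps(nested_list(sys.getrecursionlimit() + 100))
except RecursionError:
    print('RecursionError before the C stack overflows')
```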

--
files: stack_overflow.py
messages: 282228
nosy: haypo
priority: normal
severity: normal
status: open
title: Fastcall uses more C stack
versions: Python 3.7
Added file: http://bugs.python.org/file45731/stack_overflow.py




[issue28740] Add sys.getandroidapilevel()

2016-12-01 Thread STINNER Victor

STINNER Victor added the comment:

@Xavier: Cool, thanks for checking :-) I don't have access to an Android 
platform yet.

--
resolution:  -> fixed
status: open -> closed




[issue28740] Add sys.getandroidapilevel()

2016-12-01 Thread Xavier de Gaye

Xavier de Gaye added the comment:

Here is the output of getandroidapilevel(), a verbose run of test_sys, and a 
run of the test suite on the Android x86 emulator at API level 21. All the 
results are as expected and the failed tests are the usual ones; each of them 
already has a patch either ready to be committed as soon as the 3.6.1 branch 
is created, or ready and pending review.

root@generic_x86:/data/data/org.bitbucket.pyona # python
Python 3.7.0a0 (default:96245d4af0ca+, Dec  2 2016, 07:59:12)
[GCC 4.2.1 Compatible Android Clang 3.8.256229 ] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import sys
>>> sys.getandroidapilevel()
21
>>>

===
root@generic_x86:/data/data/org.bitbucket.pyona # python -m test -v test_sys
...
test_getandroidapilevel (test.test_sys.SysModuleTest) ... ok
...
Ran 45 tests in 2.567s

OK (skipped=3)
1 test OK.

Total duration: 3 sec
Tests result: SUCCESS

===
root@generic_x86:/data/data/org.bitbucket.pyona # python -m test
...
358 tests OK.

8 tests failed:
test_asyncio test_cmd_line test_pathlib test_pwd test_site
test_socket test_tarfile test_warnings

38 tests skipped:
test_asdl_parser test_concurrent_futures test_crypt test_curses
test_dbm_gnu test_dbm_ndbm test_devpoll test_gdb test_grp
test_idle test_kqueue test_lzma test_msilib
test_multiprocessing_fork test_multiprocessing_forkserver
test_multiprocessing_main_handling test_multiprocessing_spawn
test_nis test_ossaudiodev test_smtpnet test_socketserver test_spwd
test_startfile test_tcl test_timeout test_tix test_tk
test_ttk_guionly test_ttk_textonly test_turtle test_urllib2net
test_urllibnet test_wait3 test_winconsoleio test_winreg
test_winsound test_xmlrpc_net test_zipfile64

Total duration: 20 min 31 sec
Tests result: FAILURE

--




What do you think: good idea to launch a marketplace on python+django?

2016-12-01 Thread Gus_G
Hello, what do you think about building a marketplace website with 
Python and Django? The end result should look and work similar to these sites: 
https://zoptamo.com/uk/s-abs-c-uk, https://www.ownerdirect.com/ . What are your 
opinions on this idea? Or is there another, better way to build it?
-- 
https://mail.python.org/mailman/listinfo/python-list


[issue28856] %b format for bytes does not support objects that follow the buffer protocol

2016-12-01 Thread Serhiy Storchaka

Changes by Serhiy Storchaka :


--
nosy: +ethan.furman, serhiy.storchaka




[issue5322] Python 2.6 object.__new__ argument calling autodetection faulty

2016-12-01 Thread Serhiy Storchaka

Serhiy Storchaka added the comment:

Will commit to 3.5-3.7 after releasing 3.6.0.

--
versions:  -Python 2.7




[issue5322] Python 2.6 object.__new__ argument calling autodetection faulty

2016-12-01 Thread Roundup Robot

Roundup Robot added the comment:

New changeset a37cc3d926ec by Serhiy Storchaka in branch '2.7':
Issue #5322: Fixed setting __new__ to a PyCFunction inside Python code.
https://hg.python.org/cpython/rev/a37cc3d926ec

--
nosy: +python-dev




[issue28857] SyncManager and Main Process fail to communicate after reboot or stoping with Ctrl - C

2016-12-01 Thread Nagarjuna Arigapudi

New submission from Nagarjuna Arigapudi:

The "SyncManager" and the main process look at different temp directories 
and fail to communicate, causing a shutdown of all processes immediately after 
the program starts. This behavior is seen in both 2.7 and 3.5. The 2.7 logging 
is clearer: it tells the file name it is looking for.

Extract of Program:

manager = Manager()
lst1 = manager.list([[]]*V1)
lst2 = manager.list(range(v2))
lst3 = manager.list([0]*V3)
lst4 = manager.list([0]*V3)
lst5 = manager.list([0]*V3)
initializeData(lst1,lst2,lst3,lst4,lst5)
procs = []
for indx in range(noOfProcs):
    procs.append(Process(target=workerProc, args=(lst1,lst2,lst3,lst4,lst5)))
    procs[indx].start()

bContinueWorking = True
while (bContinueWorking):
    logger.debug("Main thread about to sleep")
    time.sleep(300)
    globLOCK.acquire()
    if (not lst1):
        bContinueWorking = False
    try:
        doPickle(lst1)
    except Exception, ep:
        logger.error("failed to pickle " + str(ep))
    finally:
        globLOCK.release()


The program works well, but if it is terminated it will not start back up; 
rebooting or cleaning temporary files does not fix the issue.

Below is the log from a failed run (beneath it is a log from a successful 
run).

FAIL LOG
[DEBUG/MainProcess] Star of Application
[DEBUG/MainProcess] created semlock with handle 139965318860800
[DEBUG/MainProcess] created semlock with handle 139965318856704
[DEBUG/MainProcess] created semlock with handle 139965318852608
[DEBUG/MainProcess] created semlock with handle 139965318848512
[DEBUG/MainProcess] created semlock with handle 139965318844416
[DEBUG/MainProcess] created semlock with handle 139965318840320
[INFO/SyncManager-1] child process calling self.run()

***[INFO/SyncManager-1] created temp directory /tmp/pymp-xTqdkd***

[DEBUG/MainProcess] requesting creation of a shared 'list' object
[INFO/SyncManager-1] manager serving at '/tmp/pymp-xTqdkd/listener-eDG1yJ'
[DEBUG/SyncManager-1] 'list' callable returned object with id '7f4c34316f80'
[DEBUG/MainProcess] INCREF '7f4c34316f80'
[DEBUG/MainProcess] requesting creation of a shared 'list' object
[DEBUG/SyncManager-1] 'list' callable returned object with id '7f4c3432a758'
[DEBUG/MainProcess] INCREF '7f4c3432a758'
[DEBUG/MainProcess] requesting creation of a shared 'list' object
[DEBUG/SyncManager-1] 'list' callable returned object with id '7f4c3432a7a0'
[DEBUG/MainProcess] INCREF '7f4c3432a7a0'
[DEBUG/MainProcess] requesting creation of a shared 'list' object
[DEBUG/SyncManager-1] 'list' callable returned object with id '7f4c3432a7e8'
[DEBUG/MainProcess] INCREF '7f4c3432a7e8'
[DEBUG/MainProcess] requesting creation of a shared 'list' object
[DEBUG/SyncManager-1] 'list' callable returned object with id '7f4c3432a830'
[DEBUG/MainProcess] INCREF '7f4c3432a830'
[DEBUG/MainProcess] thread 'MainThread' does not own a connection
[DEBUG/MainProcess] making connection to manager
[DEBUG/SyncManager-1] starting server thread to service 'MainProcess'

***[DEBUG/MainProcess] failed to connect to address 
/tmp/pymp-LOMHoT/listener-EbLeup***

[Errno 2] No such file or directory
Initialization failed Exiting
[INFO/MainProcess] process shutting down


SUCCESS LOG

[DEBUG/MainProcess] Star of Application
[DEBUG/MainProcess] created semlock with handle 139830888992768
[DEBUG/MainProcess] created semlock with handle 139830888988672
[DEBUG/MainProcess] created semlock with handle 139830888984576
[DEBUG/MainProcess] created semlock with handle 139830888980480
[DEBUG/MainProcess] created semlock with handle 139830888976384
[DEBUG/MainProcess] created semlock with handle 139830888972288
[INFO/SyncManager-1] child process calling self.run()
[INFO/SyncManager-1] created temp directory /tmp/pymp-UiHuij
[DEBUG/MainProcess] requesting creation of a shared 'list' object
[INFO/SyncManager-1] manager serving at '/tmp/pymp-UiHuij/listener-lS7hf5'
[DEBUG/SyncManager-1] 'list' callable returned object with id '7f2ce78c6f80'
[DEBUG/MainProcess] INCREF '7f2ce78c6f80'
[DEBUG/MainProcess] requesting creation of a shared 'list' object
[DEBUG/SyncManager-1] 'list' callable returned object with id '7f2ce78da758'
[DEBUG/MainProcess] INCREF '7f2ce78da758'
[DEBUG/MainProcess] requesting creation of a shared 'list' object
[DEBUG/SyncManager-1] 'list' callable returned object with id '7f2ce78da7a0'
[DEBUG/MainProcess] INCREF '7f2ce78da7a0'
[DEBUG/MainProcess] requesting creation of a shared 'list' object
[DEBUG/SyncManager-1] 'list' callable returned object with id '7f2ce78da7e8'
[DEBUG/MainProcess] INCREF '7f2ce78da7e8'
[DEBUG/MainProcess] requesting creation of a shared 'list' object
[DEBUG/SyncManager-1] 'list' callable returned object with id '7f2ce78da830'
[DEBUG/MainProcess] INCREF '7f2ce78da830'
[DEBUG/MainProcess] thread 'MainThread' does not own a connection
[DEBUG/MainProcess] making connection to manager
[DEBUG/SyncManager-1] 
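
For what it's worth, a context-managed setup (a sketch with a trivial worker, 
not the OP's program) guarantees that the manager's server process and its 
/tmp/pymp-* directory are torn down cleanly on exit, so no stale listener 
address can be picked up on the next run:

```python
from multiprocessing import Manager, Process

def worker(shared):
    shared.append(1)

if __name__ == '__main__':
    # The with-block shuts the manager's server process down on exit,
    # even if the main process is interrupted by an exception.
    with Manager() as manager:
        lst = manager.list()
        p = Process(target=worker, args=(lst,))
        p.start()
        p.join()
        print(list(lst))  # [1]
```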

[issue28855] Compiler warnings in _PyObject_CallArg1()

2016-12-01 Thread Benjamin Peterson

Benjamin Peterson added the comment:

(Sorry, I noticed and landed a fix before completely reading the issue.)

--




[issue28855] Compiler warnings in _PyObject_CallArg1()

2016-12-01 Thread Benjamin Peterson

Benjamin Peterson added the comment:

It doesn't seem like the question is whether to use inline functions but 
whether to force all callers to cast. Your original code would work if you 
added all the casts in your static_inline.patch patch.

--




[issue26861] shutil.copyfile() doesn't close the opened files

2016-12-01 Thread Serhiy Storchaka

Changes by Serhiy Storchaka :


--
status: open -> pending




[issue28855] Compiler warnings in _PyObject_CallArg1()

2016-12-01 Thread Roundup Robot

Roundup Robot added the comment:

New changeset 96245d4af0ca by Benjamin Peterson in branch 'default':
fix _PyObject_CallArg1 compiler warnings (closes #28855)
https://hg.python.org/cpython/rev/96245d4af0ca

--
nosy: +python-dev
resolution:  -> fixed
stage:  -> resolved
status: open -> closed




[issue28847] dumbdbm should not commit if in read mode

2016-12-01 Thread Serhiy Storchaka

Serhiy Storchaka added the comment:

The first part is committed in 2.7. I'll commit it in 3.5-3.7 after releasing 
3.6.0.

--
versions:  -Python 2.7




[issue28847] dumbdbm should not commit if in read mode

2016-12-01 Thread Roundup Robot

Roundup Robot added the comment:

New changeset 0516f54491cb by Serhiy Storchaka in branch '2.7':
Issue #28847: dubmdbm no longer writes the index file in when it is not
https://hg.python.org/cpython/rev/0516f54491cb

--
nosy: +python-dev




Re: correct way to catch exception with Python 'with' statement

2016-12-01 Thread Steve D'Aprano
On Fri, 2 Dec 2016 11:26 am, DFS wrote:

>> For most programs, yes, it probably will never be a problem to check
>> for existence, and then assume that the file still exists.  But put that
>> code on a server, and run it a couple of million times, with dozens of
>> other processes also manipulating files, and you will see failures.
> 
> 
> If it's easy for you, can you write some short python code to simulate
> that?

Run these scripts simultaneously inside the same directory, and you will see
a continual stream of error messages:

# -- a.py -- 
filename = 'data'
import os, time

def run():
if os.path.exists(filename):
with open(filename):
pass
else:
print('file is missing!')
# re-create it
with open(filename, 'w'):
pass

while True:
try:
run()
except IOError:
pass
time.sleep(0.05)



# -- b.py --
filename = 'data'
import os, time

while True:
try:
os.remove(filename)
except OSError:
pass
time.sleep(0.05)




The time.sleep() calls are just to slow them down slightly. You can leave
them out if you like.
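
The usual fix is to drop the existence check entirely and handle the failure 
where it occurs (EAFP). A sketch of a.py rewritten that way, restructured to 
run twice instead of looping forever:

```python
import os
import tempfile

# Work in a fresh directory so the file is guaranteed absent at first.
filename = os.path.join(tempfile.mkdtemp(), 'data')

def run():
    # EAFP: just try to open, and handle the absence where it is
    # detected.  The race window of the check-then-open version is
    # gone because there is no separate check.
    try:
        with open(filename):
            return 'present'
    except FileNotFoundError:
        # re-create it
        with open(filename, 'w'):
            pass
        return 'recreated'

print(run())  # recreated (the file did not exist yet)
print(run())  # present
```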




-- 
Steve
“Cheer up,” they said, “things could be worse.” So I cheered up, and sure
enough, things got worse.



[issue28847] dumbdbm should not commit if in read mode

2016-12-01 Thread Serhiy Storchaka

Changes by Serhiy Storchaka :


--
assignee:  -> serhiy.storchaka
versions: +Python 2.7, Python 3.5, Python 3.6




[issue23507] Tuple creation is too slow

2016-12-01 Thread Serhiy Storchaka

Serhiy Storchaka added the comment:

No, I'm referring to the crashing. The code that worked before your changes 
is now crashing. Please revert your changes and find a way to optimize 
function calls without increasing stack consumption.

--
resolution: fixed -> 
status: closed -> open




Re: compile error when using override

2016-12-01 Thread Steve D'Aprano
On Fri, 2 Dec 2016 01:35 pm, Ho Yeung Lee wrote:

> from __future__ import division
> import ast
> from sympy import *
> x, y, z, t = symbols('x y z t')
> k, m, n = symbols('k m n', integer=True)
> f, g, h = symbols('f g h', cls=Function)
> import inspect

Neither ast nor inspect is used. Why import them?

The only symbols you are using are x and y.


> def op2(a,b):
> return a*b+a

This doesn't seem to be used. Get rid of it.


> class AA(object):
> @staticmethod
> def __additionFunction__(a1, a2):
> return a1*a2 #Put what you want instead of this
> def __multiplyFunction__(a1, a2):
> return a1*a2+a1 #Put what you want instead of this
> def __divideFunction__(a1, a2):
> return a1*a1*a2 #Put what you want instead of this

None of those methods are used. Get rid of them.

> def __init__(self, value):
> self.value = value
> def __add__(self, other):
> return self.value*other.value

Sorry, you want AA(5) + AA(2) to return 10?

> def __mul__(self, other):
> return self.value*other.value + other.value
> def __div__(self, other):
> return self.value*other.value*other.value
> 
> solve([AA(x)*AA(y) + AA(-1), AA(x) + AA(-2)], x, y)

I don't understand what you are trying to do here. What result are you
expecting?

Maybe you just want this?

from sympy import solve, symbols
x, y = symbols('x y')
print( solve([x*y - 1, x - 2], x, y) )

which prints the result:
[(2, 1/2)]


Perhaps if you explain what you are trying to do, we can help better.

But please, cut down your code to only code that is being used!




-- 
Steve
“Cheer up,” they said, “things could be worse.” So I cheered up, and sure
enough, things got worse.



Re: correct way to catch exception with Python 'with' statement

2016-12-01 Thread Steve D'Aprano
On Fri, 2 Dec 2016 11:26 am, DFS wrote:

> On 12/01/2016 06:48 PM, Ned Batchelder wrote:
>> On Thursday, December 1, 2016 at 2:31:11 PM UTC-5, DFS wrote:
>>> After a simple test below, I submit that the above scenario would never
>>> occur.  Ever.  The time gap between checking for the file's existence
>>> and then trying to open it is far too short for another process to sneak
>>> in and delete the file.
>>
>> It doesn't matter how quickly the first operation is (usually) followed
>> by the second.  Your process could be swapped out between the two
>> operations. On a heavily loaded machine, there could be a very long
>> time between them
> 
> 
> How is it possible that the 'if' portion runs, then 44/100,000ths of a
> second later my process yields to another process which deletes the
> file, then my process continues.
> 
> Is that governed by the dreaded GIL?

No, that has nothing to do with the GIL. It is because the operating 
system is a preemptive multi-processing operating system. All modern OSes 
are: Linux, OS X, Windows.

Each program that runs, including the OS itself, is one or more processes.
Typically, even on a single-user desktop machine, you will have dozens of
processes running simultaneously.

Every so-many clock ticks, the OS pauses whatever process is running, 
more-or-less interrupting whatever it was doing, passes control on to 
another process, then the next, then the next, and so on. The application 
doesn't have any control over this, it can be paused at any time, 
normally just for a small fraction of a second, but potentially for 
seconds or minutes at a time if the system is heavily loaded.



> "The mechanism used by the CPython interpreter to assure that only one
> thread executes Python bytecode at a time."
> 
> But I see you posted a stack-overflow answer:
> 
> "In the case of CPython's GIL, the granularity is a bytecode
> instruction, so execution can switch between threads at any bytecode."
> 
> Does that mean "chars=f.read().lower()" could get interrupted between
> the read() and the lower()?

Yes, but don't think about Python threads. Think about the OS.

I'm not an expert on the low-level hardware details, so I welcome
correction, but I think that you can probably expect that the OS can
interrupt code execution between any two CPU instructions. Something like
str.lower() is likely to be thousands of CPU instructions, even for a small
string.


[...]
> With a 5ms window, it seems the following code would always protect the
> file from being deleted between lines 4 and 5.
> 
> 
> 1 import os,threading
> 2 f_lock=threading.Lock()
> 3 with f_lock:
> 4   if os.path.isfile(filename):
> 5 with open(filename,'w') as f:
> 6   process(f)
> 
> 
> 
> 
>> even if on an average machine, they are executed very quickly.

Absolutely not. At least on Linux, locks are advisory, not mandatory. Here
are a pair of scripts that demonstrate that. First, the well-behaved script
that takes out a lock:

# --- locker.py ---
import os, threading, time

filename = 'thefile.txt'
f_lock = threading.Lock()

with f_lock:
print '\ntaking lock'
if os.path.isfile(filename):
print filename, 'exists and is a file'
time.sleep(10)
print 'lock still active'
with open(filename,'w') as f:
print f.read()

# --- end ---


Now, a second script which naively, or maliciously, just deletes the file:

# --- bandit.py ---
import os, time
filename = 'thefile.txt'
time.sleep(1)
print 'deleting file, mwahahahaha!!!'
os.remove(filename)
print 'deleted'

# --- end ---



Now, I run them both simultaneously:

[steve@ando thread-lock]$ touch thefile.txt # ensure file exists
[steve@ando thread-lock]$ (python locker.py &) ; (python bandit.py &)
[steve@ando thread-lock]$ 
taking lock
thefile.txt exists and is a file
deleting file, mwahahahaha!!!
deleted
lock still active
Traceback (most recent call last):
  File "locker.py", line 14, in <module>
print f.read()
IOError: File not open for reading



This is on Linux. It's possible that Windows behaves differently, and I don't
know how to run a command in the background in command.com or cmd.exe or
whatever you use on Windows.
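
A lock that other *processes* can observe has to live in the filesystem, not 
in the interpreter. A sketch using fcntl.flock on Unix (still advisory, but 
visible across processes, unlike threading.Lock):

```python
import fcntl
import os
import tempfile

path = os.path.join(tempfile.mkdtemp(), 'thefile.txt')

with open(path, 'w') as f:
    # LOCK_EX takes an exclusive advisory lock that any other process
    # calling flock() on the same file will block on.  A bandit that
    # never calls flock() can still delete the file: advisory locks
    # only constrain cooperating processes.
    fcntl.flock(f, fcntl.LOCK_EX)
    f.write('guarded\n')
    fcntl.flock(f, fcntl.LOCK_UN)

print('lock released')
```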


[...]
> Also, this is just theoretical (I hope).  It would be terrible system
> design if all those dozens of processes were reading and writing and
> deleting the same file.

It is not theoretical. And it's not a terrible system design, in the sense
that the alternatives are *worse*.

* Turn the clock back to the 1970s and 80s with single-processing 
  operating systems? Unacceptable -- even primitive OSes like DOS 
  and Mac System 5 needed to include some basic multiprocessing 
  capability.

- And what are servers supposed to do in this single-process world?

- Enforce mandatory locks? A great way for malware or hostile users
  to perform Denial Of Service attacks.

Even locks being left around accidentally can be a real pain: Windows 

How to properly retrieve data using requests + bs4 from multiple pages in a site?

2016-12-01 Thread Juan C.
I'm a student and my university uses Moodle as their learning management
system (LMS). They don't have Moodle Web Services enabled and won't be
enabling it anytime soon, at least for students. The university programs
have the following structure, for example:

1. Bachelor's Degree in Computer Science (duration: 8 semesters)

1.1. Unit 01: Mathematics Fundamental (duration: 1 semester)
1.1.1. Algebra I (first 3 months)
1.1.2. Algebra II (first 3 months)
1.1.3. Calculus I (last 3 months)
1.1.4. Calculus II (last 3 months)
1.1.5. Unit Project (throughout the semester)

1.2. Unit 02: Programming (duration: 1 semester)
1.2.1. Programming Logic (first 3 months)
1.2.2. Data Modelling with UML (first 3 months)
1.2.3. Python I (last 3 months)
1.2.4. Python II (last 3 months)
1.2.5. Unit Project (throughout the semester)

Each course/project have a bunch of assignments + one final assignment.
This goes on, totalizing 8 (eight) units, which will make up for a 4-year
program. I'm building my own client-side Moodle API to be consumed by my
scripts. Currently I'm using 'requests' + 'bs4' to do the job. My code:

package moodle/

user.py

#!/usr/bin/env python
# -*- coding: utf-8 -*-

from .program import Program
import requests


class User:
    _AUTH_URL = 'http://lms.university.edu/moodle/login/index.php'

    def __init__(self, username, password, program_id):
        self.username = username
        self.password = password
        session = requests.session()
        session.post(self._AUTH_URL,
                     {"username": username, "password": password})
        self.program = Program(program_id=program_id, session=session)

    def __str__(self):
        return self.username + ':' + self.password

    def __repr__(self):
        return '<User %s>' % self.username

    def __eq__(self, other):
        if isinstance(other, type(self)):
            return self.username == other.username
        else:
            return False

==

program.py

#!/usr/bin/env python
# -*- coding: utf-8 -*-

from .unit import Unit
from bs4 import BeautifulSoup


class Program:
    _PATH = 'http://lms.university.edu/moodle/course/index.php?categoryid='

    def __init__(self, program_id, session):
        response = session.get(self._PATH + str(program_id))
        soup = BeautifulSoup(response.text, 'html.parser')

        self.name = soup.find('ul', class_='breadcrumb').find_all(
            'li')[-2].text.replace('/', '').strip()
        self.id = program_id
        self.units = [Unit(int(item['data-categoryid']), session)
                      for item in soup.find_all('div', {'class': 'category'})]

    def __str__(self):
        return self.name

    def __repr__(self):
        return '<Program %s, id %s>' % (self.name, self.id)

    def __eq__(self, other):
        if isinstance(other, type(self)):
            return self.id == other.id
        else:
            return False

==

unit.py

#!/usr/bin/env python
# -*- coding: utf-8 -*-

from .course import Course
from bs4 import BeautifulSoup


class Unit:
    _PATH = 'http://lms.university.edu/moodle/course/index.php?categoryid='

    def __init__(self, unit_id, session):
        response = session.get(self._PATH + str(unit_id))
        soup = BeautifulSoup(response.text, 'html.parser')

        self.name = soup.find('ul', class_='breadcrumb').find_all(
            'li')[-1].text.replace('/', '').strip()
        self.id = unit_id
        self.courses = [Course(int(item['data-courseid']), session)
                        for item in soup.find_all('div', {'class': 'coursebox'})]

    def __str__(self):
        return self.name

    def __repr__(self):
        return '<Unit %s, id %s>' % (self.name, self.id)

    def __eq__(self, other):
        if isinstance(other, type(self)):
            return self.id == other.id
        else:
            return False

==

course.py

#!/usr/bin/env python
# -*- coding: utf-8 -*-


from .assignment import Assignment
import re
from bs4 import BeautifulSoup


class Course:
    _PATH = 'http://lms.university.edu/moodle/course/view.php?id='

    def __init__(self, course_id, session):
        response = session.get(self._PATH + str(course_id))
        soup = BeautifulSoup(response.text, 'html.parser')

        self.name = soup.find('h1').text
        self.id = course_id
        self.assignments = [
            Assignment(int(item['href'].split('id=')[-1]), session)
            for item in soup.find_all('a', href=re.compile(
                r'http://lms\.university\.edu/moodle/mod/assign/view.php\?id=.*'))]

    def __str__(self):
        return self.name

    def __repr__(self):
        return '<Course %s, id %s>' % (self.name, self.id)

    def __eq__(self, other):
        if isinstance(other, type(self)):
            return self.id == other.id
        else:
            return False

==

assignment.py

#!/usr/bin/env python
# -*- coding: utf-8 -*-

from bs4 import BeautifulSoup


class Assignment:
    _PATH = 'http://lms.university.edu/moodle/mod/assign/view.php?id='

    def __init__(self, assignment_id, session):
        response = session.get(self._PATH + str(assignment_id))
        soup = BeautifulSoup(response.text, 'html.parser')

        self.name = soup.find('h2').text
        self.id = assignment_id
        self.sent = soup.find('td', 

Re: correct way to catch exception with Python 'with' statement

2016-12-01 Thread Ned Batchelder
On Thursday, December 1, 2016 at 7:26:18 PM UTC-5, DFS wrote:
> On 12/01/2016 06:48 PM, Ned Batchelder wrote:
> > On Thursday, December 1, 2016 at 2:31:11 PM UTC-5, DFS wrote:
> >> After a simple test below, I submit that the above scenario would never
> >> occur.  Ever.  The time gap between checking for the file's existence
> >> and then trying to open it is far too short for another process to sneak
> >> in and delete the file.
> >
> > It doesn't matter how quickly the first operation is (usually) followed
> > by the second.  Your process could be swapped out between the two
> > operations. On a heavily loaded machine, there could be a very long
> > time between them
> 
> 
> How is it possible that the 'if' portion runs, then 44/100,000ths of a 
> second later my process yields to another process which deletes the 
> file, then my process continues.

A modern computer is running dozens or hundreds (or thousands!) of
processes "all at once". How they are actually interleaved on the
small number of actual processors is completely unpredictable. There
can be an arbitrary amount of time passing between any two processor
instructions.

I'm assuming you've measured this program on your own computer, which
was relatively idle at the moment.  This is hardly a good stress test
of how the program might execute under more burdened conditions.

> 
> Is that governed by the dreaded GIL?
> 
> "The mechanism used by the CPython interpreter to assure that only one 
> thread executes Python bytecode at a time."
> 
> But I see you posted a stack-overflow answer:
> 
> "In the case of CPython's GIL, the granularity is a bytecode 
> instruction, so execution can switch between threads at any bytecode."
> 
> Does that mean "chars=f.read().lower()" could get interrupted between 
> the read() and the lower()?

Yes.  But even more importantly, the Python interpreter is itself a
C program, and it can be interrupted between any two instructions, and
another program on the computer could run instead.  That other program
can fiddle with files on the disk.

> 
> I read something interesting last night:
> https://www.jeffknupp.com/blog/2012/03/31/pythons-hardest-problem/
> 
> "In the new GIL, a hard timeout is used to instruct the current thread 
> to give up the lock. When a second thread requests the lock, the thread 
> currently holding it is compelled to release it after 5ms (that is, it 
> checks if it needs to release it every 5ms)."
> 
> With a 5ms window, it seems the following code would always protect the 
> file from being deleted between lines 4 and 5.
> 
> 
> 1 import os,threading
> 2 f_lock=threading.Lock()
> 3 with f_lock:
> 4   if os.path.isfile(filename):
> 5 with open(filename,'w') as f:
> 6   process(f)
> 
> 

You seem to be assuming that the program that might delete the file
is the same program trying to read the file.  I'm not assuming that.
My Python program might be trying to read the file at the same time
that a cron job is running a shell script that is trying to delete
the file.

> Also, this is just theoretical (I hope).  It would be terrible system 
> design if all those dozens of processes were reading and writing and 
> deleting the same file.

If you can design your system so that you know for sure no one else
is interested in fiddling with your file, then you have an easier
problem.  So far, that has not been shown to be the case. I'm
talking more generally about a program that can't assume those
constraints.
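
One way to close the gap entirely is to make the check and the action a 
single operation. A sketch using os.open with O_CREAT | O_EXCL, which asks 
the kernel to create-or-fail atomically:

```python
import os
import tempfile

path = os.path.join(tempfile.mkdtemp(), 'data')

def create_exclusively(p):
    # O_CREAT | O_EXCL fails with FileExistsError if the file already
    # exists; the existence check and the creation are one syscall, so
    # no other process can slip in between them.
    try:
        fd = os.open(p, os.O_CREAT | os.O_EXCL | os.O_WRONLY)
    except FileExistsError:
        return False
    os.close(fd)
    return True

print(create_exclusively(path))  # True: first creation wins
print(create_exclusively(path))  # False: it already exists
```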

--Ned.


Re: Request Help With Byte/String Problem

2016-12-01 Thread Wildman via Python-list
On Wed, 30 Nov 2016 14:39:02 +0200, Anssi Saari wrote:

> There'll be a couple more issues with the printing but they should be
> easy enough.

I finally figured it out, I think.  I'm not sure if my changes are
what you had in mind but it is working.  Below is the updated code.
Thank you for not giving me the answer.  It was a good learning
experience for me, and that was my purpose in the first place.

def format_ip(addr):
    # replaced ord() with int(): indexing a bytes object already yields ints
    return str(int(addr[0])) + '.' + \
           str(int(addr[1])) + '.' + \
           str(int(addr[2])) + '.' + \
           str(int(addr[3]))

ifs = all_interfaces()
for i in ifs:  # added decode("utf-8")
    print("%12s   %s" % (i[0].decode("utf-8"), format_ip(i[1])))
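
For what it's worth, the standard library can already do that conversion, 
assuming addr holds a packed 4-byte IPv4 address:

```python
import socket

# inet_ntoa turns a packed 4-byte address into dotted-quad notation.
print(socket.inet_ntoa(b'\x7f\x00\x00\x01'))  # 127.0.0.1
```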

Thanks again!

-- 
 GNU/Linux user #557453
May the Source be with you.


Re: Request Help With Byte/String Problem

2016-12-01 Thread Wildman via Python-list
On Wed, 30 Nov 2016 07:54:45 -0500, Dennis Lee Bieber wrote:

> On Tue, 29 Nov 2016 22:01:51 -0600, Wildman via Python-list
>  declaimed the following:
> 
>>I really appreciate your reply.  Your suggestion fixed that
>>problem, however, a new error appeared.  I am doing some
>>research to try to figure it out but no luck so far.
>>
>>Traceback (most recent call last):
>>  File "./ifaces.py", line 33, in 
>>ifs = all_interfaces()
>>  File "./ifaces.py", line 21, in all_interfaces
>>name = namestr[i:i+16].split('\0', 1)[0]
>>TypeError: Type str doesn't support the buffer API
> 
>   The odds are good that this is the same class of problem -- you are
> providing a Unicode string to a procedure that wants a byte-string (or vice
> versa)
> 
> https://docs.python.org/3/library/array.html?highlight=tostring#array.array.tostring

That helped.  Thanks.

-- 
 GNU/Linux user #557453
The cow died so I don't need your bull!


[issue23224] LZMADecompressor object is only initialized in __init__

2016-12-01 Thread Aaron Hill

Aaron Hill added the comment:

I've upload a patch which should address the issue in both the lzma and bz2 
modules.

--
keywords: +patch
nosy: +Aaron1011
Added file: http://bugs.python.org/file45730/fix-lzma-bz2-segfaults.patch




Re: compile error when using override

2016-12-01 Thread Ho Yeung Lee
from __future__ import division 
import ast 
from sympy import * 
x, y, z, t = symbols('x y z t') 
k, m, n = symbols('k m n', integer=True) 
f, g, h = symbols('f g h', cls=Function) 
import inspect 
def op2(a,b): 
return a*b+a 

class AA(object):
@staticmethod
def __additionFunction__(a1, a2):
return a1*a2 #Put what you want instead of this
def __multiplyFunction__(a1, a2):
return a1*a2+a1 #Put what you want instead of this
def __divideFunction__(a1, a2):
return a1*a1*a2 #Put what you want instead of this
def __init__(self, value):
self.value = value
def __add__(self, other):
return self.value*other.value
def __mul__(self, other):
return self.value*other.value + other.value
def __div__(self, other):
return self.value*other.value*other.value

solve([AA(x)*AA(y) + AA(-1), AA(x) + AA(-2)], x, y)

>>> class AA(object):
... @staticmethod
... def __additionFunction__(a1, a2):
... return a1*a2 #Put what you want instead of this
... def __multiplyFunction__(a1, a2):
... return a1*a2+a1 #Put what you want instead of this
... def __divideFunction__(a1, a2):
... return a1*a1*a2 #Put what you want instead of this
... def __init__(self, value):
... self.value = value
... def __add__(self, other):
... return self.value*other.value
... def __mul__(self, other):
... return self.value*other.value + other.value
... def __div__(self, other):
... return self.value*other.value*other.value
...
>>> solve([AA(x)*AA(y) + AA(-1), AA(x) + AA(-2)], x, y)
Traceback (most recent call last):
  File "", line 1, in 
TypeError: unsupported operand type(s) for +: 'Add' and 'AA'


On Thursday, December 1, 2016 at 7:19:58 PM UTC+8, Steve D'Aprano wrote:
> On Thu, 1 Dec 2016 05:26 pm, Ho Yeung Lee wrote:
> 
> > import ast
> > from __future__ import division
> 
> That's not actually your code. That will be a SyntaxError.
> 
> Except in the interactive interpreter, "__future__" imports must be the very
> first line of code.
> 
> 
> > class A:
> >     @staticmethod
> >     def __additionFunction__(a1, a2):
> >         return a1*a2 #Put what you want instead of this
> 
> That cannot work in Python 2, because you are using a "classic"
> or "old-style" class. For staticmethod to work correctly, you need to
> inherit from object:
> 
> class A(object):
> ...
> 
> 
> Also, do not use double-underscore names for your own functions or methods.
> __NAME__ (two leading and two trailing underscores) are reserved for
> Python's internal use. You should not invent your own.
> 
> Why do you need this "additionFunction" method for? Why not put this in the
> __add__ method?
> 
> >   def __add__(self, other):
> >       return self.__class__.__additionFunction__(self.value, other.value)
> >   def __mul__(self, other):
> >       return self.__class__.__multiplyFunction__(self.value, other.value)
> 
> They should be:
> 
> def __add__(self, other):
> return self.additionFunction(self.value, other.value)
> 
> def __mul__(self, other):
> return self.multiplyFunction(self.value, other.value)
> 
> Or better:
> 
> def __add__(self, other):
> return self.value + other.value
> 
> def __mul__(self, other):
> return self.value * other.value
> 
> 
> 
> -- 
> Steve
> “Cheer up,” they said, “things could be worse.” So I cheered up, and sure
> enough, things got worse.



[issue28856] %b format for bytes does not support objects that follow the buffer protocol

2016-12-01 Thread Alexander Belopolsky

New submission from Alexander Belopolsky:

Python 3.7.0a0 (default:be70d64bbf88, Dec  1 2016, 21:21:25)
[GCC 4.2.1 Compatible Apple LLVM 8.0.0 (clang-800.0.42.1)] on darwin
Type "help", "copyright", "credits" or "license" for more information.
>>> from array import array
>>> a = array('B', [1, 2])
>>> b'%b' % a
Traceback (most recent call last):
  File "", line 1, in 
TypeError: %b requires bytes, or an object that implements __bytes__, not 
'array.array'
>>> m = memoryview(a)
>>> b'%b' % m
Traceback (most recent call last):
  File "", line 1, in 
TypeError: %b requires bytes, or an object that implements __bytes__, not 
'memoryview'

According to the documentation [1], objects that follow the buffer protocol should 
be supported.  Both array.array and memoryview follow the buffer protocol.

[1]: 
https://docs.python.org/3/library/stdtypes.html#printf-style-bytes-formatting


See also issue 20284 and PEP 461.
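Until this is addressed, an explicit conversion seems to be the only way around it (a sketch of the workaround, not a proposed fix):

```python
from array import array

a = array('B', [65, 66])
# Workaround: convert the buffer-protocol object to bytes explicitly
# before using it with %b.
print(b'%b' % bytes(a))  # b'AB'
```

The same conversion works for memoryview objects (bytes(memoryview(a))).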

--
messages: 282215
nosy: belopolsky
priority: normal
severity: normal
stage: test needed
status: open
title: %b format for bytes does not support objects that follow the buffer 
protocol
type: behavior




[issue23507] Tuple creation is too slow

2016-12-01 Thread STINNER Victor

STINNER Victor added the comment:

>  I opened a new issue #28852 to track this performance regression.

So can I close again this issue?

--
resolution:  -> fixed
status: open -> closed




[issue28852] sorted(range(1000)) is slower in Python 3.7 compared to Python 3.5

2016-12-01 Thread STINNER Victor

STINNER Victor added the comment:

On my laptop, the revision introducing the performance regression is:
---
changeset:   101858:5a62d682636e
user:Brett Cannon 
date:Fri Jun 10 14:37:21 2016 -0700
files:   Doc/library/os.rst Lib/test/test_os.py Misc/NEWS 
Modules/posixmodule.c
description:
Issue #27186: Add os.PathLike support to DirEntry

Initial patch thanks to Jelle Zijlstra.
---

This change is unrelated to sorted(list). It looks more like a "random 
performance" caused by code placement:

* https://haypo.github.io/analysis-python-performance-issue.html
* https://haypo.github.io/journey-to-stable-benchmark-deadcode.html

According to perf record/perf report, the benchmark spends most of its time in 
PyObject_RichCompareBool() and long_richcompare():

Overhead  Command  Shared Object   Symbol
  41,98%  python   python  [.] PyObject_RichCompareBool
  35,36%  python   python  [.] long_richcompare
   8,52%  python   python  [.] listsort
   6,29%  python   python  [.] listextend
   5,31%  python   python  [.] list_dealloc

So I guess that the exact code placement of these two functions has a 
"significant" impact on performance. "Significant" here meaning:

* rev b0be24a2f16c (fast): Median +- std dev: 15.0 us +- 0.1 us
* rev 5a62d682636e (slow): Median +- std dev: 16.3 us +- 0.0 us

The revision 5a62d682636e makes sorted(list) 9% slower.

--

Enabling PGO on compilation should help to get a more reliable code placement, 
and so more stable performances.

I suggest to close this issue as NOTABUG: ./configure --with-pgo should already 
fix this issue.

--




[issue28855] Compiler warnings in _PyObject_CallArg1()

2016-12-01 Thread STINNER Victor

New submission from STINNER Victor:

_PyObject_CallArg1() is the following macro:
---
#define _PyObject_CallArg1(func, arg) \
_PyObject_FastCall((func), &(arg), 1)
---

It works well in most cases, but my change 8f258245c391 or change b9c9691c72c5 
added compiler warnings, because an argument which is not directly a PyObject* 
type is passed as "arg".

I tried to cast in the caller: _PyObject_CallArg1(func, (PyObject*)arg), but 
sadly it doesn't work :-( I get a compiler error.

Another option is to cast after "&" in the macro:
---
 #define _PyObject_CallArg1(func, arg) \
-_PyObject_FastCall((func), &(arg), 1)
+_PyObject_FastCall((func), (PyObject **)&(arg), 1)
---

This option may hide real bugs, so I dislike it.

A better option is to stop using ugly C macros and use a modern static inline 
function: see attached static_inline.patch. This patch casts to PyObject* 
before calling _PyObject_CallArg1() to fix warnings.

The question is if compilers are able to emit efficient code for static inline 
functions using "&" on an argument. I wrote the macro to implement the "&" 
optimization: convert a PyObject* to a stack of arguments: C array of PyObject* 
objects.

--
files: static_inline.patch
keywords: patch
messages: 282212
nosy: benjamin.peterson, haypo, serhiy.storchaka
priority: normal
severity: normal
status: open
title: Compiler warnings in _PyObject_CallArg1()
versions: Python 3.7
Added file: http://bugs.python.org/file45729/static_inline.patch




[issue28740] Add sys.getandroidapilevel()

2016-12-01 Thread STINNER Victor

STINNER Victor added the comment:

> The version > 0 check was done because sysconfig.get_config_var() returns 0 
> when a variable is found as '#undef' in pyconfig.h.

Oh right, I see. In this case, we don't need to have a special case in 
sys.getandroidapilevel().

I pushed getandroidapilevel-3.patch with a change: I added a small unit test. 
It checks that level > 0. I hesitated to also check for a maximum, but I'm not 
sure that it makes sense.

Xavier: can please double test that sys.getandroidapilevel() works on Android? 
If it's the case, we can move on the issue #28596 :-)

--




[issue28740] Add sys.getandroidapilevel()

2016-12-01 Thread Roundup Robot

Roundup Robot added the comment:

New changeset be70d64bbf88 by Victor Stinner in branch 'default':
Add sys.getandroidapilevel()
https://hg.python.org/cpython/rev/be70d64bbf88

--
nosy: +python-dev




Re: correct way to catch exception with Python 'with' statement

2016-12-01 Thread Ned Batchelder
On Thursday, December 1, 2016 at 2:31:11 PM UTC-5, DFS wrote:
> After a simple test below, I submit that the above scenario would never 
> occur.  Ever.  The time gap between checking for the file's existence 
> and then trying to open it is far too short for another process to sneak 
> in and delete the file.

It doesn't matter how quickly the first operation is (usually) followed
by the second.  Your process could be swapped out between the two
operations. On a heavily loaded machine, there could be a very long
time between them even if on an average machine, they are executed very
quickly.

For most programs, yes, it probably will never be a problem to check
for existence, and then assume that the file still exists.  But put that
code on a server, and run it a couple of million times, with dozens of
other processes also manipulating files, and you will see failures.

How to best deal with this situation depends on what might happen to the
file, and how you can best coordinate with those other programs. Locks
only help if all the interfering programs also use those same locks. A
popular strategy is to simply use the file, and deal with the error that
happens if the file doesn't exist, though that might not make sense
depending on the logic of the program.  You might have to check that the
file exists, and then also deal with the (slim) possibility that it then
doesn't exist.
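In Python 3, the "use the file and deal with the error" strategy can be sketched like this (the function name is illustrative, not from the posts above):

```python
def read_if_exists(filename):
    # EAFP: attempt the open and handle the failure, rather than
    # checking os.path.exists() first (which leaves a race window
    # in which another process could delete the file).
    try:
        with open(filename) as f:
            return f.read()
    except FileNotFoundError:
        return None
```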

--Ned.


Re: Can json.dumps create multiple lines

2016-12-01 Thread Cecil Westerhof
On Thursday  1 Dec 2016 23:58 CET, Peter Otten wrote:

> Cecil Westerhof wrote:
>
>> On Thursday  1 Dec 2016 22:52 CET, Cecil Westerhof wrote:
>>
>>> Now I need to convert the database. But that should not be a big
>>> problem.
>>
>> I did the conversion with:
>> cursor.execute('SELECT tipID FROM tips')
>> ids = cursor.fetchall()
>> for id in ids:
>> id = id[0]
>> cursor.execute('SELECT tip from tips WHERE tipID = ?', [id])
>> old_value = cursor.fetchone()[0]
>> new_value = json.dumps(json.loads(old_value), indent = 0)
>> cursor.execute('UPDATE tips SET tip = ? WHERE tipID = ?',
>> [new_value, id])
>
> The sqlite3 module lets you define custom functions written in
> Python:
>
> db = sqlite3.connect(...)
> cs = db.cursor()
>
> def convert(s):
> return json.dumps(
> json.loads(s),
> indent=0
> )
>
> db.create_function("convert", 1, convert)
> cs.execute("update tips set tip = convert(tip)")

That is a lot better as what I did. Thank you.

-- 
Cecil Westerhof
Senior Software Engineer
LinkedIn: http://www.linkedin.com/in/cecilwesterhof


[issue2771] Test issue

2016-12-01 Thread Ezio Melotti

Ezio Melotti added the comment:

Testing that the link to PR 46 works.
Pull Request 46 should also be a link.
PullRequest 46 too.
Generally you should just use PR46.

--





Re: Can json.dumps create multiple lines

2016-12-01 Thread Peter Otten
Cecil Westerhof wrote:

> On Thursday  1 Dec 2016 22:52 CET, Cecil Westerhof wrote:
> 
>> Now I need to convert the database. But that should not be a big
>> problem.
> 
> I did the conversion with:
> cursor.execute('SELECT tipID FROM tips')
> ids = cursor.fetchall()
> for id in ids:
> id = id[0]
> cursor.execute('SELECT tip from tips WHERE tipID = ?', [id])
> old_value = cursor.fetchone()[0]
> new_value = json.dumps(json.loads(old_value), indent = 0)
> cursor.execute('UPDATE tips SET tip = ? WHERE tipID = ?',
> [new_value, id])

The sqlite3 module lets you define custom functions written in Python:

db = sqlite3.connect(...)
cs = db.cursor()
 
def convert(s):
return json.dumps(
json.loads(s),
indent=0
)

db.create_function("convert", 1, convert)
cs.execute("update tips set tip = convert(tip)")
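A self-contained run of the same idea (table name and contents invented for illustration):

```python
import json
import sqlite3

db = sqlite3.connect(":memory:")
cs = db.cursor()
cs.execute("CREATE TABLE tips (tipID INTEGER PRIMARY KEY, tip TEXT)")
cs.execute("INSERT INTO tips (tip) VALUES (?)", [json.dumps(["a", "b"])])

def convert(s):
    # Re-serialize with indent=0 so each element lands on its own line.
    return json.dumps(json.loads(s), indent=0)

# Register the Python function so SQL can call it directly,
# avoiding the fetch/loop/update round trips.
db.create_function("convert", 1, convert)
cs.execute("UPDATE tips SET tip = convert(tip)")
print(cs.execute("SELECT tip FROM tips").fetchone()[0])
```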




[issue26363] __builtins__ propagation is misleading described in exec and eval documentation

2016-12-01 Thread Xavier Combelle

Xavier Combelle added the comment:

Not an inconsistency, but the eval documentation does not specify that 
__builtins__ propagates between levels.

--




Re: Error In querying Genderize.io. Can someone please help

2016-12-01 Thread John Gordon
In  handa...@gmail.com 
writes:

> import requests
> import json
> names={'katty','Shean','Rajat'};
> for name in names:
> request_string="http://api.genderize.io/?"+name
> r=requests.get(request_string)
> result=json.loads(r.content)

You're using http: instead of https:, and you're using ?katty instead
of ?name=katty, and therefore the host does not recognize your request
as an API call and redirects you to the normal webpage.
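A sketch of the corrected request URL (only the URL construction changes; the rest of the original loop stays as posted):

```python
from urllib.parse import urlencode

def genderize_url(name):
    # Use https and a proper "name=" query parameter so the host
    # recognizes the request as an API call instead of redirecting
    # to the normal web page.
    return "https://api.genderize.io/?" + urlencode({"name": name})

print(genderize_url("katty"))  # https://api.genderize.io/?name=katty
```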

-- 
John Gordon   A is for Amy, who fell down the stairs
gor...@panix.com  B is for Basil, assaulted by bears
-- Edward Gorey, "The Gashlycrumb Tinies"



Re: Can json.dumps create multiple lines

2016-12-01 Thread Cecil Westerhof
On Thursday  1 Dec 2016 22:52 CET, Cecil Westerhof wrote:

> Now I need to convert the database. But that should not be a big
> problem.

I did the conversion with:
cursor.execute('SELECT tipID FROM tips')
ids = cursor.fetchall()
for id in ids:
id = id[0]
cursor.execute('SELECT tip from tips WHERE tipID = ?', [id])
old_value = cursor.fetchone()[0]
new_value = json.dumps(json.loads(old_value), indent = 0)
cursor.execute('UPDATE tips SET tip = ? WHERE tipID = ?', [new_value, 
id])

-- 
Cecil Westerhof
Senior Software Engineer
LinkedIn: http://www.linkedin.com/in/cecilwesterhof


Re: Can json.dumps create multiple lines

2016-12-01 Thread Cecil Westerhof
On Thursday  1 Dec 2016 17:55 CET, Zachary Ware wrote:

> On Thu, Dec 1, 2016 at 10:30 AM, Cecil Westerhof  wrote:
>> I would prefer when it would generate:
>> '[
>> "An array",
>> "with several strings",
>> "as a demo"
>> ]'
>>
>> Is this possible, or do I have to code this myself?
>
> https://docs.python.org/3/library/json.html?highlight=indent#json.dump
>
> Python 3.5.2 (v3.5.2:4def2a2901a5, Jun 26 2016, 10:47:25) [GCC 4.2.1
> (Apple Inc. build 5666) (dot 3)] on darwin Type "help", "copyright",
> "credits" or "license" for more information.
> >>> import json
> >>> json.dumps(["An array", "with several strings", "as a demo"])
> '["An array", "with several strings", "as a demo"]'
> >>> print(_)
> ["An array", "with several strings", "as a demo"]
> >>> json.dumps(["An array", "with several strings", "as a demo"],
> ... indent=0)
> '[\n"An array",\n"with several strings",\n"as a demo"\n]'
> >>> print(_)
> [
> "An array",
> "with several strings",
> "as a demo"
> ]

Works like a charm. Strings can themselves contain newlines, in which
case I do not want an extra line break, and that also works as expected.

I used:
cursor.execute('INSERT INTO test (json) VALUES (?)' ,
   [json.dumps(['An array',
'with several strings',
'as a demo',
'and\none\nwith\na\nnewlines'],
   indent = 0)])

and that gave exactly what I wanted.

Now I need to convert the database. But that should not be a big
problem.


> I've also seen something about JSON support in SQLite, you may want
> to look into that.

I will do that, but later. I have what I need.

-- 
Cecil Westerhof
Senior Software Engineer
LinkedIn: http://www.linkedin.com/in/cecilwesterhof


[issue27779] Sync-up docstrings in C version of the decimal module

2016-12-01 Thread Lisa Roach

Lisa Roach added the comment:

This (should) be the patch with the python docstrings copied over to the C 
version.

--
Added file: http://bugs.python.org/file45728/docstrings.patch




Error In querying Genderize.io. Can someone please help

2016-12-01 Thread handar94
import requests
import json
names={'katty','Shean','Rajat'};
for name in names:
request_string="http://api.genderize.io/?"+name
r=requests.get(request_string)
result=json.loads(r.content)


Error---
Traceback (most recent call last):
  File "C:/Users/user/PycharmProjects/untitled7/Mis1.py", line 7, in 
result=json.loads(r.content)
  File "C:\Users\user\Anaconda2\lib\json\__init__.py", line 339, in loads
return _default_decoder.decode(s)
  File "C:\Users\user\Anaconda2\lib\json\decoder.py", line 364, in decode
obj, end = self.raw_decode(s, idx=_w(s, 0).end())
  File "C:\Users\user\Anaconda2\lib\json\decoder.py", line 382, in raw_decode
raise ValueError("No JSON object could be decoded")
ValueError: No JSON object could be decoded

Can someone please help.


[issue26363] __builtins__ propagation is misleading described in exec and eval documentation

2016-12-01 Thread Julien Palard

Julien Palard added the comment:

So, is there still an inconsistency in the documentation?

--




[issue28840] IDLE: Document tk's long line display limitation

2016-12-01 Thread Terry J. Reedy

Changes by Terry J. Reedy :


--
stage:  -> test needed
title: IDLE not handling long lines correctly -> IDLE: Document tk's long line 
display limitation
versions: +Python 2.7, Python 3.6, Python 3.7




[issue28840] IDLE not handling long lines correctly

2016-12-01 Thread Terry J. Reedy

Terry J. Reedy added the comment:

Currently, the code displayed is the code saved and run when requested.  Your 
idea would require that IDLE keep a 'true' copy of the code separate from the 
'display' copy in the Text widget, with the two being kept in sync except in 
special situations such as super long lines.  This would be an error prone 
process.  It would require either having two text widgets, one displayed, one 
not, kept nearly in sync, or writing a Python equivalent of the non-display 
part of the text widget.  Or the true copy could be a virtual copy represented 
by the edits required to restore the true copy from the display copy.  Any of 
these would be tedious, error prone, and would make editing slower. I rejected 
the idea of making tabs visible with a special character for the same reason.

I looked at the tk Text doc to see if the line length limitation is mentioned.  
It is not, as far as I could find ('2' occurs in the text, but no longer digit 
string).  It might be Windows-specific.  It is possible that there should be a 
new issue opened on the tcl/tk tracker, but more info should be collected 
first, to test on other OSes and directly with tcl/tk.  I have more pressing 
issues to work on ;-).

While looking, I read the entry on peer text widgets, which allow multiple 
views of the *same* underlying text data.  A peer can be restricted to a subset 
of lines but not part of a line or lines.  (There can also be different default 
fonts and insert cursor positions.)

--




[issue28854] FIPS mode causes dead-lock in ssl module

2016-12-01 Thread Alex Gaynor

Changes by Alex Gaynor :


--
nosy: +alex, dstufft, janssen




[issue28854] FIPS mode causes dead-lock in ssl module

2016-12-01 Thread Christian Heimes

Christian Heimes added the comment:

It's a dead lock in OpenSSL. :(

if (n == CRYPTO_LOCK_RAND) {
fprintf(stderr, "%s%s %i %s:%i\n",
(mode & CRYPTO_READ) ? "R" : "W",
(mode & CRYPTO_LOCK) ? "L" : "U",
n, file, line);
}

test_random (test.test_ssl.BasicSocketTests) ... RLCK 18 fips_drbg_rand.c:124
RUNL 18 fips_drbg_rand.c:126

 RAND_status is 1 (sufficient randomness)
WLCK 18 fips_drbg_rand.c:80
WUNL 18 fips_drbg_rand.c:109
WLCK 18 fips_drbg_rand.c:80
WUNL 18 fips_drbg_rand.c:109
WLCK 18 md_rand.c:230
WUNL 18 md_rand.c:262
WLCK 18 md_rand.c:311
WUNL 18 md_rand.c:324
RLCK 18 fips_drbg_rand.c:124
RUNL 18 fips_drbg_rand.c:126
WLCK 18 rand_lib.c:240
RLCK 18 fips_drbg_rand.c:124

--




[issue28740] Add sys.getandroidapilevel()

2016-12-01 Thread Xavier de Gaye

Xavier de Gaye added the comment:

> getandroidapilevel-3.patch: Updated version of getandroidapilevel.patch: 
> replace "runtime" with "build time" in the doc :-) Remove also "version > 0" 
> check in support/__init__.py.

LGTM

> About the version > 0 check: would it make sense to add the check in 
> configure.ac? I guess that it was Xavier who wrote the test in 
> support/__init__.py?

The version > 0 check was done because sysconfig.get_config_var() returns 0 
when a variable is found as '#undef' in pyconfig.h.  As a reference, the list 
of the existing API levels with their corresponding Android versions and names:

  Android Version  ReleasedAPI Level  Name
  Android 7.1  October 201625 Nougat
  Android 7.0  August 2016 24 Nougat
  Android 6.0  August 2015 23 Marshmallow
  Android 5.1  March 2015  22 Lollipop
  Android 5.0  November 2014   21 Lollipop
  Android 4.4W June 2014   20 Kitkat Watch
  Android 4.4  October 201319 Kitkat
  Android 4.3  July 2013   18 Jelly Bean
  Android 4.2-4.2.2November 2012   17 Jelly Bean
  Android 4.1-4.1.1June 2012   16 Jelly Bean
  Android 4.0.3-4.0.4  December 2011   15 Ice Cream Sandwich
  Android 4.0-4.0.2October 201114 Ice Cream Sandwich
  Android 3.2  June 2011   13 Honeycomb
  Android 3.1.xMay 201112 Honeycomb
  Android 3.0.xFebruary 2011   11 Honeycomb
  Android 2.3.3-2.3.4  February 2011   10 Gingerbread
  Android 2.3-2.3.2November 2010   9  Gingerbread
  Android 2.2.xJune 2010   8  Froyo
  Android 2.1.xJanuary 20107  Eclair
  Android 2.0.1December 2009   6  Eclair
  Android 2.0  November 2009   5  Eclair
  Android 1.6  September 2009  4  Donut
  Android 1.5  May 20093  Cupcake
  Android 1.1  February 2009   2  Base
  Android 1.0  October 20081  Base

> I suggest to start to add sys.getandroidapilevel(), and then discuss how to 
> expose (or not) the runtime version.

Agreed and thanks for the patch Victor :)

--
components: +Interpreter Core
stage:  -> patch review




Re: Can json.dumps create multiple lines

2016-12-01 Thread Tim Chase
On 2016-12-01 17:30, Cecil Westerhof wrote:
> When I have a value dummy which contains:
> ['An array', 'with several strings', 'as a demo']
> Then json.dumps(dummy) would generate:
> '["An array", "with several strings", "as a demo"]'
> I would prefer when it would generate:
> '[
>  "An array",
>  "with several strings",
>  "as a demo"
>  ]'
> 
> Is this possible, or do I have to code this myself?

print(json.dumps(['An array', 'with several strings', 'as a demo'],
indent=0))

for the basics of what you ask, though you can change indent= to
indent the contents for readability.

-tkc





[issue28854] FIPS mode causes dead-lock in ssl module

2016-12-01 Thread Christian Heimes

New submission from Christian Heimes:

Python's ssl module is dead-locking when OpenSSL is running in FIPS mode. I 
first noticed it with pip. The issue is also reproducible with Python's test 
suite.

$ sudo touch /etc/system-fips

$ OPENSSL_FORCE_FIPS_MODE=1 ./python -m test.regrtest -v test_ssl
== CPython 2.7.12+ (2.7:adb296e4bcaa, Dec 1 2016, 21:14:20) [GCC 6.2.1 20160916 
(Red Hat 6.2.1-2)]
==   Linux-4.8.8-200.fc24.x86_64-x86_64-with-fedora-24-Twenty_Four little-endian
==   /home/heimes/dev/python/2.7/build/test_python_29991
Testing with flags: sys.flags(debug=0, py3k_warning=0, division_warning=0, 
division_new=0, inspect=0, interactive=0, optimize=0, dont_write_bytecode=0, 
no_user_site=0, no_site=0, ignore_environment=0, tabcheck=0, verbose=0, 
unicode=0, bytes_warning=0, hash_randomization=0)
[1/1] test_ssl
test_ssl: testing with 'OpenSSL 1.0.2j-fips  26 Sep 2016' (1, 0, 2, 10, 15)
  under Linux ('Fedora', '24', 'Twenty Four')
  HAS_SNI = True
  OP_ALL = 0x83ff
  OP_NO_TLSv1_1 = 0x1000
test__create_stdlib_context (test.test_ssl.ContextTests) ... ok
test__https_verify_certificates (test.test_ssl.ContextTests) ... ok
test__https_verify_envvar (test.test_ssl.ContextTests) ... ok
test_cert_store_stats (test.test_ssl.ContextTests) ... ok
test_check_hostname (test.test_ssl.ContextTests) ... ok
test_ciphers (test.test_ssl.ContextTests) ... ok
test_constructor (test.test_ssl.ContextTests) ... ERROR
test_create_default_context (test.test_ssl.ContextTests) ... ok
test_get_ca_certs (test.test_ssl.ContextTests) ... ok
test_load_cert_chain (test.test_ssl.ContextTests) ... ERROR
test_load_default_certs (test.test_ssl.ContextTests) ... ok
test_load_default_certs_env (test.test_ssl.ContextTests) ... ok
test_load_default_certs_env_windows (test.test_ssl.ContextTests) ... skipped 
'Windows specific'
test_load_dh_params (test.test_ssl.ContextTests) ... ok
test_load_verify_cadata (test.test_ssl.ContextTests) ... ok
test_load_verify_locations (test.test_ssl.ContextTests) ... ok
test_options (test.test_ssl.ContextTests) ... ok
test_protocol (test.test_ssl.ContextTests) ... ERROR
test_session_stats (test.test_ssl.ContextTests) ... ERROR
test_set_default_verify_paths (test.test_ssl.ContextTests) ... ok
test_set_ecdh_curve (test.test_ssl.ContextTests) ... 


(gdb) bt
#0  0x7f0d9f8470c7 in do_futex_wait.constprop () from /lib64/libpthread.so.0
#1  0x7f0d9f847174 in __new_sem_wait_slow.constprop.0 () from 
/lib64/libpthread.so.0
#2  0x7f0d9f84721a in sem_wait@@GLIBC_2.2.5 () from /lib64/libpthread.so.0
#3  0x0051e013 in PyThread_acquire_lock (lock=0x27433a0, waitflag=1) at 
Python/thread_pthread.h:324
#4  0x7f0d937b0dce in _ssl_thread_locking_function (mode=5, n=18, 
file=0x7f0d96e417ac "fips_drbg_rand.c", line=124)
at /home/heimes/dev/python/2.7/Modules/_ssl.c:4000
#5  0x7f0d96df1d7e in fips_drbg_status () from /lib64/libcrypto.so.10
#6  0x7f0d96d75b0e in drbg_rand_add () from /lib64/libcrypto.so.10
#7  0x7f0d96d76645 in RAND_poll () from /lib64/libcrypto.so.10
#8  0x7f0d96d75237 in ssleay_rand_bytes () from /lib64/libcrypto.so.10
#9  0x7f0d96d75c33 in drbg_get_entropy () from /lib64/libcrypto.so.10
#10 0x7f0d96df10c8 in fips_get_entropy () from /lib64/libcrypto.so.10
#11 0x7f0d96df1226 in drbg_reseed () from /lib64/libcrypto.so.10
#12 0x7f0d96d75bb8 in drbg_rand_seed () from /lib64/libcrypto.so.10
#13 0x7f0d96d5da84 in ECDSA_sign_ex () from /lib64/libcrypto.so.10
#14 0x7f0d96d5db00 in ECDSA_sign () from /lib64/libcrypto.so.10
#15 0x7f0d96d3bc10 in pkey_ec_sign () from /lib64/libcrypto.so.10
#16 0x7f0d96d81359 in EVP_SignFinal () from /lib64/libcrypto.so.10
#17 0x7f0d96def373 in fips_pkey_signature_test () from 
/lib64/libcrypto.so.10
#18 0x7f0d96d38fe0 in EC_KEY_generate_key () from /lib64/libcrypto.so.10
#19 0x7f0d970d7d7c in ssl3_ctx_ctrl () from /lib64/libssl.so.10
#20 0x7f0d937af934 in set_ecdh_curve (self=0x7f0d92a23f78, 
name='prime256v1') at /home/heimes/dev/python/2.7/Modules/_ssl.c:3110
#21 0x004d8acc in call_function (pp_stack=0x7ffcc0334730, oparg=1) at 
Python/ceval.c:4340
#22 0x004d37a9 in PyEval_EvalFrameEx (
f=Frame 0x7f0d929bf460, for file 
/home/heimes/dev/python/2.7/Lib/test/test_ssl.py, line 1023, in 
test_set_ecdh_curve 
(self=, _type_equality_funcs={: 'assertMultiLineEqual', : 
'assertTupleEqual', : 'assertSetEqual', : 'assertListEqual', : 
'assertDictEqual', : 'assertSetEqual'}, 
_testMethodDoc=None, _testMethodName='test_load_default_certs_env_windows', 
_cleanups=[]) at remote 0x7f0d92a80ae0>, 'Windows specific')], 
_mirrorOutput=False, stream=<_WritelnDecorator(stream=) at remote 0x7f0d93747450>, testsRun=21, buffer=False, 
_original_stderr=, showAll=True, 
_stdout_buffer=None, _stderr_buffer=None, 

Re: Asyncio -- delayed calculation

2016-12-01 Thread Ian Kelly
On Thu, Dec 1, 2016 at 12:53 AM, Christian Gollwitzer  wrote:
> well that works - but I think it it is possible to explain it, without
> actually understanding what it does behind the scences:
>
> x = foo()
> # schedule foo for execution, i.e. put it on a TODO list

This implies that if you never await foo it will still get done at
some point (e.g. when you await something else), which for coroutines
would be incorrect unless you call ensure_future() on it.

Come to think of it, it would probably not be a bad style rule to
consider that when you call something that returns an awaitable, you
should always either await or ensure_future it (or something else that
depends on it). Barring the unusual case where you want to create an
awaitable but *not* immediately schedule it.
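A minimal sketch of that rule (the `foo` coroutine and the zero-delay sleep
are only for illustration):

```python
import asyncio

async def foo():
    return 42

async def main():
    coro = foo()                        # nothing is scheduled yet
    task = asyncio.ensure_future(coro)  # now it is on the loop's TODO list
    await asyncio.sleep(0)              # awaiting something else lets it run
    assert task.done()                  # the task has already completed
    return await task

result = asyncio.run(main())
print(result)
```

Without the ensure_future() call, the bare coroutine object would just sit
there (and eventually trigger a "coroutine was never awaited" warning).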
-- 
https://mail.python.org/mailman/listinfo/python-list


[issue28852] sorted(range(1000)) is slower in Python 3.7 compared to Python 3.5

2016-12-01 Thread Eric N. Vander Weele

Changes by Eric N. Vander Weele :


--
nosy: +ericvw

___
Python tracker 

___
___
Python-bugs-list mailing list
Unsubscribe: 
https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



Re: OSError: [Errno 12] Cannot allocate memory

2016-12-01 Thread duncan smith
On 01/12/16 01:12, Chris Kaynor wrote:
> On Wed, Nov 30, 2016 at 4:54 PM, duncan smith  wrote:
>>
>> Thanks. So something like the following might do the job?
>>
>> def _execute(command):
>> p = subprocess.Popen(command, shell=False,
>>  stdout=subprocess.PIPE,
>>  stderr=subprocess.STDOUT,
>>  close_fds=True)
>> out_data, err_data = p.communicate()
>> if err_data:
>> print err_data
> 
> I did not notice it when I sent my first e-mail (but noted it in my
> second one) that the docstring in to_image is presuming that
> shell=True. That said, as it seems everybody is at a loss to explain
> your issue, perhaps there is some oddity, and if everything appears to
> work with shell=False, it may be worth changing to see if it does fix
> the problem. With other information since provided, it is unlikely,
> however.
> 
> Not specifying the stdin may help, however it will only reduce the
> file handle count by 1 per call (from 2), so there is probably a root
> problem that it will not help.
> 
> I would expect the communicate change to fix the problem, except for
> your follow-up indicating that you had tried that before without
> success.
> 
> Removing the manual stdout.read may fix it, if the problem is due to
> hanging processes, but again, your follow-up indicates thats not the
> problem - you should have zombie processes if that were the case.
> 
> A few new questions that you have not answered (nor have they been
> asked in this thread): How much memory does your system have? Are you
> running a 32-bit or 64-bit Python? Is your Python process being run
> with any additional limitations via system commands (I don't know the
> command, but I know it exists; similarly, if launched from a third
> app, it could be placing limits)?
> 
> Chris
> 

8 Gig, 64 bit, no additional limitations (other than any that might be
imposed by IDLE). In this case the simulation does consume *a lot* of
memory, but that hasn't been the case when I've hit this in the past. I
suppose that could be the issue here. I'm currently seeing if I can
reproduce the problem after adding the p.communicate(), but it seems to
be using more memory than ever (dog slow and up to 5 Gig of swap). In
the meantime I'm going to try to refactor to reduce memory requirements
- and 32 Gig of DDR3 has been ordered. I'll also dig out some code that
generated the same problem before to see if I can reproduce it. Cheers.

Duncan


[issue28740] Add sys.getandroidapilevel()

2016-12-01 Thread Chi Hsuan Yen

Chi Hsuan Yen added the comment:

Sorry for mixing different issues and proposing bad alternatives.

My last hope is that someone looks into lemburg's msg281253:

"I don't think the sys module is the right place to put the API, since it 
doesn't have anything to do with the Python system internals."

--




Re: Can json.dumps create multiple lines

2016-12-01 Thread John Gordon
In <87lgvz4no8@equus.decebal.nl> Cecil Westerhof  writes:

> I started to use json.dumps to put things in a SQLite database. But I
> think it would be handy when it would be easy to change the values
> manually.

> When I have a value dummy which contains:
> ['An array', 'with several strings', 'as a demo']
> Then json.dumps(dummy) would generate:
> '["An array", "with several strings", "as a demo"]'
> I would prefer when it would generate:
> '[
>  "An array",
>  "with several strings",
>  "as a demo"
>  ]'

json.dumps() has an 'indent' keyword argument, but I believe it only
enables indenting of each whole element, not individual members of a list.

Perhaps something in the pprint module?

-- 
John Gordon   A is for Amy, who fell down the stairs
gor...@panix.com  B is for Basil, assaulted by bears
-- Edward Gorey, "The Gashlycrumb Tinies"



Re: Can json.dumps create multiple lines

2016-12-01 Thread Zachary Ware
On Thu, Dec 1, 2016 at 10:30 AM, Cecil Westerhof  wrote:
> I would prefer when it would generate:
> '[
>  "An array",
>  "with several strings",
>  "as a demo"
>  ]'
>
> Is this possible, or do I have to code this myself?

https://docs.python.org/3/library/json.html?highlight=indent#json.dump

Python 3.5.2 (v3.5.2:4def2a2901a5, Jun 26 2016, 10:47:25)
[GCC 4.2.1 (Apple Inc. build 5666) (dot 3)] on darwin
Type "help", "copyright", "credits" or "license" for more information.
>>> import json
>>> json.dumps(["An array", "with several strings", "as a demo"])
'["An array", "with several strings", "as a demo"]'
>>> print(_)
["An array", "with several strings", "as a demo"]
>>> json.dumps(["An array", "with several strings", "as a demo"], indent=0)
'[\n"An array",\n"with several strings",\n"as a demo"\n]'
>>> print(_)
[
"An array",
"with several strings",
"as a demo"
]

I've also seen something about JSON support in SQLite, you may want to
look into that.

-- 
Zach


[issue28853] locals() and free variables

2016-12-01 Thread Marco Buttu

New submission from Marco Buttu:

The locals() documentation [1] says that "Free variables are returned by 
locals() when it is called in function blocks". A free variable inside a 
function has a global scope, and in fact it is not returned by locals()::

>>> x = 33
>>> def foo():
... print(x)
... print(locals())
... 
>>> foo()
33
{}

Maybe "function blocks" here means "closure"? Does the doc mean this?

>>> def foo():
... x = 33
... def moo():
... print(x)
... print(locals())
... return moo
... 
>>> moo = foo()
>>> moo()
33
{'x': 33}

In that case, I think it is better to write "closures" instead of 
"function blocks".


[1] https://docs.python.org/3/library/functions.html#locals

--
assignee: docs@python
components: Documentation
messages: 282200
nosy: docs@python, marco.buttu
priority: normal
severity: normal
status: open
title: locals() and free variables
type: enhancement
versions: Python 2.7, Python 3.3, Python 3.4, Python 3.5, Python 3.6, Python 3.7




[issue28850] Regression in Python 3: Subclassing PrettyPrinter.format doesn't work anymore

2016-12-01 Thread Guido van Rossum

Changes by Guido van Rossum :


--
nosy:  -gvanrossum




Can json.dumps create multiple lines

2016-12-01 Thread Cecil Westerhof
I started to use json.dumps to put things in a SQLite database. But I
think it would be handy when it would be easy to change the values
manually.

When I have a value dummy which contains:
['An array', 'with several strings', 'as a demo']
Then json.dumps(dummy) would generate:
'["An array", "with several strings", "as a demo"]'
I would prefer when it would generate:
'[
 "An array",
 "with several strings",
 "as a demo"
 ]'

Is this possible, or do I have to code this myself?

-- 
Cecil Westerhof
Senior Software Engineer
LinkedIn: http://www.linkedin.com/in/cecilwesterhof


[issue28843] asyncio.Task implemented in C loses __traceback__ for exceptions

2016-12-01 Thread Yury Selivanov

Yury Selivanov added the comment:

Thanks for reviewing the patch, Inada-san. Closing the issue.

--
priority: release blocker -> normal
resolution:  -> fixed
stage:  -> resolved
status: open -> closed
type:  -> behavior




[issue28852] sorted(range(1000)) is slower in Python 3.7 compared to Python 3.5

2016-12-01 Thread STINNER Victor

STINNER Victor added the comment:

Benchmark:

   ./python -m perf timeit -s 'x=list(range(1000))' 'sorted(x)'

Python 3.6 and 3.7 compared to Python 3.5:

   $ python3 -m perf compare_to 3.5.json.gz 3.6.json.gz 3.7.json.gz
   Median +- std dev: [3.5] 18.4 us +- 0.9 us -> [3.6] 20.5 us +- 0.9 us: 1.11x 
slower (+11%)
   Median +- std dev: [3.5] 18.4 us +- 0.9 us -> [3.7] 19.8 us +- 1.1 us: 1.08x 
slower (+8%)

I compiled Python with "./configure && make". The benchmark should be run again 
using LTO+PGO compilation to get more reliable benchmark results.

It seems like the benchmark is not very stable even with system tune (python3 
-m perf system tune, isolcpus and rcu_nocbs in the Linux command line). I ran 
the benchmark 3 times using --append to concatenate all runs to get enough 
samples.

Histograms:

$ python3 -m perf hist 3.5.json.gz 3.6.json.gz 3.7.json.gz 
[ 3.5.json ]
15.0 us:  1 #
15.2 us:  0 |
15.5 us:  3 ###
15.7 us:  4 
16.0 us:  7 ###
16.2 us:  5 #
16.5 us: 16 
16.7 us:  4 
17.0 us:  8 
17.2 us: 10 ##
17.4 us:  7 ###
17.7 us:  5 #
17.9 us:  5 #
18.2 us: 23 
18.4 us: 77 
###
18.7 us:  5 #
18.9 us:  0 |
19.2 us:  0 |
19.4 us:  0 |
19.7 us:  0 |
19.9 us:  0 |
20.1 us:  0 |
20.4 us:  0 |
20.6 us:  0 |
20.9 us:  0 |
21.1 us:  0 |

[ 3.6.json ]
15.0 us:  0 |
15.2 us:  0 |
15.5 us:  0 |
15.7 us:  0 |
16.0 us:  0 |
16.2 us:  0 |
16.5 us:  0 |
16.7 us:  0 |
17.0 us:  0 |
17.2 us:  0 |
17.4 us:  2 ###
17.7 us:  2 ###
17.9 us:  3 #
18.2 us:  4 ###
18.4 us:  3 #
18.7 us:  7 
18.9 us:  5 
19.2 us:  8 #
19.4 us:  6 ##
19.7 us:  7 
19.9 us:  9 ###
20.1 us: 24 
20.4 us: 16 ###
20.6 us: 47 
###
20.9 us: 27 #
21.1 us: 10 #

[ 3.7.json ]
15.0 us:  0 |
15.2 us:  0 |
15.5 us:  0 |
15.7 us:  0 |
16.0 us:  0 |
16.2 us:  1 ##
16.5 us:  0 |
16.7 us:  2 ###
17.0 us:  4 ##
17.2 us:  6 #
17.4 us:  4 ##
17.7 us: 11 #
17.9 us: 10 
18.2 us: 14 ##
18.4 us: 10 
18.7 us:  5 
18.9 us:  3 #
19.2 us: 10 
19.4 us:  6 #
19.7 us: 13 #
19.9 us: 50 
###
20.1 us: 10 
20.4 us: 19 ##
20.6 us:  2 ###
20.9 us:  0 |
21.1 us:  0 |

--
Added file: http://bugs.python.org/file45727/3.5.json.gz




[issue28843] asyncio.Task implemented in C loses __traceback__ for exceptions

2016-12-01 Thread Roundup Robot

Roundup Robot added the comment:

New changeset c9f68150cf90 by Yury Selivanov in branch '3.6':
Issue #28843: Fix asyncio C Task to handle exceptions __traceback__.
https://hg.python.org/cpython/rev/c9f68150cf90

New changeset a21a8943c59e by Yury Selivanov in branch 'default':
Merge 3.6 (issue #28843)
https://hg.python.org/cpython/rev/a21a8943c59e

--
nosy: +python-dev




[issue28851] namedtuples field_names sequence preferred

2016-12-01 Thread Raymond Hettinger

Raymond Hettinger added the comment:

Thanks for the reminder.  I'll make the doc updates after 3.6 is released 
(we're trying to not make any changes at all right now).

--
resolution:  -> later
versions:  -Python 3.3, Python 3.4, Python 3.5




[issue23507] Tuple creation is too slow

2016-12-01 Thread STINNER Victor

STINNER Victor added the comment:

Serhiy Storchaka: "Victor, your changes introduced a regression."

I suppose that you are refering to sorted(list) which seems to be slower. Do 
you have a specific change in mind? As I wrote, it doesn't use a key function, 
and so should not be impacted by my work on fast calls. I opened a new issue 
#28852 to track this performance regression.

--




[issue28852] sorted(range(1000)) is slower in Python 3.7 compared to Python 3.5

2016-12-01 Thread STINNER Victor

New submission from STINNER Victor:

Follow-up of my comment http://bugs.python.org/issue23507#msg282187 :

"sorted(list): Median +- std dev: [3.5] 17.5 us +- 1.0 us -> [3.7] 19.7 us +- 
1.1 us: 1.12x slower (+12%)"

"(...) sorted(list) is slower. I don't know why sorted(list) is slower. It 
doesn't use a key function, and so should not be impacted by FASTCALL changes 
made since Python 3.6."

--
messages: 282194
nosy: haypo, serhiy.storchaka
priority: normal
severity: normal
status: open
title: sorted(range(1000)) is slower in Python 3.7 compared to Python 3.5
type: performance
versions: Python 3.7




Re: The Case Against Python 3

2016-12-01 Thread Ned Batchelder
On Thursday, December 1, 2016 at 9:03:46 AM UTC-5, Paul  Moore wrote:
> While I agree that f-strings are more dangerous than people will immediately 
> realise (the mere fact that we call them f-*strings* when they definitely 
> aren't strings is an example of that), the problem here is clearly (IMO) with 
> the sloppy checking in gettext.


Can you elaborate on the dangers as you see them?

--Ned.


[issue28740] Add sys.getandroidapilevel()

2016-12-01 Thread Xavier de Gaye

Xavier de Gaye added the comment:

@Chi Hsuan Yen
And please, let us not waste any more time on lost battles, this suggestion of 
using sys.implementation has already been rejected at issue 27442 (see 
msg269748) as you must know since you were involved in the discussion there.

--




[issue18971] Use argparse in the profile/cProfile modules

2016-12-01 Thread Wolfgang Maier

Wolfgang Maier added the comment:

oops, typing in wrong window. Very sorry.

--
nosy: +wolma
title: calendar -> Use argparse in the profile/cProfile modules




[issue18971] calendar

2016-12-01 Thread Wolfgang Maier

Changes by Wolfgang Maier :


--
title: Use argparse in the profile/cProfile modules -> calendar




[issue28839] _PyFunction_FastCallDict(): replace PyTuple_New() with PyMem_Malloc()

2016-12-01 Thread STINNER Victor

STINNER Victor added the comment:

I pushed the change b9c9691c72c5 to replace PyObject_CallFunctionObjArgs() with
_PyObject_CallNoArg() or _PyObject_CallArg1(). These functions are simpler and
don't allocate memory on the C stack.

I made a change to PyObject_CallFunctionObjArgs() in Python 3.6 similar to this
issue: don't create a tuple, but use a small stack allocated on the C stack, or
allocate heap memory, to pass a C array of PyObject* to _PyObject_FastCall().

Currently, PyObject_CallFunctionObjArgs() uses a small stack for up to 5 
positional arguments: it allocates 40 bytes on the C stack.

Serhiy Storchaka: "The problem with C stack overflow is not new, but your patch 
may make it worse (I don't know if it actually makes it worse)."

I consider that 80 bytes is small enough for a C stack. As I wrote, we can 
reduce this "small stack" in the future if someone reports issues.


"Py_EnterRecursiveCall() is used for limiting Python stack. It can't prevent C 
stack overflow."

I know that the protection is not perfect. It's a heuristic.

I don't even know if it counts Python built-in functions, or only pure Python
functions.


But I'm not sure that I understood your comment: do you suggest using a tuple
and rejecting this issue? Reducing the size of the small stack? Or only
allocating memory on the heap?

If the issue is the memory allocated on the C stack, maybe we can use a free 
list for "stacks" (C array of PyObject*), as done for tuples? I'm not sure that 
a free list for PyMem_Malloc()/PyMem_Free() is useful, since it uses our 
efficient pymalloc.

--




[issue23507] Tuple creation is too slow

2016-12-01 Thread Serhiy Storchaka

Serhiy Storchaka added the comment:

Victor, your changes introduced a regression.

--
resolution: fixed -> 
status: closed -> open




[issue28740] Add sys.getandroidapilevel()

2016-12-01 Thread Xavier de Gaye

Xavier de Gaye added the comment:

> How about renaming sys.implementation._multiarch to 
> sys.implementation.target_architecture and make it public? 
> sys.androidapilevel() sounds too specific to me.

Please do not hijack this issue. The removal of sys.implementation._multiarch 
for Android is discussed in issue 28849 not here.

--




[issue23507] Tuple creation is too slow

2016-12-01 Thread STINNER Victor

STINNER Victor added the comment:

Serhiy: "I don't propose to commit this complicated patch, but these results 
can be used as a guide to the optimization of tuple creating. It is surprising 
to me that this patch has any effect at all."

A new "fast call" calling convention was added to Python 3.6 to avoid the 
creation of temporary tuples to pass function arguments. Benchmarks confirm 
that Python 3.7 is faster than Python 3.5 for the functions that Serhiy
suggested optimizing. I was also surprised that fast calls have a significant
effect on performance!

Fast calls don't require complex code like reuse_argtuples_3.patch, which
requires carefully checking the reference counter. In the past, a similar
optimization on property_descr_get() introduced complex bugs: see issue #24276
and issue #26811. Fast calls don't have such issues.

The implementation of fast calls is currently incomplete: tp_new, tp_init and 
tp_call slots still require a tuple and a dict for positional and keyword 
arguments. I'm working on an enhancement for Python 3.7 to also support fast
calls for these slots. In the meantime, callbacks using tp_call don't yet
benefit from fast calls.

That's why I had to keep the tuple/refcount optimization in 
property_descr_get(), to not introduce a performance regression.

I consider that this issue can now be closed as "fixed".

--
resolution:  -> fixed
status: open -> closed




[issue23507] Tuple creation is too slow

2016-12-01 Thread STINNER Victor

STINNER Victor added the comment:

I rewrote bench_builtins.py to use my new perf module.

Python 3.7 is between 1.27x and 1.42x faster than Python 3.6, but sorted(list) 
is slower. I don't know why sorted(list) is slower. It doesn't use a key 
function, and so should not be impacted by FASTCALL changes made since Python 
3.6.

Benchmark results of Python 3.7 (include latest FASTCALL patches) compared to 
Python 3.5 (no FASTCALL):

filter(lambda x: x, range(1000)): Median +- std dev: [3.5] 126 us +- 7 us -> 
[3.7] 94.6 us +- 4.8 us: 1.34x faster (-25%)
map(lambda x, y: x+y, range(1000), range(1000)): Median +- std dev: [3.5] 163 
us +- 10 us -> [3.7] 115 us +- 7 us: 1.42x faster (-30%)
map(lambda x: x, range(1000)): Median +- std dev: [3.5] 115 us +- 6 us -> [3.7] 
90.9 us +- 5.4 us: 1.27x faster (-21%)
sorted(list): Median +- std dev: [3.5] 17.5 us +- 1.0 us -> [3.7] 19.7 us +- 
1.1 us: 1.12x slower (+12%)
sorted(list, key=lambda x: x): Median +- std dev: [3.5] 122 us +- 10 us -> 
[3.7] 91.8 us +- 5.7 us: 1.33x faster (-25%)

Note: you need the development version of perf to run the script (future 
version 0.9.2).

--
Added file: http://bugs.python.org/file45726/bench_builtins.py




[issue23507] Tuple creation is too slow

2016-12-01 Thread Serhiy Storchaka

Serhiy Storchaka added the comment:

Here is an example that shows increased stack consumption.

With old code it counts to 74700, with the patch it counts only to 65400.
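The attachment itself isn't inlined in this archive; a minimal counter in the
same spirit is sketched below. This is only an approximation: the real
stack_overflow.py raises the limit far enough that the C stack, rather than
the Python-level limit, gives out, which is why it can end in a segfault
instead of a RecursionError. The modest limit here keeps the sketch safe to
run.

```python
import sys

sys.setrecursionlimit(5000)  # the real test raises this much higher

depth = 0

def recurse():
    global depth
    depth += 1
    recurse()

try:
    recurse()
except RecursionError:
    pass

print(depth)  # how deep we got before the interpreter gave up
```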

Please revert your changes.

--
Added file: http://bugs.python.org/file45725/stack_overflow.py




[issue10444] A mechanism is needed to override waiting for Python threads to finish

2016-12-01 Thread Emanuel Barry

Emanuel Barry added the comment:

Let's just close this.

--
nosy: +ebarry
resolution:  -> rejected
stage:  -> resolved
status: open -> closed




[issue28835] Change in behavior when overriding warnings.showwarning and with catch_warnings(record=True)

2016-12-01 Thread STINNER Victor

STINNER Victor added the comment:

It seems like a regression of Python 3.6, probably introduced by myself with 
add addition of warnings._showwarningmsg() and the source parameter.

--
nosy: +haypo, ned.deily
priority: normal -> release blocker




[issue10444] A mechanism is needed to override waiting for Python threads to finish

2016-12-01 Thread Michael Hughes

Michael Hughes added the comment:

Given that this is from five years ago, and I've moved on, I honestly can't say 
I care too deeply about this.

My use case was for handling threads:
* created by inexperienced python programmers that don't know about daemons
* using third party python scripts that it would be easier not to edit

I feel that my proposed change handles that in a reasonable way, and doesn't 
complicate the interface for threads terribly. Most users can completely ignore 
the new method I proposed, and it won't affect them. For those going to look 
for it, it'll be there.

But again, I'm not even working in Python and no one else has chimed in on this 
in five years. Does it matter anymore?

- Michael

> On Nov 30, 2016, at 1:58 PM, Julien Palard  wrote:
> 
> 
> Julien Palard added the comment:
> 
> If nobody has nothing to add on this issue, I think it just should be closed.
> 
> --
> 

--




[issue28638] Creating namedtuple is too slow to be used in common stdlib (e.g. functools)

2016-12-01 Thread Nick Coghlan

Nick Coghlan added the comment:

The concern with using the "generate a private module that can be cached" 
approach is that it doesn't generalise well - any time you want to 
micro-optimise a new module that way, you have to add a custom Makefile rule.

By contrast, Argument Clinic is a general purpose tool - adopting it for 
micro-optimisation in another file would just be a matter of adding that file 
to the list of files that trigger a clinic run. functools.py would be somewhat 
notable as the first Python file we do that for, but it isn't a novel concept 
overall.

That leads into my main comment on the AC patch: the files that are explicitly 
listed as triggering a new clinic run should be factored out into a named 
variable and that list commented accordingly.

--




[issue28839] _PyFunction_FastCallDict(): replace PyTuple_New() with PyMem_Malloc()

2016-12-01 Thread Serhiy Storchaka

Serhiy Storchaka added the comment:

The problem with C stack overflow is not new, but your patch may make it worse 
(I don't know if it actually makes it worse). Py_EnterRecursiveCall() is used 
for limiting Python stack. It can't prevent C stack overflow.

--




[issue28851] namedtuples field_names sequence preferred

2016-12-01 Thread Serhiy Storchaka

Changes by Serhiy Storchaka :


--
assignee: docs@python -> rhettinger
nosy: +rhettinger




[issue23507] Tuple creation is too slow

2016-12-01 Thread Serhiy Storchaka

Serhiy Storchaka added the comment:

I think it would be better to optimize PyObject_CallFunctionObjArgs().

--




[issue23507] Tuple creation is too slow

2016-12-01 Thread Roundup Robot

Roundup Robot added the comment:

New changeset b9c9691c72c5 by Victor Stinner in branch 'default':
Replace PyObject_CallFunctionObjArgs() with fastcall
https://hg.python.org/cpython/rev/b9c9691c72c5

--
nosy: +python-dev




[issue28638] Creating namedtuple is too slow to be used in common stdlib (e.g. functools)

2016-12-01 Thread Serhiy Storchaka

Serhiy Storchaka added the comment:

Argument Clinic is not needed, since we can use Makefile.

--
Added file: http://bugs.python.org/file45724/functools-CacheInfo-Makefile.patch




Re: The Case Against Python 3

2016-12-01 Thread Paul Moore
On Tuesday, 29 November 2016 01:01:01 UTC, Chris Angelico  wrote:
> So what is it that's trying to read something and is calling an
> f-string a mere string?

gettext.c2py:

"""Gets a C expression as used in PO files for plural forms and returns a
Python lambda function that implements an equivalent expression.
"""
# Security check, allow only the "n" identifier
import token, tokenize
tokens = tokenize.generate_tokens(io.StringIO(plural).readline)
try:
danger = [x for x in tokens if x[0] == token.NAME and x[1] != 'n']
except tokenize.TokenError:
raise ValueError('plural forms expression error, maybe unbalanced 
parenthesis')
else:
if danger:
raise ValueError('plural forms expression could be dangerous')

So the only things that count as DANGER are NAME tokens that aren't "n". That 
seems pretty permissive...

While I agree that f-strings are more dangerous than people will immediately 
realise (the mere fact that we call them f-*strings* when they definitely 
aren't strings is an example of that), the problem here is clearly (IMO) with 
the sloppy checking in gettext.
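The filter can be exercised standalone; this sketch mirrors the quoted check
(simplified, outside gettext's actual code path):

```python
import io
import token
import tokenize

def names_other_than_n(plural):
    # Same idea as gettext.c2py: flag any NAME token that isn't 'n'.
    tokens = tokenize.generate_tokens(io.StringIO(plural).readline)
    return [t[1] for t in tokens if t[0] == token.NAME and t[1] != 'n']

print(names_other_than_n('n != 1'))  # [] -> accepted
print(names_other_than_n('foo(n)'))  # ['foo'] -> rejected
```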

Paul


[issue28851] namedtuples field_names sequence preferred

2016-12-01 Thread Francesco Grondona

New submission from Francesco Grondona:

A change by mhettinger to the Python 2 docs years ago implicitly established a 
sequence of strings as the preferred way to provide 'field_names' to a 
namedtuple:

https://github.com/python/cpython/commit/7be6326e09f2062315f995a18ab54baedfd0c0ff

The same change should be integrated into Python 3; I see no reason to prefer 
the single-string version.
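For reference, the two spellings are equivalent (the `Point` name is only a
demonstration):

```python
from collections import namedtuple

# Sequence-of-strings form, as the Python 2 docs now show:
Point = namedtuple('Point', ['x', 'y'])

# Single-string form, split on whitespace and/or commas:
PointAlt = namedtuple('Point', 'x y')

assert Point._fields == PointAlt._fields == ('x', 'y')
assert Point(1, 2) == PointAlt(1, 2)
```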

--
assignee: docs@python
components: Documentation
messages: 282177
nosy: docs@python, peentoon
priority: normal
severity: normal
status: open
title: namedtuples field_names sequence preferred
type: enhancement
versions: Python 3.3, Python 3.4, Python 3.5, Python 3.6, Python 3.7




[issue28733] Show how to use mock_open in modules other that __main__

2016-12-01 Thread Michał Bultrowicz

Michał Bultrowicz added the comment:

One more update - I had the problem because I was using monkeypatch.setattr() 
from pytest and assumed that it would work the same as patch(). This assumption 
turned out to be wrong.
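For anyone landing here, the pattern that does work with patch() is to target
open where it is looked up; 'builtins.open' is used below only to keep the
sketch self-contained (for code living in, say, a hypothetical mypkg.reader
module, the target would be 'mypkg.reader.open'):

```python
from unittest import mock

with mock.patch('builtins.open', mock.mock_open(read_data='hello')) as m:
    with open('config.txt') as f:   # 'config.txt' is never really opened
        data = f.read()

assert data == 'hello'
m.assert_called_once_with('config.txt')
```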

--




[issue28839] _PyFunction_FastCallDict(): replace PyTuple_New() with PyMem_Malloc()

2016-12-01 Thread STINNER Victor

STINNER Victor added the comment:

> I agree with Josh, PyTuple_New() can be faster than PyMem_Malloc() due to 
> tuple free list.

According to benchmarks, PyTuple_New() is slower than PyMem_Malloc(). It's not 
surprising for me, using a tuple object requires extra work:

* Track and then untrack the object from the garbage collector
* Destructor uses Py_TRASHCAN_SAFE_BEGIN/Py_TRASHCAN_SAFE_END macros
* Some additional indirectons

When I started working on "fastcall", I was surprised that not creating tuples 
has a *significant* (positive) effect on performance. It seems to be between 5% 
and 45% faster. Obviously, it depends on the speed of the function body. The 
speedup is higher for faster functions, like fast functions implemented in C.

--




[issue27172] Undeprecate inspect.getfullargspec()

2016-12-01 Thread Nick Coghlan

Nick Coghlan added the comment:

Updated patch adding What's New and NEWS entries, and addressing Martin's 
review comments (mostly by accepting his suggestions).

I ended up leaving `inspect.getcallargs()` deprecated, and instead added a 
comment to issue 20438 noting how to achieve the bound args -> unbound args 
conversion in a general way: http://bugs.python.org/issue20438#msg282173

--
Added file: 
http://bugs.python.org/file45723/issue27172_undeprecate_getfullargspec_v2.diff




[issue20438] inspect: Deprecate getfullargspec?

2016-12-01 Thread Nick Coghlan

Nick Coghlan added the comment:

Noting for the record the general way of querying the underlying unbound 
function for the name of the first parameter and adding the instance to the 
bound arguments:

def add_instance_arg(callable, bound_args):
    try:
        self = callable.__self__
        func = callable.__func__
    except AttributeError:
        return  # Not a bound method
    unbound_sig = inspect.signature(func)
    for name in unbound_sig.parameters:
        bound_args.arguments[name] = self
        break

>>> method = C().method
>>> sig = inspect.signature(method)   # bound signature: (x)
>>> args = sig.bind(1)
>>> add_instance_arg(method, args)    # adds 'self' to args.arguments
>>> args

--




[issue28638] Creating namedtuple is too slow to be used in common stdlib (e.g. functools)

2016-12-01 Thread INADA Naoki

INADA Naoki added the comment:

(reopening the issue to discuss using Argument Clinic)

--
resolution: rejected -> 
status: closed -> open




[issue28699] Imap from ThreadPool behaves unexpectedly

2016-12-01 Thread fiete

fiete added the comment:

Since the only thing I know about the multiprocessing internals is what I just 
read in the source code while trying to debug my imap_unordered call, I'll add 
the following example, not knowing whether it is already covered by everything 
you have so far.


import multiprocessing.pool

def gen():
    raise Exception('generator exception')
    yield 1
    yield 2

for i in range(3):
    with multiprocessing.pool.ThreadPool(3) as pool:
        try:
            print(list(pool.imap_unordered(lambda x: x*2, gen())))
        except Exception as e:
            print(e)


This only prints 'generator exception' once for the first iteration. For the 
following iterations imap_unordered returns an empty list. This is the case for 
both Pool and ThreadPool.

--
nosy: +fiete




[issue28847] dumbdbm should not commit if in read mode

2016-12-01 Thread Jonathan Ng

Jonathan Ng added the comment:

#1 makes sense to be backported.


--




[issue28839] _PyFunction_FastCallDict(): replace PyTuple_New() with PyMem_Malloc()

2016-12-01 Thread STINNER Victor

STINNER Victor added the comment:

Serhiy: "small_stack increases C stack consumption even for calls without 
keyword arguments. This is serious problem since we can't control stack 
overflow."

This problem is not new and is worked around by the Py_EnterRecursiveCall() 
macro, which counts the depth of the Python stack.

I didn't notice any stack overflow crash with my patch. We can reduce the 
"small_stack" size later if needed.

--




[issue28839] _PyFunction_FastCallDict(): replace PyTuple_New() with PyMem_Malloc()

2016-12-01 Thread STINNER Victor

STINNER Victor added the comment:

> Note: Using a simple printf() in the C code, I noticed that it is not 
> uncommon that _PyFunction_FastCallDict() is called with an empty dictionary 
> for keyword arguments.

Simplified Python example where _PyFunction_FastCallDict() is called with an 
empty dictionary:
---
def f2():
    pass

def wrapper(func, *args, **kw):
    # CALL_FUNCTION_EX: func(**{}) calls PyObject_Call() with kwargs={} which
    # calls _PyFunction_FastCallDict()
    func(*args, **kw)

def f():
    # CALL_FUNCTION: calling wrapper calls fast_function() which calls
    # _PyEval_EvalCodeWithName() which creates an empty dictionary for kw
    wrapper(f2)

f()
---

But in this specific case, the speedup is *very* small: 3 nanoseconds :-)

./python -m perf timeit -s 'kw={}' -s 'def func(): pass' --duplicate=1000 
'func(**kw)'
(...)
Median +- std dev: [ref] 108 ns +- 4 ns -> [patch] 105 ns +- 5 ns: 1.02x faster 
(-2%)

--




[issue28847] dumbdbm should not commit if in read mode

2016-12-01 Thread Serhiy Storchaka

Serhiy Storchaka added the comment:

This example is too artificial.

But there is a real issue: opening read-only files in read mode. Currently this 
causes a PermissionError on closing.

For backward compatibility the flags 'r' and 'w' are ignored, i.e. opening with 
'r' or 'w' creates the file if it does not exist, and opening with 'r' allows 
modifying the database. Since 3.6 this emits deprecation warnings (issue21708). 
In future versions this will be an error.

Proposed patch makes two changes:

1. The index file is no longer written if the database was not modified. This 
increases performance and adds support for read-only files.

2. A deprecation warning is raised when the index file is absent in 'r' and 'w' 
modes. In future versions this will be an error.

Maybe the first change can be backported.

--
keywords: +patch
stage:  -> patch review
type: behavior -> enhancement
versions: +Python 3.7
Added file: http://bugs.python.org/file45722/dbm_dumb_readonly.patch




[issue28847] dumbdbm should not commit if in read mode

2016-12-01 Thread Jonathan Ng

Jonathan Ng added the comment:

I'm not sure how to create an OSError.

But perhaps something like this:

'''
from dbm import dumb
import os

db = dumb.open('temp', flag='n')
db['foo'] = 'bar'
db.close()

db = dumb.open('temp', flag='r')
print(len(db.keys()))
db.close()

os.rename('temp.dir', 'temp_.dir') # simulates OSError
db = dumb.open('temp', flag='r')
os.rename('temp_.dir', 'temp.dir')
db.close()

db = dumb.open('temp', flag='r')
assert len(db.keys()) > 0
'''

--




[issue26363] __builtins__ propagation is misleading described in exec and eval documentation

2016-12-01 Thread Xavier Combelle

Xavier Combelle added the comment:

Hi Julien,

You are fully right that it is the builtin module's dictionary which is inserted 
into the eval or exec context.

However, if a "__builtins__" entry is passed to eval or exec, that dictionary is 
used for builtin name lookups instead, hence the following works:

>>> d={"tata":"tata"}
>>> print(eval("tata",{'__builtins__':d}))
tata

--




[issue17490] Improve ast.literal_eval test suite coverage

2016-12-01 Thread Nick Coghlan

Nick Coghlan added the comment:

Right, the only reason this is still open is because I thought the extra test 
cases might be interesting in their own right (just not interesting enough to 
sit down and apply).

--



