Re: Python DB API - commit() v. execute("commit transaction")?

2017-06-02 Thread Chris Angelico
On Sat, Jun 3, 2017 at 11:46 AM, Gregory Ewing
 wrote:
> Chris Angelico wrote:
>>
>> with psycopg2.connect(...) as conn:
>>     with conn.trans() as trn:
>>         for row in trn.execute("select ..."):
>>             print(row)
>>
>> The outer context manager is optional, but not the inner one
>
>
> While I fully support making the use of transactions mandatory,
> I wouldn't like to be forced to use them in a with statement.
>
> In the application that I originally built my Firebird interface
> for, I had a mechanism where a user could open up a piece of
> data for editing, and then choose to save or cancel the edits.
> I implemented it by keeping a transaction around for the
> duration and then committing it or rolling it back. If a
> with statement were required around all transactions, I
> wouldn't have been able to do that.

You wouldn't be FORCED to, but it would be strongly recommended. You
could simply:

trn = conn.trans()

and then use it that way, but then you're responsible for calling
trn.commit() or trn.rollback(). You would also be responsible for the
longevity of your locks; if you hold a transaction waiting for a
human, you potentially keep some things locked for a long time. That
is probably intentional as regards the primary record being edited,
but you'd also hold locks on anything else you touch.
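
To sketch what I mean (a hypothetical wrapper, nothing psycopg2
actually ships -- commit on success, roll back on error):

class Transaction:
    def __init__(self, conn):
        self.conn = conn

    def execute(self, sql, params=None):
        cur = self.conn.cursor()
        cur.execute(sql, params)
        return cur

    def commit(self):
        self.conn.commit()

    def rollback(self):
        self.conn.rollback()

    def __enter__(self):
        return self

    def __exit__(self, exc_type, exc, tb):
        # Commit if the block completed, roll back if it raised.
        if exc_type is None:
            self.conn.commit()
        else:
            self.conn.rollback()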

BTW, it should be possible to do:

with trn.trans() as subtrn:

on DBMSes that support subtransactions (e.g. PostgreSQL). For what that's worth.
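
Under the hood those would presumably just be SAVEPOINTs, which is how
PostgreSQL exposes nested transactions. A sketch, reusing the
hypothetical wrapper above:

class Subtransaction:
    def __init__(self, trn, name="sp1"):
        self.trn = trn
        self.name = name

    def __enter__(self):
        self.trn.execute("SAVEPOINT " + self.name)
        return self.trn

    def __exit__(self, exc_type, exc, tb):
        # Release on success, roll back to the savepoint on error.
        if exc_type is None:
            self.trn.execute("RELEASE SAVEPOINT " + self.name)
        else:
            self.trn.execute("ROLLBACK TO SAVEPOINT " + self.name)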

ChrisA
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Python DB API - commit() v. execute("commit transaction")?

2017-06-02 Thread Gregory Ewing

Chris Angelico wrote:

with psycopg2.connect(...) as conn:
    with conn.trans() as trn:
        for row in trn.execute("select ..."):
            print(row)

The outer context manager is optional, but not the inner one


While I fully support making the use of transactions mandatory,
I wouldn't like to be forced to use them in a with statement.

In the application that I originally built my Firebird interface
for, I had a mechanism where a user could open up a piece of
data for editing, and then choose to save or cancel the edits.
I implemented it by keeping a transaction around for the
duration and then committing it or rolling it back. If a
with statement were required around all transactions, I
wouldn't have been able to do that.

--
Greg
--
https://mail.python.org/mailman/listinfo/python-list


Re: Python DB API - commit() v. execute("commit transaction")?

2017-06-02 Thread Gregory Ewing

Chris Angelico wrote:

Always using a context manager is good practice and
great for code clarity.


Another thing about my Firebird interface was that you were
forced to always use transactions, because the transaction
object was the only thing that had methods for executing
statements.
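
A toy illustration of that shape of API, using sqlite3 in autocommit
mode as a stand-in for Firebird (hypothetical, not the actual
interface): the connection can only hand out transactions, and only a
transaction can execute SQL.

import sqlite3

class Connection:
    def __init__(self, path):
        # isolation_level=None: the module does no implicit transaction
        # handling; begin/commit/rollback are issued explicitly below.
        self._db = sqlite3.connect(path, isolation_level=None)

    def begin(self):
        return Transaction(self._db)

class Transaction:
    def __init__(self, db):
        self._db = db
        db.execute("begin")

    def execute(self, sql, params=()):
        return self._db.execute(sql, params)

    def commit(self):
        self._db.execute("commit")

    def rollback(self):
        self._db.execute("rollback")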

--
Greg
--
https://mail.python.org/mailman/listinfo/python-list


Psycopg2 pool clarification

2017-06-02 Thread Israel Brewster
I've been using the psycopg2 pool class for a while now, using code similar to 
the following:

>>> pool = ThreadedConnectionPool(0, 5, ...)
>>> conn1 = pool.getconn()
>>> # ... do stuff with conn1 ...
>>> pool.putconn(conn1)
(repeat later, or perhaps "simultaneously" in a different thread)

and my understanding was that the pool logic was something like the following:

- create a "pool" of connections, with an initial number of connections equal 
to the "minconn" argument
- When getconn is called, see if there is an available connection. If so, 
return it. If not, open a new connection and return that (up to "maxconn" total 
connections)
- When putconn is called, return the connection to the pool for re-use, but do 
*not* close it (unless the close argument is specified as True, documentation 
says default is False)
- On the next request to getconn, this connection is now available and so no 
new connection will be made
- perhaps (or perhaps not), after some time, unused connections would be closed
and purged from the pool to prevent large numbers of used-once connections
from lying around.

However, in some testing I just did, this doesn't appear to be the case, at 
least based on the postgresql logs. Running the following code:

>>> pool = ThreadedConnectionPool(0, 5, ...)
>>> conn1=pool.getconn()
>>> conn2=pool.getconn()
>>> pool.putconn(conn1)
>>> pool.putconn(conn2)
>>> conn3=pool.getconn()
>>> pool.putconn(conn3)

produced the following output in the postgresql log:

2017-06-02 14:30:26 AKDT LOG:  connection received: host=::1 port=64786
2017-06-02 14:30:26 AKDT LOG:  connection authorized: user=logger 
database=flightlogs
2017-06-02 14:30:35 AKDT LOG:  connection received: host=::1 port=64788
2017-06-02 14:30:35 AKDT LOG:  connection authorized: user=logger 
database=flightlogs
2017-06-02 14:30:46 AKDT LOG:  disconnection: session time: 0:00:19.293 
user=logger database=flightlogs host=::1 port=64786
2017-06-02 14:30:53 AKDT LOG:  disconnection: session time: 0:00:17.822 
user=logger database=flightlogs host=::1 port=64788
2017-06-02 14:31:15 AKDT LOG:  connection received: host=::1 port=64790
2017-06-02 14:31:15 AKDT LOG:  connection authorized: user=logger 
database=flightlogs
2017-06-02 14:31:20 AKDT LOG:  disconnection: session time: 0:00:05.078 
user=logger database=flightlogs host=::1 port=64790

Since I set the maxconn parameter to 5, and only used 3 connections, I wasn't 
expecting to see any disconnects - and yet as soon as I do putconn, I *do* see 
a disconnection. Additionally, I would have thought that when I pulled 
connection 3, there would have been two connections available, and so it 
wouldn't have needed to connect again, yet it did. Even if I explicitly say 
close=False in the putconn call, it still closes the connection and has to 
open a new one on the next getconn.

What am I missing? From this testing, it looks like I get no benefit at all 
from having the connection pool, unless you consider an upper limit to the 
number of simultaneous connections a benefit? :-) Maybe a little code savings 
from not having to manually call connect and close after each connection, but 
that's easily gained by simply writing a context manager. I could get *some* 
limited benefit by raising the minconn value, but then I risk having 
connections that are *never* used, yet still taking resources on the DB server.

Ideally, it would open as many connections as are needed, and then leave them 
open for future requests, perhaps with an "idle" timeout. Is there any way to 
achieve this behavior?
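
(A sketch of a workaround, assuming the pool keeps at most minconn idle
connections and closes anything beyond that on putconn -- which would
explain the log output above: make minconn equal to maxconn so returned
connections are kept open, and wrap the borrow/return dance in a
context manager.)

from contextlib import contextmanager
from psycopg2.pool import ThreadedConnectionPool

pool = ThreadedConnectionPool(5, 5, "dbname=flightlogs user=logger")

@contextmanager
def pooled_conn():
    conn = pool.getconn()
    try:
        yield conn
    finally:
        pool.putconn(conn)   # retained, since the pool is at minconn

with pooled_conn() as conn:
    with conn.cursor() as cur:
        cur.execute("select 1")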

---
Israel Brewster
Systems Analyst II
Ravn Alaska
5245 Airport Industrial Rd
Fairbanks, AK 99709
(907) 450-7293
---




-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Python DB API - commit() v. execute("commit transaction")?

2017-06-02 Thread Chris Angelico
On Sat, Jun 3, 2017 at 7:29 AM, Dennis Lee Bieber  wrote:
> On Sat, 3 Jun 2017 06:48:28 +1000, Chris Angelico 
> declaimed the following:
>
>>
>>Wait, you have transactions with MyISAM now? I thought MySQL supported
>>transactions with InnoDB but not MyISAM, and the reason you didn't get
>>transactional DDL was that the system catalog tables are mandatorily
>>MyISAM, even if all your own tables are InnoDB.
>>
>
> Not really transactions -- but locks on the "metadata" tables...
>
> http://www.chriscalender.com/tag/myisam-locks/

Oh. That's just the basic protection of "don't let anyone change the
table while we're using it". It doesn't mean you can roll back an
ALTER TABLE, much less take advantage of full transactional integrity.
In PostgreSQL, you can do something like this (pseudocode):

version = #select schema_version from metadata#
if version < 1:
    #create table foo (id serial primary key, bar text not null)#
if version < 2:
    #alter table foo add quux integer not null default 10#
if version < 3:
    #create table spam (id serial primary key, foo_id int not null references foo)#
#update metadata set schema_version = 3#
if version > 3:
    raise IntegrityError("Cannot backlevel database")
#commit#

Now, even if anything crashes out while you're migrating the database
(either because the power fails, or because of an error in your code,
or anything), you have an absolute guarantee that the version field
and the database will be consistent - that version 2 *always* has both
bar and quux columns, etc. There's no way to have half a schema
migration done, or finish the migration but fail to update the version
marker, or anything. You KNOW that it's safe, even against logic
errors.

That's what transactional DDL gives you.
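
A runnable version of that sketch with psycopg2 might look like this
(DSN and table names are hypothetical; psycopg2's connection context
manager commits on success and rolls back on an exception):

import psycopg2

conn = psycopg2.connect("dbname=example")
with conn, conn.cursor() as cur:
    cur.execute("select schema_version from metadata")
    version = cur.fetchone()[0]
    if version < 1:
        cur.execute("create table foo"
                    " (id serial primary key, bar text not null)")
    if version < 2:
        cur.execute("alter table foo add quux integer not null default 10")
    if version < 3:
        cur.execute("create table spam (id serial primary key,"
                    " foo_id int not null references foo)")
    cur.execute("update metadata set schema_version = 3")
    if version > 3:
        # Raising aborts the whole transaction, version bump included.
        raise RuntimeError("Cannot backlevel database")
# Leaving the block commits the DDL and the version bump atomically.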

ChrisA
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Python DB API - commit() v. execute("commit transaction")?

2017-06-02 Thread Chris Angelico
On Sat, Jun 3, 2017 at 5:31 AM, Jon Ribbens  wrote:
> On 2017-06-02, Chris Angelico  wrote:
>> On Sat, Jun 3, 2017 at 2:45 AM, Jon Ribbens  
>> wrote:
>>> Beware - MyISAM tables have no transactions for DML but they do have
>>> transactions for DDL. Insane but true.
>>
>> Not insane; not all DBMSes have transactional DDL, and of the major
>> ones, several have only more recently added it (despite having had
>> rock-solid transactional DML for decades). It's an excellent feature
>> but not easy to implement.
>
> I'm not saying that transactional DDL is insane (it isn't), but MyISAM
> tables having transactions *only* for DDL is... surprising. Especially
> when it suddenly appeared as a "feature" in between two versions. It
> took me quite a while to work out why our database was randomly hanging.

Wait, you have transactions with MyISAM now? I thought MySQL supported
transactions with InnoDB but not MyISAM, and the reason you didn't get
transactional DDL was that the system catalog tables are mandatorily
MyISAM, even if all your own tables are InnoDB.

ChrisA
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Python DB API - commit() v. execute("commit transaction")?

2017-06-02 Thread Skip Montanaro
On Fri, Jun 2, 2017 at 2:40 PM, Neil Cerutti  wrote:

> You get autocommit with sqlite3 by setting isolation_level=None
> on the connection object.
>

Thanks for the pointer. I'd probably never have noticed the correspondence.

Skip
-- 
https://mail.python.org/mailman/listinfo/python-list


CherryPy Session object creation logic

2017-06-02 Thread Israel Brewster
I have a CherryPy app, for which I am using a PostgreSQL session. To be more 
exact, I modified a MySQL session class I found to work with PostgreSQL 
instead, and then I put this line in my code:

cherrypy.lib.sessions.PostgresqlSession = PostgreSQLSession

And this works fine. One thing about its behavior is bugging me, however: 
accessing a page instantiates (and deletes) *many* instances of this class, all 
for the same session. Doing some debugging, I counted 21 calls to the __init__ 
function when loading a single page. Logging in and displaying the next page 
hit it an additional 8 times. My theory is that essentially every time I try to 
read from or write to the session, CherryPy is instantiating a new 
PostgreSQLSession object, performing the request, and deleting the session 
object. In that simple test, that means 29 connections to the database, 29 
instantiations, etc - quite a bit of overhead, not to mention the load on my 
database server making/breaking those connections (although it handles it fine).

Is this "normal" behavior? Or did I mess something up with my session class? 
I'm thinking that ideally CherryPy would only create one object - and 
therefore, one DB connection - for a given session, and then simply hold on to 
that object until that session expired. But perhaps not?
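
If the per-request instantiation can't be avoided, one way to at least
avoid a database connection per instance might be a class-level shared
connection (a sketch with hypothetical names; a small pool would serve
the same purpose):

import threading
import psycopg2

class PostgreSQLSession(object):
    _db_lock = threading.Lock()
    _db = None   # shared by all instances

    @classmethod
    def _get_db(cls):
        # Open one connection lazily and hand the same one to every
        # session instance, instead of connecting in each __init__.
        with cls._db_lock:
            if cls._db is None:
                cls._db = psycopg2.connect("dbname=sessions")
            return cls._db
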
---
Israel Brewster
Systems Analyst II
Ravn Alaska
5245 Airport Industrial Rd
Fairbanks, AK 99709
(907) 450-7293
---




-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Python DB API - commit() v. execute("commit transaction")?

2017-06-02 Thread Neil Cerutti
On 2017-06-02, Skip Montanaro  wrote:
> On Fri, Jun 2, 2017 at 11:14 AM, Dennis Lee Bieber
>  wrote:
> I just checked, and the sqlite3 adapter I have access to
> (Python 2.7.13 in a Conda env, module version 2.6.0, SQLite3
> 3.13.0) has no autocommit attribute at all. I checked at the
> module, connection and cursor levels.

You get autocommit with sqlite3 by setting isolation_level=None
on the connection object. 

https://docs.python.org/2/library/sqlite3.html#sqlite3-controlling-transactions
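
For example (the table and statements are just illustrative):

import sqlite3

# isolation_level=None puts the sqlite3 module into autocommit mode:
conn = sqlite3.connect("example.db", isolation_level=None)
conn.execute("create table if not exists t (x integer)")
conn.execute("insert into t values (1)")  # effective at once, no commit()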

-- 
Neil Cerutti

-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Python DB API - commit() v. execute("commit transaction")?

2017-06-02 Thread Jon Ribbens
On 2017-06-02, Chris Angelico  wrote:
> On Sat, Jun 3, 2017 at 2:45 AM, Jon Ribbens  wrote:
>> Beware - MyISAM tables have no transactions for DML but they do have
>> transactions for DDL. Insane but true.
>
> Not insane; not all DBMSes have transactional DDL, and of the major
> ones, several have only more recently added it (despite having had
> rock-solid transactional DML for decades). It's an excellent feature
> but not easy to implement.

I'm not saying that transactional DDL is insane (it isn't), but MyISAM
tables having transactions *only* for DDL is... surprising. Especially
when it suddenly appeared as a "feature" in between two versions. It
took me quite a while to work out why our database was randomly hanging.
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Very Slow Disk Writes when Writing Large Data Blocks

2017-06-02 Thread Irmen de Jong
On 2-6-2017 20:14, remmm wrote:

> These write speeds are in the range of 18 to 25 MBytes per second for 
> spinning disks and about 50 Mbytes/sec for SSDs.  Keep in mind these numbers 
> should be more like 120 MBytes/sec for spinning disks and 300 MBytes/sec for 
> SSDs.  

You'll only reach those numbers in the ideal situation. Is there just one
program doing this disk i/o, sequentially, from a single thread?
If not, you are probably suffering disk i/o thrashing once you have filled
up the drive's cache buffers.

For example using Crystal Disk Mark on one of my HDD drives it reports max 60 
MBytes/sec
write speed sequentially in the ideal case (ok, it is not a very fast drive...) 
but only
0.7 (!) in the random 4k block case.

Apparently Linux deals with this better than Windows, for your situation.

Other than that, the only other thing I can think of is interference from
other programs on the system, such as malware protection or anti-virus
tooling that is trying to scan your big files at the same time. That
should be visible in Windows' resource monitor tool, I think.


> I've since written test code using just python "write" 

Post it somewhere?

Irmen
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Python DB API - commit() v. execute("commit transaction")?

2017-06-02 Thread Skip Montanaro
On Fri, Jun 2, 2017 at 11:14 AM, Dennis Lee Bieber 
wrote:

> My conclusion:
> If using a DB-API compliant adapter, explicitly issuing "begin" and
> "commit" via .execute() should be avoided if one expects to be portable
> (change the adapter from one DBMS to another).
> Learn the behavior of the adapter (does any SQL start a
> transaction, or
> only INSERT/UPDATE/DELETE/REPLACE -- the latter seems to be the current
> SQLite3 documented behavior, exclusive of both editions of the "Definitive
> Guide" which imply that an active transaction will be commited upon
> executing a SELECT [Python help file for module states that SELECT does
> /not/ commit]) so you understand when it should be IN or OUT of a
> transaction state. *
>

I just checked, and the sqlite3 adapter I have access to (Python 2.7.13 in
a Conda env, module version 2.6.0, SQLite3 3.13.0) has no autocommit
attribute at all. I checked at the module, connection and cursor levels.
I'm using pyodbc via another layer added by others at work to connect to
SQL Server. That extra layer explicitly sets autocommit to True on the
underlying Connection object before returning it to the caller.

In my case, my code isn't terribly large. I think it's safer to set
autocommit to False and be explicit in my use of transactions.
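
For reference, with pyodbc that looks something like this (the
connection string is hypothetical; autocommit can be passed at connect
time or flipped later as a connection attribute):

import pyodbc

conn = pyodbc.connect("DSN=mydsn", autocommit=False)
cur = conn.cursor()
cur.execute("update t set x = ? where id = ?", 1, 42)
conn.commit()   # explicit transaction control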

Skip
-- 
https://mail.python.org/mailman/listinfo/python-list


Very Slow Disk Writes when Writing Large Data Blocks

2017-06-02 Thread remmm
I'm seeing slow write speeds from both Python and C code on some Windows 
workstations.  In particular both Python "write" and numpy "tofile" method 
suffers from this issue.  I'm wondering if anyone has any ideas regarding if 
this is a known issue, know the cause, or how to resolve the issue?  The 
details are below.

The slow write speed issue seems to occur when writing data in blocks of larger 
than 32767 512-byte disk sectors.  In terms of speed, write speed seems as 
expected until one gets to this 32767 limit and then the speed falls off as if 
all data beyond this is processed byte-by-byte.  I can't prove this is what is 
happening -- but speed tests generally support this theory.  These write speeds 
are in the range of 18 to 25 MBytes per second for spinning disks and about 50 
Mbytes/sec for SSDs.  Keep in mind these numbers should be more like 120 
MBytes/sec for spinning disks and 300 MBytes/sec for SSDs.

This issue seems to be system specific.  I originally saw this on my HP z640 
workstation using Python 2.7 under Windows 7.  Originally it was numpy writes 
of large arrays in the 100GB size range that highlighted the issue, but I've 
since written test code using just python "write" too and get similar results 
using various block sizes.  I've since verified this using cygwin mingw64 C and 
with Visual Studio C 2013.  I've also tested this on a variety of other 
systems.  My laptop does not show this speed issue, and not all z640 systems 
seem to show this issue though I've found several that do. IT has tested this 
on a clean Windows 7 image and on a Windows 10 image using yet another Z640 
system and they get similar results.  I've also not seen any Linux systems show 
this issue though I don't have any Z640's with Linux on them.  I have however 
run my tests on Linux Mint 17 running under VirtualBox on the same Z640 that 
showed the speed issue and using both Wine and native python and both 
 showed good performance and no slowdown.

A work around for this seems to be to enable full caching for the drive in 
device manager with the subsequent risk of data corruption.  This suggests for 
example that the issue is byte-by-byte flushing of data beyond the 32767 limit 
and that perhaps full caching mitigates this somehow.  The other workaround 
is to write all data in blocks of less than the 32767 limit (which is about 
16Mbytes) as mentioned above. Of course reducing block size only works if you 
have the source code and the time and inclination to modify it.  There is an 
indication that some of the commercial code we use for science and engineering 
also may suffer from this issue.  

The impact of this issue also seems application specific.  The issue only 
becomes annoying when you're regularly writing files of significant size (above 
say 10GB).  It also depends on how an application writes data, so not all 
applications that create large files may exhibit this issue.  As an example, 
Python numpy tofile method has this issue for large enough arrays and is the 
reason I started to investigate.

I don't really know where to go with this.  Is this a Windows issue?  Is it an 
RTL issue?  Is it a hardware, device driver, or bios issue?  Is there a stated 
OS or library limit to buffer sizes to things like C fwrite or Python write 
which makes this an application issue? Thoughts?
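
For anyone wanting to reproduce this, a minimal version of the kind of
test described might look like the following (sizes are illustrative;
the first block size sits at the 32767-sector boundary, the second well
above it):

import time

GOOD_BLOCK = 32767 * 512        # at the reported 32767-sector limit
BAD_BLOCK = 64 * 1024 * 1024    # well above it, for comparison
TOTAL = 10 * 1024**3            # 10 GB of zeroes

def write_test(path, block_size, total=TOTAL):
    buf = b"\x00" * block_size
    start = time.time()
    with open(path, "wb") as f:
        written = 0
        while written < total:
            f.write(buf)
            written += len(buf)
    elapsed = time.time() - start
    print("%d-byte blocks: %.1f MB/s" % (block_size, total / elapsed / 1e6))

write_test("testfile.bin", GOOD_BLOCK)
write_test("testfile.bin", BAD_BLOCK)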

Thanks,
remmm
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Python DB API - commit() v. execute("commit transaction")?

2017-06-02 Thread Chris Angelico
On Sat, Jun 3, 2017 at 2:45 AM, Jon Ribbens  wrote:
> On 2017-06-02, Dennis Lee Bieber  wrote:
>>   Connector/Python (MySQL) [guess it is time for me to finally upgrade to
>> Python 3.x -- it was the delay in getting mysqldb ported that held me back]
>> does allow for turning on autocommit -- which is documented as issuing an
>> implicit commit after each SQL (which I take to mean each .execute() ), and
>> would likely cause problems with explicit BEGIN. Also not recommended for
>> InnoDB tables, but considered appropriate for MyISAM tables [no transaction
>> feature on those].
>
> Bewaare - MyISAM tables have no transactions for DML but they do have
> transactions for DDL. Insane but true.

Not insane; not all DBMSes have transactional DDL, and of the major
ones, several have only more recently added it (despite having had
rock-solid transactional DML for decades). It's an excellent feature
but not easy to implement. Personally, I like it enough that I choose
a DBMS based on features like that, but there are plenty of people who
aren't too bothered by it.

That said, though, MySQL is AFAIK the only full-scale DBMS that
doesn't support DDL rollback in 2017. I don't know whether you can
craft a transaction that mixes DDL and DML in all of them, but
certainly in most; and it's a great feature, because you can version
your schema trivially (putting a version number in a metadata table).
I love it. :)

ChrisA
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Python DB API - commit() v. execute("commit transaction")?

2017-06-02 Thread Neil Cerutti
On 2017-06-02, Dennis Lee Bieber  wrote:
>
>   A bit of a long free-association rambling...
>
> On Fri, 2 Jun 2017 12:07:45 + (UTC), Neil Cerutti
>  declaimed the following:
>>You're probably not expected to interleave transaction control
>>commands from different levels of abstraction, e.g., only call
>>'commit' directly if you called 'begin' directly.
>
>   .execute("begin") 
> is likely not safe either.
>
> If the adapter has been set to "autocommit", it might issue an
> implicit "commit" after processing that execute -- wiping out
> the transaction one has explicitly started...
>
> If not in "autocommit", the adapter may (will) at some point
> issue an implicit "begin" -- resulting in an attempt to nest
> transactions within the one connection.
>
> My conclusion: 
>   If using a DB-API compliant adapter, explicitly issuing "begin" and
> "commit" via .execute() should be avoided if one expects to be portable
> (change the adapter from one DBMS to another).
>   Learn the behavior of the adapter (does any SQL start a transaction, or
> only INSERT/UPDATE/DELETE/REPLACE -- the latter seems to be the
> current SQLite3 documented behavior, exclusive of both editions
> of the "Definitive Guide" which imply that an active
> transaction will be commited upon executing a SELECT [Python
> help file for module states that SELECT does /not/ commit]) so
> you understand when it should be IN or OUT of a transaction
> state. *

Good point!

> * Mixing various SQLite3 documentation (both the engine and Python's
> module) gives a confusing mix:
>   The engine (per "Definite Guide") normally runs in autocommit -- and
> appears to only go into non-autocommit when a "begin" is issued.
>   The module (per DB-API) runs in non-autocommit -- and issues an
> implicit "begin" on the first of those DML operations mentioned above.
> So... SELECT prior to any of the listed operations is effectively
> auto-commit, as are any DDL operations (with the addition that DDL will
> perform a commit IF the module believes a transaction is open).

You configure the BEGIN operation by setting isolation_level.
Setting it to IMMEDIATE (or EXCLUSIVE) avoids the deferral of
lock acquisition.
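
For example:

import sqlite3

# Implicit transactions are now opened with BEGIN IMMEDIATE, taking the
# write lock up front instead of deferring it:
conn = sqlite3.connect("example.db", isolation_level="IMMEDIATE")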

> Given the two -- turning on autocommit in the module may result
> in no implicit "begin"; and transaction control is totally up
> to the user .execute("begin|commit"). 

Agreed.

> But this behavior may not match up with /other/ adapters, in
> which turning ON autocommit in the adapter could just mean it
> does a sequence of begin/SQL/commit for every .execute(). (per
> documentation, not experience)

sqlite3 behavior in autocommit matches up except when I
explicitly muck things up with an explicit BEGIN.

Conclusion seems to be that sqlite3 has a mode that permits
explicit BEGIN/COMMIT, but you shouldn't do it *except* in that
mode, and it's not portable.

-- 
Neil Cerutti

-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Bug or intended behavior?

2017-06-02 Thread bob gailer

On 6/2/2017 1:28 PM, Jussi Piitulainen wrote:

sean.diza...@gmail.com writes:


Can someone please explain this to me?  Thanks in advance!

~Sean


Python 2.7.13 (v2.7.13:a06454b1afa1, Dec 17 2016, 12:39:47)
[GCC 4.2.1 (Apple Inc. build 5666) (dot 3)] on darwin
Type "help", "copyright", "credits" or "license" for more information.

print "foo %s" % 1-2

Traceback (most recent call last):
   File "", line 1, in 
TypeError: unsupported operand type(s) for -: 'str' and 'int'

The per cent operator has precedence over minus. Spacing is not
relevant. Use parentheses.



In other words "foo %s" % 1 is executed, giving "1". Then "1"-2 is 
attempted giving the error.
Also: If there is more than one conversion specifier the right argument 
to % must be a tuple.
I usually write a tuple even if there is only one conversion specifier - 
that avoids the problem
you encountered and makes it easy to add more values when you add more 
conversion specifiers.


print "foo %s" % (1-2,)

Bob Gailer

--
https://mail.python.org/mailman/listinfo/python-list


Re: Bug or intended behavior?

2017-06-02 Thread Irmen de Jong
On 2-6-2017 19:17, sean.diza...@gmail.com wrote:
> Can someone please explain this to me?  Thanks in advance!
> 
> ~Sean
> 
> 
> Python 2.7.13 (v2.7.13:a06454b1afa1, Dec 17 2016, 12:39:47) 
> [GCC 4.2.1 (Apple Inc. build 5666) (dot 3)] on darwin
> Type "help", "copyright", "credits" or "license" for more information.
 print "foo %s" % 1-2
> Traceback (most recent call last):
>   File "", line 1, in 
> TypeError: unsupported operand type(s) for -: 'str' and 'int'



The % operator has higher precedence than the - operator.
See https://docs.python.org/3/reference/expressions.html#operator-precedence
So what you wrote is equivalent to:

print ("foo %s" % 1) - 2

which means subtract the number 2 from the string "foo 1". Hence the error.

Solution is to use parentheses to make sure the things are evaluated in the 
order you
intended:

print "foo %s" % (1-2)


Irmen
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Bug or intended behavior?

2017-06-02 Thread justin walters
On Fri, Jun 2, 2017 at 10:17 AM,  wrote:

> Can someone please explain this to me?  Thanks in advance!
>
> ~Sean
>
>
> Python 2.7.13 (v2.7.13:a06454b1afa1, Dec 17 2016, 12:39:47)
> [GCC 4.2.1 (Apple Inc. build 5666) (dot 3)] on darwin
> Type "help", "copyright", "credits" or "license" for more information.
> >>> print "foo %s" % 1-2
> Traceback (most recent call last):
>   File "", line 1, in 
> TypeError: unsupported operand type(s) for -: 'str' and 'int'
> >>>
> --
> https://mail.python.org/mailman/listinfo/python-list
>

Try this:

print "foo %d" % (1-2)
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Bug or intended behavior?

2017-06-02 Thread Jussi Piitulainen
sean.diza...@gmail.com writes:

> Can someone please explain this to me?  Thanks in advance!
>
> ~Sean
>
>
> Python 2.7.13 (v2.7.13:a06454b1afa1, Dec 17 2016, 12:39:47) 
> [GCC 4.2.1 (Apple Inc. build 5666) (dot 3)] on darwin
> Type "help", "copyright", "credits" or "license" for more information.
 print "foo %s" % 1-2
> Traceback (most recent call last):
>   File "", line 1, in 
> TypeError: unsupported operand type(s) for -: 'str' and 'int'


The per cent operator has precedence over minus. Spacing is not
relevant. Use parentheses.
-- 
https://mail.python.org/mailman/listinfo/python-list


Bug or intended behavior?

2017-06-02 Thread sean . dizazzo
Can someone please explain this to me?  Thanks in advance!

~Sean


Python 2.7.13 (v2.7.13:a06454b1afa1, Dec 17 2016, 12:39:47) 
[GCC 4.2.1 (Apple Inc. build 5666) (dot 3)] on darwin
Type "help", "copyright", "credits" or "license" for more information.
>>> print "foo %s" % 1-2
Traceback (most recent call last):
  File "", line 1, in 
TypeError: unsupported operand type(s) for -: 'str' and 'int'
>>> 
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Manager for project templates, that allows "incremental" feature addition

2017-06-02 Thread Lele Gaifax
Lele Gaifax  writes:

> Paul  Moore  writes:
>
>> On Thursday, 23 March 2017 15:56:43 UTC, Paul  Moore  wrote:
>>
>> Sadly, it doesn't support Windows, which is what I use.
>
> FYI, I just saw https://pypi.python.org/pypi/what/0.4.0, that seems an
> alternative port of Inquirer.js. Unfortunately it's Py2 only :-\

I contributed support for Python 3, so 'what' 0.5.2 is usable with modern
snakes!

> If you can try it under Windows, maybe I could be convinced in contributing
> some Py3 compatibility fixes, and try to support that too in my tinject.

I released https://pypi.python.org/pypi/metapensiero.tool.tinject/0.7, now
using what: I'd be curious to know if it works under Windows.

ciao, lele.
-- 
nickname: Lele Gaifax | When I start living off what I thought yesterday,
real: Emanuele Gaifas | I'll begin to fear those who copy me.
l...@metapensiero.it  | -- Fortunato Depero, 1929.

-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Python DB API - commit() v. execute("commit transaction")?

2017-06-02 Thread Jon Ribbens
On 2017-06-02, Dennis Lee Bieber  wrote:
>   Connector/Python (MySQL) [guess it is time for me to finally upgrade to
> Python 3.x -- it was the delay in getting mysqldb ported that held me back]
> does allow for turning on autocommit -- which is documented as issuing an
> implicit commit after each SQL (which I take to mean each .execute() ), and
> would likely cause problems with explicit BEGIN. Also not recommended for
> InnoDB tables, but considered appropriate for MyISAM tables [no transaction
> feature on those].

Beware - MyISAM tables have no transactions for DML but they do have
transactions for DDL. Insane but true.
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Python DB API - commit() v. execute("commit transaction")?

2017-06-02 Thread Neil Cerutti
On 2017-06-02, Frank Millman  wrote:
> As I said, I cannot prove this, but the theory fits the
> observed behaviour perfectly, so I have proceeded on the
> assumption that it is true. Therefore I now always run every
> SQL command or block of commands within a context manager,
> which always calls conn.commit() or conn.rollback() on exit,
> and I have not had any more problems. I use exactly the same
> code for sqlite3 and for Sql Server/pyodbc, and it has not
> caused any problems there either.

You're probably not expected to interleave transaction control
commands from different levels of abstraction, e.g., only call
'commit' directly if you called 'begin' directly.

-- 
Neil Cerutti

-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Python DB API - commit() v. execute("commit transaction")?

2017-06-02 Thread Jon Ribbens
On 2017-06-02, Frank Millman  wrote:
> "Frank Millman"  wrote in message news:ogr3ff$sg1$1...@blaine.gmane.org...
>
>> By default, psycopg2 uses 'autocommit', which means that even a SELECT is
>> preceded by a 'BEGIN' statement internally. I never changed the default, so
>> all of the following assumes that autocommit is on.
>
> Oops - by default it does *not* use autocommit, so the following assumes 
> that it is off.

Indeed, the DB-API spec says that auto-commit must be initially off.
This led to an extremely irritating situation whereby Python-MySQLdb
changed incompatibly between versions, it used to have auto-commit on
but was changed to bring it in line with the spec - and they didn't
even add any way of achieving the old backwards-compatible behaviour!

(You can call Connection.autocommit() but this has to happen after the
connection has already been established, and results in every new
connection starting with two completely pointless "SET autocommit 0"
"SET autocommit 1" commands.)
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: [OT] How to improve my programming skills?

2017-06-02 Thread Adriaan Renting



Adriaan Renting| Email: rent...@astron.nl
Software Engineer Radio Observatory
ASTRON | Phone: +31 521 595 100 (797 direct)
P.O. Box 2 | GSM:   +31 6 24 25 17 28
NL-7990 AA Dwingeloo   | FAX:   +31 521 595 101
The Netherlands| Web: http://www.astron.nl/~renting/



>>> On 1-6-2017 at 17:26, Mirko via Python-list
 wrote: 

> Hello everybody!
> 
> TLDR: Sorry for OT. Long-time Linux geek and hobby programmer wants 
> to improve his coding skills. What's most important: project 
> planing, algorithms and data structures, contributing to FOSS, web 
> development, learning other languages or something else?
>
Learning languages is something that sort of happens by itself as times
change.
 
> 
> Sorry for posting such a (long) off-topic question, which even is 
> partly even more about psychology, than about technology. But I'm 
> mostly using Python and Bash for programming and am reading this 
> mailing list/newsgroup for many years now, and hope it's Ok (feel 
> free to direct me somewhere more appropriate, as long it's an 
> equally helpful and friendly place).
> 
> I'm looking for a way (*the* way, ie. the "BEST(tm)" way) to improve
> my coding skills. While I'm a quite hard-core computer geek since 25
> years and a really good (hobbyist) Linux-SOHO-Admin, my programming
> skills are less than sub-par. I wish to change that and become at 
> least an average hobby-programmer. I'm an excellent autodidact and
> in fact, all what I know about computers is completely self-taught.
> 

The problem with being an autodidact is that you usually have big gaps
in your knowledge that you're not aware of.
The first question is if you want to be a better programmer or better
software engineer?
What I mean by that is: do you want to make your code more robust and
efficient or do you want to learn how to design better code? The first
is very much about algorithms and the math behind them, while the second
is more about tools and methods to help you design.

> Now, the problem is, that I'm 41 years old now, have a day job 
> (which hasn't much to do with advanced computing stuff), friends and
> all that. There is little time left for this undertaking (something 
> like 5 - 10 hours per week), so I'm looking for the most efficient 
> ways to do this.
>

If you have little time, it will take a longer time to get better.
 
> I identify the following problems which could be worth improving:
> 
> - I never sit down and plan anything with pencil and paper, 
> flowcharts, UML or anything else. I just fire up Vim and start 
> typing. That works (albeit slowly) for simple programs, but as soon 
> as the project becomes a little more complex the parts and 
> components don't match well and I need to botch something together 
> to somehow make it work. And in the resulting mess I lose the
interest.
>
An hour spent designing usually saves you many hours of coding.
UML is popular, but I'm a fan of Yourdon as well.
Design really starts with writing down what you're trying to do (use
cases), why, how, and within what limits. You get better through
experience; especially when doing it for the first time, people often
forget a lot of the limits they have.

Part of it is also software management, in this case mostly of yourself.
One of the books that's the foundation there is "The Mythical Man-Month".
It's old, but it would help the industry if everyone had read it.
 
> - I never learned algorithms and data structures. I know *what* 
> (linked) lists, dicts, arrays, structs, and trees are; what binary 
> search or bubble-sort is, but I never really studied them, let alone
> implemented them for educative purposes.
>
You're touching on the difference between information and knowledge.
You know they exist, but they're not really part of your knowledge.
If you want to get better at programming, you need to learn more about
algorithms; doing exercises and actually implementing them will make
them part of your knowledge.
Some general math will also help: it really helps if you can analyse
whether the code you're writing uses a linear or quadratic solution to
your problem, and how it could be rewritten.
At a higher level, there are also things like design patterns.
A standard book on that would be Design Patterns by Erich Gamma,
Richard Helm, Ralph Johnson and John Vlissides.
 
> - When it comes to coding, I'm heavily shy and unsure. I really know
> my stuff when it's about Linux-SOHO-Administration, but when trying
> to contribute to FOSS projects I always hesitate for several reasons
> (somehow I never find those low-hanging fruits that everybody talks
> about; either it's super-easy or too difficult.)
>
Sounds like you have typical sysadmin skills, but not really
programming or software engineering skills.
 
> - For my day job (which is absolutely not my dream job) it would be 
> quite useful to know web design/development, especially WordPress 
> stuff. But web programming always feel like being tr

Re: Circular iteration on tuple starting from a specific index

2017-06-02 Thread guillaume . paulet
@Gregory Ewing: you were right, your version without *chain* is faster 
and I quite like it :)


```
from timeit import timeit
from itertools import chain, cycle, islice


def cycle_once_with_chain(sequence, start):
    return chain(islice(sequence, start, None), islice(sequence, start))


def cycle_once_without_chain(sequence, start):
    return islice(cycle(sequence), start, start + len(sequence))


sequence = tuple(i for i in range(100))

time_with_chain = timeit(
    stmt='cycle_once_with_chain(sequence, 50)',
    number=100, globals=globals()
)
print('Method with *chain* took: ', (time_with_chain / 100), ' per call.')

# Method with *chain* took:  5.02595758977e-07  per call.

time_without_chain = timeit(
    stmt='cycle_once_without_chain(sequence, 50)',
    number=100, globals=globals()
)
print('Method without *chain* took: ', (time_without_chain / 100), ' per call.')

# Method without *chain* took:  3.5880194699984714e-07  per call.
```

@Ian: Good point here, these two methods only work with sequences 
(list, tuple, string...).


I renamed it appropriately in the above sample code :)
--
https://mail.python.org/mailman/listinfo/python-list


Re: Python DB API - commit() v. execute("commit transaction")?

2017-06-02 Thread Chris Angelico
On Fri, Jun 2, 2017 at 6:10 PM, Gregory Ewing
 wrote:
> Chris Angelico wrote:
>>
>> Executing a query gives you some sort of object. That object either
>> contains all the results, or has the resource handle (or whatever)
>> needed to fetch more from the back end. That basically makes it a
>> cursor, so we're not far different after all :)
>
>
> The point is that my API doesn't make a big deal out of them.
> You don't typically think about them, just as you don't usually
> think much about the intermediate iterator created when you
> do "for x in some_list".

Which is a fully reasonable way to do things. And actually, on further
consideration, I think it's probably the better way; the number of
times you would want to explicitly create a cursor are so few that
they can be handled by some sort of unusual method call, and the basic
usage should look something like this:

with psycopg2.connect(...) as conn:
    with conn.trans() as trn:
        for row in trn.execute("select ..."):
            print(row)

The outer context manager is optional, but not the inner one and the
method call, as I'm not a fan of the unusual usage where "with conn:"
creates a transaction - it's different from the way most context
managers work (managing the resource represented by the object itself,
not some additional resource allocated on __enter__). The iterator
used on trn.execute would be a cursor such as you describe.

ChrisA
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Python DB API - commit() v. execute("commit transaction")?

2017-06-02 Thread Gregory Ewing

Frank Millman wrote:
I never changed the 
default, so all of the following assumes that autocommit is on.


I had many SELECT's, but I was not issuing any form of commit, so the 
locks built up. I solved my problem by always committing.


Something is screwy when a feature called "autocommit" results in
you having to issue explicit commits.

--
Greg
--
https://mail.python.org/mailman/listinfo/python-list


Re: Python DB API - commit() v. execute("commit transaction")?

2017-06-02 Thread Gregory Ewing

Chris Angelico wrote:

Executing a query gives you some sort of object. That object either
contains all the results, or has the resource handle (or whatever)
needed to fetch more from the back end. That basically makes it a
cursor, so we're not far different after all :)


The point is that my API doesn't make a big deal out of them.
You don't typically think about them, just as you don't usually
think much about the intermediate iterator created when you
do "for x in some_list".

--
Greg
--
https://mail.python.org/mailman/listinfo/python-list


Re: Working with dictionaries and keys help please!

2017-06-02 Thread Gregory Ewing

On Thu, 1 Jun 2017 10:29 am, David D wrote:


Is there a way of performing this
where the key will update so that is continues to work sequentially?


It sounds like you don't want a dictionary at all, you want a list.
You can use the index() method to find the current "key" of an entry.

>>> people = ["John", "David", "Phil", "Bob"]
>>> people.index("Phil")
2
>>> people.remove("David")
>>> people.index("Phil")
1

--
Greg
--
https://mail.python.org/mailman/listinfo/python-list


Re: Python DB API - commit() v. execute("commit transaction")?

2017-06-02 Thread Chris Angelico
On Fri, Jun 2, 2017 at 5:18 PM, Frank Millman  wrote:
> As I said, I cannot prove this, but the theory fits the observed behaviour
> perfectly, so I have proceeded on the assumption that it is true. Therefore
> I now always run every SQL command or block of commands within a context
> manager, which always calls conn.commit() or conn.rollback() on exit, and I
> have not had any more problems. I use exactly the same code for sqlite3 and
> for Sql Server/pyodbc, and it has not caused any problems there either.

+1.

A bit more info: When you perform read-only queries against a
PostgreSQL database, you still have transactional integrity, just as
you would with mutating transactions. Two SELECT statements in the
same transaction will see a consistent view of the underlying
database. To accomplish this, the database creates low-grade locks, so
it knows which things you're using. (It's not quite that simple, since
Postgres uses MVCC, but broadly speaking it's so.) Thus transactions
are just as important for SELECT statements as they are for INSERT or
UPDATE... or, for that matter, ALTER TABLE (this is a point on which
not all DBMSes agree - transactional DDL is one of the features I love
about Postgres). Always using a context manager is good practice and
great for code clarity. I would be inclined to mandate it in a style
guide, if I were in charge of any good-sized psycopg2-based project.

ChrisA
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Python DB API - commit() v. execute("commit transaction")?

2017-06-02 Thread Frank Millman

"Frank Millman"  wrote in message news:ogr3ff$sg1$1...@blaine.gmane.org...


By default, psycopg2 uses 'autocommit', which means that even a SELECT is
preceded by a 'BEGIN' statement internally. I never changed the default, so
all of the following assumes that autocommit is on.

Oops - by default it does *not* use autocommit, so the following assumes 
that it is off.


Frank


--
https://mail.python.org/mailman/listinfo/python-list


Re: Python DB API - commit() v. execute("commit transaction")?

2017-06-02 Thread Frank Millman
"Skip Montanaro"  wrote in message 
news:canc-5uz2ruxrwnax8pjevqztqbndc0aojz3ggeb04k1zfff...@mail.gmail.com...



Assuming the underlying database supports transactions, is there any
difference between calling the commit() method on the connection and
calling the execute method on the cursor with the "commit transaction"
statement? It seems a bit asymmetric to me to start a transaction with


  cur.execute("begin transaction")



but end it with



  conn.commit()


Yes there is a difference, at least as far as the combination of PostgreSQL 
and psycopg2 is concerned. I will use 'PSQL' in the following, to save me 
some typing.


A while ago I had to delve into PSQL locking, as I had a problem with locks 
not being cleared. I learned that, for a simple SELECT statement, PSQL 
checks to see if it is in a transaction. If not, it does not set any locks, 
but if it is, it creates a lock which is cleared on the next 
COMMIT/ROLLBACK.


By default, psycopg2 uses 'autocommit', which means that even a SELECT is 
preceded by a 'BEGIN' statement internally. I never changed the default, so 
all of the following assumes that autocommit is on.


I had many SELECT's, but I was not issuing any form of commit, so the locks 
built up. I solved my problem by always committing. However in my 
experimenting I found something curious.


I had one window open on a python session, where I could execute commands, 
and another on a psql session, where I could monitor the 'lock' table.


I found that, if I issued a SELECT, a lock was created, if I called 
conn.commit(), the lock was cleared. I could repeat this sequence and the 
pattern was consistent.


However, if I issued a SELECT and called cur.execute('commit'), the lock was 
cleared, but the next SELECT did *not* create a lock.


I worked out a possible reason for this, which I have not proved it by 
examining the source code of psycopg2, but is internally consistent. The 
theory goes like this -


psycopg2 is in one of two states - a transaction is active, or it is not 
active. If you execute any command, and a transaction is not active, it 
starts a transaction first. If you call conn.commit() or conn.rollback(), it 
sends the command to the database and resets its state. However, (and this 
is the theory,) if you call cur.execute('commit'), it sends the command to 
the database, but does not reset its state. So when you execute the next 
command, it thinks the transaction is still active, so it does not start a 
new transaction. PSQL, on the other hand, knows that the previous 
transaction has been committed, so if the next command is a SELECT, it does 
not create a lock.


As I said, I cannot prove this, but the theory fits the observed behaviour 
perfectly, so I have proceeded on the assumption that it is true. Therefore 
I now always run every SQL command or block of commands within a context 
manager, which always calls conn.commit() or conn.rollback() on exit, and I 
have not had any more problems. I use exactly the same code for sqlite3 and 
for Sql Server/pyodbc, and it has not caused any problems there either.
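
(A minimal sketch of such a context manager, assuming any DB-API
connection object:)

from contextlib import contextmanager

@contextmanager
def db_transaction(conn):
    # Always finish with an explicit commit or rollback, so the
    # adapter's "transaction active" state and the server stay in sync.
    try:
        yield conn.cursor()
    except Exception:
        conn.rollback()
        raise
    else:
        conn.commit()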


Frank Millman


--
https://mail.python.org/mailman/listinfo/python-list