Re: [sqlalchemy] update with custom Type

2013-08-29 Thread Sebastian Elsner
After much thought and playing around I think I may have 
over-complicated the problem and found an easier way to do what I wanted:


s.query(Foo).update({"timcode_column": func.concat(func.left(Foo.timcode_column, 11), ":", 24)}, False)


Thanks for the help anyway :)


On 08/28/2013 08:08 PM, Michael Bayer wrote:

On Aug 28, 2013, at 9:59 AM, Sebastian Elsner sebast...@risefx.com wrote:


Now I would like to be able to do the following:

s.query(Foo).update({"some_timecode": Foo.some_timecode.add_hours(5)})  # adds 5 
hours to every Foo's timecode

I have seen this should be possible with a Comparator factory in 0.8 but I am 
stuck with 0.7 for now. How can I do this with 0.7?

just move out your function:


update({"some_timecode": add_hours(Foo.some_timecode, 5)})
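
A possible shape for such a helper, as a sketch only (it assumes the column held a 
plain number of seconds; a real timecode column would need its own arithmetic, 
e.g. the concat/left approach further up):

def add_hours(col, hours):
    # build a SQL expression directly; no Comparator on the type needed
    return col + hours * 3600

# s.query(Foo).update({"some_timecode": add_hours(Foo.some_timecode, 5)},
#                     synchronize_session=False)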


Something like this would also be OK for me:

s.query(Foo).update({"some_timecode": Foo.some_timecode + TC("01:00:00:00:00")})

I have tried implementing __add__ for both the TC and Timecode classes and read the 
"Augmenting Existing Types" docs, but failed to put the puzzle together.

Thank you for helping!

Sebastian




--
check out www.pointcloud9.com

Sebastian Elsner - Pipeline Technical Director - RISE

t: +49 30 20180300 flor...@risefx.com
f: +49 30 61651074 www.risefx.com

RISE FX GmbH
Schlesische Strasse 28, Aufgang B, 10997 Berlin
c/o action concept, An der Hasenkaule 1-7, 50354 Hürth
Geschaeftsfuehrer: Sven Pannicke, Robert Pinnow
Handelsregister Berlin HRB 106667 B



Re: [sqlalchemy] Possible bug in select.append_whereclause()?

2013-08-29 Thread Michael Bayer

On Aug 28, 2013, at 11:48 PM, gbr doubl...@directbox.com wrote:

 to compile the WHERE clause of "query" means we look at query.c.id, which 
 means we must render the table "query", which is how it cycles.
 
 Why does it need to compile the whole query when I just want to use the 
 column name of the query?
  
 What it seems like you're looking to do is where(location.c.id > 100).
 
 Correct. Unfortunately, I don't have access to `location.c.id`. The query is 
 generated and just returned as an object. All I can do to alter it is use 
 attributes of `query`. Is there any way of using the columns without 
 triggering a complete compilation of `query`?


OK, well, "compilation" means we turn it into a string when you pass it to 
execute(), and obviously that has to happen completely.  The issue isn't that we 
need to skip anything, it's that query.c.id is not the Column object you're 
looking for.

If you really wanted to get at that location.c.id and you really don't have the 
table or anything being passed, well, it would be a lot better to alter that, 
but otherwise you could perhaps dig into .froms like this:

my_table = s.froms[0]
s.append_whereclause(my_table.c.id > 100)

That will work, it's just brittle: if your select() is against more than one 
Table, or a join, or something like that, your target table would probably no 
longer be as easy to get at as s.froms[0].
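
A self-contained sketch of that workaround (table and column names are 
placeholders, not from the original query):

from sqlalchemy import Column, Integer, MetaData, Table, select

metadata = MetaData()
location = Table('location', metadata, Column('id', Integer))

s = select([location])              # stands in for the generated query
my_table = s.froms[0]               # the single Table behind the select
s.append_whereclause(my_table.c.id > 100)
print(s)    # SELECT location.id FROM location WHERE location.id > :id_1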





Re: [sqlalchemy] Can you count the active/open sessions?

2013-08-29 Thread Michael Bayer

On Aug 28, 2013, at 11:44 PM, dndcu...@gmail.com wrote:

 I'd like to run some tests against my app to make sure I'm properly closing 
 all of the sessions that I open. Is there a way to get the number of open 
 database sessions with SQLAlchemy? Is this a strange request that hints I may 
 be taking the wrong approach?
 
 Currently I'm using scoped sessions with SQLAlchemy 0.8 like so:
 
 self.db = create_engine(connection_string)
 self.Session = scoped_session(sessionmaker(bind=self.db))
 
 # then a bunch of calls like
 session = self.Session()
 # ...execute queries
 self.Session.remove()
 
 
 I tried inspecting the engine and engine.pool objects, but neither of them 
 seems to have anything that stores or returns a session count. I also read 
 about event listening[1], but it doesn't look like the sessionmaker class has 
 events to hook into for opening or closing sessions.
 
 I realize that scoped sessions return a thread-local object, so just 
 incrementing and decrementing a counter for each session call won't work. But 
 we could track the session objects in a dictionary to avoid double-counting.
 
 Here's what I came up with:
 
 open_sessions = {}
 
 def open_session(self):
     session = self.Session()
     self.open_sessions[session] = True
     return session
 
 def close_session(self):
     session = self.Session()
     self.Session.remove()
     if session in self.open_sessions:
         del self.open_sessions[session]
 
 
 Is this the right approach?

Well, ideally your app would be constructed such that the Session lifecycle is 
controlled by a block that uses something like try:/finally:, so that there's no 
doubt about things.  But if things aren't organized that way, I think what 
you're really concerned about is connections, not so much sessions, though you 
can use events to track both.  I'd track checked-out connections using pool 
events: 
http://docs.sqlalchemy.org/en/rel_0_8/core/events.html#sqlalchemy.events.PoolEvents.checkout
and 
http://docs.sqlalchemy.org/en/rel_0_8/core/events.html#sqlalchemy.events.PoolEvents.checkin
- just have a counter, increment it on checkout, decrement it on checkin.
For sessions, you can track the start/end of a session's transaction scope: 
http://docs.sqlalchemy.org/en/rel_0_8/orm/events.html#sqlalchemy.orm.events.SessionEvents.after_transaction_create
and 
http://docs.sqlalchemy.org/en/rel_0_8/orm/events.html#sqlalchemy.orm.events.SessionEvents.after_transaction_end
- there will be some nesting here per session, but again, if you increment on 
create and decrement on end, a count of zero will indicate everything is closed.
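
A minimal sketch of those counters (an illustration only, assuming an existing 
`engine`; the hooks are the documented pool and session events):

from sqlalchemy import event
from sqlalchemy.orm import Session

checked_out = 0     # connections currently checked out of the pool
open_txns = 0       # session transactions currently open

@event.listens_for(engine.pool, "checkout")
def _on_checkout(dbapi_conn, conn_record, conn_proxy):
    global checked_out
    checked_out += 1

@event.listens_for(engine.pool, "checkin")
def _on_checkin(dbapi_conn, conn_record):
    global checked_out
    checked_out -= 1

@event.listens_for(Session, "after_transaction_create")
def _on_txn_create(session, transaction):
    global open_txns
    open_txns += 1

@event.listens_for(Session, "after_transaction_end")
def _on_txn_end(session, transaction):
    global open_txns
    open_txns -= 1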
  



 
 
 ---
 [1] http://docs.sqlalchemy.org/en/rel_0_8/core/event.html
 





[sqlalchemy] sqlite string concats and datetime arithmetics

2013-08-29 Thread Greg Yang
I'm trying to get a series of datetimes using func.datetime. The format of 
input is func.datetime(basetime, '+ NNN seconds'), which works nicely if 
the shift applied is constant. However I need to add 10, 20, 30 seconds, 
etc to this base time. So I want something like func.datetime(basetime, 
concat('+', seconds.c.n, ' seconds')), but concat doesn't work for sqlite, 
which concatenates using the '||' operator. Is there a working method to 
concat in SQLite?

Failing that, is there another way to get at what I want with datetime 
arithmetic?



Re: [sqlalchemy] Possible bug in select.append_whereclause()?

2013-08-29 Thread gbr
I see. I probably have to go back to the drawing board and revise how the 
whole thing works. Thanks anyway for the insight. At least I know that 
there isn't much I can do this way (the query actually consists of many 
joins). Handy to know how to access the from clause (unrelated to my post, 
but is there more documentation on this and other parts of the query?)...

Many thanks for the swift response (and SQLAlchemy for that matter :)).

On Thursday, August 29, 2013 9:46:02 PM UTC+10, Michael Bayer wrote:


 On Aug 28, 2013, at 11:48 PM, gbr doub...@directbox.com wrote:

 to compile the WHERE clause of query means we look at query.c.id 
 which means we must render the table query which is how it cycles.


 Why does it need to compile the whole query when I just want to use than 
 column name of the query?
  

 What it seems like you're looking to do is where(location.c.id  100).


 Correct. Unfortunately, I don't have access to `location.c.id`. The query 
 is generated and just returned as an object. All I can do to alter it, is 
 by using attributes of `query`. Is there any way of using the columns 
 without triggering a complete compilation of `query`?



 OK well compilation means, we turn it into a string when you pass it to 
 execute(), and obviously that has to go completely.  the issue isn't that 
 we need to skip anything, it's that query.c.id is not the Column object 
 you're looking for.

 if you really wanted to get at that location.c.id and you really don't 
 have the table or anything being passed, well it would be a lot better to 
 alter that, but otherwise you could perhaps dig into froms like this:

 my_table = s.froms[0]
 s.append_whereclause(my_table.c.id)

 that will work, it's just brittle, if your select() is against more than 
 one Table, or a join, or something like that, your target table would 
 probably no longer be as easy as s.froms[0].





[sqlalchemy] copying data w/o over-writing autoincrement fields

2013-08-29 Thread Aaron Krister Johnson
Hi all,

I have a script that bulk copies relevant data from one database server to 
another using the neat-o MetaData features in SQLAlchemy.

My script is a custom variation of this script: 

http://www.tylerlesmann.com/2009/apr/27/copying-databases-across-platforms-sqlalchemy/

My question is this: I want to have the import NOT over-write 
auto-incremented columns, i.e. I want it to respect when a column should be 
incremented on the destination table. So, is there a way to indicate that 
the copy process should copy everything *but* the auto-incremented columns?

The documentation for SQLAlchemy is vast, but I haven't yet seen anything 
that clearly indicates this is possible. I feel certain it is there 
somewhere, but I'm surely missing it.

Best,
Aaron.



[sqlalchemy] waiting for table metadata lock

2013-08-29 Thread diverman
Hi,

we observed a deadlock-like problem on our multi-component system with a MySQL 
database.

Our setup:

1) MySQL server 5.5 with many MyISAM tables Foo_timestamp, one per day 
(like partitioning)

2) C++ backend daemon
  * issues CREATE TABLE IF NOT EXISTS Foo_timestamp (...) once per day 
after midnight
  * fills those tables with INSERT INTO Foo_timestamp during the day

3) Pythonic multi-threaded backend daemon (we call it 'worker')
  * reads Foo_timestamp tables every minute via ScopedSession and mapped 
classes 'Foo'
  * fills memcache with statistics computed from Foo_* tables

4) Pythonic/apache/mod_wsgi single-threaded, multi-process webserver
  * also runs on the shared codebase 'foobar', so it also uses ScopedSession 
(same as 'worker')
  * the codebase is the same when running in webserver mode and when running in 
worker mode (ScopedSession is always used).

We observed that during rollover to the next day the C++ BE daemon is blocked 
on 'Waiting for table metadata lock: CREATE TABLE IF NOT EXISTS 
Foo_x (...)'

The webserver is also blocked. Killing the 'worker' helps; all blocked stuff 
continues. If anyone knows:

- why it blocks, since we have MyISAM tables, which are transaction-less
- why the worker is the blocker, since it only executes SELECTs
- how to fix it?

Thanks

Pavel



[sqlalchemy] Re: Can you count the active/open sessions?

2013-08-29 Thread Jonathan Vanasco
In general, it's worth reading these two sections of the docs if you haven't 
already:

http://docs.sqlalchemy.org/en/rel_0_8/orm/session.html#session-frequently-asked-questions
http://docs.sqlalchemy.org/en/rel_0_8/orm/session.html#using-thread-local-scope-with-web-applications



Re: [sqlalchemy] copying data w/o over-writing autoincrement fields

2013-08-29 Thread Simon King
On Thu, Aug 29, 2013 at 3:54 PM, Aaron Krister Johnson
akjmi...@gmail.com wrote:
 Hi all,

 I have a script that bulk copies relevant data from one database server to
 another using the neat-o MetaData features in SQLAlchemy.

 My script is a custom variation of this script:

 http://www.tylerlesmann.com/2009/apr/27/copying-databases-across-platforms-sqlalchemy/

 My question is this: I want to have the import NOT over-write
 auto-incremented columns, i.e. I want it to respect when a column should be
 incremented on the destination table. So, is there a way to indicate that
 the copy process should copy everything *but* the auto-incremented columns?

 The documentation for SQLAlchemy is vast, butI haven't yet seen anything
 that clearly indicates this is possible.I feel certain it is there
 somewhere, but I'm surely missing it.


Columns have an autoincrement property, but from a quick look at the
source it probably doesn't quite do what you need. Tables have a
private _autoincrement_column property which returns the
autoincrementing column for that Table if one exists:

https://bitbucket.org/zzzeek/sqlalchemy/src/97168dbf69f8aa21de2e764a4a4993215cb9b726/lib/sqlalchemy/sql/schema.py?at=master#cl-503

As long as you are ok using a private property which may change or
disappear in future versions of SQLAlchemy, you could exclude this
column when copying the data.

However, if you've got any foreign keys pointing at those
auto-incrementing columns, this scheme will break, because the foreign
keys will be inserted with their old values rather than the new ones.
Fixing that would be much more complicated.

Hope that helps,

Simon
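
A rough sketch of that exclusion (an illustration only; src_table, dst_table, 
src_engine and dst_engine stand for the reflected tables and engines in the copy 
script, and it relies on the private attribute mentioned above):

from sqlalchemy import select

auto_col = dst_table._autoincrement_column      # private API, may change
copy_cols = [c for c in src_table.columns
             if auto_col is None or c.name != auto_col.name]

rows = src_engine.execute(select(copy_cols)).fetchall()
if rows:
    # the omitted autoincrement column is left for the destination to assign
    dst_engine.execute(
        dst_table.insert(),
        [dict(zip([c.name for c in copy_cols], row)) for row in rows],
    )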



Re: [sqlalchemy] waiting for table metadata lock

2013-08-29 Thread Simon King
On Thu, Aug 29, 2013 at 4:48 PM, diverman pa...@schon.cz wrote:
 Hi,

 we observed deadlock-like problem on our multi-component system with mysql
 database.

 Our setup:

 1) MySQL server 5.5 with many MyISAM tables Foo_timestamp, one per day
 (like partitioning)

 2) C++ backend daemon
   * issuing CREATE TABLE IF NOT EXISTS Foo_timestamp (...) once per day
 after midnight
   * filling those tables with INSERT INTO Foo_timestamp during intraday

 3) pythonic multi-threaded backend daemon (we call it 'worker')
   * reads Foo_timestamp tables every minute via ScopedSession and mapped
 classes 'Foo'
   * fills memcache with statistics computed from Foo_* tables

 4) pythonic/apache/mod_wsgi single-threaded, multi-process webserver
   * runs also on shared codebase 'foobar', so it uses also ScopedSession
 (same as 'worker')
   * codebase is same when runing in webserver-mode and when running in
 worker-mode (ScopedSession is always used).

 We observed that during rollover to next day the C++ BE daemon is blocked on
 'Waiting for table metadata lock: CREATE TABLE IF NOT EXISTS Foo_x
 (...)'

 Webserver is also blocked. Kill the 'worker' helps, all blocked stuff
 continues. If anyone could know:

 - why it blocks since we have MyISAM,which are transaction-less
 - why the worker is the blockator, since it only executes SELECTs
 - how to fix it?


This sounds like more of a MySQL issue than an SQLAlchemy issue, so
you might have more luck on a MySQL mailing list. In answer to your
first question,
http://dev.mysql.com/doc/refman/5.5/en/metadata-locking.html suggests
that non-transactional tables can also be involved in metadata locks.
I'm afraid I can't help with the rest.

Simon



Re: [sqlalchemy] waiting for table metadata lock

2013-08-29 Thread Simon King
On Thu, Aug 29, 2013 at 5:20 PM, Simon King si...@simonking.org.uk wrote:
 On Thu, Aug 29, 2013 at 4:48 PM, diverman pa...@schon.cz wrote:
 Hi,

 we observed deadlock-like problem on our multi-component system with mysql
 database.

 Our setup:

 1) MySQL server 5.5 with many MyISAM tables Foo_timestamp, one per day
 (like partitioning)

 2) C++ backend daemon
   * issuing CREATE TABLE IF NOT EXISTS Foo_timestamp (...) once per day
 after midnight
   * filling those tables with INSERT INTO Foo_timestamp during intraday

 3) pythonic multi-threaded backend daemon (we call it 'worker')
   * reads Foo_timestamp tables every minute via ScopedSession and mapped
 classes 'Foo'
   * fills memcache with statistics computed from Foo_* tables

 4) pythonic/apache/mod_wsgi single-threaded, multi-process webserver
   * runs also on shared codebase 'foobar', so it uses also ScopedSession
 (same as 'worker')
   * codebase is same when runing in webserver-mode and when running in
 worker-mode (ScopedSession is always used).

 We observed that during rollover to next day the C++ BE daemon is blocked on
 'Waiting for table metadata lock: CREATE TABLE IF NOT EXISTS Foo_x
 (...)'

 Webserver is also blocked. Kill the 'worker' helps, all blocked stuff
 continues. If anyone could know:

 - why it blocks since we have MyISAM,which are transaction-less
 - why the worker is the blockator, since it only executes SELECTs
 - how to fix it?


 This sounds like more of a MySQL issue than an SQLAlchemy issue, so
 you might have more luck on a MySQL mailing list. In answer to your
 first question,
 http://dev.mysql.com/doc/refman/5.5/en/metadata-locking.html suggests
 that non-transactional tables can also be involved in metadata locks.
 I'm afraid I can't help with the rest.


Googling "mysql Waiting for table metadata lock" gives a few clues.
This one looks like a nice short summary:

  http://www.chriscalender.com/?p=1189

Simon



Re: [sqlalchemy] waiting for table metadata lock

2013-08-29 Thread diverman
Hi,

Of course I searched the web before submitting the question. None of the 
results pointed to the real problem.

This can be easily reproduced with SQLAlchemy:

console 1:

mysql> create table if not exists Foo (i int) engine MyISAM;
Query OK, 0 rows affected (0,00 sec)

mysql>


console 2:

>>> foo = engine.execute('select * from Foo;')
>>>

now go back to console 1:

mysql> create table if not exists Foo (i int);
Blocked!

now go to console 2:

>>> del(foo)


console 1: has unblocked!
Query OK, 0 rows affected, 1 warnings
mysql>

Pavel

On Thursday, August 29, 2013 18:23:12 UTC+2, Simon King wrote:

 On Thu, Aug 29, 2013 at 5:20 PM, Simon King si...@simonking.org.uk 
 wrote: 
  On Thu, Aug 29, 2013 at 4:48 PM, diverman pa...@schon.cz 
 wrote: 
  Hi, 
  
  we observed deadlock-like problem on our multi-component system with 
 mysql 
  database. 
  
  Our setup: 
  
  1) MySQL server 5.5 with many MyISAM tables Foo_timestamp, one per 
 day 
  (like partitioning) 
  
  2) C++ backend daemon 
* issuing CREATE TABLE IF NOT EXISTS Foo_timestamp (...) once per 
 day 
  after midnight 
* filling those tables with INSERT INTO Foo_timestamp during 
 intraday 
  
  3) pythonic multi-threaded backend daemon (we call it 'worker') 
* reads Foo_timestamp tables every minute via ScopedSession and 
 mapped 
  classes 'Foo' 
* fills memcache with statistics computed from Foo_* tables 
  
  4) pythonic/apache/mod_wsgi single-threaded, multi-process webserver 
* runs also on shared codebase 'foobar', so it uses also 
 ScopedSession 
  (same as 'worker') 
* codebase is same when runing in webserver-mode and when running in 
  worker-mode (ScopedSession is always used). 
  
  We observed that during rollover to next day the C++ BE daemon is 
 blocked on 
  'Waiting for table metadata lock: CREATE TABLE IF NOT EXISTS 
 Foo_x 
  (...)' 
  
  Webserver is also blocked. Kill the 'worker' helps, all blocked stuff 
  continues. If anyone could know: 
  
  - why it blocks since we have MyISAM,which are transaction-less 
  - why the worker is the blockator, since it only executes SELECTs 
  - how to fix it? 
  
  
  This sounds like more of a MySQL issue than an SQLAlchemy issue, so 
  you might have more luck on a MySQL mailing list. In answer to your 
  first question, 
  http://dev.mysql.com/doc/refman/5.5/en/metadata-locking.html suggests 
  that non-transactional tables can also be involved in metadata locks. 
  I'm afraid I can't help with the rest. 
  

 Googling mysql Waiting for table metadata lock gives a few clues. 
 This one looks like a nice short summary: 

   http://www.chriscalender.com/?p=1189 

 Simon 




Re: [sqlalchemy] waiting for table metadata lock

2013-08-29 Thread Jonathan Vanasco
MyISAM doesn't support transactions, but it does support locking. According 
to the docs, it actually relies on table locking for certain operations.

What are your SQLAlchemy connection strings for the engine?
Have you tried explicitly setting autocommit mode within the query?
Have you tried verbose logging of SQLAlchemy and/or MySQL to see what 
commands are being sent to the server, and in which order, to cause this?
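
(For the logging question, a minimal sketch -- the URL is a placeholder:)

from sqlalchemy import create_engine

# echo=True logs every emitted statement, which shows the ordering
engine = create_engine('mysql://user:password@localhost/dbname', echo=True)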



Re: [sqlalchemy] waiting for table metadata lock

2013-08-29 Thread Simon King
On Thu, Aug 29, 2013 at 6:01 PM, diverman pa...@schon.cz wrote:
On Thursday, August 29, 2013 18:23:12 UTC+2, Simon King wrote:

 On Thu, Aug 29, 2013 at 5:20 PM, Simon King si...@simonking.org.uk
 wrote:
  On Thu, Aug 29, 2013 at 4:48 PM, diverman pa...@schon.cz wrote:
  Hi,
 
  we observed deadlock-like problem on our multi-component system with
  mysql
  database.
 
  Our setup:
 
  1) MySQL server 5.5 with many MyISAM tables Foo_timestamp, one per
  day
  (like partitioning)
 
  2) C++ backend daemon
* issuing CREATE TABLE IF NOT EXISTS Foo_timestamp (...) once per
  day
  after midnight
* filling those tables with INSERT INTO Foo_timestamp during
  intraday
 
  3) pythonic multi-threaded backend daemon (we call it 'worker')
* reads Foo_timestamp tables every minute via ScopedSession and
  mapped
  classes 'Foo'
* fills memcache with statistics computed from Foo_* tables
 
  4) pythonic/apache/mod_wsgi single-threaded, multi-process webserver
* runs also on shared codebase 'foobar', so it uses also
  ScopedSession
  (same as 'worker')
* codebase is same when runing in webserver-mode and when running in
  worker-mode (ScopedSession is always used).
 
  We observed that during rollover to next day the C++ BE daemon is
  blocked on
  'Waiting for table metadata lock: CREATE TABLE IF NOT EXISTS
  Foo_x
  (...)'
 
  Webserver is also blocked. Kill the 'worker' helps, all blocked stuff
  continues. If anyone could know:
 
  - why it blocks since we have MyISAM,which are transaction-less
  - why the worker is the blockator, since it only executes SELECTs
  - how to fix it?
 
 
  This sounds like more of a MySQL issue than an SQLAlchemy issue, so
  you might have more luck on a MySQL mailing list. In answer to your
  first question,
  http://dev.mysql.com/doc/refman/5.5/en/metadata-locking.html suggests
  that non-transactional tables can also be involved in metadata locks.
  I'm afraid I can't help with the rest.
 

 Googling mysql Waiting for table metadata lock gives a few clues.
 This one looks like a nice short summary:

   http://www.chriscalender.com/?p=1189

 Simon
 Hi,

 of course I searched the web before submitting the question. None of them
 pointed to what is the real problem.

 This can be easily reproduced with sqlalchemy:

 console 1:

 mysql create table if not exists Foo (i int) engine MyISAM;
 Query OK, 0 rows affected (0,00 sec)

 mysql


 console 2:

 foo = engine.execute('select * from Foo;')


 now go back to console 1:
 mysql create table if not exists Foo (i int);
 Blocked!

 now go to console 2:
 del(foo)


 console 1:  has unblocked!
 Query OK, 0 rows affected, 1 warnings
 mysql

 Pavel


I'm not quite sure what your point is. I assume you can also reproduce
it with plain MySQL (you may need to set @@autocommit=0; in one or
both consoles). I can't test it myself as I don't have MySQL 5.5.

Simon



Re: [sqlalchemy] waiting for table metadata lock

2013-08-29 Thread diverman
Hi Jonathan,

I don't use any special connection parameters. All are defaults.

If I execute

>>> engine.execute('set autocommit=1;')
>>> foo = engine.execute('select * from Foo;')

...then on the other console CREATE TABLE IF NOT EXISTS Foo(i int) doesn't block.

So it seems it's an application bug. Autocommit should be True for the 
ScopedSession when running in the worker's context.
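
A sketch of one way to set that up (the connection URL is a placeholder; 
committing/removing the session after each read would be the alternative):

from sqlalchemy import create_engine
from sqlalchemy.orm import scoped_session, sessionmaker

engine = create_engine('mysql://user:password@localhost/dbname')    # placeholder
# autocommit=True means plain SELECTs don't hold a transaction (and its
# metadata lock) open between calls
Session = scoped_session(sessionmaker(bind=engine, autocommit=True))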

On Thursday, August 29, 2013 7:17:51 PM UTC+2, Jonathan Vanasco wrote:

 MyISAM doesn't support transactions, but it does support locking. 
 According to the docs, it actually relies on table locking for certain 
 operations.

 What are your sqlalchemy connection strings for the engine ?
 Have you tried explicitly setting autocommit mode within the query ?
 Have you tried verbose logging of SqlAlchemy and/pr MySql to see what 
 commands are being sent to the server in which order to cause this ?




Re: [sqlalchemy] waiting for table metadata lock

2013-08-29 Thread diverman



 I'm not quite sure what your point is.


I was trying to figure out whether it's a bug in the worker, SQLAlchemy, MySQL, or 
elsewhere, since I was unable to reproduce it in the development environment, 
only under laboratory conditions using the commands I posted in my previous e-mail.



Re: [sqlalchemy] sqlite string concats and datetime arithmetics

2013-08-29 Thread Gunnlaugur Thor Briem
I would have expected the SQLite dialect to know how to compile concat to ||, 
if that's the operator. But failing that, something more explicit like this 
ought to do the trick:

from sqlalchemy.sql import literal_column
literal_column("'+ '").op('||')(seconds.c.n).op('||')(literal_column("' seconds'"))
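
Another option along the same lines, offered only as a sketch (the seconds table 
below is a stand-in for the real one): the String + operator should compile to 
|| on SQLite, so casting the integer column is enough:

from sqlalchemy import (Column, Integer, MetaData, String, Table,
                        cast, func, literal, select)

metadata = MetaData()
seconds = Table('seconds', metadata, Column('n', Integer))    # stand-in table

shift = literal('+ ') + cast(seconds.c.n, String) + literal(' seconds')
stmt = select([func.datetime('2013-08-29 00:00:00', shift)])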

Gulli



On Thu, Aug 29, 2013 at 12:33 PM, Greg Yang sorcerero...@gmail.com wrote:

 I'm trying to get a series of datetimes using func.datetime. The format of
 input is func.datetime(basetime, '+ NNN seconds'), which works nicely if
 the shift applied is constant. However I need to add 10, 20, 30 seconds,
 etc to this base time. So I want something like func.datetime(basetime,
 concat('+', seconds.c.n, ' seconds')), but concat doesn't work for sqlite,
 which concatenates using the '||' operator. Is there working method to
 concat in sqlite?

 Failing that, is there another way to get at what I want with datetime
 arithmetics?





[sqlalchemy] checking script validity

2013-08-29 Thread monosij . forums
Hello - 

I am not very familiar with SQLAlchemy yet...

Is there a way to check whether a script I have generated (not using SQLAlchemy) 
is valid for a particular DBMS (Postgres, MySQL, etc.)?

It would be an ANSI SQL script, and there is a module ansisql, but I am not able 
to find where I can check a script for validity against a particular DBMS 
(Postgres).

I would need to validate a create table script as well as index creation and 
foreign key constraint scripts.

Thank you.



[sqlalchemy] Re: checking script validity

2013-08-29 Thread herzaso
Not quite related to SQLAlchemy, but if you want a one-time check, check out 
sqlfiddle (http://sqlfiddle.com/).


On Friday, August 30, 2013 8:30:09 AM UTC+3, monosij...@gmail.com wrote:

 Hello - 

 Not very familiar with sqlalchemy yet ...

 Is there a way to check if a script I have generated (not using 
 sqlalchemy) is valid for a particular DBMS (Postgres, MySQL, etc.)

 It would be an ANSI SQL script and there is module ansisql but not able to 
 find where I can check a script for validity against a particular DBMS 
 (Postgres).

 I would need to validate a create table script as well index creation 
 script and foreign key constraint script.

 Thank you.

