[sqlalchemy] Application locking and SQLAlchemy

2012-12-02 Thread YKdvd
I'm starting to use SQLAlchemy to access a small, in-house database.  I 
need to add some rudimentary concurrency handling, and just wanted to check 
that my general impression of the SQLAlchemy implications is correct.  As 
a hypothetical example, assume something like a Users table, and some sort 
of grid-like editing display showing multiple rows.  My intention is to 
create a Locks table, which identifies the 
table/primaryID/lockObtainedTimestamp of a locked row in a data table like 
Users.  An editing routine would have to acquire locks for the rows it 
wished to edit.  There are three basic cases:

1) Exclusive editor - I'd just obtain a Locks row on some unique identifier 
specific to, say, editing Users, and only one editor would be allowed to 
update Users at a time.  This is just a simple semaphore, and once it's 
obtained I can do all my SQLAlchemy stuff freely, since it is assumed 
nothing else will write to the data table.  The existing non-DB scheme does 
this, although I'll be able to change this to case #2.

2) Entire grid - once the desired subset of records has been retrieved 
(perhaps all Users in a particular group or branch office), the grid 
allows all records to be modified, with a single Update once finished. 
 Here I'd have to loop through the Users returned from my SQLAlchemy query 
and obtain a Locks row for each one (one Locks row per User).  If 
successful, I'd then have to call refresh() on each of the objects (just in 
case something updated between the query and obtaining the lock).

3) Single row - only one grid row is edited/saved at a time, so I'd just 
obtain a Locks row for it and refresh() the single object before editing. 

I'm pretty sure I know what I need to make the Locks table work, but I just 
wanted to be sure that refresh() is the right SQLAlchemish thing to do in 
this situation.  This would be a low-volume, in-house sort of thing, and 
this type of basic pessimistic locking is acceptable over trying to resolve 
an editing conflict using the built-in optimistic version_id_col / 
StaleDataError features, although those may be useful in places.
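
For concreteness, here's roughly the kind of Locks model and acquisition 
step I have in mind - just a sketch, with all names and columns 
hypothetical; the unique constraint is what makes concurrent acquisition 
safe:

from datetime import datetime
from sqlalchemy import Column, Integer, String, DateTime, UniqueConstraint
from sqlalchemy.exc import IntegrityError

class Lock(Base):  # "Base" is the usual declarative base
    __tablename__ = 'locks'
    id = Column(Integer, primary_key=True)
    table_name = Column(String(64), nullable=False)
    row_id = Column(Integer, nullable=False)
    obtained = Column(DateTime, default=datetime.utcnow)
    __table_args__ = (UniqueConstraint('table_name', 'row_id'),)

def try_acquire(session, table_name, row_id):
    # attempt to lock one row; the duplicate-key failure means
    # some other editor already holds the lock
    try:
        session.add(Lock(table_name=table_name, row_id=row_id))
        session.commit()
        return True
    except IntegrityError:
        session.rollback()
        return False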





Re: [sqlalchemy] Application locking and SQLAlchemy

2012-12-02 Thread YKdvd
D'oh!  And I had just been reading the docs about a commit expiring loaded 
object instances, which is what the refresh() would have done.  So the mere 
accessing of that User object instance (say, UserObj.name = newNameFromEdit, 
or temp = UserObj.name) would first trigger a reload of that User from the 
database, once I've committed my lock-obtaining inserts to the Locks table 
based on an initial Users query.  
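
In other words (a minimal sketch of that behaviour, assuming the default 
expire_on_commit=True and my own variable names):

session.commit()  # committing the Locks inserts expires all loaded instances

temp = UserObj.name             # first attribute access re-SELECTs the row,
UserObj.name = newNameFromEdit  # so the edit starts from fresh data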

By the way, add me to the list of grateful SQLAlchemy users.  I've played 
with Doctrine in PHP a bit, and if nothing else the SQLA documentation is 
miles better.  As a starter it is mainly a magical way of replacing writing 
a pile of tedious CRUD, and as someone who has hazy memories of things like 
the original Btrieve, an ORM is almost unsporting... :)




[sqlalchemy] Multiple table join selectable update vs insert

2013-01-02 Thread YKdvd
I'm starting to work with an existing MySQL setup, where there's a master 
database (or, effectively, schema, it is all within one MySQL instance) 
with tables of general usefulness, and separate schemas for each specific 
project.  So there is a table master.users with all the basic information 
for a user, and each project would have something like project7.userlink 
with additional info.  userlink also has a couple of columns which 
duplicate those in users, so that, for instance, user.jobtitle can be 
overridden by userlink.jobtitle for that specific project, if the userlink 
column is non-NULL.  The existing usage (mainly PHP) has a view in the 
project database which presents the appropriate values via COALESCE, which 
was fine, but needs to change now that creating and updating the info is 
required (the view obviously wasn't updatable).

I've used the "Mapping a Class against Multiple Tables" pattern in the 
SQLAlchemy docs to map against a join selectable:

MasterUser = Table("users", metadata, Column("id", Integer, 
    primary_key=True), ..., schema="master")
ProjUser = Table("userlink", metadata, Column("user_id", Integer, 
    ForeignKey('master.users.id'), primary_key=True), ...)

UserMerge_join = join(MasterUser, ProjUser)
class UMerge(Base):
    __table__ = UserMerge_join
    id = column_property(MasterUser.c.id, ProjUser.c.user_id)
    ...

This seemed to work well for creating new users (no records yet in 
master.users or the project's userlink table), and for retrieving those 
that have a userlink row in whatever project database is being connected to 
(which means the master.users row exists as well).  It failed when trying 
to retrieve UserMerge instances where a matching master.users row existed 
but no row yet in the project's userlink table (not all users belong to 
every project), but I changed the join to:

UserMerge_join = join(MasterUser, ProjUser, isouter=True)

and I can retrieve non-member users if necessary, in preparation to add 
them to the project.  But if I then modify and commit, it fails with 
StaleDataError: "UPDATE statement on table 'userlink' expected to update 1 
row(s); 0 were matched."  SQLAlchemy seems to be saying that even though 
no row from userlink was present on retrieval, it is expecting one on the 
update.  I can understand the error, since this same situation would be 
produced if there had indeed been a userlink record which was deleted by 
something outside the session.  I may be abusing the isouter feature, and 
I can handle this some other way (add existing users into a project by 
adding a bare ProjUser entry for them - see the sketch below), but I 
thought I'd check to make sure there wasn't one more little trick I might 
be missing to have SQLAlchemy generate the necessary INSERT instead of an 
UPDATE in this case for the userlink portion of this composite object.
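
The bare-ProjUser fallback would presumably look something like this 
(untested sketch, reusing the Table objects from the snippet above; 
master_user_id is a hypothetical value):

# create the missing project-side row first, then edit through the merged class
session.execute(ProjUser.insert().values(user_id=master_user_id))
session.commit()
merged = session.query(UMerge).get(master_user_id)
merged.jobtitle = "New Title"  # now flushes as UPDATEs against both tables
session.commit()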





[sqlalchemy] Events (load and after_attach) for instance stamping

2013-01-10 Thread YKdvd
In the MySQL system I'm redoing with SQLAlchemy, there is effectively a 
"master" schema that exists once and holds top-level stuff (including a 
table describing projects), and project schemas (one identical set of 
tables per project) which hold project-level stuff.  When an object is 
retrieved from a project, there is nothing in the data itself which 
indicates which project it was loaded from, but this may in certain cases 
be needed.  I have a nice collection of Project objects loaded from the 
master schema lying around, and I pass my own subclass of SQLA's Session to 
sessionmaker() so I can store a reference to the appropriate Project with a 
session.  But since Detached and Transient instances don't have an 
associated session, I can't just use object_session() to access the Project 
via the instance's session.  So I thought I'd stamp a reference to the 
appropriate Project onto instances.

SQLA's event system has the "after_attach" session event.  Hooking into 
this works for new instances I attach to a session with add(), but it 
doesn't fire when items are loaded from a query - presumably "attached" 
means direct userland attachment only, not instances being associated with 
a session via a query.  So it's only half the solution - there doesn't seem 
to be a session-level event that fires in the latter case?

There is a "load" instance-level event, which seems to solve the other half 
of this.  The QueryContext passed to the handler contains the session, and 
I can stamp the instance with the Project reference I've stored in the 
session.  The only drawback is that I apparently need to separately attach 
the event listener to each project instance class?  Presumably I could 
somehow retrieve a list of classes from the declarative_base 
(_decl_class_registry?) and walk through that adding the listeners.

Is this a reasonable approach, and between the two events will I be seeing 
all the objects that enter the session, whether from database retrieval or 
my code?
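
Roughly what I mean by stamping, with both halves (a sketch; MySession, 
SomeProjectClass, and the "project" attribute on the session are my own 
names):

from sqlalchemy import event

@event.listens_for(MySession, "after_attach")
def stamp_on_attach(session, instance):
    # fires for add()'ed objects, but not for query loads
    instance.itsProject = session.project

@event.listens_for(SomeProjectClass, "load")
def stamp_on_load(instance, context):
    # fires as the ORM populates an instance from a query row
    instance.itsProject = context.session.project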





Re: [sqlalchemy] Events (load and after_attach) for instance stamping

2013-01-10 Thread YKdvd
On Friday, January 11, 2013 2:34:09 AM UTC-4, Michael Bayer wrote:


  you can associate the instance event with the mapper() callable or 
 Mapper class, and it will take effect for all mapped classes. 


I think that would work for my case, although I'm a little fuzzy as to the 
exact syntax for providing the mapper reference to event.listen() - I've 
just been using declarative and haven't dealt with mappers directly.  Each 
of my classes would have a different mapper instance, and I'd have to 
attach to each (no benefit over class attachment)?  Or using the Mapper 
class itself would trigger for anything mapped, and I'd have to 
discriminate in the handler for classes of interest, or create a Mapper 
subclass and somehow have my declarative_base subclass use it?  
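
For reference, the Mapper-class variant I'm asking about would presumably 
look like this (untested sketch; ProjectLevelBase is a hypothetical marker 
base):

from sqlalchemy import event
from sqlalchemy.orm import Mapper

@event.listens_for(Mapper, "load")
def stamp_any(instance, context):
    # fires for every mapped class, so filter to the ones of interest
    if isinstance(instance, ProjectLevelBase):
        instance.itsProject = context.session.project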

  

Using 0.8 you can also associate the listener with a common base class, 
 such as your declarative Base, or even a mixin that's common to some subset 
 of classes. 


That sounds even nicer, and since I'm just starting out with no legacy code 
I've been meaning to try dropping 0.8 in even before it is final.  It 
sounds like I could just do something like 
event.listen(MyDeclarativeSubclass, "load", myStamperFunc).




Re: [sqlalchemy] Events (load and after_attach) for instance stamping

2013-01-11 Thread YKdvd


 That sounds even nicer, and since I'm just starting out with no legacy 
 code I've been meaning to try dropping 0.8 in even before it is final.  It 
 sounds like I could just do something like 
 event.listen(MyDeclarativeSubclass, "load", myStamperFunc).


I popped in 0.8b2 and tried something like this:

class SubBase(object):
    ...

def MyLoadHandler(instance, context):
    print "hit the load handler"

MyBase = declarative_base(cls=SubBase)
event.listen(MyBase, "load", MyLoadHandler)
...
class MyDataClass(MyBase):
    ...  # do all the declarative stuff
...
anInstance = session.query(MyDataClass).first()


It doesn't complain about a non-existent event when I install the handler, 
but the event handler doesn't seem to get called when the instance gets 
loaded by the query.  I'll try to do a minimal little standalone test to 
make sure it isn't something in my system messing things up, but in theory 
this should work?




Re: [sqlalchemy] Events (load and after_attach) for instance stamping

2013-01-11 Thread YKdvd
On Friday, January 11, 2013 11:25:06 AM UTC-4, Michael Bayer wrote:


 in theory.  It's new stuff and was very tricky to get it to work, so feel 
 free to send a brief test along.


Here's a minimal example I quickly put together - it retrieves from the 
database, but the handler doesn't seem to fire, and so the print statement 
on the second-last line fails.  I'm using the standard MySQLdb with MySQL 
5.0.43 (sigh, they promise to update soon) - the server is on a Debian box, 
and the client code is running in Python 2.6.4 on Windows 7 (it's actually 
the 64-bit Python interpreter bundled with the Maya graphics package):

from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy import Table, Column, Integer, event, create_engine
from sqlalchemy.orm import sessionmaker

def My_load_instance_handler(instance, context):
    # should add itsProduction attribute to all loaded instances
    instance.itsProduction = None
    print "stamped loaded instance with production"

AlchemyBase = declarative_base()
event.listen(AlchemyBase, "load", My_load_instance_handler)
dbURL = "mysql://User:password@%s/%s" % (dbServer, dbName)  # replace with actual parameters
engine = create_engine(dbURL)
AlchemyBase.metadata.bind = engine

class MyDataClass(AlchemyBase):
    __tablename__ = "vw_users"  # replace with actual table name
    id = Column(Integer, primary_key=True)

Session = sessionmaker(bind=engine)
session = Session()
instance = session.query(MyDataClass).first()
print instance.id, instance.itsProduction  # fails: no itsProduction attribute
session.close()




[sqlalchemy] MySQL drivers

2013-01-24 Thread YKdvd
My main use of SQLAlchemy is in the Python incorporated into Autodesk's 
Maya graphics program, using the standard MySQL-Python driver. 
 Unfortunately, for the most recent Maya 2013 version Autodesk has compiled 
their Python (2.6.4) with Visual Studio 2010 for Windows, instead of the 
VS2008 used for the official Python distributions.  This can create issues 
(two different C runtimes) for packages with C bits (like MySQL-Python) 
compiled with 2008.  I'm looking into compiling the driver myself, but I'm 
also looking at dropping in an alternate driver temporarily.  

The pymysql driver is pure Python and supposedly a drop-in replacement for 
MySQL-Python.  I was wondering if anyone has had any real-world experience 
swapping it in, and knows of any gotchas?  Our usage doesn't require 
super-duper performance and isn't doing anything horribly exotic.  There is 
one bit that still imports MySQLdb directly and does raw SQL stuff instead 
of SQLA, which is why a temporary drop-in replacement without a lot of side 
effects would be appealing. 
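
The shim I have in mind is pymysql's documented install_as_MySQLdb(), plus 
naming the driver in the URL for the SQLAlchemy side (sketch; credentials 
are placeholders):

import pymysql
pymysql.install_as_MySQLdb()  # subsequent "import MySQLdb" now gets pymysql

from sqlalchemy import create_engine
engine = create_engine("mysql+pymysql://user:password@host/dbname")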




[sqlalchemy] Re: Detached instances and lazy-loaded attributes

2013-01-24 Thread YKdvd
I was going to look at some sort of "walk and trigger", but SQLA is elegant 
enough that I suspected I was either missing a function that already did 
this, or was fighting against the design philosophy.  Sounds like the 
latter... :)

My background gives "detached" a less ephemeral flavour, especially in a 
long-persisting app rather than a web app - this was sort of configuration 
stuff that was loaded and referenced read-only.  I'm sort of recreating a 
legacy system and designing on the fly, and hadn't quite resolved whether 
the loading session was going to remain open, and the bit that loaded the 
stuff didn't have a lazy/eager option.  I'll have no problem in this case 
rejiggering to eager-load at query time, or even making the relationships 
eager.  Thanks for the feedback.

On Thursday, January 24, 2013 2:54:14 PM UTC-4, YKdvd wrote:

 If I have an instance MyInstance with an attribute itsLazyAttribute 
 from a relationship, where everything is defined and queried with lazy 
 loading, then the attribute is lazily loaded, and only filled in when I 
 access it (perhaps x=len(MyInstance.itsLazyAttribute)).  But if not 
 accessed before the instance is detached (session closed, or expunge 
 called) then the attribute will be unavailable, and trigger a 
 DetachedInstanceError ("not bound", "lazy load operation cannot proceed").

 All fine, but is there any way to automatically trip all the lazy triggers 
 before detaching an instance?  Something like a 
 session.fullyLoad(MyInstance) type of call?  I couldn't seem to see 
 anything that would do this.  I can redo things to do an eager query, but I 
 was wondering if I was missing something?
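
The eager-query rejiggering amounts to this kind of thing (sketch; MyClass 
stands in for the real mapped class from the quoted example):

from sqlalchemy.orm import joinedload

inst = (session.query(MyClass)
        .options(joinedload(MyClass.itsLazyAttribute))
        .first())
session.expunge(inst)  # the attribute is already populated, so it
                       # survives detachment without a lazy load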





[sqlalchemy] composite ForeignKeyConstraint with a constant, non-column member?

2013-04-07 Thread YKdvd
I'm using SQLAlchemy on an existing database, with a one-to-many 
relationship whose "many" rows connect into a many-to-many 
department/status system.  The main fields are something like this: 

Parent.id
Parent.mainStatus
Child.id
Child.parent_link
Child.department_id
Child.status_id
Department.id
DepartmentAssoc.id
DepartmentAssoc.department_id
DepartmentAssoc.status_id
DepartmentAssoc.seqOrder
Status.id
Status.whatever

Each Parent can have multiple Child rows, for different departments.  Each 
department can have a specific sequenced subset of Status rows, and the 
DepartmentAssoc table provides this.  Child has a status value, and uses a 
composite foreign key relationship (department_id, status_id) through 
DepartmentAssoc to the status system.  Everything connects and works fine, 
and SQLA has those slick AssociationProxy things that make it even better 
than the original.  There are a couple of different Parent/Child-type 
setups like this that share that same association setup.

Unfortunately, there's also one weird parent variant that has a naked 
status code mainStatus, which implicitly belongs to a specific department 
(call it #310).  The PHP never did complete links with this, but I'd like 
to provide the same sort of foreign key/association linkage to the status 
system that the various Child objects have.  But since there is no 
department field in Parent, I can't specify that half of the composite.  I 
assume there isn't any way of doing something like 
ForeignKeyConstraint(['f1', 'f2'], [310, 'da.status_id']) with a constant? 
 I can't think how I'd tell MySQL to do this, so it makes sense that you 
can't.  I could probably kludge a department field into that Parent record 
to make it work, but I was wondering if there is some buried SQLA feature 
that might be put to use?
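
At the SQLAlchemy level (as opposed to a real MySQL foreign key), I'm 
guessing a viewonly relationship with the constant baked into the 
primaryjoin might approximate it - a sketch, with the department id 
hard-coded and the rest of the Parent columns elided:

from sqlalchemy.orm import relationship

class Parent(Base):
    __tablename__ = 'parent'
    id = Column(Integer, primary_key=True)
    mainStatus = Column(Integer)
    mainStatusAssoc = relationship(
        "DepartmentAssoc",
        primaryjoin="and_(DepartmentAssoc.department_id == 310, "
                    "foreign(Parent.mainStatus) == DepartmentAssoc.status_id)",
        viewonly=True,  # no real FK, so don't let the ORM try to write it
    )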





[sqlalchemy] non foreign-key relationship between schemas

2013-05-24 Thread YKdvd
I'm working with a MySQL setup that has, say, a Studio database (or 
schema), and multiple Production schemas (Production_1, Production_2, etc). 
 The Studio database has a seasons table, and the Production databases 
have episodes tables.  Currently, the episodes table doesn't have a 
foreign key linkage back to Studio.productions (there isn't even a 
column containing the id from Studio.productions); the original PHP 
usage didn't work this way.  For SQLAlchemy, I'd like to create a 
relationship so that if I have an engine for, say, Production_1, a 
retrieved episodes object does the many-to-one back to its productions 
parent.  Things are something like this:

class Production(Base):
    __table_args__ = {'schema': 'Studio'}
    id = Column('id', Integer, primary_key=True, nullable=False)

class Episode(Base):
    ...
    # some sort of relationship() back to Production, even though
    # there is no column to use as a foreign key

The "Creating Custom Foreign Conditions" docs describe the remote() and 
foreign() functions, and I was wondering if these could be used somehow. 
I've played around with something like

production = relationship(Production, 
    primaryjoin=remote(Production.id)==foreign(???)) 

but I can't seem to find anything that works.  I can provide some sort of 
instance method or property with the necessary id value for foreign(), but 
I'm not sure if this is acceptable, or even if the remote reference is 
correct (I've tried the string "Studio.productions.id" as well as the 
Production.id variable).  

I could probably add a production_id column to the episodes table - it 
would get filled with the same value for all records in a particular 
Production_?.episodes table.  That would let me do a normal foreign-key 
relationship (sketched below) and shouldn't break the legacy PHP access. 
But I was curious if there is a way to torture SQLAlchemy into creating 
this sort of non-column relationship?
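
The column-adding fallback would just be the plain version (sketch):

class Episode(Base):
    __tablename__ = 'episodes'
    id = Column(Integer, primary_key=True)
    # new column; the same constant value in every row of a given project schema
    production_id = Column(Integer, ForeignKey('Studio.productions.id'))
    production = relationship("Production", backref="episodes")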





[sqlalchemy] Re: non foreign-key relationship between schemas

2013-05-24 Thread YKdvd
Yeah, I was afraid of that, but I thought there might be something going on 
in the relationship layer that might do the knitting.  It would be a 
constant value (per schema/engine) I could have provided to the engine, 
metadata, mapper or whatever, but it isn't in the Episode row explicitly. 
 Oh well.

I'll take a look at the docs for whether a @property can handle writable 
collections, but I may just break down and put in that column for a 
constant-value foreign-key relationship - I wanted to try to backref it to 
the Studio.productions table so that productions loaded from a Production 
schema have a collection of episodes, if possible.

Thanks.





Re: [sqlalchemy] non foreign-key relationship between schemas

2013-05-24 Thread YKdvd


 Yup, all episodes in a Production schema would belong to one specific 
 production row.  I guess I was thinking about the relationship more in 
 terms of the automatic loading of the collection, and being able to 
 add/delete from it and have it reflected on flush.  It looks like @property 
 handles the first - I'll have to check the docs to see if a property can 
 provide an aware collection?  Good weekend research - the joys of reverse 
 designing from an existing setup!





[sqlalchemy] MutableDict.as_mutable(PickleType) blobs and mysqldump

2014-10-15 Thread YKdvd
I've got a SQLAlchemy model something like this:

class MyUser(AlchemyBase): 
    ...
    webPrefs = Column(MutableDict.as_mutable(PickleType))

which runs against a MySQL 5.5.31 (Debian) database table:

CREATE TABLE myUserTable (
    ...
    `webPrefs` blob DEFAULT NULL
) ENGINE=`InnoDB` DEFAULT CHARACTER SET utf8 COLLATE utf8_general_ci;

This works fine - the webPrefs dictionary field saves, reloads, all 
hunky-dory.  But when I use mysqldump to export this out, and import it to 
a different MySQL setup (OSX 5.1.66), SQLAlchemy throws an error loading a 
user record: "UnpicklingError: invalid load key".

I'm assuming the blob field is getting munged somewhere.  The export is 
done with:
 mysqldump  -r dumpfile.sql --default-character-set=utf8

Should mysqldump be safe for the pickled-to-blob column, escaping anything 
unsafe, or am I going to need something like the --hex-blob option on 
mysqldump?  The other possibility I see is that I'm getting the dumpfile 
at the OSX site through a Mercurial repository, and perhaps Mercurial isn't 
safe for the mysqldump encoding.  I'm trying to obtain a direct binary copy 
of the dumpfile, but it will be a day or two and I thought I'd check to see 
if anyone can confirm whether this should in theory work.
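
For reference, the --hex-blob route (a real mysqldump option) would dump 
the pickled column as hex literals, taking line-ending and charset munging 
out of play:

 mysqldump --hex-blob --default-character-set=utf8 -r dumpfile.sql mydatabase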



[sqlalchemy] Retrofitting class inheritance

2014-12-06 Thread YKdvd
I inherited a database which has two tables containing some similar 
information, plus additional differences.  In terms of the SQLAlchemy 
"Class Inheritance Hierarchies" docs, there is an engineer table and a 
manager table, but no Employee class or table.  Porting to Python and SQLA, 
initially my Company equivalent just has relationships (engineers and 
managers) to the separate classes, and there's no inheritance involved.  

This works as per the original pre-Python intent, but I'd like to treat the 
common bits polymorphically in some cases, without adding/reworking tables 
or breaking existing code.  It looks like I can map this as concrete 
inheritance?  I'd move the common fields up to an AbstractConcreteBase 
equivalent of Employee (using declared_attr if necessary), have the two 
classes inherit from it, and give them __mapper_args__ with a 
polymorphic_identity and 'concrete': True?  Existing relationships should 
still work, but I could now add an employees relationship to Company to 
get a mixed collection of Manager and Engineer objects for use when 
appropriate (roughly the sketch below)?
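
The shape I'm picturing, per the docs' concrete inheritance examples (a 
sketch; the columns are invented stand-ins for the real common fields):

from sqlalchemy.ext.declarative import AbstractConcreteBase

class Employee(AbstractConcreteBase, Base):
    pass  # common columns would move up here, via declared_attr if needed

class Manager(Employee):
    __tablename__ = 'manager'
    id = Column(Integer, primary_key=True)
    name = Column(String(50))
    __mapper_args__ = {'polymorphic_identity': 'manager', 'concrete': True}

class Engineer(Employee):
    __tablename__ = 'engineer'
    id = Column(Integer, primary_key=True)
    name = Column(String(50))
    __mapper_args__ = {'polymorphic_identity': 'engineer', 'concrete': True}

# querying Employee would then yield a mixed collection of Manager/Engineer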

Also, is there any ballpark estimate for the 1.0 release timeframe (weeks 
vs months)?  It looks like it adds additional support for declared_attr and 
relationships for AbstractConcreteBase that at first glance sounds useful.



[sqlalchemy] Polymorphic adjacency lists and many-to-many

2015-02-07 Thread YKdvd
I have a model FruitCategories which uses the bidirectional adjacency list 
pattern ("Node") from the documentation.  It is used to create a 
many-to-many connection to a model, say, Fruit, with an association table 
(fruits_cats), again vanilla like the docs.  This all works great.

I have other completely unrelated categories for unrelated entities in 
similar many-to-many setups, and I thought I could convert this into a 
generic arrangement.  The categories table would become a general 
single-table polymorphic inheritance setup, where FruitCategories might 
have polymorphic type 1, HighwayCategories (which would map with a Highway 
entity) type 2, etc.  It would seem the different category systems could 
share the same table (sketched below).  
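
i.e. something like this single-table setup (sketch; column names are 
mine):

class Category(Base):
    __tablename__ = 'categories'
    id = Column(Integer, primary_key=True)
    parent_id = Column(Integer, ForeignKey('categories.id'))
    cat_type = Column(Integer)
    children = relationship("Category",
                            backref=backref("parent", remote_side=[id]))
    __mapper_args__ = {'polymorphic_on': cat_type}

class FruitCategory(Category):
    __mapper_args__ = {'polymorphic_identity': 1}

class HighwayCategory(Category):
    __mapper_args__ = {'polymorphic_identity': 2}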

What I was wondering was whether there is a way to share the association 
table in this setup, instead of having to create one for each new 
category-entity combo?  Presumably a polymorphic association table isn't a 
thing; could a polymorphic association object be set up, where each mapping 
combo creates a subclass of Category and of the association object?  I 
have a hazy sense that declared_attr would have to be involved for one side 
of the association and the Category children definition, but my head starts 
spinning when I get this far.

Or am I going down a rabbit hole, and should I just create a new 
association table for each mapping combo?



[sqlalchemy] Re: Polymorphic adjacency lists and many-to-many

2015-02-15 Thread YKdvd
Thanks for the pointer to the Generic Associations page; somehow I managed 
to miss that.  I had the same qualms about referential integrity as the 
generic foreign key description, but was hoping I was missing something.  

Doing separate association tables isn't really a big problem, just a bit of 
a pain given my slightly weird setup.  Each project shares a master 
database but has its own identical project database (which can relate to 
the master), so everything added to the project-side structure has to be 
created in a slowly expanding number of project databases.  Someday I 
should look at collapsing the project tables into single copies with 
project discriminators, but there's never time... :)



[sqlalchemy] inspecting attribute_mapped_collection info of RelationshipProperty?

2017-02-22 Thread YKdvd
Let's say I have a model class Child, with an Integer column "birthOrder" 
and a text column "birthSign", and a class Parent which has "children = 
relationship('Child', 
collection_class=attribute_mapped_collection('birthOrder'))".  This gives 
Parent a dictionary "children", keyed by the birth order.  I could do a 
Parent query with 
.join(Child).filter(Child.birthOrder==1).filter(Child.birthSign=="Virgo") 
 to find all the firstborn Virgos.

I'd like to dynamically build up a similar query, given the text 
"children[1].birthSign" (plus knowledge that the operator and value were 
"==" and "Virgo").  I can split that string on "." to get "children[1]", 
and seeing the brackets I can assume an attribute_mapped_collection and 
split again to get "children" and the key value "1".  Using inspect(), I 
can find out that "children" is a RelationshipProperty to the Child class. 
But how can I inspect to find out what column ("birthOrder") was passed to 
the attribute_mapped_collection?  The RelationshipProperty I get from 
inspect() has "collection_class", but this holds a lambda function.  Is the 
knowledge that "birthOrder" was used lost inside that lambda, or is there 
some other way I can find it?
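
One probe I've been considering (this pokes at what I assume are internals, 
so treat it as a sketch): the factory stored in collection_class builds a 
MappedCollection, which appears to keep its key function, and the attribute 
getter seems to carry the attribute name:

from sqlalchemy import inspect

rel = inspect(Parent).relationships['children']
probe = rel.collection_class()  # call the stored factory -> a MappedCollection
keyfunc = getattr(probe, 'keyfunc', None)
key_attr = getattr(keyfunc, 'name', None)  # hoping the getter exposes its name
print key_attr  # ideally 'birthOrder'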



[sqlalchemy] Re: Unnecessary SELECTS for association object instances via composite key?

2016-09-01 Thread YKdvd
I think I may have found the issue.  If I modify your example so that 
"C.k1" is a declared_attr, the assertion on the second C instance loads the 
(100,1000) record from table "ab" again.  I originally had my "k1" in a 
mixin, but it looks like the mixin isn't involved, and it is purely the 
declared_attr.  I assume that the "k1" coming from the declared_attr 
doesn't have quite the right karma in some way when used in the 
ForeignKeyConstraint in C that makes the relationship to AB?  I'll have to 
go back and dig into the docs, but am I using declared_attr decorator 
improperly, or is there a better way to refer to "k1" in the 
ForeignKeyConstraint?  

My modified version of your test is below.  Change the "useDeclared" 
variable to False to bypass the declared_attr behaviour and recover the 
working of the original test.

from sqlalchemy import * 
from sqlalchemy.orm import * 
from sqlalchemy.ext.declarative import declarative_base, declared_attr 

useDeclared = True  # set True to see unneeded select on #2
useMixin = False  # put the declared_attr in the mixin if True

Base = declarative_base() 

class A(Base): 
    __tablename__ = 'a' 
    id = Column(Integer, primary_key=True) 

    assoc = relationship("AB") 

class B(Base): 
    __tablename__ = 'b' 
    id = Column(Integer, primary_key=True) 


class AB(Base): 
    __tablename__ = 'ab' 

    a_id = Column(ForeignKey('a.id'), primary_key=True) 
    b_id = Column(ForeignKey('b.id'), primary_key=True) 

    a = relationship("A") 
    b = relationship("B") 

class Mixin_C(object):
    if useMixin:
        @declared_attr
        def k1(cls):
            print "DECLARED K1"
            return Column(Integer)

class C(Base, Mixin_C): 
    __tablename__ = 'c' 

    id = Column(Integer, primary_key=True) 
    if not useMixin:
        if useDeclared:
            @declared_attr
            def k1(cls):
                print "declared without mixin"
                return Column(Integer)
        else:
            print "INLINE k1"
            k1 = Column(Integer)
    k2 = Column(Integer) 
    assoc = relationship("AB") 

    __table_args__ = ( 
        ForeignKeyConstraint(['k1', 'k2'], ['ab.a_id', 'ab.b_id']), {}) 

e = create_engine("sqlite://", echo=True) 
Base.metadata.create_all(e) 

s = Session(e) 
a, b = A(id=100), B(id=1000) 

ab = AB(a=a, b=b) 

c1 = C(id=1, assoc=ab) 
c2 = C(id=2, assoc=ab) 
s.add_all([a, b, ab, c1, c2]) 
s.commit() 
s.close()  # clears everything 

c1 = s.query(C).get(1) 

print "#1 EMITS LAZYLOAD:" 
assoc = c1.assoc  # note we keep a strong reference here 

c2 = s.query(C).get(2) 

print "\n#2 SHOULD NOT EMIT LAZYLOAD%s" % ("" if not useDeclared else ", but will because of declared_attr")
assert c2.assoc is assoc 



[sqlalchemy] Re: Unnecessary SELECTS for association object instances via composite key?

2016-08-28 Thread YKdvd
I'll see if I can add in the equivalent of any of the weirdness from the 
left/right classes or whatever from my setup to break this example and find 
out where I've messed up... :)
There are a couple of oddities from one of them that might be 
possibilities; it's been a couple of years or more since they were set up, 
and I was pretty new to SQLAlchemy at the time.  I know some time ago I was 
getting an "SAWarning: relationship...conflict" that might have involved 
something in this part of the schema, and perhaps I didn't fix that up 
properly and it is somehow messing up the mapping when the StatusBlock 
("C") looks to bring in the AssocInfo ("AB") records.

Thanks again!



[sqlalchemy] Unnecessary SELECTS for association object instances via composite key?

2016-08-28 Thread YKdvd
I've got a situation where SQLAlchemy (1.0.14, accessing MySQL via PyMySQL 
0.7.6) seems to be issuing unnecessary SELECTs to an Association Object 
table for instances that are already in the identity map.
Hopefully I've extracted the essentials of my setup in the pseudo-example 
below.  StatusBlock uses a composite foreign key constraint into AssocInfo, 
which is the association object; MyThings are a main object that contain 
many StatusBlock instances via the "statuses" relationship.  
Things work fine functionally, but it seems that every time a StatusBlock 
lazyloads and requires a specific AssocInfo instance, say with composite 
key (100,1000), a SELECT is issued for it, even if that AssocInfo instance 
is already in the identity map from having been loaded by another 
StatusBlock.
You can see at the comment "#TROUBLE HERE" below that when SQLAlchemy 
issues a SELECT to lazyload the StatusBlock from the second thing, it also 
issues a SELECT to get the (100,1000) AssocInfo record, even though it is 
in the identity map, having just been loaded for the first StatusBlock at 
the "FIRST LOAD" comment.  It is almost as if it thinks the (100,1000) 
AssocInfo record it needs for the second StatusBlock instance is different 
from the one it has from the first.  The problem comes when I loop through 
a couple thousand MyThing records, all indirectly accessing that same 
AssocInfo record: a couple thousand unnecessary SELECTs are issued, which 
are probably contributing to a slowness I'm trying to speed up.

Is there something goofy about the way I've got this set up, or about my 
understanding of the identity map and SELECT issuing, that I could change 
to eliminate the unnecessary SELECTs?  I'm checking to see if they might be 
a side effect of other things my full setup has (trying to model an 
existing system), and I'll see if I can set up a test system with the exact 
setup below to make sure the SELECTs are really generated in this 
simplified case.

# A1 and A2 are the left/right classes/tables that AssocInfo associates to
# (their definitions, MyAlchemyBase, and getASession() are elided pseudo-example bits)
from sqlalchemy import Column, Integer, ForeignKey, ForeignKeyConstraint
from sqlalchemy.orm import relationship
from sqlalchemy.orm.collections import attribute_mapped_collection
import logging

class AssocInfo(MyAlchemyBase):  # our Association Object
    __tablename__ = "assoc"
    key1 = Column(Integer, ForeignKey("A1.id"), primary_key=True)
    key2 = Column(Integer, ForeignKey("A2.id"), primary_key=True)
    id = Column(Integer)
    # ... more fields
    itsA1 = relationship(A1, backref="assocs")
    itsA2 = relationship(A2)


class StatusBlock(MyAlchemyBase):
    __tablename__ = "status_blocks"  # name assumed; omitted in the original post
    __table_args__ = (ForeignKeyConstraint(['k1', 'k2'],
                                           ['assoc.key1', 'assoc.key2']), {})
    id = Column(Integer, primary_key=True)
    thing_id = Column(Integer, ForeignKey('thing.id'))
    # (k1,k2) is the composite key into AssocInfo, as per __table_args__
    k1 = Column(Integer, ForeignKey("A1.id"))
    k2 = Column(Integer)
    itsAssoc = relationship(AssocInfo)

class MyThings(MyAlchemyBase):
    __tablename__ = "thing"
    id = Column(Integer, primary_key=True)
    statuses = relationship("StatusBlock",
                            collection_class=attribute_mapped_collection('k1'))
    __mapper_args__ = {'polymorphic_identity': 0}  # hopefully unrelated polymorphic stuff


logging.basicConfig()
logging.getLogger('sqlalchemy.engine').setLevel(logging.DEBUG)
session = getASession()
thing1 = session.query(MyThings).get(1)  # assume thing1 has a related
                                         # StatusBlock with k1=100 and k2=1000
print thing1.statuses[100].itsAssoc  # FIRST LOAD
# lazyload will do a SELECT on "assoc" table to get the (100,1000) instance
# confirm that the AssocInfo instance (100,1000) is in the map
print session.identity_map.items()  # yup


thing2 = session.query(MyThings).get(2)  # different thing instance, but also
                                         # has a StatusBlock with k1=100 and k2=1000
print session.identity_map.items()  # confirm (AssocInfo, (100,1000)) is still in map
print thing2.statuses[100].itsAssoc  # "TROUBLE HERE"
# this shouldn't have had to SELECT "assoc" to get the (100,1000) record, but it did
"""just like it did as part of the first load, something like:
INFO:sqlalchemy.engine.base.Engine:SELECT AssocInfo.key1 AS AssocInfo_key1...
FROM AssocInfo 
WHERE AssocInfo.key2 = %(param_1)s AND AssocInfo.key1 = %(param_2)s
INFO:sqlalchemy.engine.base.Engine:{u'param_1': 1000, u'param_2': 100}
DEBUG:sqlalchemy.engine.base.Engine:Col (u'AssocInfo_key1', u'AssocInfo_key2', ...)
DEBUG:sqlalchemy.engine.base.Engine:Row (100, 1000, ...)
"""



[sqlalchemy] change history and (auto)flushing

2017-05-25 Thread YKdvd
I'm using "marshmallow-sqlalchemy" (mmsqla), a small wrapper around the 
"marshmallow" marshalling/serializing library.  I'm using my defined mmsqla 
schemas to serialize a parent record with multi-level nested relationships 
and sending that to a webpage, which changes one of the relationship 
references, setting say the "favouriteFruit" of "Child1" from Apple to 
Orange, and posts back to my app.  The mmsqla deserialization process 
queries for the "Child1" instance, loads the Orange instance and assigns it 
to favouriteFruit (which was Apple), and puts Child1 in the parent's 
collection.  I can hack into mmsqla as it sets favouriteFruit to Orange and 
use sqlalchemy.orm.attributes.get_history() to indeed see that Orange is 
"added" and Apple is "deleted".

But by the time I get my parent instance back normally from mmsqla, the 
history is no longer available.  This seems to be because history doesn't 
survive a session flush().  I think that the default autoflush behaviour on 
the session is getting triggered by the queries for the remaining 
deserialization.  As far as I can tell, by the time mmsqla returns the 
parent instance to me, I have no way of knowing that some of the related 
stuff has been changed, and for certain types of changes I need to know 
this so I can run some additional code.  The history is already gone by the 
time the general schema post_load hooks get fired off, and there doesn't 
seem to be any supported way to provide a function to execute for a 
specific Nested schema field.

I was thinking I might try setting session.autoflush to False just before 
calling the deserializer, and setting it back when it returns, so that 
nothing is flushed.  I'll be checking with the mmsqla project as well, but 
I was curious if anyone has had any experience with anything similar to 
this, and whether having autoflush off while a complex instance is being 
put back together by the deserializer sounds like a bad idea in principle 
(or would even work)?
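
Rather than toggling the flag by hand, I'm wondering if the Session's 
no_autoflush context manager is the tidier way to wrap the deserialization 
(sketch; the schema/payload names are mine, and mmsqla's exact load API may 
differ):

with session.no_autoflush:
    # queries inside the block won't trigger a flush, so the change
    # history on favouriteFruit should survive until inspected
    parent = parent_schema.load(payload, session=session).data

# ... examine get_history() results here, then flush/commit explicitly
session.commit()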



[sqlalchemy] Re: Pool connections high-water, session query counter?

2017-09-08 Thread YKdvd
In a separate thread, Mike had a demo which kept a query count in a Session 
instance by stamping a reference to the Session in its connections.  It got 
removed on session commit/rollback:

# can also pop it on commit/rollback, but this won't 
# catch if the Session is just closed() without a commit/rollback. 
# if your app always commits/rollbacks, then this is fine. 
@event.listens_for(Session, "after_commit") 
@event.listens_for(Session, "after_rollback") 
def _c(session, transaction, connection): 
    connection.info.pop('log_session', None) 

The function claims to take three parameters including "connection", but 
the docs say the Session after_commit/after_rollback events only pass a 
single parameter, "session", and I do indeed get a "takes exactly 3 
arguments (1 given)" error when using this signature.  Am I missing 
something?  Without the connection being passed, popping its link to a 
session ("log_session") can't occur.

The same example also pops it on the Engine "checkin" event - is there any 
significant case where "checkin" wouldn't be called and the connection 
would be reused in a subsequent session with that "log_session" still in 
its info dict?  And it probably wouldn't matter, because the Session 
"after_begin" handler would re-stamp a new reference on a connection being 
reused - so the commit/rollback pops should be superfluous?



Re: [sqlalchemy] Re: Pool connections high-water, session query counter?

2017-09-05 Thread YKdvd
There are no large numbers involved in any of the relationships, and there 
shouldn't be any conditionality.  For the route I've been looking at as an 
example, it is specifying a specific instance of Thing.  My pre-route stuff 
in Flask is loading a single instance each of what might be called 
parentThing and grandparentThing, as well as Thing.  In the route itself a 
relationship to perhaps 10 ThingStatus records will fire, and each of these 
has a one-to-one relationship to a Department record, a Status record and a 
User record.  There may be a couple of other relationships, but nothing 
involves a large collection.  

I'd assume that there's on the order of a few dozen simple queries being 
emitted as the various items are lazily loaded, and I'm not sure what could 
be happening with other requests that could make the MySQL server response 
time vary so widely across the overall set of requests this represents. 

Anyway, I'll find some time this week to add some code to dump out the 
individual SQL statements emitted and their timings when the overall 
response time is large, check the pooling overflow, etc.

On Friday, September 1, 2017 at 7:57:54 PM UTC-3, Mike Bayer wrote:
>
> doesn't sound like a lock (MySQL is a very un-lockish database), 
> sounds like you have a query that occasionally returns a huge number 
> of rows, such as from a broken JOIN condition or similar.The 
> occasional nature without any errors, as well as that the operation 
> you say should only be doing reads, suggests it is data and query time 
> related. 
>
> I'd use fine-grained logging with thread/process ids to identify slow 
> requests and potentially which kinds of operations are hitting the 
> issue. 
>
>
>
> On Fri, Sep 1, 2017 at 4:43 PM, YKdvd <david...@gmail.com > 
> wrote: 
> > Turned on the slow query log, doesn't seem to be that.  The fact that it 
> > isn't consistently bad for similar hits, and also seems to need some 
> load to 
> > be exposed makes me thing either some contention while inside the 
> request in 
> > Python, including lazy loads.  I have been able to get some timings on a 
> > somewhat (but not hugely) slow hit on one of the specific routes while 
> > running the Flask app in standalone debug mode, and it seems to indicate 
> > that the time is being taken inside Python, no problem with a lack of 
> Apache 
> > or WSGI threads causing the delay, and before doing any rendering of the 
> > HTML template for return.  It certainly seems to be related to DB stuff, 
> > rather than any CPU/Python issues. 
> > 
> > I'm willing to believe that this is something that was present in our 
> old 
> > MySQL 5.5 setup at a lower frequency or shorter delays, but I'm not sure 
> > what in the new setup would have caused a sudden exacerbation of the 
> > problem.  The old 5.5 MySQL had a query cache of either 8M or 16M, the 
> new 
> > 5.7 MySQL started with a 64M query_cache.  I've now disabled the 
> > query_cache, and while overall performance seems a little better, the 
> odd 
> > spikes are still there.  All tables of interest should be InnoDB, and 
> > there's a 2GB innodb_buffer_pool_size, which should be well in excess of 
> > actual data. 
> > 
> > The specific request route I've been trying to look at should only be 
> doing 
> > reads, no writes or updates, and I wouldn't think any of the little 
> > one-to-one relationships lazyloads it might indirectly trigger would be 
> > locked by a different thread, certainly not for appreciable time.  I'm 
> going 
> > to try debugging with the pools "timeout" set to a very low value (it 
> > defaults to 30 seconds) to see if it ever hits max_overflow and fails to 
> > instantly get a connection when needed, but out WSGI processes have 10 
> > threads and the pool shouldn't hit this until 5+10=15 connections are 
> > active, and unless I'm missing something I should not be using more than 
> one 
> > connection per thread - this was why I was checking for a built-in 
> > high-water counter for pool connections. 
> > 
> > I'll keep checking my understanding of MySQL and SQLAlchemy for anything 
> > that could be holding a row locked that I'm not realizing, or something 
> > similar.  SQLAlchemy isn't throwing any exceptions related to deadlocks, 
> > connections exhaused, etc, as these would bubble up and abort the Flask 
> > request. 
> > 
> > 
> > On Friday, September 1, 2017 at 4:51:08 PM UTC-3, Jonathan Vanasco 
> wrote: 
> >> 
> >> what do your database and server logs show?  are there slow query 
> >> timeouts, deadlocks, etc?  are these happening during peak load? 10-30 
> >> seconds looks more like an issue with competing locks or available 
> >> workers, than potentially pool checkouts or lazy loading.

[sqlalchemy] Re: Pool connections high-water, session query counter?

2017-09-01 Thread YKdvd
Turned on the slow query log; doesn't seem to be that.  The fact that it 
isn't consistently bad for similar hits, and also seems to need some load 
to be exposed, makes me think there's some contention while inside the 
request in Python, including lazy loads.  I have been able to get some 
timings on a somewhat (but not hugely) slow hit on one of the specific 
routes while running the Flask app in standalone debug mode, and it seems 
to indicate that the time is being taken inside Python - no problem with a 
lack of Apache or WSGI threads causing the delay, and before doing any 
rendering of the HTML template for return.  It certainly seems to be 
related to DB stuff, rather than any CPU/Python issues.

I'm willing to believe that this is something that was present in our old 
MySQL 5.5 setup at a lower frequency or shorter delays, but I'm not sure 
what in the new setup would have caused a sudden exacerbation of the 
problem.  The old 5.5 MySQL had a query cache of either 8M or 16M, the new 
5.7 MySQL started with a 64M query_cache.  I've now disabled the 
query_cache, and while overall performance seems a little better, the odd 
spikes are still there.  All tables of interest should be InnoDB, and 
there's a 2GB innodb_buffer_pool_size, which should be well in excess of 
actual data.

The specific request route I've been trying to look at should only be doing 
reads - no writes or updates - and I wouldn't think any of the little 
one-to-one relationship lazyloads it might indirectly trigger would be 
locked by a different thread, certainly not for appreciable time.  I'm 
going to try debugging with the pool's "timeout" set to a very low value 
(it defaults to 30 seconds) to see if it ever hits max_overflow and fails 
to instantly get a connection when needed, but our WSGI processes have 10 
threads and the pool shouldn't hit this until 5+10=15 connections are 
active, and unless I'm missing something I should not be using more than 
one connection per thread - this was why I was checking for a built-in 
high-water counter for pool connections.
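
The low-timeout experiment is just engine configuration (these are real 
create_engine parameters; dbURL is my own variable):

from sqlalchemy import create_engine

engine = create_engine(
    dbURL,
    pool_size=5,      # the defaults described above
    max_overflow=10,
    pool_timeout=1,   # fail fast instead of waiting 30s for a connection
)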

I'll keep checking my understanding of MySQL and SQLAlchemy for anything 
that could be holding a row locked without my realizing it, or something 
similar.  SQLAlchemy isn't throwing any exceptions related to deadlocks, 
connections exhausted, etc., as those would bubble up and abort the Flask 
request.

On Friday, September 1, 2017 at 4:51:08 PM UTC-3, Jonathan Vanasco wrote:
>
> what do your database and server logs show?  are there slow query 
> timeouts, deadlocks, etc?  are these happening during peak load? 10-30 
> seconds looks more like an issue with competing locks or available workers, 
> than potentially pool checkouts or lazy loading.
>
> aside from db logs, i've found statsd very useful for debugging these 
> sorts of things.  we log both requests count/timing and hundreds of points 
> in our codebase as well.
>



[sqlalchemy] Pool connections high-water, session query counter?

2017-09-01 Thread YKdvd
We have an internal Flask/SQLAlchemy webapp using MySQL, served up by 
Apache and mod_wsgi.  We recently upgraded the server, going from an older 
Debian / Apache 2.2 / MySQL 5.5 / mod_wsgi 3.x stack to a current Ubuntu / 
Apache 2.4 / MySQL 5.7 / mod_wsgi 4.5.x one.  Something seems to be causing 
some of our page requests to occasionally take 10-30 seconds where the same 
or similar requests are much quicker at other times.  One of the things I'm 
looking at is lazy/eager loading.  It looks like from this thread that a 
Session has no built-in counter of the number of queries it has executed?  
I'll look at adding the event-handler suggestion to get a count.
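
Something along these lines, presumably - a minimal sketch of that 
event-handler idea, assuming "engine" is our application's Engine (the 
counter placement is my own improvisation):

from sqlalchemy import event

# tally statements as they execute; the count lives in each connection's
# info dict, so it's per-connection rather than strictly per-Session,
# but close enough for debugging
@event.listens_for(engine, "after_cursor_execute")
def count_queries(conn, cursor, statement, parameters, context, executemany):
    conn.info["query_count"] = conn.info.get("query_count", 0) + 1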

Also, I'm curious whether the QueuePools for the engines in use ever hit 
the max (pool_size + max_overflow).  There's a QueuePool.status() method 
which returns something like "Pool size: 5  Connections in pool: 1 Current 
Overflow: -4 Current Checked out connections: 0", but from a quick scan of 
the source it looks like nothing keeps track of the highest value that the 
queue's "overflow" ever reaches?  I don't think the sessions in my requests 
would use more than one connection, and the pools should be configured to 
allow more connections than the WSGI process has handler threads, but if 
I'm somehow wrong it could be that a session is hitting the pool's "wait" 
period while trying to acquire a connection.
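
If nothing built-in tracks it, I may just record a high-water mark myself - 
a rough sketch using the pool "checkout" event, again assuming "engine" 
(the high_water dict is my own scaffolding):

from sqlalchemy import event

high_water = {"n": 0}

# "checkout" fires each time a connection is drawn from the pool;
# Pool.checkedout() reports how many are currently checked out
@event.listens_for(engine, "checkout")
def track_checkout(dbapi_conn, connection_record, connection_proxy):
    n = engine.pool.checkedout()
    if n > high_water["n"]:
        high_water["n"] = n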



[sqlalchemy] Hybrid property with subquery

2018-09-24 Thread YKdvd
I have an ORM setup with a "departments_milestones" table ("dm", 
DepartmentMilestone objects), with "department_id" and "seqOrder" fields.  
Each department has a few records in this table, ordered within the 
department by "seqOrder" (so seqOrder is not unique).

Another object, "ShotsStatus", contains a collection of these 
DepartmentMilestone objects, one for each department.  I'm interested in 
getting the ShotsStatus records that are at the "last" milestone in their 
department - the one with the highest "seqOrder" value for the department.  
In SQL I can do something like this, with a subquery to select the 
departmental milestone records, reverse-sort them so the 'last' one is 
first, and LIMIT 1 to get it, which is compared to the outer record:

select * FROM shots_status ss
left outer join otherschema.departments_milestones dm
  on (dm.department_id = ss.dept_id AND dm.milestone_id = ss.status_id)
where ss.dept_id = 723
and dm.id = (SELECT dmsq.id FROM plumber.departments_milestones dmsq
             WHERE dmsq.department_id = dm.department_id
             ORDER BY dmsq.seqOrder DESC LIMIT 1)

I'd like to create a hybrid property "isLast" on the DepartmentMilestone 
object that returns True if it represents the 'last' milestone.  I'm 
having trouble figuring out what sort of SQLAlchemy select() construction 
I'd need to recreate the subquery from the SQL above.



Re: [sqlalchemy] Hybrid property with subquery

2018-09-26 Thread YKdvd
I think that got me to where I have a working hybrid.  It looks something 
like this (I broke it up into multiple statements for debugging):

@isLast.expression
def isLast(cls):
    dmsq = aliased(dm, name="dmsq")
    q = cls.id == select([dmsq.id]).where(
            dmsq.department_id == cls.department_id
        ).order_by(dmsq.seqOrder.desc()).limit(1).as_scalar()
    print q
    return q

The subquery is really just sort of a lookup, and leaving out the 
correlate() doesn't seem to change the query.  But since "cls" refers to 
the same object model ("dm"), I did have to use an alias ("dmsq") of it for 
the subquery.  I'm using MySQL, so the LIMIT works.  This seems to be doing 
the right thing with my small test case, and the printed SQL is right, so 
if it works with the full set of data and queries I'll declare victory.  
Thanks again for the help.
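
For anyone finding this later, usage then looks something like this (a 
sketch with my model names; "session" is a normal Session):

# milestones that are the last within their department
lastOnes = session.query(DepartmentMilestone).filter(
    DepartmentMilestone.isLast).all()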


On Tuesday, September 25, 2018 at 9:09:24 PM UTC-3, Mike Bayer wrote:
>
> it's a correlated select so it's along the lines of (note this is not 
> the exact SELECT you have): 
>
> @myproperty.expression 
> def myproperty(cls): 
>     return select([Dmsq.id]).where(Dmsq.id == cls.dept_id) \
>         .correlate(cls).order_by(Dmsq.seq).limit(1).as_scalar() 
>
> the LIMIT 1 thing won't work on every backend but should make it on at 
> least MySQL and Postgresql. 



[sqlalchemy] Polymorphic relationships to different classes

2019-08-29 Thread YKdvd
Suppose I have two distinct and unrelated ORM classes, call them Powers 
and Dominions (in tables "powers" and "dominions", both having an "id" 
field), and I want them both to have relationships to their own subclass 
of a single-table polymorphic setup, call it Things.


class AbstractThing(...):
    __tablename__ = "things"

    # polymorphic key for various flavours of things
    polyType = Column(Integer, index=True)
    # base class is never instantiated
    __mapper_args__ = {'polymorphic_on': polyType, 'polymorphic_identity': -1}
    id = Column(Integer, primary_key=True)

    # trying to relate this to completely different classes in the
    # subclasses below
    owner_id = Column(Integer)


class PowerThing(AbstractThing):
    __mapper_args__ = {'polymorphic_identity': 1}
    ForeignKeyConstraint(['owner_id'], ['powers.id'])
    itsPower = relationship("Powers", backref="itsThings")


class DominionThing(AbstractThing):
    __mapper_args__ = {'polymorphic_identity': 2}
    ForeignKeyConstraint(['owner_id'], ['dominions.id'])
    itsDominion = relationship("Dominions", backref="itsThings")


Is it possible (even if inadvisable) to do something like this somehow?  The 
relationship() lines don't find a join condition, and adding a 
primaryjoin="PowerThing.owner_id==Powers.id" clause doesn't seem to do it.  
Is there some combination of foreign_keys and join conditions that would let 
"owner_id" relate to different tables in different polymorphic subclasses?  
I've got an existing setup like this that I'd like to map to SQLAlchemy for 
now, rather than redo.
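
For concreteness, the direction I've been poking at looks like this - a 
sketch using SQLAlchemy's foreign() annotation in the primaryjoin, which I 
gather can stand in when there's no real FOREIGN KEY for the relationship 
to discover (unverified against the real schema):

from sqlalchemy.orm import foreign, relationship

class PowerThing(AbstractThing):
    __mapper_args__ = {'polymorphic_identity': 1}

    # foreign() marks owner_id as the foreign side of the join, since no
    # actual ForeignKey ties "things.owner_id" to "powers.id"
    itsPower = relationship(
        "Powers",
        primaryjoin="foreign(PowerThing.owner_id) == Powers.id",
        backref="itsThings",
    )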


[sqlalchemy] Marshmallow speed and lazy/eager loading

2019-09-21 Thread YKdvd
I'm using the "marshmallow" serialization library to dump out SQLAlchemy 
objects, and I'm trying to track down some slowness issues.  The objects 
have various relationships (using marshmallow's Nested fields in the mm 
schema), and some of those have their own relationships, etc.  As an 
example with round numbers, I might have 50 objects.  The SQLAlchemy query 
might take 0.01 seconds with lazy loading, and 0.1 seconds with 
eagerloading options on the query for some obvious relationships.

The marshmallow dump might then take 5 seconds.  But if I immediately make 
the dump call again, it might take only 0.5 seconds - even with a new 
instance of the marshmallow schema, which would seem to indicate it isn't 
purely poor performance by marshmallow.  I've been assuming the issue is 
SQLAlchemy lazyload triggers being fired off as marshmallow accesses the 
various relationships of the 50 objects while dumping them - on the second 
dump, all the triggers have been sprung, and there's no SQLAlchemy overhead 
as marshmallow accesses the objects.

I've tried adding some eagerloading options (joinedload or subqueryload) 
for some of the relationships, and this does seem to slightly improve 
things for the first dump, but nothing like the half-second time the second 
one takes.  Is there any way to configure a query to eagerload all possible 
relationships instead of specifying them individually, just so I know I 
haven't missed any?  And is there any other one-time SQLAlchemy overhead 
besides lazyloading that might cause such a difference between 
marshmallow's first and second use of the objects?  I'm currently using 
1.1.18 (until recently we had to support Python 2.6), but I briefly tried 
swapping in the current 1.3.x and the times seemed similar.
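
For the query side, the docs seem to describe a wildcard loader option that 
may be what I'm after - a sketch, with MyModel standing in for my actual 
class (as I read it, '*' applies to all relationships at the top level but 
doesn't cascade into nested ones):

from sqlalchemy.orm import joinedload

# joinedload('*') requests joined eager loading for every relationship
# on the queried entity, without naming them individually
objs = session.query(MyModel).options(joinedload('*')).all()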




[sqlalchemy] Re: Hybrid property "can't set attribute"

2019-12-19 Thread YKdvd
Ah, thanks very much, that's fixed it.  I must have missed that in the docs 
- I think this started out as a Python property and they later added the 
hybrid decorator.
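
For anyone else who hits this: if I understand the change, the setter now 
has to reuse the property's own name, matching plain-Python property 
conventions - a sketch of the corrected version:

@hybrid_property
def email(self):
    return self._email.replace("olddomain.com", "newdomain.com")

@email.setter
def email(self, val):  # named "email", not "email_setter"
    self._email = val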




[sqlalchemy] Hybrid property "can't set attribute"

2019-12-19 Thread YKdvd
We have a "Users" model that uses a hybrid property to temporarily wrap the 
"email" column.  The database column (MySQL 5.7) is "email", but it is 
mapped by the ORM as "_email", with an "email" hybrid property to access 
and set it:

_email = Column(u'email', String(255))
...

@hybrid_property
def email(self):
    return self._email.replace("olddomain.com", "newdomain.com")

@email.setter
def email_setter(self, val):
    self._email = val


In 1.1.18, something like "self.email = someEmailAddress" works fine.  We're 
testing an upgrade to 1.3.11, and that now throws an "AttributeError: can't 
set attribute" from hybrid.py's __set__().

That failure is at a simple check, "if self.fset is None", so it's almost as 
if the decorator never stored the setter function?  I'm digging into the 
hybrid docs, and it seems a pretty innocuous setter, but perhaps there is 
something about it that 1.3 is being stricter about?  I don't see any 
SAWarnings at startup that might apply.  I changed the getter to simply 
return self._email in case that was the problem, but that didn't help.



Index DDL not emitted

2014-06-02 Thread YKdvd
class Dummy(AlchemyBase):
    __tablename__ = "dummy"
    id = Column(Integer, primary_key=True)
    f1 = Column(Integer, index=True)

When I create using the above on MySQL, the DDL is:

CREATE TABLE dummy (
id INTEGER NOT NULL AUTO_INCREMENT, 
f1 INTEGER, 
PRIMARY KEY (id)
)

Shouldn't there be an INDEX clause in the DDL to match the index=True for 
Column f1?  (This is on MySQL 5.5/Debian/SQLA 0.9.4/Python2.7).
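
If I'm reading the docs right, SQLAlchemy emits indexes as separate CREATE 
INDEX statements after the CREATE TABLE rather than as an inline INDEX 
clause, so I may simply be missing it in the output - with the default 
naming convention I'd expect something like:

CREATE INDEX ix_dummy_f1 ON dummy (f1)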
