Re: [sqlalchemy] Detached instances and lazy-loaded attributes

2013-01-24 Thread Michael Bayer

On Jan 24, 2013, at 1:54 PM, YKdvd wrote:

 If I have an instance MyInstance with an attribute itsLazyAttribute from 
 a relationship, where everything is defined and queried with lazy loading, 
 then the attribute is lazily loaded, and only filled in when I access it 
 (perhaps, x=len(MyInstance.itsLazyAttribute)).   But if not accessed before 
 the instance is detached (session closed, or expunge called) then the 
 attribute will be unavailable, and trigger a DetachedInstanceError (not 
 bound, lazy load operation cannot proceed).
 
 All fine, but is there any way to automatically trip all the lazy triggers 
 before detaching an instance?  Something like a 
 session.fullyLoad(MyInstance) type of call?  I couldn't seem to see 
 anything that would do this.  I can redo things to do an eager query, but I 
 was wondering if I was missing something?

scalar, column-bound attributes will all load at once if you touch one of them. 
 But a relationship()-bound attribute is always an independent entity, because 
loading that relationship requires a whole SQL query for the one attribute on 
its own.

the strategies for this kind of thing usually go in this order:

1. don't try to use detached objects.  Philosophically, an object allows 
detachment only so that it can be stowed away somewhere temporarily, or moved 
between two Sessions.  There aren't a lot of use cases IMO that really require 
detached objects to be usable.  In my view it's kind of like the object is 
asleep.

2. try to load the object as fully as you need at load time, using eager 
loading.  This can save on doing lots of individual SELECT statements compared 
to lazy loading; see the sketch after the code examples below.

3. to actually trigger every lazy load before detaching, you'd need to employ 
some technique of your own; in 0.8 this is easy:

from sqlalchemy import inspect

# touch each relationship-bound attribute by name so it loads now
insp = inspect(myobject)
for key in insp.mapper.relationships.keys():
    getattr(myobject, key)

in 0.7 you'd need to iterate like this:

from sqlalchemy.orm import object_mapper
from sqlalchemy.orm.properties import RelationshipProperty

# same idea: touch each relationship attribute via its key
mapper = object_mapper(myobject)
for rel in mapper.iterate_properties:
    if isinstance(rel, RelationshipProperty):
        getattr(myobject, rel.key)
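
to illustrate #2 above, here's a minimal sketch of eager loading at query time - 
the Parent/Child classes and the "children" relationship are just placeholders, 
not from your mapping:

from sqlalchemy.orm import joinedload

# the "children" collection is populated as part of the same query,
# so it's usable even after the Session is closed
parents = session.query(Parent).\
    options(joinedload(Parent.children)).\
    all()
session.close()
len(parents[0].children)   # already loaded, no DetachedInstanceError

subqueryload() works the same way and is often a better choice for large 
collections.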





[sqlalchemy] MySQL drivers

2013-01-24 Thread YKdvd
My main use of SQLAlchemy is in the Python incorporated into Autodesk's 
Maya graphics program, using the standard MySQL-Python driver. 
Unfortunately, for the most recent Maya 2013 version, Autodesk has compiled 
its Python (2.6.4) with Visual Studio 2010 on Windows, instead of the 
VS2008 used for the official Python distributions.  This can create issues (two 
different C runtimes) for packages with C bits (like MySQL-Python) compiled 
with 2008.  I'm looking into compiling the driver myself, but I'm also 
looking at dropping in an alternate driver temporarily.  

The pymysql driver is pure Python and supposedly a drop-in replacement for 
MySQL-Python.  I was wondering if anyone has had any real-world experience 
swapping it in, and knows of any gotchas?  Our usage doesn't require 
super-duper performance and isn't doing anything horribly exotic.  There is 
one bit that still imports MySQLdb directly and does raw SQL stuff instead 
of going through SQLA, which is why a temporary drop-in replacement without 
a lot of side effects would be appealing. 
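
As far as I understand it, SQLAlchemy picks the DBAPI from the dialect prefix 
of the URL, so the swap should mostly be a connection-string change, and 
PyMySQL can apparently register itself under the MySQLdb name for the code 
that imports the driver directly.  A minimal sketch of what I have in mind 
(placeholder URL and credentials):

from sqlalchemy import create_engine

# MySQL-Python (MySQLdb), the C driver
engine = create_engine("mysql+mysqldb://user:passwd@host/dbname")

# PyMySQL, pure Python - only the dialect prefix changes
engine = create_engine("mysql+pymysql://user:passwd@host/dbname")

# for the bit that still imports MySQLdb directly:
import pymysql
pymysql.install_as_MySQLdb()
import MySQLdb  # now resolves to PyMySQL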




[sqlalchemy] Re: Detached instances and lazy-loaded attributes

2013-01-24 Thread YKdvd
I was going to look at some sort of walk-and-trigger approach, but SQLA is 
elegant enough that I suspected I was either missing a function that already 
did this or fighting against the design philosophy.  Sounds like the 
latter... :)

My background gives "detached" a less ephemeral flavour, especially in a 
long-running app rather than a web app - this was sort of configuration 
stuff that was loaded and then referenced read-only.  I'm sort of recreating a 
legacy system and designing on the fly, and hadn't quite resolved whether 
the loading session was going to remain open, and the bit that loaded the 
stuff didn't have a lazy/eager option.  I'll have no problem in this case 
rejiggering to eager-load at query time, or even making the relationships 
eager themselves (sketch below).  Thanks for the feedback.
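
For reference, this is roughly what I mean by making the relationships eager 
themselves - a minimal sketch with made-up Config/ConfigItem names, not my 
actual schema:

from sqlalchemy import Column, ForeignKey, Integer
from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy.orm import relationship

Base = declarative_base()

class ConfigItem(Base):
    __tablename__ = 'config_item'
    id = Column(Integer, primary_key=True)
    config_id = Column(Integer, ForeignKey('config.id'))

class Config(Base):
    __tablename__ = 'config'
    id = Column(Integer, primary_key=True)
    # lazy='joined' (or 'subquery') makes the collection load eagerly on
    # every query, so it is already populated if the instance gets detached
    items = relationship(ConfigItem, lazy='joined')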

On Thursday, January 24, 2013 2:54:14 PM UTC-4, YKdvd wrote:

 If I have an instance MyInstance with an attribute itsLazyAttribute 
 from a relationship, where everything is defined and queried with lazy 
 loading, then the attribute is lazily loaded, and only filled in when I 
 access it (perhaps, x=len(MyInstance.itsLazyAttribute)).   But if not 
 accessed before the instance is detached (session closed, or expunge 
 called) then the attribute will be unavailable, and trigger a 
 DetachedInstanceError (not bound, lazy load operation cannot proceed).

 All fine, but is there any way to automatically trip all the lazy triggers 
 before detaching an instance?  Something like a 
 session.fullyLoad(MyInstance) type of call?  I couldn't seem to see 
 anything that would do this.  I can redo things to do an eager query, but I 
 was wondering if I was missing something?





[sqlalchemy] Re: Introducing SQLAlchemy-ORM-tree: a generic API for hierarchical data

2013-01-24 Thread Pirate Fibonacci

Mark,

I'm trying to run your example above with SQLAlchemy-ORM-tree-0.1.2 
and SQLAlchemy-0.8.0b2, and I get the following traceback.  Any suggestions?  
Thanks.

Traceback (most recent call last):
  File "./t.py", line 28, in <module>
    Node.tree.register()
  File "/nfs/disks/rel/SQLAlchemy-ORM-tree-0.1.2/sqlalchemy_tree/manager/unmapped.py", line 216, in __get__
    return self._get_class_manager(root_node_class)
  File "/nfs/disks/rel/SQLAlchemy-ORM-tree-0.1.2/sqlalchemy_tree/manager/unmapped.py", line 250, in _get_class_manager
    manager = self.class_manager)
  File "/nfs/disks/rel/SQLAlchemy-ORM-tree-0.1.2/sqlalchemy_tree/options.py", line 233, in class_mapped
    self.parent_field_name = self._get_parent_field_name()
  File "/nfs/disks/rel/SQLAlchemy-ORM-tree-0.1.2/sqlalchemy_tree/options.py", line 260, in _get_parent_field_name
    u"could not auto-detect parent field name; tree extension will not " \
ValueError: could not auto-detect parent field name; tree extension will 
not work property without a parent relationship defined




[sqlalchemy] Subqueries as a column

2013-01-24 Thread Dave Pedu
Hi all,

I am trying to run a query like this one, using sqlalchemy:

SELECT 
t.`ForumID`,
( SELECT `ID` FROM `posts` p WHERE `ThreadID` = t.`ID` ORDER BY p.`Date` 
DESC LIMIT 1 ) as `LastPost`
FROM `threads` t
WHERE t.`Deleted` = 0


I am unsure how to achieve the subquery that comes out as 'LastPost'.

Closest I have gotten is this:

lastPostQ = s.query(db.Post.ID).filter(db.Post.ThreadID==db.Thread.ID).\
    order_by(desc(db.Post.Date)).limit(1).label("LastPost")
q = s.query(
    db.Thread.ForumID,
    lastPostQ
).\
join(db.Forum, db.Forum.ID==db.Thread.ForumID).\
join(db.ForumPermission, db.ForumPermission.ForumID==db.Thread.ForumID).\
filter(db.Thread.Deleted==0)


Which generates the following SQL:

SELECT
threads.ForumID AS threads_ForumID,
(SELECT posts.ID FROM posts, threads WHERE posts.ThreadID = 
threads.ID ORDER BY posts.Date DESC LIMIT :param_1) AS LastPost 
FROM
threads 
WHERE
threads.Deleted = :Deleted_1


Which is close, but the subquery's SELECT statement is selecting FROM 
"posts, threads" where it should only be FROM "posts".

I don't know how to proceed from here.  A member on IRC suggested 
using correlate() but I am unsure how or where to use it, as the 
documentation was unclear.

Any help would be appreciated!





Re: [sqlalchemy] Subqueries as a column

2013-01-24 Thread Michael Bayer

On Jan 24, 2013, at 11:03 PM, Dave Pedu wrote:

 Hi all,
 
 I am trying to run a query like this one, using sqlalchemy:
 
 SELECT 
   t.`ForumID`,
   ( SELECT `ID` FROM `posts` p WHERE `ThreadID` = t.`ID` ORDER BY 
 p.`Date` DESC LIMIT 1 ) as `LastPost`
 FROM `threads` t
   WHERE t.`Deleted` = 0
 
 I am unsure how to achieve the subquery that comes out as 'LastPost'.
 
 Closest I have gotten is this:
 
 lastPostQ = 
 s.query(db.Post.ID).filter(db.Post.ThreadID==db.Thread.ID).order_by(desc(db.Post.Date)).limit(1).label(LastPost)
 q = s.query(
   db.Thread.ForumID,
   lastPostQ
 ).\
 join(db.Forum, db.Forum.ID==db.Thread.ForumID).\
 join(db.ForumPermission, db.ForumPermission.ForumID==db.Thread.ForumID).\
 filter(db.Thread.Deleted==0)
 
 Which generates the following SQL:
 
 SELECT
   threads.ForumID AS threads_ForumID,
   (SELECT posts.ID FROM posts, threads WHERE posts.ThreadID = 
 threads.ID ORDER BY posts.Date DESC LIMIT :param_1) AS LastPost 
 FROM
   threads 
 WHERE
   threads.Deleted = :Deleted_1
 
 Which is close, but the subquery's SELECT statement is selecting FROM "posts, 
 threads" where it should only be FROM "posts".
 
 I don't know how to proceed from here.  A member on IRC suggested using 
 correlate() but I am unsure how or where to use it, as the documentation 
 was unclear.

if you're on 0.7 then the correlation used by query() is not automatic; you 
need to call query = query.correlate(Thread) on that subquery so it knows that 
Thread, as a FROM object, will be supplied by an enclosing query.  In 0.8 the 
correlation is automatic by default with Query, the same way it works with a 
select() construct.
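
applied to the query from your post, that's roughly the following (same 
db.Post / db.Thread models and session s as before):

lastPostQ = s.query(db.Post.ID).\
    filter(db.Post.ThreadID == db.Thread.ID).\
    correlate(db.Thread).\
    order_by(db.Post.Date.desc()).\
    limit(1).\
    label("LastPost")

q = s.query(db.Thread.ForumID, lastPostQ).\
    filter(db.Thread.Deleted == 0)

the subquery should then render with only "posts" in its FROM list, referring 
to threads.ID from the enclosing query.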






[sqlalchemy] Custom Dialect - recommendations needed for handling of Sequences/lastrowid

2013-01-24 Thread jank
Hello,
I have implemented a dialect for a new database (EXASol).  I have not done 
tests using the ORM layer of SA so far, as I am primarily interested in the 
Core layer.
So far things have worked out pretty well; DDL and DML support are basically 
running.
The EXASol DB does not offer Sequences, but it does have autoincrement columns 
that are very similar to Postgres SERIAL types.

Example DDL statement:

CREATE TABLE test_exadialect.test (
id INTEGER IDENTITY 10 NOT NULL, 
name VARCHAR(40) NOT NULL, 
age INTEGER, 
PRIMARY KEY (id)
)

IDENTITY is the keyword that adds autoincrement behavior to an integer-like 
column; 10 is the initial value of the autoincrement.

This DDL statement is generated based on this table metadata:

Table('test', self.metadata,
  Column('id', Integer, Sequence('test.id.seq', start=10, optional=True),
         primary_key=True),
  Column('name', String(40), nullable=False),
  Column('age', Integer)
)

Looking at the postgres dialect implementation, I came to the conclusion 
that using Sequences is the only way to get the desired behavior. I have 
also implemented the get_lastrowid() method of the ExecutionContext class. 
This all works as expected, albeit at the cost of an additional round trip 
for each single INSERT, as the DB in question does not support RETURNING.

First question: is this the intended way to implement autoincrement 
behavior in the absence of support for explicit sequence objects in the DB?

Now to the problem that I could not solve so far.  I want to make the cost 
of fetching the last autoincrement id upon insert/update optional.  In our 
use case we are fine with the DB determining the next id value without us 
knowing the value upon insert.  I tried to fiddle around with various 
configuration switches, namely:

   - postfetch_lastrowid
   - autoincrement

My first attempt was to set the postfetch_lastrowid switch to False. 
However, this switch seems to have wider implications than just switching 
off postfetching of the lastrowid.  With the switch set to False, the 
SQLCompiler generates a different INSERT statement:

for:

test_tab.insert().values(name='foo', age=12).execute()

I get the following.
with postfetch_lastrowid=True:
INSERT INTO test_exadialect.test (name, age) VALUES ('foo', 12)
with postfetch_lastrowid=False:
INSERT INTO test_exadialect.test (id, name, age) VALUES (NULL, 'foo', 12)
The latter statement is obviously rejected by the DB, as NULL is not 
allowed (and not desired) for the primary key column.

So far my understanding of SA is limited, but I assume that setting 
postfetch_lastrowid to False is interpreted by SA as "this DB does not 
support sequences/autoincrement". 

I then tried setting autoincrement=False on the id column, hoping it would 
prevent the SQLCompiler from forcing it into the INSERT statement:

Column('id', Integer, Sequence('test.id.seq', start=10, optional=True),
       primary_key=True, autoincrement=False),

Running and debugging my test case, I found that the column object still 
had the value True for its autoincrement property.  I assume that the 
combination of Sequence and primary_key somehow overrides the value to True, 
but I am lost in the SA code base.

Second question: Can someone give me a hint or pointer on where to look? Am 
I doing something wrong or trying to misuse the autoincrement flag?

All I want to achieve is to make the fetching of the lastrowid optional. Do 
I have to implement my own dialect-specific flag? If so, what is the 
recommended way of doing this?
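
In case it helps frame that last question, this is roughly the kind of thing I 
have in mind - the class name, flag name and URL below are hypothetical, not 
an existing EXASol dialect API.  As far as I understand, keyword arguments 
that create_engine() does not consume itself are passed on to the dialect's 
constructor:

from sqlalchemy import create_engine
from sqlalchemy.engine import default

class EXASolDialect(default.DefaultDialect):
    name = 'exasol'

    def __init__(self, lastrowid_fetch=True, **kw):
        default.DefaultDialect.__init__(self, **kw)
        # hypothetical flag consumed by the dialect; the execution context
        # would check it before doing the extra round trip for the id
        self.lastrowid_fetch = lastrowid_fetch

engine = create_engine('exasol://user:pw@host/db', lastrowid_fetch=False)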

Thanks for your time and any hint/advice,

Jan 
