[sqlalchemy] Use of table aliases

2010-12-15 Thread neurino
Hello, I have 2 tables, data and acquisitions:
- each Acquisition has many Data
- each Data comes from a different sensor
- a single sensor is identified by the pair Acquisition.id_centr,
  Data.id_meas

Now I need a query with one column for each sensor and a row for each
Acquisition.datetime.

This is how I get it (in case of two sensors) with SQL:

q = curs.execute("""
    SELECT a.datetime, d1.value, d2.value
    FROM acquisitions AS a
    LEFT JOIN data AS d1
        ON a.id_acq=d1.id_acq
        AND a.id_centr=159
        AND d1.id_meas=1501
    LEFT JOIN data AS d2
        ON a.id_acq=d2.id_acq
        AND a.id_centr=320
        AND d2.id_meas=1551
""")
for n, row in enumerate(q): print n, row
0 (u'2010-09-02 12:05:00', 23.98, 25.67)
1 (u'2010-09-02 12:10:00', 23.77, 25.57)
2 (u'2010-09-02 12:15:00', 23.96, 25.57)
3 (u'2010-09-02 12:20:00', 24.78, 25.94)
4 (u'2010-09-02 12:25:00', 25.48, 26.27)
5 (u'2010-09-02 12:30:00', 25.91, 26.46)
6 (u'2010-09-02 12:35:00', 26.14, 26.62)
7 (u'2010-09-02 12:40:00', 26.32, 26.73)
8 (u'2010-09-02 12:45:00', 26.44, 26.80)
9 (u'2010-09-02 12:50:00', 26.55, 26.87)
10 (u'2010-09-02 12:55:00', 26.62, 26.92)
11 (u'2010-09-02 13:00:00', 26.67, 26.94)
12 (u'2010-09-02 13:05:00', 26.69, 26.94)
13 (u'2010-09-02 13:10:00', 26.71, 26.96)
14 (u'2010-09-02 13:15:00', 26.73, 26.98)

But I can't get the same result with SQLAlchemy; here's my mapping:

data = Table('data', metadata,
    Column('id_data', Integer, primary_key=True),
    Column('id_meas', Integer, nullable=False),
    Column('id_acq', Integer, ForeignKey('acquisitions.id_acq'),
           nullable=False),
    Column('value', Float, nullable=False),
)

acquisitions = Table('acquisitions', metadata,
    Column('id_acq', Integer, primary_key=True),
    Column('id_centr', Integer, nullable=False),
    Column('datetime', DateTime, nullable=False),
    # acquisitions with same id_centr and datetime are duplicates
    UniqueConstraint('id_centr', 'datetime'),
)

orm.mapper(Data, data, properties={
    'acquisitions': orm.relationship(Acquisition, backref='data'),
})
orm.mapper(Acquisition, acquisitions)

Any advice?

Thanks for your support
neurino




Re: [sqlalchemy] Re: Modeling a Tree-looking structure in SqlAlchemy.

2010-12-15 Thread Hector Blanco
Thank you all... As soon as I have the webserver where I'm going to
use that structure up and running, I'll try it and I'll let you
know...

2010/12/13 Laurent Rahuel laurent.rah...@gmail.com:
 Hello,

 You should also take a look at http://pypi.python.org/pypi/sqlamp/0.5.2, an 
 implementation of Materialized Path for SQLAlchemy.

 Regards,

 Laurent

 On Dec 13, 2010, at 23:30, Russell Warren wrote:

 Sorry, I just saw I messed up the nested sets SQLA example link.  Here
 is the right one:
 http://www.sqlalchemy.org/trac/browser/examples/nested_sets/nested_sets.py









[sqlalchemy] Re: Deletion order during flush is not correct.

2010-12-15 Thread Will
An update: this problem does occur with SQLite; it's just that SQLite
doesn't enforce the foreign key, so it doesn't throw an exception.

# output that deletes in the proper order
2010-12-15 14:33:52,197 INFO sqlalchemy.engine.base.Engine.0x...d050
BEGIN (implicit)
2010-12-15 14:33:52,197 INFO sqlalchemy.engine.base.Engine.0x...d050
INSERT INTO parents DEFAULT VALUES
2010-12-15 14:33:52,197 INFO sqlalchemy.engine.base.Engine.0x...d050
()
2010-12-15 14:33:52,198 INFO sqlalchemy.engine.base.Engine.0x...d050
INSERT INTO children (parent_id) VALUES (?)
2010-12-15 14:33:52,198 INFO sqlalchemy.engine.base.Engine.0x...d050
(1,)
2010-12-15 14:33:52,198 INFO sqlalchemy.engine.base.Engine.0x...d050
INSERT INTO children (parent_id) VALUES (?)
2010-12-15 14:33:52,198 INFO sqlalchemy.engine.base.Engine.0x...d050
(1,)
2010-12-15 14:33:52,198 INFO sqlalchemy.engine.base.Engine.0x...d050
COMMIT
2010-12-15 14:33:52,199 INFO sqlalchemy.engine.base.Engine.0x...d050
BEGIN (implicit)
2010-12-15 14:33:52,200 INFO sqlalchemy.engine.base.Engine.0x...d050
SELECT children.id AS children_id, children.parent_id AS
children_parent_id
FROM children
WHERE children.id = ?
2010-12-15 14:33:52,200 INFO sqlalchemy.engine.base.Engine.0x...d050
(1,)
2010-12-15 14:33:52,200 INFO sqlalchemy.engine.base.Engine.0x...d050
SELECT children.id AS children_id, children.parent_id AS
children_parent_id
FROM children
WHERE children.id = ?
2010-12-15 14:33:52,201 INFO sqlalchemy.engine.base.Engine.0x...d050
(2,)
2010-12-15 14:33:52,201 INFO sqlalchemy.engine.base.Engine.0x...d050
DELETE FROM children WHERE children.id = ?
2010-12-15 14:33:52,201 INFO sqlalchemy.engine.base.Engine.0x...d050
((1,), (2,))
2010-12-15 14:33:52,202 INFO sqlalchemy.engine.base.Engine.0x...d050
SELECT parents.id AS parents_id
FROM parents
WHERE parents.id = ?
2010-12-15 14:33:52,202 INFO sqlalchemy.engine.base.Engine.0x...d050
(1,)
2010-12-15 14:33:52,203 INFO sqlalchemy.engine.base.Engine.0x...d050
DELETE FROM parents WHERE parents.id = ?
2010-12-15 14:33:52,203 INFO sqlalchemy.engine.base.Engine.0x...d050
(1,)
2010-12-15 14:33:52,203 INFO sqlalchemy.engine.base.Engine.0x...d050
COMMIT



# output that deletes in the wrong order
2010-12-15 14:33:56,691 INFO sqlalchemy.engine.base.Engine.0x...6050
BEGIN (implicit)
2010-12-15 14:33:56,692 INFO sqlalchemy.engine.base.Engine.0x...6050
INSERT INTO parents DEFAULT VALUES
2010-12-15 14:33:56,692 INFO sqlalchemy.engine.base.Engine.0x...6050
()
2010-12-15 14:33:56,693 INFO sqlalchemy.engine.base.Engine.0x...6050
INSERT INTO children (parent_id) VALUES (?)
2010-12-15 14:33:56,693 INFO sqlalchemy.engine.base.Engine.0x...6050
(1,)
2010-12-15 14:33:56,693 INFO sqlalchemy.engine.base.Engine.0x...6050
INSERT INTO children (parent_id) VALUES (?)
2010-12-15 14:33:56,693 INFO sqlalchemy.engine.base.Engine.0x...6050
(1,)
2010-12-15 14:33:56,693 INFO sqlalchemy.engine.base.Engine.0x...6050
COMMIT
2010-12-15 14:33:56,694 INFO sqlalchemy.engine.base.Engine.0x...6050
BEGIN (implicit)
2010-12-15 14:33:56,695 INFO sqlalchemy.engine.base.Engine.0x...6050
SELECT parents.id AS parents_id
FROM parents
WHERE parents.id = ?
2010-12-15 14:33:56,695 INFO sqlalchemy.engine.base.Engine.0x...6050
(1,)
2010-12-15 14:33:56,695 INFO sqlalchemy.engine.base.Engine.0x...6050
DELETE FROM parents WHERE parents.id = ?
2010-12-15 14:33:56,695 INFO sqlalchemy.engine.base.Engine.0x...6050
(1,)
2010-12-15 14:33:56,696 INFO sqlalchemy.engine.base.Engine.0x...6050
SELECT children.id AS children_id, children.parent_id AS
children_parent_id
FROM children
WHERE children.id = ?
2010-12-15 14:33:56,696 INFO sqlalchemy.engine.base.Engine.0x...6050
(1,)
2010-12-15 14:33:56,697 INFO sqlalchemy.engine.base.Engine.0x...6050
SELECT children.id AS children_id, children.parent_id AS
children_parent_id
FROM children
WHERE children.id = ?
2010-12-15 14:33:56,697 INFO sqlalchemy.engine.base.Engine.0x...6050
(2,)
2010-12-15 14:33:56,697 INFO sqlalchemy.engine.base.Engine.0x...6050
DELETE FROM children WHERE children.id = ?
2010-12-15 14:33:56,697 INFO sqlalchemy.engine.base.Engine.0x...6050
((1,), (2,))
2010-12-15 14:33:56,697 INFO sqlalchemy.engine.base.Engine.0x...6050
COMMIT

On Dec 15, 2:17 pm, Will willman...@gmail.com wrote:
 
 Hello,

 I've recently been having a problem with SQLAlchemy not flushing
 deletes in the proper order.  I've created a simple example for the
 problem that has been occurring.  I tried to run this using SQLite and
 it doesn't have any problems; it only happens with PostgreSQL.

 One thing of note is that if there is only one Child it doesn't seem
 to have a problem, only when there are multiple children.  Not sure if
 that makes a difference in the SQLAlchemy code.
 

 from sqlalchemy import create_engine
 from sqlalchemy.ext.declarative import declarative_base
 from sqlalchemy.orm import scoped_session, sessionmaker, relationship
 from sqlalchemy.schema import Column, ForeignKey
 from sqlalchemy.types import Integer

 #engine = create_engine('sqlite:///')
 engine = 

Re: [sqlalchemy] Deletion order during flush is not correct.

2010-12-15 Thread Michael Bayer
This is an interesting edge case and I can probably ensure that the dependency 
between Parent/Child is present in the unit of work even if there is no known 
linkage at the Child.parent level for the objects actually present - ticket 
#2002 is added for this.

In the meantime, the uow needs to be aware of the linkage between Parent and 
Child when the flush occurs.  Adding a "children" backref to the parent 
relationship will do it, or ensuring that child.parent is accessed before 
emitting the flush will do it.  The usual way this kind of delete is performed 
is that the "delete" cascade is added to the "children" backref, then the 
Parent is deleted alone, the deletes cascading to the Child objects naturally.

But this is a fun bug and I'll probably have a fix very soon, perhaps in 10 
minutes or maybe not.
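
A minimal sketch of the cascade approach described above, adapted from the
Parent/Child models in Will's script (the backref name and cascade setting
here are illustrative, not taken verbatim from the thread):

from sqlalchemy import create_engine, Column, Integer, ForeignKey
from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy.orm import relationship, backref, sessionmaker

Model = declarative_base()

class Parent(Model):
    __tablename__ = 'parents'
    id = Column(Integer, primary_key=True)

class Child(Model):
    __tablename__ = 'children'
    id = Column(Integer, primary_key=True)
    parent_id = Column(Integer, ForeignKey('parents.id'), nullable=False)
    # The backref gives Parent a "children" collection; the delete cascade
    # means deleting a Parent also deletes its Child rows, children first.
    parent = relationship('Parent',
                          backref=backref('children',
                                          cascade='all, delete-orphan'))

engine = create_engine('sqlite:///')          # in-memory DB for the sketch
Model.metadata.create_all(engine)
session = sessionmaker(bind=engine)()

parent = Parent()
session.add_all([Child(parent=parent), Child(parent=parent)])
session.commit()

session.delete(parent)    # cascades: children are deleted before the parent
session.commit()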




On Dec 15, 2010, at 2:17 PM, Will wrote:

 
 Hello,
 
 I've recently been having a problem with SQLAlchemy not flushing
 deletes in the proper order.  I've created a simple example for the
 problem that has been occurring.  I tried to run this using SQLite and
 it doesn't have any problems; it only happens with PostgreSQL.

 One thing of note is that if there is only one Child it doesn't seem
 to have a problem, only when there are multiple children.  Not sure if
 that makes a difference in the SQLAlchemy code.
 
 
 from sqlalchemy import create_engine
 from sqlalchemy.ext.declarative import declarative_base
 from sqlalchemy.orm import scoped_session, sessionmaker, relationship
 from sqlalchemy.schema import Column, ForeignKey
 from sqlalchemy.types import Integer
 
 #engine = create_engine('sqlite:///')
 engine = create_engine('postgresql://test_runner@/testing_db')
 
 Model = declarative_base()
 
 
 class Parent(Model):
     __tablename__ = 'parents'

     id = Column(Integer, primary_key=True)


 class Child(Model):
     __tablename__ = 'children'

     id = Column(Integer, primary_key=True)
     parent_id = Column(Integer, ForeignKey('parents.id'),
                        nullable=False)
     parent = relationship('Parent')
 
 
 def begin():
     """Begin transaction"""
     #global transaction
     #transaction = session.begin()

     session.begin()


 def commit():
     """Commit transaction"""
     #global transaction
     #transaction.commit()

     session.commit()
 
 
 Model.metadata.create_all(engine)
 
 parent = Parent()
 children = [Child(parent=parent), Child(parent=parent)]
 
 Session = sessionmaker(bind=engine, autocommit=True)
 session = Session()
 
 try:
     session.bind.echo = True

     begin()
     session.add_all(children)
     session.add(parent)
     commit()


     begin()
     for child in children:
         session.delete(child)
     session.delete(parent)
     commit()

     session.bind.echo = False
 finally:
     Model.metadata.drop_all(engine)
 
 
 
 From running the script I have two different outputs because it seems
 to run the deletes in a random order so subsequent runs will behave
 differently.
 
 
 # Example Failed Run
 
 2010-12-15 13:45:05,050 INFO sqlalchemy.engine.base.Engine.0x...f5d0
 BEGIN (implicit)
 2010-12-15 13:45:05,051 INFO sqlalchemy.engine.base.Engine.0x...f5d0
 INSERT INTO parents DEFAULT VALUES RETURNING parents.id
 2010-12-15 13:45:05,051 INFO sqlalchemy.engine.base.Engine.0x...f5d0
 {}
 2010-12-15 13:45:05,052 INFO sqlalchemy.engine.base.Engine.0x...f5d0
 INSERT INTO children (parent_id) VALUES (%(parent_id)s) RETURNING
 children.id
 2010-12-15 13:45:05,052 INFO sqlalchemy.engine.base.Engine.0x...f5d0
 {'parent_id': 1}
 2010-12-15 13:45:05,053 INFO sqlalchemy.engine.base.Engine.0x...f5d0
 INSERT INTO children (parent_id) VALUES (%(parent_id)s) RETURNING
 children.id
 2010-12-15 13:45:05,054 INFO sqlalchemy.engine.base.Engine.0x...f5d0
 {'parent_id': 1}
 2010-12-15 13:45:05,054 INFO sqlalchemy.engine.base.Engine.0x...f5d0
 COMMIT
 2010-12-15 13:45:05,055 INFO sqlalchemy.engine.base.Engine.0x...f5d0
 BEGIN (implicit)
 2010-12-15 13:45:05,056 INFO sqlalchemy.engine.base.Engine.0x...f5d0
 SELECT parents.id AS parents_id
 FROM parents
 WHERE parents.id = %(param_1)s
 2010-12-15 13:45:05,056 INFO sqlalchemy.engine.base.Engine.0x...f5d0
 {'param_1': 1}
 2010-12-15 13:45:05,057 INFO sqlalchemy.engine.base.Engine.0x...f5d0
 DELETE FROM parents WHERE parents.id = %(id)s
 2010-12-15 13:45:05,057 INFO sqlalchemy.engine.base.Engine.0x...f5d0
 {'id': 1}
 2010-12-15 13:45:05,058 INFO sqlalchemy.engine.base.Engine.0x...f5d0
 ROLLBACK
 2010-12-15 13:45:05,061 INFO sqlalchemy.engine.base.Engine.0x...f5d0
 select relname from pg_class c join pg_namespace n on
 n.oid=c.relnamespace where n.nspname=current_schema() and
 lower(relname)=%(name)s
 2010-12-15 13:45:05,061 INFO sqlalchemy.engine.base.Engine.0x...f5d0
 {'name': u'children'}
 2010-12-15 13:45:05,063 INFO sqlalchemy.engine.base.Engine.0x...f5d0
 select relname from pg_class c join pg_namespace n on
 n.oid=c.relnamespace where n.nspname=current_schema() and
 lower(relname)=%(name)s
 2010-12-15 13:45:05,063 INFO 

Re: [sqlalchemy] Deletion order during flush is not correct.

2010-12-15 Thread Will Weaver
Wow, this has been a problem for me for the past 3 or 4 days and took
a while to get to that example.  Defining the backrefs or the
relationships in the opposite direction did the job.  I had
intentionally left out some of the backreffed relationships because I
didn't need them for what I was working on, but it definitely is worth
it to get this working.

Thanks a lot.

-Will

On Wed, Dec 15, 2010 at 2:45 PM, Michael Bayer mike...@zzzcomputing.com wrote:
 This is an interesting edge case and I can probably ensure that the 
 dependency between Parent/Child is present in the unit of work even if there 
 is no known linkage at the Child.parent level for the objects actually 
 present - ticket #2002 is added for this.

 In the meantime, the uow needs to be aware of the linkage between Parent and 
 Child when the flush occurs.   Adding a "children" backref to the parent 
 relationship will do it, or ensuring that child.parent is accessed before 
 emitting the flush will do it.     The usual way this kind of delete is 
 performed is that the "delete" cascade is added to the "children" backref, 
 then the Parent is deleted alone, the deletes cascading to the Child objects 
 naturally.

 But this is a fun bug and I'll probably have a fix very soon, perhaps in 10 
 minutes or maybe not.




 On Dec 15, 2010, at 2:17 PM, Will wrote:

 
 Hello,

 I've recently been having a problem with SQLAlchemy not flushing
 deletes in the proper order.  I've created a simple example for the
 problem that has been occurring.  I tried to run this using SQLite and
 it doesn't have any problems; it only happens with PostgreSQL.

 One thing of note is that if there is only one Child it doesn't seem
 to have a problem, only when there are multiple children.  Not sure if
 that makes a difference in the SQLAlchemy code.
 

 from sqlalchemy import create_engine
 from sqlalchemy.ext.declarative import declarative_base
 from sqlalchemy.orm import scoped_session, sessionmaker, relationship
 from sqlalchemy.schema import Column, ForeignKey
 from sqlalchemy.types import Integer

 #engine = create_engine('sqlite:///')
 engine = create_engine('postgresql://test_runner@/testing_db')

 Model = declarative_base()


 class Parent(Model):
    __tablename__ = 'parents'

    id = Column(Integer, primary_key=True)


 class Child(Model):
    __tablename__ = 'children'

    id = Column(Integer, primary_key=True)
    parent_id = Column(Integer, ForeignKey('parents.id'),
                       nullable=False)
    parent = relationship('Parent')


 def begin():
     """Begin transaction"""
     #global transaction
     #transaction = session.begin()

     session.begin()


 def commit():
     """Commit transaction"""
     #global transaction
     #transaction.commit()

     session.commit()


 Model.metadata.create_all(engine)

 parent = Parent()
 children = [Child(parent=parent), Child(parent=parent)]

 Session = sessionmaker(bind=engine, autocommit=True)
 session = Session()

 try:
    session.bind.echo = True

    begin()
    session.add_all(children)
    session.add(parent)
    commit()


    begin()
    for child in children:
        session.delete(child)
    session.delete(parent)
    commit()

    session.bind.echo = False
 finally:
    Model.metadata.drop_all(engine)


 
 From running the script I have two different outputs because it seems
 to run the deletes in a random order so subsequent runs will behave
 differently.
 

 # Example Failed Run
 
 2010-12-15 13:45:05,050 INFO sqlalchemy.engine.base.Engine.0x...f5d0
 BEGIN (implicit)
 2010-12-15 13:45:05,051 INFO sqlalchemy.engine.base.Engine.0x...f5d0
 INSERT INTO parents DEFAULT VALUES RETURNING parents.id
 2010-12-15 13:45:05,051 INFO sqlalchemy.engine.base.Engine.0x...f5d0
 {}
 2010-12-15 13:45:05,052 INFO sqlalchemy.engine.base.Engine.0x...f5d0
 INSERT INTO children (parent_id) VALUES (%(parent_id)s) RETURNING
 children.id
 2010-12-15 13:45:05,052 INFO sqlalchemy.engine.base.Engine.0x...f5d0
 {'parent_id': 1}
 2010-12-15 13:45:05,053 INFO sqlalchemy.engine.base.Engine.0x...f5d0
 INSERT INTO children (parent_id) VALUES (%(parent_id)s) RETURNING
 children.id
 2010-12-15 13:45:05,054 INFO sqlalchemy.engine.base.Engine.0x...f5d0
 {'parent_id': 1}
 2010-12-15 13:45:05,054 INFO sqlalchemy.engine.base.Engine.0x...f5d0
 COMMIT
 2010-12-15 13:45:05,055 INFO sqlalchemy.engine.base.Engine.0x...f5d0
 BEGIN (implicit)
 2010-12-15 13:45:05,056 INFO sqlalchemy.engine.base.Engine.0x...f5d0
 SELECT parents.id AS parents_id
 FROM parents
 WHERE parents.id = %(param_1)s
 2010-12-15 13:45:05,056 INFO sqlalchemy.engine.base.Engine.0x...f5d0
 {'param_1': 1}
 2010-12-15 13:45:05,057 INFO sqlalchemy.engine.base.Engine.0x...f5d0
 DELETE FROM parents WHERE parents.id = %(id)s
 2010-12-15 13:45:05,057 INFO sqlalchemy.engine.base.Engine.0x...f5d0
 {'id': 1}
 2010-12-15 13:45:05,058 INFO sqlalchemy.engine.base.Engine.0x...f5d0
 ROLLBACK
 2010-12-15 13:45:05,061 INFO sqlalchemy.engine.base.Engine.0x...f5d0
 select relname from pg_class c join pg_namespace n on
 

Re: [sqlalchemy] Deletion order during flush is not correct.

2010-12-15 Thread Michael Bayer

It was in fact a one-liner, so you can go back to your original code if you use 
the latest 0.6 tip:

http://hg.sqlalchemy.org/sqlalchemy/archive/rel_0_6.tar.gz

Thanks for the bug report!


On Dec 15, 2010, at 3:41 PM, Will Weaver wrote:

 Wow, this has been a problem for me for the past 3 or 4 days and took
 a while to get to that example.  Defining the backrefs or the
 relationships in the opposite direction did the job.  I had
 intentionally left out some of the backreffed relationships because I
 didn't need them for what I was working on, but it definitely is worth
 it to get this working.
 
 Thanks a lot.
 
 -Will
 
 On Wed, Dec 15, 2010 at 2:45 PM, Michael Bayer mike...@zzzcomputing.com 
 wrote:
 This is an interesting edge case and I can probably ensure that the 
 dependency between Parent/Child is present in the unit of work even if there 
 is no known linkage at the Child.parent level for the objects actually 
 present - ticket #2002 is added for this.
 
 In the meantime, the uow needs to be aware of the linkage between Parent and 
 Child when the flush occurs.   Adding a "children" backref to the parent 
 relationship will do it, or ensuring that child.parent is accessed before 
 emitting the flush will do it.  The usual way this kind of delete is 
 performed is that the "delete" cascade is added to the "children" backref, 
 then the Parent is deleted alone, the deletes cascading to the Child objects 
 naturally.
 
 But this is a fun bug and I'll probably have a fix very soon, perhaps in 10 
 minutes or maybe not.
 
 
 
 
 On Dec 15, 2010, at 2:17 PM, Will wrote:
 
 
 Hello,
 
 I've recently been having a problem with SQLAlchemy not flushing
 deletes in the proper order.  I've created a simple example for the
 problem that has been occurring.  I tried to run this using SQLite and
 it doesn't have any problems; it only happens with PostgreSQL.

 One thing of note is that if there is only one Child it doesn't seem
 to have a problem, only when there are multiple children.  Not sure if
 that makes a difference in the SQLAlchemy code.
 
 
 from sqlalchemy import create_engine
 from sqlalchemy.ext.declarative import declarative_base
 from sqlalchemy.orm import scoped_session, sessionmaker, relationship
 from sqlalchemy.schema import Column, ForeignKey
 from sqlalchemy.types import Integer
 
 #engine = create_engine('sqlite:///')
 engine = create_engine('postgresql://test_runner@/testing_db')
 
 Model = declarative_base()
 
 
 class Parent(Model):
     __tablename__ = 'parents'

     id = Column(Integer, primary_key=True)


 class Child(Model):
     __tablename__ = 'children'

     id = Column(Integer, primary_key=True)
     parent_id = Column(Integer, ForeignKey('parents.id'),
                        nullable=False)
     parent = relationship('Parent')
 
 
 def begin():
     """Begin transaction"""
     #global transaction
     #transaction = session.begin()

     session.begin()


 def commit():
     """Commit transaction"""
     #global transaction
     #transaction.commit()

     session.commit()
 
 
 Model.metadata.create_all(engine)
 
 parent = Parent()
 children = [Child(parent=parent), Child(parent=parent)]
 
 Session = sessionmaker(bind=engine, autocommit=True)
 session = Session()
 
 try:
     session.bind.echo = True

     begin()
     session.add_all(children)
     session.add(parent)
     commit()


     begin()
     for child in children:
         session.delete(child)
     session.delete(parent)
     commit()

     session.bind.echo = False
 finally:
     Model.metadata.drop_all(engine)
 
 
 
 From running the script I have two different outputs because it seems
 to run the deletes in a random order so subsequent runs will behave
 differently.
 
 
 # Example Failed Run
 
 2010-12-15 13:45:05,050 INFO sqlalchemy.engine.base.Engine.0x...f5d0
 BEGIN (implicit)
 2010-12-15 13:45:05,051 INFO sqlalchemy.engine.base.Engine.0x...f5d0
 INSERT INTO parents DEFAULT VALUES RETURNING parents.id
 2010-12-15 13:45:05,051 INFO sqlalchemy.engine.base.Engine.0x...f5d0
 {}
 2010-12-15 13:45:05,052 INFO sqlalchemy.engine.base.Engine.0x...f5d0
 INSERT INTO children (parent_id) VALUES (%(parent_id)s) RETURNING
 children.id
 2010-12-15 13:45:05,052 INFO sqlalchemy.engine.base.Engine.0x...f5d0
 {'parent_id': 1}
 2010-12-15 13:45:05,053 INFO sqlalchemy.engine.base.Engine.0x...f5d0
 INSERT INTO children (parent_id) VALUES (%(parent_id)s) RETURNING
 children.id
 2010-12-15 13:45:05,054 INFO sqlalchemy.engine.base.Engine.0x...f5d0
 {'parent_id': 1}
 2010-12-15 13:45:05,054 INFO sqlalchemy.engine.base.Engine.0x...f5d0
 COMMIT
 2010-12-15 13:45:05,055 INFO sqlalchemy.engine.base.Engine.0x...f5d0
 BEGIN (implicit)
 2010-12-15 13:45:05,056 INFO sqlalchemy.engine.base.Engine.0x...f5d0
 SELECT parents.id AS parents_id
 FROM parents
 WHERE parents.id = %(param_1)s
 2010-12-15 13:45:05,056 INFO sqlalchemy.engine.base.Engine.0x...f5d0
 {'param_1': 1}
 2010-12-15 13:45:05,057 INFO sqlalchemy.engine.base.Engine.0x...f5d0
 DELETE FROM parents WHERE parents.id = %(id)s

[sqlalchemy] Why does Session.merge only look at primary key and not all unique keys?

2010-12-15 Thread Russell Warren
Why does Session.merge only look at primary key and not all unique
keys?

Leaving aside some irritating DBMS restrictions on PKs and some
automatic indexing that tends to happen, the PK is not fundamentally
different from other unique keys, and I don't see why SQLA
distinguishes them from an integrity/relationship perspective.  In
databases that already have frustrating, funky PK restrictions it is
tough to make merge() work the way it seems it should.  For example,
in the code below this post, SQLite requires the autoincrementing
field to be the PK, and you can't composite it with another field...
with these restrictions I can't get merge() to work the way it should.

I was looking for a clean way in SQLAlchemy to do an "insert if not
exists" pattern, and merge() looked perfect, but I can't make it work
at the moment.

I'm also aware that in the sample code the 'name' field should really
just be the primary key and the problem goes away, but the reality
of the grander/real scheme is that the linking id is needed in
addition to other unique keys.

In addition to the docs, these existing threads are also very
relevant:

http://groups.google.com/group/sqlalchemy/browse_frm/thread/7483736b46d56943
http://groups.google.com/group/sqlalchemy/browse_thread/thread/79736ff7ef81d1b9/0b80b54dc45ecc28

To make the "insert if not exists" pattern work I'll likely/
reluctantly be doing the __new__ hack referred to in the latter thread
to achieve what I'm after in the end, but I really don't get why the
PK is getting special treatment.

Thanks,
Russ

Sample code:


from sqlalchemy import Column, Integer, String, MetaData, ForeignKey
from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy import create_engine
from sqlalchemy.orm import sessionmaker, scoped_session
import logging

engine = create_engine('sqlite:///:memory:')
metadata = MetaData()
DeclarativeBase = declarative_base()

# Configure some clean and indented SQL logging...
class SqlFormatter(logging.Formatter):
    def format(self, record):
        prefix = "SQL  "
        record.msg = prefix + record.msg.replace("\n", "\n" + prefix)
        return logging.Formatter.format(self, record)

sqlaLogger = logging.getLogger('sqlalchemy.engine')
sqlaLogger.setLevel(logging.INFO)
handler = logging.StreamHandler()
handler.setFormatter(SqlFormatter("%(message)s"))
sqlaLogger.addHandler(handler)

class MyStuff(DeclarativeBase):
    __tablename__ = 'mystuff'

    # Config below
    id = Column(Integer, primary_key=True, autoincrement=True)
    name = Column(String(100), nullable=False, unique=True)

    # Config below no good due to composite PK...
    #id = Column(Integer, primary_key=True, autoincrement=True)
    #name = Column(String(100), nullable=False, primary_key=True)

    # Config below doesn't give autoincrement...
    #id = Column(Integer, primary_key=False, autoincrement=True)
    #name = Column(String(100), nullable=False, primary_key=True)

    def __init__(self, Name):
        self.name = Name

DeclarativeBase.metadata.create_all(engine)
Session = sessionmaker(bind=engine)

print "Attempting 'foo' merge into empty DB..."
s1 = Session()
foo = s1.merge(MyStuff("foo"))
s1.commit()
s1.close()

print "Attempting 'foo' merge after it exists already..."
s2 = Session()
foo = s2.merge(MyStuff("foo"))
s2.commit()
s2.close()




[sqlalchemy] Re: Why does Session.merge only look at primary key and not all unique keys?

2010-12-15 Thread Russell Warren
Whoops - my clipboard had an old pre-cleaned sample code in it that I
pasted.  Sorry about the excess lines... maybe the logging mod will be
useful for some people, though?

The code is still right, just not fully cleaned up.  The first Column
config is the one that works around the SQLite PK/autoincrement
restriction, but it doesn't work with merge() because merge doesn't
pick up on the fact that 'foo' already exists and tries to insert it
again, throwing an IntegrityError.




Re: [sqlalchemy] Why does Session.merge only look at primary key and not all unique keys?

2010-12-15 Thread Michael Bayer

On Dec 15, 2010, at 5:14 PM, Russell Warren wrote:

 Why does Session.merge only look at primary key and not all unique
 keys?

Well the theory of operation regarding merge() is based on that of the identity 
map, which is linked to object/row identity.  Consider that it also 
cascades along relationship paths.  It would be a difficult operation to define 
if it had to choose among multiple ways to determine the identity of each 
object along the cascade chain.


 
 Leaving aside some irritating DBMS restrictions on PKs and some
 automatic indexing that tends to happen, the PK is not fundamentally
 different than other unique keys

It is fundamentally different in that a database row within a reasonable schema 
has only one identity.  The usage of surrogate primary keys perhaps pollutes 
this concept to some degree.


 and I don't see why SQLA
 distinguishes them from an integrity/relationship perspective.  

SQLA at the ORM level doesn't really know about any other attributes being 
unique, and it would incur excessive complexity to implement that as built-in, 
where "complexity" here means the bookkeeping associated with storing, 
retrieving, and modifying items in the identity map would become a much more 
time-consuming affair (for some recent insight into my epic battle with time 
consumption, see 
http://techspot.zzzeek.org/2010/12/12/a-tale-of-three-profiles/ ).  It would 
also bring in all those unintended side effects of doing such, such as two 
objects that are swapping attribute values, etc.  The database does a 
perfectly good job of maintaining UNIQUE constraints, so we leave that whole 
affair out of the Python side.
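
As a small illustration of leaving uniqueness to the database (reusing the
MyStuff model and Session from the sample code earlier in the thread; the
exception class is sqlalchemy.exc.IntegrityError):

from sqlalchemy.exc import IntegrityError

s = Session()
s.add(MyStuff("foo"))
s.commit()

s.add(MyStuff("foo"))        # violates the UNIQUE constraint on name
try:
    s.commit()
except IntegrityError:
    s.rollback()             # the database rejected the duplicate row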


 In
 databases where it is already frustrating that they have funky PK
 restrictions it is tough to make merge() work the way it seems it
 should.  For example, in the code below this post, Sqlite requires the
 autoincrementing field to be the PK, and you can't composite it with
 another field... with these restrictions I can't get merge() to work
 the way it should.
 I was looking for a clean way in SQLAlchemy to do an insert if not
 exists pattern, and merge() looked perfect, but I can't make it work
 at the moment.

The generic "insert if not exists" pattern that is extensible to whatever 
attributes you want is at: 
http://www.sqlalchemy.org/trac/wiki/UsageRecipes/UniqueObject
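
For context, a minimal get-or-create sketch in the same spirit (this is not
the recipe itself; it reuses the MyStuff model and Session from the sample
code and simply checks the unique 'name' column with a query instead of
relying on merge() and the primary key):

def get_or_create(session, name):
    """Return the existing MyStuff row with this name, or add a new one."""
    obj = session.query(MyStuff).filter_by(name=name).first()
    if obj is None:
        obj = MyStuff(name)
        session.add(obj)
    return obj

s = Session()
foo = get_or_create(s, "foo")    # inserts on the first call
foo = get_or_create(s, "foo")    # returns the existing row afterwards
s.commit()

Under concurrent writers this still needs the UNIQUE constraint plus
IntegrityError handling as a backstop.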

 
 I'm also aware that in the sample code the 'name' field should really
 just be the primary key and the problem goes away,

Mmm, the consensus I've noted for the past several years, as well as my own 
experience, is that we're better off with surrogate primary keys.   SQLA does 
support natural primary keys fully, and note that foreign keys which reference 
natural primary keys are entirely valid.   Mutation of these keys is supported 
naturally through ON UPDATE CASCADE and ON DELETE CASCADE.  But I find myself 
hardly ever using them (well, actually I did a yearlong project a year ago 
that was all on natural PKs and it was really not worth it).





Re: [sqlalchemy] Use of table aliases

2010-12-15 Thread Michael Bayer

On Dec 15, 2010, at 9:04 AM, neurino wrote:

 Hello, I have 2 tables, data and acquisitions:
 - each Acquisition has many Data
 - each Data comes from a different sensor
 - a single sensor is identified by the pair Acquisition.id_centr,
   Data.id_meas

 Now I need a query with one column for each sensor and a row for each
 Acquisition.datetime.
 
 This is how I get it (in case of two sensors) with SQL:
 
 q = curs.execute("""
     SELECT a.datetime, d1.value, d2.value
     FROM acquisitions AS a
     LEFT JOIN data AS d1
         ON a.id_acq=d1.id_acq
         AND a.id_centr=159
         AND d1.id_meas=1501
     LEFT JOIN data AS d2
         ON a.id_acq=d2.id_acq
         AND a.id_centr=320
         AND d2.id_meas=1551
 """)
 for n, row in enumerate(q): print n, row
 0 (u'2010-09-02 12:05:00', 23.98, 25.67)
 1 (u'2010-09-02 12:10:00', 23.77, 25.57)
 2 (u'2010-09-02 12:15:00', 23.96, 25.57)
 3 (u'2010-09-02 12:20:00', 24.78, 25.94)
 4 (u'2010-09-02 12:25:00', 25.48, 26.27)
 5 (u'2010-09-02 12:30:00', 25.91, 26.46)
 6 (u'2010-09-02 12:35:00', 26.14, 26.62)
 7 (u'2010-09-02 12:40:00', 26.32, 26.73)
 8 (u'2010-09-02 12:45:00', 26.44, 26.80)
 9 (u'2010-09-02 12:50:00', 26.55, 26.87)
 10 (u'2010-09-02 12:55:00', 26.62, 26.92)
 11 (u'2010-09-02 13:00:00', 26.67, 26.94)
 12 (u'2010-09-02 13:05:00', 26.69, 26.94)
 13 (u'2010-09-02 13:10:00', 26.71, 26.96)
 14 (u'2010-09-02 13:15:00', 26.73, 26.98)
 
 But I can't get the same result with SQLAlchemy; here's my mapping:
 
 data = Table('data', metadata,
     Column('id_data', Integer, primary_key=True),
     Column('id_meas', Integer, nullable=False),
     Column('id_acq', Integer, ForeignKey('acquisitions.id_acq'),
            nullable=False),
     Column('value', Float, nullable=False),
 )

 acquisitions = Table('acquisitions', metadata,
     Column('id_acq', Integer, primary_key=True),
     Column('id_centr', Integer, nullable=False),
     Column('datetime', DateTime, nullable=False),
     # acquisitions with same id_centr and datetime are duplicates
     UniqueConstraint('id_centr', 'datetime'),
 )

 orm.mapper(Data, data, properties={
     'acquisitions': orm.relationship(Acquisition, backref='data'),
 })
 orm.mapper(Acquisition, acquisitions)

To create aliases during an ORM query you use the aliased() construct.  There 
are examples at:

http://www.sqlalchemy.org/docs/orm/tutorial.html#using-aliases

You'd also be using sqlalchemy.and_() to formulate those outerjoin() conditions.
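
A minimal sketch of that approach against the mapping above, assuming a
Session has already been configured; the sensor ids are the ones from the
raw SQL, and the (alias, onclause) tuple form of outerjoin() is the spelling
used in the 0.6-era tutorial:

from sqlalchemy import and_
from sqlalchemy.orm import aliased

d1 = aliased(Data)
d2 = aliased(Data)

# Each outerjoin() pairs one alias of "data" with its full ON clause,
# mirroring the two LEFT JOINs in the raw SQL above.
q = (session.query(Acquisition.datetime, d1.value, d2.value)
     .outerjoin((d1, and_(Acquisition.id_acq == d1.id_acq,
                          Acquisition.id_centr == 159,
                          d1.id_meas == 1501)))
     .outerjoin((d2, and_(Acquisition.id_acq == d2.id_acq,
                          Acquisition.id_centr == 320,
                          d2.id_meas == 1551))))

for n, row in enumerate(q):
    print n, row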




 
 Any advice?
 
 Thanks for your support
 neurino
 
 




[sqlalchemy] Re: Why does Session.merge only look at primary key and not all unique keys?

2010-12-15 Thread Russell Warren
 Why does Session.merge only look at primary key and not all unique
 keys?

 Well the theory of operation regarding merge() is based on that of
 the identity map, which is linked to object/row identity.
 Consider that it also cascades along relationship paths.  It would
 be a difficult operation to define if it had to choose among
 multiple ways to determine the identity of each object along the
 cascade chain.

Ok.  That certainly makes sense for following relationships in the
merge (and in general).  But for the basic existence checking that is
required in the first step(s) of the merge, to figure out whether the
object being merged already exists or not, it does not seem
unreasonable for it to check all unique keys.  ie: in your docs you
say the first merge step is "It examines the primary key of the
instance".  Can't it be "It checks any provided unique elements of the
instance"?  From that point, normal/sensible identity map rules could
resume for the cascade.  "Is a" checking is not the same as "is related
to" checking, which is clearly nuttier.

Although... you do mention later that the ORM level has no knowledge of
unique attributes, so perhaps this is also impossible?  Is the
unique=True kwarg on the Column not kept anywhere?  Is it just
used for table creation and then turfed?

 Leaving aside some irritating DBMS restrictions on PKs and some
 automatic indexing that tends to happen, the PK is not fundamentally
 different than other unique keys

 It is fundamentally different in that a database row within a
 reasonable schema has only one identity.  The usage of surrogate
 primary keys perhaps pollutes this concept to some degree.

Ok again... but you also agree that the use of surrogate keys is
standard (and some say necessary) practice.  In the case of SQLite
(as in my example), adding this surrogate key automatically makes the
schema "unreasonable" because you now need to have the primary key be
the meaningless surrogate key (single id column), and the natural key
ends up being relegated to just another unique index.  Unfortunately,
the latter renders the object/table useless for use with the useful
session.merge() function.  I don't recall the details, but I think
there may be a similar PostgreSQL limitation regarding autoincrements
as well.

 for some recent insight into my epic battle with time consumption,
 see http://techspot.zzzeek.org/2010/12/12/a-tale-of-three-profiles

Great link/post!  Seeing successful optimizations is always good, and
the writeup is quite informative.

Regarding RunSnakeRun, I've used it as well and like it.  However, I'm
a bit of a resolution junkie, and the box frames in RunSnakeRun's
visual output are limited in how they squish all the box borders
together.  If you haven't given kcachegrind a shot yet for viewing
cProfile results, you might want to give it a trial run: the visual
output is a better representation of timing scale, and the viewing is
more powerful as well (if resolution is too weak an argument).
Some tips to get it to work well for Python are here:
http://stackoverflow.com/questions/1896032/using-cprofile-results-with-kcachegrind
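
For anyone following along, a minimal sketch of capturing a profile to a
file that converters like the one described at that link can then feed to
kcachegrind (the workload function is just a placeholder):

import cProfile
import pstats

def workload():
    # placeholder for the flush-heavy code being profiled
    pass

# Write raw cProfile stats to disk; the converter from the Stack Overflow
# link above can turn this file into kcachegrind input.
cProfile.run('workload()', 'workload.prof')

# Quick in-terminal look at the same data.
stats = pstats.Stats('workload.prof')
stats.sort_stats('cumulative').print_stats(20)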

 The generic "insert if not exists" pattern that is extensible to
 whatever attributes you want is at:
 http://www.sqlalchemy.org/trac/wiki/UsageRecipes/UniqueObject

Thanks for the example...  I would have been worried about how
fiddling with the __new__ constructor would interfere with query
loads, but that example shows how to make it work.




[sqlalchemy] Session.add performance

2010-12-15 Thread Julian Scheid
In an application that is heavy on inserts and updates, cProfile
output is dominated by Session.add, in which about 45% of the time is
spent.  Most of that time, in turn, is spent in cascade_iterator (43%).
I can provide more detailed information if needed.

The application does aggressive caching of data and has set
expire_on_commit=False, in order to keep database load down. Is that
the reason for Session.add slowness?

Is there a way I can speed this up while keeping a similar level of
cache aggressiveness?

For example, in one test run Session.__contains__ was invoked 25 million
times over the course of only a few minutes, accounting for 27% of the
total time spent.  Could it be a good idea to try to override this
function with one that's optimized for this specific use case?

Also, so far I haven't spent any effort expunging objects from the
session as soon as possible.  Some objects might linger for longer
than necessary.  Would they contribute to Session.add's overhead?
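
For reference, a minimal sketch of the expunging mentioned above (the Item
model and the in-memory engine are placeholders, not from the original
application):

from sqlalchemy import create_engine, Column, Integer
from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy.orm import sessionmaker

Base = declarative_base()

class Item(Base):
    __tablename__ = 'items'
    id = Column(Integer, primary_key=True)

engine = create_engine('sqlite:///')
Base.metadata.create_all(engine)
session = sessionmaker(bind=engine)()

item = Item()
session.add(item)
session.commit()

# Detach the instance once it is no longer needed; a detached object no
# longer participates in the session's flush and cascade bookkeeping.
session.expunge(item)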
