Re: [sqlalchemy] How to use BLOB datatype in alembic tool to support Postgresql?

2012-10-30 Thread Mike Bayer
take a look at LargeBinary:

http://docs.sqlalchemy.org/en/rel_0_7/core/types.html#sqlalchemy.types.LargeBinary
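For instance (a minimal sketch; the table and column names here are invented), the same LargeBinary column renders as BLOB on SQLite/MySQL and as BYTEA on PostgreSQL:

```python
from sqlalchemy import Column, Integer, LargeBinary, MetaData, Table
from sqlalchemy.dialects import postgresql, sqlite
from sqlalchemy.schema import CreateTable

metadata = MetaData()

# hypothetical table: LargeBinary resolves to each backend's binary type
documents = Table(
    "documents", metadata,
    Column("id", Integer, primary_key=True),
    Column("payload", LargeBinary),
)

print(CreateTable(documents).compile(dialect=sqlite.dialect()))      # payload BLOB
print(CreateTable(documents).compile(dialect=postgresql.dialect()))  # payload BYTEA
```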



On Oct 30, 2012, at 2:27 PM, junepeach wrote:

 We want to support as many databases as we can. Right now we have the BLOB data 
 type defined in our database schemas; both SQLite3 and MySQL work fine with 
 the Alembic migration tool on that data type, but it fails on PostgreSQL. 
 How should we handle that if we really want to use the BLOB datatype? Thanks.
 
 
 
 -- 
 You received this message because you are subscribed to the Google Groups 
 "sqlalchemy" group.
 To view this discussion on the web visit 
 https://groups.google.com/d/msg/sqlalchemy/-/5kKo3n8yXkAJ.
 To post to this group, send email to sqlalchemy@googlegroups.com.
 To unsubscribe from this group, send email to 
 sqlalchemy+unsubscr...@googlegroups.com.
 For more options, visit this group at 
 http://groups.google.com/group/sqlalchemy?hl=en.
 




Re: [sqlalchemy] Using Alembic to maintain versions of SPs and other objects?

2012-10-30 Thread Mike Bayer

On Oct 30, 2012, at 2:39 PM, Don Dwiggins wrote:

 It appears that, at least currently, Alembic only directly manages tables 
 (although I guess one could include SQL code in the upgrade/downgrade 
 functions to add/delete/change SPs, user defined types, functions, etc.
 
 Am I right in this?  If so, do you have any plans or thoughts about 
 versioning other schema objects?

All of that is accessible via regular op.execute() instructions.   There's not 
much of a win in building hierarchies of Python objects over these constructs 
as they are only needed for migrations and are highly backend specific.
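For example, a sketch of what that looks like in a migration script (the function name and SQL body here are invented; the exact DDL is whatever your backend wants, and this fragment only runs inside an Alembic environment):

```python
from alembic import op


def upgrade():
    # raw DDL passed straight through; stored-procedure syntax is
    # inherently backend-specific
    op.execute("""
        CREATE FUNCTION add_one(i integer) RETURNS integer
        AS 'SELECT i + 1' LANGUAGE SQL
    """)


def downgrade():
    op.execute("DROP FUNCTION add_one(integer)")
```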

If a more abstracted approach were desired, you're free to define DDL instances 
and/or DDLElement classes to take advantage of backend-specific compilation 
forms, events, and other automations, though within the realm of stored 
procedures and functions there's obviously very little overlap across database 
vendors.
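As a sketch of that more abstracted route (the construct name and the SQL it emits are invented for illustration), a custom DDLElement gets per-backend compilation via the @compiles hook, and the resulting object can then be passed to op.execute():

```python
from sqlalchemy.ext.compiler import compiles
from sqlalchemy.schema import DDLElement


class CreateStoredProc(DDLElement):
    """Hypothetical construct wrapping CREATE-procedure DDL."""
    def __init__(self, name, body):
        self.name = name
        self.body = body


@compiles(CreateStoredProc)
def _create_sp_default(element, compiler, **kw):
    # generic fallback compilation
    return "CREATE PROCEDURE %s AS %s" % (element.name, element.body)


@compiles(CreateStoredProc, "postgresql")
def _create_sp_pg(element, compiler, **kw):
    # PostgreSQL spells it as a function
    return "CREATE FUNCTION %s() RETURNS void AS $$ %s $$ LANGUAGE SQL" % (
        element.name, element.body)
```

In a migration, `op.execute(CreateStoredProc("do_stuff", "..."))` would then render differently per dialect.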

As for the autogenerate route, this is where if SQLAlchemy dialects supported 
dialect-agnostic reflection of SPs and other constructs, Alembic could begin to 
allow these constructs as well, but that would be a very large project.   

The PostgreSQL dialect does have some support for reflection of user-defined 
domains as well as enumerated types, but not yet the catch-all CREATE TYPE 
construct.




Re: [sqlalchemy] Sub-classing declarative classes

2014-06-10 Thread Mike Bayer



On Tue Jun 10 15:47:00 2014, Noah Davis wrote:


some_model.py
---
[SQLA setup of Base class here]
class Alice(Base):
   __tablename__ = 'alice'
   id = Column(Integer, primary_key=True)
   value = Column(String)

class Bob(Base):
   __tablename__ = 'bob'
   id = Column(Integer, primary_key=True)
   subval = Column(String)
   alice_id = Column(Integer, ForeignKey('alice.id'))
   alice = relationship('Alice', backref='bobs')


some_app.py

import some_model

class MyAlice(some_model.Alice):
   def myfunc(self):
   do_nothing_sql_related_here()

class MyBob(some_model.Bob):
   def otherfunc(self):
  again_something_unrelated()
-

This actually works okay out of the box if I select on the subclasses:
DBSession.query(MyAlice).filter(MyAlice.id==5).first()  # -> MyAlice(...)

The problem, of course, is relations:
a = DBSession.query(MyAlice).filter(MyAlice.id==1).first()
a.bobs  # -> [Bob(...), Bob(...), Bob(...)]
instead of
a.bobs  # -> [MyBob(...), MyBob(...), MyBob(...)]

I suspect there's some way to tell the ORM to Do The Right Thing here,


Well IMHO it is doing the Right Thing right now: Alice has a 
relationship that points to Bob, so it's going to give you Bob 
objects.   I don't know how any system could be devised such that it 
would know you want MyBob instead, especially if you have 
MyBobOne, MyBobTwo, etc.


if you wanted MyBob you'd need to tell it that.   SQLA isn't really 
expecting this kind of thing but you can make it work, with warnings, 
like this:


from sqlalchemy import *
from sqlalchemy.orm import *
from sqlalchemy.ext.declarative import declarative_base

Base = declarative_base()

class Alice(Base):
    __tablename__ = 'alice'
    id = Column(Integer, primary_key=True)
    bobs = relationship("Bob", back_populates="alice")

class Bob(Base):
    __tablename__ = 'bob'
    id = Column(Integer, primary_key=True)
    alice_id = Column(Integer, ForeignKey('alice.id'))
    alice = relationship('Alice', back_populates="bobs")


class MyAlice(Alice):
    def myfunc(self):
        print "myfunc"

    bobs = relationship("MyBob", back_populates="alice")

class MyBob(Bob):
    def otherfunc(self):
        print "otherfunc"

    alice = relationship('MyAlice', back_populates="bobs")

e = create_engine("sqlite://", echo=True)
Base.metadata.create_all(e)

s = Session(e)
s.add_all([
   MyAlice(
   bobs = [
   MyBob(),
   MyBob()
   ]
   )
])
s.commit()
s.close()

a1 = s.query(MyAlice).first()
print a1.bobs

this isn't really a great approach though.


but I have no idea what it might be. I'd like the particular
applications to be as unaware of the underlying table information as
possible. I guess in essence I'm trying to separate business logic
from the DB logic as much as possible. Maybe I'm heading down a
dead-end... I'm open to better suggestions.


well if you really want it that way, you can define Table objects 
separately.  Relationships you can get in there using declarative mixin 
patterns, perhaps.   But if you want a custom-defined MyAlice to point 
to a custom-defined MyBob you'd need some system that knows how to 
figure that out.


Here's a goofy way to do it by name, you might want to get into 
something more comprehensive but I'm hoping this is inspiration...


from sqlalchemy import *
from sqlalchemy.orm import *
from sqlalchemy.ext.declarative import declarative_base, declared_attr

class DynamicHierarchy(object):

   @classmethod
   def hierarchy(cls, name):
   target_hierarchy = cls.hierarchy_name
   return cls._decl_class_registry[target_hierarchy + name]

Base = declarative_base(cls=DynamicHierarchy)

class Alice(Base):
   __abstract__ = True

   @declared_attr
   def __table__(cls):
       return Table("alice",
           cls.metadata,
           Column('id', Integer, primary_key=True),
           useexisting=True
       )

   @declared_attr
   def bobs(cls):
       return relationship(lambda: cls.hierarchy("Bob"),
                           back_populates="alice")


class Bob(Base):
   __abstract__ = True

   @declared_attr
   def __table__(cls):
       return Table("bob",
           cls.metadata,
           Column('id', Integer, primary_key=True),
           Column("alice_id", ForeignKey('alice.id')),
           useexisting=True
       )

   @declared_attr
   def alice(cls):
       return relationship(lambda: cls.hierarchy('Alice'),
                           back_populates="bobs")



class MyAlice(Alice):
    hierarchy_name = "My"

    def myfunc(self):
        print "myfunc"


class MyBob(Bob):
    hierarchy_name = "My"

    def otherfunc(self):
        print "otherfunc"


e = create_engine("sqlite://", echo=True)
Base.metadata.create_all(e)

s = Session(e)
s.add_all([
   MyAlice(
   bobs = [
   MyBob(),
   MyBob()
   ]
   )
])
s.commit()
s.close()

a1 = s.query(MyAlice).first()

Re: [sqlalchemy] another quick question regarding abstract classes

2014-06-10 Thread Mike Bayer

On Tue Jun 10 15:36:09 2014, Richard Gerd Kuesters wrote:

so, here i am again with another weird question, but it may be
interesting for what it may come (i dunno yet).

the problem: i have a collection of abstract classes that, when
requested, the function (that does the request) checks in an internal
dictionary if that class was already created or creates it by using
declarative_base(cls=MyAbstractClass), that later can have an engine
and then work against a database.

i use this format because i work with multiple backends from multiple
sources, so abstract classes are a *must* here. now, the problem:
foreign keys and relationships. it's driving me nuts.

ok, let's say I have 2 classes, Foo and Bar, where Bar have one FK to Foo.


class Foo(object):
__abstract__ = True
foo_id = Column(...)
...

class Bar(object):
__abstract__ = True
foo_id = Column(ForeignKey(...))




/(those classes are just examples and weren't further coded because
it's a conceptual question)/

i know that the code might be wrong, because i can use @declared_attr
here and furthermore help sqlalchemy act accordingly (i don't know if
this is the right way to say it in english, but it is not a complaint
about sqlalchemy actions).

ok, suppose I created two subclasses, one from each abstract model
(Foo and Bar) in a postgres database with some named schema, let's say
sc1. we then have sc1.foo and sc1.bar.

now, i want to create a third table, also from Bar, but in the sc2
schema, where its foreign key will reference sc1.foo, which postgres
supports nicely.

how can i work this out, in a pythonic and sqlalchemy friendly way?
does @declared_attr solve this? or do I have to map that foreign key
(and furthermore relationships) in the class mapper, before using it,
like a @classmethod of some kind?



@declared_attr can help since the decorated function is called with 
"cls" as an argument.  You can look on cls for __table_args__ or some 
other attribute if you need, and you can create a Table on the fly to 
serve as "secondary"; see 
http://docs.sqlalchemy.org/en/rel_0_9/_modules/examples/generic_associations/table_per_related.html 
for an example of what this looks like.







best regards and sorry for my english,
richard.





Re: [sqlalchemy] how to tell if a relationship was loaded or not ?

2014-06-11 Thread Mike Bayer

On 6/11/14, 2:17 PM, Jonathan Vanasco wrote:
 I can't find this in the API or by using `inspect` on an object.

 I'm trying to find out how to tell if a particular relationship was
loaded or not.

 ie, I loaded Foo from the ORM, and want to see if foo.bar was loaded.

 I thought it might have been the `.attrs[column].state` , which is an
InstanceState, but it doesn't appear to be so.

usually we do "key in obj.__dict__" or "key in inspect(obj).dict".
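A runnable sketch (the two classes here are invented for illustration); inspect() also exposes an `unloaded` collection on the instance state that answers the same question directly:

```python
from sqlalchemy import Column, ForeignKey, Integer, create_engine, inspect
from sqlalchemy.orm import Session, declarative_base, relationship

Base = declarative_base()

class Foo(Base):
    __tablename__ = "foo"
    id = Column(Integer, primary_key=True)
    bars = relationship("Bar")

class Bar(Base):
    __tablename__ = "bar"
    id = Column(Integer, primary_key=True)
    foo_id = Column(Integer, ForeignKey("foo.id"))

engine = create_engine("sqlite://")
Base.metadata.create_all(engine)

session = Session(engine)
session.add(Foo(bars=[Bar()]))
session.commit()

foo = session.query(Foo).first()
print("bars" in inspect(foo).dict)      # False: relationship not loaded yet
print("bars" in inspect(foo).unloaded)  # True
foo.bars                                # triggers the lazy load
print("bars" in inspect(foo).dict)      # True
```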





Re: [sqlalchemy] pandas - issue with SQL server and checking for table existence with user-defined default schema

2014-06-11 Thread Mike Bayer

On 6/11/14, 4:07 PM, Joris Van den Bossche wrote:
 Hi,

 Since version 0.14 (released two weeks ago), pandas uses sqlalchemy in
 the SQL reading and writing functions to support different database
 flavors. A user reported an issue with SQL server:
 https://github.com/pydata/pandas/issues/7422 (and question on SO:
 http://stackoverflow.com/questions/24126883/pandas-dataframe-to-sql-function-if-exists-parameter-not-working).

 The user has set the default schema to `test`, but
 `engine.has_table('table_name')` and `meta.tables` still seem to
 return the tables in schema `dbo`.
 This leads to the following issue in our sql writing function `to_sql`:
 - when creating the table (using `Table.create()`), it creates it in
 the schema set as default (so 'test')
 - when checking for existence of the table (needed to see if the
 function has to fail, or has to append to the existing table), it
 however checks if the table exists in the 'dbo' schema
 - for this reason, the function thinks the table does not yet exist,
 tries to create it, resulting in a "There is already an object named
 'foobar' in the database" error.

 Is there a way to resolve this? Is this an issue on our side, or
 possibly in sqlalchemy?

all sqlalchemy dialects make sure to determine what in fact is the
default schema upon first connect.  With SQL server, turn on
echo='debug' and you will see this query:

2014-06-11 19:14:02,064 INFO sqlalchemy.engine.base.Engine
SELECT default_schema_name FROM
sys.database_principals
WHERE principal_id=database_principal_id()
   
2014-06-11 19:14:02,065 INFO sqlalchemy.engine.base.Engine ()
2014-06-11 19:14:02,065 DEBUG sqlalchemy.engine.base.Engine Col
('default_schema_name',)
2014-06-11 19:14:02,065 DEBUG sqlalchemy.engine.base.Engine Row ('dbo', )

that row ('dbo', ) you see is what is being determined as the default
schema.  A subsequent has_table() command with no explicit schema will
use this value:

SELECT [COLUMNS_1].[TABLE_SCHEMA], [COLUMNS_1].[TABLE_NAME],
[COLUMNS_1].[COLUMN_NAME], [COLUMNS_1].[IS_NULLABLE],
[COLUMNS_1].[DATA_TYPE], [COLUMNS_1].[ORDINAL_POSITION],
[COLUMNS_1].[CHARACTER_MAXIMUM_LENGTH], [COLUMNS_1].[NUMERIC_PRECISION],
[COLUMNS_1].[NUMERIC_SCALE], [COLUMNS_1].[COLUMN_DEFAULT],
[COLUMNS_1].[COLLATION_NAME]
FROM [INFORMATION_SCHEMA].[COLUMNS] AS [COLUMNS_1]
WHERE [COLUMNS_1].[TABLE_NAME] = CAST(? AS NVARCHAR(max)) AND
[COLUMNS_1].[TABLE_SCHEMA] = CAST(? AS NVARCHAR(max))
2014-06-11 19:14:02,071 INFO sqlalchemy.engine.base.Engine ('foo', 'dbo')

so you want to look into the SQL server database_principal_id() function
and why that might not be working as expected.   If you see that the
function is returning NULL or None, there's a workaround which is that
you can specify schema_name='xyz' to create_engine() as an argument;
however this value is only used if the above query is returning NULL
(which it should not be).
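On the pandas side, one way to sidestep the detected default entirely is to pass the intended schema explicitly to the existence check. A sketch using SQLite, whose default schema happens to be named "main" (on SQL Server the same call would take schema='test'); with newer SQLAlchemy the check is spelled via inspect():

```python
from sqlalchemy import Column, Integer, MetaData, Table, create_engine, inspect

engine = create_engine("sqlite://")
metadata = MetaData()
Table("foobar", metadata, Column("id", Integer, primary_key=True))
metadata.create_all(engine)

insp = inspect(engine)
print(insp.has_table("foobar"))                 # checks the default schema
print(insp.has_table("foobar", schema="main"))  # schema made explicit
```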









 BTW, I tried this myself with PostgreSQL, but couldn't reproduce it.

 Kind regards,
 Joris



Re: [sqlalchemy] uwsgi + sqlalchemy - forking concern

2014-06-15 Thread Mike Bayer
yeah what he said, but also, in a forking situation the easiest thing 
you can do is, as soon as the fork starts, say engine.dispose().  Or 
just engine = some new engine.  all you need to do is make sure no DB 
connections in the pool (or anywhere else) are allowed to continue in 
the fork.

On Sun Jun 15 22:01:09 2014, Jeff Dairiki wrote:
 On Sun, Jun 15, 2014 at 01:26:47PM -0700, Jonathan Vanasco wrote:
 i'm using sqlalchemy in pyramid, under uwsgi.

 i've been reading up on forking issues that can arise in this stack.

 from what I understand, issues arise because the SqlAlchemy Connection is
 established before the fork, resulting in each forked process potentially
 trying to use the same connection.  One of the fixes is to (re)connect
 after the fork.

 This is where i'm confused...  I don't know what a SqlAlchemy Connection
 refers to.

 I've read up on the uwsgi, sqlalchemy and pylons lists + docs -- I can't get
 any better info on what this SqlAlchemy connection is, or what component is
 having the issues.

 If the issue is with the SqlAlchemy engine connecting to the database, and
 it's internal db connection, then I should be fine -- SqlAlchemy doesn't
 connect in my apps until the first request.  I verified this with
 postgresql logs.

 If the issue is with creating a SqlAlchemy engine or sessionmaker, then I
 will need to rewrite some stuff.

 Does anyone know the specifics of the connection that has fork issues?


 A problem that can arise when using sqla connection pooling is (I'm not
 sure that this happens under all conditions, but it has happened to me):

 1. In the master process (e.g. in pyramid main()), an engine is set up
(with pooling), and a connection is made, used, and closed.  The connection,
in this case, is not actually closed, but just returned to the pool.

 2. Fork (e.g. uwsgi forks the workers)

 3. Each child process thinks it's got a connection in its pool.  The
problem is it's the same connection in each process.  When multiple
processes try to use the same connection, it's bad, obviously.

 One fix, if you make connections in the master process, is, instead
 of closing the connection (connection.close()) invalidate it
 (connection.invalidate()).  Then the underlying connection is
 actually closed, rather than returned to the pool.

 There's some more on this in this thread:
 https://groups.google.com/forum/#!searchin/sqlalchemy/UnicodeEncodeErrors/sqlalchemy/Xf0fWsCqdCg/5-6aRpYcuecJ

 Jeff




Re: [sqlalchemy] Intermittent error when retrieving an attribute of a persistent object

2014-06-16 Thread Mike Bayer
it's unlikely we can do much about that, jython is hardly used at all 
and is not a well-maintained platform; it regularly has subtle 
interpreter bugs and such.  pg8000 is in better shape these days but 
tracking down something like this would require careful pdb debugging 
and tracing.  That it's an intermittent bug makes it that much more 
difficult to track.

On Mon Jun 16 11:22:11 2014, tony.locke wrote:
 Hi, I'm using SQLAlchemy 0.8.6 on Jython 2.5.3 with the
 postgresql+pg8000 dialect, and I find I'm getting the following
 intermittent error when retrieving an attribute of a persistent
 object. The attribute is itself a persistent object. Any help would be
 most appreciated. Thanks, Tony.

 null org.python.core.PyException null
 Traceback (most recent call last):
   File "<string>", line 55, in <module>
   File "<string>", line 55, in <module>
   File "D:\Program Files\Apache Software Foundation\Tomcat 7.0\webapps\chellow\WEB-INF\lib-python\sqlalchemy\orm\attributes.py", line 316, in __get__
     return self.impl.get(instance_state(instance), dict_)
   File "D:\Program Files\Apache Software Foundation\Tomcat 7.0\webapps\chellow\WEB-INF\lib-python\sqlalchemy\orm\attributes.py", line 613, in get
     value = self.callable_(state, passive)
   File "D:\Program Files\Apache Software Foundation\Tomcat 7.0\webapps\chellow\WEB-INF\lib-python\sqlalchemy\orm\strategies.py", line 524, in _load_for_state
     return self._emit_lazyload(session, state, ident_key, passive)
   File "D:\Program Files\Apache Software Foundation\Tomcat 7.0\webapps\chellow\WEB-INF\lib-python\sqlalchemy\orm\strategies.py", line 563, in _emit_lazyload
     return loading.load_on_ident(q, ident_key)
   File "D:\Program Files\Apache Software Foundation\Tomcat 7.0\webapps\chellow\WEB-INF\lib-python\sqlalchemy\orm\loading.py", line 226, in load_on_ident
     return q.one()
   File "D:\Program Files\Apache Software Foundation\Tomcat 7.0\webapps\chellow\WEB-INF\lib-python\sqlalchemy\orm\query.py", line 2323, in one
     ret = list(self)
   File "D:\Program Files\Apache Software Foundation\Tomcat 7.0\webapps\chellow\WEB-INF\lib-python\sqlalchemy\orm\loading.py", line 72, in instances
     rows = [process[0](row, None) for row in fetch]
   File "D:\Program Files\Apache Software Foundation\Tomcat 7.0\webapps\chellow\WEB-INF\lib-python\sqlalchemy\orm\loading.py", line 356, in _instance
     tuple([row[column] for column in pk_cols])
   File "D:\Program Files\Apache Software Foundation\Tomcat 7.0\webapps\chellow\WEB-INF\lib-python\sqlalchemy\engine\result.py", line 91, in __getitem__
     return self._row[index]
 IndexError: index out of range: 0

   at org.python.core.Py.IndexError(Py.java:250)
   at org.python.core.SequenceIndexDelegate.checkIdxAndGetItem(SequenceIndexDelegate.java:63)
   at org.python.core.PySequence.seq___getitem__(PySequence.java:305)
   at org.python.core.PySequence.__getitem__(PySequence.java:301)
   at sqlalchemy.engine.result$py.__getitem__$9(D:\Program Files\Apache Software Foundation\Tomcat 7.0\webapps\chellow\WEB-INF\lib-python\sqlalchemy\engine\result.py:91)
   at sqlalchemy.engine.result$py.call_function(D:\Program Files\Apache Software Foundation\Tomcat 7.0\webapps\chellow\WEB-INF\lib-python\sqlalchemy\engine\result.py)
   at org.python.core.PyTableCode.call(PyTableCode.java:165)
   at org.python.core.PyBaseCode.call(PyBaseCode.java:149)
   at org.python.core.PyFunction.__call__(PyFunction.java:327)
   at org.python.core.PyMethod.__call__(PyMethod.java:124)
   at org.python.core.PyMethod.__call__(PyMethod.java:115)
   at org.python.core.PyObjectDerived.__getitem__(PyObjectDerived.java:900)
   at sqlalchemy.orm.loading$py._instance$9(D:\Program Files\Apache Software Foundation\Tomcat 7.0\webapps\chellow\WEB-INF\lib-python\sqlalchemy\orm\loading.py:497)
   at sqlalchemy.orm.loading$py.call_function(D:\Program Files\Apache Software Foundation\Tomcat 7.0\webapps\chellow\WEB-INF\lib-python\sqlalchemy\orm\loading.py)
   at org.python.core.PyTableCode.call(PyTableCode.java:165)
   at org.python.core.PyBaseCode.call(PyBaseCode.java:149)
   at org.python.core.PyFunction.__call__(PyFunction.java:327)
   at sqlalchemy.orm.loading$py.instances$1(D:\Program Files\Apache Software Foundation\Tomcat

Re: [sqlalchemy] column ordering with union_all and joinedload?

2014-06-16 Thread Mike Bayer
SQLAlchemy targets mapped entities in a result set by their position 
within the Query object's entities.  It's not possible to have a UNION 
that returns some objects of type A and then some objects of type B in 
the same position, unless those two classes are related through 
polymorphic inheritance with a discriminator.
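If the two tables genuinely share a structure, one alternative (a sketch, not the poster's actual schema) is to map the classes as a polymorphic hierarchy with a discriminator, which is exactly the arrangement that lets one query position return mixed types:

```python
from sqlalchemy import Column, Integer, String, create_engine
from sqlalchemy.orm import Session, declarative_base

Base = declarative_base()

class Item(Base):
    __tablename__ = "item"
    id = Column(Integer, primary_key=True)
    kind = Column(String)  # the discriminator column
    __mapper_args__ = {"polymorphic_on": kind, "polymorphic_identity": "item"}

class T0(Item):
    __mapper_args__ = {"polymorphic_identity": "t0"}

class T1(Item):
    __mapper_args__ = {"polymorphic_identity": "t1"}

engine = create_engine("sqlite://")
Base.metadata.create_all(engine)

s = Session(engine)
s.add_all([T0(), T1()])
s.commit()

# a single entity position yields both concrete types
print([type(obj).__name__ for obj in s.query(Item).order_by(Item.id)])  # ['T0', 'T1']
```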



On Mon Jun 16 10:36:04 2014, Craig Sabbey wrote:
 I'm trying to union the same query for 2 tables with the same
 structure, along with joinedloads for these tables. When I use
 DBSession([TABLE]) for each query in the union, the columns come back
 in different orders. If I specify the columns
 (DBSession([TABLE.COL_A], [TABLE.COL_B], ...) then the joinedloads
 fail with "Query has only expression-based entities".

 Here is some pseudo-code to hopefully make it clear what I'm trying to do:

   q0 = DBSession.query(T0).options(joinedload('ref'))
   q1 = DBSession.query(T1).options(joinedload('ref'))
   return q0.union_all(q1).all()

 Is there another way to perform this query?




Re: [sqlalchemy] Is there a reason why there is no dialect specific fix for ORA-01795?

2014-06-17 Thread Mike Bayer

On 6/17/14, 4:32 AM, Ofir Herzas wrote:
 ORA-01795: maximum number of expressions in a list is 1000

 As I understand, there are several options to fix this issue
 (e.g.
 https://groups.google.com/forum/#!searchin/sqlalchemy/maximum$20number$20of$20expressions/sqlalchemy/Oa6YWNE0-IQ/6GVmy27B-FAJ
 or split in_ to several or_)

 Why not incorporate this fix into sqlalchemy?
SQLAlchemy's core SQL model is one-to-one with SQL.  Taking a single
statement and magically executing ten statements within the scope of an
execute() due to a particular operator being present would be very
inappropriate and surprising, and also wouldn't work for any queries
that are more complex than a simple WHERE x IN y.
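If an application really does need arbitrarily long lists, the split is easy to do explicitly on the caller's side. A sketch (the model name and chunk size are illustrative):

```python
from sqlalchemy import Column, Integer, create_engine, or_
from sqlalchemy.orm import Session, declarative_base

Base = declarative_base()

class Thing(Base):
    __tablename__ = "thing"
    id = Column(Integer, primary_key=True)

def chunked_in(column, values, size=1000):
    """OR together several IN clauses so no single list exceeds `size`."""
    chunks = [values[i:i + size] for i in range(0, len(values), size)]
    return or_(*[column.in_(chunk) for chunk in chunks])

engine = create_engine("sqlite://")
Base.metadata.create_all(engine)

s = Session(engine)
s.add_all([Thing(id=i) for i in range(1, 11)])
s.commit()

ids = list(range(1, 301))  # 300 values, split into IN lists of 100
rows = s.query(Thing).filter(chunked_in(Thing.id, ids, size=100)).all()
print(len(rows))  # 10
```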





Re: [sqlalchemy] Is there a reason why there is no dialect specific fix for ORA-01795?

2014-06-17 Thread Mike Bayer

On 6/17/14, 9:30 AM, Ofir Herzas wrote:
 This can be done only for Oracle and only if the number of expressions
 is higher than 1000 (which would otherwise produce an exception)
 Regarding complex queries, I guess the several or_'s fix should work:
 or_(x.in_(1...1000), x.in_(1001...2000))

 How can this be done locally? (override in_ operator only for Oracle)
I've looked at the thread again and it's not 100% clear to me which
solution you're referring to.   The linked message refers to using a
tuple_() format that somehow bypasses Oracle's normal limits; my
workaround is that I actually execute the statement multiple times as
needed.   Here it seems you're introducing a third potential workaround,
which is to join them together with or_().

For the tuple_() format or the or_() format, operators are overridden at
the type level, see
http://docs.sqlalchemy.org/en/rel_0_9/core/types.html#types-operators.  
There's not yet a direct hook to override how "in" compiles to a SQL
string for Oracle only.

but again this would never be a default behavior because Oracle's 1000
element limit is there for a reason and we'd like to encourage users to
stick to a database's most idiomatic patterns, else be notified that
they're not.









 Thanks,
 Ofir


 On Tuesday, June 17, 2014 4:18:11 PM UTC+3, Michael Bayer wrote:


 On 6/17/14, 4:32 AM, Ofir Herzas wrote:
  ORA-01795: maximum number of expressions in a list is 1000
 
  As I understand, there are several options to fix this issue
  (e.g.
 
 
 https://groups.google.com/forum/#!searchin/sqlalchemy/maximum$20number$20of$20expressions/sqlalchemy/Oa6YWNE0-IQ/6GVmy27B-FAJ
 

  or split in_ to several or_)
 
  Why not incorporate this fix into sqlalchemy?
 SQLAlchemy's core SQL model is one-to-one with SQL.  Taking a single
 statement and magically executing ten statements within the scope
 of an
 execute() due to a particular operator being present would be very
 inappropriate and surprising, and also wouldn't work for any queries
 that are more complex than a simple WHERE x IN y.






Re: [sqlalchemy] Re: FAQ for CREATE TABLE output incomplete

2014-06-18 Thread Mike Bayer

On 6/18/14, 2:06 AM, rpkelly wrote:
 It seems like passing literal_binds=True to the call to
 sql_compiler.process in get_column_default_string will work, so long
 as SQLAlchemy can convert the values to literal binds. Which, in the
 example given, isn't the case.

the long standing practice for passing literals into server_default and
other places is to use literal_column():

server_default=func.foo(literal_column("bar"))),

for the array, you need to use postgresql.array(), not func.array().  
It will work like this:

tbl = Table("derp", metadata,
    Column("arr", ARRAY(Text),
        server_default=array([literal_column("'foo'"),
                              literal_column("'bar'"),
                              literal_column("'baz'")])),
)
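Compiling that table against the PostgreSQL dialect confirms the default renders inline as a literal (a self-contained sketch of the same construct, with imports spelled out and the double quotes the list archive stripped restored):

```python
from sqlalchemy import Column, MetaData, Table, Text, literal_column
from sqlalchemy.dialects import postgresql
from sqlalchemy.dialects.postgresql import ARRAY, array
from sqlalchemy.schema import CreateTable

metadata = MetaData()
tbl = Table(
    "derp", metadata,
    Column("arr", ARRAY(Text),
           server_default=array([literal_column("'foo'"),
                                 literal_column("'bar'"),
                                 literal_column("'baz'")])),
)

# the server_default compiles into the CREATE TABLE statement itself
ddl = str(CreateTable(tbl).compile(dialect=postgresql.dialect()))
print(ddl)
```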

the docs suck. 
https://bitbucket.org/zzzeek/sqlalchemy/issue/3086/server_default-poorly-documented
is added (referring to
http://docs.sqlalchemy.org/en/rel_0_9/core/defaults.html#server-side-defaults).

then for literal_binds.  We've been slowly adding the use of this new
parameter with a fair degree of caution, both because it can still fail
on any non-trivial kind of datatype and also because a feature that
bypasses the bound parameter logic is just something we've avoided for
years, due to the great security hole it represents.We added it for
index expressions in #2742. 
https://bitbucket.org/zzzeek/sqlalchemy/issue/3087/literal_binds-in-server_default
will add it for server_default.   it's 1.0 for now but can be
potentially backported to 0.9.5.




Re: [sqlalchemy] Re: FAQ for CREATE TABLE output incomplete

2014-06-18 Thread Mike Bayer

On 6/18/14, 12:03 PM, Ryan Kelly wrote:
 On Wed, Jun 18, 2014 at 11:15 AM, Mike Bayer mike...@zzzcomputing.com wrote:
 On 6/18/14, 2:06 AM, rpkelly wrote:
 It seems like passing literal_binds=True to the call to
 sql_compiler.process in get_column_default_string will work, so long
 as SQLAlchemy can convert the values to literal binds. Which, in the
 example given, isn't the case.
 the long standing practice for passing literals into server_default and
 other places is to use literal_column():

 server_default=func.foo(literal_column("bar"))),
 The issue with this is that in my actual code, the values are read
 from somewhere else, so
 I was trying to find a safe way to use them without having to deal
 with quoting/escaping
 issues so my code is Jimmy-proof

 for the array, you need to use postgresql.array(), not func.array().
 It will work like this:
 When I created the example, I changed the name of the function from
 make_array to array.
 So it actually is a function call to make_array, so it seems I need to
 put postgresql.array()
 inside of func.make_array() (or use the variadic form and unpack the list).

 tbl = Table("derp", metadata,
 Column("arr", ARRAY(Text),
 server_default=array([literal_column('foo'),
 literal_column('bar'),
 literal_column('baz')])),
 )

 the docs suck.
 https://bitbucket.org/zzzeek/sqlalchemy/issue/3086/server_default-poorly-documented
 is added (referring to
 http://docs.sqlalchemy.org/en/rel_0_9/core/defaults.html#server-side-defaults).

 then for literal_binds.  We've been slowly adding the use of this new
 parameter with a fair degree of caution, both because it can still fail
 on any non-trivial kind of datatype and also because a feature that
 bypasses the bound parameter logic is just something we've avoided for
 years, due to the great security hole it represents.We added it for
 index expressions in #2742.
 https://bitbucket.org/zzzeek/sqlalchemy/issue/3087/literal_binds-in-server_default
 will add it for server_default.   it's 1.0 for now but can be
 potentially backported to 0.9.5.
 Right, but I also don't think it's safe to issue DDL with arbitrary
 input as it currently stands,
 even values which are correctly escaped/formatted/etc. might result in
 name collisions or
 shadowing, or other undesirable behavior. I'm not sure if the
 documentation makes a
 statement about issuing DDL using information from untrusted sources,
 but it probably should.
We are already rendering expressions in Index objects with inline
literals.   I'm not sure under what scenario rendering bounds as
literals would produce a name collision. As far as DDL from
untrusted sources, certainly we could add language for that, though a
system that is rendering DDL on the fly from untrusted input (as opposed
to, some kind of schema-construction tool that trusts the user) is so
crazy that I doubt those folks read the docs that carefully anyway :).






 -Ryan Kelly




Re: [sqlalchemy] Functions on column properties

2014-06-18 Thread Mike Bayer

On 6/18/14, 4:50 PM, Mike Solomon wrote:


 On Wednesday, June 18, 2014 at 22:03:33 UTC+3, Michael Bayer wrote:



 if you can show the SQL you expect that would help.  it seems in
 your SO question you want a subquery, you'd have to define that:

 class Holder(..):
 some_prop = column_property(select([func.max(1 * col1 / col2)]))



 I tried this strategy, and the problem is that when it is used in
 combination with other things it creates a cross product.  In the
 below example, I'd like for all the queries to return 10 results but
 the last one returns 100 because it does not join the table generated
 in the max statement with the Fraction table.  Ideally, I'd like the
 last query to only generate 10 rows where the last two columns in each
 row are all the maximum numerator and denominator.

 The sql I'd expect would be like that in my SO question: rows selected
 from a table joined to itself.

 ECHO = False
 from sqlalchemy.orm import sessionmaker, aliased
 from sqlalchemy import create_engine
 from sqlalchemy.ext.declarative import declarative_base
 from sqlalchemy import Column, Integer, String, select, func
 from sqlalchemy.orm import relationship
 from sqlalchemy.ext.hybrid import hybrid_property

 engine = create_engine('sqlite:///:memory:', echo=ECHO)
 Base = declarative_base()
 class Fraction(Base):
__tablename__ = 'fractions'

id = Column(Integer, primary_key = True)
prop_1 = Column(Integer)
prop_2 = Column(Integer)
prop_3_num = Column(Integer)
prop_3_den = Column(Integer)

@hybrid_property
def prop_3(self) :
  return 0

@prop_3.expression
def prop_3(self) :
  alias_1 = aliased(Fraction)
  alias_2 = aliased(Fraction)
  statement1 = select([func.max(1.0 * alias_2.prop_3_num /
 alias_2.prop_3_den).label('fmax')])
  statement2 = select([(1.0 * alias_2.prop_3_num /
 alias_2.prop_3_den).label('fdec'),
 alias_2.prop_3_num.label('max_num'), alias_2.prop_3_den.label('max_den')])
  return select([statement2.c.max_num.label('prop_3_max_num'),
 statement2.c.max_den.label('prop_3_max_den')]).\
 select_from(statement2).join(statement1,
 onclause=statement1.c.fmax == statement2.c.fdec)

 Base.metadata.create_all(engine)

 Session = sessionmaker(bind=engine)
 session = Session()


 for x in range(10) :
   session.add(Fraction(prop_1=x%2, prop_2=x%4, prop_3_num = x+1,
 prop_3_den=x+2))

 session.commit()

 # should only print two rows...don't know why it prints 3
 for x in session.query(Fraction.prop_3) :
   print x

 for x in session.query(Fraction.prop_1) :
   print x

 for x in session.query(Fraction.prop_1, Fraction.prop_2) :
   print x

 for x in session.query(Fraction.prop_1, Fraction.prop_2,
 Fraction.prop_3) :
   print x
not looking deeply but the hybrid you have in prop_3 doesn't seem to
have any relationship to the base set of rows you're getting from
fractions.  it returns multiple rows because statement2 isn't using
any aggregates.

How about a straight SQL string?  what SQL do you expect?  these are
very easy to link to a hybrid.








Re: [sqlalchemy] How to have SQL IF in sqlalchemy

2014-06-18 Thread Mike Bayer

On 6/18/14, 8:31 PM, Vineet Goel wrote:
 Hi,

 I am trying to convert the following SQL to SQLAlchemy:

 SELECT teams.department, teams.team,
 IF(employee_managers.team_id IS NOT NULL, employee_managers.manager, 
 teams.default_manager) AS manager
 FROM teams
 LEFT JOIN employee_managers ON employee_managers.team_id = teams.id
 WHERE teams.department = 'hr' and (employee_managers.name = 'vineet' OR 
 employee_managers.name IS NULL)

the SQL standard version of IF is CASE, SQLAlchemy supports CASE
directly see
http://docs.sqlalchemy.org/en/rel_0_9/core/sqlelement.html?highlight=case#sqlalchemy.sql.expression.case
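A minimal sketch of that conversion, with column() stand-ins for the mapped columns in this thread (written against the modern positional case() form; the 0.9-era API in the linked docs took a list of (condition, value) tuples instead):

```python
from sqlalchemy import case, column

# column() stand-ins for the mapped columns in this thread
team_id = column("team_id")
manager = column("manager")
default_manager = column("default_manager")

# CASE is the SQL-standard spelling of MySQL's IF(cond, a, b)
expr = case(
    (team_id.is_not(None), manager),
    else_=default_manager,
)
print(str(expr))
```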



 where my models look like this:

 class EmployeeTeam(db.Model):
   __tablename__ = 'teams'
   id = db.Column(db.Integer, primary_key=True)
   name = db.Column(db.String(140), nullable=False)
   department = db.Column(db.String(140), nullable=False)
   team = db.Column(db.String(140), nullable=False)
   default_manager = db.Column(db.String(140), nullable=False)

 class EmployeeManagers(db.Model):
   __tablename__ = 'employee_managers'
   id = db.Column(db.Integer, primary_key=True)
   name = db.Column(db.String(140), nullable=False)
   team_id = db.Column(db.Integer, db.ForeignKey('teams.id'))
   team = db.relationship('EmployeeTeam')
 manager = db.Column(db.String(140), nullable=False)
 After hours of googling, I found some func.IF kind of stuff but they
 all keep giving me an  OperationalError: (OperationalError) (1248,
 'Every derived table must have its own alias').

 Does anyone know a simple way to convert this SQL to SQLAlchemy?

 Any help would be much appreciated.


Re: [sqlalchemy] Functions on column properties

2014-06-19 Thread Mike Bayer

On 6/19/14, 4:09 AM, Mike Solomon wrote:



 It's difficult to issue a straight SQL string for the hybrid property
 itself because


sorry, I meant, please write the query *that you really want* as a SQL
string.   Don't use SQLAlchemy.   It's better to work in that direction.

If you don't know what the SQL you want is, that's a different issue,
I'd start on that part first.




Re: [sqlalchemy] hybrid_properties and literals

2014-06-19 Thread Mike Bayer

On 6/19/14, 1:05 PM, AM wrote:
 What I am storing is things like string versions of lists, tuples and
 dicts, for e.g.:

 str([1, 2, 3])
 str({'a': 1})

 and so on. ast.literal_eval will only parse those and return those, it
 does not evaluate expressions and statements so no real code at all.

 I got around this issue by creating a PythonASTLiteralColumn based on
 the example in the docs and that worked perfectly.

 Having said that I would still like to understand how to approach a
 situation where I want a hybrid property that is a normal python data
 type, if that's possible at all?
SQLAlchemy is a bridge between Python and SQL but there is no magic in
creating the thing on each side of that bridge.   If you want a SQL
query that interrogates a column of data in a way that is meaningful
regarding some kind of in-Python behavior, you need to decide how that
behavior will be expressed in your SQL query.   The hybrid will work
fine at the instance level but if you want it to behave meaningfully in
a SQL query you'd first need to know what the SELECT statement you want
will actually look like in terms of raw SQL.




Re: [sqlalchemy] unable to open database file

2014-06-19 Thread Mike Bayer
no but this is more of a pysqlite/sqlite3 issue, you should ask on the
Python users list, and refer to the sqlite3.connect() function:

import sqlite3
conn = sqlite3.connect("/path/to/file.db")
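A stdlib-only sketch (the tempfile path is illustrative) showing sqlite3.connect() accepting a path with a non-ASCII component on a modern Python 3:

```python
import os
import sqlite3
import tempfile

# a directory whose name contains the non-ASCII character from this thread
d = tempfile.mkdtemp(suffix="Á")
path = os.path.join(d, "file.db")

conn = sqlite3.connect(path)  # absolute path with a non-ASCII component
conn.execute("CREATE TABLE t (x INTEGER)")
conn.commit()
conn.close()
print(path)
```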


On 6/19/14, 2:28 PM, Scott Horowitz wrote:
 Hi,

 A user of my application is getting an "unable to open database file
 None None" error because the file path to their database has a Á
 character in it. It works fine if the character is removed, but that
 is not a good solution.

 Does anyone know how to solve this?

 Thanks,
 Scott


Re: [sqlalchemy] Functions on column properties

2014-06-19 Thread Mike Bayer

On 6/19/14, 2:41 PM, Mike Solomon wrote:


 On Thursday, June 19, 2014 at 16:10:19 UTC+3, Michael Bayer wrote:


 On 6/19/14, 4:09 AM, Mike Solomon wrote:
 
 
 
  It's difficult to issue a straight SQL string for the hybrid
 property
  itself because


 sorry, I meant, please write the query *that you really want* as a
 SQL
 string.   Don't use SQLAlchemy.   It's better to work in that
 direction.

 If you don't know what the SQL you want is, that's a different issue,
 I'd start on that part first.


 Ah, OK.  The SQL below will group fractions based on tag and give the
 num(erator) and den(ominator) of the maximum for each group. In
 Python, I'd like to have a fraction class that has members tag and val
 where val is a hybrid_property combining num and den.   I'd like to be
 able to do a query like session.query(Fraction.tag,
 func.max(Fraction.val)).group_by(Fraction.tag) and get the SQL below:


 SELECT DISTINCT fraction_a.tag, fraction_a.high,
   fraction_b.num, fraction_b.den
 FROM
   (SELECT fraction.tag, max(1.0 * fraction.num / fraction.den) AS high
 FROM fraction
 GROUP BY fraction.tag)
   AS fraction_a JOIN
   (SELECT fraction.tag, fraction.num, fraction.den
 FROM fraction)
   AS fraction_b
   ON fraction_a.tag = fraction_b.tag
 AND fraction_a.high = 1.0 * fraction_b.num / fraction_b.den;
So to the extent that 1.0 * num / den is a column-based expression you
like to use in your query, it's a good candidate for a hybrid or
column_property (deferred one in case you don't want to load it
unconditionally).   But as far as the FROM clauses, when we work with
Query(), the FROM clauses are always distinct entities that we have to
combine together as we want, there's never any kind of implicit behavior
with that.

Here's the appropriate place to use column_property() (specifically
deferred() so that it doesn't get loaded by default) in terms of how the
query should come out:

from sqlalchemy import *
from sqlalchemy.orm import *
from sqlalchemy.ext.declarative import declarative_base

Base = declarative_base()

class Fraction(Base):
__tablename__ = 'fraction'

id = Column('id', Integer, primary_key=True)
tag = Column(Integer)
num = Column(Integer)
den = Column(Integer)

high = deferred(1.0 * num / den)

fraction_a = select([
Fraction.tag,
func.max(Fraction.high).label("high"),
]).group_by(Fraction.tag).alias('fraction_a')

fraction_b = aliased(Fraction, name="fraction_b")
sess = Session()

q = sess.query(fraction_a, fraction_b.num, fraction_b.den).\
distinct().\
join(fraction_b, and_(
fraction_a.c.high == fraction_b.high,
fraction_a.c.tag == fraction_b.tag
)
)
print q

output:

SELECT DISTINCT fraction_a.tag AS fraction_a_tag, fraction_a.high AS
fraction_a_high, fraction_b.num AS fraction_b_num, fraction_b.den AS
fraction_b_den
FROM (SELECT fraction.tag AS tag, max((:param_1 * fraction.num) /
fraction.den) AS high
FROM fraction GROUP BY fraction.tag) AS fraction_a JOIN fraction AS
fraction_b ON fraction_a.high = (:param_2 * fraction_b.num) /
fraction_b.den AND fraction_a.tag = fraction_b.tag









Re: [sqlalchemy] unable to open database file

2014-06-19 Thread Mike Bayer

On 6/19/14, 3:37 PM, Scott Horowitz wrote:
 Michael,

 Thanks for the hint about python's sqlite3.

 I'll just point out that I can work around the issue directly with
 sqlite3 by providing a relative path that does not include the character:

 import sqlite3, os
 os.chdir("/path/with/non/ascii/character")
 conn = sqlite3.connect("file.db")

 However if I take this same approach with sqlalchemy, it does not fix
 the issue. It appears that this is because sqlalchemy always provides
 the absolute path to sqlite3.
well if sqlite3.connect(os.path.abspath("relative/path")) is failing,
that's something for the Python core / SQLite folks regardless.

if you need a workaround right now you can pass creator to the engine:

e = create_engine("sqlite://", creator=lambda: sqlite3.connect("whatever"))
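A runnable sketch of the creator workaround; the in-memory database here is illustrative. The point is that the URL only selects the dialect, while creator() supplies the DBAPI connection and bypasses SQLAlchemy's own path handling entirely:

```python
import sqlite3

from sqlalchemy import create_engine, text

# the URL only picks the dialect; creator() supplies the DBAPI connection
engine = create_engine(
    "sqlite://",
    creator=lambda: sqlite3.connect(":memory:"),
)

with engine.connect() as conn:
    value = conn.execute(text("select 1")).scalar()
print(value)
```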




Re: [sqlalchemy] correlating related deletes

2014-06-19 Thread Mike Bayer

On 6/19/14, 8:02 PM, Jonathan Vanasco wrote:
 Due to a business requirement and an odd distribution of data /
 performance issue, my database currently has 2 tables which are
 inter-related :

 class Relationship():
  id_a = int ,references table_a(id)
  id_b = int, references table_b(id)
  relationship_id = int, references relationships(id)

 class RelationshipFoo():
  id_a = int ,references table_a(id)
  id_b = int, references table_b(id)
  display_name = varchar(64)

 97% of the relationships in the database are the 'Relationship' type.
  This table is basically readonly. 
 3%, are 'RelationshipFoo', which have an editable 'name'.   This table
 is readwrite, as these are often edited.

 There was a noticeable gain moving from 1 to 2 tables.  So be it.

 Here's my issue.

 If I delete a RelationshipFoo, it requires me to also delete a
 corresponding `Relationship( id_a=foo.id_a , id_b=foo.id_b,
 relationship_id=5 )`

 For stuff like this, I would normally just use the engine directly.  

 However, this particular operation happens in a huge block of content
 management operations... and I don't want to emit any sql / `flush` to
 the database yet.  there's still handful of operations that could
 trigger a rollback.

 Aside from preloading a `Relationship` object and then marking it for
 deletion in the session, is there any trick/technique I can use to
 create a deletion request for the `Relationship` object that won't
 emit sql until I flush ?
cascading foreign key is one way. or before_delete() /
after_delete() event, just emit the DELETE straight on the connection,
no need to load Relationship.
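A minimal runnable sketch of the after_delete approach, using the model names from this thread (the relationship_id=5 marker and table names are assumptions). The sibling DELETE is emitted inside the same flush, on the same transaction, so nothing hits the database until the session flushes and a rollback undoes both deletes:

```python
from sqlalchemy import Column, Integer, String, create_engine, event
from sqlalchemy.orm import Session, declarative_base

Base = declarative_base()

class Relationship(Base):
    __tablename__ = "relationships"
    id = Column(Integer, primary_key=True)
    id_a = Column(Integer)
    id_b = Column(Integer)
    relationship_id = Column(Integer)

class RelationshipFoo(Base):
    __tablename__ = "relationships_foo"
    id = Column(Integer, primary_key=True)
    id_a = Column(Integer)
    id_b = Column(Integer)
    display_name = Column(String(64))

@event.listens_for(RelationshipFoo, "after_delete")
def delete_plain_sibling(mapper, connection, target):
    # runs during the flush, on the flush's own connection: the DELETE
    # participates in the session transaction and rolls back with it
    rel = Relationship.__table__
    connection.execute(
        rel.delete().where(
            (rel.c.id_a == target.id_a)
            & (rel.c.id_b == target.id_b)
            & (rel.c.relationship_id == 5)
        )
    )

engine = create_engine("sqlite://")
Base.metadata.create_all(engine)
session = Session(engine)
session.add_all([
    Relationship(id_a=1, id_b=2, relationship_id=5),
    RelationshipFoo(id_a=1, id_b=2, display_name="pair"),
])
session.commit()

# deleting the Foo row also removes its plain sibling, in one flush
session.delete(session.query(RelationshipFoo).one())
session.commit()
remaining = session.query(Relationship).count()
print(remaining)
```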




Re: [sqlalchemy] Could not locate column in row for column

2014-06-20 Thread Mike Bayer

On 6/20/14, 3:59 AM, Belegnar Dragon wrote:
 Hello.

 SQLAlchemy 0.9.4

 In this code

 def unidecode_column_name(inspector, table, column_info):
 column_info['key'] = unidecode(column_info['name']).replace(u' ',
 u'_')

 engine = create_engine("mssql+pyodbc://%s:%s@RTBD" %
 (settings.RT_USER, settings.RT_PWD), echo = True)
 metadata = MetaData(bind = engine)
 metadata.reflect(engine, only = [u"Заказы",], listeners =
 [('column_reflect', unidecode_column_name)])
 orders = metadata.tables[u'Заказы']
 class Order(object):
 pass
 mapper(Order, orders)

 sess = Session(engine)
 q = sess.query(Order)
 q.filter_by(Kod_zakazchika = u'F07301m').all()

 I've got the following error

 NoSuchColumnError Traceback (most recent call
 last)
 <ipython-input-2-83aa85e1bb5a> in <module>()
  1 q.filter_by(Kod_zakazchika = u'F07301m').all()

 local/lib/python2.7/site-packages/sqlalchemy/orm/query.pyc in all(self)
2290
2291 
 -> 2292 return list(self)
2293
2294 @_generative(_no_clauseelement_condition)

 local/lib/python2.7/site-packages/sqlalchemy/orm/loading.py in
 instances(query, cursor, context)
  70 process[0](row, rows)
  71 elif single_entity:
 ---> 72 rows = [process[0](row, None) for row in fetch]
  73 else:
  74 rows = [util.KeyedTuple([proc(row, None) for proc
 in process],

 local/lib/python2.7/site-packages/sqlalchemy/orm/loading.py in
 _instance(row, result)
 359 identitykey = (
 360 identity_class,
 --> 361 tuple([row[column] for column in
 pk_cols])
 362 )
 363

 local/lib/python2.7/site-packages/sqlalchemy/engine/result.pyc in
 _key_fallback(self, key, raiseerr)
 330 raise exc.NoSuchColumnError(
 331 Could not locate column in row for column
 '%s' %
 --> 332 expression._string_or_unprintable(key))
 333 else:
 334 return None

 NoSuchColumnError: Could not locate column in row for column
 '\\u0417\\u0430\\u043a\\u0430\\u0437\\u044b.\\u041d\\u043e\\u043c\\u0435\\u0440
 \\u0437\\u0430\\u043a\\u0430\\u0437\\u0430'

 The problem is i can't really debug this error because it isn't clear
 what should be in pk_cols

 Previously in orm/loading.py there is a string
 pk_cols = mapper.primary_key # line number 250

 So, pk_cols is a list of Column() objects.
 row is a list of values from query
 Surely, [row[column] for column in pk_cols] raises error, because
 column is a Column() object and can't be index for row. But i can't
 imagine how this code may work. Is this a bug?

the ResultProxy contains translation logic that receives Column objects
and locates the data by name.  This is documented at
http://docs.sqlalchemy.org/en/rel_0_9/core/tutorial.html#selecting and
is the primary method by which the ORM relates mapped columns to result
sets.

In this case the issue is most likely yet another pyodbc + unicode
encoding issue, of which there are many, and often there's no way to
work around.  Need to know 1. OS platform 2. FreeTDS version 3. UnixODBC
or iODBC (and what version) 4. pyodbc version 5. SQL Server version.   I
can try to test but often these unicode issues aren't easy to resolve
(have you tried pymssql? ).thanks.




Re: [sqlalchemy] Could not locate column in row for column

2014-06-20 Thread Mike Bayer

On 6/20/14, 7:32 AM, Mike Bayer wrote:
 NoSuchColumnError: Could not locate column in row for column
 '\\u0417\\u0430\\u043a\\u0430\\u0437\\u044b.\\u041d\\u043e\\u043c\\u0435\\u0440
 \\u0437\\u0430\\u043a\\u0430\\u0437\\u0430'

 The problem is i can't really debug this error because it isn't clear
 what should be in pk_cols

 Previously in orm/loading.py there is a string
 pk_cols = mapper.primary_key # line number 250

 So, pk_cols is a list of Column() objects.
 row is a list of values from query
 Surely, [row[column] for column in pk_cols] raises error, because
 column is a Column() object and can't be index for row. But i can't
 imagine how this code may work. Is this a bug?
 the ResultProxy contains translation logic that receives Column objects
 and locates the data by name.  This is documented at
 http://docs.sqlalchemy.org/en/rel_0_9/core/tutorial.html#selecting and
 is the primary method by which the ORM relates mapped columns to result
 sets.

 In this case the issue is most likely yet another pyodbc + unicode
 encoding issue, of which there are many, and often there's no way to
 work around.  Need to know 1. OS platform 2. FreeTDS version 3. UnixODBC
 or iODBC (and what version) 4. pyodbc version 5. SQL Server version.   I
 can try to test but often these unicode issues aren't easy to resolve
 (have you tried pymssql? ).thanks.

good news, I can reproduce this, and on my end at least it needs the
so-called description_encoding workaround.   We may have to revisit
the defaults on this parameter for modern versions of Pyodbc.  The test
below produces your error without the param, resolves with it.   Please
try this out on your create_engine(), thanks.

#! coding: utf-8

from sqlalchemy import *

engine = create_engine("mssql+pyodbc://scott:tiger@ms_2008", echo=True,
description_encoding='utf8')

colname = u'Заказ.Номер заказа'

m = MetaData()
t = Table(u"Заказы", m, Column(colname, String(30), key='somecol'))

m.drop_all(engine)
m.create_all(engine)

engine.execute(t.insert().values(somecol='some value'))
result = engine.execute(t.select().where(t.c.somecol == 'some value'))
row = result.fetchone()
print row[t.c.somecol]








Re: [sqlalchemy] Could not locate column in row for column

2014-06-20 Thread Mike Bayer

On 6/20/14, 12:14 PM, Mike Bayer wrote:
 On 6/20/14, 7:32 AM, Mike Bayer wrote:
 NoSuchColumnError: Could not locate column in row for column
 '\\u0417\\u0430\\u043a\\u0430\\u0437\\u044b.\\u041d\\u043e\\u043c\\u0435\\u0440
 \\u0437\\u0430\\u043a\\u0430\\u0437\\u0430'

 The problem is i can't really debug this error because it isn't clear
 what should be in pk_cols

 Previously in orm/loading.py there is a string
 pk_cols = mapper.primary_key # line number 250

 So, pk_cols is a list of Column() objects.
 row is a list of values from query
 Surely, [row[column] for column in pk_cols] raises error, because
 column is a Column() object and can't be index for row. But i can't
 imagine how this code may work. Is this a bug?
 the ResultProxy contains translation logic that receives Column objects
 and locates the data by name.  This is documented at
 http://docs.sqlalchemy.org/en/rel_0_9/core/tutorial.html#selecting and
 is the primary method by which the ORM relates mapped columns to result
 sets.

 In this case the issue is most likely yet another pyodbc + unicode
 encoding issue, of which there are many, and often there's no way to
 work around.  Need to know 1. OS platform 2. FreeTDS version 3. UnixODBC
 or iODBC (and what version) 4. pyodbc version 5. SQL Server version.   I
 can try to test but often these unicode issues aren't easy to resolve
 (have you tried pymssql? ).thanks.
 good news, I can reproduce this, and on my end at least it needs the
 so-called description_encoding workaround.   We may have to revisit
 the defaults on this parameter for modern versions of Pyodbc.  The test
 below produces your error without the param, resolves with it.   Please
 try this out on your create_engine(), thanks.

 #! coding: utf-8

 from sqlalchemy import *

 engine = create_engine("mssql+pyodbc://scott:tiger@ms_2008", echo=True,
 description_encoding='utf8')

 colname = u'Заказ.Номер заказа'

 m = MetaData()
 t = Table(u"Заказы", m, Column(colname, String(30), key='somecol'))

 m.drop_all(engine)
 m.create_all(engine)

 engine.execute(t.insert().values(somecol='some value'))
 result = engine.execute(t.select().where(t.c.somecol == 'some value'))
 row = result.fetchone()
 print row[t.c.somecol]
yeah this is the workaround for now, but totally this is a bug back to
0.8 and further, should be backported for 0.8, 0.9 and 1.0 in
https://bitbucket.org/zzzeek/sqlalchemy/issue/3091/update-description_encoding-for-pyodbc.
  
Two different issues located.



Re: [sqlalchemy] A question regarding hybrid properties, relationships and schema changes

2014-06-20 Thread Mike Bayer

On 6/20/14, 3:38 PM, Ken Lareau wrote:
 So in the ongoing improvement of one of our internal databases, we created
 a new table named 'environments' whose SQLA code looks something like
 this:

 class Environment(Base):
 __tablename__ = 'environments'

 id = Column(u'environmentID', INTEGER(), primary_key=True)
 environment = Column(String(length=15), nullable=False, unique=True)
 env = Column(String(length=12), nullable=False, unique=True)
 domain = Column(String(length=32), nullable=False, unique=True)
 prefix = Column(String(length=1), nullable=False)

 Two of our tables recently needed conversion to stop using their own local
 'environment' column to using this table.  The first part's been put
 in place
 (a new foreign key for 'environment_id'), but to prevent large swaths
 of code
 from needing changes, a thought of using a hybrid property might allow the
 change to be hidden (until the code using it had been rewritten at least).

 My naive attempt was the following (just the relevant snippet):

 environment_obj = relationship('Environment')

 @hybrid_property
 def environment(self):
 return self.environment_obj.environment

 Unfortunately (and in hindsight for obvious reasons), this code
 doesn't work,
what does "doesn't work" mean here?   This will work at the instance
level.  At the query level, not so much, that's true; if you truly want
no code to change you'd need to implement an @expression here that's a
little inefficient, as it needs to do a correlated subq:

class HasEnv(Base):
__tablename__ = 'has_env'

id = Column(INTEGER, primary_key=True)
environment_id = Column(ForeignKey('environments.environmentID'))

environment_obj = relationship('Environment')

@hybrid_property
def environment(self):
return self.environment_obj.environment

@environment.expression
def environment(cls):
return select([Environment.environment]).\
where(Environment.id == cls.environment_id).correlate(cls).\
as_scalar()


s = Session()

print s.query(HasEnv).filter(HasEnv.environment == 'some env')

output:

SELECT has_env.id AS has_env_id, has_env.environment_id AS
has_env_environment_id
FROM has_env
WHERE (SELECT environments.environment
FROM environments
WHERE environments.environmentID = has_env.environment_id) = :param_1

won't perform well from a SQL perspective but will do the job...



 but a very brief conversation with someone on the #sqlalchemy channel on
 Freenode indicated there was no way to do this and all the relevant code
 must be reworked.  While it's only a few dozen places this occurs, I
 can see
 this coming up again in the future as further schema refactorings
 occur, so
 I turn to those with more expertise to find out if there is a way to
 accomplish
 what I desire, or if there's really no hope. :)  Any insight would be
 greatly
 appreciated.
I don't know how to fix this issue with IRC and stackoverflow that
people constantly are getting bad information. 


-- 
You received this message because you are subscribed to the Google Groups 
sqlalchemy group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to sqlalchemy+unsubscr...@googlegroups.com.
To post to this group, send email to sqlalchemy@googlegroups.com.
Visit this group at http://groups.google.com/group/sqlalchemy.
For more options, visit https://groups.google.com/d/optout.


Re: [sqlalchemy] getting error with column name end using geoalchemy2

2014-06-21 Thread Mike Bayer

On 6/21/14, 8:34 AM, Chung WONG wrote:
 Hi list,
 I am encountering a very strange error and I am scratching my head and
 got no idea what is going on. 

 class Line(Base):
 __tablename__ = 'lines'
 id = Column(Integer, Sequence('line_id_seq'), primary_key=True)

 start = Column(Geometry('POINT'), nullable=False, index=False)
 end = Column(Geometry('POINT'), nullable=False, index=False)


 On creating this table, it threw a strange error:

 sqlalchemy.exc.ProgrammingError: (ProgrammingError) syntax error at
 or near "end"
 LINE 1: ...INDEX idx_lines_end ON public.lines USING GIST (end)
    ^
 'CREATE INDEX idx_lines_end ON public.lines USING GIST (end)' {}


 The created table is :

 CREATE TABLE lines
 (
   id integer NOT NULL,
   start geometry(Point) NOT NULL,
   "end" geometry(Point) NOT NULL,
   CONSTRAINT lines_pkey PRIMARY KEY (id)
 )
 WITH (
   OIDS=FALSE
 );
 ALTER TABLE lines
   OWNER TO postgres;

 CREATE INDEX idx_lines_start
   ON lines
   USING gist
   (start);

 It is weird there are quotes surrounding the word "end", and although
 I have specified index=False, for some reason indexes are still
 created automatically.
 Anything other than end, such as end_ or end1, worked perfectly.
 Is end a keyword for postgis or geoalchemy2?
"end" is likely a reserved word; there's nothing wrong with using it as
a column name as SQLAlchemy will quote it, though the other poster has a
point that it's always better to avoid reserved words, if for no other
reason than reducing verbosity.
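To see the quoting behavior described here, you can compile the DDL for a table with a reserved column name without touching a database. This is a minimal sketch (not from the original thread); the table and column names are illustrative:

```python
from sqlalchemy import Column, Integer, MetaData, Table
from sqlalchemy.schema import CreateTable

m = MetaData()
t = Table(
    "lines", m,
    Column("id", Integer, primary_key=True),
    Column("end", Integer),  # "end" is a SQL reserved word
)

# compile the CREATE TABLE statement to a string; the compiler
# quotes "end" automatically, while "id" needs no quoting
ddl = str(CreateTable(t))
print(ddl)
```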

as far as the Index, I don't have an insight on why the index is being
created; perhaps geoalchemy is involved there, not really sure.  See
if changing the Geometry type to something else temporarily changes things.



Re: [sqlalchemy] Is using column_property preferred over hybrid property in some circumstances?

2014-06-21 Thread Mike Bayer

On 6/21/14, 4:00 PM, Bao Niu wrote:
 The documentation regarding column_property and hybrid property says
 both methods are intended for linking a sql expression to an
 attribute, which is highly useful. But it is not obvious to junior
 users which one is preferred.
Heh.   Well, one is much older than the other, and truthfully it is
also not obvious to *me* which one is preferred :).   Hybrid property
is useful for more complex expressions where you'd like to have access
to an in-Python version when you already have an object loaded.

The way I used hybrids to a great extent, where i would not have used a
column_property(), was on an object that has dozens of methods used in a
complex report, and many of the methods built on each other (this is a
mock version of that):

class MyReportThing(Base):
    # ...

    @hybrid_property
    def x(self):
        # ...

    @hybrid_property
    def y(self):
        # ...

    @hybrid_property
    def x_y_variance(self):
        return variance(self.x, self.y, ...)

    @hybrid_property
    def average_foo_x(self):
        # ..

    # many more @hybrid_property objects


So when we'd generate a full report, we'd load all the MyReportThing
objects in fully.  The table had perhaps 20 columns.  But the report had
about 100.  This is because from those 20 columns, the hybrids produced
80 other variants of the data within.This processing was much better
accomplished in Python than on the relational database side;  we only
had to load 20 columns in and then use those 20 cols to generate many
other answers about that data.   If it was all on the DB side there
would have been an enormous SQL statement with 100 columns.

But we still wanted to make these hybrids, because in some cases the
functions we had on MyReportThing were also useful in a query now and
then to use in query.filter(), like
query.filter(MyReportThing.average_foo_x > 25).  In fact most of the
hybrids were part of an abstract interface that applied to many types of
objects, not just MyReportThing, and in this way the hybrids were very
flexible in-Python and sometimes in-SQL methods that didn't otherwise
have any complex interactions within the mapping process.   Some
classes would override some of the hybrids to do something differently,
and some of them would even present in-Python only for some of them;
the @expression form would be overridden to raise a NotImplementedError
for some objects where calculating the values relied on things that
couldn't be acquired in a simple SQL expression.

I think for general use, business-level methods, hybrids are a lot more
flexible and work very well with heavily object oriented techniques.  
The column_property() on the other hand is more about data that you'd
like to load every time on your object.   When there's some view of data
that you really want at the SQL level, every time and up front when you
query(), column_property() can work better for that.   But with
column_property() you really can't fall back onto any kind of
Python-side replacement of the feature.  It's more rigidly tied to the
structure of how the database is queried.

column_property() is also used for rudimentary mapping tasks, like if two
columns in a table or a JOIN should have the same value.
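As a small runnable illustration of that last point (my own sketch, not part of the original post; the User/fullname names are illustrative and assume SQLAlchemy 1.4+), a column_property() built as a SQL expression over two mapped columns:

```python
from sqlalchemy import Column, Integer, String, create_engine
from sqlalchemy.orm import Session, column_property, declarative_base

Base = declarative_base()

class User(Base):
    __tablename__ = "users"
    id = Column(Integer, primary_key=True)
    firstname = Column(String(50))
    lastname = Column(String(50))
    # evaluated as SQL (string concatenation) and loaded up front
    # with every query against User
    fullname = column_property(firstname + " " + lastname)

engine = create_engine("sqlite://")
Base.metadata.create_all(engine)

s = Session(engine)
s.add(User(firstname="Ed", lastname="Jones"))
s.commit()

loaded = s.query(User).first()
print(loaded.fullname)  # "Ed Jones"
```

Unlike a hybrid, there is no Python-side fallback here; the value always comes from the SQL expression emitted with the query.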

 I started by reading the ORM tutorial. The tutorial often references
 links to other pages, which makes it impossible for a newbie to learn
 something just by reading the tutorial itself.
well the tutorial is supposed to give you the general idea of how to
interact between a set of database tables and objects using SQLAlchemy. 
However, it isn't by itself going to teach you much about object
oriented programming or about relational database programming
techniques.  I'm not sure of your background, but if it is the case that
you're learning a little bit about all three at the same time, that is a
much harder hill to climb.   I think this is somewhat the case for most
people that I interact with and is entirely normal - for example, I'm
now learning lots about Openstack, but as it turns out, in order to use
Openstack I really have to learn a whole lot more about networking as
well, and this is hard.  But I can't blame Openstack's docs for the fact
that I'm not super clear on routers/subnetworks/etc. and all that,
there are multiple concepts to learn at once.   I watch very closely what
people say about SQLAlchemy and at the same time I try to get indicators
for where they are coming from; the people that really love SQLAlchemy
the most are those who already have a database background and are also
not hostile to object oriented programming.  The people who are the most
grumpy seem to be those who are learning database programming from the
ground up at the same time.   The product by itself, and its
documentation, is only trying to solve the problem of integrating these
two worlds; it can't go too far as it stands into filling in details on
either side of that bridge except to the degree that the work people put

Re: [sqlalchemy] Postgresql 9.4 JSONB support

2014-06-24 Thread Mike Bayer

On 6/24/14, 3:32 AM, Damian Dimmich wrote:
 Hello,

 I've started putting together support for JSONB - a new type
 introduced with the postgres 9.4 beta here:

 https://github.com/ddimmich/sqlalchemy

 As this shares a lot with the JSON datatype I've kept the code in the
 json.py file.  Please let me know if you think it should have its own
 file or would like me to structure the code differently.

 I've had a look through the tests and haven't found anything specific
 to dialect datatypes - are there no tests for these or am I just blind?
take a look at
https://bitbucket.org/zzzeek/sqlalchemy/src/f10eb28d90cbf73f4757897f52bf26722f98372e/test/dialect/postgresql/test_types.py?at=master#cl-1679,
that's pointing at the existing JSON tests for PG specifically.

As far as json.py, it all depends on how much new code vs. shared code
we're talking about.   If all the operator logic and such stays the same
and just the data marshaling is different, it is probably ok.




Re: [sqlalchemy] A question regarding hybrid properties, relationships and schema changes

2014-06-24 Thread Mike Bayer

On 6/23/14, 8:09 PM, Ken Lareau wrote:

 if apptier:
 subq = (
 Session.query(
 Package.pkg_name,
 Package.version,
 Package.revision,
 AppDefinition.app_type,
 AppDeployment.environment
 ).join(Deployment)
  .join(AppDeployment)
  .join(AppDefinition)
  .filter(Package.pkg_name == package_name)
  .filter(AppDeployment.environment == env)
  .filter(AppDeployment.status != 'invalidated'))

 [...]

 # The actual column name must be used in the subquery
 # usage below; DB itself should be corrected
 versions = (Session.query(subq.c.appType,
 subq.c.version,
 subq.c.revision)
 .group_by(subq.c.appType, subq.c.environment)
 .all())
  
 The parts in the subquery work fine, but the access of the 'column'
 in the final query leads to this:

 Traceback (most recent call last):
   File stdin, line 1, in module
   File /home/klareau/repos/git/tagopsdb/tagopsdb/deploy/deploy.py,
 line 234, in find_deployed_version
 .group_by(subq.c.appType, subq.c.environment)
   File
 /home/klareau/.virtualenvs/tds/lib/python2.6/site-packages/sqlalchemy/util/_collections.py,
 line 174, in __getattr__
 raise AttributeError(key)
 AttributeError: environment

 This is not completely surprising, but I'm uncertain as the best way
 to fix this... help? :)
nothing is obviously wrong, assuming you are calling subquery() on
subq at the end.  It will be a select() construct which will have a
column called .environment on the .c. collection because
AppDeployment.environment is present.   Can't say why this attribute is
not here without full details.




Re: [sqlalchemy] inheriting from mapped classes

2014-06-24 Thread Mike Bayer

On 6/24/14, 5:44 PM, Victor Olex wrote:
 So, what is the right idiom for building SQLAlchemy persistence into
 classes that need to do more than just that i.e. have run-time state.
 I was hoping that deriving from SQLAlchemy model classes, but that
 does not seem to be it. Another option would be to encapsulate a model
 class within the runtime class, but that way we need to wrap
 SQLAlchemy session and queries functionality into some helper functions.
Typically the class that has whatever methods and state that you want is
also mapped directly.  There's no need to have User and Thinker as
separate classes.

if you truly don't want anything to do with persistence visibly present
on classes, that's what classical mapping using mapper() and Table was
designed for, or alternatively you can make a business-level class as
abstract, or as a mixin:

class MyClass(Base):
__abstract__ = True

   def my_business_method(self):
  #...

class MyMappedClass(MyClass):
   # ...

depends really on how you need the two roles of business logic and
persistence to be separate.  Putting them all together is obviously the
most simplistic but that's what most people do, unless you're trying to
do something more J2EE-ish.
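The classical-mapping route mentioned above can be sketched like this (my own example, not from the thread; it uses registry.map_imperatively(), the 1.4+ successor to the bare mapper() function, and the class/table names are illustrative):

```python
from sqlalchemy import Column, Integer, MetaData, String, Table
from sqlalchemy.orm import registry

class User:
    """Plain business class; nothing persistence-related in its source."""
    def greet(self):
        return "hello, %s" % self.name

metadata = MetaData()
user_table = Table(
    "users", metadata,
    Column("id", Integer, primary_key=True),
    Column("name", String(50)),
)

# wire persistence onto the class from the outside; the class body
# itself never mentions SQLAlchemy
mapper_registry = registry()
mapper_registry.map_imperatively(User, user_table)

u = User()
u.name = "Descartes"
print(u.greet())  # hello, Descartes
```

Note the mapped class keeps its own constructor here, so attributes are assigned rather than passed as keyword arguments.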





 On Friday, May 30, 2014 2:11:53 PM UTC-4, Michael Bayer wrote:

 yep… here’s the error:

  sqlalchemy.orm.exc.FlushError: Attempting to flush an item of type
  <class '__main__.Thinker'> as a member of collection
  "Address.user". Expected an object of type <class '__main__.User'>
  or a polymorphic subclass of this type. If <class
  '__main__.Thinker'> is a subclass of <class '__main__.User'>,
  configure mapper Mapper|User|users to load this subtype
  polymorphically, or set enable_typechecks=False to allow any
  subtype to be accepted for flush. 


 enable_typechecks=False disables this check:

 user = relationship(User, enable_typechecks=False,
 backref=backref('addresses', order_by=id))

 it just means that later on, when you hit some_address.user, you
 may get a User back, not a Thinker (or you will, if it hasn’t been
 expired.   you can’t rely on it being consistent).   If that’s OK,
 then set the flag - it just wants to check that this is what you
 intend.




 On May 30, 2014, at 1:51 PM, Victor Olex victo...@vtenterprise.com wrote:

 Hello all, long time no see...

 Is it OK to create classes which inherit from mapped classes
 but are not meant to be persisted, and how does one do it so as to
 avoid FlushError on related classes?

 from sqlalchemy import *
 from sqlalchemy.ext.declarative import declarative_base
 from sqlalchemy.orm import Session, relationship, backref

 Base = declarative_base()

 class User(Base):
 __tablename__ = 'users'
 id = Column(Integer, Sequence('user_id_seq'), primary_key=True)
 name = Column(String(50))
 fullname = Column(String(50))
 password = Column(String(12))

 class Address(Base):
 __tablename__ = 'addresses'
 id = Column(Integer, primary_key=True)
 email_address = Column(String, nullable=False)
 user_id = Column(Integer, ForeignKey('users.id'))
 user = relationship(User, backref=backref('addresses',
 order_by=id))

 class Thinker(User):
 thought = 'Thoughts are not to be persisted'

 e = create_engine('sqlite:///', echo=True)
 Base.metadata.bind = e
 Base.metadata.create_all()

 t = Thinker(name='Descartes')
 s = Session(bind=e)
 s.add(t)
 s.commit() # no problem
 a = Address(user=t, email_address='...@gmail.com')
 s.commit() # FlushError

 Thanks,

 V.





Re: [sqlalchemy] Oracle with sequence for primary key and that sequence out of sync

2014-06-25 Thread Mike Bayer

On 6/25/14, 5:50 AM, Ofir Herzas wrote:
 Sorry to barge in, but I'm having the exact same issue and I'm pretty
 sure no one altered the sequence manually.
 I'm using sqlalchemy 0.8, python 2.7.6, rhel 6.5, oracle 10g,
 cx_oracle with the same connection string as above

 This issue started just recently after running ok for more than a
 thousand times.
 I should also say that my application is installed at several
 customers, and I have this issue only at the one using Oracle.

I'm open to possibilities for how this could happen, short of the ALTER
SEQUENCE command being emitted, in which case feel free to grep
SQLAlchemy's codebase for this clause (it's not there).   From my
vantage point, if there is actually an issue that is implicitly making
this happen outside of application code, it would have to be on the
driver or server side somehow.





 On Tuesday, June 3, 2014 1:18:42 AM UTC+3, Michael Bayer wrote:


  On Jun 2, 2014, at 1:17 PM, Scott Koranda skor...@gmail.com wrote:

 
  I investigated and found that the sequence seq_admin_groups_id
  was now at the value 68 after having been used previously to
  insert rows with IDs in the 500s.
 
  I stopped the code and used sqlplus to change the sequence
  back to a value in the high 500s. I then restarted the code
  and the exception no longer occurred.
 
  I am unable to explain how the sequence seq_admin_groups_id
  went from in the 500s and working fine to suddenly being 68.
 
  The only place in the Python code where the sequence is used
  explicitly is in the definition of the AdminGroup() class.
 
  I would be grateful for any insights on how the sequence might
  have become out of sync or anything I can change in the code
  to prevent it from happening again.

 there’s nothing on the Python library side that could do that,
 someone had to have run an ALTER SEQUENCE on the database side in
 order for that to happen.




Re: [sqlalchemy] Treating many-to-many relationships as many-to-one

2014-06-25 Thread Mike Bayer

On 6/25/14, 7:11 AM, Tara Andrews wrote:
 Hello,

 I am trying to use SQLAlchemy to map a database that was made
 available to me, whose contents and schema cannot be altered. The
 database has a few cases where a many-to-one relationship is treated
 as a many-to-many - that is, instead of the child key being set
 directly onto the parent record, the relationship is mapped through an
 association table. So in the mapping it looks like this:

 vname_association = Table('VNameFactoid', Base.metadata,
 Column('vnameKey', Integer, ForeignKey('VariantName.vnameKey')),
 Column('factoidKey', Integer, ForeignKey('Factoid.factoidKey')))

 class Factoid(Base):
 __tablename__ = 'Factoid'
 tstamp = Column(Time)
 engDesc = Column(Text)
 origLDesc = Column(Text)
 factoidKey = Column(Integer, primary_key=True)
 vnameInfo = relationship("VariantName",
 secondary=vname_association, backref="factoidData")

 class VariantName(Base):
 __tablename__ = 'VariantName'
 tstamp = Column(Time)
 vnameKey = Column(Integer, primary_key=True)
 name = Column(String)


 This means that I can get at the variant name information from the
 factoid by saying
 my_variant_name = some_factoid.vnameInfo[0].name

 But it would be much nicer if I could dispense with that [0], since I
 know that there will never be more than one record returned in the
 list. Is there something I can do to be able to say this?
 my_variant_name = some_factoid.vnameInfo.name

yes, you can apply the uselist=False option to the relationship.   If
the relationship loads more than one row you'll get a warning.
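A minimal runnable sketch of the suggestion, using a cut-down version of the poster's schema (my own example, assuming SQLAlchemy 1.4+; the data values are made up):

```python
from sqlalchemy import Column, ForeignKey, Integer, String, Table, create_engine
from sqlalchemy.orm import Session, declarative_base, relationship

Base = declarative_base()

vname_association = Table(
    "VNameFactoid", Base.metadata,
    Column("vnameKey", Integer, ForeignKey("VariantName.vnameKey")),
    Column("factoidKey", Integer, ForeignKey("Factoid.factoidKey")),
)

class Factoid(Base):
    __tablename__ = "Factoid"
    factoidKey = Column(Integer, primary_key=True)
    engDesc = Column(String)
    # uselist=False makes the many-to-many behave as a scalar attribute
    vnameInfo = relationship("VariantName", secondary=vname_association,
                             backref="factoidData", uselist=False)

class VariantName(Base):
    __tablename__ = "VariantName"
    vnameKey = Column(Integer, primary_key=True)
    name = Column(String)

engine = create_engine("sqlite://")
Base.metadata.create_all(engine)

session = Session(engine)
session.add(Factoid(engDesc="a factoid",
                    vnameInfo=VariantName(name="Ioannes")))
session.commit()

loaded = session.query(Factoid).first()
print(loaded.vnameInfo.name)  # no [0] indexing needed
```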



 Best,
 -tara



Re: [sqlalchemy] A question regarding hybrid properties, relationships and schema changes

2014-06-25 Thread Mike Bayer

On 6/25/14, 2:26 AM, Ken Lareau wrote:
 On Tue, Jun 24, 2014 at 3:35 PM, Mike Bayer mike...@zzzcomputing.com wrote:


 On 6/23/14, 8:09 PM, Ken Lareau wrote:
 
  if apptier:
  subq = (
  Session.query(
  Package.pkg_name,
  Package.version,
  Package.revision,
  AppDefinition.app_type,
  AppDeployment.environment
  ).join(Deployment)
   .join(AppDeployment)
   .join(AppDefinition)
   .filter(Package.pkg_name == package_name)
   .filter(AppDeployment.environment == env)
   .filter(AppDeployment.status != 'invalidated'))
 
  [...]
 
  # The actual column name must be used in the subquery
  # usage below; DB itself should be corrected
  versions = (Session.query(subq.c.appType,
  subq.c.version,
  subq.c.revision)
  .group_by(subq.c.appType, subq.c.environment)
  .all())
 
  The parts in the subquery work fine, but the access of the 'column'
  in the final query leads to this:
 
  Traceback (most recent call last):
File stdin, line 1, in module
File /home/klareau/repos/git/tagopsdb/tagopsdb/deploy/deploy.py,
  line 234, in find_deployed_version
  .group_by(subq.c.appType, subq.c.environment)
File
 
 
 /home/klareau/.virtualenvs/tds/lib/python2.6/site-packages/sqlalchemy/util/_collections.py,
  line 174, in __getattr__
  raise AttributeError(key)
  AttributeError: environment
 
  This is not completely surprising, but I'm uncertain as the best way
  to fix this... help? :)
 nothing is obviously wrong, assuming you are calling subquery() on
 subq at the end.  It will be a select() construct which will have a
 column called .environment on the .c. collection because
 AppDeployment.environment is present.   Can't say why this
 attribute is
 not here without full details.


 Oops there were some additional filters I left out, but this is added
 before the final query:

 subq = (subq.order_by(AppDeployment.realized.desc())
 .subquery(name='t_ordered'))

 Not sure why it isn't working, but today my coworker and I massively
 rewrote one of the other methods to avoid this same issue, so maybe
 we should try the same for this one... though it would be nice to know
 what I goofed here, but not sure what additional information I can
 give that would help offhand...

what are the names that are actually present on subq.c?   print
list(subq.c) should give an indication.
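The same inspection works without a database at all, since the subquery is just a SQL construct. A minimal sketch (my own, assuming SQLAlchemy 1.4+; the model is a stand-in for the poster's):

```python
from sqlalchemy import Column, Integer, String
from sqlalchemy.orm import Session, declarative_base

Base = declarative_base()

class AppDeployment(Base):
    __tablename__ = "app_deployments"
    id = Column(Integer, primary_key=True)
    environment = Column(String)

# no engine/bind is needed just to build the query and inspect it
session = Session()
subq = session.query(AppDeployment.id,
                     AppDeployment.environment).subquery()

# these keys are the names addressable as subq.c.<name>
keys = list(subq.c.keys())
print(keys)
```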



 -- 
 - Ken Lareau




Re: [sqlalchemy] postgresql large objects

2014-06-25 Thread Mike Bayer

On 6/25/14, 2:35 AM, Jason Newton wrote:
 Hi,

 I'm wondering how I might get at postgresql's large object type
 (lob).  It's not to be confused with the TOASTED bytea, which is
 limited currently to 1 GiB yet in practice to much less (for me
 400MiB) - it's a special table + api designed to handle very large
 binary objects, from a few hundred MiBs to more recently TiBs. I don't
 see appropriate definitions anywhere and can't find any mention of it
 really with sqlalchemy. psycopg2 has support for it and calls it
 lobject; it provides a file-like interface to the lob, which is a good
 mapping, since with libpq you use lo_creat, lo_seek, lo_write and
 lo_read to work with these beasts.

 I took a look at UserDefinedType but on the bind_processor, this
 doesn't distinguish between inserts and updates.  With inserts, you'd
 use an oid allocated from lo_creat in the transaction.  On updates,
 you'd use lo_trunc/lo_write.  As one more constraint, you must be in a
 transaction before any of these functions are usable.  To reference
 large objects, as they are explicitly an out of table storage, the
 postgresql specific oid is used (which allows garbage collection,
 referential integrity checks etc).

 I'll also mention that other tables reference these large objects via
 oids, something like smart pointers in postgres.

 It'd be great to plug large objects into sqlalchemy properly - but can
 it be done?
well we just added the OID type in 0.9.5, so you at least have that.

The PG LOB feature is very sparsely documented - on PG's docs, they only
seem to document the C API
(www.postgresql.org/docs/9.2/static/lo-interfaces.html), feel free to
point me to better documentation on this.   

As far as transparently embedding this into INSERT/UPDATE, my first
thought is that this might be unwise considering how far removed these
functions are from any kind of mainstream usage in such statements -
particularly if separate statements have to be called per value to get
at OIDs or similar.That PG's docs barely mention this whole feature
much less any kind of regular SQL integration is a red flag.   PG's
BYTEA type is already arbitrarily large so there is probably not much
interest in a type like this.If it's the streaming feature you're
looking for, SQLA's usual approach such as that of Oracle's LOB is to
pre-stream it on the result set side (necessary, because fetching a
batch of rows requires it), and cx_Oracle doesn't provide too much
option to stream on the write side.   I've dealt a lot with streaming
datatypes back in the day but sometime in the early 2000's everyone just
stopped using them.

As the feature involves SQL functions I don't think you'd be dealing
only with bind_processor(), the SQL functions themselves would probably
be via SQL-level processing, see
http://docs.sqlalchemy.org/en/rel_0_9/core/types.html#applying-sql-level-bind-result-processing.
   
It's true these functions aren't given access to the compiler context
where you'd be able to discern INSERT from UPDATE, so I'd probably use
two variants of datatype, an InsertLOB and UpdateLOB datatype, then I'd
apply these to an INSERT or UPDATE statement at runtime probably using a
@compiles decorator -
http://docs.sqlalchemy.org/en/rel_0_9/core/compiler.html, that is, take
the existing LOB type and just swap it out for InsertLOB/UpdateLOB.   
Adding compiler context to TypeEngine is something we could look into as
a feature as well so that bind_expression() has this available somehow
and switching around types wouldn't be needed.
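As a rough compile-only sketch of what the bind_expression() route could look like (entirely my own assumption of how this might be wired, not a tested LOB integration: lo_from_bytea() is a real PostgreSQL 9.4+ server-side function that creates a large object from bytes and returns its oid, but the UpdateLOB type here is hypothetical):

```python
from sqlalchemy import Column, Integer, MetaData, Table, func
from sqlalchemy.dialects import postgresql
from sqlalchemy.types import UserDefinedType

class UpdateLOB(UserDefinedType):
    """Hypothetical type: the column stores an OID, and bound bytes
    are routed through a server-side function at statement compile
    time instead of being bound directly."""
    cache_ok = True

    def get_col_spec(self, **kw):
        return "OID"

    def bind_expression(self, bindvalue):
        # lo_from_bytea(0, <bytes>) creates a new large object
        # server-side and returns its oid (PostgreSQL 9.4+)
        return func.lo_from_bytea(0, bindvalue)

docs = Table(
    "docs", MetaData(),
    Column("id", Integer, primary_key=True),
    Column("data", UpdateLOB()),
)

# compile only - no database needed to see the rendered SQL
stmt = docs.update().values(data=b"payload")
sql = str(stmt.compile(dialect=postgresql.dialect()))
print(sql)
```

An INSERT-side variant would swap in whatever function is appropriate there, which is exactly the type-swapping the paragraph above describes.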

building this would involve stringing together hooks that are not quite
set up for this, however for me to really assist here I'd need to see
exact examples of what INSERT, UPDATE and anything else looks like in
conjunction with these functions.

I would note that psycopg2 itself also provides for extension types,
including custom Connection and Cursor subclasses.   If a lot of things
have to happen when these types are in play it might be something that
can occur at that level, PG's type API is obviously a lot more PG specific.




 -Jason


Re: [sqlalchemy] postgresql large objects

2014-06-25 Thread Mike Bayer

On 6/25/14, 2:53 PM, Jason Newton wrote:
 Seems to fly in the face at the point of SQLA although integration
 difficulties are appreciated.  Most advanced postgresql drivers in any
 language bindings have added support for this type although none of
 them can hide that it's file like. 
SQLA doesn't go out of its way for unusual, very database-specific
features that up til now have demonstrated little to no real-world use
(nobody's ever asked for this feature before and googling about
Postgresql LOBs turns up very little).  There are tons of examples of
features like this across many different database backends.  If they are
easy to add, we add them; or if they are easy to address via a recipe,
we add the recipe to the wiki.

But if the behavior requires substantial changes to the core and
dialect, and the ratio of complexity of impact to sparseness of need is
really high, it's not worth it and actually kind of damaging to most
users to complicate the library for use cases that are extremely rare
and can just as well be addressed by dropping down to raw DBAPI code. 
Complications/destabilizations/performance degradations that are
hoisted onto the whole userbase for the benefit of a single feature that
is virtually never needed are the wrong choice to make; I'm presented
with this choice all the time and there's nearly always work to be done
in extricating ill-conceived features and behaviors that went in too
quickly.  I'm pretty confident that this feature won't require any of
that, but that remains to be seen.

Just in case this wasn't apparent, you certainly *can* use psycopg2's
bindings when you're in an otherwise SQLAlchemy app.  Worst case you can
retrieve a raw psycopg2 connection using connection.raw_connection and
do whatever you need.  If you truly have some use for LOBs, SQLAlchemy
isn't preventing you from using it, it's just not providing any niceties
around it.   The fact that these unusual use cases are not ever
prohibited by SQLA further raises the bar to adding first class support
for them.


 PG's BYTEA is NOT arbitrarily large, it has quite real practical
 limits and I've hit them regularly in storing compressed HDF5
 documents in the database as part of a bench testing framework.  The
 theoretical limit is 1GB but this limit is far less in practice
 (http://www.postgresql.org/message-id/cafj8pracfkoinp2uxeizod5krx29n2ofsoldh0w6ej7rxko...@mail.gmail.com
 ).  http://www.postgresql.org/docs/9.3/static/storage-toast.html  I'm
 not looking for streaming, just retrieving/setting whole binary buffers.
 It'd be nice to translate them transparently to HDF5 Python in-memory
 objects (i.e. h5py).
Python translation is very easy in SQLA, it's just if you have special
needs for SQL syntaxes, that's where special behaviors may be needed. 
So far it sounds like the only blocking factor is that bind_expression()
needs to distinguish between INSERT and UPDATE.  That's not a terribly
tall order, though it is inconvenient in that the API would need a
backwards-compatibility layer.


  


 As the feature involves SQL functions I don't think you'd be
 dealing only with bind_processor(), the SQL functions themselves
 would probably be via SQL-level processing, see
 
 http://docs.sqlalchemy.org/en/rel_0_9/core/types.html#applying-sql-level-bind-result-processing.

 It's true these functions aren't given access to the compiler
 context where you'd be able to discern INSERT from UPDATE, so I'd
 probably use two variants of datatype, an InsertLOB and UpdateLOB
 datatype, then I'd apply these to an INSERT or UPDATE statement at
 runtime probably using a @compiles decorator -
 http://docs.sqlalchemy.org/en/rel_0_9/core/compiler.html, that is,
  take the existing LOB type and just swap it out for
  InsertLOB/UpdateLOB.  Adding compiler context to TypeEngine is
 something we could look into as a feature as well so that
 bind_expression() has this available somehow and switching around
 types wouldn't be needed.


 Separate datatypes is clearly not a good approach to this.

Let me clarify that these separate datatypes would be totally invisible
to the user.   The user would work with a single LOB type.  Translation
to Insert/Update versions would not be explicit and would occur at the
point at which the insert/update construct is compiled.   At the moment
this may be the best approach short of modifying the library (but then
again I can't say much about the approach because I have little idea
what the SQL we're talking about looks like).
 
 Everything starts with a transaction block. 
the psycopg2 DBAPI is implicitly in a transaction block at all times
unless autocommit mode is set up, so there shouldn't be any problem here.

 Hand-crafted inserts use server-side lo_create(-1) (in SQL), which
 allocates a new large object, and a RETURNING clause to get the oid
 back to the client side in one go.  Then I start using the lobject API
 on the returned oid. 
I'd like to see explicit SQL, 
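The flow described above can be sketched as a small SQL builder plus the psycopg2 lobject calls. The helper name `lob_insert_sql`, the `docs` table, and its columns are hypothetical illustrations; only `lo_create(-1)`, RETURNING, and `conn.lobject()` come from the thread:

```python
def lob_insert_sql(table, columns, lob_column):
    """Build an INSERT whose LOB column is filled server-side by
    lo_create(-1), handing the new OID back via RETURNING.
    (Hypothetical helper mirroring the hand-crafted inserts described.)"""
    placeholders = ["lo_create(-1)" if col == lob_column else "%%(%s)s" % col
                    for col in columns]
    return "INSERT INTO %s (%s) VALUES (%s) RETURNING %s" % (
        table, ", ".join(columns), ", ".join(placeholders), lob_column)

# psycopg2 usage sketch (not executed here; assumes an open connection):
# cur.execute(lob_insert_sql("docs", ["name", "data_oid"], "data_oid"),
#             {"name": "report.h5"})
# oid = cur.fetchone()[0]
# lob = conn.lobject(oid, "wb")   # then lob.write(payload); lob.close()
```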

Re: [sqlalchemy] postgresql large objects

2014-06-25 Thread Mike Bayer

On 6/25/14, 5:35 PM, Jason Newton wrote:

  


  Hand-crafted inserts use server-side lo_create(-1) (in SQL), which
  allocates a new large object, and a RETURNING clause to get the
  oid back to the client side in one go.  Then I start using the
  lobject API on the returned oid. 
 I'd like to see explicit SQL, preferably in the form of a psycopg2
 script that illustrates all the operations you wish to support and
 specifically how they must interact with the database.


 I've attached example usages.  As I indicated in my prior email, right
 now I only do inserts/selects.
Here's my immediate thought about the INSERT (and the UPDATE) - the
first way is using events, the second would attempt to move this system
into something more native to the psycopg2 dialect:

Use the before_cursor_execute() and after_cursor_execute() events to get
at this:

http://docs.sqlalchemy.org/en/latest/core/events.html#sqlalchemy.events.ConnectionEvents.before_cursor_execute
http://docs.sqlalchemy.org/en/latest/core/events.html#sqlalchemy.events.ConnectionEvents.after_cursor_execute

1. Create a new type called PGLOB or whatever you want.  For starters,
this can just be a blank UserDefinedType subclass.

in before_cursor_execute:

2. determine if INSERT or UPDATE using context.isinsert, context.isupdate

3. examine the datatypes that SQLAlchemy is using here, by looking at
context.compiled.binds.  The values here are BindParameter objects, you
want to look in those for the ones that are of type PGLOB.

4. From context.compiled.binds, you have the names of the bound params
with the type.  Search and replace the statement for occurrences of
that bound parameter, replace with lo_creat(-1) or whatever you need
there.

5. Also append to the RETURNING clause those cols you need.

6. the statement as a return value will be used, if you set up the event
with retval=True (see the docs).

in after_cursor_execute:

7. in after_cursor_execute - call fetchone() to get the RETURNING
values.  Get that OID you care about then do that work with conn.lobject
and all that.  Hopefully this doesn't mess up the existing cursor state.

8.  now the tricky part.  SQLAlchemy needs that row if you're doing
implicit returning to get at primary key values.   psycopg2's cursor
seems to have a scroll() method that works for client side cursors.  
I'd scroll it back one so that SQLA gets the state it expects.
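Step 4 above, the textual search-and-replace of the bound parameter, might look like this as a standalone helper. This is a sketch assuming psycopg2's `%(name)s` paramstyle; the helper name and the RETURNING handling are illustrative, not SQLAlchemy API:

```python
def rewrite_lob_binds(statement, lob_params):
    """Swap each %(name)s placeholder whose bind is LOB-typed for a
    server-side lo_creat(-1) call, then request the allocated OIDs
    back via RETURNING (appending to an existing clause if present).
    Assumes the param names match the column names (an assumption)."""
    for name in lob_params:
        statement = statement.replace("%%(%s)s" % name, "lo_creat(-1)")
    if lob_params:
        cols = ", ".join(lob_params)
        if " RETURNING " in statement:
            statement += ", " + cols
        else:
            statement += " RETURNING " + cols
    return statement

# wired up in before_cursor_execute, per the recipe above (sketch):
#
# @event.listens_for(engine, "before_cursor_execute", retval=True)
# def _before(conn, cursor, statement, parameters, context, executemany):
#     if context.isinsert or context.isupdate:
#         lob_params = [name for name, bp in context.compiled.binds.items()
#                       if isinstance(bp.type, PGLOB)]
#         statement = rewrite_lob_binds(statement, lob_params)
#     return statement, parameters
```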

Alternative system, more plugged in:

1. We would be creating new features on
sqlalchemy/dialects/postgresql/psycopg2.py -
PGExecutionContext_psycopg2.  Similar hooks are available here which you
can use to accomplish similar tasks;  you'd want to look at the
pre_exec(), post_exec() and possibly post_insert() methods, and maybe
even _fetch_implicit_returning().   If SQLA were to support this more
natively, things would be happening at this level.   But again, I really
wouldn't want all kinds of hardcoded expectations of LOB objects in this
object taking up processing time for the vast majority of use cases that
don't use LOBs, so the extra logic here should be contained within
something that can easily be placed aside based on options or similar.






Re: [sqlalchemy] Joins across differently-sharded tables

2014-06-25 Thread Mike Bayer

On 6/25/14, 6:36 PM, Neil Hooey wrote:
 Is it possible to do a join query on two tables that have different
 shard keys in SQLAlchemy?

 For example, if you have two tables:

 users: [userId, name] (sharded on userId)
 messages: [messageId, userId, message] (sharded on messageId)

 Could you select all of the messages for a particular user with:

 select u.name, m.userId, m.messageId, m.message
 from messages m
 join users u on m.userId = u.userId

 (Even though messages are sharded on messageId instead of userId?)

 I haven't seen an example of this in the code.
there's no way to do that because two shards are accessed via two
totally different database connections.



Re: [sqlalchemy] Aggregate functions across shards

2014-06-25 Thread Mike Bayer

On 6/25/14, 6:41 PM, Neil Hooey wrote:
 Do aggregate functions work across shards in SQLAlchemy?

 Such as count, sum, etc?

not across them, nope.  you need to merge those yourself.
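Merging yourself means running the aggregate per shard and combining client-side: SUM and COUNT merge by addition, MIN/MAX by comparison, AVG needs per-shard sum and count. A self-contained sketch where two in-memory SQLite databases stand in for the shards (the real app would use its per-shard connections instead):

```python
import sqlite3

# two in-memory databases stand in for two shards
shards = [sqlite3.connect(":memory:") for _ in range(2)]
for conn in shards:
    conn.execute("CREATE TABLE messages (messageId INTEGER PRIMARY KEY)")
shards[0].executemany("INSERT INTO messages VALUES (?)", [(1,), (2,)])
shards[1].execute("INSERT INTO messages VALUES (3)")

def merged_count(connections):
    # COUNT(*) on each shard, combined by addition on the client
    return sum(conn.execute("SELECT COUNT(*) FROM messages").fetchone()[0]
               for conn in connections)
```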




Re: [sqlalchemy] Performance problem of simple orm usage

2014-06-26 Thread Mike Bayer

On 6/26/14, 11:49 AM, Hans-Peter Jansen wrote:
 Dear SQLAchemistas,

 I'm suffering from a performance problem from a simple sequence like this:

 rec is a sensor record coming redundantly from the network, enos is a 
 sensor device, ev is a sensor value record:

 def store_enos_rec(self, rec, srelay):
     enos = self.lookup_enos(rec.id, srelay)
     ts = rec.ts.replace(tzinfo = None)
     ev = self.session.query(EnOSValues).filter(
         EnOSValues.enos_id == enos.id,
         EnOSValues.ts == ts).first()
     if ev:
         return
     # create record
     ev = EnOSValues(enos_id = enos.id, ts = ts, status = True,
                     temp = rec.temp)
     enos.values.append(ev)
     try:
         self.session.commit()
     except IntegrityError:
         self.session.rollback()

 This is running as a multiprocessing.Process on its own, but the 
 self.session.commit call is the bottleneck here, as shown in the profile 
 callgraph. Is there something simple, that can be done to mitigate the 
 issue, other than doing it straight down the sql way?

 Obviously, it is taking a lot of time in history mgmt. A former approach
 to simply dump the record without searching duplicates and catching the
 IntegrityError took even more time due to the session rollback.

 Thanks for your patience and any enlightenment,
It's very difficult to read this SVG and a traditional Python profile
output sorted by calltime would be helpful, but what I can see is that
you're calling commit() 20K times, so that is a lot.   commit() is not
intended to be something you call for every individual row of a large
dataset, it is better that you feed a few thousand objects into the
Session at a time before flushing and committing.

I'm not sure what kind of application this is but I would not be mixing
transactional control, that is the commit/rollback, inside of a business
method that only seeks to create some new objects.   There'd be a
containing pattern within which store_enos_rec() is called these 20K
times, then the commit() is called on the outside of that, or at least
every 1000 records or so.

Also, the "create if not present" pattern is often much better set up by
preloading all the objects you'll deal with up front:

existing_evs = dict(Session.query(EnosValues.enos_id, EnosValues))
if ev_id not in existing_evs:
    Session.add(EnosValues(...))
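The preloading pattern can be demonstrated end to end with plain sqlite3; the `ev` table and its columns here are stand-ins for the poster's schema:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE ev (enos_id INTEGER, ts TEXT, PRIMARY KEY (enos_id, ts))")
conn.executemany("INSERT INTO ev VALUES (?, ?)", [(1, "t1"), (2, "t2")])

# load the existing keys once, then test membership in memory
# instead of issuing one SELECT per incoming record
existing = set(conn.execute("SELECT enos_id, ts FROM ev"))
incoming = [(1, "t1"), (3, "t3")]           # one duplicate, one new
new_rows = [row for row in incoming if row not in existing]
conn.executemany("INSERT INTO ev VALUES (?, ?)", new_rows)
```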


If you truly need to optimize a method whose job it is to start a new
transaction, insert a single row if not present and then commit the
transaction fully, I'd use a core insert() construct for that.  
Spinning up a whole unit of work in that context is overkill if
performance is a concern.
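The batch-every-N-records advice can be sketched with a small chunking helper; the `chunked` name and the commented Session usage are illustrative, not part of the original post:

```python
def chunked(items, size):
    """Yield successive lists of at most `size` items."""
    batch = []
    for item in items:
        batch.append(item)
        if len(batch) >= size:
            yield batch
            batch = []
    if batch:
        yield batch

# usage sketch against the poster's code (not executed here):
#
# for batch in chunked(records, 1000):
#     for rec in batch:
#         session.add(EnOSValues(...))
#     session.commit()   # one commit per 1000 rows, not per row
```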





Re: [sqlalchemy] AssertionError: Type class 'sqlalchemy.engine.interfaces.Connectable' is already registered

2014-06-26 Thread Mike Bayer
that looks pretty much like modules are being reloaded in process.   
Either get web.py to not reload the sqlalchemy modules, or if it has
to, then get it to fully load sqlalchemy completely.   I see that
reload(mod) right in the stack trace there.




On 6/26/14, 1:14 PM, Dustin Oprea wrote:
 I'm using SQLAlchemy with web.py, and have used it many times in the
 past. I'm working on a project using gevent/greenlets, and everything
 has been fine for a couple of months, until today.

 Suddenly, I'm getting these at a time when I'm not even receiving any
 requests. Does anyone have any advice or steps to troubleshoot?

 Traceback (most recent call last):
   File
 /usr/local/lib/python2.7/dist-packages/web/application.py, line
 237, in process
 return p(lambda: process(processors))
   File
 /usr/local/lib/python2.7/dist-packages/web/application.py, line
 565, in processor
 h()
   File
 /usr/local/lib/python2.7/dist-packages/web/application.py, line
 661, in __call__
 self.check(mod)
   File
 /usr/local/lib/python2.7/dist-packages/web/application.py, line
 680, in check
 reload(mod)
   File
 /usr/local/lib/python2.7/dist-packages/sqlalchemy/engine/reflection.py,
  line 58, in <module>
 class Inspector(object):
   File
 /usr/local/lib/python2.7/dist-packages/sqlalchemy/engine/reflection.py,
 line 136, in Inspector
 @inspection._inspects(Connectable)
   File
 /usr/local/lib/python2.7/dist-packages/sqlalchemy/inspection.py,
 line 84, in decorate
 registered % type_)
 AssertionError: Type class
 'sqlalchemy.engine.interfaces.Connectable' is already registered


Re: [sqlalchemy] Performance problem of simple orm usage

2014-06-26 Thread Mike Bayer

On 6/26/14, 12:24 PM, Mike Bayer wrote:

 I'm not sure what kind of application this is but I would not be mixing
 transactional control, that is the commit/rollback, inside of a business
 method that only seeks to create some new objects.   There'd be a
 containing pattern within which store_enos_rec() is called these 20K
 times, then the commit() is called on the outside of that, or at least
 every 1000 records or so.

 Also, the create if not present pattern is often much better set up by
 preloading all the objects you'll deal with up front:

 existing_evs = dict(Session.query(EnosValues.enos_id, EnosValues))
 if ev_id not in existing_evs:
     Session.add(EnosValues(...))


 If you truly need to optimize a method whose job it is to start a new
 transaction, insert a single row if not present and then commit the
 transaction fully, I'd use a core insert() construct for that.  
 Spinning up a whole unit of work in that context is overkill if
 performance is a concern.
This is something that's pretty common, and I may have the opportunity
soon to come up with a helper for this case.









Re: [sqlalchemy] AssertionError: Type class 'sqlalchemy.engine.interfaces.Connectable' is already registered

2014-06-26 Thread Mike Bayer

On 6/26/14, 1:49 PM, Dustin Oprea wrote:


 On Jun 26, 2014 1:34 PM, Mike Bayer mike...@zzzcomputing.com wrote:
 
  that looks pretty much like modules are being reloaded in
 process.Either get web.py to not reload the sqlalchemy modules,
 or if it has to, then get it to fully load sqlalchemy completely.  
 I see that reload(mod) right in the stack trace there.

 Sure, but I have a dozen web.py projects, and have been using it for a
 while, and haven't had this issue until yesterday. I thought that too,
 but doubt that's it.


I guarantee you that is it - maybe it was not being called on this
particular module or combination of modules.  Module reloading is bad IMHO.




 Dustin
 
 
 
 
 
  On 6/26/14, 1:14 PM, Dustin Oprea wrote:
 
  I'm using SQLAlchemy with web.py, and have used it many times in
 the past. I'm working on a project using gevent/greenlets, and
 everything has been fine for a couple of months, until today.
 
  Suddenly, I'm getting these at a time when I'm not even receiving
 any requests. Does anyone have any advice or steps to troubleshoot?
 
  Traceback (most recent call last):
File
 /usr/local/lib/python2.7/dist-packages/web/application.py, line 237,
 in process
  return p(lambda: process(processors))
File
 /usr/local/lib/python2.7/dist-packages/web/application.py, line 565,
 in processor
  h()
File
 /usr/local/lib/python2.7/dist-packages/web/application.py, line 661,
 in __call__
  self.check(mod)
File
 /usr/local/lib/python2.7/dist-packages/web/application.py, line 680,
 in check
  reload(mod)
File
 /usr/local/lib/python2.7/dist-packages/sqlalchemy/engine/reflection.py,
  line 58, in <module>
  class Inspector(object):
File
 /usr/local/lib/python2.7/dist-packages/sqlalchemy/engine/reflection.py,
 line 136, in Inspector
  @inspection._inspects(Connectable)
File
 /usr/local/lib/python2.7/dist-packages/sqlalchemy/inspection.py,
 line 84, in decorate
  registered % type_)
  AssertionError: Type class
 'sqlalchemy.engine.interfaces.Connectable' is already registered
 

Re: [sqlalchemy] Performance problem of simple orm usage

2014-06-26 Thread Mike Bayer

On 6/26/14, 3:07 PM, Hans-Peter Jansen wrote:
 Okay, attached is a profile dump with .commit() out of the way. Here's
 the head of it:

 p = pstats.Stats('srelay.pstats')
 p.strip_dirs().sort_stats('cumulative').print_stats(100)

 Thu Jun 26 20:41:50 2014    srelay.pstats

 55993702 function calls (54767154 primitive calls) in 533.479 seconds

 Ordered by: cumulative time
 List reduced from 1277 to 100 due to restriction 100

 ncalls  tottime  percall  cumtime  percall filename:lineno(function)
      1    0.000    0.000  533.523  533.523 cProfile.py:146(runcall)
      1    0.174    0.174  533.523  533.523 srelay.py:23(run)
  24558    1.060    0.000  458.633    0.019 srelay.py:74(store_enos_rec)
  24586    0.111    0.000  434.802    0.018 query.py:2280(first)
  24586    0.493    0.000  434.687    0.018 query.py:2143(__getitem__)
  24634    0.114    0.000  431.682    0.018 query.py:2361(__iter__)
  24683    1.330    0.000  391.984    0.016 session.py:1790(flush)
  24634    0.052    0.000  391.950    0.016 session.py:1137(_autoflush)
  21043    0.628    0.000  390.597    0.019 session.py:1841(_flush)
  21043    0.396    0.000  385.326    0.018 unitofwork.py:350(execute)
  21043    0.417    0.000  350.813    0.017 unitofwork.py:299(_generate_actions)
 126514    0.814    0.000  348.487    0.003 unitofwork.py:411(execute)
 189809    0.726    0.000  343.341    0.002 unitofwork.py:177(get_attribute_history)
  21108    0.251    0.000  342.547    0.016 dependency.py:440(presort_saves)
  42240    9.870    0.000  302.804    0.007 attributes.py:871(get_history)
  42160  292.536    0.007  292.910    0.007 attributes.py:1321(from_collection)
  45788    0.175    0.000   55.708    0.001 base.py:597(execute)
  45735    0.410    0.000   53.341    0.001 base.py:727(_execute_clauseelement)
  45788    0.511    0.000   41.410    0.001 base.py:812(_execute_context)
  63209   39.185    0.001   39.294    0.001 attributes.py:1255(as_state)
      2    0.001    0.000   39.039   19.519 process.py:83(__call__)
     39   39.037    1.001   39.037    1.001 {time.sleep}
  24634    0.157    0.000   36.100    0.001 query.py:2375(_execute_and_instances)
  45788    0.098    0.000   34.287    0.001 default.py:323(do_execute)
  45816    0.490    0.000   34.199    0.001 cursors.py:164(execute)
  24562    0.513    0.000   32.848    0.001 fetch_srelay.py:57(fetch_srelay)
  45816    0.083    0.000   32.323    0.001 cursors.py:353(_query)
  45816    0.217    0.000   31.993    0.001 cursors.py:315(_do_query)
  45816   30.949    0.001   30.949    0.001 {method 'query' of '_mysql.connection' objects}
    198    0.001    0.000   30.394    0.154 fetch_srelay.py:44(recv)
    396    0.003    0.000   30.393    0.077 fetch_srelay.py:33(recvall)
    392   30.039    0.077   30.039    0.077 {method 'recv' of '_socket.socket' objects}
  63126    0.203    0.000   26.584    0.000 unitofwork.py:522(execute)

 Unfortunately, it still suffers from about the same runtime behavior.

 Obviously, some operation triggers the flush method with about the
 same consequences.

OK, turn off autoflush - either Session(autoflush=False), or within the
critical block, do "with session.no_autoflush:"



Re: [sqlalchemy] A question regarding hybrid properties, relationships and schema changes

2014-06-26 Thread Mike Bayer
minimal mapping + the query against that mapping.


On 6/26/14, 4:11 PM, Ken Lareau wrote:
 On Thu, Jun 26, 2014 at 5:57 AM, Mike Bayer mike...@zzzcomputing.com wrote:


 On 6/25/14, 8:06 PM, Ken Lareau wrote:
  On Wed, Jun 25, 2014 at 6:28 AM, Mike Bayer mike...@zzzcomputing.com wrote:


 On 6/25/14, 2:26 AM, Ken Lareau wrote:
  On Tue, Jun 24, 2014 at 3:35 PM, Mike Bayer mike...@zzzcomputing.com wrote:


 On 6/23/14, 8:09 PM, Ken Lareau wrote:
 
   if apptier:
       subq = (
           Session.query(
               Package.pkg_name,
               Package.version,
               Package.revision,
               AppDefinition.app_type,
               AppDeployment.environment
           ).join(Deployment)
            .join(AppDeployment)
            .join(AppDefinition)
            .filter(Package.pkg_name == package_name)
            .filter(AppDeployment.environment == env)
            .filter(AppDeployment.status != 'invalidated'))
 
  [...]
 
   # The actual column name must be used in the subquery
   # usage below; DB itself should be corrected
   versions = (Session.query(subq.c.appType,
                             subq.c.version,
                             subq.c.revision)
               .group_by(subq.c.appType, subq.c.environment)
               .all())
 
  The parts in the subquery work fine, but the access of
 the 'column'
  in the final query leads to this:
 
  Traceback (most recent call last):
File stdin, line 1, in module
File
 /home/klareau/repos/git/tagopsdb/tagopsdb/deploy/deploy.py,
  line 234, in find_deployed_version
  .group_by(subq.c.appType, subq.c.environment)
File
 
 
 /home/klareau/.virtualenvs/tds/lib/python2.6/site-packages/sqlalchemy/util/_collections.py,
  line 174, in __getattr__
  raise AttributeError(key)
  AttributeError: environment
 
  This is not completely surprising, but I'm uncertain as to the best
  way to fix this... help? :)
 nothing is obviously wrong, assuming you are calling
 subquery() on
 subq at the end.  It will be a select() construct which
 will have a
 column called .environment on the .c. collection because
 AppDeployment.environment is present.   Can't say why
 this attribute is
 not here without full details.


 Oops there were some additional filters I left out, but this
 is added
 before the final query:

 subq = (subq.order_by(AppDeployment.realized.desc())
 .subquery(name='t_ordered'))

  Not sure why it isn't working, but today my coworker and I
  massively
 rewrote one of the other methods to avoid this same issue,
 so maybe
 we should try the same for this one... though it would be
 nice to know
 what I goofed here, but not sure what additional information
 I can
 give that would help offhand...

 what are the names that are actually present on subq.c ?  
 print list(subq.c) should give an indication.


 Adding that in to the method gave this:

  deploy.find_deployed_version('tdstest', 'development',
 version='9', apptier=True)
 [Column('pkg_name', String(length=255), table=t_ordered,
 nullable=False), Column('version', String(length=63),
 table=t_ordered, nullable=False), Column('revision',
 String(length=63), table=t_ordered, nullable=False),
 Column('appType', String(length=100), table=t_ordered,
  nullable=False), <sqlalchemy.sql.elements.ColumnClause at
  0x1d20e10; %(30521360 anon)s>]
 Traceback (most recent call last):
   File stdin, line 1, in module
   File
 /home/klareau/repos/git/tagopsdb/tagopsdb/deploy/deploy.py,
 line 237, in find_deployed_version
 .group_by(subq.c.appType, subq.c.environment)
   File
 
 /home/klareau/.virtualenvs/tds/lib/python2.6/site-packages/sqlalchemy/util/_collections.py,
 line 174, in __getattr__
 raise AttributeError(key)
 AttributeError: environment
 you need to put .label('environment') on that column before it
  finds its way into subq.  I don't have

Re: [sqlalchemy] Performance problem of simple orm usage

2014-06-26 Thread Mike Bayer

On 6/26/14, 5:18 PM, Hans-Peter Jansen wrote:
 Dear Mike,

 sorry for not coping with preferred reply behavior..

 On Thursday, 26 June 2014 15:26:02 Mike Bayer wrote:
 On 6/26/14, 3:07 PM, Hans-Peter Jansen wrote:
 Obviously, some operation triggers the flush method with about the
 same consequences..
 OK, turn off autoflush - either Session(autoflush=False), or within the
 critical block, do with session.no_autoflush:
 Okay, that makes a difference, shaving off about 40% of total runtime,
 after dealing with identical records in one session..

 Still, attributes is highest in profiles. I guess, this is, what you 
 called overkill in your first response. A helper for dealing with this 
 situation sounds interesting, though.
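To make the two autoflush options above concrete, here is a minimal, self-contained sketch (in-memory SQLite and an invented `Record` model, not the poster's actual code):

```python
from sqlalchemy import Column, Integer, create_engine
from sqlalchemy.orm import declarative_base, sessionmaker

Base = declarative_base()

class Record(Base):  # stand-in model; the OP's mapping isn't shown here
    __tablename__ = "record"
    id = Column(Integer, primary_key=True)

engine = create_engine("sqlite://")
Base.metadata.create_all(engine)

# option 1: disable autoflush for every session made by this factory
Session = sessionmaker(bind=engine, autoflush=False)

session = Session()
# option 2: suppress autoflush only around a hot loop
with session.no_autoflush:
    for _ in range(1000):
        session.add(Record())
session.commit()

count = session.query(Record).count()
print(count)  # 1000
```

Either option avoids the per-query flush scan inside the loop; the `no_autoflush` block is the less invasive of the two.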

 Thu Jun 26 20:41:50 2014    srelay.pstats

  55993702 function calls (54767154 primitive calls) in 533.479 seconds

Ordered by: internal time
List reduced from 1277 to 30 due to restriction 30

ncalls  tottime  percall  cumtime  percall filename:lineno(function)
      42160  292.536    0.007  292.910    0.007 attributes.py:1321(from_collection)
      63209   39.185    0.001   39.294    0.001 attributes.py:1255(as_state)
from_collection is essentially returning a structure that describes
which members of a collection were removed or added.  if the collection
is totally empty or is unloaded, this method should be much faster.  try
not to do this:

enos.values.append(ev)

that's what's causing that latency to occur.   The one-to-many of
enos->ev is mirrored by a many-to-one from ev->enos - set enos as a
parent of ev instead, or even faster, just set the foreign key column on
ev to that of the primary key of enos.
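A sketch of that difference, using invented stand-in models (the real enos/ev mapping isn't shown in the thread):

```python
from sqlalchemy import Column, ForeignKey, Integer, create_engine
from sqlalchemy.orm import declarative_base, relationship, sessionmaker

Base = declarative_base()

class Enos(Base):  # hypothetical parent, standing in for the OP's class
    __tablename__ = "enos"
    id = Column(Integer, primary_key=True)
    values = relationship("Ev", backref="enos")

class Ev(Base):  # hypothetical child
    __tablename__ = "ev"
    id = Column(Integer, primary_key=True)
    enos_id = Column(Integer, ForeignKey("enos.id"))

engine = create_engine("sqlite://")
Base.metadata.create_all(engine)
session = sessionmaker(bind=engine)()

enos = Enos()
session.add(enos)
session.flush()  # assigns enos.id

# slow:    enos.values.append(ev) - flush must scan collection history
# faster:  set the many-to-one side instead
ev = Ev(enos=enos)
# fastest: assign the foreign key column directly
ev2 = Ev(enos_id=enos.id)
session.add_all([ev, ev2])
session.commit()

children = session.query(Ev).filter_by(enos_id=enos.id).count()
print(children)  # 2
```

All three spellings produce the same rows; they differ only in how much bookkeeping the unit of work has to do at flush time.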




          39   39.037    1.001   39.037    1.001 {time.sleep}
       45816   30.949    0.001   30.949    0.001 {method 'query' of '_mysql.connection' objects}
         392   30.039    0.077   30.039    0.077 {method 'recv' of '_socket.socket' objects}
       42240    9.870    0.000  302.804    0.007 attributes.py:871(get_history)
       21083    4.284    0.000    6.500    0.000 attributes.py:868(__copy)
       42158    2.230    0.000    2.252    0.000 collections.py:711(__len__)
       21043    1.466    0.000    1.851    0.000 topological.py:51(find_cycles)
     4159430    1.369    0.000    1.369    0.000 {isinstance}
       24683    1.330    0.000  391.984    0.016 session.py:1790(flush)
 642077/24712   1.292    0.000   12.840    0.001 visitors.py:74(_compiler_dispatch)
      197570    1.285    0.000    4.215    0.000 compiler.py:389(visit_label)
       45735    1.235    0.000    3.615    0.000 default.py:391(_init_compiled)
      246847    1.145    0.000    1.637    0.000 compiler.py:421(visit_column)
       24558    1.060    0.000  458.633    0.019 srelay.py:74(store_enos_rec)
       24636    1.048    0.000   12.709    0.001 compiler.py:1136(visit_select)
      197552    1.030    0.000    6.000    0.000 compiler.py:1019(_label_select_column)
       24636    1.015    0.000    1.793    0.000 result.py:171(__init__)
       63126    0.939    0.000    3.202    0.000 persistence.py:275(_collect_update_commands)
 49200/49172    0.937    0.000    3.539    0.000 db_scheme.py:91(__repr__)
      147240    0.904    0.000    1.246    0.000 topological.py:15(sort_as_subsets)
      126514    0.814    0.000  348.487    0.003 unitofwork.py:411(execute)
       63191    0.771    0.000    1.984    0.000 dependency.py:67(per_property_flush_actions)
      622925    0.751    0.000    0.973    0.000 attributes.py:308(__get__)
      699682    0.745    0.000    0.745    0.000 state.py:185(dict)
      189809    0.726    0.000  343.341    0.002 unitofwork.py:177(get_attribute_history)
       73928    0.653    0.000    0.781    0.000 expression.py:3538(__init__)
       42170    0.648    0.000    0.648    0.000 {method 'clear' of 'dict' objects}
      126252    0.639    0.000    1.795    0.000 persistence.py:802(_sort_states)


 Cheers,
 Pete


-- 
You received this message because you are subscribed to the Google Groups 
sqlalchemy group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to sqlalchemy+unsubscr...@googlegroups.com.
To post to this group, send email to sqlalchemy@googlegroups.com.
Visit this group at http://groups.google.com/group/sqlalchemy.
For more options, visit https://groups.google.com/d/optout.


Re: [sqlalchemy] A question regarding hybrid properties, relationships and schema changes

2014-06-26 Thread Mike Bayer
a self-contained version of "before" is attached; it seems to work as-is
(works in 0.8 too).  Modify it to show me it not working.

output:

SELECT t_ordered.pkg_name AS t_ordered_pkg_name, t_ordered.version AS
t_ordered_version, t_ordered.revision AS t_ordered_revision,
t_ordered.appType AS t_ordered_appType, t_ordered.environment_id AS
t_ordered_environment_id, t_ordered.realized AS t_ordered_realized,
t_ordered.user AS t_ordered_user, t_ordered.status AS t_ordered_status
FROM (SELECT packages.pkg_name AS pkg_name, packages.version AS version,
packages.revision AS revision, app_definitions.appType AS appType,
app_deployments.environment_id AS environment_id, (SELECT
environments.environment
FROM environments
WHERE environments.environmentID = app_deployments.environment_id) AS
anon_1, app_deployments.realized AS realized, app_deployments.user AS
user, app_deployments.status AS status
FROM packages JOIN deployments ON packages.package_id =
deployments.package_id JOIN app_deployments ON
deployments.DeploymentID = app_deployments.DeploymentID JOIN
app_definitions ON app_definitions.AppID = app_deployments.AppID
WHERE app_deployments.status != :status_1 AND (SELECT
environments.environment
FROM environments
WHERE environments.environmentID = app_deployments.environment_id) =
:param_1 ORDER BY app_deployments.realized DESC) AS t_ordered GROUP BY
t_ordered.appType, t_ordered.environment_id, t_ordered.pkg_name
HAVING t_ordered.status LIKE :status_2 AND
unix_timestamp(t_ordered.realized) > unix_timestamp(now()) -
:unix_timestamp_1




On 6/26/14, 7:41 PM, Ken Lareau wrote:
 On Thu, Jun 26, 2014 at 1:45 PM, Mike Bayer mike...@zzzcomputing.com wrote:

 minimal mapping + the query against that mapping.

 Okay, for the other method I needed to change, I think I've put together
 complete enough examples and have attached the files.  The 'before'
 file is what we have currently (and doesn't quite do the right thing as
 it's only using the environment_id), and the 'after' file is what
 seemingly
 worked after my coworker and I tried a different route to avoid having
 to use a subquery.  One thing to note is there's a reciprocating rela-
 tionship added on the Deployment class to the Package class which
 initially caused a lovely stack overflow in Python until the joinedload
 options were added; this may be a bad idea, but it was the only way
 we knew how to get things working the way we needed. :)

 I tried to add all the imports as well in case there were any questions
 that might come from the code itself; hopefully I didn't miss any, but
 if there's still confusion, let me know.

 - Ken
  



 On 6/26/14, 4:11 PM, Ken Lareau wrote:
 On Thu, Jun 26, 2014 at 5:57 AM, Mike Bayer
 mike...@zzzcomputing.com mailto:mike...@zzzcomputing.com wrote:


 On 6/25/14, 8:06 PM, Ken Lareau wrote:
 On Wed, Jun 25, 2014 at 6:28 AM, Mike Bayer
 mike...@zzzcomputing.com wrote:


 On 6/25/14, 2:26 AM, Ken Lareau wrote:
 On Tue, Jun 24, 2014 at 3:35 PM, Mike Bayer
 mike...@zzzcomputing.com wrote:


 On 6/23/14, 8:09 PM, Ken Lareau wrote:
 
  if apptier:
  subq = (
  Session.query(
  Package.pkg_name,
  Package.version,
  Package.revision,
  AppDefinition.app_type,
  AppDeployment.environment
  ).join(Deployment)
   .join(AppDeployment)
   .join(AppDefinition)
   .filter(Package.pkg_name ==
 package_name)
   .filter(AppDeployment.environment ==
 env)
   .filter(AppDeployment.status !=
 'invalidated'))
 
  [...]
 
  # The actual column name must be used in
 the subquery
  # usage below; DB itself should be corrected
  versions = (Session.query(subq.c.appType,
  subq.c.version,
  subq.c.revision)
  .group_by(subq.c.appType,
 subq.c.environment)
  .all())
 
  The parts in the subquery work fine, but the
 access of the 'column'
  in the final query leads to this:
 
  Traceback (most recent call last):
File stdin, line 1, in module
File

Re: [sqlalchemy] A question regarding hybrid properties, relationships and schema changes

2014-06-26 Thread Mike Bayer
right, so a few emails ago I said:

  you need to put .label('environment') on that column before it finds
its way into subq.  I don't have the mappings here to review.

here's that:

@environment.expression
def environment(cls):
return select(
[Environment.environment]
).where(
Environment.id == cls.environment_id
).correlate(cls).label('environment')
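For a runnable illustration of why the label matters, here is a cut-down version of the same pattern with invented minimal models (using the 1.4+ `select()` / `scalar_subquery()` spelling; the thread's 0.8/0.9-era code used `select([...])` directly):

```python
from sqlalchemy import Column, ForeignKey, Integer, String, select
from sqlalchemy.ext.hybrid import hybrid_property
from sqlalchemy.orm import declarative_base

Base = declarative_base()

class Environment(Base):
    __tablename__ = "environments"
    id = Column(Integer, primary_key=True)
    environment = Column(String(80))

class AppDeployment(Base):
    __tablename__ = "app_deployments"
    id = Column(Integer, primary_key=True)
    environment_id = Column(Integer, ForeignKey("environments.id"))

    @hybrid_property
    def environment(self):
        return None  # instance side elided; the SQL side is the point here

    @environment.expression
    def environment(cls):
        return (
            select(Environment.environment)
            .where(Environment.id == cls.environment_id)
            .correlate(cls.__table__)
            .scalar_subquery()
            .label("environment")  # without this, the column is anonymous
        )

subq = select(AppDeployment.id, AppDeployment.environment).subquery("t_ordered")
has_name = "environment" in subq.c  # the label makes subq.c.environment usable
print(has_name)  # True
```

Without the `.label()`, the scalar subquery comes out as `anon_1` in the subquery's column collection, which is exactly the `AttributeError: environment` from earlier in the thread.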





On 6/26/14, 9:50 PM, Ken Lareau wrote:
 Done, new file attached (this gives the same error message as the one
 I showed initially, at least on my system).

 - Ken



Re: [sqlalchemy] Is it considered bad practice to have more than one session instance simultaneously in a web application?

2014-06-28 Thread Mike Bayer

On 6/28/14, 7:13 AM, Bao Niu wrote:
 My situation is like this:

 I am developing a web application, which has a Person class, which has
 /FirstName/ and /LastName/ attributes. Now I want to build their full
 name attribute and make this /full_name/ attribute queriable, by using
 hybrid_property, which entails query and hence session. This session
 for querying hybrid_property has its life cycle as long as that
 particular Person instance is active in memory, as in the running
 process the names might get changed, and need to communicate to the
 database.
  
 In the mean time, in this application I also need another Session
 instance to contain those Person instances themselves, and this
 Session instance has a quite different life cycle than the above one.
 I am using cherrypy.request to hold a thread-local session for this
 second purpose.

 Now it seems to me that both Session instances are necessary, I can't
 use one in place of the other. But because handling two sessions at
 the same time is inherently so confusing sometimes, I wonder if I am
 in the right direction? Is this generally considered bad? If it is,
 then how to deal with it? Thanks in advance.

If this is a web application, having a session that isn't lifecycled to
a request seems like it runs in some kind of background thread or
something.   Otherwise, if it's some session that stays open in the
cherrypy app while the app is doing nothing and is only used by requests
(somehow?  a session shouldn't be accessed by multiple things at once),
that's bad.   If you're using a Session as some kind of offline cache,
that's not what it's for, and it won't do a good job of that because it
isn't threadsafe.

think more in terms of database transactions. I use two sessions all
the time when I want to separate transactions, a background job is
working in a transaction for several seconds, but a second short
transaction is used to write messages to a log table, so that I can see
the log table grow from the outside while the long transaction keeps
going.   But database transactions overall should be short, and never
dormant waiting for something to happen, they should be burning through
the work they have to do as fast as possible and completing.  So should
your sessions.

-- 
You received this message because you are subscribed to the Google Groups 
sqlalchemy group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to sqlalchemy+unsubscr...@googlegroups.com.
To post to this group, send email to sqlalchemy@googlegroups.com.
Visit this group at http://groups.google.com/group/sqlalchemy.
For more options, visit https://groups.google.com/d/optout.


Re: [sqlalchemy] subqueryload relationship in polymorphic base class with order_by on subclass column results in (ProgrammingError) missing FROM-clause entry

2014-06-30 Thread Mike Bayer

On 6/30/14, 6:01 PM, univerio wrote:
 Consider the following configuration:

 class Employee(Base):
 __tablename__ = "employee"
 id = Column(Integer, primary_key=True)
 type = Column(String(100))
 cars = relationship("Car")
 __mapper_args__ = {
 "polymorphic_on": type,
 }

 class Car(Base):
 __tablename__ = "car"
 id = Column(Integer, primary_key=True)
 employee_id = Column(Integer, ForeignKey(Employee.id))

 class Engineer(Employee):
 __tablename__ = "engineer"
 id = Column(Integer, ForeignKey(Employee.id), primary_key=True)
 specialty = Column(String(100))
 __mapper_args__ = {
 "polymorphic_identity": "engineer",
 }


 And the following query:

 session.add(Engineer())
 session.flush()
 E = with_polymorphic(Employee, [Engineer])
 
 session.query(E).options(subqueryload(E.cars)).order_by(Engineer.specialty).first()


 This results in the following exception:

 Traceback (most recent call last):
   File test.py, line 50, in main
 .order_by(Engineer.specialty).first()
   File /usr/lib/python2.7/site-packages/sqlalchemy/orm/query.py,
 line 2334, in first
 ret = list(self[0:1])
   File /usr/lib/python2.7/site-packages/sqlalchemy/orm/query.py,
 line 2201, in __getitem__
 return list(res)
   File
 /usr/lib/python2.7/site-packages/sqlalchemy/orm/loading.py, line
 72, in instances
 rows = [process[0](row, None) for row in fetch]
   File
 /usr/lib/python2.7/site-packages/sqlalchemy/orm/loading.py, line
 350, in _instance
 return _instance(row, result)
   File
 /usr/lib/python2.7/site-packages/sqlalchemy/orm/loading.py, line
 455, in _instance
 populate_state(state, dict_, row, isnew, only_load_props)
   File
 /usr/lib/python2.7/site-packages/sqlalchemy/orm/loading.py, line
 305, in populate_state
 populator(state, dict_, row)
   File
 /usr/lib/python2.7/site-packages/sqlalchemy/orm/strategies.py,
 line 1004, in load_collection_from_subq
 ()
   File
 /usr/lib/python2.7/site-packages/sqlalchemy/orm/strategies.py,
 line 951, in get
 self._load()
   File
 /usr/lib/python2.7/site-packages/sqlalchemy/orm/strategies.py,
 line 959, in _load
 lambda x: x[1:]
   File /usr/lib/python2.7/site-packages/sqlalchemy/orm/query.py,
 line 2405, in __iter__
 return self._execute_and_instances(context)
   File /usr/lib/python2.7/site-packages/sqlalchemy/orm/query.py,
 line 2420, in _execute_and_instances
 result = conn.execute(querycontext.statement, self._params)
   File
 /usr/lib/python2.7/site-packages/sqlalchemy/engine/base.py, line
 727, in execute
 return meth(self, multiparams, params)
   File
 /usr/lib/python2.7/site-packages/sqlalchemy/sql/elements.py,
 line 322, in _execute_on_connection
 return connection._execute_clauseelement(self, multiparams,
 params)
   File
 /usr/lib/python2.7/site-packages/sqlalchemy/engine/base.py, line
 824, in _execute_clauseelement
 compiled_sql, distilled_params
   File
 /usr/lib/python2.7/site-packages/sqlalchemy/engine/base.py, line
 954, in _execute_context
 context)
   File
 /usr/lib/python2.7/site-packages/sqlalchemy/engine/base.py, line
 1116, in _handle_dbapi_exception
 exc_info
   File
 /usr/lib/python2.7/site-packages/sqlalchemy/util/compat.py, line
 189, in raise_from_cause
 reraise(type(exception), exception, tb=exc_tb)
   File
 /usr/lib/python2.7/site-packages/sqlalchemy/engine/base.py, line
 947, in _execute_context
 context)
   File
 /usr/lib/python2.7/site-packages/sqlalchemy/engine/default.py,
 line 435, in do_execute
 cursor.execute(statement, parameters)
 sqlalchemy.exc.ProgrammingError: (ProgrammingError) missing
 FROM-clause entry for table "engineer"
 LINE 3: FROM employee ORDER BY engineer.specialty 
^
  'SELECT car.id AS car_id, car.employee_id AS car_employee_id,
 anon_1.employee_id AS anon_1_employee_id \nFROM (SELECT
 employee.id AS employee_id \nFROM employee ORDER BY
 engineer.specialty \n LIMIT %(param_1)s) AS anon_1 JOIN car ON
 anon_1.employee_id = car.employee_id ORDER BY anon_1.employee_id'
 {'param_1': 1}


 It looks like SQLAlchemy cannot figure out that the subquery needs to
 include a join to engineer. Is there some magical incantation that
 would make it work?

there's a little bit of a glitch here, however in any case, the ORDER BY
would be from E.Engineer.specialty.

The glitch is that the subq load at the moment seems to need an
additional hint as to what its selecting from:

session.query(E).select_from(E).options(subqueryload(E.cars)).order_by(E.Engineer.specialty).first()

will look into that.

Re: [sqlalchemy] subqueryload relationship in polymorphic base class with order_by on subclass column results in (ProgrammingError) missing FROM-clause entry

2014-06-30 Thread Mike Bayer

On 6/30/14, 7:03 PM, Mike Bayer wrote:


 there's a little bit of a glitch here, however in any case, the ORDER
 BY would be from E.Engineer.specialty.

 The glitch is that the subq load at the moment seems to need an
 additional hint as to what its selecting from:

 session.query(E).select_from(E).options(subqueryload(E.cars)).order_by(E.Engineer.specialty).first()

 will look into that.

OK, for now
session.query(E).select_from(E).options(subqueryload(E.cars)).order_by(Engineer.specialty).first()
is fine (don't need E.Engineer in the order_by), the issue is
https://bitbucket.org/zzzeek/sqlalchemy/issue/3106/subquery-load-on-with_polymorphic-entity
and with the patch there once I get tests in, the query will work as you
had it originally.





Re: [sqlalchemy] subqueryload relationship in polymorphic base class with order_by on subclass column results in (ProgrammingError) missing FROM-clause entry

2014-06-30 Thread Mike Bayer

On 6/30/14, 7:29 PM, Jack Zhou wrote:
 Thanks for the quick response, Mike!

that's all fixed in master / rel_1_0 and rel_0_9 branches (as you know I
like to fix these deep polymorphic loader issues ASAP)





 On Mon, Jun 30, 2014 at 4:26 PM, Mike Bayer mike...@zzzcomputing.com wrote:


 On 6/30/14, 7:03 PM, Mike Bayer wrote:


 there's a little bit of a glitch here, however in any case, the
 ORDER BY would be from E.Engineer.specialty.

 The glitch is that the subq load at the moment seems to need an
 additional hint as to what its selecting from:

 
 session.query(E).select_from(E).options(subqueryload(E.cars)).order_by(E.Engineer.specialty).first()

 will look into that.

 OK, for now
 
 session.query(E).select_from(E).options(subqueryload(E.cars)).order_by(Engineer.specialty).first()
 is fine (don't need E.Engineer in the order_by), the issue is
 
 https://bitbucket.org/zzzeek/sqlalchemy/issue/3106/subquery-load-on-with_polymorphic-entity
 and with the patch there once I get tests in, the query will work
 as you had it originally.








Re: [sqlalchemy] Postgresql - Index on a json field

2014-06-30 Thread Mike Bayer
SQLAlchemy's API allows CREATE INDEX via the Index construct:
http://docs.sqlalchemy.org/en/rel_0_9/core/constraints.html?highlight=index#indexes


On 6/30/14, 6:21 PM, Phillip Aquilina wrote:
 Using postgresql, I have a JSON type column. My understanding from
 their docs was that only jsonb columns could have an index created on
 them (a feature of postgresql 9.4) but then I found an SO answer
 http://stackoverflow.com/questions/17807030/how-to-create-index-on-json-field-in-postgres-9-3
  that
 said otherwise. I haven't had the chance to test it since I'm away
 from my dev environment, but the sqlalchemy docs seem to support this
 idea
 http://docs.sqlalchemy.org/en/rel_0_9/dialects/postgresql.html?highlight=json#sqlalchemy.dialects.postgresql.JSON,
 mentioning the availability of Index operations.

 Unless I'm missing something obvious (very possible), it seems like
 this can be done through sql, but is there a way to create an index on
 a json field through the sqlalchemy api? I can't seem to find a way to
 do this.

 Thanks,
 Phil



Re: [sqlalchemy] Postgresql - Index on a json field

2014-06-30 Thread Mike Bayer
I'm not familiar with any other style of index for this column type.  
If you can show me at
http://www.postgresql.org/docs/9.4/static/datatype-json.html or wherever
what specific DDL you're looking for, you can simply emit it using
engine.execute(ddl).


On 6/30/14, 11:02 PM, Phillip Aquilina wrote:
 Thanks for replying. I've read through that doc and I still don't see
 how that addresses my question. Is there somewhere in there that
 describes how to create an index on a json field?  It seems like to me
 it's simple to create an index on a column but this would be creating
 an index on nested data inside the column.

 - Phil

 On Monday, June 30, 2014 6:07:51 PM UTC-7, Michael Bayer wrote:

 SQLAlchemy's API allows CREATE INDEX via the Index construct:
 
  http://docs.sqlalchemy.org/en/rel_0_9/core/constraints.html?highlight=index#indexes


 On 6/30/14, 6:21 PM, Phillip Aquilina wrote:
 Using postgresql, I have a JSON type column. My understanding
 from their docs was that only jsonb columns could have an index
 created on them (a feature of postgresql 9.4) but then I found an
 SO answer
 
 http://stackoverflow.com/questions/17807030/how-to-create-index-on-json-field-in-postgres-9-3
  that
 said otherwise. I haven't had the chance to test it since I'm
 away from my dev environment, but the sqlalchemy docs seem to
 support this idea
 
 http://docs.sqlalchemy.org/en/rel_0_9/dialects/postgresql.html?highlight=json#sqlalchemy.dialects.postgresql.JSON,
 mentioning the availability of Index operations.

 Unless I'm missing something obvious (very possible), it seems
 like this can be done through sql, but is there a way to create
 an index on a json field through the sqlalchemy api? I can't seem
 to find a way to do this.

 Thanks,
 Phil




Re: [sqlalchemy] Postgresql - Index on a json field

2014-06-30 Thread Mike Bayer
per the SO answer, you're looking for CREATE INDEX ON
publishers((info->'name'));.  Either you can emit this directly as a
string, or use Index, just as it states:

from sqlalchemy import create_engine, Integer, Index, Table, Column,
MetaData
from sqlalchemy.dialects.postgresql import JSON

e = create_engine("postgresql://scott:tiger@localhost/test", echo=True)

m = MetaData()
publishers = Table('publishers', m, Column('id', Integer),
Column('info', JSON))
Index("foo", publishers.c.info['name'].astext)

m.create_all(e)

output:

CREATE TABLE publishers (
id INTEGER,
info JSON
)


CREATE INDEX foo ON publishers ((info ->> 'name'))





On 7/1/14, 1:14 AM, Mike Bayer wrote:
 I'm not familiar with any other style of index for this column type.  
 If you can show me at
 http://www.postgresql.org/docs/9.4/static/datatype-json.html or
 wherever what specific DDL you're looking for, you can simply emit it
 using engine.execute(ddl).


 On 6/30/14, 11:02 PM, Phillip Aquilina wrote:
 Thanks for replying. I've read through that doc and I still don't see
 how that addresses my question. Is there somewhere in there that
 describes how to create an index on a json field?  It seems like to
 me it's simple to create an index on a column but this would be
 creating an index on nested data inside the column.

 - Phil

 On Monday, June 30, 2014 6:07:51 PM UTC-7, Michael Bayer wrote:

 SQLAlchemy's API allows CREATE INDEX via the Index construct:
 
  http://docs.sqlalchemy.org/en/rel_0_9/core/constraints.html?highlight=index#indexes


 On 6/30/14, 6:21 PM, Phillip Aquilina wrote:
 Using postgresql, I have a JSON type column. My understanding
 from their docs was that only jsonb columns could have an index
 created on them (a feature of postgresql 9.4) but then I
 found an SO answer
 
 http://stackoverflow.com/questions/17807030/how-to-create-index-on-json-field-in-postgres-9-3
  that
 said otherwise. I haven't had the chance to test it since I'm
 away from my dev environment, but the sqlalchemy docs seem to
 support this idea
 
 http://docs.sqlalchemy.org/en/rel_0_9/dialects/postgresql.html?highlight=json#sqlalchemy.dialects.postgresql.JSON,
 mentioning the availability of Index operations.

 Unless I'm missing something obvious (very possible), it seems
 like this can be done through sql, but is there a way to create
 an index on a json field through the sqlalchemy api? I can't
 seem to find a way to do this.

 Thanks,
 Phil





Re: [sqlalchemy] failed to locate a name error when using model with relationship

2014-07-01 Thread Mike Bayer

On 7/1/14, 1:17 PM, trusted...@gmail.com wrote:
 I have two classes, Artwork and Person. Artwork has a relationship to Person.
 However, when I try to use them, I get an error thrown:

 InvalidRequestError: When initializing mapper
 Mapper|Artwork|artwork, expression 'Person' failed to locate a
 name (name 'Person' is not defined). If this is a class name,
 consider adding this relationship() to the class 'model.Artwork'
 class after both dependent classes have been defined.



 Here are the classes themselves, defined in model/__init__.py
 class Artwork(db.Model, SimpleSerializeMixin):
 id = db.Column(db.Integer, primary_key=True)
 
 artist_id = db.Column(db.String(256), db.ForeignKey('person.sub'))
 artist = db.relationship('Person', backref='artworks')

 class Person(db.Model, SimpleSerializeMixin):
 sub = db.Column(db.String(256), primary_key=True)


 I checked with debugger and in
 sqlalchemy/ext/declarative/clsregistry.py (_class_resolver.__call__())
 there is a line:
 x = eval(self.arg, globals(), self._dict)
 No Person or Artwork or any other class defined in the file are
 present in globals(). self._dict is empty
 So it fails with an NameError exception.

 What could be the issue ?

it's usually that the Person code wasn't run, e.g. that the module in
which it is located was not imported, before you tried to use the
Artwork class.  All the tables/classes can be introduced to the Python
interpreter in any order, but once you try to use the mapping, e.g.
make an object or run a query, it resolves all the links and everything
has to be present.
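To make that resolution order concrete, here is a minimal runnable sketch (an assumption on my part: plain declarative classes standing in for the Flask-SQLAlchemy `db.Model` ones above, and SQLAlchemy 1.4+ import paths):

```python
from sqlalchemy import Column, ForeignKey, Integer, String
from sqlalchemy.orm import configure_mappers, declarative_base, relationship

Base = declarative_base()

# Artwork names 'Person' as a string before Person exists...
class Artwork(Base):
    __tablename__ = 'artwork'
    id = Column(Integer, primary_key=True)
    artist_id = Column(String(256), ForeignKey('person.sub'))
    artist = relationship('Person', backref='artworks')

# ...which is fine, as long as Person is defined before the mappers
# are first used.
class Person(Base):
    __tablename__ = 'person'
    sub = Column(String(256), primary_key=True)

# Force the resolution step that normally happens on first use; this is
# the point where a missing Person would raise InvalidRequestError.
configure_mappers()
print(Artwork.artist.property.mapper.class_ is Person)  # True
```

Calling `configure_mappers()` right after module import is also a handy way to surface this class of error early, instead of at the first query.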






Re: [sqlalchemy] Regarding the use of reciprocal relationships

2014-07-01 Thread Mike Bayer

On 7/1/14, 4:54 PM, Ken Lareau wrote:
 Related to one of my recent posted threads here, I'm recalling a certain
 conversation at PyCon where I was mentioning how a friend would define
 a many-to-many relationship by defining a relationship on both declarative
 classes involved, each pointing to the other, and the look of abject
 horror
 I received gave a good indication that this is very often not a good
 thing.
 However, I've run into a situation where this has proven useful:

 class Deployment(Base):
     __tablename__ = 'deployments'

     id = Column(u'DeploymentID', INTEGER(), primary_key=True)
     package_id = Column(
         INTEGER(),
         ForeignKey('packages.package_id', ondelete='cascade'),
         nullable=False
     )

     package = relationship('Package')


 class Package(Base):
     __tablename__ = 'packages'

     id = Column(u'package_id', INTEGER(), primary_key=True)
     pkg_name = Column(String(length=255), nullable=False)
     version = Column(String(length=63), nullable=False)
     revision = Column(String(length=63), nullable=False)

     deployments = relationship('Deployment')

 In this case, most of the time I'm looking to determine which deployments
 a given package belongs to, but there are times when I have a given
 deployment and am curious as to what package(s) are related to it, and
 unless I'm misunderstanding something (which I won't rule out I could
 be), a backref won't easily help in this instance.
I'm not actually familiar with what you mean by a backref won't easily
help.   Above, Package.deployments and Deployment.package refer to the
same linkage between the two classes, just in different directions.   A
backref above is equivalent to what you have, plus:

class Deployment(Base):
    # ...

    package = relationship(Package, back_populates='deployments')

class Package(Base):
    # ...

    deployments = relationship(Deployment, back_populates='package')

the importance of back_populates (or backref, essentially a shortcut to
the same thing) is mostly in how the collection or attribute responds to
in-Python changes and to a lesser (but sometimes important) extent how
the attributes are treated during the flush process.   The two
relationships don't have to be related to each other at the Python level
like this, but there's usually no reason to omit this information as it
only allows the ORM to better keep the two sides in sync.
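A small runnable sketch of that in-Python syncing (an assumption on my part: the two classes above trimmed down, with SQLAlchemy 1.4+ import paths):

```python
from sqlalchemy import Column, ForeignKey, Integer, String
from sqlalchemy.orm import declarative_base, relationship

Base = declarative_base()

class Package(Base):
    __tablename__ = 'packages'
    id = Column('package_id', Integer, primary_key=True)
    pkg_name = Column(String(255))
    deployments = relationship('Deployment', back_populates='package')

class Deployment(Base):
    __tablename__ = 'deployments'
    id = Column('DeploymentID', Integer, primary_key=True)
    package_id = Column(Integer, ForeignKey('packages.package_id'))
    package = relationship('Package', back_populates='deployments')

# Setting one side populates the other immediately, in Python,
# before any flush or query takes place.
d = Deployment()
p = Package(pkg_name='tds')
d.package = p
print(d in p.deployments)  # True
```

Without `back_populates` (or `backref`), `p.deployments` would stay empty until the change had been flushed and the collection reloaded.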


 Of course, one of the gotchas in using this (and we did hit it in one of
 our queries) is if not careful, one can trigger a nice 'maximum recursion
 depth exceeded' exception (in our particular case, a 'joinedload' within
 the query was able to resolve that issue) and I'm sure there are other
 potential gotchas, so I guess this leads back to the main question at hand:
I don't doubt that this can happen but I'm not familiar at the moment
what the nature of this recursion issue would be.  

 Are there times when using reciprocal relationships is okay, and are
 there certain things that should be done to mitigate potential issues that
 can be caused by doing so, or are there better ways to accomplish the
 same thing?

It's not clear to me what the thing is that you want to accomplish
here.  If it's just, you want to set up the two relationships as
explicit code for readability, that's great, use back_populates.   This
is probably how apps should be doing it anyway, in the early SQLAlchemy
days there was a lot of pressure to not require too much boilerplate,
hence backref.   These days, the community has moved well past the
whole notion of super minimal declaration / magic == good, thankfully.




Re: [sqlalchemy] senseless warning messages escape python logging

2014-07-02 Thread Mike Bayer
we use the warnings filter to turn warnings into exceptions, either from
the Python command line or programmatically:

https://docs.python.org/2/library/warnings.html
https://docs.python.org/2/using/cmdline.html#cmdoption-W
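For example (a sketch; the message pattern is copied from the warning text quoted below and may need tuning to your driver's exact wording):

```python
import warnings

# Escalate matching warnings to exceptions so the offending statement
# fails loudly, with a full traceback, instead of printing one line.
warnings.filterwarnings('error', message='Data truncated')

caught = None
try:
    # stand-in for the warning MySQLdb emits inside cursor.execute()
    warnings.warn("Data truncated for column 'sendungref1' at row 1")
except Warning as w:
    caught = w

print(caught)
```

With the filter installed, the truncated-column condition surfaces at the exact `cursor.execute()` call, where your application's logging and error handling can see it.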




On 7/2/14, 3:26 AM, Hans-Peter Jansen wrote:
 Dear SQLAchemistas,

 this is an issue, that my apps choke on from time to time, _related_ to SQLA. 

 Although, logging is set up correctly, some operations spit out senseless 
 warning messages like this:

 /usr/lib/python2.6/site-packages/sqlalchemy/engine/default.py:324: Warning: 
 Data truncated for column 'sendungref1' at row 1
   cursor.execute(statement, parameters)

 FYI, this is the only message caught by cron, from code that is exercised 
 heavily.

 Sure, I understand, that with some probability, this is triggered from MySql-
 python-1.2.3 under the covers, and I know, that the value given for this 
 column was too big, but without any context, it doesn't help in any way. 

 So strictly speaking, I'm barking up the wrong tree, but the question is, 
 have 
 you figured out a way to either catch or suppress those warnings? Is there a 
 mysql adapter, that cooperates better in this respect?

 Thanks for your insights,

 Pete




Re: [sqlalchemy] failed to locate a name error when using model with relationship

2014-07-02 Thread Mike Bayer

On 7/2/14, 10:21 AM, trusted...@gmail.com wrote:
 Hi Michael, thank you for the answer.
 Both classes are in the same file so I don't see how it could be
 possible that one class is used while other is not imported.
 Could you help me with that ?

that would mean you're doing something that is invoking Artwork as a
mapped class and causing the mapper config step to occur before it gets
down to Person.   The stack trace here would show exactly where that
originates.






 Andrey

 Tuesday, July 1, 2014 at 21:05:11 UTC+3, Michael Bayer wrote:


 On 7/1/14, 1:17 PM, trust...@gmail.com wrote:
 I have two classes, Artwork and Person. Artwork has a relationship to 
 Person.
 However, when I try to use them, I get an error thrown:

 InvalidRequestError: When initializing mapper
 Mapper|Artwork|artwork, expression 'Person' failed to locate
 a name (name 'Person' is not defined). If this is a class
 name, consider adding this relationship() to the class
 'model.Artwork' class after both dependent classes have been
 defined.



 Here are the classes themselves, defined in model/__init__.py
 class Artwork(db.Model, SimpleSerializeMixin):
     id = db.Column(db.Integer, primary_key=True)

     artist_id = db.Column(db.String(256), db.ForeignKey('person.sub'))
     artist = db.relationship('Person', backref='artworks')

 class Person(db.Model, SimpleSerializeMixin):
     sub = db.Column(db.String(256), primary_key=True)


 I checked with debugger and in
 sqlalchemy/ext/declarative/clsregistry.py
 (_class_resolver.__call__())
 there is a line:
 x = eval(self.arg, globals(), self._dict)
 No Person or Artwork or any other class defined in the file
 are present in globals(). self._dict is empty
 So it fails with an NameError exception.

 What could be the issue ?

 it's usually that the Person code wasn't run, e.g. that the module
 in which it is located was not imported, before you tried to use
 the Artwork class.  All the tables/classes can be introduced to
 the Python interpreter in any order, but once you try to use the
 mapping, e.g. make an object or run a query, it resolves all the
 links and everything has to be present.






Re: [sqlalchemy] Postgresql - Index on a json field

2014-07-02 Thread Mike Bayer

On 7/2/14, 11:38 AM, Phillip Aquilina wrote:
 This worked as described. Thanks again. I have a followup question.
  It doesn't seem like there's an analog to
 table.create(checkfirst=True) for an Index.  I found this issue
 https://bitbucket.org/zzzeek/sqlalchemy/issue/527/indexcreate-should-take-checkfirst
 that seems to mention having this functionality but it doesn't look
 like it's been implemented? Is there a normal workaround for this?

typically Index is bundled with its parent Table, and the conditional
aspect of it comes from the Table being created conditionally.   
Otherwise, if the Index is added after the fact, typically people are
using migration tools to get that so that's where the conditional aspect
comes in.   So the case where Index.create() really needs conditional
behavior is slim.  For now you can use the inspector:

from sqlalchemy import inspect
insp = inspect(engine)
for idx in insp.get_indexes('tablename'):
    if idx['name'] == 'myname':
        break
else:
    Index('myname', x, y, z).create(engine)





 On Tuesday, July 1, 2014 10:03:40 AM UTC-7, Phillip Aquilina wrote:

 Ah! I'll give that a try. Thanks Mike.

 On Monday, June 30, 2014 10:23:13 PM UTC-7, Michael Bayer wrote:

 per the SO answer, you're looking for CREATE INDEX ON
 publishers((info ->> 'name'));.  Either you can emit this
 directly as a string, or use Index, just as it states:

 from sqlalchemy import create_engine, Integer, Index, Table, \
     Column, MetaData
 from sqlalchemy.dialects.postgresql import JSON

 e = create_engine("postgresql://scott:tiger@localhost/test",
                   echo=True)

 m = MetaData()
 publishers = Table('publishers', m, Column('id', Integer),
                    Column('info', JSON))
 Index("foo", publishers.c.info['name'].astext)

 m.create_all(e)

 output:

 CREATE TABLE publishers (
 id INTEGER,
 info JSON
 )


 CREATE INDEX foo ON publishers ((info ->> 'name'))
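For completeness, a sketch showing that the same `astext` expression both renders the index DDL and can be reused in queries, where matching the indexed expression is what lets Postgres use the functional index (assumes SQLAlchemy 1.4+ `select()` style):

```python
from sqlalchemy import Column, Index, Integer, MetaData, Table, select
from sqlalchemy.dialects import postgresql
from sqlalchemy.dialects.postgresql import JSON
from sqlalchemy.schema import CreateIndex

m = MetaData()
publishers = Table('publishers', m,
                   Column('id', Integer),
                   Column('info', JSON))
idx = Index('foo', publishers.c.info['name'].astext)

# DDL for the expression index; renders the ->> operator
print(CreateIndex(idx).compile(dialect=postgresql.dialect()))

# a query against the same expression
stmt = select(publishers.c.id).where(
    publishers.c.info['name'].astext == 'Acme')
print(stmt.compile(dialect=postgresql.dialect()))
```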





 On 7/1/14, 1:14 AM, Mike Bayer wrote:
 I'm not familiar with any other style of index for this
 column type.   If you can show me at
 http://www.postgresql.org/docs/9.4/static/datatype-json.html or
 wherever what specific DDL you're looking for, you can simply
 emit it using engine.execute(ddl).


 On 6/30/14, 11:02 PM, Phillip Aquilina wrote:
 Thanks for replying. I've read through that doc and I still
 don't see how that addresses my question. Is there somewhere
 in there that describes how to create an index on a json
 field?  It seems like to me it's simple to create an index
 on a column but this would be creating an index on nested
 data inside the column.

 - Phil

 On Monday, June 30, 2014 6:07:51 PM UTC-7, Michael Bayer wrote:

 SQLAlchemy's API allows CREATE INDEX via the Index
 construct:
 
 http://docs.sqlalchemy.org/en/rel_0_9/core/constraints.html?highlight=index#indexes


 On 6/30/14, 6:21 PM, Phillip Aquilina wrote:
 Using postgresql, I have a JSON type column. My
 understanding from their docs was that only jsonb
 columns could have an index created on them (a feature
 of postgresql 9.4) but then I found an SO answer
 
 http://stackoverflow.com/questions/17807030/how-to-create-index-on-json-field-in-postgres-9-3
  that
 said otherwise. I haven't had the chance to test it
 since I'm away from my dev environment, but the
 sqlalchemy docs seem to support this idea
 
 http://docs.sqlalchemy.org/en/rel_0_9/dialects/postgresql.html?highlight=json#sqlalchemy.dialects.postgresql.JSON,
 mentioning the availability of Index operations.

 Unless I'm missing something obvious (very possible),
 it seems like this can be done through sql, but is
 there a way to create an index on a json field through
 the sqlalchemy api? I can't seem to find a way to do this.

 Thanks,
 Phil

Re: [sqlalchemy] defaultdict functionality for association proxies

2014-07-02 Thread Mike Bayer

On 7/2/14, 11:15 AM, Brian Findlay wrote:
 I've since added an event listener to perform a calculation each time
 a UserCourse object is set:


 # Recalculate 'bar' after updating UserCourse
 @event.listens_for(UserCourse.grade, 'set')
 def foo(target, value, oldvalue, initiator):
     courses = DBSession.query(Course).all()
     user = User.from_id(target.user_id)
     bar = 0
     for course in courses:
         bar += user.courses[course.title]
     user.bar = bar


 Here, 'bar' is some calculation involving a user's grade for each
 course. This is a somewhat contrived model (my application isn't
 really about courses and grades), but I thought it'd help to simplify
 my use case.

 There are no issues when a user, the courses, and the user's grades
 already exist in the database. However, when a new user submits a form
 with course grades in it, the 'foo' function is triggered and I get

 AttributeError: 'NoneType' object has no attribute 'courses'

well it's probably related to the fact that the set event is called
before the actual attribute association occurs, perhaps some reentrant
attribute case, not sure.

I'm not sure what the purpose of foo is or how it relates to the
problem stated.  If the desired feature is defaultdict capabilities,
that means, you want to have the get feature of the association proxy
to have special behavior.  It seems like you'd want to subclass
AssociationDict to add that feature.
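As a plain-Python illustration of the hook involved (not the actual association-proxy wiring, which would subclass the proxy's dict class): `dict.__missing__` is what supplies a default on lookup.

```python
# Plain-dict sketch of "defaultdict" lookup behavior: __missing__ is
# called for absent keys and supplies the default value.  A custom
# association-proxy dict class would implement the same hook.
class GradeDict(dict):
    def __missing__(self, key):
        return 0  # default grade for a course with no UserCourse row

grades = GradeDict({'math': 90})
print(grades['math'], grades['art'])  # 90 0
```

Note the default is returned but not stored, so the database-backed collection is never polluted with placeholder rows.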



Re: [sqlalchemy] defaultdict functionality for association proxies

2014-07-02 Thread Mike Bayer

On 7/2/14, 2:59 PM, Brian Findlay wrote:
 Mike, thanks for the response.

 (1) foo updates a particular User attribute based on a calculation
 performed on the user.courses collection. I'm listening for the set
 event on UserCourse objects to trigger foo to update that User
 attribute, but that isn't working with new users because -- as you say
 -- the set event is called before the actual attribute association
 occurs. What is the first event I could listen for that would
 recognize the new attribute association?
in that event I only see grade being set so it's not clear to me what
the bigger picture is.  If this is all within the association proxy
setup and within when a new UserCourse is created, I'd have to step
through w/ pdb to see when things happen, but often with these assoc
proxy cases, building out a custom proxier that does the things you want
is often necessary if you really want sophisticated behaviors.

we don't really have a solution to the attribute events being before the
thing is set.  adding all new after set events isn't possible without
adding even more latency, and attribute mutation operations are already
a huge performance bottleneck.   Association proxies and attribute
events are both handy but they only go so far in their capabilities.  




Re: [sqlalchemy] (Semi-)automated way to adjust constraint names via Alembic?

2014-07-02 Thread Mike Bayer

Well you can get at the names that were used in the DB (using Inspector,
or reflection) as well as the names that are in your metadata
(e.g. [constraint for table in metadata.tables.values()
for constraint in table.constraints]), but as far as matching them up
I'm not sure; it depends on what patterns you can find in the existing
DB that you can use.   Maybe you can write a script that guesses, then
spits out a list of oldname -> newname pairs that you can correct manually.



On 7/2/14, 6:08 PM, Ken Lareau wrote:
 So, in my ongoing quest to make my team's operations database far more
 sane than it currently is, I want to fix all the constraint naming in
 the database
 to match the naming convention setting I have added to my SQLAlchemy
 configuration for the database.  I could of course go through each
 table and
 determine each by hand, but I was wondering if there was a less manual
 (and error-prone) way to approach this, possibly via the autogeneration
 feature?  In case it matters, the database server is MySQL.

 -- 
 - Ken Lareau



Re: [sqlalchemy] Automatically set primary key to None when deleted?

2014-07-02 Thread Mike Bayer

On 7/2/14, 10:05 PM, Paul Molodowitch wrote:
 Suppose I have a super simple table like this:

 class Dinosaur(Base):
 __tablename__ = 'dinosaurs'
 id = Column(Integer, primary_key=True)
 name = Column(String(255))


 We assume that the id is set up in such a way that by default it
 always gets a unique value - ie, it uses autoincrement in MySQL, or a
 sequence in postgres, etc.

 Now, suppose I get an instance of this class, and then delete it:

 steggy = session.query(Dinosaur).filter_by(name='Steggy').one()
 print steggy.id
 session.delete(steggy)
 session.commit()
 print steggy.id


 What I'd ideally like to see is that it first print the id of the row
 that it pulled from the database, and then print 'None':

 30
 None


 Is there any way that I can configure the id column / property so that
 it is automatically cleared on delete like this?

the steggy object is a proxy for a database row.  when you delete that
row, then commit the transaction, the object is detached from the
session, and everything on it is expired.  there is no row.  check
inspect(steggy).deleted, it will say True - that means in your system,
the object is meaningless.  ideally no part of your program would be
looking at that proxy any more, you should throw it away.  it means nothing.

as far as setting everything to None, you could try a handler like this:

@event.listens_for(Session, 'after_flush')
def after_flush(sess, ctx):
    for obj in sess.deleted:
        mapper = inspect(obj)
        for key in mapper.attrs.keys():
            obj.__dict__[key] = None

If not, as a consolation prize, I'd also be interested in the easiest
way to query if a given instance exists in the database - ie, I could do
something like:

 session.exists(steggy)


 OR

 steggy.exists()



from sqlalchemy import inspect

def exists(session, obj):
    state = inspect(obj)
    return session.query(state.mapper).get(state.identity) is not None

print exists(sess, a1)


 ...which, in this case, would simply run a query to see if any
 dinosaurs exist with the name Steggy. 
That's totally different.  That's a WHERE criterion on the name field,
which is not the primary key; that's something specific to your class
there. 


 Needing to set up some extra parameters to make this possible - such
 as adding a unique constraint on the name column -
OK, so you want a function that a. receives an object b. looks for
UNIQUE constraints on it c. queries by those unique constraints (I guess
you want the first one present?  not clear.  a table can have a lot of
unique constraints on it) that would be:

from sqlalchemy import and_, inspect, UniqueConstraint

def exists(session, obj):
    state = inspect(obj)
    table = state.mapper.local_table
    for const in table.constraints:
        if isinstance(const, UniqueConstraint):
            crit = and_(*[col == getattr(obj, col.key) for col in const])
            return session.query(state.mapper).filter(crit).count() > 0
    else:
        return False

the unique constraints are a set though.   not necessarily deterministic
which one it would locate first.  I'd use more of some kind of declared
system on the class:

class X(Base):
    lookup_class_via = ('name',)

    id = Column(Integer, primary_key=True)
    name = Column(String)

from sqlalchemy import and_, inspect

def exists(session, obj):
    state = inspect(obj)
    cols = [getattr(obj.__class__, name)
            for name in obj.__class__.lookup_class_via]
    crit = and_(*[col == getattr(obj, col.key) for col in cols])
    return session.query(state.mapper).filter(crit).count() > 0




Re: [sqlalchemy] failed to locate a name error when using model with relationship

2014-07-03 Thread Mike Bayer

On 7/3/14, 3:23 AM, trusted...@gmail.com wrote:
 Hi, Michael,

 here is the stack trace:
    File "/home/andrey/projects/art/artFlask/api/artList.py", line 30, in post
      item = app_ctx.create_item_from_context()
    File "/home/andrey/projects/art/artFlask/utils/app_ctx.py", line 53, in create_item_from_context
      item = ModelClass(**data)
    File "<string>", line 2, in __init__

this means that at line 30 of your artList.py module, the Person class
hasn't been set up yet, or was set up and failed to do so and for some
reason the error didn't get reported (such as if some unusual system of
loading modules were present).




Re: [sqlalchemy] how to query one-to-many relationships, in a complicated way?

2014-07-03 Thread Mike Bayer

On 7/2/14, 9:24 AM, Chung WONG wrote:
 Hi list,

 For this problem I am even having trouble think of a proper subject
 for it.
 I try my best to express as clear as possible and sorry for any
 confusions.

 Say there are three classes with relationship defined as below: 
 class User(Base):
     __tablename__ = 'users'
     id = Column(Integer, Sequence('user_id_seq'), primary_key=True)

 class Feedback(Base):
     __tablename__ = 'feedbacks'
     id = Column(Integer, Sequence('feedback_id_seq'), primary_key=True)
     service_id = Column(Integer, ForeignKey('services.id'))
     score = Column(Integer)

     service = relationship('Service', backref=backref('feedbacks'))

 class Service(Base):
     __tablename__ = 'services'
     id = Column(Integer, Sequence('service_id_seq'), primary_key=True)
     provider_id = Column(Integer, ForeignKey('users.id'))
     requester_id = Column(Integer, ForeignKey('users.id'))

     provider = relationship('User', foreign_keys=provider_id,
                             backref='services_received')
     requester = relationship('User', foreign_keys=requester_id,
                              backref='services_sent')


 *User* and *Service* is a one to many relationship, and a user can be
 a Service provider or requester.

 For *Service* and *Feedback, *it is a one to many relationship too. A
 requester can give a score to the Service.

 *The question is, how can I get the sum(scores) of all services for a
 user?*
 *
 *
 I thought I could do something like:
 # provider is a given object
 *provider.services_received.feedbacks*
 but it threw an error:
 *AttributeError: 'InstrumentedList' object has no attribute 'feedbacks'*

there's no query(x.y.z.bar) feature in the SQLAlchemy ORM.  To join
across entities, use join(), this is first introduced in the ORM
tutorial at
http://docs.sqlalchemy.org/en/rel_0_9/orm/tutorial.html#querying-with-joins.
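A runnable sketch of that join for the sum-of-scores question (the model quoted above, trimmed, against an in-memory SQLite database; SQLAlchemy 1.4+ import paths are an assumption):

```python
from sqlalchemy import Column, ForeignKey, Integer, create_engine, func
from sqlalchemy.orm import Session, declarative_base, relationship

Base = declarative_base()

class User(Base):
    __tablename__ = 'users'
    id = Column(Integer, primary_key=True)

class Service(Base):
    __tablename__ = 'services'
    id = Column(Integer, primary_key=True)
    provider_id = Column(Integer, ForeignKey('users.id'))
    requester_id = Column(Integer, ForeignKey('users.id'))
    provider = relationship('User', foreign_keys=provider_id,
                            backref='services_received')
    requester = relationship('User', foreign_keys=requester_id,
                             backref='services_sent')

class Feedback(Base):
    __tablename__ = 'feedbacks'
    id = Column(Integer, primary_key=True)
    service_id = Column(Integer, ForeignKey('services.id'))
    score = Column(Integer)
    service = relationship('Service', backref='feedbacks')

engine = create_engine('sqlite://')
Base.metadata.create_all(engine)
session = Session(engine)

provider, requester = User(), User()
svc = Service(provider=provider, requester=requester)
session.add_all([svc, Feedback(service=svc, score=4),
                 Feedback(service=svc, score=5)])
session.commit()

# join from Feedback to Service, filter on the provider, aggregate
total = (session.query(func.sum(Feedback.score))
         .select_from(Feedback)
         .join(Service, Feedback.service_id == Service.id)
         .filter(Service.provider_id == provider.id)
         .scalar())
print(total)  # 9
```

The aggregation happens in SQL, so no Feedback rows are loaded into Python at all.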



Re: [sqlalchemy] Automatically set primary key to None when deleted?

2014-07-03 Thread Mike Bayer

On 7/3/14, 1:01 PM, Paul Molodowitch wrote:


 That makes sense... but if it really means nothing, and we
 shouldn't be looking at it, then why keep its attributes around at all? 
because it is an additional step to actually erase the attributes and
just hadn't been considered.


 Particularly since sqlalchemy has already established that it's
 willing to expire dict members when they may not be valid anymore -
 ie, what it does to clear any cached values from a row proxy after
 the session is committed.
well it doesn't expire the deleted object right now because it's been
evicted from the Session by the time the commit goes to expire
things. Changing that behavior now would definitely bite a lot of
people who depend on how it is right now (other people who are also
looking at their deleted objects against my recommendations... :) )





 Of course, you could make the case that other pieces of the program
 may want to inspect the data that was on there, after the fact...
 maybe you're going to print out something that says, RIP Steggy, or
 something - but in that case, the one field that really DOESN'T make
 any sense in this case (and it seems like it would be a common
 pattern!) is the one that exists solely as a means to look it up in
 the database, it's auto-incremented id column.  Which is what prompted
 this question...

well all the cols don't exist anymore, not just the primary key. the
inspect(obj).deleted call does allow this information (that the object
was deleted) to be known, though not very transparently.

 from sqlalchemy import inspect
 def exists(session, obj):
 state = inspect(obj)
 return session.query(state.mapper).get(state.identity) is None

 print exists(sess, a1)


 Hmm... very interesting.  I'll have to read up what what exactly this
 is doing (ie, what is state.identity?)...
it's documented here:
http://docs.sqlalchemy.org/en/rel_0_9/orm/internals.html?highlight=instancestate#sqlalchemy.orm.state.InstanceState.identity

 

 from sqlalchemy import inspect, UniqueConstraint
 def exists(session, obj):
 state = inspect(obj)
 table = state.mapper.local_table
 for const in table.constraints:
 if isinstance(const, UniqueConstraint):
crit = and_(*[col == getattr(obj, col.key) for col in
 const])
return session.query(state.mapper).filter(crit).count()  0
 else:
return False


 Yep, it looks like that's doing basically what I was thinking of.  Thanks!

 the unique constraints are a set though.   not necessarily
 deterministic which one it would locate first.  I'd use more of
 some kind of declared system on the class:


 Not clear on why this matters - if we're iterating through all the
 constraints, and returning True if any of them is matched, what
 difference does it make which one is evaluated first?  Except
 potentially from a performance standpoint, I suppose...
what if there are two constraints, and only one is satisfied for a given
object's values (e.g. that constraint is now satisfied by some other
row) while the other is not?  Is the answer True or False?  




Re: [sqlalchemy] failed to locate a name error when using model with relationship

2014-07-03 Thread Mike Bayer
to start with, you have an import cycle. fix that first.

classics-MacBook-Pro-2:artFlask classic$ .venv/bin/python
Python 2.7.5 (default, Mar  7 2014, 19:17:16)
[GCC 4.2.1 Compatible Apple LLVM 5.0 (clang-500.2.79)] on darwin
Type "help", "copyright", "credits" or "license" for more information.
>>> import model
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "model/__init__.py", line 1, in <module>
    from app import db
  File "app/__init__.py", line 46, in <module>
    import views
  File "/Users/classic/Desktop/tmp/artFlask/views.py", line 4, in <module>
    from api.artists import Artists
  File "/Users/classic/Desktop/tmp/artFlask/api/artists.py", line 32, in <module>
    from utils.app_ctx import ApplicationContext
  File "/Users/classic/Desktop/tmp/artFlask/utils/app_ctx.py", line 1, in <module>
    from model import Artwork
ImportError: cannot import name Artwork


well fix the package part first.  then fix the import cycle.
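One common way out of such a cycle is to defer one of the imports into a function body; here's a minimal self-contained demo of that (the module names cycle_a/cycle_b are invented, written to a temp directory just for illustration):

```python
# Demo of breaking an import cycle by deferring one import to call time.
import os
import sys
import tempfile
import textwrap

tmp = tempfile.mkdtemp()

with open(os.path.join(tmp, "cycle_a.py"), "w") as f:
    f.write(textwrap.dedent("""\
        import cycle_b        # a -> b at import time is fine

        VALUE = "from cycle_a"

        def use_b():
            return cycle_b.helper()
    """))

with open(os.path.join(tmp, "cycle_b.py"), "w") as f:
    f.write(textwrap.dedent("""\
        def helper():
            # b -> a is deferred into the function body, so it runs
            # only after both modules have finished importing.
            import cycle_a
            return cycle_a.VALUE
    """))

sys.path.insert(0, tmp)
import cycle_a

print(cycle_a.use_b())  # from cycle_a
```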





On 7/3/14, 1:39 PM, Mike Bayer wrote:
 this project needs:

 1. a package

 2. a setup.py file

 3. a requirements.txt file


 I am running in each dependency individually as I get import errors. 
 Very painful.




 On 7/3/14, 1:19 PM, trusted...@gmail.com wrote:
 Mike, here is the source for the models:
 https://github.com/ArtAPI/artFlask/blob/refactoring/model/__init__.py

 I would really appreciate your feedback on this; I'm really new to
 Flask and SQLAlchemy.
 The other source files (artList.py etc) are also there.

 Thank you,
 Andrey

 On Thursday, July 3, 2014 at 17:33:09 UTC+3, Michael Bayer wrote:


 On 7/3/14, 3:23 AM, trust...@gmail.com wrote:
  Hi, Michael,
 
  here is the stack trace:
   File "/home/andrey/projects/art/artFlask/api/artList.py", line 30, in post
     item = app_ctx.create_item_from_context()
   File "/home/andrey/projects/art/artFlask/utils/app_ctx.py", line 53, in create_item_from_context
     item = ModelClass(**data)
   File "<string>", line 2, in __init__

 this means on line 30 of your artList.py class, the Person class
 hasn't been set up yet, or was set up and failed to do so and for
 some
 reason the error didn't get reported (such as if some unusual
 system of
 loading modules were present).




Re: [sqlalchemy] failed to locate a name error when using model with relationship

2014-07-03 Thread Mike Bayer
this project needs:

1. a package

2. a setup.py file

3. a requirements.txt file


I am running in each dependency individually as I get import errors. 
Very painful.




On 7/3/14, 1:19 PM, trusted...@gmail.com wrote:
 Mike, here is the source for the models:
 https://github.com/ArtAPI/artFlask/blob/refactoring/model/__init__.py

 I would really appreciate your feedback on this; I'm really new to
 Flask and SQLAlchemy.
 The other source files (artList.py etc) are also there.

 Thank you,
 Andrey

 On Thursday, July 3, 2014 at 17:33:09 UTC+3, Michael Bayer wrote:


 On 7/3/14, 3:23 AM, trust...@gmail.com wrote:
  Hi, Michael,
 
  here is the stack trace:
   File "/home/andrey/projects/art/artFlask/api/artList.py", line 30, in post
     item = app_ctx.create_item_from_context()
   File "/home/andrey/projects/art/artFlask/utils/app_ctx.py", line 53, in create_item_from_context
     item = ModelClass(**data)
   File "<string>", line 2, in __init__

 this means on line 30 of your artList.py class, the Person class
 hasn't been set up yet, or was set up and failed to do so and for
 some
 reason the error didn't get reported (such as if some unusual
 system of
 loading modules were present).




Re: [sqlalchemy] Reuse pre-defined Enum?

2014-07-03 Thread Mike Bayer
this works fine for me:

def upgrade():
    mytype = ENUM('a', 'b', 'c', create_type=False, name='myenum')

    mytype.create(op.get_bind(), checkfirst=False)

    op.create_table('t1',
        sa.Column('id', sa.Integer(), nullable=False),
        sa.Column('col1', mytype)
    )

    op.create_table('t2',
        sa.Column('id', sa.Integer(), nullable=False),
        sa.Column('col1', mytype)
    )



output:

INFO  [alembic.migration] Running upgrade None -> 47bed260c052, initial rev
INFO  [sqlalchemy.engine.base.Engine]
SELECT EXISTS (
SELECT * FROM pg_catalog.pg_type t
WHERE t.typname = %(typname)s
AND pg_type_is_visible(t.oid)
)
   
INFO  [sqlalchemy.engine.base.Engine] {'typname': u'myenum'}
INFO  [sqlalchemy.engine.base.Engine] CREATE TYPE myenum AS ENUM ('a',
'b', 'c')
INFO  [sqlalchemy.engine.base.Engine] {}
INFO  [sqlalchemy.engine.base.Engine]
CREATE TABLE t1 (
id INTEGER NOT NULL,
col1 myenum
)


INFO  [sqlalchemy.engine.base.Engine] {}
INFO  [sqlalchemy.engine.base.Engine]
CREATE TABLE t2 (
id INTEGER NOT NULL,
col1 myenum
)


INFO  [sqlalchemy.engine.base.Engine] {}
INFO  [sqlalchemy.engine.base.Engine] INSERT INTO alembic_version
(version_num) VALUES ('47bed260c052')
INFO  [sqlalchemy.engine.base.Engine] {}
INFO  [sqlalchemy.engine.base.Engine] COMMIT

works with --sql too, output:

CREATE TYPE myenum AS ENUM ('a', 'b', 'c');

CREATE TABLE t1 (
id INTEGER NOT NULL,
col1 myenum
);

CREATE TABLE t2 (
id INTEGER NOT NULL,
col1 myenum
);

INSERT INTO alembic_version (version_num) VALUES ('47bed260c052');

COMMIT;





On 7/3/14, 3:23 PM, Anton wrote:
 Hey guys,

 I am facing the same issue, I am using PostgreSQL and want to use
 native ENUM type in two tables. For migration I am going to use alembic.
 Did you guys find out any way to do this?

 Thanks,
 Anton

 On Thursday, May 1, 2014 10:33:21 AM UTC-7, Michael Bayer wrote:

 OK.  ENUM is something I’d have to dedicate several days of
 attention on :(


 On May 1, 2014, at 1:32 PM, Vlad Wing vlad...@gmail.com wrote:

 Yes, that's exactly what happened. I specified create_type=False
 and it was ignored. Alembic tried to create the type anyway and,
 of course, it failed.



Re: [sqlalchemy] failed to locate a name error when using model with relationship

2014-07-03 Thread Mike Bayer
OK thats great but you have a long way to go:

import cycle:

classic$ .venv/bin/python
Python 2.7.5 (default, Mar  7 2014, 19:17:16)
[GCC 4.2.1 Compatible Apple LLVM 5.0 (clang-500.2.79)] on darwin
Type "help", "copyright", "credits" or "license" for more information.
>>> import model
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "model/__init__.py", line 1, in <module>
    from app import db
  File "app/__init__.py", line 46, in <module>
    import views
  File "/Users/classic/Desktop/tmp/artFlask/views.py", line 4, in <module>
    from api.artists import Artists
  File "/Users/classic/Desktop/tmp/artFlask/api/artists.py", line 32, in <module>
    from utils.app_ctx import ApplicationContext
  File "/Users/classic/Desktop/tmp/artFlask/utils/app_ctx.py", line 1, in <module>
    from model import Artwork
ImportError: cannot import name Artwork




tests dont run:

$ .venv/bin/nosetests -v
Failure: ImportError (No module named mail) ... ERROR
Failure: ImportError (No module named app) ... ERROR
Failure: ImportError (No module named app) ... ERROR

==
ERROR: Failure: ImportError (No module named mail)
--
Traceback (most recent call last):
  File
/Users/classic/Desktop/tmp/artFlask/.venv/lib/python2.7/site-packages/nose/loader.py,
line 414, in loadTestsFromName
addr.filename, addr.module)
  File
/Users/classic/Desktop/tmp/artFlask/.venv/lib/python2.7/site-packages/nose/importer.py,
line 47, in importFromPath
return self.importFromDir(dir_path, fqname)
  File
/Users/classic/Desktop/tmp/artFlask/.venv/lib/python2.7/site-packages/nose/importer.py,
line 94, in importFromDir
mod = load_module(part_fqname, fh, filename, desc)
  File /Users/classic/Desktop/tmp/artFlask/app/__init__.py, line 6, in
module
from mail import mail
ImportError: No module named mail

etc.

These are all very basic Python issues.  If you can get on IRC and get
someone to help you get your program just to import in a basic way you
can probably figure out why your models aren't initializing.Good luck!









On 7/3/14, 3:29 PM, trusted...@gmail.com wrote:
 Hi Mike!

 I've updated requirements. Let me check if I didn't commit something.

 On Thursday, July 3, 2014 at 20:41:55 UTC+3, Michael Bayer wrote:

 to start with, you have an import cycle. fix that first.

 classics-MacBook-Pro-2:artFlask classic$ .venv/bin/python
 Python 2.7.5 (default, Mar  7 2014, 19:17:16)
 [GCC 4.2.1 Compatible Apple LLVM 5.0 (clang-500.2.79)] on darwin
 Type "help", "copyright", "credits" or "license" for more information.
 >>> import model
 Traceback (most recent call last):
   File "<stdin>", line 1, in <module>
   File "model/__init__.py", line 1, in <module>
     from app import db
   File "app/__init__.py", line 46, in <module>
     import views
   File "/Users/classic/Desktop/tmp/artFlask/views.py", line 4, in <module>
     from api.artists import Artists
   File "/Users/classic/Desktop/tmp/artFlask/api/artists.py", line 32, in <module>
     from utils.app_ctx import ApplicationContext
   File "/Users/classic/Desktop/tmp/artFlask/utils/app_ctx.py", line 1, in <module>
     from model import Artwork
 ImportError: cannot import name Artwork


 well fix the package part first.  then fix the import cycle.





 On 7/3/14, 1:39 PM, Mike Bayer wrote:
 this project needs:

 1. a package

 2. a setup.py file

 3. a requirements.txt file


  I am running in each dependency individually as I get import
  errors.  Very painful.




 On 7/3/14, 1:19 PM, trust...@gmail.com wrote:
 Mike, here is the source for the models:
  https://github.com/ArtAPI/artFlask/blob/refactoring/model/__init__.py

  I would really appreciate your feedback on this; I'm really new
  to Flask and SQLAlchemy.
 The other source files (artList.py etc) are also there.

 Thank you,
 Andrey

  On Thursday, July 3, 2014 at 17:33:09 UTC+3, Michael Bayer wrote:


 On 7/3/14, 3:23 AM, trust...@gmail.com wrote:
  Hi, Michael,
 
  here is the stack trace:
    File "/home/andrey/projects/art/artFlask/api/artList.py", line 30, in post
      item = app_ctx.create_item_from_context()
    File "/home/andrey/projects/art/artFlask/utils/app_ctx.py", line 53, in create_item_from_context
      item = ModelClass(**data)
    File "<string>", line 2, in __init__

 this means on line 30 of your artList.py class, the Person
 class
 hasn't been set up yet, or was set up and failed to do so
 and for some
 reason the error didn't get reported (such as if some
 unusual system of
 loading

Re: [sqlalchemy] Automatically set cascade settings based on ON DELETE / ON UPDATE when reflecting?

2014-07-03 Thread Mike Bayer

On 7/3/14, 5:45 PM, Paul Molodowitch wrote:
 I noticed that sqlalchemy now properly sets the onupdate / ondelete
 properties of foreign keys when reflecting tables:

 https://bitbucket.org/zzzeek/sqlalchemy/issue/2183/support-on-delete-update-in-foreign-key

 However, it doesn't seem to set the cascade properties of
 relationships to reflect these properties. I.e., if the Child table
 references the Parent table with a foreign key that has ON DELETE
 CASCADE, and the referencing column does not allow NULL, then when you
 delete a parent row that has children, you will get an error,
 because sqlalchemy will try to set the child's ref to NULL.

 ideally we should add delete in the relationship's cascade
 properties (and probably delete-orphan as well), and then set
 passive_updates=True.

 Or am I missing something obvious  / doing something wrong / etc?

the configuration of a Column or ForeignKey has never been directly
linked to how relationship() gets configured.   passive_updates in
particular is a thorny one as not every database supports ON UPDATE
CASCADE, but for that matter not every database even supports ON DELETE
CASCADE.   There's also lots of variants to ON UPDATE and ON DELETE and
SQLAlchemy has no awareness of any of these directly.

If we were to explore some automatic configuration of relationship based
on these attributes of ForeignKey, it would take place within the
automap extension: see
http://docs.sqlalchemy.org/en/rel_0_9/orm/extensions/automap.html. 

There are also recipes such that both relationship() and ForeignKey()
are generated at once, these are also good places for this kind of thing
to happen.  See
https://bitbucket.org/zzzeek/pycon2014_atmcraft/src/a6d96575bc497ce0c952bb81db9c05d054c98bb5/atmcraft/model/meta/orm.py?at=master
for an example of this, I still am thinking of a way recipes like this
could also be integrated into SQLAlchemy, possibly as an enhancement to
declarative.




Re: [sqlalchemy] Automatically set cascade settings based on ON DELETE / ON UPDATE when reflecting?

2014-07-03 Thread Mike Bayer

On 7/3/14, 6:15 PM, Mike Bayer wrote:
 On 7/3/14, 5:45 PM, Paul Molodowitch wrote:
 I noticed that sqlalchemy now properly sets the onupdate / ondelete
 properties of foreign keys when reflecting tables:

 https://bitbucket.org/zzzeek/sqlalchemy/issue/2183/support-on-delete-update-in-foreign-key

 However, it doesn't seem to set the cascade properties of
 relationships to reflect these properties. I.e., if the Child table
 references the Parent table with a foreign key that has ON DELETE
 CASCADE, and the referencing column does not allow NULL, then when you
 delete a parent row that has children, you will get an error,
 because sqlalchemy will try to set the child's ref to NULL.

 ideally we should add delete in the relationship's cascade
 properties (and probably delete-orphan as well), and then set
 passive_updates=True.

 Or am I missing something obvious  / doing something wrong / etc?
 the configuration of a Column or ForeignKey has never been directly
 linked to how relationship() gets configured.   passive_updates in
 particular is a thorny one as not every database supports ON UPDATE
 CASCADE, but for that matter not every database even supports ON DELETE
 CASCADE.   There's also lots of variants to ON UPDATE and ON DELETE and
 SQLAlchemy has no awareness of any of these directly.

 If we were to explore some automatic configuration of relationship based
 on these attributes of ForeignKey, it would take place within the
 automap extension: see
 http://docs.sqlalchemy.org/en/rel_0_9/orm/extensions/automap.html. 

 There are also recipes such that both relationship() and ForeignKey()
 are generated at once, these are also good places for this kind of thing
 to happen.  See
 https://bitbucket.org/zzzeek/pycon2014_atmcraft/src/a6d96575bc497ce0c952bb81db9c05d054c98bb5/atmcraft/model/meta/orm.py?at=master
 for an example of this, I still am thinking of a way recipes like this
 could also be integrated into SQLAlchemy, possibly as an enhancement to
 declarative.

or a flag like autoconfigure=True on relationship().   this would also
set up innerjoin=True for joined eager loading if the FK is not null.  
if the primaryjoin condition is too complex (has multiple FKs),
autoconfigure would raise an exception.




Re: [sqlalchemy] failed to locate a name error when using model with relationship

2014-07-05 Thread Mike Bayer

On 7/3/14, 5:20 PM, trusted...@gmail.com wrote:
 Many thanks, Michael!

 Issue resolved.
 There were a couple of other ForeignKey issues, seems to be okay now.
 So far I understand there is a lot for me to learn, so I was
 wondering - do you by any chance accept apprentices?

 I would be willing to help with writing fixes/minor features, whatever,
 in exchange for your guidance. What do you say?

 No matter what, thank you again for assistance in these issues. I'm
 now going to test and fix everything else :-)
I've figured out a way to mark mostly small issues as sprintable:
https://bitbucket.org/zzzeek/sqlalchemy/issues?status=newstatus=opensort=-priorityresponsible=sqlalchemy_sprinters

most of the work with these is in writing a simple test and a properly
formatted changelog message (I'm tired of always writing those
myself).   It's a lot of wax on/wax off stuff (if you don't get the
reference: http://www.youtube.com/watch?v=fULNUr0rvEc)




Re: [sqlalchemy] Postgresql - Index on a json field

2014-07-05 Thread Mike Bayer

On 7/5/14, 3:14 PM, Phillip Aquilina wrote:
 I finally had a chance to try this with the inspector and quickly
 discovered it doesn't support expression-based indexes with this
 warning, "Skipped unsupported reflection of expression-based index
 some_index".  I can patch this out locally for my own needs, but is
 there a bigger reason this is not yet supported that I should be
 worried about?

on the reflection side, expressions inside of indexes (which is largely
a Postgresql thing) are skipped at the moment.  This doesn't present any
issues.  It only means if you're using Alembic to autogenerate
migrations, you might have to type these expressions into your migration
file manually.
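For example, an expression-based (functional) index can be declared on the SQLAlchemy side, and the same expression typed into the migration by hand; a small sketch with an invented table, using SQLite (which also supports expression indexes):

```python
from sqlalchemy import (Column, Index, Integer, MetaData, String, Table,
                        create_engine, func, text)

metadata = MetaData()
person = Table(
    "person", metadata,
    Column("id", Integer, primary_key=True),
    Column("name", String),
)
# Expression-based index: indexes lower(name) rather than a plain column.
Index("ix_person_name_lower", func.lower(person.c.name))

engine = create_engine("sqlite://")
metadata.create_all(engine)

# Confirm the index was created, via SQLite's own catalog pragma.
with engine.connect() as conn:
    names = [row[1]
             for row in conn.execute(text("PRAGMA index_list('person')"))]

print(names)
```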




Re: [sqlalchemy] Modified mapping in composite association proxy causes conflict with persistent instance

2014-07-08 Thread Mike Bayer
a test case is attached, show me the failure please, thanks.



On 7/8/14, 10:59 AM, Brian Findlay wrote:
 Hi Mike,

 I'm using your variant on the 'unique object' recipe (see previous
 posting http://goo.gl/I1buRz) with some composite association proxies.
 Recently, the data I've been working with introduced a duplicate in
 the property I've been using with attribute_mapped_collection(), so
 I'm trying to modify the collection class such that the key is based
 on a column in the association object instead of a column in the
 'right' table. My modified mapping results in a FlushError when
 attempting to update a UserCourse object because it conflicts with a
 persistent instance.

 _*Basic model:*_
 class User(Base):
     id = Column(Integer, primary_key=True)
     name = Column(Text)

 class Course(Base):
     id = Column(Integer, primary_key=True)
     title = Column(Text, unique=True)

 class UserCourse(Base):
     user_id = Column(Integer, ForeignKey(User.id), primary_key=True)
     course_id = Column(Integer, ForeignKey(Course.id), primary_key=True)
     grade = Column(Integer)

 _*Original use:*_
 user.courses['math'] = 100  # user.courses[course.title] = grade

 _*Desired use:*_
 user.courses['1'] = 100  # user.courses[course.id] = grade

 _*Original model:*_
 class User(Base):
     __tablename__ = 'users'
     id = Column(Integer, primary_key=True)
     name = Column(Text)
     courses = association_proxy('user_courses', 'grade',
         creator=lambda k, v: UserCourse(course_title=k, grade=v))

     def __init__(self, name):
         self.name = name


 class Course(Base):
     __tablename__ = 'courses'
     id = Column(Integer, primary_key=True)
     title = Column(Text, unique=True)

     def __init__(self, title):
         self.title = title


 class UserCourse(Base):
     __tablename__ = 'user_courses'
     user_id = Column(Integer, ForeignKey(User.id), primary_key=True)
     course_id = Column(Integer, ForeignKey(Course.id), primary_key=True)
     grade = Column(Integer)
     user = relationship(
         User,
         backref=backref(
             'user_courses',
             collection_class=attribute_mapped_collection('course_title'),
             cascade='all, delete-orphan'
         )
     )
     course = relationship(Course)

     def __init__(self, course_title, grade):
         self._course_title = course_title
         self.grade = grade

     @property
     def course_title(self):
         if self.course is not None:
             return self.course.title
         else:
             return self._course_title


 @event.listens_for(Session, 'after_attach')
 def after_attach(session, instance):
     if isinstance(instance, UserCourse):
         with session.no_autoflush:
             course = session.query(Course).filter_by(
                 title=instance._course_title).first()
             if course is None:
                 course = Course(title=instance._course_title)
             instance.course = course


 _*Error-producing model modified for desired use:*_
 class User(Base):
     __tablename__ = 'users'
     id = Column(Integer, primary_key=True)
     name = Column(Text)
     courses = association_proxy('user_courses', 'grade',
         creator=lambda k, v: UserCourse(course_id=k, grade=v))

     def __init__(self, name):
         self.name = name


 class Course(Base):
     __tablename__ = 'courses'
     id = Column(Integer, primary_key=True)
     title = Column(Text, unique=True)

     def __init__(self, title):
         self.title = title


 class UserCourse(Base):
     __tablename__ = 'user_courses'
     user_id = Column(Integer, ForeignKey(User.id), primary_key=True)
     course_id = Column(Integer, ForeignKey(Course.id), primary_key=True)
     grade = Column(Integer)
     user = relationship(
         User,
         backref=backref(
             'user_courses',
             collection_class=attribute_mapped_collection('course_id'),
             cascade='all, delete-orphan'
         )
     )
     course = relationship(Course)

     def __init__(self, course_id, grade):
         self._course_id = course_id
         self.grade = grade


 @event.listens_for(Session, 'after_attach')
 def after_attach(session, instance):
     if isinstance(instance, UserCourse):
         with session.no_autoflush:
             course = session.query(Course).filter_by(
                 id=instance._course_id).first()
             # no way to create a Course object by id alone, but I
             # don't need that capability -- new UserCourse objects
             # are limited to existing courses
             instance.course = course



 Seems like there should be a simple way to accomplish this, as the
 dictionary collection is now coming directly from the association
 object instead of having to hop across it to the 'courses' table.
 Could you point me in the right direction? Thanks.


Re: [sqlalchemy] advice sought : `commit()` without expiring from the session ?

2014-07-08 Thread Mike Bayer
you can make a new Session() with expire_on_commit=False, that can be
from your sessionmaker.  the settings you apply there override whats in
the original factory.

Though for status logging I tend to just use a separate
Session/transaction for that, that way the main transaction can maintain
atomicity while the status value is visible from the outside.


On 7/8/14, 2:04 PM, Jonathan Vanasco wrote:
 I have a situation where I need to issue a `commit()` without expiring
 the object/attributes from the session.   I'm in a `Celery` task, and
 need to log progress at certain points into a status column.  the
 status points will be used to troubleshoot and fix jobs that didn't
 correctly complete.

 `commit()` doesn't accept an `expire_on_commit` argument -- which
 would suit my needs perfectly -- only the sessionmaker does.

 it looks like I have 3 options :

 1- `merge` my object back into the session 
 2- use a non-expiring session
 3- don't touch the column in the ORM/Session; have a second connection
 that issues updates via the core API

 does anyone have other ideas ?  or have others handled similar
 situations ?





Re: [sqlalchemy] unit tests failing on 0.9.6

2014-07-08 Thread Mike Bayer
how are you running them?

it isn't reporting any failure; this seems like it's finding a bad
test case.   do you have any test.py-type stuff it might be hitting?



On 7/8/14, 5:56 PM, Jason Newton wrote:
 Any thoughts on why the unit testing is failing?

 platform linux2 -- Python 2.7.6 -- py-1.4.20 -- pytest-2.5.2 --
 /usr/bin/python

 

 == 5776
 passed, 688 skipped in 139.77 seconds
 ===
Traceback (most recent call last):
  File "setup.py", line 165, in <module>
    run_setup(True)
  File "setup.py", line 146, in run_setup
    **kwargs
  File "/usr/lib64/python2.7/distutils/core.py", line 152, in setup
    dist.run_commands()
  File "/usr/lib64/python2.7/distutils/dist.py", line 953, in run_commands
    self.run_command(cmd)
  File "/usr/lib64/python2.7/distutils/dist.py", line 972, in run_command
    cmd_obj.run()
  File "/usr/lib/python2.7/site-packages/setuptools/command/test.py", line 146, in run
    self.with_project_on_sys_path(self.run_tests)
  File "/usr/lib/python2.7/site-packages/setuptools/command/test.py", line 127, in with_project_on_sys_path
    func()
  File "/usr/lib/python2.7/site-packages/setuptools/command/test.py", line 167, in run_tests
    testRunner=self._resolve_as_ep(self.test_runner),
  File "/usr/lib64/python2.7/unittest/main.py", line 94, in __init__
    self.parseArgs(argv)
  File "/usr/lib64/python2.7/unittest/main.py", line 149, in parseArgs
    self.createTests()
  File "/usr/lib64/python2.7/unittest/main.py", line 158, in createTests
    self.module)
  File "/usr/lib64/python2.7/unittest/loader.py", line 130, in loadTestsFromNames
    suites = [self.loadTestsFromName(name, module) for name in names]
  File "/usr/lib64/python2.7/unittest/loader.py", line 122, in loadTestsFromName
    (obj, test))
TypeError: calling <function main at 0x2d3b500> returned 0, not a test


 -Jason


Re: [sqlalchemy] unit tests failing on 0.9.6

2014-07-09 Thread Mike Bayer
OK what if you just run it via py.test?   The distutils/setuptools
approach is so often problematic.

The issue appeared in 0.9.6, not 0.9.5?




On 7/9/14, 3:01 AM, Jason Newton wrote:
 Yes, it seems to me like it gets to the end and then commits suicide
 just before returning, but I wasn't able to make heads or tails of it.

 Nothing fancy to running it - just unpack the PyPI archive and run
 python setup.py test.  I first ran into it when updating the openSUSE
 package from 0.8.5, but it's reproducible inside the chroot jail OBS
 creates as well as on my desktop.

 On Tuesday, July 8, 2014 3:43:02 PM UTC-7, Michael Bayer wrote:

 how are you running them?

 it isn't reporting any failure, this seems like it's finding a
 bad test case.   do you have any test.py type stuff it might be
 hitting?



 On 7/8/14, 5:56 PM, Jason Newton wrote:
 Any thoughts on why the unit testing is failing?

 platform linux2 -- Python 2.7.6 -- py-1.4.20 -- pytest-2.5.2 --
 /usr/bin/python

 

 == 5776
 passed, 688 skipped in 139.77 seconds
 ===
 Traceback (most recent call last):
   File "setup.py", line 165, in <module>
     run_setup(True)
   File "setup.py", line 146, in run_setup
     **kwargs
   File "/usr/lib64/python2.7/distutils/core.py", line 152, in setup
     dist.run_commands()
   File "/usr/lib64/python2.7/distutils/dist.py", line 953, in run_commands
     self.run_command(cmd)
   File "/usr/lib64/python2.7/distutils/dist.py", line 972, in run_command
     cmd_obj.run()
   File "/usr/lib/python2.7/site-packages/setuptools/command/test.py", line 146, in run
     self.with_project_on_sys_path(self.run_tests)
   File "/usr/lib/python2.7/site-packages/setuptools/command/test.py", line 127, in with_project_on_sys_path
     func()
   File "/usr/lib/python2.7/site-packages/setuptools/command/test.py", line 167, in run_tests
     testRunner=self._resolve_as_ep(self.test_runner),
   File "/usr/lib64/python2.7/unittest/main.py", line 94, in __init__
     self.parseArgs(argv)
   File "/usr/lib64/python2.7/unittest/main.py", line 149, in parseArgs
     self.createTests()
   File "/usr/lib64/python2.7/unittest/main.py", line 158, in createTests
     self.module)
   File "/usr/lib64/python2.7/unittest/loader.py", line 130, in loadTestsFromNames
     suites = [self.loadTestsFromName(name, module) for name in names]
   File "/usr/lib64/python2.7/unittest/loader.py", line 122, in loadTestsFromName
     (obj, test))
 TypeError: calling <function main at 0x2d3b500> returned 0, not a test


 -Jason


Re: [sqlalchemy] unit tests failing on 0.9.6

2014-07-09 Thread Mike Bayer
OK this is fixed now, I hope.


On 7/9/14, 9:52 AM, Mike Bayer wrote:
 OK what if you just run it via py.test ?the distutils/setuptools
 approach is so often problematic.

 issue appeared in 0.9.6, not 0.9.5 ?


Re: [sqlalchemy] problems with mysql reflect

2014-07-09 Thread Mike Bayer

On 7/9/14, 3:41 PM, Paul Molodowitch wrote:
 I just ran into the same problem, using python 2.6 + sqlalchemy 0.9.4
 / 0.9.6 + MySQL.

 The problem in my case IS definitely related to python 2.6 -
 basically, python 2.6 doesn't allow unicode keywords, while 2.7 does.
 Ie, if you do this:

 def foo(**kwargs):
 print kwargs
 foo(**{u'thing':1})


 ...it will work in 2.7, but give this error in 2.6:

 TypeError: foo() keywords must be strings


 For reference, these were the table.dialect_kwargs.keys() that were
 making trouble in 2.6:

 [u'mysql_comment', u'mysql_engine', u'mysql_default charset']


 Fine, except for the fact that they're unicode...
OK but this is not a codepath within SQLAlchemy's MySQL reflection
code.   I'm PDBing right now into 0.9, using py2.6 + use_unicode=1; the
reflected table options are sent directly into table.kwargs, not using
the constructor or any **kw system.  the tests pass, and the keys are
coming back as u''.

if you can show me where table.kwargs gets used implicitly as a
constructor arg i can fix that.



Re: [sqlalchemy] Re: Bound parameter on Postgres/psycopg2 Range data types

2014-07-10 Thread Mike Bayer

On 7/10/14, 2:46 PM, Brian Findlay wrote:
 Sqlalchemy seems to be coercing the upper boundary to be exclusive.
 See below tests (will need to change postgres db if you want to run them).



 from sqlalchemy import *
 from sqlalchemy.orm import *
 from sqlalchemy.ext.declarative import declarative_base
 from sqlalchemy.dialects.postgresql import INT4RANGE
 from psycopg2.extras import NumericRange

 Base = declarative_base()

 class Foo(Base):
 __tablename__ = 'foo'
 id = Column(Integer, primary_key=True)
 range = Column(INT4RANGE)


 e = create_engine("postgresql://brian@10.0.1.10:5432/test", echo=True)
 Base.metadata.create_all(e)

 sess = Session(e)

 foo_one = Foo(id=1, range='[1, 10]')
 foo_two = Foo(id=2, range=NumericRange(lower=1, upper=10, bounds='[]'))
 sess.add_all([foo_one, foo_two])
 sess.commit()

 #foo_one = sess.query(Foo).filter_by(id=1).first()
 #foo_two = sess.query(Foo).filter_by(id=2).first()

 # These pass
 assert foo_one.range == foo_two.range
 assert foo_one.range.lower == foo_two.range.lower
 assert foo_one.range.upper == foo_two.range.upper

 # These fail
 assert foo_one.range == NumericRange(lower=1, upper=10, bounds='[]')
 assert foo_two.range == NumericRange(lower=1, upper=10, bounds='[]')

 # But this passes
 assert foo_two.range == NumericRange(lower=1, upper=11, bounds='[)')
I don't do the range types, if you create a bug report, chris withers
can be assigned and he'll get a note about it.  Or figure out a PR for us.
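For context (not part of the thread): PostgreSQL canonicalizes discrete ranges such as int4range into half-open form, which is why a round-tripped '[1, 10]' compares equal only to NumericRange(1, 11, '[)'). A pure-Python sketch of that normalization (function name is illustrative):

```python
def canonical_int_range(lower, upper, bounds="[)"):
    """Mimic how PostgreSQL canonicalizes a discrete range (int4range)
    into half-open '[lower, upper)' form."""
    if lower is not None and bounds[0] == "(":
        lower += 1
    if upper is not None and bounds[1] == "]":
        upper += 1
    return (lower, upper, "[)")

# '[1, 10]' and '[1, 11)' denote the same set of integers:
print(canonical_int_range(1, 10, "[]"))  # (1, 11, '[)')
```

Comparing ranges after normalizing both sides avoids the failing asserts above.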






Re: [sqlalchemy] problems with mysql reflect

2014-07-10 Thread Mike Bayer

On 7/10/14, 3:49 PM, Paul Molodowitch wrote:
 Whoops! Just noticed this was the totally wrong traceback!

 Here's the correct trace:

 Traceback (most recent call last):
   File stdin, line 1, in module
   File test.py, line 155, in module
 metadata.reflect(db.engine, only=tables)
   File ./sqlalchemy/sql/schema.py, line 3277, in reflect
 Table(name, self, **reflect_opts)
   File ./sqlalchemy/sql/schema.py, line 352, in __new__
 table._init(name, metadata, *args, **kw)
   File ./sqlalchemy/sql/schema.py, line 425, in _init
 self._autoload(metadata, autoload_with, include_columns)
   File ./sqlalchemy/sql/schema.py, line 437, in _autoload
 self, include_columns, exclude_columns
   File ./sqlalchemy/engine/base.py, line 1198, in run_callable
 return callable_(self, *args, **kwargs)
   File ./sqlalchemy/engine/default.py, line 355, in reflecttable
 return insp.reflecttable(table, include_columns, exclude_columns)
   File ./sqlalchemy/engine/reflection.py, line 463, in reflecttable
 for col_d in self.get_columns(table_name, schema,
 **table.dialect_kwargs):
 TypeError: get_columns() keywords must be strings


with metadata.reflect(), OK.  Can you please make a very short and
self-contained test case and post a bug report?  thanks.
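One illustrative client-side workaround until such a fix lands (coerce_kwarg_keys is a hypothetical helper, not SQLAlchemy API): coerce the reflected keys to plain str before re-expanding them, since Python 2.6 rejects unicode keyword names:

```python
def coerce_kwarg_keys(kwargs):
    """Return a copy of `kwargs` whose keys are plain `str`, since
    Python 2.6 rejects unicode keys in a **kwargs expansion."""
    return dict((str(k), v) for k, v in kwargs.items())

opts = coerce_kwarg_keys({u"mysql_engine": "InnoDB", u"mysql_comment": "test"})
print(sorted(opts))  # ['mysql_comment', 'mysql_engine']
```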




Re: [sqlalchemy] StaleDataError/ObjectDeletedError

2014-07-11 Thread Mike Bayer

On 7/11/14, 5:07 PM, Staszek wrote:
 On 2014-02-01 00:09, Michael Bayer wrote:
 On Jan 31, 2014, at 5:24 PM, lars van gemerden l...@rational-it.com
 mailto:l...@rational-it.com wrote:

 Hi, all

 I am running into these 2 errors and have run out of ideas what to do
 about it (also because i don't what they mean); They seem to happen in
 exactly the same circumstances.

 mapper, table, update)
   File build\bdist.win32\egg\sqlalchemy\orm\persistence.py, line
 514, in _emit_update_statements
 (table.description, len(update), rows))
 StaleDataError: UPDATE statement on table 'Company' expected to update
 1 row(s); 0 were matched.
 this means an object was meant to be UPDATEed via the ORM, however the
 row which is the target of the UPDATE is missing.  Either the primary
 key of this row changed somehow, or the row was deleted, *or* the row is
 not visible to your transaction (this seems to be your case).
 I am running into something very similar:

 StaleDataError: UPDATE statement on table 'buildings' expected to update
 1 row(s); 0 were matched.

 However this case is very simple, just a single row in a table, and it
 looks to get selected just before the update (in the same transaction).
 Nothing is being deleted.

 The update:

 UPDATE buildings SET glatitude=%s, glongitude=%s WHERE buildings.id = %s

 The columns:

 id = Column(BigInteger, autoincrement=True, primary_key=True)
 glatitude  = Column(Numeric(precision=11, scale=8))
 glongitude = Column(Numeric(precision=11, scale=8))

 What's interesting is that the error occurs only if glatitude and
 glongitude are being updated to the same values that are already in the
 database (i.e.: no real update).
 When the old values are NULL or otherwise different than the new values,
 everything works fine!

 Is this a bug?
It's unlikely to be a SQLAlchemy bug, but questions to be answered
include:   Can you produce a self-contained test case?  Have you tried
SQLAlchemy 0.9.6 rather than 0.9.1?  Also what backend is this, MySQL? 
MySQL has had some bugs where the rowcount is not reported correctly. 




Re: [sqlalchemy] self-referential relationship and aliases

2014-08-11 Thread Mike Bayer
For a fancy column_property like that you likely need to define it after
the fact and attach it to the class.   There's some new features I have
in the works for 1.0 to make that easier.

The general idea is:

class MyClass(Base):
   # ...

alias = aliased(MyClass)
stmt = select([alias.foo]).where(alias.id ==
MyClass.other_id).correlate(MyClass).as_scalar()
MyClass.some_prop = column_property(stmt)
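That pattern, filled out into a runnable self-referential sketch (Node and child_count are illustrative names; assumes the 1.4+/2.0 API):

```python
from sqlalchemy import Column, Integer, String, create_engine, func, select
from sqlalchemy.orm import Session, aliased, column_property, declarative_base

Base = declarative_base()

class Node(Base):
    __tablename__ = "node"
    id = Column(Integer, primary_key=True)
    parent_id = Column(Integer)
    name = Column(String)

# Attach the correlated count after the class is declared; the alias
# keeps the inner "node" distinct from the outer one.
node_alias = aliased(Node)
Node.child_count = column_property(
    select(func.count(node_alias.id))
    .where(node_alias.parent_id == Node.id)
    .correlate(Node)
    .scalar_subquery()
)

engine = create_engine("sqlite://")
Base.metadata.create_all(engine)
with Session(engine) as s:
    s.add_all([Node(id=1, name="root"),
               Node(id=2, parent_id=1), Node(id=3, parent_id=1)])
    s.commit()
    count = s.get(Node, 1).child_count
print(count)  # 2
```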




On 08/11/2014 02:03 PM, dweitzenfeld wrote:
 I do have a follow up, actually.

 If I wanted to make friend_code_usage_count a column property so that
 it was always loaded with the object, how would I do that? 
 It doesn't look like I can add 

 alias = aliased(AffinionCode)

 within the class definition. Where/how would I define the alias?  


 On Monday, August 11, 2014 12:56:01 PM UTC-5, dweitzenfeld wrote:

 I've got the expression part of the hybrid properties working now,
 using an alias:

 @friend_code_usage_count.expression
 def friend_code_usage_count(cls):
     alias = aliased(AffinionCode)
     return select([func.count(alias.Rtid)]). \
         where(alias.PromoCode ==
               cls.ReferAFriendCode).label('friend_code_usage_count')


 I was going to ask how to add aliasing to the relationship
 definitions, but it seems like I don't have to.  Awesome, thanks
 for your help. 



 On Sunday, August 10, 2014 7:08:43 PM UTC-5, Michael Bayer wrote:

 it seems like you might want to make use of aliased() in those
 @hybrid_properties you have; if you do an equijoin from
 AffinionCode.something == cls.somethingelse, “cls” here is
 AffinionCode again.   One side should be an aliased() object.





 On Aug 10, 2014, at 8:01 PM, dweitzenfeld
 dweitz...@gmail.com wrote:


 I'm having trouble getting a self-referential table to alias
 correctly.  
 In the example below, Jack was given a family code of 'FOO.'
  He shares this with his family member Jill, who enters it as
 her PromoCode.




 Rtid PromoCode   FamilyCode
 Jack BAR FOO
 Jill FOO 


 ​

 This is my sqlalchemy class for the table:

 class AffinionCode(MssqlBase):
     __tablename__ = "AffinionCodes"
     Rtid = Column(String, ForeignKey('MemberFacts.Rtid'),
                   primary_key=True)
     PromoCode = Column(String)
     FamilyCode = Column(String)
     ReferAFriendCode = Column(String)
     family_code_usages = relationship("AffinionCode",
                                       primaryjoin="PromoCode == FamilyCode",
                                       foreign_keys="FamilyCode",
                                       remote_side="PromoCode",
                                       backref='family_source')
     friend_code_usages = relationship("AffinionCode",
                                       primaryjoin="PromoCode == ReferAFriendCode",
                                       foreign_keys="ReferAFriendCode",
                                       remote_side="PromoCode",
                                       backref='friend_source')

     @hybrid_property
     def family_code_usage_count(self):
         return len([_ for _ in self.family_code_usages])

     @family_code_usage_count.expression
     def family_code_usage_count(cls):
         return select([func.count(AffinionCode.Rtid)]). \
             where(AffinionCode.PromoCode ==
                   cls.FamilyCode).label('family_code_usage_count')

     @hybrid_property
     def friend_code_usage_count(self):
         return len([_ for _ in self.friend_code_usages])

     @friend_code_usage_count.expression
     def friend_code_usage_count(cls):
         return select([func.count(AffinionCode.Rtid)]). \
             where(AffinionCode.PromoCode ==
                   cls.ReferAFriendCode).label('friend_code_usage_count')


 The problem is that this is producing queries like:
 SELECT TOP 10 [AffinionCodes].[Rtid] AS [AffinionCodes_Rtid],
     [AffinionCodes].[PromoCode] AS [AffinionCodes_PromoCode],
     [AffinionCodes].[FamilyCode] AS [AffinionCodes_FamilyCode],
     [AffinionCodes].[ReferAFriendCode] AS [AffinionCodes_ReferAFriendCode],
     (SELECT COUNT([AffinionCodes].[Rtid]) AS count_1
      FROM [AffinionCodes]
      WHERE [AffinionCodes].[PromoCode] = [AffinionCodes].[FamilyCode]) AS family_code_usages
 FROM [AffinionCodes]
 WHERE (SELECT COUNT([AffinionCodes].[Rtid]) AS count_1
        FROM [AffinionCodes] AS [AffinionCodes]
        WHERE [AffinionCodes].[PromoCode] = [AffinionCodes].[FamilyCode]) > 1;
   

Re: [sqlalchemy] Trying to get tables to be created by unit tests

2014-08-11 Thread Mike Bayer

On 08/11/2014 04:37 PM, alchemy1 wrote:
 I have combined several examples I've found to try to get the
 'transactional-style' of unit tests to work, where you roll back the
 database after each test. However when I run this, the test fails when
 trying to insert the object with DBSession.add, complaining that the
 tables don't exist.

are you setting up the Connection in DBSession?   I see you doing
something with self.session, but that is not the same session as
DBSession.   If DBSession is a scoped session you'd want to say
DBSession(bind=connection) for each test.
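A minimal sketch of that wiring, using an in-memory SQLite database in place of Postgres (model and variable names are illustrative; the point is that the scoped session must be bound to the same connection the tables were created on):

```python
from sqlalchemy import Column, Integer, create_engine
from sqlalchemy.orm import declarative_base, scoped_session, sessionmaker

Base = declarative_base()

class Thing(Base):
    __tablename__ = "thing"
    id = Column(Integer, primary_key=True)

DBSession = scoped_session(sessionmaker())

# module-level setup: one connection, one enclosing transaction
engine = create_engine("sqlite://")
connection = engine.connect()
trans = connection.begin()
Base.metadata.create_all(connection)

# the key step: point the scoped session at the *same* connection,
# so it sees the uncommitted tables
DBSession.configure(bind=connection)

session = DBSession()
session.add(Thing(id=1))
session.flush()
count = session.query(Thing).count()
print(count)  # 1

# module-level teardown: everything, tables included, is rolled back
DBSession.remove()
trans.rollback()
connection.close()
```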


 I thought Base.metadata.create_all(connection) would create the
 tables? I'd like to create the tables within the setup_module and roll
 it back in teardown_module so that the testdb database always goes
 back to being empty. This is to ensure that the tests are always
 running against a known state. (Start with empty db, create the
 tables, do the tests, then empty the db)

 Also, since a lot of copy-pasting was involved in creating this, could
 you please take a look and see what isn't necessary? I'm just trying
 to do simple tests (in Pyramid). For example, is self.session =
 Session(connection) required? And is using the global variables the
 way I am a good way of doing it? Quite new to this so just trying to
 learn the best practices.

 from sqlalchemy.engine import create_engine
 from sqlalchemy.orm.session import Session

 from pyramid import testing

 from .models import Base
 from .models import DBSession

 transaction = None
 connection = None
 engine = None


 def setup_module():
     global transaction, connection, engine

     engine = create_engine('postgresql+psycopg2://username:password@host:5432/testdb')
     DBSession.configure(bind=engine)
     connection = engine.connect()
     transaction = connection.begin()
     Base.metadata.create_all(connection)


 def teardown_module():
     global transaction, connection, engine
     transaction.rollback()
     connection.close()
     engine.dispose()


 class DatabaseTest(object):

     def setup(self):
         self.__transaction = connection.begin_nested()
         self.session = Session(connection)

     def teardown(self):
         self.session.close()
         self.__transaction.rollback()


 class TestCustom(DatabaseTest):

     def test_passing_view(self):
         from .models import MyModel
         model = MyModel(name='one', value=55)
         DBSession.add(model)

         from .views import my_view
         request = testing.DummyRequest()
         info = my_view(request)
         assert info['one'].name == 'one'







Re: [sqlalchemy] SAWarnings when using history_meta.py versioning and Inheritance.

2014-10-17 Thread Mike Bayer

On 10/17/2014 02:52 PM, JPLaverdure wrote:
 Hello,

 It seems a number of SAWarnings are being thrown whenever I
 instantiate Versioned objects which make use of inheritance:

 SAWarning: Implicitly combining column container_history.changed
 with column barcoded_container_history.changed under attribute 'changed'.
 Please configure one or more attributes for these same-named columns
 explicitly.
   prop = self._property_from_column(key, prop)

 Unfortunately, since these objects are instantiated auto-magically
 by the Versioned mixin class, I can't see a way to make these go away
 or address the issue.
 I tried looking into the history_meta.py source and cannot understand
 why this same warning is not being thrown for the version attribute.

 Anyone has an idea ?

this is a newer warning which refers to the fact that these two columns
are separate but placed under the same attribute.  in some cases, people
didn't expect this to happen so hence a warning.   the recipe might be
taking this into account for version, not sure.  it can be fixed in
the recipe but the warning should only be raised at mapping time, not at
instantiation time.
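The explicit configuration the warning asks for can be sketched with declarative joined-table inheritance (model names mirror the thread; this assumes the documented column_property(Column(...), Parent.attr) form for combining same-named columns):

```python
import datetime
from sqlalchemy import (Column, DateTime, ForeignKey, Integer, String,
                        create_engine)
from sqlalchemy.orm import Session, column_property, declarative_base

Base = declarative_base()

class Container(Base):
    __tablename__ = "container_history"
    id = Column(Integer, primary_key=True)
    changed = Column(DateTime)
    type = Column(String)
    __mapper_args__ = {"polymorphic_on": type,
                       "polymorphic_identity": "container"}

class BarcodedContainer(Container):
    __tablename__ = "barcoded_container_history"
    id = Column(ForeignKey("container_history.id"), primary_key=True)
    # Explicitly combine the two same-named columns under one
    # attribute, which is what the warning asks for.
    changed = column_property(Column(DateTime), Container.changed)
    __mapper_args__ = {"polymorphic_identity": "barcoded"}

engine = create_engine("sqlite://")
Base.metadata.create_all(engine)
stamp = datetime.datetime(2014, 10, 17, 14, 52)
with Session(engine) as s:
    s.add(BarcodedContainer(changed=stamp))
    s.commit()
    ok = s.query(BarcodedContainer).one().changed == stamp
print(ok)
```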






 Thanks !

 JP


[sqlalchemy] SQLAlchemy 0.3.9 Released

2007-07-15 Thread mike bayer
SQLAlchemy 0.3.9 Released

Version 0.3.9 of SQLAlchemy is released.  This is probably not the last of
the 0.3 series, but it's the last major set of changes for this line;
subsequent releases should probably be only for important bug fixes.
0.3.9 includes a lot of important bug fixes as usual, but also has a focus on the
new API which is to be standardized in the upcoming 0.4 series, and is
largely available in SQLAlchemy 0.3.9 so that existing applications can
adapt to the new patterns.  These changes represent improvements by leaps
and bounds over anything SQLAlchemy has ever had, and a lot of them are
available right now.  The documentation has also been updated on the site
with a large portion of these changes - so if you haven't looked at
SQLAlchemy's documentation in a long time, now is the time to revisit;
particularly the chapters on Database Engines, Database Metadata, and Data
Mapping.

So whats changed ?

- The method of binding things to engines, such as MetaData, Sessions,
select() constructs, etc., all standardize on a single keyword
argument/attribute name:  bind.  bind in all cases may receive an Engine
or Connection as its argument (known as a connectable).  In the case of
MetaData, the bind attribute on the object itself is also assignable,
which binds the MetaData to the connectable.  While all the previous
attributes and keyword arguments remain in 0.3.9, such as engine,
bind_to, connectable, etc., these are removed in 0.4.  So once your
applications are stable on 0.3.9, begin changing these keywords and
attributes to be named bind.

- The generative interface of Query is now the standard.  All of the old
select/get_by methods are still present, as they will remain in 0.4 in a
deprecated state.  However, 90% of 0.4's Query functionality is now
available in 0.3.9, and the Data Mapping documentation discusses this
interface exclusively, addressing many common tasks that have historically
been mysterious to many users, such as joining, selecting multiple
entities/columns, and using string-based criterion or full text queries in
place of constructed expressions.  The improvements in this API, from a
functional and usage standpoint as well as how it's presented in the
documentation, are very significant.  Check out the docs at
http://www.sqlalchemy.org/docs/datamapping.html#datamapping_query .

- MetaData is now pickleable.  This was a very small change from a
technical point of view, but from an operational point of view, pickling a
MetaData object and storing it in a file should be the new way going forward
to deal with applications which need to reflect large sets of tables every
time they start up.  If your MetaData is bound to an engine, just reattach
the current engine to your unpickled MetaData using metadata.bind = engine.
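A minimal sketch of the pickle round-trip (table name is illustrative; on modern releases you pass an engine to operations directly rather than assigning metadata.bind):

```python
import pickle
from sqlalchemy import Column, Integer, MetaData, Table

md = MetaData()
Table("accounts", md, Column("id", Integer, primary_key=True))

# Pickle once (e.g. after an expensive reflect()) and reload at startup.
blob = pickle.dumps(md)
md2 = pickle.loads(blob)
print("accounts" in md2.tables)  # True
```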

- DynamicMetaData is now ThreadLocalMetaData (the old name remains for
compatibility).  This is to ensure that its purpose is clearly understood.
If you only need to bind an engine to a MetaData after the MetaData is
already defined, the regular MetaData does that.

- Similarly, BoundMetaData is just MetaData.  There's no difference
between the two.  The old name remains for compatibility.

- Using polymorphic, joined-table inheritance ?  Forget about
polymorphic_union(), just join all the tables together using outerjoin().
In 0.4, even the select_table argument becomes optional.

The changelog for SQLAlchemy 0.3.9 is available at
http://www.sqlalchemy.org/CHANGES .  Download links at
http://www.sqlalchemy.org/download.html .




Re: [sqlalchemy] Long transactions with SQLite (and of course locks)

2015-03-31 Thread Mike Bayer


On 3/31/15 7:23 PM, Eric Smith wrote:
 Environment: SQLAlchemy 0.9.7, SQLite back-end, autoflush=True, mostly
 using scoped session, multi-platform Python 2.7

 I'm working on a multi-threaded app, but with fairly low concurrency
 (user actions and scheduled tasks are on different threads, for
 example). I think I have a fair understanding of SQLite's locking
 model, the implications of having autoflush=True, etc. but I still
 occasionally get the OperationalError "database is locked".

 The challenge with this error is that it is raised on a thread that
 could be doing everything right while some other thread is keeping a
 transaction open for too long. Plus, unless another thread tries to
 write, the misbehaving thread goes mostly unnoticed.

 So I thought I'd try to write a long transaction detector that would
 start a timer when when a flush happens, and cancel it if a commit or
 rollback happens. If the timer expires (after a few seconds), a
 warning is logged with the stack trace of the flush. The idea is that
 I can see information about the misbehaving thread rather than the
 innocent victim, and I'll see problems even without actual contention.

 While I'm still experimenting to see if this strategy will work, it
 looks like I'm getting some false positives. It seems like sometimes I
 get an after_flush event even when there are no modifications to the
 database.
A flush can proceed when objects are marked as dirty, but in some cases
it turns out the changes on those objects are not net changes; e.g. an
object attribute was set to a new value that matches the old one. 
that's the case where you might see a flush that doesn't actually do
anything.

As far as database is locked, I'd just increase the timeout (Pysqlite
setting, defaults to 5 seconds), so that the locking acts just like any
other mutex that you'd place into your application.  

As far as SQLite's locking model, note that Pysqlite has extremely
specific behavior here which makes the locking model much more
forgiving.  It only locks the database when the SQL on the current
transaction turns into DML (e.g. INSERT / UPDATE / DELETE).   So it
should not be hard to write the app such that things that actually emit
DML aren't taking up that much time.   Note that when rollback or
commit happens, the transaction is over, nothing is locked.

If the app truly needs DML on multiple threads lots of the time and
you're getting a lot of contention (even with Pysqlite's forgiving
model), then SQLite isn't appropriate for that level of concurrency.
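The locking behavior described here can be seen with the stdlib driver alone; a sketch under an illustrative temp-file path (with SQLAlchemy the same timeout is raised via create_engine('sqlite:///...', connect_args={'timeout': 30})):

```python
import os
import sqlite3
import tempfile

path = os.path.join(tempfile.mkdtemp(), "demo.db")
init = sqlite3.connect(path)
init.execute("CREATE TABLE t (x INTEGER)")
init.commit()
init.close()

# Writer 1: the INSERT (DML) is what takes the write lock; plain
# SELECTs under the default deferred mode would not.
w1 = sqlite3.connect(path, timeout=0.1)
w1.execute("INSERT INTO t VALUES (1)")  # transaction now holds the lock

# Writer 2: with a short timeout a competing INSERT fails fast;
# a larger timeout makes the lock behave like an ordinary mutex.
w2 = sqlite3.connect(path, timeout=0.1)
try:
    w2.execute("INSERT INTO t VALUES (2)")
    locked = False
except sqlite3.OperationalError:
    locked = True

w1.rollback()  # ends writer 1's transaction, releasing the lock
w2.execute("INSERT INTO t VALUES (2)")  # now succeeds
w2.commit()
total = w2.execute("SELECT COUNT(*) FROM t").fetchone()[0]
print(locked, total)  # True 1
```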



 This is based on having the engine logging turned on so I can see the
 SQL being emitted.  I'll see something like:

 BEGIN (implicit)
 SELECT
 SELECT
 ...
 after_flush event
 SELECT
 SELECT
 ...
 My timer expires

 Code is below -- the class is passed as the class_ parameter to
 sessionmaker. 

 Am I doing something dumb? Is this a reasonable strategy for my goals?
 Any ideas on the false positive?

 Thanks,
 Eric



  class LongTransactionDetector(sqlalchemy.orm.Session):
 
      TIMEOUT = 3
 
      def __init__(self, *args, **kwargs):
          super(LongTransactionDetector, self).__init__(*args, **kwargs)
          self._timer = None
          self._stack = None
          self._thread = None
          self._flush_time = None
          event.listen(self, 'after_flush', self._after_flush)
          event.listen(self, 'after_commit', self._after_commit)
          event.listen(self, 'after_rollback', self._after_rollback)
          event.listen(self, 'after_soft_rollback',
                       self._after_soft_rollback)
 
      def close(self):
          # Calling close on a session automatically emits a ROLLBACK, but we
          # don't seem to get an event for it, so we'll just hook it here.
          super(LongTransactionDetector, self).close()
          self._stop_timing()
 
      def _after_flush(self, session, flush_context):
          if any([session.new, session.dirty, session.deleted]):
              self._begin_timing()
 
      def _after_commit(self, session):
          self._stop_timing()
 
      def _after_rollback(self, session):
          self._stop_timing()
 
      def _after_soft_rollback(self, session, previous_transaction):
          self._stop_timing()
 
      def _begin_timing(self):
          if self._timer is None:
              logger.debug('set transaction timer')
              self._flush_time = datetime.datetime.now()
              self._stack = traceback.extract_stack()
              self._thread = threading.current_thread()
              self._timer = threading.Timer(self.TIMEOUT, self._on_timer)
              self._timer.start()
 
      def _stop_timing(self):
          if self._timer:
              logger.debug('clear transaction timer')
              self._timer.cancel()
              self._timer = None
              self._stack = None
 
      def _on_timer(self):
          trace = ''.join(traceback.format_list(self._stack))
          logger.warning('long transaction detected on {!r} at {}\n{}'
                         .format(self._thread, self._flush_time, trace))
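For reference, a custom Session subclass like the one above is wired in through sessionmaker's class_ parameter — a minimal runnable sketch with a trivial stand-in class (assumes SQLAlchemy 1.4+):

```python
from sqlalchemy import create_engine
from sqlalchemy.orm import Session, sessionmaker

class DetectorSession(Session):
    """Trivial stand-in for a Session subclass such as the detector above."""
    pass

engine = create_engine("sqlite://")

# class_= tells the factory to instantiate our subclass instead of Session
SessionFactory = sessionmaker(bind=engine, class_=DetectorSession)

session = SessionFactory()
assert isinstance(session, DetectorSession)
session.close()
```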
 

Re: [sqlalchemy] Inherited class column override

2015-04-01 Thread Mike Bayer


On 4/1/15 4:55 AM, Pierre B wrote:
 Unfortunately I'm inheriting the relational model from an old
 application. I have dozens of tables using a single junction table for
 associations.
 I can not completely redesign my relational model because it needs to
 be compatible with the old application.
I was asking no such thing.  I only ask that you consider the relational
model when building *new* elements of the application.   If these models
are in fact mapping to an existing schema, I find it surprising that
your existing database schema includes *two* foreign key constraints
present on each of people4l2.id1 and people4l2.id2, constraining each
column to both left1.id/left2.id and right1.id/right2.id.   




 At this point, I think my best option is setting up table inheritance
 at the database level (database is Postgresql) and migrating records
 into children tables. Minimal code refactoring would be involved in
 the old application and it would be possible to use the association
 object pattern.

 On Tuesday, March 31, 2015 at 8:05:19 PM UTC+2, Michael Bayer wrote:



 Pierre B rocambol...@gmail.com wrote:

  I tried using the association object pattern before but can't
 get it to work because I use the same id1 and id2 columns for all
 foreign keys and I'm not able to override them in the sub-classes
 (conflicts with existing column error).
   class MyClass(HasSomeAttribute, db.Model):
      __tablename__ = 'people4l2'
      id = db.Column(db.Integer, primary_key=True)
  
   class MySubClass1(MyClass):
      right1_id = db.Column('id2', db.Integer, ForeignKey('right1.id'))
      left1_id = db.Column('id1', db.Integer, ForeignKey('left1.id'))
  
   class MySubClass2(MyClass):
      right2_id = db.Column('id2', db.Integer, ForeignKey('right2.id'))
      left2_id = db.Column('id1', db.Integer, ForeignKey('left2.id'))

 That’s because you do not have a __tablename__ for these
 subclasses, so when
 you put a column on the subclass, that is physically a column on the
 ‘people4l2’ table; the names cannot be conflicting. Also, it is
 not possible
 to have a column named “people4l2.id2” which is in some cases a
 foreign key to “right1.id” and in other cases to “right2.id”.

 This probably all seems very complicated if you only think of it
 in terms of
 a Python object model. That’s why it is essential that you design
 your
 database schema in terms of database tables, and how those tables
 will work
 within a purely relational model, without Python being involved,
 first.

 For simple cases, the design of the relational model and the
 object model
 are so similar that this explicit step isn’t necessary, but once
 the goals
 become a little bit divergent between relational and object model,
 that’s
 when the relational model has to be developed separately, up
 front. This is
 the essence of how SQLAlchemy works, which becomes apparent the
 moment you
 get into models like these which are typically impossible on most
 other
 ORMs, since most ORMs do not consider design of the relational
 model as
 separate from the object model.

 The tradeoff here is basically between “more work with SQLAlchemy”
 vs.
 “not possible at all with other ORMs”  :)

 The relational model is the more rigid part of the system here, so
 you have to
 work that part out first; then determine how you want to map the
 Python
 object model on top of the relational model.

  On Tuesday, March 31, 2015 at 4:29:52 PM UTC+2, Michael Bayer
 wrote:
 
 
  Pierre B rocambol...@gmail.com wrote:
 
   Here's my use case:
   right1 = Right()
   right.left = Left()
  
   right2 = Right2()
   right2.left = Left2()
  
   db.session.add(right) // automatically create the junction
  using MySubClass1 and set the type field to 1
   db.session.add(right2) // automatically create the junction
  using MySubClass2 and set the type field to 2
   db.session.commit()
  
   Basically I have a junction table associating a bunch of
 different tables in my model.
   I want to abstract that mechanism using relationships and
 polymorphism so that I don't have to deal with that junction table
 while coding.
   The relationships I created allow me to not have to deal with
 it while selecting records but I can't get it to set the type
 field while inserting records.
 
  OK, you are using the association object pattern. You cannot use
 “secondary”
  in the way that you are doing here. You need to map a
 relationship to
  MySubClass1 explicitly. To reduce verbosity, you’d then apply the
  association 
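For readers landing here, the "association object pattern" being referenced looks roughly like this — hypothetical table and class names, not the poster's actual people4l2 schema (assumes SQLAlchemy 1.4+):

```python
from sqlalchemy import Column, ForeignKey, Integer, create_engine
from sqlalchemy.orm import Session, declarative_base, relationship

Base = declarative_base()

class Association(Base):
    # the junction row is mapped explicitly, so extra columns such as
    # "type" can be populated as rows are created
    __tablename__ = "association"
    left_id = Column(Integer, ForeignKey("lefts.id"), primary_key=True)
    right_id = Column(Integer, ForeignKey("rights.id"), primary_key=True)
    type = Column(Integer, nullable=False)
    right = relationship("Right")

class Left(Base):
    __tablename__ = "lefts"
    id = Column(Integer, primary_key=True)
    associations = relationship("Association", backref="left")

class Right(Base):
    __tablename__ = "rights"
    id = Column(Integer, primary_key=True)

engine = create_engine("sqlite://")
Base.metadata.create_all(engine)

session = Session(engine)
left = Left()
left.associations.append(Association(type=1, right=Right()))
session.add(left)
session.commit()
assert session.query(Association).one().type == 1
```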

Re: [sqlalchemy] 0.9.7: Intermittently find a dead weakref to a listens_for function while checking _stored_in_collection of a registry

2015-04-02 Thread Mike Bayer


On 4/2/15 4:28 PM, Evan James wrote:
 Hi folks,

 While running my test suite, I hit an issue with the following stack
 trace:

 
  ERROR at setup of test_track_before_delete
   
 
  request = <SubRequest 'database_session' for <Function 'test_track_before_delete'>>
  engine = Engine(sqlite:///test.db)
 
      @pytest.fixture
      def database_session(request, engine):
          connection = engine.connect()
          trans = connection.begin()
 
          meta.Session = scoped_session(
              sessionmaker(autocommit=False, autoflush=True, bind=connection))
 
          register_session_listeners(meta.Session)
 
          meta.Session.begin_nested()
 
  >       @event.listens_for(meta.Session, "after_transaction_end")
          def restart_savepoint(session, transaction):
              if transaction.nested and not transaction._parent.nested:
 
  automated_tests/conftest.py:52: 
  _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
  _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
 
  ../../../.virtualenvs/ERP/lib/python2.7/site-packages/sqlalchemy/event/api.py:94: in decorate
      listen(target, identifier, fn, *args, **kw)
  ../../../.virtualenvs/ERP/lib/python2.7/site-packages/sqlalchemy/event/api.py:63: in listen
      _event_key(target, identifier, fn).listen(*args, **kw)
  ../../../.virtualenvs/ERP/lib/python2.7/site-packages/sqlalchemy/event/registry.py:187: in listen
      self.dispatch_target.dispatch._listen(self, *args, **kw)
  ../../../.virtualenvs/ERP/lib/python2.7/site-packages/sqlalchemy/event/base.py:184: in _listen
      event_key.base_listen(propagate=propagate, insert=insert, named=named)
  ../../../.virtualenvs/ERP/lib/python2.7/site-packages/sqlalchemy/event/registry.py:226: in base_listen
      for_modify(target.dispatch).append(self, propagate)
  ../../../.virtualenvs/ERP/lib/python2.7/site-packages/sqlalchemy/event/attr.py:118: in append
      registry._stored_in_collection(event_key, self)
  _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
  _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
 
  event_key = <sqlalchemy.event.registry._EventKey object at 0x111c6b3d0>
  owner = <sqlalchemy.event.attr._DispatchDescriptor object at 0x10eac8590>
 
      def _stored_in_collection(event_key, owner):
          key = event_key._key
 
          dispatch_reg = _key_to_collection[key]
 
          owner_ref = owner.ref
          listen_ref = weakref.ref(event_key._listen_fn)
 
          if owner_ref in dispatch_reg:
  >           assert dispatch_reg[owner_ref] == listen_ref
  E           assert <weakref at 0x111cc25d0; dead> == <weakref at
  0x111cc2af8; to 'function' at 0x111c7f668 (restart_savepoint)>
 
  ../../../.virtualenvs/ERP/lib/python2.7/site-packages/sqlalchemy/event/registry.py:74: AssertionError


 While running the @event.listens_for decorator in the setup of a test
 fixture, SQLAlchemy throws an assertion because the ref it has in the
 dispatch registry isn't identical to the ref to the listener function.
  We're on SQLAlchemy 0.9.7, pytest 2.6.4.  Note that the test suite is
 setting up nested transactions on SQLite; we are using the recommended
 workaround from SQLAlchemy documentation to take control of
 transactional integration from pysqlite.


 Since there's an assertion in the code in _stored_in_collection(), I
 assume that there's some expectation that the refs might not match; am
 I doing something wrong in my setup which this assertion is meant to
 catch?  I've only seen this error once (while tracking down a
 different nondeterministic error in my own app's code), so I can't
 provide much more information than this, but the portion of test
 fixture code seen above in the stack trace is basically the entire
 reproduction case.  This fixture runs before every test in my suite,
 but I've only seen an error once across a large number of runs, so the
 error is *very* intermittent.  Because of that, I'm not worried too
 much about the error itself, but I thought I should post it here in
 case it's a symptom of something I should be worrying about.

this was issue https://bitbucket.org/zzzeek/sqlalchemy/issue/3199 and
has been fixed as of 0.9.8.
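The failed assertion compares two weakrefs; the "dead" half of that comparison can be reproduced with the standard library alone — once a referent is collected, its weakref no longer compares equal to a live one:

```python
import weakref

def listener():
    pass

# while both referents are alive, two refs to the same function compare equal
assert weakref.ref(listener) == weakref.ref(listener)

def make_dead_ref():
    def inner():
        pass
    return weakref.ref(inner)   # inner is collected when this returns (CPython)

dead = make_dead_ref()
assert dead() is None           # the referent is gone

# a dead ref falls back to identity comparison, so it no longer matches
# a live ref -- the shape of the assertion failure in the traceback
assert dead != weakref.ref(listener)
```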






 Thanks,

 Evan James

 -- 
 You received this message because you are subscribed to the Google
 Groups sqlalchemy group.
 To unsubscribe from this group and stop receiving emails from it, send
 an email to sqlalchemy+unsubscr...@googlegroups.com
 mailto:sqlalchemy+unsubscr...@googlegroups.com.
 To post to this group, send email to sqlalchemy@googlegroups.com
 mailto:sqlalchemy@googlegroups.com.
 Visit this group at http://groups.google.com/group/sqlalchemy.
 For more options, visit https://groups.google.com/d/optout.

-- 
You received this message because you are subscribed to the Google Groups 
sqlalchemy group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to sqlalchemy+unsubscr...@googlegroups.com.
To post to this group, send email to sqlalchemy@googlegroups.com.
Visit this group at 

[sqlalchemy] SQLAlchemy 1.0.0b5 Released

2015-04-03 Thread Mike Bayer
SQLAlchemy release 1.0.0b5 is now available.

This release is yet another set of fixes for issues reported
by beta testers.  At this point, 1.0.0 is ready to go and should
be released very soon.

In preparation for 1.0.0, production installations that haven't yet been
tested in the 1.0 series should be making sure that their requirements
files are capped at 0.9.99, to avoid surprise upgrades.
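A requirements-file pin along these lines keeps an installation on the 0.9 series until 1.0 has been tested (illustrative only):

```text
SQLAlchemy>=0.9,<1.0
```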

Changelog for 1.0.0b5 is at:

http://www.sqlalchemy.org/changelog/CHANGES_1_0_0b5


SQLAlchemy 1.0.0b5 is available on the Download Page:

http://www.sqlalchemy.org/download.html





Re: [sqlalchemy] Decimals generated as strings in query

2015-04-13 Thread Mike Bayer



On 4/13/15 2:25 PM, Gabriel Becedillas wrote:

Dear Michael,
Thanks a lot for your reply.
In trying to narrow the problem as much as possible, I missed 
something important in my example. I'm actually doing an UPDATE, not a 
SELECT. When I wrote 'I tried casting my decimals using 
sqlalchemy.cast(..., sqlalchemy.Numeric(precision=16, scale=8)) but 
that didn't work because the filter expression failed' I meant that it 
didn't work in an update scenario. In a select scenario it works ok. 
This is what I should have wrote in my bug_test.py:


amount = decimal.Decimal('0.00000001')
query = session.query(Balance)
query = query.filter(
    Balance.available_balance + sqlalchemy.cast(amount,
        sqlalchemy.Numeric(precision=16, scale=8)) >= Balance.full_balance
)

values = {}
values[Balance.available_balance] = Balance.available_balance + amount
row_count = query.update(values)
print row_count, "rows updated"


and the error I get is 'sqlalchemy.exc.InvalidRequestError: Could not 
evaluate current criteria in Python. Specify 'fetch' or False for the 
synchronize_session parameter.'. This is not even getting to MySQL.

for query.update() you usually want to send synchronize_session=False.

also the cast() needs to be DECIMAL(), not Numeric().  It will not work 
with Numeric().
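Putting both pieces together — synchronize_session=False plus the DECIMAL cast — a self-contained sketch, run here against in-memory SQLite purely for illustration (the thread's actual target is MySQL; model and values mirror the thread):

```python
import decimal
from sqlalchemy import Column, DECIMAL, Integer, Numeric, cast, create_engine
from sqlalchemy.orm import Session, declarative_base

Base = declarative_base()

class Balance(Base):
    __tablename__ = "balance"
    id = Column(Integer, primary_key=True)
    available_balance = Column(Numeric(16, 8))
    full_balance = Column(Numeric(16, 8))

engine = create_engine("sqlite://")
Base.metadata.create_all(engine)

session = Session(engine)
session.add(Balance(available_balance=decimal.Decimal("0.00000002"),
                    full_balance=decimal.Decimal("0.00000003")))
session.commit()

amount = decimal.Decimal("0.00000001")
row_count = (
    session.query(Balance)
    .filter(Balance.available_balance + cast(amount, DECIMAL(16, 8))
            >= Balance.full_balance)
    # UPDATE with a WHERE criterion the ORM can't evaluate in Python:
    # tell it not to try to synchronize the session
    .update({Balance.available_balance: Balance.available_balance + amount},
            synchronize_session=False)
)
```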






Thanks a lot

On Monday, April 13, 2015 at 2:46:14 PM UTC-3, Michael Bayer wrote:



On 4/13/15 11:50 AM, Gabriel Becedillas wrote:

Dear all,
I have a table that has 2 numeric columns, and I'm writing a
query that performs some arithmetic on the filter clause between
those columns and a Decimal. The problem that I'm facing is that
I don't get any results at all. After a while I realized that the
SQL statement getting generated is dumping Decimals as strings,
and when strings are involved in a numeric expression they get
converted to floats. So, my query is not returning anything at
all due to float representation limitations.

I tried casting my decimals using sqlalchemy.cast(..., 
sqlalchemy.Numeric(precision=16, scale=8)) but that didn't work
because the filter expression failed. Can anyone help me in
getting a cast over a query parameter to work in a filter
expression ?


this is driver stuff.  If you change the query to see what you're
getting:

query = session.query(Balance.available_balance + amount,
Balance.full_balance)

you can see there's some floating point noise in there, not to
mention we're even getting the value back as a floating point:

Col (u'anon_1', u'balance_full_balance')
2015-04-13 13:10:39,798 DEBUG sqlalchemy.engine.base.Engine Row
(3.0004e-08, Decimal('3E-8'))
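The noise in that Row is ordinary binary floating point, reproducible with the standard library alone — and the same snippet shows why the driver renders a small Decimal in scientific notation:

```python
from decimal import Decimal

# binary floats cannot represent these values exactly, so the sum
# picks up noise in the last bits
assert 2e-8 + 1e-8 != 3e-8

# Decimal arithmetic is exact at this scale
assert Decimal("0.00000002") + Decimal("0.00000001") == Decimal("0.00000003")

# str() of a small Decimal uses scientific notation, which is the
# "1E-8" form seen in the statements the driver sends
assert str(Decimal("0.00000001")) == "1E-8"
```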

I'm getting the same result with MySQL-Python, PyMySQL, and
Mysql-connector.   The issue is definitely in the drivers however,
the code below produces no result for all three drivers:

conn = engine.raw_connection()
cursor = conn.cursor()
cursor.execute(
    "SELECT balance.available_balance + %s AS anon_1, "
    "balance.full_balance AS balance_full_balance FROM balance "
    "WHERE balance.available_balance + %s >= "
    "balance.full_balance",
    (amount, amount))

print cursor.fetchall()

If I embed the number 0.00000001 as is into the query without using
a parameter, then it works, as it does on the command line.

Looking into PyMySQL since that's the driver I'm most familiar
with, if we send a Decimal it is doing this:

SELECT balance.available_balance + 1E-8 AS anon_1,
balance.full_balance AS balance_full_balance FROM balance WHERE
balance.available_balance + 1E-8 >= balance.full_balance

What is interesting is that these values as passed in an INSERT
*are* working correctly.Which means really, this is a MySQL
bug; I can prove it at the command line.

First, we illustrate that scientific notation *is* accepted
directly by MySQL:

mysql> insert into balance (full_balance, available_balance)
values (3E-8, 2E-8);
Query OK, 1 row affected (0.00 sec)


values go in just fine (note I increased the scale in the table
here, hence the two trailing 0's):

mysql> select * from balance;
+----+--------------+-------------------+
| id | full_balance | available_balance |
+----+--------------+-------------------+
|  2 | 0.0000000300 |      0.0000000200 |
+----+--------------+-------------------+
1 row in set (0.00 sec)

but in the WHERE clause, *it fails*:

mysql> select * from balance where available_balance + 1E-8 >=
full_balance;
Empty set (0.00 sec)

writing out the whole value, *it succeeds*:

mysql> select * from balance where available_balance + 0.00000001
>= full_balance;
+----+--------------+-------------------+
| id | full_balance | available_balance |
+----+--------------+-------------------+
|  2 | 

Re: [sqlalchemy] Decimals generated as strings in query

2015-04-13 Thread Mike Bayer



On 4/13/15 11:50 AM, Gabriel Becedillas wrote:

Dear all,
I have a table that has 2 numeric columns, and I'm writing a query 
that performs some arithmetic on the filter clause between those 
columns and a Decimal. The problem that I'm facing is that I don't get 
any results at all. After a while I realized that the SQL statement 
getting generated is dumping Decimals as strings, and when strings are 
involved in a numeric expression they get converted to floats. So, my 
query is not returning anything at all due to float representation 
limitations.


I tried casting my decimals using sqlalchemy.cast(..., 
sqlalchemy.Numeric(precision=16, scale=8)) but that didn't work 
because the filter expression failed. Can anyone help me in getting a 
cast over a query parameter to work in a filter expression ?


this is driver stuff.  If you change the query to see what you're getting:

query = session.query(Balance.available_balance + amount, 
Balance.full_balance)


you can see there's some floating point noise in there, not to mention 
we're even getting the value back as a floating point:


Col (u'anon_1', u'balance_full_balance')
2015-04-13 13:10:39,798 DEBUG sqlalchemy.engine.base.Engine Row 
(3.0004e-08, Decimal('3E-8'))


I'm getting the same result with MySQL-Python, PyMySQL, and 
Mysql-connector.   The issue is definitely in the drivers however, the 
code below produces no result for all three drivers:


conn = engine.raw_connection()
cursor = conn.cursor()
cursor.execute(
    "SELECT balance.available_balance + %s AS anon_1, "
    "balance.full_balance AS balance_full_balance FROM balance "
    "WHERE balance.available_balance + %s >= balance.full_balance",
    (amount, amount))

print cursor.fetchall()

If I embed the number 0.00000001 as is into the query without using a 
parameter, then it works, as it does on the command line.


Looking into PyMySQL since that's the driver I'm most familiar with, if 
we send a Decimal it is doing this:


SELECT balance.available_balance + 1E-8 AS anon_1, balance.full_balance 
AS balance_full_balance FROM balance WHERE balance.available_balance + 
1E-8 >= balance.full_balance


What is interesting is that these values as passed in an INSERT *are* 
working correctly.Which means really, this is a MySQL bug; I can 
prove it at the command line.


First, we illustrate that scientific notation *is* accepted directly by 
MySQL:


mysql> insert into balance (full_balance, available_balance) values 
(3E-8, 2E-8);

Query OK, 1 row affected (0.00 sec)


values go in just fine (note I increased the scale in the table here, 
hence the two trailing 0's):


mysql> select * from balance;
+----+--------------+-------------------+
| id | full_balance | available_balance |
+----+--------------+-------------------+
|  2 | 0.0000000300 |      0.0000000200 |
+----+--------------+-------------------+
1 row in set (0.00 sec)

but in the WHERE clause, *it fails*:

mysql> select * from balance where available_balance + 1E-8 >= full_balance;
Empty set (0.00 sec)

writing out the whole value, *it succeeds*:

mysql> select * from balance where available_balance + 0.00000001 >=
full_balance;

+----+--------------+-------------------+
| id | full_balance | available_balance |
+----+--------------+-------------------+
|  2 | 0.0000000300 |      0.0000000200 |
+----+--------------+-------------------+
1 row in set (0.00 sec)

we can see that *MySQL itself is doing floating point*, so that's really 
the bug here:


mysql> select available_balance + 1E-8 from balance;
+--------------------------+
| available_balance + 1E-8 |
+--------------------------+
|     0.000000030000000004 |
+--------------------------+
1 row in set (0.00 sec)

We can in fact make it work with a CAST.  However!  crazytown time. Even 
though NUMERIC and DECIMAL are equivalent in MySQL, cast will *not* 
accept NUMERIC (note SQLAlchemy only warns on these and only as of 1.0 I 
think):


mysql> select available_balance + CAST(1E-8 AS NUMERIC) from balance;
ERROR 1064 (42000): You have an error in your SQL syntax; check the 
manual that corresponds to your MySQL server version for the right 
syntax to use near 'NUMERIC) from balance' at line 1


But it *will* accept DECIMAL:

mysql> select available_balance + CAST(1E-8 AS DECIMAL) from balance;
+-------------------------------------------+
| available_balance + CAST(1E-8 AS DECIMAL) |
+-------------------------------------------+
|                              0.0000000200 |
+-------------------------------------------+
1 row in set (0.00 sec)

So there's our answer:

from sqlalchemy import cast, DECIMAL

amount = decimal.Decimal('0.00000001')
query = session.query(Balance.available_balance + cast(amount, 
DECIMAL()), Balance.full_balance)

query = query.filter(
    Balance.available_balance + cast(amount, DECIMAL()) >= 
    Balance.full_balance
)


SELECT balance.available_balance + CAST(%s AS DECIMAL) AS anon_1, 

Re: [sqlalchemy] polymorphic inheritance and unique constraints

2015-04-13 Thread Mike Bayer



On 4/13/15 4:59 PM, Richard Gerd Kuesters | Pollux Automation wrote:
well, this didn't work with upstream 1.0 - sorry, I was in another 
project and couldn't test it myself.



you're not doing the same thing this user was doing in any case...



Traceback (most recent call last):
  File database_test.py, line 46, in module
from plx.db.core import *
  File ../src/plx/db/core.py, line 901, in module
UniqueConstraint(ContainerInstance.batch_id, 
ContainerAggregation.container_descriptor_id,)
  File 
/home/richard/.pyenv/versions/vpak-pollux-2.7.9/lib/python2.7/site-packages/sqlalchemy/sql/schema.py, 
line 2464, in __init__
ColumnCollectionMixin.__init__(self, *columns, 
_autoattach=_autoattach)
  File 
/home/richard/.pyenv/versions/vpak-pollux-2.7.9/lib/python2.7/site-packages/sqlalchemy/sql/schema.py, 
line 2393, in __init__

self._check_attach()
  File 
/home/richard/.pyenv/versions/vpak-pollux-2.7.9/lib/python2.7/site-packages/sqlalchemy/sql/schema.py, 
line 2429, in _check_attach

table.description)
sqlalchemy.exc.ArgumentError: Column(s) 
'container_aggregation.fk_container_descriptor_id' are not part of 
table 'container_instance'.


I got sqlalchemy from git, today.

 >>> sqlalchemy.__version__
'1.0.0'

container_aggregation is a subclass of container_instance. I'm not 
using concrete inheritance here, may this be the problem?


anything else, it's Python 2.7.9 + Linux + PostgreSQL 9.4.1.


cheers,
richard.

On 03/24/2015 08:49 PM, Michael Bayer wrote:

are these two separate constraints?  I just looked and it seems like they are 
distinct.

I just added a fix to 1.0 because someone was hacking around something similar 
to this.

The easiest way to get these for the moment is just to create the 
UniqueConstraint outside of the class definition.

class Foo(Base):
 # …

class Bar(Foo):
# …

UniqueConstraint(Bar.x, Foo.y)

that way all the columns are set up, should just work.



Richard Gerd Kuesters | Polluxrich...@pollux.com.br  wrote:


well, understanding better the docs for column conflicts, can i use a 
declared_attr in a unique constraint? if yes, my problem is solved :)


On 03/24/2015 10:33 AM, Michael Bayer wrote:

Richard Gerd Kuesters | Pollux
rich...@pollux.com.br
  wrote:



hi all!

i'm dealing with a little problem here. i have a parent table and its two 
inheritances. there is a value that both children have and must be unique along 
either types. is there a way to move this column to the parent and use a 
constraint in the child? my implementation is postgres 9.4+ with psycopg2 only.

if this is single table inheritance then the constraint would most ideally
be placed on the parent class.

if you’re trying to make this “magic” such that you can semantically keep
the unique constraints on the child classes, you’d need to build out a
conditional approach within @declared_attr. IMO I think this is an idealized
edge case that in the real world doesn’t matter much - just do what works
(put the col / constraint on the base).

the approach is described at

http://docs.sqlalchemy.org/en/rel_0_9/orm/extensions/declarative/inheritance.html#resolving-column-conflicts
.
You’d need to make this work for both the column and the constraint.




as a simple example (i'm just creating this example to simplify things), this 
works:

class MyParent(Base):

 foo_id = Column(Integer, Sequence('foo_id_seq'), primary_key=True)
 foo_name = Column(Unicode(64), nullable=False)
 foo_type = Column(Integer, nullable=False)

  __mapper_args__ = {
      "polymorphic_on": foo_type,
      "polymorphic_identity": 0
  }


class MyChild1(MyParent):

 foo_id = Column(Integer, ForeignKey(MyParent.foo_id), primary_key=True)
 bar_id = Column(Integer, ForeignKey(AnotherEntity.bar_id), nullable=False)
 child1_specific_name = Column(Unicode(5), nullable=False)
 child1_baz_stuff = Column(Boolean, default=False)

  __mapper_args__ = {
      "polymorphic_identity": 1
  }
 
  __table_args__ = (
      UniqueConstraint("bar_id", "child1_specific_name",),  # works, bar_id is 
in MyChild1
  )


class MyChild2(MyParent):

 foo_id = Column(Integer, ForeignKey(MyParent.foo_id), primary_key=True)
 bar_id = Column(Integer, ForeignKey(AnotherEntity.bar_id), nullable=False)
 child2_specific_code = Column(UUID, nullable=False)
 child2_baz_stuff = Column(Float, nullable=False)
 
  __mapper_args__ = {
      "polymorphic_identity": 2
  }
 
  __table_args__ = (
      UniqueConstraint("bar_id", "child2_specific_code",),  # works, bar_id is 
in MyChild2
  )


but i would like to do this, if possible:

class MyParent(Base):

 foo_id = Column(Integer, Sequence('foo_id_seq'), primary_key=True)
 foo_name = Column(Unicode(64), nullable=False)
 foo_type = Column(Integer, nullable=False)
 bar_id = Column(Integer, ForeignKey(AnotherEntity.bar_id), nullable=False) 
 # since both child uses bar_id, why 

Re: [sqlalchemy] Non backwards-compatible changes in 1.0? Lots of suddenly failing tests here.

2015-04-20 Thread Mike Bayer



On 4/20/15 12:56 PM, Guido Winkelmann wrote:
I just tested, the problem is still present in the current master 
(bd61e7a3287079cf742f4df698bfe3628c090522 from github). Guido W. 


can you please try current master at least as of 
a3af638e1a95d42075e25e874746, thanks.





[sqlalchemy] SQLAlchemy 1.0.1 Released

2015-04-23 Thread Mike Bayer

SQLAlchemy release 1.0.1 is now available.

This is a quick bug-fix release that repairs several new regressions 
identified in the 1.0.0 release, not found during the beta period. All 
users of 1.0.0 are encouraged to upgrade to 1.0.1.


Key elements of this release include fixes regarding the NEVER_SET 
symbol leaking into queries in some cases, fixes to SQLite when using 
DDL in conjunction with referential integrity enabled, a fix to the 
EXISTS construct which primarily impacts queries that use special 
datatypes, and repairs to the Firebird dialect regarding the new 
LIMIT/OFFSET features.


In order to accommodate some of these fixes, there are three additional 
behavioral changes in 1.0.1; a new warning is emitted when using DDL 
with SQLite in conjunction with mutually-dependent foreign keys (e.g. a 
reference cycle), a new warning is emitted when running ORM relationship 
comparisons when the target object contains the value None for any of 
the Python-side column values, and a change is made regarding which data 
values are used within a relationship comparison that uses the != 
operator, in order to make the behavior consistent with that of the == 
operator as used in the same context. The migration notes contains 
updates for all three of these changes, and they are each linked 
directly from the changelog which should be carefully reviewed.


Changelog for 1.0.1 is at: http://www.sqlalchemy.org/changelog/CHANGES_1_0_1

SQLAlchemy 1.0.1 is available on the download page at:

http://www.sqlalchemy.org/download.html



Re: [sqlalchemy] Access __tablename__ in server_defaults?

2015-04-23 Thread Mike Bayer
if you are using __tablename__, what happens if you just refer to 
cls.__tablename__ in that method?
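Concretely, that suggestion looks like this — inside a declared_attr on a mixin, cls is the mapped subclass, so its __tablename__ is available (the sequence-name scheme is illustrative; assumes SQLAlchemy 1.4+):

```python
from sqlalchemy import Column, Integer, text
from sqlalchemy.orm import declarative_base, declared_attr

Base = declarative_base()

class PositionMixin(object):
    @declared_attr
    def position(cls):
        # cls here is the mapped class, so __tablename__ is available
        return Column(
            Integer,
            server_default=text("nextval('%s_id_seq')" % cls.__tablename__),
        )

class Widget(PositionMixin, Base):
    __tablename__ = "widget"
    id = Column(Integer, primary_key=True)

assert "widget_id_seq" in str(Widget.__table__.c.position.server_default.arg)
```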




On 4/23/15 3:46 PM, Jacob Magnusson wrote:
Would it somehow be possible to access the __tablename__ in 
server_default? What I'm looking for is something like this:


class PositionMixin(object):
    @declared_attr
    def position(cls):
        return Column(
            Integer,
            server_default=text("nextval('%(__tablename__)s_id_seq')"))




Re: [sqlalchemy] Add information to reflected tables

2015-04-23 Thread Mike Bayer



On 4/23/15 5:15 PM, Jeffrey Yan wrote:
I have a couple of tables that I want to reflect. The first is a data 
table where one of the columns is a foreign key to the second table.


If I used SQLAlchemy declarative_base, a query might look something 
like this:


session.query(Client.name, Suburb.label).join(Suburb)
# In the Client class there is an attribute:
#     suburb_id = Column(Integer, ForeignKey(Suburb.id))
However, this foreign key is not specified in the schema (we're using 
postgres 9.2) but we know all the columns that look like something_id 
are foreign keys, so I've been defining them that way using SQLAlchemy.



My problem is, although we have a fixed number of property tables 
(suburb, country, join_date, ...) - each data table (per client) can 
have a different set of columns.


This hasn't been much of a problem so far, since we only have a few 
/types/ of client data tables, so the combinations have been limited. 
However, I'd like to cater for changes in the future.


If I reflect the table using SQLAlchemy, the resultant table will not 
have the ForeignKey columns compared to if I did it manually. Is there 
a way to add these in after reflection?
you can do this by instrumenting the process of the Table being built 
up.It's weird enough that I had to write a demo to verify it works, 
so here it is:


from sqlalchemy import create_engine
from sqlalchemy import event
from sqlalchemy import MetaData, ForeignKey, Column
import re

e = create_engine("postgresql://scott:tiger@localhost/test")

conn = e.connect()
trans = conn.begin()

conn.execute(
    "create table foo (id integer primary key)"
)

conn.execute(
    "create table bar (id integer primary key, foo_id integer)"
)


metadata = MetaData()


@event.listens_for(Column, before_parent_attach)
def associate_fk(column, table):
# if you want to limit the event's scope; a good idea
# else this will take place for Column objects everywhere
if table.metadata is not metadata:
return

m = re.match(r(.+)_id, column.name)
if m:
tname = m.group(1)
column.append_foreign_key(ForeignKey('%s.id' % tname))

metadata.reflect(conn)


foo = metadata.tables['foo']
bar = metadata.tables['bar']

assert bar.c.foo_id.references(foo.c.id)











Or is my only option to use reflected tables and explicit join 
conditions? Something like:

    client_table_1 = Table(
        'client_table_1', metadata,
        autoload=True, autoload_with=engine, schema='client_1')
    # Explicit joins only from now on
    session.query(client_table_1.c.name, Suburb.label).join(
        Suburb, client_table_1.c.suburb_id == Suburb.id)
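A further option documented in SQLAlchemy's reflection docs (sketched here with the 1.4-style `autoload_with` spelling; this code is not from the original thread): individual columns can be overridden at reflection time, so the ForeignKey is attached without instrumenting events and without spelling out every join:

```python
from sqlalchemy import (Column, ForeignKey, Integer, MetaData, Table,
                        create_engine, text)

e = create_engine("sqlite://")
with e.begin() as conn:
    conn.execute(text("create table foo (id integer primary key)"))
    conn.execute(text(
        "create table bar (id integer primary key, foo_id integer)"))

metadata = MetaData()
foo = Table('foo', metadata, autoload_with=e)

# override just the reflected foo_id column with one carrying the
# ForeignKey; all the other columns are still filled in by reflection
bar = Table('bar', metadata,
            Column('foo_id', Integer, ForeignKey('foo.id')),
            autoload_with=e)

assert bar.c.foo_id.references(foo.c.id)
```

The trade-off versus the event approach is that each table needing an override has to be declared explicitly, but joins then work implicitly as usual.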




Re: [sqlalchemy] Generating Correlated Subqueries

2015-04-19 Thread Mike Bayer


On 4/19/15 9:36 AM, ThereMichael wrote:


Sorry, sometimes you get so deep into something you forget
everyone else isn't familiar with the problem.


As an example, here's what I'm looking for:

select things.id, count(comments.type)
from things things, comments comments
where things.creation >= '2015-04-19'
  and things.creation < '2015-04-26'
  and comments.target_id = things.id
  and comments.type = 5
  and comments.creation >= '2015-04-19'
  and comments.creation < '2015-04-26'
group by things.id
order by count(comments.type) desc;


OK here is a demonstration of that SQL.  I had to include mappings to 
match, so figure out what's different in your code vs. this example to 
see where it might be going wrong.


from sqlalchemy import *
from sqlalchemy.orm import *
from sqlalchemy.ext.declarative import declarative_base
import datetime

Base = declarative_base()


class Thing(Base):
    __tablename__ = 'things'
    id = Column(Integer, primary_key=True)
    creation = Column(DateTime)


class Comment(Base):
    __tablename__ = 'comments'
    id = Column(Integer, primary_key=True)
    type = Column(String)
    target_id = Column(ForeignKey('things.id'))
    creation = Column(DateTime)

s = Session()
q = s.query(Thing.id, func.count(Comment.type)).\
    filter(Thing.creation >= datetime.date(2015, 4, 19)).\
    filter(Thing.creation < datetime.date(2015, 4, 26)).\
    filter(Comment.target_id == Thing.id).\
    filter(Comment.creation >= datetime.date(2015, 4, 19)).\
    filter(Comment.creation < datetime.date(2015, 4, 26)).\
    group_by(Thing.id).\
    order_by(func.count(Comment.type).desc())

print(q)

e = create_engine("sqlite://", echo=True)
Base.metadata.create_all(e)
q.with_session(Session(e)).all()

This gives the desired result of:

+--+--+
| id   | count(comments.type) |
+--+--+
| 2181 |   30 |
| 2182 |   28 |
| 2183 |   26 |
| 2184 |   24 |
| 2185 |   22 |
| 2186 |   20 |
| 2187 |   18 |
| 2188 |   16 |
| 2189 |   14 |
| 2190 |   12 |
| 2191 |   10 |
| 2192 |8 |
| 2193 |6 |
| 2194 |4 |
| 2195 |2 |
+--+--+
15 rows in set (0.00 sec)
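Since the thread title asks about correlated subqueries: the same counts can also be phrased as a true correlated scalar subquery rather than a join-plus-GROUP BY. This is a sketch using the modern `scalar_subquery()` spelling (SQLAlchemy 1.4+), not code from the thread:

```python
from sqlalchemy import (Column, DateTime, ForeignKey, Integer, String,
                        create_engine, func)
from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy.orm import Session

Base = declarative_base()

class Thing(Base):
    __tablename__ = 'things'
    id = Column(Integer, primary_key=True)
    creation = Column(DateTime)

class Comment(Base):
    __tablename__ = 'comments'
    id = Column(Integer, primary_key=True)
    type = Column(String)
    target_id = Column(ForeignKey('things.id'))
    creation = Column(DateTime)

e = create_engine("sqlite://")
Base.metadata.create_all(e)
s = Session(bind=e)

# scalar subquery: count of comments for the enclosing Thing row;
# correlate(Thing) keeps "things" out of the subquery's FROM list
comment_count = s.query(func.count(Comment.id)).\
    filter(Comment.target_id == Thing.id).\
    correlate(Thing).scalar_subquery()

q = s.query(Thing.id, comment_count.label("n_comments")).\
    order_by(comment_count.desc())
```

A correlated subquery also reports things with zero comments (the join form drops them), at the cost of one subquery execution per outer row on most backends.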




Re: [sqlalchemy] Can I restrict the set of mappers that configure_mappers() works on?

2015-04-21 Thread Mike Bayer



On 4/21/15 9:31 AM, Steven Winfield wrote:

Hi,

It seems like configuration is attempted for all new mappers, 
globally, whenever a query is done. So if library A and B both use 
sqlalchemy, and A imports B before A's mappers can be properly 
initialised (e.g. there is a relationship("ClassnameAsString") call 
somewhere that can't be resolved yet), and B does something to trigger 
mapper configuration, then it will fail.
This occurs even if A and B make separate calls to declarative_base(), 
even with explicitly different metadata and bound engines.


No, there's not. The short answer is that libraries shouldn't be 
triggering mapper configuration (and definitely not doing ORM queries) 
at import time, and/or the imports of A and B should be organized such 
that B imports fully before A starts doing things. Either these 
libraries have inter-dependencies, in which case mapper configuration 
should span all of the mappings in both, or they don't, in which case 
A and B should not import from within each other.


An enhancement that would limit configuration to groups of mappings is a 
feasible proposal, but we don't have that right now. It wouldn't be easy 
to do without adding a performance penalty, since the check for new 
mappers would have to be limited to some categorization, meaning 
lookups in critical sections.
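One arrangement consistent with this advice (a sketch, not code from the thread): have the application's entry point call configure_mappers() itself once every model module from every library has been imported, so string-based relationship() targets resolve at a moment the application controls rather than as a side effect of some library's first query:

```python
from sqlalchemy import Column, ForeignKey, Integer
from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy.orm import configure_mappers, relationship

Base = declarative_base()

class Parent(Base):
    __tablename__ = "parent"
    id = Column(Integer, primary_key=True)

class Child(Base):
    __tablename__ = "child"
    id = Column(Integer, primary_key=True)
    parent_id = Column(Integer, ForeignKey("parent.id"))
    # the string target is only resolved when mappers are configured
    parent = relationship("Parent")

# call this once, after *all* model modules are imported; any
# unresolvable relationship() now fails here, at a known point,
# instead of inside an unrelated library's first query
configure_mappers()

assert Child.parent.property.mapper.class_ is Parent
```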









Here's a boiled-down version of the problem that I've been playing 
with, which shows that the relationship between Parent and Child is 
configured when a query on Test is done - even though it may be part 
of a different library and in a different database:


from sqlalchemy import Column, Integer, Text, ForeignKey, create_engine
from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy.orm import sessionmaker, relationship
import traceback

Base1 = declarative_base()

class Test(Base1):
    __tablename__ = "test"
    id = Column(Integer, primary_key=True)

Base2 = declarative_base()

class Parent(Base2):
    __tablename__ = "parent"
    id = Column(Integer, primary_key=True)

def deferred_parent():
    traceback.print_stack()
    return Parent

class Child(Base2):
    __tablename__ = "child"
    id_parent = Column(Integer, ForeignKey(Parent.id), primary_key=True)
    name = Column(Text, primary_key=True)
    parent = relationship(deferred_parent)

engine = create_engine('sqlite://')
Session = sessionmaker(bind=engine)
session = Session()
try:
    session.query(Test).all()
except:
    pass

...the important bit of the traceback being:

  File 
R:\sw\external\20150407-0\python27\lib\site-packages\sqlalchemy-0.9.7-py2.7-win32.egg\sqlalchemy\orm\session.py, 
line 1165, in query


return self._query_cls(entities, self, **kwargs)

  File 
R:\sw\external\20150407-0\python27\lib\site-packages\sqlalchemy-0.9.7-py2.7-win32.egg\sqlalchemy\orm\query.py, 
line 108, in __init__


self._set_entities(entities)

  File 
R:\sw\external\20150407-0\python27\lib\site-packages\sqlalchemy-0.9.7-py2.7-win32.egg\sqlalchemy\orm\query.py, 
line 118, in _set_entities


self._set_entity_selectables(self._entities)

  File 
R:\sw\external\20150407-0\python27\lib\site-packages\sqlalchemy-0.9.7-py2.7-win32.egg\sqlalchemy\orm\query.py, 
line 151, in _set_entity_selectables


ent.setup_entity(*d[entity])

  File 
R:\sw\external\20150407-0\python27\lib\site-packages\sqlalchemy-0.9.7-py2.7-win32.egg\sqlalchemy\orm\query.py, 
line 2997, in setup_entity


self._with_polymorphic = ext_info.with_polymorphic_mappers

  File 
R:\sw\external\20150407-0\python27\lib\site-packages\sqlalchemy-0.9.7-py2.7-win32.egg\sqlalchemy\util\langhelpers.py, 
line 726, in __get__


obj.__dict__[self.__name__] = result = self.fget(obj)

File 
R:\sw\external\20150407-0\python27\lib\site-packages\sqlalchemy-0.9.7-py2.7-win32.egg\sqlalchemy\orm\mapper.py, 
line 1871, in _with_polymorphic_mappers


configure_mappers()

  File 
R:\sw\external\20150407-0\python27\lib\site-packages\sqlalchemy-0.9.7-py2.7-win32.egg\sqlalchemy\orm\mapper.py, 
line 2583, in configure_mappers


mapper._post_configure_properties()

  File 
R:\sw\external\20150407-0\python27\lib\site-packages\sqlalchemy-0.9.7-py2.7-win32.egg\sqlalchemy\orm\mapper.py, 
line 1688, in _post_configure_properties


prop.init()

  File 
R:\sw\external\20150407-0\python27\lib\site-packages\sqlalchemy-0.9.7-py2.7-win32.egg\sqlalchemy\orm\interfaces.py, 
line 144, in init


self.do_init()

  File 
R:\sw\external\20150407-0\python27\lib\site-packages\sqlalchemy-0.9.7-py2.7-win32.egg\sqlalchemy\orm\relationships.py, 
line 1549, in do_init


self._process_dependent_arguments()

  File 
R:\sw\external\20150407-0\python27\lib\site-packages\sqlalchemy-0.9.7-py2.7-win32.egg\sqlalchemy\orm\relationships.py, 
line 1605, in _process_dependent_arguments


self.target = self.mapper.mapped_table

  File 
R:\sw\external\20150407-0\python27\lib\site-packages\sqlalchemy-0.9.7-py2.7-win32.egg\sqlalchemy\util\langhelpers.py, 
line 726, in __get__


obj.__dict__[self.__name__] = result 
