[sqlalchemy] could not assemble any primary key columns for mapped table '...'

2010-05-20 Thread Yang Zhang
How do I create an ORM type with no primary key columns? For some
reason I'm getting:

  sqlalchemy.exc.ArgumentError: Mapper
Mapper|ActorActivity|actor_activities could not assemble any primary
key columns for mapped table 'actor_activities'

for:

class ActorActivity(Base):
    __tablename__ = 'actor_activities'
    actor_id = Column(UUID, ForeignKey(Actor.id), nullable=False)
    t = Column(SmallInteger, nullable=False)
    ts = Column(TIMESTAMP, nullable=False)
    a = Column(UUID, ForeignKey(A.id))
    b = Column(UUID, ForeignKey(B.id))
    n = Column(SmallInteger)
    x = Column(SmallInteger)

Thanks for any hints.
-- 
Yang Zhang
http://yz.mit.edu/

-- 
You received this message because you are subscribed to the Google Groups 
sqlalchemy group.
To post to this group, send email to sqlalch...@googlegroups.com.
To unsubscribe from this group, send email to 
sqlalchemy+unsubscr...@googlegroups.com.
For more options, visit this group at 
http://groups.google.com/group/sqlalchemy?hl=en.



[sqlalchemy] Custom UUID type with variant impl

2010-05-20 Thread Yang Zhang
I defined a TypeDecorator with impl=postgres.PGUuid so that I can work
with raw UUID bytes (instead of hex strings), but I'd like to make
this portable to other DBMSs, where I want the impl to be a BLOB. How
can I do this? The reference docs are pretty sparse here. Thanks in
advance.
-- 
Yang Zhang
http://yz.mit.edu/




Re: [sqlalchemy] Custom UUID type with variant impl

2010-05-21 Thread Yang Zhang
On Thu, May 20, 2010 at 8:06 PM, Michael Bayer mike...@zzzcomputing.com wrote:

 On May 20, 2010, at 4:49 PM, Yang Zhang wrote:

 I defined a TypeDecorator with impl=postgres.PGUuid so that I can work
 with raw UUID bytes (instead of hex strings), but I'd like to make
 this portable to other DBMSs, where i want the impl to be a BLOB. How
 can I do this? The reference docs are pretty sparse here. Thanks in
 advance.

 i'd recommend a VARCHAR for a uuid as they are usually used as primary keys - 
 BLOBs make very poor primary keys.

But is there a way to accomplish this such that for PG dialects I'm
using a PGUuid instead of a CHAR?

-- 
Yang Zhang
http://yz.mit.edu/




Re: [sqlalchemy] could not assemble any primary key columns for mapped table '...'

2010-05-21 Thread Yang Zhang
On Thu, May 20, 2010 at 8:09 PM, Michael Bayer mike...@zzzcomputing.com wrote:

 On May 20, 2010, at 5:51 PM, Yang Zhang wrote:

 How do I create an ORM type with no primary key columns? For some
 reason I'm getting:

  sqlalchemy.exc.ArgumentError: Mapper
 Mapper|ActorActivity|actor_activities could not assemble any primary
 key columns for mapped table 'actor_activities'

 for:

 class ActorActivity(Base):
   __tablename__ = 'actor_activities'
   actor_id = Column(UUID, ForeignKey(Actor.id), nullable=False)
   t = Column(SmallInteger, nullable=False)
   ts = Column(TIMESTAMP, nullable=False)
   a = Column(UUID, ForeignKey(A.id))
   b = Column(UUID, ForeignKey(B.id))
   n = Column(SmallInteger)
   x = Column(SmallInteger)

 Thanks for any hints.


 the orm has to be given the list of columns that serve as the primary key 
 even if the table doesn't actually have any.  FAQ entry here:
 http://www.sqlalchemy.org/trac/wiki/FAQ#IhaveaschemawheremytabledoesnthaveaprimarykeycanSAsORMhandleit

I did see that, and the only thing I'll say here is: I know what I'm doing :)


 although here you can just put primary_key=True on those columns you'd like 
 to consider as PK cols, since you are using declarative and everything is in 
 one place anyway.

 if the issue is, you want no PK at all, even a python-only one, that's not 
 really possible.   The ORM needs a way to locate the row for your instance in 
 the DB in order to issue updates/deletes etc.
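Michael's `primary_key=True` suggestion can be sketched like so (a minimal, hypothetical model, not the original schema; the UUID columns are replaced with strings to keep the sketch portable):

```python
from sqlalchemy import Column, Integer, String, create_engine
from sqlalchemy.orm import declarative_base, sessionmaker

Base = declarative_base()

class ActorActivity(Base):
    __tablename__ = 'actor_activities'
    # The table conceptually has no PK; these flags only tell the ORM
    # which column combination identifies a row.
    actor_id = Column(String(36), primary_key=True)
    ts = Column(Integer, primary_key=True)
    n = Column(Integer)

engine = create_engine('sqlite://')
Base.metadata.create_all(engine)
session = sessionmaker(bind=engine)()
session.add(ActorActivity(actor_id='a1', ts=1, n=5))
session.commit()
count = session.query(ActorActivity).count()
print(count)  # 1
```

Note that `create_all()` here does emit a real composite primary key; when mapping an existing PK-less table, the same columns can instead be handed to the mapper as its logical key, as the FAQ entry describes.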

Thanks for the answer.

It would be nice if this restriction could be lifted if explicitly
requested somehow (__use_pk__ = False, and have those operations raise
run-time exceptions if attempted). (As for why I'm using the ORM, it's
basically because it affords many relatively minor benefits such as
packaging up the tuple in its own object, a cleaner declaration syntax
than Table(...), no need to use table.c.blah, custom constructors and
methods, etc.)
--
Yang Zhang
http://yz.mit.edu/




Re: [sqlalchemy] Custom UUID type with variant impl

2010-05-21 Thread Yang Zhang
Thanks for your follow-up answers to my questions (and in the other
thread). I am curious about what you said on BLOBs making for poor
PKs. Can you elaborate on that and/or point me to something that does?
Thanks in advance.

On Thu, May 20, 2010 at 8:06 PM, Michael Bayer mike...@zzzcomputing.com wrote:

 On May 20, 2010, at 4:49 PM, Yang Zhang wrote:

 I defined a TypeDecorator with impl=postgres.PGUuid so that I can work
 with raw UUID bytes (instead of hex strings), but I'd like to make
 this portable to other DBMSs, where i want the impl to be a BLOB. How
 can I do this? The reference docs are pretty sparse here. Thanks in
 advance.

 i'd recommend a VARCHAR for a uuid as they are usually used as primary keys - 
 BLOBs make very poor primary keys.

 impl is like:

 class GUIDType(TypeDecorator):
    impl = sa.CHAR

    def __init__(self):
        TypeDecorator.__init__(self, length=16)

    def load_dialect_impl(self, dialect):
        if dialect.name == 'sqlite':
            return dialect.type_descriptor(sa.CHAR(self.impl.length))
        else:
            return dialect.type_descriptor(pg.UUID())

    def process_bind_param(self, value, dialect):
        if value is None:
            return value
        else:
            return str(value)

    def process_result_value(self, value, dialect):
        if value is None:
            return value
        else:
            return uuid.UUID(value)
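Made self-contained and runnable on SQLite for reference (the lengths, the `postgresql` dialect check, the `cache_ok` flag, and the round-trip harness are my additions; CHAR(36) holds the dashed form that `str(uuid)` produces):

```python
import uuid

import sqlalchemy as sa
from sqlalchemy.dialects import postgresql as pg
from sqlalchemy.types import TypeDecorator

class GUIDType(TypeDecorator):
    """Native UUID on PostgreSQL, CHAR(36) elsewhere."""
    impl = sa.CHAR
    cache_ok = True

    def __init__(self):
        TypeDecorator.__init__(self, length=36)

    def load_dialect_impl(self, dialect):
        if dialect.name == 'postgresql':
            return dialect.type_descriptor(pg.UUID())
        return dialect.type_descriptor(sa.CHAR(36))

    def process_bind_param(self, value, dialect):
        return value if value is None else str(value)

    def process_result_value(self, value, dialect):
        return value if value is None else uuid.UUID(value)

engine = sa.create_engine('sqlite://')
metadata = sa.MetaData()
t = sa.Table('t', metadata, sa.Column('id', GUIDType(), primary_key=True))
metadata.create_all(engine)

u = uuid.uuid4()
with engine.begin() as conn:
    conn.execute(t.insert().values(id=u))
with engine.connect() as conn:
    fetched = conn.execute(sa.select(t.c.id)).scalar()
print(fetched == u)  # True
```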




 --
 Yang Zhang
 http://yz.mit.edu/






-- 
Yang Zhang
http://yz.mit.edu/




[sqlalchemy] BLOB and str comparisons

2010-05-21 Thread Yang Zhang
I'm trying to run the following:

  session.query(Activity).filter(Activity.blob == blob).one()

where Activity.blob is a BLOB and blob is a Python bytes object
(equiv. to str in Python 2.x). But although I can insert Activity(blob
= blob) objects fine, the above query fails with:

ProgrammingError: (ProgrammingError) You must not use 8-bit
bytestrings unless you use a text_factory that can interpret 8-bit
bytestrings (like text_factory = str).

This is because blob has to be wrapped with sqlite3.Binary(blob). Why
doesn't sqlalchemy automatically do this casting, given that it has
schema awareness? Is there any way to avoid having to do this for
every such query?

Thanks in advance.
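The DBAPI-level wrapping in question can be seen with the stdlib sqlite3 module (under Python 3, sqlite3.Binary is simply an alias for memoryview and raw bytes are accepted directly; the wrapping mattered for 8-bit strings on Python 2):

```python
import sqlite3

conn = sqlite3.connect(':memory:')
conn.execute('CREATE TABLE activity (data BLOB)')
blob = b'\x00\xff\x01'

# Wrap the value for the driver so it is bound as a BLOB parameter.
conn.execute('INSERT INTO activity VALUES (?)', (sqlite3.Binary(blob),))
row = conn.execute('SELECT data FROM activity WHERE data = ?',
                   (sqlite3.Binary(blob),)).fetchone()
print(row[0] == blob)  # True
```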
--
Yang Zhang
http://yz.mit.edu/




[sqlalchemy] Re: BLOB and str comparisons

2010-05-21 Thread Yang Zhang
On Fri, May 21, 2010 at 6:47 PM, Yang Zhang yanghates...@gmail.com wrote:
 I'm trying to run the following:

  session.query(Activity).filter(Activity.blob == blob).one()

 where Activity.blob is a BLOB and blob is a Python bytes object
 (equiv. to str in Python 2.x). But although I can insert Activity(blob
 = blob) objects fine, the above query fails with:

 ProgrammingError: (ProgrammingError) You must not use 8-bit
 bytestrings unless you use a text_factory that can interpret 8-bit
 bytestrings (like text_factory = str).

 This is because blob has to be wrapped with sqlite3.Binary(blob). Why
 doesn't sqlalchemy automatically do this casting, given that it has
 schema awareness? Is there any way to avoid having to do this for
 every such query?

 Thanks in advance.
 --
 Yang Zhang
 http://yz.mit.edu/



Also, how do I do the wrapping portably? For sqlite3 I have to wrap
with sqlite3.Binary, for postgresql I have to wrap with
psycopg2.Binary, etc.
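One portable workaround, sketched here with a typed bind parameter so that each dialect applies its own DBAPI Binary wrapping during bind processing (the model and data are illustrative):

```python
from sqlalchemy import Column, Integer, LargeBinary, bindparam, create_engine
from sqlalchemy.orm import declarative_base, sessionmaker

Base = declarative_base()

class Activity(Base):
    __tablename__ = 'activity'
    id = Column(Integer, primary_key=True)
    blob = Column(LargeBinary)

engine = create_engine('sqlite://')
Base.metadata.create_all(engine)
session = sessionmaker(bind=engine)()

data = b'\x00\xffraw'
session.add(Activity(id=1, blob=data))
session.commit()

# Typing the right-hand side as LargeBinary lets the dialect wrap it
# (sqlite3.Binary, psycopg2.Binary, ...) instead of doing so by hand.
obj = session.query(Activity).filter(
    Activity.blob == bindparam('b', data, type_=LargeBinary)).one()
print(obj.blob == data)  # True
```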


-- 
Yang Zhang
http://yz.mit.edu/




Re: [sqlalchemy] Re: BLOB and str comparisons

2010-05-23 Thread Yang Zhang
On Sat, May 22, 2010 at 4:12 PM, Michael Bayer mike...@zzzcomputing.com wrote:

 On May 22, 2010, at 1:14 AM, Michael Bayer wrote:

 On May 21, 2010, at 10:04 PM, Yang Zhang wrote:

 Also, how do I do the wrapping portably? For sqlite3 I have to wrap

 with sqlite3.Binary, for postgresql I have to wrap with

 psycopg2.Binary, etc.

 I am not able to reproduce your error in py2k or py3k:

 clarifying, I can't reproduce in py3k because the type of bytes is coerced
 into LargeBinary.   I *can* reproduce in py2k using a non-ascii value, as
 the right side of the expression isn't coerced by the left, and we only see
 str or unicode as the type which doesn't by itself get coerced into a
 _Binary.  So I have added an expression coercion rule to the _Binary in
 r19922de7317c to fix this issue.




Ah, I was *just* about to paste you a test case I constructed. Glad to
see this get fixed, really appreciate it.


-- 
Yang Zhang
http://yz.mit.edu/




Re: [sqlalchemy] What is a good pattern for using sqlalchemy+psycopg2+gevent in a pylons app

2010-06-25 Thread Yang Zhang
On Fri, Jun 4, 2010 at 7:16 AM, afrotypa ovuaia...@gmail.com wrote:
 Hi All,


 For a while I have been toying with the idea of developing/running a
 database (postgresql backend) driven WSGI app using an async python
 WSGI server such as gevent.

 Now that the newer psycopg2 (>= 2.2.0) drivers provide coroutine
 support (via a hook that makes C-extension database drivers
 coroutine-friendly), I am looking to try gevent out as soon as I can
 figure out how to integrate gevent and sqlalchemy.

 In theory it should be possible to run blocking SQLAlchemy queries in
 a spawned greenlet using an async DB-API 2.0 driver such as psycopg2,
 blocking only the greenlet handling the specific request.

 See the following URL for details of coroutine support in psycopg2:-

 http://bitbucket.org/dvarrazzo/psycogreen/wiki/Home

 An example of a wait call back implementation for gevent is here :-

 http://bitbucket.org/dvarrazzo/psycogreen/src/tip/gevent/psyco_gevent.py

 My question is: what is a good pattern for using SQLAlchemy with a
 coroutine library such as gevent in a pylons-based WSGI app?

 For starters, how do I configure the SQLAlchemy connection pool to
 create async connections only?

 i.e.

 Write a custom getconn method and pass async=True to
 psycopg2.connect?

 Or

 Pass connect_args to create_engine?

 Assuming gevent spawns a greenlet to handle each WSGI request, I think
 SQLAlchemy should just work as usual in this situation, since the
 scoped_session would ensure that a different SQLAlchemy session is
 used for each request. Is this a reasonable assumption?

 If I wanted to run multiple independent queries (each in its own
 greenlet) in the same pylons request, it appears I would have to
 explicitly create an sqlalchemy session for each greenlet instance in
 this case. Is this the case?

 Am I missing something very important here?

 Hopefully someone here might be able to shed some more light into all
 this.

 Thanks,

 Note: I already posted this question on pylons-discuss, but I am
 thinking this is more an SQLAlchemy than a pylons question.




I would also love to know the answer to this question!  Also
interested in other libraries like those for mysql (gevent-mysql).
-- 
Yang Zhang
http://yz.mit.edu/




[sqlalchemy] Getting ORM to use locking selects (select for update, etc.)

2010-06-25 Thread Yang Zhang
How do I get the ORM to use locking selects when I do a
Session.query()?  E.g., select for update, or select for share
(PG)/select lock in share mode (MySQL).  I know this is possible using
the lower-level SQL expression library.  Thanks in advance.
--
Yang Zhang
http://yz.mit.edu/




Re: [sqlalchemy] What is a good pattern for using sqlalchemy+psycopg2+gevent in a pylons app

2010-06-25 Thread Yang Zhang
On Fri, Jun 25, 2010 at 6:41 AM, Michael Bayer mike...@zzzcomputing.com wrote:
 That's as deep as my thought goes on this and its all based on hypotheticals 
 since I've never used twisted or greenlets or anything like that.    Sorry if 
 I'm totally off on how gevent/greenlets work, the linked documents didn't 
 really make it clear how they work for someone who isn't already familiar.

That's completely fair; the docs leave a lot to be desired. Think of
greenlets as an implementation of cooperative threading. Ordinary
threads can run in parallel, and their scheduling is preemptive:
context switches can happen at any moment. Greenlets, by contrast,
all run within a single OS thread (no parallelism, only multiplexed
concurrency), and context switches happen only at voluntary yield
points.

Why is this useful? Because you can then write non-blocking code in a
blocking style. Non-blocking IO is key to building scalable systems,
but event-driven programming (a la twisted) tends to be more tedious
and less natural than programming with blocking IO. Moreover, most
existing code is written against blocking IO, and event-driven
programming makes such code difficult to reuse.
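The voluntary-switching model described here can be imitated with plain generators (a toy scheduler for illustration only; gevent additionally patches blocking IO so the switches happen inside library calls):

```python
import collections

def worker(name, steps, log):
    # Each iteration does some work, then yields control voluntarily.
    for i in range(steps):
        log.append(f'{name}:{i}')
        yield

def run(tasks):
    # Round-robin scheduler: one OS thread, multiplexed concurrency.
    queue = collections.deque(tasks)
    while queue:
        task = queue.popleft()
        try:
            next(task)          # run until the next voluntary switch
            queue.append(task)
        except StopIteration:
            pass                # task finished

log = []
run([worker('a', 2, log), worker('b', 2, log)])
print(log)  # ['a:0', 'b:0', 'a:1', 'b:1']
```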

So to answer your first question of what things would look like in an
asynchronous world: with cooperative threads like greenlets,
everything would hopefully look identical. Embracing event-driven
style would indeed spell out significant changes to both sqlalchemy
and user code (and that's also not what I'm personally interested in
using).

Upon closer inspection, it seems that changes might not even be
necessary in sqlalchemy, since one can globally set psycopg2's
asynchronous wait callback to hook directly into gevent:

http://bitbucket.org/dvarrazzo/psycogreen/src/tip/gevent/psyco_gevent.py
--
Yang Zhang
http://yz.mit.edu/




Re: [sqlalchemy] Getting ORM to use locking selects (select for update, etc.)

2010-06-25 Thread Yang Zhang
On Fri, Jun 25, 2010 at 6:42 AM, Michael Bayer mike...@zzzcomputing.com wrote:

 On Jun 25, 2010, at 3:27 AM, Yang Zhang wrote:

 How do I get the ORM to use locking selects when I do a
 Session.query()?  E.g., select for update, or select for share
 (PG)/select lock in share mode (MySQL).  I know this is possible using
 the lower-level SQL expression library.  Thanks in advance.

 use query.with_lockmode().   The argument is passed down to the for_update 
 argument of select().
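`with_lockmode()` was the 0.6-era spelling; in later SQLAlchemy releases the same feature became `Query.with_for_update()` (with `read=True` for FOR SHARE / LOCK IN SHARE MODE). A minimal sketch, noting that SQLite simply ignores the locking clause:

```python
from sqlalchemy import Column, Integer, create_engine
from sqlalchemy.orm import declarative_base, sessionmaker

Base = declarative_base()

class Account(Base):
    __tablename__ = 'accounts'
    id = Column(Integer, primary_key=True)
    balance = Column(Integer)

engine = create_engine('sqlite://')
Base.metadata.create_all(engine)
session = sessionmaker(bind=engine)()
session.add(Account(id=1, balance=100))
session.commit()

# Emits SELECT ... FOR UPDATE on backends that support it.
acct = session.query(Account).filter_by(id=1).with_for_update().one()
print(acct.balance)  # 100
```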




Thanks. Is there some way to have sqlalchemy always attach *at least*
a for share/lock in share mode when issuing queries?
-- 
Yang Zhang
http://yz.mit.edu/




[sqlalchemy] ObjectDeletedError on orm.Query.delete

2010-07-01 Thread Yang Zhang
We're getting a strange ObjectDeletedError that we've been trying to
debug for a large part of the day. The problem occurs when we keep a
(hard) reference to an ORM object, drop_all, create_all, then delete()
on a query over that ORM class (using the same scoped_session).
Unfortunately, just the above steps aren't enough; despite our efforts
so far, we haven't been able to reproduce this outside of our (large)
application. Any help would be tremendously appreciated.

Traceback (most recent call last):
...
  File /home/yang/work/app.py, line 3115, in foo
count = q.delete()
  File /usr/local/lib/python2.6/dist-packages/sqlalchemy/orm/query.py,
line 1858, in delete
if issubclass(cls, target_cls) and eval_condition(obj)]
  File /usr/local/lib/python2.6/dist-packages/sqlalchemy/orm/evaluator.py,
line 60, in evaluate
value = sub_evaluate(obj)
  File /usr/local/lib/python2.6/dist-packages/sqlalchemy/orm/evaluator.py,
line 60, in evaluate
value = sub_evaluate(obj)
  File /usr/local/lib/python2.6/dist-packages/sqlalchemy/orm/evaluator.py,
line 60, in evaluate
value = sub_evaluate(obj)
  File /usr/local/lib/python2.6/dist-packages/sqlalchemy/orm/evaluator.py,
line 60, in evaluate
value = sub_evaluate(obj)
  File /usr/local/lib/python2.6/dist-packages/sqlalchemy/orm/evaluator.py,
line 60, in evaluate
value = sub_evaluate(obj)
  File /usr/local/lib/python2.6/dist-packages/sqlalchemy/orm/evaluator.py,
line 60, in evaluate
value = sub_evaluate(obj)
  File /usr/local/lib/python2.6/dist-packages/sqlalchemy/orm/evaluator.py,
line 60, in evaluate
value = sub_evaluate(obj)
  File /usr/local/lib/python2.6/dist-packages/sqlalchemy/orm/evaluator.py,
line 82, in evaluate
left_val = eval_left(obj)
  File /usr/local/lib/python2.6/dist-packages/sqlalchemy/orm/evaluator.py,
line 42, in lambda
return lambda obj: get_corresponding_attr(obj)
  File /usr/local/lib/python2.6/dist-packages/sqlalchemy/orm/attributes.py,
line 159, in __get__
return self.impl.get(instance_state(instance), instance_dict(instance))
  File /usr/local/lib/python2.6/dist-packages/sqlalchemy/orm/attributes.py,
line 377, in get
value = callable_(passive=passive)
  File /usr/local/lib/python2.6/dist-packages/sqlalchemy/orm/state.py,
line 268, in __call__
self.manager.deferred_scalar_loader(self, toload)
  File /usr/local/lib/python2.6/dist-packages/sqlalchemy/orm/mapper.py,
line 2097, in _load_scalar_attributes
raise orm_exc.ObjectDeletedError(Instance '%s' has been deleted.
% state_str(state))
ObjectDeletedError: Instance 'Blah at 0x33032d0' has been deleted.

When echoing the SQL, we see toward the end:

...
sqlalchemy.engine.base.Engine.0x...1f90: INFO: DELETE FROM blah WHERE
blah.actor_id = ? AND blah.atype = ? AND blah.domain = ? AND ...
sqlalchemy.engine.base.Engine.0x...1f90: INFO: (read-only buffer for
0x4aaece0, size -1, offset 0 at 0x4807170, 2, u'fake.com', ...)
sqlalchemy.engine.base.Engine.0x...1f90: INFO: SELECT blah.id AS
blah_id, blah.actor_id AS blah_actor_id, blah.domain AS blah_domain,
...
FROM blah
WHERE blah.id = ?
sqlalchemy.engine.base.Engine.0x...1f90: INFO: (426,)
sqlalchemy.engine.base.Engine.0x...1f90: INFO: ROLLBACK
--
Yang Zhang
http://yz.mit.edu/




[sqlalchemy] Read-only transactions

2010-11-05 Thread Yang Zhang
Hi, we're building an application where we're distinguishing strictly
between read-only and read-write transactions, so we'd like to (as a
sanity measure) ensure that we're not inadvertently doing any writing
from the read-only sections.

What's the simplest way to catch writes with sqlalchemy? Some Googling
turned up monkey-patching the .flush() method to be a no-op that also
logs a message, but we're interested in raising an exception if the
flush actually has dirty data to write.

Thanks!
--
Yang Zhang
http://yz.mit.edu/




Re: [sqlalchemy] Read-only transactions

2010-11-06 Thread Yang Zhang
Thanks, Michael. We're now doing something similar to your suggestion:
monkey-patching Session.flush() to check len(Session.dirty) == 0. This
seems to work OK.

We can't rely on read-only DB-level users because we also use sqlite
for testing purposes, and we'd like to catch errors there too
(impractical to make our web application straddle two different Linux
users).

SessionExtension is interesting, but it seems there's no way to pass
information to it, e.g. to let it know whether we're in a read-only
transaction or not. We could switch off between two Sessions - one
read-only and the other read-write - but we'd like to avoid having to
keep around for each web app thread multiple Sessions (and thus 2x the
open connections on the DB).

Yang

On Fri, Nov 5, 2010 at 6:42 PM, Michael Bayer mike...@zzzcomputing.com wrote:

 On Nov 5, 2010, at 8:48 PM, Yang Zhang wrote:

 Hi, we're building an application where we're distinguishing strictly
 between read-only and read-write transactions, so we'd like to (as a
 sanity measure) ensure that we're not inadvertently doing any writing
 from the read-only sections.

 What's the simplest way to catch writes with sqlalchemy? Some Googling
 turned up monkey-patching the .flush() method to be a no-op that also
 logs a message, but we're interested in raising an exception if the
 flush actually has dirty data to write.


 The traditional approach to this would be to connect the read-only session 
 to an engine that's on a user with read-only permissions.     From within 
 SQLA, I suppose a before_flush() extension would fit the bill here, if new, 
 dirty, deleted aren't empty, raise.   If you want to be more fine grained 
 about it and not trigger for attributes with no net changes, scan through 
 dirty and see if session.is_modified(obj) returns True.
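A sketch of that before_flush guard, written against the event API of later SQLAlchemy releases (in the 0.6 era the same hook lived on SessionExtension):

```python
from sqlalchemy import Column, Integer, String, create_engine, event
from sqlalchemy.orm import declarative_base, sessionmaker

Base = declarative_base()

class User(Base):
    __tablename__ = 'users'
    id = Column(Integer, primary_key=True)
    name = Column(String)

engine = create_engine('sqlite://')
Base.metadata.create_all(engine)
SessionLocal = sessionmaker(bind=engine)

def make_read_only(session):
    @event.listens_for(session, 'before_flush')
    def guard(sess, flush_context, instances):
        # Raise only when the flush would actually write something.
        if sess.new or sess.dirty or sess.deleted:
            raise RuntimeError('write attempted in read-only session')

ro = SessionLocal()
make_read_only(ro)
ro.add(User(name='nope'))
try:
    ro.commit()
    caught = False
except RuntimeError:
    caught = True
print(caught)  # True
```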



 Thanks!
 --
 Yang Zhang
 http://yz.mit.edu/






-- 
Yang Zhang
http://yz.mit.edu/




[sqlalchemy] Quick question on fetch batch size

2010-11-22 Thread Yang Zhang
When reading objects like so (notice no `.all()`):

  for obj in Session().query(User):
...

what's the batch size with which sqlalchemy fetches rows from the DB?
(If the engine matters: for Postgresql, MySQL, and sqlite?)

Thanks.
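For context, a hedged sketch: with most DBAPIs the driver buffers the full result set client-side, and the ORM by default materializes every object before iteration begins; `Query.yield_per()` switches to batched materialization (actual driver-level prefetch varies by backend):

```python
from sqlalchemy import Column, Integer, create_engine
from sqlalchemy.orm import declarative_base, sessionmaker

Base = declarative_base()

class User(Base):
    __tablename__ = 'users'
    id = Column(Integer, primary_key=True)

engine = create_engine('sqlite://')
Base.metadata.create_all(engine)
session = sessionmaker(bind=engine)()
session.add_all([User(id=i) for i in range(10)])
session.commit()

# Objects are built in batches of 3 instead of all at once.
ids = [u.id for u in session.query(User).order_by(User.id).yield_per(3)]
print(len(ids))  # 10
```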
--
Yang Zhang
http://yz.mit.edu/




[sqlalchemy] Multi-get?

2011-01-26 Thread Yang Zhang
Is there something similar to the .get() method in SqlSoup and Session
but which allows me to fetch more than one object by ID, so as to save
on round trips to the DB? (This could be done by composing using the
IN operator in SQL.) Thanks in advance.

-- 
Yang Zhang
http://yz.mit.edu/




Re: [sqlalchemy] Multi-get?

2011-01-27 Thread Yang Zhang
Yeah, that's what we do right now.

On Wed, Jan 26, 2011 at 8:03 PM, Mike Conley mconl...@gmail.com wrote:

 On Wed, Jan 26, 2011 at 8:17 PM, Yang Zhang yanghates...@gmail.com wrote:

 Is there something similar to the .get() method in SqlSoup and Session
 but which allows me to fetch more than one object by ID, so as to save
 on round trips to the DB? (This could be done by composing using the
 IN operator in SQL.) Thanks in advance.

 Did you try something like
    session.query(MyClass).filter(MyClass.id.in_([...list of ids...])).all()
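Expanded into a runnable sketch (the model and ids are illustrative):

```python
from sqlalchemy import Column, Integer, String, create_engine
from sqlalchemy.orm import declarative_base, sessionmaker

Base = declarative_base()

class MyClass(Base):
    __tablename__ = 'things'
    id = Column(Integer, primary_key=True)
    name = Column(String)

engine = create_engine('sqlite://')
Base.metadata.create_all(engine)
session = sessionmaker(bind=engine)()
session.add_all([MyClass(id=i, name=f'n{i}') for i in range(5)])
session.commit()

# One round trip for several ids, instead of one .get() per id.
objs = session.query(MyClass).filter(MyClass.id.in_([1, 3, 4])).all()
print(sorted(o.id for o in objs))  # [1, 3, 4]
```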


 --
 Mike Conley





-- 
Yang Zhang
http://yz.mit.edu/




[sqlalchemy] Advice on debugging an unexpected dirty flush update

2011-03-09 Thread Yang Zhang
I have a model with a field:

class Obj(Base):
  ...
  meta = Column(PickleType)

For some reason, when I do

  obj = session.query(Obj).get(id)
  obj2 = session.query(Obj2)

I get a dirty UPDATE being attempted on the first object (noticed this
due to read-only access to a Postgresql DB):

InternalError: (InternalError) transaction is read-only
 'UPDATE intel.opportunity_runs SET meta=%(meta)s WHERE
intel.opportunity_runs.id = %(intel_opportunity_runs_id)s' {'meta':
psycopg2._psycopg.Binary object at 0x2f8fe18,
'intel_opportunity_runs_id': '21da68c8-499d-11e0-a5c9-12313f00dc12'}

I'm using 0.6.4. I haven't been able to repro this issue outside the
app. Just wondering if there's any high-level tips on how to best go
about debugging this. Thanks in advance.




Re: [sqlalchemy] Advice on debugging an unexpected dirty flush update

2011-03-09 Thread Yang Zhang
On Wed, Mar 9, 2011 at 7:46 AM, Michael Bayer mike...@zzzcomputing.com wrote:

 On Mar 9, 2011, at 4:57 AM, Yang Zhang wrote:

 I have a model with a field:

 class Obj(Base):
  ...
  meta = Column(PickleType)

 For some reason, when I do

  obj = session.query(Obj).get(id)
  obj2 = session.query(Obj2)

 I get a dirty UPDATE being attempted on the first object (noticed this
 due to read-only access to a Postgresql DB):

 InternalError: (InternalError) transaction is read-only
 'UPDATE intel.opportunity_runs SET meta=%(meta)s WHERE
 intel.opportunity_runs.id = %(intel_opportunity_runs_id)s' {'meta':
 <psycopg2._psycopg.Binary object at 0x2f8fe18>,
 'intel_opportunity_runs_id': '21da68c8-499d-11e0-a5c9-12313f00dc12'}

 I'm using 0.6.4. I haven't been able to repro this issue outside the
 app. Just wondering if there's any high-level tips on how to best go
 about debugging this. Thanks in advance.


 It would appear that the comparison of the pickled data is coming up as 
 modified, which is a common occurrence.   To debug this, you'd do a pdb or 
 print statement after obj is loaded, assuming that's the one being marked 
 dirty, then do an "obj in session.dirty" which would probably return True, 
 then if still unsure that it's "meta" do an attributes.get_history(obj, 
 "meta") and inspect the history object.  Within get_history() the comparison 
 is actually performed.   By default the == operator is used, as defined by 
 the compare_values() method of the column's type object.    The History 
 object returns a tuple with ([new value], (), [previous value]), the 
 previous value being a copy of the original.   If new_value != 
 previous_value, that is the issue.

 Going forward, just generally, if you see UPDATEs coming out for no reason 
 and you see a PickleType, its already obvious that this is the issue.    
 mutable types are headed towards deprecation in favor of the mutable 
 scalars extension in 0.7.   The strategies, in order of preference are:

 1. set mutable=False on the PickleType.  Your CPU will thank you as 
 mutable=True is extremely inefficient.

 2. If in place mutation detection is definitely needed, if possible (I 
 know, its probably not, just go to #3), upgrade to 0.7 and use the new 
 mutable scalars extension to detect in-place change events.

 3. add a comparator function to your PickleType, or implement an __eq__() 
 method on the object that you're pickling.






Ah, after some more experimentation, I think I understand what you're
saying. I can repro this problem outside the app by trying to pickle
something whose instances don't implement __eq__ (e.g., object()).
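That failing comparison can be sketched with nothing but the stdlib (Meta
is a made-up stand-in for the pickled value):

```python
import pickle

class Meta(object):
    """No __eq__ defined, so instances compare by identity."""
    pass

original = Meta()
# Simulate what the mutable-type check does: compare the live value
# against an unpickled copy of the value as originally loaded.
copy = pickle.loads(pickle.dumps(original))

print(original == copy)  # False: different objects, so the value looks "changed"
```

Since the copy is a distinct object and identity comparison is all that's
available, every flush-time check reports the attribute as modified.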

For posterity, my guess from reading the docs on MutableType (correct
me if I'm wrong) is that the ORM does loads(meta) twice, once for the
historic instance and once for the instance the app uses/mutates (not
really sure where in the code these take place), and it compares the
two whenever it checks for anything dirty to flush (which may be
pretty frequently, e.g. on every read).

And it sounds like in 0.7 the mutable scalars approach is to
define/use your own data structures that know when they've been
mutated and inform Mutable.changed().

Thanks!

-- 
Yang Zhang
http://yz.mit.edu/




Re: [sqlalchemy] Advice on debugging an unexpected dirty flush update

2011-03-09 Thread Yang Zhang
On Wed, Mar 9, 2011 at 12:07 PM, Yang Zhang yanghates...@gmail.com wrote:
 On Wed, Mar 9, 2011 at 7:46 AM, Michael Bayer mike...@zzzcomputing.com 
 wrote:

 On Mar 9, 2011, at 4:57 AM, Yang Zhang wrote:

 I have a model with a field:

 class Obj(Base):
  ...
  meta = Column(PickleType)

 For some reason, when I do

  obj = session.query(Obj).get(id)
  obj2 = session.query(Obj2)

 I get a dirty UPDATE being attempted on the first object (noticed this
 due to read-only access to a Postgresql DB):

 InternalError: (InternalError) transaction is read-only
 'UPDATE intel.opportunity_runs SET meta=%(meta)s WHERE
 intel.opportunity_runs.id = %(intel_opportunity_runs_id)s' {'meta':
 <psycopg2._psycopg.Binary object at 0x2f8fe18>,
 'intel_opportunity_runs_id': '21da68c8-499d-11e0-a5c9-12313f00dc12'}

 I'm using 0.6.4. I haven't been able to repro this issue outside the
 app. Just wondering if there's any high-level tips on how to best go
 about debugging this. Thanks in advance.


 It would appear that the comparison of the pickled data is coming up as 
 modified, which is a common occurrence.   To debug this, you'd do a pdb or 
 print statement after obj is loaded, assuming that's the one being marked 
 dirty, then do an "obj in session.dirty" which would probably return True, 
 then if still unsure that it's "meta" do an attributes.get_history(obj, 
 "meta") and inspect the history object.  Within get_history() the 
 comparison is actually performed.   By default the == operator is used, as 
 defined by the compare_values() method of the column's type object.    The 
 History object returns a tuple with ([new value], (), [previous value]), the 
 previous value being a copy of the original.   If new_value != 
 previous_value, that is the issue.

 Going forward, just generally, if you see UPDATEs coming out for no reason 
 and you see a PickleType, its already obvious that this is the issue.    
 mutable types are headed towards deprecation in favor of the mutable 
 scalars extension in 0.7.   The strategies, in order of preference are:

 1. set mutable=False on the PickleType.  Your CPU will thank you as 
 mutable=True is extremely inefficient.

 2. If in place mutation detection is definitely needed, if possible (I 
 know, its probably not, just go to #3), upgrade to 0.7 and use the new 
 mutable scalars extension to detect in-place change events.

 3. add a comparator function to your PickleType, or implement an __eq__() 
 method on the object that you're pickling.






 Ah, after some more experimentation, I think I understand what you're
 saying. I can repro this problem outside the app by trying to pickle
 something whose instances don't implement __eq__ (e.g., object()).

 For posterity, my guess from reading the docs on MutableType (correct
 me if I'm wrong) is that the ORM does loads(meta) twice, once for the
 historic instance and once for the instance the app uses/mutates (not
 really sure where in the code these take place), and it compares the
 two whenever it checks for anything dirty to flush (which may be
 pretty frequently, e.g. on every read).

 And it sounds like in 0.7 the mutable scalars approach is to
 define/use your own data structures that know when they've been
 mutated and inform Mutable.changed().

 Thanks!

 --
 Yang Zhang
 http://yz.mit.edu/


Also, when using mutable=False, is there a way to manually mark the
field dirty, without creating a new copy of the value with a different
id?

-- 
Yang Zhang
http://yz.mit.edu/




[sqlalchemy] ORM performance

2011-04-13 Thread Yang Zhang
We've been finding the SA ORM to be a large component in our
application execution time.  Our application makes frequent queries,
and most of them simply query whole rows from a single table with some
filters, but no joins (therefore, very basic queries).  We've found
that an alarming amount of CPU time is being spent on three parts:

1. Query compilation.  Even though the compiled_cache execution option
exists, the way Query objects are commonly constructed makes compiled
statements non-cacheable.  For example, the statements created often
have the argument values baked in, and in any case, compiled_cache is
keyed by instance-equality statements, so unless we pass in the exact
same statement, we have to compile again.  We hacked around this by
manually managing statement instances for some subset of our most
common queries so that they can be looked up by compiled_cache.  This
resulted in ~2x improvement.

2. Instance mapping.  For these same relatively simple queries that
don't need fancy mapping but are run over tables with a lot of
columns, it takes a long time to create the ORM instances from rows.
We worked around this by adding our own MapperExtension with a
populate_instance that runs much faster, but makes several assumptions
(single-table, all columns, etc.).  This resulted in ~2x improvement
after applying 1.

3. ResultMetaData creation.  Since our tables have a lot of columns, a
significant amount of time is actually spent in creating processors
that read data off the cursor.  This work is sadly not reused in
between queries.  We hacked around this by building our own dialect
execution_ctx_cls that caches ResultMetaData._keymap/keys for certain
queries.  This resulted in ~3x performance improvement after applying
1 and 2.

All this feels like a lot of work, and we can't help but feel that we
might be doing something wrong or missing something obvious.  We're
wondering if there are any tips or anything built in to SA that could
help improve the performance situation.

Thanks!

-- 
Yang Zhang
http://yz.mit.edu/




[sqlalchemy] Postgresql COPY

2011-04-15 Thread Yang Zhang
Is it possible to execute a Postegresql COPY ... FROM STDIN statement
via sqlalchemy, or do we have to drop down to psycopg2 for that
(http://initd.org/psycopg/docs/usage.html#using-copy-to-and-copy-from)?
 Tried executing a COPY statement followed directly by the input
values, as in psql, but that didn't work.  Thanks in advance.

-- 
Yang Zhang
http://yz.mit.edu/




Re: [sqlalchemy] ORM performance

2011-04-26 Thread Yang Zhang
On Wed, Apr 13, 2011 at 6:07 PM, Michael Bayer mike...@zzzcomputing.com wrote:
 To work around the instance arguments being baked in, create the query like 
 this:

 query.filter(SomeClass.somerecord == bindparam("somerecord"))

 The params are then added using query.params(somerecord=x).

Is it possible to use this with .get()?

Currently, all our

  ...query(Item).get(id)

become

  q=...query(Item).filter_by(id=bindparam('id'))
  q.params(id=id).one()

We're thinking of having a custom Session and Query that override
Query.get() to build and cache the above query internally, since this
is such a frequently used construct.  But we wanted to see if we're
missing anything first.
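A self-contained sketch of that construct, assuming a modern SQLAlchemy and
a made-up Item model:

```python
from sqlalchemy import Column, Integer, String, bindparam, create_engine
from sqlalchemy.orm import Session, declarative_base

Base = declarative_base()

class Item(Base):
    __tablename__ = "item"
    id = Column(Integer, primary_key=True)
    name = Column(String)

engine = create_engine("sqlite://")
Base.metadata.create_all(engine)
session = Session(engine)
session.add(Item(id=1, name="widget"))
session.commit()

# Build the query once; the named bind parameter keeps the compiled
# SQL identical across calls, so it stays cacheable.
items_by_id = session.query(Item).filter(Item.id == bindparam("id"))

# Reuse it, supplying only the parameter value each time.
print(items_by_id.params(id=1).one().name)  # widget
```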

-- 
Yang Zhang
http://yz.mit.edu/




Re: [sqlalchemy] ORM performance

2011-04-26 Thread Yang Zhang
On Tue, Apr 26, 2011 at 12:32 PM, Yang Zhang yanghates...@gmail.com wrote:
 On Wed, Apr 13, 2011 at 6:07 PM, Michael Bayer mike...@zzzcomputing.com 
 wrote:
 To work around the instance arguments being baked in, create the query 
 like this:

 query.filter(SomeClass.somerecord == bindparam("somerecord"))

 The params are then added using query.params(somerecord=x).

 Is it possible to use this with .get()?

 Currently, all our

  ...query(Item).get(id)

 become

  q=...query(Item).filter_by(id=bindparam('id'))
  q.params(id=id).one()

 We're thinking of having a custom Session and Query that override
 Query.get() to build and cache the above query internally, since this
 is such a frequently used construct.  But we wanted to see if we're
 missing anything first.

Actually, this is turning out *slower* than using .get(), by about an
order of magnitude.

-- 
Yang Zhang
http://yz.mit.edu/




[sqlalchemy] Query object over sqlite3 returning Nones

2011-11-18 Thread Yang Zhang
I'm using sqlalchemy 0.6.6 and sqlite 3.6.22 on Python 2.6.  When I do:

In [1]: for i in ses.query(UserSnapshot):
   ...:     print i
   ...:     if i is None: break
   ...:

I see:

...
<twopt.UserSnapshot object at 0x86a52f50>
<twopt.UserSnapshot object at 0x86a55050>
<twopt.UserSnapshot object at 0x86a55110>
<twopt.UserSnapshot object at 0x86a551d0>
<twopt.UserSnapshot object at 0x86a55290>
<twopt.UserSnapshot object at 0x86a55350>
None

My schema:

class User(Base):
  __tablename__ = 'user'
  id = Column(Integer, primary_key=True)
  blob = Column(LargeBinary, nullable=False)
  since = Column(DateTime, nullable=False)

class UserSnapshot(Base):
  __tablename__ = 'user_snapshot'
  id = Column(Integer, primary_key=True)
  uid = Column(Integer, ForeignKey(User.id), nullable=False)
  blob = Column(LargeBinary, nullable=False)
  time = Column(DateTime, nullable=False)

Short of dumping my entire DB, any hints as to what might cause `None`
to be returned?  I searched the documentation but couldn't find any
leads.  Thanks.

-- 
Yang Zhang
http://yz.mit.edu/




[sqlalchemy] Re: Query object over sqlite3 returning Nones

2011-11-18 Thread Yang Zhang
Oy...this turned out to be due to my own fiddling with the DB outside
of sqlalchemy, where I swapped out the user_snapshot table with a copy
that had

  id int primary key

instead of

  id integer primary key

Apparently sqlite treats these differently:

sqlite> create table a (a integer primary key, b integer);
sqlite> insert into a (b) values (0);
sqlite> select * from a;
1|0
sqlite> create table b (a int primary key, b integer);
sqlite> insert into b (b) values (0);
sqlite> select * from b;
|0
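For reference, the same distinction is reproducible from Python's stdlib
sqlite3 module:

```python
import sqlite3

con = sqlite3.connect(":memory:")
# Only the exact type name INTEGER PRIMARY KEY makes the column an alias
# for sqlite's rowid; "int primary key" is an ordinary (nullable) column.
con.execute("create table a (a integer primary key, b integer)")
con.execute("create table b (a int primary key, b integer)")
con.execute("insert into a (b) values (0)")
con.execute("insert into b (b) values (0)")

print(con.execute("select a from a").fetchone())  # (1,)   rowid was assigned
print(con.execute("select a from b").fetchone())  # (None,) column stayed NULL
```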

I couldn't find in http://www.sqlite.org/autoinc.html or
http://www.sqlite.org/datatype3.html any mention of this peculiar
distinguishing behavior.  Anyway, that's that.

On Fri, Nov 18, 2011 at 4:45 PM, Yang Zhang yanghates...@gmail.com wrote:
 I'm using sqlalchemy 0.6.6 and sqlite 3.6.22 on Python 2.6.  When I do:

    In [1]: for i in ses.query(UserSnapshot):
       ...:     print i
       ...:     if i is None: break
       ...:

 I see:

    ...
    <twopt.UserSnapshot object at 0x86a52f50>
    <twopt.UserSnapshot object at 0x86a55050>
    <twopt.UserSnapshot object at 0x86a55110>
    <twopt.UserSnapshot object at 0x86a551d0>
    <twopt.UserSnapshot object at 0x86a55290>
    <twopt.UserSnapshot object at 0x86a55350>
    None

 My schema:

    class User(Base):
      __tablename__ = 'user'
      id = Column(Integer, primary_key=True)
      blob = Column(LargeBinary, nullable=False)
      since = Column(DateTime, nullable=False)

    class UserSnapshot(Base):
      __tablename__ = 'user_snapshot'
      id = Column(Integer, primary_key=True)
      uid = Column(Integer, ForeignKey(User.id), nullable=False)
      blob = Column(LargeBinary, nullable=False)
      time = Column(DateTime, nullable=False)

 Short of dumping my entire DB, any hints as to what might cause `None`
 to be returned?  I searched the documentation but couldn't find any
 leads.  Thanks.

 --
 Yang Zhang
 http://yz.mit.edu/




-- 
Yang Zhang
http://yz.mit.edu/




[sqlalchemy] AttributeError: 'Session' object has no attribute '_model_changes'

2013-10-22 Thread Yang Zhang
Specifically, the code path where this is appearing ends up looking like:

session = sqlalchemy.orm.Session(sqlalchemy.create_engine('sqlite:///'))
session.execute('run_some_transaction()')
session.commit()

This raises:

AttributeError: 'Session' object has no attribute '_model_changes'

I get that there's nothing happening at the Session layer, but how
else do I achieve this without having to duplicating my code to use
raw Connections instead of Sessions?  (Most of my code *does* use
Sessions; only under certain conditions does no Session layer activity
actually take place.)

-- 
You received this message because you are subscribed to the Google Groups 
sqlalchemy group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to sqlalchemy+unsubscr...@googlegroups.com.
To post to this group, send email to sqlalchemy@googlegroups.com.
Visit this group at http://groups.google.com/group/sqlalchemy.
For more options, visit https://groups.google.com/groups/opt_out.


[sqlalchemy] Re: AttributeError: 'Session' object has no attribute '_model_changes'

2013-10-22 Thread Yang Zhang
Sorry, disregard—wrong mailing list!  This is Flask-SQLAlchemy-specific.

On Mon, Oct 21, 2013 at 11:51 PM, Yang Zhang yanghates...@gmail.com wrote:
 Specifically, the code path where this is appearing ends up looking like:

 session = sqlalchemy.orm.Session(sqlalchemy.create_engine('sqlite:///'))
 session.execute('run_some_transaction()')
 session.commit()

 This raises:

 AttributeError: 'Session' object has no attribute '_model_changes'

 I get that there's nothing happening at the Session layer, but how
 else do I achieve this without having to duplicating my code to use
 raw Connections instead of Sessions?  (Most of my code *does* use
 Sessions; only under certain conditions does no Session layer activity
 actually take place.)



-- 
Yang Zhang
http://yz.mit.edu/



[sqlalchemy] Connecting via pyodbc (to Vertica)

2013-11-08 Thread Yang Zhang
I'm trying to connect to Vertica as a regular ODBC DSN, which I can do
fine outside of sqlalchemy:

$ isql -v pod
+---+
| Connected!|
|   |
| sql-statement |
| help [tablename]  |
| quit  |
|   |
+---+
SQL>

In [4]: pyodbc.connect('DSN=pod', ansi=True)
Out[4]: <pyodbc.Connection at 0x18f0440>

In [6]: con.execute('select 1')
Out[6]: <pyodbc.Cursor at 0x18e8c90>

How do I get sqlalchemy to connect?  It speaks a Postgresql-like
dialect (since it's a fork of PG); I have installed vertica-sqlalchemy
which provides this dialect.  However, I'm confused because there's no
way to just specify a DSN to sqlalchemy (not even getting to the
dialect-using stage of connection).  Whatever strings I try, I get:

DBAPIError: (Error) ('IM002', '[IM002] [unixODBC][Driver Manager]Data
source name not found, and no default driver specified (0)
(SQLDriverConnect)') None None

The /etc/odbc.ini is pretty simple:

[ODBC Data Sources]
pod = the pod database

[pod]
Driver = /opt/vertica/lib64/libverticaodbc.so
Servername = localhost
Database = pod
Port = 15433
UserName = dbadmin
Password = ...

I've been wrestling with this for a good part of the day.  Any hints
would be greatly appreciated.  Thanks in advance.



[sqlalchemy] Re: Connecting via pyodbc (to Vertica)

2013-11-08 Thread Yang Zhang
Right after sending this, I hit upon the magic incantation, which I
had not thought of trying:

vertica+pyodbc://pod

Sigh.

On Fri, Nov 8, 2013 at 4:48 PM, Yang Zhang yanghates...@gmail.com wrote:
 I'm trying to connect to Vertica as a regular ODBC DSN, which I can do
 fine outside of sqlalchemy:

 $ isql -v pod
 +---+
 | Connected!|
 |   |
 | sql-statement |
 | help [tablename]  |
 | quit  |
 |   |
 +---+
 SQL>

 In [4]: pyodbc.connect('DSN=pod', ansi=True)
 Out[4]: <pyodbc.Connection at 0x18f0440>

 In [6]: con.execute('select 1')
 Out[6]: <pyodbc.Cursor at 0x18e8c90>

 How do I get sqlalchemy to connect?  It speaks a Postgresql-like
 dialect (since it's a fork of PG); I have installed vertica-sqlalchemy
 which provides this dialect.  However, I'm confused because there's no
 way to just specify a DSN to sqlalchemy (not even getting to the
 dialect-using stage of connection).  Whatever strings I try, I get:

 DBAPIError: (Error) ('IM002', '[IM002] [unixODBC][Driver Manager]Data
 source name not found, and no default driver specified (0)
 (SQLDriverConnect)') None None

 The /etc/odbc.ini is pretty simple:

 [ODBC Data Sources]
 pod = the pod database

 [pod]
 Driver = /opt/vertica/lib64/libverticaodbc.so
 Servername = localhost
 Database = pod
 Port = 15433
 UserName = dbadmin
 Password = ...

 I've been wrestling with this for a good part of the day.  Any hints
 would be greatly appreciated.  Thanks in advance.



-- 
Yang Zhang
http://yz.mit.edu/

-- 
You received this message because you are subscribed to the Google Groups 
sqlalchemy group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to sqlalchemy+unsubscr...@googlegroups.com.
To post to this group, send email to sqlalchemy@googlegroups.com.
Visit this group at http://groups.google.com/group/sqlalchemy.
For more options, visit https://groups.google.com/groups/opt_out.