Hi list:
I've recently been plagued by a runaway query somewhere in one of my apps
that mistakenly loads tens of thousands of rows, swamping the working set of the
Python process and eventually invoking the OOM killer.
Unfortunately, the database backend I'm using (MSSQL 2005) doesn't provide a lot
depend though on the fact
that your query is returning distinct primary keys - if you have a basic
cartesian product occurring (which is likely), Query's uniquifying of
results might conceal that.
On Mar 30, 2011, at 11:31 AM, Rick Morrison wrote:
Hi list:
I've recently been plagued
Hi Everybody,
Crossposting this from the pyodbc ML, in the hopes that someone here has run
into this.. Are there any SQLA unit tests that exercise multiple DB
connections over unixODBC?
Hi all,
Having trouble getting multiple MSSQL connections to work with unixODBC on
CentOS. Confirmed that
AFAIK, there's nothing in SQLA that will address this -- the issue sounds
new to me, and it seems to me that it's pretty clearly some kind of
pyodbc/FreeTDS issue. Check your character encoding settings; there are quite
a few reported issues with MSSQL + pyodbc + unicode statements. You may want
to
I think it's just a flag on the class. You can probably monkeypatch it to
get your app working for now (and it would be False).
That works fine, many thanks.
(for the archive, here's what I'm doing right after the initial sqlalchemy
import:)
# remove for SQLA 0.5.3+
from
I'm running into this one now as well, though it's with Session.expire(),
not Session.merge().
The attached script will reproduce the issue against the 0.4 trunk
On Mon, Dec 1, 2008 at 11:05 AM, Michael Bayer mike...@zzzcomputing.com wrote:
this could be a bug, depending on if we have test
Yep, same here.
..on my mssql 2005, I tried this query batch:
set implicit_transactions on
go
select 'After implicit ON', @@trancount
exec sp_datatype_info
go
select 'After query w/implicit', @@trancount
begin transaction
go
select 'After BEGIN', @@trancount
Here's the output:
Jeez, yet another wonky mssql behavior switch? That thing's got more flags
than the U.N.
I believe that the MSSQL ODBC driver on Windows automatically sets
IMPLICIT_TRANSACTION
off on connect, whereas FreeTDS likely does not, which is perhaps the source
of the problem.
Here's what I think the
Seems to me the issued SQL would work if the innermost query (the UNION
query) was phrased as a subquery. Have you tried simply wrapping the literal
SQL text in parenthesis to force it into a subquery like this?
nums = sql.select(['n'], from_obj=sql.text(r"(SELECT 1 as n
On Tue, Mar 3, 2009 at 5:31 PM, phrrn...@googlemail.com
phrrn...@googlemail.com wrote:
Thanks. I wrapped it as '(original_sql) as foo' as Sybase needs a
name for the derived table. You have helped to get primary key and
index introspection working on Sybase!
Huh, I thought you were using
Unfortunately, AFAICT, MS-SQL does not have an OFFSET clause (it uses
TOP instead of LIMIT). How does SQLA handle this situation?
For mssql2005 and higher, (those versions of mssql that support window
functions using OVER, row_number(), rank(), etc.), we simulate an OFFSET by
wrapping the
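(For the archive, a rough sketch of what triggers the emulation; the table,
dburi, and ordering are hypothetical, and it needs has_window_funcs enabled,
as discussed further down:)

from sqlalchemy import create_engine, select, MetaData, Table

engine = create_engine('mssql://user:pass@mydsn/db?has_window_funcs=1')
meta = MetaData(engine)
mytable = Table('mytable', meta, autoload=True)

# SQLA wraps this select in a derived table that adds
# ROW_NUMBER() OVER (ORDER BY ...) and filters on the row range,
# since MSSQL has no native OFFSET clause.
stmt = select([mytable], limit=10, offset=20)
rows = stmt.execute().fetchall()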
The dotted schema notation discussed in that ticket should fix the issue,
yes.
Meantime a short-term workaround might be to define a view in the local
database that does the cross-database reference and use the view in your
query.
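Something along these lines (a sketch; all names are hypothetical, and it
assumes an existing engine):

from sqlalchemy import MetaData, Table

metadata = MetaData(engine)
# define a local view over the cross-database table...
engine.execute(
    "CREATE VIEW local_orders AS "
    "SELECT * FROM otherdb.dbo.orders")
# ...then reflect and query the view as if it were a local table
orders = Table('local_orders', metadata, autoload=True)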
I was wondering if anyone was aware of a JDBC DBAPI module for
cpython.
Interesting idea, and could be a killer feature for SA 0.6+ if it could be
made to work
Jpype could perhaps do the job:
http://jpype.sourceforge.net/
There's been at least some activity with accessing JDBC drivers from
I worked up a test case that simulates your usage, and checks the number of
open MSSQL connections using a call to the system stored proc sp_who, so
it can run in a more automated fashion.
I originally got mixed results on this, it would pass about 50% of the time
and fail about 50% of the time.
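The gist of the check is something like this (a sketch, not the actual test
case; the login filter is hypothetical):

def open_connection_count(engine, login):
    # sp_who returns one row per connection; count the ones
    # belonging to our test login
    rows = engine.execute("exec sp_who").fetchall()
    return len([r for r in rows if r['loginame'].strip() == login])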
Hey, seems that you've found it: conn = self._pool.get(False)
is the problem
It raises an Empty error...:
It's supposed to; that's the exit condition for the while True loop. It
does make it at least once through the loop, though, right? Enough to close
any connections you may
Oh... I didn't explain myself... I mean that it's already empty at the
first cycle of the loop...
It would be normal to not enter the loop if you haven't yet opened any
connections, as connections are opened on demand. Make sure your program
issues at least one query during this test. If you
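(For reference, the drain pattern under discussion is roughly this; self._pool
is the user's own Queue here, not SQLA's pool. Python 2 Queue module:)

import Queue

while True:
    try:
        conn = self._pool.get(False)   # non-blocking get
    except Queue.Empty:
        break                          # pool drained: the expected exit
    conn.close()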
To make pyodbc close the connection, set pyodbc.pooling = False.
Whoa, I didn't know pyodbc automatically used ODBC connection pooling. Seems
like we should be turning that off if the user is using SQLA pooling.
Uh, did you guys not see my last message in this thread?
...@zzzcomputing.com wrote:
I believe this is a setting you establish when you create the DSN
yourself, no ?
On Jan 23, 2009, at 12:27 PM, Rick Morrison wrote:
To make pyodbc close the connection, set pyodbc.pooling =
False.
Whoa, I didn't know pyodbc automatically used ODBC connection
his answer here:
http://groups.google.com/group/pyodbc/browse_thread/thread/320eab14f6f8830f
You say in that thread that you're already turning off the setting by
issuing:
import pyodbc
pyodbc.pooling = False
before you ever open an SQLAlchemy connection.
Is that still the case?
On Fri, Jan 23, 2009 at 3:05 PM, Michael Bayer mike...@zzzcomputing.com wrote:
pyodbc has the pooling implemented in Python ??? that seems weird ?
How did you get that idea from this thread? My read on it is that it uses
ODBC connection pooling.
OK, it should use whatever is set on the ODBC DSN then. im not sure that
pyodbc should have an opinion about it.
Eh?
is there a way to set pyodbc.pooling = None or some equivalent ?
It's pyodbc.pooling = False, as appears many times upthread
From the OP's description, it sounds like
From your earlier post:
sa_session.close()
sa_Session.close_all()
sa_engine.dispose()
del sa_engine
but it does not close the connection!
Here's Engine.dispose (line 1152, engine/base.py)
def dispose(self):
    self.pool.dispose()
    self.pool = self.pool.recreate()
Please try r5718, it contains an updated method of column construction that
should fix this issue.
Rick
Hey Greg, please set the output format to text (if you're in mssql 2005,
there's a button over the query window with a tooltip that should say
"Results to text") and re-run the query. The text output will be a lot
easier to read.
Thanks
On Wed, Jan 21, 2009 at 12:16 PM, Michael Bayer mike...@zzzcomputing.com wrote:
I think we might need to just change the *args approach in mssql
reflecttable to do everything based on keyword arguments, and add in
some isinstance(String) / isinstance(Numeric) to determine what args
get sent
I think we might need to just change the *args approach in mssql
reflecttable to do everything based on keyword arguments
Yeah, that sounds like a good approach. I'll have a look later today.
Attached is an untested patch against trunk that uses only kwargs to build
out the tabledef. I
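The shape of the idea, not the actual patch (variable and type names are
approximate, standing in for values pulled from the INFORMATION_SCHEMA row):

import sqlalchemy.types as sqltypes
from sqlalchemy import schema

kwargs = {}
if issubclass(coltype, sqltypes.String):
    kwargs['length'] = charlen
elif issubclass(coltype, sqltypes.Numeric):
    kwargs['precision'] = numericprec
    kwargs['scale'] = numericscale
table.append_column(
    schema.Column(name, coltype(**kwargs), nullable=nullable))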
I'm just trying to introspect an existing production database, not
create any new tables.
The structure of the table is read when reflecting the table: it's likely
that an unusual column definition would trigger an error like this, and it
would be helpful to someone diagnosing the problem to
The MSSQL connection string changed for the 0.5 final release. In
particular, the dsn keyword is removed, and the pyodbc connection string
now expects the DSN to be named where the host was previously placed, so
the new connection URL would be:
mssql://username:passw...@mydbodbc
For
I've got a few concerns with the just-committed get_default_schema_name()
approach:
1) What about pre-2005 users? The query used in this patch to fetch the
schema name won't work. There was not even a real hard concept of 'schema'
pre-MSSQL-2005.
2) Prior to MSSQL 2005, MSSQL conflated user
I'm curious: is the MSSQL dialect rendering tables as
"schemaname.tablename" in all cases?
No, I don't think so: the module uses non-overridden calls to
compiler.IdentifierPreparer.format_table() and format_column().
So then the only usage of the get_default_schema_name() is for table
existence
This has been bandied back and forth for months, and I think it's becoming
clear that having sqla map dburls to ODBC connection strings is a losing
battle. Yet another connection argument is not sounding very attractive to
me.
Perhaps a simple reductionist policy for ODBC connections would be
What's the status of 0.5? Is DSN the default in trunk now?
DSN is the first choice in MSSQLDialect_pyodbc.make_connect_string
right now.
That's not what I see. I just pulled the 0.5 trunk, which I haven't been
tracking lately. Still uses the 'dsn' keyword to build a connection string with
Something like this:
As of 0.5 for pyodbc connections:
a) If the keyword argument 'odbc_connect' is given, it is assumed to be a
full ODBC connection string, which is used for the connection (perhaps we
can include a facility for Python string interpolation into this string from
the dburi
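A sketch of how (a) might look from the user side (this is the proposal, not
shipped API at this point; names hypothetical):

import urllib
from sqlalchemy import create_engine

raw = "DRIVER={FreeTDS};SERVER=myhost;DATABASE=mydb;UID=user;PWD=secret"
engine = create_engine('mssql://?odbc_connect=' + urllib.quote_plus(raw))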
mssql://user:[EMAIL PROTECTED]/database?
connect_type=TDS7&other=args&that=are&needed=foo
using connect_type, or some better name, we can map the URL scheme
to an unlimited number of vendor specific connect strings on the back.
Yeah, it's exactly that kind of mapping that has so far been a
So I'm working a bit on this.
If base64 encoding within the bind_processor() can fix
MS-SQL for now, I'd say that would be the approach for the time
being.
turns out base64 encoding is problematic: it requires a corresponding decode
for the data retrieval, which makes it effectively
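(For the archive, the idea was roughly this; a sketch using TypeDecorator,
not what shipped:)

import base64
from sqlalchemy import types

class Base64Binary(types.TypeDecorator):
    impl = types.String

    def process_bind_param(self, value, dialect):
        # encode on the way in...
        return base64.b64encode(value) if value is not None else None

    def process_result_value(self, value, dialect):
        # ...which forces a matching decode on every retrieval
        return base64.b64decode(value) if value is not None else None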
There's no doubt that pyodbc is the better-supported option; the pymssql
module hasn't been updated in two years, and it relies on the Microsoft DB
lib, which has been deprecated for a long, long time and is no longer
supported, and may not even work with MSSQL 2008. Here's the deprecation
notice
Hi John,
Is the column you're having issues with really a VARCHAR, or is the message
misleading? How did you create the table, pre-existing or via SQLAlchemy?
Can you show the schema and the code you're trying to access it with?
Thanks,
Rick
On Fri, Sep 19, 2008 at 2:54 PM, John Hampton
Hmmm, looks to me as if SQLA is generating the query correctly, but that the
DBAPI passes along the Binary() value encoded in a Python binary string,
which MSSQL then interprets as Varchar, and then complains that it can't do
an implicit conversion. That's a surprise to me; I had thought that this
and then I do the following where someValue happens to be a unicode
string:
pref.pref_value = someValue
session.commit()
That's not going to work with pymssql as your DBAPI. Pymssql is based on
Microsoft's DBLib (circa 1991 or so), which does not support unicode and
never will: it's been
There have been other reports of this issue, all specific to pyodbc. It
doesn't appear to be an issue with SA, as other MSSQL DBAPI modules don't
exhibit the problem.
Please raise the issue on the pyodbc list, I'll work with you if needed to
help resolve it.
I read through the pyodbc forum quickly, and it looks as though this issue,
first reported back in April, cannot be reproduced by the maintainer. Pick
up the forum thread entitled "pyodbc pops bad exception since...":
http://sourceforge.net/forum/forum.php?thread_id=1942313&forum_id=550700
pymssql has a 30 char identifier limit, but pyodbc should work with
identifiers up to 128 chars. I can't tell from your message if you're
running pymssql or pyodbc, but that may be the issue.
Are you aware of any IoC frameworks which have been adapted to
inject/autowire things into SQLAlchemy transient business objects?
There was some talk a few months ago about integration of SA with the
Trellis component of PEAK, which I think was one of the primary motivators
for the user
I'm not sure where this is going with the 0.5 version, but I believe that
MappedClass.__init__ is still not called when objects are loaded from the DB.
If that's the case, and there isn't some alternate that SA provides like
MappedClass.__onload__, you can look into Mapper Extensions to provide
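(For the archive: 0.5 ended up providing an orm.reconstructor hook for
exactly this case. A minimal sketch:)

from sqlalchemy import orm

class Thing(object):
    def __init__(self):
        # runs only when you construct the instance yourself
        self._cache = {}

    @orm.reconstructor
    def init_on_load(self):
        # runs when the ORM loads the instance from the DB
        self._cache = {}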
Insert into myisam table worked because
it does not support transactions?
Yes, to my knowledge MySQL with MyISAM tables will accept, but ignore, any
'begin transaction' or 'commit transaction' statements: they are no-ops.
I have shard session set to transactional. Does this conflict with
innodb transaction?
No, but it means your inner sess.begin() and sess.commit() are now within
the scope of an outer transaction, so your inner sess.commit() has no
effect. Since you immediately issue a sess.clear() after
Session.add is a version 0.5 method, you're maybe running 0.4.6?
In the 0.4.x series, it's going to be:
Session.save() for objects that are to be newly added to the session
Session.update() for objects that are already in the session, or
Session.save_or_update() to have the library figure it out
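Roughly (a sketch; Widget and the other objects are hypothetical):

sess = Session()

obj = Widget()
sess.save(obj)               # brand-new object, to be added to the session
other.name = u'changed'
sess.update(other)           # object already in the session
sess.save_or_update(third)   # let the library figure it out
sess.flush()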
That's exactly what the problem was :-) Is there any reason I should avoid
using 0.5? I'm running python 2.4 at the moment, are they compatible?
0.5 is still in beta, and I don't have much experience with it myself, but
if I were just starting out, I would probably be using that, otherwise
Sounds like you want deferred loading for the column:
http://www.sqlalchemy.org/docs/04/mappers.html#advdatamapping_mapper_deferred
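e.g. (a sketch with the 0.4 API; Report and the table are hypothetical):

from sqlalchemy.orm import mapper, deferred

mapper(Report, reports_table, properties={
    # the column is loaded on first access instead of with the row
    'photo': deferred(reports_table.c.photo),
})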
I started off with using only the SQL-API part of SA myself, but the ORM is
way too good to ignore, and I've since converted piles of code over to using
the ORM, typically with a 50% reduction in lines of code while getting much
better code reuse.
The query as a mapper-aware select orientation of
Hey I'll miss you Paul; thanks for all of your help with MSSQL and for being
the pyodbc trailblazer.
Good luck with whatever your new direction in life brings -- turning off the
computer and taking a bit of travel time sounds pretty appealing!
On Tue, Jun 24, 2008 at 3:37 PM, Paul Johnston
sorry, bad mod.
Is there an API-stable way to create a polymorphic instance from only the
type discriminator key? I've got a case where I need to create a mapped
instance from some JSON data that contains the type discriminator, but I'd
rather get the (key -> mapped class) mapping from the sqla map and not
maintain my own.
built a bunch of those job engines in other languages, is
it something you could post as a recipe / example ?
sure, I'll put up something in the next few days - it will let me see if it
works with sqlite as well
There's offset support in the current sqla mssql driver. It's implemented
using the ansi ROW_NUMBER() OVER construct, which is supported only in mssql
2005 and higher. To turn it on, add the has_window_funcs keyword in the
dburi, or as an engine constructor keyword.
the broken traceback is some
er, that's, has_window_funcs=1
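i.e., either of (sketch):

from sqlalchemy import create_engine

engine = create_engine('mssql://user:pass@mydsn/db?has_window_funcs=1')
# or, as the engine constructor keyword:
engine = create_engine('mssql://user:pass@mydsn/db', has_window_funcs=True)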
Since I am using 2000, I don't think it's going to work for me, does it?!
nope
I'm looking at using the pyprocessing module to set up a (dispatcher --
workqueue -- slave job) kind of environment.
The dispatcher will be my app server, and I've decided to just use a table
in the app database for the persistent job queue, since everything is going
to have to connect to that
Yeah, I wouldn't expect one to survive a fork, in fact I was looking for a
way to ensure that there were **no** open connections before a fork. Looking
over the pool module, I guess when I said "fully closed pool", what I'm
really looking for is a fully *empty* pool.
I'm going to try a call to engine.dispose() in the child after the fork,
which should invalidate all the connections in the pool and force the pool
to start making new ones and see how that works out.
Update: seems to work.
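(for the archive, the pattern:)

import os

pid = os.fork()
if pid == 0:
    # child: throw away connections inherited from the parent;
    # the pool opens fresh ones on demand
    engine.dispose()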
The combination of polymorphic mapping and the processing module
Have you considered using a discriminator column, an additional integer that
identifies the shard and is part of a two-integer primary key?
You could then use concrete polymorphic inheritance to set up mappers for
both tables that would automatically set the discriminator column to the
The and_ function is expecting two arguments, not a series of *args. It
works when you remove the third argument because you then have the expected
two arguments.
Either use nested calls to and_, or multiple calls to filter(), and build
up the query in a generative fashion, like this:
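(completing the example; Thing and its columns are hypothetical:)

from sqlalchemy import and_

# nested and_ calls:
q = session.query(Thing).filter(
    and_(Thing.a == 1, and_(Thing.b == 2, Thing.c == 3)))

# or build it up generatively; successive filter() calls are ANDed:
q = session.query(Thing)
q = q.filter(Thing.a == 1)
q = q.filter(Thing.b == 2)
q = q.filter(Thing.c == 3)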
You're right, that was the original motivation. I tried just changing
@@identity for scope_identity(), which worked just fine on pymssql, but not
on the other adapters. I did eventually get it working, but it involved
pyodbc changes that I was unable to make. Fortunately someone on the list
I think we're using pymssql from a Linux box. Is there a way to tell
which Python module SQLAlchemy is using? We tried running it with
straight pymssql instead and it works in there:
The MSSQL module does an auto-detect of the supported DB-API modules and
uses the first one that imports
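One quick way to check which module got picked (a sketch; attribute layout
may vary by version):

from sqlalchemy import create_engine

engine = create_engine('mssql://scott:tiger@host/db')
print engine.dialect.dbapi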
We should really be using the ODBC-sanctioned syntax for procedure calls,
which is still unsupported by pyodbc, AFAIK. ODBC on *nix is over 10 years
old at this point, you'd think we'd have a better story to tell by now,
jeez.
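(For reference, the ODBC call escape in question looks like this; it's the
syntax pyodbc didn't yet support, per the above:)

# ODBC-standard procedure invocation with a positional parameter
cursor.execute("{call stored_proc(?)}", ('gra%',))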
Dunno if this is related, but pyodbc and adodbapi execute each
That error would be thrown by an insert, not a table create, and I believe
there are other users using pyodbc with schema-specified tables without
problems.
I won't have a chance to look at this under pyodbc until tomorrow. In the
meantime, if you could try with pymssql to see if you get the same
So now I'm of two minds about which module to use and if I should use
a schema or not for these purposes.
There's a few arcane limitations when using the pymssql module, pyodbc will
be better-supported going into the future.
As for using schema vs. other namespace tricks, that's up to you. I
Does the same statement work in an interactive query window, complete with
the embedded semicolon you're using?
Also, you should be able to use positional parameters instead of named
parameters in your call:
cur.execute("execute stored_proc 'gra%'")
Note that as of yet there is no
Hi Bruce,
I'm considering a switch from pymssql to pyodbc myself in the
not-too-distant future, and this thread has me a bit curious about what's
going on. This is a subject that may affect SQLA more in the future when ODBC
and JDBC drivers get more use.
I think there's two distinct questions
I think Jason hits the nail on the head with his response - my first
reaction on the initial post was that it was splitting hairs to enforce the
difference between an ordered list and an (allegedly) unordered list, but I
thought it was going to be a non-starter until I read Mike's reply. It seems
The DSN method should work with Integrated Security as well. Here's a short
writeup of the DSN configuration:
http://support.microsoft.com/kb/176378
You really should reconsider. DSN is a much easier setup method than trying
to specify a myriad of ODBC options in a connection string. There's a GUI
user interface for setting up DSNs, etc. It's the simpler and better
supported method.
If you really are dead-set against it, you'll need to use
-1.
It's confusing, and there's already an extant or_ function that's documented
and not confusing. The proposal is no more cooked than it was five months
ago.
On Mon, May 12, 2008 at 11:58 AM, [EMAIL PROTECTED] wrote:
On Monday 12 May 2008 17:01:23 Michael Bayer wrote:
what does
I was thinking of a user-level option for liveness checking on pool
checkout, with dialect-specific implementations (e.g. execute a 'SELECT
1', or something more efficient if the driver allows). Is that in line
with what you were thinking?
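Something like this, maybe (a sketch against the listener interfaces of that
era; details approximate, not a worked implementation):

from sqlalchemy import create_engine, exc
from sqlalchemy.interfaces import PoolListener

class PingListener(PoolListener):
    def checkout(self, dbapi_con, con_record, con_proxy):
        try:
            cur = dbapi_con.cursor()
            cur.execute("SELECT 1")
            cur.close()
        except Exception:
            # tells the pool to discard this connection and retry
            raise exc.DisconnectionError()

engine = create_engine('mssql://user:pass@mydsn/db',
                       listeners=[PingListener()])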
I had in mind something more of an optimistic /
There's a Dialect refactor underway for 0.5.0 that will likely change the
way that options are fed to db engines:
http://groups.google.com/group/sqlalchemy/browse_thread/thread/36fd2e935b165d70
Part of that work will probably have some influence on the dburi and
create_engine(**kwargs) option
Sounds nice, thanks for the heads-up.
There'll be opportunities for dialects to set up pool events as well.
One of the things I'm looking to see is better reconnect support for dead
database connections from network partitions, sql server restarts, etc.
Is that going to be fully Dialect
then queried the db *directly using sql*. It looks like the change
hasn't made it to the DB yet
Also possible is that you're using an MVCC DB such as Postgres or Oracle,
and you're looking at an old, pre-update version of the data, as your direct
SQL would be in a separate transaction
I suppose that depends on the behavior of the DB-API interface, in this case
I guess that's psycopg.
Anyway, I'm certainly not sure that an MVCC snapshot is what's causing your
problem, but it does seem like at least a possibility. The sure way is to
check the update status inside the same
I just started using session.merge, and I noticed that on session.flush(),
the before_update mapper extension for the objects that have been merged
into the session is not called.
These are new instances, not previously persisted.
Is there something I need to do to trigger this, or eesss a bug?
right, sorry, before_insert
On Wed, Apr 30, 2008 at 11:18 PM, Michael Bayer [EMAIL PROTECTED]
wrote:
On Apr 30, 2008, at 10:47 PM, Rick Morrison wrote:
I just started using session.merge, and I noticed that on
session.flush(), the before_update mapper extension for the objects
is not being called. As you can tell, I'm having trouble with the
keyboard tonight. first message was a typo,
What I'm not sure of at this point is if there's some cursor usage
specific to the MS-SQL dialect that might be external to the
ResultProxy... if Rick could comb through that for me that would be
helpful.
The cursor is used in pre_exec() if an INSERT statement tries to set a literal
PK on a
It is possible we could re-introduce a check for open cursors as a
pool events extension. It would raise an error if any connection is
returned with associated cursors still opened and could track down
issues like these.
That would be a great diagnostic tool for this: It's hard to track
perhaps we could simply reset the pyodbc cursor before issuing a new SQL
operation?
class MSSQLExecutionContext(default.DefaultExecutionContext):
    def pre_exec(self):
        if self.dialect.clear_previous_results:
            self.cursor.clear_previous_results_somehow_idunnohow()
Look, relax:
No one is suggesting that we *eliminate* DSN-less connections, only to come
up with a reasonable *default* for ODBC connection specifications. A
mechanism for non-DSN connections will certainly be provided.
Well,
Based on :
http://www.4guysfromrolla.com/webtech/070399-1.shtml
Reading this thread, I keep wondering why you are trying to put
all that connection setup configuration into the connection string...
Such setting are normally configured in the odbc.ini file and then
you just reference data source name in the connection string.
That's the standard way of
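e.g. a minimal odbc.ini entry for a FreeTDS-backed DSN might look like this
(key names vary by driver and distro; all values here are illustrative):

[mydsn]
Driver      = FreeTDS
Server      = dbhost.example.com
Port        = 1433
TDS_Version = 8.0
Database    = mydb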
hey thanks Jason, that's a nice shortcut.
Lukasz, can you please give that a try?
On Fri, Apr 18, 2008 at 12:07 PM, jason kirtland [EMAIL PROTECTED]
wrote:
Rick Morrison wrote:
Yeah, I was under the impression that config args passed in via
create_engine() ctor and via dburi were treated
at 4:24 PM, Lukasz Szybalski [EMAIL PROTECTED]
wrote:
On Thu, Apr 17, 2008 at 3:04 PM, Rick Morrison [EMAIL PROTECTED]
wrote:
It's a two-line change that pops the new keyword out of the config dict
just
like the others that were added.
Mike, can you take a quick look at mssql.py line 804
Here are the options as specified by FreeTDS. What you are talking
about is setting it in the conf file, which is used only for DSN
connections.
No, I meant as the *default* TDS version here. See here:
http://www.freetds.org/userguide/freetdsconf.htm
I'm talking about the [global] setting,
ok, ok, assuming that dsn-less connections actually do ignore the .conf file
and require all that stuff to be specified.
here's the question that I'm trying to ask:
instead of something like this:
create_engine('mssql://user:[EMAIL PROTECTED]/database',
odbc_driver='TDS',
It's in trunk r4518. Take 'er for a spin and let me know how it works out.
On Thu, Apr 17, 2008 at 2:54 PM, Lukasz Szybalski [EMAIL PROTECTED]
wrote:
On Thu, Apr 17, 2008 at 1:22 PM, Rick Morrison [EMAIL PROTECTED]
wrote:
ok, ok, assuming that dsn-less connections actually do ignore
Does it matter what case the parameters are in? 'DRIVER' in pyodbc, we used
'driver' in previous connection strings, etc.
No, the parameters are a straight pass-through; that traceback is complaining
about the 'odbc_options' keyword itself. Are you sure you're running the
current trunk?
]
wrote:
On Thu, Apr 17, 2008 at 2:35 PM, Rick Morrison [EMAIL PROTECTED]
wrote:
Does it matter what case the parameters are in? 'DRIVER' in pyodbc, we used
'driver' in previous connection strings, etc.
No, the parameters are a straight pass-through; that traceback is
complaining
File "sqlalchemy/databases/mssql.py", line 499, in do_execute
    cursor.execute("SET IDENTITY_INSERT %s OFF" %
        self.identifier_preparer.format_table(context.compiled.statement.table))
SystemError: 'finally' pops bad exception
This seems to be some weird error either with pyodbc or with
Would you please post the traceback that you're getting with this?
Note that you don't need the session.begin() and session.flush() with a
transactional session; the .begin() is implicit and the .flush() will be
issued by the .commit()
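i.e. (sketch; Widget is hypothetical):

sess = Session()   # transactional session: begin() is implicit
obj = Widget()
sess.save(obj)
sess.commit()      # flushes, then commits, in one step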
On Tue, Apr 8, 2008 at 1:51 AM, Madhu Alagu [EMAIL
I'll reply here rather than on the ticket as I'm unable to stay logged into
Trac from here (dual TCP/IP address problem).
I have just posted a patch for the MSSQL_odbc dialect. The ticket
number is #1005.
The patch is certainly simple enough, but any objection if we call the key
For your call to mapper.populate_instance, you have the arguments for 'row'
and 'instance' reversed.
On Tue, Apr 8, 2008 at 6:56 AM, Sanjay [EMAIL PROTECTED] wrote:
Hi,
In a project, I am using MapperExtension this way:
class TaskExtension(MapperExtension):
    def populate_instance(self,
r4479 has the new 'odbc_autotranslate' flag
Currently documented only in the CHANGES file, more docs will follow later.