I posted it on the psycopg list at
http://lists.initd.org/pipermail/psycopg/2008-April/006026.html, but it
mangled my link to this discussion (by eating a space after the URL and
appending the first word of the next sentence).
On Fri, Apr 18, 2008 at 9:34 PM, Michael Bayer <[EMAIL PROTECTED]> wrote:
>
> what I notice about both of these is that you're using correlations.
> So right off, using the Query, which has opinions about how to build
> select statements, to build up a statement like this with its current
> functionality (as well as what we're planning to do in 0.5) is awkward
> if n
On Apr 18, 2008, at 10:20 PM, Matthew Dennis wrote:
I get a similar result if I use psycopg2 directly:
#!/usr/bin/python
import psycopg2
from datetime import datetime
conn = psycopg2.connect('''dbname=testdb user=postgres host=localhost''')
cur = conn.cursor()
cur.execute("drop table if exists t0")
cur.execute("create table t0(c0 timestamp(0) w
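The script above is cut off mid-statement; for reference, here is a self-contained sketch of the same drop/create/compare flow using the stdlib sqlite3 module instead of psycopg2 (SQLite has no interval type, so the `- interval '1 hour'` arithmetic happens in Python, and the insert and final query are assumptions about how the original script continued):

```python
import sqlite3
from datetime import datetime, timedelta

conn = sqlite3.connect(":memory:")
cur = conn.cursor()

cur.execute("drop table if exists t0")
cur.execute("create table t0(c0 timestamp)")  # timestamp(0) in the Postgres original

# store timestamps as ISO strings so plain string comparison orders them correctly
now = datetime.now()
cur.execute("insert into t0 values (?)", (now.isoformat(sep=" "),))

# equivalent of: select c0 from t0 where c0 < current_timestamp - interval '1 hour'
cutoff = (now - timedelta(hours=1)).isoformat(sep=" ")
cur.execute("select c0 from t0 where c0 < ?", (cutoff,))
print(cur.fetchall())  # []: the just-inserted row is newer than the cutoff
```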
On Apr 18, 2008, at 4:42 PM, Matthew Dennis wrote:
I'm using SA 0.4.3 and PostgreSQL 8.3.1
I'm new to SA, so perhaps I'm doing something wrong or just not
understanding something, but I think SA is trying to treat my timestamps as
intervals in some cases. If I run the equivalent (select c0 from t0 where
c0 < current_timestamp - interval '1 hour')
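For what it's worth, the arithmetic that query asks for is ordinary timestamp math; a minimal pure-Python sketch of the comparison the WHERE clause performs (the sample values are invented):

```python
from datetime import datetime, timedelta

# current_timestamp - interval '1 hour', in Python terms
cutoff = datetime.now() - timedelta(hours=1)

# the WHERE clause keeps rows whose c0 is older than the cutoff
rows = [datetime.now() - timedelta(hours=2), datetime.now()]
matches = [c0 for c0 in rows if c0 < cutoff]
print(len(matches))  # 1: only the two-hour-old value qualifies
```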
something like:
- clean _all_ your refs to SA stuff like tables, mappers etc.
- sqlalchemy.orm.clear_mappers()
- metadata.drop_all()
- dbengine.dispose()
plus eventually
del x._instance_key
del x._state
for all instances that are still around and that you want reused in
another db/mapping/...
Nope. Seems like the SQLA connection is still not using the
appropriate magical incantation to get all that ODBC stuff to behave
reasonably.
might I suggest just circumventing the URL entirely and just using
creator=lambda: pyodbc.connect("DRIVER={TDS};SERVER=;UID=;PWD=
On Apr 18, 2008, at 3:18 PM, kris wrote:
> I think I want something like the following:
>
> select item.id
> from item,
> (select dataset.something_id from base, dataset
> where base.id = dataset.id and base.owner = 'me'
> and tag.c.name = 'good' and tag.c.parent_id ==
The problem stems from a tree structure and creating self joins
on a very large base table. I am trying to create datasets
of items and filter on the contents of datasets in a single query
that is built up progressively.
base = Table('base',
    Column('id', Integer, primary_key=True)
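That kind of derived-table filter can be sketched end to end with the stdlib sqlite3 module (schema trimmed to the columns the fragment mentions; the sample rows, the 'me' owner, and the item/dataset join column are invented for illustration):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()

cur.executescript("""
create table base(id integer primary key, owner text);
create table dataset(id integer primary key, something_id integer);
create table item(id integer primary key, something_id integer);
insert into base values (1, 'me'), (2, 'other');
insert into dataset values (1, 100), (2, 200);
insert into item values (10, 100), (20, 200);
""")

# item rows joined against a derived table of datasets owned by 'me'
QUERY = """
select item.id
from item,
     (select dataset.something_id from base, dataset
      where base.id = dataset.id and base.owner = 'me') ds
where item.something_id = ds.something_id
"""
cur.execute(QUERY)
print(cur.fetchall())  # [(10,)]: only the item tied to a dataset owned by 'me'
```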
On Fri, Apr 18, 2008 at 1:36 PM, Rick Morrison <[EMAIL PROTECTED]> wrote:
Err, on a second look, that's no good. The connect_args are passed directly
through to connect().
This thing needs to construct an ODBC connection string from some fragments
provided by the dburi, and from some engine options. So we'll either need
some parsing tricks for the URL to allow strings w
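The assembly step itself is small; a sketch of joining fragments into an ODBC-style connection string (the helper name and the sample values are invented, and real code would pull these from the dburi and engine options):

```python
def make_odbc_connstring(fragments):
    """Join key/value fragments into a DRIVER=...;SERVER=...;... string."""
    return ";".join(f"{key}={value}" for key, value in fragments.items())

# sample fragments, as they might arrive from the dburi and engine options
parts = {"DRIVER": "{TDS}", "SERVER": "dbhost", "UID": "user", "PWD": "secret"}
print(make_odbc_connstring(parts))  # DRIVER={TDS};SERVER=dbhost;UID=user;PWD=secret
```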
> How can I realize this concept of a completely new and isolated DB
> environment for each single test case that's being run?
Not sure if this is a useful answer or not, but I just made starting sql
files for each of my test suites. It's an extra step but then you have a
convenient file to put
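That pattern is easy to sketch with the stdlib sqlite3 module: each test opens a fresh in-memory database and replays its suite's starting SQL (the schema and the idea of inlining the file contents are invented for illustration):

```python
import sqlite3

# contents that would normally live in a per-suite starting SQL file (invented)
STARTING_SQL = """
create table users(id integer primary key, name text);
insert into users values (1, 'alice');
"""

def fresh_db():
    """Each call returns a brand-new, isolated database seeded from scratch."""
    conn = sqlite3.connect(":memory:")
    conn.executescript(STARTING_SQL)
    return conn

db1 = fresh_db()
db1.execute("insert into users values (2, 'bob')")
db2 = fresh_db()  # unaffected by db1's changes
print(db2.execute("select count(*) from users").fetchone()[0])  # 1
```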
On Fri, Apr 18, 2008 at 11:07 AM, jason kirtland <[EMAIL PROTECTED]> wrote:
hey thanks Jason, that's a nice shortcut.
Lukasz, can you please give that a try?
Yeah, I was under the impression that config args passed in via
create_engine() ctor and via dburi were treated the same, but looking over
engine/strategies.py, it looks as if they have two separate injection
points. I'll see if I can get it to allow either, stay tuned.
On Thu, Apr 17, 2008 at 4
>
> Reading this thread, I keep wondering why you are trying to put
> all that connection setup configuration into the connection string...
>
> Such settings are normally configured in the odbc.ini file, and then
> you just reference the data source name in the connection string.
>
> That's the standard
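For reference, a DSN entry of the kind being described might look like this in odbc.ini (the DSN name, driver path, and host are all invented for illustration):

```
[testdb]
Driver   = /usr/lib/libtdsodbc.so
Server   = dbhost
Port     = 1433
Database = testdb
```

With a DSN in place, the connection string shrinks to something like "DSN=testdb;UID=user;PWD=secret".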
Thank you for your support. You have done awesome work overall.
You received this message because you are subscribed to the Google Groups
"sqlalchemy" group.
On 2008-04-17 22:24, Lukasz Szybalski wrote:
> On Thu, Apr 17, 2008 at 3:04 PM, Rick Morrison <[EMAIL PROTECTED]> wrote:
>> It's a two-line change that pops the new keyword out of the config dict just
>> like the others that were added.
>>
>> Mike, can you take a quick look at mssql.py line 804 an
On Apr 17, 2008, at 10:42 PM, kris wrote:
>
> I am building a tree structure of D1 and D2 nodes..
>
> I am progressively generating a query as before execution using
> subqueries.
>
> s = session.query(D1).filter (...)._values(D1.c.id).statement
> ...
>
> q = session.query (D2).select_from (s).f
On 17 Apr, 18:10, Lele Gaifax <[EMAIL PROTECTED]> wrote:
> On Thu, 17 Apr 2008 07:40:02 -0700 (PDT)
> Either get the trunk of autocode, or apply the simple patch available
> at http://code.google.com/p/sqlautocode/issues/detail?id=2
I tried, but http://sqlautocode.googlecode.com/svn/trunk/ grabs
Sorry, this was a bug. The "byroot_tree" example had been changed to
work around some recent design decisions (it also works more
intelligently but is dependent on SA's eager load methodology), which
led me to not realize that "append_result()" was no longer working in
the way it was des
After debugging, I've noticed that the issue is related to eager
loaded relations. If you try the example script with the _descendants
relation having lazy=None or True, then the extension method is not
called anymore.
Is there a way to fire the extension method even without eager
loading?
> I can't