A new suite of unit tests has shed light on what appears to be an anti-pattern
in an application. I'm not sure how to address this one.
The general flow of a particular web request is this:
Phase 1: load some database objects for general verification
Phase 2: loop through a handful of routines to
Your database logs may indicate why it is closing the connection, or what
underlying error happened.
There are dozens, if not hundreds, of potential reasons why an error like
this may happen. This could be anything, from having too many connections
to an issue on your database.
There are a few strategies to handling disconnects that are outlined in the
docs:
https://docs.sqlalchemy.org/en/latest/core/pooling.html#dealing-with-disconnects
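Of the strategies on that page, pessimistic "pre-ping" testing is the simplest to enable. A minimal sketch, using an in-memory SQLite database purely as a stand-in for the real one:

```python
from sqlalchemy import create_engine, text

# pool_pre_ping issues a lightweight "ping" on each connection checkout
# and transparently replaces connections the database has dropped.
# SQLite here is just a stand-in for the real database URL.
engine = create_engine("sqlite://", pool_pre_ping=True)

with engine.connect() as conn:
    print(conn.execute(text("SELECT 1")).scalar())
```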
--
SQLAlchemy -
The Python SQL Toolkit and Object Relational Mapper
http://www.sqlalchemy.org/
To post example code, please
You should ask the snowflake / snowflake-sqlalchemy developers. Snowflake is a
commercial product and they maintain their own SQLAlchemy dialect. The
chance of someone here being able to answer this question is very low,
and you're already paying Snowflake for support.
On Thursday, October 11, 2018 at 11:26:57 PM UTC-4, Mike Bayer wrote:
>
>
> the overhead of using @event.listens on that setter will be almost
> undetectable. Of course it also depends on what you're doing once
> you're in the event handler.
>
>
Great! Thanks. I'm just setting a bunch of
On Thursday, October 11, 2018 at 7:29:32 PM UTC-4, Mike Bayer wrote:
>
>
> overhead is fairly minimal, but also, a "set" is something you're
> doing at very specific times, so, how much is this "set" being called
> typically?
>
The app is a read-heavy CMS. This "set" happens on the Article's
I try to stay away from Events in production.
I now need to catch modifications of a particular column in order to expire
some data cached onto the object.
Is there a measurable overhead for catching a single column? e.g.
@event.listens_for(ObjectClass.column, 'set')
The alternative is to
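For context, a minimal sketch of the 'set' listener being discussed; `Article` and its `title` column are hypothetical stand-ins for the real model and column:

```python
from sqlalchemy import Column, Integer, String, event
from sqlalchemy.orm import declarative_base

Base = declarative_base()

class Article(Base):
    # hypothetical model standing in for ObjectClass above
    __tablename__ = "article"
    id = Column(Integer, primary_key=True)
    title = Column(String)

seen = []

@event.listens_for(Article.title, "set")
def _expire_cached_data(target, value, oldvalue, initiator):
    # this is where the cached data on the object would be expired
    seen.append(value)

a = Article()
a.title = "hello"  # fires the 'set' listener
```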
On Wednesday, October 3, 2018 at 9:40:37 AM UTC-4, Mike Bayer wrote:
>
>
> those are not going to change within 1.2 or 1.3 but it's not ideal to
> be relying on them. For query._entities, you can use the public
> accessor query.column_descriptions. for _with_options I'm not sure
> what it
And a quick followup to Michael:
I didn't want to pollute the comments in
https://bitbucket.org/zzzeek/sqlalchemy/issues/3225/query-heuristic-inspection-system
as there may not have been any substantial changes and I may just be
misreading my old notes...
How long do you think it is safe to
I have a common design in my database in which the heavy write/update
columns exist in their own 'metacontent' table. An `association_proxy` is
used to link them:
class Item(Base):
    __tablename__ = 'item'
    id = Column(Integer, primary_key=True)
    item_description =
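A runnable sketch of this design, with assumed names (`metacontent`, `description`) standing in for the parts elided above:

```python
from sqlalchemy import Column, ForeignKey, Integer, String, create_engine
from sqlalchemy.ext.associationproxy import association_proxy
from sqlalchemy.orm import Session, declarative_base, relationship

Base = declarative_base()

class Item(Base):
    __tablename__ = 'item'
    id = Column(Integer, primary_key=True)
    metacontent = relationship("ItemMetacontent", uselist=False)
    # proxy the write-heavy column through to the metacontent row
    item_description = association_proxy(
        "metacontent", "description",
        creator=lambda v: ItemMetacontent(description=v),
    )

class ItemMetacontent(Base):
    # the write/update-heavy columns live in their own table
    __tablename__ = 'item_metacontent'
    id = Column(Integer, ForeignKey('item.id'), primary_key=True)
    description = Column(String)

engine = create_engine("sqlite://")
Base.metadata.create_all(engine)

with Session(engine) as s:
    s.add(Item(item_description="a widget"))
    s.commit()
    desc = s.query(Item).one().item_description
```

Reading and writing `item_description` on `Item` transparently touches the `ItemMetacontent` row.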
like, OMFG this is brilliant.
Mike,
This is absolutely brilliant!
thank you!
I have a handful of relationships where there could be 100s of matching
rows, but in most views I only want to see a subset of them. perhaps the
most recent 5 items of 500 candidates..
To handle this so far, I've been constructing a join with a correlated
subquery. It works, but this doesn't
On Wednesday, September 26, 2018 at 10:08:43 PM UTC-4, jens.t...@gmail.com
wrote:
>
>
> Suppose I get the “new”, “dirty”, and “deleted” sets as per discussion
> below, and I’m especially interested in the “dirty” set: is there a way to
> find out which properties of an object were modified,
Here are 2 methods you can add to your base class:

> def columns_as_dict(self):
>     """
>     Beware: this function will trigger a load of attributes if they have
>     not been loaded yet.
>     """
>     return dict((col.name, getattr(self, col.name))
>                 for col in self.__table__.columns)
There's also the strategy of doing something within a nested transaction,
which will allow you to rollback on an integrity error.
such as...
try:
    with s.begin_nested():
        # do stuff
        s.flush()  # this will trigger an integrity error, unless the
                   # fkey checks are deferred
except IntegrityError:
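A runnable sketch of that pattern against SQLite; the `Tag` model and the two event listeners (the pysqlite SAVEPOINT workaround from the SQLAlchemy docs) are assumptions added for the example, not part of the original post:

```python
from sqlalchemy import Column, Integer, String, create_engine, event
from sqlalchemy.exc import IntegrityError
from sqlalchemy.orm import Session, declarative_base

Base = declarative_base()

class Tag(Base):
    __tablename__ = "tag"
    id = Column(Integer, primary_key=True)
    name = Column(String, unique=True)

engine = create_engine("sqlite://")

# pysqlite needs these workarounds for SAVEPOINT support (per the
# SQLAlchemy docs on the pysqlite dialect); other databases don't.
@event.listens_for(engine, "connect")
def _connect(dbapi_conn, record):
    dbapi_conn.isolation_level = None

@event.listens_for(engine, "begin")
def _begin(conn):
    conn.exec_driver_sql("BEGIN")

Base.metadata.create_all(engine)

s = Session(engine)
s.add(Tag(name="red"))
s.commit()

try:
    with s.begin_nested():  # emits SAVEPOINT
        s.add(Tag(name="red"))
        s.flush()  # raises IntegrityError: duplicate unique name
except IntegrityError:
    pass  # only the savepoint rolled back; the session is still usable

count = s.query(Tag).count()
```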
On Friday, August 10, 2018 at 2:43:51 PM UTC-4, Mike Bayer wrote:
>
> You need to copy the keyedtuples into some other data structure, like a
> dictionary, modify it, then send that data back into updates. Your best
> bet is to use the bulk update stuff once you have those dictionaries, see
On Sunday, August 5, 2018 at 5:15:39 PM UTC-4, Mike Bayer wrote:
> Not on my end ! The openstack code I referred towards is regex based but
> works very well.
I just limit what can trigger the exception and call a flush. It's not the
most performant code if you are changing many fields,
On Monday, July 16, 2018 at 4:53:18 AM UTC-4, Marc wrote:
>
> Thank you both Jonathan and Kirk for your helpful comments. I am leaning
> more and more towards using a versioning/history solution. If I can figure
> out a way to incorporate an approved flag into the history tables produced
> by
On Saturday, July 14, 2018 at 5:45:29 AM UTC-4, Marc wrote:
>
> Surely this is quite a common requirement? Ideally I would like it to be
> as simple as transparent as possible to the front-end. Is there any
> add-ons/plugins for SQLAlchemy that would enable this kind of
> functionality? I have
this looks great, and omfg the docs!
http://docs.sqlalchemy.org/en/latest/core/pooling.html
specifically:
http://docs.sqlalchemy.org/en/latest/core/pooling.html#disconnect-handling-pessimistic
and
http://docs.sqlalchemy.org/en/latest/core/pooling.html#custom-legacy-pessimistic-ping
There is also:
On Tuesday, July 3, 2018 at 9:37:04 AM UTC-4, Mike Bayer wrote:
>
> This architecture has been liberalized but this assumption still remains
> within the Core and it's possible the ORM may or may not have some
> remaining reliance on this assumption as well.
>
I assumed the RowProxy also
Adding to Mike's response...
I think you're using two sessions...
you select this:
user = Users.query.filter_by(username='foo').first()
and save this
db.session.add(user)
When you select, I think that's using some Flask-SQLAlchemy syntactic sugar
to select the session. A 'raw'
The difference is possibly because this is calculated in Python, each time
it is executed:

datetime.datetime.now()

while this is calculated in Postgres and refers to the beginning of the
transaction; it does not change across the transaction:

NOW()
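The same distinction shows up in column defaults. A sketch, with SQLite standing in for Postgres and a hypothetical `Event` model: `default=` runs in Python at INSERT time, while `server_default=func.now()` is rendered into the DDL and evaluated by the database:

```python
import datetime

from sqlalchemy import Column, DateTime, Integer, create_engine, func
from sqlalchemy.orm import Session, declarative_base

Base = declarative_base()

class Event(Base):
    __tablename__ = "event"
    id = Column(Integer, primary_key=True)
    # evaluated in Python, per statement execution
    created_py = Column(DateTime, default=datetime.datetime.utcnow)
    # rendered as a SQL function; in Postgres, now() is pinned to the
    # start of the transaction, so it won't change across the transaction
    created_db = Column(DateTime, server_default=func.now())

engine = create_engine("sqlite://")
Base.metadata.create_all(engine)

with Session(engine) as s:
    s.add(Event())
    s.commit()
    row = s.query(Event).first()
    py_val, db_val = row.created_py, row.created_db
```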
On Monday, June 25, 2018 at 11:31:07 AM UTC-4, HP3 wrote:
>
> I'm confused about what you said about the underlying connection: I am
> creating 2 different engines. Why would both share the same connection?
>
>
That wasn't clear from the above, however..
looking at the code you've shared, it
Can you share/show how/where the engines and connections are created?
This is odd.
FWIW, with the forking bug - the issue isn't in the transaction/session but
in the underlying database connections. The SQLAlchemy connection pool
isn't shared safely across a fork, so all the commits/rollbacks/etc. in different
On Friday, June 22, 2018 at 1:32:15 PM UTC-4, HP3 wrote:... but I'll 2x
check!
> (I recall that task-inheritance in celery makes certain things happen
> before and others after the fork - I am using prefork)
>
I don't use pyramid_celery, but my own pyramid and celery integration...
FWIW, I use these methods in my base class to copy the object into a dict.

def columns_as_dict(self):
    """
    Beware: this function will trigger a load of attributes if they
    have not been loaded yet.
    """
    return dict((col.name, getattr(self, col.name))
                for col in self.__table__.columns)
Is there a chance there is a query/connection being made between the
initialization and worker process? If so, that could screw up the
connection pool. To address that, you can try adding an `engine.dispose()`
before celery forks.
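A small sketch of `engine.dispose()` from the caller's point of view: the pooled connections are discarded, and the next checkout simply builds a fresh pool (SQLite as a stand-in):

```python
from sqlalchemy import create_engine, text

engine = create_engine("sqlite://")

# use the engine once so the pool holds a connection
with engine.connect() as conn:
    conn.execute(text("SELECT 1"))

# dispose() discards the pooled connections; calling it just before a
# fork means the child process won't share sockets with the parent.
engine.dispose()

# the engine remains usable afterward -- it creates a new pool on demand
with engine.connect() as conn:
    value = conn.execute(text("SELECT 1")).scalar()
```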
I'll take a look at the code later. I was home sick this
On Thursday, June 21, 2018 at 10:02:27 PM UTC-4, HP3 wrote:
>
> What's the correct way to create a session that is not automatically bound
> to any transaction or that can be explicitly bound to an isolated
> transaction that can be committed whenever?
>
That's what `Session()` does by
First: `join` allows you to specify a join condition. You can often use
that to bypass tables or automatic selects:

query(A).join(B, A.id == B.id_a)

But: what you really need to do is check the generated SQL to see what is
going on, and tweak that to eliminate the joins/fields you don't
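A sketch of the explicit-onclause form with hypothetical mapped classes `A` and `B`; printing the query is the quickest way to check the generated SQL:

```python
from sqlalchemy import Column, ForeignKey, Integer
from sqlalchemy.orm import Session, declarative_base

Base = declarative_base()

class A(Base):
    __tablename__ = "a"
    id = Column(Integer, primary_key=True)

class B(Base):
    __tablename__ = "b"
    id = Column(Integer, primary_key=True)
    id_a = Column(Integer, ForeignKey("a.id"))

s = Session()
# the explicit ON clause bypasses relationship/foreign-key discovery
q = s.query(A).join(B, A.id == B.id_a)
sql = str(q)
print(sql)
```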
Personally, I'd do 3 tables:

-- matrix of all the potential tests and their results
table testcondition
    id int primary key
    var_1 int
    var_2 int
    expected_result STRING

-- one record per testrun
table testrun
    id int primary key
    test_time timestamp

-- one record per test per
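The schema above, sketched as declarative models; the third table is cut off in the original, so its shape here (`testresult`, one row per condition per run) is a guess:

```python
from sqlalchemy import (Column, DateTime, ForeignKey, Integer, String,
                        create_engine)
from sqlalchemy.orm import declarative_base

Base = declarative_base()

class TestCondition(Base):
    # matrix of all the potential tests and their expected results
    __tablename__ = "testcondition"
    id = Column(Integer, primary_key=True)
    var_1 = Column(Integer)
    var_2 = Column(Integer)
    expected_result = Column(String)

class TestRun(Base):
    # one record per testrun
    __tablename__ = "testrun"
    id = Column(Integer, primary_key=True)
    test_time = Column(DateTime)

class TestResult(Base):
    # hypothetical third table: one record per condition per run
    __tablename__ = "testresult"
    id = Column(Integer, primary_key=True)
    testrun_id = Column(Integer, ForeignKey("testrun.id"))
    testcondition_id = Column(Integer, ForeignKey("testcondition.id"))
    actual_result = Column(String)

engine = create_engine("sqlite://")
Base.metadata.create_all(engine)
```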
In theory what you're claiming should be fine... but there's always a
chance that what you *think* you're doing isn't really what you're doing.
I suggest creating a tiny one-file flask app that mimics your behavior and
reproduces the result -- then share it here. Your example on
On Sunday, June 3, 2018 at 10:48:12 PM UTC-4, Mike Bayer wrote:
it seems like your source of connectivity has an idle connection
> timeout going on. What version of Postgresql is this, or is this
> redshift? are you using a proxy like pgbouncer or haproxy?
>
Adding a few other things to
On Friday, May 18, 2018 at 5:17:00 PM UTC-4, Mike Bayer wrote:
>
>
> Well, SQLite locks the whole file during writes so that could be
> problematic. If you have a write transaction open, it could cause
> problems. I haven't stayed up to date on the latest SQLite changes,
> current
We use a custom SqlAlchemy+Pyramid backed client for requesting and
managing LetsEncrypt SSL certificates. It centrally stores/manages the
certificates, which can then be deployed to various servers on a network,
with support built-in for PostgreSQL and SQLite data storage.
I'm working on
On Sunday, May 13, 2018 at 10:27:16 PM UTC-4, Mike Bayer wrote:
>
>
> that's not true, you can use the same database connection on a second
> Session. Just say s2 = Session(bind=s1.connection()).
>
Thanks, Mike. I didn't know this was doable - it seems like the right
approach.
The
A better way of conveying my problem might be: `get()` returns any object
which is in the identity map; I am having issues after the identity map is
now populated with some objects which are 'partials' that were created via
'load_only' parameters. The benefits of using `get` to minimize a
On Saturday, May 12, 2018 at 9:49:34 AM UTC-4, Mike Bayer wrote:
>
>
> The former would be a bug. The latter, I'm not sure what you would expect
> it to do. Do you want the unloaded attributes to raise attribute error?
> Or did you want the get() to fully refresh what was not loaded (that
I tracked down a pattern that has been causing some bottleneck issues in an
app. I'm hoping there may be a way to handle this, but for once I don't
expect there to be!
The problem arises when I fetch a specific "light" query with joined
objects (top-level objects have the same issue, however).
On Monday, May 7, 2018 at 10:27:03 PM UTC-4, Mike Bayer wrote:
>
> can you perhaps place a "pdb.set_trace()" inside of session._flush()?
> using the debugger you can see the source of every flush() call.
> Generally, it occurs each time a query is about to emit SQL.
>
>
Building off what Mike
We do both... a @property on the ORM just invokes the helper method.
Most sections of a traffic-heavy app are backed by a cache of dicts built
off SQLAlchemy objects. When a cached object is pulled out of storage, the
same helper methods are used by its model/api.
On Wednesday, May 2, 2018 at 5:53:56 PM UTC-4, Mike Bayer wrote:
>
> if you only care about things that are loaded, like before, look in
> inspect(instance).dict , that's what's loaded
>
Thanks. I'll migrate my proof-of-concept to use `inspect`.
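For reference, a tiny sketch of the `inspect()` approach; `Widget` is a hypothetical model. `inspect(obj).dict` exposes the attributes currently populated on the instance (alongside SQLAlchemy's own state entry) without triggering any loads:

```python
from sqlalchemy import Column, Integer, String, inspect
from sqlalchemy.orm import declarative_base

Base = declarative_base()

class Widget(Base):
    __tablename__ = "widget"
    id = Column(Integer, primary_key=True)
    name = Column(String)

w = Widget(id=1, name="sprocket")
# only attributes that are actually loaded/populated appear here;
# unloaded or expired attributes are simply absent
loaded = dict(inspect(w).dict)
print(loaded["name"])
```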
I have a mixin that helps convert object to JSON using a `columns_as_dict`
method.
it looks like this:
from sqlalchemy.orm import class_mapper as sa_class_mapper

class Mixin(object):
    def columns_as_dict(self):
        _cls = self.__class__
        return dict((col.name, getattr(self, col.name))
                    for col in sa_class_mapper(_cls).columns)
Thanks for all this help, Mike!
On Tuesday, May 1, 2018 at 8:56:35 PM UTC-4, Mike Bayer wrote:
>
> at what "moment in time"? I thought you might mean when they are
> expired from the Session, easy enough use the expire event
>
>
I have a debugging toolbar on a web panel that occasionally receives
SqlAlchemy objects, which are unbound/detached by their nature.
Is there a way I can preserve their contents as a dict (at that moment in
time) which can be iterated/inspected?
Looking at
On Thursday, April 26, 2018 at 1:59:02 PM UTC-4, Jeremy Flowers wrote:
>
>
>> But I now have a stmt variable at the end..
> How do I iterate over that in the form:
> for row in results:
>     print(row)
>
results = query.all()
for row in results:
    print(row)
Since this is a commercial database, you should ask InterSystems support to
build or sponsor development of this.
Almost everything in `func` is just standard SQL that is executed in the
database. `unnest` is a SQL function that is used to turn arrays into
tabular data.
A search for CTE should surface `suffix_with`, which can be used to augment
a query.
In the example you pointed to, `c` are the columns of an object created by
`alias()`
`secondary` was created as an `alias()`, and the query has 2 columns that
were queried and named via `label()` "id" and "ancestor_id"
SQLAlchemy has a lot of relevant docs if you search via the keyword "CTE".
On Wednesday, April 18, 2018 at 11:20:39 AM UTC-4, su-sa wrote:
>
>
> But if I am not mistaken, the from clause of the query is generated by
> SQLAlchemy and the database driver or the dialect has no influence on this
> from clause generation of SQLAlchemy.
>
As an aside from everything
On Tuesday, April 17, 2018 at 5:58:32 PM UTC-4, Jonathan Vanasco wrote:
>
>
> So .limit() is .top()
>
Clarified: `limit()` is essentially the same as if there were a `top()`,
because it will emit `TOP` for the query.
SQLAlchemy doesn't natively support `TOP` because it's not part of standard
SQL. IIRC, only MSSQL uses it -- and uses it instead of `LIMIT`.
The SQLAlchemy dialect for MSSQL will adapt `.limit()` to emit `TOP` instead
of `LIMIT`.
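This is easy to see by compiling a limited select against the MSSQL dialect; `Doc` is a hypothetical model:

```python
from sqlalchemy import Column, Integer, select
from sqlalchemy.dialects import mssql
from sqlalchemy.orm import declarative_base

Base = declarative_base()

class Doc(Base):
    __tablename__ = "doc"
    id = Column(Integer, primary_key=True)

stmt = select(Doc.id).limit(5)
# the mssql dialect renders .limit() as TOP rather than LIMIT
sql = str(stmt.compile(dialect=mssql.dialect(),
                       compile_kwargs={"literal_binds": True}))
print(sql)
```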
I have a complicated recursive CTE that exists as text()
_complex_sql_ = sqlalchemy.text("""WITH RECURSIVE _foos AS (
    SELECT id
    FROM foo
    WHERE (id = :id_start AND ...)
    UNION
    SELECT f.id
    FROM foo f
    INNER JOIN _foos _f ON _f.id = f.id
)
SELECT DISTINCT id FROM
On Monday, April 9, 2018 at 7:35:14 PM UTC-4, Mike Bayer wrote:
>
> fetchall() gives you a list, how about that? the RowProxy acts just
> like a tuple.
>
Thanks! That could work. I'll try some tests.
The issue I ran into earlier is that some encoders inspect the `type`, and
that throws
I forgot to add, the current fix is this:

# consume and coerce `ResultProxy[RowProxy, ...]` into a list for
# compatibility with other functions
results = [list(i) for i in results]

this feels like I'm doing something wrong.
I have a handful of functions within an application's "library api" that
share a similar intent in functionality and return values, but query for
discrete types of objects. I'm working on standardizing them right now and
could use some feedback.
case 1 is simple:
results =
On Wednesday, April 4, 2018 at 4:49:16 PM UTC-4, Mike Bayer wrote:
>
> I've not seen any context for what you're doing so is using
> "query.select_from()" part of how you want to handle this?
>
My personal fix was just to flip the order of entities submitted to
`.query()` - but I could have
I built a simplified test-case, then tried to simplify it more into a
second form -- and got a different error.
The second error message, and sqlalchemy source code lines, confirmed my
initial belief this is an intentional behavior, which is not reflected in
the `query` or `join` API docs.
On Wednesday, April 4, 2018 at 4:31:36 AM UTC-4, Simon King wrote:
> Where is the traceback raised? On the evaluation of the
> "subquery.c.event_timestamp" expression? That wouldn't really make any
> sense. Or is it when the query itself is evaluated? We need to see at
> least the
On Tuesday, April 3, 2018 at 10:34:03 PM UTC-4, Mike Bayer wrote:
>
> "does not work" ?
wow, I am an awful person, sorry. That is the least helpful description of
what happens I can think of.
second try:
If the subquery is the 1st element, the select compiles correctly and gives
me the
I couldn't find docs on this behavior, so wanted to ask before filing a
ticket with a test-case. If this isn't the expected behavior, I'll
generate a SSCCE. I think it might be expected though.
I have a form of a query that uses an ORM object joined against a subquery.
It works when the
On Tuesday, April 3, 2018 at 11:41:43 AM UTC-4, Mike Bayer wrote:
>
> right, the dispose() will emit "close connection" commands on the
> socket which will leak into the parent process.
>
But only if `close()` has been called -- right?
> they're not ! :) that's one of the "features" of
Looking at the current `engine.dispose()` (which I should have done
sooner), that's pretty much what it's doing -- right?
https://bitbucket.org/zzzeek/sqlalchemy/src/55371f4cffa730f65f1b687e9f6287d2ac189227/lib/sqlalchemy/engine/base.py?at=master=file-view-default#base.py-1899:1923
On Monday, April 2, 2018 at 1:40:06 PM UTC-4, Mike Bayer wrote:
>
> now we can probably do better than that but the dispose() case is not
> supposed to be that common or critical, it's usually for when you just
> did a fork(), and you don't want to reference any of the filehandles
> in the
Clarifying: I had run tests, and the checked-out connections are still open
after dispose(), and are then garbage-collected out.
On Monday, April 2, 2018 at 1:21:26 PM UTC-4, Mike Bayer wrote:
>
> Connections that are checked out stay open, they
> still refer to that dereferenced pool, and I think at the moment they
> don't get closed until they are garbage collected along with that pool
> object.
>
Thanks. That's
A question came up in another list, and I just wanted to confirm a behavior.
It looks like `engine.dispose()` doesn't explicitly `close()` any open
connections, but they inherently get closed by the method's behavior. Is
that right?
In Postgres, these return the start of the current transaction:

NOW()
CURRENT_DATE
CURRENT_TIME
CURRENT_TIMESTAMP
LOCALTIME
LOCALTIMESTAMP

and these return the actual time:

clock_timestamp()
timeofday()
On Monday, March 26, 2018 at 9:41:00 AM UTC-4, Rich wrote:
>
>
>I'm working now on the single-user, SQLite3 version and will return to
> this issue when that's functioning.
Pay close attention to which database functions you use, and how you use
datetime fields. Those are two things where
On Thursday, March 22, 2018 at 4:27:18 PM UTC-4, Mike Bayer wrote:
>
>
> probably just make a fake column: column('counted') see what that
> does
>
>
Bingo. Thanks!
counted_column = sqlalchemy.Column('counted')
stmt = sqlalchemy.text('''SELECT...''')
stmt = stmt.columns(Foo.id,
I have a large and complex analytics query that I'd like to keep in raw
SQL. It generally looks like this:

SELECT foo.id,
       foo.name,
       tally.counted
FROM (...
      ...) tally
JOIN foo ON tally.foo_id = foo.id
WHERE tally.counted > 1
ORDER BY
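One way to keep such a query in raw SQL while still getting typed, named results is `text(...).columns(...)`, sketched here against SQLite with a trivial stand-in query (the real FROM-subquery is elided):

```python
from sqlalchemy import (Column, Integer, String, column, create_engine,
                        text)
from sqlalchemy.orm import Session, declarative_base

Base = declarative_base()

class Foo(Base):
    __tablename__ = "foo"
    id = Column(Integer, primary_key=True)
    name = Column(String)

engine = create_engine("sqlite://")
Base.metadata.create_all(engine)

with Session(engine) as s:
    s.add(Foo(id=1, name="alpha"))
    s.commit()

    # the raw SQL stays raw; .columns() just tells SQLAlchemy the shape
    # of the result, mixing mapped columns with the fake `counted` column
    stmt = text(
        "SELECT foo.id, foo.name, 2 AS counted FROM foo"
    ).columns(Foo.id, Foo.name, column("counted"))
    row = s.execute(stmt).first()
```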
Mike will probably chime in with a more correct answer, but...
You should be able to figure that out by catching the
`sqlalchemy.exc.IntegrityError` error and inspecting the attributes
(original error or the text message).
On Tuesday, March 20, 2018 at 11:29:18 AM UTC-4, Mike Bayer wrote:
>
>
> if you just need a different name, rename it:
>
> my_attr = Column('id_foo__context_a', Integer, ...)
>
>
>
SqlAlchemy does everything I need. Thanks Mike!
this is silly - I realized I can just use a flag to run a different
`orm.relationship` on the Admin and Public views:

if ADMIN:
    records = orm.relationship(primaryjoin=(a.id == b.id))
else:
    records = orm.relationship(primaryjoin=and_(a.id == b.id, b.id < 1000))
Is there any magical SqlAlchemy feature
I was wondering if something were doable with SqlAlchemy in a decently
performant manner that can be maintained cleanly. I'm open to any
strategies that people can suggest.
We have a handful of reserved internal "system" or "control" records in our
database. Depending on the table, anywhere
I'm refactoring a large table and moving some write-heavy columns into
their own table.
Is there a way to automatically create the child when the parent is
generated, or must this be done manually?
I couldn't find anything in the docs.
This is almost a variant of inherited tables, but there
On Tuesday, March 6, 2018 at 3:23:42 PM UTC-5, KCY wrote:
>
> I recall coming upon a section about this in the SQLAlchemy docs, although
> I can't remember where exactly. It's not the problem (if you can call it
> that) that I'm describing here. I should double check to make sure the
> design
I can possibly help with the Twitter/blog stuff, I know that sort of stuff
well.
I'll absolutely try to help, I'm just not sure I can do what you exactly
envision in a convenient timeframe.
On Thursday, March 1, 2018 at 9:31:26 PM UTC-5, Mike Bayer wrote:
not that I'm aware of. it's a matter of whichever is more convenient
> given the kind of expression that is applied with DESC (e.g. a more
> complex expression might look more natural enclosed in desc()).
>
I've had some
This is going to wildly depend on how many things are being sorted, and
what those things are. This topic is usually a premature optimization or
"you're doing it wrong".
Imagine this query in Postgres:

SELECT * FROM records ORDER BY timestamp DESC;

If there are 1,000 items in the
On Wednesday, February 14, 2018 at 6:49:47 AM UTC-5, Simon King wrote:
> For what it's worth, I would do it something like this:
>
That's elegant. I'd rather do that now too.
On Sunday, February 11, 2018 at 8:07:23 PM UTC-5, Jeremy Flowers wrote:
>
> I'm afraid it just doesn't work. I would think you have to import String
> too..
>
Please run the entirety of what I pasted above as its own file and share
the error. That is a self-contained example that runs as a
The following works for me on Python 2 & 3.
How are you generating an error?
# - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
- -
#
# Use this file to build your own SSCCE
# SSCCE = Short, Self Contained, Correct (Compatible) Example
# see http://sscce.org/
#
# - - -
That won't work as you expect.
You're applying the Python character replacements to the arguments, and
then emitting that in SQL. You'll end up with the same
InstrumentedAttribute issue, because you're trying to operate on the Python
column instance in the ORM definition.
The string returned
ColdFusion's "queryable results cache" is pretty unique. I don't know of
any database layers or ORMs that support that functionality out-of-the-box.
They basically replace the upstream database with the results as a
"materialized view", and act as the database server for that view.
There is
Actually, you'd be constraining the join against the edge elements, since
the base is a node. (Edge is the more common name for your usage/link.)
If you're searching for node2, then you'd constrain the query by joining
the edge items like this:

result = session.query(node_2)\
    .join(edge_1,
I do a lot with graphs in SQLAlchemy, and I think you'd have an easier time
writing these queries if you flipped your search so that you're querying a
single identified object.
For example, on use-case-1, you can query for the "Usage", and join the
nodes.
> result = session.query(Usage)\
>
On Monday, February 5, 2018 at 3:19:40 AM UTC-5, Rajesh Rolo wrote:
>
>
> Thank you. .op() seems to have done the trick. I'm going to go with it as
> of now.
>
>>
Please make a test-case of your mistake though, it's important to get this
figured out and patched if there is a bug.
On Sunday, February 4, 2018 at 11:07:49 AM UTC-5, Mike Bayer wrote:
>
> You test for NULL with == None. I have no idea what your issue is can you
> please share complete mappings, complete stack trace and a reproducible
> example? Thanks
>
This definitely needs a reproduction testcase to
At the very least, you are not using a "join" on the tables.
I suggest first going through the tutorial again, and paying attention to
the section on "join":
http://docs.sqlalchemy.org/en/latest/core/tutorial.html#using-joins
On Wednesday, January 17, 2018 at 9:40:23 AM UTC-5, Mike Bayer wrote:
>
> if you create your engine in your parent process, and then let that
> engine travel into a fork(), then yes, you need to call dispose() just
> once when the fork starts, so that you get new connections in your
>
On Tuesday, January 16, 2018 at 6:26:34 PM UTC-5, Mike Bayer wrote:
dispose() is not needed inside of a web context. however, the kind
> of web container you're using as well as if you are using something
> like gevent or eventlet may be significant in why you are getting idle
>
On Tuesday, January 16, 2018 at 12:33:42 PM UTC-5, Joe Biggert wrote:
>
> we've got a wrapper around our requests that basically looks like this:
>
On Tuesday, January 16, 2018 at 2:34:25 PM UTC-5, Mike Bayer wrote:
>
> the dispose at the end is completely wrong, sorry.
>
...
> or if you are
I can't speak to the internals of this being a bug or not, or how this
should be done... but I think you could do a short-term (and cross
platform) fix using an alias via `sqlalchemy.orm.aliased` for one (or more)
of the inner subqueries. That should result in a unique discriminator
being
On Wednesday, January 10, 2018 at 2:00:25 PM UTC-5, Matt Schmidt wrote:
>
> The version I upgraded from was 1.1.9, and I originally started the
> project on 1.1.1.
>
>
And you wrote that above and I totally missed that line - sorry. I'll pipe
down for someone else to chime in now.
what version did you update from?
If this is the issue I am thinking about, that feature was added a long
time ago...
https://bitbucket.org/zzzeek/sqlalchemy/issues/3080
the issue in the ticket was the load_only added in the primary keys on
joinedload but not subqueryload -- and the orm
I'm using declarative.
I need to map a viewonly relationship that uses sqlalchemy.func.lower in
the primaryjoin.
`func` doesn't seem available in `ext/declarative/clsregistry.py` and can't
be used in the string context. is that correct?
I have a workaround of declaring the class, then
Use `sqlalchemy.orm.aliased` to create an alias of A for your join
condition...

A_alt = sqlalchemy.orm.aliased(A, name='a_alt')

Then use that to join and specify your join conditions.
The `contains_eager` needs to specify the alias, though.
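A runnable sketch of that combination, with hypothetical models `A` and `B`; it is adapted so the alias targets `B`, the relationship's target, since that is what `contains_eager` needs to be told about:

```python
from sqlalchemy import Column, ForeignKey, Integer, create_engine
from sqlalchemy.orm import (Session, aliased, contains_eager,
                            declarative_base, relationship)

Base = declarative_base()

class A(Base):
    __tablename__ = "a"
    id = Column(Integer, primary_key=True)
    bs = relationship("B")

class B(Base):
    __tablename__ = "b"
    id = Column(Integer, primary_key=True)
    a_id = Column(Integer, ForeignKey("a.id"))

engine = create_engine("sqlite://")
Base.metadata.create_all(engine)

with Session(engine) as s:
    s.add(A(id=1, bs=[B(id=1), B(id=2)]))
    s.commit()

    B_alt = aliased(B, name="b_alt")
    # join on the alias, then tell contains_eager to populate A.bs
    # from that same aliased entity
    q = (
        s.query(A)
        .join(B_alt, A.id == B_alt.a_id)
        .options(contains_eager(A.bs.of_type(B_alt)))
    )
    a = q.all()[0]
    n = len(a.bs)
```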