On Thursday, January 4, 2018 at 10:58:49 AM UTC-5, Mike Bayer wrote:
>
> that and, when you use contains_eager you need to tell it what entity
> it's looking for when it considers columns as part of a relationship:
>
> .options(sqlalchemy.orm.contains_eager('foo_alt', alias=Foo_2))\
>
As
i've got a handful of bad data from some edge cases and have been having
trouble querying the data with sqlalchemy. i attached an SSCCE below.
my cleanup script imports the model, extends `Main_Foo` with a new
relationship, and creates 2 aliases.
this selects the right rows (2 and 4) but the
I have a few of these and generally use a variant of Simon's method where I
wrap everything in a custom class. I stash all the requirements/options in
the custom class instance, then invoke a `count()` or `paginated()` method
that builds/stashes the core query and runs `count()` or `all()` on
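A minimal sketch of that wrapper pattern (all names here are illustrative, not from a real library):

```python
from sqlalchemy import Column, Integer, create_engine
from sqlalchemy.orm import Session, declarative_base

Base = declarative_base()

class Record(Base):  # hypothetical model for the demo
    __tablename__ = "record"
    id = Column(Integer, primary_key=True)

class PagedSearch:
    """Stash the requirements/options, build the core query on demand."""

    def __init__(self, session, model, filters=(), per_page=25):
        self.session = session
        self.model = model
        self.filters = list(filters)
        self.per_page = per_page

    def _core_query(self):
        q = self.session.query(self.model)
        for f in self.filters:
            q = q.filter(f)
        return q

    def count(self):
        return self._core_query().count()

    def paginated(self, page=1):
        offset = (page - 1) * self.per_page
        return self._core_query().offset(offset).limit(self.per_page).all()

engine = create_engine("sqlite://")
Base.metadata.create_all(engine)
session = Session(engine)
session.add_all([Record() for _ in range(3)])
session.commit()

search = PagedSearch(session, Record, per_page=2)
```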
thanks, michael. i'll look into that!
--
SQLAlchemy -
The Python SQL Toolkit and Object Relational Mapper
http://www.sqlalchemy.org/
To post example code, please provide an MCVE: Minimal, Complete, and Verifiable
Example. See http://stackoverflow.com/help/mcve for a full description.
---
I have a potential relationship between Foo and Bar that is defined as
follows, and has usually been read-only from legacy data.
All the code works, I just want to make sure I'm handling this case in a
clear and concise way.
I've (over)simplified the case down to this:
class Foo(Base):
I don't know if this will work for your use case, but we handle a few
similar situations with nested transactions/savepoints. if an integrity
error is raised on insert, an update is issued instead.
on conflicts it takes longer than your ideal method, but it's much faster
than selecting the
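A sketch of that savepoint pattern using stdlib sqlite3, so the mechanism is visible as plain SQL (with a SQLAlchemy session the equivalent would be `session.begin_nested()`):

```python
import sqlite3

# autocommit mode so the SAVEPOINTs below manage their own transaction
conn = sqlite3.connect(":memory:", isolation_level=None)
conn.execute("CREATE TABLE kv (k TEXT PRIMARY KEY, v TEXT)")
conn.execute("INSERT INTO kv VALUES ('a', 'old')")

def upsert(conn, k, v):
    conn.execute("SAVEPOINT sp1")
    try:
        conn.execute("INSERT INTO kv VALUES (?, ?)", (k, v))
    except sqlite3.IntegrityError:
        # the INSERT hit the primary key: roll back to the savepoint
        # and issue an UPDATE instead
        conn.execute("ROLLBACK TO SAVEPOINT sp1")
        conn.execute("UPDATE kv SET v = ? WHERE k = ?", (v, k))
    conn.execute("RELEASE SAVEPOINT sp1")

upsert(conn, "a", "new")   # conflicts -> UPDATE
upsert(conn, "b", "new")   # clean INSERT
```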
This is a bit of an odd request, but here goes:
A table in postgresql has some columns with sensitive data which should be
secured against visibility by one of our applications (we use a central
SqlAlchemy model for multiple applications)
Currently we use an environment var to determine if the
On Friday, November 3, 2017 at 12:03:43 AM UTC-4, jens.t...@gmail.com wrote:
>
> Thanks Mike! Considering we're planning to migrate to PostgreSQL in a
> month or two, how would I go about that (considering there's no strict mode
> there). Would the exception be raised on PostgreSQL?
>
A
On Friday, October 27, 2017 at 10:26:50 AM UTC-4, André Caron wrote:
>
> FYI, I tried this out and updated my GitHub project to reflect your
> excellent insight.
>
Since your connections were large enough to notice an effect in production,
if you have spare time I suggest doing some tests to
On a related note, I suggest implementing the timeout as happening at a
random time within a 2-3 minute window (e.g. at 2 minutes there is an
increasing 25% chance of a reconnect, at 3 minutes it jumps up to 75%).
This should scatter reconnects during periods of high load.
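One way to sketch that ramp (the window bounds and probabilities are illustrative, taken loosely from the 2-3 minute example above):

```python
import random

def should_recycle(age_seconds, start=120.0, end=180.0,
                   p_start=0.25, p_end=0.75, rng=random.random):
    """Return True if a connection of this age should be recycled."""
    if age_seconds < start:
        return False
    if age_seconds >= end:
        return True  # hard ceiling: always recycle past the window
    # linearly interpolate the recycle probability across the window
    frac = (age_seconds - start) / (end - start)
    p = p_start + frac * (p_end - p_start)
    return rng() < p
```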
A property I
sorry. i'm getting over a cold and in my fever-dream state thought I was
making something clear with an analogy.
looking at this, and literally everything else I wrote yesterday, I'm
amazed it's even in English.
On Thursday, October 19, 2017 at 4:10:11 PM UTC-4, Taz Mainiac wrote:
>
>
> If you ran two scripts or two modules, it would also almost always fail
>> (IIRC, it is remotely possible but highly unlikely, to generate the same id
>> in another process).
>>
>
> I am not doing that.
>
>
re-read
On Thursday, October 19, 2017 at 11:55:49 AM UTC-4, Taz Mainiac wrote:
>
> So - a morning spent googling does not turn up any information about
> Python classes having a different id depending on context (script vs
> module). I'm probably not googling for the right thing? Can anyone point
>
thanks. i think i can just go through "insp.mapper.relationships". i don't
need to iterate/walk.
@mike - yes, uwsgi is that kind of server. It takes advantage of
copy-on-write memory
This sounds like Jens is connecting to the database before it forks
(usually any code that isn't wrapped in request logic)
uwsgi has a postfork hook, which can be handled via a decorator
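A sketch of the post-fork reset; under uwsgi the function would be wrapped with the `postfork` decorator from `uwsgidecorators` (omitted here so the sketch runs anywhere):

```python
from sqlalchemy import create_engine, text

engine = create_engine("sqlite://")

def postfork_reset():
    # discard any pooled connections inherited from the parent process;
    # the pool transparently opens fresh ones on the next checkout
    engine.dispose()

postfork_reset()
with engine.connect() as conn:
    ok = conn.execute(text("SELECT 1")).scalar()
```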
OTOMH (I didn't go through your code), are the two databases the same?
If not, this is possibly related to database specific compiling (or the
lack of) and a common error. Note the `bind` references in the
docs: http://docs.sqlalchemy.org/en/latest/core/tutorial.html#executing
Depending on
FWIW, PostgreSQL supports a "DISTINCT ON" syntax.
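For reference, SQLAlchemy renders DISTINCT ON when you hand expressions to `.distinct()` under the PostgreSQL dialect (hypothetical table; compile-only, no connection needed):

```python
from sqlalchemy import Column, Integer, MetaData, Table, select
from sqlalchemy.dialects import postgresql

metadata = MetaData()
t = Table("t", metadata, Column("grp", Integer), Column("val", Integer))

# newest row per group: DISTINCT ON (grp) ... ORDER BY grp, val DESC
stmt = select(t).distinct(t.c.grp).order_by(t.c.grp, t.c.val.desc())
sql = str(stmt.compile(dialect=postgresql.dialect()))
```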
On Friday, October 6, 2017 at 7:23:45 AM UTC-4, Simon King wrote:
>
> Your code probably needs to do something like this:
>
> engine = sqlalchemy.create_engine(url, client_encoding='utf8')
> connection = engine.connect()
>
> "engine.connect()" will raise an exception if it fails to
Is there a reason why you can't `flush` as the first action the
`before_commit` event ?
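As a sketch (hypothetical model), flushing as the first action of `before_commit` looks like:

```python
from sqlalchemy import Column, Integer, String, create_engine, event
from sqlalchemy.orm import Session, declarative_base

Base = declarative_base()

class Item(Base):  # hypothetical model
    __tablename__ = "item"
    id = Column(Integer, primary_key=True)
    name = Column(String)

engine = create_engine("sqlite://")
Base.metadata.create_all(engine)
session = Session(engine)

calls = []

@event.listens_for(session, "before_commit")
def _flush_first(session_):
    # flush as the very first action, so anything else hooked into
    # before_commit sees the pending changes already written
    session_.flush()
    calls.append("flushed")

session.add(Item(name="x"))
session.commit()
```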
Thank You, Mike.
what do your database and server logs show? are there slow query timeouts,
deadlocks, etc? are these happening during peak load? 10-30 seconds looks
more like an issue with competing locks or available workers, than
potentially pool checkouts or lazy loading.
aside from db logs, i've found
…" in a specific way.
>
> On Thu, Aug 31, 2017 at 9:30 PM, Jonathan Vanasco <jona...@findmeon.com
> > wrote:
> > I discovered an edge-case in one of our apps today. A form got pummeled
> by
> > a botnet which was going through a list of compromised accoun
I discovered an edge-case in one of our apps today. A form got pummeled by
a botnet which was going through a list of compromised account data for
hits (e.g. credentials that match haveibeenpwned.com). That triggered a
lot of internal metrics logging, which eventually a dedicated logging
Looking at your code and considering the size of your database, I've run
into performance issues under PostgreSQL a few times that may be related...
they're all usually because very slight text changes to a query can
drastically alter the server's query planner.
The cause/fixes were:
1.
and thank you x 100
On Tuesday, August 22, 2017 at 2:16:42 PM UTC-4, Mike Bayer wrote:
> you're looking for:
>
>
> http://docs.sqlalchemy.org/en/latest/orm/join_conditions.html#using-custom-operators-in-join-conditions
>
>
>
I swear I searched first and spent an hour trying to figure this out. And
then
When I've been unable to handle things like this with cascades (which you
are already using), I use `session.flush(objects=SPECIFIC_OBJECTS_LIST)` to
flush only the specific objects. That usually gets me out of these situations.
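A sketch of that selective flush (hypothetical model):

```python
from sqlalchemy import Column, Integer, create_engine
from sqlalchemy.orm import Session, declarative_base

Base = declarative_base()

class Thing(Base):  # hypothetical model
    __tablename__ = "thing"
    id = Column(Integer, primary_key=True)

engine = create_engine("sqlite://")
Base.metadata.create_all(engine)
session = Session(engine)

a, b = Thing(), Thing()
session.add_all([a, b])
# flush only the listed objects; `b` stays pending until the next
# full flush or commit
session.flush(objects=[a])
```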
On Wednesday, August 9, 2017 at 7:18:03 PM UTC-4, Mike Bayer wrote:
> it will work fine
>
expanding on Mike's response... you're just defining `object` within the
scope of the class definition.
# `object` here is the built-in type
class Foo(object):
    # this assignment only rebinds the name `object` inside the class body
    object = Column(Integer)
A similar question about another anti-pattern was asked/answered recently.
Most of what Mike says applies in this use-case
https://groups.google.com/forum/#!topic/sqlalchemy/W_Rn-EwKvZo especially
the locking and integrity issues with long-running transactions. He's
written about it
http://docs.sqlalchemy.org/en/latest/orm/session_api.html#sqlalchemy.orm.session.Session.connection
Using `bulk_insert_mappings` is much more performant than using the ORM and
custom collections. Right now you are bypassing ORM object creation and
state management. Your operations might be 10x slower with a collection.
IIRC, `bulk_insert_mappings` will iterate over the payload like this:
Unless you need to use all the readings immediately, have you considered
just making a custom def under the Sensor model, and then inserting all the
readings via sqlalchemy core? That would allow you to insert them without
creating ORM objects, which people using numpy and a lot of data often
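A sketch of that Core-level insert (the `readings` table and its columns are illustrative):

```python
from sqlalchemy import (Column, Float, Integer, MetaData, Table,
                        create_engine, func, insert, select)

metadata = MetaData()
readings = Table("readings", metadata,
                 Column("id", Integer, primary_key=True),
                 Column("sensor_id", Integer),
                 Column("value", Float))

engine = create_engine("sqlite://")
metadata.create_all(engine)

rows = [{"sensor_id": 1, "value": float(i)} for i in range(1000)]
with engine.begin() as conn:
    # one executemany round trip; no ORM objects or identity map involved
    conn.execute(insert(readings), rows)

with engine.connect() as conn:
    inserted = conn.execute(select(func.count()).select_from(readings)).scalar()
```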
'many' is the same as 'remote' in the example that you are thinking of,
but it loses applicability if you are doing a one-to-one relationship.
Let me illustrate with the `remote` annotation, which is a corollary form
of `remote_side` in the relationships API, and tends to be more clear:
Great! Thank you!
Good to know the behavior hasn't changed. We have a lot of "read-only"
routes or database-free routes which rely on the "lazy"
transaction/connection behavior for performance. I was worried the new
flag could create db connectivity per-request when the session is
On Tuesday, July 11, 2017 at 10:27:15 AM UTC-4, Mike Bayer wrote:
>
> So the note for this is at:
>
>
> http://docs.sqlalchemy.org/en/latest/changelog/migration_12.html#pessimistic-disconnection-detection-added-to-the-connection-pool
>
>
> then in the main docs, the new flag replaces the
On Monday, July 10, 2017 at 9:44:03 AM UTC-4, Mike Bayer wrote:
>
> * Connection pool pre-ping - The connection pool now includes an
> optional "pre ping" feature that will test the "liveness" of a pooled
> connection for every connection checkout, transparently recycling the
> DBAPI
The docs give information on how to register and remove event listeners:
http://docs.sqlalchemy.org/en/latest/core/event.html
It may be easier for you to keep the event listeners "on", and have them
look for a toggle value to enable/disable.
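A sketch of the "leave listeners on, gate them with a toggle" idea:

```python
from sqlalchemy import create_engine, event, text

engine = create_engine("sqlite://")
LISTENING = {"enabled": True}  # mutable toggle the listener consults
seen = []

@event.listens_for(engine, "before_cursor_execute")
def _log(conn, cursor, statement, parameters, context, executemany):
    if LISTENING["enabled"]:
        seen.append(statement)

with engine.connect() as conn:
    conn.execute(text("SELECT 1"))
    LISTENING["enabled"] = False
    conn.execute(text("SELECT 2"))
```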
Is there a chance you updated the Python binary as part of your process?
Weird stuff like this often happens if you forget to trash the
site-packages or virtualenv after a python version upgrade (even on a micro
version upgrade)
I understand your concerns. I dropped pyramid's transaction support a long
time ago, and prefer to do everything explicitly.
You should be aware that a scoped session and regular session are not 100%
interchangeable. There are a few slight differences... though 99.9% of
users won't be
On Wednesday, June 28, 2017 at 4:28:32 PM UTC-4, Mike Bayer wrote:
>
> I'm not 100% sure that zope.sqlalchemy unconditionally emits COMMIT
> for the session that's associated. Though overall would need to see
> where you're getting request.session from and all that; if it's not
> associated
> On Wednesday, June 28, 2017 at 3:16:52 PM UTC-4, Richard Rosenberg wrote:
>
On Wednesday, June 28, 2017 at 3:16:52 PM UTC-4, Richard Rosenberg wrote:
>
>
> I am absolutely puzzled, but it seems likely that pyramid_tm is in the way
> somehow. It always wants to do its own thing, and calling
http://docs.sqlalchemy.org/en/latest/core/selectable.html?highlight=with_for_update#sqlalchemy.sql.expression.Select.with_for_update
E.g.:
stmt = select([table]).with_for_update(nowait=True)
I've run into similar issues like this with Pyramid before, though it was
usually from trying to leverage the work of previous queries into low-cost
`.get()` functions onto the identity map.
The trick I settled on is to keep an array as a @reify request property,
then stash objects onto it.
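The shape of that trick, with `functools.cached_property` standing in for Pyramid's `@reify` (same compute-once-then-stash-on-the-instance semantics):

```python
from functools import cached_property

class Request:  # stand-in for the Pyramid request object
    @cached_property
    def loaded_objects(self):
        # computed once per request, then stashed on the instance;
        # ORM objects from earlier queries get appended here
        return []

req = Request()
req.loaded_objects.append("some-orm-object")
```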
you're almost there.
flip a few lines around to rewrite the query to select from product...
e.g. something like
SELECT r_product_list.* FROM r_product_list
JOIN r_product_category_history_list on
r_product_list.r_id=r_product_category_history_list.r_id
JOIN
You're not joining anything onto the `rProductCategoryHistory` table, so
depending on your DB you may be getting populated rows for rProduct that
don't match anything.
you probably want something like this...
query = db.session.query(rProduct)\
.join(rProductHistoricalDetails,
On Friday, June 16, 2017 at 1:44:45 AM UTC-4, Mike Bayer wrote:
If you then added and populated a column in your table called
> "ten_minute_epoch" or something like that, you could avoid having to do
> the math operation on every row during your SELECT (e.g. it would be
> written at INSERT
These are two different lines of code. The second one looks correct.
> value_new = column_property("%.3f" % value())
> value_new = column_property("%.3f" % value)
>
I can certainly bore you with specifics!
We have growing index of URLs, on discovery a url is standardized into a
"canonical" and stored into two tables:
class UrlRaw(DeclaredTable):
    __tablename__ = 'url_raw'
    id = Column(Integer, nullable=False, primary_key=True)
You're going to have better luck asking this question in a flask community
or, if this is in relation to your IDE, a community for that.
the postgres trick is to use partial index on the column to improve the
query speed.
1. create a substring index on the table, let's say 5 characters wide:
CREATE INDEX speed_shortuuid ON table (substr(shortuuid, 0, 5));
2. have all queries include a match against that index, calculating the
side question - have you done any tests on how the UUID type queries as
your database scales? It may make sense to do the shortuuid as a text
field, which you can put a substring index on. if you don't know that
postgresql optimization trick, I can explain it further.
The FAQ
http://docs.sqlalchemy.org/en/latest/faq/sessions.html#how-do-i-make-a-query-that-always-adds-a-certain-filter-to-every-query
points to an example in the wiki:
https://bitbucket.org/zzzeek/sqlalchemy/wiki/UsageRecipes/PreFilteredQuery
On Monday, May 15, 2017 at 9:58:57 AM UTC-4, Mike Bayer wrote:
>
> I'd be curious to see under what scenarios being able to set one element
> of the JSON
> vs. UPDATEing the whole thing is a performance advantage significant
> compared to the usual overhead of the ORM flush process; that is,
So the Pyramid debug toolbar system catches the before_cursor_execute to
set up a timing start, and after_cursor_execute to calculate the duration.
It's decent support for approximated query timing.
It didn't log any of the other events though, so I'm working on a solution
to log them. The
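The timing approach described above can be sketched like this: record a start time in `before_cursor_execute`, compute the duration in `after_cursor_execute` (stashing the start on `conn.info`, as the toolbar effectively does):

```python
import time
from sqlalchemy import create_engine, event, text

engine = create_engine("sqlite://")
timings = []

@event.listens_for(engine, "before_cursor_execute")
def _start(conn, cursor, statement, parameters, context, executemany):
    conn.info.setdefault("query_start", []).append(time.perf_counter())

@event.listens_for(engine, "after_cursor_execute")
def _end(conn, cursor, statement, parameters, context, executemany):
    started = conn.info["query_start"].pop()
    timings.append((statement, time.perf_counter() - started))

with engine.connect() as conn:
    conn.execute(text("SELECT 1"))
```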
On Wednesday, May 10, 2017 at 8:09:22 PM UTC-4, Mike Bayer wrote:
>
> no return value needed
>
perfect
> you mean, the commit() method itself how long does that take? You'd
> probably do a time.time() check before and after calling the method (or
> use timeit.timeit to do the same).
>
On Thursday, May 4, 2017 at 12:16:20 AM UTC-4, David Chanin wrote:
>
> How do objects get registered into the aggressive loader? Does it happen
> automatically when they're initially loaded via query or cache? Ideally we
> wanted to group items together when they're loaded and do bulk lazy
We have a somewhat tangential solution that was developed when building a
read-through cache that backs into SqlAlchemy.
Instead of working on the Session, we register items for 'aggressive
loading' into a custom class instance that handles the coordination.
Our system works like this:
*
This is referencing a concept in software design termed "opinionated".
Do a search for "opinionated framework" or "opinionated software" and this
will all probably make sense.
On Tuesday, April 25, 2017 at 5:50:48 AM UTC-4, Simon King wrote:
>
> def reload(instance, *options):
> session = saorm.object_session(instance)
> # TODO: make this generic rather than assuming presence of 'id' column
> q = (session.query(type(instance))
>
On Monday, April 24, 2017 at 4:28:22 PM UTC-4, Mike Bayer wrote:
>
> yeah just load the object again w/ the eagerloads option you want.
>
Thanks. I was hoping there was a way to just say `Obj.load('foo')` ?
I'll just untangle the code and load the relationship in the first place.
At this
[I couldn't find any docs on this, and my luck with the list archives only
showed some potential work for this in 2008.]
Is it currently possible to eager-load a ORM relationship after a query?
For example, I have loaded an instance of `Foo`, but I did not do an eager
load on `Foo.bars` and
Thanks. That's exactly it. It makes perfect sense why it just happens to
work when I join one bar onto this, but not two.
FYI, I'm not seeing a comma on the simple join - but it's still a bad query
that illustrates the problem as Bar doesn't get joined in:
SELECT foo.id AS foo_id FROM foo
i have roughly the following model: Widget > Widget2Foo > Foo > Foo2Bar >
Bar wherein `Foo` has multiple relationships to `Bar` (via a filter )
the setup works for lazyloading and subqueryloading, but I can't do
multiple joinedloads:
eg:
lazyloading and this subquery works:
query =
i briefly glanced at your setup, and just want to note that you'll have a
better comparison using the `compile` with the mysql engine (docs here:
http://docs.sqlalchemy.org/en/latest/faq/sqlexpressions.html)
the query clauses you pass into `str()` aren't compiled for the mysql
dialect, so may
Just to expand on Mike's answer... I've run into this often on a handful of
columns.
The solution I've settled on is to set a variable:
COLUMN_SIZE_SERVERSIDE = 1
and then use that as the length on the problematic column sizes while I
work out the issues on what the length should be, and
On Monday, April 3, 2017 at 7:09:41 PM UTC-4, Jason T. wrote:
>
> I think there may be even a cleaner way to do this since I have
> relationships() built into the models,
>
In that case, you will need to use "contains_eager" if you plan on
iterating over any of the relationships
You might want to rename that from `object_as_dict` to `columns_as_dict` --
as your code only handles the columns, not the relationships. It is common
for people to have a chunk of code like that.
On Wednesday, March 22, 2017 at 7:04:24 PM UTC-4, Mike Bayer wrote:
>
> you'd need to patch some kind of fake DBAPI connection that returns this
> cursor from the cursor() method. it would be ugly but if the cursor
> behaves well it could work. you'd get session.connection() to get a
>
longshot, i know.
We've got some legacy twisted code that does raw operations via a
twisted.enterprise.adbapi connection pool with the psycopg2 driver. Other
services in this deployment are configured to use SqlAlchemy for all
operations - this is the legacy holdout.
There are a few chunks
> request.add_finished_callback(cleanup)
is this pyramid?
if so, you can use the sqlalchemy panel for pyramid_debugtoolbar.
it will track all the queries for you in development. it's not appropriate
for production, but may give you the insight you need.
I can't seem to declare a valid relationship in my application, but can do
so as a self-contained example.
the (working) example on the relevant columns is this:
class IpAddress(Base):
    __tablename__ = 'ip_address'
    id = Column(Integer, primary_key=True)
>
> the dialect has .server_version_info
>
> engine.dialect.server_version_info >= (9, 6)
>
> but you need your engine and connection to handle that. JSONB and JSON
> have different operator support.So you might want to do a
> TypeDecorator around it or similar but you can only support a
"within sqlalchemy". that was a typo.
I'm deploying something and unsure of the underlying postgresql version for
a given environment. most deployments are 9.6, a handful are 9.2.
upgrading everything is not an option I have.
i'm using declarative.
the only issue I have right now, is that JSONB is preferred to JSON
support. Is
On Monday, March 6, 2017 at 10:21:37 AM UTC-5, Mike Bayer wrote:
>
> a ROLLBACK is occurring which suggests a different error has occurred
> and is being swallowed - the transaction can then not continue. Does
> your production environment have logs which would illustrate other
>
On Wednesday, January 25, 2017 at 11:01:41 AM UTC-5, Randy Syring wrote:
>
>
>- pymssql has struggled to find maintainers who can devote time to it
>and it is starting to languish.
>
> Have you tried speaking with the "new" Microsoft? Perhaps they'd be
willing to contribute funds or
a given application makes explicit use of `commit` and `rollback` on the
sqlalchemy session.
a few views require interaction with some other technologies via a
two-phase commit.
under the current design, i can't enable two-phase commits for everything.
i'm not sure I can detect the need
On Saturday, January 14, 2017 at 3:06:21 AM UTC-5, Nikunj Yadav wrote:
>
> But I am interested in knowing, assuming the dumbest setting that I could
> have done is it possible that sqlalchemy is keeping a lot of references in
> memory ?
>
>
Perhaps I can think of a dumber mistake than Mike
On Friday, January 6, 2017 at 7:47:17 PM UTC-5, Daniel Kraus wrote:
>
> > If I'm reading your question correctly, most of what sqlalchemy does
> (and
> > excels at) is specifically keeping people from doing what you're trying
> to
> > do.
>
> I think you got me wrong then.
>
> > It seems
On Thursday, January 5, 2017 at 8:34:52 PM UTC-5, Daniel Kraus wrote:
>
> The use-case is that I have a big model with lots of complex
>
> relationships but 90% of the time I don't need the data from those.
>
If I'm reading your question correctly, most of what sqlalchemy does (and
excels at)
Does this issue consistently repeat within a transaction block? Does it
still happen if you reverse the tests?
I've run into similar issues in the past, and the problem was often from
postgresql checking indexes -- the first test would stall because indexes
needed to be read into memory,
If you really need to do something like that though, you can always string
together raw sql + bindparams, and pipe it all through an `execute` into
the dbcursor.
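A sketch of stringing together raw SQL with bound parameters through `execute`:

```python
from sqlalchemy import create_engine, text

engine = create_engine("sqlite://")
# raw SQL with named bind parameters; values stay out of the SQL string
stmt = text("SELECT :a + :b AS total").bindparams(a=2, b=3)
with engine.connect() as conn:
    total = conn.execute(stmt).scalar()
```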
On Monday, December 19, 2016 at 11:26:31 AM UTC-5, Michal Nowikowski wrote:
>
> Imagine that we have many Keywords but not all of them are used (not all
> are associated to some BlogPosts).
> Now, how to get list of Keywords that are used anywhere (i.e. these that
> are associated to some
On Friday, December 16, 2016 at 12:14:11 PM UTC-5, Simon Moon wrote:
>
>
> It seems m is not a Machine object. I think it should be clear what I want
> to do, but I can't figure what the
> SQLAlchemy way would be to do it
>
replace this:
m = session.query(Machine.name).filter(...
On Friday, December 16, 2016 at 12:01:04 PM UTC-5, Mike Bayer wrote:
>
> You need to be inside orm/strategies.py in JoinedEagerLoader where it
> deals with row_processor. The "path" variable will show which relationship
> it thinks it's loading.
>
>>
>>
PERFECT! thanks.
On Friday, December 16, 2016 at 9:14:21 AM UTC-5, Mike Bayer wrote:
>
> Those aren't easy cases to debug. Isolating the behavior into a small
> test is the best first step, failing that you'd at least have echo set to
> debug , and you can watch the rows cone in. If it were me I'd then be
I've got an edge-case that I can't reliably reproduce yet, and it's driving
me crazy.
I have a table/view that is used for an analytics report. It just
references 2 objects from a given table, along with a count.
class Report(Base):
    __tablename__ = 'report'
    id = Column(Integer,
The PostgreSQL server killed the database connection. This usually happens
because someone killed your connection on purpose, or there is a server
shutdown/restart.
I had a similar need under Pyramid and went with a different approach: My
"dbSession" object is actually a proxy container that holds dedicated Read,
Write, and Log SqlAlchemy sessions. I explicitly address
"dbSession.reader" or "dbSession.writer" in all code.
Everything that is "public
I'm transitioning the code to plain sessions.
I can attach the sessionmaker onto to the session at generation – that is a
backup plan right now. I'm just wondering if there's a more correct way
I ran into an issue when transitioning a scoped session to a regular
session in an application.
A block of code had previously created a secondary session (to the same
database) to autocommit some data outside of the session/transaction:
dbSessionAutocommit =
FWIW, I recently had a similar situation and ended using 2 tables -- one as
a transaction log and the other with a constantly updated entry. While
your approach worked fine, the database performed significantly better with
2 tables because the (vast) majority of queries only needed the most
FYI, if you query via the ORM and load_only, the query should be something
like :
SELECT primary_key_column, array[1] FROM table;
The ORM adds the primary key behind-the-scenes so it can setup the objects.
as far as i know, if you need to only load array[1] and not the primary key
column,
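The primary-key behavior is visible in the compiled SQL (hypothetical model; compile-only, no data needed):

```python
from sqlalchemy import Column, Integer, String, create_engine
from sqlalchemy.orm import Session, declarative_base, load_only

Base = declarative_base()

class Reading(Base):  # hypothetical model
    __tablename__ = "reading"
    id = Column(Integer, primary_key=True)
    a = Column(String)
    b = Column(String)

engine = create_engine("sqlite://")
session = Session(engine)

# only `a` is requested, but the compiled SELECT still carries the
# primary key, which the ORM needs to build object identities
sql = str(session.query(Reading).options(load_only(Reading.a)))
```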
I believe the answer is no.
IIRC, those psql commands are shortcuts within psql... and the various
drivers (psycopg2, etc) don't support them.
A workaround is to use the underlying query against postgresql's tables
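A sketch of that workaround; SQLAlchemy's inspector issues the appropriate catalog queries per backend (pg_catalog/information_schema on PostgreSQL; sqlite_master in this runnable demo):

```python
from sqlalchemy import create_engine, inspect, text

engine = create_engine("sqlite://")
with engine.begin() as conn:
    conn.execute(text("CREATE TABLE demo (id INTEGER PRIMARY KEY)"))

# roughly what psql's \dt does client-side, done through the driver
tables = inspect(engine).get_table_names()
```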
yeah, `remove` is just a better way to handle scoped sessions. I should
really move to regular sessions like you; I keep forgetting that I have
these old legacy scoped sessions for no reason.
oh great! `session.info["request"]` solved all my problems quite nicely. i
integrated that into my PyPI sessions manager.
Simon, thanks. Looking at your code, I recall that `dbsession.remove()` may
be better than `dbsession.close()`
I have an edge-case in a few situations where, within an @property of a
SqlAlchemy object, I need to know the current active web-request/context.
I'm currently using Pyramid's `get_current_request`, but it is no longer
recommended -- so I'd like to get a proper solution in place.
I have a new
The model for a given project exists within a Pyramid application,
`foo_pyramid`. Other services in Celery and Twisted have been integrated
against it with almost no trouble.
The one bit of trouble is that some convenience methods on the ORM classes
implemented for Pyramid are triggering a
I'm sorry, I totally misunderstood your question. I thought you were trying
to get all rows as a unified dict. Your result clarified things.
It looks like you're using the ORM (and not core), right?
I don't think it is entirely possible to do what you want. Several people
have asked similar