Thank you, the event worked like a charm :-) Though I think that I don't
need the commit events, because the application terminates anyway.
I modified your approach to gather which objects were flushed so that in
the end I can give the user more precise information:
dbsession.info["new"
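For readers following along, the event-based approach described here might look roughly like this sketch (the `Widget` class and the `"flushed_new"` info key are invented for illustration, not taken from the original code):

```python
from sqlalchemy import Column, Integer, String, create_engine, event
from sqlalchemy.orm import declarative_base, sessionmaker

Base = declarative_base()

class Widget(Base):  # hypothetical mapped class for the sketch
    __tablename__ = "widget"
    id = Column(Integer, primary_key=True)
    name = Column(String)

engine = create_engine("sqlite://")
Base.metadata.create_all(engine)
session = sessionmaker(bind=engine)()

@event.listens_for(session, "after_flush")
def collect_flushed(sess, flush_context):
    # during after_flush, sess.new still shows the pre-flush "new" objects,
    # so we can record exactly what was just written out
    sess.info.setdefault("flushed_new", []).extend(sess.new)

session.add(Widget(name="a"))
session.commit()
print([w.name for w in session.info["flushed_new"]])  # → ['a']
```

At the end, session.info holds every object that went through a flush, which is what lets you report precisely to the user.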
hi there -
please direct these requests to the IBM list at:
https://groups.google.com/forum/#!forum/ibm_db
On Tue, Nov 21, 2017 at 2:25 PM, nahumcastro wrote:
> Hello all.
>
> I have the same problem with DB2 for AS/400; it seems to be very different from
> DB2 on Windows and Linux.
>
> Here is what
Hello all.
I have the same problem with DB2 for AS/400; it seems to be very different from
DB2 on Windows and Linux.
Here is what I have found:
this URL string doesn't apply to AS/400 as documented:
ibm_db_sa://user:pass@server:port/database
because when you connect to an as400 there is only one database wit
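For reference, the documented URL form can at least be parsed with SQLAlchemy's make_url, which may help when debugging what the dialect actually receives (credentials here are placeholders; this only parses the string, it does not connect, and as noted above the form reportedly doesn't work for AS/400):

```python
from sqlalchemy.engine import make_url

# the documented ibm_db_sa URL form, with placeholder values
url = make_url("ibm_db_sa://user:pass@server:50000/database")
print(url.drivername, url.host, url.database)  # → ibm_db_sa server database
```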
> I was more interested in the discriminator_on_association. But after some
> tests, I find that the table_per_related solution works fine and the database
> is cleaner than I expected.
yep!! I see that nobody likes it at first
On Tue, Nov 21, 2017 at 1:04 PM, Olaf wrote:
> Hello,
>
> Sorry f
Hello,
Sorry for the delay in answering, I was busy these last days.
I tried every solution and I must honestly say that at the beginning, I was
not attracted by the table_per_association.py and the table_per_related.py
solutions because I didn't like the idea of having tables automatically
g
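For anyone comparing the patterns, the table_per_related idea mentioned above can be sketched roughly like this (the `Customer` class, the `HasAddresses` mixin name, and the columns are hypothetical, loosely modeled on the distribution's table_per_related example rather than copied from it):

```python
from sqlalchemy import Column, ForeignKey, Integer, String, create_engine
from sqlalchemy.orm import declarative_base, declared_attr, relationship, sessionmaker

Base = declarative_base()

class HasAddresses:
    """Mixin: each parent class gets its own automatically generated address table."""
    @declared_attr
    def addresses(cls):
        cls.Address = type(
            "%sAddress" % cls.__name__,
            (Base,),
            {
                "__tablename__": "%s_address" % cls.__tablename__,
                "id": Column(Integer, primary_key=True),
                "parent_id": Column(Integer, ForeignKey("%s.id" % cls.__tablename__)),
                "email": Column(String),
            },
        )
        return relationship(cls.Address)

class Customer(HasAddresses, Base):
    __tablename__ = "customer"
    id = Column(Integer, primary_key=True)

engine = create_engine("sqlite://")
Base.metadata.create_all(engine)
session = sessionmaker(bind=engine)()

c = Customer()
c.addresses.append(Customer.Address(email="x@example.com"))
session.add(c)
session.commit()
print([a.email for a in session.query(Customer.Address).all()])  # → ['x@example.com']
```

The generated tables (customer_address here) are exactly the part some people dislike at first glance.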
I've looked to see how hard it would be to allow "supplemental"
attributes to form part of the mapper's primary key tuple, and it
would be pretty hard. The "easy" part is getting the mapper to set
itself up with some extra attributes that can deliver some kind of
supplemental value to the identit
On Tue, Nov 21, 2017 at 7:39 AM, Антонио Антуан wrote:
> Hi guys.
>
> I got this code example:
> https://gist.github.com/aCLr/ff9462b634031ee6bccbead8d913c41f.
>
> Here I make custom `Session` and custom `Query`. As you see, `Session` has
> several binds.
>
> Also, you can see that there are two f
Alas, the production database is SQL Server (though from Linux). I use
SQLite for testing. One of the attractions of SQLAlchemy is to stop
worrying about database differences.
I'll get it all figured out eventually. Thanks for the help.
Skip
On Tue, Nov 21, 2017 at 7:16 AM, Simon King wrote:
>
I'm pretty sure that bulk_insert_mappings ends up just calling the
same code that I suggested.
What database are you using? If it's Postgres, you might be interested
in
http://docs.sqlalchemy.org/en/latest/dialects/postgresql.html#psycopg2-batch-mode
(linked from
http://docs.sqlalchemy.org/en/la
Hi guys.
I got this code example:
https://gist.github.com/aCLr/ff9462b634031ee6bccbead8d913c41f.
Here I make custom `Session` and custom `Query`. As you see, `Session` has
several binds.
Also, you can see that there are two functions:
`assert_got_correct_objects_with_remove` and
`assert_got_
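For readers without the gist handy, a Session with several binds can be set up along these lines (the two declarative bases, engines, and in-memory databases are assumptions for the sketch, not taken from the gist):

```python
from sqlalchemy import Column, Integer, create_engine
from sqlalchemy.orm import declarative_base, sessionmaker

BaseA = declarative_base()
BaseB = declarative_base()

class A(BaseA):
    __tablename__ = "a"
    id = Column(Integer, primary_key=True)

class B(BaseB):
    __tablename__ = "b"
    id = Column(Integer, primary_key=True)

engine_a = create_engine("sqlite://")
engine_b = create_engine("sqlite://")
BaseA.metadata.create_all(engine_a)
BaseB.metadata.create_all(engine_b)

# route each declarative base to its own engine via the binds map
Session = sessionmaker(binds={BaseA: engine_a, BaseB: engine_b})
session = Session()
session.add_all([A(), B()])
session.commit()
print(session.query(A).count(), session.query(B).count())  # → 1 1
```

Each query is routed to the right engine by looking the mapped class up in the binds map.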
Thanks. I guess I'm still a bit confused. The problem I've been trying
to solve happens to involve inserting records into a table. In my real
application, the list of records can contain millions of dicts. The
name, "bulk_insert_mappings" sort of sounds like it's going to use
BULK INSERT types of s
(TLDR: I think bulk_insert_mappings is the wrong function for you to use)
SQLAlchemy consists of 2 main layers. The Core layer deals with SQL
construction, database dialects, connection pooling and so on. The ORM
is built on top of Core, and is intended for people who want to work
with "mapped cla
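To illustrate the Core-versus-ORM point: for millions of rows, executing a Core insert() with a list of parameter dictionaries produces a single executemany at the driver level, with none of the ORM's per-object bookkeeping. A minimal sketch (table and column names invented):

```python
from sqlalchemy import Column, Integer, MetaData, String, Table, create_engine

metadata = MetaData()
records = Table(
    "records",
    metadata,
    Column("id", Integer, primary_key=True),
    Column("name", String),
)
engine = create_engine("sqlite://")
metadata.create_all(engine)

rows = [{"name": "row%d" % i} for i in range(1000)]

# one Core executemany INSERT for the whole list of dicts
with engine.begin() as conn:
    conn.execute(records.insert(), rows)
    total = conn.execute(records.select()).fetchall()
print(len(total))  # → 1000
```

Whether the driver turns that executemany into a true batched statement then depends on the dialect, which is where the psycopg2 batch-mode link above comes in.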