Hi,
I have a problem with bulk deleting from a sharded session; I searched the
previous posts without finding anything.
I have the following table:
usersession_table = Table('kg_usersession', meta,
    Column('session_id', String(32), primary_key=True),
Just a suggestion, but wouldn't we want to always default
viewonly=True when lazy='dynamic'?
Or are there use cases such that the orm can actually still be
expected to understand the relationship correctly even when unknown
filter criteria are added?
Hey guys,
I have a Pylons back-end running on SQLAlchemy. I have a script that
reads a tree of XML files from an HTTP server (it downloads an XML file X,
then downloads X's children, then iterates over those children, and so on
recursively). Each XML file represents an SQLAlchemy model.
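The recursive walk described above can be sketched generically; `fetch` and `child_urls` are hypothetical stand-ins for the real HTTP download and XML parsing code:

```python
# Minimal sketch of the recursive fetch; `fetch(url)` returns a document,
# `child_urls(doc)` returns the URLs of that document's children.
def fetch_tree(url, fetch, child_urls, seen=None):
    """Download `url`, then recurse into each child URL."""
    if seen is None:
        seen = set()
    if url in seen:          # guard against cycles and duplicates
        return seen
    seen.add(url)
    doc = fetch(url)
    for child in child_urls(doc):
        fetch_tree(child, fetch, child_urls, seen)
    return seen
```

The `seen` set also avoids re-downloading the same URL when the tree contains duplicates, which becomes relevant later in this thread.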
The
I have a query:
query = Session.query(SmartdataEligibilityRecord).order_by(SmartdataEligibilityRecord.term_date)
Which creates the following sql statement (the value of query):
SELECT smartdata_eligibility_records.id AS smartdata_eligibility_records_id,
smartdata_eligibility_records.hic_number
On Apr 20, 2011, at 7:48 AM, Kent wrote:
Just a suggestion, but wouldn't we want to always default
viewonly=True when lazy='dynamic'?
Or are there use cases such that the orm can actually still be
expected to understand the relationship correctly even when unknown
filter criteria are
What I tend to do in cases like this is to break things into commit chunks.
For instance, I've got an import script that goes through and processes 10
at a time and performs a commit every 10. This is tweakable via a config
setting, but so far 10 works for my needs.
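A minimal sketch of that chunking pattern, with the per-item handler and the commit callable (e.g. `Session.commit`) passed in, and the chunk size as the tweakable setting mentioned above:

```python
# Process items one at a time, committing after every `chunk_size` items.
def process_in_chunks(items, handle, commit, chunk_size=10):
    count = 0
    for item in items:
        handle(item)
        count += 1
        if count % chunk_size == 0:
            commit()
    if count % chunk_size:   # commit any trailing partial chunk
        commit()
    return count
```

With SQLAlchemy, `handle` would typically call `Session.add()` and `commit` would be `Session.commit`.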
As for the duplicates, if
What is the error that you're getting? Is it just that the value is None?
You're doing a left outer join which means you might gets rows back that
don't have any member record data tied to it. In that case you need to check
if you have a member first, then access the properties on it.
% if
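(The `% if` above is the start of a Mako template check.) The same guard in plain Python might look like this; `member` and `name` are hypothetical attribute names standing in for the real joined record:

```python
# After an OUTER JOIN, the joined record may be None; check before
# touching its attributes.
def member_name(row, default='(no member)'):
    member = getattr(row, 'member', None)
    if member is None:
        return default
    return member.name
```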
Dear Rick,
Thank you for your reply. I understand, but isn't there a better way
than doing a lot of single commits in case of a commit exception?
In other words, is there a way to tell SQLAlchemy to throw an
exception on Session.add() if there's a duplicate, as opposed to on
Session.commit()?
Not that I'm aware of. When you do a Session.add(), it's not touching the
database yet. That's part of the performance tradeoff. There's not a good way
for it to *know* there's already a record in the db with that pk id until it
actually chats with the db. Sure, you can keep a list of ids on the side if
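A sketch of that "list of ids on the side" idea, using plain dicts so it stays self-contained; `session_add` stands in for `Session.add`:

```python
# Skip objects whose primary key was already added this run, instead of
# waiting for the database to reject the duplicate at commit time.
def add_unique(session_add, obj, seen_ids):
    """Add obj via session_add unless its id was already seen."""
    if obj['id'] in seen_ids:
        return False
    seen_ids.add(obj['id'])
    session_add(obj)
    return True
```

This only catches duplicates within the current run; rows already in the database still surface as an IntegrityError at flush/commit time.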
You are the man.
That was it!
Thanks so much. -RVince
--
You received this message because you are subscribed to the Google Groups
sqlalchemy group.
To post to this group, send email to sqlalchemy@googlegroups.com.
To unsubscribe from this group, send email to
I agree, but the thing is that committing every 10 entries is a little
low for me; I was thinking of around 50, in which case having 50
individual commits is quite costly.
In case I choose to implement your method, how would you go about it?
How do you keep objects of the last 50 or whatever
I'm not sure, but I'd check the exception and see if you can get the info
about which of your 50 was the dupe. I don't recall if it's in the
traceback or the exception error. If you can identify it, then you could
store it aside, remove it from the session, and retry the other 49 again.
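A hedged sketch of that retry loop. Whether the offending key is actually recoverable from the real `IntegrityError` is backend-specific; here `commit_batch` is assumed to raise an exception whose first argument is the duplicate id (`ValueError` stands in for `IntegrityError`):

```python
# Retry the batch, setting aside one reported duplicate per failure.
def commit_with_retry(batch, commit_batch):
    dupes = []
    batch = list(batch)
    while True:
        try:
            commit_batch(batch)
            return batch, dupes
        except ValueError as exc:        # stand-in for IntegrityError
            bad = exc.args[0]            # assumed: exception carries the dupe id
            dupes.append(bad)
            batch = [item for item in batch if item != bad]
```

With real SQLAlchemy you would also need `Session.rollback()` before retrying, since a failed flush invalidates the transaction.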
-Original Message-
From: sqlalchemy@googlegroups.com [mailto:sqlalchemy@googlegroups.com]
On Behalf Of Aviv Giladi
Sent: 20 April 2011 15:53
To: sqlalchemy
Subject: [sqlalchemy] Best design for commits?
Hey guys,
I have a Pylons back-end running on SQLAlchemy. I have a script
Thanks again Rick. The issue is that I have a LOT of duplicates
(around 20-30%) - that's just how that tree is structured.
Therefore, I think I am going to go with catching DB exceptions
regardless, but also use an indexed collection to prevent duplicates.
Cheers!
On Apr 20, 12:43 pm, Richard
Hey guys,
I have an SQLAlchemy model with a Unicode column. I sometimes insert
unicode values into it (u'Value'), but also sometimes insert ASCII
strings. What is the best way to go about this? When I insert ASCII
strings with special characters I get this warning:
SAWarning: Unicode type
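One way to avoid that warning is to hand the Unicode column actual text, decoding byte strings up front; a tiny helper sketch (utf-8 is an assumption, substitute the real encoding of your input):

```python
# Coerce values to text before assigning them to a Unicode column, so
# SQLAlchemy never has to guess an encoding for raw byte strings.
def to_text(value, encoding='utf-8'):
    if isinstance(value, bytes):
        return value.decode(encoding)
    return value
```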
On Apr 20, 2011, at 1:09 AM, bukzor wrote:
Thanks Michael. I won't be using or supporting SQL Server, so I'm
quite fine with that.
I guess the way forward would be to install SA in develop mode and
make the changes directly?
No. To create your own SQL constructs, use the compiler extension.
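"The compiler" here is the `sqlalchemy.ext.compiler` extension; a minimal custom construct, adapted from its documented usage pattern (the bracket rendering is just an illustrative choice):

```python
from sqlalchemy.ext.compiler import compiles
from sqlalchemy.sql.expression import ColumnClause

class BracketColumn(ColumnClause):
    """A custom SQL construct, added without patching SQLAlchemy itself."""
    inherit_cache = True

@compiles(BracketColumn)
def _compile_bracket(element, compiler, **kw):
    # Render the column name wrapped in square brackets.
    return "[%s]" % element.name
```

With this registered, compiling `BracketColumn('x')` renders `[x]`, so there is no need to install SA in develop mode and edit it directly.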
On Apr 20, 2011, at 4:37 AM, can xiang wrote:
Hi,
I have a problem with bulk deleting from a sharded session; I searched the
previous posts without finding anything.
I have the following table:
usersession_table = Table('kg_usersession', meta,
my practices with this kind of situation are:
1. There's just one commit() at the end. I'd like the whole operation in one
transaction.
2. There are flush() calls every 100-1000 or so. 10 is very low.
3. I frequently will disable autoflush, if there are many flushes occurring due
to queries for
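Those three points together might look like this sketch; the in-memory SQLite setup and the `Record` model are hypothetical stand-ins for the real schema:

```python
from sqlalchemy import create_engine, Column, Integer, String
from sqlalchemy.orm import declarative_base, sessionmaker

Base = declarative_base()

class Record(Base):
    __tablename__ = 'records'   # hypothetical table for the sketch
    id = Column(Integer, primary_key=True)
    name = Column(String(50))

engine = create_engine('sqlite://')
Base.metadata.create_all(engine)
session = sessionmaker(bind=engine)()

def load_all(rows, flush_every=500):
    """One transaction, one commit; flush in chunks along the way."""
    with session.no_autoflush:          # point 3: no autoflush mid-loop
        for i, name in enumerate(rows, 1):
            session.add(Record(name=name))
            if i % flush_every == 0:    # point 2: periodic flush
                session.flush()
    session.commit()                    # point 1: single commit at the end
```

A flush sends pending SQL inside the open transaction without ending it, so a failure can still roll back the whole operation.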
On Apr 20, 2011, at 12:33 AM, Aviv Giladi wrote:
Hey guys,
I reproduced the problem within my controllers, but I also ran this
test (after fully loading my environment of course):
parent = Node()
parent.id = 1
parent.parent_id = None
parent.name = 'parent'
I'm having a bit of trouble with this snippet from another stackoverflow
question:
http://stackoverflow.com/questions/1623661/sqlalchemy-shallow-copy-avoiding-lazy-loading
The snippet in question:
from sqlalchemy.orm import object_mapper, ColumnProperty, RelationProperty
newobj =
On Apr 20, 2011, at 5:09 PM, Mahmoud Abdelkader wrote:
for model_property in object_mapper(user).iterate_properties:
    if isinstance(model_property, ColumnProperty):
        key = model_property.key
        dictionary[key] = getattr(user, key)
    elif (isinstance(model_property,
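A completed version of that loop, under the assumption that the elided branch checks for relationships and deliberately skips them (the shallow-copy idea from the linked question); the `User` model here is a hypothetical stand-in, and note the class is spelled `RelationshipProperty` in current SQLAlchemy (`RelationProperty` was an older alias):

```python
from sqlalchemy import Column, Integer, String
from sqlalchemy.orm import (ColumnProperty, RelationshipProperty,
                            declarative_base, object_mapper)

Base = declarative_base()

class User(Base):
    __tablename__ = 'users'     # hypothetical model for the sketch
    id = Column(Integer, primary_key=True)
    name = Column(String(50))

def to_dict(user):
    """Copy column values only; skipping relationships avoids lazy loads."""
    dictionary = {}
    for prop in object_mapper(user).iterate_properties:
        if isinstance(prop, ColumnProperty):
            dictionary[prop.key] = getattr(user, prop.key)
        elif isinstance(prop, RelationshipProperty):
            pass    # deliberately skipped: this is the "shallow" part
    return dictionary
```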
Thanks anyway.
It's sad the horizontal shard extension is only considered an example. I
can hardly believe it, because it works so well in some of my simple use
cases. I really hope there will be more work on this extension, or more
docs on how to do it.
Best regards!
can
On Thu, Apr 21, 2011 at 3:19
Thank you for your responses everyone.
I have one more question - the really time-heavy task here is
retrieving the URLs over HTTP (it takes almost a second per URL).
I am using urllib3, which has connection pooling, but other than that,
is there any other way to speed this up? Perhaps
ah well, glad to hear that! It certainly could be more of a core element if it
had some more dedicated maintainers. I haven't actually used it in a real
project so I can't vouch strongly for it. It also might be a nice
third-party project. Sharding is very tough and there's lots of
if the URL fetch is an IO-bound operation (i.e. the time is spent waiting for
IO), it might work if you did a standard producer/consumer model using
Queue.Queue. One thread retrieves data from each URL and places the datasets
into the Queue. The other thread pulls off items and loads them
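A sketch of that producer/consumer model with the stdlib `queue` module (`Queue.Queue` in Python 2); `fetch` and `load` are hypothetical stand-ins for the URL retrieval and the SQLAlchemy load:

```python
import queue
import threading

SENTINEL = None   # marks the end of the stream

def run_pipeline(urls, fetch, load):
    """Fetch URLs in one thread while another loads the results."""
    q = queue.Queue(maxsize=100)   # bounded, so the fetcher can't run away

    def producer():
        for url in urls:
            q.put(fetch(url))      # waits on IO, overlapping with load()
        q.put(SENTINEL)

    t = threading.Thread(target=producer)
    t.start()
    while True:
        item = q.get()
        if item is SENTINEL:
            break
        load(item)
    t.join()
```

This overlaps network wait time with database work; if the fetch were CPU-bound rather than IO-bound, threads would buy little here.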
Hi Michael,
On Apr 20, 11:20 am, Michael Bayer mike...@zzzcomputing.com wrote:
yeah that is pretty awful, Oracle is super picky about names, and there's
already a lot of stuff to appease its limitations with bind parameters. The
bind name is ultimately derived from the key of the column