I believe any kind of delete will also clobber any model instances linked
to or from the object getting deleted.

One workaround was to implement delete() on the model, which nulls the
relationships first... That works, except that querysets with more than
one instance will NOT call the custom delete() method, so a bulk delete
still clobbers things.
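
To be concrete, the workaround looks something like this (the model and
field names are made up for illustration, not my real schema):

    from django.db import models

    class Device(models.Model):
        name = models.CharField(max_length=64)

        def delete(self):
            # Detach related IPs instead of letting the cascade take them.
            self.ipaddress_set.all().update(device=None)
            super(Device, self).delete()

    class IPAddress(models.Model):
        address = models.IPAddressField()
        # Nullable on purpose: losing the device should not lose the IP.
        device = models.ForeignKey(Device, null=True, blank=True)

A single device.delete() goes through that method, but something like
Device.objects.filter(...).delete() skips it entirely and cascades
straight into IPAddress.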

However we do it, the simple problem is that I'm scared to death, because
my application is an inventory-style app with numerous relationships on
almost every model: subnets with IPs linked to devices, VLANs, contacts,
locations, applications, switchports, etc. Basic network management
stuff. I just feel like my testing workload has increased significantly,
because a single missed relationship means I'll lose all kinds of data
and look horrible to my customers... Like walking on ice.

I've accounted for most cases I could think of where something might get
deleted, except for the problem of IP addresses being so numerous. The
non-bulk methods slow my app down to the point that removing a batch of
objects takes 5-20 minutes, where previously it took 1-2 seconds (while
ALSO deleting my other objects and ruining my day). So the simple fact is
that the "safe" delete is many, many times slower while deleting fewer
objects. Because deleting less takes longer, I know this isn't a DB
problem. The problem is that the suggested code for these safe deletes
handles each object one by one, which is not exactly the best method.
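
To make the slowdown concrete, the difference is roughly the following,
reusing the illustrative models above plus an assumed Interface model
with a nullable FK to IPAddress, a subnet FK on IPAddress, and "subnet"
being the subnet instance getting cleaned up:

    # One-by-one "safe" delete: a few queries per IP address, which is
    # where the 5-20 minutes goes.
    for ip in IPAddress.objects.filter(subnet=subnet):
        ip.delete()

    # Bulk equivalent: a fixed number of queries no matter how many IPs
    # there are, but I have to remember to detach every inbound
    # relationship myself or QuerySet.delete() cascades into it.
    Interface.objects.filter(ip__subnet=subnet).update(ip=None)
    IPAddress.objects.filter(subnet=subnet).delete()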

I understand custom cursors could fix that problem, but honestly I think
it's the wrong solution... I simply feel the problem was created by
Django, artificially, and Django should provide a sane workaround that
doesn't involve rewriting every delete view and hook (REST interfaces and
the like). I want the DB to act like it normally would, which doesn't
follow this cascade data clobbering nonsense.
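
For the record, by custom cursors I mean dropping to raw SQL along these
lines (the table and column names are invented to match the illustrative
schema above, and commit_unless_managed is the usual dance for raw
writes):

    from django.db import connection, transaction

    def purge_subnet_ips(subnet_id):
        cursor = connection.cursor()
        # Detach anything pointing at the doomed IPs, then delete in bulk.
        cursor.execute(
            "UPDATE inv_interface SET ip_id = NULL WHERE ip_id IN "
            "(SELECT id FROM inv_ipaddress WHERE subnet_id = %s)",
            [subnet_id])
        cursor.execute(
            "DELETE FROM inv_ipaddress WHERE subnet_id = %s",
            [subnet_id])
        transaction.commit_unless_managed()

It's fast, but every delete path in the app then has to know about raw
table names, which is exactly what the ORM was supposed to save me from.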

It almost makes me want to ditch Django's ORM completely. I'm only saying
this because many people seem to downplay the seriousness of the problem,
but in reality it's turning people away.

I know for sure this will hurt adoption by anyone who needs many
relationships and already has a large codebase... Sure, starting from
scratch we could design around it, but those of us with 5000+ lines of
views and a few thousand lines of models will certainly feel like I do
and regret not fully understanding that a model's delete() method won't
cover all cases, as we previously thought. As stated previously, it's
like walking on ice; the entire consistency of my application is at stake
after 6 months of development!

Thanks for any and all ideas/suggestions/workarounds/patches.

Cheers,
Zach

On Dec 11, 9:36 am, "Jeremy Dunck" <[EMAIL PROTECTED]> wrote:
> On Wed, Dec 10, 2008 at 3:04 PM, David Cramer <[EMAIL PROTECTED]> wrote:
>
> > To be perfectly honest, I'm +1 on a solution. I didn't realize this
> > was happening either.
>
> > Time to monkey-patch QuerySet's delete() method :)
>
> > (I'm also a bit curious as to how its handling cascades in MySQL when
> > it's not looping through each object)
>
> I'd like to make sure I understand what we're talking about here.
>
> model_instance.delete() nulls any instances with nullable FKs to the
> deleted instance, right?
>
> The problem AFAICS, is that QuerySet.delete is not so careful, and
> deletes all FK records, even if they are nullable?
>
> And was it always so, or was this introduced in QuerySet-refactor, or
> at some other time?  I don't see a mention on
> BackwardsIncompatibleChanges and friends.