On Oct 6, 2:03 am, Malcolm Tredinnick <[EMAIL PROTECTED]> wrote:
> On Sun, 2008-10-05 at 14:11 -0700, mrts wrote:
> > Looking at the path Honza has taken [1], it looks that it both
> > complicates things and causes overhead -- for every constraint on
> > an model object, an extra db call for checking it is made,
> > instead of relying on the constraint checks enforced by the db
> > backend and corresponding exceptions during `save()`
> > (e.g. for uniqueness validation, one db call is needed to check
> > the uniqueness constraint and a second one to actually save the
> > object).
>
> Except that very few validation requirements correspond to database
> constraints (for example any reg-exp matching field, or limits on an
> integer, etc). We aren't about to require database check constraints for
> all of those. So it's not really a one-to-one in practice. You've
> misread the patch quite badly,
Now that I re-read the first section of my post, I do see that I made the mistake of being over-emphatic and making an incorrect generalization ("for every constraint on an model object, an extra db call for checking it is made") -- sorry for that. My point was that validating non-db constraints (like regex matching of an EmailField) could remain the responsibility of form validation, as it is now. So, as at present, the forms layer could take care of everything that can be handled without touching the database (including max_length, not null and choices), and model forms would additionally take care of IntegrityErrors by catching them, augmenting _errors[] with the relevant data, and re-raising them. And as IntegrityErrors can only be reliably caught during save(), model forms have to be handled differently.

I had mostly backwards-compatibility and simplicity in fixing e.g. #8882 in mind (as validate_unique() in forms/models.py lets IntegrityErrors through anyway as of 1.0 and cannot be relied on). But if you think this is a bad idea, so be it.

- copied from below: -

> There needs to be a clear phase prior to saving when
> you can detect validation errors so that they can be correctly presented
> back to the user. You see this already with forms where we check for
> validity and, if it's not valid, we present the errors. If it is valid,
> we move onto doing whatever we want with the data (saving it or
> whatever).

General model validation is of course conceptually correct and good -- if the responsibility assignment between form and model validation is sorted out in a non-duplicated and elegant manner, unique checks are handled efficiently, and the code lands in a month or two, then all is well. Meanwhile, people who want to use inline formsets with unique validation are stuck on #8882 (as it looks like it takes quite an effort to fix that with the current codebase -- and I'd like to be wrong here).
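To make the catch-and-augment idea concrete, here is a minimal sketch of the flow I mean. It is not Django's actual API: sqlite3 stands in for the db backend, and `save_with_errors` and the `errors` dict are hypothetical names chosen for illustration. The point is only that a uniqueness violation raised at save time is translated into a form-style field error instead of propagating as a 500.

```python
# Sketch only: sqlite3 stands in for the db backend; save_with_errors
# and the errors dict are hypothetical, not Django API.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE person (email TEXT UNIQUE)")

def save_with_errors(conn, email):
    """Try to save; on a uniqueness violation, return form-style
    errors instead of letting the exception propagate."""
    errors = {}
    try:
        conn.execute("INSERT INTO person (email) VALUES (?)", (email,))
        conn.commit()
    except sqlite3.IntegrityError:
        # Map the backend constraint violation to a field error,
        # as a model form could do in _errors[].
        errors["email"] = ["Person with this Email already exists."]
    return errors

print(save_with_errors(conn, "a@example.com"))  # {} -- saved fine
print(save_with_errors(conn, "a@example.com"))  # error dict, no 500
```

Nothing here needs a pre-emptive SELECT: the single INSERT either succeeds or reports the violation.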
> it sounds like: only the unique
> requirements have to check the database (and a pre-emptive check is
> reasonable there, since it's early error detection and there's only a
> small class of applications that are going to have such highly
> concurrent overlapping write styles that they will pass that test and
> fail at the save time).

- copied from below: -

> The only time there's any kind of overlap is when there's a database
> constraint such as uniqueness which we cannot guarantee will remain true
> between the validation step and the saving step. So there's a chance
> that save() will raise some kind of database integrity error. But that's
> actually the edge-case and in a wide number of use-cases it's
> practically zero since you know that your application is the only thing
> working with the database.

And is your take just to ignore that case and actually let e.g. the admin display 500 pages? It's an ordinary race condition (some of which are indeed rare in practice but nevertheless usually taken care of), and you yourself have generally advocated pushing these kinds of problems down to the relevant backends -- which is nice and proper. Applying the same principle to this case as well (and I can't think of anything other than try save() / catch IntegrityError) seems to me the only viable way to avoid the race.

And I don't buy the "your application is the only thing working with the database" argument -- it is, but it happens to be multi-process or multi-threaded. How am I supposed to prevent threads or processes A and B from running validate_unique() for some x concurrently, both finding that it doesn't exist, and both proceeding happily with saving, so that one of them chokes on an IntegrityError?

You received this message because you are subscribed to the Google Groups "Django developers" group.
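P.S. The race above is easy to demonstrate. The sketch below simulates the interleaving with sqlite3 standing in for the backend and a hypothetical `validate_unique` helper: both writers run the pre-emptive uniqueness check before either saves, so both checks pass, and the second INSERT still raises IntegrityError. Only try/except around the save catches this reliably.

```python
# Sketch of the check-then-save race: sqlite3 stands in for the backend;
# validate_unique is a hypothetical helper, not Django's actual function.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (x TEXT UNIQUE)")

def validate_unique(conn, x):
    # pre-emptive check, as a validate_unique() pass would do
    row = conn.execute("SELECT COUNT(*) FROM t WHERE x = ?", (x,)).fetchone()
    return row[0] == 0

# Interleaving: writers A and B both validate before either saves.
assert validate_unique(conn, "spam")  # writer A: looks free
assert validate_unique(conn, "spam")  # writer B: also looks free

conn.execute("INSERT INTO t (x) VALUES (?)", ("spam",))  # A saves fine
try:
    conn.execute("INSERT INTO t (x) VALUES (?)", ("spam",))  # B chokes
    raced = False
except sqlite3.IntegrityError:
    raced = True  # the pre-emptive check did not prevent this

print(raced)  # True
```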