Absurdly long queries with Postgres 11 during unit tests

2021-06-07 Thread Rich Rauenzahn
This is a heads up in case anyone sees something similar:

I have managed to trigger this degenerate query case in two completely 
different Django 2.2 projects.   In production with a normal sized dataset, 
the query time is fine.  But during unit testing with a small subset of the 
data, the queries took a long time.  A LONG time.

In the most recent case each query took 100x longer.  200+ seconds instead 
of 2 seconds.  The unit test dataset isn't very large because it's a unit 
test.  

I think I may have first seen this when I upgraded the project to postgres 
11.

Manually vacuuming between tests resolves the issue.  (Yes, autovacuum is 
on by default -- and isn't the db created from scratch for each 'manage 
test' invocation?)

This is how I did it:

import logging

from django.db import connection

logger = logging.getLogger(__name__)

def _vacuum():
    # Some unit test queries seem to take a much longer time.
    # Let's try vacuuming.
    # https://stackoverflow.com/a/13955271/2077386
    with connection.cursor() as cursor:
        logger.info("Vacuum: begin")
        cursor.execute("VACUUM ANALYZE")
        logger.info("Vacuum: complete")

class VacuumMixin:
    @classmethod
    def setUpClass(cls):
        _vacuum()
        return super().setUpClass()

    @classmethod
    def tearDownClass(cls):
        ret = super().tearDownClass()
        _vacuum()
        return ret
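Because of the MRO, the mixin's vacuum runs before Django's TestCase.setUpClass() loads class-level fixtures. That ordering can be checked with framework-free stand-ins (FakeTestCase below is a stub standing in for django.test.TestCase, not the real class):

```python
calls = []

def _vacuum():
    calls.append('vacuum')

class FakeTestCase:
    """Stand-in for django.test.TestCase, just to observe call order."""
    @classmethod
    def setUpClass(cls):
        calls.append('setUpClass')

class VacuumMixin:
    @classmethod
    def setUpClass(cls):
        _vacuum()
        return super().setUpClass()

class MyTests(VacuumMixin, FakeTestCase):
    pass
```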

If anyone else sees this, please let me know.  Maybe we can further RCA it.



-- 
You received this message because you are subscribed to the Google Groups 
"Django users" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to django-users+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/django-users/e76b105e-ed53-4031-869c-830f94677ef4n%40googlegroups.com.


Influencing order of internal migration operations

2019-12-11 Thread Rich Rauenzahn
I've created a CompositeForeignKey constraint derived from Django's new 
BaseConstraint.  It requires that I manually add a unique_together in each 
of the referenced models, due to the database's requirements for composite 
foreign keys.

It mostly works -- except that when the operations are defined in the 
migration, the order is critical -- the unique_together's need to be made 
before the composite key SQL.

For now, I just went in and manually changed the order of operations in the 
migration to put the CompositeForeignKeys last.   Another way around would 
be to comment out all the CompositeForeignKey declarations in my models, 
migrate my unique_together's first, and then re-add the 
CompositeForeignKeys.

...I'm wondering if there is a supported way to hook into the migration 
creation process, give priorities to certain kinds of constraints, and then 
sort the operations by this priority before they are emitted into a 
migration.py.
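One way to sketch that "sort by priority" idea without hand-editing: a stable sort over the operations list that pushes the composite-FK operations to the end. CompositeForeignKey below is a stand-in marker class for the custom constraint operation described above, not a real Django API:

```python
class CompositeForeignKey:
    """Stand-in for the custom BaseConstraint-derived operation."""

def priority(op):
    # Lower sorts first; composite FK operations go to the end,
    # after the unique_together changes they depend on.
    return 1 if isinstance(op, CompositeForeignKey) else 0

def reorder(operations):
    # sorted() is stable, so relative order within each group is kept.
    return sorted(operations, key=priority)
```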

Or any other ideas to make this a bit more seamless?

Rich



Migrations appear to run twice when running tests

2019-11-19 Thread Rich Rauenzahn

I'm trying to figure out why my migrations appear to run twice when unit 
testing.  I do have multiple dbs with custom routers to route to 
different databases.

I've tried debugging and tracing the routers to see if that's where the 
issue lies, but I don't know enough about the migration process to know 
what to look for.  

...I'm surprised it even works given that the database is already migrated 
once -- shouldn't the 2nd set of migrations fail?

I'm not expecting an answer -- I think there's not enough information.  

But where should I dig?

Rich


$ ./manage-coverage test --noinput -v 2
nosetests --logging-level=INFO --progressive-advisories --with-timer 
--timer-top-n=10 --with-prowl-report --with-timed-setup-report --with-id 
--id-file=/opt/gitlab-runner/NCiG8AGt/1/redacted-automation/reddash/djproj/.noseids
 
-v --verbosity=2
Using --with-id and --verbosity=2 or higher with nose-progressive causes 
visualization errors. Remove one or the other to avoid a mess.
Creating test database for alias 'default' ('reddash_cicd_3')...
Operations to perform:
  Synchronize unmigrated apps: api, bugzilla, celery, 
django_db_constraints, django_extensions, django_filters, django_nose, 
drf_yasg, humanize, messages, redauto, polymorphic, rest_framework, saml2, 
staticfiles, timezone_field
  Apply all migrations: accounts, admin, auth, contenttypes, dashboards, 
django_celery_beat, django_celery_results, reversion, sessions, tracking
Synchronizing apps without migrations:
  Creating tables...
Running deferred SQL...
Running migrations:
  Applying contenttypes.0001_initial... OK
  Applying contenttypes.0002_remove_content_type_name... OK
  Applying auth.0001_initial... OK
  Applying auth.0002_alter_permission_name_max_length... OK
  Applying auth.0003_alter_user_email_max_length... OK
  Applying auth.0004_alter_user_username_opts... OK
  Applying auth.0005_alter_user_last_login_null... OK
  Applying auth.0006_require_contenttypes_0002... OK
  Applying auth.0007_alter_validators_add_error_messages... OK
  Applying accounts.0001_initial... OK
  Applying tracking.0001_initial... OK
  Applying accounts.0002_auto_20160929_2021... OK
  Applying accounts.0003_auto_20190225_2216... OK
  Applying accounts.0004_token... OK
  Applying accounts.0005_auto_20190306_1804... OK
  Applying tracking.0002_apirequestlog_department... OK
  Applying accounts.0006_dedupe_20190422_1447... OK
  Applying accounts.0007_auto_20190422_1459... OK
  Applying accounts.0008_auto_20190730_2235... OK
  Applying accounts.0009_auto_20190802_1750... OK
  Applying accounts.0010_auto_20190906_0024... OK
  Applying accounts.0011_user_is_redauto... OK
  Applying admin.0001_initial... OK
  Applying admin.0002_logentry_remove_auto_add... OK
  Applying auth.0008_alter_user_username_max_length... OK
  Applying dashboards.0001_initial... OK
[...]
  Applying dashboards.0110_auto_20191118_1742... OK
  Applying django_celery_beat.0001_initial... OK
  Applying django_celery_beat.0002_auto_20161118_0346... OK
  Applying django_celery_beat.0003_auto_20161209_0049... OK
  Applying django_celery_beat.0004_auto_20170221_... OK
  Applying django_celery_beat.0005_add_solarschedule_events_choices... OK
  Applying django_celery_beat.0006_auto_20180210_1226... OK
  Applying django_celery_results.0001_initial... OK
  Applying reversion.0001_squashed_0004_auto_20160611_1202... OK
  Applying sessions.0001_initial... OK
  Applying tracking.0003_auto_20190822_1229... OK
  Applying tracking.0004_auto_20191030_2030... OK
  Applying tracking.0005_auto_20191030_2030... OK
  Applying tracking.0006_remove_apirequestlog_server_name... OK
  Applying tracking.0007_auto_20191030_2038... OK
Creating test database for alias 'bugzilla' (':memory:')...
Operations to perform:
  Synchronize unmigrated apps: api, bugzilla, celery, 
django_db_constraints, django_extensions, django_filters, django_nose, 
drf_yasg, humanize, messages, redauto, polymorphic, rest_framework, saml2, 
staticfiles, timezone_field
  Apply all migrations: accounts, admin, auth, contenttypes, dashboards, 
django_celery_beat, django_celery_results, reversion, sessions, tracking
Synchronizing apps without migrations:
  Creating tables...
Creating table bugs
Creating table bug_fix_by_map
Creating table products
Creating table phases
Creating table versions
Running deferred SQL...
Running migrations:
  Applying contenttypes.0001_initial... OK
  Applying contenttypes.0002_remove_content_type_name... OK
  Applying auth.0001_initial... OK
  Applying auth.0002_alter_permission_name_max_length... OK
  Applying auth.0003_alter_user_email_max_length... OK
  Applying auth.0004_alter_user_username_opts... OK
  Applying auth.0005_alter_user_last_login_null... OK
  Applying auth.0006_require_contenttypes_0002... OK
  Applying auth.0007_alter_validators_add_error_messages... OK
  Applying accounts.0001_initial... OK
  Applying tracking.0001_initial... OK
  Applying accounts.0002_auto_20160929_2021... OK
  

Paginator UnorderedObjectListWarning on union(all=True) of two sorted queries

2019-04-16 Thread Rich Rauenzahn
I wonder if this is a case you want to catch and *not* warn about.

In my case, I'm doing a:

haves = MyModel.objects.filter(
    foreign_relationship=4,
).order_by('foreign_relationship__value', 'common_key')
havenots = MyModel.objects.exclude(id__in=haves).order_by('common_key')

query = haves.union(havenots, all=True)


And I'm using this with a Paginator.  The Paginator thinks the queries are 
not ordered, but they actually are (right?) due to the all=True in the 
union.

Is this a case the warning ought to handle and ignore?

The Django 1.11 source is:

def _check_object_list_is_ordered(self):
    """
    Warn if self.object_list is unordered (typically a QuerySet).
    """
    ordered = getattr(self.object_list, 'ordered', None)
    if ordered is not None and not ordered:
        obj_list_repr = (
            '{} {}'.format(self.object_list.model,
                           self.object_list.__class__.__name__)
            if hasattr(self.object_list, 'model')
            else '{!r}'.format(self.object_list)
        )
        warnings.warn(
            'Pagination may yield inconsistent results with an unordered '
            'object_list: {}.'.format(obj_list_repr),
            UnorderedObjectListWarning,
            stacklevel=3
        )


Should I file a bug?
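In the meantime, one possible workaround is to silence just this warning around the Paginator call. The stub class below stands in for django.core.paginator.UnorderedObjectListWarning (which really is a RuntimeWarning subclass) so the sketch runs standalone; in a project you'd import the real one:

```python
import warnings

# Stub of django.core.paginator.UnorderedObjectListWarning so this
# sketch is self-contained; import the real class from Django instead.
class UnorderedObjectListWarning(RuntimeWarning):
    pass

def paginate_quietly(make_pages):
    """Run a pagination callable with the unordered-list warning silenced."""
    with warnings.catch_warnings():
        warnings.filterwarnings('ignore', category=UnorderedObjectListWarning)
        return make_pages()
```

This only hides the symptom, of course; it doesn't answer whether the union's ordering is actually guaranteed.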



Re: multidb, don't create one of them during unit tests

2018-05-29 Thread Rich Rauenzahn
On Tuesday, May 29, 2018 at 12:09:46 PM UTC-7, Rich Rauenzahn wrote:
>
>
> Is this possible?  How?
>
>
Ah, found this thread which suggests just using sqlite locally... I'll give 
it a try:

https://groups.google.com/forum/#!topic/django-users/jxkTmibjmX4 



multidb, don't create one of them during unit tests

2018-05-29 Thread Rich Rauenzahn
I'm using Django's (1.11) typical default db for my regular Django 
development, but I need to integrate with someone else's bugzilla db server 
running on mysql somewhere else (no web api available, mysql is the 
recommendation from them). So I've added a 2nd DB to my DB config 
pointing to their server.  I've used inspectdb to bring in their schema as 
models.

They have both a production instance and a test instance for reading bugs.  
These are both read only.

I'd like my unit tests to skip creating the test instance of their DB 
(since I'm not even running a mysql server to use, I don't have write 
access to theirs, and their test instance is already populated) and to just 
point my unit tests at their test instance.

I see I can override the TestRunner 

https://stackoverflow.com/questions/5917587/django-unit-tests-without-a-db

setup_databases in django.test.utils seems to take a global keepdb flag, so 
it doesn't seem like one can choose on a per-db basis ...

Is this possible?  How?
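One approach, which is also where the follow-up in this thread ended up, is to swap the external MySQL alias for a local in-memory SQLite database when running tests, so no external server is needed at all. A settings.py sketch; the 'bugzilla' alias comes from this thread, everything else is illustrative:

```python
# settings.py fragment (sketch): when invoked as "manage.py test",
# replace the external read-only alias with in-memory SQLite so the
# test runner can create and destroy it freely.
import sys

if 'test' in sys.argv:
    DATABASES['bugzilla'] = {
        'ENGINE': 'django.db.backends.sqlite3',
        'NAME': ':memory:',
    }
```

The inspectdb-generated models then run against empty local tables, which tests can populate with whatever bug data they need.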



Constraints across foreign keys

2017-12-04 Thread Rich Rauenzahn

Let's say I have:

class Restaurant(Model):
    pass

class Table(Model):
    restaurant = ForeignKey(Restaurant)

class Chair(Model):
    restaurant = ForeignKey(Restaurant)
    table = ForeignKey(Table)

Is there a best practice for ensuring that the chair assigned to a table is 
always from the same restaurant?  These models above assume that we might 
have spare chairs not yet assigned to tables.

My actual case is more like this:

class A(Model):
    pass

class B(Model):
    a = ForeignKey(A)

class C(Model):
    a = ForeignKey(A)

class D(Model):
    b = ForeignKey(B)
    c = ForeignKey(C)

And I want to ensure b.a == c.a.

(I think I just have to manually add db sql constraints in my migrations 
and also override save())
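The save()-time half of that can be sketched as a plain invariant check, framework-free here so it stands alone (the classes are minimal stand-ins for the models above; in Django this would live in D.clean() or an overridden D.save(), alongside a real SQL constraint for defense in depth):

```python
class A:
    pass

class B:
    def __init__(self, a):
        self.a = a

class C:
    def __init__(self, a):
        self.a = a

class D:
    def __init__(self, b, c):
        # Enforce the invariant b.a == c.a before accepting the row.
        if b.a is not c.a:
            raise ValueError("D.b and D.c must reference the same A")
        self.b, self.c = b, c
```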

Are there any well supported / used django packages to manage this for me?  
This https://github.com/rapilabs/django-db-constraints looks promising, but 
is awfully quiet.

Rich




Unexpected poor template performance

2016-09-12 Thread Rich Rauenzahn

I'm working on a project that needs to occasionally render very large 
tables -- I'm picking hundreds of thousands of cells as a test point for 
trying out back end implementation scalability.

I'm parallelizing some of the rendering through ajax calls that return 
large subsets of the rendered cells ("content") within 
a JSON structure.  I'm using a short template to render each cell:


  {{ datapoint.score }}



template = get_template('cell.djhtml')

for datapoint in self._iterate_datapoints():
    datapoint_cell[datapoint.foo_id][datapoint.bar_id] = \
        template.render(dict(datapoint=datapoint))


If I change the template.render() call to just a "%s" % datapoint.score, the code is an order of magnitude 
faster.

My assumption (which I think my reading of the Django code confirmed) is 
that the template is compiled only once, so I'm OK there.

Is this difference surprising or unexpected?  Would a single (consolidated) 
render() be expected to be much faster? (I'm trying to think of a way to 
refactor it to a single render, but I'm not looking forward to rendering 
json in a django template)

I'm also considering that I may just need to return structured JSON and 
create the elements manually on the client side -- although if I have to do 
that manually, I might as well use interpolation in the backend Python 
without using a template...

Oh, and I just tried jinja2 -- it was much closer to interpolation speed.   
Perhaps that is my best answer.

Any thoughts or pointers on my predicament?

Thanks!
Rich



Re: Fixtures won't load twice in same testcase

2016-05-12 Thread Rich Rauenzahn


On Friday, May 6, 2016 at 4:11:42 AM UTC-7, Alasdair Nicol wrote:
>
>
>>
>> But In this particular run I'm currently tracing, rich is already in the 
>> db (as the only entry) as pk=5 (via fixture loading process).   For one, 
>> this tells me the sequence generators aren't always resetting between 
>> fixture loads/tests.
>>
>>  
> Sequences *are not* reset between test cases by default [2]. Perhaps you 
> need to change your code so that it doesn't assume that the user has pk=1, 
> or set reset_sequences = True.
>

My code didn't make that assumption -- it appeared that the fixture loading 
code did.
 

>
> It's difficult to offer any more advice, because you haven't posted any 
> code that can reproduce the problem. It doesn't need to be the actual code, 
> in fact the simpler the example is the better.
>

I spent some cycles trying to reproduce on a fresh Django install, but 
couldn't based on my assumptions.  As I replied later, the problem was a 
setUpClass() creating objects via a factory that persisted between 
TestCase's that did fixture loading, and they conflicted.

Rich 



Re: Fixtures won't load twice in same testcase

2016-05-12 Thread Rich Rauenzahn


On Thursday, May 5, 2016 at 4:22:11 PM UTC-7, Mike Dewhirst wrote:
>
> Are you using setUp() and tearDown() as class methods in your test class? 
>
>
No, the code was using setUpClass(), which is a classmethod. 



Re: Fixtures won't load twice in same testcase

2016-05-05 Thread Rich Rauenzahn

I've been tracing into django core code, and it looks to me like I have a 
case where the fixture has an auth.User(username=rich) with pk=1 in the 
fixture.  But sometimes, as the User fixture with pk=1 is being added 
(updated?) through Model._save_table(), the same User with pk=5 is already 
in the db (it did not exist prior to fixture loading).  This causes the 
django core code (Model._save_table()) to also try to insert another user 
with the same username, causing an integrity error.

I added my own _fixture_setup() that asserted my db was clean prior to 
fixture loading.  It failed!  This led me to find objects being created in 
TestCase.setUpClass() rather than TestCase.setUp(); they were leaking 
across tests, fouling up fixture loading.

What's odd is that django-nose's FastFixtureTestCase hid this problem. 
  I think it affected the ordering and ran all the fixture based test 
cases first before the db was polluted.   (But isn't the DB dropped and 
refreshed after every TestCase?)




Fixtures won't load twice in same testcase

2016-05-04 Thread Rich Rauenzahn

I'm having a strange problem.  My test environment has been working fine, 
but I am upgrading my environment's Django revision slowly, which means I 
also need to move away from django-nose's FastFixtureTestCase.

I'm now at Django 1.7.   I have a TestCase which is more or less...

class Foo(TestCase):  # and I've tried TransactionTestCase

    fixtures = ['accounts.json', 'config.json', 'runtimedata.json']

    def test1(self):
        pass  # yes, literally.

    def test2(self):
        pass  # literally, pass right now

If I split test1 and test2 into two classes, the TestCase works fine.

Otherwise, my runtimedata app fixture ends up with a duplicate row for the 
second test during the fixture setup phase.   When I turned on postgres 
(9.2) logging, I get:

LOG:  statement: UPDATE "runtimedata_branch" SET "name" = 'mock' WHERE 
"runtimedata_branch"."id" = 1
LOG:  statement: INSERT INTO "runtimedata_branch" ("id", "name") VALUES (1, 
'mock')
ERROR:  duplicate key value violates unique constraint 
"runtimedata_branch_name_49810fc21046d2e2_uniq"
DETAIL:  Key (name)=(mock) already exists.
STATEMENT:  INSERT INTO "runtimedata_branch" ("id", "name") VALUES (1, 
'mock')
LOG:  statement: ROLLBACK
LOG:  statement: DROP DATABASE "test_db"

If I grep for runtimedata_branch in the postgres logs, I get the following, 
starting after the TRUNCATE caused by TransactionTestCase:

LOG:  statement: TRUNCATE [...], "runtimedata_branch", [...]; #
LOG:  statement: SELECT "runtimedata_branch"."id", 
"runtimedata_branch"."name" FROM "runtimedata_branch" WHERE 
"runtimedata_branch"."name" = 'mock' LIMIT 21
LOG:  statement: INSERT INTO "runtimedata_branch" ("name") VALUES ('mock') 
RETURNING "runtimedata_branch"."id"
LOG:  statement: UPDATE "runtimedata_branch" SET "name" = 'mock' WHERE 
"runtimedata_branch"."id" = 1
LOG:  statement: INSERT INTO "runtimedata_branch" ("id", "name") VALUES (1, 
'mock')
ERROR:  duplicate key value violates unique constraint 
"runtimedata_branch_name_49810fc21046d2e2_uniq"
STATEMENT:  INSERT INTO "runtimedata_branch" ("id", "name") VALUES (1, 
'mock')

You can see a different pattern for the first pass fixture loading for 
test1,

LOG:  statement: INSERT INTO "runtimedata_branch" ("name") VALUES ('mock') 
RETURNING "runtimedata_branch"."id"
LOG:  statement: UPDATE "runtimedata_branch" SET "name" = 'mock' WHERE 
"runtimedata_branch"."id" = 1

The main difference I can see is the first time, the tables are generated 
from scratch, the second the tables have been truncated.  Note, I get a 
very similar outcome with TestCase which does rollbacks instead of 
truncates.

Any ideas?  This is very odd.

Also, my fixtures are dumped with --natural.  I've just tried it without 
--natural and I get the same outcome.





Extending Form to include 'placeholder' text

2016-03-22 Thread Rich Rauenzahn
Hi,

I'd like to make a mixin to set the placeholder text in the widget attrs.

Reading this https://code.djangoproject.com/ticket/5793, it seems that 
extending Meta for custom fields is somewhat discouraged.

Would the following be the recommended way to do it?  Just adding a class 
variable to the Form class?

class PlaceHolderMixin(object):

    placeholders = {}

    def __init__(self, *args, **kwargs):
        super(PlaceHolderMixin, self).__init__(*args, **kwargs)
        for field_name, field in self.fields.items():
            if field_name not in self.placeholders:
                continue
            self.fields[field_name].widget.attrs['placeholder'] = \
                self.placeholders[field_name]


class MyForm(PlaceHolderMixin, ModelForm):

    placeholders = {
        'name': 'blah blah',
        'owner': 'blah blah',
    }

    class Meta:
        model = MyModel



Any interest in update_fields=dict(...)?

2014-06-10 Thread Rich Rauenzahn

I have a use case where I am retroactively adding save(update_fields=...) 
to an existing code base.  Sometimes the code is structured such that 
'self' of the model is modified in other helper methods of the model, but 
not saved immediately.  It seems like it would be useful to restructure the 
code such that these other methods modify a dict I pass around, and then 
when I am ready to save, I pass that dict to save() -- rather than guessing 
-- and hoping -- that I've listed (and kept up to date!) all of the 
relevant fields in the update_fields list.

It's kinda like:

class Foo(Model):

    def _go_do_the_complicated_stuff(self):
        self.a = ...
        self.b = ...
        ...

    def doit(self):
        self._go_do_the_complicated_stuff()
        self.done = True
        self.save(update_fields=[u'???'])

Where it could be:

class Foo(Model):

    def _go_do_the_complicated_stuff(self):
        return dict(a=...,
                    b=...,
                    ...)

    def doit(self):
        updates = self._go_do_the_complicated_stuff()
        updates['done'] = True
        self.save(update_fields=updates)

I like this because it is easy to inspect what is getting saved (and should 
be saved) at save time.

Implementation-wise, django would check update_fields for a dict (or just 
use another key) and do something like:

for attr, value in d.iteritems():
    setattr(self, attr, value)
self.save(update_fields=d.keys())

Now, I know I can just make my own mixin (maybe add a save_dict()) to do 
this, but it seemed like something worth sharing.



Rich
