Re: More efficient negative lookups

2010-10-28 Thread George Vilches
On Oct 27, 2010, at 5:55 PM, Jacob Kaplan-Moss wrote:

> On Wed, Oct 27, 2010 at 4:32 PM, Adrian Holovaty  wrote:
>> I'm inclined to say we do the former -- restore the "ne" lookup type
>> -- because it's a quick fix, and ask somebody to write up a patch for
>> the latter. Does anybody have strong opinions against this? If not, I
>> can restore the "ne" lookup type.
> 
> Sounds like a good plan to me (especially making simple excludes faster).
> 
> However, just for the record I think the reason we decided to remove
> __ne is the first place was that its existence introduces a weird
> inconsistency with regard to other lookup types. That is, if there's a
> "ne" why isn't there a "nstartswith" or "nrange" or ... ? I think down
> that path lies madness so I'm +0 on bringing back "ne" with the
> proviso that we agree it's not the first step down a slippery slope
> towards "nistartswith" and friends.

I know it's been a little while since I've made any major ORM contributions, 
but I'd say -0 on __ne, and +1 on making exclude() generate better code.  
Django's worked far too hard on making things as consistent as possible to let 
something like this slip by just because we don't want to muddy our hands with 
slightly harder work in the exclude() code.  So many other tickets have been 
stuck in DDN/Accepted forever because that area of code is harder to review; 
it's not like it's an unknown state in the project. :)

I'd even be willing to throw my hat in the ring to contribute towards an 
.exclude()-based solution if someone else doesn't step forward, but I know I 
won't be touching it until a few days pass.
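The equivalence at stake can be sketched in plain Python (a toy illustration, not Django's ORM internals):

```python
# Toy illustration (not Django's ORM): a hypothetical field__ne lookup
# and exclude(field=value) must select exactly the same rows.
rows = [{"name": "a"}, {"name": "b"}, {"name": "a"}]

def filter_ne(items, field, value):
    # what a restored __ne lookup would mean: field != value
    return [r for r in items if r[field] != value]

def exclude_eq(items, field, value):
    # what exclude(field=value) means: NOT (field = value)
    return [r for r in items if not (r[field] == value)]

assert filter_ne(rows, "name", "a") == exclude_eq(rows, "name", "a") == [{"name": "b"}]
```

The argument for improving exclude() rather than restoring __ne is exactly that the two are semantically interchangeable for the simple case, so only one spelling needs to exist.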

George

-- 
You received this message because you are subscribed to the Google Groups 
"Django developers" group.
To post to this group, send email to django-develop...@googlegroups.com.
To unsubscribe from this group, send email to 
django-developers+unsubscr...@googlegroups.com.
For more options, visit this group at 
http://groups.google.com/group/django-developers?hl=en.


Re: Process discussion: reboot

2010-04-19 Thread George Vilches

On Apr 19, 2010, at 5:16 PM, Mike wrote:

> For the project of such exposure as Django the number of _active_ core
> members that actually do work on trunk and are participating in the
> decision making process is extremely small. Quick and dirty statistic
> on
> trunk commits shows that more than 75% of the work in _trunk_  is done
> by just 4 developers [1] and from this list it seems that not much
> more are
> really involved into design decision making either.

It does appear true that we're a little light on active core devs right now.  
Can I propose Alex Gaynor for commit bit?  Seriously, why hasn't someone else 
proposed this already? :)

George




Re: Opinions sought on m2m signal ordering

2010-03-28 Thread George Vilches

On Mar 27, 2010, at 1:08 PM, Russell Keith-Magee wrote:

> Option 3: We modify the existing signals so we have a pre-post pair
> for every signal. This maintains the analog with pre/post save, and
> gives the most control. For example, on Alex Gaynor has suggested to
> me that some people might want to use a pre-add signal rather than a
> post-add signal for cache invalidation since there is a marginally
> lower chance of getting a race condition. However, signals aren't free
> -- an unattached signal is roughly equivalent to the overhead of a
> function call.

I'm +1 for this.  But, I'm also the person who proposed it on the ticket. :)  
I've had uses for both pre- and post- signals on at least the "changed" action, 
and have worked around "add"/"remove" only firing on one side or the other.  
I'd happily eat the extra function call's worth of overhead for the general case.
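A minimal sketch of what a pre/post pair around an m2m "add" might look like, using a toy dispatcher (the names pre_add/post_add and the pk_set argument are illustrative, not Django's actual signals API):

```python
# Toy dispatcher sketching a pre/post signal pair for m2m "add".
class Signal:
    def __init__(self):
        self.receivers = []

    def connect(self, receiver):
        self.receivers.append(receiver)

    def send(self, **kwargs):
        # An unattached signal costs roughly one function call: send()
        # iterates an empty receiver list and returns.
        for receiver in self.receivers:
            receiver(**kwargs)

pre_add, post_add = Signal(), Signal()
log = []

# A cache-invalidation receiver might prefer pre_add, per the
# race-condition point above.
pre_add.connect(lambda instance, pk_set: log.append(("pre", pk_set)))
post_add.connect(lambda instance, pk_set: log.append(("post", pk_set)))

def m2m_add(instance, pk_set):
    pre_add.send(instance=instance, pk_set=pk_set)
    instance.setdefault("related", set()).update(pk_set)
    post_add.send(instance=instance, pk_set=pk_set)

obj = {}
m2m_add(obj, {1, 2})
```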

George




Re: Logging instead of connection.queries

2009-08-04 Thread George Vilches

On Aug 3, 2009, at 6:53 PM, Malcolm Tredinnick wrote:

>
> On Mon, 2009-08-03 at 15:48 +0400, Ivan Sagalaev wrote:
>> Hello!
>>
>> A couple of days ago a sudden thought struck me while thinking on
>> the matter of logging in Python libraries. I consider it good practice
>> for a library to log what it's doing into a named logger without
>> setting up logging by itself. It's then the responsibility of an
>> application that uses the library to set up logging as it sees fit.
>
> Adrian, in particular, has been historically against adding logger
> module hooks in Django. So you have to work around that.

I'd like to point at a ye olde ticket that Adrian did support for  
exactly this purpose: http://code.djangoproject.com/ticket/5415

Putting signals on a replacement CursorWrapper would give the same  
functionality from a user standpoint (the ticket outright says "This  
will enable all sorts of interesting and useful things, such as  
logging and debugging functionality"), and already has some blessing.   
The patch will have to be brought up to date, but I did address all of  
Malcolm's concerns at the time with the most recent version of the  
patch, and the worries about signal performance in #4561 have since  
been resolved.

What say you, Malcolm? :)
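The CursorWrapper idea can be sketched in plain Python (a toy wrapper with plain callbacks standing in for Django's signal machinery; none of these names come from the actual patch):

```python
import sqlite3

class LoggingCursorWrapper:
    """Toy cursor wrapper: notify listeners around every execute().

    'listeners' stands in for signal receivers; a real patch would
    dispatch Django signals here instead of calling plain functions.
    """
    def __init__(self, cursor, listeners):
        self.cursor = cursor
        self.listeners = listeners

    def execute(self, sql, params=()):
        result = self.cursor.execute(sql, params)
        for listener in self.listeners:
            listener(sql, params)   # e.g. logging, debugging, timing
        return result

log = []
raw = sqlite3.connect(":memory:").cursor()
cur = LoggingCursorWrapper(raw, [lambda sql, params: log.append(sql)])
cur.execute("CREATE TABLE t (x INTEGER)")
cur.execute("INSERT INTO t VALUES (?)", (1,))
assert log == ["CREATE TABLE t (x INTEGER)", "INSERT INTO t VALUES (?)"]
```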



Re: Reverse mappings raising exceptions

2009-04-13 Thread George Vilches


On Apr 13, 2009, at 5:55 AM, Shai Berger wrote:

> I think a stronger case can be made: The reverse OneToOne relation  
> just should
> not throw DoesNotExist, always returning None when there is no  
> matching
> object. This is a realization of the idea that "OneToOne" relations  
> are
> really, in a total majority of cases, "OneToZeroOrOne" relations.

I've been down this road before, and have been shot down with comments  
along the lines of "what you really want is a ForeignKey(unique=True,  
null=True)".  The problem is that you can't easily follow the reverse  
of that (it requires a separate query, which could get very  
expensive).  #7270 attempts to make that case not require an extra  
query for both 1-to-1 fields and FK(unique=True), and to give  
FK(unique=True) an accessor that makes it syntactically similar to  
1-to-1.  The current leaning on that ticket, the last time I talked  
with Malcolm, is that the 1-to-1 portion is very likely to be accepted;  
the FK(unique=True) portion is less likely.  However, if you had the  
FK part, would that resolve most of your issues, with the exception of  
the model inheritance cases?

Thanks,
George




Re: select_related to work with backward relationships?

2009-03-23 Thread George Vilches


On Mar 23, 2009, at 8:08 PM, Malcolm Tredinnick wrote:

> It is documented in that respect. In a couple of different Trac  
> tickets
> (since there are multiple issues: select related for reverse one-to- 
> one,
> which only isn't in 1.1-beta because I ran out of time to fix the  
> patch,
> and select-related for multi-valued relations).

I'll happily bring the patch up to date on #7270 for 1.1 if it's just  
a matter of you running out of time.  You took over the ticket the day  
I started looking back into it, so I let it be till I heard more from  
you. :)

George




Re: Small URLconf suggestion

2009-02-22 Thread George Vilches

"Those are most *NOT* definitely the same".  Typos too early in the  
morning, sorry about that.

On Feb 22, 2009, at 8:29 AM, George Vilches wrote:

>
>
> On Feb 22, 2009, at 7:44 AM, rihad wrote:
>> I was just special-casing the backslash, which is special anyway,
>> otherwise APPEND_SLASH wouldn't exist.
>> Moreover, hello and hello/ (and hello/) _are_ the same URL to
>> Django, otherwise it wouldn't redirect to the url with the slash
>> appended. APPEND_SLASH looks more like a hack, so why not treat the
>> URLs the same from the beginning? This is all I was proposing.
>
> This statement is wholly inaccurate.  Those URLs are only the same to
> Django in the case where a rule is written to accept them that way,
> and the webserver above is nice about the treatment of them.  Take
> this example:
>
> urlpatterns = patterns('',
> (r'hello', 'view1'),
> (r'hello/', 'view2'),
> )
>
> Those are most definitely the same, and this is by choice and by user
> convention.  I am a strong -1 to your ideas about not making slash
> default, or that anything here is a /.  If APPEND_SLASH was a hack,
> then why does every webserver have equivalent functionality to emulate
> the add a trailing / behavior?
>
> gav
>
> 




Re: Small URLconf suggestion

2009-02-22 Thread George Vilches


On Feb 22, 2009, at 7:44 AM, rihad wrote:
> I was just special-casing the backslash, which is special anyway,
> otherwise APPEND_SLASH wouldn't exist.
> Moreover, hello and hello/ (and hello/) _are_ the same URL to
> Django, otherwise it wouldn't redirect to the url with the slash
> appended. APPEND_SLASH looks more like a hack, so why not treat the
> URLs the same from the beginning? This is all I was proposing.

This statement is wholly inaccurate.  Those URLs are only the same to  
Django in the case where a rule is written to accept them that way,  
and the webserver above is nice about the treatment of them.  Take  
this example:

urlpatterns = patterns('',
    (r'hello', 'view1'),
    (r'hello/', 'view2'),
)

Those are most definitely the same, and this is by choice and by user  
convention.  I am a strong -1 to your ideas about not making slash  
default, or that anything here is a /.  If APPEND_SLASH was a hack,  
then why does every webserver have equivalent functionality to emulate  
the add a trailing / behavior?
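The point can be demonstrated with re alone (a sketch using the regex module directly rather than Django's resolver):

```python
import re

# Unanchored, as in the urlpatterns above: r'hello' prefix-matches both
# paths, so which view fires depends purely on pattern ordering.
assert re.match(r'hello', 'hello') is not None
assert re.match(r'hello', 'hello/') is not None

# Only an anchored pattern actually distinguishes the two URLs:
assert re.match(r'^hello$', 'hello') is not None
assert re.match(r'^hello$', 'hello/') is None
```

So the two URLs are only "the same" when the URLconf author writes patterns that treat them the same.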

gav




Re: BitmaskField

2008-12-05 Thread George Vilches

Unfortunately, Malcolm has shot this down in the past as something  
that would be included in Django:

http://groups.google.com/group/django-developers/browse_thread/thread/4cc529b95c9efe20/439e90ed09cbcf2e

Theoretically, you can do this with a Q object, although I have not  
tried since Django 1.0 was released.  I definitely have a use for this  
as well, and as can be seen from my original post, every DB engine  
that Django includes a driver for has support for at very least the &  
and | operators.

gav


On Dec 5, 2008, at 8:20 AM, [EMAIL PROTECTED] wrote:

>
> I would use this. The one thing I don't see covered in your example is
> setting flags. I would look at allowing a list or tuple of integers.
> Using your example:
>
> p = Person(name='John Doe', flags=[PeopleFlags.Male,
> PeopleFlags.Employed])
>
> - Justin
>
> On Dec 4, 5:16 pm, "Craig Kimerer" <[EMAIL PROTECTED]> wrote:
>> Apologies if this has been asked already and I have missed it in  
>> searching,
>> but is there any interest in taking a patch for a BitmaskField?
>>
>> Given the following (albeit stupid) example to show some usages  
>> that would
>> be nice to have on a bitmask field.  I should note in the examples  
>> below,
>> the names I have chosen I am not sold on, but work well enough to  
>> describe
>> what is going on in this example.
>>
>> class Person(models.Model):
>> name = models.TextField()
>> flags = models.BitmaskField()
>>
>> class PeopleFlags(object):
>> NoFlag = 0
>> Male = 1
>> Female = 2
>> Student = 4
>> Unemployed = 8
>> Employed = 16
>>
>> Example filter API:
>>
>> Finding all unemployed students:
>> Person.objects.filter(flags__all=[PeopleFlags.Unemployed,
>> PeopleFlags.Student])
>>
>> Finding all females who are students or unemployed
>> Person.objects.filter(flags__is=PeopleFlags.Female).filter(flags__any=[PeopleFlags.Student, PeopleFlags.Unemployed])
>>
>> Obviously there are some special cases, like you couldn't use the  
>> same logic
>> if someone wanted to find 'All people with NoFlags'.  By default 0  
>> would
>> have to be special cased for '= 0' instead of '& 0'.
>>
>> I dont have the code currently written, but I am willing to put  
>> some work
>> into it if this is a feature that people (other than me) think  
>> would be
>> useful.
>>
>> Craig
> 
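The bitmask semantics in Craig's example can be sketched in plain Python (the flags__all/flags__any names are from his example; this implementation is illustrative only, not a BitmaskField):

```python
# Plain-Python sketch of the proposed bitmask lookups.
class PeopleFlags:
    NoFlag, Male, Female, Student, Unemployed, Employed = 0, 1, 2, 4, 8, 16

def combine(flags):
    mask = 0
    for f in flags:
        mask |= f
    return mask

def has_all(value, flags):
    # flags__all: every requested bit is set.
    mask = combine(flags)
    # Special case from the thread: "no flags" must use '= 0', not '& 0'.
    return value == 0 if mask == 0 else (value & mask) == mask

def has_any(value, flags):
    # flags__any: at least one requested bit is set.
    return (value & combine(flags)) != 0

person = PeopleFlags.Male | PeopleFlags.Employed  # value 17
assert has_all(person, [PeopleFlags.Male, PeopleFlags.Employed])
assert not has_all(person, [PeopleFlags.Student, PeopleFlags.Unemployed])
assert has_any(person, [PeopleFlags.Employed, PeopleFlags.Student])
```

Every backend's & and | operators map directly onto these Python expressions, which is why the field is implementable across the bundled drivers.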




Re: Proposal: Make filters in admin persistent (#6903)

2008-11-11 Thread George Vilches

I wasn't thinking that the filters would be preserved across  
*different* models or admin pages, only within them.  So, you'd keep  
some sort of dictionary, keyed based on the particular admin URL,  
model, or some other easily achievable unique piece of information per  
screen.  I wouldn't expect that just because I was filtering on e-mail  
address in one section of the admin app, it would automagically apply  
that filter wherever else it could. :)

Would preserving filters open the door to also preserving the current  
ordering and current page number in the admin view?  I would suspect  
that if you're saving one, you'd really want to save all three,  
because they all go together insofar as returning you to *exactly* the  
view you were last using.  If you're going to preserve it at all,  
might as well do it right and preserve it as pristinely as possible.
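A sketch of the idea, using a plain dict in place of request.session (the 'admin_state' key and helper names here are hypothetical, not from the patch):

```python
# Sketch: persist changelist state per admin URL in the session, so
# filters on one model never bleed into another model's changelist.
def save_state(session, url, filters, ordering, page):
    state = session.setdefault("admin_state", {})
    state[url] = {"filters": filters, "ordering": ordering, "page": page}

def load_state(session, url):
    return session.get("admin_state", {}).get(url)

def clear_state(session, url):
    # would back a visible "clear filters" link
    session.get("admin_state", {}).pop(url, None)

session = {}
save_state(session, "/admin/auth/user/",
           {"email__icontains": "example.com"}, "-date_joined", 3)
assert load_state(session, "/admin/auth/user/")["page"] == 3
assert load_state(session, "/admin/blog/post/") is None  # state is per-URL
clear_state(session, "/admin/auth/user/")
assert load_state(session, "/admin/auth/user/") is None
```

Saving filters, ordering, and page number together is what restores *exactly* the view the user left.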

On Nov 11, 2008, at 5:34 PM, David Cramer wrote:

>
> I guess you're right. This is just for admin so it's not a huge deal.
>
> It will feel weird howerver, if I somehow go to a search results page
> and it remembers my filters when I was visiting something else before
> that. So it'd need to handle clearing it at the right time as well.
>
> On Nov 11, 3:51 pm, George Vilches <[EMAIL PROTECTED]> wrote:
>> Don't sessions already have standard expirations in them?  Besides
>> that, this is the admin, it's not a tool for every user of your
>> application to be in, so there will only be a few larger sessions  
>> (and
>> larger is still only a few K at most, if you have lots and lots of
>> models you're filtering on).  And yes, it would only keep it for the
>> length of that user's session, I don't magically expect it to be able
>> to suddenly transfer to another user.  If I wanted that  
>> functionality,
>> I would ask for something to be added to the admin that would print
>> out a URL that you could give to another user to get the filtering  
>> you
>> were just using.  Which sounds handy, but is a separate ticket from
>> what we're discussing here.
>>
>> On Nov 11, 2008, at 4:47 PM, David Cramer wrote:
>>
>>
>>
>>> Well I'm not sure storing multiple search paths is too good of an
>>> idea, as you increase the size of the session significantly, and  
>>> then
>>> have to worry about expiring those or clearing them somehow. The
>>> session just keeps it for that users session, vs whoever else  
>>> happens
>>> to visit that url (say I pass it off to a coworker).
>>
>>> On Nov 11, 3:39 pm, George Vilches <[EMAIL PROTECTED]> wrote:
>>>> I definitely second this.  The extra bonus to storing it in the
>>>> session is that you can maintain your search state on multiple  
>>>> admin
>>>> pages/models independently without overflowing the URL.   
>>>> Naturally if
>>>> you do it this way, you'd also want to have a visible "clear  
>>>> filters"
>>>> link so that there's some way to reset that state, I didn't check  
>>>> the
>>>> patch to see if this was already included.
>>
>>>> On Nov 11, 2008, at 4:35 PM, David Cramer wrote:
>>
>>>>> Before this gets accepted, I'd like to throw in the proposal of
>>>>> storing this in the session vs a huge URL. That and a hash seem to
>>>>> be
>>>>> the common approach to storing search paths.
>>
>>>>> On Nov 11, 7:19 am, Jonas Pfeil <[EMAIL PROTECTED]> wrote:
>>>>>> Currently if you search in the admin, use some kind of filter or
>>>>>> even
>>>>>> just go to the second page in the change list this selection is
>>>>>> reset
>>>>>> when you edit an item and hit save. The user gets the default  
>>>>>> list
>>>>>> again. Needless to say this can be quite annoying. Especially if
>>>>>> you
>>>>>> want to edit a specific subset of a very large database.
>>
>>>>>> The solution is to somehow make the filters persistent. The  
>>>>>> ticket
>>>>>> [1]
>>>>>> already has a patch.
>>
>>>>>> Cheers,
>>
>>>>>> Jonas
>>
>>>>>> [1]http://code.djangoproject.com/ticket/6903
> 




Re: Proposal: Make filters in admin persistent (#6903)

2008-11-11 Thread George Vilches

I definitely second this.  The extra bonus to storing it in the  
session is that you can maintain your search state on multiple admin  
pages/models independently without overflowing the URL.  Naturally if  
you do it this way, you'd also want to have a visible "clear filters"  
link so that there's some way to reset that state, I didn't check the  
patch to see if this was already included.

On Nov 11, 2008, at 4:35 PM, David Cramer wrote:

>
> Before this gets accepted, I'd like to throw in the proposal of
> storing this in the session vs a huge URL. That and a hash seem to be
> the common approach to storing search paths.
>
> On Nov 11, 7:19 am, Jonas Pfeil <[EMAIL PROTECTED]> wrote:
>> Currently if you search in the admin, use some kind of filter or even
>> just go to the second page in the change list this selection is reset
>> when you edit an item and hit save. The user gets the default list
>> again. Needless to say this can be quite annoying. Especially if you
>> want to edit a specific subset of a very large database.
>>
>> The solution is to somehow make the filters persistent. The ticket  
>> [1]
>> already has a patch.
>>
>> Cheers,
>>
>> Jonas
>>
>> [1]http://code.djangoproject.com/ticket/6903
> 




Re: Ticket #7591: Authenticate By Email Support

2008-07-02 Thread George Vilches


On Jul 2, 2008, at 1:24 PM, Paul Kenjora wrote:

> I understand the resistance but you've got demand, you've got a  
> willing developer, and you've got a clean fix that significantly  
> improves the adaptability of the framework.  What better reason  
> would you need?

Someone who has a proven history of contributions and maintenance of  
the core framework for a significant period of time.

When you contribute an entire backend that goes into core, it will  
have to be maintained forevermore.  People severely underestimate the  
need for this, and even Django has suffered a bit from this over its 3  
years of life (the signal framework had a period of time like this,  
although it's getting more attention again).

Providing the addition as a 3rd party framework allows the community  
to vet your work and decide whether 1) it's worth inclusion, and 2)  
that you have the stamina to maintain it in Django for a long while to  
come.

gav




Re: MySQL exact/iexact lookups change

2008-06-30 Thread George Vilches


On Jun 30, 2008, at 1:27 PM, Collin Grady wrote:

>
> George Vilches said the following:
>> As of http://code.djangoproject.com/changeset/7798 in the MySQL
>> DatabaseWrapper.operations, Malcolm has changed the "__exact" filter
>> to use '= BINARY' to force case-sensitive comparisons, which is
>> appropriate.
>>
>> I therefore propose that operations."__iexact" should be changed from
>> 'LIKE %s' to '= %s', which performs much better, and gives the same
>> results for the cases which the documentation specifically describes:
>> "Case-insensitive exact match".  There's no reason to be using LIKE
>> here when the database gives us a better built-in option for the same
>> behavior.
>
> Actually, based on what I remember of my testing for #3575[1], = and
> LIKE were about the same when using a case insensitive collation
> (though I don't seem to have mentioned that in the ticket, since it
> didn't really matter there)

Well, if that's true, that covers about half the problem (I will be  
verifying).  The other half is that if someone puts a string like 'ab%c'  
into an __iexact search, the % is not currently escaped (correct me if  
I'm wrong here), and it's going to run a weird LIKE statement.  So  
either we should have a backend-specific wildcard escape for these  
things, tell people "don't be stupid and watch out", or switch to  
something that doesn't use LIKE?
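A sketch of the backend-specific escaping being suggested (the escape rules shown are the common SQL LIKE ones; actual backends and their escape characters may differ):

```python
def escape_like(value, escape_char="\\"):
    # Escape the escape character first, then the LIKE wildcards,
    # so user input such as 'ab%c' matches literally instead of
    # behaving as a pattern.
    return (value.replace(escape_char, escape_char * 2)
                 .replace("%", escape_char + "%")
                 .replace("_", escape_char + "_"))

assert escape_like("ab%c") == "ab\\%c"
assert escape_like("a_b") == "a\\_b"
assert escape_like("plain") == "plain"
```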

George




MySQL exact/iexact lookups change

2008-06-30 Thread George Vilches

As of http://code.djangoproject.com/changeset/7798 in the MySQL  
DatabaseWrapper.operations, Malcolm has changed the "__exact" filter  
to use '= BINARY' to force case-sensitive comparisons, which is  
appropriate.

I therefore propose that operations."__iexact" should be changed from  
'LIKE %s' to '= %s', which performs much better, and gives the same  
results for the cases which the documentation specifically describes:  
"Case-insensitive exact match".  There's no reason to be using LIKE  
here when the database gives us a better built-in option for the same  
behavior.

Thoughts?
George




Re: Nullable foreignkeys do not let me retrieve records where the foreignkey is null

2008-06-23 Thread George Vilches

You'll be happy to know that there's a ticket in the system that  
already resolves this problem: http://code.djangoproject.com/ticket/7512  
It will properly LEFT JOIN whenever ordering occurs on null=True  
relationships on any of the major field types.
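The behavior difference can be demonstrated with stdlib sqlite3 (a standalone sketch of the SQL involved, not the ticket's patch):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE deployment (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE project (id INTEGER PRIMARY KEY, name TEXT,
                          deployment_id INTEGER NULL);
    INSERT INTO deployment VALUES (1, 'prod');
    INSERT INTO project VALUES (1, 'with-deploy', 1);
    INSERT INTO project VALUES (2, 'no-deploy', NULL);
""")

# INNER JOIN (what ordering on deployment__name forced): the
# NULL-foreign-key row silently vanishes from the result set.
inner = con.execute(
    "SELECT p.name FROM project p "
    "JOIN deployment d ON p.deployment_id = d.id").fetchall()

# LEFT JOIN (what the fix generates for null=True): both rows survive.
left = con.execute(
    "SELECT p.name FROM project p "
    "LEFT JOIN deployment d ON p.deployment_id = d.id "
    "ORDER BY d.name").fetchall()

assert [r[0] for r in inner] == ["with-deploy"]
assert len(left) == 2
```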

gav

On Jun 23, 2008, at 10:00 AM, Mike H wrote:

>
>
> Gah... ignore! Even though the foreignkey was nullable, I had an
> ordering entry in the Meta class which ordered by deployment__name
> which forced the inner join.
>
> I'll be quiet now... ;-)
>
> Mike
>
> On 23 Jun 2008, at 14:51, Mike H wrote:
>
>>
>> Hi all,
>>
>> Before I rush off and possibly file a bogus bug report, am I seeing
>> incorrect behavior here?
>>
>> I have a simple 'Project' model, as so:
>>
>> class Project(models.Model):
>>deployment = models.ForeignKey(Deployment, blank=True, null=True)
>>... some other fields here
>>
>> Some of the projects in the db do not have a deployment.
>>
>> When I run
>>
>> Project.objects.all().count() it counts all the records in the  
>> project
>> table.
>>
>> When I run
>>
>> Project.objects.all() it tries to do an inner join to the deployment
>> table, which of course cuts out all projects that don't have a
>> deployment.
>>
> m.Project.objects.all().query.as_sql()
>> ('SELECT `tasks_project`.`id`, `tasks_project`.`deployment_id`,
>> `tasks_project`.`name`, `tasks_project`.`active`,
>> `tasks_project`.`is_support` FROM `tasks_project` INNER JOIN
>> `tasks_deployment` ON (`tasks_project`.`deployment_id` =
>> `tasks_deployment`.`id`) ORDER BY `tasks_deployment`.`name` ASC,
>> `tasks_project`.`name` ASC', ())
>>
>> I have tried .select_related(depth=0) but that returns exactly the
>> same sql, which really, to my mind anyway, it should not. I  
>> explicitly
>> asked it not to select the deployments with depth=0, didn't I?
>>
>> I am running revision 7569 of trunk.
>>
>> Cheers,
>>
>> Mike
>>
>>>
>
>
> >





Re: RFC: Django 1.0 roadmap and timeline

2008-06-11 Thread George Vilches

Just one fix to this list:

On Jun 11, 2008, at 10:03 PM, Jacob Kaplan-Moss wrote:

> 8. Many-to-many intermediates (#6905).
>

Shouldn't that be #6095?  http://code.djangoproject.com/ticket/6095

George




Re: OneToOne model field bug?

2008-06-11 Thread George Vilches


On Jun 11, 2008, at 7:06 AM, patS wrote:

> {{ m.z.pool }} prints: something in z
>
> but if i put {{ m.z.pool }} after {{ m.y.pool }} it will print
> something in y not something in z

This appears just like a problem that was solved in SVN trunk a couple  
weeks ago with improper caching on OneToOneFields.  Are you using  
close to the latest trunk? (r7600 or greater?)

With latest, I am unable to reproduce the error, and here is the test  
case I wrote for it:

from django.db import models

class m(models.Model):
    something = models.CharField(max_length=100)

class y(models.Model):
    m = models.OneToOneField(m, primary_key=True, related_name='y')
    pool = models.CharField(max_length=100)  # example data: "something in y"

class z(models.Model):
    m = models.OneToOneField(m, primary_key=True, related_name='z')
    pool = models.CharField(max_length=100)  # example data: "something in z"

__test__ = {'API_TESTS': """
>>> a = m.objects.create(something='123')
>>> b = y.objects.create(m=a, pool='something in y')
>>> c = z.objects.create(m=a, pool='something in z')
>>> my_choice = m.objects.get(id=1)
>>> my_choice.y.pool
u'something in y'
>>> my_choice.z.pool
u'something in z'
"""}

Thanks,
gav




Re: Django releases

2008-06-08 Thread George Vilches


On Jun 8, 2008, at 4:27 AM, Wim Feijen wrote:

> Fortunately, the trunk is stable (thank you!).

I think what people are missing most here is that this statement is 
moderately inaccurate.  Since QSRF, there have been a significant 
number of data-fetching-related tickets that are relatively easy to 
hit and have not yet been resolved.  This is no one's fault; Malcolm 
had real life come up, and there are very few people capable of 
replacing him right now.  This is also not to disparage the quality of 
Django's trunk code (which is very high): QSRF was a *very* large 
merge, and it was inevitable that things would need review after 
release to such a wide audience.

I'm attempting to pick up the slack at least in part, but we have 
several tickets, ranging from easy to identify and solve to rather 
complicated, for relatively common database use cases.  Here's a 
smattering of those tickets (I don't promise I've verified every one 
of these; some, yes, and the rest came from grazing the ticket system 
for things that looked similar to problems I've encountered):

http://code.djangoproject.com/ticket/7378 - Reverse relationship ignores to_field (in another setting)
http://code.djangoproject.com/ticket/7372 - queryset intersection either returns wrong result or raises KeyError depending on order (after queryset-refactor merge)
http://code.djangoproject.com/ticket/7371 - Missing FROM table from autogenerated SQL throws error
http://code.djangoproject.com/ticket/7369 - ForeignKey non-null relationship after null relationship on select_related() generates invalid query
http://code.djangoproject.com/ticket/7367 - Cannot have non-primary-key OneToOneField to parent class
http://code.djangoproject.com/ticket/7330 - Filtering on multiple indirect relations to same table gives incorrect SQL
http://code.djangoproject.com/ticket/7277 - after queryset refactor, specifying > 2 OR'ed conditions that span many-to-many relationships returns broken SQL
http://code.djangoproject.com/ticket/7125 - Multiple ForeignKeys to the same model produces wrong SQL statements.

Now, I'm putting my money where my mouth is, and am working to  
understand the flow of the internal query API so that I can generate  
patches for most of these tickets, as something we're working on over  
here requires some deep integration into the SQL layer (better  
selectability on OneToOneFields).  That aside, now that QSRF is  
getting a real fleshing-out and all these reports are trickling in, I  
think it would be a bad idea to stamp a version right now until either  
someone can step up and fill Malcolm's shoes as a queryset maintainer,  
or he becomes available once again from real life.

Thanks,
George





Re: Magic ORM save()

2008-05-30 Thread George Vilches


On May 30, 2008, at 7:29 PM, Jeremy Dunck wrote:

>
> On Fri, May 30, 2008 at 5:16 PM, [EMAIL PROTECTED]
> <[EMAIL PROTECTED]> wrote:
>>
>> Let's discuss the ORM save() method. In my opinion, unless I grossly
>> misunderstand what's going on it seems the save method is very
>> magical.
>>
>> First, some confirmations; The save method will INSERT a row into the
>> database if it doesn't exist, always. The save method will UPDATE a
>> row that exists in the database, always.
>
> Please review the previous threads on this topic on this mailing list.

I have been following most of the threads related to save() 
functionality, and most of them only address part of what James is 
asking.  On the part you quoted above, yes, there's plenty of 
discussion about the explicit changes people are interested in, most 
recently and notably 
http://groups.google.com/group/django-developers/browse_thread/thread/179ed55b3bf44af0
(Malcolm codifying the way to explicitly request only an INSERT or an 
UPDATE).

What I believe he is questioning is whether that choice should 
necessarily be explicit as a default behavior, forcing one or the 
other.  In ORM terms (or at least Hibernate terms), should transient 
objects be treated the same as persistent objects?  Ken Arnold's post 
in the thread linked above comes closest to describing this.  For 
instance, why does save() *always* try to write to the database?  If 
we know the data was pulled from the database, then in almost any 
scenario the expected behavior is that the same data would be updated 
when put back into the database (unless the primary key has changed in 
some way, and couldn't that be detected automatically Django-side and 
converted to an INSERT without the separate SELECT?).  And in cases 
where something goes wrong, say .save() is called on an existing row 
but a separate thread called .delete() and removed the object before 
the .save()'s update fires, then possibly an exception should be 
thrown, or at least the row shouldn't reappear in the database.

So, from the above paragraph: why is the current behavior the implicit 
choice, saving automatically no matter what, even if the row was just 
deleted, rather than something that has to be forced explicitly, as 
has been suggested in other threads?  Or even invert it: make the 
current behavior the one that must be requested explicitly, and make 
the behavior of Hibernate's .save() method (where, if the object came 
from the database, .save() will only safely update, and transient 
objects will only insert, IIRC) the default.  James, please correct me 
if I've misunderstood what you're asking.
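To make the Hibernate-style rule concrete, here is a minimal pure-Python sketch (this is not Django code; `TrackedRecord` and `from_db` are invented names for illustration): an object that came from the database only updates on save, while a transient object inserts.

```python
# Sketch only: the class and its methods are hypothetical, not any real ORM API.
class TrackedRecord:
    def __init__(self, **fields):
        self.fields = fields
        self._persistent = False  # transient until it touches the database

    @classmethod
    def from_db(cls, **fields):
        # Loading from the database marks the object persistent.
        obj = cls(**fields)
        obj._persistent = True
        return obj

    def save(self):
        # Choose the statement from the object's origin, with no extra SELECT.
        statement = "UPDATE" if self._persistent else "INSERT"
        self._persistent = True
        return statement
```

Under these semantics, a freshly constructed object INSERTs on its first save (and UPDATEs thereafter), while an object built via from_db() only ever UPDATEs.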

Note that I am not taking a stance on the topic.  I'm just pointing  
out that this facet did not seem to be discussed in the threads that  
were linked to by Jeremy and Karen.  If you can point out somewhere in  
there where what I described above was discussed (if this is a direct  
overlap of Ken Arnold's comments, for instance), then I apologize for  
the extra noise.

Thanks,
George





Re: Rethinking silent failures in templates

2008-05-14 Thread George Vilches


On May 14, 2008, at 12:29 PM, J. Cliff Dyer wrote:

>
> On Wed, 2008-05-14 at 19:00 +0400, Ivan Sagalaev wrote:
>> Simon Willison wrote:
>>> {{ article.something.title }} - outputs text if article is there,
>>> fails silently otherwise
>>>
>>> Which leaves us in a tricky situation. A global settings.py variable
>>> for "throw errors on missing template variables" is a bad idea as it
>>> kills application portability
>>
>> It doesn't if we clearly document that silent behaviour is for
>> production and breakage is for debug. This setting won't travel with
>> application code but will be right alongside DEBUG in settings.
>> Actually there is such a setting: TEMPLATE_DEBUG. We can just add
>> this breaking effect to it. I believe it's pretty intuitive.
>>
>
> -1
>
> I'm a fairly strong -1 on adding this across the board to debug
> behavior, especially if it causes the template not to render.  If  
> having
> a variable missing is expected behavior in some cases, and should  
> result
> in nothing being inserted,  (and it frequently is for me) then causing
> the page to break will make actual debugging impossible.
>
> On the other hand, I'm +1 on the idea of having a filter to mark a
> variable as required.

I mirror these sentiments exactly, with one exception.  I don't think 
filters are the right way to do this, for two reasons:

1) It's very limiting.  There are lots of places where filters can't 
be applied cleanly that would still make sense to have some subset of 
this behavior.
2) It seems impossible.  How can a filter check whether some variable 
existed in an enclosing scope, when the filter only runs on the 
*output* of that variable's evaluation?

So, here's my two cents (Marty, apologies, as I think I'm copying some 
of your idea, but I needed my own stream-of-consciousness post).  Why 
not use a block tag, much like autoescape, to indicate that all 
variables in this scope are required, and tie it to a custom exception?

The benefits as I see it:

1) This doesn't actually cause a schism between development and  
production, as the block tag works the same in both cases.  If we  
really wanted a "this should be turned off in production no matter  
what", it could be a settings flag, although I'm -0 on that.
2) By using a custom exception, any other block tag or filter inside 
this scope could throw the same exception.  It wouldn't affect anyone 
normally, since the template handler would catch it, but it would let 
people building their own tags get the same behavior as internal 
variable resolution, allowing block tags to force required variables 
too (at least when wrapped).
3) It's backwards-compatible.  People not using the block tag aren't  
affected.
4) It doesn't require any new syntactic sugar or assumptions.  It's  
just another block tag.  We might modify a few existing block tags or  
filters to be able to take advantage of it.

The negatives that I can think of right now:

1) It doesn't magically apply to all your templates.  I think this is 
a good thing: just like autoescape, if you're using it you probably 
know what you're doing and why.  But I suspect some people may argue 
against it.

If people like this idea, I'll do a proof of concept.  It shouldn't be 
horrific; the templating engine seems pretty extensible when it comes 
to adding new block tags.
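To sketch the resolution semantics such a block tag would impose (pure Python, not actual template-engine code; `VariableMissing` and `resolve` are invented names):

```python
class VariableMissing(Exception):
    """The custom exception a required scope, or any tag inside it, would raise."""

def resolve(context, name, in_required_scope=False):
    # Default Django behavior: a missing variable fails silently,
    # rendering as the empty string.
    if name in context:
        return context[name]
    if in_required_scope:
        # Inside a {% ... %}-style required scope, missing becomes an error.
        raise VariableMissing(name)
    return ""
```

The same exception type is the hook that point 2 above relies on: custom tags inside the scope could raise it too, and templates outside any required scope keep today's silent behavior unchanged.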

gav





Re: Rethinking silent failures in templates

2008-05-14 Thread George Vilches


On May 14, 2008, at 9:58 AM, Simon Willison wrote:
> Silent errors are bad. If we were to remove them, how much of a
> negative impact would it have on the existing user base?

I suspect that a lot of people actually rely on this behavior, and 
removing it would be devastating to them.  What I've personally hoped 
for (besides a pony) is a bit of both: a development mode where I 
could turn on template errors and have them trickle up to me, and a 
setting that lets me turn that off in production.  That's the best of 
both worlds, and I'm sure it makes the template code ridiculously more 
complicated, so it probably won't happen.  But I'd still like it.

gav




Re: Model Inheritance in qsrf and User?

2008-05-01 Thread George Vilches


On May 1, 2008, at 11:49 AM, [EMAIL PROTECTED] wrote:

>
>> Now that QSRF has landed, this type of thinking leads me to: who's
>> working on 121-rewrite?
>
> I'm fairly certain that in refactoring QuerySet, OneToOneField has
> been fixed.  It's the base mechanism that allows multi-table
> subclassing to work, in fact.

I do not believe you understood what I meant by "fixed".  OneToOneField 
has always worked, but only halfway: it doesn't follow the 
relationship in both directions:

from django.db import models

class A(models.Model):
    a1 = models.CharField(max_length=20)

class B(models.Model):
    b1 = models.CharField(max_length=20)
    b2 = models.OneToOneField(A)


B.objects.select_related() creates this query (MySQL, taken from the logs):

SELECT `t1_b`.`id`, `t1_b`.`b1`, `t1_b`.`b2_id`, `t1_a`.`id`, `t1_a`.`a1`
FROM `t1_b` INNER JOIN `t1_a` ON (`t1_b`.`b2_id` = `t1_a`.`id`)

A.objects.select_related() creates this query:

SELECT `t1_a`.`id`, `t1_a`.`a1` FROM `t1_a`


A complete OneToOneField would generate effectively the same query in 
both directions, since it's a special type of ForeignKey whose reverse 
relationship makes sense to select on automatically.
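As a toy illustration of that symmetry (plain Python string-building, not Django's SQL compiler; table and column names follow the A/B example above), the query built from either side would differ only in column ordering:

```python
def one_to_one_join(start):
    """Build the one-to-one join starting from model "A" or "B"."""
    a_cols = "`t1_a`.`id`, `t1_a`.`a1`"
    b_cols = "`t1_b`.`id`, `t1_b`.`b1`, `t1_b`.`b2_id`"
    # The join clause is identical regardless of which side we start from.
    join = "`t1_b` INNER JOIN `t1_a` ON (`t1_b`.`b2_id` = `t1_a`.`id`)"
    cols = (b_cols + ", " + a_cols) if start == "B" else (a_cols + ", " + b_cols)
    return "SELECT %s FROM %s" % (cols, join)
```

Both directions select the same column set over the same join; only the SELECT list ordering changes, which is exactly what "effectively the same query" means here.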

If this worked, then in the User case you could call select_related() 
on the UserProfile *or* on the User, and you would get the same data 
either way, no matter how many models were connected via OneToOneField 
in this manner.  I would assume that all the normal select_related() 
options, like "depth" and the new options for following only a subset 
of fields, would work across this as well.

(As an aside, forward FKs on both sides of the OneToOne should be 
followed too, but I would assume that falls out of getting OneToOnes 
working this way in the first place.)

Does that make more sense as to why it would alleviate the issue?

Thanks,
gav





Re: meta attribute for default select_related?

2008-04-30 Thread George Vilches


On Apr 30, 2008, at 2:33 PM, Ken Arnold wrote:

>
> What would you all think about a Meta attribute for models for
> select_related, analogous to ordering? i.e., default to selecting
> these related fields for any query on this model.
>
> For example, I have a model roughly like:
>
> class Contribution(models.Model):
># content...
>creator = models.ForeignKey(User)
>
> Nearly every time I want to display a contribution, I'm going to want
> the username. It would be nice if, instead of sprinkling
> `select_related('creator__username')` all over the code (thanks for
> that very nice addition though, Malcolm), I could just declare on the
> model something like:
>
>class Meta:
>related = ('creator__username',)
>
> Canceling could be done like `select_related('-creator__username')`
> perhaps.
>
> What do people think?

Without assigning my own preference to this solution, I would say that  
canceling might better be done as a non-generative option, as there's  
a previous discussion on this topic:

http://groups.google.com/group/django-developers/browse_thread/thread/fb195cbcda2e4b44/f85fb54382a7847a?#f85fb54382a7847a

gav




Re: Model Inheritance in qsrf and User?

2008-04-30 Thread George Vilches


On Apr 25, 2008, at 3:40 PM, Marty Alchin wrote:

> *snip*
> class AuthorProfile(models.Model):
>user = models.OneToOneField(User)
>pen_name = models.CharField(max_length=255)

Now that QSRF has landed, this type of thinking leads me to: who's  
working on 121-rewrite?  And more importantly, can I offer my  
assistance? :)

Things like this would be well served by being able to follow a 
OneToOneField in both directions via select_related() (no extra trip 
to the DB).  We have several use cases, among them our own User system 
based on django.contrib.auth, in which we currently use almost exactly 
what you see above as our "extended profile", and it would be very 
useful to have the relationship work in both directions in queries.

I will hesitantly add, too, that *if* OneToOneFields worked that way 
and generated only a single query no matter which way they were 
approached (and followed all the proper ForeignKeys on both sides), I 
think it would end a good portion of the confusion and arguments about 
the proper design patterns to use here, since at that point the only 
major justification left for not using the 121 mechanism would be on 
the saving side.

gav




Re: Feature Request: "Abstract Model"

2008-02-04 Thread George Vilches

On Feb 4, 2008, at 7:50 AM, Brian Harring wrote:
>
> This is a dirty hack mind you, but a rather effective one- I
> personally use it for when I need to create common structures w/in
> tables and need to able to change the structure definitions in a
> single spot.  If you did the following-
>
> def add_common_fields(local_scope):
>local_scope['field1'] = models.IntegerField(blank=True,
>   null=True)
>local_scope['field2'] = models.CharField(maxlength=255,
>   blank=True, null=True)
># other common definitions, same thing, updating the passed in
># dict
>
> you could then just do
>
> class Model_A(models.Model):
>   add_common_fields(locals())
>   # other fields
>
> class Model_B(models.Model):
>   add_common_fields(locals())
>   # other fields.

...

> You probably could fold the approach above into a metaclass if desired
> also- would be a bit more pythonic possibly.

Since we've been using a metaclass for a similar task, it seems 
appropriate to paste it now:

from django.db import models
from django.db.models.base import ModelBase  # the metaclass behind models.Model

class ModelMixinBase(ModelBase):
    def __new__(cls, name, bases, attrs):
        new_attrs = attrs.copy()
        for attr, value in cls.__dict__.iteritems():
            if isinstance(value, models.Field):
                new_attrs.setdefault(attr, value)
            elif attr == 'methods':
                for v in value:
                    if callable(v):
                        new_attrs.setdefault(v.__name__, v)
                    elif isinstance(v, str):
                        new_attrs.setdefault(v, getattr(cls, v))
        return super(ModelMixinBase, cls).__new__(cls, name, bases, new_attrs)


class MixinBaseA(ModelMixinBase):
    common_1 = models.IntegerField()
    common_2 = models.CharField(max_length=255)

class ResultModel(models.Model):
    __metaclass__ = MixinBaseA
    specific_1 = models.IntegerField()

As far as we have been able to tell, the end result is a perfectly 
okay Django model instance (we've been using it for months and haven't 
seen any weird behavior yet).  We know it does a touch more than what 
yours does, but it could easily be stripped down to the equivalent of 
what you've got above.

gav



Re: preventing syncdb from loading initial_data

2008-01-30 Thread George Vilches


On Jan 29, 2008, at 10:51 PM, Russell Keith-Magee wrote:

>
> On Jan 30, 2008 8:17 AM, Joseph Kocherhans <[EMAIL PROTECTED]>  
> wrote:
>>
>> I ran into a situation today where for every future site I set up,
>> I'll want to load an initial_data fixture, but for some existing  
>> sites
>> that I'm upgrading, it's very useful to be able to run syncdb without
>> loading any fixtures. Thus http://code.djangoproject.com/ticket/6511

I'm -0 to the original idea.  I would be +1 to the idea if a different  
management command was created that did only the creation tasks of  
syncdb, but was called something else (create_missing?).  syncdb is  
named pretty aptly, I think.

But I'm -1 to the below:

> I imagine the use case here is that the data in the initial data
> fixture might be modified after being loaded, in which case you don't
> want the modifications being overwritten on each syncdb. IMHO, the
> better solution to this problem would be to ensure that initial_data
> for a given model is only loaded for models that have been added as a
> result of the syncdb (e.g, on first sync, contrib.User is added, so
> the initial_data users are added; on the second sync, blog.Entry is
> added, so the initial entry is added, but the initial user is not
> reloaded). I haven't looked at this in detail, but my gut reaction is
> that this wouldn't be a trivial thing to implement. In the meantime,
> calling your fixture something else works fine :-)

We rely on the syncdb feature that uses initial_data.json to update 
existing rows.  As our developers add columns to models and such, we 
have to execute ALTER TABLEs by hand when we go from our local 
development environment to staging, but then we run syncdb and get all 
the updated fixtures applied to our updated environment.  I think this 
is not only reasonable but expected behavior, since a fixture broken 
by a DB change *must* be updated before your whole app is in a working 
state.

We've already got a great feature in loaddata; let's not waste it.  At 
most, I would recommend adding a new management command (which is 
awesomely easy now; thanks, Django devs!).  The command would take a 
fixture filename, then check every app and execute the fixture if it 
exists in that app.  This is very straightforward, it wouldn't 
interfere with existing functionality, and it should pretty painlessly 
cover Joseph's original use case, with maybe only a little bit extra 
moved from the base fixtures to the external ones.

Any reason why a combination of the above wouldn't work to cover  
everyone's possible use cases, and not break any existing functionality?

gav


--~--~-~--~~~---~--~~
You received this message because you are subscribed to the Google Groups 
"Django developers" group.
To post to this group, send email to django-developers@googlegroups.com
To unsubscribe from this group, send email to [EMAIL PROTECTED]
For more options, visit this group at 
http://groups.google.com/group/django-developers?hl=en
-~--~~~~--~~--~--~---



Re: Django 1.0 features -- the definitive list

2007-11-30 Thread George Vilches

Etienne Robillard wrote:
> 
> 
> On Nov 30, 2007 2:27 AM, Max Battcher <[EMAIL PROTECTED] 
> > wrote:
> 
> 
> On Nov 30, 2007 2:18 AM, jj <[EMAIL PROTECTED]
> > wrote:
>  > move 0.96 to 1.0 status. This might sound somewhat artificial, but
>  > would clearly indicate that 0.96 is a version one can already trust.
>  > Isn't the Web site already advocating 0.96 that way?
> 
> That might be a good idea...  backport any remaining useful fixes to
> 0.96, maybe go ahead and do the newforms -> forms rename, and not much
> more really needs to be done and call that 1.0 and everything else
> becomes 2.0...
> 
> 
> I think it would be great if Django-1.0 (and subsequent releases) be 
> backward-compatible with Django-0.96...

I think it's fair to say that with the changes already in the trunk, 
this is a lost cause.

gav




Bitwise operations in QuerySets?

2007-10-30 Thread George Vilches

What I want to do: assume I have an integer column in a table whose 
value I want to perform a bit operation on.  I would like to be able 
to make a QuerySet that generates a query similar to this: 
"SELECT * FROM table WHERE column & 4;" (example is MySQL-friendly).

I've looked around both in the current DB layer and in 
Queryset-refactor, but it's possible I've missed something.  If this 
already exists, then give me the shaming I deserve.  Otherwise...


Would anyone object to it being just another __ lookup?  For instance, 
Poll.objects.filter(column__bitand=4).  Is there a preference for 
whether this should be __and or __bitand?
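For illustration, here is a toy builder for the SQL such a lookup would generate (plain Python, not Django's query machinery; the `bitand` name simply follows the suggestion above):

```python
def bitwise_where(table, column, lookup, value):
    """Render the proposed bitwise lookup as a bare SQL string."""
    # Only bitwise-AND has an obvious single-operand WHERE form; see the
    # discussion of | and ^ below for why the others are trickier.
    ops = {"bitand": "&"}
    if lookup not in ops:
        raise ValueError("unsupported bitwise lookup: %s" % lookup)
    return "SELECT * FROM %s WHERE %s %s %d" % (table, column, ops[lookup], value)
```

So filter(column__bitand=4) on a hypothetical table would come out as the MySQL-friendly query from the opening paragraph.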

I don't know if this makes sense for | (bitwise-OR), since you would 
need to be able to specify something more like this:

"SELECT * FROM table WHERE column | 4 > 12;".
"SELECT * FROM table WHERE column | 4 = 12;".

Which may not be appropriate for a simple queryset operator in Django. 
However, it would be cool if there were some way to support that with 
a more complicated Q() or the like, but I could see it being out of 
scope for Django, so ignore this part if so.

Bitwise-XOR, I don't know, some specific usages maybe.

I've already verified that all the DBs  Django supports have some 
mechanism for doing the basic bitwise operations.[1][2][3][4][5]

I don't mind doing this only against the qs-rf branch, and I can build 
the tests and patch, but I wanted to get the community's opinion 
first.  I'm +1 on & (bitwise-AND), -0 on | (bitwise-OR, pending 
someone giving a neat example), and +0 on bitwise-XOR and the rest, 
but would like to find some way to be +1 on all of them.

Thoughts?

Thanks,
George

[1] SQLite: http://www.sqlite.org/changes.html
[2] MySQL: http://dev.mysql.com/doc/refman/5.0/en/bit-functions.html
[3] MSSQL: http://www.functionx.com/sqlserver/Lesson03.htm
[4] PostgreSQL: 
http://www.postgresql.org/docs/8.1/interactive/functions-math.html
[5] Oracle: http://www.jlcomp.demon.co.uk/faq/bitwise.html





Missing imports in sessions?

2007-09-25 Thread George Vilches

All,

Just throwing this out there because I think it breaks everyone 
currently using sessions against trunk.  I've already filed a ticket 
(and patch) for it: http://code.djangoproject.com/ticket/5598 .

The short of it: a couple of imports (os and time) look like they're 
missing from django/contrib/sessions/models.py, and that causes new 
sessions to fail on creation.  The imports used to be there, but were 
removed as of r6333.

Can someone else verify this is a problem?  I didn't think that os or 
time was an implicit import. :)

Thanks,
George




Re: Following OneToOne Relationships

2007-09-07 Thread George Vilches

[EMAIL PROTECTED] wrote:
> I have searched through the dev list and couldn't find anything
> relating to the specific thing I am trying to do. I also am aware that
> both QuerySet and OneToOne fields are in some sort of flux right now.
> 
> QuerySet is being completely refactored, and I will of course wait for
> that (like my admin changes) to drop before trying to come up with a
> patch. OneToOne field has also been pending changes for about forever
> now and I don't know exactly what's going on with it. So this post is
> to start a dialog about whether we can follow OneToOne relationships
> in a bi-directional manner and what the current plans are.

I like this idea a lot; it's definitely something that's been a 
hindrance on things I'm working on.  OneToOneFields are special: they 
seem like they should provide the relationship whichever way I need it 
(by its very nature, all this data could have lived in one table; I'm 
just isolating groups of data that are still single-row-centric into 
multiple tables).  Take this example:

from django.db import models

class Account(models.Model):
    username = models.CharField(maxlength=20)
    password = models.CharField(maxlength=20)
    active = models.BooleanField()

class AccountPersonInfo(models.Model):
    account = models.OneToOneField(Account)
    first_name = models.CharField(maxlength=20)
    last_name = models.CharField(maxlength=20)

class AccountCompanyInfo(models.Model):
    account = models.OneToOneField(Account)
    company_name = models.CharField(maxlength=20)
    company_address = models.CharField(maxlength=20)

Now, there's no possible *.objects.select_related() I can use that will 
get me the information in all three of these tables.

* Account.objects.select_related() - Only gives me Account, not 
AccountPersonInfo or AccountCompanyInfo
* AccountPersonInfo.objects.select_related() - Only gives me Account and 
AccountPersonInfo, not AccountCompanyInfo
* AccountCompanyInfo.objects.select_related() - Only gives me Account 
and AccountCompanyInfo, not AccountPersonInfo

I can get any two, but not all three.  If I want the third, it 
requires a second hit to the database.  If I'm selecting one record, 
that's not so bad.  But when I do this:

Account.objects.select_related().filter(active=True)

If I want the information in AccountPersonInfo *and* AccountCompanyInfo 
for the N (for argument's sake, hundreds of) records it returns, it 
will run N rows' worth of queries against the other table.  This is a 
performance destroyer.
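That cost is the classic one-extra-query-per-row pattern; a toy counter (plain Python, no ORM involved, all names invented) makes it concrete:

```python
class CountingDB:
    """Stand-in for a database connection that counts round trips."""
    def __init__(self):
        self.queries = 0

    def fetch_related(self, account_id):
        self.queries += 1  # one round trip per parent row
        return {"account_id": account_id}

db = CountingDB()
accounts = list(range(200))  # pretend these came back from one filter() query
info = [db.fetch_related(a) for a in accounts]
# 200 extra queries for the related table, where a single JOIN would need none.
```

With bidirectional OneToOneFields the related columns would arrive in the original JOIN, so the per-row fetch loop disappears entirely.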

So, my alternatives are:

1) Run 2-3 queries, pulling the linked records back, and then merging 
them on the Python side.  UGLY.
2) Just use a hand-coded query.  Also UGLY, since the Models have 
methods on them that I'd like to use, and populating them by hand from 
the results of the QuerySet is more than a minor annoyance.
3) Make OneToOneFields bidirectional.

For obvious reasons, I think (3) is the right choice.  Is there anything 
that would prevent QuerySet or the Models from working this way?

Thanks,
George




Re: Creating and using a project-specific database backend?

2007-09-05 Thread George Vilches

George Vilches wrote:
> Russell Keith-Magee wrote:
>> On 8/30/07, George Vilches <[EMAIL PROTECTED]> wrote:
>>> Folks,
>>>
>>> Now that the database backend refactoring has landed, and DB
>>> functionality is really easy to extend, how does everyone feel about the
>>> possibility of allowing people to specify their own database backends
>>> within their projects (i.e., without modifying the Django source tree in
>>> any way?)  I see this as an excellent way for people to increase their
>> The broad idea seems reasonable to me. There's no point having an
>> easily pluggable database engine if you can't plug in your own
>> database :-)
>>
>> Regarding the approach - I'm inclined to prefer #1. It's simple and
>> easy to explain, and I don't see that there is that much potential for
>> side effects. The only clash I can forsee is if you had your own
>> toplevel module that mirrored the backend names of Django - this
>> doesn't really strike me as something that will be a common problem,
>> and if it is, the solution is easy (rename the clashing external
>> module).
> 
> Then in this spirit, my patch (against r6049) is at the end of the 
> message.  Notice that the only real change is:

And I'm amending this slightly due to something I missed in how the 
other modules are imported: we have to adjust the import path for the 
introspection/creation/etc. modules as well.  Patch at bottom, as before.

The only two changes are that there is now an "import_path" string 
containing the working prefix for the modules (either "django.db.backends." 
or empty), and that each import after the base one uses whichever prefix is 
appropriate.
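The fallback flow the patch implements has the same shape as this sketch, written with modern importlib rather than the raw __import__ calls of the era; the prefix tuple is illustrative:

```python
import importlib

def import_with_fallback(name, prefixes=("django.db.backends.", "")):
    # Try each prefix in turn; the first module that imports wins. This
    # mirrors the patch: built-in backends are checked first, then the bare
    # module path from the project.
    for prefix in prefixes:
        try:
            return importlib.import_module(prefix + name)
        except ImportError:
            continue
    raise ImportError("no module named %r under any prefix" % name)
```

A clashing project-level module name would shadow nothing here, since the built-in prefix is always tried first.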

Thanks,
George

Index: django/db/__init__.py
===================================================================
--- django/db/__init__.py   (revision 6049)
+++ django/db/__init__.py   (working copy)
@@ -8,24 +8,29 @@
     settings.DATABASE_ENGINE = 'dummy'
 
 try:
-    backend = __import__('django.db.backends.%s.base' % settings.DATABASE_ENGINE, {}, {}, [''])
+    import_path = 'django.db.backends.'
+    backend = __import__('%s%s.base' % (import_path, settings.DATABASE_ENGINE), {}, {}, [''])
 except ImportError, e:
-    # The database backend wasn't found. Display a helpful error message
-    # listing all possible database backends.
-    from django.core.exceptions import ImproperlyConfigured
-    import os
-    backend_dir = os.path.join(__path__[0], 'backends')
-    available_backends = [f for f in os.listdir(backend_dir) if not f.startswith('_') and not f.startswith('.') and not f.endswith('.py') and not f.endswith('.pyc')]
-    available_backends.sort()
-    if settings.DATABASE_ENGINE not in available_backends:
-        raise ImproperlyConfigured, "%r isn't an available database backend. Available options are: %s" % \
-            (settings.DATABASE_ENGINE, ", ".join(map(repr, available_backends)))
-    else:
-        raise # If there's some other error, this must be an error in Django itself.
+    try:
+        import_path = ''
+        backend = __import__('%s%s.base' % (import_path, settings.DATABASE_ENGINE), {}, {}, [''])
+    except ImportError, e_user:
+        # The database backend wasn't found. Display a helpful error message
+        # listing all possible database backends.
+        from django.core.exceptions import ImproperlyConfigured
+        import os
+        backend_dir = os.path.join(__path__[0], 'backends')
+        available_backends = [f for f in os.listdir(backend_dir) if not f.startswith('_') and not f.startswith('.') and not f.endswith('.py') and not f.endswith('.pyc')]
+        available_backends.sort()
+        if settings.DATABASE_ENGINE not in available_backends:
+            raise ImproperlyConfigured, "%r isn't an available database backend. Available options are: %s" % \
+                (settings.DATABASE_ENGINE, ", ".join(map(repr, available_backends)))
+        else:
+            raise # If there's some other error, this must be an error in Django itself.
 
-get_introspection_module = lambda: __import__('django.db.backends.%s.introspection' % settings.DATABASE_ENGINE, {}, {}, [''])
-get_creation_module = lambda: __import__('django.db.backends.%s.creation' % settings.DATABASE_ENGINE, {}, {}, [''])
-runshell = lambda: __import__('django.db.backends.%s.client' % settings.DATABASE_ENGINE, {}, {}, ['']).runshell()
+get_introspection_module = lambda: __import__('%s%s.introspection' % (import_path, settings.DATABASE_ENGINE), {}, {}, [''])
+get_creation_module = lambda: __import__('%s%s.creation' % (import_path, settings.DATABASE_ENGINE), {}, {}, [''])
+runshell = lambda: __import__('%s%s.client' % (import_path, settings.DATABASE_ENGINE), {}, {}, ['']).runshell()
  connection = backend.DatabaseWrapper(**settings.DATABASE_O

Re: Creating and using a project-specific database backend?

2007-09-04 Thread George Vilches

Russell Keith-Magee wrote:
> On 8/30/07, George Vilches <[EMAIL PROTECTED]> wrote:
>> Folks,
>>
>> Now that the database backend refactoring has landed, and DB
>> functionality is really easy to extend, how does everyone feel about the
>> possibility of allowing people to specify their own database backends
>> within their projects (i.e., without modifying the Django source tree in
>> any way?)  I see this as an excellent way for people to increase their
> 
> The broad idea seems reasonable to me. There's no point having an
> easily pluggable database engine if you can't plug in your own
> database :-)
> 
> Regarding the approach - I'm inclined to prefer #1. It's simple and
> easy to explain, and I don't see that there is that much potential for
> side effects. The only clash I can forsee is if you had your own
> toplevel module that mirrored the backend names of Django - this
> doesn't really strike me as something that will be a common problem,
> and if it is, the solution is easy (rename the clashing external
> module).

Then in this spirit, my patch (against r6049) is at the end of the 
message.  Notice that the only real change is:

    try:
        backend = __import__('%s.base' % settings.DATABASE_ENGINE, {}, {}, [''])
    except ImportError, e_user:

Everything else is just an indentation correction (it moves the current 
error checking under the second import error).  The flow of this allows 
the internal packages to still be checked first, and then it tries to 
import the package directly without the django.db.backends prefix.  If 
that doesn't work, the error message and handling is exactly as it used 
to be.

I've tried it in several existing situations, seems to hold up fine with 
existing Django DB backends and finds and imports new backends properly 
(and properly errors if it can't do either).  Anyone have additional 
thoughts on this approach to having replaceable backends, or is this 
good as is?

Thanks,
George


Index: django/db/__init__.py
===================================================================
--- django/db/__init__.py   (revision 6049)
+++ django/db/__init__.py   (working copy)
@@ -10,18 +10,21 @@
 try:
     backend = __import__('django.db.backends.%s.base' % settings.DATABASE_ENGINE, {}, {}, [''])
 except ImportError, e:
-    # The database backend wasn't found. Display a helpful error message
-    # listing all possible database backends.
-    from django.core.exceptions import ImproperlyConfigured
-    import os
-    backend_dir = os.path.join(__path__[0], 'backends')
-    available_backends = [f for f in os.listdir(backend_dir) if not f.startswith('_') and not f.startswith('.') and not f.endswith('.py') and not f.endswith('.pyc')]
-    available_backends.sort()
-    if settings.DATABASE_ENGINE not in available_backends:
-        raise ImproperlyConfigured, "%r isn't an available database backend. Available options are: %s" % \
-            (settings.DATABASE_ENGINE, ", ".join(map(repr, available_backends)))
-    else:
-        raise # If there's some other error, this must be an error in Django itself.
+    try:
+        backend = __import__('%s.base' % settings.DATABASE_ENGINE, {}, {}, [''])
+    except ImportError, e_user:
+        # The database backend wasn't found. Display a helpful error message
+        # listing all possible database backends.
+        from django.core.exceptions import ImproperlyConfigured
+        import os
+        backend_dir = os.path.join(__path__[0], 'backends')
+        available_backends = [f for f in os.listdir(backend_dir) if not f.startswith('_') and not f.startswith('.') and not f.endswith('.py') and not f.endswith('.pyc')]
+        available_backends.sort()
+        if settings.DATABASE_ENGINE not in available_backends:
+            raise ImproperlyConfigured, "%r isn't an available database backend. Available options are: %s" % \
+                (settings.DATABASE_ENGINE, ", ".join(map(repr, available_backends)))
+        else:
+            raise # If there's some other error, this must be an error in Django itself.
 
 get_introspection_module = lambda: __import__('django.db.backends.%s.introspection' % settings.DATABASE_ENGINE, {}, {}, [''])
 get_creation_module = lambda: __import__('django.db.backends.%s.creation' % settings.DATABASE_ENGINE, {}, {}, [''])





Re: Django 500 error debugging causes QuerySets to evaluate

2007-08-29 Thread George Vilches

Jeremy Dunck wrote:
> On 8/28/07, George Vilches <[EMAIL PROTECTED]> wrote:
> ...
>> Something seems very wrong about this situation, that debugging could
>> cause another query to execute (especially an unintended query), but I
>> don't know what the correct way to go about fixing or preventing it.
>> I've tried a bunch of things to stop the QuerySets from evaluating when
>> the local vars are being printed, but haven't been able to come up with
>> anything much.  Is there a good way to prevent this issue in the 500
>> template?
> 
> As a ham-handed fix, django.views.debug, where it pushes the frame
> locals into 'vars' , could remove instances of queryset.
> Alternatively, it could substitute the generated sql for the queryset
> itself.

I looked at the possibility of both of those fixes, and they seem 
doable, but they would require adding special handler code just for 
QuerySets (especially the one to get the SQL generated code instead of 
the QuerySet).  I think I agree with your ham-handed phrase there.

That having been said, would a fix like that even get into Django? 
Doesn't sound very Django-ish.  I'll do the work, but I'm really hoping 
someone can point me in the direction of what a proper Django solution 
is, I would highly prefer not to have to keep patching this with an ugly 
QuerySet workaround that won't get accepted into trunk.

I feel this problem might have far-reaching consequences as more people 
migrate large existing projects onto Django.  Often, people have tricks 
to turn on debugging for certain users/IPs in production, and this could 
totally tank a production site if the large dataset issue described 
previously was encountered.

George




Django 500 error debugging causes QuerySets to evaluate

2007-08-28 Thread George Vilches

Quick summary: If Django errors during a QuerySet evaluation with 
DEBUG=True, the built-in 500 handler in views/debug.py causes the last 
QuerySet (one filter shorter than the final version) to be executed; that 
is, its SQL statement actually hits the database.


This is very bad if there was only a single filter on a table, and that 
table is very large.  Here's an example to demonstrate:

class Test1(models.Model):
    a = models.CharField(max_length=100)

class Test2(models.Model):
    b = models.OneToOneField(Test1)

Say that Test1 is populated with an entry at id=1, but there is not a 
matching entry at Test2:

t = Test1.objects.get(id=1)
print t.b

This throws the error you would expect, a "DoesNotExist" error.  When 
running from a shell, this is the end of it.

However, if you're running from a view and have DEBUG=True, then it 
runs the 500 error, goes up through the stack trace and shows all the 
functions and their local vars.  Unfortunately, one of the functions is 
QuerySet.get(), in django.db.models.query:

    def get(self, *args, **kwargs):
        "Performs the SELECT and returns a single object matching the given keyword arguments."
        clone = self.filter(*args, **kwargs)

When the local variables are printed, it attempts to print both the self 
and the clone.  Since the clone is where the filter is applied, and not 
the self, at the point when debugging happens, the self variable is 
missing a filter.  I know that this is expected behavior, but it is 
important for the following piece.

Now, when the 500 template iterates through the local variables, this 
"self" QuerySet gets printed, and by the act of printing, it gets 
executed.  Now, this causes the SQL to hit the database.  In this case, 
the SQL is the SELECT statement with *no* WHERE clause.  If I were to 
have used:

t = Test1.objects.filter(id=27).get(id=1)
print t

The end result would be a SELECT statement checking for only WHERE id = 
27, instead of WHERE (id=27 AND id=1).  (Yes, I know this example is 
contrived, but it's the simplest case I can provide).

Picture this: Test1 has 5 million records in it.  Suddenly, the 
debugging is now destroying the Apache instance and/or MySQL with the 
amount of data it has to process, because it's returning all 5 million 
results due to a query error.

Something seems very wrong about this situation, that debugging could 
cause another query to execute (especially an unintended query), but I 
don't know what the correct way to go about fixing or preventing it. 
I've tried a bunch of things to stop the QuerySets from evaluating when 
the local vars are being printed, but haven't been able to come up with 
anything much.  Is there a good way to prevent this issue in the 500 
template?
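Jeremy's substitution idea from the earlier reply could look roughly like this toy sketch. LazyQuery and safe_local_repr are hypothetical names for illustration, not Django code:

```python
class LazyQuery:
    """Toy stand-in for a QuerySet: calling repr() runs the 'query'."""
    def __init__(self):
        self.executed = False

    def __repr__(self):
        self.executed = True  # side effect: this is where the SQL would run
        return "<LazyQuery results>"

def safe_local_repr(value):
    # Render known-lazy objects as a placeholder instead of evaluating them,
    # the way a debug view could before dumping frame locals.
    if isinstance(value, LazyQuery):
        return "<%s: not evaluated>" % type(value).__name__
    return repr(value)
```

The special-casing is exactly the "ham-handed" part: the debug view has to know about every lazy type it must not touch.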

Thanks,
George




Re: Run script from management.py

2007-08-27 Thread George Vilches

Casey T. Deccio wrote:
> I would find it extremely useful to pass a script argument to the
> 'manage.py shell' command, so I could run arbitrary scripts using the
> models library defined in project/app.  I can't currently see a way to
> do this except using input redirection:
> 
> python manage.py shell < script.py
> 
> Is there a better way to do this?  Would it be useful to take an
> argument from the 'manage.py shell' to support this?
> 
> Thanks,
> casey

Although I'd like to see something in shell too, this question came up 
on the django-users list the other day, and here's one possible solution:

http://groups.google.com/group/django-users/browse_thread/thread/9c571e82e1930c99/56b04af443685df7?lnk=gst=application+commands+and+one+off+scripts=5=en#56b04af443685df7

Works great for me, I just have an _local directory, drop my scripts in 
there, and run "python test_shell.py _local.some_module".
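The linked runner boils down to "import the module and let its top-level code execute". A minimal modern sketch, with the settings and sys.path setup elided and run_script a hypothetical name:

```python
import importlib
import sys

def run_script(module_path):
    """Import (or re-import) a dotted module path, executing its
    top-level code. Settings/sys.path bootstrapping is elided."""
    if module_path in sys.modules:
        return importlib.reload(sys.modules[module_path])
    return importlib.import_module(module_path)
```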

gav






Re: Django unit tests for pre/post_save, pre/post_delete signals

2007-08-21 Thread George Vilches

Russell Keith-Magee wrote:
> On 8/21/07, George Vilches <[EMAIL PROTECTED]> wrote:
>> All,
>>
>> I've looked everywhere in the Django unit tests and can't find anywhere
>> that's testing these four signals:
> ...
>> Thoughts?
> 
> Only one thought - put this in a ticket so it doesn't get lost or forgotten.

I've already attached the unit test patch to this ticket: 
http://code.djangoproject.com/ticket/4879 .  I'll add a note in there 
that's a little more specific about the unit tests.  The unit tests were 
done appropriately then (up to Django snuff)? :)

Thanks,
George




Django unit tests for pre/post_save, pre/post_delete signals

2007-08-21 Thread George Vilches

All,

I've looked everywhere in the Django unit tests and can't find anywhere 
that's testing these four signals:

pre_save
post_save
pre_delete
post_delete

I found the dispatcher tests, which cover all the generic bits, but 
nothing specifically for these model signals.

On the assumption that I'm not blind, I built a unit test for it, which 
I've posted below.  I've also attached the unit test to #4879 ( 
http://code.djangoproject.com/ticket/4879 ), because my unit test is 
dependent on the post_save signal's created keyword.  It would be easy 
enough to strip it (the tests will pass once a few extra lines of expected 
output are removed; I'll do it if necessary), but I figured adding a 
unit test for it would make the ticket pretty painless for submission.

Thoughts?
gav




django/tests/modeltests/signals/models.py:

"""
Testing signals before/after saving and deleting.
"""

from django.db import models
from django.dispatch import dispatcher

class Person(models.Model):
    first_name = models.CharField(max_length=20)
    last_name = models.CharField(max_length=20)

    def __unicode__(self):
        return u"%s %s" % (self.first_name, self.last_name)


def pre_save_test(sender, instance, **kwargs):
    print 'pre_save signal,', instance

def post_save_test(sender, instance, **kwargs):
    print 'post_save signal,', instance
    if 'created' in kwargs:
        if kwargs['created']:
            print 'Is created'
        else:
            print 'Is updated'

def pre_delete_test(sender, instance, **kwargs):
    print 'pre_delete signal,', instance
    print 'instance.id is not None: %s' % (instance.id != None)

def post_delete_test(sender, instance, **kwargs):
    print 'post_delete signal,', instance
    print 'instance.id is None: %s' % (instance.id == None)

__test__ = {'API_TESTS':"""
>>> dispatcher.connect(pre_save_test, signal=models.signals.pre_save)
>>> dispatcher.connect(post_save_test, signal=models.signals.post_save)
>>> dispatcher.connect(pre_delete_test, signal=models.signals.pre_delete)
>>> dispatcher.connect(post_delete_test, signal=models.signals.post_delete)
>>> p1 = Person(first_name='John', last_name='Smith')
>>> p1.save()
pre_save signal, John Smith
post_save signal, John Smith
Is created

>>> p1.first_name = 'Tom'
>>> p1.save()
pre_save signal, Tom Smith
post_save signal, Tom Smith
Is updated

>>> p1.delete()
pre_delete signal, Tom Smith
instance.id is not None: True
post_delete signal, Tom Smith
instance.id is None: True

>>> p2 = Person(first_name='James', last_name='Jones')
>>> p2.id = 9
>>> p2.save()
pre_save signal, James Jones
post_save signal, James Jones
Is created

>>> p2.id = 8
>>> p2.save()
pre_save signal, James Jones
post_save signal, James Jones
Is created

>>> p2.delete()
pre_delete signal, James Jones
instance.id is not None: True
post_delete signal, James Jones
instance.id is None: True

>>> Person.objects.all()
[]

>>> dispatcher.disconnect(post_delete_test, signal=models.signals.post_delete)
>>> dispatcher.disconnect(pre_delete_test, signal=models.signals.pre_delete)
>>> dispatcher.disconnect(post_save_test, signal=models.signals.post_save)
>>> dispatcher.disconnect(pre_save_test, signal=models.signals.pre_save)
"""}
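As an aside, the connect/send/disconnect contract those doctests exercise can be sketched without Django at all. This toy dispatcher and Person class are an illustration of the contract, not Django's implementation:

```python
class Signal:
    """Minimal dispatcher: receivers are called in connection order."""
    def __init__(self):
        self._receivers = []

    def connect(self, receiver):
        self._receivers.append(receiver)

    def disconnect(self, receiver):
        self._receivers.remove(receiver)

    def send(self, sender, **kwargs):
        return [receiver(sender, **kwargs) for receiver in self._receivers]

pre_save = Signal()
post_save = Signal()

class Person:
    def __init__(self, name):
        self.name = name
        self.id = None

    def save(self):
        created = self.id is None
        pre_save.send(self)
        if created:
            self.id = 1  # pretend the database assigned a key
        post_save.send(self, created=created)
```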




Re: Adding hooks to methods that generate SQL in django/core/management.py

2007-08-19 Thread George Vilches

Russell Keith-Magee wrote:
> On 8/13/07, George Vilches <[EMAIL PROTECTED]> wrote:
>> Russell Keith-Magee wrote:
>>> The configuration option will need to be a little more generic - i.e.,
>>> putting the entire backend into a record mode - not just a single
>>> cursor call.
>> Second, we could add a class level variable to each DatabaseWrapper,
>> since the handle to those seem to be instantiated only once at runtime
>> (at least, my short testing with just the Django built-in webserver
>> seemed to do so, I only assume that for Apache it's once per thread).
>> That would be an easy enough variable to update from pretty much
>> anywhere in the app:
>>
>>  connection.playback_only = True
> 
> There's another possibility you haven't considered - dynamically
> replacing/wrapping the connection object. The test system already does
> this for the email framework - when the test framework is set up, the
> email framework is dynamically replaced with a mock; when the test
> framework is torn down, the mock is uninstalled. A similar approach
> could be used to 'start/end SQL recording'.

Since Adrian has done great work tonight landing Brian Harring's 
database refactoring, it seemed like a good time to revisit this: dropping 
in support for "non-executing" or "playback only" SQL (what do we want to 
call this?) now touches a single place and is very painless.

As far as dynamically wrapping it, that's a neat idea, and I think it 
would have been very appropriate before the refactor.  Now, we've 
already got a debug cursor, and we're already tracking queries, so the 
change seems very natural.

The two needed file patches are at the bottom of the e-mail.  This 
change is small and totally non-destructive to existing apps.  If you 
want to turn your DB to playback_only, you just have to call:

from django.db import connection
connection.playback_only = True

And since the connection variable is only initialized once per runtime, 
setting it once anywhere means that the app is in playback-only mode for 
the rest of the run.  This could easily be controlled from a middleware, 
so all the items below this middleware could be playback only, but outer 
middlewares can still write to the database if necessary.

The only way this could be simpler or easier for the user to control is 
if there was an entry in settings.py that the user could also use. 
settings.DATABASE_PLAYBACK_ONLY, maybe?  I'm only +0 on this.  It would 
have to be optional, because I can definitely see a reason for wanting 
to turn this ability on and off during runtime without changing the code 
at all.  Specifically, I could imagine several setup programs in the 
style of phpBB (I know, it's a sin to mention PHP apps on here, but 
everything I can think of right this second is in PHP) that might want 
to turn it off based on whether the user asked to display the queries or 
actually execute them on install on the previous web page.
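The cursor-wrapping idea can be seen in isolation with a small sketch. RecordingCursor is a hypothetical stand-in for CursorDebugWrapper, shown here over an in-memory sqlite3 database:

```python
import sqlite3

class RecordingCursor:
    """Sketch of the playback-only idea: record every statement, and only
    pass it through to the real cursor when execution is allowed."""
    def __init__(self, cursor, playback_only=False):
        self.cursor = cursor
        self.playback_only = playback_only
        self.queries = []

    def execute(self, sql, params=()):
        self.queries.append((sql, params))  # always log the SQL
        if not self.playback_only:
            return self.cursor.execute(sql, params)

conn = sqlite3.connect(":memory:")
cur = RecordingCursor(conn.cursor(), playback_only=True)
cur.execute("CREATE TABLE t (id INTEGER)")  # recorded, never executed
```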

How do people feel about this approach?

Thanks,
gav


--- django_orig/django/db/backends/__init__.py  2007-08-19 21:22:44.0 -0400
+++ django_live/django/db/backends/__init__.py  2007-08-19 22:03:11.0 -0400
@@ -12,6 +12,7 @@ class BaseDatabaseWrapper(local):
     ops = None
     def __init__(self, **kwargs):
         self.connection = None
+        self.playback_only = False
         self.queries = []
         self.options = kwargs
 
@@ -31,13 +32,13 @@ class BaseDatabaseWrapper(local):
     def cursor(self):
         from django.conf import settings
         cursor = self._cursor(settings)
-        if settings.DEBUG:
+        if settings.DEBUG or self.playback_only:
             return self.make_debug_cursor(cursor)
         return cursor
 
     def make_debug_cursor(self, cursor):
         from django.db.backends import util
-        return util.CursorDebugWrapper(cursor, self)
+        return util.CursorDebugWrapper(cursor, self, playback_only=self.playback_only)
 
 class BaseDatabaseOperations(object):
     """
--- django_orig/django/db/backends/util.py  2007-08-19 21:22:44.0 -0400
+++ django_live/django/db/backends/util.py  2007-08-19 21:31:12.0 -0400
@@ -9,14 +9,16 @@ except ImportError:
     from django.utils import _decimal as decimal    # for Python 2.3
 
 class CursorDebugWrapper(object):
-    def __init__(self, cursor, db):
+    def __init__(self, cursor, db, playback_only=False):
         self.cursor = cursor
         self.db = db
+        self.allow_execute = not playback_only
 
     def execute(self, sql, params=()):
         start = time()
         try:
-            return self.cursor.execute(sql, params)
+            if self.allow_execute:
+                return self.cursor.execute(sql, params)
         finally:
             stop = time()
             self.db.queries

Re: Taming management.py, the 1730-line behemoth

2007-08-15 Thread George Vilches

> I'd like to suggest a (somewhat controversial) extension:
> 
> Let any (installed) app provide its own manage.py actions in a similar
> way -- something like::
> 
> schema_evolution/
> management/
> __init__.py
> commands/
> __init__.py
> evolve.py
> 
> This would let projects like S-E -- there's a reason I used it for the
> example -- provide custom management actions; installing said app
> would make those actions "appear" in manage.py. This would go a long
> way towards letting optional bits "feel" more built-in.

There aren't enough +1s in the world for me to vote as strongly as I 
would like on both Adrian's initial proposal and this extension. 
Looking forward to seeing this.

gav




Re: Changing the options in manage.py, adding more signals

2007-08-14 Thread George Vilches

Marty Alchin wrote:
> Okay, I'll post one last time on this subject, then leave you guys to
> do what you're supposed to be doing. I'm only posting here in case who
> heard me ranting find it interesting. I did manage to come up with a
> solution like I mentioned, and I'll be posting it soon. It'd be up
> already, but it seems djangosnippets.org is having PostgreSQL problems
> at the moment. I'll be putting it there, writing about it on my blog,
> and linking to it from the DynamicModels article.
> 
> So, for the record, it can be done, retaining syncdb/sql/sqlall/reset
> functionality for all the dynamic models with very little hackery (the
> whole of it is barely over 100 lines). Details to follow for those who
> are interested.
> 
> -Gul

Very much looking forward to it.  I think there's still some legitimacy 
to updating the options in manage.py and adding a --sql flag that 
supports direct logging (a subtopic in the other thread), but if this 
functionality can be garnered in the meantime without that patch and 
without modifying Django internals, all the better!

gav




Re: Changing the options in manage.py, adding more signals

2007-08-14 Thread George Vilches

Marty Alchin wrote:
> This sounds like a far more complicated example than I had considered
> when I was doing my work with dynamic models[1], but I did have
> success getting syncdb to install dynamic models, provided a few
> things are in order. I probably didn't document them well enough on
> the wiki, but I could do so if this is a real need you have.
> 
> I also can't speak for how well your audit example would work on the
> whole using that method, but if it's a real task for somebody, I'd
> love to help work it out. In theory though, given my past experience,
> it would be possible to do in such a way that a single line in each
> audit-enabled model would trigger all the hard work, enabling syncdb
> and even admin integration.
> 
> Keep in mind that I have no opinion on the real meat of this thread,
> I'm just chiming in to help clarify what is and isn't possible with
> dynamic models.
> 
> -Gul
> 
> [1] http://code.djangoproject.com/wiki/DynamicModels

That page was a great start; it's where I figured most of this out 
when I started down this path a while back.  And what I described is 
a fully working app, but I've not exposed it because I'm not so sure 
that it really fits into the spirit of Django, even on the contrib side. 
:)  That having been said, it's something we very much needed for our 
current app, and the current branch in the Django trunk for doing 
history just doesn't have the performance for large DBs, especially with 
large change counts (one table in columnar key/value fashion just won't 
cut it, especially for reporting purposes).  We had to have row-based 
tables to do reporting on millions of historical entries at any speed.
also much faster for re-constructing history at the DB level, we can 
recreate any table at any point in time with a roll-forward type 
approach from the audit tables, columnar requires a lot more processing 
to do the same).

As far as how well it works, it's great across the board. :)  We have 
the syncdb signal stuff working fine (all the missing tables are created 
happily, and I don't have to write any custom SQL, I just piggyback on 
the things in django/core/management which do a fine job, since a Model 
is a Model, dynamic or not), and it's actually really solid under load, 
and easy to add into an existing model to "turn on" auditing.  Here's an 
example of how you use it:

class SomeModel(models.Model):
    c1 = models.CharField(maxlength=10)
    c2 = models.ForeignKey(SomeOtherModel)

    class Audit:
        pass

That's it.  I think it's nifty. :)  Calling save() or delete() on the 
model automagically writes the audit entries and all the related tasks, 
all you have to do is add the Audit app to the INSTALLED_APPS, and all 
the rest is handled, and it doesn't require any hackery to the Django 
codebase, which makes it near perfect for my uses. :)

So, the problem itself is actually solved for the runtime portion of the 
app.  The *only* thing that I found myself missing is the ability to 
generate the correct DROP TABLE/CREATE TABLE/CREATE INDEX type SQL in a 
printable manner that the Django manage.py commands could hook into and 
either display or run.  When I reset an app, I want to reset all my 
audit tables in that app as well, and there are just no signals in place 
(and no way to inject SQL code for display even if the signals were 
there, like in syncdb).  Same for "manage.py sql", "manage.py sqlclear", 
and etc.  So, that's at least part of where this all came from, and all 
the more reason that I like the --sql flag and reducing the manage.py 
set of SQL-related options to something more straightforward.

Thoughts?

Thanks,
George




Re: Changing the options in manage.py, adding more signals (was Re: Adding hooks to methods that generate SQL in django/core/management.py)

2007-08-14 Thread George Vilches

George Vilches wrote:
> Russell Keith-Magee wrote:
>> On 8/12/07, George Vilches <[EMAIL PROTECTED]> wrote:
>>> 1) Add a signal to every option?
>> If we were going to go down this path, this would be the preferred
>> option. However, I'm not sure I'm convinced of the need. Which
>> commands exactly do you think require signals?
> 

The first example I gave may not be particularly compelling, since 
some craftiness with static Django models could be used to solve the 
problem, so let me give one that I don't believe could be solved that way.

Assume I'm building a row-based audit system.  I also want this audit 
system to have one audit table/model per legitimate Django model.  So, 
say I have an app.model called "wiki.article".  This would create a 
"wiki_article" table.  I also want to have a "wiki_article_audit" table 
keeping a full history of changes.  Now, since Django models don't 
support inheritance yet, and I don't want to have to re-create every 
model that I want to perform an audit on, I can instead create a dynamic 
model from the original model with a small helper.  Unfortunately, 
syncdb and the like don't have a way of detecting this dynamic model and 
creating tables and such for it.  However, I've already got a mechanism 
in syncdb (via signals) which uses the existing management functions to 
write that new dynamic model to the database, and then at runtime 
everything works perfectly.
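Stripped of the Django specifics, the kind of small helper I mean can be 
sketched like this.  The make_audit_model name and the bookkeeping 
columns are illustrative only; a real version would build Field 
instances and an inner Meta class rather than plain class attributes:

```python
def make_audit_model(model_cls):
    """Build a shadow '<Model>Audit' class mirroring model_cls's fields.

    In real Django code the attrs dict would hold Field instances and an
    inner Meta class; plain class attributes are copied here just to
    show the shape of the idea.
    """
    attrs = dict(
        (name, value)
        for name, value in vars(model_cls).items()
        if not name.startswith('_')
    )
    # Extra bookkeeping columns an audit table typically wants.
    attrs['audit_timestamp'] = None
    attrs['audit_action'] = None  # e.g. 'INSERT', 'UPDATE', 'DELETE'
    return type(model_cls.__name__ + 'Audit', (object,), attrs)


class Article(object):
    title = ''
    body = ''


ArticleAudit = make_audit_model(Article)
print(ArticleAudit.__name__)           # prints "ArticleAudit"
print(hasattr(ArticleAudit, 'title'))  # prints "True"
```

It's exactly this generated class that syncdb can't see on its own, 
which is why the signal hooks matter.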

Unfortunately, since there's only a syncdb signal, I can't even do 
things like a reset on it, and there's definitely no way currently to 
get the SQL generated from my syncdb signal.  Being able to get the 
CREATE and DROP statements in text as well as each individually would be 
a huge boon to this type of use (and any dynamic model use in general).

Is that a more reasonable example?

Thanks,
George




Re: Changing the options in manage.py, adding more signals (was Re: Adding hooks to methods that generate SQL in django/core/management.py)

2007-08-14 Thread George Vilches

Russell Keith-Magee wrote:
> On 8/12/07, George Vilches <[EMAIL PROTECTED]> wrote:
>> 1) Add a signal to every option?
> 
> If we were going to go down this path, this would be the preferred
> option. However, I'm not sure I'm convinced of the need. Which
> commands exactly do you think require signals?

Let's make the assumption that we can land the changes to manage.py that 
would allow us to merge the SQL logger and the actual execution commands 
into a single keyword, so that things like "sqlall" or "sqlreset" would 
not be necessary, just their equivalent executions.  (We can easily map 
this list backwards if we have to).  Then, the commands that would need 
signals are (with a few footnotes):

   (merge sqlflush)
   flush [--verbosity] [--noinput]

   loaddata [--verbosity] fixture, fixture, ...

   (merge sqlreset)
   reset [--noinput][appname ...]

   (make this executable?)
   sql [appname ...]

   syncdb [--verbosity] [--noinput]
 Create the database tables for all apps in INSTALLED_APPS whose
 tables haven't already been created.

   (we would probably want to keep the following, but have a version 
that's executable as well as printable)
   sqlall [appname ...]

   sqlclear [appname ...]

   sqlcustom [appname ...]

   sqlindexes [appname ...]


As far as the need goes, here's a generic example.  Say I need to create 
a few extra tables for clever caching (something more dense than just 
good indexing) as an extension to my Django app, and those tables are 
very specific to the structure of the model (and as the model changes, 
so will the contents of those tables).  Therefore, I need something that 
would generate proper CREATE TABLE and DROP TABLE commands and indexes 
and such, and it would be highly preferable that when I reset my app, I 
can also reset my additional work that's directly correlated with my app.

I know there is a custom SQL option, but there's no good way to have the 
DROP and the CREATE (or the create indexes, etc.) isolated from one 
another, since the custom SQL option doesn't support reading in one of a 
set of files.  And anyway, static custom SQL files wouldn't be very good 
for this: we want the contents of the table to be accurate to the Model 
at this moment in time, and generating that statically when we have an 
awesome Django framework to tie into doesn't seem very DRY. :)

Thanks,
George




Re: Adding hooks to methods that generate SQL in django/core/management.py

2007-08-13 Thread George Vilches

Russell Keith-Magee wrote:
> On 8/12/07, George Vilches <[EMAIL PROTECTED]> wrote:
>> How about the patch below?  When you create the cursor, if you want
>> access to "don't run this SQL, just have playback available", just use
>> connection.cursor(playback_only=True), and if you want to roll the
> 
> This is a simple approach that would probably suffice for
> management.py based problems, but it isn't really practical on a large
> scale. For example, QuerySets open their own connections, so it won't
> be easy to pass an argument to an ORM call to enable capture of
> playback data.
>
> The configuration option will need to be a little more generic - i.e.,
> putting the entire backend into a record mode - not just a single
> cursor call.

Alright, that makes sense.  So, my thought is, if we can't change the 
call to connection.cursor() on each instantiation, we're going to need 
some sort of static variable store that can get updated at runtime (or 
some global).  I know that settings is right out for anything that 
changes at runtime; is there a Django-friendly way of storing a variable 
of this nature?  If we can store this variable in some non-code-changing 
way, the rest of the patch should be fine, right?  Here are my two 
thoughts for hopefully Django-friendly approaches:

First, Brian Ferring's base class could have a static or class variable 
easily enough, but I'd like this landing not to depend on that refactor, 
since that could be a while coming.

Second, we could add a class-level variable to each DatabaseWrapper, 
since the handles to those seem to be instantiated only once at runtime 
(at least, my short testing with just the Django built-in webserver 
seemed to show that; I assume that for Apache it's once per thread). 
That would be an easy enough variable to update from pretty much 
anywhere in the app:

 connection.playback_only = True

*should* be all we would need, anywhere in the app.  If need be, we can 
add a helper to disguise this call, but every connection.cursor() call 
would then just need to check self.playback_only to get the same effect 
as in the previously sent patch.

Thoughts?
George




Re: Does SELECT 1 FROM... Work on SQL Server and Oracle?

2007-08-12 Thread George Vilches

Simon Greenhill wrote:
> Hi all,
> 
> I'm trying to work out if Ticket #5030 is good to go or not.
> Basically, when save() is called on a model, django does a SELECT
> COUNT(*)... query to check that the primary key is good.
> 
> However, SELECT 1... is apparently quite a bit faster in many
> circumstances, and we should use it if we can. It looks like postgres,
> mysql, and sqlite all handle this nicely, but we're not sure about SQL
> Server and Oracle. If anyone knows, or can apply the patch and check,
> please let me know.

MSSQL is also okay with this.  Using the same setup as Michael's:

CREATE TABLE ABC ( ID INT NOT NULL, BLAH VARCHAR(30) );
ALTER TABLE ABC ADD CONSTRAINT PK_ABC PRIMARY KEY (ID);
INSERT INTO ABC VALUES (1,'ZZZ') ;
INSERT INTO ABC VALUES (2,'WWW') ;

SELECT COUNT(*) FROM ABC WHERE ID = 2;

The estimated execution plan for this is:

SELECT -> Compute Scalar -> Stream Aggregate -> ABC.PK_ABC (Clustered 
Index Seek)


SELECT 1 FROM ABC WHERE ID = 2;


The estimated execution plan for this is:

SELECT -> ABC.PK_ABC (Clustered Index Seek)

Both give the right results.  SELECT 1 definitely looks like the faster 
execution plan.
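As a side note, the existence check under discussion reduces to 
something like the following sketch (the pk_exists helper is made up, 
and sqlite3 stands in for a real backend here):

```python
import sqlite3

def pk_exists(cursor, table, pk_column, pk_value):
    # Identifiers can't be parameterized, so they're interpolated here;
    # fine for a sketch, but only with trusted table/column names.
    # LIMIT 1 lets the engine stop at the first matching row, where
    # COUNT(*) would in principle have to aggregate over every match.
    cursor.execute(
        "SELECT 1 FROM %s WHERE %s = ? LIMIT 1" % (table, pk_column),
        (pk_value,),
    )
    return cursor.fetchone() is not None

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE abc (id INTEGER PRIMARY KEY, blah VARCHAR(30))")
cur.executemany("INSERT INTO abc VALUES (?, ?)", [(1, "ZZZ"), (2, "WWW")])
print(pk_exists(cur, "abc", "id", 2))   # prints "True"
print(pk_exists(cur, "abc", "id", 99))  # prints "False"
```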

Thanks,
George




Changing the options in manage.py, adding more signals (was Re: Adding hooks to methods that generate SQL in django/core/management.py)

2007-08-12 Thread George Vilches

I'm splitting this conversation off from the other conversation on SQL 
logging, because I think that topic has merits all its own, and this one 
has more to discuss and slightly more possibility for 
backwards-incompatibility.  I don't want to pork barrel this issue in 
with a perfectly legitimate one. :)

Russell Keith-Magee wrote:
> On 8/9/07, George Vilches <[EMAIL PROTECTED]> wrote:
>> So, my proposal is this: generate hooks for users.  For each of the
>> get_custom_sql/get_create_sql/etc., add a small portion that checks the
>> installed apps for their own management.py file and an appropriate
>> method.  For instance, "sqlcustom"'s method could be
>> "get_custom_sql_for_model_all", denoting that it's run on every model in
>> every app that is having the current manage.py operation applied to it.
>>   These functions would be expected to return an array of SQL
>> statements, which could then be fit in with the other generated SQL from
>> each of the current built in methods.
> 
> This is actually how the management commands started out - once upon a
> time, you ran ./manage.py install myapp, which was a wrapper around
> calling ./manage.py sqlall myapp and piping the output to the
> database.
> 
> The problem is that this approach isn't very flexible. Some of what
> syncdb does isn't handled at a raw SQL level - we use the ORM to
> generate the commands/queries etc. post_sync handlers, for example,
> would be almost impossible to recode in the way you describe, and any
> user-based post_sync handlers would need to support some sort of
> 'retrieve sql' API.

What I was proposing didn't involve taking the SQL code that Django 
generates and modifying it.  Rather, this would allow the user to add 
more SQL that they generate entirely independently from what Django's 
internals are generating, but based on contents in the app (as opposed 
to static SQL files that can be loaded in).
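To make the shape of that concrete, here's a minimal sketch of how such 
per-app SQL contributions might be collected.  The collect_app_sql 
helper and the fake module are made up; get_custom_sql_for_model_all is 
the hook name from the original proposal:

```python
import types

def collect_app_sql(app_management_modules, model_label):
    """Gather extra SQL contributed by each app's management.py module.

    Each module may optionally define get_custom_sql_for_model_all(),
    returning a list of SQL strings to append to Django's own output.
    """
    statements = []
    for module in app_management_modules:
        hook = getattr(module, 'get_custom_sql_for_model_all', None)
        if hook is not None:
            statements.extend(hook(model_label))
    return statements

# A stand-in for wiki/management.py contributing one audit-table statement.
wiki_management = types.ModuleType('wiki.management')
wiki_management.get_custom_sql_for_model_all = (
    lambda model_label: ['CREATE TABLE %s_audit (id INT)' % model_label]
)

print(collect_app_sql([wiki_management], 'wiki_article'))
# prints "['CREATE TABLE wiki_article_audit (id INT)']"
```

The point is that the user's SQL rides alongside Django's generated SQL 
rather than modifying it.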

Example: Say I'm building a dynamic model.  Say that, for whatever 
reason, I want this dynamic model to have a DB backing, just like any 
Django model. (I know there's already comments on the wiki as to the 
gotchas of this and why this may not be a great example, but it's the 
easiest one for me to explain).  Well, syncdb works fine, if I have a 
signal to dispatch.  But say I want to use something that doesn't have a 
signal, like "sql" or "sqldelete".  I have no way to get my app's 
management method executed.

So possibly, we get rid of "sql", "sqldelete" and the like, and only 
have "create", "delete", etc., which actually do the task, and like you 
said, have a --sql flag that does output only.  If we do this, we still 
need to address how to call the user-level management.py code to make 
sure that all the extra user SQL is *always* included in every possible 
way the manage.py can be run.

Here's the two options I see:

1) Add a signal to every option?  Right now only syncdb has a signal, 
although I have a ticket and patch for adding a signal to reset, see 
http://code.djangoproject.com/ticket/5065 .  Doing this would be pretty 
straightforward and shouldn't break anyone's existing code (since we 
wouldn't be removing the syncdb signal, we'd just be making it more 
granular).  Although, if we have a create signal and a delete signal, 
we'd need to make sure that when running a syncdb we don't also fire 
those signals.  Easy enough, just something to be careful about.

2) Add a callback to every option?  This is similar to the example I 
wrote in the first message.  I'm less married to this one, given the 
idea of SQL logging and playback.

My vote is for 1).  I think that with the playback feature it would be 
pretty sexy, giving us more functionality than is currently in manage.py 
with fewer keywords.  I'll happily write the patch for it if there are 
no objections to the idea.  The only thing the idea currently breaks 
would be some of the old manage.py keywords, but the whole point of 
adding the logger and a --sql flag was to reduce the number of useless 
keywords we have in manage.py, which I'm fully +1 for.  Too many 
redundant/easily merge-able options in there. :)
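For the record, here's roughly the shape option 1) would take.  The 
Signal class below is a toy stand-in for Django's dispatcher, and the 
post_reset name is hypothetical; only post_syncdb exists today:

```python
class Signal(object):
    """Toy stand-in for Django's dispatcher, just to show the shape."""
    def __init__(self):
        self._receivers = []

    def connect(self, receiver):
        self._receivers.append(receiver)

    def send(self, **kwargs):
        # Call every receiver, collecting whatever each returns
        # (e.g. SQL statements for --sql style display).
        return [receiver(**kwargs) for receiver in self._receivers]

# One signal per manage.py command, instead of syncdb-only.
post_syncdb = Signal()
post_reset = Signal()

def create_audit_table_sql(app=None, **kwargs):
    return 'CREATE TABLE %s_audit (id INT)' % app

# An app hooks its dynamic-model SQL into reset as well as syncdb.
post_syncdb.connect(create_audit_table_sql)
post_reset.connect(create_audit_table_sql)

print(post_reset.send(app='wiki_article'))
# prints "['CREATE TABLE wiki_article_audit (id INT)']"
```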

Thanks,
George




Re: Adding hooks to methods that generate SQL in django/core/management.py

2007-08-12 Thread George Vilches

Russell Keith-Magee wrote:

> I would suggest approaching this problem at lower than that -
> installing a filter at the level of the database cursor that diverts
> queries away from the actual database, and into a store. That way, if
> you run the code 'Author.objects.all()' , the backend will get the
> request to invoke 'SELECT * FROM Author', but this will get recorded
> rather than sent to the database.
> 
> You then add a --sql flag to ./manage.py that sets up the recording
> mode on the database backend, and outputs the command buffer at the
> end of execution. If you make this interface generic, anyone could
> invoke SQL recording whenever they want.
> 
> Part of this infrastructure is already in place for logging purposes.
> Improvements to the logging capability that allow for recording and
> playback would be most welcome.

How about the patch below?  When you create the cursor, if you want 
access to "don't run this SQL, just have playback available", just use 
connection.cursor(playback_only=True), and if you want to roll the 
playback, you can either use self.db.queries directly, or use the 
util.CursorDebugWrapper.playback method.  Totally backwards-compatible 
and shouldn't step on any toes (uses existing logging, like you 
mentioned above).  You can easily add this into 
django/core/management.py.  If this is okay, I'll do a patch for all the 
existing backends; it's a simple change now that I've looked through 
everything, assuming there are no changes related to the next paragraph.

Something that I found a little irksome while working on this is that 
the DatabaseWrapper class for each backend doesn't inherit from some 
logical parent.  I know that all the db-specific functionality is 
wrapped by these classes, but there are things that are in each class 
that are most definitely shared functionality, like the mechanism by 
which util.CursorDebugWrapper is instantiated.  We could move that off 
to a method in a base class very nicely, and then the playback_only 
addition wouldn't have to be added to every backend, or the change would 
be more minimal.  Also, I think it might be appropriate to have 
"playback()" be a method of the DatabaseWrapper (connection) rather than 
the CursorDebugWrapper, since if there are any DB-specific separators we 
need to use, there's no way to deal with that from the cursor.  But if 
we don't make a base class, then we're going to have to duplicate 
playback's code in all the backend/(db)/base.py files.  Not very DRY.

Thanks,
George


--- django_orig/django/db/backends/mysql/base.py    2007-08-02 20:59:29.0 -0400
+++ django_live/django/db/backends/mysql/base.py    2007-08-12 09:08:00.0 -0400
@@ -77,7 +77,7 @@ class DatabaseWrapper(local):
         self.connection = None
         return False
 
-    def cursor(self):
+    def cursor(self, playback_only=False):
         from django.conf import settings
         from warnings import filterwarnings
         if not self._valid_connection():
@@ -103,9 +103,9 @@ class DatabaseWrapper(local):
                 cursor = self.connection.cursor()
         else:
             cursor = self.connection.cursor()
-        if settings.DEBUG:
+        if settings.DEBUG or playback_only:
             filterwarnings("error", category=Database.Warning)
-            return util.CursorDebugWrapper(cursor, self)
+            return util.CursorDebugWrapper(cursor, self, playback_only=playback_only)
         return cursor
 
     def _commit(self):


--- django_orig/django/db/backends/util.py    2007-08-02 20:59:29.0 -0400
+++ django_live/django/db/backends/util.py    2007-08-12 09:01:02.0 -0400
@@ -9,14 +9,16 @@ except ImportError:
     from django.utils import _decimal as decimal    # for Python 2.3
 
 class CursorDebugWrapper(object):
-    def __init__(self, cursor, db):
+    def __init__(self, cursor, db, playback_only=False):
         self.cursor = cursor
         self.db = db
+        self.allow_execute = not playback_only
 
     def execute(self, sql, params=()):
         start = time()
         try:
-            return self.cursor.execute(sql, params)
+            if self.allow_execute:
+                return self.cursor.execute(sql, params)
         finally:
             stop = time()
             self.db.queries.append({
@@ -27,7 +29,8 @@ class CursorDebugWrapper(object):
     def executemany(self, sql, param_list):
         start = time()
         try:
-            return self.cursor.executemany(sql, param_list)
+            if self.allow_execute:
+                return self.cursor.executemany(sql, param_list)
         finally:
             stop = time()
             self.db.queries.append({
@@ -40,6 +43,10 @@ class CursorDebugWrapper(object):
             return self.__dict__[attr]
         else:
             return getattr(self.cursor, attr)
+
+    def playback(self):
+        return ';'.join([query['sql'] for query in self.db.queries])
+


Re: Maybe we need more triagers? Or something else?

2007-08-10 Thread George Vilches

[EMAIL PROTECTED] wrote:
> Another thing to possibly consider is putting out some general
> guidelines about the thought process the developers go through to
> evaluate the tickets with design decisions needed.  In other words,
> how do the developers make the decision to include/exclude something?
> How do they decide to accept the ticket as coded vs. accept the need
> for change but not like the implementation?

+1 to almost everything on the checklist, except:

> - We'll give more weight to brand new functionality than minor
> improvements to existing capabilities

In most cases, the minor improvements likely only require a few minutes 
of an SVN-enabled developer's review, and handling them would hopefully 
much more quickly reduce the size of the triage list.  Many of the 
outstanding minor improvements I found while perusing Trac fit exactly 
this description and don't require large amounts of discussion.  Not 
quite so with new functionality.

Keeping these little fixes moving quickly would probably waste the least 
amount of overall developer time, since they wouldn't require 
regenerating few-line patches just because they've gone stale against 
the repo; the new-functionality patches take weeks to decide on anyway, 
so at least one full patch regen has to be done for those regardless.

gav
