Re: Aggregates
Distinct can cover most use cases for this, but in general:

MyModel.objects.filter(some_kind_of_manytomany__blah__blah__blah=X).select_related(some_kind_of_manytomany__blah).group_by(some_kind_of_manytomany__blah)

If you tack on anything for an additional select that distinct can't handle, you need GROUP BY. BTW, your example doesn't work: HAVING is applied after the rows are grouped, not within the WHERE clause. That specific use case *has* to execute exactly as shown.

On Mar 24, 6:12 pm, "Russell Keith-Magee" <[EMAIL PROTECTED]> wrote:
> On Mon, Mar 24, 2008 at 7:19 AM, David Cramer <[EMAIL PROTECTED]> wrote:
>
> > So we're not going to support group by at all, even though it is
> > extremely simple? I don't understand why everyone says mine (and
> > Curse's and many other people's) use of SQL is so uncommon. I'd
> > guarantee almost every developer I know who writes any kind of
> > scalable code (and I've seen what they've written) uses GROUP BY in
> > SQL much more than just for SUM, or COUNT, or any of those operations.
>
> And yet, the example you give is a count on related objects.
>
> > SELECT my_table.* FROM my_table JOIN my_other_table WHERE my_other
> > table.my_table_id = my_table.id GROUP BY my_table.id HAVING COUNT(1)
> > > 1;
>
> > How do you ever hope to achieve something like this in the ORM without
> > patches like mine?
>
> Let's see if I can make myself clearer by using your example.
>
> You have provided some sample SQL. However, that wasn't the start
> point. Once upon a time, you had a problem you needed to solve: "I
> need to find all the MyTable objects that have more than one related
> OtherTable object". Notice that this question doesn't use the phrase
> "GROUP BY".
>
> Django's ORM didn't support aggregates, so you hand-crafted some SQL,
> and since aggregates were involved, you needed to use a GROUP BY
> clause. This is because GROUP BY is the way SQL handles aggregation.
>
> Now, we're talking about adding aggregates to Django.
> So, we need to find a way to express your original question as Django
> syntax. The solution to this problem isn't "add GROUP BY to Django
> syntax" - it's to find an elegant way to represent the underlying
> problem as an object-based syntax. Based on the discussions so far,
> something like this would seem appropriate:
>
> MyTable.objects.aggregate(othertable__count='related_count').filter(related_count__gt=1)
>
> Notice that this query doesn't require the use of the phrase GROUP BY
> in the ORM syntax. When this hits the database, GROUP BY will
> certainly be required. However, the terms for the GROUP BY can be
> entirely implied from the rest of the query. There is no need to
> expose GROUP BY in the ORM syntax.
>
> Your original question wasn't "How do I perform a GROUP BY over a
> MyTable" - it was a real-world question that needed an object-based
> answer. The goal of the Django ORM is to solve your _actual_ problem,
> not your SQL problem.
>
> > My main argument is there's no reason to *not* include the
> > functionality except that a few select people say "we don't need it".
>
> There is a very good reason, which I have told you several times. The
> Django ORM does not provide an alternate syntax for SQL. It provides a
> way of interrogating an object model that happens to be backed by a
> database. As a result, the external APIs are not SQL-specific - they
> encompass broader object-based use cases. "Just adding GROUP BY"
> doesn't achieve any of the Django design goals.
>
> Now, if you can present a use case that isn't satisfied by the
> discussions we are currently having - that is, a usage of GROUP BY
> that steps outside the bounds of what could be implied from other
> parts of an ORM query - let us know. We want to be able to cover as
> many potential queries as possible.
> However, even then, I doubt that the solution will be "add GROUP BY" -
> there will be another class of object-based queries that we haven't
> considered, for which we will need a new _object-based_ syntax.
>
> Yours,
> Russ Magee %-)

--~--~-~--~~~---~--~~
You received this message because you are subscribed to the Google Groups "Django developers" group.
To post to this group, send email to django-developers@googlegroups.com
To unsubscribe from this group, send email to [EMAIL PROTECTED]
For more options, visit this group at http://groups.google.com/group/django-developers?hl=en
-~--~~~~--~~--~--~---
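The real-world query behind this thread ("all parent rows with more than one related row") can be exercised directly against a database. Below is a minimal sketch using Python's sqlite3 module with throwaway in-memory tables; the table names follow the SQL quoted above, but the data is invented for illustration:

```python
import sqlite3

# In-memory database with a parent table and a related table.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE my_table (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE my_other_table (
        id INTEGER PRIMARY KEY,
        my_table_id INTEGER REFERENCES my_table(id)
    );
    INSERT INTO my_table VALUES (1, 'one-child'), (2, 'two-children');
    INSERT INTO my_other_table VALUES (1, 1), (2, 2), (3, 2);
""")

# "All my_table rows with more than one related row" -- the real-world
# question that the GROUP BY ... HAVING phrasing encodes in SQL.
rows = conn.execute("""
    SELECT my_table.id, my_table.name
    FROM my_table
    JOIN my_other_table ON my_other_table.my_table_id = my_table.id
    GROUP BY my_table.id
    HAVING COUNT(*) > 1
""").fetchall()
print(rows)  # -> [(2, 'two-children')]
```

Note that HAVING filters the grouped rows; moving the COUNT condition into the WHERE clause is a syntax error, which is the point made above.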
Re: Public spec is needed for writing ORM adapters
On Mon, Mar 24, 2008 at 11:15 PM, Alex P <[EMAIL PROTECTED]> wrote:
> 1. The original message was in brief: there is a growing interest in
> the Python community for DB2 support, and we (developers behind IBM_DB
> driver, DB-API wrapper and SQLAlchemy adapter) are interested to help
> in the Django context if some _minimal_ documentation is provided in
> an unrestricted form, even under an Open Source license like BSD (even
> with sample/example API usage code, for that matter).

The thing is, though, that there *is* a minimal API documented: django.db.backends.dummy. If this were Java, you could take that as laying out the interface to be implemented. The confusion came from the apparent refusal to deal with this because it's in the form of open-source code.

--
"Bureaucrat Conrad, you are technically correct -- the best kind of correct."
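For readers unfamiliar with the dummy backend, the pattern is easy to sketch. The following is a simplified illustration of the idea only (names and messages are approximations, not the exact Django source): every name the backend contract requires is present, and each one fails loudly, so the module itself spells out the interface a real backend must implement.

```python
# Simplified sketch of the django.db.backends.dummy pattern:
# the module doubles as interface documentation because every
# required attribute exists and complains when used.

class ImproperlyConfigured(Exception):
    pass

def complain(*args, **kwargs):
    # Stand-in for every backend operation a real adapter must supply.
    raise ImproperlyConfigured(
        "You haven't set the DATABASE_ENGINE setting yet.")

class DatabaseWrapper:
    # Each required operation is the complaining stub; a real backend
    # replaces them with working implementations.
    cursor = complain
    _commit = complain
    _rollback = complain

    def __init__(self, **kwargs):
        pass

def quote_name(name):
    complain()
```

A real backend "implements the interface" simply by providing working versions of each of these names.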
Custom management commands with django-admin.py (again)
So, I've worked up a new patch that works for me with manage.py and django-admin.py, both with the settings flag and the environment variable. The only real difference from the original is that now, in setup_environ, DJANGO_SETTINGS_MODULE is only set if it hasn't been already. If DJANGO_SETTINGS_MODULE was set to 'localsettings', setup_environ was munging it to 'myproject.localsettings'.

I don't really use setup_environ, ever, so I'm not sure if this will have any unintended consequences, but it seems reasonable to me. Anyone care to test this or comment before I check it in?

http://code.djangoproject.com/ticket/5943

Joseph
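The guard described above can be sketched in isolation. This is an illustrative simplification only (the real setup_environ takes a settings module object, not strings, and does more work):

```python
import os

def setup_environ(settings_mod_name, project_name="myproject"):
    # Sketch of the patched behaviour: only fill in
    # DJANGO_SETTINGS_MODULE when the caller hasn't set it already,
    # so an explicit 'localsettings' isn't munged into
    # 'myproject.localsettings'.
    os.environ.setdefault(
        "DJANGO_SETTINGS_MODULE",
        "%s.%s" % (project_name, settings_mod_name))
    return os.environ["DJANGO_SETTINGS_MODULE"]
```

With the variable unset, the default is computed; with it preset to 'localsettings', that value is left untouched.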
Re: Public spec is needed for writing ORM adapters
Not because I'm never going to give up, but because there are two points that need to be addressed, I am now resurrecting this thread once more:

1. First, surprisingly but consistently, the message gets distorted, and for the benefit of the original poster and at least 5 or 6 other Python and DB2 enthusiasts in the community, I'd like to clean up the noise once again and address anyone in the community that has a vested interest in the subject.

2. And second, does the majority of the Django community really think that documentation such as SQLAlchemy has is "onerous" (i.e., according to the dictionary: "troublesome or oppressive; burdensome")? I hope not, but some may disagree, and I can understand that, if given some context.

So, the two issues that cry for clarification:

1. The original message was, in brief: there is a growing interest in the Python community for DB2 support, and we (developers behind the IBM_DB driver, DB-API wrapper and SQLAlchemy adapter) are interested in helping in the Django context if some _minimal_ documentation is provided in an unrestricted form, even under an Open Source license like BSD (even with sample/example API usage code, for that matter). If those interested parties, which will likely not include core Django developers, are able to produce such a minimal subset of docs (of the external APIs required), and these docs can be reviewed informally by Django gurus through this forum, then we can probably have "lift-off" relatively soon after. Conclusion: it's not a call to Django core developers, and we do respect their opinion and limited time on the subject; it's a call to interested parties already asking for ways to get this started, therefore it's certainly far from any perceived "friction". I can only hope core Django developers would not feel any such pressure, but just acknowledge the trend.

2.
On more than one occasion, the _minimal_ "DB vendor specific" doc (dev guide) was construed as "documenting internals", "burden", "heavy-weight", while the request was specifically targeting the opposite: solely API _externals_ (signature and contract, i.e. input/output params and entry/exit conditions), as lightweight as possible to get started running the compliance test suite. It is what worked with Rails and SQLAlchemy, and I can hardly see why it wouldn't work with Django.

The way it worked with SQLAlchemy was to have one of the most motivated community members (thanks, Ken!), not a core SA dev, submit a minimal set of API signatures that were a "must implement" priority, and then, step by step through direct dialogue, we found the missing pieces of info that helped us fix the test suite failures. That's where I also need to thank Florian Bösch and Michael Bayer for their help with entry point support in SQLAlchemy. Really wonderful people in the SQLAlchemy dev community!

But I think Jacob got it quite right in a post above: "Let's all shut up ... and write some documentation :)". It's a message mostly for the Python and DB2 enthusiasts in this Django community, and we hope to help them, too.

Thanks for reading this through with patience :-)

Alex P
Re: Signal performance improvements
On Mon, Mar 24, 2008 at 10:55:03PM -0500, Jeremy Dunck wrote:
> One other enhancement I thought might be good is to have a
> Model.pre_init signal

Such a signal exists already, last I looked; same for post_init... Please clarify.

> Other similar class-based
> receiver lists might speed things up in some cases; the down-side is
> that each receiver list has some overhead, even if empty.

Please see ticket #4561... my attack on dispatch was to disable it when nothing was listening, dynamically re-enabling it when there was a listener. Via that, there is *zero* overhead for disconnected signal pathways, and only a slight overhead addition when something is listening (specifically, the addition of another function call), but pretty much negligible.

I suspect you and I came at it from different pathways; my plan was to target the common optimization first (model initialization, a 25% boost when no listener is connected), then work my way down. The next step I was intending, presuming that patch ever got committed, was to move the actual connect/send directly into the signal instances, including the arg checking; might I suggest merging the functionality of the two? Literally, you're doing the second step I was after, while the first step delivers a pretty significant gain for scenarios where signals aren't currently used.

~brian
Signal performance improvements
Ticket #6814 is working to improve signal performance. Feature tradeoffs:

Strong/weak receiver connections - strong-only is a big win -- about 2x performance. Any receivers going out of scope (and not to be called) would need to be explicitly disconnected if we remove support for weak receivers.

.send API enforcement - about a 20% overhead to ensure signals are sent properly. Worth it?

API whittling for extensibility: Leo has correctly pointed out that **kwargs can be used for the same purpose. Functions that accept kwargs are fast-pathed, but maybe we should change to passing all receiver arguments with kwargs to avoid the overhead entirely? kwargs vs. whittling: 20% overhead to match APIs.

In general, the latest patch optimizes for .send at the expense of extra overhead in .connect and .disconnect. .connect and .disconnect are O(n) on the number of active receivers for that signal; I don't anticipate that being a big problem, though.

One other enhancement I thought might be good is to have a Model.pre_init signal, similar to Model.DoesNotExist, so that .connect(class=Model, ...) would allow listening to, say, pre_init only for instantiation of that model. Other similar class-based receiver lists might speed things up in some cases; the down-side is that each receiver list has some overhead, even if empty.

Please see the ticket for timings and alternate implementations. Regardless of the outcome, the patch also adds regression tests for signals, which I believe would be useful.
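The tradeoffs above can be made concrete with a toy dispatcher. This is a sketch only, not Django's actual implementation: connect/disconnect do O(n) scans, send is the hot path with a cheap no-listener fast path, and receivers take **kwargs so no argument-matching machinery is needed.

```python
class Signal:
    """Toy signal illustrating the tradeoffs discussed above."""

    def __init__(self):
        self.receivers = []

    def connect(self, receiver):
        if receiver not in self.receivers:   # O(n) scan, off the hot path
            self.receivers.append(receiver)

    def disconnect(self, receiver):
        if receiver in self.receivers:       # O(n) scan, off the hot path
            self.receivers.remove(receiver)

    def send(self, sender, **kwargs):
        if not self.receivers:               # cheap no-listener fast path
            return []
        # Every receiver accepts **kwargs, so all arguments can be
        # forwarded without inspecting receiver signatures.
        return [(r, r(sender=sender, **kwargs)) for r in self.receivers]


sig = Signal()
seen = []

def handler(sender, **kwargs):
    seen.append(kwargs.get("value"))
    return "ok"

sig.connect(handler)
results = sig.send(sender=None, value=42)
```

The empty-receivers check is why a disconnected signal pathway costs almost nothing, while each extra receiver list still carries the overhead of existing at all.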
Adding InnoDB table engine to sql.py create table statements
I wanted to see what everyone thinks about the idea of adding a patch to allow for a setting that will automatically append a table engine type to the CREATE TABLE statements generated by django.core.management.sql. I created a patch for my local sql.py and it seems to work well. It consists of adding a DATABASE_TABLE_ENGINE = 'InnoDB' variable to my project's settings.py, then checking for that setting in sql_model_create and many_to_many_sql_for_model and, if it's there, appending the table engine to the CREATE TABLE output.

I can imagine a few reasons why this may not fly, but for someone who uses MySQL as their db this is really helpful. Anyway, I'd love to hear your thoughts on the approach and the idea. Also, if anyone is interested, here is an svn diff of the changes I made to my local sql.py:

http://www.ninjacipher.com/wp-content/uploads/2008/03/table-engine-sql.diff
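The core of the approach is just string-level post-processing of the generated statements. A minimal sketch, assuming a hypothetical DATABASE_TABLE_ENGINE setting and MySQL's ENGINE clause syntax (the helper name is illustrative, not from the actual patch):

```python
def append_table_engine(create_sql, table_engine=None):
    """Append a MySQL storage-engine clause to a CREATE TABLE statement.

    table_engine would come from a settings value such as
    DATABASE_TABLE_ENGINE = 'InnoDB'; when unset, the statement is
    returned unchanged, preserving current behaviour.
    """
    if not table_engine:
        return create_sql
    return "%s ENGINE=%s" % (create_sql, table_engine)


sql = 'CREATE TABLE "foo" ("id" integer NOT NULL PRIMARY KEY)'
patched = append_table_engine(sql, "InnoDB")
```

In the actual patch this transformation would run inside sql_model_create and many_to_many_sql_for_model, as described above.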
Re: Aggregates
A quick ping: I've written a small patch against qs-rf which might facilitate the development of aggregates, and solve David Cramer's problem of having to fork Django. The idea is to allow for a custom QuerySet subclass to be used by default everywhere, so if you subclass django.db.models.query.BaseQuerySet and provide group_by functionality, then you can point Django at your new subclass and Django will use it everywhere.

Patch is here: http://code.djangoproject.com/ticket/6875

-Eric Florenzano
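The shape of the pluggable-QuerySet idea can be sketched outside Django entirely. The class names and hook below are illustrative stand-ins, not the patch's actual code: a base class that subclasses extend with extra query methods, and a single module-level hook that the rest of the framework consults instead of hard-coding the class.

```python
class BaseQuerySet:
    """Stand-in for a base query-set class that subclasses extend."""

    def __init__(self, items):
        self.items = list(items)

    def filter(self, pred):
        # Returning type(self) keeps the subclass's extra methods
        # available after chaining.
        return type(self)(i for i in self.items if pred(i))


class GroupByQuerySet(BaseQuerySet):
    """A subclass adding group_by, as in the scenario described above."""

    def group_by(self, key):
        groups = {}
        for item in self.items:
            groups.setdefault(key(item), []).append(item)
        return groups


# The single hook "Django" would consult everywhere, instead of
# hard-coding BaseQuerySet:
DEFAULT_QUERYSET_CLASS = GroupByQuerySet

qs = DEFAULT_QUERYSET_CLASS([1, 2, 3, 4])
evens = qs.filter(lambda x: x % 2 == 0)
groups = qs.group_by(lambda x: x % 2)
```

Pointing the hook at a subclass is what lets custom behaviour propagate "everywhere" without forking.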
Re: Porting Django to Python 3.0
> I wasn't at PyCon, and haven't done any 3.0 porting work myself, so I
> could be behind the times, but my understanding of current porting
> advice (based on PEP 3000) was that it's not going to be possible to
> support 2.x and 3.x from a single codebase in many cases (even with
> 2to3) if Python < 2.6 needs to be supported. Right?

It depends on whom you ask. People often give this advice *assuming* that it couldn't possibly work. I happen to think differently. I see no technical reason why you couldn't support all versions from, say, 2.1 up to 3.0, from a single code base, if you rely on 2to3. (There are severe technical reasons that make that difficult if you don't want to rely on 2to3, such as not being able to get the exception object in an except clause.)

> I also wonder, is it worth adding this stuff to Django *now* for a
> transition that is likely to be quite far off, with at least two big
> merges (qs-rf, nfa, possibly gis) coming up soonish?

That's completely up to you; I don't want to interfere with the project management. I just wanted to make it absolutely clear that there is no *technical* reason why Django couldn't support 3.x. Beyond the question of whether that is wise from a project management point of view, it's also still a lot of work to make it work fully, people might want to focus on other aspects, this is free software, and all that.

Regards,
Martin
Re: Django + SQLAlchemy as a potential GSoC project
On Mon, Mar 24, 2008 at 5:19 PM, Ben Firshman <[EMAIL PROTECTED]> wrote:
>
> On 24 Mar 2008, at 20:48, Rob Hudson wrote:
> >
> > On 3/24/08, James Bennett <[EMAIL PROTECTED]> wrote:
> >>
> >> On Mon, Mar 24, 2008 at 3:23 PM, Ben Firshman <[EMAIL PROTECTED]> wrote:
> >>> Would proposing a complete replacement be a tad too controversial
> >>> for a GSoC project?
> >>
> >> Yes. It also wouldn't succeed as a project, because it's the Google
> >> *Summer* of Code, not the Google Several Years of Code.
> >
> > And it sounds like 'empty' and 'brosner' are making good headway on it...
> > http://blog.michaeltrier.com/2008/3/21/django-sqlalchemy
>
> Yes, but that is intended as purely a separate plugin, not as
> something part of Django. The code is there, it would just be a matter
> of figuring out the logistics (and politics!) of working it into the
> official distribution.

You might want to go read some of the old threads on this list about the branch. IIRC, with the query-set refactor coming, the core is not likely to ever include support for a SQLAlchemy backend. The core devs have indicated that if and when it matures, it will remain a third-party plugin. At most it could be distributed as a contrib app (or similar) with the distribution. Again, this is my recollection, so I could be a little off. The archives should shed more light on that.

--
Waylan Limberg
[EMAIL PROTECTED]
Auth and model inheritance idea
I started playing with subclassing the auth models today. What's nice is that when you subclass User with, say, UserProfile, all users get a .userprofile property, so the functionality of get_profile() is there for free, with the bonus of multiple User subclasses if that's needed.

I was thinking it might be useful if there were a setting like AUTH_USER_CLASS so that the specified subclass of User is used for request.user, create_user(), authenticate(), login(), the auth views, etc. Maybe something like this is already in the plans, since get_profile() seems out of place or needs updating for use with inheritance, but I couldn't find anything about it.

cheers,
Justin
Re: Django + SQLAlchemy as a potential GSoC project
On 24 Mar 2008, at 20:48, Rob Hudson wrote:
>
> On 3/24/08, James Bennett <[EMAIL PROTECTED]> wrote:
>>
>> On Mon, Mar 24, 2008 at 3:23 PM, Ben Firshman <[EMAIL PROTECTED]> wrote:
>>> Would proposing a complete replacement be a tad too controversial
>>> for a GSoC project?
>>
>> Yes. It also wouldn't succeed as a project, because it's the Google
>> *Summer* of Code, not the Google Several Years of Code.
>
> And it sounds like 'empty' and 'brosner' are making good headway on it...
> http://blog.michaeltrier.com/2008/3/21/django-sqlalchemy

Yes, but that is intended as purely a separate plugin, not as something part of Django. The code is there, it would just be a matter of figuring out the logistics (and politics!) of working it into the official distribution.

Ben
Re: Django + SQLAlchemy as a potential GSoC project
On 3/24/08, James Bennett <[EMAIL PROTECTED]> wrote:
>
> On Mon, Mar 24, 2008 at 3:23 PM, Ben Firshman <[EMAIL PROTECTED]> wrote:
> > Would proposing a complete replacement be a tad too controversial for a
> > GSoC project?
>
> Yes. It also wouldn't succeed as a project, because it's the Google
> *Summer* of Code, not the Google Several Years of Code.

And it sounds like 'empty' and 'brosner' are making good headway on it...
http://blog.michaeltrier.com/2008/3/21/django-sqlalchemy
Re: Django + SQLAlchemy as a potential GSoC project
On Mon, Mar 24, 2008 at 3:23 PM, Ben Firshman <[EMAIL PROTECTED]> wrote:
> Would proposing a complete replacement be a tad too controversial for a
> GSoC project?

Yes. It also wouldn't succeed as a project, because it's the Google *Summer* of Code, not the Google Several Years of Code.

--
"Bureaucrat Conrad, you are technically correct -- the best kind of correct."
Django + SQLAlchemy as a potential GSoC project
I have been considering Django + SQLAlchemy as a potential Summer of Code project. I understand there is a branch which has gone nowhere, and this project: http://code.google.com/p/django-sqlalchemy/

I have read numerous comments on here about either entirely replacing the current database code with SQLAlchemy or at least providing it as an official alternative. What is the current consensus on this? Would proposing a complete replacement be a tad too controversial for a GSoC project?

I'd be interested to hear your views.

Thanks,
Ben
Re: robust_apply, signal refactoring and #6857
On Mon, Mar 24, 2008 at 10:30 AM, Jeremy Dunck <[EMAIL PROTECTED]> wrote:
>
> On Sat, Mar 22, 2008 at 3:04 PM, Leo Soto M. <[EMAIL PROTECTED]> wrote:
> ...
> > On the other hand, all this gymnastics is done to allow receivers which
> > do not conform to the full API of a signal (i.e., do not accept all
> > signal arguments). What is the real reason behind this? Signal
> > "extensibility"?
>
> Yeah, that's the idea. We're trying to keep some backwards
> compatibility, and the existing system provides that argument
> matching. Even some existing signal handlers in Django rely on it.
>
> Even so, it may be worth dropping. Numbers soon.

I am sure you already know it, but let me point out that this backward incompatibility would be very easy to fix, just by adding a **kwargs formal argument to every signal receiver. It seems like the idiomatic way to have "extensible" method signatures.

Then, all the machinery messing with co_code would disappear. As a side effect, the non-descriptor implementation of method saferefs could use curry(), for full compatibility with the default saferef implementation.

--
Leo Soto M.
http://blog.leosoto.com
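The point about **kwargs making receiver signatures "extensible" is easy to demonstrate in isolation (the receiver and argument names below are invented for illustration): when a signal later grows a new argument, a receiver that accepts **kwargs keeps working unchanged, while an exact-signature receiver breaks.

```python
def rigid_receiver(sender, instance):
    # Matches the signal's original API exactly -- and only that API.
    return instance

def tolerant_receiver(sender, instance, **kwargs):
    # Accepts any future arguments without caring about them.
    return instance

def send_newer_signal(receiver):
    # A newer version of the signal passes an extra 'created' argument.
    return receiver(sender=object, instance="obj", created=True)
```

The dispatcher's co_code inspection exists purely to let rigid receivers survive; mandating the **kwargs convention removes the need for it.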
Re: Usage of items() vs iteritems()
I'd venture that it's a simple oversight. items() is likely taking more memory in addition to being slower, because it creates a list of the items while iteritems() simply returns an iterator.

On Mar 24, 9:51 am, David Cramer <[EMAIL PROTECTED]> wrote:
> I've been having to dig into code vs. documentation lately, and there
> are *tons* of uses of items() vs. iteritems(). I was wondering why this
> is?
>
> I did a quick benchmark, on an example I found in newforms, to try and
> give myself an answer:
>
> In [31]: timeit.Timer("attrs2=attrs.copy();[(field_name,
> attrs2.pop(field_name)) for field_name, obj in attrs2.items()]",
> "attrs = dict((('adfadsfasdf', 'hello'),)*10)").timeit()
> Out[31]: 1.7580790519714355
>
> In [32]: timeit.Timer("attrs2=attrs.copy();[(field_name, obj) for
> field_name, obj in attrs2.iteritems()]",
> "attrs = dict((('adfadsfasdf', 'hello'),)*10)").timeit()
> Out[32]: 1.2866780757904053
>
> The first example is what's currently in newforms, and I can't see it
> saving much memory, for being that much slower.
Re: Streaming Uploads Discussion
Now that we actually have a working patch [1], there are a few details I'd like to raise here.

Major Issues
============

Supporting dictionaries in form code
------------------------------------

While we can have file objects that support dictionary lookup to make those backwards compatible, there's still the API of the forms code. As an example, there are a lot of tests that look like::

    TextFileForm(data={'description': u'Assistance'}, files={'file': {'filename': 'test1.txt', 'content': 'hello world'}})

This is an example of something using a dictionary instead of an UploadedFile instance when using the forms. However, the forms should *probably* be using the new file-object API (use .chunks()/.read() rather than ['content']), so that would mean they would break if fed a dictionary like the above example. I see three options to deal with this:

1. Leave the forms code alone, causing (a) DeprecationWarnings and (b) the files to be read into memory when saving.
2. Modify the form code to access the attributes of the UploadedFile, but on AttributeError use the old dict-style interface and emit a DeprecationWarning.
3. Modify the form code to use the new UploadedFile object, but just break on the old dictionary access, as it's largely an internal API anyway.

Without thinking about it, I wrote the patch for (3) and modified the tests appropriately. I'm now thinking people will probably want (2) more. The other issue is what we should do with the tests. Should we leave them? Should we copy them and create a copy for the new style? Should we replace them with the new style?

Having upload_handlers be a list object
---------------------------------------

I know that .set_upload_handler() was really cool, but I've discarded it to support multiple handlers (see below for details). So now I had to think of what to replace it with. At first I thought: .append_upload_handler(), but then I realized that I may want .insert_upload_handler() and .delete_upload_handler(). So before long I realized that I just want people to have a list interface.
Therefore, I decided to just leave it as a plain old list. Thus, to add an upload handler, you'd just write::

    request.upload_handlers.append(some_upload_handler)

And to replace them all::

    request.upload_handlers = [some_upload_handler]

I've made a few efforts to ensure that it will raise an error if the upload has already been handled. I know this isn't as simple as the .set_upload_handler() interface, but I think it's the simplest way we can support list modification/replacement in a useful fashion. What do people think about this?

(Mis)Handling Content-Length
----------------------------

There's probably not much room for argument here, but it's worth asking a larger group. Currently, when you try uploading large files (~2GB and greater), you will get a weird Content-Length header (less than zero, overflowing). The problem is two-fold:

1. We can't easily exhaust the request data without knowing its length.
2. If we don't exhaust the request data, we get a sad Connection Reset error from the browsers.

This means that even if we wanted to display a useful error page telling people why this happened, it won't happen, because we'll just get Connection Reset errors. We can just ignore it, or we can try modifying the request streams to set timeouts and exhaust the input. My perusal of the Apache docs tells me that there's discard_request_body, but unfortunately even this seems to depend on Content-Length. Should I/we just ignore this and expect people to be sane and upload reasonable data?

Revised API
===========

I was about to write a huge document that described the API in here. But then I realized I might as well write it in reStructuredText. Then I realized I might as well format it like a Django document. And well... you can see the outcome at: http://orodruin.axiak.net/upload_handling.html

Let me know if you have any comments on the API. (Things like how it deals with multiple handlers could be up for discussion.)
Addendum
========

One last note: in order to test the API and everything, I created a Google Code project, 'django-uploadutils' [2], which I hope will become a collection of upload handlers for doing common tasks. If anyone is interested in helping out with this, I'd be happy to add you as a contributor.

Regards,
Mike

[1] http://code.djangoproject.com/ticket/2070
[2] http://code.google.com/p/django-uploadutils/
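To make the handler side of the proposal concrete, here is a minimal sketch of an upload handler under the API described above. The method names follow the proposal (new_file, receive_data_chunk, file_complete) but should be treated as provisional, and the class itself is invented for illustration; in the proposed semantics, returning None from receive_data_chunk swallows the chunk so handlers later in the upload_handlers list don't see it.

```python
class InMemoryUploadHandler:
    """Illustrative handler that accumulates chunks in memory."""

    def __init__(self):
        self.chunks = []

    def new_file(self, field_name, file_name, content_type, content_length):
        # Called once per uploaded file, before any chunks arrive.
        self.file_name = file_name

    def receive_data_chunk(self, raw_data, start):
        self.chunks.append(raw_data)
        return None  # swallow the chunk: later handlers won't see it

    def file_complete(self, file_size):
        # Called when the file is fully uploaded; return the file object
        # (here, just the raw bytes for simplicity).
        return b"".join(self.chunks)


# Simulate the upload machinery driving the handler:
handler = InMemoryUploadHandler()
handler.new_file("file", "test1.txt", "text/plain", 11)
for chunk in (b"hello ", b"world"):
    handler.receive_data_chunk(chunk, 0)
content = handler.file_complete(11)
```

Under the list-based API, installing it would be a matter of `request.upload_handlers.append(InMemoryUploadHandler())`.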
Re: Porting Django to Python 3.0
On Mar 23, 7:49 pm, Martin v. Löwis <[EMAIL PROTECTED]> wrote: > At the PyCon sprint, I started porting Django to Python 3.0. In the > process, I had to make a number of changes to Python, so this port > currently requires the Python 3.0 subversion head (soon to be released > as 3.0a4). ... > a) split the patch into separate chunks, and contribute them > separately. > b) hand the entire patch over to somebody interested in completing it > c) complete it, then submit it > d) abandon it Very cool! Maybe it would be best not to apply it as separate patches on the current trunk, but to make it a branch, at least until it can be clearly established that the single-version-plus-2to3 approach will actually work for all of Django. I wasn't at PyCon, and haven't done any 3.0 porting work myself, so I could be behind the times, but my understanding of current porting advice (based on PEP 3000) was that it's not going to be possible to support 2.x and 3.x from a single codebase in many cases (even with 2to3) if Python < 2.6 needs to be supported. Right? I also wonder, is it worth adding this stuff to Django *now* for a transition that is likely to be quite far off, with at least two big merges (qs-rf, nfa, possibly gis) coming up soonish? In any case, it's cool to see the work, and I look forward to the day I'm running Django on Python 3. pb
Re: Public spec is needed for writing ORM adapters
Alex, Antonio, & Leon: IBM should understand our frustration when a spec exists (in `django.db.backends`) but, because it is in source code form, your particular IBM department cannot look at it. Thus, IBM is attempting to shift the burden onto our community to produce onerous technical documentation to _your_ requirements. This is the source of the friction you are experiencing on this list. Moreover, there are actually several legitimate reasons for the lack of a spec: (i) the database internals are being refactored, and (ii) each RDBMS is different and has its own quirks that are dealt with in its respective backend. For example, Oracle doesn't provide LIKE and OFFSET capabilities, so those have to be worked around with a custom QuerySet. Producing a heavy-weight public-domain spec will bind us to a model that won't allow us to adapt to the different variations of SQL dialects -- and I'm sure that DB2 and Informix will have their fair share of quirks. How was this problem solved with other OSS projects that IBM worked with -- e.g., has this process been recorded for other OSS projects (like Django) to examine? Was this on the SQLAlchemy mailing lists, in private discussions with their developers, and/or all of the above? Was this somehow less of an issue because SQLAlchemy and its documentation are provided under the MIT license? Due to the scarcity of the core developers' time, don't expect this documentation to appear absent some incentive. My last experience trying to work with IBM's commercial databases was very painful[1], and I personally would not invest myself again unless I was paid to. If IBM is truly dedicated to having a Django database backend, consider transferring this to a department that can look at OSS code, or be willing to hire someone to produce the documentation to your own standards. Other risk-reduction measures, such as indemnification insurance, are also available for your corporate customers.
Regards, -Justin [1] "Informix", django-users, http://groups.google.com/group/django-users/browse_thread/thread/3f0f772e84befaa1
Re: Public spec is needed for writing ORM adapters
This is quite the discussion thread. I was having a hard time trying to figure out where to insert this post. Let me introduce myself. I am Leon Katsnelson, and I am responsible for Product Management for the IBM Data Management portfolio, which includes the DB2 and IDS database servers as well as Data Studio, to mention just a very few products. In other words, if there is anything wrong with our approach, you can blame me and leave poor Alex and Antonio be. They are really good developers who thrive on technology and have been pushing us to fully embrace Django. Let me get right to the genesis of this thread. Will IBM be releasing a commercial-grade Django driver for its databases? I can't say this in an open forum. I can, however, confirm that we have been watching Django with a great deal of interest as part of our overall commitment to supporting scripting languages (PHP, Ruby, Python, Perl, etc.). So, if you follow what we have done with Ruby on Rails (DB2onRails.com), it is a definite maybe :-) A bit about the method to our madness. As you would expect, IBM provides solutions to corporations large and small, and what we provide is commercial software. IBM is a very big contributor to Open Source, but our business is "commercial" software. The lines between commercial software and open source do get blurry from time to time, as is the case with DB2 Express-C (www.ibm.com/db2/express) for example. This is a product that follows an open source business model, i.e. a free license with the availability of inexpensive support and extra features. However, DB2 Express-C is not open source. For most of our corporate customers this is very important, as they have restrictions on the use of open source. I am not here to argue the wisdom of having such restrictions; I am simply stating the fact. One of the reasons (there are more) many corporations have such restrictions is a lack of "indemnification".
In other words, they don't like acquiring a product, getting it used within their infrastructure, and then finding out that they are the target of a lawsuit claiming that they have misappropriated some IP. If you think it does not happen, just look at the case of SCO suing DaimlerChrysler over Linux. This does not happen to individual developers and budding start-ups, but if you are a large company with a lot of money, things are different. It is like you have a bulls-eye painted on your corporate logo, and it is in the crosshairs of every IP lawyer who thinks he/she has a 1% chance of even having a semblance of a case. It is sad that we live in such a litigious society, but it is a reality of IP law. When IBM provides commercial software, we guarantee its pedigree. Before we ship any software, we put together a Certificate of Originality. This is one of the processes we have in place to ensure that all of the code we are shipping to our customers was either written by IBM employees and IBM has full rights to it, or, if the code came from some place else (e.g. open source), that we are absolutely sure we have the rights to use it. I am simplifying things a bit, but this is the essence of the due diligence we must do. If you have read this far, you should have an idea by now that for IBM developers, taking an existing Django adapter and just creating one for DB2 or IDS is not a simple proposition. That is not to say that we do not leverage open source in IBM development; we do. But we do it with great care. Yes, Django has a fairly liberal license that would allow us to do this, i.e. create what lawyers call a "derivative work". And thankfully it is not a viral license that would force us to open source the entire product (DB2 or IDS) that would ship the adapter. For us to look at Django code, we need to be sure that we understand the pedigree of that code so we don't inadvertently infringe on some third party's rights.
For example, the code in the Django adapter could have had contributions from someone who has not explicitly assigned the rights and may not wish to have IBM use this code. Maybe this person expects to be compensated for their contribution if a company like IBM were to benefit from it. Again, this is not a hypothetical scenario. This happens quite frequently. The way we and others protect ourselves is by coding to a publicly available specification. If there were such a spec for Django database interfaces, it would make it a lot simpler for us to commit to developing a Django adapter for IBM DBMS. My reason for making this post was to explain the need for a spec. I am not here to argue the wisdom of IBM practices; this is way out of my domain of responsibility. Yes, we can read the "Dutch pseudo-code", but we need to make sure that if someone launches a lawsuit, we can clearly demonstrate that no lines of the pseudo-code have found their way into IBM-supplied code. That is a lot more difficult to do with the "Dutch pseudo-code" than with English. And it is really difficult to argue to our lawyers that the code we were looking at is really a
Re: robust_apply, signal refactoring and #6857
On Sat, Mar 22, 2008 at 3:04 PM, Leo Soto M. <[EMAIL PROTECTED]> wrote: ... > On the other hand, all this gymnastic is done to allow receivers which > do not conform to the full API of a signal (i.e., does not accept all > signal arguments). What is the real reason behind this? Signal > "extensibility"? Yeah, that's the idea. We're trying to keep some backwards compatibility, and the existing system provides that argument matching. Even some existing signal handlers in Django rely on it. Even so, it may be worth dropping. Numbers soon.
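For readers unfamiliar with what the argument matching buys: the idea is roughly the following. This is a modern, simplified sketch using `inspect`; Django's actual `robust_apply` implementation differs (the function and receiver names here are illustrative):

```python
# Simplified sketch of signal dispatch with argument matching: each
# receiver is inspected and receives only the keyword arguments its
# signature declares, so "partial" receivers keep working.
import inspect

def robust_send(receivers, **kwargs):
    results = []
    for receiver in receivers:
        params = inspect.signature(receiver).parameters
        # A receiver with **kwargs accepts everything the signal sends.
        accepts_any = any(p.kind is inspect.Parameter.VAR_KEYWORD
                          for p in params.values())
        if accepts_any:
            accepted = kwargs
        else:
            accepted = {k: v for k, v in kwargs.items() if k in params}
        results.append(receiver(**accepted))
    return results

def full(sender, instance, created):
    return ("full", sender, instance, created)

def partial(sender):  # ignores the rest of the signal's arguments
    return ("partial", sender)

print(robust_send([full, partial], sender="Model", instance="obj", created=True))
# → [('full', 'Model', 'obj', True), ('partial', 'Model')]
```

The cost Leo is asking about is exactly the per-dispatch introspection visible here; dropping the matching would mean requiring every receiver to accept the signal's full argument list (or `**kwargs`).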
Re: Recent test breakage on Windows
I was wary of re-implementing the logic in the test, just because it makes the test somewhat opaque. In the end, I added a helper function to clean the path results, submitted a patch: http://code.djangoproject.com/ticket/6868 --Ned. http://nedbatchelder.com/blog Malcolm Tredinnick wrote: > On Sun, 2008-03-23 at 11:15 -0400, Ned Batchelder wrote: > [...] > >> Here's one of the tests in question: >> >> >>> f = forms.FilePathField(path=path) >> >>> f.choices.sort() >> >>> f.choices >> [('.../django/newforms/__init__.py', '__init__.py'), >> ('.../django/newforms/__init__.pyc', '__init__.pyc'), >> ('.../django/newforms/fields.py', 'fields.py'), >> ('.../django/newforms/fields.pyc', 'fields.pyc'), >> ('.../django/newforms/forms.py', 'forms.py'), >> ('.../django/newforms/forms.pyc', 'forms.pyc'), >> ('.../django/newforms/models.py', 'models.py'), >> ('.../django/newforms/models.pyc', 'models.pyc'), >> ('.../django/newforms/util.py', 'util.py'), >> ('.../django/newforms/util.pyc', 'util.pyc'), >> ('.../django/newforms/widgets.py', 'widgets.py'), >> ('.../django/newforms/widgets.pyc', 'widgets.pyc')] >> >> On my machine, the problem boils down to: >> >> Expected: '.../django/newforms/__init__.py' >> Got: 'C:\\src\\django\\django\\newforms/__init__.py' >> > [...] > >> Any guidance? >> > > Given that we can compute the expect result pretty easily (it's the > directory listing of the that directory), I'd be tempted to compute the > expected result as part of the test and compare it to what the widget > puts out. This isn't quite as tight a coupling as it looks on the > surface, since the interface definition is that it returns a list of > tuples that are the files in that path. So we can write the "expected > result" based on the interface description, not the implementation (yes, > it's semantic hair-splitting, but it's type of trickery that helps me > sleep at nights in those months when I sleep). 
> > I'm always a little wary of the "just add more dots" approach to ironing > out these problems because unexpected test failures have caught a lot of > problems over the past couple of years. > > Still, adding more leading dots wouldn't destroy the world order, > either. > > Regards, > Malcolm > > > -- Ned Batchelder, http://nedbatchelder.com
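The helper in the ticket essentially normalizes path separators before comparison. A hypothetical version of such a helper might look like this; the name `fix_os_paths` and the explicit `sep` parameter are illustrative here (the `sep` default lets the same function be a no-op on Unix), and ticket #6868 has the real patch:

```python
import os

def fix_os_paths(x, sep=os.sep):
    """Recursively replace the OS path separator with '/' in strings,
    lists, and tuples, so doctest output matches on Windows and Unix."""
    if isinstance(x, str):
        return x.replace(sep, '/')
    if isinstance(x, (list, tuple)):
        return type(x)(fix_os_paths(y, sep) for y in x)
    return x

# Simulating the Windows result from the failing test:
demo = [('C:\\src\\django\\newforms\\fields.py', 'fields.py')]
print(fix_os_paths(demo, sep='\\'))
# → [('C:/src/django/newforms/fields.py', 'fields.py')]
```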
Re: Porting Django to Python 3.0
On Mar 24, 10:49 am, Martin v. Löwis <[EMAIL PROTECTED]> wrote: > At the PyCon sprint, I started porting Django to Python 3.0. In the > process, I had to make a number of changes to Python, so this port > currently requires the Python 3.0 subversion head (soon to be released > as 3.0a4). FWIW, I have started playing with getting mod_wsgi to work with Python 3.0 as well. When I tried Python 3.0a2, it would crash deep inside of Python. When I tried 3.0a3 recently, however, it got further, and an absolute minimum hello world program actually works. So it now seems to be just a case of working out changes to mod_wsgi in line with what was worked out on the Python WEB-SIG as the necessary changes to the WSGI specification to make WSGI compatible with Python 3.0. If I can get mod_wsgi working on Python 3.0, it will be interesting to see if you can tweak the Django WSGI interface, if not done already, to make it compatible with WSGI for Python 3.0 and thus try your changes on top of mod_wsgi. Graham > This port is just a start; it runs the tutorial with the sqlite3 > backend, including the admin interface, but probably chokes on larger > applications that use more features. > > The patch is designed to not break Django on any earlier Python > versions. > > I put up the patch and my notes at > > http://wiki.python.org/moin/PortingDjangoTo3k > > I would like to know how I should proceed with that; options are > > a) split the patch into separate chunks, and contribute them > separately. > b) hand the entire patch over to somebody interested in completing it > c) complete it, then submit it > d) abandon it > > I would favor option a), even though this means that the django source > repository would only get a part of a working 3.0 port; I might > continue to work on it over the next months, although I would again > hope that regular Django users take over (which I am not).
Re: Porting Django to Python 3.0
Hi Martin, Just responding to the pertinent bits. I stand corrected on all the rest. On Mon, 2008-03-24 at 11:10 +0100, Martin v. Löwis wrote: > > Some of the things you've identified as "Django should..." are a little > > problematic since one of the current requirements (at least for 1.0) is > > "Django should work out of the box with Python 2.3". Some of your > > proposed changes don't even work with Python 2.5, so we need to add a > > few more conditionals. > > I think you misunderstood. In these cases, I already changed Django to > do what it should do in the patch. E.g. with the patch, it already > uses io.BytesIO > in 3k, but continues to use cStringIO.StringIO in earlier versions. My apologies. I misread the patch. You're correct. [...] > Ok. Out of curiosity: What was the rationale for clearing Meta.__dict__ when > iterating over it? That's code from the pre-open-source days (so before my involvement), but I presume it's because it presented an easy way to check that all the parameters were used by the end of processing. It happened to work for whoever wrote that code. I've had to change it in the queryset-refactor branch because Meta can be subclassed, so I rapidly hit all sorts of problems there. This is one of many things in Django that are there because "it was fit for purpose when originally written years ago and has worked up until now." Incremental improvements and all that. :-) > > The AdminLogNode sending strings as slices bit me in queryset-refactor, > > too. I was a bit annoyed that Python allowed it because then I had to > > work around it. I'm strongly tempted to fix that (i.e. fix AdminLogNode > > to not be so slack) at a some point, but it's low-priority at the > > moment, since it breaks indeterminate amounts of code, so I haven't had > > the energy to work out the impact and/or mitigation. > > > > It's kind of cool, though: in Python 2.x, you can write a class that > > accepts fruit['apple':'banana'] and have it do something. 
Useful for > > symbolic enumerated types, I guess. > > Sure - the slice object is designed to hold arbitrary objects (and continues > to in 3.x); however, isn't this broken already in 2.x, and just not causing > an exception? IIUC, when limit is, say, '10', render will do > > (k.start is None or k.start >= 0) and (k.stop is None or k.stop >= 0) > > which happens to give True, but is actually incorrect. The culprit seems > to be connection.ops.limit_offset_sql, which accepts '10' as a limit; > _QuerySet seems to pass limit through as-is (except for the bogus > check above). Indeed. It's fixed in the queryset-refactor branch (which is a big rewrite of query.py's functionality), where we convert the arguments to integers before trying to use them as things that should have integer properties. > > I'm not going to make a call here; I'll leave that up to Jacob and > > Adrian, but I would hope our plan isn't to leave Python 2.5 (and maybe > > even 2.4) behind too quickly. > > Again, with the patch, that would not at all be necessary. Django could > continue to support any 2.x versions it pleases to. I think I misunderstood some of the reasons you were patching things (e.g. working around the bare except issue and things like that). Thanks for straightening me out. Cheers, Malcolm -- Experience is something you don't get until just after you need it. http://www.pointy-stick.com/blog/
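For the curious, the `fruit['apple':'banana']` trick discussed in this thread can be demonstrated with a toy class: `__getitem__` simply receives a slice object holding whatever was written in the subscript, strings included. This is purely illustrative:

```python
# A slice object can hold arbitrary objects; __getitem__ decides what
# they mean. Here, string bounds select a half-open lexicographic range.
class FruitRange:
    def __init__(self, items):
        self.items = sorted(items)

    def __getitem__(self, key):
        if isinstance(key, slice):
            # key.start and key.stop are whatever appeared in the
            # subscript -- in this demo, strings (or None).
            return [f for f in self.items
                    if (key.start is None or f >= key.start)
                    and (key.stop is None or f < key.stop)]
        return self.items[key]

fruit = FruitRange(['apple', 'banana', 'cherry'])
print(fruit['apple':'banana'])   # → ['apple']
print(fruit['banana':])          # → ['banana', 'cherry']
```

This also shows why the AdminLogNode bug was so quiet: nothing in the slice machinery itself ever insists that `start` and `stop` are integers.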
Re: Porting Django to Python 3.0
> Some of the things you've identified as "Django should..." are a little > problematic since one of the current requirements (at least for 1.0) is > "Django should work out of the box with Python 2.3". Some of your > proposed changes don't even work with Python 2.5, so we need to add a > few more conditionals. I think you misunderstood. In these cases, I already changed Django to do what it should do in the patch. E.g. with the patch, it already uses io.BytesIO in 3k, but continues to use cStringIO.StringIO in earlier versions. > The only potential showstoppers I see are the need to change "except > Exception, e:" to "except Exception as e", since that isn't supported in > Python 2.5 or earlier and it's going to be hard to fake as it's a major > syntactic construction. Yes; these are the remaining two cases left. There are *many* more cases that need to be changed to use "except ... as ..."; these are all fixed by 2to3. There is a bug in 2to3 which fails to update the try-except block if there is a bare except. Once that's fixed in 2to3, these chunks can be removed from the patch; this is bug http://bugs.python.org/issue2453 > Also, relative imports aren't supported in 2.4 > or earlier whereas they seem to be required to be explicitly specified > in 2.6. In 2.6, you can still use the implicit relative imports. In 3.0, you need to make explicit relative imports. Again, this is fixed by 2to3. The remaining imports were left in because the 2to3 fixer again had a bug (http://bugs.python.org/issue2446); this should be fixed now. > The latter problem can be fixed a bit by reshuffling some of the import > paths, as we've been intending to do in some cases. We use relative > imports in a few places to avoid importing a higher-level __init__ file > because it does so much (too much) work. Relative imports are fine. David Wolever wrote a 2to3 fixer in the PyCon sprint just because I requested one to simplify porting Django. 
> That's a change that can be > made so everything can do "from django.foo.bar.baz..." and avoid > unnecessary "from ." ugliness. Painful and possibly quite disruptive in > places, but possible and probably worth doing. However, unnecessary to support 3.0; see above. > We already have a lot of conditional code based on sys.version_info. > We'll need to introduce some more. That explains why, e.g., we're using > md5 in some places and are testing based on behaviour (try to import > hashlib and if that fails, use the fallback). Changing to be a > sys.version_info test is not impossible eventually, although it relies > on IronPython and Jython being comparable in their version numbering and > functionality. Right. I think both Jython and IronPython try to follow the CPython feature sets by version number. In particular wrt. bytes-vs-string, it is actually simpler for them to implement 3.x instead of 2.x, as their native string type is already a Unicode type. Still, they would have to change APIs when dividing strings and bytes, so they'll likely use the major version 3 to indicate presence of the separate bytes type. > Similarly, we can't unconditionally switch to "from io import ...", > since that doesn't exist in Python 2.5, let alone earlier versions. But my patch doesn't do that! > Wrapping it in import guards will work, but the code is starting to get > a bit ugly to read. If this happens in too many places, it can be factored out into a single module, like the django.utils.py3 module that I propose. > Ditto, the reason we use upper-case names when > import from email is because that's required by 2.3. They are backwards > compatible in later versions, so we intentionally stuck with them. Which is fine, no need to change that. Again, if it gets too ugly, you can factor it out into a single place (although I think each email class is imported only once). 
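The "factor it out into a single place" suggestion amounts to a small compatibility module, something along these lines. The contents are illustrative, and `django.utils.py3` is only the name proposed in this thread:

```python
# Version-guarded imports gathered in one compatibility module, so the
# rest of the codebase can just do: from compat import BytesIO
import sys

if sys.version_info[0] >= 3:
    from io import BytesIO
else:  # Python 2.x: prefer the C implementation, fall back to pure Python
    try:
        from cStringIO import StringIO as BytesIO
    except ImportError:
        from StringIO import StringIO as BytesIO

buf = BytesIO()
buf.write(b"binary data")
```

Every other module then imports the name from this one place instead of repeating the guard, which keeps the ugliness confined.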
> We need to retain one use of ClassType to generate the right > class type for our own exception subclasses -- they're old-style classes > there -- in Python 2.3 and 2.4, but that's guarded by a version_info > conditional, so hopefully 2to3.py won't be concerned about that. The one > in ModelBase.contribute_to_class can go away, though. That's the one I > haven't committed to the branch yet. Anyway, most of those issues will > become non-issues once the branch is in trunk. Ok. Out of curiosity: What was the rationale for clearing Meta.__dict__ when iterating over it? > The AdminLogNode sending strings as slices bit me in queryset-refactor, > too. I was a bit annoyed that Python allowed it because then I had to > work around it. I'm strongly tempted to fix that (i.e. fix AdminLogNode > to not be so slack) at a some point, but it's low-priority at the > moment, since it breaks indeterminate amounts of code, so I haven't had > the energy to work out the impact and/or mitigation. > > It's kind of cool, though: in Python 2.x, you can write a class that > accepts
Usage of items() vs iteritems()
I've been having to dig into code vs. documentation lately, and there are *tons* of uses of items() vs. iteritems(). I was wondering why this is. I did a quick benchmark, on an example I found in newforms, to try and give myself an answer: In [31]: timeit.Timer("attrs2=attrs.copy();[(field_name, attrs2.pop(field_name)) for field_name, obj in attrs2.items()]", "attrs = dict((('adfadsfasdf', 'hello'),)*10)").timeit() Out[31]: 1.7580790519714355 In [32]: timeit.Timer("attrs2=attrs.copy();[(field_name, obj) for field_name, obj in attrs2.iteritems()]", "attrs = dict((('adfadsfasdf', 'hello'),)*10)").timeit() Out[32]: 1.2866780757904053 The first example is what's currently in newforms, and I can't see it saving much memory for being that much slower.
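For anyone wanting to reproduce the numbers: note that the setup above, `dict((('adfadsfasdf', 'hello'),)*10)`, repeats the same key/value pair ten times and so actually builds a one-entry dict. A version that times a genuine ten-entry dict, and that runs on both Python 2 and 3, might look like this (timings are machine-dependent; only the relative shape matters):

```python
import sys
import timeit

# Build a dict with ten distinct keys (the original setup repeated one pair).
setup = "attrs = dict(('key%d' % i, 'hello') for i in range(10))"

items_stmt = "[(k, v) for k, v in attrs.items()]"
# iteritems() only exists on Python 2; on Python 3, items() already
# returns a lazy view, which is why 2to3 rewrites iteritems() to items().
lazy_stmt = ("[(k, v) for k, v in attrs.iteritems()]"
             if sys.version_info[0] == 2 else items_stmt)

t_items = timeit.timeit(items_stmt, setup=setup, number=100000)
t_lazy = timeit.timeit(lazy_stmt, setup=setup, number=100000)
print("items():     %.3fs" % t_items)
print("iteritems(): %.3fs" % t_lazy)
```

On Python 2 the gap comes from items() allocating a full list of (key, value) tuples per call, while iteritems() yields pairs lazily.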
Re: Recent test breakage on Windows
On Sun, 2008-03-23 at 11:15 -0400, Ned Batchelder wrote: [...] > Here's one of the tests in question: > > >>> f = forms.FilePathField(path=path) > >>> f.choices.sort() > >>> f.choices > [('.../django/newforms/__init__.py', '__init__.py'), > ('.../django/newforms/__init__.pyc', '__init__.pyc'), > ('.../django/newforms/fields.py', 'fields.py'), > ('.../django/newforms/fields.pyc', 'fields.pyc'), > ('.../django/newforms/forms.py', 'forms.py'), > ('.../django/newforms/forms.pyc', 'forms.pyc'), > ('.../django/newforms/models.py', 'models.py'), > ('.../django/newforms/models.pyc', 'models.pyc'), > ('.../django/newforms/util.py', 'util.py'), > ('.../django/newforms/util.pyc', 'util.pyc'), > ('.../django/newforms/widgets.py', 'widgets.py'), > ('.../django/newforms/widgets.pyc', 'widgets.pyc')] > > On my machine, the problem boils down to: > > Expected: '.../django/newforms/__init__.py' > Got: 'C:\\src\\django\\django\\newforms/__init__.py' [...] > > Any guidance? Given that we can compute the expected result pretty easily (it's the directory listing of that directory), I'd be tempted to compute the expected result as part of the test and compare it to what the widget puts out. This isn't quite as tight a coupling as it looks on the surface, since the interface definition is that it returns a list of tuples that are the files in that path. So we can write the "expected result" based on the interface description, not the implementation (yes, it's semantic hair-splitting, but it's the type of trickery that helps me sleep at night in those months when I sleep). I'm always a little wary of the "just add more dots" approach to ironing out these problems because unexpected test failures have caught a lot of problems over the past couple of years. Still, adding more leading dots wouldn't destroy the world order, either. Regards, Malcolm -- Why be difficult when, with a little bit of effort, you could be impossible. 
http://www.pointy-stick.com/blog/
Re: Porting Django to Python 3.0
On Sun, 2008-03-23 at 16:49 -0700, Martin v. Löwis wrote: > At the PyCon sprint, I started porting Django to Python 3.0. In the > process, I had to make a number of changes to Python, so this port > currently requires the Python 3.0 subversion head (soon to be released > as 3.0a4). > > This port is just a start; it runs the tutorial with the sqlite3 > backend, including the admin interface, but probably chokes on larger > applications that use more features. > > The patch is designed to not break Django on any earlier Python > versions. > > I put up the patch and my notes at > > http://wiki.python.org/moin/PortingDjangoTo3k This is interesting. I read over this during lunch today and it's quite nice. Thanks. Some of the things you've identified as "Django should..." are a little problematic since one of the current requirements (at least for 1.0) is "Django should work out of the box with Python 2.3". Some of your proposed changes don't even work with Python 2.5, so we need to add a few more conditionals. The only potential showstoppers I see are the need to change "except Exception, e:" to "except Exception as e", since that isn't supported in Python 2.5 or earlier and it's going to be hard to fake as it's a major syntactic construction. Also, relative imports aren't supported in 2.4 or earlier whereas they seem to be required to be explicitly specified in 2.6. The latter problem can be fixed a bit by reshuffling some of the import paths, as we've been intending to do in some cases. We use relative imports in a few places to avoid importing a higher-level __init__ file because it does so much (too much) work. That's a change that can be made so everything can do "from django.foo.bar.baz..." and avoid unnecessary "from ." ugliness. Painful and possibly quite disruptive in places, but possible and probably worth doing. We already have a lot of conditional code based on sys.version_info. We'll need to introduce some more. 
That explains why, e.g., we're using md5 in some places and are testing based on behaviour (try to import hashlib and if that fails, use the fallback). Changing to be a sys.version_info test is not impossible eventually, although it relies on IronPython and Jython being comparable in their version numbering and functionality. Similarly, we can't unconditionally switch to "from io import ...", since that doesn't exist in Python 2.5, let alone earlier versions. Wrapping it in import guards will work, but the code is starting to get a bit ugly to read. Ditto, the reason we use upper-case names when importing from email is because that's required by 2.3. They are backwards compatible in later versions, so we intentionally stuck with them. None of this is your fault, Martin, I realise that. I'm just commenting from the notes I took when I read through the patch earlier, more as a record of things we need to think about. Almost all the things you've listed under the new-style classes section are fixed in the queryset-refactor branch. I just checked and I've forgotten to commit one local branch that removes the last ClassType check. We need to retain one use of ClassType to generate the right class type for our own exception subclasses -- they're old-style classes there -- in Python 2.3 and 2.4, but that's guarded by a version_info conditional, so hopefully 2to3.py won't be concerned about that. The one in ModelBase.contribute_to_class can go away, though. That's the one I haven't committed to the branch yet. Anyway, most of those issues will become non-issues once the branch is in trunk. The AdminLogNode sending strings as slices bit me in queryset-refactor, too. I was a bit annoyed that Python allowed it because then I had to work around it. I'm strongly tempted to fix that (i.e. 
fix AdminLogNode to not be so slack) at some point, but it's low-priority at the moment, since it breaks indeterminate amounts of code, so I haven't had the energy to work out the impact and/or mitigation. It's kind of cool, though: in Python 2.x, you can write a class that accepts fruit['apple':'banana'] and have it do something. Useful for symbolic enumerated types, I guess. > I would like to know how I should proceed with that; options are > > a) split the patch into separate chunks, and contribute them > separately. > b) hand the entire patch over to somebody interested in completing it > c) complete it, then submit it > d) abandon it > > I would favor option a), even though this means that the django source > repository would only get a part of a working 3.0 port; I might > continue to work on it over the next months, although I would again > hope that regular Django users take over (which I am not). I'm not going to make a call here; I'll leave that up to Jacob and Adrian, but I would hope our plan isn't to leave Python 2.5 (and maybe even 2.4) behind too quickly. It's a bit too easy to be blinkered in Open Source development into thinking others should/must upgrade entire system components