Russ & Carl,
I'm interested in this one, so I sketched out a potential avenue tonight
(https://github.com/toastdriven/django/compare/check_setup). I actually
implemented Russ' setup first, but ran into a couple issues, so I pared it
down into an "opt-in" scheme. If there's a feeling like this is a
worthwhile start, I'd be happy to flesh it out further, adding tests, docs,
refinements, etc. If not, no big deal.
Daniel
On Friday, May 10, 2013 8:32:38 PM UTC-7, Carl Meyer wrote:
>
> On Thursday, May 9, 2013 8:15:02 PM UTC-4, Russell Keith-Magee wrote:
>
>> On Thu, May 9, 2013 at 11:30 PM, Carl Meyer <[email protected]> wrote:
>>
>>> Hi Russ,
>>>
>>> On 05/09/2013 04:51 AM, Russell Keith-Magee wrote:
>>> > Great work Carl and Preston! (And everyone else who had a hand in the
>>> > patch - I know this has been kicking around for a while now)
>>>
>>> Thanks for the reminder; I should have mentioned that Mahdi Yusuf did
>>> the initial draft patch, based on Jannis' django-discover-runner, which
>>> was based on code I showed on a slide in my "Django and Testing" talk at
>>> PyCon 2012; it all comes full circle :-)
>>>
>>> > I agree with Jacob's point that tests aren't production-sensitive, so
>>> > making a fast transition to the new test discovery has less impact.
>>> >
>>> > However, we know from experience that no matter how much we flag this
>>> > change in the release notes, they won't be read, and *someone* is
>>> > going to get stung. In an ideal world, it would be good to be able to
>>> > warn people who upgrade that their test suite may not be run as
>>> > expected.
>>> >
>>> > We've had a proposal kicking around for a while to add a management
>>> > command that does an "upgrade check". I'm wondering if this might be an
>>> > opportune time to dig into this some more.
>>> >
>>> > To be clear, this isn't something I consider to be a merge blocker.
>>> > I'd be happy to see 1.6 released with a fast transition to the new
>>> > test runner. It would just be icing on the cake if we could get an
>>> > upgrade validator included in the same release.
>>>
>>> It wouldn't be too hard to write something that tries test discovery
>>> using the old runner and the new runner and does a simple comparison
>>> of test counts to warn you if they differ.
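>>>
>>> A rough, untested sketch of that comparison (assuming both runner
>>> classes will build a full suite when given no test labels):
>>>
>>>     from django.test.simple import DjangoTestSuiteRunner
>>>     from django.test.runner import DiscoverRunner
>>>
>>>     def compare_test_counts():
>>>         # Build (but don't run) the suite under each runner and
>>>         # compare how many tests each one discovers.
>>>         old = DjangoTestSuiteRunner().build_suite(None).countTestCases()
>>>         new = DiscoverRunner().build_suite(None).countTestCases()
>>>         if old != new:
>>>             print("Old runner found %d tests; new runner found %d. "
>>>                   "Some tests may not be discovered after upgrading."
>>>                   % (old, new))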
>>>
>>> I'm not personally likely to work on this "upgrade checks" framework,
>>> but if it does happen I'd be happy to contribute this check to it.
>>
>>
>> I thought about that approach, but my immediate reaction was that it would
>> end up producing more false positives than helpful feedback. Even the
>> simple case -- myapp.TestCase -> myapp.tests.TestCase -- is going to
>> require some name munging to work out if the old test and the new test are
>> actually the same one. Since this is for a short-lived transitional tool,
>> the effort involved in getting it right exceeds the value gained by having
>> it, IMHO.
>>
>
> I wasn't anticipating trying to actually match up individual tests by
> name/path, just trying both runners and seeing whether the test counts
> match. On second thought, though, I realize that contrib / third-party apps
> will mean this check would alert "missing tests!" for pretty much everyone,
> so it would need to be smarter to be useful, and you're right that it's
> likely not worth it.
>
>
>> I'd prefer something much simpler:
>>
>> * On syncdb/validate, check for a marker somewhere in user space.
>>
>> * If the marker isn't present, do a check for likely problems. In this
>> case, look for the value of TEST_RUNNER; if it's set to the new default
>> value, display a warning about the change in test discovery format. Add
>> similar checks for other setting changes, or warnings about features that
>> have been fully deprecated (e.g., the final transition of the {% url %}
>> behavior).
>>
>> * Provide a management command to set the marker.
>>
>> * On the next syncdb/validate, the marker is present, so the install is
>> verified as being "updated", and no warnings are displayed.
>>
>> This leaves the question of where to put the marker. Some initial ideas:
>>
>> - a file on disk (e.g., an .updated file next to the settings file)
>> - a new setting (VERIFIED_VERSION = "1.5")
>> - a key-value table in a database used only for Django administrivia
>>
>> Essentially, this would give us a place to put the "NO REALLY, READ THIS"
>> warnings that we need in release notes, and make those messages slightly
>> more targeted.
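>>
>> To make that concrete, here's a rough sketch of the check step, using the
>> new-setting idea for the marker (VERIFIED_VERSION and the function name
>> are invented for illustration; TEST_RUNNER is the real setting):
>>
>>     # Hypothetical check run during syncdb/validate.
>>     from django.conf import settings
>>
>>     def run_upgrade_checks():
>>         if getattr(settings, 'VERIFIED_VERSION', None) == '1.6':
>>             return []  # marker present; install already verified
>>         warnings = []
>>         if settings.TEST_RUNNER == 'django.test.runner.DiscoverRunner':
>>             warnings.append(
>>                 "Test discovery changed in 1.6; some of your tests may "
>>                 "no longer be found. See the 1.6 release notes.")
>>         # ...similar checks for other setting changes and deprecations...
>>         return warnings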
>>
>
> In theory this makes sense to me, but I think the question of where to put
> the "I already ran the upgrade check" marker is thorny. Database is ugly,
> since there's no requirement for a Django project to even use the ORM at
> all; and requiring projects using the ORM to have a magical extra
> not-related-to-a-voluntarily-installed-app table in their database sticks
> in my craw. The same goes for adding a file to the filesystem; we'd essentially
> be requiring everyone to have this file in order to avoid the extra
> overhead (and possible noise) of the check on startup; forcing that type of
> cruft on everyone using Django is a non-starter in my book. Of those three
> ideas, I'd say a new setting has the least downside, but even that is extra
> cruft we're forcing into everyone's settings.py file.
>
> I think a better option is simply an "upgradecheck" management command that
> does checks like this relative to the last version of Django. Of course,
> this has to be run manually, so it doesn't have the "catch everyone"
> benefit of something that happens automatically at validation, but if it's
> useful I think it's likely more people would run it than bother to read
> through the release notes.
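>
> A minimal sketch of what such a command might look like (the command name
> is made up, and run_upgrade_checks() is the hypothetical helper sketched
> earlier in the thread, not an existing API):
>
>     # management/commands/upgradecheck.py
>     from django.core.management.base import NoArgsCommand
>
>     # Hypothetical helper returning warning strings, as sketched above.
>     from myproject.upgrade_checks import run_upgrade_checks
>
>     class Command(NoArgsCommand):
>         help = ("Warn about backwards-incompatible changes when upgrading "
>                 "from the previous Django release.")
>
>         def handle_noargs(self, **options):
>             for warning in run_upgrade_checks():
>                 self.stderr.write(warning)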
>
> Carl
>