Re: Testing generated SQL under different backends

2013-06-01 Thread Shai Berger
Hi again,

On Wednesday 29 May 2013, Anssi Kääriäinen wrote:
> 
> An example of what I think would be the ultimate solution:
> 
> class TestWeirdJoinConditions(TestCase):
>     def test_something(self):
>         ...
> 
> If it turns out MySQL is having problems with this test, you could
> then have a class somewhere in the backend:
> 
> from queries.tests import TestWeirdJoinConditions
> 
> class TestWeirdJoinConditionsMySQL(TestWeirdJoinConditions):
>     @skip
>     def test_something(self):
>         ...
> 

I think this can be done along the following lines: In the backend, include a 
module for overriding tests (it could be a special module for overrides, or 
just a regular test module) with:

from queries import tests as queries_tests

@override_parent(queries_tests)
class TestWeirdJoinConditions(queries_tests.TestWeirdJoinConditions):
    @skip
    def test_something(self):
        ...

With override_parent being something like:

def override_parent(module):
    def override(cls):
        parent = cls.__bases__[0]
        setattr(module, parent.__name__, cls)
        return cls
    return override

[perhaps the module argument can be removed, but the __module__ attribute on 
classes is just the module name, and I'm not sure it's unique enough].

Then, we just need to make sure that the test runner imports these modules 
before it collects test cases.
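
As a rough sketch of that last step -- the per-backend "test_overrides"
module name below is only an assumption, not an existing convention:

    import importlib

    from django.conf import settings

    def import_backend_test_overrides():
        # Import each configured backend's (hypothetical) override module,
        # so @override_parent has already swapped in the subclasses by the
        # time test cases are collected.
        for config in settings.DATABASES.values():
            backend = config["ENGINE"]  # e.g. "django.db.backends.mysql"
            try:
                importlib.import_module(backend + ".test_overrides")
            except ImportError:
                pass  # backend ships no overrides; keep the generic tests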

> Assuming this could technically work (that is, there is a way to use
> only the MySQL version of the class) it should be possible for a
> backend to alter everything about the test class if need be, and the
> test class doesn't need to know anything about the backend. So, no
> dependency from queries tests to possible 3rd party backends. OTOH
> this adds a dependency from backend to arbitrary test classes...
> 

Just from backend tests to general tests. I don't see a problem there.

> Just moving some tests to backends is a possibility. The problem is
> that if some third party backend has problems with a test we will need
> to move that test to backends without having anything in core
> requiring that. This creates a weird dependency between Django and 3rd
> party backends. Of course, we have that dependency already. For
> example some tests use IntegerField(primary_key=True) instead of
> AutoField, the reason is that SQL Server doesn't like manually setting
> AutoField values.
> 

The point this raises is that sometimes modifying the test class is not 
enough; we'd need to modify the test app as well. I'm not sure this can be done 
along the same lines.

> > > For 3rd party backend support in general: My opinion is that we should
> > > include them all in core. The rules would be:
> > 
> > By "them", do you mean 3rd party backend tests or the entire 3rd party
> > backend?
> 
> Whole backend. This could work if the idea is that core doesn't
> guarantee that all backends are working all the time. The only
> guarantee is that core tries to find a maintainer for backends. If
> nobody is found, then the backend simply doesn't work.
> 

This sounds like a very good plan if you think mostly of django-mssql or the 
core Oracle backend. It sounds a little less attractive once you add the two 
active forks of django-pyodbc, the fledgling Firebird backend, or the wholly-
IBM-owned DB2 backend -- of which, as far as I'm aware, none of the developers 
are very active on this mailing list.
(For the record: I have been contributing significantly to the MSSQL backend of 
South, and am interested in having an MSSQL backend in core).

> This is the way device driver development is done in Linux. Basically,
> if nobody cares enough to fix a broken driver, then it isn't a
> problem. This turns out to work really, really well.
> 

There is a difference in the development organization -- Linux development is 
hierarchical, with "lieutenants" each responsible for their own area of the 
kernel, which makes it much easier to make someone responsible for a set of 
device drivers, but nothing more.

> Interestingly something like this has already happened to Oracle GIS.
> It was broken for a long time in 1.4 release (you couldn't save
> anything IIRC). Then Jani Tiainen needed the GIS functionality, wrote
> a patch and now GIS is in mostly working state. It still doesn't pass
> the GIS tests, but that doesn't seem to be a problem in practice.
> 

(I am probably one of the obvious candidates to take care of this; alas, I am 
not too well versed in GIS, and find other improvements in the Oracle backend 
of higher priority. If anyone wants to work on it, I will collaborate as best 
I can).

> Based on how things work in Linux I think a similar arrangement should
> work in Django, too. The crucial point is that adding a new backend to
> Django's code base must not add any significant amount of work to core
> committers. If the idea is to add backends to Django's code base and
> have any sort of core-committer guarantee that all backends work all
> the time (that is, you need to run tests against all backends before
> commit) then there is no way more backends in Django could work.

Re: Testing generated SQL under different backends

2013-05-31 Thread Anssi Kääriäinen
On 31 May, 15:51, Marc Tamlyn wrote:
> In particular I'd be quite strongly against the main Django codebase
> shipping with potentially broken drivers - this will reflect very badly on
> the project for new users.

The thing is, this is *already* true. Oracle GIS was badly broken in
the 1.4 release for a long time. Many GIS tests are still failing (1.4,
1.5 and master), but at least saving objects is now possible. If
nobody cares enough to fix this, then I don't see a big problem. What
we should likely do is add some warnings somewhere about the potentially
broken state of Oracle GIS (at import time + in the docs, maybe).

We could also throw the Oracle GIS code away, but I don't see much
point in doing that. The code is working well enough now that Oracle
GIS is used by some users.

Of course the best case scenario is that somebody is motivated enough
to fix the tests.

 - Anssi





Re: Testing generated SQL under different backends

2013-05-31 Thread Anssi Kääriäinen
On 31 May, 16:58, Michael Manfre wrote:
> On Fri, May 31, 2013 at 8:51 AM, Marc Tamlyn  wrote:
> > In terms of non-core backends (and I include Oracle in this) - my
> > preferred option would not be to include them all in core Django, but to
> > collate them in and manage them under the Django project - i.e.
> > github.com/django/django-mssql. These are "looked after" by the Django
> > project to ensure that there is one place to find that code base, but are
> > externally maintained. This is effectively similar to the way that
> > localflavor is progressing, although we should wait and see how that plays
> > out first - there are some teething issues there at the moment.
>
> Django-mssql can be installed with pip and the project's pages (code, docs,
> etc.) are the first several hits on the popular search engines. Moving a
> repo for the sake of moving it is a non-starter for django-mssql. I'm
> guessing that most of the other 3rd party database backends would object as
> well, especially those not currently hosted on GitHub. Relocating a repo is
> disruptive to the developer(s) and also the users. I've gone through this
> once when I moved the project from Google Code to Bitbucket, which had the
> huge benefit of making it easier and more enjoyable for me to work on the
> project.
>
> The disruption associated with relocating the project into core would
> at least be balanced by many benefits. Just to name a few: easier test
> integration, backend issues in Trac, no more Django version/feature
> checking in the backend, and more eyes watching commits (even if no one
> else works on the backend). There is also the potential of easily fixing
> the poorly named flags on DatabaseFeatures.

From an ORM coding perspective the issue is this: I want the ability to
alter the backend API at will. Of course, we have already changed the API
when needed (transaction handling changes, connection initialization
changes for example), and we will likely do this in the future, too
(support for database schemas for example).

Of course, this doesn't mean the API should be changed without reason.
But if there is a reason, then we must be able to change the API.
(Same goes for other internal APIs, too).

If unstable APIs are OK for 3rd party backend devs, then all is OK. I
don't think there is any point in moving code anywhere for the sake of
moving it. The reason must be making development easier for both 3rd
party devs and core devs. I don't see much difference for users; pip
install is easy enough to do.

 - Anssi





Re: Testing generated SQL under different backends

2013-05-31 Thread Michael Manfre
On Fri, May 31, 2013 at 8:51 AM, Marc Tamlyn  wrote:

> In terms of non-core backends (and I include Oracle in this) - my
> preferred option would not be to include them all in core Django, but to
> collate them in and manage them under the Django project - i.e.
> github.com/django/django-mssql. These are "looked after" by the Django
> project to ensure that there is one place to find that code base, but are
> externally maintained. This is effectively similar to the way that
> localflavor is progressing, although we should wait and see how that plays
> out first - there are some teething issues there at the moment.


Django-mssql can be installed with pip and the project's pages (code, docs,
etc.) are the first several hits on the popular search engines. Moving a
repo for the sake of moving it is a non-starter for django-mssql. I'm
guessing that most of the other 3rd party database backends would object as
well, especially those not currently hosted on GitHub. Relocating a repo is
disruptive to the developer(s) and also the users. I've gone through this
once when I moved the project from Google Code to Bitbucket, which had the
huge benefit of making it easier and more enjoyable for me to work on the
project.

The disruption associated with relocating the project into core would
at least be balanced by many benefits. Just to name a few: easier test
integration, backend issues in Trac, no more Django version/feature
checking in the backend, and more eyes watching commits (even if no one
else works on the backend). There is also the potential of easily fixing
the poorly named flags on DatabaseFeatures.

Regards,
Michael Manfre





Re: Testing generated SQL under different backends

2013-05-31 Thread Marc Tamlyn
In terms of non-core backends (and I include Oracle in this) - my preferred 
option would not be to include them all in core Django, but to collate them 
in and manage them under the Django project - i.e. 
github.com/django/django-mssql. These are "looked after" by the Django 
project to ensure that there is one place to find that code base, but are 
externally maintained. This is effectively similar to the way that 
localflavor is progressing, although we should wait and see how that plays 
out first - there are some teething issues there at the moment.

In particular I'd be quite strongly against the main Django codebase 
shipping with potentially broken drivers - this will reflect very badly on 
the project for new users.

On Wednesday, 29 May 2013 08:42:19 UTC+1, Anssi Kääriäinen wrote:
>
> On 28 May, 17:46, Michael Manfre wrote:
> > > I don't know how to make this actually work so that Django's test 
> > > runner finds the correct test class from the backend's test module 
> > > instead of the generic backends test class. Optimally a backend could 
> > > subclass any test class to specialize or skip tests. 
> > 
> > django.tests.runtests.setup already has special handling for adding gis
> > test apps. Something similar could be used for database backends. Not
> > 100% sure what would be a good way of flagging core tests as needing to be
> > skipped or wrapped to expect failure, but we might be able to avoid that
> > issue entirely with a combination of database feature flags and moving the
> > vendor specific problem cases to the respective backends.
>
> An example of what I think would be the ultimate solution: 
>
> class TestWeirdJoinConditions(TestCase):
>     def test_something(self):
>         ...
>
> If it turns out MySQL is having problems with this test, you could
> then have a class somewhere in the backend:
>
> from queries.tests import TestWeirdJoinConditions
>
> class TestWeirdJoinConditionsMySQL(TestWeirdJoinConditions):
>     @skip
>     def test_something(self):
>         ...
>
> Assuming this could technically work (that is, there is a way to use 
> only the MySQL version of the class) it should be possible for a 
> backend to alter everything about the test class if need be, and the 
> test class doesn't need to know anything about the backend. So, no 
> dependency from queries tests to possible 3rd party backends. OTOH 
> this adds a dependency from backend to arbitrary test classes... 
>
> Just moving some tests to backends is a possibility. The problem is 
> that if some third party backend has problems with a test we will need 
> to move that test to backends without having anything in core 
> requiring that. This creates a weird dependency between Django and 3rd 
> party backends. Of course, we have that dependency already. For 
> example some tests use IntegerField(primary_key=True) instead of 
> AutoField, the reason is that SQL Server doesn't like manually setting 
> AutoField values. 
>
> > > For 3rd party backend support in general: My opinion is that we should 
> > > include them all in core. The rules would be: 
> > 
> > By "them", do you mean 3rd party backend tests or the entire 3rd party 
> > backend? 
>
> Whole backend. This could work if the idea is that core doesn't 
> guarantee that all backends are working all the time. The only 
> guarantee is that core tries to find a maintainer for backends. If
> nobody is found, then the backend simply doesn't work. 
>
> This is the way device driver development is done in Linux. Basically, 
> if nobody cares enough to fix a broken driver, then it isn't a 
> problem. This turns out to work really, really well. 
>
> Interestingly something like this has already happened to Oracle GIS. 
> It was broken for a long time in 1.4 release (you couldn't save 
> anything IIRC). Then Jani Tiainen needed the GIS functionality, wrote 
> a patch and now GIS is in mostly working state. It still doesn't pass 
> the GIS tests, but that doesn't seem to be a problem in practice. 
>
> Based on how things work in Linux I think a similar arrangement should
> work in Django, too. The crucial point is that adding a new backend to 
> Django's code base must not add any significant amount of work to core 
> committers. If the idea is to add backends to Django's code base and 
> have any sort of core-committer guarantee that all backends work all 
> the time (that is, you need to run tests against all backends before 
> commit) then there is no way more backends in Django could work. 
>
> I don't know what others in core think about this... 
>
> Of course, the most important question is if maintaining backends in 
> 3rd party projects is painful enough to justify this discussion. I 
> don't know. If there isn't a problem in practice, then let's not solve
> imaginary problems... 
>
>  - Anssi 
>


Re: Testing generated SQL under different backends

2013-05-29 Thread Anssi Kääriäinen
On 28 May, 17:46, Michael Manfre wrote:
> > I don't know how to make this actually work so that Django's test
> > runner finds the correct test class from the backend's test module
> > instead of the generic backends test class. Optimally a backend could
> > subclass any test class to specialize or skip tests.
>
> django.tests.runtests.setup already has special handling for adding gis
test apps. Something similar could be used for database backends. Not
> 100% sure what would be a good way of flagging core tests as needing to be
> skipped or wrapped to expect failure, but we might be able to avoid that
> issue entirely with a combination of database feature flags and moving the
> vendor specific problem cases to the respective backends.

An example of what I think would be the ultimate solution:

class TestWeirdJoinConditions(TestCase):
    def test_something(self):
        ...

If it turns out MySQL is having problems with this test, you could
then have a class somewhere in the backend:

from queries.tests import TestWeirdJoinConditions

class TestWeirdJoinConditionsMySQL(TestWeirdJoinConditions):
    @skip
    def test_something(self):
        ...

Assuming this could technically work (that is, there is a way to use
only the MySQL version of the class) it should be possible for a
backend to alter everything about the test class if need be, and the
test class doesn't need to know anything about the backend. So, no
dependency from queries tests to possible 3rd party backends. OTOH
this adds a dependency from backend to arbitrary test classes...

Just moving some tests to backends is a possibility. The problem is
that if some third party backend has problems with a test we will need
to move that test to backends without having anything in core
requiring that. This creates a weird dependency between Django and 3rd
party backends. Of course, we have that dependency already. For
example some tests use IntegerField(primary_key=True) instead of
AutoField, the reason is that SQL Server doesn't like manually setting
AutoField values.

> > For 3rd party backend support in general: My opinion is that we should
> > include them all in core. The rules would be:
>
> By "them", do you mean 3rd party backend tests or the entire 3rd party
> backend?

Whole backend. This could work if the idea is that core doesn't
guarantee that all backends are working all the time. The only
guarantee is that core tries to find a maintainer for backends. If
nobody is found, then the backend simply doesn't work.

This is the way device driver development is done in Linux. Basically,
if nobody cares enough to fix a broken driver, then it isn't a
problem. This turns out to work really, really well.

Interestingly something like this has already happened to Oracle GIS.
It was broken for a long time in 1.4 release (you couldn't save
anything IIRC). Then Jani Tiainen needed the GIS functionality, wrote
a patch and now GIS is in mostly working state. It still doesn't pass
the GIS tests, but that doesn't seem to be a problem in practice.

Based on how things work in Linux I think a similar arrangement should
work in Django, too. The crucial point is that adding a new backend to
Django's code base must not add any significant amount of work to core
committers. If the idea is to add backends to Django's code base and
have any sort of core-committer guarantee that all backends work all
the time (that is, you need to run tests against all backends before
commit) then there is no way more backends in Django could work.

I don't know what others in core think about this...

Of course, the most important question is if maintaining backends in
3rd party projects is painful enough to justify this discussion. I
don't know. If there isn't a problem in practice, then let's not solve
imaginary problems...

 - Anssi





Re: Testing generated SQL under different backends

2013-05-28 Thread Michael Manfre
On Tue, May 28, 2013 at 2:26 AM, Anssi Kääriäinen wrote:

> On 27 May, 20:15, Shai Berger wrote:
> > On Monday 27 May 2013 19:37:55 Carl Meyer wrote:
> > I think a better solution for this is to keep the original method, and
> > mark it with skipUnless(is_core_db) -- we'd need to define is_core_db
> > for that, of course; this could also serve as an easy-to-grep marker
> > for "general functionality test, with backend variations" -- which I
> > think would be quite useful for 3rd-party backend writers.
>

Having vendor checks inside of a test case is painful for non-core
backends, but I don't like the approach of entirely skipping these cases
because one of the core backends needs to do something different. The
existing database features are a bit kludgey, largely due to poor feature
naming, inappropriate feature reuse, and sometimes a simple flag doesn't
work, but I think that it is still a decent enough solution for the problem
of multiple backends.


> Traditionally a backend feature has been used for this. So, add some
> feature like "produces_extra_sql_for_drop_table", if so we skip based
> on that. This unfortunately leads to features that make no sense
> except when combined with the test suite.


Many of the existing features are only used by the test suite, and most of
the ones changed by django-mssql only impact testing. Verifying
functionality between different databases is less forgiving than simply
using the functionality; most users don't care whether the backend needed 2
or 3 queries to complete the task.


> One idea is to somehow include a subclass of problematic test classes
> in each backend. The backend can alter or skip those tests that are
> backend specific. There are a couple of test apps that should work in
> this way (introspection, backends, inspectdb, maybe more).
>
> I don't know how to make this actually work so that Django's test
> runner finds the correct test class from the backend's test module
> instead of the generic backends test class. Optimally a backend could
> subclass any test class to specialize or skip tests.
>

django.tests.runtests.setup already has special handling for adding gis
test apps. Something similar could be used for database backends. Not
100% sure what would be a good way of flagging core tests as needing to be
skipped or wrapped to expect failure, but we might be able to avoid that
issue entirely with a combination of database feature flags and moving the
vendor specific problem cases to the respective backends.
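
Roughly what I have in mind (this is just a sketch, not the actual
runtests.setup() code, and the vendor-to-app mapping is invented):

    from django.db import connection

    # Hypothetical mapping from connection.vendor to extra test apps the
    # runner would add alongside the generic ones.
    VENDOR_TEST_APPS = {
        "mysql": ["mysql_regress"],
        "oracle": ["oracle_regress"],
    }

    def extra_test_apps():
        return VENDOR_TEST_APPS.get(connection.vendor, [])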


> For 3rd party backend support in general: My opinion is that we should
> include them all in core. The rules would be:
>

By "them", do you mean 3rd party backend tests or the entire 3rd party
backend?

Regards,
Michael Manfre





Re: Testing generated SQL under different backends

2013-05-28 Thread VernonCole
I have just run afoul of one of those "features that make no sense except
when combined with the test suite." In BaseDatabaseFeatures it looks
like this:

    # Does the backend allow very long model names without error?
    supports_long_model_names = True

What this flag actually does is (when False) eliminate generation of one 
specific test table when setting up the test suite. The test creation logic 
only considers the setting of the flag on the 'default' database, so if, as 
in the case I was attempting, 'default' is PostgreSQL and 'other' is 
ms-sql, the construction of the test configuration fails. I wanted to test
that configuration specifically, since I think it is one of the wiser ways
to run Django with SQL Server involved. But I digress...

My point is that an example exists in the core code, and that, indeed, it 
is not very maintainer friendly.  In addition to not addressing the 
question of "how long is 'long'?", it requires a full text search of the 
source code to figure out the relationship.  If, instead, the failing setup 
module were coded with actual vendor names, and a comment like "# vendor_x 
table names are limited to 132 characters", it would be much easier to 
maintain the tests correctly, IMHO.

I maintain a test suite which has four target database engines, and the
test code has plenty of examples of skipping (or modifying) a test based on
code like "if db.engine in ['mssql', 'postgres']:". I find that such an
arrangement works very well.
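
In Django terms that kind of check could look roughly like this (the vendor
strings and the test itself are only illustrative):

    from unittest import skipIf

    from django.db import connection
    from django.test import TestCase

    class LongNameTests(TestCase):

        # Vendor strings here are illustrative; the point is that the skip
        # condition names the offending backends explicitly, with a reason.
        @skipIf(connection.vendor in ('microsoft', 'firebird'),
                "backend limits identifier length")
        def test_very_long_model_name(self):
            ...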


On Tuesday, May 28, 2013 12:26:44 AM UTC-6, Anssi Kääriäinen wrote:
>
> On 27 May, 20:15, Shai Berger wrote:
> > Hi Carl,
> > 
> > On Monday 27 May 2013 19:37:55 Carl Meyer wrote:
> > 
> > > Hi Shai,
> > 
> > > On 05/27/2013 09:26 AM, Shai Berger wrote:
> > > > I'm working on fixing some failing tests under Oracle, and I ran into
> > 
> > > >     commands_sql.tests.SQLCommandsTestCase.test_sql_all()
> > 
> > > > [...]
> > 
> > > > For now, I will only special-case Oracle -- that should solve a standing,
> > > > release-blocker bug, and not change the semantics of the test otherwise;
> > > > but I'd like to achieve something better, more general and
> > > > 3rd-party-backend-friendly, for the future.
> > 
> > > It seems to me that ideally a test for backend-specific behavior should
> > > become a test for that backend (and thus skipped on other backends).
> > > This also solves the third-party-backend problem; said backend should
> > > have its own tests as needed, and Django's tests for backend-specific
> > > behavior should be skipped under any unrecognized backend.
> > 
> > I agree in general, but this can lead to DRY violations if we're not careful.
> > A better example of the problem is a test next to the one I needed to fix:
> > 
> >     def test_sql_delete(self):
> >         app = models.get_app('commands_sql')
> >         output = sql_delete(app, no_style(), connections[DEFAULT_DB_ALIAS])
> >         # Oracle produces DROP SEQUENCE and DROP TABLE for this command.
> >         if connections[DEFAULT_DB_ALIAS].vendor == 'oracle':
> >             sql = output[1].lower()
> >         else:
> >             sql = output[0].lower()
> >         six.assertRegex(self, sql, r'^drop table .commands_sql_book.*')
> > 
> > Does any backend beside Oracle produce extra SQL for this command? Does the
> > different command index for Oracle justify separating this test into two
> > separate methods? And if you separate, would you mark the non-oracle case with
> > skipIf(Oracle) or with skipUnless(sqlite or postgres or mysql) ?
> > 
> > I think a better solution for this is to keep the original method, and mark it
> > with skipUnless(is_core_db) -- we'd need to define is_core_db for that, of
> > course; this could also serve as an easy-to-grep marker for "general
> > functionality test, with backend variations" -- which I think would be quite
> > useful for 3rd-party backend writers.
> > 
> > Shai.
>
> Traditionally a backend feature has been used for this. So, add some
> feature like "produces_extra_sql_for_drop_table", if so we skip based 
> on that. This unfortunately leads to features that make no sense 
> except when combined with the test suite. 
>
> One idea is to somehow include a subclass of problematic test classes 
> in each backend. The backend can alter or skip those tests that are 
> backend specific. There are a couple of test apps that should work in 
> this way (introspection, backends, inspectdb, maybe more). 
>
> I don't know how to make this actually work so that Django's test 
> runner finds the correct test class from the backend's test module 
> instead of the generic backends test class. Optimally a backend could 
> subclass any test class to specialize or skip tests. 
>
> For 3rd party backend support in general: My opinion is that we should 
> include them all in core. The rules would be: 
>   1

Re: Testing generated SQL under different backends

2013-05-27 Thread Anssi Kääriäinen
On 27 May, 20:15, Shai Berger wrote:
> Hi Carl,
>
> On Monday 27 May 2013 19:37:55 Carl Meyer wrote:
>
> > Hi Shai,
>
> > On 05/27/2013 09:26 AM, Shai Berger wrote:
> > > I'm working on fixing some failing tests under Oracle, and I ran into
>
> > >       commands_sql.tests.SQLCommandsTestCase.test_sql_all()
>
> > > [...]
>
> > > For now, I will only special-case Oracle -- that should solve a standing,
> > > release-blocker bug, and not change the semantics of the test otherwise;
> > > but I'd like to achieve something better, more general and
> > > 3rd-party-backend- friendly, for the future.
>
> > It seems to me that ideally a test for backend-specific behavior should
> > become a test for that backend (and thus skipped on other backends).
> > This also solves the third-party-backend problem; said backend should
> > have its own tests as needed, and Django's tests for backend-specific
> > behavior should be skipped under any unrecognized backend.
>
> I agree in general, but this can lead to DRY violations if we're not careful.
> A better example of the problem is a test next to the one I needed to fix:
>
>     def test_sql_delete(self):
>         app = models.get_app('commands_sql')
>         output = sql_delete(app, no_style(), connections[DEFAULT_DB_ALIAS])
>         # Oracle produces DROP SEQUENCE and DROP TABLE for this command.
>         if connections[DEFAULT_DB_ALIAS].vendor == 'oracle':
>             sql = output[1].lower()
>         else:
>             sql = output[0].lower()
>         six.assertRegex(self, sql, r'^drop table .commands_sql_book.*')
>
> Does any backend beside Oracle produce extra SQL for this command? Does the
> different command index for Oracle justify separating this test into two
> separate methods? And if you separate, would you mark the non-oracle case with
> skipIf(Oracle) or with skipUnless(sqlite or postgres or mysql) ?
>
> I think a better solution for this is to keep the original method, and mark it
> with skipUnless(is_core_db) -- we'd need to define is_core_db for that, of
> course; this could also serve as an easy-to-grep marker for "general
> functionality test, with backend variations" -- which I think would be quite
> useful for 3rd-party backend writers.
>
> Shai.

Traditionally a backend feature has been used for this. So, add some
feature like "produces_extra_sql_for_drop_table", if so we skip based
on that. This unfortunately leads to features that make no sense
except when combined with the test suite.
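
In test code that would look roughly like this (the flag name is the made-up
one from above):

    from django.test import TestCase, skipIfDBFeature

    class SQLDeleteTests(TestCase):

        @skipIfDBFeature('produces_extra_sql_for_drop_table')
        def test_drop_table_is_only_statement(self):
            # Only meaningful on backends that emit a single DROP TABLE.
            ...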

One idea is to somehow include a subclass of problematic test classes
in each backend. The backend can alter or skip those tests that are
backend specific. There are a couple of test apps that should work in
this way (introspection, backends, inspectdb, maybe more).

I don't know how to make this actually work so that Django's test
runner finds the correct test class from the backend's test module
instead of the generic backends test class. Optimally a backend could
subclass any test class to specialize or skip tests.

For 3rd party backend support in general: My opinion is that we should
include them all in core. The rules would be:
  1. Core committers do not need to make sure all backends pass tests
before committing features.
  2. There are separate committers for the backends, they make sure
that their backend will pass tests eventually.
  3. When a release is nearing we will try to make sure each backend
passes tests. If the maintainer of the backend doesn't have time to
fix their backend, then we will try to find a new maintainer. If we
can't find a new maintainer, then we simply don't care that the
backend isn't working. That is, if nobody cares to maintain the
backend, then it just isn't important enough to maintain.
  4. The maintainers have complete control over their backend. They
don't need design decisions if a change concerns only their backend.
Of course, if they make really stupid decisions then we will find a new
maintainer.

If Shai's recent contributions are an indication of how such a system
might work, then the system is going to work very well...

 - Anssi





Re: Testing generated SQL under different backends

2013-05-27 Thread Shai Berger
Hi Carl,

On Monday 27 May 2013 19:37:55 Carl Meyer wrote:
> Hi Shai,
> 
> On 05/27/2013 09:26 AM, Shai Berger wrote:
> > I'm working on fixing some failing tests under Oracle, and I ran into
> > 
> > commands_sql.tests.SQLCommandsTestCase.test_sql_all()
> > 
> > [...]
> > 
> > For now, I will only special-case Oracle -- that should solve a standing,
> > release-blocker bug, and not change the semantics of the test otherwise;
> > but I'd like to achieve something better, more general and
> > 3rd-party-backend- friendly, for the future.
> 
> It seems to me that ideally a test for backend-specific behavior should
> become a test for that backend (and thus skipped on other backends).
> This also solves the third-party-backend problem; said backend should
> have its own tests as needed, and Django's tests for backend-specific
> behavior should be skipped under any unrecognized backend.

I agree in general, but this can lead to DRY violations if we're not careful. 
A better example of the problem is a test next to the one I needed to fix:

    def test_sql_delete(self):
        app = models.get_app('commands_sql')
        output = sql_delete(app, no_style(), connections[DEFAULT_DB_ALIAS])
        # Oracle produces DROP SEQUENCE and DROP TABLE for this command.
        if connections[DEFAULT_DB_ALIAS].vendor == 'oracle':
            sql = output[1].lower()
        else:
            sql = output[0].lower()
        six.assertRegex(self, sql, r'^drop table .commands_sql_book.*')

Does any backend beside Oracle produce extra SQL for this command? Does the 
different command index for Oracle justify separating this test into two 
separate methods? And if you separate, would you mark the non-oracle case with 
skipIf(Oracle) or with skipUnless(sqlite or postgres or mysql) ?

I think a better solution for this is to keep the original method, and mark it 
with skipUnless(is_core_db) -- we'd need to define is_core_db for that, of 
course; this could also serve as an easy-to-grep marker for "general 
functionality test, with backend variations" -- which I think would be quite 
useful for 3rd-party backend writers.
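
Something along these lines, say (just a sketch -- is_core_db doesn't exist
yet, and "core" here simply means the backends shipped with Django itself):

    from unittest import skipUnless

    from django.db import connection
    from django.test import TestCase

    is_core_db = connection.vendor in ('sqlite', 'postgresql', 'mysql', 'oracle')

    class SQLCommandsTestCase(TestCase):

        @skipUnless(is_core_db, "output only specified for core backends")
        def test_sql_delete(self):
            ...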

Shai.





Re: Testing generated SQL under different backends

2013-05-27 Thread Carl Meyer
Hi Shai,

On 05/27/2013 09:26 AM, Shai Berger wrote:
> I'm working on fixing some failing tests under Oracle, and I ran into 
> 
>   commands_sql.tests.SQLCommandsTestCase.test_sql_all()
> 
> which collects the sql_all command's output, and verifies it is as expected.
> It includes, among others, these two lines:
> 
>     # PostgreSQL creates two indexes
>     self.assertIn(len(output), [2, 3])
>  
> Which I read to mean: Actually, we expect 2 statements on all backends other 
> than PostgreSQL, and 3 on PostgreSQL.
> 
> The test fails under Oracle, because the Oracle backend generates four 
> statements.
> 
> Two ways to fix this are quite trivial: just add 4 to the list of 
> "acceptable" 
> values, or specialize the test by database vendor -- require (len==4) on 
> Oracle, and (len in [2,3]) otherwise. The latter option is better in terms of 
> testing -- if something changes generation on Oracle, I'd rather that the 
> change not be accepted by the test on the grounds that MySQL generates only 2
> lines.
> 
> However, by this reasoning, it would also be better to special-case 
> PostgreSQL 
> as well; so the test requires len==4 on Oracle, len==3 on PostgreSQL, and 
> len==2 anywhere else.
> 
> Except that this raises another issue: what about the 3rd-party backends? In 
> particular, Michael Manfre has mentioned here his attempt to make SQL Server 
> pass all the tests. If we special-case back-ends, how can we support the 
> needs 
> of a backend that isn't in core?
> 
> For now, I will only special-case Oracle -- that should solve a standing, 
> release-blocker bug, and not change the semantics of the test otherwise; but 
> I'd like to achieve something better, more general and 3rd-party-backend-
> friendly, for the future.

It seems to me that ideally a test for backend-specific behavior should
become a test for that backend (and thus skipped on other backends).
This also solves the third-party-backend problem; said backend should
have its own tests as needed, and Django's tests for backend-specific
behavior should be skipped under any unrecognized backend.

Carl





Testing generated SQL under different backends

2013-05-27 Thread Shai Berger
Hi all,

I'm working on fixing some failing tests under Oracle, and I ran into 

commands_sql.tests.SQLCommandsTestCase.test_sql_all()

which collects the sql_all command's output, and verifies it is as expected.
It includes, among others, these two lines:

    # PostgreSQL creates two indexes
    self.assertIn(len(output), [2, 3])
 
Which I read to mean: Actually, we expect 2 statements on all backends other 
than PostgreSQL, and 3 on PostgreSQL.

The test fails under Oracle, because the Oracle backend generates four 
statements.

Two ways to fix this are quite trivial: just add 4 to the list of "acceptable" 
values, or specialize the test by database vendor -- require (len==4) on 
Oracle, and (len in [2,3]) otherwise. The latter option is better in terms of 
testing -- if something changes generation on Oracle, I'd rather that the 
change not be accepted by the test on the grounds that MySQL generates only 2 
lines.

However, by this reasoning, it would also be better to special-case PostgreSQL 
as well; so the test requires len==4 on Oracle, len==3 on PostgreSQL, and 
len==2 anywhere else.
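
Concretely, I imagine something like this (the counts are the ones discussed
above, and the helper is only a sketch):

    from django.db import DEFAULT_DB_ALIAS, connections

    # Expected number of statements from sql_all, keyed by vendor; anything
    # not listed is assumed to produce two.
    EXPECTED_STATEMENTS = {'oracle': 4, 'postgresql': 3}

    def expected_sql_all_count():
        vendor = connections[DEFAULT_DB_ALIAS].vendor
        return EXPECTED_STATEMENTS.get(vendor, 2)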

Except that this raises another issue: what about the 3rd-party backends? In 
particular, Michael Manfre has mentioned here his attempt to make SQL Server 
pass all the tests. If we special-case back-ends, how can we support the needs 
of a backend that isn't in core?

For now, I will only special-case Oracle -- that should solve a standing, 
release-blocker bug, and not change the semantics of the test otherwise; but 
I'd like to achieve something better, more general and 3rd-party-backend-
friendly, for the future.

Your comments are welcome,

Shai.
