On 8/9/07, George Vilches <[EMAIL PROTECTED]> wrote:

> But, these cases still wouldn't allow you to just print this
> additionally generated SQL, or in any way retrieve and use it from any
> of the other commands (sql, sqlclear, etc.).  This is sometimes
> frustrating, as these additional SQL statements being generated are
> dependent on the current state of the model.

+1. I love this idea! As a side note, it actually fits in with a whole
lot of other ideas that have been circulating recently.

- Speeding up the test system with a database mock will require the
ability to record and replay the commands going to the database.

- The schema-evolution suggestions that are under development may
require the ability to convert a sequence of Python-based ORM commands
into their equivalent SQL for storage, rather than executing them
directly on the backend.

> So, my proposal is this: generate hooks for users.  For each of the
> get_custom_sql/get_create_sql/etc., add a small portion that checks the
> installed apps for their own management.py file and an appropriate
> method.  For instance, "sqlcustom"'s method could be
> "get_custom_sql_for_model_all", denoting that it's run on every model in
> every app that is having the current manage.py operation applied to it.
>   These functions would be expected to return an array of SQL
> statements, which could then be fit in with the other generated SQL from
> each of the current built in methods.
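
A hook of the kind proposed might look something like this sketch - the
function name, signature, and SQL here follow the proposal but are all
illustrative, not an existing Django API:

```python
# myapp/management.py -- hypothetical hook; the name and signature follow
# the proposal above and are not an existing Django API.
def get_custom_sql_for_model_all(model, style):
    """Return a list of extra SQL statements for the given model.

    manage.py (sqlcustom, sqlall, etc.) would collect these and fit
    them in with the SQL it already generates.
    """
    table = model._meta.db_table
    # Illustrative statement only: add a custom index per table.
    return ["CREATE INDEX %s_custom_idx ON %s (id);" % (table, table)]
```

Each installed app's management.py would be checked for this method, and
the returned statements appended to the output of the relevant manage.py
command.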

This is actually how the management commands started out - once upon a
time, you ran ./manage.py install myapp, which was a wrapper around
calling ./manage.py sqlall myapp and piping the output to the
database.

The problem is that this approach isn't very flexible. Some of what
syncdb does isn't handled at the raw SQL level - we use the ORM to
generate the commands, queries, etc. The post_sync handlers, for
example, would be almost impossible to recode in the way you describe,
and any user-provided post_sync handlers would need to support some
sort of 'retrieve SQL' API.

I would suggest approaching this problem at a lower level than that -
installing a filter at the level of the database cursor that diverts
queries away from the actual database and into a store. That way, if
you run the code 'Author.objects.all()', the backend will get the
request to invoke 'SELECT * FROM Author', but this will get recorded
rather than sent to the database.
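
A minimal sketch of what that filter could look like - the class and its
wiring are hypothetical, not existing backend code; it assumes a store
shared with whatever wants to read the recorded SQL afterwards:

```python
class RecordingCursor(object):
    """Stand-in for a DB-API cursor that records SQL instead of running it."""

    def __init__(self, store):
        self.store = store  # shared list collecting (sql, params) pairs

    def execute(self, sql, params=()):
        # Divert the query into the store rather than the database.
        self.store.append((sql, params))

    def fetchall(self):
        # Nothing was actually executed, so there are no rows to return.
        return []
```

The backend's cursor() method would hand back one of these when
recording mode is active, so evaluating Author.objects.all() lands
'SELECT * FROM Author' in the store instead of on the database.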

You then add a --sql flag to ./manage.py that sets up the recording
mode on the database backend, and outputs the command buffer at the
end of execution. If you make this interface generic, anyone could
invoke SQL recording whenever they want.
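
Roughly, the command dispatch could branch on such a flag like this -
the '--sql' option and both callables are assumptions for the sketch,
not current manage.py behaviour:

```python
from optparse import OptionParser  # manage.py option parsing of the era


def dispatch(argv, run, record):
    """Run a management command, or capture its SQL when --sql is given.

    'run' executes the command normally; 'record' runs it with the
    recording backend enabled and returns the captured statements.
    """
    parser = OptionParser()
    parser.add_option("--sql", action="store_true", dest="sql", default=False,
                      help="record generated SQL instead of executing it")
    options, args = parser.parse_args(argv)
    if options.sql:
        # Output the command buffer at the end of execution.
        return record()
    run()
    return []
```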

Part of this infrastructure is already in place for logging purposes.
Improvements to the logging capability that allow for recording and
playback would be most welcome.

Yours,
Russ Magee %-)

You received this message because you are subscribed to the Google Groups "Django developers" group.
