> The big operator question will be: how will "and" and "or" be
> implemented? This is always a sticking point because of Python's
> short-circuiting behaviors regarding them (the resultant bytecode will
> include a JUMP).

I'm using the Boolean | and & operators for logical groups, e.g. (a | b |
(b & c)).  This might seem ugly to purists, but it solves all of the
short-circuit issues.  It does require the user to write extra
parentheses, because | binds more tightly than ==.  Another option is to
use functions -- AND(EQ(a, 1), OR(IN(B, (1, 2, 3)))) -- but I find that
hard to read.  Mixing the two is sometimes clean: EQ(a, 1) & LT(b, 2).
Still, having too many ways of doing things doesn't seem very Pythonic.
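
Roughly, the operator form I have in mind looks like the sketch below
(Column and Expr are hypothetical stand-ins here, not the actual
sqlstring classes); it also shows why each comparison needs its own
parentheses:

    class Expr:
        # Holds an already-rendered fragment; real code would keep a tree.
        def __init__(self, sql):
            self.sql = sql

        def __and__(self, other):
            return Expr("(%s AND %s)" % (self.sql, other.sql))

        def __or__(self, other):
            return Expr("(%s OR %s)" % (self.sql, other.sql))

        def __repr__(self):
            return self.sql

    class Column:
        def __init__(self, name):
            self.name = name

        def __eq__(self, value):
            return Expr("%s = %r" % (self.name, value))

        def __lt__(self, value):
            return Expr("%s < %r" % (self.name, value))

    a, b = Column("a"), Column("b")

    # | and & bind more tightly than == and <, hence the parentheses:
    print((a == 1) | ((b == 2) & (b < 3)))
    # -> (a = 1 OR (b = 2 AND b < 3))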

>
> An alternative is to stuff the representation into a string, which can
> then be parsed however one likes.
>
> For Dejavu (http://projects.amor.org/dejavu), I didn't do either
> one--instead I used lambdas to express the where clause, so that:
>
>     f = logic.Expression(lambda x: ('Rick' in x.Name) or
>                          (x.Birthdate == datetime.date(1970, 1, 1)))
>     units = sandbox.recall(Person, f)
>
> might produce, in the bowels of the ORM:
>
>     "SELECT * FROM [Person] WHERE [Person].[Name] Like '%Rick%' or
> [Person].[Birthdate] = #1/1/1970#"
>
> Note that the tablename is provided in a separate step. The translation
> is based on the codewalk.py and logic.py modules, which are in the
> public domain if you want to use any part of them. See
> http://projects.amor.org/dejavu/svn/trunk/

This is a very elegant solution, so much so that I almost didn't go
down the path of sqlstring.  Support for lambda expressions is still an
option, though I wanted to try object operator overloading/methods
first--to see if I could avoid the bytecode issue.

>
> >    2. How to best add further sql function support? Adding magic
> > callable objects to columns came to mind, but this has its own set
> > of issues. I'm leaning towards a magic object in the sqlstring
> > module. For example:
> >
> >       sqlstring.F.substring(0, 4, person.first_name)
> >
> >       would result in: substring(0, 4, person.first_name). the F object
> > could be put in the local scope for short-hand.
>
> This is a hard problem, since your sqlstring module doesn't control the
> result sets, and so can't provide fallback mechanisms if a given
> database does not support a given function (or operator, or minute
> detail of how a function or operator works; for example, LIKE is
> case-insensitive in MS SQL Server but case-sensitive in PostgreSQL). If
> you're going to use subclasses to handle "database-specific overwrites"
> (below), then you'll probably want to stick such functions in that base
> class (and override them in subclasses), as well.
Good point.  These things should be interceptable in the
database-specific modules: the library documents how functions should
be used (ANSI where applicable), while database-specific overrides let
us deal with quirks, or with hacks to emulate a missing function, in
particular databases.
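
A rough sketch of what that interception could look like -- an F-style
object that defers to a per-database dialect class, so a function can
be spelled differently or emulated per backend.  All of the class and
method names below are invented for illustration, not sqlstring's API:

    class AnsiDialect:
        def substring(self, expr, start, length):
            # ANSI form
            return "SUBSTRING(%s FROM %d FOR %d)" % (expr, start, length)

    class SqliteDialect(AnsiDialect):
        def substring(self, expr, start, length):
            # SQLite spells it substr(expr, start, length)
            return "substr(%s, %d, %d)" % (expr, start, length)

    class F:
        # Magic function namespace: F.substring(...) defers to the dialect.
        def __init__(self, dialect):
            self.dialect = dialect

        def __getattr__(self, name):
            return getattr(self.dialect, name)

    F_sqlite = F(SqliteDialect())
    print(F_sqlite.substring("person.first_name", 1, 4))
    # -> substr(person.first_name, 1, 4)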


> See the Adapter and SQLDecompiler classes in
> http://projects.amor.org/dejavu/svn/trunk/storage/db.py (and the
> store*.py modules) for some examples of using subclassing to produce
> database-specific syntax. There, it's one Adapter class per supported
> DB-type; you might consider keeping the Expression objects themselves
> free from SQL, and transform the Expressions to SQL in a separate
> class, which you could then subclass.
Thanks.  Your approach here had already inspired me; I'll take another
look at it.  Pulling the SQL out of the Expression objects cuts both
ways, but it might be a clean way to support per-database syntax
nuances.  I'll keep you posted.
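
For what it's worth, something like this is what I'm picturing for that
separation: an SQL-free Comparison node plus a subclassable compiler
(class names made up, not Dejavu's or sqlstring's).  It also covers the
LIKE case-sensitivity point you raised:

    class Comparison:
        # Pure data: no SQL text lives in the expression node itself.
        def __init__(self, column, op, value):
            self.column, self.op, self.value = column, op, value

    class SQLCompiler:
        # Base (roughly ANSI) renderer; DB-specific subclasses override pieces.
        def compile(self, node):
            return "%s %s %s" % (node.column, self.operator(node.op),
                                 self.literal(node.value))

        def operator(self, op):
            return {"eq": "=", "like": "LIKE"}[op]

        def literal(self, value):
            if isinstance(value, str):
                return "'%s'" % value
            return str(value)

    class PostgresCompiler(SQLCompiler):
        def operator(self, op):
            # ILIKE gives the case-insensitive match MS SQL's LIKE has by default
            if op == "like":
                return "ILIKE"
            return SQLCompiler.operator(self, op)

    node = Comparison("Person.Name", "like", "%Rick%")
    print(SQLCompiler().compile(node))       # Person.Name LIKE '%Rick%'
    print(PostgresCompiler().compile(node))  # Person.Name ILIKE '%Rick%'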

Runar

-- 
http://mail.python.org/mailman/listinfo/python-list
