Jim Starkey wrote:
Sheeri K. Cabral wrote:
On 9/14/08, *Roland Bouman* <[EMAIL PROTECTED] <mailto:[EMAIL PROTECTED]>> wrote:

    > How about inserting and getting the result back at the same time?
    > (Postgres does this already)  Or inserting into two tables at once?
    > Or deleting from one table and inserting into another at the same
    > time? Or deleting while getting back the deleted rows? (That would
    > be a "queue", but it would be useful for a lot more than that --
    > think data archiving -- DELETE FROM foo INTO foo_archive WHERE ....")


    Agree with Stewart. Batched client protocol. Would be nice too because
    it may be easier for the receiving end to shell out the statements to
    multiple nodes and execute the batch in parallel.


Actually, it goes a bit beyond a batched client protocol. Specifically, in an "archiving" context, I'd like to see some kind of functionality like Unix's &&: "Take this record, insert it into this table, and iff that was successful, delete it." (iff = if and only if)
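Today that "&&" behaviour is usually approximated client-side with a transaction. A minimal sketch, using Python's stdlib sqlite3 purely for illustration (the `foo`/`foo_archive` tables echo the example quoted above; nothing here is HADB- or Drizzle-specific):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE foo (id INTEGER PRIMARY KEY, payload TEXT);
    CREATE TABLE foo_archive (id INTEGER PRIMARY KEY, payload TEXT);
    INSERT INTO foo VALUES (1, 'old'), (2, 'new');
""")

try:
    # One transaction: the DELETE takes effect iff the INSERT succeeded.
    with conn:  # commits on success, rolls back on any exception
        conn.execute("INSERT INTO foo_archive SELECT * FROM foo WHERE id = 1")
        conn.execute("DELETE FROM foo WHERE id = 1")
except sqlite3.Error:
    pass  # nothing moved; foo is untouched

print(conn.execute("SELECT COUNT(*) FROM foo").fetchone()[0])          # 1
print(conn.execute("SELECT COUNT(*) FROM foo_archive").fetchone()[0])  # 1
```

The point of a server-side `DELETE ... INTO` (or an aggregating protocol) is collapsing these two statements plus the transaction bookkeeping into one round trip.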


This is what I've been calling (sometimes) blob-in/blob-out or (sometimes) aggregating protocols: something that can pass in a hunk of data, mulch it in the server with as many SQL statements as it takes, and send it back as a hunk. Everything in one round trip.

There isn't a snowball's chance in hell of getting agreement on one aggregating interface, so defining them as a class to be implemented with plugins makes the most sense.

This is how we do it in HADB, actually. The solution is also more or less SQL-standard compliant. You take a batch of SQL statements, each with zero or more parameters, then send the statements and as many of the parameters as possible in one chunk. The server may execute on what it has got, but will not send back results before all parameters have been received.

After execution, query results are sent back one by one, followed by the result set data. Again, this is chunk-based, so the client can decide what to do (continue, cancel execution, skip to the next result set) after each chunk of a result set is received.
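From the client's side, that chunk-at-a-time consumption looks roughly like the following sketch. Python's sqlite3 `fetchmany` stands in for the HADB wire protocol here (the `log` table and chunk size are invented); after each chunk the client could equally stop or skip ahead instead of continuing:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE log (n INTEGER)")
conn.executemany("INSERT INTO log VALUES (?)", [(i,) for i in range(250)])

# Pull the result set in fixed-size chunks; the decision point between
# chunks is where "continue / cancel / next result set" would live.
cur = conn.execute("SELECT n FROM log")
total = 0
while True:
    chunk = cur.fetchmany(100)  # one "chunk" of the result set
    if not chunk:
        break
    total += len(chunk)

print(total)  # 250
```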

Because this is a clustered database, the server will break up the statements and send them to one or more executing slaves, with the possibility of parallel execution.

The protocol also supports multiple parameter sets, again in one round trip where possible. So the most "advanced" operation is executing multiple statements as a batch, and executing the batch multiple times (with one parameter set per execution).
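A client-side sketch of "multiple statements, each with multiple parameter sets, submitted in one go", again using Python's sqlite3 as a stand-in (the tables and data are invented; a real aggregating protocol would ship the whole `batch` structure to the server in one round trip):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE a (x INTEGER);
    CREATE TABLE b (y TEXT);
""")

# A batch of two statements, each with its own list of parameter sets.
batch = [
    ("INSERT INTO a VALUES (?)", [(1,), (2,), (3,)]),
    ("INSERT INTO b VALUES (?)", [("p",), ("q",)]),
]
with conn:  # one transaction around the whole batch
    for sql, paramsets in batch:
        conn.executemany(sql, paramsets)  # one execution per parameter set

print(conn.execute("SELECT COUNT(*) FROM a").fetchone()[0])  # 3
print(conn.execute("SELECT COUNT(*) FROM b").fetchone()[0])  # 2
```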

The place where the SQL standard is unclear is atomicity. ODBC and JDBC define batched statements and multiple parameter sets, but leave the decision on whether each statement is atomic or the entire round trip is atomic to the server. HADB treats the whole batch as one atomic "statement".
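The difference matters as soon as one statement in the batch fails. A sketch of the whole-batch-atomic choice (the HADB behaviour), again with sqlite3 as a stand-in: the duplicate key in the second statement undoes the first one as well, so the table ends up empty rather than holding the rows that succeeded.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (id INTEGER PRIMARY KEY)")

batch = [
    "INSERT INTO t VALUES (1)",
    "INSERT INTO t VALUES (1)",  # duplicate key -> this statement fails
    "INSERT INTO t VALUES (2)",
]

# Whole-batch atomicity: one transaction around the batch, so the
# failure rolls back everything, including the first INSERT.
try:
    with conn:
        for stmt in batch:
            conn.execute(stmt)
except sqlite3.IntegrityError:
    pass

print(conn.execute("SELECT COUNT(*) FROM t").fetchone()[0])  # 0
```

Per-statement atomicity would instead leave rows 1 and 2 in place and report a partial failure, which is why the standard's silence here is a real interoperability problem.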

A statement batch has a serious drawback compared to a full stored procedure, of course: it must execute all statements in sequence; no conditional execution is possible.

And there is no difference between a prepared and a direct statement (except for the parameters, of course...).

Roy

_______________________________________________
Mailing list: https://launchpad.net/~drizzle-discuss
Post to     : [email protected]
Unsubscribe : https://launchpad.net/~drizzle-discuss
More help   : https://help.launchpad.net/ListHelp
