Hey Tim.

> I've kept an eye on Perl 6 and Parrot developments but I'm no expert in
> either. What I'd like *you* to do is make proposals (ideally fairly
> detailed proposals, but vague ideas are okay) for what a Perl 6 DBI API
> should look like.
> Keep in mind that the role of the DBI is to provide a consistent
> interface to databases at a fairly low level. To provide a *foundation*
> upon which higher level interfaces (such as Class::DBI, Tangram, Alzabo
> etc. in Perl 5) can be built.

OK, well, based on my experience, here are a few things that would be nice:

  - optional treatment of the statements as an AST, similar in concept to
    SQL::Routine, or Tangram::Expr.  Death to SQL templating systems!

    Ideally there should be no need for a module like DBD::CSV or even,
    eventually, DBD::SQLite to actually generate SQL strings for queries;
    the AST is enough for them to go away and run the query.

    It should be possible to use either style, though - and to use /named/
    placeholder arguments with both, something like:

        use DBI-2;
        my $dbi = DBI.connect(...);
        my $sth = $dbi.prepare("select * from foo where bar = :bar");
        $sth.execute(:bar($bar));

        my $table = $dbi.table("foo");
        my $sth = $dbi.select
                    (:all,
                     :from($table),
                     :where($table.bar == $dbi.placeholder("bar"))
                    );
        $sth.execute(:bar($bar));

    Virtually every major DB abstraction ends up building something like
    this, whether or not they realise they're doing it ;).
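
    None of the Perl 6 API above exists yet, so as a language-neutral
    sketch of the AST idea, here is a toy query-expression tree in Python
    (all class and function names are my invention) that renders to SQL
    with a named placeholder - a real driver could walk the same tree
    directly instead of ever generating a string:

```python
class Placeholder:
    """A named placeholder node, rendered as :name."""
    def __init__(self, name):
        self.name = name

class Condition:
    """A binary comparison node in the expression tree."""
    def __init__(self, col, op, value):
        self.col, self.op, self.value = col, op, value
    def to_sql(self):
        rhs = (f":{self.value.name}" if isinstance(self.value, Placeholder)
               else repr(self.value))
        return f"{self.col.name} {self.op} {rhs}"

class Column:
    """A column reference; overloading == builds a Condition node."""
    def __init__(self, table, name):
        self.table, self.name = table, name
    def __eq__(self, other):
        return Condition(self, "=", other)

class Table:
    def __init__(self, name):
        self.name = name
    def __getattr__(self, col):        # foo.bar -> Column(foo, "bar")
        return Column(self, col)

def select(from_, where):
    """Render the tree to SQL; a driver could interpret it directly."""
    return f"SELECT * FROM {from_.name} WHERE {where.to_sql()}"

foo = Table("foo")
sql = select(from_=foo, where=(foo.bar == Placeholder("bar")))
print(sql)  # SELECT * FROM foo WHERE bar = :bar
```

    Overloading comparison operators on column objects is essentially how
    Tangram::Expr builds its trees, too.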

  - support for automatically pulling database DSN information from a
    ~/.dbi (or similar) file.  This is constantly re-invented, poorly.
    Let's just connect by logical application name and let the
    sysadmins sort out, in a standard way, which DB that maps to.
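
    As a sketch of that idea (the file format here is entirely made up -
    an INI-style section per logical application name), resolution could
    look like this in Python:

```python
import configparser

# Hypothetical ~/.dbi contents: one section per logical application name.
DBI_FILE = """
[accounts]
dsn = dbi:Pg:dbname=accounts;host=db1
user = acct_ro

[reporting]
dsn = dbi:SQLite:dbname=/var/lib/report.db
"""

def lookup_dsn(app_name, source=DBI_FILE):
    """Resolve a logical application name to connection details."""
    cfg = configparser.ConfigParser()
    cfg.read_string(source)
    section = cfg[app_name]
    return {"dsn": section["dsn"], "user": section.get("user")}

print(lookup_dsn("accounts")["dsn"])  # dbi:Pg:dbname=accounts;host=db1
```

    The application then says only "connect me to accounts"; which
    database that really is becomes a deployment decision.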

  - object-oriented modelling of core database objects, so that schema
    operations can be made portable, a la Rosetta.  This should really
    just constitute re-thinking the existing interfaces, and leaving it
    up to the DBD authors to finish them up.

    This may also open the door to unifying the way that relationships
    such as foreign key constraints, etc. are dealt with.

  - make it easier to get ACID right; steal Tangram's `tx_do` transaction
    interface, though I'd suggest a name including the Perl6-ish `try`:

        $dbi.tx_try {

            # ... transaction ...

        };

    in fact, if you were to wrap the closure in an atomic{ } block, then
    you could probably support replaying the transaction in the event of
    a forced rollback (so long as the code in the closure has no side
    effects ... <g>)

    Also there is the nested transaction interface:

        $dbi.tx_start;
        $dbi.tx_start;
        $dbi.tx_commit;
        $dbi.tx_commit;  # commit happens here

    Some databases can even support nested transactions internally via
    SQL SAVEPOINTS.
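
    The depth-counting trick behind that nested interface can be sketched
    in Python against SQLite, which supports SAVEPOINTs; the TxHandle
    class is hypothetical, with method names borrowed from the proposal
    above:

```python
import sqlite3

class TxHandle:
    """Count nesting depth; map inner transactions onto SQL SAVEPOINTs."""
    def __init__(self, conn):
        self.conn, self.depth = conn, 0
    def tx_start(self):
        if self.depth == 0:
            self.conn.execute("BEGIN")
        else:
            self.conn.execute(f"SAVEPOINT sp{self.depth}")
        self.depth += 1
    def tx_commit(self):
        self.depth -= 1
        if self.depth == 0:
            self.conn.execute("COMMIT")       # the real commit happens here
        else:
            self.conn.execute(f"RELEASE SAVEPOINT sp{self.depth}")
    def tx_rollback(self):
        self.depth -= 1
        if self.depth == 0:
            self.conn.execute("ROLLBACK")
        else:
            # a full implementation would also RELEASE the savepoint
            self.conn.execute(f"ROLLBACK TO SAVEPOINT sp{self.depth}")

conn = sqlite3.connect(":memory:", isolation_level=None)  # autocommit mode
conn.execute("CREATE TABLE t (x)")
dbh = TxHandle(conn)
dbh.tx_start()
conn.execute("INSERT INTO t VALUES (1)")
dbh.tx_start()
conn.execute("INSERT INTO t VALUES (2)")
dbh.tx_rollback()                 # undoes only the inner insert
dbh.tx_commit()
print(conn.execute("SELECT count(*) FROM t").fetchone()[0])  # 1
```

    On databases without savepoints, the inner rollback would have to
    poison the whole transaction instead.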

  - asynchronous/event-based processing of queries, so that we don't need
    yet another sub-process when in POE-like environments (which might
    include a standard Perl 6 event programming system).
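
    There is no standard async DBI to show yet, but the shape of
    event-driven query processing can be sketched in Python with asyncio,
    pushing the blocking call onto a worker thread so the event loop is
    never stalled (a real driver would speak the wire protocol
    non-blockingly instead):

```python
import asyncio
import sqlite3

def setup():
    # check_same_thread=False: the query runs on a worker thread below
    conn = sqlite3.connect(":memory:", check_same_thread=False)
    conn.execute("CREATE TABLE foo (bar)")
    conn.execute("INSERT INTO foo VALUES (42)")
    return conn

async def execute_async(conn, sql, params=()):
    """Run a blocking query off the event loop and await the rows."""
    loop = asyncio.get_running_loop()
    return await loop.run_in_executor(
        None, lambda: conn.execute(sql, params).fetchall())

async def main():
    conn = setup()
    rows = await execute_async(conn, "SELECT bar FROM foo WHERE bar = ?",
                               (42,))
    print(rows)  # [(42,)]
    return rows

asyncio.run(main())
```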

In terms of making things Perl6-ish, you probably want to look at technology
such as coroutines for cursors, and continuations for exception handling, and
their implications for writing transactional applications.

Perl 6 will be able to serialise continuations and probably also coroutines,
so it should be possible for a continuation serialised with an active cursor
in a coroutine to automatically resume itself once the continuation is thawed.

This might happen by simply leaving the transaction open (I hear screams!) in
a single-threaded context, or by rolling back, replaying the transaction's
queries when the continuation resumes, and checking that all the database
responses match what was previously read.  At the first sign of a
discrepancy, the next database operation would throw a fake ROLLBACK, which
could even be caught and the application transaction replayed from the
beginning, if the application used .tx_try :retry(3), :{ ... }

This would make some of the "old problems", such as object versioning,
potentially a lot easier to solve.
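
The replay-and-verify idea can be sketched in Python: log each query with a
digest of its results, and on resume re-issue the log and compare digests; a
mismatch is the cue to roll back and retry.  All names here are illustrative:

```python
import hashlib
import sqlite3

class ReplayLog:
    """Record (sql, params, result digest) so a thawed continuation can
    verify the database still looks as it did when the cursor froze."""
    def __init__(self):
        self.entries = []
    def record(self, conn, sql, params=()):
        rows = conn.execute(sql, params).fetchall()
        self.entries.append((sql, params, self._digest(rows)))
        return rows
    def replay(self, conn):
        for sql, params, digest in self.entries:
            rows = conn.execute(sql, params).fetchall()
            if self._digest(rows) != digest:
                raise RuntimeError("stale data: transaction must be replayed")
    @staticmethod
    def _digest(rows):
        return hashlib.sha256(repr(rows).encode()).hexdigest()

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE obj (id, version)")
conn.execute("INSERT INTO obj VALUES (1, 7)")
log = ReplayLog()
log.record(conn, "SELECT version FROM obj WHERE id = ?", (1,))
log.replay(conn)                      # still consistent: no exception
conn.execute("UPDATE obj SET version = 8 WHERE id = 1")
try:
    log.replay(conn)                  # another writer changed the row
except RuntimeError as e:
    print(e)  # stale data: transaction must be replayed
```

In the DBI-2 picture, that RuntimeError is the fake ROLLBACK which the
:retry machinery would catch.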

Allow me to illustrate with some application code;

  my $mvc;

  if ($mvc.controller.action eq "edit") {
     my $id = $mvc.controller.edit_id;
     my $object;
     $dbi.tx_try {
        $object = $fetch_query.select_only(:id($id));

        $mvc.view(:state("edit"), :object($object))
           until $mvc.controller.action eq "commit";

        $update_query.execute(:id($id));

        CATCH {
            $mvc.view(:state("error"),
                      :message("object modified by another!"));
        }

     }, :retry(3);
     $mvc.view(:state("edit"), :object($object));
  }

So, depending on the MVC system in use, when you write "$mvc.view()", it
will do one of the following things:

Session-based continuations:

    1. Serialise the continuation - when it comes across the DBH with an
       active transaction, it creates an object which represents the
       queries issued so far in this transaction, the checksum of the
       responses they returned, and the positions of any cursors.

    2. When the "real" continuation is destroyed, roll back to the
       savepoint that the application server started just after opening
       the transaction.

    3. Save the serialised continuation into a LOB field in the session,
       save the session, and commit the outer transaction.

    4. Issue the page/response.

Pure serialisable continuations:

    1. Serialise the continuation as above.

    2. Roll back the transaction as its live continuation object is
       destroyed.

    3. Store the serialised continuation in a hidden form field or other
       client-side state variable.

    4. Issue the page/response.

Each time the controller receives an action, it is resuming a
continuation.  It then starts the operation described above: re-issuing
the queries (a SELECT in this case) and checking the checksums it gets
back.  If there is a discrepancy, it is treated as a normal transaction
inconsistency, and the CATCH block issues a message saying that the
data was stale.  As that view is logically outside the tx_try { } block,
it does not need to serialise an open transaction when the continuation
is serialised to the response or saved to the session.

Finally, the transaction is committed.  For the whole session there were
no long-running transactions and, in the case of pure serialisable
continuations, no need for anything other than client-side state.

Consider my 2¢ deposited ;-)

I'm about to fly off for two weeks' vacation (in a few hours), blissfully
free of any hi-tech gear beyond a mobile phone.  When I get back I'll
gather up your emails and try to distill them into a coherent whole.

Hope you're having a great time.

Sam.
