Brendan Byrd wrote:
> On Fri, Sep 23, 2011 at 5:01 PM, Darren Duncan <dar...@darrenduncan.net> wrote:
>> This is essentially exactly what you want to do: have a common query
>> syntax where, behind the scenes, some of it is turned into SQL that is
>> pushed to the back-end DBMSs and some of it is turned into Perl to do
>> local processing. The great thing is that as a user you don't have to
>> know where it executes, just that the implementation will pick the
>> best way to handle particular code. I think of an analogy with LLVM,
>> which can compile selectively to a CPU or a GPU. Automatically, more
>> capable DBMSs like Postgres get more work pushed to them to do
>> natively, and less capable things like DBD::CSV or whatever have
>> less pushed to them and more done in Perl.
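
To make that split concrete, here is a rough sketch in plain DBI terms; the
table, the date filter, and the Perl-side regex are all invented for
illustration, and none of this is the proposed module's actual API:

  use strict;
  use warnings;
  use DBI;

  my $dbh = DBI->connect('dbi:Pg:dbname=example', '', '', { RaiseError => 1 });

  # Pushed down: Postgres evaluates this filter natively.
  my $rows = $dbh->selectall_arrayref(
      'SELECT id, name, created FROM users WHERE created > ?',
      { Slice => {} },
      '2011-01-01',
  );

  # Done locally: a predicate with Perl regex semantics, the kind of thing
  # that would be evaluated here rather than pushed to a backend like DBD::CSV.
  my @matched = grep { $_->{name} =~ /smith/i } @$rows;

A more capable backend would get both filters compiled into the WHERE clause;
a less capable one would get little more than a table scan pushed to it and
both filters applied in the grep.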
> Yeah, that sounds right. So would this eventually become its own DBD
> module?
Yes and no. It would not natively be a DBD module, but a separate module can
exist that is a DBD module which wraps it. Kind of like how you have both
SQLite and DBD::SQLite, say.
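
For comparison, the DBD::SQLite case looks like this from the caller's side,
and a DBD wrapper around the new module would presumably be used the same
way (the database file name is just a placeholder):

  use DBI;

  # SQLite is the engine; DBD::SQLite is the thin driver that exposes it
  # through the standard DBI interface, so user code only ever sees DBI.
  my $dbh = DBI->connect('dbi:SQLite:dbname=example.db', '', '',
                         { RaiseError => 1 });
  my ($answer) = $dbh->selectrow_array('SELECT 6 * 7');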
> Does it use DBI methods to figure out the specs of the system? For
> example, you were saying "less capable things like DBD::CSV". Is that
> determined by querying get_info for the ODBC/ANSI capability data?
It would use whatever means make sense, which might mean starting with the
DBI methods for some basic functionality and then doing a SELECT from the
INFORMATION_SCHEMA to provide enhanced functionality.
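
For instance, the basic probing could look something like this; the DSN and
the schema name are placeholders, and exactly which INFORMATION_SCHEMA views
get consulted is still open:

  use DBI;
  use DBI::Const::GetInfoType;   # exports %GetInfoType

  my $dbh = DBI->connect('dbi:Pg:dbname=example', '', '', { RaiseError => 1 });

  # Basic capabilities via the standard DBI get_info() interface.
  my $dbms_name = $dbh->get_info( $GetInfoType{SQL_DBMS_NAME} );
  my $dbms_ver  = $dbh->get_info( $GetInfoType{SQL_DBMS_VER}  );

  # Enhanced metadata where the backend actually has an INFORMATION_SCHEMA.
  my $columns = $dbh->selectall_arrayref(q{
      SELECT table_name, column_name, data_type
        FROM information_schema.columns
       WHERE table_schema = ?
  }, { Slice => {} }, 'public');

Something like DBD::CSV would answer far fewer of the get_info questions and
has no INFORMATION_SCHEMA at all, which is exactly the signal for keeping more
of the work in Perl.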
> Of course. Something like this is huge, but it's also hugely important
> to make sure it gets into the hands of the Perl community.
Absolutely.
-- Darren Duncan