Hi DBI and DBD developers,
I have some open points for DBD::File that I'd like to ask for feedback on.
The first two are related to the table meta data.
1) I introduced simple getters and setters for DBD::File table meta
data (look for get_file_meta and set_file_meta in
lib/DBD/File/Developers.pod). Merijn and I came up with the idea
of extending this interface with some wild cards:
- '.': default - that is, the attributes of the database handle are
delivered/modified (even through a proxy, which was the
primary intention)
- '*': all - the attribute of all tables (and the default one)
is modified
- '+': as '*', but restricted to ANSI-conforming table names
(the default isn't touched)
- qr//: all tables matching the regex are affected
The question is related to the getter: how should the attributes be
returned (or rather: what API should be supported)?
Let me explain the cause of the question in a bit more depth. set_file_meta
can be called using the table_name as first argument, the attribute
name as second one and the new attribute value as third argument.
It sounds reasonable to allow the following call, too:
$dbh->func( $tname, { attr1 => val1, attr2 => val2 },
'set_file_meta' );
Consequently get_file_meta should be able to return more than one
attribute, shouldn't it? So we have 3 situations for get_file_meta
regarding the expected return values:
a) 1 table, 1 attribute - expected return value is a scalar
b) n tables, 1 attribute - expected return value is a hash of
table names pointing to the scalar value of the attribute
belonging to that table
c) n tables, m attributes - expected return value is a hash of
table names pointing to a hash of attribute names containing
the attribute values of the affected table
I rate this as too complex an API and would like external thoughts :)
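To make the three cases concrete, here is a minimal sketch of how the
getter's return value could be dispatched once the table and attribute
lists have been resolved. The helper name and the meta-data layout
({ table => { attr => value } }) are illustrative only, not the actual
DBD::File code:

```perl
use strict;
use warnings;

# Sketch only: possible dispatch of get_file_meta's return value.
# $meta layout ({ table => { attr => value } }) is illustrative.
sub get_file_meta_result {
    my ($meta, $tables, $attrs) = @_;

    # a) 1 table, 1 attribute -> plain scalar
    return $meta->{ $tables->[0] }{ $attrs->[0] }
        if @$tables == 1 && @$attrs == 1;

    # b) n tables, 1 attribute -> { table => value }
    return { map { $_ => $meta->{$_}{ $attrs->[0] } } @$tables }
        if @$attrs == 1;

    # c) n tables, m attributes -> { table => { attr => value } }
    return {
        map {
            my $t = $_;
            ( $t => { map { $_ => $meta->{$t}{$_} } @$attrs } );
        } @$tables
    };
}

my %meta = (
    passwd => { f_dir => "/etc", f_ext => "" },
    group  => { f_dir => "/etc", f_ext => "" },
);

my $scalar   = get_file_meta_result( \%meta, ["passwd"], ["f_dir"] );
my $by_table = get_file_meta_result( \%meta, [ "passwd", "group" ], ["f_dir"] );
my $full     = get_file_meta_result( \%meta, [ "passwd", "group" ],
                                     [ "f_dir", "f_ext" ] );
```

The polymorphic return type (scalar vs. hash-of-scalars vs. hash-of-hashes)
is exactly what makes me uneasy about the API.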
2) backward compatibility of deprecated per-table structures of DBD::CSV,
DBD::AnyData (and maybe later DBD::PO or DBD::DBM). DBD::CSV had a
structure $dbh->{csv_tables}{$tname}{...} - which is now handled via
$dbh->{f_meta}{$tname}{...} ... - DBD::AnyData had the same with
$dbh->{ad_tables}{$tname}{...}.
Both had several attributes in them which were not synchronized
between the two DBDs. Further, the $dbh->{f_meta} structure contains
the table names after they have passed through an internal filter
(suggested by mje) which handles the identifier case. An additional
structure $dbh->{f_meta_map} is used to hold the mapping between the
original table name (from an SQL statement or through the getter/setter)
and the internally used identifier.
Because it could be very difficult to manage backward compatibility
there, I would like to have a solution which can be plugged in as
early as possible:
a) $dbh->{csv_tables} or $dbh->{ad_tables} should become a tied hash
which accesses the table's meta data using the get_table_meta method
of the table implementor class.
b) further, the meta data which can be accessed through this tie
should be tied, too - to allow overriding FETCH/STORE methods which
could handle the mapping between old attribute names and new
attribute names.
While I rate 2a as more important than 2b, I would like to get
feedback on both ideas. Discarding 2a could result in inconsistent
entries in $dbh->{f_meta} which could lead the DBD into unpredictable
behavior.
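A minimal sketch of the tie idea in 2a, assuming the legacy hash simply
redirects into the new meta-data structure. Everything below is
illustrative - the real tie class would dispatch to get_table_meta /
set_table_meta on the table implementor class instead of a plain hash:

```perl
use strict;
use warnings;

# Sketch of idea 2a: tie the legacy $dbh->{csv_tables} hash so reads and
# writes land in the new per-table meta data. Class name is hypothetical.
package Compat::TableHash;

sub TIEHASH  { my ( $class, $meta ) = @_; bless { meta => $meta }, $class }
sub FETCH    { $_[0]->{meta}{ $_[1] } }
sub STORE    { my ( $self, $tname, $attrs ) = @_;
               # merge old-style attribute hash into the new structure
               @{ $self->{meta}{$tname} }{ keys %$attrs } = values %$attrs; }
sub EXISTS   { exists $_[0]->{meta}{ $_[1] } }
sub DELETE   { delete $_[0]->{meta}{ $_[1] } }
sub FIRSTKEY { my $self = shift;
               keys %{ $self->{meta} };    # reset iterator
               each %{ $self->{meta} } }
sub NEXTKEY  { each %{ $_[0]->{meta} } }

package main;

my %f_meta;    # stands in for $dbh->{f_meta}
tie my %csv_tables, 'Compat::TableHash', \%f_meta;

# an old-style write is redirected into the new structure
$csv_tables{passwd} = { sep_char => ":" };
```

Idea 2b would then tie the per-table hash returned by FETCH as well, so
its own FETCH/STORE could translate old attribute names to new ones.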
My third question has fewer implications :)
3) Merijn and I had the idea of letting users decide at connect time
whether to use DBI::SQL::Nano or SQL::Statement - not globally via
the DBI_SQL_NANO environment variable.
Because of the handling of the handles for the dr, db and st
instances of a DBD, I hoped to get some suggestions on how we could
implement similar behavior for DBD::File::Statement and
DBD::File::Table.
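One possible shape for the per-connection switch, sketched without DBI.
The attribute name (f_sql_engine) and the returned class names are purely
hypothetical; the point is only that the decision would move from the
environment to the connect attributes:

```perl
use strict;
use warnings;

# Sketch of question 3: pick the SQL engine per connection instead of
# globally via DBI_SQL_NANO. Attribute and class names are hypothetical.
sub statement_class_for {
    my ($attrs) = @_;
    my $engine = $attrs->{f_sql_engine}
        || ( $ENV{DBI_SQL_NANO} ? "DBI::SQL::Nano" : "SQL::Statement" );
    return $engine eq "DBI::SQL::Nano"
        ? "DBD::File::Statement::Nano"      # hypothetical subclass
        : "DBD::File::Statement::Full";     # hypothetical subclass
}

# a driver could consult this while setting up the db handle, e.g. after
# DBI->connect( "dbi:CSV:", undef, undef, { f_sql_engine => "..." } );
```

The open question is where in the dr/db/st handle setup such a per-handle
class choice could be hooked in cleanly.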
Thanks for all answers in advance,
Jens