Jerrad Pierce wrote:
>
> My thoughts were the following:
>
> table name in the statement is ignored, there is only one table per DBM.
> this allows you to drop in a different DBD later
>
> the column name is ignored, see above
Sorry, I don't follow you here. The way DBD::AnyData works (and
DBD::CSV is somewhat similar), there is a user-defined association of
table names, data formats, and file names. With DBM it might look like
this:
$dbh->func([
    [ 'tableA', 'DBM', 'file1.dbm' ],
    [ 'tableB', 'DBM', 'file2.dbm' ],
    [ 'tableC', 'CSV', 'file3.csv' ],
], 'ad_catalog');
# dump selected records from one of the tables into others
# do searches on joins across several or all of the tables
# ...
So it doesn't seem like one would want to ignore table names. But maybe
I've missed your point.
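To make the point concrete, here is a minimal sketch (plain Perl, no DBI
required, all names illustrative) of the kind of user-defined catalog
described above, where each table name maps to a data format and a file
name -- which is why a driver would want to honor table names rather
than ignore them:

```perl
use strict;
use warnings;

# Hypothetical catalog: table name -> format + file, as in ad_catalog.
my %catalog = (
    tableA => { format => 'DBM', file => 'file1.dbm' },
    tableB => { format => 'DBM', file => 'file2.dbm' },
    tableC => { format => 'CSV', file => 'file3.csv' },
);

# A driver would consult the catalog to decide how to open each table,
# so the table name in a statement carries real information.
sub table_info {
    my ($table) = @_;
    my $entry = $catalog{$table}
        or die "unknown table '$table'\n";
    return @{$entry}{qw(format file)};
}

my ($fmt, $file) = table_info('tableC');
print "$fmt $file\n";
```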
> For a second run:
>
> multiple columns are allowed however, the relationship between the first
> and second keys MUST be AND. for multi-key systems the data is stored
> as hash within the DBM value. AND is forced, as an OR makes the initial
> pass O(n), and that's just silly, there'd be no point in using a DBM for
> that.
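As I read the proposal above, the layout would look roughly like the
following sketch, assuming Storable for the serialization (a plain
in-memory hash stands in for the tied DBM file, and the function names
are mine). The first key is the DBM key; the remaining columns live in
a serialized hash inside the value, so "key1 = x AND key2 = y" is a
single O(1) fetch plus a field test, whereas an OR across the keys
would force a scan of every record:

```perl
use strict;
use warnings;
use Storable qw(freeze thaw);

my %dbm;    # stand-in for a tied DBM hash

# Store a row: first column is the DBM key, the rest are
# serialized into the DBM value as a hash.
sub store_row {
    my ($key1, %rest) = @_;
    $dbm{$key1} = freeze(\%rest);
}

# AND lookup: O(1) fetch on the first key, then test a second field.
sub fetch_and {
    my ($key1, $field, $want) = @_;
    my $frozen = $dbm{$key1} or return;
    my $row = thaw($frozen);
    return $row if defined $row->{$field} && $row->{$field} eq $want;
    return;
}

store_row('alice', dept => 'eng', city => 'Boston');
my $hit = fetch_and('alice', 'dept', 'eng');
print $hit ? "match\n" : "no match\n";
```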
I am not at all in favor of inventing arbitrary limitations on the SQL.
I would prefer to see this worked out behind the scenes -- if the user
supplies a well-constructed query, it will be optimized, and if they
don't, it won't be. That's how most database implementations work.
Basically the docs would advise users to start out with "WHERE key =
value AND ..." but if they ignore that advice, or have other kinds of
queries they need to perform, they could still do so, just more slowly.
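The behind-the-scenes optimization I have in mind could be sketched
like this (the WHERE-clause parsing is deliberately naive and only for
illustration): recognize a leading "key = value" and do a direct fetch,
otherwise fall back to a full O(n) scan rather than refusing the query:

```perl
use strict;
use warnings;

my %dbm = ( k1 => 'v1', k2 => 'v2', k3 => 'v99' );

# Returns matching [key, value] pairs for a (toy) WHERE clause.
sub lookup {
    my ($where) = @_;
    if ($where =~ /^key\s*=\s*'([^']+)'/) {
        # Optimized path: direct hash fetch on the DBM key.
        my $val = $dbm{$1};
        return defined $val ? [ [ $1, $val ] ] : [];
    }
    if ($where =~ /^value\s*=\s*'([^']+)'/) {
        # Slow path: full scan over every record.
        my $want = $1;
        return [ map  { [ $_, $dbm{$_} ] }
                 grep { $dbm{$_} eq $want } sort keys %dbm ];
    }
    die "unsupported WHERE clause: $where\n";
}

print scalar @{ lookup("key = 'k2'") }, "\n";
```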
What I'm saying here might be more applicable to MLDBM, or to a DBM
that has serialized additional columns, but it would seem like we'd
want to include those kinds of options.
--
Jeff