On 06/09/10 16:04, Tim Bunce wrote:
On Wed, Jun 09, 2010 at 02:45:18PM +0000, Jens Rehsack wrote:
On 06/09/10 14:36, Tim Bunce wrote:

The interface of the per-table API doesn't allow that :(
That's exactly the reason why I thought it was necessary to warn about it.

Can it be expressed as "this is regarded as a bug and is likely to change"?

It's a bug, it's a bug by design, and yes, this is likely to change
(in the distant future).
This behavior is listed under "GOTCHAS AND WARNINGS" (which seems to
be Jeff's place for describing "BUGS AND LIMITATIONS").

It would be good to explicitly say it's a bug and it's likely to change.

I've added an explicit =head1 BUGS AND LIMITATIONS section for this
paragraph.

Update could do a fetch_row first and fail if the row is not found.

This should happen. I'll try it and report back.

Apart from the performance hit, what's the problem?

I don't rate a full table scan on INSERT as a minor performance hit.
OTOH it's better to be slow than inconsistent.

Sure, but I was thinking specifically of "hash/DBM style databases".
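
The fetch-before-update idea could look roughly like this. This is a
hedged sketch, not the real SQL::Statement per-table API: Demo::Table,
fetch_row and store_row are illustrative stand-ins for a hash/DBM
style table where lookup by key is cheap.

```perl
use strict;
use warnings;

# Minimal in-memory stand-in for a hash/DBM style table (illustrative
# only; not the actual SQL::Statement per-table interface).
package Demo::Table;
sub new       { bless { rows => {} }, shift }
sub fetch_row { my ($self, $key) = @_; return $self->{rows}{$key} }
sub store_row { my ($self, $key, $row) = @_; $self->{rows}{$key} = $row }

package main;

# UPDATE that fetches first and fails if the row is not found,
# instead of silently turning into an INSERT.
sub update_row_checked {
    my ($table, $key, $row) = @_;
    defined $table->fetch_row($key)
        or die "UPDATE failed: no row with key '$key'\n";
    $table->store_row($key, $row);
}

my $t = Demo::Table->new;
$t->store_row(1, { name => 'foo' });
update_row_checked($t, 1, { name => 'bar' });     # ok, row exists
eval { update_row_checked($t, 2, { name => 'baz' }) };
print $@ ? "rejected missing row\n" : "BUG: silent insert\n";
```

For a keyed backend the extra lookup is O(1); the full-scan cost only
bites backends like CSV that have no index at all.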

AFAIK the table method "push_row" is now called only for adding
new rows (read: INSERT only). I have to double-check before we
release a new DBI production release.

If I'm right, it's easy and reasonably fast to check, because no
explicit logic needs to be added to an SQL engine. But we should take
this as a reminder, when designing the new API for SQL::Statement 2.0,
to support more explicit methods for the individual commands.

If you agree, I could now add a new method "insert_row", which the
SQL engines would favor over push_row. This might be safer than
modifying push_row and being surprised by other callers.
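
The backward-compatible dispatch could be done with can(), so engines
prefer the new method only when a table class implements it. A sketch
under assumptions: insert_row is the name proposed above, and the
Legacy::Table/New::Table classes and their return values are purely
illustrative.

```perl
use strict;
use warnings;

# Sketch: an SQL engine favors the proposed insert_row() over the
# legacy push_row(), without breaking existing table classes.
sub engine_insert {
    my ($table, @args) = @_;
    my $method = $table->can('insert_row') ? 'insert_row' : 'push_row';
    return $table->$method(@args);
}

# A table class that predates the proposal: only push_row.
package Legacy::Table;
sub new      { bless {}, shift }
sub push_row { return 'push_row' }

# A table class that implements the new method as well.
package New::Table;
sub new        { bless {}, shift }
sub push_row   { return 'push_row' }
sub insert_row { return 'insert_row' }

package main;
print engine_insert(Legacy::Table->new), "\n";   # push_row
print engine_insert(New::Table->new),    "\n";   # insert_row
```

Old table classes keep working unchanged, while new ones can attach
INSERT-only semantics (e.g. a duplicate-key check) to insert_row.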

It's perfectly understandable (and NOT a bug) that things like CSV files
don't support unique constraints.

Not yet - but it's on TODO :)

On the other hand, databases built on hashes DO have a unique
constraint, and silently discarding rows (on INSERT) or inserting
rows (on UPDATE) is a bug.
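
The implicit unique constraint of a hash-backed store means an INSERT
on an existing key should fail loudly rather than silently replace or
discard the row. A minimal sketch, assuming a plain hash stands in for
the DBM backend (insert_unique is a hypothetical helper, not a real
DBI/SQL::Statement method):

```perl
use strict;
use warnings;

# A plain hash as a stand-in for a hash/DBM style table: the key is
# implicitly unique, so a second INSERT with the same key is an error.
my %table;

sub insert_unique {
    my ($key, $row) = @_;
    die "duplicate key '$key'\n" if exists $table{$key};
    $table{$key} = $row;
}

insert_unique(1, 'first');
eval { insert_unique(1, 'second') };
print $@ ? "duplicate rejected\n" : "BUG: silently replaced\n";
```

The same exists() check is what makes the constraint nearly free for
hash backends, in contrast to the full scan a CSV file would need.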

Jens
