On Tue, Jun 08, 2010 at 11:54:32AM +0000, Jens Rehsack wrote:
> On 06/08/10 11:42, Tim Bunce wrote:
> >On Fri, Jun 04, 2010 at 04:12:18AM -0700, rehs...@cvs.perl.org wrote:
> >>
> >>+statements really do the same (even if they mean something completely
> >>+different):
> >>+
> >>+  $sth->do( "insert into foo values (1, 'hello')" );
> >>+
> >>+  # this statement does ...
> >>+  $sth->do( "update foo set v='world' where k=1" );
> >>+  # ... the same as this statement
> >>+  $sth->do( "insert into foo values (1, 'world')" );
> >
> >Is this really necessary? Can't we get duplicate inserts and
> >updates of non-existent rows to behave in a sane manner?
>
> The interface of the per-table API doesn't allow that :(
> That's exactly the reason why I thought it's required to warn about that.
Can it be expressed as "this is regarded as a bug and is likely to change"?

> When I took over SQL::Statement and Tux got DBD::CSV, we talked to each
> other and discovered that a per-table API for indices is missing.
>
> This is one goal I want to reach when developing SQL::Statement 2.0 - but
> it will be a long road.
>
> >The hash/DBM style databases should be modeled as two-column tables
> >with a unique constraint on the key column.
>
> The table API just knows: fetch_row, push_row, push_names and some
> optimized routines to allow update/delete of specific rows.

Insert could do a fetch_row first and fail if a row is found.
Update could do a fetch_row first and fail if the row is not found.
Apart from the performance hit, what's the problem?

Tim.
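To make the suggestion concrete, here is a minimal sketch of the "fetch first, then fail" checks for a two-column, unique-key table. This is illustrative only: the package name, the constructor, and the fetch_row_by_key lookup helper are hypothetical and not part of the real per-table API (which only exposes fetch_row, push_row, push_names and a few optimized routines).

```perl
#!/usr/bin/perl
use strict;
use warnings;

# Hypothetical in-memory table, keyed on the first column, to show
# the insert/update semantics being proposed.
package Sketch::Table;

sub new { bless { rows => {} }, shift }

# Illustrative lookup by key -- NOT part of the real table API.
sub fetch_row_by_key {
    my ($self, $key) = @_;
    return $self->{rows}{$key};
}

# Insert does a lookup first and fails if a row already exists.
sub insert {
    my ($self, $key, $value) = @_;
    die "duplicate key '$key'\n"
        if defined $self->fetch_row_by_key($key);
    $self->{rows}{$key} = [ $key, $value ];
    return 1;
}

# Update does a lookup first and fails if the row is not found.
sub update {
    my ($self, $key, $value) = @_;
    die "no row with key '$key'\n"
        unless defined $self->fetch_row_by_key($key);
    $self->{rows}{$key} = [ $key, $value ];
    return 1;
}

package main;

my $t = Sketch::Table->new;
$t->insert(1, 'hello');                  # ok
eval { $t->insert(1, 'world') };         # fails: duplicate key
print "insert: $@";
$t->update(1, 'world');                  # ok, row exists now
eval { $t->update(2, 'oops') };          # fails: no such row
print "update: $@";
```

With checks like these, the documented surprise (an update silently acting as an insert, and vice versa) would become an error instead, at the cost of one extra lookup per statement.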