I believe that currently the only way to properly handle that is to
capture the error output (see 'perldoc DBI' at a command prompt and
search for "Transaction" and "eval") and parse that output for
something common to all cases where you try to insert a duplicate
record, for example the string "Duplicate entry".  If you find
"Duplicate entry" in the error output, then you can display "Cannot
insert, same data" if you want.
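
A rough sketch of that approach, assuming an already-connected DBI
handle $dbh and guessing table/column names (user, nik, name, pin)
from the poster's sample data - so treat those names as hypothetical:

    # Sketch only: $dbh, the table, and the column names are assumptions.
    my $sth = $dbh->prepare(
        "INSERT INTO user (nik, name, pin) VALUES (?, ?, ?)");
    eval {
        local $dbh->{RaiseError} = 1;   # make execute die on failure
        local $dbh->{PrintError} = 0;   # suppress the default warning
        $sth->execute($nik, $name, $pin);
    };
    if ($@) {
        if ($@ =~ /Duplicate entry/) {
            print "Cannot insert, same data\n";
        } else {
            die $@;    # some other error - re-throw it
        }
    }

Note the string match is MySQL-specific; other databases word their
duplicate-key errors differently, which is the portability problem
discussed below.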

A few months ago we had discussed on this list _standardizing_ error
handling for things like trying to insert duplicate records, but I kind
of dropped the ball - changing jobs and all didn't leave me enough time
to pursue that standardization with members of this list.  And since I
don't use Perl in my new job, I now don't have time to pursue it at
all.  Anyone else have time to revisit standardizing error handling for
things like "Trying to insert duplicate record"?  Basically all it
involved was putting together a list of common error conditions and
finding the corresponding ODBC error codes.
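
For what it's worth, DBI already exposes the SQLSTATE/ODBC-style code
through $h->state, so a more portable check might look something like
the sketch below (assumes a connected $dbh and a prepared $sth; also
note that some drivers only report the generic 'S1000' state, so this
is not guaranteed to work everywhere):

    eval {
        local $dbh->{RaiseError} = 1;
        $sth->execute(@values);
    };
    if ($@ && $dbh->state eq '23000') {
        # SQLSTATE class 23 = integrity constraint violation,
        # which covers duplicate-key errors on most databases
        print "Cannot insert, same data\n";
    }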

Hardy Merrill

>>> Michael Ragsdale <[EMAIL PROTECTED]> 03/31/04 04:09PM >>>

>Hi everyone, I have written the Perl script above.  If I input the
>data 12345678,Hendra Kusnandar,123456 it will be inserted into table
>user, and if I input data in the wrong format it will be written to a
>log.  The script works, but if I insert 12345678,Hendra
>Kusnandar,123456 again the script fails with the message:
>DBD::mysql::st execute failed: Duplicate entry '12345678' for key 1
>at ./test.pl line 20, <> line 1.
>Unable to execute query: DBI::db=HASH(0x81e1ed0)->errstr
>I know the field nik in table user in MySQL is a primary key, so I
>can't insert the same data... how do I correct this script so that if
>I insert the same data it will show the message "cannot insert, same
>data" on the shell?

Actually, you can insert the same data.  Instead of using INSERT, use
REPLACE.  If the data does not exist, the statement will act just like
an INSERT statement.  If the data (unique value) already exists, the
fields in the REPLACE statement will be overwritten with the new data.
This won't give you any sort of "same data" error, but it will prevent
the "execute failed" errors.  This may or may not do the trick for
you, depending on how you want to handle duplicate data from your
input source.
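
In DBI terms that is a one-word change to the SQL.  A sketch, where
$dbh and the table/column names (user, nik, name, pin) are guesses
based on the poster's sample data:

    # REPLACE behaves like INSERT when the key is new, and replaces
    # the existing row when the primary key (nik here) already exists.
    my $sth = $dbh->prepare(
        "REPLACE INTO user (nik, name, pin) VALUES (?, ?, ?)");
    $sth->execute('12345678', 'Hendra Kusnandar', '123456');

Keep in mind REPLACE silently overwrites the old row, so use it only
if "last write wins" is the behavior you actually want.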

-Mike 
