Marc,
A fast way to load a table while preventing duplicates is to
project a temp table from your original table.
Then do an insert or append with a "where ... not in (select ... from temptable)" clause.
If the tables are large, create an index on the unique column and the process will run very fast.
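A minimal sketch of that pattern using SQLite from Python; the table name "users_data", the temp table "incoming", and the unique "name" column are hypothetical stand-ins for Marc's actual schema:

```python
import sqlite3

# In-memory database for illustration; in practice this is the users' database.
con = sqlite3.connect(":memory:")
cur = con.cursor()

# Existing table with a unique rule on the main column.
cur.execute("CREATE TABLE users_data (name TEXT UNIQUE, value INTEGER)")
cur.executemany("INSERT INTO users_data VALUES (?, ?)",
                [("alice", 1), ("bob", 2)])

# Step 1: project the incoming rows into a temp table.
cur.execute("CREATE TEMP TABLE incoming (name TEXT, value INTEGER)")
cur.executemany("INSERT INTO incoming VALUES (?, ?)",
                [("bob", 2), ("carol", 3)])

# For large loads, index the key column so the NOT IN lookup stays fast
# (the UNIQUE constraint already indexes users_data.name).
cur.execute("CREATE INDEX idx_incoming_name ON incoming (name)")

# Step 2: append only the rows whose key is not already present.
cur.execute("""
    INSERT INTO users_data
    SELECT name, value FROM incoming
    WHERE name NOT IN (SELECT name FROM users_data)
""")
con.commit()

# The duplicate "bob" row is skipped; "carol" is added, and no error stops the load.
print([row[0] for row in cur.execute(
    "SELECT name FROM users_data ORDER BY name")])
```

The same WHERE ... NOT IN filter works in most SQL dialects; the only database-specific part here is the temp-table syntax.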
Many ways to accomplish the same end!
-Bob
-------------- Original message --------------
> Hi
>
> I want to insert several hundred rows of data into
> a table for my users. The problem is some of them
> already have some of these rows of data and when
> the insert command tries to add a row that already
> exists I get an error and the process stops.
>
> *there is a unique rule on the main column
>
> So, is there a better way to "insert" these rows
> into their table and have it ignore the duplicate
> rows but keep processing the rest of the rows
> of data?
>
> Thanks
> Marc
>
