This works for a small amount of data, but for a large amount of data
the join takes a long time.
Bruce Momjian wrote:
Michael Enke wrote:
I have a feature request, as I think this is not possible with the current version:
I want to load a huge amount of data, and I know that COPY is much faster than
INSERT. But in my case I have an already filled table, and some rows (not all,
only part) of this table should be replaced. The table has a primary key on one
column.
If I do a COPY table FROM file and a key value already exists, PostgreSQL reports
that the import is not possible because of the violation of the PK.
Since postgres is aware of such a violation, couldn't there be an option to COPY
to delete such existing rows, so that a COPY table FROM file never generates
a PK violation message but replaces existing rows?
If this is not possible, would the next fastest solution be to create a
before trigger that deletes such rows in advance? Or is this no different from
issuing an INSERT for every line and, if it fails (because of the PK), an UPDATE?
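The before-trigger idea above can be sketched roughly like this. This is only a sketch, and the table name "items" and key column "id" are hypothetical; COPY fires row-level triggers, so the trigger deletes any conflicting row just before each incoming row is inserted:

```sql
-- Hypothetical schema: table "items" with primary key column "id".
CREATE FUNCTION replace_on_insert() RETURNS trigger AS $$
BEGIN
    -- Remove any existing row with the same key so the insert cannot
    -- violate the primary key.
    DELETE FROM items WHERE id = NEW.id;
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER items_replace
    BEFORE INSERT ON items
    FOR EACH ROW EXECUTE PROCEDURE replace_on_insert();
```

Note that this costs one index lookup (and possibly a delete) per copied row, so it is unlikely to be as fast as a plain COPY into an empty table.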
I would just COPY into another table, remove any duplicates by joining
the two tables, and then do an INSERT INTO ... SELECT.
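A sketch of that staging-table approach, with hypothetical names (target table "items" with primary key "id", staging table "items_load", and an example file path):

```sql
-- Stage the incoming file in a temporary table with the same structure.
CREATE TEMP TABLE items_load (LIKE items);

COPY items_load FROM '/tmp/newdata.copy';

-- Remove the target rows that are about to be replaced
-- (the "join" step: match staged keys against existing keys).
DELETE FROM items WHERE id IN (SELECT id FROM items_load);

-- Now the bulk insert cannot violate the primary key.
INSERT INTO items SELECT * FROM items_load;
```

Wrapping the DELETE and INSERT in one transaction keeps other sessions from seeing the table with the old rows removed but the new ones not yet inserted.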