Michael Enke wrote:
> This works for small amounts of data, but for large amounts of data
> the join takes a lot of time.

It is certainly faster than any algorithm that checks for duplicates
for each line of COPY input could ever be. For joins especially, doing
them in one large batch allows postgres to use better algorithms than
looping over one table and searching for matching rows in the other -
which is exactly what COPY would need to do if it had a "replace on
duplicate" flag.
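
For illustration, here is a minimal sketch of that batch approach - load
into a staging table first, then resolve duplicates in one set-based pass.
The table names, the "id" key column, and the file path are all made up
for the example:

    BEGIN;
    -- bulk-load the new rows into a staging table first
    COPY staging FROM '/tmp/data.csv' WITH CSV;
    -- remove rows in the target that the new batch would replace,
    -- as a single batched join instead of a per-row lookup
    DELETE FROM target USING staging WHERE target.id = staging.id;
    -- then insert the whole batch
    INSERT INTO target SELECT * FROM staging;
    COMMIT;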

I think the fastest way to join two large tables would be a mergejoin.
Try doing an "explain select" (or "explain delete") to see what algorithm
postgres chooses. Check that you actually declared your primary key
in both tables - it might help postgres to know that the column you're
joining on is unique. Also check your work_mem setting - if it is set too
low, it often forces postgres to use inferior plans because it tries to
save memory.
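
Concretely, something along these lines (again with the hypothetical
"target"/"staging" tables and "id" column from above):

    -- see which join algorithm the planner picks for the delete
    EXPLAIN DELETE FROM target USING staging WHERE target.id = staging.id;

    -- declare the key so postgres knows the join column is unique
    ALTER TABLE staging ADD PRIMARY KEY (id);

    -- give the planner more memory for this session; on versions
    -- before 8.2 use a plain number in KB instead of '64MB'
    SET work_mem = '64MB';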

greetings, Florian Pflug

