On May 28, 2008, at 1:22 PM, Gregory Stark wrote:
> Tom Lane [EMAIL PROTECTED] writes:
>> Tomasz Rybak [EMAIL PROTECTED] writes:
>>> I tried to use COPY to import 27M rows to table:
>>> CREATE TABLE sputnik.ccc24 (
>>>     station CHARACTER(4) NOT NULL REFERENCES sputnik.station24 (id),
>>>     moment [...]
On Wed, 2008-05-28 at 22:45 +0100, Simon Riggs wrote:
> On Wed, 2008-05-28 at 16:28 -0400, Tom Lane wrote:
>> Gregory Stark [EMAIL PROTECTED] writes:
>>> Tom Lane [EMAIL PROTECTED] writes:
>>>> This is expected to take lots of memory because each row-requiring-check
>>>> generates an entry in the pending trigger event list.
[moving to -hackers]

Tom Lane [EMAIL PROTECTED] writes:
> Tomasz Rybak [EMAIL PROTECTED] writes:
>> I tried to use COPY to import 27M rows to table:
>> CREATE TABLE sputnik.ccc24 (
>>     station CHARACTER(4) NOT NULL REFERENCES sputnik.station24 (id),
>>     moment INTEGER NOT NULL,
>> [...]
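For readers hitting the same wall today: a common workaround for this kind of bulk load is to drop the foreign key before the COPY and re-create it afterwards, since adding the constraint validates the existing rows with one set-oriented query instead of queueing one trigger event per row. A sketch using the table names from the report — the constraint name and data path are assumptions, not from the original post:

```sql
BEGIN;
-- Assumed default constraint name; check \d sputnik.ccc24 for the real one.
ALTER TABLE sputnik.ccc24 DROP CONSTRAINT ccc24_station_fkey;

-- Bulk load with no per-row RI checks queued (placeholder path).
COPY sputnik.ccc24 FROM '/path/to/data';

-- Re-adding the FK validates all existing rows in a single pass.
ALTER TABLE sputnik.ccc24 ADD CONSTRAINT ccc24_station_fkey
    FOREIGN KEY (station) REFERENCES sputnik.station24 (id);
COMMIT;
```

Doing it inside one transaction keeps the table from being visible without its constraint, at the cost of holding a strong lock for the duration of the load.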
Gregory Stark [EMAIL PROTECTED] writes:
> Tom Lane [EMAIL PROTECTED] writes:
>> This is expected to take lots of memory because each row-requiring-check
>> generates an entry in the pending trigger event list.
> Hm, it occurs to me that we could still do a join against the pending
> event trigger [...]
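To make the idea concrete: the RI trigger fired for each inserted row boils down to an index probe of the referenced table, roughly `SELECT 1 FROM sputnik.station24 WHERE id = $1 FOR SHARE`. Joining against the pending event list would replace 27M such probes with one set-oriented query. A hedged sketch of what that check could look like expressed in SQL (this is an illustration, not the actual internal form; the loaded table stands in for the pending-event list):

```sql
-- One anti-join instead of one index probe per queued trigger event:
-- any row returned has no matching parent and violates the FK.
SELECT c.station
FROM sputnik.ccc24 c                    -- stand-in for the pending-event list
LEFT JOIN sputnik.station24 p ON p.id = c.station
WHERE p.id IS NULL;
```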
Simon Riggs [EMAIL PROTECTED] writes:
> AFAICS we must aggregate the trigger checks. We would need a special
> property of triggers that allowed them to be aggregated when two similar
> checks arrived. We can then use hash aggregation to accumulate them. We
> might conceivably need to spill to disk [...]
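Aggregation could pay off here because `station` is a CHARACTER(4) key: the 27M queued checks likely collapse to far fewer distinct values before any probing happens. In SQL terms, the accumulation step Simon describes is essentially a hash aggregate over the pending keys (again a sketch, not the proposed internals):

```sql
-- Accumulate similar checks under one hash key, then verify each
-- distinct key once; a hash aggregate like this can spill to disk.
SELECT station, count(*) AS pending_checks
FROM sputnik.ccc24
GROUP BY station;
```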
Gregory Stark [EMAIL PROTECTED] writes:
> Simon Riggs [EMAIL PROTECTED] writes:
>> We certainly need a TODO item for improving RI checks during bulk
>> operations.
> I have a feeling it's already there. Hm. There's a whole section on RI
> triggers, but the closest I see is this; neither of the links appear [...]
On Wed, 2008-05-28 at 18:17 -0400, Gregory Stark wrote:
> Simon Riggs [EMAIL PROTECTED] writes:
>> AFAICS we must aggregate the trigger checks. We would need a special
>> property of triggers that allowed them to be aggregated when two similar
>> checks arrived. We can then use hash aggregation to [...]