Thanks for the clarification
On Wed, Jun 13, 2018 at 9:32 AM, Adrian Klaver wrote:
> On 06/13/2018 06:21 AM, Alex O'Ree wrote:
>
>> Desired behavior is to just log the error and continue the import using
>> pg_dump-based COPY commands.
>>
>
> Each COPY is atomic, so if any part of it fails the whole thing fails. You
> will not be able to achieve what you want that way.
On 06/13/2018 06:21 AM, Alex O'Ree wrote:
Desired behavior is to just log the error and continue the import using
pg_dump-based COPY commands.
Each COPY is atomic, so if any part of it fails the whole thing fails. You
will not be able to achieve what you want that way.
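To illustrate what that means in practice, here is a small sketch with a
made-up table (names are placeholders, not from your dump):

    -- placeholder table standing in for one of the dumped tables
    CREATE TABLE widgets (id int PRIMARY KEY, payload text);
    INSERT INTO widgets VALUES (1, 'already on the central server');

    -- Suppose /tmp/serverA_widgets.copy holds rows with ids 1 and 2.
    -- The duplicate key on id 1 aborts the entire COPY, so id 2 is
    -- not loaded either:
    COPY widgets FROM '/tmp/serverA_widgets.copy';
    -- ERROR:  duplicate key value violates unique constraint "widgets_pkey"

    SELECT count(*) FROM widgets;   -- still 1 row

If skipping the duplicates is what you are after, one common workaround (it
is not something COPY can do by itself) is to COPY the file into a
constraint-free staging table and then merge it with INSERT ... SELECT ...
ON CONFLICT DO NOTHING, which is available since 9.5.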
Desired behavior is to just log the error and continue the import using
pg_dump-based COPY commands.
The servers are not on the same network. Sneaker net is the only way.
On Wed, Jun 13, 2018, 7:42 AM Andreas Kretschmer wrote:
>
>
> On 13.06.2018 at 13:17, Alex O'Ree wrote:
> > I have a situation with multiple postgres servers [...]
On 13.06.2018 at 13:17, Alex O'Ree wrote:
I have a situation with multiple postgres servers, all running the same
databases and table structure. I need to periodically export the data from
each of them and then merge it all into a single server. On occasion, it's
feasible for the same record to end up on more than one server.
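To make that concrete, the export/import cycle is roughly the following
(table and file names here are only placeholders):

    -- on each source server: dump the data out
    -- (in practice this comes from pg_dump --data-only, which emits COPY blocks)
    COPY widgets TO '/tmp/serverA_widgets.copy';

    -- files are carried over by hand, then replayed on the central server:
    COPY widgets FROM '/tmp/serverA_widgets.copy';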
Hi Alex,
For storing duplicate rows, dropping the primary key and unique indexes is
the only way.
One alternative is to add a timestamp column that is updated on every
insert/update, so that the timestamp can act as the primary key. Hope it helps.
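A rough sketch of both options, with hypothetical table and column names
(they would need to be adapted to the real schema):

    -- Option 1: allow duplicate rows by dropping the uniqueness guarantees
    ALTER TABLE widgets DROP CONSTRAINT widgets_pkey;
    DROP INDEX IF EXISTS widgets_code_key;

    -- Option 2: a timestamp column maintained on every insert/update
    ALTER TABLE widgets ADD COLUMN modified_at timestamptz NOT NULL DEFAULT now();

    CREATE OR REPLACE FUNCTION touch_modified_at() RETURNS trigger AS $$
    BEGIN
      NEW.modified_at := now();
      RETURN NEW;
    END;
    $$ LANGUAGE plpgsql;

    CREATE TRIGGER widgets_touch
      BEFORE INSERT OR UPDATE ON widgets
      FOR EACH ROW EXECUTE PROCEDURE touch_modified_at();

Note that the timestamp can only act as a primary key if no two rows ever
receive the same value.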
Regards,
Pavan
On Wed, Jun 13, 2018, 4:47 PM Alex O'Ree wrote:
> I have a situation with multiple postgres servers [...]