I have a question regarding blocking locks in the pg database. I ran into a
process which terminated abnormally, and to fully clear the locks it left
behind I had to reboot the system (probably restarting postmaster would have
had the same effect). This was a personal development system so this

Subject: Re: blocking locks
Date: Thu, 25 Aug 2005 12:08:11 -0400

"Kevin Keith" <[EMAIL PROTECTED]> writes:
> I have a question regarding blocking locks in the pg database. I ran into a
> process which terminated abnormally, and to fully clear the locks it left
> behind I had to reboot
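
For what it's worth, the locks involved can at least be inspected from the
system views; a generic query along these lines (not taken from the original
thread) shows what a given backend holds and what is still waiting:

    -- locks still recorded for a particular backend pid (example pid)
    SELECT * FROM pg_locks WHERE pid = 12345;
    -- and any lock requests that are waiting rather than granted
    SELECT * FROM pg_locks WHERE NOT granted;
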
I am coming from an Oracle background, where for bulk data loads I had several
options for disabling writes to the redo log to speed up the load (i.e. a
direct-path load, setting the session to no archive logging, or setting the
affected tables to no logging).
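
For illustration, those Oracle-side options look roughly like this (table
names are made up, and this is only a sketch of the Oracle commands, not
anything PostgreSQL-specific):

    -- stop writing redo for this table during the load
    ALTER TABLE bulk_target NOLOGGING;
    -- direct-path insert that bypasses the conventional insert path
    INSERT /*+ APPEND */ INTO bulk_target SELECT * FROM staging_src;
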
I am having some problems with the COPY ... TO file command, and I am
wondering if anyone has experienced similar problems in the past and what you
may have done to resolve them.
The platform is FreeBSD, the Postgres version is 7.4.5, and the program
triggering the COPY command is a CGI script.
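
The command in question looks roughly like this (table name and output path
are placeholders, not the real ones):

    -- dump a table to a server-side text file; the file is written by the
    -- backend process, so the path must be writable by the server user
    COPY report_data TO '/tmp/report_data.txt' WITH DELIMITER '|';
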
From: Michael Fuhr <[EMAIL PROTECTED]>
To: Kevin Keith <[EMAIL PROTECTED]>
CC: pgsql-admin@postgresql.org
Subject: Re: [ADMIN] Copy command not writing complete data to text file
Date: Thu, 22 Sep 2005 08:52:36 -0600
On Thu, Sep 22, 2005 at 08:27:00AM -0500, Kevin Keith wrote:
> The platform is FreeBSD, the Postgres version is 7.4.5, and the program
> triggering the COPY command is a CGI script.
If I have followed the chain correctly, I saw that you were trying to run an
update statement on a large number of records in a large table, right? I have
changed my strategy in the past for this type of problem. I don't know if it
would have fixed this problem or not, but I have seen with Postgres
I was trying to run a bulk data load using the COPY command on PostgreSQL
8.1.0. After loading about 3,500,000 records it ran out of memory - I am
assuming because it ran out of space to store such a large transaction.
Does the COPY command offer a feature similar to Oracle's SQL*Loader, where
you can commit after every N rows during the load?
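
For illustration, one obvious workaround is to split the input file beforehand
and issue one COPY per piece, so that each chunk commits on its own (table and
file names are made up):

    -- each COPY issued separately runs in its own transaction, so a
    -- failure or memory problem only affects the current chunk
    COPY big_table FROM '/data/load_part_01.dat' WITH DELIMITER '|';
    COPY big_table FROM '/data/load_part_02.dat' WITH DELIMITER '|';
    -- ... and so on for the remaining pieces
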
I was wondering if anything has been implemented, or is in the works for a
future version, where - given a setting / flag / maximum number of errors -
the COPY command would not fail on an error and would continue loading data.
It would then put the data that did not load due to errors in another
location.
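
In the meantime, a rough sketch of one workaround (all table and column names
below are made up): COPY into an all-text staging table first, then move the
rows over one at a time, trapping failures into a reject table instead of
aborting the whole load.

    -- staging table of text columns, so the raw file always COPYs in cleanly
    CREATE TABLE stage_orders (id_txt text, amount_txt text);
    -- rows that fail conversion or constraints land here
    CREATE TABLE reject_orders (id_txt text, amount_txt text);

    CREATE OR REPLACE FUNCTION move_orders() RETURNS void AS $$
    DECLARE
        r record;
    BEGIN
        FOR r IN SELECT * FROM stage_orders LOOP
            BEGIN
                -- assumes a real target table orders(id integer, amount numeric)
                INSERT INTO orders (id, amount)
                VALUES (r.id_txt::integer, r.amount_txt::numeric);
            EXCEPTION WHEN OTHERS THEN
                INSERT INTO reject_orders VALUES (r.id_txt, r.amount_txt);
            END;
        END LOOP;
    END;
    $$ LANGUAGE plpgsql;

Row-by-row inserts are of course far slower than a straight COPY, so this only
makes sense when the error handling is worth the cost.
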
I have had to bump the stats on a partitioned table in order to get the
planner to use an index over a seqscan. This has worked well in making the
system perform where it needs to, as it reduced one query's execution time
from > 45 seconds to < 1 second.
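
For reference, the change in question is along these lines (column name and
statistics target are just examples):

    -- raise the per-column statistics target, then re-analyze so the
    -- planner sees the more detailed histogram
    ALTER TABLE child_2005_09 ALTER COLUMN event_date SET STATISTICS 1000;
    ANALYZE child_2005_09;
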
The one problem I have run into is that when