Russ Garrett <[EMAIL PROTECTED]> writes:
> VACUUM *will* respond to a SIGTERM, but it doesn't check very often -
> I've often had to wait hours for it to determine that it's been killed,
> and my tables aren't anywhere near 1TB. Maybe this is a place where
> things could be improved...
Hmm, the
In my experience a kill -9 has never resulted in any data loss in this
situation (it will cause postgres to detect that the process died, shut
down, then recover), and most of the time it only causes a 5-10sec
outage. I'd definitely hesitate to recommend it in a production context
though, espec
On Thu, 2005-12-29 at 22:53 +0000, Russ Garrett wrote:
> In my experience a kill -9 has never resulted in any data loss in this
> situation (it will cause postgres to detect that the process died, shut
> down, then recover), and most of the time it only causes a 5-10sec
> outage. I'd definitely
Ick. Can you get users and foreign connections off that machine,
lock them out for some period, and renice the VACUUM?
Shedding load and keeping it off while VACUUM runs high priority
might allow it to finish in a reasonable amount of time.
Or
Shedding load and dropping the VACUUM priority mi
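For either variant, the first step is finding the PID of the backend running the VACUUM. A rough sketch, using the 7.4-era pg_stat_activity column names (procpid and current_query; the latter is only populated when stats_command_string is on). The PID in the comments is made up:

  SELECT procpid, usename, current_query, query_start
  FROM pg_stat_activity
  WHERE current_query LIKE '%VACUUM%';

  -- With the PID in hand (12345 here is hypothetical), the OS-level options are:
  --   kill -TERM 12345      ask that backend to exit (it may take a while to notice)
  --   renice 19 -p 12345    drop its priority, or a negative value (as root) to raise it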
A few WEEKS ago, the autovacuum on my instance of pg 7.4 unilaterally
decided to VACUUM a table which has not been updated in over a year and
is more than one terabyte on the disk. Because of the very high
transaction load on this database, this VACUUM has been ruining
performance for almost a month.
I have an instance of PG 7.4 where I would really like to execute some
schema changes, but every schema change is blocked waiting for a process
doing a COPY. That query is:
COPY drill.trades (manager, sec_id, ticker, bridge_tkr, date, "type",
short, quantity, price, prin, net_money, factor) TO stdout
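To see exactly which backends hold or are waiting for locks on that table while the COPY runs, something along these lines should work (pg_locks columns as in 7.4; 'trades' is taken from the query above):

  SELECT l.pid, l.mode, l.granted
  FROM pg_locks l
  JOIN pg_class c ON c.oid = l.relation
  WHERE c.relname = 'trades';

  -- Rows with granted = true show who currently holds locks (the COPY will
  -- hold an AccessShareLock); rows with granted = false are the waiters,
  -- e.g. the blocked ALTER TABLE waiting for its AccessExclusiveLock.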
On Thursday 29 December 2005 17:19, Arnau wrote:
> > - Use plpgsql function to do the actual insert (or update/insert if
> > needed).
> >
> > - Inside a transaction, execute SELECT statements with maximum
> > possible number of insert function calls in one go. This minimizes
> > the number of round trips.
I am doing twice as big imports daily, and found the following method
most efficient (other than using COPY):
- Use plpgsql function to do the actual insert (or update/insert if
needed).
- Inside a transaction, execute SELECT statements with maximum possible
number of insert function calls in one go. This minimizes the number of
round trips (see the sketch below).
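A minimal sketch of that approach (table, column, and function names are invented, and plpgsql is assumed to already be installed in the database; the function body uses plain single quotes since dollar quoting did not exist yet in 7.4):

  CREATE OR REPLACE FUNCTION upsert_item(integer, text) RETURNS void AS '
  BEGIN
      UPDATE items SET val = $2 WHERE id = $1;
      IF NOT FOUND THEN
          INSERT INTO items (id, val) VALUES ($1, $2);
      END IF;
      RETURN;
  END;
  ' LANGUAGE plpgsql;

  BEGIN;
  -- one round trip carries many insert calls
  SELECT upsert_item(1, 'first'), upsert_item(2, 'second'), upsert_item(3, 'third');
  -- ... repeat with as many calls per SELECT, and as many SELECTs per
  -- transaction, as practical ...
  COMMIT;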
On Thursday 29 December 2005 10:48, Arnau wrote:
> Which is the best way to import data into tables? I have to import
> 90,000 rows into a table, and doing it as inserts takes ages. Would it
> be faster with COPY? Is there any other alternative to insert/copy?
I am doing twice as big imports daily, and found the following method
most efficient (other than using COPY).
At 04:48 AM 12/29/2005, Arnau wrote:
Hi all,
Which is the best way to import data into tables? I have to import
90,000 rows into a table, and doing it as inserts takes ages. Would it
be faster with COPY? Is there any other alternative to insert/copy?
Compared to some imports, 90K rows is not that many.
On Thu, 29 Dec 2005, Arnau wrote:
> Which is the best way to import data into tables? I have to import
> 90,000 rows into a table, and doing it as inserts takes ages. Would it
> be faster with COPY? Is there any other alternative to insert/copy?
Wrap the inserts inside a BEGIN/COMMIT block and it will be much faster.
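In other words, instead of one implicit transaction (and one commit) per row, the whole batch commits once (hypothetical table):

  -- slow: every INSERT is its own transaction, so every row waits for a commit
  INSERT INTO mytable VALUES (1, 'a');
  INSERT INTO mytable VALUES (2, 'b');

  -- much faster: one transaction, one commit for the whole batch
  BEGIN;
  INSERT INTO mytable VALUES (1, 'a');
  INSERT INTO mytable VALUES (2, 'b');
  -- ... thousands more ...
  COMMIT;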
On Thu, Dec 29, 2005 at 10:48:26AM +0100, Arnau wrote:
> Which is the best way to import data into tables? I have to import
> 90,000 rows into a table, and doing it as inserts takes ages. Would it
> be faster with COPY? Is there any other alternative to insert/copy?
There are multiple reasons why yo
Hi all,
Which is the best way to import data into tables? I have to import
90,000 rows into a table, and doing it as inserts takes ages. Would it
be faster with COPY? Is there any other alternative to insert/copy?
Cheers!
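For reference, the COPY route asked about above would look roughly like this (file path, table name, and delimiter are hypothetical); it is normally the fastest option because all rows are loaded in a single statement:

  -- COPY reads the file on the database server, so it must be readable by the
  -- postgres user; psql's \copy does the same thing from the client side.
  COPY mytable FROM '/tmp/data.txt' WITH DELIMITER ',';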