Hi Scott -
But when the primary is vacuumed, re-indexed, and all clean and shiny
again, HOW do I catch up with the data changes that happened in the meantime on
my warm standby without corruption or 30 minutes of user downtime? I
cannot re-ingest WALs from the warm standby into the
On Tue, Jul 7, 2009 at 5:12 AM, Jennifer Spencer jenniferm...@hotmail.com wrote:
If you've moved on, so to speak, with the new primary, you restart the
old primary, now the warm standby, the same way you initially created the
warm standby: issue the start-hot-backup command to the primary,
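For anyone following along, the recipe being described might look roughly like this; the hostname, data-directory paths, and backup label below are placeholders, not details from the thread:

```shell
# Sketch only: "newprimary" and the paths are made-up placeholders.
# 1. Tell the (new) primary a base backup is starting (8.3 syntax).
psql -h newprimary -U postgres -c "SELECT pg_start_backup('resync');"

# 2. Copy the primary's data directory onto the old primary / new standby.
rsync -a --delete --exclude=pg_xlog/ \
    newprimary:/var/lib/pgsql/data/ /var/lib/pgsql/data/

# 3. End the backup so the primary can recycle WAL normally.
psql -h newprimary -U postgres -c "SELECT pg_stop_backup();"

# 4. Drop a recovery.conf (with a restore_command pointing at the WAL
#    archive) into the new standby's data directory, then start it.
```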
Using Hibernate/JPA with tables partitioned by triggers and inheritance doesn't
work, since the reported result is 0 affected rows. In truth, PostgreSQL did insert the rows.
Is there a solution which doesn't involve hacking Hibernate/JPA?
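The 0-rows symptom usually comes from the partitioning trigger itself. A sketch of the standard 8.3-style setup (table and function names here are made up) shows why: because the BEFORE trigger returns NULL, the parent table reports zero affected rows, and Hibernate's row-count check then treats the insert as failed.

```sql
-- Hypothetical names: "log" parent table, "log_2009_07" child partition.
CREATE OR REPLACE FUNCTION log_insert_trigger() RETURNS trigger AS $$
BEGIN
    INSERT INTO log_2009_07 VALUES (NEW.*);
    RETURN NULL;  -- row is diverted to the child, so the parent
                  -- INSERT reports "0 rows affected"
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER insert_log BEFORE INSERT ON log
    FOR EACH ROW EXECUTE PROCEDURE log_insert_trigger();
```

Short of patching Hibernate, commonly suggested workarounds in this era were to insert into the child tables directly from the application, or to use an unconditional DO INSTEAD rule rather than a trigger, since a rule's substituted statement reports a nonzero rowcount.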
--
Sent via pgsql-admin mailing list
Jennifer Spencer jenniferm...@hotmail.com wrote:
I think that we won't be able to do vacuuming/reindexing with the
machine online and serving users if the database is over a certain
size. Am I wrong?
Probably. My first concern is to make sure you aren't doing VACUUM
FULL as part of your maintenance cycle. That option is meant for
recovery from extreme bloat, and is sort of a last resort.
Good - glad to be wrong about that! We do mostly inserts, no updates and very
On Tue, Jul 7, 2009 at 10:42 AM, Jennifer Spencer jenniferm...@hotmail.com wrote:
We are using v. 8.3.1 at present.
You should really update to the latest 8.3.x version. It's simple and fast.
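For context, a minor-version update (8.3.1 to the latest 8.3.x) swaps binaries only; the on-disk format is unchanged, so no dump/reload is involved. A rough sketch, where the data directory path and install method are assumptions:

```shell
# Placeholder path; use your actual PGDATA and package manager.
pg_ctl -D /var/lib/pgsql/data stop -m fast
# ... install the new 8.3.x packages here ...
pg_ctl -D /var/lib/pgsql/data start
```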
We anticipate a terabyte of data each
year starting in November, and I am concerned about what
On Tue, Jul 7, 2009 at 11:10 AM, Scott Whitney swhit...@journyx.com wrote:
I'd like to phone in with a slightly different opinion on VACUUM FULL. Yeah,
it should be avoided when possible, but it's not always possible. In our
case, I've got 300ish databases backing to a single database server. Each of
those dbs has a couple of hundred tables and a hundred or more views.
Scott Whitney wrote:
I'd like to phone in with a slightly different opinion on VACUUM FULL. Yeah,
it should be avoided when possible, but it's not always possible. In our
case, I've got 300ish databases backing to a single database server. Each of
those dbs has a couple of hundred tables and a hundred or more views.
That's most likely because you have too small an FSM. Have you tuned that?
My settings are:
max_fsm_pages = 150        # min max_fsm_relations*16, 6 bytes each
max_fsm_relations = 7      # min 100, ~70 bytes each
It's quite possible that those settings are too low,
You are sure that the XID wraparound issue is gone? That's good news. Then
there is no other reason for a vacuum full on the entire database.
We could do it a table at a time if we absolutely have to do it, and that would
minimize down time on the rest of the system.
-Jennifer
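Table-at-a-time maintenance, as suggested above, confines the exclusive lock to one table at a time. A sketch with a made-up table name (VACUUM FULL in 8.3 tends to bloat indexes, so a REINDEX afterwards is usual):

```sql
-- "samples" is a hypothetical table name.
VACUUM FULL VERBOSE samples;   -- AccessExclusiveLock on "samples" only
REINDEX TABLE samples;         -- rebuild indexes bloated by the vacuum full
```

Where an index exists to order by, CLUSTER samples USING some_index (8.3 syntax) is often a faster route to the same compaction, since it rewrites the table and its indexes in one pass.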
I _think_ autovacuum,
Scott Whitney wrote:
max_fsm_pages = 150        # min max_fsm_relations*16, 6 bytes each
max_fsm_relations = 7      # min 100, ~70 bytes each
It's quite possible that those settings are too low, but I've honestly no
idea how to estimate these other than trial and error.
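There is a less trial-and-error way to check: in 8.3, a database-wide VACUUM VERBOSE ends with a summary comparing the configured FSM size against what is actually needed. A sketch (the quoted output wording is approximate, from memory):

```shell
# Run as a superuser so every relation is visited; the final lines of
# output summarize FSM usage, along the lines of:
#   free space map contains ... pages in ... relations
#   ... page slots are required to track all free space
vacuumdb --all --verbose 2>&1 | tail -n 6
```

If the "required" number exceeds max_fsm_pages, raise the setting (a postmaster restart is needed for it to take effect).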
On Tue, Jul 7, 2009 at 1:40 PM, Jennifer Spencer jenniferm...@hotmail.com wrote:
You are sure that the XID wraparound is gone? That's good news. No other
reasons for vacuum full on the entire database.
I think we're talking apples and gorillas on the use of the word 'full'.
There is
On Tue, Jul 7, 2009 at 1:46 PM, Lewis Kapell lkap...@setonhome.org wrote:
If you were planning to upgrade to 8.4 in the near future, then you
wouldn't have to worry about these settings any more. Those settings tune
themselves and they are no longer specified in the configuration file.
Jennifer Spencer jenniferm...@hotmail.com wrote:
We do mostly inserts, no updates and very few deletes. We drop
entire tables but don't delete often. We have very long rows,
though. Do you think the above is a situation likely to create
extreme bloat?
No. Only deletes and updates can create that kind of bloat.
Any help here appreciated.
I ran a VACUUM VERBOSE ANALYZE on a database over the weekend. It ran fine
until it tried to vacuum a table of less than 2,000 pages. It successfully
acquired a ShareUpdateExclusiveLock, as I would expect.
There was an idle thread that had an AccessShareLock on the same
On Tue, Jul 7, 2009 at 11:01 AM, Jennifer Spencer jenniferm...@hotmail.com wrote:
Am I wrong?
Probably. My first concern is to make sure you aren't doing VACUUM
FULL as part of your maintenance cycle. That option is meant for
recovery from extreme bloat, and is sort of a last resort.
Good -
On Tue, Jul 7, 2009 at 10:42 AM, Jennifer Spencer jenniferm...@hotmail.com wrote:
We are using v. 8.3.1 at present. We anticipate a terabyte of data each
year starting in November, and I am concerned about what happens
maintenance-wise a couple of years down the line. I think that we won't be
2009/7/7 Mark Steben mste...@autorevenue.com:
Any help here appreciated.
I ran a vacuum verbose analyze on a database over the weekend. It ran fine
until it tried to vacuum a table less than 2000 pages. It successfully
acquired a ShareUpdateExclusiveLock as I would expect.
There was an
Scott Marlowe scott.marl...@gmail.com writes:
2009/7/7 Mark Steben mste...@autorevenue.com:
I ran a vacuum verbose analyze on a database over the weekend. It ran fine
until it tried to vacuum a table less than 2000 pages. It successfully
acquired a ShareUpdateExclusiveLock as I would expect.
On Tue, Jul 7, 2009 at 3:40 PM, Tom Lane t...@sss.pgh.pa.us wrote:
Scott Marlowe scott.marl...@gmail.com writes:
2009/7/7 Mark Steben mste...@autorevenue.com:
I ran a vacuum verbose analyze on a database over the weekend. It ran fine
until it tried to vacuum a table less than 2000 pages. It
Scott Marlowe scott.marl...@gmail.com writes:
On Tue, Jul 7, 2009 at 3:40 PM, Tom Lane t...@sss.pgh.pa.us wrote:
The described situation is impossible: AccessShareLock doesn't block
ShareUpdateExclusiveLock. There must have been some other lock or
attempted lock involved (perhaps at a page or
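When a vacuum does appear stuck behind something, the pg_locks view shows which lock is actually involved. A sketch, with a hypothetical table name:

```sql
-- Find who holds (or is waiting for) locks on a given relation.
SELECT pid, locktype, mode, granted, relation::regclass AS rel
FROM pg_locks
WHERE relation = 'mytable'::regclass
ORDER BY granted DESC;
```

Ungranted rows (granted = false) are the waiters; the granted rows on the same relation identify the blocker's backend pid.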
Hi - Some answers, questions and comments below
At my last job I had a head dev guy whose experience with pgsql was back in
the 7.0 days or so, and his standard phrase was "vacuum in postgresql isn't
fast enough."
Yeah, I don't want to turn into that guy.
So, what class machines are
On Tue, Jul 7, 2009 at 11:11 PM, Tom Lane t...@sss.pgh.pa.us wrote:
Scott Marlowe scott.marl...@gmail.com writes:
On Tue, Jul 7, 2009 at 3:40 PM, Tom Lane t...@sss.pgh.pa.us wrote:
The described situation is impossible: AccessShareLock doesn't block
ShareUpdateExclusiveLock. There must have
On Tue, Jul 7, 2009 at 4:58 PM, Jennifer Spencer jenniferm...@hotmail.com wrote:
Hi - Some answers, questions and comments below
So, what class machines are these, and specifically how much ram, what
kind of RAID controllers, and how many hard drives are you throwing at the
problem?
We have