Hi Scott -
> > But. When the primary is vacuumed, re-indexed and all clean and shiny
> > again, HOW do I catch up with data changes that happened in the meantime on
> > my warm standby without corruption or >30 minutes of user down-time? I
> > cannot re-ingest WALs from the warm standby into th
On Tue, Jul 7, 2009 at 5:12 AM, Jennifer Spencer
wrote:
>
> >
> > If you've moved on, so to speak, with the new primary, you restart the
> > old primary, now warm standby, the same way you initially created the
> > warm standby. Issue the start hot backup command to the primary, copy
> > over all
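The re-seeding procedure quoted above can be sketched as follows. This is a hedged outline against the 8.3-era backup API; the backup label, the copy step, and the recovery details are illustrative assumptions, not anything specified in the thread.

```sql
-- Sketch: turning the old primary back into a warm standby (8.3-era API).
-- Label and non-SQL steps (shown as comments) are illustrative.

-- 1. On the current primary, mark the start of a base backup:
SELECT pg_start_backup('reseed_standby');

-- 2. Outside SQL: copy the primary's data directory to the standby host
--    (e.g. with rsync), excluding pg_xlog.

-- 3. Mark the end of the backup; PostgreSQL archives the WAL needed to
--    make that copy consistent:
SELECT pg_stop_backup();

-- 4. On the standby host: add a recovery.conf whose restore_command pulls
--    from the WAL archive, then start the server in continuous recovery.
```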
Using Hibernate/JPA with tables partitioned by triggers and inheritance doesn't
work: the command reports 0 affected rows, even though PostgreSQL did in fact
insert the rows. Is there a solution that doesn't involve hacking Hibernate/JPA?
--
Sent via pgsql-admin mailing list (pgsql-admin@postgresql.org)
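For context, here is a minimal sketch of the trigger-based partitioning pattern that produces this behavior; the table and function names are made up for illustration. A BEFORE INSERT trigger that redirects the row into a child table and returns NULL makes the original INSERT report a row count of 0, which the ORM treats as a failed write.

```sql
-- Illustrative names; not from the original message.
CREATE TABLE measurements (logdate date NOT NULL, value int);
CREATE TABLE measurements_2009
    (CHECK (logdate >= DATE '2009-01-01' AND logdate < DATE '2010-01-01'))
    INHERITS (measurements);

CREATE FUNCTION measurements_insert() RETURNS trigger AS $$
BEGIN
    INSERT INTO measurements_2009 VALUES (NEW.*);
    RETURN NULL;  -- suppresses the insert into the parent table, so the
                  -- command tag is "INSERT 0 0" despite the row being stored
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER measurements_insert_trg
    BEFORE INSERT ON measurements
    FOR EACH ROW EXECUTE PROCEDURE measurements_insert();
```

Hibernate checks the affected-row count and raises a StaleStateException when it is 0, which is why the ORM reports a failure even though the row landed in the child table.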
Jennifer Spencer wrote:
> I think that we won't be able to do vacuuming/reindexing with the
> machine online and serving users if the database is over a certain
> size. Am I wrong?
Probably. My first concern is to make sure you aren't doing VACUUM
FULL as part of your maintenance cycle. That option is meant for
recovery from extreme bloat, and is sort of a "last resort".
>> Am I wrong?
>
> Probably. My first concern is to make sure you aren't doing VACUUM
> FULL as part of your maintenance cycle. That option is meant for
> recovery from extreme bloat, and is sort of a "last resort".
Good - glad to be wrong about that! We do mostly inserts, no updates and very few deletes.
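The routine alternative to VACUUM FULL discussed above looks like this in practice; the table name is illustrative:

```sql
-- Routine maintenance: returns dead space to the free space map and
-- refreshes planner statistics, without long exclusive locks.
VACUUM ANALYZE my_big_table;

-- Last resort only: compacts the table under an ACCESS EXCLUSIVE lock,
-- blocking all access, and (pre-9.0) tends to bloat indexes, so a
-- REINDEX often has to follow.
-- VACUUM FULL my_big_table;
-- REINDEX TABLE my_big_table;
```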
We are using v. 8.3.1 at present. We anticipate a terabyte of data each year
starting in November, and I am concerned about what happens maintenance-wise a
couple of years down the line. I think that we won't be able to do
vacuuming/reindexing with the machine online and serving users if the
database is over a certain size. Am I wrong?
On Tue, Jul 7, 2009 at 10:42 AM, Jennifer
Spencer wrote:
> We are using v. 8.3.1 at present.
You should really update to the latest 8.3.x version. It's simple and fast.
> We anticipate a terabyte of data each
> year starting in November, and I am concerned about what happens
> maintenance-wise a couple of years down the line.
On Tue, Jul 7, 2009 at 11:10 AM, Scott Whitney wrote:
> I'd like to phone in with a slightly different opinion on VACUUM FULL. Yeah,
> it should be avoided when possible, but it's not always possible. In our
> case, I've got 300ish databases backing to a single database server. Each of
> those dbs has a couple of hundred tables and a hundred or more views.
I'd like to phone in with a slightly different opinion on VACUUM FULL. Yeah,
it should be avoided when possible, but it's not always possible. In our
case, I've got 300ish databases backing to a single database server. Each of
those dbs has a couple of hundred tables and a hundred or more views. Th
Scott Whitney wrote:
> I'd like to phone in with a slightly different opinion on VACUUM FULL. Yeah,
> it should be avoided when possible, but it's not always possible. In our
> case, I've got 300ish databases backing to a single database server. Each of
> those dbs has a couple of hundred tables and a hundred or more views.
>> I'd like to phone in with a slightly different opinion on VACUUM FULL. Yeah,
>> it should be avoided when possible, but it's not always possible. In our
>> case, I've got 300ish databases backing to a single database server. Each of
>> those dbs has a couple of hundred tables and a hundred or more views.
>That's most likely because you have too small an FSM. Have you tuned
>that?
My settings are:
max_fsm_pages = 150          # min max_fsm_relations*16, 6 bytes each
max_fsm_relations = 7        # min 100, ~70 bytes each
It's quite possible that those settings are too low, but I've honestly no
idea how to estimate these other than trial and error.
You are sure that the XID wraparound is gone? That's good news. No other
reasons for vacuum full on the entire database.
We could do it a table at a time if we absolutely have to do it, and that would
minimize down time on the rest of the system.
-Jennifer
> > I _think_ autovacuum, some
Scott Whitney wrote:
max_fsm_pages = 150          # min max_fsm_relations*16, 6 bytes each
max_fsm_relations = 7        # min 100, ~70 bytes each
It's quite possible that those settings are too low, but I've honestly no
idea how to estimate these other than trial and error.
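One way to estimate those settings without trial and error, assuming a pre-8.4 release like the 8.3.x discussed in this thread, is to let VACUUM itself report what the free space map needs:

```sql
-- Run a database-wide verbose vacuum; through 8.3 the output ends with a
-- summary such as (numbers illustrative):
--   INFO:  free space map contains N pages in M relations
--   DETAIL: A total of X page slots are in use (including overhead).
--           Y page slots are required to track all free space.
VACUUM VERBOSE;
-- Then set max_fsm_pages comfortably above Y and max_fsm_relations above
-- M in postgresql.conf and restart (both settings require a restart).
```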
On Tue, Jul 7, 2009 at 1:40 PM, Jennifer Spencer
wrote:
> You are sure that the XID wraparound is gone? That's good news. No other
> reasons for vacuum full on the entire database.
>
I think we're talking apples and gorillas on the use of the word 'full'.
There is a command:
VACUUM FULL
On Tue, Jul 7, 2009 at 1:46 PM, Lewis Kapell wrote:
>
>>
> If you were planning to upgrade to 8.4 in the near future, then you
> wouldn't have to worry about these settings any more. Those settings tune
> themselves and they are no longer specified in the configuration file.
+1
The visibil
Jennifer Spencer wrote:
> We do mostly inserts, no updates and very few deletes. We drop
> entire tables but don't delete often. We have very long rows,
> though. Do you think the above is a situation likely to create
> extreme bloat?
No. Only deletes and updates can cause dead rows, and i
Any help here appreciated.
I ran a vacuum verbose analyze on a database over the weekend. It ran fine
until it tried to vacuum a table of less than 2000 pages. It successfully
acquired a ShareUpdateExclusiveLock, as I would expect.
There was an idle thread that had an AccessShareLock on the same table.
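When a VACUUM appears to be blocked, one way to see who holds which locks on the table is to query pg_locks. The sketch below uses the 8.3-era catalog column names (procpid, current_query), and the table name is illustrative:

```sql
SELECT l.pid, l.mode, l.granted, a.current_query
FROM pg_locks l
JOIN pg_stat_activity a ON a.procpid = l.pid
WHERE l.relation = 'my_table'::regclass;
-- Rows with granted = false are waiters; compare their modes against the
-- lock conflict table to see which granted lock is blocking them.
```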
On Tue, Jul 7, 2009 at 11:01 AM, Jennifer
Spencer wrote:
>>> Am I wrong?
>>
>> Probably. My first concern is to make sure you aren't doing VACUUM
>> FULL as part of your maintenance cycle. That option is meant for
>> recovery from extreme bloat, and is sort of a "last resort".
>
> Good - glad to be
On Tue, Jul 7, 2009 at 10:42 AM, Jennifer
Spencer wrote:
> We are using v. 8.3.1 at present. We anticipate a terabyte of data each
> year starting in November, and I am concerned about what happens
> maintenance-wise a couple of years down the line. I think that we won't be
> able to do vacuuming
2009/7/7 Mark Steben :
> Any help here appreciated.
>
> I ran a vacuum verbose analyze on a database over the weekend. It ran fine
> until it tried to vacuum a table less than 2000 pages. It successfully
> acquired a ShareUpdateExclusiveLock as I would expect.
> There was an idle thread that had
Scott Marlowe writes:
> 2009/7/7 Mark Steben :
>> I ran a vacuum verbose analyze on a database over the weekend. It ran fine
>> until it tried to vacuum a table less than 2000 pages. It successfully
>> acquired a ShareUpdateExclusiveLock as I would expect.
>> There was an idle thread that had an
On Tue, Jul 7, 2009 at 3:40 PM, Tom Lane wrote:
> Scott Marlowe writes:
>> 2009/7/7 Mark Steben :
>>> I ran a vacuum verbose analyze on a database over the weekend. It ran fine
>>> until it tried to vacuum a table less than 2000 pages. It successfully
>>> acquired a ShareUpdateExclusiveLock as I
Scott Marlowe writes:
> On Tue, Jul 7, 2009 at 3:40 PM, Tom Lane wrote:
>> The described situation is impossible: AccessShareLock doesn't block
>> ShareUpdateExclusiveLock. There must have been some other lock or
>> attempted lock involved (perhaps at a page or tuple level rather than
>> the whole relation).
Hi - Some answers, questions and comments below
> At my last job I had a head dev guy whose experience with pgsql was back in
> the 7.0 days or so, and his standard phrase was "vacuum in postgresql isn't
> fast enough."
Yeah, I don't want to turn into that guy.
> So, what class machines are these, and specifically how much ram, what
> kind of RAID controllers, and how many hard drives are you throwing at
> the problem?
On Tue, Jul 7, 2009 at 11:11 PM, Tom Lane wrote:
> Scott Marlowe writes:
>> On Tue, Jul 7, 2009 at 3:40 PM, Tom Lane wrote:
>>> The described situation is impossible: AccessShareLock doesn't block
>>> ShareUpdateExclusiveLock. There must have been some other lock or
>>> attempted lock involved (perhaps at a page or tuple level rather than
>>> the whole relation).
On Tue, Jul 7, 2009 at 4:58 PM, Jennifer
Spencer wrote:
> Hi - Some answers, questions and comments below
>
>> So, what class machines are these, and specifically how much ram, what
>> kind of RAID controllers, and how many hard drives are you throwing at the
>> problem?
> We have two identical en