You can use the psql command line to run:
select pg_start_backup('mylabel');   -- the label argument is required; any descriptive text works
...then when you're done,
select pg_stop_backup();
If you want an example from the Unix command line:
psql -c "select pg_start_backup('mylabel');" database_name
then
psql -c "select pg_stop_backup();" database_name
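For what it's worth, a rough end-to-end sketch with the filesystem copy in
between (the tar path and the 'nightly' label are placeholders of mine, and
WAL archiving via archive_command has to be set up for the result to be a
usable base backup):

psql -c "select pg_start_backup('nightly');" database_name
tar czf /backups/pgdata-backup.tar.gz $PGDATA     # copy the data directory while the backup is "open"
psql -c "select pg_stop_backup();" database_name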
/kurt
On Jun 22,
on
this matter so far. It helps to not feel so alone when dealing
with difficult issues (for me anyway) on a system I don't know so
much about.
Thanks guys,
/kurt
On Jun 19, 2007, at 10:51 PM, Tom Lane wrote:
Kurt Overberg [EMAIL PROTECTED] writes:
Okay, I've grabbed pg_filedump and got
that
stands out to me is
the XMAX_INVALID mask. Thoughts?
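(For anyone who wants to poke at the same thing, roughly how one can aim
pg_filedump at the index file; the -i "interpreted item details" switch and
the $PGDATA/base/<database oid>/<relfilenode> path layout are assumptions on
my part, the relation name is from above:

select oid from pg_database where datname = current_database();
select relfilenode from pg_class where relname = 'sl_log_1_idx1';
pg_filedump -i $PGDATA/base/<database oid>/<relfilenode>

As far as I know XMAX_INVALID is just the hint bit saying the tuple's xmax
isn't a committed deletion, so seeing it on many items isn't corruption by
itself.)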
Thanks,
/kurt
On Jun 20, 2007, at 11:22 AM, Tom Lane wrote:
Kurt Overberg [EMAIL PROTECTED] writes:
Okay, so the sl_log_1 TABLE looks okay. It's the indexes that seem to
be messed up, specifically sl_log_1_idx1 seems to think
Drat! I'm wrong again. I thought for sure there wouldn't be a
wraparound problem.
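(Side note: the usual way to see how much wraparound headroom each database
has on 8.0/8.1 is the catalog query below; it's a standard check, not
something taken from this thread:

select datname, age(datfrozenxid) from pg_database order by 2 desc;

Anything approaching two billion is in trouble.)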
So does this affect the entire database server, or just this table?
Is the best way to
proceed to immediately ditch this db and promote one of my slaves to
a master? I'm just
concerned about the data integrity.
no problems keeping it
around if you think I may have found some obscure bug that could help
someone debug. Again, this
DB gets vacuumed every day, and in the beginning, I think I remember
doing a vacuum full every
day.
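(A daily vacuum like that is usually just a cron'd vacuumdb; the schedule
and database name below are made up for illustration, not taken from this
setup:

30 2 * * * postgres vacuumdb --analyze --verbose mydb >> /var/log/pg_vacuum.log 2>&1 )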
Thanks,
/kurt
On Jun 20, 2007, at 5:08 PM, Tom Lane wrote:
Kurt
Gang,
Hoping you all can help me with a rather bizarre issue that I've run
across. I don't really need a solution, I think I have one, but I'd
really like to run it by everyone in case I'm headed in the wrong
direction.
I'm running a small Slony (v1.1.5)/postgresql 8.0.4 cluster (on
A useful utility that I've found is PgFouine. It has an option to
analyze VACUUM VERBOSE logs. It has been instrumental in helping me
figure out what's been going on with my VACUUM that is taking 4+
hours, specifically tracking the tables that are taking the longest.
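For capturing the input it needs, something like the following works; psql
writes the INFO lines to stderr, hence the redirect, and the
pgfouine_vacuum.php invocation is from memory, so treat it as an assumption
and check the PgFouine docs:

psql -d mydb -c "vacuum analyze verbose;" > vacuum.log 2>&1
php pgfouine_vacuum.php -file vacuum.log > vacuum_report.html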
I highly recommend
, the same table on db2 and db3
is very small, like zero. I guess this is looking like it is
overhead from slony? Should I take this problem over to the slony
group?
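One cheap way to compare the table's footprint on each node without 8.1's
pg_relation_size() is the planner's stats in pg_class (same query on db1,
db2 and db3; relpages is in 8 kB blocks):

select relname, relpages, reltuples
from pg_class
where relname in ('sl_log_1', 'sl_log_1_idx1');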
Thanks again, gang-
/kurt
On Jun 19, 2007, at 10:13 AM, Richard Huxton wrote:
Kurt Overberg wrote:
In my investigation
Chris,
I took your advice, and I had found that sl_log_1 seems to be causing
some of the problem. Here's the result of a VACUUM VERBOSE
mydb # vacuum verbose _my_cluster.sl_log_1 ;
INFO:  vacuuming "_my_cluster.sl_log_1"
INFO:  index "sl_log_1_idx1" now contains 309404 row versions in
1421785
production
systems down for
maintenance, can I wait until sl_log_1 clears out, so then I can just
drop that
table altogether (and re-create it of course)?
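Before going as far as a drop/re-create, a sanity check plus an in-place
rebuild of the bloated index might do it; REINDEX locks the table while it
runs, and this is only my suggestion, not something settled in this thread:

select count(*) from _my_cluster.sl_log_1;   -- how much slony still has queued
reindex table _my_cluster.sl_log_1;          -- rebuilds sl_log_1_idx1 and the rest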
Thanks!
/kurt
On Jun 19, 2007, at 5:33 PM, Tom Lane wrote:
Kurt Overberg [EMAIL PROTECTED] writes:
mydb # vacuum verbose
On Jun 19, 2007, at 7:26 PM, Tom Lane wrote:
Kurt Overberg [EMAIL PROTECTED] writes:
That's the thing that's kinda blowing my mind here, when I look at
that table:
db1=# select count(*) from _my_cluster.sl_log_1 ;
 count
-------
  6788
(1 row)
Well, that's real interesting. AFAICS
Thank you everyone for the replies. I'll try to answer everyone's
questions in one post.
* Regarding production/Mac memory and cache usage: this query HAS
been running on 8.0 on my Mac; I just got that particular query's
EXPLAIN from our production system because I had to nuke my local 8.0
Gang,
I'm running a mid-size production 8.0 environment. I'd really like
to upgrade to 8.2, so I've been doing some testing to make sure my
app works well with 8.2, and I ran across this weirdness. I set up
and configured 8.2 in the standard way, MacOSX Tiger, current
patches, download