it does.
So...
I'm performing a full vacuum on each database in the system every Saturday.
My pg_clog files date back to August 8th.
What am I doing wrong?
Thanks,
Scott Whitney
Journyx, Inc.
---(end of broadcast)---
TIP 4: Have you searched our
I'm reading conflicting information on this. Is this a supported technique?
start_backup
rsync the initial base backup
stop_backup
Then, periodically,
start_backup
rsync again
stop_backup
It's my opinion that this would significantly cut down the 30GB base backup
I'll need to take for each
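The incremental-rsync idea above can be sketched roughly like this (paths, hostnames, and the backup label are hypothetical, not values from the thread; on 8.x the low-level API is pg_start_backup()/pg_stop_backup() run via psql):

```shell
# Hedged sketch of an rsync-based base backup. $PGDATA and the
# standby destination are placeholders.
psql -U postgres -c "SELECT pg_start_backup('weekly_base');"

# The first run copies everything; subsequent runs only transfer
# changed files, which is the point of reusing rsync for the refresh.
rsync -av --delete "$PGDATA/" standby:/var/lib/pgsql/base_backup/

psql -U postgres -c "SELECT pg_stop_backup();"
```

Note that copying a live data directory is only consistent because replaying the archived WAL segments generated between start_backup and stop_backup repairs any partially-written pages.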
-Original Message-
From: Bruce Momjian [mailto:[EMAIL PROTECTED]
Sent: Oct 10, 2007 11:39 AM
To: Scott Whitney
Cc: pgsql-admin@postgresql.org
Subject: Re: [ADMIN] Using rsync for base backups for PITR
Scott Whitney wrote:
I'm reading conflicting information
is each database in the system. I don't have the log at the moment
to prove it ran Saturday, but, yeah, it does.
So...
I'm performing a full vacuum on each database in the system every Saturday.
My pg_clog files date back to August 8th.
What am I doing wrong?
Thanks,
Scott Whitney
Journyx, Inc
select datid from pg_stat_database where datname='km';
select database_size(that_id);
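On 8.1 and later the two-step lookup above can usually be collapsed into one call; pg_database_size() accepts either the database name or its OID (the 'km' name is taken from the query above):

```sql
-- One-step equivalent of the two queries above; pg_size_pretty()
-- renders the byte count in a human-readable unit.
SELECT pg_size_pretty(pg_database_size('km'));
```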
-Original Message-
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED] On Behalf Of Carol Walter
Sent: Nov 13, 2007 9:35 AM
To: pgsql-admin@postgresql.org
Subject: [ADMIN] Help with command syntax?
I don't really understand the postings I've found on this issue. This has
been going on for quite some time, but now that I'm regularly vacuuming this
db, I was looking for any info on how to fix this problem...I don't even
know enough about it to know what to include in this message.
This is
There goes my breakfast...
pg_dump: ERROR: missing chunk number 0 for toast value 110439697
pg_dump: SQL command to dump the contents of table attachments failed:
PQendcopy() failed.
pg_dump: Error message from server: ERROR: missing chunk number 0 for toast
value 110439697
pg_dump: The command
:41 AM
To: Scott Whitney
Cc: pgsql-admin@postgresql.org
Subject: Re: [ADMIN] Missing chunks from my toast...
Scott Whitney [EMAIL PROTECTED] writes:
There goes my breakfast...
pg_dump: ERROR: missing chunk number 0 for toast value 110439697
pg_dump: SQL command to dump the contents of table
this by using select * from attachments limit 1000 offset 0
then incremented until I found the bad rec. :)
d) insert the recs manually into the new db
Just an FYI for anyone else who runs into this and doesn't want to use a
known good backup.
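The limit/offset hunt described above can be written out as plain queries; this is a sketch, with the table name taken from the error messages in the thread and the ordering column hypothetical:

```sql
-- Walk the table in fixed-size windows; the window whose SELECT fails
-- with "missing chunk number ..." contains the corrupted row.
SELECT * FROM attachments ORDER BY id LIMIT 1000 OFFSET 0;
SELECT * FROM attachments ORDER BY id LIMIT 1000 OFFSET 1000;
-- ...keep incrementing OFFSET until a window errors out, then shrink
-- LIMIT to bisect down to the single bad record before re-inserting
-- the good rows into the new database.
```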
-Original Message-
From: Scott Whitney
Sent: Jun 20
I've got 3 different database servers (db01, db02 and db03).
I would like to have a WAL standby server that replays logs for all 3 in
case one goes down, so I can promote that particular server.
Can I do this by installing 3 separate postmasters on this machine?
Obviously, if 2 went down at the
(at my office)?
Assume no auto-failover.
-Original Message-
From: Montaseri [mailto:[EMAIL PROTECTED]
Sent: Jun 26, 2008 12:51 PM
To: Simon Riggs
Cc: Scott Whitney; pgsql-admin@postgresql.org
Subject: Re: [ADMIN] Warm standby server
I am not so sure of this arrangement's merits
From HA (High
PM
To: Scott Whitney
Cc: Simon Riggs; pgsql-admin@postgresql.org
Subject: Re: [ADMIN] Warm standby server
While I am not an expert on WAL, I again question the merits of such a
sophisticated HA configuration. Of course there are use cases for such
configs, but I am only advocating best price
I'm migrating a pg server (v 8.1.4) this weekend, and I have 2 dbs left on
it. One cannot be shut down until tomorrow. The last remaining one, I can
migrate today.
However, the software that talks to it is somewhat hinky and has several
places that connect -- there is no overall consolidated
I ran into this issue a while ago. Here's my long internal tech note to my
dev guys on what I did, modified a bit to be more generic:
I'm in the process of migrating our internal db server, and I decided to use
the helpdesk as
my test database. It backed up fine last night. Something went
I'm in the process of testing PITR recovery, and I have an issue.
My data directory is 30GB. Not huge, but it certainly takes a while to tar
up.
My understanding is:
a) pg_start_backup
b) tar up
c) pg_stop_backup
d) restore tar file
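Steps a) through d) above map onto commands roughly like this (paths and the backup label are hypothetical; recovery.conf applies to the pre-9.x-through-11 recovery setup discussed in this thread):

```shell
# a) put the cluster into backup mode
psql -U postgres -c "SELECT pg_start_backup('pitr_base');"

# b) tar the data directory; pg_xlog can be excluded because recovery
#    replays the archived WAL segments instead
tar czf /backups/base_backup.tar.gz --exclude=pg_xlog -C "$PGDATA" .

# c) leave backup mode
psql -U postgres -c "SELECT pg_stop_backup();"

# d) on the recovery host: unpack, add a recovery.conf pointing at the
#    WAL archive, then start postgres
tar xzf /backups/base_backup.tar.gz -C /var/lib/pgsql/data
```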
The problem is that I create databases pretty regularly. Let's
PROTECTED]
Sent: Tuesday, October 21, 2008 3:10 PM
To: Scott Whitney
Cc: pgsql-admin@postgresql.org
Subject: Re: [ADMIN] PITR question with base backup
Scott Whitney wrote:
The problem is that I create databases pretty regularly. Let's say I create
3 in a week. I'm not looking forward to going to my
Sorry for the repetition on some of this stuff. I'm just now learning it.
So, I've got my WALs archiving just fine, and I've got my base backup.
What I'd like to do is keep the standby server in recovery mode.
Docs say:
If we continuously feed the series of WAL files to another machine that has
.
Any clever ideas? Requests for more information?
Thanks in advance!
Scott Whitney
Journyx, Inc.
--
Sent via pgsql-admin mailing list (pgsql-admin@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-admin
all, when I _do_ get hit by one of those bugs, I _will_ be asked why
we weren't upgraded. *sigh*
-Original Message-
From: Scott Marlowe [mailto:scott.marl...@gmail.com]
Sent: Tuesday, January 13, 2009 4:16 PM
To: Scott Whitney
Cc: pgsql-admin@postgresql.org
Subject: Re: [ADMIN] Strange
Um. How is this possible? Am I doing something very, very stupid, here?
mydb=# select * from time_recs where id_time_rec not in (select id_time_rec
from punch_time_recs);
id_time_rec | id_user | record_date | id_code_task | id_code_pay_type |
id_project | time_amount | comment | commit_state |
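The surprising result from a query like the one above usually comes down to NULL semantics: NOT IN against a subquery that returns even one NULL row can never evaluate to true, so the outer query returns nothing. A minimal rewrite using the same table and column names from the query above:

```sql
-- If punch_time_recs.id_time_rec contains a NULL, then
-- "x NOT IN (..., NULL)" evaluates to NULL (unknown) for every x,
-- and no rows are returned. NOT EXISTS does not have this problem:
SELECT t.*
FROM time_recs t
WHERE NOT EXISTS (
    SELECT 1
    FROM punch_time_recs p
    WHERE p.id_time_rec = t.id_time_rec
);
```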
So, you're sayin' I ain't crazy? :)
-Original Message-
From: Hoover, Jeffrey [mailto:jhoo...@jcvi.org]
Sent: Wednesday, January 28, 2009 12:18 PM
To: Kevin Grittner; Scott Whitney; pgsql-admin@postgresql.org
Subject: RE: [ADMIN] Strange query problem...
Wow! I would never have expected
To: Scott Whitney; Kevin Grittner; pgsql-admin@postgresql.org
Subject: RE: [ADMIN] Strange query problem...
How do other databases handle this? I tried it in SQLite and I get
different behavior (see below). Can someone try it in Oracle? In MySQL?
In Sybase? If postgres is alone in this interpretation
: Joshua D. Drake [mailto:j...@commandprompt.com]
Sent: Wednesday, January 28, 2009 12:47 PM
To: Scott Whitney
Cc: 'Hoover, Jeffrey'; 'Kevin Grittner'; pgsql-admin@postgresql.org
Subject: Re: [ADMIN] Strange query problem...
On Wed, 2009-01-28 at 12:38 -0600, Scott Whitney wrote:
Wow. This just
So, I got pretty close to my max_connections setting, and I had to up it
last night. I did so, and I issued a pg_ctl reload. Everything _seems_
happy.
However, the script I use to monitor this has the number hard-coded in it
(well, it looks it up from the conf file, anyway), and I'd rather get
Well, there are a lot of unanswered questions in your post.
What VM technology are you using? Does it support clustering or some such?
Do you need to backup the entire VM, or just the PG data?
Generally speaking, if the machine itself (installed OS and programs, etc.)
is not changing, what I
I'd like to phone in with a slightly different opinion on VACUUM FULL. Yeah,
it should be avoided when possible, but it's not always possible. In our
case, I've got 300ish databases backing to a single database server. Each of
those dbs has a couple of hundred tables and a hundred or more views.
That's most likely because you have too small an FSM. Have you tuned
that?
My settings are:
max_fsm_pages = 150 # min max_fsm_relations*16, 6 bytes each
max_fsm_relations = 7 # min 100, ~70 bytes each
It's quite possible that those settings are too low,
There's an awful lot of information left out that would be very useful to
help advise you.
Restarting the postgres services on a daily basis is certainly nothing
that's going to corrupt your data or hurt your system, PROVIDED that it is
done correctly (ie: not killing the backend postmaster when
or waiting on storage. I
suspect you'll find that throwing hardware (memory, as a guess) at this
problem will solve it.
-Original Message-
From: S Arvind [mailto:arvindw...@gmail.com]
Sent: Monday, September 21, 2009 2:19 PM
To: Scott Whitney; pgsql-admin
Subject: Re: [ADMIN] Postgres
Visual C++ is Microsoft's compiler for C++. If something is compiled with
Visual C++, it was COMPILED on Windows. There is a 99.9% chance that the
underlying machine is also Windows (unless you're talking about a VM, but,
still, the OS on which PG itself is running is, in fact, Windows).
To the
...@postgresql.org
[mailto:pgsql-admin-ow...@postgresql.org] On Behalf Of Scott Whitney
Sent: Thursday, October 08, 2009 4:11 PM
To: 'Louis Lam'; pgsql-admin@postgresql.org
Subject: Re: [ADMIN] How to tell what OS PostgreSQL is installed on.
Visual C++ is Microsoft's compiler for C++. If something
I've got about 44GB of data in a few hundred production databases. I'm using PG
8.1.4, but upgrading today (even to the latest 8.1) is not an option. I know, I
know. I wish it were, and it's slated here for q2, but I cannot even apply
maintenance patches without a full testing cycle.
My
Will PG start at all? Manually, even? I'd say go back to clean, first of all
by removing your from-source install, and reinstall the apt-get packages.
At that point, can you start postgres at all? If not, what do your system
logs and postgres startup logs say?
- Jordz
During the testing that I did when moving from pg7 to pg8 a few years back, I
didn't notice any particular performance
increase on a similarly-configured server.
That is, we've got 14 disks (15k rpm) striped in a single RAID10 array. Moving
the logs to an internal RAID
versus leaving them on
that theory holds up only erratically.
- Greg Smith g...@2ndquadrant.com wrote:
Scott Whitney wrote:
On the 10k vs 15k rpm disks, there's a _lot_ to be said about that. I don't
want to start a flame war here,
but 15k versus 10k rpm hard drives does NOT equate to a 50% increase in
read
Well, there are quite a few more variables to consider, including:
a) speed of the drives
b) battery backed caching controller or not
c) throughput of the controller channels
d) your particular use
Point d is kind of the most important. We've got about 250 customers talking to
the same
While I agree with JD, we ended up using a fiber solution through a fiber
switch with multi-path drivers (IBM DS4300). It did end up costing a few
thousand dollars with all of the drives, but the performance made it worth it.
The big thing you want to remember to consider with any storage
There are a couple of ways. Assuming that it's the same database, and it's up
and running, you could do this:
Assuming:
table foo (col1 text, col2 int);
table bar (col2 text, col3 int);
insert into bar (select * from foo);
would map foo.col1 into bar.col2 and foo.col2 into bar.col3; the columns
are matched by position, not by name.
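A tiny worked example of that positional matching, using the table definitions above (the sample rows are invented for illustration):

```sql
CREATE TABLE foo (col1 text, col2 int);
CREATE TABLE bar (col2 text, col3 int);
INSERT INTO foo VALUES ('a', 1), ('b', 2);

-- Columns are matched by position, not by name:
-- foo.col1 -> bar.col2, foo.col2 -> bar.col3.
INSERT INTO bar SELECT * FROM foo;

-- To control the mapping explicitly, name the columns on both sides:
INSERT INTO bar (col3, col2) SELECT col2, col1 FROM foo;
```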
There are ways to enable it, but in general, yes, you're right. In AD you have
to do some magic to bind if you're not part of the actual AD forest.
Hi,
I'm not an AD expert myself, but I've read somewhere that AD disallows
simple binds without using SSL.
You could try enabling ssl on AD,
Hello. I posted this once before, but I'm encountering it again.
Each Saturday, I run: vacuumdb -a -v. I have autovac on all the time.
However, my pg_clog directory lists clog files going back to July.
This is pg 8.4.4 on Linux (CentOS 5.5).
I know this isn't a whole lot of information at
, is when the server was last
restarted.
I'll try the -F on Sat and see if that resolves the issue. I'll also have logs
available at that time, assuming all goes well.
Thanks for the advice.
Scott Whitney sc...@journyx.com wrote:
Each Saturday, I run: vacuumdb -a -v. I have autovac on all
I had this issue back in pg 7.x, and it was resolved by using -a in vacuumdb.
I'm having it again in v8.4.4.
So, my pg_clog directory contains files going back to Jul 13 of 2010.
Every Saturday, I run:
vacuumdb -a -v -F
I _thought_ that was supposed to clear those out. Am I wrong?
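vacuumdb -F (FREEZE) only lets old pg_clog segments be truncated once every database's datfrozenxid has advanced past them, template0 included. One way to see which database is pinning them, using standard catalog columns available in 8.4:

```sql
-- The database with the oldest datfrozenxid determines how far back
-- pg_clog must be kept; age() shows how many transactions old it is.
SELECT datname, datfrozenxid, age(datfrozenxid)
FROM pg_database
ORDER BY age(datfrozenxid) DESC;
```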
Ooops...I accidentally took this off list, as Kevin was nice enough to point
out.
What am I looking for?
Outliers.
Yeah. It's just those 2. I'd assume that the db I created
yesterday would be an outlier, but template0 has been there all along
(of course) and is still listed as 648, a
On Thu, Mar 10, 2011 at 07:56:26AM -0600, Scott Whitney wrote:
If you have hardware problems like that you have way more
problems. You could have corruption (silent) occurring in any of
the other
database files. Good luck.
I am, in fact, aware of that, but every single machine ever
manufactured will have hardware problems such
This is also interesting. I just allowed connections to template0
for the express purpose of vacuuming it, did a full vac on
template0, and that did NOT clear up the hanging clogs.
Was it a VACUUM FREEZE followed by a CHECKPOINT? Did you also cover
that new database?
Yes, yes and yes,
I dealt with this issue for years. It's not a pgadmin issue. In my case I had ipchains messing with me, then iptables, and eventually Cisco. Each time I've moved to a new firewall, I have to re-solve this issue. Unless others are interested, take this offline and email me.
Original message
I'll be moving to PG9 (hopefully soon...probably 6 weeks).
At that time, I'll be setting up hot-standby with streaming replication to 2
sites. Off-siting my pgdumps nightly is no longer going to be possible in the
very near future, due to the size of the dumps.
So...what I had planned to do
Hello, everyone. I want to throw a scenario out there to see what y'all think.
Soon, my cluster backups will be increasing in size inordinately. They're going
to immediately go to 3x as large as they currently are with the potential to be
about 20x within a year or so.
My current setup uses
the tables you really want get
replicated there.
For the backup, turn to a hot backup (tar $PGDATA) plus WAL archiving; it's
easier, faster, and more efficient than a logical copy with pg_dump.
A.A
On 04/25/2012 09:11 AM, Scott Whitney wrote:
Hello, everyone. I want to throw a scenario out there to see what
would use to pay for additional
vetting?
Thanks in advance.
Scott Whitney
PS: I have written a multi-proc script (in Python, Linux specific at the
moment) for pg_dump that you can use to pg_dump and restore said dumps. If
anyone's interested, contact me directly. It drastically cuts down
I'd hate to contradict Tom, but since ^ is probably a typo and you likely
meant *, yes, that's likely. Just be sure you understand the implications of
listening on all interfaces first. Saying that from an IT point of view.
Tom Lane t...@sss.pgh.pa.us wrote:
Terry Khatri terrykhatri...@gmail.com
I've got replication setup (streaming replication) between two 9.2.2 clusters.
Everything went just fine on that, and it's working as expected.
Now, obviously, I'll know if my primary goes down. I just monitor him as usual.
How do I tell whether the standby has gone out of sync for whatever
whether full vacuum could have been
to blame for the servers getting out of sync.
Thanks,
Scott Whitney
When you say 16 10K drives, do you mean:
a) RAID 0 with 16 drives?
b) RAID 1 with 8+8 drives?
c) RAID 5 with 12 drives?
d) RAID 1 with 7+7 drives and 2 hotspares?
We moved from a 14 FC drive (15k RPM) array (6+6 with 2 hotspares) to a 6 SSD
array (2+2 with 2 hotspares) because our iops
I tested the hybrid approach during my months-long testing and performance
stuff, and I was a bit underwhelmed.
That said, what _I_ personally really needed was increase in peak iops. Using
spindles for static data (OS, some logs, and such) worked fine, but no matter
how I split up the pg
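Splitting hot and cold data across the two storage tiers is typically done with tablespaces (the tablespace name, mount point, and table name below are hypothetical):

```sql
-- Put the busy tables on the SSD array and leave static data on
-- spindles; WAL stays wherever $PGDATA/pg_xlog points (or a symlink).
CREATE TABLESPACE fast_ssd LOCATION '/mnt/ssd/pgdata';
ALTER TABLE busy_table SET TABLESPACE fast_ssd;
```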
Everything you need to see is right here:
[root@serverNameRemoved pg_log]# ls -lrt
total 5924
-rw------- 1 postgres postgres 708669 Aug 8 18:59 postgresql-Thu.log
-rw------- 1 postgres postgres 669505 Aug 9 18:59 postgresql-Fri.log
-rw------- 1 postgres postgres 03 Aug 10 18:56
Server is set to Chicago. CDT (or CST...are we in or out? I never
remember...) GMT -6/5.
Original message
From: Tom Lane t...@sss.pgh.pa.us
Date: 08/13/2013 7:54 PM (GMT-06:00)
To: Scott Whitney swhit...@journyx.com,Scott Whitney sc...@journyx.com
Cc: pgsql-admin
Never RAID 5 for a database. When I say never, I can give you edge-case
scenarios, but you're basically taking a 4x overhead on all writes. Now, RAID-0
is a bad choice as well, since JBOD has no replication, but it sounds like you
might have that end under control.
Original message
I needed both UTF8 and Latin-1. I accomplished this by initdb with the LOCALE
set to C. That lets me create dbs with template0 encoding='Latin-1' as well
as encoding=UTF8, FWIW...
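The LOCALE-C trick above looks roughly like this (the cluster path and database names are hypothetical):

```shell
# A C-locale cluster does not tie databases to one encoding, so both
# LATIN1 and UTF8 databases can coexist in it.
initdb --locale=C -D /var/lib/pgsql/data

# New databases must be cloned from template0 to pick a fresh encoding.
createdb --template=template0 --encoding=LATIN1 legacy_db
createdb --template=template0 --encoding=UTF8 modern_db
```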
Original message
From: Marc Fromm marc.fr...@wwu.edu
Date: 10/10/2013 5:39 PM (GMT-06:00)
To: