Thanks for the responses
For anyone searching in the future, I'll answer Tom's questions and list the
boneheaded fix that it ended up actually being (really painful, as I've been
fighting this for a week).
1) According to Amazon, they run stock postgres as far as the query planner
is concerned.
2) Y
Hi there,
I'm having an issue with query performance between 2 different pgsql
environments.
The first is our current production postgres server, which is running 9.3.5
on CentOS 5 x64. The second system is Amazon's RDS postgres as a service.
On our local DB server we have a query that executes
Perfect, thanks Bruce, that worked.
I just extern'd PostPortNumber in my module and everything seems to be
working.
--Mike
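For anyone landing here later, here's a minimal sketch of that approach, assuming the usual extension boilerplate. The function name my_get_port is made up for illustration; PostPortNumber is the server global behind the "port" GUC and is declared in postmaster/postmaster.h in the versions I've looked at (a bare extern declaration works too if you'd rather not pull in that header).

/* Sketch only: expose the postmaster's listen port from a C module. */
#include "postgres.h"
#include "fmgr.h"
#include "postmaster/postmaster.h"   /* declares: extern int PostPortNumber; */

PG_MODULE_MAGIC;

PG_FUNCTION_INFO_V1(my_get_port);

Datum
my_get_port(PG_FUNCTION_ARGS)
{
    /* PostPortNumber holds the port the postmaster is listening on. */
    PG_RETURN_INT32(PostPortNumber);
}

You'd still need the matching CREATE FUNCTION ... LANGUAGE C declaration on the SQL side to call it.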
> SHOW PORT;
>
test=> SELECT setting FROM pg_settings WHERE name = 'port';
 setting
---------
 5432
Both of these are from a query context. This is in a C module; I suppose I
could run a query, but there has to be a direct C function to get this data.
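One candidate for a "direct C function" is GetConfigOption() from utils/guc.h, which returns any GUC's current value as a string without going through SQL. A rough sketch, assuming the 9.1-era three-argument signature (the extra bool arguments have changed between releases, so check guc.h on your version; the helper name here is made up):

/* Sketch only: read the "port" setting directly from the GUC machinery. */
#include "postgres.h"
#include "utils/guc.h"

static int
get_configured_port(void)
{
    /* args: name, missing_ok, restrict flag -- see guc.h for your version */
    const char *val = GetConfigOption("port", false, false);

    /* "port" should always be defined, but be defensive in a sketch. */
    return val ? atoi(val) : -1;
}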
Hi There,
I'm having a bit of an issue finding a C function to fetch the
configured server port from a C module.
We have written a C module to allow for remote clients to call a function
to run pg_dump/pg_restore remotely but create files locally on the db
server.
Currently it works fine if th
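For context, a rough sketch of the general shape such a module could take; all names here are made up and this is not the actual module, just an illustration of a SQL-callable C function that runs pg_dump on the database server itself so the output file is created locally even when the caller is remote.

/* Sketch only: server-side function that shells out to pg_dump. */
#include "postgres.h"
#include "fmgr.h"
#include "utils/builtins.h"   /* text_to_cstring */

#include <stdlib.h>

PG_MODULE_MAGIC;

PG_FUNCTION_INFO_V1(run_local_dump);

Datum
run_local_dump(PG_FUNCTION_ARGS)
{
    char       *dbname = text_to_cstring(PG_GETARG_TEXT_PP(0));
    char       *path   = text_to_cstring(PG_GETARG_TEXT_PP(1));
    char        cmd[1024];

    /* Real code would validate and escape the arguments before building
     * a shell command; this is only to illustrate the idea. */
    snprintf(cmd, sizeof(cmd), "pg_dump -Fc -f '%s' '%s'", path, dbname);

    PG_RETURN_INT32(system(cmd));
}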
On Wed, Sep 19, 2012 at 1:34 PM, Lonni J Friedman wrote:
> Just curious, is there a reason why you can't use pg_basebackup ?
>
> On Wed, Sep 19, 2012 at 12:27 PM, Mike Roest
> wrote:
> >
> >> Is there any hidden issue with this that we haven't seen? Or does
> Is there any hidden issue with this that we haven't seen? Or does anyone
> have suggestions as to an alternate procedure that will allow 2 slaves to
> sync concurrently?
>
> With some more testing I've done today I seem to have found an issue with
this procedure.
When the slave starts up after t
p script simpler for sure).
On Wed, Sep 19, 2012 at 10:05 AM, Mike Roest wrote:
>
>> Specifically what is the error?
>
> psql (9.1.5)
> Type "help" for help.
>
> postgres=# select pg_start_backup('hotbackup',true);
> pg_start_backup
>
> Specifically what is the error?
>
psql (9.1.5)
Type "help" for help.
postgres=# select pg_start_backup('hotbackup',true);
 pg_start_backup
-----------------
 61/B20
(1 row)
postgres=# select pg_start_backup('hotbackup',true);
ERROR: a backup is already in progress
HINT: Run pg_stop_backup
Hey Everyone,
We currently have a 9.1.5 postgres cluster running with streaming
replication. We have 3 nodes right now:
2 - local, set up with pacemaker as an HA master/slave failover cluster
1 - remote, as a DR.
Currently we're syncing with the pretty standard routine
clear local
Hi Tom,
On the test box this seems to have completely resolved our problem.
I'll be scheduling an upgrade on the production cluster to verify it.
Thanks
On Thu, Sep 13, 2012 at 11:08 AM, Tom Lane wrote:
> Mike Roest writes:
> > We have an interesting thing happening on o
Good Morning everyone,
We have an interesting thing happening on one of our DBs: when
autovacuum runs against the pg_catalog tables (or we run a manual vacuum),
we get a large delay in opening new connections. If this happens during a
busy period on our system we end up getting 200-300 con
valid
query against the system it does show up in the log.
I checked dmesg and /var/log/messages, and there are no IO errors or any
other messages.
Anyone have an idea of where I should go from here?
Thanks
On Mon, Aug 13, 2012 at 12:14 PM, Mike Roest wrote:
> Hey Everyone,
> With my inv
Hey Everyone,
With my investigations into our streaming slave having a continuously
growing pg_xlog directory (
http://archives.postgresql.org/pgsql-general/2012-08/msg00124.php) I think
I've hit upon what's going on.
Using pg_controldata I've found that our streaming slave hasn't run a
check
Hey Everyone,
I've got a bit of an interesting issue going on with pg_xlog growing on
a streaming replication slave.
We're running postgres 9.1.1 x64 built from source on CentOS 5.8 x64.
On both the master and the slave we have wal_keep_segments configured for
1000
wal_keep_segments = 1000
T
:) Yah, that makes sense, no big deal. I'll probably just push this head
build of pg_dump onto the production machines till it comes out.
Thanks again!
On Sat, Mar 31, 2012 at 3:44 PM, Tom Lane wrote:
> Mike Roest writes:
> > Any idea when 9.1.4 with this change will be out
>
>
> I'm just pulling another backup using the stock 9.1.1 pg_dump to ensure
> the backups are equivalent.
>
> Schema & data are identical between the 2 backups. The new backup passes
all our tests for validating a tenant.
Thank you again for the quick response!
--Mike
>
> I've committed fixes for both these issues. If you are in a position to
> test with 9.1 branch tip from git, it'd be nice to have confirmation
> that these patches actually cure your problem. For both of them, the
> issue seems to only show up in a subset of cases, which may explain why
> we'
>
> That could be out-of-date info though. Here's some info about
> another possibility:
> http://wiki.postgresql.org/wiki/Profiling_with_perf
>
>
There we go, this perf worked on the VM.
The file is 6 megs, so I've dropped it here.
That was doing perf for the length of the pg_dump command and the
That was on the CentOS 5.8 x64 machine. The one I'm trying it from now is
Ubuntu 11.10 x64
On Fri, Mar 30, 2012 at 11:30 AM, Tom Lane wrote:
> Mike Roest writes:
> > Ok I just realized that's probably not going to be much help :)
>
> gmon.out would be of no val
on my
dev box then ran it on the server. I'm just running the actual dump on my
dev box against the server instead, to see if I get something more useful,
since that doesn't really seem to have much data in it)
On Fri, Mar 30, 2012 at 11:09 AM, Mike Roest wrote:
> Here's the gmon.o
This query returns 9843923 rows from the DB. So processing this seems like
it'll take quite a while.
I'll get a -pg build of pg_dump going here on a dev box so I can get you a
profile.
On Fri, Mar 30, 2012 at 10:18 AM, Tom Lane wrote:
> Mike Roest writes:
> > This dump is c
Hey Everyone,
I've got an interesting issue. We're running postgres 9.1.1 Linux x64,
CentOS 5.8
aspdata=# select version();
version
--