On Thursday, April 11, 2013, X.H.WANG <82661...@qq.com> wrote:
> Hello everybody:
>
> After I switched the slave to the master, I cannot get the stats
> information with the SQL below, and pg_stat_reset() does not work on the
> new master.
> I also vacuumed by hand and it still does not work. I ne
Hello everybody:
After I switched the slave to the master, I cannot get the stats
information with the SQL below, and pg_stat_reset() does not work on the new
master.
I also vacuumed by hand and it still does not work. I need some help. Could you
give me any ideas?
The stats SQL:
SELECT st.re
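The query is cut off in the archive; purely as an illustration of the kind of
check involved (this is not the original query), the per-table counters and a
reset look roughly like this:

SELECT st.relname,
       st.n_tup_ins,
       st.n_tup_upd,
       st.last_vacuum,
       st.last_autovacuum
FROM pg_stat_user_tables st
ORDER BY st.relname;

-- reset the statistics counters for the current database
SELECT pg_stat_reset();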
Thx for clarification, Craig. Your Perl snippet comes in handy, too.
-ar
On Apr 10, 2013, at 8:08 PM, Craig James wrote:
> On Wed, Apr 10, 2013 at 4:59 PM, Armin Resch wrote:
>> Not sure this is the right list to vent about this but here you go:
>>
>> I) select regexp_replace('BEFORE.AFTER','(
On Wed, Apr 10, 2013 at 4:59 PM, Armin Resch wrote:
> Not sure this is the right list to vent about this but here you go:
>
> I) select regexp_replace('BEFORE.AFTER','(.*)\..*','\1','g') "Substring"
> II) select regexp_replace('BEFORE.AFTER','(.*)\\..*','\\1','g') "Substring"
>
> Executing (II) a
Not sure this is the right list to vent about this but here you go:
I) select regexp_replace('BEFORE.AFTER','(.*)\..*','\1','g') "Substring"
II) select regexp_replace('BEFORE.AFTER','(.*)\\..*','\\1','g') "Substring"
Executing (II) against pg 8.4.4 or 9.0.4 yields 'BEFORE', but in order for
9.1.7
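The version difference most likely comes down to standard_conforming_strings,
which defaults to on starting with 9.1 (an assumption here, since the message
is cut off); roughly:

SET standard_conforming_strings = off;  -- pre-9.1 default
SELECT regexp_replace('BEFORE.AFTER','(.*)\\..*','\\1','g');  -- form (II): 'BEFORE'

SET standard_conforming_strings = on;   -- default from 9.1
SELECT regexp_replace('BEFORE.AFTER','(.*)\..*','\1','g');    -- form (I): 'BEFORE'

-- E'' escape strings behave the same on every version:
SELECT regexp_replace('BEFORE.AFTER',E'(.*)\\..*',E'\\1','g');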
Hi Bambi,
Thank you for the prompt reply.
This table is very volatile; a lot of inserts/updates happen on this
table (at least 20~30 inserts/min).
When autovacuum tries to run on this table, I get this warning.
Is there a way to force it to happen? The table/index statistics
are becoming sta
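If the concern is the table/index statistics rather than the skipped
truncation, the usual levers are a manual VACUUM/ANALYZE in a quieter moment
or more aggressive per-table autovacuum settings; a rough sketch, using the
table name from the log below (the threshold values are arbitrary):

VACUUM (VERBOSE, ANALYZE) nic.pvxt;

ALTER TABLE nic.pvxt SET (autovacuum_vacuum_scale_factor = 0.02,
                          autovacuum_vacuum_threshold = 200);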
That just means some other process has DML going on against the table that is
supposed to be truncated, so autovacuum cannot (re)acquire the exclusive lock
it needs for the truncate pass. No lock, no truncate.
HTH,
Bambi.
From: pgsql-admin-ow...@postgresql.org
[mailto:pgsql-admin-ow...@postgresql.org] On Behalf Of Nik Tek
Sent: Wednesday, April 10, 2013 4:58 PM
To: pgsql-per
Hi,
Could someone please explain what these warnings mean in Postgres.
I see these messages a lot when automatic vacuum runs.
1 tm:2013-04-10 11:39:20.074 UTC db: pid:13766 LOG: automatic vacuum
of table "DB1.nic.pvxt": could not (re)acquire exclusive lock for truncate
scan
1 tm:2013-04
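To see which sessions hold or are waiting for locks on the table named in that
message (nic.pvxt), a query along these lines helps (a sketch; adjust the
schema and relation names):

SELECT l.pid, l.mode, l.granted
FROM pg_locks l
JOIN pg_class c ON c.oid = l.relation
JOIN pg_namespace n ON n.oid = c.relnamespace
WHERE n.nspname = 'nic'
  AND c.relname = 'pvxt';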
On Wed, Apr 10, 2013 at 11:06:32AM -0700, Bhanu Murthy wrote:
> Hi all,
>
> Can someone please point me to detailed documentation on how to
> secure/encrypt connections between PGBouncer and Postgresql database (version
> 8.4.3)?
>
> Thanks in advance!
>
> Bhanu M. Gandikota
> Cell: (415)
AFAIK, you have to use stunnel to do it (which is not hard to set up, but
it almost makes you wonder whether you should go to the trouble of using
pgbouncer at all).
I just went through this and I ended up just testing direct connections
through the tunnel without pgbouncer in the middle. It w
Hi all,
Can someone please point me to detailed documentation on how to secure/encrypt
connections between PGBouncer and Postgresql database (version 8.4.3)?
Thanks in advance!
Bhanu M. Gandikota
Cell: (415) 420-7740
On 10-04-2013 13:58, Herman Pool wrote:
All,
We have to migrate to a new hardware infrastructure, and we want to use
a Postgres distribution from EnterpriseDB on the new infrastructure.
On the old infrastructure, postgres comes from postgresql.org
The version of postgres on the old and new i
Every table or index is stored as one or more OS files, so max open files
needs to be set appropriately in order to support a larger table
count. There is no hard limit on the number of tables in PostgreSQL;
performance becomes the main concern as the catalogs grow large.
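As a rough gauge of how many data files the catalogs describe in a database
(a sketch only; each relation maps to at least one file, plus 1 GB segments
and free-space/visibility-map forks):

SELECT count(*) AS on_disk_relations
FROM pg_class
WHERE relkind IN ('r', 'i', 't');  -- tables, indexes, TOAST tables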
On Wed, Apr 10, 2013 at 10:43 AM, Dale Betts wrote:
>
> I'd agree, certainly in my experience.
>
> You need to ensure OS parameters such as the max open files (fs.file-max if
> we're talking
> Linux) are set appropriately. Bearing in mind each user will have an open file
> on each
> underlying da
Hello,
Yes, it's possible, but I recommend updating to the current version soon.
Regards
2013/4/10 Prashanth Ranjalkar
> Yes, log shipping/streaming replication is possible between these
> versions, as the major version is the same, i.e. 9.1.x.
>
> Thanks & Regards,
> P
Yes, log shipping/streaming replication is possible between these
versions, as the major version is the same, i.e. 9.1.x.

Thanks & Regards,
Prashanth Ranjalkar
Database Consultant & Architect
Skype: prashanth.ranjalkar
www.postgresdba.net
On Wed, Apr 10, 2013 at 1:58 PM, Herman
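A quick sanity check before wiring up replication is to confirm the major
version on both nodes (run on primary and standby):

SHOW server_version;
-- or, numerically:
SELECT current_setting('server_version_num');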
I'd agree, certainly in my experience.
You need to ensure OS parameters such as the max open files
(fs.file-max if we're talking Linux) are set appropriately. Bearing
in mind each user will have an open file on each underlying
datafile for the databas
All,
We have to migrate to a new hardware infrastructure, and we want to use a
Postgres distribution from EnterpriseDB on the new infrastructure.
On the old infrastructure, postgres comes from postgresql.org
The version of postgres on the old and new infrastructure is the same, 9.1
(old 9.1.5
On Wed, Apr 10, 2013 at 10:10 AM, Vedran Krivokuca wrote:
> On Wed, Apr 10, 2013 at 9:07 AM, Vedran Krivokuca
> wrote:
> > 1) we can go with different instances of PostgreSQL service, let's say
> > (for pure theory) 10 of them on the same HA cluster setup. Every
> > instance would hold let's say
On Wed, Apr 10, 2013 at 9:07 AM, Vedran Krivokuca wrote:
> 1) we can go with different instances of PostgreSQL service, let's say
> (for pure theory) 10 of them on the same HA cluster setup. Every
> instance would hold let's say 1/10th of that big recordset, and around
> 3.000 database tables (thi
Hello all,
We are exploring possible strategies for deploying PostgreSQL with an
application that will store a fairly large amount of data (the current
implementation stores around 345 GB, and it is expected to grow to up to 10
times that).
Now, not to go into too much detail to avo