Hi,
We're using PostgreSQL 8.2. Recently, our production database suffered a
severe performance impact, even though we're regularly doing both:
1. VACUUM FULL ANALYZE once a week during low-usage time and
2. ANALYZE every day at low-usage time
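A minimal sketch of that schedule with cron and psql (the paths, database
name, and times here are illustrative, not our actual setup):

  # weekly, Sunday 02:00: full vacuum plus statistics refresh
  0 2 * * 0  /usr/local/pgsql/bin/psql -d mydb -c "VACUUM FULL ANALYZE;"
  # daily, 03:00: statistics refresh only
  0 3 * * *  /usr/local/pgsql/bin/psql -d mydb -c "ANALYZE;"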
Also, we noticed that the
Hi,
Even though I've enabled the statistics collector on our server, it is not
collecting statistics, and because of this autovacuum is also not running as
expected.
PostgreSQL version: 8.2
The parameters related to this are:
# - Query/Index Statistics Collector -
#stats_command_string
default.
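For reference, the full set of 8.2 statistics settings that matter here, as
a postgresql.conf sketch (autovacuum needs both the collector and row-level
stats turned on):

  # - Query/Index Statistics Collector -
  stats_start_collector = on    # master switch for the collector
  stats_command_string = on     # current command shown in pg_stat_activity
  stats_block_level = off       # optional block-level I/O statistics
  stats_row_level = on          # REQUIRED for autovacuum to see table activity

  autovacuum = on               # does nothing unless row-level stats are on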
-----Original Message-----
From: Guillaume Lelarge [mailto:guilla...@lelarge.info]
Sent: Wednesday, March 31, 2010 1:02 PM
To: gna...@zoniac.com
Cc: pgsql-admin@postgresql.org
Subject: Re: [ADMIN] Statistics Collector not collecting server activities
On 31/03/2010 08:41, Gnanakumar wrote:
[mailto:guilla...@lelarge.info]
Sent: Wednesday, March 31, 2010 2:35 PM
To: gna...@zoniac.com
Cc: pgsql-admin@postgresql.org
Subject: Re: [ADMIN] Statistics Collector not collecting server activities
On 31/03/2010 10:58, Gnanakumar wrote:
> I'm also seeing a weird thing. If I do show sta
gna...@zoniac.com
Cc: pgsql-admin@postgresql.org
Subject: Re: [ADMIN] Statistics Collector not collecting server activities
On 31/03/2010 11:14, Gnanakumar wrote:
> OK. I'll try that way.
>
> But as per the 8.2 docs, it is ON by default. So if I comment it with just '#'
> i
On 31/03/2010 12:51, Gnanakumar wrote:
> After restarting PostgreSQL, I can see the following parameters are set to
> ON:
> mydb=# show stats_start_collector;
> stats_start_collector
> -----------------------
> on
> (1 row)
>
> mydb=# show stats
tm
l, stats_start_collector is on by default.
From: Gnanakumar [mailto:gna...@zoniac.com]
Sent: Wednesday, March 31, 2010 12:11 PM
To: pgsql-admin@postgresql.org
Subject: Statistics Collector not collecting server activities
Hi,
Even though I've enabled statistics collector
Hi,
I'm using PostgreSQL 8.2 on RedHat ES 5.
I connect to our remotely hosted database using PuTTY over the SSH
protocol.
As part of my maintenance activity, I was cloning the database using the
command:
CREATE DATABASE newdb WITH TEMPLATE=olddb;
While cloning was in progress, my Internet got disconnected in the middle,
so database cloning was also stuck in the middle.
...@zoniac.com
Subject: Re: [ADMIN] Database clone stuck in the middle
On 02/04/2010 15:38, Kevin Grittner wrote:
> "Gnanakumar" wrote:
>
>> I connect to our remotely hosted database using PuTTY over the
>> SSH protocol.
>
>> my Internet got disco
Hi Tom,
> I wonder whether the OP is doing
> something weird like launching the server as a subprocess of his login
> shell.
I don't understand what you mean here. What is OP?
> Another possibility (if the source is a really big database) is that
> the copy operation is actually still going on.
] Database clone stuck in the middle
On Fri, Apr 2, 2010 at 10:53 PM, Gnanakumar wrote:
> I cannot use screen, since my database server is hosted remotely. I have
> only command-line access to my server via PuTTY.
screen is a cli application. Indispensable.
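For what it's worth, a minimal screen workflow for a long-running clone over
SSH (session and database names are illustrative):

  screen -S clone        # start a named session on the server itself
  psql -c "CREATE DATABASE newdb WITH TEMPLATE=olddb;"
  # detach with Ctrl-A d; the command keeps running even if SSH drops
  screen -r clone        # reattach later to check on it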
clone stuck in the middle
"Gnanakumar" wrote:
> I connect to our remotely hosted database using PuTTY over the
> SSH protocol.
> my Internet got disconnected in the middle,
> so database cloning was also stuck in the middle.
> 4. Considering this Internet dis
ect: Re: [ADMIN] Database clone stuck in the middle
On 02/04/2010 15:38, Kevin Grittner wrote:
> "Gnanakumar" wrote:
>
>> I connect to our remotely hosted database using PuTTY over the
>> SSH protocol.
>
>> my Internet got disconnected in the middle,
Hi,
I'm using PostgreSQL 8.2.
I want to configure the autovacuum daemon on our production server in an
efficient way and take advantage of this daemon to perform vacuum and
analyze more aggressively.
I also read about the autovacuum daemon at
http://www.postgresql.org/docs/8.2/interactive/routi
>> 1. Why do we need to have 2 parameters (base threshold and scale factor)
>> to define the threshold value, when either one of the parameters would be
>> enough on its own? Can you explain the significance of having both
>> parameters? What is the real-world advantage of this?
>
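For reference, the two parameters combine into a single per-table threshold:

  vacuum threshold = autovacuum_vacuum_threshold
                   + autovacuum_vacuum_scale_factor * reltuples

With the 8.2 defaults (500 and 0.2, if memory serves), a 1,000-row table is
vacuumed after roughly 500 + 0.2 * 1000 = 700 dead rows, while a
1,000,000-row table waits for roughly 200,500. The base threshold keeps tiny
tables from being vacuumed constantly, and the scale factor keeps the
trigger proportional to table size; neither alone covers both ends.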
l.org
Subject: Re: [ADMIN] Statistics Collector not collecting server activities
"Gnanakumar" writes:
> I'm also seeing a weird thing. If I do 'show stats_start_collector;'
> inside psql, it says 'off'.
Yeah, this is the expected state if the collector
Cc: pgsql-admin@postgresql.org
Subject: Re: [ADMIN] Statistics Collector not collecting server activities
"Gnanakumar" wrote:
> As you said, I notice the following in postmaster log:
> 2010-04-08 17:30:41 IST LOG: could not receive test message on
> socket for statistics
[r...@dbserver data]# ls -l pg_xlog |wc -l
14438
[r...@dbserver data]# ls -l pg_xlog/archive_status |wc -l
14436
[r...@dbserver data]# ls -l /mnt/zoniacpitr/walarchive | wc -l
1783
NOTE: I can see that all the files under the pg_xlog/archive_status/
directory are suffixed with "00*.ready".
Hi Florian,
Thanks for your valuable suggestion and the detailed steps on the common way
to use PITR. Things are very clear now, except that I have another question
in connection with this.
> The correct way to clean out pg_xlog therefore is to either disable WAL
archiving, or to make sure your archive_c
Hi Kevin,
> It is generally unsafe to delete any WAL files from pg_xlog. If
> they are there because your archive command has been failing, you
> need to turn off archiving or (probably more convenient) allow the
> archive script to return success until things clear. One trick
> would be to temp
, June 05, 2010 7:39 PM
To: pgsql-admin@postgresql.org; gna...@zoniac.com
Cc: f...@phlo.org
Subject: RE: [ADMIN] PITR Recovery Question
"Gnanakumar" wrote:
> I couldn't quite understand this particular step: "One trick
> would be to temporarily change your archive_command
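Presumably the trick is something along these lines (a sketch; only do this
once you are sure the unarchived WAL is not needed for recovery):

  # in postgresql.conf, temporarily report success without copying anything:
  archive_command = '/bin/true'

  # then reload the configuration:
  pg_ctl reload -D /usr/local/pgsql/data

The archiver then marks every .ready segment as archived, and subsequent
checkpoints recycle the old WAL files. Restore the real archive_command and
take a fresh base backup afterwards.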
Hi Florian,
Thanks for the clarification and a link to a post on automated script.
On Jun 5, 2010, at 9:05 , Gnanakumar wrote:
> Thanks for your valuable suggestion and the detailed steps on the common
> way to use PITR. Things are very clear now, except that I have another
> question in
>
Hi Sam,
Thank you for sharing this script.
> Here's a script to make your backup and rsync it to a remote destination:
> #!/bin/bash
> echo "checkpoint"
> echo "CHECKPOINT;" | /local/pkg/bin/psql template1
> echo "start backup"
> echo "SELECT pg_start_backup('cisoradr:/cis/pgsql/katana7/backup');"
Hi,
My production server is running PostgreSQL v8.2.3 on CentOS release 5.2
(Final).
As part of our routine, we practice the PITR recovery process once in a
while, to make sure it works without fail. When I started the recovery
process, after some time, I saw the following error
...@sss.pgh.pa.us]
Sent: Wednesday, June 09, 2010 9:27 PM
To: gna...@zoniac.com
Cc: pgsql-admin@postgresql.org
Subject: Re: [ADMIN] Fatal Error during PITR Recovery
"Gnanakumar" writes:
> My production is running PostgreSQL v8.2.3 on CentOS release 5.2 (Final).
> As part of ou
.us; gna...@zoniac.com
Cc: pgsql-admin@postgresql.org
Subject: Re: [ADMIN] Fatal Error during PITR Recovery
"Gnanakumar" wrote:
> If I would like to upgrade to the latest minor version in the 8.2.x
> series, that is v8.2.17, how do I upgrade this [from 8.2.3 on
> CentOS] wit
Error during PITR Recovery
Gnanakumar" wrote:
> what is the correct method of upgrading/installing to 8.2.17?
>
> Let me make the upgrade question/idea I have in mind clear here:
>
> 1. Rename '/usr/local/pgsql/data' to
Hi,
Recently we upgraded our production server RAM from 7.5GB to 15GB for the
following reasons:
1. to solve performance issues
2. the number of concurrent users increased from 150 to 300
Our production server is not a dedicated database server and is being shared
with our other a
> Not to the database itself, you don't; and it's probably killing
> performance for you to try to do that. You should use your
> connection pooler to funnel that many client-side connections down
> to a much smaller number of database connections. Your pgpool
> configuration doesn't seem to be d
Hi,
What are all the controlling factors/measures that one should take into
account when there is unexpected growth in the disk space usage of the
database?
Some of the factors that come to my mind immediately are:
1) Running the autovacuum daemon aggressively
2) Setting/tuning appropriate
Hi Scott,
Thanks for your valuable suggestion. Our production server is running
PostgreSQL 8.2.3 on Red Hat 4.1.1-52.
> Esp. in 8.3 and before, where blowing out the free space map is quite
> easy to do and it can go unnoticed for some time.
As you pointed out rightly, recently in our Production s
> Definitely look at updating to the latest 8.2 release; there are a lot
> of bug fixes since 8.2.3.
Yes, we're planning out to upgrade the version.
> Usually I set max_fsm_pages to 2x or more of whatever vacuum verbose says
> it needs.
A snippet of my VACUUM ANALYZE VERBOSE is pasted below. To set this
p
>> Does stats collector process need any other special
>> resource/privilege/operations/port to start?
> If I understand correctly, it uses UDP. I don't think PostgreSQL
> uses UDP outside of that.
Still, I'm not able to resolve/fix this statistics collector issue (not
starting/working) in one of our
Hi,
Our production server is running PostgreSQL v8.2.3 on Red Hat Enterprise
Linux Server release 5 (Tikanga).
I need clarification on how the autovacuum daemon internally works in the
following specific use case/situation:
1. Does the autovacuum daemon work on one table at a time, or does i
> in 8.2 "naptime" means "time to sleep after we finish a job". So even
> if the previous task takes an hour, it will still sleep a minute before
> doing another round. (Note that this setting has a different meaning in
> later releases).
I couldn't understand the difference in meaning of "autov
Thanks for the update.
> The main difference is that 8.2 has only one process working at a time,
> whereas in 8.3 and later there can be several. When there's only one
> process, the only way for it to process several databases is
> sequentially; the naptime is how long to sleep between each item
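To make those knobs concrete, an illustrative 8.2 postgresql.conf fragment
(values are examples, not recommendations):

  autovacuum = on
  autovacuum_naptime = 60              # 8.2: sleep this long after each round
  autovacuum_vacuum_threshold = 500    # base threshold, in dead rows
  autovacuum_vacuum_scale_factor = 0.2
  autovacuum_analyze_threshold = 250
  autovacuum_analyze_scale_factor = 0.1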
Hi,
Our production server is running PostgreSQL v8.2.3 on Red Hat Enterprise
Linux Server release 5 (Tikanga) and for database connection management
we're using pgpool-II v2.0.1.
I've been following up on one of the issues I was facing with pgpool in its
forums and mailing list, but till now I've n
Any ideas/suggestions?
-----Original Message-----
From: Gnanakumar [mailto:gna...@zoniac.com]
Sent: Friday, August 13, 2010 6:54 PM
To: pgsql-admin@postgresql.org
Subject: SQL statements not printed in pgpool log
Hi,
Our production server is running PostgreSQL v8.2.3 on Red Hat Enterprise
Hi,
Our production server is running PostgreSQL v8.2.3 on RHEL5. The autovacuum
daemon is also running on the server.
A brief background on my use case: we heavily use a table for storing data
temporarily inside PostgreSQL functions. Eventually, there will not be any
records in the table. Becau
Solution is here:
http://pgfoundry.org/pipermail/pgpool-general/2010-August/002910.html
-----Original Message-----
From: Gnanakumar [mailto:gna...@zoniac.com]
Sent: Tuesday, August 17, 2010 7:38 PM
To: pgsql-admin@postgresql.org
Subject: RE: SQL statements not printed in pgpool log
Any ideas
Hi,
Does the autovacuum daemon work on tables created with the TEMPORARY syntax?
CREATE [ { TEMPORARY | TEMP } ] TABLE table_name ...
Regards,
Gnanam
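For reference, autovacuum cannot process temporary tables at all: their
contents are only visible to the owning session, so any maintenance has to
happen inside that session, e.g. (table name illustrative):

  CREATE TEMP TABLE scratch (id int, payload text);
  -- ... heavy inserts/updates/deletes inside the function ...
  ANALYZE scratch;  -- refresh planner stats manually; autovacuum never will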
> Truncate should not affect those values.
OK.
> Has your Postgres instance crashed or been restarted with immediate mode
> since the last vacuum? This will cause the autovacuum information to be
> null.
No, it's been up and running for almost a week. I even checked with "select
pg_postmaster_start_
> I don't believe pg_postmaster_start_time() is reset by a
> backend-crash-and-restart sequence ... but the stats collector data will
> be. You might want to look into the postmaster log for evidence of
> such a crash.
I'm sure that the backend neither restarted nor crashed at that moment.
An
Hi,
My production server is running PostgreSQL v8.2.3 on CentOS release 5.2
(Final).
I've set up PITR on my production server. For some reason, after setting up
PITR, we're not able to manage and maintain it.
PITR SETUP DETAILS
We have 2 drives. The primary drive (where the pgsql/data/ directory
resides) is 40
> You can check if PITR is disabled by executing:
> SHOW archive_command;
> At a psql prompt. If that's empty, then you turned it off, and it isn't
> involved in the shutdown issue you have anymore.
Yes, I could see empty value here.
> Take a look at what processes are still running against the da
>> You can check if PITR is disabled by executing:
>> SHOW archive_command;
>> At a psql prompt. If that's empty, then you turned it off, and it isn't
>> involved in the shutdown issue you have anymore.
> Yes, I could see empty value here.
>> Take a look at what processes are still running again
Hi,
Our production server is running PostgreSQL v8.2.3 on CentOS release 5.2
(Final).
We've set up PITR on our production server. For some reason, after setting
up PITR, we're not able to manage and maintain it. Eventually, our WAL
archive drive (secondary drive) has become full (100% used) and ad
Hi,
Is there a batch/bulk UPDATE query syntax available in PostgreSQL, similar
to the multirow VALUES syntax available for INSERT?
INSERT Multirow VALUES syntax example:
INSERT INTO films (code, title, did, date_prod, kind) VALUES
('B6717', 'Tampopo', 110, '1985-02-10', 'Comedy'),
('HG120', 'The Dinner Game', 140, DATE '1985-02-10', 'Comedy');
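For reference, a common way to get the same effect for UPDATE is to join
against a VALUES list (the column values here are illustrative):

  UPDATE films AS f
  SET    title = v.title,
         kind  = v.kind
  FROM (VALUES
          ('B6717', 'Tampopo: Special Edition', 'Comedy'),
          ('HG120', 'The Dinner Game',          'Comedy')
       ) AS v(code, title, kind)
  WHERE  f.code = v.code;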
us explain analyze for this update.
2011/2/3, Gnanakumar :
> Hi,
>
> Is there a batch/bulk UPDATE query syntax available in PostgreSQL, similar
> to the multirow VALUES syntax available for INSERT?
>
> INSERT Multirow VALUES syntax example:
> INSERT INTO films (code, title
Hi,
Our production server is running PostgreSQL v8.2.3 on CentOS5.2. We're
planning to upgrade to the latest version in 8.2 series, that is v8.2.20.
Also, in the future, we plan to upgrade to the most recent minor release as
soon as possible (as and when released by PostgreSQL).
Initially, we'v
> ? Did you read the paragraph above?
> Install latest version, and restart postmaster.
Installing the latest version as-is will overwrite the existing installed
directories/files/binaries but not the "/usr/local/pgsql/data/" directory,
right? Since this is our production server database,
Hi,
We're using PostgreSQL v8.2.3.
How do I get the schema name in which a TEMPORARY table is created using
the "CREATE TEMP TABLE mytable " syntax?
In our application, we're creating a temporary table with the same name as
an existing permanent table which is available in the "public" schema.
Hence, I want
> select n.nspname from pg_class c join pg_namespace n on
> n.oid=c.relnamespace
> where c.relname ='foo' and n.nspname like 'pg_temp%';
This will return all the temp schema names that exist, for tables created
by any session using the TEMP TABLE syntax. Since our application is
web-based, of course, there will be
>> How do I get the schema name in which TEMPORARY table is created using
>> "CREATE TEMP TABLE mytable " syntax?
> Do you need the real schema name, or will the "pg_temp" alias be
> sufficient?
I need the real schema name (for example, pg_temp_xxx) in which it is
created and not just the ali
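One way to narrow the earlier query to the calling session's own temp table
is to rely on visibility, since only that session's temp schema is visible
to it (a sketch using the table name from the thread):

  SELECT n.nspname
  FROM   pg_class c
  JOIN   pg_namespace n ON n.oid = c.relnamespace
  WHERE  c.relname = 'mytable'
    AND  n.nspname LIKE 'pg_temp%'
    AND  pg_table_is_visible(c.oid);  -- true only for this session's copy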
Hi,
We're using PostgreSQL v8.2.3. Ours is a web-based application, language is
Java and we're using pgpool-II v 2.0.1 purely for connection pooling (we
don't use other features of pgpool like Replication, Load Balancing, etc.).
I have an implementation-specific question related to creating tem
Hi,
We're using PostgreSQL v8.2.3. Ours is a web-based application, language is
Java and we're using pgpool-II v 2.0.1 purely for connection pooling (we
don't use other features of pgpool like Replication, Load Balancing, etc.).
We're running a complex report as explained below. Sometimes, the
First, thanks for your advice.
> If you want to see whether any of the bugs fixed might be related to
> your error messages, please read the release notes for 8.2.4 to
> 8.2.20 to see which might affect your situation:
> http://www.postgresql.org/docs/8.2/static/release.html
I read all the rele
> Although it's unrelated to your query about an exception being raised,
> a word of advice on temporary tables. (This may not be relevant to
> your situation, but it's no small source of pain in one of the systems
> I'm responsible for.) Significant use of temporary tables will cause
> commensur
Hi,
We're using PostgreSQL v8.2.3. Ours is a web-based application, language is
Java and we're using pgpool-II v 2.0.1 purely for connection pooling (we
don't use other features of pgpool like Replication, Load Balancing, etc.).
We're creating temporary table using "CREATE TEMP TABLE FOO(col1, c
> The key to understanding the problem is to realize that system
> catalogs are pretty much regular tables in PostgreSQL, with some
> additional special rules (no triggers, can't be CLUSTERed or ALTERed,
> &c). Anything you'd do in a regular table that would create a dead
> tuple UPDATEs, DELETE
> Clearly it's trying to use an OID it calculated for one of these
> tables after the table has been dropped, and I suspect that the lock
> is released between gathering the data and sorting it. I don't have
> any 8.2 databases around to try this on, but perhaps you would avoid
> it with a slight
Any ideas/recommendations?
-----Original Message-----
From: Gnanakumar [mailto:gna...@zoniac.com]
Sent: Monday, March 14, 2011 1:40 PM
To: pgsql-admin@postgresql.org
Subject: Temporary table failed to drop after power failure
Hi,
We're using PostgreSQL v8.2.3. Ours is a web-based applic
Hi,
We're planning to upgrade our PostgreSQL database from v8.2.3 to v9.0.3.
Before upgrading our Production server, we're trying out the complete
upgrade process on a separate database server that is an exact replica of
the production server.
I already took a dump of our database using
Hi,
Our OS is CentOS5.2. We're upgrading our PostgreSQL database from v8.2.3 to
v9.0.3. Is there any default init script available in v9.0.3 for starting
up the PostgreSQL server so that it can be added to "/etc/init.d/"? I
couldn't find any service file in the installation source directory. I
> The recommended upgrade technique is to use pg_dump (or pg_dumpall)
> from the newer version (9.0.3 in this case) to access the old
> database (running the 8.2.3 server in this case). If you followed
> some other procedure, like using the 8.2.3 software to dump the
> database, you should try the
> If you're upgrading to 9.0, I would start by following the
> recommended upgrade procedure and seeing whether these problems then
> exist in the new database. Fixing old issues like this is one big
> reason for using the newer version of pg_dump.
How do I use the newer version of pg_dump (9.0.3
> You can install two different versions of PostgreSQL in different
> directories, or you can run pg_dump from the machine which is your
> conversion target. We do both. We always build from source, and we
> use a --prefix on configure which includes the version. For
> example:
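To make the described approach concrete, a sketch (the versioned prefix is
the convention from the quote; host and database names are placeholders):

  ./configure --prefix=/usr/local/pgsql-9.0.3
  make && make install
  # dump the old 8.2.3 cluster using the NEW pg_dump:
  /usr/local/pgsql-9.0.3/bin/pg_dump -h oldhost -p 5432 olddb > olddb.dump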
I even tried out
> When I said "search for the problematic constraint", I meant to open the
> file in a text editor and search for the problematic constraint.
Yes, I've even opened the dump file and analyzed those constraints. In
fact, I've already explained my observations in detail here:
http://archives.po
Hi,
We're using PostgreSQL v8.2.3. Ours is a web-based application, language is
Java and we're using pgpool-II v 2.0.1 purely for connection pooling (we
don't use other features of pgpool like Replication, Load Balancing, etc.).
Recently, in our Production server there was an unexpected growth i
Hi,
We're planning to upgrade our Production server from PostgreSQL v8.2.3 to
the latest version in 8.2 series, that is v8.2.22.
In 8.2.11 Release Notes here
http://www.postgresql.org/docs/current/interactive/release-8-2-11.html, it's
been mentioned that:
"Also, if you were running a previous 8
Hi,
Our Production server is running PostgreSQL v8.2.22 on CentOS5.2. Based on
the application usage trend, we're finding that we need to increase the
maximum number of concurrent connections to the database server, that is,
the "max_connections" parameter. Of course, I also understand that it
requires m
Hi,
Our Production server is running PostgreSQL v8.2.22 on CentOS5.2. We want
to add a specific CHECK constraint to our application table, as explained
below:
There are 2 columns in the table: "managertype" numeric(1) and "managerid"
numeric(10). "managertype" accepts only 2 valid values, ei
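Since the exact business rule is cut off above, the following is only a
sketch of the general shape such a constraint could take; the specific
values and the managerid dependency are assumptions:

  ALTER TABLE mytable
    ADD CONSTRAINT chk_manager CHECK (
          managertype IN (1, 2)
      AND (managertype <> 1 OR managerid IS NOT NULL)  -- assumed: type 1 needs an id
    );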
Hi,
Recently, in our Production server, we found a "single query" being held up
in "idle in transaction" for more than 19 hours, which we spotted using the
following query:
select date_trunc('second', current_timestamp - query_start) as runtime,
datname as database_name, current_query from pg_stat_activity where
curren
> SELECT application_name, client_addr, client_hostname, client_port
> FROM pg_stat_activity
> WHERE procpid = 14740;
> (Replace 14740 with the process ID of the "idle in transaction" backend.)
> Look on the client machine and find the process that holds TCP port
> "client_port" open (on Linux you
> Then you need to identify the web server process/thread that has
> client_port open and fix the problem there. If you get that process
> to close the transaction or terminate, your problem is solved.
> You can also terminate a database session from the server with
> the pg_terminate_backend func
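To make the port-matching step concrete on Linux (the port number is
illustrative, and note pg_terminate_backend only exists in 8.4 and later):

  # on the client machine, find the process holding the port open:
  lsof -iTCP:52311        # or: netstat -tnp | grep 52311

  # server-side alternative on 8.4+:
  SELECT pg_terminate_backend(14740);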
Hi,
Our Production server has 35 GB of physical RAM. Since the server has lots
of RAM, we want to really make use of it. We've already configured
"max_connections" to 1000 and "shared_buffers" to 1536 MB, but when we tried
to increase only "shared_buffers" to 3072MB (keeping "max_connecti
> did you read
> http://www.postgresql.org/docs/8.2/static/kernel-resources.html ?
Yes, I read it. But I'm not able to find the correct way to increase the
Linux kernel parameters.
> If it is a dedicated DB server the rule of thumb usually is to use
> 25% RAM for shared buffers, but no more than 8GB unl
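For the record, on Linux the limit that usually bites is SHMMAX; a sketch of
raising it (the sizes are illustrative and should be computed from
shared_buffers plus overhead):

  # /etc/sysctl.conf
  kernel.shmmax = 4294967296   # 4 GB, max size of one shared memory segment
  kernel.shmall = 1048576      # total shared memory, in 4 kB pages

  # apply without a reboot:
  sysctl -p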
> We get very good performance dealing with
> thousands of concurrent users with a pool of 35 connections to the
> database.
>
> If you want to handle more users than you can currently support, you
> probably need to use fewer database connections.
First, please excuse me, as I'm not able to und
> There have been numerous discussions of this on the lists, so you
> can probably find a more in-depth discussion of the topic if you
> search the archives, and this may motivate me to put together a Wiki
> page on the topic, but here's the general concept.
I was really astonished to see a gre
> A formula which has held up pretty well across a lot of benchmarks
> for years is that for optimal throughput the number of active
> connections should be somewhere near
> ((core_count * 2) + effective_spindle_count).
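Plugging in illustrative numbers (not the actual hardware from this thread):
an 8-core box with 2 effective spindles gives (8 * 2) + 2 = 18 active
connections, which is why a pool of a few dozen database connections can
comfortably serve hundreds of client-side users.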
Our entire Production application stack is set up in the Amazon EC2 cloud
enviro
Hi,
Our application requirement demands archiving of records, so that the
performance/speed of the application is not compromised. So, I am
evaluating the various techniques/solutions available for database
archiving, in general, for PostgreSQL.
A few solutions that come to my mind are:
> Your question isn't specific enough. What does "not compromised" mean?
> Every technique for archiving records has some impact on some part of your
> system. Is a 0.1% impact too much? A 1% impact? A 10% impact? A factor
> of 2 slowdown?
Sorry if my original question has not clearly conveyed