[GENERAL] template1 database is facing high datfrozenxid

2016-02-12 Thread AI Rumman
Hi,

I am running Postgresql 9.1 and I can see that datfrozenxid is going high
and the vacuum process is not bringing it down. This has been happening on
the template1 database.
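
A quick way to see how close each database actually is to wraparound is to
check the age of datfrozenxid (once it approaches autovacuum_freeze_max_age,
200 million by default, anti-wraparound vacuums are forced); a minimal check,
also suggested later in this thread:

SELECT datname, age(datfrozenxid) AS xid_age
FROM pg_database
ORDER BY xid_age DESC;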

2016-02-12 16:51:50.400 CST [19445][@] : [13-1] WARNING:  oldest xmin is
> far in the past
> 2016-02-12 16:51:50.400 CST [19445][@] : [14-1] HINT:  Close open
> transactions soon to avoid wraparound problems.
> 2016-02-12 16:51:50.400 CST [19445][@] : [15-1] LOG:  automatic vacuum of
> table "template1.pg_catalog.pg_database": index scans: 0
> pages: 0 removed, 1 remain
> tuples: 0 removed, 9 remain
> system usage: CPU 0.00s/0.00u sec elapsed 0.00 sec
> 2016-02-12 16:51:50.411 CST [19435][@] : [82-1] WARNING:  oldest xmin is
> far in the past
> 2016-02-12 16:51:50.411 CST [19435][@] : [83-1] HINT:  Close open
> transactions soon to avoid wraparound problems.
> 2016-02-12 16:51:50.411 CST [19435][@] : [84-1] LOG:  automatic vacuum of
> table "template1.pg_catalog.pg_largeobject": index scans: 0
> pages: 0 removed, 0 remain
> tuples: 0 removed, 0 remain
> system usage: CPU 0.00s/0.00u sec elapsed 0.00 sec


I vacuumed the database manually but nothing worked.
Please help,

Thanks.


Re: [GENERAL] template1 database is facing high datfrozenxid

2016-02-12 Thread AI Rumman
I checked it and I did not find any long-running sql or any open
transaction, not even in pg_prepared_xacts.
And it looks like the pg_catalog tables are raising the alarm.

Any other ideas on where I should look, please?

Thanks.


On Fri, Feb 12, 2016 at 3:05 PM, Adrian Klaver <adrian.kla...@aklaver.com>
wrote:

> On 02/12/2016 02:56 PM, AI Rumman wrote:
>
>> Hi,
>>
>> I am running Postgresql 9.1 and I can see that datfrozenxid is going high
>> and the vacuum process is not bringing it down. This has been happening
>> on the template1 database.
>>
>> 2016-02-12 16:51:50.400 CST [19445][@] : [13-1] WARNING:  oldest
>> xmin is far in the past
>> 2016-02-12 16:51:50.400 CST [19445][@] : [14-1] HINT:  Close open
>> transactions soon to avoid wraparound problems.
>>
>
> The above seems to be the contributing factor.
>
> Does:
>
> select * from pg_stat_activity
>
> show long running queries.
>
>
> 2016-02-12 16:51:50.400 CST [19445][@] : [15-1] LOG:  automatic
>> vacuum of table "template1.pg_catalog.pg_database": index scans: 0
>> pages: 0 removed, 1 remain
>> tuples: 0 removed, 9 remain
>> system usage: CPU 0.00s/0.00u sec elapsed 0.00 sec
>> 2016-02-12 16:51:50.411 CST [19435][@] : [82-1] WARNING:  oldest
>> xmin is far in the past
>> 2016-02-12 16:51:50.411 CST [19435][@] : [83-1] HINT:  Close open
>> transactions soon to avoid wraparound problems.
>> 2016-02-12 16:51:50.411 CST [19435][@] : [84-1] LOG:  automatic
>> vacuum of table "template1.pg_catalog.pg_largeobject": index scans: 0
>> pages: 0 removed, 0 remain
>> tuples: 0 removed, 0 remain
>> system usage: CPU 0.00s/0.00u sec elapsed 0.00 sec
>>
>>
>> I vacuumed the database manually but nothing worked.
>> Please help,
>>
>> Thanks.
>>
>
>
> --
> Adrian Klaver
> adrian.kla...@aklaver.com
>


Re: [GENERAL] template1 database is facing high datfrozenxid

2016-02-12 Thread AI Rumman
Used this query in each of the databases:

SELECT t.relname, l.database, l.locktype, l.pid , l.mode, l.granted,
p.current_query, p.query_start ,p.waiting
FROM pg_locks as l
INNER JOIN pg_stat_all_tables t
on l.relation = t.relid
INNER JOIN pg_stat_activity as p
on l.pid = p.procpid ;

No luck. At present, the db is working, but it is going towards wraparound.

On Fri, Feb 12, 2016 at 3:28 PM, Adrian Klaver <adrian.kla...@aklaver.com>
wrote:

> On 02/12/2016 03:10 PM, AI Rumman wrote:
>
>> I checked it and I did not find any long-running sql or any open
>> transaction, not even in pg_prepared_xacts.
>> And it looks like the pg_catalog tables are raising the alarm.
>>
>> Any other ideas on where I should look, please?
>>
>
> Should have added:
>
> select * from pg_database
>
>
>> Thanks.
>>
>>
>> On Fri, Feb 12, 2016 at 3:05 PM, Adrian Klaver
>> <adrian.kla...@aklaver.com <mailto:adrian.kla...@aklaver.com>> wrote:
>>
>> On 02/12/2016 02:56 PM, AI Rumman wrote:
>>
>> Hi,
>>
>> I am running Postgresql 9.1 and I can see that datfrozenxid is
>> going high
>> and the vacuum process is not bringing it down. This has been
>> happening
>> on the template1 database.
>>
>>  2016-02-12 16:51:50.400 CST [19445][@] : [13-1] WARNING:
>> oldest
>>  xmin is far in the past
>>  2016-02-12 16:51:50.400 CST [19445][@] : [14-1] HINT:
>> Close open
>>  transactions soon to avoid wraparound problems.
>>
>>
>> The above seems to be the contributing factor.
>>
>> Does:
>>
>> select * from pg_stat_activity
>>
>> show long running queries.
>>
>>
>>  2016-02-12 16:51:50.400 CST [19445][@] : [15-1] LOG:
>> automatic
>>  vacuum of table "template1.pg_catalog.pg_database": index
>> scans: 0
>>  pages: 0 removed, 1 remain
>>  tuples: 0 removed, 9 remain
>>  system usage: CPU 0.00s/0.00u sec elapsed 0.00 sec
>>  2016-02-12 16:51:50.411 CST [19435][@] : [82-1] WARNING:
>> oldest
>>  xmin is far in the past
>>  2016-02-12 16:51:50.411 CST [19435][@] : [83-1] HINT:
>> Close open
>>  transactions soon to avoid wraparound problems.
>>  2016-02-12 16:51:50.411 CST [19435][@] : [84-1] LOG:
>> automatic
>>  vacuum of table "template1.pg_catalog.pg_largeobject":
>> index scans: 0
>>  pages: 0 removed, 0 remain
>>  tuples: 0 removed, 0 remain
>>  system usage: CPU 0.00s/0.00u sec elapsed 0.00 sec
>>
>>
>> I vacuumed the database manually but nothing worked.
>> Please help,
>>
>> Thanks.
>>
>>
>>
>> --
>> Adrian Klaver
>> adrian.kla...@aklaver.com <mailto:adrian.kla...@aklaver.com>
>>
>>
>>
>
> --
> Adrian Klaver
> adrian.kla...@aklaver.com
>


Re: [GENERAL] template1 database is facing high datfrozenxid

2016-02-12 Thread AI Rumman
Yes. And we solved the problem.
We looked into pg_subtrans and found that we had subtransactions pending
from January 25th. We investigated more and found that a large sql had been
executed on the streaming standby around that date.
Digging more, we found that the date of the below alert is also close:

WARNING:  oldest xmin is far in the past

We stopped the standby and the problem was solved. :)
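
For context, this matches how a standby can pin the primary's xmin: if
hot_standby_feedback is enabled (available since 9.1), a long-running query
or transaction on the standby reports its xmin back through the walsender,
and vacuum on the primary cannot advance past it. Two hedged checks,
assuming a 9.1-style streaming setup:

-- on the standby: is feedback being sent back to the primary?
SHOW hot_standby_feedback;

-- on the primary: which walsenders are connected
SELECT * FROM pg_stat_replication;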

Thanks.

On Fri, Feb 12, 2016 at 4:11 PM, Adrian Klaver <adrian.kla...@aklaver.com>
wrote:

> On 02/12/2016 04:03 PM, AI Rumman wrote:
>
>> In pg_subtrans, I have files like:
>>
>
> Are you sure you are looking at the same database cluster in all the cases?
>
> What does:
>
> SELECT datname, age(datfrozenxid) FROM pg_database;
>
> give you?
>
>
>>
>> $ ls -lrt | more
>> total 1269436
>> -rw------- 1 postgres postgres 262144 Jan 25 18:49 D907
>> -rw------- 1 postgres postgres 262144 Jan 25 18:54 D908
>> -rw------- 1 postgres postgres 262144 Jan 25 18:58 D909
>> -rw------- 1 postgres postgres 262144 Jan 25 18:59 D90A
>> -rw------- 1 postgres postgres 262144 Jan 25 19:04 D90B
>> -rw------- 1 postgres postgres 262144 Jan 25 19:09 D90C
>> -rw------- 1 postgres postgres 262144 Jan 25 19:14 D90D
>> -rw------- 1 postgres postgres 262144 Jan 25 19:18 D90E
>> -rw------- 1 postgres postgres 262144 Jan 25 19:19 D90F
>> -rw------- 1 postgres postgres 262144 Jan 25 19:24 D910
>> -rw------- 1 postgres postgres 262144 Jan 25 19:29 D911
>> -rw------- 1 postgres postgres 262144 Jan 25 19:33 D912
>> -rw------- 1 postgres postgres 262144 Jan 25 19:34 D913
>> -rw------- 1 postgres postgres 262144 Jan 25 19:39 D914
>> -rw------- 1 postgres postgres 262144 Jan 25 19:44 D915
>> -rw------- 1 postgres postgres 262144 Jan 25 19:49 D916
>> -rw------- 1 postgres postgres 262144 Jan 25 19:53 D917
>> -rw------- 1 postgres postgres 262144 Jan 25 19:54 D918
>> -rw------- 1 postgres postgres 262144 Jan 25 19:59 D919
>> -rw------- 1 postgres postgres 262144 Jan 25 20:04 D91A
>> -rw------- 1 postgres postgres 262144 Jan 25 20:09 D91B
>> -rw------- 1 postgres postgres 262144 Jan 25 20:14 D91C
>> -rw------- 1 postgres postgres 262144 Jan 25 20:19 D91D
>> -rw------- 1 postgres postgres 262144 Jan 25 20:23 D91E
>> -rw------- 1 postgres postgres 262144 Jan 25 20:24 D91F
>> -rw------- 1 postgres postgres 262144 Jan 25 20:29 D920
>> -rw------- 1 postgres postgres 262144 Jan 25 20:34 D921
>> -rw------- 1 postgres postgres 262144 Jan 25 20:39 D922
>> -rw------- 1 postgres postgres 262144 Jan 25 20:44 D923
>> -rw------- 1 postgres postgres 262144 Jan 25 20:49 D924
>> -rw------- 1 postgres postgres 262144 Jan 25 20:54 D925
>> -rw------- 1 postgres postgres 262144 Jan 25 20:59 D926
>> -rw------- 1 postgres postgres 262144 Jan 25 21:04 D927
>> .
>>
>>
>> Does it mean that I have too many open transactions? If so, they are not
>> showing in pg_stat_activity.
>>
>> On Fri, Feb 12, 2016 at 3:38 PM, AI Rumman <rumman...@gmail.com
>> <mailto:rumman...@gmail.com>> wrote:
>>
>> Used this query in each of the databases:
>>
>> SELECT t.relname, l.database, l.locktype, l.pid , l.mode, l.granted,
>> p.current_query, p.query_start ,p.waiting
>> FROM pg_locks as l
>> INNER JOIN pg_stat_all_tables t
>> on l.relation = t.relid
>> INNER JOIN pg_stat_activity as p
>> on l.pid = p.procpid ;
>>
>> No luck. At present, the db is working, but it is going towards wraparound.
>>
>> On Fri, Feb 12, 2016 at 3:28 PM, Adrian Klaver
>> <adrian.kla...@aklaver.com <mailto:adrian.kla...@aklaver.com>> wrote:
>>
>> On 02/12/2016 03:10 PM, AI Rumman wrote:
>>
>> I checked it and I did not find any long-running sql or any
>> open
>> transaction, not even in pg_prepared_xacts.
>> And it looks like the pg_catalog tables are raising the alarm.
>>
>>     Any other ideas on where I should look, please?
>>
>>
>> Should have added:
>>
>> select * from pg_database
>>
>>
>> Thanks.
>>
>>
>> On Fri, Feb 12, 2016 at 3:05 PM, Adrian Klaver
>> <adrian.kla...@aklaver.com
>> <mailto:adrian.kla...@aklaver.com>
>> <mailto:adrian.kla...@aklaver.com
>>
>> <mailto:adrian.kla...@aklaver.com>>> wrote:
>>
>>  

Re: [GENERAL] template1 database is facing high datfrozenxid

2016-02-12 Thread AI Rumman
In pg_subtrans, I have files like:

>
> $ ls -lrt | more
> total 1269436
> -rw------- 1 postgres postgres 262144 Jan 25 18:49 D907
> -rw------- 1 postgres postgres 262144 Jan 25 18:54 D908
> -rw------- 1 postgres postgres 262144 Jan 25 18:58 D909
> -rw------- 1 postgres postgres 262144 Jan 25 18:59 D90A
> -rw------- 1 postgres postgres 262144 Jan 25 19:04 D90B
> -rw------- 1 postgres postgres 262144 Jan 25 19:09 D90C
> -rw------- 1 postgres postgres 262144 Jan 25 19:14 D90D
> -rw------- 1 postgres postgres 262144 Jan 25 19:18 D90E
> -rw------- 1 postgres postgres 262144 Jan 25 19:19 D90F
> -rw------- 1 postgres postgres 262144 Jan 25 19:24 D910
> -rw------- 1 postgres postgres 262144 Jan 25 19:29 D911
> -rw------- 1 postgres postgres 262144 Jan 25 19:33 D912
> -rw------- 1 postgres postgres 262144 Jan 25 19:34 D913
> -rw------- 1 postgres postgres 262144 Jan 25 19:39 D914
> -rw------- 1 postgres postgres 262144 Jan 25 19:44 D915
> -rw------- 1 postgres postgres 262144 Jan 25 19:49 D916
> -rw------- 1 postgres postgres 262144 Jan 25 19:53 D917
> -rw------- 1 postgres postgres 262144 Jan 25 19:54 D918
> -rw------- 1 postgres postgres 262144 Jan 25 19:59 D919
> -rw------- 1 postgres postgres 262144 Jan 25 20:04 D91A
> -rw------- 1 postgres postgres 262144 Jan 25 20:09 D91B
> -rw------- 1 postgres postgres 262144 Jan 25 20:14 D91C
> -rw------- 1 postgres postgres 262144 Jan 25 20:19 D91D
> -rw------- 1 postgres postgres 262144 Jan 25 20:23 D91E
> -rw------- 1 postgres postgres 262144 Jan 25 20:24 D91F
> -rw------- 1 postgres postgres 262144 Jan 25 20:29 D920
> -rw------- 1 postgres postgres 262144 Jan 25 20:34 D921
> -rw------- 1 postgres postgres 262144 Jan 25 20:39 D922
> -rw------- 1 postgres postgres 262144 Jan 25 20:44 D923
> -rw------- 1 postgres postgres 262144 Jan 25 20:49 D924
> -rw------- 1 postgres postgres 262144 Jan 25 20:54 D925
> -rw------- 1 postgres postgres 262144 Jan 25 20:59 D926
> -rw------- 1 postgres postgres 262144 Jan 25 21:04 D927
> .


Does it mean that I have too many open transactions? If so, they are not
showing in pg_stat_activity.

On Fri, Feb 12, 2016 at 3:38 PM, AI Rumman <rumman...@gmail.com> wrote:

> Used this query in each of the databases:
>
> SELECT t.relname, l.database, l.locktype, l.pid , l.mode, l.granted,
> p.current_query, p.query_start ,p.waiting
> FROM pg_locks as l
> INNER JOIN pg_stat_all_tables t
> on l.relation = t.relid
> INNER JOIN pg_stat_activity as p
> on l.pid = p.procpid ;
>
> No luck. At present, the db is working, but it is going towards wraparound.
>
> On Fri, Feb 12, 2016 at 3:28 PM, Adrian Klaver <adrian.kla...@aklaver.com>
> wrote:
>
>> On 02/12/2016 03:10 PM, AI Rumman wrote:
>>
>>> I checked it and I did not find any long-running sql or any open
>>> transaction, not even in pg_prepared_xacts.
>>> And it looks like the pg_catalog tables are raising the alarm.
>>>
>>> Any other ideas on where I should look, please?
>>>
>>
>> Should have added:
>>
>> select * from pg_database
>>
>>
>>> Thanks.
>>>
>>>
>>> On Fri, Feb 12, 2016 at 3:05 PM, Adrian Klaver
>>> <adrian.kla...@aklaver.com <mailto:adrian.kla...@aklaver.com>> wrote:
>>>
>>> On 02/12/2016 02:56 PM, AI Rumman wrote:
>>>
>>> Hi,
>>>
>>> I am running Postgresql 9.1 and I can see that datfrozenxid is
>>> going high
>>> and the vacuum process is not bringing it down. This has been
>>> happening
>>> on the template1 database.
>>>
>>>  2016-02-12 16:51:50.400 CST [19445][@] : [13-1] WARNING:
>>> oldest
>>>  xmin is far in the past
>>>  2016-02-12 16:51:50.400 CST [19445][@] : [14-1] HINT:
>>> Close open
>>>  transactions soon to avoid wraparound problems.
>>>
>>>
>>> The above seems to be the contributing factor.
>>>
>>> Does:
>>>
>>> select * from pg_stat_activity
>>>
>>> show long running queries.
>>>
>>>
>>>  2016-02-12 16:51:50.400 CST [19445][@] : [15-1] LOG:
>>> automatic
>>>  vacuum of table "template1.pg_catalog.pg_database": index
>>> scans: 0
>>>  pages: 0 removed, 1 remain
>>>  tuples: 0 removed, 9 remain
>>>  system usage: CPU 0.00s/0.00u sec elapsed 0.00 sec
>>>  2016-02-12 16:51:50.411 CST [19435][@] : [82-1] WARNI

Re: [GENERAL] How to stop autovacuum for daily partition old tables

2016-01-20 Thread AI Rumman
But will it not cause transaction wraparound for those tables?

Thanks.
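
For what it's worth, disabling autovacuum per table should not by itself
cause wraparound: once a table's relfrozenxid age passes
autovacuum_freeze_max_age, an anti-wraparound vacuum is forced even for
tables with autovacuum_enabled = false. A hedged way to take those forced
vacuums out of the picture is to freeze each partition once it has gone
static (table name illustrative):

VACUUM FREEZE your_schema.your_partition_20140101;

-- verify which tables still carry old xids
SELECT relname, age(relfrozenxid)
FROM pg_class
WHERE relkind = 'r'
ORDER BY age(relfrozenxid) DESC
LIMIT 10;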

On Wed, Jan 20, 2016 at 4:44 PM, Melvin Davidson <melvin6...@gmail.com>
wrote:

>
> ALTER TABLE your_schema.your_table SET (autovacuum_enabled = false,
> toast.autovacuum_enabled = false);
>
> On Wed, Jan 20, 2016 at 6:22 PM, AI Rumman <rumman...@gmail.com> wrote:
>
>> Hi,
>>
>> I have a table with daily partition schema on Postgresql 9.1 where we are
>> keeping 2 years of data.
>> I often see that the autovacuum process is busy with old tables where
>> there is no change. How can I stop it?
>> Please advise.
>>
>> Thanks.
>>
>>
>
>
> --
> *Melvin Davidson*
> I reserve the right to fantasize.  Whether or not you
> wish to share my fantasy is entirely up to you.
>


[GENERAL] How to stop autovacuum for daily partition old tables

2016-01-20 Thread AI Rumman
Hi,

I have a table with daily partition schema on Postgresql 9.1 where we are
keeping 2 years of data.
I often see that the autovacuum process is busy with old tables where
there is no change. How can I stop it?
Please advise.

Thanks.


Re: [GENERAL] Building 9.4 rpm for Red Hat 5

2016-01-19 Thread AI Rumman
We build our own rpms.

Thanks.

On Tue, Jan 19, 2016 at 12:34 PM, Adrian Klaver <adrian.kla...@aklaver.com>
wrote:

> On 01/19/2016 12:29 PM, AI Rumman wrote:
>
>> We have had all our scripts running on older Postgresql versions for
>> years, using PGHOME and other paths. We need to make the 9.4 rpm follow
>> that directory structure.
>> Considering the above, am I on the right track? Please let me know.
>>
>
> How did you install the Postgres version you have running now?
>
>
>> Thanks.
>>
>> On Tue, Jan 19, 2016 at 12:25 PM, Joshua D. Drake <j...@commandprompt.com
>> <mailto:j...@commandprompt.com>> wrote:
>>
>> On 01/19/2016 12:21 PM, AI Rumman wrote:
>>
>> Hi All,
>>
>> My production boxes are running on Redhat 5 and I need to build
>> Postgresql 9.4 rpm for it following our environment setup.
>> If I build the rpm on Centos 5, will it be ok to run on Red Hat
>> 5 boxes?
>> Can you please let me know?
>>
>> Thanks.
>>
>>
>> CentOS 5 and RHEL 5 are binary compatible, so yes.
>>
>> However, is there a reason you aren't just using yum.postgresql.org
>> <http://yum.postgresql.org>?
>>
>> JD
>>
>> --
>> Command Prompt, Inc. http://the.postgres.company/
>> +1-503-667-4564 <tel:%2B1-503-667-4564>
>> PostgreSQL Centered full stack support, consulting and development.
>>
>>
>>
>
> --
> Adrian Klaver
> adrian.kla...@aklaver.com
>


Re: [GENERAL] Building 9.4 rpm for Red Hat 5

2016-01-19 Thread AI Rumman
We have had all our scripts running on older Postgresql versions for years,
using PGHOME and other paths. We need to make the 9.4 rpm follow that
directory structure.
Considering the above, am I on the right track? Please let me know.

Thanks.

On Tue, Jan 19, 2016 at 12:25 PM, Joshua D. Drake <j...@commandprompt.com>
wrote:

> On 01/19/2016 12:21 PM, AI Rumman wrote:
>
>> Hi All,
>>
>> My production boxes are running on Redhat 5 and I need to build
>> Postgresql 9.4 rpm for it following our environment setup.
>> If I build the rpm on Centos 5, will it be ok to run on Red Hat 5 boxes?
>> Can you please let me know?
>>
>> Thanks.
>>
>
> CentOS 5 and RHEL 5 are binary compatible, so yes.
>
> However, is there a reason you aren't just using yum.postgresql.org?
>
> JD
>
> --
> Command Prompt, Inc.  http://the.postgres.company/
>  +1-503-667-4564
> PostgreSQL Centered full stack support, consulting and development.
>


[GENERAL] Building 9.4 rpm for Red Hat 5

2016-01-19 Thread AI Rumman
Hi All,

My production boxes are running on Redhat 5 and I need to build Postgresql
9.4 rpm for it following our environment setup.
If I build the rpm on Centos 5, will it be ok to run on Red Hat 5 boxes?
Can you please let me know?

Thanks.


[GENERAL] pgpool ssl handshake failure

2015-10-15 Thread AI Rumman
Hi,

I am using pgpool-II version 3.4.3 (tataraboshi).
My database is Postgresql 8.4.

I am trying to configure ssl mode from the client, while between pgpool and
the database it is non-ssl.
I configured it as documented and now I am getting this in my log:

>
> *2015-10-13 22:17:58: pid 1857: LOG:  new connection received*
> *2015-10-13 22:17:58: pid 1857: DETAIL:  connecting host=10.0.0.5
> port=65326*
> *2015-10-13 22:17:58: pid 1857: LOG:  pool_ssl: "SSL_read": "ssl handshake
> failure"*
> *2015-10-13 22:17:58: pid 1857: ERROR:  unable to read data from 
> frontend**2015-10-13
> 22:17:58: pid 1857: DETAIL:  socket read failed with an error "Success"*

Please let me know what I am doing wrong.

Thanks & Regards.
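
For reference, a minimal sketch of the pgpool-side configuration this setup
needs (parameter names from pgpool-II's SSL support; file paths are
illustrative):

# pgpool.conf
ssl = on
ssl_key = '/usr/local/etc/pgpool/server.key'
ssl_cert = '/usr/local/etc/pgpool/server.cert'

Clients would then connect with sslmode=require, while the pgpool-to-database
leg can stay non-ssl.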


Re: [GENERAL] pgpool ssl handshake failure

2015-10-15 Thread AI Rumman
I configured Postgresql 9.4 and I am still getting the same error.

Thanks.

On Thu, Oct 15, 2015 at 7:16 AM, Adrian Klaver <adrian.kla...@aklaver.com>
wrote:

> On 10/15/2015 06:59 AM, AI Rumman wrote:
>
>> Hi,
>>
>> I am using pgpool-II version 3.4.3 (tataraboshi).
>> My database is Postgresql 8.4.
>>
>
> Probably already know, but 8.4 is approximately 1.25 years beyond EOL:
>
> http://www.postgresql.org/support/versioning/
>
>
>> I am trying to configure ssl mode from the client, while between pgpool
>> and the database it is non-ssl.
>>
>
> What is non-ssl, the database or pgpool?
>
> I configured it as documented and now I am getting this in my log:
>>
>> /2015-10-13 22:17:58: pid 1857: LOG:  new connection received
>> //2015-10-13 22:17:58: pid 1857: DETAIL:  connecting host=10.0.0.5
>> port=65326
>> //2015-10-13 22:17:58: pid 1857: LOG:  pool_ssl: "SSL_read": "ssl
>> handshake failure"
>> //2015-10-13 22:17:58: pid 1857: ERROR:  unable to read data from
>> frontend
>> //2015-10-13 22:17:58: pid 1857: DETAIL:  socket read failed with an
>> error "Success"/
>>
>> Please let me know what I am doing wrong.
>>
>
> Not quite sure but given the below from the 9.5 Release Notes:
>
> "
> Remove server configuration parameter ssl_renegotiation_limit, which was
> deprecated in earlier releases (Andres Freund)
>
> While SSL renegotiation is a good idea in theory, it has caused enough
> bugs to be considered a net negative in practice, and it is due to be
> removed from future versions of the relevant standards. We have therefore
> removed support for it from PostgreSQL."
>
> I would check to see what  ssl_renegotiation_limit is set to:
>
> http://www.postgresql.org/docs/8.4/static/runtime-config-connection.html
>
> and if it is not set to 0, then try that.
>
>
>
>> Thanks & Regards.
>>
>>
>
> --
> Adrian Klaver
> adrian.kla...@aklaver.com
>


Re: [GENERAL] ERROR: invalid input syntax for type date: IS IT A BUG here?

2015-08-21 Thread AI Rumman
Hi Adrian,

Thanks for replying here.

Actually, I modified the actual table name from my production system and
forgot to change the substr value.

You can see the result SELECT 558 in SQL 3, where it selected that many
rows.

Regards.

On Fri, Aug 21, 2015 at 3:13 PM, Adrian Klaver adrian.kla...@aklaver.com
wrote:

 On 08/21/2015 02:32 PM, AI Rumman wrote:

 Hi All,

 I am using Postgresql 9.1  where have a partitioned table as below:

 events_20150101
 events_20150102
 events_20150103
 ...
 events_overflow


 When I am running the following query it gives me result:

 *SQL 1: *

 select all relname, pg_total_relation_size(relname::text) as s,
 substr(relname,18)::date as dt from pg_stat_user_tables where
 schemaname = 'partitions' and relname not like '%overflow'


 What is the result?


 But when I run the following one, it gives me error:

 *SQL 2: *

 select * as ts
 from
 (
 select relname, pg_total_relation_size(relname::text) as s,
 substr(relname,18)::date as dt from pg_stat_user_tables where
 schemaname = 'partitions' and relname not like '%overflow'  order by
 pg_total_relation_size(relname::text) desc
 ) as q
 where dt = '2015-01-01'::date;
 *ERROR:  invalid input syntax for type date: ""*



 production=# select substr('events_20150101', 18);
  substr
 --------

 (1 row)

 production=# select substr('events_20150101', 18)::date;
 ERROR:  invalid input syntax for type date: ""

 Your substr is creating an empty string which cannot be cast to a date. I
 can't see how you could get a result from your first query, which is why I
 asked what you are seeing.


 However, explain is showing plan:

  Sort  (cost=202.03..202.04 rows=1 width=64)
    Sort Key: (pg_total_relation_size(((pg_stat_all_tables.relname)::text)::regclass))
    ->  Subquery Scan on pg_stat_all_tables  (cost=201.93..202.02 rows=1 width=64)
          ->  HashAggregate  (cost=201.93..201.99 rows=1 width=136)
                ->  Nested Loop Left Join  (cost=0.00..201.92 rows=1 width=136)
                      ->  Nested Loop  (cost=0.00..194.23 rows=1 width=132)
                            Join Filter: (c.relnamespace = n.oid)
                            ->  Seq Scan on pg_namespace n  (cost=0.00..1.39 rows=1 width=68)
                                  Filter: ((nspname <> ALL ('{pg_catalog,information_schema}'::name[])) AND (nspname !~ '^pg_toast'::text) AND (nspname = 'partitions'::name))
                            ->  Seq Scan on pg_class c  (cost=0.00..192.77 rows=6 width=72)
                                  Filter: ((relkind = ANY ('{r,t}'::"char"[])) AND (relname !~~ '%overflow'::text) AND ((substr((relname)::text, 18))::date = '2015-01-01'::date))
                      ->  Index Scan using pg_index_indrelid_index on pg_index i  (cost=0.00..7.66 rows=2 width=8)
                            Index Cond: (c.oid = indrelid)

 Again, if I create a table and run the query it runs:

 *SQL 3:*

 create table dba.tbl_list as  select all relname,
 pg_total_relation_size(relname::text) as s, substr(relname,18)::date
 as dt from pg_stat_user_tables where schemaname = 'partitions' and
 relname not like '%overflow' ;
 SELECT 558

  \d+ dba.tbl_list
           Table "dba.tbl_list"
   Column  |  Type  | Modifiers | Storage | Description
  ---------+--------+-----------+---------+-------------
   relname | name   |           | plain   |
   s       | bigint |           | plain   |
   dt      | date   |           | plain   |
  Has OIDs: no


 *SQL 4:*

 select * from dba.tbl_list  where dt = '2015-01-01';
       relname     |     s      |     dt
  -----------------+------------+------------
   events_20150101 | 1309966336 | 2015-01-01
 (1 row)

 Why is the 2nd query showing an error? Is it a bug? Or am I doing something silly?
 Any advice, please.

 Thanks & Regards.



 --
 Adrian Klaver
 adrian.kla...@aklaver.com
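
Building on that diagnosis: the outer WHERE clause gets pushed down into the
pg_class scan, so the ::date cast runs on every relname in the schema before
the LIKE filter has excluded the short ones. A hedged rewrite that only casts
names that really end in a date (assuming the YYYYMMDD suffix starts at
position 18, as in the thread):

select relname, pg_total_relation_size(relname::text) as s,
       case when relname::text ~ '[0-9]{8}$'
            then to_date(substr(relname::text, 18), 'YYYYMMDD')
       end as dt
from pg_stat_user_tables
where schemaname = 'partitions' and relname not like '%overflow';

CASE evaluates the cast only when the pattern matches, so stray names can no
longer break the query.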



[GENERAL] ERROR: invalid input syntax for type date: IS IT A BUG here?

2015-08-21 Thread AI Rumman
Hi All,

I am using Postgresql 9.1  where have a partitioned table as below:

 events_20150101
 events_20150102
 events_20150103
 ...
 events_overflow


When I am running the following query it gives me result:
*SQL 1: *

 select all relname, pg_total_relation_size(relname::text) as s,
 substr(relname,18)::date as dt from pg_stat_user_tables where schemaname =
 'partitions' and relname not like '%overflow'


But when I run the following one, it gives me error:
*SQL 2: *

 select * as ts
 from
 (
 select relname, pg_total_relation_size(relname::text) as s,
 substr(relname,18)::date as dt from pg_stat_user_tables where schemaname =
 'partitions' and relname not like '%overflow'  order by
 pg_total_relation_size(relname::text) desc
 ) as q
 where dt = '2015-01-01'::date;
 *ERROR:  invalid input syntax for type date: ""*

However, explain is showing plan:

 Sort  (cost=202.03..202.04 rows=1 width=64)
   Sort Key: (pg_total_relation_size(((pg_stat_all_tables.relname)::text)::regclass))
   ->  Subquery Scan on pg_stat_all_tables  (cost=201.93..202.02 rows=1 width=64)
         ->  HashAggregate  (cost=201.93..201.99 rows=1 width=136)
               ->  Nested Loop Left Join  (cost=0.00..201.92 rows=1 width=136)
                     ->  Nested Loop  (cost=0.00..194.23 rows=1 width=132)
                           Join Filter: (c.relnamespace = n.oid)
                           ->  Seq Scan on pg_namespace n  (cost=0.00..1.39 rows=1 width=68)
                                 Filter: ((nspname <> ALL ('{pg_catalog,information_schema}'::name[])) AND (nspname !~ '^pg_toast'::text) AND (nspname = 'partitions'::name))
                           ->  Seq Scan on pg_class c  (cost=0.00..192.77 rows=6 width=72)
                                 Filter: ((relkind = ANY ('{r,t}'::"char"[])) AND (relname !~~ '%overflow'::text) AND ((substr((relname)::text, 18))::date = '2015-01-01'::date))
                     ->  Index Scan using pg_index_indrelid_index on pg_index i  (cost=0.00..7.66 rows=2 width=8)
                           Index Cond: (c.oid = indrelid)



Again, if I create a table and run the query it runs:

*SQL 3:*

 create table dba.tbl_list as  select all relname,
 pg_total_relation_size(relname::text) as s, substr(relname,18)::date as dt
 from pg_stat_user_tables where schemaname = 'partitions' and relname not
 like '%overflow' ;
 SELECT 558



 \d+ dba.tbl_list
          Table "dba.tbl_list"
  Column  |  Type  | Modifiers | Storage | Description
 ---------+--------+-----------+---------+-------------
  relname | name   |           | plain   |
  s       | bigint |           | plain   |
  dt      | date   |           | plain   |
 Has OIDs: no


*SQL 4:*

 select * from dba.tbl_list  where dt = '2015-01-01';
      relname      |     s      |     dt
 ------------------+------------+------------
  events_20150101  | 1309966336 | 2015-01-01
 (1 row)


Why is the 2nd query showing an error? Is it a bug? Or am I doing something silly?
Any advice, please.

Thanks & Regards.


[GENERAL] official rpm build spec file

2015-08-08 Thread AI Rumman
Hi,

How can I get the official postgresql rpm spec file?
Please let me know. I want to build my own Postgresql rpm.

Thanks.
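
One hedged route, assuming the PGDG packages from yum.postgresql.org are
what is meant by "official": install the matching source RPM, which drops
the spec file into the rpmbuild tree, then rebuild after local edits (file
names illustrative):

$ rpm -ivh postgresql94-9.4.0-1PGDG.rhel6.src.rpm   # hypothetical file name
$ ls ~/rpmbuild/SPECS/
postgresql-9.4.spec
$ rpmbuild -ba ~/rpmbuild/SPECS/postgresql-9.4.spec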


Re: [GENERAL] Postgresql upgrade from 8.4 to latest

2015-07-29 Thread AI Rumman
Thanks for the good suggestions.


On Tue, Jul 28, 2015 at 3:13 PM, Joshua D. Drake j...@commandprompt.com
wrote:


 On 07/28/2015 01:35 PM, AI Rumman wrote:

 But from what I read, an in-place upgrade has a smaller outage compared to
 dump/restore.


 Correct, in fact if you do it with the link option, it will be very fast.


  But there are so many articles about bugs afterwards.
  Do you think it is a good idea to use pg_upgrade for a critical database
  application?


 It entirely depends, I have successfully used pg_upgrade many, many times.
 That is what -c is for, to work out all the kinks before you upgrade.

  Or is there any other tool I should consider? For example, slony?


 On a 2.5TB database, you may very well be doing a lot more harm than good
 using a tool such as slony.

 JD


 --
 Command Prompt, Inc. - http://www.commandprompt.com/  503-667-4564
 PostgreSQL Centered full stack support, consulting and development.
 Announcing I'm offended is basically telling the world you can't
 control your own emotions, so everyone else should do it for you.



[GENERAL] Postgresql upgrade from 8.4 to latest

2015-07-28 Thread AI Rumman
Hi,

I need to upgrade a Postgresql database from 8.4 to the latest stable
version (9.4). The db size is almost 2.5 TB.
Is an in-place pg_upgrade a good idea for it?

Thanks for advice.

Regards.


Re: [GENERAL] Postgresql upgrade from 8.4 to latest

2015-07-28 Thread AI Rumman
But from what I read, an in-place upgrade has a smaller outage compared to
dump/restore. But there are so many articles about bugs afterwards.
Do you think it is a good idea to use pg_upgrade for a critical database
application?
Or is there any other tool I should consider? For example, slony?

Thanks for advice.
Regards.
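
For reference, a sketch of the check-then-link flow discussed in this thread
(paths illustrative; -c only validates the clusters, -k / --link uses hard
links, after which the old cluster must not be started again):

$ pg_upgrade -b /usr/pgsql-8.4/bin -B /usr/pgsql-9.4/bin \
             -d /var/lib/pgsql/8.4/data -D /var/lib/pgsql/9.4/data -c

$ pg_upgrade -b /usr/pgsql-8.4/bin -B /usr/pgsql-9.4/bin \
             -d /var/lib/pgsql/8.4/data -D /var/lib/pgsql/9.4/data -k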

On Tue, Jul 28, 2015 at 1:29 PM, Joshua D. Drake j...@commandprompt.com
wrote:


 On 07/28/2015 01:12 PM, AI Rumman wrote:

 Hi,

 I need to upgrade a Postgresql database from 8.4 to the latest stable
 version (9.4). The db size is almost 2.5 TB.
 Is an in-place pg_upgrade a good idea for it?


 With quite a bit of testing, yes.

 But keep in mind, it is still an outage.

 JD



 Thanks for advice.

 Regards.



 --
 Command Prompt, Inc. - http://www.commandprompt.com/  503-667-4564
 PostgreSQL Centered full stack support, consulting and development.
 Announcing I'm offended is basically telling the world you can't
 control your own emotions, so everyone else should do it for you.



Re: [GENERAL] pg_dump error

2015-07-27 Thread AI Rumman
I am not using any comma here. With the same command I am able to take a
dump on other disks.
But with Isilon CIFS, I am getting the error.

Thanks.

On Mon, Jul 27, 2015 at 10:56 AM, Adrian Klaver adrian.kla...@aklaver.com
wrote:

 On 07/27/2015 10:55 AM, AI Rumman wrote:

 No, it is -s for schema-only backup.


 My ^ got misplaced in sending. I was talking about the ',' in the file
 name at the end of the command.


 Thanks.

 On Mon, Jul 27, 2015 at 10:53 AM, Adrian Klaver
 adrian.kla...@aklaver.com mailto:adrian.kla...@aklaver.com wrote:

 On 07/27/2015 10:48 AM, AI Rumman wrote:

 Hi,

 I am getting the following error during pg_dump:

 pg_dump dbname -v -t tablename -s -Fc -f dbname_tablename,sqlc

 ^
 Is that really a ','(comma)?



 pg_dump: [custom archiver] WARNING: ftell mismatch with expected
 position -- ftell used

  I am using Postgresql 9.1 and I have enough disk space on it.
  The backup
  drive is on an Isilon CIFS file system.

  What may be the cause of these errors?

  Please advise.


 Thanks.



 --
 Adrian Klaver
 adrian.kla...@aklaver.com mailto:adrian.kla...@aklaver.com




 --
 Adrian Klaver
 adrian.kla...@aklaver.com



[GENERAL] pg_dump error

2015-07-27 Thread AI Rumman
Hi,

I am getting the following error during pg_dump:

pg_dump dbname -v -t tablename -s -Fc -f dbname_tablename,sqlc

pg_dump: [custom archiver] WARNING: ftell mismatch with expected position
-- ftell used

I am using Postgresql 9.1 and I have enough disk space on it. The backup
drive is on an Isilon CIFS file system.

What may be the cause of these errors?

Please advise.


Thanks.
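
The custom-format archiver stores offsets in its table of contents and
cross-checks them against ftell(), which is where this warning comes from;
some network filesystems report file positions inconsistently. Two hedged
workarounds: dump to a local disk and copy the file afterwards, or write
through a pipe, which makes pg_dump treat the output as non-seekable and
skip the ftell checks (file name illustrative):

$ pg_dump dbname -v -t tablename -s -Fc | cat > /backup/dbname_tablename.sqlc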


Re: [GENERAL] pg_dump error

2015-07-27 Thread AI Rumman
No, it is -s for schema-only backup.

Thanks.

On Mon, Jul 27, 2015 at 10:53 AM, Adrian Klaver adrian.kla...@aklaver.com
wrote:

 On 07/27/2015 10:48 AM, AI Rumman wrote:

 Hi,

 I am getting the following error during pg_dump:

 pg_dump dbname -v -t tablename -s -Fc -f dbname_tablename,sqlc

^
 Is that really a ','(comma)?



 pg_dump: [custom archiver] WARNING: ftell mismatch with expected
 position -- ftell used

  I am using Postgresql 9.1 and I have enough disk space on it. The backup
  drive is on an Isilon CIFS file system.

  What may be the cause of these errors?

  Please advise.


 Thanks.



 --
 Adrian Klaver
 adrian.kla...@aklaver.com



Re: [GENERAL] Setting up HA postgresql

2015-07-21 Thread AI Rumman
Hi,

I made the following document 4 years back:
http://www.rummandba.com/2011/02/postgresql-failover-with-pgpool-ii.html

You may have a look to see if it is of any use for your work.

BTW, if you want to set up a shared-nothing, highly scalable system with
data sharding, you can go for pl/proxy.

Thanks.

On Tue, Jul 21, 2015 at 10:55 AM, Aviel Buskila avie...@gmail.com wrote:

 Can you link me up to a good tutorial using pgpool-II?

 2015-07-21 20:02 GMT+03:00 Joshua D. Drake j...@commandprompt.com:


 On 07/21/2015 08:34 AM, William Dunn wrote:

 Hello Aviel,

 On Tue, Jul 21, 2015 at 3:56 AM, Aviel Buskila avie...@gmail.com
 mailto:avie...@gmail.com wrote:

 How can I set a highly available postgresql in a share-nothing
 architecture?

 I suggest you review the official documentation on high-availability
 configurations linked below:
 http://www.postgresql.org/docs/current/static/high-availability.html

 The most common configuration is to use PostgreSQL's built in
 master/standby streaming replication. However you will need to develop
 any fail-over logic you need yourself or use a third party tool such as
 EnterpriseDB's Failover Manager.


 Or use already available open source tools such as Pgpool-II or Linux-HA.

 Sincerely,
 JD

 --
 Command Prompt, Inc. - http://www.commandprompt.com/  503-667-4564
 PostgreSQL Centered full stack support, consulting and development.
 Announcing I'm offended is basically telling the world you can't
 control your own emotions, so everyone else should do it for you.





Re: [GENERAL] Postgresql CIFS

2015-03-05 Thread AI Rumman
Thanks.

On Wed, Mar 4, 2015 at 10:22 PM, John R Pierce pie...@hogranch.com wrote:

 On 3/4/2015 9:10 PM, AI Rumman wrote:

 I am working on setting up a new database server with shared disk and
 cluster failover.
 In this environment, only one node will be active at a certain time.
 Underneath, we are planning to have shared storage with CIFS protocol.

  As I am a newbie with this storage, can anyone please help me with some
  info on what database issues I can face with this kind of file system
  protocol with Postgresql 9.1?


 why are you building a NEW system with the 2nd oldest release of
 postgres?  within a year or so, 9.1 will be obsolete and unsupported.

 CIFS will be pretty slow at the sorts of random writes that a database
 server does a lot of, and there's all sorts of room for hard-to-diagnose
 issues with unsafe write cache buffering in the file server, depending on
 the specifics of the CIFS server implementation. Not sure how you
 implement a high availability CIFS server without single points of failure,
 either...   thats hard enough with shared block storage implementations
 (requiring redundant storage networks, switches, and dual storage
 controllers with shared cache, dual homing the actual physical block
 storage, which is dual ported and all raid 10 typically).

 ISCSI or a proper SAN (fiberchannel) would be a much better choice for a
 shared storage active/passive cluster, just implement some sort of storage
 fencing to ensure only one node can have the file system mounted at a time.

 with postgres, its usually better to implement a HA cluster via streaming
 replication, the master and slave each with their own dedicated storage,
 and promoting the slave to master if/when the master dies.


 --
 john r pierce  37N 122W
 somewhere on the middle of the left coast



 --
 Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
 To make changes to your subscription:
 http://www.postgresql.org/mailpref/pgsql-general



[GENERAL] Postgresql CIFS

2015-03-04 Thread AI Rumman
Hi All,

I am working on setting up a new database server with shared disk and
cluster failover.
In this environment, only one node will be active at a certain time.
Underneath, we are planning to have shared storage with CIFS protocol.

As I am a newbie with this storage, can anyone please help me with some
info on what database issues I can face with this kind of file system
protocol with Postgresql 9.1?

Thanks.


[GENERAL] increasing varchar column size is taking too much time

2015-02-13 Thread AI Rumman
Hi,

I started the following query on Postgresql 9.1; only this sql is running
on the host, and it has been taking more than an hour and is still
running.

 alter table userdata.table1 alter column name type varchar(512);


Here is the table description:


 \d+ userdata.table1
                                       Table "userdata.table1"
        Column        |            Type             |       Modifiers        | Storage  | Description
 ---------------------+-----------------------------+------------------------+----------+-------------
  id                  | character varying(50)       | not null               | extended |
  code                | character varying(32)       | not null               | extended |
  accid               | character varying(50)       | not null               | extended |
  name                | character varying(100)      | not null               | extended |
  create_time         | timestamp with time zone    | not null default now() | plain    |
  modified_time       | timestamp with time zone    | not null default now() | plain    |
  install_date        | timestamp without time zone |                        | plain    |
  recent_scan_date    | timestamp without time zone |                        | plain    |
  update_date         | timestamp without time zone |                        | plain    |
  setting             | character varying(100)      |                        | extended |
  name                | character varying(100)      |                        | extended |
  type                | character varying(8)        |                        | extended |
  version             | character varying(128)      |                        | extended |
  package             | character varying(255)      |                        | extended |
  permission          | text                        |                        | extended |
  trigger             | character varying(10)       |                        | extended |
  reasons             | character varying(200)      |                        | extended |
  note                | character varying(255)      |                        | extended |
  size                | bigint                      |                        | plain    |
  usage               | bigint                      |                        | plain    |
  running             | character varying(4)        |                        | extended |
  location            | character varying(60)       |                        | extended |
  can_stop            | character(1)                |                        | extended |
  can_uninstall       | character(1)                |                        | extended |
  flagged_status      | character(1)                |                        | extended |
  status              | character(1)                |                        | extended |
  consultation_status | character(1)                |                        | extended |
  trust               | character(1)                |                        | extended |
 Indexes:
     "table1_pk" PRIMARY KEY, btree (id, code)
     "table1_accid_id_hashcode_idx" btree (accid, id, code)
     "table1_accid_idx" btree (accid)
     "table1_id_idx" btree (id)
 Triggers:
     table1s_delete_trigger BEFORE DELETE ON table1 FOR EACH ROW EXECUTE PROCEDURE delete_jangles_table1()
     table1s_insert_trigger BEFORE INSERT ON table1 FOR EACH ROW EXECUTE PROCEDURE insert_jangles_table1()
 Child tables: table1_0,
               table1_1,
               table1_10,
               table1_2,
               table1_3,
               table1_4,
               table1_5,
               table1_6,
               table1_7,
               table1_8,
               table1_9
 Has OIDs: no


Here are the number of rows and pages in partition:



    relname  | reltuples | relpages
 ------------+-----------+----------
  table1_0   |  10076840 |   362981
  table1_1   |  10165073 |   366548
  table1_2   |  10046372 |   361838
  table1_3   |  10114727 |   364360
  table1_4   |  10155816 |   366054
  table1_5   |  10188953 |   367023
  table1_6   |  10275270 |   370887
  table1_7   |  10163937 |   366245
  table1_8   |  10262516 |   369350
  table1_9   |  10359893 |   372099
  table1_10  |  10434026 |   375327
  table1     |         0 |        0



Any idea why the above ALTER statement is taking that much time?
Is it because of the number of rows we have in each partition?
Any suggestions?
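
For context: on 9.1, any ALTER COLUMN ... TYPE rewrites the whole table, and
since the ALTER recurses to the child tables, all eleven ~10M-row partitions
are rewritten here with their indexes rebuilt; 9.2 turned a pure varchar
length increase into a catalog-only change. A small hedged check of what is
currently declared (atttypmod stores the limit plus a 4-byte header):

SELECT attname, atttypmod - 4 AS declared_length
FROM pg_attribute
WHERE attrelid = 'userdata.table1'::regclass
  AND attname = 'name';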


[GENERAL] what is parse unnamed?

2015-01-29 Thread AI Rumman
Hi All,

I am facing some slow sqls in my database as follows:

2015-01-29 18:57:19.777 CST [29024][user@user] 10.6.48.226(59246): [1-1]
LOG:  duration: 3409.729 ms  parse <unnamed>:
2015-01-29 18:57:19.782 CST [29140][user@user] 10.6.48.227(36662): [1-1]
LOG:  duration: 3468.549 ms  parse <unnamed>:
2015-01-29 18:57:19.785 CST [29512][user@db] 10.6.48.227(37110): [1-1] LOG:
 duration: 3505.666 ms  parse <unnamed>:
2015-01-29 18:57:19.799 CST [28340][user@user] 10.6.48.224(36070): [1-1]
LOG:  duration: 3551.065 ms  parse <unnamed>:
2015-01-29 18:57:19.803 CST [29678][user@db] 10.6.48.226(59748): [1-1] LOG:
 duration: 3557.675 ms  parse <unnamed>:
2015-01-29 18:57:19.814 CST [27185][user@db] 10.6.48.224(35502): [1-1] LOG:
 duration: 3575.141 ms  parse <unnamed>: INSERT INTO schema.table
(id,aac_id,create_time,event_type,details) VALUES($1,$2,$3,$4,$5)
2015-01-29 18:57:19.816 CST [29604][user@db] 10.6.48.226(59600): [1-1] LOG:
 duration: 3508.277 ms  parse <unnamed>:
2015-01-29 18:57:19.816 CST [27586][user@db] 10.6.48.225(38279): [1-1] LOG:
 duration: 3540.860 ms  parse <unnamed>: INSERT INTO schema.table
(id,aac_id,create_time,event_type,details) VALUES($1,$2,$3,$4,$5)
2015-01-29 18:57:19.826 CST [28996][user@db] 10.6.48.225(39696): [1-1] LOG:
 duration: 3589.108 ms  parse <unnamed>: SELECT id, location_ts, type,
aac_id, latitude, longitude, accuracy, source, create_time, modified_time
FROM schema.table2 WHERE aac_id = $1 AND type = 'E'


These queries run fine when I execute them separately.

Can you please let me know what "parse <unnamed>" means?

Thanks.


[GENERAL] pgcluu error

2014-09-15 Thread AI Rumman
Hi,

I am trying to use pgcluu with collected stats and got the error:
Can't call method "print" on an undefined value at
/opt/pgdata/pgcluu_prod/pgcluubin/pgcluu line 5494

Does anyone have any idea?

Thanks.


[GENERAL] not finding rows using ctid

2014-08-07 Thread AI Rumman
Hi,

I am getting the logs as follows:

LOG:  process 32145 acquired ExclusiveLock on tuple (153420,5) of relation
663326 of database 475999 after 1123.028 ms


But when I am executing sql to find the row in that table using ctid =
'(153420,5)', I get no rows.


Any idea why?


Thanks.
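
To at least pin down what the log line refers to, the numbers can be
resolved by hand; 475999 is the database OID and 663326 the relation OID
(a sketch):

SELECT datname FROM pg_database WHERE oid = 475999;
-- then, connected to that database:
SELECT relname FROM pg_class WHERE oid = 663326;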


Re: [GENERAL] not finding rows using ctid

2014-08-07 Thread AI Rumman
I didn't execute any VACUUM FULL, and I tried to get the row 3 hours after
the issue.

Thanks.


On Thu, Aug 7, 2014 at 1:51 PM, Adrian Klaver adrian.kla...@aklaver.com
wrote:

 On 08/07/2014 12:40 PM, AI Rumman wrote:


 Hi,

 I am getting the logs as follows:

 LOG:  process 32145 acquired ExclusiveLock on tuple (153420,5) of
 relation 663326 of database 475999 after 1123.028 ms


 But, when I am executing sqls to find the row on that table using the
 ctid = '(153420,5)', I get no rows.


 Any idea, why?


 http://www.postgresql.org/docs/9.3/static/ddl-system-columns.html
 ctid

 The physical location of the row version within its table. Note that
 although the ctid can be used to locate the row version very quickly, a
 row's ctid will change if it is updated or moved by VACUUM FULL. Therefore
 ctid is useless as a long-term row identifier. The OID, or even better a
 user-defined serial number, should be used to identify logical rows.

 Something changed the row between the time you saw it in the log and the
 time you did the query.



 Thanks.



 --
 Adrian Klaver
 adrian.kla...@aklaver.com



[GENERAL] Whats is lock type transactionid?

2014-07-17 Thread AI Rumman
Hi,

I have been facing lock contention in my Postgresql 9.1 DB.
When querying the pg_locks table, I found a lock type of
transactionid.
Could someone please tell me what it means?

Thanks.


Re: [GENERAL] Whats is lock type transactionid?

2014-07-17 Thread AI Rumman
I am experiencing lock contention on one single UPDATE statement at a
certain time each day. This is a small table to UPDATE.
My suspicion is that we are facing it for one specific ID.
Could you please let me know how I can identify the tuple?

I got a log like the following:

        blocker_target         | blocker_pid | blocker_mode  | depth |            target             | pid  |     mode      |    seq
-------------------------------+-------------+---------------+-------+-------------------------------+------+---------------+-----------
 (tuple,475999,662775,1988,6,) |        3557 | ExclusiveLock |     1 | (tuple,475999,662775,1988,6,) | 3543 | ExclusiveLock | 3557,3543
 (tuple,475999,662775,1988,6,) |        3557 | ExclusiveLock |     1 | (tuple,475999,662775,1988,6,) | 7387 | ExclusiveLock | 3557,7387

Any ideas on it?

Thanks.


On Thu, Jul 17, 2014 at 12:40 PM, Douglas J Hunley doug.hun...@gmail.com
wrote:

 On Thu, Jul 17, 2014 at 3:34 PM, AI Rumman rumman...@gmail.com wrote:

 Hi,

 I have been facing lock contention in my Postgresql 9.1 DB.
 When querying the pg_locks table, I found a lock type of
 transactionid.
 Could someone please tell me what it means?

 Thanks.


 from http://www.postgresql.org/docs/9.3/static/view-pg-locks.html :
 Every transaction holds an exclusive lock on its virtual transaction ID
 for its entire duration. If a permanent ID is assigned to the transaction
 (which normally happens only if the transaction changes the state of the
 database), it also holds an exclusive lock on its permanent transaction ID
 until it ends. When one transaction finds it necessary to wait specifically
 for another transaction, it does so by attempting to acquire share lock on
 the other transaction ID (either virtual or permanent ID depending on the
 situation). That will succeed only when the other transaction terminates
 and releases its locks.

 I believe that describes what you're seeing

 --
 Douglas J Hunley (doug.hun...@gmail.com)
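
Since a waiter asks for ShareLock on the blocker's transaction ID, pairing
waiting and granted transactionid entries in pg_locks points at the blocking
backend; a hedged sketch:

SELECT waiter.pid AS waiting_pid,
       holder.pid AS blocking_pid,
       waiter.transactionid
FROM pg_locks waiter
JOIN pg_locks holder
  ON holder.transactionid = waiter.transactionid
 AND holder.granted
WHERE waiter.locktype = 'transactionid'
  AND NOT waiter.granted;

The blocking pid can then be looked up in pg_stat_activity (procpid on 9.1)
to see what that session is doing.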



Re: [GENERAL] Whats is lock type transactionid?

2014-07-17 Thread AI Rumman
Yes. But as we are using bind variables, we are not able to get the ID of
the tuple.


On Thu, Jul 17, 2014 at 2:08 PM, Douglas J Hunley doug.hun...@gmail.com
wrote:


 On Thu, Jul 17, 2014 at 12:54 PM, AI Rumman rumman...@gmail.com wrote:

 I am experiencing lock contention on one single UPDATE statement at a
 certain time each day. This is a small table to UPDATE.
 My suspicion is that we are facing it for one specific ID.
 Could you please let me know how I can identify the tuple?


 Have you tried the lock monitoring queries on
 http://wiki.postgresql.org/wiki/Lock_Monitoring yet by chance?



 --
 Douglas J Hunley (doug.hun...@gmail.com)



Re: [GENERAL] lock contention, need profiling idea

2014-07-01 Thread AI Rumman
There was no CREATE INDEX command running on the host.


On Mon, Jun 30, 2014 at 5:06 PM, Michael Paquier michael.paqu...@gmail.com
wrote:




 On Tue, Jul 1, 2014 at 7:36 AM, AI Rumman rumman...@gmail.com wrote:

 I see lots of similar log messages at a certain time of day on
 Postgresql 9.1:

 LOG:  process 18855 still waiting for ShareLock on transaction 2856146023
 after 1001.209 ms
 STATEMENT:  UPDATE table1 SET time = $1 WHERE id = $2

 The table1 size is 17 G.

 What could be the reason for this lock contention?
 autovacuum?

 This may be a CREATE INDEX query taking some time, perhaps combined with
 an old prepared transaction still holding a lock? Perhaps a cron job
 running behind that you are not aware of?
 You should have a look at pg_stat_activity, pg_prepared_xacts and pg_locks
 to get more information about the transactions running and the locks being
 taken.
 --
 Michael



[GENERAL] lock contention, need profiling idea

2014-06-30 Thread AI Rumman
I see lots of similar log messages at a certain time of day on Postgresql
9.1:

LOG:  process 18855 still waiting for ShareLock on transaction 2856146023
after 1001.209 ms
STATEMENT:  UPDATE table1 SET time = $1 WHERE id = $2

The table1 size is 17 G.

What could be the reason for this lock contention?
autovacuum?

Please give some idea.

Thanks.


[GENERAL] skipping analyze of table1 --- lock not available?

2014-06-23 Thread AI Rumman
Could someone please tell me why I am getting these in my log:

2014-06-23 00:00:00.031 CDT [11379][@] : [1-1]LOG:  skipping analyze of
table1 --- lock not available
2014-06-23 00:00:00.056 CDT [11380][@] : [1-1]LOG:  skipping analyze of
table1 --- lock not available
2014-06-23 00:00:00.081 CDT [11381][@] : [1-1]LOG:  skipping analyze of
table1 --- lock not available
2014-06-23 00:00:00.106 CDT [11382][@] : [1-1]LOG:  skipping analyze of
table1 --- lock not available
2014-06-23 00:00:00.131 CDT [11383][@] : [1-1]LOG:  skipping analyze of
table1 --- lock not available
2014-06-23 00:00:00.156 CDT [11384][@] : [1-1]LOG:  skipping analyze of
table1 --- lock not available
2014-06-23 00:00:00.181 CDT [11385][@] : [1-1]LOG:  skipping analyze of
table1 --- lock not available
2014-06-23 00:00:00.206 CDT [11386][@] : [1-1]LOG:  skipping analyze of
table1 --- lock not available
2014-06-23 00:00:00.231 CDT [11387][@] : [1-1]LOG:  skipping analyze of
table1 --- lock not available


Thanks.


[GENERAL] Lock during insert statement

2014-05-21 Thread AI Rumman
Could anyone please tell me why my system is waiting to get a lock for an
INSERT statement?

2014-05-21 07:52:49.965 PDT [9-1]LOG:  process 31407 acquired ExclusiveLock
on extension of relation 429298276 of database 21497 after 3219.963 ms
2014-05-21 07:52:49.965 PDT [10-1]STATEMENT:  INSERT INTO table1
(end_id,account_id,create_time,event_type,details) VALUES($1,$2,$3,$4,$5)
2014-05-21 07:52:49.965 PDT [3-1]LOG:  duration: 4590.048 ms  execute
unnamed: INSERT INTO table1
(end_id,account_id,create_time,event_type,details) VALUES($1,$2,$3,$4,$5)


Thanks.


Re: [GENERAL] Lock during insert statement

2014-05-21 Thread AI Rumman
Got it.
Thanks.
Any special parameter to tune it? Like wal_buffers or shared_buffers?


On Wed, May 21, 2014 at 3:28 PM, Jeff Janes jeff.ja...@gmail.com wrote:

 On Wed, May 21, 2014 at 3:14 PM, AI Rumman rumman...@gmail.com wrote:

 Could anyone please tell me why my system is waiting to get a lock for an
 INSERT statement?

 2014-05-21 07:52:49.965 PDT [9-1]LOG:  process 31407 acquired
 ExclusiveLock on extension of relation 429298276 of database 21497 after
 3219.963 ms


 "on extension of relation" means that it needs to add 8KB to the end of
 the table.  That it takes so long to obtain that lock suggests you have
 some serious IO congestion.

 Cheers,

 Jeff



[GENERAL] what should be the best autovacuum configuration for daily partition table

2014-05-14 Thread AI Rumman
Hi,

I have a table with a daily partition setup where old partitions are static
tables; that is, after each day we don't get any new data in old partitions.
The database size is 2 TB and I am running with autovacuum on for
Postgresql 8.4.
Now I am facing a problem where old tables are not being vacuumed by the
autovacuum daemon; every now and then we see autovacuum running to
prevent wraparound in the database, and age(datfrozenxid) gets high for the
database.
Any idea what the best configuration would be for this type of database
environment?

Thanks.
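
One approach that may fit this pattern: freeze each partition once, right
after it stops receiving data, so its relfrozenxid is current and the forced
anti-wraparound autovacuum has nothing old left to scan there (partition
name illustrative):

VACUUM FREEZE your_schema.partition_20140101;

-- partitions whose xids are oldest
SELECT relname, age(relfrozenxid)
FROM pg_class
WHERE relkind = 'r'
ORDER BY age(relfrozenxid) DESC
LIMIT 10;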


[GENERAL] Is it good to have toast table for information schema table?

2014-04-22 Thread AI Rumman
Hi,

Is it OK to have a toast table for an information_schema table? I am using
Postgresql 8.4 and the current state is:

select datname, datfrozenxid from pg_database;
  datname  | datfrozenxid
-----------+--------------
 template1 |   1462730397
 template0 |   1462741467
 postgres  |   1562754912
 jangles   |   1459615432
(4 rows)


select * from pg_class where relfrozenxid = 1459615432;
    relname     | reltoastidxid | relhasindex | relfrozenxid
----------------+---------------+-------------+--------------
 pg_toast_11447 |         11451 | t           |   1459615432
(1 row)


select 11447::regclass;
            regclass
---------------------------------
 information_schema.sql_features
(1 row)

Please advise.

Thanks.


Re: [GENERAL] Is it safe to stop postgres in between pg_start_backup and pg_stop_backup?

2014-04-03 Thread AI Rumman
What we did in this kind of higher-performance storage migration was to set
up a standby on those mounts and then execute a failover.


On Thu, Apr 3, 2014 at 3:58 PM, Alan Hodgson ahodg...@simkin.ca wrote:

 On Thursday, April 03, 2014 02:48:03 PM Steven Schlansker wrote:
  On Apr 2, 2014, at 3:08 PM, Jacob Scott jacob.sc...@gmail.com wrote:
* pg_start_backup
* Take a filesystem snapshot (of a volume containing postgres data
 but not
  pg_xlog) * pg_stop_backup
* pg_ctl stop
* Bring a new higher performing disk online from snapshot
* switch disks (umount/remount at same mountpoint)
* pg_ctl start

 ... with a recovery.conf in place when starting the new instance.

 
  Assuming you ensure that your archived xlogs are available same to the
 new
  instance as the old

 And make sure they're archived to a different disk.

  Another option you could consider is rsync.  I have often transferred
  databases by running rsync concurrently with the database to get a dirty
  backup of it.  Then once the server is shutdown you run a cleanup rsync
  which is much faster than the initial run to ensure that the destination
  disk is consistent and up to date.  This way your downtime is limited to
  how long it takes rsync to compare fs trees / fix the inconsistencies.
 

 This would be simpler.



 --
 Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
 To make changes to your subscription:
 http://www.postgresql.org/mailpref/pgsql-general



[GENERAL] When does pg_archivecleanup work?

2014-03-11 Thread AI Rumman
As I have a very low wal_keep_segments compared to my wal generation, I am
collecting archived wal files at the slave.
Now, in order to clean up the wal archive directory at the slave, I used
archive_cleanup_command.
I watched the archived wal files piling up at the slave, and after a
certain point they were cleared up.
But I still didn't get the idea of when it will be executed to clean up my
directory.

Example: recovery.conf that I used:
standby_mode = 'on' # enables stand-by (readonly) mode
primary_conninfo = 'host= 10.0.0.1 port= 5432 user= replicator
password=replicator'
trigger_file = '/pgdata/pgsql.trigger.5432'
restore_command = 'cp /archivewal/%f %p'
archive_cleanup_command = '$HOME/bin/pg_archivecleanup  /archivewal %r'

Any idea please.

Thanks.


[GENERAL] EMC SRDF technology for creating replication with Postgresql?

2014-01-06 Thread AI Rumman
Hi,

Did any of you here use EMC SRDF technology for creating replication with
Postgresql?

Thanks.


[GENERAL] what does the error mean?

2013-12-30 Thread AI Rumman
Hi,

What does the error message mean?

ERROR:  unexpected chunk number 1 (expected 2) for toast value 179638221 in
pg_toast_16700

Please let me know.

Thanks.


[GENERAL] Seems like bug in 9.1.3, need to confirm.

2013-12-20 Thread AI Rumman
HI,

I am working on Postgresql 9.1.3.
I executed the following query and got an error:
 select relname, pg_size_pretty(pg_table_size(relname::text)) as s from
pg_stat_user_tables order by pg_table_size(relname::text) desc  limit 10;
ERROR:  relation "tab_20130206" does not exist

That table does not exist in the db, that's correct. But is it right that I
am getting the error, or is it a bug?

Thanks.


Re: [GENERAL] Question(s) about crosstab

2013-12-18 Thread AI Rumman
Hi,

Once I faced the same problem of adding a new type and rewriting the query
working with the crosstab function. Then I created a dynamic crosstab
function. You may have a look at it to see if it works out for you:
http://www.rummandba.com/2013/03/postgresql-dynamic-crosstab-function.html

Thanks.


On Tue, Dec 17, 2013 at 3:31 PM, Ken Tanzer ken.tan...@gmail.com wrote:

 Hi.  I've got a simple table unit_hold, with grant numbers, buildings and
 counts of unit types, which I need to summarize, along with a table listing
 unit types:

 \d unit_hold
           Table "public.unit_hold"
         Column        |         Type          | Modifiers
 ----------------------+-----------------------+-----------
  grant_number_code    | character varying(10) |
  housing_project_code | character varying(10) |
  unit_type_code       | character varying(10) |
  count                | bigint                |

 SELECT * FROM unit_hold limit 3;
  grant_number_code | housing_project_code | unit_type_code | count
 -------------------+----------------------+----------------+-------
  1                 |                      | 4BR            |     1
  1                 |                      | 1BR            |     1
  1                 |                      | 1BR            |     1

 SELECT unit_type_code,description FROM l_unit_type;
  unit_type_code | description
 ----------------+-------------
  5BR| 5 Bedroom
  4BR| 4 Bedroom
  3BR| 3 Bedroom
  6BR| 6 Bedroom
  UNKNOWN| Unknown
  GROUP  | Group Home
  2BR| 2 Bedroom
  1BR| 1 Bedroom
  0BR| Studio
  SRO| SRO


 I thought this would be a good candidate for crosstab.  After wrestling
 with the documentation, this is the best I could come up with:

 SELECT * FROM crosstab(
   'SELECT housing_project_code||''_''||grant_number_code AS
 project_and_grant,grant_number_code,housing_project_code,unit_type_code,count
 FROM unit_hold ORDER BY 1,2',
   'SELECT * FROM (SELECT DISTINCT unit_type_code FROM l_unit_type) foo
 ORDER BY unit_type_code ~ ''^[0-9]'' DESC, unit_type_code'
 ) AS ct(project_and_grant varchar, grant_number_code varchar,
 housing_project_code varchar, "0BR" bigint, "1BR" bigint, "2BR" bigint,
 "3BR" bigint, "4BR" bigint, "5BR" bigint, "6BR" bigint, "GROUP" bigint,
 "SRO" bigint, "UNKNOWN" bigint)

 So here are my questions:

 1)  Is there a simpler way?  I'm hoping I made this unnecessarily
 cumbersome and complicated.
 2)  AFAICT, if a new unit type were to be added, I'd have to rewrite this
 query.  Is there any way to avoid that?
 3)  It seems like everything after the first query, except for the
 category field, is redundant information, and that in theory you should be
 able to say crosstab('query','category_field').  Is there any inherent
 reason this simpler form couldn't work, or is it just that no one has
 wanted to do it, or gotten to it yet?

 Thanks in advance!

 Ken


 --
 AGENCY Software
 A data system that puts you in control
 100% Free Software
 *http://agency-software.org/ http://agency-software.org/*
 ken.tan...@agency-software.org
 (253) 245-3801

 Subscribe to the mailing list to learn more about AGENCY or
 follow the discussion.



Re: [GENERAL] Data Guard for Postgres?

2013-12-13 Thread AI Rumman
Postgresql 9.2 streaming replication, which is very similar to Oracle
Data Guard.


On Fri, Dec 13, 2013 at 1:57 PM, Roy Anderson roy.ander...@gmail.com wrote:

 Good day. I am inquiring as to whether there is a free solution
 available that approximates, in Postgres, what Data Guard does for
 Oracle DB. Can anyone advise?

 Thank you,

 Roy


 --
 Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
 To make changes to your subscription:
 http://www.postgresql.org/mailpref/pgsql-general



[GENERAL] pg_start_backup('label',true) why do I need 2nd parameter?

2013-11-05 Thread AI Rumman
Hi all,

A few days back, I faced a problem where pg_start_backup('label') hung
on the server forever.
I stopped the process, then used pg_start_backup('label', true) and it
worked.

Now I am trying to investigate why I needed to pass true as the second
parameter. The doc reads:
There is an optional second parameter of type boolean. If true, it
specifies executing pg_start_backup as quickly as possible. This forces an
immediate checkpoint which will cause a spike in I/O operations, slowing
any concurrently executing queries.

I tried to reproduce the scenario where pg_start_backup('label') hung,
and I failed.
Any idea how I can reproduce that issue so I can investigate?

I am using Postgresql 9.2.

Thanks.
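
For reference, a minimal sketch of the exclusive base-backup bracket on 9.2;
the second argument requests the immediate checkpoint the quoted doc
describes, and 'label' is just an arbitrary name:

SELECT pg_start_backup('label', true);  -- true = immediate checkpoint
-- copy the data directory with an external tool here
SELECT pg_stop_backup();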


Re: [GENERAL] how to get the connected session pointer ( Archive * AH)

2013-10-22 Thread AI Rumman
I have libpq C interface code which takes a user/password to connect to the
db. Now I want to call it from a pgsql function where the user will not need
to supply any user name or password; rather, it will use the current session ID.
How can I do it? Any idea, please.

Thanks.


On Tue, Oct 22, 2013 at 2:05 AM, Luca Ferrari fluca1...@infinito.it wrote:

 On Tue, Oct 22, 2013 at 1:32 AM, AI Rumman rumman...@gmail.com wrote:
  Hi,
 
  Is there a way in Postgresql C function to get the connected session
 pointer
  ( Archive * AH)
  and use it for further execution?

 If I read pg_archiver.c correctly, the AH pointer is used only during
 the archiving and is not globally stored anywhere, and therefore I
 suspect the answer is NO (at least unless you modify the archiver).
 What is the aim of having such pointer?

 Luca



[GENERAL] how to get the connected session pointer ( Archive * AH)

2013-10-21 Thread AI Rumman
Hi,

Is there a way in Postgresql C function to get the connected session
pointer ( Archive * AH)
and use it for further execution?

Thanks.


[GENERAL] when do I analyze after concurrent index creation?

2013-10-17 Thread AI Rumman
Hi,

I have a very basic question.
If I create an index concurrently, do I then need to analyze the table? If
yes, when?
Please let me know.

Thanks.
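
One case where the answer is clearly yes, as a sketch with hypothetical
table and column names: an expression index keeps its own statistics, and
only ANALYZE (or autoanalyze) populates them:

CREATE INDEX CONCURRENTLY idx_orders_lower_email ON orders (lower(email));
ANALYZE orders;  -- gathers statistics on the lower(email) expression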


[GENERAL] what is BIND in OS log for Postgresql

2013-10-14 Thread AI Rumman
Hi,

I got the following in my OS process list using the ps command:

postgres 17087  2770  0 Sep30 ?00:07:49 postgres: myapp appname
10.0.0.1(35607) BIND
postgres 32545  2770  1 Oct01 ?00:19:09 postgres: myapp appname
10.0.0.1(35783) SELECT
postgres 32546  2770  1 Oct01 ?00:15:25 postgres: myapp appname
10.0.0.1(35787) SELECT

Could you please let me know why I got this BIND?

Thanks.


[GENERAL] recover deleted data

2013-09-20 Thread AI Rumman
Is there any way to recover a deleted record in Postgresql 9.2 if we don't
have any backup?

Thanks.


[GENERAL] 9.2 Replication in Ubuntu ; need help

2013-09-12 Thread AI Rumman
Hi,

I am trying to set up replication with Postgresql 9.2 on Ubuntu on Amazon
EC2, but I am stuck in the process.
Postgresql standby log is showing:

2013-09-12 14:45:47 UTC LOG:  entering standby mode
2013-09-12 14:45:47 UTC LOG:  redo starts at 1/3920
2013-09-12 14:45:47 UTC LOG:  record with zero length at 1/3905C088
2013-09-12 14:45:47 UTC LOG:  streaming replication successfully connected
to primary
2013-09-12 14:45:47 UTC LOG:  consistent recovery state reached at
1/391C8588

But ,
 psql
2013-09-12 14:48:04 UTC FATAL:  the database system is starting up

I have configured replication on CentOS many times and followed the same
steps. Is there something I am missing?

Thanks.


Re: [GENERAL] 9.2 Replication in Ubuntu ; need help

2013-09-12 Thread AI Rumman
Yes, I can access to the primary server perfectly.




On Thu, Sep 12, 2013 at 11:55 AM, Suzuki Hironobu hiron...@interdb.jp wrote:

 Hi,


 (13/09/12 23:53), AI Rumman wrote:

 Hi,

 I am trying to setup replication with Postgresql 9.2 in Ubuntu on Amazon
 Ec2.
 But stuck in the process.
 Postgresql standby log is showing:

 2013-09-12 14:45:47 UTC LOG:  entering standby mode
 2013-09-12 14:45:47 UTC LOG:  redo starts at 1/3920
 2013-09-12 14:45:47 UTC LOG:  record with zero length at 1/3905C088
 2013-09-12 14:45:47 UTC LOG:  streaming replication successfully connected
 to primary
 2013-09-12 14:45:47 UTC LOG:  consistent recovery state reached at
 1/391C8588

 But ,
   psql
 2013-09-12 14:48:04 UTC FATAL:  the database system is starting up

 I configured replication for Centos so many times and followed those
 steps.
 Is there something I am missing?

 Thanks.


 This FATAL message is returned when the status of primary server is
 PM_STARTUP or PM_RECOVERY.
 I guess your primary server is in startup state.
 Can you access to the primary server using psql? Please check the primary
 server.


 Regards,





 --
 Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
 To make changes to your subscription:
 http://www.postgresql.org/mailpref/pgsql-general



[GENERAL] regexp idea

2013-08-27 Thread AI Rumman
Hi,

I have a string like:
Gloucester Catholic vs. St. Augustine baseball, South Jersey Non-Public A
final, June 5, 2013

I need to extract date part from the string.

I used the follows:
regexp_matches(title,'[.* ]+\ (Jul|August|Sep)[, a-zA-Z0-9]+' )

But it gives me result August as it stops at Augustine.

In my case, the date can be in different formats; some records may use a
comma and some may not.

Any idea to achieve this?

Thanks.


Re: [GENERAL] regexp idea

2013-08-27 Thread AI Rumman
Thanks. That's awesome.
Do you have any good guide where I can learn more about regular expressions?


On Tue, Aug 27, 2013 at 3:57 PM, Rob Sargent robjsarg...@gmail.com wrote:

 On 08/27/2013 12:44 PM, AI Rumman wrote:

 Hi,

 I have a string like:
 Gloucester Catholic vs. St. Augustine baseball, South Jersey Non-Public A
 final, June 5, 2013

 I need to extract date part from the string.

 I used the follows:
 regexp_matches(title,'[.* ]+\ (Jul|August|Sep)[, a-zA-Z0-9]+' )

 But it gives me result August as it stops at Augustine.

 In my case, date can be in different formats, some record may use , or
 some may not.

 Any idea to achieve this?

 Thanks.

 select regexp_replace('Gloucester Catholic vs. St. Augustine baseball,
 South Jersey Non-Public A final, June 5, 2013',
 E'(^.*)(\\m(June|July|August|Sep))([, a-zA-Z0-9]+)', E'\\2 \\4' );
 ++
 | regexp_replace |
 ++
 | June  5, 2013  |
 ++
 (1 row)




 --
 Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
 To make changes to your subscription:
 http://www.postgresql.org/mailpref/pgsql-general



[GENERAL] pg_get_triggerdef can't find the trigger using OID.

2013-08-16 Thread AI Rumman
Why can't pg_get_triggerdef find the trigger using its OID?

testdb=# SELECT
testdb-#   p.oid,
testdb-#   n.nspname as Schema,
testdb-#   p.proname as Name,
testdb-#   pg_catalog.pg_get_function_result(p.oid) as "Result data type",
testdb-#   pg_catalog.pg_get_function_arguments(p.oid) as "Argument data
types",
testdb-#  CASE
testdb-#   WHEN p.proisagg THEN 'agg'
testdb-#   WHEN p.proiswindow THEN 'window'
testdb-#   WHEN p.prorettype = 'pg_catalog.trigger'::pg_catalog.regtype
THEN 'trigger'
testdb-#   ELSE 'normal'
testdb-# END as Type
testdb-# FROM pg_catalog.pg_proc p
testdb-#  LEFT JOIN pg_catalog.pg_namespace n ON n.oid = p.pronamespace
testdb-# WHERE pg_catalog.pg_function_is_visible(p.oid)
testdb-#   AND n.nspname <> 'pg_catalog'
testdb-#   AND n.nspname <> 'information_schema'
testdb-# ORDER BY 1, 2, 4;
  oid  | Schema |  Name   | Result data type | Argument
data types  |  Type
---++-+--+--+-
 18249 | public | test_f  | trigger  |
 | trigger


testdb=# select pg_get_triggerdef(18249);
ERROR:  could not find tuple for trigger 18249

Thanks.
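
A plausible reading, sketched: 18249 is the pg_proc OID of the trigger
function, while pg_get_triggerdef() expects the OID of a pg_trigger row.
Listing the triggers themselves would look like:

select t.oid, t.tgname, c.relname, pg_get_triggerdef(t.oid)
from pg_trigger t
join pg_class c on c.oid = t.tgrelid
where not t.tgisinternal;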


[GENERAL] last_vacuum field in not updated

2013-08-15 Thread AI Rumman
Hi,
I am using Postgresql 9.2, where I have a table table1. I ran the vacuum
command on that table, but the last_vacuum column of pg_stat_user_tables has
not been updated.
Any idea why?

 \d table1
Table public.table1
  Column  |   Type   | Modifiers
--+--+
 batterysessionid | integer  | not null
 processedflag| smallint | not null default 0
Indexes:
table1_pkey PRIMARY KEY, btree (batterysessionid)

qualitycore=# select * from pg_stat_user_tables  where last_vacuum =
'2013-07-28 20:04:34.821115-04';
-[ RECORD 1 ]-+--
relid | 5452445
schemaname| public
relname   | table1
seq_scan  | 55394
seq_tup_read  | 458097965
idx_scan  | 3056888
idx_tup_fetch | 345092348
n_tup_ins | 1023618
n_tup_upd | 643602
n_tup_del | 642037
n_tup_hot_upd | 175225
n_live_tup| 381549
n_dead_tup| 77130
last_vacuum   | 2013-07-28 20:04:34.821115-04
last_autovacuum   | 2013-05-01 00:55:01.970799-04
last_analyze  | 2013-07-28 20:04:34.903569-04
last_autoanalyze  | 2013-05-01 06:04:12.905961-04
vacuum_count  | 93
autovacuum_count  | 248
analyze_count | 95
autoanalyze_count | 560

qualitycore=# vacuum  table1;
VACUUM
qualitycore=# vacuum  verbose table1;
INFO:  vacuuming public.table1
INFO:  index table1_pkey now contains 381973 row versions in 1878 pages
DETAIL:  0 index row versions were removed.
104 index pages have been deleted, 103 are currently reusable.
CPU 0.00s/0.00u sec elapsed 0.00 sec.
INFO:  table1: found 0 removable, 1120 nonremovable row versions in 10
out of 2286 pages
DETAIL:  0 dead row versions cannot be removed yet.
There were 1538 unused item pointers.
0 pages are entirely empty.
CPU 0.00s/0.00u sec elapsed 0.00 sec.
VACUUM



select * from pg_stat_user_tables  where last_vacuum = '2013-07-28
20:04:34.821115-04';
-[ RECORD 1 ]-+--
relid | 5452445
schemaname| public
relname   | table1
seq_scan  | 55394
seq_tup_read  | 458097965
idx_scan  | 3056888
idx_tup_fetch | 345092348
n_tup_ins | 1023618
n_tup_upd | 643602
n_tup_del | 642037
n_tup_hot_upd | 175225
n_live_tup| 381549
n_dead_tup| 77130
last_vacuum   | 2013-07-28 20:04:34.821115-04
last_autovacuum   | 2013-05-01 00:55:01.970799-04
last_analyze  | 2013-07-28 20:04:34.903569-04
last_autoanalyze  | 2013-05-01 06:04:12.905961-04
vacuum_count  | 93
autovacuum_count  | 248
analyze_count | 95
autoanalyze_count | 560

Please let me know.

Thanks.
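
One thing worth ruling out, as a sketch rather than a confirmed diagnosis:
pg_stat_* views are frozen per transaction and the stats collector applies
updates with a small delay, so forcing a fresh snapshot may help:

select pg_stat_clear_snapshot();  -- discard the cached stats snapshot
select relname, last_vacuum, vacuum_count
from pg_stat_user_tables
where relname = 'table1';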


[GENERAL] setting high value for wal_keep_segments

2013-08-08 Thread AI Rumman
Hi,

I am going to sync a slave with my master, which is almost 500 GB. I am not
using an archive directory; instead I am using WAL files for streaming. As
it may take almost 3 hours, I am thinking of setting wal_keep_segments to
400, for which I have enough space available.

Space aside, could there be any other problem in setting such a high value
for wal_keep_segments? As this is production, I need to be sure.

Thanks.
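
Back-of-the-envelope math for the disk side, as a sketch: 400 segments at
the default 16 MB each works out to:

select pg_size_pretty(400::bigint * 16 * 1024 * 1024);  -- 6400 MB of WAL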


Re: [GENERAL] setting high value for wal_keep_segments

2013-08-08 Thread AI Rumman
Yeah, I already set it like that and it works.
Thanks.


On Thu, Aug 8, 2013 at 11:59 AM, bricklen brick...@gmail.com wrote:

 On Thu, Aug 8, 2013 at 6:23 AM, AI Rumman rumman...@gmail.com wrote:

 Hi,

 I am going to sync slave with my master which is almost 500 G. I am not
 using archive directory instead of I am using wal files for streaming. As
 it may take almost 3 hours, I am thinking of setting up 400 for
 wal_keep_segments where I have enough space available.

 Without the space issue, could there be any other problem in setting up
 such high value for wal_keep_segments? As this is production, I need to
 confirmed.


 Another data point: I set up SR on two systems recently in production with
 the wal_keep_segments set to 1 (lots of logs were being generated), and
 the slaves were about 1TB each. No problems were experienced.




Re: [GENERAL] last_vacuum field is not updating

2013-07-16 Thread AI Rumman
Yes, I am sure that I am looking for the same table.


On Tue, Jul 16, 2013 at 4:34 AM, Luca Ferrari fluca1...@infinito.it wrote:

 On Mon, Jul 15, 2013 at 6:43 PM, Giuseppe Broccolo
 giuseppe.brocc...@2ndquadrant.it wrote:

  Are you sure you are the table's owner?

 It should not be a permission problem: it works even after a revoke
 all on 9.2.4. Interestingly also the autovacuum is really old. Have
 you tried to do a simple vacuum?  From the documentation
 (
 http://www.postgresql.org/docs/current/static/monitoring-stats.html#MONITORING-STATS-VIEWS-TABLE
 ):

 Last time at which this table was manually vacuumed (not counting VACUUM
 FULL)

 Are you sure we are looking at the same table?

 Luca



[GENERAL] last_vacuum field is not updating

2013-07-15 Thread AI Rumman
Why is vacuuming a table not updating the last_vacuum field of
pg_stat_user_tables?

select * from pg_stat_user_tables  where relname = 'table1';
-[ RECORD 1 ]-+--
relid | 5452445
schemaname| public
relname   | table1
seq_scan  | 54911
seq_tup_read  | 373562142
idx_scan  | 2773802
idx_tup_fetch | 125909227
n_tup_ins | 889595
n_tup_upd | 530602
n_tup_del | 529037
n_tup_hot_upd | 162199
n_live_tup| 360526
n_dead_tup| 0
last_vacuum   | 2013-06-29 20:04:15.391413-04
last_autovacuum   | 2013-05-01 00:55:01.970799-04
last_analyze  | 2013-07-15 10:55:40.870926-04
last_autoanalyze  | 2013-05-01 06:04:12.905961-04
vacuum_count  | 92
autovacuum_count  | 248
analyze_count | 94
autoanalyze_count | 560

psql# vacuum analyze verbose table1;

INFO:  vacuuming public.table1
INFO:  index table1_pkey now contains 360965 row versions in 1878 pages
DETAIL:  0 index row versions were removed.
326 index pages have been deleted, 305 are currently reusable.
CPU 0.00s/0.00u sec elapsed 0.00 sec.
INFO:  table1: found 0 removable, 1116 nonremovable row versions in 30
out of 2286 pages
DETAIL:  0 dead row versions cannot be removed yet.
There were 6720 unused item pointers.
0 pages are entirely empty.
CPU 0.00s/0.00u sec elapsed 0.00 sec.
VACUUM

select * from pg_stat_user_tables  where relname = 'table1';
-[ RECORD 1 ]-+--
relid | 5452445
schemaname| public
relname   | table1
seq_scan  | 54911
seq_tup_read  | 373562142
idx_scan  | 2773802
idx_tup_fetch | 125909227
n_tup_ins | 889595
n_tup_upd | 530602
n_tup_del | 529037
n_tup_hot_upd | 162199
n_live_tup| 360526
n_dead_tup| 0
last_vacuum   | 2013-06-29 20:04:15.391413-04
last_autovacuum   | 2013-05-01 00:55:01.970799-04
last_analyze  | 2013-07-15 10:55:40.870926-04
last_autoanalyze  | 2013-05-01 06:04:12.905961-04
vacuum_count  | 92
autovacuum_count  | 248
analyze_count | 94
autoanalyze_count | 560


Re: [GENERAL] dynamic partitioning

2013-06-26 Thread AI Rumman
Yes, you missed the trigger part. And you will also get an error like the
one below during insert:

INSERT INTO foo VALUES (99, 109, 109, 99, '2013-06-26 16:38:58.466');
NOTICE:  table_name = (foo_100_to_119)
NOTICE:  CREATE TABLE foo_100_to_119 (CHECK ( foo_id >= 100 AND foo_id <=
119 )) INHERITS (foo)
ERROR:  new row for relation foo_100_to_119 violates check constraint
foo_100_to_119_foo_id_check
DETAIL:  Failing row contains (99, 109, 109, 99, 2013-06-26
16:38:58.466-04).
CONTEXT:  SQL statement INSERT INTO foo_100_to_119 VALUES (($1).*)
PL/pgSQL function foo_insert_trigger() line 22 at EXECUTE statement


That's because you added entry_id + 1 in your function, and hence when you
insert foo_id = 99, it creates a table with the check constraint
foo_id >= 100 AND foo_id <= 119.

I modified it as below:
from_value = entry_id;
Now it's working:

INSERT INTO foo VALUES (99, 109, 109, 99, '2013-06-26 16:38:58.466');
NOTICE:  table_name = (foo_99_to_119)
NOTICE:  CREATE TABLE foo_99_to_119 (CHECK ( foo_id >= 99 AND foo_id <= 119
)) INHERITS (foo)
INSERT 0 0
postgres=# select * from foo;
 foo_id | blaa_id | blaa_num | foo_num |   createdatetime
+-+--+-+
 99 | 109 |  109 |  99 | 2013-06-26 16:38:58.466-04
(1 row)

postgres=# select * from foo_99_to_119;
 foo_id | blaa_id | blaa_num | foo_num |   createdatetime
+-+--+-+
 99 | 109 |  109 |  99 | 2013-06-26 16:38:58.466-04
(1 row)

postgres=# show constraint_exclusion ;
 constraint_exclusion
--
 partition
(1 row)



On Wed, Jun 26, 2013 at 10:25 AM, Daniel Cristian Cruz 
danielcrist...@gmail.com wrote:

 You forgot to set the trigger on foo:

 CREATE TRIGGER foo_insert
 BEFORE INSERT ON foo
 FOR EACH ROW EXECUTE PROCEDURE foo_insert_trigger();



 2013/6/26 dafNi zaf dza...@gmail.com

 One note: I create a table of 100 entries in order to test it, so I want 5
 partitions of 20 entries each.
 (And not a table of 10 entries.)


 thanks again!
 dafni


 On Wed, Jun 26, 2013 at 4:47 PM, dafNi zaf dza...@gmail.com wrote:

 Hello!

 I want to dynamically create partition tables that inherit a main table
 called foo.
 The creation must occur when needed.

 For example, let's say that I want to insert 10 entries and I want 5
 partition
 tables (with 2 entries each).

 So, first I need a partition for the first 2 entries, and when the
 entries reach
 number 2, another partition must be created, etc.

 I guess I need something like that:

 --the main table is:

 CREATE TABLE foo (
 foo_id integer NOT NULL,
 blaa_id integer NOT NULL,
 blaa_num integer NOT NULL,
  foo_num integer NOT NULL,
 createdatetime timestamp with time zone DEFAULT now()
 );

 --and the trigger function is:

 CREATE OR REPLACE FUNCTION foo_insert_trigger()
 RETURNS trigger AS $$
 DECLARE
 entry_id integer;
  from_value integer;
 to_value integer;
 table_name varchar;
 BEGIN
 entry_id = NEW.foo_id;
 from_value = entry_id + 1;
 to_value = entry_id + 20;
  table_name='foo_' || from_value || '_to_' || to_value;

 IF not exists(select * from pg_class where relname = table_name) THEN
  EXECUTE 'CREATE TABLE ' || table_name || ' (CHECK ( foo_id >= ' ||
 from_value || ' AND foo_id <= ' || to_value || ' )) INHERITS (foo)' ;
  EXECUTE 'CREATE UNIQUE INDEX by_blaa_num_' || from_value || '_to_' ||
 to_value || 'k ON ' || table_name ||' USING btree (foo_id, blaa_id,
 blaa_num)';
  EXECUTE 'CREATE UNIQUE INDEX pk_foo_' || from_value || '_to_' ||
 to_value || 'k ON ' || table_name ||' USING btree (foo_id, foo_num)';
  EXECUTE 'GRANT ALL ON TABLE ' || table_name || ' TO foorole, postgres';
 EXECUTE 'GRANT SELECT ON TABLE ' || table_name || ' TO blarole';

 END IF;

 EXECUTE 'INSERT INTO ' || table_name ||' VALUES (($1).*)' USING NEW ;

 RETURN NULL;
 END;
 $$
 LANGUAGE plpgsql;


 but it doesn't seem to work. It doesn't actually create new partition
 tables.
 The entries are inserted into foo

 I attach a test .sql file that contains the data of the table


 any help would save me from a lot of time!

 thank you in advance!

 dafni





 --
 Daniel Cristian Cruz
 クルズ クリスチアン ダニエル



Re: [GENERAL] dynamic partitioning

2013-06-26 Thread AI Rumman
That's because you are generating the table name from from_value, which is
distinct every time.
Like:

INSERT INTO foo VALUES (1, 11, 11, 1, '2013-06-26 16:38:58.466');
NOTICE:  table_name = (foo_1_to_21)
NOTICE:  CREATE TABLE foo_1_to_21 (CHECK ( foo_id >= 1 AND foo_id <= 21 ))
INHERITS (foo)
INSERT 0 0
postgres=# \d
 List of relations
 Schema | Name  | Type  |  Owner
+---+---+--
 public | foo   | table | postgres
 public | foo_1_to_21   | table | postgres
 public | foo_99_to_119 | table | postgres
(3 rows)

postgres=# INSERT INTO foo VALUES (2, 12, 12, 2, '2013-06-26 16:38:58.466');
NOTICE:  table_name = (foo_2_to_22)
NOTICE:  CREATE TABLE foo_2_to_22 (CHECK ( foo_id >= 2 AND foo_id <= 22 ))
INHERITS (foo)
INSERT 0 0
postgres=# \d
 List of relations
 Schema | Name  | Type  |  Owner
+---+---+--
 public | foo   | table | postgres
 public | foo_1_to_21   | table | postgres
 public | foo_2_to_22   | table | postgres
 public | foo_99_to_119 | table | postgres
(4 rows)

Here, for two inserts it creates two tables: one for foo_id = 1 and another
for foo_id = 2.


Use,
from_value = entry_id/20::int
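
That said, the bucket arithmetic deserves spelling out; a sketch
(hypothetical, not from the thread) of range math for fixed 20-id
partitions inside the trigger function:

-- floor foo_id to its bucket start: ids 1..20 -> foo_1_to_20, 21..40 -> foo_21_to_40
from_value := ((NEW.foo_id - 1) / 20) * 20 + 1;  -- integer division
to_value   := from_value + 19;
table_name := 'foo_' || from_value || '_to_' || to_value;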



On Wed, Jun 26, 2013 at 10:50 AM, dafNi zaf dza...@gmail.com wrote:

 I solved the problem with the error! thank you very much!

 But there is still 1 issue:

 when I insert multiple rows (for example with the attachment in my first
 email)
 it creates 100 partition tables that contain 1 entry each, instead of 5
 partitions with
 20 entries.

 Any ideas on that?

 Thanks again!

 Dafni




 On Wed, Jun 26, 2013 at 5:32 PM, AI Rumman rumman...@gmail.com wrote:

 Yes, you missed the trigger part. And also you will get error like below
 during insert:

 INSERT INTO foo VALUES (99, 109, 109, 99, '2013-06-26 16:38:58.466');
 NOTICE:  table_name = (foo_100_to_119)
 NOTICE:  CREATE TABLE foo_100_to_119 (CHECK ( foo_id >= 100 AND foo_id <=
 119 )) INHERITS (foo)
 ERROR:  new row for relation foo_100_to_119 violates check constraint
 foo_100_to_119_foo_id_check
 DETAIL:  Failing row contains (99, 109, 109, 99, 2013-06-26
 16:38:58.466-04).
 CONTEXT:  SQL statement INSERT INTO foo_100_to_119 VALUES (($1).*)
 PL/pgSQL function foo_insert_trigger() line 22 at EXECUTE statement


 That because you added entry_id + 1 in your function and hence when
 you are giving foo_id = 99, it is creating table with check constraint
 where foo_id >= 100 and foo_id <= 119.

 I modified it as below:
 from_value = entry_id;
 Now its working:

 INSERT INTO foo VALUES (99, 109, 109, 99, '2013-06-26 16:38:58.466');
 NOTICE:  table_name = (foo_99_to_119)
 NOTICE:  CREATE TABLE foo_99_to_119 (CHECK ( foo_id >= 99 AND foo_id <=
 119 )) INHERITS (foo)
 INSERT 0 0
 postgres=# select * from foo;
  foo_id | blaa_id | blaa_num | foo_num |   createdatetime
 +-+--+-+
  99 | 109 |  109 |  99 | 2013-06-26 16:38:58.466-04
 (1 row)

 postgres=# select * from foo_99_to_119;
  foo_id | blaa_id | blaa_num | foo_num |   createdatetime
 +-+--+-+
  99 | 109 |  109 |  99 | 2013-06-26 16:38:58.466-04
 (1 row)

 postgres=# show constraint_exclusion ;
  constraint_exclusion
 --
  partition
 (1 row)



 On Wed, Jun 26, 2013 at 10:25 AM, Daniel Cristian Cruz 
 danielcrist...@gmail.com wrote:

 You forgot to set the trigger on foo:

 CREATE TRIGGER foo_insert
 BEFORE INSERT ON foo
 FOR EACH ROW EXECUTE PROCEDURE foo_insert_trigger();



 2013/6/26 dafNi zaf dza...@gmail.com

 one note: I create a table of 100 entries in order to test it so I want
 5 partition of 20 entries each.
 (And not a table of 10 entries)


 thanks again!
 dafni


 On Wed, Jun 26, 2013 at 4:47 PM, dafNi zaf dza...@gmail.com wrote:

 Hello!

 I want to dynamically create partition tables that inherit a main
 table called foo.
 The creation must occur when needed.

 For example, lets say that I want to insert 10 entries and I want
 5 partition
 tables (with 2 entries each).

 So, first I need a partition for the first 2 entries and when the
 entries reach
 the number 2, another partition must be created, e.t.c..

 I guess I need something like that:

 --the main table is:

 CREATE TABLE foo (
 foo_id integer NOT NULL,
 blaa_id integer NOT NULL,
 blaa_num integer NOT NULL,
  foo_num integer NOT NULL,
 createdatetime timestamp with time zone DEFAULT now()
 );

 --and the trigger function is:

 CREATE OR REPLACE FUNCTION foo_insert_trigger()
 RETURNS trigger AS $$
 DECLARE
 entry_id integer;
  from_value integer;
 to_value integer;
 table_name varchar;
 BEGIN
 entry_id = NEW.foo_id;
 from_value = entry_id + 1;
 to_value = entry_id + 20;
  table_name='foo_' || from_value || '_to_' || to_value;

 IF not exists(select * from pg_class where relname = table_name) THEN
  EXECUTE 'CREATE TABLE

Re: [GENERAL] postgresql query

2013-06-19 Thread AI Rumman
Which version of Postgresql are you using?
You may use string_agg like below if it's available in your version:

 \d t1
  Table public.t1
 Column |  Type   | Modifiers
+-+---
 i  | integer |
 amt| integer |


select * from t1;
 i | amt
---+-
 1 |  20
 1 |  30
 2 |  30
(3 rows)


 select i, string_agg(amt::text,',') as c from t1 group by i;
 i |   c
---+---
 1 | 20,30
 2 | 30
(2 rows)


Have a good day.
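
For the in/out split described in the quoted question below, conditional
aggregation is another option; a sketch, assuming a hypothetical product
column to group by:

select product,
       sum(case when quantity > 0 then quantity end) as in_quantity,
       sum(case when quantity < 0 then quantity end) as out_quantity
from stock  -- hypothetical table
group by product;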


On Wed, Jun 19, 2013 at 6:51 AM, Jashaswee sweet.rink...@gmail.com wrote:

 I have numeric values in a numeric column. The column has two parts; I want
 to
 split it into 2 different columns.
 The column values look like this:

 Quantity
 2000
 -1000

 Both quantity values are for the same product, but I want them on a
 single
 line.
 So what I want is a result set that looks like:
 In quantity   Out quantity
 -----------   ------------
 2000          -1000

 How can I get this in a select statement?



 --
 View this message in context:
 http://postgresql.1045698.n5.nabble.com/postgresql-query-tp5759846.html
 Sent from the PostgreSQL - general mailing list archive at Nabble.com.


 --
 Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
 To make changes to your subscription:
 http://www.postgresql.org/mailpref/pgsql-general



Re: [GENERAL] Postgres DB crashing

2013-06-18 Thread AI Rumman
Stop the autovacuum process and try again.


On Tue, Jun 18, 2013 at 1:31 PM, bhanu udaya udayabhanu1...@hotmail.com wrote:

  Hello,
 Greetings.

 My PostgreSQL (9.2) is crashing after certain load tests. Currently,
 PostgreSQL crashes when 800 to 1000 threads are run simultaneously on
 a 10 million record schema. Not sure if we have to tweak some more
 Postgres parameters. Currently, PostgreSQL is configured as below
 on 7 GB RAM on an Intel Xeon CPU E5507 2.27 GHz. Is this a Postgres
 limitation to support only 800 threads, or is other configuration required?
 Please look at the log below with the errors. Please reply.


 max_connections              5000
 shared_buffers               2024 MB
 synchronous_commit           off
 wal_buffers                  100 MB
 wal_writer_delay             1000ms
 checkpoint_segments          512
 checkpoint_timeout           5 min
 checkpoint_completion_target 0.5
 checkpoint_warning           30s
 work_mem                     1G
 effective_cache_size         5 GB



 2013-06-11 15:11:17 GMT [26201]: [1-1]ERROR: canceling autovacuum task

 2013-06-11 15:11:17 GMT [26201]: [2-1]CONTEXT: automatic vacuum of table
 newrelic.tenant1.customer

 2013-06-11 15:11:17 GMT [25242]: [1-1]LOG: sending cancel to blocking
 autovacuum PID 26201

 2013-06-11 15:11:17 GMT [25242]: [2-1]DETAIL: Process 25242 waits for
 ExclusiveLock on extension of relation 679054 of database 666546.

 2013-06-11 15:11:17 GMT [25242]: [3-1]STATEMENT: UPDATE tenant1.customer
 SET lastmodifieddate = $1 WHERE id IN ( select random_range((select min(id)
 from tenant1.customer ), (select max(id) from tenant1.customer )) as id )
 AND softdeleteflag IS NOT TRUE

 2013-06-11 15:11:17 GMT [25242]: [4-1]WARNING: could not send signal to
 process 26201: No such process

 2013-06-11 15:22:29 GMT [9]: [11-1]WARNING: worker took too long to
 start; canceled

 2013-06-11 15:24:10 GMT [26511]: [1-1]WARNING: autovacuum worker started
 without a worker entry

 2013-06-11 16:03:33 GMT [23092]: [1-1]LOG: could not receive data from
 client: Connection timed out

 2013-06-11 16:06:05 GMT [23222]: [5-1]LOG: could not receive data from
 client: Connection timed out

 2013-06-11 16:07:06 GMT [26869]: [1-1]FATAL: canceling authentication due
 to timeout

 2013-06-11 16:23:16 GMT [25128]: [1-1]LOG: could not receive data from
 client: Connection timed out

 2013-06-11 16:23:20 GMT [25128]: [2-1]LOG: unexpected EOF on client
 connection with an open transaction

 2013-06-11 16:30:56 GMT [23695]: [1-1]LOG: could not receive data from
 client: Connection timed out

 2013-06-11 16:43:55 GMT [24618]: [1-1]LOG: could not receive data from
 client: Connection timed out

 2013-06-11 16:44:29 GMT [25204]: [1-1]LOG: could not receive data from
 client: Connection timed out

 2013-06-11 16:54:14 GMT [6]: [1-1]PANIC: stuck spinlock
 (0x2aaab54279d4) detected at bufmgr.c:1239

 2013-06-11 16:54:14 GMT [32521]: [8-1]LOG: checkpointer process (PID
 6) was terminated by signal 6: Aborted

 2013-06-11 16:54:14 GMT [32521]: [9-1]LOG: terminating any other active
 server processes

 2013-06-11 16:54:14 GMT [26931]: [1-1]WARNING: terminating connection
 because of crash of another server process

 2013-06-11 16:54:14 GMT [26931]: [2-1]DETAIL: The postmaster has commanded
 this server process to roll back the current transaction and exit, because
 another server process exited abnormally and possibly corrupted shared
 memory.

 2013-06-11 16:54:14 GMT [26931]: [3-1]HINT: In a moment you should be able
 to reconnect to the database and repeat your command.

 2013-06-11 16:54:14 GMT [26401]: [1-1]WARNING: terminating connection
 because of crash of another server process

 2013-06-11 16:54:14 GMT [26401]: [2-1]DETAIL: The postmaster has commanded
 this server process to roll back the current transaction and exit, because
 another server process exited abnormally and possibly corrupted shared
 memory.

 2013-06-11 16:55:08 GMT [27579]: [1-1]FATAL: the database system is in
 recovery mode

 2013-06-11 16:55:08 GMT [24041]: [1-1]WARNING: terminating connection
 because of crash of another server process

 2013-06-11 16:55:08 GMT [24041]: [2-1]DETAIL: The postmaster has commanded
 this server process to roll back the current




Re: [GENERAL] pg_upgrade link mode

2013-05-16 Thread AI Rumman
I always think it's a bit risky to use link mode instead of copying.
However, I'd suggest trying pg_upgrade with --check first.


On Thu, May 16, 2013 at 1:42 PM, Fabio Rueda Carrascosa 
avances...@gmail.com wrote:

 Hello, I have a 9.1 cluster with 50 databases, only one table per db with
 2000 rows only, but a lot of schema around each one (postgis databases)

 The old cluster size is 1GB

 du -chs /var/lib/postgresql/9.1/main/
 1.1G

 now I run a pg_upgrade to 9.2 with hard link mode,

 pg_upgrade  --link \
 --old-datadir=/var/lib/postgresql/9.1/main \
 --new-datadir=/var/lib/postgresql/9.2/main \
 --old-bindir=/usr/lib/postgresql/9.1/bin \
 --new-bindir=/usr/lib/postgresql/9.2/bin

 du -chs /var/lib/postgresql/9.2/main/
 880M


 Is this the expected behaviour? I can't double the space in production.

 Thanks.



Re: [GENERAL] pg_upgrade link mode

2013-05-16 Thread AI Rumman
As I said, I would never go to production with link mode. However, I was
once working with pg_upgrade in copy mode and in the middle I got an error
saying some extensions were missing. I rolled back and restarted the
operation after setting everything up.
I don't know how it would behave in link mode if it failed in the middle.


On Thu, May 16, 2013 at 2:03 PM, Igor Neyman iney...@perceptron.com wrote:


 From: pgsql-general-ow...@postgresql.org [mailto:
 pgsql-general-ow...@postgresql.org] On Behalf Of AI Rumman
 Sent: Thursday, May 16, 2013 1:56 PM
 To: Fabio Rueda Carrascosa
 Cc: pgsql-general
 Subject: Re: [GENERAL] pg_upgrade link mode

 I always think its a bit risky to use link instead of copying. However,
 I'd suggest to try the  --check at first of pg_upgrade.

 --

 Why?
 Do you have specific experience, when link mode caused any problems?
 Could you share?

 Regards,
 Igor Neyman



Re: [GENERAL] pg_upgrade link mode

2013-05-16 Thread AI Rumman
Yes Lonni. I agree with you.


On Thu, May 16, 2013 at 2:23 PM, Lonni J Friedman netll...@gmail.com wrote:

 On Thu, May 16, 2013 at 11:03 AM, Igor Neyman iney...@perceptron.com
 wrote:
 
  From: pgsql-general-ow...@postgresql.org [mailto:
 pgsql-general-ow...@postgresql.org] On Behalf Of AI Rumman
  Sent: Thursday, May 16, 2013 1:56 PM
  To: Fabio Rueda Carrascosa
  Cc: pgsql-general
  Subject: Re: [GENERAL] pg_upgrade link mode
 
  I always think its a bit risky to use link instead of copying. However,
 I'd suggest to try the  --check at first of pg_upgrade.
 
  --
 
  Why?
  Do you have specific experience, when link mode caused any problems?
  Could you share?

 I assume what he's referring to is if the upgrade gets partially
 completed and fails for any reason, then you have a broken mess, with
 no simple rollback path.  Since your database is only about 1GB in
 size, it shouldn't take very long to run a base backup before doing
 the upgrade.  You can send that backup over the network to a remote
 system, so that you have a fallback solution if the upgrade fails.



Re: [GENERAL]

2013-05-15 Thread AI Rumman
Create the functions:
CREATE FUNCTION pg_catalog.text(integer) RETURNS text STRICT IMMUTABLE
LANGUAGE SQL AS 'SELECT textin(int4out($1));';
CREATE CAST (integer AS text) WITH FUNCTION pg_catalog.text(integer) AS
IMPLICIT;
http://www.rummandba.com/2010/10/migrate-database-from-postgresql-81-to.html

It'll work.


On Wed, May 15, 2013 at 3:17 PM, Carlos Henrique Reimer 
carlos.rei...@opendb.com.br wrote:

 Hi,

 Currently, our application is still using PG 8.2 and we are trying to use
 9.2 but there are some problems related with the implicit casts removed on
 8.3.

 Example:

 1) select 'teste'||1;
 2) select trim(1);

 Select 1 & 2 does run fine on 8.2 but in 9.2 select 1 is ok and select 2
 got an error due the fact implicit cast were removed on PG 8.3:

 ERROR:  function pg_catalog.btrim(integer) does not exist
 LINE 1: select trim(1);
^
 HINT:  No function matches the given name and argument types. You might
 need to add explicit type casts.

 ** Error **

 ERROR: function pg_catalog.btrim(integer) does not exist
 SQL state: 42883
 Hint: No function matches the given name and argument types. You might
 need to add explicit type casts.
 Character: 8

 Recreated the implicit cast for select 2:

 CREATE FUNCTION pg_catalog.text(integer) RETURNS text STRICT IMMUTABLE
 LANGUAGE SQL AS 'SELECT textin(int4out($1));';
 CREATE CAST (integer AS text) WITH FUNCTION pg_catalog.text(integer) AS
 IMPLICIT;

 Now select 2 works fine but select 1 gets an error:

 ERROR:  operator is not unique: unknown || integer
 LINE 1: select 'teste'||1;
   ^
 HINT:  Could not choose a best candidate operator. You might need to add
 explicit type casts.

 ** Error **

 ERROR: operator is not unique: unknown || integer
 SQL state: 42725
 Hint: Could not choose a best candidate operator. You might need to add
 explicit type casts.
 Character: 15


 Is there a way to make both selects work on PG 9.2 without explicit casts?

 Thank you in advance!

 --
 Reimer
 47-3347-1724 47-9183-0547 msn: carlos.rei...@opendb.com.br



Re: [GENERAL]

2013-05-15 Thread AI Rumman
Ah! I missed that.
Which version of 9.2 are you working with? I am also on 9.2 and it's working
fine.
Try out using
select 'teste'||1::int;

See if it works or not.


On Wed, May 15, 2013 at 3:41 PM, Carlos Henrique Reimer 
carlos.rei...@opendb.com.br wrote:

 Actually, as stated in my first note, this is what I've done to fix the
 select 2 (select trim(1);) but after creating this function/cast the select
 1 (select 'teste'||1;) started not working.


 On Wed, May 15, 2013 at 4:24 PM, AI Rumman rumman...@gmail.com wrote:

 Create the functions:
 CREATE FUNCTION pg_catalog.text(integer) RETURNS text STRICT IMMUTABLE
 LANGUAGE SQL AS 'SELECT textin(int4out($1));';
 CREATE CAST (integer AS text) WITH FUNCTION pg_catalog.text(integer) AS
 IMPLICIT;

 http://www.rummandba.com/2010/10/migrate-database-from-postgresql-81-to.html

 It'll work.


 On Wed, May 15, 2013 at 3:17 PM, Carlos Henrique Reimer 
 carlos.rei...@opendb.com.br wrote:

 Hi,

 Currently, our application is still using PG 8.2 and we are trying to
 use 9.2 but there are some problems related with the implicit casts removed
 on 8.3.

 Example:

 1) select 'teste'||1;
 2) select trim(1);

  Select 1 & 2 does run fine on 8.2 but in 9.2 select 1 is ok and select 2
 got an error due the fact implicit cast were removed on PG 8.3:

 ERROR:  function pg_catalog.btrim(integer) does not exist
 LINE 1: select trim(1);
^
 HINT:  No function matches the given name and argument types. You might
 need to add explicit type casts.

 ** Error **

 ERROR: function pg_catalog.btrim(integer) does not exist
 SQL state: 42883
 Hint: No function matches the given name and argument types. You might
 need to add explicit type casts.
 Character: 8

 Recreated the implicit cast for select 2:

 CREATE FUNCTION pg_catalog.text(integer) RETURNS text STRICT IMMUTABLE
 LANGUAGE SQL AS 'SELECT textin(int4out($1));';
 CREATE CAST (integer AS text) WITH FUNCTION pg_catalog.text(integer) AS
 IMPLICIT;

 Now select 2 works fine but select 1 gets an error:

 ERROR:  operator is not unique: unknown || integer
 LINE 1: select 'teste'||1;
   ^
 HINT:  Could not choose a best candidate operator. You might need to add
 explicit type casts.

 ** Error **

 ERROR: operator is not unique: unknown || integer
 SQL state: 42725
 Hint: Could not choose a best candidate operator. You might need to add
 explicit type casts.
 Character: 15


 Is there a way to make both selects work on PG 9.2 without explicit
 casts?

  Thank you in advance!

 --
 Reimer
 47-3347-1724 47-9183-0547 msn: carlos.rei...@opendb.com.br





 --
 Reimer
 47-3347-1724 47-9183-0547 msn: carlos.rei...@opendb.com.br



Re: [GENERAL]

2013-05-15 Thread AI Rumman
Drop those functions and try again.


On Wed, May 15, 2013 at 4:22 PM, Carlos Henrique Reimer 
carlos.rei...@opendb.com.br wrote:

 The PG version I'm using is 9.2.4.

 It works but after recreating the implicit cast with the function:


 CREATE FUNCTION pg_catalog.text(integer) RETURNS text STRICT IMMUTABLE
 LANGUAGE SQL AS 'SELECT textin(int4out($1));';
 CREATE CAST (integer AS text) WITH FUNCTION pg_catalog.text(integer) AS
 IMPLICIT;

 it does not work anymore, even using the explicit cast:

 select 'teste'||1::int;

 ERROR:  operator is not unique: unknown || integer
 LINE 1: select 'teste'||1::int;

   ^
 HINT:  Could not choose a best candidate operator. You might need to add
 explicit type casts.


 ** Error **

 ERROR: operator is not unique: unknown || integer
 SQL state: 42725
 Hint: Could not choose a best candidate operator. You might need to add
 explicit type casts.
 Character: 15





 On Wed, May 15, 2013 at 4:45 PM, AI Rumman rumman...@gmail.com wrote:

 Ah! I missed that.
 Which version of 9.2 you are working with? I am also at 9.2 and its
 working fine.
 Try out using
 select 'teste'||1::int;

 See if it works or not.


 On Wed, May 15, 2013 at 3:41 PM, Carlos Henrique Reimer 
 carlos.rei...@opendb.com.br wrote:

 Actually, as stated in my first note, this is what I've done to fix the
 select 2 (select trim(1);) but after creating this function/cast the select
 1 (select 'teste'||1;) started not working.


 On Wed, May 15, 2013 at 4:24 PM, AI Rumman rumman...@gmail.com wrote:

 Create the functions:
 CREATE FUNCTION pg_catalog.text(integer) RETURNS text STRICT IMMUTABLE
 LANGUAGE SQL AS 'SELECT textin(int4out($1));';
 CREATE CAST (integer AS text) WITH FUNCTION pg_catalog.text(integer) AS
 IMPLICIT;

 http://www.rummandba.com/2010/10/migrate-database-from-postgresql-81-to.html

 It'll work.


 On Wed, May 15, 2013 at 3:17 PM, Carlos Henrique Reimer 
 carlos.rei...@opendb.com.br wrote:

 Hi,

 Currently, our application is still using PG 8.2 and we are trying to
 use 9.2 but there are some problems related with the implicit casts 
 removed
 on 8.3.

 Example:

 1) select 'teste'||1;
 2) select trim(1);

  Select 1 & 2 does run fine on 8.2 and in 9.2 select 1 is ok and select
 2 got an error due the fact implicit cast were removed on PG 8.3:

 ERROR:  function pg_catalog.btrim(integer) does not exist
 LINE 1: select trim(1);
^
 HINT:  No function matches the given name and argument types. You
 might need to add explicit type casts.

 ** Error **

 ERROR: function pg_catalog.btrim(integer) does not exist
 SQL state: 42883
 Hint: No function matches the given name and argument types. You might
 need to add explicit type casts.
 Character: 8

 Recreated the implicit cast for select 2:

 CREATE FUNCTION pg_catalog.text(integer) RETURNS text STRICT IMMUTABLE
 LANGUAGE SQL AS 'SELECT textin(int4out($1));';
 CREATE CAST (integer AS text) WITH FUNCTION pg_catalog.text(integer)
 AS IMPLICIT;

 Now select 2 works fine but select 1 gets an error:

 ERROR:  operator is not unique: unknown || integer
 LINE 1: select 'teste'||1;
   ^
 HINT:  Could not choose a best candidate operator. You might need to
 add explicit type casts.

 ** Error **

 ERROR: operator is not unique: unknown || integer
 SQL state: 42725
 Hint: Could not choose a best candidate operator. You might need to
 add explicit type casts.
 Character: 15


 Is there a way to make both selects work on PG 9.2 without explicit
 casts?

  Thank you in advance!

 --
 Reimer
 47-3347-1724 47-9183-0547 msn: carlos.rei...@opendb.com.br





 --
 Reimer
 47-3347-1724 47-9183-0547 msn: carlos.rei...@opendb.com.br





 --
 Reimer
 47-3347-1724 47-9183-0547 msn: carlos.rei...@opendb.com.br



Re: [GENERAL]

2013-05-15 Thread AI Rumman
I am using the same version and it's fine.
postgres=# select version();
version

---
 PostgreSQL 9.2.4 on x86_64-unknown-linux-gnu, compiled by gcc (GCC) 4.1.2
20080704 (Red Hat 4.1.2-52), 64-bit

Try out the query:

SELECT ct.*,
source_t.typname as source_type
,target_t.typname as target_type
, proc.proname
FROM
pg_cast as ct
, pg_type as source_t
, pg_type as target_t
,pg_proc as proc
WHERE
ct.castsource = source_t.oid
and ct.casttarget = target_t.oid
and  ct.castfunc =  proc.oid

I get 144 rows.
http://www.rummandba.com/2013/02/postgresql-type-casting-information.html




On Wed, May 15, 2013 at 4:54 PM, Carlos Henrique Reimer 
carlos.rei...@opendb.com.br wrote:

 It works if I drop the functions but then the select trim(1) does not work;


 On Wed, May 15, 2013 at 5:38 PM, AI Rumman rumman...@gmail.com wrote:

 Drop those functions and try again.


 On Wed, May 15, 2013 at 4:22 PM, Carlos Henrique Reimer 
 carlos.rei...@opendb.com.br wrote:

 The PG version I'm using is 9.2.4.

 It works but after recreating the implicit cast with the function:


 CREATE FUNCTION pg_catalog.text(integer) RETURNS text STRICT IMMUTABLE
 LANGUAGE SQL AS 'SELECT textin(int4out($1));';
 CREATE CAST (integer AS text) WITH FUNCTION pg_catalog.text(integer) AS
 IMPLICIT;

 it does not work anymore, even using the explicit cast:

 select 'teste'||1::int;

 ERROR:  operator is not unique: unknown || integer
  LINE 1: select 'teste'||1::int;

   ^
 HINT:  Could not choose a best candidate operator. You might need to add
 explicit type casts.


 ** Error **

 ERROR: operator is not unique: unknown || integer
 SQL state: 42725
 Hint: Could not choose a best candidate operator. You might need to add
 explicit type casts.
 Character: 15





 On Wed, May 15, 2013 at 4:45 PM, AI Rumman rumman...@gmail.com wrote:

 Ah! I missed that.
 Which version of 9.2 you are working with? I am also at 9.2 and its
 working fine.
 Try out using
 select 'teste'||1::int;

 See if it works or not.


 On Wed, May 15, 2013 at 3:41 PM, Carlos Henrique Reimer 
 carlos.rei...@opendb.com.br wrote:

 Actually, as stated in my first note, this is what I've done to fix
 the select 2 (select trim(1);) but after creating this function/cast the
 select 1 (select 'teste'||1;) started not working.


 On Wed, May 15, 2013 at 4:24 PM, AI Rumman rumman...@gmail.com wrote:

 Create the functions:
 CREATE FUNCTION pg_catalog.text(integer) RETURNS text STRICT
 IMMUTABLE LANGUAGE SQL AS 'SELECT textin(int4out($1));';
 CREATE CAST (integer AS text) WITH FUNCTION pg_catalog.text(integer)
 AS IMPLICIT;

 http://www.rummandba.com/2010/10/migrate-database-from-postgresql-81-to.html

 It'll work.


 On Wed, May 15, 2013 at 3:17 PM, Carlos Henrique Reimer 
 carlos.rei...@opendb.com.br wrote:

 Hi,

 Currently, our application is still using PG 8.2 and we are trying
 to use 9.2 but there are some problems related with the implicit casts
 removed on 8.3.

 Example:

 1) select 'teste'||1;
 2) select trim(1);

  Select 1 & 2 does run fine on 8.2 and
 select 2 got an error due the fact implicit cast were removed on PG 8.3:

 ERROR:  function pg_catalog.btrim(integer) does not exist
 LINE 1: select trim(1);
^
 HINT:  No function matches the given name and argument types. You
 might need to add explicit type casts.

 ** Error **

 ERROR: function pg_catalog.btrim(integer) does not exist
 SQL state: 42883
 Hint: No function matches the given name and argument types. You
 might need to add explicit type casts.
 Character: 8

 Recreated the implicit cast for select 2:

 CREATE FUNCTION pg_catalog.text(integer) RETURNS text STRICT
 IMMUTABLE LANGUAGE SQL AS 'SELECT textin(int4out($1));';
 CREATE CAST (integer AS text) WITH FUNCTION pg_catalog.text(integer)
 AS IMPLICIT;

 Now select 2 works fine but select 1 gets an error:

 ERROR:  operator is not unique: unknown || integer
 LINE 1: select 'teste'||1;
   ^
 HINT:  Could not choose a best candidate operator. You might need to
 add explicit type casts.

 ** Error **

 ERROR: operator is not unique: unknown || integer
 SQL state: 42725
 Hint: Could not choose a best candidate operator. You might need to
 add explicit type casts.
 Character: 15


 Is there a way to make both selects work on PG 9.2 without explicit
 casts?

  Thank you in advance!

 --
 Reimer
 47-3347-1724 47-9183-0547 msn: carlos.rei...@opendb.com.br





 --
 Reimer
 47-3347-1724 47-9183-0547 msn: carlos.rei...@opendb.com.br





 --
 Reimer
 47-3347-1724 47-9183-0547 msn: carlos.rei...@opendb.com.br





 --
 Reimer
 47-3347-1724 47-9183-0547 msn: carlos.rei...@opendb.com.br



[GENERAL] check_postgres_last_vacuum for all databases

2013-05-14 Thread AI Rumman
Hi,

I have been setting up a nagios alert for check_postgres_last_vacuum.
I went through the code and saw that I have to define the databases in
order to check all database tables. In my environment, database
creation and deletion is dynamic, so at any time anyone can create or
drop a database for their needs.
And I have configured a manual vacuum process for all databases each night.

In this scenario, I was looking for a tool to check the last vacuum time on
the servers, so that if any table's last vacuum date is more than N days
old, nagios will send an alert.

Any ideas for it?

Thanks,
AI
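
The per-database query behind such a check could look like this; a sketch
with N fixed at 7 days, where tables that were never vacuumed also count as
overdue:

select schemaname, relname,
       greatest(last_vacuum, last_autovacuum) as last_vacuumed
from pg_stat_user_tables
where coalesce(greatest(last_vacuum, last_autovacuum),
               '-infinity'::timestamptz) < now() - interval '7 days';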


[GENERAL] Character set display

2013-04-09 Thread AI Rumman
Hi,

I got data like:
AHrühn

And I need the output like:
  AHrühn

The DB is running with UTF8 on Postgresql 9.2.

Any help will be appreciated.

Thanks.


Re: [GENERAL] how to completely disable toasted table in postgresql and best practices to follow

2013-04-05 Thread AI Rumman
According to the docs, yes:

http://www.postgresql.org/docs/9.2/static/storage-toast.html
MAIN allows compression but not out-of-line storage. (Actually, out-of-line
storage will still be performed for such columns, but only as a last resort
when there is no other way to make the row small enough to fit on a page.)


On Fri, Apr 5, 2013 at 9:53 AM, Szymon Guz mabew...@gmail.com wrote:


 On 5 April 2013 15:49, Andres Freund and...@2ndquadrant.com wrote:

 On 2013-04-05 18:32:47 +0530, Zahid Quadri wrote:
  Dear admin,
 
  please help me i want to completely disable Toasted tables in
 postgresql as per your suggestion i have used 9.0.13 but still toasted
 tables are getting created.
 
  also i want to know best practices and methods to clear such issues.

 Why do you want to do that?

 It is not possible to completely disable the usage of toast tables, but
 maybe explaining the real reason of you wanting that helps us to give
 you another solution.



 Even if I set storage to plain for all the columns? Will then toast be
 used for something else?

 regards
 Szymon




[GENERAL] replication behind high lag

2013-03-25 Thread AI Rumman
Hi,

I have two 9.2 databases running with hot_standby replication. Today when I
was checking, I found that replication has not been working since Mar 1st.
A large database was restored on the master that day, and I believe the lag
grew after that.

SELECT pg_xlog_location_diff(pg_current_xlog_location(), '0/0') AS offset

431326108320

SELECT pg_xlog_location_diff(pg_last_xlog_receive_location(), '0/0') AS
receive,   pg_xlog_location_diff(pg_last_xlog_replay_location(), '0/0')
AS replay

   receive|replay
--+--
 245987541312 | 245987534032
(1 row)

I checked pg_xlog on both servers. On the slave, the last xlog file is:
-rw--- 1 postgres postgres 16777216 Mar  1 06:02
00010039007F

On the master, the first xlog file is:
-rw--- 1 postgres postgres 16777216 Mar  1 04:45
00010039005E


Is there any way I could sync the slave quickly?

Thanks.


Re: [GENERAL] replication behind high lag

2013-03-25 Thread AI Rumman
On Mon, Mar 25, 2013 at 3:40 PM, Lonni J Friedman netll...@gmail.com wrote:

 On Mon, Mar 25, 2013 at 12:37 PM, AI Rumman rumman...@gmail.com wrote:
  Hi,
 
  I have two 9.2 databases running with hot_standby replication. Today
 when I
  was checking, I found that replication has not been working since Mar
 1st.
  There was a large database restored in master on that day and I believe
  after that the lag went higher.
 
  SELECT pg_xlog_location_diff(pg_current_xlog_location(), '0/0') AS offset
 
  431326108320
 
  SELECT pg_xlog_location_diff(pg_last_xlog_receive_location(), '0/0') AS
  receive,   pg_xlog_location_diff(pg_last_xlog_replay_location(),
 '0/0')
  AS replay
 
 receive|replay
  --+--
   245987541312 | 245987534032
  (1 row)
 
  I checked the pg_xlog in both the server. In Slave the last xlog file
  -rw--- 1 postgres postgres 16777216 Mar  1 06:02
  00010039007F
 
  In Master, the first xlog file is
  -rw--- 1 postgres postgres 16777216 Mar  1 04:45
  00010039005E
 
 
  Is there any way I could sync the slave in quick process?

 generate a new base backup, and seed the slave with it.


OK. I am getting this error on the slave:
LOG:  invalid contrecord length 284 in log file 57, segment 127, offset 0

What is the actual reason?

Thanks.


Re: [GENERAL] replication behind high lag

2013-03-25 Thread AI Rumman
On Mon, Mar 25, 2013 at 3:52 PM, Lonni J Friedman netll...@gmail.com wrote:

 On Mon, Mar 25, 2013 at 12:43 PM, AI Rumman rumman...@gmail.com wrote:
 
 
  On Mon, Mar 25, 2013 at 3:40 PM, Lonni J Friedman netll...@gmail.com
  wrote:
 
  On Mon, Mar 25, 2013 at 12:37 PM, AI Rumman rumman...@gmail.com
 wrote:
   Hi,
  
   I have two 9.2 databases running with hot_standby replication. Today
   when I
   was checking, I found that replication has not been working since Mar
   1st.
   There was a large database restored in master on that day and I
 believe
   after that the lag went higher.
  
   SELECT pg_xlog_location_diff(pg_current_xlog_location(), '0/0') AS
   offset
  
   431326108320
  
   SELECT pg_xlog_location_diff(pg_last_xlog_receive_location(), '0/0')
 AS
   receive,   pg_xlog_location_diff(pg_last_xlog_replay_location(),
   '0/0')
   AS replay
  
  receive|replay
   --+--
245987541312 | 245987534032
   (1 row)
  
   I checked the pg_xlog in both the server. In Slave the last xlog file
   -rw--- 1 postgres postgres 16777216 Mar  1 06:02
   00010039007F
  
   In Master, the first xlog file is
   -rw--- 1 postgres postgres 16777216 Mar  1 04:45
   00010039005E
  
  
   Is there any way I could sync the slave in quick process?
 
  generate a new base backup, and seed the slave with it.
 
 
  OK. I am getting these error in slave:
  LOG:  invalid contrecord length 284 in log file 57, segment 127, offset 0
 
  What is the actual reason?

 Corruption?  What were you doing when you saw the error?


I do not have much idea about this stuff. I have the database now and just
saw the error.
Is there any way to recover from this state? The master is a large database
of 500 GB.


Re: [GENERAL] replication behind high lag

2013-03-25 Thread AI Rumman
On Mon, Mar 25, 2013 at 4:03 PM, AI Rumman rumman...@gmail.com wrote:



  On Mon, Mar 25, 2013 at 4:00 PM, Lonni J Friedman netll...@gmail.com wrote:

 On Mon, Mar 25, 2013 at 12:55 PM, AI Rumman rumman...@gmail.com wrote:
 
 
  On Mon, Mar 25, 2013 at 3:52 PM, Lonni J Friedman netll...@gmail.com
  wrote:
 
  On Mon, Mar 25, 2013 at 12:43 PM, AI Rumman rumman...@gmail.com
 wrote:
  
  
   On Mon, Mar 25, 2013 at 3:40 PM, Lonni J Friedman 
 netll...@gmail.com
   wrote:
  
   On Mon, Mar 25, 2013 at 12:37 PM, AI Rumman rumman...@gmail.com
   wrote:
Hi,
   
I have two 9.2 databases running with hot_standby replication.
 Today
when I
was checking, I found that replication has not been working since
 Mar
1st.
There was a large database restored in master on that day and I
believe
after that the lag went higher.
   
SELECT pg_xlog_location_diff(pg_current_xlog_location(), '0/0') AS
offset
   
431326108320
   
SELECT pg_xlog_location_diff(pg_last_xlog_receive_location(),
 '0/0')
AS
receive,
 pg_xlog_location_diff(pg_last_xlog_replay_location(),
'0/0')
AS replay
   
   receive|replay
--+--
 245987541312 | 245987534032
(1 row)
   
I checked the pg_xlog in both the server. In Slave the last xlog
 file
-rw--- 1 postgres postgres 16777216 Mar  1 06:02
00010039007F
   
In Master, the first xlog file is
-rw--- 1 postgres postgres 16777216 Mar  1 04:45
00010039005E
   
   
Is there any way I could sync the slave in quick process?
  
   generate a new base backup, and seed the slave with it.
  
  
   OK. I am getting these error in slave:
   LOG:  invalid contrecord length 284 in log file 57, segment 127,
 offset
   0
  
   What is the actual reason?
 
  Corruption?  What were you doing when you saw the error?
 
 
  I did not have enough idea about these stuffs. I got the database now
 and
  saw the error.
  Is there any way to recover from this state. The master database is a
 large
  database of 500 GB.

 generate a new base backup, and seed the slave with it.  if the error
 persists, then i'd guess that your master is corrupted, and then
 you've got huge problems.


 The master is running fine right now, showing only one warning:
 WARNING:  archive_mode enabled, yet archive_command is not set

 Do you think the master could be corrupted?


Hi,

I got the info that there was a master db restart on Feb 27th. Could this
be a reason for this error?

Thanks.


Re: [GENERAL] How to join table to itself N times?

2013-03-20 Thread AI Rumman
On Wed, Mar 20, 2013 at 7:38 PM, W. Matthew Wilson m...@tplus1.com wrote:

 I got this table right now:

 select * from market_segment_dimension_values ;
 +--+---+
 | market_segment_dimension | value |
 +--+---+
 | geography| north |
 | geography| south |
 | industry type| retail|
 | industry type| manufacturing |
 | industry type| wholesale |
 +--+---+
 (5 rows)

 The PK is (market_segment_dimension, value).

 The dimension column refers to another table called
 market_segment_dimensions.

 So, north and south are to values for the geography dimension.

 In that data above, there are two dimensions.  But sometimes there could
 be just one dimension, or maybe three, ... up to ten.

 Now here's the part where I'm stumped.

 I need to create a cartesian product of the dimensions.

 I came up with this approach by hard-coding the different dimensions:

  with geog as (
 select value
 from market_segment_dimension_values
 where market_segment_dimension = 'geography'),

 industry_type as (
 select value
 from market_segment_dimension_values
 where market_segment_dimension = 'industry type')

 select geog.value as g,
 industry_type.value as ind_type
 from geog
 cross join industry_type
 ;
 +---+---+
 |   g   |   ind_type|
 +---+---+
 | north | retail|
 | north | manufacturing |
 | north | wholesale |
 | south | retail|
 | south | manufacturing |
 | south | wholesale |
 +---+---+
 (6 rows)

 But that won't work if I add a new dimension (unless I update the query).
  For example, maybe I need to add a new dimension called, say, customer
 size, which has values big and small.

 I've got some nasty plan B solutions, but I want to know if there's some
 solution.

 There's a really elegant solution in python using itertools.product, like
 this:

  list(itertools.product(*[['north', 'south'], ['retail',
 'manufacturing', 'wholesale']]))

 [('north', 'retail'),
  ('north', 'manufacturing'),
  ('north', 'wholesale'),
  ('south', 'retail'),
  ('south', 'manufacturing'),
  ('south', 'wholesale')]

 All advice is welcome.  Thanks in advance!

 Matt



 --
 W. Matthew Wilson
 m...@tplus1.com
 http://tplus1.com


You may try:

select a.value, b.value
from market_segment_dimension_values as a,
     market_segment_dimension_values as b
where a.market_segment_dimension <> b.market_segment_dimension;

-- AI


[GENERAL] crosstab creating multiple rows for same id

2013-03-07 Thread AI Rumman
Hi,

I am using the crosstab function in Postgresql 9.0.
The function works fine when the number of distinct category values is
small. But when it goes higher, like 100 or more, it creates multiple rows
for the same rowid.

Any idea about it?

Thanks.
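
One known cause worth checking, sketched with hypothetical names: crosstab()
starts a new output row every time the row id changes, so a source query
that is not ordered by the row id scatters the same id across several output
rows. The fix is an ORDER BY in the source query (and, with many categories,
an AS column list that matches the category query):

select *
from crosstab(
  'select rowid, category, value from src order by 1, 2',  -- ORDER BY is essential
  'select distinct category from src order by 1'
) as ct(rowid int, cat_a text, cat_b text);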


[GENERAL] no implicit cast error in 9.2?

2013-02-08 Thread AI Rumman
Hi,

I am using
PostgreSQL 9.2.3 on x86_64-unknown-linux-gnu, compiled by gcc (GCC) 4.4.6
20120305 (Red Hat 4.4.6-4), 64-bit.

I got a bit confused after installing this version. As far as I knew,
implicit casts to text were removed in Postgresql 8.3, so the following
should not work on 8.3:
create table testtab ( id varchar, id1 int);
insert into testtab values (1,1);

whereas it worked fine on 8.1.

http://osdir.com/ml/pgsql-general/2011-02/msg00055.html

I also faced the problem earlier.

But in 9.2 it is working perfectly.
Am I missing any release notes?

Please share your opinion. It will be appreciated.

Thanks.


[GENERAL] Is it possible to migrate database from Postgresql 8.2 to 9.2 using pg_upgrade?

2013-02-04 Thread AI Rumman
Is it possible to migrate a database from Postgresql 8.2 to 9.2 using
pg_upgrade?

Thanks.


[GENERAL] Query Parallelizing with dblink

2013-01-30 Thread AI Rumman
Hi All,

I had an idea to utilize multiple cores for a Postgresql query, and I used
dblink to achieve it.
You may find it here:
http://www.rummandba.com/2013/01/postgresql-query-parallelizing-with.html


It is just a basic idea, and it works well for my data model. But I am
sharing it with you all so that you may add some more good ideas here.

-- Added the document here
--

I have three tables, each with almost 10 million records. The tables are:

customer

account

tickets

The customer table holds records for all kinds of customers, each related
to an account or a ticket.

We need to generate a report of each customer and its related accounts or
tickets.

The basic sql is like:

select *
from
(
    select c.custid,
           case when a.accountid is not null then a.accountid
                when t.ticketid is not null then t.ticketid
           end as relatedid
    from customer as c
    left join account as a on c.custid = a.accountid and type = 'Accounts'
    left join tickets as t on c.custid = t.ticketid and type = 'HelpDesk'
    where c.deleted = 0
) as q
where relatedid is not null


I have all the required indexes, but the query was taking too much time.

One of the bottlenecks I always feel with Postgresql is its lack of query
parallelism. The good news is that the developers are working on it.

However, I have to improve the query performance right now. So I made a
plan to divide the query into two parts, execute each part asynchronously,
and then collect the results.


To achieve this, I wrote the function qry_parallel_fn. It opens two
separate dblink connections, conn1 and conn2, and executes two different
queries in async mode.

There is a while loop which checks whether both connections have finished.
Once they have, the function returns the combined result set.


CREATE OR REPLACE FUNCTION qry_parallel_fn() RETURNS SETOF RECORD AS $$
DECLARE
    v_qry1 text;
    v_qry2 text;
    -- dblink_get_result() blocks until the async query on that
    -- connection has finished, then returns its rows
    cur1 cursor is
        select *
        from dblink_get_result('conn1') as t1(custid int, relatedid int);
    cur2 cursor is
        select *
        from dblink_get_result('conn2') as t1(custid int, relatedid int);
    v_closed smallint;
BEGIN
    v_qry1 := 'select custid, accountid as relatedid from customer c inner
               join account a on c.custid = a.accountid where c.deleted = 0';
    RAISE NOTICE 'vqry1 = %', v_qry1;
    v_qry2 := 'select custid, ticketid as relatedid from customer c inner
               join tickets as t on c.custid = t.ticketid where c.deleted = 0';

    PERFORM dblink_connect('conn1', 'dbname=rumman');
    PERFORM dblink_connect('conn2', 'dbname=rumman');
    PERFORM dblink_send_query('conn1', v_qry1);
    PERFORM dblink_send_query('conn2', v_qry2);

    -- busy-wait until both connections report they are no longer busy;
    -- v_closed is recounted from zero on every pass so a finished
    -- connection is not counted twice across iterations
    v_closed := 0;
    WHILE v_closed < 2 LOOP
        v_closed := 0;
        IF check_conn_is_busy('conn1') = 0 THEN
            v_closed := v_closed + 1;
        END IF;
        IF check_conn_is_busy('conn2') = 0 THEN
            v_closed := v_closed + 1;
        END IF;
    END LOOP;

    -- stream out the rows collected from each connection
    FOR rec IN cur1 LOOP
        RETURN NEXT rec;
    END LOOP;

    FOR rec IN cur2 LOOP
        RETURN NEXT rec;
    END LOOP;

    PERFORM dblink_disconnect('conn1');
    PERFORM dblink_disconnect('conn2');

    RETURN;
END;
$$
language 'plpgsql';


-- Execute:

-- select * from qry_parallel_fn() as t1(c int, d int);

-- select count(*) from qry_parallel_fn() as t1(c int, d int);


CREATE OR REPLACE FUNCTION check_conn_is_busy(conn text) RETURNS INT AS $$
DECLARE
    v int;
BEGIN
    SELECT dblink_is_busy(conn) INTO v;
    RETURN v;
END;
$$
language 'plpgsql';


I was monitoring the server and found that it was using two CPU cores to
get the result, which improved the query timing a bit.

---


Let me know your thoughts on it.


Thanks.


[GENERAL] postgresql 9.2 build error

2013-01-13 Thread AI Rumman
  I am trying to build Postgresql 9.2

  ./configure --prefix=/usr/pgsql-9.2  --with-ossp-uuid --with-libxml

  I got this error in config.log:

  configure:9747: result: no
configure:9752: checking for uuid_export in -luuid
configure:9787: gcc -o conftest -O2 -Wall -Wmissing-prototypes
-Wpointer-arith -Wdeclaration-after-statement -Wendif-labels
-Wmissing-format-attribute -Wformat-security -fno-strict-aliasing
-fwrapv -I/usr/local/include -D_GNU_SOURCE
-I/usr/include/libxml2  -L/usr/lib64  conftest.c -luuid  -lxml2 -lz
-lreadline -ltermcap -lcrypt -ldl -lm  >&5
/usr/bin/ld: cannot find -luuid
collect2: ld returned 1 exit status
configure:9794: $? = 1

 What should I do?
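
A hedged note: uuid_export is part of the OSSP uuid API, so this configure
test wants the OSSP uuid library, not the e2fsprogs libuuid. Installing the
OSSP uuid development package (typically named uuid-devel on Red Hat-family
systems, though the exact package name is an assumption) or pointing
--with-libraries at the directory containing the OSSP libuuid.so should let
the check pass.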


[GENERAL] pg_upgrade failed for 9.0 to 9.2

2013-01-13 Thread AI Rumman
Hi,

When I was upgrading a database from 9.0 to 9.2 using pg_upgrade, I got
this error:

CREATE VIEW stats_slowest_queries AS
    SELECT pg_stat_activity.procpid, (('now'::text)::timestamp(6) with time
        zone - pg_stat_activity.query_start) AS execution_time,
        pg_stat_activity.current_query
    FROM pg_stat_activity
    WHERE (pg_stat_activity.current_query !~~ '<IDLE>%'::text)
    ORDER BY (('now'::text)::timestamp(6) with time zone
        - pg_stat_activity.query_start) DESC;
psql:pg_upgrade_dump_db.sql:498897: ERROR:  column pg_stat_activity.procpid
does not exist
LINE 2: SELECT pg_stat_activity.procpid, (('now'::text)::timesta...
   ^
It failed.

Could you let me know why?

Thanks.
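
The direct cause: 9.2 renamed pg_stat_activity.procpid to pid and replaced
current_query with a query column plus a separate state column, so the
dumped view cannot be restored as-is. A sketch of the usual workaround:
drop the view on the old cluster before running pg_upgrade, then recreate
a 9.2-compatible version afterwards (this exact definition is my
adaptation, not from the thread):

CREATE VIEW stats_slowest_queries AS
    SELECT pid,
           now() - query_start AS execution_time,
           query
    FROM pg_stat_activity
    WHERE state <> 'idle'
    ORDER BY execution_time DESC;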


[GENERAL] Is there any plugin for Nagios, which sends Postgresql ERROR information with SQL query in mail?

2013-01-02 Thread AI Rumman
Is there any plugin for Nagios that sends Postgresql ERROR information,
along with the SQL query, by mail?


[GENERAL] pg_xlog is getting bigger

2012-12-19 Thread AI Rumman
Hi,

I am working on a Postgresql 9.0 server. I have no replication or archive
mode set up, but I found that pg_xlog is getting bigger and bigger. Right
now it is 20 GB.

How can I recover this space?

Please let me know.

Thanks.


Re: [GENERAL] pg_xlog is getting bigger

2012-12-19 Thread AI Rumman
On Wed, Dec 19, 2012 at 7:52 PM, Adrian Klaver adrian.kla...@gmail.com wrote:

 On 12/19/2012 04:12 PM, Tom Lane wrote:

 Adrian Klaver adrian.kla...@gmail.com writes:

 Well the question is how long have those idle transactions been around?


 Idle transactions shouldn't have anything to do with pg_xlog bloat.
 What causes xlog bloat is inability to release old WAL because either
 (a) we're not able to complete checkpoints, or (b) WAL archiving is
 enabled but malfunctioning, and the old WAL segments are being kept
 pending successful archiving.


 It's obvious I am missing something important about WAL.
 Scenario:
 1) Transaction is opened and say many UPDATEs are done.
 2) This means there is now an old tuple and a new tuple for the previous
 row.
 3) The transaction is not committed.

 I assumed the WAL logs contained information necessary to either go
 forward to the new on commit or go back to the old on rollback. I further
 assumed the log segment(s) could not be released until either a
 commit/rollback was done.

 At this point I figure I the above assumption is wrong or my understanding
 of IDLE in TRANSACTION is wrong or both!




 Either (a) or (b) should result in bleating in the postmaster log.

 regards, tom lane




 --
 Adrian Klaver
 adrian.kla...@gmail.com


I modified checkpoint_segments from 300 to 100, then forced some
CHECKPOINTs and pg_switch_xlog() calls, and found that pg_xlog gave back
almost 1 GB of space.
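
As a rough cross-check against the 9.0 documentation: pg_xlog normally
stays below about (2 + checkpoint_completion_target) * checkpoint_segments
+ 1 segments of 16 MB each. With checkpoint_segments = 100 and the default
checkpoint_completion_target = 0.5 (an assumption about this server's
setting), that is roughly 251 * 16 MB, about 4 GB, so a 20 GB pg_xlog
points to checkpoints or archiving having been stuck rather than normal
growth.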


Re: [GENERAL] pg_restore error with out of memory

2012-12-14 Thread AI Rumman
I modified the settings to
shared_buffers = 50 MB
and
maintenance_work_mem = 50 MB

but I am still getting the same error.

On Thu, Dec 13, 2012 at 7:36 PM, Kevin Grittner kgri...@mail.com wrote:

 AI Rumman wrote:

 I am going to restore a 6 GB database on my development machine,
 which runs CentOS 5.6 with 1 GB of memory.

  pg_restore: out of memory
  pg_restore: finished item 8570 TABLE DATA entity
  pg_restore: [archiver] worker process failed: exit code 1

  I set postgresql.conf as -
  shared_buffers = 128 MB
  maintenance_work_mem = 300 MB

  During the error, my OS status was:
  free -m
               total   used   free   shared   buffers   cached
  Mem:          1024    975     48        0         3      857
  -/+ buffers/cache:     114    909
  Swap:         1027      0    1027
 
  Please let me know what could be the actual cause of the error.

 You have 1024 MB total RAM.
 You seem to be using 114 MB of that before starting PostgreSQL.
 You have PostgreSQL configured to use 128 MB of shared buffers,
 which is only part of its shared memory.
 You have configured 300 MB per maintenance_work_mem allocation.
 There can be several of these at one time.
 You are running pg_restore, which needs to use memory to interpret
 the map of the dump and dependencies among objects.

 You are using more memory than you have.

 If you really need to run PostgreSQL on a machine with 1GB of
 memory, you need to use a configuration much closer to the default.

 Don't expect performance to be the same as on a larger server.

 -Kevin
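
A rough sketch of the arithmetic in Kevin's reply, using the original
settings: 128 MB of shared buffers plus up to 300 MB of
maintenance_work_mem per concurrent index build (a parallel pg_restore can
run several at once) already approaches the roughly 900 MB the machine has
free, before counting work_mem sorts, the OS cache, and pg_restore's own
bookkeeping, so allocation failures are to be expected.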



[GENERAL] pg_restore error with out of memory

2012-12-13 Thread AI Rumman
I am going to restore a 6 GB database on my development machine, which is
running CentOS 5.6 with 1 GB of memory.
During the restore I got errors as follows:

LOG:  checkpoints are occurring too frequently (22 seconds apart)
HINT:  Consider increasing the configuration parameter
checkpoint_segments.
pg_restore: out of memory
pg_restore: finished item 8570 TABLE DATA entity
pg_restore: [archiver] worker process failed: exit code 1
[postgres@rumman data]$ ERROR:  invalid input syntax for integer: U
CONTEXT:  COPY entity, line 2120568, column version: U
STATEMENT:  COPY entity (crmid, smcreatorid, smownerid, modifiedby, setype,
description, createdtime, modifiedtime, viewedtime, status, version,
presence, deleted, owner_type) FROM stdin;

LOG:  could not send data to client: Broken pipe
STATEMENT:  COPY entity (crmid, smcreatorid, smownerid, modifiedby, setype,
description, createdtime, modifiedtime, viewedtime, status, version,
presence, deleted, owner_type) FROM stdin;


The table entity has 2164182 rows.

And its description:
\d entity
                  Table "public.entity"
    Column    |            Type             |          Modifiers
--------------+-----------------------------+------------------------------
 crmid        | integer                     | not null
 smcreatorid  | integer                     | not null default 0
 smownerid    | integer                     | not null default 0
 modifiedby   | integer                     | not null default 0
 setype       | character varying(30)       | not null
 description  | text                        |
 createdtime  | timestamp without time zone | not null
 modifiedtime | timestamp without time zone | not null
 viewedtime   | timestamp without time zone |
 status       | character varying(50)       |
 version      | integer                     | not null default 0
 presence     | integer                     | default 1
 deleted      | integer                     | not null default 0
 owner_type   | character(1)                | not null default 'U'::bpchar
Indexes:
entity_pkey PRIMARY KEY, btree (crmid)
entity_createdtime_idx btree (createdtime)
entity_modifiedby_idx btree (modifiedby)
entity_modifiedtime_idx btree (modifiedtime)
entity_setype_idx btree (setype) WHERE deleted = 0
entity_smcreatorid_idx btree (smcreatorid)
entity_smownerid_idx btree (smownerid)
ftx_enentity_description gin (to_tsvector('en'::regconfig,
for_fts(description)))
entity_deleted_idx btree (deleted)
Referenced by:
TABLE service CONSTRAINT fk_1_service FOREIGN KEY (serviceid)
REFERENCES entity(crmid) ON DELETE CASCADE
TABLE servicecontracts CONSTRAINT fk_1_servicecontracts FOREIGN KEY
(servicecontractsid) REFERENCES entity(crmid) ON DELETE CASCADE
TABLE cc2entity CONSTRAINT fk_cc2entityentity FOREIGN KEY (crm_id)
REFERENCES entity(crmid) ON UPDATE CASCADE ON DELETE CASCADE
TABLE emails_optout_history CONSTRAINT fk_emails_optout_historyid
FOREIGN KEY (crmid) REFERENCES entity(crmid) ON DELETE CASCADE
TABLE emails_optout_history CONSTRAINT
fk_emails_optout_history_emailid FOREIGN KEY (emailid) REFERENCES
entity(crmid) ON DELETE CASCADE



I set postgresql.conf as -
shared_buffers = 128 MB
maintenance_work_mem = 300 MB
checkpoint_segments = 10  # as the disk space is limited
fsync = off
autocommit = off


The backup was taken with Postgresql 9.2.3 and I am going to restore it on
Postgresql 9.2.1.

During the error, my OS status was:
free -m
             total   used   free   shared   buffers   cached
Mem:          1024    975     48        0         3      857
-/+ buffers/cache:     114    909
Swap:         1027      0    1027


Please let me know what could be the actual cause of the error.

Thanks.


[GENERAL] deadlock detected

2012-11-05 Thread AI Rumman
Hi all,

I am using Postgresql 9.1.

I got a message in my log:
ERROR:  deadlock detected
DETAIL:  Process 20265 waits for ShareLock on transaction 27774015; blocked
by process 20262.
Process 20262 waits for ShareLock on transaction 27774018; blocked
by process 20265.
Process 20265: UPDATE t1 SET product_id = 1017966 WHERE ticketid =
'2170501'
Process 20262: UPDATE c1 SET deleted=1 WHERE id='2170501'
HINT:  See server log for query details.
STATEMENT:  UPDATE t1 SET product_id = 1017966 WHERE ticketid = '2170501'

How can I get more information about this deadlock, such as which queries
created it?

Thanks.
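
A sketch of logging settings that make the next deadlock easier to
diagnose; these are standard 9.1 GUCs, and the prefix format is my own
choice:

# postgresql.conf (reload to apply):
log_lock_waits = on           # log any lock wait longer than deadlock_timeout
deadlock_timeout = 1s         # how long to wait before the deadlock check runs
log_line_prefix = '%t [%p] '  # timestamp + backend PID, to match DETAIL lines

The HINT already points at the server log, which records the statements of
both backends involved; setting log_statement = 'mod' would additionally
capture the earlier statements in each transaction that took the first
locks.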


[GENERAL] should I increase default_statistics_target

2012-09-20 Thread AI Rumman
Using explain analyze on a large query, I found that at every step there
is a large difference between the actual and estimated row counts.
I am using default_statistics_target = 200. Should I increase it?
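
A narrower alternative to raising it globally, which makes every ANALYZE
and planning cycle more expensive, is to raise the target only on the
columns whose estimates are off. A sketch with hypothetical names:

ALTER TABLE big_table ALTER COLUMN skewed_col SET STATISTICS 1000;
ANALYZE big_table;
-- then re-run EXPLAIN ANALYZE and compare the estimates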


Re: [GENERAL] Where should I start for learn development

2012-08-01 Thread AI Rumman
Thanks to all for the good advice. I was thinking of working on an open
source project and contributing there. As a database developer, I think
Postgresql is one of the best places for me to enjoy the work and see the
outcome.
If you ask about my goal, I was thinking of working on a large feature,
such as what the hackers are doing for parallel execution of a query. At
present, I need that badly. I know I can achieve a bit of it using the
pgpool load balancer or GridSQL, but it would be nice to have it in core
Postgresql.
Craig, you make a good point. I should start by writing simple C functions
as extensions and go on from there.
Thanks to all again.

On Tue, Jul 3, 2012 at 7:32 PM, Craig Ringer ring...@ringerc.id.au wrote:

  On 07/03/2012 07:50 PM, AI Rumman wrote:

 Hello,

  I have been working with Postgresql for the last 3 years. Before that I
 worked with Oracle, Mysql and other databases.
 Now, it's time to learn the internals of the Postgresql system. I downloaded
 the source code and imported it into my eclipse environment.
 But I have very limited knowledge of C programming.
  Could you guys please guide me on where I should start?


 If you really want to start messing with the Pg innards, and you have
 already read and understood all the developer documentation, I'd suggest
 starting by writing some simple user defined functions in C. Write a simple
 function that returns a plain value. Then a record. Then a set of records.
 Then write an aggregate function. Then a window function. Dig into the data
 structures and types. When you're game, implement a simple data type. Then
 add support for indexing it. etc.

 Honestly, if you don't have something you want to _achieve_ it's probably
 mostly going to be boring. What do you want to do, to get out of this?

 --
 Craig Ringer



[GENERAL] I cannot insert bengali character in UTF8

2012-07-20 Thread AI Rumman
I am using a database with UTF8 encoding and LC_CTYPE set to the default
value in Postgresql 9.1.
But I cannot insert Bengali characters into a column.

Query Failed:INSERT into tracker (user_id, module_name, item_id,
item_summary) values ('1','Leads','353','বাংলা টেস্��...')::ADODB
error::-ERROR: invalid byte sequence for encoding UTF8: 0xe0a62e

item_summary is a text column, and we can insert Japanese characters into
this field.

Could anybody let me know what the problem is here?


Re: [GENERAL] I cannot insert bengali character in UTF8

2012-07-20 Thread AI Rumman
WOW. Great informative answer. Thanks.

On Fri, Jul 20, 2012 at 7:11 PM, Christian Ullrich ch...@chrullrich.net wrote:

 * AI Rumman wrote:

  I am using a database with UTF8 encoding and LC_CTYPE set to the default
 value in Postgresql 9.1.
 But I cannot insert Bengali characters into a column.

 Query Failed:INSERT into tracker (user_id, module_name, item_id,
 item_summary) values ('1','Leads','353','বাংলা টেস্��...')::ADODB
 error::-ERROR: invalid byte sequence for encoding UTF8: 0xe0a62e


 E0 A6 2E is not valid UTF-8: 11100000 10100110 00101110

 The lead byte indicates that the codepoint consists of three bytes,
 but only the very next byte is a trail byte (10xxxxxx). The third
 byte is a single character, a period (.), to be exact.

 Setting the MSB on the third byte gives us

 11100000 10100110 10101110 = E0 A6 AE

 , which is a valid UTF-8 encoding of U+09AE BENGALI LETTER MA.

 Check your input data.

 --
 Christian
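
A quick psql check of the byte analysis above, using only the two hex
strings from the reply:

SELECT convert_from(decode('e0a6ae', 'hex'), 'UTF8');
-- returns U+09AE BENGALI LETTER MA
SELECT convert_from(decode('e0a62e', 'hex'), 'UTF8');
-- fails with the same invalid byte sequence error as the INSERT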





[GENERAL] Where should I start for learn development

2012-07-03 Thread AI Rumman
Hello,

I have been working with Postgresql for the last 3 years. Before that I
worked with Oracle, Mysql and other databases.
Now, it's time to learn the internals of the Postgresql system. I downloaded
the source code and imported it into my eclipse environment.
But I have very limited knowledge of C programming.
Could you guys please guide me on where I should start?

Thanks.


[GENERAL] is there any query so that I may find the list of columns those have rules?

2012-07-01 Thread AI Rumman
I am getting the following error:
ALTER TABLE base_table ALTER COLUMN base_table_field1 TYPE numeric(10,6);
ERROR:  cannot alter type of a column used by a view or rule
DETAIL:  rule _RETURN on view master_view depends on column
base_table_field1

I know that I have a view based on the base_table_field1 column.

So is there any query that will give me the list of columns that have
dependent rules?

Any help will be appreciated.
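
One way to get that list is to walk pg_depend from the table's columns to
the rules that reference them. A sketch using the identifiers from the
error above:

SELECT DISTINCT r.ev_class::regclass AS view_name, r.rulename
FROM pg_depend d
JOIN pg_rewrite r ON r.oid = d.objid
WHERE d.classid = 'pg_rewrite'::regclass
  AND d.refclassid = 'pg_class'::regclass
  AND d.refobjid = 'base_table'::regclass
  AND d.refobjsubid = (SELECT attnum FROM pg_attribute
                       WHERE attrelid = 'base_table'::regclass
                         AND attname = 'base_table_field1');

Dropping the refobjsubid condition lists every rule that depends on the
table at all.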


[GENERAL] Is it possible to send data in client end?

2012-06-20 Thread AI Rumman
I am using Postgresql 9.1. I know that COPY is used to export data as csv,
but COPY creates the file on the server. I need to create the file on the
client side. My application uses php.
Is there any tool for this?
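
psql's \copy does exactly this: it runs COPY ... TO STDOUT on the server
and writes the file on the client. A sketch with a hypothetical table
name:

\copy (SELECT * FROM mytable) TO 'mytable.csv' WITH CSV HEADER

From php, the equivalent is to issue COPY ... TO STDOUT over the
connection (for example via pg_copy_to()) and write the rows out locally.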

