[GENERAL] Re: pg_upgrade could not create catalog dump while upgrading from 9.0 to 9.2

2013-07-12 Thread shankar.kotamar...@gmail.com
Thanks Adrian for your reply. You were right: I am using the EnterpriseDB
version for 9.2 and the community version for 9.0. I am including the screenshot.
 



--
View this message in context: 
http://postgresql.1045698.n5.nabble.com/pg-upgrade-could-not-create-catalog-dump-while-upgrading-from-9-0-to-9-2-tp5763576p5763661.html
Sent from the PostgreSQL - general mailing list archive at Nabble.com.

Re: [GENERAL] passing text value with single quote to stored procedure

2013-07-12 Thread Adrian Klaver

On 07/12/2013 03:05 PM, Prabhjot Sheena wrote:

Hi all
   I am currently having an issue with passing a value with a
single quote in it to a stored procedure.

  I am trying to pass the text value QA_SUMMER ' 2013_(EU/US) to a stored
procedure text variable. It gives me an error.

Here is more detail on this

My stored procedure is something like this

CREATE OR REPLACE FUNCTION load_stage_start_v1( p_name text)

when this value is passed to the function

select load_stage_start_v1('QA_SUMMER'2013_(EU/US)');

This gives an error. Is there a way to fix this issue?


Without seeing the body of the function or the error message it is a bit 
of a guess, but that has never stopped me before, so here are a couple of 
suggestions:


Dollar quoting
http://www.postgresql.org/docs/9.2/interactive/sql-syntax-lexical.html#SQL-SYNTAX-DOLLAR-QUOTING
4.1.2.4. Dollar-quoted String Constants

so:

select load_stage_start_v1($$QA_SUMMER'2013_(EU/US)$$)

quote_literal
http://www.postgresql.org/docs/9.2/static/functions-string.html

Use quote_literal() in the function to properly escape the string.
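
For completeness, the standard SQL escape (doubling the embedded single
quote) also works here; a sketch, assuming the load_stage_start_v1 function
from the post:

```
-- Standard SQL: escape an embedded single quote by doubling it
select load_stage_start_v1('QA_SUMMER''2013_(EU/US)');

-- Dollar quoting: the literal needs no escaping inside $$ ... $$
select load_stage_start_v1($$QA_SUMMER'2013_(EU/US)$$);
```

Both calls pass the same string, QA_SUMMER'2013_(EU/US), to the function.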



And if I am sending this question to the wrong mailing list, please let me
know.

Thanks in advance





--
Adrian Klaver
adrian.kla...@gmail.com


--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] passing text value with single quote to stored procedure

2013-07-12 Thread Prabhjot Sheena
Hi all
  I am currently having an issue with passing a value with a single
quote in it to a stored procedure.

 I am trying to pass the text value QA_SUMMER ' 2013_(EU/US) to a stored
procedure text variable. It gives me an error.

Here is more detail on this

My stored procedure is something like this

CREATE OR REPLACE FUNCTION load_stage_start_v1( p_name text)

when this value is passed to the function

select load_stage_start_v1('QA_SUMMER'2013_(EU/US)');

This gives an error. Is there a way to fix this issue?

And if I am sending this question to the wrong mailing list, please let me
know.

Thanks in advance


[GENERAL] Build RPM from Postgres Source

2013-07-12 Thread ktewari1
Hi,
I need some non-default compile-time settings (like NAMEDATALEN, etc.), and
that's why I'm trying to build Postgres from source and create an RPM
to be sent out for install.

Now, the build works fine, but I don't see a way to create an RPM.

I'll greatly appreciate any help.

Thanks,
Kul 
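
One common route is to wrap the source build in a minimal RPM .spec file and
run rpmbuild -ba against it. The sketch below is illustrative only: the
package name, version, install prefix, and NAMEDATALEN value are placeholder
assumptions, not tested settings.

```
# Hypothetical minimal spec; adapt names, version, and paths to your layout.
Name:           postgresql-custom
Version:        9.2.4
Release:        1%{?dist}
Summary:        PostgreSQL built with a custom NAMEDATALEN
License:        PostgreSQL
Source0:        postgresql-%{version}.tar.gz

%description
PostgreSQL built from source with modified compile-time settings.

%prep
%setup -q -n postgresql-%{version}

%build
# NAMEDATALEN lives in src/include/pg_config_manual.h; patch it before building
sed -i 's/#define NAMEDATALEN 64/#define NAMEDATALEN 128/' \
    src/include/pg_config_manual.h
./configure --prefix=/usr/pgsql-9.2
make %{?_smp_mflags}

%install
make DESTDIR=%{buildroot} install

%files
/usr/pgsql-9.2
```

With the source tarball placed in ~/rpmbuild/SOURCES, running
`rpmbuild -ba postgresql-custom.spec` should leave the binary RPM under
~/rpmbuild/RPMS.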



--
View this message in context: 
http://postgresql.1045698.n5.nabble.com/Build-RPM-from-Postgres-Source-tp5763633.html
Sent from the PostgreSQL - general mailing list archive at Nabble.com.




Re: [GENERAL] V8.4 TOAST table problem

2013-07-12 Thread Scott Marlowe
It's always a good idea to keep a copy of the database for a post mortem if
possible. If you've found a bug, it's nice to find and fix it. If you were
suffering from an operational failure of some sort, then it helps to figure
that out too.


On Fri, Jul 12, 2013 at 2:42 PM, Bradley McCune wrote:

> Well, the issue was corrected by completely rebuilding the database a few
> days ago (all the way to reinitializing the database directory).  With that
> said, I did check that table at the time, and I received an empty result
> set from such a SELECT statement.  The same goes for
> max_prepared_transactions.
>
> Perplexing.
>
>
>
> On Fri, Jul 12, 2013 at 4:35 PM, Scott Marlowe wrote:
>
>> So what did
>> select * from pg_prepared_xacts ;
>> show?
>>
>>
>> On Fri, Jul 12, 2013 at 2:30 PM, Bradley McCune 
>> wrote:
>>
>>> Scott,
>>>
>>> Purely idle.  I compared these transactions with our other "healthy"
>>> databases, and they checked out.
>>>
>>>
>>> On Fri, Jul 12, 2013 at 4:25 PM, Scott Marlowe 
>>> wrote:
>>>
 Prepared transactions that are sitting still do the same thing, and
 show no connections.


 On Fri, Jul 12, 2013 at 2:25 PM, Scott Marlowe >>> > wrote:

> Idle in Transaction? Or plain Idle? Idle in Transaction stops vacuum
> from reclaiming space and is indicative of a broken application.
>
>
> On Fri, Jul 12, 2013 at 9:39 AM, Bradley McCune <
> bradley.mcc...@noaa.gov> wrote:
>
>> The only transactions present were "" for current_query.  I
>> even stopped the remote services, restarted the PostgreSQL server
>> (presumably, there should be no transactions occurring now), and 
>> performed
>> another VACUUM FULL followed by REINDEX due to known fullvac index bloat 
>> in
>> pre-9 pgsql version.
>>
>>
>> On Fri, Jul 12, 2013 at 11:28 AM, Scott Marlowe <
>> scott.marl...@gmail.com> wrote:
>>
>>> Did you have a long-running transaction? Especially a prepared
>>> transaction, blocking the vacuum from reclaiming the space?
>>>
>>> On Fri, Jul 12, 2013 at 8:10 AM, Bradley McCune <
>>> bradley.mcc...@noaa.gov> wrote:
>>> > David,
>>> >
>>> > (As a preface, I have already gone forward with completely
>>> rebuilding the
>>> > database which seems to have finally fixed the problem.
>>>  Rebuilding the
>>> > table itself had no effect, and I couldn't wait much longer to move
>>> > forward.)
>>> >
>>> > Yes, this seems similar, however, the key difference being that
>>> VACUUM FULL
>>> > did not alleviate the problem.  The extra "bloated" disk space was
>>> still
>>> > considered "in use" by the data server, and so it was never
>>> returned to the
>>> > system.  I have a suspicion that the server was storing the table
>>> data in
>>> > pages in an inefficient manner (by unknown means) because we had
>>> roughly ~5x
>>> > the number of pages used on that TOAST table to store the same
>>> number of
>>> > tuples compared to other similar databases.
>>> >
>>> > Depending on how often you have to use VACUUM FULL, you might want
>>> to
>>> > consider tweaking the autovacuum to be more aggressive on that hot
>>> table to
>>> > keep it in check more often.  (Recycling the disk space more
>>> efficiently
>>> > rather than sending it back to the server only to be reallocated
>>> to the
>>> > database again.)
>>> >
>>> >
>>> > On Fri, Jul 12, 2013 at 4:09 AM, David Welton 
>>> wrote:
>>> >>
>>> >> Hi,
>>> >>
>>> >> I have a very similar problem... details below.
>>> >>
>>> >> On Wed, Jul 3, 2013 at 5:19 PM, Paul Tilles 
>>> wrote:
>>> >> > Recently, I've had a PostgreSQL 8.2.11 server upgraded to 8.4
>>> in order
>>> >> > to
>>> >> > take advantage of autovacuum features. This server exists in a
>>> very
>>> >> > closed
>>> >> > environment (isolated network, limited root privileges; this
>>> explains
>>> >> > the
>>> >> > older software in use) and runs on RHEL5.5 (i686). After the
>>> upgrade,
>>> >> > the
>>> >> > database has constantly been growing to the tune of 5-6 GB a
>>> day.
>>> >> > Normally,
>>> >> > the database, as a whole, is ~20GB; currently, it is ~89GB. We
>>> have a
>>> >> > couple
>>> >> > other servers which run equivalent databases and actually
>>> synchronize
>>> >> > the
>>> >> > records to each other via a 3rd party application (one I do not
>>> have
>>> >> > access
>>> >> > to the inner workings). The other databases are ~20GB as they
>>> should be.
>>> >>
>>> >> Our machine is an Ubuntu 12.04 system running on AWS, so it's a
>>> 64 bit
>>> >> system:
>>> >>
>>> >> PostgreSQL 9.1.9 on x86_64-unknown-linux-gnu, compiled by gcc
>>> >> (Ubuntu/Linaro 4.6.3-1ubuntu5) 4.6.3, 64-bit
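
The per-table autovacuum tuning suggested in the quoted reply can be
sketched like this; the table name comes from the thread, and the thresholds
are illustrative assumptions, not recommendations:

```
-- Vacuum the hot table (and its TOAST table) far more aggressively
-- than the global defaults; the numbers here are examples only.
ALTER TABLE timeseries SET (
    autovacuum_vacuum_scale_factor = 0.01,  -- trigger at ~1% dead tuples
    autovacuum_vacuum_threshold    = 1000
);
ALTER TABLE timeseries SET (
    toast.autovacuum_vacuum_scale_factor = 0.01  -- TOAST side, set separately
);
```

These storage parameters are available from 8.4 onward, which is the version
range discussed in the thread.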

Re: [GENERAL] V8.4 TOAST table problem

2013-07-12 Thread Bradley McCune
Well, the issue was corrected by completely rebuilding the database a few
days ago (all the way to reinitializing the database directory).  With that
said, I did check that table at the time, and I received an empty result
set from such a SELECT statement.  The same goes for
max_prepared_transactions.

Perplexing.



Re: [GENERAL] V8.4 TOAST table problem

2013-07-12 Thread Scott Marlowe
So what did
select * from pg_prepared_xacts ;
show?



Re: [GENERAL] V8.4 TOAST table problem

2013-07-12 Thread Bradley McCune
Scott,

Purely idle.  I compared these transactions with our other "healthy"
databases, and they checked out.



Re: [GENERAL] V8.4 TOAST table problem

2013-07-12 Thread Scott Marlowe
Prepared transactions that are sitting still do the same thing, and show no
connections.



Re: [GENERAL] V8.4 TOAST table problem

2013-07-12 Thread Scott Marlowe
Idle in Transaction? Or plain Idle? Idle in Transaction stops vacuum from
reclaiming space and is indicative of a broken application.


On Fri, Jul 12, 2013 at 9:39 AM, Bradley McCune wrote:

> The only transactions present were "" for current_query.  I even
> stopped the remote services, restarted the PostgreSQL server (presumably,
> there should be no transactions occurring now), and performed another
> VACUUM FULL followed by REINDEX due to known fullvac index bloat in pre-9
> pgsql version.
>
>
> On Fri, Jul 12, 2013 at 11:28 AM, Scott Marlowe 
> wrote:
>
>> Did you have a long-running transaction? Especially a prepared
>> transaction, blocking the vacuum from reclaiming the space?
>>
>> On Fri, Jul 12, 2013 at 8:10 AM, Bradley McCune 
>> wrote:
>> > David,
>> >
>> > (As a preface, I have already gone forward with completely rebuilding
>> the
>> > database which seems to have finally fixed the problem.  Rebuilding the
>> > table itself had no effect, and I couldn't wait much longer to move
>> > forward.)
>> >
>> > Yes, this seems similar, however, the key difference being that VACUUM
>> FULL
>> > did not alleviate the problem.  The extra "bloated" disk space was still
>> > considered "in use" by the data server, and so it was never returned to
>> the
>> > system.  I have a suspicion that the server was storing the table data
>> in
>> > pages in an inefficient manner (by unknown means) because we had
>> roughly ~5x
>> > the number of pages used on that TOAST table to store the same number of
>> > tuples compared to other similar databases.
>> >
>> > Depending on how often you have to use VACUUM FULL, you might want to
>> > consider tweaking the autovacuum to be more aggressive on that hot
>> table to
>> > keep it in check more often.  (Recycling the disk space more efficiently
>> > rather than sending it back to the server only to be reallocated to the
>> > database again.)
>> >
>> >
>> > On Fri, Jul 12, 2013 at 4:09 AM, David Welton 
>> wrote:
>> >>
>> >> Hi,
>> >>
>> >> I have a very similar problem... details below.
>> >>
>> >> On Wed, Jul 3, 2013 at 5:19 PM, Paul Tilles 
>> wrote:
>> >> > Recently, I've had a PostgreSQL 8.2.11 server upgraded to 8.4 in
>> order
>> >> > to
>> >> > take advantage of autovacuum features. This server exists in a very
>> >> > closed
>> >> > environment (isolated network, limited root privileges; this explains
>> >> > the
>> >> > older software in use) and runs on RHEL5.5 (i686). After the upgrade,
>> >> > the
>> >> > database has constantly been growing to the tune of 5-6 GB a day.
>> >> > Normally,
>> >> > the database, as a whole, is ~20GB; currently, it is ~89GB. We have a
>> >> > couple
>> >> > other servers which run equivalent databases and actually synchronize
>> >> > the
>> >> > records to each other via a 3rd party application (one I do not have
>> >> > access
>> >> > to the inner workings). The other databases are ~20GB as they should
>> be.
>> >>
>> >> Our machine is an Ubuntu 12.04 system running on AWS, so it's a 64 bit
>> >> system:
>> >>
>> >> PostgreSQL 9.1.9 on x86_64-unknown-linux-gnu, compiled by gcc
>> >> (Ubuntu/Linaro 4.6.3-1ubuntu5) 4.6.3, 64-bit
>> >>
>> >> > Running the following SQL, it's fairly obvious there's an issue with
>> a
>> >> > particular table, and, more specifically, its TOAST table.
>> >>
>> >> Same thing here: we have a table with around 2-3 megs of data that is
>> >> blowing up to *10 gigs*.
>> >>
>> >> > This TOAST table is for a table called "timeseries" which saves large
> >> > records of blobbed data. A SUM(LENGTH(blob)/1024./1024.) of all the
>> >> > records
>> >> > in timeseries yields ~16GB for that column. There should be [b]no
>> >> > reason[/b]
>> >> > this table's TOAST table should be as large as it is.
>> >>
>> >> Similar situation: it's a bytea column that gets "a lot" of updates;
>> >> in the order of 10's of thousands a day.
>> >>
>> >> > I've performed a VACUUM FULL VERBOSE ANALYZE timeseries, and the
>> vacuum
>> >> > runs
>> >> > to completion with no errors.
>> >>
>> >> VACUUM FULL fixes the problem for us by recouping all the wasted disk
>> >> space.  I don't have the knowledge to investigate much further on my
>> >> own, but I'd be happy to try out a few things.  The database is,
>> >> unfortunately, sensitive data that I can't share, but I could probably
>> >> script a similar situation...
>> >>
>> >> --
>> >> David N. Welton
>> >>
>> >> http://www.dedasys.com/
>> >
>> >
>> >
>> >
>> > --
>> > Bradley D. J. McCune
>>
>>
>>
>> --
>> To understand recursion, one must first understand recursion.
>>
>
>
>
> --
> Bradley D. J. McCune
> NOAA/OCWWS/HSD
> Community Hydrologic Prediction System - Support
> CHPS FogBugz Administrator
> Office phone:  (301) 713-1625   x160
>
>
>


-- 
To understand recursion, one must first understand recursion.


Re: [GENERAL] pg_upgrade could not create catalog dump while upgrading from 9.0 to 9.2

2013-07-12 Thread Adrian Klaver

On 07/12/2013 04:56 AM, shankar.kotamar...@gmail.com wrote:

Hello,
I am upgrading from PostgreSQL 9.0 to 9.2 using the pg_upgrade utility.
Creation of the catalog dump failed. The reason is that in 9.2 the view
pg_roles has a column "rolcreatedblink" which is missing in 9.0. Please
help me to move forward.
Thanks in Advance

*Last lines in pg_upgrade_utility.log*
command: "/u01/PostgresPlus/9.2AS/bin/pg_dumpall" --port 50432 --username
"postgres" --schema-only --binary-upgrade  -f pg_upgrade_dump_all.sql >>
"pg_upgrade_utility.log" 2>&1
pg_dumpall: query failed: ERROR:  column "rolcreatedblink" does not exist
LINE 1: ...rolsuper, rolinherit, rolcreaterole, rolcreatedb, rolcreated...
  ^
pg_dumpall: query was: SELECT oid, rolname, rolsuper, rolinherit,
rolcreaterole, rolcreatedb, rolcreatedblink, rolcreatepublicdblink,
roldroppublicdblink, false as rolpolicyexempt, rolcanlogin, rolconnlimit,
rolpassword, rolvaliduntil, false as rolreplication,
pg_catalog.shobj_description(oid, 'pg_authid') as rolcomment FROM pg_authid
ORDER BY 2



There is something else going on here. I am not seeing that column in the docs:

http://www.postgresql.org/docs/9.2/interactive/view-pg-roles.html

nor in the database:

postgres=# select version();
                 version
------------------------------------------------
 PostgreSQL 9.2.3

postgres=# \d+ pg_roles
                               View "pg_catalog.pg_roles"
     Column     |           Type           | Modifiers | Storage  | Description
----------------+--------------------------+-----------+----------+-------------
 rolname        | name                     |           | plain    |
 rolsuper       | boolean                  |           | plain    |
 rolinherit     | boolean                  |           | plain    |
 rolcreaterole  | boolean                  |           | plain    |
 rolcreatedb    | boolean                  |           | plain    |
 rolcatupdate   | boolean                  |           | plain    |
 rolcanlogin    | boolean                  |           | plain    |
 rolreplication | boolean                  |           | plain    |
 rolconnlimit   | integer                  |           | plain    |
 rolpassword    | text                     |           | extended |
 rolvaliduntil  | timestamp with time zone |           | plain    |
 rolconfig      | text[]                   |           | extended |
 oid            | oid                      |           | plain    |
View definition:
 SELECT pg_authid.rolname, pg_authid.rolsuper, pg_authid.rolinherit,
    pg_authid.rolcreaterole, pg_authid.rolcreatedb, pg_authid.rolcatupdate,
    pg_authid.rolcanlogin, pg_authid.rolreplication, pg_authid.rolconnlimit,
    ''::text AS rolpassword, pg_authid.rolvaliduntil,
    s.setconfig AS rolconfig, pg_authid.oid
   FROM pg_authid
   LEFT JOIN pg_db_role_setting s ON pg_authid.oid = s.setrole AND s.setdatabase = 0::oid;



I noticed from the above that you are using PostgresPlus. Are you using one 
of the EDB extended versions of Postgres?





Shankar Kotamarthy
Postgresql DBA



--



--
Adrian Klaver
adrian.kla...@gmail.com


--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] initdb of pg 9.0.13 fails on pg_authid

2013-07-12 Thread snark
Hello! I also have the same problem and the same configuration (x86_64, gcc
4.8.0), but the -O flag didn't help me.
I've tried a few variants, like:
./configure --prefix=/usr CFLAGS="-O"
./configure --prefix=/usr CFLAGS="-O0"
./configure --prefix=/usr CFLAGS="-O1"
./configure --prefix=/usr CFLAGS="-O2"
./configure --prefix=/usr CFLAGS="-O3"
but none of them helped me get rid of the error.

I have also tried compiling with gcc-4.7 (from source) (again, on x86_64),
but this hasn't helped either.



--
View this message in context: 
http://postgresql.1045698.n5.nabble.com/initdb-of-pg-9-0-13-fails-on-pg-authid-tp5750773p5763623.html
Sent from the PostgreSQL - general mailing list archive at Nabble.com.


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] transactional swap of tables

2013-07-12 Thread Merlin Moncure
On Fri, Jul 12, 2013 at 9:24 AM, Kevin Grittner  wrote:
> Vincenzo Romano  wrote:
>
>> I'd like to "replace" a full table F with an empty one E.
>> In order to do this I see only one way:
>>
>> ALTER TABLE F RENAME TO T;
>> ALTER TABLE E RENAME TO F;
>> ALTER TABLE T RENAME TO E; -- optional
>>
>> This implies there's a moment when the full table doesn't exist.
>> Would a transaction enclosure ensure that the table F will be
>> always available to all clients?
>
> Yes.  What you show is safe.  What has a race condition is dropping
> the old table before all transactions which started with it have
> completed.  If you're going to drop the old table, be sure to wait
> long enough after the COMMIT for things to "settle".

Be advised that when doing rename-based swaps, all dependent structures
(foreign keys, views, table-based composite types, etc.) will still be
pointing at the old table.

merlin


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] How can you get "WAL segment has already been removed" when doing synchronous replication ?!

2013-07-12 Thread Jeff Janes
On Thu, Jul 11, 2013 at 11:01 AM, hubert depesz lubaczewski
 wrote:
> On Thu, Jul 11, 2013 at 11:29:24PM +0530, Raghavendra wrote:
>> On Thu, Jul 11, 2013 at 11:18 PM, hubert depesz lubaczewski <
>> dep...@depesz.com> wrote:
>>
>> >
>> > Yet, every now and then we're getting:
>> > FATAL:  requested WAL segment * has already been removed
>> >
>> > Assuming no part of the system is issuing "set synchronous_commit
>> > = off", how can we get in such situation?
>> >
>> > Best regards,
>> >
>> > depesz
>> >
>> >
>> Increasing the wal_keep_segments ?
>
> I know that I can increase wal_keep_segments to "solve" it, but
> shouldn't it be *impossible* to happen with synchronous replication?

If a single transaction spans both log-switch boundaries and
checkpoint boundaries (at least two of the latter, I think), it is
possible for a file to be recycled before the commit, and hence before
any attempt to sync-to-standby has occurred.

> After all - all commits should wait for slave to be 100% up to date!

But if the file isn't there on the sending end, no amount of waiting can help.

It looks like what is needed is to invoke the SyncRepWaitForLSN code
just before log file recycle, as well as upon transaction commit.
I'm not sure why that isn't already done indirectly.  Doesn't the
checkpointer insert a WAL record upon completion of a checkpoint
indicating that completion, before any recycling is attempted?  Surely
the LSN of that record is higher than that in any file becoming
eligible for recycling.  But I guess that that record is not a commit
record, so does not trigger the sync rep.
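For reference, the wal_keep_segments workaround mentioned upthread is a
postgresql.conf setting on the primary; the value below is purely
illustrative and should be sized from your observed WAL volume (each
segment being 16MB by default):

    # postgresql.conf (primary) -- sketch only
    wal_keep_segments = 128    # retain ~2GB of WAL for lagging standbys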

Cheers,

Jeff


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] V8.4 TOAST table problem

2013-07-12 Thread Bradley McCune
The only transactions present were "" for current_query.  I even
stopped the remote services, restarted the PostgreSQL server (presumably
there should be no transactions occurring now), and performed another
VACUUM FULL followed by a REINDEX, due to the known VACUUM FULL index
bloat in pre-9.0 versions.


On Fri, Jul 12, 2013 at 11:28 AM, Scott Marlowe wrote:

> Did you have a long-running transaction? Especially a prepared
> transaction, blocking the vacuum from reclaiming the space?
>
> On Fri, Jul 12, 2013 at 8:10 AM, Bradley McCune 
> wrote:
> > David,
> >
> > (As a preface, I have already gone forward with completely rebuilding the
> > database which seems to have finally fixed the problem.  Rebuilding the
> > table itself had no effect, and I couldn't wait much longer to move
> > forward.)
> >
> > Yes, this seems similar, however, the key difference being that VACUUM
> FULL
> > did not alleviate the problem.  The extra "bloated" disk space was still
> > considered "in use" by the data server, and so it was never returned to
> the
> > system.  I have a suspicion that the server was storing the table data in
> > pages in an inefficient manner (by unknown means) because we had roughly
> ~5x
> > the number of pages used on that TOAST table to store the same number of
> > tuples compared to other similar databases.
> >
> > Depending on how often you have to use VACUUM FULL, you might want to
> > consider tweaking the autovacuum to be more aggressive on that hot table
> to
> > keep it in check more often.  (Recycling the disk space more efficiently
> > rather than sending it back to the server only to be reallocated to the
> > database again.)
> >
> >
> > On Fri, Jul 12, 2013 at 4:09 AM, David Welton 
> wrote:
> >>
> >> Hi,
> >>
> >> I have a very similar problem... details below.
> >>
> >> On Wed, Jul 3, 2013 at 5:19 PM, Paul Tilles 
> wrote:
> >> > Recently, I've had a PostgreSQL 8.2.11 server upgraded to 8.4 in order
> >> > to
> >> > take advantage of autovacuum features. This server exists in a very
> >> > closed
> >> > environment (isolated network, limited root privileges; this explains
> >> > the
> >> > older software in use) and runs on RHEL5.5 (i686). After the upgrade,
> >> > the
> >> > database has constantly been growing to the tune of 5-6 GB a day.
> >> > Normally,
> >> > the database, as a whole, is ~20GB; currently, it is ~89GB. We have a
> >> > couple
> >> > other servers which run equivalent databases and actually synchronize
> >> > the
> >> > records to each other via a 3rd party application (one I do not have
> >> > access
> >> > to the inner workings). The other databases are ~20GB as they should
> be.
> >>
> >> Our machine is an Ubuntu 12.04 system running on AWS, so it's a 64 bit
> >> system:
> >>
> >> PostgreSQL 9.1.9 on x86_64-unknown-linux-gnu, compiled by gcc
> >> (Ubuntu/Linaro 4.6.3-1ubuntu5) 4.6.3, 64-bit
> >>
> >> > Running the following SQL, it's fairly obvious there's an issue with a
> >> > particular table, and, more specifically, its TOAST table.
> >>
> >> Same thing here: we have a table with around 2-3 megs of data that is
> >> blowing up to *10 gigs*.
> >>
> >> > This TOAST table is for a table called "timeseries" which saves large
> >> > records of blobbed data. ASUM(LENGTH(blob)/1024./1024.) of all the
> >> > records
> >> > in timeseries yields ~16GB for that column. There should be [b]no
> >> > reason[/b]
> >> > this table's TOAST table should be as large as it is.
> >>
> >> Similar situation: it's a bytea column that gets "a lot" of updates;
> >> in the order of 10's of thousands a day.
> >>
> >> > I've performed a VACUUM FULL VERBOSE ANALYZE timeseries, and the
> vacuum
> >> > runs
> >> > to completion with no errors.
> >>
> >> VACUUM FULL fixes the problem for us by recouping all the wasted disk
> >> space.  I don't have the knowledge to investigate much further on my
> >> own, but I'd be happy to try out a few things.  The database is,
> >> unfortunately, sensitive data that I can't share, but I could probably
> >> script a similar situation...
> >>
> >> --
> >> David N. Welton
> >>
> >> http://www.dedasys.com/
> >
> >
> >
> >
> > --
> > Bradley D. J. McCune
>
>
>
> --
> To understand recursion, one must first understand recursion.
>



-- 
Bradley D. J. McCune
NOAA/OCWWS/HSD
Community Hydrologic Prediction System - Support
CHPS FogBugz Administrator
Office phone:  (301) 713-1625   x160


Re: [GENERAL] Changing the function used in an index.

2013-07-12 Thread Tom Lane
Clodoaldo Neto  writes:
> How does the planner know that the function used in an index has changed?
> If the function's body is changed keeping the argument and return types the
> planner will not use it anymore:

Ordinarily, changing a function definition like that *would* break
things.  The only thing that's saving you from yourself in this example
is that the function is inline-able SQL, and so what the planner is
seeing is not "f(i)" vs "f(i)", but "i * -1" vs just "i".  Even so,
the index is broken/corrupt, or will be as soon as you make more
insertions into it with the changed function.

regards, tom lane


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] transactional swap of tables

2013-07-12 Thread Kevin Grittner
Vincenzo Romano  wrote:

> Would you please elaborate more on the "wait[ing] long enough
> after the COMMIT" ?

You can note the time when you commit the transaction, and then
poll pg_stat_activity until there are no active transactions which
started before that.
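A sketch of that poll, against the pre-9.2 pg_stat_activity (the
timestamp literal is a stand-in for the noted commit time):

    SELECT count(*)
      FROM pg_stat_activity
     WHERE xact_start IS NOT NULL
       AND xact_start < '2013-07-12 12:00:00+00'::timestamptz;
    -- once this returns 0, no transaction predating the swap is still
    -- open, and the old table can be dropped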

You can sometimes simplify this a little.  In the case where I did
something like this, the only use of the table was by a web
application which timed out any query which didn't complete in 20
seconds.  To that web team, and those using that web application,
there is absolutely no difference between a query which takes more
than 20 seconds and one where PostgreSQL throws an error; the
resulting user-facing behavior is identical.  So if, after waiting
more than 20 seconds, a query blocked long enough to get an error
on trying to access the dropped table OID, that made no difference
to them -- the user would have already been given an error.  In
that situation I was comfortable just hard-coding a 60-second sleep
before dropping the old table.

--
Kevin Grittner
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] V8.4 TOAST table problem

2013-07-12 Thread Scott Marlowe
Did you have a long-running transaction? Especially a prepared
transaction, blocking the vacuum from reclaiming the space?

On Fri, Jul 12, 2013 at 8:10 AM, Bradley McCune  wrote:
> David,
>
> (As a preface, I have already gone forward with completely rebuilding the
> database which seems to have finally fixed the problem.  Rebuilding the
> table itself had no effect, and I couldn't wait much longer to move
> forward.)
>
> Yes, this seems similar, however, the key difference being that VACUUM FULL
> did not alleviate the problem.  The extra "bloated" disk space was still
> considered "in use" by the data server, and so it was never returned to the
> system.  I have a suspicion that the server was storing the table data in
> pages in an inefficient manner (by unknown means) because we had roughly ~5x
> the number of pages used on that TOAST table to store the same number of
> tuples compared to other similar databases.
>
> Depending on how often you have to use VACUUM FULL, you might want to
> consider tweaking the autovacuum to be more aggressive on that hot table to
> keep it in check more often.  (Recycling the disk space more efficiently
> rather than sending it back to the server only to be reallocated to the
> database again.)
>
>
> On Fri, Jul 12, 2013 at 4:09 AM, David Welton  wrote:
>>
>> Hi,
>>
>> I have a very similar problem... details below.
>>
>> On Wed, Jul 3, 2013 at 5:19 PM, Paul Tilles  wrote:
>> > Recently, I've had a PostgreSQL 8.2.11 server upgraded to 8.4 in order
>> > to
>> > take advantage of autovacuum features. This server exists in a very
>> > closed
>> > environment (isolated network, limited root privileges; this explains
>> > the
>> > older software in use) and runs on RHEL5.5 (i686). After the upgrade,
>> > the
>> > database has constantly been growing to the tune of 5-6 GB a day.
>> > Normally,
>> > the database, as a whole, is ~20GB; currently, it is ~89GB. We have a
>> > couple
>> > other servers which run equivalent databases and actually synchronize
>> > the
>> > records to each other via a 3rd party application (one I do not have
>> > access
>> > to the inner workings). The other databases are ~20GB as they should be.
>>
>> Our machine is an Ubuntu 12.04 system running on AWS, so it's a 64 bit
>> system:
>>
>> PostgreSQL 9.1.9 on x86_64-unknown-linux-gnu, compiled by gcc
>> (Ubuntu/Linaro 4.6.3-1ubuntu5) 4.6.3, 64-bit
>>
>> > Running the following SQL, it's fairly obvious there's an issue with a
>> > particular table, and, more specifically, its TOAST table.
>>
>> Same thing here: we have a table with around 2-3 megs of data that is
>> blowing up to *10 gigs*.
>>
>> > This TOAST table is for a table called "timeseries" which saves large
>> > records of blobbed data. ASUM(LENGTH(blob)/1024./1024.) of all the
>> > records
>> > in timeseries yields ~16GB for that column. There should be [b]no
>> > reason[/b]
>> > this table's TOAST table should be as large as it is.
>>
>> Similar situation: it's a bytea column that gets "a lot" of updates;
>> in the order of 10's of thousands a day.
>>
>> > I've performed a VACUUM FULL VERBOSE ANALYZE timeseries, and the vacuum
>> > runs
>> > to completion with no errors.
>>
>> VACUUM FULL fixes the problem for us by recouping all the wasted disk
>> space.  I don't have the knowledge to investigate much further on my
>> own, but I'd be happy to try out a few things.  The database is,
>> unfortunately, sensitive data that I can't share, but I could probably
>> script a similar situation...
>>
>> --
>> David N. Welton
>>
>> http://www.dedasys.com/
>
>
>
>
> --
> Bradley D. J. McCune



-- 
To understand recursion, one must first understand recursion.


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Distributed systems and primary keys

2013-07-12 Thread Melvin Call
On Fri, Jul 12, 2013 at 10:04 AM, Koichi Suzuki wrote:

> In Postgres-XC, GTM assigns sequence value to all the transactions in its
> cluster.   XC is a kind of tightly-coupled distributed system.   In a
> loosely-coupled distributed system, where each database is autonomous, we
> may need another mechanism.
>
> I've learned that logical replication (used to be bi-directional
> replication) people are doing this kind of work.
>
> Regards;
>
> --
> Koichi Suzuki
>

Thanks Koichi. I will be looking into it shortly.

Melvin


>
>
> 2013/7/12 Joshua D. Drake 
>
>>
>> On 07/12/2013 07:23 AM, Melvin Call wrote:
>>
>>> Hello list,
>>>
>>> Can anyone point me to some reading material on how auto-generated
>>> sequence primary keys are handled on distributed systems? I think the
>>> advice used to be to use GUIDs, but I thought I read somewhere that
>>> PostgreSQL now assigns a pool of numbers to each node when a sequence is
>>> implemented. I have searched the PostgreSQL 9.1.5 Documentation, but
>>> apparently my search terms are not quite what it takes, or dreamt that
>>> up.
>>>
>>
>> PostgreSQL itself does not support a distributed architecture. You may be
>> thinking of Postgres-XC?
>>
>> Sequences are local to each instances and it is not a pool, it is a 64bit
>> allocation for each sequence within the local node, generally constrained
>> only when called from the serial (big serial being 64 bits) type to 32 bits.
>>
>> Sincerely,
>>
>> Joshua D. Drake
>>
>>
>>
>>
>>> Thanks,
>>> Melvin
>>>
>>
>>
>> --
>> Command Prompt, Inc. - http://www.commandprompt.com/  509-416-6579
>> PostgreSQL Support, Training, Professional Services and Development
>> High Availability, Oracle Conversion, Postgres-XC, @cmdpromptinc
>> For my dreams of your image that blossoms
>>a rose in the deeps of my heart. - W.B. Yeats
>>
>>
>> --
>> Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
>> To make changes to your subscription:
>> http://www.postgresql.org/**mailpref/pgsql-general
>>
>
>


Re: [GENERAL] Distributed systems and primary keys

2013-07-12 Thread Melvin Call
On Fri, Jul 12, 2013 at 9:46 AM, Joshua D. Drake wrote:

>
> On 07/12/2013 07:23 AM, Melvin Call wrote:
>
>> Hello list,
>>
>> Can anyone point me to some reading material on how auto-generated
>> sequence primary keys are handled on distributed systems? I think the
>> advice used to be to use GUIDs, but I thought I read somewhere that
>> PostgreSQL now assigns a pool of numbers to each node when a sequence is
>> implemented. I have searched the PostgreSQL 9.1.5 Documentation, but
>> apparently my search terms are not quite what it takes, or dreamt that up.
>>
>
> PostgreSQL itself does not support a distributed architecture. You may be
> thinking of Postgres-XC?
>
> Sequences are local to each instances and it is not a pool, it is a 64bit
> allocation for each sequence within the local node, generally constrained
> only when called from the serial (big serial being 64 bits) type to 32 bits.
>
> Sincerely,
>
> Joshua D. Drake
>

Thank you Joshua.

Yep, if you are correct, and I believe you are, then obviously I was
thinking of something else (this happens when you work on multiple systems
I'm afraid). That would explain why I was having so much trouble finding
something that wasn't there.

This will make our planning a bit easier because I now know what the limits
currently are. I will be checking out Postgres-XC as well.

Regards,
Melvin


Re: [GENERAL] Distributed systems and primary keys

2013-07-12 Thread Koichi Suzuki
In Postgres-XC, the GTM assigns sequence values to all the transactions in
its cluster.   XC is a kind of tightly-coupled distributed system.   In a
loosely-coupled distributed system, where each database is autonomous, we
may need another mechanism.

I've learned that logical replication (used to be bi-directional
replication) people are doing this kind of work.

Regards;

--
Koichi Suzuki


2013/7/12 Joshua D. Drake 

>
> On 07/12/2013 07:23 AM, Melvin Call wrote:
>
>> Hello list,
>>
>> Can anyone point me to some reading material on how auto-generated
>> sequence primary keys are handled on distributed systems? I think the
>> advice used to be to use GUIDs, but I thought I read somewhere that
>> PostgreSQL now assigns a pool of numbers to each node when a sequence is
>> implemented. I have searched the PostgreSQL 9.1.5 Documentation, but
>> apparently my search terms are not quite what it takes, or dreamt that up.
>>
>
> PostgreSQL itself does not support a distributed architecture. You may be
> thinking of Postgres-XC?
>
> Sequences are local to each instances and it is not a pool, it is a 64bit
> allocation for each sequence within the local node, generally constrained
> only when called from the serial (big serial being 64 bits) type to 32 bits.
>
> Sincerely,
>
> Joshua D. Drake
>
>
>
>
>> Thanks,
>> Melvin
>>
>
>
> --
> Command Prompt, Inc. - http://www.commandprompt.com/  509-416-6579
> PostgreSQL Support, Training, Professional Services and Development
> High Availability, Oracle Conversion, Postgres-XC, @cmdpromptinc
> For my dreams of your image that blossoms
>a rose in the deeps of my heart. - W.B. Yeats
>
>
> --
> Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
> To make changes to your subscription:
> http://www.postgresql.org/**mailpref/pgsql-general
>


Re: [GENERAL] V8.4 TOAST table problem

2013-07-12 Thread Bradley McCune
David,

(As a preface, I have already gone forward with completely rebuilding the
database which seems to have finally fixed the problem.  Rebuilding the
table itself had no effect, and I couldn't wait much longer to move
forward.)

Yes, this seems similar, however, the key difference being that VACUUM FULL
did not alleviate the problem.  The extra "bloated" disk space was still
considered "in use" by the data server, and so it was never returned to the
system.  I have a suspicion that the server was storing the table data in
pages in an inefficient manner (by unknown means) because we had roughly
~5x the number of pages used on that TOAST table to store the same number
of tuples compared to other similar databases.

Depending on how often you have to use VACUUM FULL, you might want to
consider tweaking the autovacuum to be more aggressive on that hot table to
keep it in check more often.  (Recycling the disk space more efficiently
rather than sending it back to the server only to be reallocated to the
database again.)
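As a sketch, that per-table autovacuum tuning would use storage parameters
(available since 8.4, if memory serves; the numbers are illustrative, not
recommendations):

    ALTER TABLE timeseries SET (
        autovacuum_vacuum_scale_factor = 0.05,  -- vacuum at ~5% dead tuples
        autovacuum_vacuum_threshold    = 200
    );
    -- the TOAST relation can be tuned separately via the toast. prefix:
    ALTER TABLE timeseries SET (toast.autovacuum_vacuum_scale_factor = 0.05);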


On Fri, Jul 12, 2013 at 4:09 AM, David Welton  wrote:

> Hi,
>
> I have a very similar problem... details below.
>
> On Wed, Jul 3, 2013 at 5:19 PM, Paul Tilles  wrote:
> > Recently, I've had a PostgreSQL 8.2.11 server upgraded to 8.4 in order to
> > take advantage of autovacuum features. This server exists in a very
> closed
> > environment (isolated network, limited root privileges; this explains the
> > older software in use) and runs on RHEL5.5 (i686). After the upgrade, the
> > database has constantly been growing to the tune of 5-6 GB a day.
> Normally,
> > the database, as a whole, is ~20GB; currently, it is ~89GB. We have a
> couple
> > other servers which run equivalent databases and actually synchronize the
> > records to each other via a 3rd party application (one I do not have
> access
> > to the inner workings). The other databases are ~20GB as they should be.
>
> Our machine is an Ubuntu 12.04 system running on AWS, so it's a 64 bit
> system:
>
> PostgreSQL 9.1.9 on x86_64-unknown-linux-gnu, compiled by gcc
> (Ubuntu/Linaro 4.6.3-1ubuntu5) 4.6.3, 64-bit
>
> > Running the following SQL, it's fairly obvious there's an issue with a
> > particular table, and, more specifically, its TOAST table.
>
> Same thing here: we have a table with around 2-3 megs of data that is
> blowing up to *10 gigs*.
>
> > This TOAST table is for a table called "timeseries" which saves large
> > records of blobbed data. ASUM(LENGTH(blob)/1024./1024.) of all the
> records
> > in timeseries yields ~16GB for that column. There should be [b]no
> reason[/b]
> > this table's TOAST table should be as large as it is.
>
> Similar situation: it's a bytea column that gets "a lot" of updates;
> in the order of 10's of thousands a day.
>
> > I've performed a VACUUM FULL VERBOSE ANALYZE timeseries, and the vacuum
> runs
> > to completion with no errors.
>
> VACUUM FULL fixes the problem for us by recouping all the wasted disk
> space.  I don't have the knowledge to investigate much further on my
> own, but I'd be happy to try out a few things.  The database is,
> unfortunately, sensitive data that I can't share, but I could probably
> script a similar situation...
>
> --
> David N. Welton
>
> http://www.dedasys.com/
>



-- 
Bradley D. J. McCune


[GENERAL] pg_upgrade could not create catalog dump while upgrading from 9.0 to 9.2

2013-07-12 Thread shankar.kotamar...@gmail.com
Hello,
I am upgrading from PostgreSQL 9.0 to 9.2 using the pg_upgrade utility.
Creation of the catalog dump failed. The reason is that in 9.2 the view
pg_roles has a column "rolcreatedblink" which is missing in 9.0. Please
help me to move forward.
Thanks in Advance

*Last lines in pg_upgrade_utility.log*
command: "/u01/PostgresPlus/9.2AS/bin/pg_dumpall" --port 50432 --username
"postgres" --schema-only --binary-upgrade  -f pg_upgrade_dump_all.sql >>
"pg_upgrade_utility.log" 2>&1
pg_dumpall: query failed: ERROR:  column "rolcreatedblink" does not exist
LINE 1: ...rolsuper, rolinherit, rolcreaterole, rolcreatedb, rolcreated...
 ^
pg_dumpall: query was: SELECT oid, rolname, rolsuper, rolinherit,
rolcreaterole, rolcreatedb, rolcreatedblink, rolcreatepublicdblink,
roldroppublicdblink, false as rolpolicyexempt, rolcanlogin, rolconnlimit,
rolpassword, rolvaliduntil, false as rolreplication,
pg_catalog.shobj_description(oid, 'pg_authid') as rolcomment FROM pg_authid
ORDER BY 2



Shankar Kotamarthy
Postgresql DBA



--
View this message in context: 
http://postgresql.1045698.n5.nabble.com/pg-upgrade-could-not-create-catalog-dump-while-upgrading-from-9-0-to-9-2-tp5763576.html
Sent from the PostgreSQL - general mailing list archive at Nabble.com.


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] transactional swap of tables

2013-07-12 Thread Vincenzo Romano
> Vincenzo Romano  wrote:
>
>> I'd like to "replace" a full table F with an empty one E.
>> In order to do this I see only one way:
>>
>> ALTER TABLE F RENAME TO T;
>> ALTER TABLE E RENAME TO F;
>> ALTER TABLE T RENAME TO E; -- optional
>>
>> This implies there's a moment when the full table doesn't exist.
>> Would a transaction enclosure ensure that the table F will be
>> always available to all clients?
>
> Yes.  What you show is safe.  What has a race condition is dropping
> the old table before all transactions which started with it have
> completed.  If you're going to drop the old table, be sure to wait
> long enough after the COMMIT for things to "settle".

Would you please elaborate more on the "wait[ing] long enough after
the COMMIT" ?


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Distributed systems and primary keys

2013-07-12 Thread Joshua D. Drake


On 07/12/2013 07:23 AM, Melvin Call wrote:

Hello list,

Can anyone point me to some reading material on how auto-generated
sequence primary keys are handled on distributed systems? I think the
advice used to be to use GUIDs, but I thought I read somewhere that
PostgreSQL now assigns a pool of numbers to each node when a sequence is
implemented. I have searched the PostgreSQL 9.1.5 Documentation, but
apparently my search terms are not quite what it takes, or dreamt that up.


PostgreSQL itself does not support a distributed architecture. You may 
be thinking of Postgres-XC?


Sequences are local to each instance, and it is not a pool: each sequence 
gets a 64-bit allocation within the local node, constrained to 32 bits 
only when it is created via the serial type (bigserial being 64 bits).
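To illustrate the widths (a sketch; the table names are made up):

    CREATE TABLE t32 (id serial PRIMARY KEY);     -- column is int4
    CREATE TABLE t64 (id bigserial PRIMARY KEY);  -- column is int8

    -- the backing sequences are 64-bit either way; a serial column
    -- simply overflows once nextval() passes 2147483647
    SELECT nextval('t32_id_seq'), nextval('t64_id_seq');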


Sincerely,

Joshua D. Drake





Thanks,
Melvin



--
Command Prompt, Inc. - http://www.commandprompt.com/  509-416-6579
PostgreSQL Support, Training, Professional Services and Development
High Availability, Oracle Conversion, Postgres-XC, @cmdpromptinc
For my dreams of your image that blossoms
   a rose in the deeps of my heart. - W.B. Yeats


--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] transactional swap of tables

2013-07-12 Thread Kevin Grittner
Vincenzo Romano  wrote:

> I'd like to "replace" a full table F with an empty one E.
> In order to do this I see only one way:
>
> ALTER TABLE F RENAME TO T;
> ALTER TABLE E RENAME TO F;
> ALTER TABLE T RENAME TO E; -- optional
>
> This implies there's a moment when the full table doesn't exist.
> Would a transaction enclosure ensure that the table F will be
> always available to all clients?

Yes.  What you show is safe.  What has a race condition is dropping
the old table before all transactions which started with it have
completed.  If you're going to drop the old table, be sure to wait
long enough after the COMMIT for things to "settle".
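Spelled out with the placeholders above (a sketch; the post-COMMIT wait
happens outside the transaction):

    BEGIN;
    ALTER TABLE F RENAME TO T;   -- full table moves aside
    ALTER TABLE E RENAME TO F;   -- empty table takes over the name
    ALTER TABLE T RENAME TO E;   -- optional
    COMMIT;                      -- clients never see a moment without F
    -- ... wait for transactions begun before the COMMIT to finish ...
    DROP TABLE E;                -- only if the old data is to be discarded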

--
Kevin Grittner
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] Distributed systems and primary keys

2013-07-12 Thread Melvin Call
Hello list,

Can anyone point me to some reading material on how auto-generated sequence
primary keys are handled on distributed systems? I think the advice used to
be to use GUIDs, but I thought I read somewhere that PostgreSQL now assigns
a pool of numbers to each node when a sequence is implemented. I have
searched the PostgreSQL 9.1.5 documentation, but apparently my search terms
are not quite what it takes, or I dreamt that up.

Thanks,
Melvin
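For the GUID option mentioned above, one common 9.1-era setup uses the
uuid-ossp contrib module (a sketch; the extension's availability depends
on how the server was packaged):

    CREATE EXTENSION "uuid-ossp";

    CREATE TABLE item (
        id      uuid PRIMARY KEY DEFAULT uuid_generate_v4(),  -- random v4
        payload text
    );
    -- v4 keys collide with negligible probability across nodes, so no
    -- per-node number pool is needed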


Re: [GENERAL] Changing the function used in an index.

2013-07-12 Thread David Johnston
Clodoaldo Neto wrote
> How does the planner know that the function used in an index has changed?
> If the function's body is changed keeping the argument and return types
> the
> planner will not use it anymore:

Not a hacker, but the function catalog entry is MVCC just like any other
record, so even if you alter the function without changing the signature
the system still knows that something has changed.  Since the system cannot
inspect the function change to know whether the original and new function
provide the same output given the same input, it can no longer trust the
index.  For me the "how" doesn't matter, and it is working as expected.

Do you have some larger intent, beyond just understanding how this works,
that you have not made clear?

David J.





--
View this message in context: 
http://postgresql.1045698.n5.nabble.com/Changing-the-function-used-in-an-index-tp5763581p5763582.html
Sent from the PostgreSQL - general mailing list archive at Nabble.com.




[GENERAL] Changing the function used in an index.

2013-07-12 Thread Clodoaldo Neto
How does the planner know that the function used in an index has changed?
If the function's body is changed while keeping the argument and return
types, the planner will not use it anymore:

create table t (i integer);
insert into t (i)
select generate_series(1, 10);
analyze t;

A simple function to return the opposite integer:

create or replace function f(i integer)
returns integer as $$
select i * -1;
$$ immutable language sql;

And the index on it:

create index t_i_index on t(f(i));

The index is used:

explain select * from t order by f(i);
QUERY PLAN
---
 Index Scan using t_i_index on t  (cost=0.00..3300.26 rows=10 width=4)

Now the function is changed to return the integer itself:

create or replace function f(i integer)
returns integer as $$
select i;
$$ immutable language sql;

And the index is not used anymore:

explain select * from t order by f(i);
  QUERY PLAN
---
 Sort  (cost=6.32..11366.32 rows=10 width=4)
   Sort Key: i
   ->  Seq Scan on t  (cost=0.00..1443.00 rows=10 width=4)

If the index is rebuilt

reindex index t_i_index;

It is used again:

explain select * from t order by f(i);
QUERY PLAN
---
 Index Scan using t_i_index on t  (cost=0.00..4376.26 rows=10 width=4)

Regards, Clodoaldo


[GENERAL]

2013-07-12 Thread Vincenzo Romano
Hi all
I'm making some experiments with table archiving and I'd like to
"replace" a full table F with an empty one E.
In order to do this I see only one way:

ALTER TABLE F RENAME TO T;
ALTER TABLE E RENAME TO F;
ALTER TABLE T RENAME TO E; -- optional

This implies there's a moment when the full table doesn't exist.
Would a transaction enclosure ensure that the table F will be always
available to all clients?
Thanks.

--
Vincenzo Romano - NotOrAnd.IT
Information Technologies
--
NON QVIETIS MARIBVS NAVTA PERITVS




Re: [GENERAL] How can you get "WAL segment has already been removed" when doing synchronous replication ?!

2013-07-12 Thread hubert depesz lubaczewski
On Fri, Jul 12, 2013 at 12:30:22PM +0530, Amit Langote wrote:
> >> Increasing the wal_keep_segments ?
> > I know that I can increase wal_keep_segments to "solve" it, but
> > shouldn't it be *impossible* to happen with synchronous replication?
> > After all - all commits should wait for slave to be 100% up to date!
> Is it possible that xlog recycling might have caused this wherein the
> xlog segment which is yet to be archived/shipped is recycled? I

As far as I know, pg will not recycle log before it's archived.
Otherwise we wouldn't be able to have archives.

> remember something of that sort. Check this discussion:
> http://www.postgresql.org/message-id/51779b3b.1020...@lab.ntt.co.jp
> Is this logged on the master or a standby?

master.

Best regards,

depesz

-- 
The best thing about modern society is how easy it is to avoid contact with it.
 http://depesz.com/
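
To see what the master believes about the standby at the moment of the
error, a quick check against the 9.2 replication view (column names per the
9.2 catalog) can help confirm whether the standby is actually attached as a
synchronous peer:

```sql
-- Run on the master: one row per connected standby.
-- sync_state should show 'sync' for a synchronous standby;
-- sent_location vs. replay_location shows how far behind it is.
SELECT application_name, state, sync_state,
       sent_location, replay_location
FROM pg_stat_replication;
```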




Re: [GENERAL] function returning select result to JDBC

2013-07-12 Thread Raghavendra
On Fri, Jul 12, 2013 at 3:26 PM, giozh  wrote:

> I need to write a function (invoked by a JDBC client) that returns a select
> query result.
> That function could also return a set of rows. How should I declare the
> return type of the function?
>
>
>
Bunch of examples here:
http://wiki.postgresql.org/wiki/Return_more_than_one_row_of_data_from_PL/pgSQL_functions

---
Regards,
Raghavendra
EnterpriseDB Corporation
Blog: http://raghavt.blogspot.com/


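
One common shape for such a function, as a sketch (the table and function
names here are made up, not from the original question):

```sql
-- A SETOF function returns zero or more rows; a JDBC client can read the
-- result like a plain query:  SELECT * FROM get_active_users();
CREATE TABLE users (id integer, name text, active boolean);

CREATE OR REPLACE FUNCTION get_active_users()
RETURNS SETOF users AS $$
  SELECT * FROM users WHERE active;
$$ LANGUAGE sql STABLE;
```

RETURNS TABLE(...) is an equivalent alternative when the result shape does
not match an existing table.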


[GENERAL] function returning select result to JDBC

2013-07-12 Thread giozh
I need to write a function (invoked by a JDBC client) that returns a select
query result.
That function could also return a set of rows. How should I declare the
return type of the function?



--
View this message in context: 
http://postgresql.1045698.n5.nabble.com/function-returning-select-result-to-JDBC-tp5763565.html
Sent from the PostgreSQL - general mailing list archive at Nabble.com.




Re: [GENERAL] Full text search

2013-07-12 Thread Amit Langote
On Fri, Jul 12, 2013 at 12:56 PM, itishree sukla wrote:
> Hi Raghavendra,
>
> Thanks for your response; however, I want the same kind of result using
> full text search. Is it possible?
>

What do you mean when you say you are using full text search? Like
what is the query that you used? That would be helpful.


--
Amit Langote




Re: [GENERAL] V8.4 TOAST table problem

2013-07-12 Thread David Welton
Hi,

I have a very similar problem... details below.

On Wed, Jul 3, 2013 at 5:19 PM, Paul Tilles  wrote:
> Recently, I've had a PostgreSQL 8.2.11 server upgraded to 8.4 in order to
> take advantage of autovacuum features. This server exists in a very closed
> environment (isolated network, limited root privileges; this explains the
> older software in use) and runs on RHEL5.5 (i686). After the upgrade, the
> database has constantly been growing to the tune of 5-6 GB a day. Normally,
> the database, as a whole, is ~20GB; currently, it is ~89GB. We have a couple
> other servers which run equivalent databases and actually synchronize the
> records to each other via a 3rd party application (one I do not have access
> to the inner workings). The other databases are ~20GB as they should be.

Our machine is an Ubuntu 12.04 system running on AWS, so it's a 64 bit system:

PostgreSQL 9.1.9 on x86_64-unknown-linux-gnu, compiled by gcc
(Ubuntu/Linaro 4.6.3-1ubuntu5) 4.6.3, 64-bit

> Running the following SQL, it's fairly obvious there's an issue with a
> particular table, and, more specifically, its TOAST table.

Same thing here: we have a table with around 2-3 megs of data that is
blowing up to *10 gigs*.

> This TOAST table is for a table called "timeseries" which saves large
> records of blobbed data. A SUM(LENGTH(blob)/1024./1024.) of all the records
> in timeseries yields ~16GB for that column. There should be *no reason*
> this table's TOAST table should be as large as it is.

Similar situation: it's a bytea column that gets "a lot" of updates, on
the order of tens of thousands a day.

> I've performed a VACUUM FULL VERBOSE ANALYZE timeseries, and the vacuum runs
> to completion with no errors.

VACUUM FULL fixes the problem for us by recouping all the wasted disk
space.  I don't have the knowledge to investigate much further on my
own, but I'd be happy to try out a few things.  The database is,
unfortunately, sensitive data that I can't share, but I could probably
script a similar situation...

--
David N. Welton

http://www.dedasys.com/
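
For anyone else chasing the same bloat, one way to compare the table's
on-disk footprint (heap + TOAST + indexes) against the raw payload of the
blob column, using the table and column names from Paul's mail (this query
is a sketch, not from the thread):

```sql
-- on_disk includes the TOAST table and all indexes;
-- payload is the logical size of the stored blobs.
SELECT pg_size_pretty(pg_total_relation_size('timeseries')) AS on_disk,
       pg_size_pretty(sum(length(blob))::bigint)            AS payload
FROM timeseries;
```

A large and growing gap between the two numbers after heavy updates is the
signature of dead-tuple bloat that VACUUM FULL reclaims.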




Re: [GENERAL] Full text search

2013-07-12 Thread itishree sukla
Hi Raghavendra,

Thanks for your response; however, I want the same kind of result using
full text search. Is it possible?

Regards,
Itishree


On Fri, Jul 12, 2013 at 12:14 PM, Raghavendra <raghavendra@enterprisedb.com> wrote:

>
> On Fri, Jul 12, 2013 at 11:48 AM, itishree sukla wrote:
>
>> Hello everyone,
>>
>> I am using full text search; however, it is not helping me match
>> partial values.
>>
>> For example, my document contains "Pizza Hut". If I search for "Pizza Hut"
>> it gives me the values only for "Pizza", and a spelling mistake like
>> "pizz" does not return anything. Any workaround for this? Please suggest.
>>
>> Regards,
>> Itishree
>>
>
> Hope you are looking like this.
>
> create table foo(v text);
> insert into foo values('Near to my house there\'s no any Pizza Hut
> restuarant');
> insert into foo values('I wont like pizza, but friends are crazy of it');
>
>
> postgres=# select * from foo  where v ~* 'pizz';
>   v
> --
>  Near to my house there's no any Pizza Hut restuarant
>  I wont like pizza, but friends are crazy of it
> (2 rows)
>
> postgres=# select * from foo  where v ~* 'pizza hut';
>   v
> --
>  Near to my house there's no any Pizza Hut restuarant
> (1 row)
>
> postgres=# select * from foo  where v ~* 'pizza';
>   v
> --
>  Near to my house there's no any Pizza Hut restuarant
>  I wont like pizza, but friends are crazy of it
> (2 rows)
>
> or
>
> with ILIKE
>
> select * from foo where v ilike '%hut%';
>
>
> ---
> Regards,
> Raghavendra
> EnterpriseDB Corporation
> Blog: http://raghavt.blogspot.com/
>
>
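
For the partial and misspelled matches the OP asked about, two sketches
that go beyond ILIKE, reusing the sample `foo` table above (the pg_trgm
example assumes the contrib module is available, and the similarity
threshold is illustrative):

```sql
-- Prefix matching with full text search: 'pizz:*' matches 'pizza'.
SELECT v FROM foo
WHERE to_tsvector('english', v) @@ to_tsquery('english', 'pizz:*');

-- Trigram similarity catches misspellings like 'piza' (needs pg_trgm).
CREATE EXTENSION IF NOT EXISTS pg_trgm;
SELECT v FROM foo WHERE similarity(v, 'piza') > 0.1;
```

Full text search alone only matches whole lexemes (or prefixes of them), so
for genuine typo tolerance pg_trgm is usually the better fit.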


Re: [GENERAL] How can you get "WAL segment has already been removed" when doing synchronous replication ?!

2013-07-12 Thread Amit Langote
On Thu, Jul 11, 2013 at 11:31 PM, hubert depesz lubaczewski wrote:
> On Thu, Jul 11, 2013 at 11:29:24PM +0530, Raghavendra wrote:
>> On Thu, Jul 11, 2013 at 11:18 PM, hubert depesz lubaczewski <
>> dep...@depesz.com> wrote:
>>
>> > We are seeing situation like this:
>> > 1. 9.2.4 database
>> > 2. Master settings:
>> >name|setting
>> > ---+---
>> >  fsync | on
>> >  synchronize_seqscans  | on
>> >  synchronous_commit| remote_write
>> >  synchronous_standby_names | *
>> >  wal_sync_method   | open_datasync
>> > (5 rows)
>> >
>> > Yet, every now and then we're getting:
>> > FATAL:  requested WAL segment * has already been removed
>> >
>> > Assuming no part of the system is issuing "set synchronous_commit
>> > = off", how can we get in such situation?
>> >
>> > Best regards,
>> >
>> > depesz
>> >
>> >
>> Increasing the wal_keep_segments ?
>
> I know that I can increase wal_keep_segments to "solve" it, but
> shouldn't it be *impossible* to happen with synchronous replication?
> After all - all commits should wait for slave to be 100% up to date!
>

Is it possible that xlog recycling might have caused this wherein the
xlog segment which is yet to be archived/shipped is recycled? I
remember something of that sort. Check this discussion:

http://www.postgresql.org/message-id/51779b3b.1020...@lab.ntt.co.jp

Is this logged on the master or a standby?

--
Amit Langote

