Re: [GENERAL] variable not found in subplan target list

2011-11-01 Thread Roger Niederland



On 11/1/2011 8:30 PM, Tom Lane wrote:

Roger Niederland  writes:

I extracted the tables from the database which generates the error above.
To reduce the file size, I eliminated most of the columns while keeping
the query's failing behavior.  I have a zipped backup from Windows
PostgreSQL 9.1.1 (32-bit) which is about 1.5 MB.
I verified that the database restored from this file still produces the
error.

If you'd send that to me off-list, I'd be happy to take a look.

regards, tom lane






I stripped enough out of the database that it is only good as a test
case.  Here is a public URL for the database backup:

http://dl.dropbox.com/u/13814973/test2.zip

The sql I posted earlier can be used on this to generate the error.

Thanks for taking a look at this!
Roger




Re: [GENERAL] Subselects in select expressions

2011-11-01 Thread Royce Ausburn

On 02/11/2011, at 2:16 PM, Tom Lane wrote:

> Royce Ausburn  writes:
>> [ random() is evaluated only once in ]
>> test=# select (select random()) from generate_series(1,10); -- rows are the 
>> same
> 
>> I understand that it's likely an optimisation thing -- postgres knows
>> that the subselect doesn't depend on the FROM rows so it evaluates it
>> only once, but is this really correct behaviour?
> 
> I can't find anything in the SQL standard that explicitly addresses this
> point, but a bit of googling turns up documentation for both Oracle and
> DB/2 that say they evaluate uncorrelated subqueries just once:
> http://www.smart-soft.co.uk/Oracle/advanced-sql-tutorial.htm
> http://publib.boulder.ibm.com/infocenter/iseries/v5r3/index.jsp?topic=/sqlp/rbafycorrs.htm
> whereas correlated subqueries are evaluated once per row.
> 
> So we're in good company here.  Even if we were not, I'd be loath to
> change it, because people have historically relied on this behavior to
> force single evaluation of what otherwise would be considered volatile
> functions.

That's good enough for me, Tom.  Thanks =)




Re: [GENERAL] Regarding the shared disk fail over cluster configuration

2011-11-01 Thread Ondrej Ivanič
Hi,

On 2 November 2011 02:00, Debasis Mishra  wrote:
> Thanks a lot for your reply. I just want to know whether I need to run
> initdb, or whether setting the PGDATA environment variable is enough?

The master needs to be properly initialised & configured:
- install Postgres
- run initdb
- install your application schema, ...
- point $PGDATA to the SAN volume

Secondary:
- install Postgres
- point $PGDATA to the SAN volume (the volume stays mounted on the primary!)

During failover, the clustering software has to:
- shut down the primary (usually via STONITH (Shoot The Other Node In The
  Head), aka fencing)
- mount the SAN volume on the secondary
- start up Postgres on the secondary

-- 
Ondrej Ivanic
(ondrej.iva...@gmail.com)



Re: [GENERAL] adding a column takes FOREVER!

2011-11-01 Thread Eric Smith
Tom,

Well... there's a lot of data hiding in each of those rows... as much as 4MB in 
each.  I'll make allowances in my code so that adding a column without a 
default is a workable solution. 

Thank you,
Eric

On Nov 1, 2011, at 8:27 PM, Tom Lane wrote:

> Eric Smith  writes:
>> Thank you for the response... to be perfectly honest, I don't know enough to 
>> know what I'm not telling you.  Below is the string I use to create the 
>> table, so you can see the contents.  I don't think I have foreign key 
>> references or triggers of any kind.  Any ideas? (this is 8.3 running on Mac 
>> OS 10.7)
> 
> I'm wondering how much data is hiding behind this column:
> 
>>  [tableString appendString:@"\"imageData\" bytea null,"];
> 
> Since you're installing a non-null column default value, the ALTER TABLE
> ADD COLUMN command has to rewrite the entire table.  If there are large
> images hiding in each of those "only 14000" rows, there'd be a lot of
> data to copy over and so it could take awhile.
> 
> (In contrast, ADD COLUMN without a default value is speedy because
> Postgres plays some tricks to avoid rewriting the table data.  That
> won't help you if you have to install non-null values in the new
> column, but it's good to know that there's a difference.)
> 
>   regards, tom lane




Re: [GENERAL] variable not found in subplan target list

2011-11-01 Thread Tom Lane
Roger Niederland  writes:
> I extracted the tables from the database which generates the error above.
> To reduce the file size, I eliminated most of the columns while keeping
> the query's failing behavior.  I have a zipped backup from Windows
> PostgreSQL 9.1.1 (32-bit) which is about 1.5 MB.
> I verified that the database restored from this file still produces the
> error.

If you'd send that to me off-list, I'd be happy to take a look.

regards, tom lane



Re: [GENERAL] adding a column takes FOREVER!

2011-11-01 Thread Tom Lane
Eric Smith  writes:
> Thank you for the response... to be perfectly honest, I don't know enough to 
> know what I'm not telling you.  Below is the string I use to create the 
> table, so you can see the contents.  I don't think I have foreign key 
> references or triggers of any kind.  Any ideas? (this is 8.3 running on Mac 
> OS 10.7)

I'm wondering how much data is hiding behind this column:

>   [tableString appendString:@"\"imageData\" bytea null,"];

Since you're installing a non-null column default value, the ALTER TABLE
ADD COLUMN command has to rewrite the entire table.  If there are large
images hiding in each of those "only 14000" rows, there'd be a lot of
data to copy over and so it could take awhile.

(In contrast, ADD COLUMN without a default value is speedy because
Postgres plays some tricks to avoid rewriting the table data.  That
won't help you if you have to install non-null values in the new
column, but it's good to know that there's a difference.)
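
To make that concrete, here's a rough sketch of the two variants, using the
table and column names from your example:

-- rewrites every row, because the non-null default has to be stored:
ALTER TABLE images ADD COLUMN "saveState" varchar(1) DEFAULT '0';

-- no rewrite on 8.3; the new column just reads back as NULL ...
ALTER TABLE images ADD COLUMN "saveState" varchar(1);
-- ... though backfilling it afterwards still touches every row:
UPDATE images SET "saveState" = '0';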

regards, tom lane



Re: [GENERAL] Subselects in select expressions

2011-11-01 Thread Tom Lane
Royce Ausburn  writes:
> [ random() is evaluated only once in ]
> test=# select (select random()) from generate_series(1,10); -- rows are the 
> same

> I understand that it's likely an optimisation thing -- postgres knows
> that the subselect doesn't depend on the FROM rows so it evaluates it
> only once, but is this really correct behaviour?

I can't find anything in the SQL standard that explicitly addresses this
point, but a bit of googling turns up documentation for both Oracle and
DB/2 that say they evaluate uncorrelated subqueries just once:
http://www.smart-soft.co.uk/Oracle/advanced-sql-tutorial.htm
http://publib.boulder.ibm.com/infocenter/iseries/v5r3/index.jsp?topic=/sqlp/rbafycorrs.htm
whereas correlated subqueries are evaluated once per row.

So we're in good company here.  Even if we were not, I'd be loath to
change it, because people have historically relied on this behavior to
force single evaluation of what otherwise would be considered volatile
functions.
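
For example, borrowing the two queries from your message:

-- uncorrelated: evaluated once, so every row gets the same value
select (select random()) from generate_series(1,10);

-- correlated: referencing the outer column forces one evaluation per row
select (select random() where generate_series is not null)
  from generate_series(1,10);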

regards, tom lane



Re: [GENERAL] variable not found in subplan target list

2011-11-01 Thread Roger Niederland



On 11/1/2011 3:54 PM, Roger Niederland wrote:

Hello,

I stripped down the original query to what is below.  I am not saying 
that the query below
is useful except to show an error I am getting in Postgresql 9.1.1 on 
both SL6.1 (64 bit) and
Windows 2008 server  9.1.1 (32-bit and 64-bit).  The error I am 
getting is:


ERROR: variable not found in subplan target list
SQL state: XX000

If I remove the inner join to person (at the end), it works on 9.1.1.

I tried this query on Postgresql 9.1.0 on Windows Vista (32-bit) and 
it works.

It also works on postgres 8.4 on both OSes.

Thanks,
Roger
(sorry if this is double posted; I sent it about 24 hours ago, but it was
held up because I was not subscribed to the list. I have since become a
better citizen and subscribed.)



SELECT
   COALESCE(FreshUps.employeeId, Appts.employeeId) AS employeeId
 FROM
 (
   SELECT
 COALESCE(Ups.employeeId, -1) AS employeeId
   FROM
   (
 SELECT Lead.salesmanId AS employeeId
 FROM Lead
 GROUP BY Lead.salesmanID) AS Ups
 ) AS FreshUps
 FULL OUTER JOIN
 (
   SELECT
 COALESCE(Ups.employeeId, -1) AS employeeId
   FROM
   (
 SELECT SalesAppointment.SalesmanID AS employeeID
 FROM SalesAppointment
 GROUP BY SalesAppointment.SalesmanID) AS Ups
 ) AS Appts USING (employeeId)

 INNER JOIN Employee USING (employeeid)
 INNER JOIN Person ON Employee.employeeId = Person.personId



-
I extracted the tables from the database which generates the error above.
To reduce the file size, I eliminated most of the columns while keeping
the query's failing behavior.  I have a zipped backup from Windows
PostgreSQL 9.1.1 (32-bit) which is about 1.5 MB.
I verified that the database restored from this file still produces the
error.


Thanks,
Roger


Re: [GENERAL] variable not found in subplan target list

2011-11-01 Thread Tom Lane
Roger Niederland  writes:
> I stripped down the original query to what is below.  I am not saying 
> that the query below
> is useful except to show an error I am getting in Postgresql 9.1.1 on 
> both SL6.1 (64 bit) and
> Windows 2008 server  9.1.1 (32-bit and 64-bit).  The error I am getting is:

> ERROR: variable not found in subplan target list

This is a pretty typical symptom of a certain class of planner bugs,
but I couldn't reproduce any such failure on the basis of what you
showed here.  Please provide a *self-contained* test case.

http://wiki.postgresql.org/wiki/Guide_to_reporting_problems

regards, tom lane



Re: [GENERAL] adding a column takes FOREVER!

2011-11-01 Thread Gauthier, Dave
I think you need exclusive access to the table in order to add columns (I'll
stand/sit corrected if the more experienced responders say differently).  I've
found that you have to either wait for the other sessions to disconnect or
kill them first (as in pg_ctl kill ABRT).
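
One way to see who is actually holding locks on the table before killing
anything (a sketch against the 8.3-era catalogs; substitute your table name):

SELECT l.pid, a.usename, a.current_query, l.mode, l.granted
  FROM pg_locks l
  JOIN pg_stat_activity a ON a.procpid = l.pid
 WHERE l.relation = 'images'::regclass;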



From: pgsql-general-ow...@postgresql.org 
[mailto:pgsql-general-ow...@postgresql.org] On Behalf Of Eric Smith
Sent: Tuesday, November 01, 2011 10:05 PM
To: Tom Lane
Cc: pgsql-general@postgresql.org
Subject: Re: [GENERAL] adding a column takes FOREVER!

Thank you for the response... to be perfectly honest, I don't know enough to 
know what I'm not telling you.  Below is the string I use to create the table, 
so you can see the contents.  I don't think I have foreign key references or 
triggers of any kind.  Any ideas? (this is 8.3 running on Mac OS 10.7)

Thanks again,
Eric

[tableString setString:@""];
[tableString appendString:@"create table images (\"imageID\" varchar(11) primary key,"];
[tableString appendString:@"\"patientID\" varchar(11) null,"];
[tableString appendString:@"\"layoutID\" varchar(11) null,"];
for( iTooth = 0; iTooth < 33; iTooth++ ){
    [tableString appendString:[NSString stringWithFormat:@"tooth_%d varchar(1) default 0,",iTooth]];
}
[tableString appendString:@"\"pixelsWide\" varchar(4) null,"];
[tableString appendString:@"\"pixelsHigh\" varchar(4) null,"];
[tableString appendString:@"\"bytesPerPixel\" varchar(1) null,"];
[tableString appendString:@"\"imageData\" bytea null,"];
[tableString appendString:@"\"filePath\" varchar(256) null,"];
[tableString appendString:@"orientation char(1) null,"];
[tableString appendString:@"sequence char(2) null,"];
[tableString appendString:@"\"genericInfo\" varchar(65536),"];
[tableString appendString:@"time time null,"];
[tableString appendString:@"\"saveState\" varchar(1) default \'0\',"];
[tableString appendString:@"date date null)"];

On Oct 21, 2011, at 8:24 PM, Tom Lane wrote:


Eric Smith <eric_h_sm...@mac.com> writes:

I'm adding a column in postgres 8.3 with the syntax:  alter table images add 
column "saveState" varchar(1) default '0';  It takes a good solid 20 minutes to 
add this column to a table with ~ 14,000 entries.  Why so long?  Is there a way 
to speed that up?  The table has ~ 50 columns.

As Craig explained, that does require updating every row ... but for
only 14000 rows, it doesn't seem like it should take that long.
A quick test with 8.3 on my oldest and slowest machine:

regression=# create table foo as select generate_series(1,14000) as x;
SELECT
Time: 579.518 ms
regression=# alter table foo add column "saveState" varchar(1) default '0';
ALTER TABLE
Time: 482.143 ms

I'm thinking there is something you haven't told us about that creates a
great deal of overhead for updates on this table.  Lots and lots o'
indexes?  Lots and lots o' foreign key references?  Inefficient
triggers?

Or maybe it's just blocking behind somebody else's lock?

regards, tom lane



Re: [GENERAL] Re: [pgsql-general] Need Help With a A Simple Query That's Not So Simple

2011-11-01 Thread Bill Thoen
Thanks! Half the problem searching the 'Net for answers is knowing what it's 
called.

Regards,

Bill Thoen
GISnet
http://gisnet.com
303-786-9961

On Nov 1, 2011, at 10:01 AM, "James B. Byrne"  wrote:

> 
>> Date: Mon, 31 Oct 2011 16:51:02 -0600
>> From: Bill Thoen 
>> To: Postgrresql 
>> Subject: Need Help With a A Simple Query That's Not So
>> Simple
>> Message-ID: <4eaf2656.6020...@gisnet.com>
>> 
>> I think this should be easy, but I can't seem to put the
>> SQL together correctly and would appreciate any help.
>> (I'm using Pg 8.4 in CentOS 5.5, if that matters.)
>> 
>> I have a table of Farms and a table of crops in a 1:M
>> relationship of Farms : Crops. There are lots of
>> different crops to choose from, but for now I'm only
>> interested in two crops; corn and soybeans.
>> 
>> Some farms grow only corn and some grow only soybeans,
>> and some grow both.  What I'd like to know is, which
>> Farms and how many are growing only corn, which and
>> how many are growing soybeans and which and how many are
>> growing both? I can easily get all the corn growers with:
>> 
>> SELECT a.*
>>   FROM farms a
>>   JOIN crops b
>> ON a.farm_id=b.farm_id
>>  WHERE crop_cd='0041'
>> 
>> I can do the same with soybeans (crop_cd= '0081') and
>> then I could subtract the sum of these from the total
>> of all farms that grow either corn or soybeans to get
>> the number of farms growing both, but having to
>> do all those queries sounds very time consuming and
>> inefficient. Is there a better way to get the farm
>> counts or data by categories like farms growing only
>> corn, farms growing only soybeans, farms growing
>> both? I'm also interested in possibly expanding to a
>> general case where I could select more than two crops
>> and get counts of the permutations.
>> 
>> Here's a sketch of the relevant pieces of the data base.
>> 
>> *Tables:*
>> farms                     crops
>> ===                       ===
>> farm_id  bigint (pkey)    crop_id  (pkey)
>> type                      farm_id  foreign key to farms
>> size                      crop_cd  0041 = corn, 0081 = soybeans
>> ...                       year
>>                           ...
>> 
>> Any help would be much appreciated.
>> 
>> TIA,
>> 
>> - Bill Thoen
> 
> I believe that what you are trying to do is called
> relational algebra division. Take a look at these
> references and see if either fits your needs:
> 
> http://en.wikipedia.org/wiki/Relational_algebra#Division_.28.C3.B7.29
> 
> http://www.cs.arizona.edu/~mccann/research/divpresentation.pdf
> 
> 
> 
> 
> 
> -- 
> ***  E-Mail is NOT a SECURE channel  ***
> James B. Byrnemailto:byrn...@harte-lyne.ca
> Harte & Lyne Limited  http://www.harte-lyne.ca
> 9 Brockley Drive  vox: +1 905 561 1241
> Hamilton, Ontario fax: +1 905 561 0757
> Canada  L8E 3C3
> 
> 
> 



Re: [GENERAL] adding a column takes FOREVER!

2011-11-01 Thread Eric Smith
Thank you for the response... to be perfectly honest, I don't know enough to 
know what I'm not telling you.  Below is the string I use to create the table, 
so you can see the contents.  I don't think I have foreign key references or 
triggers of any kind.  Any ideas? (this is 8.3 running on Mac OS 10.7)

Thanks again,
Eric

[tableString setString:@""];
[tableString appendString:@"create table images (\"imageID\" varchar(11) primary key,"];
[tableString appendString:@"\"patientID\" varchar(11) null,"];
[tableString appendString:@"\"layoutID\" varchar(11) null,"];
for( iTooth = 0; iTooth < 33; iTooth++ ){
    [tableString appendString:[NSString stringWithFormat:@"tooth_%d varchar(1) default 0,",iTooth]];
}
[tableString appendString:@"\"pixelsWide\" varchar(4) null,"];
[tableString appendString:@"\"pixelsHigh\" varchar(4) null,"];
[tableString appendString:@"\"bytesPerPixel\" varchar(1) null,"];
[tableString appendString:@"\"imageData\" bytea null,"];
[tableString appendString:@"\"filePath\" varchar(256) null,"];
[tableString appendString:@"orientation char(1) null,"];
[tableString appendString:@"sequence char(2) null,"];
[tableString appendString:@"\"genericInfo\" varchar(65536),"];
[tableString appendString:@"time time null,"];
[tableString appendString:@"\"saveState\" varchar(1) default \'0\',"];
[tableString appendString:@"date date null)"];
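
For readability, this is roughly the SQL that string ends up building (only
tooth_0 is shown; tooth_1 through tooth_32 follow the same pattern):

create table images (
    "imageID"       varchar(11) primary key,
    "patientID"     varchar(11) null,
    "layoutID"      varchar(11) null,
    tooth_0         varchar(1) default 0,
    -- ... tooth_1 through tooth_32 ...
    "pixelsWide"    varchar(4) null,
    "pixelsHigh"    varchar(4) null,
    "bytesPerPixel" varchar(1) null,
    "imageData"     bytea null,
    "filePath"      varchar(256) null,
    orientation     char(1) null,
    sequence        char(2) null,
    "genericInfo"   varchar(65536),
    time            time null,
    "saveState"     varchar(1) default '0',
    date            date null
);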

On Oct 21, 2011, at 8:24 PM, Tom Lane wrote:

> Eric Smith  writes:
>> I'm adding a column in postgres 8.3 with the syntax:  alter table images add 
>> column "saveState" varchar(1) default '0';  It takes a good solid 20 minutes 
>> to add this column to a table with ~ 14,000 entries.  Why so long?  Is there 
>> a way to speed that up?  The table has ~ 50 columns.
> 
> As Craig explained, that does require updating every row ... but for
> only 14000 rows, it doesn't seem like it should take that long.
> A quick test with 8.3 on my oldest and slowest machine:
> 
> regression=# create table foo as select generate_series(1,14000) as x;
> SELECT
> Time: 579.518 ms
> regression=# alter table foo add column "saveState" varchar(1) default '0';
> ALTER TABLE
> Time: 482.143 ms
> 
> I'm thinking there is something you haven't told us about that creates a
> great deal of overhead for updates on this table.  Lots and lots o'
> indexes?  Lots and lots o' foreign key references?  Inefficient
> triggers?
> 
> Or maybe it's just blocking behind somebody else's lock?
> 
>   regards, tom lane



[GENERAL] Subselects in select expressions

2011-11-01 Thread Royce Ausburn
I'm having trouble understanding why these two queries produce different 
results:

test=# select (select random()) from generate_series(1,10); -- rows are the same
 ?column?  
---
 0.770797704812139
 0.770797704812139
 0.770797704812139
 0.770797704812139
 0.770797704812139
 0.770797704812139
 0.770797704812139
 0.770797704812139
 0.770797704812139
 0.770797704812139
(10 rows)

test=# select (select random() where generate_series is not null) from 
generate_series(1,10); -- rows are different
  ?column?  

  0.561828337144107
 0.0275383000262082
  0.290950470604002
  0.281174722127616
  0.530742571223527
  0.617655908688903
  0.169655770529062
  0.428002137690783
  0.442224354483187
  0.203044794034213
(10 rows)

I understand that it's likely an optimisation thing -- postgres knows that the 
subselect doesn't depend on the FROM rows so it evaluates it only once, but is 
this really correct behaviour?  Ideally, shouldn't postgres know that each 
invocation of random() produces different results and so decide that it should 
execute it for each row? If not, why?

Thanks!

--Royce



Re: [GENERAL] Regarding the shared disk fail over cluster configuration

2011-11-01 Thread Debasis Mishra
Thanks Ondrej Ivanic,

Thanks a lot for your reply. I just want to know whether I need to run
initdb, or whether setting the PGDATA environment variable is enough?


Thanks,
Debasis




--
View this message in context: 
http://postgresql.1045698.n5.nabble.com/Regarding-the-shared-disk-fail-over-cluster-configuration-tp4952316p4955421.html
Sent from the PostgreSQL - general mailing list archive at Nabble.com.



Re: [GENERAL] Disable Streaming Replication ==> Stop WAL Sender on master and WAL receiver on Slave

2011-11-01 Thread Jaime Casanova
On Tue, Nov 1, 2011 at 1:35 PM, Samba  wrote:
>
> But how do I disable streaming replication, with the effect that both WAL
> sender on master and WAL receiver on slave gets stopped? Similarly, when I
> enable streaming replication, the WAL sender and WAL receiver processes
> would need to be started.
>

There is no such thing that I know of, but the effect is the same as if
you shut down the standby.
The problem is that if you have a standby server and stop walsender
and walreceiver, what happens when you start them again?

1) there would be accumulated work (WAL records to send and to
receive), so it will take some time until the standby catches up
2) what if the WAL segment that was in use when you stopped
walsender is reused on the master before walsender is reactivated?
(see the sketch below)
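
A quick way to check from SQL whether a walsender is currently streaming (a
sketch, assuming 9.1's pg_stat_replication view on the master):

select application_name, state, sent_location, replay_location
  from pg_stat_replication;
-- raising wal_keep_segments on the master reduces the risk described in 2)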

-- 
Jaime Casanova         www.2ndQuadrant.com
Professional PostgreSQL: Soporte 24x7 y capacitación



[GENERAL] Re: [pgsql-general] Need Help With a A Simple Query That's Not So Simple

2011-11-01 Thread James B. Byrne

> Date: Mon, 31 Oct 2011 16:51:02 -0600
> From: Bill Thoen 
> To: Postgrresql 
> Subject: Need Help With a A Simple Query That's Not So
> Simple
> Message-ID: <4eaf2656.6020...@gisnet.com>
>
> I think this should be easy, but I can't seem to put the
> SQL together correctly and would appreciate any help.
> (I'm using Pg 8.4 in CentOS 5.5, if that matters.)
>
> I have a table of Farms and a table of crops in a 1:M
> relationship of Farms : Crops. There are lots of
> different crops to choose from, but for now I'm only
> interested in two crops; corn and soybeans.
>
> Some farms grow only corn and some grow only soybeans,
> and some grow both.  What I'd like to know is, which
> Farms and how many are growing only corn, which and
> how many are growing soybeans and which and how many are
> growing both? I can easily get all the corn growers with:
>
> SELECT a.*
>FROM farms a
>JOIN crops b
>  ON a.farm_id=b.farm_id
>   WHERE crop_cd='0041'
>
> I can do the same with soybeans (crop_cd= '0081') and
> then I could subtract the sum of these from the total
> of all farms that grow either corn or soybeans to get
> the number of farms growing both, but having to
> do all those queries sounds very time consuming and
> inefficient. Is there a better way to get the farm
> counts or data by categories like farms growing only
> corn, farms growing only soybeans, farms growing
> both? I'm also interested in possibly expanding to a
> general case where I could select more than two crops
> and get counts of the permutations.
>
> Here's a sketch of the relevant pieces of the data base.
>
> *Tables:*
> farms                     crops
> ===                       ===
> farm_id  bigint (pkey)    crop_id  (pkey)
> type                      farm_id  foreign key to farms
> size                      crop_cd  0041 = corn, 0081 = soybeans
> ...                       year
>                           ...
>
> Any help would be much appreciated.
>
> TIA,
>
> - Bill Thoen

I believe that what you are trying to do is called
relational algebra division. Take a look at these
references and see if either fits your needs:

http://en.wikipedia.org/wiki/Relational_algebra#Division_.28.C3.B7.29

http://www.cs.arizona.edu/~mccann/research/divpresentation.pdf
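
For what it's worth, here is a sketch of one way to get the counts in a
single pass with plain GROUP BY (column names taken from your table sketch);
it classifies each farm first rather than using division proper:

SELECT category, count(*) AS farms
  FROM (SELECT farm_id,
               CASE WHEN bool_and(crop_cd = '0041') THEN 'corn only'
                    WHEN bool_and(crop_cd = '0081') THEN 'soybeans only'
                    ELSE 'both'
               END AS category
          FROM crops
         WHERE crop_cd IN ('0041', '0081')
         GROUP BY farm_id) AS per_farm
 GROUP BY category;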





-- 
***  E-Mail is NOT a SECURE channel  ***
James B. Byrnemailto:byrn...@harte-lyne.ca
Harte & Lyne Limited  http://www.harte-lyne.ca
9 Brockley Drive  vox: +1 905 561 1241
Hamilton, Ontario fax: +1 905 561 0757
Canada  L8E 3C3




Re: [GENERAL] Server hitting 100% CPU usage, system comes to a crawl.

2011-11-01 Thread Brian Fehrle

Update on this:

We did a switchover to another machine with the same hardware, however 
this system was running on some older parameters we had set in the 
postgresql.conf file.


So we went from 400 max_connections to 200 max_connections, and 160MB 
work_mem to 200MB work_mem. And now on this other system, so far it 
seems to be running ok.


Other than the obvious fact that each connection has a certain amount of 
memory usage, is there something else to watch for when increasing 
connections to numbers like 400? When we had the issue of the system 
jumping to 100% cpu usage, even at that point our number of backends to 
the cluster was at MAX 250, but generally in the 175 range, so well 
below our 400 max_connections we allow. So could this be the culprit?
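
A quick way to keep an eye on that (a sketch against 8.3's pg_stat_activity,
where idle sessions show current_query = '<IDLE>'):

select sum(case when current_query = '<IDLE>' then 1 else 0 end) as idle,
       sum(case when current_query <> '<IDLE>' then 1 else 0 end) as active
  from pg_stat_activity;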


I'll be watching the cluster as we run on the new configuration (with 
only 200 max_connections).


- Brian F

On 10/27/2011 03:22 PM, Brian Fehrle wrote:

On 10/27/2011 02:50 PM, Tom Lane wrote:

Brian Fehrle  writes:

Hi all, need some help/clues on tracking down a performance issue.
PostgreSQL version: 8.3.11
I've got a system that has 32 cores and 128 gigs of ram. We have
connection pooling set up, with about 100 - 200 persistent connections
open to the database. Our applications then use these connections to
query the database constantly, but when a connection isn't currently
executing a query, it's idle. On average, at any given time, there are
3 - 6 connections that are actually executing a query, while the rest
are idle.
About once a day, queries that normally take just a few seconds slow way
down, and start to pile up, to the point where instead of just having
3-6 queries running at any given time, we get 100 - 200. The whole
system comes to a crawl, and looking at top, the CPU usage is 99%.

This is jumping to a conclusion based on insufficient data, but what you
describe sounds a bit like the sinval queue contention problems that we
fixed in 8.4.  Some prior reports of that:
http://archives.postgresql.org/pgsql-performance/2008-01/msg1.php
http://archives.postgresql.org/pgsql-performance/2010-06/msg00452.php

If your symptoms match those, the best fix would be to update to 8.4.x
or later, but a stopgap solution would be to cut down on the number of
idle backends.

regards, tom lane
That sounds somewhat close to the same issue I am seeing. Main 
differences being that my spike lasts for much longer than a few 
minutes, and can only be resolved when the cluster is restarted. Also, 
that second link shows TOP where much of the CPU is via the 'user', 
rather than the 'sys' like mine.


Is there anything more I can look at to get more info on this 'sinval
queue contention problem'?


Also, having my cpu usage high in 'sys' rather than 'us', could that 
be a red flag? Or is that normal?


- Brian F





Re: [GENERAL] Server hitting 100% CPU usage, system comes to a crawl.

2011-11-01 Thread Tom Lane
Brian Fehrle  writes:
> Update on this:
> We did a switchover to another machine with the same hardware, however 
> this system was running on some older parameters we had set in the 
> postgresql.conf file.

> So we went from 400 max_connections to 200 max_connections, and 160MB 
> work_mem to 200MB work_mem. And now on this other system, so far it 
> seems to be running ok.

> Other than the obvious fact that each connection has a certain amount of 
> memory usage, is there something else to watch for when increasing 
> connections to numbers like 400? When we had the issue of the system 
> jumping to 100% cpu usage, even at that point our number of backends to 
> the cluster was at MAX 250, but generally in the 175 range, so well 
> below our 400 max_connections we allow. So could this be the culprit?

Well, yeah, the pre-8.4 sinval problems I mentioned scale with the
number of live backends.  When you have many more backends in the system,
that will contribute to the problem, even --- in fact, especially --- if
the extra ones are idle.

regards, tom lane



Re: [GENERAL] missing chunk 0 for toast value ...

2011-11-01 Thread Tom Lane
Andrew Hammond  writes:
> On Tue, Oct 25, 2011 at 2:20 PM, Tom Lane  wrote:
>> Hmm ... pg_toast_2619 is pg_statistic's toast table.  Is this 9.0.x,
>> and are you by any chance in the habit of running CLUSTER or VACUUM FULL
>> on your system catalogs?  Could one have been running in the background
>> when this happened?

> This is 9.0.4.

I've applied a patch for this:
http://git.postgresql.org/gitweb/?p=postgresql.git;a=commitdiff;h=7f797d27fe0be5200ad5fd5af6cefcee30c8e24a

It turns out that the problem is very significantly harder to reproduce
pre-9.0, which may explain why you hadn't seen it before ... but it does
exist, at least as far back as 8.2, and probably in every version that
has TOAST tables.

regards, tom lane



[GENERAL] variable not found in subplan target list

2011-11-01 Thread Roger Niederland

Hello,

I stripped down the original query to what is below.  I am not saying 
that the query below
is useful except to show an error I am getting in Postgresql 9.1.1 on 
both SL6.1 (64 bit) and

Windows 2008 server  9.1.1 (32-bit and 64-bit).  The error I am getting is:

ERROR: variable not found in subplan target list
SQL state: XX000

If I remove the inner join to person (at the end), it works on 9.1.1.

I tried this query on Postgresql 9.1.0 on Windows Vista (32-bit) and it 
works.

It also works on postgres 8.4 on both OSes.

Thanks,
Roger
(sorry if this is double posted; I sent it about 24 hours ago, but it was
held up because I was not subscribed to the list. I have since become a
better citizen and subscribed.)



SELECT
   COALESCE(FreshUps.employeeId, Appts.employeeId) AS employeeId
 FROM
 (
   SELECT
 COALESCE(Ups.employeeId, -1) AS employeeId
   FROM
   (
 SELECT Lead.salesmanId AS employeeId
 FROM Lead
 GROUP BY Lead.salesmanID) AS Ups
 ) AS FreshUps
 FULL OUTER JOIN
 (
   SELECT
 COALESCE(Ups.employeeId, -1) AS employeeId
   FROM
   (
 SELECT SalesAppointment.SalesmanID AS employeeID
 FROM SalesAppointment
 GROUP BY SalesAppointment.SalesmanID) AS Ups
 ) AS Appts USING (employeeId)

 INNER JOIN Employee USING (employeeid)
 INNER JOIN Person ON Employee.employeeId = Person.personId



Re: [GENERAL] Server move using rsync

2011-11-01 Thread Alan Hodgson
On October 31, 2011 03:01:19 PM Stephen Denne wrote:
> I'm wondering whether it's worth doing anyway, simply to check that it
> doesn't do something completely unexpected, which would presumably alert
> us to something we hadn't considered.
> 

Testing is always worthwhile, if only to ensure that PostgreSQL will actually 
run with your configuration on the new machine (sufficient shared memory, IP 
addresses specified in postgresql.conf, etc). 

However, assuming the PostgreSQL binary packages you're using are identical,
and assuming that you aren't changing tablespace pointers around, the rsync /
restart is pretty fool-proof in terms of reliably copying PostgreSQL itself.
PostgreSQL is good about updating time stamps on modified files, so you don't
have to worry about needing the full compare options on rsync or anything;
"-avr --delete" is generally sufficient.

You might disable WAL archiving during a test startup to avoid sending 
duplicates to your backup server.




[GENERAL] Disable Streaming Replication ==> Stop WAL Sender on master and WAL receiver on Slave

2011-11-01 Thread Samba
Hi all,

I read in the pg mail archives that the postgresql-9.1 release added support
for pausing and resuming (or disabling and enabling) streaming replication,
but I could not find any relevant configuration settings for achieving this.

The closest that I could find so far are:

- replication_timeout (integer), which stops replication connections that are
  inactive (perhaps due to standby failure)
- replication_timeout_action (enum), which was mentioned in some mail archives
  but is not present in the current documentation
- pg_xlog_replay_pause() and pg_xlog_replay_resume(), which just pause the
  recovery (the application of WAL contents) on the standby but still let
  streaming replication continue, accumulating WAL files on the standby
  without applying them (see the calls sketched below)
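
For reference, those replay-control calls are ordinary SQL functions run on
the standby (a sketch, assuming 9.1):

select pg_xlog_replay_pause();      -- stop applying WAL; streaming continues
select pg_is_xlog_replay_paused();  -- check the current state
select pg_xlog_replay_resume();     -- start applying WAL again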

Interestingly, even in the documentation for pg_xlog_replay_pause, it was
written that

"If streaming replication is disabled, the paused state may continue
indefinitely without problem. While streaming replication is in progress
WAL records will continue to be received, which will eventually fill
available disk space, depending upon the duration of the pause, the rate of
WAL generation and available disk space"


But how do I disable streaming replication, with the effect that both WAL
sender on master and WAL receiver on slave gets stopped? Similarly, when I
enable streaming replication, the WAL sender and WAL receiver processes
would need to be started.

Could some one please explain the GUC settings or the SQL function calls
that help achieving this?


Thanks and Regards,
Samba


Re: [GENERAL] FATAL: the database system is starting up

2011-11-01 Thread patrick keshishian
On Wed, Oct 26, 2011 at 5:46 PM, Scott Marlowe  wrote:
> On Wed, Oct 26, 2011 at 6:09 PM, patrick keshishian  
> wrote:
>> On Wed, Oct 26, 2011 at 4:49 PM, William E. Moreno A.
>>  wrote:
>>> Solution: http://archives.postgresql.org/pgsql-general/2007-12/msg01339.php
>>> Solution: Message-id: <476d6de1.4050...@latnet.lv>
>>> 
>>>
>>> Problem: FATAL:  the database system is starting up
>>>
>>> Solved:   change postgresql_flags in /etc/rc.conf to: postgresql_flags="-s 
>>> -m fast" or postgresql_flags="-s -m smart"
>>
>> Interestingly enough, I got bit by this recently. Our setup does not have
>> a "-w" option. During upgrades we "cycle" our PostgreSQL daemon. We
>> had to change the "pg_ctl stop" command to "-mimmediate" because
>> during upgrades we would find that someone had an abandoned psql shell
>> running for days and that would halt our upgrade script.
>>
>> Adding "-m immediate" for shutdown seemed like a logical choice to get
>> around this sort of a "procedural" issue(s).
>
> didn't -m fast work?

I forget now why "-m fast" wasn't used. I tried to dig up anything I
had in my notes and did a set of new experiments (hence the late
reply), but didn't come up with much.

About the only difference between '-m fast' and '-m immediate' seems to
be the following log entry:

  database system was not properly shut down; automatic recovery in progress

For now, I've changed the script to use '-m fast' and removed the
sleep. Will find out after some moderate use of it whether it was a
wise decision or not :-)

--patrick



Re: [GENERAL] pglesslog for Postgres 9.1.1

2011-11-01 Thread mailtolouis2020-postg...@yahoo.com
Hi,

Thanks for the patch, tested and it working fine.

Regards
Louis



>
>From: Satoshi Nagayasu 
>To: "mailtolouis2020-postg...@yahoo.com" 
>Cc: Tom Lane ; Postgres 
>Sent: Friday, October 28, 2011 5:51 AM
>Subject: Re: [GENERAL] pglesslog for Postgres 9.1.1
>
>Hi Louis,
>
>2011/10/27 19:49, mailtolouis2020-postg...@yahoo.com wrote:
>> Hi,
>>
>> I'm sorry I'm not good in C, anyone can help to put a patch or release a new 
>> version for that?
>>
>>
>>
>> Regards
>> Louis
>>
>>     *From:* Tom Lane 
>>     *To:* "mailtolouis2020-postg...@yahoo.com" 
>>
>>     *Cc:* Postgres 
>>     *Sent:* Wednesday, October 26, 2011 3:42 PM
>>     *Subject:* Re: [GENERAL] pglesslog for Postgres 9.1.1
>>
>>     "mailtolouis2020-postg...@yahoo.com" writes:
>>      > remove.c:182: error: ‘XLOG_GIN_INSERT’ undeclared (first use in this 
>>function)
>>      > remove.c:182: error: (Each undeclared identifier is reported only once
>>      > remove.c:182: error: for each function it appears in.)
>>      > remove.c:184: error: ‘XLOG_GIN_VACUUM_PAGE’ undeclared (first use in 
>>this function)
>>      > remove.c:186: error: ‘XLOG_GIN_DELETE_PAGE’ undeclared (first use in 
>>this function)
>>
>>     That stuff got moved to gin_private.h in 9.1 ...
>>
>>     regards, tom lane
>
>I'm taking part in this.
>
>Try this patch,
>
>https://gist.github.com/1321650
>
>and build as following.
>
>$ make USE_PGXS=1 top_builddir=/path/to/postgresql-9.1.0
>
>Regards,
>
>-- 
>NAGAYASU Satoshi 
>
>
>