Re: [GENERAL] Postgres 9.0 + LDAP

2011-04-13 Thread Sim Zacks

On 04/13/2011 03:36 AM, Vinzenz Bildstein wrote:
Right now the line I would add to the pg_hba.conf would look something 
like this:
host database all CIDR ldap ldapserver=my.domain.com 
ldapbasedn="dc=my,dc=domain,dc=com" ldapprefix="cn=ldap,cn=users" 
ldapsuffix="dc=my,dc=domain,dc=com"



You need to include the user name field.
Mine looks like:
 ldap ldapserver=my-ldap-server.domain.local ldapprefix="uid=" 
ldapsuffix=",ou=Users,dc=domain,dc=local"


Where the authentication would generally look like:
uid=sim,ou=Users,dc=domain,dc=local
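
Putting the two messages together, a corrected version of the original
pg_hba.conf line might look like this (a sketch only: the CIDR range is a
placeholder, and the AD-style prefix/suffix assume users live under cn=users;
PostgreSQL builds the bind DN as ldapprefix || username || ldapsuffix):

```
host database all 192.168.1.0/24 ldap ldapserver=my.domain.com ldapprefix="cn=" ldapsuffix=",cn=users,dc=my,dc=domain,dc=com"
```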

Sim

--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] 9.0 Out of memory

2011-04-13 Thread Jeremy Palmer
Hi Tom,

Wow thank you so much for the hint!

The plpgsql code that could be to blame is in the snippet below. I had a 
look and I'm not sure why it might be leaking. Is it because I assign v_id1 
and v_id2 to the return table's 'id' record, return it, and then assign to 
v_id1 or v_id2 again from the cursor?

CREATE OR REPLACE FUNCTION bde_control.bde_gettabledifferences(p_table1 
regclass, p_table2 regclass, p_compare_key name)
  RETURNS TABLE("action" character, id bigint) AS
...
...
FETCH FIRST FROM v_table_cur1 INTO v_id1, v_check1, v_uniq1;
FETCH FIRST FROM v_table_cur2 INTO v_id2, v_check2, v_uniq2;

WHILE v_id1 IS NOT NULL AND v_id2 IS NOT NULL LOOP
    IF v_id1 < v_id2 THEN
        action := 'D';
        id := v_id1;
        RETURN NEXT;
        FETCH NEXT FROM v_table_cur1 INTO v_id1, v_check1, v_uniq1;
        CONTINUE;
    ELSIF v_id2 < v_id1 THEN
        action := 'I';
        id := v_id2;
        RETURN NEXT;
        FETCH NEXT FROM v_table_cur2 INTO v_id2, v_check2, v_uniq2;
        CONTINUE;
    ELSIF v_uniq1 <> v_uniq2 THEN
        action := 'X';
        id := v_id1;
        RETURN NEXT;
    ELSIF v_check1 <> v_check2 THEN
        action := 'U';
        id := v_id1;
        RETURN NEXT;
    END IF;
    FETCH NEXT FROM v_table_cur1 INTO v_id1, v_check1, v_uniq1;
    FETCH NEXT FROM v_table_cur2 INTO v_id2, v_check2, v_uniq2;
END LOOP;

WHILE v_id1 IS NOT NULL LOOP
    action := 'D';
    id := v_id1;
    RETURN NEXT;
    FETCH NEXT FROM v_table_cur1 INTO v_id1, v_check1, v_uniq1;
END LOOP;

WHILE v_id2 IS NOT NULL LOOP
    action := 'I';
    id := v_id2;
    RETURN NEXT;
    FETCH NEXT FROM v_table_cur2 INTO v_id2, v_check2, v_uniq2;
END LOOP;

CLOSE v_table_cur1;
CLOSE v_table_cur2;

RETURN;

The full function can be read here:

https://github.com/linz/linz_bde_uploader/blob/master/sql/bde_control_functions.sql#L3263

The actual query for both cursors, in the case of the table that was probably 
causing the error, looks like this (v_table_cur1 and v_table_cur2 differ only 
in the table referenced):

SELECT 
   id AS ID,
   COALESCE('V|' || CAST(T.estate_description AS TEXT), '|N') || '|V' || 
   CAST(T.guarantee_status AS TEXT) || '|V' || 
   CAST(T.issue_date AS TEXT) || '|V' || 
   CAST(T.land_district AS TEXT) || '|V' || 
   CAST(T.number_owners AS TEXT) || '|V' || 
   CAST(T.part_share AS TEXT) || 
   COALESCE('V|' || CAST(T.shape AS TEXT), '|N') || '|V' || 
   CAST(T.status AS TEXT) || '|V' || 
   CAST(T.title_no AS TEXT) || '|V' || 
   CAST(T.type AS TEXT) AS check_sum,
   '' AS check_uniq
FROM 
   lds.titles AS T
ORDER BY
   id ASC;

The definition for the table looks like this:

CREATE TABLE titles (
    id INTEGER NOT NULL PRIMARY KEY,
    title_no VARCHAR(20) NOT NULL,
    status VARCHAR(4) NOT NULL,
    type TEXT NOT NULL,
    land_district VARCHAR(100) NOT NULL,
    issue_date TIMESTAMP NOT NULL,
    guarantee_status TEXT NOT NULL,
    estate_description TEXT,
    number_owners INT8 NOT NULL,
    part_share BOOLEAN NOT NULL,
    shape GEOMETRY
);

CREATE INDEX shx_title_shape ON titles USING gist (shape);

Thanks,
Jeremy

From: Tom Lane [t...@sss.pgh.pa.us]
Sent: Wednesday, 13 April 2011 5:44 p.m.
To: Jeremy Palmer
Cc: pgsql-general@postgresql.org
Subject: Re: [GENERAL] 9.0 Out of memory

Jeremy Palmer  writes:
> Ok I have attached the map, or least what I think the map is.

Yup, that's what I was after.  It looks like the main problem is here:

> PortalHeapMemory: 16384 total in 4 blocks; 5944 free (0 chunks); 10440 
> used
>   ExecutorState: 122880 total in 4 blocks; 63984 free (8 chunks); 58896 
> used
> ExprContext: 2622363000 total in 9 blocks; 21080 free (15 chunks); 
> 2622341920 used

You've evidently got a leak during execution of a query that's being run
in a "portal", which most likely is a cursor or plpgsql FOR-IN-SELECT
query.  Unfortunately there's not enough information here to tell which
query that is, but maybe you can narrow it down now.  I'm guessing that
some specific function or operator you're using in that query is leaking
memory with successive executions.

regards, tom lane

[GENERAL] updating rows which have a common value for consecutive dates

2011-04-13 Thread Lonni J Friedman
Greetings,
I have a table full of automated test data, which continuously has new
unique data inserted:

     Column     |            Type             |                       Modifiers
----------------+-----------------------------+-------------------------------------------------------
 id             | integer                     | not null default nextval('dbltests_id_seq'::regclass)
 testname       | text                        | not null
 last_update    | timestamp without time zone | not null default now()
 current_status | text                        | not null
 os             | text                        | not null
 arch           | text                        | not null
 build_type     | text                        | not null
 branch         | text                        | not null

The 'testname' column contains many different tests, and each unique
'testname' has numerous different associated os,arch,build_type &
branch values.  For example, testname='foo' will run on
os='Linux',arch='i686',build_type='debug',branch='t38', and also on
os='Windows7',arch='x86_64',build_type='release',branch='r40', etc,
and there will be many other different testname's with similar
permutations of the os,arch,build_type & branch columns.  So for
example, there will also be testname='bar' or testname='omega' or
testname='sigma' for the other assorted os,arch,build_type & branch
permutations.

The current_status column is either 'PASSED' or 'FAILED'.

What I'm struggling with is how to put together an SQL query which
shows me all instances of a given testname permutation
(os, arch, build_type, branch) whose two or more most recent consecutive
'last_update' values all have a current_status of 'FAILED'.

Suggestions welcome.

thanks in advance!



Re: [GENERAL] Trying out replication: cp cannot stat log file during recovery

2011-04-13 Thread Henry C.
On Wed, April 13, 2011 04:28, Fujii Masao wrote:
> When the standby fails to read the WAL file from the archive, it tries to
> read that from the master via replication connection. So the standby would not
> skip that file.

Great, thanks.  It looks like it's proceeding normally (if slow) then.




[GENERAL] Blob handling with Delphi...

2011-04-13 Thread Durumdara
Hi!

PG9.0, Delphi 6, Zeos.

I want to use a PostgreSQL bytea field as a normal BLOB field in Delphi.

But when I insert a value into this field, for example all characters (chr
0..255), then fetch it and save it as a blob stream into a file, I get an
interesting result, not what I stored there previously.

It has a \x prefix, and the values are stored as hexadecimal.

Is this normal, and do I need to convert this format into something readable
before I use it, or can I get the same result as in other databases/adapters
(the stream saved with BlobField.SaveToFile having content equal to what
BlobField.LoadFromFile loaded)?

Many DB-aware components can show the blob directly as an image. With PG's \x
prefix this won't work well... :-(


Thanks for your help:
dd


Re: [GENERAL] Blob handling with Delphi...

2011-04-13 Thread John R Pierce

On 04/13/11 1:28 AM, Durumdara wrote:

Hi!

PG9.0, Delphi 6, Zeos.

I want to use a PostgreSQL bytea field as a normal BLOB field in Delphi.

But when I insert a value into this field, for example all characters 
(chr 0..255), then fetch it and save it as a blob stream into a file, I 
get an interesting result, not what I stored there previously.

It has a \x prefix, and the values are stored as hexadecimal.

Is this normal, and do I need to convert this format into something 
readable before I use it, or can I get the same result as in other 
databases/adapters (the stream saved with BlobField.SaveToFile having 
content equal to what BlobField.LoadFromFile loaded)?

Many DB-aware components can show the blob directly as an image. With 
PG's \x prefix this won't work well... :-(




In 9.0, the default encoding for bytea changed, and if your client 
interface doesn't understand the new encoding (and isn't using the libpq 
entry points for handling bytea encoding), it will fail like this. 
There's a SET variable that will restore the old behavior.

See 
http://www.postgresql.org/docs/current/static/runtime-config-client.html#GUC-BYTEA-OUTPUT


SET bytea_output='escape';

should revert to the previous behavior, and may well fix your problem 
with Delphi.
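
A quick psql sketch of the difference (9.0 or later; 'hex' is the new default):

```sql
SELECT 'abc'::bytea;            -- hex output format: \x616263
SET bytea_output = 'escape';    -- per-session; can also go in postgresql.conf
SELECT 'abc'::bytea;            -- pre-9.0 style output: abc
```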




[GENERAL] Speeding up replication startup/recovery

2011-04-13 Thread Henry C.
Greets,

My test replication seems to be proceeding normally, but the process appears
to be quite slow:

SLAVE
...postgres: startup process   recovering 000101900024(1)
...postgres: wal receiver process   streaming 190/244FEA80

MASTER
...postgres: wal sender process replicator 1.1.1.1(55390) streaming 190/244FEA80

There are quite a few log files to process and both machines are not heavily
taxed.  Is there any way to expedite this initial recovery process (1)?  It
seems to be chugging along at a rather sedate pace.

Thanks
Henry





Re: [GENERAL] Speeding up replication startup/recovery

2011-04-13 Thread Simon Riggs
On Wed, Apr 13, 2011 at 10:03 AM, Henry C.  wrote:

> My test replication seems to be proceeding normally, but the process appears
> to be quite slow:
>
> SLAVE
> ...postgres: startup process   recovering 000101900024    (1)
> ...postgres: wal receiver process   streaming 190/244FEA80
>
> MASTER
> ...postgres: wal sender process replicator 1.1.1.1(55390) streaming 
> 190/244FEA80
>
> There are quite a few log files to process and both machines are not heavily
> taxed.  Is there any way to expedite this initial recovery process (1)?  It
> seems to be chugging along at a rather sedate pace.

The replication lag is zero since the master and slave WAL locations match.

There seems to be nothing to expedite... why do you say it is slow?

Maybe because you see this as an "initial recovery process". The
recovery process remains active while processing continues.

-- 
 Simon Riggs   http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services



Re: [GENERAL] Speeding up replication startup/recovery

2011-04-13 Thread Henry C.
On Wed, April 13, 2011 11:16, Simon Riggs wrote:
>> seems to be chugging along at a rather sedate pace.
>
> The replication lag is zero since the master and slave WAL locations match.
>
>
> There seems to be nothing to expedite... why do you say it is slow?
>
>
> Maybe because you see this as an "initial recovery process". The
> recovery process remains active while processing continues.

I think you're right - I'm not sure what I was expecting, but I need to be a
bit more realistic. The rsync took an hour or two, which means there are a
lot of updates to catch up on (recovery has been at it for several hours
now).

Cheers
Henry




[GENERAL] Postgres 8.3 error on shared memory on Windows

2011-04-13 Thread paulo matadr
Hi All,

Does anyone know how to start Postgres on Windows 2003 R2 with 2 GB of shared 
memory on an O.S. with 8 GB?

FATAL ERROR: could not create shared memory segment 5
Failed system call was MapViewOfFileEx

I tried using PAE (/PAE /3GB) but without success.



Regards



 Paulo


[GENERAL] Weird WAL problem - 9.0.3

2011-04-13 Thread Rafael Martinez
Hello

Yesterday we had a weird problem with the pg_xlog partition in one of
our servers:

- The number of WAL files was much higher than (2*checkpoint_segments)+1
(over 360 WAL files)

- The WAL files were not created/recycled in time order. Here is an
example:

.
16777216 Apr 12 17:49 0001000D001C
16777216 Apr 12 17:49 0001000D001D
16777216 Apr 12 17:49 0001000D001E
16777216 Apr 12 17:52 0001000D001F
16777216 Apr 12 17:50 0001000D0020
16777216 Apr 12 17:51 0001000D0021
16777216 Apr 12 17:49 0001000D0022
16777216 Apr 12 17:49 0001000D0023
16777216 Apr 12 17:49 0001000D0024
16777216 Apr 12 17:51 0001000D0025
.

This is the first time I have seen this behavior, and the result was a full
pg_xlog partition.

This happened when testing an upgrade procedure and moving on the fly
with "pg_dumpall | psql" around 30 databases (ca. 140GB) from an 8.3
server to a 9.0 one.

Is this normal? If it is, how can we find out the max. number of WAL
files a 9.0 system can generate in the worst-case scenario?

Some relevant information about this system:

PostgreSQL 9.0.3 - ext4 - RHEL5.6 - 2.6.18-238.5.1.el5 - x86_64

checkpoint_segments: 128
wal_buffers: 512kB
wal_level: archive
wal_sync_method: fdatasync
shared_buffers: 10GB


regards,
-- 
Rafael Martinez Guerrero
Center for Information Technology
University of Oslo, Norway

PGP Public Key: http://folk.uio.no/rafael/





Re: [GENERAL] Weird WAL problem - 9.0.3

2011-04-13 Thread Adrian Klaver
On Wednesday, April 13, 2011 6:09:25 am Rafael Martinez wrote:
> Hello
> 
> Yesterday we had a weird problem with the pg_xlog partition in one of
> our servers:
> 
> - The amount of WAL files was much higher than (2*checkpoint_segments)+1
> (over 360 WAL files)
> 

Might want to take a look at:
http://www.postgresql.org/docs/9.0/interactive/wal-configuration.html
In particular:

"There will always be at least one WAL segment file, and will normally not be 
more files than the higher of wal_keep_segments or (2 + 
checkpoint_completion_target) * checkpoint_segments + 1. Each segment file is 
normally 16 MB (though this size can be altered when building the server). You 
can use this to estimate space requirements for WAL. Ordinarily, when old log 
segment files are no longer needed, they are recycled (renamed to become the 
next 
segments in the numbered sequence). If, due to a short-term peak of log output 
rate, there are more than 3 * checkpoint_segments + 1 segment files, the 
unneeded 
segment files will be deleted instead of recycled until the system gets back 
under this limit."
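
For the checkpoint_segments = 128 reported in the original message, that
documented short-term ceiling works out like this (a quick sanity check,
assuming the standard 16 MB segment size):

```sql
SELECT 3 * 128 + 1        AS max_segment_files,  -- 385 files
       (3 * 128 + 1) * 16 AS max_pg_xlog_mb;     -- 6160 MB
```

So the 360+ files observed are within the documented short-term peak.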

-- 
Adrian Klaver
adrian.kla...@gmail.com


Re: [BUGS] [GENERAL] PostgreSQL backend process high memory usage issue

2011-04-13 Thread Merlin Moncure
On Wed, Apr 13, 2011 at 12:29 AM, Tom Lane  wrote:
> Merlin Moncure  writes:
>> I think you may have uncovered a leak (I stand corrected).
>
>> The number of schemas in your test is irrelevant -- the leak is
>> happening in proportion to the number of views (set via \setrandom
>> tidx 1 10).  At 1 I don't think it exists at all -- at 100 memory use
>> grows very fast.
>
> I don't think it's a leak, exactly: it's just that the "relcache" entry
> for each one of these views occupies about 100K.  A backend that touches
> N of the views is going to need about N*100K in relcache space.  I can't
> get terribly excited about that.  Trying to reduce the size of the
> relcache would be a net loss for most usage patterns (ie, we'd end up
> increasing the amount of re-fetching from the system catalogs that
> backends would have to do).  And I don't think that this test case has
> much of anything to do with sane application design, anyway.  Do you
> really need that many complex views?  Do you really need to have most
> sessions touching all of them?

Ya, my mistake -- it *felt* like a leak when of course it was not.
100k does seem like an awful lot though -- perhaps this could be
organized better? -- but that's not really the point.  I've coded a
lot of multi schema designs and they tend to either go the one
session/schema route or the connection pooling route.  Either way,
cache memory usage tends to work itself out pretty well (it's never
been a problem for me before at least).  I can't recall anyone ever
even complaining about it in a non synthetic test.

merlin



Re: [GENERAL] Weird WAL problem - 9.0.3

2011-04-13 Thread Rafael Martinez
On Wed, 2011-04-13 at 06:28 -0700, Adrian Klaver wrote:
> On Wednesday, April 13, 2011 6:09:25 am Rafael Martinez wrote:

> 
> Might want to take a look at:
> 
[..]
>  sequence). If, due to a short-term peak of log output rate, there are
> more than 3 * checkpoint_segments + 1 segment files, the unneeded
> segment files will be deleted instead of recycled until the system
> gets back under this limit."
> 

Thank you. 

This explains the number of WAL files generated; I was not aware of the
(3*checkpoint_segments + 1) limit (I will RTBM better next time).

But this doesn't explain the WAL files not being created/recycled in time
order. I wonder if this happened because the partition got full while the
WALs were being created/recycled?

regards,
-- 
Rafael Martinez Guerrero
Center for Information Technology
University of Oslo, Norway

PGP Public Key: http://folk.uio.no/rafael/





[GENERAL] Cursor metadata

2011-04-13 Thread Andy Chambers

Hi All,

Is there anywhere in the Postgres catalog where one can access metadata  
about a held cursor? Type information in particular would be really useful.

Cheers,
Andy

--
Andy Chambers



[GENERAL] Adding a default value to a column after it exists

2011-04-13 Thread Gauthier, Dave
Is there a way to add a default value definition to an existing column?  
Something like an "alter table... alter column... default 'foo'".

I thought of a clumsy way to do this... create a temp column, set its value to 
that of the column to be altered, drop the column to be altered, redefine it 
with the default, shunt all the values in the temp column over to the new 
column, and then drop the temp column. But I have before and after triggers on 
the table that react to changes in this column (not to mention the need for 
its existence).

I could add something to the before trigger to do this too. But it would be 
cleaner to do this as a column property.

Thanks for any help.


Re: [GENERAL] Adding a default value to a column after it exists

2011-04-13 Thread Mike Fowler

Hi Dave,

On 13/04/11 17:21, Gauthier, Dave wrote:


Is there a way to add a default value definition to an existing 
column?  Something like an "alter table... alter column... default 'foo'".




There sure is, something like this:

ALTER TABLE tablename ALTER COLUMN columnname SET DEFAULT expression;

For full documentation see: 
http://www.postgresql.org/docs/9.0/static/sql-altertable.html



Thanks for any help.



Regards,

--
Mike Fowler
Registered Linux user: 379787




Re: [GENERAL] Adding a default value to a column after it exists

2011-04-13 Thread Tom Lane
"Gauthier, Dave"  writes:
> Is there a way to add a default value definition to an existing column?  
> Something like an "alter table... alter column... default 'foo'".

ALTER TABLE ... ALTER COLUMN ... SET DEFAULT ...

regards, tom lane



Re: [GENERAL] Adding a default value to a column after it exists

2011-04-13 Thread Andrew Sullivan
On Wed, Apr 13, 2011 at 09:21:20AM -0700, Gauthier, Dave wrote:
> Is there a way to add a default value definition to an existing column?  
> Something like an "alter table... alter column... default 'foo'".

ALTER TABLE table ALTER [ COLUMN ] column SET DEFAULT expression

(see http://www.postgresql.org/docs/9.0/interactive/sql-altertable.html)

Note that this doesn't actually update the fields that are NULL in the
column already.  For that, once you had the default in place, you
could do

UPDATE table SET column = DEFAULT WHERE column IS NULL

IIRC.
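
Putting both pieces together, a sketch of the full sequence (table and column
names here are illustrative):

```sql
ALTER TABLE mytable ALTER COLUMN mycol SET DEFAULT 'foo';

-- The ALTER only affects future inserts; backfill existing NULLs explicitly:
UPDATE mytable SET mycol = DEFAULT WHERE mycol IS NULL;
```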

A

-- 
Andrew Sullivan
a...@crankycanuck.ca



Re: [GENERAL] Cursor metadata

2011-04-13 Thread Tom Lane
"Andy Chambers"  writes:
> Is there anywhere in the postgres catalog where one can access metadata  
> about a held cursor.

The pg_cursors system view offers some info ...

> Type information
> in particular would be really useful.

... but not that.  Usually the best way to get information about the
columns of a table/view/cursor is to fetch a row from it and use the
metadata that's provided by the fetch mechanism.
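
One way to apply that advice, sketched with a hypothetical held cursor: a
zero-row FETCH returns no data but still carries the row description, which
client drivers expose as column names and types:

```sql
BEGIN;
DECLARE c CURSOR WITH HOLD FOR SELECT 1::bigint AS id, now() AS ts;
COMMIT;                -- WITH HOLD keeps the cursor alive after commit
FETCH 0 FROM c;        -- no rows here, but the result descriptor is
                       -- available to the client (e.g. via PQnfields/PQftype)
```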

regards, tom lane



Re: [GENERAL] updating rows which have a common value for consecutive dates

2011-04-13 Thread David Johnston
If you have the ability to use Window functions you can group (as necessary), 
order by last_update, and then use rank() to number each test run sequentially. 
 Then you can limit the results to  ( rank() <= 2 AND current_status = 'FAILED' 
).
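
A sketch of that approach against the table from the original post (untested;
the table name dbltests is inferred from the sequence name in the schema
dump):

```sql
SELECT testname, os, arch, build_type, branch
FROM (
    SELECT testname, os, arch, build_type, branch, current_status,
           rank() OVER (PARTITION BY testname, os, arch, build_type, branch
                        ORDER BY last_update DESC) AS rn
    FROM dbltests
) t
WHERE rn <= 2
GROUP BY testname, os, arch, build_type, branch
HAVING count(*) >= 2                          -- at least two runs exist
   AND bool_and(current_status = 'FAILED');   -- and both recent runs failed
```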

David J.

-Original Message-
From: pgsql-general-ow...@postgresql.org 
[mailto:pgsql-general-ow...@postgresql.org] On Behalf Of Lonni J Friedman
Sent: Wednesday, April 13, 2011 3:34 AM
To: pgsql-general
Subject: [GENERAL] updating rows which have a common value for consecutive dates

Greetings,
I have a table full of automated test data, which continuously has new unique 
data inserted:

     Column     |            Type             |                       Modifiers
----------------+-----------------------------+-------------------------------------------------------
 id             | integer                     | not null default nextval('dbltests_id_seq'::regclass)
 testname       | text                        | not null
 last_update    | timestamp without time zone | not null default now()
 current_status | text                        | not null
 os             | text                        | not null
 arch           | text                        | not null
 build_type     | text                        | not null
 branch         | text                        | not null

The 'testname' column contains many different tests, and each unique 'testname' 
has numerous different associated os,arch,build_type & branch values.  For 
example, testname='foo' will run on 
os='Linux',arch='i686',build_type='debug',branch='t38', and also on 
os='Windows7',arch='x86_64',build_type='release',branch='r40', etc, and there 
will be many other different testname's with similar permutations of the 
os,arch,build_type & branch columns.  So for example, there will also be 
testname='bar' or testname='omega' or testname='sigma' for the other assorted 
os,arch,build_type & branch permutations.

The current_status column is either 'PASSED' or 'FAILED'.

What I'm struggling with is how to put together a SQL query which shows me all 
instances of a given testname permutation
(os,arch,build_type,branch) which has a current_status value of 'FAILED' for 
two or more most recent consecutive 'last_update' values.

Suggestions welcome.

thanks in advance!





Re: [GENERAL] Weird WAL problem - 9.0.3

2011-04-13 Thread Tom Lane
Rafael Martinez  writes:
> But this doesn't explain the WAL files not been created/recycled
> time-ordered. I wonder if this happened because the partition got full
> while the WALs were created/recycled?

When a checkpoint finishes, it scans the pg_xlog directory to find WAL
files that are no longer needed (because they're before the checkpoint's
WAL replay point).  It will either rename them "forward" to become ready
for future use, or delete them if there are already enough future WAL
files present (as determined by checkpoint_segments).  The order in
which old segments get renamed to be future ones is basically chance,
because it's determined by the order in which readdir() visits them.
So there's no reason to think that their file timestamps will be in
order.

I would expect WAL files that are *behind* the current write point to
have increasing write timestamps.  But not those ahead.

regards, tom lane



[GENERAL] Streaming Replication limitations

2011-04-13 Thread raghu ram
Hi,

Are there any limitations on configuring streaming replication between
different operating systems, i.e. Solaris 64-bit to RHEL 64-bit?

--Raghu Ram


Re: [GENERAL] [ADMIN] Streaming Replication limitations

2011-04-13 Thread Devrim GÜNDÜZ

Hi,

On Wed, 2011-04-13 at 23:23 +0530, raghu ram wrote:

> Is there any limitations to configure streaming replication between
> different operating systems i.e solaris 64 bit to RHEL 64 bit. 

It won't work.

Regards,

-- 
Devrim GÜNDÜZ
Principal Systems Engineer @ EnterpriseDB: http://www.enterprisedb.com
PostgreSQL Danışmanı/Consultant, Red Hat Certified Engineer
Community: devrim~PostgreSQL.org, devrim.gunduz~linux.org.tr
http://www.gunduz.org  Twitter: http://twitter.com/devrimgunduz




Re: [GENERAL] Streaming Replication limitations

2011-04-13 Thread Andrew Sullivan
On Wed, Apr 13, 2011 at 11:23:24PM +0530, raghu ram wrote:
> Hi,
> 
> Is there any limitations to configure streaming replication between
> different operating systems i.e solaris 64 bit to RHEL 64 bit.

I personally wouldn't be willing to use anything except identical
binaries for the back end, and those two platforms are binary
incompatible.  The manual actually warns about this.

A

-- 
Andrew Sullivan
a...@crankycanuck.ca



Re: [GENERAL] [ADMIN] Streaming Replication limitations

2011-04-13 Thread Tom Lane
Devrim GÜNDÜZ writes:
> On Wed, 2011-04-13 at 23:23 +0530, raghu ram wrote:
>> Is there any limitations to configure streaming replication between
>> different operating systems i.e solaris 64 bit to RHEL 64 bit. 

> It won't work.

As long as it's the same machine architecture, it probably will ...
but if "solaris" here really means "sparc" then I agree.

Short answer is to test the case you have in mind and see.

regards, tom lane



Re: [GENERAL] 9.0 Out of memory

2011-04-13 Thread Tom Lane
Jeremy Palmer  writes:
> The plpgsql code that could be to blame is in the snippet below. I had a 
> look and I'm not sure why it might be leaking. Is it because I assign 
> v_id1 and v_id2 to the return table's 'id' record, return it, and then 
> assign to v_id1 or v_id2 again from the cursor?

No, given the info from the memory map I'd have to say that the leakage
is in the cursor not in what you do in the plpgsql function.  The cursor
query looks fairly unexciting except for the cast from geometry to text.
I don't have PostGIS installed here so I can't do any testing, but I
wonder whether the leak goes away if you remove that part of the query
(ie, leave the shape out of the "checksum" for testing purposes).
If so, you probably ought to file the issue as a PostGIS bug.

regards, tom lane



[GENERAL] Replication: slave is in permanent startup 'recovery'

2011-04-13 Thread Henry C.
Greets,

Pg 9.0.3

This must be due to my own misconfiguration, so apologies if I'm not seeing
the obvious - I've noticed that my slave seems to be stuck in a permanent
startup/recovery state.  ps on the slave shows:

...
postgres: wal receiver process   streaming 190/A6C384A0
postgres: startup process   recovering 0001019000A6 (1)
...

(1) keeps incrementing to reflect each new log file in pg_xlog on the master
(even after the slave caught up with all the other log files after the initial
rsync).

If I try and execute a long-lived SQL query on the slave, it eventually fails
with "canceling statement due to conflict with recovery".  Replication is
definitely working (DML actions are propagated to the slave), but something is
amiss.

I'm trying streaming replication. Once I get this working reliably, the idea
is to use it on a rather busy server where the log files are used in case the
slave(s) fall behind the stream during peak periods. (Sorry if I'm using the
wrong idioms, or not quite understanding Pg's built-in replication methods -
I'm used to Skype's Londiste.)

MASTER CONFIG
wal_level = hot_standby
archive_mode = on
archive_command = 'cp -a "%p" /home/psql-wal-archive/"%f"'
max_wal_senders = 5
wal_keep_segments = 8

SLAVE CONFIG
wal_level = hot_standby
wal_keep_segments = 64
hot_standby = on
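
For context, a streaming standby also needs a recovery.conf next to these
settings - a minimal sketch, with hypothetical connection values, reusing the
archive path from the master config above (assuming that directory is
reachable from the standby):

```
standby_mode = 'on'
primary_conninfo = 'host=master.example.com port=5432 user=replicator'
restore_command = 'cp /home/psql-wal-archive/%f "%p"'
```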


Any ideas what I'm doing wrong here?

Thanks




Re: [GENERAL] [ADMIN] Streaming Replication limitations

2011-04-13 Thread AShved
It just hit me: what if we use pg_standby and convert the archive logs from
one notation to the other, and then apply them to our standby? Could this
work?
Andrew Shved
DBA, Symcor Inc, Delivery Support Services
Phone: 905-273-1433
BlackBerry: 416-803-2675
Email: ash...@symcor.com




From: Tom Lane
To: Devrim GÜNDÜZ
Cc: raghu ram, pgsql-ad...@postgresql.org, pgsql-general@postgresql.org
Date: 04/13/2011 02:14 PM
Subject: Re: [ADMIN] Streaming Replication limitations
Sent by: pgsql-admin-ow...@postgresql.org



Devrim GÜNDÜZ writes:
> On Wed, 2011-04-13 at 23:23 +0530, raghu ram wrote:
>> Is there any limitations to configure streaming replication between
>> different operating systems i.e solaris 64 bit to RHEL 64 bit. 

> It won't work.

As long as it's the same machine architecture, it probably will ...
but if "solaris" here really means "sparc" then I agree.

Short answer is to test the case you have in mind and see.

 regards, tom lane

-- 
Sent via pgsql-admin mailing list (pgsql-ad...@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-admin



CONFIDENTIALITY WARNING 
This communication, including any attachments, is for the exclusive use of 
addressee and may contain proprietary and/or confidential information. If you 
are not the intended recipient, any use, copying, disclosure, dissemination or 
distribution is strictly prohibited. If you are not the intended recipient, 
please notify the sender immediately by return e-mail, delete this 
communication and destroy all copies.

AVERTISSEMENT RELATIF À LA CONFIDENTIALITÉ 
Ce message, ainsi que les pièces qui y sont jointes, est destiné à l’usage 
exclusif de la personne à laquelle il s’adresse et peut contenir de 
l’information personnelle ou confidentielle. Si le lecteur de ce message n’en 
est pas le destinataire, nous l’avisons par la présente que toute diffusion, 
distribution, reproduction ou utilisation de son contenu est strictement 
interdite. Veuillez avertir sur-le-champ l’expéditeur par retour de courrier 
électronique et supprimez ce message ainsi que toutes les pièces jointes.



Re: [GENERAL] [ADMIN] Streaming Replication limitations

2011-04-13 Thread AShved
Since your online logs are in a different endian notation, I do not see how 
it would work.  Slony may be an option in this case.
Andrew Shved






Re: [GENERAL] [ADMIN] Streaming Replication limitations

2011-04-13 Thread Simon Riggs
2011/4/13 Tom Lane :
> Devrim GÜNDÜZ  writes:
>> On Wed, 2011-04-13 at 23:23 +0530, raghu ram wrote:
>>> Is there any limitations to configure streaming replication between
>>> different operating systems i.e solaris 64 bit to RHEL 64 bit.
>
>> It won't work.
>
> As long as it's the same machine architecture, it probably will ...
> but if "solaris" here really means "sparc" then I agree.
>
> Short answer is to test the case you have in mind and see.

That's the long answer, not least because the absence of a failure in
a test is not conclusive proof that it won't fail at some point in the
future while in production.

The short answer is "don't do it".

-- 
 Simon Riggs   http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services



Re: [GENERAL] Replication: slave is in permanent startup 'recovery'

2011-04-13 Thread Henry C.

Forgot to mention recovery.conf on slave:

standby_mode = 'on'
primary_conninfo = 'host...'
restore_command = 'cp /home/psql-wal-archive/%f "%p"'
archive_cleanup_command = 'pg_archivecleanup /home/psql-wal-archive %r'


The wiki states "If wal_keep_segments is a high enough number to retain the
WAL segments required for the standby server, this [restore_command] may not
be necessary."

Presumably this is referring to wal_keep_segments config on the master, right?




Re: [GENERAL] [ADMIN] Streaming Replication limitations

2011-04-13 Thread Tom Lane
Simon Riggs  writes:
> 2011/4/13 Tom Lane :
>> Short answer is to test the case you have in mind and see.

> That's the long answer, not least because the absence of a failure in
> a test is not conclusive proof that it won't fail at some point in the
> future while in production.

Not really.  Every known source of incompatibility (endianness,
alignment, float format, etc) is checked at postmaster startup via
entries in pg_control.  If you get the slave postmaster to start at all,
it will probably work, though certainly more extensive testing than that
would be advisable.
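For a loose illustration of the "same machine architecture" point, the host properties that most often differ across platforms can be fingerprinted and compared between the two machines. This is only an approximation of my own; `pg_controldata` on each cluster is the authoritative check, since pg_control also records block size, catalog version, float format, and so on:

```python
import struct
import sys

# Rough userspace approximation of the cross-platform compatibility fields.
# Compare the printed dict between master and standby hosts before replicating.
fingerprint = {
    "byte_order": sys.byteorder,               # 'little' on x86/x86_64, 'big' on SPARC
    "pointer_bits": struct.calcsize("P") * 8,  # 32- vs 64-bit build
    "long_bits": struct.calcsize("l") * 8,     # differs between LP64 and LLP64 ABIs
}
print(fingerprint)
```

If the dicts differ, the standby postmaster will refuse to start on the master's data anyway, per the pg_control checks described above.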

> The short answer is "don't do it".

DBAs are paid to be incredibly paranoid, and from that mindset this
answer makes sense.  But there's a big difference between "it won't work"
and "I'm afraid to risk my paycheck on this because there might possibly
be some problem that no one knows about yet".  Let's be perfectly clear
that this is a question of the second case not the first.

regards, tom lane



[GENERAL] Memory management in Postgres

2011-04-13 Thread Jorge Arévalo
Hello,

I'm very interested in PostgreSQL memory management, especially in the
concept of a "memory context". I've read the official documentation at
http://www.postgresql.org/docs/8.4/static/spi-memory.html, but I'd
like to learn more about it. Can you recommend any particular book
or URL?

Many thanks in advance,

-- 
Jorge Arévalo
Internet & Mobilty Division, DEIMOS
jorge.arev...@deimos-space.com
http://es.linkedin.com/in/jorgearevalo80
http://mobility.grupodeimos.com/
http://gis4free.wordpress.com
http://geohash.org/ezjqgrgzz0g



Re: [GENERAL] Memory leak in SPI_finish call

2011-04-13 Thread Jorge Arévalo
2011/4/5 Jorge Arévalo :
> Hello,
>
> I'm having problems with a PostgreSQL server-side C function. It's not
> an aggregate function (it operates over a single row of data). When the
> function is called over tables with ~4000 rows, it causes the postgres
> backend to crash with SEGFAULT. I know the error is cumulative,
> because with 3460 rows it works fine, but from 3461 it fails
> with SEGFAULT.
>
> Debugging, I've found the problem is a SPI_finish call. If I comment
> the call, the function ends without errors. The only problem is a
> warning message is raised, because I skipped the SPI_finish call.
>
> I'm working with postgres 8.4.7 in a Ubuntu 9.10 machine. Same problem
> in Windows machine, anyway. Things I've tried:
>
> - Quit SPI_finish call, obviously. But it's not a solution
>
> - Modify log configuration: log_min_messages=debug5,
> log_error_verbosity=verbose, log_min_error_statement=debug5,
> log_min_duration_statement=0, log_connections=on,
> log_disconnections=on, log_statement=all. I can't draw any conclusion.
> Here, a log example:
> http://dl.dropbox.com/u/6599273/postgresql-2011-04-04_195420.log. The
> function that crashes is MapAlgebra.
>
> - Attach postgres process to GDB (gdb --pid=...). When I connect with
> PostgreSQL via psql/pgadmin, the backend creates 2 new processes in
> idle state, until I execute a query. One connected to the postgres
> database (I'm using postgres user) and another one connected to my
> testing database. I've tried to attach a gdb instance to both
> processes.
>
> When I attach gdb to the process connected with my testing database, I
> get: "Program exited with code 02". And no more. No core dumped. I've
> looked for that error, and it looks like it depends on the software that
> caused the signal, not gdb.
>
> When I attach gdb to the process connected with postgres database, I
> get "Program received signal SIGQUIT, Quit.
> 0x00651422 in __kernel_vsyscall ()". No more information. No core dumped.
>
> So, what can I do to find the error, apart from those things? Any clues
> from the information posted above?
>
> Thanks in advance,
>
> --
> Jorge Arévalo
> Internet & Mobilty Division, DEIMOS
> jorge.arev...@deimos-space.com
> http://es.linkedin.com/in/jorgearevalo80
> http://mobility.grupodeimos.com/
> http://gis4free.wordpress.com
> http://geohash.org/ezjqgrgzz0g
>

Hi,

Problem solved. I was allocating memory with palloc, instead of
SPI_palloc, like specified at
http://www.postgresql.org/docs/8.4/static/spi-memory.html

Best regards,

-- 
Jorge Arévalo
Internet & Mobilty Division, DEIMOS
jorge.arev...@deimos-space.com
http://es.linkedin.com/in/jorgearevalo80
http://mobility.grupodeimos.com/
http://gis4free.wordpress.com
http://geohash.org/ezjqgrgzz0g



Re: [GENERAL] Memory management in Postgres

2011-04-13 Thread Simon Riggs
2011/4/13 Jorge Arévalo :
>
> I'm very interested in PostgreSQL memory management, specially in the
> concept "memory context". I've read the official documentation at
> http://www.postgresql.org/docs/8.4/static/spi-memory.html, but I'd
> like to learn more about it. Do you recommend me any particular book
> or url?
>
> Many thanks in advance,

Have a look at the source code notes
pgsql/src/backend/utils/mmgr/README

-- 
 Simon Riggs   http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services



Re: [GENERAL] Replication: slave is in permanent startup 'recovery'

2011-04-13 Thread Tomas Vondra
On 13.4.2011 20:42, Henry C. wrote:
> 
> Forgot to mention recovery.conf on slave:
> 
> standby_mode = 'on'
> primary_conninfo = 'host...'
> restore_command = 'cp /home/psql-wal-archive/%f "%p"'
> archive_cleanup_command = 'pg_archivecleanup /home/psql-wal-archive %r'
> 
> 
> The wiki states "If wal_keep_segments is a high enough number to retain the
> WAL segments required for the standby server, this [restore_command] may not
> be necessary."
> 
> Presumably this is referring to wal_keep_segments config on the master, right?

Right. If there are enough WAL segments on the master, the standby can
ask for them and the data will be streamed to the standby. So archive
mode is not a requirement, although if you already use WAL archiving,
it's a good idea to use it (there is no additional overhead on the
primary, etc.).

regards
Tomas



Re: [GENERAL] Memory management in Postgres

2011-04-13 Thread Jorge Arévalo
2011/4/13 Simon Riggs :
> 2011/4/13 Jorge Arévalo :
>>
>> I'm very interested in PostgreSQL memory management, specially in the
>> concept "memory context". I've read the official documentation at
>> http://www.postgresql.org/docs/8.4/static/spi-memory.html, but I'd
>> like to learn more about it. Do you recommend me any particular book
>> or url?
>>
>> Many thanks in advance,
>
> Have a look at the source code notes
> pgsql/src/backend/utils/mmgr/README
>
> --
>  Simon Riggs   http://www.2ndQuadrant.com/
>  PostgreSQL Development, 24x7 Support, Training & Services
>

Good read! Many thanks

-- 
Jorge Arévalo
Internet & Mobilty Division, DEIMOS
jorge.arev...@deimos-space.com
http://es.linkedin.com/in/jorgearevalo80
http://mobility.grupodeimos.com/
http://gis4free.wordpress.com
http://geohash.org/ezjqgrgzz0g

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Postgres 8.3 erro on shared memory windows

2011-04-13 Thread Craig Ringer

On 13/04/2011 8:02 PM, paulo matadr wrote:

Hi All,

Does anyone know how to start postgres on Win 2003 R2 with 2 GB of shared
memory on an O.S. with 8 GB?


You can't on Windows, unless you're running a 64-bit build on a 64-bit OS.

Given that Pg on Windows doesn't perform as well with lots of shared 
memory, why do you want to? In any case, 2GB of shared memory wouldn't 
leave you any address space for anything else important.


I wouldn't want to go above 1GB on 32-bit Windows.

--
Craig Ringer

Tech-related writing at http://soapyfrogs.blogspot.com/



Re: [GENERAL] 9.0 Out of memory

2011-04-13 Thread Jeremy Palmer
> No, given the info from the memory map I'd have to say that the leakage
> is in the cursor not in what you do in the plpgsql function.  The cursor
> query looks fairly unexciting except for the cast from geometry to text.
> I don't have PostGIS installed here so I can't do any testing, but I
> wonder whether the leak goes away if you remove that part of the query
> (ie, leave the shape out of the "checksum" for testing purposes).
> If so, you probably ought to file the issue as a PostGIS bug.

Ok I removed the geometry column from the cursor query within the function and 
the session still runs out of memory. I'm still seeing the same error message 
as well:

PortalHeapMemory: 16384 total in 4 blocks; 5944 free (0 chunks); 10440 used
  ExecutorState: 122880 total in 4 blocks; 63984 free (8 chunks); 58896 used
ExprContext: 2496819768 total in 9 blocks; 21080 free (15 chunks); 
2496798688 used

So I guess it's not likely to be the PostGIS geometry to text cast that is 
leaking the memory.

One thing that has got me interested now is the query that executes directly 
before it (see SQL below). If I remove the geometry column that is generated 
using the ST_Collect aggregate function, the subsequent function involving the 
cursor query completes and the transaction also runs to completion.

Is there any way that ST_Collect could be leaking memory into a context that 
does not get cleaned up after the query runs? Or do I have two leaks going on 
here?!

Cheers,
Jeremy

CREATE TEMP TABLE tmp_titles AS
SELECT
TTL.audit_id AS id,
TTL.title_no,
TTL.status,
TTLT.char_value AS type,
LOC.name AS land_district,
TTL.issue_date,
TTLG.char_value AS guarantee_status,
string_agg(
DISTINCT(
ETTT.char_value || ', ' || 
ETT.share || COALESCE(', ' || LGD.legal_desc_text, '') ||
COALESCE(', ' || to_char(ROUND(LGD.total_area, 0), 
'FM9G999G999G999G999') || ' m2', '')
),
E'\r\n'
ORDER BY
ETTT.char_value || ', ' || 
ETT.share || COALESCE(', ' || LGD.legal_desc_text, '') ||
COALESCE(', ' || to_char(ROUND(LGD.total_area, 0), 
'FM9G999G999G999G999') || ' m2', '') ASC
) AS estate_description,
string_agg(
DISTINCT 
CASE PRP.type
WHEN 'CORP' THEN PRP.corporate_name
WHEN 'PERS' THEN COALESCE(PRP.prime_other_names || ' ', '') 
|| PRP.prime_surname
END,
', '
ORDER BY
CASE PRP.type
WHEN 'CORP' THEN PRP.corporate_name
WHEN 'PERS' THEN COALESCE(PRP.prime_other_names || ' ', '') 
|| PRP.prime_surname
END ASC
) AS owners,
count(
DISTINCT
CASE PRP.type
WHEN 'CORP' THEN PRP.corporate_name
WHEN 'PERS' THEN COALESCE(PRP.prime_other_names || ' ', '') 
|| PRP.prime_surname
END
) AS number_owners,
TPA.title_no IS NOT NULL AS part_share,
        -- With PostGIS 1.5.2 the ST_Collect aggregate returns a truncated
        -- collection when a null value is found. To fix this, the shapes
        -- are ordered so that all null-shape rows are at the end of the input list.
ST_Multi(ST_Collect(PAR.shape ORDER BY PAR.shape ASC)) AS shape
FROM
crs_title TTL
LEFT JOIN crs_title_estate ETT ON TTL.title_no = ETT.ttl_title_no AND 
ETT.status = 'REGD'
LEFT JOIN crs_estate_share ETS ON ETT.id = ETS.ett_id AND ETT.status = 
'REGD'
LEFT JOIN crs_proprietor PRP ON ETS.id = PRP.ets_id AND PRP.status = 
'REGD'
LEFT JOIN crs_legal_desc LGD ON ETT.lgd_id = LGD.id AND LGD.type = 
'ETT' AND LGD.status = 'REGD'
LEFT JOIN crs_legal_desc_prl LGP ON LGD.id = LGP.lgd_id
LEFT JOIN (
SELECT
title_no 
FROM
tmp_parcel_titles 
GROUP BY
title_no
HAVING
count(*) > 1
) TPA ON TTL.title_no = TPA.title_no
LEFT JOIN (
SELECT
id,
(ST_Dump(shape)).geom AS shape  
FROM
crs_parcel 
WHERE
status = 'CURR' AND 
ST_GeometryType(shape) IN ('ST_MultiPolygon', 'ST_Polygon')
) PAR ON LGP.par_id = PAR.id 
JOIN crs_locality LOC ON TTL.ldt_loc_id = LOC.id
JOIN crs_sys_code TTLG ON TTL.guarantee_status = TTLG.code AND 
TTLG.scg_code = 'TTLG'
JOIN crs_sys_code TTLT ON TTL.type = TTLT.code AND TTLT.scg_code = 
'TTLT'
LEFT JOIN crs_sys_code ETTT ON ETT.type = ETTT.code AND ETTT.scg_code = 
'ETTT'
WHERE
TTL.status IN ('LIVE', 'PRTC') AND
TTL.title_no NOT IN (SELECT title_no FROM tmp_excluded_titles)
GROUP BY
TTL.audit_id,
   

Re: [GENERAL] 9.0 Out of memory

2011-04-13 Thread Tom Lane
Jeremy Palmer  writes:
> Ok I removed the geometry column from the cursor query within the function 
> and the session still runs out of memory. I'm still seeing the same error 
> message as well:

> PortalHeapMemory: 16384 total in 4 blocks; 5944 free (0 chunks); 10440 
> used
>   ExecutorState: 122880 total in 4 blocks; 63984 free (8 chunks); 58896 
> used
> ExprContext: 2496819768 total in 9 blocks; 21080 free (15 chunks); 
> 2496798688 used

> So I guess it's not likely to be the PostGIS geometry to text cast that is 
> leaking the memory.

OK, so that was a wrong guess.

> One thing that has got me interested now is query that executes directly 
> before (see SQL below). If I remove the geometry column that is generated 
> using ST_Collect aggregate function, the subsequent function involving the 
> cursor query completes and the transaction also runs to completion.

Hrm.  We were pretty much guessing as to which query was running in that
portal, I think.  It seems entirely plausible that this other query is
the one at fault instead.  It might be premature to blame ST_Collect per
se though --- in particular I'm wondering about the ORDER BY on the
ST_Collect's input.  But if this line of thought is correct, you ought
to be able to exhibit a memory leak using just that sub-part of that
query, without the surrounding function or any other baggage.  Maybe the
leak wouldn't drive the backend to complete failure without that
additional overhead; but a leak of a couple gig ought to be pretty
obvious when watching the process with "top" or similar tool.
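The "top or similar tool" suggestion can also be scripted. A rough Linux-only sketch that samples a process's resident set size from /proc; the backend PID below is a placeholder (on 9.0 the real one comes from pg_stat_activity.procpid for the session running the query):

```python
import os
import time

def rss_kb(pid):
    """Resident set size of a process in kB, read from /proc (Linux only)."""
    with open("/proc/%d/status" % pid) as f:
        for line in f:
            if line.startswith("VmRSS:"):
                return int(line.split()[1])
    return None

# Placeholder: sampling our own process here; substitute the suspect
# backend's PID when reproducing the leak.
backend_pid = os.getpid()

for _ in range(3):
    print(rss_kb(backend_pid), "kB")
    time.sleep(0.5)
```

A steadily climbing VmRSS while the isolated sub-query runs would point at the leak without needing the full out-of-memory failure.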

regards, tom lane



[GENERAL] SSDs with Postgresql?

2011-04-13 Thread Benjamin Smith
The speed benefits of SSDs as benchmarked would seem incredible. Can anybody 
comment on SSD benefits and problems in real life use? 

I maintain some 100 databases on 3 servers, with 32 GB of RAM each and an 
extremely rich, complex schema. (300+ normalized tables) 

I was wondering if anybody here could comment on the benefits of SSD in 
similar, high-demand rich schema situations? 


-Ben 




Re: [GENERAL] updating rows which have a common value forconsecutive dates

2011-04-13 Thread Lonni J Friedman
Hi David,
Thanks for your reply.  I'm using 8.4.7, so window functions are
certainly an option, although I've admittedly never used them before.
I've spent the past few hours reading the docs, and I now have a
rudimentary understanding of window functions.  I tried to compose a
query based on your suggestion, but I think I'm running up against my
lack of experience.  This query seems to give me all failures, but not
necessarily when there are two in a row for a unique group (although
I'm still not 100% certain it's actually returning only consecutive
last_update rows):
SELECT testname,os,arch,build_type,branch,current_status,last_update,rank()
OVER (PARTITION BY testname,os,arch,branch,build_type ORDER BY
last_update DESC) FROM mytable WHERE current_status='FAILED' AND
age(now(),last_update) <= INTERVAL '15 days'

However, if I'm understanding how this works, what I really care about
is when a rank=2 exists, as that's truly when something failed for two
consecutive last_update's.  I thought this might do it, but apparently
I'm doing it wrong:

SELECT testname,os,arch,build_type,branch,current_status,last_update,rank()
OVER (PARTITION BY testname,os,arch,branch,build_type ORDER BY
last_update DESC) FROM mytable WHERE current_status='FAILED' AND
age(now(),last_update) <= INTERVAL '15 days' AND rank()=2 ;
ERROR:  window function call requires an OVER clause
LINE 1: ... age(now(),last_update) <= INTERVAL '15 days' AND rank()=2 ;

How do I restrict the results to only show when rank=2 ?

thanks!


On Wed, Apr 13, 2011 at 9:57 AM, David Johnston  wrote:
> If you have the ability to use Window functions you can group (as necessary), 
> order by last_update, and then use rank() to number each test run 
> sequentially.  Then you can limit the results to  ( rank() <= 2 AND 
> current_status = 'FAILED' ).
>
> David J.
>
> -Original Message-
> From: pgsql-general-ow...@postgresql.org 
> [mailto:pgsql-general-ow...@postgresql.org] On Behalf Of Lonni J Friedman
> Sent: Wednesday, April 13, 2011 3:34 AM
> To: pgsql-general
> Subject: [GENERAL] updating rows which have a common value forconsecutive 
> dates
>
> Greetings,
> I have a table full of automated test data, which continuously has new unique 
> data inserted:
>
>      Column     |            Type             |                      Modifiers
> ----------------+-----------------------------+--------------------------------------------------------
>  id             | integer                     | not null default nextval('dbltests_id_seq'::regclass)
>  testname       | text                        | not null
>  last_update    | timestamp without time zone | not null default now()
>  current_status | text                        | not null
>  os             | text                        | not null
>  arch           | text                        | not null
>  build_type     | text                        | not null
>  branch         | text                        | not null
>
> The 'testname' column contains many different tests, and each unique 
> 'testname' has numerous different associated os,arch,build_type & branch 
> values.  For example, testname='foo' will run on 
> os='Linux',arch='i686',build_type='debug',branch='t38', and also on 
> os='Windows7',arch='x86_64',build_type='release',branch='r40', etc, and there 
> will be many other different testname's with similar permutations of the 
> os,arch,build_type & branch columns.  So for example, there will also be 
> testname='bar' or testname='omega' or testname='sigma' for the other assorted 
> os,arch,build_type & branch permutations.
>
> The current_status column is either 'PASSED' or 'FAILED'.
>
> What I'm struggling with is how to put together a SQL query which shows me 
> all instances of a given testname permutation
> (os,arch,build_type,branch) which has a current_status value of 'FAILED' for 
> two or more most recent consecutive 'last_update' values.
>
> Suggestions welcome.
>
> thanks in advance!



Re: [GENERAL] updating rows which have a common value forconsecutive dates

2011-04-13 Thread David Johnston
You need to turn the query with the window function into a sub-query; in the 
outer query you can then refer to the inner query's rank() column.  The inner 
query should effectively get you the last two test results for each context, 
and then you can check whether any of those failed.

I have a hunch you might need a third layer of sub-queries to handle the 
failure aspect of the requirement properly, possibly as part of a "WITH" CTE.  
You certainly need the sub-query in order to properly filter on rank().
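The sub-query structure described above can be sketched as follows. This is a hedged illustration using Python's sqlite3 module (window functions need SQLite 3.25 or later); the table and data are invented, but the same query shape works on PostgreSQL 8.4:

```python
import sqlite3

conn = sqlite3.connect(":memory:")  # window functions require SQLite >= 3.25
conn.executescript("""
CREATE TABLE results (testname TEXT, os TEXT, current_status TEXT, last_update TEXT);
INSERT INTO results VALUES
  ('foo', 'Linux', 'FAILED', '2011-04-13'),
  ('foo', 'Linux', 'FAILED', '2011-04-12'),
  ('foo', 'Linux', 'PASSED', '2011-04-11'),
  ('bar', 'Linux', 'FAILED', '2011-04-13'),
  ('bar', 'Linux', 'PASSED', '2011-04-12');
""")

# The inner query ranks each group's runs by recency; the outer query can
# then filter on the rank column, which a WHERE clause in the same query
# as the window function cannot do.
rows = conn.execute("""
    SELECT testname, os
    FROM (
        SELECT testname, os, current_status,
               rank() OVER (PARTITION BY testname, os
                            ORDER BY last_update DESC) AS rnk
        FROM results
    ) ranked
    WHERE rnk <= 2 AND current_status = 'FAILED'
    GROUP BY testname, os
    HAVING count(*) = 2   -- both of the two most recent runs failed
""").fetchall()
print(rows)  # -> [('foo', 'Linux')]
```

Here only ('foo', 'Linux') qualifies: its two most recent runs both FAILED, while 'bar' passed on its second-most-recent run.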

Dave



[GENERAL] PostgreSQL trap, and assertion failed

2011-04-13 Thread Radosław Smogura
Hello,

I have some small crash-reporting code which I use while mmap-ing the database. 
After the last merge with master I got:

TRAP: FailedAssertion("!(slot > 0 && slot <= PMSignalState->num_child_flags)", 
File: "pmsignal.c", Line: 227)
LOG:  server process (PID 5128) was terminated by signal 6: Aborted
LOG:  terminating any other active server processes
TRAP: FailedAssertion("!(slot > 0 && slot <= PMSignalState->num_child_flags)", 
File: "pmsignal.c", Line: 189)

I've installed crash handlers for the SIGBUS and SIGSEGV signals. May I ask 
what might be wrong, and how to prevent this?

Regards,
Radek.




Re: [GENERAL] SSDs with Postgresql?

2011-04-13 Thread John R Pierce

On 04/13/11 9:19 PM, Benjamin Smith wrote:


The speed benefits of SSDs as benchmarked would seem incredible. Can 
anybody comment on SSD benefits and problems in real life use?



I maintain some 100 databases on 3 servers, with 32 GB of RAM each and 
an extremely rich, complex schema. (300+ normalized tables)



I was wondering if anybody here could comment on the benefits of SSD 
in similar, high-demand rich schema situations?





consumer-grade MLC SSDs will crash and burn in short order under a 
heavy transactional workload characterized by sustained small-block 
random writes.


The enterprise-grade SLC SSDs will perform very nicely, but they are 
very, very expensive, and are found in high-end enterprise database servers 
like Oracle's Exadata machines.



