Re: [GENERAL] help replacing expression in plpgsql

2013-12-18 Thread Albe Laurenz
Juan Pablo L wrote:
 Hi, I have a function that receives a parameter which represents days:
 
 FUNCTION aaa_recharge_account(expdays integer)
 
 I want to add those days to CURRENT_DATE, but I do not know how to do it.
 I have tried several ways to substitute that parameter into an expression, like:
 
 newexpdate := CURRENT_TIMESTAMP + interval '$1 days' using expdays;
 (newexpdate is declared as timestamp)
 
 and many more, but none work. Can someone please help me find out how I can
 substitute that parameter into an expression that I can add to
 CURRENT_TIMESTAMP, or any other way to accomplish what I need, which is to
 add that parameter to the current timestamp. Thanks!

There are many ways.
Two I can think of right away:

newexpdate := CURRENT_TIMESTAMP + CAST(expdays || ' days' AS interval);

newexpdate := CURRENT_TIMESTAMP + expdays * INTERVAL '1 days';
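
For completeness, a minimal sketch of the whole function using the second form
(the function and variable names are taken from the question; the RETURNS type
is an assumption):

```sql
CREATE OR REPLACE FUNCTION aaa_recharge_account(expdays integer)
RETURNS timestamp AS $$
DECLARE
    newexpdate timestamp;
BEGIN
    -- An integer multiplied by INTERVAL '1 day' scales the interval,
    -- so no string interpolation of the parameter is needed.
    newexpdate := CURRENT_TIMESTAMP + expdays * INTERVAL '1 day';
    RETURN newexpdate;
END;
$$ LANGUAGE plpgsql;
```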

Yours,
Laurenz Albe

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Bus error in libxml2 in postgresql 9.2.x on FreeBSD

2013-12-18 Thread Achilleas Mantzios

Hello Steve,
this reminds me of the problems I had when trying to install PL/Java in
PostgreSQL 9.2 on FreeBSD:
http://lists.freebsd.org/pipermail/freebsd-java/2013-February/010019.html
http://lists.freebsd.org/pipermail/freebsd-java/2013-February/010021.html
http://lists.freebsd.org/pipermail/freebsd-java/2013-February/010022.html

The solution was to explicitly link postgres with /usr/lib/libpthread.so in
src/backend/Makefile:

postgres: $(OBJS)
	$(CC) $(CFLAGS) $(LDFLAGS) $(LDFLAGS_EX) $(export_dynamic) $(call expand_subsys,$^) $(LIBS) -lpthread -o $@

Just a piece of advice: you are also likely to find answers for similar
problems on the FreeBSD lists.

On 16/12/2013 16:41, Steve McCoy wrote:

Hello,

I found some semi-recent messages related to this in 9.0.x, but wanted to
confirm, for anybody searching, that the same issue exists in 9.2.x. It
crashes for me with a backtrace similar to [Laurentius Purba's][1]. The same
fix/workaround applies as well: compile libxml2 without the "threads" option.

If I'm understanding the explanations correctly, the issue is that
libxml2+threads expects something to be initialized by the calling program
(postgres), but since the program isn't multithreaded, this doesn't happen.
It sounds like something that neither side can really fix, but if I'm wrong
and any devs would like a core file, I can provide one.



[1] 
http://www.postgresql.org/message-id/camflbnhq6641ymbij2-7ozksgmgfne5v3ggfkvyxojy8umr...@mail.gmail.com



--
Achilleas Mantzios





[GENERAL] Multi Master Replication

2013-12-18 Thread itishree sukla
Hi all,

I need suggestions about setting up multi-master replication between two
PostgreSQL servers placed in two different geographical areas. As I understand
it, this can be achieved using third-party tools like Bucardo or RubyRep, but
I am not sure which one is good to use. If anyone can provide me some online
documentation links, that will help as well.


Thanks in advance.

Regards,
Itishree


[GENERAL] anyrecord/anyelement escaping question.

2013-12-18 Thread Marc Mamin
Hello,

I'd like to store some special records from a list of tables in a single table
for later reimport:

e.g.:

table t1 (a int, b int)
table t2 (c int, t varchar)

myTable (tsource text, trecord text):

t1, (1,2)
t1, (3,18)
t2, (1,'a b')

Later I will be able to populate t1 or t2 using the content of myTable,
with something like:

EXECUTE 'insert into '||tsource||' values '||trecord   (for each row of myTable)

What I'm looking for is a record-escaping function that would insert the
single quotes required for the later reimport.

e.g. select somefunc(t2) from t2:

somefunc
text
---
1,'a b'


I would also be happy with such a function on anyelement:

select escape(c)||','||escape(t)  from t2



Is there some built-in function I can use for this, or do I have to write it
from scratch, along these lines:

create function myescape(a anyelement, OUT escaped text) as $$

..
select case when pg_typeof(a) = ... then ...
            when pg_typeof(a) = ... then ...
            when pg_typeof(a) = ... then ...
            when pg_typeof(a) = ... then ...
       end
..
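
(Editor's note: the built-in quoting functions may already cover the
anyelement case. A sketch against the t2 example above; the exact output
shape is an expectation, not verified:)

```sql
-- quote_nullable() (and quote_literal()) accept a value of any type,
-- coerce it to text, and quote it as a SQL literal; NULL becomes the
-- keyword NULL, and embedded single quotes are doubled.
SELECT quote_nullable(c) || ',' || quote_nullable(t) AS escaped FROM t2;
```

Numeric values come back quoted too (e.g. '1'), which is still valid in a
VALUES list. For whole records, casting the row to text (t2::text) yields the
composite literal, which can then be quoted the same way.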


regards,

Marc Mamin



Re: [GENERAL] Replication: GZIP compression in WAL sender/receiver processes communication?

2013-12-18 Thread Rahila Syed
Hello,

Yes, gzip-style compression can be used for compressing WAL traffic during
streaming replication. The following approaches can be used:

SSL compression: SSL support is built into PostgreSQL. You need to ensure you
have OpenSSL library support in your PostgreSQL installation.

Also, you can compress WAL traffic by setting up SSH tunneling between master
and standby and turning on compression when setting up the SSH tunnel. The
following link describes this:
http://www.postgresql.org/docs/9.3/static/ssh-tunnels.html

Of course, these are solutions outside PostgreSQL itself.
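
As a concrete sketch of the SSL route, compression can be requested on the
replication connection itself via libpq's sslcompression parameter (the host
and user names below are assumptions):

```
# recovery.conf on the standby (PostgreSQL 9.x era); sslcompression=1 asks
# libpq to negotiate zlib compression on the replication connection
standby_mode = on
primary_conninfo = 'host=master.example.com user=replicator sslmode=require sslcompression=1'
```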


On Tue, Dec 10, 2013 at 4:43 AM, Dmitry Koterov dmi...@koterov.ru wrote:

 Hello.

 Is there a way to compress the traffic between master and slave during the
 replication?.. The streaming gzip would be quite efficient for that.

 (WAL archiving is not too good for this purpose because of high lag. I
 just need to minimize the cross-datacenter traffic keeping the replication
 lag low.)



[GENERAL] Foreign keys

2013-12-18 Thread Dean Gibson (DB Administrator)

I have a general question about FOREIGN KEYs:

1. Suppose I have table A with primary key X, and another table B with
   field Y.
2. When I 'ALTER TABLE B  ADD FOREIGN KEY( Y )  REFERENCES A  ON
   UPDATE CASCADE  ON DELETE CASCADE', that clearly spends some time
   building a separate index.  Since there is already a unique index on
   X, presumably (?) the index being built is on Y.
3. However, the PostgreSQL documentation seems to indicate that it's a
   good idea to also separately create an index on Y.
4. Why, and why is the FOREIGN KEY index different from the ones on X
   and Y in any way but trivial?
5. If I need the separate index on Y, should it be built before or
   after the FOREIGN KEY constraint?


--
Mail to my list address MUST be sent via the mailing list.
All other mail to my list address will bounce.



Re: [GENERAL] Foreign keys

2013-12-18 Thread Rob Sargent

On 12/18/2013 11:02 AM, Dean Gibson (DB Administrator) wrote:

I have a general question about FOREIGN KEYs:

 1. Suppose I have table A with primary key X, and another table B
    with field Y.
 2. When I 'ALTER TABLE B  ADD FOREIGN KEY( Y )  REFERENCES A  ON
    UPDATE CASCADE  ON DELETE CASCADE', that clearly spends some time
    building a separate index.  Since there is already a unique index
    on X, presumably (?) the index being built is on Y.
 3. However, the PostgreSQL documentation seems to indicate that it's
    a good idea to also separately create an index on Y.
 4. Why, and why is the FOREIGN KEY index different from the ones on X
    and Y in any way but trivial?
 5. If I need the separate index on Y, should it be built before or
    after the FOREIGN KEY constraint?


--
Mail to my list address MUST be sent via the mailing list.
All other mail to my list address will bounce.
Perhaps you want
http://www.postgresql.org/docs/9.3/static/sql-altertable.html,
specifically the ADD table_constraint USING INDEX form.


Re: [GENERAL] Foreign keys

2013-12-18 Thread Kevin Grittner
Dean Gibson (DB Administrator) postgre...@ultimeth.com wrote:

 I have a general question about FOREIGN KEYs:

    1. Suppose I have table A with primary key X, and another
   table B with field Y.
    2. When I 'ALTER TABLE B  ADD FOREIGN KEY( Y )  REFERENCES
   A  ON UPDATE CASCADE  ON DELETE CASCADE', that clearly
   spends some time building a separate index.

No it doesn't.  If you are observing activity at that time, it is
probably from validating that the constraint is initially valid.

    3. However, the PostgreSQL documentation seems to indicate
   that it's a good idea to also separately create an index on
   Y.

It *often* is, but there are various reasons you might not want
such an index, which is why its creation is not automatic.

    5. If I need the separate index on Y, should it be built
   before or after the FOREIGN KEY constraint?

In some cases it may allow faster initial validation of the
constraint; if I wanted the index I would probably build it before
adding the constraint.
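
(Editor's note: a sketch of that ordering, using the table and column names
from the question:)

```sql
-- Build the index on the referencing column first...
CREATE INDEX b_y_idx ON b (y);

-- ...then add the constraint. Per Kevin, having the index in place may in
-- some cases speed the initial validation; it is also what makes cascaded
-- UPDATE/DELETE from A efficient later.
ALTER TABLE b
    ADD FOREIGN KEY (y) REFERENCES a
    ON UPDATE CASCADE ON DELETE CASCADE;
```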

--
Kevin Grittner
EDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company




[GENERAL] Replication failed after stalling

2013-12-18 Thread Joe Van Dyk
I'm running Postgresql 9.3. I have a streaming replication server. Someone
was running a long COPY query (8 hours) on the standby which halted
replication. The replication stopped at 3:30 am. I canceled the
long-running query at 9:30 am and replication data started catching up.

The data up until 10 am got restored fine (it took until 10:30 am to restore
that much). Then I started getting errors like "FATAL: could not receive
data from WAL stream: ERROR: requested WAL segment
000103C30086 has already been removed".

I'm confused about how pg could restore data from 3:30 am to 10 am, then
start complaining about missing WAL files.

What's the best way to avoid this problem? Increase wal_keep_segments?

Joe


Re: [GENERAL] Foreign keys

2013-12-18 Thread Dean Gibson (DB Administrator)


On 2013-12-18 10:41, Kevin Grittner wrote:

Dean Gibson (DB Administrator) postgre...@ultimeth.com wrote:


... that clearly spends some time building a separate index.

No it doesn't.  If you are observing activity at that time, it is probably from 
validating that the constraint is initially valid.


Ah ha!  That's what's consuming the time!  Thanks!




 5. If I need the separate index on Y, should it be built before or after 
the FOREIGN KEY constraint?

In some cases it may allow faster initial validation of the constraint; if I 
wanted the index I would probably build it before adding the constraint.


Again, that's what I needed to know!  Thanks again!

--
Mail to my list address MUST be sent via the mailing list.
All other mail to my list address will bounce.





Re: [GENERAL] Replication failed after stalling

2013-12-18 Thread Joe Van Dyk
A possibly related question:

I've set wal_keep_segments to 10,000 and also have archive_command running
WAL-E. I'm seeing my WAL files disappear from pg_xlog after 30 minutes. Is
that expected? Is there a way around that?

(I want to use streaming replication and wal-e for PITR restores)


On Wed, Dec 18, 2013 at 11:26 AM, Joe Van Dyk j...@tanga.com wrote:

 I'm running Postgresql 9.3. I have a streaming replication server. Someone
 was running a long COPY query (8 hours) on the standby which halted
 replication. The replication stopped at 3:30 am. I canceled the
 long-running query at 9:30 am and replication data started catching up.

 The data up until 10 am got restored fine (took until 10:30 am to restore
 that much). Then I started getting errors like FATAL:  could not receive
 data from WAL stream: ERROR:  requested WAL segment
 000103C30086 has already been removed.

 I'm confused about how pg could restore data from 3:30 am to 10 am, then
 start complaining about missing WAL files.

 What's the best way to avoid this problem? Increase wal_keep_segments?

 Joe



Re: [GENERAL] Multi Master Replication

2013-12-18 Thread John R Pierce

On 12/18/2013 1:31 AM, itishree sukla wrote:
I need suggestion about setting up multi master replication between 
two postgresql server place two different geographical area. As i know 
using some third party tool like Bucardo,RubyRep it can be achievable, 
not sue which is the good one to use. If any one  can provide me some 
online documentation links, it will help me as well.


that sort of replication is very problematic.   It's virtually impossible
to maintain ACID (Atomicity, Consistency, Isolation, Durability) semantics
and any semblance of performance at the same time.


question for you, what do you expect to happen if the communications 
link between the servers is interrupted, and updates continue to be sent 
to both servers?


--
john r pierce  37N 122W
somewhere on the middle of the left coast





Re: [GENERAL] Replication failed after stalling

2013-12-18 Thread Jerry Sievers
Joe Van Dyk j...@tanga.com writes:

 I'm running Postgresql 9.3. I have a streaming replication server. Someone 
 was running a long COPY query (8 hours) on the standby which halted 
 replication. The
 replication stopped at 3:30 am. I canceled the long-running query at 9:30 am 
 and replication data started catching up.

 The data up until 10 am got restored fine (took until 10:30 am to restore 
 that much). Then I started getting errors like FATAL:  could not receive 
 data from WAL
 stream: ERROR:  requested WAL segment 000103C30086 has already 
 been removed.

 I'm confused about how pg could restore data from 3:30 am to 10 am, then 
 start complaining about missing WAL files.

 What's the best way to avoid this problem? Increase wal_keep_segments?

Yes, and/or implement a hybrid of streaming and WAL shipping.

Quite simply, your wal_keep_segments was almost enough to get you through
that backlog period, but as your standby was catching up, it hit a point
where there was a gap.

Depending on how much traffic your master sees at various times of the day,
it's unsurprising that during peak loads your grace period is a lot lower
than during off-peak times, due to variations in how quickly WAL segments
are filled and recycled.
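
(Editor's note: a sketch of the hybrid setup described above; paths, host
names, and values are assumptions:)

```
# postgresql.conf on the master: archive WAL in addition to streaming
archive_mode = on
archive_command = 'cp %p /archive/%f'

# recovery.conf on the standby: fall back to the archive whenever the
# requested segment has already been removed from the master's pg_xlog
standby_mode = on
primary_conninfo = 'host=master.example.com user=replicator'
restore_command = 'cp /archive/%f %p'
```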

HTH


 Joe


-- 
Jerry Sievers
Postgres DBA/Development Consulting
e: postgres.consult...@comcast.net
p: 312.241.7800




Re: [GENERAL] Replication failed after stalling

2013-12-18 Thread Adrian Klaver

On 12/18/2013 12:15 PM, Joe Van Dyk wrote:

A possibly related question:

I've set wal_keep_segments to 10,000 and also have archive_command
running wal-e. I'm seeing my wal files disappear from pg_xlog after 30
minutes. Is that expected? Is there a way around that?


Well, a WAL segment is 16 MB, so 10,000 segments amounts to roughly 160 GB
of pg_xlog; that should give you a basis for determining whether the above
is appropriate. My guess is it is not. I do not know enough about WAL-E,
but my guess is that it is siphoning off WAL segments before you want it to.





--
Adrian Klaver
adrian.kla...@gmail.com




Re: [GENERAL] Question(s) about crosstab

2013-12-18 Thread John Abraham
Regarding crosstab: yes, it's basically too complicated to use directly.
Here are the options:

1) Write code (in another language, perhaps) to create your crosstab queries
by inspecting the tables, and then submit those queries to create views. We
have a web app in Django/Python that creates crosstab views in this way.
(We use it to attach the values to spatial shapes in PostGIS, so that other
GIS programs, such as MapServer, can use them. GIS programs always seem to
expect things to be in crosstab format.)

2) Export to CSV, then pivot in Excel, AWK, Perl, or whatever. The problem
with this is that the results are NOT in your database; they are in the
external table.

3) Couldn't someone write a PL-language routine that does it better? I'd be
willing to work on the core functionality in Python if someone else would be
willing to embed it in PL/Python (I've never used PL/Python).



 --
 
 Date: Tue, 17 Dec 2013 15:31:54 -0800
 From: Ken Tanzer ken.tan...@gmail.com
 Hi.  I've got a simple table unit_hold, with grant numbers, buildings and
 counts of unit types, which I need to summarize, along with a table listing
 unit types:

snip
 
 I thought this would be a good candidate for crosstab.  After wrestling
 with the documentation, this is the best I could come up with:
 
 SELECT * FROM crosstab(
  'SELECT housing_project_code||''_''||grant_number_code AS
 project_and_grant,grant_number_code,housing_project_code,unit_type_code,count
 FROM unit_hold ORDER BY 1,2',
  'SELECT * FROM (SELECT DISTINCT unit_type_code FROM l_unit_type) foo
 ORDER BY unit_type_code ~ ''^[0-9]'' DESC, unit_type_code'
 ) AS ct(project_and_grant varchar, grant_number_code varchar,
 housing_project_code varchar, "0BR" bigint, "1BR" bigint, "2BR" bigint,
 "3BR" bigint, "4BR" bigint, "5BR" bigint, "6BR" bigint, "GROUP" bigint,
 "SRO" bigint, "UNKNOWN" bigint)
 
 So here are my questions:
 
 1)  Is there a simpler way?  I'm hoping I made this unnecessarily
 cumbersome and complicated.
 2)  AFAICT, if a new unit type were to be added, I'd have to rewrite this
 query.  Is there any way to avoid that?
 3)  It seems like everything after the first query, except for the category
 field, is redundant information, and that in theory you should be able to
 say crosstab('query','category_field').  Is there any inherent reason this
 simpler form couldn't work, or is it just that no one has wanted to do it,
 or gotten to it yet?
 

And from David Johnston:

 1) Pivot Tables...(not a PostgreSQL feature I'm afraid)
 
 2) Not that I am aware of.  I would suggest writing the query so that
 Other is a valid group and any unmapped types get aliased to Other so at
 least the query counts everything and you know that if Other is non-zero
 you have some alterations to make.
 
 3) Limitation of SQL - explained below:
 
 
 
 
And from Scott Marlowe:

 In the past I've written simple bash, perl, php etc. scripts that
 interrogated catalogs and then built my crosstab queries for me. You
 could do it in a pl language, though probably not easily in plpgsql.
 plpython or plperl etc. would probably be a good place to start.
 






Re: [GENERAL] Question(s) about crosstab

2013-12-18 Thread Joe Conway

On 12/18/2013 03:32 PM, John Abraham wrote:

Regarding crosstab, yes it's basically too complicated to use directly.  Here 
are the options:

1) write code (in another language, perhaps) to create your cross tab queries by
inspecting the tables, which then submits those queries to create views.  We 
have a web-app in
django/python that will create crosstab views in this way.  (We use it to 
attach the values to spatial shapes
in PostGIS, so that other GIS programs, such as mapserver, can use it.  GIS 
programs always
seem to expect things to be in crosstab format.)


1) This is the best option. I've done it with plpgsql in the past. You 
don't need to inspect your tables so much as determine how many result 
columns to expect based on the categories SQL string. Once you know how 
many categories there are, you can define the column definition list 
which allows you to write the crosstab query. So basically your app 
calls the plpgsql function and then executes the resulting returned 
query string.
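
(Editor's note: a sketch of the plpgsql generator Joe describes, using the
table names from Ken's example upthread; the exact column list and types are
assumptions:)

```sql
-- Returns a ready-to-execute crosstab query whose column definition
-- list is derived from the current contents of the category table,
-- so adding a new unit type never requires rewriting the query by hand.
CREATE OR REPLACE FUNCTION build_unit_crosstab_sql() RETURNS text AS $$
DECLARE
    cols text;
BEGIN
    SELECT string_agg(format('%I bigint', unit_type_code), ', '
                      ORDER BY unit_type_code)
      INTO cols
      FROM l_unit_type;
    RETURN format(
        'SELECT * FROM crosstab(%L, %L) AS ct(project_code varchar, %s)',
        'SELECT housing_project_code, unit_type_code, count FROM unit_hold ORDER BY 1',
        'SELECT DISTINCT unit_type_code FROM l_unit_type ORDER BY 1',
        cols);
END;
$$ LANGUAGE plpgsql;
```

The application calls this function, then executes the returned string as a
separate statement.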



3) Can't someone write a pl language routine that does it better?  I'd be 
willing to work on the core
functionality in python if someone else would be willing to embed it in 
plpython (I've never used plpython.)


3) Not possible -- reason was given down thread. Column definition must 
be known/determinable by the parser prior to query execution.


Basically to improve this you would have to hack the postgres backend in 
such a way that it didn't need the column definition list until query 
execution time, which I also doubt is possible.


Joe

--
Joe Conway
credativ LLC: http://www.credativ.us
Linux, PostgreSQL, and general Open Source
Training, Service, Consulting,  24x7 Support




Re: [GENERAL] Question(s) about crosstab

2013-12-18 Thread Sergey Konoplev
On Tue, Dec 17, 2013 at 3:31 PM, Ken Tanzer ken.tan...@gmail.com wrote:
 1)  Is there a simpler way?  I'm hoping I made this unnecessarily cumbersome 
 and complicated.
 2)  AFAICT, if a new unit type were to be added, I'd have to rewrite this 
 query.  Is there any way to avoid that?
 3)  It seems like everything after the first query, except for the category 
 field, is redundant information, and that in theory you should be able to say 
 crosstab('query','category_field').  Is there any inherent reason this 
 simpler form couldn't work, or is it just that no one has wanted to do it, or 
 gotten to it yet?

Take a look at this article [1]. The author has built some plpgsql
automation that generates the resulting crosstab query, roughly as you
described in (3), and it looks like it solves (1) and (2) as well. For
complex queries you can create views and use them as the tablename argument.

[1] 
http://www.cureffi.org/2013/03/19/automatically-creating-pivot-table-column-names-in-postgresql/

-- 
Kind regards,
Sergey Konoplev
PostgreSQL Consultant and DBA

http://www.linkedin.com/in/grayhemp
+1 (415) 867-9984, +7 (901) 903-0499, +7 (988) 888-1979
gray...@gmail.com




Re: [GENERAL] Multi Master Replication

2013-12-18 Thread Scott Marlowe
On Wed, Dec 18, 2013 at 1:16 PM, John R Pierce pie...@hogranch.com wrote:
 On 12/18/2013 1:31 AM, itishree sukla wrote:

 I need suggestion about setting up multi master replication between two
 postgresql server place two different geographical area. As i know using
 some third party tool like Bucardo,RubyRep it can be achievable, not sue
 which is the good one to use. If any one  can provide me some online
 documentation links, it will help me as well.


 that sort of replication is very problematic.   its virtually impossible to
 maintain ACID (Atomicity, Consistency, Isolation, Durability) and maintain
 any semblance of performance.

 question for you, what do you expect to happen if the communications link
 between the servers is interrupted, and updates continue to be sent to both
 servers?

When people start talking multi-master replication my first response
is to ask what problem you're trying to solve. Sometimes MM Rep IS the
answer. But quite often it's not the best one for your problem. So to
OP I'd ask what problem they're trying to solve.

-- 
To understand recursion, one must first understand recursion.




Re: [GENERAL] Question(s) about crosstab

2013-12-18 Thread John Abraham

On Dec 18, 2013, at 3:52 PM, Joe Conway m...@joeconway.com wrote:
 
 3) Can't someone write a pl language routine that does it better?  I'd be 
 willing to work on the core
 functionality in python if someone else would be willing to embed it in 
 plpython (I've never used plpython.)
 
 3) Not possible -- reason was given down thread. Column definition must be 
 known/determinable by the parser prior to query execution.

No, wait, I think you misunderstood my idea.  Can't we have a Python function
that crosstabs the data in Python, then creates a new table, then inserts the
data into that new table by looping through a bunch of inserts?

The parser wouldn't need to know a thing.  There would be no output from the
function itself; it would just create a table while it ran.  (I suppose it
could return the name of the table, or a boolean success/fail flag, or the
number of columns that were created, but all of these are simple things
knowable to the parser in advance.)

I've written functions before in plpgsql that create tables behind the
scenes and return no output themselves.

—
John



Re: [GENERAL] Question(s) about crosstab

2013-12-18 Thread AI Rumman
Hi,

Once I faced the same problem of adding a new type and rewriting the query
that used the crosstab function. I then created a dynamic crosstab function.
You may have a look at it and see if it works out for you:
http://www.rummandba.com/2013/03/postgresql-dynamic-crosstab-function.html

Thanks.


On Tue, Dec 17, 2013 at 3:31 PM, Ken Tanzer ken.tan...@gmail.com wrote:

 Hi.  I've got a simple table unit_hold, with grant numbers, buildings and
 counts of unit types, which I need to summarize, along with a table listing
 unit types:

 \d unit_hold
  Table public.unit_hold
 Column| Type  | Modifiers
 --+---+---
  grant_number_code| character varying(10) |
  housing_project_code | character varying(10) |
  unit_type_code   | character varying(10) |
  count| bigint|

 SELECT * FROM unit_hold limit 3;
  grant_number_code | housing_project_code | unit_type_code | count
 ---+--++---
  1 |  | 4BR| 1
  1 |  | 1BR| 1
  1 |  | 1BR| 1

 SELECT unit_type_code,description FROM l_unit_type;
  unit_type_code | description
 +-
  5BR| 5 Bedroom
  4BR| 4 Bedroom
  3BR| 3 Bedroom
  6BR| 6 Bedroom
  UNKNOWN| Unknown
  GROUP  | Group Home
  2BR| 2 Bedroom
  1BR| 1 Bedroom
  0BR| Studio
  SRO| SRO


 I thought this would be a good candidate for crosstab.  After wrestling
 with the documentation, this is the best I could come up with:

  SELECT * FROM crosstab(
    'SELECT housing_project_code||''_''||grant_number_code AS
  project_and_grant,grant_number_code,housing_project_code,unit_type_code,count
  FROM unit_hold ORDER BY 1,2',
    'SELECT * FROM (SELECT DISTINCT unit_type_code FROM l_unit_type) foo
  ORDER BY unit_type_code ~ ''^[0-9]'' DESC, unit_type_code'
  ) AS ct(project_and_grant varchar, grant_number_code varchar,
  housing_project_code varchar, "0BR" bigint, "1BR" bigint, "2BR" bigint,
  "3BR" bigint, "4BR" bigint, "5BR" bigint, "6BR" bigint, "GROUP" bigint,
  "SRO" bigint, "UNKNOWN" bigint)

 So here are my questions:

 1)  Is there a simpler way?  I'm hoping I made this unnecessarily
 cumbersome and complicated.
 2)  AFAICT, if a new unit type were to be added, I'd have to rewrite this
 query.  Is there any way to avoid that?
 3)  It seems like everything after the first query, except for the
 category field, is redundant information, and that in theory you should be
 able to say crosstab('query','category_field').  Is there any inherent
 reason this simpler form couldn't work, or is it just that no one has
 wanted to do it, or gotten to it yet?

 Thanks in advance!

 Ken


 --
 AGENCY Software
 A data system that puts you in control
 100% Free Software
  http://agency-software.org/
  ken.tan...@agency-software.org
  (253) 245-3801

  Subscribe to the mailing list to learn more about AGENCY or
  follow the discussion.



Re: [GENERAL] Replication failed after stalling

2013-12-18 Thread Sergey Konoplev
On Wed, Dec 18, 2013 at 11:26 AM, Joe Van Dyk j...@tanga.com wrote:
 I'm running Postgresql 9.3. I have a streaming replication server. Someone
 was running a long COPY query (8 hours) on the standby which halted
 replication. The replication stopped at 3:30 am. I canceled the long-running
 query at 9:30 am and replication data started catching up.

What do you mean by "COPY on the standby halted replication"?

-- 
Kind regards,
Sergey Konoplev
PostgreSQL Consultant and DBA

http://www.linkedin.com/in/grayhemp
+1 (415) 867-9984, +7 (901) 903-0499, +7 (988) 888-1979
gray...@gmail.com




Re: [GENERAL] Question(s) about crosstab

2013-12-18 Thread Joe Conway

On 12/18/2013 05:14 PM, John Abraham wrote:

On Dec 18, 2013, at 3:52 PM, Joe Conway m...@joeconway.com wrote:

3) Not possible -- reason was given down thread. Column definition
must be known/determinable by the parser prior to query execution.


No, wait, I think you misunderstood my idea.  Can’t we have a python
function that crosstabs the data in python, then creates a new table,
then inserts that data into the new table by looping through a bunch
of inserts?


Oh, well, I believe you could do that today in plpgsql (build the "CREATE
TABLE ... AS SELECT * FROM crosstab(...)" SQL as described earlier in this
thread, then EXECUTE it), so I'm sure you could do it in plpython as well.
But this isn't generally what people are looking for.
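
(Editor's note: a minimal sketch of that approach; the function name is
illustrative, and the crosstab SQL itself is elided since it is whatever was
generated earlier in the thread:)

```sql
-- Materialize an arbitrary query into a table behind the scenes; the
-- caller's parser never needs to know the result's column list, only
-- that a table name comes back.
CREATE OR REPLACE FUNCTION materialize_query(tbl text, qry text)
RETURNS text AS $$
BEGIN
    EXECUTE format('CREATE TABLE %I AS %s', tbl, qry);
    RETURN tbl;
END;
$$ LANGUAGE plpgsql;
-- e.g. SELECT materialize_query('unit_hold_ct',
--                               'SELECT * FROM crosstab(...) AS ct(...)');
```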


Joe

--
Joe Conway
credativ LLC: http://www.credativ.us
Linux, PostgreSQL, and general Open Source
Training, Service, Consulting,  24x7 Support




[GENERAL] unexpected pageaddr error in db log

2013-12-18 Thread wd
hi,

We upgraded our database from PostgreSQL 9.2.4 to 9.3.2, using dump/restore
into a fresh database created by initdb.

Then we used pg_basebackup to create two standby databases; the errors below
occurred when starting them.

db1

[2013-12-19 04:00:56.882 CST 17956 52b1fef8.4624 1 0]LOG:
database system was interrupted; last known up at 2013-12-19 03:54:05
CST
[2013-12-19 04:00:56.883 CST 17956 52b1fef8.4624 2 0]LOG:
entering standby mode
[2013-12-19 04:00:57.395 CST 17956 52b1fef8.4624 3 0]LOG:
restored log file 00010004001D from archive
[2013-12-19 04:00:57.415 CST 17956 52b1fef8.4624 4 0]LOG:  redo
starts at 4/1D28
[2013-12-19 04:00:57.923 CST 17956 52b1fef8.4624 5 0]LOG:
restored log file 00010004001E from archive
[2013-12-19 04:00:57.933 CST 17956 52b1fef8.4624 6 0]LOG:
consistent recovery state reached at 4/1E012088
[2013-12-19 04:00:57.933 CST 17948 52b1fef8.461c 4 0]LOG:
database system is ready to accept read only connections [
2013-12-19 04:00:58.078 CST 17956 52b1fef8.4624 7 0]LOG:  unexpected
pageaddr 3/2B00 in log segment 00010004001F, offset 0
[2013-12-19 04:00:58.084 CST 17969 52b1fefa.4631 1 0]LOG:  started
streaming WAL from primary at 4/1F00 on timeline 1



db2

[2013-12-19 04:14:20.366 CST 18340 52b2021c.47a4 1 0]LOG:  database system was interrupted; last known up at 2013-12-19 04:07:43 CST
[2013-12-19 04:14:20.367 CST 18340 52b2021c.47a4 2 0]LOG:  entering standby mode
[2013-12-19 04:14:20.870 CST 18340 52b2021c.47a4 3 0]LOG:  restored log file 000100040021 from archive
[2013-12-19 04:14:20.881 CST 18340 52b2021c.47a4 4 0]LOG:  redo starts at 4/2128
[2013-12-19 04:14:21.648 CST 18340 52b2021c.47a4 5 0]LOG:  restored log file 000100040022 from archive
[2013-12-19 04:14:22.415 CST 18340 52b2021c.47a4 6 0]LOG:  restored log file 000100040023 from archive
[2013-12-19 04:14:23.061 CST 18340 52b2021c.47a4 7 0]LOG:  restored log file 000100040024 from archive
[2013-12-19 04:14:23.831 CST 18340 52b2021c.47a4 8 0]LOG:  restored log file 000100040025 from archive
[2013-12-19 04:14:24.598 CST 18340 52b2021c.47a4 9 0]LOG:  restored log file 000100040026 from archive
[2013-12-19 04:14:25.365 CST 18340 52b2021c.47a4 10 0]LOG:  restored log file 000100040027 from archive
[2013-12-19 04:14:26.093 CST 18340 52b2021c.47a4 11 0]LOG:  restored log file 000100040028 from archive
[2013-12-19 04:14:26.857 CST 18340 52b2021c.47a4 12 0]LOG:  restored log file 000100040029 from archive
[2013-12-19 04:14:27.592 CST 18340 52b2021c.47a4 13 0]LOG:  restored log file 00010004002A from archive
[2013-12-19 04:14:28.355 CST 18340 52b2021c.47a4 14 0]LOG:  restored log file 00010004002B from archive
[2013-12-19 04:14:29.077 CST 18340 52b2021c.47a4 15 0]LOG:  restored log file 00010004002C from archive
[2013-12-19 04:14:29.843 CST 18340 52b2021c.47a4 16 0]LOG:  restored log file 00010004002D from archive
[2013-12-19 04:14:30.564 CST 18340 52b2021c.47a4 17 0]LOG:  restored log file 00010004002E from archive
[2013-12-19 04:14:31.319 CST 18340 52b2021c.47a4 18 0]LOG:  restored log file 00010004002F from archive
[2013-12-19 04:14:32.071 CST 18340 52b2021c.47a4 19 0]LOG:  restored log file 000100040030 from archive
[2013-12-19 04:14:32.923 CST 18340 52b2021c.47a4 20 0]LOG:  restored log file 000100040031 from archive
[2013-12-19 04:14:33.587 CST 18340 52b2021c.47a4 21 0]LOG:  restored log file 000100040032 from archive
[2013-12-19 04:14:33.909 CST 18340 52b2021c.47a4 22 0]LOG:  consistent recovery state reached at 4/32CCAA88
[2013-12-19 04:14:33.909 CST 18332 52b2021c.479c 4 0]LOG:  database system is ready to accept read only connections
[2013-12-19 04:14:34.059 CST 18340 52b2021c.47a4 23 0]LOG:  unexpected pageaddr 3/5A00 in log segment 000100040033, offset 0
[2013-12-19 04:14:34.065 CST 18431 52b2022a.47ff 1 0]LOG:  started streaming WAL from primary at 4/3300 on timeline 1



 Both db1 and db2 are working properly, so what does the error mean? Is this a
bug?


Re: [GENERAL] Multi Master Replication

2013-12-18 Thread Michael Paquier
On Thu, Dec 19, 2013 at 7:12 AM, Scott Marlowe scott.marl...@gmail.com wrote:
 On Wed, Dec 18, 2013 at 1:16 PM, John R Pierce pie...@hogranch.com wrote:
 that sort of replication is very problematic.   It's virtually impossible to
 maintain ACID (Atomicity, Consistency, Isolation, Durability) and maintain
 any semblance of performance.
Yep, there's usually a trade-off between performance and data
consistency. OLTP applications can benefit from MM with a
shared-nothing architecture more than data-warehouse-type workloads,
which need to transfer a lot of data for join operations or for SQL
operations that use non-pushable clauses (for example, stable/volatile
functions).

 question for you, what do you expect to happen if the communications link
 between the servers is interrupted, and updates continue to be sent to both
 servers?
Split-brain is another problem, and a hard one to solve. It is even harder if
you have several types of nodes in your cluster, each dedicated to providing
some building block of the MM system.

 When people start talking multi-master replication my first response
 is to ask what problem you're trying to solve. Sometimes MM Rep IS the
 answer. But quite often it's not the best one for your problem. So to
 OP I'd ask what problem they're trying to solve.
Yes, that's actually the right approach. Multi-master replication is
often cited as a marketing term for a fantastic technology that can
solve a lot of problems, many of which could be solved with a couple of
Postgres servers using a single-master, multiple-slave approach, or by
simply designing a system that does data sharding among a set of
Postgres servers to achieve some kind of write scalability.

Regards,
-- 
Michael


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] Installed postgres.app 9.3.1.0. pgadmin doesn't appear to see it

2013-12-18 Thread Bob Futrelle
I uninstalled 9.2 before installing 9.3.1.0.

The app is called Postgres93, it is version 9.3.1.0
I downloaded the latest pgAdmin, it is pgAdmin3 version 1.18.1

I have a database MiniServer which is supposed to use postgres
as its Maintenance database.
But there is no such database.

Perhaps it can't find the DBs that were there?

pg_upgrade requires

pg_upgrade -b oldbindir -B newbindir -d olddatadir -D newdatadir [option...]

but I'm not sure what the 'old' values should be (since I uninstalled 9.2 -
maybe I shouldn't have?)

In my /Library/PostgreSQL/9.2/data/global I see 42 files from 8K to 25K in
size, most created last March - my data must be there.  So that may be
telling me to create an empty folder,
 /Library/PostgreSQL/9.3/

In my Postgres93 bundle there's a Contents/MacOS/postgres, a small unix
executable.
Is that the binary?   But again there's no binary for 9.2.

I was rolling along with 9.2 getting lots of good work done via
Eclipse/JDBC.
But now I'm stuck.

Sorry for being so dense. But I've never been afraid to ask questions.
The answers I get increase my understanding of whatever the topic is.

Thanks in advance for helping a tyro.

 - Bob


Re: [GENERAL] Multi Master Replication

2013-12-18 Thread Scott Marlowe
On Wed, Dec 18, 2013 at 10:25 PM, Michael Paquier
michael.paqu...@gmail.com wrote:

 On Thu, Dec 19, 2013 at 7:12 AM, Scott Marlowe scott.marl...@gmail.com 
 wrote:

 When people start talking multi-master replication my first response
 is to ask what problem you're trying to solve. Sometimes MM Rep IS the
 answer. But quite often it's not the best one for your problem. So to
 OP I'd ask what problem they're trying to solve.

 Yes that's actually the right approach, multi-master replication is
 often cited as a marketing term for a fantastic technology that can
 solve a lot of problems, which could be solved with a couple of
 Postgres servers using a single-master, multiple-slave approach, or by
 simply designing a system that can do data sharding among a set of
 Postgres servers to achieve some kind of write scalability.

Sharding with plproxy is pretty easy and can scale hugely.
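As background for readers who haven't used it: pl/proxy routes each function call to a partition chosen by hashing a key (its partition count must be a power of two so the hash can be masked down to an index). A minimal Python sketch of that routing idea follows; the shard names and the choice of hash function are illustrative assumptions, not pl/proxy's actual implementation:

```python
import hashlib

# Illustrative shard map: the partition count is a power of two (4 = 2**2)
# so a hash can be masked down to a partition index, as pl/proxy requires.
SHARDS = ["db0", "db1", "db2", "db3"]

def shard_for(username: str) -> str:
    """Pick the shard that owns this user's rows."""
    h = int(hashlib.md5(username.encode()).hexdigest(), 16)
    return SHARDS[h & (len(SHARDS) - 1)]

# Every reader and writer applies the same function, so all traffic for a
# given user lands on one server and writes scale with the shard count.
```

Because the mapping is deterministic, any proxy node can compute it independently, which is what makes this approach scale without coordination between proxies.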




Re: [GENERAL] unexpected pageaddr error in db log

2013-12-18 Thread Amit Langote
On Thu, Dec 19, 2013 at 12:25 PM, wd w...@wdicc.com wrote:

 hi,

 We have upgrade our database from PostgreSQL 9.2.4 to 9.3.2, use dump  
 restore in a fresh db created by initdb.

 Then use pg_basebackup created 2 slave db, the error occur when start the 
 database.

 db1

 [2013-12-19 04:00:56.882 CST 17956 52b1fef8.4624 1 0]LOG:  database 
 system was interrupted; last known up at 2013-12-19 03:54:05 CST
 [2013-12-19 04:00:56.883 CST 17956 52b1fef8.4624 2 0]LOG:  entering 
 standby mode
 [2013-12-19 04:00:57.395 CST 17956 52b1fef8.4624 3 0]LOG:  restored log 
 file 00010004001D from archive
 [2013-12-19 04:00:57.415 CST 17956 52b1fef8.4624 4 0]LOG:  redo starts at 
 4/1D28
 [2013-12-19 04:00:57.923 CST 17956 52b1fef8.4624 5 0]LOG:  restored log 
 file 00010004001E from archive
 [2013-12-19 04:00:57.933 CST 17956 52b1fef8.4624 6 0]LOG:  consistent 
 recovery state reached at 4/1E012088
 [2013-12-19 04:00:57.933 CST 17948 52b1fef8.461c 4 0]LOG:  database 
 system is ready to accept read only connections
 [2013-12-19 04:00:58.078 CST 17956 52b1fef8.4624 7 0]LOG:  unexpected 
 pageaddr 3/2B00 in log segment 00010004001F, offset 0
 [2013-12-19 04:00:58.084 CST 17969 52b1fefa.4631 1 0]LOG:  started 
 streaming WAL from primary at 4/1F00 on timeline 1


 db2

 [2013-12-19 04:14:20.366 CST 18340 52b2021c.47a4 1 0]LOG:  database 
 system was interrupted; last known up at 2013-12-19 04:07:43 CST
 [2013-12-19 04:14:20.367 CST 18340 52b2021c.47a4 2 0]LOG:  entering 
 standby mode
 [2013-12-19 04:14:20.870 CST 18340 52b2021c.47a4 3 0]LOG:  restored log 
 file 000100040021 from archive
 [2013-12-19 04:14:20.881 CST 18340 52b2021c.47a4 4 0]LOG:  redo starts at 
 4/2128
 [2013-12-19 04:14:21.648 CST 18340 52b2021c.47a4 5 0]LOG:  restored log 
 file 000100040022 from archive
 [2013-12-19 04:14:22.415 CST 18340 52b2021c.47a4 6 0]LOG:  restored log 
 file 000100040023 from archive
 [2013-12-19 04:14:23.061 CST 18340 52b2021c.47a4 7 0]LOG:  restored log 
 file 000100040024 from archive
 [2013-12-19 04:14:23.831 CST 18340 52b2021c.47a4 8 0]LOG:  restored log 
 file 000100040025 from archive
 [2013-12-19 04:14:24.598 CST 18340 52b2021c.47a4 9 0]LOG:  restored log 
 file 000100040026 from archive
 [2013-12-19 04:14:25.365 CST 18340 52b2021c.47a4 10 0]LOG:  restored log 
 file 000100040027 from archive
 [2013-12-19 04:14:26.093 CST 18340 52b2021c.47a4 11 0]LOG:  restored log 
 file 000100040028 from archive
 [2013-12-19 04:14:26.857 CST 18340 52b2021c.47a4 12 0]LOG:  restored log 
 file 000100040029 from archive
 [2013-12-19 04:14:27.592 CST 18340 52b2021c.47a4 13 0]LOG:  restored log 
 file 00010004002A from archive
 [2013-12-19 04:14:28.355 CST 18340 52b2021c.47a4 14 0]LOG:  restored log 
 file 00010004002B from archive
 [2013-12-19 04:14:29.077 CST 18340 52b2021c.47a4 15 0]LOG:  restored log 
 file 00010004002C from archive
 [2013-12-19 04:14:29.843 CST 18340 52b2021c.47a4 16 0]LOG:  restored log 
 file 00010004002D from archive
 [2013-12-19 04:14:30.564 CST 18340 52b2021c.47a4 17 0]LOG:  restored log 
 file 00010004002E from archive
 [2013-12-19 04:14:31.319 CST 18340 52b2021c.47a4 18 0]LOG:  restored log 
 file 00010004002F from archive
 [2013-12-19 04:14:32.071 CST 18340 52b2021c.47a4 19 0]LOG:  restored log 
 file 000100040030 from archive
 [2013-12-19 04:14:32.923 CST 18340 52b2021c.47a4 20 0]LOG:  restored log 
 file 000100040031 from archive
 [2013-12-19 04:14:33.587 CST 18340 52b2021c.47a4 21 0]LOG:  restored log 
 file 000100040032 from archive
 [2013-12-19 04:14:33.909 CST 18340 52b2021c.47a4 22 0]LOG:  consistent 
 recovery state reached at 4/32CCAA88
 [2013-12-19 04:14:33.909 CST 18332 52b2021c.479c 4 0]LOG:  database 
 system is ready to accept read only connections
 [2013-12-19 04:14:34.059 CST 18340 52b2021c.47a4 23 0]LOG:
 unexpected pageaddr 3/5A00 in log segment 000100040033, 
 offset 0
 [2013-12-19 04:14:34.065 CST 18431 52b2022a.47ff 1 0]LOG:  started 
 streaming WAL from primary at 4/3300 on timeline 1


 Both db1 and db2 are working properly, so what does the error mean? Is this a 
 bug?

The unexpected pageaddr log entry in this case means the standby reached
the end of the existing WAL. So, just before connecting to the walsender for
streaming replication, it logs this message.
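In other words, every WAL page header records the address the page should live at; when recovery reads a page whose stored address does not match the position being read (typically leftover content from a recycled segment), it knows the valid WAL ends there. A rough sketch of that check, heavily simplified relative to PostgreSQL's actual page-header structs:

```python
def wal_page_valid(expected_pageaddr: int, header_pageaddr: int) -> bool:
    """A WAL page is valid only if the address stamped in its header
    matches the position it was read from; a mismatch means the page
    still holds leftover content from a recycled segment."""
    return header_pageaddr == expected_pageaddr

# Recovery walks pages in order until the check fails, then the standby
# switches over to streaming WAL from the primary.
pages = {
    0x4_1E000000: 0x4_1E000000,  # header matches position: valid WAL
    0x4_1F000000: 0x3_2B000000,  # recycled page still carrying old data
}
end_of_wal = next(addr for addr, hdr in sorted(pages.items())
                  if not wal_page_valid(addr, hdr))
# end_of_wal corresponds to the "unexpected pageaddr" position in the logs
```

This is why the message is harmless here: it marks the expected hand-off point between archive recovery and streaming, not corruption.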

--
Amit




Re: [GENERAL] Multi Master Replication

2013-12-18 Thread Michael Paquier
On Thu, Dec 19, 2013 at 2:05 PM, Scott Marlowe scott.marl...@gmail.com wrote:
 Sharding with plproxy is pretty easy and can scale hugely.
Yeah indeed, the writable postgres_fdw could also be used as a
solution, if designed carefully.
-- 
Michael




Re: [GENERAL] Multi Master Replication

2013-12-18 Thread Chris Travers
On Wed, Dec 18, 2013 at 1:31 AM, itishree sukla itishree.su...@gmail.comwrote:

 Hi all,

 I need a suggestion about setting up multi-master replication between two
 postgresql servers placed in two different geographical areas. As I know, it
 can be achieved using some third-party tool like Bucardo or RubyRep, but I am
 not sure which is the good one to use. If anyone can provide me some online
 documentation links, it will help me as well.


First suggestion:  don't.  Multi-master replication is almost never a win.
 I tell customers this and they don't like to hear it, but it is true.

Second suggestion:  If you absolutely must, go with Bucardo.  Rubyrep might
work but it is db-agnostic and this raises additional complexity concerns.

The major reasons why I argue against multi-master replication are:

1.  With async replication you have the problem of conflict resolution.  In
some (rare) cases this is avoidable, but in most cases it is not.  You have
to deal with the fact that two different people in two different locations
may update the same row, and you have to decide how to handle this.  Once
you have a conflict, every option for resolving it results in data loss.
There are rare cases where this is the right solution.
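To make the data-loss point concrete, here is a toy "last write wins" resolver, one common policy in async multi-master setups (the row layout and timestamps are invented for illustration, not taken from any particular replication tool):

```python
from dataclasses import dataclass

@dataclass
class Version:
    value: str
    ts: float  # commit timestamp on the originating master

def resolve_last_write_wins(a: Version, b: Version) -> Version:
    """Keep whichever update committed later; the other is silently lost."""
    return a if a.ts >= b.ts else b

# Two masters update the same row while the link between them is down:
site1 = Version("phone=555-0101", ts=100.0)
site2 = Version("email=j@example.com", ts=101.0)

winner = resolve_last_write_wins(site1, site2)
# site2's version survives; site1's phone-number update is discarded,
# even though both users believed their write had committed.
```

Whatever policy you pick (latest timestamp, site priority, manual review), one committed write is discarded, which is the unavoidable data loss described above.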

2.  With sync replication, you have coordination problems and therefore it
is never (at least IME) a win compared to master-slave replication since
all writes must occur in the same order in the set, or you need global
sequences, or such.  So I would say that something like PGPool for
multi-master replication is just to be avoided generally (however for load
balancing in master/slave it can be a win).  You will never get better read
or write throughput, or a more robust system than you will with
master/slave replication in a synchronous environment.  Keep it simple.

As others have mentioned, your best bet here is pl/proxy.  I would go a
little further, however, and suggest that you can separate storage dbs from
proxy dbs and thus create the appearance of multi-master over a
master-slave setup.  If you are going to go this route, though, I don't
know whether Postgres-XC would be worth looking into.

The key here though is that design effort is important.  If you carefully
design your federated storage, then you should be good.  However this leads
to lots of problems and you need to think them through.

-- 
Best Wishes,
Chris Travers

Efficito:  Hosted Accounting and ERP.  Robust and Flexible.  No vendor
lock-in.
http://www.efficito.com/learn_more.shtml