Re: [GENERAL] streaming replication and recovery

2014-04-14 Thread Michael Paquier
On Sat, Apr 12, 2014 at 3:12 PM, Anupama Ramaswamy anumr_0...@yahoo.com wrote:
 Let's suppose at this point there is 0 delivery lag but  bytes of replay
 lag.

All your answers are here:
http://www.postgresql.org/docs/devel/static/warm-standby.html
Standby mode is exited and the server switches to normal operation
when pg_ctl promote is run or a trigger file is found (trigger_file).
Before failover, any WAL immediately available in the archive or in
pg_xlog will be restored, but no attempt is made to connect to the
master.

 a) Will the replay complete before the standby stops replicating (because it
 sees the trigger file) ?
Any WAL available in the archives or pg_xlog will be replayed before
the failover.

 b) If I want to run this as new master and attach other secondaries to point
 to this master, can I do it soon after touch /tmp/pgsql.failover OR should
 I wait till the secondary has finished replaying all the delivered stream ?
You need to wait until all the WAL has been replayed, which is the
point where failover occurs.

 c) How do I know if the replay is over and it is ready for a standalone
 operation ?
SELECT pg_is_in_recovery(); returns true while the server is still
performing recovery operations.
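For example, on a 9.x standby you can compare the receive and replay positions to see whether replay has caught up (function names as in the 9.x docs; verify against your version):

```sql
-- On the standby: still in recovery?
SELECT pg_is_in_recovery();

-- Replay has caught up when the replayed location equals the received one
SELECT pg_last_xlog_receive_location() AS received,
       pg_last_xlog_replay_location()  AS replayed;
```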
-- 
Michael


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] streaming replication + wal shipping

2014-04-14 Thread Albe Laurenz
Anupama Ramaswamy wrote:
 I would like to setup a 2 servers with streaming replication, one master and 
 another hot standby.
 I want to use the standby for read-only queries. So I want the replication 
 lag to be as small as
 possible.
 So I choose streaming replication over WAL shipping.
 
 When the master fails, I want the standby to take over as master. So I would 
 like minimal data loss,
 if there is a streaming replication delay.
 
 Is it possible to set it up in such a way that under normal conditions the 
 standby replicates using streaming replication and, on failover, it uses the 
 WAL archive for syncing up with the transactions?
 Of course the WAL will be available on a shared storage volume. If this is 
 possible, what exactly do I
 need in my configuration files - postgresql.conf, recovery.conf ?

Most of this will happen automatically - WAL archives are used if recovery
falls behind.
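A rough sketch of the pieces involved (hostnames, paths, and user names below are placeholders; adjust to your environment and PostgreSQL version):

```
# postgresql.conf on the master
wal_level = hot_standby
max_wal_senders = 3
archive_mode = on
archive_command = 'cp %p /mnt/wal_archive/%f'   # the shared storage volume

# postgresql.conf on the standby
hot_standby = on

# recovery.conf on the standby
standby_mode = 'on'
primary_conninfo = 'host=master.example.com port=5432 user=replicator'
restore_command = 'cp /mnt/wal_archive/%f %p'
trigger_file = '/tmp/pgsql.failover'
```

With both primary_conninfo and restore_command set, the standby streams under normal conditions and falls back to the archive when streaming is interrupted or lags.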

Where you will need additional software is automatic failover; you need some
OS cluster software that can detect failure and automatically promote the 
standby.

Yours,
Laurenz Albe



Re: [GENERAL] CLOB BLOB limitations in PostgreSQL

2014-04-14 Thread Albe Laurenz
Jack.O'Sullivan wrote:
 I am working for a client who is interested in migrating from Oracle to 
 Postgres. Their database is
 currently ~20TB in size, and is growing. The biggest table in this database 
 is effectively a BLOB
 store and currently has around 1 billion rows.
 
 From reading around Postgres, there are a couple of limits which are 
 concerning in terms of being able
 to migrate this database. We are not up against these limits just yet, but it 
 is likely that they will
 be a potential blocker within the next few years.
 
 1) A table can be a maximum of 32 TB (http://www.postgresql.org/about/)
 
 2) When storing bytea or text datatypes there is a limit of 4 billion entries 
 per table (https://wiki.postgresql.org/wiki/BinaryFilesInDB)
 
 With both of these, are they hard limits or can they be worked around with 
 partitioning of tables?
 Could we set the table up in such a way that each child table was limited, 
 but there was no limit on
 the number of children?

Yes, if you store the BLOBs as bytea.  The limits will be per partition.

If you want to use LOBs, there cannot be more than 2^32 per database.

 With point two, does this mean that any table with a bytea datatype is 
 limited to 4 billion rows (which would seem in conflict with the unlimited 
 rows shown by http://www.postgresql.org/about)? If we had rows where the 
 bytea was a null entry would they contribute towards this total or is it 
 4 billion non-null entries?

I think it is 4 billion rows that contain a column that is TOASTed.
NULLs won't contribute.
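A sketch of the inheritance-based partitioning available in these releases (table and column names here are made up for illustration). Each child table gets its own TOAST table, so the 2^32 TOAST-entry limit applies per child:

```sql
CREATE TABLE blob_store (
    id      bigint PRIMARY KEY,
    created date   NOT NULL,
    payload bytea
);

-- One child per year; the CHECK constraint enables constraint exclusion
CREATE TABLE blob_store_2014 (
    CHECK (created >= DATE '2014-01-01' AND created < DATE '2015-01-01')
) INHERITS (blob_store);

CREATE TABLE blob_store_2015 (
    CHECK (created >= DATE '2015-01-01' AND created < DATE '2016-01-01')
) INHERITS (blob_store);
```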

Yours,
Laurenz Albe



Re: [GENERAL] efficient way to do fuzzy join

2014-04-14 Thread Rémi Cura
2014-04-12 15:04 GMT+02:00 Andy Colson a...@squeakycode.net:

 On 04/12/2014 06:29 AM, Rémi Cura wrote:

 (please note that this random string function is NOT the right way to
 do it; I should generate a random int and use it as an index into an
 array containing all the letters)

 Thanks a lot for this new version! It seems to be slower than your
 first solution (no index use, I guess; I gave up after 5 minutes vs 5
 sec for the previous). Moreover, I can't make assumptions about a
 fixed interval (2 sec in your example). But I think I see where you
 are going.


 After some tests, the fastest is using BETWEEN and a range (it is way
 faster than using the @> operator, strangely)

 Here is the code :


 Ah, sorry about that.  I got pulled away to work on work stuff.  I was
 trying to figure out how to use an index on the range query, but not sure,
 without adding a new column if it would even work.

 I've never had the need for ranges yet, this is the first time I've gotten
 to play with them.

 I would not have thought about between like that, good call.  I'd have
 never guess it would be so fast.


 If you can't use the fixed interval, then ranges are out.

 I was thinking this could be improved:


 select t,
  (select t from a where a.t >= b.t order by a.t limit 1) as mint,
  (select t from a where a.t < b.t order by a.t desc limit 1) as maxt
 from b

 It does two selects into a to find the nearest.  Given this:

 create table a(t float);


 insert into a values (1), (5), (6);

 could you write a single query to find the number nearest 3.5?  If so we
 might cut the work by 50%.

 -Andy

 PS: This list prefers you don't top post.


Hey,
the best I can come up with using your original idea is:
--
--fast-ish: 10 sec
DROP TABLE IF EXISTS t;
CREATE TABLE t AS
SELECT lower_b_a.gid AS gid_b, lower_b_a.t AS t_b --, lower_b_a.data AS data_b
     , lower_b_a.gid_l_b     AS gid_a_lower, a1.t AS t_a_lower --, a1.data AS data_a_lower
     , lower_b_a.gid_l_b - 1 AS gid_a_upper, a2.t AS t_a_upper --, a2.data AS data_a_upper
FROM (
    SELECT b.gid, b.t
         , (SELECT gid FROM a WHERE a.t >= b.t ORDER BY a.t ASC LIMIT 1) AS gid_l_b
    FROM b) AS lower_b_a
LEFT OUTER JOIN a AS a1 ON (a1.gid = gid_l_b)
LEFT OUTER JOIN a AS a2 ON (a2.gid = gid_l_b - 1);
---

As you suggested it doesn't read the table twice, but only once (to find
the closest lower value). The closest upper value is found by knowing it is
in the row next to the closest lower value.

Yet it is still slower :-/

The way to go seems to be the numrange.
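As an aside, a single-query answer to Andy's "number nearest 3.5" question could look like this (an untested sketch; each inner LIMIT 1 probe can use an index on a.t):

```sql
SELECT t
FROM (
    (SELECT t FROM a WHERE t <= 3.5 ORDER BY t DESC LIMIT 1)
    UNION ALL
    (SELECT t FROM a WHERE t >  3.5 ORDER BY t ASC  LIMIT 1)
) AS candidates
ORDER BY abs(t - 3.5)
LIMIT 1;   -- returns 5 for a = (1), (5), (6)
```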

Thanks a lot for the help in this optimization !

Cheers,

Rémi-C


Re: [GENERAL] CLOB BLOB limitations in PostgreSQL

2014-04-14 Thread Ivan Voras
On 11/04/2014 16:45, Jack.O'sulli...@tessella.com wrote:

 With point two, does this mean that any table with a bytea datatype is
 limited to 4 billion rows (which would seem in conflict with the
 unlimited rows shown by http://www.postgresql.org/about)? If we had
 rows where the bytea was a null entry would they contribute towards
 this total or is it 4 billion non-null entries?

This seems strange. A core developer should confirm this but it doesn't
make much sense - bytea fields are stored the same as text fields
(including varchar etc), i.e. the varlena internal representation, so
having the limit you are talking about would mean that any non-trivial
table with long-ish text fields would be limited to 2^32 entries...





Re: [GENERAL] encrypting data stored in PostgreSQL

2014-04-14 Thread Ivan Voras
On 09/04/2014 22:40, CS_DBA wrote:
 Hi All;
 
 We have a client with this requirement:
 
 At rest data must be encrypted with a unique client key
 
 Any thoughts on how to pull this off for PostgreSQL stored data?

Some time ago I did this, mostly as an experiment but IIRC it works
decently:
https://bitbucket.org/ivoras/pgenctypes






[GENERAL] CentOS 6 and Postgresql 9.3.4 from PGDG

2014-04-14 Thread Steve Clark

Hello,

Don't know if this is better asked on the CentOS ML or here, but...

CentOS 6 supplies 8.4.20, but I want to use hot standby. The issue is that the 
PGDG packages don't install into the usual place; they are installed in 
version-specific directories, including the data, binaries, libraries, etc. 
How do people deal with this when for years they have been using postgres 
with everything in standard, non-version-specific directories?

Thanks for any tips.
Steve

--
Stephen Clark
*NetWolves Managed Services, LLC.*
Director of Technology
Phone: 813-579-3200
Fax: 813-882-0209
Email: steve.cl...@netwolves.com
http://www.netwolves.com


Re: [GENERAL] CentOS 6 and Postgresql 9.3.4 from PGDG

2014-04-14 Thread Moshe Jacobson
On Mon, Apr 14, 2014 at 8:24 AM, Steve Clark scl...@netwolves.com wrote:

 CentOS 6 supplies 8.4.20 but I want to use hot standby - the issue is that
 the PGDG
 packages don't install into the usual place they are installed in
 version specific directories,
 including the data, binaries, libraries etc. How do people deal with this
 when for years they
 have been using postgres and stuff is in standard directories not
 version specific directories?


It's actually nicer that it uses a version specific directory, IMO, since
you can have two versions installed simultaneously for upgrade purposes.
I just create symlinks data/ and backups/ in /var/lib/pgsql to point to the
dirs of the same names under the 9.3/ directory.
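For example (sketched here against a scratch directory; on a real system the root would be /var/lib/pgsql and you would run this as the postgres user):

```shell
# Simulate the PGDG layout under a scratch dir instead of /var/lib/pgsql
PGROOT=$(mktemp -d)
mkdir -p "$PGROOT/9.3/data" "$PGROOT/9.3/backups"

# Point the version-less paths at the 9.3 directories
ln -s "$PGROOT/9.3/data"    "$PGROOT/data"
ln -s "$PGROOT/9.3/backups" "$PGROOT/backups"

# The version-less path now resolves to the versioned one
readlink "$PGROOT/data"
```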


Moshe Jacobson
Manager of Systems Engineering, Nead Werx Inc. http://www.neadwerx.com
2323 Cumberland Parkway · Suite 201 · Atlanta, GA 30339

Quality is not an act, it is a habit. -- Aristotle


Re: [GENERAL] CentOS 6 and Postgresql 9.3.4 from PGDG

2014-04-14 Thread Steve Clark

On 04/14/2014 09:02 AM, Moshe Jacobson wrote:



It's actually nicer that it uses a version specific directory, IMO, since you 
can have two versions installed simultaneously for upgrade purposes.
I just create symlinks data/ and backups/ in /var/lib/pgsql to point to the 
dirs of the same names under the 9.3/ directory.


How did you deal with binaries and libraries, as well as third party apps like 
perl modules or php/apache modules?

--
Stephen Clark
*NetWolves Managed Services, LLC.*
Director of Technology
Phone: 813-579-3200
Fax: 813-882-0209
Email: steve.cl...@netwolves.com
http://www.netwolves.com


[GENERAL] lpgport issue while installing pg_bulkload utility on fedora 14 OS

2014-04-14 Thread chiru r
Hi,

I am facing the below *-lpgport* issue while installing the pg_bulkload
utility on Fedora 14.

Details:

*OS* : *Fedora 14*

Linux localhost.localdomain 2.6.35.6-45.fc14.x86_64 #1 SMP Mon Oct 18
23:57:44 UTC 2010 x86_64 x86_64 x86_64 GNU/Linux

*PG* : PostgreSQL 8.4.9

*pg_bulkload* 3.1.4

[root@localhost pg_bulkload-3.1.4]# make

make[1]: Entering directory `/home/postgres/pg_bulkload-3.1.4/bin'

gcc -O2 -g -pipe -Wall -Wp,-D_FORTIFY_SOURCE=2 -fexceptions
-fstack-protector --param=ssp-buffer-size=4 -m64 -mtune=generic
-DLINUX_OOM_ADJ=0 -Wall -Wmissing-prototypes -Wpointer-arith
-Wdeclaration-after-statement -Wendif-labels -fno-strict-aliasing -fwrapv
pg_bulkload.o recovery.o pgut/pgut.o pgut/pgut-fe.o pgut/pgut-list.o
-L/usr/lib64 -lpq -L/usr/lib64 -Wl,--as-needed -lpgport -lpam -lssl
-lcrypto -lkrb5 -lcom_err -lgssapi_krb5 -lz -lreadline -lcrypt -ldl -lm -o
pg_bulkload

*/usr/bin/ld: cannot find -lpgport*

collect2: ld returned 1 exit status

make[1]: *** [pg_bulkload] Error 1

make[1]: Leaving directory `/home/postgres/pg_bulkload-3.1.4/bin'

make: *** [all] Error 2

[root@localhost pg_bulkload-3.1.4]#


Please share your experience on this issue.

--Chiru


Re: [GENERAL] CentOS 6 and Postgresql 9.3.4 from PGDG

2014-04-14 Thread Moshe Jacobson
On Mon, Apr 14, 2014 at 9:13 AM, Steve Clark scl...@netwolves.com wrote:

 How did you deal with binaries and libraries, as well as third party apps
 like perl modules or php/apache modules?


All of this is managed through the /etc/alternatives system. I'm honestly
not sure how all of it works, but I know that on my system, all of the
postgres binaries and libs are linked through /etc/alternatives.


Moshe Jacobson
Manager of Systems Engineering, Nead Werx Inc. http://www.neadwerx.com
2323 Cumberland Parkway · Suite 201 · Atlanta, GA 30339

Quality is not an act, it is a habit. -- Aristotle


Re: [GENERAL] lpgport issue while installing pg_bulkload utility on fedora 14 OS

2014-04-14 Thread Adrian Klaver

On 04/14/2014 06:21 AM, chiru r wrote:


Just what the docs say:

http://pgbulkload.projects.pgfoundry.org/pg_bulkload.html#install

$ cd pg_bulkload
$ make USE_PGXS=1
$ su
$ make USE_PGXS=1 install




--
Adrian Klaver
adrian.kla...@aklaver.com




Re: [GENERAL] lpgport issue while installing pg_bulkload utility on fedora 14 OS

2014-04-14 Thread chiru r

 Actually Just what the docs say:
 http://pgbulkload.projects.pgfoundry.org/pg_bulkload.html#install
 $ cd pg_bulkload
 $ make USE_PGXS=1
 $ su
 $ make USE_PGXS=1 install


Thanks Adrian...

USE_PGXS is already present in the Makefile.
A simple make and make install works fine on CentOS 5.9; I have
installed it successfully there.

But I am facing the -lpgport issue while installing on Fedora 14.

Even when I execute exactly as the docs say, I get the same *-lpgport* issue.

[root@localhost pg_bulkload-3.1.4]# *make USE_PGXS=1*
make[1]: Entering directory `/home/postgres/pg_bulkload-3.1.4/bin'
gcc -O2 -g -pipe -Wall -Wp,-D_FORTIFY_SOURCE=2 -fexceptions
-fstack-protector --param=ssp-buffer-size=4 -m64 -mtune=generic
-DLINUX_OOM_ADJ=0 -Wall -Wmissing-prototypes -Wpointer-arith
-Wdeclaration-after-statement -Wendif-labels -fno-strict-aliasing -fwrapv
pg_bulkload.o recovery.o pgut/pgut.o pgut/pgut-fe.o pgut/pgut-list.o
-L/usr/lib64 -lpq -L/usr/lib64 -Wl,--as-needed -lpgport -lpam -lssl
-lcrypto -lkrb5 -lcom_err -lgssapi_krb5 -lz -lreadline -lcrypt -ldl -lm -o
pg_bulkload
*/usr/bin/ld: cannot find -lpgport*
collect2: ld returned 1 exit status
make[1]: *** [pg_bulkload] Error 1
make[1]: Leaving directory `/home/postgres/pg_bulkload-3.1.4/bin'
make: *** [all] Error 2
[root@localhost pg_bulkload-3.1.4]#  *make USE_PGXS=1 install*
make[1]: Entering directory `/home/postgres/pg_bulkload-3.1.4/bin'
gcc -O2 -g -pipe -Wall -Wp,-D_FORTIFY_SOURCE=2 -fexceptions
-fstack-protector --param=ssp-buffer-size=4 -m64 -mtune=generic
-DLINUX_OOM_ADJ=0 -Wall -Wmissing-prototypes -Wpointer-arith
-Wdeclaration-after-statement -Wendif-labels -fno-strict-aliasing -fwrapv
pg_bulkload.o recovery.o pgut/pgut.o pgut/pgut-fe.o pgut/pgut-list.o
-L/usr/lib64 -lpq -L/usr/lib64 -Wl,--as-needed -lpgport -lpam -lssl
-lcrypto -lkrb5 -lcom_err -lgssapi_krb5 -lz -lreadline -lcrypt -ldl -lm -o
pg_bulkload
*/usr/bin/ld: cannot find -lpgport*
collect2: ld returned 1 exit status
make[1]: *** [pg_bulkload] Error 1
make[1]: Leaving directory `/home/postgres/pg_bulkload-3.1.4/bin'
make: *** [all] Error 2
[root@localhost pg_bulkload-3.1.4]#






Re: [GENERAL] lpgport issue while installing pg_bulkload utility on fedora 14 OS

2014-04-14 Thread Adrian Klaver

On 04/14/2014 07:21 AM, chiru r wrote:




Not sure how Fedora breaks out its packages, but do you have something 
equivalent to postgresql-dev installed?


--
Adrian Klaver
adrian.kla...@aklaver.com




Re: [GENERAL] lpgport issue while installing pg_bulkload utility on fedora 14 OS

2014-04-14 Thread Adrian Klaver

On 04/14/2014 07:21 AM, chiru r wrote:




Realized that in my previous message I should have asked what is the 
source of your Postgres packages? Some Googling shows this error 
occurring quite often with Fedora, so this probably requires someone 
with more experience with that distribution than I. In any case I have 
to run, hope you find the answer.



--
Adrian Klaver
adrian.kla...@aklaver.com




Re: [GENERAL] CentOS 6 and Postgresql 9.3.4 from PGDG

2014-04-14 Thread Alan Hodgson
On Monday, April 14, 2014 09:13:51 AM Steve Clark wrote:
 How did you deal with binaries and libraries, as well as third party apps
 like perl modules or php/apache modules?

The 8.4 library package usually ends up installed to satisfy other package 
requirements.

Binaries get handled through the alternatives system.






Re: [GENERAL] lpgport issue while installing pg_bulkload utility on fedora 14 OS

2014-04-14 Thread chiru r
Okay, I installed the postgresql-devel.x86_64 package on Fedora 14.

Google results suggest that the Fedora postgresql-devel package does not
ship libpgport:

http://www.postgresql.org/message-id/4ee2338f.1040...@dunslane.net

Is there any workaround?






[GENERAL] Approach to Data Summary and Analysis

2014-04-14 Thread Robert DiFalco
I have several related tables that represent a call state. Let's think of
these as phone calls to simplify things. Sometimes I need to determine the
last time a user was called, the last time a user answered a call, or the
last time a user completed a call.

The basic schema is something like this:

CREATE TABLE calls (
  id      BIGINT NOT NULL,  -- sequence generator
  user_id BIGINT NOT NULL,
  called  TIMESTAMPTZ NOT NULL DEFAULT CURRENT_TIMESTAMP,

  PRIMARY KEY (id),
  FOREIGN KEY (user_id) REFERENCES my_users(id) ON DELETE CASCADE
);

CREATE TABLE calls_answered (
  idBIGINT NOT NULL,
  answered  TIMESTAMPTZ NOT NULL DEFAULT CURRENT_TIMESTAMP,

  PRIMARY KEY (id),
  FOREIGN KEY (id) REFERENCES calls(id) ON DELETE CASCADE
);


And so on for calls_connected, calls_completed, call_errors, etc.

Occasionally I will want to know things like "When was the last time a user
answered a call?" or "How many times has a user been called?"

I can do these queries using a combination of MAX or COUNT. But I'm
concerned about the performance.

SELECT MAX(a.id)
FROM calls_answered a JOIN calls c ON c.id = a.id
WHERE c.user_id = ?;


Or the number of answered calls:

SELECT COUNT(a.id)
FROM calls_answered a JOIN calls c ON c.id = a.id
WHERE c.user_id = ?;


Sometimes I might want to get this data for a whole bunch of users. For
example, give me all users whose have not answered a call in the last 5
days. Or even what percentage of users called actually answered a call.
This approach could become a performance issue. So the other option is to
create a call_summary table that is updated with triggers.

The summary table would need fields like user_id, last_call_id,
call_count, last_answered_id, answered_count, last_completed_id,
last_completed_count, etc.

My only issue with a summary table is that I don't want a bunch of NULL
fields. For example, if the user was *called* but has never *answered* a
call, then the last_call_id and call_count fields on the summary table would
be non-NULL but the last_answer_id and answer_count fields would be NULL.
Over time, though, all fields would eventually become non-NULL.

So that leads me to a summary table for EACH call state. Each summary table
would have a user id, a ref_id, and a count -- one summary table for each
state e.g. call_summary, call_answered_summary, etc.

This approach has the down side that it creates a lot of tables and
triggers. It has the upside of being pretty efficient without having to
deal with NULL values.  It's also pretty easy to reason about.
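A minimal sketch of the trigger-maintained summary for one state (answered). It follows the schema above, but the summary table and function names are hypothetical, and concurrency (e.g. a race on the first insert for a user) is deliberately ignored:

```sql
CREATE TABLE calls_answered_summary (
    user_id          BIGINT NOT NULL PRIMARY KEY,
    last_answered_id BIGINT NOT NULL,
    answered_count   BIGINT NOT NULL DEFAULT 0
);

CREATE OR REPLACE FUNCTION bump_answered_summary() RETURNS trigger AS $$
BEGIN
    -- Try to bump the existing summary row for this user
    UPDATE calls_answered_summary s
       SET last_answered_id = NEW.id,
           answered_count   = s.answered_count + 1
     WHERE s.user_id = (SELECT user_id FROM calls WHERE id = NEW.id);
    -- First answered call for this user: create the summary row
    IF NOT FOUND THEN
        INSERT INTO calls_answered_summary (user_id, last_answered_id, answered_count)
        SELECT user_id, NEW.id, 1 FROM calls WHERE id = NEW.id;
    END IF;
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER calls_answered_summary_trg
AFTER INSERT ON calls_answered
FOR EACH ROW EXECUTE PROCEDURE bump_answered_summary();
```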

So for my question -- is the choice between these a personal preference
sort of thing or is there a right or wrong approach? Am I missing another
approach that would be better?  I'm okay with SQL but I'm not an expert, so
I'm not sure if there is an accepted DESIGN PATTERN for this that I am missing.

Thanks!


[GENERAL] Querying all documents for a company and its projects etc

2014-04-14 Thread Andreas Joseph Krogh
Hi all.

I'm trying to make an efficient query to list all documents related to a
company, and also documents related to employees and projects for that
company.

I have this sample-schema:

create table entity (
    id integer primary key,
    entity_type varchar not null,
    check (entity_type IN ('COMPANY', 'PERSON', 'PROJECT'))
);
create table company (
    id integer primary key references entity(id),
    name varchar not null
);
create table person (
    id integer primary key references entity(id),
    name varchar not null,
    company_id integer references company(id)
);
create table project (
    id integer primary key references entity(id),
    name varchar not null,
    company_id integer references company(id)
);
create table document (
    id integer primary key,
    name varchar not null
);
create table document_usage (
    document_id integer not null references document(id),
    entity_id integer not null references entity(id)
);

insert into entity(id, entity_type) values (1, 'COMPANY');
insert into company(id, name) values (1, 'ACME');
insert into entity(id, entity_type) values (2, 'PERSON');
insert into person(id, name, company_id) values (2, 'Bill', 1);
insert into entity(id, entity_type) values (3, 'PROJECT');
insert into project(id, name, company_id) values (3, 'Development', 1);
insert into document(id, name) values (1, 'Doc 1');
insert into document(id, name) values (2, 'Doc 2');
insert into document(id, name) values (3, 'Doc 3');
insert into document_usage(document_id, entity_id) values (1, 1);
insert into document_usage(document_id, entity_id) values (1, 3);
insert into document_usage(document_id, entity_id) values (2, 2);
insert into document_usage(document_id, entity_id) values (3, 3);

So, documents are related to companies, persons or projects through the
document_usage table. I have this query to list all documents for a specific
company and related employees and projects (belonging to that company):

select doc.id, doc.name as document_name, comp.name as company_name,
       null as person_name, null as project_name
from document doc
join document_usage du on doc.id = du.document_id
join company comp on du.entity_id = comp.id
where comp.id = 1
union
select doc.id, doc.name as document_name, comp.name as company_name,
       pers.name as person_name, null as project_name
from document doc
join document_usage du on doc.id = du.document_id
join person pers on pers.id = du.entity_id
join company comp on comp.id = pers.company_id
where comp.id = 1
union
select doc.id, doc.name as document_name, comp.name as company_name,
       null as person_name, proj.name as project_name
from document doc
join document_usage du on doc.id = du.document_id
join project proj on proj.id = du.entity_id
join company comp on comp.id = proj.company_id
where comp.id = 1
order by document_name;

 id | document_name | company_name | person_name | project_name
----+---------------+--------------+-------------+--------------
  1 | Doc 1         | ACME         |             |
  1 | Doc 1         | ACME         |             | Development
  2 | Doc 2         | ACME         | Bill        |
  3 | Doc 3         | ACME         |             | Development
(4 rows)

I'm looking for a more efficient query where I don't have to repeat JOINing
with document, document_usage and company all the time, and somehow avoid
the UNIONs.

Anyone have a better solution respecting the schema?

Thanks.

--
 Andreas Joseph Krogh andr...@officenet.no      mob: +47 909 56 963
 Senior Software Developer / CTO - OfficeNet AS - http://www.officenet.no
 Public key: http://home.officenet.no/~andreak/public_key.asc
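One possible single-pass rewrite of the UNION query (an untested sketch): resolve each document_usage row against the person and project tables with LEFT JOINs, join the company once, and filter on whichever relation matches:

```sql
SELECT doc.id, doc.name AS document_name,
       comp.name AS company_name,
       pers.name AS person_name,
       proj.name AS project_name
FROM document doc
JOIN document_usage du ON du.document_id = doc.id
JOIN company comp      ON comp.id = 1   -- the company we are asking about
LEFT JOIN person  pers ON pers.id = du.entity_id AND pers.company_id = comp.id
LEFT JOIN project proj ON proj.id = du.entity_id AND proj.company_id = comp.id
WHERE du.entity_id = comp.id            -- document attached to the company itself
   OR pers.id IS NOT NULL               -- ... or to one of its employees
   OR proj.id IS NOT NULL               -- ... or to one of its projects
ORDER BY document_name;
```

Against the sample data above this should produce the same four rows as the UNION version, reading document_usage only once.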

[GENERAL] Trouble installing Slony 2.0

2014-04-14 Thread sparikh
I have been struggling to install Slony 2.0 for the past few days.

Linux server detail : Linux vmbndbdev01 2.6.32-279.el6.x86_64 #1 SMP Fri Jun
22 12:19:21 UTC 2012 x86_64 x86_64 x86_64 GNU/Linux

I have 2 versions of postgresql installed 8.4.18 and 9.1.12, but only 9.x is
in use.

I get the following errors when I issue the command gmake all. I would really
appreciate it if somebody could shed some light or give me some pointers.

Thanks in Advance.

[root@vmbndbdev01 slony1-2.0.0-rc1]# gmake all

gmake[1]: Entering directory `/root/slony1-2.0.0-rc1/src'
gmake[2]: Entering directory `/root/slony1-2.0.0-rc1/src/parsestatements'
./test-scanner  /dev/null  emptytestresult.log
cmp ./emptytestresult.log emptytestresult.expected
./test-scanner  ./test_sql.sql  test_sql.log
cmp ./test_sql.log ./test_sql.expected
./test-scanner  ./cstylecomments.sql  cstylecomments.log
cmp ./cstylecomments.log ./cstylecomments.expected
gmake[2]: Leaving directory `/root/slony1-2.0.0-rc1/src/parsestatements'
gmake[2]: Entering directory `/root/slony1-2.0.0-rc1/src/slon'
gmake[2]: Nothing to be done for `all'.
gmake[2]: Leaving directory `/root/slony1-2.0.0-rc1/src/slon'
gmake[2]: Entering directory `/root/slony1-2.0.0-rc1/src/slonik'
gmake[2]: Nothing to be done for `all'.
gmake[2]: Leaving directory `/root/slony1-2.0.0-rc1/src/slonik'
gmake[2]: Entering directory `/root/slony1-2.0.0-rc1/src/backend'
gcc -g -O2 -Wall -Wmissing-prototypes -Wmissing-declarations -I../.. -fpic
-I/usr/pgsql-9.1/include/ -I/usr/pgsql-9.1/include/server/  -c -o
slony1_funcs.o slony1_funcs.c
slony1_funcs.c: In function '_Slony_I_createEvent':
slony1_funcs.c:137: error: 'SerializableSnapshot' undeclared (first use in
this function)
slony1_funcs.c:137: error: (Each undeclared identifier is reported only once
slony1_funcs.c:137: error: for each function it appears in.)
slony1_funcs.c: In function 'slon_quote_literal':
slony1_funcs.c:1013: warning: pointer targets in passing argument 1 of
'pg_mblen' differ in signedness
/usr/pgsql-9.1/include/server/mb/pg_wchar.h:399: note: expected 'const char
*' but argument is of type 'unsigned char *'
slony1_funcs.c: In function 'slon_quote_identifier':
slony1_funcs.c:1094: error: too few arguments to function
'ScanKeywordLookup'
slony1_funcs.c: In function 'getClusterStatus':
slony1_funcs.c:1247: error: too many arguments to function 'typenameTypeId'
slony1_funcs.c:1268: error: 'TEXTOID' undeclared (first use in this
function)
slony1_funcs.c:1321: error: 'INT4OID' undeclared (first use in this
function)
gmake[2]: *** [slony1_funcs.o] Error 1
gmake[2]: Leaving directory `/root/slony1-2.0.0-rc1/src/backend'
gmake[1]: *** [all] Error 2
gmake[1]: Leaving directory `/root/slony1-2.0.0-rc1/src'
gmake: *** [all] Error 2







--
View this message in context: 
http://postgresql.1045698.n5.nabble.com/Trouble-installing-Slony-2-0-tp5799687.html
Sent from the PostgreSQL - general mailing list archive at Nabble.com.


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] lpgport issue while installing pg_bulkload utility on fedora 14 OS

2014-04-14 Thread Adrian Klaver

On 04/14/2014 07:56 AM, chiru r wrote:

Okay, installed the postgresql-devel.x86_64 package on Fedora 14 OS.

A Google search suggests that the Fedora postgresql-devel package does not
ship libpgport:
http://www.postgresql.org/message-id/4ee2338f.1040...@dunslane.net


The bigger issue is that the RedHat family discourages static libraries 
in packages.




Is there any workaround?


Depends on your situation.

Is the server you are trying to build pg_bulkload on a production server 
that cannot be changed?

In other words, is it possible to build the server from source?

Failing that, there is the partway-there method. I am not sure how well 
this would work; others may have comments on this.


1) Go here:
http://www.postgresql.org/ftp/source/

Find 8.4.9 (FYI, 8.4 is now up to release 8.4.21)

2) Do the make without the install in the source.

3) Find libpgport.a in ~/src/port/

4) Copy it to the Postgres library directory. Not sure where that is in 
Fedora. Though a search for libpq.so should find it.


5) Cross fingers, retry the build of pg_bulkload.
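Steps 2-4 above might look like this in practice (a sketch only; the 8.4.21 version and paths are assumptions, and pg_config must be on the PATH to locate the library directory):

```shell
# Sketch of steps 2-4: build only libpgport.a from a matching source tree.
SRC=postgresql-8.4.21                # version per the note above (assumption)
if [ ! -f "$SRC.tar.bz2" ]; then
    echo "fetch $SRC.tar.bz2 from the source page first"
    exit 0
fi
tar xjf "$SRC.tar.bz2"
cd "$SRC"
./configure                          # no 'make install' needed
make -C src/port                     # produces src/port/libpgport.a
cp src/port/libpgport.a "$(pg_config --libdir)"   # the Postgres library dir
```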






--
Adrian Klaver
adrian.kla...@aklaver.com




Re: [GENERAL] Approach to Data Summary and Analysis

2014-04-14 Thread Vincent Veyron


On Mon, 14 Apr 2014 09:27:29 -0700
Robert DiFalco robert.difa...@gmail.com wrote:

 I have several related tables that represent a call state. 
 
 And so on for calls_connected, calls_completed, call_errors, etc.
 
 So for my question -- is the choice between these a personal preference
 sort of thing or is there a right or wrong approach? Am I missing another
 approach that would be better?  

Hi Robert,

I guess a call state is subject to change, in which case you would have to 
shuffle records between tables when that happens?

ISTM you should consider using only a 'calls' table, and add an 'id_call_state' 
field to it that references the list of possible states. This would make your 
queries simpler.

create table call_state(
id_call_state text PRIMARY KEY,
libelle text);

INSERT INTO call_state (id_call_state, libelle) VALUES ('calls_connected', 
'Connected'), ('calls_completed', 'Completed'), ('call_errors', 'Error');

 CREATE TABLE calls (
   id  BIGINT NOT NULL, -- sequence generator

id_call_state INTEGER NOT NULL REFERENCES call_state,

   user_id BIGINT NOT NULL,
   called  TIMESTAMPTZ NOT NULL DEFAULT CURRENT_TIMESTAMP,
 
   PRIMARY KEY (id),
   FOREIGN KEY (user_id) REFERENCES my_users(id) ON DELETE CASCADE
 );


-- 

Salutations, Vincent Veyron

http://marica.fr
Gestion des contentieux juridiques, des contrats et des sinistres d'assurance




Re: [GENERAL] CentOS 6 and Postgresql 9.3.4 from PGDG

2014-04-14 Thread Devrim GÜNDÜZ

Hi,

On Mon, 2014-04-14 at 09:13 -0400, Steve Clark wrote:
 How did you deal with binaries and libraries, as well as third party
 apps like perl modules or php/apache modules?

I added Provides: to each package, along with an ldconfig file, so that
the dependencies are satisfied.

Regards,

-- 
Devrim GÜNDÜZ
Principal Systems Engineer @ EnterpriseDB: http://www.enterprisedb.com
PostgreSQL Danışmanı/Consultant, Red Hat Certified Engineer
Twitter: @DevrimGunduz , @DevrimGunduzTR






Re: [GENERAL] Approach to Data Summary and Analysis

2014-04-14 Thread Robert DiFalco
But then I lose a bunch of data like the TIMESTAMPTZ of the call, answer,
connection, etc. Btw, currently these tables never need to be UPDATEd. They
are immutable in the current design. And in the end I'm not sure how the
proposal of one table and a state that is updatable changes the basic
thrust of the question. For example, getting last call, last answered,
total called, total answered. If the state of a call transitions from
called to answered then making it a field loses all the data with the
previous state, make sense?
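One way to reconcile the two designs (a sketch only; the call_events table and column names are made up, not from this thread) is to keep a single table but make each state transition its own immutable row, so no per-state timestamp is lost. Illustrated with SQLite so it is self-contained; the queries carry over to Postgres:

```python
import sqlite3

# Sketch: a single immutable table with one row per call-state transition,
# so the timestamp of every called/answered/connected event survives.
# Table and column names are illustrative assumptions, not from the thread.
db = sqlite3.connect(":memory:")
db.executescript("""
CREATE TABLE call_events (
    id      INTEGER PRIMARY KEY,
    call_id INTEGER NOT NULL,
    user_id INTEGER NOT NULL,
    state   TEXT NOT NULL,       -- 'called' | 'answered' | 'connected' ...
    at      TEXT NOT NULL        -- TIMESTAMPTZ in Postgres
);
INSERT INTO call_events (call_id, user_id, state, at) VALUES
    (1, 7, 'called',   '2014-04-14 10:00:00'),
    (1, 7, 'answered', '2014-04-14 10:00:05'),
    (2, 7, 'called',   '2014-04-14 11:00:00');
""")

# "Last answered" and "total called" stay simple aggregates:
last_answered = db.execute(
    "SELECT MAX(at) FROM call_events WHERE user_id = 7 AND state = 'answered'"
).fetchone()[0]
total_called = db.execute(
    "SELECT COUNT(*) FROM call_events WHERE user_id = 7 AND state = 'called'"
).fetchone()[0]
print(last_answered, total_called)  # prints: 2014-04-14 10:00:05 2
```

Rows are never UPDATEd here either; a transition adds a row instead of overwriting a state column.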


On Mon, Apr 14, 2014 at 2:43 PM, Vincent Veyron vv.li...@wanadoo.fr wrote:



 On Mon, 14 Apr 2014 09:27:29 -0700
 Robert DiFalco robert.difa...@gmail.com wrote:

  I have several related tables that represent a call state.
 
  And so on for calls_connected, calls_completed, call_errors, etc.
 
  So for my question -- is the choice between these a personal preference
  sort of thing or is there a right or wrong approach? Am I missing another
  approach that would be better?

 Hi Robert,

 I guess a call state is subject to change, in which case you would have to
 shuffle records between tables when that happens?

 ISTM you should consider using only a 'calls' table, and add an
 'id_call_state' field to it that references the list of possible states.
 This would make your queries simpler.

 create table call_state(
 id_call_state text PRIMARY KEY,
 libelle text);

 INSERT INTO call_state (id_call_state, libelle) VALUES ('calls_connected',
 'Connected'), ('calls_completed', 'Completed'), ('call_errors', 'Error');

  CREATE TABLE calls (
id  BIGINT NOT NULL, // sequence generator

 id_call_state INTEGER NOT NULL REFERENCES call_state,

user_id BIGINT NOT NULL,
called  TIMESTAMPTZ NOT NULL DEFAULT CURRENT_TIMESTAMP,
 
PRIMARY KEY (id),
FOREIGN KEY (user_id) REFERENCES my_users(id) ON DELETE CASCADE
  );


 --

 Salutations, Vincent Veyron

 http://marica.fr
 Gestion des contentieux juridiques, des contrats et des sinistres
 d'assurance





Re: [GENERAL] Approach to Data Summary and Analysis

2014-04-14 Thread Rob Sargent

On 04/14/2014 04:22 PM, Robert DiFalco wrote:
But then I lose a bunch of data like the TIMESTAMPTZ of the call, 
answer, connection, etc. Btw, currently these tables never need to be 
UPDATEd. They are immutable in the current design. And in the end I'm 
not sure how the proposal of one table and a state that is updatable 
changes the basic thrust of the question. For example, getting last 
call, last answered, total called, total answered. If the state of a 
call transitions from called to answered then making it a field loses 
all the data with the previous state, make sense?



On Mon, Apr 14, 2014 at 2:43 PM, Vincent Veyron vv.li...@wanadoo.fr wrote:




On Mon, 14 Apr 2014 09:27:29 -0700
Robert DiFalco robert.difa...@gmail.com wrote:

 I have several related tables that represent a call state.

 And so on for calls_connected, calls_completed, call_errors, etc.

 So for my question -- is the choice between these a personal
preference
 sort of thing or is there a right or wrong approach? Am I
missing another
 approach that would be better?

Hi Robert,

I guess a call state is subject to change, in which case you would
have to shuffle records between tables when that happens?

ISTM you should consider using only a 'calls' table, and add an
'id_call_state' field to it that references the list of possible
states. This would make your queries simpler.

create table call_state(
id_call_state text PRIMARY KEY,
libelle text);

INSERT INTO call_state (id_call_state, libelle) VALUES
('calls_connected', 'Connected'), ('calls_completed',
'Completed'), ('call_errors', 'Error');

 CREATE TABLE calls (
   id  BIGINT NOT NULL, // sequence generator

id_call_state INTEGER NOT NULL REFERENCES call_state,

   user_id BIGINT NOT NULL,
   called  TIMESTAMPTZ NOT NULL DEFAULT CURRENT_TIMESTAMP,

   PRIMARY KEY (id),
   FOREIGN KEY (user_id) REFERENCES my_users(id) ON DELETE CASCADE
 );


--

Salutations, Vincent Veyron

http://marica.fr
Gestion des contentieux juridiques, des contrats et des sinistres
d'assurance




I wonder if you really need to place the parts of the call into the 
various tables.  ringtime, answertime, closetime and all the values 
associated with those parts of a call are all fundamental to a single 
call, though perhaps collected incrementally.  Easy queries, for sure. 
(Sorry, I haven't gone back to see your orig. schema.  If it's clear 
there why these are in separate tables, say no more.)





Re: [GENERAL] streaming replication and recovery

2014-04-14 Thread Anupama Ramaswamy
Thanks so much. That clarifies.

-Anupama
On Monday, April 14, 2014 12:09 PM, Michael Paquier michael.paqu...@gmail.com 
wrote:
 
On Sat, Apr 12, 2014 at 3:12 PM, Anupama Ramaswamy anumr_0...@yahoo.com wrote:
 Lets suppose at this point there is 0 delivery lag but  bytes of replay
 lag.

All your answers are here:
http://www.postgresql.org/docs/devel/static/warm-standby.html
Standby mode is exited and the server switches to normal operation
when pg_ctl promote is run or a trigger file is found (trigger_file).
Before failover, any WAL immediately available in the archive or in
pg_xlog will be restored, but no attempt is made to connect to the
master.

 a) Will the replay complete before the standby stops replicating (because it
 sees the trigger file) ?
Any WAL available in the archives or pg_xlog will be replayed before
the failover.

 b) If I want to run this as new master and attach other secondaries to point
 to this master, can I do it soon after touch /tmp/pgsql.failover OR should
 I wait till the secondary has finished replaying all the delivered stream ?
You need to wait until all the WAL has been replayed, which is the
point where failover occurs.


 c) How do I know if the replay is over and it is ready for a standalone
 operation ?
SELECT pg_is_in_recovery(); returns true if server is still
performing recovery operations.
-- 
Michael
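For scripting the promotion check, something along these lines could poll until replay is done (a sketch; psql being on the PATH and the default connection settings are assumptions):

```python
import subprocess

# Sketch: decide whether a standby has finished replay by asking
# pg_is_in_recovery().  psql on PATH and default connection settings
# are assumptions, not details from this thread.
def in_recovery_from_psql_output(out: str) -> bool:
    """psql -At prints 't' or 'f' for a boolean result."""
    return out.strip() == "t"

def standby_still_recovering() -> bool:
    # Not called below -- requires a live server to talk to.
    out = subprocess.check_output(
        ["psql", "-At", "-c", "SELECT pg_is_in_recovery();"], text=True)
    return in_recovery_from_psql_output(out)

print(in_recovery_from_psql_output("t\n"))  # prints: True
print(in_recovery_from_psql_output("f\n"))  # prints: False
```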

Re: [GENERAL] streaming replication + wal shipping

2014-04-14 Thread Anupama Ramaswamy
Thanks for your response.

So are you saying that if I set up the following in my recovery.conf
restore_command =.

it will be used only when streaming replication falls behind by more than 
( wal_keep_segments ) or the replication stream is not available (the master 
goes down)?

Thanks for your help.


On Monday, April 14, 2014 1:35 PM, Albe Laurenz laurenz.a...@wien.gv.at wrote:
 
Anupama Ramaswamy wrote:

 I would like to setup a 2 servers with streaming replication, one master and 
 another hot standby.
 I want to use the standby for read-only queries. So I want the replication 
 lag to be as small as
 possible.
 So I choose streaming replication over WAL shipping.
 
 When the master fails, I want the standby to take over as master. So I would 
 like minimal data loss,
 if there is a streaming replication delay.
 
 Is it possible to set it up in such a way that under normal conditions the 
 standby replicates using streaming replication, and on failover it uses the 
 WAL archive for syncing up with the transactions.
 Of course the WAL will be available on a shared storage volume. If this is 
 possible, what exactly do I
 need in my configuration files - postgresql.conf, recovery.conf ?

Most of this will happen automatically - WAL archives are used if recovery
falls behind.

Where you will need additional software is automatic failover; you need some
OS cluster software that can detect failure and automatically promote the 
standby.

Yours,
Laurenz Albe
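For reference, a minimal recovery.conf along those lines might look like this (a sketch only; the host, archive path, and trigger file are placeholders, not values from this thread):

```
# recovery.conf on the standby (9.x syntax) -- all values are placeholders
standby_mode = 'on'
primary_conninfo = 'host=master.example.com port=5432 user=replicator'
# Consulted automatically when streaming falls behind what the master keeps:
restore_command = 'cp /shared/wal_archive/%f "%p"'
trigger_file = '/tmp/pgsql.failover'
```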


[GENERAL] Unrecognized service

2014-04-14 Thread Augori
Hi Folks,

I set up postgresql on a CentOS 5 Linux box months ago. I had a process that
ran every night and connected to the database.  Everything was working fine
until a few days ago, when my process tried to connect and failed.  Now I'm
getting:

# service postgresql status
postgresql: unrecognized service

Does this mean it's gone?  Does anyone have any suggestions?

Thank you.


Re: [GENERAL] Unrecognized service

2014-04-14 Thread Adrian Klaver

On 04/14/2014 05:33 PM, Augori wrote:


Hi Folks,

I set up postgresql on a CentOS 5 Linux months ago. I had a process that
ran every night and connected to the database.  Everything was working
fine until a few days ago when my process tried to connect and failed.
Now I'm getting:

# service postgresql status
postgresql: unrecognized service

Does this mean it's gone?  Does anyone have any suggestions?


Does ps ax | grep post show postgres running?

What is the process?

Have there been any updates to software between the time it ran and the 
time it started failing?




Thank you.



--
Adrian Klaver
adrian.kla...@aklaver.com




Re: [GENERAL] Unrecognized service

2014-04-14 Thread Augori
Here's what the ps command gives:

root@server# ps ax | grep post
 9165 pts/1S+ 0:00 grep post

Does this mean it's not running?

It's certainly possible that software updates have occurred.  There are a
lot of people working on this machine, so I'm not aware of which changes
have been made lately.

Do you think I need to reinstall the works and set everything up again?


On Mon, Apr 14, 2014 at 8:48 PM, Adrian Klaver adrian.kla...@aklaver.com wrote:

 On 04/14/2014 05:33 PM, Augori wrote:


 Hi Folks,

 I set up postgresql on a CentOS 5 Linux months ago. I had a process that
 ran every night and connected to the database.  Everything was working
 fine until a few days ago when my process tried to connect and failed.
 Now I'm getting:

 # service postgresql status
 postgresql: unrecognized service

 Does this mean it's gone?  Does anyone have any suggestions?


 Does ps ax | grep post show postgres running?

 What is the process?

 Have there been any updates to software between the time it ran and the
 time it started failing?


 Thank you.



 --
 Adrian Klaver
 adrian.kla...@aklaver.com



Re: [GENERAL] Unrecognized service

2014-04-14 Thread Tom Lane
Augori aug...@gmail.com writes:
 Here's what the ps command gives:
 root@server# ps ax | grep post
  9165 pts/1S+ 0:00 grep post

 Does this mean it's not running?

Sure looks that way.

 It's certainly possible that software updates have occurred.  There are a
 lot of people working on this machine, so I'm not aware of which changes
 have been made lately.

If service is saying it doesn't know a service it used to know,
then either somebody removed the relevant rc.d file (or more likely
the whole postgresql package), or your filesystem is corrupted.
The former seems more likely.  /var/log/yum.log might be helpful
in affixing blame.

 Do you think I need to reinstall the works and set everything up again?

Well, you definitely need to reinstall the postgresql software, but with
any luck the data directory is still there and you can just start up the
server after reinstalling the missing package(s).  Red Hat's packages
were certainly never configured to remove the data directory on package
deletion, and I don't think Devrim's are either.

If the data directory is gone too, you need to have words with whoever
did that ...

regards, tom lane




Re: [GENERAL] Unrecognized service

2014-04-14 Thread John R Pierce

On 4/14/2014 5:33 PM, Augori wrote:

# service postgresql status
postgresql: unrecognized service

Does this mean it's gone?  Does anyone have any suggestions?


the 'service' command on rhel/centos/etc runs /etc/rc.d/init.d/$1 $2

so, run ls -l /etc/rc.d/init.d/postgres* and see what the service name 
is.   My postgreses are generally postgresql-x.y, where x.y is the major 
version (9.1, 9.3, etc.)
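That check can be wrapped in a small script (a sketch; the init.d path is the RHEL/CentOS convention mentioned above):

```shell
# Find the installed init script and derive the service name from it.
svc=$(ls /etc/rc.d/init.d/postgres* 2>/dev/null | head -n 1)
if [ -n "$svc" ]; then
    echo "service name: $(basename "$svc")"
else
    echo "no postgresql init script found"
fi
```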




--
john r pierce  37N 122W
somewhere on the middle of the left coast





Re: [GENERAL] Approach to Data Summary and Analysis

2014-04-14 Thread Robert DiFalco
Things like this: AVG ring time before answer, average connected call
duration, % of calls never answered, % of answered calls that are
connected, number of times John has answered a call versus how many times
we've called him. That sort of stuff.
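Those aggregates fall out naturally once every state transition is its own row. Here is a sketch (schema and sample numbers are made-up assumptions, run against SQLite so it is self-contained; the SQL carries over to Postgres):

```python
import sqlite3

# Sketch of the metrics above over a per-transition event table.
# Schema and sample numbers are illustrative assumptions, not from the thread.
db = sqlite3.connect(":memory:")
db.executescript("""
CREATE TABLE call_events (
    call_id INTEGER, state TEXT, at REAL   -- epoch seconds, for arithmetic
);
INSERT INTO call_events VALUES
    (1, 'called', 0), (1, 'answered', 4), (1, 'connected', 5),
    (2, 'called', 0), (2, 'answered', 8),
    (3, 'called', 0);                      -- call 3 was never answered
""")

# Average ring time before answer: answered.at - called.at, per call.
avg_ring = db.execute("""
    SELECT AVG(a.at - c.at)
    FROM call_events c
    JOIN call_events a ON a.call_id = c.call_id AND a.state = 'answered'
    WHERE c.state = 'called'
""").fetchone()[0]

# Percentage of calls never answered.
pct_unanswered = db.execute("""
    SELECT 100.0 * (SELECT COUNT(DISTINCT call_id) FROM call_events
                    WHERE call_id NOT IN (SELECT call_id FROM call_events
                                          WHERE state = 'answered'))
                 / COUNT(DISTINCT call_id)
    FROM call_events
""").fetchone()[0]

print(avg_ring, round(pct_unanswered, 1))  # prints: 6.0 33.3
```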


On Mon, Apr 14, 2014 at 3:34 PM, Rob Sargent robjsarg...@gmail.com wrote:

  On 04/14/2014 04:22 PM, Robert DiFalco wrote:

 But then I lose a bunch of data like the TIMESTAMPTZ of the call, answer,
 connection, etc. Btw, currently these tables never need to be UPDATEd. They
 are immutable in the current design. And in the end I'm not sure how the
 proposal of one table and a state that is updatable changes the basic
 thrust of the question. For example, getting last call, last answered,
 total called, total answered. If the state of a call transitions from
 called to answered then making it a field loses all the data with the
 previous state, make sense?


On Mon, Apr 14, 2014 at 2:43 PM, Vincent Veyron vv.li...@wanadoo.fr wrote:



 On Mon, 14 Apr 2014 09:27:29 -0700
 Robert DiFalco robert.difa...@gmail.com wrote:

  I have several related tables that represent a call state.
 
  And so on for calls_connected, calls_completed, call_errors, etc.
 
  So for my question -- is the choice between these a personal preference
  sort of thing or is there a right or wrong approach? Am I missing
 another
  approach that would be better?

 Hi Robert,

 I guess a call state is subject to change, in which case you would have
 to shuffle records between tables when that happens?

 ISTM you should consider using only a 'calls' table, and add an
 'id_call_state' field to it that references the list of possible states.
 This would make your queries simpler.

 create table call_state(
 id_call_state text PRIMARY KEY,
 libelle text);

 INSERT INTO call_state (id_call_state, libelle) VALUES
 ('calls_connected', 'Connected'), ('calls_completed', 'Completed'),
 ('call_errors', 'Error');

  CREATE TABLE calls (
id  BIGINT NOT NULL, // sequence generator

 id_call_state INTEGER NOT NULL REFERENCES call_state,

user_id BIGINT NOT NULL,
called  TIMESTAMPTZ NOT NULL DEFAULT CURRENT_TIMESTAMP,
 
PRIMARY KEY (id),
FOREIGN KEY (user_id) REFERENCES my_users(id) ON DELETE CASCADE
  );


 --

  Salutations, Vincent Veyron

 http://marica.fr
 Gestion des contentieux juridiques, des contrats et des sinistres
 d'assurance




  I wonder if you really need to place the parts of the call into the
 various tables.  ringtime, answertime, closetime and all the values
 associated with those parts of a call are all fundamental to a single call,
 though perhaps collected incrementally.  Easy queries, for sure. (Sorry, I
 haven't gone back to see your orig. schema.  If it's clear there why these
 are in separate files, say no more)





Re: [GENERAL] Unrecognized service

2014-04-14 Thread John R Pierce

On 4/14/2014 7:17 PM, Augori wrote:

Here's what the ps command gives:

root@server# ps ax | grep post
 9165 pts/1S+ 0:00 grep post

Does this mean it's not running?

It's certainly possible that software updates have occurred.  There 
are a lot of people working on this machine, so I'm not aware of which 
changes have been made lately.


Do you think I need to reinstall the works and set everything up again?


rpm -qa |grep postgres



--
john r pierce  37N 122W
somewhere on the middle of the left coast





Re: [GENERAL] Unrecognized service

2014-04-14 Thread John R Pierce

On 4/14/2014 7:34 PM, Tom Lane wrote:

Augoriaug...@gmail.com  writes:

Here's what the ps command gives:
root@server# ps ax | grep post
  9165 pts/1S+ 0:00 grep post
Does this mean it's not running?

Sure looks that way.


It's certainly possible that software updates have occurred.  There are a
lot of people working on this machine, so I'm not aware of which changes
have been made lately.

If service is saying it doesn't know a service it used to know,
then either somebody removed the relevant rc.d file (or more likely
the whole postgresql package), or your filesystem is corrupted.
The former seems more likely.  /var/log/yum.log might be helpful
in affixing blame.



or maybe you're looking at the wrong server or VM ?



--
john r pierce  37N 122W
somewhere on the middle of the left coast





Re: [GENERAL] Unrecognized service

2014-04-14 Thread Adrian Klaver

On 04/14/2014 07:17 PM, Augori wrote:

Here's what the ps command gives:

root@server# ps ax | grep post
  9165 pts/1S+ 0:00 grep post

Does this mean it's not running?


Yes, it is not running.



It's certainly possible that software updates have occurred.  There are
a lot of people working on this machine, so I'm not aware of which
changes have been made lately.


The part that has me confused is where you say this started a few days 
ago with the failure of the nightly process. So is there anything else 
using this database? If so, were there any other failures? If not, then I 
am totally at a loss as to what is going on.




Do you think I need to reinstall the works and set everything up again?


Well, per the other posts, at least the start-up scripts.






--
Adrian Klaver
adrian.kla...@aklaver.com




Re: [GENERAL] Unrecognized service

2014-04-14 Thread Adrian Klaver

On 04/14/2014 07:52 PM, John R Pierce wrote:

On 4/14/2014 7:34 PM, Tom Lane wrote:

Augoriaug...@gmail.com  writes:

Here's what the ps command gives:
root@server# ps ax | grep post
  9165 pts/1S+ 0:00 grep post
Does this mean it's not running?

Sure looks that way.


It's certainly possible that software updates have occurred.  There
are a
lot of people working on this machine, so I'm not aware of which
changes
have been made lately.

If service is saying it doesn't know a service it used to know,
then either somebody removed the relevant rc.d file (or more likely
the whole postgresql package), or your filesystem is corrupted.
The former seems more likely.  /var/log/yum.log might be helpful
in affixing blame.



or maybe you're looking at the wrong server or VM ?


Yea, that thought just occurred to me also.

When you did the ps ax | grep, was that on the machine with the Postgres 
server?










--
Adrian Klaver
adrian.kla...@aklaver.com




Re: [GENERAL] Unrecognized service

2014-04-14 Thread Augori
Hi all,

woohoo!

service postgresql-9.2 status

(pid  9924) is running...

  It seems that I was looking for the service by the wrong name, as John
guessed correctly.  Also, Tom, it's good to know that the data won't
necessarily go away if I need to reinstall at some point.

thank you so much for the messages from all three of you.  Your rapid
responses were very encouraging.


On Mon, Apr 14, 2014 at 10:34 PM, John R Pierce pie...@hogranch.com wrote:

 On 4/14/2014 5:33 PM, Augori wrote:

 # service postgresql status
 postgresql: unrecognized service

 Does this mean it's gone?  Does anyone have any suggestions?


 the 'service' command on rhel/centos/etc runs /etc/rc.d/init.d/$1 $2

 so, ls -l /etc/rc.d/init.d/postgres*and see what the service name is.
   my postgreses are generally postgresql-x.y where x.y is the major version
 (9.1, 9.3, etc)



 --
 john r pierce  37N 122W
 somewhere on the middle of the left coast






Re: [GENERAL] Unrecognized service

2014-04-14 Thread Adrian Klaver

On 04/14/2014 08:25 PM, Augori wrote:

Hi all,

woohoo!

service postgresql-9.2 status

(pid  9924) is running...


   It seems that I was looking for the service by the wrong name, as
John guessed correcty.Also, Tom, it's good to know that the data
won't necessarily go away if I need to reinstall at some point.


Well, that still leaves two questions unanswered.

1) Why did the postgres process not show up in the ps ax output?

2) Why is the nightly process doing a status check on postgresql and not 
postgresql-9.2 ?


From the original post:

# service postgresql status
postgresql: unrecognized service





thank you so much for the messages from all three of you.  Your rapid
responses were very encouraging.




--
Adrian Klaver
adrian.kla...@aklaver.com




[GENERAL] Non-deterministic 100% CPU hang on postgres 9.3

2014-04-14 Thread Fenn Bailey
Hi all,

I'm experiencing an issue where certain queries appear to
non-deterministically hang, with a CPU pinned at 100%.

I say hang, but really I've given up after ~12 hours of execution. The
exact same query can then be terminated and re-run in 90 seconds, with none
of the underlying data changing.

I can completely reset the DB (drop tables/recreate) and re-run and
sometimes certain queries will appear to hang forever, sometimes they will
execute in minutes.

I've tried official debian (amd64) packages of postgres 9.3.2 - 9.3.4 (same
issues).

The workload is ostensibly analytics. As a disclaimer, I absolutely
understand that these queries may not be optimized (I didn't build
them and have not reviewed them fully). The part I'm trying to work out is why
they sometimes finish easily in minutes and other times hang.

I've straced the process when it's pinned and there appear to be no system
calls (i.e. no IO), just a pure CPU loop.

If anyone could explain this behaviour, or even how I might go about
diagnosing, that would be wonderful.

Query/analyze details follow.

Thanks!

Query:

UPDATE ad_events e
  set ad_user_id = x.ad_user_id, ad_session_id = x.ad_session_id
FROM
  (SELECT t.ad_event_id, COALESCE (s.ad_user_id, u.merged_id,u.ad_user_id)
ad_user_id,
 case when (name = 'Clickthrough from Email' or properties->>('mp_lib')
= 'web' ) then s.ad_session_id
  else null
   end as ad_session_id,
 t.timestamp, name , properties
  from ad_events_mv t
  INNER JOIN ad_users u ON (t.user_id = u.orig_distinct_id)
  LEFT OUTER JOIN  ad_sessions s ON (t.timestamp between s.session_start
and s.session_end  and (s.ad_user_id = u.ad_user_id or s.ad_user_id =
u.merged_id))
  where t.processed = false
  order by s.ad_user_id, s.ad_session_id, timestamp) x
WHERE
  x.ad_event_id = e.ad_event_id;


EXPLAIN ANALYZE follows:

                                QUERY PLAN
-
 Update on ad_events e  (cost=39730372.92..39765372.92 rows=100
width=237) (actual time=130140.450..130140.450 rows=0 loops=1)
   ->  Hash Join  (cost=39730372.92..39765372.92 rows=100 width=237)
(actual time=54243.877..66848.448 rows=200 loops=1)
 Hash Cond: (x.ad_event_id = e.ad_event_id)
 ->  Subquery Scan on x  (cost=39654433.45..39666933.45
rows=100 width=144) (actual time=52682.740..57668.998 rows=200
loops=1)
   ->  Sort  (cost=39654433.45..39656933.45 rows=100
width=108) (actual time=52682.693..55003.467 rows=200 loops=1)
 Sort Key: s.ad_user_id, s.ad_session_id, t.timestamp
 Sort Method: external merge  Disk: 1078104kB
 ->  Nested Loop Left Join  (cost=12054.20..39554775.61
rows=100 width=108) (actual time=204.872..43742.396 rows=200
loops=1)
   Join Filter: ((t.timestamp >= s.session_start)
AND (t.timestamp <= s.session_end))
   Rows Removed by Join Filter: 18792613
   ->  Hash Join  (cost=12046.09..203878.09
rows=100 width=92) (actual time=204.402..4293.175 rows=200 loops=1)
 Hash Cond: (t.user_id = u.orig_distinct_id)
 ->  Seq Scan on ad_events_mv t
 (cost=0.00..173082.00 rows=100 width=112) (actual time=0.022..1813.272
rows=200 loops=1)
   Filter: (NOT processed)
 ->  Hash  (cost=7932.15..7932.15
rows=329115 width=67) (actual time=204.166..204.166 rows=329115 loops=1)
   Buckets: 65536  Batches: 1  Memory
Usage: 30590kB
   ->  Seq Scan on ad_users u
 (cost=0.00..7932.15 rows=329115 width=67) (actual time=0.009..85.231
rows=329115 loops=1)
   ->  Bitmap Heap Scan on ad_sessions s
 (cost=8.11..39.22 rows=8 width=32) (actual time=0.007..0.015 rows=10
loops=200)
 Recheck Cond: ((ad_user_id = u.ad_user_id)
OR (ad_user_id = u.merged_id))
 ->  BitmapOr  (cost=8.11..8.11 rows=8
width=0) (actual time=0.005..0.005 rows=0 loops=200)
   ->  Bitmap Index Scan on
ad_sessions_ad_user_id_idx  (cost=0.00..4.05 rows=4 width=0) (actual
time=0.003..0.003 rows=8 loops=200)
 Index Cond: (ad_user_id =
u.ad_user_id)
   ->  Bitmap Index Scan on
ad_sessions_ad_user_id_idx  (cost=0.00..4.05 rows=4 width=0) (actual
time=0.001..0.001 rows=3 loops=200)

Re: [GENERAL] Non-deterministic 100% CPU hang on postgres 9.3

2014-04-14 Thread Tom Lane
Fenn Bailey fenn.bai...@gmail.com writes:
 I'm experiencing an issue where certain queries appear to
 non-deterministically hang, with a CPU pinned at 100%.

 I say hang, where really I've given up after ~12 hours execution. The
 exact same query can then be terminated and run in 90 seconds, with none
 of the underlying data changing.

But does the plan change?

 If anyone could explain this behaviour, or even how I might go about
 diagnosing, that would be wonderful.

perf or oprofile or local equivalent would help identify where the
looping is happening.

regards, tom lane
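A concrete way to apply that advice (a sketch; perf availability, root access, and the backend PID are assumptions -- take the PID from pg_stat_activity on the live system):

```shell
# Sketch: sample the spinning backend with perf to see where it loops.
PID=${PID:-12345}                     # hypothetical backend pid
if command -v perf >/dev/null 2>&1; then
    echo "would run: perf record -g -p $PID -- sleep 30 && perf report --stdio"
else
    echo "perf not installed; oprofile or repeated gdb backtraces also work"
fi
```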

