Re: [GENERAL] PSQL Server couldn't start

2014-07-23 Thread Kalai R
*Well the above would indicate it is not being shut down properly. How are
you shutting it down?*

The PSQL server is started and stopped by the PostgreSQL Windows service. So
when I shut down the machine, Postgres is shut down by the Windows service.

*What and how are you installing?*

Using the Windows binary zip file: I extract the zip file, create a cluster
using initdb, then register the Windows service using pg_ctl.
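For example, the commands are roughly like this (the paths and service name
here are only illustrative):

    initdb -D C:\pgdata -U postgres
    pg_ctl register -N postgresql -D C:\pgdata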

*Who are you running the service as?*

I try to start the PSQL server from the command prompt using pg_ctl.exe.


On Wed, Jul 23, 2014 at 9:47 AM, Adrian Klaver adrian.kla...@aklaver.com
wrote:

 On 07/22/2014 09:01 PM, Kalai R wrote:

 Hi,

 I am facing a serious problem with PostgreSQL frequently. I am using
 PostgreSQL 9.3 on Windows with Visual Studio. We have many customers.

 We shut down the system properly. But when we boot the system, the
 PostgreSQL service doesn't start. This happens frequently after we
 install.

 Log details when I try to start the PSQL server:

 *server starting*
 *LOG:  database system was interrupted while in recovery at 2014-07-22
 18:31:34 IST*
 *HINT:  This probably means that some data is corrupted and you will
 have to use the last backup for recovery.*
 *LOG:  database system was not properly shut down; automatic recovery in
 progress*


 Well the above would indicate it is not being shutdown properly.

 How are you shutting it down?

 What and how are you installing?

  *LOG:  redo starts at 0/1772908*
 *FATAL:  could not remove symbolic link pg_tblspc/17681: Permission
 denied*


 Permissions issue.

 Who are you running the service as?

  *CONTEXT:  xlog redo create tablespace: 17681 C:/GLOIER/gPRO/Data/GT*
 *LOG:  startup process (PID 4992) exited with exit code 1*
 *LOG:  aborting startup due to startup process failure*



 Why does this happen?

 Help me to solve this problem.

 Thank You.




 --
 Adrian Klaver
 adrian.kla...@aklaver.com



Re: [GENERAL]

2014-07-23 Thread John R Pierce

On 7/22/2014 11:14 PM, Kalai R wrote:

*Well the above would indicate it is not being shut down properly.

How are you shutting it down?*

The PSQL server is started and stopped by the PostgreSQL Windows service.
So when I shut down the machine, Postgres is shut down by the Windows
service.


*What and how are you installing?*

Using the Windows binary zip file: I extract the zip file, create a 
cluster using initdb, then register the Windows service using pg_ctl.

*Who are you running the service as?*

I try to start the PSQL server from the command prompt using pg_ctl.exe.


why not use the installer, which sets it all up correctly for you?

I think the problem is, you ran initdb as your user account, but the 
system service is running as the special system account 
NETWORK_SERVICE, or similar, and there may be files in the data 
directory which the NETWORK_SERVICE account doesn't have write access 
to.   you can probably fix this with the cacls command.
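for example, something like this should give the service account full 
control of the data directory (path taken from your log; an untested 
sketch, run as administrator):

    cacls "C:\GLOIER\gPRO\Data" /E /T /G "NETWORK SERVICE":F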






--
john r pierce  37N 122W
somewhere on the middle of the left coast



Re: [GENERAL]

2014-07-23 Thread Kalai R
*why not use the installer, which sets it all up correctly for you?*

Previously, I used the installer, but I faced more data corruption problems
and also server start problems after shutting down the system, so I tried to
install it myself.


*I think the problem is, you ran initdb as your user account, but the
system service is running as the special system account NETWORK_SERVICE,
or similar, and there may be files in the data directory which the
NETWORK_SERVICE account doesn't have write access to.   you can
probably fix this with the cacls command.*

I think you are correct. But I want to know why the database gets corrupted
even though I shut down the machine properly. This happens frequently when
I use the installer too.




On Wed, Jul 23, 2014 at 11:55 AM, John R Pierce pie...@hogranch.com wrote:

  On 7/22/2014 11:14 PM, Kalai R wrote:



 *Well the above would indicate it is not being shut down properly. How are
 you shutting it down?*

  The PSQL server is started and stopped by the PostgreSQL Windows service.
 So when I shut down the machine, Postgres is shut down by the Windows
 service.

  *What and how are you installing?*

  Using the Windows binary zip file: I extract the zip file, create a
 cluster using initdb, then register the Windows service using pg_ctl.

  *Who are you running the service as?*

  I try to start the PSQL server from the command prompt using pg_ctl.exe.


 why not use the installer, which sets it all up correctly for you?

 I think the problem is, you ran initdb as your user account, but the
 system service is running as the special system account NETWORK_SERVICE,
 or similar, and there may be files in the data directory which the
 NETWORK_SERVICE account doesn't have write access to.   you can
 probably fix this with the cacls command.





 --
 john r pierce  37N 122W
 somewhere on the middle of the left coast




Re: [GENERAL]

2014-07-23 Thread John R Pierce

On 7/22/2014 11:42 PM, Kalai R wrote:

*why not use the installer, which sets it all up correctly for you?*

Previously, I used the installer, but I faced more data corruption 
problems and also server start problems after shutting down the system, 
so I tried to install it myself.


*I think the problem is, you ran initdb as your user account, but the 
system service is running as the special system account 
NETWORK_SERVICE, or similar, and there may be files in the data 
directory which the NETWORK_SERVICE account doesn't have write access 
to.   you can probably fix this with the cacls command.*

I think you are correct. But I want to know why the database gets 
corrupted even though I shut down the machine properly. This happens 
frequently when I use the installer too.




It seems like Windows isn't waiting for the service to exit cleanly 
before it pulls the plug.   Is this a Windows *server* version, or is 
it one of the newer desktop OSes like Windows 8?   Windows 8 seems to 
shut down rather fast; it's quite possible it's not bothering to wait for 
services to exit.


this error...
*FATAL:  could not remove symbolic link pg_tblspc/17681: Permission 
denied*

*CONTEXT:  xlog redo create tablespace: 17681 C:/GLOIER/gPRO/Data/GT*

suggests you're using a postgres tablespace?  I've not used tablespaces 
much or at all in Windows; I don't know how well they work.





--
john r pierce  37N 122W
somewhere on the middle of the left coast



Re: [GENERAL]

2014-07-23 Thread Kalai R
* Is this a Windows *server* version, or is it one of the newer desktop
OSes like Windows 8?   Windows 8 seems to shut down rather fast; it's quite
possible it's not bothering to wait for services to exit.*

I face this problem on Windows XP and Windows 7, but not on all Windows XP
and Windows 7 machines.
On Windows 8 and Windows Server there is no problem.

*It seems like Windows isn't waiting for the service to exit cleanly before
it pulls the plug.*

I think so, but I don't know how to solve this problem.


On Wed, Jul 23, 2014 at 12:20 PM, John R Pierce pie...@hogranch.com wrote:

  On 7/22/2014 11:42 PM, Kalai R wrote:

 *why not use the installer, which sets it all up correctly for you?*

  Previously, I used the installer, but I faced more data corruption problems
 and also server start problems after shutting down the system, so I tried to
 install it myself.


  *I think the problem is, you ran initdb as your user account, but the
 system service is running as the special system account NETWORK_SERVICE,
 or similar, and there may be files in the data directory which the
 NETWORK_SERVICE account doesn't have write access to.   you can
 probably fix this with the cacls command.*

  I think you are correct. But I want to know why the database gets corrupted
 even though I shut down the machine properly. This happens frequently when
 I use the installer too.


 It seems like Windows isn't waiting for the service to exit cleanly before
 it pulls the plug.   Is this a Windows *server* version, or is it one of
 the newer desktop OSes like Windows 8?   Windows 8 seems to shut down
 rather fast; it's quite possible it's not bothering to wait for services to
 exit.

 this error...
 *FATAL:  could not remove symbolic link pg_tblspc/17681: Permission
 denied*

 *CONTEXT:  xlog redo create tablespace: 17681 C:/GLOIER/gPRO/Data/GT *
 suggests you're using a postgres tablespace?  I've not used tablespaces
 much or at all in Windows; I don't know how well they work.




 --
 john r pierce  37N 122W
 somewhere on the middle of the left coast




[GENERAL] Index usage with slow query

2014-07-23 Thread Rebecca Clarke
Hi all,

Looking for some advice regarding a slow query I have and indexing.

I'm using postgresql 9.1 and this is my table that has around 680 rows:

CREATE TABLE mytable
(
  class character varying,
  floor character varying,
  source_id integer,
  the_geom geometry
)
WITH (
  OIDS=TRUE
);


CREATE INDEX idx_source_id
  ON mytable
  USING btree
  (source_id);

CREATE INDEX idx_the_geom_gist
  ON mytable
  USING gist
  (the_geom);


This table is constantly hit with the below query. The only difference
between queries is the values in the where clause:

SELECT the_geom, oid FROM mytable
WHERE
the_geom && ST_GeomFromText('POLYGON((529342.334095833
180696.22173,529342.334095833 181533.44595,530964.336820833
181533.44595,530964.336820833 180696.22173,529342.334095833
180696.22173))',find_srid('','mytable','the_geom'))
AND
(floor = 'gf' AND source_id = '689' AND class = 'General')


As the table has increased in size, this query has become slower, so I made
this index:


CREATE INDEX idx_floor_sourceid_class
  ON mytable
  USING btree
  (floor, source_id, class);


When I run an EXPLAIN and EXPLAIN ANALYZE the query isn't using this new
index.

Sometimes it uses just idx_the_geom_gist

other times it uses idx_the_geom_gist and idx_source_id


I don't understand why it's inconsistent in its use of indexes when the
query is always the same structure, just different where clause values, and
I don't understand why it's not using the new index either.


Would love some help with this. I'm not sure where I'm going wrong.

Thanks in advance.


Re: [GENERAL] question about memory usage

2014-07-23 Thread klo uo
Bill, thanks for your reply.

shared_buffers is set to 128MB.

Now that you mention the config file, the only thing I did change there (it
was suggested to me while I made some changes on my databases) was
max_locks_per_transaction = 5 (which has default value 1).

After resetting max_locks_per_transaction to default value and restarting
the server, memory occupied in working set reduced linearly to around 200
MB.

I guess this is it.


On Wed, Jul 23, 2014 at 5:53 AM, Bill Moran wmo...@potentialtech.com
wrote:


 I'm not an expert on the Windows version, so I could be off-base, but the
 POSIX versions of Postgres allocate shared_buffers worth of memory at
 startup and lock it for exclusive use by Postgres.  Do you have
 shared_buffers set to around 1G, perhaps?




Re: [GENERAL]

2014-07-23 Thread Adrian Klaver

On 07/22/2014 11:59 PM, Kalai R wrote:

* Is this a Windows *server* version, or is it one of the newer desktop
OSes like Windows 8?   Windows 8 seems to shut down rather fast; it's
quite possible it's not bothering to wait for services to exit.*

I face this problem on Windows XP and Windows 7, but not on all Windows XP
and Windows 7 machines.
On Windows 8 and Windows Server there is no problem.

*It seems like Windows isn't waiting for the service to exit cleanly
before it pulls the plug.*

I think so, but I don't know how to solve this problem.



First would be to show the log entries from the period immediately 
before the server shut down incorrectly.


From the previous go around on this:

http://www.postgresql.org/message-id/CAGxuanNsgNDTCv1bm+ap2tSr=vztGca_yvAFrXwW=9w0o3m...@mail.gmail.com

you indicated that your application does things when installing itself 
and the server.


It would be nice to know what that is.

In particular, an outline of the exact steps your application takes during 
the install.


--
Adrian Klaver
adrian.kla...@aklaver.com


--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] question about memory usage

2014-07-23 Thread Tomas Vondra
On 23 Červenec 2014, 15:56, klo uo wrote:
 Bill, thanks for your reply.

 shared_buffers is set to 128MB.

 Now that you mention the config file, the only thing I did change there (it
 was suggested to me while I made some changes on my databases) was
 max_locks_per_transaction = 5 (which has default value 1).

 After resetting max_locks_per_transaction to default value and
 restarting
 the server, memory occupied in working set reduced linearly to around 200
 MB.

 I guess this is it.

The default value for max_locks_per_transaction is 64, not 1. Values
this high are quite insane, and suggest that either you don't know what
the value means (and increased it just in case, because more is always
better) or that the application does something wrong (eventually
requiring so many locks).

You really need to check this (notice how the amount of shared memory
depends on max_locks_per_transaction):

http://www.postgresql.org/docs/9.0/static/kernel-resources.html#SHARED-MEMORY-PARAMETERS

and this (which explains what max_locks_per_transaction does):

http://www.databasesoup.com/2012/06/postgresqlconf-maxlockspertransaction.html
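As a rough rule of thumb from the first link: the shared lock table has room
for max_locks_per_transaction * (max_connections + max_prepared_transactions)
locks, at roughly 270 bytes each, which is why an enormous value inflates the
server's shared memory footprint.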

regards
Tomas



-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] postgresql_fdw issues with triggers on the foreign tables

2014-07-23 Thread Eelke Klein
I'm experimenting with using foreign data wrappers to get data from one
database to another. Most things work perfectly but I am encountering two
issues with triggers on the foreign tables.

The first one is that when a query fires a trigger on the foreign table, the
trigger doesn't have any search_path. Not even a select on a table in
public will work from the trigger unless I specify the schema name.

The second problem has to do with DEFAULT values. One of the tables I'm
inserting data on has a column called id of type bigserial. This column is
not filled by the INSERT statement I'm using, so it should use its default,
which is nextval('...'::regclass). However, in the BEFORE INSERT trigger the
value of NEW.id IS NULL, while normally it would already be assigned a value
from the sequence.

The issues with the triggers do not occur when they are triggered by
statements executed directly on the database. I only have these issues when
the statements come from the FDW.


The local database is version 9.3.3 x64 windows from EnterpriseDB
The foreign database is version 9.3.2 x64 freebsd 10 clang3.3

Regards, Eelke


Re: [GENERAL] Index usage with slow query

2014-07-23 Thread Bill Moran
On Wed, 23 Jul 2014 10:45:56 +0100
Rebecca Clarke r.clark...@gmail.com wrote:

 Hi all,
 
 Looking for some advice regarding a slow query I have and indexing.
 
 I'm using postgresql 9.1 and this is my table that has around 680 rows:
 
 CREATE TABLE mytable
 (
   class character varying,
   floor character varying,
   source_id integer,
   the_geom geometry
 )
 WITH (
   OIDS=TRUE
 );
 
 
 CREATE INDEX idx_source_id
   ON mytable
   USING btree
   (source_id);
 
 CREATE INDEX idx_the_geom_gist
   ON mytable
   USING gist
   (the_geom);
 
 
 This table is constantly hit with the below query. The only difference
 between queries is the values in the where clause:
 
 SELECT the_geom, oid FROM mytable
 WHERE
 the_geom && ST_GeomFromText('POLYGON((529342.334095833
 180696.22173,529342.334095833 181533.44595,530964.336820833
 181533.44595,530964.336820833 180696.22173,529342.334095833
 180696.22173))',find_srid('','mytable','the_geom'))
 AND
 (floor = 'gf' AND source_id = '689' AND class = 'General')
 
 
 As the table has increased in size, this query has become slower, so I made
 this index:
 
 
 CREATE INDEX idx_floor_sourceid_class
   ON mytable
   USING btree
   (floor, source_id, class);
 
 
 When I run an EXPLAIN and EXPLAIN ANALYZE the query isn't using this new
 index.
 
 Sometimes it uses just idx_the_geom_gist
 
 other times it uses idx_the_geom_gist and idx_source_id
 
 
 I don't understand why it's inconsistent in its use of indexes when the
 query is always the same structure, just different where clause values, and
 I don't understand why it's not using the new index either.

It depends on the data.  The planner will make estimates of what the fastest
way to execute will be based on a lot of things, one of which is how helpful
an index is really expected to be.  Since your indexes aren't arranged to
allow an index-only scan (although I don't remember if 9.1 had index-only
scans yet ...) it will have to use the index to narrow down the rows, then
load up the rows and filter them further (you didn't provide explain output,
but I'll bet a dozen nickels that's what it says).  So if the values in
source_id are unique enough that the planner doesn't think that
idx_floor_sourceid_class will narrow the results any better than
idx_source_id, it will use the latter because it's a smaller index and will
require fewer disk fetches to load it.

Of course, without explain output, I'm assuming a lot.  But the basic
operation still stands: indexes aren't always guaranteed to be faster than
other types of access.  And depending on the distribution of the data, some
indexes might be faster with some fetches than with others.

The key is not whether it's using the index or not, it's whether it's getting
the fastest plan or not.  The first step in ensuring that is to make sure the
table is getting analyzed frequently enough, otherwise the stats that the
planner uses to predict will be off and it will often choose poor plans.  The
next step would be to isolate specific instances that you're suspicious of
and test to see if the planner really is getting the best plan.  Hopefully
you have a test database where you can copy the data and add/remove indexes
at will.  That type of easter egg hunt may not be necessary, though.  EXPLAIN
ANALYZE can often tell you if the plan is bad by showing you where estimated
times vary wildly from actual times.
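For example (using the table and query from above; a sketch):

    ANALYZE mytable;

    EXPLAIN (ANALYZE, BUFFERS)
    SELECT the_geom, oid FROM mytable
    WHERE the_geom && ST_GeomFromText('POLYGON((...))',  -- polygon as in the original query
                                      find_srid('','mytable','the_geom'))
      AND floor = 'gf' AND source_id = '689' AND class = 'General';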

Hope this helps, but before you worry too much about it, I'd suggest asking
yourself one question: is the performance at an acceptable level, even if you
don't understand the rationale behind the planner's choice?  Of course, that
may not be important if you're asking the question just to understand better.

-- 
Bill Moran wmo...@potentialtech.com


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] postgresql_fdw issues with triggers on the foreign tables

2014-07-23 Thread Albe Laurenz
Eelke Klein wrote:
 I'm experimenting with using foreign data wrappers to get data from one 
 database to another. Most
 things work perfectly but I am encountering two issues with triggers on the 
 foreign tables.
 
 The first one is that when a query fires a trigger on the foreign table, the 
 trigger doesn't have any search_path. Not even a select on a table in public 
 will work from the trigger unless I specify the schema name.

Wouldn't a trigger that depends on the current search_path setting be
pretty dangerous anyway?  Anybody can change the setting.
I'd suggest that you make the trigger more robust.
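For example (function name assumed; a sketch), you can pin a search_path to
the trigger function itself, so it no longer depends on the caller's setting:

    ALTER FUNCTION my_trigger_function() SET search_path = public, pg_catalog;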

 The second problem has to do with DEFAULT values. One of the tables I'm 
 inserting data on has a column called id of type bigserial. This column is 
 not filled by the INSERT statement I'm using, so it should use its default, 
 which is nextval('...'::regclass). However, in the BEFORE INSERT trigger 
 the value of NEW.id IS NULL, while normally it would already be assigned a 
 value from the sequence.

That is actually working as intended, see this thread:
http://www.postgresql.org/message-id/24107.1363027...@sss.pgh.pa.us
Nobody could come up with a better solution.

 The issues with the triggers do not occur when they are triggered by 
 statements executed directly on
 the database. Only when the statements come from the FDW I have these issues.

Can you reproduce the trigger problem without postgres_fdw by setting 
search_path=pg_catalog?

Yours,
Laurenz Albe

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] postgresql_fdw issues with triggers on the foreign tables

2014-07-23 Thread Igor Neyman


From: pgsql-general-ow...@postgresql.org 
[mailto:pgsql-general-ow...@postgresql.org] On Behalf Of Eelke Klein
Sent: Wednesday, July 23, 2014 10:31 AM
To: pgsql-general@postgresql.org
Subject: [GENERAL] postgresql_fdw issues with triggers on the foreign tables


I'm experimenting with using foreign data wrappers to get data from one 
database to another. Most things work perfectly but I am encountering two 
issues with triggers on the foreign tables.

The first one is that when a query fires a trigger on the foreign table, the 
trigger doesn't have any search_path. Not even a select on a table in public 
will work from the trigger unless I specify the schema name.
The second problem has to do with DEFAULT values. One of the tables I'm 
inserting data on has a column called id of type bigserial. This column is 
not filled by the INSERT statement I'm using, so it should use its default, 
which is nextval('...'::regclass). However, in the BEFORE INSERT trigger the 
value of NEW.id IS NULL, while normally it would already be assigned a value 
from the sequence.
The issues with the triggers do not occur when they are triggered by 
statements executed directly on the database. I only have these issues when 
the statements come from the FDW.

The local database is version 9.3.3 x64 windows from EnterpriseDB
The foreign database is version 9.3.2 x64 freebsd 10 clang3.3

Regards, Eelke


So, as a workaround: since you already catch “NEW.id IS NULL” in the BEFORE 
INSERT trigger, you could explicitly assign NEW.id the value 
nextval('...'::regclass) right then and there in such cases.
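A sketch of that workaround (table, column, sequence and trigger names are
assumed; adjust them to the real ones):

    CREATE OR REPLACE FUNCTION fill_missing_id() RETURNS trigger AS $$
    BEGIN
        IF NEW.id IS NULL THEN
            -- 'mytable_id_seq' is the default name for a bigserial's sequence
            NEW.id := nextval('mytable_id_seq'::regclass);
        END IF;
        RETURN NEW;
    END;
    $$ LANGUAGE plpgsql;

    CREATE TRIGGER trg_fill_missing_id
        BEFORE INSERT ON mytable
        FOR EACH ROW EXECUTE PROCEDURE fill_missing_id();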

Regards,
Igor Neyman


[GENERAL] Why is unique constraint needed for upsert?

2014-07-23 Thread Seamus Abshere

hi all,

Upsert is usually defined [1] in reference to violating a unique key:


Insert, if unique constraint violation then update; or update, if not found 
then insert.


Is this theoretically preferable to just looking for a row that matches 
certain criteria, updating it if found or inserting otherwise?


For an example of the latter approach, see MongoDB's flavor of upsert 
[2]. You just give it a query and an update. It seems to me this is 
better because it functions correctly whether or not an index is in place.


Best, thanks,
Seamus


[1] 
http://postgresql.uservoice.com/forums/21853-general/suggestions/245202-merge-upsert-replace

[2] http://docs.mongodb.org/manual/reference/method/db.collection.update/

--
Seamus Abshere, SCEA
https://github.com/seamusabshere


--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL]

2014-07-23 Thread Kalai R
I already gave the log entries from before the incorrect shutdown in the
previous post

http://www.postgresql.org/message-id/cagxuanmc4zwcjqncaqn-qktj5kzf1pevvt9o_9wftet8kr_...@mail.gmail.com


Also I explained the install steps in the same post

http://www.postgresql.org/message-id/cagxuano8jgxeplpfxg1whaopunfnrh_5hpze0jvhq9zgrq_...@mail.gmail.com




On Wed, Jul 23, 2014 at 7:39 PM, Adrian Klaver adrian.kla...@aklaver.com
wrote:

 On 07/22/2014 11:59 PM, Kalai R wrote:

 * Is this a Windows *server* version, or is it one of the newer desktop
 OSes like Windows 8?   Windows 8 seems to shut down rather fast; it's
 quite possible it's not bothering to wait for services to exit.*

 I face this problem on Windows XP and Windows 7, but not on all Windows XP
 and Windows 7 machines.
 On Windows 8 and Windows Server there is no problem.

 *It seems like Windows isn't waiting for the service to exit cleanly
 before it pulls the plug.*

 I think so, but I don't know how to solve this problem.


 First would be to show the log entries from the period immediately before
 the server shut down incorrectly.

 From the previous go around on this:

 http://www.postgresql.org/message-id/CAGxuanNsgNDTCv1bm+ap2tSr=vztGca_yvAFrXwW=9w0o3m...@mail.gmail.com

 you indicated that your application does things when installing itself and
 the server.

 It would be nice to know what that is.

 In particular, an outline of the exact steps your application takes during
 the install.

 --
 Adrian Klaver
 adrian.kla...@aklaver.com



Re: [GENERAL] Why is unique constraint needed for upsert?

2014-07-23 Thread John R Pierce

On 7/23/2014 10:21 AM, Seamus Abshere wrote:

hi all,

Upsert is usually defined [1] in reference to a violating a unique key:

Insert, if unique constraint violation then update; or update, if not 
found then insert.


Is this theoretically preferable to just looking for a row that 
matches certain criteria, updating it if found or inserting otherwise?


what happens when two connections do this more or less concurrently, in 
transactions?




--
john r pierce  37N 122W
somewhere on the middle of the left coast



--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Why is unique constraint needed for upsert?

2014-07-23 Thread Igor Neyman
-Original Message-
From: pgsql-general-ow...@postgresql.org 
[mailto:pgsql-general-ow...@postgresql.org] On Behalf Of John R Pierce
Sent: Wednesday, July 23, 2014 1:32 PM
To: pgsql-general@postgresql.org
Subject: Re: [GENERAL] Why is unique constraint needed for upsert?

On 7/23/2014 10:21 AM, Seamus Abshere wrote:
 hi all,

 Upsert is usually defined [1] in reference to a violating a unique key:

 Insert, if unique constraint violation then update; or update, if not 
 found then insert.

 Is this theoretically preferable to just looking for a row that 
 matches certain criteria, updating it if found or inserting otherwise?

what happens when two connections do this more or less concurrently, in 
transactions?



-- 
john r pierce  37N 122W
somewhere on the middle of the left coast


Well, that's exactly why OP prefers Mongo, which doesn't care about such 
small things as ACID.

Regards,
Igor Neyman

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Referencing serial col's sequence for insert

2014-07-23 Thread Francisco Olarte
Hi Anil:

On Tue, Jul 22, 2014 at 6:46 PM, Anil Menon gakme...@gmail.com wrote:
 Am a bit confused -which one comes first?
 1) the 'data'||currval('id01_col1_seq') is parsed first : which means it

or
 1) an insert is attempted which causes a sequence.nextval to be performed
...
 I observe the latter on my single session notebook instance of postgres.

Don't be confused, you have experimental evidence which beats theories
hands down; it's either the latter or a superset of it ( i.e., second
for single sessions only, or second on notebook sessions, but it seems
unlikely ).

Also note the 1st one cannot happen if you are using a fresh
session ( i.e., the insert is the first command typed; if it is
not, that signals you are testing badly ), since currval is documented as
failing in this case.

Anyway, your approach is risky. You've been told a lot of alternatives
which have predictable behaviour ( here is another one: start work,
select and ignore nextval, then use currval for BOTH values ), so why
not use one of them? Bear in mind that the second alternative may be
working due to undefined behaviour which may change in a future
release, or when using multiple rows ( or when using an even number of
sessions, although, as before, I find that one unlikely ). ( Or, if
you really want to know for knowledge's sake which is the behaviour,
download the sources, procure yourself a tank of your favorite
caffeinated potion and hack on. )
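A sketch of that "use currval for BOTH values" alternative (sequence name
taken from the thread; table and column names are assumed):

    BEGIN;
    SELECT nextval('id01_col1_seq');   -- advance the sequence once, ignore the result
    INSERT INTO id01 (col1, col2)
    VALUES (currval('id01_col1_seq'),
            'data' || currval('id01_col1_seq'));
    COMMIT;

currval is local to the session, so concurrent sessions cannot interfere
with each other's values.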

Regards.

Francisco Olarte.


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Why is unique constraint needed for upsert?

2014-07-23 Thread Tom Lane
John R Pierce pie...@hogranch.com writes:
 On 7/23/2014 10:21 AM, Seamus Abshere wrote:
 Upsert is usually defined [1] in reference to a violating a unique key:
 Is this theoretically preferable to just looking for a row that 
 matches certain criteria, updating it if found or inserting otherwise?

 what happens when two connections do this more or less concurrently, in 
 transactions?

For the OP's benefit --- the subtext John left unstated is that the
unique-key mechanism has already solved the problem of preventing
concurrent updates from creating duplicate keys.  If we build a version of
UPSERT that doesn't rely on a unique index then it'll need some entirely
new mechanism to prevent concurrent key insertion.  (And if you don't care
about concurrent cases, you don't really need UPSERT ...)

regards, tom lane


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] question about memory usage

2014-07-23 Thread klo uo
Tomas, thanks for the heads up!

I certainly didn't know what this setting means, except the obvious name.
Your links helped.
I just can't find where this setting was suggested, but IIRC it was in a
guide for migrating OSM to PostGIS, as the other tables were just created by
GDAL OGR.

I had this line in my `postgresql.conf`:

max_locks_per_transaction = 5   # 1

that's why I thought that 1 is the default, but it may be that the
commented value was entered by me, and not the real default value.

I've set it now to 64.

Thanks again




On Wed, Jul 23, 2014 at 4:10 PM, Tomas Vondra t...@fuzzy.cz wrote:

 On 23 Červenec 2014, 15:56, klo uo wrote:
  Bill, thanks for your reply.
 
  shared_buffers is set to 128MB.
 
  Now that you mention the config file, the only thing I did change there (it
  was suggested to me while I made some changes on my databases) was
  max_locks_per_transaction = 5 (which has default value 1).
 
  After resetting max_locks_per_transaction to default value and
  restarting
  the server, memory occupied in working set reduced linearly to around 200
  MB.
 
  I guess this is it.

 The default value for max_locks_per_transaction is 64, not 1. Values
 this high are quite insane, and suggest that either you don't know what
 the value means (and increased it just in case, because more is always
 better) or that the application does something wrong (eventually
 requiring so many locks).

 You really need to check this (notice how the amount of shared memory
 depends on max_locks_per_transaction):


 http://www.postgresql.org/docs/9.0/static/kernel-resources.html#SHARED-MEMORY-PARAMETERS

 and this (which explains what max_locks_per_transaction does):


 http://www.databasesoup.com/2012/06/postgresqlconf-maxlockspertransaction.html

 regards
 Tomas




[GENERAL] Dead Locks

2014-07-23 Thread Valter Nogueira
I am receiving deadlocks like this:

Jul 22, 2014 11:00:29 PM br.com.fastway.fastdialer.FastDialerDB query
SEVERE: SELECT * FROM ONGOING WHERE STATUS='FILA' ORDER BY TRUNK_ID,
PRIORIDADE_TRONCO, ID;
org.postgresql.util.PSQLException: ERROR: deadlock detected
  Detail: Process 508 waits for AccessShareLock on relation 2662 of
database 16384; blocked by process 8504.
Process 8504 waits for AccessExclusiveLock on relation 2663 of database
16384; blocked by process 508.

What seems odd is that relations 2662 and 2663 are not user tables but
postgres objects.

select oid, relname from pg_class where oid in (2662,2663)

2662;pg_class_oid_index
2663;pg_class_relname_nsp_index

Furthermore, this system doesn't use transactions (I mean it is autocommit)
and we don't use triggers, rules or even foreign key constraints.

Thanks for any help.

Valter


Re: [GENERAL] Dead Locks

2014-07-23 Thread Tom Lane
Valter Nogueira vgnogue...@gmail.com writes:
 I am receiveing dead locks like this:
 Jul 22, 2014 11:00:29 PM br.com.fastway.fastdialer.FastDialerDB query
 SEVERE: SELECT * FROM ONGOING WHERE STATUS='FILA' ORDER BY TRUNK_ID,
 PRIORIDADE_TRONCO, ID;
 org.postgresql.util.PSQLException: ERROR: deadlock detected
   Detail: Process 508 waits for AccessShareLock on relation 2662 of
 database 16384; blocked by process 8504.
 Process 8504 waits for AccessExclusiveLock on relation 2663 of database
 16384; blocked by process 508.

 What seems odd is that relations 2662 and 2663 are not user tables but
 postgres objects.

Yeah, they're indexes of pg_class.  What PG version is that?  Are you
perhaps running maintenance operations that try to VACUUM FULL the system
catalogs?  (If so, I'd counsel against it.)

regards, tom lane


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL]

2014-07-23 Thread Krystian Bigaj
On 23 July 2014 08:59, Kalai R softlinne...@gmail.com wrote:

 * Is this a Windows *server* version, or is it one of the newer desktop
 OSes like Windows 8?   Windows 8 seems to shut down rather fast; it's quite
 possible it's not bothering to wait for services to exit.*

 I face this problem on Windows XP and Windows 7, but not on all Windows XP
 and Windows 7 machines.
 On Windows 8 and Windows Server there is no problem.


 *It seems like Windows isn't waiting for the service to exit cleanly
 before it pulls the plug.*

 I think so, but I don't know how to solve this problem.

Windows will kill service processes on shutdown based on the
WaitToKillServiceTimeout registry setting (this will vary by OS, and other
applications might change it).
This value can be lowered by some stupid 'pseudo' OS optimizer.
Try increasing the WaitToKillServiceTimeout value to, for example, 300000
(that's 5 min.; the value is in milliseconds), restart the OS, and see if you
can still reproduce the problem on shutdown. If the problem appears again and
system shutdown hangs for 5 min., then you can suspect some deadlock.
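For example, from an elevated command prompt (a sketch):

    reg add "HKLM\SYSTEM\CurrentControlSet\Control" /v WaitToKillServiceTimeout /t REG_SZ /d 300000 /f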

There is also a bug in Windows 7/Windows Server 2008 R2 (even with SP1, and
all fixes from Windows Update):
http://support.microsoft.com/kb/2549760 (only a hotfix - not available on
Windows Update) - without this hotfix, changing WaitToKillServiceTimeout
doesn't have any effect!

The Windows 2003 Small Business Server installer incorrectly sets this
value's type: http://support.microsoft.com/kb/839262 - this one might be
fixed in SP1, but I haven't had the time/need to test it.

(Notes below might not be helpful for this case, but they describe
problems/solutions that I had with Postgres working as a Windows service.)

Another note about Windows services and shutdown is that service dependency
isn't considered during system shutdown (this one is documented/by design).
However, here I think that Postgres has a problem when working as a Windows
service:
- main postmaster and forked sub-postmasters processes calls
pgwin32_signal_initialize()
(src\backend\main\main.c and src\backend\postmaster\postmaster.c),
- pgwin32_signal_initialize() registers console handler by
SetConsoleCtrlHandler(pg_console_handler,
...) - this is registered also when PG works as a Windows service,
- when pg_console_handler receives CTRL_SHUTDOWN_EVENT from OS, then it
calls pg_queue_signal(SIGINT).

Problems:
- when OS is in shutdown path, then it sends CTRL_SHUTDOWN_EVENT, and *all*
Postgres processes (main and sub/forked) will call pg_queue_signal(SIGINT)
- so main and sub processes will start to shutdown independently? Can this
have any bad consequences?
- even if Postgres is running as a Windows service (for example by pg_ctl),
then postgres.exe process can exit even before pg_ctl will receive event
about shutdown - this one was a problem for me, because in our app. I need
to wait first that our service stops, and then I let Postgres to stop (I
have my own service - not pg_ctl - which will start/stop postgres.exe
process).

I think that CTRL_SHUTDOWN_EVENT should be removed from pg_console_handler,
because as the docs say:
http://msdn.microsoft.com/en-us/library/windows/desktop/ms683242(v=vs.85).aspx

CTRL_SHUTDOWN_EVENT
A signal that the system sends when the system is shutting down.
Interactive applications are not present by the time the system sends this
signal, therefore it can be received only by services in this situation.

so it will be called only for service processes (because app. processes
won't be present on shutdown),
but in that case, service should be responsible to send shutdown signal to
Postgres - like pg_ctl do it by kill(postmasterPID, SIGINT).

This code with CTRL_SHUTDOWN_EVENT (initially in libpg/pgsignal.c) is since:
Revision: f06e79525a57ccbf54ae5d0b673cd904ca978d67
Date: 2004-02-08 23:28:57
so I'm not sure if I'm missing something here, or this bug wasn't noticed
for 10 years :)

My workaround: own service which will start/stop Postgres (mainly to wait
for our service to stop, running initdb.exe in context of NetworkService,
etc.).
I've also written small Postgres module loaded via shared_preload_libraries
which will call SetConsoleCtrlHandler, and my HandlerRoutine which simply
returns TRUE. Because of this pg_console_handler won't be called (so no
processing CTRL_SHUTDOWN_EVENT by any Postgress process).

Best regards,
Krystian Bigaj


Re: [GENERAL] Dead Locks

2014-07-23 Thread Tom Lane
[ please keep the list cc'd, and avoid top-posting ]

Valter Nogueira vgnogue...@gmail.com writes:
 I get the error in different server with different pg versions.
 In this server PG is:
  PostgreSQL 9.1.13 on i686-pc-linux-gnu, compiled by gcc (Ubuntu/Linaro
 4.6.3-1ubuntu5) 4.6.3, 32-bit

Well, that has the only fix I can think of that might be relevant (commit
1192ba8b6).

 Maybe I have autovaccum? I am not sure

AFAIK, autovacuum would never take an exclusive lock on an index; nor
would any plain DML operation.  There must be some command you are issuing
that tries to grab exclusive lock on that index, and I'd bet it's
something along the line of VACUUM FULL, CLUSTER, or REINDEX applied to
pg_class.

You might try looking into the postmaster log, which I think will include
the text of all SQL commands involved in the deadlock report.  If it
doesn't, try turning on log_statement so you can see what the other
process is running.
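For example, in postgresql.conf (a sketch; log_lock_waits is optional but
useful here):

    log_statement = 'all'    # log every SQL statement
    log_lock_waits = on      # log when a session waits longer than deadlock_timeout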

regards, tom lane


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] tab_to_sting

2014-07-23 Thread Ramesh T
Hi,

SELECT hr.tab_to_largestring(CAST(COLLECT(cust_name) AS
t_varchar2_tab)) AS employees
FROM   hr.customer

When I run the function to turn table column values into a single row, the
function name is hr.tab_to_largestring.

This code is from Oracle.

It returns an error like: function collect(character varying) does not exist.

Please let me know, does a collect() function exist in Postgres?


And how do I run this function?


Re: [GENERAL] tab_to_sting

2014-07-23 Thread hubert depesz lubaczewski
I don't know what collect actually does, but just guessing, I would say
that you're looking for string_agg()
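For example, assuming the goal is just to concatenate the cust_name values
into one comma-separated string (a sketch):

    SELECT string_agg(cust_name, ',') AS employees
    FROM hr.customer;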

depesz


On Wed, Jul 23, 2014 at 6:12 PM, Ramesh T rameshparnandit...@gmail.com
wrote:

 Hi,

 SELECT hr.tab_to_largestring(CAST(COLLECT(cust_name) AS
 t_varchar2_tab)) AS employees
 FROM   hr.customer

 When I run the function to turn table column values into a single row, the
 function name is hr.tab_to_largestring.

 This code is from Oracle.

 It returns an error like: function collect(character varying) does not
 exist.

 Please let me know, does a collect() function exist in Postgres?


 And how do I run this function?



Re: [GENERAL]

2014-07-23 Thread Adrian Klaver

On 07/23/2014 10:30 AM, Kalai R wrote:

I already gave the log entries from before the incorrect shutdown in the
previous post

http://www.postgresql.org/message-id/cagxuanmc4zwcjqncaqn-qktj5kzf1pevvt9o_9wftet8kr_...@mail.gmail.com


Also I explained the install steps in the same post

http://www.postgresql.org/message-id/cagxuano8jgxeplpfxg1whaopunfnrh_5hpze0jvhq9zgrq_...@mail.gmail.com




Yes, but these are for a different iteration of your application, one 
where you were using the installer, but doing a silent install. If I am 
following correctly you have changed that procedure:



What and how are you installing?

Using the Windows binary zip file: I extract the zip file, create a cluster 
using initdb, then register the Windows service using pg_ctl.




So I figured, new procedure, new logs on the theory that the errors 
would not be the same.


So, do you have logs from this most recent problem and are the errors 
the same?


--
Adrian Klaver
adrian.kla...@aklaver.com


--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] tab_to_sting

2014-07-23 Thread Adrian Klaver

On 07/23/2014 09:12 AM, Ramesh T wrote:

Hi,

 SELECT hr.tab_to_largestring(CAST(COLLECT(cust_name) AS
t_varchar2_tab)) AS employees
 FROM   hr.customer

When I run the function to turn table column values into a single row, the
function name is hr.tab_to_largestring.

This code is from Oracle.

It returns an error like: function collect(character varying) does not exist.

Please let me know, does a collect() function exist in Postgres?


And how do I run this function?


To help with getting answers, it would be helpful if you told the list 
what the Oracle function does, or pointed to the documentation:


http://docs.oracle.com/cd/E11882_01/server.112/e26088/functions031.htm#SQLRF51285

Another option would be to investigate EnterpriseDB as they have an 
Oracle compatibility layer available:


http://www.enterprisedb.com/


--
Adrian Klaver
adrian.kla...@aklaver.com


--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Why is unique constraint needed for upsert? (treat atomicity as optional)

2014-07-23 Thread Seamus Abshere

On 7/23/14 3:40 PM, Tom Lane wrote:

John R Pierce pie...@hogranch.com writes:

On 7/23/2014 10:21 AM, Seamus Abshere wrote:

Upsert is usually defined [1] in reference to a violating a unique key:
Is this theoretically preferable to just looking for a row that
matches certain criteria, updating it if found or inserting otherwise?



what happens when two connections do this more or less concurrently, in
transactions?


For the OP's benefit --- the subtext John left unstated is that the
unique-key mechanism has already solved the problem of preventing
concurrent updates from creating duplicate keys.  If we build a version of
UPSERT that doesn't rely on a unique index then it'll need some entirely
new mechanism to prevent concurrent key insertion.  (And if you don't care
about concurrent cases, you don't really need UPSERT ...)


hi all,

What if we treat atomicity as optional? You could have extremely 
readable syntax like:



-- no guarantees, no index required
UPSERT age = 5 INTO dogs WHERE name = 'Jerry';



-- optionally tell us how you want to deal with collision
UPSERT age = 3 INTO dogs WHERE name = 'Jerry' ON DUPLICATE KEY KEEP NEWEST;
UPSERT age = 5 INTO dogs WHERE name = 'Jerry' ON DUPLICATE KEY KEEP NEWEST;



-- only **require** (by throwing an error) a unique index or a locked table for 
queries like
UPSERT age = age+1 INTO dogs WHERE name = 'Jerry';


Obviously this flies in the face of what most people say the 
fundamental Upsert property is [1]



At READ COMMITTED isolation level, you should always get an atomic insert or 
update [1]


I just think there are a lot of non-concurrent bulk loading and 
processing workflows that could benefit from the performance advantages 
of upsert (one trip to database).


Best, thanks,
Seamus

[1] http://www.pgcon.org/2014/schedule/events/661.en.html

--
Seamus Abshere, SCEA
https://github.com/seamusabshere


--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Why is unique constraint needed for upsert? (treat atomicity as optional)

2014-07-23 Thread Tom Lane
Seamus Abshere sea...@abshere.net writes:
 On 7/23/14 3:40 PM, Tom Lane wrote:
 For the OP's benefit --- the subtext John left unstated is that the
 unique-key mechanism has already solved the problem of preventing
 concurrent updates from creating duplicate keys.

 What if we treat atomicity as optional?

You'll get a *much* warmer response to that kind of suggestion from
MongoDB or MySQL, no doubt.  PG is not in the business of optional
data integrity.

 I just think there are a lot of non-concurrent bulk loading and 
 processing workflows that could benefit from the performance advantages 
 of upsert (one trip to database).

What exactly is your argument for supposing that an UPSERT without an
underlying index would perform so well?  It seems much more likely
that it'd suck, because of having to do full-table scans to look
for existing rows.

regards, tom lane


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Why is unique constraint needed for upsert? (treat atomicity as optional)

2014-07-23 Thread John R Pierce

On 7/23/2014 1:45 PM, Seamus Abshere wrote:


What if we treat atomicity as optional? You could have extremely 
readable syntax like:


atomicity is not and never will be optional in PostgreSQL.


-- no guarantees, no index required
UPSERT age = 5 INTO dogs WHERE name = 'Jerry'; 


and if there are several rows with name='Jerry', you'd want to update them 
ALL?  if name isn't indexed, this will, as Tom suggests, require a FULL 
table scan, and it still will have issues with concurrency (connection 1 
scans table, finds nothing, starts to insert; user 2 scans table, finds 
nothing, starts to insert; poof, now we have two records?!?).   If name 
*is* indexed and unique, this collision will cause an error at commit for 
at least one of those connections.


--
john r pierce  37N 122W
somewhere on the middle of the left coast



--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Why is unique constraint needed for upsert? (treat atomicity as optional)

2014-07-23 Thread Seamus Abshere

On 7/23/14 6:03 PM, John R Pierce wrote:

On 7/23/2014 1:45 PM, Seamus Abshere wrote:

What if we treat atomicity as optional?


atomicity is not and never will be optional in PostgreSQL.


I'm wondering what a minimal definition of upsert could be - possibly 
separating concurrency handling out as a (rigorously defined) option for 
those who need it.



-- no guarantees, no index required
UPSERT age = 5 INTO dogs WHERE name = 'Jerry';


and if there's several rows with name='Jerry', you'd want to update them
ALL ?  if name isn't indexed, this will, as Tom suggests, require a FULL
table scan, and it still will have issues with concurrency


Ah, I was just saying, in terms of correctness, it seems to me that 
upsert shouldn't NEED an index to work, just like you don't need an index 
on name when you say WHERE name = 'Jerry' in SELECTs or INSERTs or 
UPDATEs.


Appreciate the defense of data integrity in any case!!

Best,
Seamus

--
Seamus Abshere, SCEA
https://github.com/seamusabshere


--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] Re: Why is unique constraint needed for upsert? (treat atomicity as optional)

2014-07-23 Thread David G Johnston
seamusabshere wrote
 At READ COMMITTED isolation level, you should always get an atomic insert
 or update [1]
 
 I just think there are a lot of non-concurrent bulk loading and 
 processing workflows that could benefit from the performance advantages 
 of upsert (one trip to database).

Bulk load raw data into UNLOGGED staging table
LOCK production table
UPDATE matched records
INSERT unmatched records
UNLOCK production table
TRUNCATE staging table

This seems like it would be sufficient for non-concurrent bulk loading...
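A sketch of those steps in SQL (table and column names assumed):

    BEGIN;
    LOCK TABLE production IN SHARE ROW EXCLUSIVE MODE;  -- blocks concurrent writers

    UPDATE production p
       SET val = s.val
      FROM staging s
     WHERE p.key = s.key;

    INSERT INTO production (key, val)
    SELECT s.key, s.val
      FROM staging s
     WHERE NOT EXISTS (SELECT 1 FROM production p WHERE p.key = s.key);

    COMMIT;  -- the lock is released at commit

    TRUNCATE staging;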

David J.




--
View this message in context: 
http://postgresql.1045698.n5.nabble.com/Why-is-unique-constraint-needed-for-upsert-tp5812552p5812628.html
Sent from the PostgreSQL - general mailing list archive at Nabble.com.


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] Re: Why is unique constraint needed for upsert? (treat atomicity as optional)

2014-07-23 Thread David G Johnston
seamusabshere wrote
 On 7/23/14 6:03 PM, John R Pierce wrote:
 On 7/23/2014 1:45 PM, Seamus Abshere wrote:
 What if we treat atomicity as optional?

 atomicity is not and never will be optional in PostgreSQL.
 
 I'm wondering what a minimal definition of upsert could be - possibly 
 separating concurrency handling out as a (rigorously defined) option for 
 those who need it.

I don't know how you can avoid the implicit need for an IF in the
algorithm.  I guess if you had some way to force an INSERT to automatically
hide any previous entries/records with the same PK you could move the
checking to the read side of the equation - and deal with the necessary
periodic cleanup.  At this point you are basically implementing a Temporal
database...

If you leave the checking to occur during write why wouldn't you want an
index to make that go faster?  It isn't mandatory but any performant
implementation is going to use one.

You can enforce a unique constraint violation without an index, so your
initial premise is wrong - though again, why would you want to?

Also, why do you assume MongoDB doesn't use an index to execute the supplied
query?

From your link:

To prevent MongoDB from inserting the same document more than once, create
a unique index on the name field. With a unique index, if an applications
issues a group of upsert operations, exactly one update() would successfully
insert a new document.

Given we do not have native UPSERT I'm not sure where your question is
coming from anyway.  I'm not sure what the plans are for UPSERT at the
moment but nothing prevents us from performing the UPSERT comparison on a
non-unique set of columns.  If the only unique index on a table is its
serial column then you would get behavior similar to MongoDB w/o a unique
index on name.

Though that does re-up the question about what happens when you issue a
subsequent UPSERT and more than one matching record is returned... the most
logical being to apply the update to all matched records.

I have difficulty imagining a situation where this would be desirable.  If I
am using UPSERT I am defining a complete entity that I need to cause to
exist.  If three of them already exist there is some differentiating factor
between them that my UPSERT command would clobber.  The example given in the
MongoDB link is not a particularly convincing use-case.

David J.





--
View this message in context: 
http://postgresql.1045698.n5.nabble.com/Why-is-unique-constraint-needed-for-upsert-tp5812552p5812631.html
Sent from the PostgreSQL - general mailing list archive at Nabble.com.


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Re: Why is unique constraint needed for upsert? (treat atomicity as optional)

2014-07-23 Thread Seamus Abshere

On 7/23/14 6:50 PM, David G Johnston wrote:

seamusabshere wrote

On 7/23/14 6:03 PM, John R Pierce wrote:

On 7/23/2014 1:45 PM, Seamus Abshere wrote:

What if we treat atomicity as optional?

atomicity is not and never will be optional in PostgreSQL.

I'm wondering what a minimal definition of upsert could be - possibly
separating concurrency handling out as a (rigorously defined) option for
those who need it.

Given we do not have native UPSERT I'm not sure where your question is
coming from anyway.  I'm not sure what the plans are for UPSERT at the
moment but nothing prevents us from performing the UPSERT comparison on a
non-unique set of columns.


hi David,

My argument lives and dies on the assumption that UPSERT would be useful 
even if it was (when given with no options) just a macro for



  UPDATE db SET b = data WHERE a = key;
  IF NOT found THEN
INSERT INTO db(a,b) VALUES (key, data);
  END IF;


Adding things like unique indexes would work like you would expect with 
individual INSERTs or UPDATEs - your statement might raise an exception. 
Then, going beyond, UPSERT would optionally support atomic a = a+1 
stuff, special actions to take on duplicate keys, all the concurrency 
stuff that people have been talking about.


IMO having such a complicated definition of what an upsert must be 
makes it a unicorn when it could just be a sibling to INSERT and UPDATE.


Best,
Seamus


--
Seamus Abshere, SCEA
https://github.com/seamusabshere


--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Re: Why is unique constraint needed for upsert? (treat atomicity as optional)

2014-07-23 Thread John R Pierce

On 7/23/2014 3:29 PM, Seamus Abshere wrote:
My argument lives and dies on the assumption that UPSERT would be 
useful even if it was (when given with no options) just a macro for



  UPDATE db SET b = data WHERE a = key;
  IF NOT found THEN
INSERT INTO db(a,b) VALUES (key, data);
  END IF; 


but that won't work if two connections execute similar 'upserts' 
concurrently.  both updates will see the record isn't there, then one 
or the other insert will fail, depending on which transaction commits first.
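for what it's worth, the example in the PostgreSQL documentation handles 
exactly this case with a unique key on a plus a retry loop (a sketch of 
that pattern):

    CREATE FUNCTION merge_db(key INT, data TEXT) RETURNS VOID AS
    $$
    BEGIN
        LOOP
            -- first try to update the key
            UPDATE db SET b = data WHERE a = key;
            IF found THEN
                RETURN;
            END IF;
            -- not there, so try to insert;
            -- if someone else inserted the same key concurrently,
            -- we get a unique-key failure and retry the UPDATE
            BEGIN
                INSERT INTO db(a,b) VALUES (key, data);
                RETURN;
            EXCEPTION WHEN unique_violation THEN
                -- do nothing, loop back and try the UPDATE again
            END;
        END LOOP;
    END;
    $$
    LANGUAGE plpgsql;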




--
john r pierce  37N 122W
somewhere on the middle of the left coast



--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] Re: Why is unique constraint needed for upsert? (treat atomicity as optional)

2014-07-23 Thread David G Johnston


 hi David,

 My argument lives and dies on the assumption that UPSERT would be useful
 even if it was (when given with no options) just a macro for

UPDATE db SET b = data WHERE a = key;
IF NOT found THEN
  INSERT INTO db(a,b) VALUES (key, data);
END IF;

 Adding things like unique indexes would work like you would expect with
 individual INSERTs or UPDATEs - your statement might raise an exception.
 Then, going beyond, UPSERT would optionally support atomic a = a+1
 stuff, special actions to take on duplicate keys, all the concurrency
 stuff that people have been talking about.

 IMO having such a complicated definition of what an upsert must be
 makes it a unicorn when it could just be a sibling to INSERT and UPDATE.


Fair enough.  I'd personally much rather have a staging table and use
writeable CTEs to implement something that simple - retrying on the off
chance an error occurs.

I'd use UPSERT (probably still with a staging table) if I expect a high
level of concurrency is going to force me to retry often and the
implementation will handle that for me.

To be honest though I haven't given it that much thought as I've had little
need for it.

David J.




--
View this message in context: 
http://postgresql.1045698.n5.nabble.com/Why-is-unique-constraint-needed-for-upsert-tp5812552p5812641.html
Sent from the PostgreSQL - general mailing list archive at Nabble.com.

Re: [GENERAL] Re: Why is unique constraint needed for upsert? (treat atomicity as optional)

2014-07-23 Thread Seamus Abshere

On 7/23/14 7:45 PM, John R Pierce wrote:

On 7/23/2014 3:29 PM, Seamus Abshere wrote:

My argument lives and dies on the assumption that UPSERT would be
useful even if it was (when given with no options) just a macro for


  UPDATE db SET b = data WHERE a = key;
  IF NOT found THEN
INSERT INTO db(a,b) VALUES (key, data);
  END IF;


but that won't work if two connections execute similar 'upserts'
concurrently. Both updates will see the record isn't there, then one
or the other insert will fail, depending on which transaction commits
first.


John,

Right - if you had a situation where that might happen, you would use a 
slightly more advanced version of the UPSERT command (and/or add a 
unique index).


UPSERT, in this conception and in its most basic form, would be subject 
to many of the same (and more) concurrency concerns as basic INSERTs and 
UPDATEs.


Providing options may be preferable to magically handling everything.

Best,
Seamus


--
Seamus Abshere, SCEA
+598 99 54 99 54
https://github.com/seamusabshere




[GENERAL] Complex Recursive Query

2014-07-23 Thread Jim Garrison
I have a collection of relationship rows of the form

Table: graph
key1 varchar
key2 varchar

A row of the form ('a','b') indicates that 'a' and 'b' are related.
The table contains many relationships between keys, forming several
disjoint sets. All relationships are bi-directional, and both
directions are present.  I.e. the table contains a set of disjoint
graphs specified as node pairs.

For example the set of values 

key1  key2
----  ----
  a   x
  a   y
  b   w
  c   t
  x   a
  y   a
  y   z
  z   y
  t   c
  w   b
  w   d
  d   w

defines three disjoint groups of connected keys:

  a x y z
  c t
  b w d

What I would like to achieve is a single SQL query that returns

  group key
  ----- ---
  1     a
  1     x
  1     y
  1     z
  2     c
  2     t
  3     b
  3     w
  3     d

I don't care about preserving the node-to-node relationships, only
the group membership for each node.

I've been playing with WITH RECURSIVE CTEs but haven't had any
success.  I'm not really sure how to express what I want in SQL, and
it's not completely clear to me that recursive CTEs will help here.
Also I'm not sure how to generate the sequence numbers for the groups.




Re: [GENERAL] Re: Why is unique constraint needed for upsert? (treat atomicity as optional)

2014-07-23 Thread John R Pierce

On 7/23/2014 3:58 PM, Seamus Abshere wrote:
Right - if you had a situation where that might happen, you would use 
a slightly more advanced version of the UPSERT command (and/or add a 
unique index).


A unique index wouldn't resolve the problem: without one, you'd end up 
with two records; with one, you'd end up with an error.


Naive programmers never seem to expect concurrency; it's something that 
just happens.


--
john r pierce  37N 122W
somewhere on the middle of the left coast





Re: [GENERAL] Referencing serial col's sequence for insert

2014-07-23 Thread Anil Menon
Thanks Olarte,
Exactly following your advice... this being the beauty of open source - you
can read the source code. It's that itch to drink deep from the fountain of
knowledge.

I really do like Laurenz Albe's advice of using WITH() AS, which seems to be
the best practice I can ask the developers to follow, as it eliminates a lot
of uncertainties and db-specific behavior - and seems like a best practice
even for other DBs. In fact I am sort of expanding that a bit to say:
wherever sequences need to be used, use the WITH() AS construct pattern.
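
(For illustration, that pattern might look like the sketch below; the
sequence name comes from earlier in the thread, while the table and column
names are assumptions:)

  WITH s AS (
    SELECT nextval('id01_col1_seq') AS id
  )
  INSERT INTO id01 (col1, col2)
  SELECT id, 'data' || id FROM s;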

Thanks everyone for the inputs.

Regards,
Anil

On 24 Jul 2014 02:03, Francisco Olarte fola...@peoplecall.com wrote:

 Hi Anil:

 On Tue, Jul 22, 2014 at 6:46 PM, Anil Menon gakme...@gmail.com wrote:
  Am a bit confused -which one comes first?
  1) the 'data'||currval('id01_col1_seq') is parsed first : which means it
 
 or
  1) an insert is attempted which causes a sequence.nextval to be performed
 ...
  I observe the latter on my single session notebook instance of postgres.

 Don't be confused: you have experimental evidence, which beats theories
 hands down; it's either the latter or a superset of it (i.e., second for
 single sessions only, or second on notebook sessions, but that seems
 unlikely).

 Also note the 1st one cannot happen if you are using a fresh session
 (i.e., the insert is the first command typed; if it is not, that signals
 you are testing badly), since currval is documented as failing in this
 case.

 Anyway, your approach is risky. You've been told a lot of alternatives
 which have predictable behaviour (here is another one: start work,
 select and ignore nextval, then use currval for BOTH values), so why
 not use one of them? Bear in mind that the second alternative may be
 working due to undefined behaviour which could change in a future
 release, or when using multiple rows (or when using an even number of
 sessions, although, as before, I find that one unlikely). (Or, if you
 really want to know for knowledge's sake which is the behaviour,
 download the sources, procure yourself a tank of your favorite
 caffeinated potion and hack on.)
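
 (That alternative, sketched with the same assumed table and column names:

   BEGIN;
   SELECT nextval('id01_col1_seq');  -- advance the sequence; ignore the value
   INSERT INTO id01 (col1, col2)
   VALUES (currval('id01_col1_seq'),
           'data' || currval('id01_col1_seq'));
   COMMIT;

 Both currval() calls return the same value within the session.)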

 Regards.

 Francisco Olarte.



Re: [GENERAL] Watching Views

2014-07-23 Thread Nick Guenther




Quoting David G Johnston david.g.johns...@gmail.com:


Nick Guenther wrote

Dear List,

In principle, I am looking for some way to say
```
CREATE VIEW view13131 AS select (id, name, bank_account) from actors
where age  22;
WATCH view13131;
```

and get output to stdout like
```

INSERT view13131 VALUES (241, Mortimer, 131.09);
...
INSERT view13131 VALUES (427, Schezwan, 95.89);
UPDATE view13131 SET bank_account = 1017.12 WHERE id = 427;
DELETE FROM view13131 WHERE id = 92;
...
```


9.4 - http://www.postgresql.org/docs/9.4/static/logicaldecoding.html

Though I doubt your requirement to obtain only a subset of data is something
that can be accommodated; especially in SQL form.

And, yes, you can create triggers on views.

http://www.postgresql.org/docs/9.4/static/sql-createtrigger.html

But assuming your view is meant to be dynamic, covering only the subset of
data you wish to watch, no one is going to be using your view to actually
Insert/Update/Delete against the underlying table(s) so it will not do you
any good to add triggers to it.

You probably need to create some kind of materialized view and add a trigger
to the relevant source table to maintain that view on an ongoing basis.
Then remove the trigger (and optionally the materialized view) when you no
longer care to watch.

This:
http://www.postgresql.org/docs/9.4/static/sql-creatematerializedview.html
 or roll your own.

You can also use the full power of whatever programming languages you can
install onto the server (e.g., pl/perl, pl/python, pl/sh) to link up with
the outside world from inside one of those triggers...


Thanks David, your tips were very helpful. I'm not an SQL expert, but
these ideas springboarded me ahead. I've been playing with them this
week and I've come up with a way to extract the logical changes without
using 9.4, which I'll share here in case anyone else is curious. This is
preliminary, so don't rely on it for anything important. It doesn't write
DELETE FROM lines, but it does write the data in a JSON-esque format
which could, without too much work, be turned into my invented WATCH
language.



```{psql}
-- watch.psql
-- This postgres + pl/python2 script demonstrates watching changes to a table
-- via a trigger.

-- Python opens a FIFO on which it writes lines like
--   + {'rating': 3, 'kind': 'Documentary', 'name': 'The Mail Man'}.
-- The FIFO part is the flakiest bit, because it requires you to load the
-- trigger, then immediately run a reader
-- (e.g. `tail -f /path/to/postgres/data/_changes_films`)
-- *before* any DB updates happen.
-- If you have no reader, updates will fail (the kernel will raise ENXIO at
-- print FIFO).
-- The other option is to ignore the ENXIOs, but then changes will get lost.
-- I'm not sure.
-- Some more design (subprocess? calling this from a master script?) can
-- probably fix that awkwardness.
--
-- The second point of flakiness is that attaching the trigger is rather
-- verbose.
-- This can be solved with pl/pgsql subroutines.
--
-- This could probably be done in plpgsql, but I know python better, and it
-- comes with serialization (json, msgpack, pickle) available easily.

-- these tips are due to
--   http://www.postgresql.org/message-id/1405660725952-5811931.p...@n5.nabble.com and
--   http://www.postgresql.org/message-id/1405703990457-5811982.p...@n5.nabble.com
-- The reason I'm not using Logical Decoding
-- http://www.postgresql.org/docs/devel/static/logicaldecoding-example.html
-- is because it's still in devel---not even Arch Linux, usually full of
-- bleeding edge code, has this feature yet. Plus it requires fiddling with
-- the .conf file.


DROP TABLE IF EXISTS films CASCADE;
CREATE TABLE films (name text, kind text, rating int);


DROP FUNCTION IF EXISTS watch_table();
CREATE FUNCTION watch_table() RETURNS trigger AS $$
  tablename = TD["args"][0]

  FIFO = "_changes_%s" % (tablename,)
  if FIFO not in SD:
    # this is our first time running in this instance of the python
    # interpreter: run initializations

    # PL/Python is really meant for small one-off tasks, mostly. Most data
    # should probably just be stuffed straight into the database.
    # however, things like file descriptors don't work so well like that;
    # for these things, we need to use the facilities PL/Python provides:
    #   http://www.postgresql.org/docs/9.3/static/plpython-sharing.html
    # summary is: SD stands for "static data" and behaves like static locals
    # in C (they must have some kind of trap table kicking around that
    # switches in values of SD when the appropriate function is called).
    # GD stands for "global data" and is the same everywhere.
    # Both begin as empty dictionaries.
    # Note also that it seems that one python interpreter is invoked ~per
    # client connection~; not per-statement (which would be too fine) nor per

    import sys, os

    if os.path.exists(FIFO):
      # TODO: check that, if it exists, it's a FIFO and we have perms

[GENERAL] event triggers in 9.3.4

2014-07-23 Thread Vasudevan, Ramya

I set up the following to log all DDLs executed in the database:

CREATE TABLE log_ddl_info(ddl_tag text, ddl_event text, ddl_time timestamp);

CREATE OR REPLACE FUNCTION log_ddl_execution()
RETURNS event_trigger AS $$
DECLARE
insertquery TEXT;
BEGIN
insertquery := 'INSERT INTO log_ddl_info VALUES (''' || tg_tag ||''', ''' || 
tg_event || ''', statement_timestamp())';
EXECUTE insertquery;
RAISE NOTICE 'Recorded execution of command % with event %', tg_tag, tg_event;
END;
$$ LANGUAGE plpgsql;

CREATE EVENT TRIGGER log_ddl_info_start  ON ddl_command_start EXECUTE PROCEDURE 
log_ddl_execution();
CREATE EVENT TRIGGER log_ddl_info_end   ON ddl_command_end  EXECUTE PROCEDURE 
log_ddl_execution();

Is there a way to log the object name (or the oid) in the function?

Thank you,

Ramya Vasudevan
Database Administrator

CLASSMATES
333 Elliott Ave. West, Suite 500
Seattle, WA 98119
206.301.4933 o



Re: [GENERAL] event triggers in 9.3.4

2014-07-23 Thread Adrian Klaver

On 07/23/2014 05:22 PM, Vasudevan, Ramya wrote:

I set up the following to log all DDLs executed in the database:

CREATE TABLE log_ddl_info(ddl_tag text, ddl_event text, ddl_time timestamp);

CREATE OR REPLACE FUNCTION log_ddl_execution()

RETURNS event_trigger AS $$

DECLARE

insertquery TEXT;

BEGIN

insertquery := 'INSERT INTO log_ddl_info VALUES (''' || tg_tag ||''',
''' || tg_event || ''', statement_timestamp())';

EXECUTE insertquery;

RAISE NOTICE 'Recorded execution of command % with event %', tg_tag,
tg_event;

END;

$$ LANGUAGE plpgsql;

CREATE EVENT TRIGGER log_ddl_info_start  ON ddl_command_start EXECUTE
PROCEDURE log_ddl_execution();

CREATE EVENT TRIGGER log_ddl_info_end   ON ddl_command_end  EXECUTE
PROCEDURE log_ddl_execution();

Is there a way to log the object name (or the oid) in the function?


The only thing I see is for dropped objects:

http://www.postgresql.org/docs/9.4/static/functions-event-triggers.html

pg_event_trigger_dropped_objects()
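
A rough sketch of how that could feed the logging table above, for drops
only (illustrative; it stuffs the dropped object's identity into the
ddl_tag column):

CREATE OR REPLACE FUNCTION log_ddl_drops()
RETURNS event_trigger AS $$
DECLARE
    obj record;
BEGIN
    FOR obj IN SELECT * FROM pg_event_trigger_dropped_objects()
    LOOP
        INSERT INTO log_ddl_info
        VALUES (tg_tag || ' ' || obj.object_identity, tg_event,
                statement_timestamp());
    END LOOP;
END;
$$ LANGUAGE plpgsql;

CREATE EVENT TRIGGER log_ddl_drops_trg ON sql_drop
EXECUTE PROCEDURE log_ddl_drops();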



Thank you,





--
Adrian Klaver
adrian.kla...@aklaver.com




Re: [GENERAL] Complex Recursive Query

2014-07-23 Thread matt
I wouldn't do this with recursion; plain old iteration is your friend
(yes, WITH RECURSIVE is actually iterative, not recursive...)

The algorithm goes like this:

1. Extend your graph relation to be symmetric and transitive.
2. Assign an integer group id to each node.
3. Repeatedly join the node list to the (extended) relation, updating each
node's group id to be the minimum of the group ids of every node it
touches.
4. Stop when the group ids stop changing.

Here's some example code, using your data:

DROP SCHEMA IF EXISTS test CASCADE;
CREATE SCHEMA test;
SET SEARCH_PATH TO test;

CREATE TABLE graph(key1 TEXT, key2 TEXT);

INSERT INTO graph VALUES
('a', 'x'),
('a', 'y'),
('b', 'w'),
('c', 't'),
('x', 'a'),
('y', 'a'),
('y', 'z'),
('z', 'y'),
('t', 'c'),
('w', 'b'),
('w', 'd'),
('d', 'w');

DO
$$
DECLARE
  prev INT = 0;
  curr INT;
BEGIN
  CREATE TABLE rel AS
  SELECT key1, key2 FROM graph
  UNION
  SELECT key2, key1 FROM graph
  UNION
  SELECT key1, key1 FROM graph
  UNION
  SELECT key2, key2 FROM graph;

  CREATE TABLE group_ids AS
  SELECT
key,
ROW_NUMBER() OVER (ORDER BY key) AS group_id
  FROM
(
  SELECT key1 AS key FROM graph
  UNION
  SELECT key2 FROM graph
) _;

  SELECT SUM(group_id) INTO curr FROM group_ids;
  WHILE prev != curr LOOP
prev = curr;
DROP TABLE IF EXISTS min_ids;
CREATE TABLE min_ids AS
SELECT
  a.key,
  MIN(c.group_id) AS group_id
FROM
  group_ids a
INNER JOIN
  rel b
ON
  a.key = b.key1
INNER JOIN
  group_ids c
ON
  b.key2 = c.key
GROUP BY
  a.key;

UPDATE
  group_ids
SET
  group_id = min_ids.group_id
FROM
  min_ids
WHERE
  group_ids.key = min_ids.key;

SELECT SUM(group_id) INTO curr FROM group_ids;
  END LOOP;

  DROP TABLE IF EXISTS rel;
  DROP TABLE IF EXISTS min_ids;
END
$$;

SELECT * FROM group_ids;
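
For the sample data, the loop converges after a couple of passes, and the
final SELECT should yield (row order may vary; group ids are each
component's minimum initial ROW_NUMBER):

  key  group_id
  ---  --------
  a    1
  x    1
  y    1
  z    1
  b    2
  d    2
  w    2
  c    3
  t    3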


Hope it helps,

Matthew




 I have a collection of relationship rows of the form

 Table: graph
 key1 varchar
 key2 varchar

 A row of the form ('a','b') indicates that 'a' and 'b' are related.
 The table contains many relationships between keys, forming several
 disjoint sets. All relationships are bi-directional, and both
 directions are present.  I.e. the table contains a set of disjoint
 graphs specified as node pairs.

 For example the set of values

 key1  key2
 ----  ----
   a   x
   a   y
   b   w
   c   t
   x   a
   y   a
   y   z
   z   y
   t   c
   w   b
   w   d
   d   w

 defines three disjoint groups of connected keys:

   a x y z
   c t
   b w d

 What I would like to achieve is a single SQL query that returns

   group key
   ----- ---
   1     a
   1     x
   1     y
   1     z
   2     c
   2     t
   3     b
   3     w
   3     d

 I don't care about preserving the node-to-node relationships, only
 the group membership for each node.

 I've been playing with WITH RECURSIVE CTEs but haven't had any
 success.  I'm not really sure how to express what I want in SQL, and
 it's not completely clear to me that recursive CTEs will help here.
 Also I'm not sure how to generate the sequence numbers for the groups.








[GENERAL] Table checksum proposal

2014-07-23 Thread matt
I have a suggestion for a table checksumming facility within PostgreSQL. 
The applications are reasonably obvious - detecting changes to tables,
validating data migrations, unit testing etc.  A possible algorithm is as
follows:

1. For each row of the table, take the binary representations of the
values and serialise them to CSV.
2. Calculate the MD5 sum of each CSV-serialised row.
3. XOR the row MD5 sums together.
4. CSV-serialise and MD5 a list of representations (of some sort) of the
types of the table's columns and XOR it with the rest.
5. Output the result as the table's checksum.
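
For concreteness, a rough sketch of steps 1-3 in plain SQL (this is not the
sample code mentioned below; it uses the row's text representation as a
stand-in for a real CSV serialisation, and a hypothetical table name):

CREATE AGGREGATE xor_agg (bit) (
    sfunc = bitxor,   -- built-in XOR for bit strings (the # operator)
    stype = bit
);

SELECT xor_agg(('x' || md5(t::text))::bit(128)) AS table_checksum
FROM some_table AS t;

One caveat: any pair of identical rows cancels out under XOR, so tables that
differ only in duplicated rows would checksum alike.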

Advantages of this approach:

1. Easily implemented using SPI.
2. Since XOR is commutative and associative, order of ingestion of rows
doesn't matter; therefore, unlike some other table checksumming methods,
this doesn't need an expensive ORDER BY *.  So, this should be pretty much
as fast as a SELECT * FROM, which is probably as fast as a table checksum
can be.
3. Using a cursor in SPI, rows can be ingested a few at a time.  So memory
footprint is low even for large tables.
4. Output has a convenient fixed size of 128 bits.

Questions:

1. Should this be a contrib module which provides a function, or should it
be a built-in piece of functionality?
2. Is MD5 too heavyweight for this?  Would using a non-cryptographic
checksum be worth the speed boost?
3. Is there a risk of different architectures/versions returning different
checksums for tables which could be considered identical?  If so, is this
worth worrying about?

I have knocked up some sample code if anyone is interested.

Regards,

Matthew




Re: [GENERAL] Complex Recursive Query

2014-07-23 Thread John W Higgins
https://gist.github.com/wishdev/635f7a839877d79a6781

Sorry for the 3rd party site - just easier to get the layout correct.

A CTE and dense_rank is all it takes. I am always amazed at what one can
now pack into such small amounts of code.
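
For reference, one shape such a query can take - a transitive closure in a
recursive CTE, then dense_rank over each component's minimum key (a sketch,
not necessarily the gist's exact code):

WITH RECURSIVE reach AS (
    SELECT key1 AS node, key1 AS peer FROM graph
    UNION
    SELECT r.node, g.key2
    FROM reach r
    JOIN graph g ON g.key1 = r.peer
)
SELECT DENSE_RANK() OVER (ORDER BY min_peer) AS grp, node AS key
FROM (SELECT node, MIN(peer) AS min_peer FROM reach GROUP BY node) s
ORDER BY grp, key;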


On Wed, Jul 23, 2014 at 4:00 PM, Jim Garrison jim.garri...@nwea.org wrote:

 I have a collection of relationship rows of the form

 Table: graph
 key1 varchar
 key2 varchar

 A row of the form ('a','b') indicates that 'a' and 'b' are related.
 The table contains many relationships between keys, forming several
 disjoint sets. All relationships are bi-directional, and both
 directions are present.  I.e. the table contains a set of disjoint
 graphs specified as node pairs.

 For example the set of values

 key1  key2
 ----  ----
   a   x
   a   y
   b   w
   c   t
   x   a
   y   a
   y   z
   z   y
   t   c
   w   b
   w   d
   d   w

 defines three disjoint groups of connected keys:

   a x y z
   c t
   b w d

 What I would like to achieve is a single SQL query that returns

   group key
   ----- ---
   1     a
   1     x
   1     y
   1     z
   2     c
   2     t
   3     b
   3     w
   3     d

 I don't care about preserving the node-to-node relationships, only
 the group membership for each node.

 I've been playing with WITH RECURSIVE CTEs but haven't had any
 success.  I'm not really sure how to express what I want in SQL, and
 it's not completely clear to me that recursive CTEs will help here.
 Also I'm not sure how to generate the sequence numbers for the groups.





[GENERAL] Standby Server Bus 7 error

2014-07-23 Thread Fabio Milano
Hi,

Any assistance in interpreting the logs is much appreciated.

The replication server crashes. Below is a snippet from the log.

2014-07-22 23:36:23 EDT LOG:  started streaming WAL from primary at 12/B000 on timeline 1
2014-07-22 23:43:12 EDT FATAL:  could not receive data from WAL stream: server closed the connection unexpectedly
This probably means the server terminated abnormally
before or while processing the request.

cp: cannot stat '/var/lib/postgresql/archive/0001001200D0': No such file or directory
2014-07-22 23:43:12 EDT LOG:  unexpected pageaddr 12/8D00 in log segment 0001001200D0, offset 0
2014-07-22 23:43:28 EDT LOG:  restored log file 0001001200D0 from archive
cp: cannot stat '/var/lib/postgresql/archive/0001001200D1': No such file or directory
2014-07-22 23:43:28 EDT LOG:  unexpected pageaddr 12/A500 in log segment 0001001200D1, offset 0
2014-07-22 23:43:28 EDT LOG:  started streaming WAL from primary at 12/D100 on timeline 1
2014-07-22 23:53:13 EDT FATAL:  could not receive data from WAL stream: server closed the connection unexpectedly
This probably means the server terminated abnormally
before or while processing the request.

cp: cannot stat '/var/lib/postgresql/archive/0001001200F1': No such file or directory
2014-07-22 23:53:13 EDT LOG:  unexpected pageaddr 12/AA00 in log segment 0001001200F1, offset 0
2014-07-22 23:53:13 EDT LOG:  started streaming WAL from primary at 12/F100 on timeline 1




Upon trying a /etc/init.d/postgresql restart

2014-07-23 14:47:55 EDT LOG:  restored log file 0001001B001D from archive
2014-07-23 14:47:55 EDT LOG:  consistent recovery state reached at 1B/1DFC64C0
2014-07-23 14:47:55 EDT LOG:  database system is ready to accept read only connections
2014-07-23 14:47:55 EDT LOG:  restored log file 0001001B001E from archive
2014-07-23 14:47:56 EDT LOG:  startup process (PID 730) terminated by signal 7: Bus error
2014-07-23 14:47:56 EDT LOG:  terminating any other active server processes

Recovery.conf

restore_command = 'cp /var/lib/postgresql/archive/%f %p'
standby_mode = 'on'
primary_conninfo = 'host=[ipaddress] port=5432 user=sherweb_standby_server 
sslmode=require'
archive_cleanup_command = '/usr/lib/postgresql/9.3/bin/pg_archivecleanup 
/var/lib/postgresql/9.3/archive/ %r'


Re: [GENERAL] What query currently running within function

2014-07-23 Thread Peter Geoghegan
On Tue, Jul 22, 2014 at 2:45 AM, Guillaume Lelarge
guilla...@lelarge.info wrote:
 Unfortunately, no. Even with the latest release. pg_stat_activity shows you
 what the client fires, not what the server does.


pg_stat_statements has a track GUC which controls whether or not
nested statements, such as statements executed within functions, are
tracked.
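
For example (a sketch; the module must already be listed in
shared_preload_libraries, and changing the setting requires superuser):

CREATE EXTENSION IF NOT EXISTS pg_stat_statements;
SET pg_stat_statements.track = 'all';  -- 'all' also tracks statements nested in functions
SELECT query, calls FROM pg_stat_statements ORDER BY calls DESC LIMIT 10;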

-- 
Regards,
Peter Geoghegan




Re: [GENERAL] Standby Server Bus 7 error

2014-07-23 Thread Michael Paquier
On Thu, Jul 24, 2014 at 12:53 PM, Fabio Milano fmil...@crdsgroup.com wrote:
 Any assistance in interpreting the logs is much appreciated.
 Replication server crashes. Below is snippet from log.
 2014-07-22 23:36:23 EDT LOG:  started streaming WAL from primary at 12/B000 on timeline 1
 2014-07-22 23:43:12 EDT FATAL:  could not receive data from WAL stream: server closed the connection unexpectedly
 This probably means the server terminated abnormally
 before or while processing the request.
The replication stream has finished. Did your root server crash in some way?

 cp: cannot stat '/var/lib/postgresql/archive/0001001200D0': No such file or directory
 2014-07-22 23:43:12 EDT LOG:  unexpected pageaddr 12/8D00 in log segment 0001001200D0, offset 0
This means that the server expected new WAL, but instead found an old
record, something that happens when a WAL file is recycled and is not
yet overwritten with new data. Did your archives get corrupted in some
way by a crash of the master?

 2014-07-22 23:43:28 EDT LOG:  restored log file 0001001200D0 from archive
 cp: cannot stat '/var/lib/postgresql/archive/0001001200D1': No such file or directory
 2014-07-22 23:43:28 EDT LOG:  unexpected pageaddr 12/A500 in log segment 0001001200D1, offset 0
Same here.

 Upon trying a /etc/init.d/postgresql restart
 2014-07-23 14:47:55 EDT LOG:  restored log file 0001001B001D from archive
 2014-07-23 14:47:55 EDT LOG:  consistent recovery state reached at 1B/1DFC64C0
 2014-07-23 14:47:55 EDT LOG:  database system is ready to accept read only connections
 2014-07-23 14:47:55 EDT LOG:  restored log file 0001001B001E from archive
 2014-07-23 14:47:56 EDT LOG:  startup process (PID 730) terminated by signal 7: Bus error
This may find its root in a hardware problem...

What is the version of 9.3 you are using btw?
-- 
Michael




Re: System shutdown signal on Windows (was Re: [GENERAL])

2014-07-23 Thread Kalai R
Thank you so much, Krystian Bigaj.

I have been suffering from this problem for the last 2 years, but today I
got a solution from you.
I am developing a .Net application with Postgres.
I am using a WCF service (hosted as a windows service) to connect to the
postgres database.

*My workaround: own service which will start/stop Postgres (mainly to wait
for our service to stop, running initdb.exe in context of NetworkService,
etc.).*
*I've also written small Postgres module loaded via
shared_preload_libraries which will call SetConsoleCtrlHandler, and my
HandlerRoutine which simply returns TRUE. Because of
this pg_console_handler won't be called (so no processing
CTRL_SHUTDOWN_EVENT by any Postgress process).*

* I need to wait first that our service stops, and then I let Postgres to
stop (I have my own service - not pg_ctl - which will start/stop
postgres.exe process).*


I have a similar situation to yours. I also want to create my own service
to start/stop postgres. Please let me know how to start/stop postgres
without pg_ctl. Could you share the code for your small postgres module
loaded via shared_preload_libraries?


Thanks & Regards
Kalai



On Thu, Jul 24, 2014 at 3:59 AM, Krystian Bigaj krystian.bi...@gmail.com
wrote:

 On 23 July 2014 22:16, Tom Lane t...@sss.pgh.pa.us wrote:

 Krystian Bigaj krystian.bi...@gmail.com writes:
  - when pg_console_handler receives CTRL_SHUTDOWN_EVENT from OS, then it
  calls pg_queue_signal(SIGINT).

  Problems:
  - when OS is in shutdown path, then it sends CTRL_SHUTDOWN_EVENT, and
 *all*
  Postgres processes (main and sub/forked) will call
 pg_queue_signal(SIGINT)
  - so main and sub processes will start to shutdown independently? Can
 this
  have any bad consequences?

 Hm.  We ought to have that sending SIGTERM instead, so as to mimic the
 situation when Unix init is trying to shut down the system.  It might be
 that SIGINT will more or less work, but the postmaster logic is designed
 to work with global SIGTERM as being the clean-up-ASAP trigger.  As an
 example, backends servicing remote applications (which will *not* have
 already gotten killed) would not exit in response to SIGINT.

  I think that CTRL_SHUTDOWN_EVENT should be removed from
 pg_console_handler,

 That does not sound like a good idea, at least not if Windows has the
 same behavior as init does of proceeding to hard kills after some
 grace period.

 regards, tom lane


 I'm not really familiar with Unix and its SIG-commands. I know only
 about SIGINT/SIGTERM from the Postgres documentation.

 However, from what I see, when Postgres is run by pg_ctl from a service,
 it will receive SIGINT (independently and, in general, in unspecified
 order):
 - *each* postgres.exe process will queue itself SIGINT (because of
 CTRL_SHUTDOWN_EVENT),
 - pg_ctl will send SIGINT to the main postmaster process (and possibly it
 will pass that command to sub-processes).
 So there are two independent paths where SIGINT is sent, and pg_ctl
 doesn't really have control over when postgres.exe receives SIGINT.
 This CTRL_SHUTDOWN_EVENT is not used when postgres.exe is run in a *user
 session* - so removing it won't change anything there.

 I see only two cases where CTRL_SHUTDOWN_EVENT might be needed (all of
 them where postgres.exe is run in a service session):
 - postgres.exe run by pg_ctl.exe, but the pg_ctl service process was
 terminated/killed, and then the system was shut down
 - someone starts postgres.exe from their own service, but doesn't send a
 SIGINT/SIGTERM command to postgres.exe on service system shutdown (though
 they must for service stop)
 As I previously wrote, I have a workaround for it, so if you think that
 this change would break compatibility and don't want to change it, then
 I'm really fine with it.

 However, I've probably found something in pg_ctl.c regarding shutdown,
 and maybe the cause of that suspicious postgres.exe process termination
 on Windows.

 1) shutdownEvent is signaled in pgwin32_ServiceMain
 by SERVICE_CONTROL_STOP/SERVICE_CONTROL_SHUTDOWN in pgwin32_ServiceHandler.
 There is dwWaitHint = 10000.

 2)
 ...
   /* Wait for quit... */
   ret = WaitForMultipleObjects(2, shutdownHandles, FALSE, INFINITE);

   pgwin32_SetServiceStatus(SERVICE_STOP_PENDING);
   switch (ret)
   {
     case WAIT_OBJECT_0: /* shutdown event */
       kill(postmasterPID, SIGINT);

       /*
        * Increment the checkpoint and try again. Abort after 12
        * checkpoints as the postmaster has probably hung.
        */
       while (WaitForSingleObject(postmasterProcess, 5000) == WAIT_TIMEOUT &&
              status.dwCheckPoint < 12)
         status.dwCheckPoint++;   /* <-- missing call to
                                     pgwin32_SetServiceStatus(SERVICE_STOP_PENDING)
                                     or SetServiceStatus(hStatus, (LPSERVICE_STATUS) &status) */
       break;
 ...

 There is an incremented dwCheckPoint every 5000 ms, but that status is not
 updated (missing pgwin32_SetServiceStatus/SetServiceStatus), so the SCM
 after 10s (dwWaitHint = 10000) will not receive the incremented
 dwCheckPoint, and it's allowed to kill that process (because this service
 didn't respond within dwWaitHint). It kills pg_ctl.exe, but 

Re: [GENERAL] Watching Views

2014-07-23 Thread David G Johnston
Nick Guenther wrote
 As you said, attaching the trigger to a view is useless (for  
 BEFORE/AFTER, which I'm interested in, also only works on statement  
 level changes, which I would rather not have to deal with). I tried  
 attaching my trigger to a materialized view and found that postgres  
 does not support that; as you said, I need to write a trigger on the  
 source to keep the materialized view and the source in sync. But in  
 that case I don't think a materialized view helps me at all, since  
 without triggers on it I can't even have it compute the diffs for me  
 (and I suspect that internally postgres simply truncates the old table  
 and refills it, so there would be nothing to hook anyway).
 
 My bottleneck is the size of my database and the network: I want to  
 take slices of a potentially gigabytes-large database and stream them  
 out to multiple clients. Thus I need to stream--not poll--for changes.  
 I think a materialized view would force me to poll for changes, and in  
 that case I would be doing REFRESH MATERIALIZED VIEW on each poll and  
 therefore not winning anything over a regualar view. Ideally, when an  
 update comes in to a parent table I want all views that have sliced it  
 to be notified; I'm interested in doing dataflow* programming,  
 essentially. Is there maybe some sort of extension that does  
 dependency tracking inside of postgres?

While PostgreSQL has materialized view functionality built in, the concept is
general and can be done manually. Instead of the main table having the link
to the FIFO, I was thinking you would replicate record changes to all
active subset tables, and then triggers on those tables would send the
relevant changes out to the world.

Keep in mind you can attach a WHEN clause to your trigger, and I think you
can pass in arguments to it as well (see the sketch below). You should have
one trigger per view attached to the source table - though with good
meta-data and some imposed limits you can probably pool some clients into
the same stream.
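
A sketch of that, reusing the actors/view13131 example from earlier in the
thread (notify_watcher is a hypothetical trigger function, and the WHEN
condition stands in for whatever the view's filter is):

CREATE TRIGGER watch_view13131
AFTER INSERT OR UPDATE ON actors
FOR EACH ROW
WHEN (NEW.age < 22)
EXECUTE PROCEDURE notify_watcher('view13131');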

Database size is less of an issue compared to the change rate of the affected
table(s).  Triggers let you plug into the change stream.

You could even cascade the triggers so less restrictive filters are grouped
together at one layer and those materialized views then forward to other
tables with more restrictive filters.

If you make these other tables UNLOGGED you should be able to mitigate the
performance hit somewhat.

Beyond that if your views have common and reasonably broad high-level
filters you should consider both in-database partitioning and multiserver
sharding.

The client, not the trigger, should create the FIFO.  If the FIFO is
unusable the trigger should update a control table and a monitor process
should remove that trigger at the next scan.  This should then be extended
to provide control over the addition and removal of viewers and their
corresponding schematic objects.

The view tables also help avoid the problem since then even if a FIFO write
fails you have an idea of what should have been, but was not, written and
can cause it to be written later once the client is listening.

Before stock 9.4, triggers are all you've got. 9.4 gives logical WAL
decoding, but I have no clue how that works. There may be solutions via
third-party tools like Slony, but I am not familiar with them; they focus
on direct database replication.

Mostly theorizing as I've never actually coded this kind of process...so
some of these ideas may not pan out.

David J.





