Re: [GENERAL] The case of PostgreSQL on NFS Server (II)

2015-04-03 Thread Octavi Fors
Hi David, John et al.,

as far as I understand eSATA is not an option for me. First, because I
 already have bought the DS2415+ NAS, which does not have eSATA I/O :(
 And second, because I need the database to be accessible from two
 computers on the same LAN.


 This is new - and while the desire to use your multi-terabyte device
 may make the risk of running over NFS potentially worthwhile, there is no
 reasonable way to have two running databases share data files.  You can
 share the NAS device and have each PostgreSQL instance consume an unshared
 portion of its resources.

 You appear either to be misspeaking or omitting potentially critical
 details here...


Oops, sorry, yes: I think I misspoke when explaining my second reason
for not choosing eSATA.
My situation is the following:

-Two computers (C1 & C2) and the NAS (with no eSATA I/O) on the same LAN.
-C1 acquires images from a telescope and periodically stores them via NFS
in the NAS (no database involved here, just the ext4 filesystem).
-C2 is a 12-core Xeon-class server designed to analyze the stored images in
the NAS and compute astrometry & photometry measurements (catalogs & light
curves) for every star & image. These measurements are inserted in the
catalogs database inside the NAS.

Therefore there's only *one* computer (C2) which will run the PostgreSQL
server with the tablespace on the NAS.

So does this approach sound feasible if the NFS parameters are set
properly?

I have a small NAS, but my main 2 TB disk is eSATA. My fstab entry for
 the NAS just looks like:
 smartstor1:/VOLUME1/DV7T /SmartStor1/Volume1/DV7T nfs
 user,rw,noauto,noexec,nosuid 0 0
 which is basically what you have, performance-wise, as best as I can tell.


I saw this message on this list:
http://www.postgresql.org/message-id/4c24352d.7060...@postnewspapers.com.au
It suggests using the NFS parameters 'sync' (to synchronize changes to a
file) and 'noac' (to disable attribute caching).

Could you confirm that
nas_ip:/volume1/data /home/ofors/Documents/nas nfs noac,sync
would be good options for /etc/fstab?

Any additional NFS parameter?
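
For concreteness, here is the full line I am considering; 'hard' (retry
I/O indefinitely rather than erroring out) and 'noatime' are options I have
seen suggested elsewhere, not something I have tested:

    nas_ip:/volume1/data /home/ofors/Documents/nas nfs rw,hard,sync,noac,noatime 0 0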


Note, the most important aspect of that fact is that your WAL gets written
 to your data directory and not to the tablespace on which the database
 tables reside (i.e., WAL does not make it to the NAS unless you set up WAL
 shipping).


This is the first time I've heard about the importance of WAL placement
with NFS. I googled some and found this discussion about the topic:
http://stackoverflow.com/questions/19047954/where-is-the-postgresql-wal-located-how-can-i-specify-a-different-path
Any ideas on how the options they mention (archive_mode?) relate to the
NAS or /etc/fstab?
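
From what I could gather, those settings go in postgresql.conf rather than
/etc/fstab; something like the following seems to be what they mean (paths
assumed, and untested on my side):

    # postgresql.conf (9.2-era settings)
    wal_level = archive
    archive_mode = on
    # copy each completed WAL segment to the NAS unless it is already there
    archive_command = 'test ! -f /home/ofors/Documents/nas/wal/%f && cp %p /home/ofors/Documents/nas/wal/%f'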

Thanks a lot in advance,
-- 
Octavi Fors


On Thu, Apr 2, 2015 at 8:47 PM, David G. Johnston 
david.g.johns...@gmail.com wrote:

 On Thu, Apr 2, 2015 at 5:09 PM, Octavi Fors oct...@live.unc.edu wrote:

 Thanks John for your extensive and helpful response.

 I have a NAS box. But I would worry about responsiveness. What is
 better, IMO, is an external SATA-connected DAS box. DAS is Direct
 Attached Storage. Many PCs have an eSATA port on the back.


 as far as I understand eSATA is not an option for me. First, because I
 already have bought the DS2415+ NAS, which does not have eSATA I/O :(
 And second, because I need the database to be accessible from two
 computers on the same LAN.


 This is new - and while the desire to use your multi-terabyte device
 may make the risk of running over NFS potentially worthwhile, there is no
 reasonable way to have two running databases share data files.  You can
 share the NAS device and have each PostgreSQL instance consume an unshared
 portion of its resources.

 You appear either to be misspeaking or omitting potentially critical
 details here...

 1) could you confirm that I don't have to mount --bind
 /var/lib/postgresql/9.2/main ?


 /var/lib/... is not on the NAS but, likely, on whatever your primary
 internal hard drive is.  Note, the most important aspect of that fact is
 that your WAL gets written to your data directory and not to the tablespace
 on which the database tables reside (i.e., WAL does not make it to the NAS
 unless you set up WAL shipping).

 2) in my /etc/fstab, here is the current line for my NAS:
 nas_ip:/volume1/data /home/ofors/Documents/nas nfs

 Which NFS client and server (on NAS side) options/arguments do you
 suggest for optimizing performance? Or in other words, for minimizing
 database corruption in case of NAS (note that NAS drives are in RAID6) or
 computer failure?


 I am a little out of my league here but the main risk is that incomplete
 data is sent/received by/from the NAS.  Once the data is in the NAS it is
 really no different than any other storage medium in terms of durability
 concerns. I do not really know how checkpoints and transient failed reads
 interact with PostgreSQL and what circumstances would prevent properly
 recorded WAL from being used to restore should a 

Re: [GENERAL] Why doesn't `RAISE EXCEPTION` provide error context?

2015-04-03 Thread Alban Hertroys
On 2 April 2015 at 19:15, Taylor Brown tay...@youneedabudget.com wrote:

 So, I would rather put a check like this at the top of my function:

 --
 important_variable = (p_request::json->>'important_variable')::integer;
 IF (important_variable is NULL) THEN RAISE EXCEPTION 'important_variable
 must not be NULL.'; END IF;
 --

 But I won't be able to get the context for that exception, and all I'll be
 able to return from the function or write to the logs is
 'important_variable must not be NULL.'. If that's the only place I throw
 that error, I'll know where to look. Otherwise, I have no context, and
 won't be able to determine where my exception was thrown. So I'm actually
 better off _not_ throwing my own custom exceptions, even though I would
 prefer to be more defensive about this sort of thing in my code.

 I thought I might be able to trick postgres into throwing another
 unrelated exception that would not only include my custom error message,
 but allow me to extract the context, telling me the function where I should
 begin debugging.


Just a quick idea, but... Perhaps you could put a constraint on those JSON
messages indirectly, by mapping them to some kind of template table?
The top level of a JSON message can usually be mapped to a table, and a
table can have constraints and, for example, a BEFORE INSERT trigger that
always returns NULL so that the data is not actually inserted into the
table.
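
Roughly like this (untested, and all names are made up; note that a BEFORE
trigger returning NULL skips the insert and therefore the table's own
constraint checks, so the NULL check lives in the trigger, while the casts
performed when building the row still raise, with context, on bad values):

    CREATE TABLE request_template (
        important_variable integer
    );

    CREATE FUNCTION request_template_discard() RETURNS trigger AS $$
    BEGIN
        IF NEW.important_variable IS NULL THEN
            RAISE EXCEPTION 'important_variable must not be NULL.';
        END IF;
        RETURN NULL;  -- validation passed; skip the actual insert
    END;
    $$ LANGUAGE plpgsql;

    CREATE TRIGGER request_template_discard
        BEFORE INSERT ON request_template
        FOR EACH ROW EXECUTE PROCEDURE request_template_discard();

    -- Validating a payload then becomes an INSERT that stores nothing:
    -- INSERT INTO request_template
    --     SELECT * FROM json_populate_record(NULL::request_template, p_request::json);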

-- 
If you can't see the forest for the trees,
Cut the trees and you'll see there is no forest.


Re: [GENERAL] quick q re execute scope of new

2015-04-03 Thread Andrew J. Kopciuch
On April 2, 2015, Scott Ribe wrote:
 On Apr 2, 2015, at 10:14 PM, Adrian Klaver adrian.kla...@aklaver.com wrote:
  EXECUTE 'insert into ' || quote_ident(tblname) || ' values(' || new.* ||
  ')'

 Not that easy, strings are not quoted correctly, and null values are blank.
 Might be a function to translate new.* into a string as needed for this
 use, but I found another way based on Tom's suggestion:

 execute('insert into ' || tblnm || ' select $1.*') using new;


I've done something similar in triggers for partition schemes, something like this:

EXECUTE 'INSERT INTO ' || partitionName || ' (SELECT ( masterTableName ' || 
quote_literal(NEW) || ').*)';

I can't remember the reference I found online that helped me get there, though.
The key is applying quote_literal to NEW and casting it to a compatible
type.
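
Fleshed out as a trigger function, it would be something like this (from
memory, with made-up names, so treat it as a sketch rather than tested code):

    CREATE OR REPLACE FUNCTION master_insert_router() RETURNS trigger AS $$
    DECLARE
        -- normally computed from NEW (e.g., by date); hard-coded here
        partition_name text := 'master_y2015m04';
    BEGIN
        -- round-trip NEW through a quoted row literal cast to the parent type
        EXECUTE 'INSERT INTO ' || quote_ident(partition_name)
             || ' SELECT (master ' || quote_literal(NEW) || ').*';
        RETURN NULL;  -- keep the row out of the parent table itself
    END;
    $$ LANGUAGE plpgsql;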

HTH,


Andy






Re: [GENERAL]

2015-04-03 Thread Pavel Stehule
Hi

http://stackoverflow.com/questions/4477301/creating-jobs-and-schedules-programatically-with-pgagent

regards

Pavel Stehule

2015-04-03 11:27 GMT+02:00 Ramesh T rameshparnandit...@gmail.com:

 Hi,
   How do I create a job in pgAgent? Where do I need to place the script in
 pgAgent? Any help appreciated.

 Thanks in advance...




Re: [GENERAL] Serializable transaction restart/re-execute

2015-04-03 Thread Bill Moran
On Fri, 3 Apr 2015 15:35:14 +0100
Filipe Pina filipe.p...@impactzero.pt wrote:

 Hello,
 
 I come from a GTM background and one of the transactional features there is
 “Transaction Restarts”.

 A transaction restart is when we have two concurrent processes reading/writing
 to the same region/table of the database: the last process to commit will
 “see” that the database is not the same as it was when the transaction
 started, and goes back to the beginning of the transactional code and
 re-executes it.

 The closest I found to this in PGSQL is the Serializable transaction
 isolation mode, and it does seem to work well except that it simply throws
 an error (serialization_failure) instead of restarting.

 I’m trying to make use of this exception to implement restartable functions
 and I have all the examples and conditions mentioned here in a question on
 SO (without any answer so far…):

 http://stackoverflow.com/questions/29372202/postgresql-generic-handler-for-serialization-failure

 So basically I have two questions:
 - the restartable “wrapper” function never gets its “DB view” refreshed once
 it restarts; I assume it’s because of the outer transaction (at function
 level), so it never re-reads the new values and keeps failing with
 serialization_failure. Any way to solve this?
 - the ideal would be to be able to define this at database level so I
 wouldn’t have to implement wrappers for all functions. Implementing a
 “serialization_failure” generic handler that would simply re-call the
 function that threw that exception (up to a number of tries). Is this
 possible without going into pgsql source code?

I suspect that savepoints will accomplish what you want:
http://www.postgresql.org/docs/9.4/static/sql-savepoint.html
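
Something along these lines (untested, and given your snapshot observation
it may still turn out that the whole transaction has to be restarted rather
than just the work since the savepoint):

    BEGIN ISOLATION LEVEL SERIALIZABLE;
    SAVEPOINT attempt;
    UPDATE accounts SET balance = balance - 100 WHERE id = 1;  -- example work
    -- if that raised SQLSTATE 40001 (serialization_failure):
    ROLLBACK TO SAVEPOINT attempt;
    -- ...re-issue the statements, then...
    COMMIT;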

-- 
Bill Moran




Re: [GENERAL] could not split GIN page; no new items fit

2015-04-03 Thread Tom Lane
Chris Curvey ch...@chriscurvey.com writes:
 Hmm, I'm trying to create a gin index, thusly:
 create index foo_idx on foo using gin(entry gin_trgm_ops);

 and I'm getting the error could not split GIN page; no new items fit

 Any idea what this means, or how I can get around it?

Looks to me like a bug (ie, the code seems to think this is a can't-happen
case).  Don't suppose you could supply sample data that triggers this?

regards, tom lane




Re: [GENERAL] Overlap function for hstore?

2015-04-03 Thread David G. Johnston
On Fri, Apr 3, 2015 at 5:37 PM, Stefan Keller sfkel...@gmail.com wrote:

 Hi,

 I'd like to get an overlap function similar to '&&' but for key-value
 pairs of hstore!

 This underfits:

 postgres=# select hstore_to_array('a=>1,b=>2,c=>3'::hstore) &&
 hstore_to_array('a=>2,d=>4,b=>2'::hstore)

 ...because the array overlaps op takes every element (even 'a' or '2'
 alone) and doesn't test for key and value together like in 'b=>2'!

 Any clever ideas?


Not tested!

SELECT hstore(array_agg(key), array_agg(value)) FROM (
SELECT each(...)
INTERSECT
SELECT each(...)​
) src
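
I believe this variant actually runs; each() is expanded with (...).* so
that key and value become real columns, using the values from your example:

    SELECT hstore(array_agg(key), array_agg(value)) FROM (
        SELECT (each('a=>1,b=>2,c=>3'::hstore)).*
        INTERSECT
        SELECT (each('a=>2,d=>4,b=>2'::hstore)).*
    ) src;
    -- result: "b"=>"2"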

David J.


[GENERAL] could not split GIN page; no new items fit

2015-04-03 Thread Chris Curvey
Hmm, I'm trying to create a gin index, thusly:

create index foo_idx on foo using gin(entry gin_trgm_ops);

and I'm getting the error could not split GIN page; no new items fit

Any idea what this means, or how I can get around it?  The table in
question has about 23MM rows, if that makes any difference.  The only
reference that search engines returned was the source code.

select version()

PostgreSQL 9.4.1 on x86_64-unknown-linux-gnu, compiled by gcc (Ubuntu
4.9.1-16ubuntu6) 4.9.1, 64-bit

-Chris

-- 
If money can fix it, it's not a problem.  - Tom Magliozzi


Re: [GENERAL] Overlap function for hstore?

2015-04-03 Thread Paul Jungwirth

 This underfits:

 postgres=# select hstore_to_array('a=>1,b=>2,c=>3'::hstore) &&
 hstore_to_array('a=>2,d=>4,b=>2'::hstore)

 ...because the array overlaps op takes every element (even 'a' or '2'
 alone) and doesn't test for key and value together like in 'b=>2'!

How about hstore_to_matrix? Then you have a tuple for each key/value pair.

Paul






[GENERAL] Overlap function for hstore?

2015-04-03 Thread Stefan Keller
Hi,

I'd like to get an overlap function similar to '&&' but for key-value
pairs of hstore!

This underfits:

postgres=# select hstore_to_array('a=>1,b=>2,c=>3'::hstore) &&
hstore_to_array('a=>2,d=>4,b=>2'::hstore)

...because the array overlaps op takes every element (even 'a' or '2'
alone) and doesn't test for key and value together like in 'b=>2'!

Any clever ideas?

Yours, Stefan




Re: [GENERAL] quick q re execute scope of new

2015-04-03 Thread Adrian Klaver

On 04/02/2015 09:59 PM, Scott Ribe wrote:

On Apr 2, 2015, at 10:14 PM, Adrian Klaver adrian.kla...@aklaver.com wrote:


EXECUTE 'insert into ' || quote_ident(tblname) || ' values(' || new.* || ')'


Not that easy, strings are not quoted correctly, and null values are blank. 
Might be a function to translate new.* into a string as needed for this use, 
but I found another way based on Tom's suggestion:


My mistake for grabbing off the top of my head without testing my code.



execute('insert into ' || tblnm || ' select $1.*') using new;




--
Adrian Klaver
adrian.kla...@aklaver.com




[GENERAL] Serializable transaction restart/re-execute

2015-04-03 Thread Filipe Pina
Hello,

I come from a GTM background and one of the transactional features there is
“Transaction Restarts”.

A transaction restart is when we have two concurrent processes reading/writing
to the same region/table of the database: the last process to commit will
“see” that the database is not the same as it was when the transaction
started, and goes back to the beginning of the transactional code and
re-executes it.

The closest I found to this in PGSQL is the Serializable transaction isolation
mode, and it does seem to work well except that it simply throws an error
(serialization_failure) instead of restarting.

I’m trying to make use of this exception to implement restartable functions and
I have all the examples and conditions mentioned here in a question on SO
(without any answer so far…):

http://stackoverflow.com/questions/29372202/postgresql-generic-handler-for-serialization-failure

So basically I have two questions:
- the restartable “wrapper” function never gets its “DB view” refreshed once it
restarts; I assume it’s because of the outer transaction (at function level),
so it never re-reads the new values and keeps failing with
serialization_failure. Any way to solve this?
- the ideal would be to be able to define this at database level so I wouldn’t
have to implement wrappers for all functions. Implementing a
“serialization_failure” generic handler that would simply re-call the function
that threw that exception (up to a number of tries). Is this possible without
going into pgsql source code?

Thanks,
Filipe

Re: [GENERAL] The case of PostgreSQL on NFS Server (II)

2015-04-03 Thread John McKown
On Fri, Apr 3, 2015 at 1:40 AM, Octavi Fors oct...@live.unc.edu wrote:
 Hi David, John et al.,
snip
 Oops, sorry, yes: I think I misspoke when explaining my second reason
 for not choosing eSATA.
 My situation is the following:

 -Two computers (C1 & C2) and the NAS (with no eSATA I/O) on the same LAN.
 -C1 acquires images from a telescope and periodically stores them via NFS in
 the NAS (no database involved here, just the ext4 filesystem).
 -C2 is a 12-core Xeon-class server designed to analyze the stored images in
 the NAS and compute astrometry & photometry measurements (catalogs & light
 curves) for every star & image. These measurements are inserted in the
 catalogs database inside the NAS.

 Therefore there's only *one* computer (C2) which will run the PostgreSQL
 server with the tablespace on the NAS.

 So does this approach sound feasible if the NFS parameters are set
 properly?

OK, it is very understandable why the images are on the NAS. It is the
easiest way to share them. I guess you want the DB on the NAS simply
because you don't have sufficient disk space on the disks connected to
C2.


snip
 Could you confirm that
 nas_ip:/volume1/data /home/ofors/Documents/nas nfs noac,sync
 would be good options for /etc/fstab?

 Any additional NFS parameter?

Have you done a web search on NFS Performance? I got some good hits
with Google.
http://www.tldp.org/HOWTO/NFS-HOWTO/performance.html  (a bit old, I've
been told)
http://www.cyberciti.biz/faq/linux-unix-tuning-nfs-server-client-performance/

It mentions noatime and nodiratime to avoid updating the last-access
date/time on files & directories, saving bandwidth. A good explanation
of the NFS mount options, IMO, is on this site:
http://www.dbaexpert.com/blog/nfs-options-for-performance/

Some interesting speed test code:
https://github.com/sabujp/nfsSpeedTest
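
As a quick first check before any of those tools, you can time a large
sequential write against the mount point with dd (using the path from your
fstab line; adjust as needed):

    # write 1 GiB to the NFS mount, forcing it to storage before dd exits
    dd if=/dev/zero of=/home/ofors/Documents/nas/ddtest bs=1M count=1024 conv=fdatasync
    rm /home/ofors/Documents/nas/ddtest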




snip
 Thanks a lot in advance,
 --
 Octavi Fors

-- 
If you sent twitter messages while exploring, are you on a textpedition?

He's about as useful as a wax frying pan.

10 to the 12th power microphones = 1 Megaphone

Maranatha! 
John McKown

