Re: [GENERAL] Request - repeat value of \pset title during \watch iterations

2016-03-09 Thread David G. Johnston
Adding -hackers for consideration in the Commitfest.

Thanks!

David J.

>>>Original request by me

http://www.postgresql.org/message-id/CAKFQuwZqjz-je3Z=8jdodym3jm-n2ul4cuqy5vh8n75e5v1...@mail.gmail.com

When executing a query using \watch in psql the first execution of the
query includes "Title is [...]" when \pset title is in use.  Subsequent
executions do not.  Once that first display goes off-screen the information
in the title is no longer readily accessible.  If using \watch for a
long-running monitoring query it can be helpful to incorporate some context
information into the title.
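
For concreteness, a minimal psql session of the kind being described (the
query and title here are only illustrative):

    \pset title 'connections per database'
    SELECT datname, count(*) FROM pg_stat_activity GROUP BY datname \watch 5

Once the initial 'Title is ...' confirmation scrolls off-screen, each
iteration shows only the "Watch every 5s" header, so the context carried by
the title is lost.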

-- Forwarded message --
From: Michael Paquier 
Date: Thu, Jan 28, 2016 at 6:01 AM
Subject: Re: [GENERAL] Request - repeat value of \pset title during \watch
iterations
To: "David G. Johnston" 
Cc: Tom Lane , "pgsql-general@postgresql.org" <
pgsql-general@postgresql.org>


On Thu, Jan 28, 2016 at 1:54 PM, David G. Johnston
 wrote:
> On Wed, Jan 27, 2016 at 9:13 PM, Michael Paquier <michael.paqu...@gmail.com>
> wrote:
>>
>> On Thu, Jan 28, 2016 at 9:34 AM, David G. Johnston
>>  wrote:
>> > So how about:
>> >
>> > + snprintf(title, strlen(myopt.title) + 50,
>> > +          _("Watch every %lds\t%s\t%s"),
>> > +          sleep, head_title, asctime(localtime(&timer)));
>>
>> I would just keep the timestamp and the title separated so what do you
>> think about that instead?
>> Watch every Xs   $timestamp
>> $head_title
>
>
> That works.  I like having the title immediately above the table.
>
> The other option that came to mind would be to place the time information
> after the table display while leaving the title before it.  On an output
> that requires more vertical space than is available in the terminal one
> would no longer have to scroll up to confirm last execution time.  If doing
> this I'd probably get rid of any logic that attempts to center the time
> information on the table and simply leave it left-aligned.

And the example:
OK, attached is an updated patch. How does that look?

   Watch every 5s    Fri Jan 29 13:06:31 2016

This is a medium length title
 repeat
--------
 
(1 row)
diff --git a/src/bin/psql/command.c b/src/bin/psql/command.c
index 9750a5b..3241d27 100644
--- a/src/bin/psql/command.c
+++ b/src/bin/psql/command.c
@@ -3020,7 +3020,8 @@ static bool
 do_watch(PQExpBuffer query_buf, long sleep)
 {
 	printQueryOpt myopt = pset.popt;
-	char		title[50];
+	char		 *title;
+	char		 *head_title = NULL;
 
 	if (!query_buf || query_buf->len <= 0)
 	{
@@ -3034,6 +3035,18 @@ do_watch(PQExpBuffer query_buf, long sleep)
 	 */
 	myopt.topt.pager = 0;
 
+	/*
+	 * Take into account any title present in the user setup as a part of
+	 * what is printed for each iteration by using it as a header.
+	 */
+	if (myopt.title)
+	{
+		title = pg_malloc(strlen(myopt.title) + 50);
+		head_title = pg_strdup(myopt.title);
+	}
+	else
+		title = pg_malloc(50);
+
 	for (;;)
 	{
 		int			res;
@@ -3045,8 +3058,13 @@ do_watch(PQExpBuffer query_buf, long sleep)
 		 * of completion of the command?
 		 */
 		timer = time(NULL);
-		snprintf(title, sizeof(title), _("Watch every %lds\t%s"),
-				 sleep, asctime(localtime(&timer)));
+		if (head_title)
+			snprintf(title, strlen(head_title) + 50,
+					 _("Watch every %lds\t%s\n%s"),
+					 sleep, asctime(localtime(&timer)), head_title);
+		else
+			snprintf(title, 50, _("Watch every %lds\t%s"),
+					 sleep, asctime(localtime(&timer)));
 		myopt.title = title;
 
 		/* Run the query and print out the results */
@@ -3059,7 +3077,11 @@ do_watch(PQExpBuffer query_buf, long sleep)
 		if (res == 0)
 			break;
 		if (res == -1)
+		{
+			pg_free(title);
+			pg_free(head_title);
 			return false;
+		}
 
 		/*
 		 * Set up cancellation of 'watch' via SIGINT.  We redo this each time
@@ -3084,6 +3106,8 @@ do_watch(PQExpBuffer query_buf, long sleep)
 		sigint_interrupt_enabled = false;
 	}
 
+	pg_free(title);
+	pg_free(head_title);
 	return true;
 }
 



Re: [GENERAL] Plan to support predicate push-down into subqueries with aggregates?

2016-03-09 Thread David G. Johnston
On Wednesday, March 9, 2016, rob stone  wrote:

> On Wed, 2016-03-09 at 12:25 -0500, Adam Brusselback wrote:
>
> Note that:- (a) I included header_description in the definition.
> (b) Removed some lines as if you want the total you may as well
> include it in your select from the view, and for the life of me I
> couldn't understand the purpose of:-
>
> header.amount = coalesce(detail_1.amount, 0) +
> coalesce(detail_2.amount, 0) as balanced
>
> Is "balanced" supposed to be a boolean?
>
> If you need header.amount include it in the view.
>
>
It's basically accounting.  Does the total amount on the header
(header.amount) equal the sum of the two components (detail_1.amount +
detail_2.amount)?  It returns true or false based upon that equality.  So
not only do you need to compute detail_1.amount, you also need to
immediately use it as part of an expression, and return both usages from
the view.
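
Spelled out, the two usages would look roughly like this (a fragment of the
view's select list; detail_1 and detail_2 are the aggregated subqueries from
Adam's original script, which is not shown here in full):

    -- expose the component total itself ...
    coalesce(detail_1.amount, 0) + coalesce(detail_2.amount, 0) AS amount,
    -- ... and immediately reuse it in a boolean comparison
    header.amount = coalesce(detail_1.amount, 0)
                  + coalesce(detail_2.amount, 0) AS balanced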

David J.


Re: [GENERAL] Plan to support predicate push-down into subqueries with aggregates?

2016-03-09 Thread rob stone
On Wed, 2016-03-09 at 12:25 -0500, Adam Brusselback wrote:
> I responded yesterday, but it seems to have gotten caught up because
> it was too big with the attachments... Here it is again.
> 
> Sorry about not posting correctly, hopefully I did it right this
> time.
> 
> So I wanted to see if Sql Server (2014) could handle this type of
> query differently than Postgres (9.5.1), so I got an instance of
> express installed and ported the test script to it.
> 
> I updated my Postgres script so the data is the same in each server. 
> The end result is Sql Server seems to be able to optimize all of
> these queries MUCH better than Postgres.
> I disabled parallelism in Sql Server to make the comparison fair.
> 
> I've attached the explain analyze results for Postgres, and the
> execution plan for Sql Server (in picture form... don't know a better
> way)
> 
> Results are:
> --Sql Server:15ms average
> --Postgres: 6ms average
> SELECT *
> FROM header
> INNER JOIN header_total
> ON header.header_id = header_total.header_id
> WHERE header.header_id = 26;
> 
> 
> --Sql Server: 15ms average
> --Postgres: 1250ms average
> SELECT *
> FROM header
> INNER JOIN header_total
> ON header.header_id = header_total.header_id
> WHERE header.header_id < 27
> AND header.header_id > 24;
> 
> 
> --Sql Server: 567ms average
> --Postgres: 1265ms average
> SELECT *
> FROM header
> INNER JOIN header_total
> ON header.header_id = header_total.header_id
> WHERE header.description like '%5%';
> 
> 
> --Sql Server: 15ms average
> --Postgres: 1252ms average
> SELECT *
> FROM header_total
> WHERE header_total.header_id IN (
> SELECT header_id
> FROM header
> WHERE header.header_id < 27
> AND header.header_id > 24);
> 


Hello Adam,

I can't believe that something originating from the dark side can do
things better, so I went through your test case, running it on my trusty
Compaq Presario with 5 GB of RAM and Postgres 9.5.1.

1) Created your tables but re-named the column "description" as
"header_description". header has 1,000 rows, detail_1 has 400,391 rows
and detail_2 has 3,451 rows.
2) altered your index definitions to make them unique by including
their primary keys. (I have a penchant for unique indexes).
3) Ran the ANALYZE.
4) Shut down the database so as to clear the cache.
5) A while later started the database and created this view:-

CREATE OR REPLACE VIEW header_total AS
SELECT header.header_id, header.header_description,
       amount_1, detail_1_count,
       amount_2, detail_2_count
FROM header
LEFT JOIN (
    SELECT header_id, SUM(rate * quantity) AS amount_1,
           COUNT(detail_1_id) AS detail_1_count
    FROM detail_1
    GROUP BY detail_1.header_id
) detail_1 ON header.header_id = detail_1.header_id
LEFT JOIN (
    SELECT header_id, SUM(amount) AS amount_2,
           COUNT(detail_2_id) AS detail_2_count
    FROM detail_2
    GROUP BY detail_2.header_id
) detail_2 ON header.header_id = detail_2.header_id;

Note that:-
    (a) I included header_description in the definition.
    (b) I removed some lines, since if you want the total you may as well
include it in your select from the view. And for the life of me I couldn't
understand the purpose of:-

header.amount = coalesce(detail_1.amount, 0) +
coalesce(detail_2.amount, 0) as balanced

Is "balanced" supposed to be a boolean?

If you need header.amount, include it in the view.


6) Ran your three queries and here are the timings from the log:-

2016-03-10 13:07:47 AEDTLOG:  duration: 0.221 ms  parse :
SELECT * FROM header_total WHERE header_id = 26
2016-03-10 13:07:47 AEDTLOG:  duration: 0.551 ms  bind :
SELECT * FROM header_total WHERE header_id = 26
2016-03-10 13:07:47 AEDTLOG:  duration: 1.103 ms  execute :
SELECT * FROM header_total WHERE header_id = 26
2016-03-10 13:07:54 AEDTLOG:  duration: 0.180 ms  parse :
SELECT * FROM header_total WHERE header_id < 200
2016-03-10 13:07:54 AEDTLOG:  duration: 0.481 ms  bind :
SELECT * FROM header_total WHERE header_id < 200
2016-03-10 13:07:55 AEDTLOG:  duration: 458.418 ms  execute :
SELECT * FROM header_total WHERE header_id < 200
2016-03-10 13:08:01 AEDTLOG:  duration: 0.230 ms  parse :
SELECT * FROM header_total WHERE header_description like '%5%'
2016-03-10 13:08:01 AEDTLOG:  duration: 0.542 ms  bind :
SELECT * FROM header_total WHERE header_description like '%5%'
2016-03-10 13:08:01 AEDTLOG:  duration: 459.346 ms  execute :
SELECT * FROM header_total WHERE header_description like '%5%'

I don't believe that (0.230 ms + 0.542 ms + 459.346 ms) could be
described as "slow" when it returns 271 rows.


Obviously it would help if there were more details about your
application. Also creating the test data via those bulk inserts doesn't
replicate any randomness that may occur via inserts made by an
application.

BTW, I'm a great fan of using views. 

HTH,
Rob




[GENERAL] PostgreSQL crashed, whole PC not responding

2016-03-09 Thread Kiswono Prayogo
Hi, I upgraded to PostgreSQL 9.5.1 on 64-bit Arch Linux (Linux
4.4.1-2-ARCH) around 4 days ago (previously we used 9.4.x), and PostgreSQL
crashed and took the whole PC down, with this last log from journalctl:

Mar 09 23:31:17 asd kernel: BUG: Bad page state in process postgres
 pfn:d18515
Mar 09 23:31:17 asd kernel: page:ea0034614540 count:0 mapcount:0
mapping:  (null) index:0x1
Mar 09 23:31:17 asd kernel: flags: 0x2fffe00()
Mar 09 23:31:17 asd kernel: page dumped because: page still charged to
cgroup
Mar 09 23:31:17 asd kernel: page->mem_cgroup:0002

Is there any possible cause of this? Or how can I track down the cause?
The only logs I have are journalctl and log_min_duration_statement = 2000
(which shows nothing).

-- 
Best Regards,
Kiswono P / SoftDev
GB


Re: Email address VERP problems (was RE: [GENERAL] Does a call to a language handler provide a context/session, and somewhere to keep session data?

2016-03-09 Thread david
From: pgsql-general-ow...@postgresql.org 
[mailto:pgsql-general-ow...@postgresql.org] On Behalf Of Magnus Hagander
Sent: Thursday, 10 March 2016 6:18 AM
To: Alvaro Herrera 
Cc: David Bennett ; pgsql-general@postgresql.org
Subject: Re: Email address VERP problems (was RE: [GENERAL] Does a call to a 
language handler provide a context/session, and somewhere to keep session data?

 

On Wed, Mar 9, 2016 at 7:03 PM, Alvaro Herrera wrote:

David Bennett wrote:
> > ow...@postgresql.org  ] On Behalf Of Alvaro 
> > Herrera
>
> > On that subject.  I noticed that Outlook seems to add the "return
> > path"
> > addresses (sometimes called bounce address or envelope sender) to the
> > CC header, which sets a new record in the stupidity scale.  Since we
> > use VERP, each message gets a different return path address, so with
> > each reply you make, Outlook adds a new address to the CC.
>
> Interesting. I use a lot of mailing lists and I've not run across one
> actually using VERP before. Is it becoming more frequent?

Not sure if it's becoming more frequent -- I only manage *this* list
server and we enabled VERP several years ago.  I thought it was common
practice ... the idea of manually managing addresses that bounce seems
completely outdated now.

It's been frequent for quite some time.

> I checked the headers. It seems this list is using a VERP address for both
> the Return-path and the Reply-To, and only the Sender identifies the list
> directly.

I'm pretty sure our list server is not setting the VERP address in
Reply-To.  That would be insane, wouldn't it.  We don't touch the
Reply-To header at all.  Maybe some other program along the way modifies
the email before Outlook gets it?

Yeah, same here.

However, if you look at the thread, it seems the VERP address was added to the 
*original email*. In the To field. Perhaps that's what confused the MUA into 
adding *another* VERP address on the reply?

That happens when I hit reply to.

Then AFAICT in
http://www.postgresql.org/message-id/raw/001601d17852$7bea9e80$73bfdb80$@pfxcorp.com
a second VERP address was added to the mail (very likely the 485 one).

To me it looks like this was definitely done by the MTA or MUA at
pfxcorp.com. The archived copy (which is delivered the exact same way as a
general email, it doesn't have any shortcut) does not contain this address
anywhere; it was only used as an envelope sender. Possibly it got confused
by the other VERP address in the initial email, which AFAICT is a manual
mistake.

That’s possible. We have a non-standard mail server (as you can see from
the headers). I’ll check.

Regards

David M Bennett FACS

Andl - A New Database Language - andl.org



Re: [GENERAL] Re: "missing chunk number XX for toast value YY in pg_toast ..." after pg_basebackup.

2016-03-09 Thread Adrian Klaver

On 03/09/2016 10:41 AM, fred...@huitfeldt.com wrote:

Hi Adrian,

thank you very much for your patience. I apologise for the missing
information.

On 9 March 2016 16:13:00 +01:00, Adrian Klaver
 wrote:

On 03/09/2016 04:56 AM, fred...@huitfeldt.com
 wrote:

Hi Adrian,

thank you very much for your response.

I ran the "VACUUM ANALYZE" command on the master node.

Regarding log messages.

Here is the contents of the log (excluding
connections/disconnections):


Assuming the below is from the replica database.

the "LOG: recovery was paused" message was indeed from the replica.



2016-02-22 02:30:08 GMT 24616 LOG: recovery has paused


So what happened to cause the above?


we automatically pause recovery on the replica before running pg_dump.
This is in order to make certain that we get a consistent dump of the
database.
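
For reference, the pause/resume pair on the replica looks roughly like this
(a sketch; our orchestration around pg_dump is not shown):

    SELECT pg_xlog_replay_pause();   -- on the replica, before the dump
    -- ... pg_dump runs against the replica here ...
    SELECT pg_xlog_replay_resume();  -- on the replica, afterwards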


Still muddling through this, but to recap and be clear in my mind:

1) This only started occurring with 9.1.15, but worked in previous 
versions of 9.1.

So what was the last version of 9.1 that worked?

2) You seed a replica with pg_basebackup.

3) You set up synchronous streaming replication to the replica.

4) You pause the replication and use pg_dump to dump the replica.

5) At this point the error in the subject has occurred twice since you
switched to 9.1.15.


6) Running full vacuum analyze on the master solves the problem.
How is it solved?
In other words do you resume replication after the vacuum, or before?
Then do you redo the pg_dump?
Or do you start over with a new pg_basebackup?



I am not seeing anything below that indicates the recovery started again.

the reason why we do not see a matching "resume" is that the pg_dump
failed and our error handling was insufficient.


2016-02-22 02:30:08 GMT 24616 HINT: Execute pg_xlog_replay_resume() to
continue.
2016-02-22 02:37:19 GMT 23859 DBNAME ERROR: missing chunk number 0 for
toast value 2747579 in pg_toast_22066
2016-02-22 02:37:19 GMT 23859 DBNAME STATEMENT: COPY public.room_shape
(room_uuid, data) TO stdout;
2016-02-22 02:37:41 GMT 2648 DBNAME LOG: could not receive data from
client: Connection reset by peer
2016-02-22 02:37:41 GMT 2648 DBNAME LOG: unexpected EOF on client
connection


What does the log from the master show?

It doesn't seem to show much. It does have these repeated messages, however:

2016-02-22 02:12:18 GMT 30908  LOG:  using stale statistics instead of
current ones because stats collector is not responding
2016-02-22 02:13:01 GMT 30908  LOG:  using stale statistics instead of
current ones because stats collector is not responding
2016-02-22 02:13:52 GMT 30908  LOG:  using stale statistics instead of
current ones because stats collector is not responding

There are lots of these messages within the timeframe. There seem to be
a couple of them every 2-4 hours.


This is usually a sign of resource starvation. I see this on an old
machine, at night, when I run some intensive file system backups; I
figured it out by looking at my crontab. The problem, such as it is, is
that the messages fill up the logs and your statistics become, as the
message says, stale for however long the collector does not respond.


Best regards,
Fredrik Huitfeldt


On 7 March 2016 16:35:29 +01:00, Adrian Klaver wrote:

On 03/06/2016 10:18 PM, fred...@huitfeldt.com wrote:

HI All,

i would really appreciate any help I can get on this issue.

basically, a pg_basebackup + streaming attach, led to a database
that we
could not read from afterwards.


From original post:

http://www.postgresql.org/message-id/1456919678340.31300.116900@webmail2

"The issue remained until we ran a full vacuum analyze on the
cluster."

Which cluster was that, the master or the slave?

"I have logfiles from the incident, but I cannot see anything
out of
the ordinary (despite having a fair amount of experience
investigating
postgresql logs)."


Can we see the section before and after ERROR?


Best regards,
Fredrik

PS please advise if this is better posted on another list.



--
Adrian Klaver
adrian.kla...@aklaver.com




--
Adrian Klaver
adrian.kla...@aklaver.com 

Best regards,
Fredrik




--
Adrian Klaver
adrian.kla...@aklaver.com




Re: [GENERAL] Fwd: How does PostgreSQL serializable transaction works? (SELECT/UPDATE vs INSERT conflict)

2016-03-09 Thread Kevin Grittner
On Wed, Mar 9, 2016 at 1:39 PM, Alexandru Lazarev
 wrote:

Jeff's answer is entirely correct; I'm just going to go into more
detail -- just in case you're interested enough to work through it.

> `CREATE TABLE public.foo (id serial PRIMARY KEY, mynum integer);`
>
> and following data
>
>  id | mynum
> +---
>   1 |10
>   2 |10
>   3 |10
>   4 |10
> (4 rows)
>
> I run 2 serializable transactions in parallel (2 `psql` consoles):
>
> -- both transactions
> mydb=# begin;
> BEGIN
> mydb=# set transaction isolation level serializable;
> SET
>
> -- tx1
> mydb=# select * from foo where mynum < 100;
> id | mynum
> +---
>   1 |10
>   2 |10
>   3 |10
>   4 |10
> (4 rows)
> --tx1: Shouldn't the select freeze the data visible to tx1?

Yes, tx1 does have a snapshot which will guarantee that it sees a
repeatable view of the data for this predicate.

> --tx2
> mydb=# insert into foo (mynum) values (10);
> INSERT 0 1
> -- tx2 will insert next row with id 5 in foo table
> -- Shouldn't tx2's insert break the data snapshot visible to tx1?

The snapshot tx1 has guarantees that overlapping changes won't
change its view of things, and there is no reason for anything to
be blocked or canceled here.  The insert creates what is called a
read-write dependency (or rw-conflict for short) that establishes
that in any serial ordering of a set of transactions which includes
tx1 and tx2, tx1 must precede tx2 in the apparent order of
execution.

> --tx1
> mydb=# update foo set mynum = 20 where id < 100;
> UPDATE 4
> -- Shouldn't a serialization failure appear here, or at least on tx1 commit?

No, there is no cycle in the apparent order of execution.  The
snapshot for tx1 still limits it to the same set of rows, and there
is nothing visible that is inconsistent with tx1 running before
tx2.

> --tx2
> mydb=# commit;
> COMMIT
>
> --tx1
> mydb=# commit;
> COMMIT
> -- tx1 commit is OK - no error

According to the SQL standard, and in the PostgreSQL implementation
of SERIALIZABLE transactions, commit order does not, by itself,
establish apparent  order of execution.

> -- implicit tx
> mydb=# select * from foo;
> id | mynum
> +---
>   1 |20
>   2 |20
>   3 |20
>   4 |20
>   5 |10
> (5 rows)

As Jeff said, this is consistent with the implicit transaction
running last, so tx1 -> tx2 -> implicit_tx.

Now, you are pretty close to a situation which does need to trigger
a serialization failure -- just switch the commit of tx1 and the
implicit transaction.  If tx2 has committed but tx1 has not yet
committed:

mydb=# select * from foo;
 id | mynum
+---
  1 |10
  2 |10
  3 |10
  4 |10
  5 |10
(5 rows)

*Now* we have a problem -- this only makes sense if the implicit tx
was run after tx2 and before tx1.  So apparent order of execution
is tx1 -> tx2 -> implicit_tx -> tx1.  There is a cycle in the
apparent order of execution, which causes anomalies which can ruin
data integrity.  Now, if the implicit transaction is not
serializable, it is allowed to see such things, but if you make it
serializable (and let's call it tx3 now) it sees a state where only
tx2 ran; tx1 could not have run:

-- tx3
mydb=# begin;
BEGIN
mydb=# set transaction isolation level serializable;
SET
mydb=# select * from foo;
 id | mynum
+---
  1 |10
  2 |10
  3 |10
  4 |10
  5 |10
(5 rows)

mydb=# commit;
COMMIT

So now, tx1 is not allowed to commit, or for that matter do
anything else -- it has been "doomed" by tx3:

mydb=# select * from foo;
ERROR:  could not serialize access due to read/write dependencies
among transactions
DETAIL:  Reason code: Canceled on identification as a pivot, during
conflict out checking.
HINT:  The transaction might succeed if retried.

Hopefully you are using some framework to automatically detect this
SQLSTATE and retry the transaction from the start.  So on retry,
tx1 does this:

-- tx1 (retry)
mydb=# begin;
BEGIN
mydb=# set transaction isolation level serializable;
SET
mydb=# select * from foo where mynum < 100;
 id | mynum
+---
  1 |10
  2 |10
  3 |10
  4 |10
  5 |10
(5 rows)

mydb=# update foo set mynum = 20 where id < 100;
UPDATE 5
mydb=# select * from foo;
 id | mynum
+---
  1 |20
  2 |20
  3 |20
  4 |20
  5 |20
(5 rows)

mydb=# commit;
COMMIT

Now the result of all successfully committed serializable
transactions is consistent with the order tx2 -> tx3 -> tx1.  All
is good.

Kevin Grittner




Re: [GENERAL] Fwd: How does PostgreSQL serializable transaction works? (SELECT/UPDATE vs INSERT conflict)

2016-03-09 Thread Jeff Janes
On Wed, Mar 9, 2016 at 11:39 AM, Alexandru Lazarev
 wrote:
>
> Transaction 1 operated on a set of data (`WHERE` clause) on which a 2nd
> transaction does an `INSERT` that fits the clause from the 1st transaction.
> Shouldn't the 1st transaction fail if the 2nd commits first?
>
> I have following table (in PostgreSQL 9.5 db)
>
> `CREATE TABLE public.foo (id serial PRIMARY KEY, mynum integer);`
>
> and following data
>
>  id | mynum
> +---
>   1 |10
>   2 |10
>   3 |10
>   4 |10
> (4 rows)
>
> I run 2 serializable transactions in parallel (2 `psql` consoles):
>
> -- both transactions
> mydb=# begin;
> BEGIN
> mydb=# set transaction isolation level serializable;
> SET
>
> -- tx1
> mydb=# select * from foo where mynum < 100;
> id | mynum
> +---
>   1 |10
>   2 |10
>   3 |10
>   4 |10
> (4 rows)
> --tx1: Shouldn't the select freeze the data visible to tx1?
>
> --tx2
> mydb=# insert into foo (mynum) values (10);
> INSERT 0 1
> -- tx2 will insert next row with id 5 in foo table
> -- Shouldn't tx2's insert break the data snapshot visible to tx1?
>
> --tx1
> mydb=# update foo set mynum = 20 where id < 100;
> UPDATE 4
> -- Shouldn't a serialization failure appear here, or at least on tx1 commit?
>
> --tx2
> mydb=# commit;
> COMMIT
>
> --tx1
> mydb=# commit;
> COMMIT
> -- tx1 commit is OK - no error
>
> -- implicit tx
> mydb=# select * from foo;
> id | mynum
> +---
>   1 |20
>   2 |20
>   3 |20
>   4 |20
>   5 |10
> (5 rows)

What you are seeing here is exactly what you would see if tx1 started
and ran to completion, and then tx2 started and ran to completion, and
then the implicit tx started and ran to completion, in that order.
Isn't it? If so, there is no serialization failure.

Serializable means that there needs to be some serial ordering of the
transactions which would result in the same overall outcome that
actually occurred.  It doesn't mean that the serial ordering which
would produce that outcome has to be the same as the actual
chronological commit order.

Cheers,

Jeff




[GENERAL] Fwd: How does PostgreSQL serializable transaction works? (SELECT/UPDATE vs INSERT conflict)

2016-03-09 Thread Alexandru Lazarev
Transaction 1 operated on a set of data (`WHERE` clause) on which a 2nd
transaction does an `INSERT` that fits the clause from the 1st transaction.
Shouldn't the 1st transaction fail if the 2nd commits first?

I have following table (in PostgreSQL 9.5 db)

`CREATE TABLE public.foo (id serial PRIMARY KEY, mynum integer);`

and following data

 id | mynum
+---
  1 |10
  2 |10
  3 |10
  4 |10
(4 rows)

I run 2 serializable transactions in parallel (2 `psql` consoles):

-- both transactions
mydb=# begin;
BEGIN
mydb=# set transaction isolation level serializable;
SET

-- tx1
mydb=# select * from foo where mynum < 100;
id | mynum
+---
  1 |10
  2 |10
  3 |10
  4 |10
(4 rows)
--tx1: Shouldn't the select freeze the data visible to tx1?

--tx2
mydb=# insert into foo (mynum) values (10);
INSERT 0 1
-- tx2 will insert next row with id 5 in foo table
-- Shouldn't tx2's insert break the data snapshot visible to tx1?

--tx1
mydb=# update foo set mynum = 20 where id < 100;
UPDATE 4
-- Shouldn't a serialization failure appear here, or at least on tx1 commit?

--tx2
mydb=# commit;
COMMIT

--tx1
mydb=# commit;
COMMIT
-- tx1 commit is OK - no error

-- implicit tx
mydb=# select * from foo;
id | mynum
+---
  1 |20
  2 |20
  3 |20
  4 |20
  5 |10
(5 rows)

I am wondering why it behaves this way, taking into consideration the
PostgreSQL documentation:

> "To guarantee true serializability PostgreSQL uses predicate locking,
> which means that it keeps locks which allow it to determine when a
> write would have had an impact on the result of a previous read from a
> concurrent transaction, had it run first."
link: http://www.postgresql.org/docs/current/static/transaction-iso.html


Re: Email address VERP problems (was RE: [GENERAL] Does a call to a language handler provide a context/session, and somewhere to keep session data?

2016-03-09 Thread Magnus Hagander
On Wed, Mar 9, 2016 at 7:03 PM, Alvaro Herrera 
wrote:

> David Bennett wrote:
> > > ow...@postgresql.org] On Behalf Of Alvaro Herrera
> >
> > > On that subject.  I noticed that Outlook seems to add the "return
> > > path"
> > > addresses (sometimes called bounce address or envelope sender) to the
> > > CC header, which sets a new record in the stupidity scale.  Since we
> > > use VERP, each message gets a different return path address, so with
> > > each reply you make, Outlook adds a new address to the CC.
> >
> > Interesting. I use a lot of mailing lists and I've not run across one
> > actually using VERP before. Is it becoming more frequent?
>
> Not sure if it's becoming more frequent -- I only manage *this* list
> server and we enabled VERP several years ago.  I thought it was common
> practice ... the idea of manually managing addresses that bounce seems
> completely outdated now.
>


It's been frequent for quite some time.



> > I checked the headers. It seems this list is using a VERP address for
> both
> > the Return-path and the Reply-To, and only the Sender identifies the list
> > directly.
>
> I'm pretty sure our list server is not setting the VERP address in
> Reply-To.  That would be insane, wouldn't it.  We don't touch the
> Reply-To header at all.  Maybe some other program along the way modifies
> the email before Outlook gets it?
>

Yeah, same here.

However, if you look at the thread, it seems the VERP address was added to
the *original email*. In the To field. Perhaps that's what confused the MUA
into adding *another* VERP address on the reply?

Then AFAICT in
http://www.postgresql.org/message-id/raw/001601d17852$7bea9e80$73bfdb80$@pfxcorp.com
a second VERP address was added to the mail (very likely the 485 one).

To me it looks like this was definitely done by the MTA or MUA at
pfxcorp.com. The archived copy (which is delivered the exact same way as a
general email, it doesn't have any shortcut) does not contain this address
anywhere; it was only used as an envelope sender. Possibly it got confused
by the other VERP address in the initial email, which AFAICT is a manual
mistake.


-- 
 Magnus Hagander
 Me: http://www.hagander.net/
 Work: http://www.redpill-linpro.com/


Re: [GENERAL] Re: "missing chunk number XX for toast value YY in pg_toast ..." after pg_basebackup.

2016-03-09 Thread fredrik
Hi Adrian,

thank you very much for your patience. I apologise for the missing information.

On 9 March 2016 16:13:00 +01:00, Adrian Klaver  
wrote:

> On 03/09/2016 04:56 AM,  wrote:
> 
> > Hi Adrian,
> > 
> > thank you very much for your response.
> > 
> > I ran the "VACUUM ANALYZE" command on the master node.
> > 
> > Regarding log messages.
> > 
> > Here is the contents of the log (excluding connections/disconnections):
> > 
> Assuming the below is from the replica database.

The "LOG: recovery has paused" message was indeed from the replica.

> 
> 
> > 
> > 2016-02-22 02:30:08 GMT 24616 LOG: recovery has paused
> > 
> So what happened to cause the above?
> 
we automatically pause recovery on the replica before running pg_dump. This is 
in order to make certain that we get a consistent dump of the database.

> 
> I am not seeing anything below that indicates the recovery started again.

The reason why we do not see a matching "resume" is that the pg_dump failed
and our error handling was insufficient.

> 
> 
> > 2016-02-22 02:30:08 GMT 24616 HINT: Execute pg_xlog_replay_resume() to
> > continue.
> > 2016-02-22 02:37:19 GMT 23859 DBNAME ERROR: missing chunk number 0 for
> > toast value 2747579 in pg_toast_22066
> > 2016-02-22 02:37:19 GMT 23859 DBNAME STATEMENT: COPY public.room_shape
> > (room_uuid, data) TO stdout;
> > 2016-02-22 02:37:41 GMT 2648 DBNAME LOG: could not receive data from
> > client: Connection reset by peer
> > 2016-02-22 02:37:41 GMT 2648 DBNAME LOG: unexpected EOF on client
> > connection
> > 
> What does the log from the master show?
> 
It doesn't seem to show much. It does have these repeated messages, however:
2016-02-22 02:12:18 GMT 30908 LOG: using stale statistics instead of current 
ones because stats collector is not responding
2016-02-22 02:13:01 GMT 30908 LOG: using stale statistics instead of current 
ones because stats collector is not responding
2016-02-22 02:13:52 GMT 30908 LOG: using stale statistics instead of current 
ones because stats collector is not responding

There are lots of these messages within the timeframe. There seem to be a
couple of them every 2-4 hours.


> > 
> > 
> > Best regards,
> > Fredrik Huitfeldt
> > 
> > 
> > On 7 March 2016 16:35:29 +01:00, Adrian Klaver
> > <> wrote:
> > 
> > > On 03/06/2016 10:18 PM, fred...@huitfeldt.com wrote:
> > > 
> > > HI All,
> > > 
> > > i would really appreciate any help I can get on this issue.
> > > 
> > > basically, a pg_basebackup + streaming attach, led to a database
> > > that we
> > > could not read from afterwards.
> > > 
> > > 
> > > From original post:
> > > 
> > > http://www.postgresql.org/message-id/1456919678340.31300.116900@webmail2
> > > 
> > > "The issue remained until we ran a full vacuum analyze on the cluster."
> > > 
> > > Which cluster was that, the master or the slave?
> > > 
> > > "I have logfiles from the incident, but I cannot see anything out of
> > > the ordinary (despite having a fair amount of experience investigating
> > > postgresql logs)."
> > > 
> > > 
> > > Can we see the section before and after ERROR?
> > > 
> > > 
> > > Best regards,
> > > Fredrik
> > > 
> > > PS please advise if this is better posted on another list.
> > > 
> > > 
> > > 
> > > --
> > > Adrian Klaver
> > > adrian.kla...@aklaver.com
>
> --
> Adrian Klaver
> adrian.kla...@aklaver.com

Best regards,
Fredrik



Re: Email address VERP problems (was RE: [GENERAL] Does a call to a language handler provide a context/session, and somewhere to keep session data?

2016-03-09 Thread Alvaro Herrera
David Bennett wrote:
> > ow...@postgresql.org] On Behalf Of Alvaro Herrera
> 
> > On that subject.  I noticed that Outlook seems to add the "return
> > path"
> > addresses (sometimes called bounce address or envelope sender) to the
> > CC header, which sets a new record in the stupidity scale.  Since we
> > use VERP, each message gets a different return path address, so with
> > each reply you make, Outlook adds a new address to the CC.
> 
> Interesting. I use a lot of mailing lists and I've not run across one
> actually using VERP before. Is it becoming more frequent?

Not sure if it's becoming more frequent -- I only manage *this* list
server and we enabled VERP several years ago.  I thought it was common
practice ... the idea of manually managing addresses that bounce seems
completely outdated now.

> I checked the headers. It seems this list is using a VERP address for both
> the Return-path and the Reply-To, and only the Sender identifies the list
> directly.

I'm pretty sure our list server is not setting the VERP address in
Reply-To.  That would be insane, wouldn't it.  We don't touch the
Reply-To header at all.  Maybe some other program along the way modifies
the email before Outlook gets it?

> I rather think the problem is that the list software is not pruning VERP
> addresses from the emails it sends out. I don't know exactly what is causing
> them, but it certainly seems to me something that the list software could
> handle effortlessly.

Well, since the VERP addresses are not supposed to appear in those
headers, the list software doesn't try to prune -- that would be useless
99.95% of the time.  You see, this is the first time that I have seen
any mail chain do this (I've been here for several years.)

> Sorry, but there is nothing to tweak. It all seems to be working just fine
> at this end. But I will keep any eye out for extra VERPs and delete them (as
> I have on this message).

Thanks.  I think it will be better for your continued mental health to
ensure that nothing adds such addresses to Reply-To, if that is indeed
what is happening.

-- 
Álvaro Herrera                http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services




Email address VERP problems (was RE: [GENERAL] Does a call to a language handler provide a context/session, and somewhere to keep session data?

2016-03-09 Thread David Bennett
> ow...@postgresql.org] On Behalf Of Alvaro Herrera

> On that subject.  I noticed that Outlook seems to add the "return
> path"
> addresses (sometimes called bounce address or envelope sender) to the
> CC header, which sets a new record in the stupidity scale.  Since we
> use VERP, each message gets a different return path address, so with
> each reply you make, Outlook adds a new address to the CC.

Interesting. I use a lot of mailing lists and I've not run across one
actually using VERP before. Is it becoming more frequent?

I checked the headers. It seems this list is using a VERP address for both
the Return-path and the Reply-To, and only the Sender identifies the list
directly.

No, Outlook does no such thing. Outlook by default replies to the Reply-To
(which seems reasonable). It seems that the list software is able to remove
the VERP address in this case.

If I use 'reply all' Outlook adds the Sender to the To address and preserves
existing CCs. I did not see it do what you describe in several attempts.

I rather think the problem is that the list software is not pruning VERP
addresses from the emails it sends out. I don't know exactly what is causing
them, but it certainly seems to me something that the list software could
handle effortlessly.

> Maybe this is tweakable.  If so, please turn it off.

Sorry, but there is nothing to tweak. It all seems to be working just fine
at this end. But I will keep an eye out for extra VERPs and delete them (as
I have on this message).

Regards
David M Bennett FACS

Andl - A New Database Language - andl.org


Re: [GENERAL] Windows default directory for client certificates

2016-03-09 Thread Lupi Loop
Thank you very much for your answer Laurenz, I was storing the client
certificates in PGDATA (C:\Program Files\PostgreSQL\9.5\data\) instead of
in APPDATA (C:\Users\XXX\AppData\Roaming).

Problem solved.

Thank you



2016-03-09 11:45 GMT+01:00 Albe Laurenz :

> Lupi Loop wrote:
> > PostgreSQL documentation at
> http://www.postgresql.org/docs/9.5/static/libpq-ssl.html
> > says that when a client certificate is requested by a server, a windows
> client psql will use by
> > default the credentials  located at %APPDATA%\postgresql\postgresql.crt
> and
> > %APPDATA%\postgresql\postgresql.key
> >
> > However, my psql client application (v 9.5.0) in a Windows Server 2012
> R2 cannot find the certificates
> > in this location and only works when this location is specifically set
> using the sslcert and sslkey
> > attributes when connecting. Is this a bug or am I using a wrong path?
> >
> > This an example of execution:
> >
> > ---
> > C:\Program Files\PostgreSQL\9.5\data> ..\bin\psql
> "postgresql://postgres@localhost/postgres"
> > psql: FATAL:  connection requires a valid client certificate
> > FATAL:  no pg_hba.conf entry for host "::1", user "postgres",
> database"postgres", SSL off
> >
> C:\Program Files\PostgreSQL\9.5\data> ..\bin\psql
> "postgresql://postgres@localhost/postgres?sslkey=postgresql\postgresql.key&sslcert=postgresql\postgresql.crt"
> > psql (9.5.0)
> > SSL connection (protocol: TLSv1.2, cipher:
> ECDHE-RSA-AES256-GCM-SHA384,bits: 256, compression: off)
> > Type "help" for help.
> > postgres=#
> > 
> > cheers
>
> To make this work, you will have to have a root certificate "root.crt" in
> the
> server's data directory and the configuration parameter "ssl_ca_file" set
> to "root.crt".
>
> The corresponding line in pg_hba.conf should look like this:
> hostssl  <database>  <user>  <client address>/32  md5  clientcert=1
>
> Then you have to restart the server.
> But I guess you have done that since it works if you specify the files
> explicitly.
>
> Perhaps you are not in the %APPDATA% directory.
> What do you get when you type
>echo %APPDATA%
> on the command prompt?
>
> One possibility to investigate this is to run "Process Monitor" and add
> the filters
> "Process Name is psql.exe" and "Patch ends with postgresql.key".
> Then you should see where psql looks for the client key.
>
> Yours,
> Laurenz Albe
>


Re: [GENERAL] Plan to support predicate push-down into subqueries with aggregates?

2016-03-09 Thread Adam Brusselback
I responded yesterday, but it seems to have gotten caught up because it was
too big with the attachments... Here it is again.

Sorry about not posting correctly, hopefully I did it right this time.

So I wanted to see if Sql Server (2014) could handle this type of query
differently than Postgres (9.5.1), so I got an instance of express
installed and ported the test script to it.

I updated my Postgres script so the data is the same in each server.  The
end result is Sql Server seems to be able to optimize all of these queries
MUCH better than Postgres.
I disabled parallelism in Sql Server to make the comparison fair.

I've attached the explain analyze results for Postgres, and the execution
plan for Sql Server (in picture form... don't know a better way)

Results are:
--Sql Server:15ms average
--Postgres: 6ms average
SELECT *
FROM header
INNER JOIN header_total
ON header.header_id = header_total.header_id
WHERE header.header_id = 26;


--Sql Server: 15ms average
--Postgres: 1250ms average
SELECT *
FROM header
INNER JOIN header_total
ON header.header_id = header_total.header_id
WHERE header.header_id < 27
AND header.header_id > 24;


--Sql Server: 567ms average
--Postgres: 1265ms average
SELECT *
FROM header
INNER JOIN header_total
ON header.header_id = header_total.header_id
WHERE header.description like '%5%';


--Sql Server: 15ms average
--Postgres: 1252ms average
SELECT *
FROM header_total
WHERE header_total.header_id IN (
SELECT header_id
FROM header
WHERE header.header_id < 27
AND header.header_id > 24);

Here are the sql server execution plans as links rather than attachments:
https://drive.google.com/file/d/0BzWRjbj6CQLeb29JZ0lMMnp4QTA/view?usp=sharing
https://drive.google.com/file/d/0BzWRjbj6CQLeM2t0MmZDdE03OHc/view?usp=sharing
https://drive.google.com/file/d/0BzWRjbj6CQLeV0hjRmM5NE9CTWc/view?usp=sharing
https://drive.google.com/file/d/0BzWRjbj6CQLeNmdlQWpHYU1BVHM/view?usp=sharing
query1:

'Nested Loop Left Join  (cost=13.22..1022.98 rows=1 width=125) (actual 
time=3.021..3.024 rows=1 loops=1)'
'  Join Filter: (header_1.header_id = detail_2.header_id)'
'  ->  Nested Loop Left Join  (cost=11.78..1013.93 rows=1 width=89) (actual 
time=2.978..2.981 rows=1 loops=1)'
'Join Filter: (header_1.header_id = detail_1.header_id)'
'->  Nested Loop  (cost=0.55..4.99 rows=1 width=49) (actual 
time=0.028..0.030 rows=1 loops=1)'
'  ->  Index Scan using header_pkey on header  (cost=0.28..2.49 
rows=1 width=33) (actual time=0.022..0.023 rows=1 loops=1)'
'Index Cond: (header_id = 26)'
'  ->  Index Scan using header_pkey on header header_1  
(cost=0.28..2.49 rows=1 width=16) (actual time=0.002..0.003 rows=1 loops=1)'
'Index Cond: (header_id = 26)'
'->  GroupAggregate  (cost=11.23..1008.92 rows=1 width=32) (actual 
time=2.944..2.945 rows=1 loops=1)'
'  Group Key: detail_1.header_id'
'  ->  Bitmap Heap Scan on detail_1  (cost=11.23..999.21 rows=969 
width=32) (actual time=0.301..2.067 rows=1000 loops=1)'
'Recheck Cond: (header_id = 26)'
'Heap Blocks: exact=1000'
'->  Bitmap Index Scan on idx_detail_1_header_id  
(cost=0.00..10.99 rows=969 width=0) (actual time=0.156..0.156 rows=1000 
loops=1)'
'  Index Cond: (header_id = 26)'
'  ->  GroupAggregate  (cost=1.44..9.01 rows=1 width=20) (actual 
time=0.033..0.033 rows=1 loops=1)'
'Group Key: detail_2.header_id'
'->  Bitmap Heap Scan on detail_2  (cost=1.44..8.95 rows=7 width=20) 
(actual time=0.013..0.027 rows=7 loops=1)'
'  Recheck Cond: (header_id = 26)'
'  Heap Blocks: exact=7'
'  ->  Bitmap Index Scan on idx_detail_2_header_id  
(cost=0.00..1.44 rows=7 width=0) (actual time=0.010..0.010 rows=7 loops=1)'
'Index Cond: (header_id = 26)'
'Planning time: 0.420 ms'
'Execution time: 3.127 ms'

query2:

'Hash Right Join  (cost=28367.90..28394.20 rows=3 width=129) (actual 
time=1245.897..1246.442 rows=2 loops=1)'
'  Hash Cond: (detail_1.header_id = header_1.header_id)'
'  ->  HashAggregate  (cost=28164.00..28176.50 rows=1000 width=32) (actual 
time=1235.943..1236.304 rows=1000 loops=1)'
'Group Key: detail_1.header_id'
'->  Seq Scan on detail_1  (cost=0.00..18164.00 rows=100 width=32) 
(actual time=0.008..215.064 rows=100 loops=1)'
'  ->  Hash  (cost=203.86..203.86 rows=3 width=89) (actual time=9.895..9.895 
rows=2 loops=1)'
'Buckets: 1024  Batches: 1  Memory Usage: 9kB'
'->  Hash Right Join  (cost=177.58..203.86 rows=3 width=89) (actual 
time=8.835..9.891 rows=2 loops=1)'
'  Hash Cond: (detail_2.header_id = header_1.header_id)'
'  ->  HashAggregate  (cost=167.50..180.00 rows=1000 width=20) 
(actual time=8.743..9.407 rows=1000 loops=1)'
'Group Key: detail_2.header_id'
'->  Seq Scan on detail_2  (cost=0.00..115.00 

Re: [GENERAL] Streaming replication and slave-local temp tables

2016-03-09 Thread Pavel Stehule
Hi

2016-03-09 18:14 GMT+01:00 Ivan Voras :

> Hello,
>
> Is it possible (or will it be possible) to issue CREATE TEMP TABLE
> statements on the read-only slave nodes in master-slave streaming
> replication in recent version of PostgreSQL (9.4+)?
>

Currently it is not possible. There are two plans related to this issue: 1.
global temp tables, 2. logical replication. The work on logical replication
is nearing finalization and it will probably be available in 9.6.

Regards

Pavel


[GENERAL] Streaming replication and slave-local temp tables

2016-03-09 Thread Ivan Voras
Hello,

Is it possible (or will it be possible) to issue CREATE TEMP TABLE
statements on the read-only slave nodes in master-slave streaming
replication in recent version of PostgreSQL (9.4+)?


Re: [GENERAL] How to delete few elements from array beginning?

2016-03-09 Thread Chris Travers
On Wed, Mar 9, 2016 at 4:53 PM, Alexander Farber  wrote:

> Hello Chris,
>
> On Wed, Mar 9, 2016 at 4:42 PM, Chris Travers 
> wrote:
>
>>
>>
>> On Wed, Mar 9, 2016 at 1:10 PM, Alexander Farber <
>> alexander.far...@gmail.com> wrote:
>>
>>>
>>> what is please the most efficient way to delete a slice from the start
>>> of a longer array (after I have copied it to another array)?
>>>
>>> Do I really have to copy a large slice of the array to itself, like in
>>> the last line here:
>>>
>>> pile_array := pile_array || swap_array;
>>>
>>> /* here I copy away swap_len elements */
>>> new_hand := pile_array[1:swap_len];
>>>
>>> /* here I don't know how to efficiently remove already copied
>>> elements */
>>> pile_array := pile_array[(swap_len + 1):ARRAY_LENGTH(pile_array,
>>> 1)];
>>>
>>> or is there a better way?
>>>
>>
>> This looks like something for card hands?
>>
>> What you are doing seems correct to me.
>>
>>
> actually card hands would be easier - because cards are unique in the deck.
>
> But here I have letter hands (like "AAABCDE") in a word game and they are
> not unique in the pile...
>

But it seems like a similar problem.  Namely:

1.  You have an ordered list of cards (these may be lettered cards).  These
start out as existing in a shuffled deck, but once shuffled you have an
order, whether or not the cards are unique.

2.  When you draw a card, you have to remove it from the head of the list.

Now on to some basic points (though you can probably do this in a straight
SQL query):

SQL arrays are more or less like math matrices.  For this reason it doesn't
make sense to treat them as programming structures per se.  Instead you
have a matrix and you create another matrix by slicing it. So this is how
you have to think about the problem.

Also consider that PostgreSQL tuples are copy on write anyway.

Now, if you want something that uses an array more like a queue, it is best
to write that portion in pl/perl.  However if you do that you have to think
carefully about mutability.
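
To make the slicing idea concrete, a minimal example (the values are only
illustrative):

    SELECT (ARRAY['A','A','B','C','D'])[1:2];  -- take the head: {A,A}
    SELECT (ARRAY['A','A','B','C','D'])[3:5];  -- what remains: {B,C,D}

In plpgsql that is exactly the pair of assignments you already have: copy
pile_array[1:swap_len] out, then reassign
pile_array := pile_array[(swap_len + 1):array_length(pile_array, 1)].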

Hope this helps
Chris Travers

>
> Regards
> Alex
>
>


-- 
Best Wishes,
Chris Travers

Efficito:  Hosted Accounting and ERP.  Robust and Flexible.  No vendor
lock-in.
http://www.efficito.com/learn_more


Re: [GENERAL] How to delete few elements from array beginning?

2016-03-09 Thread Alexander Farber
Hello Chris,

On Wed, Mar 9, 2016 at 4:42 PM, Chris Travers 
wrote:

>
>
> On Wed, Mar 9, 2016 at 1:10 PM, Alexander Farber <
> alexander.far...@gmail.com> wrote:
>
>>
>> what is please the most efficient way to delete a slice from the start of
>> a longer array (after I have copied it to another array)?
>>
>> Do I really have to copy a large slice of the array to itself, like in
>> the last line here:
>>
>> pile_array := pile_array || swap_array;
>>
>> /* here I copy away swap_len elements */
>> new_hand := pile_array[1:swap_len];
>>
>> /* here I don't know how to efficiently remove already copied
>> elements */
>> pile_array := pile_array[(swap_len + 1):ARRAY_LENGTH(pile_array,
>> 1)];
>>
>> or is there a better way?
>>
>
> This looks like something for card hands?
>
> What you are doing seems correct to me.
>
>
actually card hands would be easier - because cards are unique in the deck.

But here I have letter hands (like "AAABCDE") in a word game and they are
not unique in the pile...

Regards
Alex


Re: [GENERAL] How to delete few elements from array beginning?

2016-03-09 Thread Chris Travers
On Wed, Mar 9, 2016 at 1:10 PM, Alexander Farber  wrote:

> Hello fellow PostgreSQL users,
>
> what is please the most efficient way to delete a slice from the start of
> a longer array (after I have copied it to another array)?
>
> Do I really have to copy a large slice of the array to itself, like in the
> last line here:
>
> pile_array := pile_array || swap_array;
>
> /* here I copy away swap_len elements */
> new_hand := pile_array[1:swap_len];
>
> /* here I don't know how to efficiently remove already copied
> elements */
> pile_array := pile_array[(swap_len + 1):ARRAY_LENGTH(pile_array,
> 1)];
>
> or is there a better way?
>

This looks like something for card hands?

What you are doing seems correct to me.

>
> Thank you
> Alex
>
> P.S. The listing of the entire stored function in question:
>
> CREATE OR REPLACE FUNCTION words_swap_game(
> IN in_uid integer,
> IN in_gid integer,
> IN in_swap varchar(7))
> RETURNS void AS
> $func$
> DECLARE
> i   integer;
> j   integer;
> swap_leninteger;
> hand_leninteger;
> pile_leninteger;
> swap_array  varchar[];
> pile_array  varchar[];
> old_handvarchar[];
> new_handvarchar[];
> hand_ignore boolean[];
> BEGIN
> swap_array := STRING_TO_ARRAY(in_swap, NULL);
> swap_len := ARRAY_LENGTH(swap_array, 1);
>
> SELECT hand1, pile, ARRAY_LENGTH(hand1, 1), ARRAY_LENGTH(pile, 1)
> INTO old_hand, pile_array, hand_len, pile_len
> FROM words_games
> WHERE gid = in_gid
> AND player1 = in_uid
> /* and it is first player's turn */
> AND (played1 IS NULL OR played1 < played2);
>
> IF NOT FOUND THEN
> SELECT hand2, pile, ARRAY_LENGTH(hand2, 1),
> ARRAY_LENGTH(pile, 1)
> INTO old_hand, pile_array, hand_len, pile_len
> FROM words_games
> WHERE gid = in_gid
> AND player2 = in_uid
> /* and it is second player's turn */
> AND (played2 IS NULL OR played2 < played1);
> END IF;
>
> pile_array := pile_array || swap_array;
> -- pile_array := words_shuffle(pile_array);
> new_hand := pile_array[1:swap_len];
> pile_array := pile_array[(swap_len + 1):pile_len]; /* XXX is it
> good? */
>
> hand_ignore := ARRAY_FILL(FALSE, ARRAY[hand_len]);
>
> <<hand_loop>>
> FOR i IN 1..hand_len LOOP
> FOR j IN 1..swap_len LOOP
> IF hand_ignore[j] = FALSE AND
>old_hand[i] = swap_array[j] THEN
> hand_ignore[j] := TRUE;
> CONTINUE hand_loop;
> END IF;
> END LOOP;
>
> new_hand := new_hand || old_hand[i];
> END LOOP;
> /*
> UPDATE words_games
> SET hand1 = new_hand,
> pile = pile_array,
> played1 = CURRENT_TIMESTAMP
> WHERE gid = in_gid
> AND player1 = in_uid
> -- and it is first player's turn
> AND (played1 IS NULL OR played1 < played2);
>
> IF NOT FOUND THEN
> UPDATE words_games
> SET hand2 = new_hand,
> pile = pile_array,
> played2 = CURRENT_TIMESTAMP
> WHERE gid = in_gid
> AND player2 = in_uid
> -- and it is second player's turn
> AND (played2 IS NULL OR played2 < played1);
> END IF;
> */
>
> END
> $func$ LANGUAGE plpgsql;
>
>
>
>


-- 
Best Wishes,
Chris Travers

Efficito:  Hosted Accounting and ERP.  Robust and Flexible.  No vendor
lock-in.
http://www.efficito.com/learn_more


Re: [GENERAL] How to delete few elements from array beginning?

2016-03-09 Thread Alexander Farber
Hello Mike,

On Wed, Mar 9, 2016 at 2:42 PM, Mike Sofen  wrote:
>
> Have you considered a normal (relational), non-array-based data model for
this app (2 or 3 tables in a 1:M/M:M) instead of the single table model
you’ve shown?  That would then allow you to use normal sql set-based
operations that are readable, understandable, maintainable and very
fast/scalable.
>
> When I see row by row operations (looping or cursors) in what should be a
real time query…that’s my alarm bell that perhaps the code has wandered off
a valid solution path.
>
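
A minimal sketch of the 1:M shape Mike describes (table and column names
here are illustrative, not from the game's actual schema):

    CREATE TABLE games (
        gid     serial PRIMARY KEY,
        created timestamptz NOT NULL DEFAULT now()
    );

    CREATE TABLE game_tiles (
        gid    integer NOT NULL REFERENCES games (gid) ON DELETE CASCADE,
        place  text    NOT NULL CHECK (place IN ('hand1', 'hand2', 'pile')),
        letter varchar NOT NULL
    );

    -- swapping letters then becomes a set-based UPDATE of game_tiles.place
    -- instead of array surgery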

thank you for your reply.

I have considered that for my Scrabble-like word game, but with rows I
would not know how to:

1) swap several tiles with the same letter values (like player hand is
"AABBCCD" and she swaps "BBC"). With rows and DISTINCT I don't know how to
do that.

2) represent the 15 x 15 game board. OK, I could store a varchar(225)
string...

For the backend of my game I would like to implement as much as possible in
PL/pgSQL and as little as possible in PHP.

A decade ago I implemented a card game as a "hobby programmer project" and
I like how its data is still kept clean by PostgreSQL, despite 4000 active
players.

Also while implementing the card game I was given great advice on this
mailing list (to use timestamptz instead of year-week strings) and later
regretted ignoring it :-) So any advice is welcome.

Regards
Alex

P.S. Here my current implementation of letter swapping, any comments are
welcome:

CREATE TABLE words_games (
gid SERIAL PRIMARY KEY,
created timestamptz NOT NULL,

player1 integer REFERENCES words_users(uid) ON DELETE CASCADE NOT
NULL,
player2 integer REFERENCES words_users(uid) ON DELETE CASCADE,

played1 timestamptz,
played2 timestamptz,

hand1 varchar[7] NOT NULL,
hand2 varchar[7] NOT NULL,
pile  varchar[116] NOT NULL,

board varchar[15][15] NOT NULL,
style integer NOT NULL CHECK (1 <= style AND style <= 4)
);

CREATE OR REPLACE FUNCTION words_swap_game(
IN in_uid integer,
IN in_gid integer,
IN in_swap varchar(7))
RETURNS void AS
$func$
DECLARE
i   integer;
j   integer;
letter  varchar;
swapped integer;
swap_leninteger;
hand_leninteger;
pile_leninteger;
swap_array  varchar[];
pile_array  varchar[];
old_handvarchar[];
new_handvarchar[];
BEGIN
swap_array := STRING_TO_ARRAY(in_swap, NULL);
swap_len := ARRAY_LENGTH(swap_array, 1);

SELECT
hand1,
pile,
ARRAY_LENGTH(hand1, 1),
ARRAY_LENGTH(pile, 1)
INTO
old_hand,
pile_array,
hand_len,
pile_len
FROM words_games
WHERE gid = in_gid
AND player1 = in_uid
/* and it is first player's turn */
AND (played1 IS NULL OR played1 < played2);

IF NOT FOUND THEN
SELECT
hand2,
pile,
ARRAY_LENGTH(hand2, 1),
ARRAY_LENGTH(pile, 1)
INTO
old_hand,
pile_array,
hand_len,
pile_len
FROM words_games
WHERE gid = in_gid
AND player2 = in_uid
/* and it is second player's turn */
AND (played2 IS NULL OR played2 < played1);
END IF;

IF NOT FOUND THEN
RAISE EXCEPTION 'Game % not found for user %', in_gid,
in_uid;
END IF;

swapped := 0;
<<hand_loop>>
FOR i IN 1..hand_len LOOP
letter := old_hand[i];

FOR j IN 1..swap_len LOOP
IF swap_array[j] IS NOT NULL AND
   swap_array[j] = letter THEN
/* move letter from swap to pile */
pile_array := pile_array || letter;
swap_array[j] := NULL;
swapped := swapped + 1;
CONTINUE hand_loop;
END IF;
END LOOP;

/* letter was not found in swap, keep it in hand */
new_hand := new_hand || letter;
END LOOP;

IF swapped = 0 OR swapped <> swap_len THEN
RAISE EXCEPTION 'Invalid swap % for hand %', in_swap,
old_hand;
END IF;

-- pile_array := words_shuffle(pile_array);
new_hand   := new_hand || pile_array[1:swapped];
pile_array := pile_array[(1 + swapped):(pile_len + swapped)];

UPDATE words_games SET
hand1   = new_hand,
pile= pile_array,
played1 = 

Re: [GENERAL] Re: "missing chunk number XX for toast value YY in pg_toast ..." after pg_basebackup.

2016-03-09 Thread Adrian Klaver

On 03/09/2016 04:56 AM, fred...@huitfeldt.com wrote:

Hi Adrian,

thank you very much for your response.

I ran the "VACUUM ANALYZE" command on the master node.

Regarding log messages.

Here is the contents of the log (excluding connections/disconnections):


Assuming the below is from the replica database.



2016-02-22 02:30:08 GMT 24616  LOG:  recovery has paused


So what happened to cause the above?

I am not seeing anything below that indicates the recovery started again.


2016-02-22 02:30:08 GMT 24616  HINT:  Execute pg_xlog_replay_resume() to
continue.
2016-02-22 02:37:19 GMT 23859 DBNAME ERROR:  missing chunk number 0 for
toast value 2747579 in pg_toast_22066
2016-02-22 02:37:19 GMT 23859 DBNAME STATEMENT:  COPY public.room_shape
(room_uuid, data) TO stdout;
2016-02-22 02:37:41 GMT 2648 DBNAME LOG:  could not receive data from
client: Connection reset by peer
2016-02-22 02:37:41 GMT 2648 DBNAME LOG:  unexpected EOF on client
connection



What does the log from the master show?




Best regards,
Fredrik Huitfeldt


On 7 March 2016 16:35:29 +01:00, Adrian Klaver
 wrote:

On 03/06/2016 10:18 PM, fred...@huitfeldt.com
 wrote:

HI All,

i would really appreciate any help I can get on this issue.

basically, a pg_basebackup + streaming attach led to a database that we
could not read from afterwards.


From original post:

http://www.postgresql.org/message-id/1456919678340.31300.116900@webmail2

"The issue remained until we ran a full vacuum analyze on the cluster."

Which cluster was that, the master or the slave?

"I have logfiles from the incident, but I cannot see anything out of
the ordinary (despite having a fair amount of experience investigating
postgresql logs)."


Can we see the section before and after ERROR?


Best regards,
Fredrik

PS please advise if this is better posted on another list.



--
Adrian Klaver
adrian.kla...@aklaver.com 





--
Adrian Klaver
adrian.kla...@aklaver.com


--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] create create of database of a particular name

2016-03-09 Thread Adrian Klaver

On 03/09/2016 07:02 AM, Chris Withers wrote:

On 09/03/2016 14:53, Adrian Klaver wrote:

On 03/09/2016 05:49 AM, Chris Withers wrote:

Hi All,

Django, a popular python web framework, runs its unit tests by creating
a 'test_'-prefixed database and then dropping it at the end of the test
run.


FYI in Django 1.8+ you can prevent that:

https://docs.djangoproject.com/en/1.8/topics/testing/overview/#the-test-database



" New in Django 1.8:

You can prevent the test databases from being destroyed by adding the
--keepdb flag to the test command. This will preserve the test database
between runs. If the database does not exist, it will first be created.
Any migrations will also be applied in order to keep it up to date.
"


...and we're on 1.7 :-(


Then you might want to try what I use, pytest-django:

https://pypi.python.org/pypi/pytest-django


It has done the above for some time now:

http://pytest-django.readthedocs.org/en/latest/database.html#reuse-db-reuse-the-testing-database-between-test-runs

Guessing that is where Django got the idea.





Chris




--
Adrian Klaver
adrian.kla...@aklaver.com


--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] create create of database of a particular name

2016-03-09 Thread Chris Withers

On 09/03/2016 14:53, Adrian Klaver wrote:

On 03/09/2016 05:49 AM, Chris Withers wrote:

Hi All,

Django, a popular python web framework, runs its unit tests by creating
a 'test_'-prefixed database and then dropping it at the end of the test
run.


FYI in Django 1.8+ you can prevent that:

https://docs.djangoproject.com/en/1.8/topics/testing/overview/#the-test-database


" New in Django 1.8:

You can prevent the test databases from being destroyed by adding the
--keepdb flag to the test command. This will preserve the test database
between runs. If the database does not exist, it will first be created.
Any migrations will also be applied in order to keep it up to date.
"


...and we're on 1.7 :-(

Chris


--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] create create of database of a particular name

2016-03-09 Thread Adrian Klaver

On 03/09/2016 05:49 AM, Chris Withers wrote:

Hi All,

Django, a popular python web framework, runs its unit tests by creating
a 'test_'-prefixed database and then dropping it at the end of the test
run.


FYI in Django 1.8+ you can prevent that:

https://docs.djangoproject.com/en/1.8/topics/testing/overview/#the-test-database

" New in Django 1.8:

You can prevent the test databases from being destroyed by adding the 
--keepdb flag to the test command. This will preserve the test database 
between runs. If the database does not exist, it will first be created. 
Any migrations will also be applied in order to keep it up to date.

"


Is there any way I can grant a user in postgres the ability only to
create a database of a particular name?


Not that I know of.
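
A common workaround, sketched here with hypothetical names, is to pre-create
the fixed-name database and reuse it; CREATE DATABASE cannot run inside a
transaction block, so it cannot be hidden behind a SECURITY DEFINER function:

-- run once as a superuser; the app role then needs no CREATEDB right
CREATE DATABASE test_myapp OWNER django_role;
ALTER ROLE django_role NOCREATEDB;

Combined with a test runner that reuses the database (--keepdb, or the
pytest-django reuse mode mentioned in this thread), the role never needs
CREATEDB.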



cheers,

Chris





--
Adrian Klaver
adrian.kla...@aklaver.com


--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] How to setup Active Directory users in Postgres 9.3.11

2016-03-09 Thread Wyatt Sanford
Hi,

I have recently been tasked with installing Postgres 9.3.11 on a Windows 2012 
R2 server on Active Directory and restoring backups from a Linux server running 
Postgres 9.3.11.  I have all of the databases restored to the windows server 
and now I need to set up access for users on Active Directory.  I've read a few 
things online, but did not find any good examples.  I know that you have to add 
login roles in Postgres for each AD user that will need to connect, but I 
haven't found any good examples of how to edit the pg_hba.conf file to accept 
AD connections.  The postgresql.conf file is already set to listen on all
addresses.  Can anyone give me some examples of the entries I need to add to
the pg_hba.conf file, or point me to some examples on the web?

Thanks,

Wyatt Sanford
Database Administrator
Mississippi Department of Revenue
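
For what it's worth, a sketch of the kind of entries involved, assuming SSPI
(Windows-native Kerberos/NTLM) authentication, a hypothetical EXAMPLE.COM
realm, and a 10.0.0.0/24 client subnet; not verified against this particular
setup:

# pg_hba.conf: let domain users on the LAN authenticate via SSPI
host    all    all    10.0.0.0/24    sspi    map=ad include_realm=1 krb_realm=EXAMPLE.COM

# pg_ident.conf: map user@EXAMPLE.COM onto the same-named login role
ad      /^(.*)@EXAMPLE\.COM$      \1

Each mapped role still has to exist in the cluster, e.g. via
CREATE ROLE someuser LOGIN.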




[GENERAL] create create of database of a particular name

2016-03-09 Thread Chris Withers

Hi All,

Django, a popular python web framework, runs its unit tests by creating 
a 'test_'-prefixed database and then dropping it at the end of the test run.


Is there any way I can grant a user in postgres the ability only to 
create a database of a particular name?


cheers,

Chris


--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] regarding table migration from sql to postgres with runmtk.sh

2016-03-09 Thread Raghavendra
On Wed, Mar 9, 2016 at 7:26 AM, Durgamahesh Manne  wrote:

> Hi sir
> I got the following error when I ran runMTK.sh
>
> Initially I placed the jTDS driver for SQL Server in
> /opt/postgresplus/edbmtk/lib
>
> ./runMTK.sh  -sourcedbtype sqlserver -targetdbtype postgresql
> feeds_morningstar_bond_sector
> Running EnterpriseDB Migration Toolkit (Build 48.0.2) ...
> Source database connectivity info...
> conn =jdbc:jtds:sqlserver://fxserver.trustfort.com:49888/DataFeedHandler
> user =trustfort
> password=**
> Target database connectivity info...
> conn =jdbc:postgresql://192.168.24.128:5432/dbo_sql
> user =postgres
> password=**
> Connecting with source SQL Server database server...
> Connected to Microsoft SQL Server, version '10.50.1600'
> Connecting with target Postgres database server...
> Exception in thread "main" java.lang.NoClassDefFoundError:
> org/postgresql/Driver
> at
> com.edb.dbhandler.postgresql.PGConnection.(PGConnection.java:32)
> at
> com.edb.common.MTKFactory.createMTKConnection(MTKFactory.java:228)
> at
> com.edb.MigrationToolkit.createNewTargetConnection(MigrationToolkit.java:5610)
> at com.edb.MigrationToolkit.initToolkit(MigrationToolkit.java:3122)
> at com.edb.MigrationToolkit.main(MigrationToolkit.java:1521)
> Caused by: java.lang.ClassNotFoundException: org.postgresql.Driver
> at java.net.URLClassLoader$1.run(URLClassLoader.java:366)
> at java.net.URLClassLoader$1.run(URLClassLoader.java:355)
> at java.security.AccessController.doPrivileged(Native Method)
> at java.net.URLClassLoader.findClass(URLClassLoader.java:354)
> at java.lang.ClassLoader.loadClass(ClassLoader.java:425)
> at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:308)
> at java.lang.ClassLoader.loadClass(ClassLoader.java:358)
>
>
Looks like a driver issue. ./runMTK.sh is part of the EnterpriseDB Migration
Toolkit, so please also reach out to EnterpriseDB support or their forum.

---
Regards,
Raghavendra
EnterpriseDB Corporation
Blog: http://raghavt.blogspot.com/


Re: [GENERAL] How to delete few elements from array beginning?

2016-03-09 Thread Mike Sofen
>>Alexander Farber wrote on  Wednesday, March 09, 2016 4:11 AM



Hello fellow PostgreSQL users,

what is the most efficient way, please, to delete a slice from the start of a
longer array (after I have copied it to another array)?

Do I really have to copy a large slice of the array to itself, like in the last 
line here:

pile_array := pile_array || swap_array;

/* here I copy away swap_len elements */
new_hand := pile_array[1:swap_len];

/* here I don't know how to efficiently remove already copied elements 
*/
pile_array := pile_array[(swap_len + 1):ARRAY_LENGTH(pile_array, 1)];

or is there a better way?

Thank you

Alex

<< 

Have you considered a normal (relational), non-array-based data model for this
app (2 or 3 tables in a 1:M/M:M relationship) instead of the single-table model
you’ve shown?  That would then allow you to use normal SQL set-based operations
that are readable, understandable, maintainable and very fast/scalable.

When I see row-by-row operations (looping or cursors) in what should be a
real-time query, that’s my alarm bell that perhaps the code has wandered off a
valid solution path.

Mike
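
For illustration, the model Mike is suggesting might look roughly like this
(table and column names hypothetical); a swap then becomes a set-based UPDATE
that moves rows between locations instead of array surgery:

CREATE TABLE game_tiles (
    gid    integer NOT NULL REFERENCES words_games(gid),
    letter varchar NOT NULL,
    loc    text    NOT NULL CHECK (loc IN ('hand1', 'hand2', 'pile'))
);

-- e.g. returning two swapped letters to the pile in one statement:
-- UPDATE game_tiles SET loc = 'pile'
-- WHERE gid = 1 AND loc = 'hand1' AND letter = ANY (ARRAY['A','B']);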



Re: [GENERAL] regarding table migration from sql to postgres with runmtk.sh

2016-03-09 Thread Andrew Sullivan
On Wed, Mar 09, 2016 at 06:56:38PM +0530, Durgamahesh Manne wrote:
> Hi sir
> I got the following error when I ran runMTK.sh
> 
> Initially I placed the jTDS driver for SQL Server in
> /opt/postgresplus/edbmtk/lib

Looks like you're using postgresplus, which is EnterpriseDB's fork.  But …

> Exception in thread "main" java.lang.NoClassDefFoundError:
> org/postgresql/Driver

… since it can't find the driver, I'd bet that your classpath doesn't
contain /opt/postgresplus/edbmtk/lib.
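
One plausible fix, assuming the toolkit builds its classpath from that lib
directory (the jar file name here is hypothetical):

cp postgresql-9.3-1104.jdbc4.jar /opt/postgresplus/edbmtk/lib/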

Best regards,

A

-- 
Andrew Sullivan
a...@crankycanuck.ca


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] regarding table migration from sql to postgres with runmtk.sh

2016-03-09 Thread Durgamahesh Manne
Hi sir
I got the following error when I ran runMTK.sh

Initially I placed the jTDS driver for SQL Server in
/opt/postgresplus/edbmtk/lib

./runMTK.sh  -sourcedbtype sqlserver -targetdbtype postgresql
feeds_morningstar_bond_sector
Running EnterpriseDB Migration Toolkit (Build 48.0.2) ...
Source database connectivity info...
conn =jdbc:jtds:sqlserver://fxserver.trustfort.com:49888/DataFeedHandler
user =trustfort
password=**
Target database connectivity info...
conn =jdbc:postgresql://192.168.24.128:5432/dbo_sql
user =postgres
password=**
Connecting with source SQL Server database server...
Connected to Microsoft SQL Server, version '10.50.1600'
Connecting with target Postgres database server...
Exception in thread "main" java.lang.NoClassDefFoundError:
org/postgresql/Driver
at
com.edb.dbhandler.postgresql.PGConnection.(PGConnection.java:32)
at
com.edb.common.MTKFactory.createMTKConnection(MTKFactory.java:228)
at
com.edb.MigrationToolkit.createNewTargetConnection(MigrationToolkit.java:5610)
at com.edb.MigrationToolkit.initToolkit(MigrationToolkit.java:3122)
at com.edb.MigrationToolkit.main(MigrationToolkit.java:1521)
Caused by: java.lang.ClassNotFoundException: org.postgresql.Driver
at java.net.URLClassLoader$1.run(URLClassLoader.java:366)
at java.net.URLClassLoader$1.run(URLClassLoader.java:355)
at java.security.AccessController.doPrivileged(Native Method)
at java.net.URLClassLoader.findClass(URLClassLoader.java:354)
at java.lang.ClassLoader.loadClass(ClassLoader.java:425)
at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:308)
at java.lang.ClassLoader.loadClass(ClassLoader.java:358)



regards
mahesh


Re: [GENERAL] Re: "missing chunk number XX for toast value YY in pg_toast ..." after pg_basebackup.

2016-03-09 Thread fredrik
Hi Adrian,

thank you very much for your response.

I ran the "VACUUM ANALYZE" command on the master node.

Regarding log messages.

Here is the contents of the log (excluding connections/disconnections):

2016-02-22 02:30:08 GMT 24616 LOG: recovery has paused
2016-02-22 02:30:08 GMT 24616 HINT: Execute pg_xlog_replay_resume() to continue.
2016-02-22 02:37:19 GMT 23859 DBNAME ERROR: missing chunk number 0 for toast 
value 2747579 in pg_toast_22066
2016-02-22 02:37:19 GMT 23859 DBNAME STATEMENT: COPY public.room_shape 
(room_uuid, data) TO stdout;
2016-02-22 02:37:41 GMT 2648 DBNAME LOG: could not receive data from client: 
Connection reset by peer
2016-02-22 02:37:41 GMT 2648 DBNAME LOG: unexpected EOF on client connection


Best regards,

Fredrik Huitfeldt

On 7 March 2016 16:35:29 +01:00, Adrian Klaver  
wrote:

> On 03/06/2016 10:18 PM,  wrote:
> 
> > HI All,
> > 
> > i would really appreciate any help I can get on this issue.
> > 
> > basically, a pg_basebackup + streaming attach led to a database that we
> > could not read from afterwards.
> > 
> From original post:
> 
> 
> 
> "The issue remained until we ran a full vacuum analyze on the cluster."
> 
> Which cluster was that, the master or the slave?
> 
> "I have logfiles from the incident, but I cannot see anything out of the 
> ordinary (despite having a fair amount of experience investigating postgresql 
> logs)."
> 
>
> Can we see the section before and after ERROR?
> 
> 
> > 
> > Best regards,
> > Fredrik
> > 
> > PS please advise if this is better posted on another list.
> > 
> 
> -- 
> Adrian Klaver
> 
>



[GENERAL] How to delete few elements from array beginning?

2016-03-09 Thread Alexander Farber
Hello fellow PostgreSQL users,

what is the most efficient way, please, to delete a slice from the start of a
longer array (after I have copied it to another array)?

Do I really have to copy a large slice of the array to itself, like in the
last line here:

pile_array := pile_array || swap_array;

/* here I copy away swap_len elements */
new_hand := pile_array[1:swap_len];

/* here I don't know how to efficiently remove already copied
elements */
pile_array := pile_array[(swap_len + 1):ARRAY_LENGTH(pile_array,
1)];

or is there a better way?

Thank you
Alex

P.S. The listing of the entire stored function in question:

CREATE OR REPLACE FUNCTION words_swap_game(
IN in_uid integer,
IN in_gid integer,
IN in_swap varchar(7))
RETURNS void AS
$func$
DECLARE
i   integer;
j   integer;
swap_len    integer;
hand_len    integer;
pile_len    integer;
swap_array  varchar[];
pile_array  varchar[];
old_handvarchar[];
new_handvarchar[];
hand_ignore boolean[];
BEGIN
swap_array := STRING_TO_ARRAY(in_swap, NULL);
swap_len := ARRAY_LENGTH(swap_array, 1);

SELECT hand1, pile, ARRAY_LENGTH(hand1, 1), ARRAY_LENGTH(pile, 1)
INTO old_hand, pile_array, hand_len, pile_len
FROM words_games
WHERE gid = in_gid
AND player1 = in_uid
/* and it is first player's turn */
AND (played1 IS NULL OR played1 < played2);

IF NOT FOUND THEN
SELECT hand2, pile, ARRAY_LENGTH(hand2, 1),
ARRAY_LENGTH(pile, 1)
INTO old_hand, pile_array, hand_len, pile_len
FROM words_games
WHERE gid = in_gid
AND player2 = in_uid
/* and it is second player's turn */
AND (played2 IS NULL OR played2 < played1);
END IF;

pile_array := pile_array || swap_array;
-- pile_array := words_shuffle(pile_array);
new_hand := pile_array[1:swap_len];
pile_array := pile_array[(swap_len + 1):(pile_len + swap_len)]; /* XXX
the upper bound must cover the swap_len letters appended above */

hand_ignore := ARRAY_FILL(FALSE, ARRAY[hand_len]);

<<hand_loop>>
FOR i IN 1..hand_len LOOP
FOR j IN 1..swap_len LOOP
IF hand_ignore[j] = FALSE AND
   old_hand[i] = swap_array[j] THEN
hand_ignore[j] := TRUE;
CONTINUE hand_loop;
END IF;
END LOOP;

new_hand := new_hand || old_hand[i];
END LOOP;
/*
UPDATE words_games
SET hand1 = new_hand,
pile = pile_array,
played1 = CURRENT_TIMESTAMP
WHERE gid = in_gid
AND player1 = in_uid
-- and it is first player's turn
AND (played1 IS NULL OR played1 < played2);

IF NOT FOUND THEN
UPDATE words_games
SET hand2 = new_hand,
pile = pile_array,
played2 = CURRENT_TIMESTAMP
WHERE gid = in_gid
AND player2 = in_uid
-- and it is second player's turn
AND (played2 IS NULL OR played2 < played1);
END IF;
*/

END
$func$ LANGUAGE plpgsql;
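
One note on the slice marked XXX above: PostgreSQL silently reduces a slice
that overruns the array bounds to just the overlapping region, so the upper
bound does not have to be exact. A quick check:

SELECT (ARRAY['a','b','c','d','e'])[3:1000];  -- {c,d,e}, clamped to the array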


Re: [GENERAL] Windows default directory for client certificates

2016-03-09 Thread Albe Laurenz
Lupi Loop wrote:
> PostgreSQL documentation at 
> http://www.postgresql.org/docs/9.5/static/libpq-ssl.html
> says that when a client certificate is requested by the server, a Windows
> psql client will by default use the credentials located at
> %APPDATA%\postgresql\postgresql.crt and
> %APPDATA%\postgresql\postgresql.key
> 
> However, my psql client application (v 9.5.0) in a Windows Server 2012 R2 
> cannot find the certificates
> in this location and only works when this location is specifically set using 
> the sslcert and sslkey
> attributes when connecting. Is this a bug or am I using a wrong path?
> 
> This an example of execution:
> 
> ---
> C:\Program Files\PostgreSQL\9.5\data> ..\bin\psql 
> "postgresql://postgres@localhost/postgres"
> psql: FATAL:  connection requires a valid client certificate
> FATAL:  no pg_hba.conf entry for host "::1", user "postgres", 
> database "postgres", SSL off
> 
> C:\Program Files\PostgreSQL\9.5\data> ..\bin\psql
> "postgresql://postgres@localhost/postgres?sslkey=postgresql\postgresql.key=postgresql\postgres
> ql.crt"
> psql (9.5.0)
> SSL connection (protocol: TLSv1.2, cipher: ECDHE-RSA-AES256-GCM-SHA384,bits: 
> 256, compression: off)
> Type "help" for help.
> postgres=#
> 
> cheers

To make this work, you will have to have a root certificate "root.crt" in the
server's data directory and the configuration parameter "ssl_ca_file" set to 
"root.crt".

The corresponding line in pg_hba.conf should look like this:
hostssl  <database>  <user>  <client-address>/32  md5  clientcert=1
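
Filled in with hypothetical values, that line might read:

hostssl  mydb  myuser  192.0.2.10/32  md5  clientcert=1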

Then you have to restart the server.
But I guess you have done that since it works if you specify the files 
explicitly.

Perhaps you are not in the %APPDATA% directory.
What do you get when you type
   echo %APPDATA%
on the command prompt?

One possibility to investigate this is to run "Process Monitor" and add the 
filters
"Process Name is psql.exe" and "Patch ends with postgresql.key".
Then you should see where psql looks for the client key.

Yours,
Laurenz Albe

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Script to check replication

2016-03-09 Thread Jehan-Guillaume de Rorthais
On Fri, 4 Mar 2016 19:35:16 +0000,
Ashish Chauhan  wrote:

> Hi,
> 
> We are running Streaming replication with Hot standby in our current
> production. I do have below script to check replication status and
> replication lag. Does anyone have script which runs every 15 mins to check
> replication status and send out email if replication is not running or
> lagging behind.
> 
> SELECT pg_last_xlog_receive_location() AS receive,
>        pg_last_xlog_replay_location() AS replay,
>        now() - pg_last_xact_replay_timestamp() AS replication_delay,
>        (extract(epoch FROM now()) -
>         extract(epoch FROM pg_last_xact_replay_timestamp()))::int AS lag
> 
> Thanks for your help!

Usually, such a requirement is fulfilled by a monitoring system, e.g. Nagios,
Zabbix, Munin, and so on.

For the replication or lag check, we are using check_pgactivity [1] (see
releases [2]); look at its "streaming_delta" service.  If you don't use Nagios
or a compatible derivative, writing a wrapper around this script is quite
easy.

Regards,

[1] https://github.com/OPMDG/check_pgactivity
[2] https://github.com/OPMDG/check_pgactivity/releases
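
If a plain SQL probe is still preferred, a primary-side complement to the
standby query quoted above, sketched for 9.x:

SELECT client_addr, state,
       pg_xlog_location_diff(pg_current_xlog_location(),
                             replay_location) AS replay_lag_bytes
FROM pg_stat_replication;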


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general