(multiplatform) replication problem

2019-01-08 Thread W.P.

Hi there,
I have the following setup:
- master database, 9.5.7, on i386 (Pentium M);
now I want to replicate the database to:
- slave database, 9.5.7, on armhf (Orange Pi PC+).
Is it possible?
I already followed the "replication setup" tutorial at
https://www.howtoforge.com/tutorial/postgresql-replication-on-ubuntu-15-04/
and downloaded and compiled 9.5.7 (the same version as on the master DB)
from source, because the ARM machine initially had PostgreSQL 9.4 and I got
an "incompatible server version" error at the pg_basebackup step. If I
initialize the DB from scratch instead, it starts without problems.


Now the pg_basebackup step succeeds, but trying to start the slave gives an error:
'FATAL:  incorrect checksum in control file'

1) Is it possible to make such a configuration work?
2) If so, how?

Best regards
Wawrzyniec Piotrzkowski
Poland



multiple configurations with repmgr

2019-01-08 Thread ROS Didier
Hi
   We are going to use repmgr with one node for the primary, one
node for the standby, and one node for the witness.
   It works fine with one project.

The problem is that we want to have several other projects, each one with its
own primary node, its own standby node, and the same witness node.
   Question: is it possible to do that? One witness node which
monitors several other nodes.

   Thanks in advance

Best Regards


Didier ROS
Expertise SGBD
DS IT/IT DMA/Solutions Groupe EDF/Expertise Applicative - SGBD
Nanterre Picasso - E2 565D (aile nord-est)
32 Avenue Pablo Picasso
92000 Nanterre








This message and any attachments (the 'Message') are intended solely for the 
addressees. The information contained in this Message is confidential. Any use 
of information contained in this Message not in accord with its purpose, any 
dissemination or disclosure, either whole or partial, is prohibited except 
formal approval.

If you are not the addressee, you may not copy, forward, disclose or use any 
part of it. If you have received this message in error, please delete it and 
all copies from your system and notify the sender immediately by return message.

E-mail communication cannot be guaranteed to be timely secure, error or 
virus-free.


jdbc PGCopyOutputStream close() v. endCopy()

2019-01-08 Thread Rob Sargent
As is often the case, I'm unsure which of these methods to use, or whether
I'm using them correctly.


PG10.5, jooq-3.10.8, postgresql-42.1.4, linux (redhat 6.9) and logback 
to a file.


I have been using close() for a while but thought I would make use of 
either the returned long from endCopy() or perhaps getHandledRowCount().


Both work perfectly, but when I use endCopy() I always get the exception 
shown near the bottom of this log excerpt. The COPY runs on its own thread 
using the same connection as the direct jooq writes also listed. Again, 
the data is all saved, but I am worried that I'm not closing properly 
even when I use close(). The data here doesn't warrant bulk copy; it's 
just a quick example to reproduce the problem.


   13:32:55.785 [pool-1-thread-1] DEBUG
   edu.utah.camplab.jx.PayloadFromMux - STAGING TABLE CREATED:
   bulk."flk_g16-forcing very long name to trigger truncation_22_8045c0"
   13:32:55.786 [pool-1-thread-1] INFO
   edu.utah.camplab.jx.PayloadFromMux -
   8045c057-99ec-490b-90c1-85875269afee: started COPY work at 1546979575786
   13:32:55.789 [pool-1-thread-1] INFO
   edu.utah.camplab.jx.PayloadFromMux -
   8045c057-99ec-490b-90c1-85875269afee: Total segment save took 22 ms
   13:32:55.790 [pool-1-thread-1] INFO
   edu.utah.camplab.jx.AbstractPayload -
   8045c057-99ec-490b-90c1-85875269afee: closing process
   8045c057-99ec-490b-90c1-85875269afee
   13:32:55.790 [8045c057-99ec-490b-90c1-85875269afee] INFO
   e.u.camplab.jx.PayloadWriterThread - bulk."flk_g16-forcing very long
   name to trigger truncation_22_8045c0": Begin bulk copy segment
   13:32:55.793 [8045c057-99ec-490b-90c1-85875269afee] INFO
   e.u.camplab.jx.PayloadWriterThread - bulked up to 89, maybe?
   13:32:55.793 [pool-1-thread-1] DEBUG org.jooq.tools.LoggerListener -
   Executing batch query    : insert into "process_input" ("id",
   "process_id", "input_type", "input_ref") values (?, ?, ?, ?)
   13:32:55.795 [8045c057-99ec-490b-90c1-85875269afee] INFO
   e.u.camplab.jx.PayloadWriterThread - bulked up to 197, maybe?
   13:32:55.797 [8045c057-99ec-490b-90c1-85875269afee] INFO
   e.u.camplab.jx.PayloadWriterThread - bulked up to 318, maybe?
   13:32:55.798 [8045c057-99ec-490b-90c1-85875269afee] INFO
   e.u.camplab.jx.PayloadWriterThread - bulked up to 393, maybe?
   13:32:55.799 [8045c057-99ec-490b-90c1-85875269afee] INFO
   e.u.camplab.jx.PayloadWriterThread - 393/393 segments delivered in 9 ms
   13:32:55.799 [8045c057-99ec-490b-90c1-85875269afee] DEBUG
   e.u.camplab.jx.PayloadWriterThread - staged in 9 ms
   13:32:55.800 [pool-1-thread-1] DEBUG org.jooq.tools.LoggerListener -
   Executing batch query    : insert into "process_output" ("id",
   "process_id", "output_type", "output_ref") values (?, ?, ?, ?)
   13:32:55.805 [8045c057-99ec-490b-90c1-85875269afee] ERROR
   e.u.camplab.jx.PayloadWriterThread - bulk."flk_g16-forcing very long
   name to trigger truncation_22_8045c0": i/o trouble
   java.io.IOException: Ending write to copy failed.
    at
   org.postgresql.copy.PGCopyOutputStream.close(PGCopyOutputStream.java:107)
   ~[postgresql-42.1.4.jar:42.1.4]
    at
   edu.utah.camplab.jx.PayloadWriterThread.run(PayloadWriterThread.java:75)
   ~[transport/:na]
   Caused by: org.postgresql.util.PSQLException: Tried to write to an
   inactive copy operation
    at
   
org.postgresql.core.v3.QueryExecutorImpl.writeToCopy(QueryExecutorImpl.java:978)
   ~[postgresql-42.1.4.jar:42.1.4]
    at
   org.postgresql.core.v3.CopyInImpl.writeToCopy(CopyInImpl.java:35)
   ~[postgresql-42.1.4.jar:42.1.4]
    at
   org.postgresql.copy.PGCopyOutputStream.endCopy(PGCopyOutputStream.java:166)
   ~[postgresql-42.1.4.jar:42.1.4]
    at
   org.postgresql.copy.PGCopyOutputStream.close(PGCopyOutputStream.java:105)
   ~[postgresql-42.1.4.jar:42.1.4]
    ... 1 common frames omitted
   13:32:55.810 [pool-1-thread-1] DEBUG org.jooq.tools.LoggerListener -
   Executing batch query    : insert into "process_arg" ("id",
   "process_id", "argname", "argvalue_int", "argvalue_float",
   "argvalue_text") values (?, ?, ?, ?, ?, ?)

The class doing the bulk work, PayloadWriterThread, extends Thread; the 
thread name is set by the caller. The critical parts are as follows:


    @Override
    public void run() {
        try (PGCopyOutputStream writer = new PGCopyOutputStream(copyIn)) {
            long startAt = System.currentTimeMillis();
            deliverSegments(writer);
            long postDeliver = System.currentTimeMillis();
            logger.debug("staged in {} ms", postDeliver - startAt);
        }
        catch (SQLException sqle) {
            sqle.printStackTrace();
            logger.error("{}: sql trouble", tableName, sqle);
            throw new RuntimeException("Bulk copy failed on sql", sqle);
        }
        catch (IOException ioe) {
            ioe.printStackTrace();
            logger.error("{}: i/o trouble", tableName, ioe);
            throw new RuntimeException("Bulk copy failed on i/o", ioe);
        }
    }
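The stack trace itself suggests what happens: in driver 42.1.4, PGCopyOutputStream.close() calls endCopy(), so an explicit endCopy() followed by the implicit close() from try-with-resources ends the COPY twice, and the second attempt hits the "inactive copy operation" error. The behaviour can be modelled without a database; the class names below are illustrative, not the pgjdbc API, and the workaround at the end assumes the real stream's isActive() check behaves the same way:

```java
import java.io.IOException;

// Minimal model of the 42.1.4 behaviour (illustrative names, NOT the pgjdbc
// API): endCopy() throws if the operation has already ended, and close()
// unconditionally tries to end it again.
class CopyStreamModel implements AutoCloseable {
    private boolean active = true;

    // Like PGCopyOutputStream.endCopy(): ends the COPY, returns a row count.
    long endCopy() throws IOException {
        if (!active) {
            throw new IOException("Tried to write to an inactive copy operation");
        }
        active = false;
        return 393L; // stand-in for the real handled-row count
    }

    boolean isActive() { return active; }

    // Like close() in 42.1.4: ends the copy without checking isActive().
    @Override
    public void close() throws IOException {
        endCopy();
    }
}

public class CopyCloseDemo {
    public static void main(String[] args) throws IOException {
        // Failing pattern: explicit endCopy() inside try-with-resources,
        // so close() ends the already-ended copy and throws.
        boolean threw = false;
        try (CopyStreamModel writer = new CopyStreamModel()) {
            writer.endCopy();
        } catch (IOException e) {
            threw = true;
        }
        System.out.println("double-end threw: " + threw); // true

        // Workaround sketch: end explicitly, then close only while active.
        CopyStreamModel writer = new CopyStreamModel();
        long rows = writer.endCopy();
        if (writer.isActive()) {
            writer.close();
        }
        System.out.println("rows: " + rows); // 393
    }
}
```

Under that model, either pattern alone is fine (close() alone, or endCopy() alone); it is only the combination inside try-with-resources that ends the operation twice.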

Re: Optimizing the same PREPAREd static query (without parameters)

2019-01-08 Thread David Rowley
On Tue, 8 Jan 2019 at 06:45, Mitar  wrote:
> So it could learn that the values used are not distinct values, or
> that column values are not uniformly distributed? And maybe decide to
> change the plan? So it makes a plan, runs it, determines that the plan
> was not as good as expected, I run it again, it decides to try another
> plan. It is better, it decides to switch to it and keep it.

Sounds like machine learning in the query planner.  Nothing like this
exists in core, but there have been projects in the past to do this,
for example, 
https://axleproject.eu/2015/07/17/augmenting-the-postgresql-planner-with-machine-learning/

Perhaps others have worked on similar things; however, I don't recall any
conversations about it on the postgresql.org mailing lists. Maybe it's
worth searching the archives?

-- 
 David Rowley   http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services



Re: multiple configurations with repmgr

2019-01-08 Thread Nicolas Karolak
As far as I know, repmgr works at the "cluster" level, so you would need
to run another PostgreSQL "cluster" instance (on another port) and
another repmgrd instance (loading a configuration that points to the new
"cluster").
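On the witness host that could look roughly like the sketch below, with one configuration file and one repmgrd per project; all paths, ports, and node names here are hypothetical examples, not values from the original setup:

```
# /etc/repmgr/projectA.conf -- witness for project A (hypothetical)
node_id=3
node_name='witnessA'
conninfo='host=witness-host port=5433 dbname=repmgr user=repmgr'
data_directory='/var/lib/postgresql/witnessA'

# /etc/repmgr/projectB.conf -- witness for project B (hypothetical)
node_id=3
node_name='witnessB'
conninfo='host=witness-host port=5434 dbname=repmgr user=repmgr'
data_directory='/var/lib/postgresql/witnessB'

# Then run one repmgrd per configuration file:
#   repmgrd -f /etc/repmgr/projectA.conf
#   repmgrd -f /etc/repmgr/projectB.conf
```

Each witness PostgreSQL instance belongs to exactly one project's cluster, so the shared physical host holds several witnesses rather than one witness watching several clusters.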

Regards,
Nicolas KAROLAK | Linux Systems Engineer
6-8, rue André Voguet - 94200 Ivry-sur-Seine
+33 1 77 56 77 96 | www.ubicast.eu | www.ubicast.tv

On Tue, Jan 8, 2019 at 5:17 PM ROS Didier  wrote:
>
> Hi
>
>We are going to use repmgr  with one node for the primary, one 
> node for the standby and one node for the witness.
>
>It works fine with one project.
>
>
>
> The problem is  that we want to have several other projects. Each one with 
> its own primary node, its own standby node and the same witness node.
>
>Question : is it possible to do that ? One witness node which 
> surveys several other nodes.
>
>
>
>Thanks in advance