RE: [Extern] Re: Advice on binary installation

2021-04-13 Thread Kevin Brannen
-----Original Message-----
From: Zwettler Markus (OIZ)
Sent: Tuesday, April 13, 2021 8:08 AM

> -----Original Message-----
> From: Paul Förster <paul.foers...@gmail.com>
> Sent: Tuesday, 13 April 2021 15:02
> Subject: [Extern] Re: Advice on binary installation
>
> Hi Markus,
>
> On 13. Apr, 2021, at 14:43, Zwettler Markus (OIZ)
> <markus.zwett...@zuerich.ch> wrote:
> > We assume to get more than 100 Postgres clusters in the future.
> >
> > We will get a very heterogeneous binary installation basis if we install
> > needed extensions (e.g. ip4r) or software (e.g. patroni) on a per project
> > basis.
> >
> > There could be even more incompatibility problems otherwise if we install
> > all globally needed extensions or software with every project to get a
> > homogeneous binary installation.
> >
> > Software installation is done with yum using PGDG downstream channels.
> >
> > Any recommendations?
>
> I don't know how you plan to do it, but I can only say how we do it:
>
> - Compile from source including all extensions needed (make install-world).
>   Create a binary-only directory this way, i.e. compile source to
>   /data/postgres/xx.x
> - Create a tar file of said directory.
> - Distribute that via Ansible, untarring it on the destination servers.
> - Have a standard postgresql.conf ready which includes a
>   shared_preload_libraries = 'pg_stat_statements'
>   (or whatever you need) for use with initdb.
>
> This way, we make sure that all servers get the same new software directory
> as needed, in a separate directory including all necessary extensions, which
> means we can then delete the old directory when it is no longer needed (i.e.
> after all databases have been upgraded). Also, this makes sure everything is
> loaded properly.
>
> In a few individual cases, we then use "create extension", but only for
> extensions which we deliver with the tar via Ansible. If there is going to
> be a new extension (which we avoid if we can), then we put it into the tar
> archive and nowhere else. So it's on all servers, but only a few databases
> use it.
>
> Hope this helps.
>
> Paul

Our IT Sec requires software patching at least every 3 months.

How do you do software patching with your method? Creating and deploying new
tarballs every 3 months?

Thanks, Markus





We used to compile our own to do something very similar, until we got rid of
the need for one extension with an external dependency that was holding us
back. After that, we just use the community packages and install the
extensions package too. That allows us to do "create extension X" to load the
ones we need in our application.
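
For instance (ip4r here purely as a stand-in for whatever your application
needs):

CREATE EXTENSION IF NOT EXISTS ip4r;  -- per database, once the OS extension package is installed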



When we need to upgrade, we just get the newer community packages and install 
them with the OS package manager ("rpm" in our case). As long as you stay 
within the same major version it's all easy. If you cross major versions then 
"pg_upgrade" is required (be sure to use the -k option).



With a little effort, all of that is scriptable so you can run it on any
server as required when it's time for that server to upgrade. Ansible, Puppet,
Chef, or your own distribution software can be useful to get it there and put
it into service.



To us, the above is the easy part. The hard part is certifying that our
software works with the newer version of Pg. It's rare that we have to change
things, but Pg has "tightened up" a little over time, showing us where we were
a little "loose and carefree" in our SQL. That hasn't hit us in the last couple
of releases, so hopefully we're beyond that and now we're left with "what new
features look useful to take advantage of?".



HTH,

Kevin



RE: Ways to "serialize" result set for later use?

2021-04-13 Thread Kevin Brannen
From: Adam Brusselback 
Sent: Monday, April 12, 2021 12:51 PM

> Checking data (DML), whether functions are doing the right things, is
> something we do in our code unit tests.

This is exactly what I am writing: unit tests for my code (which is pl/pgsql).
This is an ELT pipeline for my customers to bulk update their data in my
system, with detailed error reporting for any issues per row/column. The code
is all plpgsql, as are the few tests I've written so far. pgTAP is my unit
testing framework for this process.

So unit testing my company's (vast) database code is something I am just trying
to figure out and get into my workflow, and it didn't feel like I had to fight
too hard with it at this point, other than $subject$. And even that isn't an
issue with my hacky function in place; it just feels a little... dirty, I
guess? I was just wanting a gut check on whether there seemed to be an
obviously better way to get the same results.

If there is something built into Pg that does what you want, I'm not aware of 
it, but there are a number of extensions out there. You could check out that 
list to see.

Otherwise, I'm not sure we can help you much. If that approach seems to be 
working, continue on and see where it takes you.

Our unit tests are in Perl, based on a module we created that inherits from
Test::More. From there, we can exercise any code we need, much of which
contains calls to DB functions, injecting known values for all the use cases we
can think of and making sure we get back the expected results. That also means
we can COPY data in for testing if required. It's pretty simple and
straightforward in some ways, but it works for us.

HTH,
Kevin


Suboptimal plan when IN(...), ORDER BY and LIMIT are used (no JOINs)

2021-04-13 Thread Dmitry Koterov
Hi.

I'm trying to understand the logic which the planner uses in "WHERE x IN
(IDS) ORDER BY y LIMIT N" queries when the correct index exists in the
database.

I expected that, if the IDS list is small and N is small too, the planner
would do the following: for each element in IDS, query the first N items
from the index, then union the results (up to IDS*N elements, thus small)
and limit them to N items.

Instead, the planner decides to run a bitmap index scan, fetch thousands of
rows, and then post-filter most of them. Why? Is it possible to somehow
tell the planner to use individual first-N fetches?

(SET STATISTICS to 1 for both columns doesn't change anything; also I
don't see how cardinality of any of these fields can theoretically affect
the plan: we still need first N items from each of the index sub-parts.)

CREATE TABLE roles(
  id bigint NOT NULL,
  id1 bigint,
  created_at timestamptz NOT NULL
);
CREATE INDEX ON roles(id1, created_at DESC);

EXPLAIN ANALYZE SELECT * FROM roles
WHERE id1 IN(
  '1001361878439251615', '1001349402553202617', '1001329448424677858',
  '1001348457743394950', '1001361706624116300', '1001338330225145648',
  '1001363186688934748', '1001366841628692013'
)
ORDER BY created_at DESC LIMIT 50

Limit  (cost=50171.99..50177.83 rows=50 width=42) (actual time=206.056..208.865 rows=50 loops=1)
  ->  Gather Merge  (cost=50171.99..57802.99 rows=65404 width=42) (actual time=206.055..208.857 rows=50 loops=1)
        Workers Planned: 2
        Workers Launched: 2
        ->  Sort  (cost=49171.97..49253.73 rows=32702 width=42) (actual time=198.944..198.948 rows=40 loops=3)
              Sort Key: created_at DESC
              Sort Method: top-N heapsort  Memory: 31kB
              Worker 0:  Sort Method: top-N heapsort  Memory: 30kB
              Worker 1:  Sort Method: top-N heapsort  Memory: 32kB
              ->  Parallel Bitmap Heap Scan on roles  (cost=4209.08..48085.63 rows=32702 width=42) (actual time=78.119..180.352 rows=60563 loops=3)
                    Recheck Cond: (id1 = ANY ('{1001361878439251615,1001349402553202617,1001329448424677858,1001348457743394950,1001361706624116300,1001338330225145648,1001363186688934748,1001366841628692013}'::bigint[]))
                    Filter: (cred_id = '1001344096118566254'::bigint)
                    Rows Removed by Filter: 526
                    Heap Blocks: exact=5890
                    ->  Bitmap Index Scan on roles_asset_created_desc  (cost=0.00..4189.46 rows=182139 width=0) (actual time=73.761..73.761 rows=183934 loops=1)
                          Index Cond: (id1 = ANY ('{1001361878439251615,1001349402553202617,1001329448424677858,1001348457743394950,1001361706624116300,1001338330225145648,1001363186688934748,1001366841628692013}'::bigint[]))
Planning Time: 0.590 ms
Execution Time: 208.935 ms
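
For illustration, the per-element strategy described above can be written out
by hand with a lateral join; a sketch against the schema above (untested):

SELECT r.*
FROM unnest(ARRAY[
  '1001361878439251615', '1001349402553202617', '1001329448424677858',
  '1001348457743394950', '1001361706624116300', '1001338330225145648',
  '1001363186688934748', '1001366841628692013'
]::bigint[]) AS ids(id1)
CROSS JOIN LATERAL (
  SELECT t.*
  FROM roles t
  WHERE t.id1 = ids.id1
  ORDER BY t.created_at DESC
  LIMIT 50                 -- first N per element, straight off (id1, created_at DESC)
) AS r
ORDER BY r.created_at DESC
LIMIT 50;                  -- final top N over the per-element results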


RE: looking for a installation package to Using GSSAPI with Postgres12 for windows

2021-04-13 Thread LE MENTEC, SANDRINE
Hello Jehan-Guillaume,

Thanks a lot for your answer. It looks simpler to use SSPI (which is already
included).

I have tried your suggestion and it seems to work. I need to try some values of
the different parameters to make my test fully work (include_realm,
compat_realm, upn_username and krb_realm).

Thanks again,

Regards,


Sandrine Le Mentec
Software Engineer | CSD 

-----Original Message-----
From: Jehan-Guillaume de Rorthais
Sent: Tuesday, 13 April 2021 16:35
To: LE MENTEC, SANDRINE
Cc: pgsql-general@lists.postgresql.org; KOUMIRA, Adnane
Subject: Re: looking for a installation package to Using GSSAPI with Postgres12
for windows

Hello,

On Tue, 13 Apr 2021 08:10:06 +
"LE MENTEC, SANDRINE"  wrote:

> Dear postgres community,
> 
> I am currently working on Postgres 12.5 on a Windows Server 2016. I
> need to use Kerberos token for the authentication on the database.
> 
> To do so, I am looking for an installation executable file for Windows
> with the GSSAPI included. The basic one provided by EDB does not seem
> to include these features.

I had the same problem and fixed it by using sspi.

If you want to authenticate against a Microsoft domain, try to replace "gss"
with "sspi", and maybe set "compat_realm=0" if needed. It will try to use
Kerberos, or fall back to NTLM.

See: https://www.postgresql.org/docs/12/sspi-auth.html

Regards,





Re: "missing chunk number 0 for toast value xxx in pg_toast_xxx" when pg_basebackup

2021-04-13 Thread Laurenz Albe
On Tue, 2021-04-13 at 06:36 -0700, MaXinjian wrote:
> > Why do you write the WAL to /tmp/pg_wal, only to later mount that at the
> > default location?
>
> The pg_wal dir has a size limitation; if WAL files are too large, they will be
> overwritten, right?

No, they won't.
You could run out of space on the file system though.

> > I see nothing wrong with what you are doing, but I may have got lost in
> > your complicated procedure.
> > You don't happen to remove "backup_label", do you?
>
> em, I do remove backup_label...

Then that's your problem.
That will corrupt your data, because recovery starts from the wrong
checkpoint.

> 1. It means recovery.conf is not necessary, backup_label is necessary?

Yes, exactly.

> 2. Which key in backup_label is necessary?

The whole file needs to be preserved unchanged, just as it is.
Don't mess with that file.

> 3. I searched the log; it did recover.
> Then, if there is no backup_label, what's the default START WAL LOCATION and
> CHECKPOINT LOCATION?

That's the catch.

"backup_label" is the *only way* to tell a backup from a crashed
PostgreSQL cluster.

If there is no "backup_label", PostgreSQL will get the latest checkpoint
from the control file (global/pg_control), which may well be later than
the checkpoint that started the backup, so you will fail to recover some
transactions.
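
For illustration, a backup_label written by pg_basebackup looks roughly like
this (values made up; the first two lines are exactly what recovery needs):

START WAL LOCATION: 0/9000028 (file 000000010000000000000009)
CHECKPOINT LOCATION: 0/9000060
BACKUP METHOD: streamed
BACKUP FROM: master
START TIME: 2021-04-13 12:00:00 UTC
LABEL: pg_basebackup base backup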

Yours,
Laurenz Albe
-- 
Cybertec | https://www.cybertec-postgresql.com





Re: "missing chunk number 0 for toast value xxx in pg_toast_xxx" when pg_basebackup

2021-04-13 Thread Tom Lane
Ma Xinjian  writes:
> When I use pg_basebackup to backup and restore db(Let's call it A) to a
> standalone instance(Let's call it B), "missing chunk number 0 for toast
> value xxx in pg_toast_xxx" errors output.
> PG version: 10.3

10.3 is quite a few bug fixes ago.  Maybe you'd have better results
with the current release (10.16).

regards, tom lane




Re: looking for a installation package to Using GSSAPI with Postgres12 for windows

2021-04-13 Thread Jehan-Guillaume de Rorthais
Hello,

On Tue, 13 Apr 2021 08:10:06 +
"LE MENTEC, SANDRINE"  wrote:

> Dear postgres community,
> 
> I am currently working on Postgres 12.5 on a Windows Server 2016. I need to
> use Kerberos token for the authentication on the database.
> 
> To do so, I am looking for an installation executable file for Windows with
> the GSSAPI included. The basic one provided by EDB does not seem to include
> these features.

I had the same problem and fixed it by using sspi.

If you want to authenticate against a Microsoft domain, try to replace "gss"
with "sspi", and maybe set "compat_realm=0" if needed. It will try to use
Kerberos, or fall back to NTLM.

See: https://www.postgresql.org/docs/12/sspi-auth.html
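
For illustration, a hypothetical pg_hba.conf line for that setup (the address,
database and user columns are placeholders to adapt):

host    all    all    10.0.0.0/8    sspi    include_realm=1 compat_realm=0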

Regards,




Re: "missing chunk number 0 for toast value xxx in pg_toast_xxx" when pg_basebackup

2021-04-13 Thread MaXinjian
> Why do you write the WAL to /tmp/pg_wal, only to later mount that at the
> default location?
The pg_wal dir has a size limitation; if WAL files are too large, they will be
overwritten, right?

> I see nothing wrong with what you are doing, but I may have got lost in
> your complicated procedure.

> You don't happen to remove "backup_label", do you?
em, I do remove backup_label...

1. It means recovery.conf is not necessary, backup_label is necessary?
2. Which key in backup_label is necessary?
3. I searched the log; it did recover.
Then, if there is no backup_label, what's the default START WAL LOCATION and
CHECKPOINT LOCATION?







AW: [Extern] Re: Advice on binary installation

2021-04-13 Thread Zwettler Markus (OIZ)
> -----Original Message-----
> From: Paul Förster 
> Sent: Tuesday, 13 April 2021 15:02
> To: Zwettler Markus (OIZ) 
> Cc: pgsql-gene...@postgresql.org
> Subject: [Extern] Re: Advice on binary installation
> 
> Hi Markus,
> 
> On 13. Apr, 2021, at 14:43, Zwettler Markus (OIZ) 
> wrote:
> > We assume to get more than 100 Postgres clusters in the future.
> >
> > We will get a very heterogeneous binary installation basis if we install
> > needed extensions (e.g. ip4r) or software (e.g. patroni) on a per project
> > basis.
> >
> > There could be even more incompatibility problems otherwise if we install
> > all globally needed extensions or software with every project to get a
> > homogeneous binary installation.
> >
> > Software installation is done with yum using PGDG downstream channels.
> >
> > Any recommendations?
> 
> I don't know how you plan to do it, but I can only say how we do it:
> 
> - Compile from source including all extensions needed (make install-world).
>   Create a binary-only directory this way, i.e. compile source to
>   /data/postgres/xx.x
> - Create a tar file of said directory.
> - Distribute that via Ansible, untarring it on the destination servers.
> - Have a standard postgresql.conf ready which includes a
>   shared_preload_libraries = 'pg_stat_statements'
>   (or whatever you need) for use with initdb.
> 
> This way, we make sure that all servers get the same new software directory
> as needed, in a separate directory including all necessary extensions, which
> means we can then delete the old directory when it is no longer needed (i.e.
> after all databases have been upgraded). Also, this makes sure everything is
> loaded properly.
> 
> In a few individual cases, we then use "create extension", but only for
> extensions which we deliver with the tar via Ansible. If there is going to
> be a new extension (which we avoid if we can), then we put it into the tar
> archive and nowhere else. So it's on all servers, but only a few databases
> use it.
> 
> Hope this helps.
> 
> Paul
> 


Our IT Sec requires software patching at least every 3 months.

How do you do software patching with your method? Creating and deploying new
tarballs every 3 months?

Thanks, Markus





Re: Advice on binary installation

2021-04-13 Thread Paul Förster
Hi Markus,

On 13. Apr, 2021, at 14:43, Zwettler Markus (OIZ)  
wrote:
> We assume to get more than 100 Postgres clusters in the future.
>  
> We will get a very heterogeneous binary installation basis if we install 
> needed extensions (e.g. ip4r) or software (e.g. patroni) on a per project 
> basis.
>  
> There could be even more incompatibility problems otherwise if we install all
> globally needed extensions or software with every project to get a homogeneous
> binary installation.
>  
> Software installation is done with yum using PGDG downstream channels.
>  
> Any recommendations?

I don't know how you plan to do it, but I can only say how we do it:

- Compile from source including all extensions needed (make install-world).
  Create a binary-only directory this way, i.e. compile source to
  /data/postgres/xx.x
- Create a tar file of said directory.
- Distribute that via Ansible, untarring it on the destination servers.
- Have a standard postgresql.conf ready which includes a
  shared_preload_libraries = 'pg_stat_statements'
  (or whatever you need) for use with initdb.
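
For reference, shared_preload_libraries only loads the library at server start;
the extension's SQL objects still have to be created in each database
afterwards, e.g.:

CREATE EXTENSION IF NOT EXISTS pg_stat_statements;  -- per database
SELECT count(*) FROM pg_stat_statements;            -- sanity check that the view exists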

This way, we make sure that all servers get the same new software directory as
needed, in a separate directory including all necessary extensions, which means
we can then delete the old directory when it is no longer needed (i.e. after
all databases have been upgraded). Also, this makes sure everything is loaded
properly.

In a few individual cases, we then use "create extension", but only for
extensions which we deliver with the tar via Ansible. If there is going to be a
new extension (which we avoid if we can), then we put it into the tar archive
and nowhere else. So it's on all servers, but only a few databases use it.

Hope this helps.

Paul



Advice on binary installation

2021-04-13 Thread Zwettler Markus (OIZ)
Hi,

We assume to get more than 100 Postgres clusters in the future.

We will get a very heterogeneous binary installation basis if we install needed 
extensions (e.g. ip4r) or software (e.g. patroni) on a per project basis.

There could be even more incompatibility problems otherwise if we install all
globally needed extensions or software with every project to get a homogeneous
binary installation.

Software installation is done with yum using PGDG downstream channels.

Any recommendations?

Thanks, Markus






Re: "missing chunk number 0 for toast value xxx in pg_toast_xxx" when pg_basebackup

2021-04-13 Thread Laurenz Albe
On Tue, 2021-04-13 at 02:38 -0700, Ma Xinjian wrote:
> When I use pg_basebackup to backup and restore db(Let's call it A) to a
> standalone instance(Let's call it B), "missing chunk number 0 for toast
> value xxx in pg_toast_xxx" errors output.
> 
> PG version: 10.3
> pg_basebackup command:
>  /usr/pgsql-10/bin/pg_basebackup -h p-rdb-c01 -D /var/lib/pgsql/10/data
> -Xs -P -n --waldir=/tmp/pg_wal
> I have mounted a disk to /tmp/pg_wal before, then I will mount the disk
> to /var/lib/pgsql/10/data/pg_wal, so as to ensure completeness of WAL
> records during backup.
> 
> Since I don't want B to be a standby server, I just want it to be a
> standalone server.
> I removed recovery.conf, then simply started postgresql-10.service. It turned
> out that postgresql-10.service can be started successfully. But when I use
> this postgresql (reindex, vacuum and so on), "missing chunk number 0 for
> toast value xxx in pg_toast_xxx" errors output.
> 
> When pg_basebackup runs, it will store WAL under pg_wal; can't postgresql
> work with the WAL records locally?
> I think primary_conninfo in recovery.conf is just used to get newer WAL
> records from A. Right?
> 
> I have also tested:
> If I start postgresql-10.service with recovery.conf first, then split it
> from the postgresql cluster, everything works fine.
> 
> The above test seems to prove that it is a WAL records problem. I am really
> confused.

Your mail got me confused...

Why do you write the WAL to /tmp/pg_wal, only to later mount that at the
default location?

I see nothing wrong with what you are doing, but I may have got lost
in your complicated procedure.

You don't happen to remove "backup_label", do you?

Yours,
Laurenz Albe
-- 
Cybertec | https://www.cybertec-postgresql.com





Re: rollback previous commit if the current one fails

2021-04-13 Thread pinker
Thank you Luis, but this is not supported in plpgsql.







Re: rollback previous commit if the current one fails

2021-04-13 Thread luis . roberto
----- Original Message -----

> Any idea how to approach it?


Hi!

https://www.postgresql.org/docs/current/sql-savepoint.html
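
For illustration, the savepoint pattern in plain SQL (a minimal sketch against
the "test" table from the original post):

BEGIN;
SAVEPOINT before_pair;
INSERT INTO test VALUES (2);
INSERT INTO test VALUES (3);        -- suppose this one fails
ROLLBACK TO SAVEPOINT before_pair;  -- undoes both inserts, not just the failed one
COMMIT;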



Luis R. Weck 




"missing chunk number 0 for toast value xxx in pg_toast_xxx" when pg_basebackup

2021-04-13 Thread Ma Xinjian
Hi,

When I use pg_basebackup to backup and restore db(Let's call it A) to a
standalone instance(Let's call it B), "missing chunk number 0 for toast
value xxx in pg_toast_xxx" errors output.

PG version: 10.3
pg_basebackup command:
 /usr/pgsql-10/bin/pg_basebackup -h p-rdb-c01 -D /var/lib/pgsql/10/data
-Xs -P -n --waldir=/tmp/pg_wal
   I have mounted a disk to /tmp/pg_wal before, then I will mount the disk
to /var/lib/pgsql/10/data/pg_wal, so as to ensure completeness of WAL
records during backup.

Since I don't want B to be a standby server, I just want it to be a
standalone server.
I removed recovery.conf, then simply started postgresql-10.service. It turned
out that postgresql-10.service can be started successfully. But when I use
this postgresql (reindex, vacuum and so on), "missing chunk number 0 for
toast value xxx in pg_toast_xxx" errors output.

When pg_basebackup runs, it will store WAL under pg_wal; can't postgresql
work with the WAL records locally?
I think primary_conninfo in recovery.conf is just used to get newer WAL
records from A. Right?

I have also tested:
If I start postgresql-10.service with recovery.conf first, then split it
from the postgresql cluster, everything works fine.

The above test seems to prove that it is a WAL records problem. I am really
confused.

Regards
Ma Xinjian










looking for a installation package to Using GSSAPI with Postgres12 for windows

2021-04-13 Thread LE MENTEC, SANDRINE
Dear postgres community,

I am currently working on Postgres 12.5 on a Windows Server 2016. I need to use
Kerberos token for the authentication on the database.

To do so, I am looking for an installation executable file for Windows with the
GSSAPI included. The basic one provided by EDB does not seem to include these
features.

Do you know where I could find such an installer?

Thanks for your help,
Regards,

Sandrine Le Mentec
Software Engineer


rollback previous commit if the current one fails

2021-04-13 Thread pinker
Hi,
I need to emulate Oracle's savepoint behaviour inside of a plpgsql
function.

This function is able to insert all the rows that aren't caught by the
exception, but I also need to roll back the insert that happens before the
exception.

So let's say the exception is thrown when j=3; then I also need to roll back
j=2.
Any idea how to approach it?

DROP TABLE IF EXISTS test;
CREATE TABLE test
(
    id INT
);

CREATE OR REPLACE PROCEDURE test()
AS
$$
DECLARE
    j INT;
BEGIN
    FOR j IN 0..6
    LOOP
        BEGIN
            INSERT INTO test VALUES (1 / j);
        EXCEPTION
            WHEN OTHERS THEN
                NULL;  -- swallow the error; only this iteration's insert is rolled back
        END;
    END LOOP;
END;
$$ LANGUAGE plpgsql;

CALL test();
TABLE test;
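
For reference, each BEGIN ... EXCEPTION block in plpgsql acts as an implicit
savepoint, so statements that must be undone together have to sit inside the
same block. A minimal sketch of that idea, restructured from the procedure
above (illustrative only, not a drop-in answer):

DO $$
BEGIN
    FOR j IN 0..6 LOOP
        BEGIN
            -- both inserts share one implicit savepoint: if the second
            -- statement fails, the first is rolled back as well
            INSERT INTO test VALUES (j);
            INSERT INTO test VALUES (1 / j);  -- raises division_by_zero at j = 0
        EXCEPTION
            WHEN division_by_zero THEN
                NULL;  -- both inserts of this iteration are undone
        END;
    END LOOP;
END;
$$;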


