Re: Why can't lseek the STDIN_FILENO?

2023-06-23 Thread John McKown
My best advice would be to ask a C language question on a C language forum.
This forum is really only for questions about the SQL language for the
PostgreSQL database. I.e. no MariaDB, MySQL, MS SQL questions.

First, you didn't say what OS and shell you're using. I am guessing bash
and Linux.

Second, you did NO error checking. I would guess that the lseek() is
returning -1, probably with errno set to ESPIPE.

This is probably a better explanation:
https://unix.stackexchange.com/questions/502518/problems-when-test-whether-standard-input-is-capable-of-seeking

The bottom line from the above post is that STDIN is not seekable when it
is a terminal.
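
As a minimal sketch of what that missing error check could look like
(assuming POSIX; on Linux, seeking a tty typically fails with ESPIPE):

#include <errno.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
    /* lseek() reports failure by returning (off_t) -1 and setting errno */
    if (lseek(STDIN_FILENO, 0, SEEK_SET) == (off_t) -1) {
        if (errno == ESPIPE)
            fprintf(stderr, "stdin is not seekable (tty, pipe, or socket)\n");
        else
            perror("lseek");
        return 1;
    }
    puts("stdin is seekable");
    return 0;
}

Running it as "./a.out < somefile" versus plain "./a.out" shows both
outcomes.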

On Fri, Jun 23, 2023, 21:17 Wen Yi <896634...@qq.com> wrote:

> Hi community,
> I am testing the lseek & write & read, and I write the code like this:
>
> /*
> lseek_test.c
> Test the lseek
> Wen Yi
> */
> #include <stdio.h>
> #include <unistd.h>
> int main()
> {
> int fd = 0;
> char buffer[16] = {};
> write(STDIN_FILENO, "Hello world\n", sizeof("Hello world\n"));
> lseek(STDIN_FILENO, 0, SEEK_SET);
> read(STDIN_FILENO, buffer, sizeof(buffer));
> write(STDIN_FILENO, buffer, sizeof(buffer));
> return 0;
> }
>
> And I run the program ("Something Input" is my input content)
>
> [beginnerc@bogon 学习 C语言]$ gcc lseek_test.c
> [beginnerc@bogon 学习 C语言]$ ./a.out
> Hello world
> Something Input
> Something Input
> [beginnerc@bogon 学习 C语言]$
>
> I really don't understand why the buffer's content is not "Hello world\n"? (I
> used lseek to move the cursor back to the beginning)
>
> Can someone give me some advice?
> Thanks in advance!
>
> Yours,
> Wen Yi
>


Re: Why can't lseek the STDIN_FILENO?

2023-06-23 Thread Tom Lane
"=?gb18030?B?V2VuIFlp?=" <896634...@qq.com> writes:
>  lseek(STDIN_FILENO, 0, SEEK_SET);

If you are not checking for failure return from a system call,
you are in a state of sin.

> I really don't understand why the buffer's content is not "Hello world\n"?

Probably because a tty input device is not seekable.

regards, tom lane




Why can't lseek the STDIN_FILENO?

2023-06-23 Thread Wen Yi
Hi community,
I am testing the lseek & write & read, and I write the code like this:


/*
 lseek_test.c
  Test the lseek
 Wen Yi
*/
#include <stdio.h>

Re: foreign keys on multiple parent table

2023-06-23 Thread Lorusso Domenico
ehm.. I'm not sure I understood correctly :-D
in which way do you generate the columns?

On Wed, Jun 21, 2023 at 09:47 Dominique Devienne <
ddevie...@gmail.com> wrote:

> On Tue, Jun 20, 2023 at 10:47 PM Lorusso Domenico 
> wrote:
>
>> Could work, but is there a way to set a reference key over the uuid of
>> all the tables?
>>
>
> Yes, it's possible. We do it. There are several ways to emulate what I
> call "polymorphic" FKs.
>
> All approaches have pros and cons, the one we use relies on CHECK
> constraints and virtual/generated columns.
> It assumes all mutually exclusive FKs are of the same type. For ON DELETE
> CASCADE FKs, you have the primary
> "fk" concrete column, plus a secondary "fk$t" type column, telling you
> which FK is active, then N "fk$N" virtual columns
> whose expression automatically turns them ON (= "fk") or OFF (IS NULL) based
> on "fk$t"'s value. A CHECK constraint
> ensures only 0 or 1 "fk$N" column is ON, depending on "fk"'s NULLability.
> For ON DELETE SET NULL, you need to
> reverse the concrete and virtual columns, so the constraint can *write*
> the "fk$N" columns, with more CHECK constraints.
>
> The technique works because FKs on virtual columns work fine. As with all
> FKs with ON DELETE CASCADE, you want
> to index your FKs to avoid full scans. With partial indexes (since the FKs
> are mutually exclusive and full of NULLs), the
> storage overhead from multiplying (virtual) columns and indexes can be
> limited (i.e. not as bad as N times the single index).
> Of course, this is tricky to pull off correctly w/o automatic schema
> generation from a logic model. We have dozens of these PFKs,
> of various cardinality, maintaining those manually would be a nightmare.
> And when the polymorphism is too much,
> we give up on referential integrity on a case by case basis, to avoid
> bloating the tables and schema. It's a trade-off, as always.
>
> I'm sure I didn't invent this technique. But it sure isn't very common and
> it has been our "secret sauce" for a few years.
> On Oracle first, now on PostgreSQL. A Dalibo consultant once told me I
> should present it at a PGCon conference :).
>
> Good luck if you try that. FWIW, --DD
>
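
A minimal sketch of the technique described above, with hypothetical table
and column names (it assumes PostgreSQL 12+ stored generated columns;
ON DELETE CASCADE is fine on a generated FK column because the cascade
deletes the row rather than writing to the column):

CREATE TABLE parent_a (id bigint PRIMARY KEY);
CREATE TABLE parent_b (id bigint PRIMARY KEY);

CREATE TABLE child (
    id     bigint PRIMARY KEY,
    fk     bigint,    -- the concrete polymorphic FK value
    "fk$t" smallint,  -- which parent "fk" targets: 1 = parent_a, 2 = parent_b
    -- the "fk$N" columns switch ON (= fk) or OFF (NULL) based on "fk$t"
    "fk$1" bigint GENERATED ALWAYS AS (CASE WHEN "fk$t" = 1 THEN fk END) STORED
           REFERENCES parent_a (id) ON DELETE CASCADE,
    "fk$2" bigint GENERATED ALWAYS AS (CASE WHEN "fk$t" = 2 THEN fk END) STORED
           REFERENCES parent_b (id) ON DELETE CASCADE,
    -- "fk" and "fk$t" must be NULL or set together: 0 or 1 active FK
    CHECK ((fk IS NULL) = ("fk$t" IS NULL))
);

-- partial indexes keep the storage overhead of the extra columns low
CREATE INDEX child_fk1_idx ON child ("fk$1") WHERE "fk$1" IS NOT NULL;
CREATE INDEX child_fk2_idx ON child ("fk$2") WHERE "fk$2" IS NOT NULL;

Deleting a row from parent_a then cascades through "fk$1" and removes the
matching child rows, exactly as a plain FK would.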


-- 
Domenico L.

to amaze for half an hour, a history book is enough,
I tried to learn the Treccani by heart... [F.d.A.]


Re: foreign keys on multiple parent table

2023-06-23 Thread Lorusso Domenico
Thank you Les for the link, it's a very good example; unfortunately my need
is more application-level (we need to store the application user, not the
PostgreSQL one, the process that started it, etc.), but for sure I can take
advantage of it.

On Tue, Jun 20, 2023 at 23:01 Les  wrote:

>> From a programming point of view, and also to reduce the number of
>> objects in the DB, it could be convenient to create just one audit table
>> with a structure like:
>>
>>- audit id
>>- reference_uuid (the key of the main table)
>>- table_name
>>- list of audit data
>>
>>
> Could work, but is there a way to set a reference key over the uuid of all
>> the tables?
>>
>
> For existing solution, check out
> https://github.com/2ndQuadrant/audit-trigger
>
> Regarding fk constraints, a single fk constraint can only reference the
> primary key of a single table.
>
> But, if you want to be serious about audit logs, then you need to keep
> logs of deletions too, and for those, foreign key constraints would not
> work anyway.
>
> You may also want to consider bulk insert speed. Foreign key constraint
> checking can reduce speed.
>
>   Laszlo
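
A minimal sketch of the single generic audit table quoted above
(hypothetical names and types; the "list of audit data" is kept as one
jsonb column):

CREATE TABLE audit_log (
    audit_id       bigint GENERATED ALWAYS AS IDENTITY PRIMARY KEY,
    reference_uuid uuid        NOT NULL,  -- key of the row in the main table
    table_name     text        NOT NULL,  -- which table the row belongs to
    app_user       text,                  -- the application user, not the PG role
    changed_at     timestamptz NOT NULL DEFAULT now(),
    audit_data     jsonb                  -- the audited data itself
);

As noted above, no foreign key is possible on reference_uuid, since it may
point into any table.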

-- 
Domenico L.

to amaze for half an hour, a history book is enough,
I tried to learn the Treccani by heart... [F.d.A.]


Language Pack missing from StackBuilder (EDB Windows download)

2023-06-23 Thread Anthony DeBarros
While installing a fresh copy of PostgreSQL 15.3 downloaded from
EDB's website, I have discovered that the Language Pack usually present in
StackBuilder is missing.

Anyone here from EDB who might know if this is a temporary or a permanent
change?

I will try emailing EDB separately, but thought I would try here in case
someone from the company is reading.

Thanks,
Anthony


Re: plan using BTree VS GIN

2023-06-23 Thread Laurenz Albe
On Fri, 2023-06-23 at 12:08 +, Nicolas Seinlet wrote:
> we faced an issue with a select query on a relatively large table on our 
> database.
> The query involves one single table. The table has more than 10 million 
> records.
> It's mainly composed of varchar fields, has a primary key (id) of type
> serial, and when records of this table are shown to users, they are sorted
> using 2 fields, display_name (varchar) and id (the primary key). Because
> this table is heavily used in various contexts in our application, we have
> multiple indexes on it. Among other indexes, we have GIN indexes on some
> fields of the table.
> 
> The btree index res_partner_displayname_id_idx has been added lately and
> perfectly matches a criterion (WHERE active) and sorting (display_name, id)
> that we have in nearly all our queries on this table.
> 
> The query that causes the issue is this one:
> SELECT "res_partner"."id"
>   FROM "res_partner"
>  WHERE (("res_partner"."active" = true) AND
>          (
>          (
>            (
>              ((unaccent("res_partner"."display_name"::text) ilike 
> unaccent('%nse%'))
>            OR (unaccent("res_partner"."email"::text) ilike unaccent('%nse%')))
>         OR (unaccent("res_partner"."ref"::text) ilike unaccent('%nse)%')))
>      OR (unaccent("res_partner"."vat"::text) ilike unaccent('%nse%')))
>    OR (unaccent("res_partner"."company_registry"::text) ilike 
> unaccent('%nse)%'
> 
>  AND ((("res_partner"."type" != 'private') OR "res_partner"."type" IS NULL) 
> OR "res_partner"."type" IS NULL )
> 
> ORDER BY "res_partner"."display_name" ,"res_partner"."id"  
>    LIMIT 100
> 
> We have the common criterion (active = true), the common sorting, a limit,
> and a search on various fields. The fields on which we're searching with
> criteria like '%whatever%' are GIN-indexed.
> 
>  Here is the query plan:
>
>  Limit  (cost=0.56..10703.36 rows=100 width=25) (actual time=56383.794..86509.036 rows=1 loops=1)
>    Output: id, display_name
>    Buffers: shared hit=4322296 read=1608998 dirtied=1 written=1247
>    ->  Index Scan using res_partner_displayname_id_idx on public.res_partner  (cost=0.56..1200212.37 rows=11214 width=25) (actual time=56383.793..86509.022 rows=1 loops=1)
>          Output: id, display_name
>          Filter: ((((res_partner.type)::text <> 'private'::text) OR (res_partner.type IS NULL) OR (res_partner.type IS NULL)) AND ((unaccent((res_partner.display_name)::text) ~~* '%nse%'::text) OR (unaccent((res_partner.email)::text) ~~* '%nse%'::text) OR (unaccent((res_partner.ref)::text) ~~* '%nse%'::text) OR (unaccent((res_partner.vat)::text) ~~* '%nse%'::text) OR (unaccent((res_partner.company_registry)::text) ~~* '%nse%'::text)))
>          Rows Removed by Filter: 6226870
>          Buffers: shared hit=4322296 read=1608998 dirtied=1 written=1247
>  Planning Time: 0.891 ms
>  Execution Time: 86509.070 ms
> (10 rows)
> 
> It's not using our gin index at all, but the btree one.

The problem is that PostgreSQL estimates that the index scan will return 11214
rows, when it actually returns one.  That makes the plan that scans the table
with the index matching the ORDER BY clause look appealing: PostgreSQL expects
to find 100 matching rows quickly and avoid a sort.

You can try to improve the estimates with more detailed statistics,
but if that doesn't do the job, you can modify the ORDER BY clause so
that it cannot use the bad index:

  ORDER BY res_partner.display_name ,res_partner.id + 0
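
For the statistics route, a sketch (the statistics target value is
illustrative; ANALYZE is required for it to take effect):

  ALTER TABLE res_partner ALTER COLUMN display_name SET STATISTICS 1000;
  ANALYZE res_partner;

The "+ 0" works because the planner can no longer match the ORDER BY
expression to the (display_name, id) index, so the misestimated index
scan stops looking attractive.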

Yours,
Laurenz Albe




plan using BTree VS GIN

2023-06-23 Thread Nicolas Seinlet
Hello, we faced an issue with a select query on a relatively large table on our
database. The query involves one single table. The table has more than 10
million records. It's mainly composed of varchar fields, has a primary key (id)
of type serial, and when records of this table are shown to users, they are
sorted using 2 fields, display_name (varchar) and id (the primary key). Because
this table is heavily used in various contexts in our application, we have
multiple indexes on it. Among other indexes, we have GIN indexes on some fields
of the table. Among other things, we're using unaccent. We are aware the
unaccent function is mutable, but we have an immutable version of unaccent.

The table is similar to (I can give you all the fields of the table if needed):

                                Table "public.res_partner"
      Column      |       Type        | Collation | Nullable |                 Default
------------------+-------------------+-----------+----------+-----------------------------------------
 id               | integer           |           | not null | nextval('res_partner_id_seq'::regclass)
 active           | boolean           |           |          |
 name             | character varying |           |          |
 display_name     | character varying |           |          |
 ref              | character varying |           |          |
 email            | character varying |           |          |
 vat              | character varying |           |          |
 type             | character varying |           |          |
 company_registry | character varying |           |          |

Gin Index:
    "res_partner_unaccent_tgm_ref" gin (unaccent(ref::text) gin_trgm_ops) WHERE ref IS NOT NULL
    "res_partner_unaccent_tgm_vat" gin (unaccent(vat::text) gin_trgm_ops) WHERE vat IS NOT NULL
    "res_partner_unaccent_tgm_idx_gin2" gin (unaccent(name::text) gin_trgm_ops, unaccent(display_name::text) gin_trgm_ops, unaccent(ref::text) gin_trgm_ops, unaccent(email::text) gin_trgm_ops, unaccent(vat::text) gin_trgm_ops)
    "res_partner_name_tgm_idx_gin" gin (name gin_trgm_ops, display_name gin_trgm_ops, ref gin_trgm_ops, email gin_trgm_ops, vat gin_trgm_ops)
    "res_partner_unaccent_tgm_display_namee" gin (unaccent(display_name::text) gin_trgm_ops)
    "res_partner_unaccent_tgm_email" gin (unaccent(email::text) gin_trgm_ops) WHERE email IS NOT NULL
    "res_partner_comp_reg_idx3" gin (unaccent(company_registry::text) gin_trgm_ops) WHERE company_registry IS NOT NULL

BTree index:
    "res_partner_displayname_id_idx" btree (display_name, id) WHERE active
    "res_partner_comp_reg_idx2" btree (unaccent(company_registry::text)) WHERE company_registry IS NOT NULL

The btree index res_partner_displayname_id_idx has been added lately and
perfectly matches a criterion (WHERE active) and sorting (display_name, id)
that we have in nearly all our queries on this table.

The query that causes the issue is this one:

SELECT "res_partner"."id"
  FROM "res_partner"
 WHERE ("res_partner"."active" = true)
   AND ((unaccent("res_partner"."display_name"::text) ilike unaccent('%nse%'))
        OR (unaccent("res_partner"."email"::text) ilike unaccent('%nse%'))
        OR (unaccent("res_partner"."ref"::text) ilike unaccent('%nse%'))
        OR (unaccent("res_partner"."vat"::text) ilike unaccent('%nse%'))
        OR (unaccent("res_partner"."company_registry"::text) ilike unaccent('%nse%')))
   AND ((("res_partner"."type" != 'private') OR ("res_partner"."type" IS NULL))
        OR "res_partner"."type" IS NULL)
 ORDER BY "res_partner"."display_name", "res_partner"."id"
 LIMIT 100

We have the common criterion (active = true), the common sorting, a limit, and
a search on various fields. The fields on which we're searching with criteria
like '%whatever%' are GIN-indexed.

Here is the query plan:

 Limit  (cost=0.56..10703.36 rows=100 width=25) (actual time=56383.794..86509.036 rows=1 loops=1)
   Output: id, display_name
   Buffers: shared hit=4322296 read=1608998 dirtied=1 written=1247
   ->  Index Scan using res_partner_displayname_id_idx on public.res_partner  (cost=0.56..1200212.37 rows=11214 width=25) (actual time=56383.793..86509.022 rows=1 loops=1)
         Output: id, display_name
         Filter: ((((res_partner.type)::text <> 'private'::text) OR (res_partner.type IS NULL) OR (res_partner.type IS NULL)) AND ((unaccent((res_partner.display_name)::text) ~~* '%nse%'::text) OR (unaccent((res_partner.email)::text) ~~* '%nse%'::text) OR (unaccent((res_partner.ref)::text) ~~* '%nse%'::text) OR (unaccent((res_partner.vat)::text) ~~* '%nse%'::text)

Re: move databases from a MySQL server to Postgresql.

2023-06-23 Thread Tomas Vondra
On 6/23/23 13:45, Alfredo Alcala wrote:
> Hello
> 
> I need to move some databases from a MySQL server to Postgresql.
> 
> Can someone tell me the migration procedure, tools and recommendations?
> 

I'm not an expert on this, but migrations tend to be somewhat
application-specific. I'd suggest you take a look at this wiki page:

https://wiki.postgresql.org/wiki/Converting_from_other_Databases_to_PostgreSQL#MySQL

and maybe try some of the tools mentioned there (pgloader, mysql2pgsql,
and so on).
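
For instance, pgloader's quick-start form is a single command taking source
and target connection strings (hosts, users, and database names here are
illustrative):

  pgloader mysql://user:pass@mysql-host/sourcedb \
           postgresql://user:pass@pg-host/targetdb

By default it migrates the schema, data, and indexes in one pass; the wiki
page above discusses the caveats.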

You'll have to give it a try on your databases, and then ask questions
about practical issues you run into.


regards

-- 
Tomas Vondra
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company




Re: synchronous_commit= remote_apply | "The transaction has already committed locally..."

2023-06-23 Thread Laurenz Albe
On Fri, 2023-06-23 at 16:23 +0530, Postgres all-rounder wrote:
> Could you please point me to the link where the "two-phase commit" approach
> is being discussed?
> I'd like to track it for my reference.

I looked, and didn't find it.  I must have misremembered.

There is this proposal:
https://www.postgresql.org/message-id/flat/CALj2ACUrOB59QaE6%3DjF2cFAyv1MR7fzD8tr4YM5%2BOwEYG1SNzA%40mail.gmail.com

Yours,
Laurenz Albe




move databases from a MySQL server to Postgresql.

2023-06-23 Thread Alfredo Alcala
Hello

I need to move some databases from a MySQL server to Postgresql.

Can someone tell me the migration procedure, tools and recommendations?

Thanks


Fwd: Migration database from mysql to postgress

2023-06-23 Thread Alfredo Alcala
Hello

I need to move some databases from a MySQL server to PostgreSQL.

Can anyone tell me the migration procedure, tools and recommendations?

Thanks


On Fri, Jun 23, 2023 at 11:30 AM Alfredo Alcala ()
wrote:

> Hello
>
> I need to move some databases from a MySQL server to PostgreSQL.
>
> Can anyone tell me the migration procedure, tools and recommendations?
>
> Thanks
>


Re: ECPG Semantic Analysis

2023-06-23 Thread Tom Lane
Michael Paquier  writes:
> On Fri, Jun 23, 2023 at 12:21:48AM -0400, Juan Rodrigo Alejandro Burgos Mella 
> wrote:
>> I have a modified version of ECPG, to which I gave the ability to do
>> semantic analysis of SQL statements. Where can you share it or with whom
>> can I discuss it?

> I cannot say what kind of problem this solves and/or if this is useful
> as a feature of the ECPG driver in PostgreSQL core itself, but you
> could consider submitting a patch for integration into core.

TBH I'd have to discourage you from expecting that such a patch would
be accepted.  ECPG is pretty much of a development backwater nowadays.
We keep maintaining it because it's (mostly) not too much trouble
thanks to the work that was done years ago to auto-generate its
grammar from the main grammar.  However, adding any sort of semantic
analysis to it seems like it'd take an enormous amount of new C code
that would then have to be kept in sync (by hand) with the backend
parser.  Testing such a thing seems like a big time sink as well.
I seriously doubt that we'd be willing to take on such a maintenance
burden.

regards, tom lane




Re: synchronous_commit= remote_apply | "The transaction has already committed locally..."

2023-06-23 Thread Postgres all-rounder
Hi Laurenz,

Thank you for the quick response.

Could you please point me to the link where the "two-phase commit" approach
is being discussed?
I'd like to track it for my reference.

On Fri, Jun 23, 2023 at 3:26 PM Laurenz Albe 
wrote:

> On Fri, 2023-06-23 at 15:05 +0530, Postgres all-rounder wrote:
> > Context: We have faced a network isolation and ended up with locally
> > committed data on the old primary database server, as one of the tools
> > that is in place for HA decided to promote one of the SYNC standby
> > servers. As PostgreSQL doesn't provide a built-in HA solution, I would
> > like to just confirm the behaviour of the core parameter
> > synchronous_commit = remote_apply.
> >
> > As per the documentation the PRIMARY database server will NOT commit
> > unless the SYNC standby acknowledges that it received the commit record
> > of the transaction and applied it, so that it has become visible to
> > queries on the standby(s), and also written to durable storage on the
> > standbys.
>
> That's not true.  The primary will commit locally, but wait for the
> synchronous standby
> servers before it reports success to the client.
>
> > However, during a network outage, or in the few scenarios where the
> > current primary is waiting for the SYNC to acknowledge, when the
> > application sends a cancel signal [even Ctrl+C from a psql session
> > which inserted data], then we see locally committed data on the primary
> > database server.
> >
> > "The transaction has already committed locally, but might not have been
> > replicated to the standby."
> >
> > 1. It appears to be a known behaviour; however, I wanted to understand:
> > is this considered expected behaviour or a limitation of the
> > architecture?
>
> This is expected behavior AND a limitation of PostgreSQL.
>
> > 2. Any known future plans in the backlog to change the behaviour in
> > such a way that the PRIMARY won't have locally committed data which is
> > NOT received and acknowledged by a SYNC standby when
> > synchronous_commit = remote_apply is used?
>
> There have been efforts to use two-phase commit, but that would require
> PostgreSQL to
> have its own distributed transaction manager.
>
> > 3. If the information that the primary database can have locally
> > committed data when it is waiting on SYNC and receives a cancel signal
> > from the application were available in the documentation, it would be
> > helpful.
>
> I don't think that's anywhere in the documentation.
>
> Yours,
> Laurenz Albe
>


Re: synchronous_commit= remote_apply | "The transaction has already committed locally..."

2023-06-23 Thread Laurenz Albe
On Fri, 2023-06-23 at 15:05 +0530, Postgres all-rounder wrote:
> Context: We have faced a network isolation and ended up with locally
> committed data on the old primary database server, as one of the tools
> that is in place for HA decided to promote one of the SYNC standby
> servers. As PostgreSQL doesn't provide a built-in HA solution, I would
> like to just confirm the behaviour of the core parameter
> synchronous_commit = remote_apply.
> 
> As per the documentation the PRIMARY database server will NOT commit unless
> the SYNC standby acknowledges  that it  received the commit record of the 
> transaction
> and applied it, so that it has become visible to queries on the standby(s), 
> and also written
> to durable storage on the standbys.

That's not true.  The primary will commit locally, but wait for the synchronous 
standby
servers before it reports success to the client.
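
For reference, a minimal sketch of the settings involved (postgresql.conf
syntax; the standby name is illustrative):

  synchronous_commit = remote_apply
  synchronous_standby_names = 'FIRST 1 (standby1)'

Even with these, COMMIT writes the local WAL record first; only the
confirmation to the client waits for the standby, which is why a canceled
wait leaves a locally committed transaction behind.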

> However, during a network outage, or in the few scenarios where the
> current primary is waiting for the SYNC to acknowledge, when the
> application sends a cancel signal [even Ctrl+C from a psql session which
> inserted data], then we see locally committed data on the primary
> database server.
>
> "The transaction has already committed locally, but might not have been
> replicated to the standby."
>
> 1. It appears to be a known behaviour; however, I wanted to understand:
> is this considered expected behaviour or a limitation of the
> architecture?

This is expected behavior AND a limitation of PostgreSQL.

> 2. Any known future plans in the backlog to change the behaviour in
> such a way that the PRIMARY won't have locally committed data which is
> NOT received and acknowledged by a SYNC standby when
> synchronous_commit = remote_apply is used?

There have been efforts to use two-phase commit, but that would require 
PostgreSQL to
have its own distributed transaction manager.

> 3. If the information that the primary database can have locally
> committed data when it is waiting on SYNC and receives a cancel signal
> from the application were available in the documentation, it would be
> helpful.

I don't think that's anywhere in the documentation.

Yours,
Laurenz Albe




synchronous_commit= remote_apply | "The transaction has already committed locally..."

2023-06-23 Thread Postgres all-rounder
Hi Team,

*Context: *We have faced a network isolation and ended up with locally
committed data on the
old primary database server, as one of the tools that is in place for HA
decided to promote one of the SYNC standby servers. As PostgreSQL doesn't
provide a built-in HA solution, I would like to just confirm the
behaviour of the core parameter *synchronous_commit = remote_apply.*

As per the documentation the PRIMARY database server will *NOT* commit
unless
the SYNC standby acknowledges  that it  received the commit record of the
transaction
and applied it, so that it has become visible to queries on the standby(s),
and also written to durable storage on the standbys.

However, during a network outage, or in the few scenarios where the current
primary is waiting for the SYNC to acknowledge, when the application sends a
cancel signal [even Ctrl+C from a psql session which inserted data], then we
see locally committed data on the primary database server.

*"The transaction has already committed locally, but might not have been
replicated to the standby."*

1. It appears to be a known behaviour; however, I wanted to understand:
is this considered expected behaviour or a limitation of the architecture?

2. Any known future plans in the backlog to change the behaviour in
such a way that the PRIMARY won't have *locally committed* data which is NOT
received and acknowledged by a SYNC standby when *synchronous_commit =
remote_apply* is used?

3. If the information that the *primary database can have locally committed
data* when it is waiting on SYNC and receives a cancel signal from the
application were available in the documentation, it would be helpful.


Re: FIPS-related Error: Password Must Be at Least 112 Bits on Postgres 14, Unlike in Postgres 11

2023-06-23 Thread Abhishek Dasgupta
On Fri, Jun 23, 2023 at 11:27 AM Abhishek Dasgupta <
abhishekdasgupta...@gmail.com> wrote:

> Hey Michael,
> Thanks for the reply.
>
>> This error is specific to the Postgres JDBC driver, which relies on
>> its own application layer for FIPS and SCRAM because it speaks
>> the protocol directly and because it has no dependency on libpq.
>
>
> The thing is we are currently using the same password, which is less than
> 112 bits in length, for both versions 11 and 14 of Postgres. Although I am
> not a Postgres expert, I would like to understand the specific changes in
> the Postgres JDBC driver that are causing this error in Postgres 14.
>
> Could you please clarify if the Postgres JDBC driver has been updated
> between Postgres 11 and 14? I am also interested in knowing how I can
> investigate the root cause within the Postgres JDBC driver itself.
>
> Additionally, I would like to inquire if there are any alternative steps
> to resolve this issue without requiring a password change to a length of
> at least 14 characters (14 bytes x 8 bits = the 112-bit minimum).
>
>> Are there any specific failures you are seeing in the PostgreSQL backend
>> that you find confusing?
>>
>
> The FIPS error is the main source of confusion for me. It seems that this
> error occurs specifically during the cluster setup, which subsequently
> leads to the failure of the DB setup.
>
>
>
>
> On Fri, Jun 23, 2023 at 3:56 AM Michael Paquier 
> wrote:
>
>> On Thu, Jun 22, 2023 at 07:16:21PM +0530, Abhishek Dasgupta wrote:
>> > I am puzzled as to why this error occurs only with PostgreSQL 14 and not
>> > with PostgreSQL 11.
>>
>> This error is specific to the Postgres JDBC driver, which relies on
>> its own application layer for FIPS and SCRAM because it speaks
>> the protocol directly and because it has no dependency on libpq.  Are
>> there any specific failures you are seeing in the PostgreSQL backend
>> that you find confusing?
>> --
>> Michael
>>
>


Re: 2 master 3 standby replication

2023-06-23 Thread Pavel Stehule
Hi

On Fri, Jun 23, 2023 at 10:37 Atul Kumar  wrote:

> Hi,
>
> Please help me with the query I raised.
>
>
Currently there is no community-based multi-master solution.

Regards

Pavel Stehule


>
> Regards.
>
> On Fri, 23 Jun 2023, 00:12 Atul Kumar,  wrote:
>
>> Hi,
>>
>> Do we have any solution to Configure an architecture of replication
>> having 2 master nodes and 3 standby nodes replicating the data from any of
>> the 2 master ?
>>
>>
>> Please let me know if you have any link/ dedicated document.
>>
>>
>>
>> Regards,
>> Atul


Re: 2 master 3 standby replication

2023-06-23 Thread Atul Kumar
Hi,

Please help me with the query I raised.


Regards.

On Fri, 23 Jun 2023, 00:12 Atul Kumar,  wrote:

> Hi,
>
> Do we have any solution to Configure an architecture of replication having
> 2 master nodes and 3 standby nodes replicating the data from any of the 2
> master ?
>
>
> Please let me know if you have any link/ dedicated document.
>
>
>
> Regards,
> Atul
>