Re: Understanding partial index selection

2023-12-01 Thread Owen Nelson
I was able to pull some stats with pgstattuple and nothing looks
particularly hinky to me.

version: 4
tree_level: 2
index_size: 499589120
root_block_no: 412
internal_pages: 194
leaf_pages: 54572
empty_pages: 0
deleted_pages: 6218
avg_leaf_density: 90.08
leaf_fragmentation: 0.01

For flavor: if I remember correctly, the table has around 50 million rows, and
around 17 million of them should be included in the partial index due to the
"where payload is not null" predicate.

Zero deleted pages would be nicer than ~6k, but by my count that's only around
10% of the total index size. I also assume that if the index were not being
cleaned up during regular operations, this number would be much larger. I
think this points away from index bloat as the culprit, but please check me
on this.
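
(Checking that arithmetic: index_size 499589120 bytes / 8192 bytes per page
= 60985 pages, and 6218 deleted pages / 60985 total pages ≈ 10.2%, so the
~10% figure holds up.)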

We're checking assumptions about when/how often the table is getting
analyzed, but other than possibly using extended stats, it sounds like the
only other odd thing is "Aurora not being Postgres," which I'm not sure
there's much I can do about right now :(

On Tue, Nov 28, 2023 at 9:23 PM Adrian Klaver wrote:

> On 11/28/23 18:13, Owen Nelson wrote:
> >  > Aurora is not really Postgres
> >
> > Oh geez, I didn't realize there was such a divide. This is my first look
> > at Aurora and I thought it was just a hosted postgres offering.
>
>
> https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/CHAP_AuroraOverview.html#aur-shared-resp
>
>
> "Aurora includes a high-performance storage subsystem. Its MySQL- and
> PostgreSQL-compatible database engines are customized to take advantage
> of that fast distributed storage. "
>
> When I see things like *-compatible alarms start going off.
>
> >
> > Still, I'll take what I can get. Hopefully, some of this will carry over.
>
> --
> Adrian Klaver
> adrian.kla...@aklaver.com
>
>


Re: Query related to pg_dump write to a pipe on a windows client and with compressed format

2023-12-01 Thread Tom Lane
Meera Nair  writes:
> We tried writing pg_dump output to a pipe in compressed format and hit the
> issue described in the thread referenced here.
> It was on a Windows client running PostgreSQL 9.2, so during pg_restore, a
> seek error was seen.

Are you really using PG 9.2.something?  If not, then what?

The problem here probably is that pg_restore is failing to detect
that the pipe input file is not seekable.  We've had repeated rounds
of fixes to try to detect that reliably on Windows.  I think it works
if you are running a current PG release (something shipped in the last
six months), but maybe not with older ones.
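
For anyone reproducing this, the failing pattern is essentially the following
(a sketch; the database name is hypothetical):

    pg_dump -Fc mydb | pg_restore --list

On releases that fail to detect the non-seekable input, pg_restore may try to
seek within the pipe and report a seek error instead of falling back to a
sequential read.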

regards, tom lane




Re: Help understand why DELETE is so slow

2023-12-01 Thread Christophe Pettus



> On Dec 1, 2023, at 09:47, Ping Yao  wrote:
> Can someone help me understand why my simple DELETE query is so slow to run?

Based on the plan, you're running PostgreSQL with the Citus extension, and the 
delay is in Citus-related code.  This is probably a question best directed to 
either the open-source Citus community or Microsoft.



Help understand why DELETE is so slow

2023-12-01 Thread Ping Yao
Hello All.

First-timer here, long-time user though. Thank you in advance.

Can someone help me understand why my simple DELETE query is so slow to run?

*System:*
We are running Citus with 4 workers and 256 shards (the default), with
replication using pg_autoctl.

*PostgreSQL Version:*
=# select version();
                                                           version
-------------------------------------------------------------------------------------------------------------------------------
 PostgreSQL 14.10 (Debian 14.10-1.pgdg110+1) on x86_64-pc-linux-gnu, compiled by gcc (Debian 10.2.1-6) 10.2.1 20210110, 64-bit

*Query with explain:*
=*# explain (analyze,buffers,timing)
delete from organization where _id=:orgid returning *;

                                                      QUERY PLAN
----------------------------------------------------------------------------------------------------------------------
 Custom Scan (Citus Adaptive)  (cost=0.00..0.00 rows=0 width=0) (actual time=328233.766..328233.767 rows=1 loops=1)
   Task Count: 1
   Tuple data received from nodes: 760 bytes
   Tasks Shown: All
   ->  Task
         Tuple data received from node: 760 bytes
         Node: host=_db_master_01 port=5432 dbname=
         ->  Delete on organization_102008 organization  (cost=0.28..2.51 rows=1 width=6) (actual time=0.049..0.050 rows=1 loops=1)
               Buffers: shared hit=6
               ->  Index Scan using organization__id_key_102008 on organization_102008 organization  (cost=0.28..2.51 rows=1 width=6) (actual time=0.014..0.015 rows=1 loops=1)
                     Index Cond: (_id = 'f9903e13-383e-418c-a0e9-e39916b3eb1c'::uuid)
                     Buffers: shared hit=3
         Planning Time: 0.049 ms
         Trigger for constraint customer_org_uuid_fkey_102008: time=0.106 calls=1
         Trigger for constraint parent_uuid_102008: time=0.053 calls=1
         Trigger for constraint parent_org_102009: time=0.045 calls=1
         Execution Time: 0.273 ms
   Buffers: shared hit=6
 Planning Time: 0.053 ms
 Execution Time: 328233.799 ms
(20 rows)

If I understand this correctly, each step is quite quick, but for some
reason the total execution time still took over 5 minutes.

Thank you.

Ping

--
Pook-Ping Yao (He/Him)
Chief Technology Officer and co-Founder
Optigo Networks Inc.
+1-604-897-7464 | p...@optigo.net




Re: libpq crashing on macOS during connection startup

2023-12-01 Thread John DeSoi



> On Dec 1, 2023, at 11:02 AM, Tom Lane  wrote:
> 
> I'd suggest filing a bug against Homebrew's krb5 package.
> Whatever this is, it seems pretty clear that it's not a
> Postgres bug.

Will do, thank you and everyone else for the help and feedback.

John DeSoi, Ph.D.






Re: libpq crashing on macOS during connection startup

2023-12-01 Thread Tom Lane
John DeSoi  writes:
>> On Nov 30, 2023, at 7:53 PM, Tom Lane  wrote:
>> Ugh, not only Heimdal but a very obsolete version thereof?  It borders
>> on negligence for the homebrew PG package to be building against that.
>> They should be pulling in homebrew's MIT Kerberos package and using
>> that, if they want to enable GSSAPI.

> I was looking at the Homebrew source for the PostgreSQL package to see if
> there was a way to customize the build options. I did not find one, but saw
> the comment below. Apparently this is a known issue: using the MIT Kerberos
> package was suggested four years ago. Instead, they just added this comment
> in 2020.

> # GSSAPI provided by Kerberos.framework crashes when forked.
> # See https://github.com/Homebrew/homebrew-core/issues/47494.

Oh, thanks for finding that.  But you misinterpreted the outcome;
the commit that closed that thread did

+# GSSAPI provided by Kerberos.framework crashes when forked.
+# See https://github.com/Homebrew/homebrew-core/issues/47494.
+depends_on "krb5"

The "depends_on" was evidently meant to force building against krb5,
and I suppose it did have that effect when committed.  Could they
have done something since then to break it?

Looking closer, your stack trace seems to show that libpq *is*
linked against MIT Kerberos: at least, control flows from
libpq.5.dylib to libgssapi_krb5.2.2.dylib, which is not a
library that Apple supplies.  However, then a few subroutines
further deep, we somehow end up in Apple's Kerberos framework,
and that eventually calls libdispatch which is the source of
the problem according to the discussion in issues/47494.
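
(One way to check the direct linkage is otool; the library path below
assumes a Homebrew install and is hypothetical:

    otool -L "$(brew --prefix)/opt/libpq/lib/libpq.5.dylib" | grep -i -E 'gss|krb'

Entries pointing at Homebrew's prefix rather than a /System framework would
confirm the MIT Kerberos linkage.)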

My guess at this point is that somebody at Homebrew put in a
hack (perhaps quite recently) that causes their build of MIT
Kerberos to sometimes call Apple's implementation, and that
ill-advised idea has re-opened the problem that issues/47494
meant to solve.

I'd suggest filing a bug against Homebrew's krb5 package.
Whatever this is, it seems pretty clear that it's not a
Postgres bug.

regards, tom lane




Re: Query related to pg_dump write to a pipe on a windows client and with compressed format

2023-12-01 Thread Adrian Klaver

On 11/30/23 18:57, Meera Nair wrote:

> Hi all,
>
> https://www.postgresql.org/message-id/flat/CAKKd065aJ1LuUMw_bhBgmgoM6Ng-cLdBobpzRiU%2BUsdsmW2aOg%40mail.gmail.com#996021734fa788bd1bc737254002ad11
>
> We tried writing pg_dump output to a pipe in compressed format and hit the
> issue described in the thread referenced here.

The complete command line was?

> It was on a Windows client running PostgreSQL 9.2, so during pg_restore, a
> seek error was seen.
> With pg_dump writing the dump to a file that was then moved to another
> machine for backup, this problem did not occur.

The pg_dump command used for this was?

> But now the issue is that we need a lot of free space to write the dump
> file.

Size of file?

> So we are trying to understand whether the compressed format still needs
> the staging output file with the latest versions, or whether we can write
> the output to a pipe to move the data to another machine for backup.

What happens if you don't compress the pg_dump output?

In other words, is the network fast enough to compensate for not
compressing?

> Regards,
>
> Meera



--
Adrian Klaver
adrian.kla...@aklaver.com





Re: Two started cluster on the same data directory and port

2023-12-01 Thread Adrian Klaver

On 12/1/23 01:17, Loles wrote:

> Yes, it seems to be caused by systemd.


To be more accurate, it is caused by the postgresql-generator file
located here:


/lib/systemd/system-generators/

That file is installed by the Postgres Debian/Ubuntu package, so it is 
the Postgres install that is using systemd in this manner.




> I'll investigate there. If not, I can upgrade PostgreSQL and get rid of
> server 14.


If your directory structure resembles:

/etc/postgresql/14
├── main
│   ├── conf.d
│   ├── environment
│   ├── pg_ctl.conf
│   ├── pg_hba.conf
│   ├── pg_ident.conf
│   ├── postgresql.conf
│   └── start.conf
└── main_old
    ├── conf.d
    ├── environment
    ├── pg_ctl.conf
    ├── pg_hba.conf
    ├── pg_ident.conf
    ├── postgresql.conf
    └── start.conf

Then you just need to:

1) Stop the server
2) mv or rm the main_old/ directory
3) Restart the server
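
As concrete commands, a sketch assuming the stock Debian/Ubuntu unit name
and a hypothetical destination path:

    sudo systemctl stop postgresql@14-main      # unit name assumed from stock packaging
    sudo mv /etc/postgresql/14/main_old /root/main_old.bak   # or rm -r once you're sure
    sudo systemctl start postgresql@14-main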


> Thank you very much Adrian... I'm going to get a cup of coffee because
> the topic requires it :D



On Fri, Dec 1, 2023 at 6:19 AM Adrian Klaver () wrote:


On 11/30/23 20:43, Matthias Apitz wrote:
 > On Thursday, November 30, 2023 at 02:10:25 p.m. -0800, Adrian Klaver wrote:
 >
 >> On 11/30/23 12:35, Adrian Klaver wrote:
 >>> On 11/30/23 10:35, Adrian Klaver wrote:
  On 11/30/23 09:27, Loles wrote:
 
 >>
 >>> cd /etc/postgresql/14/
 >>> sudo rm -r main_old/
 >>>     or
 >>> sudo cp -r main_old 
 >>
 >> Arrgh.
 >>
 >>      sudo mv -r main_old 
 >>
 >> Memo to self don't eat lunch and copy/paste at same time.
 >
 > Hmmm
 >
 > purism@pureos:~$ uname -s
 > Linux
 > purism@pureos:~$ mv -r foo bar
 > mv: invalid option -- 'r'

The gift that keeps on giving. Third time's a charm?:

         sudo mv  main_old/ 

 > Try 'mv --help' for more information.
 >
 >       matthias

-- 
Adrian Klaver
adrian.kla...@aklaver.com



--
Adrian Klaver
adrian.kla...@aklaver.com





Re: Query related to pg_dump write to a pipe on a windows client and with compressed format

2023-12-01 Thread Ron Johnson
On Fri, Dec 1, 2023 at 9:10 AM Meera Nair  wrote:

> Hi all,
>
>
> https://www.postgresql.org/message-id/flat/CAKKd065aJ1LuUMw_bhBgmgoM6Ng-cLdBobpzRiU%2BUsdsmW2aOg%40mail.gmail.com#996021734fa788bd1bc737254002ad11
>
> We tried writing pg_dump output to a pipe in compressed format and hit the
> issue described in the thread referenced here.
> It was on a Windows client running PostgreSQL 9.2, so during pg_restore, a
> seek error was seen.
> With pg_dump writing the dump to a file that was then moved to another
> machine for backup, this problem did not occur.
> But now the issue is that we need a lot of free space to write the dump
> file.
>
> If we use tar format, temporary files created in C:\ take space.
> If we use the parallel dump format, staging the dump output is also a must.
>
> So we are trying to understand whether the compressed format still needs
> the staging output file with the latest versions, or whether we can write
> the output to a pipe to move the data to another machine for backup.
>

In PG 9.6 (which I have experience with) and beyond, pg_dump
--format=directory compresses by default, and on the fly.
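
As an illustration, a compressed custom-format dump can be streamed to
another machine without a local staging file; a sketch with hypothetical
host, database, and path names (verify against your pg_dump/pg_restore
versions):

    pg_dump -Fc mydb | ssh backuphost 'cat > /backups/mydb.dump'
    ssh backuphost 'pg_restore --list /backups/mydb.dump'   # TOC check on a seekable file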


Re: libpq crashing on macOS during connection startup

2023-12-01 Thread John DeSoi


> On Nov 30, 2023, at 7:53 PM, Tom Lane  wrote:
> 
> gssencmode=disable in your connection options; but that's a tad
> inconvenient probably.

Yes, the application uses PHP PDO to connect to PostgreSQL. I don't see any way 
to specify that in the connection options.
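
One possible workaround, assuming the libpq in use is v12 or later: libpq
also reads connection options from environment variables, so setting
PGGSSENCMODE for the PHP process might have the same effect as the
connection option (untested here):

    export PGGSSENCMODE=disable   # read by libpq when gssencmode isn't set explicitly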


> Ugh, not only Heimdal but a very obsolete version thereof?  It borders
> on negligence for the homebrew PG package to be building against that.
> They should be pulling in homebrew's MIT Kerberos package and using
> that, if they want to enable GSSAPI.

I was looking at the Homebrew source for the PostgreSQL package to see if
there was a way to customize the build options. I did not find one, but saw
the comment below. Apparently this is a known issue: using the MIT Kerberos
package was suggested four years ago. Instead, they just added this comment
in 2020.

# GSSAPI provided by Kerberos.framework crashes when forked.
# See https://github.com/Homebrew/homebrew-core/issues/47494.


John DeSoi, Ph.D.



Query related to pg_dump write to a pipe on a windows client and with compressed format

2023-12-01 Thread Meera Nair
Hi all,

https://www.postgresql.org/message-id/flat/CAKKd065aJ1LuUMw_bhBgmgoM6Ng-cLdBobpzRiU%2BUsdsmW2aOg%40mail.gmail.com#996021734fa788bd1bc737254002ad11

We tried writing pg_dump output to a pipe in compressed format and hit the
issue described in the thread referenced here.
It was on a Windows client running PostgreSQL 9.2, so during pg_restore, a
seek error was seen.
With pg_dump writing the dump to a file that was then moved to another
machine for backup, this problem did not occur.
But now the issue is that we need a lot of free space to write the dump
file.

If we use tar format, temporary files created in C:\ take space.
If we use the parallel dump format, staging the dump output is also a must.

So we are trying to understand whether the compressed format still needs
the staging output file with the latest versions, or whether we can write
the output to a pipe to move the data to another machine for backup.


Regards,

Meera




Re: Feature request for INITCAP() function

2023-12-01 Thread Daniel Verite
Jeff Gerbracht wrote:

> It would be great if there was a way to set exceptions to the 'word'
> delimiter list used in the INITCAP() function. For example, I have
> hyphenated words like blue-green, or possessives and contractions like
> don't and cat's tail.
> These become Blue-Green, Don'T and Cat'S Tail.

An ICU collation may produce better results for this:
 
select initcap('blue-green don''t cat''s tail' COLLATE "en-x-icu");
           initcap
-----------------------------
 Blue-Green Don't Cat's Tail

With hyphens, it's not clear what the results should be.
Looking at the IMDb movies database [1] for instance, titles show a
mix of upper and lower case following the hyphen, even among English
titles:

Hell-to-Pay Austin
The Co-respondent
A Pit-boy's Romance
A Cowboy's Mother-in-Law
Our Parents-In-Law
The Good-for-Nothing
When the Fire-Bells Rang
The post-millennial wave
...

[1] https://datasets.imdbws.com/


Best regards,
-- 
Daniel Vérité
https://postgresql.verite.pro/
Twitter: @DanielVerite




Re: Two started cluster on the same data directory and port

2023-12-01 Thread Loles
Yes, it seems to be caused by systemd.

I'll investigate there. If not, I can upgrade PostgreSQL and get rid of
server 14.

Thank you very much Adrian... I'm going to get a cup of coffee because the
topic requires it :D


On Fri, Dec 1, 2023 at 6:19 AM Adrian Klaver () wrote:

> On 11/30/23 20:43, Matthias Apitz wrote:
> > On Thursday, November 30, 2023 at 02:10:25 p.m. -0800, Adrian
> > Klaver wrote:
> >
> >> On 11/30/23 12:35, Adrian Klaver wrote:
> >>> On 11/30/23 10:35, Adrian Klaver wrote:
>  On 11/30/23 09:27, Loles wrote:
> 
> >>
> >>> cd /etc/postgresql/14/
> >>> sudo rm -r main_old/
> >>> or
> >>> sudo cp -r main_old 
> >>
> >> Arrgh.
> >>
> >>  sudo mv -r main_old 
> >>
> >> Memo to self don't eat lunch and copy/paste at same time.
> >
> > Hmmm
> >
> > purism@pureos:~$ uname -s
> > Linux
> > purism@pureos:~$ mv -r foo bar
> > mv: invalid option -- 'r'
>
> The gift that keeps on giving. Third time's a charm?:
>
> sudo mv  main_old/ 
>
> > Try 'mv --help' for more information.
> >
> >   matthias
>
> --
> Adrian Klaver
> adrian.kla...@aklaver.com
>
>


Re: PAF with Pacemaker

2023-12-01 Thread Jehan-Guillaume de Rorthais
Hi,

On Thu, 30 Nov 2023 19:07:34 +
Vijaykumar Patil  wrote:

> I have two Postgres servers, one primary and the other a replica. I have
> set up replication and configured Pacemaker and Corosync.
> 
> But I'm still facing an issue while creating the resource. It is showing
> invalid parameters.
> 
> [root@scrbtrheldbaas001 heartbeat]# pcs status
> Cluster name: pg_cluster
> Cluster Summary:
>   * Stack: corosync (Pacemaker is running)
>   * Current DC: scrbtrheldbaas001 (version 2.1.6-8.el8-6fdc9deea29) -
> partition with quorum
>   * Last updated: Thu Nov 30 19:04:29 2023 on scrbtrheldbaas001
>   * Last change:  Thu Nov 30 13:41:53 2023 by root via cibadmin on
> scrbtrheldbaas002
>   * 2 nodes configured
>   * 2 resource instances configured
> 
> Node List:
>   * Online: [ scrbtrheldbaas001 scrbtrheldbaas002 ]
> 
> Full List of Resources:
>   * Clone Set: pgsqld-clone [pgsqld] (promotable):
> * Stopped (invalid parameter): [ scrbtrheldbaas001 scrbtrheldbaas002 ]

Side note: make sure to set up fencing and/or a watchdog.

> My Postgres version is 15.3 but it is still searching for recovery.conf.
> Please find the log below.

It does not search for recovery.conf on v15. In fact, if you set up a
recovery.conf with Postgres v15, PAF errors out immediately with the
appropriate error message:

https://github.com/ClusterLabs/PAF/blob/master/script/pgsqlms#L1350

> ...
> Nov 30 13:43:32 scrbtrheldbaas002.crb.apmoller.net
> pacemaker-controld[1114133]:  notice: Result of probe operation for pgsqld on
> scrbtrheldbaas002: invalid parameter (Recovery template file must contain
> "standby_mode = on")

This is the real error. But this error is only checked for v11 and before,
so now I wonder which version of PAF you are actually running. Is it up to
date or a very old one?

Or maybe the agent failed to parse your actual version correctly from
$PGDATA/PG_VERSION?

https://github.com/ClusterLabs/PAF/blob/master/script/pgsqlms#L637
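
A quick sanity check, assuming $PGDATA points at the cluster's data
directory:

    cat "$PGDATA/PG_VERSION"   # should print 15 for a v15 cluster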

++