Re: Connection fails on one system in an address range allowed to connect

2023-11-22 Thread Inzamam Shafiq
You need to allow connections from this IP in your pg_hba.conf file and reload 
the configuration.
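For reference, the error quoted below reports both a password failure and "no pg_hba.conf entry ... no encryption", while the server's only matching entry is `hostssl`. A hedged sketch of two ways to reconcile that (the extra `host` line and the DSN option are suggestions, not part of the original setup):

```
# pg_hba.conf -- the entry quoted in the thread matches SSL connections only:
hostssl all all 150.135.124.0/25 password

# Option 1: also accept non-SSL connections from the range
# (hypothetical addition):
host    all all 150.135.124.0/25 password

# Option 2: keep hostssl only and make the client use SSL, e.g. in the
# Perl DBI DSN:
#   dbi:Pg:dbname=webdata;host=dhbpostgres.pharmacy.arizona.edu;port=5432;sslmode=require
```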

From: Johnson, Bruce E - (bjohnson) 
Sent: Wednesday, November 22, 2023 4:27 AM
To: pgsql-general@lists.postgresql.org 
Subject: Connection fails on one system in an address range allowed to connect

I am migrating an existing web application from Oracle to postgres and I’m 
testing the connectivity.

Trying to run a test program (that works on another system in the same subnet!) 
I get this error:

Error system:

[root@dhbroomscheduling4 ~]# ./pg_test.pl
DBI 
connect('dbname=webdata;host=dhbpostgres.pharmacy.arizona.edu;port=5432','trav',...)
 failed: FATAL:  password authentication failed for user "trav"
FATAL:  no pg_hba.conf entry for host "150.135.124.50", user "trav", database 
"webdata", no encryption at ./pg_test.pl line 8.

Working system:

[root@avipg perl]# ./pg_test.pl
Sector Alpha Crucis has 44 worlds
Sector Antares has 37 worlds
Sector Core has 221 worlds …

(The test dataset is a collection of mapping data for an old RPG game)

Note: the pg_test.pl script was copied from the working server to the 
non-working one.

The pg_hba.conf on the server includes this entry, which should cover all 
systems in this VLAN:

# external 'OldMTM' site range
hostssl all all 150.135.124.0/25 password

Another system in the same address range is working just fine with the 
identical setup, in fact it’s in production without issues.

Both systems are running Rocky Linux 9, using the perl DBI interface with 
DBD::Pg all installed from the Rocky Linux repositories.

Firewall settings, Perl version, environment variables, etc. are the same on 
both client hosts.

I know the password is correct because I just successfully logged in on the 
server with psql -U trav -d webdata -W  and used the password in the connect 
statement in the script.

Anywhere else that I should look for a cause?


--
Bruce Johnson
University of Arizona
College of Pharmacy
Information Technology Group

Institutions do not have opinions, merely customs




Re: Regarding HA, Failover and Load Balancing

2023-11-03 Thread Inzamam Shafiq
Hi,

You can use a combination of Patroni, PgBouncer, and HAProxy: Patroni handles 
automatic failover, PgBouncer provides connection pooling, and HAProxy can be 
used for load balancing. All of these tools are open source.
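As a rough sketch of how HAProxy can sit in front of a Patroni pair (host names and ports are hypothetical): Patroni's REST API on port 8008 answers health checks, and the /primary endpoint returns 200 only on the current leader, so HAProxy always routes writes to whichever node holds the lock.

```
# haproxy.cfg (sketch)
listen postgres_primary
    bind *:5000
    option httpchk GET /primary
    http-check expect status 200
    default-server inter 3s fall 3 rise 2 on-marked-down shutdown-sessions
    server pg1 pg1.example.com:5432 maxconn 100 check port 8008
    server pg2 pg2.example.com:5432 maxconn 100 check port 8008
```

Clients (or PgBouncer) then connect to port 5000 and transparently follow a failover.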

Regards,
Inzamam Shafiq

From: Arif Hussain 
Sent: Friday, November 3, 2023 2:04 PM
To: pgsql-general@lists.postgresql.org 
Subject: Regarding HA, Failover and Load Balancing

Hello,


Could anyone please help with implementing high availability, replication, and 
failover for PostgreSQL?

  *   We are using PostgreSQL 14 on VMs (planning to upgrade soon).

  *   We have a single server and planning to achieve:
i) High availability (1st priority)
ii) Automatic Failover (1st priority)
iii) Data Replication. (1st priority)
iv) Fault detection (2nd priority)
v) Monitoring and Alerts (2nd priority)
vi) GUI (2nd priority)

  *   As far as I know, Postgres does support different replication options, 
but it does NOT support automatic failover and load balancing.

  *   As per my research, I found the below list of tools, but all of the 
required features are NOT provided by any one tool. Every tool has some pros 
and cons.

pgpool-II: it provides pooling and load balancing, but it does not 
provide automatic failover.

Patroni: it provides automatic failover, but it is not good for load 
balancing.

ClusterControl: another candidate with a free community edition, but 
failover is only in the paid version. Not sure how efficient it is.

EnterpriseDB: another one, fully paid. Exploring each and every tool to 
understand its capabilities and limitations would be a very time-consuming task.

I want to know which tool or set of tools is most commonly used, so that I can 
narrow down my research and make an efficient decision quickly. Open source 
tools are our preference, but if a paid tool provides almost all of the 
features, we would consider it.



Thanks,


Arif



Re: Replication between different 15.x minor versions ok?

2023-06-28 Thread Inzamam Shafiq
Streaming replication between different minor versions works fine. It's still 
better to keep both servers on the same version, but differing minor versions 
also work.

Sent from Outlook for Android


From: David Tinker 
Sent: Wednesday, June 28, 2023 3:02:22 pm
To: pgsql-general@lists.postgresql.org 
Subject: Replication between different 15.x minor versions ok?

Is it ok to use physical replication between different 15.x minor releases (on 
Ubuntu 22.04)? I haven't been able to find a definitive answer. Thanks.




Patroni Issue

2023-05-10 Thread Inzamam Shafiq
Hi Team,

I hope you are doing well.

I am working on Patroni auto failover. I have 3 etcd nodes and 2 
PostgreSQL/Patroni nodes; the etcd cluster is running fine with no issues. I 
have installed PostgreSQL on the Patroni nodes and configured streaming 
replication using pg_basebackup, which is running fine. On top of that, I 
created the YAML file for Patroni. When I start Patroni on the first (primary) 
node, it shows the following logs, keeps repeating this message, and stays in 
the starting state. Can someone please help me identify the issue? The logs 
are as follows:

-bash-4.2$ patroni /etc/patroni/patroni.yml
2023-05-10 16:43:59,147 INFO: Selected new etcd server http://172.xx.xx.xx:2379
2023-05-10 16:43:59,151 INFO: No PostgreSQL configuration items changed, 
nothing to reload.
2023-05-10 16:43:59,171 WARNING: Postgresql is not running.
2023-05-10 16:43:59,171 INFO: Lock owner: None; I am pg1
2023-05-10 16:43:59,174 INFO: pg_controldata:
  pg_control version number: 1201
  Catalog version number: 201909212
  Database system identifier: 7230768165275119881
  Database cluster state: shut down
  pg_control last modified: Wed May 10 16:43:54 2023
  Latest checkpoint location: 0/6000150
  Latest checkpoint's REDO location: 0/6000150
  Latest checkpoint's REDO WAL file: 00010006
  Latest checkpoint's TimeLineID: 1
  Latest checkpoint's PrevTimeLineID: 1
  Latest checkpoint's full_page_writes: on
  Latest checkpoint's NextXID: 0:490
  Latest checkpoint's NextOID: 16386
  Latest checkpoint's NextMultiXactId: 1
  Latest checkpoint's NextMultiOffset: 0
  Latest checkpoint's oldestXID: 479
  Latest checkpoint's oldestXID's DB: 1
  Latest checkpoint's oldestActiveXID: 0
  Latest checkpoint's oldestMultiXid: 1
  Latest checkpoint's oldestMulti's DB: 1
  Latest checkpoint's oldestCommitTsXid: 0
  Latest checkpoint's newestCommitTsXid: 0
  Time of latest checkpoint: Wed May 10 16:43:53 2023
  Fake LSN counter for unlogged rels: 0/3E8
  Minimum recovery ending location: 0/0
  Min recovery ending loc's timeline: 0
  Backup start location: 0/0
  Backup end location: 0/0
  End-of-backup record required: no
  wal_level setting: replica
  wal_log_hints setting: on
  max_connections setting: 100
  max_worker_processes setting: 8
  max_wal_senders setting: 5
  max_prepared_xacts setting: 0
  max_locks_per_xact setting: 64
  track_commit_timestamp setting: off
  Maximum data alignment: 8
  Database block size: 8192
  Blocks per segment of large relation: 131072
  WAL block size: 8192
  Bytes per WAL segment: 16777216
  Maximum length of identifiers: 64
  Maximum columns in an index: 32
  Maximum size of a TOAST chunk: 1996
  Size of a large-object chunk: 2048
  Date/time type storage: 64-bit integers
  Float4 argument passing: by value
  Float8 argument passing: by value
  Data page checksum version: 0
  Mock authentication nonce: 
c557e5f51c201ffaa61d5372fe87384044552322ff979d1e05f6030be1fc7cc0

2023-05-10 16:43:59,185 INFO: Lock owner: None; I am pg1
2023-05-10 16:43:59,191 INFO: starting as a secondary
2023-05-10 16:43:59.461 PKT [22878] LOG:  starting PostgreSQL 12.14 on 
x86_64-pc-linux-gnu, compiled by gcc (GCC) 4.8.5 20150623 (Red Hat 4.8.5-44), 
64-bit
2023-05-10 16:43:59.461 PKT [22878] LOG:  listening on IPv4 address "0.0.0.0", 
port 5432
2023-05-10 16:43:59,476 INFO: postmaster pid=22878
2023-05-10 16:43:59.479 PKT [22878] LOG:  listening on Unix socket 
"/var/run/postgresql/.s.PGSQL.5432"
localhost:5432 - no response
2023-05-10 16:43:59.498 PKT [22878] LOG:  redirecting log output to logging 
collector process
2023-05-10 16:43:59.498 PKT [22878] HINT:  Future log output will appear in 
directory "log".
localhost:5432 - no response
localhost:5432 - no response
localhost:5432 - no response
localhost:5432 - no response
localhost:5432 - no response
localhost:5432 - no response
localhost:5432 - no response
localhost:5432 - no response
localhost:5432 - no response
2023-05-10 16:44:09,163 INFO: Lock owner: None; I am pg1
2023-05-10 16:44:09,163 INFO: not healthy enough for leader race
2023-05-10 16:44:09,169 INFO: restarting after failure in progress
localhost:5432 - no response
localhost:5432 - no response
localhost:5432 - no response
localhost:5432 - no response
localhost:5432 - no response
localhost:5432 - no response
localhost:5432 - no response
localhost:5432 - no response
localhost:5432 - no response
localhost:5432 - no response
2023-05-10 16:44:19,163 INFO: Lock owner: None; I am pg1
2023-05-10 16:44:19,163 INFO: not healthy enough for leader race
2023-05-10 16:44:19,169 INFO: restarting after failure in progress
localhost:5432 - no response
localhost:5432 - no response
localhost:5432 - no response

Thanks.
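For what it's worth, the repeated `localhost:5432 - no response` means Patroni's health probe cannot connect even though the postmaster started; the real startup error is usually in the server log (the output above notes it was redirected to the "log" directory under the data dir). One common cause is a pg_hba.conf that does not admit the superuser/replication users configured in patroni.yml; a hedged sketch (user names hypothetical):

```
# pg_hba.conf -- Patroni must be able to connect locally as its
# configured superuser and replication users:
host  all          postgres    127.0.0.1/32  md5
host  replication  replicator  127.0.0.1/32  md5
```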

Regards,

Inzamam Shafiq
Sr. DBA


ETCD for Patroni configuration error

2023-04-19 Thread Inzamam Shafiq
Hi Team,

I am configuring an etcd cluster and facing the error below when I start etcd 
or list the members.

2023-04-19 11:31:31.001071 N | embed: serving insecure client requests on 
xxx.xx.xx.xx:2379, this is strongly discouraged!
2023-04-19 11:31:31.001184 N | embed: serving insecure client requests on 
127.0.0.1:2379, this is strongly discouraged!
WARNING: 2023/04/19 11:31:31 grpc: addrConn.resetTransport failed to create 
client transport: connection error: desc = "transport: Error while dialing dial 
tcp: missing address"; Reconnecting to {xxx.xx.xx.xx:2379 0  }
WARNING: 2023/04/19 11:31:32 grpc: addrConn.resetTransport failed to create 
client transport: connection error: desc = "transport: Error while dialing dial 
tcp: missing address"; Reconnecting to {xxx.xx.xx.xx:2379 0  }
WARNING: 2023/04/19 11:31:33 grpc: addrConn.resetTransport failed to create 
client transport: connection error: desc = "transport: Error while dialing dial 
tcp: missing address"; Reconnecting to {xxx.xx.xx.xx:2379 0  }
WARNING: 2023/04/19 11:31:36 grpc: addrConn.resetTransport failed to create 
client transport: connection error: desc = "transport: Error while dialing dial 
tcp: missing address"; Reconnecting to {xxx.xx.xx.xx:2379 0  }
WARNING: 2023/04/19 11:31:41 grpc: addrConn.resetTransport failed to create 
client transport: connection error: desc = "transport: Error while dialing dial 
tcp: missing address"; Reconnecting to {xxx.xx.xx.xx:2379 0  }
WARNING: 2023/04/19 11:31:48 grpc: addrConn.resetTransport failed to create 
client transport: connection error: desc = "transport: Error while dialing dial 
tcp: missing address"; Reconnecting to {xxx.xx.xx.xx:2379 0  }

When I execute this command:

etcdctl member list

the below error is received:

client: etcd cluster is unavailable or misconfigured; error #0: proxyconnect 
tcp: dial tcp :80: connect: connection refused

Can someone please help in identifying the issue?

Is it something related to port 80, or some configuration issue?

My conf file is as follows:

name: "etcd"
initial-cluster: "etcd=http://xxx.xx.xx.xx:2380"
initial-cluster-state: "new"
initial-cluster-token: "etcd-cluster"
initial-advertise-peer-urls: "http://xxx.xx.xx.xx:2380"
data-dir: "/var/lib/etcd/pgsql"
listen-peer-urls: "http://xxx.xx.xx.xx:2380"
listen-client-urls: "http://xxx.xx.xx.xx:2379,http://localhost:2379"
advertise-client-urls: "http://xxx.xx.xx.xx:2379"
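The `proxyconnect tcp: dial tcp :80` part of the etcdctl error is typically produced when an HTTP proxy variable is set in the environment and etcdctl tries to reach the cluster through it. A hedged check (the variable names are the standard ones; the endpoint list is illustrative):

```shell
# List any proxy variables currently exported; etcdctl honors these.
env | grep -i 'proxy' || echo "no proxy variables set"

# Either exempt the local etcd endpoints from the proxy...
export NO_PROXY="localhost,127.0.0.1"

# ...or clear the proxy settings for this shell before retrying
# `etcdctl member list`:
unset http_proxy https_proxy HTTP_PROXY HTTPS_PROXY
```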

Regards,

Inzamam Shafiq
Sr. DBA


Re: Replicate data from one standby server to another standby

2023-04-09 Thread Inzamam Shafiq
Yes, you can replicate data from one standby to another (cascading replication).
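A minimal sketch of what cascading replication looks like on the second standby, assuming hypothetical host and user names: the base backup is taken from the first standby, and primary_conninfo points at it rather than at the primary.

```
# Take the base backup from the existing standby, not the primary:
#   pg_basebackup -h standby1.example.com -D /var/lib/pgsql/data -R -P -U replicator

# postgresql.conf on the new (second) standby:
primary_conninfo = 'host=standby1.example.com port=5432 user=replicator'
hot_standby = on

# standby1 needs spare max_wal_senders, since it now serves WAL to the
# downstream standby as well as to any backup tooling.
```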

Sent from Outlook for Android

From: Atul Kumar 
Sent: Sunday, April 9, 2023 3:14:06 PM
To: pgsql-general 
Subject: Replicate data from one standby server to another standby

Hi,

I have configured "sync" streaming replication to replicate data from one 
primary node to one standby node.

Now I want to add one more standby node that will replicate data from the 
previously created standby replica (not from the primary replica, as we do 
traditionally).

So please let me know whether we have any such flexibility, where we can 
replicate data from one standby replica to another standby replica?



Regards.


Re: Patroni vs pgpool II

2023-04-05 Thread Inzamam Shafiq
But I have heard that Pgpool is still affected by split-brain syndrome.

Regards,

Inzamam Shafiq
Sr. DBA

From: Tatsuo Ishii 
Sent: Wednesday, April 5, 2023 12:38 PM
To: cyberd...@gmail.com 
Cc: inzamam.sha...@hotmail.com ; 
pgsql-general@lists.postgresql.org 
Subject: Re: Patroni vs pgpool II

> BUT, even if there is a solution that parses queries to make a decision it
> I would not recommend anyone to use it unless all consequences are
> understood.
> Specifically, not every read-only query could be safely sent to a replica,
> because replicas could be lagging behind the primary.
> Only application (developers) could decide whether for a specific query
> they could afford slightly outdated results. Most of the popular
> application frameworks support configuring two connection strings for this
> purpose.

I think Pgpool-II users understand the effect of replication lag well,
because I've never heard complaints like "hey, why is my query
result sometimes outdated?"

Moreover Pgpool-II provides many load balancing features depending on
user's needs. For example users can:

- just turn off load balancing
- turn off load balancing only for specific application name
- turn off load balancing only for specific database
- turn off load balancing if current transaction includes write query
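As a sketch, the behaviors listed above map to pgpool.conf parameters roughly like this (parameter names are from Pgpool-II; the application and database names are hypothetical):

```
load_balance_mode = off                                # disable balancing entirely
app_name_redirect_preference_list = 'myapp:primary'    # pin one application to the primary
database_redirect_preference_list = 'webdata:primary'  # pin one database to the primary
disable_load_balance_on_write = 'transaction'          # stop balancing once a txn has written
```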

Best regards,
--
Tatsuo Ishii
SRA OSS LLC
English: http://www.sraoss.co.jp/index_en/
Japanese:http://www.sraoss.co.jp


Patroni vs pgpool II

2023-04-03 Thread Inzamam Shafiq
Hi Guys,

Hope you are doing well.

Can someone please suggest which one (Patroni vs Pgpool-II) is best for 
achieving HA/auto failover and load balancing for DB servers? Along with this, 
can you please share the company/client names using these tools for large PG 
databases?

Thanks.

Regards,

Inzamam Shafiq


Re: PostgreSQL vs MariaDB

2023-03-25 Thread Inzamam Shafiq
Hi Ben,

We have a complex application running, and we will have a lot of inserts, 
updates, and deletes. We have many partitioned tables with huge data volumes, 
some complex SQL written in the application logic, some analytical functions, 
union/intersect etc. clauses, and we also use materialized views at the DB 
level. There are some triggers and complex procedures as well.

Regards,

Inzamam Shafiq
Sr. DBA

From: Ben Chobot 
Sent: Friday, March 24, 2023 7:08 PM
To: Inzamam Shafiq 
Cc: pgsql-general@lists.postgresql.org 
Subject: Re: PostgreSQL vs MariaDB

Inzamam Shafiq wrote on 3/24/23 4:07 AM:
Hi Team,

Hope you are doing well.

Can someone please list the pros and cons of MariaDB vs PostgreSQL that 
actually need serious consideration while choosing the right database for 
large OLTP DBs (terabytes)?

That's a very broad question, which will take far more effort to answer than 
most people are willing to provide.

I would suggest describing your specific use case and asking how well 
PostgreSQL will fit it. You clearly have a lot of data, for example, so is that 
distributed across many tables? Large blobs or lots of rows? Lots of inserts? 
Reads? Updates? Those sorts of things.

Maybe somebody on this PostgreSQL list might even know MariaDB well enough to 
compare and contrast the two for you.


PostgreSQL vs MariaDB

2023-03-24 Thread Inzamam Shafiq
Hi Team,

Hope you are doing well.

Can someone please list the pros and cons of MariaDB vs PostgreSQL that 
actually need serious consideration while choosing the right database for 
large OLTP DBs (terabytes)?

Thanks.

Regards,

Inzamam Shafiq
Sr. DBA


Re: Oracle to PostgreSQL Migration

2023-03-24 Thread Inzamam Shafiq
Hi Umair,

Thanks. oracle_fdw can be used, but it doesn't provide real-time sync; we 
would have to schedule jobs to insert data into the actual schema from the 
foreign tables, so there would be some delay.

Regards,

Inzamam Shafiq

From: Umair Shahid 
Sent: Tuesday, March 21, 2023 3:48 PM
To: Inzamam Shafiq 
Cc: Thomas Kellerer ; pgsql-general@lists.postgresql.org 

Subject: Re: Oracle to PostgreSQL Migration

Could oracle_fdw (https://github.com/laurenz/oracle_fdw) help in your use case? 
You could consider setting up a job to pull data into PostgreSQL live and then 
cut over when you are ready.

- Umair

On Tue, Mar 21, 2023 at 10:36 AM Inzamam Shafiq wrote:
Hi Thomas,

We have tried Kafka, but we got stuck in the CDC part; update/delete was not 
working due to a NULL value issue. Do you have any helpful material for Oracle 
to PostgreSQL data migration using Debezium?

Regards,

Inzamam Shafiq
Sr. DBA

From: Thomas Kellerer 
Sent: Monday, March 20, 2023 7:21 PM
To: pgsql-general@lists.postgresql.org
Subject: Re: Oracle to PostgreSQL Migration

Inzamam Shafiq schrieb am 20.03.2023 um 13:57:
> We have an Oracle DB which is around 1TB and we want to migrate to
> PostgreSQL that have a new table structure, so we want to perform
> data transformation and real time CDC from Oracle to PostgreSQL. Do
> we have any good open source tool to achieve this with No Coding
> involved.??


You could try Debezium, but I don't know how well it works or whether it 
qualifies as "no coding involved".





Re: Oracle to PostgreSQL Migration

2023-03-20 Thread Inzamam Shafiq
Hi Thomas,

We have tried Kafka, but we got stuck in the CDC part; update/delete was not 
working due to a NULL value issue. Do you have any helpful material for Oracle 
to PostgreSQL data migration using Debezium?

Regards,

Inzamam Shafiq
Sr. DBA

From: Thomas Kellerer 
Sent: Monday, March 20, 2023 7:21 PM
To: pgsql-general@lists.postgresql.org 
Subject: Re: Oracle to PostgreSQL Migration

Inzamam Shafiq schrieb am 20.03.2023 um 13:57:
> We have an Oracle DB which is around 1TB and we want to migrate to
> PostgreSQL that have a new table structure, so we want to perform
> data transformation and real time CDC from Oracle to PostgreSQL. Do
> we have any good open source tool to achieve this with No Coding
> involved.??


You could try Debezium, but I don't know how well it works or whether it 
qualifies as "no coding involved".





Re: Oracle to PostgreSQL Migration

2023-03-20 Thread Inzamam Shafiq
Hi Ron,

I have used ora2pg for a small database, but we have a zero-downtime 
requirement: how do we replicate real-time data from Oracle to PostgreSQL? We 
can migrate the schema from Oracle to PostgreSQL using ora2pg, along with a 
one-time bulk data load, but what about the data that changes after the bulk 
load?

Regards,

Inzamam Shafiq
Sr. DBA

From: Ron Johnson 
Sent: Monday, March 20, 2023 7:15 PM
To: pgsql-general@lists.postgresql.org 
Subject: Re: Oracle to PostgreSQL Migration

Real-time CDC is the difficult part.  ora2pg (using views) can do a static 
migration.  No coding (unless you consider clever use of bash to modify config 
files to be coding).  I used it to migrate a 7TB db to Postgresql.

https://ora2pg.darold.net/
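For context, a static ora2pg run is driven by a config file; a minimal hedged sketch (the DSN, credentials, and schema name are placeholders, not from this thread):

```
# ora2pg.conf (sketch)
ORACLE_DSN   dbi:Oracle:host=orahost;sid=ORCL;port=1521
ORACLE_USER  system
ORACLE_PWD   secret
SCHEMA       APPSCHEMA
TYPE         TABLE        # schema objects; use COPY or INSERT for the data pass
```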

On Mon, Mar 20, 2023 at 8:58 AM Inzamam Shafiq wrote:
Hi,

Hope everyone is fine.

Can someone help or guide regarding open source tools for Oracle to PostgreSQL 
migration with real-time CDC? Along with this, is there any possibility to 
change the structure of the database? Let me explain a little more:

We have an Oracle DB which is around 1 TB, and we want to migrate to 
PostgreSQL with a new table structure, so we want to perform data 
transformation and real-time CDC from Oracle to PostgreSQL. Do we have any 
good open source tool to achieve this with no coding involved?

Thanks.

Regards,

Inzamam Shafiq
Sr. DBA


Oracle to PostgreSQL Migration

2023-03-20 Thread Inzamam Shafiq
Hi,

Hope everyone is fine.

Can someone help or guide regarding open source tools for Oracle to PostgreSQL 
migration with real-time CDC? Along with this, is there any possibility to 
change the structure of the database? Let me explain a little more:

We have an Oracle DB which is around 1 TB, and we want to migrate to 
PostgreSQL with a new table structure, so we want to perform data 
transformation and real-time CDC from Oracle to PostgreSQL. Do we have any 
good open source tool to achieve this with no coding involved?

Thanks.

Regards,

Inzamam Shafiq
Sr. DBA


Re: Purging few months old data and vacuuming in production

2022-12-29 Thread Inzamam Shafiq
Hi,

I think partitioning would be a good option; you can easily detach partitions 
with minimal time.
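A minimal sketch of the detach-based purge (table and column names are hypothetical; declarative range partitioning is available on the PostgreSQL 11 mentioned below):

```sql
-- Range-partition the hot table by day:
CREATE TABLE events (
    id         bigint,
    created_at timestamptz NOT NULL
) PARTITION BY RANGE (created_at);

CREATE TABLE events_2022_09_30 PARTITION OF events
    FOR VALUES FROM ('2022-09-30') TO ('2022-10-01');

-- Daily purge of the 90-day-old slice: a brief lock, no row-by-row
-- DELETE, and no vacuum debt afterwards:
ALTER TABLE events DETACH PARTITION events_2022_09_30;
DROP TABLE events_2022_09_30;  -- or archive it instead of dropping
```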

Regards,

Inzamam Shafiq
Sr. DBA

From: Ranjith Paliyath 
Sent: Friday, December 30, 2022 11:39 AM
To: pgsql-general@lists.postgresql.org 
Subject: Purging few months old data and vacuuming in production

Hi,

We have a PostgreSQL production system (a slightly old version: PostgreSQL 
11.2 on x86_64-pc-linux-gnu, compiled by gcc (GCC) 4.8.5 20150623 (Red Hat 
4.8.5-36), 64-bit), where one particular table and its 5 related tables need 
to be purged of data older than 3 months. Each of these tables grows by 2 to 3 
million records per day on average.

The approach needed is a daily purge of data older than 90 days. The probable 
purge processing window is expected to be 2 hrs. Observed test timings for the 
deletion exceed 2-3 hrs, and we are trying to vacuum after the deletes, which 
takes yet another 2+ hrs.
There is a suggestion to re-create the tables with partitions; the purge 
approach could then be detaching/dropping those partitions, which would not 
require vacuuming afterwards.

When we go for a daily purge approach, it should not put a strain on other 
processes affecting this same set of tables; for instance, these tables should 
not get locked because of the purge.

Questions are -
(a) Should we recommend PostgreSQL upgrade, if possible, to v15.1? Could this 
bring in some benefits related to vacuuming?
(b) Would partitioning be an optimal approach?

Thank you,
Regards






Re: Autovacuum on sys tables

2022-12-18 Thread Inzamam Shafiq
Thanks Thomas for the response,

Does that mean that if I run VACUUM ANALYZE on the pg_attribute or pg_class 
tables, it is completely safe? Can you please also confirm what is meant by a 
"mid-level" vacuum?

Regards,

Inzamam Shafiq
Sr. DBA

From: Thomas Boussekey 
Sent: Sunday, December 18, 2022 4:01 PM
To: Inzamam Shafiq 
Cc: pgsql-general@lists.postgresql.org 
Subject: Re: Autovacuum on sys tables

Hello Inzamam,

On Sat, Dec 17, 2022 at 08:16, Inzamam Shafiq wrote:
Dear Experts,

Hope you are doing well.

I have a question: autovacuum is running on system tables like pg_class and 
pg_attribute; is this normal? Further, if dead tuples are not removed by 
autovacuum, can we remove them using VACUUM FULL or pg_repack on system tables?

pg_repack cannot run on system tables, it will FAIL with an explicit error 
message explaining the limitation.

Each time you perform DDL operations (CREATE, DROP, ALTER), rows are inserted, 
updated, or deleted in the system tables: pg_class, pg_attribute, ...
Autovacuum performs "low-level" operations; it can be worthwhile to perform a 
"middle-level" vacuum with VACUUM ANALYZE, which is not blocking but is more 
resource-consuming than autovacuum.

Performing a VACUUM FULL operation will block access to these pillar tables of 
your database.
If your application/users can handle it, go ahead!
At work, for this kind of operation, I set a statement_timeout in order to 
properly stop the process if it runs over a defined amount of time.
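The statement_timeout suggestion above can be sketched as follows (the timeout value is illustrative):

```sql
SET statement_timeout = '10min';          -- abort the vacuum if it runs too long
VACUUM (ANALYZE, VERBOSE) pg_catalog.pg_class;
VACUUM (ANALYZE, VERBOSE) pg_catalog.pg_attribute;
RESET statement_timeout;
```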

Hope this helps,
Thomas

Thank you.

Regards,

Inzamam Shafiq
Sr. DBA


Autovacuum on sys tables

2022-12-16 Thread Inzamam Shafiq
Dear Experts,

Hope you are doing well.

I have a question: autovacuum is running on system tables like pg_class and 
pg_attribute; is this normal? Further, if dead tuples are not removed by 
autovacuum, can we remove them using VACUUM FULL or pg_repack on system tables?

Thank you.

Regards,

Inzamam Shafiq
Sr. DBA


Re: pgbackrest Help Required

2022-09-28 Thread Inzamam Shafiq
Hi Stephen,

Thanks for your response.

> Not sure what 'traditional zip method' means here, but if you've copied
the shell script out of the documentation, that's not a safe
configuration.

Conventional zip method means my archive_command and restore_command parameters 
are as follows,

archive_command = 'if test ! -d "/archives/"; then mkdir -p "/archives/"; fi; 
test ! -f "/archives/%f" && gzip < "%p" > "/archives/%f.gz"'

restore_command = 'gunzip < "/archives/%f".gz > "%p"'


> In configuration of pgbackrest, simply configure each of the PG hosts
and the repo and then set backup-standby=y.  You'll need to make sure
that the archive_command is set to pgbackrest, of course.  Having an
existing streaming replication setup is perfectly fine and pgbackrest
won't interfere with that.  You will want to set your restore command
to pgbackrest also though, in case streaming fails.

I have done the configuration; I just need confirmation whether I did it right 
or whether it still needs some improvement. It would be helpful to have 
documentation where someone has a primary and standby in place and configured 
pgbackrest without disturbing the replication. I have followed the steps 
below; please have a look at them and suggest any improvements if required.


  1.  Configure passwordless SSH connectivity between hosts
  2.  Configure NFS for a shared drive between hosts
  3.  Stop the standby cluster
  4.  Prepare the primary node for pgbackrest and replication
  5.  Configure pgbackrest to back up the primary node
  6.  Set up the standby node for pgbackrest and resume replication
     *   Configure the standby cluster for pgbackrest and check backup info
     *   If the standby is having replication issues, then --delta restore 
using pgbackrest
     *   Check the replication
  7.  Test streaming replication
  8.  Edit pgbackrest.conf on the standby node for backup
  9.  Perform a backup from the standby
  10. Test streaming replication again
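For comparison with the gzip commands earlier in the thread, the pgbackrest wiring Stephen describes looks roughly like this (the stanza name, host names, and paths are hypothetical):

```
# postgresql.conf -- archiving through pgbackrest instead of gzip:
archive_command = 'pgbackrest --stanza=main archive-push %p'
restore_command = 'pgbackrest --stanza=main archive-get %f "%p"'

# pgbackrest.conf on the repository host -- both PG hosts registered,
# backups taken from the standby:
[main]
pg1-host=primary.example.com
pg1-path=/var/lib/pgsql/data
pg2-host=standby.example.com
pg2-path=/var/lib/pgsql/data
backup-standby=y
```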

Regards,

Inzamam Shafiq
Sr. DBA

____________
From: Stephen Frost
Sent: Tuesday, September 27, 2022 8:39 PM
To: Inzamam Shafiq
Cc: pgsql-general@lists.postgresql.org
Subject: Re: pgbackrest Help Required

Greetings,

* Inzamam Shafiq (inzamam.sha...@hotmail.com) wrote:
> I am in process of configuring pgbackrest, I have followed documents and 
> configured backup from backup server. I have a setup of Primary and Hot 
> Standby, when I configured pgbackrest the standby DB got un-synced and now I 
> am unable to recover it, can anyone help or share a good document where we 
> can implement pgbackrest where streaming replication is already in place 
> using traditional pg_basebackup method?
>
> Note: I want to configure pgbackrest and to take backup from standby DB, 
> however streaming replication is already there, where the archive and restore 
> commands are configured using traditional zip method.

Not sure what 'traditional zip method' means here, but if you've copied
the shell script out of the documentation, that's not a safe
configuration.

In configuration of pgbackrest, simply configure each of the PG hosts
and the repo and then set backup-standby=y.  You'll need to make sure
that the archive_command is set to pgbackrest, of course.  Having an
existing streaming replication setup is perfectly fine and pgbackrest
won't interfere with that.  You will want to set your restore command
to pgbackrest also though, in case streaming fails.

Thanks,

Stephen


Re: PCI-DSS Requirements

2022-09-22 Thread Inzamam Shafiq
Hi Ron,

Thank you for the response.

Actually, we are in a starting phase and I have done instance-level encryption 
(the Cybertec TDE patch), but if someone takes a dump and restores it on 
another server, the data is restored successfully. The problem is also that 
the data is in plain text.

So I want to ask: is disk- or instance-level encryption useful, or should we 
focus on column-level encryption?

Also, if any error occurs during DML, the plain query will be written into the 
logs, which may not be compliant with PCI. How do we overcome that?

Thanks.

Regards,

Inzamam Shafiq
Sr. DBA

From: Ron 
Sent: Tuesday, September 20, 2022 10:44 PM
To: pgsql-general@lists.postgresql.org 
Subject: Re: PCI-DSS Requirements

On 9/20/22 04:27, Inzamam Shafiq wrote:

Hi Team,


Anyone on PCI-DSS requirements for PostgreSQL DB, need help for some of the 
points.

Can you be more specific?  (Typically, the auditors or the "audit pre-check" 
team will ask for a bunch of details on how your instance is configured.)

The usual questions I get are:
- What password hash algorithm is used?
- How frequently do passwords expire?
- Is SSL used when communicating with applications?
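The questions above map to concrete server settings; a hedged postgresql.conf sketch (values illustrative, not PCI guidance):

```
password_encryption = scram-sha-256   # password hash algorithm
ssl = on                              # TLS between clients and server
ssl_cert_file = 'server.crt'
ssl_key_file  = 'server.key'

# Password expiry is set per role, not server-wide:
#   ALTER ROLE app_user VALID UNTIL '2023-01-01';
```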

--
Angular momentum makes the world go 'round.


pgbackrest Help Required

2022-09-22 Thread Inzamam Shafiq
Hi Folks,

Hope you are doing well.

I am in the process of configuring pgbackrest. I have followed the documents 
and configured backups from the backup server. I have a primary and hot 
standby setup; when I configured pgbackrest, the standby DB became un-synced 
and now I am unable to recover it. Can anyone help or share a good document on 
implementing pgbackrest where streaming replication is already in place using 
the traditional pg_basebackup method?

Note: I want to configure pgbackrest and take backups from the standby DB; 
however, streaming replication is already in place, with the archive and 
restore commands configured using the traditional zip method.

Thanks.

Regards,

Inzamam Shafiq
Sr. DBA


PCI-DSS Requirements

2022-09-20 Thread Inzamam Shafiq
Hi Team,


Anyone on PCI-DSS requirements for PostgreSQL DB, need help for some of the 
points.

Regards,

Inzamam Shafiq
DBA