Re: Dropping all tables in a database

2023-08-07 Thread H
On 08/06/2023 09:24 PM, Christophe Pettus wrote:
>
>> On Aug 6, 2023, at 18:17, H  wrote:
>>
>> Is there some setting I have to change in the database to make the first SQL 
>> statement work, or have I run into a possible bug?
> The first statement just generates a line of text output that contains the 
> statement.  There's nothing in it that would cause that statement to be 
> executed.
>
> If you want to create a statement dynamically and then execute it, you can do 
> that with pl/pgSQL:
>
>   
> https://www.postgresql.org/docs/current/plpgsql-statements.html#PLPGSQL-STATEMENTS-EXECUTING-DYN
>
> Otherwise, the solution is to do as you did: write the output to a file, trim 
> out any extraneous lines, and then use that as a script.
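For reference, a minimal PL/pgSQL sketch of that dynamic approach, dropping
every table in the public schema (the schema name here is an assumption):

DO $$
DECLARE
    r record;
BEGIN
    FOR r IN SELECT tablename FROM pg_tables WHERE schemaname = 'public'
    LOOP
        -- format() with %I quotes each identifier safely
        EXECUTE format('DROP TABLE IF EXISTS %I CASCADE', r.tablename);
    END LOOP;
END $$;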

Oops, you are right, thank you. I worked around it by making sure the database 
to be restored is dumped using the options --clean, --if-exists and --no-owner, 
which solved my problem.
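For example, a dump taken like this (the database name "mydb" is just a
placeholder) emits its own DROP ... IF EXISTS statements when restored:

pg_dump --clean --if-exists --no-owner --file=mydb.sql mydb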





Re: DB Server slow down & hang during Peak hours of Usage

2023-08-07 Thread Ron

On 8/7/23 08:41, KK CHN wrote:

List,

Description:

I maintain a PostgreSQL DB server with a lot of reads and writes to it 
(a virtual machine running on ESXi 7 with CentOS 7).


(I am not sure how to get the read/write counts, required IOPS, or any 
other parameters for you. If you point out the commands, I can execute 
them and get the data.)
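One way to collect those counters, sketched against the standard statistics
views (sample it twice and diff the numbers to get rates):

SELECT datname, blks_read, blks_hit,
       tup_inserted, tup_updated, tup_deleted
FROM pg_stat_database
WHERE datname = current_database();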


During peak hours, say 19:00 to 21:00, it hangs. (The application is an 
emergency call response system that writes the location coordinates of many 
emergency response vehicles to the DB every 30 seconds, plus the metadata of 
every emergency call (username, phone number, location info and address of 
the caller).)


During these hours the system hangs: the application (which shows the 
location of the vehicles on a GIS map) hangs, and the CAD machines that 
connect to the system hang as well, since they can't connect to the DB to 
fetch the caller information displayed to the call takers working on them.


Have you looked at the postgresql log file?

Have you looked in the *application* error log?
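If the logs don't show it, a snapshot of what sessions are doing (and waiting
on) during the 19:00-21:00 window is one place to start; a minimal sketch
using the standard statistics view:

SELECT pid, state, wait_event_type, wait_event,
       now() - query_start AS runtime, query
FROM pg_stat_activity
WHERE state <> 'idle'
ORDER BY runtime DESC;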



Issue:
How to trace what makes this DB hang and slow down, and how to fix it.


Resources allocated to the system:

64 vCPUs allocated (out of a host machine with 2 processor sockets of 20 
cores each with Hyper-Threading, 2nd Gen Intel Xeon; CPU usage shows 50% 
in the vCenter console), and 64 GB of RAM allocated (but usage always 
shows only around 33 GB).


What else happens on the ESXi host between 19:00 and 21:00?


Query:

How to rectify the issues that make the DB server underperform, and how to 
find a permanent fix for this slowdown.

I have attached the postgresql.conf file here for reference.

I can share any further information required for analysis.

Krishane


--
Born in Arizona, moved to Babylonia.

DB Server slow down & hang during Peak hours of Usage

2023-08-07 Thread KK CHN
List,

Description:

I maintain a PostgreSQL DB server with a lot of reads and writes to it
(a virtual machine running on ESXi 7 with CentOS 7).

(I am not sure how to get the read/write counts, required IOPS, or any
other parameters for you. If you point out the commands, I can execute
them and get the data.)
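At the OS level, one quick way to sample disk throughput and IOPS during the
peak window (assuming the sysstat package is installed on CentOS 7):

iostat -dxm 5 12    # extended per-device stats, 5-second samples for one minute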

During peak hours, say 19:00 to 21:00, it hangs. (The application is an
emergency call response system that writes the location coordinates of many
emergency response vehicles to the DB every 30 seconds, plus the metadata of
every emergency call (username, phone number, location info and address of
the caller).)

During these hours the system hangs: the application (which shows the
location of the vehicles on a GIS map) hangs, and the CAD machines that
connect to the system hang as well, since they can't connect to the DB to
fetch the caller information displayed to the call takers working on them.

Issue:
How to trace what makes this DB hang and slow down, and how to fix it.

Resources allocated to the system:

64 vCPUs allocated (out of a host machine with 2 processor sockets of 20
cores each with Hyper-Threading, 2nd Gen Intel Xeon; CPU usage shows 50%
in the vCenter console), and 64 GB of RAM allocated (but usage always
shows only around 33 GB).

Query:

How to rectify the issues that make the DB server underperform, and how to
find a permanent fix for this slowdown.

I have attached the postgresql.conf file here for reference.

I can share any further information required for analysis.

Krishane


postgresql(1).conf
Description: Binary data


Re: Backup Copy of a Production server.

2023-08-07 Thread Ron

On 8/7/23 07:05, KK CHN wrote:



On Mon, Aug 7, 2023 at 10:49 AM Ron  wrote:

On 8/7/23 00:02, KK CHN wrote:

List,

I need to copy a production PostgreSQL server's data (1 TB) to external 
storage (say, a USB hard drive) and to set up a backup server with this 
data dir.

What is the simplest method to achieve this?

1. Is a SQL dump an option on a production server? (Will this affect
server performance and possibly slow down the production server? This
server has high IOPS.) Will a SQL dump handle this much data, 1.2 TB?
Any bottlenecks?


Whether or not there will be bottlenecks depends on how busy (CPU and
disk load) the current server is.



2. Is copying the data directory from the production server to external
storage, then replacing the data directory of a backup server running the
same Postgres version with this copy, a viable option?


# cp  -r   ./data  /media/mydb_backup   (Does this affect production
database server performance, due to the copy command's overhead?)


Or is configuring WAL replication to a standby the right method to
achieve this?


But you say you can't establish a network connection outside the DC.



(I can't do it to a remote machine, but I can do WAL replication to 
another host on the same network inside the DC, so that if I do a SQL dump 
or copy the data dir of the standby server it won't affect the production 
server. Does this sound good?)


"WAL replication" is streaming replication.  Yes, you can do that.

But to what end?  See my prior comments about when you should use PgBackRest.
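A minimal sketch of creating such a standby (the host name, replication role
and data directory are assumptions): run pg_basebackup on the standby host;
on current releases, -R writes primary_conninfo and standby.signal for you.

pg_basebackup -h primary.dc.internal -U replicator \
    -D /var/lib/pgsql/data -R --progress --checkpoint=fast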





 This is to take the database backup outside the datacenter, and our DC
policy won't allow us to establish a network connection from the DC to a
remote location for WAL replication.


If you're unsure of what Linux distro & version and Postgresql version
that you'll be restoring the database to, then the solution is:
DB=the_database_you_want_to_backup
THREADS=
cd $PGDATA
cp -v pg_hba.conf postgresql.conf /media/mydb_backup
cd /media/mydb_backup
pg_dumpall --globals-only > globals.sql


What is the relevance of --globals-only?


It's all spelled out in the pg_dumpall man page.


And what will this produce, ${DB}.log? Or is it ${DB}.sql?

pg_dump --format=d --file=${DB}.dump --verbose --jobs=$THREADS $DB &> ${DB}.log
// .log -- I couldn't get an idea of what it means



I get the impression that you don't have any experience with the bash shell.

"&> ${DB}.log" redirects stdout and stderr to the file named ${DB}.log.



If you're 100% positive that the system you might someday restore to
is *exactly* the same distro & version, and Postgresql major version,
then I'd use PgBackRest.



--
Born in Arizona, moved to Babylonia.

Re: Backup Copy of a Production server.

2023-08-07 Thread Chris Travers
On Mon, Aug 7, 2023 at 7:00 PM KK CHN  wrote:

>
>
> On Mon, Aug 7, 2023 at 10:49 AM Ron  wrote:
>
>> On 8/7/23 00:02, KK CHN wrote:
>>
>> List,
>>
>> I need to copy a production PostgreSQL server's data (1 TB) to external
>> storage (say, a USB hard drive) and to set up a backup server with this
>> data dir.
>>
>> What is the simplest method to achieve this?
>>
>>
pg_basebackup backs up the data dir.

Alternatively, you could use another tool like pgbackrest, which also has
some nice features regarding incremental backups, WAL management, etc., and
parallel backup and restore.
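A minimal sketch of the pg_basebackup route straight to the USB drive (the
mount point is an assumption):

pg_basebackup -D /media/mydb_backup/base --format=tar --gzip \
    --checkpoint=fast --progress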

>
>>
>> 1. Is a SQL dump an option on a production server? (Will this affect
>> server performance and possibly slow down the production server? This
>> server has high IOPS.) Will a SQL dump handle this much data, 1.2 TB?
>> Any bottlenecks?
>>
>>
A SQL dump can be had with pg_dumpall, but this is a different backup
strategy.  I would recommend in that case pg_dumpall -g to dump only
globals (roles and tablespaces) and then pg_dump on the databases
individually, with the format set to custom (if parallelism is not
required) or directory (the one format that supports a parallel dump via
--jobs; plain tar does not).  See the pg_dump man page for details.

The file size will depend on the file format selected, etc.  My naive guess
for custom format would be maybe 200-400 GB.  For tar format, probably more
(double or more), but the total size depends on many factors and cannot be
reliably estimated.  In rare cases, it could even be larger than your data
directory.

>
>> Whether or not there will be bottlenecks depends on how busy (CPU and
>> disk load) the current server is.
>>
>
pg_basebackup is limited by the fact that it is single-threaded on both
sides (aside from WAL), and this limits disk I/O as well as network
throughput (if you have a long, fat pipe).

pg_dump is also limited by having to interpret and serialize the output
and, if you have large text or binary fields, by having to retrieve these
one at a time.  Additionally, you could have lock contention.

>
>>
>> 2. Is copying the data directory from the production server to external
>> storage, then replacing the data directory of a backup server running the
>> same Postgres version with this copy, a viable option?
>>
>>
There is a lot of complexity to doing that right.  If you want to do that,
look at using pgbackrest.

>
>>
>>
>> # cp  -r   ./data  /media/mydb_backup   (Does this affect production
>> database server performance, due to the copy command's overhead?)
>>
>>
>> Or is configuring WAL replication to a standby the right method to
>> achieve this?
>>
>>
That is often also used, but you need to define what you want out of a
backup.  A standby will protect you from hardware failure, for the most
part.  It will not, without a lot of other thought and configuration,
protect you from an administrator accidentally dropping an important table
or database.  WAL archiving and backups can help there, though (hence my
recommendation of pgbackrest, which can also restore the data directory
and/or WALs to a standby).
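A minimal sketch of plain WAL archiving in postgresql.conf (the archive
directory is an assumption; the command follows the pattern given in the
PostgreSQL docs):

archive_mode = on
archive_command = 'test ! -f /media/mydb_backup/wal/%f && cp %p /media/mydb_backup/wal/%f'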

>
>>
>> But you say you can't establish a network connection outside the DC.  (I
>> can't do it to a remote machine, but I can do WAL replication to another
>> host on the same network inside the DC, so that if I do a SQL dump or copy
>> the data dir of the standby server it won't affect the production server.
>> Does this sound good?)
>>
>
With a good backup archive accessible from only the places it needs to be
accessed, this problem goes away.

>
>>
>>
>> This is to take the database backup outside the datacenter, and our DC
>> policy won't allow us to establish a network connection from the DC to a
>> remote location for WAL replication.
>>
>>
>> If you're unsure of what Linux distro & version and Postgresql version
>> that you'll be restoring the database to, then the solution is:
>> DB=the_database_you_want_to_backup
>> THREADS=
>> cd $PGDATA
>> cp -v pg_hba.conf postgresql.conf /media/mydb_backup
>> cd /media/mydb_backup
>> pg_dumpall --globals-only > globals.sql
>>
>
> What is the relevance of --globals-only, and what will this produce,
> ${DB}.log? Or is it ${DB}.sql?
>
> pg_dump --format=d --file=${DB}.dump --verbose --jobs=$THREADS $DB &> ${DB}.log
>> // .log -- I couldn't get an idea of what it means
>>
>> If you're 100% positive that the system you might someday restore to is
>> *exactly* the same distro & version, and Postgresql major version, then
>> I'd use PgBackRest.
>>
>> --
>> Born in Arizona, moved to Babylonia.
>>
>

-- 
Best Wishes,
Chris Travers

Efficito:  Hosted Accounting and ERP.  Robust and Flexible.  No vendor
lock-in.
http://www.efficito.com/learn_more


Re: My 1st TABLESPACE

2023-08-07 Thread Amn Ojee Uw

Thanks Negora.

Makes sense, I will check it out.

On 8/7/23 1:48 a.m., negora wrote:


Hi:

Although the "postgres" user owns the "data" directory, Has he access 
to the whole branch of directories? Maybe the problem is that he can't 
reach the "data" directory.
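One quick way to check that is util-linux's namei, which lists the
permissions on every component of the path (the path is from your post):

namei -l /home/my_debian_account/Documents/NetbeansWorkSpace/JavaSE/Jme/database/postgresql/data

The "postgres" user needs execute (search) permission on every directory in
that chain, not just ownership of the final "data" directory.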


Regards.


On 07/08/2023 07:43, Amn Ojee Uw wrote:


I'd like to create a TABLESPACE, so, following this web page, 
I have done the following:


mkdir /home/my_debian_account/Documents/NetbeansWorkSpace/JavaSE/Jme/database/postgresql/data

sudo chown postgres:postgres /home/my_debian_account/Documents/NetbeansWorkSpace/JavaSE/Jme/database/postgresql/data


sudo -u postgres psql

\du
 arbolone  | Cannot login                                               | {}
 chispa    |                                                            | {prosafe}
 workerbee | Superuser, Create DB                                       | {arbolone}
 jme       |                                                            | {arbolone}
 postgres  | Superuser, Create role, Create DB, Replication, Bypass RLS | {}
 prosafe   | Cannot login                                               | {}

CREATE TABLESPACE jmetablespace OWNER jme LOCATION '/home/my_debian_account/Documents/NetbeansWorkSpace/JavaSE/Jme/database/postgresql/data';


The CREATE TABLESPACE statement throws this error message:

ERROR:  could not set permissions on directory "/home/my_debian_account/Documents/NetbeansWorkSpace/JavaSE/Jme/database/postgresql/data": Permission denied


I have followed the web page to the best of my abilities and, AFAIK, 
the postgres user owns the folder 'data'.


I know that something is missing. Where did I go wrong, and how can I 
resolve this issue?



Thanks in advance.



Re: Backup Copy of a Production server.

2023-08-07 Thread KK CHN
On Mon, Aug 7, 2023 at 10:49 AM Ron  wrote:

> On 8/7/23 00:02, KK CHN wrote:
>
> List,
>
> I need to copy a production PostgreSQL server's data (1 TB) to external
> storage (say, a USB hard drive) and to set up a backup server with this
> data dir.
>
> What is the simplest method to achieve this?
>
> 1. Is a SQL dump an option on a production server? (Will this affect
> server performance and possibly slow down the production server? This
> server has high IOPS.) Will a SQL dump handle this much data, 1.2 TB?
> Any bottlenecks?
>
>
> Whether or not there will be bottlenecks depends on how busy (CPU and disk
> load) the current server is.
>
>
> 2. Is copying the data directory from the production server to external
> storage, then replacing the data directory of a backup server running the
> same Postgres version with this copy, a viable option?
>
>
> # cp  -r   ./data  /media/mydb_backup   (Does this affect production
> database server performance, due to the copy command's overhead?)
>
>
> Or is configuring WAL replication to a standby the right method to
> achieve this?
>
>
> But you say you can't establish a network connection outside the DC.  (I
> can't do it to a remote machine, but I can do WAL replication to another
> host on the same network inside the DC, so that if I do a SQL dump or copy
> the data dir of the standby server it won't affect the production server.
> Does this sound good?)
>
>
> This is to take the database backup outside the datacenter, and our DC
> policy won't allow us to establish a network connection from the DC to a
> remote location for WAL replication.
>
>
> If you're unsure of what Linux distro & version and Postgresql version
> that you'll be restoring the database to, then the solution is:
> DB=the_database_you_want_to_backup
> THREADS=
> cd $PGDATA
> cp -v pg_hba.conf postgresql.conf /media/mydb_backup
> cd /media/mydb_backup
> pg_dumpall --globals-only > globals.sql
>

What is the relevance of --globals-only, and what will this produce,
${DB}.log? Or is it ${DB}.sql?

pg_dump --format=d --file=${DB}.dump --verbose --jobs=$THREADS $DB &> ${DB}.log
> // .log -- I couldn't get an idea of what it means
>
> If you're 100% positive that the system you might someday restore to is
> *exactly* the same distro & version, and Postgresql major version, then
> I'd use PgBackRest.
>
> --
> Born in Arizona, moved to Babylonia.
>


Re: How to solve the warning?

2023-08-07 Thread Wen Yi
Hi,

I checked the image you sent; I think you passed an incompatible pointer:
PostgreSQL wants a 'List*' there, but you gave it a 'Node*'.
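A hypothetical sketch of that kind of mismatch and one way to resolve it
(the function and variable names are illustrative, not from your patch):

#include "postgres.h"
#include "nodes/pg_list.h"

/* An API that expects a List* being handed a generic Node*. */
static void
walk_targetlist(Node *node)
{
    if (IsA(node, List))
    {
        /* castNode() checks the node tag (in assert-enabled builds)
         * and performs the cast in one step. */
        List *items = castNode(List, node);

        elog(DEBUG1, "list has %d element(s)", list_length(items));
    }
}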


And please send plain text next time; the PostgreSQL community archive only 
stores patches as plain text.


Yours,
Wen Yi





------ Original ------
From: "jacktby jacktby"


[Question] Can someone explain the relation between PGPROC and transactions?

2023-08-07 Thread Wen Yi
Hi Community!
I have already read the comments in proc.h, read some related questions about 
it, and read chapter "74. Transaction Processing" of the PostgreSQL documentation.


But I really don't understand how PGPROC works within a transaction.
Can someone provide me some help? Thanks in advance!


Yours,
Wen Yi.