Re: Creating table and indexes for new application

2024-02-22 Thread yudhi s
On Fri, 23 Feb, 2024, 1:20 pm sud,  wrote:

>
>
> On Fri, 23 Feb, 2024, 12:41 pm Laurenz Albe, 
> wrote:
>
>> On Fri, 2024-02-23 at 02:05 +0530, yudhi s
>>
>> > 2)Should we be creating composite indexes on each foreign key for
>> table2 and table3, because
>> >   any update or delete on parent is going to take lock on all child
>> tables?
>>
>> Every foreign key needs its own index.  A composite index is only
>> appropriate if the foreign
>> key spans multiple columns.
>>
>>
>
> From the DDL the OP posted, it's using a composite foreign key, so a
> composite index would be needed.
> However, if nobody deletes from the parent table or updates its PK, is it
> still advisable to have all the FKs indexed? In general I think the
> transaction id should not get updated in a normal scenario, barring some
> special case.
>
>
>
Thank you. I can double-check whether we have a confirmed use case of
deleting from the parent table or updating the PK in the parent table. But
it can certainly happen as part of a data fix in some scenario.

But yes, we are certainly going to drop/purge partitions from all the
parent and child tables after a specific number of days. So doesn't that
require the FKs to be indexed, or else it will scan the whole parent table
partition?

>
>


Re: Creating table and indexes for new application

2024-02-22 Thread sud
On Fri, 23 Feb, 2024, 12:41 pm Laurenz Albe, 
wrote:

> On Fri, 2024-02-23 at 02:05 +0530, yudhi s
>
> > 2)Should we be creating composite indexes on each foreign key for table2
> and table3, because
> >   any update or delete on parent is going to take lock on all child
> tables?
>
> Every foreign key needs its own index.  A composite index is only
> appropriate if the foreign
> key spans multiple columns.
>
>

From the DDL the OP posted, it's using a composite foreign key, so a
composite index would be needed.
However, if nobody deletes from the parent table or updates its PK, is it
still advisable to have all the FKs indexed? In general I think the
transaction id should not get updated in a normal scenario, barring some
special case.


Re: Creating table and indexes for new application

2024-02-22 Thread Laurenz Albe
On Fri, 2024-02-23 at 02:05 +0530, yudhi s wrote:
> postgres version 15+ database. And it would be ~400million transactions/rows 
> per day in the
> main transaction table and almost double in the multiple child tables and 
> some child tables
> will hold lesser records too.
> 
> We are considering all of these tables for partitioning by the same 
> transaction_date column
> and it would be daily partitions. We have some questions,
> 
> 1)While creating these tables and related indexes, do we need to be careful 
> of defining any
>   other storage parameters like tablespaces etc Or its fine to make those 
> table/indexes
>   aligned to the default tablespace only? and are there any constraints on 
> tablespace size ,
>   as we will have 100's GB of data going to be stored in each of the daily 
> partitions?

There are no constraints on the size of a tablespace other than the limits of 
the underlying
file system.  Use the default tablespace.

> 2)Should we be creating composite indexes on each foreign key for table2 and 
> table3, because
>   any update or delete on parent is going to take lock on all child tables?

Every foreign key needs its own index.  A composite index is only appropriate 
if the foreign
key spans multiple columns.

> 3)We were thinking of simple Btree indexes to be created on the columns based 
> on the search
>   criteria of the queries. but the indexes doc i see in postgres having 
> INCLUDE keywords also
>   available in them. So I'm struggling to understand a bit, how it's adding 
> value to the read
>   query performance if those additional columns are not added explicitly to 
> the index but are
>   part of the INCLUDE clause? Will it give some benefit in regards to the 
> index storage space?
>   or should we always keep all the columns in the index definition itself 
> other than some
>   exception scenario? Struggling to understand the real benefit of the 
> INCLUDE clause.

The benefits of putting a column into INCLUDE are twofold:

1) The column only has to be stored in the leaf pages, since it is not used for 
searching.
   That makes the intermediate index entries smaller, which causes a wider 
fan-out of the
   index, which in turn makes the tree shallower and hence faster to search.

2) It is an implicit documentation that the column is not to be used for 
searching.
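
A minimal sketch, assuming a hypothetical "orders" table where "status" is
searched on and "amount" is only returned:

    CREATE INDEX orders_status_idx ON orders (status) INCLUDE (amount);

    -- can be answered by an index-only scan; "amount" is stored only in
    -- the leaf pages and is not part of the search key
    SELECT amount FROM orders WHERE status = 'open';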

Yours,
Laurenz Albe




Re: Question on Table creation

2024-02-22 Thread sud
Thank you so much. This really helped.

Regards
Sud



Re: Creating table and indexes for new application

2024-02-22 Thread Lok P
My 2 cents...
Indexes on foreign keys are needed to avoid locking when deleting from the
parent or updating its PK. But an insert-only table may not have any issue.
We used to do this in other relational databases like Oracle, but I'm not
sure how different it is in PG.
However, considering your high transaction volume, having foreign keys on
the txn tables (and the related indexes) in the first place might have some
negative impact on your data inserts, I think. So better to evaluate.
Others may comment on this.

On Fri, 23 Feb, 2024, 2:05 am yudhi s,  wrote:

> Hello Friends,
> We are newly creating tables for a system which is going to consume
> transaction data from customers and store in postgres version 15+ database.
> And it would be ~400million transactions/rows per day in the main
> transaction table and almost double in the multiple child tables and some
> child tables will hold lesser records too.
>
> Say TABLE1 is the main or parent transaction table which will hold
> ~400million transactions.
> TABLE2 is the child table which is going to hold ~800million rows/day. It
> has one to many relationships with table1.
> TABLe3 is another child table which is going to hold ~200million rows per
> day.
>
> We are considering all of these tables for partitioning by the same
> transaction_date column and it would be daily partitions. We have some
> questions,
>
> 1)While creating these tables and related indexes, do we need to be
> careful of defining any other storage parameters like tablespaces etc Or
> its fine to make those table/indexes aligned to the default tablespace
> only? and are there any constraints on tablespace size , as we will have
> 100's GB of data going to be stored in each of the daily partitions?
>
> 2)Should we be creating composite indexes on each foreign key for table2
> and table3, because any update or delete on parent is going to take lock on
> all child tables?
>
> 3)We were thinking of simple Btree indexes to be created on the columns
> based on the search criteria of the queries. but the indexes doc i see in
> postgres having INCLUDE keywords also available in them. So I'm struggling
> to understand a bit, how it's adding value to the read query performance if
> those additional columns are not added explicitly to the index but are part
> of the INCLUDE clause? Will it give some benefit in regards to the index
> storage space? or should we always keep all the columns in the index
> definition itself other than some exception scenario? Struggling to
> understand the real benefit of the INCLUDE clause.
>
> Below is a sample DDL of what it will look like.
>
>  Create table syntax:-
>
> CREATE TABLE TABLE1
> (
> TXN_ID varchar(36)  NOT NULL ,
> txn_timestamp date  NOT NULL ,
> CONSTRAINT TABLE1_PK PRIMARY KEY (TXN_ID,txn_timestamp)
> ) partition by range (txn_timestamp);
>
>
> CREATE TABLE TABLE2
> (
> table2_id varchar(36) not null,
> TXN_ID varchar(36)  NOT NULL ,
> txn_timestamp date  NOT NULL ,
> CONSTRAINT TABLE2_PK PRIMARY KEY (table2_id,txn_timestamp)
> ) partition by range (txn_timestamp);
>
> alter table table2 add constraint fk1 foreign key (TXN_ID,txn_timestamp)
>  references table1(TXN_ID,txn_timestamp);
> Create Index idx1 on TABLE2(TXN_ID,txn_timestamp); -- Index for foreign Key
>
> CREATE TABLE TABLE3
> (
> table3_id varchar(36) not null,
> TXN_ID varchar(36)  NOT NULL ,
> txn_timestamp date  NOT NULL ,
> CONSTRAINT TABLE3_PK PRIMARY KEY (table3_id,txn_timestamp)
> ) partition by range (txn_timestamp);
>
> alter table table3 add constraint fk2 foreign key (TXN_ID,txn_timestamp)
>  references table1(TXN_ID,txn_timestamp);
> Create Index idx2 on TABLE3(TXN_ID,txn_timestamp); -- Index for foreign
> key.
>
> Thanks And Regards
> Yudhi
>


Re: Postgresql assistance needed

2024-02-22 Thread Sasmit Utkarsh
Thanks, I'll check it out.

Regards,
Sasmit Utkarsh
+91-7674022625

On Thu, 22 Feb, 2024, 21:40 Laurenz Albe,  wrote:

> On Thu, 2024-02-22 at 19:35 +0530, Sasmit Utkarsh wrote:
> >  Is there any configuration/query that can be checked to verify if
> "shc_uadmin" has the correct path set?
>
> The SQL statement "SHOW search_path" would return the current setting.
>
> But look at the error message first.
>
> Yours,
> Laurenz Albe
>
>


walreceiver fails on asynchronous replica [SEC=UNOFFICIAL]

2024-02-22 Thread Mark Schloss
UNOFFICIAL

Hello,

We have the following setup in our DEV environment -

- Primary/Replica using asynchronous streaming replication
- Servers run Ubuntu
  Linux 5.15.0-88-generic #98-Ubuntu SMP Mon Oct 2 15:18:56 UTC 
2023 x86_64 x86_64 GNU/Linux
- Postgres version
  postgres (PostgreSQL) 13.13 (Ubuntu 13.13-1.pgdg22.04+1)

The replica has been running for many weeks and then the walreceiver failed 
with the following -

<2024-02-22 14:12:34.731 AEDT [565228]: [60136-1] user=,db= > DETAIL:  Last 
completed transaction was at log time 2024-02-22 14:12:00.327411+11.
<2024-02-22 14:17:34.822 AEDT [565228]: [60137-1] user=,db= > LOG:  
restartpoint starting: time
<2024-02-22 14:17:34.935 AEDT [565228]: [60138-1] user=,db= > LOG:  
restartpoint complete: wrote 1 buffers (0.0%); 0 WAL file(s) added, 0 removed, 
0 recycled; write=0.106 s, sync=0.002 s, total=0.114 s; sync files=1, 
longest=0.002 s, average=0.002 s; distance=4 kB,
estimate=14 kB
<2024-02-22 14:17:34.935 AEDT [565228]: [60139-1] user=,db= > LOG:  recovery 
restart point at 6/B0ACB188
<2024-02-22 14:17:34.935 AEDT [565228]: [60140-1] user=,db= > DETAIL:  Last 
completed transaction was at log time 2024-02-22 14:15:00.457984+11.
<2024-02-22 14:20:23.383 AEDT [565227]: [7-1] user=,db= > LOG:  invalid record 
length at 6/B0ACC360: wanted 24, got 0
<2024-02-22 14:20:23.383 AEDT [565231]: [6-1] user=,db= > FATAL:  terminating 
walreceiver process due to administrator command
<2024-02-22 14:20:23.385 AEDT [565227]: [8-1] user=,db= > LOG:  invalid record 
length at 6/B0ACC360: wanted 24, got 0
<2024-02-22 14:20:23.385 AEDT [565227]: [9-1] user=,db= > LOG:  invalid record 
length at 6/B0ACC360: wanted 24, got 0
<2024-02-22 14:20:23.385 AEDT [565227]: [10-1] user=,db= > LOG:  invalid record 
length at 6/B0ACC360: wanted 24, got 0
<2024-02-22 14:20:28.390 AEDT [565227]: [11-1] user=,db= > LOG:  invalid record 
length at 6/B0ACC360: wanted 24, got 0
<2024-02-22 14:20:33.394 AEDT [565227]: [12-1] user=,db= > LOG:  invalid record 
length at 6/B0ACC360: wanted 24, got 0
<2024-02-22 14:20:38.394 AEDT [565227]: [13-1] user=,db= > LOG:  invalid record 
length at 6/B0ACC360: wanted 24, got 0
<2024-02-22 14:20:43.398 AEDT [565227]: [14-1] user=,db= > LOG:  invalid record 
length at 6/B0ACC360: wanted 24, got 0
<2024-02-22 14:20:48.402 AEDT [565227]: [15-1] user=,db= > LOG:  invalid record 
length at 6/B0ACC360: wanted 24, got 0
<2024-02-22 14:20:53.406 AEDT [565227]: [16-1] user=,db= > LOG:  invalid record 
length at 6/B0ACC360: wanted 24, got 0


The replica was restarted and the walreceiver immediately failed again with -

2024-02-23 07:50:05.591 AEDT [1957117]: [2-1] user=,db= > LOG:  entering 
standby mode
<2024-02-23 07:50:05.607 AEDT [1957117]: [3-1] user=,db= > LOG:  redo starts at 
6/B0ACB188
<2024-02-23 07:50:05.607 AEDT [1957117]: [4-1] user=,db= > LOG:  consistent 
recovery state reached at 6/B0ACC360
<2024-02-23 07:50:05.607 AEDT [1957117]: [5-1] user=,db= > LOG:  invalid record 
length at 6/B0ACC360: wanted 24, got 0
<2024-02-23 07:50:05.609 AEDT [1957114]: [5-1] user=,db= > LOG:  database 
system is ready to accept read only connections
<2024-02-23 07:50:05.637 AEDT [1957121]: [1-1] user=,db= > LOG:  started 
streaming WAL from primary at 6/B000 on timeline 5
<2024-02-23 07:50:05.696 AEDT [1957117]: [6-1] user=,db= > LOG:  invalid magic 
number  in log segment 0005000600B0, offset 0
<2024-02-23 07:50:05.697 AEDT [1957121]: [2-1] user=,db= > FATAL:  terminating 
walreceiver process due to administrator command
<2024-02-23 07:50:05.798 AEDT [1957117]: [7-1] user=,db= > LOG:  invalid magic 
number  in log segment 0005000600B0, offset 0
<2024-02-23 07:50:05.798 AEDT [1957117]: [8-1] user=,db= > LOG:  invalid magic 
number  in log segment 0005000600B0, offset 0
<2024-02-23 07:50:06.215 AEDT [1957125]: [1-1] user=[unknown],db=[unknown] > 
LOG:  connection received: host=[local]


Running a pg_waldump against the WAL file on the primary server produces -

  deveng >pg_waldump 0005000600B0
  pg_waldump: fatal: WAL segment size must be a power of two 
between 1 MB and 1 GB, but the WAL file "0005000600B0" header 
specifies 0 bytes

The same error is returned when using the WAL in pg_wal or the WAL that has 
been archived locally on the primary server.

We also stream WAL to a barman backup server. Running pg_waldump against the 
WAL file on the barman server errors with -

  pg_waldump: fatal: error in WAL record at 6/B0ACC328: invalid 
record length at 6/B0ACC360: wanted 24, got 0

The primary cluster Postgres log does not show any unusual messages at the time 
of the original walreceiver failure -

<2024-02-22 14:20:00.279 AEDT [2408502]: [2-1] 
user=crmportaluser,db=reportingentity > LOG:  connection authorized: 
user=crmportaluser database=reportingentity SSL enabled (protocol=TLSv1.3, c
ipher=TLS_AES_256_GCM_SHA384, 

Re: Postgres 16 missing from apt repo?

2024-02-22 Thread Adrian Klaver

On 2/22/24 15:18, Tamal Saha wrote:

Hi Adrian,
Thanks for the response. I am trying to build the postgres:16.1 docker 
image and it fails. This is the docker file (I am not associated with 
Docker Inc.).

https://github.com/docker-library/postgres/blob/d416768b1a7f03919b9cf0fef6adc9dcad937888/16/bookworm/Dockerfile#L93
 


I get the following error

6.976 Package postgresql-16 is not available, but is referred to by 
another package.

6.976 This may mean that the package is missing, has been obsoleted, or
6.976 is only available from another source
6.976
6.980 E: Version '16.1-1.pgdg120+1' for 'postgresql-16' was not found

How may I fix this?


https://wiki.postgresql.org/wiki/Apt/FAQ#I_want_only_specific_packages_from_this_repository

Where are older versions of the packages?

The *-pgdg distributions only contain the most recent version of each 
package. Older versions are available in the archive at 
https://apt-archive.postgresql.org/



https://apt-archive.postgresql.org/


Archive of all released packages on apt.postgresql.org
The archive also contains all versions of all packages that have been 
part of some DIST-pgdg distribution on apt.postgresql.org. These 
distributions are called DIST-pgdg-archive here. The archive is updated 
daily; the Packages/Sources files contain all versions at once.


/etc/apt/sources.list entry for the archive repository:

deb https://apt-archive.postgresql.org/pub/repos/apt DIST-pgdg-archive main
deb-src https://apt-archive.postgresql.org/pub/repos/apt 
DIST-pgdg-archive main






Regards,
Tamal






--
Adrian Klaver
adrian.kla...@aklaver.com





Re: Postgres 16 missing from apt repo?

2024-02-22 Thread Tamal Saha
Hi Adrian,
Thanks for the response. I am trying to build the postgres:16.1 docker
image and it fails. This is the docker file (I am not associated with
Docker Inc.).
https://github.com/docker-library/postgres/blob/d416768b1a7f03919b9cf0fef6adc9dcad937888/16/bookworm/Dockerfile#L93


I get the following error

6.976 Package postgresql-16 is not available, but is referred to by another
package.
6.976 This may mean that the package is missing, has been obsoleted, or
6.976 is only available from another source
6.976
6.980 E: Version '16.1-1.pgdg120+1' for 'postgresql-16' was not found

How may I fix this?

Regards,
Tamal



On Thu, Feb 22, 2024 at 3:10 PM Adrian Klaver 
wrote:

> On 2/22/24 13:29, Tamal Saha wrote:
> > Hi,
> > I can't find the Postgres 16 folder in
> > https://apt.postgresql.org/pub/repos/apt/pool/
> for the last 4-5 days. Is this a known issue? Thanks!
>
>  From
>
> https://apt.postgresql.org/pub/repos/apt/pool/README
>
> "Almost all packages in the apt.postgresql.org repository, including the
> PostgreSQL server packages for all versions, are located in the "main"
> component (aka directory). The other N.N components here (like "9.2")
> merely serve as stowage for the non-default-version library packages
> built from server packages. The libpq packages from the older PostgreSQL
> versions live here, as well as those from beta and devel branches. The
> libpq packages from the newest stable PostgreSQL version live in "main"
> as well.
>
> This is why the N.N directories here only contain few package, and most
> of the time there will not even be a directory for the "current" stable
> PostgreSQL version.
> "
>
> Are you having issues installing/upgrading Postgres 16?
>
> >
> > Regards,
> > Tamal
>
> --
> Adrian Klaver
> adrian.kla...@aklaver.com
>
>


Re: Postgres 16 missing from apt repo?

2024-02-22 Thread Adrian Klaver

On 2/22/24 13:29, Tamal Saha wrote:

Hi,
I can't find the Postgres 16 folder in 
https://apt.postgresql.org/pub/repos/apt/pool/ 
 for the last 4-5 days. Is this a known issue? Thanks!


From

https://apt.postgresql.org/pub/repos/apt/pool/README

"Almost all packages in the apt.postgresql.org repository, including the
PostgreSQL server packages for all versions, are located in the "main"
component (aka directory). The other N.N components here (like "9.2") 
merely serve as stowage for the non-default-version library packages 
built from server packages. The libpq packages from the older PostgreSQL 
versions live here, as well as those from beta and devel branches. The 
libpq packages from the newest stable PostgreSQL version live in "main" 
as well.


This is why the N.N directories here only contain few package, and most 
of the time there will not even be a directory for the "current" stable 
PostgreSQL version.

"

Are you having issues installing/upgrading Postgres 16?



Regards,
Tamal


--
Adrian Klaver
adrian.kla...@aklaver.com





Re: How to schema-qualify "IS DISTINCT FROM" on trigger definition? (I created a db that cannot be dump/restored)

2024-02-22 Thread Erik Wienhold
On 2024-02-22 22:14 +0100, Vick Khera wrote:
> On Wed, Feb 21, 2024 at 4:27 PM Tom Lane  wrote:
> 
> > For the moment, I think the only feasible solution is for your trigger
> > function to set the search path it needs by adding a "SET search_path
> > = whatever" clause to the function's CREATE command.
> 
> 
> The error is not in the function, it is the WHEN clause of the trigger.
> There's no way to set a search path on the trigger as far as I see.
> 
> The only option I see is to remove the WHEN clause on the trigger and wrap
> my function with an IF with those same conditions. I hope this will not
> result in any noticeable difference in speed.

You may also try the equivalent CASE expression in the WHEN clause.

https://wiki.postgresql.org/wiki/Is_distinct_from#Writing_with_CASE_statements
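
Roughly, a sketch of that rewrite for "NEW.col IS DISTINCT FROM OLD.col"
(shown with the plain <> operator; a custom operator would be spelled
OPERATOR(schema.<>)):

    CASE
        WHEN NEW.col IS NULL OR OLD.col IS NULL
            THEN (NEW.col IS NULL) <> (OLD.col IS NULL)
        ELSE NEW.col <> OLD.col
    END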

-- 
Erik




Postgres 16 missing from apt repo?

2024-02-22 Thread Tamal Saha
Hi,
I can't find the Postgres 16 folder in
https://apt.postgresql.org/pub/repos/apt/pool/

for the last 4-5 days. Is this a known issue? Thanks!

Regards,
Tamal


Re: Performance issue debugging

2024-02-22 Thread Vick Khera
On Thu, Feb 22, 2024 at 4:03 PM veem v  wrote:

> Hi All,
> As I understand it, we have pg_stat_activity, which shows the real-time
> activity of sessions currently running in the database, and
> pg_stat_statements, which provides aggregated information about the
> historical execution of all queries in the database. But I don't see any
> sampling or timing information in those views. For example, at a certain
> point in time in the past, what queries were being executed in the
> database, the overall wait events, etc.
>
> So is there any other view which provides such information to dig into the
> past and diagnose historical performance issues? Or should we create our
> own table and flush the information from the pg_stat_activity view into it
> with the current timestamp, which would be helpful in analyzing
> performance issues? Or is any extension available for this?
>
>
Look at the auto_explain setting. Taking a random interval snapshot of
running queries likely will not teach you anything useful.


> Also, EXPLAIN ANALYZE can only provide the exact run-time stats of a
> completed query. If we want to see what is going on for a long-running
> query, and at which step in the execution path the query is spending most
> of its resources and time while it keeps running in the database, is there
> any available option in Postgres? For example, if in a SELECT query the
> index access path is taking most of the time, or in an INSERT query an
> index block is causing contention while inserting data into the table, how
> would we be able to find that for a currently running query or for a
> historical query?
>

You can see locking contention in the pg_locks table. In my experience I
rarely ever saw anything in there even when I was cranking tens of millions
of inserts and updates per day. I don't think there's anything for
historical queries or to probe anything more about a currently running
query's progress.

Take some time to think about your queries and how you can reduce any
locking they need. If you do need some locking, consider using the FOR
UPDATE clause in SELECT to limit what you do lock.
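
A minimal sketch with a hypothetical accounts table, locking only the row
that will actually be updated:

    BEGIN;
    SELECT balance FROM accounts WHERE id = 42 FOR UPDATE;  -- row lock on this one row only
    UPDATE accounts SET balance = balance - 100 WHERE id = 42;
    COMMIT;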


Re: How to schema-qualify "IS DISTINCT FROM" on trigger definition? (I created a db that cannot be dump/restored)

2024-02-22 Thread Vick Khera
On Wed, Feb 21, 2024 at 4:27 PM Tom Lane  wrote:

> For the moment, I think the only feasible solution is for your trigger
> function to set the search path it needs by adding a "SET search_path
> = whatever" clause to the function's CREATE command.


The error is not in the function, it is the WHEN clause of the trigger.
There's no way to set a search path on the trigger as far as I see.

The only option I see is to remove the WHEN clause on the trigger and wrap
my function with an IF with those same conditions. I hope this will not
result in any noticeable difference in speed.

It is rather unfortunate that one can end up with a schema that a backup
with pg_dump cannot be restored. Feel free to keep my example for
regression testing when postgres does grow the ability to schema-qualify
such operators.


Re: How do you optimize the disk IO when you cannot assume a file will start at a boundary then?

2024-02-22 Thread Thomas Munro
On Thu, Feb 22, 2024 at 2:23 PM Siddharth Jain  wrote:
> I understand the storage layer in databases goes to great lengths to ensure:
> - a row does not cross a block boundary
> - read/writes/allocation happen in units of blocks
> etc. The motivation is that at the OS level, it reads and writes pages 
> (blocks), not individual bytes. I am only concerned about SSDs but I think 
> the principle applies to HDD as well.
>
> but how can we do all this when we are not even guaranteed that the beginning 
> of a file will be aligned with a block boundary? refer this.

Interesting question.  I was aware of FFS/UFS fragments 'cause I've
poked at that code a bit (one of my learn-to-hack-kernels side
projects was to add O_DSYNC support), but not some of those others
mentioned.  I don't think they are a problem for anything PostgreSQL
does.  Even with O_DIRECT, FFS (at least in its FreeBSD descendant)
just quietly skips its raw fast path if the I/O is not *sector*
aligned, but otherwise quietly falls back to the usual buffered path;
actual interesting sized tables wouldn't use that feature anyway, so
really it's just an internal edge case space optimisation that we
don't have to worry about.  Generally, that family of systems
interprets O_DIRECT to mean "*try* to bypass buffers" anyway, so
there is no way for it to bite us.  On the other hand, it does perform
pretty badly if you use logical blocks that are too large: that's why
in src/tools/ci/gcp_freebsd_repartition.sh I set up a new partition
with 8KB blocks for CI testing (I originally did that in a branch
doing a lot more with O_DIRECT, where the slow down due to block
mismatch is worse).  I think we just have to call that an
administrator's problem to worry about.

I have idly wondered before about some way to set a preferred 'grain'
size on a tablespace so that, for example, a parallel seq scan over a
very large read-only compressed ZFS/BTRFS/whatever file with 128KB
blocks could try to align the allocation of block ranges with those
boundaries in mind, just as a sort of performance hint.  Or perhaps
read it directly from st_blksize[1].  Combined with the I/O merging
thing I'm working on right now which can build up single 128KB
preadv() calls, that'd allow parallel workers to line those up neatly,
and avoid accessing the same underlying block from two processes,
which might have advantages in terms of ZFS's decompression threads
and other kinds of contentions.  I haven't looked into the potential
benefits of that though, that is just speculation...

For direct I/O, which we're still working on bringing to PostgreSQL in
useful form, we had to do this:

commit faeedbcefd40bfdf314e048c425b6d9208896d90
Author: Thomas Munro 
Date:   Sat Apr 8 10:38:09 2023 +1200

Introduce PG_IO_ALIGN_SIZE and align all I/O buffers.

... to avoid EINVAL errors, falling back to buffered mode or
pathologically bad performance (depending on the system).  It probably
also helps buffered I/O performance for non-huge-pages a tiny bit (by
touching the minimum number of memory pages).  There is no industry
standard to tell us what exactly is required for O_DIRECT, but we make
sure addresses, offsets and lengths are all aligned to our macro
PG_IO_ALIGN_SIZE, which is 4KB, because this is enough to satisfy all
the systems we are aware of today (on some systems the true
offset/length requirement is based on real or historical sectors,
either 4KB or 512 bytes, which may be smaller than the filesystem
block size, while on others it may be the filesystem block size which
is usually some power-of-two multiple of those numbers).  Since you
can theoretically set PostgreSQL's data or WAL block size to less than
that at compile time, we just don't let you turn direct I/O on in that
case (perhaps we should, in case it does work, and just let it error
out if it's going to?  It's early days...).  If the file system's
block size is larger than that, you may have some issues, though,
depending on the file system.

Nobody standardised this stuff, and we certainly can't make any
guarantees that anything we come up with will work on future systems,
eg if someone invents 8KB sectors maybe it will need some work.
That's OK, because direct I/O will likely always be a non-default
optional mode, and many systems don't support it at all today.

> Further, I don't see any APIs exposing I/O operations in terms of blocks. All 
> File I/O APIs I see expose a file as a randomly accessible contiguous byte 
> buffer. Would it not have been easier if there were APIs that exposed I/O 
> operations in terms of blocks?

There are/were operating systems with block or record oriented I/O (eg
IBM ones).  Unix was about minimalism and simplicity and explicitly
rejected that idea (first bullet point in [2]).  Buffered,
byte-oriented, synchronous I/O suited most applications.  But yeah,
not so much databases.  Dr Stonebraker, who started this project,
wrote a bunch of papers and articles about the mismatch between Unix
file 

Performance issue debugging

2024-02-22 Thread veem v
Hi All,
As I understand it, we have pg_stat_activity, which shows the real-time
activity of sessions currently running in the database, and
pg_stat_statements, which provides aggregated information about the
historical execution of all queries in the database. But I don't see any
sampling or timing information in those views. For example, at a certain
point in time in the past, what queries were being executed in the
database, the overall wait events, etc.

So is there any other view which provides such information to dig into the
past and diagnose historical performance issues? Or should we create our
own table and flush the information from the pg_stat_activity view into it
with the current timestamp, which would be helpful in analyzing
performance issues? Or is any extension available for this?
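
Something like this rough sketch (illustrative names), run from a scheduled
job every minute or so, is what I had in mind:

    CREATE TABLE activity_snapshot AS
        SELECT now() AS sample_ts, * FROM pg_stat_activity WHERE false;  -- structure only

    INSERT INTO activity_snapshot
        SELECT now(), * FROM pg_stat_activity WHERE state <> 'idle';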

Also, EXPLAIN ANALYZE can only provide the exact run-time stats of a
completed query. If we want to see what is going on for a long-running
query, and at which step in the execution path the query is spending most
of its resources and time while it keeps running in the database, is there
any available option in Postgres? For example, if in a SELECT query the
index access path is taking most of the time, or in an INSERT query an
index block is causing contention while inserting data into the table, how
would we be able to find that for a currently running query or for a
historical query?

Regards
Veem


Creating table and indexes for new application

2024-02-22 Thread yudhi s
Hello Friends,
We are newly creating tables for a system which is going to consume
transaction data from customers and store in postgres version 15+ database.
And it would be ~400million transactions/rows per day in the main
transaction table and almost double in the multiple child tables and some
child tables will hold lesser records too.

Say TABLE1 is the main or parent transaction table which will hold
~400million transactions.
TABLE2 is the child table which is going to hold ~800million rows/day. It
has one to many relationships with table1.
TABLe3 is another child table which is going to hold ~200million rows per
day.

We are considering all of these tables for partitioning by the same
transaction_date column and it would be daily partitions. We have some
questions,

1)While creating these tables and related indexes, do we need to be careful
of defining any other storage parameters like tablespaces etc Or its fine
to make those table/indexes aligned to the default tablespace only? and are
there any constraints on tablespace size , as we will have 100's GB of data
going to be stored in each of the daily partitions?

2)Should we be creating composite indexes on each foreign key for table2
and table3, because any update or delete on parent is going to take lock on
all child tables?

3)We were thinking of simple Btree indexes to be created on the columns
based on the search criteria of the queries. but the indexes doc i see in
postgres having INCLUDE keywords also available in them. So I'm struggling
to understand a bit, how it's adding value to the read query performance if
those additional columns are not added explicitly to the index but are part
of the INCLUDE clause? Will it give some benefit in regards to the index
storage space? or should we always keep all the columns in the index
definition itself other than some exception scenario? Struggling to
understand the real benefit of the INCLUDE clause.

Below is a sample DDL of what it will look like.

 Create table syntax:-

CREATE TABLE TABLE1
(
TXN_ID varchar(36)  NOT NULL ,
txn_timestamp date  NOT NULL ,
CONSTRAINT TABLE1_PK PRIMARY KEY (TXN_ID,txn_timestamp)
) partition by range (txn_timestamp);


CREATE TABLE TABLE2
(
table2_id varchar(36) not null,
TXN_ID varchar(36)  NOT NULL ,
txn_timestamp date  NOT NULL ,
CONSTRAINT TABLE2_PK PRIMARY KEY (table2_id,txn_timestamp)
) partition by range (txn_timestamp);

alter table table2 add constraint fk1 foreign key (TXN_ID,txn_timestamp)
 references table1(TXN_ID,txn_timestamp);
Create Index idx1 on TABLE2(TXN_ID,txn_timestamp); -- Index for foreign Key

CREATE TABLE TABLE3
(
table3_id varchar(36) not null,
TXN_ID varchar(36)  NOT NULL ,
txn_timestamp date  NOT NULL ,
CONSTRAINT TABLE3_PK PRIMARY KEY (table3_id,txn_timestamp)
) partition by range (txn_timestamp);

alter table table3 add constraint fk2 foreign key (TXN_ID,txn_timestamp)
 references table1(TXN_ID,txn_timestamp);
Create Index idx2 on TABLE3(TXN_ID,txn_timestamp); -- Index for foreign key.
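
And the daily partitions would then be created along these lines
(illustrative names and dates):

    CREATE TABLE table1_p20240223 PARTITION OF TABLE1
        FOR VALUES FROM ('2024-02-23') TO ('2024-02-24');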

Thanks And Regards
Yudhi


Re: Partitioning, Identity and Uniqueness (given pg 16 changes)

2024-02-22 Thread Greg Sabino Mullane
>
> I have a table that is capturing what is, basically, time series data


Time series data usually is concerned with "recent" data, and has a
subsequent drop off date. This is ideal for partitioning by timestamp - not
only do your queries only need to hit a few of the total tables, but you
can simply drop old tables that contain older data. Optionally saving them
first to deep storage. The DROP TABLE approach is extraordinarily better
than the alternative of DELETE FROM ... WHERE ...
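
Roughly, with illustrative names, retiring one day of data becomes:

    ALTER TABLE readings DETACH PARTITION readings_2024_01_01;  -- optional, e.g. to archive it first
    DROP TABLE readings_2024_01_01;
    -- instead of: DELETE FROM readings WHERE ts < '2024-01-02';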

Cheers,
Greg


Re: [SPAM] Re: Partial table duplication via triggger

2024-02-22 Thread Moreno Andreo




On 22/02/24 17:49, Erik Wienhold wrote:

On 2024-02-22 15:14 +0100, Moreno Andreo wrote:

suppose I have 2 tables
[snip]
What am I missing?

The parameters you pass in with USING have to be referenced as $1, $2,
and so on.  For example:

DECLARE
fieldlist text := (
SELECT string_agg(quote_ident(column_name), ', ')
FROM information_schema.columns
WHERE table_name = TG_TABLE_NAME AND column_name <> 'id'
);
oldfieldlist text := (
SELECT string_agg('$1.' || quote_ident(column_name), ', ')
FROM information_schema.columns
WHERE table_name = TG_TABLE_NAME AND column_name <> 'id'
);
BEGIN
EXECUTE '
INSERT INTO t2 (id, master_id, op_ts, ' || fieldlist || ')
VALUES (gen_random_uuid(), $1.id, now(), ' || oldfieldlist || ')
' USING OLD;
RETURN NULL;
END;

Also make sure to use quote_ident() when constructing statements that
way to avoid SQL injections via column names in this case.  Or use
format() with placeholder %I, although it's not simpler when you need to
construct that variable list of identifiers.


Erik,
It worked perfectly!
It wasn't clear in my mind how to use $1, $2, etc. with USING; after your
reply I had a closer look at the docs and now it's clearer to me.


Many thanks,
Moreno.






Re: pg_dump performance issues

2024-02-22 Thread Adrian Klaver

On 2/22/24 06:40, Dominique Devienne wrote:
On Thu, Feb 22, 2024 at 12:20 PM Dominique Devienne > wrote:


On Thu, Feb 22, 2024 at 11:41 AM Francisco Olarte
mailto:fola...@peoplecall.com>> wrote:

Dominique:
 >  in the 10-12MB/s throughput range.

This has the faint smell of a saturated 100Mbps link in the
middle (12*8=96Mbps)


So indeed, I'm not on 1Gb/s as I expected... I'll report back when
this is fixed.


So IT fixing the networking on my Windows machine is slower than I 
expected...


Thus I switched to a Linux client, which I verified using iperf3 really 
has 1Gbps.
And there I can read at 100MB/s, which is almost saturating the network 
link.
The Linux dumps are 45MB this time, instead of 100MB on Windows, as if 
the former is compressed, and the latter isn't.


Did you use the same pg_dump command on the Windows client?

Where was the Windows client Postgres software installed from?

Where was the Linux client Postgres software installed from?

Perhaps the (implicit?) compression of the dump is why it's 8x slower 
than mine? (I do fewer catalog queries I suspect)


From

https://www.postgresql.org/docs/current/app-pgdump.html

d
directory

 <...>
 This format is compressed by default using gzip and also supports 
parallel dumps.


And I see no speedup from parallelizing, unfortunately, which I find 
strange, as mentioned earlier. Any idea why?


From same link as above

-j njobs
--jobs=njobs

Run the dump in parallel by dumping njobs tables simultaneously. This 
option may reduce the time needed to perform the dump but it also 
increases the load on the database server. You can only use this option 
with the directory output format because this is the only output format 
where multiple processes can write their data at the same time.


<...>


I'm guessing on this small a database the overhead of -j is greater than
the benefit derived.




Note that this is against the v12 server, and the client is still 16.1.

Applying a two-threaded producer-consumer scheme to my dump, to 
parallelize reads and writes, I might approach 100MB/s overall.
Before getting into parallelizing reads, which complicate matters, and 
my dump container's API is serial anyway (it's SQLite...)


So looks like I'm networking bound on 1Gbps, while Marc perhaps reached 
PostgreSQL's (or the disks'?) limits before saturating his 10Gbps 
networking.


Thanks, --DD

[ddevienne]$ my_dump ...
COPY:         25x (   88,073 rows,     56,768,104 bytes) in   0.537s 
(100.7 MB/s)

DUMP'd 88,023 rows (56,761,583 bytes) from 25 tables in 1.062s (51.0 MB/s)

[ddevienne]$ ll db
-rw-r--r-- 1 ddevienne users 58,351,616 Feb 22 15:10 db

[ddevienne]$ time $PDGM_POSTGRESQL_DIR/bin/pg_dump --role="..." -Fc -n 
'"..."' -f ...-Fc.dump "host=acme dbname=copyperf"

7.561u 0.216s 0:08.46 91.8%     0+0k 0+91056io 0pf+0w
[ddevienne]$ time $PDGM_POSTGRESQL_DIR/bin/pg_dump --role="..." -Fd -n 
'"..."' -f ...-Fd.dump "host=acme dbname=copyperf"

7.351u 0.219s 0:08.33 90.7%     0+0k 0+90808io 0pf+0w
[ddevienne]$ time $PDGM_POSTGRESQL_DIR/bin/pg_dump --role="..." -Fd -j2 
-n '"..."' -f ...-Fdj2.dump "host=acme dbname=copyperf"

7.667u 0.228s 0:08.56 92.0%     0+0k 0+90808io 0pf+0w
[ddevienne]$ time $PDGM_POSTGRESQL_DIR/bin/pg_dump --role="..." -Fd -j3 
-n '"..."' -f ...-Fdj3.dump "host=acme dbname=copyperf"

7.964u 0.247s 0:08.71 94.1%     0+0k 0+90808io 0pf+0w
[ddevienne]$ time $PDGM_POSTGRESQL_DIR/bin/pg_dump --role="..." -Fd -j4 
-n '"..."' -f ...-Fdj4.dump "host=acme dbname=copyperf"

7.491u 0.243s 0:08.21 94.1%     0+0k 0+90808io 0pf+0w
[ddevienne]$ du -sm ...-F*
45      ...-Fc.dump
45      ...-Fd.dump
45      ...-Fdj2.dump
45      ...-Fdj3.dump
45      ...-Fdj4.dump


--
Adrian Klaver
adrian.kla...@aklaver.com





Re: How do you optimize the disk IO when you cannot assume a file will start at a boundary then?

2024-02-22 Thread Siddharth Jain
Thanks Tomas

On Thu, Feb 22, 2024 at 3:05 AM Tomas Vondra 
wrote:

> On 2/22/24 02:22, Siddharth Jain wrote:
> > Hi All,
> >
> > I understand the storage layer in databases goes to great lengths to
> ensure:
> > - a row does not cross a block boundary
> > - read/writes/allocation happen in units of blocks
> > etc. The motivation is that at the OS level, it reads and writes pages
> > (blocks), not individual bytes. I am only concerned about SSDs but I
> think
> > the principle applies to HDD as well.
> >
> > but how can we do all this when we are not even guaranteed that the
> > beginning of a file will be aligned with a block boundary? refer this
> > <
> https://stackoverflow.com/questions/8018449/is-it-guaranteed-that-the-beginning-of-a-file-is-aligned-with-pagesize-of-file-s
> >
> > .
> >
> > Further, I don't see any APIs exposing I/O operations in terms of blocks.
> > All File I/O APIs I see expose a file as a randomly accessible contiguous
> > byte buffer. Would it not have been easier if there were APIs that
> exposed
> > I/O operations in terms of blocks?
> >
> > can someone explain this to me?
> >
>
> The short answer is that this is well outside our control. We do the
> best we can - split our data files to "our" 8kB pages - and hope that
> the OS / filesystem will do the right thing to map this to blocks at the
> storage level.
>
> The filesystems do the same thing, to some extent - they align stuff
> with respect to the beginning of the partition, but if the partition
> itself is not properly aligned, that won't really work.
>
> As for the APIs, we work with what we have in POSIX - I don't think
> there are any APIs working with blocks, and it's not clear to me how
> would it fundamentally differ from the APIs we have now. Moreover, it's
> not really clear which of the "block" would matter. The postgres 8kB
> page? The filesytem page? The storage block/sector size?
>
> FWIW I think for SSDs this matters way more than for HDD, because SSDs
> have to erase the space before a rewrite, which makes it much more
> expensive. But that's not just about the alignment, but about the page
> size (with smaller pages being better).
>
>
> regards
>
> --
> Tomas Vondra
> EnterpriseDB: http://www.enterprisedb.com
> The Enterprise PostgreSQL Company
>


Re: Partial table duplication via triggger

2024-02-22 Thread Erik Wienhold
On 2024-02-22 15:14 +0100, Moreno Andreo wrote:
> suppose I have 2 tables
> 
> CREATE TABLE t1(
>     id uuid,
>     name text,
>     surname text,
>     ...
>     PRIMARY KEY(id)
> )
> 
> CREATE TABLE t2(
>     id uuid,
>     master_id uuid,
>     op_ts timestamp with time zone,
>     name text,
>     surname text,
>     ...
>     PRIMARY KEY(id)
> )
> 
> I need to write an AFTER TRIGGER on UPDATE so all columns of t1 go in the
> same columns in t2 (except for t1.id that goes in t2.master_id, and t2.op_ts
> gets now())
> 
> I cannot write an 1 to 1 column assignment (like NEW.name := OLD.name and so
> on) because the trigger has to be used on many tables, that has different
> (and evolving) schema and I don't want to write dozen of function that have
> to be frequently mantained.
> 
> I'm quite noob at pl-pgsql; at the moment I wrote this, but I can't "tell
> the function that fields are from OLD row" (the error is "missing
> FROM-clause entry for table 'old')
> 
> I tried also with field names alone (without OLD.), with no success.
> Trigger is fired AFTER UPDATE in t1 (CREATE TRIGGER update_id AFTER UPDATE
> ON t1 FOR EACH ROW EXECUTE PROCEDURE update_op());
> 
> CREATE OR REPLACE FUNCTION update_op() RETURNS TRIGGER
> AS $$
> DECLARE
>     fieldlist text := (select string_agg(column_name, ', ')
>                     from information_schema.columns c
>                     where table_name = TG_TABLE_NAME and
>                       (column_name <> 'id'));
> 
>     oldfieldlist text := (select string_agg(column_name, ', OLD.')
>                     from information_schema.columns c
>                     where table_name = TG_TABLE_NAME and
>                       (column_name <> 'id'));
> 
> BEGIN
>     EXECUTE 'INSERT INTO t2 (master_id, op_ts, '|| fieldlist ||') VALUES
> (OLD.id, now(), OLD.'||oldfieldlist||')' USING OLD;
>     RETURN NULL;
> END;
> $$
> LANGUAGE 'plpgsql';
> 
> What am I missing?

The parameters you pass in with USING have to be referenced as $1, $2,
and so on.  For example:

DECLARE
fieldlist text := (
SELECT string_agg(quote_ident(column_name), ', ')
FROM information_schema.columns
WHERE table_name = TG_TABLE_NAME AND column_name <> 'id'
);
oldfieldlist text := (
SELECT string_agg('$1.' || quote_ident(column_name), ', ')
FROM information_schema.columns
WHERE table_name = TG_TABLE_NAME AND column_name <> 'id'
);
BEGIN
EXECUTE '
INSERT INTO t2 (id, master_id, op_ts, ' || fieldlist || ')
VALUES (gen_random_uuid(), $1.id, now(), ' || oldfieldlist || ')
' USING OLD;
RETURN NULL;
END;

Also make sure to use quote_ident() when constructing statements that
way to avoid SQL injections via column names in this case.  Or use
format() with placeholder %I, although it's not simpler when you need to
construct that variable list of identifiers.
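
For example, a rough equivalent of the EXECUTE above, using %I for the
target table and %s for the already-quoted column lists:

    EXECUTE format(
        'INSERT INTO %I (id, master_id, op_ts, %s)
         VALUES (gen_random_uuid(), $1.id, now(), %s)',
        't2', fieldlist, oldfieldlist)
    USING OLD;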

-- 
Erik




Re: Postgresql assistance needed

2024-02-22 Thread Laurenz Albe
On Thu, 2024-02-22 at 19:35 +0530, Sasmit Utkarsh wrote:
>  Is there any configuration/query that can be checked to verify if  
> "shc_uadmin" has the correct path set?

The SQL statement "SHOW search_path" would return the current setting.

But look at the error message first.
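
As a quick check while connected as that user, assuming the sequence lives
in the "shc" schema, something like:

    SHOW search_path;
    SELECT has_sequence_privilege('shc_uadmin', 'shc.unassigned_pool_fa', 'USAGE');
    SELECT nextval('shc.unassigned_pool_fa');  -- schema-qualified, independent of search_path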

Yours,
Laurenz Albe





Re: Postgresql assistance needed

2024-02-22 Thread Greg Sabino Mullane
On the surface, it looks as though it *should* work if the only thing
changing is the username. Those other more serious errors should get fixed,
but putting those aside for now... We don't know what your program does.
Write a smaller one that just does a PQexec and calls nextval, then returns
a proper error message if it fails.

Cheers,
Greg


Re: pg_dump performance issues

2024-02-22 Thread Dominique Devienne
On Thu, Feb 22, 2024 at 12:20 PM Dominique Devienne 
wrote:

> On Thu, Feb 22, 2024 at 11:41 AM Francisco Olarte 
> wrote:
>
>> Dominique:
>> >  in the 10-12MB/s throughput range.
>>
>> This has the faint smell of a saturated 100Mbps link in the middle
>> (12*8=96Mbps)
>>
>
> So indeed, I'm not on 1Gb/s as I expected... I'll report back when this is
> fixed.
>

So IT fixing the networking on my Windows machine is slower than I
expected...

Thus I switched to a Linux client, which I verified using iperf3 really has
1Gbps.
And there I can read at 100MB/s, which is almost saturating the network
link.
The Linux dumps are 45MB this time, instead of 100MB on Windows, as if the
former is compressed, and the latter isn't.
Perhaps the (implicit?) compression of the dump is why it's 8x slower than
mine? (I do fewer catalog queries I suspect)
And I see no speedup from parallelizing, unfortunately, which I find
strange, as mentioned earlier. Any idea why?
Note that this is against the v12 server, and the client is still 16.1.

Applying a two-threaded producer-consumer scheme to my dump, to parallelize
reads and writes, I might approach 100MB/s overall.
Before getting into parallelizing reads, which complicate matters, and my
dump container's API is serial anyway (it's SQLite...)

So looks like I'm networking bound on 1Gbps, while Marc perhaps reached
PostgreSQL's (or the disks'?) limits before saturating his 10Gbps
networking.

Thanks, --DD

[ddevienne]$ my_dump ...
COPY: 25x (   88,073 rows, 56,768,104 bytes) in   0.537s (100.7
MB/s)
DUMP'd 88,023 rows (56,761,583 bytes) from 25 tables in 1.062s (51.0 MB/s)

[ddevienne]$ ll db
-rw-r--r-- 1 ddevienne users 58,351,616 Feb 22 15:10 db

[ddevienne]$ time $PDGM_POSTGRESQL_DIR/bin/pg_dump --role="..." -Fc -n
'"..."' -f ...-Fc.dump "host=acme dbname=copyperf"
7.561u 0.216s 0:08.46 91.8% 0+0k 0+91056io 0pf+0w
[ddevienne]$ time $PDGM_POSTGRESQL_DIR/bin/pg_dump --role="..." -Fd -n
'"..."' -f ...-Fd.dump "host=acme dbname=copyperf"
7.351u 0.219s 0:08.33 90.7% 0+0k 0+90808io 0pf+0w
[ddevienne]$ time $PDGM_POSTGRESQL_DIR/bin/pg_dump --role="..." -Fd -j2 -n
'"..."' -f ...-Fdj2.dump "host=acme dbname=copyperf"
7.667u 0.228s 0:08.56 92.0% 0+0k 0+90808io 0pf+0w
[ddevienne]$ time $PDGM_POSTGRESQL_DIR/bin/pg_dump --role="..." -Fd -j3 -n
'"..."' -f ...-Fdj3.dump "host=acme dbname=copyperf"
7.964u 0.247s 0:08.71 94.1% 0+0k 0+90808io 0pf+0w
[ddevienne]$ time $PDGM_POSTGRESQL_DIR/bin/pg_dump --role="..." -Fd -j4 -n
'"..."' -f ...-Fdj4.dump "host=acme dbname=copyperf"
7.491u 0.243s 0:08.21 94.1% 0+0k 0+90808io 0pf+0w
[ddevienne]$ du -sm ...-F*
45  ...-Fc.dump
45  ...-Fd.dump
45  ...-Fdj2.dump
45  ...-Fdj3.dump
45  ...-Fdj4.dump


Partial table duplication via triggger

2024-02-22 Thread Moreno Andreo

Hi *,
    suppose I have 2 tables

CREATE TABLE t1(
    id uuid,
    name text,
    surname text,
    ...
    PRIMARY KEY(id)
)

CREATE TABLE t2(
    id uuid,
    master_id uuid,
    op_ts timestamp with time zone,
    name text,
    surname text,
    ...
    PRIMARY KEY(id)
)

I need to write an AFTER TRIGGER on UPDATE so all columns of t1 go in 
the same columns in t2 (except for t1.id that goes in t2.master_id, and 
t2.op_ts gets now())


I cannot write an 1 to 1 column assignment (like NEW.name := OLD.name 
and so on) because the trigger has to be used on many tables, that has 
different (and evolving) schema and I don't want to write dozen of 
function that have to be frequently mantained.


I'm quite noob at pl-pgsql; at the moment I wrote this, but I can't 
"tell the function that fields are from OLD row" (the error is "missing 
FROM-clause entry for table 'old')


I tried also with field names alone (without OLD.), with no success.
Trigger is fired AFTER UPDATE in t1 (CREATE TRIGGER update_id AFTER 
UPDATE ON t1 FOR EACH ROW EXECUTE PROCEDURE update_op());


CREATE OR REPLACE FUNCTION update_op() RETURNS TRIGGER
AS $$
DECLARE
    fieldlist text := (select string_agg(column_name, ', ')
                    from information_schema.columns c
                    where table_name = TG_TABLE_NAME and
                      (column_name <> 'id'));

    oldfieldlist text := (select string_agg(column_name, ', OLD.')
                    from information_schema.columns c
                    where table_name = TG_TABLE_NAME and
                      (column_name <> 'id'));

BEGIN
    EXECUTE 'INSERT INTO t2 (master_id, op_ts, '|| fieldlist ||') 
VALUES (OLD.id, now(), OLD.'||oldfieldlist||')' USING OLD;

    RETURN NULL;
END;
$$
LANGUAGE 'plpgsql';

What am I missing?
Thanks,
Moreno

Re: Postgresql assistance needed

2024-02-22 Thread Sasmit Utkarsh
Thanks Laurenz

->To debug that, get the actual error message using PQerrorMessage().
That should tell you what is going on.  -- will try to add the
PQerrorMessage() in the logs

->Perhaps the sequence is not on your "search_path", and you should
qualify the name with the schema.  -- I have given the below set of env
variables including setting the schema path, which works well for
*"pgddb_admin"
*and not for user  "*shc_uadmin". *Is there any configuration/query that
can be checked to verify if  "*shc_uadmin" *has the correct path set?

export PGHOST=cucmpsgsu0.postgres.database.azure.com
export PGDATABASE=mshcd
export PGUSER=shc_uadmin
export PGPASSWORD=abc123
export PGOPTIONS='--search_path=shc,shc_tty,public'



Regards,
Sasmit Utkarsh
+91-7674022625


On Thu, Feb 22, 2024 at 7:11 PM Laurenz Albe 
wrote:

> On Thu, 2024-02-22 at 17:18 +0530, Sasmit Utkarsh wrote:
> > Perhaps, the issue I wanted to highlight here is that I get the same
> entry working
> > when I switch user to "pgddb_admin" and not when change user with same
> privileges
> > as PGUSER "shc_uadmin" I get the message in the error log like
> > "<3>3343433-[ERROR] SELECT nextval Failed [../tpfasm.c:3561:get_pool_fa]"
> > even though i see the next val from db. Please see below when logged in
> to the db
> > and snippet of the code lines where it was throwing the error.
> >
> > [shc_user@cucmtpccu1 ~]$ psql "host=
> cucmpsgsu0.postgres.database.azure.com port=5432 dbname=mshcd
> user=shc_uadmin password=abc123 sslmode=require
> options=--search_path=shc,public"
> > psql (16.1, server 15.4)
> > SSL connection (protocol: TLSv1.3, cipher: TLS_AES_256_GCM_SHA384,
> compression: off)
> > Type "help" for help.
> > mshcd=> SELECT nextval('unassigned_pool_fa');
> >nextval
> > -
> >  -1811939322
> > (1 row)
> >
> >
> > code snippet:
> > 3555 } else {
> > 3556 LOG_WARN("No File_address is found with
> status=free");
> > 3557 //If no free address is found, get the next
> value available address from the sequence
> > 3558 PQclear(res);
> > 3559 res = PQexec(conn, "SELECT
> nextval('unassigned_pool_fa');");
> > 3560 if (PQresultStatus(res) != PGRES_TUPLES_OK)
> {
> > 3561 LOG_ERROR("SELECT nextval Failed");
> > 3562 LOG_DEBUG("ROLLBACK TRANSACTION");
> > 3563 res = PQexec(conn,"ROLLBACK
> TRANSACTION");
> > 3564 PQclear(res);
> > 3565 return 0;
> > 3566 }
>
> To debug that, get the actual error message using PQerrorMessage().
> That should tell you what is going on.
>
> Perhaps the sequence is not on your "search_path", and you should
> qualify the name with the schema.
>
> Yours,
> Laurenz Albe
>


Re: Postgresql assistance needed

2024-02-22 Thread Laurenz Albe
On Thu, 2024-02-22 at 17:18 +0530, Sasmit Utkarsh wrote:
> Perhaps, the issue I wanted to highlight here is that I get the same entry 
> working
> when I switch user to "pgddb_admin" and not when change user with same 
> privileges
> as PGUSER "shc_uadmin" I get the message in the error log like
> "<3>3343433-[ERROR] SELECT nextval Failed [../tpfasm.c:3561:get_pool_fa]"
> even though i see the next val from db. Please see below when logged in to 
> the db
> and snippet of the code lines where it was throwing the error. 
> 
> [shc_user@cucmtpccu1 ~]$ psql "host= cucmpsgsu0.postgres.database.azure.com 
> port=5432 dbname=mshcd user=shc_uadmin password=abc123 sslmode=require 
> options=--search_path=shc,public"
> psql (16.1, server 15.4)
> SSL connection (protocol: TLSv1.3, cipher: TLS_AES_256_GCM_SHA384, 
> compression: off)
> Type "help" for help.
> mshcd=> SELECT nextval('unassigned_pool_fa');
>    nextval
> -
>  -1811939322
> (1 row)
> 
> 
> code snippet:
> 3555                 } else {
> 3556                         LOG_WARN("No File_address is found with 
> status=free");
> 3557                         //If no free address is found, get the next 
> value available address from the sequence
> 3558                         PQclear(res);
> 3559                         res = PQexec(conn, "SELECT 
> nextval('unassigned_pool_fa');");
> 3560                         if (PQresultStatus(res) != PGRES_TUPLES_OK) {
> 3561                                 LOG_ERROR("SELECT nextval Failed");
> 3562                                 LOG_DEBUG("ROLLBACK TRANSACTION");
> 3563                                 res = PQexec(conn,"ROLLBACK 
> TRANSACTION");
> 3564                                 PQclear(res);
> 3565                                 return 0;
> 3566                         }

To debug that, get the actual error message using PQerrorMessage().
That should tell you what is going on.

Perhaps the sequence is not on your "search_path", and you should
qualify the name with the schema.

Yours,
Laurenz Albe




Re: Postgresql assistance needed

2024-02-22 Thread Sasmit Utkarsh
Hi Laurenz/Postgresql Team,

Perhaps, the issue I wanted to highlight here is that I get the same entry
working when I switch user to "pgddb_admin" and not when change user
with same privileges as PGUSER "shc_uadmin" I get the message in the
error log like "<3>3343433-[ERROR] SELECT nextval Failed
[../tpfasm.c:3561:get_pool_fa]" even though i see the next val from db.
Please see below when logged in to the db and snippet of the code lines
where it was throwing the error.

[shc_user@cucmtpccu1 ~]$ psql "host=cucmpsgsu0.postgres.database.azure.com
port=5432 dbname=mshcd user=shc_uadmin password=abc123 sslmode=require
options=--search_path=shc,public"
psql (16.1, server 15.4)
SSL connection (protocol: TLSv1.3, cipher: TLS_AES_256_GCM_SHA384, compression: off)
Type "help" for help.

mshcd=> SELECT nextval('unassigned_pool_fa');
   nextval
-------------
 -1811939322
(1 row)


code snippet:
3555                 } else {
3556                         LOG_WARN("No File_address is found with status=free");
3557                         //If no free address is found, get the next value available address from the sequence
3558                         PQclear(res);
3559                         res = PQexec(conn, "SELECT nextval('unassigned_pool_fa');");
3560                         if (PQresultStatus(res) != PGRES_TUPLES_OK) {
3561                                 LOG_ERROR("SELECT nextval Failed");
3562                                 LOG_DEBUG("ROLLBACK TRANSACTION");
3563                                 res = PQexec(conn,"ROLLBACK TRANSACTION");
3564                                 PQclear(res);
3565                                 return 0;
3566                         }

Is the issue related to the configuration of PGUSER, or something else?

Regards,
Sasmit Utkarsh
+91-7674022625


On Thu, Feb 22, 2024 at 4:20 PM Laurenz Albe 
wrote:

> On Thu, 2024-02-22 at 16:13 +0530, Sasmit Utkarsh wrote:
> > On Thu, Feb 22, 2024 at 2:20 PM Laurenz Albe 
> wrote:
> > > On Thu, 2024-02-22 at 14:01 +0530, Sasmit Utkarsh wrote:
> > > > ==3343433==ERROR: AddressSanitizer: heap-buffer-overflow on address
> 0xf337ba80 at pc 0xf795fcdd bp 0xff8a74d8 sp 0xff8a70b0
> > > > READ of size 4096 at 0xf337ba80 thread T0
> > > > #0 0xf795fcdc in __interceptor_memcpy
> (/nix/store/3blqv6wzmwqgkkw45i6c8w0bkwldk418-gcc-12.2.0-lib/lib/libasan.so.8+0x42cdc)
> > > > #1 0xf78c34bb in pqPutnchar
> (/nix/store/glkv3qv4y64xigwpjcwflp3ni2anjma0-postgresql-14.9-lib/lib/libpq.so.5+0x174bb)
> > > > #2 0xf78be05e in PQsendQueryGuts
> (/nix/store/glkv3qv4y64xigwpjcwflp3ni2anjma0-postgresql-14.9-lib/lib/libpq.so.5+0x1205e)
> > > > #3 0xf78c05a2 in PQexecPrepared
> (/nix/store/glkv3qv4y64xigwpjcwflp3ni2anjma0-postgresql-14.9-lib/lib/libpq.so.5+0x145a2)
> > >
> > > Perhaps you forgot to terminate a string with '\0'.
> >
> > Sorry but are you talking about the export variables in the result?
>
> Whatever you fed to PQexecPrepared.
>
> Yours,
> Laurenz Albe
>
[shc_user@cucmtpccu1 ~]$ export PGHOST=cucmpsgsu0.postgres.database.azure.com
[shc_user@cucmtpccu1 ~]$ export PGDATABASE=mshcd
[shc_user@cucmtpccu1 ~]$ export PGUSER=shc_uadmin
[shc_user@cucmtpccu1 ~]$ export PGPASSWORD=3mnAK~lc7
[shc_user@cucmtpccu1 ~]$ export PGOPTIONS='--search_path=shc,shc_tty,public'

[shc_user@cucmtpccu1 ~]$ export PGUSER=pgddb_admin
[shc_user@cucmtpccu1 ~]$ export PGPASSWORD=xyz123
[shc_user@cucmtpccu1 ~]$ .nix-profile/bin/test 'K-T-*' -q
<6>3429958-[INFO] BUILD_TAG = jenkins-Tpfasmbuild1-515
<6>3429958-[INFO] sql_tty_connect_db: tty_conninfo:'dbname=shc_tty_data'
<6>3429958-[INFO] process_name=10.166.29.36#3429958
<7>3429958-
<7>3429958-Using PostgreSQL database
<7>3429958-

<7>3429958-SQL_init_db_connection() conninfo=dbname=shc_data
<7>3429958-SQL_init_db_connection() Connection to shc_data database SUCCESSFUL
<6>3429958-[INFO] PostgreSQL Server Version = 1500.4 protocol 3
<7>3429958-SQL_init_db_connection() PREPARE AdvisoryLockData PQresultStatus = 
PGRES_COMMAND_OK
<7>3429958-SQL_init_db_connection() PREPARE AdvisoryUnLockData PQresultStatus = 
PGRES_COMMAND_OK
<7>3429958-SQL_init_db_connection() PREPARE SelectSize PQresultStatus = 
PGRES_COMMAND_OK
<7>3429958-SQL_init_db_connection() PREPARE SelectData PQresultStatus = 
PGRES_COMMAND_OK
<7>3429958-SQL_init_db_connection() PREPARE InsertData PQresultStatus = 
PGRES_COMMAND_OK
<7>3429958-SQL_init_db_connection() PREPARE UpdateData PQresultStatus = 
PGRES_COMMAND_OK
<7>3429958-loading builtin FACE
<7>3429958-loading builtin FMSG
<7>3429958-loading builtin FMLG
<7>3429958-loading builtin UII1
<7>3429958-loading builtin UIO1
<7>3429958-loading builtin UIM1
<7>3429958-loading builtin UIY3
<7>3429958-loading builtin UIY2
<7>3429958-loading builtin UIY1
<7>3429958-loading builtin UIZ1
<7>3429958-loading builtin WGA1

Re: pg_dump performance issues

2024-02-22 Thread Dominique Devienne
On Thu, Feb 22, 2024 at 11:41 AM Francisco Olarte 
wrote:

> Dominique:
> >  in the 10-12MB/s throughput range.
>
> This has the faint smell of a saturated 100Mbps link in the middle
> (12*8=96Mbps)
>

Very good call Francisco! Thanks a lot.

Below is a test run against the v14 PostgreSQL host, from my Windows desktop.
So indeed, I'm not on 1Gb/s as I expected... I'll report back when this is
fixed.

Thanks again, --DD

C:\Apps\iperf3>.\iperf3.exe -c XX.YY.ZZ.13
Connecting to host XX.YY.ZZ.13, port 5201
[  4] local XX.YY.ZZ.49 port 62617 connected to XX.YY.ZZ.13 port 5201
[ ID] Interval           Transfer     Bandwidth
[  4]   0.00-1.01   sec  11.5 MBytes  95.9 Mbits/sec
[  4]   1.01-2.00   sec  11.4 MBytes  95.7 Mbits/sec
[  4]   2.00-3.01   sec  11.4 MBytes  94.9 Mbits/sec
[  4]   3.01-4.00   sec  11.2 MBytes  94.9 Mbits/sec
[  4]   4.00-5.01   sec  11.4 MBytes  94.9 Mbits/sec
[  4]   5.01-6.00   sec  11.2 MBytes  94.9 Mbits/sec
[  4]   6.00-7.01   sec  11.2 MBytes  94.0 Mbits/sec
[  4]   7.01-8.00   sec  11.4 MBytes  95.8 Mbits/sec
[  4]   8.00-9.00   sec  11.2 MBytes  94.7 Mbits/sec
[  4]   9.00-10.01  sec  11.4 MBytes  94.9 Mbits/sec
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval           Transfer     Bandwidth
[  4]   0.00-10.01  sec   113 MBytes  95.0 Mbits/sec                  sender
[  4]   0.00-10.01  sec   113 MBytes  94.8 Mbits/sec                  receiver

iperf Done.


Re: How do you optimize the disk IO when you cannot assume a file will start at a boundary then?

2024-02-22 Thread Tomas Vondra
On 2/22/24 02:22, Siddharth Jain wrote:
> Hi All,
> 
> I understand the storage layer in databases goes to great lengths to ensure:
> - a row does not cross a block boundary
> - read/writes/allocation happen in units of blocks
> etc. The motivation is that at the OS level, it reads and writes pages
> (blocks), not individual bytes. I am only concerned about SSDs but I think
> the principle applies to HDD as well.
> 
> but how can we do all this when we are not even guaranteed that the
> beginning of a file will be aligned with a block boundary? refer this
> 
> .
> 
> Further, I don't see any APIs exposing I/O operations in terms of blocks.
> All File I/O APIs I see expose a file as a randomly accessible contiguous
> byte buffer. Would it not have been easier if there were APIs that exposed
> I/O operations in terms of blocks?
> 
> can someone explain this to me?
> 

The short answer is that this is well outside our control. We do the
best we can - split our data files to "our" 8kB pages - and hope that
the OS / filesystem will do the right thing to map this to blocks at the
storage level.

The filesystems do the same thing, to some extent - they align stuff
with respect to the beginning of the partition, but if the partition
itself is not properly aligned, that won't really work.
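
Just as an illustration of what you can even see from userspace (a rough
sketch, not anything Postgres does itself - it only prints what stat() and
statvfs() report, which says nothing about the physical alignment underneath):

#include <stdio.h>
#include <sys/stat.h>
#include <sys/statvfs.h>

int main(int argc, char **argv)
{
    const char *path = (argc > 1) ? argv[1] : ".";
    struct stat st;
    struct statvfs vfs;

    if (stat(path, &st) != 0 || statvfs(path, &vfs) != 0) {
        perror(path);
        return 1;
    }

    /* The kernel's "preferred" I/O size and the filesystem fragment size.
     * Postgres just writes its 8kB pages and relies on these layers to
     * map them sensibly onto storage blocks. */
    printf("st_blksize (preferred I/O size): %ld\n", (long) st.st_blksize);
    printf("f_frsize   (fs fragment size)  : %lu\n", (unsigned long) vfs.f_frsize);
    printf("Postgres page size (BLCKSZ)    : %d\n", 8192);
    return 0;
}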

As for the APIs, we work with what we have in POSIX - I don't think
there are any APIs working with blocks, and it's not clear to me how they
would fundamentally differ from the APIs we have now. Moreover, it's
not really clear which of the "blocks" would matter. The Postgres 8kB
page? The filesystem page? The storage block/sector size?

FWIW I think for SSDs this matters way more than for HDD, because SSDs
have to erase the space before a rewrite, which makes it much more
expensive. But that's not just about the alignment, but about the page
size (with smaller pages being better).


regards

-- 
Tomas Vondra
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company




Re: Postgresql assistance needed

2024-02-22 Thread Laurenz Albe
On Thu, 2024-02-22 at 16:13 +0530, Sasmit Utkarsh wrote:
> On Thu, Feb 22, 2024 at 2:20 PM Laurenz Albe  wrote:
> > On Thu, 2024-02-22 at 14:01 +0530, Sasmit Utkarsh wrote:
> > > ==3343433==ERROR: AddressSanitizer: heap-buffer-overflow on address 
> > > 0xf337ba80 at pc 0xf795fcdd bp 0xff8a74d8 sp 0xff8a70b0
> > > READ of size 4096 at 0xf337ba80 thread T0
> > >     #0 0xf795fcdc in __interceptor_memcpy 
> > > (/nix/store/3blqv6wzmwqgkkw45i6c8w0bkwldk418-gcc-12.2.0-lib/lib/libasan.so.8+0x42cdc)
> > >     #1 0xf78c34bb in pqPutnchar 
> > > (/nix/store/glkv3qv4y64xigwpjcwflp3ni2anjma0-postgresql-14.9-lib/lib/libpq.so.5+0x174bb)
> > >     #2 0xf78be05e in PQsendQueryGuts 
> > > (/nix/store/glkv3qv4y64xigwpjcwflp3ni2anjma0-postgresql-14.9-lib/lib/libpq.so.5+0x1205e)
> > >     #3 0xf78c05a2 in PQexecPrepared 
> > > (/nix/store/glkv3qv4y64xigwpjcwflp3ni2anjma0-postgresql-14.9-lib/lib/libpq.so.5+0x145a2)
> > 
> > Perhaps you forgot to terminate a string with '\0'.
>
> Sorry but are you talking about the export variables in the result?

Whatever you fed to PQexecPrepared.

Yours,
Laurenz Albe




Re: Postgresql assistance needed

2024-02-22 Thread Sasmit Utkarsh
Hi Laurenz,

Sorry but are you talking about the export variables in the result?

Regards,
Sasmit Utkarsh
+91-7674022625


On Thu, Feb 22, 2024 at 2:20 PM Laurenz Albe 
wrote:

> On Thu, 2024-02-22 at 14:01 +0530, Sasmit Utkarsh wrote:
> > ==3343433==ERROR: AddressSanitizer: heap-buffer-overflow on address
> 0xf337ba80 at pc 0xf795fcdd bp 0xff8a74d8 sp 0xff8a70b0
> > READ of size 4096 at 0xf337ba80 thread T0
> > #0 0xf795fcdc in __interceptor_memcpy
> (/nix/store/3blqv6wzmwqgkkw45i6c8w0bkwldk418-gcc-12.2.0-lib/lib/libasan.so.8+0x42cdc)
> > #1 0xf78c34bb in pqPutnchar
> (/nix/store/glkv3qv4y64xigwpjcwflp3ni2anjma0-postgresql-14.9-lib/lib/libpq.so.5+0x174bb)
> > #2 0xf78be05e in PQsendQueryGuts
> (/nix/store/glkv3qv4y64xigwpjcwflp3ni2anjma0-postgresql-14.9-lib/lib/libpq.so.5+0x1205e)
> > #3 0xf78c05a2 in PQexecPrepared
> (/nix/store/glkv3qv4y64xigwpjcwflp3ni2anjma0-postgresql-14.9-lib/lib/libpq.so.5+0x145a2)
>
> Perhaps you forgot to terminate a string with '\0'.
>
> Yours,
> Laurenz Albe
>


Re: pg_dump performance issues

2024-02-22 Thread Francisco Olarte
Dominique:

On Thu, 22 Feb 2024 at 10:50, Dominique Devienne  wrote:
> In the past, I've read [this post][1] from Marc Millas that reports `pg_dump 
> ... | psql` at throughput around 500MB/s (5Gb/s) on a 10Gb/s network.

That is half the theoretical bandwidth, more or less.

> I tried against a v12 and v14 Linux servers (real hardware, server-class 
> machines, in the $10K range each, 2 years old and 7 years old, running RedHat 
> 7)
> I'm on a 1Gb/s LAN.
Then you would max out at about 50 MB/s if you get the same behaviour as
your reference.

> The client side is a Windows 11 PRO desktop running 16.1 release builds of 
> pg_dump (and my own client).

Before doing anything else, you should test the network throughput
between your client and the servers on a bare connection. There are
plenty of programs for this; even an ftp/scp/network file copy would
be a useful reference. You have not provided any such figures.

>  in the 10-12MB/s throughput range.

This has the faint smell of a saturated 100Mbps link in the middle (12*8=96Mbps)

Run some tests of the network, measure, and post the numbers first,
so people can guesstimate something.

Francisco Olarte.




pg_dump performance issues

2024-02-22 Thread Dominique Devienne
In the past, I've read [this post][1] from Marc Millas that reports
`pg_dump ... | psql` at throughput around 500MB/s (5Gb/s) on a 10Gb/s
network.

Today, I've tried pg_dump on a single schema of 25 tables, 88K rows, 70MB
total (as reported by pg total relation sizes). 1 table of 225 rows
contains blobs totaling 48MB of the 70MB by itself. The largest table has
40K rows.

And it takes around 10s (+/- 1s), generating a ~ 100MB dump file (or
directory).
I tried all 4 formats, and even with -jN with N in 2,3,4 for -Fd. Little to
no difference, both in duration and output sizes.
I tried against a v12 and v14 Linux servers (real hardware, server-class
machines, in the $10K range each, 2 years old and 7 years old, running
RedHat 7)
I'm on a 1Gb/s LAN.
The client side is a Windows 11 PRO desktop running 16.1 release builds of
pg_dump (and my own client).

These results mean a throughput of only around 10MB/s when considering the
output dump, and less when considering the relation sizes (70MB).

Then I tried my own "dump", using COPY BINARY. And got a 58MB dump file in
around 5-6s, with my libpq wrapper reporting metrics of the COPY itself
(i.e. just the libpq COPY APIs used, independent of how I write the results
to disk) in the 10-12MB/s throughput range. So basically not quite twice
as fast as pg_dump, with a resulting dump not quite half the size.
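
For context, the COPY path in my wrapper is roughly of this shape (a
simplified sketch with made-up names, not the actual code):

#include <stdio.h>
#include <libpq-fe.h>

/* Dump one table in binary COPY format to an already-open file.
 * Returns the number of bytes written, or -1 on error.
 * The table name is assumed to be trusted (no quoting/escaping here). */
static long copy_binary_to_file(PGconn *conn, const char *table, FILE *out)
{
    char sql[512];
    snprintf(sql, sizeof sql, "COPY %s TO STDOUT (FORMAT binary)", table);

    PGresult *res = PQexec(conn, sql);
    if (PQresultStatus(res) != PGRES_COPY_OUT) {
        fprintf(stderr, "COPY failed: %s", PQerrorMessage(conn));
        PQclear(res);
        return -1;
    }
    PQclear(res);

    long total = 0;
    char *buf;
    int len;
    while ((len = PQgetCopyData(conn, &buf, 0)) > 0) {  /* 0 = blocking mode */
        fwrite(buf, 1, (size_t) len, out);
        total += len;
        PQfreemem(buf);
    }
    if (len == -2)                                      /* -1 is normal end of COPY */
        fprintf(stderr, "COPY error: %s", PQerrorMessage(conn));

    /* Drain the command-completion result(s). */
    while ((res = PQgetResult(conn)) != NULL)
        PQclear(res);

    return (len == -2) ? -1 : total;
}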

The reason I'm writing this is because I'm surprised how slow that is.
There's a chasm between 500MB/s and 10MB/s.
What could explain such slow performance?
Of pg_dump? And my own slightly faster dump?

Also, how come `-Fd -j4` is not any faster than the serial version,
especially when there's a table with few rows and large blobs that is 68%
of the total?

What could be wrong to explain such poor throughput?
Especially given that even pg_dump itself is slow, so it doesn't appear
to be my code.

I'm concerned with these results, and would appreciate some expert advice.

Thanks, --DD

PS: The PostgreSQL installations are mostly "stock", with little to no
tuning...

[1]:
https://postgrespro.com/list/id/cadx_1aanc4gibvwjjeomuxgxtt9xw8gavrctc4jwgnbspjo...@mail.gmail.com


Re: Postgresql assistance needed

2024-02-22 Thread Laurenz Albe
On Thu, 2024-02-22 at 14:01 +0530, Sasmit Utkarsh wrote:
> ==3343433==ERROR: AddressSanitizer: heap-buffer-overflow on address 
> 0xf337ba80 at pc 0xf795fcdd bp 0xff8a74d8 sp 0xff8a70b0
> READ of size 4096 at 0xf337ba80 thread T0
>     #0 0xf795fcdc in __interceptor_memcpy 
> (/nix/store/3blqv6wzmwqgkkw45i6c8w0bkwldk418-gcc-12.2.0-lib/lib/libasan.so.8+0x42cdc)
>     #1 0xf78c34bb in pqPutnchar 
> (/nix/store/glkv3qv4y64xigwpjcwflp3ni2anjma0-postgresql-14.9-lib/lib/libpq.so.5+0x174bb)
>     #2 0xf78be05e in PQsendQueryGuts 
> (/nix/store/glkv3qv4y64xigwpjcwflp3ni2anjma0-postgresql-14.9-lib/lib/libpq.so.5+0x1205e)
>     #3 0xf78c05a2 in PQexecPrepared 
> (/nix/store/glkv3qv4y64xigwpjcwflp3ni2anjma0-postgresql-14.9-lib/lib/libpq.so.5+0x145a2)

Perhaps you forgot to terminate a string with '\0'.
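
For illustration (a sketch only - "SelectData" is taken from your log output,
the rest of the names are invented): with text-format parameters, libpq
determines each value's length with strlen(), so a buffer that lacks the
terminating '\0' gets read past its end, which is the kind of over-read in
pqPutnchar that AddressSanitizer shows.

#include <string.h>
#include <libpq-fe.h>

/* Hypothetical helper: run the prepared "SelectData" statement with an
 * 8-byte key that arrives without a terminator. */
static PGresult *run_select_data(PGconn *conn, const char raw_key[8])
{
    char key[9];
    memcpy(key, raw_key, 8);
    key[8] = '\0';                 /* the terminator that must not be missing */

    const char *paramValues[1] = { key };

    /* NULL lengths/formats = all parameters are null-terminated text.
     * For raw binary data, pass paramLengths and set paramFormats[i] = 1
     * instead of relying on a terminator. */
    return PQexecPrepared(conn, "SelectData",
                          1, paramValues,
                          NULL, NULL,
                          0);      /* text result format */
}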

Yours,
Laurenz Albe




Postgresql assistance needed

2024-02-22 Thread Sasmit Utkarsh
Hi Postgresql Team,

I'm facing a weird issue while testing application code that uses libpq in
C. When I run a test case with PGUSER set to "pgddb_admin" I get the
expected result (more details attached in the success_log doc), whereas when
I run the same test case as another user, "shc_uadmin", which has the same
privileges, I get the error message highlighted below (more details in the
attached Error_log doc):

<3>3343433-[ERROR] SELECT nextval Failed [../tpfasm.c:3561:get_pool_fa]

More details related to "shc_uadmin" and the sequence "unassigned_pool_fa",
which lives in the non-public schema (shc), are shown below from the terminal.

[shc_user@cucmtpccu1 ~]$ psql "host=cucmpsgsu0.postgres.database.azure.com port=5432 dbname=mshcd user=shc_uadmin password=abc123 sslmode=require options=--search_path=shc,public"
psql (16.1, server 15.4)
SSL connection (protocol: TLSv1.3, cipher: TLS_AES_256_GCM_SHA384, compression: off)
Type "help" for help.

mshcd=> \dp unassigned_pool_fa
                                  Access privileges
 Schema |        Name        |   Type   |      Access privileges      | Column privileges | Policies
--------+--------------------+----------+-----------------------------+-------------------+----------
 shc    | unassigned_pool_fa | sequence | pgddb_admin=rwU/pgddb_admin+|                   |
        |                    |          | shc_uadmin=rwU/pgddb_admin  |                   |
(1 row)

mshcd=> SELECT nextval('unassigned_pool_fa');
   nextval
-------------
 -1811939322
(1 row)

Is there any other issue related to the permissions or configuration of the
shc_uadmin user in PostgreSQL that needs to be checked or verified? Please
assist with your inputs.


Regards,
Sasmit Utkarsh
+91-7674022625
[shc_user@cucmtpccu1 ~]$ export PGHOST=cucmpsgsu0.postgres.database.azure.com
[shc_user@cucmtpccu1 ~]$ export PGDATABASE=mshcd
[shc_user@cucmtpccu1 ~]$ export PGUSER=shc_uadmin
[shc_user@cucmtpccu1 ~]$ export PGPASSWORD=3mnAK~lc7
[shc_user@cucmtpccu1 ~]$ export PGOPTIONS='--search_path=shc,shc_tty,public'

[shc_user@cucmtpccu1 ~]$ export PGUSER=pgddb_admin
[shc_user@cucmtpccu1 ~]$ export PGPASSWORD=xyz123
[shc_user@cucmtpccu1 ~]$ .nix-profile/bin/test 'K-T-*' -q
<6>3429958-[INFO] BUILD_TAG = jenkins-Tpfasmbuild1-515
<6>3429958-[INFO] sql_tty_connect_db: tty_conninfo:'dbname=shc_tty_data'
<6>3429958-[INFO] process_name=10.166.29.36#3429958
<7>3429958-
<7>3429958-Using PostgreSQL database
<7>3429958-

<7>3429958-SQL_init_db_connection() conninfo=dbname=shc_data
<7>3429958-SQL_init_db_connection() Connection to shc_data database SUCCESSFUL
<6>3429958-[INFO] PostgreSQL Server Version = 1500.4 protocol 3
<7>3429958-SQL_init_db_connection() PREPARE AdvisoryLockData PQresultStatus = 
PGRES_COMMAND_OK
<7>3429958-SQL_init_db_connection() PREPARE AdvisoryUnLockData PQresultStatus = 
PGRES_COMMAND_OK
<7>3429958-SQL_init_db_connection() PREPARE SelectSize PQresultStatus = 
PGRES_COMMAND_OK
<7>3429958-SQL_init_db_connection() PREPARE SelectData PQresultStatus = 
PGRES_COMMAND_OK
<7>3429958-SQL_init_db_connection() PREPARE InsertData PQresultStatus = 
PGRES_COMMAND_OK
<7>3429958-SQL_init_db_connection() PREPARE UpdateData PQresultStatus = 
PGRES_COMMAND_OK
<7>3429958-loading builtin FACE
<7>3429958-loading builtin FMSG
<7>3429958-loading builtin FMLG
<7>3429958-loading builtin UII1
<7>3429958-loading builtin UIO1
<7>3429958-loading builtin UIM1
<7>3429958-loading builtin UIY3
<7>3429958-loading builtin UIY2
<7>3429958-loading builtin UIY1
<7>3429958-loading builtin UIZ1
<7>3429958-loading builtin WGA1
<7>3429958-loading builtin COPA
<7>3429958-loading builtin HCVT
<7>3429958-loading builtin HTEA
<7>3429958-loading builtin WGR1
<7>3429958-loading builtin UGK6
<7>3429958-loading builtin UGK7
<7>3429958-loading builtin CVFG
<7>3429958-loading builtin GMI9
<7>3429958-loading builtin HLOG
<7>3429958-loading builtin GMI8
<7>3429958-loading builtin UMA1
<7>3429958-loading builtin UMD1
<7>3429958-loading builtin UMS1
<7>3429958-loading builtin UGI1
<7>3429958-loading builtin HTCZ
<7>3429958-loading builtin C000
<7>3429958-loading builtin HRD4
<7>3429958-loading builtin HTCY
<4>3429958-[WARN] init_globals() not fully unimplemented
<7>3429958-init_globals() load_core_resident
<4>3429958-[WARN] Init dbref_SW05SR
<4>3429958-[WARN] Init dbref_SR05SR
<4>3429958-[WARN] Init dbref_GW01SR
<4>3429958-[WARN] Init dbref_SEQ0SW
<4>3429958-[WARN] Init dbref_FFQ0SW
<4>3429958-[WARN] Init dbref_PNR0SW
<7>3429958-init_pool()
<7>3429958-/fs-data/POOL.DAT file already exists
<7>3429958-getFreeFileAddr()
<7>3429958-getActiveTape()
<7>3429958-1 tuples returned
<7>3429958-getActiveTape()
<7>3429958-1 tuples returned
<7>3429958-getActiveTape()
<7>3429958-1 tuples returned
<7>3429958-initializing segment DRIV transfer vector
<7>3429958-initializing tvp[0]=DRV1 nmix=642637 tsv=1
<7>3429958-initializing