How to convert escaped text column - force E prefix

2021-01-05 Thread Durumdara
Dear Members!

A web developer stores JSON-like strings in a text column.

With the E prefix we can get the real text:

Select E'Az ad\u00f3kulcsonk\u00e9nti'

Hungarian: "Az adókulcsonkénti" (ISO-8859-2)

How to get the same result from a table column?

select WhatAFunction( ATable.JSONLikeTextColumn) from ATable

What function do we need to use to "simulate" the E prefix?
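One approach (a sketch, not a definitive answer): since JSON strings use the same \uXXXX escape syntax, the value can be wrapped in double quotes and cast through jsonb, which decodes the escapes. This assumes the column contains no unescaped `"` or `\` characters that would make the wrapped value invalid JSON:

```sql
SELECT ('"' || ATable.JSONLikeTextColumn || '"')::jsonb #>> '{}'
FROM ATable;
```

The `#>> '{}'` extracts the decoded scalar back out as plain text.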

Thank you for the help!

Best regards
dd


LDAP(s) doc misleading

2021-01-05 Thread Paul Förster
Hi,

I found what I believe to be a misleading statement in the LDAP documentation:

https://www.postgresql.org/docs/current/auth-ldap.html

It says:
"ldapscheme
Set to ldaps to use LDAPS."...

IMHO, it should say:
"ldapscheme
Set to ldapscheme to use LDAPS (ldapscheme=ldaps)."...

I found this because I'm in the process of making our Linux LDAP servers 
obsolete by reconfiguring PostgreSQL to use our company Windows Active 
Directory LDAPS service.

Cheers,
Paul



Re: Declarative partitioning and automatically generated row-IDs using BIGSERIAL

2021-01-05 Thread Laurenz Albe
On Thu, 2020-12-31 at 17:38 +0100, Thorsten Schöning wrote:
> I have the following table, currently containing 100+ million rows, and
> which needs to be queried by "captured_at" a lot. That table stores
> rows for the last 6 years, but most of the queries focus on the last
> 15 months, 15 days or really only 15 minutes.
> 
> > CREATE TABLE public.clt_rec(
> > id bigserial NOT NULL,
> > oms_rec bigint NOT NULL,
> > captured_at timestamp with time zone NOT NULL,
> > rssi smallint NOT NULL,
> > CONSTRAINT pk_clt_rec PRIMARY KEY (id)
> >  WITH (FILLFACTOR = 10),
> > CONSTRAINT uk_clt_rec_oms_rec UNIQUE (oms_rec)
> > );
> 
> We want to test whether the performance of some queries can be improved by
> using declarative partitioning with far fewer rows, e.g. one table per
> year or half-year or similar.
> 
> [...] Therefore I need to keep that ID when splitting the
> table into individual partitions and as well need to guarantee that
> IDs are unique across all partitions.
> 
> I've read a lot of similar examples using BIGSERIAL or the like already,
> but would like to make sure I understood correctly how those SERIAL
> values are generated.

You should experiment with partitioned tables; that will answer most of
your questions.

> Am I correct that after migrating the available table to a partitioned
> one I keep INSERTing into the partitioned table only in my app?

That's the idea, yes.

> Because of the declarative partitioninig used, the only thing I need
> to assure is to have necessary partitions available when INSERT
> happens? I would create them beforehand as part of some maintenance,
> but read about a DEFAULT partition already as well.

Yes, you need the partitions created in advance.

Stay away from default partitions; they might be a trap in your case.
If there is a conflicting value in the default partition, it can prevent
the creation of a new partition.

> Because I keep INSERTing into the partitioned table, the semantics of
> my ID don't change, correct? There's a SEQUENCE associated with the
> column "id" in the partitioned table, and that generates my value,
> which is AFTERWARDS stored with all other values of the row in whichever
> partition fits the partition key.

Yes.  All partitions should share the same sequence, so values are
automatically unique.

> Or is the target partition calculated first and AFTERWARDS a SEQUENCE
> private to each partition table used to calculate the ID? I don't
> think so, but according to the docs indexes etc. are inherited by
> partitions as well. So maybe Postgres maintains multiple SEQUENCEs in
> the background for some reason as well.

Create a partitioned table and look at the definition, and you will see
that the default value (taken from the sequence) is calculated according
to how you created the partitioned table.  So it works like you expect.

> > Unique constraints (and hence primary keys) on partitioned tables
> > must include all the partition key columns. This limitation exists
> > because PostgreSQL can only enforce uniqueness in each partition
> > individually.
> 
> https://www.postgresql.org/docs/11/ddl-partitioning.html#DDL-PARTITIONING-DECLARATIVE-BEST-PRACTICES
> 
> Doesn't the above allow manually updating different rows in different
> partition tables to contain the same ID in the end? The main benefit
> of the PRIMARY KEY left is uniqueness per partition, correct?

You won't be able to enforce uniqueness with a constraint, period.
You have to make sure that your application always uses the sequence.

One way to make this more likely is to use an identity column rather
than "serial": GENERATED ALWAYS AS IDENTITY
Not only does this conform to the standard, but it will prevent
overriding the sequence value with a user supplied value.
(That is, you'd have to use special syntax to override the sequence
value.)
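A minimal sketch of what such a partitioned table could look like (partition bounds are invented; bigserial is used here because identity columns on partitioned tables are only supported in recent PostgreSQL versions, and its single backing sequence is likewise shared by all partitions):

```sql
CREATE TABLE clt_rec (
    id          bigserial,
    oms_rec     bigint NOT NULL,
    captured_at timestamp with time zone NOT NULL,
    rssi        smallint NOT NULL,
    PRIMARY KEY (id, captured_at)        -- must include the partition key
) PARTITION BY RANGE (captured_at);

-- partitions are created in advance, e.g. one per year
CREATE TABLE clt_rec_2021 PARTITION OF clt_rec
    FOR VALUES FROM ('2021-01-01') TO ('2022-01-01');

-- the application keeps INSERTing into clt_rec; the id comes from the one
-- shared sequence, and the row is routed to the matching partition
INSERT INTO clt_rec (oms_rec, captured_at, rssi)
VALUES (42, '2021-06-01 12:00+00', -70);
```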

> > While primary keys are supported on partitioned tables, foreign keys
> > referencing partitioned tables are not supported. (Foreign key
> > references from a partitioned table to some other table are
> > supported.)
> 
> The docs for Postgres 11 mention that partitioned tables cannot be
> used as the target of foreign keys, while that statement is removed from
> the docs of Postgres 13. How is a 1:1 relationship guaranteed in newer
> versions when PRIMARY KEYs are still local to their individual
> partition tables?

Foreign keys referencing partitioned tables are supported from v12 on.

You can guarantee a 1:1 relationship with a unique constraint on the
source columns.
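As a sketch (v12 or later, assuming a partitioned clt_rec with PRIMARY KEY (id, captured_at); table and column names are invented), a foreign key plus a unique constraint on the referencing columns gives the 1:1 guarantee:

```sql
-- hypothetical side table holding at most one extra row per clt_rec row
CREATE TABLE clt_rec_note (
    clt_rec_id  bigint NOT NULL,
    captured_at timestamp with time zone NOT NULL,
    note        text,
    FOREIGN KEY (clt_rec_id, captured_at)
        REFERENCES clt_rec (id, captured_at),   -- allowed from v12 on
    UNIQUE (clt_rec_id, captured_at)            -- makes the link 1:1
);
```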

Yours,
Laurenz Albe
-- 
Cybertec | https://www.cybertec-postgresql.com





Re: SQL to query running transactions with subtransactions that exceeds 64

2021-01-05 Thread Laurenz Albe
On Tue, 2021-01-05 at 10:42 +, Li EF Zhang wrote:
> I am new to postgresql and sql. I want to check for running transactions whose
>  subtransactions exceed 64. Is there any SQL statement or other method that
>  can do this? Thanks!

You may want to read
https://www.cybertec-postgresql.com/en/subtransactions-and-performance-in-postgresql/

You could export a snapshot with pg_export_snapshot() and see if the resulting
file contains the line "sof:1".
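In concrete terms (a sketch; pg_export_snapshot() is a real function, and the reading of "sof" as the suboverflowed flag follows the article above):

```sql
-- run inside an open transaction
BEGIN;
SELECT pg_export_snapshot();  -- returns a snapshot name such as '00000003-0000001B-1'
-- the exported snapshot is written to a file of that name under
-- $PGDATA/pg_snapshots/; a transaction whose subtransaction cache has
-- overflowed (more than 64 subtransactions) shows the line "sof:1" there
COMMIT;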

Yours,
Laurenz Albe
-- 
Cybertec | https://www.cybertec-postgresql.com





Re: PostgreSQL 13 on CentOS 7

2021-01-05 Thread H
On 01/05/2021 08:58 PM, H wrote:
> I am compiling an application that uses PostgreSQL 13 and Qt 5.12. I have
> built the latter from scratch and successfully linked with libraries for
> PostgreSQL 9.6 (the current version on CentOS 7) but have run into problems
> with PostgreSQL 13. The server, client and libs are available in the postgres
> repository, but I also need the devel library, which in turn requires
> llvm5.0-devel, which does not seem to be available...
>
> Has anyone successfully built an application using pg 13 or otherwise used
> postgresql13-devel?
>
> Thanks.
>
Found yum install libpq5-devel-13.0-10PGDG.rhel7.x86_64 and it seems to compile.





PostgreSQL 13 on CentOS 7

2021-01-05 Thread H
I am compiling an application that uses PostgreSQL 13 and Qt 5.12. I have built
the latter from scratch and successfully linked with libraries for PostgreSQL
9.6 (the current version on CentOS 7) but have run into problems with
PostgreSQL 13. The server, client and libs are available in the postgres
repository, but I also need the devel library, which in turn requires
llvm5.0-devel, which does not seem to be available...

Has anyone successfully built an application using pg 13 or otherwise used
postgresql13-devel?

Thanks.





RE: Max# of tablespaces

2021-01-05 Thread Thomas Flatley
I agree - it requires a re-think/re-build.

As for Oracle, it's quite easy to add tablespaces in flight, assuming you don't
hit max db_files.

I was more curious whether there was an actual defined limit - Oracle stops at
64K, and their old application release would have 2 tablespaces per module, and
at 400 or so that's a hassle.



-Original Message-
From: Christophe Pettus  
Sent: Tuesday, January 5, 2021 5:02 PM
To: Thomas Flatley 
Cc: Andreas Kretschmer ; 
pgsql-general@lists.postgresql.org
Subject: Re: Max# of tablespaces



> On Jan 5, 2021, at 13:55, Thomas Flatley  wrote:
> 
> As far as I can tell, each tablespace is a partition, and I assume they felt 
> this was the best way to perform partition maintenance - again, I don’t know 
> , 

It's a very common Oracle-ism to have a lot of tablespaces, in part because 
(IIRC) Oracle makes it an incredible pain in the neck to add tablespaces once 
the DB is in use.  For sharding purposes, you probably want schemas in 
PostgreSQL instead of tablespaces, although having that many schemas is going 
to not be optimal, either.

--
-- Christophe Pettus
   x...@thebuild.com





Re: FTS and tri-grams

2021-01-05 Thread Christophe Pettus



> On Jan 5, 2021, at 13:26, Mark Phillips  wrote:
> 1. Is FTS required for tri-gram to work?
> 2. Are these independent of each other?
> 3. Is tri-gram alone sufficient for a “full text search” feature?

The answers are, kind of in order:

2. Yes.
1. No.
3. It depends on what you mean by "full text search."

Trigrams are mostly for fuzzy matching on a single or small number of words.  
There are things that the tsvector machinery can do that trigrams can't, such 
as proximity searches between words, prefix and stemmed searches, and things of 
that type.  If you just want fuzzy searching on a small number of words, 
trigrams are probably fine; for more sophisticated kinds of searching, you want 
tsvector.

They're completely different sets of functionality in PostgreSQL.
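A side-by-side sketch of the two approaches (the docs table is hypothetical; the % similarity operator comes from the pg_trgm extension, while tsvector/tsquery are built in):

```sql
CREATE EXTENSION IF NOT EXISTS pg_trgm;

-- trigram: fuzzy matching, tolerant of typos in a word or short phrase
SELECT title FROM docs WHERE title % 'postgers';   -- similar to 'postgres'
SELECT similarity('postgres', 'postgers');         -- 0..1 similarity score

-- full text search: stemming, boolean and proximity operators
SELECT title FROM docs
WHERE to_tsvector('english', body)
      @@ to_tsquery('english', 'partition <-> key');
```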

--
-- Christophe Pettus
   x...@thebuild.com





Re: Max# of tablespaces

2021-01-05 Thread Ron
It's certainly the "I only know bash" method of determining which partition is
growing the fastest.


On 1/5/21 3:55 PM, Thomas Flatley wrote:

I don’t, but I didn’t set up the env

As far as I can tell, each tablespace is a partition, and I assume they felt 
this was the best way to perform partition maintenance - again, I don’t know ,

-Original Message-
From: Andreas Kretschmer 
Sent: Sunday, January 3, 2021 11:52 AM
To: pgsql-general@lists.postgresql.org; Thomas Flatley ; 
pgsql-general@lists.postgresql.org
Subject: Re: Max# of tablespaces

On 3 January 2021 13:59:31 CET, Thomas Flatley  wrote:

Hello, I've checked the docs but can't seem to find if there is a max #
of tablespaces allowed - I've come across a 9.5 env with 1600
tablespaces - they want to double that

why on earth do you think you will need so many tablespaces? They have a
different meaning than in Oracle.


--
Angular momentum makes the world go 'round.




Re: Max# of tablespaces

2021-01-05 Thread Christophe Pettus



> On Jan 5, 2021, at 13:55, Thomas Flatley  wrote:
> 
> As far as I can tell, each tablespace is a partition, and I assume they felt 
> this was the best way to perform partition maintenance - again, I don’t know 
> , 

It's a very common Oracle-ism to have a lot of tablespaces, in part because 
(IIRC) Oracle makes it an incredible pain in the neck to add tablespaces once 
the DB is in use.  For sharding purposes, you probably want schemas in 
PostgreSQL instead of tablespaces, although having that many schemas is not 
going to be optimal, either.

--
-- Christophe Pettus
   x...@thebuild.com





RE: Max# of tablespaces

2021-01-05 Thread Thomas Flatley
I don’t, but I didn’t set up the env

As far as I can tell, each tablespace is a partition, and I assume they felt 
this was the best way to perform partition maintenance - again, I don’t know , 

-Original Message-
From: Andreas Kretschmer  
Sent: Sunday, January 3, 2021 11:52 AM
To: pgsql-general@lists.postgresql.org; Thomas Flatley ; 
pgsql-general@lists.postgresql.org
Subject: Re: Max# of tablespaces

On 3 January 2021 13:59:31 CET, Thomas Flatley  wrote:
>Hello, I've checked the docs but can't seem to find if there is a max #
>of tablespaces allowed - I've come across a 9.5 env with 1600
>tablespaces - they want to double that

why on earth do you think you will need so many tablespaces? They have a
different meaning than in Oracle.


--
2ndQuadrant - The PostgreSQL Support Company


Re: Max# of tablespaces

2021-01-05 Thread Andreas Kretschmer
On 3 January 2021 13:59:31 CET, Thomas Flatley  wrote:
>Hello, I've checked the docs but can't seem to find if there is a max #
>of tablespaces allowed - I've come across a 9.5 env with 1600
>tablespaces - they want to double that

why on earth do you think you will need so many tablespaces? They have a
different meaning than in Oracle.


-- 
2ndQuadrant - The PostgreSQL Support Company




FTS and tri-grams

2021-01-05 Thread Mark Phillips
Some years ago we implemented FTS. At the time, we also added a column with the
tsvector data type. We assumed we needed that for FTS to work.

Later, we added tri-gram support. At the time, we assumed tri-grams complemented
FTS.

More recently, we have run into a problem with replicating the tsvector column.

We now wonder if tri-grams alone can achieve a fast full text search result. If
so, could the tsvector column then be dropped?

1. Is FTS required for tri-gram to work?
2. Are these independent of each other?
3. Is tri-gram alone sufficient for a “full text search” feature?

Any thoughts or experiences are greatly appreciated.

 - Mark Phillips



Re: Max# of tablespaces

2021-01-05 Thread Bruce Momjian
On Sun, Jan  3, 2021 at 05:37:52PM +, Thomas Flatley wrote:
> Excellent - thanks for the fast response - it was an oracle dba that set it 
> up initially so that may explain it - 

Agreed.  It was probably done that way for an invalid reason and should
be cleaned up.

-- 
  Bruce Momjian  https://momjian.us
  EnterpriseDB https://enterprisedb.com

  The usefulness of a cup is in its emptiness, Bruce Lee





Re: CROSSTAB( .. only one column has values... )

2021-01-05 Thread Pavel Stehule
Hi

On Tue, Jan 5, 2021 at 19:45, Joe Conway wrote:

> On 1/5/21 12:11 PM, Tim Clarke wrote:
> > imho the crosstab() function isn't a good implementation. The biggest
> failure it
> > has is that you must know exactly how many output columns you will have
> in the
> result prior to running it
>
>
> *That* is a function of how Postgres set returning functions work, and not
> specific to crosstab(). It is not easily fixed. Patches to fix that would
> be
> welcomed!
>

https://www.postgresql.org/message-id/flat/CAFj8pRC%2BhNzpH%2B0bPRCnqNncUCGjEvpwX%2B0nbhb1F7gwjYZZNg%40mail.gmail.com#9b3fbdd968c271668049a103bfc93880

Regards

Pavel

>
> Joe
>
> --
> Crunchy Data - http://crunchydata.com
> PostgreSQL Support for Secure Enterprises
> Consulting, Training, & Open Source Development
>
>


Re: CROSSTAB( .. only one column has values... )

2021-01-05 Thread Joe Conway
On 1/5/21 10:46 AM, Adam Tauno Williams wrote:
> I'm using the crosstab feature and do not understand why I am only
> getting values in the first column.



> So I put this in as a crosstab:
> 
> SELECT * FROM crosstab(
> $$
 

>  $$) AS final_result (invoice_date VARCHAR, "parts" BIGINT, "rental"
> BIGINT, "sales" BIGINT, "service" BIGINT);
> 
>  - and I get the results of -
> 
> invoice_date parts rental sales  service 
>  - -- -- --- 
> 2001-09  1 (null) (null) (null)  
> 2007-07  1 (null) (null) (null)  
> 2013-02  5353  (null) (null) (null)  
> 2013-02  3454  (null) (null) (null)  
> 2013-03  3512  (null) (null) (null)  
> 2013-03  5366  (null) (null) (null)  
> ...
> 
> Only the first column has values, all the rest are NULL.
> I assume I am missing something stupid.

Seems you are using the wrong form of the crosstab() function. See

  https://www.postgresql.org/docs/current/tablefunc.html#id-1.11.7.47.5.7.2

"The main limitation of the single-parameter form of crosstab is that it treats
all values in a group alike, inserting each value into the first available
column. If you want the value columns to correspond to specific categories of
data, and some groups might not have data for some of the categories, that
doesn't work well. The two-parameter form of crosstab handles this case by
providing an explicit list of the categories corresponding to the output 
columns."
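A sketch of the two-parameter form applied to a query like the one in the thread (the source query is abbreviated to a hypothetical monthly_counts relation; the second query supplies the fixed category list, one per output column):

```sql
SELECT * FROM crosstab(
    $$ SELECT invoice_date, invoice_type, cnt
       FROM monthly_counts      -- stands in for the grouped query
       ORDER BY 1, 2 $$,
    $$ VALUES ('parts'), ('rental'), ('sales'), ('service') $$
) AS final_result (invoice_date varchar, parts bigint, rental bigint,
                   sales bigint, service bigint);
```

With the category list given explicitly, months that lack data for a category get NULL in that column instead of values sliding left.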

HTH,

Joe

-- 
Crunchy Data - http://crunchydata.com
PostgreSQL Support for Secure Enterprises
Consulting, Training, & Open Source Development





Re: CROSSTAB( .. only one column has values... )

2021-01-05 Thread Joe Conway
On 1/5/21 12:11 PM, Tim Clarke wrote:
> imho the crosstab() function isn't a good implementation. The biggest failure 
> it
> has is that you must know exactly how many output columns you will have in the
> result /_prior to running it


*That* is a function of how Postgres set returning functions work, and not
specific to crosstab(). It is not easily fixed. Patches to fix that would be
welcomed!

Joe

-- 
Crunchy Data - http://crunchydata.com
PostgreSQL Support for Secure Enterprises
Consulting, Training, & Open Source Development





Re: CROSSTAB( .. only one column has values... )

2021-01-05 Thread Thomas Kellerer

Adam Tauno Williams schrieb am 05.01.2021 um 16:46:

I'm using the crosstab feature and do not understand why I am only
getting values in the first column.

The query:
SELECT
   date_trunc('month', t2.value_date) AS invoice_date,
   t1.value_string AS invoice_type,
   COUNT(*)
FROM document d
   LEFT OUTER JOIN obj_property t1
 ON (t1.obj_id = d.document_id
 AND t1.namespace_prefix = 'http://www.example.com/ctabs'
 AND t1.value_key = 'invoiceType')
   LEFT OUTER JOIN obj_property t2
 ON (t2.obj_id = d.document_id
 AND t2.namespace_prefix = 'http://www.example.com/ctabs'
 AND t2.value_key = 'invoiceDate')
WHERE ((d.project_id = 1134510213) AND (d.is_folder = 0))
GROUP BY 1,2

  - has results like -

invoice_date invoice_type count
  -
2013-02  service  3454
2013-03  service  3512
2013-03  parts5366
2013-04  parts5657
2013-04  service  4612
2013-05  service  4946
2013-05  parts5508
...

So I put this in as a crosstab:



I find using filtered aggregation to be way easier and more flexible than using 
crosstab():

SELECT date_trunc('month', t2.value_date) AS invoice_date,
   count(*) filter (where value_string = 'rental') as rental,
   count(*) filter (where value_string = 'sales') as sales,
   count(*) filter (where value_string = 'service') as service
FROM document d
   LEFT OUTER JOIN obj_property t1
 ON (t1.obj_id = d.document_id
 AND t1.namespace_prefix = 'http://www.example.com/ctabs'
 AND t1.value_key = 'invoiceType')
   LEFT OUTER JOIN obj_property t2
 ON (t2.obj_id = d.document_id
 AND t2.namespace_prefix = 'http://www.example.com/ctabs'
 AND t2.value_key = 'invoiceDate')
WHERE ((d.project_id = 1134510213) AND (d.is_folder = 0))
GROUP BY 1


Thomas




Re: CROSSTAB( .. only one column has values... )

2021-01-05 Thread Tim Clarke
On 05/01/2021 16:12, David G. Johnston wrote:
On Tue, Jan 5, 2021 at 8:46 AM Adam Tauno Williams wrote:
Only the first column has values, all the rest are NULL.
I assume I am missing something stupid.

I think you are assigning the function more intelligence/effort than it puts 
out.

From the documentation:
"""
The crosstab function produces one output row for each consecutive group of 
input rows with the same row_name value. It fills the output value columns, 
left to right, with the value fields from these rows. If there are fewer rows 
in a group than there are output value columns, the extra output columns are 
filled with nulls; if there are more rows, the extra input rows are skipped.

In practice the SQL query should always specify ORDER BY 1,2 to ensure that the 
input rows are properly ordered, that is, values with the same row_name are 
brought together and correctly ordered within the row. Notice that crosstab 
itself does not pay any attention to the second column of the query result; 
it's just there to be ordered by, to control the order in which the 
third-column values appear across the page.
"""

The fact you don't have an order by, and that there are not an equal number of 
records per date, suggests to me that you are expecting the function to fill in 
the blanks when the documentation says it doesn't do that.

David J.




+1

imho the crosstab() function isn't a good implementation. The biggest failure 
it has is that you must know exactly how many output columns you will have in 
the result prior to running it


Tim Clarke MBCS
IT Director

Re: CROSSTAB( .. only one column has values... )

2021-01-05 Thread David G. Johnston
On Tue, Jan 5, 2021 at 8:46 AM Adam Tauno Williams 
wrote:

> Only the first column has values, all the rest are NULL.
> I assume I am missing something stupid.
>

I think you are assigning the function more intelligence/effort than it
puts out.

From the documentation:
"""
The crosstab function produces one output row for each consecutive group of
input rows with the same row_name value. It fills the output value columns,
left to right, with the value fields from these rows. If there are fewer
rows in a group than there are output value columns, the extra output
columns are filled with nulls; if there are more rows, the extra input rows
are skipped.

In practice the SQL query should always specify ORDER BY 1,2 to ensure that
the input rows are properly ordered, that is, values with the same row_name
are brought together and correctly ordered within the row. Notice that
crosstab itself does not pay any attention to the second column of the
query result; it's just there to be ordered by, to control the order in
which the third-column values appear across the page.
"""

The fact you don't have an order by, and that there are not an equal number
of records per date, suggests to me that you are expecting the function to
fill in the blanks when the documentation says it doesn't do that.

David J.


CROSSTAB( .. only one column has values... )

2021-01-05 Thread Adam Tauno Williams
I'm using the crosstab feature and do not understand why I am only
getting values in the first column.

The query:
SELECT 
  date_trunc('month', t2.value_date) AS invoice_date, 
  t1.value_string AS invoice_type,
  COUNT(*)
FROM document d
  LEFT OUTER JOIN obj_property t1
ON (t1.obj_id = d.document_id
AND t1.namespace_prefix = 'http://www.example.com/ctabs'
AND t1.value_key = 'invoiceType')
  LEFT OUTER JOIN obj_property t2
ON (t2.obj_id = d.document_id
AND t2.namespace_prefix = 'http://www.example.com/ctabs'
AND t2.value_key = 'invoiceDate')
WHERE ((d.project_id = 1134510213) AND (d.is_folder = 0))
GROUP BY 1,2

 - has results like -

invoice_date invoice_type count 
  - 
2013-02  service  3454  
2013-03  service  3512  
2013-03  parts5366  
2013-04  parts5657  
2013-04  service  4612  
2013-05  service  4946  
2013-05  parts5508  
...

So I put this in as a crosstab:

SELECT * FROM crosstab(
$$
SELECT 
  SUBSTR(t2.value_string, 1, 7) AS invoice_date,
  t1.value_string AS invoice_type,
  COUNT(*)
FROM document d
  LEFT OUTER JOIN obj_property t1
ON (t1.obj_id = d.document_id
AND t1.namespace_prefix = 'http://www.example.com/ctabs'
AND t1.value_key = 'invoiceType')
  LEFT OUTER JOIN obj_property t2
ON (t2.obj_id = d.document_id
AND t2.namespace_prefix = 'http://www.example.com/ctabs'
AND t2.value_key = 'invoiceDate')
WHERE ((d.project_id = 1134510213) AND (d.is_folder = 0))
GROUP BY 1,2
 $$) AS final_result (invoice_date VARCHAR, "parts" BIGINT, "rental"
BIGINT, "sales" BIGINT, "service" BIGINT);

 - and I get the results of -

invoice_date parts rental sales  service 
 - -- -- --- 
2001-09  1 (null) (null) (null)  
2007-07  1 (null) (null) (null)  
2013-02  5353  (null) (null) (null)  
2013-02  3454  (null) (null) (null)  
2013-03  3512  (null) (null) (null)  
2013-03  5366  (null) (null) (null)  
...

Only the first column has values, all the rest are NULL.
I assume I am missing something stupid.

-- 
Adam Tauno Williams  GPG D95ED383
Systems Administrator, Python Developer, LPI / NCLA





duplicate key value violates unique constraint pg_default_acl_role_nsp_obj_index

2021-01-05 Thread Andrus

Hi!

I have used the script below to remove all rights from a user and then grant
some back. Today it started throwing this error:


ERROR:  duplicate key value violates unique constraint "pg_default_acl_role_nsp_obj_index"
DETAIL:  Key (defaclrole, defaclnamespace, defaclobjtype)=(30152, 186783649, r) already exists.


How can I fix this, and what is the proper way to implement this?

Code used:

revoke all on all tables in schema 
public,firma1,firma3,firma4,firma5,firma6,firma15,firma17,firma19,firma20,firma21,firma22,firma23,firma24,firma25,firma26,firma27,firma30,firma31,firma32,firma34,firma36,firma37,firma41,firma55,firma59,firma60,firma62,firma63,firma65,firma66,firma67,firma68,firma70,firma71,firma73,firma74,firma76,firma77,firma78,firma79,firma80,firma85,firma88,firma89,firma90,firma91,firma93,firma94,firma95,firma96,firma97,firma98,firma100,firma101,firma102,firma103,firma104,firma105,firma106,firma107,firma110,firma111,firma112,firma148,firma149,firma151,firma152,firma153,firma154,firma155,firma156,firma157,firma158,firma159,firma160,firma163,firma164,firma165,firma166,firma172,firma173,firma174,firma175,firma176,firma177,firma178,firma179,firma180,firma181,firma188,firma189,firma190,firma191,firma192,firma193,firma194,firma195,firma196 
from "testuser" cascade;



revoke all on all sequences in schema 
public,firma1,firma3,firma4,firma5,firma6,firma15,firma17,firma19,firma20,firma21,firma22,firma23,firma24,firma25,firma26,firma27,firma30,firma31,firma32,firma34,firma36,firma37,firma41,firma55,firma59,firma60,firma62,firma63,firma65,firma66,firma67,firma68,firma70,firma71,firma73,firma74,firma76,firma77,firma78,firma79,firma80,firma85,firma88,firma89,firma90,firma91,firma93,firma94,firma95,firma96,firma97,firma98,firma100,firma101,firma102,firma103,firma104,firma105,firma106,firma107,firma110,firma111,firma112,firma148,firma149,firma151,firma152,firma153,firma154,firma155,firma156,firma157,firma158,firma159,firma160,firma163,firma164,firma165,firma166,firma172,firma173,firma174,firma175,firma176,firma177,firma178,firma179,firma180,firma181,firma188,firma189,firma190,firma191,firma192,firma193,firma194,firma195,firma196 
from "testuser" cascade;
revoke all on database 
sba,mef,aus,reta,idd,sam,namm,nevoteks,haa,yle,eevakk,swecon,kose,kliima,best,eeva,kiilikvh,pariisi,nopri,laaneteed,belimo,nkaubandus,netroo,kuluk 
from "testuser" cascade;
revoke all on all functions in schema 
public,firma1,firma3,firma4,firma5,firma6,firma15,firma17,firma19,firma20,firma21,firma22,firma23,firma24,firma25,firma26,firma27,firma30,firma31,firma32,firma34,firma36,firma37,firma41,firma55,firma59,firma60,firma62,firma63,firma65,firma66,firma67,firma68,firma70,firma71,firma73,firma74,firma76,firma77,firma78,firma79,firma80,firma85,firma88,firma89,firma90,firma91,firma93,firma94,firma95,firma96,firma97,firma98,firma100,firma101,firma102,firma103,firma104,firma105,firma106,firma107,firma110,firma111,firma112,firma148,firma149,firma151,firma152,firma153,firma154,firma155,firma156,firma157,firma158,firma159,firma160,firma163,firma164,firma165,firma166,firma172,firma173,firma174,firma175,firma176,firma177,firma178,firma179,firma180,firma181,firma188,firma189,firma190,firma191,firma192,firma193,firma194,firma195,firma196 
from "testuser" cascade;
revoke all on schema 
public,firma1,firma3,firma4,firma5,firma6,firma15,firma17,firma19,firma20,firma21,firma22,firma23,firma24,firma25,firma26,firma27,firma30,firma31,firma32,firma34,firma36,firma37,firma41,firma55,firma59,firma60,firma62,firma63,firma65,firma66,firma67,firma68,firma70,firma71,firma73,firma74,firma76,firma77,firma78,firma79,firma80,firma85,firma88,firma89,firma90,firma91,firma93,firma94,firma95,firma96,firma97,firma98,firma100,firma101,firma102,firma103,firma104,firma105,firma106,firma107,firma110,firma111,firma112,firma148,firma149,firma151,firma152,firma153,firma154,firma155,firma156,firma157,firma158,firma159,firma160,firma163,firma164,firma165,firma166,firma172,firma173,firma174,firma175,firma176,firma177,firma178,firma179,firma180,firma181,firma188,firma189,firma190,firma191,firma192,firma193,firma194,firma195,firma196 
from "testuser" cascade;

revoke sba_owner from "testuser" cascade;

ALTER ROLE "testuser" inherit NOCREATEROLE NOCREATEDB LOGIN;

grant all on all tables in schema 
public,firma193,firma193,firma195,firma196 to "testuser";
grant all on all sequences in schema 
public,firma193,firma193,firma195,firma196 to "testuser";

grant all on database sba to "testuser";
grant all on schema public,firma193,firma193,firma195,firma196 to 
"testuser";
ALTER DEFAULT PRIVILEGES IN SCHEMA 
public,firma193,firma193,firma195,firma196

   GRANT all ON TABLES TO "testuser";
revoke all on  kasutaja,kaspriv,logifail from "testuser" cascade;
grant select on kaspriv,kasutaja to "testuser";
grant update (eesnimi, 
nimi,email,amet,islocked,telefon,language,vabakuup,kasilfirma) on 
kasutaja to "testuser";

grant insert on logifail to "testuser";
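When the script fails this way, it may help to inspect the existing entries behind that unique index before re-running it (a diagnostic sketch; pg_default_acl is the catalog named in the error, and the OIDs in the DETAIL line can be matched against the first two columns):

```sql
-- list current default-privilege entries in readable form
SELECT defaclrole::regrole            AS role,
       defaclnamespace::regnamespace  AS schema,
       defaclobjtype,
       defaclacl
FROM pg_default_acl;
```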

Using

PostgreSQL 12.2 (Debian 

SQL to query running transactions with subtransactions that exceeds 64

2021-01-05 Thread Li EF Zhang
I am new to postgresql and sql. I want to check for running transactions whose
subtransactions exceed 64. Is there any SQL statement or other method that can
do this? Thanks!