Re: Variable constants ?

2019-08-15 Thread Charles Clavadetscher

On 2019-08-15 23:27, Rich Shepard wrote:

On Thu, 15 Aug 2019, stan wrote:

I need to put a few business constants, such as a labor rate multiplier,
in an application. I am averse to hard-coding these things. The best plan
I have come up with so far is to store them in a table, which would have
only 1 row and a column for each needed constant.

Anyone have a better way to do this?

Failing a better way, is there some way I can limit this table to only
allow one row to exist?


Stan,

I've resolved similar issues with changing regulatory agency staff. For your
application(s) I suggest a table like this:

create table labor_rate_mult (
  rate        real primary key,
  start_date  date not null,
  end_date    date
);

This provides both a history of labor rate multipliers and the ability to
select either the most current one or a previous one.

If other factors affect the rate, add attribute columns for them.

Regards,

Rich


Another way to keep a history is to use a daterange instead of two columns
for the start and end dates. Something like:


create table labor_rate_mult (
  rate      real primary key,
  validity  daterange not null
);

This makes it easier to manage and to avoid, e.g., overlapping validity periods.
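
For example, non-overlapping validity periods can be enforced directly with an
exclusion constraint (a sketch of the idea, not part of the original message;
daterange has a built-in GiST operator class, so no extension is needed):

create table labor_rate_mult (
  rate      real not null,
  validity  daterange not null,
  -- reject any row whose validity period overlaps an existing one
  exclude using gist (validity with &&)
);

-- the rate in force today is then simply:
-- select rate from labor_rate_mult where validity @> current_date;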

Regards
Charles






Re: Error XX000 After pg11 upgrade

2019-08-15 Thread Tom Lane
Simon Windsor  writes:
> Since then, large bulk inserts of configuration changes are failing with 
> this Error, but adhoc and small changes are working ok.

Might it be that things work as long as the trigger is only tasked with
redirecting to the same child table (or limited set of child tables)
within a particular insertion command?

I'm wondering if this could be related to bug #15913 --- which I just
fixed today, so maybe I just have it on the brain too much.  The
manifestation doesn't look quite the same, but given the way your
trigger is written, something about NEW.* changing type from one
call to the next might have something to do with it.

I also wonder how often you create/delete child tables.

regards, tom lane




Re: Error XX000 After pg11 upgrade

2019-08-15 Thread Gavin Flower

On 16/08/2019 04:35, Simon Windsor wrote:

Hi

The full system used to run in an Oracle DB and was ported to Postgres 9.5
a little over two years ago, and partitioned using inheritance tables.


Since then, pg_upgrade has been used to upgrade to pg10 (with occasional apt
upgrades to take it to 10.5).


Last week, pg_upgrade was again used to upgrade to pg11.4.

Since then, large bulk inserts of configuration changes are failing with this
error, but ad hoc and small changes are working OK.


The actual error is reported by a Java process:

Caused by: org.postgresql.util.PSQLException: ERROR: cache lookup failed for type 22079
  Where: SQL statement "insert into configObjectsFull_2019 values (new.*)"
PL/pgSQL function configobjectsfull_insert_trigger() line 28 at SQL statement
    at org.postgresql.core.v3.QueryExecutorImpl.receiveErrorResponse(QueryExecutorImpl.java:2310)
    at

The DB table and trigger are attached.

As you can see, the id and timestamp are set on insert, and are used to
determine the partition used.


This issue started after the upgrade to pg11; pg10 and pg9 had no problems.


Any ideas would be appreciated.



[...]

It probably will make no difference, but have you considered testing with
pg 11.5? It would at least rule out a lot of red herrings!



Cheers,
Gavin






RE: Variable constants ?

2019-08-15 Thread Igor Neyman
-Original Message-
From: Tom Lane  
Sent: Thursday, August 15, 2019 6:13 PM
To: stan 
Cc: pgsql-gene...@postgresql.org
Subject: Re: Variable constants ?

stan  writes:
> Failing a better way is there some way I can limit this table to only 
> allow one row to exist?

I was recently reminded of a cute trick for that: make a unique index on a 
constant.

regression=# create table consts(f1 int, f2 int);
CREATE TABLE
regression=# create unique index consts_only_one on consts((1));
CREATE INDEX
regression=# insert into consts values(1,2);
INSERT 0 1
regression=# insert into consts values(3,4);
ERROR:  duplicate key value violates unique constraint "consts_only_one"
DETAIL:  Key ((1))=(1) already exists.

Shepard's nearby advice about keeping some history is probably a better plan though.

regards, tom lane


Here is another trick to keep a table to just one row:

reg#create table test(c1 int check(c1=1) default 1, c2 int);
CREATE TABLE
reg#create unique index one_row on test(c1);
CREATE INDEX
reg#insert into test (c2) values(3);
INSERT 0 1
reg# insert into test (c2) values(4);
ERROR:  duplicate key value violates unique constraint "one_row"
DETAIL:  Key (c1)=(1) already exists.
SQL state: 23505

Regards,
Igor Neyman





Re: Variable constants ?

2019-08-15 Thread Tom Lane
stan  writes:
> Failing a better way is there some way I can limit this table to only allow
> one row to exist?

I was recently reminded of a cute trick for that: make a unique index
on a constant.

regression=# create table consts(f1 int, f2 int);
CREATE TABLE
regression=# create unique index consts_only_one on consts((1));
CREATE INDEX
regression=# insert into consts values(1,2);
INSERT 0 1
regression=# insert into consts values(3,4);
ERROR:  duplicate key value violates unique constraint "consts_only_one"
DETAIL:  Key ((1))=(1) already exists.

Shepard's nearby advice about keeping some history is probably a better
plan though.

regards, tom lane




Re: Variable constants ?

2019-08-15 Thread Rich Shepard

On Thu, 15 Aug 2019, stan wrote:


I need to put a few business constants, such as a labor rate multiplier,
in an application. I am averse to hard-coding these things. The best plan
I have come up with so far is to store them in a table, which would have
only 1 row and a column for each needed constant.

Anyone have a better way to do this?

Failing a better way, is there some way I can limit this table to only
allow one row to exist?


Stan,

I've resolved similar issues with changing regulatory agency staff. For your
application(s) I suggest a table like this:

create table labor_rate_mult (
  rate        real primary key,
  start_date  date not null,
  end_date    date
);

This provides both a history of labor rate multipliers and the ability to
select either the most current one or a previous one.

If other factors affect the rate, add attribute columns for them.
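
For instance, a sketch of how the current multiplier could then be pulled out
(an illustrative addition, assuming end_date is left null for the open-ended
current row):

select rate
from labor_rate_mult
where start_date <= current_date
  and (end_date is null or end_date >= current_date)
order by start_date desc
limit 1;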

Regards,

Rich




Re: Variable constants ?

2019-08-15 Thread Adrian Klaver

On 8/15/19 1:56 PM, stan wrote:

I need to put a few business constants, such as a labor rate multiplier, in an
application. I am averse to hard-coding these things. The best plan I have
come up with so far is to store them in a table, which would have only 1
row and a column for each needed constant.

Anyone have a better way to do this?


Another way (better in the eye of the beholder): a conf file that can be written to.



Failing a better way, is there some way I can limit this table to only allow
one row to exist?


Yes: add the row and REVOKE INSERT afterwards. Then you have a single row
that can be updated/deleted (though you could revoke those as well).
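
A minimal sketch of that approach (illustrative only; app_user stands in for
whatever role the application connects as, and 1.5 is a placeholder value):

create table app_constants (
  labor_rate_mult real not null
);

grant all on app_constants to app_user;

-- seed the single row, then take away the ability to add more
insert into app_constants values (1.5);
revoke insert on app_constants from app_user;
-- optionally also: revoke delete on app_constants from app_user;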








--
Adrian Klaver
adrian.kla...@aklaver.com




Variable constants ?

2019-08-15 Thread stan
I need to put a few business constants, such as a labor rate multiplier, in an
application. I am averse to hard-coding these things. The best plan I have
come up with so far is to store them in a table, which would have only 1
row and a column for each needed constant.

Anyone have a better way to do this?

Failing a better way, is there some way I can limit this table to only allow
one row to exist?


-- 
"They that would give up essential liberty for temporary safety deserve
neither liberty nor safety."
-- Benjamin Franklin




Re: slow queries on system tables

2019-08-15 Thread Adrian Klaver

On 8/15/19 12:09 PM, PegoraroF10 wrote:

The last values for autovacuum and autoanalyze are null.

Thanks, I'll change that and see if it solves our problem.


I thought we had been down this trail before:

https://www.postgresql.org/message-id/1564168884421-0.post%40n3.nabble.com











--
Adrian Klaver
adrian.kla...@aklaver.com




Re: slow queries on system tables

2019-08-15 Thread PegoraroF10
The last values for autovacuum and autoanalyze are null.

Thanks, I'll change that and see if it solves our problem.







Re: slow queries on system tables

2019-08-15 Thread Michael Lewis
Are you autovacuuming and analyzing aggressively enough? Is there bloat or
other concern for these system tables? I expect it may be a concern based
on what I remember about your environment having thousands of tables and
perhaps autovacuum not keeping up (re: Too slow to create new schema and
their tables, functions, triggers).

Can you check results for last (auto)vacuum and analyze?
select * from pg_stat_all_tables where relname IN( 'pg_class',
'pg_attribute', 'pg_index' );
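
If those catalogs turn out to be bloated or long unanalyzed, a one-off manual
pass is a cheap first step (an illustrative suggestion, not from the thread;
vacuuming system catalogs needs superuser rights):

vacuum analyze pg_catalog.pg_class;
vacuum analyze pg_catalog.pg_attribute;
vacuum analyze pg_catalog.pg_index;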


Re: Error XX000 After pg11 upgrade

2019-08-15 Thread Simon Windsor

Hi

The full system used to run in an Oracle DB and was ported to Postgres 9.5
a little over two years ago, and partitioned using inheritance tables.


Since then, pg_upgrade has been used to upgrade to pg10 (with occasional apt
upgrades to take it to 10.5).


Last week, pg_upgrade was again used to upgrade to pg11.4.

Since then, large bulk inserts of configuration changes are failing with this
error, but ad hoc and small changes are working OK.


The actual error is reported by a Java process:

Caused by: org.postgresql.util.PSQLException: ERROR: cache lookup failed for type 22079
  Where: SQL statement "insert into configObjectsFull_2019 values (new.*)"
PL/pgSQL function configobjectsfull_insert_trigger() line 28 at SQL statement
    at org.postgresql.core.v3.QueryExecutorImpl.receiveErrorResponse(QueryExecutorImpl.java:2310)
    at

The DB table and trigger are attached.

As you can see, the id and timestamp are set on insert, and are used to
determine the partition used.


This issue started after the upgrade to pg11; pg10 and pg9 had no problems.

Any ideas would be appreciated.

Simon

On 15/08/2019 16:31, Tom Lane wrote:

Simon Windsor  writes:

I have just upgraded a Db from pg 10.5 to pg11.4 and almost immediately we
are seeing errors like
ERROR,XX000,"cache lookup failed for type 22079"
When inserting into a partitioned table. About 30% of inserts are
reporting this error.

Hmm ... can you show the full schema (eg, psql \d+ output) for the
partitioned table and its children?  Is there any pattern to the
failing inserts, eg do they all resolve as inserts to the same
partition(s)?


... And an insert trigger uses the current date to determine where the
object is stored.

In other words, this isn't actual partitioning as introduced in v10,
but a hand-rolled equivalent?


How do I determine what type 22079 refers to?

Presumably, the problem is that that type OID *doesn't* refer to anything
any more.  You should be asking "where is this dangling reference coming
from?".  It's possibly hiding in the partitioning expression(s) of
this partitioned table, but there's no way to tell with this amount
of info.

How did you do the upgrade exactly?

regards, tom lane


--
Simon Windsor

Eml: simon.wind...@cornfield.me.uk
Tel: 01454 617689
Mob: 0755 197 9733


“There is nothing in the world that some man cannot make a little worse and 
sell a little cheaper, and he who considers price only is that man's lawful 
prey.”



objects.sql
Description: application/sql


slow queries on system tables

2019-08-15 Thread PegoraroF10
The driver I'm using does some queries on system tables and obviously I
cannot change them, because they are inside the driver. These two queries are
usually fast, but sometimes they take 8 or 15 times longer to run. The
question is: can I change something on these tables? Can I create an index,
can I load them into cache? What can I do to speed them up if I cannot change
the way these queries are written?

Extracted from pg_stat_statements:
Query 1 - calls: 20 times, min_time ~ 20ms, mean_time ~ 50ms, max_time ~ 480ms
SELECT current_database() AS TABLE_CATALOG, n.nspname AS TABLE_SCHEMA,
       t.relname AS TABLE_NAME, t.oid AS TABLE_OID,
       current_database() AS INDEX_CATALOG, n.nspname AS INDEX_SCHEMA,
       c.relname AS INDEX_NAME, c.oid AS INDEX_OID,
       a.attname AS COLUMN_NAME, a.attnum AS COLUMN_POSITION,
       indoption[index] & $1 AS DESCENDING,
       indisunique::integer AS UNIQUE
FROM (SELECT generate_series($2, indnatts - $3), indrelid, indexrelid, indkey,
             indoption, indisunique, indisprimary
      FROM pg_index i) i(index, indrelid, indexrelid, indkey, indoption, indisunique, indisprimary)
INNER JOIN pg_class c ON c.oid = indexrelid
INNER JOIN pg_class t ON t.oid = indrelid
INNER JOIN pg_namespace n ON n.oid = t.relnamespace
INNER JOIN pg_attribute a ON a.attrelid = indrelid AND a.attnum = indkey[index]
WHERE n.nspname LIKE $4 AND t.relname LIKE $5
ORDER BY indisprimary::integer DESC, n.nspname, t.relname, c.relname, a.attnum

Query 2 - calls: 2 times, min_time ~ 70ms, mean_time ~ 95ms, max_time ~ 170ms
SELECT pg_attribute.attname
FROM pg_index, pg_class, pg_attribute
WHERE upper(pg_class.relname) = $1
  AND indrelid = pg_class.oid
  AND pg_attribute.attrelid = pg_class.oid
  AND pg_attribute.attnum = any(pg_index.indkey)
  AND indisprimary








Error XX000 After pg11 upgrade

2019-08-15 Thread Simon Windsor
Hi

I have just upgraded a Db from pg 10.5 to pg11.4 and almost immediately we
are seeing errors like

ERROR,XX000,"cache lookup failed for type 22079"

When inserting into a partitioned table. About 30% of inserts are
reporting this error.

How do I determine what type 22079 refers to? I have checked pg_attribute
and pg_class without success. What would be causing this error?
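
As a side note, type OIDs live in pg_type rather than pg_attribute or
pg_class; a quick check (which will return no row if the OID is indeed
dangling) would be:

select oid, typname, typnamespace::regnamespace as type_schema
from pg_catalog.pg_type
where oid = 22079;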

The basic table layout is

objects
objects_2010
objects_2011
-
objects_2018
objects_2019

And an insert trigger uses the current date to determine where the object is
stored.
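
For reference, a minimal sketch of the kind of hand-rolled routing trigger
described here (illustrative only, not the actual trigger from this system):

create or replace function objects_insert_trigger() returns trigger as $$
begin
    -- route the new row to the child table for the current year
    if now() >= date '2019-01-01' then
        insert into objects_2019 values (new.*);
    elsif now() >= date '2018-01-01' then
        insert into objects_2018 values (new.*);
    -- ... one branch per year, down to objects_2010 ...
    else
        raise exception 'no child table for %', now();
    end if;
    return null;  -- the parent table objects keeps no rows itself
end;
$$ language plpgsql;

create trigger objects_insert
    before insert on objects
    for each row execute procedure objects_insert_trigger();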

Any ideas or suggestions would be appreciated.

Simon
-- 
Simon

Simon Windsor
Eml: simon.wind...@cornfield.org.uk
Tel: 01454 617689
Mob: 07960 321599


TRANSACTION NOTIFICATION IN POSTGRESQL

2019-08-15 Thread Olagoke Akinyemi
Hello,

I need guidance on sending a notification (data) to a SOAP server when a
transaction occurs in a PostgreSQL server. How can I accomplish this?
Your help will be highly appreciated.

Thank you.

Regards,
Sahoong


Is it possible to save some database in user's harddisk

2019-08-15 Thread jaehyun1986.lee
Hi,

I am a beginner with PostgreSQL. Is it possible to store some database on the
user's hard disk? There is a main server, but I want to save some data, such
as preferences, on the user's hard disk (not the main server). Thank you for
your kind comments.

Best regards,
Jaehyun

Sent from my Samsung Galaxy smartphone.