Re: timestamp out of range while casting return value to function's return type

2019-08-23 Thread Tom Lane
Thiemo Kellner  writes:
> I created a function that ought to return a timestamptz (likewise a
> plain timestamp), but calling it I get the mentioned error. What do I
> miss?

Whether that:

>  return '294277-01-01 00:59:59.99'::timestamptz;

is in range or not depends on your timezone setting.  It would
be considered in-range in UTC+1 or further east (so that the date
wraps back to 294276 AD); in or west of Greenwich, not so much.
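
A quick demonstration of this (a sketch; the zone names are
illustrative, and the documented upper limit on a 64-bit build is
294276-12-31 23:59:59.999999 UTC):

SET TIME ZONE 'Europe/Vienna';  -- UTC+1 in January
SELECT '294277-01-01 00:59:59.99'::timestamptz;
-- ok: this is 294276-12-31 23:59:59.99 UTC, just inside the range

SET TIME ZONE 'UTC';
SELECT '294277-01-01 00:59:59.99'::timestamptz;
-- ERROR:  timestamp out of range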

> I tried to return a timestamp of the year 2000 to no avail.

Hardly ...

regards, tom lane




timestamp out of range while casting return value to function's return type

2019-08-23 Thread Thiemo Kellner

Hi all

I created a function that ought to return a timestamptz (likewise a
plain timestamp), but calling it I get the mentioned error. What do I
miss? I tried to return a timestamp of the year 2000 to no avail.


Call: select utils.get_max_timestamptz();
--

Function
--
create or replace function GET_MAX_TIMESTAMPTZ()
  returns timestamptz
  language plpgsql
  immutable
  -- Include the hosting schema in the search_path so that dblink
  -- can find the pglogger objects. There is no need to access
  -- objects in other schemas not covered by public.
  as
$body$
begin
-- highest timestamp on 64-bit Lubuntu, vanilla PostgreSQL 11.3
return '294277-01-01 00:59:59.99'::timestamptz;
end;
$body$;

Kind regards

Thiemo

--
S/MIME Public Key: https://oc.gelassene-pferde.biz/index.php/s/eJuAUFONag6ofnH
Signal (Safer than WhatsApp): +49 1578 7723737
Mobile: +41 78 947 36 21 | +49 1578 772 37 37
Tox-Id:  
B7282A9E31D944DDAD1ECC5B33C36AAE80B32D119AB7391D602C937A65E8CA0625C495C5322B






Re: slow queries on system tables

2019-08-23 Thread Adrian Klaver

On 8/23/19 3:47 PM, PegoraroF10 wrote:

This week we added just 5 new customers. Every schema has 100 tables, 300
indices, 400 triggers.
I cannot imagine our script doing 75000 updates just for adding those
schemas.


Very rough calculation:

https://www.postgresql.org/docs/11/catalog-pg-class.html

The catalog pg_class catalogs tables and most everything else that has 
columns or is otherwise similar to a table. This includes indexes (but 
see also pg_index), sequences (but see also pg_sequence), views, 
materialized views, composite types, and TOAST tables; see relkind. ...



From the above, not counting TOAST tables:

5 schemas * (100 tables + 300 indexes) = 2,000 new entries

In a previous post you said you had 190 schemas at that time:

190 * 400 = 76,000 entries
 +  2,000
 = 78,000

In pg_class rel* fields:

"It is updated by VACUUM, ANALYZE, and a few DDL commands such as CREATE 
INDEX."


Assuming the tables/indexes have changed enough to trigger a vacuum run:

78,000 relations * 1 vacuum = 78,000 updates
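
To check the estimate against a live system, something like this
(a sketch):

-- pg_class entries per schema, to compare with the numbers above:
SELECT n.nspname, count(*)
FROM pg_class c
JOIN pg_namespace n ON n.oid = c.relnamespace
GROUP BY n.nspname
ORDER BY count(*) DESC;

-- and the counters quoted later in the thread:
SELECT n_tup_upd, n_dead_tup, autovacuum_count, autoanalyze_count
FROM pg_stat_sys_tables
WHERE relname = 'pg_class';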














--
Adrian Klaver
adrian.kla...@aklaver.com




Re: slow queries on system tables

2019-08-23 Thread Michael Lewis
Can you manually execute vacuum analyze on all three involved tables and
again share the plan for the same query? If it is significantly improved,
it would seem like following the recommendation to tune autovacuum (and
analyze) to be more frequent would be prudent.

You don't seem to have changed the default vacuum/analyze settings,
even though the defaults are suited only to bare-minimum hardware and
very lightly used databases.
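
For reference, the manual step suggested above would look like this
(a sketch, naming the three catalogs discussed in the thread):

VACUUM (ANALYZE, VERBOSE) pg_catalog.pg_class;
VACUUM (ANALYZE, VERBOSE) pg_catalog.pg_index;
VACUUM (ANALYZE, VERBOSE) pg_catalog.pg_attribute;

And, if that confirms the diagnosis, more aggressive settings in
postgresql.conf along these lines (values are illustrative only, not a
recommendation for every workload):

autovacuum_vacuum_scale_factor = 0.05
autovacuum_analyze_scale_factor = 0.02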


Re: slow queries on system tables

2019-08-23 Thread PegoraroF10
This week we added just 5 new customers. Every schema has 100 tables, 300
indices, 400 triggers. 
I cannot imagine our script doing 75000 updates just for adding those
schemas.








Re: slow queries on system tables

2019-08-23 Thread PegoraroF10
autovacuum_analyze_threshold = 50
autovacuum_analyze_scale_factor = 0.1







Re: slow queries on system tables

2019-08-23 Thread PegoraroF10
https://explain.depesz.com/s/5Rrd

EXPLAIN (ANALYZE, COSTS, VERBOSE, BUFFERS)
SELECT current_database() AS TABLE_CATALOG,
       n.nspname AS TABLE_SCHEMA,
       t.relname AS TABLE_NAME,
       t.oid AS TABLE_OID,
       current_database() AS INDEX_CATALOG,
       n.nspname AS INDEX_SCHEMA,
       c.relname AS INDEX_NAME,
       c.oid AS INDEX_OID,
       a.attname AS COLUMN_NAME,
       a.attnum AS COLUMN_POSITION,
       indoption[index] & 1 AS DESCENDING,
       indisunique::integer AS UNIQUE
FROM (SELECT generate_series(0, indnatts - 1), indrelid, indexrelid,
             indkey, indoption, indisunique, indisprimary
      FROM pg_index i
     ) i(index, indrelid, indexrelid, indkey, indoption, indisunique,
         indisprimary)
INNER JOIN pg_class c ON c.oid = indexrelid
INNER JOIN pg_class t ON t.oid = indrelid
INNER JOIN pg_namespace n ON n.oid = t.relnamespace
INNER JOIN pg_attribute a ON a.attrelid = indrelid
                         AND a.attnum = indkey[index]
WHERE n.nspname LIKE 'f10_0275, public'
  AND t.relname LIKE 'sys_var'
ORDER BY indisprimary::integer DESC, n.nspname, t.relname, c.relname,
         a.attnum;






Re: slow queries on system tables

2019-08-23 Thread Adrian Klaver

On 8/23/19 2:47 PM, PegoraroF10 wrote:
> Coming back to my problem: today it happened again, and that search
> is slow. To explain again, autovacuum and autoanalyze for the pg_*
> tables are not configured individually, so they use the defaults of
> scale_factor = 0.2 and threshold = 50. Today, when the speed problem
> came back, the view pg_stat_sys_tables showed that autovacuum ran 415
> times on pg_class this week alone, and that there were 75,187 updates
> on this table. How can this table have been updated 75,000 times in
> just one week? What


If you are still doing this:

https://www.postgresql.org/message-id/1563909633976-0.post%40n3.nabble.com

then it is not surprising.

> kind of statement updates this table? And why did autovacuum run 415
> times but autoanalyze only once?


Can you show the actual settings in postgresql.conf for:

autovacuum_analyze_threshold

autovacuum_analyze_scale_factor

Also do you have an include in postgresql.conf pointing to another file?

I know you said they are using the defaults, but the scale_factor you
show above is the default for autovacuum_vacuum_scale_factor. For
autovacuum_analyze_scale_factor the default is 0.1, which makes me
wonder.
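
One query answers both the effective values and where they come from,
include files included (a sketch; sourcefile is only visible to
superusers or members of pg_read_all_settings):

SELECT name, setting, source, sourcefile, sourceline
FROM pg_settings
WHERE name IN ('autovacuum_analyze_threshold',
               'autovacuum_analyze_scale_factor',
               'autovacuum_vacuum_scale_factor');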


> relname:              pg_class
> seq_scan:             5395512
> seq_tup_read:         38598962913
> idx_scan:             616674711
> idx_tup_fetch:        2219978581
> n_tup_ins:            2243
> n_tup_upd:            75187
> n_tup_del:            625
> n_tup_hot_upd:        44008
> n_live_tup:           169891
> n_dead_tup:           5645
> n_mod_since_analyze:  7435
> last_vacuum:          NULL
> last_autovacuum:      2019-08-22 19:58:48.497317
> last_analyze:         NULL
> last_autoanalyze:     2019-08-22 13:03:02.770579
> vacuum_count:         0
> autovacuum_count:     415
> analyze_count:        0
> autoanalyze_count:    1







--
Adrian Klaver
adrian.kla...@aklaver.com




Re: slow queries on system tables

2019-08-23 Thread Michael Lewis
Might the issue be with pg_index or pg_attribute rather than pg_class?
Is the query still slow? Do you have the output of EXPLAIN ANALYZE for
use on https://explain.depesz.com/, or the output of EXPLAIN (ANALYZE,
COSTS, VERBOSE, BUFFERS, FORMAT JSON) for use on the
http://tatiyants.com/pev/#/plans/new website?


Re: slow queries on system tables

2019-08-23 Thread PegoraroF10
Coming back to my problem: today it happened again, and that search is
slow. To explain again, autovacuum and autoanalyze for the pg_* tables
are not configured individually, so they use the defaults of
scale_factor = 0.2 and threshold = 50. Today, when the speed problem
came back, the view pg_stat_sys_tables showed that autovacuum ran 415
times on pg_class this week alone, and that there were 75,187 updates
on this table. How can this table have been updated 75,000 times in
just one week? What kind of statement updates this table? And why did
autovacuum run 415 times but autoanalyze only once?
relname:              pg_class
seq_scan:             5395512
seq_tup_read:         38598962913
idx_scan:             616674711
idx_tup_fetch:        2219978581
n_tup_ins:            2243
n_tup_upd:            75187
n_tup_del:            625
n_tup_hot_upd:        44008
n_live_tup:           169891
n_dead_tup:           5645
n_mod_since_analyze:  7435
last_vacuum:          NULL
last_autovacuum:      2019-08-22 19:58:48.497317
last_analyze:         NULL
last_autoanalyze:     2019-08-22 13:03:02.770579
vacuum_count:         0
autovacuum_count:     415
analyze_count:        0
autoanalyze_count:    1





Re: How to use brin_summarize_range

2019-08-23 Thread Michael Lewis
What is default_statistics_target set to? Or is there a custom
statistics value set for this table/column? Perhaps the planner is
making sub-optimal choices because it only has a vague idea about the
data and the histogram is not very informative. Planning time will
increase when the statistics target is increased, but perhaps 250
instead of the default 100 would give *enough* more information to
make better choices. Or perhaps your target is already at the maximum
of 10000, in which case I have no idea why btree would be chosen,
except perhaps if the correlation is too low for the optimizer to
consider BRIN best. What does pg_stats say about the column in
question?



Re: Question on pgwatch

2019-08-23 Thread Georg H.

Hi Bikram,

On 23.08.2019 at 22:10, Bikram Majumdar wrote:

Hi George,

So nice of you to respond, thanks. But my question is: how does it get
the IP address 172.17.0.2?

And how does one run psql to connect/log in to the test database (the
pgwatch2 configuration database) to add any database?


Thanks and regards,
Bikram


the pgwatch2 configuration database resides within the docker container 
when using the docker variant.


Either you've exposed its port on container creation/start, or you have
to go into the docker container (docker exec -it CONTAINERNAME
/bin/bash) and connect there with psql (pg_hba.conf should be in
/etc/postgresql/[version]/main or similar).
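
For example (a sketch; the container name and the pgwatch2/pgwatch2admin
credentials are the documented pgwatch2 docker defaults, so adjust to
your setup):

# if the port was published on container start, connect from the host:
psql -h 127.0.0.1 -p 5432 -U pgwatch2 -d pgwatch2

# otherwise, enter the container and connect locally:
docker exec -it pgwatch2 /bin/bash
psql -U pgwatch2 -d pgwatch2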


regards

Georg





Re: Question on pgwatch

2019-08-23 Thread Bikram Majumdar
Hi George,

So nice of you to respond, thanks. But my question is: how does it get
the IP address 172.17.0.2?

And how does one run psql to connect/log in to the test database (the
pgwatch2 configuration database) to add any database?

Thanks and regards,
Bikram

On Fri, Aug 23, 2019 at 2:00 AM Georg H.  wrote:

> Hello Bikram,
>
> On 22.08.2019 at 23:50, Bikram MAJUMDAR wrote:
> > Hi,
> > Need help from anyone in the team who has worked on pgwatch2.
> >
> > Yesterday we installed  pgwatch2 docker image and started the container.
> > We are running the pgwatch2 container from the database server itself -
> > Now we want to add our databases for monitoring using the admin
> interface:
> We are opening the pgwatch2 admin interface at :8080/dbs. But when we
> try to add the database (cemtore) in the admin interface we get the
> following error:
> >
> > Could not connect to specified host (ignore if gatherer daemon runs on
> another host): FATAL: no pg_hba.conf entry for host "172.17.0.2", user
> "cemtore", database "cemtore", SSL off
> >
> > Any idea what we are doing wrong?
> >
> > We would now like to add the database manually following notes below:
> > Usage
> > by default the pgwatch2 configuration database running inside Docker is
> being monitored so that you can immediately see some graphs, but you should
> add new databases by opening the admin interface at 127.0.0.1:8080/dbs or
> logging into the Postgres config DB and inserting into
> pgwatch2.monitored_db table (db - pgwatch2 , default user/pw -
> pgwatch2/pgwatch2admin)
> >
> > But my question is: how do I run psql on my database server to
> > log in to this default pgwatch2 configuration DB?
> >
> > Bikram Majumdar
> > Sr Software Developer/DBA, Aqsacom Inc.
> > c. 1.972.365.3737
>
> As the message states (FATAL: no pg_hba.conf entry for host
> "172.17.0.2", user "cemtore", database "cemtore", SSL off), you have
> to edit the pg_hba.conf of the PostgreSQL cluster hosting the cemtore
> database to allow connections from pgwatch, as your current config
> doesn't allow them. You may also need to create the connecting user
> in your database with the required rights (if not added yet).
>
>
> regards
>
> Georg


Re: How to use brin_summarize_range

2019-08-23 Thread Jeremy Finzel
On Thu, Aug 22, 2019 at 1:46 PM Jeremy Finzel  wrote:

> Good afternoon!
>
> I am finding it difficult to understand how to maintain my BRIN index from
> the docs.  Specifically, this is the documentation on the
> function brin_summarize_range which isn't clear to me:
>
> brin_summarize_range(index regclass, blockNumber bigint) integer
>
>- summarize the page range covering the given block, if not already
>summarized
>
I answered my own question (I think): blockNumber corresponds, I
believe, to pages_per_range. So if I choose 64 as that value, I can run
the above function on 64 possible values. But perhaps I'm wrong about
that?


> There is no information on how a user is to actually find blockNumber,
> especially what blockNumber she might be interested in (like the end of the
> table).  On my table, my BRIN index is all of a sudden all out of whack and
> I'm trying to figure out why.  The planner doesn't choose it.  Even if I
> force a BRIN scan, it estimates way wrong, and performs terribly.  I do not
> have autosummarize on.  I am curious if vacuum somehow invalidated
> everything?
>
> When I ran brin_summarize_new_values, it immediately returned 0.  This
> table has 292 million rows, and a straightforward insert-only pattern, but
> we also prune data older than 1 year old. The BRIN index is on insert
> time.  It was working great up until just a bit ago.
>
> Any direction on using these brin functions would be very appreciated.
>

I am also noticing bad plan choices with BRIN indexes in the above
scenario. I have tried creating said index with pages_per_range values
of 64, 128, 500, 1000, and 1. The 1000 value works best and executed in
11 seconds.

However, regardless of pages_per_range, the planner is still choosing a
btree index on (id, id_2, insert_time), which takes 30 seconds to
execute. I have to SET enable_indexscan TO false to get the BRIN index
used, which is 3x faster. What gives?
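
For reference, a sketch of the summarization calls under discussion
(the index and table names are placeholders); per the docs quoted
above, blockNumber is a heap block number, and the page range covering
it is what gets summarized:

-- summarize every not-yet-summarized range:
SELECT brin_summarize_new_values('my_brin_idx');

-- summarize just the range covering the table's last heap block:
SELECT brin_summarize_range('my_brin_idx',
       (SELECT relpages - 1 FROM pg_class WHERE relname = 'my_table'));
-- (relpages is only as fresh as the last VACUUM/ANALYZE; for a live
--  figure use pg_relation_size('my_table') / 8192 on a default build)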

Any suggestions?

Thanks,
Jeremy


Re: How to install Postgres 12 beta with YUM

2019-08-23 Thread Adrian Klaver

On 8/23/19 3:38 AM, Devrim Gündüz wrote:

Hi,

v12 is in testing repos only. Find

[pgdg12-updates-testing]

in the repo file, and enable that one, please.


You may want to include the above here:

https://yum.postgresql.org/news-12snapshot-ready-for-testing.php



Regards, Devrim

On 22 August 2019 21:29:44 GMT+03:00, "Igal @ Lucee.org" 
 wrote:


I have installed the CentOS 7 PGDG file from
https://yum.postgresql.org/repopackages.php

If I `cat /etc/yum.repos.d/pgdg-redhat-all.repo` I see:

# PGDG Red Hat Enterprise Linux / CentOS / Scientific Linux stable
repositories:

[pgdg12]
name=PostgreSQL 12 $releasever - $basearch

baseurl=https://download.postgresql.org/pub/repos/yum/12/redhat/rhel-$releasever-$basearch
enabled=0
gpgcheck=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-PGDG
...

But when I try to find the version to install I do not see it, e.g.
`yum list postgresql* | grep 12` doesn't show anything with version 12.

Any ideas?

Thanks,

Igal Sapir
Lucee Core Developer
Lucee.org 





--
Adrian Klaver
adrian.kla...@aklaver.com




Re: How to install Postgres 12 beta with YUM

2019-08-23 Thread Devrim Gündüz
Hi,

v12 is in testing repos only. Find

[pgdg12-updates-testing]

in the repo file, and enable that one, please.
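
For example (a sketch; yum-config-manager comes from yum-utils, and the
package names follow the usual PGDG pattern):

sudo yum-config-manager --enable pgdg12-updates-testing
sudo yum install postgresql12-server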

Regards, Devrim

On 22 August 2019 21:29:44 GMT+03:00, "Igal @ Lucee.org"  wrote:
>I have installed the CentOS 7 PGDG file from 
>https://yum.postgresql.org/repopackages.php
>
>If I `cat /etc/yum.repos.d/pgdg-redhat-all.repo` I see:
>
># PGDG Red Hat Enterprise Linux / CentOS / Scientific Linux stable 
>repositories:
>
>[pgdg12]
>name=PostgreSQL 12 $releasever - $basearch
>baseurl=https://download.postgresql.org/pub/repos/yum/12/redhat/rhel-$releasever-$basearch
>enabled=0
>gpgcheck=1
>gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-PGDG
>...
>
>But when I try to find the version to install I do not see it, e.g.
>`yum 
>list postgresql* | grep 12` doesn't show anything with version 12.
>
>Any ideas?
>
>Thanks,
>
>Igal Sapir
>Lucee Core Developer
>Lucee.org 


Re: Importing from CSV, auto creating table?

2019-08-23 Thread Steve Atkins

On 21/08/2019 22:15, stan wrote:

I have a situation where we need to import data, as an interim measure,
from spreadsheets.

I have read up on \copy and COPY, but I do not see that either of these can
use the header from a CSV file to define a new table. Am I missing
something?

Also downloaded something called pgfutter, which I thought would do
this, but have not had any success with it. After I thought I had
figured out the arguments, it just seems to hang forever.

You might find https://github.com/wttw/csvpg useful. It creates tables 
with column names based on the CSV header, and data types intuited from 
the data.
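
For comparison, the stock two-step approach (a sketch; the table and
columns are illustrative, since COPY's HEADER option only skips the
header line rather than using it to build a table):

CREATE TABLE import_data (id integer, name text, created date);
\copy import_data FROM 'data.csv' WITH (FORMAT csv, HEADER true)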


(The only pre-built binary there is for Mac right now; I should at least 
add one for Windows).


Cheers,
  Steve






Re: Question on pgwatch

2019-08-23 Thread Georg H.

Hello Bikram,

On 22.08.2019 at 23:50, Bikram MAJUMDAR wrote:

Hi,
Need help from anyone in the team who has worked on pgwatch2.

Yesterday we installed  pgwatch2 docker image and started the container.
We are running the pgwatch2 container from the database server itself -
Now we want to add our databases for monitoring using the admin interface:
We are opening the pgwatch2 admin interface at :8080/dbs. But when we
try to add the database (cemtore) in the admin interface we get the
following error:

Could not connect to specified host (ignore if gatherer daemon runs on another host): FATAL: no pg_hba.conf 
entry for host "172.17.0.2", user "cemtore", database "cemtore", SSL off

Any idea what we are doing wrong?

We would now like to add the database manually following notes below:
Usage
by default the pgwatch2 configuration database running inside Docker is being 
monitored so that you can immediately see some graphs, but you should add new 
databases by opening the admin interface at 127.0.0.1:8080/dbs or logging into 
the Postgres config DB and inserting into pgwatch2.monitored_db table (db - 
pgwatch2 , default user/pw - pgwatch2/pgwatch2admin)

But my question is: how do I run psql on my database server to log in
to this default pgwatch2 configuration DB?

Bikram Majumdar
Sr Software Developer/DBA, Aqsacom Inc.
c. 1.972.365.3737


As the message states (FATAL: no pg_hba.conf entry for host
"172.17.0.2", user "cemtore", database "cemtore", SSL off), you have to
edit the pg_hba.conf of the PostgreSQL cluster hosting the cemtore
database to allow connections from pgwatch, as your current config
doesn't allow them. You may also need to create the connecting user in
your database with the required rights (if not added yet).
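
A sketch of the kind of entry needed in pg_hba.conf (the subnet is an
assumption; 172.17.0.0/16 is docker's default bridge network, and the
auth method is your choice):

# TYPE  DATABASE  USER     ADDRESS         METHOD
host    cemtore   cemtore  172.17.0.0/16   md5

followed by a configuration reload, e.g. SELECT pg_reload_conf();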



regards

Georg







Re: psql \copy hanging

2019-08-23 Thread Arnaud L.

On 22/08/2019 at 20:00, Jerry Sievers wrote:

Is the overnight run being done as the same DB user you are testing
with?

If not, then perhaps the automated user deviates in resource settings
and/or permissions, in turn resulting in sluggish performance caused by
thrashing and/or inferior query plan.



Hi Jerry,
yes, they both run under the same user. I've checked my other scheduled 
tasks and I don't see what could interfere, but then this is quite 
complex so I could easily overlook something.

Unfortunately, tonight's run failed for a totally different reason...

Thanks for your help


Cheers
--
Arnaud