pg_stat_activity.query_id <-> pg_stat_statements.queryid

2024-02-15 Thread Daniel Westermann (DWE)
Hi,

quick question: in which cases would a query_id shown in pg_stat_activity not 
show up in pg_stat_statements.queryid, assuming pg_stat_statements.max has not 
yet been reached?
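
(For context, a minimal way to look for such cases; a sketch, assuming 
compute_query_id is enabled and pg_stat_statements is installed. A stricter 
match would also compare dbid and userid:)

select a.pid, a.query_id, a.query
  from pg_stat_activity a
  left join pg_stat_statements s on s.queryid = a.query_id
 where a.query_id is not null
   and s.queryid is null;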

Regards
Daniel




Re: Corruption or wrong results with 14.10?

2023-11-23 Thread Daniel Westermann (DWE)
>smrdbprod=# select count(*) from smrr_mgr.formula where crart_id = 2150718 and chemin = '@';
> count
>-------
>     1
>(1 row)
>
>smrdbprod=# select crart_id, chemin, count(*) from smrr_mgr.formula group by crart_id, chemin having count(*) > 1;
> crart_id | chemin | count
>----------+--------+-------
>(0 rows)

Small update:

smrdbprod=# set enable_indexscan to off;
SET
smrdbprod=# select crart_id, chemin, count(*) from smrr_mgr.formula group by crart_id, chemin having count(*) > 1;
 crart_id | chemin | count
----------+--------+-------
  3505463 | @      |     2
   473731 | @      |     2
  1063238 | @      |     2
  1468088 | @      |     2
   462723 | @      |     2
  1099378 | @      |     2


I think I know what I have to do.
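
(Presumably something along the lines of checking and rebuilding the affected 
index. A sketch of what that could look like, not taken from the thread; the 
index name is hypothetical:)

create extension if not exists amcheck;
select bt_index_check('smrr_mgr.formula_pkey'::regclass, true);
reindex index concurrently smrr_mgr.formula_pkey;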




Corruption or wrong results with 14.10?

2023-11-23 Thread Daniel Westermann (DWE)
Hi,

I am seeing some very strange behavior on 14.10.

smrdbprod=# select version();
                                         version
------------------------------------------------------------------------------------------
 PostgreSQL 14.10 on x86_64-pc-linux-gnu, compiled by gcc (GCC) 8.5.0 20210514 (Red Hat 8.5.0-20), 64-bit
(1 row)

This is the statement which is executed:

insert into smrr_mgr.formula_composition_with_codes
select crart_id_par,
   product_par.code_fin code_fin_par,
   crfrm_chemin,
   is_primary,
   seq,
   crart_id_comp,
   product_comp.code_fin code_fin_comp,
   qty_per
from smrr_mgr.formula_composition,
 smrr_mgr.formula,
 smrr_mgr.product product_par,
 smrr_mgr.product product_comp
where formula_composition.crart_id_par = formula.crart_id
  and formula_composition.crfrm_chemin = formula.chemin
  and formula_composition.crart_id_par = product_par.id
  and formula_composition.crart_id_comp = product_comp.id
order by seq;

Here are the table definitions:

smrdbprod=# \d pg_class
 Table "pg_catalog.pg_class"
   Column| Type | Collation | Nullable | Default
-+--+---+--+-
 oid | oid  |   | not null |
 relname | name |   | not null |
 relnamespace| oid  |   | not null |
 reltype | oid  |   | not null |
 reloftype   | oid  |   | not null |
 relowner| oid  |   | not null |
 relam   | oid  |   | not null |
 relfilenode | oid  |   | not null |
 reltablespace   | oid  |   | not null |
 relpages| integer  |   | not null |
 reltuples   | real |   | not null |
 relallvisible   | integer  |   | not null |
 reltoastrelid   | oid  |   | not null |
 relhasindex | boolean  |   | not null |
 relisshared | boolean  |   | not null |
 relpersistence  | "char"   |   | not null |
 relkind | "char"   |   | not null |
 relnatts| smallint |   | not null |
 relchecks   | smallint |   | not null |
 relhasrules | boolean  |   | not null |
 relhastriggers  | boolean  |   | not null |
 relhassubclass  | boolean  |   | not null |
 relrowsecurity  | boolean  |   | not null |
 relforcerowsecurity | boolean  |   | not null |
 relispopulated  | boolean  |   | not null |
 relreplident| "char"   |   | not null |
 relispartition  | boolean  |   | not null |
 relrewrite  | oid  |   | not null |
 relfrozenxid| xid  |   | not null |
 relminmxid  | xid  |   | not null |
 relacl  | aclitem[]|   |  |
 reloptions  | text[]   | C |  |
 relpartbound| pg_node_tree | C |  |
Indexes:
"pg_class_oid_index" PRIMARY KEY, btree (oid)
"pg_class_relname_nsp_index" UNIQUE CONSTRAINT, btree (relname, 
relnamespace)
"pg_class_tblspc_relfilenode_index" btree (reltablespace, relfilenode)

smrdbprod=# \d smrr_mgr.formula_composition_with_codes
Table "smrr_mgr.formula_composition_with_codes"
Column | Type  | Collation | Nullable | Default
---+---+---+--+-
 crart_id_par  | numeric   |   |  |
 code_fin_par  | character varying(18) |   |  |
 crfrm_chemin  | character varying(2)  |   |  |
 is_primary| character varying(1)  |   |  |
 seq   | numeric(6,0)  |   |  |
 crart_id_comp | numeric   |   |  |
 code_fin_comp | character varying(18) |   |  |
 qty_per   | numeric   |   |  |
Indexes:
"formula_composition_with_codes_crart_id_comp_index" btree (crart_id_comp)
"formula_composition_with_codes_unique_index" UNIQUE, btree (crart_id_par, 
is_primary, crfrm_chemin, seq, crart_id_comp)


smrdbprod=# \d smrr_mgr.formula_composition
 Table "smrr_mgr.formula_composition"
 Column |Type | Collation | Nullable | Default
+-+---+--+-
 crart_id_par   | numeric |   | not null |
 crfrm_chemin   | character varying(2)|   | not null |
 seq| numeric(6,0)|   | not null |
 qty| numeric(7,0)

Re: Understanding pg_stat_io.evictions

2023-07-31 Thread Daniel Westermann (DWE)
>> postgres=# insert into t select i,  md5(i::text), md5(i::text), md5(i::text) 
>> from generate_series(1,100) i;
>..
>>  client backend |   207 | normal
>>  client backend | 0 | vacuum
>>
>> Shouldn't these evictions show up under context bulkwrite? The description in 
>> the docs is:

>No, that's not the case because INSERT doesn't execute a bulk
>write. It is only performed during COPY FROM, and the write side of
>some other commands, such as CREATE TABLE AS (and some internal table-copy
>operations).

Thanks, makes sense.

>> As far as I understand this, a ring buffer is used in this case. Do I miss 
>> something?

>Maybe you're confusing it with bulk-read cases?

Yes, you're right.
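
(For completeness, the bulkwrite context and its ring buffer should only come 
into play for a genuine bulk write such as COPY. A sketch, reusing the table t 
from the original mail below; the file path is hypothetical:)

copy t from '/tmp/t.csv' (format csv);

select backend_type, evictions, context
  from pg_stat_io
 where backend_type = 'client backend'
   and object = 'relation';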

Thank you
Daniel


Understanding pg_stat_io.evictions

2023-07-28 Thread Daniel Westermann (DWE)
Hi,

I am trying to understand the evictions statistic in pg_stat_io. I know what 
evictions are, so this is not the question.
Given this:

postgres=# select version();
                                      version
-----------------------------------------------------------------------------------------
 PostgreSQL 17devel on x86_64-pc-linux-gnu, compiled by gcc (Debian 12.2.0-14) 12.2.0, 64-bit
(1 row)

postgres=# select pg_stat_reset_shared('io');
 pg_stat_reset_shared 
--
 
(1 row)

postgres=# create table t ( a int, b text, c text, d text );
CREATE TABLE

postgres=# insert into t select i,  md5(i::text), md5(i::text), md5(i::text) 
from generate_series(1,100) i;
INSERT 0 100

postgres=# select backend_type,evictions,context 
 from pg_stat_io 
where backend_type = 'client backend'
  and object ='relation';
  backend_type  | evictions |  context
----------------+-----------+-----------
 client backend |         0 | bulkread
 client backend |         0 | bulkwrite
 client backend |       207 | normal
 client backend |         0 | vacuum

Shouldn't these evictions show up under context bulkwrite? The description in 
the docs is:

"Number of times a block has been written out from a shared or local buffer in 
order to make it available for another use.

In context normal, this counts the number of times a block was evicted from a 
buffer and replaced with another block. In contexts bulkwrite, bulkread, and 
vacuum, this counts the number of times a block was evicted from shared buffers 
in order to add the shared buffer to a separate, size-limited ring buffer for 
use in a bulk I/O operation."

As far as I understand this, a ring buffer is used in this case. Am I missing 
something?

Many thanks in advance
Daniel




oid2name: Why table and not relation?

2022-03-03 Thread Daniel Westermann (DWE)
Hi,

given this simple example:

postgres=# create table t ( a int );
CREATE TABLE
postgres=# create index i on t(a);
CREATE INDEX
postgres=# select pg_relation_filepath('i');
 pg_relation_filepath 
--
 base/5/16388
(1 row)
postgres=# \! oid2name --oid 16388
From database "postgres":
  Filenode  Table Name
----------------------
     16388           i
postgres=# \! oid2name --table=i%
From database "postgres":
  Filenode  Table Name
----------------------
     16388           i
postgres=#

Is there a reason why the heading is called "Table Name" instead of "Relation 
Name"? 

The same is true for the parameters:
postgres-# \! oid2name --help | grep -iw table
  -f, --filenode=FILENODEshow info for table with given file node
  -o, --oid=OID  show info for table with given OID
  -t, --table=TABLE  show info for named table

This works with indexes as well, not only tables.
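
(As an aside, the same mapping can be done in plain SQL, without oid2name, for 
any relation kind; a sketch:)

postgres=# select pg_filenode_relation(0, 16388);   -- 0 = the database's default tablespace
postgres=# select relname, relkind from pg_class where pg_relation_filenode(oid) = 16388;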

Regards
Daniel



Re: WAL Archiving and base backup

2022-01-14 Thread Daniel Westermann (DWE)
>On 1/14/22 1:40 PM, Stephen Frost wrote:
>[snip]
>> We shouldn't be trying to provide
>> documentation around how to write a tool like pgbackrest, we should,
>> instead, have a tool like pgbackrest in core with its own documentation,
>> as most other RDBMS's do.

>That's an excellent solution to this problem.

While I know the reasons for not having something like pgbackrest in core, I 
think this is a major drawback for the project. People coming from commercial 
systems are used to having that tooling included by default.

+1 for having that in core

Regards
Daniel 



Re: Why does the OID jump by 3 when creating tables?

2021-10-30 Thread Daniel Westermann (DWE)
>>On Sat, Oct 30, 2021 at 10:55, Daniel Westermann (DWE) wrote:
>>Hi all,

>>as I could not find the reason in the source code, can someone tell me why 
>>the OID counter jumps by 3 between two create table statements?

>>postgres=# create table t1 ( a int );
>>CREATE TABLE
>>postgres=# create table t2 ( a int );
>>CREATE TABLE
>>postgres=# select oid,relname from pg_class where relname in ('t1','t2');
>>  oid  | relname 
>>---+-
>> 16453 | t1
>> 16456 | t2
>>(2 rows)

>>This does not seem to happen with other objects, e.g. namespaces:

>>postgres=# create schema a;
>>CREATE SCHEMA
>>postgres=# create schema b;
>>CREATE SCHEMA
>>postgres=# select oid,nspname from pg_namespace where nspname in ('a','b');
>>  oid  | nspname 
>>---+-
>> 16459 | a
>> 16460 | b
>>(2 rows)

>>... or indexes:

>>postgres=# select oid,relname from pg_class where relname in ('i1','i2');
>>  oid  | relname 
>>---+-
>> 16461 | i1
>> 16462 | i2


>When you create a table, it also creates two data types: tablename and 
>_tablename. For example, for your table t1, you should have a t1 type and a 
>_t1 type. Both have OIDs. On my cluster, your example gives me:

># select oid,relname from pg_class where relname in ('t1','t2');
>┌───┬─┐
>│  oid  │ relname │
>├───┼─┤
>│ 24635 │ t1      │
>│ 24638 │ t2      │
>└───┴─┘
>(2 rows)

>Time: 0.507 ms
># select oid, typname from pg_type where typname like '%t1' or typname like 
>'%t2' and oid>24000 order by oid;
>┌───┬─┐
>│  oid  │ typname │
>├───┼─┤
>│ 24636 │ _t1     │
>│ 24637 │ t1      │
>│ 24639 │ _t2     │
>│ 24640 │ t2      │
>└───┴─┘
>(4 rows)

>Time: 1.203 ms

>The jump between t1 OID (24635) and t2 OID (24638) is the _t1 data type OID 
>(24636) and the t1 data type OID (24637).
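
(The link between a table and these two types can also be queried directly, a 
sketch: typrelid points from the row type back to the table, and typelem points 
from the array type to the row type:)

select t.oid, t.typname
  from pg_type t
  left join pg_type e on t.typelem = e.oid
 where t.typrelid in ('t1'::regclass, 't2'::regclass)
    or e.typrelid in ('t1'::regclass, 't2'::regclass)
 order by t.oid;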

Thank you, Guillaume. 

Why does the OID jump by 3 when creating tables?

2021-10-30 Thread Daniel Westermann (DWE)
Hi all,

as I could not find the reason in the source code, can someone tell me why the 
OID counter jumps by 3 between two create table statements?

postgres=# create table t1 ( a int );
CREATE TABLE
postgres=# create table t2 ( a int );
CREATE TABLE
postgres=# select oid,relname from pg_class where relname in ('t1','t2');
  oid  | relname 
---+-
 16453 | t1
 16456 | t2
(2 rows)

This does not seem to happen with other objects, e.g. namespaces:

postgres=# create schema a;
CREATE SCHEMA
postgres=# create schema b;
CREATE SCHEMA
postgres=# select oid,nspname from pg_namespace where nspname in ('a','b');
  oid  | nspname 
---+-
 16459 | a
 16460 | b
(2 rows)

... or indexes:

postgres=# select oid,relname from pg_class where relname in ('i1','i2');
  oid  | relname 
---+-
 16461 | i1
 16462 | i2

Thanks in advance
Daniel




Re: Behavior change in PostgreSQL 14Beta3 or bug?

2021-09-06 Thread Daniel Westermann (DWE)
>It's a non-hot update, and so there is a single dead index tuple. You're 
>seeing the new optimization that makes vacuum skip indexes in marginal cases.

>Try running vacuum with index cleanup = on.
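
(For reference, that would be something like the following sketch, re-checking 
the visibility map afterwards:)

pgbench=# vacuum (index_cleanup on) t1;
pgbench=# select pg_visibility_map('t1'::regclass, 0);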

Thank you, Peter


Behavior change in PostgreSQL 14Beta3 or bug?

2021-09-06 Thread Daniel Westermann (DWE)
Hi,

while playing with PostgreSQL 14 Beta 3 I noticed a change when it comes to the 
visibility map and vacuum. Test case:


pgbench=# create table t1 ( a int, b text ) with ( fillfactor = 100 );
CREATE TABLE
pgbench=# insert into t1 select a, a::text from generate_series(1,100) a;
INSERT 0 100
pgbench=# create index i1 on t1 (a);
CREATE INDEX


pgbench=# select ctid,* from t1 order by 1 limit 5;
 ctid  | a | b 
---+---+---
 (0,1) | 1 | 1
 (0,2) | 2 | 2
 (0,3) | 3 | 3
 (0,4) | 4 | 4
 (0,5) | 5 | 5
(5 rows)


pgbench=# begin;
BEGIN
pgbench=*# update t1 set b ='xx' where a = 1;
UPDATE 1
pgbench=*# select ctid,* from t1 order by 1 limit 5;
 ctid  | a | b 
---+---+---
 (0,2) | 2 | 2
 (0,3) | 3 | 3
 (0,4) | 4 | 4
 (0,5) | 5 | 5
 (0,6) | 6 | 6
(5 rows)

pgbench=*# select ctid,* from t1 where a = 1;
  ctid   | a | b
---------+---+----
 (4,203) | 1 | xx
(1 row)

pgbench=*# commit;

pgbench=# select pg_visibility_map('t1'::regclass, 0);
 pg_visibility_map 
---
 (f,f)
(1 row)

pgbench=# vacuum t1;
VACUUM
pgbench=# select pg_visibility_map('t1'::regclass, 0);
 pg_visibility_map 
---
 (f,f)   

Re: Upgrading from 11 to 13

2021-03-30 Thread Daniel Westermann (DWE)
>On Tue, Mar 30, 2021 at 08:10:08AM -0700, Adrian Klaver wrote:
>> On 3/30/21 8:06 AM, Bruce Momjian wrote:
>> > What other software needs to upgrade through all intermediate versions?
>> 
>> Many variations of OS'es, though not all. One of the benefits of Ubuntu is
>> that you can jump from x.LTS to y.LTS over intermediate versions.

>Oh, yeah.  Anyway, any idea of how to more clearly state users don't
>need to upgrade to intermediate versions?

The best place would be the release notes, I guess. Right at the beginning here:

E.3.2. Migration to Version 13

A dump/restore using pg_dumpall or use of pg_upgrade or logical replication is 
required for those wishing to migrate data from any previous release. See 
Section 18.6 for general information on migrating to new major releases.

We could mention that pg_upgrade works across the supported major versions.
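
Something along these lines works when going directly from 11 to 13 (a sketch; 
the paths are hypothetical and follow the RPM layout, and --check only reports 
whether the clusters are compatible without doing the upgrade):

/usr/pgsql-13/bin/pg_upgrade \
  --old-bindir=/usr/pgsql-11/bin \
  --new-bindir=/usr/pgsql-13/bin \
  --old-datadir=/var/lib/pgsql/11/data \
  --new-datadir=/var/lib/pgsql/13/data \
  --check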


Regards
Daniel



Re: Upgrading from 11 to 13

2021-03-30 Thread Daniel Westermann (DWE)

>On Tue, Mar 30, 2021 at 04:34:34PM +0200, Guillaume Lelarge wrote:
>> Hi,
>> 
>> On Tue, Mar 30, 2021 at 16:10, Susan Joseph wrote:
>> 
>> I am currently using PostgreSQL 11.2 and would like to try and upgrade it
>> to the latest version 13.  Can I go straight from 11 to 13 or do I need 
>>to
>> upgrade to 12 first and then to 13?
>> 
>> 
>> You can go straight to 13.

>We get this question often.  Why do people feel they need to upgrade to
>intermediate releases?  If we knew, maybe we could better clarify this.

I believe this is coming from the Oracle world. There you cannot, e.g., go 
directly from 9 to 18. There are supported upgrade paths and you need to stick 
to those, but they are documented.

Regards
Daniel



Re: Compiler warnings on Debian 10

2021-02-11 Thread Daniel Westermann (DWE)
>> Nothing special. Configure and make log attached. The same procedure works 
>> fine on my local Debian 10.7.

>Quite odd.  The configure output clearly shows that it thought gnu_printf
>is fine:

>checking for printf format archetype... gnu_printf

>Possibly the corresponding section of config.log would provide more
>insight.

Please find it attached.

Regards
Daniel

config.log.gz
Description: config.log.gz


Compiler warnings on Debian 10

2021-02-10 Thread Daniel Westermann (DWE)
Hi,

I was wondering if someone already has seen these warnings on Debian 10 
(PostgreSQL 13.1):

../../../../src/include/port.h:176:70: warning: 'format' attribute argument not 
supported: gnu_printf
  [-Wignored-attributes]
extern int  pg_snprintf(char *str, size_t count, const char *fmt,...) 
pg_attribute_printf(3, 4);
  ^
../../../../src/include/c.h:127:49: note: expanded from macro 
'pg_attribute_printf'
#define pg_attribute_printf(f,a) __attribute__((format(PG_PRINTF_ATTRIBUTE, f, 
a)))
^
In file included from llvmjit_error.cpp:19:
In file included from ../../../../src/include/postgres.h:46:
In file included from ../../../../src/include/c.h:1318:
../../../../src/include/port.h:178:55: warning: 'format' attribute argument not 
supported: gnu_printf
  [-Wignored-attributes]
extern int  pg_sprintf(char *str, const char *fmt,...) 
pg_attribute_printf(2, 3);
   ^
../../../../src/include/c.h:127:49: note: expanded from macro 
'pg_attribute_printf'
#define pg_attribute_printf(f,a) __attribute__((format(PG_PRINTF_ATTRIBUTE, f, 
a)))
^
In file included from llvmjit_error.cpp:19:
In file included from ../../../../src/include/postgres.h:46:
In file included from ../../../../src/include/c.h:1318:
../../../../src/include/port.h:180:58: warning: 'format' attribute argument not 
supported: gnu_printf
  [-Wignored-attributes]
extern int  pg_fprintf(FILE *stream, const char *fmt,...) 
pg_attribute_printf(2, 3);
  ^

Configure was successful:
...
configure: creating ./config.status
config.status: creating GNUmakefile
config.status: creating src/Makefile.global
config.status: creating src/include/pg_config.h
config.status: creating src/include/pg_config_ext.h
config.status: creating src/interfaces/ecpg/include/ecpg_config.h
config.status: linking src/backend/port/tas/dummy.s to src/backend/port/tas.s
config.status: linking src/backend/port/posix_sema.c to 
src/backend/port/pg_sema.c
config.status: linking src/backend/port/sysv_shmem.c to 
src/backend/port/pg_shmem.c
config.status: linking src/include/port/linux.h to src/include/pg_config_os.h
config.status: linking src/makefiles/Makefile.linux to src/Makefile.port


This is the official Debian AMI on AWS:
postgres@pg13:~/postgresql-13.1$ cat /etc/debian_version 
10.8

Regards
Daniel



Re: PG 13 trusted extensions and pg_available_extensions

2020-09-25 Thread Daniel Westermann (DWE)
On Thu, Sep 24, 2020 at 10:58 AM Michael Paquier  wrote:
>>
>> On Wed, Sep 23, 2020 at 03:28:45PM +0000, Daniel Westermann (DWE) wrote:
>> > I was playing a bit with trusted extensions and wondered if there is
>> > a reason that the "trusted" flag is not exposed in pg_available_extensions.
>> > I believe that information would be quite useful so one can easily
>> > identify extensions that can be installed as "normal" user.
>>
>> Adding the trusted flag makes sense for visibility.  There is a bit
>> more that we could consider though?  For example, what about
>> "relocatable" and "requires"?

>+1, and also the schema (for non relocatable extensions).

+1



PG 13 trusted extensions and pg_available_extensions

2020-09-23 Thread Daniel Westermann (DWE)
Hi,

I was playing a bit with trusted extensions and wondered if there is a reason 
that the "trusted" flag is not exposed in pg_available_extensions. 
I believe that information would be quite useful so one can easily identify 
extensions that can be installed as a "normal" user.
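
(In the meantime, the per-version view already exposes it, if I'm not mistaken, 
so something like this works today; a sketch:)

select name, version, trusted, relocatable
  from pg_available_extension_versions
 where trusted;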

Regards
Daniel







Re: Out of memory with "create extension postgis"

2020-08-03 Thread Daniel Westermann (DWE)
>I am aware that the behavior is different from what we've seen last week but 
>this is how it looks today.
>Anything we missed or did not do correctly?

Finally, this can be reproduced quite easily by installing this extension: 
https://de.osdn.net/projects/pgstoreplans/downloads/72297/pg_store_plans12-1.4-1.el7.x86_64.rpm/
As soon as pg_store_plans is in shared_preload_libraries, the postgis extension 
can no longer be installed. This is the exact package list:

[root@rhel77 ~]# rpm -qa | egrep "postgres|postgis|store_plan"
postgresql12-server-12.3-5PGDG.rhel7.x86_64
postgresql12-contrib-12.3-5PGDG.rhel7.x86_64
postgresql12-12.3-5PGDG.rhel7.x86_64
postgis30_12-3.0.1-5.rhel7.x86_64
pg_store_plans12-1.4-1.el7.x86_64
postgresql12-libs-12.3-5PGDG.rhel7.x86_64
postgresql12-llvmjit-12.3-5PGDG.rhel7.x86_64

I will open an issue on the extension's GitHub repo.
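
(For anyone who wants to reproduce it, the steps boil down to this sketch:)

alter system set shared_preload_libraries = 'pg_store_plans';
-- restart the instance, then:
create extension postgis;   -- fails with "out of memory" in PortalContext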

Regards
Daniel






Re: Out of memory with "create extension postgis"

2020-08-03 Thread Daniel Westermann (DWE)
>> Here is a new one with bt at the end:

>That's just showing the stack when the backend is idle waiting for input.
>We need to capture the stack at the moment when the "out of memory" error
>is reported (errfinish() should be the top of stack).

Then I am not sure how to do that. Here is a complete trace of what we did 
today:

-- First session 

$ psql
# select pg_backend_pid();
 pg_backend_pid
----------------
          22480
(1 row)


-- Second session

# gdb -p 22480
GNU gdb (GDB) Red Hat Enterprise Linux 7.6.1-115.el7
Copyright (C) 2013 Free Software Foundation, Inc.
License GPLv3+: GNU GPL version 3 or later 
This is free software: you are free to change and redistribute it.
There is NO WARRANTY, to the extent permitted by law.  Type "show copying"
and "show warranty" for details.
This GDB was configured as "x86_64-redhat-linux-gnu".
For bug reporting instructions, please see:
.
Attaching to process 22480
Reading symbols from /usr/pgsql-12/bin/postgres...Reading symbols from 
/usr/lib/debug/usr/pgsql-12/bin/postgres.debug...done.
done.
Reading symbols from /lib64/libpthread.so.0...(no debugging symbols 
found)...done.
[Thread debugging using libthread_db enabled]
Using host libthread_db library "/lib64/libthread_db.so.1".
Loaded symbols for /lib64/libpthread.so.0
Reading symbols from /lib64/libxml2.so.2...Reading symbols from 
/lib64/libxml2.so.2...(no debugging symbols found)...done.
(no debugging symbols found)...done.
Loaded symbols for /lib64/libxml2.so.2
Reading symbols from /lib64/libpam.so.0...Reading symbols from 
/lib64/libpam.so.0...(no debugging symbols found)...done.
(no debugging symbols found)...done.
Loaded symbols for /lib64/libpam.so.0
Reading symbols from /lib64/libssl.so.10...Reading symbols from 
/lib64/libssl.so.10...(no debugging symbols found)...done.
(no debugging symbols found)...done.
Loaded symbols for /lib64/libssl.so.10
Reading symbols from /lib64/libcrypto.so.10...Reading symbols from 
/lib64/libcrypto.so.10...(no debugging symbols found)...done.
(no debugging symbols found)...done.
Loaded symbols for /lib64/libcrypto.so.10
Reading symbols from /lib64/libgssapi_krb5.so.2...Reading symbols from 
/lib64/libgssapi_krb5.so.2...(no debugging symbols found)...done.
(no debugging symbols found)...done.
Loaded symbols for /lib64/libgssapi_krb5.so.2
Reading symbols from /lib64/librt.so.1...(no debugging symbols found)...done.
Loaded symbols for /lib64/librt.so.1
Reading symbols from /lib64/libdl.so.2...(no debugging symbols found)...done.
Loaded symbols for /lib64/libdl.so.2
Reading symbols from /lib64/libm.so.6...(no debugging symbols found)...done.
Loaded symbols for /lib64/libm.so.6
Reading symbols from /lib64/libldap-2.4.so.2...Reading symbols from 
/lib64/libldap-2.4.so.2...(no debugging symbols found)...done.
(no debugging symbols found)...done.
Loaded symbols for /lib64/libldap-2.4.so.2
Reading symbols from /lib64/libicui18n.so.50...Reading symbols from 
/lib64/libicui18n.so.50...(no debugging symbols found)...done.
(no debugging symbols found)...done.
Loaded symbols for /lib64/libicui18n.so.50
Reading symbols from /lib64/libicuuc.so.50...Reading symbols from 
/lib64/libicuuc.so.50...(no debugging symbols found)...done.
(no debugging symbols found)...done.
Loaded symbols for /lib64/libicuuc.so.50
Reading symbols from /lib64/libsystemd.so.0...Reading symbols from 
/lib64/libsystemd.so.0...(no debugging symbols found)...done.
(no debugging symbols found)...done.
Loaded symbols for /lib64/libsystemd.so.0
Reading symbols from /lib64/libc.so.6...(no debugging symbols found)...done.
Loaded symbols for /lib64/libc.so.6
Reading symbols from /lib64/ld-linux-x86-64.so.2...(no debugging symbols 
found)...done.
Loaded symbols for /lib64/ld-linux-x86-64.so.2
Reading symbols from /lib64/libz.so.1...Reading symbols from 
/lib64/libz.so.1...(no debugging symbols found)...done.
(no debugging symbols found)...done.
Loaded symbols for /lib64/libz.so.1
Reading symbols from /lib64/liblzma.so.5...Reading symbols from 
/lib64/liblzma.so.5...(no debugging symbols found)...done.
(no debugging symbols found)...done.
Loaded symbols for /lib64/liblzma.so.5
Reading symbols from /lib64/libaudit.so.1...Reading symbols from 
/lib64/libaudit.so.1...(no debugging symbols found)...done.
(no debugging symbols found)...done.
Loaded symbols for /lib64/libaudit.so.1
Reading symbols from /lib64/libkrb5.so.3...Reading symbols from 
/lib64/libkrb5.so.3...(no debugging symbols found)...done.
(no debugging symbols found)...done.
Loaded symbols for /lib64/libkrb5.so.3
Reading symbols from /lib64/libcom_err.so.2...Reading symbols from 
/lib64/libcom_err.so.2...(no debugging symbols found)...done.
(no debugging symbols found)...done.
Loaded symbols for /lib64/libcom_err.so.2
Reading symbols from /lib64/libk5crypto.so.3...Reading symbols from 
/lib64/libk5crypto.so.3...(no debugging symbols found)...done.
(no debugging symbols found)...done.

Re: Out of memory with "create extension postgis"

2020-07-30 Thread Daniel Westermann (DWE)
From: Tom Lane 
Sent: Wednesday, July 29, 2020 17:05
To: Daniel Westermann (DWE) 
Cc: pgsql-general@lists.postgresql.org 
Subject: Re: Out of memory with "create extension postgis" 
 
"Daniel Westermann (DWE)"  writes:
>> So this is what we got today. In the log file there is this:

>> 2020-07-29 16:33:23 CEST 101995 ERROR:  out of memory
>> 2020-07-29 16:33:23 CEST 101995 DETAIL:  Failed on request of size 8265691 
>> in memory context "PortalContext".
>> 2020-07-29 16:33:23 CEST 101995 STATEMENT:  create extension postgis;

>Is there not a dump of memory context sizes just before the "ERROR: out of
>memory" line?  It should look something like

>TopMemoryContext: 68720 total in 5 blocks; 17040 free (15 chunks); 51680 used
>  MessageContext: 8192 total in 1 blocks; 6880 free (1 chunks); 1312 used
>...
>  ErrorContext: 8192 total in 1 blocks; 7928 free (3 chunks); 264 used
>Grand total: 1063328 bytes in 190 blocks; 312376 free (159 chunks); 750952 used

>(this taken from an idle backend, so numbers from a backend that's hit
>OOM would be a lot larger).  If you don't see that then you must be
>using some logging mechanism that fails to capture the postmaster's
>stderr output, such as syslog.  If your postmaster start script doesn't
>actually send stderr to /dev/null, you might find the context map in some
>other log file.

Thanks for the hint, will check

>Umm ... you didn't issue a "bt" when you got to errfinish, so there's
>no useful info here.

Here is a new one with bt at the end:

Breakpoint 1 at 0x87e210: file elog.c, line 411.
Continuing.

Breakpoint 1, errfinish (dummy=dummy@entry=0) at elog.c:411
411 elog.c: No such file or directory.
Continuing.

Breakpoint 1, errfinish (dummy=dummy@entry=0) at elog.c:411
411 in elog.c
Missing separate debuginfos, use: debuginfo-install CGAL-4.7-1.rhel7.1.x86_64 
boost-date-time-1.53.0-27.el7.x86_64 boost-serialization-1.53.0-27.el7.x86_64 
boost-system-1.53.0-27.el7.x86_64 boost-thread-1.53.0-27.el7.x86_64 
gmp-6.0.0-15.el7.x86_64 jbigkit-libs-2.0-11.el7.x86_64 
json-c-0.11-4.el7_0.x86_64 libcurl-7.29.0-54.el7_7.2.x86_64 
libidn-1.28-4.el7.x86_64 libjpeg-turbo-1.2.90-8.el7.x86_64 
libssh2-1.8.0-3.el7.x86_64 libtiff-4.0.3-32.el7.x86_64 mpfr-3.1.1-4.el7.x86_64 
postgis30_12-3.0.1-5.rhel7.x86_64 protobuf-c-1.0.2-3.el7.x86_64 
sqlite33-libs-3.30.1-1.rhel7.x86_64
Continuing.

Program received signal SIGINT, Interrupt.
0x7f94f4af2e93 in __epoll_wait_nocancel () from /lib64/libc.so.6
#0  0x7f94f4af2e93 in __epoll_wait_nocancel () from /lib64/libc.so.6
#1  0x0073fdae in WaitEventSetWaitBlock (nevents=1, 
occurred_events=0x7ffcf3b4bc30, cur_timeout=-1, set=0x27c3718) at latch.c:1080
#2  WaitEventSetWait (set=0x27c3718, timeout=timeout@entry=-1, 
occurred_events=occurred_events@entry=0x7ffcf3b4bc30, nevents=nevents@entry=1, 
wait_event_info=wait_event_info@entry=100663296) at latch.c:1032
#3  0x0064fbd7 in secure_read (port=0x27c3900, ptr=0xd45a80 
, len=8192) at be-secure.c:185
#4  0x0065aa38 in pq_recvbuf () at pqcomm.c:964
#5  0x0065b655 in pq_getbyte () at pqcomm.c:1007
#6  0x00761aaa in SocketBackend (inBuf=0x7ffcf3b4bda0) at postgres.c:341
#7  ReadCommand (inBuf=0x7ffcf3b4bda0) at postgres.c:514
#8  PostgresMain (argc=, argv=argv@entry=0x27cb420, 
dbname=0x27cb2e8 "pcl_l800", username=) at postgres.c:4189
#9  0x00484022 in BackendRun (port=, port=) at postmaster.c:4448
#10 BackendStartup (port=0x27c3900) at postmaster.c:4139
#11 ServerLoop () at postmaster.c:1704
#12 0x006f14c3 in PostmasterMain (argc=argc@entry=3, 
argv=argv@entry=0x278c280) at postmaster.c:1377
#13 0x00484f23 in main (argc=3, argv=0x278c280) at main.c:228
A debugging session is active.

Inferior 1 [process 97279] will be detached.

Quit anyway? (y or n) Detaching from program: /usr/pgsql-12/bin/postgres, 
process 97279


>> Missing separate debuginfos, use: debuginfo-install 
>> CGAL-4.7-1.rhel7.1.x86_64 boost-date-time-1.53.0-27.el7.x86_64 
>> boost-serialization-1.53.0-27.el7.x86_64 boost-system-1.53.0-27.el7.x86_64 
>> boost-thread-1.53.0-27.el7.x86_64 gmp-6.0.0-15.el7.x86_64 
>> jbigkit-libs-2.0-11.el7.x86_64 json-c-0.11-4.el7_0.x86_64 
>> libcurl-7.29.0-54.el7_7.2.x86_64 libidn-1.28-4.el7.x86_64 
>> libjpeg-turbo-1.2.90-8.el7.x86_64 libssh2-1.8.0-3.el7.x86_64 
>> libtiff-4.0.3-32.el7.x86_64 mpfr-3.1.1-4.el7.x86_64 
>> postgis30_12-3.0.1-5.rhel7.x86_64 protobuf-c-1.0.2-3.el7.x86_64 
>> sqlite33-libs-3.30.1-1.rhel7.x86_64

>This is a little weird and scary; I would not expect most of those
>libraries to have gotten linked into a Postgres backend.  What
>extensions are you using?  (And what the devil would any of them
>want with sqlite or libcurl?  boost-thread is even scarier, because
>we absolutely do

Re: Out of memory with "create extension postgis"

2020-07-29 Thread Daniel Westermann (DWE)
>>>> "Daniel Westermann (DWE)"  writes:
>>>>> The process eats all the available memory and finally dies:
>>>>> # create extension postgis;
>>>>> ERROR:  out of memory
>>>>> DETAIL:  Failed on request of size 8265691 in memory context 
>>>>> "PortalContext".
>>>>> Time: 773569.877 ms (12:53.570)

>>> There is nothing in the log file except these (not from today, but the 
>>> messages are always the same):
>>> 2020-07-03 16:52:16 CEST 53617 LOG:  server process (PID 54070) was 
>>> terminated by signal 9: Killed

>>A process that was killed by the OOM killer would not have managed to
>>produce an "out of memory" ERROR report, so these two are different
>>symptoms.  You need to reproduce the first case, or you won't have
>>any luck setting an error breakpoint either.

>You're right, that was before we adjusted the oom behavior. Will get back once 
>I have more information.

So this is what we got today. In the log file there is this:

2020-07-29 16:33:23 CEST 101995 ERROR:  out of memory
2020-07-29 16:33:23 CEST 101995 DETAIL:  Failed on request of size 8265691 in 
memory context "PortalContext".
2020-07-29 16:33:23 CEST 101995 STATEMENT:  create extension postgis;
2020-07-29 16:35:00 CEST 106695 LOG:  disconnection: session time: 0:04:45.200 
user=monitor database=pcl_l800 host=127.0.0.1 port=52160
2020-07-29 16:35:00 CEST 107063 LOG:  connection received: host=127.0.0.1 
port=52212
2020-07-29 16:35:00 CEST 107063 LOG:  connection authorized: user=monitor 
database=pdb_l80_oiz application_name=pg_statsinfod
2020-07-29 16:35:00 CEST 53630 ALERT:  pg_statsinfo: memory swap size exceeds 
threshold in snapshot '2020-07-29 16:35:00' --- 17383584 KiB (threshold = 
100 KiB)
2020-07-29 16:35:15 CEST 107063 LOG:  disconnection: session time: 0:00:14.770 
user=monitor database=pdb_l80_oiz host=127.0.0.1 port=52212
2020-07-29 16:35:15 CEST 107269 LOG:  connection received: host=127.0.0.1 
port=52220

The GDB session shows this:
(gdb) set pagination off
(gdb) set logging file postgis.log
(gdb)
(gdb) set logging on
Copying output to postgis.log.
(gdb) b errfinish
Breakpoint 1 at 0x87e210: file elog.c, line 411.
(gdb) cont
Continuing.
Breakpoint 1, errfinish (dummy=dummy@entry=0) at elog.c:411
411     elog.c: No such file or directory.
(gdb) cont
Continuing.
Breakpoint 1, errfinish (dummy=dummy@entry=0) at elog.c:411
411     in elog.c
Missing separate debuginfos, use: debuginfo-install CGAL-4.7-1.rhel7.1.x86_64 
boost-date-time-1.53.0-27.el7.x86_64 boost-serialization-1.53.0-27.el7.x86_64 
boost-system-1.53.0-27.el7.x86_64 boost-thread-1.53.0-27.el7.x86_64 
gmp-6.0.0-15.el7.x86_64 jbigkit-libs-2.0-11.el7.x86_64 
json-c-0.11-4.el7_0.x86_64 libcurl-7.29.0-54.el7_7.2.x86_64 
libidn-1.28-4.el7.x86_64 libjpeg-turbo-1.2.90-8.el7.x86_64 
libssh2-1.8.0-3.el7.x86_64 libtiff-4.0.3-32.el7.x86_64 mpfr-3.1.1-4.el7.x86_64 
postgis30_12-3.0.1-5.rhel7.x86_64 protobuf-c-1.0.2-3.el7.x86_64 
sqlite33-libs-3.30.1-1.rhel7.x86_64
(gdb)
Continuing.


The GDB logfile shows this:
[root@tstm49012 ~]# cat postgis.log
Breakpoint 1 at 0x87e210: file elog.c, line 411.
Continuing.
Breakpoint 1, errfinish (dummy=dummy@entry=0) at elog.c:411
411     elog.c: No such file or directory.
Continuing.
Breakpoint 1, errfinish (dummy=dummy@entry=0) at elog.c:411
411     in elog.c
Missing separate debuginfos, use: debuginfo-install CGAL-4.7-1.rhel7.1.x86_64 
boost-date-time-1.53.0-27.el7.x86_64 boost-serialization-1.53.0-27.el7.x86_64 
boost-system-1.53.0-27.el7.x86_64 boost-thread-1.53.0-27.el7.x86_64 
gmp-6.0.0-15.el7.x86_64 jbigkit-libs-2.0-11.el7.x86_64 
json-c-0.11-4.el7_0.x86_64 libcurl-7.29.0-54.el7_7.2.x86_64 
libidn-1.28-4.el7.x86_64 libjpeg-turbo-1.2.90-8.el7.x86_64 
libssh2-1.8.0-3.el7.x86_64 libtiff-4.0.3-32.el7.x86_64 mpfr-3.1.1-4.el7.x86_64 
postgis30_12-3.0.1-5.rhel7.x86_64 protobuf-c-1.0.2-3.el7.x86_64 
sqlite33-libs-3.30.1-1.rhel7.x86_64
Continuing.
Program received signal SIGINT, Interrupt.
0x7f94f4af2e93 in __epoll_wait_nocancel () from /lib64/libc.so.6
Detaching from program: /usr/pgsql-12/bin/postgres, process 101995
[root@tstm49012 ~]#

Is that of any help?

Regards
Daniel










Re: Out of memory with "create extension postgis"

2020-07-28 Thread Daniel Westermann (DWE)
>>> "Daniel Westermann (DWE)"  writes:
>>>> The process eats all the available memory and finally dies:
>>>> # create extension postgis;
>>>> ERROR:  out of memory
>>>> DETAIL:  Failed on request of size 8265691 in memory context 
>>>> "PortalContext".
>>>> Time: 773569.877 ms (12:53.570)

>> There is nothing in the log file except these (not from today, but the 
>> messages are always the same):
>> 2020-07-03 16:52:16 CEST 53617 LOG:  server process (PID 54070) was 
>> terminated by signal 9: Killed

>A process that was killed by the OOM killer would not have managed to
>produce an "out of memory" ERROR report, so these two are different
>symptoms.  You need to reproduce the first case, or you won't have
>any luck setting an error breakpoint either.

You're right, that was before we adjusted the oom behavior. Will get back once 
I have more information.

Regards
Daniel


Re: Out of memory with "create extension postgis"

2020-07-28 Thread Daniel Westermann (DWE)
>"Daniel Westermann (DWE)"  writes:
>> we have a very strange behavior on PostgreSQL 12.3 when we try to create the 
>> extension postgis. Postgres and postgis have both been installed from 
>> packages:
>> ...
>> The process eats all the available memory and finally dies:
>> # create extension postgis;
>> ERROR:  out of memory
>> DETAIL:  Failed on request of size 8265691 in memory context "PortalContext".
>> Time: 773569.877 ms (12:53.570)

>Quite odd.  There should have been a memory context dump written to the
>postmaster's stderr, can you show that?  Also possibly useful would be
>a backtrace (set a breakpoint at errfinish):

>https://wiki.postgresql.org/wiki/Generating_a_stack_trace_of_a_PostgreSQL_backend

There is nothing in the log file except these (not from today, but the messages 
are always the same):
2020-07-03 16:52:16 CEST 53617 LOG:  server process (PID 54070) was terminated 
by signal 9: Killed
2020-07-03 16:52:16 CEST 53617 DETAIL:  Failed process was running: create 
extension postgis;
2020-07-03 16:52:16 CEST 53617 LOG:  terminating any other active server 
processes

I will take care of the backtrace

Regards
Daniel



Out of memory with "create extension postgis"

2020-07-28 Thread Daniel Westermann (DWE)
Hi,

we have a very strange behavior on PostgreSQL 12.3 when we try to create the 
extension postgis. Postgres and postgis have both been installed from packages:

postgresql12-server-12.3-5PGDG
postgis30_12-3.0.1-5.rhel7

The OS is RedHat 7.7.

Creating other extensions works fine, we only see this for postgis. A gdb 
session attached to the psql process while creating the extension gives:

Program received signal SIGINT, Interrupt.
0x7f19c0de9740 in __read_nocancel () from /lib64/libpthread.so.0
Detaching from program: /usr/pgsql-12/bin/psql, process 44202

The process eats all the available memory and finally dies:
# create extension postgis;
ERROR:  out of memory
DETAIL:  Failed on request of size 8265691 in memory context "PortalContext".
Time: 773569.877 ms (12:53.570)


Stats file from /proc:

# cat status
Name:   psql
Umask:  0022
State:  S (sleeping)
Tgid:   45958
Ngid:   0
Pid:    45958
PPid:   44075
TracerPid:      0
Uid:    280     280     280     280
Gid:    280     280     280     280
FDSize: 256
Groups: 280
VmPeak:   184604 kB
VmSize:   184600 kB
VmLck:         0 kB
VmPin:         0 kB
VmHWM:      4304 kB
VmRSS:      4304 kB
RssAnon:             980 kB
RssFile:            3324 kB
RssShmem:              0 kB
VmData:      672 kB
VmStk:       132 kB
VmExe:       604 kB
VmLib:     11288 kB
VmPTE:       180 kB
VmSwap:        0 kB
Threads:        1
SigQ:   0/15635
SigPnd: 
ShdPnd: 
SigBlk: 
SigIgn: 
SigCgt: 00018002
CapInh: 
CapPrm: 
CapEff: 
CapBnd: 001f
CapAmb: 
NoNewPrivs:     0
Seccomp:        0
Speculation_Store_Bypass:       thread vulnerable
Cpus_allowed:   ,,,
Cpus_allowed_list:      0-127
Mems_allowed:   
,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,0001
Mems_allowed_list:      0
voluntary_ctxt_switches:        47

stack file from /proc:
# cat stack
[] poll_schedule_timeout+0x55/0xc0
[] do_sys_poll+0x48d/0x590
[] SyS_poll+0x74/0x110
[] system_call_fastpath+0x25/0x2a
[] 0x

limits from /proc:
# cat limits
Limit                     Soft Limit           Hard Limit           Units
Max cpu time              unlimited            unlimited            seconds
Max file size             unlimited            unlimited            bytes
Max data size             unlimited            unlimited            bytes
Max stack size            8388608              unlimited            bytes
Max core file size        0                    0                    bytes
Max resident set          unlimited            unlimited            bytes
Max processes             4096                 15635                processes
Max open files            1024                 4096                 files
Max locked memory         65536                65536                bytes
Max address space         unlimited            unlimited            bytes
Max file locks            unlimited            unlimited            locks
Max pending signals       15635                15635                signals
Max msgqueue size         819200               819200               bytes
Max nice priority         0                    0
Max realtime priority     0                    0
Max realtime timeout      unlimited            unlimited            us

Stracing the psql process shows no activity. Looking at the used swap shows 
that the postmaster uses a huge amount:

find /proc -maxdepth 2 -path "/proc/[0-9]*/status" -readable -exec awk -v 
FS=":" '{process[$1]=$2;sub(/^[ \t]+/,"",process[$1]);} END 
{if(process["VmSwap"] && process["VmSwap"] != "0 kB") printf "%10s %-30s 
%20s\n",process["Pid"],process["Name"],process["VmSwap"]}' '{}' \;
...
     48043 psql                 1004 kB
     48044 postmaster       21948064 kB
     49059 postmaster           1008 kB
     52550 rhsmcertd             144 kB
...
 
Any hints what could be the issue? Can I provide any other information that 
would help troubleshooting this issue?

Thanks in advance
Daniel






Re: Temporary tablespaces on a RAM disk

2020-03-18 Thread Daniel Westermann (DWE)
>Laurenz Albe  writes:

>> On Tue, 2020-03-17 at 19:57 +0000, Daniel Westermann (DWE) wrote:
>>
>>> >> is someone using temporary tablespaces on a RAM disk ? Any experiences 
>>> >> with that?
>>> >> I did some quick tests and checked the archives but could not find any
>>> >> information that either confirmed it is a bad idea nor the opposite.
>>>
>>> >Taking a step back, wouldn't it be better to increase "work_mem"
>>> >and "temp_buffers" and set "temp_file_limit", so that temporary
>>> >files are avoided whenever possible and temporary tables are cached?
>>>
>>> I don't think you can avoid that for all cases, especially when working
>>> with large data sets. That was one of the reasons for my initial question.
>>
>> But with your proposed setup, any query that needs more temp space
>> than your RAM disk has will fail.  I don't think that is good for large
>> data sets.

>Perhaps not, but disk filling on the same volume as WAL is also a
>serious problem in case the process that eventually took the storage to
>100% and got an ENoSpace was the WAL write :-)

>Er, but any standard 1-tablespace configuration is at risk of that,
>generally.

Thank you all for your input.

Regards
Daniel


Re: Temporary tablespaces on a RAM disk

2020-03-17 Thread Daniel Westermann (DWE)
Hi  Laurenz,

>From: Laurenz Albe 
>Sent: Tuesday, March 17, 2020 20:50
>To: Daniel Westermann (DWE) ; 
>pgsql-general@lists.postgresql.org 
>Subject: Re: Temporary tablespaces on a RAM disk

>On Tue, 2020-03-17 at 09:49 +, Daniel Westermann (DWE) wrote:
>> is someone using temporary tablespaces on a RAM disk ? Any experiences with 
>> that?
>> I did some quick tests and checked the archives but could not find any 
>> information that either confirmed it is a bad idea nor the opposite.

>Taking a step back, wouldn't it be better to increase "work_mem"
>and "temp_buffers" and set "temp_file_limit", so that temporary
>files are avoided whenever possible and temporary tables are cached?

I don't think you can avoid that for all cases, especially when working with 
large data sets. That was one of the reasons for my initial question.

>Then essentially you are using the same memory for the same purposes,
>but in a supported fashion.

Agreed.

Regards
Daniel


Re: Temporary tablespaces on a RAM disk

2020-03-17 Thread Daniel Westermann (DWE)
>"Daniel Westermann (DWE)"  writes:

>>> People have asked about this before, so maybe it'd be an idea to make
>>> an explicit concept of a temp tablespace that only accepts temp tables,
>>> and do whatever is needful to make that robust.  But I've not heard of
>>> any work towards that.

>> That's what I thought temp_tablespaces are for ( plus sorts, temporary files 
>> getting created by materialized views ... )

>No ... temp_tablespaces says it's okay to use any of the listed
>tablespaces to keep temporary working files in, but it doesn't
>say that those tablespaces can *only* be used for that.

Ok, understood. For me, at least, it sounds weird to put anything other than 
real temporary stuff in there.

>The whole business of temp tables (as opposed to those invisible-to-SQL
>working files) in such a tablespace is a separate issue, too.  I think
>that the server would mostly survive having temp-table files disappear
>during reboot, but it's not an officially supported or tested scenario.

Thank you, that is what I wanted to know. It works for all the cases I tested, 
but it is not officially supported.

Regards
Daniel


Re: Temporary tablespaces on a RAM disk

2020-03-17 Thread Daniel Westermann (DWE)
Ron  writes:
> On 3/17/20 12:19 PM, David G. Johnston wrote:
>> There is a nice big bold warning callout in the documentation that covers
>> this explicitly.
>> https://www.postgresql.org/docs/12/manage-ag-tablespaces.html

>>> Warning
>>> Placing a tablespace on a temporary file system like a RAM disk risks the
>>> reliability of the entire cluster.

> But aren't temporary files removed when you restart Postgres?  (I'm assuming
> that temp_tablespaces is treated "the same" as data/pgsql_tmp.  Is that a
> false assumption?)

>I think the main issue is that there's no mechanism preventing you from
>putting regular (non-temp) tables into the "temporary" tablespace.
>If you do, crash recovery will get very unhappy when it tries to replay
>updates for those tables and they're not there.

Yes, that works and this is why I asked for temp_tablespaces which are supposed 
to hold temporary objects only, I believe.

>There are probably additional issues having to do with the tablespace
>directory(s) disappearing.  That, you could maybe finesse by having the
>postmaster start script re-create any missing directories.

Yes, but the server will start even if the tmpfs is gone; you just need to 
recreate it, which can also be handled via /etc/fstab at boot time.
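
(For illustration, a minimal sketch of such a setup; the mount point, size and 
uid are hypothetical, 26 being the postgres uid on RHEL-style systems:)

# /etc/fstab
tmpfs  /pgtemp  tmpfs  size=4G,uid=26,gid=26,mode=0700  0 0

-- in psql, once the mount exists:
postgres=# create tablespace temp_ts location '/pgtemp';
postgres=# alter system set temp_tablespaces = 'temp_ts';
postgres=# select pg_reload_conf();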

>All in all, I wouldn't try it without careful pre-testing of what happens
>after the RAM disk gets wiped.

Sure.

>People have asked about this before, so maybe it'd be an idea to make
>an explicit concept of a temp tablespace that only accepts temp tables,
>and do whatever is needful to make that robust.  But I've not heard of
>any work towards that.

That's what I thought temp_tablespaces are for ( plus sorts, temporary files 
getting created by materialized views ... )

Regards
Daniel




Re: Temporary tablespaces on a RAM disk

2020-03-17 Thread Daniel Westermann (DWE)
Daniel Westermann wrote:

> is someone using temporary tablespaces on a RAM disk? Any experiences with 
> that?
> I did some quick tests and checked the archives but could not find any 
> information that either confirmed it is a bad idea or the opposite.

>Well, we point our "stats_temp_directory" to a tmpfs partition, which is 
>probably fairly common (or so I'd guess).
>
>But a full tablespace with tables and everything? That sounds pretty risky for 
>anything other than a place to store data for transformation or summary of 
>data that could be recreated by restarting a process if the server goes down 
>in the middle.
>
>I think you'd be better off explaining what your goal is and then people could 
>give you better comments.

Thank you for the hint with stats_temp_directory. I am only talking about 
temp_tablespaces, which are supposed to hold temporary objects only. It is 
pretty clear not to use them for persistent objects, I know.

Regards
Daniel


Re: Temporary tablespaces on a RAM disk

2020-03-17 Thread Daniel Westermann (DWE)
On Tue, Mar 17, 2020 at 2:49 AM Daniel Westermann (DWE) wrote:
is someone using temporary tablespaces on a RAM disk ? Any experiences with 
that?
I did some quick tests and checked the archives but could not find any 
information that either confirmed it is a bad idea nor the opposite.

>There is a nice big bold warning callout in the documentation that covers this 
>explicitly.
>
>https://www.postgresql.org/docs/12/manage-ag-tablespaces.html

I know, I am talking about temp_tablespaces only, so temporary objects like 
temporary tables, sorts, etc.

Regards
Daniel



Temporary tablespaces on a RAM disk

2020-03-17 Thread Daniel Westermann (DWE)
Hi,

is someone using temporary tablespaces on a RAM disk? Any experiences with 
that?
I did some quick tests and checked the archives but could not find any 
information that either confirmed it is a bad idea or the opposite.

Thanks in advance
Daniel



Re: POLL: Adding transaction status to default psql prompt

2020-02-06 Thread Daniel Westermann (DWE)
>On Thu, Feb 06, 2020 at 03:54:48AM +0100, Vik Fearing wrote:
>> Please answer +1 if you want or don't mind seeing transaction status by
>> default in psql or -1 if you would prefer to keep the current default.
>
>+1

+1

Daniel




Re: How to execute .sql file inside a postgres schema

2019-04-29 Thread Daniel Westermann (DWE)


>>I have a database named “kbdb” that has a schema “kb”, and I want to 
>>execute the test.sql file inside this schema.



set schema 'your_schema';


Cheers,

Daniel