Re: When are largobject records TOASTed into pg_toast_2613?

2020-08-21 Thread Laurenz Albe
On Fri, 2020-08-21 at 18:59 +0200, Thomas Boussekey wrote:
> I wrote this BASH script to remove the TOAST table, if it may help anyone:
> 
> [...]
>   toast_pages="$(psql -U postgres --dbname=${database_name} -At --no-psqlrc -c "select relpages from pg_class where relname = 'pg_toast_2613';")"
>   toast_tuples="$(psql -U postgres --dbname=${database_name} -At --no-psqlrc -c "select reltuples from pg_class where relname = 'pg_toast_2613';")"
> [...]

Those are just the planner's estimates.
You need to ascertain that the table is *really* empty.
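
For example (run as superuser; TOAST tables can be queried directly; this is
a generic check, not something from the script):

-- real row count and physical size, not planner estimates:
SELECT count(*) FROM pg_toast.pg_toast_2613;        -- must be 0
SELECT pg_relation_size('pg_toast.pg_toast_2613');  -- size in bytes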

Yours,
Laurenz Albe
-- 
Cybertec | https://www.cybertec-postgresql.com





Re: When are largobject records TOASTed into pg_toast_2613?

2020-08-21 Thread Laurenz Albe
On Sat, 2020-08-22 at 10:47 +0900, Michael Paquier wrote:
> > Indeed, system tables have no TOAST tables in PostgreSQL, so I wonder
> > how your "pg_largeobject" table could have grown one.
> 
> FWIW, src/include/catalog/toasting.h is giving me a list of 28 catalog
> tables with a toast relation as of HEAD.

Yes, I was behind the times.

Catalog tables *do* have TOAST tables, but not all of them, and 
"pg_largeobject" is one that doesn't.

Yours,
Laurenz Albe
-- 
Cybertec | https://www.cybertec-postgresql.com





Re: When are largobject records TOASTed into pg_toast_2613?

2020-08-21 Thread Michael Paquier
On Fri, Aug 21, 2020 at 03:10:30PM +0200, Laurenz Albe wrote:
> Indeed, system tables have no TOAST tables in PostgreSQL, so I wonder
> how your "pg_largeobject" table could have grown one.

FWIW, src/include/catalog/toasting.h is giving me a list of 28 catalog
tables with a toast relation as of HEAD.
--
Michael




Re: pg_dump & RLS

2020-08-21 Thread Tom Lane
Eduard Català writes:
> - Is it possible to export, using pg_dump, only the rows that satisfy an
> RLS check?
> - Of course, yes: use the --enable-row-security option in pg_dump
> - Yes, but my RLS expression relies on a GUC:
> CREATE POLICY my_policy ON my_table USING (company_id =
> current_setting('company_id')::int);

That isn't the world's greatest design, but you should be
able to do something like

export PGOPTIONS="-c custom.company_id=42"
pg_dump ...

I kind of wonder why bother with RLS if any user can bypass it
just by changing a GUC, though.  It'd be better for the policy
to check something like role membership.
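
For instance, a policy keyed to role membership rather than a GUC might look
like this (a sketch; the per-company roles are hypothetical and must actually
exist, or pg_has_role raises an error):

-- one PostgreSQL role per company: company_1, company_2, ...
CREATE POLICY my_policy ON my_table
    USING (pg_has_role('company_' || company_id, 'MEMBER'));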

regards, tom lane




pg_dump & RLS

2020-08-21 Thread Eduard Català
Hi all,
Sorry if this is not the appropriate list; I think it is.

- Is it possible to export, using pg_dump, only the rows that satisfy an
RLS check?
- Of course, yes: use the --enable-row-security option in pg_dump
- Yes, but my RLS expression relies on a GUC:
CREATE POLICY my_policy ON my_table USING (company_id =
current_setting('company_id')::int);
Prior to starting the dump I need to set the company_id GUC in the
session; otherwise there is no way to export only some rows.

Any ideas?
- Execute a command before starting the export
- Some kind of login trigger for a special user
- ...

Thank you!


Re: pgbouncer bug?

2020-08-21 Thread Achilleas Mantzios



On 21/8/20 7:56 p.m., greigwise wrote:

Not sure if this is the right place to post this, but if not someone please
point me in the right direction.

My issue is with pgbouncer 1.14.   This does not seem to happen on 1.13.

If I do a service pgbouncer restart, then anytime I try to connect to my
databases via pgbouncer, I get ERROR: no such user regardless of what user
I'm using.  It's almost like it's not recognizing the auth_query I have
configured.  But then if I issue a reload, then it seems to work fine and I
no longer get the user not found.  The problem is easy enough to work around
as I don't restart pgbouncer all that much, but it doesn't seem like this is
probably the intended behavior.


You may go here:
https://github.com/pgbouncer/pgbouncer/commits/pgbouncer_1_14_0
and review all the commits between 1.13 and 1.14.
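
For reference while you compare versions: pgbouncer's documented default
auth_query is plain SQL run as the configured auth_user, so it is worth
running it by hand (with $1 replaced by a real user name) to confirm that
lookups work at all after the restart:

-- pgbouncer's documented default auth_query, checked manually:
SELECT usename, passwd FROM pg_shadow WHERE usename = 'some_user';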



Thanks,
Greig Wise

[Full Disclosure: I accidentally posted this message at first to the "base"
PostgreSQL group and am now moving to "PostgreSQL - general".]










Re: When are largobject records TOASTed into pg_toast_2613?

2020-08-21 Thread Thomas Boussekey
On Fri, 21 Aug 2020 at 16:45, Laurenz Albe wrote:

> On Fri, 2020-08-21 at 15:46 +0200, Thomas Boussekey wrote:
> > On Fri, 21 Aug 2020 at 15:10, Laurenz Albe wrote:
> > > On Fri, 2020-08-21 at 14:00 +0200, Thomas Boussekey wrote:
> > > > Working on a PostgreSQL 9.5 to 12 upgrade, I encounter problems on a
> PostgreSQL instance when I have
> > > >  an existing table `pg_toast_2613` into my application database.
> > > >
> > > > The upgrade process fails with the following error:
> > > >
> > > > ```
> > > > No match found in new cluster for old relation with OID 16619 in
> database "mirakl_db": "pg_toast.pg_toast_2613" which is the TOAST table for
> "pg_catalog.pg_largeobject"
> > > > No match found in new cluster for old relation with OID 16621 in
> database "mirakl_db": "pg_toast.pg_toast_2613_index" which is an index on
> "pg_toast.pg_toast_2613" which is the TOAST table for
> > > > "pg_catalog.pg_largeobject"
> > > > ```
> > > >
> > > > The `pg_upgrade` command fails when I have the table `pg_toast_2613`
> that exists, even if it is empty.
> > > > I read the PostgreSQL documentation, and I didn't find when the
> pg_largeobject table needs to be toasted.
> > > >  I thought it might be linked with records' size, but my queries
> below don't correlate that!
> > >
> > > Indeed, system tables have no TOAST tables in PostgreSQL, so I wonder
> > > how your "pg_largeobject" table could have grown one.
> > >
> > > Did you do any strange catalog modifications?
> >
> > Several years before I arrived in this company, the `pg_largeobject`
> table had been moved to a dedicated tablespace located on a low-IOPS
> mechanical disk.
> > One of my first projects when I started working in the company was to
> move the `pg_largeobject` table back to the default system tablespace.
> > This might be a side-effect of the migration.
>
> I just tried that on v12, and it didn't create a TOAST table.
>
> But obviously there is/was a bug somewhere.
>
> > > The safest way would be to upgrade with pg_dumpall/psql.
> >
> > The `pg_dumpall` command will also copy the content and the existence of
> > the `pg_toast_2613` table, won't it?
> > It might generate errors at execution time on the new instance?
> > Moreover, it will generate a large downtime
>
> No, pg_dumpall will not duplicate that strange TOAST table.
> It would be the only safe way to upgrade.
>
> If you can ascertain that the TOAST table is empty and you
> like to live dangerously, you can try:
>
> UPDATE pg_class SET reltoastrelid = 0 WHERE oid = 2613;
> UPDATE pg_class SET relkind = 'r' WHERE relname = 'pg_toast_2613';
> DELETE FROM pg_depend WHERE classid = 1259 AND refclassid = 1259 AND
> refobjid = 2613 AND objid = 'pg_toast.pg_toast_2613'::regclass;
> DROP TABLE pg_toast.pg_toast_2613;
>
Thanks Laurenz & Tom for this valuable information.

I wrote this Bash script to remove the TOAST table, in case it helps anyone
(the final steps after the pg_depend DELETE follow the statements Laurenz
gave above):

```sh
#!/usr/bin/env bash
#

set -euo pipefail

database_name="xxx"
postgresql_conf_file="/xxx/postgresql.conf"

# Step 1: check if table pg_toast_2613 exists
toast_count="$(psql -U postgres --dbname=${database_name} -At --no-psqlrc -c "select count(*) from pg_class where relname = 'pg_toast_2613';")"
echo -e "TOAST exists ::${toast_count}"

if [[ "${toast_count}" == "1" ]]; then
  # Step 2: check if table pg_toast_2613 has rows and pages
  toast_pages="$(psql -U postgres --dbname=${database_name} -At --no-psqlrc -c "select relpages from pg_class where relname = 'pg_toast_2613';")"
  toast_tuples="$(psql -U postgres --dbname=${database_name} -At --no-psqlrc -c "select reltuples from pg_class where relname = 'pg_toast_2613';")"

  echo -e "TOAST content pages ::${toast_pages}:: tuples ::${toast_tuples}"

  # Step 3 OPTIONAL: VACUUM FULL pg_largeobject if needed
  if [[ "${toast_tuples}" -gt "0" ]]; then

    echo -e "Start of vacuum"
    psql -U postgres --dbname=${database_name} -At --no-psqlrc -c "VACUUM FULL VERBOSE pg_largeobject;"
    echo -e "End of vacuum"

    ## after-VACUUM post-check
    toast_pages="$(psql -U postgres --dbname=${database_name} -At --no-psqlrc -c "select relpages from pg_class where relname = 'pg_toast_2613';")"
    toast_tuples="$(psql -U postgres --dbname=${database_name} -At --no-psqlrc -c "select reltuples from pg_class where relname = 'pg_toast_2613';")"

    echo -e "TOAST content pages ::${toast_pages}:: tuples ::${toast_tuples}"
  fi

  # Step 4: remove the TOAST information for pg_largeobject from pg_class
  echo -e "Remove TOAST on pg_largeobject"
  psql -U postgres --dbname=${database_name} -At --no-psqlrc -c "UPDATE pg_class SET reltoastrelid = 0 WHERE oid = 2613;"

  # Step 5: drop the pg_toast_2613 objects
  echo -e "Change pg_toast_2613 type to relation"
  psql -U postgres --dbname=${database_name} -At --no-psqlrc -c "UPDATE pg_class SET relkind = 'r' WHERE relname = 'pg_toast_2613';"

  echo -e "Delete pg_depend for pg_toast_2613"
  psql -U postgres --dbname=${database_name} -At --no-psqlrc -c "DELETE FROM pg_depend WHERE classid = 1259 AND refclassid = 1259 AND refobjid = 2613 AND objid = 'pg_toast.pg_toast_2613'::regclass;"

  echo -e "Drop table pg_toast_2613"
  psql -U postgres --dbname=${database_name} -At --no-psqlrc -c "DROP TABLE pg_toast.pg_toast_2613;"
fi
```

pgbouncer bug?

2020-08-21 Thread greigwise
Not sure if this is the right place to post this, but if not someone please
point me in the right direction.

My issue is with pgbouncer 1.14.   This does not seem to happen on 1.13.

If I do a service pgbouncer restart, then anytime I try to connect to my
databases via pgbouncer, I get ERROR: no such user regardless of what user
I'm using.  It's almost like it's not recognizing the auth_query I have
configured.  But then if I issue a reload, then it seems to work fine and I
no longer get the user not found.  The problem is easy enough to work around
as I don't restart pgbouncer all that much, but it doesn't seem like this is
probably the intended behavior.

Thanks,
Greig Wise

[Full Disclosure: I accidentally posted this message at first to the "base"
PostgreSQL group and am now moving to "PostgreSQL - general".]



--
Sent from: https://www.postgresql-archive.org/PostgreSQL-general-f1843780.html




Re: Query plan prefers hash join when nested loop is much faster

2020-08-21 Thread Michael Lewis
Your system is preferring sequential scan to
using test_result_module_result_id_idx in this case. What type of storage
do you use, what type of cache hits do you expect, and what do you have
random_page_cost set to? That comes to mind as a significant factor in
choosing index scans based on costs.
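
If it helps, experimenting with that setting is cheap (1.1 is just a value
commonly used for SSD-backed storage, not a recommendation specific to this
system):

SHOW random_page_cost;       -- 4.0 by default, tuned for spinning disks
SET random_page_cost = 1.1;  -- session-only experiment
EXPLAIN ANALYZE
SELECT test_result.status, count(test_result.status) AS "Count"
FROM test_result
JOIN module_result ON module_result.id = test_result.module_result_id
WHERE module_result.run_id = 158523
GROUP BY test_result.status;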


Query plan prefers hash join when nested loop is much faster

2020-08-21 Thread iulian dragos
Hi,

I am trying to understand why the query planner insists on using a hash
join, and how to make it choose the better option, which in this case would
be a nested loop. I have two tables:

// about 200 million rows
CREATE TABLE module_result(
id bigserial PRIMARY KEY,
name_id bigint NOT NULL references result_name(id),
run_id integer NOT NULL references run (id),
logs text NOT NULL,
status result_status NOT NULL
);
CREATE INDEX ON module_result (run_id);

// 500 million rows
CREATE TABLE test_result(
id bigserial PRIMARY KEY,
name_id bigint NOT NULL references result_name(id),
module_result_id bigint NOT NULL references module_result (id),
seconds float NOT NULL,
failure_msg text, -- Either a ... or an
[...]

->  Gather Merge  (cost=7771702.73..7771804.02 rows=6 width=12) (actual time=32341.970..32343.222 rows=6 loops=1)
      Workers Planned: 2
      Workers Launched: 2
      ->  Partial GroupAggregate  (cost=7770702.71..7770803.30 rows=3 width=12) (actual time=32340.278..32340.286 rows=2 loops=3)
            Group Key: test_result.status
            ->  Sort  (cost=7770702.71..7770736.23 rows=13408 width=4) (actual time=32339.698..32339.916 rows=4941 loops=3)
                  Sort Key: test_result.status
                  Sort Method: quicksort  Memory: 431kB
                  Worker 0:  Sort Method: quicksort  Memory: 433kB
                  Worker 1:  Sort Method: quicksort  Memory: 409kB
                  ->  Hash Join  (cost=586.15..7769783.54 rows=13408 width=4) (actual time=18112.078..32339.011 rows=4941 loops=3)
                        Hash Cond: (test_result.module_result_id = module_result.id)
                        ->  Parallel Seq Scan on test_result  (cost=0.00..7145224.72 rows=237703872 width=12) (actual time=0.034..15957.894 rows=190207740 loops=3)
                        ->  Hash  (cost=438.41..438.41 rows=11819 width=8) (actual time=3.905..3.905 rows=14824 loops=3)
                              Buckets: 16384  Batches: 1  Memory Usage: 708kB
                              ->  Index Scan using module_result_run_id_idx on module_result  (cost=0.57..438.41 rows=11819 width=8) (actual time=0.017..2.197 rows=14824 loops=3)
                                    Index Cond: (run_id = 158523)
Planning Time: 0.178 ms
Execution Time: 32343.330 ms

EXPLAIN
Time: 32.572s (32 seconds), executed in: 32.551s (32 seconds)

This plan takes about 30s to execute. If I turn off seqscan, I get a nested
loop join that takes about 0.02s to execute:

set enable_seqscan = off


SET
Time: 0.305s
> explain analyze select test_result.status, count(test_result.status) as
"Count" from test_result inner join module_result ON module_result.id =
test_result.module_result_id where module_result.run_id=158523
group by test_result.status


QUERY PLAN
--------------------------------------------------------------------------
Finalize GroupAggregate  (cost=34297042.16..34297143.50 rows=3 width=12) (actual time=15.014..15.015 rows=2 loops=1)
  Group Key: test_result.status
  ->  Gather Merge  (cost=34297042.16..34297143.44 rows=6 width=12) (actual time=15.005..15.850 rows=6 loops=1)
        Workers Planned: 2
        Workers Launched: 2
        ->  Partial GroupAggregate  (cost=34296042.13..34296142.72 rows=3 width=12) (actual time=12.937..12.940 rows=2 loops=3)
              Group Key: test_result.status
              ->  Sort  (cost=34296042.13..34296075.65 rows=13408 width=4) (actual time=12.339..12.559 rows=4941 loops=3)
[...]

Re: When are largobject records TOASTed into pg_toast_2613?

2020-08-21 Thread Laurenz Albe
On Fri, 2020-08-21 at 15:46 +0200, Thomas Boussekey wrote:
> On Fri, 21 Aug 2020 at 15:10, Laurenz Albe wrote:
> > On Fri, 2020-08-21 at 14:00 +0200, Thomas Boussekey wrote:
> > > Working on a PostgreSQL 9.5 to 12 upgrade, I encounter problems on a 
> > > PostgreSQL instance when I have
> > >  an existing table `pg_toast_2613` into my application database.
> > > 
> > > The upgrade process fails with the following error:
> > > 
> > > ```
> > > No match found in new cluster for old relation with OID 16619 in database 
> > > "mirakl_db": "pg_toast.pg_toast_2613" which is the TOAST table for 
> > > "pg_catalog.pg_largeobject"
> > > No match found in new cluster for old relation with OID 16621 in database 
> > > "mirakl_db": "pg_toast.pg_toast_2613_index" which is an index on 
> > > "pg_toast.pg_toast_2613" which is the TOAST table for
> > > "pg_catalog.pg_largeobject"
> > > ```
> > > 
> > > The `pg_upgrade` command fails when I have the table `pg_toast_2613` that 
> > > exists, even if it is empty.
> > > I read the PostgreSQL documentation, and I didn't find when the 
> > > pg_largeobject table needs to be toasted.
> > >  I thought it might be linked with records' size, but my queries below 
> > > don't correlate that!
> > 
> > Indeed, system tables have no TOAST tables in PostgreSQL, so I wonder
> > how your "pg_largeobject" table could have grown one.
> >
> > Did you do any strange catalog modifications?
> 
> Several years before I arrived in this company, the `pg_largeobject` table 
> had been moved to a dedicated tablespace located on a low-IOPS mechanical 
> disk.
> One of my first projects when I started working in the company was to move 
> the `pg_largeobject` table back to the default system tablespace.
> This might be a side-effect of the migration.

I just tried that on v12, and it didn't create a TOAST table.

But obviously there is/was a bug somewhere.

> > The safest way would be to upgrade with pg_dumpall/psql.
> 
> The `pg_dumpall` command will also copy the content and the existence of
> the `pg_toast_2613` table, won't it?
> It might generate errors at execution time on the new instance?
> Moreover, it will generate a large downtime

No, pg_dumpall will not duplicate that strange TOAST table.
It would be the only safe way to upgrade.

If you can ascertain that the TOAST table is empty and you
like to live dangerously, you can try:

UPDATE pg_class SET reltoastrelid = 0 WHERE oid = 2613;
UPDATE pg_class SET relkind = 'r' WHERE relname = 'pg_toast_2613';
DELETE FROM pg_depend WHERE classid = 1259 AND refclassid = 1259 AND refobjid = 
2613 AND objid = 'pg_toast.pg_toast_2613'::regclass;
DROP TABLE pg_toast.pg_toast_2613;

But I won't guarantee that that won't break your database.

In particular, it is a no-go unless the TOAST table is empty.

Yours,
Laurenz Albe
-- 
Cybertec | https://www.cybertec-postgresql.com





Re: BUG? Slave don't reconnect to the master

2020-08-21 Thread Jehan-Guillaume de Rorthais
On Thu, 20 Aug 2020 15:16:10 +0300
Олег Самойлов  wrote:
[...]
> >> Almost works fine, but sometimes, rather rarely, I detect that a slave
> >> doesn't reconnect to the new master after a failure. The first case is
> >> PostgreSQL-STOP, when I `kill` postgres on the master with the STOP signal
> >> to simulate a freeze. The slave doesn't reconnect to the new master, with
> >> errors in the log:
> >> 
> >> 18:02:56.236 [3154] FATAL:  terminating walreceiver due to timeout
> >> 18:02:56.237 [1421] LOG:  record with incorrect prev-link 0/1600DDE8 at
> >> 0/1A00DE10  
> > 
> > Do you have more logs from both side of the replication?  
> 
> There was nothing special. Just errors from previous tests.

OK

> > How do you build your standbys?
> Okay, I'll show you all the configs. Linux CentOS 7, PostgreSQL 11 from the
> official postgres repository.
> # rpm -q corosync
> corosync-2.4.5-4.el7.x86_64
> # rpm -q pacemaker
> pacemaker-1.1.21-4.el7.x86_64
> # rpm -q sbd
> sbd-1.4.0-15.el7.x86_64
> 
> The cluster is designed for three datacenters in one city with good
> connections between them, so it must survive the crash of one datacenter.
> Stonith is therefore not used; instead I use quorum and an sbd-based
> watchdog. Node names: tuchanka3a, tuchanka3b, tuchanka3c. Names for the
> floating IPs: krogan3 for the master; krogan3s1, krogan3s2 for the slaves.
> 
> postgresql common conf (not default options):
> 
> ident_file = '/var/lib/pgsql/pg_ident.conf'
> hba_file = '/var/lib/pgsql/pg_hba.conf'
> listen_addresses = '*'
> log_filename = 'postgresql.%F.log'
> wal_keep_segments = 1
> restart_after_crash = off
> wal_receiver_timeout = 0  # except in the PostgreSQL-STOP test, where
>                           # wal_receiver_timeout is the default 60s, of course
> shared_buffers = 32MB
> max_wal_size = 80MB

Based on your per-node setup, you can probably add
'synchronous_commit=remote_write' to the common conf.
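
If you would rather set it from SQL than edit the common conf, a quick
sketch (ALTER SYSTEM writes postgresql.auto.conf, which overrides it):

ALTER SYSTEM SET synchronous_commit = 'remote_write';
SELECT pg_reload_conf();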

> [...]
> pacemaker config, specific for this cluster:
> [...]

Why did you add "monitor interval=15"? No harm, but it is redundant with
"monitor interval=16 role=Master" and "monitor interval=17 role=Slave".

By the way, nice catch to exclude RO IP when master score is
negative!

> Test PostgreSQL-STOP (wal_receiver_timeout is default 60s):
> killall -STOP postgres
> 
> PostgreSQL logs:
> From old master, tuchanka 3a:
> [...]

OK

> All errors are from the former test, which was applied to slaves. (The host
> for tests is chosen randomly.)
> 
> New master tuchanka3b; some logs were copied from the old master during
> database restoration after the previous test by pg_basebackup. I separated
> them with a blank line.

You should probably consider putting your logs outside of your PGDATA, or
even better: send your PostgreSQL/Pacemaker logs to a dedicated syslog node
outside. Adding the hostname to the prefix of each log line would probably
help a lot.

> [...]
> 10:24:55.906 LOG:  entering standby mode
> 10:24:55.908 LOG:  redo starts at 0/1528
> 10:24:55.909 LOG:  consistent recovery state reached at 0/15002300
> 10:24:55.910 LOG:  database system is ready to accept read only connections
> 10:24:55.928 LOG:  started streaming WAL from primary at 0/1600 on tl 3
> 10:26:37.308 FATAL:  terminating walreceiver due to timeout

Timeout because of SIGSTOP on primary here.

> 10:26:37.308 LOG:  invalid record length at 0/1600C4D8: wanted 24, got 0
> 10:30:55.965 LOG:  received promote request

Promotion from Pacemaker here.

What happened during more than 4 minutes between the timeout and the promotion?

> 10:30:55.965 FATAL:  terminating walreceiver process due to administrator cmd
> 10:30:55.966 LOG:  redo done at 0/1600C4B0
> 10:30:55.966 LOG:  last completed transaction was at log time 10:25:38.76429
> 10:30:55.968 LOG:  selected new timeline ID: 4
> 10:30:56.001 LOG:  archive recovery complete
> 10:30:56.005 LOG:  database system is ready to accept connections

> The slave whose replication didn't reconnect, tuchanka3c. Again, I separated
> logs copied from the old master by a blank line:
> 
> [...]
> 
> 10:20:25.168 LOG:  database system was interrupted; last known up at 10:20:19
> 10:20:25.180 LOG:  entering standby mode
> 10:20:25.181 LOG:  redo starts at 0/1198
> 10:20:25.183 LOG:  consistent recovery state reached at 0/11000A68
> 10:20:25.183 LOG:  database system is ready to accept read only connections
> 10:20:25.193 LOG:  started streaming WAL from primary at 0/1200 on tl 3
> 10:25:05.370 LOG:  could not send data to client: Connection reset by peer
> 10:26:38.655 FATAL:  terminating walreceiver due to timeout
> 10:26:38.655 LOG:  record with incorrect prev-link 0/1200C4B0 at 0/1600C4D8

This message appear before the effective promotion of tuchanka3b. Do you have
logs about what happen *after* the promotion?

Reading this error, it seems like the record at 0/1600C4D8 references the
previous one in WAL 0/1200. So the file referenced as 0/16 either has
corrupted data or was 0/12 being recycled but not zeroed correctly, which
v11 always does no matter what (no wal_init_zero there).

That's why I'm wondering how you 

Re: Creating many tables gets logical replication stuck

2020-08-21 Thread Achilleas Mantzios

Dear Laurenz, thank you for your analysis and report.

On 21/8/20 4:00 p.m., Laurenz Albe wrote:

Reproducer on 12.4:

This is the same problem as the one reported here:
https://www.postgresql.org/message-id/6fa054d8-ad14-42a2-8926-5d79c97ecd65%40matrix.gatewaynet.com

Yours,
Laurenz Albe



--
Achilleas Mantzios
IT DEV Lead
IT DEPT
Dynacom Tankers Mgmt





Re: Fwd: When are largobject records TOASTed into pg_toast_2613?

2020-08-21 Thread Tom Lane
Thomas Boussekey  writes:
> On Fri, 21 Aug 2020 at 15:10, Laurenz Albe wrote:
>> The safest way would be to upgrade with pg_dumpall/psql.

> The `pg_dumpall` command will also copy the content and the existence of
> the `pg_toast_2613` table, won't it?

No.  pg_dumpall does not do anything with system catalogs per se.

> Moreover, it will generate a large downtime

Yeah.  It's a tradeoff of safety versus downtime.  If that toast table
is physically empty (I would not bet on it without checking), then you
could manually hack up pg_class to remove the toast table, but there's
a nontrivial risk of permanently hosing your database by messing that
up.

regards, tom lane




Fwd: When are largobject records TOASTed into pg_toast_2613?

2020-08-21 Thread Thomas Boussekey
-- Forwarded message -
From: Thomas Boussekey
Date: Fri, 21 Aug 2020 at 15:37
Subject: Re: When are largobject records TOASTed into pg_toast_2613?
To: Laurenz Albe


On Fri, 21 Aug 2020 at 15:10, Laurenz Albe wrote:

> On Fri, 2020-08-21 at 14:00 +0200, Thomas Boussekey wrote:
> > Working on a PostgreSQL 9.5 to 12 upgrade, I encounter problems on a
> PostgreSQL instance when I have
> >  an existing table `pg_toast_2613` into my application database.
> >
> > The upgrade process fails with the following error:
> >
> > ```
> > No match found in new cluster for old relation with OID 16619 in
> database "mirakl_db": "pg_toast.pg_toast_2613" which is the TOAST table for
> "pg_catalog.pg_largeobject"
> > No match found in new cluster for old relation with OID 16621 in
> database "mirakl_db": "pg_toast.pg_toast_2613_index" which is an index on
> "pg_toast.pg_toast_2613" which is the TOAST table for
> > "pg_catalog.pg_largeobject"
> > ```
> >
> > The `pg_upgrade` command fails when I have the table `pg_toast_2613`
> that exists, even if it is empty.
> > I read the PostgreSQL documentation, and I didn't find when the
> pg_largeobject table needs to be toasted.
> >  I thought it might be linked with records' size, but my queries below
> don't correlate that!
>
> Indeed, system tables have no TOAST tables in PostgreSQL, so I wonder
> how your "pg_largeobject" table could have grown one.
>
Several years before I arrived in this company, the `pg_largeobject` table
had been moved to a dedicated tablespace located on a low-IOPS mechanical
disk.
One of my first projects when I started working in the company was to move
the `pg_largeobject` table back to the default system tablespace.
This might be a side-effect of the migration.

>
> Did you do any strange catalog modifications?
>
> The safest way would be to upgrade with pg_dumpall/psql.
>
The `pg_dumpall` command will also copy the content and the existence of
the `pg_toast_2613` table, won't it?
It might generate errors at execution time on the new instance?
Moreover, it will generate a large downtime

> That should get rid of that data corruption.
>
> Yours,
> Laurenz Albe
> --
> Cybertec | https://www.cybertec-postgresql.com
>
>


Re: is date_part immutable or not?

2020-08-21 Thread Олег Самойлов


> On 21 Aug 2020, at 16:14, Laurenz Albe wrote:
> Two approaches:
> 
> 1. Use "timestamp without time zone".

Yep, I redefined it as timestamp without time zone. Thus I lose the timezone
information from the source, but happily there is no daylight saving time
shift in my country now.
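
In case it helps others, the variant that is accepted (since date_part(text,
timestamp) is immutable) looks roughly like this; the column list is
shortened here for brevity:

CREATE TABLE pglog.pglog (
    log_time timestamp(3) without time zone,
    message text
) PARTITION BY LIST (date_part('isodow', log_time));

-- one partition per ISO day of week, e.g. Monday:
CREATE TABLE pglog.pglog_monday PARTITION OF pglog.pglog
    FOR VALUES IN (1);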

> 
> 2. Partition in some other way, for example BY RANGE (log_time).
>   Your list partitions don't make a lot of sense to me.

This copies the default naming structure of the PostgreSQL log files.



Re: is date_part immutable or not?

2020-08-21 Thread Laurenz Albe
On Fri, 2020-08-21 at 14:57 +0300, Олег Самойлов wrote:
> Just create partitioned table for PostgreSQL logs
> 
> CREATE TABLE pglog.pglog (
>log_time timestamp(3) with time zone,
> [...]
> ) PARTITION BY LIST (date_part('isodow', log_time));
> 
> ERROR:  functions in partition key expression must be marked IMMUTABLE

Two approaches:

1. Use "timestamp without time zone".

2. Partition in some other way, for example BY RANGE (log_time); see the
   sketch below. Your list partitions don't make a lot of sense to me.
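
A sketch of option 2, partitioning by day ranges on the timestamptz column
(the partition name and the shortened column list are illustrative):

CREATE TABLE pglog.pglog (
    log_time timestamp(3) with time zone,
    message text
) PARTITION BY RANGE (log_time);

CREATE TABLE pglog.pglog_2020_08_21 PARTITION OF pglog.pglog
    FOR VALUES FROM ('2020-08-21') TO ('2020-08-22');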

Yours,
Laurenz Albe
-- 
Cybertec | https://www.cybertec-postgresql.com





Re: When are largobject records TOASTed into pg_toast_2613?

2020-08-21 Thread Laurenz Albe
On Fri, 2020-08-21 at 14:00 +0200, Thomas Boussekey wrote:
> Working on a PostgreSQL 9.5 to 12 upgrade, I encounter problems on a 
> PostgreSQL instance when I have
>  an existing table `pg_toast_2613` into my application database.
> 
> The upgrade process fails with the following error:
> 
> ```
> No match found in new cluster for old relation with OID 16619 in database 
> "mirakl_db": "pg_toast.pg_toast_2613" which is the TOAST table for 
> "pg_catalog.pg_largeobject"
> No match found in new cluster for old relation with OID 16621 in database 
> "mirakl_db": "pg_toast.pg_toast_2613_index" which is an index on 
> "pg_toast.pg_toast_2613" which is the TOAST table for
> "pg_catalog.pg_largeobject"
> ```
> 
> The `pg_upgrade` command fails when I have the table `pg_toast_2613` that 
> exists, even if it is empty.
> I read the PostgreSQL documentation, and I didn't find when the 
> pg_largeobject table needs to be toasted.
>  I thought it might be linked with records' size, but my queries below don't 
> correlate that!

Indeed, system tables have no TOAST tables in PostgreSQL, so I wonder
how your "pg_largeobject" table could have grown one.

Did you do any strange catalog modifications?

The safest way would be to upgrade with pg_dumpall/psql.
That should get rid of that data corruption.

Yours,
Laurenz Albe
-- 
Cybertec | https://www.cybertec-postgresql.com





Creating many tables gets logical replication stuck

2020-08-21 Thread Laurenz Albe
Reproducer on 12.4:

On the primary:

CREATE TABLE repli (id bigint PRIMARY KEY, val text NOT NULL);
CREATE PUBLICATION repli_pub FOR TABLE repli;

On the standby:

CREATE TABLE repli (id bigint PRIMARY KEY, val text NOT NULL);
CREATE SUBSCRIPTION repli_sub CONNECTION '...' PUBLICATION repli_pub;

Logical replication works fine.

Now connect to the primary and run:

BEGIN;
SELECT format('CREATE TABLE table%s (id SERIAL PRIMARY KEY, somename text UNIQUE);', foo)
FROM generate_series(1,13200) AS foo \gexec

13200 tables are created.
Don't commit the transaction yet and verify that logical replication is still 
working.

Now commit the transaction; logical replication stops working.

The log on the primary has lots of:

2020-08-21 12:41:47.899 CEST [25222] LOG:  connection received: host=[local]
2020-08-21 12:41:47.900 CEST [25222] LOG:  replication connection authorized: 
user=postgres application_name=repli_sub
2020-08-21 12:41:47.901 CEST [25222] ERROR:  replication slot "repli_sub" is 
active for PID 23911

The standby has

2020-08-21 12:41:47.897 CEST [25221] LOG:  logical replication apply worker for 
subscription "repli_sub" has started
2020-08-21 12:41:47.901 CEST [25221] ERROR:  could not start WAL streaming: 
ERROR:  replication slot "repli_sub" is active for PID 23911
2020-08-21 12:41:47.903 CEST [22901] LOG:  background worker "logical 
replication worker" (PID 25221) exited with exit code 1

When I try a fast shutdown on the primary server, the WAL sender just doesn't 
stop,
it can only be killed with SIGKILL.

I set "wal_sender_timeout = 6000s" and "log_min_messages = debug3" on the 
primary
and "wal_receiver_timeout = 6000s" on the standby and restarted the servers.

I see messages like this for the WAL sender:

2020-08-21 13:53:34.321 CEST [33594] LOG:  connection received: host=[local]
2020-08-21 13:53:34.321 CEST [33594] DEBUG:  postmaster child[33594]: starting 
with (
2020-08-21 13:53:34.321 CEST [33594] DEBUG: postgres
2020-08-21 13:53:34.321 CEST [33594] DEBUG:  )
2020-08-21 13:53:34.321 CEST [33594] DEBUG:  InitPostgres
2020-08-21 13:53:34.321 CEST [33594] LOG:  replication connection authorized: 
user=postgres application_name=repli_sub
2020-08-21 13:53:34.326 CEST [33594] DEBUG:  received replication command: 
IDENTIFY_SYSTEM
2020-08-21 13:53:34.326 CEST [33594] DEBUG:  received replication command: 
START_REPLICATION SLOT "repli_sub" LOGICAL 28/48643248 (proto_version '1', 
publication_names '"repli_pub"')
2020-08-21 13:53:34.326 CEST [33594] DEBUG:  cannot stream from 28/48643248, 
minimum is 28/48705180, forwarding
2020-08-21 13:53:34.326 CEST [33594] DEBUG:  find_in_dynamic_libpath: trying 
"/usr/pgsql-12/lib/pgoutput"
2020-08-21 13:53:34.326 CEST [33594] DEBUG:  find_in_dynamic_libpath: trying 
"/usr/pgsql-12/lib/pgoutput.so"
2020-08-21 13:53:34.326 CEST [33594] LOG:  starting logical decoding for slot 
"repli_sub"
2020-08-21 13:53:34.326 CEST [33594] DETAIL:  Streaming transactions committing 
after 28/48705180, reading WAL from 28/322FEDB8.
2020-08-21 13:53:34.326 CEST [33594] DEBUG:  switched to timeline 1 valid until 
0/0
2020-08-21 13:53:34.326 CEST [33594] DEBUG:  sending replication keepalive
2020-08-21 13:53:34.326 CEST [33594] LOG:  logical decoding found consistent 
point at 28/322FEDB8
2020-08-21 13:53:34.326 CEST [33594] DETAIL:  There are no running transactions.
2020-08-21 13:53:34.326 CEST [33594] DEBUG:  write 28/48705180 flush 
28/48705180 apply 28/48705180 reply_time 2020-08-21 13:53:34.326751+02
2020-08-21 13:53:34.335 CEST [33594] DEBUG:  spill 4096 changes in XID 
191852506 to disk
2020-08-21 13:53:34.349 CEST [33594] DEBUG:  spill 4096 changes in XID 
191852506 to disk
2020-08-21 13:53:34.362 CEST [33594] DEBUG:  spill 4096 changes in XID 
191852506 to disk
2020-08-21 13:53:34.377 CEST [33594] DEBUG:  spill 4096 changes in XID 
191852506 to disk
2020-08-21 13:53:34.391 CEST [33594] DEBUG:  spill 4096 changes in XID 
191852506 to disk
2020-08-21 13:53:34.406 CEST [33594] DEBUG:  spill 4096 changes in XID 
191852506 to disk
[...]
spill 4096 changes in XID 191852506 to disk
2020-08-21 13:53:38.388 CEST [33594] DEBUG:  spill 4096 changes in XID 
191852506 to disk
2020-08-21 13:53:38.405 CEST [33594] DEBUG:  spill 4096 changes in XID 
191852506 to disk
2020-08-21 13:53:38.423 CEST [33594] DEBUG:  spill 4096 changes in XID 
191852506 to disk
2020-08-21 13:53:38.438 CEST [33594] DEBUG:  purged committed transactions from 
0 to 0, xmin: 191852506, xmax: 191852506
2020-08-21 13:53:38.438 CEST [33594] DEBUG:  xmin: 191852506, xmax: 191852506, 
oldest running: 191852506, oldest xmin: 191852506
2020-08-21 13:53:38.439 CEST [33594] DEBUG:  purged committed transactions from 
0 to 0, xmin: 191852506, xmax: 191852506
2020-08-21 13:53:38.439 CEST [33594] DEBUG:  xmin: 191852506, xmax: 191852506, 
oldest running: 191852506, oldest xmin: 191852506
2020-08-21 13:53:38.501 CEST [33594] DEBUG:  found top level transaction 
191852506, with catalog changes
2020-08-21 

Re: When are largobject records TOASTed into pg_toast_2613?

2020-08-21 Thread Thomas Boussekey
Le ven. 21 août 2020 à 14:00, Thomas Boussekey 
a écrit :

> Hello all,
>
> Working on a PostgreSQL 9.5 to 12 upgrade, I encounter problems on a
> PostgreSQL instance when I have an existing table `pg_toast_2613` into my
> application database.
>
> The upgrade process fails with the following error:
>
> ```
> No match found in new cluster for old relation with OID 16619 in database
> "mirakl_db": "pg_toast.pg_toast_2613" which is the TOAST table for
> "pg_catalog.pg_largeobject"
> No match found in new cluster for old relation with OID 16621 in database
> "mirakl_db": "pg_toast.pg_toast_2613_index" which is an index on
> "pg_toast.pg_toast_2613" which is the TOAST table for
> "pg_catalog.pg_largeobject"
> ```
>
> The `pg_upgrade` command fails when I have the table `pg_toast_2613` that
> exists, even if it is empty.
> I read the PostgreSQL documentation, and I didn't find when the
> pg_largeobject table needs to be toasted. I thought it might be linked with
> records' size, but my queries below don't correlate that!
>
> I tried to dig into the data and found the following elements:
> * a record exists ONLY in one table (either the pg_largeobject table or
> pg_toast_2613, but not BOTH)
> * The `chunk_id` values present in the `pg_toast_2613` table don't represent
> real large objects (it is impossible to query their size)
> * The `chunk_id` values present in the `pg_toast_2613` table are not linked
> to existing documents in our application tables.
>
> I had a look on my 200+ production & test environments:
> * on half of these instances, the `pg_toast_2613` table doesn't exist
> * on 10% of them, the `pg_toast_2613` table exists and is empty
>
> Here are the points I want to clarify:
> - What is the aim of the `pg_toast_2613` table?
> - Does it contain REAL large objects or other useful data?
> - Is there a workaround to make the `pg_upgrade` successful?
>
> Thanks in advance for your help,
> Thomas
>
>
> # Appendix
>
> ```sql
> -- Getting the 30 first items of BOTH tables
> # SELECT loid, count(*) from pg_largeobject group by loid order by 1 limit
> 30;
>   loid  | count
> +---
>   24567 | 1
>   24588 | 1
>   24608 | 1
>   24635 | 1
>   24648 | 1
>   24699 | 1
>   27505 | 1
>   84454 |32
>   89483 | 1
>  109676 |34
>  109753 |34
>  109821 |34
>  109855 | 2
>  137150 | 6
>  141236 |29
>  141265 | 1
>  156978 |29
>  157036 |29
>  157065 | 2
>  161835 |29
>  161864 | 1
>  166275 |29
>  166333 |29
>  166404 |29
>  166439 | 2
>  171487 |29
>  171516 | 1
>  175825 |29
>  175854 | 1
>  180171 |29
> (30 rows)
>
> # SELECT chunk_id, count(*) from pg_toast.pg_toast_2613 group by chunk_id
> order by 1 limit 30;
>  chunk_id | count
> --+---
> 84455 | 2
> 84456 | 2
> 84457 | 2
> 84458 | 2
> 84459 | 2
> 84460 | 2
> 84461 | 2
> 84462 | 2
> 84463 | 2
> 84464 | 2
> 84465 | 2
> 84466 | 2
> 84467 | 2
> 84468 | 2
> 84469 | 2
> 84470 | 2
> 84471 | 2
> 84472 | 2
> 84473 | 2
> 84474 | 2
> 84475 | 2
> 84476 | 2
> 84477 | 2
> 84478 | 2
> 84479 | 2
> 84480 | 2
> 84481 | 2
> 84482 | 2
> 84483 | 2
> 84484 | 2
> (30 rows)
>
> -- Searching IDs 84454, 84455 into applicative table
> # SELECT * from mirakl_lob where blob in (84454, 84455);
>  mirakl_document_id | blob
> +---
>2859 | 84454
>
> SELECT length(lo_get (84455));
> ERROR:  large object 84455 does not exist
>
> SELECT length(lo_get (84454));
>  length
> 
>   64080
>
> ```
>

Additional information,

I restored a basebackup for an instance containing the `pg_toast_2613`
table.

At first glance, the TOAST table is 30 times the size of pg_largeobject
(see relpages in the first query below).
I tried to VACUUM FULL the `pg_largeobject` table, and the rows in the
`pg_toast_2613` table vanished!

Can it be a suitable workaround to apply the following logic in my
migration process?

* If the `pg_toast_2613` table exists:
- Perform `VACUUM FULL VERBOSE pg_largeobject`
- If `SELECT COUNT(*) FROM pg_toast_2613;` = 0
- unTOAST the `pg_largeobject` table (if a procedure exists)

```sql
# SELECT oid, relname, relnamespace, relfilenode, reltoastrelid, relpages,
reltuples
>from pg_class
>where relname like 'pg_toast_2613%'
>or relname like 'pg_largeobject%' order by relname;
  oid  |           relname            | relnamespace | relfilenode | reltoastrelid | relpages | reltuples
-------+------------------------------+--------------+-------------+---------------+----------+-----------
  2613 | pg_largeobject               |           11 |     5349225 |         16637 |      263 |      5662
  2683 | pg_largeobject_loid_pn_index |           11 | 


Re: is date_part immutable or not?

2020-08-21 Thread Олег Самойлов
My mistake. 

Schema  | pg_catalog
Name| date_part
Result data type| double precision
Argument data types | text, timestamp with time zone
Type| func
Volatility  | stable
Parallel| safe
Owner   | postgres
Security| invoker
Access privileges   |
Language| internal
Source code | timestamptz_part
Description | extract field from timestamp with time zone

is stable, but

Schema  | pg_catalog
Name| date_part
Result data type| double precision
Argument data types | text, date
Type| func
Volatility  | immutable
Parallel| safe
Owner   | postgres
Security| invoker
Access privileges   |
Language| sql
Source code | select pg_catalog.date_part($1, cast($2 as timestamp without time zone))
Description | extract field from date

is immutable and
Schema  | pg_catalog
Name| date_part
Result data type| double precision
Argument data types | text, timestamp without time zone
Type| func
Volatility  | immutable
Parallel| safe
Owner   | postgres
Security| invoker
Access privileges   |
Language| internal
Source code | timestamp_part
Description | extract field from timestamp

immutable too. But date_part('isodow', log_time::date) and date_part('isodow',
log_time::timestamp) don't work either.

How can I work around this?

> On 21 Aug 2020, at 14:57, Олег Самойлов wrote:
> 
> PostgreSQL 12.4
> Just create partitioned table for PostgreSQL logs
> 
> CREATE TABLE pglog.pglog (
>   log_time timestamp(3) with time zone,
>   user_name text,
>   database_name text,
>   process_id integer,
>   connection_from text,
>   session_id text,
>   session_line_num bigint,
>   command_tag text,
>   session_start_time timestamp with time zone,
>   virtual_transaction_id text,
>   transaction_id bigint,
>   error_severity text,
>   sql_state_code text,
>   message text,
>   detail text,
>   hint text,
>   internal_query text,
>   internal_query_pos integer,
>   context text,
>   query text,
>   query_pos integer,
>   location text,
>   application_name text
> ) PARTITION BY LIST (date_part('isodow', log_time));
> 
> ERROR:  functions in partition key expression must be marked IMMUTABLE
> 
> But, date_part is immutable
> 
> Schema  | pg_catalog
> Name| date_part
> Result data type| double precision
> Argument data types | text, time with time zone
> Type| func
> Volatility  | immutable
> Parallel| safe
> Owner   | postgres
> Security| invoker
> Access privileges   |
> Language| internal
> Source code | timetz_part
> Description | extract field from time with time zone
> 
> What is wrong here?
> 
> 





When are largobject records TOASTed into pg_toast_2613?

2020-08-21 Thread Thomas Boussekey
Hello all,

Working on a PostgreSQL 9.5 to 12 upgrade, I encounter problems on a
PostgreSQL instance when I have an existing table `pg_toast_2613` into my
application database.

The upgrade process fails with the following error:

```
No match found in new cluster for old relation with OID 16619 in database
"mirakl_db": "pg_toast.pg_toast_2613" which is the TOAST table for
"pg_catalog.pg_largeobject"
No match found in new cluster for old relation with OID 16621 in database
"mirakl_db": "pg_toast.pg_toast_2613_index" which is an index on
"pg_toast.pg_toast_2613" which is the TOAST table for
"pg_catalog.pg_largeobject"
```

The `pg_upgrade` command fails whenever the table `pg_toast_2613` exists,
even if it is empty.
I read the PostgreSQL documentation, and I didn't find when the
pg_largeobject table needs to be toasted. I thought it might be linked with
record size, but my queries below don't confirm that!

I tried to dig into the data and found the following elements:
* a record exists ONLY in one table (either the pg_largeobject table or
pg_toast_2613, but not BOTH)
* The `chunk_id` values present in the `pg_toast_2613` table don't represent
real large objects (it is impossible to query their size)
* The `chunk_id` values present in the `pg_toast_2613` table are not linked
to existing documents in our application tables.

I had a look on my 200+ production & test environments:
* on half of these instances, the `pg_toast_2613` table doesn't exist
* on 10% of them, the `pg_toast_2613` table exists and is empty

Here are the points I want to clarify:
- What is the aim of the `pg_toast_2613` table?
- Does it contain REAL large objects or other useful data?
- Is there a workaround to make the `pg_upgrade` successful?

Thanks in advance for your help,
Thomas


# Appendix

```sql
-- Getting the 30 first items of BOTH tables
# SELECT loid, count(*) from pg_largeobject group by loid order by 1 limit
30;
  loid  | count
+---
  24567 | 1
  24588 | 1
  24608 | 1
  24635 | 1
  24648 | 1
  24699 | 1
  27505 | 1
  84454 |32
  89483 | 1
 109676 |34
 109753 |34
 109821 |34
 109855 | 2
 137150 | 6
 141236 |29
 141265 | 1
 156978 |29
 157036 |29
 157065 | 2
 161835 |29
 161864 | 1
 166275 |29
 166333 |29
 166404 |29
 166439 | 2
 171487 |29
 171516 | 1
 175825 |29
 175854 | 1
 180171 |29
(30 rows)

# SELECT chunk_id, count(*) from pg_toast.pg_toast_2613 group by chunk_id
order by 1 limit 30;
 chunk_id | count
--+---
84455 | 2
84456 | 2
84457 | 2
84458 | 2
84459 | 2
84460 | 2
84461 | 2
84462 | 2
84463 | 2
84464 | 2
84465 | 2
84466 | 2
84467 | 2
84468 | 2
84469 | 2
84470 | 2
84471 | 2
84472 | 2
84473 | 2
84474 | 2
84475 | 2
84476 | 2
84477 | 2
84478 | 2
84479 | 2
84480 | 2
84481 | 2
84482 | 2
84483 | 2
84484 | 2
(30 rows)

-- Searching IDs 84454, 84455 into applicative table
# SELECT * from mirakl_lob where blob in (84454, 84455);
 mirakl_document_id | blob
+---
   2859 | 84454

SELECT length(lo_get (84455));
ERROR:  large object 84455 does not exist

SELECT length(lo_get (84454));
 length

  64080

```


is date_part immutable or not?

2020-08-21 Thread Олег Самойлов
PostgreSQL 12.4
Just create partitioned table for PostgreSQL logs

CREATE TABLE pglog.pglog (
   log_time timestamp(3) with time zone,
   user_name text,
   database_name text,
   process_id integer,
   connection_from text,
   session_id text,
   session_line_num bigint,
   command_tag text,
   session_start_time timestamp with time zone,
   virtual_transaction_id text,
   transaction_id bigint,
   error_severity text,
   sql_state_code text,
   message text,
   detail text,
   hint text,
   internal_query text,
   internal_query_pos integer,
   context text,
   query text,
   query_pos integer,
   location text,
   application_name text
) PARTITION BY LIST (date_part('isodow', log_time));

ERROR:  functions in partition key expression must be marked IMMUTABLE

But, date_part is immutable

Schema  | pg_catalog
Name| date_part
Result data type| double precision
Argument data types | text, time with time zone
Type| func
Volatility  | immutable
Parallel| safe
Owner   | postgres
Security| invoker
Access privileges   |
Language| internal
Source code | timetz_part
Description | extract field from time with time zone

What is wrong here?




Re: > ERROR: syntax error at or near "BYTE"

2020-08-21 Thread Paul Förster
Hi,

> On 21. Aug, 2020, at 10:19, postgresdba...@outlook.com wrote:
> 
> CREATE TABLE "A"
> (
>   "b" DATE,
>   "c  "   NUMBER,
>  " d "  VARCHAR2(255 BYTE),
>   "e "VARCHAR2(255 BYTE))
>   
>   When I create this table I get an error at BYTE. Why? Please help me,
> thanks.
>   
>   error: ERROR:  syntax error at or near "BYTE"

several problems:

a) Don't put blanks in your names, neither in table names nor in attributes.
b) PostgreSQL doesn't have NUMBER. Use integer instead.
c) PostgreSQL doesn't have VARCHAR2. Use text instead.

So, the correct statement would be:

create table a (
b date,
c integer,
d text,
e text
);

Hope this helps.

Cheers,
Paul



Re: Loading Oracle Spatial Data to Postgresql

2020-08-21 Thread Laurenz Albe
On Thu, 2020-08-20 at 14:30 +, Ko, Christina wrote:
> I am running into an issue using DMS to load Oracle spatial data (Oracle
> 11.2.0.3) into PostgreSQL.
> 
> Oracle table:
> 
> CREATE TABLE Spatial_Tbl (
>   ID        NUMBER(38,9),
>   P_ID      NUMBER(38,9),
>   GEOMETRY  MDSYS.SDO_GEOMETRY
> );

If the geometries are not exotic, oracle_fdw would do the trick.

You won't be able to install it on a hosted database though.

Yours,
Laurenz Albe
-- 
Cybertec | https://www.cybertec-postgresql.com





Re: > ERROR: syntax error at or near "BYTE"

2020-08-21 Thread Magnus Hagander
On Fri, Aug 21, 2020 at 10:33 AM postgresdba...@outlook.com wrote:

> CREATE TABLE "A"
> (
>   "b" DATE,
>   "c  "   NUMBER,
>  " d "  VARCHAR2(255 BYTE),
>   "e "VARCHAR2(255 BYTE))
>
>   When I create this table I get an error at BYTE. Why? Please help me,
> thanks.
>
>   error: ERROR:  syntax error at or near "BYTE"
>

This is not valid syntax in PostgreSQL (or I believe, in SQL in general).
This is Oracle syntax.

PostgreSQL does not have the number data type, so you'll eventually get a
problem there as well. Which data type to use instead depends on what data
you are actually going to store.

PostgreSQL does not have the varchar2 data type (so you will have to use
varchar).

And finally, varchar just takes a number, not the special construct with
BYTE. PostgreSQL varchars always limit the size by number of characters,
not bytes.
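
A quick way to see the character/byte distinction (the sample string is just
an illustration; the byte count assumes a UTF-8 database):

-- length() counts characters, octet_length() counts bytes:
SELECT length('héllo') AS chars,        -- 5
       octet_length('héllo') AS bytes;  -- 6 in UTF-8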

-- 
 Magnus Hagander
 Me: https://www.hagander.net/ 
 Work: https://www.redpill-linpro.com/ 


> ERROR: syntax error at or near "BYTE"

2020-08-21 Thread postgresdba...@outlook.com
CREATE TABLE "A"
(
  "b" DATE,
  "c  "   NUMBER,
 " d "  VARCHAR2(255 BYTE),
  "e "VARCHAR2(255 BYTE))

  When I create this table I get an error at BYTE. Why? Please help me, thanks.

  error: ERROR:  syntax error at or near "BYTE"