Re: Add minimal C example and SQL registration example for custom table access methods.

2024-05-24 Thread Fabrízio de Royes Mello
On Fri, May 24, 2024 at 3:11 PM Phil Eaton  wrote:

> > I think this should say something more like "Here is how an extension
> > SQL script might create a table access method handler".
>
> Fair point. It is referred to elsewhere [0] in docs as a "script
> file", so I've done that.
>
> > Shouldn't "mem_tableam_handler" be "my_tableam_handler"?
>
> Sorry about that, fixed.
>
> [0] https://www.postgresql.org/docs/current/extend-extensions.html
>
> Phil
>

Nice... LGTM!
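
For anyone skimming the thread, the C side of the example under
discussion boils down to something like this (abridged sketch, using the
corrected my_tableam_handler name):

#include "postgres.h"
#include "access/tableam.h"
#include "fmgr.h"

PG_MODULE_MAGIC;

PG_FUNCTION_INFO_V1(my_tableam_handler);

static const TableAmRoutine my_methods = {
	.type = T_TableAmRoutine,
	/* Methods from TableAmRoutine omitted from example, but all
	 * non-optional ones must be provided here. */
};

Datum
my_tableam_handler(PG_FUNCTION_ARGS)
{
	PG_RETURN_POINTER(&my_methods);
}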

-- 
Fabrízio de Royes Mello


Re: Adding the extension name to EData / log_line_prefix

2024-05-13 Thread Fabrízio de Royes Mello
On Mon, May 13, 2024 at 5:51 PM Andres Freund  wrote:
>
> Hi,
>
> It can be very useful to look at the log messages emitted by a larger
> number of postgres instances to see if anything unusual is happening.
> E.g. checking whether there are an increased number of internal, IO,
> corruption errors (and LOGs too, because we emit plenty bad things as
> LOG). One difficulty is that extensions tend to not categorize their
> errors. But unfortunately errors in extensions are hard to distinguish
> from errors emitted by postgres.
>
> A related issue is that it'd be useful to be able to group log messages
> by extension, to e.g. see which extensions are emitting
> disproportionally many log messages.
>
> Therefore I'd like to collect the extension name in elog/ereport and
> add a matching log_line_prefix escape code.
>

I like the idea... It would be very helpful for troubleshooting problems
in production.


> It's not entirely trivial to provide errfinish() with a parameter
> indicating the extension, but it's doable:
>
> 1) Have PG_MODULE_MAGIC also define a new variable for the extension
>    name, empty at that point
>
> 2) In internal_load_library(), look up that new variable, and fill it
>    with a, mangled, libname.
>
> 4) In elog.h, define a new macro depending on BUILDING_DLL (if it is
>    set, we're in the server, otherwise an extension). In the backend
>    itself, define it to NULL, otherwise to the variable created by
>    PG_MODULE_MAGIC.
>
> 5) In elog/ereport/errsave/... pass this new variable to
>    errfinish/errsave_finish.
>

Then every extension should define its own Pg_extension_filename, right?
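
If I'm reading item 4 right, the extension-facing side in elog.h would be
something along these lines (rough sketch only; the macro name here is my
invention, the variable name comes from your prototype):

#ifdef BUILDING_DLL
/* building the server itself: no extension name to report */
#define PG_ELOG_EXTENSION_NAME	NULL
#else
/* building an extension: PG_MODULE_MAGIC defined the variable */
#define PG_ELOG_EXTENSION_NAME	Pg_extension_filename
#endif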


> I've attached a *very rough* prototype of this idea. My goal at this
> stage was just to show that it's possible, not for the code to be in a
> reviewable state.
>
>
> Here's e.g. what this produces with log_line_prefix='%m [%E] '
>
> 2024-05-13 13:50:17.518 PDT [postgres] LOG:  database system is ready to accept connections
> 2024-05-13 13:50:19.138 PDT [cube] ERROR:  invalid input syntax for cube at character 13
> 2024-05-13 13:50:19.138 PDT [cube] DETAIL:  syntax error at or near "f"
> 2024-05-13 13:50:19.138 PDT [cube] STATEMENT:  SELECT cube('foo');
>
> 2024-05-13 13:43:07.484 PDT [postgres] LOG:  database system is ready to accept connections
> 2024-05-13 13:43:11.699 PDT [hstore] ERROR:  syntax error in hstore: unexpected end of string at character 15
> 2024-05-13 13:43:11.699 PDT [hstore] STATEMENT:  SELECT hstore('foo');
>
>

I was not able to build your patch by simply doing:

./configure --prefix=/tmp/pg
...
make -j
...
/usr/bin/ld: ../../src/port/libpgport_srv.a(path_srv.o): warning:
relocation against `Pg_extension_filename' in read-only section `.text'
/usr/bin/ld: access/brin/brin.o: in function `brininsert':
/data/src/pg/main/src/backend/access/brin/brin.c:403: undefined reference
to `Pg_extension_filename'
/usr/bin/ld: access/brin/brin.o: in function `brinbuild':
/data/src/pg/main/src/backend/access/brin/brin.c:1107: undefined reference
to `Pg_extension_filename'
/usr/bin/ld: access/brin/brin.o: in function `brin_summarize_range':
/data/src/pg/main/src/backend/access/brin/brin.c:1383: undefined reference
to `Pg_extension_filename'
/usr/bin/ld: /data/src/pg/main/src/backend/access/brin/brin.c:1389:
undefined reference to `Pg_extension_filename'
/usr/bin/ld: /data/src/pg/main/src/backend/access/brin/brin.c:1434:
undefined reference to `Pg_extension_filename'
/usr/bin/ld:
access/brin/brin.o:/data/src/pg/main/src/backend/access/brin/brin.c:1450:
more undefined references to `Pg_extension_filename' follow
/usr/bin/ld: warning: creating DT_TEXTREL in a PIE
collect2: error: ld returned 1 exit status
make[2]: *** [Makefile:67: postgres] Error 1
make[2]: Leaving directory '/data/src/pg/main/src/backend'
make[1]: *** [Makefile:42: all-backend-recurse] Error 2
make[1]: Leaving directory '/data/src/pg/main/src'
make: *** [GNUmakefile:11: all-src-recurse] Error 2


> It's worth pointing out that this, quite fundamentally, can only work
> when the log message is triggered directly by the extension. If the
> extension code calls some postgres function and that function then
> errors out, it'll be seen as being part of postgres.
>
> But I think that's ok - they're going to be properly errcode-ified etc.
>

Hmmm, depending on the extension it can extensively call/use postgres
code, so it would be nice if we could differentiate whether the code is
called from Postgres itself or from an extension.

Regards,

--
Fabrízio de Royes Mello


Re: New committers: Melanie Plageman, Richard Guo

2024-04-26 Thread Fabrízio de Royes Mello
On Fri, Apr 26, 2024 at 8:54 AM Jonathan S. Katz 
wrote:

> The Core Team would like to extend our congratulations to Melanie
> Plageman and Richard Guo, who have accepted invitations to become our
> newest PostgreSQL committers.
>
>
It is a HUGE step for the community to have a woman committer... Congrats
to you both, Melanie and Richard!!!

Regards,

-- 
Fabrízio de Royes Mello


Re: speed up a logical replica setup

2024-03-21 Thread Fabrízio de Royes Mello
On Fri, Mar 22, 2024 at 12:54 AM Amit Kapila 
wrote:
>
>
> The user choosing to convert a physical standby to a subscriber would
> in some cases be interested in converting it for all the databases
> (say for the case of upgrade [1]) and giving options for each database
> would be cumbersome for her.
>
> ...
>
> [1] - This tool can be used in an upgrade where the user first
> converts physical standby to subscriber to get incremental changes and
> then performs an online upgrade.
>

The first use case Euler and I discussed some time ago for upstreaming
this tool was exactly what Amit described, so IMHO we should make it
easier for users to subscribe to all existing user databases.

Regards,

--
Fabrízio de Royes Mello


Re: glibc qsort() vulnerability

2024-02-12 Thread Fabrízio de Royes Mello
On Mon, Feb 12, 2024 at 5:51 PM Nathan Bossart 
wrote:
>
> On Mon, Feb 12, 2024 at 06:09:06PM +0100, Mats Kindahl wrote:
> > Here are the two fixed patches.
>
> These patches look pretty good to me.  Barring additional feedback, I'll
> plan on committing them in the next few days.
>

I also did some tests locally and everything went well. The patches apply
to the main branch without issues and all regression tests (including
check-world) pass!!

Regards
-- 
Fabrízio de Royes Mello


Re: speed up a logical replica setup

2024-01-31 Thread Fabrízio de Royes Mello
On Wed, Jan 31, 2024 at 12:38 PM Euler Taveira  wrote:
>
>
> Hmm. I didn't try it with the failover patch that was recently applied.
> Did you have any special configuration on primary?
>

Nothing special; here are the configurations I changed after bootstrap:

port = '5432'
wal_level = 'logical'
max_wal_senders = '8'
max_replication_slots = '6'
hot_standby_feedback = 'on'
max_prepared_transactions = '10'
max_locks_per_transaction = '512'

Regards,

--
Fabrízio de Royes Mello


Re: speed up a logical replica setup

2024-01-31 Thread Fabrízio de Royes Mello
On Wed, Jan 31, 2024 at 11:35 AM Euler Taveira  wrote:
>
> On Wed, Jan 31, 2024, at 11:25 AM, Fabrízio de Royes Mello wrote:
>
> Jumping into this a bit late here... I'm trying a simple
> pg_createsubscriber but getting an error:
>
>
> Try v11. It seems v12-0002 is not correct.

Using v11 I'm getting this error:

~/pgsql took 22s
✦ ➜ pg_createsubscriber -d fabrizio -r -D /tmp/replica5434 -S 'host=/tmp port=5434' -P 'host=/tmp port=5432'
NOTICE:  changed the failover state of replication slot "pg_createsubscriber_16384_706609" on publisher to false
pg_createsubscriber: error: could not drop replication slot "pg_createsubscriber_706609_startpoint" on database "fabrizio": ERROR:  replication slot "pg_createsubscriber_706609_startpoint" does not exist
Write-ahead log reset

Attached is the output log.

Regards,

-- 
Fabrízio de Royes Mello
2024-01-31 11:53:31.882 -03 [706626] LOG:  starting PostgreSQL 17devel on x86_64-pc-linux-gnu, compiled by gcc (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0, 64-bit
2024-01-31 11:53:31.882 -03 [706626] LOG:  listening on IPv6 address "::1", port 5434
2024-01-31 11:53:31.882 -03 [706626] LOG:  listening on IPv4 address "127.0.0.1", port 5434
2024-01-31 11:53:31.990 -03 [706626] LOG:  listening on Unix socket "/tmp/.s.PGSQL.5434"
2024-01-31 11:53:32.082 -03 [706639] LOG:  database system was shut down in recovery at 2024-01-31 11:53:30 -03
2024-01-31 11:53:32.084 -03 [706639] LOG:  entering standby mode
2024-01-31 11:53:32.100 -03 [706639] LOG:  redo starts at 0/428
2024-01-31 11:53:32.189 -03 [706639] LOG:  consistent recovery state reached at 0/69EFF00
2024-01-31 11:53:32.189 -03 [706639] LOG:  invalid record length at 0/69F01D0: expected at least 24, got 0
2024-01-31 11:53:32.189 -03 [706626] LOG:  database system is ready to accept read-only connections
2024-01-31 11:53:32.200 -03 [706640] LOG:  started streaming WAL from primary at 0/600 on timeline 1
2024-01-31 11:53:32.218 -03 [706639] LOG:  recovery stopping after WAL location (LSN) "0/6A1F4C0"
2024-01-31 11:53:32.218 -03 [706639] LOG:  redo done at 0/6A1F4C0 system usage: CPU: user: 0.07 s, system: 0.01 s, elapsed: 0.11 s
2024-01-31 11:53:32.219 -03 [706639] LOG:  last completed transaction was at log time 2024-01-31 11:53:31.745182-03
2024-01-31 11:53:32.220 -03 [706640] FATAL:  terminating walreceiver process due to administrator command
2024-01-31 11:53:32.266 -03 [706639] LOG:  selected new timeline ID: 2
2024-01-31 11:53:32.390 -03 [706639] LOG:  archive recovery complete
2024-01-31 11:53:32.403 -03 [706637] LOG:  checkpoint starting: end-of-recovery immediate wait
2024-01-31 11:53:32.809 -03 [706637] LOG:  checkpoint complete: wrote 2141 buffers (13.1%); 0 WAL file(s) added, 0 removed, 2 recycled; write=0.067 s, sync=0.198 s, total=0.419 s; sync files=51, longest=0.018 s, average=0.004 s; distance=43133 kB, estimate=43133 kB; lsn=0/6A1F4F8, redo lsn=0/6A1F4F8
2024-01-31 11:53:32.827 -03 [706626] LOG:  database system is ready to accept connections
2024-01-31 11:53:33.398 -03 [706626] LOG:  received fast shutdown request
2024-01-31 11:53:33.409 -03 [706650] LOG:  logical replication apply worker for subscription "pg_createsubscriber_16384_706609" has started
2024-01-31 11:53:33.414 -03 [706626] LOG:  aborting any active transactions
2024-01-31 11:53:33.414 -03 [706650] FATAL:  terminating logical replication worker due to administrator command
2024-01-31 11:53:33.415 -03 [706626] LOG:  background worker "logical replication launcher" (PID 706645) exited with exit code 1
2024-01-31 11:53:33.415 -03 [706626] LOG:  background worker "logical replication apply worker" (PID 706650) exited with exit code 1
2024-01-31 11:53:33.415 -03 [706637] LOG:  shutting down
2024-01-31 11:53:33.438 -03 [706637] LOG:  checkpoint starting: shutdown immediate
2024-01-31 11:53:33.642 -03 [706637] LOG:  checkpoint complete: wrote 23 buffers (0.1%); 0 WAL file(s) added, 0 removed, 0 recycled; write=0.052 s, sync=0.047 s, total=0.228 s; sync files=14, longest=0.011 s, average=0.004 s; distance=3 kB, estimate=38820 kB; lsn=0/6A20270, redo lsn=0/6A20270
2024-01-31 11:53:33.648 -03 [706626] LOG:  database system is shut down


Re: speed up a logical replica setup

2024-01-31 Thread Fabrízio de Royes Mello
On Wed, Jan 31, 2024 at 9:52 AM Hayato Kuroda (Fujitsu) <kuroda.hay...@fujitsu.com> wrote:
>
> Dear Euler,
>
> I extracted some review comments which may require many efforts. I hope
> this makes them easy to review.
>
> 0001: not changed from yours.
> 0002: avoid to use replication connections. Source: comment #3[1]
> 0003: Remove -P option and use primary_conninfo instead. Source: [2]
> 0004: Exit earlier when dry_run is specified. Source: [3]
> 0005: Refactor data structures. Source: [4]
>
> [1]: https://www.postgresql.org/message-id/TY3PR01MB9889593399165B9A04106741F5662%40TY3PR01MB9889.jpnprd01.prod.outlook.com
> [2]: https://www.postgresql.org/message-id/TY3PR01MB98897C85700C6DF942D2D0A3F5792%40TY3PR01MB9889.jpnprd01.prod.outlook.com
> [3]: https://www.postgresql.org/message-id/TY3PR01MB98897C85700C6DF942D2D0A3F5792%40TY3PR01MB9889.jpnprd01.prod.outlook.com
> [4]: https://www.postgresql.org/message-id/TY3PR01MB9889C362FF76102C88FA1C29F56F2%40TY3PR01MB9889.jpnprd01.prod.outlook.com
>

Hey folks,

Jumping into this a bit late here... I'm trying a simple
pg_createsubscriber but getting an error:

~/pgsql took 19s
✦ ➜ pg_createsubscriber -d fabrizio -r -D /tmp/replica5434 -S 'host=/tmp port=5434'
pg_createsubscriber: error: could not create subscription "pg_createsubscriber_16384_695617" on database "fabrizio": ERROR:  syntax error at or near "/"
LINE 1: ..._16384_695617 CONNECTION 'user=fabrizio passfile='/home/fabr...
                                                             ^
pg_createsubscriber: error: could not drop replication slot "pg_createsubscriber_16384_695617" on database "fabrizio":
pg_createsubscriber: error: could not drop replication slot "pg_subscriber_695617_startpoint" on database "fabrizio": ERROR:  replication slot "pg_subscriber_695617_startpoint" does not exist

And the LOG contains the following:

~/pgsql took 12s
✦ ➜ cat /tmp/replica5434/pg_createsubscriber_output.d/server_start_20240131T110318.730.log

2024-01-31 11:03:19.138 -03 [695632] LOG:  starting PostgreSQL 17devel on x86_64-pc-linux-gnu, compiled by gcc (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0, 64-bit
2024-01-31 11:03:19.138 -03 [695632] LOG:  listening on IPv6 address "::1", port 5434
2024-01-31 11:03:19.138 -03 [695632] LOG:  listening on IPv4 address "127.0.0.1", port 5434
2024-01-31 11:03:19.158 -03 [695632] LOG:  listening on Unix socket "/tmp/.s.PGSQL.5434"
2024-01-31 11:03:19.179 -03 [695645] LOG:  database system was shut down in recovery at 2024-01-31 11:03:18 -03
2024-01-31 11:03:19.180 -03 [695645] LOG:  entering standby mode
2024-01-31 11:03:19.192 -03 [695645] LOG:  redo starts at 0/428
2024-01-31 11:03:19.198 -03 [695645] LOG:  consistent recovery state reached at 0/504DB08
2024-01-31 11:03:19.198 -03 [695645] LOG:  invalid record length at 0/504DB08: expected at least 24, got 0
2024-01-31 11:03:19.198 -03 [695632] LOG:  database system is ready to accept read-only connections
2024-01-31 11:03:19.215 -03 [695646] LOG:  started streaming WAL from primary at 0/500 on timeline 1
2024-01-31 11:03:29.587 -03 [695645] LOG:  recovery stopping after WAL location (LSN) "0/504F260"
2024-01-31 11:03:29.587 -03 [695645] LOG:  redo done at 0/504F260 system usage: CPU: user: 0.00 s, system: 0.00 s, elapsed: 10.39 s
2024-01-31 11:03:29.587 -03 [695645] LOG:  last completed transaction was at log time 2024-01-31 11:03:18.761544-03
2024-01-31 11:03:29.587 -03 [695646] FATAL:  terminating walreceiver process due to administrator command
2024-01-31 11:03:29.598 -03 [695645] LOG:  selected new timeline ID: 2
2024-01-31 11:03:29.680 -03 [695645] LOG:  archive recovery complete
2024-01-31 11:03:29.690 -03 [695643] LOG:  checkpoint starting: end-of-recovery immediate wait
2024-01-31 11:03:29.795 -03 [695643] LOG:  checkpoint complete: wrote 51 buffers (0.3%); 0 WAL file(s) added, 0 removed, 1 recycled; write=0.021 s, sync=0.034 s, total=0.115 s; sync files=17, longest=0.011 s, average=0.002 s; distance=16700 kB, estimate=16700 kB; lsn=0/504F298, redo lsn=0/504F298
2024-01-31 11:03:29.805 -03 [695632] LOG:  database system is ready to accept connections
2024-01-31 11:03:30.332 -03 [695658] ERROR:  syntax error at or near "/" at character 90
2024-01-31 11:03:30.332 -03 [695658] STATEMENT:  CREATE SUBSCRIPTION pg_createsubscriber_16384_695617 CONNECTION 'user=fabrizio passfile='/home/fabrizio/.pgpass' channel_binding=prefer host=localhost port=5432 sslmode=prefer sslcompression=0 sslcertmode=allow sslsni=1 ssl_min_protocol_version=TLSv1.2 gssencmode=disable krbsrvname=postgres gssdelegation=0 target_session_attrs=any load_balance_hosts=disable dbname=fabrizio' PUBLICATION pg_createsubscriber_16384 WITH (create_slot = false, copy_data = false, enabled = false)

Seems we need to escape connection params similar to what we do in dblink [1].
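
If I recall correctly, dblink handles this with a small helper that
backslash-escapes quotes and backslashes before wrapping each value in
single quotes. A minimal sketch of the same idea (the helper name is
mine; fe_utils also ships appendConnStrVal() for this):

/* Sketch only: append "val" to a conninfo string, quoted and escaped. */
static void
append_conninfo_value(PQExpBuffer buf, const char *val)
{
	appendPQExpBufferChar(buf, '\'');
	for (const char *p = val; *p != '\0'; p++)
	{
		if (*p == '\'' || *p == '\\')
			appendPQExpBufferChar(buf, '\\');
		appendPQExpBufferChar(buf, *p);
	}
	appendPQExpBufferChar(buf, '\'');
}

With something like that, the embedded passfile value above would no
longer terminate the outer CONNECTION string early.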

Regards,

[1]
https://git.postgresql.org/gitweb/?p=postgresql.git;a=blob;f=contrib/dblink/dblink.c;h=19a362526d21dff5d8b1cdc68b15afebe7d40249;hb=HEAD#l2882

-- 
Fabrízio de Royes Mello


Re: speed up a logical replica setup

2024-01-31 Thread Fabrízio de Royes Mello
On Thu, Jan 18, 2024 at 6:19 AM Peter Eisentraut 
wrote:
>
> Very early in this thread, someone mentioned the name
> pg_create_subscriber, and of course there is pglogical_create_subscriber
> as the historical predecessor.  Something along those lines seems better
> to me.  Maybe there are other ideas.
>

I've mentioned it upthread because of this pet project [1], which is one
of the motivations behind upstreaming this facility.

[1] https://github.com/fabriziomello/pg_create_subscriber

-- 
Fabrízio de Royes Mello


Re: Add minimal C example and SQL registration example for custom table access methods.

2024-01-26 Thread Fabrízio de Royes Mello
On Wed, Nov 15, 2023 at 8:29 PM Roberto Mello 
wrote:
>
> Suggestion:
>
> In the C example you added you mention in the comment:
>
> +  /* Methods from TableAmRoutine omitted from example, but all
> + non-optional ones must be provided here. */
>
> Perhaps you could provide a "see " to point the reader finding your
> example where he could find these non-optional methods he must provide?
>
> Nitpicking a little: your patch appears to change more lines than it
> does, because it added line breaks earlier in the lines. I would
> generally avoid that unless there's good reason to do so.

Hey folks,

There is a previous patch [1] on the same topic. What about joining
efforts and pointing these documentation changes to the proposed test
module?

[1] https://commitfest.postgresql.org/46/4588/

-- 
Fabrízio de Royes Mello


pg_class.relpages not updated for toast index

2023-11-21 Thread Fabrízio de Royes Mello
Hi all,

I was doing a relation size estimation based on pg_class.relpages of the
relation and the related objects (index, toast) and noticed that it is
not updated for the toast index. For example:

fabrizio=# CREATE TABLE t(c TEXT);
CREATE TABLE
fabrizio=# INSERT INTO t VALUES (repeat('x', (8192^2)::int));
INSERT 0 1
fabrizio=# VACUUM (ANALYZE) t;
VACUUM
fabrizio=# \x on
Expanded display is on.
fabrizio=# SELECT
  c.oid,
  c.relname,
  c.relpages,
  t.relname,
  t.relpages AS toast_pages,
  ci.relname,
  ci.relpages AS toast_index_pages,
  (pg_stat_file(pg_relation_filepath(ci.oid))).size AS toast_index_size
FROM
  pg_class c
  JOIN pg_class t ON t.oid = c.reltoastrelid
  JOIN pg_index i ON i.indrelid = t.oid
  JOIN pg_class ci ON ci.oid = i.indexrelid
WHERE
  c.oid = 't'::regclass;
-[ RECORD 1 ]-----+---------------------
oid   | 17787
relname   | t
relpages  | 1
relname   | pg_toast_17787
toast_pages   | 97
relname   | pg_toast_17787_index
toast_index_pages | 1
toast_index_size  | 16384

Are there any reasons for toast index relpages not to be updated? Or is it
a bug?

Regards,

-- 
Fabrízio de Royes Mello


Re: Add test module for Table Access Method

2023-09-26 Thread Fabrízio de Royes Mello
On Mon, Jun 5, 2023 at 1:24 PM Fabrízio de Royes Mello <fabriziome...@gmail.com> wrote:
>
> On Sat, Jun 3, 2023 at 7:42 PM Fabrízio de Royes Mello <fabriziome...@gmail.com> wrote:
> >
> >
> > Hi all,
> >
> > During the PGCon Unconference session about Table Access Method one
> > missing item pointed out is that currently we lack documentation and
> > examples of TAM.
> >
> > So in order to improve things a bit in this area I'm proposing to add
> > a test module for Table Access Method similar to what we already have
> > for Index Access Method.
> >
> > This code is based on the "blackhole_am" implemented by Michael
> > Paquier: https://github.com/michaelpq/pg_plugins/tree/main/blackhole_am
> >
>
> Just added some more tests, ran pgindent and also organized a bit some
> comments and README.txt.
>

Rebased version.

-- 
Fabrízio de Royes Mello
From 5b6642b520874f4ca7023fc33d2e8e875fb64693 Mon Sep 17 00:00:00 2001
From: =?UTF-8?q?Fabr=C3=ADzio=20de=20Royes=20Mello?=
 
Date: Sat, 3 Jun 2023 17:23:05 -0400
Subject: [PATCH v3] Add test module for Table Access Method

---
 src/test/modules/Makefile |   1 +
 src/test/modules/dummy_table_am/.gitignore|   3 +
 src/test/modules/dummy_table_am/Makefile  |  20 +
 src/test/modules/dummy_table_am/README|   5 +
 .../dummy_table_am/dummy_table_am--1.0.sql|  14 +
 .../modules/dummy_table_am/dummy_table_am.c   | 500 ++
 .../dummy_table_am/dummy_table_am.control |   5 +
 .../expected/dummy_table_am.out   | 207 
 src/test/modules/dummy_table_am/meson.build   |  33 ++
 .../dummy_table_am/sql/dummy_table_am.sql |  55 ++
 src/test/modules/meson.build  |   1 +
 11 files changed, 844 insertions(+)
 create mode 100644 src/test/modules/dummy_table_am/.gitignore
 create mode 100644 src/test/modules/dummy_table_am/Makefile
 create mode 100644 src/test/modules/dummy_table_am/README
 create mode 100644 src/test/modules/dummy_table_am/dummy_table_am--1.0.sql
 create mode 100644 src/test/modules/dummy_table_am/dummy_table_am.c
 create mode 100644 src/test/modules/dummy_table_am/dummy_table_am.control
 create mode 100644 src/test/modules/dummy_table_am/expected/dummy_table_am.out
 create mode 100644 src/test/modules/dummy_table_am/meson.build
 create mode 100644 src/test/modules/dummy_table_am/sql/dummy_table_am.sql

diff --git a/src/test/modules/Makefile b/src/test/modules/Makefile
index e81873cb5a..cb7a5b970a 100644
--- a/src/test/modules/Makefile
+++ b/src/test/modules/Makefile
@@ -10,6 +10,7 @@ SUBDIRS = \
 		  delay_execution \
 		  dummy_index_am \
 		  dummy_seclabel \
+		  dummy_table_am \
 		  libpq_pipeline \
 		  plsample \
 		  spgist_name_ops \
diff --git a/src/test/modules/dummy_table_am/.gitignore b/src/test/modules/dummy_table_am/.gitignore
new file mode 100644
index 00..44d119cfcc
--- /dev/null
+++ b/src/test/modules/dummy_table_am/.gitignore
@@ -0,0 +1,3 @@
+# Generated subdirectories
+/log/
+/results/
diff --git a/src/test/modules/dummy_table_am/Makefile b/src/test/modules/dummy_table_am/Makefile
new file mode 100644
index 00..9ea4a590c6
--- /dev/null
+++ b/src/test/modules/dummy_table_am/Makefile
@@ -0,0 +1,20 @@
+# src/test/modules/dummy_table_am/Makefile
+
+MODULES = dummy_table_am
+
+EXTENSION = dummy_table_am
+DATA = dummy_table_am--1.0.sql
+PGFILEDESC = "dummy_table_am - table access method template"
+
+REGRESS = dummy_table_am
+
+ifdef USE_PGXS
+PG_CONFIG = pg_config
+PGXS := $(shell $(PG_CONFIG) --pgxs)
+include $(PGXS)
+else
+subdir = src/test/modules/dummy_table_am
+top_builddir = ../../../..
+include $(top_builddir)/src/Makefile.global
+include $(top_srcdir)/contrib/contrib-global.mk
+endif
diff --git a/src/test/modules/dummy_table_am/README b/src/test/modules/dummy_table_am/README
new file mode 100644
index 00..35211554b0
--- /dev/null
+++ b/src/test/modules/dummy_table_am/README
@@ -0,0 +1,5 @@
+Dummy Table AM
+==============
+
+Dummy table AM is a module for testing any facility usable by a table
+access method, whose code is kept as simple as possible.
diff --git a/src/test/modules/dummy_table_am/dummy_table_am--1.0.sql b/src/test/modules/dummy_table_am/dummy_table_am--1.0.sql
new file mode 100644
index 00..aa0fd82e61
--- /dev/null
+++ b/src/test/modules/dummy_table_am/dummy_table_am--1.0.sql
@@ -0,0 +1,14 @@
+/* src/test/modules/dummy_table_am/dummy_table_am--1.0.sql */
+
+-- complain if script is sourced in psql, rather than via CREATE EXTENSION
+\echo Use "CREATE EXTENSION dummy_table_am" to load this file. \quit
+
+CREATE FUNCTION dummy_table_am_handler(internal)
+RETURNS table_am_handler
+AS 'MODULE_PATHNAME'
+LANGUAGE C;
+
+-- Access method
+CREATE ACCESS METHOD dummy_table_am TYPE TABLE HANDLER dummy_table_am_handler;
+COMMENT ON ACCESS METHOD dummy_table_am IS 'dummy table access method';
+
diff --git a/src/test/modules/dum

Re: Removing the fixed-size buffer restriction in hba.c

2023-07-24 Thread Fabrízio de Royes Mello
On Mon, Jul 24, 2023 at 2:53 PM Tom Lane  wrote:
>
> We got a complaint at [1] about how a not-so-unreasonable LDAP
> configuration can hit the "authentication file token too long,
> skipping" error case in hba.c's next_token().  I think we've
> seen similar complaints before, although a desultory archives
> search didn't turn one up.
>
> A minimum-change response would be to increase the MAX_TOKEN
> constant from 256 to (say) 1K or 10K.  But it wouldn't be all
> that hard to replace the fixed-size buffer with a StringInfo,
> as attached.
>

+1 for replacing it with StringInfo. And the patch LGTM!
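
For anyone skimming, the gist of the change is to accumulate the token
into a StringInfo instead of a fixed char buf[MAX_TOKEN], roughly like
this (sketch only, not the actual patch; the real next_token() also
handles quoting and comments):

#include "postgres.h"
#include "lib/stringinfo.h"

static void
next_token_sketch(const char **lineptr, StringInfo buf)
{
	char		c;

	resetStringInfo(buf);
	while ((c = **lineptr) != '\0' && !isspace((unsigned char) c))
	{
		appendStringInfoChar(buf, c);
		(*lineptr)++;
	}
	/* buf->data now holds the token, with no arbitrary length limit */
}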


>
> Given the infrequency of complaints, I'm inclined to apply
> the more thorough fix only in HEAD, and to just raise MAX_TOKEN
> in the back branches.  Thoughts?
>

It makes sense to change it only in HEAD.

Regards,

--
Fabrízio de Royes Mello


Re: Including a sample Table Access Method with core code

2023-07-05 Thread Fabrízio de Royes Mello
>
> > Included Mark Dilger directly to this mail as he mentioned he has a
> > Perl script that makes a functional copy of heap AM that can be
> > compiled and installed as a custom AM.
>
> Similar discussion has happened in 640c198 related to the creation of
> dummy_index_am, where the argument is that such a module needs to
> provide value in testing some of the core internals.  dummy_index_am
> did so for reloptions on indexes because there was not much coverage
> for that part of the system.
>
> > @mark - maybe you can create 3 boilerplate Table AMs for the above
> > named `mem_am`, `overlay_am` and `py3_am` and we could put them
> > somewhere for interested parties to play with ?
>
> Not sure if that's worth counting, but I also have a table AM template
> stored in my plugin repo:
> https://github.com/michaelpq/pg_plugins/tree/main/blackhole_am
>

And based on your `blackhole_am` I've sent a patch [1] to add a
`dummy_table_am` for testing purposes.

Regards,

[1]
https://www.postgresql.org/message-id/CAFcNs+pcU2ib=jvjnznbod+m2tho+vd77c_yzj2rsgr0tp3...@mail.gmail.com

--
Fabrízio de Royes Mello


Re: Add test module for Table Access Method

2023-06-05 Thread Fabrízio de Royes Mello
On Sat, Jun 3, 2023 at 7:42 PM Fabrízio de Royes Mello <fabriziome...@gmail.com> wrote:
>
>
> Hi all,
>
> During the PGCon Unconference session about Table Access Method one
> missing item pointed out is that currently we lack documentation and
> examples of TAM.
>
> So in order to improve things a bit in this area I'm proposing to add a
> test module for Table Access Method similar to what we already have for
> Index Access Method.
>
> This code is based on the "blackhole_am" implemented by Michael Paquier:
> https://github.com/michaelpq/pg_plugins/tree/main/blackhole_am
>

Just added some more tests, ran pgindent and also organized a bit some
comments and README.txt.

Regards,

-- 
Fabrízio de Royes Mello
From 31d6bf00fbee6c229382db0760ba602e4d41c917 Mon Sep 17 00:00:00 2001
From: =?UTF-8?q?Fabr=C3=ADzio=20de=20Royes=20Mello?=
 
Date: Sat, 3 Jun 2023 17:23:05 -0400
Subject: [PATCH v2] Add test module for Table Access Method

---
 src/test/modules/Makefile |   1 +
 src/test/modules/dummy_table_am/.gitignore|   3 +
 src/test/modules/dummy_table_am/Makefile  |  20 +
 src/test/modules/dummy_table_am/README|   5 +
 .../dummy_table_am/dummy_table_am--1.0.sql|  14 +
 .../modules/dummy_table_am/dummy_table_am.c   | 500 ++
 .../dummy_table_am/dummy_table_am.control |   5 +
 .../expected/dummy_table_am.out   | 207 
 src/test/modules/dummy_table_am/meson.build   |  33 ++
 .../dummy_table_am/sql/dummy_table_am.sql |  55 ++
 src/test/modules/meson.build  |   1 +
 11 files changed, 844 insertions(+)
 create mode 100644 src/test/modules/dummy_table_am/.gitignore
 create mode 100644 src/test/modules/dummy_table_am/Makefile
 create mode 100644 src/test/modules/dummy_table_am/README
 create mode 100644 src/test/modules/dummy_table_am/dummy_table_am--1.0.sql
 create mode 100644 src/test/modules/dummy_table_am/dummy_table_am.c
 create mode 100644 src/test/modules/dummy_table_am/dummy_table_am.control
 create mode 100644 src/test/modules/dummy_table_am/expected/dummy_table_am.out
 create mode 100644 src/test/modules/dummy_table_am/meson.build
 create mode 100644 src/test/modules/dummy_table_am/sql/dummy_table_am.sql

diff --git a/src/test/modules/Makefile b/src/test/modules/Makefile
index 6331c976dc..ce982b0e46 100644
--- a/src/test/modules/Makefile
+++ b/src/test/modules/Makefile
@@ -10,6 +10,7 @@ SUBDIRS = \
 		  delay_execution \
 		  dummy_index_am \
 		  dummy_seclabel \
+		  dummy_table_am \
 		  libpq_pipeline \
 		  plsample \
 		  snapshot_too_old \
diff --git a/src/test/modules/dummy_table_am/.gitignore b/src/test/modules/dummy_table_am/.gitignore
new file mode 100644
index 00..44d119cfcc
--- /dev/null
+++ b/src/test/modules/dummy_table_am/.gitignore
@@ -0,0 +1,3 @@
+# Generated subdirectories
+/log/
+/results/
diff --git a/src/test/modules/dummy_table_am/Makefile b/src/test/modules/dummy_table_am/Makefile
new file mode 100644
index 00..9ea4a590c6
--- /dev/null
+++ b/src/test/modules/dummy_table_am/Makefile
@@ -0,0 +1,20 @@
+# src/test/modules/dummy_table_am/Makefile
+
+MODULES = dummy_table_am
+
+EXTENSION = dummy_table_am
+DATA = dummy_table_am--1.0.sql
+PGFILEDESC = "dummy_table_am - table access method template"
+
+REGRESS = dummy_table_am
+
+ifdef USE_PGXS
+PG_CONFIG = pg_config
+PGXS := $(shell $(PG_CONFIG) --pgxs)
+include $(PGXS)
+else
+subdir = src/test/modules/dummy_table_am
+top_builddir = ../../../..
+include $(top_builddir)/src/Makefile.global
+include $(top_srcdir)/contrib/contrib-global.mk
+endif
diff --git a/src/test/modules/dummy_table_am/README b/src/test/modules/dummy_table_am/README
new file mode 100644
index 00..35211554b0
--- /dev/null
+++ b/src/test/modules/dummy_table_am/README
@@ -0,0 +1,5 @@
+Dummy Table AM
+==============
+
+Dummy table AM is a module for testing any facility usable by a table
+access method, whose code is kept as simple as possible.
diff --git a/src/test/modules/dummy_table_am/dummy_table_am--1.0.sql b/src/test/modules/dummy_table_am/dummy_table_am--1.0.sql
new file mode 100644
index 00..aa0fd82e61
--- /dev/null
+++ b/src/test/modules/dummy_table_am/dummy_table_am--1.0.sql
@@ -0,0 +1,14 @@
+/* src/test/modules/dummy_table_am/dummy_table_am--1.0.sql */
+
+-- complain if script is sourced in psql, rather than via CREATE EXTENSION
+\echo Use "CREATE EXTENSION dummy_table_am" to load this file. \quit
+
+CREATE FUNCTION dummy_table_am_handler(internal)
+RETURNS table_am_handler
+AS 'MODULE_PATHNAME'
+LANGUAGE C;
+
+-- Access method
+CREATE ACCESS METHOD dummy_table_am TYPE TABLE HANDLER dummy_table_am_handler;
+COMMENT ON ACCESS METHOD dummy_table_am IS 'dummy table access method';
+
diff --git a/src/test/modules/dummy_table_am/dummy_table_am.c b/src/test/modules/dummy_table_am/dummy_table_am.c
new file mode 100644
index 00..132f7b18cd
--- /dev/null
+++

Add test module for Table Access Method

2023-06-03 Thread Fabrízio de Royes Mello
Hi all,

During the PGCon Unconference session about Table Access Method one missing
item pointed out is that currently we lack documentation and examples of
TAM.

So in order to improve things a bit in this area I'm proposing to add a
test module for Table Access Method similar to what we already have for
Index Access Method.

This code is based on the "blackhole_am" implemented by Michael Paquier:
https://github.com/michaelpq/pg_plugins/tree/main/blackhole_am

Regards,

-- 
Fabrízio de Royes Mello
From 217b84f21ec1cdb0ede271d24b7a5863713db949 Mon Sep 17 00:00:00 2001
From: =?UTF-8?q?Fabr=C3=ADzio=20de=20Royes=20Mello?=
 
Date: Sat, 3 Jun 2023 17:23:05 -0400
Subject: [PATCH] Add test module for Table Access Method

---
 src/test/modules/Makefile |   1 +
 src/test/modules/dummy_table_am/.gitignore|   3 +
 src/test/modules/dummy_table_am/Makefile  |  20 +
 src/test/modules/dummy_table_am/README|  12 +
 .../dummy_table_am/dummy_table_am--1.0.sql|  14 +
 .../modules/dummy_table_am/dummy_table_am.c   | 519 ++
 .../dummy_table_am/dummy_table_am.control |   5 +
 .../expected/dummy_table_am.out   | 127 +
 src/test/modules/dummy_table_am/meson.build   |  33 ++
 .../dummy_table_am/sql/dummy_table_am.sql |  34 ++
 src/test/modules/meson.build  |   1 +
 11 files changed, 769 insertions(+)
 create mode 100644 src/test/modules/dummy_table_am/.gitignore
 create mode 100644 src/test/modules/dummy_table_am/Makefile
 create mode 100644 src/test/modules/dummy_table_am/README
 create mode 100644 src/test/modules/dummy_table_am/dummy_table_am--1.0.sql
 create mode 100644 src/test/modules/dummy_table_am/dummy_table_am.c
 create mode 100644 src/test/modules/dummy_table_am/dummy_table_am.control
 create mode 100644 src/test/modules/dummy_table_am/expected/dummy_table_am.out
 create mode 100644 src/test/modules/dummy_table_am/meson.build
 create mode 100644 src/test/modules/dummy_table_am/sql/dummy_table_am.sql

diff --git a/src/test/modules/Makefile b/src/test/modules/Makefile
index 6331c976dc..ce982b0e46 100644
--- a/src/test/modules/Makefile
+++ b/src/test/modules/Makefile
@@ -10,6 +10,7 @@ SUBDIRS = \
 		  delay_execution \
 		  dummy_index_am \
 		  dummy_seclabel \
+		  dummy_table_am \
 		  libpq_pipeline \
 		  plsample \
 		  snapshot_too_old \
diff --git a/src/test/modules/dummy_table_am/.gitignore b/src/test/modules/dummy_table_am/.gitignore
new file mode 100644
index 00..44d119cfcc
--- /dev/null
+++ b/src/test/modules/dummy_table_am/.gitignore
@@ -0,0 +1,3 @@
+# Generated subdirectories
+/log/
+/results/
diff --git a/src/test/modules/dummy_table_am/Makefile b/src/test/modules/dummy_table_am/Makefile
new file mode 100644
index 00..9ea4a590c6
--- /dev/null
+++ b/src/test/modules/dummy_table_am/Makefile
@@ -0,0 +1,20 @@
+# src/test/modules/dummy_table_am/Makefile
+
+MODULES = dummy_table_am
+
+EXTENSION = dummy_table_am
+DATA = dummy_table_am--1.0.sql
+PGFILEDESC = "dummy_table_am - table access method template"
+
+REGRESS = dummy_table_am
+
+ifdef USE_PGXS
+PG_CONFIG = pg_config
+PGXS := $(shell $(PG_CONFIG) --pgxs)
+include $(PGXS)
+else
+subdir = src/test/modules/dummy_table_am
+top_builddir = ../../../..
+include $(top_builddir)/src/Makefile.global
+include $(top_srcdir)/contrib/contrib-global.mk
+endif
diff --git a/src/test/modules/dummy_table_am/README b/src/test/modules/dummy_table_am/README
new file mode 100644
index 00..61510f02fa
--- /dev/null
+++ b/src/test/modules/dummy_table_am/README
@@ -0,0 +1,12 @@
+Dummy Table AM
+==============
+
+Dummy table AM is a module for testing any facility usable by a table
+access method, whose code is kept as simple as possible.
+
+This includes tests for all relation option types:
+- boolean
+- enum
+- integer
+- real
+- strings (with and without NULL as default)
diff --git a/src/test/modules/dummy_table_am/dummy_table_am--1.0.sql b/src/test/modules/dummy_table_am/dummy_table_am--1.0.sql
new file mode 100644
index 00..aa0fd82e61
--- /dev/null
+++ b/src/test/modules/dummy_table_am/dummy_table_am--1.0.sql
@@ -0,0 +1,14 @@
+/* src/test/modules/dummy_table_am/dummy_table_am--1.0.sql */
+
+-- complain if script is sourced in psql, rather than via CREATE EXTENSION
+\echo Use "CREATE EXTENSION dummy_table_am" to load this file. \quit
+
+CREATE FUNCTION dummy_table_am_handler(internal)
+RETURNS table_am_handler
+AS 'MODULE_PATHNAME'
+LANGUAGE C;
+
+-- Access method
+CREATE ACCESS METHOD dummy_table_am TYPE TABLE HANDLER dummy_table_am_handler;
+COMMENT ON ACCESS METHOD dummy_table_am IS 'dummy table access method';
+
diff --git a/src/test/modules/dummy_table_am/dummy_table_am.c b/src/test/modules/dummy_table_am/dummy_table_am.c
new file mode 100644
index 00..b299fe9c65
--- /dev/null
+++ b/src/test/modules/dummy_table_am/dummy_table_am.c
@@ -0,0 +1,519 @@
+/*

Re: Remove io prefix from pg_stat_io columns

2023-04-19 Thread Fabrízio de Royes Mello
On Wed, Apr 19, 2023 at 1:27 PM Melanie Plageman 
wrote:
>
> Hi,
>
> Over in [1], we discussed removing the "io" prefix from the columns
> "io_context" and "io_object" in pg_stat_io since they seem redundant
> given the view name
>

LGTM. All tests passed and it built without warnings.

Regards

--
Fabrízio de Royes Mello


Re: [PATCH] pg_dump: lock tables in batches

2022-12-07 Thread Fabrízio de Royes Mello
On Wed, Dec 7, 2022 at 2:28 PM Tom Lane  wrote:
>
> Andres Freund  writes:
> > On 2022-12-07 10:44:33 -0500, Tom Lane wrote:
> >> I have a strong sense of deja vu here.  I'm pretty sure I experimented
> >> with this idea last year and gave up on it.  I don't recall exactly
> >> why, but either it didn't show any meaningful performance improvement
> >> for me or there was some actual downside (that I'm not remembering
> >> right now).
>
> > IIRC the case we were looking at around 989596152 were CPU bound
> > workloads, rather than latency bound workloads. It'd not be surprising
> > to have cases where batching LOCKs helps latency, but not CPU bound.
>
> Yeah, perhaps.  Anyway my main point is that I don't want to just assume
> this is a win; I want to see some actual performance tests.
>

Here are some numbers for Aleksander's patch:

1) Setup script

CREATE DATABASE t1000;
CREATE DATABASE t10000;
CREATE DATABASE t100000;

\c t1000
SELECT format('CREATE TABLE t%s(c1 INTEGER PRIMARY KEY, c2 TEXT, c3 TIMESTAMPTZ);', i) FROM generate_series(1, 1000) AS i \gexec

\c t10000
SELECT format('CREATE TABLE t%s(c1 INTEGER PRIMARY KEY, c2 TEXT, c3 TIMESTAMPTZ);', i) FROM generate_series(1, 10000) AS i \gexec

\c t100000
SELECT format('CREATE TABLE t%s(c1 INTEGER PRIMARY KEY, c2 TEXT, c3 TIMESTAMPTZ);', i) FROM generate_series(1, 100000) AS i \gexec

2) Execution script

time pg_dump -s t1000 > /dev/null
time pg_dump -s t10000 > /dev/null
time pg_dump -s t100000 > /dev/null

3) HEAD execution

$ time pg_dump -s t1000 > /dev/null
0.02user 0.01system 0:00.36elapsed 8%CPU (0avgtext+0avgdata
11680maxresident)k
0inputs+0outputs (0major+1883minor)pagefaults 0swaps

$ time pg_dump -s t10000 > /dev/null
0.30user 0.10system 0:05.04elapsed 8%CPU (0avgtext+0avgdata
57772maxresident)k
0inputs+0outputs (0major+14042minor)pagefaults 0swaps

$ time pg_dump -s t100000 > /dev/null
3.42user 2.13system 7:50.09elapsed 1%CPU (0avgtext+0avgdata
517900maxresident)k
0inputs+0outputs (0major+134636minor)pagefaults 0swaps

4) PATCH execution

$ time pg_dump -s t1000 > /dev/null
0.02user 0.00system 0:00.28elapsed 9%CPU (0avgtext+0avgdata
11700maxresident)k
0inputs+0outputs (0major+1886minor)pagefaults 0swaps

$ time pg_dump -s t10000 > /dev/null
0.18user 0.03system 0:02.17elapsed 10%CPU (0avgtext+0avgdata
57592maxresident)k
0inputs+0outputs (0major+14072minor)pagefaults 0swaps

$ time pg_dump -s t100000 > /dev/null
1.97user 0.32system 0:21.39elapsed 10%CPU (0avgtext+0avgdata
517932maxresident)k
0inputs+0outputs (0major+134892minor)pagefaults 0swaps

5) Summary

                 HEAD    patch
  1k tables   0:00.36  0:00.28
 10k tables   0:05.04  0:02.17
100k tables   7:50.09  0:21.39

Seems we get a very good performance gain using Aleksander's patch. I
used "-s" to avoid wasting time issuing COPY for each relation (even
though all are empty) and to highlight the difference due to the round
trip for each LOCK TABLE. This patch will also improve pg_upgrade
execution over databases with thousands of relations.

Regards,

--
Fabrízio de Royes Mello


Re: [PATCH] pg_dump: lock tables in batches

2022-12-07 Thread Fabrízio de Royes Mello
On Wed, Dec 7, 2022 at 12:09 PM Aleksander Alekseev <aleksan...@timescale.com> wrote:
>
> Hi hackers,
>
> A colleague of mine reported a slight inconvenience with pg_dump.
>
> He is dumping the data from a remote server. There are several
> thousands of tables in the database. Making a dump locally and/or
> using pg_basebackup and/or logical replication is not an option. So
> what pg_dump currently does is sending LOCK TABLE queries one after
> another. Every query needs an extra round trip. So if we have let's
> say 2000 tables and every round trip takes 100 ms then ~3.5 minutes
> are spent in the not most useful way.
>
> What he proposes is taking the locks in batches. I.e. instead of:
>
> LOCK TABLE foo IN ACCESS SHARE MODE;
> LOCK TABLE bar IN ACCESS SHARE MODE;
>
> do:
>
> LOCK TABLE foo, bar, ... IN ACCESS SHARE MODE;
>
> The proposed patch makes this change. It's pretty straightforward and
> as a side effect saves a bit of network traffic too.
>

+1 for that change. It will improve the dump for databases with thousands
of relations.

The code LGTM; it passes all tests and compiles without any warnings.
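
To illustrate the shape of the change for anyone skimming: in pg_dump
terms the batching boils down to something like this (rough sketch, not
the actual patch; it assumes the existing fmtQualifiedId() and
ExecuteSqlStatement() helpers and the tblinfo/numTables arrays):

/* Sketch only: one LOCK TABLE statement -- one round trip -- for all
 * tables, instead of one statement per table. */
PQExpBuffer q = createPQExpBuffer();
bool		first = true;

appendPQExpBufferStr(q, "LOCK TABLE ");
for (int i = 0; i < numTables; i++)
{
	TableInfo  *tbinfo = &tblinfo[i];

	if (!tbinfo->dobj.dump)
		continue;			/* skip tables we are not dumping */
	if (!first)
		appendPQExpBufferStr(q, ", ");
	appendPQExpBufferStr(q,
						 fmtQualifiedId(tbinfo->dobj.namespace->dobj.name,
										tbinfo->dobj.name));
	first = false;
}
appendPQExpBufferStr(q, " IN ACCESS SHARE MODE");
ExecuteSqlStatement(fout, q->data);
destroyPQExpBuffer(q);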

Regards,

-- 
Fabrízio de Royes Mello


Re: trigger example for plsample

2022-04-06 Thread Fabrízio de Royes Mello
On Wed, Apr 6, 2022 at 5:44 PM Mark Wong  wrote:
>
> On Thu, Mar 10, 2022 at 06:36:44PM -0500, Chapman Flack wrote:
> > On 03/02/22 15:12, Mark Wong wrote:
> >
> > > I've attached v2, which reduces the output:
> > >
> > > * Removing the notices for the text body, and the "compile" message.
> > > * Replaced the notice for "compile" message with a comment as a
> > >   placeholder for where a compiling code or checking a cache may go.
> > > * Reducing the number of rows inserted into the table, thus reducing
> > >   the number of notice messages about which code path is taken.
> >
> > I think the simplifying assumption of a simple interpreted language
> > allows a lot more of this code to go away. I mean more or less that whole first
> > PG_TRY...PG_END_TRY block, which could be replaced with a comment saying
> > something like
> >
> >   The source text may be augmented here, such as by wrapping it as the
> >   body of a function in the target language, prefixing a parameter list
> >   with names like TD_name, TD_relid, TD_table_name, TD_table_schema,
> >   TD_event, TD_when, TD_level, TD_NEW, TD_OLD, and args, using whatever
> >   types in the target language are convenient. The augmented text can be
> >   cached in a longer-lived memory context, or, if the target language
> >   uses a compilation step, that can be done here, caching the result
> >   of the compilation.
> >
> > That would leave only the later PG_TRY block where the function gets
> > "executed". That could stay largely as is, but should probably also have
> > a comment within it, something like
> >
> >   Here the function (the possibly-augmented source text, or the result
> >   of compilation if the target language uses such a step) should be
> >   executed, after binding these values from the TriggerData struct to
> >   the expected parameters.
> >
> > That should make the example shorter and clearer, and preserve the
> > output tested by the regression test. Trying to show much more than
> > that involves assumptions about what the PL's target language syntax
> > looks like, how its execution engine works and parameters are bound,
> > and so on, and that is
> > likely to just be distracting, IMV.
>
> I think I've applied all of these suggestions and attached a new patch.
>

Cool... I also had a look at the code. To me everything is OK!!!

Regards,

--
Fabrízio de Royes Mello


Re: Add pg_freespacemap extension sql test

2022-03-23 Thread Fabrízio de Royes Mello
On Wed, Mar 23, 2022 at 3:05 AM Michael Paquier  wrote:
>
> After review, I don't like much the idea of allowing concurrent
> autovacuums to run in parallel of the table(s) of this test, so we'd
> better disable it explicitly.

Make sense.

>  "t1" is also a very generic name to use in a regression test.

Agreed!

>  Another thing that itched me is that we
> could also test more with indexes, particularly with btree, BRIN and
> hash (the latter should not have a FSM with 10 pages as per the first
> group batch, and each one has a stable initial state).

What about GIN/GIST indexes?

> I have extended the set of tests as of the attached, running these
> across everything I could (CI, all my hosts including Windows, macos,
> Linux).  We could do more later, of course, but this looks enough to
> me as a first step.  And I think that this will not upset the
> buildfarm.

Also LGTM.

Regards,

--
Fabrízio de Royes Mello


Re: speed up a logical replica setup

2022-03-21 Thread Fabrízio de Royes Mello
On Fri, 18 Mar 2022 at 19:34 Andrew Dunstan  wrote:

>
> On 3/15/22 09:51, Peter Eisentraut wrote:
> > On 21.02.22 13:09, Euler Taveira wrote:
> >> A new tool called pg_subscriber does this conversion and is tightly
> >> integrated
> >> with Postgres.
> >
> > Are we comfortable with the name pg_subscriber?  It seems too general.
> > Are we planning other subscriber-related operations in the future?  If
> > so, we should at least make this one use a --create option or
> > something like that.
>
>
> Not really sold on the name (and I didn't much like the name
> pglogical_create_subscriber either, although it's a cool facility and
> I'm happy to see us adopting something like it).
>
> ISTM we should have a name that conveys that we are *converting* a
> replica or equivalent to a subscriber.
>
>
Some time ago I did a POC on it [1] and I used the name pg_create_subscriber

[1]
https://github.com/fabriziomello/pg_create_subscriber
-- 
Fabrízio de Royes Mello


Re: Add pg_freespacemap extension sql test

2022-03-19 Thread Fabrízio de Royes Mello
On Sat, Mar 19, 2022 at 1:18 PM Dong Wook Lee  wrote:
>
> > Well, my guess is that you basically just care about being able to
> > detect if there is free space in the map or not, which goes down to
> > detecting if pg_freespace() returns 0 or a number strictly higher than
> > 0, so wouldn't it be enough to stick some > 0 in your test queries?
>
> I edited the previous patch file.
> Am I correct in understanding that?
>

I think what Michael meant is something like the attached.

Regards,

--
Fabrízio de Royes Mello
diff --git a/contrib/pg_freespacemap/Makefile b/contrib/pg_freespacemap/Makefile
index da40b80c7c..2d525a1284 100644
--- a/contrib/pg_freespacemap/Makefile
+++ b/contrib/pg_freespacemap/Makefile
@@ -10,6 +10,8 @@ DATA = pg_freespacemap--1.1.sql pg_freespacemap--1.1--1.2.sql \
 	pg_freespacemap--1.0--1.1.sql
 PGFILEDESC = "pg_freespacemap - monitoring of free space map"
 
+REGRESS = pg_freespacemap
+
 ifdef USE_PGXS
 PG_CONFIG = pg_config
 PGXS := $(shell $(PG_CONFIG) --pgxs)
diff --git a/contrib/pg_freespacemap/expected/pg_freespacemap.out b/contrib/pg_freespacemap/expected/pg_freespacemap.out
new file mode 100644
index 00..9bf8055247
--- /dev/null
+++ b/contrib/pg_freespacemap/expected/pg_freespacemap.out
@@ -0,0 +1,36 @@
+CREATE EXTENSION pg_freespacemap;
+CREATE TABLE t1(c1 int);
+INSERT INTO t1 VALUES (generate_series(1, 1000));
+VACUUM t1;
+SELECT blkno, avail > 0 AS exists FROM pg_freespace('t1');
+ blkno | exists 
+-------+--------
+     0 | f
+     1 | f
+     2 | f
+     3 | f
+     4 | t
+(5 rows)
+
+DELETE FROM t1 WHERE c1 <= 10;
+VACUUM t1;
+SELECT blkno, avail > 0 AS exists FROM pg_freespace('t1');
+ blkno | exists 
+-------+--------
+     0 | t
+     1 | f
+     2 | f
+     3 | f
+     4 | t
+(5 rows)
+
+SELECT freespace > 0 FROM pg_freespace('t1', 0) AS freespace;
+ ?column? 
+----------
+ t
+(1 row)
+
+SELECT * FROM pg_freespace('t1', -1);
+ERROR:  invalid block number
+SELECT * FROM pg_freespace('t1', 4294967295);
+ERROR:  invalid block number
diff --git a/contrib/pg_freespacemap/sql/pg_freespacemap.sql b/contrib/pg_freespacemap/sql/pg_freespacemap.sql
new file mode 100644
index 00..f416a29f12
--- /dev/null
+++ b/contrib/pg_freespacemap/sql/pg_freespacemap.sql
@@ -0,0 +1,17 @@
+CREATE EXTENSION pg_freespacemap;
+
+CREATE TABLE t1(c1 int);
+
+INSERT INTO t1 VALUES (generate_series(1, 1000));
+VACUUM t1;
+
+SELECT blkno, avail > 0 AS exists FROM pg_freespace('t1');
+
+DELETE FROM t1 WHERE c1 <= 10;
+VACUUM t1;
+
+SELECT blkno, avail > 0 AS exists FROM pg_freespace('t1');
+SELECT freespace > 0 FROM pg_freespace('t1', 0) AS freespace;
+
+SELECT * FROM pg_freespace('t1', -1);
+SELECT * FROM pg_freespace('t1', 4294967295);


Re: Size functions inconsistent results

2022-02-25 Thread Fabrízio de Royes Mello
On Fri, Feb 25, 2022 at 12:10 PM Japin Li  wrote:
>
>
> I think you forgot the index size of the toast table.
>
> with relations as (
>   select schemaname, relname, relid
>   from pg_stat_user_tables
>   where relname = 'test_size'
> ),
> sizes as (
>   select
> schemaname,
> r.relname,
>
> pg_total_relation_size(relid) AS total_bytes,
>
> pg_relation_size(relid, 'main') +
> pg_relation_size(relid, 'init') +
> pg_relation_size(relid, 'fsm') +
> pg_relation_size(relid, 'vm') AS heap_bytes,
> pg_indexes_size(relid) AS index_bytes,
> pg_table_size(reltoastrelid) + pg_indexes_size(reltoastrelid) AS
> toast_bytes
>   from relations r
>   join pg_class on pg_class.oid = r.relid
> )
> select
>   total_bytes, heap_bytes, index_bytes, toast_bytes,
>   (total_bytes = (heap_bytes+index_bytes+toast_bytes)) as "Equal?",
>   (total_bytes - (heap_bytes+index_bytes+toast_bytes)) as "Diff"
> from sizes;
>

Ahh perfect... thanks... makes sense because pg_table_size doesn't
compute the index sizes. Now it works:

fabrizio=# with relations as (
  select schemaname, relname, relid
  from pg_stat_user_tables
  where relname = 'test_size'
),
sizes as (
  select
schemaname,
r.relname,

pg_total_relation_size(relid) AS total_bytes,

pg_relation_size(relid, 'main') +
pg_relation_size(relid, 'init') +
pg_relation_size(relid, 'fsm') +
pg_relation_size(relid, 'vm') AS heap_bytes,
pg_indexes_size(relid) AS index_bytes,
pg_table_size(reltoastrelid) + pg_indexes_size(reltoastrelid) AS
toast_bytes
  from relations r
  join pg_class on pg_class.oid = r.relid
)
select
  total_bytes, heap_bytes, index_bytes, toast_bytes,
  (total_bytes = (heap_bytes+index_bytes+toast_bytes)) as "Equal?",
  (total_bytes - (heap_bytes+index_bytes+toast_bytes)) as "Diff"
from sizes;
 total_bytes | heap_bytes | index_bytes | toast_bytes | Equal? | Diff
-------------+------------+-------------+-------------+--------+------
    14622720 |      65536 |       40960 |    14516224 | t      |    0
(1 row)

Regards,

--
Fabrízio de Royes Mello


Size functions inconsistent results

2022-02-25 Thread Fabrízio de Royes Mello
Hi all,

While doing some work using our functions [1] to calculate relation sizes
I noticed an inconsistency between pg_total_relation_size and calculating
everything separately. Have a look at this example:

fabrizio=# create table test_size (id bigserial primary key, toast_column
text);
CREATE TABLE

fabrizio=# insert into test_size (toast_column)
  select repeat('X'::text, pg_size_bytes('1MB')::integer)
  from generate_series(1,1000);
INSERT 0 1000

fabrizio=# with relations as (
  select schemaname, relname, relid
  from pg_stat_user_tables
  where relname = 'test_size'
),
sizes as (
  select
schemaname,
r.relname,

pg_total_relation_size(relid) AS total_bytes,

pg_relation_size(relid, 'main') +
pg_relation_size(relid, 'init') +
pg_relation_size(relid, 'fsm') +
pg_relation_size(relid, 'vm') AS heap_bytes,
pg_indexes_size(relid) AS index_bytes,
pg_table_size(reltoastrelid) AS toast_bytes
  from relations r
  join pg_class on pg_class.oid = r.relid
)
select
  total_bytes, heap_bytes, index_bytes, toast_bytes,
  (total_bytes = (heap_bytes+index_bytes+toast_bytes)) as "Equal?",
  (total_bytes - (heap_bytes+index_bytes+toast_bytes)) as "Diff"
from sizes;
 total_bytes | heap_bytes | index_bytes | toast_bytes | Equal? |  Diff
-------------+------------+-------------+-------------+--------+--------
    14000128 |      90112 |       40960 |    13688832 | f      | 180224
(1 row)

I want to calculate HEAP, INDEXES and TOAST (including its indexes) sizes
separately, but it seems a bit inconsistent with pg_total_relation_size.

Is it correct or am I missing something?

Regards,

[1]
https://www.postgresql.org/docs/current/functions-admin.html#FUNCTIONS-ADMIN-DBSIZE

-- 
Fabrízio de Royes Mello


Re: Remove redundant MemoryContextSwith in BeginCopyFrom

2022-01-19 Thread Fabrízio de Royes Mello
On Wed, Jan 19, 2022 at 11:21 AM Japin Li  wrote:
>
>
> Hi, hackers
>
> When I read the code of COPY ... FROM ..., I find there is a redundant
> MemoryContextSwitchTo() in BeginCopyFrom().  In BeginCopyFrom, it creates
> a COPY memory context and then switches to it, in the middle of this
> function, it switches to the oldcontext and immediately switches back to
> COPY memory context, IMO, this is redundant, and can be removed safely.
>

LGTM (it passed all regression tests without any issue).
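
In other words, the redundant pair looks like this (illustrative sketch,
not the exact BeginCopyFrom() code):

/* BeginCopyFrom(), schematically */
oldcontext = MemoryContextSwitchTo(cstate->copycontext);
/* ... */
MemoryContextSwitchTo(oldcontext);			/* redundant ... */
MemoryContextSwitchTo(cstate->copycontext); /* ... switch pair */
/* ... */
MemoryContextSwitchTo(oldcontext);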

--
   Fabrízio de Royes Mello
   Timbira - http://www.timbira.com.br/
   PostgreSQL: Consulting, Development, 24x7 Support and Training


Re: Converting WAL to SQL

2022-01-05 Thread Fabrízio de Royes Mello
On Wed, Jan 5, 2022 at 2:19 PM Julien Rouhaud  wrote:
>
> On Thu, Jan 6, 2022 at 12:19 AM Bruce Momjian  wrote:
> >
> > On Tue, Jan  4, 2022 at 10:47:47AM -0300, Fabrízio de Royes Mello wrote:
> > >
> > >
> > > What we did was decode the 9.6 wal files and apply transactions to the
> > > old 9.2 to keep it in sync with the new promoted version. This was our
> > > "rollback" strategy if something went wrong with the new 9.6 version.
> >
> > How did you deal with the issue that SQL isn't granular enough (vs.
> > row-level changes) to reproduce the result reliably, as outlined here?
>
> This is a logical decoding plugin, so it's SQL containing decoded
> row-level changes.  It will behave the same as a
> publication/suscription (apart from being far less performant, due to
> being plain SQL of course).

Exactly!

--
   Fabrízio de Royes Mello
   Timbira - http://www.timbira.com.br/
   PostgreSQL: Consulting, Development, 24x7 Support and Training


Re: Converting WAL to SQL

2022-01-04 Thread Fabrízio de Royes Mello
On Tue, Jan 4, 2022 at 9:22 AM Michael Paquier  wrote:
>
> On Wed, Dec 29, 2021 at 08:50:23AM -0300, Fabrízio de Royes Mello wrote:
> > Try this:
> > https://github.com/michaelpq/pg_plugins/tree/main/decoder_raw
>
> You may want to be careful with this, and I don't know if anybody is
> using that for serious cases so some spots may have been missed.
>

I used it in the past during a major upgrade process from 9.2 to 9.6.

What we did was decode the 9.6 wal files and apply transactions to the
old 9.2 to keep it in sync with the new promoted version. This was our
"rollback" strategy if something went wrong with the new 9.6 version.

Regards,

-- 
   Fabrízio de Royes Mello
   Timbira - http://www.timbira.com.br/
   PostgreSQL: Consulting, Development, 24x7 Support and Training


Re: Converting WAL to SQL

2021-12-29 Thread Fabrízio de Royes Mello
On Wed, 29 Dec 2021 at 03:18 rajesh singarapu 
wrote:

> Hi Hackers,
>
> I am wondering if we have a mechanism to convert WAL records to SQL
> statements.
>
> I am able to use logical decoders like wal2json or test_decoding for
> converting WAL to readable format, but I am looking for a way to convert
> WAL to sql statements.
>
>
>
Try this:
https://github.com/michaelpq/pg_plugins/tree/main/decoder_raw

-- 
   Fabrízio de Royes Mello
   Timbira - http://www.timbira.com.br/
   PostgreSQL: Consulting, Development, 24x7 Support and Training


Re: Minimal logical decoding on standbys

2021-09-17 Thread Fabrízio de Royes Mello
On Wed, Sep 15, 2021 at 8:36 AM Drouvot, Bertrand 
wrote:
>
> Another rebase attached.
>
> The patch proposal to address Andres' walsender corner cases is still a
> dedicated commit (as I think it may be easier to discuss).
>

Ran one more battery of tests and everything went well...

But while doing some manual tests:

1. Setup master/replica (wal_level=logical, hot_standby_feedback=on, etc)
2. Initialize the master instance: "pgbench -i -s10" on the master
3. Terminal1: execute "pgbench -c20 -T 2000"
4. Terminal2: create the logical replication slot:

271480 (replica) fabrizio=# select * from
pg_create_logical_replication_slot('test_logical', 'test_decoding');
-[ RECORD 1 ]---
slot_name | test_logical
lsn   | 1/C7C59E0

Time: 37658.725 ms (00:37.659)

5. Terminal3: start the pg_recvlogical

~/pgsql
➜ pg_recvlogical -p 5433 -S test_logical -d fabrizio -f - --start
pg_recvlogical: error: could not send replication command
"START_REPLICATION SLOT "test_logical" LOGICAL 0/0": ERROR:  replication
slot "test_logical" is active for PID 271480
pg_recvlogical: disconnected; waiting 5 seconds to try again
pg_recvlogical: error: could not send replication command
"START_REPLICATION SLOT "test_logical" LOGICAL 0/0": ERROR:  replication
slot "test_logical" is active for PID 271480
pg_recvlogical: disconnected; waiting 5 seconds to try again
pg_recvlogical: error: could not send replication command
"START_REPLICATION SLOT "test_logical" LOGICAL 0/0": ERROR:  replication
slot "test_logical" is active for PID 271480
pg_recvlogical: disconnected; waiting 5 seconds to try again
pg_recvlogical: error: could not send replication command
"START_REPLICATION SLOT "test_logical" LOGICAL 0/0": ERROR:  replication
slot "test_logical" is active for PID 271480
pg_recvlogical: disconnected; waiting 5 seconds to try again
pg_recvlogical: error: could not send replication command
"START_REPLICATION SLOT "test_logical" LOGICAL 0/0": ERROR:  replication
slot "test_logical" is active for PID 271480
pg_recvlogical: disconnected; waiting 5 seconds to try again
pg_recvlogical: error: could not send replication command
"START_REPLICATION SLOT "test_logical" LOGICAL 0/0": ERROR:  replication
slot "test_logical" is active for PID 271480
pg_recvlogical: disconnected; waiting 5 seconds to try again
pg_recvlogical: error: could not send replication command
"START_REPLICATION SLOT "test_logical" LOGICAL 0/0": ERROR:  replication
slot "test_logical" is active for PID 271480
pg_recvlogical: disconnected; waiting 5 seconds to try again
BEGIN 3767318
COMMIT 3767318
BEGIN 3767319
COMMIT 3767319
BEGIN 3767320
table public.pgbench_history: TRUNCATE: (no-flags)
COMMIT 3767320
BEGIN 3767323
table public.pgbench_accounts: UPDATE: aid[integer]:398507 bid[integer]:4
abalance[integer]:-1307 filler[character]:'
   '
table public.pgbench_tellers: UPDATE: tid[integer]:17 bid[integer]:2
tbalance[integer]:-775356 filler[character]:null
table public.pgbench_branches: UPDATE: bid[integer]:4
bbalance[integer]:1862180 filler[character]:null
table public.pgbench_history: INSERT: tid[integer]:17 bid[integer]:4
aid[integer]:398507 delta[integer]:182 mtime[timestamp without time
zone]:'2021-09-17 17:25:19.811239' filler[character]:null
COMMIT 3767323
BEGIN 3767322
table public.pgbench_accounts: UPDATE: aid[integer]:989789 bid[integer]:10
abalance[integer]:1224 filler[character]:'
   '
table public.pgbench_tellers: UPDATE: tid[integer]:86 bid[integer]:9
tbalance[integer]:-283737 filler[character]:null
table public.pgbench_branches: UPDATE: bid[integer]:9
bbalance[integer]:1277609 filler[character]:null
table public.pgbench_history: INSERT: tid[integer]:86 bid[integer]:9
aid[integer]:989789 delta[integer]:-2934 mtime[timestamp without time
zone]:'2021-09-17 17:25:19.811244' filler[character]:null
COMMIT 3767322

Even with activity on the primary, the creation of the logical replication
slot took ~38s. Can we do something about that, or should we clarify the
documentation even further?
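
As a side note, my understanding (an assumption on my part, not a verified
fix) is that the slot creation is mostly waiting for an xl_running_xacts
record to be replayed from the primary, so forcing one on the primary may
shorten the wait:

-- on the PRIMARY: a checkpoint logs a standby snapshot (running xacts)
CHECKPOINT;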

Regards,

-- 
   Fabrízio de Royes Mello Timbira - http://www.timbira.com.br/
   PostgreSQL: Consultoria, Desenvolvimento, Suporte 24x7 e Treinamento


Re: Is SPI + Procedures (with COMMIT) inside a bgworker broken?

2021-09-13 Thread Fabrízio de Royes Mello
On Mon, Sep 13, 2021 at 4:30 PM Tom Lane  wrote:
>
> The direct cause of that is that SPI_execute() doesn't permit the called
> query to perform COMMIT/ROLLBACK, which is because most callers would fail
> to cope with that.  You can instruct SPI to allow that by replacing the
> SPI_execute() call with something like
>
> SPIExecuteOptions options;
>
> ...
> memset(&options, 0, sizeof(options));
> options.allow_nonatomic = true;
>
> ret = SPI_execute_extended(buf.data, &options);
>

I completely forgot about the SPI execute options... Thanks for the
explanation!!!
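
For the archives, here is roughly how that fits into the worker's main loop
(a minimal sketch assuming a build where SPI_execute_extended() and
SPIExecuteOptions.allow_nonatomic exist, i.e. master at the time of this
thread; the procedure name is illustrative, and, as discussed below, the
snapshot management around this call is still the tricky part):

SPIExecuteOptions options;
int         ret;

SetCurrentStatementStartTimestamp();
StartTransactionCommand();
SPI_connect_ext(SPI_OPT_NONATOMIC);     /* non-atomic SPI: callee may COMMIT */

memset(&options, 0, sizeof(options));
options.allow_nonatomic = true;         /* allow COMMIT/ROLLBACK inside the CALL */

ret = SPI_execute_extended("CALL \"schema1\".\"counted_proc\"()", &options);
if (ret != SPI_OK_UTILITY)
    elog(FATAL, "failed to call procedure: error code %d", ret);

SPI_finish();
CommitTransactionCommand();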


> However, that's not enough to make this example work :-(.
> I find that it still fails inside the procedure's COMMIT,
> with
>
> 2021-09-13 15:14:54.775 EDT worker_spi[476310] ERROR:  portal snapshots
(0) did not account for all active snapshots (1)
> 2021-09-13 15:14:54.775 EDT worker_spi[476310] CONTEXT:  PL/pgSQL
function schema4.counted_proc() line 1 at COMMIT
> SQL statement "CALL "schema4"."counted_proc"()"
>
> I think what this indicates is that worker_spi_main's cavalier
> management of the active snapshot isn't up to snuff for this
> use-case.  The error is coming from ForgetPortalSnapshots, which
> is expecting that all active snapshots are attached to Portals;
> but that one isn't.
>

That is exactly the root cause of all my investigation.

At Timescale we have a scheduler (background worker) that launches another
background worker to "execute a job", and executing a job means calling a
function [1] or a procedure [2] directly, without SPI.

But now a user raised an issue about snapshots [3] and when I saw the code
for the first time I tried to use SPI and it didn't work as expected.

Even tweaking worker_spi to execute the procedure without SPI by calling
ExecuteCallStmt (attached), we end up in the same situation with the active
snapshots:

2021-09-13 20:14:36.654 -03 [21483] LOG:  worker_spi worker 2 initialized
with schema2.counted
2021-09-13 20:14:36.655 -03 [21484] LOG:  worker_spi worker 1 initialized
with schema1.counted
2021-09-13 20:14:36.657 -03 [21483] ERROR:  portal snapshots (0) did not
account for all active snapshots (1)
2021-09-13 20:14:36.657 -03 [21483] CONTEXT:  PL/pgSQL function
schema2.counted_proc() line 1 at COMMIT
2021-09-13 20:14:36.657 -03 [21484] ERROR:  portal snapshots (0) did not
account for all active snapshots (1)
2021-09-13 20:14:36.657 -03 [21484] CONTEXT:  PL/pgSQL function
schema1.counted_proc() line 1 at COMMIT
2021-09-13 20:14:36.659 -03 [21476] LOG:  background worker "worker_spi"
(PID 21483) exited with exit code 1
2021-09-13 20:14:36.659 -03 [21476] LOG:  background worker "worker_spi"
(PID 21484) exited with exit code 1


> Probably the most appropriate fix is to make worker_spi_main
> set up a Portal to run the query inside of.  There are other
> bits of code that are not happy if they're not inside a Portal,
> so if you're hoping to run arbitrary SQL this way, sooner or
> later you're going to have to cross that bridge.
>

I started digging into it [4] by creating a Portal from scratch to execute
the function or procedure, and it worked.

We're wondering: can we avoid going through the parser for PortalRun?
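
For the archives, the portal-based approach looks roughly like this (a rough
sketch against PG 14-era internals; error handling, resource owner, and
memory context details are omitted, and the statement text is illustrative).
Note that it still goes through the raw parser, which is exactly the step we
were wondering about avoiding:

const char *sql = "CALL \"schema1\".\"counted_proc\"()";
List       *raw_list = pg_parse_query(sql);
RawStmt    *raw = linitial_node(RawStmt, raw_list);
List       *stmts;
Portal      portal;
QueryCompletion qc;

/* analyze/rewrite and plan, as exec_simple_query() would */
stmts = pg_plan_queries(pg_analyze_and_rewrite(raw, sql, NULL, 0, NULL),
                        sql, CURSOR_OPT_PARALLEL_OK, NULL);

portal = CreateNewPortal();
portal->visible = false;        /* keep it out of pg_cursors */
PortalDefineQuery(portal, NULL, sql, CMDTAG_CALL, stmts, NULL);
PortalStart(portal, NULL, 0, InvalidSnapshot);
(void) PortalRun(portal, FETCH_ALL, true /* isTopLevel */, true /* run_once */,
                 None_Receiver, None_Receiver, &qc);
PortalDrop(portal, false);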

Regards,

[1]
https://github.com/timescale/timescaledb/blob/master/tsl/src/bgw_policy/job.c#L726
[2]
https://github.com/timescale/timescaledb/blob/master/tsl/src/bgw_policy/job.c#L741
[3] https://github.com/timescale/timescaledb/issues/3545
[4]
https://github.com/fabriziomello/timescaledb/blob/issue/3545/tsl/src/bgw_policy/job.c#L824

-- 
Fabrízio de Royes Mello
diff --git a/src/test/modules/worker_spi/worker_spi.c b/src/test/modules/worker_spi/worker_spi.c
index d0acef2652..dace150fa7 100644
--- a/src/test/modules/worker_spi/worker_spi.c
+++ b/src/test/modules/worker_spi/worker_spi.c
@@ -42,6 +42,12 @@
 #include "utils/snapmgr.h"
 #include "tcop/utility.h"
 
+#include "nodes/makefuncs.h"
+#include "nodes/nodes.h"
+#include "nodes/pg_list.h"
+#include "parser/parse_func.h"
+#include "commands/defrem.h"
+
 PG_MODULE_MAGIC;
 
 PG_FUNCTION_INFO_V1(worker_spi_launch);
@@ -59,6 +65,7 @@ typedef struct worktable
 {
 	const char *schema;
 	const char *name;
+	const char *proc;
 } worktable;
 
 /*
@@ -108,8 +115,20 @@ initialize_worker_spi(worktable *table)
 		 "		type text CHECK (type IN ('total', 'delta')), "
 		 "		value	integer)"
 		 "CREATE UNIQUE INDEX \"%s_unique_total\" ON \"%s\" (type) "
-		 "WHERE type = 'total'",
-		 table->schema, table->name, table->name, table->name);
+		 "WHERE type = 'total'; "
+		 "CREATE PROCEDURE \"%s\".\"%s\"() AS $$ &

Is SPI + Procedures (with COMMIT) inside a bgworker broken?

2021-09-13 Thread Fabrízio de Royes Mello
Hi all,

I'm trying to execute a PROCEDURE (with COMMIT inside) called from a
background worker using SPI but I'm always getting the error below:

2021-09-13 09:36:43.568 -03 [23845] LOG:  worker_spi worker 2 initialized
with schema2.counted
2021-09-13 09:36:43.568 -03 [23846] LOG:  worker_spi worker 1 initialized
with schema1.counted
2021-09-13 09:36:43.571 -03 [23846] ERROR:  invalid transaction termination
2021-09-13 09:36:43.571 -03 [23846] CONTEXT:  PL/pgSQL function
schema1.counted_proc() line 1 at COMMIT
SQL statement "CALL "schema1"."counted_proc"()"
2021-09-13 09:36:43.571 -03 [23846] STATEMENT:  CALL
"schema1"."counted_proc"()
2021-09-13 09:36:43.571 -03 [23845] ERROR:  invalid transaction termination
2021-09-13 09:36:43.571 -03 [23845] CONTEXT:  PL/pgSQL function
schema2.counted_proc() line 1 at COMMIT
SQL statement "CALL "schema2"."counted_proc"()"
2021-09-13 09:36:43.571 -03 [23845] STATEMENT:  CALL
"schema2"."counted_proc"()
2021-09-13 09:36:43.571 -03 [23838] LOG:  background worker "worker_spi"
(PID 23845) exited with exit code 1
2021-09-13 09:36:43.571 -03 [23838] LOG:  background worker "worker_spi"
(PID 23846) exited with exit code 1

I changed the worker_spi example (attached) a bit to execute a simple
procedure. Even using SPI_connect_ext(SPI_OPT_NONATOMIC) I'm getting the
error "invalid transaction termination".

Is there something wrong with the attached example, or am I missing
something?

Regards,

-- 
Fabrízio de Royes Mello
diff --git a/src/test/modules/worker_spi/worker_spi.c b/src/test/modules/worker_spi/worker_spi.c
index d0acef2652..0b9e4e9335 100644
--- a/src/test/modules/worker_spi/worker_spi.c
+++ b/src/test/modules/worker_spi/worker_spi.c
@@ -108,8 +108,20 @@ initialize_worker_spi(worktable *table)
 		 "		type text CHECK (type IN ('total', 'delta')), "
 		 "		value	integer)"
 		 "CREATE UNIQUE INDEX \"%s_unique_total\" ON \"%s\" (type) "
-		 "WHERE type = 'total'",
-		 table->schema, table->name, table->name, table->name);
+		 "WHERE type = 'total'; "
+		 "CREATE PROCEDURE \"%s\".\"%s_proc\"() AS $$ "
+		 "DECLARE "
+		 "  i INTEGER; "
+		 "BEGIN "
+		 "  FOR i IN 1..10 "
+		 "  LOOP "
+		 "INSERT INTO \"%s\".\"%s\" VALUES ('delta', i); "
+		 "COMMIT; "
+		 "  END LOOP; "
+		 "END; "
+		 "$$ LANGUAGE plpgsql; ",
+		 table->schema, table->name, table->name, table->name,
+		 table->schema, table->name, table->schema, table->name);
 
 		/* set statement start time */
 		SetCurrentStatementStartTimestamp();
@@ -168,20 +180,8 @@ worker_spi_main(Datum main_arg)
 	table->name = quote_identifier(table->name);
 
 	initStringInfo(&buf);
-	appendStringInfo(&buf,
-	 "WITH deleted AS (DELETE "
-	 "FROM %s.%s "
-	 "WHERE type = 'delta' RETURNING value), "
-	 "total AS (SELECT coalesce(sum(value), 0) as sum "
-	 "FROM deleted) "
-	 "UPDATE %s.%s "
-	 "SET value = %s.value + total.sum "
-	 "FROM total WHERE type = 'total' "
-	 "RETURNING %s.value",
-	 table->schema, table->name,
-	 table->schema, table->name,
-	 table->name,
-	 table->name);
+
+	appendStringInfo(&buf, "CALL \"%s\".\"%s_proc\"()", table->schema, table->name);
 
 	/*
 	 * Main loop: do this until SIGTERM is received and processed by
@@ -232,7 +232,7 @@ worker_spi_main(Datum main_arg)
 		 */
 		SetCurrentStatementStartTimestamp();
 		StartTransactionCommand();
-		SPI_connect();
+		SPI_connect_ext(SPI_OPT_NONATOMIC);
 		PushActiveSnapshot(GetTransactionSnapshot());
 		debug_query_string = buf.data;
 		pgstat_report_activity(STATE_RUNNING, buf.data);
@@ -240,30 +240,15 @@ worker_spi_main(Datum main_arg)
 		/* We can now execute queries via SPI */
 		ret = SPI_execute(buf.data, false, 0);
 
-		if (ret != SPI_OK_UPDATE_RETURNING)
-			elog(FATAL, "cannot select from table %s.%s: error code %d",
- table->schema, table->name, ret);
-
-		if (SPI_processed > 0)
-		{
-			bool		isnull;
-			int32		val;
-
-			val = DatumGetInt32(SPI_getbinval(SPI_tuptable->vals[0],
-			  SPI_tuptable->tupdesc,
-			  1, &isnull));
-			if (!isnull)
-elog(LOG, "%s: count in %s.%s is now %d",
-	 MyBgworkerEntry->bgw_name,
-	 table->schema, table->name, val);
-		}
+		if (ret != SPI_OK_UTILITY)
+			elog(FATAL, "failed to call procedure");
 
 		/*
 		 * And finish our transaction.
 		 */
-		SPI_finish();
 		PopActiveSnapshot();
 		CommitTransactionCommand();
+		SPI_finish();
 		debug_query_string = NULL;
 		pgstat_report_stat(false);
 		pgstat_report_activity(STATE_IDLE, NULL);


Re: [GSoC 2021 project summary] PL/Julia

2021-08-24 Thread Fabrízio de Royes Mello
On Tue, Aug 24, 2021 at 5:26 AM Konstantina Skovola 
wrote:
>
> Hello hackers,
>
> Here is a summary of what was implemented over the summer in PL/Julia:
>
> 1. Added support for more datatypes as input and output:
> NULL, boolean, numeric types, composite types, arrays of base types can
now be passed as input arguments to PL/Julia functions. Users can also
return the above, or sets of the above from PL/Julia UDFs.
> 2. Added trigger support - users can write trigger functions in PL/Julia
> 3. Added event trigger support
> 4. Added support for the DO command
> 5. Added functions for database access from PL/Julia:
> spi_exec(query, limit) and spi_exec(query) for SQL-statement execution,
> spi_fetchrow(cursor) and spi_cursor_close(cursor) to return rows and to
close the cursor respectively,
> spi_prepare(query, argtypes) to prepare and save an execution plan and
> spi_exec_prepared(plan, args, limit) to execute a previously prepared
plan.
>
> A brief presentation of the above
>
https://docs.google.com/presentation/d/1cTnsUWiH6o0YH6MlZoPLofna3eNT3P3r9HSL9Dyte5U/edit?usp=sharing
> Documentation with use examples
> https://gitlab.com/konskov/pljulia/-/blob/main/README.md
>
> Currently the extension works for version 13 and Julia versions >= 1.6
(Thanks to Imre Samu for testing!)
>

Awesome, Konstantina, it was a pleasure working with you on this project.

Looking forward to your next contributions to PostgreSQL.

Regards,

--
   Fabrízio de Royes Mello Timbira - http://www.timbira.com.br/
   PostgreSQL: Consultoria, Desenvolvimento, Suporte 24x7 e Treinamento


Re: Minimal logical decoding on standbys

2021-03-24 Thread Fabrízio de Royes Mello
On Wed, Mar 24, 2021 at 3:57 AM Drouvot, Bertrand 
wrote:
>
> Thanks for pointing out, fixed in v14 attached.
>

Thanks... now everything is working as expected... changed the status to
Ready for Commiter:
https://commitfest.postgresql.org/32/2968/

Regards,

-- 
   Fabrízio de Royes Mello
   PostgreSQL Developer at OnGres Inc. - https://ongres.com


Re: Minimal logical decoding on standbys

2021-03-23 Thread Fabrízio de Royes Mello
>
> done in v13 attached.
>

All tests passed and everything looks good to me... just a final minor fix
on regression tests:

diff --git a/src/test/regress/expected/rules.out
b/src/test/regress/expected/rules.out
index b0e17d4e1d..961ec869a6 100644
--- a/src/test/regress/expected/rules.out
+++ b/src/test/regress/expected/rules.out
@@ -1869,9 +1869,9 @@ pg_stat_database_conflicts| SELECT d.oid AS datid,
 pg_stat_get_db_conflict_tablespace(d.oid) AS confl_tablespace,
 pg_stat_get_db_conflict_lock(d.oid) AS confl_lock,
 pg_stat_get_db_conflict_snapshot(d.oid) AS confl_snapshot,
-pg_stat_get_db_conflict_logicalslot(d.oid) AS confl_logicalslot,
 pg_stat_get_db_conflict_bufferpin(d.oid) AS confl_bufferpin,
-pg_stat_get_db_conflict_startup_deadlock(d.oid) AS confl_deadlock
+pg_stat_get_db_conflict_startup_deadlock(d.oid) AS confl_deadlock,
+pg_stat_get_db_conflict_logicalslot(d.oid) AS confl_logicalslot
FROM pg_database d;
 pg_stat_gssapi| SELECT s.pid,
 s.gss_auth AS gss_authenticated,

We moved "pg_stat_database_conflicts.confl_logicalslot" to the end of the
column list but forgot to change the regression test expected result.

Regards,

-- 
   Fabrízio de Royes Mello
   PostgreSQL Developer at OnGres Inc. - https://ongres.com


Re: Minimal logical decoding on standbys

2021-03-23 Thread Fabrízio de Royes Mello
On Tue, Mar 23, 2021 at 10:18 AM Fabrízio de Royes Mello <
fabriziome...@gmail.com> wrote:
>
> LGTM too... Reviewing new changes now to move it forward and make this
patch set ready for commiter review.
>

Feature-wise it LGTM and all tests passed. Documentation is also
OK. Some minor comments:

+
+ A logical replication slot can also be created on a hot standby. To
prevent
+ VACUUM from removing required rows from the system
+ catalogs, hot_standby_feedback should be set on the
+ standby. In spite of that, if any required rows get removed, the slot
gets
+ dropped.  Existing logical slots on standby also get dropped if
wal_level
+ on primary is reduced to less than 'logical'.
+

Remove extra space before "Existing logical slots..."

+pg_stat_get_db_conflict_logicalslot(D.oid) AS
confl_logicalslot,

Move it to the end of pg_stat_database_conflicts columns


+* is being reduced.  Hence this extra check.

Remove extra space before "Hence this..."


+   /* Send the other backend, a conflict recovery signal */
+
+   SetInvalidVirtualTransactionId(vxid);

Remove extra empty line


+   if (restart_lsn % XLOG_BLCKSZ != 0)
+   elog(ERROR, "invalid replay pointer");

Add an empty line after this "IF" for code readability


+void
+ResolveRecoveryConflictWithLogicalSlots(Oid dboid, TransactionId xid,
+   char *conflict_reason)
+{
+   int i;
+   boolfound_conflict = false;
+
+   if (max_replication_slots <= 0)
+   return;

What about adding an "Assert(max_replication_slots >= 0);" before the
replication slots check?

One last thing is the naming of the TAP tests: we should rename them because
there are already other TAP tests starting with 022_ and 023_. They should be
renamed to:

src/test/recovery/t/022_standby_logical_decoding_xmins.pl ->
src/test/recovery/t/024_standby_logical_decoding_xmins.pl
src/test/recovery/t/023_standby_logical_decoding_conflicts.pl
-> src/test/recovery/t/025_standby_logical_decoding_conflicts.pl

Regards,

-- 
   Fabrízio de Royes Mello
   PostgreSQL Developer at OnGres Inc. - https://ongres.com


Re: Minimal logical decoding on standbys

2021-03-23 Thread Fabrízio de Royes Mello
On Tue, Mar 23, 2021 at 8:47 AM Drouvot, Bertrand 
wrote:
>
> I have one remark regarding the conflicts:
>
> The logical slots are dropped if a conflict is detected.
>
> But, if the slot is not active before being dropped (say wal_level is
changed to  < logical on master and a logical slot is not active on the
standby) then its corresponding
pg_stat_database_conflicts.confl_logicalslot is not incremented (as it
would be incremented "only" during the cancel of an "active" backend).
>
> I think, it should be incremented in all the cases (active or not), what
do you think?
>

Good catch... IMHO it should be incremented as well!!!

> I updated the patch to handle this scenario (see the new
pgstat_send_droplogicalslot() in
v12-0003-Handle-logical-slot-conflicts-on-standby.patch).
>

Perfect.

> I also added more tests in 023_standby_logical_decoding_conflicts.pl to
verify that pg_stat_database_conflicts.confl_logicalslot is updated as
expected (see check_confl_logicalslot() in
v12-0004-New-TAP-test-for-logical-decoding-on-standby.patch).
>

Perfect.

> Except this remark and the associated changes, then it looks good to me.
>

LGTM too... Reviewing new changes now to move it forward and make this
patch set ready for commiter review.

Regards,

-- 
   Fabrízio de Royes Mello
   PostgreSQL Developer at OnGres Inc. - https://ongres.com


Re: Minimal logical decoding on standbys

2021-03-22 Thread Fabrízio de Royes Mello
On Thu, Mar 18, 2021 at 5:34 AM Drouvot, Bertrand 
wrote:
>
> Thanks!
>
> Just made minor changes to make it compiling again on current master
(mainly had to take care of ResolveRecoveryConflictWithSnapshotFullXid()
that has been introduced in e5d8a99903).
>
> Please find enclosed the new patch version that currently passes "make
check" and the 2 associated TAP tests.
>

Unfortunately it still does not apply to the current master:

$ git am ~/Downloads/v10-000*.patch

Applying: Allow logical decoding on standby.
Applying: Add info in WAL records in preparation for logical slot conflict
handling.
error: patch failed: src/backend/access/nbtree/nbtpage.c:32
error: src/backend/access/nbtree/nbtpage.c: patch does not apply
Patch failed at 0002 Add info in WAL records in preparation for logical
slot conflict handling.
hint: Use 'git am --show-current-patch' to see the failed patch
When you have resolved this problem, run "git am --continue".
If you prefer to skip this patch, run "git am --skip" instead.
To restore the original branch and stop patching, run "git am --abort".


> I'll have a look to the whole thread to check if there is anything else
waiting in the pipe regarding this feature, unless some of you know off the
top of their head?
>

Will do the same!!! But as far as I remember, the last time I checked,
everything discussed was covered in this patch set.

Regards,

--
   Fabrízio de Royes Mello Timbira - http://www.timbira.com.br/
   PostgreSQL: Consultoria, Desenvolvimento, Suporte 24x7 e Treinamento


Re: pg_dump new feature: exporting functions only. Bad or good idea ?

2021-03-17 Thread Fabrízio de Royes Mello
On Wed, Mar 17, 2021 at 2:00 PM Lætitia Avrot 
wrote:
>
> Hey hackers,
>
> I had this idea, that I raised and cherished like my baby to add a switch
in `pg_dump` to allow exporting stored functions (and procedures) only.
>
> However, when I finally got the time to look at it in detail, I found out
there was no way to solve the dependencies in the functions and procedures,
so that the exported file, when re-played could lead to invalid objects.
>
> So, I decided this would not make Postgres better and decide to walk off
this patch.
>
> Anyhow, when sharing my thoughts, several people told me to ask the
community about adding this feature because this could be useful in some
use cases. Another argument is that should you have all your functions in
one schema and your tables in another, exporting only the function schema
will lead to the same kind of file that could lead to invalid objects
created when the file is re-played against a database that does not have
the tables.
>
> Of course, the documentation would add a warning against object
invalidity should only the stored functions/procedures be exported.
>
> So, my question is: what do you think about such a feature? is it worth
it?
>

Makes total sense, since we already have --function=NAME(args) in pg_restore
and it doesn't solve dependencies either... so we could add an option to also
export function/procedure contents.

+1 for the general idea.

--
   Fabrízio de Royes Mello Timbira - http://www.timbira.com.br/
   PostgreSQL: Consultoria, Desenvolvimento, Suporte 24x7 e Treinamento


Re: Extensibility of the PostgreSQL wire protocol

2021-02-12 Thread Fabrízio de Royes Mello
On Thu, Feb 11, 2021 at 12:07 PM Tom Lane  wrote:
>
> Robert Haas  writes:
> > On Thu, Feb 11, 2021 at 9:42 AM Jonah H. Harris 
wrote:
> >> As Jan said in his last email, they're not proposing all the different
> >> aspects needed. In fact, nothing has actually been proposed yet. This
> >> is an entirely philosophical debate. I don't even know what's being
> >> proposed at this point - I just know it *could* be useful. Let's just
> >> wait and see what is actually proposed before shooting it down, yes?
>
> > I don't think I'm trying to shoot anything down, because as I said, I
> > like extensibility and am generally in favor of it. Rather, I'm
> > expressing a concern which seems to me to be justified, based on what
> > was posted. I'm sorry that my tone seems to have aggravated you, but
> > it wasn't intended to do so.
>
> Likewise, the point I was trying to make is that a "pluggable wire
> protocol" is only a tiny part of what would be needed to have a credible
> MySQL, Oracle, or whatever clone.  There are large semantic differences
> from those products; there are maintenance issues arising from the fact
> that we whack structures like parse trees around all the time; and so on.
> Maybe there is some useful thing that can be accomplished here, but we
> need to consider the bigger picture rather than believing (without proof)
> that a few hook variables will be enough to do anything.
>

Just so we don't miss the point: creating a compat protocol to mimic others
(TDS, MySQL, etc.) is just one use case.

There are other use cases for making the wire protocol extensible. For
example, for telemetry I could use some hooks to propagate context [1], get
more detailed tracing information about the negotiation between frontend and
backend, and implement a true query tracing tool.

Another use case is extending the current protocol to, for example, send more
information about query execution in the CommandComplete message instead of
just the number of affected rows.

About the HTTP protocol, I think PG should have it, maybe pure HTTP (no REST,
just HTTP) because it's the most interoperable. Performance can still be very
good with HTTP/2, and you have a huge ecosystem of tools and proxies (like
Envoy) that would do wonders with this. You could safely query a db from a
web page (passing through proxies that would do auth, TLS, etc). Or maybe a
higher-performing gRPC version (which is also HTTP/2 and is amazing), but
that makes it a bit more difficult to query from a web page. In either case,
context propagation is already built in, and in a standard way.

Regards,

[1] https://www.w3.org/TR/trace-context/

-- 
   Fabrízio de Royes Mello
   PostgreSQL Developer at OnGres Inc. - https://ongres.com


Re: Extensibility of the PostgreSQL wire protocol

2021-02-10 Thread Fabrízio de Royes Mello
On Wed, Feb 10, 2021 at 1:43 PM Robert Haas  wrote:
>
> On Mon, Jan 25, 2021 at 10:07 AM Jan Wieck  wrote:
> > Our current plan is to create a new set of API calls and hooks that
allow to register additional wire protocols. The existing backend libpq
implementation will be modified to register itself using the new API. This
will serve as a proof of concept as well as ensure that the API definition
is not slanted towards a specific protocol. It is also similar to the way
table access methods and compression methods are added.
>
> If we're going to end up with an open source implementation of
> something useful in contrib or whatever, then I think this is fine.
> But, if not, then we're just making it easier for Amazon to do
> proprietary stuff without getting any benefit for the open-source
> project. In fact, in that case PostgreSQL would ensure have to somehow
> ensure that the hooks don't get broken without having any code that
> actually uses them, so not only would the project get no benefit, but
> it would actually incur a small tax. I wouldn't say that's an
> absolutely show-stopper, but it definitely isn't my first choice.

As far as I understood, Jan's proposal is to add enough hooks to PostgreSQL
to enable us to extend the wire protocol, plus a contrib module as an example
(maybe TDS, HTTP, or just adding new capabilities to the current
implementation).

Regards,

-- 
   Fabrízio de Royes Mello
   PostgreSQL Developer at OnGres Inc. - https://ongres.com


Re: Minimal logical decoding on standbys

2021-02-04 Thread Fabrízio de Royes Mello
On Thu, Feb 4, 2021 at 1:49 PM Drouvot, Bertrand 
wrote:
>
> Had to do a little change to make it compiling again (re-add the heapRel
argument in _bt_delitems_delete() that was removed by commit
dc43492e46c7145a476cb8ca6200fc8eefe673ef).
>
> Given that this attached v9 version:
>
> compiles successfully on current master
> passes "make check"
> passes the 2 associated tap tests "make -C src/test/recovery check
PROVE_TESTS=t/022_standby_logical_decoding_xmins.pl PROVE_FLAGS=-v"  and
"make -C src/test/recovery check PROVE_TESTS=t/
023_standby_logical_decoding_conflicts.pl PROVE_FLAGS=-v"
>

Perfect, thanks... will review ASAP!

> wouldn't that make sense to (re)add this patch in the commitfest?
>

Added to next commitfest: https://commitfest.postgresql.org/32/2968/

Regards,

-- 
   Fabrízio de Royes Mello
   PostgreSQL Developer at OnGres Inc. - https://ongres.com


Re: [UNVERIFIED SENDER] Re: Minimal logical decoding on standbys

2021-01-25 Thread Fabrízio de Royes Mello
On Mon, Jan 18, 2021 at 8:48 AM Drouvot, Bertrand 
wrote:
>
> 3 and 4 were failing because the
ResolveRecoveryConflictWithLogicalSlots() call was missing in
ResolveRecoveryConflictWithSnapshot(): the new version attached adds it.
>
> The new version attached also provides a few changes to make it compiling
on the current master (it was not the case anymore).
>
> I also had to change 023_standby_logical_decoding_conflicts.pl (had to
call $node_standby->create_logical_slot_on_standby($node_master,
'otherslot', 'postgres'); at the very beginning of the "DROP DATABASE
should drops it's slots, including active slots" section)
>

Awesome and thanks a lot.

It seems your patch series is broken... can you please run `git format-patch`
and send it again?

Regards,

-- 
   Fabrízio de Royes Mello
   PostgreSQL Developer at OnGres Inc. - https://ongres.com


Re: Minimal logical decoding on standbys

2020-12-15 Thread Fabrízio de Royes Mello
On Wed, Mar 18, 2020 at 4:50 PM Alvaro Herrera 
wrote:
>
>
> well, not "forever", but:
>
> $ make check PROVE_TESTS=t/019_standby_logical_decoding_conflicts.pl
PROVE_FLAGS=-v
> ...
> cd /pgsql/source/master/src/test/recovery &&
TESTDIR='/home/alvherre/mnt/crypt/alvherre/Code/pgsql/build/master/src/test/recovery'
PATH="/pgsql/build/master/tmp_install/pgsql/install/master/bin:$PATH"
LD_LIBRARY_PATH="/pgsql/build/master/tmp_install/pgsql/install/master/lib"
 PGPORT='655432'
PG_REGRESS='/home/alvherre/mnt/crypt/alvherre/Code/pgsql/build/master/src/test/recovery/../../../src/test/regress/pg_regress'
REGRESS_SHLIB='/pgsql/build/master/src/test/regress/regress.so'
/usr/bin/prove -I /pgsql/source/master/src/test/perl/ -I
/pgsql/source/master/src/test/recovery -v t/
019_standby_logical_decoding_conflicts.pl
> t/019_standby_logical_decoding_conflicts.pl ..
> 1..24
> ok 1 - dropslot on standby created
> ok 2 - activeslot on standby created
> # poll_query_until timed out executing this query:
> # SELECT '0/35C9190' <= replay_lsn AND state = 'streaming' FROM
pg_catalog.pg_stat_replication WHERE application_name = 'standby';
> # expecting this output:
> # t
> # last actual query output:
> #
> # with stderr:
> Bailout called.  Further testing stopped:  system pg_ctl failed
> Bail out!  system pg_ctl failed
> FAILED--Further testing stopped: system pg_ctl failed
> make: *** [Makefile:19: check] Error 255
>

After rebasing and making minimal tweaks (duplicated OID, TAP test numbering),
I'm facing a similar problem but in another place:


make -C src/test/recovery check PROVE_TESTS=t/
023_standby_logical_decoding_conflicts.pl PROVE_FLAGS=-v
...
/usr/bin/mkdir -p '/data/src/pg/main/src/test/recovery'/tmp_check
cd . && TESTDIR='/data/src/pg/main/src/test/recovery'
PATH="/d/src/pg/main/tmp_install/home/fabrizio/pgsql/logical-decoding-standby/bin:$PATH"
LD_LIBRARY_PATH="/d/src/pg/main/tmp_install/home/fabrizio/pgsql/logical-decoding-standby/lib"
 PGPORT='65432'
PG_REGRESS='/data/src/pg/main/src/test/recovery/../../../src/test/regress/pg_regress'
REGRESS_SHLIB='/d/src/pg/main/src/test/regress/regress.so' /usr/bin/prove
-I ../../../src/test/perl/ -I . -v t/
023_standby_logical_decoding_conflicts.pl
t/023_standby_logical_decoding_conflicts.pl ..
1..24
ok 1 - dropslot on standby created
ok 2 - activeslot on standby created
not ok 3 - dropslot on standby dropped

#   Failed test 'dropslot on standby dropped'
#   at t/023_standby_logical_decoding_conflicts.pl line 67.
#  got: 'logical'
# expected: ''
not ok 4 - activeslot on standby dropped

#   Failed test 'activeslot on standby dropped'
#   at t/023_standby_logical_decoding_conflicts.pl line 68.
#  got: 'logical'
# expected: ''


TAP tests hang forever in `check_slots_dropped` exactly here:

# our client should've terminated in response to the walsender error
eval {
$slot_user_handle->finish;
};

Regards,

-- 
   Fabrízio de Royes Mello
   PostgreSQL Developer at OnGres Inc. - https://ongres.com
From a3e42f2fc53afd2bcbe6da9d029d91694a34562e Mon Sep 17 00:00:00 2001
From: Amit Khandekar 
Date: Thu, 16 Jan 2020 10:05:15 +0530
Subject: [PATCH v7 1/5] Allow logical decoding on standby.

Allow a logical slot to be created on standby. Restrict its usage
or its creation if wal_level on primary is less than logical.
During slot creation, its restart_lsn is set to the last replayed
LSN. Effectively, a logical slot creation on standby waits for an
xl_running_xact record to arrive from primary. Conflicting slots
would be handled in subsequent commits.

Andres Freund and Amit Khandekar.
---
 src/backend/access/transam/xlog.c | 11 +
 src/backend/replication/logical/decode.c  | 22 -
 src/backend/replication/logical/logical.c | 37 ---
 src/backend/replication/slot.c| 57 +++
 src/backend/replication/walsender.c   | 10 ++--
 src/include/access/xlog.h |  1 +
 6 files changed, 98 insertions(+), 40 deletions(-)

diff --git a/src/backend/access/transam/xlog.c b/src/backend/access/transam/xlog.c
index 8dd225c2e1..609edbaca6 100644
--- a/src/backend/access/transam/xlog.c
+++ b/src/backend/access/transam/xlog.c
@@ -5038,6 +5038,17 @@ LocalProcessControlFile(bool reset)
 	ReadControlFile();
 }
 
+/*
+ * Get the wal_level from the control file. For a standby, this value should be
+ * considered as its active wal_level, because it may be different from what
+ * was originally configured on standby.
+ */
+WalLevel
+GetActiveWalLevel(void)
+{
+	return ControlFile->wal_level;
+}
+
 /*
  * Initialization of shared memory for XLOG
  */
diff --git a/src/backend/replication/logical/decode.c b/src/backend/replication/logical/decode.c
index 3f84ee99b8..bb7c80d6cc 100644
--- a/src/backend/replication/logical/decode.c
+++ b/src/backend/replication/logical/decode.c

Re: Add important info about ANALYZE after create Functional Index

2020-11-09 Thread Fabrízio de Royes Mello
On Mon, 9 Nov 2020 at 20:27 Bruce Momjian  wrote:

>
> I see REINDEX CONCURRENTLY was fixed in head, but the docs didn't get
> updated to mention the need to run ANALYZE or wait for autovacuum before
> expression indexes can be fully used by the optimizer.  Instead of
> putting this mention in the maintenance section, I thought the CREATE
> INDEX page make more sense, since it is more of a usability issue,
> rather than "why use expression indexes".  Patch attached, which I plan
> to apply to all supported branches.
>


Did a quick review and totally agree... thanks a lot, Bruce, for helping us
not miss it.

Regards,


-- 
   Fabrízio de Royes Mello Timbira - http://www.timbira.com.br/
   PostgreSQL: Consultoria, Desenvolvimento, Suporte 24x7 e Treinamento


Re: Add important info about ANALYZE after create Functional Index

2020-11-01 Thread Fabrízio de Royes Mello
On Sun, 1 Nov 2020 at 09:29 Michael Paquier  wrote:

> On Sun, Nov 01, 2020 at 09:23:44AM +0900, Michael Paquier wrote:
> > By doing so, there is no need to include pg_statistic.h in index.c.
> > Except that, the logic looks fine at quick glance.  In the long-term,
> > I also think that it would make sense to move both routnes out of
> > heap.c into a separate pg_statistic.c.  That's material for a separate
> > patch of course.
>
> I have looked again at that, and applied it after some tweaks.
> Particularly, I did not really like the use of "old" and "new" for the
> copy from the old to a new relation in the new function, so I have
> replaced that by "from" and "to".
>
>
Awesome thanks!!

Regards,

-- 
   Fabrízio de Royes Mello Timbira - http://www.timbira.com.br/
   PostgreSQL: Consultoria, Desenvolvimento, Suporte 24x7 e Treinamento


Re: Add important info about ANALYZE after create Functional Index

2020-10-31 Thread Fabrízio de Royes Mello
On Fri, Oct 30, 2020 at 3:22 AM Michael Paquier  wrote:
>
> And in spirit, it is possible to address this issue with the patch
> attached which copies the set of stats from the old to the new index.

Did some tests and everything went OK... some comments below!

> For a non-concurrent REINDEX, this does not happen because we keep the
> same base relation, while we replace completely the relation with a
> concurrent operation.

Exactly!

> We have a RemoveStatistics() in heap.c, but I
> did not really see the point to invent a copy flavor for this
> particular case.  Perhaps others have an opinion on that?
>

Even if we won't reuse it now, IMHO it is more legible to separate this
responsibility into its own CopyStatistics function, as attached.

Regards,

-- 
   Fabrízio de Royes Mello
   PostgreSQL Developer at OnGres Inc. - https://ongres.com
diff --git a/src/backend/catalog/heap.c b/src/backend/catalog/heap.c
index 67144aa3c9..7da1f7eb56 100644
--- a/src/backend/catalog/heap.c
+++ b/src/backend/catalog/heap.c
@@ -3142,6 +3142,46 @@ cookConstraint(ParseState *pstate,
 	return expr;
 }
 
+/*
+ * CopyStatistics --- copy entries in pg_statistic from old to new rel
+ *
+ */
+void
+CopyStatistics(Oid oldRelId, Oid newRelId)
+{
+	HeapTuple   tup;
+	SysScanDesc scan;
+	ScanKeyData key[1];
+	Relation	statrel;
+
+	statrel = table_open(StatisticRelationId, RowExclusiveLock);
+	ScanKeyInit(&key[0],
+Anum_pg_statistic_starelid,
+BTEqualStrategyNumber, F_OIDEQ,
+ObjectIdGetDatum(oldRelId));
+
+	scan = systable_beginscan(statrel, StatisticRelidAttnumInhIndexId,
+			  true, NULL, 1, key);
+
+	while (HeapTupleIsValid((tup = systable_getnext(scan))))
+	{
+		Form_pg_statistic statform;
+
+		/* make a modifiable copy */
+		tup = heap_copytuple(tup);
+		statform = (Form_pg_statistic) GETSTRUCT(tup);
+
+		/* update the copy of the tuple and insert it */
+		statform->starelid = newRelId;
+		CatalogTupleInsert(statrel, tup);
+
+		heap_freetuple(tup);
+	}
+
+	systable_endscan(scan);
+
+	table_close(statrel, RowExclusiveLock);
+}
 
 /*
  * RemoveStatistics --- remove entries in pg_statistic for a rel or column
diff --git a/src/backend/catalog/index.c b/src/backend/catalog/index.c
index 0974f3e23a..a6975c2dbe 100644
--- a/src/backend/catalog/index.c
+++ b/src/backend/catalog/index.c
@@ -49,6 +49,7 @@
 #include "catalog/pg_inherits.h"
 #include "catalog/pg_opclass.h"
 #include "catalog/pg_operator.h"
+#include "catalog/pg_statistic.h"
 #include "catalog/pg_tablespace.h"
 #include "catalog/pg_trigger.h"
 #include "catalog/pg_type.h"
@@ -1711,6 +1712,11 @@ index_concurrently_swap(Oid newIndexId, Oid oldIndexId, const char *oldName)
 		}
 	}
 
+	/*
+	 * Copy over any data of pg_statistic from the old index to the new one.
+	 */
+	CopyStatistics(oldIndexId, newIndexId);
+
 	/* Close relations */
 	table_close(pg_class, RowExclusiveLock);
 	table_close(pg_index, RowExclusiveLock);
diff --git a/src/include/catalog/heap.h b/src/include/catalog/heap.h
index d31141c1a2..3d42edbbf6 100644
--- a/src/include/catalog/heap.h
+++ b/src/include/catalog/heap.h
@@ -134,6 +134,7 @@ extern void RemoveAttributeById(Oid relid, AttrNumber attnum);
 extern void RemoveAttrDefault(Oid relid, AttrNumber attnum,
 			  DropBehavior behavior, bool complain, bool internal);
 extern void RemoveAttrDefaultById(Oid attrdefId);
+extern void CopyStatistics(Oid oldRelId, Oid newRelId);
 extern void RemoveStatistics(Oid relid, AttrNumber attnum);
 
 extern const FormData_pg_attribute *SystemAttributeDefinition(AttrNumber attno);
diff --git a/src/test/regress/expected/create_index.out b/src/test/regress/expected/create_index.out
index 6ace7662ee..012c1eb067 100644
--- a/src/test/regress/expected/create_index.out
+++ b/src/test/regress/expected/create_index.out
@@ -2551,6 +2551,17 @@ CREATE UNIQUE INDEX concur_exprs_index_pred ON concur_exprs_tab (c1)
 CREATE UNIQUE INDEX concur_exprs_index_pred_2
   ON concur_exprs_tab ((1 / c1))
   WHERE ('-H') >= (c2::TEXT) COLLATE "C";
+ANALYZE concur_exprs_tab;
+SELECT starelid::regclass, count(*) FROM pg_statistic WHERE starelid IN (
+  'concur_exprs_index_expr'::regclass,
+  'concur_exprs_index_pred'::regclass,
+  'concur_exprs_index_pred_2'::regclass)
+  GROUP BY starelid ORDER BY starelid::regclass::text;
+starelid | count 
+-+---
+ concur_exprs_index_expr | 1
+(1 row)
+
 SELECT pg_get_indexdef('concur_exprs_index_expr'::regclass);
 pg_get_indexdef
 ---
@@ -2608,6 +2619,17 @@ SELECT pg_get_indexdef('concur_exprs_index_pred_2'::regclass);
  CREATE UNIQUE INDEX concur_exprs_index_pred_2 ON public.concur_exprs_tab USING btree (((1 / c1))) WHERE ('-H':

Re: Add important info about ANALYZE after create Functional Index

2020-10-28 Thread Fabrízio de Royes Mello
On Wed, Oct 28, 2020 at 4:35 PM Tomas Vondra 
wrote:
>
> I don't think anyone proposed to do this through autovacuum. There was a
> reference to auto-analyze but I think that was meant as 'run analyze
> automatically.' Which would work in transactions just fine, I think.
>

Maybe I was not very clear at the beginning, so I will try to clarify my
thoughts:

1) We should add notes to our docs about the need to issue ANALYZE after
creating indexes using expressions and creating extended statistics. Nikolay
sent a patch upthread; we can work on it and back-patch.

2) REINDEX CONCURRENTLY does not keep statistics (pg_statistic) like a
regular REINDEX does for indexes using expressions, and to me that is a bug.
Michael pointed out upthread that maybe we should rework
index_concurrently_swap() a bit to copy statistics from the old index to the
new one.


> But I agree it'd likely be a more complicated patch than it might seem
> at first glance.
>

Finding a way to kick off auto-analyze would certainly be a more complicated
task, but IMHO for now we can solve it simply by copying statistics, as I
mentioned above.

Regards,

--
   Fabrízio de Royes Mello Timbira - http://www.timbira.com.br/
   PostgreSQL: Consultoria, Desenvolvimento, Suporte 24x7 e Treinamento


Re: Add important info about ANALYZE after create Functional Index

2020-10-28 Thread Fabrízio de Royes Mello
On Wed, Oct 28, 2020 at 2:15 AM Michael Paquier  wrote:
>
> On Tue, Oct 27, 2020 at 11:06:22AM -0300, Fabrízio de Royes Mello wrote:
> > When we create a new table or index, they will not have statistics until
> > an ANALYZE happens. This is the default behaviour and I think it is not a
> > big problem here, but we need to add a note in the docs about the need for
> > statistics on expression indexes.
> >
> > But IMHO there is a misbehaviour in the implementation of REINDEX
> > CONCURRENTLY, because running it loses the statistics. Have a look at the
> > example below:
> >
> > [...]
> >
> > So IMHO here is the place we should rework a bit to execute ANALYZE as a
> > last step.
>
> I agree that this is not user-friendly, and I suspect that we will
> need to do something within index_concurrently_swap() to fill in the
> stats of the new index from the data of the old one (not looked at
> that in details yet).
>

We already do a similar thing for PgStat [1], so maybe we should also copy
the pg_statistic entries from the old to the new index during the swap.

But I'm not sure that is totally safe anyway; it might be better to add a new
phase that issues ANALYZE if necessary (i.e., if statistics exist for the old
index).

Regards,

[1]
https://git.postgresql.org/gitweb/?p=postgresql.git;a=blob;f=src/backend/catalog/index.c;h=0974f3e23a23726b63246cd3a1347e10923dd541;hb=HEAD#l1693

-- 
   Fabrízio de Royes Mello
   PostgreSQL Developer at OnGres Inc. - https://ongres.com


Re: Add important info about ANALYZE after create Functional Index

2020-10-27 Thread Fabrízio de Royes Mello
On Tue, Oct 27, 2020 at 4:12 AM Nikolay Samokhvalov 
wrote:
>
> On Mon, Oct 26, 2020 at 3:08 PM Fabrízio de Royes Mello <
fabriziome...@gmail.com> wrote:
>>
>> Would be nice if add some information about it into our docs but not
sure where. I'm thinking about:
>> - doc/src/sgml/ref/create_index.sgml
>> - doc/src/sgml/maintenance.sgml (routine-reindex)
>
>
> Attaching the patches for the docs, one for 11 and older, and another for
12+ (which have REINDEX CONCURRENTLY not suffering from lack of ANALYZE).
>

Actually, REINDEX CONCURRENTLY does suffer from the lack of ANALYZE. See my
previous message in this thread.

So just adding the note to the ANALYZE docs is enough.


> I still think that automating is the right thing to do but of course,
it's a much bigger topic that a quick fix dor the docs.

So what we need to do is see how to fix REINDEX CONCURRENTLY.

Regards,

-- 
   Fabrízio de Royes Mello
   PostgreSQL Developer at OnGres Inc. - https://ongres.com


Re: Add important info about ANALYZE after create Functional Index

2020-10-27 Thread Fabrízio de Royes Mello
On Mon, Oct 26, 2020 at 7:46 PM David G. Johnston <
david.g.johns...@gmail.com> wrote:
>
> It would seem preferable to call the lack of auto-analyzing after these
operations a bug and back-patch a fix that injects an analyze side-effect
just before their completion.  It doesn't have to be smart either,
analyzing things even if the created (or newly validated) index doesn't
have statistics of its own isn't a problem in my book.
>

When we create a new table or index, they will not have statistics until an
ANALYZE happens. This is the default behaviour and I think it is not a big
problem here, but we need to add a note in the docs about the need for
statistics on expression indexes.

But IMHO there is a misbehaviour in the implementation of REINDEX
CONCURRENTLY, because running it loses the statistics. Have a look at the
example below:

fabrizio=# SELECT version();
 version

-
 PostgreSQL 14devel on x86_64-pc-linux-gnu, compiled by gcc (Ubuntu
9.3.0-17ubuntu1~20.04) 9.3.0, 64-bit
(1 row)

fabrizio=# CREATE TABLE t(f1 BIGSERIAL PRIMARY KEY, f2 TEXT) WITH
(autovacuum_enabled = false);
CREATE TABLE
fabrizio=# INSERT INTO t(f2) SELECT repeat(chr(65+(random()*26)::int),
(random()*300)::int) FROM generate_series(1, 1);
INSERT 0 1
fabrizio=# CREATE INDEX t_idx2 ON t(lower(f2));
CREATE INDEX
fabrizio=# SELECT count(*) FROM pg_statistic WHERE starelid =
't_pkey'::regclass;
 count
---
 0
(1 row)

fabrizio=# SELECT count(*) FROM pg_statistic WHERE starelid =
't_idx2'::regclass;
 count
---
 0
(1 row)

fabrizio=# ANALYZE t;
ANALYZE
fabrizio=# SELECT count(*) FROM pg_statistic WHERE starelid =
't_pkey'::regclass;
 count
---
 0
(1 row)

fabrizio=# SELECT count(*) FROM pg_statistic WHERE starelid =
't_idx2'::regclass;
 count
---
 1
(1 row)

fabrizio=# REINDEX INDEX t_idx2;
REINDEX
fabrizio=# REINDEX INDEX t_pkey;
REINDEX
fabrizio=# SELECT count(*) FROM pg_statistic WHERE starelid =
't_pkey'::regclass;
 count
---
 0
(1 row)

fabrizio=# SELECT count(*) FROM pg_statistic WHERE starelid =
't_idx2'::regclass;
 count
---
 1
(1 row)

-- A regular REINDEX don't lose the statistics.


fabrizio=# REINDEX INDEX CONCURRENTLY t_idx2;
REINDEX
fabrizio=# SELECT count(*) FROM pg_statistic WHERE starelid =
't_idx2'::regclass;
 count
---
 0
(1 row)


-- But the REINDEX CONCURRENTLY loses.

So IMHO here is the place we should rework a bit to execute ANALYZE as a
last step.
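
Until that happens, the minimal workaround is to re-analyze right after the
concurrent rebuild (continuing the session above):

fabrizio=# REINDEX INDEX CONCURRENTLY t_idx2;
REINDEX
fabrizio=# ANALYZE t;
ANALYZE
fabrizio=# SELECT count(*) FROM pg_statistic WHERE starelid =
't_idx2'::regclass;
 count
---
 1
(1 row)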

Regards,

-- 
   Fabrízio de Royes Mello
   PostgreSQL Developer at OnGres Inc. - https://ongres.com


Add important info about ANALYZE after create Functional Index

2020-10-26 Thread Fabrízio de Royes Mello
Hi all,

As you all already know, Postgres supports functions in index expressions
(marked as immutable, of course), and for this special kind of index the
ANALYZE command creates statistics (a new pg_statistic entry) about it.

The problem is that just after creating a new index or rebuilding one
concurrently (using the new REINDEX .. CONCURRENTLY, or the old way of
creating a new one and then swapping), we need to run ANALYZE to update the
statistics, but we don't mention this anywhere in our documentation.

Last weekend GitLab went down because of the lack of an ANALYZE after
concurrently rebuilding a functional index. They followed the recommendation
in our documentation [1] about how to rebuild it concurrently, but we don't
warn users about running ANALYZE afterwards.

It would be nice to add some information about this to our docs, but I'm not
sure where. I'm thinking about:
- doc/src/sgml/ref/create_index.sgml
- doc/src/sgml/maintenance.sgml (routine-reindex)

Thoughts?

[1]
https://gitlab.com/gitlab-com/gl-infra/production/-/issues/2885#note_436310499

-- 
   Fabrízio de Royes Mello
   PostgreSQL Developer at OnGres Inc. - https://ongres.com


Re: Commitfest 2020-11

2020-10-26 Thread Fabrízio de Royes Mello
On Mon, Oct 26, 2020 at 3:09 PM Anastasia Lubennikova <
a.lubennik...@postgrespro.ru> wrote:
>
> Hello, hackers!
>
> November commitfest will start just in a few days.
> I'm happy to volunteer to be the CFM for this one. With a help of
> Georgios Kokolatos [1].
>
> It's time to register your patch in the commitfest, if not yet.
>
> If you already have a patch in the commitfest, update its status and
> make sure it still applies and that the tests pass. Check the state at
> http://cfbot.cputube.org/
>
> If there is a long-running stale discussion, please send a short summary
> update about its current state, open questions, and TODOs. I hope, it
> will encourage reviewers to pay more attention to the thread.
>

Awesome!

--
   Fabrízio de Royes Mello Timbira - http://www.timbira.com.br/
   PostgreSQL: Consultoria, Desenvolvimento, Suporte 24x7 e Treinamento


Re: Missing TOAST table for pg_class

2020-09-23 Thread Fabrízio de Royes Mello
On Tue, Sep 22, 2020 at 10:57 PM Michael Paquier 
wrote:
>
> On Tue, Sep 22, 2020 at 05:35:48PM -0400, Tom Lane wrote:
> > What exactly do you argue has changed since the previous decision
> > that should cause us to change it?  In particular, where is the
> > additional data to change our minds about the safety of such a thing?
>

From a technical perspective I really don't know how to solve it, but my
intention here is to raise a hand and demonstrate that there are real
scenarios where Postgres breaks this easily.

And unfortunately, from the user's perspective it sounds a bit fragile. OK,
it's not a very common use case and the solution isn't easy; if it were, I'm
sure it would already have been solved.


> Not sure that's safe, as we also want to avoid circular dependencies
> similarly for pg_class, pg_index and pg_attribute.
>

Can adding a TOAST table cause circular dependencies between those relations?
If you don't mind, can you explain more about that?

Regards,

--
   Fabrízio de Royes Mello Timbira - http://www.timbira.com.br/
   PostgreSQL: Consultoria, Desenvolvimento, Suporte 24x7 e Treinamento


Missing TOAST table for pg_class

2020-09-22 Thread Fabrízio de Royes Mello
Hi all,

I know it has been discussed before [1], but one of our customers complained
about something weird in one of their multi-tenancy databases (thousands of
schemas with a lot of objects inside, and one user per schema).

When I checked, the problem turned out to be the missing TOAST table for
pg_class, and it is easy to break it by just:

fabrizio=# create table foo (id int);
CREATE TABLE
fabrizio=# do
$$
begin
for i in 1..2500
loop
execute 'create user u' || i;
execute 'grant all on foo to u' || i;
end loop;
end;
$$;
ERROR:  row is too big: size 8168, maximum size 8160
CONTEXT:  SQL statement "grant all on foo to u2445"
PL/pgSQL function inline_code_block line 6 at EXECUTE

The attached patch adds the TOAST table to pg_class; let's reopen the
discussion around it.

Regards,

[1]
https://www.postgresql.org/message-id/flat/84ddff04-f122-784b-b6c5-3536804495f8%40joeconway.com

-- 
   Fabrízio de Royes Mello Timbira - http://www.timbira.com.br/
   PostgreSQL: Consultoria, Desenvolvimento, Suporte 24x7 e Treinamento
diff --git a/src/include/catalog/toasting.h b/src/include/catalog/toasting.h
index 51491c4513..729bc2cc66 100644
--- a/src/include/catalog/toasting.h
+++ b/src/include/catalog/toasting.h
@@ -51,6 +51,7 @@ extern void BootstrapToastTable(char *relName,
 /* normal catalogs */
 DECLARE_TOAST(pg_aggregate, 4159, 4160);
 DECLARE_TOAST(pg_attrdef, 2830, 2831);
+DECLARE_TOAST(pg_class, 4179, 4180);
 DECLARE_TOAST(pg_collation, 4161, 4162);
 DECLARE_TOAST(pg_constraint, 2832, 2833);
 DECLARE_TOAST(pg_default_acl, 4143, 4144);
diff --git a/src/test/regress/expected/misc_sanity.out b/src/test/regress/expected/misc_sanity.out
index 8538173ff8..0a2f5cc2a2 100644
--- a/src/test/regress/expected/misc_sanity.out
+++ b/src/test/regress/expected/misc_sanity.out
@@ -100,12 +100,9 @@ ORDER BY 1, 2;
  pg_attribute| attfdwoptions | text[]
  pg_attribute| attmissingval | anyarray
  pg_attribute| attoptions| text[]
- pg_class| relacl| aclitem[]
- pg_class| reloptions| text[]
- pg_class| relpartbound  | pg_node_tree
  pg_index| indexprs  | pg_node_tree
  pg_index| indpred   | pg_node_tree
  pg_largeobject  | data  | bytea
  pg_largeobject_metadata | lomacl| aclitem[]
-(11 rows)
+(8 rows)
 


Re: pg_dump bug for extension owned tables

2020-09-04 Thread Fabrízio de Royes Mello
On Fri, Sep 4, 2020 at 3:00 PM Andrew Dunstan <
andrew.duns...@2ndquadrant.com> wrote:
>
>
> Thanks, Committed. Further investigation shows this was introduced in
> release 12, so that's how far back I went.
>

Thanks!

--
   Fabrízio de Royes Mello Timbira - http://www.timbira.com.br/
   PostgreSQL: Consultoria, Desenvolvimento, Suporte 24x7 e Treinamento


Re: pg_dump bug for extension owned tables

2020-08-06 Thread Fabrízio de Royes Mello
On Mon, Jul 13, 2020 at 6:18 PM Fabrízio de Royes Mello <
fabriziome...@gmail.com> wrote:

>
> On Mon, Jul 13, 2020 at 5:05 PM Andrew Dunstan <
> andrew.duns...@2ndquadrant.com> wrote:
> >
> >
> > On 7/13/20 2:46 PM, Fabrízio de Royes Mello wrote:
> > >
> > >
> > > >
> > > > yeah, that's the fix I came up with too. The only thing I added was
> > > > "Assert(tbinfo->attgenerated);" at about line 2097.
> > > >
> > >
> > > Cool.
> > >
> > > >
> > > > Will wait for your TAP tests.
> > > >
> > >
> > > Actually I've sent it already but it seems to have gone to the
> > > moderation queue.
> > >
> > > Anyway attached with your assertion and TAP tests.
> > >
> > >
> >
> >
> >
> > Thanks, that all seems fine. The TAP test changes are a bit of a pain in
> > the neck before release 11, so I think I'll just do those back that far,
> > but the main fix for all live branches.
> >
>
> Sounds good to me.
>
>
Just added it to the next commitfest [1] to make sure we don't lose it.

Regards,

[1] https://commitfest.postgresql.org/29/2671/

-- 
   Fabrízio de Royes Mello Timbira - http://www.timbira.com.br/
   PostgreSQL: Consultoria, Desenvolvimento, Suporte 24x7 e Treinamento


Re: pg_dump bug for extension owned tables

2020-07-13 Thread Fabrízio de Royes Mello
On Mon, Jul 13, 2020 at 5:05 PM Andrew Dunstan <
andrew.duns...@2ndquadrant.com> wrote:
>
>
> On 7/13/20 2:46 PM, Fabrízio de Royes Mello wrote:
> >
> >
> > >
> > > yeah, that's the fix I came up with too. The only thing I added was
> > > "Assert(tbinfo->attgenerated);" at about line 2097.
> > >
> >
> > Cool.
> >
> > >
> > > Will wait for your TAP tests.
> > >
> >
> > Actually I've sent it already but it seems to have gone to the
> > moderation queue.
> >
> > Anyway attached with your assertion and TAP tests.
> >
> >
>
>
>
> Thanks, that all seems fine. The TAP test changes are a bit of a pain in
> the neck before release 11, so I think I'll just do those back that far,
> but the main fix for all live branches.
>

Sounds good to me.

--
   Fabrízio de Royes Mello Timbira - http://www.timbira.com.br/
   PostgreSQL: Consultoria, Desenvolvimento, Suporte 24x7 e Treinamento


Re: pg_dump bug for extension owned tables

2020-07-13 Thread Fabrízio de Royes Mello
On Mon, Jul 13, 2020 at 3:29 PM Andrew Dunstan <
andrew.duns...@2ndquadrant.com> wrote:
>
>
> On 7/13/20 10:52 AM, Fabrízio de Royes Mello wrote:
> >
> > > On Sat, Jul 11, 2020 at 8:07 PM Andrew Dunstan wrote:
> > >
> > >
> > > On 6/26/20 2:10 PM, Fabrízio de Royes Mello wrote:
> > > >
> > > > > On Fri, Jun 26, 2020 at 11:55 AM Fabrízio de Royes Mello wrote:
> > > > >
> > > > >
> > > > > > On Fri, Jun 26, 2020 at 11:24 AM Andrew Dunstan wrote:
> > > > > >
> > > > > >
> > > > > > On 6/26/20 9:57 AM, Andrew Dunstan wrote:
> > > > > > > It appears that for extension owned tables
> > tbinfo.attgenerated isn't
> > > > > > > being properly populated, so line 2050 in REL_12_STABLE, which
> > > > is line
> > > > > > > 2109 in git tip, is failing.
> > > > > > >
> > > > > > >
> > > > > >
> > > > > > Should have mentioned this is in src/bin/pg_dump/pg_dump.c
> > > > > >
> > > > >
> > > > > Having a look on it.
> > > > >
> > > >
> > > Seems when qualifying the schema name, the "tbinfo->interesting" field
> > > is not set for extension objects, so getTableAttrs can't fill
> > > the attgenerated field properly.
> > > >
> > > > I'm not 100% sure it's the correct way but the attached patch works
> > > > for me and all tests passed. Maybe we should add more TAP tests?
> > > >
> > > >
> > >
> > >
> > > I just tried this patch out on master, with the test case I gave
> > > upthread. It's not working, still getting a segfault.
> > >
> >
> > Ohh, really sorry about that... my bad... I completely forgot about it!!!
> >
> > Due to my rush I ended up adding the wrong patch version. Attached the
> > correct version.
> >
> > Will add TAP tests to src/test/modules/test_pg_dump
>
>
> yeah, that's the fix I came up with too. The only thing I added was
> "Assert(tbinfo->attgenerated);" at about line 2097.
>

Cool.

>
> Will wait for your TAP tests.
>

Actually I've sent it already but it seems to have gone to the moderation
queue.

Anyway attached with your assertion and TAP tests.

Regards,

--
   Fabrízio de Royes Mello Timbira - http://www.timbira.com.br/
   PostgreSQL: Consultoria, Desenvolvimento, Suporte 24x7 e Treinamento
diff --git a/src/bin/pg_dump/pg_dump.c b/src/bin/pg_dump/pg_dump.c
index e758b5c50a..baacd7af63 100644
--- a/src/bin/pg_dump/pg_dump.c
+++ b/src/bin/pg_dump/pg_dump.c
@@ -2094,6 +2094,8 @@ dumpTableData_insert(Archive *fout, void *dcontext)
 			if (nfields == 0)
 continue;
 
+			Assert(tbinfo->attgenerated);
+
 			/* Emit a row heading */
 			if (rows_per_statement == 1)
 archputs(" (", fout);
@@ -18037,6 +18039,8 @@ processExtensionTables(Archive *fout, ExtensionInfo extinfo[],
 			configtbl->dataObj->filtercond = pg_strdup(extconditionarray[j]);
 	}
 }
+
+configtbl->interesting = dumpobj;
 			}
 		}
 		if (extconfigarray)
diff --git a/src/test/modules/test_pg_dump/t/001_base.pl b/src/test/modules/test_pg_dump/t/001_base.pl
index ae120a5ee3..78aa07ce51 100644
--- a/src/test/modules/test_pg_dump/t/001_base.pl
+++ b/src/test/modules/test_pg_dump/t/001_base.pl
@@ -135,6 +135,12 @@ my %pgdump_runs = (
 			"$tempdir/defaults_tar_format.tar",
 		],
 	},
+	extension_schema => {
+		dump_cmd => [
+			'pg_dump', '--schema=public', '--inserts',
+			"--file=$tempdir/extension_schema.sql", 'postgres',
+		],
+	},
 	pg_dumpall_globals => {
 		dump_cmd => [
 			'pg_dumpall', '--no-sync',
@@ -301,8 +307,9 @@ my %tests = (
 			\n/xm,
 		like => {
 			%full_runs,
-			data_only=> 1,
-			section_data => 1,
+			data_only=> 1,
+			section_data => 1,
+			extension_schema => 1,
 		},
 	},
 
@@ -536,6 +543,7 @@ my %tests = (
 		like   => {%pgdump_runs},
 		unlike => {
 			data_only  => 1,
+			extension_schema   => 1,
 			pg_dumpall_globals => 1,
 			section_data   => 1,
 			section_pre_data   => 1,
@@ -549,6 +

Re: pg_dump bug for extension owned tables

2020-07-13 Thread Fabrízio de Royes Mello
On Mon, Jul 13, 2020 at 11:52 AM Fabrízio de Royes Mello <
fabriziome...@gmail.com> wrote:
>
>
> On Sat, Jul 11, 2020 at 8:07 PM Andrew Dunstan <
andrew.duns...@2ndquadrant.com> wrote:
> >
> >
> > On 6/26/20 2:10 PM, Fabrízio de Royes Mello wrote:
> > >
> > > On Fri, Jun 26, 2020 at 11:55 AM Fabrízio de Royes Mello  wrote:
> > > >
> > > >
> > > > On Fri, Jun 26, 2020 at 11:24 AM Andrew Dunstan  wrote:
> > > > >
> > > > >
> > > > > On 6/26/20 9:57 AM, Andrew Dunstan wrote:
> > > > > > It appears that for extension owned tables tbinfo.attgenerated isn't
> > > > > > being properly populated, so line 2050 in REL_12_STABLE, which is line
> > > > > > 2109 in git tip, is failing.
> > > > > >
> > > > > >
> > > > >
> > > > > Should have mentioned this is in src/bin/pg_dump/pg_dump.c
> > > > >
> > > >
> > > > Having a look on it.
> > > >
> > >
> > > It seems that when the schema name is qualified, the
> > > "tbinfo->interesting" field is not set for extension objects, so
> > > getTableAttrs can't fill the attgenerated field properly.
> > >
> > > I'm not 100% sure it's the correct way but the attached patch works
> > > for me and all tests passed. Maybe we should add more TAP tests?
> > >
> > >
> >
> >
> > I just tried this patch out on master, with the test case I gave
> > upthread. It's not working, still getting a segfault.
> >
>
> Ohh really sorry about it... my bad... I completely forgot about it!!!
>
> Due to my rush I ended up adding the wrong patch version. Attached the
correct version.
>
> Will add TAP tests to src/test/modules/test_pg_dump
>

Attached the patch including TAP tests.

Regards,

--
   Fabrízio de Royes Mello Timbira - http://www.timbira.com.br/
   PostgreSQL: Consultoria, Desenvolvimento, Suporte 24x7 e Treinamento
diff --git a/src/bin/pg_dump/pg_dump.c b/src/bin/pg_dump/pg_dump.c
index e758b5c50a..a998e5c821 100644
--- a/src/bin/pg_dump/pg_dump.c
+++ b/src/bin/pg_dump/pg_dump.c
@@ -18037,6 +18037,8 @@ processExtensionTables(Archive *fout, ExtensionInfo extinfo[],
 			configtbl->dataObj->filtercond = pg_strdup(extconditionarray[j]);
 	}
 }
+
+configtbl->interesting = dumpobj;
 			}
 		}
 		if (extconfigarray)
diff --git a/src/test/modules/test_pg_dump/t/001_base.pl b/src/test/modules/test_pg_dump/t/001_base.pl
index ae120a5ee3..78aa07ce51 100644
--- a/src/test/modules/test_pg_dump/t/001_base.pl
+++ b/src/test/modules/test_pg_dump/t/001_base.pl
@@ -135,6 +135,12 @@ my %pgdump_runs = (
 			"$tempdir/defaults_tar_format.tar",
 		],
 	},
+	extension_schema => {
+		dump_cmd => [
+			'pg_dump', '--schema=public', '--inserts',
+			"--file=$tempdir/extension_schema.sql", 'postgres',
+		],
+	},
 	pg_dumpall_globals => {
 		dump_cmd => [
 			'pg_dumpall', '--no-sync',
@@ -301,8 +307,9 @@ my %tests = (
 			\n/xm,
 		like => {
 			%full_runs,
-			data_only=> 1,
-			section_data => 1,
+			data_only=> 1,
+			section_data => 1,
+			extension_schema => 1,
 		},
 	},
 
@@ -536,6 +543,7 @@ my %tests = (
 		like   => {%pgdump_runs},
 		unlike => {
 			data_only  => 1,
+			extension_schema   => 1,
 			pg_dumpall_globals => 1,
 			section_data   => 1,
 			section_pre_data   => 1,
@@ -549,6 +557,7 @@ my %tests = (
 		like   => {%pgdump_runs},
 		unlike => {
 			data_only  => 1,
+			extension_schema   => 1,
 			pg_dumpall_globals => 1,
 			section_data   => 1,
 			section_pre_data   => 1,
@@ -569,6 +578,17 @@ my %tests = (
 			schema_only  => 1,
 			section_pre_data => 1,
 		},
+	},
+
+	# Dumpable object inside specific schema
+	'INSERT INTO public.regress_table_dumpable VALUES (1);' => {
+		create_sql   => 'INSERT INTO public.regress_table_dumpable VALUES (1);',
+		regexp   => qr/^
+			\QINSERT INTO public.regress_table_dumpable VALUES (1);\E
+			\n/xm,
+		like => {
+			extension_schema => 1,
+		},
 	},);
 
 #
diff --git a/src/test/modules/test_pg_dump/test_pg_dump--1.0.sql b/src/test/modules/test_pg_dump/test_pg_dump--1.0.sql
index 3ed007a7b1..90e461ed35 100644
--- a/src/test/modules/test_pg_dump/test_pg_dump--1.0.sql
+++ b/src/test/modules/test_pg_dump/test_pg_dump--1.0.sql
@@ -13,6 +13,11 @@ CREATE SEQUENCE regress_pg_dump_seq;
 CREATE SEQUENCE regress_seq_dumpable;
 SELECT pg_catalog.pg_extension_config_dump('regress_seq_dumpable', '');
 
+CREATE TABLE regress_table_dumpable (
+	col1 int
+);
+SELECT pg_catalog.pg_extension_config_dump('regress_table_dumpable', '');
+
 CREATE SCHEMA regress_pg_dump_schema;
 
 GRANT USAGE ON regress_pg_dump_seq TO regress_dump_test_role;


Re: pg_dump bug for extension owned tables

2020-07-13 Thread Fabrízio de Royes Mello
On Sat, Jul 11, 2020 at 8:07 PM Andrew Dunstan <
andrew.duns...@2ndquadrant.com> wrote:
>
>
> On 6/26/20 2:10 PM, Fabrízio de Royes Mello wrote:
> >
> > On Fri, Jun 26, 2020 at 11:55 AM Fabrízio de Royes Mello  wrote:
> > >
> > >
> > > On Fri, Jun 26, 2020 at 11:24 AM Andrew Dunstan  wrote:
> > > >
> > > >
> > > > On 6/26/20 9:57 AM, Andrew Dunstan wrote:
> > > > > It appears that for extension owned tables tbinfo.attgenerated isn't
> > > > > being properly populated, so line 2050 in REL_12_STABLE, which is line
> > > > > 2109 in git tip, is failing.
> > > > >
> > > > >
> > > >
> > > > Should have mentioned this is in src/bin/pg_dump/pg_dump.c
> > > >
> > >
> > > Having a look on it.
> > >
> >
> > It seems that when the schema name is qualified, the "tbinfo->interesting"
> > field is not set for extension objects, so getTableAttrs can't fill the
> > attgenerated field properly.
> >
> > I'm not 100% sure it's the correct way but the attached patch works
> > for me and all tests passed. Maybe we should add more TAP tests?
> >
> >
>
>
> I just tried this patch out on master, with the test case I gave
> upthread. It's not working, still getting a segfault.
>

Ohh really sorry about it... my bad... I completely forgot about it!!!

Due to my rush I ended up adding the wrong patch version. Attached the
correct version.

Will add TAP tests to src/test/modules/test_pg_dump

Regards,

--
   Fabrízio de Royes Mello Timbira - http://www.timbira.com.br/
   PostgreSQL: Consultoria, Desenvolvimento, Suporte 24x7 e Treinamento
diff --git a/src/bin/pg_dump/pg_dump.c b/src/bin/pg_dump/pg_dump.c
index e758b5c50a..a998e5c821 100644
--- a/src/bin/pg_dump/pg_dump.c
+++ b/src/bin/pg_dump/pg_dump.c
@@ -18037,6 +18037,8 @@ processExtensionTables(Archive *fout, ExtensionInfo extinfo[],
 			configtbl->dataObj->filtercond = pg_strdup(extconditionarray[j]);
 	}
 }
+
+configtbl->interesting = dumpobj;
 			}
 		}
 		if (extconfigarray)


Re: pg_dump bug for extension owned tables

2020-06-26 Thread Fabrízio de Royes Mello
On Fri, Jun 26, 2020 at 11:55 AM Fabrízio de Royes Mello <
fabriziome...@gmail.com> wrote:
>
>
> On Fri, Jun 26, 2020 at 11:24 AM Andrew Dunstan <
andrew.duns...@2ndquadrant.com> wrote:
> >
> >
> > On 6/26/20 9:57 AM, Andrew Dunstan wrote:
> > > It appears that for extension owned tables tbinfo.attgenerated isn't
> > > being properly populated, so line 2050 in REL_12_STABLE, which is line
> > > 2109 in git tip, is failing.
> > >
> > >
> >
> > Should have mentioned this is in src/bin/pg_dump/pg_dump.c
> >
>
> Having a look on it.
>

It seems that when the schema name is qualified, the "tbinfo->interesting"
field is not set for extension objects, so getTableAttrs can't fill the
attgenerated field properly.

I'm not 100% sure it's the correct way but the attached patch works for me
and all tests passed. Maybe we should add more TAP tests?
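
For reference, a minimal reproduction of the crash, assuming a test
extension whose script marks one of its tables as dumpable (essentially
what the TAP test in the follow-up patch exercises):

-- in the extension's script file, e.g. test_pg_dump--1.0.sql:
CREATE TABLE regress_table_dumpable (col1 int);
SELECT pg_catalog.pg_extension_config_dump('regress_table_dumpable', '');

-- then, after CREATE EXTENSION test_pg_dump, this segfaulted before the fix:
-- $ pg_dump --schema=public --inserts postgres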

Regards,

--
   Fabrízio de Royes Mello Timbira - http://www.timbira.com.br/
   PostgreSQL: Consultoria, Desenvolvimento, Suporte 24x7 e Treinamento
diff --git a/src/bin/pg_dump/pg_dump.c b/src/bin/pg_dump/pg_dump.c
index a41a3db876..e12811832f 100644
--- a/src/bin/pg_dump/pg_dump.c
+++ b/src/bin/pg_dump/pg_dump.c
@@ -18063,6 +18063,9 @@ processExtensionTables(Archive *fout, ExtensionInfo extinfo[],
 		if (strlen(extconditionarray[j]) > 0)
 			configtbl->dataObj->filtercond = pg_strdup(extconditionarray[j]);
 	}
+
+	if (configtbl->dobj.dump != DUMP_COMPONENT_NONE)
+		configtbl->interesting = true;
 }
 			}
 		}


Re: pg_dump bug for extension owned tables

2020-06-26 Thread Fabrízio de Royes Mello
On Fri, Jun 26, 2020 at 11:24 AM Andrew Dunstan <
andrew.duns...@2ndquadrant.com> wrote:
>
>
> On 6/26/20 9:57 AM, Andrew Dunstan wrote:
> > It appears that for extension owned tables tbinfo.attgenerated isn't
> > being properly populated, so line 2050 in REL_12_STABLE, which is line
> > 2109 in git tip, is failing.
> >
> >
>
> Should have mentioned this is in src/bin/pg_dump/pg_dump.c
>

Having a look on it.

--
   Fabrízio de Royes Mello Timbira - http://www.timbira.com.br/
   PostgreSQL: Consultoria, Desenvolvimento, Suporte 24x7 e Treinamento


Re: PostgreSQL proposal of Google Summer of Code

2020-03-30 Thread Fabrízio de Royes Mello
On Sun, Mar 29, 2020 at 4:48 PM Maryam Farrukh 
wrote:
>
> Hi Fabrizio,
>

Hi Maryam, please try to avoid top posting!!

Returning the discussion to pgsql-hackers.


> Thank you for reaching out. I have a question. I went through the links
> you provided me. There are already some project ideas over there. My
> question is: do we have to select a project from there, or come up with
> an idea of our own?
>

If you have a good idea and it fits GSoC, feel free to propose it... the
listed ideas are just suggestions.

Regards,

--
   Fabrízio de Royes Mello Timbira - http://www.timbira.com.br/
   PostgreSQL: Consultoria, Desenvolvimento, Suporte 24x7 e Treinamento


Re: PostgreSQL proposal of Google Summer of Code

2020-03-24 Thread Fabrízio de Royes Mello
On Tue, Mar 24, 2020 at 7:07 PM Maryam Farrukh 
wrote:

> Dear Sir/Madam,
>
> I am very much interested in working on a project of PostgreSQL for Google
> summer internship. While I was writing a proposal, I came across some
> guidelines by the company to get in touch about the nature of the project
> and then draft the proposal. I would be very much interested in learning
> more about the project so I can come up with a reasonable proposal.
>
>
Hi Maryam,

You can start having a look on the following links:
https://wiki.postgresql.org/wiki/GSoC
https://wiki.postgresql.org/wiki/GSoC_2020

As a former PostgreSQL GSoC student I can tell you it's an amazing experience.

Regards,

-- 
   Fabrízio de Royes Mello Timbira - http://www.timbira.com.br/
   PostgreSQL: Consultoria, Desenvolvimento, Suporte 24x7 e Treinamento


Re: backend type in log_line_prefix?

2020-03-19 Thread Fabrízio de Royes Mello
On Sun, Mar 15, 2020 at 7:32 AM Peter Eisentraut <
peter.eisentr...@2ndquadrant.com> wrote:
>
>
> I have committed that last one also, after some corrections.
>

IMHO we should also update file_fdw documentation. See attached!
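
With that column in place, per-backend-type analysis of the CSV logs
becomes straightforward, e.g. (assuming the pglog foreign table setup from
the file_fdw documentation):

SELECT backend_type, count(*)
FROM pglog
GROUP BY backend_type
ORDER BY count(*) DESC;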

Regards,

--
   Fabrízio de Royes Mello Timbira - http://www.timbira.com.br/
   PostgreSQL: Consultoria, Desenvolvimento, Suporte 24x7 e Treinamento
diff --git a/doc/src/sgml/file-fdw.sgml b/doc/src/sgml/file-fdw.sgml
index 28b61c8f2d..ed028e4ec9 100644
--- a/doc/src/sgml/file-fdw.sgml
+++ b/doc/src/sgml/file-fdw.sgml
@@ -261,7 +261,8 @@ CREATE FOREIGN TABLE pglog (
   query text,
   query_pos integer,
   location text,
-  application_name text
+  application_name text,
+  backend_type text
 ) SERVER pglog
 OPTIONS ( filename '/home/josh/data/log/pglog.csv', format 'csv' );
 


Re: Bug in pg_restore with EventTrigger in parallel mode

2020-03-09 Thread Fabrízio de Royes Mello
On Mon, Mar 9, 2020 at 3:59 PM Tom Lane  wrote:
>
> Fabrízio de Royes Mello  writes:
> > On Mon, Mar 9, 2020 at 12:27 PM Tom Lane  wrote:
> >> Which, TBH, makes me wonder about the validity of the original
complaint
> >> in this thread.  I don't mind delaying ET restore as long as we
feasibly
> >> can; but if you have an ET that is going to misbehave during restore,
> >> you are in for pain, and it's hard to consider that that pain isn't
> >> self-inflicted.
>
> > The proposed patch solves the original complaint. I was just trying to
> > fully understand what you pointed out before, and I agree with you.
> > Thanks for the clear explanation.
>
> OK, thanks for confirming that this solves your issue in practice.
>
> > About the patch, LGTM, and IMHO we should back-patch it to all supported
> > versions.
>
> Done.
>

Great, thanks!

--
   Fabrízio de Royes Mello Timbira - http://www.timbira.com.br/
   PostgreSQL: Consultoria, Desenvolvimento, Suporte 24x7 e Treinamento


Re: Bug in pg_restore with EventTrigger in parallel mode

2020-03-09 Thread Fabrízio de Royes Mello
On Mon, Mar 9, 2020 at 12:27 PM Tom Lane  wrote:
>
> In the case of event triggers, the obvious counterexample is that if
> you restore ET A and then ET B, ET A might interfere with the attempt
> to restore ET B.  (And we have no way to know whether restoring B
> before A would be better or worse.)
>

Yeap... you're correct.


> So on the whole I find "restore matviews as if they'd been refreshed
> after the restore" to be a more trustworthy approach than the other
> way.  At some level we have to trust that ETs aren't going to totally
> bollix the restore.
>

Ok.

> Which, TBH, makes me wonder about the validity of the original complaint
> in this thread.  I don't mind delaying ET restore as long as we feasibly
> can; but if you have an ET that is going to misbehave during restore,
> you are in for pain, and it's hard to consider that that pain isn't
> self-inflicted.
>

The proposed patch solves the original complaint. I was just trying to
fully understand what you pointed out before, and I agree with you.
Thanks for the clear explanation.

About the patch, LGTM, and IMHO we should back-patch it to all supported
versions.

Regards,

--
   Fabrízio de Royes Mello Timbira - http://www.timbira.com.br/
   PostgreSQL: Consultoria, Desenvolvimento, Suporte 24x7 e Treinamento


Re: Bug in pg_restore with EventTrigger in parallel mode

2020-03-09 Thread Fabrízio de Royes Mello
On Sat, Mar 7, 2020 at 8:42 PM Tom Lane  wrote:
>
> vignesh C  writes:
> > I'm not sure if we can add a test for this, can you have a thought
> > about this to check if we can add a test.
>
> Yeah, I'm not quite sure if a test is worth the trouble or not.
>
> We clearly do need to restore event triggers later than we do now, even
> without considering parallel restore: they should not be able to prevent
> us from executing other restore actions.  This is just like the rule that
> we don't restore DML triggers until after we've loaded data.
>

Ok.


> However, I think that the existing code is correct to restore event
> triggers before matview refreshes, not after as this patch would have us
> do.  The basic idea for matview refresh is that it should happen in the
> normal running state of the database.  If an event trigger interferes with
> that, it would've done so in normal running as well.
>

I'm not totally sure if it's entirely correct.

For example, if I write an event trigger to perform some kind of DDL
auditing, then during the restore the "refresh matview" operation will be
audited, and IMHO that's wrong.
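
A minimal sketch of such an auditing event trigger (the audit table and
names are hypothetical):

CREATE TABLE ddl_audit (logged_at timestamptz DEFAULT now(),
                        tag text, object text);

CREATE OR REPLACE FUNCTION audit_ddl() RETURNS event_trigger AS $$
DECLARE
  r RECORD;
BEGIN
  FOR r IN SELECT command_tag, object_identity
           FROM pg_event_trigger_ddl_commands()
  LOOP
    INSERT INTO ddl_audit (tag, object)
    VALUES (r.command_tag, r.object_identity);
  END LOOP;
END;
$$ LANGUAGE plpgsql;

CREATE EVENT TRIGGER audit_ddl_trg ON ddl_command_end
EXECUTE PROCEDURE audit_ddl();

If this fires while pg_restore refreshes a materialized view, the refresh
lands in the audit trail as if a user had run it.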


> I'm also not terribly on board with loading more functionality onto the
> RestorePass mechanism.  That's a crock that should go away someday,
> because it basically duplicates and overrides pg_dump's normal object
> sorting mechanism.  So we don't want it doing more than it absolutely
> has to.  But in this case, I don't see any reason why we can't just
> restore event triggers and matviews in the same post-ACL restore pass.

Totally agree with it.


> In a serial restore, that will make the event triggers come first
> because of the existing sort rules.  In a parallel restore, it's possible
> that they'd be intermixed, but that doesn't bother me.  Again, if your
> event triggers have side-effects on your matview refreshes, you're
> going to have some issues anyway.
>

IMHO event triggers can't be fired during pg_restore under any circumstances,
because that can lead us to a different database state than the one the dump
was taken from.


> So that leads me to the attached, which renames the "RESTORE_PASS_REFRESH"
> symbol for clarity, and updates the pg_dump_sort.c code and comments
> to match what's really going on.
>

Ok.

Regards,

--
   Fabrízio de Royes Mello Timbira - http://www.timbira.com.br/
   PostgreSQL: Consultoria, Desenvolvimento, Suporte 24x7 e Treinamento


Re: Bug in pg_restore with EventTrigger in parallel mode

2020-02-20 Thread Fabrízio de Royes Mello
On Thu, Feb 20, 2020 at 4:52 AM Michael Paquier  wrote:
>
> That sounds right, as event triggers could interact with GRANT and
> REFRESH of matviews, so they should be logically last.  Looking at the
> recent commit history, this would be similar to 3eb9a5e as we don't
> really have a way to treat event triggers as dependency-sortable
> objects.
>

Indeed... event triggers should be the last thing to be restored.

>  What kind of errors did you see in this customer
> environment?  Errors triggered by one or more event triggers blocking
> some commands based on a tag match?
>

By errors I meant the weird behavior I described before: pg_restore creates
the event triggers in parallel mode, and when other objects are created
after that, the event triggers are fired during the restore...

Have a look at the new attached patch.

Regards,

--
   Fabrízio de Royes Mello Timbira - http://www.timbira.com.br/
   PostgreSQL: Consultoria, Desenvolvimento, Suporte 24x7 e Treinamento
diff --git a/src/bin/pg_dump/pg_backup_archiver.c b/src/bin/pg_dump/pg_backup_archiver.c
index 77bf9edaba..faa93aaf9a 100644
--- a/src/bin/pg_dump/pg_backup_archiver.c
+++ b/src/bin/pg_dump/pg_backup_archiver.c
@@ -670,11 +670,13 @@ RestoreArchive(Archive *AHX)
 	{
 		/*
 		 * In serial mode, process everything in three phases: normal items,
-		 * then ACLs, then matview refresh items.  We might be able to skip
-		 * one or both extra phases in some cases, eg data-only restores.
+		 * then ACLs, matview refresh items, then event triggers. We might be 
+		 * able to skip one or both extra phases in some cases, eg data-only
+		 * restores.
 		 */
 		bool		haveACL = false;
 		bool		haveRefresh = false;
+		bool		haveEventTrigger = false;
 
 		for (te = AH->toc->next; te != AH->toc; te = te->next)
 		{
@@ -692,6 +694,9 @@ RestoreArchive(Archive *AHX)
 case RESTORE_PASS_REFRESH:
 	haveRefresh = true;
 	break;
+case RESTORE_PASS_EVENT_TRIGGER:
+	haveEventTrigger = true;
+	break;
 			}
 		}
 
@@ -714,6 +719,16 @@ RestoreArchive(Archive *AHX)
 	(void) restore_toc_entry(AH, te, false);
 			}
 		}
+
+		if (haveEventTrigger)
+		{
+			for (te = AH->toc->next; te != AH->toc; te = te->next)
+			{
+if ((te->reqs & (REQ_SCHEMA | REQ_DATA)) != 0 &&
+	_tocEntryRestorePass(te) == RESTORE_PASS_EVENT_TRIGGER)
+	(void) restore_toc_entry(AH, te, false);
+			}
+		}
 	}
 
 	if (ropt->single_txn)
@@ -3084,6 +3099,8 @@ _tocEntryRestorePass(TocEntry *te)
 		return RESTORE_PASS_ACL;
 	if (strcmp(te->desc, "MATERIALIZED VIEW DATA") == 0)
 		return RESTORE_PASS_REFRESH;
+	if (strcmp(te->desc, "EVENT TRIGGER") == 0)
+		return RESTORE_PASS_EVENT_TRIGGER;
 	return RESTORE_PASS_MAIN;
 }
 
diff --git a/src/bin/pg_dump/pg_backup_archiver.h b/src/bin/pg_dump/pg_backup_archiver.h
index 6768f92976..204019b514 100644
--- a/src/bin/pg_dump/pg_backup_archiver.h
+++ b/src/bin/pg_dump/pg_backup_archiver.h
@@ -220,9 +220,10 @@ typedef enum
 {
 	RESTORE_PASS_MAIN = 0,		/* Main pass (most TOC item types) */
 	RESTORE_PASS_ACL,			/* ACL item types */
-	RESTORE_PASS_REFRESH		/* Matview REFRESH items */
+	RESTORE_PASS_REFRESH,		/* Matview REFRESH items */
+	RESTORE_PASS_EVENT_TRIGGER	/* Event Trigger items */
 
-#define RESTORE_PASS_LAST RESTORE_PASS_REFRESH
+#define RESTORE_PASS_LAST RESTORE_PASS_EVENT_TRIGGER
 } RestorePass;
 
 typedef enum


Re: Bug in pg_restore with EventTrigger in parallel mode

2020-02-13 Thread Fabrízio de Royes Mello
On Thu, Feb 13, 2020 at 12:52 AM Michael Paquier 
wrote:

> On Wed, Feb 12, 2020 at 01:59:05PM -0300, Fabrízio de Royes Mello wrote:
> > In parallel mode it fires the event triggers, and that can't happen.
> > Poking around, I did some tests with the attached patch just to leave
> > event triggers in the pending_list and process them in
> > restore_toc_entries_postfork, and everything is ok; but my solution is
> > very ugly, so maybe we need to invent a new RestorePass to take care of
> > it, like RESTORE_PASS_ACL and RESTORE_PASS_REFRESH. I can provide a more
> > polished patch if that's a good way to do it.
>
> Could you add that as a bug fix to the next CF [1]?
>
> [1]: https://commitfest.postgresql.org/27/
>
>
Done, thanks!
https://commitfest.postgresql.org/27/2450/

Regards,

-- 
   Fabrízio de Royes Mello Timbira - http://www.timbira.com.br/
   PostgreSQL: Consultoria, Desenvolvimento, Suporte 24x7 e Treinamento


Bug in pg_restore with EventTrigger in parallel mode

2020-02-12 Thread Fabrízio de Royes Mello
Hi all,

Today, digging into a customer issue about errors in pg_restore, I realized
that pg_restore dispatches a worker to restore event triggers during
restore_toc_entries_parallel. IMHO event triggers should be restored during
restore_toc_entries_postfork, in serial mode.

For example this simple database schema:

BEGIN;

CREATE TABLE foo(c1 bigserial NOT NULL, c2 varchar(100) NOT NULL, PRIMARY
KEY (c1));
INSERT INTO foo (c2) SELECT 'Foo '||id FROM generate_series(0,10) id;
CREATE INDEX foo_1 ON foo (c2);

CREATE TABLE bar(c1 bigserial NOT NULL, c2 bigint REFERENCES public.foo, c3
varchar(100), PRIMARY KEY (c1));
INSERT INTO bar (c2, c3) SELECT (random()*10)::bigint+1, 'Bar '||id FROM
generate_series(1,1) id;
CREATE INDEX bar_1 ON bar (c2);
CREATE INDEX bar_2 ON bar (c3);

CREATE OR REPLACE FUNCTION f_test_ddl_trigger()
RETURNS event_trigger AS
$$
DECLARE
  r RECORD;
BEGIN
  FOR r IN
SELECT objid, objsubid, schema_name, objid::regclass::text AS
table_name, command_tag, object_type, object_identity
FROM pg_event_trigger_ddl_commands()
  LOOP
RAISE INFO 'RUN EVENT TRIGGER %', r;
  END LOOP;
END;
$$
LANGUAGE plpgsql;

CREATE EVENT TRIGGER test_ddl_trigger
ON ddl_command_end
EXECUTE PROCEDURE f_test_ddl_trigger();

COMMIT;

Running the dump:
$ bin/pg_dump -Fc -f /tmp/teste.dump fabrizio

Restoring with one worker everything is ok:
fabrizio@macanudo:~/pgsql
$ bin/pg_restore -Fc -d fabrizio_restore_serial /tmp/teste.dump | grep 'RUN
EVENT TRIGGER'

Running with more than one worker:
fabrizio@macanudo:~/pgsql
$ bin/pg_restore -Fc -j2 -d fabrizio_restore_parallel /tmp/teste.dump |
grep 'RUN EVENT TRIGGER'
pg_restore: INFO:  RUN EVENT TRIGGER (16906,0,public,public.bar,"ALTER
TABLE",table,public.bar)

In parallel mode it fires the event triggers, and that can't happen.
Poking around, I did some tests with the attached patch just to leave event
triggers in the pending_list and process them in
restore_toc_entries_postfork, and everything is ok; but my solution is very
ugly, so maybe we need to invent a new RestorePass to take care of it, like
RESTORE_PASS_ACL and RESTORE_PASS_REFRESH. I can provide a more polished
patch if that's a good way to do it.

Regards,

-- 
   Fabrízio de Royes Mello Timbira - http://www.timbira.com.br/
   PostgreSQL: Consultoria, Desenvolvimento, Suporte 24x7 e Treinamento
diff --git a/src/bin/pg_dump/pg_backup_archiver.c b/src/bin/pg_dump/pg_backup_archiver.c
index 77bf9edaba..9762e4c973 100644
--- a/src/bin/pg_dump/pg_backup_archiver.c
+++ b/src/bin/pg_dump/pg_backup_archiver.c
@@ -4357,7 +4357,8 @@ move_to_ready_list(TocEntry *pending_list,
 		next_te = te->pending_next;
 
 		if (te->depCount == 0 &&
-			_tocEntryRestorePass(te) == pass)
+			_tocEntryRestorePass(te) == pass &&
+			strcmp(te->desc, "EVENT TRIGGER") != 0) /* leave EVENT TRIGGER in pending_list */
 		{
 			/* Remove it from pending_list ... */
 			pending_list_remove(te);


Re: Restore backup file "with oids"

2019-12-18 Thread Fabrízio de Royes Mello
On Wed, Dec 18, 2019 at 2:28 PM gmail Vladimir Koković <
vladimir.koko...@gmail.com> wrote:

> Hi,
>
> Is there somewhere a script that can do a restore backup file "with oids"
> in some newer DB where there is no term "with oids"?
>
>
>
Short answer: No!

Long answer: try using the pg_dump binary from your new version (12)
connected to your old version... that is the best practice when you want to
upgrade PostgreSQL using the dump/restore strategy.
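
For example (paths and names hypothetical) — a v12 pg_dump run against the
older server should simply skip the old OID columns, so the output restores
cleanly into a server without WITH OIDS support:

$ /usr/pgsql-12/bin/pg_dump -h old-server -Fc -f mydb.dump mydb
$ /usr/pgsql-12/bin/pg_restore -d mydb mydb.dump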

Regards,

-- 
   Fabrízio de Royes Mello Timbira - http://www.timbira.com.br/
   PostgreSQL: Consultoria, Desenvolvimento, Suporte 24x7 e Treinamento


Re: Hooks for session start and end, take two

2019-10-03 Thread Fabrízio de Royes Mello
On Thu, Oct 3, 2019 at 1:35 AM Andres Freund  wrote:
>
> [...]
>
> In this state I think this patch should be flat out rejected.
>

Ok.

> I'm seriously baffled at how this stuff is being pursued aggressively,
> quite apparently without any serious design considerations. I mean this
> stuff had to be backed out twice, yet it's being reproposed without much
> consideration for concerns?
>

And what if (again), for the first version of the session start hook, we do
it just for:
- client backends
- background workers

For sure we'll need a new design, but for now I can't imagine a use case
for session types other than those listed above.

Regards,

--
   Fabrízio de Royes Mello Timbira - http://www.timbira.com.br/
   PostgreSQL: Consultoria, Desenvolvimento, Suporte 24x7 e Treinamento


Re: Hooks for session start and end, take two

2019-09-30 Thread Fabrízio de Royes Mello
On Sun, Sep 29, 2019 at 10:29 PM Michael Paquier 
wrote:
>
> This code path can only be taken by normal backends, so that would
> apply, still I don't actually see why we should limit us here on the
> backend side.  If for a reason or another those two code paths begin
> to be taken by a backend with InvalidBackendId, then users of the
> session start/end hook will need to think how to handle it if they
> didn't from the start, which sounds like a good thing to me.
>

Makes sense to me. I became a reviewer and ran all tests (make check &&
make check-world), and everything is ok. Changed status to "ready for
committer".

Regards,

--
   Fabrízio de Royes Mello Timbira - http://www.timbira.com.br/
   PostgreSQL: Consultoria, Desenvolvimento, Suporte 24x7 e Treinamento


Re: Hooks for session start and end, take two

2019-09-28 Thread Fabrízio de Royes Mello
On Fri, Sep 27, 2019 at 4:26 PM legrand legrand 
wrote:
>
> OK I confirm:
> - "client backend" appears at session start and end hook,
> - "autovacuum worker" and "pg_background" only appears at session end hook
>   (backend_start can be retreived from pg_stat_activity),
> - "parallel workers" are not visible at all
>   (because extension cannot assign XIDs during a parallel operation)
>
> All seems fine to me.
>

Hi all,

First of all thanks Michael for bringing this to life again.

I poked at the patch a little and everything is ok. Your check for normal
backends in test_session_hooks is much simpler than what I did before:

+/* just consider normal backends */
+if (MyBackendId == InvalidBackendId)
+return;

But one thing came to mind: why not hook just normal backends in this first
version?

Regards,

--
   Fabrízio de Royes Mello Timbira - http://www.timbira.com.br/
   PostgreSQL: Consultoria, Desenvolvimento, Suporte 24x7 e Treinamento


Re: Introduce MIN/MAX aggregate functions to pg_lsn

2019-07-05 Thread Fabrízio de Royes Mello
On Fri, Jul 5, 2019 at 12:22 AM Michael Paquier  wrote:
>
> On Thu, Jul 04, 2019 at 01:48:24PM -0300, Fabrízio de Royes Mello wrote:
> > On Thu, Jul 4, 2019 at 10:57 AM Robert Haas 
wrote:
> >> It would be pretty silly to have one and not the other, regardless of
> >> whether we can think of an immediate use case.
> >
> > +1
>
> OK, applied with a catalog version bump.  This is cool to have.
>

Awesome... thanks.

Att,

--
   Fabrízio de Royes Mello Timbira - http://www.timbira.com.br/
   PostgreSQL: Consultoria, Desenvolvimento, Suporte 24x7 e Treinamento


Re: Introduce MIN/MAX aggregate functions to pg_lsn

2019-07-04 Thread Fabrízio de Royes Mello
On Thu, Jul 4, 2019 at 5:17 AM Michael Paquier  wrote:
>
> On Tue, Jul 02, 2019 at 11:31:49AM -0300, Fabrízio de Royes Mello wrote:
> > New version attached.
>
> This looks in pretty good shape to me, and no objections from me to
> get those functions as the min() flavor is useful for monitoring WAL
> retention for complex deployments.
>
> Do you have a particular use-case in mind for max() one?  I can think
> of at least one case: monitoring the flush LSNs of a set of standbys
> to find out how much has been replayed at most.
>

I use min/max to measure the amount of WAL generated (as a diff) over some
period, based on WAL positions stored in a monitoring system.
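
For instance, a rough sketch of that kind of query, assuming a hypothetical
sampling table wal_samples(captured_at timestamptz, lsn pg_lsn) populated
periodically from pg_current_wal_lsn():

SELECT pg_size_pretty(max(lsn) - min(lsn)) AS wal_generated
FROM wal_samples
WHERE captured_at >= now() - interval '1 hour';

Subtracting one pg_lsn from another yields the byte distance between the
two positions.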

Regards,

--
   Fabrízio de Royes Mello Timbira - http://www.timbira.com.br/
   PostgreSQL: Consultoria, Desenvolvimento, Suporte 24x7 e Treinamento


Re: Introduce MIN/MAX aggregate functions to pg_lsn

2019-07-04 Thread Fabrízio de Royes Mello
On Thu, Jul 4, 2019 at 10:57 AM Robert Haas  wrote:
>
> On Thu, Jul 4, 2019 at 4:17 AM Michael Paquier 
wrote:
> > Do you have a particular use-case in mind for max() one?  I can think
> > of at least one case: monitoring the flush LSNs of a set of standbys
> > to find out how much has been replayed at most.
>
> It would be pretty silly to have one and not the other, regardless of
> whether we can think of an immediate use case.
>

+1


--
   Fabrízio de Royes Mello Timbira - http://www.timbira.com.br/
   PostgreSQL: Consultoria, Desenvolvimento, Suporte 24x7 e Treinamento


Re: Introduce MIN/MAX aggregate functions to pg_lsn

2019-07-02 Thread Fabrízio de Royes Mello
On Tue, Jul 2, 2019 at 7:22 AM Surafel Temesgen 
wrote:
>
> Hi,
> Here are some review comments

Thanks for your review.

> -  any numeric, string, date/time, network, or enum type,
> +  any numeric, string, date/time, network, lsn, or enum type,
>   or arrays of these types
>same as argument type
> In the documentation it is referred to as the pg_lsn type rather than lsn alone

Fixed.

> +Datum
> +pg_lsn_larger(PG_FUNCTION_ARGS)
> +{
> + XLogRecPtr lsn1 = PG_GETARG_LSN(0);
> + XLogRecPtr lsn2 = PG_GETARG_LSN(1);
> + XLogRecPtr result;
> +
> + result = ((lsn1 > lsn2) ? lsn1 : lsn2);
> +
> + PG_RETURN_LSN(result);
> +}
>
> rather than using an additional variable it's more readable and effective to
> return the argument itself, like we do in the date data type and other places
>

Fixed.

New version attached.

Regards,

--
   Fabrízio de Royes Mello Timbira - http://www.timbira.com.br/
   PostgreSQL: Consultoria, Desenvolvimento, Suporte 24x7 e Treinamento
diff --git a/doc/src/sgml/func.sgml b/doc/src/sgml/func.sgml
index 3a8581d..b7f746b 100644
--- a/doc/src/sgml/func.sgml
+++ b/doc/src/sgml/func.sgml
@@ -14804,7 +14804,7 @@ NULL baz(3 rows)

max(expression)
   
-  any numeric, string, date/time, network, or enum type,
+  any numeric, string, date/time, network, pg_lsn, or enum type,
  or arrays of these types
   same as argument type
   Yes
@@ -14822,7 +14822,7 @@ NULL baz(3 rows)

min(expression)
   
-  any numeric, string, date/time, network, or enum type,
+  any numeric, string, date/time, network, pg_lsn, or enum type,
  or arrays of these types
   same as argument type
   Yes
diff --git a/src/backend/utils/adt/pg_lsn.c b/src/backend/utils/adt/pg_lsn.c
index 4c329a8..b4c6c23 100644
--- a/src/backend/utils/adt/pg_lsn.c
+++ b/src/backend/utils/adt/pg_lsn.c
@@ -171,6 +171,24 @@ pg_lsn_ge(PG_FUNCTION_ARGS)
 	PG_RETURN_BOOL(lsn1 >= lsn2);
 }
 
+Datum
+pg_lsn_larger(PG_FUNCTION_ARGS)
+{
+	XLogRecPtr	lsn1 = PG_GETARG_LSN(0);
+	XLogRecPtr	lsn2 = PG_GETARG_LSN(1);
+
+	PG_RETURN_LSN((lsn1 > lsn2) ? lsn1 : lsn2);
+}
+
+Datum
+pg_lsn_smaller(PG_FUNCTION_ARGS)
+{
+	XLogRecPtr	lsn1 = PG_GETARG_LSN(0);
+	XLogRecPtr	lsn2 = PG_GETARG_LSN(1);
+
+	PG_RETURN_LSN((lsn1 < lsn2) ? lsn1 : lsn2);
+}
+
 /* btree index opclass support */
 Datum
 pg_lsn_cmp(PG_FUNCTION_ARGS)
diff --git a/src/include/catalog/pg_aggregate.dat b/src/include/catalog/pg_aggregate.dat
index 044695a..242d843 100644
--- a/src/include/catalog/pg_aggregate.dat
+++ b/src/include/catalog/pg_aggregate.dat
@@ -146,6 +146,9 @@
 { aggfnoid => 'max(inet)', aggtransfn => 'network_larger',
   aggcombinefn => 'network_larger', aggsortop => '>(inet,inet)',
   aggtranstype => 'inet' },
+{ aggfnoid => 'max(pg_lsn)', aggtransfn => 'pg_lsn_larger',
+  aggcombinefn => 'pg_lsn_larger', aggsortop => '>(pg_lsn,pg_lsn)',
+  aggtranstype => 'pg_lsn' },
 
 # min
 { aggfnoid => 'min(int8)', aggtransfn => 'int8smaller',
@@ -208,6 +211,9 @@
 { aggfnoid => 'min(inet)', aggtransfn => 'network_smaller',
   aggcombinefn => 'network_smaller', aggsortop => '<(inet,inet)',
   aggtranstype => 'inet' },
+{ aggfnoid => 'min(pg_lsn)', aggtransfn => 'pg_lsn_smaller',
+  aggcombinefn => 'pg_lsn_smaller', aggsortop => '<(pg_lsn,pg_lsn)',
+  aggtranstype => 'pg_lsn' },
 
 # count
 { aggfnoid => 'count(any)', aggtransfn => 'int8inc_any',
diff --git a/src/include/catalog/pg_proc.dat b/src/include/catalog/pg_proc.dat
index 8733524..aa8674c 100644
--- a/src/include/catalog/pg_proc.dat
+++ b/src/include/catalog/pg_proc.dat
@@ -6219,6 +6219,9 @@
 { oid => '3564', descr => 'maximum value of all inet input values',
   proname => 'max', prokind => 'a', proisstrict => 'f', prorettype => 'inet',
   proargtypes => 'inet', prosrc => 'aggregate_dummy' },
+{ oid => '8125', descr => 'maximum value of all pg_lsn input values',
+  proname => 'max', prokind => 'a', proisstrict => 'f', prorettype => 'pg_lsn',
+  proargtypes => 'pg_lsn', prosrc => 'aggregate_dummy' },
 
 { oid => '2131', descr => 'minimum value of all bigint input values',
   proname => 'min', prokind => 'a', proisstrict => 'f', prorettype => 'int8',
@@ -6283,6 +6286,9 @@
 { oid => '3565', descr => 'minimum value of all inet input values',
   proname => 'min', prokind => 'a', proisstrict => 'f', prorettype => 'inet',
   proargtypes => 'inet', prosrc => 'aggregate_dummy' },
+{ oid => '8126', descr => 'minimum value of all pg_lsn input values',
+  proname => 'min', prokind => 'a', proisstrict => 'f', prorettype => 'pg_lsn',
+  proargtypes => 'pg_lsn', prosrc => 'aggregate_dummy' },
 
 # count has two forms: count(any) and count(*)
 { oid => '2147',
@

Re: Add CREATE DATABASE LOCALE option

2019-06-06 Thread Fabrízio de Royes Mello
On Thu, Jun 6, 2019 at 6:38 AM Peter Eisentraut <
peter.eisentr...@2ndquadrant.com> wrote:
>
> On 2019-06-05 22:31, Fabrízio de Royes Mello wrote:
> > On Wed, Jun 5, 2019 at 5:17 PM Peter Eisentraut  wrote:
> >>
> >> I propose this patch to add a LOCALE option to CREATE DATABASE.  This
> >> sets both LC_COLLATE and LC_CTYPE with one option.  Similar behavior is
> >> already supported in initdb, CREATE COLLATION, and createdb.
> >>
> >> With collation providers other than libc, having separate lc_collate
and
> >> lc_ctype settings is not necessarily applicable, so this is also
> >> preparation for such future functionality.
> >
> > Cool... would be nice also add some test cases.
>
> Right.  Any suggestions where to put them?
>

Hmm... good question... I thought we already have some regression tests for
{CREATE|DROP} DATABASE but actually we don't... should we add a new one?

Att,

-- 
   Fabrízio de Royes Mello Timbira - http://www.timbira.com.br/
   PostgreSQL: Consultoria, Desenvolvimento, Suporte 24x7 e Treinamento


Re: Add CREATE DATABASE LOCALE option

2019-06-05 Thread Fabrízio de Royes Mello
On Wed, Jun 5, 2019 at 5:17 PM Peter Eisentraut <
peter.eisentr...@2ndquadrant.com> wrote:
>
> I propose this patch to add a LOCALE option to CREATE DATABASE.  This
> sets both LC_COLLATE and LC_CTYPE with one option.  Similar behavior is
> already supported in initdb, CREATE COLLATION, and createdb.
>
> With collation providers other than libc, having separate lc_collate and
> lc_ctype settings is not necessarily applicable, so this is also
> preparation for such future functionality.
>

Cool... it would be nice to also add some test cases.
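
For reference, as I understand the patch, it makes the following two forms
equivalent:

CREATE DATABASE mydb LOCALE 'sv_SE.utf8';
CREATE DATABASE mydb LC_COLLATE 'sv_SE.utf8' LC_CTYPE 'sv_SE.utf8';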

Regards,

--
   Fabrízio de Royes Mello Timbira - http://www.timbira.com.br/
   PostgreSQL: Consultoria, Desenvolvimento, Suporte 24x7 e Treinamento


Re: Add command column to pg_stat_progress_create_index

2019-05-27 Thread Fabrízio de Royes Mello
On Mon, May 27, 2019 at 4:51 PM Alvaro Herrera 
wrote:
>
> On 2019-May-27, Peter Eisentraut wrote:
>
> > I propose to add a column "command" to pg_stat_progress_create_index.
> > The sibling view pg_stat_progress_cluster already contains such a
> > column.  This can help distinguish which command is running and thus
> > which phases to expect.  It seems reasonable to keep these views
> > consistent, too.  (They are both new in PG12.)  Patch attached.
>
> +1.
>

+1

--
   Fabrízio de Royes Mello Timbira - http://www.timbira.com.br/
   PostgreSQL: Consultoria, Desenvolvimento, Suporte 24x7 e Treinamento


Re: Re: Refresh Publication takes hours and doesn't finish

2019-05-21 Thread Fabrízio de Royes Mello
On Tue, May 21, 2019 at 4:42 PM Tom Lane  wrote:
>
> [ redirecting to pgsql-hackers as the more relevant list ]
>
> I wrote:
> > PegoraroF10  writes:
> >> I tried sometime ago ... but with no responses, I ask you again.
> >> pg_publication_tables is a view that is used to refresh publication, but as
> >> we have 15,000 tables, it takes hours and doesn't complete. If I change that
> >> view I can have an immediate result. The question is: can I change that view?
> >> Is there some trouble changing those system views?
>
> > Hmm ... given that pg_get_publication_tables() shouldn't return any
> > duplicate OIDs, it does seem unnecessarily inefficient to put it in
> > an IN-subselect condition.  Peter, is there a reason why this isn't
> > a straight lateral join?  I get a much saner-looking plan from
>
> > FROM pg_publication P, pg_class C
> > -JOIN pg_namespace N ON (N.oid = C.relnamespace)
> > -   WHERE C.oid IN (SELECT relid FROM pg_get_publication_tables(P.pubname));
> > +JOIN pg_namespace N ON (N.oid = C.relnamespace),
> > +LATERAL pg_get_publication_tables(P.pubname)
> > +   WHERE C.oid = pg_get_publication_tables.relid;
>
> For the record, the attached seems like what to do here.  It's easy
> to show that there's a big performance gain even for normal numbers
> of tables, eg if you do
>
> CREATE PUBLICATION mypub FOR ALL TABLES;
> SELECT * FROM pg_publication_tables;
>
> in the regression database, the time for the select drops from ~360ms
> to ~6ms on my machine.  The existing view's performance will drop as
> O(N^2) the more publishable tables you have ...
>
> Given that this change impacts the regression test results, project
> rules say that it should come with a catversion bump.  Since we are
> certainly going to have a catversion bump before beta2 because of
> the pg_statistic_ext permissions business, that doesn't seem like
> a reason not to push it into v12 --- any objections?
>

I completely agree to push it into v12.
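
For readers following along, the view body after this change reads roughly
like this (the GPT alias is added here for clarity):

SELECT P.pubname, N.nspname AS schemaname, C.relname AS tablename
FROM pg_publication P,
     LATERAL pg_get_publication_tables(P.pubname) GPT,
     pg_class C JOIN pg_namespace N ON (N.oid = C.relnamespace)
WHERE C.oid = GPT.relid;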

Regards,

--
   Fabrízio de Royes Mello Timbira - http://www.timbira.com.br/
   PostgreSQL: Consultoria, Desenvolvimento, Suporte 24x7 e Treinamento


Re: Pluggable Storage - Andres's take

2019-04-08 Thread Fabrízio de Royes Mello
t postmaster.c:1703
#16 0x00afb94e in PostmasterMain (argc=3, argv=0x25ed850) at
postmaster.c:1376
#17 0x00977de8 in main (argc=3, argv=0x25ed850) at main.c:228


Isn't it better to raise an exception, as you did in other functions?

static void
toyam_relation_vacuum(Relation onerel,
  struct VacuumParams *params,
  BufferAccessStrategy bstrategy)
{
ereport(ERROR,
(errcode(ERRCODE_FEATURE_NOT_SUPPORTED),
 errmsg("function %s not implemented yet", __func__)));
}

Regards,

--
   Fabrízio de Royes Mello Timbira - http://www.timbira.com.br/
   PostgreSQL: Consultoria, Desenvolvimento, Suporte 24x7 e Treinamento


Lack of new line between IF statements

2019-03-23 Thread Fabrízio de Royes Mello
Hi all,

The attached patch is just a very minor adjustment to
src/bin/pg_checksums/pg_checksums.c to add a new line between some IF
statements.

Regards,

-- 
   Fabrízio de Royes Mello Timbira - http://www.timbira.com.br/
   PostgreSQL: Consultoria, Desenvolvimento, Suporte 24x7 e Treinamento
diff --git a/src/bin/pg_checksums/pg_checksums.c b/src/bin/pg_checksums/pg_checksums.c
index f9ab27c..61bfa45 100644
--- a/src/bin/pg_checksums/pg_checksums.c
+++ b/src/bin/pg_checksums/pg_checksums.c
@@ -435,12 +435,14 @@ main(int argc, char *argv[])
 		fprintf(stderr, _("%s: data checksums are not enabled in cluster\n"), progname);
 		exit(1);
 	}
+
 	if (ControlFile->data_checksum_version == 0 &&
 		mode == PG_MODE_DISABLE)
 	{
 		fprintf(stderr, _("%s: data checksums are already disabled in cluster.\n"), progname);
 		exit(1);
 	}
+
 	if (ControlFile->data_checksum_version > 0 &&
 		mode == PG_MODE_ENABLE)
 	{


Re: Introduce MIN/MAX aggregate functions to pg_lsn

2019-03-23 Thread Fabrízio de Royes Mello
On Fri, Mar 22, 2019 at 10:27 PM Michael Paquier 
wrote:
>
> On Fri, Mar 22, 2019 at 04:49:57PM -0300, Fabrízio de Royes Mello wrote:
> > So the attached patch aims to introduce MIN/MAX aggregate functions for pg_lsn
>
> Fine by me.  This looks helpful for monitoring.
>
> Please make sure to register it to the next commit fest:
> https://commitfest.postgresql.org/23/
> It is too late for Postgres 12 unfortunately.

Sure, added:
https://commitfest.postgresql.org/23/2070/

Regards,
-- 
   Fabrízio de Royes Mello Timbira - http://www.timbira.com.br/
   PostgreSQL: Consultoria, Desenvolvimento, Suporte 24x7 e Treinamento


Re: Error message inconsistency

2019-03-22 Thread Fabrízio de Royes Mello
On Fri, Mar 22, 2019 at 2:25 PM Simon Riggs  wrote:
>
> As noted by a PostgreSQL user to me, error messages for NOT NULL
constraints are inconsistent - they do not mention the relation name in the
message, as all other variants of this message do. e.g.
>
> postgres=# create table nn (id integer not null);
> CREATE TABLE
> postgres=# insert into nn values (NULL);
> ERROR: null value in column "id" violates not-null constraint
> DETAIL: Failing row contains (null).
>
> postgres=# create table nn2 (id integer check (id is not null));
> CREATE TABLE
> postgres=# insert into nn2 values (NULL);
> ERROR: new row for relation "nn2" violates check constraint "nn2_id_check"
> DETAIL: Failing row contains (null).
>
> I propose the attached patch as a fix, changing the wording (of the first
case) to
> ERROR: null value in column "id" for relation "nn" violates not-null
constraint
>
> It causes breakage in multiple tests, which is easy to fix once/if we
agree to change.
>

I totally agree with that change, because I have already gotten some
negative feedback from users about this message too.

Regards,

--
   Fabrízio de Royes Mello Timbira - http://www.timbira.com.br/
   PostgreSQL: Consultoria, Desenvolvimento, Suporte 24x7 e Treinamento


Introduce MIN/MAX aggregate functions to pg_lsn

2019-03-22 Thread Fabrízio de Royes Mello
Hi all,

Before we introduced the pg_lsn datatype, the LSN was expressed as a TEXT
type, so simple queries using the MIN/MAX functions worked as expected.
Queries like:

SELECT min(restart_lsn) FROM pg_replication_slots;
SELECT min(sent_lsn) FROM pg_stat_replication;

So the attached patch aims to introduce MIN/MAX aggregate functions for the
pg_lsn datatype.

Regards,

-- 
   Fabrízio de Royes Mello Timbira - http://www.timbira.com.br/
   PostgreSQL: Consultoria, Desenvolvimento, Suporte 24x7 e Treinamento
diff --git a/doc/src/sgml/func.sgml b/doc/src/sgml/func.sgml
index 1a01473..490f3a8 100644
--- a/doc/src/sgml/func.sgml
+++ b/doc/src/sgml/func.sgml
@@ -14663,7 +14663,7 @@ NULL baz(3 rows)

max(expression)
   
-  any numeric, string, date/time, network, or enum type,
+  any numeric, string, date/time, network, lsn, or enum type,
  or arrays of these types
   same as argument type
   Yes
@@ -14681,7 +14681,7 @@ NULL baz(3 rows)

min(expression)
   
-  any numeric, string, date/time, network, or enum type,
+  any numeric, string, date/time, network, lsn, or enum type,
  or arrays of these types
   same as argument type
   Yes
diff --git a/src/backend/utils/adt/pg_lsn.c b/src/backend/utils/adt/pg_lsn.c
index 7242d3c..ab393bc 100644
--- a/src/backend/utils/adt/pg_lsn.c
+++ b/src/backend/utils/adt/pg_lsn.c
@@ -155,6 +155,30 @@ pg_lsn_ge(PG_FUNCTION_ARGS)
 	PG_RETURN_BOOL(lsn1 >= lsn2);
 }
 
+Datum
+pg_lsn_larger(PG_FUNCTION_ARGS)
+{
+	XLogRecPtr	lsn1 = PG_GETARG_LSN(0);
+	XLogRecPtr	lsn2 = PG_GETARG_LSN(1);
+	XLogRecPtr	result;
+
+	result = ((lsn1 > lsn2) ? lsn1 : lsn2);
+
+	PG_RETURN_LSN(result);
+}
+
+Datum
+pg_lsn_smaller(PG_FUNCTION_ARGS)
+{
+	XLogRecPtr	lsn1 = PG_GETARG_LSN(0);
+	XLogRecPtr	lsn2 = PG_GETARG_LSN(1);
+	XLogRecPtr	result;
+
+	result = ((lsn1 < lsn2) ? lsn1 : lsn2);
+
+	PG_RETURN_LSN(result);
+}
+
 /* btree index opclass support */
 Datum
 pg_lsn_cmp(PG_FUNCTION_ARGS)
diff --git a/src/include/catalog/pg_aggregate.dat b/src/include/catalog/pg_aggregate.dat
index 044695a..242d843 100644
--- a/src/include/catalog/pg_aggregate.dat
+++ b/src/include/catalog/pg_aggregate.dat
@@ -146,6 +146,9 @@
 { aggfnoid => 'max(inet)', aggtransfn => 'network_larger',
   aggcombinefn => 'network_larger', aggsortop => '>(inet,inet)',
   aggtranstype => 'inet' },
+{ aggfnoid => 'max(pg_lsn)', aggtransfn => 'pg_lsn_larger',
+  aggcombinefn => 'pg_lsn_larger', aggsortop => '>(pg_lsn,pg_lsn)',
+  aggtranstype => 'pg_lsn' },
 
 # min
 { aggfnoid => 'min(int8)', aggtransfn => 'int8smaller',
@@ -208,6 +211,9 @@
 { aggfnoid => 'min(inet)', aggtransfn => 'network_smaller',
   aggcombinefn => 'network_smaller', aggsortop => '<(inet,inet)',
   aggtranstype => 'inet' },
+{ aggfnoid => 'min(pg_lsn)', aggtransfn => 'pg_lsn_smaller',
+  aggcombinefn => 'pg_lsn_smaller', aggsortop => '<(pg_lsn,pg_lsn)',
+  aggtranstype => 'pg_lsn' },
 
 # count
 { aggfnoid => 'count(any)', aggtransfn => 'int8inc_any',
diff --git a/src/include/catalog/pg_proc.dat b/src/include/catalog/pg_proc.dat
index acf1131..cfc9b86 100644
--- a/src/include/catalog/pg_proc.dat
+++ b/src/include/catalog/pg_proc.dat
@@ -6189,6 +6189,9 @@
 { oid => '3564', descr => 'maximum value of all inet input values',
   proname => 'max', prokind => 'a', proisstrict => 'f', prorettype => 'inet',
   proargtypes => 'inet', prosrc => 'aggregate_dummy' },
+{ oid => '8125', descr => 'maximum value of all pg_lsn input values',
+  proname => 'max', prokind => 'a', proisstrict => 'f', prorettype => 'pg_lsn',
+  proargtypes => 'pg_lsn', prosrc => 'aggregate_dummy' },
 
 { oid => '2131', descr => 'minimum value of all bigint input values',
   proname => 'min', prokind => 'a', proisstrict => 'f', prorettype => 'int8',
@@ -6253,6 +6256,9 @@
 { oid => '3565', descr => 'minimum value of all inet input values',
   proname => 'min', prokind => 'a', proisstrict => 'f', prorettype => 'inet',
   proargtypes => 'inet', prosrc => 'aggregate_dummy' },
+{ oid => '8126', descr => 'minimum value of all pg_lsn input values',
+  proname => 'min', prokind => 'a', proisstrict => 'f', prorettype => 'pg_lsn',
+  proargtypes => 'pg_lsn', prosrc => 'aggregate_dummy' },
 
 # count has two forms: count(any) and count(*)
 { oid => '2147',
@@ -8355,6 +8361,12 @@
 { oid => '3413', descr => 'hash',
   proname => 'pg_lsn_hash_extended', prorettype => 'int8',
   proargtypes => 'pg_lsn int8', prosrc => 'pg_lsn_hash_extended' },
+{ oid => '8123', descr => 'larger of two',
+  proname => 'pg_lsn_larger', prorettype => 'pg_lsn',
+  proargtypes => 'pg_lsn pg_lsn', prosrc => 'pg_lsn_larger' },
+{ oid => '8124', descr => 'smaller of two',
+  proname => 'pg_lsn_smaller', prorettype

Re: pg_upgrade: Pass -j down to vacuumdb

2019-01-28 Thread Fabrízio de Royes Mello
On Mon, Jan 28, 2019 at 1:15 PM Jesper Pedersen 
wrote:
>
> Done.
>
> Attached is v3.
>

IMHO you should use the long option name '--jobs' like the others.

Regards,

--
   Fabrízio de Royes Mello Timbira - http://www.timbira.com.br/
   PostgreSQL: Consultoria, Desenvolvimento, Suporte 24x7 e Treinamento


Re: Custom text type for title text

2019-01-03 Thread Fabrízio de Royes Mello
On Thu, Jan 3, 2019, at 8:53 PM, Daniel Heath  wrote:

> Would this also be appropriate for inclusion as contrib? I'm unfamiliar
> with the policy for what is / is not included there.
>
>
Please do not top post.

At first I recommend you implement it as an extension (hosted on GitLab,
GitHub, Bitbucket, or somewhere else), and after you have stable working
code maybe you can try to submit it as a contrib module; the community will
then decide whether to accept it or not.

PostgreSQL is extensible enough for you to provide this piece of work
without depending on the community's decisions. What I mean is that you
don't necessarily need to submit it as a contrib module; you can just
maintain it as a separate extension project.

Regards,



-- 
   Fabrízio de Royes Mello Timbira - http://www.timbira.com.br/
   PostgreSQL: Consultoria, Desenvolvimento, Suporte 24x7 e Treinamento


Re: Custom text type for title text

2019-01-03 Thread Fabrízio de Royes Mello
On Thu, Jan 3, 2019, at 8:22 PM, Daniel Heath  wrote:

> Hi All,
>
> I've frequently seen an issue in applications which store titles (eg of
> books, events, user profiles) where duplicate values are not properly
> vetted.
>
> The 'citext' type is helpful here, but I'd be keen to go further.
>
> I propose a 'titletext' type, which has the following properties when
> compared for equality:
>  * Case insensitivity (like 'citext')
>  * Only considers characters in [:alnum:] (that is, ignores spaces,
> punctuation, etc)
>
> This would be useful for a range of situations where it's important to
> avoid entering duplicate values.
>
> Given the discussion at
> https://www.postgresql.org/message-id/CAKFQuwY9u14TqG8Yzj%3DfAB0tydvvtK7ibgFEx3tegbPWsGjJpg%40mail.gmail.com
> I'd
> lean towards making this type not automatically coerce to text (to avoid
> surprising behaviour when comparing text to titletext).
>
> Is a suitable patch likely to be accepted?
>

You don't need to touch core to do that. Just implement it as an extension
and share it through some channel like pgxn.org.

Note that citext is also an extension, released as a contrib module.
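
That said, much of the proposed behavior can be prototyped today without a
new type, e.g. by enforcing uniqueness over a normalized expression (table
and index names hypothetical):

CREATE UNIQUE INDEX books_title_norm_key
ON books (regexp_replace(lower(title), '[^[:alnum:]]+', '', 'g'));

Lookups just need to apply the same normalization to the probe value so
the index can be used.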

Regards,

-- 
   Fabrízio de Royes Mello Timbira - http://www.timbira.com.br/
   PostgreSQL: Consultoria, Desenvolvimento, Suporte 24x7 e Treinamento


Re: row filtering for logical replication

2018-11-23 Thread Fabrízio de Royes Mello
On Fri, Nov 23, 2018 at 4:13 PM Petr Jelinek 
wrote:
>
> >
> > If carefully documented I see no problem with it... we already have an
> > analogous problem with functional indexes.
>
> The difference is that with functional indexes you can recreate the
> missing object and everything is okay again. With logical replication
> recreating the object will not help.
>

In this case, with logical replication, you should re-sync the object. That
is the price of misunderstanding / bad use of the new feature.

As usual, there is no free beer ;-)

Regards,

--
   Fabrízio de Royes Mello Timbira - http://www.timbira.com.br/
   PostgreSQL: Consultoria, Desenvolvimento, Suporte 24x7 e Treinamento


Re: row filtering for logical replication

2018-11-23 Thread Fabrízio de Royes Mello
On Fri, Nov 23, 2018 at 3:55 PM Petr Jelinek 
wrote:
>
> On 23/11/2018 17:15, Euler Taveira wrote:
> > Em qui, 22 de nov de 2018 às 20:03, Petr Jelinek
> >  escreveu:
> >> Firstly, I am not sure if it's wise to allow UDFs in the filter clause
> >> for the table. The reason for that is that we can't record all
necessary
> >> dependencies there because the functions are black box for parser. That
> >> means if somebody drops object that an UDF used in replication filter
> >> depends on, that function will start failing. But unlike for user
> >> sessions it will start failing during decoding (well processing in
> >> output plugin). And that's not recoverable by reading the missing
> >> object, the only way to get out of that is either to move slot forward
> >> which means losing part of replication stream and need for manual
resync
> >> or full rebuild of replication. Neither of which are good IMHO.
> >>
> > It is a foot gun but there are several ways to do bad things in
> > postgres. CREATE PUBLICATION is restricted to superusers and role with
> > CREATE privilege in current database. AFAICS a role with CREATE
> > privilege cannot drop objects whose owner is not himself. I wouldn't
> > like to disallow UDFs in row filtering expressions just because
> > someone doesn't set permissions correctly. Do you have any other case
> > in mind?
>
> I don't think this has anything to do with security. Stupid example:
>
> user1: CREATE EXTENSION citext;
>
> user2: CREATE FUNCTION myfilter(col1 text, col2 text) returns boolean
> language plpgsql as
> $$BEGIN
> RETURN col1::citext = col2::citext;
> END;$$
>
> user2: ALTER PUBLICATION mypub ADD TABLE mytab WHERE (myfilter(a,b));
>
> [... replication happening ...]
>
> user1: DROP EXTENSION citext;
>
> And now replication is broken and unrecoverable without data loss.
> Recreating extension will not help because the changes happening in
> meantime will not see it in the historical snapshot.
>
> I don't think it's okay to do completely nothing about this.
>

If carefully documented I see no problem with it... we already have an
analogous problem with functional indexes.

Regards,

--
   Fabrízio de Royes Mello Timbira - http://www.timbira.com.br/
   PostgreSQL: Consultoria, Desenvolvimento, Suporte 24x7 e Treinamento


Function like "pg_trigger_depth" for Event Triggers

2018-10-30 Thread Fabrízio de Royes Mello
Hi all,

Is there some reason we don't have a similar function to report how many
levels deep we are inside an event trigger, like we have with
"pg_trigger_depth"?

Maybe one called "pg_event_trigger_depth"?
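
For context, the existing function is typically used as a recursion guard
in regular triggers, e.g. (hypothetical trigger function):

CREATE OR REPLACE FUNCTION touch_updated_at() RETURNS trigger AS $$
BEGIN
  IF pg_trigger_depth() > 1 THEN
    RETURN NEW;  -- fired from inside another trigger: skip
  END IF;
  NEW.updated_at := now();
  RETURN NEW;
END;
$$ LANGUAGE plpgsql;

A "pg_event_trigger_depth" would allow the same kind of guard inside event
trigger functions.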

Regards,

-- 
   Fabrízio de Royes Mello Timbira - http://www.timbira.com.br/
   PostgreSQL: Consultoria, Desenvolvimento, Suporte 24x7 e Treinamento


Re: Alter index rename concurrently to

2018-10-25 Thread Fabrízio de Royes Mello
On Thu, Oct 25, 2018 at 4:41 AM Peter Eisentraut <
peter.eisentr...@2ndquadrant.com> wrote:
>
> On 17/10/2018 23:11, Peter Eisentraut wrote:
> > On 13/10/2018 04:01, Andres Freund wrote:
> >> I don't see how this could be argued. It has to be a self-conflicting
> >> lockmode, otherwise you'd end up doing renames of tables where you
> >> cannot see the previous state. And you'd get weird errors about
updating
> >> invisible rows etc.
> >
> >> I don't buy this description. Imo it's a fundamental correctness
> >> thing. Without it concurrent DDL would potentially overwrite the
rename,
> >> because they could start updating while still seeing the old version.
> >
> > OK, I can refine those descriptions/comments.  Do you have any concerns
> > about the underlying principle of this patch?
>
> Patch with updated comments to reflect your input.
>

Great... this patch will help a lot, so I took the liberty of performing a
review:

- the doc and code (simple) look good
- the patch applies cleanly against current master
- all tests (check and check-world) pass without any issue

I also performed some tests using DDL and DML in various sessions (see the
sketch below):

- renaming indexes / DML against the same table
- creating and renaming indexes / altering tables in another session
- renaming indexes / dropping indexes

I didn't find any issues, so the patch looks to be in very good shape.
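
One of the scenarios above, roughly (table and index names are just
illustrative):

-- session 1
CREATE TABLE mytab (a int);
CREATE INDEX mytab_a_idx ON mytab (a);
ALTER INDEX mytab_a_idx RENAME TO mytab_a_renamed;

-- session 2, concurrently with the rename
INSERT INTO mytab VALUES (1);      -- no longer blocked by the rename
SELECT a FROM mytab WHERE a = 1;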

Regards,

--
   Fabrízio de Royes Mello Timbira - http://www.timbira.com.br/
   PostgreSQL: Consultoria, Desenvolvimento, Suporte 24x7 e Treinamento


Re: Prevent concurrent DROP SCHEMA when certain objects are being initially created in the namespace

2018-09-06 Thread Fabrízio de Royes Mello
On Thu, Sep 6, 2018 at 4:22 PM Michael Paquier  wrote:
>
> On Wed, Sep 05, 2018 at 09:23:37AM -0700, Andres Freund wrote:
> > Well, we kinda do, during some of their own DDL. CF
> > AcquireDeletionLock(), RangeVarGetAndCheckCreationNamespace(), and other
> > LockDatabaseObject() callers.  The
> > RangeVarGetAndCheckCreationNamespace() even locks the schema an object
> > is created in, which is pretty much what we're discussing here.
> >
> > I think the problem with the current logic is more that the
> > findDependentObjects() scan doesn't use a "dirty" scan, so it doesn't
> > ever get to seeing conflicting operations.
>
> Well, it seems to me that we don't necessarily want to go down to that
> for sure on back-branches.  What's actually striking me as a very bad
> thing is the inconsistent behavior when we need to get a namespace OID
> after calling QualifiedNameGetCreationNamespace using a list of defnames
> which does not lock the schema the DDL is working on.  And this happens
> for basically all the object types Jimmy has mentioned.
>
> For example, when creating a composite type, the namespace lock is taken
> through RangeVarGetAndCheckCreationNamespace.  If a user tries to create
> a root type, then we simply don't lock it, which leads to an
> inconsistency of behavior between composite types and root types.  In my
> opinion the inconsistency of behavior for the same DDL is not logical.
>
> So I have been looking at that, and finished with the attached, which
> actually fixes the set of problems reported.  Thoughts?

Hi,

I applied it to current master, ran "make check-world", and everything is
OK...

I also ran some tests similar to the ones Jimmy pointed out, using
PL/pgSQL to execute the DDL, and the new consistent behavior is fine. I
also ran one session issuing DROP SCHEMA at the end; after its COMMIT,
session 2 reported 'ERROR:  schema "testschema" does not exist'. So, lock
overhead concerns aside, the proposed patch seems OK.
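
The two-session test, roughly (schema and object names are just
illustrative):

-- session 1
CREATE SCHEMA testschema;
BEGIN;
DROP SCHEMA testschema;            -- lock held until commit

-- session 2: with the patch, this now blocks on the schema lock
CREATE TYPE testschema.mytype AS (a int, b text);

-- session 1
COMMIT;

-- session 2 then fails cleanly with:
-- ERROR:  schema "testschema" does not exist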

Regards,

--
   Fabrízio de Royes Mello Timbira - http://www.timbira.com.br/
   PostgreSQL: Consultoria, Desenvolvimento, Suporte 24x7 e Treinamento


Re: [HACKERS] Proposal to add work_mem option to postgres_fdw module

2018-08-27 Thread Fabrízio de Royes Mello
On Mon, Aug 27, 2018 at 3:35 PM Robert Haas  wrote:
>
> On Mon, Aug 27, 2018 at 1:29 AM Michael Paquier 
wrote:
> > It seems to me that you would pass down just a string which gets
> > allocated for "options", and injection risks are something to be careful
> > about.  Another possibility would be an array with comma-separated
> > arguments, say:
> > options = 'option1=foo,option2=bar'
> > There is already some work done with comma-separated arguments for the
> > parameter "extensions", now that's more work.
>
> I like the direction of your thinking, but it seems to me that this
> would cause a problem if you want to set search_path=foo,bar.
>

Maybe we can use multiple "option" entries. Something like:

... OPTIONS ( host 'remhost1', port '5433', dbname 'demodb',
option 'option1=foo', option 'option2=bar' );
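
That would also sidestep the search_path case Robert mentioned, since each
value stays self-contained. A sketch, assuming the FDW accepted a repeated
"option" keyword:

CREATE SERVER remsrv FOREIGN DATA WRAPPER postgres_fdw
  OPTIONS ( host 'remhost1', port '5433', dbname 'demodb',
            option 'work_mem=256MB',
            option 'search_path=foo,bar' );  -- comma safe inside one value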

Regards,

--
   Fabrízio de Royes Mello Timbira - http://www.timbira.com.br/
   PostgreSQL: Consultoria, Desenvolvimento, Suporte 24x7 e Treinamento


Re: psql \df option for procedures

2018-07-02 Thread Fabrízio de Royes Mello
On Mon, Jul 2, 2018 at 7:07 AM, Peter Eisentraut <
peter.eisentr...@2ndquadrant.com> wrote:
>
> psql's \df command currently has options a/n/t/w to show
> aggregates/normal/trigger/window functions.  Do we want to add something
> for procedures?
>

+1. I can write a patch to save you time, if you haven't done it yet.
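
Assuming it simply extends the existing letter set, usage might look like:

\df[anptw]   -- "p" added alongside a/n/t/w
\dfp         -- list only procedures
\dfnp        -- normal functions plus procedures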

Regards,

--
   Fabrízio de Royes Mello Timbira - http://www.timbira.com.br/
   PostgreSQL: Consultoria, Desenvolvimento, Suporte 24x7 e Treinamento


Re: New committers announced at PGCon 2018

2018-06-01 Thread Fabrízio de Royes Mello
On Fri, Jun 1, 2018 at 6:05 PM, Tom Lane  wrote:
>
> The core team is pleased to announce the appointment of seven
> new Postgres committers:
>
> Etsuro Fujita
> Peter Geoghegan
> Amit Kapila
> Alexander Korotkov
> Thomas Munro
> Michael Paquier
> Tomas Vondra
>

Great news!! Congrats to everyone!!

Regards,

-- 
   Fabrízio de Royes Mello Timbira - http://www.timbira.com.br/
   PostgreSQL: Consultoria, Desenvolvimento, Suporte 24x7 e Treinamento

