Re: Persist MVCC forever - retain history

2020-07-04 Thread Mitar
Hi!

On Fri, Jul 3, 2020 at 12:29 AM Konstantin Knizhnik
 wrote:
> Did you read this thread:
> https://www.postgresql.org/message-id/flat/78aadf6b-86d4-21b9-9c2a-51f1efb8a499%40postgrespro.ru
> I have proposed a patch for supporting time travel (AS OF) queries.
> But I didn't feel much interest in it from the community.

Oh, you went much further than me in this thinking. Awesome!

I am surprised that you say you didn't feel much interest. My
reading of the thread is the opposite: there was quite some
interest, but there are technical challenges to overcome. So did you
give up on that work?


Mitar

-- 
http://mitar.tnode.com/
https://twitter.com/mitar_m




Re: Built-in connection pooler

2020-07-04 Thread Jaime Casanova
On Wed, 7 Aug 2019 at 02:49, Konstantin Knizhnik
 wrote:
>
> Hi, Li
>
> Thank you very much for reporting the problem.
>
> On 07.08.2019 7:21, Li Japin wrote:
> > I inspect the code, and find the following code in DefineRelation function:
> >
> > if (stmt->relation->relpersistence != RELPERSISTENCE_TEMP
> >   && stmt->oncommit != ONCOMMIT_DROP)
> >   MyProc->is_tainted = true;
> >
> > For temporary tables, MyProc->is_tainted should be true, so I changed it as
> > follows:
> >
> > if (stmt->relation->relpersistence == RELPERSISTENCE_TEMP
> >   || stmt->oncommit == ONCOMMIT_DROP)
> >   MyProc->is_tainted = true;
> >
> > For temporary tables, it works. I'm not sure the change is right.
> Sorry, it really is a bug.
> My intention was to mark the backend as tainted if it creates a temporary
> table without an ON COMMIT DROP clause (in the latter case the temporary
> table is local to the transaction and so causes no problems for the pooler).
> The right condition is:
>
>  if (stmt->relation->relpersistence == RELPERSISTENCE_TEMP
>  && stmt->oncommit != ONCOMMIT_DROP)
>  MyProc->is_tainted = true;
>
>

Shouldn't you also allow cursors without the WITH HOLD option, or is
there something I'm missing?

A few questions about tainted backends:
- why does the use of check_primary_key() and check_foreign_key() in
contrib/spi/refint.c make the backend tainted?
- the comment in src/backend/commands/sequence.c needs some fixes; it
seems to have been typed in a hurry

A usability problem:
- I compiled this on a Debian machine with "--enable-debug
--enable-cassert --with-pgport=54313", so nothing fancy
- then make, make install, and initdb: so far so good

configuration:
listen_addresses = '*'
connection_proxies = 20

and I got this:

"""
jcasanov@DangerBox:/opt/var/pgdg/14dev$ /opt/var/pgdg/14dev/bin/psql
-h 127.0.0.1 -p 6543 postgres
psql: error: could not connect to server: could not connect to the
server: No such file or directory
Is the server running locally and accepting
connections on the Unix-domain socket "/var/run/postgresql/.s.PGSQL.54313"?
"""

but connect at the postgres port works well
"""
jcasanov@DangerBox:/opt/var/pgdg/14dev$ /opt/var/pgdg/14dev/bin/psql
-h 127.0.0.1 -p 54313 postgres
psql (14devel)
Type "help" for help.

postgres=# \q
jcasanov@DangerBox:/opt/var/pgdg/14dev$ /opt/var/pgdg/14dev/bin/psql
-p 54313 postgres
psql (14devel)
Type "help" for help.

postgres=#
"""

PS: unix_socket_directories is set to /tmp, and because I'm not
doing this as the postgres user I can't use /var/run/postgresql

-- 
Jaime Casanova  www.2ndQuadrant.com
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services




Re: Include access method in listTables output

2020-07-04 Thread vignesh C
On Tue, Jun 30, 2020 at 2:53 PM Georgios  wrote:
>
>
> As promised, I gladly amend upon your request. I also included a fix for
> a minor oversight in the tests; they should now be stable. Finally, in this
> version I extended the logic a bit to only include the access method column
> if the relations displayed can have one, for example sequences.
>

Patch applies cleanly, make check & make check-world passes.
One comment:
+   if (pset.sversion >= 12 && !pset.hide_tableam &&
+       (showTables || showViews || showMatViews || showIndexes))
+       appendPQExpBuffer(&buf,
+                         ",\n  am.amname as \"%s\"",
+                         gettext_noop("Access Method"));

I'm not sure if we should include showViews; I had seen that the
access method was not getting selected for views.

Regards,
Vignesh
EnterpriseDB: http://www.enterprisedb.com




Re: Cleanup - Removed unused function parameter in reorder buffer & parallel vacuum

2020-07-04 Thread vignesh C
On Sat, Jul 4, 2020 at 12:32 PM Amit Kapila  wrote:
>
> On Fri, Jul 3, 2020 at 5:18 PM Masahiko Sawada
>  wrote:
> >
> > On Fri, 3 Jul 2020 at 17:07, vignesh C  wrote:
> > >
> > > Hi,
> > >
> > > While checking through the code I found that some of the function
> > > parameters in reorderbuffer & vacuumlazy are not used. I felt these
> > > could be removed; I'm not sure if they are kept for future use.
> > > The attached patch contains the changes for the same.
> > > Thoughts?
> > >
> >
> > For the part of parallel vacuum change, it looks good to me.
> >
>
> Unlike the ReorderBuffer one, this change looks fine to me as well.  It
> is a quite recent (PG13) change, so it would be good to remove the
> parameter now.  I will push this part of the change unless I hear any
> objection in a day or so.

Thanks all for your comments; the attached patch has the changes,
excluding the changes made in reorderbuffer.

Regards,
Vignesh
EnterpriseDB: http://www.enterprisedb.com
From 70a626eb7a384c2357134142611d511635e5c369 Mon Sep 17 00:00:00 2001
From: Vignesh C 
Date: Sun, 5 Jul 2020 06:11:02 +0530
Subject: [PATCH] Removed unused function parameter in end_parallel_vacuum.

The Irel parameter of end_parallel_vacuum() is unused, so it can be
removed safely.
---
 src/backend/access/heap/vacuumlazy.c | 8 
 1 file changed, 4 insertions(+), 4 deletions(-)

diff --git a/src/backend/access/heap/vacuumlazy.c b/src/backend/access/heap/vacuumlazy.c
index 68effca..1bbc459 100644
--- a/src/backend/access/heap/vacuumlazy.c
+++ b/src/backend/access/heap/vacuumlazy.c
@@ -390,7 +390,7 @@ static void update_index_statistics(Relation *Irel, IndexBulkDeleteResult **stat
 static LVParallelState *begin_parallel_vacuum(Oid relid, Relation *Irel,
 			  LVRelStats *vacrelstats, BlockNumber nblocks,
 			  int nindexes, int nrequested);
-static void end_parallel_vacuum(Relation *Irel, IndexBulkDeleteResult **stats,
+static void end_parallel_vacuum(IndexBulkDeleteResult **stats,
 LVParallelState *lps, int nindexes);
 static LVSharedIndStats *get_indstats(LVShared *lvshared, int n);
 static bool skip_parallel_vacuum_index(Relation indrel, LVShared *lvshared);
@@ -1712,7 +1712,7 @@ lazy_scan_heap(Relation onerel, VacuumParams *params, LVRelStats *vacrelstats,
 	 * during parallel mode.
 	 */
 	if (ParallelVacuumIsActive(lps))
-		end_parallel_vacuum(Irel, indstats, lps, nindexes);
+		end_parallel_vacuum(indstats, lps, nindexes);
 
 	/* Update index statistics */
 	update_index_statistics(Irel, indstats, nindexes);
@@ -3361,8 +3361,8 @@ begin_parallel_vacuum(Oid relid, Relation *Irel, LVRelStats *vacrelstats,
  * context, but that won't be safe (see ExitParallelMode).
  */
 static void
-end_parallel_vacuum(Relation *Irel, IndexBulkDeleteResult **stats,
-	LVParallelState *lps, int nindexes)
+end_parallel_vacuum(IndexBulkDeleteResult **stats, LVParallelState *lps,
+	int nindexes)
 {
 	int			i;
 
-- 
1.8.3.1



Re: Default setting for enable_hashagg_disk (hash_mem)

2020-07-04 Thread Jeff Davis
On Sat, 2020-07-04 at 14:49 +0530, Amit Kapila wrote:
> > We don't even have a user report yet of a
> > regression compared to PG 12, or one that can't be fixed by
> > increasing
> > work_mem.
> > 
> 
> Yeah, this is exactly the same point I have raised above.  I feel we
> should wait before designing any solution to match pre-13 behavior
> for
> hashaggs to see what percentage of users face problems related to
> this
> and how much is a problem for them to increase work_mem to avoid
> regression.

I agree that it's good to wait for actual problems. But the challenge
is that we can't backport an added GUC. Are there other, backportable
changes we could potentially make if a lot of users have a problem with
v13 after release? Or will any users who experience a problem need to
wait for v14?

I'm OK not having a GUC, but we need consensus around what our response
will be if a user experiences a regression. If our only answer is
"tweak X, Y, and Z; and if that doesn't work, wait for v14" then I'd
like almost everyone to be on board with that. If we have some
backportable potential solutions, that gives us a little more
confidence that we can still get that user onto v13 (even if they have
to wait for a point release).

Without some backportable potential solutions, I'm inclined to ship
with either one or two escape-hatch GUCs, with warnings that they
should be used as a last resort. Hopefully users will complain on the
lists (so we understand the problem) before setting them.

It's not very principled, and we may be stuck with some cruft, but it
mitigates the risk a lot. There's a good chance we can remove them
later, especially if it's part of a larger overhaul of
work_mem/hash_mem (which might happen fairly soon, given the interest
in this thread), or if we change something about HashAgg that makes the
GUCs harder to maintain.

Regards,
Jeff Davis






Re: Postgres Windows build system doesn't work with python installed in Program Files

2020-07-04 Thread Juan José Santamaría Flecha
On Tue, Jun 2, 2020 at 2:06 AM David Zhang  wrote:

>
> On 2020-05-06 10:45 p.m., Michael Paquier wrote:
> > On Wed, May 06, 2020 at 12:17:03AM +0200, Juan José Santamaría Flecha
> wrote:
> >> Please forgive me if I am being too nitpicky, but I find the comments a
> >> little too verbose; a usage format might be more visual and easier to
> >> explain:
> >>
> >> Usage: build [[CONFIGURATION] COMPONENT]
> >>
> >> The options are case-insensitive.
> >> CONFIGURATION sets the configuration to build, "debug" or "release" (by
> >> default).
> >> COMPONENT defines a component to build. An empty option means all
> >> components.
> > Your comment makes sense to me.  What about the attached then?  On top
> > of documenting the script usage in the code, let's trigger it if it
> > gets called with more than 3 arguments.  What do you think?
> >
> > FWIW, I forgot to mention that I don't think those warnings are worth
> > a backpatch.  No objections to improving things on HEAD, of course.
>
> It would be a bonus if build.pl could support "help" in the Windows
> way.
>

Going through the open items in the commitfest, I see that this patch has
not been pushed. It still applies and solves the warning, so I am marking
it as RFC.

Adding a help option is a new feature that can have its own patch, without
delaying this one any further.

Regards,

Juan José Santamaría Flecha


Re: pg_read_file() with virtual files returns empty string

2020-07-04 Thread Joe Conway
On 7/4/20 1:10 PM, Joe Conway wrote:
> On 7/4/20 12:52 PM, Tom Lane wrote:
>> Justin Pryzby  writes:
>>> But I noticed that cfbot is now populating with failures like:
>> 
>>> genfile.c: In function ‘read_binary_file’:
>>> genfile.c:192:5: error: ignoring return value of ‘fread’, declared with 
>>> attribute warn_unused_result [-Werror=unused-result]
>>>  fread(rbuf, 1, 1, file);
>>>  ^
>> 
>> Yeah, some of the pickier buildfarm members (eg spurfowl) are showing
>> that as a warning, too.  Maybe make it like
>> 
>> if (fread(rbuf, 1, 1, file) != 0 || !feof(file))
>> ereport(ERROR,
>> 
>> Probably the feof test is redundant this way, but I'd be inclined to
>> leave it in anyhow.
> 
> Ok, will fix. Thanks for the heads up.

And pushed -- thanks!

Joe
-- 
Crunchy Data - http://crunchydata.com
PostgreSQL Support for Secure Enterprises
Consulting, Training, & Open Source Development





Re: pg_read_file() with virtual files returns empty string

2020-07-04 Thread Joe Conway
On 7/4/20 12:52 PM, Tom Lane wrote:
> Justin Pryzby  writes:
>> But I noticed that cfbot is now populating with failures like:
> 
>> genfile.c: In function ‘read_binary_file’:
>> genfile.c:192:5: error: ignoring return value of ‘fread’, declared with 
>> attribute warn_unused_result [-Werror=unused-result]
>>  fread(rbuf, 1, 1, file);
>>  ^
> 
> Yeah, some of the pickier buildfarm members (eg spurfowl) are showing
> that as a warning, too.  Maybe make it like
> 
> if (fread(rbuf, 1, 1, file) != 0 || !feof(file))
> ereport(ERROR,
> 
> Probably the feof test is redundant this way, but I'd be inclined to
> leave it in anyhow.

Ok, will fix. Thanks for the heads up.

Joe

-- 
Crunchy Data - http://crunchydata.com
PostgreSQL Support for Secure Enterprises
Consulting, Training, & Open Source Development





Re: pg_read_file() with virtual files returns empty string

2020-07-04 Thread Tom Lane
Justin Pryzby  writes:
> But I noticed that cfbot is now populating with failures like:

> genfile.c: In function ‘read_binary_file’:
> genfile.c:192:5: error: ignoring return value of ‘fread’, declared with 
> attribute warn_unused_result [-Werror=unused-result]
>  fread(rbuf, 1, 1, file);
>  ^

Yeah, some of the pickier buildfarm members (eg spurfowl) are showing
that as a warning, too.  Maybe make it like

if (fread(rbuf, 1, 1, file) != 0 || !feof(file))
ereport(ERROR,

Probably the feof test is redundant this way, but I'd be inclined to
leave it in anyhow.

regards, tom lane




Re: pg_read_file() with virtual files returns empty string

2020-07-04 Thread Justin Pryzby
Hi Joe

Thanks for addressing this.

But I noticed that cfbot is now populating with failures like:

https://travis-ci.org/github/postgresql-cfbot/postgresql/builds/704898559
genfile.c: In function ‘read_binary_file’:
genfile.c:192:5: error: ignoring return value of ‘fread’, declared with 
attribute warn_unused_result [-Werror=unused-result]
 fread(rbuf, 1, 1, file);
 ^
cc1: all warnings being treated as errors
: recipe for target 'genfile.o' failed

-- 
Justin




Re: POC and rebased patch for CSN based snapshots

2020-07-04 Thread movead...@highgo.ca
Hello Andrey

>> I have researched your patch, which is great. In the patch, only data
>> older than 'global_snapshot_defer_time' can be vacuumed, and it keeps
>> dead tuples even if no snapshot is imported at all, right?
>> 
>> I am thinking about a way to start retaining dead tuples only just
>> before we import a CSN snapshot.
>> 
>> Based on the Clock-SI paper, we should get the local CSN and then send
>> it to the shard nodes. Because we do not know whether the shard nodes'
>> CSNs are bigger or smaller than the master node's, we should keep some
>> dead tuples all the time to support snapshot import at any time.
>> 
>> But if we make a small change to the Clock-SI model, we do not use the
>> local CSN when the transaction starts; instead, we contact every shard
>> node to request its CSN (and the shard nodes start keeping dead tuples),
>> and the master node chooses the biggest CSN to send to the shard nodes.
>> 
>> With the new way, we do not need to keep dead tuples all the time and do
>> not need to manage a ring buffer; we can hand the ball to the 'snapshot
>> too old' feature. But as a trade-off, almost all shard nodes need to wait.
>> I will send a more detailed explanation in a few days.
>I think, in the case of a distributed system with many servers, this can
>be a bottleneck.
>The main idea of the "deferred time" is to reduce interference between DML
>queries in the case of an intensive OLTP workload. This time can be reduced
>if the bloating of the database prevails over the frequency of
>transaction aborts.
OK, there may be a performance issue. And I have another question about Clock-SI.

For example, we have three nodes, shard1 (as master), shard2, and shard3, where
(time of node1) > (time of node2) > (time of node3); you can see a picture:
http://movead.gitee.io/picture/blog_img_bad/csn/clock_si_question.png
As far as I understand Clock-SI, the part left of the blue line will be set up
as a snapshot if the master requests a snapshot at time t1. But in fact data A
should be in the snapshot but is not, and data B should be out of the snapshot
but is not.
Can this scenario appear in your original patch? Or is something in my
understanding of Clock-SI wrong?



Re: warnings for invalid function casts

2020-07-04 Thread Tom Lane
Peter Eisentraut  writes:
> Do people prefer a typedef or just writing it out, like it's done in the 
> Python code?

I'm for a typedef.  There is *nothing* readable about "(void (*) (void))",
and the fact that it's theoretically incorrect for the purpose doesn't
exactly aid intelligibility either.  With a typedef, not only are
the uses more readable but there's a place to put a comment explaining
that this is notionally wrong but it's what gcc specifies to use
to suppress thus-and-such warnings.

> But if we prefer a typedef then I'd propose 
> GenericFuncPtr like in the initial patch.

That name is OK by me.

regards, tom lane




Re: pg_read_file() with virtual files returns empty string

2020-07-04 Thread Joe Conway
On 7/2/20 6:29 PM, Tom Lane wrote:
> Joe Conway  writes:
>> On 7/2/20 5:37 PM, Tom Lane wrote:
>>> I still can't get excited about contorting the code to remove that
>>> issue.
> 
>> It doesn't seem much worse than the oom test that was there before -- see 
>> attached.
> 
> Personally I would not bother, but it's your patch.

Thanks, committed that way, ...

>> Are we in agreement that whatever gets pushed should be backpatched through 
>> pg11
>> (see start of thread)?
> 
> OK by me.

... and backpatched to v11.

I changed the new error message to "file length too large" instead of "requested
length too large" since that seems more descriptive of what is actually
happening there. I also changed the corresponding error code to match the one
enlargeStringInfo() would have used because I thought it was more apropos.

Thanks for all the help with this!

Joe

-- 
Crunchy Data - http://crunchydata.com
PostgreSQL Support for Secure Enterprises
Consulting, Training, & Open Source Development





Re: POC: rational number type (fractions)

2020-07-04 Thread Peter Eisentraut

On 2020-07-03 18:18, Tom Lane wrote:
> Andrew Dunstan  writes:
>> On 7/2/20 10:01 AM, Tom Lane wrote:
>>> Yeah.  We *must not* simply give up on extensibility and decide that
>>> every interesting feature has to be in core.  I don't have any great
>>> ideas about how we grow the wider Postgres development community and
>>> infrastructure, but that certainly isn't the path to doing so.
>>
>> I've been thinking about this a bit. Right now there isn't anything
>> outside of core that seems to work well. PGXN was supposed to be our
>> CPAN equivalent, but it doesn't seem to have worked out that way, it
>> never really got the traction.
>
> Yeah.  Can we analyze why it hasn't done better?  Can we improve it
> rather than starting something completely new?

This should probably all be in a different thread.  But yeah, I think a
bit more analysis of the problem is needed before jumping into action.


--
Peter Eisentraut  http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services




Re: Resetting spilled txn statistics in pg_stat_replication

2020-07-04 Thread Ajin Cherian
On Thu, Jul 2, 2020 at 1:31 PM Masahiko Sawada <
masahiko.saw...@2ndquadrant.com> wrote:

>
>
> Thanks! Yes, I'm working on this patch while considering to send the
> stats to stats collector.
>
> I've attached PoC patch that implements a simple approach. I'd like to
> discuss how we collect the replication slot statistics in the stats
> collector before I bring the patch to completion.
>
>
I understand the patch is only in the initial stage but I just tried
testing it. Using the patch, I enabled logical replication and created two
pub/subs (sub1,sub2) for two separate tables (t1,t2). I inserted data into
the second table (t2) such that it spills to disk.
Then when I checked the stats using the new function
pg_stat_get_replication_slots(), I saw that the same stats are updated for
both slots, when ideally they should have been reflected in the second
slot alone.

postgres=# SELECT s.name, s.spill_txns, s.spill_count, s.spill_bytes
             FROM pg_stat_get_replication_slots() s(name, spill_txns, spill_count, spill_bytes);
 name | spill_txns | spill_count | spill_bytes
------+------------+-------------+-------------
 sub1 |          1 |          20 |      132000
 sub2 |          1 |          20 |      132000
(2 rows)

I haven't debugged the issue yet; I can if you wish, but I just thought
I'd let you know what I found.

thanks,
Ajin Cherian
Fujitsu Australia


Re: new heapcheck contrib module

2020-07-04 Thread Dilip Kumar
On Sun, Jun 28, 2020 at 11:18 PM Mark Dilger
 wrote:
>
>
>
> > On Jun 28, 2020, at 9:05 AM, Dilip Kumar  wrote:
> >
> > Some more comments on v9_0001.
> > 1.
> > + LWLockAcquire(XidGenLock, LW_SHARED);
> > + nextFullXid = ShmemVariableCache->nextFullXid;
> > + ctx.oldestValidXid = ShmemVariableCache->oldestXid;
> > + LWLockRelease(XidGenLock);
> > + ctx.nextKnownValidXid = XidFromFullTransactionId(nextFullXid);
> > ...
> > ...
> > +
> > + for (ctx.blkno = startblock; ctx.blkno < endblock; ctx.blkno++)
> > + {
> > + int32 mapbits;
> > + OffsetNumber maxoff;
> > + PageHeader ph;
> > +
> > + /* Optionally skip over all-frozen or all-visible blocks */
> > + if (skip_all_frozen || skip_all_visible)
> > + {
> > + mapbits = (int32) visibilitymap_get_status(ctx.rel, ctx.blkno,
> > +);
> > + if (skip_all_visible && (mapbits & VISIBILITYMAP_ALL_VISIBLE) != 0)
> > + continue;
> > + if (skip_all_frozen && (mapbits & VISIBILITYMAP_ALL_FROZEN) != 0)
> > + continue;
> > + }
> > +
> > + /* Read and lock the next page. */
> > + ctx.buffer = ReadBufferExtended(ctx.rel, MAIN_FORKNUM, ctx.blkno,
> > + RBM_NORMAL, ctx.bstrategy);
> > + LockBuffer(ctx.buffer, BUFFER_LOCK_SHARE);
> >
> > I might be missing something, but it appears that first we are getting
> > the nextFullXid and after that, we are scanning the block by block.
> > So while we are scanning the block if the nextXid is advanced and it
> > has updated some tuple in the heap pages,  then it seems the current
> > logic will complain about out of range xid.  I did not test this
> > behavior so please point me to the logic which is protecting this.
>
> We know the oldest valid Xid cannot advance, because we hold a lock that 
> would prevent it from doing so.  We cannot know that the newest Xid will not 
> advance, but when we see an Xid beyond the end of the known valid range, we 
> check its validity, and either report it as a corruption or advance our idea 
> of the newest valid Xid, depending on that check.  That logic is in 
> TransactionIdValidInRel.

That makes sense to me.

>
> > 2.
> > /*
> > * Helper function to construct the TupleDesc needed by verify_heapam.
> > */
> > static TupleDesc
> > verify_heapam_tupdesc(void)
> >
> > From function name, it appeared that it is verifying tuple descriptor
> > but this is just creating the tuple descriptor.
>
> In amcheck--1.2--1.3.sql we define a function named verify_heapam which 
> returns a set of records.  This is the tuple descriptor for that function.  I 
> understand that the name can be parsed as verify_(heapam_tupdesc), but it is 
> meant as (verify_heapam)_tupdesc.  Do you have a name you would prefer?

I'm not very particular, but a name like verify_heapam_get_tupdesc might
work. It's just a suggestion, so it's your choice; if you prefer the
current name, I have no objection.

>
> > 3.
> > + /* Optionally skip over all-frozen or all-visible blocks */
> > + if (skip_all_frozen || skip_all_visible)
> > + {
> > + mapbits = (int32) visibilitymap_get_status(ctx.rel, ctx.blkno,
> > +);
> > + if (skip_all_visible && (mapbits & VISIBILITYMAP_ALL_VISIBLE) != 0)
> > + continue;
> > + if (skip_all_frozen && (mapbits & VISIBILITYMAP_ALL_FROZEN) != 0)
> > + continue;
> > + }
> >
> > Here, do we want to test that in VM the all visible bit is set whereas
> > on the page it is not set?  That can lead to a wrong result in an
> > index-only scan.
>
> If the caller has specified that the corruption check should skip over 
> all-frozen or all-visible data, then we cannot load the page that the VM 
> claims is all-frozen or all-visible without defeating the purpose of the 
> caller having specified these options.  Without loading the page, we cannot 
> check the page's header bits.
>
> When not skipping all-visible or all-frozen blocks, we might like to pin both 
> the heap page and the visibility map page in order to compare the two, being 
> careful not to hold a pin on the one while performing I/O on the other.  See 
> for example the logic in heap_delete().  But I'm not sure what guarantees the 
> system makes about agreement between these two bits.  Certainly, the VM 
> should not claim a page is all visible when it isn't, but are we guaranteed 
> that a page that is all-visible will always have its all-visible bit set?  I 
> don't know if (possibly transient) disagreement between these two bits 
> constitutes corruption.  Perhaps others following this thread can advise?

Right, the VM should not claim a page is all-visible when it actually is
not. But IIRC it is not guaranteed that if the page is all-visible then
the VM must have the all-visible bit set.

> > 4. One cosmetic comment
> >
> > + /* Skip non-varlena values, but update offset first */
> > ..
> > +
> > + /* Ok, we're looking at a varlena attribute. */
> >
> > Throughout the patch, I have noticed that some of your single-line
> > comments have "full stop" whereas other don't.  Can we keep them
> > consistent?
>
> I try to use a "full stop" at the end of sentences, but not 

Re: warnings for invalid function casts

2020-07-04 Thread Peter Eisentraut

On 2020-07-03 16:40, Tom Lane wrote:
> Given that gcc explicitly documents "void (*) (void)" as being what
> to use, they're going to have a hard time changing their minds about
> that ... and gcc is dominant enough in this space that I suppose
> other compilers would have to be compatible with it.  So even though
> it's theoretically bogus, I suppose we might as well go along with
> it.  The typedef will allow a centralized fix if we ever find a
> better answer.

Do people prefer a typedef or just writing it out, like it's done in the
Python code?

Attached is a provisional patch that has it written out.

I'm minimally in favor of that, since the Python code would be
consistent with the Python core code, and the one other use is quite
special and it might not be worth introducing a globally visible
workaround for it.  But if we prefer a typedef then I'd propose
GenericFuncPtr like in the initial patch.


--
Peter Eisentraut  http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
From 220bab24e5a58f1b5d66336427c0877329a5f88a Mon Sep 17 00:00:00 2001
From: Peter Eisentraut 
Date: Sat, 4 Jul 2020 13:15:12 +0200
Subject: [PATCH v2] Fix -Wcast-function-type warnings

Discussion: 
https://www.postgresql.org/message-id/flat/20180206200205.f5kvbyn6jawtzi6s%40alap3.anarazel.de
---
 configure | 91 +++
 configure.in  |  2 +
 src/backend/utils/fmgr/dfmgr.c| 14 ++---
 src/backend/utils/hash/dynahash.c |  9 ++-
 src/include/fmgr.h|  6 +-
 src/pl/plpython/plpy_plpymodule.c | 14 ++---
 6 files changed, 118 insertions(+), 18 deletions(-)

diff --git a/configure b/configure
index 2feff37fe3..9907637e31 100755
--- a/configure
+++ b/configure
@@ -5643,6 +5643,97 @@ if test 
x"$pgac_cv_prog_CXX_cxxflags__Wimplicit_fallthrough_3" = x"yes"; then
 fi
 
 
+
+{ $as_echo "$as_me:${as_lineno-$LINENO}: checking whether ${CC} supports 
-Wcast-function-type, for CFLAGS" >&5
+$as_echo_n "checking whether ${CC} supports -Wcast-function-type, for 
CFLAGS... " >&6; }
+if ${pgac_cv_prog_CC_cflags__Wcast_function_type+:} false; then :
+  $as_echo_n "(cached) " >&6
+else
+  pgac_save_CFLAGS=$CFLAGS
+pgac_save_CC=$CC
+CC=${CC}
+CFLAGS="${CFLAGS} -Wcast-function-type"
+ac_save_c_werror_flag=$ac_c_werror_flag
+ac_c_werror_flag=yes
+cat confdefs.h - <<_ACEOF >conftest.$ac_ext
+/* end confdefs.h.  */
+
+int
+main ()
+{
+
+  ;
+  return 0;
+}
+_ACEOF
+if ac_fn_c_try_compile "$LINENO"; then :
+  pgac_cv_prog_CC_cflags__Wcast_function_type=yes
+else
+  pgac_cv_prog_CC_cflags__Wcast_function_type=no
+fi
+rm -f core conftest.err conftest.$ac_objext conftest.$ac_ext
+ac_c_werror_flag=$ac_save_c_werror_flag
+CFLAGS="$pgac_save_CFLAGS"
+CC="$pgac_save_CC"
+fi
+{ $as_echo "$as_me:${as_lineno-$LINENO}: result: 
$pgac_cv_prog_CC_cflags__Wcast_function_type" >&5
+$as_echo "$pgac_cv_prog_CC_cflags__Wcast_function_type" >&6; }
+if test x"$pgac_cv_prog_CC_cflags__Wcast_function_type" = x"yes"; then
+  CFLAGS="${CFLAGS} -Wcast-function-type"
+fi
+
+
+  { $as_echo "$as_me:${as_lineno-$LINENO}: checking whether ${CXX} supports 
-Wcast-function-type, for CXXFLAGS" >&5
+$as_echo_n "checking whether ${CXX} supports -Wcast-function-type, for 
CXXFLAGS... " >&6; }
+if ${pgac_cv_prog_CXX_cxxflags__Wcast_function_type+:} false; then :
+  $as_echo_n "(cached) " >&6
+else
+  pgac_save_CXXFLAGS=$CXXFLAGS
+pgac_save_CXX=$CXX
+CXX=${CXX}
+CXXFLAGS="${CXXFLAGS} -Wcast-function-type"
+ac_save_cxx_werror_flag=$ac_cxx_werror_flag
+ac_cxx_werror_flag=yes
+ac_ext=cpp
+ac_cpp='$CXXCPP $CPPFLAGS'
+ac_compile='$CXX -c $CXXFLAGS $CPPFLAGS conftest.$ac_ext >&5'
+ac_link='$CXX -o conftest$ac_exeext $CXXFLAGS $CPPFLAGS $LDFLAGS 
conftest.$ac_ext $LIBS >&5'
+ac_compiler_gnu=$ac_cv_cxx_compiler_gnu
+
+cat confdefs.h - <<_ACEOF >conftest.$ac_ext
+/* end confdefs.h.  */
+
+int
+main ()
+{
+
+  ;
+  return 0;
+}
+_ACEOF
+if ac_fn_cxx_try_compile "$LINENO"; then :
+  pgac_cv_prog_CXX_cxxflags__Wcast_function_type=yes
+else
+  pgac_cv_prog_CXX_cxxflags__Wcast_function_type=no
+fi
+rm -f core conftest.err conftest.$ac_objext conftest.$ac_ext
+ac_ext=c
+ac_cpp='$CPP $CPPFLAGS'
+ac_compile='$CC -c $CFLAGS $CPPFLAGS conftest.$ac_ext >&5'
+ac_link='$CC -o conftest$ac_exeext $CFLAGS $CPPFLAGS $LDFLAGS conftest.$ac_ext 
$LIBS >&5'
+ac_compiler_gnu=$ac_cv_c_compiler_gnu
+
+ac_cxx_werror_flag=$ac_save_cxx_werror_flag
+CXXFLAGS="$pgac_save_CXXFLAGS"
+CXX="$pgac_save_CXX"
+fi
+{ $as_echo "$as_me:${as_lineno-$LINENO}: result: 
$pgac_cv_prog_CXX_cxxflags__Wcast_function_type" >&5
+$as_echo "$pgac_cv_prog_CXX_cxxflags__Wcast_function_type" >&6; }
+if test x"$pgac_cv_prog_CXX_cxxflags__Wcast_function_type" = x"yes"; then
+  CXXFLAGS="${CXXFLAGS} -Wcast-function-type"
+fi
+
+
   # This was included in -Wall/-Wformat in older GCC versions
 
 { $as_echo "$as_me:${as_lineno-$LINENO}: checking whether ${CC} supports 
-Wformat-security, for CFLAGS" >&5
diff --git 

Re: A patch for get origin from commit_ts.

2020-07-04 Thread movead...@highgo.ca
>Thanks.  Movead, please note that the patch is waiting on author?
>Could you send an update if you think that those changes make sense?

I made a patch, as Michael Paquier described, that uses a new function to
return the transaction id and origin, and I added an origin version of
pg_last_committed_xact() too. Now it looks like below:


postgres=# SELECT txid_current() as txid \gset
postgres=# SELECT * FROM pg_xact_commit_timestamp_origin(:'txid');
           timestamp           | origin
-------------------------------+--------
 2020-07-04 17:52:10.199623+08 |      1
(1 row)

postgres=# SELECT * FROM pg_last_committed_xact_with_origin();
 xid |           timestamp           | origin
-----+-------------------------------+--------
 506 | 2020-07-04 17:52:10.199623+08 |      1
(1 row)

postgres=#


---
Regards,
Highgo Software (Canada/China/Pakistan) 
URL : www.highgo.ca 
EMAIL: mailto:movead(dot)li(at)highgo(dot)ca


get_origin_from_commit_ts_v3.patch
Description: Binary data


Re: Default setting for enable_hashagg_disk (hash_mem)

2020-07-04 Thread Amit Kapila
On Fri, Jul 3, 2020 at 7:38 PM Bruce Momjian  wrote:
>
> On Thu, Jul  2, 2020 at 08:35:40PM -0700, Peter Geoghegan wrote:
> > But the problem isn't really the hashaggs-that-spill patch itself.
> > Rather, the problem is the way that work_mem is supposed to behave in
> > general, and the impact that that has on hash aggregate now that it
> > has finally been brought into line with every other kind of executor
> > node. There just isn't much reason to think that we should give the
> > same amount of memory to a groupagg + sort as a hash aggregate. The
> > patch more or less broke an existing behavior that is itself
> > officially broken. That is, the problem that we're trying to fix here
> > is only a problem to the extent that the previous scheme isn't really
> > operating as intended (because grouping estimates are inherently very
> > hard). A revert doesn't seem like it helps anyone.
> >
> > I accept that the idea of inventing hash_mem to fix this problem now
> > is unorthodox. In a certain sense it solves problems beyond the
> > problems that we're theoretically obligated to solve now. But any
> > "more conservative" approach that I can think of seems like a big
> > mess.
>
> We don't even have a user report yet of a
> regression compared to PG 12, or one that can't be fixed by increasing
> work_mem.
>

Yeah, this is exactly the same point I have raised above.  I feel we
should wait before designing any solution to match the pre-13 behavior
for hash aggregates, to see what percentage of users face problems
related to this and how much of a problem it is for them to increase
work_mem to avoid the regression.  Say, if less than 1% of users face
this problem, and some of them are happy just increasing work_mem,
then we might not need to do anything.  OTOH, if 10% of users face
this problem and most of them don't want to increase work_mem, then it
would be evident that we need to do something about it, and we could
probably provide a GUC at that stage for them to revert to the old
behavior, and pursue a more advanced solution in the master branch.  I
am not sure what the right thing to do here is, but it seems to me we
are designing a solution based on the assumption that a lot of users
will be hit by this problem and will be unhappy with the new behavior.
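For reference, the per-session workaround being discussed is simply raising
work_mem for queries whose hash aggregate is known to spill under the new
PG13 behavior. A minimal sketch — the table, column, and memory size here are
invented for illustration, not taken from any report in this thread:

```sql
-- Hypothetical workaround: give the hash aggregate enough memory
-- that it stays in memory instead of spilling to disk.
SET work_mem = '256MB';            -- server default is 4MB

EXPLAIN (ANALYZE, COSTS OFF)
SELECT customer_id, count(*)       -- illustrative table and column
FROM orders
GROUP BY customer_id;

RESET work_mem;                    -- restore the configured default
```

The drawback, as noted upthread, is that work_mem applies per sort/hash node,
so raising it globally also inflates the memory budget of every other node.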

-- 
With Regards,
Amit Kapila.
EnterpriseDB: http://www.enterprisedb.com




Re: Cleanup - Removed unused function parameter in reorder buffer & parallel vacuum

2020-07-04 Thread Amit Kapila
On Fri, Jul 3, 2020 at 5:18 PM Masahiko Sawada
 wrote:
>
> On Fri, 3 Jul 2020 at 17:07, vignesh C  wrote:
> >
> > Hi,
> >
> > While checking through the code I found that some of the function
> > parameters in reorderbuffer & vacuumlazy are not used. I felt these
> > could be removed. I'm not sure if they are kept for future use or not.
> > The attached patch contains the changes for the same.
> > Thoughts?
> >
>
> For the part of parallel vacuum change, it looks good to me.
>

Unlike the ReorderBuffer one, this change looks fine to me as well.  It
is quite a recent (PG13) addition, and it would be good to remove the
unused parameter now.  So, I will push this part of the change unless I
hear any objection in a day or so.

-- 
With Regards,
Amit Kapila.
EnterpriseDB: http://www.enterprisedb.com




Re: Missing "Up" navigation link between parts and doc root?

2020-07-04 Thread Fabien COELHO



Hello Peter,

The original stylesheets explicitly go out of their way to do it that way. 
We can easily fix that by removing that special case.  See attached patch.


That patch only fixes it for the header.  To fix it for the footer as well, 
we'd first need to import the navfooter template to be able to customize it.


Thanks for the patch, which applies cleanly; the doc compiles, and it works 
for me with w3m.



Not a big problem though.


Nope, just mildly irritating for quite a long time :-) So I'd go for 
back-patching if it applies cleanly.


--
Fabien.




Re: Missing "Up" navigation link between parts and doc root?

2020-07-04 Thread Fabien COELHO




The original stylesheets explicitly go out of their way to do it that
way.


Can we find any evidence of the reasoning?  As you say, that clearly was
an intentional choice.


Given the code, my guess would be the well-intentioned but misplaced 
desire to avoid a redundancy, i.e. two links side-by-side which point to 
the same place.


--
Fabien.




Re: pgbench: option delaying queries till connections establishment?

2020-07-04 Thread Fabien COELHO



* The first patch reworks time measurements in pgbench.
* The second patch adds a barrier before starting the bench; it applies on
top of the previous one. The initial imbalance due to thread creation times
is smoothed out.


The usecs patch fails to apply to HEAD; can you please submit a rebased version
of this patchset?  Also, when doing so, can you please rename the patches such
that they sort alphabetically in the order in which they are intended to be
applied?  The CFBot patch tester will otherwise try to apply them out of
order, which causes errors.


Ok. Attached.

--
Fabien.

diff --git a/doc/src/sgml/ref/pgbench.sgml b/doc/src/sgml/ref/pgbench.sgml
index 8e728dc094..3c5ef9c27e 100644
--- a/doc/src/sgml/ref/pgbench.sgml
+++ b/doc/src/sgml/ref/pgbench.sgml
@@ -58,8 +58,10 @@ number of clients: 10
 number of threads: 1
 number of transactions per client: 1000
 number of transactions actually processed: 1/1
-tps = 85.184871 (including connections establishing)
-tps = 85.296346 (excluding connections establishing)
+latency average = 11.013 ms
+latency stddev = 7.351 ms
+initial connection time = 45.758 ms
+tps = 896.967014 (without initial connection establishing)
 
 
   The first six lines report some of the most important parameter
@@ -2226,7 +2228,6 @@ END;
   
For the default script, the output will look similar to this:
 
-starting vacuum...end.
 transaction type: builtin: TPC-B (sort of)
 scaling factor: 1
 query mode: simple
@@ -2234,22 +2235,22 @@ number of clients: 10
 number of threads: 1
 number of transactions per client: 1000
 number of transactions actually processed: 1/1
-latency average = 15.844 ms
-latency stddev = 2.715 ms
-tps = 618.764555 (including connections establishing)
-tps = 622.977698 (excluding connections establishing)
+latency average = 10.870 ms
+latency stddev = 7.341 ms
+initial connection time = 30.954 ms
+tps = 907.949122 (without initial connection establishing)
 statement latencies in milliseconds:
-0.002  \set aid random(1, 10 * :scale)
-0.005  \set bid random(1, 1 * :scale)
-0.002  \set tid random(1, 10 * :scale)
-0.001  \set delta random(-5000, 5000)
-0.326  BEGIN;
-0.603  UPDATE pgbench_accounts SET abalance = abalance + :delta WHERE aid = :aid;
-0.454  SELECT abalance FROM pgbench_accounts WHERE aid = :aid;
-5.528  UPDATE pgbench_tellers SET tbalance = tbalance + :delta WHERE tid = :tid;
-7.335  UPDATE pgbench_branches SET bbalance = bbalance + :delta WHERE bid = :bid;
-0.371  INSERT INTO pgbench_history (tid, bid, aid, delta, mtime) VALUES (:tid, :bid, :aid, :delta, CURRENT_TIMESTAMP);
-1.212  END;
+0.001  \set aid random(1, 10 * :scale)
+0.001  \set bid random(1, 1 * :scale)
+0.001  \set tid random(1, 10 * :scale)
+0.000  \set delta random(-5000, 5000)
+0.046  BEGIN;
+0.151  UPDATE pgbench_accounts SET abalance = abalance + :delta WHERE aid = :aid;
+0.107  SELECT abalance FROM pgbench_accounts WHERE aid = :aid;
+4.241  UPDATE pgbench_tellers SET tbalance = tbalance + :delta WHERE tid = :tid;
+5.245  UPDATE pgbench_branches SET bbalance = bbalance + :delta WHERE bid = :bid;
+0.102  INSERT INTO pgbench_history (tid, bid, aid, delta, mtime) VALUES (:tid, :bid, :aid, :delta, CURRENT_TIMESTAMP);
+0.974  END;
 
   
 
diff --git a/src/bin/pgbench/pgbench.c b/src/bin/pgbench/pgbench.c
index 08a5947a9e..46be67adaf 100644
--- a/src/bin/pgbench/pgbench.c
+++ b/src/bin/pgbench/pgbench.c
@@ -291,9 +291,9 @@ typedef struct SimpleStats
  */
 typedef struct StatsData
 {
-	time_t		start_time;		/* interval start time, for aggregates */
-	int64		cnt;			/* number of transactions, including skipped */
-	int64		skipped;		/* number of transactions skipped under --rate
+	pg_time_usec_t	start_time;	/* interval start time, for aggregates */
+	int64			cnt;		/* number of transactions, including skipped */
+	int64			skipped;	/* number of transactions skipped under --rate
  * and --latency-limit */
 	SimpleStats latency;
 	SimpleStats lag;
@@ -416,11 +416,11 @@ typedef struct
 	int			nvariables;		/* number of variables */
 	bool		vars_sorted;	/* are variables sorted by name? */
 
-	/* various times about current transaction */
-	int64		txn_scheduled;	/* scheduled start time of transaction (usec) */
-	int64		sleep_until;	/* scheduled start time of next cmd (usec) */
-	instr_time	txn_begin;		/* used for measuring schedule lag times */
-	instr_time	stmt_begin;		/* used for measuring statement latencies */
+	/* various times about current transaction in microseconds */
+	pg_time_usec_t	txn_scheduled;	/* scheduled start time of transaction */
+	pg_time_usec_t	sleep_until;	/* scheduled start time of next cmd */
+	pg_time_usec_t	txn_begin;		/* used for measuring schedule lag times */
+	pg_time_usec_t	stmt_begin;		/* used for measuring statement latencies */
 
 	bool		prepared[MAX_SCRIPTS];	/* whether client prepared the script */
 
@@ -449,13 

Re: min_safe_lsn column in pg_replication_slots view

2020-07-04 Thread Amit Kapila
On Wed, Jul 1, 2020 at 8:44 PM Alvaro Herrera  wrote:
>
> On 2020-Jul-01, Amit Kapila wrote:
>
> > Okay, but do we think it is better to display this in
> > pg_replication_slots or some new view like pg_stat_*_slots as being
> > discussed in [1]?  It should not happen that we later decide to move
> > this or similar stats to that view.
>
> It seems that the main motivation for having some counters in another
> view is the ability to reset them; and resetting this distance value
> makes no sense, so I think it's better to have it in
> pg_replication_slots.
>

Fair enough.  It would be good if we could come up with something better
than 'distance' for this column.  Some ideas: safe_wal_limit,
safe_wal_size?
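Whatever name is picked, usage would presumably look something like the
sketch below — note that safe_wal_size is only one of the names proposed
above, not a committed column:

```sql
-- Hypothetical: how much more WAL can be written before each slot's
-- reserved WAL is at risk of being removed.
SELECT slot_name,
       pg_size_pretty(safe_wal_size) AS remaining
FROM pg_replication_slots;
```

Keeping it in pg_replication_slots, as discussed, also means no reset
semantics need to be defined for it, unlike the counters proposed for a
separate pg_stat_*_slots view.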

-- 
With Regards,
Amit Kapila.
EnterpriseDB: http://www.enterprisedb.com