Re: Support "Right Semi Join" plan shapes

2024-04-28 Thread wenhui qiu
Hi Richard
 Thank you so much for your tireless work on this. I see the new version
of the patch improves some of the comments. I think it can be committed in July.


Thanks

On Thu, 25 Apr 2024 at 11:28, Richard Guo  wrote:

> Here is another rebase with a commit message to help review.  I also
> tweaked some comments.
>
> Thanks
> Richard
>


Re: POC: make mxidoff 64 bits

2024-04-23 Thread wenhui qiu
Hi Maxim Orlov
   Thank you so much for your tireless work on this. Increasing the WAL
size by a few bytes should have very little impact with today's disk
performance. (This feature also increases the WAL for logical replication
quite a bit; logical replication is a milestone feature, and the community
has been improving it.) I believe removing PostgreSQL's troublesome
transaction ID wraparound would also be a milestone feature, so adding a
few bytes is worth it!

Best regards

On Tue, 23 Apr 2024 at 17:37, Heikki Linnakangas  wrote:

> On 23/04/2024 11:23, Maxim Orlov wrote:
> > PROPOSAL
> > Make multixact offsets 64 bit.
>
> +1, this is a good next step and useful regardless of 64-bit XIDs.
>
> > @@ -156,7 +148,7 @@
> >   ((uint32) ((0x % MULTIXACT_MEMBERS_PER_PAGE) + 1))
> >
> >  /* page in which a member is to be found */
> > -#define MXOffsetToMemberPage(xid) ((xid) / (TransactionId)
> MULTIXACT_MEMBERS_PER_PAGE)
> > +#define MXOffsetToMemberPage(xid) ((xid) / (MultiXactOffset)
> MULTIXACT_MEMBERS_PER_PAGE)
> >  #define MXOffsetToMemberSegment(xid) (MXOffsetToMemberPage(xid) /
> SLRU_PAGES_PER_SEGMENT)
> >
> >  /* Location (byte offset within page) of flag word for a given member */
>
> This is really a bug fix. It didn't matter when TransactionId and
> MultiXactOffset were both typedefs of uint32, but it was always wrong.
> The argument name 'xid' is also misleading.
>
> I think there are some more like that, MXOffsetToFlagsBitShift for example.
>
> --
> Heikki Linnakangas
> Neon (https://neon.tech)
>
>
>
>


Re: allow changing autovacuum_max_workers without restarting

2024-04-17 Thread wenhui qiu
Agree, +1. From a DBA perspective, I would prefer that this parameter can
be modified dynamically rather than adding a new parameter. What is more
difficult is how to smoothly reach the target value when the setting is
considered too large and needs to be lowered.



Regards

On Tue, 16 Apr 2024 at 01:41, Imseih (AWS), Sami  wrote:

> > Another option could be to just remove the restart-only GUC and hard-code
> > the upper limit of autovacuum_max_workers to 64 or 128 or something.
> While
> > that would simplify matters, I suspect it would be hard to choose an
> > appropriate limit that won't quickly become outdated.
>
> Hardcoded values are usually hard to deal with because they are hidden
> either in code or in docs.
>
> > When I thought about this, I considered proposing to add a new GUC for
> > "autovacuum_policy_workers".
>
> > autovacuum_max_workers would be the same as before, requiring a restart
> > to change.  The policy GUC would be the soft limit, changable at runtime
>
> I think autovacuum_max_workers should still be the GUC that controls
> the number of concurrent autovacuums. This parameter is already well
> established and changing the meaning now will be confusing.
>
> I suspect most users will be glad it's now dynamic, but will probably
> be annoyed if it's no longer doing what it's supposed to.
>
> Regards,
>
> Sami
>
>


Re: Commitfest Manager for March

2024-03-20 Thread wenhui qiu
 Hi Aleksander Alekseev
  Could you take a look at the patch (
https://commitfest.postgresql.org/47/4284/)? What is your opinion?

Thanks

On Tue, 12 Mar 2024 at 21:41, Aleksander Alekseev 
wrote:

> Hi Andrey,
>
> > > If you need any help please let me know.
> >
> > Aleksander, I would greatly appreciate if you join me in managing CF.
> Together we can move more stuff :)
> > Currently, I'm going through "SQL Commands". And so far I had not come
> to "Performance" and "Server Features" at all... So if you can handle
> updating statuses of that sections - that would be great.
>
> OK, I'll take care of the "Performance" and "Server Features"
> sections. I submitted my summaries of the entries triaged so far to
> the corresponding thread [1].
>
> [1]:
> https://www.postgresql.org/message-id/CAJ7c6TN9SnYdq%3DkfP-txgo5AaT%2Bt9YU%2BvQHfLBZqOBiHwoipAg%40mail.gmail.com
>
> --
> Best regards,
> Aleksander Alekseev
>
>
>


Re: Support "Right Semi Join" plan shapes

2024-03-05 Thread wenhui qiu
Hi Alena Rybakina
For merge join:
+ /*
+ * For now we do not support RIGHT_SEMI join in mergejoin.
+ */
+ if (jointype == JOIN_RIGHT_SEMI)
+ {
+ *mergejoin_allowed = false;
+ return NIL;
+ }
+
Thanks

On Wed, 6 Mar 2024 at 04:10, Alena Rybakina 
wrote:

> To be honest, I didn't see it in the code, could you tell me where they
> are, please?
> On 05.03.2024 05:44, Richard Guo wrote:
>
>
> On Tue, Jan 30, 2024 at 2:51 PM Alena Rybakina 
> wrote:
>
>> I have reviewed your patch and I think it is better to add an Assert for
>> JOIN_RIGHT_SEMI to the ExecMergeJoin and ExecNestLoop functions to
>> prevent the use of RIGHT_SEMI for these types of connections (NestedLoop
>> and MergeJoin).
>
>
> Hmm, I don't see why this is necessary.  The planner should already
> guarantee that we won't have nestloops/mergejoins with right-semi joins.
>
> Thanks
> Richard
>
> --
> Regards,
> Alena Rybakina
> Postgres Professional: http://www.postgrespro.com
> The Russian Postgres Company
>
>


Re: Support "Right Semi Join" plan shapes

2024-03-04 Thread wenhui qiu
Hi Richard
 Agree, +1. I think it can be pushed now.

Richard

On Tue, 5 Mar 2024 at 10:44, Richard Guo  wrote:

>
> On Tue, Jan 30, 2024 at 2:51 PM Alena Rybakina 
> wrote:
>
>> I have reviewed your patch and I think it is better to add an Assert for
>> JOIN_RIGHT_SEMI to the ExecMergeJoin and ExecNestLoop functions to
>> prevent the use of RIGHT_SEMI for these types of connections (NestedLoop
>> and MergeJoin).
>
>
> Hmm, I don't see why this is necessary.  The planner should already
> guarantee that we won't have nestloops/mergejoins with right-semi joins.
>
> Thanks
> Richard
>


Re: Support "Right Semi Join" plan shapes

2024-03-03 Thread wenhui qiu
Hi Richard
 The last commitfest for v17 is now starting; could you respond to
Alena Rybakina's points?


Regards

On Thu, 8 Feb 2024 at 13:50, wenhui qiu  wrote:

> Hi Alena Rybakina
>  I saw this code snippet also disables mergejoin; I think it has the same effect:
> + /*
> + * For now we do not support RIGHT_SEMI join in mergejoin.
> + */
> + if (jointype == JOIN_RIGHT_SEMI)
> + {
> + *mergejoin_allowed = false;
> + return NIL;
> + }
> +
>
> Regards
>
> Alena Rybakina  于2024年1月30日周二 14:51写道:
>
>> Hi! Thank you for your work on this subject.
>>
>> I have reviewed your patch and I think it is better to add an Assert for
>> JOIN_RIGHT_SEMI to the ExecMergeJoin and ExecNestLoop functions to
>> prevent the use of RIGHT_SEMI for these types of connections (NestedLoop
>> and MergeJoin).
>> Mostly I'm suggesting this because of the set_join_pathlist_hook
>> function, which is in the add_paths_to_joinrel function, which allows
>> you to create a custom node. What do you think?
>>
>> --
>> Regards,
>> Alena Rybakina
>> Postgres Professional: http://www.postgrespro.com
>> The Russian Postgres Company
>>
>>


Re: Thoughts about NUM_BUFFER_PARTITIONS

2024-02-23 Thread wenhui qiu
Hi Tomas Vondra
Thanks for the information!  I found that the Postgres Pro Enterprise
version has implemented this as a parameter. It defaults to 16 and maxes
out at 128, the same maximum as in PostgreSQL. I hope the developers can
explain the purpose of this; maybe 128 partitions is the optimal value,
and the parameter just makes it easier to adjust the number of partitions
in the future when it is really not enough. The code comments also say
they hope to implement such a parameter in the future.


( https://postgrespro.com/docs/enterprise/16/runtime-config-locks )


log2_num_lock_partitions (integer) #
<https://postgrespro.com/docs/enterprise/16/runtime-config-locks#GUC-LOG2-NUM-LOCK-PARTITIONS>

This controls how many partitions the shared lock tables are divided into.
Number of partitions is calculated by raising 2 to the power of this
parameter. The default value is 4, which corresponds to 16 partitions, and
the maximum is 8. This parameter can only be set in the postgresql.conf file
or on the server command line.

Best wishes


On Tue, 20 Feb 2024 at 21:55, Tomas Vondra 
wrote:

> Hi,
>
> On 2/20/24 03:16, wenhui qiu wrote:
> > Hi Heikki Linnakangas
> >I saw git log found this commit:
> >
> https://github.com/postgres/postgres/commit/3acc10c997f916f6a741d0b4876126b7b08e3892
> > ,I don't seem to see an email discussing this commit. As the commit log
> > tells us, we don't know exactly how large a value is optimal, and I
> believe
> > it's more flexible to make it as a parameter.Thank you very much
> > tomas.vondra for explaining the relationship, i see that
> MAX_SIMUL_LWLOCKS
> > was just doubled in this commit, is there a more appropriate ratio
> between
> > them?
> >
>
> I think the discussion for that commit is in [1] (and especially [2]).
>
> That being said, I don't think MAX_SIMUL_LOCKS and NUM_BUFFER_PARTITIONS
> need to be in any particular ratio. The only requirement is that there
> needs to be enough slack, and 72 locks seemed to work quite fine until
> now - I don't think we need to change that.
>
> What might be necessary is improving held_lwlocks - we treat it as LIFO,
> but more as an expectation than a hard rule. I'm not sure how often we
> violate that rule (if at all), but if we do then it's going to get more
> expensive as we increase the number of locks. But I'm not sure this is
> actually a problem in practice, we usually hold very few LWLocks at the
> same time.
>
> As for making this a parameter, I'm rather opposed to the idea. If we
> don't have a very clear idea how to set this limit, what's the chance
> users with little knowledge of the internals will pick a good value?
> Adding yet another knob would just mean users start messing with it in
> random ways (typically increasing it to very high value, because "more
> is better"), causing more harm than good.
>
> Adding it as a GUC would also require making some parts dynamic (instead
> of just doing static allocation with compile-time constants). That's not
> great, but I'm not sure how significant the penalty might be.
>
>
> IMHO adding a GUC might be acceptable only if we fail to come up with a
> good value (which is going to be a trade off), and if someone
> demonstrates a clear benefit of increasing the value (which I don't
> think happen in this thread yet).
>
>
> regards
>
>
> [1]
>
> https://www.postgresql.org/message-id/flat/CAA4eK1LSTcMwXNO8ovGh7c0UgCHzGbN%3D%2BPjggfzQDukKr3q_DA%40mail.gmail.com
>
> [2]
>
> https://www.postgresql.org/message-id/CA%2BTgmoY58dQi8Z%3DFDAu4ggxHV-HYV03-R9on1LSP9OJU_fy_zA%40mail.gmail.com
>
> --
> Tomas Vondra
> EnterpriseDB: http://www.enterprisedb.com
> The Enterprise PostgreSQL Company
>


Re: Thoughts about NUM_BUFFER_PARTITIONS

2024-02-19 Thread wenhui qiu
Hi Heikki Linnakangas
   I searched the git log and found this commit:
https://github.com/postgres/postgres/commit/3acc10c997f916f6a741d0b4876126b7b08e3892
but I don't see an email discussing it. As the commit log tells us, we
don't know exactly how large a value is optimal, and I believe it is more
flexible to make it a parameter. Thank you very much, Tomas Vondra, for
explaining the relationship. I see that MAX_SIMUL_LWLOCKS was just doubled
in this commit; is there a more appropriate ratio between them?



```
commit 3acc10c997f916f6a741d0b4876126b7b08e3892
Author: Robert Haas 
Date:   Thu Oct 2 13:58:50 2014 -0400

Increase the number of buffer mapping partitions to 128.

Testing by Amit Kapila, Andres Freund, and myself, with and without
other patches that also aim to improve scalability, seems to indicate
that this change is a significant win over the current value and over
smaller values such as 64.  It's not clear how high we can push this
value before it starts to have negative side-effects elsewhere, but
going this far looks OK.

```

wenhui qiu  于2024年2月20日周二 09:36写道:

> Hi Japin Li
>    Thank you for such important information! Got it
>
> Japin Li  于2024年2月19日周一 10:26写道:
>
>>
>> On Mon, 19 Feb 2024 at 00:56, Tomas Vondra 
>> wrote:
>> > On 2/18/24 03:30, Li Japin wrote:
>> >>
>> >> I find it seems need to change MAX_SIMUL_LWLOCKS if we enlarge the
>> NUM_BUFFER_PARTITIONS,
>> >> I didn’t find any comments to describe the relation between
>> MAX_SIMUL_LWLOCKS and
>> NUM_BUFFER_PARTITIONS, am I missing something?
>> >
>> > IMHO the relationship is pretty simple - MAX_SIMUL_LWLOCKS needs to be
>> > higher than NUM_BUFFER_PARTITIONS, so that the backend can acquire all
>> > the partition locks if needed.
>> >
>>
>> Thanks for the explanation!  Got it.
>>
>> > There's other places that acquire a bunch of locks, and all of them need
>> > to be careful not to exceed MAX_SIMUL_LWLOCKS. For example gist has
>> > GIST_MAX_SPLIT_PAGES.
>> >
>> >
>> > regards
>>
>


Re: Thoughts about NUM_BUFFER_PARTITIONS

2024-02-19 Thread wenhui qiu
Hi Japin Li
   Thank you for such important information! Got it

Japin Li  于2024年2月19日周一 10:26写道:

>
> On Mon, 19 Feb 2024 at 00:56, Tomas Vondra 
> wrote:
> > On 2/18/24 03:30, Li Japin wrote:
> >>
> >> I find it seems need to change MAX_SIMUL_LWLOCKS if we enlarge the
> NUM_BUFFER_PARTITIONS,
> >> I didn’t find any comments to describe the relation between
> MAX_SIMUL_LWLOCKS and
> NUM_BUFFER_PARTITIONS, am I missing something?
> >
> > IMHO the relationship is pretty simple - MAX_SIMUL_LWLOCKS needs to be
> > higher than NUM_BUFFER_PARTITIONS, so that the backend can acquire all
> > the partition locks if needed.
> >
>
> Thanks for the explanation!  Got it.
>
> > There's other places that acquire a bunch of locks, and all of them need
> > to be careful not to exceed MAX_SIMUL_LWLOCKS. For example gist has
> > GIST_MAX_SPLIT_PAGES.
> >
> >
> > regards
>


Re: Thoughts about NUM_BUFFER_PARTITIONS

2024-02-08 Thread wenhui qiu
Hi Heikki Linnakangas
I think the larger the shared buffers, the higher the probability of
multiple backend processes accessing the same bucket slot's BufMappingLock
simultaneously (InitBufTable(NBuffers + NUM_BUFFER_PARTITIONS)). When I
have free time, I want to do this test. I have seen some tests, but the
result report is in Chinese.


Best wishes

Heikki Linnakangas  于2024年2月8日周四 19:26写道:

> On 08/02/2024 12:17, wenhui qiu wrote:
> > HI hackers
> >  When I read this text in this document there is a paragraph in
> > it(https://www.interdb.jp/pg/pgsql08/03.html
> > <https://www.interdb.jp/pg/pgsql08/03.html>)
> > /*
> > The BufMappingLock is split into partitions to reduce contention in the
> > buffer table (the default is 128 partitions). Each BufMappingLock
> > partition guards a portion of the corresponding hash bucket slots.
> > */,
> >
> > Physical servers with terabytes of RAM are now commonplace. I'm looking
> > at the comments inside the source code to see if they still match the
> > current hardware capability?
>
> The optimal number of partitions has more to do with the number of
> concurrent processes using the buffer cache, rather than the size of the
> cache. The number of CPUs in servers has increased too, but not as much
> as RAM.
>
> But yeah, it's a valid question if the settings still make sense with
> modern hardware.
>
> > The comment says that in the future there may be a parameter. I am a
> > DBA now and I tried to think of implementing this parameter, but I'm
> > not a professional kernel developer; I still want a senior community
> > developer to implement this parameter.
>
> The first thing to do would be to benchmark with different
> NUM_BUFFER_PARTITIONS settings, and see if there's benefit in having
> more partitions.
>
> --
> Heikki Linnakangas
> Neon (https://neon.tech)
>
>


Thoughts about NUM_BUFFER_PARTITIONS

2024-02-08 Thread wenhui qiu
Hi hackers,
When I read this document, I came across the following paragraph (
https://www.interdb.jp/pg/pgsql08/03.html):
/*
The BufMappingLock is split into partitions to reduce contention in the
buffer table (the default is 128 partitions). Each BufMappingLock partition
guards a portion of the corresponding hash bucket slots.
*/,

Physical servers with terabytes of RAM are now commonplace, so I'm looking
at the comments inside the source code to see if they still match current
hardware capabilities. The comment says that in the future there may be a
parameter. I am a DBA, and I tried to think of implementing this parameter
myself, but I'm not a professional kernel developer, so I still hope a
senior community developer will implement it.

/*
 * It's a bit odd to declare NUM_BUFFER_PARTITIONS and NUM_LOCK_PARTITIONS
 * here, but we need them to figure out offsets within MainLWLockArray, and
 * having this file include lock.h or bufmgr.h would be backwards.
 */

/* Number of partitions of the shared buffer mapping hashtable */
#define NUM_BUFFER_PARTITIONS  128

/*
 * The number of partitions for locking purposes.  This is set to match
 * NUM_BUFFER_PARTITIONS for now, on the basis that whatever's good enough
for
 * the buffer pool must be good enough for any other purpose.  This could
 * become a runtime parameter in future.
 */
#define DSHASH_NUM_PARTITIONS_LOG2 7
#define DSHASH_NUM_PARTITIONS (1 << DSHASH_NUM_PARTITIONS_LOG2)


Re: Support "Right Semi Join" plan shapes

2024-02-07 Thread wenhui qiu
Hi Alena Rybakina
 I saw this code snippet also disables mergejoin; I think it has the same effect:
+ /*
+ * For now we do not support RIGHT_SEMI join in mergejoin.
+ */
+ if (jointype == JOIN_RIGHT_SEMI)
+ {
+ *mergejoin_allowed = false;
+ return NIL;
+ }
+

Regards

Alena Rybakina  于2024年1月30日周二 14:51写道:

> Hi! Thank you for your work on this subject.
>
> I have reviewed your patch and I think it is better to add an Assert for
> JOIN_RIGHT_SEMI to the ExecMergeJoin and ExecNestLoop functions to
> prevent the use of RIGHT_SEMI for these types of connections (NestedLoop
> and MergeJoin).
> Mostly I'm suggesting this because of the set_join_pathlist_hook
> function, which is in the add_paths_to_joinrel function, which allows
> you to create a custom node. What do you think?
>
> --
> Regards,
> Alena Rybakina
> Postgres Professional: http://www.postgrespro.com
> The Russian Postgres Company
>
>


Re: Support "Right Semi Join" plan shapes

2024-01-23 Thread wenhui qiu
Hi Vignesh C
   Many thanks, I have marked it as "ready for committer"

Best wishes

vignesh C  于2024年1月23日周二 10:56写道:

> On Mon, 22 Jan 2024 at 11:27, wenhui qiu  wrote:
> >
> > Hi Vignesh C, I saw this patch has passed CI (
> https://cirrus-ci.com/build/6109321080078336); can we push it?
>
> If you have found no comments from your review and testing, let's mark
> it as "ready for committer".
>
> Regards,
> Vignesh
>


Re: Support "Right Semi Join" plan shapes

2024-01-21 Thread wenhui qiu
Hi Vignesh C
 I saw this patch has passed CI (
https://cirrus-ci.com/build/6109321080078336); can we push it?

Best wishes

Richard Guo  于2024年1月9日周二 18:49写道:

>
> On Sun, Jan 7, 2024 at 3:03 PM vignesh C  wrote:
>
>> One of the tests in CFBot has failed at [1] with:
>> -   Relations: (public.ft1 t1) SEMI JOIN (public.ft2 t2)
>> -   Remote SQL: SELECT r1."C 1", r1.c2, r1.c3, r1.c4, r1.c5, r1.c6,
>> r1.c7, r1.c8 FROM "S 1"."T 1" r1 WHERE ((r1."C 1" < 20)) AND EXISTS
>> (SELECT NULL FROM "S 1"."T 1" r3 WHERE ((r3."C 1" > 10)) AND
>> ((date(r3.c5) = '1970-01-17'::date)) AND ((r3.c3 = r1.c3))) ORDER BY
>> r1."C 1" ASC NULLS LAST
>> -(4 rows)
>> +   Sort Key: t1.c1
>> +   ->  Foreign Scan
>> + Output: t1.c1, t1.c2, t1.c3, t1.c4, t1.c5, t1.c6, t1.c7, t1.c8
>> + Relations: (public.ft1 t1) SEMI JOIN (public.ft2 t2)
>> + Remote SQL: SELECT r1."C 1", r1.c2, r1.c3, r1.c4, r1.c5,
>> r1.c6, r1.c7, r1.c8 FROM "S 1"."T 1" r1 WHERE ((r1."C 1" < 20)) AND
>> EXISTS (SELECT NULL FROM "S 1"."T 1" r3 WHERE ((r3."C 1" > 10)) AND
>> ((date(r3.c5) = '1970-01-17'::date)) AND ((r3.c3 = r1.c3)))
>> +(7 rows)
>
>
> Thanks.  I looked into it and have figured out why the plan differs.
> With this patch the SEMI JOIN that is pushed down to the remote server
> is now implemented using JOIN_RIGHT_SEMI, whereas previously it was
> implemented using JOIN_SEMI.  Consequently, this leads to changes in the
> costs of the paths: path with the sort pushed down to remote server, and
> path with the sort added atop the foreign join.  And at last the latter
> one wins by a slim margin.
>
> I think we can simply update the expected file to fix this plan diff, as
> attached.
>
> Thanks
> Richard
>


Re: Support "Right Semi Join" plan shapes

2023-12-27 Thread wenhui qiu
 <> fivethous) AND (a.twothousand =
twothousand))
   ->  Materialize
 ->  Seq Scan on int4_tbl i4
(9 rows)
test=# set enable_nestloop =off;
SET
test=# explain (costs off ) SELECT *
FROM int4_tbl i4, tenk1 a
WHERE (a.twothousand, a.fivethous) IN (
SELECT b.twothousand, b.fivethous
FROM tenk1 b
WHERE a.twothousand = b.twothousand  and  a.fivethous <> b.fivethous
)
AND i4.f1 = a.tenthous;
   QUERY PLAN

 Hash Join
   Hash Cond: (a.tenthous = i4.f1)
   ->  Seq Scan on tenk1 a
 Filter: (SubPlan 1)
     SubPlan 1
   ->  Seq Scan on tenk1 b
 Filter: ((a.fivethous <> fivethous) AND (a.twothousand =
twothousand))
   ->  Hash
 ->  Seq Scan on int4_tbl i4
(9 rows)



wenhui qiu  于2023年12月15日周五 14:40写道:

> Hi Richard Guo   I see that the test samples are all EXISTS
> subqueries. I think semi join should also support IN and ANY subqueries;
> would you do more tests on IN and ANY subqueries?
>
>
> Best wishes
>


Re: Transaction timeout

2023-12-20 Thread wenhui qiu
Hi Junwang Zhao
   Agree, +1

Best wishes

Junwang Zhao  于2023年12月20日周三 10:35写道:

> On Wed, Dec 20, 2023 at 9:58 AM Thomas wen
>  wrote:
> >
> > Hi Junwang Zhao
> >  #should we invalidate lock_timeout? Or maybe just document this.
> > I think you mean that when lock_timeout is greater than
> transaction_timeout we should invalidate lock_timeout, or it needs to be logged?
> >
> I mean the interleaving of the gucs, which is lock_timeout and the new
> introduced transaction_timeout,
> if lock_timeout >= transaction_timeout, seems no need to enable
> lock_timeout.
> >
> >
> >
> > Best wishes
> > 
> > 发件人: Junwang Zhao 
> > 发送时间: 2023年12月20日 9:48
> > 收件人: Andrey M. Borodin 
> > 抄送: Japin Li ; 邱宇航 ; Fujii Masao
> ; Andrey Borodin ;
> Andres Freund ; Michael Paquier <
> michael.paqu...@gmail.com>; Nikolay Samokhvalov ;
> pgsql-hackers ;
> pgsql-hackers@lists.postgresql.org 
> > 主题: Re: Transaction timeout
> >
> > On Tue, Dec 19, 2023 at 10:51 PM Junwang Zhao  wrote:
> > >
> > > On Tue, Dec 19, 2023 at 6:27 PM Andrey M. Borodin <
> x4...@yandex-team.ru> wrote:
> > > >
> > > >
> > > >
> > > > > On 19 Dec 2023, at 13:26, Andrey M. Borodin 
> wrote:
> > > > >
> > > > > I don’t have Windows machine, so I hope CF bot will pick this.
> > > >
> > > > I used Github CI to produce version of tests that seems to be is
> stable on Windows.
> > > > Sorry for the noise.
> > > >
> > > >
> > > > Best regards, Andrey Borodin.
> > >
> > > +   
> > > +If transaction_timeout is shorter than
> > > +idle_in_transaction_session_timeout or
> > > statement_timeout
> > > +transaction_timeout will invalidate longer
> timeout.
> > > +   
> > >
> > > When transaction_timeout is *equal* to
> idle_in_transaction_session_timeout
> > > or statement_timeout, idle_in_transaction_session_timeout and
> statement_timeout
> > > will also be invalidated, the logic in the code seems right, though
> > > this document
> > > is a little bit inaccurate.
> > >
> >
> > Unlike statement_timeout, this timeout can
> only occur
> > while waiting for locks.  Note that if
> > statement_timeout
> > is nonzero, it is rather pointless to set
> > lock_timeout to
> > the same or larger value, since the statement timeout would
> always
> > trigger first.  If log_min_error_statement is
> set to
> > ERROR or lower, the statement that timed out
> will be
> > logged.
> >
> >
> > There is a note about statement_timeout and lock_timeout, set both
> > and lock_timeout >= statement_timeout is pointless, but this logic seems
> not
> > implemented in the code. I am wondering if lock_timeout >=
> transaction_timeout,
> > should we invalidate lock_timeout? Or maybe just document this.
> >
> > > --
> > > Regards
> > > Junwang Zhao
> >
> >
> >
> > --
> > Regards
> > Junwang Zhao
> >
> >
>
>
> --
> Regards
> Junwang Zhao
>
>
>


Re: Add 64-bit XIDs into PostgreSQL 15

2023-12-15 Thread wenhui qiu
Hi Pavel Borisov
Many thanks

Best wishes

Pavel Borisov  于2023年12月15日周五 17:13写道:

> Hi, Wenhui!
>
> On Fri, 15 Dec 2023 at 05:52, wenhui qiu  wrote:
>
>> Hi Maxim Orlov
>> Good news: xid64 has completed a successful first phase. I tried to
>> change the patch status (https://commitfest.postgresql.org/43/3594/),
>> but it seems incorrect
>>
>> Maxim Orlov  于2023年12月13日周三 20:26写道:
>>
>>> Hi!
>>>
>>> Just to keep this thread up to date, here's a new version after recent
>>> changes in SLRU.
>>> I'm also change order of the patches in the set, to make adding initdb
>>> MOX options after the
>>> "core 64 xid" patch, since MOX patch is unlikely to be committed and now
>>> for test purpose only.
>>>
>>
> If the patch is RwF the CF entry is finished and can't be enabled, rather
> the patch needs to be submitted in a new entry, which I have just done.
> https://commitfest.postgresql.org/46/4703/
>
> Please feel free to submit your review.
>
> Kind regards,
> Pavel Borisov,
> Supabase
>


Support "Right Semi Join" plan shapes

2023-12-14 Thread wenhui qiu
Hi Richard Guo   I see that the test samples are all EXISTS subqueries.
I think semi join should also support IN and ANY subqueries; would you do
more tests on IN and ANY subqueries?


Best wishes


Re: Add 64-bit XIDs into PostgreSQL 15

2023-12-14 Thread wenhui qiu
Hi Maxim Orlov
Good news: xid64 has completed a successful first phase. I tried to change
the patch status (https://commitfest.postgresql.org/43/3594/), but it seems
incorrect.

Maxim Orlov  于2023年12月13日周三 20:26写道:

> Hi!
>
> Just to keep this thread up to date, here's a new version after recent
> changes in SLRU.
> I'm also change order of the patches in the set, to make adding initdb MOX
> options after the
> "core 64 xid" patch, since MOX patch is unlikely to be committed and now
> for test purpose only.
>
> --
> Best regards,
> Maxim Orlov.
>