On Wed, Sep 15, 2021 at 4:14 AM Stephen Frost wrote:
>
> > > I'm not proposing to remove the existing archive_command. Just deprecate
> > > its one-WAL-per-call form.
> >
> > Which is a big API break.
>
> We definitely need to stop being afraid of this. We completely changed
> around how restores
On 2021/09/11 12:21, Fujii Masao wrote:
On 2021/07/23 20:07, Ranier Vilela wrote:
On Fri, Jul 23, 2021 at 07:02, Aleksander Alekseev <aleksan...@timescale.com> wrote:
Hi hackers,
The following review has been posted through the commitfest application:
On Wed, Sep 15, 2021 at 8:49 AM Peter Smith wrote:
>
> On Tue, Sep 14, 2021 at 8:33 PM Amit Kapila wrote:
> >
> > On Fri, Jun 25, 2021 at 9:20 AM Peter Smith wrote:
> > >
> > > But I recently learned that when there are partitions in the
> > > publication, then toggling the value of the
On Tue, Sep 14, 2021 at 05:30:27PM -0400, John Naylor wrote:
> I've attached the patch for including this update in our sources. I'll
> apply it on master after doing some sanity checks. The announcement can be
> found here:
>
>
On Mon, Sep 6, 2021 at 11:21 PM Alvaro Herrera wrote:
>
> I pushed the clerical part of this -- namely the addition of
> PublicationTable node and PublicationRelInfo struct.
>
One point to note here is that we are developing a generic grammar for
publications where not only tables but other
On Tue, Sep 14, 2021 at 8:33 PM Amit Kapila wrote:
>
> On Fri, Jun 25, 2021 at 9:20 AM Peter Smith wrote:
> >
> > But I recently learned that when there are partitions in the
> > publication, then toggling the value of the PUBLICATION option
> > "publish_via_partition_root" [3] can also
On Tue, Sep 14, 2021 at 06:00:44PM +, Bossart, Nathan wrote:
> I think I see more support for shared_memory_size_in_huge_pages than
> for huge_pages_needed_for_shared_memory at the moment. I'll update
> the patch set in the next day or two to use
> shared_memory_size_in_huge_pages unless
At Tue, 14 Sep 2021 22:32:04 -0300, Alvaro Herrera
wrote in
> On 2021-Sep-14, Alvaro Herrera wrote:
>
> > On 2021-Sep-08, Kyotaro Horiguchi wrote:
> >
> > > Thanks! As I understand it, the new record adds the ability to
> > > cross-check between a torn-off contrecord and the new record
At Tue, 14 Sep 2021 18:07:31 +, "Bossart, Nathan"
wrote in
> On 9/14/21, 9:18 AM, "Bossart, Nathan" wrote:
> > This is an interesting idea, but the "else" block here seems prone to
> > race conditions. I think we'd have to hold arch_lck to prevent that.
> > But as I mentioned above, if we
On 2021-Sep-14, Alvaro Herrera wrote:
> On 2021-Sep-08, Kyotaro Horiguchi wrote:
>
> > Thanks! As I understand it, the new record adds the ability to
> > cross-check between a torn-off contrecord and the new record inserted
> > after the torn-off record. I didn't test the version myself
Hello Melanie
On 2021-Sep-13, Melanie Plageman wrote:
> I also think it makes sense to rename the pg_stat_buffer_actions view to
> pg_stat_buffers and to name the columns using both the buffer action
> type and buffer type -- e.g. shared, strategy, local. This leaves open
> the possibility of
> On Jun 16, 2020, at 6:55 AM, amul sul wrote:
>
> (2) if the session is idle, we also need the top-level abort
> record to be written immediately, but can't send an error to the client until
> the next
> command is issued without losing wire protocol synchronization. For now, we
> just use
> On 15 Sep 2021, at 00:14, Jacob Champion wrote:
> On Mon, 2021-09-13 at 15:04 +0200, Daniel Gustafsson wrote:
>> -# Convert client.key to encrypted PEM (X.509 text) and DER (X.509 ASN.1)
>> formats
>> -# to test libpq's support for the sslpassword= option.
>> -ssl/client-encrypted-pem.key:
On Tue, Sep 14, 2021 at 12:57:47PM -0300, Alvaro Herrera wrote:
> The parentheses that commit e3a87b4991cc removed the requirement for are
> those that the committed code still has, starting at the errcode() line.
> The ones in errmsg() were redundant and have never been necessary.
Indeed,
On Mon, 2021-09-13 at 15:04 +0200, Daniel Gustafsson wrote:
> A few things noted (and fixed as per the attached, which is v6 squashed and
> rebased on HEAD; commitmessage hasn't been addressed yet) while reviewing:
>
> * Various comment reflowing to fit within 79 characters
>
> * Pass through
I wrote:
> Hence, I present the attached, which also tweaks things to avoid an
> extra pq_flush in the end-of-command code path, and improves the
> comments to discuss the issue of NOTIFYs sent by procedures.
Hearing no comments, I pushed that.
> I'm inclined to think we should flat-out reject
> So I wonder, isn't the fixed usage issue specific to LLVM 7
That's definitely possible. I was unable to reproduce the issue I shared in my
original email when postgres was compiled with LLVM 10.
That's also why I sent an email to the pgsql-pkg-yum mailing list about options
to use a newer
Greetings,
* Julien Rouhaud (rjuju...@gmail.com) wrote:
> On Fri, Sep 10, 2021 at 2:03 PM Andrey Borodin wrote:
> > > On 10 Sep 2021, at 10:52, Julien Rouhaud wrote:
> > > Yes, but it also means that it's up to every single archiving tool to
> > > implement a somewhat hackish parallel
Zhihong Yu writes:
> In the fix, isUsedSubplan is used to tell whether any given subplan is used.
> Since only one subplan is used, I wonder if the array can be replaced by
> specifying the subplan is used.
That doesn't seem particularly more convenient. The point of the bool
array is to merge
On Tue, Sep 14, 2021 at 10:44 AM Tom Lane wrote:
> Rajkumar Raghuwanshi writes:
> > I am getting "ERROR: subplan "SubPlan 1" was not initialized" error with
> > below test case.
>
> > CREATE TABLE tbl ( c1 int, c2 int, c3 int ) PARTITION BY LIST (c1);
> > create table tbl_null PARTITION OF tbl
Hello Andres,
14.09.2021 08:05, Andres Freund wrote:
>
>> With LLVM 9 on the same Centos 7 I don't get such segfault. Also it
>> doesn't happen on different OSes with LLVM 7.
> That looks just like an LLVM bug to me. Rather than the usage issue addressed in
> this thread.
But Justin has seen this
On 9/13/21, 11:06 PM, "Amul Sul" wrote:
> The patch is straightforward but the only concern is that in
> StartupXLOG(), SharedRecoveryState now gets updated only with spin
> lock; earlier it also had ControlFileLock in addition to that. AFAICU,
> I don't see any problem there, since until the
On 9/14/21 2:04 PM, Erik Rijkers wrote:
> On 9/14/21 2:53 PM, Andrew Dunstan wrote:
>> On 9/13/21 5:41 AM, Erik Rijkers wrote:
>>> On 9/2/21 8:52 PM, Andrew Dunstan wrote:
>>>
>
> >> [0001-SQL-JSON-functions-v51.patch]
> >> [0002-JSON_TABLE-v51.patch]
> >>
Dear Alvaro,
We just need to revisit the scope of the patch, so I will work on the next
generation. I do not know what was "out of scope" in the current version
/Mark
On Tue, 14 Sep 2021, 8:55 pm Alvaro Herrera,
wrote:
> On 2021-Sep-14, Daniel Gustafsson wrote:
>
> > Given the above, and that
> On 14 Sep 2021, at 20:54, Alvaro Herrera wrote:
>
> On 2021-Sep-14, Daniel Gustafsson wrote:
>
>> Given the above, and that nothing has happened on this thread since this
>> note,
>> I think we should mark this Returned with Feedback and await a new
>> submission.
>> Does that seem
On 2021-Sep-14, Daniel Gustafsson wrote:
> Given the above, and that nothing has happened on this thread since this note,
> I think we should mark this Returned with Feedback and await a new submission.
> Does that seem reasonable Alvaro?
It seems reasonable to me.
--
Álvaro Herrera
> On 14 Jul 2021, at 18:07, Alvaro Herrera wrote:
>
> On 2021-Jul-14, vignesh C wrote:
>
>> The patch does not apply on Head anymore, could you rebase and post a
>> patch. I'm changing the status to "Waiting for Author".
>
> BTW I gave a look at this patch in the March commitfest and concluded
On Tue, Sep 14, 2021 at 20:04, Erik Rijkers wrote:
> On 9/14/21 2:53 PM, Andrew Dunstan wrote:
> > On 9/13/21 5:41 AM, Erik Rijkers wrote:
> >> On 9/2/21 8:52 PM, Andrew Dunstan wrote:
> >>
>
> >> [0001-SQL-JSON-functions-v51.patch]
> >> [0002-JSON_TABLE-v51.patch]
> >>
On 9/14/21, 9:18 AM, "Bossart, Nathan" wrote:
> This is an interesting idea, but the "else" block here seems prone to
> race conditions. I think we'd have to hold arch_lck to prevent that.
> But as I mentioned above, if we are okay with depending on the
> fallback directory scans, I think we can
On 9/14/21 2:53 PM, Andrew Dunstan wrote:
On 9/13/21 5:41 AM, Erik Rijkers wrote:
On 9/2/21 8:52 PM, Andrew Dunstan wrote:
>> [0001-SQL-JSON-functions-v51.patch]
>> [0002-JSON_TABLE-v51.patch]
>> [0003-JSON_TABLE-PLAN-DEFAULT-clause-v51.patch]
>> [0004-JSON_TABLE-PLAN-clause-v51.patch]
On 9/13/21, 5:49 PM, "Kyotaro Horiguchi" wrote:
> At Tue, 14 Sep 2021 00:30:22 +, "Bossart, Nathan"
> wrote in
>> On 9/13/21, 1:25 PM, "Tom Lane" wrote:
>> > Seems like "huge_pages_needed_for_shared_memory" would be sufficient.
>>
>> I think we are down to either
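Whatever name wins, the quantity under discussion is essentially the server's shared memory size rounded up to a whole number of huge pages. A minimal sketch of that arithmetic (the values are assumed for illustration, not taken from a real server):

```python
# Sketch of the arithmetic behind the proposed GUC: the number of huge
# pages needed to back the server's shared memory region. Values below
# are invented for illustration.

def huge_pages_needed(shared_memory_bytes: int, huge_page_bytes: int) -> int:
    """Round the shared memory size up to a whole number of huge pages."""
    return -(-shared_memory_bytes // huge_page_bytes)  # ceiling division

# e.g. ~145 MB of shared memory with 2 MB huge pages
print(huge_pages_needed(145 * 1024 * 1024, 2 * 1024 * 1024))
```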
On 2021-Sep-08, Kyotaro Horiguchi wrote:
> Thanks! As I understand it, the new record adds the ability to
> cross-check between a torn-off contrecord and the new record inserted
> after the torn-off record. I didn't test the version myself, but
> the previous version implemented the
Rajkumar Raghuwanshi writes:
> I am getting "ERROR: subplan "SubPlan 1" was not initialized" error with
> below test case.
> CREATE TABLE tbl ( c1 int, c2 int, c3 int ) PARTITION BY LIST (c1);
> create table tbl_null PARTITION OF tbl FOR VALUES IN (null);
> create table tbl_def PARTITION OF tbl
On 9/14/21, 7:23 AM, "Dipesh Pandit" wrote:
> I agree that when we are creating a .ready file we should compare
> the current .ready file with the last .ready file to check if this file is
> created out of order. We can store the state of the last .ready file
> in shared memory and compare it
On 2021-Sep-14, Michael Paquier wrote:
> On Mon, Sep 13, 2021 at 08:51:18AM +0530, Bharath Rupireddy wrote:
> > On Mon, Sep 13, 2021 at 8:07 AM Michael Paquier wrote:
> >> On Sun, Sep 12, 2021 at 10:14:36PM -0300, Euler Taveira wrote:
> >>> On Sun, Sep 12, 2021, at 8:02 PM, Bossart, Nathan
Hello
I found that in 0001 you propose to rename a few options. Perhaps we could
rename another option for clarity? I think the FAST (is it about some bandwidth
limits?) and WAIT (wait for what? a checkpoint?) option names are confusing.
Could we replace FAST with "CHECKPOINT [fast|spread]" and WAIT to
Hi,
On September 14, 2021 7:11:25 AM PDT, Tom Lane wrote:
>Aleksander Alekseev writes:
>>> Initial experiments show no observable problems when copying PGDATA or in
>>> fact using physical streaming replication between the two CPU architectures.
>
>> That's an interesting result. The topic of
On Tue, Sep 7, 2021 at 11:38 AM houzj.f...@fujitsu.com
wrote:
>
> From Tues, Sep 7, 2021 12:02 PM Amit Kapila wrote:
> > On Mon, Sep 6, 2021 at 1:49 PM houzj.f...@fujitsu.com
> > wrote:
> > >
> > > I can reproduce this bug.
> > >
> > > I think the reason is it didn't invalidate all the leaf
On Mon, Sep 13, 2021 at 9:42 PM Robert Haas wrote:
>
> On Mon, Sep 13, 2021 at 7:19 AM Dilip Kumar wrote:
> > Seems like nothing has been done about the issue reported in [1]
> >
> > This one line change shall fix the issue,
>
> Oops. Try this version.
Thanks, this version works fine.
--
Thanks for the feedback.
> The latest post on this thread contained a link to this one, and it
> made me want to rewind to this point in the discussion. Suppose we
> have the following alternative scenario:
>
> Let's say step 1 looks for WAL file 10, but 10.ready doesn't exist
> yet. The
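The scenario above — the expected .ready file is not there yet, so the archiver falls back to scanning the status directory — can be sketched like this. The paths and helper name are invented for illustration; this is only the shape of the logic being discussed, not the actual pgarch.c code.

```python
# Sketch of the archiver lookup under discussion: check for the expected
# next segment's .ready file, and fall back to a full directory scan for
# the oldest .ready file when it is missing.
import os

def next_segment_to_archive(expected: str, status_dir: str):
    """Return the next WAL segment to archive, or None if nothing is ready."""
    if os.path.exists(os.path.join(status_dir, expected + ".ready")):
        return expected
    # fallback: scan the whole status directory for the oldest .ready file
    ready = sorted(f[:-len(".ready")] for f in os.listdir(status_dir)
                   if f.endswith(".ready"))
    return ready[0] if ready else None
```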
Aleksander Alekseev writes:
>> Initial experiments show no observable problems when copying PGDATA or in
>> fact using physical streaming replication between the two CPU architectures.
> That's an interesting result. The topic of physical replication
> compatibility interested me much back in
> On 14 Sep 2021, at 18:41, Daniel Gustafsson wrote:
>
>>> Besides this patch looks good and is ready for committer IMV.
>
> A variant of this patch was originally objected against by Michael, and as
> this
> version is marked Ready for Committer I would like to hear his opinions on
>
"Andrey V. Lepikhov" writes:
> Also, as a test, I used two tables with 1E5 partitions each. I tried to
> do plain SELECT, JOIN, join with plain table. No errors found, only
> performance issues. But it is a subject for another research.
Yeah, there's no expectation that the performance would
>> Besides this patch looks good and is ready for committer IMV.
A variant of this patch was originally objected against by Michael, and as this
version is marked Ready for Committer I would like to hear his opinions on
whether the new evidence changes anything.
--
Daniel Gustafsson
On Tue, Sep 14, 2021 at 08:49, Rajkumar Raghuwanshi <rajkumar.raghuwan...@enterprisedb.com> wrote:
> Hi,
>
> I am getting "ERROR: subplan "SubPlan 1" was not initialized" error with
> below test case.
>
> CREATE TABLE tbl ( c1 int, c2 int, c3 int ) PARTITION BY LIST (c1);
> create
> On 14 Sep 2021, at 11:57, Amit Kapila wrote:
> LGTM as well. Peter E., Daniel, is either of you intending to
> push this? If not, I can take care of it.
No worries, I can pick it up.
--
Daniel Gustafsson https://vmware.com/
Hi Jan,
> Initial experiments show no observable problems when copying PGDATA or in
fact using physical streaming replication between the two CPU architectures.
That's an interesting result. The topic of physical replication
compatibility interested me much back in 2017 and I raised this
Hi,
I am getting "ERROR: subplan "SubPlan 1" was not initialized" error with
below test case.
CREATE TABLE tbl ( c1 int, c2 int, c3 int ) PARTITION BY LIST (c1);
create table tbl_null PARTITION OF tbl FOR VALUES IN (null);
create table tbl_def PARTITION OF tbl DEFAULT;
insert into tbl values
Hi Andrey,
> only performance issues
That's interesting. Any chance you could share the hardware
description, the configuration file, and steps to reproduce with us?
--
Best regards,
Aleksander Alekseev
Fellow Postgres Admins and Developers,
With the arrival of ARM compute nodes on AWS and an existing fleet of
Postgres clusters running on x86_64 nodes the question arises how to
migrate existing Postgres clusters to ARM64 nodes, ideally with zero
downtime, as one is used to.
Initial experiments
On Tue, Sep 14, 2021 at 12:18:21PM +0900, Amit Langote wrote:
> Patch updated. Given the new text, I thought it might be better to
> move the paragraph right next to the description of the ResourceOwner
> API at the beginning of the section, because the context seems clearer
> that way.
On Fri, Jun 25, 2021 at 9:20 AM Peter Smith wrote:
>
> But I recently learned that when there are partitions in the
> publication, then toggling the value of the PUBLICATION option
> "publish_via_partition_root" [3] can also *implicitly* change the list
> published tables, and therefore that too
On Wed, Sep 8, 2021 at 5:11 PM Masahiko Sawada wrote:
>
> On Tue, Sep 7, 2021 at 9:01 PM Daniel Gustafsson wrote:
> >
> > > On 7 Sep 2021, at 13:36, Peter Eisentraut
> > > wrote:
> > >
> > > On 12.08.21 04:52, Masahiko Sawada wrote:
> > >> On Wed, Aug 11, 2021 at 5:42 PM Daniel Gustafsson
>
On Mon, Sep 13, 2021 at 10:42 PM Tom Lane wrote:
>
> Alvaro Herrera writes:
> > Am I the only one who thinks this code is doing far too much in a
> > PG_CATCH block?
>
> You mean the one in ReorderBufferProcessTXN? Yeah, that is mighty
> ugly. It might be all right given that it almost
On Tue, Sep 14, 2021 at 6:31 AM houzj.f...@fujitsu.com
wrote:
>
> From Sun, Sept 12, 2021 11:13 PM vignesh C wrote:
> > On Fri, Sep 10, 2021 at 11:21 AM Hou Zhijie wrote:
> > > Attach the without-flag version and add comments about the pubobj_name.
> >
> > Thanks for the changes, the suggested
On Mon, Sep 13, 2021 at 7:06 PM tanghy.f...@fujitsu.com
wrote:
>
> On Sunday, September 12, 2021 11:13 PM vignesh C wrote:
> >
> > Thanks for the changes, the suggested changes make the parsing code
> > simpler. I have merged the changes to the main patch. Attached v27
> > patch has the changes
On 9/9/21 8:38 PM, Jaime Casanova wrote:
On Thu, Sep 09, 2021 at 09:50:46AM +, Aleksander Alekseev wrote:
It looks like this patch needs to be updated. According to
http://cfbot.cputube.org/ it applies but doesn't pass any tests. Changing the
status to save time for reviewers.
The new
On 9/11/21 10:37 PM, Tom Lane wrote:
Aleksander Alekseev writes:
(v2 below is a rebase up to HEAD; no actual code changes except
for adjusting the definition of IS_SPECIAL_VARNO.)
I have looked at this code. No problems found.
Also, as a test, I used two tables with 1E5 partitions each. I
On Mon, Sep 13, 2021 at 11:56:52PM -0400, Sehrope Sarkuni wrote:
> Good catch. Staring at that piece again, that's tricky as it tries to
> aggressively free the buffer before calling write_csvlog(...), which can't
> just be duplicated for additional destinations.
>
> I think we need to pull up
Hi,
I would like to propose a patch that removes the duplicate code
setting database state in the control file.
The patch is straightforward but the only concern is that in
StartupXLOG(), SharedRecoveryState now gets updated only with spin
lock; earlier it also had ControlFileLock in addition to