On Tuesday, September 24, 2019 5:41 PM (GMT+9), Fujii Masao wrote:
> On Thu, Sep 19, 2019 at 9:42 AM Jamison, Kirk
> wrote:
> >
> > On Wednesday, September 18, 2019 8:38 PM, Fujii Masao wrote:
> > > On Tue, Sep 17, 2019 at 10:44 AM Jamison, Kirk
> > >
On Wednesday, September 18, 2019 8:38 PM, Fujii Masao wrote:
> On Tue, Sep 17, 2019 at 10:44 AM Jamison, Kirk
> wrote:
> >
> > On Friday, September 13, 2019 10:06 PM (GMT+9), Fujii Masao wrote:
> > > On Fri, Sep 13, 2019 at 9:51 PM Alvaro Herrera
> > >
On Friday, September 13, 2019 10:06 PM (GMT+9), Fujii Masao wrote:
> On Fri, Sep 13, 2019 at 9:51 PM Alvaro Herrera
> wrote:
> >
> > On 2019-Sep-13, Fujii Masao wrote:
> >
> > > On Mon, Sep 9, 2019 at 3:52 PM Jamison, Kirk
> wrote:
> >
On Friday, September 6, 2019 11:51 PM (GMT+9), Alvaro Herrera wrote:
Hi Alvaro,
Thank you very much for the review!
> On 2019-Sep-05, Jamison, Kirk wrote:
>
> > I also mentioned it from my first post if we can just remove this dead code.
> > If not, it would require to m
On Tuesday, September 3, 2019 9:44 PM (GMT+9), Fujii Masao wrote:
> Thanks for the patch!
Thank you as well for the review!
> -smgrdounlinkfork(SMgrRelation reln, ForkNumber forknum, bool isRedo)
> +smgrdounlinkfork(SMgrRelation reln, ForkNumber *forknum, int nforks,
> bool isRedo)
>
>
On Monday, August 19, 2019 10:39 AM (GMT+9), Masahiko Sawada wrote:
> Fixed.
>
> Attached the updated version patch.
Hi Sawada-san,
I haven't tested it with heavily updated large tables, but I think the patch
is reasonable as it helps to shorten the execution time of vacuum by removing
the
Hi,
> From: Jamison, Kirk [mailto:k.jami...@jp.fujitsu.com]
> Sent: Monday, July 29, 2019 10:53 AM
> To: 'Tomas Vondra'
> Cc: Dean Rasheed ; PostgreSQL Hackers
>
> Subject: RE: Multivariate MCV list vs. statistics target
>
> On Saturday, July 27, 2019 7:06 AM (GMT+9), Tomas Vondra wrote:
On Saturday, July 27, 2019 7:06 AM (GMT+9), Tomas Vondra wrote:
> On Fri, Jul 26, 2019 at 07:03:41AM +0000, Jamison, Kirk wrote:
> >On Sat, July 20, 2019 8:12 AM (GMT+9), Tomas Vondra wrote:
> >
> >> >+ /* XXX What if the target is se
On Sat, July 20, 2019 8:12 AM (GMT+9), Tomas Vondra wrote:
> >+/* XXX What if the target is set to 0? Reset the statistic?
> */
> >
> >This also makes me wonder. I haven't looked deeply into the code, but
> >since 0 is a valid value, I believe it should reset the stats.
>
> I agree
Hi,
I repeated the earlier recovery performance test and found out that I had made
a wrong measurement.
Using the same steps indicated in the previous email (24GB shared_buffers in
my case),
the recovery time still improved significantly compared to head,
from 13 minutes to 4 minutes 44 seconds.
On Tuesday, July 9, 2019, Tomas Vondra wrote:
> >apparently v1 of the ALTER STATISTICS patch was a bit confused about
> >length of the VacAttrStats array passed to statext_compute_stattarget,
> >causing segfaults. Attached v2 patch fixes that, and it also makes sure
> >we print warnings about
Hi Thomas,
Thanks for checking.
> On Fri, Jul 5, 2019 at 3:03 PM Jamison, Kirk wrote:
> > I updated the patch which is similar to V3 of the patch, but
> > addressing my problem in (5) in the previous email regarding
> FreeSpaceMapVacuumRange.
> > It seems to pass the reg
Hi,
> I updated the patch based on the comments, but it still fails the regression
> test as indicated in (5) above.
> Kindly verify whether I correctly addressed the other parts as you intended.
>
> Thanks again for the review!
> I'll update the patch again after further comments.
I updated the
On Tuesday, July 2, 2019 4:59 PM (GMT+9), Masahiko Sawada wrote:
> Thank you for updating the patch. Here are the review comments for the v2 patch.
Thank you so much for review!
I indicated the addressed parts below and attached the updated patch.
---
visibilitymap.c: visibilitymap_truncate()
> I
On Wednesday, June 26, 2019 6:10 PM(GMT+9), Adrien Nayrat wrote:
> As far as I remember, you should see "relation" wait events (type lock) on
> standby server. This is due to startup process acquiring AccessExclusiveLock
> for the truncation and other backend waiting to acquire a lock to read the
Hi all,
Attached is v2 of the patch. I added the optimization that Sawada-san
suggested for DropRelFileNodeBuffers, although I did not acquire the lock
when comparing the minBlock and the target block.
There's actually a comment written in the source code that we could
pre-check buffer tag for
Hi Sawada-san,
On Thursday, June 13, 2019 8:01 PM, Masahiko Sawada wrote:
> On Thu, Jun 13, 2019 at 6:30 PM Jamison, Kirk
> wrote:
> >
> > On Wednesday, June 12, 2019 4:29 PM (GMT+9), Masahiko Sawada wrote:
> > > ...
> > > I've not look at this patch d
On Wednesday, June 12, 2019 4:29 PM (GMT+9), Masahiko Sawada wrote:
> On Wed, Jun 12, 2019 at 12:25 PM Tsunakawa, Takayuki
> wrote:
> >
> > From: Tomas Vondra [mailto:tomas.von...@2ndquadrant.com]
> > > Years ago I've implemented an optimization for many DROP TABLE
> > > commands in a single
On Tuesday, June 11, 2019 7:23 PM (GMT+9), Adrien Nayrat wrote:
> > Attached is a patch to speed up the performance of truncates of relations.
>
> Thanks for working on this!
Thank you also for taking a look at my thread.
> > If you want to test with large number of relations,
> > you may use
Hi Aya-san,
I tested your v3 patch and it seemed to work on my Linux environment.
However, the CF Patch Tester detected a build failure (probably on Windows).
Refer to: http://commitfest.cputube.org/
Docs:
It would be better to have a reference to the documentation of the
Frontend/Backend Protocol's
Hello Fujii-san,
On April 18, 2018, Fujii Masao wrote:
> On Fri, Mar 30, 2018 at 12:18 PM, Tsunakawa, Takayuki
> wrote:
>> Furthermore, TRUNCATE has a similar and worse issue. While DROP TABLE
>> scans the shared buffers once for each table, TRUNCATE does that for
>> each fork, resulting
Hi,
I found some minor grammar mistakes while reading the reloptions.c code comments.
Attached is the fix.
I just changed "affect" to "effect", for both n_distinct and vacuum_truncate.
- * values has no affect until the ...
+ * values has no effect until the ...
Regards,
Kirk Jamison
On Monday, April 8, 2019 9:04 AM (GMT+9), Haribabu Kommi wrote:
>On Thu, Apr 4, 2019 at 3:29 PM Amit Kapila <amit.kapil...@gmail.com> wrote:
>On Wed, Apr 3, 2019 at 10:45 AM Haribabu Kommi <kommi.harib...@gmail.com> wrote:
>>
>> Thanks for the review.
>>
>> While changing the
On Thursday, April 4, 2019 5:20 PM (GMT+9), Ryohei Nagaura wrote:
> > From: Fabien COELHO
> > * About socket_timeout v12 patch, I'm not sure there is a consensus.
> I think so too. I just made the patch so that it can be committed at any time.
>
> Finally, I rebased all the patches because they have
On Thursday, March 28, 2019 3:13 PM (GMT+9), Haribabu Kommi wrote:
> I tried the approach you suggested: by not counting the actual parallel-work
> transactions and just releasing the resources without touching the counters,
> the counts are not reduced much.
>
> HEAD - With 4 parallel
Hi again,
Since Nagaura-san seems to have addressed the additional comments on the
TCP user timeout patches, I am keeping the patch set's status as
Ready for Committer.
As for socket_timeout, I agree that this can be discussed further.
>Fabien Coelho wrote:
>> I still think that this
Hi Nagaura-san,
Thank you for the updated patches.
This has become a long thread now, but that's okay; you've done a good amount of work.
There are 3 patches in total: 2 for tcp_user_timeout parameter, 1 for
socket_timeout.
A. TCP_USER_TIMEOUT
Since I've been following the updates, I compiled a
Hi,
>The socket_timeout patch needs the following fixes. Now that others have
>already tested these patches successfully, they appear committable to me.
In addition, regarding the socket_timeout parameter:
I referred to the doc in libpq.sgml, corrected misspellings,
and rephrased the doc a
Hi Nagaura-san,
>I updated my patches.
Thanks.
>In TCP_USER_TIMEOUT backend patch:
> 1) linux ver 2.6.26 -> 2.6.36
"Linux" should be capitalized.
I confirmed that you followed Horiguchi-san's advice
to base the doc on keepalives*.
About your question:
> 3) Same as keepalives*, I used both
On Tuesday, March 26, 2019 2:35 PM (GMT+9), Ryohei Nagaura wrote:
>> Your patch applies, however in TCP_backend_v10 patch, your
>> documentation is missing a closing tag so it could not be
>> tested.
>> When that's fixed, it passes the make check.
>Oops! Fixed.
Ok. Confirmed the fix.
Minor
Hi Nagaura-san,
On Monday, March 25, 2019 2:26 PM (GMT+9), Ryohei Nagaura wrote:
>Yes, I want to commit TCP_USER_TIMEOUT patches in PG12.
>Also I'd like to continue discussing socket_timeout after this CF.
Ok. So I'll only take a look at the TCP_USER_TIMEOUT parameter for now (this
I tried to confirm the patch with the following configuration:
max_parallel_workers_per_gather = 2
autovacuum = off
postgres=# BEGIN;
BEGIN
postgres=# select xact_commit from pg_stat_database where datname = 'postgres';
xact_commit
-------------
118
(1 row)
postgres=# explain
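The preview above is cut off at the final EXPLAIN; the original query was not preserved in the archive. A hypothetical continuation (the table name `t` is mine, not from the thread) would run a query that can engage the two configured parallel workers, then re-read the counter after commit:

```sql
-- Hypothetical continuation of the truncated session: run a parallel-capable
-- query, commit, then read xact_commit again to observe how parallel-worker
-- transactions affect the per-database commit counter.
EXPLAIN (ANALYZE) SELECT count(*) FROM t;
COMMIT;
SELECT xact_commit FROM pg_stat_database WHERE datname = 'postgres';
```

With autovacuum off, any increase beyond the session's own commit would come from the parallel workers being counted, which is the behavior under discussion.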
Hi Hari-san,
On Sunday, February 10, 2019 2:25 PM (GMT+9), Haribabu Kommi wrote:
> I tried to fix it by adding a check for whether the process is a parallel
> worker, and counting transactions into the stats based on that. Patch attached.
>
> With this patch, currently it doesn't count parallel worker transactions, and
> rest of the
On Saturday, March 16, 2019 5:40 PM (GMT+9), Fabien COELHO wrote:
> > Fabien, I was wondering whether you can apply TCP_USER_TIMEOUT patch
> > and continue discussion about 'socket_timeout'?
> I can apply nothing, I'm just a small-time reviewer.
> Committers on the thread are Michaël-san and
Hi Jesper,
> Thanks Kirk !
>
> > On 3/12/19 2:20 PM, Robert Haas wrote:
> > The words 'by default' should be removed here, because there is also
> > no non-default way to get that behavior, either.
> >
>
> Here is v9 based on Kirk's and your input.
Thanks! Although there were trailing
Hi Jesper,
Sorry, I almost completely forgot to get back to you on this.
Actually, your patch worked when I tested it before,
and I understand the intention.
Although a point was raised by other developers about making
--jobs optional in the suggested line by using the env variable instead.
> >
Hi Fabien,
>> See the attached latest patch.
>> The attached patch applies, builds cleanly, and passes the regression
>> tests.
>
> No problems on my part as I find the changes logical.
> This also needs a confirmation from Alvaro.
>
> Ok.
>
> You switched the patch to "waiting on author": What
Hi Fabien and Alvaro,
I found that I have already reviewed this thread before,
so I tried to apply the patch, but part of the chunk failed
because of the unused line below, which was already removed in a
recent related commit.
> PGResult *res;
I removed the line and fixed the
On Sunday, March 3, 2019 4:09 PM (GMT+9), Fabien COELHO wrote:
>Basically same thing about the tcp_user_timeout guc v8, especially:
> do you have any advice about how I can test the feature, i.e.
> trigger a timeout?
>
>> Patch applies & compiles cleanly. Global check is ok, although there
>> are
Hi Nagaura-san,
Your socket_timeout patch still does not apply with either the
git or the patch command. It says it's still corrupted.
I'm not sure about a workaround, because --ignore-space-change
and --ignore-whitespace did not work for me.
It might have something to do with your editor
On Monday, February 25, 2019 1:49 PM (GMT+9), Nagaura, Ryohei wrote:
> Thank you for discussion.
> I made documentation about socket_timeout and reflected Tsunakawa-san's
> comment in the new patch.
> # Mainly only fixing documentation...
> The current documentation of socket_timeout is as
On Wednesday, February 20, 2019 12:56 PM GMT+9, Robert Haas wrote:
> On Mon, Feb 18, 2019 at 10:06 PM Jamison, Kirk
> wrote:
> > It sounds more logical to me if there is a parameter that switches
> > the logging on/off, similar to other postgres logs. I suggest trace_log as
On Friday, February 22, 2019 9:46 AM (GMT+9), Tsunakawa, Takayuki wrote:
> > Terminate any session that has been idle for more than the
> > specified number of seconds to prevent client from infinite
> > waiting for server due to dead connection. This can be used both as a
> > brute force
Hi,
> > tcp_socket_timeout (integer)
> >
> > Terminate and restart any session that has been idle for more than
> > the specified number of milliseconds to prevent client from infinite
> > waiting for server due to dead connection. This can be used both as
> > a brute force global query timeout
On Thursday, February 21, 2019 2:56 PM (GMT+9), Tsunakawa, Takayuki wrote:
>> 1) tcp_user_timeout parameter
>> I think this can be "committed" separately when it's finalized.
> Do you mean you've reviewed and tested the patch by simulating a
> communication failure in the way Nagaura-san
Hi,
I tried to re-read the whole thread.
Based on what I read, there are two proposed timeout parameters,
which I think can be discussed and committed separately:
(1) tcp_user_timeout
(2) tcp_socket_timeout (or suggested client_statement_timeout,
Hi Iwata-san,
Currently, the patch fails to build according to the CF app.
As you know, it has something to do with the misspelling of a function name:
GetTimezoneInformation --> GetTimeZoneInformation
It sounds more logical to me if there is a parameter that switches the logging
on/off,
similar to other
On February 14, 2019 6:16 PM +0000, Andres Freund wrote:
> Hi,
> On 2018-11-28 23:20:03 +0100, Peter Eisentraut wrote:
> > This does not excite me. It seems mostly redundant with using tcpdump.
> I think the one counter-argument to this is that using tcpdump in real-world
> scenarios has
Hi,
On Monday, February 4, 2019 2:15 AM +0000, Michael Paquier wrote:
> On Tue, Dec 04, 2018 at 04:07:34AM +, Ideriha, Takeshi wrote:
> > Sure. I didn't have a strong opinion about it, so it's ok.
> From what I can see this is waiting input from a native English speaker, so
> for now I have
On February 6, 2019, 8:57 AM +0000, Andres Freund wrote:
> Maybe I'm missing something here, but why is it actually necessary to
> have the sizes in shared memory, if we're just talking about caching
> sizes? It's pretty darn cheap to determine the filesize of a file that
> has been recently
On February 6, 2019, 08:25 AM +0000, Kyotaro HORIGUCHI wrote:
>At Wed, 6 Feb 2019 06:29:15 +0000, "Tsunakawa, Takayuki"
> wrote:
>> Although I haven't looked deeply at Thomas's patch yet, there's currently no
>> place to store the size per relation in shared memory. You have to wait for
>> the
Hi,
On February 1, 2019 8:14 PM +0000, Jesper Pedersen wrote:
> On 2/1/19 4:58 AM, Alvaro Herrera wrote:
>> On 2019-Feb-01, Jamison, Kirk wrote:
>>> I'm not sure if I misunderstood the purpose of $VACUUMDB_OPTS. I
>>> thought what the other developers su
On January 31, 2019, 9:29 PM +0000, Jesper Pedersen wrote:
>>> I added most of the documentation back, as requested by Kirk.
>>
>> Actually, I find it useful to have it documented. However, major contributors'
>> opinions carry more weight given their experience, so I think it's alright with me
>> not to
On February 1, 2019, Tsunakawa, Takayuki wrote:
>> As most people seem to agree adding the reloption, here's the patch.
>> It passes make check, and works like this:
>Sorry, I forgot to include the modified header file. Revised patch attached.
Thank you for this.
I applied the patch. It
On January 29, 2019 8:19 PM +0000, Jesper Pedersen wrote:
>On 1/29/19 12:08 AM, Michael Paquier wrote:
>> I would document the optional VACUUM_OPTS on the page of pg_upgrade.
>> If Peter thinks it is fine to not do so, that's fine for me as well.
>>
..
>I added most of the documentation back, as
Hi Jesper,
Thanks for updating your patches quickly.
>On 1/28/19 3:50 PM, Peter Eisentraut wrote:
>>> Done, and v4 attached.
>>
>> I would drop the changes in pgupgrade.sgml. We don't need to explain
>> what doesn't happen, when nobody before now ever thought that it would
>> happen.
>>
>>
Hi Jesper,
On Friday, January 25, 2019, Jesper Pedersen
> Thanks for your feedback !
>
> As per Peter's comments I have changed the patch (v2) to not pass down the -j
> option to vacuumdb.
>
> Only an update to the documentation and console output is made in order to
> make it more clear.
Hi,
According to the CF app, this patch needs review, so I took a look at it.
Currently, your patch applies and builds cleanly, and all tests are
successful based on the CF bot patch tester.
I'm not sure if I have understood the argument raised by Peter correctly.
Quoting Peter, "it's not
>On Thu, Nov 15, 2018 at 2:30 PM Alvaro Herrera
>wrote:
>>
>> On 2018-Nov-15, Laurenz Albe wrote:
>>
> > > This new option would not only mitigate the long shared_buffers
> > > scan, it would also get rid of the replication conflict caused by
> > > the AccessExclusiveLock taken during
Hi,
Here are a few minor fixes to the md.c comments.
src/backend/storage/smgr/md.c
1. @L174 - removed the unnecessary word "is".
- […] Note that this is breaks mdnblocks() and related functionality [...]
+ […] Note that this breaks mdnblocks() and related functionality [...]
2. @L885 - grammar fix
- We
Hi Thomas,
On Friday, December 28, 2018 6:43 AM Thomas Munro
wrote:
> [...]if you have ideas about the validity of the assumptions, the reason it
> breaks initdb, or any other aspect of this approach (or alternatives), please
> don't let me stop you, and of course please feel free to submit
Hello,
I also find this proposed feature to be beneficial for performance, especially
when we want to extend or truncate large tables.
As mentioned by David, there is currently a query latency spike when we build
a generic plan for a partitioned table with many partitions.
I tried to apply Thomas'
Hello Fabien,
I have decided to take a look into this patch.
--
patching file src/bin/pgbench/pgbench.c
Hunk #1 succeeded at 296 (offset 29 lines).
[…Snip…]
Hunk #21 succeeded at 5750 (offset 108 lines).
patching file src/include/portability/instr_time.h
….
===
All 189 tests
Hi,
I appreciate the feedback and suggestions.
On Tue, Jul 31, 2018 at 8:01 AM, Robert Haas wrote:
>> How would this work if a relfilenode number that belonged to an old
>> relation got recycled for a new relation?
>> ..
>> I think something like this could be made to work -- both on the
>>
Hi Andres,
> I think this is a case where the potential workarounds are complex enough to
> use significant resources to get right, and are likely to make properly
> fixing the issue harder. I'm willing to comment on proposals that claim not
> to be problematic in those regards, but I have
Hi,
Thank you for your replies.
On Tue, July 10, 2018 4:15 PM, Andres Freund wrote:
>I think you'd run into a lot of very hairy details with this approach.
>Consider what happens if client processes need fresh buffers and need to write
>out a victim buffer. You'll need to know that the relevant
Hello hackers,
Recently, the problem of slow drops/truncates of multiple tables
in a single transaction with large shared_buffers (as shown below) was solved
by commit b416691.
BEGIN;
truncate tbl001;
...
truncate tbl050;
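The quoted example can be reproduced without writing out all fifty statements; a sketch using PL/pgSQL (the tbl001..tbl050 names follow the quoted example, and the tables are assumed to already exist):

```sql
-- Sketch: issue TRUNCATE for tbl001 .. tbl050 in a single transaction,
-- matching the pattern shown in the mail above.
BEGIN;
DO $$
BEGIN
  FOR i IN 1..50 LOOP
    EXECUTE format('TRUNCATE %I', 'tbl' || lpad(i::text, 3, '0'));
  END LOOP;
END $$;
COMMIT;
```

Running this against a server with a large shared_buffers setting is what exposed the pre-b416691 slowdown, since each relation's buffers had to be found in the shared buffer pool before commit.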
Hi Fujii-san,
I came across this post and got interested in it,
so I tried to apply/test the patch, but I am not sure if I did it correctly.
I set up master-slave sync, 200GB shared_buffers, 2
max_locks_per_transaction,
1 DB with 500 table partitions spread evenly across 5 tablespaces.