I think it is desirable that this patch be resubmitted to the next CommitFest
for the further review and testing mentioned above.  So I'd like to mark this
patch as Returned with Feedback.  Is that OK?
Sounds like a good idea. Thanks for the review!
Ian Link


Thursday, October 10, 2013 1:01 AM
Ian Link wrote:
Although I asked this question, I've reconsidered these parameters, and it
seems that they not only make the code rather complex but are also a little
confusing to users.  So I'd like to propose introducing only one parameter:
fast_cache_size.  Users who give weight to update performance from the fast
update technique would set this parameter to a large value, while users who
give weight to search performance as well as update performance would set it
to a small value.  What do you think about this?
I think it makes sense to maintain this separation. If the user doesn't need
a per-index setting, they don't have to use the parameter. Since the parameter
is off by default, they don't even need to worry about it. For users who don't
need fine-grained control, there is effectively just the one parameter. We can
document this, and I don't think it will be confusing to the user.
OK, though I'd like to hear the opinion of others.
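
To illustrate, here is a minimal sketch of that two-level setup (not taken from
the patch: the parameter names come from the discussion above, but the exact
SQL syntax, units, and value format are assumptions):

-- With no per-index setting, a GIN index simply follows the global gin_fast_limit.
CREATE INDEX items_tags_gin ON items USING gin (tags);

-- Only indexes that need different behavior use the per-index knob.
ALTER INDEX items_tags_gin SET (fast_cache_size = 64);   -- assumed to be in kB
ALTER INDEX items_tags_gin RESET (fast_cache_size);      -- back to the global limit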

4. In my understanding, a small value of gin_fast_limit/fast_cache_size leads
to an increase in GIN search performance but a decrease in GIN update
performance.  Am I right?  If so, I think the tradeoff should be noted in the
documentation.
I believe this is correct.

5. The following sections in Chapter 57, GIN Indexes, need to be updated:
 * 57.3.1. GIN Fast Update Technique
 * 57.4. GIN Tips and Tricks
Sure, I can add something.

6. I would like to see the results for the additional test cases
(tsvectors).
I don't really have any good test cases available for this, and I have very
limited time for Postgres at the moment. Feel free to create a test case, but
I don't believe I can right now. Sorry!
Unfortunately, I don't have much time to do that either, though I think we
should.  (In addition, we should run another performance test to clarify how
introducing these parameters affects fast update performance, which would be
needed to determine their default values.)
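
For reference, a rough sketch of what such a tsvector test case could look like
(the table, data volume, and reloption/GUC syntax below are assumptions, not
taken from the patch; gin_fast_limit follows the '256MB'-style value format
used elsewhere in this thread):

CREATE TABLE docs (id serial PRIMARY KEY, body tsvector);
CREATE INDEX docs_body_gin ON docs USING gin (body) WITH (fastupdate = on);

-- Load rows through the pending list under a small cache limit ...
SET gin_fast_limit = '64kB';    -- repeat the run with e.g. '256MB' to compare
INSERT INTO docs (body)
SELECT to_tsvector('english', 'sample document number ' || g)
FROM generate_series(1, 100000) AS g;

-- ... then time a search while entries may still be sitting in the pending list.
EXPLAIN ANALYZE
SELECT count(*) FROM docs
WHERE body @@ to_tsquery('english', 'sample & document');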

7. The commented-out elog() code should be removed.
Sorry about that, I shouldn't have submitted the patch with those still there.

I should have a new patch soonish, hopefully. Thanks for your feedback!
I think it is desirable that this patch be resubmitted to the next CommitFest
for the further review and testing mentioned above.  So I'd like to mark this
patch as Returned with Feedback.  Is that OK?

Thanks,

Best regards,
Etsuro Fujita


Monday, September 30, 2013 3:09 PM
Hi Etsuro,
Sorry for the delay, but I have been very busy with work. I have been away from Postgres for a while, so I will need a little time to review the code and make sure I give you an informed response. I'll get back to you as soon as I am able. Thanks for understanding.
Ian Link


Friday, September 27, 2013 2:24 AM
I wrote:
I had a look over this patch.  I think it is interesting and very useful.
Here are my review points:
8. I think there are no issues in this patch.  However, I have one question:
how does this patch behave when gin_fast_limit/fast_cache_size = 0?  In that
case, in my understanding, the patch inserts new entries into the pending list
temporarily and then immediately moves them to the main GIN data structure
using ginInsertCleanup().  Am I right?  If so, that is obviously inefficient.
Sorry, I worded that incorrectly.  I mean the case where gin_fast_limit > 0 and
fast_cache_size = 0.
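
Concretely, the configuration being asked about would look something like this
(the reloption syntax is assumed, not taken from the patch); as described above,
with the per-index limit at zero every insertion would overflow the pending
list at once and trigger ginInsertCleanup() immediately:

SET gin_fast_limit = '128kB';   -- global limit > 0

-- Per-index limit of zero: entries still pass through the pending list, but
-- are flushed to the main GIN structure on every insert, so the fast-update
-- path adds overhead without ever batching anything.
CREATE INDEX docs_body_gin ON docs USING gin (body)
    WITH (fastupdate = on, fast_cache_size = 0);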

Although I asked this question, I've reconsidered these parameters, and it
seems that they not only make the code rather complex but are also a little
confusing to users.  So I'd like to propose introducing only one parameter:
fast_cache_size.  Users who give weight to update performance from the fast
update technique would set this parameter to a large value, while users who
give weight to search performance as well as update performance would set it
to a small value.  What do you think about this?

Thanks,

Best regards,
Etsuro Fujita


Thursday, September 26, 2013 6:02 AM
Hi Ian,

This patch contains a performance improvement for the fast GIN cache. As you
may know, the performance of the fast GIN cache decreases as its size grows.
Currently, the size of the fast GIN cache is tied to work_mem, which is often
set quite high; such a large value is inappropriate for the fast GIN cache.
Therefore, we created a separate limit called gin_fast_limit. This global
variable controls the size of the fast GIN cache independently of work_mem.
Currently, the default gin_fast_limit is set to 128kB, but that value may need
tweaking; 64kB may work better, though it's hard to say with only my single
machine to test on.
On my machine, this patch results in a nice speedup: our test queries improve
from about 0.9 ms to 0.030 ms. Please feel free to use the test case yourself;
it should be attached. I can look into additional test cases (tsvectors) if
anyone is interested.
In addition to the global limit, we have provided a per-index limit:
fast_cache_size. This per-index limit defaults to -1, which means it is
disabled. If the user does not specify a per-index limit, the index simply
uses the global limit.
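
As a concrete sketch of how the two knobs described above fit together (the
parameter names come from the patch, but the exact GUC/reloption syntax and
units are assumptions):

-- Global limit for the fast GIN cache, independent of work_mem.
SET gin_fast_limit = '128kB';

-- Optional per-index override; the default of -1 means "use the global limit".
CREATE INDEX items_body_gin ON items USING gin (body)
    WITH (fastupdate = on, fast_cache_size = 64);   -- assumed to be in kB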
I had a look over this patch.  I think it is interesting and very useful.
Here are my review points:

1. Patch applies cleanly.
2. make, make install, and make check are all good.
3. I did a performance evaluation using your test queries with gin_fast_limit
(or fast_cache_size) set to 64kB and 128kB, and saw that both values achieved
performance gains over gin_fast_limit = '256MB'.  64kB worked better than
128kB, improving the query time from 1.057 ms to 0.075 ms.  Great!
4. In my understanding, a small value of gin_fast_limit/fast_cache_size leads
to an increase in GIN search performance but a decrease in GIN update
performance.  Am I right?  If so, I think the tradeoff should be noted in the
documentation.
5. The following sections in Chapter 57, GIN Indexes, need to be updated:
 * 57.3.1. GIN Fast Update Technique
 * 57.4. GIN Tips and Tricks
6. I would like to see the results for the additional test cases (tsvectors).
7. The commented-out elog() code should be removed.
8. I think there are no issues in this patch.  However, I have one question:
how does this patch behave when gin_fast_limit/fast_cache_size = 0?  In that
case, in my understanding, the patch inserts new entries into the pending list
temporarily and then immediately moves them to the main GIN data structure
using ginInsertCleanup().  Am I right?  If so, that is obviously inefficient.

Sorry for the delay.

Best regards,
Etsuro Fujita


Monday, June 17, 2013 9:42 PM

This patch contains a performance improvement for the fast GIN cache. As you may know, the performance of the fast GIN cache decreases as its size grows. Currently, the size of the fast GIN cache is tied to work_mem, which is often set quite high; such a large value is inappropriate for the fast GIN cache. Therefore, we created a separate limit called gin_fast_limit. This global variable controls the size of the fast GIN cache independently of work_mem. Currently, the default gin_fast_limit is set to 128kB, but that value may need tweaking; 64kB may work better, though it's hard to say with only my single machine to test on.


On my machine, this patch results in a nice speedup: our test queries improve from about 0.9 ms to 0.030 ms. Please feel free to use the test case yourself; it should be attached. I can look into additional test cases (tsvectors) if anyone is interested.


In addition to the global limit, we have provided a per-index limit: fast_cache_size. This per-index limit defaults to -1, which means it is disabled. If the user does not specify a per-index limit, the index simply uses the global limit.


I would like to thank Andrew Gierth for all his help on this patch. As this is my first patch, he was extremely helpful. The idea for this performance improvement was entirely his; I just did the implementation. Thanks for reading and considering this patch!



Ian Link
