e attached patch for
the same. Please have a look and let me know your thoughts on this. Thanks!
With Regards,
Ashutosh Sharma
EnterpriseDB: http://www.enterprisedb.com
On Sun, May 15, 2016 at 1:26 AM, Fabien COELHO <coe...@cri.ensmp.fr> wrote:
>
> These raw tps suggest that {backend
Note: I am applying this patch on top of commit
"6150a1b08a9fe7ead2b25240be46dddeae9d98e1".
With Regards,
Ashutosh Sharma
EnterpriseDB: http://www.enterprisedb.com
On Fri, Mar 25, 2016 at 9:29 AM, Amit Kapila <amit.kapil...@gmail.com>
Sharma
EnterpriseDB: http://www.enterprisedb.com
On Sat, Mar 26, 2016 at 9:31 PM, Ashutosh Sharma <ashu.coe...@gmail.com>
wrote:
> Hi,
>
> I am getting some reject files while trying to apply "
> pinunpin-cas-5.patch" attac
te mode during recovery");return false;}
Solution: Make use of a global variable "InitializingParallelWorker" to
protect the check for RecoveryInProgress() when a parallel worker is being
initialised.
PFA patch to fix the issue.
With Regards,
Ashutosh Sharma
EnterpriseDB: *htt
Hi,
I am unable to revert 6150a1b0 on top of the recent commits in the master
branch. It seems that some commit made recently has a
dependency on 6150a1b0.
With Regards,
Ashutosh Sharma
EnterpriseDB: http://www.enterprisedb.com
On Sun, Mar 27, 2016 at 5:45 PM, Andres Freund
for all the scenarios discussed above and let me
know your thoughts.
With Regards,
Ashutosh Sharma
EnterpriseDB: http://www.enterprisedb.com
On Sat, Feb 27, 2016 at 9:26 AM, Andres Freund <and...@anarazel.de> wrote:
> On February 26, 2016 7:55:
.
PFA patch for the same.
With Regards,
Ashutosh Sharma
EnterpriseDB: http://www.enterprisedb.com
On Thu, Apr 14, 2016 at 11:57 PM, Magnus Hagander <mag...@hagander.net>
wrote:
> On Thu, Apr 14, 2016 at 8:20 PM, Ashutosh Sharma <ashu.coe...@
the value in proparallel flag, store the
parallel {safe | unsafe | restricted} info in the buffer that holds the
function definition. PFA patch to fix the issue.
With Regards,
Ashutosh Sharma
EnterpriseDB: http://www.enterprisedb.com
diff --git a/src/b
.547176
Please let me know if I need to take readings with other client counts as
well.
Note: I have taken these readings on the postgres master head at
commit 91fd1df4aad2141859310564b498a3e28055ee28
Author: Tom Lane <t...@sss.pgh.pa.us>
Date: Sun May 8 16:53:55 2016 -0400
With R
8 -T 1800 -M prepared postgres
With Regards,
Ashutosh Sharma
EnterpriseDB: http://www.enterprisedb.com
On Thu, May 12, 2016 at 9:22 AM, Robert Haas <robertmh...@gmail.com> wrote:
> On Wed, May 11, 2016 at 12:51 AM, Ashutosh Sharma <ashu.coe
, checkpoint_flush_after=0 : TPS = 18614.479962
With Regards,
Ashutosh Sharma
EnterpriseDB: http://www.enterprisedb.com
On Fri, May 13, 2016 at 7:50 PM, Robert Haas <robertmh...@gmail.com> wrote:
> On Fri, May 13, 2016 at 7:08 AM, Ashutosh Sharma <ashu.coe...@gmail.com>
> wr
8kb), tps = 11434.964545
4) backend_flush_after = 2MB (256*8kb), tps = 13477.089417
Note: The above test was performed on an unpatched master with default
values for checkpoint_flush_after, bgwriter_flush_after
and wal_writer_flush_after.
With Regards,
Ashutosh Sharma
Enterpris
nd pg_xlog is
created then it would be skipped. But that is not the case for pg_replslot
and pg_stat_tmp. Is this not an issue? Should these directories not be
skipped? Please let me know your thoughts on this. Thanks.
With Regards,
Ashutosh Sharma
EnterpriseDB: *http://www.enterprisedb
.
Attached is the patch that fixes this issue.
With Regards,
Ashutosh Sharma
EnterpriseDB: http://www.enterprisedb.com
diff --git a/src/backend/replication/basebackup.c b/src/backend/replication/basebackup.c
index 1008873..5163b85 100644
--- a/src/backend/repli
this lwlock
information is currently missing from the monitoring.sgml file. I have
added the necessary information about the same to this file and
attached the patch. Thanks.
With Regards,
Ashutosh Sharma
EnterpriseDB: http://www.enterprisedb.com
diff --git a/doc/src/sgml/monitoring.sgml b/doc/src/sgm
> Pushed with minor fixes (white space and bumping the count for the tranche).
>
Thanks. I just missed increasing the LWLock count by one.
With Regards,
Ashutosh Sharma
EnterpriseDB: http://www.enterprisedb.com
--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
T
On Fri, Feb 3, 2017 at 8:29 PM, Robert Haas <robertmh...@gmail.com> wrote:
> On Fri, Feb 3, 2017 at 7:41 AM, Ashutosh Sharma <ashu.coe...@gmail.com> wrote:
>> I think UInt32GetDatum(metad->hashm_procid) looks fine, the reason
>> being 'hashm_procid' is defi
er wants it that way.
>
I think it's OK to check hash_bitmap_info() or any other functions
with different page types at least once.
[1]-
https://www.postgresql.org/message-id/CA%2BTgmoZUjrVy52TUU3b_Q5L6jcrt7w5R4qFwMXdeCuKQBmYWqQ%40mail.gmail.com
--
With Regards,
Ashutosh Sharma
Enterprise
>> I think it's OK to check hash_bitmap_info() or any other functions
>> with different page types at least once.
>>
>> [1]-
>> https://www.postgresql.org/message-id/CA%2BTgmoZUjrVy52TUU3b_Q5L6jcrt7w5R4qFwMXdeCuKQBmYWqQ%40mail.gmail.com
>
> Sure, I just don't know if we need to test them 4 or 5
>> 1) Check if an overflow page is a new page. If so, read a bitmap page
>> to confirm if a bit corresponding to this overflow page is clear or
>> not. For this, I would first add Assert statement to ensure that the
>> bit is clear and if it is, then set the statusbit as false indicating
>> that
> As far as I can tell, the hash_bitmap_info() function is doing
> something completely ridiculous. One would expect that the purpose of
> this function was to tell you about the status of pages in the bitmap.
> The documentation claims that this is what the function does: it
> claims that this
On Sat, Jan 28, 2017 at 10:25 PM, Ashutosh Sharma <ashu.coe...@gmail.com> wrote:
> Hi,
>
> Please find my reply inline.
>
>> In hash_bitmap_info(), we go to the trouble of creating a raw page but
>> never do anything with it. I guess the idea here is just to err
d the
> wrong macro it would actually fail. I'm not sure that's really the
> best place to spend our effort, though. The moral of this episode is
> that it's important to at least get the right width. Currently,
> getting the wrong signedness doesn't actually break anyth
won't have any zero
pages in hash indexes so I don't think we need to have a column showing
zero pages (zero_pages). When working on WAL in hash indexes, we found
that the WAL routine 'XLogReadBufferExtended' does not expect a page to be
completely zero, else it returns an invalid buffer. To fix this,
;
Date: Fri Jan 20 20:29:53 2017 -0500
Move some things from builtins.h to new header files
On Thu, Jan 19, 2017 at 12:27 PM, Ashutosh Sharma <ashu.coe...@gmail.com> wrote:
>> However, I've some minor comments on the patch:
>>
>> +/*
>> + * HASH_ALLOCA
modules like pgstattuple where we are just trying to read the
tables.
--
With Regards,
Ashutosh Sharma
EnterpriseDB:http://www.enterprisedb.com
0001-Add-pgstathashindex-to-pgstattuple-extension-v5.patch
>> Secondly, we will have to input overflow block number as an input to
>> this function so as to determine the overflow bit number which can be
>> used further to identify the bitmap page.
>>
>
> I think you can get that from bucket number by using BUCKET_TO_BLKNO.
> You can get bucket number
Hi,
Please find my reply inline.
> In hash_bitmap_info(), we go to the trouble of creating a raw page but
> never do anything with it. I guess the idea here is just to error out
> if the supplied page number is not an overflow page, but there are no
> comments explaining that. Anyway, I'm not
ts = 64
run time duration = 30 mins
read-write workload.
./pgbench -c $threads -j $threads -T $time_for_reading -M prepared
postgres -f /home/ashu/deadlock_report
I hope this makes things clear now and if there are no more
concerns it can be moved to the 'Ready for committer' state. Thank you
w.postgresql.org/message-id/CAE9k0Pke046HKYfuJGcCtP77NyHrun7hBV-v20a0TW4CUg4H%2BA%40mail.gmail.com
--
With Regards,
Ashutosh Sharma
EnterpriseDB:http://www.enterprisedb.com
is bool, you should be using BoolGetDatum().
Sorry to mention, but I didn't find any SQL datatype equivalent to
uint32 or uint16 in C. So, currently I am using int4 for uint16 and
int8 for uint32.
> Apart from the above, I did a little work to improve the reporting
> when a page of the
Thanks for reporting it. This is because of incorrect data typecasting.
Attached is the patch that fixes this issue.
On Tue, Feb 21, 2017 at 2:58 PM, Mithun Cy
wrote:
> On Fri, Feb 10, 2017 at 1:06 AM, Robert Haas
> wrote:
>
> > Alright,
On Tue, Feb 21, 2017 at 4:21 PM, Alexander Korotkov
<a.korot...@postgrespro.ru> wrote:
>
> Hi, Ashutosh!
> On Mon, Feb 20, 2017 at 1:20 PM, Ashutosh Sharma <ashu.coe...@gmail.com>
> wrote:
>>
>> Following are the pgbench results for read-write workload,
On Tue, Feb 21, 2017 at 5:52 PM, Alexander Korotkov
<a.korot...@postgrespro.ru> wrote:
> On Tue, Feb 21, 2017 at 2:37 PM, Andres Freund <and...@anarazel.de> wrote:
>>
>> On 2017-02-21 16:57:36 +0530, Ashutosh Sharma wrote:
>> > Yes, there is still some
ible if you added error bars to each data
> >> >> point. Should be simple enough in gnuplot ...
> >> >
> >> > Good point.
> >> > Please find graph of mean and errors in attachment.
> >>
> >> So ... no difference?
> >
>
On Fri, Feb 24, 2017 at 10:41 AM, Simon Riggs <si...@2ndquadrant.com> wrote:
> On 24 February 2017 at 04:41, Ashutosh Sharma <ashu.coe...@gmail.com> wrote:
>>
>> Okay. As suggested by Alexander, I have changed the order of reading and
>> doing initdb for each pgben
) CPU E7-8830 @ 2.13GHz
Also, Excel sheet (results-readwrite-300-SF) containing the results for all
the 3 runs is attached.
--
With Regards,
Ashutosh Sharma
EnterpriseDB: http://www.enterprisedb.com
On Thu, Feb 23, 2017 at 2:44 AM, Alvaro Herrera
Hi,
I am currently testing this patch on a large machine and will share the
test results in a few days.
Please excuse any grammatical errors as I am using my mobile device. Thanks.
On Feb 11, 2017 04:59, "Tomas Vondra" wrote:
> Hi,
>
> As discussed at the
Hi All,
I too have performed benchmarking of this patch on a large machine
(with 128 CPU(s), 520GB RAM, intel x86-64 architecture) and would like
to share my observations for the same (please note that, as I had to
reverify the readings on a few client counts, it did take some time for me
to share
On Wed, Feb 8, 2017 at 11:26 PM, Robert Haas <robertmh...@gmail.com> wrote:
> On Wed, Feb 8, 2017 at 11:58 AM, Ashutosh Sharma <ashu.coe...@gmail.com>
> wrote:
>>> And then, instead, you need to add some code to set bit based on the
>>> bitmap page, lik
> I think you should just tighten up the sanity checking in the existing
> function _hash_ovflblkno_to_bitno rather than duplicating the code. I
> don't think it's called often enough for one extra (cheap) test to be
> an issue. (You should change the elog in that function to an ereport,
> too,
> FWIW it might be interesting to have comparable results from the same
> benchmarks I did. The scripts available in the git repositories should not
> be that hard to tweak. Let me know if you're interested and need help with
> that.
>
Sure, I will have a look into those scripts once I am done
> point. Should be simple enough in gnuplot ...
>> >
>> > Good point.
>> > Please find graph of mean and errors in attachment.
>>
>> So ... no difference?
>
>
> Yeah, nothing surprising. It's just another graph based on the same data.
> I wonder how pg
> }
>
> Don't you think we should free the allocated memory in this function?
> Also, why are you using 5 as a multiplier in both the above pallocs,
> shouldn't it be 4?
Yes, we should free it. We have used 5 as a multiplier instead of 4
because of the ' ' character.
Apart from above com
, MAIN_FORKNUM, blkno,
> RBM_NORMAL, NULL);
> Use BAS_BULKREAD strategy to read the buffer.
>
okay, corrected. Please check the attached v3 patch with corrections.
With Regards,
Ashutosh Sharma
EnterpriseDB: http://www.enterprisedb.com
0001-Add-pgstathashindex-to-pgstattuple-extension-v3.patch
Des
>
> On 01/18/2017 04:54 AM, Ashutosh Sharma wrote:
>>>
>>> Is there a reason for keeping the input arguments for
>>> hash_bitmap_info() different from hash_page_items()?
>>>
>>
>> Yes, there are two reasons behind it.
>>
>> Firstly, we
error while setting variable
The above error message should also include some expected values.
Please note that I haven't gone through the entire mail chain, so I am not
sure if the above thoughts have already been raised by others. Sorry about
that.
With Regards,
Ashutosh Sharma
EnterpriseDB: http://www.ente
8
Also, it would be great if you could confirm whether you have been
getting this issue repeatedly. Thanks.
With Regards,
Ashutosh Sharma
EnterpriseDB: http://www.enterprisedb.com
Hi,
On Fri, Feb 24, 2017 at 12:22 PM, Ashutosh Sharma <ashu.coe...@gmail.com>
wrote:
On Fri, Feb 24, 2017 at 10:41 AM, Simon Riggs <si...@2ndquadrant.com> wrote:
> > On 24 February 2017 at 04:41, Ashutosh Sharma <ashu.coe...@gmail.com>
> wrote:
> >>
> >
On Tue, Feb 28, 2017 at 11:44 PM, Simon Riggs <si...@2ndquadrant.com> wrote:
> On 28 February 2017 at 11:34, Ashutosh Sharma <ashu.coe...@gmail.com>
> wrote:
>
>
>> So, Here are the pgbench results I got with '
>> *reduce_pgxact_access_AtEOXact.v2.patch*' on
> Thanks to Ashutosh Sharma for doing the testing of the patch and
> helping me in analyzing some of the above issues.
Hi All,
I would like to summarize the test cases that I have executed for
validating the WAL logging in hash index feature.
1) I have mainly run the pgbench test with read
:228
Please let me know for any further inputs.
[1]-
https://www.postgresql.org/message-id/CAE9k0Pmxh-4NAr4GjzDDFHdBKDrKy2FV-Z%2B2Tp8vb2Kmxu%3D6zg%40mail.gmail.com
With Regards,
Ashutosh Sharma
EnterpriseDB: http://www.enterprisedb.com
On Wed, Sep 14, 2016 at 2:45 PM, Ashutosh Sharma <ashu.
%3Dbr9UrxMVn_rhWhKPLaHfEdM5A%40mail.gmail.com
Please note that I am performing the test on the latest patches for
concurrent hash index and WAL logging in hash index shared by Amit
yesterday.
With Regards,
Ashutosh Sharma
EnterpriseDB: http://www.enterprisedb.com
On Wed, Sep 14, 2016 at 12:04 AM, Jesper Pedersen
5). I think we have added enough functions to show the page level
statistics but not the index level statistics like the total number of
overflow pages, bucket pages, number of free overflow pages, number
of bitmap pages, etc. in the hash index. How about adding a function
that would store the in
se category shouldn't it error out.
With Regards,
Ashutosh Sharma
EnterpriseDB: http://www.enterprisedb.com
/dev/null on line 46
Also, I think the USAGE for hash_metap() is referring to
hash_metap_bytea(). Please correct it. I have just started reviewing
the patch and will keep posting my comments upon further review.
Thanks.
With Regards,
Ashutosh Sharma.
EnterpriseDB: http://www.enterprisedb.com
On T
hash index after
the WAL patch for hash index. Please have a look and let me know your
thoughts.
[1] -
https://www.postgresql.org/message-id/CAA4eK1JOBX%3DYU33631Qh-XivYXtPSALh514%2BjR8XeD7v%2BK3r_Q%40mail.gmail.com
With Regards,
Ashutosh Sharma
EnterpriseDB: http://www.enterprisedb.com
On Sat, Aug
Hi,
I forgot to attach the patch in my previous mail; here it is.
With Regards,
Ashutosh Sharma
EnterpriseDB: http://www.enterprisedb.com
On Tue, Aug 23, 2016 at 11:47 AM, Ashutosh Sharma <ashu.coe...@gmail.com>
wrote:
> Hi All,
>
> I have reverified the code co
r1HPJQ%40mail.gmail.com
With Regards,
Ashutosh Sharma
EnterpriseDB: http://www.enterprisedb.com
Hi,
> Well, that change should be part of Amit's patch to add WAL logging,
> not this patch, whose mission is just to improve test coverage.
I have just removed the warning message from the expected output file as I
have performed the testing on Amit's latest patch, which removes this
warning message
ttps://www.postgresql.org/message-id/CAMkU%3D1xRt8jBBB7g_7K41W00%3Dbr9UrxMVn_rhWhKPLaHfEdM5A%40mail.gmail.com
With Regards,
Ashutosh Sharma
EnterpriseDB: http://www.enterprisedb.com
ing this patch further.
> Please correct me if this assessment does not match your expectations.
Thanks for the update. I am absolutely OK with it. I feel it would be
a good idea to review "Exclude additional directories in
pg_basebackup" which also addresses the issue reported by
opinion from
> committer or others as well before adding this target. What do you
> say?
Ok. We can do that.
I have verified the updated v2 patch. The patch looks good to me. I am
marking it as ready for committer. Thanks.
With Regards,
Ashutosh Sharma
EnterpriseDB: http://www.enterprisedb.com
to share with you the next version of the patch for
supporting microvacuum in hash index.
With Regards,
Ashutosh Sharma
EnterpriseDB: http://www.enterprisedb.com
Hi All,
I have added microvacuum support for the hash index access method and
attached is the v1 patch for the same. The patch basically takes care
of the following things:
1. Firstly, it changes the marking of dead tuples from
'tuple-at-a-time' to 'page-at-a-time' during hash index scan. For this
the latest commit in master
branch and could not reproduce it here either. Amit (included in this email
thread) has also tried it once and was also not able to reproduce it.
Could you please let me know if there is something more that needs to be
done in order to reproduce it other than what you have
Hi,
> What about the patch attached to make things more consistent?
I have reviewed this patch. It does serve the purpose and looks sane
to me. I am marking it as ready for committer.
With Regards,
Ashutosh Sharma
EnterpriseDB: http://www.enterprisedb.com
Hi Amit,
Thanks for showing your interest and reviewing my patch. I have
started looking into your review comments. I will share the updated
patch in a day or two.
With Regards,
Ashutosh Sharma
EnterpriseDB: http://www.enterprisedb.com
On Fri, Oct 28, 2016 at 4:42 PM, Amit Kapila <amit.ka
Hi,
> While replaying the delete/vacuum record on standby, it can conflict
> with some already running queries. Basically the replay can remove
> some row which can be visible on standby. You need to resolve
> conflicts similar to what we do in btree delete records (refer
> btree_xlog_delete).
Hi,
I have started with the review of this patch and would like to share
some of my initial review comments that require the author's attention.
1) I am getting some trailing whitespace errors when trying to apply
this patch using the git apply command. Following are the error messages I
am getting.
wrote:
> On Wed, Dec 07, 2016 at 03:16:09PM +0530, Ashutosh Sharma wrote:
> > Problem Analysis:
> > -
> > Although I am very new to Windows, I tried debugging the issue and
> > could find that the backend is not receiving the query executed after
&
Hi,
> Okay, but I think we need to re-enable the existing event handle for
> required event (FD_READ) by using WSAEventSelect() to make it work
> sanely. We have tried something on those lines and it resolved the
> problem. Ashutosh will post a patch on those lines later today. Let
> us know
Hi Michael,
> I bet that this patch breaks many things for any non-WIN32 platform.
It seems to me like you have already identified some issues when
testing. If yes, could you please share them? I have tested my patch on
CentOS-7 and Windows-7 machines and have found no issues. I ran all
the
Hi Michael,
> I have just read the patch, and hardcoding the array position for a
> socket event to 0 assumes that the socket event will be *always* the
> first one in the list. but that cannot be true all the time, any code
> that does not register the socket event as the first one in the list
Hi Michael,
> Ashutosh, could you try it and see if it improves things?
> -
>
Thanks for your patch. I would like to inform you that I didn't find any
improvement with your patch.
With Regards,
Ashutosh Sharma
EnterpriseDB: http://www.enterprisedb.com
suggestions or inputs would be appreciated.
On Tue, Dec 13, 2016 at 9:34 PM, Ashutosh Sharma <ashu.coe...@gmail.com> wrote:
> Hi Michael,
>
>>
>> Ashutosh, could you try it and see if it improves things?
>> -
>
> Thanks for your patch. I would like to inform you
ne 555 + 0x19 bytes C
postgres.exe!mainCRTStartup() Line 371 C
With Regards,
Ashutosh Sharma
EnterpriseDB: http://www.enterprisedb.com
> It is fine as per current usage of WaitEventSet API's, however,
> Robert's point is valid that user is not obliged to call
> ModifyWaitEvent before WaitEventSetWait. Imagine a case where some
> new user of this API is calling WaitEventSetWait repeatedly without
> calling ModifyWaitEvent.
Oops!
I think
that is just because we are trying to avoid the events for SOCKET from
being re-created again and again. So, I think Amit's fix is
absolutely fine and is restricted to Windows. Please do correct me if
my point is wrong. Thank you.
With Regards,
Ashutosh Sharma
EnterpriseDB:http://www.enterpr
ck' to ensure that we do not go
beyond the page size while reading tuples.
ptr = (char *) itup + IndexInfoFindDataOffset(itup->t_info);
+ if (ptr > page + BLCKSZ)
+ /* Error */
dlen = IndexTupleSize(itup) - IndexInfoFindDataOffset(itup->t_info);
Meanwhile, I am working on other
I think this was copied from btreefuncs, but there
> is no buffer release in this code.
Yes, it was copied from btreefuncs and is not required in this case as
we are already passing raw_page as an input to hash_page_items. I have
taken care of it in the updated patch shared up thread.
Wi
Hi,
> +values[j++] = UInt16GetDatum(uargs->offset);
> +values[j++] = CStringGetTextDatum(psprintf("(%u,%u)",
> +
> BlockIdGetBlockNumber(&(itup->t_tid.ip_blkid)),
> +itup->t_tid.ip_posid));
> +
> +ptr = (char *) itup +
> +
> Please remove extra spaces.
Done. Please refer to the v2 patch.
>
> And, please add some test cases for regression tests.
>
Added a test case. Please check the v2 patch attached to this mail.
--
With Regards,
Ashutosh Sharma.
EnterpriseDB: http://www.enterprisedb.com
0001-Add-p
r hash index. In fact, I will try to modify an already
existing patch by Peter.
[1]-https://www.postgresql.org/message-id/bcf8c21b-702e-20a7-a4b0-236ed2363d84%402ndquadrant.com
--
With Regards,
Ashutosh Sharma
EnterpriseDB: http://www.enterprisedb.com
objects. Other than that,
I feel the patch looks good and has no bugs.
--
With Regards,
Ashutosh Sharma.
EnterpriseDB: http://www.enterprisedb.com
e changed the status of this patch to "Needs review" for
this commit-fest.
[1]
https://www.postgresql.org/message-id/a751842f-2aed-9f2e-104c-34cfe06bfbe2%40redhat.com
With Regards,
Ashutosh Sharma.
EnterpriseDB: http://www.enterprisedb.com
microvacuum_hash_index_v3.patch
ttached v4 patch fixes this assertion failure.
> BTW, better rename 'hashkillitems' to '_hash_kill_items' to follow the
> naming convention in hash.h
okay, I have renamed 'hashkillitems' to '_hash_kill_items'. Please
check the attached v4 patch.
With Regards,
Ashutosh Sharma
EnterpriseDB: http
ere as existing
> installations will use the upgrade scripts.
>
> Hence I don't see a reason why we should keep pageinspect--1.5.sql around
> when we can provide a clear interface description in a pageinspect--1.6.sql.
okay, agreed.
With Regards,
Ashutosh Sharma
EnterpriseDB: http://www.enterprise
s the issue.
>
> Also, the src/backend/access/README file should be updated with a
> description of the changes which this patch provides.
okay, I have updated the insertion algorithm in the README file.
--
With Regards,
Ashutosh Sharma
EnterpriseDB: http://www.enterprisedb.com
this function. Attached is the patch for
the same. Please have a look and let me know your feedback. I would also
like to mention that this idea basically came from my colleague Kuntal
Ghosh and I implemented it. I have also created a commit-fest entry
for this submission. Thanks.
With Regards,
Ashutosh
Hi All,
I have spent some time reviewing the latest v8 patch from Jesper
and found some issues which I would like to list down:
1) Firstly, the DATA section in the Makefile is referring to
the "pageinspect--1.6.sql" file, which is currently missing.
+DATA = pageinspect--1.6.sql
Hi Jesper,
> I was planning to submit a new version next week for CF/January, so I'll
> review your changes with the previous feedback in mind.
>
> Thanks for working on this !
As I had not seen any updates from you for the last month, I thought
of working on it. I have created a commit-fest
>> detail of non-default GUC params and pgbench command are mentioned in
>> the result sheet. I also did the benchmarking with unique values at
>> 300 and 1000 scale factor and its results are provided in
>> 'results-unique-values-default-ff'.
>>
>
> I'm seeing similar
than HEAD; with 7 and 10 SP(s) we do see a regression with the patch.
Therefore, I think the threshold value of 4 for the number of subtransactions
considered in the patch looks fine to me.
--
With Regards,
Ashutosh Sharma
EnterpriseDB: http://www.enterprisedb.com
On Tue, Mar 21, 2017 at 6:19 PM, Amit
On Thu, Mar 23, 2017 at 9:17 AM, Amit Kapila <amit.kapil...@gmail.com> wrote:
>
> On Thu, Mar 23, 2017 at 8:43 AM, Amit Kapila <amit.kapil...@gmail.com> wrote:
> > On Wed, Mar 22, 2017 at 3:39 PM, Ashutosh Sharma <ashu.coe...@gmail.com>
> > wrote:
> >&g
a associated with registered buffers into the WAL
record followed by the main data. Hence, the WAL records in btree
and hash are organised differently.
--
With Regards,
Ashutosh Sharma
EnterpriseDB: http://www.enterprisedb.com
On Fri, Mar 24, 2017 at 9:21 AM, Amit Kapila <amit.kapil...@gmail.com> wrote:
> On Thu, Mar 23, 2017 at 6:46 PM, Ashutosh Sharma <ashu.coe...@gmail.com>
> wrote:
>>>
>>> Oh, okay, but my main objection was that we should not check hash page
>>&g
On Fri, Mar 24, 2017 at 9:46 AM, Ashutosh Sharma <ashu.coe...@gmail.com> wrote:
> On Fri, Mar 24, 2017 at 9:21 AM, Amit Kapila <amit.kapil...@gmail.com> wrote:
>> On Thu, Mar 23, 2017 at 6:46 PM, Ashutosh Sharma <ashu.coe...@gmail.com>
>> wrote:
>>
>>> _hash_vacuum_one_page()
>>> {
>>> ..
>>> deletable[ndeletable++] = offnum;
>>> tuples_removed += 1;
>>>
>>
>> Yes, I think 'ndeletable' alone should be fine.
>>
>
> I think it would probably have been okay to use *int* for ntu
On Thu, Mar 23, 2017 at 8:29 PM, Jesper Pedersen
<jesper.peder...@redhat.com> wrote:
> Hi,
>
> On 03/22/2017 09:32 AM, Ashutosh Sharma wrote:
>>
>> Done. Please refer to the attached v2 version of patch.
>>
>
> Thanks.
>
>>>> 1) 0001-R