On Thu, Sep 21, 2017 at 7:25 AM, Robert Haas <robertmh...@gmail.com> wrote:
> On Wed, Sep 20, 2017 at 9:46 PM, Mithun Cy <mithun...@enterprisedb.com> wrote:
>> I think there is some confusion; the above results are for pgbench simple
>> update (-N) tests where cached snapsh
On Wed, Sep 20, 2017 at 11:45 PM, Robert Haas <robertmh...@gmail.com> wrote:
> On Tue, Aug 29, 2017 at 1:57 AM, Mithun Cy <mithun...@enterprisedb.com>
wrote:
>> All TPS numbers are the median of 3 runs.
>> Clients | TPS with Patch 05 | TPS Base | %Diff
On Mon, Sep 18, 2017 at 1:38 PM, Mithun Cy <mithun...@enterprisedb.com> wrote:
> On Sat, Sep 16, 2017 at 3:03 AM, Andres Freund <and...@anarazel.de> wrote:
> So I think the performance gain is visible. We saved a good amount of
> execution cycles in SendRowDescriptionMessage whe
On Wed, Sep 13, 2017 at 7:24 PM, Jesper Pedersen
wrote:
> I have done a run with this patch on a 2S/28C/56T/256Gb w/ 2 x RAID10 SSD
> machine.
>
> Both for -M prepared, and -M prepared -S I'm not seeing any improvements (1
> to 375 clients); e.g. +-1%.
My test was
On Thu, Aug 3, 2017 at 2:16 AM, Andres Freund wrote:
>I think this patch should have a "cover letter" explaining what it's
> trying to achieve, how it does so and why it's safe/correct.
Cache the SnapshotData for reuse:
===
In one of our perf analyses
Hi all, please ignore this mail; it is incomplete. I have to add
more information. Sorry, I accidentally pressed the send button
while replying.
On Tue, Aug 29, 2017 at 11:27 AM, Mithun Cy <mithun...@enterprisedb.com> wrote:
> Cache the SnapshotData
's a way to avoid copying the snapshot into the cache
> in situations where we're barely ever going to need it. But right now
> the only way I can think of is to look at the length of the
> ProcArrayLock wait queue and count the exclusive locks therein - which'd
> be expensive and intrusive...
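The caching scheme being discussed can be sketched standalone. Everything below (names, fields, the single global, the counter) is a simplification I made up for illustration; the real patch stores SnapshotData in shared memory under ProcArrayLock, not in a process-local global:

```c
#include <assert.h>

/* Hypothetical, single-process sketch of the caching idea: the first
 * snapshot request after an invalidation computes a snapshot and stores
 * it; later requests reuse the stored copy; a transaction end (commit)
 * invalidates it, since the set of running transactions changed. */
typedef struct
{
    unsigned xmin;
    unsigned xmax;
    int      valid;
} CachedSnapshotSketch;

static CachedSnapshotSketch snapshot_cache;
static int snapshot_computations;   /* counts the "expensive" path */

/* Stand-in for scanning the proc array under ProcArrayLock. */
static CachedSnapshotSketch
compute_snapshot(unsigned xmin, unsigned xmax)
{
    CachedSnapshotSketch snap = {xmin, xmax, 1};

    snapshot_computations++;
    return snap;
}

static CachedSnapshotSketch
get_snapshot(unsigned xmin, unsigned xmax)
{
    if (!snapshot_cache.valid)
        snapshot_cache = compute_snapshot(xmin, xmax);
    return snapshot_cache;          /* cheap copy of the cached data */
}

/* A commit changes transaction visibility, so the cache dies with it. */
static void
transaction_end(void)
{
    snapshot_cache.valid = 0;
}
```

Andres's concern above maps onto this sketch as the cost of filling `snapshot_cache` when no other backend will ever call `get_snapshot()` before the next `transaction_end()`.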
On Mon, Aug 28, 2017 at 5:50 PM, Tom Lane wrote:
> I think that's probably dead code given that ExecutorRun short-circuits
> everything for NoMovementScanDirection. There is some use of
> NoMovementScanDirection for indexscans, to denote an unordered index,
> but likely that
Hi all,
I was trying to study the NoMovementScanDirection part of heapgettup() and
heapgettup_pagemode(). If I am right, there is no test in the test suite that
hits this code; I ran make check-world and could not hit it. Also, the
coverage report in
On Fri, Aug 18, 2017 at 9:43 PM, Robert Haas <robertmh...@gmail.com> wrote:
> On Fri, Aug 18, 2017 at 2:23 AM, Mithun Cy <mithun...@enterprisedb.com> wrote:
> Ah, good catch. While I agree that there is probably no great harm
> from skipping the lock here, I think it would be
On Wed, Aug 16, 2017 at 2:08 AM, Robert Haas <robertmh...@gmail.com> wrote:
> On Fri, Jul 14, 2017 at 8:17 AM, Mithun Cy <mithun...@enterprisedb.com> wrote:
>> [ new patch ]
> It's quite possible that in making all of these changes I've
> introduced some bugs, so I th
On 3 Aug 2017 2:16 am, "Andres Freund" wrote:
Hi Andres, thanks for the detailed review. I agree with all of the comments.
I am going on vacation; once I come back I will fix those comments and
submit a new patch.
On Wed, Aug 2, 2017 at 3:42 PM, Mithun Cy <mithun...@enterprisedb.com> wrote:
Sorry, there was an unnecessary header included in proc.c which should
be removed; attaching the corrected patch.
--
Thanks and Regards
Mithun C Y
EnterpriseDB: http://www.enterprisedb.com
end of the
transaction and others can reuse the cached snapshot. In the
second approach, every process has to re-compute the snapshot. So I am
sticking with the same approach.
On Mon, Jul 10, 2017 at 10:13 AM, Mithun Cy <mithun...@enterprisedb.com> wrote:
> On Fri, Apr 8, 2016 at 12:13 PM
On Thu, Jul 6, 2017 at 10:52 AM, Amit Kapila wrote:
>>> 3.
>> -- I do not think that is true; pages of an unlogged table are also
>> read into buffers for read-only purposes. So if we fail to dump them
>> at shutdown, the previous dump should be used.
>>
>
> I
On Fri, Apr 8, 2016 at 12:13 PM, Robert Haas wrote:
> I think that we really shouldn't do anything about this patch until
> after the CLOG stuff is settled, which it isn't yet. So I'm going to
> mark this Returned with Feedback; let's reconsider it for 9.7.
I am updating
On Thu, Jul 6, 2017 at 10:52 AM, Amit Kapila wrote:
> I am not able to understand what you want to say. Unlogged tables
> should be empty in case of crash recovery. Also, we never flush
> unlogged buffers except for shutdown checkpoint, refer BufferAlloc and
> in
On Mon, Jul 3, 2017 at 3:34 PM, Amit Kapila wrote:
>
> Few comments on the latest patch:
>
> 1.
> + LWLockRelease(_state->lock);
> + if (!is_bgworker)
> + ereport(ERROR,
> + (errmsg("could not perform block dump because dump file is being
> used by PID %d",
> +
On Mon, Jul 3, 2017 at 11:58 AM, Amit Kapila <amit.kapil...@gmail.com> wrote:
> On Sun, Jul 2, 2017 at 10:32 PM, Mithun Cy <mithun...@enterprisedb.com> wrote:
>> On Tue, Jun 27, 2017 at 11:41 AM, Mithun Cy <mithun...@enterprisedb.com>
>> wrote:
>>> On F
On Mon, Jul 3, 2017 at 3:55 PM, Amit Kapila <amit.kapil...@gmail.com> wrote:
> On Fri, Jun 23, 2017 at 3:22 AM, Robert Haas <robertmh...@gmail.com> wrote:
>> On Thu, Jun 15, 2017 at 12:35 AM, Mithun Cy <mithun...@enterprisedb.com>
>> wrote:
>>
>> * Ins
On Tue, Jun 27, 2017 at 11:41 AM, Mithun Cy <mithun...@enterprisedb.com> wrote:
> On Fri, Jun 23, 2017 at 5:45 AM, Thom Brown <t...@linux.com> wrote:
>>
>> Also, I find it a bit messy that launch_autoprewarm_dump() doesn't
>> detect an autoprewarm p
On Fri, Jun 23, 2017 at 5:45 AM, Thom Brown wrote:
>
> I also think pg_prewarm.dump_interval should be renamed to
> pg_prewarm.autoprewarm_interval.
Thanks, I have changed it to pg_prewarm.autoprewarm_interval.
>>
>> * In the documentation, don't say "This is a SQL callable
prewarms (with a separate control
> for whether it dumps) seems to make sense. The one time when you want
> to do one without the other is when you first install the extension --
> during the first server lifetime, you'll want to dump, so that after
> the next restart you have something
On Mon, Jun 12, 2017 at 7:34 PM, Amit Kapila <amit.kapil...@gmail.com>
wrote:
> On Mon, Jun 12, 2017 at 6:31 AM, Mithun Cy <mithun...@enterprisedb.com>
> wrote:
> > Thanks, Amit,
> >
>
> + /* Perform autoprewarm's task. */
> + if (todo_task =
Thanks, Amit,
On Fri, Jun 9, 2017 at 8:07 PM, Amit Kapila wrote:
>
> Few comments on the latest patch:
> +
> + /* Prewarm buffer. */
> + buf = ReadBufferExtended(rel, blk->forknum, blk->blocknum, RBM_NORMAL,
> + NULL);
> + if (BufferIsValid(buf))
> + ReleaseBuffer(buf);
On Fri, Jun 9, 2017 at 3:22 PM, Heikki Linnakangas wrote:
> Hmm, there is one problem with our current use of comma as a separator:
you
> cannot use a Unix-domain socket directory that has a comma in the name,
> because it's interpreted as multiple hostnames. E.g. this doesn't
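The ambiguity Heikki describes comes straight from comma-splitting the host value; a toy version of that parsing shows why a socket directory such as `/tmp/a,b` cannot survive it. The splitting function here is my own sketch, not libpq's actual parser:

```c
#include <string.h>

/* Toy comma-splitter for a libpq-style "host" value: writes up to max
 * pointers to NUL-terminated pieces into out[] and returns the count.
 * Splits in place, so the caller must pass a writable buffer. */
static int
split_hosts(char *value, char **out, int max)
{
    int   n = 0;
    char *piece = strtok(value, ",");

    while (piece != NULL && n < max)
    {
        out[n++] = piece;
        piece = strtok(NULL, ",");
    }
    return n;
}
```

With this rule, a single directory name containing a comma is indistinguishable from two hostnames, which is exactly the problem being raised.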
I have merged Rafia's patch for the cosmetic changes. I have also fixed
some of the changes you recommended on top of that, but kept a few as-is
since Rafia's opinion was needed on those.
On Wed, Jun 7, 2017 at 5:57 PM, Robert Haas wrote:
> My experience is the opposite. If I
On Tue, Jun 6, 2017 at 3:48 PM, Neha Sharma
wrote:
> Hi,
>
> I have been testing this feature for a while and below are the observations
> for few scenarios.
>
> Observation:
> scenario 1: If we set pg_prewarm.dump_interval = -1.0,we get an additional
> warning
On Wed, May 31, 2017 at 10:18 PM, Robert Haas wrote:
> + *contrib/autoprewarm.c
>
> Wrong.
-- Oops Sorry fixed.
> +    Oid        database;    /* database */
> +    Oid        spcnode;     /* tablespace */
> +    Oid        filenode;    /*
On Tue, May 30, 2017 at 12:36 PM, Konstantin Knizhnik
<k.knizh...@postgrespro.ru> wrote:
> On 27.10.2016 14:39, Mithun Cy wrote:
> And as far as I understand pg_autoprewarm has all necessary infrastructure
> to do parallel load. We just need to spawn more than one background worke
On Tue, May 30, 2017 at 10:16 AM, Mithun Cy <mithun...@enterprisedb.com> wrote:
> Thanks Robert,
Sorry, there was one more mistake (a typo) in dump_now(): instead of
using pfree I had used free; corrected same in the new patch v10.
Thanks Robert,
On Wed, May 24, 2017 at 5:41 PM, Robert Haas wrote:
> + *
> + * Once the "autoprewarm" bgworker has completed its prewarm task, it will
> + * start a new task to periodically dump the BlockInfoRecords related to
> blocks
> + * which are currently in shared
On Tue, May 23, 2017 at 7:06 PM, Mithun Cy <mithun...@enterprisedb.com> wrote:
> Thanks, Andres,
>
> I have tried to fix all of your comments.
There was a typo in the previous patch 07 where instead of == I had
used !=, and a mistake in the comments. I have corrected same n
to the
subgroups to load the corresponding blocks. With this, the portability issue
which I mentioned above no longer exists, as we do not store
any map tables within the dump file.
On Thu, Apr 6, 2017 at 4:12 AM, Andres Freund <and...@anarazel.de> wrote:
> On 2017-03-13 18:45:00 +0530, Mithun
On Wed, Apr 12, 2017 at 6:24 PM, Amit Kapila
wrote:
>
> I have looked into the tests and I think we can do some optimization
> without losing much on code coverage. First is we are doing both
> Vacuum Full and Vacuum on hash_split_heap in the same test after
> executing
On Thu, Apr 6, 2017 at 4:12 AM, Andres Freund <and...@anarazel.de> wrote:
> On 2017-03-13 18:45:00 +0530, Mithun Cy wrote:
>> I have implemented a similar logic now. The prewarm bgworker will
>> launch a sub-worker per database in the dump file. And, each
>> sub-worker w
On Tue, Apr 4, 2017 at 9:18 AM, Robert Haas wrote:
> Committed.
Thanks Robert,
And also sorry, one unfortunate thing happened in the last patch: while
fixing one of the review comments, a variable disappeared from the
equation in _hash_spareindex.
splitpoint_phases
On Sat, Apr 1, 2017 at 12:31 PM, Mithun Cy <mithun...@enterprisedb.com> wrote:
> Also adding a patch which implements the 2nd way.
Sorry, I forgot to attach the sortbuild_hash patch, which also needs
similar changes for the hash_mask.
On Sat, Apr 1, 2017 at 7:05 AM, Robert Haas wrote:
> Hmm, don't the changes to contrib/pageinspect/expected/hash.out
> indicate that you've broken something? The hash index has only 4
> buckets, so the new code shouldn't be doing anything differently, but
> you've got
Thanks, I have tried to fix all of the comments.
On Fri, Mar 31, 2017 at 8:10 AM, Robert Haas <robertmh...@gmail.com> wrote:
> On Thu, Mar 30, 2017 at 2:36 PM, Mithun Cy <mithun...@enterprisedb.com> wrote:
>> Thanks Robert, I have tried to fix the comments given as below.
Thanks Robert, I have tried to fix the comments given as below.
On Thu, Mar 30, 2017 at 9:19 PM, Robert Haas wrote:
> I think that the macros in hash.h need some more work:
>
> - Pretty much any time you use the argument of a macro, you need to
> parenthesize it in the
On Thu, Mar 30, 2017 at 7:23 PM, Robert Haas wrote:
> I think approach B is incorrect. Suppose we have 1536 buckets and
> hash values 2048, 2049, 4096, 4097, 6144, 6145, 8192, and 8193. If I
> understand correctly, each of these values should be mapped either to
> bucket
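The mapping Robert is describing can be tried out standalone. This is a sketch of the mask-based bucket computation (the logic of `_hash_hashkey2bucket`, not a copy of it); the 1536-bucket mask values (maxbucket = 1535, highmask = 2047, lowmask = 1023) are my reconstruction for the example:

```c
#include <stdint.h>

/* Sketch of mask-based bucket mapping for a hash index whose bucket
 * count is not a power of two. */
static uint32_t
hashkey_to_bucket(uint32_t hashkey, uint32_t maxbucket,
                  uint32_t highmask, uint32_t lowmask)
{
    uint32_t bucket = hashkey & highmask;

    /* If the masked value points past the last existing bucket, fall
     * back to the smaller mask: the tuple stays in the lower-numbered
     * bucket until that bucket is actually split. */
    if (bucket > maxbucket)
        bucket = bucket & lowmask;
    return bucket;
}
```

Under these masks, the hash values in the quote (2048, 2049, 4096, 4097, 6144, 6145, 8192, 8193) all land in buckets 0 and 1, which is why any change to the bucket count must keep both code paths agreeing on the mapping.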
Thanks, Amit, for the detailed review.
On Wed, Mar 29, 2017 at 4:09 PM, Amit Kapila wrote:
> Okay, your current patch looks good to me apart from minor comments,
> so marked as Read For Committer. Please either merge the
> sort_hash_b_2.patch with main patch or submit it
That means at every new
+split point we double the existing number of buckets. Allocating huge chunks
On Mon, Mar 27, 2017 at 11:56 PM, Jesper Pedersen
wrote:
> I ran some performance scenarios on the patch to see if the increased
> 'spares' allocation had an
On Wed, Mar 29, 2017 at 10:12 AM, Amit Kapila wrote:
>> I wonder if we should consider increasing
>> SPLITPOINT_GROUPS_WITH_ONLY_ONE_PHASE somewhat. For example, split
>> point 4 is responsible for allocating only 16 new buckets = 128kB;
>> doing those in four groups of
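Using the numbers in the quote (split point 4 allocating 16 new buckets of 8kB = 128kB), the sizing can be sketched as follows. The numbering convention (split point g allocates 2^g new buckets) and the four-phase division are taken from the quote for illustration and may differ from the committed code:

```c
#define BLCKSZ_KB 8                 /* 8kB pages, as in a default build */

/* New buckets a split point allocates under plain doubling, using the
 * convention in the quote: split point 4 => 16 new buckets. */
static unsigned
new_buckets_at(unsigned splitpoint)
{
    return 1u << splitpoint;
}

/* Size of that allocation in kB: 16 buckets * 8kB = 128kB. */
static unsigned
alloc_kb_at(unsigned splitpoint)
{
    return new_buckets_at(splitpoint) * BLCKSZ_KB;
}

/* With phased expansion, split points at or above the one-phase
 * threshold hand out their buckets in four equal phases instead
 * of all at once. */
static unsigned
buckets_per_phase(unsigned splitpoint, unsigned one_phase_groups)
{
    unsigned total = new_buckets_at(splitpoint);

    return (splitpoint < one_phase_groups) ? total : total / 4;
}
```

Raising the one-phase threshold, as suggested above, simply moves small split points from the `total / 4` branch back to the one-shot branch.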
On Tue, Mar 28, 2017 at 12:21 PM, Amit Kapila wrote:
> I think both way it can work. I feel there is no hard pressed need
> that we should make the computation in sorting same as what you do
> _hash_doinsert. In patch_B, I don't think new naming for variables is
> good.
On Mon, Mar 27, 2017 at 11:21 AM, Amit Kapila wrote:
> I think we can't change the number of buckets to be created or lowmask
> and highmask calculation here without modifying _h_spoolinit() because
> it sorts the data to be inserted based on hashkey which in turn
>
Thanks, Amit for the review.
On Sat, Mar 25, 2017 at 7:03 PM, Amit Kapila wrote:
>
> I think one-dimensional patch has fewer places to touch, so that looks
> better to me. However, I think there is still hard coding and
> assumptions in code which we should try to
On Sat, Mar 25, 2017 at 11:46 PM, Peter Eisentraut
<peter.eisentr...@2ndquadrant.com> wrote:
> On 3/13/17 09:15, Mithun Cy wrote:
>> A. launch_autoprewarm_dump() RETURNS int4
>> This is a SQL callable function to launch the autoprewarm worker to
>> dump the buffer
On Tue, Mar 21, 2017 at 8:16 AM, Amit Kapila wrote:
> Sure, I was telling you based on that. If you are implicitly treating
> it as 2-dimensional array, it might be easier to compute the array
>offsets.
I think calculating the spares offset will not anyway become much
On Fri, Mar 24, 2017 at 1:22 AM, Mithun Cy <mithun...@enterprisedb.com> wrote:
> Hi Amit, please find the new patch
The pageinspect.sgml has an example which shows the output of
"hash_metapage_info()". Since we increase the spares array and
eventually ovflpoint, I have
Hi Amit, please find the new patch
On Tue, Mar 21, 2017 at 8:16 AM, Amit Kapila wrote:
> It is not only about above calculation, but also what the patch is
> doing in function _hash_get_tbuckets(). By the way function name also
> seems unclear (mainly *tbuckets* in the
Hi Pavan,
On Thu, Mar 23, 2017 at 12:19 AM, Pavan Deolasee
wrote:
> Ok, no problem. I did some tests on AWS i2.xlarge instance (4 vCPU, 30GB
> RAM, attached SSD) and results are shown below. But I think it is important
> to get independent validation from your side too,
On Thu, Mar 23, 2017 at 12:19 AM, Pavan Deolasee
<pavan.deola...@gmail.com> wrote:
>
>
> On Wed, Mar 22, 2017 at 4:53 PM, Mithun Cy <mithun...@enterprisedb.com>
> wrote:
> Ok, no problem. I did some tests on AWS i2.xlarge instance (4 vCPU, 30GB
> RAM, attached SS
On Wed, Mar 22, 2017 at 8:43 AM, Pavan Deolasee
wrote:
> Sorry, I did not mean to suggest that you set it up wrongly, I was just
> trying to point out that the test case itself may not be very practical.
That is cool, np! I was just trying to explain why those tests were
On Wed, Mar 22, 2017 at 1:09 PM, Rushabh Lathia
wrote:
> In the code, gather merge consider the limit for the sort into
> create_ordered_paths,
> which is wrong. Into create_ordered_paths(), GM should not consider the
> limit
> while doing costing for the sort node.
>
>
On Wed, Mar 22, 2017 at 3:44 PM, Pavan Deolasee
wrote:
>
> This looks quite weird to me. Obviously these numbers are completely
> non-comparable. Even the time for VACUUM FULL goes up with every run.
>
> May be we can blame it on AWS instance completely, but the pattern
.299 ms
(7 rows)
On Wed, Mar 22, 2017 at 11:07 AM, Mithun Cy <mithun...@enterprisedb.com> wrote:
> I accidentally encountered a case where gather merge was picked by
> default, but disabling it by setting max_parallel_workers_per_gather
> = 0 produced a non-parallel plan which was faste
I accidentally encountered a case where gather merge was picked by
default, but disabling it by setting max_parallel_workers_per_gather
= 0 produced a non-parallel plan which was faster than gather merge,
even though its cost is marked too high when compared to gather merge.
I guess we need some cost
On Tue, Mar 21, 2017 at 8:10 PM, Robert Haas wrote:
> If the WAL writing hides the loss, then I agree that's not a big
> concern. But if the loss is still visible even when WAL is written,
> then I'm not so sure.
The tests table schema was taken from earlier tests what
Hi Amit, Thanks for the review,
On Mon, Mar 20, 2017 at 5:17 PM, Amit Kapila wrote:
> idea could be to make hashm_spares a two-dimensional array
> hashm_spares[32][4] where the first dimension will indicate the split
> point and second will indicate the sub-split number.
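Amit's `hashm_spares[32][4]` suggestion is, layout-wise, the same memory as a flat `hashm_spares[128]`; only the offset arithmetic differs. A standalone sketch of that arithmetic (the constant is hypothetical, chosen to match the 4-phase idea, not the committed code):

```c
#define PHASES_PER_SPLITPOINT 4   /* hypothetical constant for the sketch */

/* Flat-array offset of hashm_spares viewed as [splitpoint group][phase]. */
static unsigned
spare_offset(unsigned group, unsigned phase)
{
    return group * PHASES_PER_SPLITPOINT + phase;
}

/* Inverses: which group and phase a flat spare index belongs to. */
static unsigned
spare_group(unsigned offset)
{
    return offset / PHASES_PER_SPLITPOINT;
}

static unsigned
spare_phase(unsigned offset)
{
    return offset % PHASES_PER_SPLITPOINT;
}
```

So whether the array is declared one- or two-dimensional is largely a readability choice; the element reached for a given (split point, phase) pair is identical.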
On Mon, Feb 27, 2017 at 8:29 PM, Alexander Korotkov
wrote:
This patch needs to be rebased.
1. It fails while applying as below
patching file src/test/regress/expected/sysviews.out
Hunk #1 FAILED at 70.
1 out of 1 hunk FAILED -- saving rejects to file
On Thu, Mar 16, 2017 at 10:55 PM, David Steele wrote:
> It does apply with fuzz on 2b32ac2, so it looks like c11453c and
> subsequent commits are the cause. They represent a fairly substantial
> change to hash indexes by introducing WAL logging so I think you should
>
>> - The error handling loop around load_block() suggests that you're
>> expecting some reads to fail, which I guess is because you could be
>> trying to read blocks from a relation that's been rewritten under a
>> different relfilenode, or partially or entirely truncated. But I
>> don't think
Hi all, thanks for the feedback. Based on your recent comments I have
implemented a new patch, which is attached below.
On Sun, Feb 26, 2017 at 10:16 PM, Robert Haas wrote:
> This is not easy to fix. The lock has to be taken based on the
> relation OID, not the
Hi all, thanks.
I have tried to fix all of the comments given above, with some more
code cleanups.
On Wed, Feb 22, 2017 at 6:28 AM, Robert Haas wrote:
> I think it's OK to treat that as something of a corner case. There's
> nothing to keep you from doing that today: just
Thanks, Amit
On Mon, Feb 20, 2017 at 9:51 PM, Amit Kapila wrote:
> How will high and lowmask calculations work in this new strategy?
> Till now they always work on doubling strategy and I don't see you
> have changed anything related to that code. Check below places.
On Fri, Feb 10, 2017 at 1:06 AM, Robert Haas wrote:
> Alright, committed with a little further hacking.
I pulled the latest code and tried the following.
Test:
create table t1(t int);
create index i1 on t1 using hash(t);
insert into t1 select generate_series(1, 1000);
Hi all,
As of now, we expand the hash index by doubling the number of bucket
blocks. But unfortunately, those blocks will not be used immediately.
So I thought that if we could defer bucket block allocation by some factor,
the hash index size could grow much more efficiently. I have written a POC
patch which does
On Tue, Feb 7, 2017 at 11:21 PM, Erik Rijkers wrote:
> On 2017-02-07 18:41, Robert Haas wrote:
>>
>> Committed with some changes (which I noted in the commit message).
Thanks, Robert and all who have reviewed the patch and given their
valuable comments.
> This has caused a
On Tue, Feb 7, 2017 at 6:11 PM, Beena Emerson wrote:
> Yes it would be better to have only one pg_prewarm worker as the loader is
> idle for the entire server run time after the initial load activity of few
> secs.
Sorry, that is again a bug in the code. The code to
Thanks Beena,
On Tue, Feb 7, 2017 at 4:46 PM, Beena Emerson wrote:
> Few more comments:
>
> = Background worker messages:
>
> - Workers when launched, show messages like: "logical replication launcher
> started”, "autovacuum launcher started”. We should probably have a
On Tue, Feb 7, 2017 at 12:24 PM, Amit Kapila wrote:
> On Tue, Feb 7, 2017 at 11:53 AM, Beena Emerson
> wrote:
>> Are 2 workers required?
>>
>
> I think in the new design there is a provision of launching the worker
> dynamically to dump the
On Tue, Feb 7, 2017 at 11:53 AM, Beena Emerson wrote:
> launched by other applications. Also with max_worker_processes = 2 and
> restart, the system crashes when the 2nd worker is not launched:
> 2017-02-07 11:36:39.132 IST [20573] LOG: auto pg_prewarm load : number of
Hi all,
Here is the new patch, which fixes all of the above comments. I changed the
design a bit now, as below.
What is it?
===
A pair of bgworkers: one automatically dumps the buffer pool's block
info at a given interval, and another loads those blocks into the
buffer pool when
the server
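From the fragments above (the Oid fields in the review quote and the sorted-I/O point later in the thread), the per-block dump record and its load ordering can be sketched roughly like this; field names are taken from the quotes where visible and guessed where truncated, and the struct name carries a "Sketch" suffix to mark it as mine:

```c
#include <stdlib.h>

/* Sketch of the per-buffer record the dump bgworker writes out and the
 * load bgworker reads back.  The real code uses PostgreSQL's Oid,
 * ForkNumber, and BlockNumber types. */
typedef struct BlockInfoRecordSketch
{
    unsigned database;    /* database OID */
    unsigned spcnode;     /* tablespace OID */
    unsigned filenode;    /* relation filenode */
    int      forknum;     /* fork number */
    unsigned blocknum;    /* block number within the fork */
} BlockInfoRecordSketch;

/* Sorting the records groups blocks of the same relation together and
 * turns the prewarm reads from random into mostly sequential I/O. */
static int
blockinfo_cmp(const void *a, const void *b)
{
    const BlockInfoRecordSketch *x = a;
    const BlockInfoRecordSketch *y = b;

    if (x->database != y->database)
        return x->database < y->database ? -1 : 1;
    if (x->filenode != y->filenode)
        return x->filenode < y->filenode ? -1 : 1;
    if (x->forknum != y->forknum)
        return x->forknum < y->forknum ? -1 : 1;
    if (x->blocknum != y->blocknum)
        return x->blocknum < y->blocknum ? -1 : 1;
    return 0;
}
```

A load worker per database (as in the sub-worker design mentioned later in the thread) then only needs one contiguous slice of the sorted array.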
On Tue, Jan 31, 2017 at 9:47 PM, Robert Haas wrote:
> Now, I assume that this patch sorts the I/O (although I haven't
> checked that) and therefore I expect that the prewarm completes really
> fast. If that's not the case, then that's bad. But if it is the
> case, then
> HashMetaPage _hash_getcachedmetap(Relation rel, Buffer *metabuf, bool
> force_refresh);
>
> If the cache is initialized and force_refresh is not true, then this
> just returns the cached data, and the metabuf argument isn't used.
> Otherwise, if *metabuf == InvalidBuffer, we set *metabuf to
On Thu, Jan 26, 2017 at 8:45 PM, Peter Eisentraut
wrote:
> Just a thought with an additional use case: If I want to set up a
> standby for offloading queries, could I take the dump file from the
> primary or another existing standby, copy it to the new standby,
On Tue, Jan 24, 2017 at 3:10 PM, Amit Kapila wrote:
> 1.
> @@ -505,26 +505,22 @@ hashbulkdelete(IndexVacuumInfo *info,
> In the above flow, do we really need an updated metapage, can't we use
> the cached one? We are already taking care of bucket split down in
> that
On Thu, Jan 19, 2017 at 5:08 PM, Ashutosh Sharma wrote:
Thanks, Ashutosh and Jesper. I have tested the patch and do not have any
more comments, so I am marking it ready for committer.
--
Sent via
On Fri, Oct 28, 2016 at 6:36 AM, Tsunakawa, Takayuki
wrote:
> I welcome this feature! I remember pg_hibernate did this. I wonder what
> happened to pg_hibernate. Did you check it?
Thanks. When I checked pg_hibernate, there were two things people
complained
On Tue, Jan 24, 2017 at 5:07 AM, Jim Nasby wrote:
> I took a look at this again, and it doesn't appear to be working for me. The
> library is being loaded during startup, but I don't see any further activity
> in the log, and I don't see an autoprewarm file in $PGDATA.
On Tue, Jan 17, 2017 at 10:07 AM, Amit Kapila wrote:
> 1.
> (a) I think you can retain the previous comment or modify it slightly.
> Just removing the whole comment and replacing it with a single line
> seems like a step backward.
-- Fixed, Just modified to say it
> (b)
On Tue, Jan 17, 2017 at 3:06 PM, Amit Kapila wrote:
>
> I think your calculation is not right. 66 indicates LH_BUCKET_PAGE |
> LH_BUCKET_NEEDS_SPLIT_CLEANUP which is a valid state after the split.
> This flag will be cleared either during next split or when vacuum
>
>
> 7. I think it is not your bug, but probably a bug in Hash index
> itself; page flag is set to 66 (for below test); So the page is both
> LH_BUCKET_PAGE and LH_BITMAP_PAGE. Is not this a bug in hash index?
>
> I have inserted 3000 records. Hash index is on integer column.
> select hasho_flag
Test:
--
create table seq_tab(t int);
insert into seq_tab select generate_series(1, 1000);
select count(distinct t) from seq_tab;
#0 0x0094a9ad in pfree (pointer=0x0) at mcxt.c:1007
#1 0x00953be3 in mergeonerun (state=0x1611450) at tuplesort.c:2803
#2 0x00953824
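The backtrace shows pfree() being called with a NULL pointer from mergeonerun(). Unlike free(NULL), which the C standard defines as a no-op, PostgreSQL's pfree() must read the memory-context chunk header in front of the pointer, so NULL is invalid. A rough stand-in I wrote to illustrate the difference (an error return in place of the real crash):

```c
#include <stdlib.h>

/* Stand-in for pfree(): the real function locates the chunk header just
 * before the pointer, so a NULL argument dereferences bad memory.  Here
 * that failure mode is modelled as an error return instead. */
static int
pfree_sketch(void *pointer)
{
    if (pointer == NULL)
        return -1;      /* real pfree() crashes, as in the backtrace */
    free(pointer);      /* stand-in for the context-based release */
    return 0;
}
```

The corresponding fix pattern in callers that may legitimately hold NULL is `if (ptr != NULL) pfree(ptr);`.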
On Fri, Jan 13, 2017 at 12:49 AM, Jesper Pedersen
wrote:
> Rebased, and removed the compile warn in hashfuncs.c
I did some testing and review of the patch. I did not see any major
issue, but there are a few minor cases for which I have some suggestions.
1. Source file
There is a typo in the comments of function _hash_first(); adding a fix for same.
hashsearch_typo01.patch
On Wed, Jan 11, 2017 at 12:46 AM, Robert Haas wrote:
> Can we adapt the ad-hoc caching logic in hashbulkdelete() to work with
> this new logic? Or at least update the comments?
I have introduced a new function _hash_getcachedmetap in patch 11 [1] with
this hashbulkdelete()
On Fri, Jan 6, 2017 at 11:43 AM, Amit Kapila wrote:
> Few more comments:
> 1.
> no need to two extra lines, one is sufficient and matches the nearby
> coding pattern.
-- Fixed.
> 2.
> Do you see anywhere else in the code the pattern of using @ symbol in
> comments
On Sat, Jan 7, 2017 at 11:27 AM, Mithun Cy <mithun...@enterprisedb.com> wrote:
Sorry, the auto plain-text setting has disturbed the table indentation.
Attaching the spreadsheet for same.
On Thu, Jan 5, 2017 at 6:15 PM, Amit Kapila wrote:
>
> Your test and results look good, what kind of m/c you have used to
> test this. Let me see if I or one of my colleague can do this and
> similar test on some high-end m/c.
As discussed with Amit, I have tried to run
On Wed, Jan 4, 2017 at 5:21 PM, Mithun Cy <mithun...@enterprisedb.com> wrote:
I have rebased the patch to fix one compilation warning in
_hash_doinsert, where the variable bucket was used only for an Assert
but was not marked as such.
On Wed, Jan 4, 2017 at 4:19 PM, Mithun Cy <mithun...@enterprisedb.com> wrote:
> As part of performance/space analysis of hash index on the varchar data
> type with this patch, I have run some tests for same with modified
> pgbench.
On Tue, Jan 3, 2017 at 12:05 PM, Mithun Cy <mithun...@enterprisedb.com> wrote:
As part of the performance/space analysis of hash index on the varchar
data type with this patch, I have run some tests for same with a modified
pgbench. Modifications include:
1. Changed the aid column of the pg_accounts table fr
Thanks, Amit, for the detailed review and for pointing out various issues
in the patch. I have tried to fix all of your comments as below.
On Mon, Jan 2, 2017 at 11:29 AM, Amit Kapila wrote:
> 1.
> usage "into to .." in the above comment seems to be wrong.
>
On Tue, Dec 27, 2016 at 1:36 PM, Mithun Cy <mithun...@enterprisedb.com> wrote:
Oops, the patch number should be 08; re-attaching same after renaming.
diff --git a/src/backend/access/hash/hashinsert.c b/src/backend/acces
On Thu, Dec 22, 2016 at 12:17 PM, Amit Kapila <amit.kapil...@gmail.com> wrote:
> On Wed, Dec 21, 2016 at 9:26 PM, Robert Haas <robertmh...@gmail.com> wrote:
>> On Tue, Dec 20, 2016 at 2:25 PM, Mithun Cy <mithun...@enterprisedb.com>
>> wrote:
>>> --
There seems to be a bug in the base code of _hash_doinsert().
+     * XXX this is useless code if we are only storing hash keys.
+     */
+    if (itemsz > HashMaxItemSize((Page) metap))
+        ereport(ERROR,
+                (errcode(ERRCODE_PROGRAM_LIMIT_EXCEEDED),
+                 errmsg("index row
On Tue, Dec 20, 2016 at 3:51 AM, Robert Haas <robertmh...@gmail.com> wrote:
> On Fri, Dec 16, 2016 at 5:16 AM, Mithun Cy <mithun...@enterprisedb.com>
> wrote:
>
>> Shouldn't _hash_doinsert() be using the cache, too
>>>
>>
>> Yes, we have a