On Thu, Dec 17, 2015 at 3:15 AM, Mithun Cy <mithun...@enterprisedb.com>
wrote:
> I have rebased the patch and tried to run pgbench.
> I see memory corruptions, attaching the valgrind report for the same.
Sorry, I forgot to attach the rebased patch; adding it now.
After some analysis I
On Thu, Jun 16, 2016 at 12:58 PM, Amit Kapila
wrote:
I have a question regarding code changes in *_hash_first*.
+/*
+ * Conditionally get the lock on primary bucket page for search while
+ * holding lock on meta page. If we have to wait, then
On Fri, Jan 29, 2016 at 5:11 PM, Mithun Cy <mithun...@enterprisedb.com>
wrote
>
>
>
> >I just ran some tests on above patch. Mainly to compare
> >how "longer sort keys" would behave with new(Qsort) and old Algo(RS) for
> sorting.
> >I have 8GB
On Tue, Dec 29, 2015 at 4:33 AM, Peter Geoghegan wrote:
>Attached is a revision that significantly overhauls the memory patch,
>with several smaller changes.
I just ran some tests on above patch. Mainly to compare
how "longer sort keys" would behave with new(Qsort) and old
On Sat, Jan 16, 2016 at 10:23 AM, Amit Kapila <amit.kapil...@gmail.com>
wrote:
> >On Fri, Jan 15, 2016 at 11:23 AM, Mithun Cy <mithun...@enterprisedb.com>
> wrote:
>
>> On Mon, Jan 4, 2016 at 2:28 PM, Andres Freund <and...@anarazel.de> wrote:
>>
On Thu, Mar 3, 2016 at 11:50 PM, Robert Haas wrote:
>What if you apply both this and Amit's clog control log patch(es)? Maybe
>the combination of the two helps substantially more than either one alone.
I did the above tests along with Amit's clog patch. Machine :8
On Thu, Mar 17, 2016 at 9:00 AM, Amit Kapila
wrote:
>If you see, for the Base readings, there is a performance increase up till
>64 clients and then there is a fall at 88 clients, which to me indicates
>that it hits very high-contention around CLogControlLock at 88 clients
On Thu, Mar 10, 2016 at 9:39 PM, Robert Haas wrote:
>I guess there must not be an occurrence of this pattern in the
>regression tests, or previous force_parallel_mode testing would have
>found this problem. Perhaps this patch should add one?
I have added the test to
<amit.kapil...@gmail.com>
wrote:
> On Thu, Mar 10, 2016 at 1:04 PM, Mithun Cy <mithun...@enterprisedb.com>
> wrote:
>
>>
>>
>> On Thu, Mar 3, 2016 at 11:50 PM, Robert Haas <robertmh...@gmail.com>
>> wrote:
>> >What if you apply both this and
On Sat, Mar 12, 2016 at 2:32 PM, Amit Kapila
wrote:
>With force_parallel_mode=on, I could see many other failures as well. I
>think it is better to have a test which tests this functionality with
>force_parallel_mode=regress
as per user manual.
Setting this value to
Sorry, there was some issue with my mail settings; the same mail got sent
more than once.
--
Thanks and Regards
Mithun C Y
EnterpriseDB: http://www.enterprisedb.com
On Sat, Mar 12, 2016 at 12:28 PM, Amit Kapila
wrote
>I don't see how this test will fail with force_parallel_mode=regress and
>max_parallel_degree > 0 even without the patch proposed to fix the issue in
>hand. In short, I don't think this test would have caught the issue,
On Thu, Mar 3, 2016 at 6:20 AM, Mithun Cy <mithun...@enterprisedb.com>
wrote:
> I will continue to benchmark above tests with much wider range of clients.
Latest Benchmarking shows following results for unlogged tables.
clients BASE ONLY CLOG CHANGES % Increase CLOG CHANGES + SAVE
Hi All,
Explain [Analyze] Select Into table produces a plan which uses
parallel scans.
*Test:*
create table table1 (n int);
insert into table1 values (generate_series(1,500));
analyze table1;
set parallel_tuple_cost=0;
set max_parallel_degree=3;
postgres=# explain select into
On Thu, Mar 10, 2016 at 1:36 PM, Amit Kapila
wrote:
>Do you think it makes sense to check the performance by increasing CLOG
>buffers (patch for same is posted in Speed up Clog thread [1]) as that
>also relieves contention on CLOG as per the tests I have done?
Along with
On Tue, Mar 1, 2016 at 12:59 PM, Amit Kapila
wrote:
>Don't we need to add this only when the xid of current transaction is
>valid? Also, I think it will be better if we can explain why we need to
>add our own transaction id while caching the snapshot.
I have fixed the
I tried to do some benchmarking on postgres master head
commit 72a98a639574d2e25ed94652848555900c81a799
Author: Andres Freund
Date: Tue Apr 26 20:32:51 2016 -0700
CASE : Read-Write Tests when data exceeds shared buffers.
Non Default settings and test
./postgres -c
On Fri, May 6, 2016 at 8:35 PM, Andres Freund wrote:
> Also, do you see read-only workloads to be affected too?
Thanks, I have not tested with the above specific commit id which reported
the performance issue, but
At HEAD commit 72a98a639574d2e25ed94652848555900c81a799
Author: Andres
Tests:
create table mytab(x int,x1 char(9),x2 varchar(9));
create table mytab1(y int,y1 char(9),y2 varchar(9));
insert into mytab values (generate_series(1,5),'aa','aaa');
insert into mytab1 values (generate_series(1,1),'aa','aaa');
insert into mytab values
I did some basic testing of the same. In that, I found one issue with cursors.
+BEGIN;
+SET enable_seqscan = OFF;
+SET enable_bitmapscan = OFF;
+CREATE FUNCTION declares_cursor(int)
+ RETURNS void
+ AS 'DECLARE c CURSOR FOR SELECT * from con_hash_index_table WHERE keycol
= $1;'
+LANGUAGE SQL;
+
I am attaching the patch to improve some coverage of hash index code [1].
I have added some basic tests, which mainly cover overflow pages. It took
2 sec extra on my machine in the parallel schedule.
old tests:  Line Coverage      780 / 1478   (52.7%)
            Function Coverage   63 /   85   (74.1%)
I have created a patch to cache the meta page of Hash index in
backend-private memory. This is to save reading the meta page buffer every
time we want to find the bucket page. In "_hash_first", we try to read the
meta page buffer twice just to make sure the bucket was not split after we
found
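The caching idea can be sketched as a toy model. The names below (Meta, cached_maxbucket) are illustrative stand-ins, not the server's real structures, and the real patch validates the cache against the bucket page itself rather than trusting it blindly:

```c
#include <assert.h>
#include <stdint.h>

/* Toy model of the cached-metapage idea: the backend keeps a private
 * copy of the meta data and pays the cost of a real meta-page read
 * only when the copy is missing (or, in the real patch, proven stale
 * by the bucket page it lands on). */
typedef struct Meta
{
    uint32_t maxbucket;         /* highest bucket currently in use */
} Meta;

static Meta cache;              /* backend-private copy */
static int  cache_valid = 0;
static int  meta_reads  = 0;    /* counts simulated meta-page reads */

/* The expensive step: in the server this would lock and read the meta page. */
static void
refresh_cache(const Meta *shared)
{
    meta_reads++;
    cache = *shared;
    cache_valid = 1;
}

/* Optimistic lookup: trust the cache; callers re-check against the
 * bucket page and call refresh_cache() if the guess turned out stale. */
static uint32_t
cached_maxbucket(const Meta *shared)
{
    if (!cache_valid)
        refresh_cache(shared);
    return cache.maxbucket;
}
```

Repeated searches then hit the private copy and never touch the meta page until a split invalidates it.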
On Tue, Jan 31, 2017 at 9:47 PM, Robert Haas wrote:
> Now, I assume that this patch sorts the I/O (although I haven't
> checked that) and therefore I expect that the prewarm completes really
> fast. If that's not the case, then that's bad. But if it is the
> case, then
Hi all,
Here is the new patch, which fixes all of the above comments. I changed the
design a bit now, as below.
What is it?
===
A pair of bgworkers: one automatically dumps the buffer pool's block
info at a given interval, and another loads those blocks into the
buffer pool when
the server
On Tue, Feb 7, 2017 at 11:53 AM, Beena Emerson wrote:
> launched by other applications. Also with max_worker_processes = 2 and
> restart, the system crashes when the 2nd worker is not launched:
> 2017-02-07 11:36:39.132 IST [20573] LOG: auto pg_prewarm load : number of
Thanks Beena,
On Tue, Feb 7, 2017 at 4:46 PM, Beena Emerson wrote:
> Few more comments:
>
> = Background worker messages:
>
> - Workers when launched, show messages like: "logical replication launcher
> started”, "autovacuum launcher started”. We should probably have a
On Tue, Feb 7, 2017 at 12:24 PM, Amit Kapila wrote:
> On Tue, Feb 7, 2017 at 11:53 AM, Beena Emerson
> wrote:
>> Are 2 workers required?
>>
>
> I think in the new design there is a provision of launching the worker
> dynamically to dump the
On Tue, Feb 7, 2017 at 6:11 PM, Beena Emerson wrote:
> Yes it would be better to have only one pg_prewarm worker as the loader is
> idle for the entire server run time after the initial load activity of few
> secs.
Sorry, that is again a bug in the code. The code to
On Tue, Feb 7, 2017 at 11:21 PM, Erik Rijkers wrote:
> On 2017-02-07 18:41, Robert Haas wrote:
>>
>> Committed with some changes (which I noted in the commit message).
Thanks, Robert and all who have reviewed the patch and given their
valuable comments.
> This has caused a
On Fri, Oct 28, 2016 at 6:36 AM, Tsunakawa, Takayuki
wrote:
> I welcome this feature! I remember pg_hibernate did this. I wonder what
> happened to pg_hibernate. Did you check it?
Thanks; when I checked pg_hibernate, there were two things people
complained
On Tue, Jan 24, 2017 at 5:07 AM, Jim Nasby wrote:
> I took a look at this again, and it doesn't appear to be working for me. The
> library is being loaded during startup, but I don't see any further activity
> in the log, and I don't see an autoprewarm file in $PGDATA.
On Tue, Jan 24, 2017 at 3:10 PM, Amit Kapila wrote:
> 1.
> @@ -505,26 +505,22 @@ hashbulkdelete(IndexVacuumInfo *info,
> In the above flow, do we really need an updated metapage, can't we use
> the cached one? We are already taking care of bucket split down in
> that
On Thu, Jan 26, 2017 at 8:45 PM, Peter Eisentraut
wrote:
> Just a thought with an additional use case: If I want to set up a
> standby for offloading queries, could I take the dump file from the
> primary or another existing standby, copy it to the new standby,
On Thu, Jan 19, 2017 at 5:08 PM, Ashutosh Sharma wrote:
Thanks, Ashutosh and Jesper. I have tested the patch; I do not have any
more comments, so I am marking it ready for committer.
--
Sent via
> HashMetaPage _hash_getcachedmetap(Relation rel, Buffer *metabuf, bool
> force_refresh);
>
> If the cache is initialized and force_refresh is not true, then this
> just returns the cached data, and the metabuf argument isn't used.
> Otherwise, if *metabuf == InvalidBuffer, we set *metabuf to
Thanks, Amit
On Mon, Feb 20, 2017 at 9:51 PM, Amit Kapila wrote:
> How will high and lowmask calculations work in this new strategy?
> Till now they always work on doubling strategy and I don't see you
> have changed anything related to that code. Check below places.
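For reference, the doubling-strategy mapping the question refers to works like _hash_hashkey2bucket() in hashutil.c; a self-contained sketch with simplified types:

```c
#include <assert.h>
#include <stdint.h>

/*
 * How a hash value maps to a bucket under the doubling strategy,
 * modeled on _hash_hashkey2bucket() in hashutil.c (types simplified).
 * highmask is (next power of 2 above maxbucket) - 1; lowmask is the
 * previous such mask.
 */
static uint32_t
hashkey2bucket(uint32_t hashkey, uint32_t maxbucket,
               uint32_t highmask, uint32_t lowmask)
{
    uint32_t bucket = hashkey & highmask;

    /* A masked value past the last real bucket belongs to the bucket
     * it will eventually split from, found with the smaller mask. */
    if (bucket > maxbucket)
        bucket = bucket & lowmask;
    return bucket;
}
```

For example, with maxbucket = 5 (buckets 0..5), highmask = 7, lowmask = 3: hash 6 masks to bucket 6, which does not exist yet, so it lands in 6 & 3 = 2. Any allocation strategy that changes how maxbucket grows has to keep this mapping consistent, which is the point of the question above.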
On Fri, Feb 10, 2017 at 1:06 AM, Robert Haas wrote:
> Alright, committed with a little further hacking.
I did pull the latest code, and tried
Test:
create table t1(t int);
create index i1 on t1 using hash(t);
insert into t1 select generate_series(1, 1000);
Hi all, thanks.
I have tried to fix all of the comments given above, with some more
code cleanups.
On Wed, Feb 22, 2017 at 6:28 AM, Robert Haas wrote:
> I think it's OK to treat that as something of a corner case. There's
> nothing to keep you from doing that today: just
Hi all,
As of now, we expand the hash index by doubling the number of bucket
blocks. But unfortunately, those blocks will not be used immediately.
So I thought that if we can defer bucket block allocation by some factor,
the hash index size can grow much more efficiently. I have written a POC
patch which does
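To illustrate the overallocation the POC targets, here is a sketch of page counts per split point. doubling_alloc mirrors the current behaviour; deferred_alloc and its phases parameter are hypothetical illustrations, not the POC's actual arithmetic:

```c
#include <assert.h>
#include <stdint.h>

/* Bucket pages pre-allocated when split point n is reached under the
 * current doubling strategy: split point 0 holds 1 bucket, and each
 * later split point n adds a contiguous run of 2^(n-1) bucket pages,
 * all allocated at once even though they fill only gradually. */
static uint32_t
doubling_alloc(uint32_t splitpoint)
{
    return splitpoint == 0 ? 1 : (uint32_t) 1 << (splitpoint - 1);
}

/* Hypothetical deferred scheme: carve each run into `phases` equal
 * allocations so large runs are not materialized up front. */
static uint32_t
deferred_alloc(uint32_t splitpoint, uint32_t phases)
{
    uint32_t run = doubling_alloc(splitpoint);

    return run <= phases ? run : run / phases;
}
```

At split point 10 the doubling strategy allocates 512 pages in one go; a four-phase deferral would hand out 128 at a time.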
On Tue, Jan 17, 2017 at 10:07 AM, Amit Kapila wrote:
> 1.
> (a) I think you can retain the previous comment or modify it slightly.
> Just removing the whole comment and replacing it with a single line
> seems like a step backward.
-- Fixed, Just modified to say it
> (b)
Test:
--
create table seq_tab(t int);
insert into seq_tab select generate_series(1, 1000);
select count(distinct t) from seq_tab;
#0 0x0094a9ad in pfree (pointer=0x0) at mcxt.c:1007
#1 0x00953be3 in mergeonerun (state=0x1611450) at tuplesort.c:2803
#2 0x00953824
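The top frame shows pfree() being handed a NULL pointer from mergeonerun(); unlike free(), pfree() must not receive NULL. A toy illustration of the failure mode and the usual guard pattern (the actual fix for this report may differ):

```c
#include <assert.h>
#include <stdlib.h>

/* Toy stand-in for the server's pfree(): like the real one, it must
 * never be handed NULL (the real pfree reads the chunk header just
 * before the pointer, so NULL crashes exactly as in the backtrace). */
static void
toy_pfree(void *ptr)
{
    assert(ptr != NULL);
    free(ptr);
}

/* The usual guard at call sites where the pointer may never have been
 * allocated: free only if set, and clear the slot afterwards. */
static void
free_if_set(void **slot)
{
    if (*slot != NULL)
    {
        toy_pfree(*slot);
        *slot = NULL;
    }
}
```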
>
> 7. I think it is not your bug, but probably a bug in Hash index
> itself; page flag is set to 66 (for below test); So the page is both
> LH_BUCKET_PAGE and LH_BITMAP_PAGE. Is not this a bug in hash index?
>
> I have inserted 3000 records. Hash index is on integer column.
> select hasho_flag
On Tue, Jan 17, 2017 at 3:06 PM, Amit Kapila wrote:
>
> I think your calculation is not right. 66 indicates LH_BUCKET_PAGE |
> LH_BUCKET_NEEDS_SPLIT_CLEANUP which is a valid state after the split.
> This flag will be cleared either during next split or when vacuum
>
On Fri, Aug 19, 2016 at 4:44 PM, Amit Kapila
wrote:
>Can you specify the m/c details as Andres wants tests to be conducted on
some high socket m/c?
As I have specified in the last line of my mail, it is an 8-socket Intel
machine.
available: 8 nodes (0-7)
node 0 cpus: 0 65
On Wed, Aug 17, 2016 at 9:12 PM, Andres Freund wrote:
> >Yes. I want a long wait list, modified in bulk - which should be the
> >case with the above.
>
I ran some pgbench tests, and I do not see much difference in performance;
the small variance in perf can be attributed to variance
On Sep 2, 2016 7:38 PM, "Jesper Pedersen"
wrote:
> Could you provide a rebased patch based on Amit's v5 ?
Please find the patch, based on Amit's V5.
I have fixed the following things:
1. now in "_hash_first" we check if (opaque->hasho_prevblkno ==
InvalidBlockNumber)
On Mon, Sep 5, 2016 at 4:33 PM, Aleksander Alekseev <
a.aleks...@postgrespro.ru> wrote:
>After a brief examination I've found following ways to improve the patch.
Adding to above comments.
1)
+ /*
+ * consult connection options and check if RO connection is OK
+ * RO connection is OK if readonly
On Aug 31, 2016 1:44 PM, "Victor Wagner" wrote:
> Thanks, I've added this to 7-th (yet unpublished here) version of my
> patch.
Hi Victor, I just wanted to know your plan for patch 07. Would you like
to submit it to the community? I have just signed up as a reviewer for
On Fri, Aug 26, 2016 at 10:10 AM, Mithun Cy <mithun...@enterprisedb.com>
wrote:
>
> >rpath,'/home/mithun/edbsrc/patch6bin/lib',--enable-new-dtags -lecpg
> -lpgtypes -lpq -lpgcommon -lpgport -lz -lrt -lcrypt -ldl -lm -o test1
> >../../../../../src/interfaces/libpq/libpq
On Thu, Sep 8, 2016 at 11:21 PM, Jesper Pedersen wrote:
>
> > For the archives, this patch conflicts with the WAL patch [1].
>
> > [1] https://www.postgresql.org/message-id/CAA4eK1JS%2BSiRSQBzEFp
> nsSmxZKingrRH7WNyWULJeEJSj1-%3D0w%40mail.gmail.com
>
Updated the
On Wed, Sep 7, 2016 at 7:26 PM, Victor Wagner wrote:
> No, algorithm here is more complicated. It must ensure that there would
> not be second attempt to connect to host, for which unsuccessful
> connection attempt was done. So, there is list rearrangement.
>Algorithm for pick
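The rearrangement invariant Victor describes, never retrying a host that already failed in this attempt, can be sketched with a per-host state array. The names and the reachable[] stand-in are illustrative, not libpq's real code:

```c
#include <assert.h>

enum host_state { HOST_UNTRIED = 0, HOST_FAILED, HOST_CONNECTED };

/* Walk the host list once, skipping anything already tried; returns
 * the index of the first host that connects, or -1 if none does.
 * reachable[i] stands in for the real connection attempt. */
static int
pick_host(int nhosts, enum host_state *state, const int *reachable)
{
    for (int i = 0; i < nhosts; i++)
    {
        if (state[i] != HOST_UNTRIED)
            continue;           /* no second attempt for a failed host */
        if (reachable[i])
        {
            state[i] = HOST_CONNECTED;
            return i;
        }
        state[i] = HOST_FAILED;
    }
    return -1;
}
```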
This patch does not apply on the latest code; it fails as follows:
libpq-failover-9.patch:176: trailing whitespace.
thread.o pgsleep.o
libpq-failover-9.patch:184: trailing whitespace.
check:
libpq-failover-9.patch:185: trailing whitespace.
$(prove_check)
libpq-failover-9.patch:186: trailing whitespace.
On Tue, Oct 4, 2016 at 11:55 PM, Jeff Janes wrote:
>Can you describe your benchmarking machine? Your benchmarking data went
>up to 128 clients. But how many cores does the machine have? Are
>you testing how well it can use the resources it has, or how well it can
>deal with
On Sat, Aug 6, 2016 at 9:41 AM, Amit Kapila wrote:
> I wonder why you have included a new file for these tests, why can't be
these added to existing hash_index.sql.
tests in hash_index.sql did not cover overflow pages; the above tests were
mainly for them. So I thought
On Thu, Mar 17, 2016 at 4:47 AM, David Steele wrote:
>Since there has been no response from the author I have marked this patch
>"returned with feedback". Please feel free to resubmit for 9.7!
I have started to work on this patch, and tried to fix some of the issues
On Tue, Sep 27, 2016 at 1:53 AM, Jeff Janes wrote:
> I think that this needs to be updated again for v8 of concurrent and v5
of wal
Adding the rebased patch over [1] + [2]
[1] Concurrent Hash index.
I have some more comments on libpq-failover-8.patch
+ /* FIXME need to check that port is numeric */
Is this still applicable?
1)
+ /*
+ * if there is more than one host in the connect string and
+ * target_server_type is not specified explicitely, set
+ * target_server_type to "master"
+
On Thu, Oct 27, 2016 at 5:09 PM, Mithun Cy <mithun...@enterprisedb.com>
wrote:
> This a PostgreSQL contrib module which automatically dump all of the
blocknums
>present in buffer pool at the time of server shutdown(smart and fast mode
only,
>to be enhanced to dump at regular inte
# pg_autoprewarm.
This is a PostgreSQL contrib module which automatically dumps all of the
block numbers
present in the buffer pool at the time of server shutdown (smart and fast
mode only;
to be enhanced to dump at regular intervals) and loads those blocks when
the server restarts.
Design:
--
We have created
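The dump/load cycle the design describes can be sketched as plain file I/O. A bare block number stands in here for the (database, tablespace, relation, fork, block) tuple the real module records, and the file path is made up:

```c
#include <assert.h>
#include <stdint.h>
#include <stdio.h>

/* Write one block identity per line at shutdown time. */
static int
dump_blocks(const char *path, const uint32_t *blocks, int n)
{
    FILE *f = fopen(path, "w");

    if (f == NULL)
        return -1;
    for (int i = 0; i < n; i++)
        fprintf(f, "%u\n", blocks[i]);
    fclose(f);
    return 0;
}

/* Read them back at startup; the real worker would then read each
 * block into shared buffers instead of just collecting the numbers. */
static int
load_blocks(const char *path, uint32_t *blocks, int max)
{
    FILE *f = fopen(path, "r");
    int   n = 0;

    if (f == NULL)
        return -1;
    while (n < max && fscanf(f, "%u", &blocks[n]) == 1)
        n++;
    fclose(f);
    return n;
}
```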
On Thu, Nov 3, 2016 at 7:16 PM, Robert Haas wrote:
> Great, committed. There's still potentially more work to be done
> here, because my patch omits some features that were present in
> Victor's original submission, like setting the failover timeout,
> optionally
On Thu, Nov 10, 2016 at 1:11 PM, Tsunakawa, Takayuki <
tsunakawa.ta...@jp.fujitsu.com> wrote:
> Why don't you add "standby" and "prefer_standby" as the
target_server_type value? Are you thinking that those values are useful
with the load balancing feature?
Yes, this patch will only address failover
On Mon, Nov 14, 2016 at 1:37 PM, Tsunakawa, Takayuki <
tsunakawa.ta...@jp.fujitsu.com> wrote:
> No, there's no concern about compatibility. Please look at this:
> https://www.postgresql.org/docs/devel/static/protocol-
flow.html#PROTOCOL-ASYNC
Thanks, my concern is: suppose you have 3 servers in
On Fri, Nov 11, 2016 at 7:33 PM, Peter Eisentraut <
peter.eisentr...@2ndquadrant.com> wrote:
> We tend to use the term "primary" instead of "master".
Thanks, I will use primary instead of master in my next patch.
>Will this work with logical replication?
I have not tested with logical
On Mon, Nov 14, 2016 at 11:31 AM, Tsunakawa, Takayuki <
tsunakawa.ta...@jp.fujitsu.com> wrote:
> PGconn->target_server_type is not freed in freePGconn().
Thanks, will fix in new patch.
>Could you add PGTARGETSERVERTYPE environment variable? Like other
variables, it will ease testing, since
An updated patch with some fixes for bugs reported earlier,
A. failover_to_new_master_v4.patch
Default value "any" is added to target_server_type parameter during its
definition.
B. libpq-failover-smallbugs_02.patch
Fixed the issue raised by [PATCH] pgpassfile connection option
On Wed, Nov 23, 2016 at 10:19 PM, Catalin Iacob
wrote:
On Tue, Nov 22, 2016 at 8:38 AM, Tsunakawa, Takayuki <
tsunakawa.ta...@jp.fujitsu.com> wrote:
>> If you want to connect to a server where the transaction is read-only,
then shouldn't the connection parameter be
On Fri, Nov 25, 2016 at 10:41 AM, Tsunakawa, Takayuki <
tsunakawa.ta...@jp.fujitsu.com> wrote:
> I agree that pg_conn_host should have hostaddr in addition to host, and
PQhost() return host when host is specified with/without hostaddr specified.
typedef struct pg_conn_host
+{
+ char *host; /*
On Fri, Nov 25, 2016 at 12:03 PM, Andreas Karlsson
wrote:
> Another thought about this code: should we not check if it is a unix
socket first before splitting the host? While I doubt that it is common to
have a unix socket in a directory with comma in the path it is a bit
Sorry, I took some time on this; please find the latest patch with the
review comments addressed. Apart from fixes for the comments, I have
introduced a new GUC variable for pg_autoprewarm, "buff_dump_interval".
So now we dump the buffer pool metadata every buff_dump_interval secs.
Currently
> Independently of your patch, while testing I concluded that the
multi-host feature and documentation should be improved:
> - If a connection fails, it does not say for which host/port.
Thanks, I will re-test the same.
> In fact they are tried in turn if the network connection fails, but not
> if
On Thu, Oct 13, 2016 at 1:40 PM, Michael Paquier
wrote:
> I am attaching that to the next CF.
I have tested this patch. Now we error out with OOM instead of crashing.
postgres=# SELECT '12.34'::money;
ERROR: out of memory
On Thu, Nov 17, 2016 at 8:27 AM, Robert Haas wrote:
>but SELECT pg_is_in_recovery() and SHOW transaction_read_only
>exist in older versions so if we pick either of those methods then it
>will just work.
I am adding the next version of the patch; it has the following fixes.
On Sat, Nov 19, 2016 at 8:59 AM, Pavel Stehule
wrote:
>
>
>
>> SELECT MIN(a) m FROM
>>(SELECT a FROM t WHERE a=2) AS v(a)
>>
>> The subquery is going to return an intermediate result of:
>>
>> V:A
>> ---
>> 2
>>
>> And the minimum of that is 2, which is the wrong
>
> > So I am tempted to just
> > hold my nose and hard-code the SQL as JDBC is presumably already
> doing.
JDBC sends "show transaction_read_only" to find whether it is the master
or not.
Victor's patch also started with that, but later he transformed it into
pg_is_in_recovery
as it
On Fri, Nov 18, 2016 at 6:39 AM, Tsunakawa, Takayuki <
tsunakawa.ta...@jp.fujitsu.com> wrote:
>Typo. , and "observering" -> "observing".
Thanks, fixed.
> + {"target_server_type", "PGTARGETSERVERTYPE",
DefaultTargetServerType, NULL,
> + "Target-Server-Type", "", 6,
Thanks
On Thu, Oct 27, 2016 at 11:15 PM, Robert Haas wrote:
> Thanks. Here's a new version with a fix for that issue and also a fix
> for PQconnectionNeedsPassword(), which was busted in v1.
I did some more testing of the patch for both URI and (host, port)
parameter pairs. I did
On Tue, Nov 1, 2016 at 6:54 PM, Robert Haas wrote:
>That's the wrong syntax. If you look in
> https://www.postgresql.org/docs/devel/static/libpq-connect.html under
> "32.1.1.2. Connection URIs", it gives an example of how to include a
> slash in a pathname. You have to
On Tue, Nov 1, 2016 at 7:44 PM, Robert Haas wrote:
>> Starting program: /home/mithun/libpqbin/bin/./psql
>> postgres://%2home%2mithun:/postgres -U mithun1
>Can you provide a concrete test scenario or some test code that fails?
>connhost is supposed to be getting set in
On Tue, Nov 1, 2016 at 9:42 PM, Robert Haas wrote:
> Ah, nuts. Thanks, good catch. Should be fixed in the attached version.
I repeated the test on the new patch; it works fine now. I also did some
more negative tests, forcibly failing some internal calls. All tests passed.
On Wed, Oct 26, 2016 at 8:49 PM, Robert Haas wrote:
> Let me know your thoughts.
One small issue: I tried to run make installcheck after applying the patch,
and there seems to be an invalid write access in the code (resulting in a crash).
==118997== Invalid write of size 1
==118997==
On Wed, Sep 28, 2016 at 9:56 PM, Robert Haas wrote:
> something committable will come from it, but with 2 days left it's not
> going to happen this CF.
Adding a new patch. This one uses generate_series instead of INSERT INTO
SELECT and fixes the comments from Alvaro.
--
On Fri, Sep 30, 2016 at 2:14 PM, Victor Wagner wrote:
>Ok, some trailing whitespace and mixing of tabs and spaces
>which git apply doesn't like really present in the patch.
>I'm attached hear version with these issues resolved.
There were some more trailing spaces and spaces
On Wed, Jan 11, 2017 at 12:46 AM, Robert Haas wrote:
> Can we adapt the ad-hoc caching logic in hashbulkdelete() to work with
> this new logic? Or at least update the comments?
I have introduced a new function _hash_getcachedmetap in patch 11 [1] with
this hashbulkdelete()
On Fri, Jan 6, 2017 at 11:43 AM, Amit Kapila wrote:
> Few more comments:
> 1.
> no need to two extra lines, one is sufficient and matches the nearby
> coding pattern.
-- Fixed.
> 2.
> Do you see anywhere else in the code the pattern of using @ symbol in
> comments
There is a typo in the comments of function _hash_first(); adding a fix for the same.
hashsearch_typo01.patch
On Fri, Jan 13, 2017 at 12:49 AM, Jesper Pedersen
wrote:
> Rebased, and removed the compile warn in hashfuncs.c
I did some testing and review of the patch. I did not see any major
issue, but there are a few minor cases for which I have some suggestions.
1. Source File
On Fri, Dec 2, 2016 at 2:26 AM, Robert Haas wrote:
>Yeah, we should change that. Are you going to write a patch?
Thanks, I will work on this and produce a patch to fix it.
On Wed, Nov 30, 2016 at 1:44 AM, Robert Haas wrote:
>On Tue, Nov 29, 2016 at 2:19 PM, Kuntal Ghosh
wrote:
>> I was doing some testing with the patch and I found some inconsistency
>> in the error message.
> Hmm, maybe the query buffer is
On Thu, Dec 1, 2016 at 8:10 PM, Jesper Pedersen
wrote:
>As the concurrent hash index patch was committed in 6d46f4 this patch
needs a rebase.
Thanks Jesper,
Adding the rebased patch.
I have re-run the pgbench readonly tests with below modification.
"alter table
On Tue, Dec 6, 2016 at 1:28 AM, Mithun Cy <mithun...@enterprisedb.com>
wrote:
>
> Clients | Cache Meta Page patch | Base code with Amit's changes | %imp
> 1       | 17062.513102          | 17218.353817                  | -
On Fri, Dec 2, 2016 at 9:18 AM, Mithun Cy <mithun...@enterprisedb.com>
wrote:
>On Fri, Dec 2, 2016 at 2:26 AM, Robert Haas <robertmh...@gmail.com> wrote:
>>Yeah, we should change that. Are you going to write a patch?
> Thanks, will work on this will produce
On Mon, Dec 5, 2016 at 11:23 PM, Robert Haas wrote:
>I think that you need a restoreErrorMessage call here:
>/* Skip any remaining addresses for this host. */
>conn->addr_cur = NULL;
>if
On Fri, Dec 2, 2016 at 8:54 PM, Robert Haas wrote:
> Couldn't this just be a variable in PQconnectPoll(), instead of adding
> a new structure member?
I have fixed the same with a local variable in PQconnectPoll. Initially I
thought savedMessage should have the same visibility as
On Wed, Nov 30, 2016 at 7:14 AM, Mithun Cy <mithun...@enterprisedb.com>
wrote:
> Thanks, send query resets the errorMessage. Will fix same.
>
PQsendQuery()->PQsendQueryStart()->resetPQExpBuffer(&conn->errorMessage);
Please find the patch which fixes this bug. conn->errorM
Thanks, Amit, for the detailed review and for pointing out various issues in
the patch. I have tried to fix all of your comments as below.
On Mon, Jan 2, 2017 at 11:29 AM, Amit Kapila wrote:
> 1.
> usage "into to .." in above comment seems to be wrong.
>
On Wed, Jan 4, 2017 at 4:19 PM, Mithun Cy <mithun...@enterprisedb.com> wrote:
> As part of performance/space analysis of hash index on varchar data type
> with this patch, I have run some tests for the same with modified pgbench.
On Wed, Jan 4, 2017 at 5:21 PM, Mithun Cy <mithun...@enterprisedb.com> wrote:
I have re-based the patch to fix one compilation warning in
_hash_doinsert, where the variable bucket was used only for asserting
but its purpose was not documented.
On Tue, Jan 3, 2017 at 12:05 PM, Mithun Cy <mithun...@enterprisedb.com> wrote:
As part of performance/space analysis of hash index on the varchar data type
with this patch, I have run some tests for the same with a modified pgbench.
Modification includes:
1. Changed aid column of pg_accounts table fr