On Wed, Aug 2, 2017 at 3:42 PM, Mithun Cy <mithun...@enterprisedb.com> wrote:
Sorry, there was an unnecessary header included in proc.c which should
be removed; attaching the corrected patch.
--
Thanks and Regards
Mithun C Y
EnterpriseDB: http://www.enterprise
end of the
transaction and others can reuse the cached snapshot. In the
second approach, every process has to re-compute the snapshot. So I am
keeping the same approach.
On Mon, Jul 10, 2017 at 10:13 AM, Mithun Cy <mithun...@enterprisedb.com> wrote:
> On Fri, Apr 8, 2016 at 12:13 PM
On 3 Aug 2017 2:16 am, "Andres Freund" wrote:
Hi Andres thanks for detailed review. I agree with all of the comments. I
am going for a vacation. Once I come back I will fix those comments and
will submit a new patch.
On Fri, Apr 8, 2016 at 12:13 PM, Robert Haas wrote:
> I think that we really shouldn't do anything about this patch until
> after the CLOG stuff is settled, which it isn't yet. So I'm going to
> mark this Returned with Feedback; let's reconsider it for 9.7.
I am updating
On Thu, Jul 6, 2017 at 10:52 AM, Amit Kapila wrote:
>>> 3.
>> -- I do not think that is true; pages of the unlogged table are also
>> read into buffers for read-only purposes. So if we miss dumping them
>> while we shut down, then the previous dump should be used.
>>
>
> I
On Tue, Jun 27, 2017 at 11:41 AM, Mithun Cy <mithun...@enterprisedb.com> wrote:
> On Fri, Jun 23, 2017 at 5:45 AM, Thom Brown <t...@linux.com> wrote:
>>
>> Also, I find it a bit messy that launch_autoprewarm_dump() doesn't
>> detect an autoprewarm p
On Thu, Jul 6, 2017 at 10:52 AM, Amit Kapila wrote:
> I am not able to understand what you want to say. Unlogged tables
> should be empty in case of crash recovery. Also, we never flush
> unlogged buffers except for shutdown checkpoint, refer BufferAlloc and
> in
On Mon, Jul 3, 2017 at 3:55 PM, Amit Kapila <amit.kapil...@gmail.com> wrote:
> On Fri, Jun 23, 2017 at 3:22 AM, Robert Haas <robertmh...@gmail.com> wrote:
>> On Thu, Jun 15, 2017 at 12:35 AM, Mithun Cy <mithun...@enterprisedb.com>
>> wrote:
>>
>> * Ins
On Mon, Jul 3, 2017 at 11:58 AM, Amit Kapila <amit.kapil...@gmail.com> wrote:
> On Sun, Jul 2, 2017 at 10:32 PM, Mithun Cy <mithun...@enterprisedb.com> wrote:
>> On Tue, Jun 27, 2017 at 11:41 AM, Mithun Cy <mithun...@enterprisedb.com>
>> wrote:
>>> On F
On Mon, Jul 3, 2017 at 3:34 PM, Amit Kapila wrote:
>
> Few comments on the latest patch:
>
> 1.
> + LWLockRelease(_state->lock);
> + if (!is_bgworker)
> + ereport(ERROR,
> + (errmsg("could not perform block dump because dump file is being
> used by PID %d",
> +
On Wed, Aug 16, 2017 at 2:08 AM, Robert Haas <robertmh...@gmail.com> wrote:
> On Fri, Jul 14, 2017 at 8:17 AM, Mithun Cy <mithun...@enterprisedb.com> wrote:
>> [ new patch ]
> It's quite possible that in making all of these changes I've
> introduced some bugs, so I th
On Wed, Sep 13, 2017 at 7:24 PM, Jesper Pedersen
wrote:
> I have done a run with this patch on a 2S/28C/56T/256Gb w/ 2 x RAID10 SSD
> machine.
>
> Both for -M prepared, and -M prepared -S I'm not seeing any improvements (1
> to 375 clients); e.g. +-1%.
My test was
On Mon, Sep 18, 2017 at 1:38 PM, Mithun Cy <mithun...@enterprisedb.com> wrote:
> On Sat, Sep 16, 2017 at 3:03 AM, Andres Freund <and...@anarazel.de> wrote:
> So I think performance gain is visible. We saved a good amount of
> execution cycle in SendRowDescriptionMessagewhe
On Wed, Sep 20, 2017 at 11:45 PM, Robert Haas <robertmh...@gmail.com> wrote:
> On Tue, Aug 29, 2017 at 1:57 AM, Mithun Cy <mithun...@enterprisedb.com>
wrote:
>> All TPS are median of 3 runs
>> Clients   TPS (With Patch 05)   TPS (Base)   %Diff
On Thu, Sep 21, 2017 at 7:25 AM, Robert Haas <robertmh...@gmail.com> wrote:
> On Wed, Sep 20, 2017 at 9:46 PM, Mithun Cy <mithun...@enterprisedb.com> wrote:
>> I think there is some confusion; the above results are for pgbench simple
>> update (-N) tests, where cached snapsh
On Fri, Aug 18, 2017 at 9:43 PM, Robert Haas <robertmh...@gmail.com> wrote:
> On Fri, Aug 18, 2017 at 2:23 AM, Mithun Cy <mithun...@enterprisedb.com> wrote:
> Ah, good catch. While I agree that there is probably no great harm
> from skipping the lock here, I think it would be
Hi all,
I was trying to study the NoMovementScanDirection part of heapgettup() and
heapgettup_pagemode(). If I am right, there is no test in the test suite to
hit this code. I ran make check-world and could not hit it. Also, the
coverage report in
On Mon, Aug 28, 2017 at 5:50 PM, Tom Lane wrote:
> I think that's probably dead code given that ExecutorRun short-circuits
> everything for NoMovementScanDirection. There is some use of
> NoMovementScanDirection for indexscans, to denote an unordered index,
> but likely that
On Thu, Aug 3, 2017 at 2:16 AM, Andres Freund wrote:
>I think this patch should have a "cover letter" explaining what it's
> trying to achieve, how it does so and why it's safe/correct.
Cache the SnapshotData for reuse:
===
In one of our perf analysis
Hi all, please ignore this mail; it is incomplete and I have to add
more information. Sorry, I accidentally pressed the send button
while replying.
On Tue, Aug 29, 2017 at 11:27 AM, Mithun Cy <mithun...@enterprisedb.com> wrote:
> Cache the SnapshotData
's a way to avoid copying the snapshot into the cache
> in situations where we're barely ever going to need it. But right now
> the only way I can think of is to look at the length of the
> ProcArrayLock wait queue and count the exclusive locks therein - which'd
> be expensive and intrusive...
On Thu, Dec 17, 2015 at 3:15 AM, Mithun Cy <mithun...@enterprisedb.com>
wrote:
> I have rebased the patch and tried to run pgbench.
> I see memory corruptions, attaching the valgrind report for the same.
Sorry forgot to add re-based patch, adding the same now.
After some analysis I
On Thu, Jun 16, 2016 at 12:58 PM, Amit Kapila
wrote:
I have a question regarding code changes in *_hash_first*.
+/*
+ * Conditionally get the lock on primary bucket page for search
while
+* holding lock on meta page. If we have to wait, then
On Fri, Jan 29, 2016 at 5:11 PM, Mithun Cy <mithun...@enterprisedb.com>
wrote
>
>
>
> >I just ran some tests on above patch. Mainly to compare
> >how "longer sort keys" would behave with new(Qsort) and old Algo(RS) for
> sorting.
> >I have 8GB
On Tue, Dec 29, 2015 at 4:33 AM, Peter Geoghegan wrote:
>Attached is a revision that significantly overhauls the memory patch,
>with several smaller changes.
I just ran some tests on the above patch, mainly to compare
how "longer sort keys" would behave with the new (Qsort) and old
On Sat, Jan 16, 2016 at 10:23 AM, Amit Kapila <amit.kapil...@gmail.com>
wrote:
> >On Fri, Jan 15, 2016 at 11:23 AM, Mithun Cy <mithun...@enterprisedb.com>
> wrote:
>
>> On Mon, Jan 4, 2016 at 2:28 PM, Andres Freund <and...@anarazel.de> wrote:
>>
>>
On Thu, Mar 3, 2016 at 11:50 PM, Robert Haas wrote:
>What if you apply both this and Amit's clog control log patch(es)? Maybe
>the combination of the two helps substantially more than either one alone.
I did the above tests along with Amit's clog patch. Machine: 8
On Thu, Mar 17, 2016 at 9:00 AM, Amit Kapila
wrote:
>If you see, for the Base readings, there is a performance increase up till
>64 clients and then there is a fall at 88 clients, which to me indicates
>that it hits very high contention around CLogControlLock at 88 clients
On Thu, Mar 10, 2016 at 9:39 PM, Robert Haas wrote:
>I guess there must not be an occurrence of this pattern in the
>regression tests, or previous force_parallel_mode testing would have
>found this problem. Perhaps this patch should add one?
I have added the test to
<amit.kapil...@gmail.com>
wrote:
> On Thu, Mar 10, 2016 at 1:04 PM, Mithun Cy <mithun...@enterprisedb.com>
> wrote:
>
>>
>>
>> On Thu, Mar 3, 2016 at 11:50 PM, Robert Haas <robertmh...@gmail.com>
>> wrote:
>> >What if you apply both this and
On Sat, Mar 12, 2016 at 2:32 PM, Amit Kapila
wrote:
>With force_parallel_mode=on, I could see many other failures as well. I
>think it is better to have a test which tests this functionality with
>force_parallel_mode=regress, as per the user manual.
Setting this value to
Sorry, there was some issue with my mail settings; the same mail got sent
more than once.
--
Thanks and Regards
Mithun C Y
EnterpriseDB: http://www.enterprisedb.com
On Sat, Mar 12, 2016 at 12:28 PM, Amit Kapila
wrote
>I don't see how this test will fail with force_parallel_mode=regress and
>max_parallel_degree > 0 even without the patch proposed to fix the issue in
>hand. In short, I don't think this test would have caught the issue,
On Thu, Mar 3, 2016 at 6:20 AM, Mithun Cy <mithun...@enterprisedb.com>
wrote:
> I will continue to benchmark above tests with much wider range of clients.
Latest benchmarking shows the following results for unlogged tables.
clients BASE ONLY CLOG CHANGES % Increase CLOG CHANGES + SAVE
Hi All,
Explain [Analyze] Select Into table produces a plan which uses
parallel scans.
*Test:*
create table table1 (n int);
insert into table1 values (generate_series(1,500));
analyze table1;
set parallel_tuple_cost=0;
set max_parallel_degree=3;
postgres=# explain select into
On Thu, Mar 10, 2016 at 1:36 PM, Amit Kapila
wrote:
>Do you think it makes sense to check the performance by increasing CLOG
>buffers (a patch for the same is posted in the Speed up Clog thread [1]),
>as that also relieves contention on CLOG, as per the tests I have done?
Along with
On Tue, Mar 1, 2016 at 12:59 PM, Amit Kapila
wrote:
>Don't we need to add this only when the xid of the current transaction is
>valid? Also, I think it will be better if we can explain why we need to
>add our own transaction id while caching the snapshot.
I have fixed the
I tried to do some benchmarking on postgres master head
commit 72a98a639574d2e25ed94652848555900c81a799
Author: Andres Freund
Date: Tue Apr 26 20:32:51 2016 -0700
CASE : Read-Write Tests when data exceeds shared buffers.
Non Default settings and test
./postgres -c
On Fri, May 6, 2016 at 8:35 PM, Andres Freund wrote:
> Also, do you see read-only workloads to be affected too?
Thanks, I have not tested with the above specific commit id which reported
the performance issue, but
At HEAD commit 72a98a639574d2e25ed94652848555900c81a799
Author: Andres
Tests:
create table mytab(x int,x1 char(9),x2 varchar(9));
create table mytab1(y int,y1 char(9),y2 varchar(9));
insert into mytab values (generate_series(1,5),'aa','aaa');
insert into mytab1 values (generate_series(1,1),'aa','aaa');
insert into mytab values
I did some basic testing of the same, and found one issue with cursors.
+BEGIN;
+SET enable_seqscan = OFF;
+SET enable_bitmapscan = OFF;
+CREATE FUNCTION declares_cursor(int)
+ RETURNS void
+ AS 'DECLARE c CURSOR FOR SELECT * from con_hash_index_table WHERE keycol
= $1;'
+LANGUAGE SQL;
+
I am attaching the patch to improve some coverage of the hash index code [1].
I have added some basic tests, which mainly cover overflow pages. It took
2 seconds of extra time on my machine in the parallel schedule.
                              Hit   Total   Coverage
old tests  Line Coverage      780    1478       52.7
           Function Coverage   63      85       74.1
I have created a patch to cache the meta page of the Hash index in
backend-private memory. This is to save reading the meta page buffer every
time we want to find the bucket page. In "_hash_first", we try to read the
meta page buffer twice just to make sure the bucket has not split after we
found
On Tue, Jan 31, 2017 at 9:47 PM, Robert Haas wrote:
> Now, I assume that this patch sorts the I/O (although I haven't
> checked that) and therefore I expect that the prewarm completes really
> fast. If that's not the case, then that's bad. But if it is the
> case, then
Hi all,
Here is the new patch, which fixes all of the above comments. I changed the
design a bit now, as below.
What is it?
===
A pair of bgworkers: one which automatically dumps the buffer pool's block
info at a given interval, and another which loads those blocks into the
buffer pool when
the server
On Tue, Feb 7, 2017 at 11:53 AM, Beena Emerson wrote:
> launched by other applications. Also with max_worker_processes = 2 and
> restart, the system crashes when the 2nd worker is not launched:
> 2017-02-07 11:36:39.132 IST [20573] LOG: auto pg_prewarm load : number of
Thanks Beena,
On Tue, Feb 7, 2017 at 4:46 PM, Beena Emerson wrote:
> Few more comments:
>
> = Background worker messages:
>
> - Workers when launched, show messages like: "logical replication launcher
> started”, "autovacuum launcher started”. We should probably have a
On Tue, Feb 7, 2017 at 12:24 PM, Amit Kapila wrote:
> On Tue, Feb 7, 2017 at 11:53 AM, Beena Emerson
> wrote:
>> Are 2 workers required?
>>
>
> I think in the new design there is a provision of launching the worker
> dynamically to dump the
On Tue, Feb 7, 2017 at 6:11 PM, Beena Emerson wrote:
> Yes it would be better to have only one pg_prewarm worker as the loader is
> idle for the entire server run time after the initial load activity of few
> secs.
Sorry, that is again a bug in the code. The code to
On Tue, Feb 7, 2017 at 11:21 PM, Erik Rijkers wrote:
> On 2017-02-07 18:41, Robert Haas wrote:
>>
>> Committed with some changes (which I noted in the commit message).
Thanks, Robert and all who have reviewed the patch and given their
valuable comments.
> This has caused a
On Fri, Oct 28, 2016 at 6:36 AM, Tsunakawa, Takayuki
wrote:
> I welcome this feature! I remember pg_hibernate did this. I wonder what
> happened to pg_hibernate. Did you check it?
Thanks, when I checked with pg_hibernate there were two things people
complained
On Tue, Jan 24, 2017 at 5:07 AM, Jim Nasby wrote:
> I took a look at this again, and it doesn't appear to be working for me. The
> library is being loaded during startup, but I don't see any further activity
> in the log, and I don't see an autoprewarm file in $PGDATA.
On Tue, Jan 24, 2017 at 3:10 PM, Amit Kapila wrote:
> 1.
> @@ -505,26 +505,22 @@ hashbulkdelete(IndexVacuumInfo *info,
> In the above flow, do we really need an updated metapage, can't we use
> the cached one? We are already taking care of bucket split down in
> that
On Thu, Jan 26, 2017 at 8:45 PM, Peter Eisentraut
wrote:
> Just a thought with an additional use case: If I want to set up a
> standby for offloading queries, could I take the dump file from the
> primary or another existing standby, copy it to the new standby,
On Thu, Jan 19, 2017 at 5:08 PM, Ashutosh Sharma wrote:
Thanks, Ashutosh and Jesper. I have tested the patch and do not have any
more comments, so I am marking it ready for committer.
--
Thanks and Regards
Mithun C Y
EnterpriseDB: http://www.enterprisedb.com
> HashMetaPage _hash_getcachedmetap(Relation rel, Buffer *metabuf, bool
> force_refresh);
>
> If the cache is initialized and force_refresh is not true, then this
> just returns the cached data, and the metabuf argument isn't used.
> Otherwise, if *metabuf == InvalidBuffer, we set *metabuf to
Thanks, Amit
On Mon, Feb 20, 2017 at 9:51 PM, Amit Kapila wrote:
> How will high and lowmask calculations work in this new strategy?
> Till now they always work on doubling strategy and I don't see you
> have changed anything related to that code. Check below places.
On Fri, Feb 10, 2017 at 1:06 AM, Robert Haas wrote:
> Alright, committed with a little further hacking.
I did pull the latest code, and tried
Test:
create table t1(t int);
create index i1 on t1 using hash(t);
insert into t1 select generate_series(1, 1000);
Hi all, thanks. I have tried to fix all of the comments given above, with
some more code cleanups.
On Wed, Feb 22, 2017 at 6:28 AM, Robert Haas wrote:
> I think it's OK to treat that as something of a corner case. There's
> nothing to keep you from doing that today: just
Hi all,
As of now, we expand the hash index by doubling the number of bucket
blocks. But unfortunately, those blocks will not be used immediately.
So I thought that if we can defer bucket block allocation by some factor,
the hash index size can grow much more efficiently. I have written a POC
patch which does
On Tue, Jan 17, 2017 at 10:07 AM, Amit Kapila wrote:
> 1.
> (a) I think you can retain the previous comment or modify it slightly.
> Just removing the whole comment and replacing it with a single line
> seems like a step backward.
-- Fixed, Just modified to say it
> (b)
Test:
--
create table seq_tab(t int);
insert into seq_tab select generate_series(1, 1000);
select count(distinct t) from seq_tab;
#0 0x0094a9ad in pfree (pointer=0x0) at mcxt.c:1007
#1 0x00953be3 in mergeonerun (state=0x1611450) at tuplesort.c:2803
#2 0x00953824
>
> 7. I think it is not your bug, but probably a bug in Hash index
> itself; page flag is set to 66 (for below test); So the page is both
> LH_BUCKET_PAGE and LH_BITMAP_PAGE. Is not this a bug in hash index?
>
> I have inserted 3000 records. Hash index is on integer column.
> select hasho_flag
On Tue, Jan 17, 2017 at 3:06 PM, Amit Kapila wrote:
>
> I think your calculation is not right. 66 indicates LH_BUCKET_PAGE |
> LH_BUCKET_NEEDS_SPLIT_CLEANUP which is a valid state after the split.
> This flag will be cleared either during next split or when vacuum
>
On Fri, Aug 19, 2016 at 4:44 PM, Amit Kapila
wrote:
>Can you specify the m/c details as Andres wants tests to be conducted on
some high socket m/c?
As I have specified in the last line of my mail, it is an 8-socket Intel
machine.
available: 8 nodes (0-7)
node 0 cpus: 0 65
On Wed, Aug 17, 2016 at 9:12 PM, Andres Freund wrote:
> >Yes. I want a long wait list, modified in bulk - which should be the
> >case with the above.
>
I ran some pgbench, and I do not see much difference in performance; the
small variance in perf can be attributed to variance
On Sep 2, 2016 7:38 PM, "Jesper Pedersen"
wrote:
> Could you provide a rebased patch based on Amit's v5 ?
Please find the patch, based on Amit's V5.
I have fixed following things
1. now in "_hash_first" we check if (opaque->hasho_prevblkno ==
InvalidBlockNumber)
On Mon, Sep 5, 2016 at 4:33 PM, Aleksander Alekseev <
a.aleks...@postgrespro.ru> wrote:
>After a brief examination I've found following ways to improve the patch.
Adding to above comments.
1)
+ /*
+ * consult connection options and check if RO connection is OK
+ * RO connection is OK if readonly
On Aug 31, 2016 1:44 PM, "Victor Wagner" wrote:
> Thanks, I've added this to 7-th (yet unpublished here) version of my
> patch.
Hi Victor, I just wanted to know your plan for your patch 07. Would you
like to submit it to the community? I have just signed up as a reviewer for
On Fri, Aug 26, 2016 at 10:10 AM, Mithun Cy <mithun...@enterprisedb.com>
wrote:
>
> >rpath,'/home/mithun/edbsrc/patch6bin/lib',--enable-new-dtags -lecpg
> -lpgtypes -lpq -lpgcommon -lpgport -lz -lrt -lcrypt -ldl -lm -o test1
> >../../../../../src/interfaces/libpq/libpq
On Thu, Sep 8, 2016 at 11:21 PM, Jesper Pedersen wrote:
>
> > For the archives, this patch conflicts with the WAL patch [1].
>
> > [1] https://www.postgresql.org/message-id/CAA4eK1JS%2BSiRSQBzEFp
> nsSmxZKingrRH7WNyWULJeEJSj1-%3D0w%40mail.gmail.com
>
Updated the
On Wed, Sep 7, 2016 at 7:26 PM, Victor Wagner wrote:
> No, algorithm here is more complicated. It must ensure that there would
> not be second attempt to connect to host, for which unsuccessful
> connection attempt was done. So, there is list rearrangement.
>Algorithm for pick
This patch does not apply on the latest code; it fails as follows:
libpq-failover-9.patch:176: trailing whitespace.
thread.o pgsleep.o
libpq-failover-9.patch:184: trailing whitespace.
check:
libpq-failover-9.patch:185: trailing whitespace.
$(prove_check)
libpq-failover-9.patch:186: trailing whitespace.
On Tue, Oct 4, 2016 at 11:55 PM, Jeff Janes wrote:
>Can you describe your benchmarking machine? Your benchmarking data went
>up to 128 clients. But how many cores does the machine have? Are
>you testing how well it can use the resources it has, or how well it can
>deal with
On Sat, Aug 6, 2016 at 9:41 AM, Amit Kapila wrote:
> I wonder why you have included a new file for these tests; why can't
> these be added to the existing hash_index.sql?
The tests in hash_index.sql did not cover overflow pages; the above tests
were mainly for them. So I thought
On Thu, Mar 17, 2016 at 4:47 AM, David Steele wrote:
>Since there has been no response from the author I have marked this patch
>"returned with feedback". Please feel free to resubmit for 9.7!
I have started to work on this patch, and tried to fix some of the issues
On Tue, Sep 27, 2016 at 1:53 AM, Jeff Janes wrote:
> I think that this needs to be updated again for v8 of concurrent and v5
of wal
Adding the rebased patch over [1] + [2]
[1] Concurrent Hash index.
I have some more comments on libpq-failover-8.patch
+ /* FIXME need to check that port is numeric */
Is this still applicable?
1)
+ /*
+ * if there is more than one host in the connect string and
+ * target_server_type is not specified explicitly, set
+ * target_server_type to "master"
+
On Thu, Oct 27, 2016 at 5:09 PM, Mithun Cy <mithun...@enterprisedb.com>
wrote:
> This a PostgreSQL contrib module which automatically dump all of the
blocknums
>present in buffer pool at the time of server shutdown(smart and fast mode
only,
>to be enhanced to dump at regular inte
# pg_autoprewarm.
This is a PostgreSQL contrib module which automatically dumps all of the
block numbers present in the buffer pool at the time of server shutdown
(smart and fast mode only; to be enhanced to dump at a regular interval)
and loads these blocks when the server restarts.
Design:
--
We have created
On Thu, Nov 3, 2016 at 7:16 PM, Robert Haas wrote:
> Great, committed. There's still potentially more work to be done
> here, because my patch omits some features that were present in
> Victor's original submission, like setting the failover timeout,
> optionally
On Thu, Nov 10, 2016 at 1:11 PM, Tsunakawa, Takayuki <
tsunakawa.ta...@jp.fujitsu.com> wrote:
> Why don't you add "standby" and "prefer_standby" as the
> target_server_type values? Are you thinking that those values are useful
> with the load balancing feature?
Yes this patch will only address failover
On Mon, Nov 14, 2016 at 1:37 PM, Tsunakawa, Takayuki <
tsunakawa.ta...@jp.fujitsu.com> wrote:
> No, there's no concern about compatibility. Please look at this:
> https://www.postgresql.org/docs/devel/static/protocol-
flow.html#PROTOCOL-ASYNC
Thanks, my concern is: suppose you have 3 servers in
On Fri, Nov 11, 2016 at 7:33 PM, Peter Eisentraut <
peter.eisentr...@2ndquadrant.com> wrote:
> We tend to use the term "primary" instead of "master".
Thanks, I will use primary instead of master in my next patch.
>Will this work with logical replication?
I have not tested with logical
On Mon, Nov 14, 2016 at 11:31 AM, Tsunakawa, Takayuki <
tsunakawa.ta...@jp.fujitsu.com> wrote:
> PGconn->target_server_type is not freed in freePGconn().
Thanks, will fix in new patch.
>Could you add PGTARGETSERVERTYPE environment variable? Like other
variables, it will ease testing, since
An updated patch with some fixes for bugs reported earlier,
A. failover_to_new_master_v4.patch
A default value of "any" is added to the target_server_type parameter
during its definition.
B. libpq-failover-smallbugs_02.patch
Fixed the issue raised by [PATCH] pgpassfile connection option
On Wed, Nov 23, 2016 at 10:19 PM, Catalin Iacob
wrote:
On Tue, Nov 22, 2016 at 8:38 AM, Tsunakawa, Takayuki <
tsunakawa.ta...@jp.fujitsu.com> wrote:
>> If you want to connect to a server where the transaction is read-only,
then shouldn't the connection parameter be
On Fri, Nov 25, 2016 at 10:41 AM, Tsunakawa, Takayuki <
tsunakawa.ta...@jp.fujitsu.com> wrote:
> I agree that pg_conn_host should have hostaddr in addition to host, and
PQhost() return host when host is specified with/without hostaddr specified.
typedef struct pg_conn_host
+{
*+ char *host; /*
On Fri, Nov 25, 2016 at 12:03 PM, Andreas Karlsson
wrote:
> Another thought about this code: should we not check if it is a unix
> socket first before splitting the host? While I doubt that it is common to
> have a unix socket in a directory with a comma in the path, it is a bit
Sorry, I took some time on this; please find the latest patch with the
review comments addressed. Apart from fixes for the comments, I have
introduced a new GUC variable for pg_autoprewarm, "buff_dump_interval". So
now we dump the buffer pool metadata every buff_dump_interval seconds.
Currently
> Independently of your patch, while testing I concluded that the
multi-host feature and documentation should be improved:
> - If a connection fails, it does not say for which host/port.
Thanks, I will re-test the same.
> In fact they are tried in turn if the network connection fails, but not
> if
On Thu, Oct 13, 2016 at 1:40 PM, Michael Paquier
wrote:
> I am attaching that to the next CF.
I have tested this patch. Now we error out with an "out of memory" error
instead of crashing.
postgres=# SELECT '12.34'::money;
ERROR: out of memory
On Thu, Nov 17, 2016 at 8:27 AM, Robert Haas wrote:
>but SELECT pg_is_in_recovery() and SHOW transaction_read_only
>exist in older versions so if we pick either of those methods then it
>will just work.
I am adding next version of the patch it have following fixes.
On Sat, Nov 19, 2016 at 8:59 AM, Pavel Stehule
wrote:
>
>
>
>> SELECT MIN(a) m FROM
>>(SELECT a FROM t WHERE a=2) AS v(a)
>>
>> The subquery is going to return an intermediate result of:
>>
>> V:A
>> ---
>> 2
>>
>> And the minimum of that is 2, which is the wrong
>
> > So I am tempted to just
> > hold my nose and hard-code the SQL as JDBC is presumably already
> doing.
JDBC is sending "show transaction_read_only" to find whether it is a master
or not.
Victor's patch also started with it, but later it was transformed into
pg_is_in_recovery
by him, as it
On Fri, Nov 18, 2016 at 6:39 AM, Tsunakawa, Takayuki <
tsunakawa.ta...@jp.fujitsu.com> wrote:
>Typo. , and "observering" -> "observing".
Thanks, fixed.
> + {"target_server_type", "PGTARGETSERVERTYPE",
DefaultTargetServerType, NULL,
> + "Target-Server-Type", "", 6,
Thanks
On Thu, Oct 27, 2016 at 11:15 PM, Robert Haas wrote:
> Thanks. Here's a new version with a fix for that issue and also a fix
> for PQconnectionNeedsPassword(), which was busted in v1.
I did some more testing of the patch for both URI and (host, port)
parameter pairs. I did
On Tue, Nov 1, 2016 at 6:54 PM, Robert Haas wrote:
>That's the wrong syntax. If you look in
> https://www.postgresql.org/docs/devel/static/libpq-connect.html under
> "32.1.1.2. Connection URIs", it gives an example of how to include a
> slash in a pathname. You have to