h, this contains the max limit on extra pages: 512 pages (4MB) is the max limit.
I have also measured the performance, and it looks equally good.
--
Regards,
Dilip Kumar
EnterpriseDB: http://www.enterprisedb.com
diff --git a/src/backend/access/heap/hio.c b/src/backend/access/heap/hio.c
index 8140418..fc
9909
39211 38837
33 19854 19932 39230 38876
34 19867 19949 39249 39088
35 19891 19990 39259 39148
36 20038 20085 39286 39453
37 20083 20128 39435 39563
38 20143 20166 39448 39959
39 20191 20198 39475 40495
40 20437 20455 40375 40664
On Thu, Mar 17, 2016 at 1:31 PM, Petr Jelinek <p...@2ndquadrant.com> wrote:
> Great.
>
> Just small notational thing, maybe this would be simpler?:
> extraBlocks = Min(512, lockWaiters * 20);
>
Done, new patch attached.
performance at 32 clients means we are extending up to 32*20 = 640 pages at
a time. So with the 4MB limit (max 512 pages) the results should look similar.
We need to decide whether 4MB is a good limit; should I change it?
d to 20 when I tested 50 with 4-byte COPY (I did not test other data
sizes with 50).
s ?
I tested with multiple client loads (1..64) and multiple record sizes (4-byte
records to 1KB records), with both COPY and INSERT, and found that 20 works best.
9462139
7 19857 39081 62983
8 19910 39923 75358
9 20169 39937 77481
10 20181 40003 78462
------
est flow is very
high).
51 130
8 43 147
16 40 209
32 --- 254
64 --- 205
diff --git a/src/backend/access/heap/hio.c b/src/backend/access/heap/hio.c
index 8140418..b73535c 100644
---
er moving the code that adds multiple blocks at a time to
> its own function instead of including it all in line.
>
Done.
Attaching the latest patch.
diff --git a/src/backend/access/heap/hio.c b/src/backend/access/heap/hio.c
index
264
* (waiter*20) -> The first process to get the lock finds the number of lock
waiters and adds waiter*20 extra blocks.
In the next run I will go beyond 32 clients as well; we can see that even at
32 clients it is still increasing, so clearly when it sees more contention it
adapts to the contention and adds more blocks.
--
On Wed, Mar 2, 2016 at 11:05 AM, Dilip Kumar <dilipbal...@gmail.com> wrote:
> And this latest result (no regression) is on X86 but on my local machine.
>
> I did not exactly saw what this new version of patch is doing different,
> so I will test this version in other mac
On Wed, Mar 2, 2016 at 10:31 AM, Dilip Kumar <dilipbal...@gmail.com> wrote:
> 1. One option can be as you suggested like ProcArrayGroupClearXid, With
> some modification, because when we wait for the request and extend w.r.t
> that, may be again we face the Context Switch proble
> > > > >>./pgbench -j$ -c$ -T300 -M prepared -S postgres
>> > > > >>client basepatch
>> > > > >>1 7057 5230
>> > > > >>2 10043 9573
>>
nding pace..
If (success - failure > Threshold)
{
    /* Cannot reduce it by a big number: maybe more requests are being
       satisfied because this is the correct amount, so gradually decrease
       the pace and re-analyse the statistics next time. */
    ExtendByBlock--;
    failure = success = 0;
}
Any suggestions are welcome.
On Tue, Mar 1, 2016 at 10:19 AM, Dilip Kumar <dilipbal...@gmail.com> wrote:
>
> OK, I will test it, sometime in this week.
>
I have tested this patch on my laptop, and there I did not see any
regression at 1 client.
Shared buffers 10GB, 5-minute pgbench run, read-only test.
On Wed, Feb 10, 2016 at 7:06 PM, Dilip Kumar <dilipbal...@gmail.com> wrote:
>
I have tested the relation extension patch from various aspects; performance
results and other statistical data are explained in this mail.
Test 1: Identify whether the heavyweight lock is the problem or the actua
pact, but first one
> could impact performance? Is it possible for you to get perf data with and
> without patch and share with others?
>
I only reverted commit ac1d794 in my test; in my next run I will revert
6150a1b0 as well and retest.
oad for a 5-minute run; when I get time, I will run for a
longer time and confirm again.
Shared Buffer= 8GB
Scale Factor=300
./pgbench -j$ -c$ -T300 -M prepared -S postgres
client base patch
1 7057 5230
2 10043 9573
4 20140 18188
th
> space from FSM. It seems to me that we should re-check the
> availability of page because while one backend is waiting on extension
> lock, other backend might have added pages. To re-check the
> availability we might want to use something similar to
> LWLockAcquireOrWait() semantics as used during WAL writing.
>
I will work on this in the next version.
On Sun, Jan 31, 2016 at 11:44 AM, Dilip Kumar <dilipbal...@gmail.com> wrote:
> By looking at the results with scale factor 1000 and 100 i don't see any
> reason why it will regress with scale factor 300.
>
> So I will run the test again with scale factor 300 and this time i am
h scale factor 1000 and 100 I don't see any
reason why it will regress with scale factor 300.
So I will run the test again with scale factor 300, and this time I am
planning to run 2 cases:
1. when data fits in shared buffer
2. when data doesn't fit in shared buffer.
On Thu, Jan 28, 2016 at 4:53 PM, Dilip Kumar <dilipbal...@gmail.com> wrote:
> I did not find any regression in the normal case.
> Note: I tested it with previous patch extend_num_pages=10 (guc parameter)
> so that we can see any impact on overall system.
>
Just forgot to mention
On Mon, Jan 25, 2016 at 11:59 AM, Dilip Kumar <dilipbal...@gmail.com> wrote:
1.
>> Patch is not getting compiled.
>>
>> 1>src/backend/access/heap/hio.c(480): error C2065: 'buf' : undeclared
>> identifier
>>
> Oh, My mistake, my preprocessor is ignoring
.mutex access will be quite frequent.
And in this case I do see a very good improvement on the POWER8 server.
Test Result:
Scale Factor:300
Shared Buffer:512MB
pgbench -c$ -j$ -S -M Prepared postgres
Clientbasepatch
64 222173318318
128195805 26229
endent .c files:
>
Yes, actually I always compile using "make clean; make -j20; make install".
If you want, I will run it again, maybe today or tomorrow, and post the
result.
ber(buffer) after releasing
> the pin on buffer which will be released by
> UnlockReleaseBuffer(). Get the block number before unlocking
> the buffer.
>
Good catch; I will fix this as well in the next version.
have a new storage_parameter at table level
> extend_by_blocks or something like that instead of GUC. The
> default value of this parameter should be 1 which means retain
> current behaviour by default.
>
+1
alue is reduced from 3.86% (master) to 1.72% (patch).
I plan to investigate further, in different scenarios of dynahash.
375919 332670
64 509067 440680
128 431346 415121
256 380926 379176
diff --git a/src/backend/storage/buffer/buf_init.c b/s
d
1 20548 20791
32 372633 355356
64 532052 552148
128 412755 478826
256 346701 372057
extend_num_page as a session-level parameter, but I
think later we can make it a table property.
Any suggestions on this?
Apart from this approach, I also tried extending the file by multiple blocks
in one extend call, but this approach (extending one by one) is performing
better.
On Wed, Jan 6, 2016 at 10:29 PM, Robert Haas <robertmh...@gmail.com> wrote:
> On Mon, Jan 4, 2016 at 8:52 PM, Dilip Kumar <dilipbal...@gmail.com> wrote:
> > One strange behaviour, after increasing number of processor for VM,
> > max_parallel_degree=0; is also performi
On Thu, Dec 24, 2015 at 4:45 AM, Robert Haas <robertmh...@gmail.com> wrote:
> On Wed, Dec 23, 2015 at 2:34 AM, Dilip Kumar <dilipbal...@gmail.com>
> wrote:
> > Yeah right, After applying all three patches this problem is fixed, now
> > parallel hash join is faster th
On Tue, Jan 5, 2016 at 1:52 AM, Robert Haas <robertmh...@gmail.com> wrote:
> On Mon, Jan 4, 2016 at 4:50 AM, Dilip Kumar <dilipbal...@gmail.com> wrote:
> > I tried to create a inner table such that, inner table data don't fit in
> RAM
> > (I created
On Fri, Dec 18, 2015 at 10:51 AM, Dilip Kumar <dilipbal...@gmail.com> wrote:
> On Sun, Jul 19 2015 9:37 PM Andres Wrote,
>
> > The situation the read() protect us against is that two backends try to
> > extend to the same block, but after one of them succeeded the b
ows=300 loops=1)
Planning time: 0.314 ms
Execution time: 17833.143 ms
(11 rows)
On Fri, Dec 18, 2015 at 8:47 PM, Robert Haas <robertmh...@gmail.com> wrote:
> On Fri, Dec 18, 2015 at 3:54 AM, Dilip Kumar <dilip
On Tue, Dec 22, 2015 at 8:30 PM, Robert Haas <robertmh...@gmail.com> wrote:
> On Tue, Dec 22, 2015 at 4:14 AM, Dilip Kumar <dilipbal...@gmail.com>
> wrote:
> > On Fri, Dec 18, 2015 at 8:47 PM Robert Wrote,
> >>> Yes, you are right, that create_gather_
here?
This is the TPC-H benchmark case; we can set it up like this:
1. git clone https://tkej...@bitbucket.org/tkejser/tpch-dbgen.git
2. compile using make
3. ./dbgen -v -s 5
4. ./qgen
On Fri, Dec 18, 2015 at 7:59 AM, Robert Haas <robe
d q*_parallel.out respectively.
For Q3 it is not selecting a parallel plan.
On Thu, Dec 17, 2015 at 11:03 AM, Amit Kapila <amit.kapil...@gmail.com>
wrote:
> On Wed, Dec 16, 2015 at 9:55 PM, Dilip Kumar <dilipbal...@gmail.com&g
ategy);
bistate can be NULL in the direct-insert case, as opposed to the COPY case
On Sun, Jul 19, 2015 at 9:37 PM, Andres Freund <and...@anarazel.de> wrote:
> On 2015-07-19 11:56:47 -0400, Tom Lane wrote:
> > Andres Freund <and.
relcache reference leak: relation "customer" not closed
CONTEXT: parallel worker, PID 123411
WARNING: Snapshot reference leak: Snapshot 0x2d1fee8 still referenced
CONTEXT: parallel worker, PID 123411
psql:q7.sql:40: WARNING: relcache reference leak: relation "customer" not
There is a minor issue in the psql documentation for \setenv.
The attached patch fixes it.
Actual option:
\setenv NAME [VALUE] -- set or unset environment variable
In the documentation:
\setenv [ name [ value ] ] -- name is not optional, so it should be:
\setenv name [ value ]
Regards,
On 23 January 2015 21:10, Alvaro Herrera Wrote,
In case you're up for doing some more work later on, there are two
ideas
here: move the backend's TranslateSocketError to src/common, and try to
merge pg_dump's select_loop function with the one in this new code.
But that's for another patch
On 23 January 2015 23:55, Alvaro Herrera,
-j1 is now the same as not specifying anything, and vacuum_one_database
uses more common code in the parallel and not-parallel cases: the not-
parallel case is just the parallel case with a single connection, so
the setup and shutdown is mostly the
On 22 January 2015 23:16, Alvaro Herrera Wrote,
Here's v23.
There are two things that continue to bother me and I would like you,
dear patch author, to change them before committing this patch:
1. I don't like having vacuum_one_database() and a separate
vacuum_one_database_parallel(). I
On 20 December 2014 16:30, Amit Kapila Wrote,
Summarization of latest changes:
1. Changed the file name from symlink_label to tablespace_map, and changed
the same everywhere in comments and variable names.
2. This feature will be supported on both Windows and Linux; the tablespace_map
file will be
On 07 January 2015 11:21, Amit Kapila Wrote,
Tom has spotted this problem and suggested 3 different options
to handle this issue, apart from above 2, third one is Go over to
a byte-count-then-value format. Then Andrew and Heikki
supported/asked to follow option 2 (as is followed in patch) and no
On 29 December 2014 10:22 Amit Kapila Wrote,
Case 1: In the case of a complete DB:
In the base code it will first process all the tables in stage 1, then in stage 2,
and so on, so that at any point all the tables are analyzed at least up to a
certain stage.
But if we process all the stages for one table
On 19 December 2014 16:41, Amit Kapila Wrote,
One idea is to send all the stages and corresponding Analyze commands
to server in one go which means something like
BEGIN; SET default_statistics_target=1; SET vacuum_cost_delay=0;
Analyze t1; COMMIT;
BEGIN; SET default_statistics_target=10; RESET
On December 2014 17:31 Amit Kapila Wrote,
I suggest rather than removing, edit the comment to indicate
the idea behind code at that place.
Done
Okay, I think this part of code is somewhat similar to what
is done in pg_dump/parallel.c with some differences related
to handling of inAbort. One
December 2014 20:01
To: Dilip kumar
Cc: Magnus Hagander; Alvaro Herrera; Jan Lentfer; Tom Lane;
PostgreSQL-development; Sawada Masahiko; Euler Taveira
Subject: Re: [HACKERS] TODO : Allow parallel cores to be used by vacuumdb [ WIP
]
On Mon, Dec 1, 2014 at 12:18 PM, Dilip kumar
dilip.ku
here we are setting each target once and doing it for all the tables.
Please provide your opinion.
Regards,
Dilip Kumar
test.sql
Description: test.sql
vacuumdb_parallel_v19.patch
Description: vacuumdb_parallel_v19.patch
--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org
, analyze_only, freeze, sql);
appendPQExpBuffer(sql, "%s", cell->val);
..
}
I think it is better to end the command with ';' by using
appendPQExpBufferStr(sql, ";"); in the above code.
Done
Latest patch is attached, please have a look.
Regards,
Dilip Kumar
vacuumdb_parallel_v18.patch
Description
On 13 November 2014 15:35 Amit Kapila Wrote,
As you mentioned offlist that you are not able to reproduce this
issue, I have tried again, and what I observe is that I am able to
reproduce it only on a *release* build; some cases work without
this issue as well,
example:
./vacuumdb
)
continue;
Regards,
Dilip Kumar
On 28 October 2014 09:18, Amit Kapila Wrote,
I am worried about the case if after setting the inAbort flag,
PQCancel() fails (returns error).
If select(maxFd + 1, workerset, NULL, NULL, tv) returns, we can check
whether it returned because of a cancel request and handle it accordingly.
Yeah,
“SetCancelConn”, as Amit pointed out; now this is also fixed.
Regards,
Dilip Kumar
vacuumdb_parallel_v15.patch
Description: vacuumdb_parallel_v15.patch
On 26 September 2014 12:24, Amit Kapila Wrote,
I don't think this can handle cancel requests properly because
you are just setting it in GetIdleSlot() what if the cancel
request came during GetQueryResult() after sending sql for
all connections (probably thats the reason why Jeff is not able
to
On 24 August 2014 11:33, Amit Kapila Wrote
Thanks for your comments. I have worked on both review comment lists, sent
on 19 August and 24 August.
The latest patch is attached to this mail.
on 19 August:
You can compare against SQLSTATE by using below API.
val =
On 11 September 2014 10:21, Amit kapila Wrote,
I don't think currently such a limitation is mentioned in docs,
however I think we can update the docs at below locations:
1. In description of pg_start_backup in below page:
On 12 September 2014 14:34, Amit Kapila Wrote
Please find updated patch to include those documentation changes.
Looks fine, Moved to Ready for committer.
With Regards,
Amit Kapila.
EnterpriseDB: http://www.enterprisedb.com
On 20 August 2014 19:49, Amit Kapila Wrote
There are some comments I would like to share with you
1. Rebase the patch to current GIT head.
Done.
+ initStringInfo(symlinkfbuf);
I think declaration and initialization of symlinkfbuf string can
be
objection for the same.
Ok, I will take care of that along with the other comment fixes.
Regards,
Dilip Kumar
15 July 2014 19:29 Amit Kapila Wrote,
Implementation details:
---
1. This feature is implemented only for the tar format on Windows,
as native Windows utilities are not able to create symlinks while
extracting files from tar (it might be possible to create symlinks
if
On 11 August 2014 10:29, Amit kapila wrote,
1. I have fixed all the review comments except a few; the modified patch is
attached.
2. For the comments not fixed, find inline replies in the mail.
1.
+Number of parallel connections to perform the operation. This option
will enable the
with 1 records and few dead tuples
Test Result
Base Code: 0.59s
Parallel Vacuum Code
2 Threads : 0.50s
4 Threads : 0.29s
8 Threads : 0.18s
Regards,
Dilip Kumar
From: Amit Kapila [mailto:amit.kapil...@gmail.com]
Sent: 31
On 21 July 2014 13:39, Fabien COELHO Wrote
This patch does update the documentation as stated and makes it
consistent with reality and the embedded psql help. This is an
improvement and I recommend its inclusion.
I would also suggest to move the sentence at the end of the description:
table list in the beginning; after that, all connections will be involved in
the vacuum task.
Please have a look and provide your opinion.
Thanks Regards,
Dilip Kumar
vacuumdb_parallel_v11.patch
Description: vacuumdb_parallel_v11.patch
:)
Thanks Regards,
Dilip Kumar
From: Magnus Hagander [mailto:mag...@hagander.net]
Sent: 16 July 2014 12:13
To: Alvaro Herrera
Cc: Dilip kumar; Jan Lentfer; Tom Lane; PostgreSQL-development; Sawada
Masahiko; Euler Taveira
Subject: Re: [HACKERS] TODO : Allow parallel cores to be used by vacuumdb
Currently \pset is also supported without any argument, so the same is updated
in the documentation.
\pset option [ value ]
Changed to
\pset [ option [ value ] ]
Thanks Regards,
Dilip Kumar
psql_doc.patch
Description: psql_doc.patch
On 12 July 2014 23:25, Emre Hasegeli Wrote,
I have one last comment, after clarifying this I can move it to
ready for committer.
1. In networkjoinsel, For avoiding the case of huge statistics, only
some of the values from mcv and histograms are used (calculated using
SQRT).
-- But in my
, if both histograms and MCVs exist then it's fine, but
if only MCVs are there, we can match the complete MCV list, which will give
better accuracy.
Other functions like eqjoinsel also match the complete MCV list.
Thanks Regards,
Dilip Kumar
On 04 July 2014 12:07, Abhijit Menon-Sen Wrote,
-Original Message-
From: Abhijit Menon-Sen [mailto:a...@2ndquadrant.com]
Sent: 04 July 2014 12:07
To: Dilip kumar
Cc: pgsql-hackers@postgresql.org; furu...@pm.nttdata.co.jp
Subject: Re: [HACKERS] pg_xlogdump --stats
At 2014-06-30 05
-- in inet_his_inclusion_selec comments
histogram boundies - histogram boundaries :)
Thanks Regards,
Dilip Kumar
not tried to track down the code that causes it. I did notice
that vacuumdb spends an awful lot of time at the top of the Linux top
output, and this is probably why.
I will look into these and fix them.
Thanks Regards,
Dilip Kumar
.
Thanks Regards,
Dilip Kumar
On 13 June 2014 13:01, Abhijit Menon-Sen Wrote
I've changed this to use %zu at Álvaro's suggestion. I'll post an
updated patch after I've finished some (unrelated) refactoring.
I have started reviewing the patch:
1. Patch applies to git head cleanly.
2. Compiles on Linux -- some warnings
you want to mention here ?
Regards,
Dilip Kumar
On 24 June 2014 11:02 Jeff Wrote,
I mean that the other commit, the one conflicting with your patch, is still
not finished. It probably would not have been committed if we realized the
problem at the time. That other patch runs analyze in stages at
different settings of
On 23 May 2014 12:43 David Rowley Wrote,
I'm hitting a bit of a roadblock on point 1. Here's a snippet from my latest
attempt:
if (bms_membership(innerrel->relids) == BMS_SINGLETON)
{
int subqueryrelid =
On 19 May 2014 12:15 David Rowley Wrote,
I think you are right here, it would be correct to remove that join, but I
also think that the query in question could be quite easily be written as:
select t1.a from t1 left join t2 on t1.a=t2.b;
Where the join WILL be removed. The distinct clause here
On 18 May 2014 16:38 David Rowley Wrote
Sound like a good idea to me..
I have one doubt regarding the implementation, consider the below query
Create table t1 (a int, b int);
Create table t2 (a int, b int);
Create unique index on t2(b);
select x.a from t1 x left join (select distinct t2.a a1,
)
Sort Key: tb.a
- Seq Scan on t1 tb (cost=0.00..73.04 rows=5004 width=4)
Planning time: 0.286 ms
(10 rows)
Thanks Regards,
Dilip Kumar
merge_join_nonequal.patch
Description: merge_join_nonequal.patch
for other performance scenarios are welcome..
Thanks Regards,
Dilip Kumar
On 09 April 2014 13:31, Nicolas Barbier Wrote
Do you have a real-world example use case of such joins, to offset the
extra planner time that will likely have to be paid (even for queries
for which the functionality ends up not being used)?
I guess there might be queries that join on “values
and
outer.
* Then the cost of NLJ is always O(r*q).
* The cost of this MJ will be between O(n) and O(r*q),
where r is the number of tuples in R (the outer relation) and q is the number
of tuples in Q (the inner relation).
Please provide your feedback/suggestions.
Thanks Regards,
Dilip Kumar
On 20 December 2013 19:43, MauMau Wrote
[Problem]
If the backend is terminated with SIGKILL while psql is running \copy
table_name from file_name, the \copy never ends. I expected the \copy
to be cancelled because the corresponding server process vanished.
[Cause]
psql could not
On 01/14/2014 11:25 AM Craig Ringer Wrote,
As per the current behavior, if a user wants to build in debug mode on Windows,
he needs to give "debug" in capital letters (DEBUG).
I think many users will make mistakes with this option; in my opinion
we can make it case-insensitive.
I have attached a small patch for the same ( just
The attached patch implements the following TODO item:
machine-readable pg_controldata?
http://www.postgresql.org/message-id/4b901d73.8030...@agliodbs.com
Possible approaches:
1. Implement as a backend function and provide a view to the user.
- But in this approach the user can only get
On Fri, Dec 13, 2013 at 11:25, Sawada Masahiko Wrote
I attached the patch which have modified based on Robert suggestion,
and fixed typo.
I have reviewed the modified patch and I have some comments:
1. The patch needs to be rebased (it fails to apply on head).
2. The crc field should be at
On 04 December 2013, Sawada Masahiko Wrote
I attached the patch which have modified based on Robert suggestion,
and fixed typo.
I have reviewed the modified patch and I have some comments:
1. The patch needs to be rebased (it fails to apply on head).
2. The crc field should be at the end in
,
Dilip Kumar
On 20 November 2013 22:12, Sawada Masahiko Wrote
1. Patch applies cleanly to master HEAD.
2. No compilation warnings.
3. It works as per the patch's expectations.
Some suggestions:
1. Add the new WAL level (all) to the comment in postgresql.conf
wal_level = hot_standby
On 13 November 2013 03:17 David Johnston wrote,
Having had this same thought WRT the FOR UPDATE in LOOP bug posting
the lack of a listing of outstanding bugs does leave some gaps. I
would imagine people would appreciate something like:
Frequency: Rare
Severity: Low
Fix Complexity:
On 08 November 2013 03:22, Euler Taveira Wrote
On 07-11-2013 09:42, Dilip kumar wrote:
Dilip, this is on my TODO for 9.4. I've already had a half-baked patch
for it. Let's see what I can come up with.
Ok, let me know if I can contribute to this.
Is it required to move the common code
On 08 November 2013 13:38, Jan Lentfer
For this use case, would it make sense to queue work (tables) in order of
their size, starting on the largest one?
For the case where you have tables of varying size this would lead to a
reduced overall processing time as it prevents large (read: