On Mon, Nov 28, 2016 at 3:00 PM, Dilip Kumar wrote:
> As promised, I have measured performance with the TPCH benchmark, and
> the results are still quite good. However, they are lower than with the older
> version (which exposed the expr ctx and slot to the heap).
>
> Query Head
On Sat, Nov 19, 2016 at 6:48 PM, Dilip Kumar wrote:
> patch1: Original patch (heap_scankey_pushdown_v1.patch), only
> supported for fixed length datatype and use heap_getattr.
>
> patch2: Switches memory context in HeapKeyTest + Store tuple in slot
> and use slot_getattr instead
.09) AND (l_quantity < '25'::numeric))
Planning time: 0.292 ms
Execution time: 14041.968 ms
(10 rows)
revenue
-
1784930119.2454
(1 row)
--
Regards,
Dilip Kumar
EnterpriseDB: http://www.enterprisedb.com
--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers
that they've all finished doing that and are ready to scan.
> When they're all just waiting for one guy to flip a single bit, then
> it's debatable whether a barrier is any simpler than a condition
> variable + a spinlock + a bit!
+1
is called only after the bitmap is ready, then
what about the other processes; how are they supposed to wait until the bitmap
is ready? If they wait using BarrierWait, it again makes the count
1 and everyone is allowed to proceed, which doesn't seem correct.
Correct me if I am missing something.
On Tue, Nov 22, 2016 at 9:05 AM, Dilip Kumar wrote:
> On Fri, Nov 18, 2016 at 9:59 AM, Amit Khandekar
> wrote:
>
> Thanks for the review..
I have worked on these comments..
>
>> In pbms_is_leader() , I didn't clearly understand the significance of
>> the for
On Wed, Nov 23, 2016 at 12:31 PM, Dilip Kumar wrote:
I tried to address these comments in my new version. All comments are
fixed except those below.
>> + *
>> + *#2. Bitmap processing (Iterate and process the pages).
>> + *. In this phase each worker will iterate over
tach(pei->area);
+ pei->area = NULL;
+ }
After these changes, I am getting a DSM segment leak warning.
I am calling dsa_allocate and dsa_free.
On Thu, Nov 24, 2016 at 8:09 PM, Dilip Kumar wrote:
> On Wed, Nov 23, 2016 at 5:42 PM, Thomas Munro
> wrote:
>> ... or we could allow DSA area
ALLEL_AREA_SIZE,
+ LWTRANCHE_PARALLEL_EXEC_AREA,
+ "parallel query memory area");
sert */
> #define HEAP_INSERT_SKIP_WAL 0x0001
> #define HEAP_INSERT_SKIP_FSM 0x0002
>
> Useless whitespace change.
>
> WAIT_EVENT_MQ_RECEIVE,
> WAIT_EVENT_MQ_SEND,
> WAIT_EVENT_PARALLEL_FINISH,
> +WAIT_EVENT_PARALLEL_BITMAP_SCAN,
> WAIT_EVENT_SAFE_SNAPSHOT,
> WAIT_EVENT_SYNC_REP
>
> Missing a documentation update.
I will fix these in the next version.
>
> In general, the amount of change in nodeBitmapHeapScan.c seems larger
> than I would have expected. My copy of that file has 655 lines; this
> patch adds 544 additional lines. I think/hope that some of that can
> be simplified away.
I will work on this.
_mutex Spinlock. pbms_parallel_iterate() already has
> its own iterator spinlock. Only thing is, workers may not do the
> actual PrefetchBuffer() sequentially. One of them might shoot ahead
> and prefetch 3-4 pages while the other is lagging with the
> sequentially lesser page number; but
On Mon, Nov 14, 2016 at 9:44 PM, Dilip Kumar wrote:
>> Also, what if we abandoned the idea of pushing qual evaluation all the
>> way down into the heap and just tried to do HeapKeyTest in SeqNext
>> itself? Would that be almost as fast, or would it give up most of the
>>
y in
good shape, so I think other reviewers can take a decision.
I'm
> still playing around with it, but basically the fix is to make the
> growth policy a bit more adaptive.
Okay, thanks.
ll the
> way down into the heap and just tried to do HeapKeyTest in SeqNext
> itself? Would that be almost as fast, or would it give up most of the
> benefits?
This we can definitely test. I will test and post the data.
Thanks for the suggestion.
e SeqScan node.
(i.e., in the above example, on head it was 3216.157 whereas with the
patch it was 1884.993).
On Thu, Nov 3, 2016 at 7:29 PM, Robert Haas wrote:
> On Tue, Nov 1, 2016 at 8:31 PM, Kouhei Kaigai wrote:
>> By the w
ync do not wait for changes to
be written safely to disk\n"));
3.
- while ((c = getopt_long(argc, argv, "acd:f:gh:l:oOp:rsS:tU:vwWx",
long_options, &optindex)) != -1)
+ while ((c = getopt_long(argc, argv, "acd:f:gh:l:NoOp:rsS:tU:vwWx",
long_options, &o
gres  postgres             [.] tbm_lossify
+0.62%  0.62%  postgres  postgres             [.] AllocSetReset
+0.60%  0.11%  postgres  [kernel.kallsyms]    [k] sys_read
+0.59%  0.10%  postgres  postgres             [.] advance_transition_function
I think in new hash implementation, de
>
>> Meanwhile I will test it and give the feedback.
>
>
> Thanks.
>
> Updated patch is attached with added regression tests.
I am done with the review and testing; the patch looks fine to me.
Moved to "Ready for Committer".
'&&' to the end of
the previous line.
+ if (cstate->rel->rd_rel->relkind != RELKIND_RELATION
+ && (!cstate->rel->trigdesc ||
+ !cstate->rel->trigdesc->trig_insert_instead_row))
Meanwhile I will test it and give the feedback.
On Sat, Oct 29, 2016 at 12:17 PM, Dilip Kumar wrote:
> What about putting a slot reference inside HeapScanDesc? I know it
> will make the heap layer use an executor structure, but just a thought.
>
> I have quickly hacked this way where we use slot reference in
> HeapScanDesc a
2016-11-01 14:31:52.235 IST [72343] ERROR: cannot copy to view "ttt_v"
2016-11-01 14:31:52.235 IST [72343] STATEMENT: COPY ttt_v FROM stdin;
ut with that I did not see any
regression in v1).
2. (v1 + use_slot_in_HeapKeyTest) is always the winner, even at very high selectivity.
That would be an
>> interesting thing to mention in the summary, I think.
>>
>
> One thing is clear that all results are on either
> synchronous_commit=off or on unlogged tables. I think Dilip can
> answer better which of those are on unlogged and which on
> synchr
re very low and fixed at "1".
Do we really need to take care of a user-defined function which is
declared with a very low cost?
Because while building index conditions we also don't take care of
such things. Index conditions will always be evaluated first; only then
will the filter be ap
til we find the first non-pushable qual (this way
we can maintain the same qual execution order as in the
existing system).
On Fri, Oct 21, 2016 at 7:57 AM, Dilip Kumar wrote:
> On Thu, Oct 20, 2016 at 9:03 PM, Tomas Vondra
> wrote:
>
>> In the results you've posted on 10/12, you've mentioned a regression with 32
>> clients, where you got 52k tps on master but only 48k tps with the p
ontention on ClogControlLock become much worse?
I will run this test and post the results.
of individual runs etc.
I saw your report; I think presenting it this way can give a very clear idea.
>
> If you want to cooperate on this, I'm available - i.e. I can help you get
> the tooling running, customize it etc.
That will be really helpful, then next time I can also present m
| wal_insert
30 BufferPin | BufferPin
10 LWLockTranche | proc
6 LWLockTranche | buffer_mapping
Execution time: 6669.195 ms
(13 rows)
Summary:
-> With the patch, overall execution is 2x faster compared to head.
-> Bitmap creation with the patch is a bit slower compared to head, and
that's because of DHT vs. the efficient hash table.
I found one defect in the v2 patch that I induced during the last reb
in shared memory (my current approach),
or we need to copy each hash element to a shared location (I think this
is going to be expensive).
Let me know if I am missing something.
n so no impact.
For Q14 and Q15, the time spent in the BitmapIndex node is < 5% of the time
spent in the BitmapHeap node. For Q6 it's 20%, but I did not see much
impact on this on my local machine. However, I will take a complete
performance reading and post the data from my actual performance machine.
://git.postgresql.org/pg/commitdiff/75ae538bc3168bf44475240d4e0487ee2f3bb376
On Fri, Oct 7, 2016 at 11:46 AM, Dilip Kumar wrote:
> Hi Hackers,
>
> I would like to propose parallel bitmap heap scan feature. After
> running TPCH benchmark, It was observed that many of TPCH queries are
> us
cide whether the operators are safe or not based on their
datatype?
What I mean to say is, instead of checking the safety of each operator
like texteq(), text_le(), ...
we can directly discard any operator involving such data types.
iance (because I never saw this regression earlier; I can
confirm again with multiple runs).
[@power2 ~]$ uname -mrs
Linux 3.10.0-229.14.1.ael7b.ppc64le ppc64le
[@power2 ~]$ lscpu
Architecture: ppc64le
Byte Order:
same can be
that HeapKeyTest is much simpler compared to ExecQual. It's
possible that in the future, when we try to support a wider variety of keys,
the gain at high selectivity may come down.
WIP patch attached.
Thoughts?
heap_scan
ket_head < iterator->last_item_pointer) &&
_SIZE
$9 = 1048576
In dsa-v1 the problem did not exist because DSA_MAX_SEGMENTS was 1024,
but in dsa-v2 I think it is calculated wrongly.
(gdb) p DSA_MAX_SEGMENTS
$10 = 16777216
On Thu, Sep 29, 2016 at 8:05 PM, Robert Haas wrote:
> OK, another theory: Dilip is, I believe, reinitializing for each run,
> and you are not.
Yes, I am reinitializing for each run.
what CPU model is Dilip using - I know it's x86, but not which
> generation it is. I'm using E5-4620 v1 Xeon, perhaps Dilip is using a newer
> model and it makes a difference (although that seems unlikely).
I am using "Intel(R) Xeon(R) CPU E7- 8830 @ 2.13
On Wed, Sep 21, 2016 at 8:47 AM, Dilip Kumar wrote:
> Summary:
> --
> At 32 clients no gain, I think at this workload Clog Lock is not a problem.
> At 64 Clients we can see ~10% gain with simple update and ~5% with TPCB.
> At 128 Clients we can see > 50% gain.
&g
On Tue, Sep 20, 2016 at 9:15 AM, Dilip Kumar wrote:
> +1
>
> My test are under run, I will post it soon..
I have some more results now:
8-socket machine
10 min runs (median of 3 runs)
synchronous_commit = off
scale factor = 300
shared_buffers = 8GB
test1: Simple update(pgbench)
Clients
mage to the SLRU abstraction layer.
>
> I agree with you unless it shows benefit on somewhat more usual
> scenario's, we should not accept it. So shouldn't we wait for results
> of other workloads like simple-update or tpc-b on bigger machines
> before reaching to co
dSetOldestMember
+ 0.66% LockRefindAndRelease
Next I will test "update with 2 savepoints" and "select for update with
no savepoints".
I will also test the granular lock and atomic lock patches in the next run.
> SAVEPOINT s1;
> SELECT tbalance FROM pgbench_tellers WHERE tid = :tid for UPDATE;
> SAVEPOINT s2;
> SELECT abalance FROM pgbench_accounts WHERE aid = :aid for UPDATE;
> END;
> ---
t happens there. Those cases are a lot more likely than
> these stratospheric client counts.
I tested with 64 clients as well:
1. On head we gain ~15% with both patches.
2. But group lock vs. granular lock is almost the same.
On Wed, Sep 14, 2016 at 10:25 AM, Dilip Kumar wrote:
> I have tested performance with approach 1 and approach 2.
>
> 1. Transaction (script.sql): I have used below transaction to run my
> bench mark, We can argue that this may not be an ideal workload, but I
> tested this to p
will test with the 3rd approach also, whenever I get time.
Summary:
1. I can see that on head we gain almost ~30% performance at higher
client counts (128 and beyond).
2. Group lock is ~5% better compared to granular lock.
On Fri, Sep 9, 2016 at 6:51 PM, Tom Lane wrote:
> Pushed with cosmetic adjustments --- mostly, I thought we needed some
> comments about the topic.
Okay, Thanks.
search.
> You could imagine buying back those cycles by teaching the typcache
> to be able to cache the result of getBaseTypeAndTypmod, but I'm doubtful
> that we really care. This whole setup sequence only happens once per
> query anyway.
Agreed.
alidatorAccess: This is being called from all
>> language validator functions.
>
> This part seems reasonable, since the validator functions are documented
> as something users might call, and CheckFunctionValidatorAccess seems
> like an apropos place to handle it.
On Wed, Sep 7, 2016 at 8:52 AM, Haribabu Kommi wrote:
> I reviewed and tested the patch. The changes are fine.
> This patch provides better error message compared to earlier.
>
> Marked the patch as "Ready for committer" in commit-fest.
Thanks for the review!
laces which are not exposed functions,
but I don't think this will have any impact; will it?
2. lookup_type_cache: This is being called from record_in
(record_in->lookup_rowtype_tupdesc->
lookup_rowtype_tupdesc_internal->lookup_type_cache).
3. CheckFunctionValidatorAccess: This is bein
On Wed, Aug 10, 2016 at 10:04 AM, Dilip Kumar wrote:
> This seems better, after checking at other places I found that for
> invalid type we are using ERRCODE_UNDEFINED_OBJECT and for invalid
> functions we are using ERRCODE_UNDEFINED_FUNCTION. So I have done the
> same way.
>
ons we are using ERRCODE_UNDEFINED_FUNCTION. So I have done the
same way.
Updated patch attached.
cache_lookup_failure_v2.patch
Description: Binary data
ge to "type id %u does not exist", if this
seems better?
.
But I think ERRCODE_WRONG_OBJECT_TYPE is a better option.
Patch attached for the same.
cache_lookup_failure_v1.patch
Description: Binary data
On Tue, Aug 2, 2016 at 3:33 PM, Dilip Kumar wrote:
> There are many more such exposed functions, which can throw cache lookup
> failure error if we pass wrong value.
>
> i.e.
> record_in
> domain_in
> fmgr_c_validator
> edb_get_objecttypeheaddef
> plpgsql_validato
out(
postgres(# cast(98 as integer)) as cstring),
postgres(# cast(1 as oid),
postgres(# cast(35 as integer));
ERROR: cache lookup failed for type 1
be reset.
After this, querystr is used in this function.
}
There are discussions going on about how to fix this issue.
You can refer to this thread:
https://www.postgresql.org/message-id/17863.1469142152%40sss.pgh.pa.us
AND (t1.c1 = t2.c2).
So I think these together will make sure that we don't get a duplicate
tuple for one outer record.
his is happening in an infinite loop, so we are seeing a memory leak.
exec_stmt_dynexecute
{
/* copy it out of the temporary context before we clean up */
querystr = pstrdup(querystr);
}
ink that you need
> as well to update the regression test output.
>
I observed one regression test failure caused by this fix, so I fixed it
in the new patch.
pg_indexes_fix_v1.patch
Description: Binary data
correct fix is to change the view definition, as I proposed in the above patch.
Any other opinions on this?
On Thu, Jul 14, 2016 at 12:38 PM, Dilip Kumar wrote:
> I am not sure what should be the correct fix for this problem.
>
> I think even if we try to call this function on the index oid
> pg_get_indexdef(x.indexrelid) AS indexdef, the problem will not be
> solved, because both wi
x oid pg_get_indexdef(x.indexrelid) AS indexdef, the problem will not be
solved, because both will fall in the same equivalence class, hence the
clause can be distributed to pg_class also.
Is this a bug?
If yes, what should be the right fix?
other server process
DETAIL: The postmaster has commanded this server process to roll back the
current transaction and exit, because another server process exited
abnormally and possibly corrupted shared memory.
HINT: In a moment you should be able to reconnect to the database and
repeat your command.
places will also fix the other issue (ERROR: requested shared
memory size overflows size_t)
described in the mail thread below:
http://www.postgresql.org/message-id/570bacfc.6020...@enterprisedb.com
max_parallel_degree_bug.patch
Descripti
On Fri, Apr 8, 2016 at 11:38 AM, Robert Haas wrote:
> Yeah. I've committed the patch now, with some cosmetic cleanup.
>
Thanks Robert !!!
in
performance.
perf_pgbench_ro.sh
Description: Bourne shell script
buffer_content_lock_ptr_rebased_head_temp.patch
Description: Binary data
9421
run5 630772
run6 284363
Size:
Head: 80 bytes
Head + 0001-WIP-Avoid-the-use-of-a-separate-spinlock-to-protect: 72 bytes
Head + 0001-WIP-Avoid-the-use-of-a-separate-spinlock-to-protect +
Pinunpin-cas-8: 64 bytes
ead_patch_run2.txt --> 20 min run on head + patch, reading 2
5. head_pinunpin.txt --> 20 min run on head + pinunpin-cas-8.patch
6. head_pinunpin_patch.txt --> 20 min run on head + pinunpin-cas-8.patch +
patch, reading
--
Run1 429520 372958
Run2 446249 167189
Run3 431066 381592
Patch+Pinunpin 64 Client 128 Client
--
Run1 338298 642535
Run2 406240 644187
Run3 595439 285420
l(R) Xeon(R) CPU E7- 8830 @ 2.13GHz
Stepping: 2
CPU MHz: 1064.000
BogoMIPS: 4266.62
If you need more information, please let me know.
ging multiplier and max limit.
But I think we are OK with the max size as 4MB (512 blocks), right?
Does this test make sense?
On Wed, Mar 30, 2016 at 7:51 AM, Dilip Kumar wrote:
> + if (lockWaiters)
> + /*
> + * Here we are using the same freespace for all the blocks, but that
> + * is OK, because all are newly added blocks and have the same freespace.
> + * And even if some block which we just added to the Freespa
backend and now the freespace is not the same, it will not harm
+ * anything, because the actual freespace will be calculated by the user
+ * after getting the page.
+ */
+ UpdateFreeSpaceMap(relation, firstBlock, blockNum, freespace);
Does this look good?
ed(__ppc64__) ||
defined(__powerpc64__)
#define HAS_TEST_AND_SET
typedef unsigned int slock_t; --> changed like this
#define TAS(lock) tas(lock)
On Tue, Mar 29, 2016 at 2:09 PM, Dilip Kumar wrote:
>
> Attaching new version v18
- Some cleanup work on v17.
- Improved the UpdateFreeSpaceMap function.
- Performance and space utilization are the same as v17.
multi_exte
't get, then search from the top.
multi_extend_v17.patch
Description: Binary data
On Mon, Mar 28, 2016 at 3:02 PM, Dilip Kumar wrote:
> 1. Relation Size : No change in size, its same as base and v13
>
> 2. INSERT 1028 Byte 1000 tuple performance
> ---
> Client base v13 v15
> 1 117 124 122
> 2 111 1
On Mon, Mar 28, 2016 at 7:21 AM, Dilip Kumar wrote:
> I agree with that conclusion. I'm not quite sure where that leaves
>> us, though. We can go back to v13, but why isn't that producing extra
>> pages? It seems like it should: whenever a bulk extend rolls
On Sun, Mar 27, 2016 at 5:48 PM, Andres Freund wrote:
>
> What's sizeof(BufferDesc) after applying these patches? It should better
> be <= 64...
>
It is 72.
age where the bulk extend
> rolls over?
>
This is actually a multi-level tree, so each FSM page contains one slot tree.
So fsm_search_avail() searches only the slot tree inside one FSM page,
but we want to go to the next FSM page.
anted to explain the same above.
> Another idea is:
>
> If ConditionalLockRelationForExtension fails to get the lock
> immediately, search the last *two* pages of the FSM for a free page.
>
> Just brainstorming here.
I think this is the better option, since we will search the last two
t(s):24
NUMA node(s): 4
On Sat, Mar 26, 2016 at 3:18 PM, Dilip Kumar wrote:
> We could go further still and have GetPageWithFreeSpace() always
>> search the last, say, two pages of the FSM in all cases. But that
>> might be expensive. The extra call to RelationGetNumberOfBlocks seems
>> cheap en
On Sat, Mar 26, 2016 at 3:18 PM, Dilip Kumar wrote:
> search the last, say, two pages of the FSM in all cases. But that
>> might be expensive. The extra call to RelationGetNumberOfBlocks seems
>> cheap enough here because the alternative is to wait for a contended
>> heavy
The extra call to RelationGetNumberOfBlocks seems
> cheap enough here because the alternative is to wait for a contended
> heavyweight lock.
>
I will try the test with this also and post the results.
copy_script
De
k3ktcmhi...@alap3.anarazel.de
263115
64 248109
Note: I think the one odd number can be just run-to-run variance.
Does anyone see a problem in updating the FSM tree? I have debugged and saw
that we are able to get the pages properly from the tree, and the same is
visible in the performance numbe
> > GetNearestPageWithFreeSpace? (although not sure that's an accurate
> > description, maybe Nearby would be better)
> >
> Better than what is used in the patch.
>
> Yet another possibility could be to call it
> GetPageWithFreeSpaceExtende
slot.
I have done a performance test just to ensure the result, and performance is
the same as before, with both COPY and INSERT.
3. I have also run pgbench read-write as Amit suggested upthread. No
regression or improvement with the pgbench workload.
Client basePatch
1 899 914
8 5397 5413
32 18170
ho wants
to add one block, or who has had one block added along with the extra blocks.
I think this way the code is simple: everybody who comes down will add one
block for their own use, and all other functionality and logic is above, i.e.
whether to take the lock or not, whether to add extra blocks or not.
We don't need an atomic
operation here; we are not yet added to the list.
+ if (nextidx != INVALID_PGPROCNO &&
+ ProcGlobal->allProcs[nextidx].clogGroupMemberPage !=
proc->clogGroupMemberPage)
+ return false;
+
+ pg_atomic_write_u32(&proc->clogGroupNext, nextidx);
On Tue, Mar 22, 2016 at 12:31 PM, Dilip Kumar wrote:
> ! pg_atomic_write_u32(&bufHdr->state, state);
> } while (!StartBufferIO(bufHdr, true));
>
> Better to write some comment about clearing BM_LOCKED from the state
> directly, so we need not call UnlockBufHdr expli
dr, true));
}
}
--- 826,834
*/
do
{
! uint32 state = LockBufHdr(bufHdr);
! state &= ~(BM_VALID | BM_LOCKED);
! pg_atomic_write_u32(&bufHdr->state, state);
} while (!StartBufferIO(bufHdr, true));
Better to write some comment about clearing BM_LOCKED from the state
di