[ https://issues.apache.org/jira/browse/HBASE-20188?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16426488#comment-16426488 ]

ramkrishna.s.vasudevan commented on HBASE-20188:
------------------------------------------------

Are the latest results with the 8G cache also with short-circuit reads ON? Is 
there any variation in the stack trace? 

Scans in 2.0 are slower because scans now also behave like preads by default, 
rather than streaming reads. 
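If the pread default is the suspect, the switchover point between pread and streaming scans is tunable. A hedged sketch of the relevant hbase-site.xml entry; the property name comes from the 2.0 pread/stream scan work, and the value below is purely illustrative, not what was used in these tests:

{code:xml}
<!-- Illustrative only: bytes a scan may read via pread before HBase
     switches it to a streaming read. Value here is an example. -->
<property>
  <name>hbase.storescanner.pread.max.bytes</name>
  <value>4194304</value>
</property>
{code}

Per scan, the client API can also request streaming explicitly via Scan#setReadType(Scan.ReadType.STREAM).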

bq. 'dfs.client.read.shortcircuit.streams.cache.size' and 'dfs.client.socketcache.capacity' values?

These values were increased because the default sizes were causing issues with 
the ShortCircuitCache: 
{code:java}
2017-07-18 22:52:28,969 ERROR [ShortCircuitCache_SlotReleaser] shortcircuit.ShortCircuitCache: ShortCircuitCache(0x122da202): failed to release short-circuit shared memory slot Slot(slotIdx=26, shm=DfsClientShm(f0cce51b1df7a0c887c2b708b1bf702d)) by sending ReleaseShortCircuitAccessRequestProto to /var/lib/hadoop-hdfs/dn_socket.  Closing shared memory segment.
java.net.SocketException: read(2) error: Connection reset by peer
{code}
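The comment does not state the exact values used, so the numbers below are placeholders; the property names are the standard HDFS client settings (defaults are 256 cached streams and 16 cached sockets), shown here only as a sketch of what was raised:

{code:xml}
<!-- hdfs-site.xml on the DFS client side. Values are illustrative
     placeholders, larger than the defaults of 256 and 16. -->
<property>
  <name>dfs.client.read.shortcircuit.streams.cache.size</name>
  <value>4096</value>
</property>
<property>
  <name>dfs.client.socketcache.capacity</name>
  <value>256</value>
</property>
{code}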
We have not written up a detailed doc; we only collected the observations we 
made. As I said, when you have enough RAM that all data fits in the page cache 
and you have a lot of threads reading from HDFS, short-circuit reads were 
really needed, because the TCP connections were the bottleneck. 
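For reference, short-circuit reads are enabled on the client side roughly as below; the domain socket path matches the one in the log above, but treat the snippet as a generic example rather than the exact test configuration:

{code:xml}
<!-- hdfs-site.xml: enable short-circuit (local) reads. The socket path
     must match the DataNode's dfs.domain.socket.path. -->
<property>
  <name>dfs.client.read.shortcircuit</name>
  <value>true</value>
</property>
<property>
  <name>dfs.domain.socket.path</name>
  <value>/var/lib/hadoop-hdfs/dn_socket</value>
</property>
{code}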

> [TESTING] Performance
> ---------------------
>
>                 Key: HBASE-20188
>                 URL: https://issues.apache.org/jira/browse/HBASE-20188
>             Project: HBase
>          Issue Type: Umbrella
>          Components: Performance
>            Reporter: stack
>            Assignee: stack
>            Priority: Blocker
>             Fix For: 2.0.0
>
>         Attachments: CAM-CONFIG-V01.patch, HBASE-20188.sh, HBase 2.0 
> performance evaluation - Basic vs None_ system settings.pdf, 
> ITBLL2.5B_1.2.7vs2.0.0_cpu.png, ITBLL2.5B_1.2.7vs2.0.0_gctime.png, 
> ITBLL2.5B_1.2.7vs2.0.0_iops.png, ITBLL2.5B_1.2.7vs2.0.0_load.png, 
> ITBLL2.5B_1.2.7vs2.0.0_memheap.png, ITBLL2.5B_1.2.7vs2.0.0_memstore.png, 
> ITBLL2.5B_1.2.7vs2.0.0_ops.png, 
> ITBLL2.5B_1.2.7vs2.0.0_ops_NOT_summing_regions.png, YCSB_CPU.png, 
> YCSB_GC_TIME.png, YCSB_IN_MEMORY_COMPACTION=NONE.ops.png, YCSB_MEMSTORE.png, 
> YCSB_OPs.png, YCSB_in-memory-compaction=NONE.ops.png, YCSB_load.png, 
> flamegraph-1072.1.svg, flamegraph-1072.2.svg, hbase-env.sh, hbase-site.xml, 
> lock.127.workloadc.20180402T200918Z.svg, 
> lock.2.memsize2.c.20180403T160257Z.svg, run_ycsb.sh, tree.txt
>
>
> How does 2.0.0 compare to old versions? Is it faster, slower? There is rumor 
> that it is much slower, that the problem is the asyncwal writing. Does 
> in-memory compaction slow us down or speed us up? What happens when you 
> enable offheaping?
> Keep notes here in this umbrella issue. Need to be able to say something 
> about perf when 2.0.0 ships.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)
