[jira] [Updated] (HBASE-15109) Region server failed to start when "fs.hdfs.impl.disable.cache" is set to true

2016-03-03 Thread Pankaj Kumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-15109?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pankaj Kumar updated HBASE-15109:
-
Attachment: HBASE-15109.patch

Reattaching the same file for a QA run.

> Region server failed to start when "fs.hdfs.impl.disable.cache" is set to true
> --
>
> Key: HBASE-15109
> URL: https://issues.apache.org/jira/browse/HBASE-15109
> Project: HBase
>  Issue Type: Bug
>  Components: regionserver
>Affects Versions: 1.0.0
>Reporter: Pankaj Kumar
>Assignee: Pankaj Kumar
>Priority: Critical
> Attachments: HBASE-15109-branch-1.patch, HBASE-15109.patch, 
> HBASE-15109.patch
>
>
> Region server failed to start during installing ShutdownHook when 
> "fs.hdfs.impl.disable.cache" is set to true in core-site.xml at HBase side.
> {code}
> 2016-01-14 15:30:56,358 | FATAL | regionserver/ds2/192.168.152.230:21302 | 
> ABORTING region server ds2,21302,1452756654352: Unhandled: Failed suppression 
> of fs shutdown hook: 
> org.apache.hadoop.fs.FileSystem$Cache$ClientFinalizer@571527d6 | 
> org.apache.hadoop.hbase.regionserver.HRegionServer.abort(HRegionServer.java:2054)
> java.lang.RuntimeException: Failed suppression of fs shutdown hook: 
> org.apache.hadoop.fs.FileSystem$Cache$ClientFinalizer@571527d6
> at 
> org.apache.hadoop.hbase.regionserver.ShutdownHook.suppressHdfsShutdownHook(ShutdownHook.java:204)
> at 
> org.apache.hadoop.hbase.regionserver.ShutdownHook.install(ShutdownHook.java:84)
> at 
> org.apache.hadoop.hbase.regionserver.HRegionServer.run(HRegionServer.java:893)
> at java.lang.Thread.run(Thread.java:745)
> {code}
> During installation, the hook first tries to suppress the HDFS shutdown hook 
> by removing hdfsClientFinalizer from HDFS. Since fs.hdfs.impl.disable.cache 
> is enabled, the FileSystem is not cached, so the removal from HDFS fails and 
> a RuntimeException is thrown with the "Failed suppression of fs shutdown 
> hook" message.
> In ShutdownHook,
> {code}
> if (!fsShutdownHooks.containsKey(hdfsClientFinalizer) &&
> !ShutdownHookManager.deleteShutdownHook(hdfsClientFinalizer)) {
> throw new RuntimeException("Failed suppression of fs shutdown hook: " +
> hdfsClientFinalizer);
> }
> {code}
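
For reference, a minimal sketch of one direction a fix could take (this is not 
the attached patch; the conf lookup and the extra boolean test are 
assumptions): tolerate a failed removal when the FileSystem cache is disabled.
{code}
// Sketch only (not the attached patch): when fs.hdfs.impl.disable.cache is
// true the FileSystem is never cached, so its client finalizer hook may not
// be registered; treat a failed removal as benign in that case.
boolean fsCacheDisabled =
    conf.getBoolean("fs.hdfs.impl.disable.cache", false); // 'conf' assumed in scope
if (!fsShutdownHooks.containsKey(hdfsClientFinalizer)
    && !ShutdownHookManager.deleteShutdownHook(hdfsClientFinalizer)
    && !fsCacheDisabled) {
  throw new RuntimeException("Failed suppression of fs shutdown hook: "
      + hdfsClientFinalizer);
}
{code}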



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-15393) Enable table replication command will fail when parent znode is not /hbase(default) in peer cluster

2016-03-03 Thread Ashish Singhi (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15393?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15179503#comment-15179503
 ] 

Ashish Singhi commented on HBASE-15393:
---

The ZK client we use to check the existence of the peer cluster znode is built 
from the source cluster configuration, and the peer znode does not exist in 
the active cluster, so it will not be added to the validPeers list. Since the 
empty list means no peer was found, the command will fail.
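
A minimal sketch of the idea (not the committed fix; the helper name is made 
up, but the client APIs are standard HBase ones): build the ZK watcher from 
the peer's cluster key so its zookeeper.znode.parent is honored.
{code}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.zookeeper.ZKUtil;
import org.apache.hadoop.hbase.zookeeper.ZooKeeperWatcher;

// Hypothetical helper: check the peer's base znode using the *peer* cluster's
// configuration (cluster key "zk1,zk2:2181:/peer-parent"), not the source's.
public static boolean peerBaseZnodeExists(Configuration sourceConf,
    String peerClusterKey) throws Exception {
  Configuration peerConf =
      HBaseConfiguration.createClusterConf(sourceConf, peerClusterKey);
  ZooKeeperWatcher peerZkw =
      new ZooKeeperWatcher(peerConf, "check-peer-znode", null);
  try {
    // checkExists returns -1 when the znode is absent
    return ZKUtil.checkExists(peerZkw, peerZkw.baseZNode) != -1;
  } finally {
    peerZkw.close();
  }
}
{code}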

> Enable table replication command will fail when parent znode is not 
> /hbase(default) in peer cluster
> ---
>
> Key: HBASE-15393
> URL: https://issues.apache.org/jira/browse/HBASE-15393
> Project: HBase
>  Issue Type: Bug
>  Components: Replication
>Affects Versions: 1.0.3, 0.98.17
>Reporter: Ashish Singhi
>Assignee: Ashish Singhi
>
> Enable table replication command will fail when parent znode is not 
> /hbase(default) in peer cluster and there is only one peer cluster added in 
> the source cluster.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HBASE-15393) Enable table replication command will fail when parent znode is not /hbase(default) in peer cluster

2016-03-03 Thread Ashish Singhi (JIRA)
Ashish Singhi created HBASE-15393:
-

 Summary: Enable table replication command will fail when parent 
znode is not /hbase(default) in peer cluster
 Key: HBASE-15393
 URL: https://issues.apache.org/jira/browse/HBASE-15393
 Project: HBase
  Issue Type: Bug
  Components: Replication
Affects Versions: 0.98.17, 1.0.3
Reporter: Ashish Singhi
Assignee: Ashish Singhi


Enable table replication command will fail when parent znode is not 
/hbase(default) in peer cluster and there is only one peer cluster added in the 
source cluster.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-14498) Master stuck in infinite loop when all Zookeeper servers are unreachable

2016-03-03 Thread Pankaj Kumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-14498?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pankaj Kumar updated HBASE-14498:
-
Attachment: HBASE-14498-V6.patch

Removed whitespace and fixed the checkstyle warnings.

> Master stuck in infinite loop when all Zookeeper servers are unreachable
> 
>
> Key: HBASE-14498
> URL: https://issues.apache.org/jira/browse/HBASE-14498
> Project: HBase
>  Issue Type: Bug
>  Components: master
>Reporter: Y. SREENIVASULU REDDY
>Assignee: Pankaj Kumar
>Priority: Critical
> Fix For: 2.0.0
>
> Attachments: HBASE-14498-V2.patch, HBASE-14498-V3.patch, 
> HBASE-14498-V4.patch, HBASE-14498-V5.patch, HBASE-14498-V6.patch, 
> HBASE-14498.patch
>
>
> We met a weird scenario in our production environment.
> In an HA cluster,
> > The active master (HM1) is not able to connect to any ZooKeeper server 
> > (due to a network breakdown between the master machine and the ZooKeeper 
> > servers).
> {code}
> 2015-09-26 15:24:47,508 INFO 
> [HM1-Host:16000.activeMasterManager-SendThread(ZK-Host:2181)] 
> zookeeper.ClientCnxn: Client session timed out, have not heard from server in 
> 33463ms for sessionid 0x104576b8dda0002, closing socket connection and 
> attempting reconnect
> 2015-09-26 15:24:47,877 INFO 
> [HM1-Host:16000.activeMasterManager-SendThread(ZK-Host1:2181)] 
> client.FourLetterWordMain: connecting to ZK-Host1 2181
> 2015-09-26 15:24:48,236 INFO [main-SendThread(ZK-Host1:2181)] 
> client.FourLetterWordMain: connecting to ZK-Host1 2181
> 2015-09-26 15:24:49,879 WARN 
> [HM1-Host:16000.activeMasterManager-SendThread(ZK-Host1:2181)] 
> zookeeper.ClientCnxn: Can not get the principle name from server ZK-Host1
> 2015-09-26 15:24:49,879 INFO 
> [HM1-Host:16000.activeMasterManager-SendThread(ZK-Host1:2181)] 
> zookeeper.ClientCnxn: Opening socket connection to server 
> ZK-Host1/ZK-IP1:2181. Will not attempt to authenticate using SASL (unknown 
> error)
> 2015-09-26 15:24:50,238 WARN [main-SendThread(ZK-Host1:2181)] 
> zookeeper.ClientCnxn: Can not get the principle name from server ZK-Host1
> 2015-09-26 15:24:50,238 INFO [main-SendThread(ZK-Host1:2181)] 
> zookeeper.ClientCnxn: Opening socket connection to server 
> ZK-Host1/ZK-Host1:2181. Will not attempt to authenticate using SASL (unknown 
> error)
> 2015-09-26 15:25:17,470 INFO [main-SendThread(ZK-Host1:2181)] 
> zookeeper.ClientCnxn: Client session timed out, have not heard from server in 
> 30023ms for sessionid 0x2045762cc710006, closing socket connection and 
> attempting reconnect
> 2015-09-26 15:25:17,571 WARN [master/HM1-Host/HM1-IP:16000] 
> zookeeper.RecoverableZooKeeper: Possibly transient ZooKeeper, 
> quorum=ZK-Host:2181,ZK-Host1:2181,ZK-Host2:2181, 
> exception=org.apache.zookeeper.KeeperException$ConnectionLossException: 
> KeeperErrorCode = ConnectionLoss for /hbase/master
> 2015-09-26 15:25:17,872 INFO [main-SendThread(ZK-Host:2181)] 
> client.FourLetterWordMain: connecting to ZK-Host 2181
> 2015-09-26 15:25:19,874 WARN [main-SendThread(ZK-Host:2181)] 
> zookeeper.ClientCnxn: Can not get the principle name from server ZK-Host
> 2015-09-26 15:25:19,874 INFO [main-SendThread(ZK-Host:2181)] 
> zookeeper.ClientCnxn: Opening socket connection to server ZK-Host/ZK-IP:2181. 
> Will not attempt to authenticate using SASL (unknown error)
> {code}
> > Since HM1 was not able to connect to any ZK, the session timeout didn't 
> > happen at the ZooKeeper server side and HM1 didn't abort.
> > On ZooKeeper session timeout, the standby master (HM2) registered itself 
> > as the active master. 
> > HM2 keeps waiting for region servers to report to it as part of active 
> > master initialization.
> {noformat} 
> 2015-09-26 15:24:44,928 | INFO | HM2-Host:21300.activeMasterManager | Waiting 
> for region servers count to settle; currently checked in 0, slept for 0 ms, 
> expecting minimum of 1, maximum of 2147483647, timeout of 4500 ms, interval 
> of 1500 ms. | 
> org.apache.hadoop.hbase.master.ServerManager.waitForRegionServers(ServerManager.java:1011)
> ---
> ---
> 2015-09-26 15:32:50,841 | INFO | HM2-Host:21300.activeMasterManager | Waiting 
> for region servers count to settle; currently checked in 0, slept for 483913 
> ms, expecting minimum of 1, maximum of 2147483647, timeout of 4500 ms, 
> interval of 1500 ms. | 
> org.apache.hadoop.hbase.master.ServerManager.waitForRegionServers(ServerManager.java:1011)
> {noformat}
> > At the other end, region servers are reporting to HM1 at a 3-second 
> > interval. A region server retrieves the master location from ZooKeeper 
> > only when it cannot connect to the master (ServiceException).
> Per the current design, region servers will not report to HM2 unless HM1 
> aborts, so HM2 will exit (InitializationMonitor) and again wait for region 
> servers in a loop.
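
For context, the general shape of breaking such a retry loop, illustrative 
only and not the attached patch (the two config keys are real 
RecoverableZooKeeper settings; zooKeeper, znode, conf and abortable are 
assumed in scope):
{code}
// Illustrative pattern: bound the reconnect attempts and abort instead of
// spinning forever when ZooKeeper is unreachable.
int maxRetries = conf.getInt("zookeeper.recovery.retry", 3);
long retryIntervalMs =
    conf.getLong("zookeeper.recovery.retry.intervalmill", 1000);
for (int attempt = 0; ; attempt++) {
  try {
    return zooKeeper.exists(znode, false);  // whichever ZK op was looping
  } catch (KeeperException.ConnectionLossException e) {
    if (attempt >= maxRetries) {
      // Surface a fatal error so failover can proceed.
      abortable.abort("ZooKeeper unreachable after " + attempt + " retries", e);
      throw e;
    }
    Thread.sleep(retryIntervalMs * (attempt + 1));  // linear backoff
  }
}
{code}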



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

[jira] [Commented] (HBASE-9393) Hbase does not closing a closed socket resulting in many CLOSE_WAIT

2016-03-03 Thread Ashish Singhi (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9393?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15179488#comment-15179488
 ] 

Ashish Singhi commented on HBASE-9393:
--

[~anoop.hbase], [~busbey], can you please review the latest patch ?

> Hbase does not closing a closed socket resulting in many CLOSE_WAIT 
> 
>
> Key: HBASE-9393
> URL: https://issues.apache.org/jira/browse/HBASE-9393
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 0.94.2, 0.98.0
> Environment: Centos 6.4 - 7 regionservers/datanodes, 8 TB per node, 
> 7279 regions
>Reporter: Avi Zrachya
>Assignee: Ashish Singhi
>Priority: Critical
> Fix For: 2.0.0
>
> Attachments: HBASE-9393.patch, HBASE-9393.v1.patch, 
> HBASE-9393.v10.patch, HBASE-9393.v11.patch, HBASE-9393.v12.patch, 
> HBASE-9393.v13.patch, HBASE-9393.v14.patch, HBASE-9393.v2.patch, 
> HBASE-9393.v3.patch, HBASE-9393.v4.patch, HBASE-9393.v5.patch, 
> HBASE-9393.v5.patch, HBASE-9393.v5.patch, HBASE-9393.v6.patch, 
> HBASE-9393.v6.patch, HBASE-9393.v6.patch, HBASE-9393.v7.patch, 
> HBASE-9393.v8.patch, HBASE-9393.v9.patch
>
>
> HBase does not close a dead connection with the datanode.
> This results in over 60K CLOSE_WAIT sockets, and at some point HBase can not 
> connect to the datanode because there are too many mapped sockets from one 
> host to another on the same port.
> The example below shows a low CLOSE_WAIT count because we had to restart 
> hbase to solve the problem; later in time it will increase to 60-100K 
> sockets in CLOSE_WAIT.
> [root@hd2-region3 ~]# netstat -nap |grep CLOSE_WAIT |grep 21592 |wc -l
> 13156
> [root@hd2-region3 ~]# ps -ef |grep 21592
> root  17255 17219  0 12:26 pts/0    00:00:00 grep 21592
> hbase 21592     1 17 Aug29 ?        03:29:06 
> /usr/java/jdk1.6.0_26/bin/java -XX:OnOutOfMemoryError=kill -9 %p -Xmx8000m 
> -ea -XX:+UseConcMarkSweepGC -XX:+CMSIncrementalMode 
> -Dhbase.log.dir=/var/log/hbase 
> -Dhbase.log.file=hbase-hbase-regionserver-hd2-region3.swnet.corp.log ...
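
As a reminder of the TCP mechanics behind these numbers (a plain illustration; 
the endpoint is hypothetical): a socket sits in CLOSE_WAIT from the moment the 
remote end closes until the local side calls close(), so a dead pooled 
connection must be closed, not merely dropped.
{code}
import java.io.InputStream;
import java.net.Socket;

Socket s = new Socket("datanode-host", 50010);  // hypothetical endpoint
InputStream in = s.getInputStream();
if (in.read() == -1) {  // remote end closed; local socket is now CLOSE_WAIT
  s.close();            // without this, the fd leaks and CLOSE_WAIT piles up
}
{code}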



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-15392) Single Cell Get reads two HFileBlocks

2016-03-03 Thread ramkrishna.s.vasudevan (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15392?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15179482#comment-15179482
 ] 

ramkrishna.s.vasudevan commented on HBASE-15392:


One thing is that even though you have got the cell that you need (i.e. an 
INCLUDE match), the code will do
{code}
else {
  this.heap.next();
}
{code}
and that will do a next() call. If that cell is in the next block we will have 
to fetch it, and later the matcher will decide whether we have already crossed 
the required cell and then say DONE. 
In your case there is a SKIP and then one more next() call which leads to the 
next block. So this SKIP could be due either to duplicate versions or maybe to 
TTL expiry, something like that? 
{code}
// check if the cell is expired by cell TTL
if (HStore.isCellTTLExpired(cell, this.oldestUnexpiredTS, this.now)) {
  return MatchCode.SKIP;
} 
{code}
I can see that this code will create the issue you suggested: even though this 
next cell was of the next column, if its TTL has expired we will go and fetch 
heap#next(), thinking that it has to be SKIPped.
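
A sketch of the reordering implied above (names and signatures are 
assumptions, not a reviewed fix): consult the column tracker before the TTL 
check, so an expired cell in an already-satisfied column yields a SEEK instead 
of a SKIP that forces reading the next block.
{code}
// Hypothetical rearrangement inside the matcher (names assumed):
MatchCode colCode = columns.checkColumn(cell, typeByte);
if (colCode == MatchCode.SEEK_NEXT_ROW || colCode == MatchCode.SEEK_NEXT_COL) {
  return colCode;  // column already satisfied; no need to touch the next block
}
// Only now apply the TTL check that currently short-circuits to SKIP.
if (HStore.isCellTTLExpired(cell, this.oldestUnexpiredTS, this.now)) {
  return MatchCode.SKIP;
}
{code}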

> Single Cell Get reads two HFileBlocks
> -
>
> Key: HBASE-15392
> URL: https://issues.apache.org/jira/browse/HBASE-15392
> Project: HBase
>  Issue Type: Sub-task
>  Components: BucketCache
>Reporter: stack
>Assignee: stack
>
> A simple Get results in our reading two HFileBlocks, the one that contains 
> the wanted Cell, and the block that follows.
> Here is a bit of custom logging that logs a stack trace on each HFileBlock 
> read so you can see the call stack responsible:
> {code}
> 2016-03-03 22:20:30,191 INFO  
> [B.defaultRpcServer.handler=20,queue=2,port=16020] regionserver.StoreScanner: 
> START LOOP
> 2016-03-03 22:20:30,192 INFO  
> [B.defaultRpcServer.handler=20,queue=2,port=16020] regionserver.StoreScanner: 
> QCODE SEEK_NEXT_COL
> 2016-03-03 22:20:30,192 INFO  
> [B.defaultRpcServer.handler=20,queue=2,port=16020] hfile.HFileBlockIndex: 
> STARTED WHILE
> 2016-03-03 22:20:30,192 INFO  
> [B.defaultRpcServer.handler=20,queue=2,port=16020] hfile.CombinedBlockCache: 
> OUT OF L2
> 2016-03-03 22:20:30,192 TRACE 
> [B.defaultRpcServer.handler=20,queue=2,port=16020] bucket.BucketCache: Read 
> offset=31409152, len=2103
> 2016-03-03 22:20:30,192 TRACE 
> [B.defaultRpcServer.handler=20,queue=2,port=16020] bucket.FileIOEngine: 
> offset=31409152, length=2103
> 2016-03-03 22:20:30,193 TRACE 
> [B.defaultRpcServer.handler=20,queue=2,port=16020] hfile.HFileReaderImpl: 
> From Cache [blockType=DATA, fileOffset=2055421, headerSize=33, 
> onDiskSizeWithoutHeader=2024, uncompressedSizeWithoutHeader=2020, 
> prevBlockOffset=2053364, isUseHBaseChecksum=true, checksumType=CRC32C, 
> bytesPerChecksum=16384, onDiskDataSizeWithHeader=2053, 
> getOnDiskSizeWithHeader=2057, totalChecksumBytes=4, isUnpacked=true, 
> buf=[org.apache.hadoop.hbase.nio.SingleByteBuff@e19fbd54], 
> dataBeginsWith=\x00\x00\x00)\x00\x00\x01`\x00\x16user995139035672819231, 
> fileContext=[usesHBaseChecksum=true, checksumType=CRC32C, 
> bytesPerChecksum=16384, blocksize=65536, encoding=NONE, includesMvcc=true, 
> includesTags=false, compressAlgo=NONE, compressTags=false, 
> cryptoContext=[cipher=NONE keyHash=NONE]]]
> 2016-03-03 22:20:30,193 TRACE 
> [B.defaultRpcServer.handler=20,queue=2,port=16020] hfile.HFileReaderImpl: 
> Cache hit return [blockType=DATA, fileOffset=2055421, headerSize=33, 
> onDiskSizeWithoutHeader=2024, uncompressedSizeWithoutHeader=2020, 
> prevBlockOffset=2053364, isUseHBaseChecksum=true, checksumType=CRC32C, 
> bytesPerChecksum=16384, onDiskDataSizeWithHeader=2053, 
> getOnDiskSizeWithHeader=2057, totalChecksumBytes=4, isUnpacked=true, 
> buf=[org.apache.hadoop.hbase.nio.SingleByteBuff@e19fbd54], 
> dataBeginsWith=\x00\x00\x00)\x00\x00\x01`\x00\x16user995139035672819231, 
> fileContext=[usesHBaseChecksum=true, checksumType=CRC32C, 
> bytesPerChecksum=16384, blocksize=65536, encoding=NONE, includesMvcc=true, 
> includesTags=false, compressAlgo=NONE, compressTags=false, 
> cryptoContext=[cipher=NONE keyHash=NONE]]]
> java.lang.Throwable
> at 
> org.apache.hadoop.hbase.io.hfile.HFileReaderImpl.readBlock(HFileReaderImpl.java:1515)
> at 
> org.apache.hadoop.hbase.io.hfile.HFileBlockIndex$CellBasedKeyBlockIndexReader.loadDataBlockWithScanInfo(HFileBlockIndex.java:324)
> at 
> org.apache.hadoop.hbase.io.hfile.HFileReaderImpl$HFileScannerImpl.seekTo(HFileReaderImpl.java:831)
> at 
> org.apache.hadoop.hbase.io.hfile.HFileReaderImpl$HFileScannerImpl.reseekTo(HFileReaderImpl.java:812)
> at 
> org.apache.hadoop.hbase.regionserver.StoreFileScanner.reseekAtOrAfter(StoreFileScanner.java:288)
> at 
> org.apache.hadoop.hbase.regionserver.StoreFileScanner.reseek(StoreFileScanner.java:198)
> at 
> 

[jira] [Commented] (HBASE-15391) Avoid too large "deleted from META" info log

2016-03-03 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15391?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15179473#comment-15179473
 ] 

Hadoop QA commented on HBASE-15391:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} patch {color} | {color:blue} 0m 2s 
{color} | {color:blue} The patch file was not named according to hbase's naming 
conventions. Please see 
https://yetus.apache.org/documentation/latest/precommit-patchnames for 
instructions. {color} |
| {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green} 0m 
0s {color} | {color:green} Patch does not have any anti-patterns. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s 
{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 4m 
18s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 44s 
{color} | {color:green} master passed with JDK v1.8.0 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 26s 
{color} | {color:green} master passed with JDK v1.7.0_79 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 2m 
43s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
14s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 
24s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 44s 
{color} | {color:green} master passed with JDK v1.8.0 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 28s 
{color} | {color:green} master passed with JDK v1.7.0_79 {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
29s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 47s 
{color} | {color:green} the patch passed with JDK v1.8.0 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 47s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 31s 
{color} | {color:green} the patch passed with JDK v1.7.0_79 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 31s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 2m 
47s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
14s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 
39m 31s {color} | {color:green} Patch does not cause any errors with Hadoop 
2.4.0 2.4.1 2.5.0 2.5.1 2.5.2 2.6.1 2.6.2 2.6.3 2.7.1. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 
49s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 52s 
{color} | {color:green} the patch passed with JDK v1.8.0 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 33s 
{color} | {color:green} the patch passed with JDK v1.7.0_79 {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 1m 41s 
{color} | {color:green} hbase-client in the patch passed with JDK v1.8.0. 
{color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 1m 12s 
{color} | {color:green} hbase-client in the patch passed with JDK v1.7.0_79. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
8s {color} | {color:green} Patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 62m 6s {color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12791411/HBASE-15391-trunk-v1.diff
 |
| JIRA Issue | HBASE-15391 |
| Optional Tests |  asflicense  javac  javadoc  unit  findbugs  hadoopcheck  
hbaseanti  checkstyle  compile  |
| uname | Linux 

[jira] [Commented] (HBASE-15392) Single Cell Get reads two HFileBlocks

2016-03-03 Thread ramkrishna.s.vasudevan (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15392?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15179464#comment-15179464
 ] 

ramkrishna.s.vasudevan commented on HBASE-15392:


[~saint@gmail.com]
Can you get the stack trace for why this SKIP happens?
bq.Then Matcher does SKIP
bq. SKIP has us go read the next block.

It would be better if you could print the cell that comes out of StoreScanner 
next() just after the qcode is retrieved. I think in your case you have the 
explicit column tracker too. 

> Single Cell Get reads two HFileBlocks
> -
>
> Key: HBASE-15392
> URL: https://issues.apache.org/jira/browse/HBASE-15392
> Project: HBase
>  Issue Type: Sub-task
>  Components: BucketCache
>Reporter: stack
>Assignee: stack
>
> A simple Get results in our reading two HFileBlocks, the one that contains 
> the wanted Cell, and the block that follows.
> Here is a bit of custom logging that logs a stack trace on each HFileBlock 
> read so you can see the call stack responsible:
> {code}
> 2016-03-03 22:20:30,191 INFO  
> [B.defaultRpcServer.handler=20,queue=2,port=16020] regionserver.StoreScanner: 
> START LOOP
> 2016-03-03 22:20:30,192 INFO  
> [B.defaultRpcServer.handler=20,queue=2,port=16020] regionserver.StoreScanner: 
> QCODE SEEK_NEXT_COL
> 2016-03-03 22:20:30,192 INFO  
> [B.defaultRpcServer.handler=20,queue=2,port=16020] hfile.HFileBlockIndex: 
> STARTED WHILE
> 2016-03-03 22:20:30,192 INFO  
> [B.defaultRpcServer.handler=20,queue=2,port=16020] hfile.CombinedBlockCache: 
> OUT OF L2
> 2016-03-03 22:20:30,192 TRACE 
> [B.defaultRpcServer.handler=20,queue=2,port=16020] bucket.BucketCache: Read 
> offset=31409152, len=2103
> 2016-03-03 22:20:30,192 TRACE 
> [B.defaultRpcServer.handler=20,queue=2,port=16020] bucket.FileIOEngine: 
> offset=31409152, length=2103
> 2016-03-03 22:20:30,193 TRACE 
> [B.defaultRpcServer.handler=20,queue=2,port=16020] hfile.HFileReaderImpl: 
> From Cache [blockType=DATA, fileOffset=2055421, headerSize=33, 
> onDiskSizeWithoutHeader=2024, uncompressedSizeWithoutHeader=2020, 
> prevBlockOffset=2053364, isUseHBaseChecksum=true, checksumType=CRC32C, 
> bytesPerChecksum=16384, onDiskDataSizeWithHeader=2053, 
> getOnDiskSizeWithHeader=2057, totalChecksumBytes=4, isUnpacked=true, 
> buf=[org.apache.hadoop.hbase.nio.SingleByteBuff@e19fbd54], 
> dataBeginsWith=\x00\x00\x00)\x00\x00\x01`\x00\x16user995139035672819231, 
> fileContext=[usesHBaseChecksum=true, checksumType=CRC32C, 
> bytesPerChecksum=16384, blocksize=65536, encoding=NONE, includesMvcc=true, 
> includesTags=false, compressAlgo=NONE, compressTags=false, 
> cryptoContext=[cipher=NONE keyHash=NONE]]]
> 2016-03-03 22:20:30,193 TRACE 
> [B.defaultRpcServer.handler=20,queue=2,port=16020] hfile.HFileReaderImpl: 
> Cache hit return [blockType=DATA, fileOffset=2055421, headerSize=33, 
> onDiskSizeWithoutHeader=2024, uncompressedSizeWithoutHeader=2020, 
> prevBlockOffset=2053364, isUseHBaseChecksum=true, checksumType=CRC32C, 
> bytesPerChecksum=16384, onDiskDataSizeWithHeader=2053, 
> getOnDiskSizeWithHeader=2057, totalChecksumBytes=4, isUnpacked=true, 
> buf=[org.apache.hadoop.hbase.nio.SingleByteBuff@e19fbd54], 
> dataBeginsWith=\x00\x00\x00)\x00\x00\x01`\x00\x16user995139035672819231, 
> fileContext=[usesHBaseChecksum=true, checksumType=CRC32C, 
> bytesPerChecksum=16384, blocksize=65536, encoding=NONE, includesMvcc=true, 
> includesTags=false, compressAlgo=NONE, compressTags=false, 
> cryptoContext=[cipher=NONE keyHash=NONE]]]
> java.lang.Throwable
> at 
> org.apache.hadoop.hbase.io.hfile.HFileReaderImpl.readBlock(HFileReaderImpl.java:1515)
> at 
> org.apache.hadoop.hbase.io.hfile.HFileBlockIndex$CellBasedKeyBlockIndexReader.loadDataBlockWithScanInfo(HFileBlockIndex.java:324)
> at 
> org.apache.hadoop.hbase.io.hfile.HFileReaderImpl$HFileScannerImpl.seekTo(HFileReaderImpl.java:831)
> at 
> org.apache.hadoop.hbase.io.hfile.HFileReaderImpl$HFileScannerImpl.reseekTo(HFileReaderImpl.java:812)
> at 
> org.apache.hadoop.hbase.regionserver.StoreFileScanner.reseekAtOrAfter(StoreFileScanner.java:288)
> at 
> org.apache.hadoop.hbase.regionserver.StoreFileScanner.reseek(StoreFileScanner.java:198)
> at 
> org.apache.hadoop.hbase.regionserver.NonLazyKeyValueScanner.doRealSeek(NonLazyKeyValueScanner.java:54)
> at 
> org.apache.hadoop.hbase.regionserver.KeyValueHeap.generalizedSeek(KeyValueHeap.java:321)
> at 
> org.apache.hadoop.hbase.regionserver.KeyValueHeap.requestSeek(KeyValueHeap.java:279)
> at 
> org.apache.hadoop.hbase.regionserver.StoreScanner.reseek(StoreScanner.java:806)
> at 
> org.apache.hadoop.hbase.regionserver.StoreScanner.seekAsDirection(StoreScanner.java:795)
> at 
> 

[jira] [Commented] (HBASE-15373) DEPRECATED_NAME_OF_NO_LIMIT_THROUGHPUT_CONTROLLER_CLASS value is wrong in CompactionThroughputControllerFactory

2016-03-03 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15373?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15179459#comment-15179459
 ] 

Hudson commented on HBASE-15373:


FAILURE: Integrated in HBase-1.3 #587 (See 
[https://builds.apache.org/job/HBase-1.3/587/])
HBASE-15373 DEPRECATED_NAME_OF_NO_LIMIT_THROUGHPUT_CONTROLLER_CLASS (stack: rev 
202a1e86a7d02a53c72bdd0e0044f7a080e26422)
* hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HStore.java
* 
hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/RegionSplitPolicy.java
* 
hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/throttle/CompactionThroughputControllerFactory.java


> DEPRECATED_NAME_OF_NO_LIMIT_THROUGHPUT_CONTROLLER_CLASS value is wrong in 
> CompactionThroughputControllerFactory
> ---
>
> Key: HBASE-15373
> URL: https://issues.apache.org/jira/browse/HBASE-15373
> Project: HBase
>  Issue Type: Sub-task
>  Components: Compaction
>Affects Versions: 2.0.0, 0.98.11, 1.2.0, 1.3.0
>Reporter: stack
>Assignee: stack
>Priority: Minor
> Fix For: 2.0.0, 1.3.0
>
> Attachments: 15373.patch, 15373v2.patch, 15373v2.patch
>
>
> I couldn't turn off compaction throughput by following the release note 
> instructions. I fixed the release notes in the parent issue, but a code fix 
> is also needed in the factory class.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-14498) Master stuck in infinite loop when all Zookeeper servers are unreachable

2016-03-03 Thread Pankaj Kumar (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14498?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15179444#comment-15179444
 ] 

Pankaj Kumar commented on HBASE-14498:
--

Thanks Ted for reviewing the patch.
The checkstyle warnings are "At-clause should have a non-empty description" 
for the multiple @throws tags. 
I will add a small description for them.

> Master stuck in infinite loop when all Zookeeper servers are unreachable
> 
>
> Key: HBASE-14498
> URL: https://issues.apache.org/jira/browse/HBASE-14498
> Project: HBase
>  Issue Type: Bug
>  Components: master
>Reporter: Y. SREENIVASULU REDDY
>Assignee: Pankaj Kumar
>Priority: Critical
> Fix For: 2.0.0
>
> Attachments: HBASE-14498-V2.patch, HBASE-14498-V3.patch, 
> HBASE-14498-V4.patch, HBASE-14498-V5.patch, HBASE-14498.patch
>
>
> We met a weird scenario in our production environment.
> In an HA cluster,
> > The active master (HM1) is not able to connect to any ZooKeeper server 
> > (due to a network breakdown between the master machine and the ZooKeeper 
> > servers).
> {code}
> 2015-09-26 15:24:47,508 INFO 
> [HM1-Host:16000.activeMasterManager-SendThread(ZK-Host:2181)] 
> zookeeper.ClientCnxn: Client session timed out, have not heard from server in 
> 33463ms for sessionid 0x104576b8dda0002, closing socket connection and 
> attempting reconnect
> 2015-09-26 15:24:47,877 INFO 
> [HM1-Host:16000.activeMasterManager-SendThread(ZK-Host1:2181)] 
> client.FourLetterWordMain: connecting to ZK-Host1 2181
> 2015-09-26 15:24:48,236 INFO [main-SendThread(ZK-Host1:2181)] 
> client.FourLetterWordMain: connecting to ZK-Host1 2181
> 2015-09-26 15:24:49,879 WARN 
> [HM1-Host:16000.activeMasterManager-SendThread(ZK-Host1:2181)] 
> zookeeper.ClientCnxn: Can not get the principle name from server ZK-Host1
> 2015-09-26 15:24:49,879 INFO 
> [HM1-Host:16000.activeMasterManager-SendThread(ZK-Host1:2181)] 
> zookeeper.ClientCnxn: Opening socket connection to server 
> ZK-Host1/ZK-IP1:2181. Will not attempt to authenticate using SASL (unknown 
> error)
> 2015-09-26 15:24:50,238 WARN [main-SendThread(ZK-Host1:2181)] 
> zookeeper.ClientCnxn: Can not get the principle name from server ZK-Host1
> 2015-09-26 15:24:50,238 INFO [main-SendThread(ZK-Host1:2181)] 
> zookeeper.ClientCnxn: Opening socket connection to server 
> ZK-Host1/ZK-Host1:2181. Will not attempt to authenticate using SASL (unknown 
> error)
> 2015-09-26 15:25:17,470 INFO [main-SendThread(ZK-Host1:2181)] 
> zookeeper.ClientCnxn: Client session timed out, have not heard from server in 
> 30023ms for sessionid 0x2045762cc710006, closing socket connection and 
> attempting reconnect
> 2015-09-26 15:25:17,571 WARN [master/HM1-Host/HM1-IP:16000] 
> zookeeper.RecoverableZooKeeper: Possibly transient ZooKeeper, 
> quorum=ZK-Host:2181,ZK-Host1:2181,ZK-Host2:2181, 
> exception=org.apache.zookeeper.KeeperException$ConnectionLossException: 
> KeeperErrorCode = ConnectionLoss for /hbase/master
> 2015-09-26 15:25:17,872 INFO [main-SendThread(ZK-Host:2181)] 
> client.FourLetterWordMain: connecting to ZK-Host 2181
> 2015-09-26 15:25:19,874 WARN [main-SendThread(ZK-Host:2181)] 
> zookeeper.ClientCnxn: Can not get the principle name from server ZK-Host
> 2015-09-26 15:25:19,874 INFO [main-SendThread(ZK-Host:2181)] 
> zookeeper.ClientCnxn: Opening socket connection to server ZK-Host/ZK-IP:2181. 
> Will not attempt to authenticate using SASL (unknown error)
> {code}
> > Since HM1 was not able to connect to any ZK, the session timeout didn't 
> > happen at the ZooKeeper server side and HM1 didn't abort.
> > On ZooKeeper session timeout, the standby master (HM2) registered itself 
> > as the active master. 
> > HM2 keeps waiting for region servers to report to it as part of active 
> > master initialization.
> {noformat} 
> 2015-09-26 15:24:44,928 | INFO | HM2-Host:21300.activeMasterManager | Waiting 
> for region servers count to settle; currently checked in 0, slept for 0 ms, 
> expecting minimum of 1, maximum of 2147483647, timeout of 4500 ms, interval 
> of 1500 ms. | 
> org.apache.hadoop.hbase.master.ServerManager.waitForRegionServers(ServerManager.java:1011)
> ---
> ---
> 2015-09-26 15:32:50,841 | INFO | HM2-Host:21300.activeMasterManager | Waiting 
> for region servers count to settle; currently checked in 0, slept for 483913 
> ms, expecting minimum of 1, maximum of 2147483647, timeout of 4500 ms, 
> interval of 1500 ms. | 
> org.apache.hadoop.hbase.master.ServerManager.waitForRegionServers(ServerManager.java:1011)
> {noformat}
> > At the other end, region servers are reporting to HM1 at a 3-second 
> > interval. A region server retrieves the master location from ZooKeeper 
> > only when it cannot connect to the master (ServiceException).
> Per the current design, region servers will not report to HM2 unless HM1 
> aborts, so HM2 will 

[jira] [Commented] (HBASE-15291) FileSystem not closed in secure bulkLoad

2016-03-03 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15291?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15179440#comment-15179440
 ] 

Hudson commented on HBASE-15291:


FAILURE: Integrated in HBase-1.2 #569 (See 
[https://builds.apache.org/job/HBase-1.2/569/])
HBASE-15291 FileSystem not closed in secure bulkLoad (Yong Zhang) (tedyu: rev 
105fd08651eeb664f615253790fbe9595d00d387)
* 
hbase-server/src/main/java/org/apache/hadoop/hbase/security/access/SecureBulkLoadEndpoint.java


> FileSystem not closed in secure bulkLoad
> 
>
> Key: HBASE-15291
> URL: https://issues.apache.org/jira/browse/HBASE-15291
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 1.0.2, 0.98.16.1
>Reporter: Yong Zhang
>Assignee: Yong Zhang
> Fix For: 2.0.0, 1.3.0, 1.2.1, 0.98.18, 1.4.0
>
> Attachments: HBASE-15291.001.patch, HBASE-15291.002.patch, 
> HBASE-15291.003.patch, HBASE-15291.004.patch, HBASE-15291.addendum, patch
>
>
> The FileSystem is not closed in secure bulkLoad after the bulkLoad finishes; 
> this will cause memory usage to grow more and more if there are too many 
> bulkLoads.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-15376) ScanNext metric is size-based while every other per-operation metric is time based

2016-03-03 Thread Heng Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-15376?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Heng Chen updated HBASE-15376:
--
Attachment: HBASE-15376_v3.patch

Addressed [~enis]'s new comments.

> ScanNext metric is size-based while every other per-operation metric is time 
> based
> --
>
> Key: HBASE-15376
> URL: https://issues.apache.org/jira/browse/HBASE-15376
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Enis Soztutar
> Attachments: HBASE-15376.patch, HBASE-15376_v1.patch, 
> HBASE-15376_v3.patch
>
>
> We have per-operation metrics for {{Get}}, {{Mutate}}, {{Delete}}, 
> {{Increment}}, and {{ScanNext}}. 
> The metrics are emitted like: 
> {code}
>"Get_num_ops" : 4837505,
> "Get_min" : 0,
> "Get_max" : 296,
> "Get_mean" : 0.2934618155433431,
> "Get_median" : 0.0,
> "Get_75th_percentile" : 0.0,
> "Get_95th_percentile" : 1.0,
> "Get_99th_percentile" : 1.0,
> ...
> "ScanNext_num_ops" : 194705,
> "ScanNext_min" : 0,
> "ScanNext_max" : 18441,
> "ScanNext_mean" : 7468.274651395701,
> "ScanNext_median" : 583.0,
> "ScanNext_75th_percentile" : 583.0,
> "ScanNext_95th_percentile" : 13481.0,
> "ScanNext_99th_percentile" : 13481.0,
> {code}
> The problem is that Get, Mutate, Delete, Increment, Append, and Replay are 
> all time based, tracking how long the operation ran, while ScanNext tracks 
> returned response sizes (returned cell-sizes to be exact). Obviously, this 
> is very confusing, and you would only know this subtlety if you read the 
> metrics collection code. 
> Not sure how useful the ScanNext metric is as it is today. We can deprecate 
> it and introduce a time based one to keep track of scan request latencies. 
> ps. Shamelessly using the parent jira (since these seem relevant). 
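
A sketch of what a time-based metric could look like at the call site (the 
updateScanTime/updateScanSize names and the surrounding variables are 
assumptions for illustration, not the attached patch):
{code}
// Track scan-next latency like the other per-operation metrics, and keep
// the old size-based value under a separate, honestly named metric.
long startNanos = System.nanoTime();
boolean moreRows = scanner.next(results);               // 'results' assumed
long micros = (System.nanoTime() - startNanos) / 1000L;
metricsRegionServer.updateScanTime(micros);             // hypothetical: time
metricsRegionServer.updateScanSize(responseCellSize);   // hypothetical: size
{code}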



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HBASE-15392) Single Cell Get reads two HFileBlocks

2016-03-03 Thread stack (JIRA)
stack created HBASE-15392:
-

 Summary: Single Cell Get reads two HFileBlocks
 Key: HBASE-15392
 URL: https://issues.apache.org/jira/browse/HBASE-15392
 Project: HBase
  Issue Type: Sub-task
  Components: BucketCache
Reporter: stack
Assignee: stack


A simple Get results in our reading two HFileBlocks, the one that contains the 
wanted Cell, and the block that follows.

Here is a bit of custom logging that logs a stack trace on each HFileBlock read 
so you can see the call stack responsible:

{code}
2016-03-03 22:20:30,191 INFO  
[B.defaultRpcServer.handler=20,queue=2,port=16020] regionserver.StoreScanner: 
START LOOP
2016-03-03 22:20:30,192 INFO  
[B.defaultRpcServer.handler=20,queue=2,port=16020] regionserver.StoreScanner: 
QCODE SEEK_NEXT_COL
2016-03-03 22:20:30,192 INFO  
[B.defaultRpcServer.handler=20,queue=2,port=16020] hfile.HFileBlockIndex: 
STARTED WHILE
2016-03-03 22:20:30,192 INFO  
[B.defaultRpcServer.handler=20,queue=2,port=16020] hfile.CombinedBlockCache: 
OUT OF L2
2016-03-03 22:20:30,192 TRACE 
[B.defaultRpcServer.handler=20,queue=2,port=16020] bucket.BucketCache: Read 
offset=31409152, len=2103
2016-03-03 22:20:30,192 TRACE 
[B.defaultRpcServer.handler=20,queue=2,port=16020] bucket.FileIOEngine: 
offset=31409152, length=2103
2016-03-03 22:20:30,193 TRACE 
[B.defaultRpcServer.handler=20,queue=2,port=16020] hfile.HFileReaderImpl: From 
Cache [blockType=DATA, fileOffset=2055421, headerSize=33, 
onDiskSizeWithoutHeader=2024, uncompressedSizeWithoutHeader=2020, 
prevBlockOffset=2053364, isUseHBaseChecksum=true, checksumType=CRC32C, 
bytesPerChecksum=16384, onDiskDataSizeWithHeader=2053, 
getOnDiskSizeWithHeader=2057, totalChecksumBytes=4, isUnpacked=true, 
buf=[org.apache.hadoop.hbase.nio.SingleByteBuff@e19fbd54], 
dataBeginsWith=\x00\x00\x00)\x00\x00\x01`\x00\x16user995139035672819231, 
fileContext=[usesHBaseChecksum=true, checksumType=CRC32C, 
bytesPerChecksum=16384, blocksize=65536, encoding=NONE, includesMvcc=true, 
includesTags=false, compressAlgo=NONE, compressTags=false, 
cryptoContext=[cipher=NONE keyHash=NONE]]]
2016-03-03 22:20:30,193 TRACE 
[B.defaultRpcServer.handler=20,queue=2,port=16020] hfile.HFileReaderImpl: Cache 
hit return [blockType=DATA, fileOffset=2055421, headerSize=33, 
onDiskSizeWithoutHeader=2024, uncompressedSizeWithoutHeader=2020, 
prevBlockOffset=2053364, isUseHBaseChecksum=true, checksumType=CRC32C, 
bytesPerChecksum=16384, onDiskDataSizeWithHeader=2053, 
getOnDiskSizeWithHeader=2057, totalChecksumBytes=4, isUnpacked=true, 
buf=[org.apache.hadoop.hbase.nio.SingleByteBuff@e19fbd54], 
dataBeginsWith=\x00\x00\x00)\x00\x00\x01`\x00\x16user995139035672819231, 
fileContext=[usesHBaseChecksum=true, checksumType=CRC32C, 
bytesPerChecksum=16384, blocksize=65536, encoding=NONE, includesMvcc=true, 
includesTags=false, compressAlgo=NONE, compressTags=false, 
cryptoContext=[cipher=NONE keyHash=NONE]]]
java.lang.Throwable
at 
org.apache.hadoop.hbase.io.hfile.HFileReaderImpl.readBlock(HFileReaderImpl.java:1515)
at 
org.apache.hadoop.hbase.io.hfile.HFileBlockIndex$CellBasedKeyBlockIndexReader.loadDataBlockWithScanInfo(HFileBlockIndex.java:324)
at 
org.apache.hadoop.hbase.io.hfile.HFileReaderImpl$HFileScannerImpl.seekTo(HFileReaderImpl.java:831)
at 
org.apache.hadoop.hbase.io.hfile.HFileReaderImpl$HFileScannerImpl.reseekTo(HFileReaderImpl.java:812)
at 
org.apache.hadoop.hbase.regionserver.StoreFileScanner.reseekAtOrAfter(StoreFileScanner.java:288)
at 
org.apache.hadoop.hbase.regionserver.StoreFileScanner.reseek(StoreFileScanner.java:198)
at 
org.apache.hadoop.hbase.regionserver.NonLazyKeyValueScanner.doRealSeek(NonLazyKeyValueScanner.java:54)
at 
org.apache.hadoop.hbase.regionserver.KeyValueHeap.generalizedSeek(KeyValueHeap.java:321)
at 
org.apache.hadoop.hbase.regionserver.KeyValueHeap.requestSeek(KeyValueHeap.java:279)
at 
org.apache.hadoop.hbase.regionserver.StoreScanner.reseek(StoreScanner.java:806)
at 
org.apache.hadoop.hbase.regionserver.StoreScanner.seekAsDirection(StoreScanner.java:795)
at 
org.apache.hadoop.hbase.regionserver.StoreScanner.next(StoreScanner.java:624)
at 
org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:153)
at 
org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.populateResult(HRegion.java:5703)
at 
org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextInternal(HRegion.java:5849)
at 
org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextRaw(HRegion.java:5622)
at 
org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:5598)
at 
org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:5584)
at 
org.apache.hadoop.hbase.regionserver.RSRpcServices.get(RSRpcServices.java:2187)
  

[jira] [Commented] (HBASE-15373) DEPRECATED_NAME_OF_NO_LIMIT_THROUGHPUT_CONTROLLER_CLASS value is wrong in CompactionThroughputControllerFactory

2016-03-03 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15373?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15179425#comment-15179425
 ] 

Hudson commented on HBASE-15373:


FAILURE: Integrated in HBase-1.4 #2 (See 
[https://builds.apache.org/job/HBase-1.4/2/])
HBASE-15373 DEPRECATED_NAME_OF_NO_LIMIT_THROUGHPUT_CONTROLLER_CLASS (stack: rev 
d089ac1ec6878c24799c1301ae649029c3a010dd)
* 
hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/throttle/CompactionThroughputControllerFactory.java
* hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HStore.java
* 
hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/RegionSplitPolicy.java


> DEPRECATED_NAME_OF_NO_LIMIT_THROUGHPUT_CONTROLLER_CLASS value is wrong in 
> CompactionThroughputControllerFactory
> ---
>
> Key: HBASE-15373
> URL: https://issues.apache.org/jira/browse/HBASE-15373
> Project: HBase
>  Issue Type: Sub-task
>  Components: Compaction
>Affects Versions: 2.0.0, 0.98.11, 1.2.0, 1.3.0
>Reporter: stack
>Assignee: stack
>Priority: Minor
> Fix For: 2.0.0, 1.3.0
>
> Attachments: 15373.patch, 15373v2.patch, 15373v2.patch
>
>
> I couldn't turn off compaction throughput by following the release note 
> instructions. I fixed the release notes in the parent issue, but a code fix 
> is also needed in the factory class.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-15291) FileSystem not closed in secure bulkLoad

2016-03-03 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15291?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15179426#comment-15179426
 ] 

Hudson commented on HBASE-15291:


FAILURE: Integrated in HBase-1.4 #2 (See 
[https://builds.apache.org/job/HBase-1.4/2/])
HBASE-15291 FileSystem not closed in secure bulkLoad (Yong Zhang) (tedyu: rev 
b3c62680bcf93ebab7df9914ff8ca2d28cda3156)
* 
hbase-server/src/main/java/org/apache/hadoop/hbase/security/access/SecureBulkLoadEndpoint.java


> FileSystem not closed in secure bulkLoad
> 
>
> Key: HBASE-15291
> URL: https://issues.apache.org/jira/browse/HBASE-15291
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 1.0.2, 0.98.16.1
>Reporter: Yong Zhang
>Assignee: Yong Zhang
> Fix For: 2.0.0, 1.3.0, 1.2.1, 0.98.18, 1.4.0
>
> Attachments: HBASE-15291.001.patch, HBASE-15291.002.patch, 
> HBASE-15291.003.patch, HBASE-15291.004.patch, HBASE-15291.addendum, patch
>
>
> The FileSystem is not closed in secure bulkLoad after the bulkLoad finishes; 
> this will cause memory usage to grow more and more if there are too many 
> bulkLoads.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-15248) BLOCKSIZE 4k should result in 4096 bytes on disk; i.e. fit inside a BucketCache 'block' of 4k

2016-03-03 Thread stack (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-15248?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

stack updated HBASE-15248:
--
Summary: BLOCKSIZE 4k should result in 4096 bytes on disk; i.e. fit inside 
a BucketCache 'block' of 4k  (was: One block, one seek: a.k.a BLOCKSIZE 4k 
should result in 4096 bytes on disk)

> BLOCKSIZE 4k should result in 4096 bytes on disk; i.e. fit inside a 
> BucketCache 'block' of 4k
> -
>
> Key: HBASE-15248
> URL: https://issues.apache.org/jira/browse/HBASE-15248
> Project: HBase
>  Issue Type: Sub-task
>  Components: BucketCache
>Reporter: stack
>
> Chatting w/ a gentleman named Daniel Pol who is messing w/ bucketcache, he 
> wants blocks to be the size specified in the configuration and no bigger. 
> His hardware setup fetches pages of 4k, and so a block that has 4k of 
> payload but then has a header plus the header of the next block (which helps 
> figure out what's next when scanning) ends up being 4203 bytes or something, 
> and this then translates into two seeks per block fetch.
> This issue is about what it would take to stay inside our configured size 
> boundary writing out blocks.
> If not possible, give back better signal on what to do so you could fit 
> inside a particular constraint.
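
Back-of-the-envelope arithmetic for the overhead being described, using the 
HFileBlock header fields visible in the logs elsewhere in this digest 
(headerSize=33, 4-byte CRC32C checksums per 16384-byte chunk); a sketch, since 
the exact payload overshoot varies:
{code}
int payload         = 4096;                             // desired BLOCKSIZE
int headerSize      = 33;                               // this block's header
int checksumBytes   = 4 * ((payload + 16383) / 16384);  // one 4-byte chunk
int nextBlockHeader = 33;  // the following block's header, read along
int onDisk = headerSize + payload + checksumBytes + nextBlockHeader;  // 4166
// The block size is a threshold crossed after the last whole cell, so the
// payload usually overshoots 4096 a little, hence "4203 bytes or something".
// Either way, > 4096 means the block straddles two 4k pages: two seeks.
{code}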



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-15391) Avoid too large "deleted from META" info log

2016-03-03 Thread Liu Shaohui (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-15391?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Liu Shaohui updated HBASE-15391:

Status: Patch Available  (was: Open)

> Avoid too large "deleted from META" info log
> 
>
> Key: HBASE-15391
> URL: https://issues.apache.org/jira/browse/HBASE-15391
> Project: HBase
>  Issue Type: Improvement
>Reporter: Liu Shaohui
>Assignee: Liu Shaohui
>Priority: Minor
> Fix For: 2.0.0
>
> Attachments: HBASE-15391-trunk-v1.diff
>
>
> When deleting a large table in HBase, there will be a large info log in 
> HMaster.
> {code}
> 2016-02-29,05:58:45,920 INFO org.apache.hadoop.hbase.catalog.MetaEditor: 
> Deleted [{ENCODED => 4b54572150941cd03f5addfdeab0a754, NAME => 
> 'YCSBTest,,1453186492932.4b54572150941cd03f5addfdeab0a754.', STARTKEY => '', 
> ENDKEY => 'user01'}, {ENCODED => 715e142bcd6a31d7842abf286ef8a5fe, NAME => 
> 'YCSBTest,user01,1453186492933.715e142bcd6a31d7842abf286ef8a5fe.', STARTKEY 
> => 'user01', ENDKEY => 'user02'}, {ENCODED => 
> 5f9cef5714973f13baa63fba29a68d70, NAME => 
> 'YCSBTest,user02,1453186492933.5f9cef5714973f13baa63fba29a68d70.', STARTKEY 
> => 'user02', ENDKEY => 'user03'}, {ENCODED => 
> 86cf3fa4c0a6b911275512c1d4b78533, NAME => 'YCSBTest,user0...
> {code}
> The reason is that MetaTableAccessor will log all regions when deleting them 
> from meta. See, MetaTableAccessor.java#deleteRegions
> {code}
>   public static void deleteRegions(Connection connection,
>       List<HRegionInfo> regionsInfo, long ts) throws IOException {
>     List<Delete> deletes = new ArrayList<Delete>(regionsInfo.size());
>     for (HRegionInfo hri: regionsInfo) {
>       Delete e = new Delete(hri.getRegionName());
>       e.addFamily(getCatalogFamily(), ts);
>       deletes.add(e);
>     }
>     deleteFromMetaTable(connection, deletes);
>     LOG.info("Deleted " + regionsInfo);
>   }
> {code}
> Just change the info log to debug and add an info log with the number of 
> deleted regions. Other suggestions are welcome~



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-15391) Avoid too large "deleted from META" info log

2016-03-03 Thread Liu Shaohui (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-15391?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Liu Shaohui updated HBASE-15391:

Attachment: HBASE-15391-trunk-v1.diff

Simple patch to master

> Avoid too large "deleted from META" info log
> 
>
> Key: HBASE-15391
> URL: https://issues.apache.org/jira/browse/HBASE-15391
> Project: HBase
>  Issue Type: Improvement
>Reporter: Liu Shaohui
>Assignee: Liu Shaohui
>Priority: Minor
> Fix For: 2.0.0
>
> Attachments: HBASE-15391-trunk-v1.diff
>
>
> When deleting a large table in HBase, there will be a large info log in 
> HMaster.
> {code}
> 2016-02-29,05:58:45,920 INFO org.apache.hadoop.hbase.catalog.MetaEditor: 
> Deleted [{ENCODED => 4b54572150941cd03f5addfdeab0a754, NAME => 
> 'YCSBTest,,1453186492932.4b54572150941cd03f5addfdeab0a754.', STARTKEY => '', 
> ENDKEY => 'user01'}, {ENCODED => 715e142bcd6a31d7842abf286ef8a5fe, NAME => 
> 'YCSBTest,user01,1453186492933.715e142bcd6a31d7842abf286ef8a5fe.', STARTKEY 
> => 'user01', ENDKEY => 'user02'}, {ENCODED => 
> 5f9cef5714973f13baa63fba29a68d70, NAME => 
> 'YCSBTest,user02,1453186492933.5f9cef5714973f13baa63fba29a68d70.', STARTKEY 
> => 'user02', ENDKEY => 'user03'}, {ENCODED => 
> 86cf3fa4c0a6b911275512c1d4b78533, NAME => 'YCSBTest,user0...
> {code}
> The reason is that MetaTableAccessor will log all regions when deleting them 
> from meta. See, MetaTableAccessor.java#deleteRegions
> {code}
>   public static void deleteRegions(Connection connection,
>       List<HRegionInfo> regionsInfo, long ts) throws IOException {
>     List<Delete> deletes = new ArrayList<Delete>(regionsInfo.size());
>     for (HRegionInfo hri: regionsInfo) {
>       Delete e = new Delete(hri.getRegionName());
>       e.addFamily(getCatalogFamily(), ts);
>       deletes.add(e);
>     }
>     deleteFromMetaTable(connection, deletes);
>     LOG.info("Deleted " + regionsInfo);
>   }
> {code}
> Just change the info log to debug and add an info log with the number of 
> deleted regions. Other suggestions are welcome~



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-15338) Add a option to disable the data block cache for testing the performance of underlying file system

2016-03-03 Thread Liu Shaohui (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15338?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15179372#comment-15179372
 ] 

Liu Shaohui commented on HBASE-15338:
-

[~anoop.hbase]
Sorry, the patch has not been committed~ I just added the release note first 
and will commit it if there is no objection.
Any more suggestions about patch v7? Thanks~

> Add a option to disable the data block cache for testing the performance of 
> underlying file system
> --
>
> Key: HBASE-15338
> URL: https://issues.apache.org/jira/browse/HBASE-15338
> Project: HBase
>  Issue Type: Improvement
>  Components: integration tests
>Reporter: Liu Shaohui
>Assignee: Liu Shaohui
>Priority: Minor
> Fix For: 2.0.0
>
> Attachments: HBASE-15338-trunk-v1.diff, HBASE-15338-trunk-v2.diff, 
> HBASE-15338-trunk-v3.diff, HBASE-15338-trunk-v4.diff, 
> HBASE-15338-trunk-v5.diff, HBASE-15338-trunk-v6.diff, 
> HBASE-15338-trunk-v7.diff
>
>
> When testing and comparing the performance of different file systems (HDFS, 
> Azure blob storage, AWS S3 and so on) for HBase, it's better to avoid the 
> effect of the HBase BlockCache and get the actual random read latency when a 
> data block is read from the underlying file system. (Usually, the index 
> blocks and meta blocks should be cached in memory during the testing.)
> So we add an option in CacheConfig to disable the data block cache.
> Suggestions are welcome~ Thanks



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HBASE-15391) Avoid too large "deleted from META" info log

2016-03-03 Thread Liu Shaohui (JIRA)
Liu Shaohui created HBASE-15391:
---

 Summary: Avoid too large "deleted from META" info log
 Key: HBASE-15391
 URL: https://issues.apache.org/jira/browse/HBASE-15391
 Project: HBase
  Issue Type: Improvement
Reporter: Liu Shaohui
Assignee: Liu Shaohui
Priority: Minor
 Fix For: 2.0.0


When deleting a large table in HBase, there will be a large info log in HMaster.
{code}
2016-02-29,05:58:45,920 INFO org.apache.hadoop.hbase.catalog.MetaEditor: 
Deleted [{ENCODED => 4b54572150941cd03f5addfdeab0a754, NAME => 
'YCSBTest,,1453186492932.4b54572150941cd03f5addfdeab0a754.', STARTKEY => '', 
ENDKEY => 'user01'}, {ENCODED => 715e142bcd6a31d7842abf286ef8a5fe, NAME => 
'YCSBTest,user01,1453186492933.715e142bcd6a31d7842abf286ef8a5fe.', STARTKEY => 
'user01', ENDKEY => 'user02'}, {ENCODED => 5f9cef5714973f13baa63fba29a68d70, 
NAME => 'YCSBTest,user02,1453186492933.5f9cef5714973f13baa63fba29a68d70.', 
STARTKEY => 'user02', ENDKEY => 'user03'}, {ENCODED => 
86cf3fa4c0a6b911275512c1d4b78533, NAME => 'YCSBTest,user0...
{code}
The reason is that MetaTableAccessor will log all regions when deleting them 
from meta. See, MetaTableAccessor.java#deleteRegions
{code}
  public static void deleteRegions(Connection connection,
      List<HRegionInfo> regionsInfo, long ts) throws IOException {
    List<Delete> deletes = new ArrayList<Delete>(regionsInfo.size());
    for (HRegionInfo hri: regionsInfo) {
      Delete e = new Delete(hri.getRegionName());
      e.addFamily(getCatalogFamily(), ts);
      deletes.add(e);
    }
    deleteFromMetaTable(connection, deletes);
    LOG.info("Deleted " + regionsInfo);
  }
{code}
Just change the info log to debug and add an info log with the number of 
deleted regions. Other suggestions are welcome~
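
A minimal sketch of the suggested change (the attached diff may differ): keep 
the count at INFO and move the full region list to DEBUG.
{code}
deleteFromMetaTable(connection, deletes);
// One short line at INFO instead of dumping every HRegionInfo.
LOG.info("Deleted " + regionsInfo.size() + " regions from META");
if (LOG.isDebugEnabled()) {
  LOG.debug("Deleted regions: " + regionsInfo);
}
{code}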



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-15338) Add a option to disable the data block cache for testing the performance of underlying file system

2016-03-03 Thread Jingcheng Du (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15338?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15179359#comment-15179359
 ] 

Jingcheng Du commented on HBASE-15338:
--

Got it. Then I am okay with the patch after this line is removed.

> Add a option to disable the data block cache for testing the performance of 
> underlying file system
> --
>
> Key: HBASE-15338
> URL: https://issues.apache.org/jira/browse/HBASE-15338
> Project: HBase
>  Issue Type: Improvement
>  Components: integration tests
>Reporter: Liu Shaohui
>Assignee: Liu Shaohui
>Priority: Minor
> Fix For: 2.0.0
>
> Attachments: HBASE-15338-trunk-v1.diff, HBASE-15338-trunk-v2.diff, 
> HBASE-15338-trunk-v3.diff, HBASE-15338-trunk-v4.diff, 
> HBASE-15338-trunk-v5.diff, HBASE-15338-trunk-v6.diff, 
> HBASE-15338-trunk-v7.diff
>
>
> When testing and comparing the performance of different file systems (HDFS, 
> Azure blob storage, AWS S3 and so on) for HBase, it's better to avoid the 
> effect of the HBase BlockCache and get the actual random read latency when a 
> data block is read from the underlying file system. (Usually, the index 
> blocks and meta blocks should be cached in memory during the testing.)
> So we add an option in CacheConfig to disable the data block cache.
> Suggestions are welcome~ Thanks



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-15338) Add a option to disable the data block cache for testing the performance of underlying file system

2016-03-03 Thread Anoop Sam John (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15338?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15179354#comment-15179354
 ] 

Anoop Sam John commented on HBASE-15338:


Actually we don't write any blocks as META blocks as such. The block category 
META is used for the FFT and FileInfo also, but those are read and set as 
state in HFile's reader instance, so no question of caching comes up. That 
said, we will never ask the BC whether to cache a META category block, so it's 
fine.. +1 for not changing that line. In the latest patch you have that. Will 
you revert it?
BTW, is this committed? I am not able to see it in the commit log!

> Add a option to disable the data block cache for testing the performance of 
> underlying file system
> --
>
> Key: HBASE-15338
> URL: https://issues.apache.org/jira/browse/HBASE-15338
> Project: HBase
>  Issue Type: Improvement
>  Components: integration tests
>Reporter: Liu Shaohui
>Assignee: Liu Shaohui
>Priority: Minor
> Fix For: 2.0.0
>
> Attachments: HBASE-15338-trunk-v1.diff, HBASE-15338-trunk-v2.diff, 
> HBASE-15338-trunk-v3.diff, HBASE-15338-trunk-v4.diff, 
> HBASE-15338-trunk-v5.diff, HBASE-15338-trunk-v6.diff, 
> HBASE-15338-trunk-v7.diff
>
>
> When testing and comparing the performance of different file systems (HDFS, 
> Azure blob storage, AWS S3 and so on) for HBase, it's better to avoid the 
> effect of the HBase BlockCache and get the actual random read latency when a 
> data block is read from the underlying file system. (Usually, the index 
> blocks and meta blocks should be cached in memory during the testing.)
> So we add an option in CacheConfig to disable the data block cache.
> Suggestions are welcome~ Thanks



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-15338) Add a option to disable the data block cache for testing the performance of underlying file system

2016-03-03 Thread Liu Shaohui (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-15338?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Liu Shaohui updated HBASE-15338:

Attachment: HBASE-15338-trunk-v7.diff

Updated the patch per [~jingcheng...@intel.com]'s suggestion~

> Add a option to disable the data block cache for testing the performance of 
> underlying file system
> --
>
> Key: HBASE-15338
> URL: https://issues.apache.org/jira/browse/HBASE-15338
> Project: HBase
>  Issue Type: Improvement
>  Components: integration tests
>Reporter: Liu Shaohui
>Assignee: Liu Shaohui
>Priority: Minor
> Fix For: 2.0.0
>
> Attachments: HBASE-15338-trunk-v1.diff, HBASE-15338-trunk-v2.diff, 
> HBASE-15338-trunk-v3.diff, HBASE-15338-trunk-v4.diff, 
> HBASE-15338-trunk-v5.diff, HBASE-15338-trunk-v6.diff, 
> HBASE-15338-trunk-v7.diff
>
>
> When testing and comparing the performance of different file systems (HDFS, 
> Azure blob storage, AWS S3 and so on) for HBase, it's better to avoid the 
> effect of the HBase BlockCache and get the actual random read latency when a 
> data block is read from the underlying file system. (Usually, the index blocks 
> and meta blocks should be cached in memory during the testing.)
> So we add an option in CacheConfig to disable the data block cache.
> Suggestions are welcomed~ Thanks



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-15390) ClientExceptionUtils doesn't handle CallQueueTooBigException properly

2016-03-03 Thread Elliott Clark (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-15390?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Elliott Clark updated HBASE-15390:
--
Priority: Blocker  (was: Major)

> ClientExceptionUtils doesn't handle CallQueueTooBigException properly
> -
>
> Key: HBASE-15390
> URL: https://issues.apache.org/jira/browse/HBASE-15390
> Project: HBase
>  Issue Type: Bug
>  Components: Client
>Affects Versions: 1.2.0
>Reporter: Mikhail Antonov
>Assignee: Mikhail Antonov
>Priority: Blocker
> Fix For: 1.3.0, 1.2.1
>
>
> In #isMetaClearingException() we're checking for CallQueueTooBigException, 
> but under the debugger in tests I saw that what we're really getting here is 
> RemoteWithExtrasException, which doesn't allow us to easily unwrap CQTBE from 
> it, since it's stored in the classname field, and findException() or 
> unwrapRemoteException() fail to unwrap it correctly.
> I think we'd have the same behavior with other exceptions wrapped this way.
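A hedged sketch of the check this implies (illustrative only, not necessarily the 
committed fix): since the wrapper keeps only the remote exception's class name, 
compare names instead of relying on instanceof. RemoteWithExtrasException inherits 
getClassName() from Hadoop's RemoteException.
{code}
import org.apache.hadoop.hbase.CallQueueTooBigException;
import org.apache.hadoop.hbase.ipc.RemoteWithExtrasException;

// Sketch: detect a CQTBE that arrives wrapped in a RemoteWithExtrasException,
// where only the class name of the remote exception survives the wire.
static boolean isCallQueueTooBig(Throwable t) {
  if (t instanceof CallQueueTooBigException) {
    return true;
  }
  if (t instanceof RemoteWithExtrasException) {
    // instanceof can never match the wrapped type; compare class names instead.
    return CallQueueTooBigException.class.getName()
        .equals(((RemoteWithExtrasException) t).getClassName());
  }
  return false;
}
{code}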



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-15338) Add a option to disable the data block cache for testing the performance of underlying file system

2016-03-03 Thread Liu Shaohui (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15338?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15179343#comment-15179343
 ] 

Liu Shaohui commented on HBASE-15338:
-

[~jingcheng...@intel.com]
The meta blocks don't need to be cached if cache on read is disabled. This is 
expected by the current implementation (without this patch). I will remove the 
line "category == BlockCategory.META" and keep this behavior.

> Add a option to disable the data block cache for testing the performance of 
> underlying file system
> --
>
> Key: HBASE-15338
> URL: https://issues.apache.org/jira/browse/HBASE-15338
> Project: HBase
>  Issue Type: Improvement
>  Components: integration tests
>Reporter: Liu Shaohui
>Assignee: Liu Shaohui
>Priority: Minor
> Fix For: 2.0.0
>
> Attachments: HBASE-15338-trunk-v1.diff, HBASE-15338-trunk-v2.diff, 
> HBASE-15338-trunk-v3.diff, HBASE-15338-trunk-v4.diff, 
> HBASE-15338-trunk-v5.diff, HBASE-15338-trunk-v6.diff
>
>
> When testing and comparing the performance of different file systems (HDFS, 
> Azure blob storage, AWS S3 and so on) for HBase, it's better to avoid the 
> effect of the HBase BlockCache and get the actual random read latency when a 
> data block is read from the underlying file system. (Usually, the index blocks 
> and meta blocks should be cached in memory during the testing.)
> So we add an option in CacheConfig to disable the data block cache.
> Suggestions are welcomed~ Thanks



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-15390) ClientExceptionUtils doesn't handle CallQueueTooBigException properly

2016-03-03 Thread Mikhail Antonov (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-15390?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mikhail Antonov updated HBASE-15390:

Fix Version/s: 1.3.0

> ClientExceptionUtils doesn't handle CallQueueTooBigException properly
> -
>
> Key: HBASE-15390
> URL: https://issues.apache.org/jira/browse/HBASE-15390
> Project: HBase
>  Issue Type: Bug
>  Components: Client
>Affects Versions: 1.2.0
>Reporter: Mikhail Antonov
>Assignee: Mikhail Antonov
> Fix For: 1.3.0, 1.2.1
>
>
> In #isMetaClearingException() we're checking for CallQueueTooBigException, 
> but under the debugger in tests I saw that what we're really getting here is 
> RemoteWithExtrasException, which doesn't allow us to easily unwrap CQTBE from 
> it, since it's stored in the classname field, and findException() or 
> unwrapRemoteException() fail to unwrap it correctly.
> I think we'd have the same behavior with other exceptions wrapped this way.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-14498) Master stuck in infinite loop when all Zookeeper servers are unreachable

2016-03-03 Thread Ted Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14498?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15179331#comment-15179331
 ] 

Ted Yu commented on HBASE-14498:


lgtm

Test failure is tracked elsewhere.

Please fix checkstyle warnings.

> Master stuck in infinite loop when all Zookeeper servers are unreachable
> 
>
> Key: HBASE-14498
> URL: https://issues.apache.org/jira/browse/HBASE-14498
> Project: HBase
>  Issue Type: Bug
>  Components: master
>Reporter: Y. SREENIVASULU REDDY
>Assignee: Pankaj Kumar
>Priority: Critical
> Fix For: 2.0.0
>
> Attachments: HBASE-14498-V2.patch, HBASE-14498-V3.patch, 
> HBASE-14498-V4.patch, HBASE-14498-V5.patch, HBASE-14498.patch
>
>
> We met a weird scenario in our production environment.
> In an HA cluster,
> > The active master (HM1) is not able to connect to any ZooKeeper server (due to 
> > a network breakdown between the master machine and the ZooKeeper servers).
> {code}
> 2015-09-26 15:24:47,508 INFO 
> [HM1-Host:16000.activeMasterManager-SendThread(ZK-Host:2181)] 
> zookeeper.ClientCnxn: Client session timed out, have not heard from server in 
> 33463ms for sessionid 0x104576b8dda0002, closing socket connection and 
> attempting reconnect
> 2015-09-26 15:24:47,877 INFO 
> [HM1-Host:16000.activeMasterManager-SendThread(ZK-Host1:2181)] 
> client.FourLetterWordMain: connecting to ZK-Host1 2181
> 2015-09-26 15:24:48,236 INFO [main-SendThread(ZK-Host1:2181)] 
> client.FourLetterWordMain: connecting to ZK-Host1 2181
> 2015-09-26 15:24:49,879 WARN 
> [HM1-Host:16000.activeMasterManager-SendThread(ZK-Host1:2181)] 
> zookeeper.ClientCnxn: Can not get the principle name from server ZK-Host1
> 2015-09-26 15:24:49,879 INFO 
> [HM1-Host:16000.activeMasterManager-SendThread(ZK-Host1:2181)] 
> zookeeper.ClientCnxn: Opening socket connection to server 
> ZK-Host1/ZK-IP1:2181. Will not attempt to authenticate using SASL (unknown 
> error)
> 2015-09-26 15:24:50,238 WARN [main-SendThread(ZK-Host1:2181)] 
> zookeeper.ClientCnxn: Can not get the principle name from server ZK-Host1
> 2015-09-26 15:24:50,238 INFO [main-SendThread(ZK-Host1:2181)] 
> zookeeper.ClientCnxn: Opening socket connection to server 
> ZK-Host1/ZK-Host1:2181. Will not attempt to authenticate using SASL (unknown 
> error)
> 2015-09-26 15:25:17,470 INFO [main-SendThread(ZK-Host1:2181)] 
> zookeeper.ClientCnxn: Client session timed out, have not heard from server in 
> 30023ms for sessionid 0x2045762cc710006, closing socket connection and 
> attempting reconnect
> 2015-09-26 15:25:17,571 WARN [master/HM1-Host/HM1-IP:16000] 
> zookeeper.RecoverableZooKeeper: Possibly transient ZooKeeper, 
> quorum=ZK-Host:2181,ZK-Host1:2181,ZK-Host2:2181, 
> exception=org.apache.zookeeper.KeeperException$ConnectionLossException: 
> KeeperErrorCode = ConnectionLoss for /hbase/master
> 2015-09-26 15:25:17,872 INFO [main-SendThread(ZK-Host:2181)] 
> client.FourLetterWordMain: connecting to ZK-Host 2181
> 2015-09-26 15:25:19,874 WARN [main-SendThread(ZK-Host:2181)] 
> zookeeper.ClientCnxn: Can not get the principle name from server ZK-Host
> 2015-09-26 15:25:19,874 INFO [main-SendThread(ZK-Host:2181)] 
> zookeeper.ClientCnxn: Opening socket connection to server ZK-Host/ZK-IP:2181. 
> Will not attempt to authenticate using SASL (unknown error)
> {code}
> > Since HM1 was not able to connect to any ZK, the session timeout didn't 
> > happen at the ZooKeeper server side and HM1 didn't abort.
> > On ZooKeeper session timeout, the standby master (HM2) registered itself as 
> > the active master. 
> > HM2 keeps waiting for region servers to report to it as part of active 
> > master initialization.
> {noformat} 
> 2015-09-26 15:24:44,928 | INFO | HM2-Host:21300.activeMasterManager | Waiting 
> for region servers count to settle; currently checked in 0, slept for 0 ms, 
> expecting minimum of 1, maximum of 2147483647, timeout of 4500 ms, interval 
> of 1500 ms. | 
> org.apache.hadoop.hbase.master.ServerManager.waitForRegionServers(ServerManager.java:1011)
> ---
> ---
> 2015-09-26 15:32:50,841 | INFO | HM2-Host:21300.activeMasterManager | Waiting 
> for region servers count to settle; currently checked in 0, slept for 483913 
> ms, expecting minimum of 1, maximum of 2147483647, timeout of 4500 ms, 
> interval of 1500 ms. | 
> org.apache.hadoop.hbase.master.ServerManager.waitForRegionServers(ServerManager.java:1011)
> {noformat}
> > At the other end, region servers are reporting to HM1 at a 3-second interval. 
> > Region servers retrieve the master location from ZooKeeper only when they 
> > can't connect to the master (ServiceException).
> As per the current design, region servers will not report to HM2 unless HM1 
> aborts, so HM2 will exit (InitializationMonitor) and again wait for region 
> servers in a loop.
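The retry loop described above is where the hang occurs. A minimal sketch of the 
alternative behavior, with illustrative names (this is not HBase's actual retry 
API): bound the retries so the stuck master aborts and the standby can take over.
{code}
import java.io.IOException;

// Illustrative sketch: fail after a fixed retry budget instead of looping forever.
static void runWithBoundedRetries(Runnable zkCall, int maxRetries, long sleepMs)
    throws IOException {
  for (int attempt = 0; ; attempt++) {
    try {
      zkCall.run();
      return;
    } catch (RuntimeException connectionLoss) { // stand-in for ConnectionLossException
      if (attempt >= maxRetries) {
        // Let the active master abort so a standby can register itself.
        throw new IOException("ZooKeeper unreachable after " + maxRetries
            + " retries", connectionLoss);
      }
      try {
        Thread.sleep(sleepMs);
      } catch (InterruptedException ie) {
        Thread.currentThread().interrupt();
        throw new IOException("interrupted during retry backoff", ie);
      }
    }
  }
}
{code}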



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

[jira] [Updated] (HBASE-15390) ClientExceptionUtils doesn't handle CallQueueTooBigException properly

2016-03-03 Thread Mikhail Antonov (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-15390?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mikhail Antonov updated HBASE-15390:

Description: 
In #isMetaClearingException() we're checking for CallQueueTooBigException, but 
under the debugger in tests I saw that what we're really getting here is 
RemoteWithExtrasException, which doesn't allow us to easily unwrap CQTBE from it, 
since it's stored in the classname field, and findException() or 
unwrapRemoteException() fail to unwrap it correctly.

I think we'd have the same behavior with other exceptions wrapped this way.

  was:
In #isMetaClearingException() we're checking for CallQueueTooBigException, but 
under debugger in tests I saw that what we're really getting in here is 
RemoteWithExtrasException, which doesn't allow to easily unwrap CQTBE from it, 
since it's stored in the classname field, and findException() or 
unwrapRemoteException() fail to unwrap it correctly.

I think we'd have the same behavior with other


> ClientExceptionUtils doesn't handle CallQueueTooBigException properly
> -
>
> Key: HBASE-15390
> URL: https://issues.apache.org/jira/browse/HBASE-15390
> Project: HBase
>  Issue Type: Bug
>  Components: Client
>Affects Versions: 1.2.0
>Reporter: Mikhail Antonov
>Assignee: Mikhail Antonov
> Fix For: 1.2.1
>
>
> In #isMetaClearingException() we're checking for CallQueueTooBigException, 
> but under the debugger in tests I saw that what we're really getting here is 
> RemoteWithExtrasException, which doesn't allow us to easily unwrap CQTBE from 
> it, since it's stored in the classname field, and findException() or 
> unwrapRemoteException() fail to unwrap it correctly.
> I think we'd have the same behavior with other exceptions wrapped this way.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-15389) Write out multiple files when compaction

2016-03-03 Thread Duo Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15389?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15179326#comment-15179326
 ] 

Duo Zhang commented on HBASE-15389:
---

{quote}
We can add sorting in DateTierCompactionPolicy that uses timestamp as the 
secondary sorting criterion.
{quote}
Fine. Will do this in the next patch.

> Write out multiple files when compaction
> 
>
> Key: HBASE-15389
> URL: https://issues.apache.org/jira/browse/HBASE-15389
> Project: HBase
>  Issue Type: Sub-task
>  Components: Compaction
>Reporter: Duo Zhang
> Attachments: HBASE-15389-uc.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-15339) Improve DateTieredCompactionPolicy

2016-03-03 Thread Duo Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15339?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15179324#comment-15179324
 ] 

Duo Zhang commented on HBASE-15339:
---

{quote}
What I imagine we want is to be able to write to block cache if data is not old 
kind of thing.
{quote}
So if we support writing multiple files to different windows for 
DateTieredCompaction, we could just write the data in the newest window into the 
BlockCache?

> Improve DateTieredCompactionPolicy
> --
>
> Key: HBASE-15339
> URL: https://issues.apache.org/jira/browse/HBASE-15339
> Project: HBase
>  Issue Type: Improvement
>  Components: Compaction
>Reporter: Duo Zhang
>
> For our MiCloud service, old data is rarely touched but we still need to 
> keep it, so we want to put the data on inexpensive devices and reduce 
> redundancy using erasure coding (EC) to cut down the cost.
> With date-based tiered compaction introduced in HBASE-15181, new data and old 
> data can be placed in different tiers. But the tier boundary moves as time 
> elapses, so it is still possible that we do a compaction on an old tier, which 
> breaks our block-moving and EC work.
> So here we want to introduce an "archive tier" to better fit our scenario. 
> Add a configuration called "archive unit", for example, year. That means, if 
> we find that the tier boundary is already in the previous year, we reset 
> the boundary to the start and end of that year, and if we want to do a 
> compaction in this tier, we just compact all files into one file. The file will 
> never be changed unless we force a major compaction, so it is safe to apply EC 
> and other cost-reducing approaches to the file. And we make more tiers before 
> this tier year by year. 
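To make the "archive unit" idea concrete, a hedged sketch assuming an archive unit 
of one year (the method name and return shape are illustrative, not from any patch):
{code}
import java.time.Instant;
import java.time.LocalDate;
import java.time.ZoneId;

// Sketch: once the tier boundary falls in a previous year, snap the archive
// tier to that year's calendar start and end.
static long[] archiveYearWindow(long boundaryMillis, ZoneId zone) {
  LocalDate day = Instant.ofEpochMilli(boundaryMillis).atZone(zone).toLocalDate();
  LocalDate yearStart = day.withDayOfYear(1);
  LocalDate yearEnd = yearStart.plusYears(1);
  return new long[] {
      yearStart.atStartOfDay(zone).toInstant().toEpochMilli(),
      yearEnd.atStartOfDay(zone).toInstant().toEpochMilli()};
}
{code}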



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-15390) ClientExceptionUtils doesn't handle CallQueueTooBigException properly

2016-03-03 Thread Mikhail Antonov (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-15390?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mikhail Antonov updated HBASE-15390:

Description: 
In #isMetaClearingException() we're checking for CallQueueTooBigException, but 
under the debugger in tests I saw that what we're really getting here is 
RemoteWithExtrasException, which doesn't allow us to easily unwrap CQTBE from it, 
since it's stored in the classname field, and findException() or 
unwrapRemoteException() fail to unwrap it correctly.

I think we'd have the same behavior with other

  was:In #isMetaClearingException() we're checking for 
CallQueueTooBigException, but under debugger in tests I saw that what we're 
really getting in here here is RetriesWithExtrasException, which doesn't allow 
to easily unwrap CQTBE from it, since it's stored in the classname field, and 
findException() or unwrapRemoteException() fail to unwrap it correctly.


> ClientExceptionUtils doesn't handle CallQueueTooBigException properly
> -
>
> Key: HBASE-15390
> URL: https://issues.apache.org/jira/browse/HBASE-15390
> Project: HBase
>  Issue Type: Bug
>  Components: Client
>Affects Versions: 1.2.0
>Reporter: Mikhail Antonov
>Assignee: Mikhail Antonov
> Fix For: 1.2.1
>
>
> In #isMetaClearingException() we're checking for CallQueueTooBigException, 
> but under the debugger in tests I saw that what we're really getting here is 
> RemoteWithExtrasException, which doesn't allow us to easily unwrap CQTBE from 
> it, since it's stored in the classname field, and findException() or 
> unwrapRemoteException() fail to unwrap it correctly.
> I think we'd have the same behavior with other



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HBASE-15390) ClientExceptionUtils doesn't handle CallQueueTooBigException properly

2016-03-03 Thread Mikhail Antonov (JIRA)
Mikhail Antonov created HBASE-15390:
---

 Summary: ClientExceptionUtils doesn't handle 
CallQueueTooBigException properly
 Key: HBASE-15390
 URL: https://issues.apache.org/jira/browse/HBASE-15390
 Project: HBase
  Issue Type: Bug
  Components: Client
Affects Versions: 1.2.0
Reporter: Mikhail Antonov
Assignee: Mikhail Antonov
 Fix For: 1.2.1


In #isMetaClearingException() we're checking for CallQueueTooBigException, but 
under debugger in tests I saw that what we're really getting in here here is 
RetriesWithExtrasException, which doesn't allow to easily unwrap CQTBE from it, 
since it's stored in the classname field, and findException() or 
unwrapRemoteException() fail to unwrap it correctly.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-15386) PREFETCH_BLOCKS_ON_OPEN in HColumnDescriptor is ignored

2016-03-03 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15386?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15179298#comment-15179298
 ] 

stack commented on HBASE-15386:
---

[~apurtell] Will do.

Prefetch is useful.

> PREFETCH_BLOCKS_ON_OPEN in HColumnDescriptor is ignored
> ---
>
> Key: HBASE-15386
> URL: https://issues.apache.org/jira/browse/HBASE-15386
> Project: HBase
>  Issue Type: Sub-task
>  Components: BucketCache
>Reporter: stack
>Assignee: stack
>
> We use the global flag hbase.rs.prefetchblocksonopen only and ignore the HCD 
> setting.
> Purge it from HCD or hook it up again (it probably worked once).
> Thanks to Daniel Pol for finding this one. Let me fix.
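One way "hooking it up again" could look, as a hedged sketch (whether this matches 
the eventual fix is an assumption; HColumnDescriptor#isPrefetchBlocksOnOpen and 
the flag name are taken from the existing API and the comment above):
{code}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HColumnDescriptor;

// Sketch: honor the per-family setting in addition to the global flag.
static boolean shouldPrefetchOnOpen(Configuration conf, HColumnDescriptor family) {
  return conf.getBoolean("hbase.rs.prefetchblocksonopen", false)
      || family.isPrefetchBlocksOnOpen();
}
{code}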



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-6721) RegionServer Group based Assignment

2016-03-03 Thread Enis Soztutar (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-6721?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Enis Soztutar updated HBASE-6721:
-
Attachment: hbase-6721-v28.patch

rebase. 

> RegionServer Group based Assignment
> ---
>
> Key: HBASE-6721
> URL: https://issues.apache.org/jira/browse/HBASE-6721
> Project: HBase
>  Issue Type: New Feature
>Reporter: Francis Liu
>Assignee: Francis Liu
>  Labels: hbase-6721
> Attachments: 6721-master-webUI.patch, HBASE-6721 
> GroupBasedLoadBalancer Sequence Diagram.xml, HBASE-6721-DesigDoc.pdf, 
> HBASE-6721-DesigDoc.pdf, HBASE-6721-DesigDoc.pdf, HBASE-6721-DesigDoc.pdf, 
> HBASE-6721_0.98_2.patch, HBASE-6721_10.patch, HBASE-6721_11.patch, 
> HBASE-6721_12.patch, HBASE-6721_13.patch, HBASE-6721_14.patch, 
> HBASE-6721_15.patch, HBASE-6721_8.patch, HBASE-6721_9.patch, 
> HBASE-6721_9.patch, HBASE-6721_94.patch, HBASE-6721_94.patch, 
> HBASE-6721_94_2.patch, HBASE-6721_94_3.patch, HBASE-6721_94_3.patch, 
> HBASE-6721_94_4.patch, HBASE-6721_94_5.patch, HBASE-6721_94_6.patch, 
> HBASE-6721_94_7.patch, HBASE-6721_98_1.patch, HBASE-6721_98_2.patch, 
> HBASE-6721_hbase-6721_addendum.patch, HBASE-6721_trunk.patch, 
> HBASE-6721_trunk.patch, HBASE-6721_trunk.patch, HBASE-6721_trunk1.patch, 
> HBASE-6721_trunk2.patch, balanceCluster Sequence Diagram.svg, 
> hbase-6721-v15-branch-1.1.patch, hbase-6721-v16.patch, hbase-6721-v17.patch, 
> hbase-6721-v18.patch, hbase-6721-v19.patch, hbase-6721-v20.patch, 
> hbase-6721-v21.patch, hbase-6721-v22.patch, hbase-6721-v23.patch, 
> hbase-6721-v25.patch, hbase-6721-v26.patch, hbase-6721-v26_draft1.patch, 
> hbase-6721-v27.patch, hbase-6721-v27.patch, hbase-6721-v27.patch.txt, 
> hbase-6721-v28.patch, immediateAssignments Sequence Diagram.svg, 
> randomAssignment Sequence Diagram.svg, retainAssignment Sequence Diagram.svg, 
> roundRobinAssignment Sequence Diagram.svg
>
>
> In multi-tenant deployments of HBase, it is likely that a RegionServer will 
> be serving out regions from a number of different tables owned by various 
> client applications. Being able to group a subset of running RegionServers 
> and assign specific tables to it provides a client application with a level of 
> isolation and resource allocation.
> The proposal essentially is to have an AssignmentManager which is aware of 
> RegionServer groups and assigns tables to region servers based on groupings. 
> Load balancing will occur on a per group basis as well. 
> This is essentially a simplification of the approach taken in HBASE-4120. See 
> attached document.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-15373) DEPRECATED_NAME_OF_NO_LIMIT_THROUGHPUT_CONTROLLER_CLASS value is wrong in CompactionThroughputControllerFactory

2016-03-03 Thread stack (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-15373?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

stack updated HBASE-15373:
--
   Resolution: Fixed
 Hadoop Flags: Reviewed
Fix Version/s: (was: 0.98.11)
   (was: 1.1.0)
   1.3.0
   Status: Resolved  (was: Patch Available)

Thanks for the reviews lads.

Pushed to 1.3+.

Doesn't seem to make sense on 1.2 and before.

> DEPRECATED_NAME_OF_NO_LIMIT_THROUGHPUT_CONTROLLER_CLASS value is wrong in 
> CompactionThroughputControllerFactory
> ---
>
> Key: HBASE-15373
> URL: https://issues.apache.org/jira/browse/HBASE-15373
> Project: HBase
>  Issue Type: Sub-task
>  Components: Compaction
>Affects Versions: 2.0.0, 0.98.11, 1.2.0, 1.3.0
>Reporter: stack
>Assignee: stack
>Priority: Minor
> Fix For: 2.0.0, 1.3.0
>
> Attachments: 15373.patch, 15373v2.patch, 15373v2.patch
>
>
> I couldn't turn off compaction throughput limiting by following the release 
> note instructions. I fixed the release notes in the parent issue, but a code 
> fix is also needed in the factory class.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-14030) HBase Backup/Restore Phase 1

2016-03-03 Thread Vladimir Rodionov (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-14030?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vladimir Rodionov updated HBASE-14030:
--
Attachment: HBASE-14030-v37.patch

Patch v37. Fixed failing unit test.

> HBase Backup/Restore Phase 1
> 
>
> Key: HBASE-14030
> URL: https://issues.apache.org/jira/browse/HBASE-14030
> Project: HBase
>  Issue Type: Umbrella
>Affects Versions: 2.0.0
>Reporter: Vladimir Rodionov
>Assignee: Vladimir Rodionov
> Fix For: 2.0.0
>
> Attachments: HBASE-14030-v0.patch, HBASE-14030-v1.patch, 
> HBASE-14030-v10.patch, HBASE-14030-v11.patch, HBASE-14030-v12.patch, 
> HBASE-14030-v13.patch, HBASE-14030-v14.patch, HBASE-14030-v15.patch, 
> HBASE-14030-v17.patch, HBASE-14030-v18.patch, HBASE-14030-v2.patch, 
> HBASE-14030-v20.patch, HBASE-14030-v21.patch, HBASE-14030-v22.patch, 
> HBASE-14030-v23.patch, HBASE-14030-v24.patch, HBASE-14030-v25.patch, 
> HBASE-14030-v26.patch, HBASE-14030-v27.patch, HBASE-14030-v28.patch, 
> HBASE-14030-v3.patch, HBASE-14030-v30.patch, HBASE-14030-v35.patch, 
> HBASE-14030-v37.patch, HBASE-14030-v4.patch, HBASE-14030-v5.patch, 
> HBASE-14030-v6.patch, HBASE-14030-v7.patch, HBASE-14030-v8.patch, 
> hbase-14030_v36.patch
>
>
> This is the umbrella ticket for Backup/Restore Phase 1. See HBASE-7912 design 
> doc for the phase description.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-14030) HBase Backup/Restore Phase 1

2016-03-03 Thread Vladimir Rodionov (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-14030?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vladimir Rodionov updated HBASE-14030:
--
Status: Patch Available  (was: Open)

> HBase Backup/Restore Phase 1
> 
>
> Key: HBASE-14030
> URL: https://issues.apache.org/jira/browse/HBASE-14030
> Project: HBase
>  Issue Type: Umbrella
>Affects Versions: 2.0.0
>Reporter: Vladimir Rodionov
>Assignee: Vladimir Rodionov
> Fix For: 2.0.0
>
> Attachments: HBASE-14030-v0.patch, HBASE-14030-v1.patch, 
> HBASE-14030-v10.patch, HBASE-14030-v11.patch, HBASE-14030-v12.patch, 
> HBASE-14030-v13.patch, HBASE-14030-v14.patch, HBASE-14030-v15.patch, 
> HBASE-14030-v17.patch, HBASE-14030-v18.patch, HBASE-14030-v2.patch, 
> HBASE-14030-v20.patch, HBASE-14030-v21.patch, HBASE-14030-v22.patch, 
> HBASE-14030-v23.patch, HBASE-14030-v24.patch, HBASE-14030-v25.patch, 
> HBASE-14030-v26.patch, HBASE-14030-v27.patch, HBASE-14030-v28.patch, 
> HBASE-14030-v3.patch, HBASE-14030-v30.patch, HBASE-14030-v35.patch, 
> HBASE-14030-v37.patch, HBASE-14030-v4.patch, HBASE-14030-v5.patch, 
> HBASE-14030-v6.patch, HBASE-14030-v7.patch, HBASE-14030-v8.patch, 
> hbase-14030_v36.patch
>
>
> This is the umbrella ticket for Backup/Restore Phase 1. See HBASE-7912 design 
> doc for the phase description.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-15291) FileSystem not closed in secure bulkLoad

2016-03-03 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15291?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15179265#comment-15179265
 ] 

Hudson commented on HBASE-15291:


FAILURE: Integrated in HBase-1.2-IT #454 (See 
[https://builds.apache.org/job/HBase-1.2-IT/454/])
HBASE-15291 FileSystem not closed in secure bulkLoad (Yong Zhang) (tedyu: rev 
105fd08651eeb664f615253790fbe9595d00d387)
* 
hbase-server/src/main/java/org/apache/hadoop/hbase/security/access/SecureBulkLoadEndpoint.java


> FileSystem not closed in secure bulkLoad
> 
>
> Key: HBASE-15291
> URL: https://issues.apache.org/jira/browse/HBASE-15291
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 1.0.2, 0.98.16.1
>Reporter: Yong Zhang
>Assignee: Yong Zhang
> Fix For: 2.0.0, 1.3.0, 1.2.1, 0.98.18, 1.4.0
>
> Attachments: HBASE-15291.001.patch, HBASE-15291.002.patch, 
> HBASE-15291.003.patch, HBASE-15291.004.patch, HBASE-15291.addendum, patch
>
>
> The FileSystem is not closed in secure bulkLoad after the bulk load finishes; 
> this will cause memory usage to grow more and more if there are many bulk loads.
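The shape of the leak fix, as a hedged sketch only (the committed patch may differ 
in where the close happens):
{code}
import java.io.IOException;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;

// Sketch: release the FileSystem instance obtained for a bulk load once the
// load finishes, so per-load instances do not accumulate.
static void secureBulkLoad(Configuration conf) throws IOException {
  FileSystem fs = FileSystem.newInstance(conf);
  try {
    // ... perform the secure bulk load using fs ...
  } finally {
    fs.close();
  }
}
{code}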



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-14030) HBase Backup/Restore Phase 1

2016-03-03 Thread Vladimir Rodionov (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-14030?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vladimir Rodionov updated HBASE-14030:
--
Status: Open  (was: Patch Available)

> HBase Backup/Restore Phase 1
> 
>
> Key: HBASE-14030
> URL: https://issues.apache.org/jira/browse/HBASE-14030
> Project: HBase
>  Issue Type: Umbrella
>Affects Versions: 2.0.0
>Reporter: Vladimir Rodionov
>Assignee: Vladimir Rodionov
> Fix For: 2.0.0
>
> Attachments: HBASE-14030-v0.patch, HBASE-14030-v1.patch, 
> HBASE-14030-v10.patch, HBASE-14030-v11.patch, HBASE-14030-v12.patch, 
> HBASE-14030-v13.patch, HBASE-14030-v14.patch, HBASE-14030-v15.patch, 
> HBASE-14030-v17.patch, HBASE-14030-v18.patch, HBASE-14030-v2.patch, 
> HBASE-14030-v20.patch, HBASE-14030-v21.patch, HBASE-14030-v22.patch, 
> HBASE-14030-v23.patch, HBASE-14030-v24.patch, HBASE-14030-v25.patch, 
> HBASE-14030-v26.patch, HBASE-14030-v27.patch, HBASE-14030-v28.patch, 
> HBASE-14030-v3.patch, HBASE-14030-v30.patch, HBASE-14030-v35.patch, 
> HBASE-14030-v4.patch, HBASE-14030-v5.patch, HBASE-14030-v6.patch, 
> HBASE-14030-v7.patch, HBASE-14030-v8.patch, hbase-14030_v36.patch
>
>
> This is the umbrella ticket for Backup/Restore Phase 1. See HBASE-7912 design 
> doc for the phase description.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-15291) FileSystem not closed in secure bulkLoad

2016-03-03 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15291?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15179262#comment-15179262
 ] 

Hudson commented on HBASE-15291:


SUCCESS: Integrated in HBase-1.3-IT #533 (See 
[https://builds.apache.org/job/HBase-1.3-IT/533/])
HBASE-15291 FileSystem not closed in secure bulkLoad (Yong Zhang) (tedyu: rev 
b3c62680bcf93ebab7df9914ff8ca2d28cda3156)
* 
hbase-server/src/main/java/org/apache/hadoop/hbase/security/access/SecureBulkLoadEndpoint.java


> FileSystem not closed in secure bulkLoad
> 
>
> Key: HBASE-15291
> URL: https://issues.apache.org/jira/browse/HBASE-15291
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 1.0.2, 0.98.16.1
>Reporter: Yong Zhang
>Assignee: Yong Zhang
> Fix For: 2.0.0, 1.3.0, 1.2.1, 0.98.18, 1.4.0
>
> Attachments: HBASE-15291.001.patch, HBASE-15291.002.patch, 
> HBASE-15291.003.patch, HBASE-15291.004.patch, HBASE-15291.addendum, patch
>
>
> The FileSystem is not closed in secure bulkLoad after the bulk load finishes; 
> this will cause memory usage to grow more and more if there are many bulk loads.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-14123) HBase Backup/Restore Phase 2

2016-03-03 Thread Vladimir Rodionov (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-14123?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vladimir Rodionov updated HBASE-14123:
--
Attachment: HBASE-14123-v9.patch

Rebased Phase 2 patch on the latest HBASE-14030 (v36).

> HBase Backup/Restore Phase 2
> 
>
> Key: HBASE-14123
> URL: https://issues.apache.org/jira/browse/HBASE-14123
> Project: HBase
>  Issue Type: Umbrella
>Reporter: Vladimir Rodionov
>Assignee: Vladimir Rodionov
> Attachments: HBASE-14123-v1.patch, HBASE-14123-v2.patch, 
> HBASE-14123-v3.patch, HBASE-14123-v4.patch, HBASE-14123-v5.patch, 
> HBASE-14123-v6.patch, HBASE-14123-v7.patch, HBASE-14123-v9.patch
>
>
> Phase 2 umbrella JIRA. See HBASE-7912 for design document and description. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-15338) Add a option to disable the data block cache for testing the performance of underlying file system

2016-03-03 Thread Liu Shaohui (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-15338?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Liu Shaohui updated HBASE-15338:

  Resolution: Fixed
Hadoop Flags: Reviewed
Release Note: Adds a new config, hbase.block.data.cacheonread, which is a 
global switch for caching data blocks on read. The default value of this switch 
is true: data blocks will be cached on read if the block cache is enabled 
for the family and the cacheBlocks flag is set to true for get and scan 
operations. If this global switch is set to false, data blocks won't be cached 
even if the block cache is enabled for the family and the cacheBlocks flag of 
Gets or Scans is set to true. Bloom blocks and index blocks are always 
cached if the block cache of the regionserver is enabled. One usage of this 
switch is performance testing of the extreme case where every data block read 
misses the cache and all data blocks are read from the underlying file system.
  Status: Resolved  (was: Patch Available)
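A minimal usage sketch of the new switch from the release note (benchmark-side 
configuration; nothing beyond the property name above is assumed):
{code}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;

// Sketch: disable data-block caching for a file-system benchmark run.
Configuration conf = HBaseConfiguration.create();
conf.setBoolean("hbase.block.data.cacheonread", false);
{code}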

> Add a option to disable the data block cache for testing the performance of 
> underlying file system
> --
>
> Key: HBASE-15338
> URL: https://issues.apache.org/jira/browse/HBASE-15338
> Project: HBase
>  Issue Type: Improvement
>  Components: integration tests
>Reporter: Liu Shaohui
>Assignee: Liu Shaohui
>Priority: Minor
> Fix For: 2.0.0
>
> Attachments: HBASE-15338-trunk-v1.diff, HBASE-15338-trunk-v2.diff, 
> HBASE-15338-trunk-v3.diff, HBASE-15338-trunk-v4.diff, 
> HBASE-15338-trunk-v5.diff, HBASE-15338-trunk-v6.diff
>
>
> When testing and comparing the performance of different file systems (HDFS, 
> Azure blob storage, AWS S3 and so on) for HBase, it's better to avoid the 
> effect of the HBase BlockCache and get the actual random read latency when a 
> data block is read from the underlying file system. (Usually, the index blocks 
> and meta blocks should be cached in memory during the testing.)
> So we add an option in CacheConfig to disable the data block cache.
> Suggestions are welcomed~ Thanks



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-14496) Compaction improvements: Delayed compaction in RatioBasedCompactionPolicy

2016-03-03 Thread Vladimir Rodionov (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14496?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15179255#comment-15179255
 ] 

Vladimir Rodionov commented on HBASE-14496:
---

{quote}
What if memory pressure causes many small flushes? How does this interact with 
the max number of store files?
{quote}

Should be increased from the default value.

> Compaction improvements: Delayed compaction in RatioBasedCompactionPolicy
> -
>
> Key: HBASE-14496
> URL: https://issues.apache.org/jira/browse/HBASE-14496
> Project: HBase
>  Issue Type: New Feature
>Reporter: Vladimir Rodionov
>Assignee: Vladimir Rodionov
> Attachments: HBASE-14496.v1.patch, HBASE-14496.v2.patch
>
>
> The delayed compaction feature for RatioBasedCompactionPolicy allows specifying 
> a maximum compaction delay for newly created store files. Files will be 
> eligible for compaction only if their age exceeds this delay. This allows 
> preserving new data in the block cache. For most applications, the newer the 
> data is, the more frequently it is accessed. Frequent compactions of new 
> store files result in a high block cache churn rate and badly affect read 
> performance and read latencies. 
> The configuration will be global, per table, and per column family.
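The eligibility rule reduces to a simple age check; a hedged sketch with 
illustrative names (not the patch's actual code):
{code}
// Sketch: a newly created store file becomes eligible for compaction only
// after its age exceeds the configured delay.
static boolean eligibleForCompaction(long fileCreationTimeMs, long nowMs,
    long maxCompactionDelayMs) {
  return nowMs - fileCreationTimeMs > maxCompactionDelayMs;
}
{code}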



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-15338) Add a option to disable the data block cache for testing the performance of underlying file system

2016-03-03 Thread Jingcheng Du (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15338?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15179245#comment-15179245
 ] 

Jingcheng Du commented on HBASE-15338:
--

Sorry for the late response. I thought about adding "category == 
BlockCategory.META ||" to shouldCacheBlockOnRead(); if it is done like this, we 
will always get true (if the block cache is enabled) even when prefetchOnOpen is 
true. This is not expected by the current implementation (without this patch)?
See the code in the current implementation.
{code}
public boolean shouldCacheBlockOnRead(BlockCategory category) {
  return isBlockCacheEnabled()
      && (cacheDataOnRead ||
          category == BlockCategory.INDEX ||
          category == BlockCategory.BLOOM ||
          (prefetchOnOpen &&
           (category != BlockCategory.META &&
            category != BlockCategory.UNKNOWN)));
}
{code}
Do we only need to add a new method, shouldCacheMetadataOnRead, with a 
BlockCategory parameter, and let it only be invoked by 
HFileReaderImpl.getMetaBlock? The new method would look like:
{code}
public boolean shouldCacheMetadataOnRead(BlockCategory category) {
  return isBlockCacheEnabled() && (cacheDataOnRead ||
      category == BlockCategory.META);
}
{code}
How about this?

> Add a option to disable the data block cache for testing the performance of 
> underlying file system
> --
>
> Key: HBASE-15338
> URL: https://issues.apache.org/jira/browse/HBASE-15338
> Project: HBase
>  Issue Type: Improvement
>  Components: integration tests
>Reporter: Liu Shaohui
>Assignee: Liu Shaohui
>Priority: Minor
> Fix For: 2.0.0
>
> Attachments: HBASE-15338-trunk-v1.diff, HBASE-15338-trunk-v2.diff, 
> HBASE-15338-trunk-v3.diff, HBASE-15338-trunk-v4.diff, 
> HBASE-15338-trunk-v5.diff, HBASE-15338-trunk-v6.diff
>
>
> When testing and comparing the performance of different file systems (HDFS, 
> Azure blob storage, AWS S3 and so on) for HBase, it's better to avoid the 
> effect of the HBase BlockCache and get the actual random read latency when a 
> data block is read from the underlying file system. (Usually, the index blocks 
> and meta blocks should be cached in memory during the testing.)
> So we add an option in CacheConfig to disable the data block cache.
> Suggestions are welcomed~ Thanks



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-14498) Master stuck in infinite loop when all Zookeeper servers are unreachable

2016-03-03 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14498?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15179229#comment-15179229
 ] 

Hadoop QA commented on HBASE-14498:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green} 0m 
0s {color} | {color:green} Patch does not have any anti-patterns. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 
18s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 2m 39s 
{color} | {color:green} master passed with JDK v1.8.0 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 55s 
{color} | {color:green} master passed with JDK v1.7.0_79 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 9m 
56s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
45s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 5m 
25s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 2m 2s 
{color} | {color:green} master passed with JDK v1.8.0 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 39s 
{color} | {color:green} master passed with JDK v1.7.0_79 {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 
59s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 2m 27s 
{color} | {color:green} the patch passed with JDK v1.8.0 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 2m 27s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 36s 
{color} | {color:green} the patch passed with JDK v1.7.0_79 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 1m 36s 
{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 3m 16s 
{color} | {color:red} Patch generated 4 new checkstyle issues in hbase-client 
(total was 41, now 45). {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
54s {color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red} 0m 0s 
{color} | {color:red} The patch has 6 line(s) that end in whitespace. Use git 
apply --whitespace=fix. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 
50m 39s {color} | {color:green} Patch does not cause any errors with Hadoop 
2.4.0 2.4.1 2.5.0 2.5.1 2.5.2 2.6.1 2.6.2 2.6.3 2.7.1. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 8m 
20s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 2m 18s 
{color} | {color:green} the patch passed with JDK v1.8.0 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 2m 26s 
{color} | {color:green} the patch passed with JDK v1.7.0_79 {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 2m 47s 
{color} | {color:green} hbase-client in the patch passed with JDK v1.8.0. 
{color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 39m 27s {color} 
| {color:red} hbase-server in the patch failed with JDK v1.8.0. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 2m 33s 
{color} | {color:green} hbase-client in the patch passed with JDK v1.7.0_79. 
{color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 37m 27s {color} 
| {color:red} hbase-server in the patch failed with JDK v1.7.0_79. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
36s {color} | {color:green} Patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 195m 28s {color} 
| {color:black} {color} |
\\
\\
|| Reason || Tests ||
| JDK v1.8.0 Failed junit tests | hadoop.hbase.ipc.TestSimpleRpcScheduler |
| JDK v1.7.0_79 Failed junit tests | hadoop.hbase.ipc.TestSimpleRpcScheduler |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Patch URL | 

[jira] [Commented] (HBASE-15373) DEPRECATED_NAME_OF_NO_LIMIT_THROUGHPUT_CONTROLLER_CLASS value is wrong in CompactionThroughputControllerFactory

2016-03-03 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15373?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15179226#comment-15179226
 ] 

Hadoop QA commented on HBASE-15373:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green} 0m 
0s {color} | {color:green} Patch does not have any anti-patterns. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s 
{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 3m 
43s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 46s 
{color} | {color:green} master passed with JDK v1.8.0 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 32s 
{color} | {color:green} master passed with JDK v1.7.0_79 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 4m 
34s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
15s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 
48s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 27s 
{color} | {color:green} master passed with JDK v1.8.0 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 32s 
{color} | {color:green} master passed with JDK v1.7.0_79 {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
41s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 36s 
{color} | {color:green} the patch passed with JDK v1.8.0 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 36s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 31s 
{color} | {color:green} the patch passed with JDK v1.7.0_79 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 31s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 4m 
24s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
14s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 
24m 59s {color} | {color:green} Patch does not cause any errors with Hadoop 
2.4.0 2.4.1 2.5.0 2.5.1 2.5.2 2.6.1 2.6.2 2.6.3 2.7.1. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 3s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 25s 
{color} | {color:green} the patch passed with JDK v1.8.0 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 32s 
{color} | {color:green} the patch passed with JDK v1.7.0_79 {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 93m 24s 
{color} | {color:green} hbase-server in the patch passed with JDK v1.8.0. 
{color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 88m 46s 
{color} | {color:green} hbase-server in the patch passed with JDK v1.7.0_79. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
19s {color} | {color:green} Patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 230m 1s {color} 
| {color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12791276/15373v2.patch |
| JIRA Issue | HBASE-15373 |
| Optional Tests |  asflicense  javac  javadoc  unit  findbugs  hadoopcheck  
hbaseanti  checkstyle  compile  |
| uname | Linux asf907.gq1.ygridcore.net 3.13.0-36-lowlatency #63-Ubuntu SMP 
PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/component/dev-support/hbase-personality.sh
 |
| git 

[jira] [Commented] (HBASE-14801) Enhance the Spark-HBase connector catalog with json format

2016-03-03 Thread Zhan Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14801?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15179222#comment-15179222
 ] 

Zhan Zhang commented on HBASE-14801:


[~jmhsieh] Any other concerns?

> Enhance the Spark-HBase connector catalog with json format
> --
>
> Key: HBASE-14801
> URL: https://issues.apache.org/jira/browse/HBASE-14801
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Zhan Zhang
>Assignee: Zhan Zhang
> Attachments: HBASE-14801-1.patch, HBASE-14801-2.patch, 
> HBASE-14801-3.patch, HBASE-14801-4.patch, HBASE-14801-5.patch, 
> HBASE-14801-6.patch, HBASE-14801-7.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-15338) Add a option to disable the data block cache for testing the performance of underlying file system

2016-03-03 Thread Liu Shaohui (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15338?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15179220#comment-15179220
 ] 

Liu Shaohui commented on HBASE-15338:
-

[~anoop.hbase]
Currently, if the per-family switch for the read cache is set to false, data 
blocks will not be cached even if the cacheBlocks flag of Gets or Scans is 
set to true. See HFileReaderImpl#1536.
I think the behavior of the global switch is consistent with that of the 
per-family switch.

{code}
// Cache the block if necessary
if (cacheBlock && cacheConf.shouldCacheBlockOnRead(category)) {
  cacheConf.getBlockCache().cacheBlock(cacheKey,
      cacheConf.shouldCacheCompressed(category) ? hfileBlock : unpacked,
      cacheConf.isInMemory(), this.cacheConf.isCacheDataInL1());
}
{code}

> Add a option to disable the data block cache for testing the performance of 
> underlying file system
> --
>
> Key: HBASE-15338
> URL: https://issues.apache.org/jira/browse/HBASE-15338
> Project: HBase
>  Issue Type: Improvement
>  Components: integration tests
>Reporter: Liu Shaohui
>Assignee: Liu Shaohui
>Priority: Minor
> Fix For: 2.0.0
>
> Attachments: HBASE-15338-trunk-v1.diff, HBASE-15338-trunk-v2.diff, 
> HBASE-15338-trunk-v3.diff, HBASE-15338-trunk-v4.diff, 
> HBASE-15338-trunk-v5.diff, HBASE-15338-trunk-v6.diff
>
>
> When testing and comparing the performance of different file systems (HDFS, 
> Azure blob storage, AWS S3 and so on) for HBase, it's better to avoid the 
> effect of the HBase BlockCache and get the actual random read latency when a 
> data block is read from the underlying file system. (Usually, the index blocks 
> and meta blocks should be cached in memory during the testing.)
> So we add an option in CacheConfig to disable the data block cache.
> Suggestions are welcomed~ Thanks



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-14496) Compaction improvements: Delayed compaction in RatioBasedCompactionPolicy

2016-03-03 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14496?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15179216#comment-15179216
 ] 

Hadoop QA commented on HBASE-14496:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:red}-1{color} | {color:red} patch {color} | {color:red} 0m 4s {color} 
| {color:red} HBASE-14496 does not apply to master. Rebase required? Wrong 
Branch? See https://yetus.apache.org/documentation/latest/precommit-patchnames 
for help. {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12767143/HBASE-14496.v2.patch |
| JIRA Issue | HBASE-14496 |
| Powered by | Apache Yetus 0.1.0   http://yetus.apache.org |
| Console output | 
https://builds.apache.org/job/PreCommit-HBASE-Build/827/console |


This message was automatically generated.



> Compaction improvements: Delayed compaction in RatioBasedCompactionPolicy
> -
>
> Key: HBASE-14496
> URL: https://issues.apache.org/jira/browse/HBASE-14496
> Project: HBase
>  Issue Type: New Feature
>Reporter: Vladimir Rodionov
>Assignee: Vladimir Rodionov
> Attachments: HBASE-14496.v1.patch, HBASE-14496.v2.patch
>
>
> The delayed compaction feature for RatioBasedCompactionPolicy allows specifying 
> a maximum compaction delay for newly created store files. Files will be 
> eligible for compaction only if their age exceeds this delay. This allows 
> preserving new data in the block cache. For most applications, the newer the 
> data is, the more frequently it is accessed. Frequent compactions of new 
> store files result in a high block cache churn rate and badly affect read 
> performance and read latencies. 
> The configuration will be global, per table, and per column family.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-6721) RegionServer Group based Assignment

2016-03-03 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-6721?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15179214#comment-15179214
 ] 

Hadoop QA commented on HBASE-6721:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:red}-1{color} | {color:red} patch {color} | {color:red} 0m 4s {color} 
| {color:red} HBASE-6721 does not apply to master. Rebase required? Wrong 
Branch? See https://yetus.apache.org/documentation/latest/precommit-patchnames 
for help. {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12791388/hbase-6721-v27.patch |
| JIRA Issue | HBASE-6721 |
| Powered by | Apache Yetus 0.1.0   http://yetus.apache.org |
| Console output | 
https://builds.apache.org/job/PreCommit-HBASE-Build/828/console |


This message was automatically generated.



> RegionServer Group based Assignment
> ---
>
> Key: HBASE-6721
> URL: https://issues.apache.org/jira/browse/HBASE-6721
> Project: HBase
>  Issue Type: New Feature
>Reporter: Francis Liu
>Assignee: Francis Liu
>  Labels: hbase-6721
> Attachments: 6721-master-webUI.patch, HBASE-6721 
> GroupBasedLoadBalancer Sequence Diagram.xml, HBASE-6721-DesigDoc.pdf, 
> HBASE-6721-DesigDoc.pdf, HBASE-6721-DesigDoc.pdf, HBASE-6721-DesigDoc.pdf, 
> HBASE-6721_0.98_2.patch, HBASE-6721_10.patch, HBASE-6721_11.patch, 
> HBASE-6721_12.patch, HBASE-6721_13.patch, HBASE-6721_14.patch, 
> HBASE-6721_15.patch, HBASE-6721_8.patch, HBASE-6721_9.patch, 
> HBASE-6721_9.patch, HBASE-6721_94.patch, HBASE-6721_94.patch, 
> HBASE-6721_94_2.patch, HBASE-6721_94_3.patch, HBASE-6721_94_3.patch, 
> HBASE-6721_94_4.patch, HBASE-6721_94_5.patch, HBASE-6721_94_6.patch, 
> HBASE-6721_94_7.patch, HBASE-6721_98_1.patch, HBASE-6721_98_2.patch, 
> HBASE-6721_hbase-6721_addendum.patch, HBASE-6721_trunk.patch, 
> HBASE-6721_trunk.patch, HBASE-6721_trunk.patch, HBASE-6721_trunk1.patch, 
> HBASE-6721_trunk2.patch, balanceCluster Sequence Diagram.svg, 
> hbase-6721-v15-branch-1.1.patch, hbase-6721-v16.patch, hbase-6721-v17.patch, 
> hbase-6721-v18.patch, hbase-6721-v19.patch, hbase-6721-v20.patch, 
> hbase-6721-v21.patch, hbase-6721-v22.patch, hbase-6721-v23.patch, 
> hbase-6721-v25.patch, hbase-6721-v26.patch, hbase-6721-v26_draft1.patch, 
> hbase-6721-v27.patch, hbase-6721-v27.patch, hbase-6721-v27.patch.txt, 
> immediateAssignments Sequence Diagram.svg, randomAssignment Sequence 
> Diagram.svg, retainAssignment Sequence Diagram.svg, roundRobinAssignment 
> Sequence Diagram.svg
>
>
> In multi-tenant deployments of HBase, it is likely that a RegionServer will 
> be serving out regions from a number of different tables owned by various 
> client applications. Being able to group a subset of running RegionServers 
> and assign specific tables to it provides a client application with a level of 
> isolation and resource allocation.
> The proposal essentially is to have an AssignmentManager which is aware of 
> RegionServer groups and assigns tables to region servers based on groupings. 
> Load balancing will occur on a per group basis as well. 
> This is essentially a simplification of the approach taken in HBASE-4120. See 
> attached document.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-15291) FileSystem not closed in secure bulkLoad

2016-03-03 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15291?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15179211#comment-15179211
 ] 

Hudson commented on HBASE-15291:


FAILURE: Integrated in HBase-Trunk_matrix #754 (See 
[https://builds.apache.org/job/HBase-Trunk_matrix/754/])
HBASE-15291 FileSystem not closed in secure bulkLoad (Yong Zhang) (tedyu: rev 
4fba1c36275710d9970066310489b927e5d194a1)
* 
hbase-server/src/main/java/org/apache/hadoop/hbase/security/access/SecureBulkLoadEndpoint.java


> FileSystem not closed in secure bulkLoad
> 
>
> Key: HBASE-15291
> URL: https://issues.apache.org/jira/browse/HBASE-15291
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 1.0.2, 0.98.16.1
>Reporter: Yong Zhang
>Assignee: Yong Zhang
> Fix For: 2.0.0, 1.3.0, 1.2.1, 0.98.18, 1.4.0
>
> Attachments: HBASE-15291.001.patch, HBASE-15291.002.patch, 
> HBASE-15291.003.patch, HBASE-15291.004.patch, HBASE-15291.addendum, patch
>
>
> The FileSystem is not closed in secure bulkLoad after the bulk load finishes; 
> this will cause memory usage to grow more and more if there are many bulk loads.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-15339) Improve DateTieredCompactionPolicy

2016-03-03 Thread Clara Xiong (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15339?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15179205#comment-15179205
 ] 

Clara Xiong commented on HBASE-15339:
-

I intentionally kept the minimum number of files for compaction of the incoming 
window different from that of the other windows. This can be achieved by setting 
the number high. The added delay should work seamlessly with DTCP too.

> Improve DateTieredCompactionPolicy
> --
>
> Key: HBASE-15339
> URL: https://issues.apache.org/jira/browse/HBASE-15339
> Project: HBase
>  Issue Type: Improvement
>  Components: Compaction
>Reporter: Duo Zhang
>
> For our MiCloud service, old data is rarely touched but we still need to 
> keep it, so we want to put the data on inexpensive devices and reduce 
> redundancy using erasure coding (EC) to cut down the cost.
> With date-based tiered compaction introduced in HBASE-15181, new data and old 
> data can be placed in different tiers. But the tier boundary moves as time 
> elapses, so it is still possible that we do a compaction on an old tier, which 
> breaks our block-moving and EC work.
> So here we want to introduce an "archive tier" to better fit our scenario. 
> Add a configuration called "archive unit", for example, year. That means, if 
> we find that the tier boundary is already in the previous year, we reset 
> the boundary to the start and end of that year, and if we want to do a 
> compaction in this tier, we just compact all files into one file. The file will 
> never be changed unless we force a major compaction, so it is safe to apply EC 
> and other cost-reducing approaches to the file. And we make more tiers before 
> this tier year by year. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-15389) Write out multiple files when compaction

2016-03-03 Thread Clara Xiong (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15389?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15179200#comment-15179200
 ] 

Clara Xiong commented on HBASE-15389:
-

We can add sorting in DateTieredCompactionPolicy that uses the timestamp as the 
secondary sorting criterion. 

> Write out multiple files when compaction
> 
>
> Key: HBASE-15389
> URL: https://issues.apache.org/jira/browse/HBASE-15389
> Project: HBase
>  Issue Type: Sub-task
>  Components: Compaction
>Reporter: Duo Zhang
> Attachments: HBASE-15389-uc.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-15291) FileSystem not closed in secure bulkLoad

2016-03-03 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15291?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15179189#comment-15179189
 ] 

Hudson commented on HBASE-15291:


SUCCESS: Integrated in HBase-1.3 #586 (See 
[https://builds.apache.org/job/HBase-1.3/586/])
HBASE-15291 FileSystem not closed in secure bulkLoad (Yong Zhang) (tedyu: rev 
17b39763c96aaca45208cba0ce9ce4fb931eb959)
* 
hbase-server/src/main/java/org/apache/hadoop/hbase/security/access/SecureBulkLoadEndpoint.java


> FileSystem not closed in secure bulkLoad
> 
>
> Key: HBASE-15291
> URL: https://issues.apache.org/jira/browse/HBASE-15291
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 1.0.2, 0.98.16.1
>Reporter: Yong Zhang
>Assignee: Yong Zhang
> Fix For: 2.0.0, 1.3.0, 1.2.1, 0.98.18, 1.4.0
>
> Attachments: HBASE-15291.001.patch, HBASE-15291.002.patch, 
> HBASE-15291.003.patch, HBASE-15291.004.patch, HBASE-15291.addendum, patch
>
>
> FileSystem is not closed in secure bulkLoad after the bulkLoad finishes; this 
> will cause memory usage to grow more and more if there are many bulkLoads.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-15381) Implement a distributed MOB compaction by procedure

2016-03-03 Thread Jingcheng Du (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15381?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15179185#comment-15179185
 ] 

Jingcheng Du commented on HBASE-15381:
--

bq. What do you see when this is going on? A master that lags burdened down by 
all the i/o?
There will be heavy I/O between the node where the HM resides and the data 
nodes of HDFS. It might impact the network latency between the HM and the RSs. 
And, as Anoop said, the locality will be lost after the compaction. I try to 
address such issues in the new implementation.

bq. How you see it working? What happens when compactions get backed up?
In the distributed compaction, the compaction is periodically triggered by the 
HM, and the job is distributed to all RSs by procedure; each RS finds the files 
belonging to it and distributes them to its online regions.
The mob compaction in each region compacts small files in batches:
# Merge the small files into a bigger one (hopefully this big file won't be 
merged again from then on).
# Bulkload the hfile which contains the meta cells (reference cells) to HBase.
Then the new data are visible to users. Any exception that occurs during a 
batch will trigger a rollback of the compaction.
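
For illustration, here is a minimal sketch of the per-batch flow described 
above. All helper names here are hypothetical placeholders, not the actual 
patch API.

{code}
import java.io.IOException;
import java.util.List;

// Hypothetical sketch of the per-region mob compaction batch flow described
// above; the abstract helpers are placeholders, not the actual patch API.
abstract class MobBatchCompactorSketch {
  // 1. merge the small mob files of one batch into a bigger file
  abstract String mergeMobFiles(List<String> batch) throws IOException;
  // write the hfile carrying the meta (reference) cells for the merged file
  abstract String writeReferenceHFile(String merged, List<String> batch) throws IOException;
  // 2. bulkload the reference hfile; only now is the new data visible to users
  abstract void bulkLoad(String refHFile) throws IOException;
  // undo the partially applied batch
  abstract void rollbackBatch(List<String> batch) throws IOException;

  void compactInBatches(List<List<String>> batches) throws IOException {
    for (List<String> batch : batches) {
      try {
        bulkLoad(writeReferenceHFile(mergeMobFiles(batch), batch));
      } catch (IOException e) {
        // any exception during a batch triggers a rollback of the compaction
        rollbackBatch(batch);
        throw e;
      }
    }
  }
}
{code}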

> Implement a distributed MOB compaction by procedure
> ---
>
> Key: HBASE-15381
> URL: https://issues.apache.org/jira/browse/HBASE-15381
> Project: HBase
>  Issue Type: Improvement
>  Components: mob
>Reporter: Jingcheng Du
>Assignee: Jingcheng Du
>
> In MOB, there is a periodic compaction which runs in the HMaster (it can be 
> disabled by configuration), in which some small mob files are merged into 
> bigger ones. Now the compaction only runs in the HMaster, which is not 
> efficient and might impact the running of the HMaster. In this JIRA, a 
> distributed MOB compaction is introduced; it is triggered by the HMaster, but 
> all the compaction jobs are distributed to the HRegionServers.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-6721) RegionServer Group based Assignment

2016-03-03 Thread Enis Soztutar (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-6721?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Enis Soztutar updated HBASE-6721:
-
Attachment: hbase-6721-v27.patch

> RegionServer Group based Assignment
> ---
>
> Key: HBASE-6721
> URL: https://issues.apache.org/jira/browse/HBASE-6721
> Project: HBase
>  Issue Type: New Feature
>Reporter: Francis Liu
>Assignee: Francis Liu
>  Labels: hbase-6721
> Attachments: 6721-master-webUI.patch, HBASE-6721 
> GroupBasedLoadBalancer Sequence Diagram.xml, HBASE-6721-DesigDoc.pdf, 
> HBASE-6721-DesigDoc.pdf, HBASE-6721-DesigDoc.pdf, HBASE-6721-DesigDoc.pdf, 
> HBASE-6721_0.98_2.patch, HBASE-6721_10.patch, HBASE-6721_11.patch, 
> HBASE-6721_12.patch, HBASE-6721_13.patch, HBASE-6721_14.patch, 
> HBASE-6721_15.patch, HBASE-6721_8.patch, HBASE-6721_9.patch, 
> HBASE-6721_9.patch, HBASE-6721_94.patch, HBASE-6721_94.patch, 
> HBASE-6721_94_2.patch, HBASE-6721_94_3.patch, HBASE-6721_94_3.patch, 
> HBASE-6721_94_4.patch, HBASE-6721_94_5.patch, HBASE-6721_94_6.patch, 
> HBASE-6721_94_7.patch, HBASE-6721_98_1.patch, HBASE-6721_98_2.patch, 
> HBASE-6721_hbase-6721_addendum.patch, HBASE-6721_trunk.patch, 
> HBASE-6721_trunk.patch, HBASE-6721_trunk.patch, HBASE-6721_trunk1.patch, 
> HBASE-6721_trunk2.patch, balanceCluster Sequence Diagram.svg, 
> hbase-6721-v15-branch-1.1.patch, hbase-6721-v16.patch, hbase-6721-v17.patch, 
> hbase-6721-v18.patch, hbase-6721-v19.patch, hbase-6721-v20.patch, 
> hbase-6721-v21.patch, hbase-6721-v22.patch, hbase-6721-v23.patch, 
> hbase-6721-v25.patch, hbase-6721-v26.patch, hbase-6721-v26_draft1.patch, 
> hbase-6721-v27.patch, hbase-6721-v27.patch, hbase-6721-v27.patch.txt, 
> immediateAssignments Sequence Diagram.svg, randomAssignment Sequence 
> Diagram.svg, retainAssignment Sequence Diagram.svg, roundRobinAssignment 
> Sequence Diagram.svg
>
>
> In multi-tenant deployments of HBase, it is likely that a RegionServer will 
> be serving out regions from a number of different tables owned by various 
> client applications. Being able to group a subset of running RegionServers 
> and assign specific tables to that group provides a client application a 
> level of isolation and resource allocation.
> The proposal essentially is to have an AssignmentManager which is aware of 
> RegionServer groups and assigns tables to region servers based on groupings. 
> Load balancing will occur on a per-group basis as well. 
> This is essentially a simplification of the approach taken in HBASE-4120. See 
> attached document.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-15339) Improve DateTieredCompactionPolicy

2016-03-03 Thread Enis Soztutar (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15339?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15179176#comment-15179176
 ] 

Enis Soztutar commented on HBASE-15339:
---

bq. Is it possible to write data directly into BlockCache when compacting?
There is a setting for writing the index or data blocks directly to the block 
cache from compactions. However, that is an all-or-nothing setting. What I 
imagine we want is something like the ability to write to the block cache only 
when the data is not old. 
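
For reference, a sketch of the all-or-nothing cache-on-write settings being 
referred to. To the best of my knowledge these key names are the ones read by 
CacheConfig; they apply to every write, with no notion of data age.

{code}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;

// Sketch of the global cache-on-write switches; they apply to every write,
// with no way to restrict caching to recent data only.
public class CacheOnWriteSketch {
  public static void main(String[] args) {
    Configuration conf = HBaseConfiguration.create();
    conf.setBoolean("hbase.rs.cacheblocksonwrite", true);    // data blocks
    conf.setBoolean("hfile.block.index.cacheonwrite", true); // index blocks
    conf.setBoolean("hfile.block.bloom.cacheonwrite", true); // bloom blocks
  }
}
{code}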

> Improve DateTieredCompactionPolicy
> --
>
> Key: HBASE-15339
> URL: https://issues.apache.org/jira/browse/HBASE-15339
> Project: HBase
>  Issue Type: Improvement
>  Components: Compaction
>Reporter: Duo Zhang
>
> For our MiCloud service, the old data is rarely touched but we still need to 
> keep it, so we want to put the data on inexpensive devices and reduce 
> redundancy using EC to cut down the cost.
> With date-based tiered compaction introduced in HBASE-15181, new data and old 
> data can be placed in different tiers. But the tier boundary moves as time 
> elapses, so it is still possible that we do compaction on an old tier, which 
> breaks our block moving and EC work.
> So here we want to introduce an "archive tier" to better fit our scenario. 
> Add a configuration called "archive unit", for example, year. That means, if 
> we find that the tier boundary is already in the previous year, then we reset 
> the boundary to the start and end of that year, and if we want to do 
> compaction in this tier, we just compact all files into one file. The file will 
> never be changed unless we force a major compaction, so it is safe to apply EC 
> and other cost-reducing approaches to the file. And we make more tiers before 
> this tier, year by year. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-15339) Improve DateTieredCompactionPolicy

2016-03-03 Thread Duo Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15339?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15179137#comment-15179137
 ] 

Duo Zhang commented on HBASE-15339:
---

Is it possible to write data directly into BlockCache when compacting?

> Improve DateTieredCompactionPolicy
> --
>
> Key: HBASE-15339
> URL: https://issues.apache.org/jira/browse/HBASE-15339
> Project: HBase
>  Issue Type: Improvement
>  Components: Compaction
>Reporter: Duo Zhang
>
> For our MiCloud service, the old data is rarely touched but we still need to 
> keep it, so we want to put the data on inexpensive devices and reduce 
> redundancy using EC to cut down the cost.
> With date-based tiered compaction introduced in HBASE-15181, new data and old 
> data can be placed in different tiers. But the tier boundary moves as time 
> elapses, so it is still possible that we do compaction on an old tier, which 
> breaks our block moving and EC work.
> So here we want to introduce an "archive tier" to better fit our scenario. 
> Add a configuration called "archive unit", for example, year. That means, if 
> we find that the tier boundary is already in the previous year, then we reset 
> the boundary to the start and end of that year, and if we want to do 
> compaction in this tier, we just compact all files into one file. The file will 
> never be changed unless we force a major compaction, so it is safe to apply EC 
> and other cost-reducing approaches to the file. And we make more tiers before 
> this tier, year by year. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-15389) Write out multiple files when compaction

2016-03-03 Thread Duo Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-15389?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Duo Zhang updated HBASE-15389:
--
Attachment: HBASE-15389-uc.patch

I have to attend a training course today and it is Friday in China, so I am 
uploading the unfinished patch here.

It may still contain some bugs since the testcase has not been written yet.

I added a new StoreEngine and Compactor implementation. When the filesToCompact 
is ready, I calculate the boundaries and write out a file for each window.

The output files have the same seqId, so I added maxTimestamp to the sort order 
just after seqId in StoreFile.Comparators.SEQ_ID. Two thoughts here:
1. Assign different seqIds to the output files. If we have fewer output files 
than input files, this could work. But that condition cannot be satisfied every 
time, so theoretically it is possible that we do not have enough distinct 
seqIds to assign. Correct me if I'm wrong.
2. Introduce a new comparator instead of modifying SEQ_ID directly. This could 
be done, no problem, but the SEQ_ID comparator is also used outside 
StoreFileManager, so we would need to find a place to store the comparator that 
should be used and remove the static reference. So here I chose the simple way 
first. Can change later if needed.
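
For illustration, the compound ordering of option 2 would look roughly like 
this. This is a sketch over a simplified stand-in class, not the real 
StoreFile.Comparators code.

{code}
import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;

// Sketch of the compound ordering: seqId first, then maxTimestamp as the
// tie breaker. SimpleFile is a stand-in, not the real StoreFile class.
public class SeqIdThenTimestampSketch {
  static class SimpleFile {
    final String name;
    final long seqId;
    final long maxTimestamp;
    SimpleFile(String name, long seqId, long maxTimestamp) {
      this.name = name;
      this.seqId = seqId;
      this.maxTimestamp = maxTimestamp;
    }
    @Override
    public String toString() { return name; }
  }

  static final Comparator<SimpleFile> SEQ_ID_THEN_MAX_TS =
      Comparator.comparingLong((SimpleFile f) -> f.seqId)
                .thenComparingLong(f -> f.maxTimestamp);

  public static void main(String[] args) {
    List<SimpleFile> files = new ArrayList<>();
    files.add(new SimpleFile("window2", 10, 2000)); // same seqId...
    files.add(new SimpleFile("window1", 10, 1000)); // ...older max timestamp
    files.sort(SEQ_ID_THEN_MAX_TS);
    System.out.println(files); // [window1, window2]
  }
}
{code}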

[~davelatham] [~claraxiong] PTAL. Thanks very much.

> Write out multiple files when compaction
> 
>
> Key: HBASE-15389
> URL: https://issues.apache.org/jira/browse/HBASE-15389
> Project: HBase
>  Issue Type: Sub-task
>  Components: Compaction
>Reporter: Duo Zhang
> Attachments: HBASE-15389-uc.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-14496) Compaction improvements: Delayed compaction in RatioBasedCompactionPolicy

2016-03-03 Thread Dave Latham (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14496?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15179121#comment-15179121
 ] 

Dave Latham commented on HBASE-14496:
-

What if memory pressure causes many small flushes?  How does this interact with 
the max number of store files?

> Compaction improvements: Delayed compaction in RatioBasedCompactionPolicy
> -
>
> Key: HBASE-14496
> URL: https://issues.apache.org/jira/browse/HBASE-14496
> Project: HBase
>  Issue Type: New Feature
>Reporter: Vladimir Rodionov
>Assignee: Vladimir Rodionov
> Attachments: HBASE-14496.v1.patch, HBASE-14496.v2.patch
>
>
> The delayed compaction feature for RatioBasedCompactionPolicy allows specifying 
> a maximum compaction delay for newly created store files. Files will be 
> eligible for compaction only if their age exceeds this delay. This allows new 
> data to be preserved in the block cache. For most applications, the newer the 
> data is, the more frequently it is accessed. Frequent compactions of new 
> store files result in a high block cache churn rate and badly affect read 
> performance and read latencies. 
> The configuration will be available globally, per table, and per column family.
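
A minimal sketch of the age-based eligibility check described above; the class 
and accessor names are hypothetical, not the patch's actual code.

{code}
import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch: only files older than the configured delay are
// eligible for compaction; newer files stay hot in the block cache.
public class DelayedCompactionSketch {
  static class FileInfo {
    final String path;
    final long creationTime; // millis since epoch
    FileInfo(String path, long creationTime) {
      this.path = path;
      this.creationTime = creationTime;
    }
  }

  static List<FileInfo> eligibleForCompaction(List<FileInfo> candidates,
      long maxCompactionDelayMs, long now) {
    List<FileInfo> eligible = new ArrayList<>();
    for (FileInfo f : candidates) {
      if (now - f.creationTime > maxCompactionDelayMs) {
        eligible.add(f); // old enough to compact
      }
    }
    return eligible;
  }
}
{code}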



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-15339) Improve DateTieredCompactionPolicy

2016-03-03 Thread Enis Soztutar (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15339?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15179113#comment-15179113
 ] 

Enis Soztutar commented on HBASE-15339:
---

Delayed compactions for the incoming window are also relevant: HBASE-14496. 
[~vrodionov] can comment more on the reasoning, but my understanding is that if 
query load for very-recent data is high, delaying the compaction of files for 
some time can help with smoothing the latencies out. 

> Improve DateTieredCompactionPolicy
> --
>
> Key: HBASE-15339
> URL: https://issues.apache.org/jira/browse/HBASE-15339
> Project: HBase
>  Issue Type: Improvement
>  Components: Compaction
>Reporter: Duo Zhang
>
> For our MiCloud service, the old data is rarely touched but we still need to 
> keep it, so we want to put the data on inexpensive devices and reduce 
> redundancy using EC to cut down the cost.
> With date-based tiered compaction introduced in HBASE-15181, new data and old 
> data can be placed in different tiers. But the tier boundary moves as time 
> elapses, so it is still possible that we do compaction on an old tier, which 
> breaks our block moving and EC work.
> So here we want to introduce an "archive tier" to better fit our scenario. 
> Add a configuration called "archive unit", for example, year. That means, if 
> we find that the tier boundary is already in the previous year, then we reset 
> the boundary to the start and end of that year, and if we want to do 
> compaction in this tier, we just compact all files into one file. The file will 
> never be changed unless we force a major compaction, so it is safe to apply EC 
> and other cost-reducing approaches to the file. And we make more tiers before 
> this tier, year by year. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-15368) Add relative window support

2016-03-03 Thread Enis Soztutar (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15368?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15179104#comment-15179104
 ] 

Enis Soztutar commented on HBASE-15368:
---

Sorry to come in late. Agreed with the resolution; unless we have an 
alternative implementation, we should keep this patch parked. 

For going with an alternative windowing policy, we should have a clear 
guideline for users of DTCP to choose which one. It seems that everybody is on 
the same page about the fixed window being the best default choice. 

> Add relative window support
> ---
>
> Key: HBASE-15368
> URL: https://issues.apache.org/jira/browse/HBASE-15368
> Project: HBase
>  Issue Type: Sub-task
>  Components: Compaction
>Reporter: Duo Zhang
>Assignee: Duo Zhang
> Attachments: HBASE-15368-v1.patch, HBASE-15368.patch
>
>
> To better determine 'hot' data.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-6721) RegionServer Group based Assignment

2016-03-03 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-6721?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15179101#comment-15179101
 ] 

Hadoop QA commented on HBASE-6721:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} patch {color} | {color:blue} 0m 2s 
{color} | {color:blue} The patch file was not named according to hbase's naming 
conventions. Please see 
https://yetus.apache.org/documentation/latest/precommit-patchnames for 
instructions. {color} |
| {color:red}-1{color} | {color:red} patch {color} | {color:red} 0m 6s {color} 
| {color:red} HBASE-6721 does not apply to master. Rebase required? Wrong 
Branch? See https://yetus.apache.org/documentation/latest/precommit-patchnames 
for help. {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12791376/hbase-6721-v27.patch.txt
 |
| JIRA Issue | HBASE-6721 |
| Powered by | Apache Yetus 0.1.0   http://yetus.apache.org |
| Console output | 
https://builds.apache.org/job/PreCommit-HBASE-Build/826/console |


This message was automatically generated.



> RegionServer Group based Assignment
> ---
>
> Key: HBASE-6721
> URL: https://issues.apache.org/jira/browse/HBASE-6721
> Project: HBase
>  Issue Type: New Feature
>Reporter: Francis Liu
>Assignee: Francis Liu
>  Labels: hbase-6721
> Attachments: 6721-master-webUI.patch, HBASE-6721 
> GroupBasedLoadBalancer Sequence Diagram.xml, HBASE-6721-DesigDoc.pdf, 
> HBASE-6721-DesigDoc.pdf, HBASE-6721-DesigDoc.pdf, HBASE-6721-DesigDoc.pdf, 
> HBASE-6721_0.98_2.patch, HBASE-6721_10.patch, HBASE-6721_11.patch, 
> HBASE-6721_12.patch, HBASE-6721_13.patch, HBASE-6721_14.patch, 
> HBASE-6721_15.patch, HBASE-6721_8.patch, HBASE-6721_9.patch, 
> HBASE-6721_9.patch, HBASE-6721_94.patch, HBASE-6721_94.patch, 
> HBASE-6721_94_2.patch, HBASE-6721_94_3.patch, HBASE-6721_94_3.patch, 
> HBASE-6721_94_4.patch, HBASE-6721_94_5.patch, HBASE-6721_94_6.patch, 
> HBASE-6721_94_7.patch, HBASE-6721_98_1.patch, HBASE-6721_98_2.patch, 
> HBASE-6721_hbase-6721_addendum.patch, HBASE-6721_trunk.patch, 
> HBASE-6721_trunk.patch, HBASE-6721_trunk.patch, HBASE-6721_trunk1.patch, 
> HBASE-6721_trunk2.patch, balanceCluster Sequence Diagram.svg, 
> hbase-6721-v15-branch-1.1.patch, hbase-6721-v16.patch, hbase-6721-v17.patch, 
> hbase-6721-v18.patch, hbase-6721-v19.patch, hbase-6721-v20.patch, 
> hbase-6721-v21.patch, hbase-6721-v22.patch, hbase-6721-v23.patch, 
> hbase-6721-v25.patch, hbase-6721-v26.patch, hbase-6721-v26_draft1.patch, 
> hbase-6721-v27.patch, hbase-6721-v27.patch.txt, immediateAssignments Sequence 
> Diagram.svg, randomAssignment Sequence Diagram.svg, retainAssignment Sequence 
> Diagram.svg, roundRobinAssignment Sequence Diagram.svg
>
>
> In multi-tenant deployments of HBase, it is likely that a RegionServer will 
> be serving out regions from a number of different tables owned by various 
> client applications. Being able to group a subset of running RegionServers 
> and assign specific tables to that group provides a client application a 
> level of isolation and resource allocation.
> The proposal essentially is to have an AssignmentManager which is aware of 
> RegionServer groups and assigns tables to region servers based on groupings. 
> Load balancing will occur on a per-group basis as well. 
> This is essentially a simplification of the approach taken in HBASE-4120. See 
> attached document.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HBASE-15389) Write out multiple files when compaction

2016-03-03 Thread Duo Zhang (JIRA)
Duo Zhang created HBASE-15389:
-

 Summary: Write out multiple files when compaction
 Key: HBASE-15389
 URL: https://issues.apache.org/jira/browse/HBASE-15389
 Project: HBase
  Issue Type: Sub-task
Reporter: Duo Zhang






--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-6721) RegionServer Group based Assignment

2016-03-03 Thread Enis Soztutar (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-6721?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Enis Soztutar updated HBASE-6721:
-
Attachment: hbase-6721-v27.patch.txt

Reattaching for hadoopqa. 

> RegionServer Group based Assignment
> ---
>
> Key: HBASE-6721
> URL: https://issues.apache.org/jira/browse/HBASE-6721
> Project: HBase
>  Issue Type: New Feature
>Reporter: Francis Liu
>Assignee: Francis Liu
>  Labels: hbase-6721
> Attachments: 6721-master-webUI.patch, HBASE-6721 
> GroupBasedLoadBalancer Sequence Diagram.xml, HBASE-6721-DesigDoc.pdf, 
> HBASE-6721-DesigDoc.pdf, HBASE-6721-DesigDoc.pdf, HBASE-6721-DesigDoc.pdf, 
> HBASE-6721_0.98_2.patch, HBASE-6721_10.patch, HBASE-6721_11.patch, 
> HBASE-6721_12.patch, HBASE-6721_13.patch, HBASE-6721_14.patch, 
> HBASE-6721_15.patch, HBASE-6721_8.patch, HBASE-6721_9.patch, 
> HBASE-6721_9.patch, HBASE-6721_94.patch, HBASE-6721_94.patch, 
> HBASE-6721_94_2.patch, HBASE-6721_94_3.patch, HBASE-6721_94_3.patch, 
> HBASE-6721_94_4.patch, HBASE-6721_94_5.patch, HBASE-6721_94_6.patch, 
> HBASE-6721_94_7.patch, HBASE-6721_98_1.patch, HBASE-6721_98_2.patch, 
> HBASE-6721_hbase-6721_addendum.patch, HBASE-6721_trunk.patch, 
> HBASE-6721_trunk.patch, HBASE-6721_trunk.patch, HBASE-6721_trunk1.patch, 
> HBASE-6721_trunk2.patch, balanceCluster Sequence Diagram.svg, 
> hbase-6721-v15-branch-1.1.patch, hbase-6721-v16.patch, hbase-6721-v17.patch, 
> hbase-6721-v18.patch, hbase-6721-v19.patch, hbase-6721-v20.patch, 
> hbase-6721-v21.patch, hbase-6721-v22.patch, hbase-6721-v23.patch, 
> hbase-6721-v25.patch, hbase-6721-v26.patch, hbase-6721-v26_draft1.patch, 
> hbase-6721-v27.patch, hbase-6721-v27.patch.txt, immediateAssignments Sequence 
> Diagram.svg, randomAssignment Sequence Diagram.svg, retainAssignment Sequence 
> Diagram.svg, roundRobinAssignment Sequence Diagram.svg
>
>
> In multi-tenant deployments of HBase, it is likely that a RegionServer will 
> be serving out regions from a number of different tables owned by various 
> client applications. Being able to group a subset of running RegionServers 
> and assign specific tables to that group provides a client application a 
> level of isolation and resource allocation.
> The proposal essentially is to have an AssignmentManager which is aware of 
> RegionServer groups and assigns tables to region servers based on groupings. 
> Load balancing will occur on a per-group basis as well. 
> This is essentially a simplification of the approach taken in HBASE-4120. See 
> attached document.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HBASE-15388) Add ACL for some master methods

2016-03-03 Thread Enis Soztutar (JIRA)
Enis Soztutar created HBASE-15388:
-

 Summary: Add ACL for some master methods
 Key: HBASE-15388
 URL: https://issues.apache.org/jira/browse/HBASE-15388
 Project: HBase
  Issue Type: Bug
Reporter: Enis Soztutar
 Fix For: 2.0.0, 1.3.0


Some new methods and some old ones do not have ACLs. 

A basic look at the master rpc endpoints turns up: 
 - Catalog janitor methods 
 - set balancer switch
 - Normalizer methods 
 - split merge switch 
 - mob methods 
 - others? 




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-6721) RegionServer Group based Assignment

2016-03-03 Thread Francis Liu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-6721?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15179067#comment-15179067
 ] 

Francis Liu commented on HBASE-6721:


Looks like the build aborted because it timed out. 

https://builds.apache.org/job/PreCommit-HBASE-Build/816/console

Can we try and rerun the build? I'm going to try and run the tests on my local 
machine as well.


> RegionServer Group based Assignment
> ---
>
> Key: HBASE-6721
> URL: https://issues.apache.org/jira/browse/HBASE-6721
> Project: HBase
>  Issue Type: New Feature
>Reporter: Francis Liu
>Assignee: Francis Liu
>  Labels: hbase-6721
> Attachments: 6721-master-webUI.patch, HBASE-6721 
> GroupBasedLoadBalancer Sequence Diagram.xml, HBASE-6721-DesigDoc.pdf, 
> HBASE-6721-DesigDoc.pdf, HBASE-6721-DesigDoc.pdf, HBASE-6721-DesigDoc.pdf, 
> HBASE-6721_0.98_2.patch, HBASE-6721_10.patch, HBASE-6721_11.patch, 
> HBASE-6721_12.patch, HBASE-6721_13.patch, HBASE-6721_14.patch, 
> HBASE-6721_15.patch, HBASE-6721_8.patch, HBASE-6721_9.patch, 
> HBASE-6721_9.patch, HBASE-6721_94.patch, HBASE-6721_94.patch, 
> HBASE-6721_94_2.patch, HBASE-6721_94_3.patch, HBASE-6721_94_3.patch, 
> HBASE-6721_94_4.patch, HBASE-6721_94_5.patch, HBASE-6721_94_6.patch, 
> HBASE-6721_94_7.patch, HBASE-6721_98_1.patch, HBASE-6721_98_2.patch, 
> HBASE-6721_hbase-6721_addendum.patch, HBASE-6721_trunk.patch, 
> HBASE-6721_trunk.patch, HBASE-6721_trunk.patch, HBASE-6721_trunk1.patch, 
> HBASE-6721_trunk2.patch, balanceCluster Sequence Diagram.svg, 
> hbase-6721-v15-branch-1.1.patch, hbase-6721-v16.patch, hbase-6721-v17.patch, 
> hbase-6721-v18.patch, hbase-6721-v19.patch, hbase-6721-v20.patch, 
> hbase-6721-v21.patch, hbase-6721-v22.patch, hbase-6721-v23.patch, 
> hbase-6721-v25.patch, hbase-6721-v26.patch, hbase-6721-v26_draft1.patch, 
> hbase-6721-v27.patch, immediateAssignments Sequence Diagram.svg, 
> randomAssignment Sequence Diagram.svg, retainAssignment Sequence Diagram.svg, 
> roundRobinAssignment Sequence Diagram.svg
>
>
> In multi-tenant deployments of HBase, it is likely that a RegionServer will 
> be serving out regions from a number of different tables owned by various 
> client applications. Being able to group a subset of running RegionServers 
> and assign specific tables to that group provides a client application a 
> level of isolation and resource allocation.
> The proposal essentially is to have an AssignmentManager which is aware of 
> RegionServer groups and assigns tables to region servers based on groupings. 
> Load balancing will occur on a per-group basis as well. 
> This is essentially a simplification of the approach taken in HBASE-4120. See 
> attached document.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-15386) PREFETCH_BLOCKS_ON_OPEN in HColumnDescriptor is ignored

2016-03-03 Thread Andrew Purtell (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15386?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15179065#comment-15179065
 ] 

Andrew Purtell commented on HBASE-15386:


I vote for hooking it up again, unless we want to purge the whole prefetch 
experiment on account of it not being used or not being useful.

> PREFETCH_BLOCKS_ON_OPEN in HColumnDescriptor is ignored
> ---
>
> Key: HBASE-15386
> URL: https://issues.apache.org/jira/browse/HBASE-15386
> Project: HBase
>  Issue Type: Sub-task
>  Components: BucketCache
>Reporter: stack
>Assignee: stack
>
> We use the global flag hbase.rs.prefetchblocksonopen only and ignore the HCD 
> setting.
> Purge from HCD or hook it up again (it probably worked once).
> Thanks to Daniel Pol for finding this one. Let me fix.
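
For reference, a sketch of the two places the flag can be set; the per-family 
setter is the one this issue reports as ignored. The setter name is, to the 
best of my knowledge, the existing HColumnDescriptor API.

{code}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.HColumnDescriptor;

// The global flag that is honored today, and the per-family setting that is
// currently ignored according to this issue.
public class PrefetchSettingsSketch {
  public static void main(String[] args) {
    Configuration conf = HBaseConfiguration.create();
    conf.setBoolean("hbase.rs.prefetchblocksonopen", true); // global, honored

    HColumnDescriptor family = new HColumnDescriptor("cf");
    family.setPrefetchBlocksOnOpen(true); // per-family, reported ignored
  }
}
{code}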



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-15356) Remove unused Imports

2016-03-03 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15356?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15178985#comment-15178985
 ] 

Hudson commented on HBASE-15356:


FAILURE: Integrated in HBase-Trunk_matrix #753 (See 
[https://builds.apache.org/job/HBase-Trunk_matrix/753/])
HBASE-15356 Remove unused imports (Youngjoon Kim) (jmhsieh: rev 
f658f3ef83f191c50e602800740af464bd9b00cc)
* 
hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestStoreFile.java
* 
hbase-rest/src/test/java/org/apache/hadoop/hbase/rest/model/TestNamespacesInstanceModel.java
* 
hbase-hadoop2-compat/src/test/java/org/apache/hadoop/hbase/metrics/TestBaseSourceImpl.java
* hbase-common/src/test/java/org/apache/hadoop/hbase/zookeeper/TestZKConfig.java
* 
hbase-server/src/test/java/org/apache/hadoop/hbase/io/encoding/TestChangingEncoding.java
* 
hbase-client/src/test/java/org/apache/hadoop/hbase/security/TestEncryptionUtil.java
* 
hbase-server/src/test/java/org/apache/hadoop/hbase/master/procedure/TestWALProcedureStoreOnHDFS.java
* 
hbase-server/src/test/java/org/apache/hadoop/hbase/client/TestMobRestoreSnapshotFromClient.java
* 
hbase-it/src/test/java/org/apache/hadoop/hbase/mapreduce/IntegrationTestBulkLoad.java
* 
hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestScannerRetriableFailure.java
* 
hbase-server/src/test/java/org/apache/hadoop/hbase/master/procedure/TestMasterProcedureEvents.java
* hbase-common/src/test/java/org/apache/hadoop/hbase/CategoryBasedTimeout.java
* 
hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestEndToEndSplitTransaction.java
* 
hbase-server/src/test/java/org/apache/hadoop/hbase/client/TestMultiParallel.java
* 
hbase-server/src/test/java/org/apache/hadoop/hbase/snapshot/MobSnapshotTestingUtils.java
* 
hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/wal/TestCustomWALCellCodec.java
* 
hbase-server/src/test/java/org/apache/hadoop/hbase/util/TestHBaseFsckTwoRS.java
* 
hbase-server/src/test/java/org/apache/hadoop/hbase/util/MultiThreadedAction.java
* 
hbase-prefix-tree/src/main/java/org/apache/hadoop/hbase/codec/prefixtree/PrefixTreeSeeker.java
* 
hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestQueryMatcher.java
* 
hbase-server/src/test/java/org/apache/hadoop/hbase/master/procedure/TestMasterProcedureScheduler.java
* 
hbase-procedure/src/test/java/org/apache/hadoop/hbase/procedure2/store/wal/TestWALProcedureStore.java
* hbase-client/src/test/java/org/apache/hadoop/hbase/zookeeper/TestZKUtil.java
* hbase-server/src/main/java/org/apache/hadoop/hbase/master/SplitLogManager.java
* 
hbase-server/src/test/java/org/apache/hadoop/hbase/coprocessor/TestMasterObserver.java
* 
hbase-thrift/src/test/java/org/apache/hadoop/hbase/thrift/TestThriftHttpServer.java
* 
hbase-hadoop-compat/src/test/java/org/apache/hadoop/hbase/master/TestMetricsMasterSourceFactory.java
* hbase-server/src/test/java/org/apache/hadoop/hbase/quotas/TestQuotaState.java
* 
hbase-server/src/test/java/org/apache/hadoop/hbase/client/TestHTableMultiplexerFlushCache.java
* 
hbase-server/src/test/java/org/apache/hadoop/hbase/mapreduce/TestHFileOutputFormat2.java
* 
hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/wal/TestLogRollPeriod.java
* 
hbase-server/src/test/java/org/apache/hadoop/hbase/master/balancer/TestBaseLoadBalancer.java
* 
hbase-procedure/src/test/java/org/apache/hadoop/hbase/procedure2/store/TestProcedureStoreTracker.java
* 
hbase-server/src/test/java/org/apache/hadoop/hbase/util/hbck/OfflineMetaRebuildTestCore.java
* 
hbase-server/src/test/java/org/apache/hadoop/hbase/util/MultiThreadedUpdaterWithACL.java
* 
hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestCompactSplitThread.java
* 
hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/wal/SequenceFileLogWriter.java
* hbase-server/src/test/java/org/apache/hadoop/hbase/TestNamespace.java
* 
hbase-server/src/main/java/org/apache/hadoop/hbase/security/token/TokenUtil.java
* 
hbase-server/src/test/java/org/apache/hadoop/hbase/security/access/TestAccessController.java
* 
hbase-server/src/main/java/org/apache/hadoop/hbase/mapreduce/TableInputFormatBase.java
* hbase-server/src/test/java/org/apache/hadoop/hbase/io/TestHFileLink.java
* 
hbase-server/src/test/java/org/apache/hadoop/hbase/mapred/TestTableInputFormat.java
* 
hbase-server/src/test/java/org/apache/hadoop/hbase/procedure/TestProcedureMember.java
* 
hbase-procedure/src/test/java/org/apache/hadoop/hbase/procedure2/TestYieldProcedures.java


> Remove unused Imports
> -
>
> Key: HBASE-15356
> URL: https://issues.apache.org/jira/browse/HBASE-15356
> Project: HBase
>  Issue Type: Improvement
>Reporter: Youngjoon Kim
>Assignee: Youngjoon Kim
>Priority: Trivial
> Fix For: 2.0.0
>
> Attachments: HBASE-15356.patch
>
>
> Remove unused Imports.




[jira] [Updated] (HBASE-15291) FileSystem not closed in secure bulkLoad

2016-03-03 Thread Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-15291?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu updated HBASE-15291:
---
   Resolution: Fixed
Fix Version/s: 1.4.0
   Status: Resolved  (was: Patch Available)

Thanks for the contribution, Yong and Haitao.

> FileSystem not closed in secure bulkLoad
> 
>
> Key: HBASE-15291
> URL: https://issues.apache.org/jira/browse/HBASE-15291
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 1.0.2, 0.98.16.1
>Reporter: Yong Zhang
>Assignee: Yong Zhang
> Fix For: 2.0.0, 1.3.0, 1.2.1, 0.98.18, 1.4.0
>
> Attachments: HBASE-15291.001.patch, HBASE-15291.002.patch, 
> HBASE-15291.003.patch, HBASE-15291.004.patch, HBASE-15291.addendum, patch
>
>
> FileSystem is not closed in secure bulkLoad after the bulkLoad finishes; this 
> will cause memory usage to grow more and more if there are many bulkLoads.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-15376) ScanNext metric is size-based while every other per-operation metric is time based

2016-03-03 Thread Enis Soztutar (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15376?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15178884#comment-15178884
 ] 

Enis Soztutar commented on HBASE-15376:
---

The compilation failed for the hadoop2 module. 

Can you also rename this method: {{updateScannerNextTime}} to 
{{updateScanTime}} and rename the other one to be {{updateScanSize()}}. Other 
than these, patch looks good. You may have to look at the checkstyle warnings. 
I think the UT failures are not related. 

> ScanNext metric is size-based while every other per-operation metric is time 
> based
> --
>
> Key: HBASE-15376
> URL: https://issues.apache.org/jira/browse/HBASE-15376
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Enis Soztutar
> Attachments: HBASE-15376.patch, HBASE-15376_v1.patch
>
>
> We have per-operation metrics for {{Get}}, {{Mutate}}, {{Delete}}, 
> {{Increment}}, and {{ScanNext}}. 
> The metrics are emitted like: 
> {code}
>"Get_num_ops" : 4837505,
> "Get_min" : 0,
> "Get_max" : 296,
> "Get_mean" : 0.2934618155433431,
> "Get_median" : 0.0,
> "Get_75th_percentile" : 0.0,
> "Get_95th_percentile" : 1.0,
> "Get_99th_percentile" : 1.0,
> ...
> "ScanNext_num_ops" : 194705,
> "ScanNext_min" : 0,
> "ScanNext_max" : 18441,
> "ScanNext_mean" : 7468.274651395701,
> "ScanNext_median" : 583.0,
> "ScanNext_75th_percentile" : 583.0,
> "ScanNext_95th_percentile" : 13481.0,
> "ScanNext_99th_percentile" : 13481.0,
> {code}
> The problem is that all of Get, Mutate, Delete, Increment, Append, and Replay 
> are time based, tracking how long the operation ran, while ScanNext is tracking 
> returned response sizes (returned cell sizes, to be exact). Obviously, this is 
> very confusing, and you would only know this subtlety if you read the metrics 
> collection code. 
> Not sure how useful the ScanNext metric is as it is today. We can deprecate 
> it and introduce a time-based one to keep track of scan request latencies. 
> ps. Shamelessly using the parent jira (since these seem relevant). 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-6721) RegionServer Group based Assignment

2016-03-03 Thread Enis Soztutar (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-6721?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15178876#comment-15178876
 ] 

Enis Soztutar commented on HBASE-6721:
--

bq. It is active by default. If you don't specify the skip-rsgroup property the 
rsgroup module will get built.
Ok, perfect. I did not see the {{true}} tag, 
so I was assuming the other way around. 

Not sure why the hadoopqa is not running for the latest patches.  

> RegionServer Group based Assignment
> ---
>
> Key: HBASE-6721
> URL: https://issues.apache.org/jira/browse/HBASE-6721
> Project: HBase
>  Issue Type: New Feature
>Reporter: Francis Liu
>Assignee: Francis Liu
>  Labels: hbase-6721
> Attachments: 6721-master-webUI.patch, HBASE-6721 
> GroupBasedLoadBalancer Sequence Diagram.xml, HBASE-6721-DesigDoc.pdf, 
> HBASE-6721-DesigDoc.pdf, HBASE-6721-DesigDoc.pdf, HBASE-6721-DesigDoc.pdf, 
> HBASE-6721_0.98_2.patch, HBASE-6721_10.patch, HBASE-6721_11.patch, 
> HBASE-6721_12.patch, HBASE-6721_13.patch, HBASE-6721_14.patch, 
> HBASE-6721_15.patch, HBASE-6721_8.patch, HBASE-6721_9.patch, 
> HBASE-6721_9.patch, HBASE-6721_94.patch, HBASE-6721_94.patch, 
> HBASE-6721_94_2.patch, HBASE-6721_94_3.patch, HBASE-6721_94_3.patch, 
> HBASE-6721_94_4.patch, HBASE-6721_94_5.patch, HBASE-6721_94_6.patch, 
> HBASE-6721_94_7.patch, HBASE-6721_98_1.patch, HBASE-6721_98_2.patch, 
> HBASE-6721_hbase-6721_addendum.patch, HBASE-6721_trunk.patch, 
> HBASE-6721_trunk.patch, HBASE-6721_trunk.patch, HBASE-6721_trunk1.patch, 
> HBASE-6721_trunk2.patch, balanceCluster Sequence Diagram.svg, 
> hbase-6721-v15-branch-1.1.patch, hbase-6721-v16.patch, hbase-6721-v17.patch, 
> hbase-6721-v18.patch, hbase-6721-v19.patch, hbase-6721-v20.patch, 
> hbase-6721-v21.patch, hbase-6721-v22.patch, hbase-6721-v23.patch, 
> hbase-6721-v25.patch, hbase-6721-v26.patch, hbase-6721-v26_draft1.patch, 
> hbase-6721-v27.patch, immediateAssignments Sequence Diagram.svg, 
> randomAssignment Sequence Diagram.svg, retainAssignment Sequence Diagram.svg, 
> roundRobinAssignment Sequence Diagram.svg
>
>
> In multi-tenant deployments of HBase, it is likely that a RegionServer will 
> be serving out regions from a number of different tables owned by various 
> client applications. Being able to group a subset of running RegionServers 
> and assign specific tables to that group provides a client application a 
> level of isolation and resource allocation.
> The proposal essentially is to have an AssignmentManager which is aware of 
> RegionServer groups and assigns tables to region servers based on groupings. 
> Load balancing will occur on a per-group basis as well. 
> This is essentially a simplification of the approach taken in HBASE-4120. See 
> attached document.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-14498) Master stuck in infinite loop when all Zookeeper servers are unreachable

2016-03-03 Thread Pankaj Kumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-14498?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pankaj Kumar updated HBASE-14498:
-
Fix Version/s: (was: 1.1.4)
   (was: 1.2.1)
   (was: 1.3.0)
   Status: Patch Available  (was: Reopened)

Attached a patch for the master branch; please review. 

> Master stuck in infinite loop when all Zookeeper servers are unreachable
> 
>
> Key: HBASE-14498
> URL: https://issues.apache.org/jira/browse/HBASE-14498
> Project: HBase
>  Issue Type: Bug
>  Components: master
>Reporter: Y. SREENIVASULU REDDY
>Assignee: Pankaj Kumar
>Priority: Critical
> Fix For: 2.0.0
>
> Attachments: HBASE-14498-V2.patch, HBASE-14498-V3.patch, 
> HBASE-14498-V4.patch, HBASE-14498-V5.patch, HBASE-14498.patch
>
>
> We met a weird scenario in our production environment.
> In a HA cluster,
> > The active Master (HM1) is not able to connect to any Zookeeper server (due 
> > to a network breakdown between the master machine and the Zookeeper servers).
> {code}
> 2015-09-26 15:24:47,508 INFO 
> [HM1-Host:16000.activeMasterManager-SendThread(ZK-Host:2181)] 
> zookeeper.ClientCnxn: Client session timed out, have not heard from server in 
> 33463ms for sessionid 0x104576b8dda0002, closing socket connection and 
> attempting reconnect
> 2015-09-26 15:24:47,877 INFO 
> [HM1-Host:16000.activeMasterManager-SendThread(ZK-Host1:2181)] 
> client.FourLetterWordMain: connecting to ZK-Host1 2181
> 2015-09-26 15:24:48,236 INFO [main-SendThread(ZK-Host1:2181)] 
> client.FourLetterWordMain: connecting to ZK-Host1 2181
> 2015-09-26 15:24:49,879 WARN 
> [HM1-Host:16000.activeMasterManager-SendThread(ZK-Host1:2181)] 
> zookeeper.ClientCnxn: Can not get the principle name from server ZK-Host1
> 2015-09-26 15:24:49,879 INFO 
> [HM1-Host:16000.activeMasterManager-SendThread(ZK-Host1:2181)] 
> zookeeper.ClientCnxn: Opening socket connection to server 
> ZK-Host1/ZK-IP1:2181. Will not attempt to authenticate using SASL (unknown 
> error)
> 2015-09-26 15:24:50,238 WARN [main-SendThread(ZK-Host1:2181)] 
> zookeeper.ClientCnxn: Can not get the principle name from server ZK-Host1
> 2015-09-26 15:24:50,238 INFO [main-SendThread(ZK-Host1:2181)] 
> zookeeper.ClientCnxn: Opening socket connection to server 
> ZK-Host1/ZK-Host1:2181. Will not attempt to authenticate using SASL (unknown 
> error)
> 2015-09-26 15:25:17,470 INFO [main-SendThread(ZK-Host1:2181)] 
> zookeeper.ClientCnxn: Client session timed out, have not heard from server in 
> 30023ms for sessionid 0x2045762cc710006, closing socket connection and 
> attempting reconnect
> 2015-09-26 15:25:17,571 WARN [master/HM1-Host/HM1-IP:16000] 
> zookeeper.RecoverableZooKeeper: Possibly transient ZooKeeper, 
> quorum=ZK-Host:2181,ZK-Host1:2181,ZK-Host2:2181, 
> exception=org.apache.zookeeper.KeeperException$ConnectionLossException: 
> KeeperErrorCode = ConnectionLoss for /hbase/master
> 2015-09-26 15:25:17,872 INFO [main-SendThread(ZK-Host:2181)] 
> client.FourLetterWordMain: connecting to ZK-Host 2181
> 2015-09-26 15:25:19,874 WARN [main-SendThread(ZK-Host:2181)] 
> zookeeper.ClientCnxn: Can not get the principle name from server ZK-Host
> 2015-09-26 15:25:19,874 INFO [main-SendThread(ZK-Host:2181)] 
> zookeeper.ClientCnxn: Opening socket connection to server ZK-Host/ZK-IP:2181. 
> Will not attempt to authenticate using SASL (unknown error)
> {code}
> > Since HM1 was not able to connect to any ZK, the session timeout didn't 
> > happen on the Zookeeper server side and HM1 didn't abort.
> > On Zookeeper session timeout, the standby master (HM2) registered itself as 
> > the active master. 
> > HM2 keeps waiting for region servers to report to it as part of active 
> > master initialization.
> {noformat} 
> 2015-09-26 15:24:44,928 | INFO | HM2-Host:21300.activeMasterManager | Waiting 
> for region servers count to settle; currently checked in 0, slept for 0 ms, 
> expecting minimum of 1, maximum of 2147483647, timeout of 4500 ms, interval 
> of 1500 ms. | 
> org.apache.hadoop.hbase.master.ServerManager.waitForRegionServers(ServerManager.java:1011)
> ---
> ---
> 2015-09-26 15:32:50,841 | INFO | HM2-Host:21300.activeMasterManager | Waiting 
> for region servers count to settle; currently checked in 0, slept for 483913 
> ms, expecting minimum of 1, maximum of 2147483647, timeout of 4500 ms, 
> interval of 1500 ms. | 
> org.apache.hadoop.hbase.master.ServerManager.waitForRegionServers(ServerManager.java:1011)
> {noformat}
> > At the other end, region servers are reporting to HM1 on a 3 second 
> > interval. Region servers retrieve the master location from zookeeper only 
> > when they can't connect to the Master (ServiceException).
> Region Servers will not report to HM2, as per the current design, unless HM1 
> aborts, so HM2 

[jira] [Commented] (HBASE-15355) region.jsp can not be found on info server of master

2016-03-03 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15355?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15178823#comment-15178823
 ] 

stack commented on HBASE-15355:
---

bq. BTW, do we have any design or plan to split meta for scaling?

There are the pages I linked in response to Gary. They are hanging off the 
umbrella issues. The conversations have gone stale -- or probably a better 
description is they are put on hold while the procedureV2 stuff is getting 
retrofitted into master branch -- and we need to revive the architectural 
talks. You fellas used to have a few ideas in this area [~cuijianwei] (smile).

> region.jsp can not be found on info server of master
> 
>
> Key: HBASE-15355
> URL: https://issues.apache.org/jira/browse/HBASE-15355
> Project: HBase
>  Issue Type: Bug
>  Components: UI
>Affects Versions: 2.0.0
>Reporter: Jianwei Cui
>Priority: Minor
>
> After [HBASE-10569|https://issues.apache.org/jira/browse/HBASE-10569], master 
> is also a regionserver and it will serve regions of system tables. The meta 
> region info can be viewed on the master at an address such as 
> http://localhost:16010/region.jsp?name=1588230740. The real path of 
> region.jsp for that request will be hbase-webapps/master/region.jsp on the 
> master; however, region.jsp is under the directory hbase-webapps/regionserver, 
> so it cannot be found on the master.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Assigned] (HBASE-15333) Enhance the filter to handle short, integer, long, float and double

2016-03-03 Thread Zhan Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-15333?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhan Zhang reassigned HBASE-15333:
--

Assignee: Zhan Zhang

> Enhance the filter to handle short, integer, long, float and double
> ---
>
> Key: HBASE-15333
> URL: https://issues.apache.org/jira/browse/HBASE-15333
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Zhan Zhang
>Assignee: Zhan Zhang
>
> Currently, the range filter is based on the order of bytes. But for Java 
> primitive types such as short, int, long, float, and double, the value order 
> is not consistent with the byte order, so extra manipulation has to be in 
> place to take care of them correctly.
> For example, for integers in the range (-100, 100) and the filter <= 1, the 
> current filter will return only 0 and 1, while the right return value should 
> be (-100, 1].
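
A quick illustration of the byte-order problem, using the existing 
org.apache.hadoop.hbase.util.Bytes utility (the wrapper class is just 
scaffolding):

{code}
import org.apache.hadoop.hbase.util.Bytes;

// Signed ints are encoded in two's complement, so lexicographic (unsigned)
// byte comparison disagrees with numeric order for negative values.
public class ByteOrderDemo {
  public static void main(String[] args) {
    byte[] minusHundred = Bytes.toBytes(-100); // FF FF FF 9C
    byte[] one = Bytes.toBytes(1);             // 00 00 00 01
    // Numerically -100 < 1, but byte-wise -100 sorts AFTER 1:
    System.out.println(Bytes.compareTo(minusHundred, one) > 0); // true
  }
}
{code}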



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-15373) DEPRECATED_NAME_OF_NO_LIMIT_THROUGHPUT_CONTROLLER_CLASS value is wrong in CompactionThroughputControllerFactory

2016-03-03 Thread stack (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-15373?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

stack updated HBASE-15373:
--
Attachment: 15373v2.patch

Retry

> DEPRECATED_NAME_OF_NO_LIMIT_THROUGHPUT_CONTROLLER_CLASS value is wrong in 
> CompactionThroughputControllerFactory
> ---
>
> Key: HBASE-15373
> URL: https://issues.apache.org/jira/browse/HBASE-15373
> Project: HBase
>  Issue Type: Sub-task
>  Components: Compaction
>Affects Versions: 2.0.0, 0.98.11, 1.2.0, 1.3.0
>Reporter: stack
>Assignee: stack
>Priority: Minor
> Fix For: 2.0.0, 1.1.0, 0.98.11
>
> Attachments: 15373.patch, 15373v2.patch, 15373v2.patch
>
>
> I couldn't turn off compaction throughput limiting by following the release 
> note instructions. I fixed the release notes in the parent issue, but a code 
> fix is also needed in the factory class.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (HBASE-15370) Backport Moderate Object Storage (MOB) to branch-1

2016-03-03 Thread Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-15370?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu resolved HBASE-15370.

Resolution: Won't Fix
  Assignee: (was: Ted Yu)

Judging from the response from the community, the backport won't be done.

> Backport Moderate Object Storage (MOB) to branch-1
> --
>
> Key: HBASE-15370
> URL: https://issues.apache.org/jira/browse/HBASE-15370
> Project: HBase
>  Issue Type: Task
>Reporter: Ted Yu
> Fix For: 1.4.0
>
> Attachments: 15370-test.out, HBASE-15370-branch-1.v1.patch, 
> HBASE-15370-branch-1.v2.patch, merge-conflict.list, mob-cmmits.txt, 
> mob-commits-v2.txt
>
>
> The MOB feature was integrated into the master branch half a year ago.
> Since then there have been bug fixes which stabilized the feature.
> Some customers have been using it at PB scale.
> Here is discussion thread on dev mailing list:
> http://search-hadoop.com/m/YGbbDSqxD1PYXK62/hbase+MOB+in+branch-1=Re+MOB+in+branch+1+Re+RESULT+VOTE+Merge+branch+hbase+11339+HBase+MOB+to+trunk+
> This issue is to backport MOB feature to branch-1.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Assigned] (HBASE-15336) Support Dataframe writer to the connector

2016-03-03 Thread Zhan Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-15336?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhan Zhang reassigned HBASE-15336:
--

Assignee: Zhan Zhang

> Support Dataframe writer to the connector
> -
>
> Key: HBASE-15336
> URL: https://issues.apache.org/jira/browse/HBASE-15336
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Zhan Zhang
>Assignee: Zhan Zhang
>
> Currently, the connector only supports the read path. A complete solution 
> should support both read and write. This subtask adds write support.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Assigned] (HBASE-15334) Add avro support for spark hbase connector

2016-03-03 Thread Zhan Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-15334?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhan Zhang reassigned HBASE-15334:
--

Assignee: Zhan Zhang

> Add avro support for spark hbase connector
> --
>
> Key: HBASE-15334
> URL: https://issues.apache.org/jira/browse/HBASE-15334
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Zhan Zhang
>Assignee: Zhan Zhang
>
> Avro is a popular format for hbase storage. Users may want native support for 
> it in the connector.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-6721) RegionServer Group based Assignment

2016-03-03 Thread Francis Liu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-6721?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15178768#comment-15178768
 ] 

Francis Liu commented on HBASE-6721:


{quote}
If that is the case, it is still fine. If you don't want to activate, don't 
pass the property at all. Then how we can activate this with !skip-rsgroup ? I 
mean how do you activate the profile with the current patch without specifying 
-P?
{quote}

It is active by default. If you don't specify the skip-rsgroup property, the 
rsgroup module will get built. 

i.e.

# builds hbase-rsgroup
mvn install 

# does not build hbase-rsgroup
mvn install -Dskip-rsgroup 

It's basically the same idea as -DskipITs and -DskipTests.

Of course you can explicitly specify it via the -P switch as well:

# builds hbase-rsgroup
mvn install -P rsgroup

# does not build hbase-rsgroup
mvn install -P !rsgroup
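
The underlying Maven idiom is profile activation on the absence of a property, 
roughly like this (a sketch, not the exact hbase pom):

{code}
<!-- Sketch only: the profile is active unless -Dskip-rsgroup is passed. -->
<profile>
  <id>rsgroup</id>
  <activation>
    <property>
      <name>!skip-rsgroup</name>
    </property>
  </activation>
  <modules>
    <module>hbase-rsgroup</module>
  </modules>
</profile>
{code}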




> RegionServer Group based Assignment
> ---
>
> Key: HBASE-6721
> URL: https://issues.apache.org/jira/browse/HBASE-6721
> Project: HBase
>  Issue Type: New Feature
>Reporter: Francis Liu
>Assignee: Francis Liu
>  Labels: hbase-6721
> Attachments: 6721-master-webUI.patch, HBASE-6721 
> GroupBasedLoadBalancer Sequence Diagram.xml, HBASE-6721-DesigDoc.pdf, 
> HBASE-6721-DesigDoc.pdf, HBASE-6721-DesigDoc.pdf, HBASE-6721-DesigDoc.pdf, 
> HBASE-6721_0.98_2.patch, HBASE-6721_10.patch, HBASE-6721_11.patch, 
> HBASE-6721_12.patch, HBASE-6721_13.patch, HBASE-6721_14.patch, 
> HBASE-6721_15.patch, HBASE-6721_8.patch, HBASE-6721_9.patch, 
> HBASE-6721_9.patch, HBASE-6721_94.patch, HBASE-6721_94.patch, 
> HBASE-6721_94_2.patch, HBASE-6721_94_3.patch, HBASE-6721_94_3.patch, 
> HBASE-6721_94_4.patch, HBASE-6721_94_5.patch, HBASE-6721_94_6.patch, 
> HBASE-6721_94_7.patch, HBASE-6721_98_1.patch, HBASE-6721_98_2.patch, 
> HBASE-6721_hbase-6721_addendum.patch, HBASE-6721_trunk.patch, 
> HBASE-6721_trunk.patch, HBASE-6721_trunk.patch, HBASE-6721_trunk1.patch, 
> HBASE-6721_trunk2.patch, balanceCluster Sequence Diagram.svg, 
> hbase-6721-v15-branch-1.1.patch, hbase-6721-v16.patch, hbase-6721-v17.patch, 
> hbase-6721-v18.patch, hbase-6721-v19.patch, hbase-6721-v20.patch, 
> hbase-6721-v21.patch, hbase-6721-v22.patch, hbase-6721-v23.patch, 
> hbase-6721-v25.patch, hbase-6721-v26.patch, hbase-6721-v26_draft1.patch, 
> hbase-6721-v27.patch, immediateAssignments Sequence Diagram.svg, 
> randomAssignment Sequence Diagram.svg, retainAssignment Sequence Diagram.svg, 
> roundRobinAssignment Sequence Diagram.svg
>
>
> In multi-tenant deployments of HBase, it is likely that a RegionServer will 
> be serving out regions from a number of different tables owned by various 
> client applications. Being able to group a subset of running RegionServers 
> and assign specific tables to that group provides a client application a 
> level of isolation and resource allocation.
> The proposal essentially is to have an AssignmentManager which is aware of 
> RegionServer groups and assigns tables to region servers based on groupings. 
> Load balancing will occur on a per-group basis as well. 
> This is essentially a simplification of the approach taken in HBASE-4120. See 
> attached document.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-15291) FileSystem not closed in secure bulkLoad

2016-03-03 Thread Ted Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15291?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15178691#comment-15178691
 ] 

Ted Yu commented on HBASE-15291:


Failed tests were not related to bulk load.

Planning to commit later today.

> FileSystem not closed in secure bulkLoad
> 
>
> Key: HBASE-15291
> URL: https://issues.apache.org/jira/browse/HBASE-15291
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 1.0.2, 0.98.16.1
>Reporter: Yong Zhang
>Assignee: Yong Zhang
> Fix For: 2.0.0, 1.3.0, 1.2.1, 0.98.18
>
> Attachments: HBASE-15291.001.patch, HBASE-15291.002.patch, 
> HBASE-15291.003.patch, HBASE-15291.004.patch, HBASE-15291.addendum, patch
>
>
> FileSystem is not closed in secure bulkLoad after the bulkLoad finishes; this 
> will cause memory usage to grow more and more if there are many bulkLoads.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-15291) FileSystem not closed in secure bulkLoad

2016-03-03 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15291?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15178665#comment-15178665
 ] 

Hadoop QA commented on HBASE-15291:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green} 0m 
0s {color} | {color:green} Patch does not have any anti-patterns. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s 
{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 5m 
25s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 30s 
{color} | {color:green} master passed with JDK v1.8.0 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 44s 
{color} | {color:green} master passed with JDK v1.7.0_79 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 6m 
57s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
42s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 3m 
19s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 3s 
{color} | {color:green} master passed with JDK v1.8.0 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 46s 
{color} | {color:green} master passed with JDK v1.7.0_79 {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
55s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 18s 
{color} | {color:green} the patch passed with JDK v1.8.0 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 1m 18s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 50s 
{color} | {color:green} the patch passed with JDK v1.7.0_79 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 50s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 5m 
58s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
22s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 
35m 50s {color} | {color:green} Patch does not cause any errors with Hadoop 
2.4.0 2.4.1 2.5.0 2.5.1 2.5.2 2.6.1 2.6.2 2.6.3 2.7.1. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 
53s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 49s 
{color} | {color:green} the patch passed with JDK v1.8.0 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 47s 
{color} | {color:green} the patch passed with JDK v1.7.0_79 {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 147m 0s {color} 
| {color:red} hbase-server in the patch failed with JDK v1.8.0. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 133m 25s 
{color} | {color:red} hbase-server in the patch failed with JDK v1.7.0_79. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
18s {color} | {color:green} Patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 351m 28s {color} 
| {color:black} {color} |
\\
\\
|| Reason || Tests ||
| JDK v1.8.0 Failed junit tests | 
hadoop.hbase.master.balancer.TestStochasticLoadBalancer2 |
| JDK v1.7.0_79 Failed junit tests | 
hadoop.hbase.client.TestBlockEvictionFromClient |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12791193/HBASE-15291.004.patch 
|
| JIRA Issue | HBASE-15291 |
| Optional Tests |  asflicense  javac  javadoc  unit  findbugs  hadoopcheck  
hbaseanti  checkstyle  compile  |
| uname | Linux pietas.apache.org 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT 
Wed Sep 3 21:56:12 UTC 

[jira] [Commented] (HBASE-9393) Hbase does not closing a closed socket resulting in many CLOSE_WAIT

2016-03-03 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9393?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15178607#comment-15178607
 ] 

Hadoop QA commented on HBASE-9393:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green} 0m 
0s {color} | {color:green} Patch does not have any anti-patterns. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s 
{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 3m 
3s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 46s 
{color} | {color:green} master passed with JDK v1.8.0 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 31s 
{color} | {color:green} master passed with JDK v1.7.0_79 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 4m 
35s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
15s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 
50s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 28s 
{color} | {color:green} master passed with JDK v1.8.0 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 35s 
{color} | {color:green} master passed with JDK v1.7.0_79 {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
45s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 50s 
{color} | {color:green} the patch passed with JDK v1.8.0 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 50s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 35s 
{color} | {color:green} the patch passed with JDK v1.7.0_79 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 35s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 4m 
35s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
16s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 
24m 36s {color} | {color:green} Patch does not cause any errors with Hadoop 
2.4.0 2.4.1 2.5.0 2.5.1 2.5.2 2.6.1 2.6.2 2.6.3 2.7.1. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 7s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 25s 
{color} | {color:green} the patch passed with JDK v1.8.0 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 32s 
{color} | {color:green} the patch passed with JDK v1.7.0_79 {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 84m 24s {color} 
| {color:red} hbase-server in the patch failed with JDK v1.8.0. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 88m 17s 
{color} | {color:green} hbase-server in the patch passed with JDK v1.7.0_79. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
16s {color} | {color:green} Patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 220m 11s {color} 
| {color:black} {color} |
\\
\\
|| Reason || Tests ||
| JDK v1.8.0 Timed out junit tests | 
org.apache.hadoop.hbase.master.balancer.TestStochasticLoadBalancer2 |
|   | 
org.apache.hadoop.hbase.replication.regionserver.TestRegionReplicaReplicationEndpoint
 |
|   | org.apache.hadoop.hbase.filter.TestFilterWithScanLimits |
|   | org.apache.hadoop.hbase.constraint.TestConstraint |
|   | org.apache.hadoop.hbase.master.TestAssignmentListener |
|   | org.apache.hadoop.hbase.master.cleaner.TestSnapshotFromMaster |
|   | org.apache.hadoop.hbase.master.TestSplitLogManager |
|   | 

[jira] [Commented] (HBASE-15370) Backport Moderate Object Storage (MOB) to branch-1

2016-03-03 Thread Ted Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15370?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15178594#comment-15178594
 ] 

Ted Yu commented on HBASE-15370:


Another approach is to apply the identified MOB-related commits in reverse 
chronological order on the hbase-15370 branch.

This way, every commit is reviewable.
The amount of work is not trivial: while integrating the first commit 
(HBASE-11643), I hit compilation errors after resolving the rejected hunks.
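
A hedged sketch of that per-commit workflow in git terms (commit ids and the 
patch file name are placeholders):

{code}
git checkout hbase-15370
# export one identified MOB commit and apply it, keeping failed hunks visible
git format-patch -1 --stdout <mob-commit> > mob.patch
git apply --reject mob.patch        # rejected hunks land in *.rej files
# hand-resolve the *.rej hunks, fix any resulting compilation errors, then:
git add -A
git commit -m "Backport <mob-commit> to hbase-15370"
{code}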

> Backport Moderate Object Storage (MOB) to branch-1
> --
>
> Key: HBASE-15370
> URL: https://issues.apache.org/jira/browse/HBASE-15370
> Project: HBase
>  Issue Type: Task
>Reporter: Ted Yu
>Assignee: Ted Yu
> Fix For: 1.4.0
>
> Attachments: 15370-test.out, HBASE-15370-branch-1.v1.patch, 
> HBASE-15370-branch-1.v2.patch, merge-conflict.list, mob-cmmits.txt, 
> mob-commits-v2.txt
>
>
> MOB feature was integrated to master branch half a year ago.
> Since then there have been bug fixes which stabilize the feature.
> Some customers have been using it at PB scale.
> Here is discussion thread on dev mailing list:
> http://search-hadoop.com/m/YGbbDSqxD1PYXK62/hbase+MOB+in+branch-1=Re+MOB+in+branch+1+Re+RESULT+VOTE+Merge+branch+hbase+11339+HBase+MOB+to+trunk+
> This issue is to backport MOB feature to branch-1.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-15383) Load distribute across secondary read replicas for meta

2016-03-03 Thread Devaraj Das (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15383?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15178565#comment-15178565
 ] 

Devaraj Das commented on HBASE-15383:
-

True that.. Nice idea.

> Load distribute across secondary read replicas for meta
> ---
>
> Key: HBASE-15383
> URL: https://issues.apache.org/jira/browse/HBASE-15383
> Project: HBase
>  Issue Type: Improvement
>  Components: Client
>Reporter: Ashu Pachauri
>Assignee: Ashu Pachauri
> Fix For: 2.0.0, 1.3.0
>
>
> Right now, we always hit the primary replica for meta and fall back to the 
> secondary replicas in case of a timeout. This can hamper performance in 
> scenarios where meta becomes a hot region, e.g. cluster ramp-up, clients 
> dropping connections, etc.
> It's good to have a load distribution approach on meta's secondary replicas 
> with fallback to the primary if we read stale data.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-15383) Load distribute across secondary read replicas for meta

2016-03-03 Thread Elliott Clark (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15383?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15178535#comment-15178535
 ] 

Elliott Clark commented on HBASE-15383:
---

From the point of view of the client, all responses from meta can be stale. So 
just assume that any secondary is good for the first read, and after that go to 
the primary.
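
A sketch of that policy using the public replica-read client API; the helper 
method is illustrative, but Consistency.TIMELINE/STRONG and Result.isStale() 
are the real knobs:

{code}
import java.io.IOException;

import org.apache.hadoop.hbase.client.Consistency;
import org.apache.hadoop.hbase.client.Get;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.Table;

public class MetaReplicaReadSketch {
  // First attempt: TIMELINE consistency, so any replica (primary or
  // secondary) may answer. If the answer comes back flagged stale, retry
  // against the primary with STRONG consistency.
  static Result readMetaRow(Table meta, byte[] row) throws IOException {
    Get get = new Get(row);
    get.setConsistency(Consistency.TIMELINE);
    Result r = meta.get(get);
    if (r.isStale()) {
      Get retry = new Get(row);
      retry.setConsistency(Consistency.STRONG); // served by the primary only
      r = meta.get(retry);
    }
    return r;
  }
}
{code}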

> Load distribute across secondary read replicas for meta
> ---
>
> Key: HBASE-15383
> URL: https://issues.apache.org/jira/browse/HBASE-15383
> Project: HBase
>  Issue Type: Improvement
>  Components: Client
>Reporter: Ashu Pachauri
>Assignee: Ashu Pachauri
> Fix For: 2.0.0, 1.3.0
>
>
> Right now, we always hit the primary replica for meta and fall back to the 
> secondary replicas in case of a timeout. This can hamper performance in 
> scenarios where meta becomes a hot region, e.g. cluster ramp-up, clients 
> dropping connections, etc.
> It's good to have a load distribution approach on meta's secondary replicas 
> with fallback to the primary if we read stale data.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-15356) Remove unused Imports

2016-03-03 Thread Jonathan Hsieh (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-15356?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Hsieh updated HBASE-15356:
---
   Resolution: Fixed
 Hadoop Flags: Reviewed
Fix Version/s: 2.0.0
   Status: Resolved  (was: Patch Available)

Thanks for the patch, Youngjoon. I've committed it to master.

> Remove unused Imports
> -
>
> Key: HBASE-15356
> URL: https://issues.apache.org/jira/browse/HBASE-15356
> Project: HBase
>  Issue Type: Improvement
>Reporter: Youngjoon Kim
>Assignee: Youngjoon Kim
>Priority: Trivial
> Fix For: 2.0.0
>
> Attachments: HBASE-15356.patch
>
>
> Remove unused Imports.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-15366) Add doc, trace-level logging, and test around hfileblock

2016-03-03 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15366?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15178465#comment-15178465
 ] 

Hudson commented on HBASE-15366:


FAILURE: Integrated in HBase-Trunk_matrix #752 (See 
[https://builds.apache.org/job/HBase-Trunk_matrix/752/])
HBASE-15366 Add doc, trace-level logging, and test around hfileblock (stack: 
rev 8ace5bbfcea01e02c5661f75fe9458e04fa3b60f)
* 
hbase-server/src/test/java/org/apache/hadoop/hbase/io/encoding/TestDataBlockEncoders.java
* 
hbase-server/src/test/java/org/apache/hadoop/hbase/io/hfile/TestForceCacheImportantBlocks.java
* hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/HFile.java
* 
hbase-common/src/main/java/org/apache/hadoop/hbase/io/encoding/BufferedDataBlockEncoder.java
* hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/HFileScanner.java
* 
hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/HFileWriterImpl.java
* 
hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/bucket/BucketCache.java
* 
hbase-server/src/test/java/org/apache/hadoop/hbase/io/hfile/TestHFileBlock.java
* 
hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/bucket/BucketAllocator.java
* hbase-common/src/main/java/org/apache/hadoop/hbase/HConstants.java
* 
hbase-common/src/main/java/org/apache/hadoop/hbase/io/encoding/DataBlockEncoder.java
* 
hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/HFileReaderImpl.java
* hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/HFileBlock.java
* 
hbase-server/src/test/java/org/apache/hadoop/hbase/io/hfile/TestHFileBlockCompatibility.java
* 
hbase-server/src/test/java/org/apache/hadoop/hbase/io/encoding/TestSeekToBlockWithEncoders.java
* hbase-common/src/main/java/org/apache/hadoop/hbase/io/hfile/HFileContext.java
* 
hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestBlocksRead.java
* 
hbase-server/src/test/java/org/apache/hadoop/hbase/io/hfile/TestHFileBackedByBucketCache.java
* hbase-server/src/test/java/org/apache/hadoop/hbase/io/hfile/TestChecksum.java
* hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/CacheConfig.java


> Add doc, trace-level logging, and test around hfileblock
> 
>
> Key: HBASE-15366
> URL: https://issues.apache.org/jira/browse/HBASE-15366
> Project: HBase
>  Issue Type: Sub-task
>  Components: BlockCache
>Affects Versions: 2.0.0
>Reporter: stack
>Assignee: stack
> Fix For: 2.0.0
>
> Attachments: 15366.patch, 15366v2.patch, 15366v3.patch, 
> 15366v4.patch, 15366v4.patch
>
>
> What hfileblock is doing -- that it overreads when pulling in from hdfs to 
> fetch the header of the next block to save on seeks; that it caches the block 
> and overread and then adds an extra 13 bytes to the cached entry; that 
> buckets in bucketcache have at least four hfileblocks in them and so on -- 
> was totally baffling me. This patch docs the class, adds some trace-level 
> logging so you can see if you are doing the right thing, and then adds a test 
> of file-backed bucketcache that checks that persistence is working.
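
A hedged sketch of the over-read idea described above; the sizes, names, and 
stream handling are illustrative (the real HFileBlock reader is considerably 
more involved):

{code}
import java.io.IOException;

import org.apache.hadoop.fs.FSDataInputStream;

public class NextHeaderOverreadSketch {
  // Illustrative: when reading block N, read onDiskSize + headerSize bytes in
  // one positioned read, so the header of block N+1 arrives for free and no
  // extra seek is needed to locate the next block.
  static byte[] readBlockPlusNextHeader(FSDataInputStream in, long offset,
      int onDiskSize, int headerSize) throws IOException {
    byte[] buf = new byte[onDiskSize + headerSize];
    in.readFully(offset, buf, 0, buf.length);
    // bytes [0, onDiskSize) are block N; the tail is block N+1's header and
    // can be cached alongside the block (the "extra bytes" mentioned above).
    return buf;
  }
}
{code}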



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-15370) Backport Moderate Object Storage (MOB) to branch-1

2016-03-03 Thread Ted Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15370?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15178438#comment-15178438
 ] 

Ted Yu commented on HBASE-15370:


Thanks [~jmhsieh] for the quick response.

bq. periodically merge branch-1 into hbase-15370

Shouldn't this be done after the hbase-15370 branch is created?
The reasoning is that the post-hbase-11339 merge patches are not in branch-1; this 
would make merging the hbase-15370 branch into branch-1 a lot easier.
If so, wouldn't we encounter the same number of merge conflicts merging 
branch-1 into the hbase-15370 branch?

> Backport Moderate Object Storage (MOB) to branch-1
> --
>
> Key: HBASE-15370
> URL: https://issues.apache.org/jira/browse/HBASE-15370
> Project: HBase
>  Issue Type: Task
>Reporter: Ted Yu
>Assignee: Ted Yu
> Fix For: 1.4.0
>
> Attachments: 15370-test.out, HBASE-15370-branch-1.v1.patch, 
> HBASE-15370-branch-1.v2.patch, merge-conflict.list, mob-cmmits.txt, 
> mob-commits-v2.txt
>
>
> MOB feature was integrated to master branch half a year ago.
> Since then there have been bug fixes which stabilize the feature.
> Some customers have been using it at PB scale.
> Here is discussion thread on dev mailing list:
> http://search-hadoop.com/m/YGbbDSqxD1PYXK62/hbase+MOB+in+branch-1=Re+MOB+in+branch+1+Re+RESULT+VOTE+Merge+branch+hbase+11339+HBase+MOB+to+trunk+
> This issue is to backport MOB feature to branch-1.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-15370) Backport Moderate Object Storage (MOB) to branch-1

2016-03-03 Thread Jonathan Hsieh (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15370?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15178425#comment-15178425
 ] 

Jonathan Hsieh commented on HBASE-15370:


Hey [~tedyu], sorry if it wasn't clear -- I was suggesting (see the git sketch 
below):

1) create the hbase-15370 branch from the current hbase-11339 branch.
2) backport the post-hbase-11339 merge patches from trunk to hbase-15370 (with 
quick reviews).
3) periodically merge branch-1 into hbase-15370. This lets us clearly see the 
deltas that were fixed with these merges and doesn't "pollute" branch-1.
4) when things are stable, call the branch merge vote.
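
A hedged translation of those steps into git commands (commit ids are 
placeholders):

{code}
# 1) branch off the current MOB branch
git checkout -b hbase-15370 hbase-11339
# 2) backport the post-merge MOB fixes from trunk, one reviewable commit each
git cherry-pick <trunk-mob-fix-1> <trunk-mob-fix-2>
# 3) periodically pick up branch-1 changes so the deltas stay visible
git merge branch-1
# 4) once stable, call the vote to merge hbase-15370 into branch-1
{code}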

> Backport Moderate Object Storage (MOB) to branch-1
> --
>
> Key: HBASE-15370
> URL: https://issues.apache.org/jira/browse/HBASE-15370
> Project: HBase
>  Issue Type: Task
>Reporter: Ted Yu
>Assignee: Ted Yu
> Fix For: 1.4.0
>
> Attachments: 15370-test.out, HBASE-15370-branch-1.v1.patch, 
> HBASE-15370-branch-1.v2.patch, merge-conflict.list, mob-cmmits.txt, 
> mob-commits-v2.txt
>
>
> MOB feature was integrated to master branch half a year ago.
> Since then there have been bug fixes which stabilize the feature.
> Some customers have been using it at PB scale.
> Here is discussion thread on dev mailing list:
> http://search-hadoop.com/m/YGbbDSqxD1PYXK62/hbase+MOB+in+branch-1=Re+MOB+in+branch+1+Re+RESULT+VOTE+Merge+branch+hbase+11339+HBase+MOB+to+trunk+
> This issue is to backport MOB feature to branch-1.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-15370) Backport Moderate Object Storage (MOB) to branch-1

2016-03-03 Thread Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-15370?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu updated HBASE-15370:
---
Attachment: merge-conflict.list

I tried to follow [~jmhsieh]'s proposal:

create branch hbase-15370 off of branch-1
merge branch hbase-11339 into hbase-15370

merge-conflict.list is the list of files with conflicts - there are 
1626 files, some of which are obviously not related to the MOB feature.


> Backport Moderate Object Storage (MOB) to branch-1
> --
>
> Key: HBASE-15370
> URL: https://issues.apache.org/jira/browse/HBASE-15370
> Project: HBase
>  Issue Type: Task
>Reporter: Ted Yu
>Assignee: Ted Yu
> Fix For: 1.4.0
>
> Attachments: 15370-test.out, HBASE-15370-branch-1.v1.patch, 
> HBASE-15370-branch-1.v2.patch, merge-conflict.list, mob-cmmits.txt, 
> mob-commits-v2.txt
>
>
> MOB feature was integrated to master branch half a year ago.
> Since then there have been bug fixes which stabilize the feature.
> Some customers have been using it at PB scale.
> Here is discussion thread on dev mailing list:
> http://search-hadoop.com/m/YGbbDSqxD1PYXK62/hbase+MOB+in+branch-1=Re+MOB+in+branch+1+Re+RESULT+VOTE+Merge+branch+hbase+11339+HBase+MOB+to+trunk+
> This issue is to backport MOB feature to branch-1.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-15383) Load distribute across secondary read replicas for meta

2016-03-03 Thread Devaraj Das (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15383?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15178403#comment-15178403
 ] 

Devaraj Das commented on HBASE-15383:
-

The point to note is that responses from secondaries are always flagged as 
"stale", even if the secondary does have the latest updates... Without 
addressing that, it's not easy to address this jira.

> Load distribute across secondary read replicas for meta
> ---
>
> Key: HBASE-15383
> URL: https://issues.apache.org/jira/browse/HBASE-15383
> Project: HBase
>  Issue Type: Improvement
>  Components: Client
>Reporter: Ashu Pachauri
>Assignee: Ashu Pachauri
> Fix For: 2.0.0, 1.3.0
>
>
> Right now, we always hit the primary replica for meta and fall back to the 
> secondary replicas in case of a timeout. This can hamper performance in 
> scenarios where meta becomes a hot region, e.g. cluster ramp-up, clients 
> dropping connections, etc.
> It's good to have a load distribution approach on meta's secondary replicas 
> with fallback to the primary if we read stale data.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-6721) RegionServer Group based Assignment

2016-03-03 Thread Enis Soztutar (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-6721?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15178386#comment-15178386
 ] 

Enis Soztutar commented on HBASE-6721:
--

bq. So in your case 'mvn install -Drsgroup=false' would still enable the 
rsgroup profile?
If that is the case, it is still fine. If you don't want to activate it, don't 
pass the property at all. Then how can we activate this with {{!skip-rsgroup}}? 
I mean, how do you activate the profile with the current patch without 
specifying {{-P}}?
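
For reference, a hedged sketch of the {{!property}} activation idiom under 
discussion; the profile and module names are illustrative, not necessarily the 
actual patch:

{code}
<!-- Active by default; deactivated by passing -Dskip-rsgroup, because the
     activation condition is "the property is NOT set". -->
<profile>
  <id>rsgroup</id>
  <activation>
    <property>
      <name>!skip-rsgroup</name>
    </property>
  </activation>
  <modules>
    <module>hbase-rsgroup</module>
  </modules>
</profile>
{code}

With that idiom, plain {{mvn install}} builds the profile, {{mvn install 
-Dskip-rsgroup}} skips it, and no {{-P}} flag is needed either way.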

> RegionServer Group based Assignment
> ---
>
> Key: HBASE-6721
> URL: https://issues.apache.org/jira/browse/HBASE-6721
> Project: HBase
>  Issue Type: New Feature
>Reporter: Francis Liu
>Assignee: Francis Liu
>  Labels: hbase-6721
> Attachments: 6721-master-webUI.patch, HBASE-6721 
> GroupBasedLoadBalancer Sequence Diagram.xml, HBASE-6721-DesigDoc.pdf, 
> HBASE-6721-DesigDoc.pdf, HBASE-6721-DesigDoc.pdf, HBASE-6721-DesigDoc.pdf, 
> HBASE-6721_0.98_2.patch, HBASE-6721_10.patch, HBASE-6721_11.patch, 
> HBASE-6721_12.patch, HBASE-6721_13.patch, HBASE-6721_14.patch, 
> HBASE-6721_15.patch, HBASE-6721_8.patch, HBASE-6721_9.patch, 
> HBASE-6721_9.patch, HBASE-6721_94.patch, HBASE-6721_94.patch, 
> HBASE-6721_94_2.patch, HBASE-6721_94_3.patch, HBASE-6721_94_3.patch, 
> HBASE-6721_94_4.patch, HBASE-6721_94_5.patch, HBASE-6721_94_6.patch, 
> HBASE-6721_94_7.patch, HBASE-6721_98_1.patch, HBASE-6721_98_2.patch, 
> HBASE-6721_hbase-6721_addendum.patch, HBASE-6721_trunk.patch, 
> HBASE-6721_trunk.patch, HBASE-6721_trunk.patch, HBASE-6721_trunk1.patch, 
> HBASE-6721_trunk2.patch, balanceCluster Sequence Diagram.svg, 
> hbase-6721-v15-branch-1.1.patch, hbase-6721-v16.patch, hbase-6721-v17.patch, 
> hbase-6721-v18.patch, hbase-6721-v19.patch, hbase-6721-v20.patch, 
> hbase-6721-v21.patch, hbase-6721-v22.patch, hbase-6721-v23.patch, 
> hbase-6721-v25.patch, hbase-6721-v26.patch, hbase-6721-v26_draft1.patch, 
> hbase-6721-v27.patch, immediateAssignments Sequence Diagram.svg, 
> randomAssignment Sequence Diagram.svg, retainAssignment Sequence Diagram.svg, 
> roundRobinAssignment Sequence Diagram.svg
>
>
> In multi-tenant deployments of HBase, it is likely that a RegionServer will 
> be serving out regions from a number of different tables owned by various 
> client applications. Being able to group a subset of running RegionServers 
> and assign specific tables to it, provides a client application a level of 
> isolation and resource allocation.
> The proposal essentially is to have an AssignmentManager which is aware of 
> RegionServer groups and assigns tables to region servers based on groupings. 
> Load balancing will occur on a per group basis as well. 
> This is essentially a simplification of the approach taken in HBASE-4120. See 
> attached document.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-15136) Explore different queuing behaviors while busy

2016-03-03 Thread Mikhail Antonov (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15136?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15178371#comment-15178371
 ] 

Mikhail Antonov commented on HBASE-15136:
-

Linked a jira to fix the flaky test.

[~stack] sure, I would like to make it open. The intent here is to help prevent / 
mitigate call queue buildups and the associated latency spikes in case of either a 
sudden increase in traffic rate or a sudden decrease in available request 
throughput (raid card/disk issue, kernel lockup, etc.). So, a thorough test of this 
patch includes two things - checking that performance doesn't degrade in normal 
conditions (since locking was changed a bit), and verifying behavior in case of 
various kinds of failures.

I'll post more data as we roll it out.
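
A hedged sketch of the kind of queue-time check the linked article describes 
(a CoDel-style test); this is illustrative, not the committed scheduler code:

{code}
// If the *minimum* time calls spent queued during the last interval never
// dropped below a target, the queue never drained: treat it as overloaded
// and shed (or LIFO-serve) incoming calls until it recovers.
public class CodelStyleCheck {
  private final long targetDelayMs;   // e.g. 100
  private final long intervalMs;      // e.g. 100
  private long minDelayMs = Long.MAX_VALUE;
  private long intervalStartMs = System.currentTimeMillis();
  private boolean overloaded = false;

  public CodelStyleCheck(long targetDelayMs, long intervalMs) {
    this.targetDelayMs = targetDelayMs;
    this.intervalMs = intervalMs;
  }

  /** Call once per dequeued request with the time it waited in the queue. */
  public synchronized boolean isOverloaded(long queueTimeMs) {
    minDelayMs = Math.min(minDelayMs, queueTimeMs);
    long now = System.currentTimeMillis();
    if (now - intervalStartMs >= intervalMs) {
      overloaded = minDelayMs > targetDelayMs;  // queue never fully drained
      minDelayMs = Long.MAX_VALUE;
      intervalStartMs = now;
    }
    return overloaded;
  }
}
{code}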

> Explore different queuing behaviors while busy
> --
>
> Key: HBASE-15136
> URL: https://issues.apache.org/jira/browse/HBASE-15136
> Project: HBase
>  Issue Type: New Feature
>  Components: IPC/RPC
>Reporter: Elliott Clark
>Assignee: Mikhail Antonov
> Fix For: 2.0.0, 1.3.0
>
> Attachments: HBASE-15136-1.2.v1.patch, HBASE-15136-v2.patch, 
> deadline_scheduler_v_0_2.patch
>
>
> http://queue.acm.org/detail.cfm?id=2839461



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (HBASE-15136) Explore different queuing behaviors while busy

2016-03-03 Thread stack (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-15136?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

stack resolved HBASE-15136.
---
Resolution: Fixed

Resolving again. Issues have been opened to deal w/ the flakies. Thanks 
[~mantonov]; agreed on more data first... but consider enabling by default if 
the results are good.

> Explore different queuing behaviors while busy
> --
>
> Key: HBASE-15136
> URL: https://issues.apache.org/jira/browse/HBASE-15136
> Project: HBase
>  Issue Type: New Feature
>  Components: IPC/RPC
>Reporter: Elliott Clark
>Assignee: Mikhail Antonov
> Fix For: 2.0.0, 1.3.0
>
> Attachments: HBASE-15136-1.2.v1.patch, HBASE-15136-v2.patch, 
> deadline_scheduler_v_0_2.patch
>
>
> http://queue.acm.org/detail.cfm?id=2839461



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-15360) Fix flaky TestSimpleRpcScheduler

2016-03-03 Thread Mikhail Antonov (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-15360?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mikhail Antonov updated HBASE-15360:

Priority: Critical  (was: Major)

> Fix flaky TestSimpleRpcScheduler
> 
>
> Key: HBASE-15360
> URL: https://issues.apache.org/jira/browse/HBASE-15360
> Project: HBase
>  Issue Type: Bug
>  Components: test
>Affects Versions: 2.0.0, 1.3.0
>Reporter: Mikhail Antonov
>Assignee: Mikhail Antonov
>Priority: Critical
> Fix For: 1.3.0
>
>
> There were several flaky tests added there as part of HBASE-15306 and likely 
> HBASE-15136.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-15136) Explore different queuing behaviors while busy

2016-03-03 Thread Mikhail Antonov (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15136?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15178311#comment-15178311
 ] 

Mikhail Antonov commented on HBASE-15136:
-

[~stack] yeah, sorry about that :( I think the test is flaky, though when I 
ran just TestSimpleRpcScheduler locally, I was never able to reproduce the 
failure. I opened HBASE-15360 to fix the test; let me bump it to critical.

Regarding making it on by default - I wouldn't do that just now. We're testing 
it on several clusters, but we want to see a bit more across various workload 
profiles. I would say we can likely turn it on before 1.3 goes out, but I would 
hold off for now.



> Explore different queuing behaviors while busy
> --
>
> Key: HBASE-15136
> URL: https://issues.apache.org/jira/browse/HBASE-15136
> Project: HBase
>  Issue Type: New Feature
>  Components: IPC/RPC
>Reporter: Elliott Clark
>Assignee: Mikhail Antonov
> Fix For: 2.0.0, 1.3.0
>
> Attachments: HBASE-15136-1.2.v1.patch, HBASE-15136-v2.patch, 
> deadline_scheduler_v_0_2.patch
>
>
> http://queue.acm.org/detail.cfm?id=2839461



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-15370) Backport Moderate Object Storage (MOB) to branch-1

2016-03-03 Thread Ted Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15370?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15178286#comment-15178286
 ] 

Ted Yu commented on HBASE-15370:


I sent an email to dev@hbase at 9:02 AM.

Looks like there might be a problem with the email server.

> Backport Moderate Object Storage (MOB) to branch-1
> --
>
> Key: HBASE-15370
> URL: https://issues.apache.org/jira/browse/HBASE-15370
> Project: HBase
>  Issue Type: Task
>Reporter: Ted Yu
>Assignee: Ted Yu
> Fix For: 1.4.0
>
> Attachments: 15370-test.out, HBASE-15370-branch-1.v1.patch, 
> HBASE-15370-branch-1.v2.patch, mob-cmmits.txt, mob-commits-v2.txt
>
>
> MOB feature was integrated to master branch half a year ago.
> Since then there have been bug fixes which stabilize the feature.
> Some customers have been using it at PB scale.
> Here is discussion thread on dev mailing list:
> http://search-hadoop.com/m/YGbbDSqxD1PYXK62/hbase+MOB+in+branch-1=Re+MOB+in+branch+1+Re+RESULT+VOTE+Merge+branch+hbase+11339+HBase+MOB+to+trunk+
> This issue is to backport MOB feature to branch-1.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-15370) Backport Moderate Object Storage (MOB) to branch-1

2016-03-03 Thread Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-15370?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu updated HBASE-15370:
---
Attachment: mob-commits-v2.txt

Gone through the MOB-related commits one more time.

mob-commits-v2.txt is the up-to-date list of commits.

> Backport Moderate Object Storage (MOB) to branch-1
> --
>
> Key: HBASE-15370
> URL: https://issues.apache.org/jira/browse/HBASE-15370
> Project: HBase
>  Issue Type: Task
>Reporter: Ted Yu
>Assignee: Ted Yu
> Fix For: 1.4.0
>
> Attachments: 15370-test.out, HBASE-15370-branch-1.v1.patch, 
> HBASE-15370-branch-1.v2.patch, mob-cmmits.txt, mob-commits-v2.txt
>
>
> MOB feature was integrated to master branch half a year ago.
> Since then there have been bug fixes which stabilize the feature.
> Some customers have been using it at PB scale.
> Here is discussion thread on dev mailing list:
> http://search-hadoop.com/m/YGbbDSqxD1PYXK62/hbase+MOB+in+branch-1=Re+MOB+in+branch+1+Re+RESULT+VOTE+Merge+branch+hbase+11339+HBase+MOB+to+trunk+
> This issue is to backport MOB feature to branch-1.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-15370) Backport Moderate Object Storage (MOB) to branch-1

2016-03-03 Thread Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-15370?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu updated HBASE-15370:
---
Status: Open  (was: Patch Available)

> Backport Moderate Object Storage (MOB) to branch-1
> --
>
> Key: HBASE-15370
> URL: https://issues.apache.org/jira/browse/HBASE-15370
> Project: HBase
>  Issue Type: Task
>Reporter: Ted Yu
>Assignee: Ted Yu
> Fix For: 1.4.0
>
> Attachments: 15370-test.out, HBASE-15370-branch-1.v1.patch, 
> HBASE-15370-branch-1.v2.patch, mob-cmmits.txt
>
>
> MOB feature was integrated to master branch half a year ago.
> Since then there have been bug fixes which stabilize the feature.
> Some customers have been using it at PB scale.
> Here is discussion thread on dev mailing list:
> http://search-hadoop.com/m/YGbbDSqxD1PYXK62/hbase+MOB+in+branch-1=Re+MOB+in+branch+1+Re+RESULT+VOTE+Merge+branch+hbase+11339+HBase+MOB+to+trunk+
> This issue is to backport MOB feature to branch-1.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

